Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning
This paper investigates a novel positional bias in in-context learning (ICL) for large language models (LLMs), termed the DPP bias. It observes that varying the position of demonstrations relative to the system prompt and user message drastically affects LLM predictions and accuracy. The study finds that placing demos at the start of the prompt yields the most stable and accurate outputs, while placing them at the end of the user message can flip over 30% of predictions without improving correctness.
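To make the setup concrete, below is a minimal sketch (not the paper's code) of how demo placements can be constructed for a chat-style API. The function name `build_messages`, the inputs `demos`, `system`, and `query`, and the four placement labels are all hypothetical, chosen to match the positions the summary describes.

```python
def build_messages(demos: str, system: str, query: str, placement: str):
    """Return a chat-style message list with the demos at the given position."""
    if placement == "start_of_system":
        return [{"role": "system", "content": demos + "\n" + system},
                {"role": "user", "content": query}]
    if placement == "end_of_system":
        return [{"role": "system", "content": system + "\n" + demos},
                {"role": "user", "content": query}]
    if placement == "start_of_user":
        return [{"role": "system", "content": system},
                {"role": "user", "content": demos + "\n" + query}]
    if placement == "end_of_user":
        return [{"role": "system", "content": system},
                {"role": "user", "content": query + "\n" + demos}]
    raise ValueError(f"unknown placement: {placement}")
```

Holding `demos`, `system`, and `query` fixed while varying only `placement` is what isolates the positional effect from the content of the demonstrations themselves.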
Article Points:
1. ICL performance is highly sensitive to demonstration position (DPP bias).
2. Placing demos at the start of the prompt yields the most stable and accurate outputs.
3. Demos at the end of the user message cause significant prediction volatility.
4. Smaller LLMs are more severely affected by this positional sensitivity.
5. Optimal demo placement is not universal; it varies by model and task.
6. DPP bias stems from LLM architectural tendencies and training-data patterns.
Definition
- Positional bias of in-context learning: where demos appear in the prompt affects LLM output

Observed Effects
- Accuracy and prediction drift across placements
- Demos at the start of the prompt: stable, accurate outputs
- Demos at the end of the user message: high prediction volatility (a flip-rate sketch follows this list)
- Smaller models are affected most severely
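The volatility effect can be quantified as a flip rate: run the same queries under two placements and count disagreements. A minimal sketch, assuming a hypothetical `predict(query, placement=...)` wrapper around the model:

```python
def flip_rate(queries, predict,
              placement_a="start_of_system", placement_b="end_of_user"):
    """Fraction of queries whose prediction changes between two placements."""
    flips = 0
    for q in queries:
        if predict(q, placement=placement_a) != predict(q, placement=placement_b):
            flips += 1
    # A value above 0.30 would correspond to the "over 30% of predictions
    # flipped" finding reported in the summary.
    return flips / len(queries)
```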

Underlying Causes
- Architectural tendencies (e.g., induction heads)
- Training-data regularities

Optimal Placement
- Not universal: task- and model-specific (a selection sketch follows this list)
- Early positions often outperform later ones
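Because the best placement varies by model and task, one practical approach, sketched below under the assumption of a small labeled dev set and the same hypothetical `predict` wrapper as above, is to evaluate each candidate position and keep the winner:

```python
PLACEMENTS = ["start_of_system", "end_of_system", "start_of_user", "end_of_user"]

def select_placement(dev_set, predict):
    """dev_set: list of (query, gold_label) pairs; returns the best placement."""
    def accuracy(placement):
        hits = sum(predict(q, placement=placement) == gold for q, gold in dev_set)
        return hits / len(dev_set)
    scores = {p: accuracy(p) for p in PLACEMENTS}
    return max(scores, key=scores.get)
```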

Mitigation Strategies
- Test-time calibration (a sketch follows this list)
- Post-training on permuted contexts
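The outline names test-time calibration as one mitigation. A well-known instance is contextual calibration (Zhao et al., 2021): estimate the model's label prior from a content-free input and divide it out. The sketch below is illustrative, not necessarily the paper's exact method; `label_probs(query, label, placement)` is a hypothetical wrapper returning P(label | prompt built with the given demo placement).

```python
import numpy as np

def calibrated_predict(query, labels, label_probs, placement):
    """Rescale label probabilities by a content-free prior before argmax."""
    # Estimate the placement-induced label prior with a content-free input.
    prior = np.array([label_probs("N/A", lab, placement) for lab in labels])
    raw = np.array([label_probs(query, lab, placement) for lab in labels])
    scores = raw / prior  # divide out the prior so no label is favored a priori
    return labels[int(np.argmax(scores))]
```

If a placement systematically pushes the model toward certain labels, dividing by the content-free prior removes that placement-induced skew without retraining the model.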