Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
This paper introduces ACE (Agentic Context Engineering), a framework that treats LLM contexts as evolving playbooks to overcome limitations like brevity bias and context collapse in context adaptation. ACE employs a modular process of generation, reflection, and curation with structured, incremental updates. It significantly outperforms strong baselines on agent and domain-specific benchmarks, achieving self-improvement without labeled supervision and reducing adaptation costs. ✨
Article Points:

1. ACE treats contexts as evolving playbooks, accumulating and refining strategies over time.
2. ACE prevents context collapse and brevity bias through structured, incremental updates.
3. The framework uses a modular workflow for adaptation: Generator, Reflector, and Curator.
4. ACE achieves average gains of +10.6% on agent and +8.6% on domain-specific benchmarks.
5. It adapts effectively without labeled supervision, leveraging natural execution feedback.
6. ACE reduces adaptation latency by 86.9% and significantly lowers rollout and token costs.
Problem Addressed

- Brevity Bias
- Context Collapse

Framework

- Evolving Playbooks
- Modular Workflow
  - Generator
  - Reflector
  - Curator
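The Generator/Reflector/Curator workflow can be sketched as a simple loop: the Generator attempts a task with the current playbook, the Reflector turns execution feedback into candidate lessons, and the Curator folds those lessons back into the context. This is a minimal illustrative sketch; in ACE each role is an LLM driven by prompts, and the function bodies below are stand-in assumptions, not the paper's implementation.

```python
# Minimal sketch of ACE's modular adaptation loop (illustrative only;
# the real framework prompts an LLM at each of the three stages).

def generate(task, playbook):
    """Generator: attempt the task using the current playbook."""
    # Hypothetical stand-in: "succeed" only if the playbook already
    # holds a strategy mentioning this task.
    used = [s for s in playbook if task in s]
    return {"task": task, "success": bool(used)}

def reflect(trajectory):
    """Reflector: turn natural execution feedback into candidate lessons."""
    if trajectory["success"]:
        return []  # nothing to fix
    return [f"when handling '{trajectory['task']}', record a concrete strategy"]

def curate(playbook, lessons):
    """Curator: apply incremental updates instead of rewriting the context."""
    for lesson in lessons:
        if lesson not in playbook:  # grow-and-refine: skip duplicates
            playbook.append(lesson)
    return playbook

playbook = ["use search before answering web questions"]
traj = generate("book a flight", playbook)
playbook = curate(playbook, reflect(traj))
```

Because the Curator only appends or refines entries, the playbook grows monotonically richer rather than being summarized away, which is how this design avoids brevity bias.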
Key Innovations

- Incremental Delta Updates
- Grow-and-Refine Mechanism
- Dedicated Reflector
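Incremental delta updates can be pictured as treating the context as a set of itemized entries and applying small add/update/remove operations to it, rather than regenerating the whole playbook each step (the monolithic rewrites that lead to context collapse). The delta schema and entry IDs below are assumptions for illustration, not ACE's exact format.

```python
# Sketch of incremental delta updates: the context is a dict of itemized
# entries keyed by ID, and each adaptation step applies small deltas
# instead of rewriting the whole playbook.

playbook = {1: "prefer tool calls over free-form guesses",
            2: "cite the source row when reading tables"}

def apply_delta(playbook, delta):
    """Apply one (op, item_id, text) delta to the playbook in place."""
    op, item_id, text = delta
    if op == "ADD":
        playbook[item_id] = text          # grow: new strategy entry
    elif op == "UPDATE":
        playbook[item_id] = text          # refine an existing entry
    elif op == "REMOVE":
        playbook.pop(item_id, None)       # prune a stale entry
    return playbook

apply_delta(playbook, ("ADD", 3, "retry API calls on transient errors"))
apply_delta(playbook, ("UPDATE", 2, "cite row and column when reading tables"))
```

Because each step touches only the entries named in a delta, the rest of the accumulated context is untouched, which is also what keeps adaptation cheap in tokens and rollouts.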

Performance Gains

- Agents: +10.6%
- Domain-Specific: +8.6%
- Matches IBM-CUGA

Efficiency

- Lower Latency: -86.9%
- Fewer Rollouts
- Lower Token Cost

Implications

- Online & Continuous Learning
- Interpretable Contexts