The exponential growth of context in long-horizon search agents is a critical bottleneck: naively accumulating every intermediate observation inflates both inference costs and error rates. This research introduces a paradigm shift toward adaptive context management, in which information is retained at varying levels of detail according to its current relevance.
Elastic Context Orchestration with Context-ReAct
The core innovation is Context-ReAct, a general agentic paradigm that unifies reasoning, context management, and tool use. It introduces five atomic operations—Skip, Compress, Rollback, Snippet, and Delete—that let agents dynamically sculpt their working context: preserving crucial evidence, summarizing resolved information, pruning unproductive search branches, and exerting precise control over context size. The researchers note that the Compress operator alone is expressively complete, while the specialized operators improve efficiency and fidelity, directly reducing generation cost and hallucination risk. The framework, detailed in their arXiv publication, marks a significant step in agent memory architecture.
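To make the five operations concrete, here is a minimal sketch of a working-context buffer that supports them. This is an illustrative reconstruction, not the paper's implementation: the `ContextManager` class, its method names, and the tagged-tuple representation are all assumptions introduced for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ContextManager:
    """Illustrative sketch of Context-ReAct-style atomic operations
    over an agent's working context (a list of (tag, text) entries).
    All names here are hypothetical, not the paper's API."""
    steps: list = field(default_factory=list)

    def record(self, text: str) -> None:
        # Append a full observation (e.g. a tool result) to the context.
        self.steps.append(("full", text))

    def skip(self) -> None:
        # Skip: drop the most recent observation entirely.
        if self.steps:
            self.steps.pop()

    def compress(self, i: int, summary: str) -> None:
        # Compress: replace a resolved entry with a short summary.
        self.steps[i] = ("summary", summary)

    def snippet(self, i: int, excerpt: str) -> None:
        # Snippet: keep only the crucial excerpt of a long entry.
        self.steps[i] = ("snippet", excerpt)

    def rollback(self, i: int) -> None:
        # Rollback: discard everything after step i, pruning a dead branch.
        self.steps = self.steps[: i + 1]

    def delete(self, i: int) -> None:
        # Delete: remove a single entry outright.
        del self.steps[i]

# Example episode: summarize a resolved search, skip a dead-end browse.
ctx = ContextManager()
ctx.record("search('capital of X') -> long page dump")
ctx.record("browse(page) -> irrelevant forum thread")
ctx.skip()                              # unproductive observation gone
ctx.compress(0, "capital of X is Y")    # resolved step kept as a summary
```

After these calls the context holds a single compact entry, `("summary", "capital of X is Y")`, rather than two verbose page dumps, which is exactly the cost-and-hallucination trade the article describes.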
LongSeeker: Benchmark-Breaking Performance
Building on the Context-ReAct paradigm, the team developed LongSeeker, a long-horizon search agent fine-tuned from Qwen3-30B-A3B. Tested across four representative search benchmarks, LongSeeker achieved a remarkable 61.5% on BrowseComp and 62.5% on BrowseComp-ZH, substantially surpassing existing state-of-the-art agents such as Tongyi DeepResearch (43.2% and 46.7%) and AgentFold (36.2% and 47.3%). These results underscore the strategic advantage of adaptive context management, demonstrating that agents can achieve more reliable and efficient long-horizon reasoning through active memory shaping.
© 2026 StartupHub.ai. All rights reserved.