Anthropic PBC said today it’s giving its AI agents the ability to “dream” and remember past interactions and work they’ve performed, so they can identify recurring mistakes and improve over time.
In an update announced at the Code with Claude developer conference, Anthropic said it’s giving Claude Managed Agents a new “dreaming” capability. It’s not putting its artificial intelligence agents to bed; rather, it’s allowing them to review recent events and identify useful information that’s worth storing in memory to inform future tasks and interactions.
Anthropic’s Managed Agents give developers an alternative to building AI agents directly on the Messages API. The company describes it as a “pre-built, configurable agent harness” that runs on fully managed infrastructure, and says it’s intended for situations where multiple agents are working on the same project or task over a period of minutes or hours.
As for dreaming, this is a scheduled process that allows agents to review earlier sessions and their memory stores, extract patterns from them, and then curate memories that could be useful in the future. Users can decide how often they want their agents to dream, and they can also choose whether the agent is allowed to update its memory automatically, or whether proposed changes must be reviewed before they’re applied.
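Anthropic hasn’t published the programming interface behind this feature, but conceptually a dream pass can be pictured as a scheduled job over recent session logs. The sketch below, in plain Python with entirely hypothetical names (`MemoryStore`, `dream`, the `errors` field), illustrates the idea: scan sessions for recurring mistakes, propose them as curated memories, and either commit them automatically or hold them for review.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical curated long-term memory shared by a team of agents."""
    entries: list = field(default_factory=list)

def dream(session_logs, store, min_occurrences=2, auto_apply=True):
    """One scheduled 'dream' pass: scan recent sessions for recurring
    mistakes and propose them as curated memories.

    Returns the proposals; when auto_apply is False, the caller reviews
    them before committing anything to the store."""
    mistakes = Counter()
    for session in session_logs:
        for event in session.get("errors", []):
            mistakes[event] += 1

    proposals = [
        f"Recurring mistake ({count}x): {mistake}"
        for mistake, count in mistakes.items()
        if count >= min_occurrences
    ]
    if auto_apply:
        store.entries.extend(proposals)
    return proposals
```

A mistake that appears in only one session is ignored; one that recurs across sessions becomes a candidate memory, which mirrors the review-or-auto-apply choice Anthropic describes.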
It’s an interesting capability because large language models like Claude have limited context windows, which means that important information can be lost when the agents they power are working on lengthy tasks. Most models mitigate this with a process known as “compaction,” periodically analyzing a lengthy conversation to identify only the most relevant information to retain as context. But that process is limited to a single conversation with a single agent.
Dreaming, on the other hand, enables past sessions and memory stores to be analyzed across multiple AI agents, so they can all retain the most important memories.

“Dreaming surfaces patterns that a single agent can’t see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team,” Anthropic explained in a blog post. “It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.”
Outcomes and multi-agent orchestration
The dreaming ability is currently in research preview, which means that developers will need to request access to the new feature and may have to wait before they’re approved. However, the company said it’s also making two features that were formerly in preview more widely available from today.
The first of those is “outcomes,” a new feature that’s designed to keep AI agents focused on the intent behind a task. As Anthropic explains, “agents do their best work when they know what ‘good’ looks like,” and outcomes makes it possible to show them with specific examples.
Users can create an example of an ideal outcome for each task they assign to an AI agent. Then, a separate “grader agent” will evaluate the agent’s outputs based on that example to make sure they’re up to the standard expected. According to Anthropic, this feature should be especially useful for agents working on tasks that require “more attention to detail and exhaustive coverage.” It should also be useful for work where the quality of the outputs is more subjective, such as when an agent is trying to replicate a brand’s voice in a blog or social media post.
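Anthropic hasn’t detailed how its grader agent scores outputs; as a rough illustration of the pattern, the hypothetical sketch below stands in a token-overlap (Jaccard) score for the LLM judgment a real grader agent would make against the user-supplied exemplar.

```python
def grade(output, exemplar, threshold=0.5):
    """Toy 'grader agent': score a candidate output against an ideal
    exemplar by token overlap (Jaccard similarity) and pass/fail it.
    A real grader would be an LLM judging coverage, detail and voice."""
    out_tokens = set(output.lower().split())
    ref_tokens = set(exemplar.lower().split())
    if not out_tokens or not ref_tokens:
        return 0.0, False
    score = len(out_tokens & ref_tokens) / len(out_tokens | ref_tokens)
    return score, score >= threshold
```

The design point is the separation of roles: the working agent produces the output, and an independent grader decides whether it meets the standard set by the example.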
Anthropic said its own tests and early adopters show that using outcomes improves task success by as much as 10 points compared to just using standard prompts, without any examples.

The second new feature being made widely available today is “multi-agent orchestration,” which allows Managed Agents to break down complex tasks into smaller jobs, and have a lead agent assign them to different sub-agents. When users do this, they’ll be able to check the Claude Console to see exactly what each sub-agent did to complete a task and carefully review each one’s processes and outputs.
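The orchestration pattern described above — a lead agent splitting work across sub-agents and leaving a reviewable trace — can be sketched in a few lines. Everything here is a hypothetical illustration, not Anthropic’s API: `workers` maps sub-agent names to callables, and the returned trace mirrors the per-sub-agent record the Claude Console exposes.

```python
def orchestrate(task, subtasks, workers):
    """Toy lead-agent loop: assign sub-tasks round-robin across
    sub-agents and record who did what, so each sub-agent's work
    can be reviewed afterward. `workers` maps names to callables."""
    names = list(workers)
    trace = []
    for i, subtask in enumerate(subtasks):
        name = names[i % len(names)]
        result = workers[name](subtask)
        trace.append({"task": task, "subtask": subtask,
                      "agent": name, "result": result})
    return trace
```

Because every step is logged with the sub-agent that performed it, the trace supports exactly the kind of after-the-fact review the article describes.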
These new features are now available in the public beta of Managed Agents. In a final update, the company said it’s also doubling the current five-hour usage limits for Pro and Max subscribers, so they now get 10 hours.
Main image: SiliconANGLE/Microsoft Designer