What Is "Dreaming" in AI? The Core Definition
Most people using AI tools don't know that Claude agents can now review their own past sessions, extract patterns from them, and arrive at the next session as a measurably improved version of themselves — automatically, without human intervention. Anthropic introduced this capability on May 6, 2026, and called it "Dreaming."
Dreaming is a scheduled inter-session process built into Claude Managed Agents that runs between active agent sessions. It reviews past conversation history and memory files, identifies recurring patterns (mistakes, effective workflows, user preferences), and updates the agent's persistent memory with curated insights that improve future performance. The model itself does not change — what changes is the agent's working memory, which it reads at the start of each session.
How Does Claude's Dreaming Feature Actually Work?
Dreaming operates in three sequential phases that run automatically between sessions:
Phase 1 — Review: The system scans the agent's recent session logs, task outcomes, and existing memory files. It looks for patterns that no single session could surface on its own — mistakes that repeat across multiple sessions, shortcuts that appeared independently in different task contexts, and user preferences that emerged over time.
Phase 2 — Extract: It identifies which patterns are signal (consistent, meaningful, actionable) versus noise (one-off anomalies). Cross-agent learning is possible here: on platforms running multiple Claude agents, Dreaming can detect workflows that multiple agents discovered independently — which is a strong signal of a genuinely effective approach.
Phase 3 — Curate: It updates the agent's persistent memory file: pruning outdated notes, merging duplicate entries, resolving contradictions between old and new information, and adding new synthesised insights. This is not an LLM retraining — it is a structured rewrite of a text file that the agent reads as context at the start of every session.
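Anthropic has not published the internals of these phases, so the following is only a minimal sketch of the review-and-extract logic in Python. The session-log shape, the field names, and the two-session frequency threshold are all assumptions made for illustration, not the real Managed Agents format.

```python
from collections import Counter

# Hypothetical session logs: each session records the error tags it hit.
# The real Managed Agents log format is not public; this shape is invented.
sessions = [
    {"id": "s1", "errors": ["clause_scope_missed", "date_format"]},
    {"id": "s2", "errors": ["clause_scope_missed"]},
    {"id": "s3", "errors": ["clause_scope_missed", "citation_style"]},
]

def review(sessions):
    """Phase 1: scan logs and tally how often each pattern recurs."""
    counts = Counter()
    for session in sessions:
        # Count each pattern at most once per session, so what raises the
        # count is repetition ACROSS sessions, not repetition within one.
        for pattern in set(session["errors"]):
            counts[pattern] += 1
    return counts

def extract(counts, min_sessions=2):
    """Phase 2: keep signal (recurs across sessions), drop one-off noise."""
    return {pattern for pattern, n in counts.items() if n >= min_sessions}

signal = extract(review(sessions))
# Only 'clause_scope_missed' appears in two or more sessions, so only it
# survives as a candidate for Phase 3 curation.
```

The point the sketch makes is structural: no single session contains enough evidence to flag `clause_scope_missed` as a pattern; it only becomes visible when logs are compared across sessions.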
What Does Dreaming Fix That Standard AI Agents Cannot?
The core problem with standard AI agents is session isolation. Every session starts with the same baseline knowledge — the model's training data plus whatever you've put in the system prompt. Unless you manually update the system prompt between sessions, the agent makes the same mistakes repeatedly, rediscovers the same shortcuts every time, and has no structural memory of what worked and what didn't.
Dreaming addresses this directly. According to Anthropic's May 6 announcement, the feature specifically targets three failure modes:
- Recurring extraction errors that no individual session can self-correct, because the pattern only becomes visible across multiple sessions.
- Redundant memory buildup, where agents accumulate conflicting notes over time that dilute rather than improve their performance.
- Missed cross-session patterns, where insights that would improve the agent's approach are scattered across multiple session logs and never synthesised.
Dreaming's "prune, merge, resolve" cycle runs automatically to prevent all three.
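The curation algorithm itself is not public, but the prune, merge, resolve cycle can be sketched as plain data manipulation. In this sketch the note format, the 180-day staleness cutoff, and the "newest note wins" conflict rule are all invented assumptions, not Anthropic's actual policy.

```python
from datetime import date

# Hypothetical memory notes: (topic, text, last_confirmed). Invented format.
notes = [
    ("dates", "User prefers ISO 8601 dates", date(2026, 4, 30)),
    ("dates", "User prefers ISO 8601 dates", date(2026, 5, 1)),   # duplicate
    ("tone", "Use formal tone", date(2025, 6, 1)),                # stale
    ("citations", "Cite clause numbers inline", date(2026, 4, 2)),
    ("citations", "Put clause citations in footnotes", date(2026, 5, 2)),
]

def curate(notes, today, max_age_days=180):
    # Prune: drop notes not confirmed within the staleness window.
    fresh = [n for n in notes if (today - n[2]).days <= max_age_days]
    # Merge and resolve: walk notes oldest-first so the newest note per
    # topic overwrites duplicates and contradictions alike.
    by_topic = {}
    for topic, text, seen in sorted(fresh, key=lambda n: n[2]):
        by_topic[topic] = (topic, text, seen)
    return list(by_topic.values())

curated = curate(notes, today=date(2026, 5, 6))
# The stale "tone" note is pruned, the duplicate "dates" notes merge into
# one, and the "citations" contradiction resolves to the newer note.
```

A real implementation would presumably use the model itself to judge staleness and contradictions rather than a fixed rule, but the shape of the cycle is the same: fewer, cleaner notes out than in.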
Real-World Results: What the Early Data Shows
Anthropic shared two concrete implementations at the May 6 Code with Claude developer conference:
Harvey (legal AI platform): Task completion rates increased roughly 6x after enabling Dreaming for their contract analysis agents. The mechanism: the agent was repeating the same clause extraction errors across sessions because the error pattern was invisible within any single session. Dreaming identified the pattern, updated the agent's memory with corrected guidance, and the recurring failures stopped.
Wisedocs (medical document review): Document review time dropped 50%. Wisedocs' agents handle complex medical records where domain-specific terminology and document structure vary significantly across cases. Dreaming allowed the agent to accumulate and curate domain knowledge across sessions in a structured way, rather than starting each session from a clean slate.
These results are not guaranteed to transfer to all use cases. Both examples share a common profile: high-volume, domain-specific, repetitive task workflows where pattern-level learning produces compounding gains over time. The more sessions an agent runs, the more Dreaming has to work with.
How Does This Affect AI Practitioners Who Use Claude Daily?
As of May 2026, Dreaming is a Managed Agents feature available in research preview. It applies to Claude agents built using the Managed Agents infrastructure — not to standard claude.ai conversations. If you use claude.ai directly, Dreaming does not affect your sessions today.
However, the implications for practitioners are significant in three directions:
Claude-powered tools improve over time: Any application built on Claude Managed Agents — enterprise content platforms, legal tools, coding assistants, customer service agents — will improve through Dreaming without requiring additional setup from the user. If you're using a Claude-powered tool that handles repetitive workflows, the tool's performance ceiling will rise over time automatically.
The ROI calculation on Claude agents changes: An agent that learns from its mistakes becomes more cost-effective over its operational life. For practitioners evaluating whether to build or deploy a Claude agent for their team, Dreaming is a meaningful factor in that calculation — especially for workflows involving high session volume.
Memory design becomes a workflow skill: Dreaming introduces a new design decision for practitioners building with Claude's API: how much autonomy should your agent have over its own memory updates? Anthropic offers two modes. Automatic mode lets Dreaming update the memory file without review. Review mode surfaces proposed changes for human approval before they take effect — described by Anthropic as similar to a pull request workflow for agent memory. Understanding which mode is right for which use case is a practical skill that will differentiate effective AI practitioners.
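Since review mode is described as resembling a pull-request workflow, a hypothetical approval gate can be sketched with the standard library. The function names and the plain-text memory format below are assumptions for illustration; they are not the Managed Agents API.

```python
import difflib

def propose(current: str, updated: str) -> str:
    """Render a proposed memory update as a unified diff for human review."""
    return "".join(difflib.unified_diff(
        current.splitlines(keepends=True),
        updated.splitlines(keepends=True),
        fromfile="memory (current)",
        tofile="memory (proposed)",
    ))

def apply_if_approved(current: str, updated: str, approved: bool) -> str:
    """Review-gated update: the memory file changes only on approval."""
    return updated if approved else current

current = "Always ask before deleting files.\n"
updated = ("Always ask before deleting files.\n"
           "Prefer ISO 8601 dates in reports.\n")

print(propose(current, updated))  # diff shown to the human reviewer
memory = apply_if_approved(current, updated, approved=True)
```

Automatic mode is the degenerate case of the same flow with `approved` always true, which is why the choice between the two modes is a risk decision rather than a capability decision.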
How to Enable and Configure Dreaming
Dreaming is currently in research preview as of May 2026. Access is available through the Claude Managed Agents API for enterprise and developer accounts. Configuration involves two primary decisions:
Memory scope: Define which parts of the agent's memory are eligible for Dreaming updates. You can restrict Dreaming to specific memory namespaces — for example, limiting it to the agent's task-execution notes while protecting your manually-set system prompt instructions from automatic modification.
Approval workflow: Choose between automatic updates (the agent's memory updates without human review after each Dreaming cycle) and review-gated updates (Dreaming proposes changes that a human approves or rejects). For high-stakes or compliance-sensitive workflows, review-gated is the appropriate choice.
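The two decisions above can be captured in a small configuration sketch. The feature is in research preview and the real API surface is not public, so every field name here is invented; the sketch only illustrates the scoping idea of letting Dreaming rewrite some namespaces while fencing off hand-written instructions.

```python
# Hypothetical Dreaming configuration; field names invented for illustration.
dreaming_config = {
    "enabled": True,
    # Memory scope: only these namespaces may be rewritten by Dreaming.
    "memory_scope": ["task_notes", "domain_glossary"],
    # Approval workflow: "automatic" or "review" (human-gated updates).
    "approval_mode": "review",
}

# Namespaces that should never be eligible for automatic rewriting,
# such as manually-set system prompt instructions.
PROTECTED_NAMESPACES = {"system_prompt"}

def validate(config):
    """Reject configs that would let Dreaming touch protected memory."""
    if config["approval_mode"] not in ("automatic", "review"):
        raise ValueError("approval_mode must be 'automatic' or 'review'")
    leaked = PROTECTED_NAMESPACES & set(config["memory_scope"])
    if leaked:
        raise ValueError(f"protected namespaces in scope: {leaked}")
    return config

validate(dreaming_config)
```

Whatever the real API looks like, the underlying design question stands: decide up front which memory the agent may rewrite about itself, and gate the rest.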
For practitioners not yet building custom agents: the practical path is to monitor whether the Claude-powered tools you currently use announce Dreaming integration. Harvey and Wisedocs are early adopters; more Claude-based applications will follow in the second half of 2026.
What "Dreaming" Is Not — Important Caveats
Three boundaries to understand before drawing conclusions about what Dreaming can do.
Dreaming is not model retraining: The underlying Claude model does not change. Claude Opus 4.7 after Dreaming is still Claude Opus 4.7 — its core reasoning, language capabilities, and knowledge cutoff are identical. What changes is the agent's memory file, which is a structured text document that provides additional context at the start of each session.
Dreaming requires persistent memory to be enabled first: The feature is a layer on top of existing persistent memory infrastructure. If you haven't enabled persistent memory for your Claude Managed Agent, Dreaming has nothing to curate. This is an infrastructure requirement, not an automatic feature.
Dreaming does not guarantee improvement in all contexts: For one-off, novel, or highly variable tasks, Dreaming has less to work with. Pattern extraction requires repetition — the same kinds of tasks, errors, and workflows appearing across multiple sessions. For agents handling high variety with low volume per task type, the performance gains from Dreaming will be modest.
What This Signals About Where AI Agents Are Heading
Dreaming is the clearest signal yet that the agentic era is moving past the "session-by-session chat" model toward something more analogous to an employee who genuinely improves at their job over time. The agent doesn't just remember previous context — it actively curates what to remember, discarding what's no longer relevant and sharpening what is.
For practitioners tracking where to invest time in AI skills: the ability to design, configure, and manage persistent AI agent memory is becoming a meaningful competency. Understanding what Dreaming does — which patterns trigger a memory update versus which get pruned — is the kind of operational knowledge that will separate effective AI practitioners from occasional AI users over the next 12 to 18 months.
Most people use AI one session at a time. Dreaming makes the sessions add up. If you're using any Claude-powered tool for repetitive work, Dreaming is the capability to understand now, before your competitors do.