---
title: "Dreaming Isn't Enough: Why Claude's Memory and Dreaming Don't Understand Your Project"
excerpt: "Anthropic taught Claude to dream. Great for coding agents, blind to projects. Why remembering isn't the same as understanding, and what Context-Driven Project Management does differently."
publishedAt: "2026-05-07"
readTime: "10 min read"
author: "Simon Schwer"
tags:
  [
    "Claude",
    "Anthropic",
    "Memory",
    "Dreaming",
    "Claude Code",
    "Managed Agents",
    "MCP",
    "Context-Driven Project Management",
    "CDPM",
    "TensorPM",
    "Local-First AI",
    "Project Management",
    "AI Agents",
    "Agentic Project Management",
  ]
featuredImage: "/images/blog/dreaming-isnt-enough.png"
featuredImageAlt: "A figure stands on a hill at dusk, looking up at a luminous constellation connected like a structured project graph"
featuredImagePosition: "center"
---

On May 6, 2026, at Code with Claude, Anthropic announced that Claude now dreams. That's the actual word they use. In the background, a consolidation pass runs across up to 100 past sessions and produces a fresh, separate memory store from them. Recurring patterns, common mistakes, shared team preferences. The original memory stays untouched. Pattern recognition with poetic branding.

It sounds like a breakthrough. For coding agents grinding through the same codebase for weeks, it is one. For anyone running projects rather than writing code, Dreaming gives you almost nothing. That's where the comparison with a tool like [TensorPM](https://tensorpm.com) gets interesting. On the surface both approaches address the same problem ("AI forgets between sessions"), but they sit on completely different architectures. Mix them up and you'll buy one and keep wondering why the other thing is still missing.

## What Claude's Memory Actually Does

Anthropic has piled five different memory features into the same marketing bucket. Sorting them out helps.

One: **Claude.ai Memory**, the auto-generated conversation summary inside the chat UI. Rolled out to Team and Enterprise plans in September 2025, with Incognito mode and project-scoped isolation. Pure free text, platform-bound.

Two: **Claude Projects**, a static knowledge container with custom instructions and uploaded files. Hand-curated. Not an evolving memory, more like a second drawer per initiative.

Three: **Auto Memory in Claude Code**, paired with `CLAUDE.md`. Claude writes Markdown notes into your project directory and loads them back at the start of every session. Build commands, debug insights, style preferences. Once the file pushes past 200 lines, adherence noticeably degrades.
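The notes themselves are plain Markdown. A small, illustrative `CLAUDE.md` (contents invented for the example, not a template Anthropic ships) might look like:

```markdown
# CLAUDE.md

## Build
- `npm run build` compiles the app; `npm test` runs the suite

## Conventions
- Indent with spaces, never tabs
- Prefer small, focused commits

## Known pitfalls
- The dev server caches env vars; restart it after editing `.env`
```

Short and scannable is the point. The 200-line degradation threshold is exactly why files like this need pruning.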

Four: the **Memory Tool API** for agents, with `view`, `create`, `str_replace` and friends. Claude only emits tool calls. The `/memories` directory lives on your system. ZDR-capable.
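Because Claude only emits the tool calls, your code has to execute them. A minimal client-side handler might look like the sketch below. The command names mirror the ones above (`view`, `create`, `str_replace`); the parameter names are my assumption, not the official schema, so check the API docs before relying on them.

```python
from pathlib import Path

MEMORY_ROOT = Path("memories")  # the agent's /memories directory, kept on your machine

def handle_memory_call(command: str, **kwargs) -> str:
    """Dispatch one memory tool call against the local store.

    Parameter names (path, file_text, old_str, new_str) are illustrative
    assumptions, not the official tool schema.
    """
    # Map the tool's "/memories/..." paths onto the local directory
    rel = kwargs["path"].removeprefix("/memories").lstrip("/")
    target = MEMORY_ROOT / rel
    if command == "view":
        if target.is_file():
            return target.read_text()
        return "\n".join(sorted(p.name for p in target.iterdir()))
    if command == "create":
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(kwargs["file_text"])
        return f"created {target}"
    if command == "str_replace":
        text = target.read_text()
        target.write_text(text.replace(kwargs["old_str"], kwargs["new_str"], 1))
        return f"edited {target}"
    raise ValueError(f"unknown command: {command}")
```

The design point stands regardless of exact parameter names: the model decides *what* to remember, but the bytes never leave your system, which is what makes the ZDR story work.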

Five: **Dreaming**, announced on May 6, 2026. An asynchronous consolidation pass for Claude Managed Agents, currently a research preview by request. Supports Opus 4.7 and Sonnet 4.6, billed at standard token rates. Anthropic cites Harvey as an early adopter with allegedly six times the task-completion rate. Anthropic's own number, so read it with a healthy dose of skepticism.

What all five share: they store *what happened*. Conversation fragments, build commands, fixed bugs. Mostly free-text Markdown, occasionally with a pattern layer on top. Dreaming raises the bar by surfacing patterns no single agent could see on its own. It still stops at memory. Understanding is something else.

A small note on terminology: there's a second `/dream` command floating around the community that tidies `CLAUDE.md` files for Claude Code. Third-party implementation, not an official Anthropic feature. If someone tries to sell you "Auto Dream", ask which of the two they mean.

## Remembering Isn't the Same as Understanding

Picture a new hire who keeps a perfect diary of every meeting they've ever attended. They can tell you instantly what was discussed on March 14, which bug fixes you preferred last week, that you hate tabs and love spaces. Genuinely useful.

Now ask them: *"Are we on track for the Q2 launch? What's blocking the five pilot customers? Is the budget still realistic?"* Silence. The diary holds the atoms. It never bonded them into a molecule.

That's exactly the relationship Claude's memory has to a project. Memory stores conversations. A project is more than its conversations. It has participants, budgets, risks, deadlines, decisions. And a goal that holds those pieces together in the first place. Without that connective tissue, project context goes quietly stale. [That pattern](/en/blog/stale-context-silent-killer) is, in my experience, the biggest unaddressed problem in project management.

## How TensorPM Builds It Differently

TensorPM starts from a different question. Instead of *"What did the user say?"*, it asks *"What is this project trying to achieve, and where does it stand?"*. There's a name for that approach: **Context-Driven Project Management** (CDPM), a methodology where the project model itself is the primary context, not the conversation log. For the deeper definition, see the longer guide [What Is Agentic Project Management?](/en/blog/what-is-agentic-project-management).

The data model has three layers:

* **Vision.** The goal as a typed field. *"Launch MVP and onboard 5 pilot customers by end of Q2"* doesn't sit somewhere in a note. It sits in a fixed slot that everything else gets checked against.
* **Plan.** Action items, Kanban, Gantt, participants, budget, deadlines. Wired to the vision, not floating beside it.
* **Execution.** Live updates, an AI-supported trail (an intelligent change log), file analysis, decisions with rationale.
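To make the difference to free-text memory concrete, here is the three-layer idea as a typed sketch. All field names are my assumptions for illustration, not TensorPM's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Vision:
    goal: str      # the typed slot, e.g. "Launch MVP and onboard 5 pilot customers by end of Q2"
    deadline: str  # e.g. "2026-06-30"

@dataclass
class ActionItem:
    title: str
    status: str = "open"                # kanban column
    assignee: Optional[str] = None      # a human teammate or an agent
    blocked_by: list = field(default_factory=list)

@dataclass
class Project:
    vision: Vision
    plan: list                                # the Plan layer, wired to the vision
    trail: list = field(default_factory=list) # Execution layer: the change log

    def blocked_items(self) -> list:
        """What currently stands between the plan and the vision."""
        return [a for a in self.plan if a.blocked_by]
```

The point of the sketch: "are we blocked?" is a query against structure, not a hope that the right sentence survived summarization.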

TensorPM's tagline puts it bluntly: *"Other tools store tasks. TensorPM stores evolving intent."* While Claude's memory tries to infer in the background what might matter, TensorPM models from day one what does matter. Everything that arrives later is tested against that model.

## The Distiller, or External Updates with a Human in the Loop

One thing the Dreaming coverage glosses over. Claude's entire memory system is conversation-driven. If you don't tell Claude something, Claude doesn't know it. Emails, SharePoint docs, Slack threads, tickets only show up in memory once someone drags them into the chat. The memory import from ChatGPT and Gemini is beta, runs with up to 24-hour latency, and goes one direction.

TensorPM's Distiller works the other way around. It takes external updates (notes, files, conversation transcripts) and merges them into the existing project model, with you in the room at the points where friction shows up. TensorPM describes the mechanism roughly like this: most AI tools scrape together scattered, outdated sources (emails, SharePoint, old documents). TensorPM flips that around. Your project context leads. External updates are merged in with you in the loop. And the AI asks the follow-up questions you would have skipped.
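The shape of that loop is simple enough to sketch. This is my illustration of the human-in-the-loop pattern, not TensorPM's actual implementation; every data shape here is invented:

```python
def distill(project: dict, updates: list, confirm) -> list:
    """Human-in-the-loop merge sketch: every proposed change to the
    project model is surfaced before it is applied.

    `confirm` is any callable taking a proposal and returning True/False
    (a CLI prompt, a UI dialog). Structures are illustrative only.
    """
    applied = []
    for update in updates:
        proposal = {
            "field": update["field"],
            "old": project.get(update["field"]),
            "new": update["value"],
        }
        if confirm(proposal):  # you decide what is relevant, not the model
            project[update["field"]] = update["value"]
            applied.append(proposal)
    return applied
```

Trivial as code, decisive as architecture: the merge produces an auditable list of accepted changes instead of a silently mutated memory file.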

The model doesn't quietly decide what's relevant. You decide. In regulated industries, that's not a comfort feature, it's a compliance requirement.

## Why Dreaming Stays Retrospective

Even Dreaming looks backward. The agent reflects on past sessions to perform better in the next one. Excellent for a coding agent on a stable codebase. Anthropic reinforced that direction with two companion features: outcome-based grading via a separate grader agent, and multi-agent orchestration over a shared file system. Both push the same way. Self-improvement at the doing.

Not so useful when your project goal is drifting and nobody notices it's happening.

TensorPM's Distiller asks prospectively: *What does this new update mean for your Q2 goal? Which action item is now blocked? Which deadline slips?* A tool that walks alongside you instead of one that tidies up behind you.

## Who Curates? And Who Owns the Knowledge?

In Claude, the model decides what's important enough to land in the memory summary. Convenient, mostly correct, but black-boxy. You can read and edit the summary, you can't control the selection process. Dreaming is cleaner here than the coverage suggests: the original memory store stays put, the consolidation lands in a separate output. The model still picks which patterns survive, though.

In TensorPM you are the curator. The AI trail leaves a complete record of what changed when. Every field is editable, every suggestion either accepted or rejected. For founders who still want to understand their project five years from now, and for teams under audit obligations, that's a different level of confidence.

Storage location belongs to that picture. Claude Memory lives on Anthropic servers. TensorPM is local-first on your machine. The optional cloud sync is end-to-end encrypted, hosted in Germany. BYOK runs without a proxy token, and local models via Ollama or LM Studio work the same way. For GDPR-conscious teams, that's a real argument, not just a feature.

## Platform Lock-In or Agent Backbone

Possibly the most underrated difference. Who owns your context?

Claude's memory works inside Claude. Period. If you discover Cursor is more productive tomorrow, or that Codex fits a particular task better, you start from zero over there. The ChatGPT and Gemini import goes one way and is beta.

TensorPM turns the project model into a neutral backbone that agents plug into. Through MCP and A2A. Documented integrations include Claude Desktop, Claude Code, Codex, Cursor, OpenClaw, plus any other MCP/A2A-capable agent. Action items can be assigned to agents the way you'd assign them to human teammates. TensorPM invokes the run, tracks the outcome, keeps the plan in sync.
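For Claude Desktop, wiring in an MCP server happens in `claude_desktop_config.json` under `mcpServers`. The entry below is a placeholder: the `tensorpm` command name and `mcp-server` argument are my invention, so check TensorPM's docs for the actual invocation:

```json
{
  "mcpServers": {
    "tensorpm": {
      "command": "tensorpm",
      "args": ["mcp-server"]
    }
  }
}
```

The same backbone is then reachable from any other MCP-capable agent, which is the whole point: the project model stays put while the agents rotate.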

If you treat tool diversity as a virtue and don't want to hand every AI vendor your full project memory, the second model wins.

## Where Claude Still Shines

So this doesn't tip into unfair polemic: memory and Dreaming are excellent at what they were designed for. Conversation continuity and self-improvement of agents inside a single platform. Anyone keeping a single coding agent productive over weeks gets a real lift from Auto Memory plus Dreaming. Cross-session pattern recognition solves a real problem, and the clean split between original store and consolidated output is well executed.

Claude's approach isn't bad. It just solves a different problem.

The interesting setup is therefore both at once. TensorPM holds the project memory: vision, plan, trail. Claude Code (or Codex, or Cursor) does the work TensorPM hands it, and uses its own auto memory for code-specific context. Memory stays close to the code, the project model stays close to the goal. Nobody has to be a platform hostage.

## What to Take Away From This

If you only run a coding agent and you only write code, Claude's memory features are probably enough. The moment you actually run a project (juggling goals, participants, budgets, deadlines, decisions), remembering becomes the wrong tool. You need a model that stores *intent*, not just *content*.

TensorPM is currently at **version 1.2.1-beta.1**. The local full version is free and works without an account. Cloud sync costs **€3.99 per month**, the Pro tier with the AI flat rate sits at **€6.99**. Platform caveat for Mac users: the macOS build currently runs only on Apple Silicon; Windows and Linux are unproblematic.

Low enough barrier to just try it for yourself.

Dreaming is lovely. Somebody still has to keep the plan in their head while the agents are asleep.

---

*Sources include: [Anthropic Code with Claude, live blog by Simon Willison](https://simonwillison.net/2026/May/6/code-w-claude-2026/), [SiliconANGLE on Claude Dreaming](https://siliconangle.com/2026/05/06/anthropic-letting-claude-agents-dream-dont-sleep-job/), [VentureBeat on Claude Memory](https://venturebeat.com/ai/anthropic-adds-memory-to-claude-team-and-enterprise-incognito-for-all), [Computerworld on the September 2025 memory rollout](https://www.computerworld.com/article/4056366/anthropic-adds-memory-to-claude-for-team-and-enterprise-plan-users.html), [Digital Trends on Dreaming](https://www.digitaltrends.com/computing/anthropic-just-taught-claude-to-dream-between-tasks-and-it-makes-agents-meaningfully-smarter/), [Anthropic Docs on Claude Code memory](https://docs.anthropic.com/en/docs/claude-code/memory), [Memory Tool API](https://platform.claude.com/docs/en/agents-and-tools/tool-use/memory-tool), [TensorPM](https://tensorpm.com), [CDPM Framework](https://contextdrivenpm.org).*

---

_Want to feel what a project looks like when it actually knows its own plan? Grab the [TensorPM beta](https://tensorpm.com). Local-first, no account, free._
