April 25, 2026 · Agents · MCP · Open Source

Stash gives any MCP agent the memory Claude.ai has

ChatGPT and Claude both have memory now. Your custom agent doesn't. Stash fills that gap with one MCP server.

The project hit HN today at 38 points. It's self-hosted, Apache-2.0-licensed, written in Go, and backed by Postgres with pgvector. Point Claude Desktop, Cursor, Windsurf, Cline, Continue, OpenAI Agents, Ollama, or anything else that speaks MCP at it, and your agent suddenly remembers conversations across sessions. The architecture isn't a key-value blob: Stash runs an eight-stage consolidation pipeline in which episodes go in raw and are then refined into facts, relationships, causal links, goal tracking, failure patterns, hypothesis verification, and confidence decay. Only data added since the last consolidation gets processed, so repeated runs stay fast.
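To make "point anything-MCP at it" concrete: for a stdio-based client like Claude Desktop, registering an MCP server is a few lines of JSON in the client config. A hedged sketch — the `stash` command and its `mcp` argument are assumptions for illustration, not taken from the repo:

```json
{
  "mcpServers": {
    "stash": {
      "command": "stash",
      "args": ["mcp"]
    }
  }
}
```

Check the repo's README for the real invocation and transport; the point is that the client-side integration is one config stanza, not custom glue code.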

The interesting design choice is treating memory as a write-then-distill problem rather than a write-then-retrieve problem. Most agent memory tools just chunk and embed. Stash bets that you actually want a small set of high-confidence consolidated facts, not a haystack of every utterance. That's how human memory works, and it's why long-running Claude sessions feel different from a stateless API call.

If you're building any kind of long-lived assistant on MCP, this is the kind of infra layer you don't want to build yourself. It deploys as a single binary via Docker.
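A sketch of what that deployment might look like — the image name, port, and environment variable here are hypothetical placeholders, so check the repo's README for the actual instructions:

```shell
# Hypothetical image name, port, and env var; the repo's README
# has the real deployment steps.
docker run -d \
  -e DATABASE_URL=postgres://user:pass@db:5432/stash \
  -p 8080:8080 \
  ghcr.io/alash3al/stash:latest
```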

https://github.com/alash3al/stash
