Stackmind gives your AI coding sessions persistent memory — decisions, context, and learnings that survive across sessions, surfaced exactly when you need them.
Add to your Claude Code config in 30 seconds
// .mcp.json (project root)
{
  "mcpServers": {
    "stackmind": {
      "command": "npx",
      "args": ["-y", "@stackmind/mcp"],
      "env": {
        "STACKMIND_API_KEY": "sm_your_key_here"
      }
    }
  }
}
Stop re-explaining your codebase. Stackmind remembers what your AI already knows.
Every session summary, decision, and learning is stored in a private vector database — searchable by your AI at the start of every new session.
Open a ticket and your AI already knows the prior decisions, related plans, and architectural constraints — without you typing a word.
ADRs and design decisions stored as first-class records. Your AI checks them before proposing an approach that contradicts a prior call.
Combines dense vector search with BM25 full-text, then re-ranks with a cross-encoder. Finds the right context even when keywords don't match.
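A common way to merge the two result lists before re-ranking is reciprocal rank fusion (RRF). Whether Stackmind uses RRF or a learned weighting isn't stated here, so treat this as an illustrative sketch of the idea, with made-up document IDs:

```python
# Illustrative sketch of hybrid retrieval fusion (not Stackmind's actual
# implementation): merge a dense-vector ranking and a BM25 ranking with
# reciprocal rank fusion. The fused top-k would then go to a cross-encoder.

def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists of doc IDs into one fused ranking.

    Each document scores sum(1 / (k + rank)) over every list it appears
    in, so a document ranked highly by either retriever floats to the
    top even when the other retriever misses it entirely.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Dense search caught the paraphrase; BM25 caught the exact keyword hit.
dense = ["adr-012", "plan-auth", "session-41"]
bm25 = ["session-41", "adr-012", "notes-db"]

fused = reciprocal_rank_fusion([dense, bm25])
# Documents present in both lists ("adr-012", "session-41") lead.
```

This is why keyword mismatch doesn't sink retrieval: a chunk only one retriever found still earns a score and stays in the candidate pool for the re-ranker.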
Your data lives in your own Qdrant instance. Stackmind never stores your code or conversations on our servers.
Ships as an MCP server. Drop the config into your Claude Code settings and you're done — no new IDE, no new workflow.
Stackmind runs as a local MCP server. Your vector data stays on your machine.
Add a short block to your Claude Code config. The MCP server starts automatically when Claude Code opens.
Run npx @stackmind/mcp ingest to index your docs, plans, and markdown files into a local Qdrant instance.
At the start of each session, Claude retrieves the relevant context — prior decisions, session summaries, related plans — and uses it to give better answers from line one.
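The ingest step above boils down to: split each markdown file into chunks, embed the chunks, and upsert them into Qdrant. Here is a minimal sketch of the chunking stage only; the function name and size limit are illustrative, not Stackmind's API:

```python
import re

def chunk_markdown(text, max_chars=1000):
    """Split a markdown document into heading-delimited chunks.

    Each chunk starts at a heading so retrieved context carries its own
    title; oversized sections are further split on paragraph breaks.
    """
    # Zero-width split: break immediately before each heading line.
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
            continue
        # Section too large: pack paragraphs into max_chars-sized chunks.
        buf = ""
        for para in section.split("\n\n"):
            if buf and len(buf) + len(para) + 2 > max_chars:
                chunks.append(buf)
                buf = para
            else:
                buf = f"{buf}\n\n{para}" if buf else para
        if buf:
            chunks.append(buf)
    return chunks

doc = (
    "# ADR-012: Use Postgres\n\nWe chose Postgres over Mongo.\n\n"
    "# Migration Plan\n\nMigrate by Q3."
)
chunks = chunk_markdown(doc)  # two chunks, one per heading
```

Each chunk would then be embedded and upserted to the local Qdrant instance, keyed by file path and heading, so session-start retrieval can pull back a self-describing snippet rather than a bare paragraph.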
Start free, upgrade when the memory pays for itself.
For individual developers running Stackmind locally.
For developers who want managed hosting and priority support.
For teams sharing a knowledge base across multiple developers.