Cody Memory: How to Make Sourcegraph Cody Remember Your Project
Sourcegraph Cody resets every session. Here is how to add persistent memory via MCP so Cody remembers your project, conventions, and past decisions.
Tutorials, tool guides, and deep dives on persistent AI memory for developers.
Everything developers need to know about persistent AI memory — what it is, why it matters, and how to set it up across your tools.
Install MemNexus, save your first memory, and connect your AI tools — all with a single setup command. Your coding agent remembers everything from here on out.
Learn to structure AI memories so your coding assistant walks into every session knowing your architecture, conventions, decisions, and where you left off.
A practical guide to which AI coding assistants support persistent memory today — via MCP, APIs, or built-in features — and how to set up each one.
How the MemNexus team diagnosed a recurring CI failure pattern across 5 incidents in 10 days — and why the sixth incident took 2 minutes instead of 2 hours.
Every AI coding assistant resets at session end. Here's why, what options exist today for persistent memory, and how they compare.
Learn how to add persistent memory to AI apps built on the Anthropic or OpenAI API — architecture, what to store, and a TypeScript SDK walkthrough.
Continue.dev resets every session. Here is how to add persistent memory via MCP so Continue remembers your project, conventions, and past decisions.
JetBrains AI Assistant resets every session. Here is how to add persistent memory via MCP so JetBrains AI remembers your project, conventions, and past decisions.
Kiro resets every session. Here is how to add persistent memory via MCP so Kiro remembers your project, conventions, and past decisions.
Tabnine resets every chat session. Here is how to add persistent memory via MCP so Tabnine remembers your project, conventions, and past decisions.
Context windows give coding agents short-term recall. MCP gives them a persistent memory layer — decisions, patterns, and architecture knowledge that survive every session restart.
Your coding agent forgets everything between sessions. Here's how to give it persistent memory that carries your architecture decisions, debugging history, and team conventions into every future session.
Install /mx-save, /mx-checkpoint, and /mx-buildcontext as global Claude Code slash commands with one CLI command. Type a slash command instead of raw tool syntax.
Five common misconceptions developers have about persistent AI memory — and what actually works for keeping structured context across tools and sessions.
Without shared memory, multiple agent teams contradict each other and rediscover the same bugs. A shared knowledge base keeps every team aligned and coherent.
Most AI agents forget everything between sessions. Three patterns — session, preference, and knowledge memory — make agents genuinely useful over time.
Most developers debug the same classes of bugs repeatedly. Here's a workflow that uses persistent memory to make each debugging session faster than the last.
How to load architectural context before reviewing a PR — so your AI reviewer knows why things were built the way they were, not just what the code does.
Open source contributors context-switch between projects months apart. Persistent AI memory means you never re-explain a project's conventions or patterns.
New engineers spend weeks learning undocumented conventions, past decisions, and tribal knowledge. Shared AI memory makes that context instantly accessible.
How to use persistent memory to generate accurate standup updates in seconds — without reconstructing what you did from git history or memory.
Technical writing is hard when your AI doesn't know your product. Persistent memory gives AI context to write accurate, consistent docs without re-explaining.
ChatGPT and Claude have built-in memory. It works well — until you hit the API, switch tools, or need to build something. Here's the architectural difference.
Prompt engineering gets all the attention, but context engineering — managing what your AI knows at session start — separates productive developers from frustrated ones.
MCP tools lose context when you restart. Learn the MCP memory pattern — how to wire a memory server so every session starts with relevant context.
When you're the only one who knows the codebase, persistent AI memory turns your assistant into a second engineer who understands the full context over months.
Every developer on your team has a coding agent. None of them share context. Here's how shared team memories fix that — and where to start.
LLMs are stateless by design. Built-in memory helps for simple use cases, but if you're building on the API or working across tools, you need a different approach.
CommitContext captures the reasoning behind every commit — decisions, debugging paths, and gotchas — so your agent can investigate issues and connect code changes to the reasoning behind them.
Build-context delivers a structured briefing — active work, key facts, gotchas, recent activity — before your agent starts. One command. No cold starts.
MemNexus search now follows connections between memories — entities, facts, topics — not just similar words. 90% recall, stale results filtered by default.
mx setup auto-detects your AI agents and configures MemNexus across Claude Code, Copilot, Cursor, and more. No external binary, no secrets in config files.
We benchmarked MCP vs CLI across three AI agents. GPT-based agents were 2x faster with MCP. Claude-based Kiro performed equally well with CLI. Choose wisely.
We built and tested an agent-help feature for our CLI. AI agents ignored it. Here's what actually helps agents use CLI tools effectively.
Recursive digest synthesis partitions large memory sets into focused groups, synthesizes each at full fidelity, and merges into one comprehensive briefing.
We're opening MemNexus to a small group of developers. Here's what you get, how it works, and what we're looking for.
Memory Digest assembles complete project briefings in one command. Gathers up to 100 memories, expands them via topic and entity graphs, and synthesizes a briefing with an LLM.
Named memories let you assign meaningful names to your most important memories. Updates auto-create versioned chains with full history preserved.
MemNexus now auto-extracts topics, facts, and entities from your memories using LLM analysis. Richer metadata, more retrieval paths, same search speed.
Complete transparency on what happens when you delete your MemNexus account — the 7-day grace period, what gets deleted, and how to permanently erase your AI memory data.
MemNexus CLI v1.7.29 introduces batch memory retrieval: fetch multiple memories in a single API call with piping support from search.
New conversation-based memory retrieval helps developers find work sessions, not just individual memories. Filter by time, group search results by conversation.
Introducing --exclude-topics: the complement to topic filtering that gives you complete control over your memory search results.
Most AI memory is a jumbled pile of everything you've ever said. MemNexus introduces memory versioning and instant recaps to fix the fundamental problems with AI memory.
Timeline Search optimizes memory retrieval for temporal understanding — reconstruct debugging sessions, review decisions, and brief teammates in one query.
Most AI memory systems treat each piece of information as an isolated fact. Narrative Reconstruction understands how your knowledge evolves over time.
Today we're launching MemNexus, a persistent memory layer that helps coding agents remember context across conversations.
Learn how to use persistent memory effectively to improve your coding agent workflows and productivity.