Your AI research assistant shouldn't lose its memory every session
Three research workflows where persistent agent memory eliminates the most frustrating part of working with AI: having to re-explain everything from scratch each time.
The problem is re-briefing
Researchers who work with AI assistants hit the same wall again and again: each new session starts blank. The AI doesn't know which papers you read last Tuesday, which experimental branches you ruled out in December, or what the argument structure of chapter four looked like before you restructured it. So you explain. Again.
This isn't a failure of the model. It's a failure of the memory layer — or rather, the absence of one. The model is stateless by design. The question is whether the infrastructure around it is.
Iranti is persistent shared memory for AI agents. It stores structured facts across sessions, survives context window limits, and lets agents pick up exactly where they left off. What follows are three research use cases where that difference is most concrete.
Building a knowledge graph across 50 papers — one session at a time
A literature review isn't a single session. You read papers across days or weeks, making connections, flagging contradictions, noting which claims need verification. The problem is that a stateless AI assistant treats each session as the first one. By session twelve, you've re-explained your research area from scratch eleven times.
With Iranti, the agent writes facts to memory as it reads. A paper on attention mechanisms gets stored under paper/attention-is-all-you-need with a summary, key claims, and a flag that it's been read. Contradictions with earlier papers get linked explicitly. Next session, the agent queries what's been covered before diving into a new paper — no re-reading, no re-briefing.
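To make the pattern concrete, here is a toy sketch of that knowledge-graph write path. The address paper/attention-is-all-you-need comes from the example above; the dict-based store, the write/papers_read helpers, and the second paper slug are illustrative assumptions, not Iranti's actual API.

```python
# Toy model of the literature-review pattern: explicit facts at
# entity+key addresses. Illustrative sketch only — not Iranti's API.
memory = {}

def write(entity, key, value):
    memory[(entity, key)] = value

# Mid-review session: the agent reads a paper and stores explicit facts.
write("paper/attention-is-all-you-need", "status", "read")
write("paper/attention-is-all-you-need", "summary",
      "Replaces recurrence with self-attention.")
# A contradiction with an earlier paper is linked explicitly
# ("paper/recurrence-is-enough" is a hypothetical slug).
write("paper/attention-is-all-you-need", "contradicts",
      ["paper/recurrence-is-enough"])

def papers_read(store):
    """Session start: query what's already covered instead of re-briefing."""
    return [e for (e, k), v in store.items() if k == "status" and v == "read"]

print(papers_read(memory))  # ['paper/attention-is-all-you-need']
```

The point of the sketch is the addressing discipline: the next session asks the store what has been read rather than asking you.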
The result isn't just efficiency. It's continuity. An agent that knows what it already knows can make connections across the full body of work rather than reasoning only from whatever fits in the current context window.
Keeping an AI in the loop across a full experimental cycle
Experimental research is iterative. You run an experiment, interpret results with AI help, form a revised hypothesis, and run again — sometimes across weeks. The model helping you interpret run 47 has no idea what happened in runs 1 through 46. So you paste in a summary of the last three runs, which means the model is really only working with whatever context you remembered to include.
With Iranti, each experimental run gets written to memory with its parameters, outcome, and your verdict on what it means. The AI agent can query the full experimental history before interpreting a new result — not just the last three runs, but every run where a specific parameter was varied, every branch that was ruled out and why.
This changes what the agent can reason about. Instead of working from a summary you cobbled together, it's working from a structured log of what actually happened. Dead ends stay dead. Ruled-out hypotheses stay ruled out. The agent doesn't re-suggest what you already tried.
An AI writing collaborator that actually knows where you left off
Writing a dissertation or long paper with AI assistance is a months-long project. Each writing session, you open a conversation and explain what the argument is, where the chapter fits, what the last reviewer said, and what you're trying to fix. The AI helps for that session. Next time, you explain it all again.
With Iranti, the manuscript's structure lives in memory — each chapter's argument, its current draft status, the editorial decisions made so far, and the reviewer feedback that's still unresolved. The agent can retrieve the state of the manuscript before the session starts. It knows chapter three was restructured last week, that the passive voice note from the supervisor still applies to section 3.2, and that the conclusion is the priority this session.
This also helps when multiple AI sessions touch the same work. Each session can write its changes and decisions back to memory, so the next session — even in a different tool — starts with an accurate picture of the current state.
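What "retrieving the state of the manuscript" looks like as keyed facts can be sketched briefly. The chapter entities, keys, and briefing helper below are hypothetical examples, not Iranti's schema.

```python
# Toy manuscript state: editorial facts stored at (entity, key) addresses.
# Entities and keys are illustrative assumptions.
manuscript = {
    ("chapter/3", "status"): "restructured last week; draft v4",
    ("chapter/3", "feedback"): "supervisor: passive voice in section 3.2",
    ("chapter/conclusion", "status"): "priority this session",
}

def session_briefing(state):
    """What an agent would retrieve before a writing session starts."""
    return [f"{entity} · {key}: {value}"
            for (entity, key), value in sorted(state.items())]

for line in session_briefing(manuscript):
    print(line)
```

Because each session writes its decisions back to the same addresses, a later session, even in a different tool, can rebuild this briefing from the store alone.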
What to know before trying this
Iranti is structured memory — it stores explicit, keyed facts, not a free-form transcript. That means the AI agent needs to write facts deliberately, using the iranti_write tool and explicit entity+key addressing. It won't automatically extract memory from an unstructured conversation.
The upside of that design is reliability: facts are stored at known addresses, retrievable by exact lookup, with provenance and timestamps attached. The agent and the researcher can both inspect what's in memory. There are no surprises about what was retained or lost.
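The reliability property can be sketched as follows: every fact lives at a known address, is retrieved by exact lookup, and carries provenance and a timestamp. The field names, the paper slug, and the write_fact/read_fact helpers are assumptions for illustration, not Iranti's actual schema.

```python
from datetime import datetime, timezone

# Sketch of exact-lookup storage with provenance and timestamps attached.
# Illustrative model only — field names are assumptions.
store = {}

def write_fact(entity, key, value, source):
    store[(entity, key)] = {
        "value": value,
        "source": source,  # provenance: which session or agent wrote it
        "written_at": datetime.now(timezone.utc).isoformat(),
    }

def read_fact(entity, key):
    # Exact lookup at a known address — no fuzzy retrieval, no surprises.
    return store[(entity, key)]

write_fact("paper/scaling-laws", "status", "read", source="session-12")
fact = read_fact("paper/scaling-laws", "status")
print(fact["value"], fact["source"])  # read session-12
```

Both the agent and the researcher can iterate over the store and see exactly what was retained, when, and by whom.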
The current version requires a local PostgreSQL setup. If you're comfortable with that, the setup guide is short. Claude Code and Codex integration is one command.
Install Iranti, bind it to your project, and run iranti claude-setup to connect it to Claude Code. The first session with persistent memory tends to make the difference obvious.