Why serious teams trust Iranti.
The product story is stronger because the proof is real.
Iranti earns trust by doing a few important things well: durable shared facts, exact retrieval when agents know what they need, continuity across tools, and operator-visible behavior when workflows go sideways.
The technical claim boundaries are documented here too; scroll down for evaluator notes and research links when you need to go deeper.
Iranti is built for the moment work moves from one agent, one prompt, or one session into a longer-running workflow. Teams get durable shared state instead of repeated re-briefing.
The strongest current evidence supports addressed retrieval and durable handoffs. That means better continuity when agents already know what they need to look up.
Iranti does not ask teams to trust hidden prompt state. Facts, provenance, conflicts, and lifecycle behavior stay visible enough to debug and operate.
Claude Code, Codex, SDK clients, and operator tooling can all point at the same memory layer, which makes continuity more durable than tool-local memory alone.
Iranti's structured retrieval arm matches raw long-context reading on a 2,000-entity, ~107k-token blind dataset at a fraction of the token cost. The efficiency differential, not just the accuracy parity, is the result.
Facts written by one agent are retrievable by a completely independent process with a different agent identity. Provenance is preserved. This is the persistence guarantee the product is built on.
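The pattern is straightforward to picture. The sketch below is not Iranti's actual API (the `SharedKB`, `write`, and `read` names are illustrative only); it is a minimal in-memory model of the guarantee: one agent identity writes a fact, a completely independent process reads it back, and provenance survives the handoff.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    key: str
    value: str
    writer: str   # agent identity that wrote the fact
    source: str   # provenance: where the fact came from

class SharedKB:
    """Toy shared memory layer: facts written under one agent
    identity are readable by any other, provenance intact."""
    def __init__(self):
        self._facts: dict[str, Fact] = {}

    def write(self, key: str, value: str, writer: str, source: str) -> None:
        self._facts[key] = Fact(key, value, writer, source)

    def read(self, key: str) -> Fact:
        return self._facts[key]

kb = SharedKB()
# Agent A (say, a Claude Code session) records a fact.
kb.write("deploy.region", "eu-west-1", writer="agent-a", source="runbook.md")
# A separate process, under a different identity, retrieves it later.
fact = kb.read("deploy.region")
assert fact.value == "eu-west-1"
assert fact.writer == "agent-a" and fact.source == "runbook.md"
```

In the real product the store is durable rather than in-memory; the point of the sketch is only that reads carry writer identity and source alongside the value.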
Oracle lookups, multi-hop entity chains, and vector-backed search all pass cleanly. The foundation for structured reasoning across a shared KB is in place.
Relationship writes plus one-hop and deep traversal all work. When work spans people, repos, tasks, and systems, the KB can model those connections explicitly.
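To make "one-hop and deep traversal" concrete, here is a toy model of the idea (the `RelationKB` class and method names are hypothetical, not Iranti's interface): typed edges connect entities, one-hop lookup returns direct relations, and a breadth-first walk answers transitive questions across people, repos, and systems.

```python
from collections import defaultdict, deque

class RelationKB:
    """Toy relationship store: explicit typed edges between entities,
    with one-hop lookup and deep (transitive) traversal."""
    def __init__(self):
        self._edges = defaultdict(set)

    def relate(self, src: str, rel: str, dst: str) -> None:
        self._edges[src].add((rel, dst))

    def one_hop(self, src: str) -> set:
        return self._edges[src]

    def reachable(self, src: str) -> set:
        # Breadth-first traversal over all outgoing edges.
        seen, queue = set(), deque([src])
        while queue:
            node = queue.popleft()
            for _, dst in self._edges[node]:
                if dst not in seen:
                    seen.add(dst)
                    queue.append(dst)
        return seen

kb = RelationKB()
kb.relate("alice", "owns", "repo/billing")
kb.relate("repo/billing", "deploys_to", "svc/payments")
kb.relate("svc/payments", "monitored_by", "pagerduty")

# One hop answers "what does alice own?"; deep traversal answers
# "what does alice's work ultimately touch?"
assert ("owns", "repo/billing") in kb.one_hop("alice")
assert kb.reachable("alice") == {"repo/billing", "svc/payments", "pagerduty"}
```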
Evidence first.
Jargon second.
If you are evaluating Iranti seriously, the question is not whether every memory problem is solved. The question is whether the product has a credible wedge and whether the claims map to real evidence. That answer is much stronger today than a generic AI memory pitch would suggest.
Addressed retrieval, persistence, relationship traversal, and upgrade continuity are the clearest current strengths.
Use the product story for the value proposition and the linked research docs for exact scope, rerun boundaries, and methodological caveats.
The public evidence is more persuasive because it separates validated strengths from bounded areas instead of turning everything into a universal green light.
Null accuracy gap vs. long-context reading at 2,000 entities, ~107k tokens. Structured retrieval at a fraction of the token cost.
Facts written by one agent retrieved by a completely independent process with a different identity. Provenance preserved.
3/3: deterministic resolution, close-gap escalation, and equal-confidence contradictory escalation all pass. High-confidence challengers win cleanly; ambiguous conflicts escalate to human review.
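The policy being tested can be sketched as a simple confidence-gap rule. The function below is a hypothetical illustration, not Iranti's arbitration logic, and the 0.2 threshold is an assumed value: a challenger well above the incumbent wins deterministically, while close or equal confidence escalates to human review.

```python
def resolve(incumbent_conf: float, challenger_conf: float,
            gap_threshold: float = 0.2) -> str:
    """Toy conflict policy: high-confidence challengers win outright;
    close or equal confidence escalates to a human."""
    gap = challenger_conf - incumbent_conf
    if gap >= gap_threshold:
        return "accept_challenger"      # deterministic resolution
    if abs(gap) < gap_threshold:
        return "escalate_to_human"      # ambiguous: defer the decision
    return "keep_incumbent"             # challenger clearly weaker

assert resolve(0.5, 0.9) == "accept_challenger"   # high-confidence challenger wins
assert resolve(0.7, 0.75) == "escalate_to_human"  # close-gap escalation
assert resolve(0.8, 0.8) == "escalate_to_human"   # equal-confidence contradiction
assert resolve(0.9, 0.5) == "keep_incumbent"      # weak challenger rejected
```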
Oracle lookups, multi-hop entity chains, and vector-backed search all pass. Foundation for structured KB reasoning.
Direct write path works. LLM arbitration on ambiguous updates regressed in v0.3.2: conservative scoring silently rejects same-source updates that previously resolved, and only large confidence gaps now trigger updates.
Write-then-query is solid: 6/6 writes, provenance intact, zero contamination. The bulk ingest endpoint regressed in v0.3.2: it crashes or extracts nothing. The direct write path is the reliable surface.
9/9 episodic recall tasks pass on v0.3.2, plus partial temporal ordering. Substantial improvement over prior bounded findings — episodic memory via structured KB is a viable pattern.
6/6 coordination tests pass. Zero missed cross-agent writes. Shared KB as coordination layer holds up.
9/9: relationship writes, one-hop traversal, and deep graph traversal all pass cleanly.
Source and confidence visible on all reads. Agent/writer identity attribution and whoKnows are MCP-only — not exposed on the REST API. Core lineage works; full attribution is bounded.
5/5 full recovery with explicit hints. 3/5 partial recovery. Cold-start without hints: 0/5 — bounded.
8/8 full session recovery. 5/8 partial session context. Recovery quality scales with available prior state.
4/5 facts preserved across versions, 3/3 post-upgrade writes, conflict state intact, API surface stable.
Each benchmark page covers methodology, trial data, and exact claim boundaries for that capability area. Read the ones that matter most for how you plan to use Iranti.
Want the product story behind the evidence?
The product page turns these strengths into the buyer-facing story: durable handoffs, exact retrieval, runtime continuity, and operator trust.