Evidence

Why serious teams trust Iranti.
The product story is stronger because the proof is real.

Iranti earns trust by doing a few important things well: durable shared facts, exact retrieval when agents know what they need, continuity across tools, and operator-visible behavior when workflows go sideways.

The technical claim boundaries are here too — scroll down for evaluator notes and research links when you need to go deeper.

Shared facts outlive one session

Iranti is built for the moment work moves from one agent, one prompt, or one session into a longer-running workflow. Teams get durable shared state instead of repeated re-briefing.

Exact retrieval beats memory theater

The strongest current evidence supports addressed retrieval and durable handoffs. That means better continuity when agents already know what they need to look up.
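The distinction above can be made concrete with a tiny sketch. Everything here is illustrative: a plain in-memory store with invented names, not Iranti's actual API, showing why an exact, addressed lookup behaves differently from broad search.

```python
# Minimal sketch of addressed retrieval vs. broad search over an
# in-memory fact store. All names are hypothetical, not Iranti's API.

class FactStore:
    def __init__(self):
        self._facts = {}  # (entity, attribute) -> value

    def write(self, entity, attribute, value):
        self._facts[(entity, attribute)] = value

    def get(self, entity, attribute):
        # Addressed retrieval: the agent already knows what it needs,
        # so the lookup is exact -- no ranking, no similarity threshold.
        return self._facts.get((entity, attribute))

    def search(self, term):
        # Broad search: scan everything and hope the right match surfaces.
        return [v for (e, a), v in self._facts.items() if term in e or term in a]

store = FactStore()
store.write("billing-service", "owner", "platform-team")
store.write("billing-service", "runtime", "python3.12")

print(store.get("billing-service", "owner"))  # exact hit: platform-team
print(store.search("billing"))                # everything loosely related
```

The `get` path is what "agents already know what they need to look up" means in practice: the caller supplies the address, so the answer is exact rather than ranked.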

Operators can inspect what happened

Iranti does not ask teams to trust hidden prompt state. Facts, provenance, conflicts, and lifecycle behavior stay visible enough to debug and operate.

The story survives tool changes

Claude Code, Codex, SDK clients, and operator tooling can all point at the same memory layer, which makes continuity more durable than tool-local memory alone.
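As a rough sketch of that shape, under assumed names (this is not Iranti's SDK): two independent clients share one memory layer, so a fact written by one is visible to the other with its writer identity preserved.

```python
# Hypothetical sketch: tool-agnostic continuity through a shared memory
# layer. Class and field names are assumptions for illustration only.

class MemoryLayer:
    def __init__(self):
        self._facts = {}

    def write(self, key, value, writer):
        # Record who wrote the fact alongside the value (provenance).
        self._facts[key] = {"value": value, "writer": writer}

    def read(self, key):
        return self._facts.get(key)

class Client:
    """One tool (e.g. a coding agent) pointed at the shared layer."""
    def __init__(self, identity, layer):
        self.identity = identity
        self.layer = layer

    def remember(self, key, value):
        self.layer.write(key, value, writer=self.identity)

    def recall(self, key):
        return self.layer.read(key)

shared = MemoryLayer()
claude_code = Client("claude-code", shared)
codex = Client("codex", shared)

# A fact written from one tool survives into a different tool's session.
claude_code.remember("deploy/window", "Fridays are frozen")
fact = codex.recall("deploy/window")
print(fact)  # {'value': 'Fridays are frozen', 'writer': 'claude-code'}
```

The point of the sketch is the topology, not the code: continuity lives in the shared layer, so swapping one client for another does not lose the fact or its provenance.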

For evaluators

Evidence first.
Jargon second.

If you are evaluating Iranti seriously, the question is not whether every memory problem is solved. The question is whether the product has a credible wedge and whether the claims map to real evidence. That answer is much stronger today than a generic AI memory pitch would suggest.

What is strongest today

Addressed retrieval, persistence, relationship traversal, and upgrade continuity are the clearest current strengths.

How to read the claims

Use the product story for the value proposition and the linked research docs for exact scope, rerun boundaries, and methodological caveats.

Why the honesty matters

The public evidence is more persuasive because it separates validated strengths from bounded areas instead of turning everything into a universal green light.

Current boundaries

Conflict resolution is strongest in deterministic cases; ambiguous conflicts escalate conservatively and may need human review.
Entity discovery is a bounded story under controlled conditions; addressed retrieval is stronger than broad search.
Automatic context recovery works best with explicit entity hints; cold autonomous recovery is a known area of active development.
Multi-agent coordination is strongest when agents address known facts directly rather than relying on broad search discovery.
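The conflict behavior described above can be sketched as a simple confidence-gap rule: a clearly stronger challenger wins deterministically, while close-gap or equal-confidence contradictions escalate to human review. The threshold and function names are assumptions for illustration, not Iranti's actual arbitration rules.

```python
# Sketch of deterministic-vs-escalated conflict resolution.
# GAP_THRESHOLD is an assumed value, not a documented Iranti setting.

GAP_THRESHOLD = 0.2  # minimum confidence gap for automatic resolution

def resolve(incumbent_conf, challenger_conf):
    gap = challenger_conf - incumbent_conf
    if gap >= GAP_THRESHOLD:
        return "accept-challenger"   # deterministic: challenger clearly stronger
    if gap <= -GAP_THRESHOLD:
        return "keep-incumbent"      # deterministic: incumbent clearly stronger
    return "escalate"                # ambiguous: route to human review

print(resolve(0.5, 0.9))  # accept-challenger
print(resolve(0.9, 0.5))  # keep-incumbent
print(resolve(0.7, 0.7))  # escalate (equal confidence)
```

The conservative branch is the one that matters operationally: anything inside the gap escalates rather than silently overwriting, which is the behavior the boundaries above describe.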
All benchmarks

B1 (PASS): Entity retrieval at scale

No accuracy gap vs. long-context reading at 2,000 entities (~107k tokens). Structured retrieval at a fraction of the token cost.

B2 (PASS): Cross-process persistence

Facts written by one agent are retrieved by a completely independent process with a different identity. Provenance is preserved.

B3 (PASS): Conflict resolution

3/3: deterministic resolution, close-gap escalation, and equal-confidence contradiction escalation all pass. High-confidence challengers win cleanly; ambiguous conflicts escalate to human review.

B4 (PASS): Multi-hop discovery

Oracle lookups, multi-hop entity chains, and vector-backed search all pass. A foundation for structured KB reasoning.

B5 (PARTIAL): Knowledge update

The direct write path works. LLM arbitration on ambiguous updates regressed in v0.3.2: conservative scoring silently rejects same-source updates that previously resolved, and only large confidence gaps trigger updates.

B6 (PARTIAL): Ingest pipeline

Write-then-query is solid: 6/6 writes, provenance intact, zero contamination. The bulk ingest endpoint regressed in v0.3.2 and either crashes or extracts nothing; the direct write path is the reliable surface.

B7 (PASS): Episodic memory

9/9 episodic recall tasks pass on v0.3.2, plus partial temporal ordering. A substantial improvement over prior bounded findings: episodic memory via a structured KB is a viable pattern.

B8 (PASS): Agent coordination

6/6 coordination tests pass with zero missed cross-agent writes. The shared KB as a coordination layer holds up.

B9 (PASS): Relationship traversal

9/9: relationship writes, one-hop traversal, and deep graph traversal all pass cleanly.

B10 (PARTIAL): Knowledge provenance

Source and confidence are visible on all reads. Agent/writer identity attribution and whoKnows are MCP-only; they are not exposed on the REST API. Core lineage works; full attribution is bounded.

B11 (PARTIAL): Context recovery

5/5 full recovery with explicit hints; 3/5 partial recovery; 0/5 cold-start without hints (bounded).

B12 (PARTIAL): Session recovery

8/8 full session recovery; 5/8 partial session context. Recovery quality scales with available prior state.

B13 (PASS): Upgrade continuity

4/5 facts preserved across versions, 3/3 post-upgrade writes, conflict state intact, API surface stable.

Each benchmark page covers methodology, trial data, and exact claim boundaries for that capability area. Read the ones that matter most for how you plan to use Iranti.

Want the product story behind the evidence?

The product page turns these strengths into the buyer-facing story: durable handoffs, exact retrieval, runtime continuity, and operator trust.