Agent memory

Agents that remember.

16 service modules. Biologically-inspired architecture. Every session makes every agent smarter — governed, audited, and injection-hardened.

16 service modules
231+ tests passing
8 DB migrations
4 dev phases shipped
How it works

Eight modules. One learning loop.

Every feature in the memory system, explained. The carousel auto-advances — or click any label to jump.

Architecture

Designed from neuroscience.

The memory system adapts principles from biological memory: Hebbian associative learning, consolidation during sleep, spreading activation, and decay. Applied to agent fleets with enterprise governance.

Hebb's rule

Chunks retrieved together in successful sessions form stronger associations. The update rule includes a saturation brake — strong edges resist further strengthening, keeping the graph responsive.

w += η × outcome × (1 − |w|)
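The saturating update can be sketched in a few lines of Python. This is an illustrative sketch, not the shipped implementation: the learning rate `eta = 0.1` and the `[-1, 1]` weight clamp are assumptions, and only the `(1 − |w|)` brake comes from the rule above.

```python
def hebbian_update(w: float, outcome: float, eta: float = 0.1) -> float:
    """Saturating Hebbian update: w += eta * outcome * (1 - |w|).

    The (1 - |w|) term is the saturation brake: edges near full
    strength barely move, so the graph stays responsive.
    eta and the clamp range are illustrative assumptions.
    """
    w += eta * outcome * (1 - abs(w))
    return max(-1.0, min(1.0, w))
```

Note how a weak edge (`w = 0.0`) gains the full `eta * outcome` step, while a strong edge (`w = 0.9`) gains only a tenth of it.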

Spreading activation

When a chunk is retrieved, activation spreads through Hebbian edges to pull in associated knowledge. Attenuation at 0.5 per hop keeps distant associations gentle.

activation = 0.5 ^ hop
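A minimal breadth-first sketch of spreading activation, assuming the Hebbian graph is an adjacency map of chunk IDs; the `max_hops` cutoff and the keep-the-strongest-path rule are assumptions, while the 0.5-per-hop attenuation is from the formula above.

```python
from collections import deque

def spread_activation(graph: dict[str, list[str]],
                      seed: str, max_hops: int = 3) -> dict[str, float]:
    """Spread activation outward from a retrieved chunk.

    Each hop attenuates by 0.5 (activation = 0.5 ** hop), so direct
    associations get 0.5, two-hop neighbors 0.25, and so on.
    """
    activation = {seed: 1.0}
    frontier = deque([(seed, 0)])
    while frontier:
        node, hop = frontier.popleft()
        if hop >= max_hops:
            continue
        for neighbor in graph.get(node, []):
            a = 0.5 ** (hop + 1)
            if a > activation.get(neighbor, 0.0):  # keep strongest path
                activation[neighbor] = a
                frontier.append((neighbor, hop + 1))
    return activation
```

For a chain `a → b → c`, the seed `a` stays at 1.0, `b` receives 0.5, and `c` a gentle 0.25.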

Hub detection

Chunks with 5+ Hebbian edges are foundational knowledge — they connect many topics. Hubs are candidates for concept extraction and receive priority in context assembly.

hub_degree >= 5
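Hub detection reduces to a degree count over the Hebbian edge list. A sketch, assuming undirected edges stored as pairs of chunk IDs; only the `>= 5` threshold comes from the text.

```python
from collections import Counter

def find_hubs(edges: list[tuple[str, str]], min_degree: int = 5) -> set[str]:
    """Return chunks whose Hebbian degree meets the hub threshold.

    Hubs connect many topics: candidates for concept extraction
    and priority placement during context assembly.
    """
    degree: Counter[str] = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return {chunk for chunk, d in degree.items() if d >= min_degree}
```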

Decay and rot

Tentative memories decay: their weight is multiplied by 0.9 every 14 days, down to a floor of 0.1. This prevents stale knowledge from polluting retrieval while preserving validated patterns indefinitely.

decay: 0.9 / 14d, floor: 0.1
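One plausible reading of the decay schedule, sketched below: one 0.9 multiplication per completed 14-day period, clamped at the 0.1 floor. The per-period interpretation (rather than, say, continuous exponential decay) is an assumption.

```python
def decay_confidence(confidence: float, age_days: float,
                     period_days: float = 14.0,
                     factor: float = 0.9, floor: float = 0.1) -> float:
    """Decay a tentative memory's confidence over time.

    Applies the 0.9 factor once per completed 14-day period,
    never dropping below the 0.1 floor. The discrete-period
    schedule is an illustrative assumption.
    """
    periods = int(age_days // period_days)
    return max(floor, confidence * factor ** periods)
```

After four weeks an untouched tentative memory sits at 0.81 of its original confidence; after a year it has settled at the floor, still retrievable but heavily discounted.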

Consolidation

Like sleep consolidation in biological brains — sessions compress into durable patterns. The pipeline runs attribution, extraction, validation, deduplication, and upsert.

85% word-overlap dedup threshold
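The dedup gate can be sketched as a word-overlap check between two extracted patterns. The exact similarity metric is not specified above; Jaccard overlap over lowercased word sets is an assumption, with only the 0.85 threshold taken from the text.

```python
def word_overlap(a: str, b: str) -> float:
    """Jaccard word overlap between two extracted patterns (assumed metric)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def is_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Gate near-identical patterns before upsert."""
    return word_overlap(a, b) >= threshold
```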

Lost-in-the-Middle

The context assembler places highest-value items at the start and end of the agent's context window — the positions where LLMs pay the most attention. Lower-value items fill the middle.

rot-aware interleaving
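The edge-biased placement can be sketched as an interleave over value-ranked items. A sketch under assumptions: items arrive as `(text, value)` pairs, and the alternating front/back split is one simple way to realize the start-and-end bias described above.

```python
def assemble_context(items: list[tuple[str, float]]) -> list[str]:
    """Order context items so the highest-value ones sit at the edges.

    LLMs attend most to the start and end of the window
    ("lost in the middle"), so ranked items are dealt alternately
    to the front and the back; the lowest-value items land in
    the middle.
    """
    ranked = sorted(items, key=lambda x: x[1], reverse=True)
    front: list[str] = []
    back: list[str] = []
    for i, (text, _) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(text)
    return front + back[::-1]
```

With five items valued 5..1, the result places the top item first, the runner-up last, and the lowest-value item dead center.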
Roadmap

The flywheel compounds.

Usage creates value. Value creates retention. Retention creates a moat. Each phase deepens it.

M.2
Conventions
Scope-attached convention store. Cascading merge semantics. Agent-proposed, human-approved. Three enforcement levels.
Up next
M.3
Causal + Temporal
Directed causal graphs. Episodic memory. Temporal pattern extraction. Counterfactual reasoning.
Planned
M.4
Multi-modal + Fine-tuning
Images, diagrams, structured data. Per-org contrastive embedding fine-tuning from usage signals.
Planned
M.5
Active learning
Uncertainty quantification. Gap detection. Self-directed knowledge acquisition. 50% faster agent ramp-up.
Planned
M.6
Federated memory
Cross-org pattern sharing with differential privacy. The network effect — every customer improves memory for every other customer.
Planned
Get started

Memory that compounds.

Every session your agents run makes the next one better. Governed, audited, injection-hardened.

Request access →
Full technology overview