# Memory Layers
Athena’s memory is organized into five layers, from the most immediate to the most persistent. Each layer serves a different purpose and has a different scope.
## Register — Current Turn
Everything that happens in the current turn: your message, tool calls, tool results, and Athena’s reasoning. This is the full working context for the active round-trip.
## L1 — Current Session
All completed turns in the current conversation. As you go back and forth with Athena, the full history of the session stays in context. This is standard LLM conversation memory.
## L2 — Recent Turns
Summaries of the last ~15 turns from your previous sessions in this context. These are automatically injected into the conversation so Athena remembers what you were working on recently — even across app restarts.
L2 context refreshes every 5 minutes and is scoped to your current context.
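The 5-minute refresh can be pictured as a TTL cache keyed by context. This is an illustrative sketch only, not Athena's actual implementation; `L2Cache` and `fetch_summaries` are hypothetical names:

```python
import time

L2_TTL_SECONDS = 5 * 60  # summaries are considered fresh for 5 minutes

class L2Cache:
    """Hypothetical per-context cache of recent-turn summaries."""

    def __init__(self):
        self._entries = {}  # context_id -> (fetched_at, summaries)

    def get(self, context_id, fetch_summaries):
        entry = self._entries.get(context_id)
        if entry and time.monotonic() - entry[0] < L2_TTL_SECONDS:
            return entry[1]  # still fresh: reuse cached summaries
        summaries = fetch_summaries(context_id)  # TTL expired: refresh
        self._entries[context_id] = (time.monotonic(), summaries)
        return summaries
```

The point of the TTL is that repeated turns within the same five minutes reuse one set of summaries instead of re-querying storage on every message.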
## L3 — Org Knowledge
Rules and decisions that always apply to your org. These are injected into every conversation automatically.
Examples:
- “Always use `pnpm` instead of `npm`”
- “We decided to use Zod for validation — don’t suggest Joi”
- “Test files go in `__tests__/` next to the source file”
L3 items are durable and persist until you explicitly change or remove them.
## Disk — All History
Everything else: past turns, codebase insights, error patterns, preferences, and more. This layer is searched on-demand using the `cogz_fetch` tool when Athena needs historical context that isn’t in the other layers.
Think of it as Athena’s long-term memory that it can look up when needed.
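Conceptually, this is a fallback lookup: answer from the always-injected layers when possible, and search Disk only when they don't cover the question. A minimal sketch, with `fetch_from_disk` standing in for the real `cogz_fetch` tool (whose actual interface isn't documented here):

```python
def answer_with_memory(question, injected_context, fetch_from_disk):
    """Use in-context layers first; fall back to on-demand Disk search.

    `injected_context` plays the role of the L1-L3 material already in
    the conversation; `fetch_from_disk` is a stand-in for cogz_fetch.
    """
    if question in injected_context:
        return injected_context[question]  # already in working context
    results = fetch_from_disk(query=question)  # on-demand historical search
    return results[0] if results else None
```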
## How It Fits Together
| Layer | Scope | Injected | Lifetime |
|---|---|---|---|
| Register | Current turn | Always | Turn |
| L1 | Current session | Always | Session |
| L2 | Recent turns | Auto (~15 turns) | 5-minute cache |
| L3 | Rules & decisions | Auto (always) | Permanent |
| Disk | Everything | On-demand | Permanent |
You don’t need to manage these layers directly. Athena handles injection and retrieval automatically. When you tell Athena to “remember” something, it stores it in the appropriate layer based on the type of knowledge.
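The routing of a “remember” request can be sketched as a simple dispatch on the kind of knowledge. The rules below are an illustrative guess based on the layer descriptions above, not Athena's actual logic:

```python
def route_memory(kind):
    """Hypothetical routing of remembered knowledge to a layer.

    Layer names match the table above; the dispatch rules are
    illustrative, not the real implementation.
    """
    if kind in ("rule", "decision"):
        return "L3"   # durable org knowledge, injected into every conversation
    if kind == "turn_summary":
        return "L2"   # recent-turn summaries, auto-injected with a short TTL
    return "Disk"     # everything else is stored for on-demand search
```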