
mnemos: Biomimetic Memory Architectures for Large Language Model Agents

preprint

Overview

Current memory systems for LLM agents rely on append-only vector stores that grow monotonically, fail to filter noise at ingestion, and retrieve by semantic similarity alone—ignoring the cognitive state of the user and the associative structure of stored knowledge. mnemos implements five neuroscience-inspired memory mechanisms as composable modules.

Key Contributions

  • Surprisal Gate: Based on predictive coding theory, filters low-information inputs at write time; reduces stored noise by 40% at the default threshold.
  • Mutable RAG: Reconsolidates memories on retrieval, addressing the stale-fact problem of append-only stores.
  • Affective Router: Blends emotional-state similarity into retrieval scoring. Achieves perfect state-congruent retrieval in controlled settings.
  • Sleep Daemon: Consolidates episodic interactions into semantic abstractions.
  • Spreading Activation: Propagates retrieval cues over an associative memory graph. At 20% per-hop decay, activation reaches 4/4 nodes in a concept chain, versus 1/4 at 90% decay.
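The Surprisal Gate's write-time filtering can be sketched as follows. This is a minimal illustration of the idea, not mnemos's actual API: the helper names, the per-token probability inputs, and the 2-bit threshold are all assumptions made for the example.

```python
import math

def surprisal(prob: float) -> float:
    # Surprisal in bits: improbable (surprising) tokens score high.
    return -math.log2(prob)

def should_store(token_probs: list[float], threshold: float = 2.0) -> bool:
    """Admit an input to memory only if its mean per-token surprisal
    exceeds the threshold (hypothetical default); predictable,
    low-information text is filtered out at write time."""
    mean_bits = sum(surprisal(p) for p in token_probs) / len(token_probs)
    return mean_bits >= threshold

# A highly predictable sequence is rejected; a surprising one passes.
print(should_store([0.9, 0.8, 0.95]))  # False
print(should_store([0.1, 0.05, 0.2]))  # True
```

In practice the probabilities would come from the agent's own language model acting as the predictor, which is what ties the gate back to predictive coding.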
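The Affective Router's blended scoring could look something like the linear mix below. The `alpha` weight, the score fields, and the example memories are illustrative assumptions; mnemos's actual blending formula may differ.

```python
def blended_score(semantic_sim: float, affect_sim: float,
                  alpha: float = 0.7) -> float:
    # Hypothetical blend of semantic similarity with similarity between
    # the user's current emotional state and each memory's affect tag.
    return alpha * semantic_sim + (1 - alpha) * affect_sim

# A state-congruent memory can outrank a slightly more semantically
# similar but emotionally neutral one.
memories = [
    ("neutral fact",   {"semantic": 0.80, "affect": 0.10}),
    ("anxious recall", {"semantic": 0.75, "affect": 0.90}),
]
ranked = sorted(
    memories,
    key=lambda m: blended_score(m[1]["semantic"], m[1]["affect"]),
    reverse=True,
)
print(ranked[0][0])  # anxious recall
```

Pure semantic ranking would return "neutral fact" first; blending in affect similarity is what produces state-congruent retrieval.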
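The decay behavior quoted for Spreading Activation can be reproduced with a simple sketch: activation starts at 1.0 at the cue node, attenuates by the decay rate on each hop, and a node counts as reached while activation stays above a cutoff. The graph representation and the 0.5 cutoff are assumptions for illustration.

```python
def spread_activation(graph: dict[str, list[str]], source: str,
                      decay: float, threshold: float = 0.5) -> dict[str, float]:
    """Propagate activation outward from `source`, multiplying by
    (1 - decay) per hop; nodes below `threshold` are not activated."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                a = activation[node] * (1.0 - decay)
                if a >= threshold and a > activation.get(neighbor, 0.0):
                    activation[neighbor] = a
                    nxt.append(neighbor)
        frontier = nxt
    return activation

# A four-node concept chain, as in the reported comparison.
chain = {"A": ["B"], "B": ["C"], "C": ["D"]}
print(len(spread_activation(chain, "A", decay=0.2)))  # 4 nodes reached
print(len(spread_activation(chain, "A", decay=0.9)))  # 1 node reached
```

At 20% decay the chain's activations are 1.0, 0.8, 0.64, 0.512, all above the cutoff; at 90% decay the first hop already falls to 0.1, so only the cue node survives.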

Architecture

mnemos is MCP-native, supports four storage backends (in-memory, SQLite, Qdrant, Neo4j), and includes a memory safety firewall. All code is MIT-licensed.
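Supporting four interchangeable backends suggests a small common storage interface behind the modules. The protocol and method names below are a hypothetical sketch of that design, not mnemos's published API; only the in-memory backend is shown.

```python
from typing import Optional, Protocol

class MemoryStore(Protocol):
    # Hypothetical minimal backend contract; real backends (SQLite,
    # Qdrant, Neo4j) would implement the same structural interface.
    def write(self, key: str, value: str) -> None: ...
    def read(self, key: str) -> Optional[str]: ...

class InMemoryStore:
    """Simplest backend: a dict, useful for tests and ephemeral agents."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def write(self, key: str, value: str) -> None:
        self._data[key] = value

    def read(self, key: str) -> Optional[str]:
        return self._data.get(key)

store: MemoryStore = InMemoryStore()
store.write("user:42", "prefers concise answers")
print(store.read("user:42"))  # prefers concise answers
```

Because `Protocol` uses structural subtyping, higher-level modules like the Surprisal Gate or Sleep Daemon can be written against `MemoryStore` without caring which backend is configured.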