- Episodic memory: full history with provenance, searchable
- Semantic memory: auto-consolidated knowledge (the scheduler synthesizes patterns from episodic memory into generalizations like "Auth module uses JWT, 24h expiry across all services")
You only store observations. The scheduler extracts the patterns.
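Concretely, on disk this might look something like the sketch below. The paths and field names here are illustrative assumptions, not the project's actual schema:

```python
import json
import time
import uuid
from pathlib import Path

# Illustrative only: directory layout and field names are assumptions.
base = Path(".afs/agents/my-agent/memories")
(base / "episodic").mkdir(parents=True, exist_ok=True)

# The agent writes a raw observation into episodic memory, with provenance.
observation = {
    "id": str(uuid.uuid4()),
    "content": "auth-service issued a JWT with a 24h expiry",
    "source": "tool:http_trace",      # provenance: where this came from
    "timestamp": time.time(),
}
(base / "episodic" / f"{observation['id']}.json").write_text(
    json.dumps(observation, indent=2)
)

# Later, the consolidation scheduler (not the agent) generalizes across many
# such observations and writes the result into semantic memory:
semantic_entry = {
    "content": "Auth module uses JWT, 24h expiry across all services",
    "consolidated_from": [observation["id"]],   # provenance back to episodes
}
```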
*Multi-agent knowledge sharing*

Named swarm pools: Agent-1 shares a memory to a swarm ID, and any agent querying that swarm ID gets it. No broker process, no coordination protocol — just shared files with file-locking.

*Auto-built knowledge graph*

Graph edges (`similar_to`, `co_occurred`, `consolidated_from`, `depends_on`) are discovered automatically during consolidation. You can query neighbors, mine for new connections, or traverse paths.
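Because edges are just data on disk, neighbor queries and traversal stay simple. A minimal sketch, assuming a hypothetical per-agent edge directory and `src`/`dst` field names (not the project's actual layout):

```python
import json
from collections import deque
from pathlib import Path

# Assumed location for edge files; each file holds one edge record.
edges_dir = Path(".afs/agents/my-agent/graph/edges")

def neighbors(memory_id: str) -> list[dict]:
    """All edges touching a memory, e.g. similar_to / co_occurred / depends_on."""
    out = []
    for f in edges_dir.glob("*.json"):
        edge = json.loads(f.read_text())
        if memory_id in (edge["src"], edge["dst"]):
            out.append(edge)
    return out

def traverse(start: str, max_hops: int = 2) -> set[str]:
    """Breadth-first walk over the edge files, up to max_hops away."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for edge in neighbors(node):
            nxt = edge["dst"] if edge["src"] == node else edge["src"]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen
```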
*Why filesystem over a vector database*

A few deliberate tradeoffs:

1. Inspectable by default — `jq . .afs/agents/my-agent/memories/working/*.json` is a valid debugging strategy
2. Versionable — `git` your agent's memory like any other project artifact
3. Portable — rsync to another machine and it works
4. Air-gap friendly — zero outbound calls
5. No additional process — no Postgres, no Qdrant, nothing to manage

Tradeoff: less efficient at very large scale than a dedicated vector DB. Search uses HNSW (hnswlib) for approximate nearest neighbor, which handles the cases I've tested so far (100k+ memories per agent, < 100ms search).
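For context on the search layer, this is roughly what the hnswlib usage looks like; the embedding dimension, index parameters, and sizes below are placeholders, not the project's defaults:

```python
import hnswlib
import numpy as np

dim = 384  # placeholder embedding dimension
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=200_000, ef_construction=200, M=16)

# Each memory gets an integer id that maps back to its JSON file on disk.
embeddings = np.random.rand(100_000, dim).astype(np.float32)
ids = np.arange(100_000)
index.add_items(embeddings, ids)

index.set_ef(100)  # query-time accuracy/speed knob
query = np.random.rand(dim).astype(np.float32)
labels, distances = index.knn_query(query, k=10)
```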
*Audit trail*

All operations are logged with standardized operation names, a status (success/error/partial), and an operation-specific payload. Fail-open: if audit logging fails, the operation continues.
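A fail-open wrapper can be as simple as a decorator that swallows its own logging errors. This sketch assumes a hypothetical `.afs/audit.log` location and field names, not the project's actual logger:

```python
import functools
import json
import logging
import time

AUDIT_LOG = ".afs/audit.log"   # assumed location, for illustration only

def audited(operation: str):
    """Fail-open audit decorator: logging problems never block the operation."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            status = "success"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                try:
                    record = {
                        "operation": operation,
                        "status": status,
                        "timestamp": time.time(),
                        "payload": {},   # operation-specific details go here
                    }
                    with open(AUDIT_LOG, "a") as f:
                        f.write(json.dumps(record) + "\n")
                except OSError:
                    # Fail-open: a broken audit log never blocks the operation.
                    logging.debug("audit write failed; continuing", exc_info=True)
        return inner
    return wrap
```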
*Status*

Under active development. APIs and behaviors change frequently. Open-sourcing early to get feedback from people building real agentic systems.

Repo: https://github.com/thompson0012/project-afs

Specifically interested in feedback on:

- The filesystem-first approach vs. embedded DB (DuckDB, SQLite with vector extension)
- Whether the three-tier memory model maps to real agent workflows
- Any memory patterns this architecture can't support well