The problem: every agent framework (LangChain, CrewAI, AutoGen) has its own memory format. There's no standard way to port memory between systems, verify it hasn't been tampered with, or prove to a regulator that you deleted it.
OMS defines three things:
1. *A binary container format (.mg)* — memory grains are content-addressed (SHA-256), immutable, deterministically serialized (MessagePack). Think .git objects for agent knowledge. 10 grain types: Belief, Event, State, Workflow, Action, Observation, Goal, Reasoning, Consensus, Consent.
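A minimal sketch of what content addressing buys you. The spec serializes with deterministic MessagePack; this stand-in uses stdlib JSON with sorted keys and fixed separators so the example is self-contained. Field names in the example grain are hypothetical, not taken from the spec:

```python
import hashlib
import json

def grain_id(grain: dict) -> str:
    """Content-address a grain: SHA-256 over a canonical serialization.

    OMS uses deterministic MessagePack; JSON with sorted keys and fixed
    separators stands in here so the sketch needs only the stdlib.
    """
    canonical = json.dumps(grain, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical grain fields for illustration only.
belief = {"type": "Belief", "content": "user prefers metric units", "ts": 1700000000}

# The same content always hashes to the same id; any mutation produces a
# different id, which is what makes tampering detectable.
assert grain_id(belief) == grain_id(dict(belief))
assert grain_id(belief) != grain_id({**belief, "content": "edited"})
```

Because the id is derived from the bytes, porting a grain between systems and re-verifying it is just re-hashing, the same trick `.git` objects use.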
2. *CAL (Context Assembly Language)* — a query language for assembling LLM context. The notable design choice: delete is a structural impossibility. Not a policy — the grammar has no production rules for destructive operations. Append-only writes, bounded execution, token-budget-aware assembly.
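To illustrate the "delete is structurally impossible" idea without inventing CAL syntax, here is a sketch of the same constraint at the API level: a store whose surface simply has no destructive operation, plus a bounded, budget-aware assembly pass. Class and field names (`AppendOnlyStore`, `tokens`) are mine, not from the spec:

```python
from dataclasses import dataclass, field

@dataclass
class AppendOnlyStore:
    """Mirrors CAL's constraint in code: there is no delete or update
    method to call, just as the grammar has no production rules for
    destructive operations."""
    _log: list = field(default_factory=list)

    def append(self, grain: dict) -> int:
        """Writes only ever grow the log; returns the grain's position."""
        self._log.append(grain)
        return len(self._log) - 1

    def assemble(self, token_budget: int) -> list:
        """Bounded, token-budget-aware assembly: walk newest-first and
        stop once the next grain would exceed the budget."""
        out, used = [], 0
        for grain in reversed(self._log):
            cost = grain.get("tokens", 0)
            if used + cost > token_budget:
                break
            out.append(grain)
            used += cost
        return out

store = AppendOnlyStore()
store.append({"type": "Belief", "content": "a", "tokens": 40})
store.append({"type": "Event", "content": "b", "tokens": 30})
store.append({"type": "Goal", "content": "c", "tokens": 50})
# A 90-token budget admits the two newest grains (50 + 30) and stops.
assert [g["content"] for g in store.assemble(90)] == ["c", "b"]
```

The point of the sketch: compliance by construction rather than by policy check, since there is nothing destructive for a caller to invoke.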
3. *SML (Semantic Markup Language)* — flat output format for LLM consumption. Tag names are grain types (`<belief>`, `<reasoning>`, `<consent>`). No XML processor needed.
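A sketch of the flat-tag shape SML describes: one tag per grain, tag name is the lowercased grain type, no nesting or attributes, so a consumer can split on lines instead of running an XML parser. The actual serialization rules live in the spec; this only shows the shape:

```python
def to_sml(grains: list) -> str:
    """Render grains as flat SML-style tags, one grain per line.

    Tag names are the lowercased grain types (<belief>, <consent>, ...);
    illustrative only, not the spec's normative serializer.
    """
    return "\n".join(
        f"<{g['type'].lower()}>{g['content']}</{g['type'].lower()}>"
        for g in grains
    )

print(to_sml([
    {"type": "Belief", "content": "user prefers metric units"},
    {"type": "Consent", "content": "user opted in to long-term memory"},
]))
```

Flat output like this is cheap to regenerate per request, which matters when the same grains get reassembled under different token budgets.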
The whole thing is freely usable: dedicated to the public domain under CC0 and licensed under OWFa 1.0.
GitHub: https://github.com/openmemoryspec/oms
Happy to answer questions about the design decisions.