GitHub (https://github.com/mnexium/core-mnx) NPM (https://www.npmjs.com/package/@mnexium/core)
For us, this is a product decision and a philosophy decision.
Memory infrastructure is becoming foundational for serious AI products, and we believe the core layer should be transparent, inspectable, and extensible by the teams building on top of it. We also just want feedback: we want to build the best memory system possible given the tools available today, and to make LLMs perform better than they already do out of the box.
CORE-MNX is the backend layer that powers durable memory workflows: memory storage and retrieval, claim extraction and truth-state resolution, memory lifecycle management, and event streaming for real-time systems. It’s Postgres-backed, API-first, and built to integrate into real production stacks.
We tried our best to make this system as standalone as possible. Ultimately, full independence is difficult: we still depend on LLMs (Cerebras for fast token output, ChatGPT for reasoning, etc.) and on databases for storage. We have intentionally given the project an API interface so your own project can stay code-agnostic.
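Because the engine is API-first, any stack that can speak HTTP can use it, whatever language it's written in. As a minimal sketch of what that looks like from a client's side (the endpoint path, field names, and payload shape below are illustrative assumptions, not the actual CORE-MNX API surface):

```typescript
// Hypothetical client sketch: /memories, `content`, and `tags` are
// assumptions for illustration, not the real CORE-MNX endpoints.

interface MemoryRecord {
  content: string;   // the memory text to persist
  tags?: string[];   // optional labels for later retrieval
}

// Build a request descriptor without sending it, so any HTTP client
// (fetch, axios, or curl from a different language entirely) can run it.
function buildStoreRequest(baseUrl: string, memory: MemoryRecord) {
  return {
    url: `${baseUrl}/memories`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(memory),
  };
}

// Example: prepare a store call against a self-hosted instance.
const req = buildStoreRequest("http://localhost:8080", {
  content: "User prefers concise answers",
  tags: ["preference"],
});
```

The point of the sketch is the shape, not the specifics: a plain HTTP boundary means the memory layer imposes no SDK or runtime on the application using it.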
Open-sourcing CORE lets builders: understand exactly how memory behavior works, self-host or extend the engine for their own products, and avoid reinventing the same memory infrastructure from scratch.
What stays on Mnexium.com: Mnexium's long-term direction is still the same: make AI systems more useful over time through durable memory and reliable recall. We've simply realized that hosting memory isn't the moat we once thought it was; the real moat, we believe, is making LLM systems as easy to use as possible. The feature set we've built around memory is what differentiates us.
Open-sourcing CORE is how we make that foundation available to everyone building in this space. Everyone is welcome to weigh in on improvements and on how to make this problem tractable.
Would love feedback, opinions, and any bugs you may find. We realize it isn't perfect, but it's certainly a good start we'd love to improve upon.