The idea is to make that context more portable and plug-and-play across teams and tools, with a local-first approach so it can run in ChatGPT, Codex, Claude, OpenClaw, or basically anywhere with MCP server connectivity. There's also an API if you want to pull prompt/context config out of your codebase, so your team can actually see and edit it, or feature-flag between versions.
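To make the config/feature-flag idea concrete, here's a rough sketch. This is purely illustrative: the real API isn't shown in this post, so `PromptConfig`, `fetch_prompt_config`, and the flag logic are all made-up stand-ins for "prompt config lives outside the codebase, and a flag picks which version runs."

```python
# Hypothetical sketch only -- the actual product API is not public here,
# so every name below is invented for illustration.
from dataclasses import dataclass


@dataclass
class PromptConfig:
    version: str
    system_prompt: str


# Stand-in for configs your team can see and edit outside the codebase.
REMOTE_CONFIGS = {
    "v1": PromptConfig("v1", "You are a concise assistant."),
    "v2": PromptConfig("v2", "You are a concise assistant. Cite sources."),
}


def fetch_prompt_config(flag_enabled: bool) -> PromptConfig:
    """Feature-flag between prompt versions instead of hardcoding one."""
    return REMOTE_CONFIGS["v2" if flag_enabled else "v1"]


config = fetch_prompt_config(flag_enabled=True)
print(config.version)
```

The point is just that the prompt text stops being a string buried in application code and becomes versioned config you can flip per team or per rollout.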
A big part of it for me is being more deliberate about token spend and getting better answers sooner on the things that actually matter to you and your team. I'm also working on the knowledge side, so contexts and workflows can use the right private or shared knowledge more safely, without everything being hardwired into code.
We're particularly interested in talking to teams that want to use knowledge graphs with shared agent contexts and workflows, and route that through our system into any AI runtime, local or hosted, without us needing visibility into the underlying private knowledge itself.
It's still in alpha, so bear with me, but if this sounds useful I'd genuinely love feedback. Happy to share more details, give demos, or set up free access if anyone wants to check it out.