Hey everyone! I've been working on this project for a while and finally got it to a point where I'm comfortable sharing it with the community. Eion is a shared memory storage system that provides unified knowledge graph capabilities for AI agent systems. Think of it as the "Google Docs of AI Agents" that connects multiple AI agents together, allowing them to share context, memory, and knowledge in real-time.
When building multi-agent systems, I kept running into the same issues: limited memory space, context drift, and knowledge-quality dilution. Eion tackles these with:
• A unified API that works for single-LLM apps, AI agents, and complex multi-agent systems
• No external API cost: knowledge extraction and all-MiniLM-L6-v2 embeddings run in-house
• PostgreSQL + pgvector for conversation history and semantic search (rough sketch after this list)
• Neo4j integration for temporal knowledge graphs
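To make the storage layer concrete, here's a rough sketch of the pgvector search pattern the bullets above describe. This isn't Eion's actual code: the table schema, connection string, and session IDs are illustrative, though all-MiniLM-L6-v2 really does produce 384-dimensional embeddings:

    # Rough sketch of the pgvector semantic-search pattern (illustrative, not Eion's code).
    # Assumed schema: CREATE TABLE memories (id serial PRIMARY KEY, session_id text,
    # content text, embedding vector(384));  -- all-MiniLM-L6-v2 outputs 384 dims
    import psycopg2
    from pgvector.psycopg2 import register_vector
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # runs locally, no external API cost

    def search_memories(conn, session_id, query, k=5):
        # Embed the query locally, then rank stored memories by cosine distance (<=>).
        emb = model.encode(query)
        with conn.cursor() as cur:
            cur.execute(
                "SELECT content, embedding <=> %s AS distance "
                "FROM memories WHERE session_id = %s "
                "ORDER BY distance LIMIT %s",
                (emb, session_id, k),
            )
            return cur.fetchall()

    conn = psycopg2.connect("dbname=eion")  # hypothetical connection string
    register_vector(conn)                   # lets psycopg2 pass numpy arrays as vectors
    for content, dist in search_memories(conn, "session-123", "what did we decide?"):
        print(f"{dist:.3f}  {content}")

The Neo4j side layers a temporal knowledge graph on top of this, so agents can ask not just "what is related" but "what was true when".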
I'm curious what the HN community thinks about this approach. Are there specific use cases you'd find valuable? Any architectural concerns or missing features?
quickthrowman•7mo ago
Are you married to the name? At first glance it looks like 'Elon'. Not sure if that was intentional, but it might detract from the software itself given how controversial discussions about him are.
I’d suggest a new name if you want people to focus on your actual project and not the name.
mingyk•7mo ago
mmm honestly I just chose it because it looks pretty in the design lol... maybe I should change it
dantodor•7mo ago
Maybe I'm missing something, but going through the docs I can see how to register agents, create sessions, etc. What I don't see is a "minor" thing: how to store a memory and how to query the server for memories. What am I missing?
mingyk•7mo ago
Ah, currently it's structured so that each agent accesses its session's memory storage on its own, using the credentials provided in its system prompt, but technically there's a way to bypass that and call the API endpoints directly.
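To make that concrete, the direct-API path looks roughly like this. The endpoint paths, payload fields, and credential header here are hypothetical placeholders, not the exact API (that's in the docs):

    # Illustrative sketch of hitting the session-memory endpoints directly.
    # Endpoint paths, payload fields, and the credential header are hypothetical.
    import requests

    BASE = "http://localhost:8080"  # assumed local Eion server
    HEADERS = {"Authorization": "Bearer <agent-credential-from-system-prompt>"}

    # Store a memory against a session on behalf of an agent.
    r = requests.post(
        f"{BASE}/sessions/session-123/memories",
        headers=HEADERS,
        json={"agent_id": "planner", "content": "User prefers weekly summaries."},
    )
    r.raise_for_status()

    # Semantically query that session's memories.
    r = requests.get(
        f"{BASE}/sessions/session-123/memories",
        headers=HEADERS,
        params={"q": "what does the user prefer?", "limit": 5},
    )
    r.raise_for_status()
    print(r.json())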