easymemory-server --port 8100
Then point Claude Desktop or your agent to http://localhost:8100/mcp. Or chat with Ollama:
easymemory-agent --provider ollama --model llama3.1:8b
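If you'd rather verify the MCP endpoint from code instead of from Claude Desktop, something like this should work with the official mcp Python SDK (a minimal sketch, assuming easymemory serves MCP over the streamable HTTP transport at /mcp; the SDK usage here is generic and not taken from the easymemory docs):

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Open a streamable-HTTP connection to the local easymemory server.
    async with streamablehttp_client("http://localhost:8100/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()  # MCP handshake
            tools = await session.list_tools()
            # Print whatever memory tools the server exposes.
            print([tool.name for tool in tools.tools])

asyncio.run(main())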
Python usage:
import asyncio

from easymemory.agent import EasyMemoryAgent

async def main():
    async with EasyMemoryAgent(llm_provider="ollama", model="llama3.1:8b") as agent:
        print(await agent.chat("Remember: I prefer dark mode."))
        # Later...
        print(await agent.chat("What UI do I prefer?"))  # → "You prefer dark mode"

asyncio.run(main())
MIT licensed, minimal deps, early stage. Repo: https://github.com/JustVugg/easymemory

Looking for feedback on:
• What retrieval mix works best for your long-term memory needs?
• Pain points with current local memory solutions?
• Nice-to-have integrations?

Thanks!