Built this because I kept running into the same problems with existing memory libraries:
1. Everything gets equal weight. A peanut allergy and a lunch preference sit at the same priority. So I added importance scoring (1-10) at extraction time, and retrieval ranks by a weighted mix of similarity, importance, and recency.
2. Contradictions pile up. "Lives in Berlin" and "moved to Paris" both stay in the store. Widemem batches new facts against related existing memories in a single LLM call and resolves conflicts at write time.
3. No concept of "don't forget this". Health, legal, and financial facts get tagged as decay-immune so they never fade out of retrieval.
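Items 1 and 3 can be sketched library-agnostically. This is not widemem's actual code; the weights, the half-life, and the function name are all illustrative assumptions, but they show how a weighted mix of similarity, importance, and recency can be blended, and how a decay-immune flag keeps a fact from fading:

```python
def retrieval_score(similarity, importance, age_days,
                    decay_immune=False,
                    w_sim=0.6, w_imp=0.25, w_rec=0.15,
                    half_life_days=30.0):
    """Blend cosine similarity, importance (1-10), and recency.

    Weights and half-life are illustrative, not widemem's defaults.
    Decay-immune facts always score full recency, so they never
    fade out of retrieval.
    """
    recency = 1.0 if decay_immune else 0.5 ** (age_days / half_life_days)
    return w_sim * similarity + w_imp * (importance / 10.0) + w_rec * recency

# A decay-immune, importance-10 fact (peanut allergy) outranks an
# equally similar importance-3 lunch preference once both are a year old.
allergy = retrieval_score(0.70, 10, age_days=365, decay_immune=True)
lunch   = retrieval_score(0.70, 3,  age_days=365)
```

With these weights the allergy scores 0.82 while the stale lunch preference decays to roughly 0.50, so the safety-critical fact wins the ranking.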
Runs fully local with Ollama + sentence-transformers + SQLite + FAISS. Also supports OpenAI, Anthropic, and Qdrant. 140 tests, Apache 2.0.
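The write-time conflict resolution in item 2 can be approximated in a generic way: gather the nearest existing memories for each new fact, then issue one batched prompt asking the model to emit add/update/delete decisions. Everything below is a hypothetical sketch, not widemem's API:

```python
import json

def build_conflict_prompt(new_facts, related_memories):
    """One batched prompt so all new facts are resolved against
    related memories in a single LLM call, not one call per fact.

    Hypothetical helper; widemem's actual prompt will differ.
    """
    return (
        "Resolve conflicts between NEW facts and EXISTING memories.\n"
        'Return a JSON list of {"action": "add"|"update"|"delete", '
        '"id": int|null, "text": str}.\n'
        f"NEW: {json.dumps(new_facts)}\n"
        f"EXISTING: {json.dumps(related_memories)}"
    )

prompt = build_conflict_prompt(
    ["User moved to Paris"],
    [{"id": 7, "text": "Lives in Berlin"}],
)
```

Resolving at write time keeps the store internally consistent, so retrieval never has to arbitrate between "Lives in Berlin" and "moved to Paris" later.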
pip install widemem-ai