v1.2 adds Smart Memory - a two-stage pipeline that automatically decides what's worth storing:

1. A fast rule-based filter catches obvious noise (greetings, "thanks", etc.)
2. An LLM extracts atomic facts only when the filter passes
This saves ~70% of extraction costs while keeping memory high-quality.
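A minimal sketch of the two-stage idea described above. All names here (`NOISE_PATTERNS`, `passes_rule_filter`, `extract_facts`) are illustrative, not the actual aegis-memory API; the point is just that the cheap regex stage runs first, so the LLM call is skipped for noise.

```python
import re

# Stage 1: cheap regex patterns for obvious noise (hypothetical examples).
NOISE_PATTERNS = [
    r"^\s*(hi|hello|hey)\b",
    r"^\s*(thanks|thank you|thx)\b",
    r"^\s*(ok|okay|sure|got it)\s*[.!]?\s*$",
]

def passes_rule_filter(message: str) -> bool:
    """Reject obvious noise without spending any LLM tokens."""
    text = message.lower()
    return not any(re.search(p, text) for p in NOISE_PATTERNS)

def extract_facts(message: str, llm_call) -> list[str]:
    """Stage 2: only pay for an LLM extraction when the filter passes."""
    if not passes_rule_filter(message):
        return []  # filtered out: no extraction cost incurred
    return llm_call(f"Extract atomic facts from: {message}")

# Usage with a stub in place of a real LLM:
print(extract_facts("thanks!", llm_call=lambda p: ["stub fact"]))        # []
print(extract_facts("Our API key rotates monthly", lambda p: ["stub fact"]))
```

The cost saving comes from the early return: messages caught by stage 1 never reach the LLM at all.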
Try it in 15 seconds:

pip install aegis-memory
aegis demo
GitHub: https://github.com/quantifylabs/aegis-memory
Happy to answer questions about multi-agent memory architecture.