Entelgia is not a chatbot, but an experiment in how persistent identity, internal conflict, emotional regulation, and moral reasoning can emerge from structure rather than hard-coded rules.
Two primary agents engage in continuous dialogue across sessions, backed by a shared memory layer (SQLite for long-term storage plus an in-process short-term memory), with mechanisms for:
internal conflict (id / ego / superego dynamics)
emotion tracking and importance scoring
memory promotion through error, repetition, and affect
bounded short-term memory (LRU; see the sketch after this list)
observer-based correction loops (meta-cognitive layer)
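
To make the memory mechanics concrete, here is a simplified sketch of the STM idea, not the actual code from the repo (class and parameter names such as ShortTermMemory and promote_threshold are illustrative):

```python
import sqlite3
from collections import OrderedDict

class ShortTermMemory:
    """LRU-bounded buffer; entries whose importance crosses a threshold
    are promoted into a SQLite-backed long-term store."""

    def __init__(self, db_path=":memory:", capacity=64, promote_threshold=0.8):
        self.capacity = capacity
        self.promote_threshold = promote_threshold
        self.items = OrderedDict()  # key -> (text, importance), oldest first
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS ltm"
            " (key TEXT PRIMARY KEY, text TEXT, importance REAL)"
        )

    def add(self, key, text, importance):
        # Repetition is a promotion signal: seeing a key again bumps its score.
        if key in self.items:
            _, prev = self.items.pop(key)
            importance = max(importance, prev) + 0.1
        self.items[key] = (text, importance)
        if importance >= self.promote_threshold:
            self._promote(key, text, importance)
        # Keep STM bounded: evict least-recently-used entries.
        while len(self.items) > self.capacity:
            self.items.popitem(last=False)

    def _promote(self, key, text, importance):
        self.db.execute(
            "INSERT OR REPLACE INTO ltm VALUES (?, ?, ?)",
            (key, text, importance),
        )
        self.db.commit()
```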
The system runs entirely on local hardware, with the LLM served through Ollama, for privacy and reproducibility; a minimal call sketch is below.
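
For reference, one non-streaming agent turn against a local Ollama server looks roughly like this (the model name is a placeholder; Entelgia's own wrapper differs in detail):

```python
import requests

def local_chat(messages, model="llama3"):
    """One non-streaming chat turn against Ollama's default local endpoint."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": model, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# e.g. local_chat([{"role": "user", "content": "Hello"}])
```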
I recently completed a production-ready rewrite, focusing on:
deterministic system behavior
bounded memory growth
structured logging
privacy / PII redaction (sketch after this list)
testability and long-running stability
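
To illustrate the redaction step, a stripped-down version of the idea; the patterns and placeholder tokens here are simplified stand-ins, not the repo's actual rules:

```python
import re

# Simplified patterns -- stand-ins, not the repo's actual redaction rules.
_PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "<PHONE>"),
]

def redact(text: str) -> str:
    """Replace likely PII spans before text is logged or persisted."""
    for pattern, token in _PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

assert redact("reach me at a.b@example.com") == "reach me at <EMAIL>"
```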
This is not a claim about artificial consciousness; the terminology is used strictly as an architectural metaphor for exploring internal regulation and continuity over time.
Repo (README explains both how and why): https://github.com/sivanhavkin/Entelgia
I’d especially appreciate feedback from people interested in:
multi-agent systems
memory architectures
alignment via internal structure rather than external constraints
long-running LLM systems
Happy to answer questions or hear criticism.