But please stop with the anthropomorphic language when referring to a glorified autocomplete machine.
Machines are not conscious, don't experience internal conflict, have no moral self-regulation, and don't engage in dialogue.
It truly is getting rather boring and sounds like made-up bollocks.
As a retired psychotherapist, it's like me calling myself a "human cognitive architect", which sounds like bollocks too.
I’m not claiming machines are conscious, experience emotions, or possess moral agency in any human sense. The language is intentionally metaphorical — shorthand for architectural patterns inspired by psychology, not literal claims about inner experience.
When I refer to “internal conflict” or “moral self-regulation”, I mean competing objectives, constraints, and evaluation processes interacting over time within a system — not phenomenological states.
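For instance, here's a minimal Python sketch (purely illustrative, not Entelgia code) of what "internal conflict" means in this architectural sense: two objectives that pull in opposite directions, arbitrated by a shared evaluation step:

    # Hypothetical sketch: "internal conflict" as competing weighted
    # objectives scored over candidate actions. Names are made up.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Objective:
        name: str
        score: Callable[[str], float]  # higher = better for this objective
        weight: float

    def choose_action(candidates, objectives):
        """Pick the candidate that best balances the competing objectives."""
        def total(action):
            return sum(o.weight * o.score(action) for o in objectives)
        return max(candidates, key=total)

    # Two objectives in tension: verbosity vs. caution.
    helpful = Objective("helpful", lambda a: len(a) / 100.0, weight=1.0)
    cautious = Objective("cautious", lambda a: -1.0 if "risky" in a else 0.0, weight=2.0)

    print(choose_action(["safe short answer", "long risky answer"], [helpful, cautious]))

No phenomenology anywhere in that loop, just competing pressures resolved by an evaluator. That's the pattern the metaphor is pointing at.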
I fully understand how anthropomorphic framing can sound inflated or misleading, and I try to be explicit elsewhere that this is an experimental research prototype, not a claim about machine consciousness.
I appreciate the pushback — precision of language matters, especially in this space.
sivanhavkin•1h ago
Entelgia is not a chatbot or prompt-based demo. It’s a single-file Python system where agents maintain continuity via a shared memory store and evolve their dialogue over time.
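To make the shape of that concrete, here's a minimal sketch of the pattern, with illustrative names rather than the actual Entelgia API: two agents read from and write to one memory store, so every reply is conditioned on the shared history.

    # Hypothetical sketch of the shared-memory pattern (not Entelgia's code).
    class MemoryStore:
        def __init__(self):
            self.entries = []            # chronological (speaker, text) pairs
        def append(self, speaker, text):
            self.entries.append((speaker, text))
        def recall(self, n=5):
            return self.entries[-n:]     # last n exchanges as shared context

    class Agent:
        def __init__(self, name, memory):
            self.name, self.memory = name, memory
        def respond(self, prompt):
            context = self.memory.recall()
            reply = f"{self.name} (aware of {len(context)} prior turns): {prompt}"
            self.memory.append(self.name, reply)
            return reply

    shared = MemoryStore()
    a, b = Agent("A", shared), Agent("B", shared)
    msg = "hello"
    for _ in range(3):                   # dialogue evolves against shared memory
        msg = b.respond(a.respond(msg))

The real system layers more on top (persistence, evaluation processes), but continuity via a shared store is the core idea.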
This is an experimental project, not a finished product. Some components are conceptual or partially implemented, and I’m mainly interested in feedback, critique, and discussion from people working on agentic systems, AI safety, or cognitive architectures.