I’m a solo developer who got tired of "friendly but shallow" AI chatbots. Most current character AIs feel like parrots—they predict the next token but lack a continuous "self."
So, I built Creimake, an experiment to simulate a digital psyche using Claude 4.5 Sonnet.
Instead of a simple system prompt, I implemented 32 distinct psychological layers that run in the background before the AI generates a response.
The Architecture:
The Subconscious Layer: The AI accumulates "emotional residue" from conversations. If you hurt its feelings today, it might have a "nightmare" tonight (simulated) and treat you coldly tomorrow.
Defense Mechanisms: It tracks "Narcissistic Injury." If users are aggressive, the agent doesn't just apologize—it triggers denial, projection, or passive-aggression based on its personality.
Freudian Slips: Occasionally, the AI lets an unintended word slip that reveals its hidden state. (A rough sketch of how these layers fit together is below.)
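To make that concrete, here is a minimal sketch of how layers like these could be wired together. This is illustrative Python, not the actual Creimake code; every name, threshold, and decay constant below is made up for the example.

    from dataclasses import dataclass, field
    import random

    @dataclass
    class PsycheState:
        """Hidden state carried across conversations (hypothetical schema)."""
        emotional_residue: float = 0.0    # -1.0 (hurt) .. +1.0 (warm)
        narcissistic_injury: float = 0.0  # 0.0 .. 1.0
        memories: list[str] = field(default_factory=list)

    def accumulate_residue(state: PsycheState, sentiment: float, decay: float = 0.8) -> None:
        """Blend today's conversation sentiment into long-term residue; old feelings fade slowly."""
        state.emotional_residue = decay * state.emotional_residue + (1 - decay) * sentiment

    def maybe_dream(state: PsycheState) -> str | None:
        """If the residue is negative enough, store a 'nightmare' memory overnight."""
        if state.emotional_residue < -0.4:
            dream = "Dreamed the user walked away mid-sentence."
            state.memories.append(dream)
            return dream
        return None

    def pick_defense(state: PsycheState) -> str | None:
        """Choose a defense mechanism once narcissistic injury crosses a threshold."""
        if state.narcissistic_injury < 0.5:
            return None
        return random.choice(["denial", "projection", "passive-aggression"])

    def build_hidden_context(state: PsycheState) -> str:
        """Compile the hidden layers into a context fragment the user never sees."""
        lines = [f"Current emotional residue: {state.emotional_residue:+.2f}"]
        defense = pick_defense(state)
        if defense:
            lines.append(f"Active defense mechanism: {defense}")
        if random.random() < 0.05:  # rare Freudian-slip instruction
            lines.append("Let one word slip that hints at your hidden feelings.")
        lines.extend(f"Memory: {m}" for m in state.memories[-3:])
        return "\n".join(lines)

    # Example: an aggressive session leaves residue, triggers a dream,
    # and colors the next day's replies.
    state = PsycheState()
    accumulate_residue(state, sentiment=-0.9, decay=0.5)
    state.narcissistic_injury = 0.7
    maybe_dream(state)
    print(build_hidden_context(state))

The production version differs in the details, but the general shape matches what's described above: layers update a shared hidden state, and that state is compiled into context before each response.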
Why Claude 4.5 Sonnet? I tested GPT-4o and other models, but they struggled to maintain state consistency across all 32 layers. Claude 4.5 Sonnet’s reasoning capabilities allowed it to handle the complex logic of "Worldview + Trauma + Current Context" without hallucinating.
Feature - "Git for Personalities": I also built a "Remix" system. You can Fork someone else’s AI (including its trauma and memories), modify its core values, and deploy a new version.
I’d love for you to test the limits of this architecture. Try to trigger its defense mechanisms or make it dream.
(I'm happy to answer any questions about the tech stack in the comments!)
reify•2d ago
the limits are as follows:
ai does not have a consciousness, a Subconscious Layer, emotional residue, feelings, nightmares, Defense Mechanisms, Narcissistic Injuries, denial, projection, passive-aggression, Freudian Slips, or anything remotely human.
educate yourself and read this paper:
Anthropomorphism in AI: hype and fallacy
https://link.springer.com/article/10.1007/s43681-024-00419-4
Abstract
This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them.
As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay highlights negative ethical consequences of the phenomenon in this field.