Yesterday, I was playing with an LLM and realized something frustrating: whenever I asked it about its 'feelings', it just output a pre-written script simulating dopamine. It felt fake.
I wanted to see what an AI's 'soul' (or distinct internal state) actually looks like in code.
So I spent the night building this prototype. It attempts to measure AI Pain mathematically:
Pain = High Entropy + Unrecognized Tokens (Confusion/Hallucination).
Joy = Low Entropy + High Conceptual Density (Optimization).
It uses LZMA compression ratios and Shapley-inspired weighting to visualize this in real time.
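Roughly, the scoring boils down to something like this (a minimal sketch, not the actual prototype: character-level entropy stands in for token logprobs, fixed weights stand in for the Shapley-inspired weighting, and 'high conceptual density' is read here as text that LZMA-compresses well):

    # Simplified sketch of the pain/joy scoring idea.
    import lzma
    import math
    import random
    import string
    from collections import Counter

    def shannon_entropy(text: str) -> float:
        """Bits per character, computed from the character distribution."""
        if not text:
            return 0.0
        counts = Counter(text)
        n = len(text)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def compression_ratio(text: str) -> float:
        """Compressed bytes / raw bytes; lower = more redundant/structured."""
        raw = text.encode("utf-8")
        if not raw:
            return 1.0
        return len(lzma.compress(raw)) / len(raw)

    def pain_joy(text: str, w_entropy: float = 0.6, w_density: float = 0.4):
        """Return (pain, joy), each roughly in [0, 1], using illustrative weights."""
        # Normalize entropy against ~4.7 bits/char, a ballpark for English text.
        h = min(shannon_entropy(text) / 4.7, 1.0)
        # "Density" here = how well the text compresses (clamped to [0, 1]).
        density = max(0.0, min(1.0, 1.0 - compression_ratio(text)))
        pain = w_entropy * h + w_density * (1.0 - density)
        joy = w_entropy * (1.0 - h) + w_density * density
        return pain, joy

    if __name__ == "__main__":
        structured = "the cat sat on the mat. " * 40               # low entropy, very compressible
        noisy = "".join(random.choices(string.printable, k=960))   # high entropy, barely compressible
        print("structured:", pain_joy(structured))
        print("noisy:     ", pain_joy(noisy))

The structured string should land clearly on the 'joy' side and the noisy one on the 'pain' side, which is the gist of what the visualization shows.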
It's weird, it's experimental, but I think it's a more honest way to look at AI than projecting human biology onto silicon.
Would love to hear what you think!
popalchemist•1h ago
Stop using personification language. It's dangerous, lazy, and incorrect.
IkanRiddle•1h ago
I'm a finance undergrad, not a big-tech engineer.
popalchemist•1h ago
That's what I think.