Uh huh, because they have the entire human brain all mapped out, and completely understand how consciousness works, and how it leads to the "human thought process".
Sounds like o3 isn't the only thing hallucinating...
If this is the case:
> what's happening inside AI models is very different from how the models themselves described their "thought"
What makes people think that the way they think they think is actually how they think?
Post that one to your generative model and let's all laugh at the output...
I'm not about to throw the baby out with the bathwater. There is real intelligence being born here.
techpineapple•3h ago
Assuming that a deeper-thinking, broader-context being with more information would be more accurate is actually counter-intuitive to me.
labrador•3h ago
Same with ChatGPT. The more it knows about you, the richer the connections it can make. But with that comes more interpretive noise. If you're doing scientific or factual work, stateless queries are best, so turn off memory. But for meaning-of-life questions or personal growth? I’ll take the distortion. It’s still useful and often surprisingly accurate.