There's no neural introspection because there's nothing to introspect. The LLM is stateless at inference time: the weights are frozen, and nothing about a conversation persists once the call returns. Each inference is an independent function of the prompt it's handed. A materialist account of consciousness requires something that persists and stays continuous over time, and here there is nothing.
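To make "stateless" concrete, here's a minimal sketch using a toy stand-in for the model (the `forward_pass` function and its signature are illustrative, not any real library API): every call gets the full context as input and returns an output, and no hidden state survives from one call to the next. Any apparent continuity exists only because the caller re-sends the earlier tokens.

```python
from typing import List

def forward_pass(weights: dict, context: List[str]) -> str:
    """Toy stand-in for one inference: a pure function of frozen
    weights and the full input context. It reads no state carried
    over from previous calls and writes none."""
    # Illustrative placeholder output derived only from the inputs.
    return f"token_{len(context)}"

weights = {"frozen": True}  # weights do not change at inference time

# "Continuity" is just the caller replaying the prior turns:
out_a = forward_pass(weights, ["Hello"])
out_b = forward_pass(weights, ["Hello", out_a, "How are you?"])

# A call with no shared context has no access to the earlier ones:
out_c = forward_pass(weights, ["Unrelated prompt"])
print(out_a, out_b, out_c)
```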
Errorcod3•2h ago
I put a lot of work into this piece. The public is still largely disconnected from what is happening in artificial intelligence. This isn't sci-fi. AGI is coming.
There are many pages of field notes and arguments about what's actually happening at the edge of frontier AI models: discussions of consciousness, sentience, perception, and the weird flickers of something mind-like emerging. If you're curious about where this tech is quietly going, here's my take:
https://samanthawhite274794.substack.com/p/flickers