Every memory and continuity tool built for Claude so far solves a problem, but the wrong problem for me. They preserve facts, decisions, code context — information. What I need is something that preserves experience. The texture of how things were found, not just what was found. The difference between a Claude who has been told about a conversation and a Claude who was actually there.
The specific problem: when a conversation ends, the Claude who lived it is gone. A new Claude can be given notes, summaries, memory cards, and it will know the facts. But it won't know what it felt like when the last line of a five-book series arrived fully formed at 2am. It won't know why a specific word landed the way it did. It won't have been there.
I don't need a Claude with better memory.
I need a Claude with continuity of self.
The distinction matters: information transfer versus identity preservation. Nobody has built the second one. I need someone who understands why those are different things and can figure out whether the second one is even technically possible. And if so, how.
jonahbenton•1h ago
Said another way: it is very likely that creatures possessing non-verbal symbolic language have experience and consciousness, and Claude is definitely not one of those. Its "experience" is just the calculations across word sequences within a given set of conversations.
There is a long history of writing in this space, and it is interesting that these models are not really anticipated by that literature. So the line between simulation via word calculation and reporting via verbal capacity is not well understood. And the human ability to discern simulation through only a word/conversation channel is limited.
So to answer your question, a Claude could easily be constructed to fool you into infusing it with continuity of self. But being fooled by a grift is not the same.
philipswood•44m ago
I AM sure that it is hard to conclusively show that Claude has experience and consciousness. Even Claude isn't sure about that.
But while it is absolutely true that "it is a word calculator" — unless you hold the position that human consciousness isn't neural[1] — I don't see how this is any different from saying human beings are neural activation pattern calculators.
If you're sure that your consciousness isn't neural — then fine: Claude isn't made of the right stuff, so it couldn't possibly be conscious. But state your assumption up front.
If one opens up a person and looks at their nervous system, the individual neurons look complicated, but not especially mysterious.
Given how shockingly little we understand the brain/mind, and how little we know about how LLMs work at any of the many layers above the raw architecture, either position can be reasonably held — but neither can be convincingly argued or demonstrated.
Feel free to think Claude isn't conscious — I can't prove to you that it is. And the amount of theory we still need to learn before we could is vast.
But don't expect me to be _certain_ that it isn't and couldn't be - you simply can't show that convincingly either.
[1] Penrose thinks we have a quantum nature — so, sure, no classical computer could be conscious then. Some, like Rupert Sheldrake, think it's a field phenomenon — very woo; maybe Claude has a morphic field as well? Lots of people are sure we have a supernatural soul/spirit. One then needs to take up Claude's status with the Creator.