Claude's memory architecture is the opposite of ChatGPT's

https://www.shloked.com/writing/claude-memory
64•shloked•1h ago

Comments

richwater•1h ago
ChatGPT is quickly approaching (perhaps surpassing?) the same concerns that parents, teachers, and psychologists had with traditional social media. It's only going to get worse, but trying to stop technological progress will never work. I'm not sure what the answer is. That they're clearly optimizing for people's attention is even more worrisome.
WJW•44m ago
Seems like either a huge evolutionary advantage for the people who can exploit the (sometimes hallucinating, sometimes not) knowledge machine, or else a huge advantage for the people who are predisposed to avoid the attention-sucking knowledge machine. The ecosystem shifted; adapt or be outcompeted.
aleph_minus_one•10m ago
> Seems like either a huge evolutionary advantage for the people who can exploit the (sometimes hallucinating, sometimes not) knowledge machine, or else a huge advantage for the people who are predisposed to avoid the attention-sucking knowledge machine. The ecosystem shifted; adapt or be outcompeted.

Rather: use your time to learn serious, deep knowledge instead of wasting it reading (and particularly spreading) the science-fiction stories the AI bros tell all the time. These AI bros are insanely biased, since they will likely lose a lot of money if these stories turn out to be false, or likely even if people simply stop believing in these science-fiction fairy tales.

visarga•31m ago
> That they're clearly optimizing for people's attention is more worrisome.

Running LLMs is expensive and we can swap models easily. The fight for attention is on; it acts like an evolutionary pressure on LLMs. We already saw the sycophancy trend as a result of it.

simonw•1h ago
This post was great, very clear and well illustrated with examples.
qgin•55m ago
I love Claude's memory implementation, but I turned memory off in ChatGPT. I use ChatGPT for too many disparate things and it was weird when it was making associations across things that aren't actually associated in my life.
kiitos•51m ago
> Anthropic's more technical users inherently understand how LLMs work.

good (if superficial) post in general, but on this point specifically, emphatically: no, they do not -- no shade, nobody does, at least not in any meaningful sense

kingkawn•46m ago
Thanks for this generalization, but of course there is a broad range of understanding across the meat populace of how to improve usefulness and tweak models.
omnicognate•40m ago
Understanding how they work in the sense that permits people to invent and implement them, that provides the exact steps to compute every weight and output, is not "meaningful"?

There is a lot left to learn about the behaviour of LLMs, higher-level conceptual models to be formed to help us predict specific outcomes and design improved systems, but this meme that "nobody knows how LLMs work" is out of control.

lukev•33m ago
If we are going to create a binary of "understand LLMs" vs "do not understand LLMs", then one way to do it is as you describe; fully comprehending the latent space of the model so you know "why" it's giving a specific output.

This is likely (certainly?) impossible. So not a useful definition.

Meanwhile, I have observed a very clear binary among people I know who use LLMs: those who treat it like a magic AI oracle, vs those who understand the autoregressive model, the need for context engineering, the fact that outputs are somewhat random (hallucinations exist), setting the temperature correctly...
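To make that second camp concrete, here's a toy sketch of autoregressive sampling with temperature. It's not any vendor's actual sampler, and `model_logits` is a hypothetical stand-in for a real model's forward pass:

```python
import numpy as np

def sample_next_token(context_ids, model_logits, temperature=0.7, rng=None):
    """Sample one token given the full context (the only 'state' the model sees)."""
    rng = rng or np.random.default_rng()
    logits = model_logits(context_ids)            # hypothetical forward pass -> (vocab_size,) scores
    if temperature <= 0:
        return int(np.argmax(logits))             # temperature 0: greedy, deterministic decoding
    scaled = logits / temperature                 # lower temperature sharpens the distribution
    probs = np.exp(scaled - scaled.max())         # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))   # higher temperature -> more varied output

def generate(context_ids, model_logits, max_new_tokens=32, temperature=0.7):
    """The autoregressive loop: each sampled token is appended and fed back in."""
    out = list(context_ids)
    for _ in range(max_new_tokens):
        out.append(sample_next_token(out, model_logits, temperature))
    return out
```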

kiitos•16m ago
> If we are going to create a binary of "understand LLMs" vs "do not understand LLMs",

"we" are not, what i quoted and replied-to did! i'm not inventing strawmen to yell at, i'm responding to claims by others!

modeless•39m ago
The link to the breakdown of ChatGPT's memory implementation is broken; the correct link is: https://www.shloked.com/writing/chatgpt-memory-bitter-lesson

This is really cool, I was wondering how memory had been implemented in ChatGPT. Very interesting to see the completely different approaches. It seems to me like Claude's is better suited for solving technical tasks while ChatGPT's is more suited to improving casual conversation (and, as pointed out, future ads integration).

I think it probably won't be too long before these language-based memories look antiquated. Someone is going to figure out how to store and retrieve memories in an encoded form that skips the language representation. It may actually be the final breakthrough we need for AGI.
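Purely as speculation, a sketch of "memory that skips the language representation" might store and retrieve dense vectors rather than prose notes. Every name below is made up, and actually feeding the retrieved vectors back into a model (e.g. as soft prompts) is the unsolved part:

```python
import numpy as np

class VectorMemory:
    """Toy memory that never round-trips through text: keys and payloads are vectors."""

    def __init__(self, dim):
        self.keys = np.empty((0, dim))    # one embedding per stored memory
        self.payloads = []                # encoded state to hand back to the model, not prose

    def write(self, key_vec, payload_vec):
        self.keys = np.vstack([self.keys, key_vec])
        self.payloads.append(payload_vec)

    def read(self, query_vec, k=3):
        if not self.payloads:
            return []
        # cosine similarity between the query and every stored key
        sims = self.keys @ query_vec / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query_vec) + 1e-9
        )
        top = np.argsort(-sims)[:k]
        return [self.payloads[i] for i in top]
```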

ornornor•26m ago
> It may actually be the final breakthrough we need for AGI.

I disagree. As I understand them, LLMs right now don’t understand concepts. They actually don’t understand, period. They’re basically Markov chains on steroids. There is no intelligence in this, and in my opinion actual intelligence is a prerequisite for AGI.

SweetSoftPillow•21m ago
What is "actual intelligence" and how are you different from a Markov chain?
sixo•17m ago
Roughly, actual intelligence needs to maintain a world model in its internal representation, not merely an embedding of language, which is a very different data structure and probably will be learned in a very different way. This includes things like:

- a map of the world, or concept space, or a codebase, etc

- causality

- "factoring" which breaks down systems or interactions into predictable parts

Language alone is too blurry to do any of these precisely.

SweetSoftPillow•5m ago
Please check example #2 here: https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...

It is not "language alone" anymore. LLMs are multimodal nowadays, and it's still just the beginning.

ornornor•16m ago
What I mean is that the current generation of LLMs doesn't understand how concepts relate to one another, which is why they're so bad at maths, for instance.

Markov chains can’t deduce anything logically. I can.

sindercal•7m ago
You and Chomsky are probably the last two people on Earth to believe that.
oasisaimlessly•6m ago
The definition of 'Markov chain' is very wide. If you adhere to a materialist worldview, you are a Markov chain. [Or maybe the universe viewed as a whole is a Markov chain.]
ForHackernews•13m ago
For one thing, I have internal state that continues to exist when I'm not responding to text input; I have some (limited) access to my own internal state and can reason about it (metacognition). So far, LLMs do not, and even when they claim to, they are hallucinating: https://transformer-circuits.pub/2025/attribution-graphs/bio...
creata•14m ago
> As I understand them, LLMs right now don’t understand concepts.

In my uninformed opinion it feels like there's probably some meaningful learned representation of at least common or basic concepts. It just seems like the easiest way for LLMs to perform as well as they do.

pontus•14m ago
I'm curious what you mean when you say that this clearly is not intelligence because it's just Markov chains on steroids.

My interpretation of what you're saying is that since the next token is simply a function of the preceding tokens, i.e. a Markov chain on steroids, it can't come up with anything novel. It's just regurgitating existing structures.

But let's take this to the extreme. Are you saying that systems that act in this kind of deterministic fashion can't be intelligent? Like if the next state of my system is simply some function of the current state, then there's no magic there, just unrolling into the future. That function may be complex but ultimately that's all it is, a "stochastic parrot"?

If so, I kind of feel like you're throwing the baby out with the bathwater. The laws of physics are deterministic (I don't want to get into a conversation about QM here; there are senses in which that's deterministic too, and regardless, I would hope you wouldn't need to invoke QM to get to intelligence), but we know that there are physical systems that are intelligent.

If anything, I would say that the issue isn't that these are Markov chains on steroids, but rather that they might be Markov chains that haven't taken enough steroids. In other words, it comes down to how complex the next token generation function is. If it's too simple, then you don't have intelligence but if it's sufficiently complex then you basically get a human brain.
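For what it's worth, the "Markov chain on steroids" framing can be made precise. Assuming a fixed context window of k tokens (an assumption for this sketch, not something the parent stated), take the state to be the window itself; the process over states is then literally Markov, and the "steroids" are that the transition function is a learned network rather than a lookup table:

```latex
% Let the state be the current window of k tokens:  s_t = (x_{t-k+1}, ..., x_t).
% The next-token distribution depends only on s_t (the Markov property), and the
% next state is a deterministic update of the old one plus the sampled token:
\[
  P_\theta(x_{t+1} \mid x_1, \dots, x_t) \;=\; P_\theta(x_{t+1} \mid s_t),
  \qquad
  s_{t+1} = (x_{t-k+2}, \dots, x_{t+1}).
\]
```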

techbruv•10m ago
I don’t understand the argument “AI is just XYZ mechanism, therefore it cannot be intelligent”.

Does the mechanism really disqualify it from intelligence if, behaviorally, you cannot distinguish it from "real" intelligence?

I’m not saying that LLMs have certainly surpassed the “cannot distinguish from real intelligence” threshold, but saying there’s not even a little bit of intelligence in a system that can solve more complex math problems than I can seems like a stretch.

SweetSoftPillow•29m ago
If I remember correctly, Gemini also has this feature? Is it more like Claude's or ChatGPT's?
extr•29m ago
They are changing the way memory works soon, too: https://x.com/btibor91/status/1965906564692541621

Edit: They apparently just announced this as well: https://www.anthropic.com/news/memory

jimmyl02•3m ago
This is awesome! It seems to line up with the idea of agentic exploration versus RAG; I think Anthropic leans toward the agentic exploration side.

It will be very interesting to see which approach is deemed to "win out" in the future.
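To make the contrast concrete, here's a very rough sketch of the two styles. Every name (`vector_index.search`, `tools.search_past_chats`, the `llm` callable) is hypothetical, and neither snippet reflects Anthropic's or OpenAI's actual implementation:

```python
def rag_style_answer(question, vector_index, llm, k=5):
    """One-shot retrieval: fetch likely-relevant snippets up front, then generate once."""
    snippets = vector_index.search(question, k=k)   # hypothetical similarity search
    prompt = "\n".join(snippets) + "\n\nQuestion: " + question
    return llm(prompt)

def agentic_style_answer(question, tools, llm, max_steps=5):
    """Agentic exploration: the model decides when and what to look up, step by step."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        action = llm("\n".join(transcript) + "\nReply 'search: <query>' or give a final answer.")
        if action.startswith("search:"):
            results = tools.search_past_chats(action[len("search:"):].strip())  # hypothetical tool
            transcript.append(f"Tool result: {results}")
        else:
            return action                            # the model chose to answer
    return llm("\n".join(transcript) + "\nAnswer now with what you have.")
```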