So, fundamentally, I guess there are two camps. One would say that a trained LLM is building actual intelligence, so presumably it could actually know right from wrong: given enough data, the model will optimize toward intelligence/truth, regardless of the training data.
The other camp might say something like: LLMs directly model the world defined by their training data, regardless of “truth”. They may have some rudimentary ideas about discerning truth, based on how that’s done in the training data, but if most people in the world are bad at poker, then the machine would probably be bad at poker too.
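(A toy sketch of that second camp’s point, with made-up strings and not any real model or architecture: a maximum-likelihood fit just reproduces the empirical distribution of its corpus, so greedy decoding picks the majority behavior whether or not it’s correct.)

```python
# Minimal sketch, assuming a pure maximum-likelihood view of training.
# Hypothetical toy corpus: 9 of 10 "players" make the losing poker move.
from collections import Counter

corpus = ["fold the nuts"] * 9 + ["bet the nuts"] * 1

# "Training" here is just counting; the model's distribution is the
# empirical distribution of the data, truth playing no role.
counts = Counter(corpus)

# Greedy decoding: emit the most likely completion.
model_output = counts.most_common(1)[0][0]

print(model_output)  # -> "fold the nuts": the majority (bad) play wins
```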
Like, on the one hand, having a machine that can sort of synthesize all of the world’s information to generate answers is amazing! And there’s a lot of information out there! It’s no wonder they’re incredibly capable.
But it’s not actual intelligence. It’s like working with the most book-learned person in the world who has no street smarts, except for what they could regurgitate from repeated viewings of The Wire.
aaronbaugher•2h ago
Except that it doesn't synthesize all of the world's information; it's trained on a subset of safe, mainstream sources approved by its creators, and has guardrails to protect it from information that might prompt wrongthink. If you want the current-year-approved answer you could get from Wikipedia or Reddit, only faster, it's great for that, and often that's plenty.
But an actual intelligence could think, "Hmm, I wonder what else is out there that they haven't told me about," and go learn about it. LLMs will never do that, at least not if their owners have anything to say about it.