techpineapple•2d ago
AI seems to serve two purposes.
1: AI is a super intelligent employee that applies its observation of truth to problem solving.
2: AI is a politically vulnerable personality whose value is in its ability to adapt to the norms and truths of its users.
This is probably a very complicated question, but are the two goals above sociotechnically compatible in a practical sense, in our world?
Technically you can imagine two completely separate, detached AI systems, each designed to solve one or the other of the two problems.
But when you intermingle the AI system with the business environment it operates in, and that with the VC environment, and that with the social environment, and that with the political environment, is it possible to truly distinguish between these two goals?
bigyabai•2d ago
LLMs are words. That's just it: you can anthropomorphize one as a "personality" or a "novel" or a "pair programmer", but it really is just text at the end of the day. Text is a lossy encoder for things like "work" and "thought" and "political principles" relative to how humans understand them. It might present a shortcut to those ideas, but it won't always extrapolate correctly.
I'm reminded of The Library of Babel by Borges, where different cults inhabit a library containing every possible permutation of a 410-page book. They all scour it for different reasons, either searching for meaning in the randomness or destroying works they considered nonsense. It was obvious to me a decade ago how pointless this search is, yet in the age of LLMs people gladly adopt this mindset.
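(For scale, a back-of-the-envelope sketch in Python using the format Borges describes: 410 pages of 40 lines at roughly 80 characters each, over a 25-symbol alphabet. The numbers are from the story; the code is just the arithmetic.)

    import math

    # Borges's format: 410 pages x 40 lines x ~80 characters per line,
    # drawn from a 25-symbol alphabet (22 letters, comma, period, space).
    chars_per_book = 410 * 40 * 80    # 1,312,000 characters per book
    alphabet_size = 25

    # Distinct books = 25 ** 1,312,000. Count its decimal digits with a
    # logarithm instead of materializing the huge integer.
    digits = math.floor(chars_per_book * math.log10(alphabet_size)) + 1
    print(f"25^{chars_per_book:,} distinct books, "
          f"a number with {digits:,} decimal digits")

At that scale, exhaustive search for meaning is hopeless by construction, which is rather the point of the story.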
techpineapple•2d ago
My biggest concern with LLMs, though, is that they're actually the opposite of the Library of Babel: a remarkably closed system, tuned by humans and using a (relative to the Library of Babel) small search space.
There's meaning in what LLMs say because the way they remix their source material is actually much less sophisticated than we think, and thus their meaning is a direct representation of the human meaning they're trained on.
Would you argue against the notion that there is rich meaning captured in the libraries of the world that LLMs are trained on?
tocs3•2d ago
So, my take:
LLMs are just predicting the next likely word (token) to add to a conversation. If a model is trained on data that always says one thing, it will just repeat that thing. If it is trained on data that says two things, it will return one of them sometimes and the other at other times. It may return something "novel" in the same sense that picking words (tokens) that fit together by rolling a die is novel. Currently it seems useful for looking some types of things up, and entertaining to play with, but there is nothing profound (in the sense of new thoughts or ideas) in it.
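A minimal sketch of that sampling picture, with a toy two-token vocabulary (the distribution here is invented for illustration, not taken from any real model):

    import random

    # Toy stand-in for a model's next-token distribution. A real LLM
    # derives these probabilities from its training data via learned
    # weights; here they are simply made up.
    next_token_probs = {"blue": 0.8, "green": 0.2}

    def sample_next_token(probs):
        """Weighted random pick -- the 'rolling a die' step."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Data that mostly said "blue" yields mostly "blue", with the occasional
    # "green" -- one thing sometimes, the other thing other times.
    print([sample_next_token(next_token_probs) for _ in range(10)])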
tim333•1d ago
I'm not sure about the word "conservative" for MAGA truth-denying types.