This sounds insane to me. When we are talking about safe AI use, I wonder if things like this are talked about.
The more technology advances, the smarter we need to be in order to use it, it seems.
Even if today's general-purpose models, and models made by predators, can have negative effects on vulnerable people, LLMs could become the technology that brings psychiatric care to the masses.
People have been caught in that trap ever since the invention of religion. This is not a new problem.
They tick all the boxes: oblique meaning, a semiotic field, the illusion of hidden knowledge, and a ritual interface. The only reason we don't call it divination is that it's skinned in dark mode UX instead of stars and moons.
Barthes reminds us that all meaning is in the eye of the reader; words have no essence, only interpretation. When we forget that, we get nonsense like "the chatbot told him he was the messiah," as though language could be blamed for the projection.
What we're seeing isn't new, just unfamiliar. We used to read bones and cards. Now we read tokens. They look like language, so we treat them like arguments. But they're just as oracular: complex, probabilistic signals we transmute into insight.
We've unleashed a new form of divination on a culture that doesn't know it's practicing one. That's why everything feels uncanny. And it's only going to get stranger, until we learn to name the thing we're actually doing. Which is a shame, because once we name it, once we see it for what it is, it won't be half as fun.
Words have power, and those who create words - or create machines that create words - have responsibility and liability.
It is not enough to say "the reader is responsible for meaning and their actions". When people or planet-burning random matrix multipliers say things and influence the thoughts and behaviors of others, there is blame, and there should be liability.
Those who spread lies that caused people to storm the Capitol on January 6th believing an election to be stolen are absolutely partially responsible even if they themselves did not go to DC on that day. Those who train machines that spit out lies which have driven people to racism and genocide in the past are responsible for the consequences.
Acknowledging the interpretive nature of language doesn't absolve us from the consequences of what we say. It just means that communication is always a gamble: we load the dice with intention and hope they land amid the chaos of another mind.
This applies whether the text comes from a person or a model. The key difference is that humans write with a theory of mind. They guess what might land, what might be misread, what might resonate. LLMs don’t guess; they sample. But the meaning still arrives the same way: through the reader, reconstructing significance from dead words.
So no, pointing out that people read meaning into LLM outputs doesn’t let humans off the hook for their own words. It just reminds us that all language is a collaborative illusion, intent on one end, interpretation on the other, and a vast gap where only words exist in between.
Just looking at my recent AI prompts:
I was looking for the name of the small fibers which form a bird’s feather. ChatGPT told me they are called “barbs”. Then, using a straightforward Google search, I could verify that that is indeed the name of the thing I was looking for. How is this “divination”?
I was looking for what the G-code equivalent for galvo fiber lasers is, and ChatGPT told me there isn’t really one. The closest might be the EZCAD SDK, but it also listed several other open-source control solutions.
Wanted to know the UK hallmarking rules for an item which consists of multiple pieces of sterling silver held together by a non-metallic part. (Turns out the total weight of the silver matters, while the weight of the non-metallic part does not count.)
Wanted to translate the Hungarian phrase “besurranó tolvaj” into English. Out of the many possible translations ChatGPT provided, “opportunistic burglar” fit best for what I was looking for.
Wanted to write an SQLAlchemy model. I had an approximate idea of what fields I needed but couldn’t be arsed to come up with good names for them and look up the syntax to describe their types. ChatGPT wrote in seconds what would have taken me at least ten minutes otherwise (something like the sketch below).
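For illustration, a minimal sketch of the kind of thing it produced; the model and field names here are hypothetical stand-ins, not my actual schema:

    from sqlalchemy import Column, DateTime, Float, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    # Hypothetical example model; the point is that ChatGPT picks
    # sensible field names and the right column types in seconds.
    class Listing(Base):
        __tablename__ = "listings"

        id = Column(Integer, primary_key=True)
        title = Column(String(120), nullable=False)
        price = Column(Float, nullable=False)
        created_at = Column(DateTime)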
These are “divination” only in a very galaxy-brained “oh man, when you open your mind you see everything is divination really” sense. I would call most of these “information retrieval”. The information is out there; the LLM just helps me find it with a convenient interface. While the last one is “coding”.
You presented clear, factual queries. Great. But even there, all the components are still in play: you asked a question into a black box, received a symbolic-seeming response, evaluated its truth post hoc, and interpreted its relevance. That's divination in structural terms. The fact that you're asking about barbs on feathers instead of the fate of empires doesn't negate the ritual; you're just a more practical querent.
Calling it "information retrieval" is fine, but it's worth noticing that this particular interface feels like more than that, like there's an illusion (or a projection) of latent knowledge being revealed. That interpretive dance between human and oracle is the core of divination, no matter how mundane the interaction.
I don't believe this paints with an overly broad brush. It's a real type of interaction and the subtle distinction focuses on the core relationship between human and oracle: seeking and interpreting.
What AI actually does is like any other improved tool: it's a force multiplier. It allows a small number of highly experienced, very smart people to do double or triple the work they can do now.
In other words: for idiot management, AI does nothing (EXCEPT enable the competition)
Of course, this results in what you now see: layoffs in which, as always, the idiots survive, followed by those companies' products starting to suck more and more, because they laid off the people who actually understood how things worked, and AI cannot make up for that. Not even close.
AI is a mortal threat to the current crop of big companies. The bigger the company, the bigger the threat. The skill high-level managers tend to have is "conquering" existing companies, and nothing else. With some exceptions, they have no skill outside of management, and so you get the eternally repeated management song: that companies can be run by professional managers who don't know the underlying problem/business, "using numbers" and spreadsheets (except that when you know a few such managers and press them, it turns out they don't have a clue about the numbers and can't come up with basic spreadsheet formulas).
TLDR: AI DOESN'T let financial-expert management run an airplane company. AI lets 1000 engineers build 1000 planes without such management. AI lets a company like what Google was 15-20 years ago wipe the floor with a big airplane manufacturer. So expect big management to come up with ever more, ever bigger reasons why AI can't be allowed to do X.
Now that they have AI, I can see it becoming an "idiocy multiplier". Already software is starting to break in subtle ways: it's slow and laggy, and security processes have become a nightmare.
It's different from other force-multiplier tools in that it cuts off the pipeline of new blood while simultaneously atrophying the experienced and smart people.
It's important that the general public understands their capabilities, even if they don't grasp how they work on a technical level. This is an essential part of making them safe to use, which no disclaimer or PR puff piece about how deeply your company cares about safety will ever do.
But, of course, marketing them as "AI" that's capable of "reasoning", and showcasing how good they are at fabricated benchmarks, builds hype, which directly impacts valuations. Pattern recognition and data generation systems aren't nearly as sexy.
LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
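To make that concrete, here's a toy sketch of the sampling loop, with made-up probabilities standing in for a real model's output:

    import random

    # Toy next-token distribution for the prefix "The cat sat on the".
    # In a real LLM these probabilities come from a neural net, not a table.
    next_token_probs = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "piano": 0.05}

    def sample_next(probs):
        # A statistically informed guess: tokens are drawn in proportion
        # to the probability the model assigns them.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print("The cat sat on the", sample_next(next_token_probs))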
Modern chat-tuned LLMs are not simply statistical models trained on web-scale datasets. They are essentially fuzzy stores of (primarily third-world) labeling effort. The response patterns they give are painstakingly tuned into them, at massive scale, by data labelers. The emotional skill mentioned in the article is outsourced employees writing, or giving feedback on, emotional responses. So you're not so much talking to a statistical model as having a conversation with a Kenyan data labeler, fuzzily adapted through a transformer model to match the topic you've brought up.
While the distinction doesn't change the substance of the article, it's valuable context, and it's important to dispel the idea that training on the internet alone does this. Such training gives you GPT-2. GPT-4.5 is efficiently stored low-cost labor.
What would labeling even do for an LLM? (Not including multimodal)
The whole point of attention is that it uses existing text to determine when tokens are related to other tokens, no?
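For anyone following along, a bare-bones sketch of scaled dot-product attention (stripped of the learned projections a real transformer adds), which relates tokens to other tokens with no labels in sight:

    import numpy as np

    def attention(Q, K, V):
        # Each token's query is scored against every token's key; the
        # softmaxed scores say how strongly each token attends to the others.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V  # each output is a weighted blend of value vectors

    # 3 tokens with 4-dimensional embeddings (random stand-ins for learned ones)
    x = np.random.default_rng(0).normal(size=(3, 4))
    print(attention(x, x, x))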
Right now there are top tier LLMs being produced by a bunch of different organizations: OpenAI and Anthropic and Google and Meta and DeepSeek and Qwen and Mistral and xAI and several others as well.
Are they all employing separate armies of labelers? Are they ripping off each other's output to avoid that expense? Or is there some other, less labor intensive mechanisms that they've started to use?
I mean, on LinkedIn you can find many AI-trainer companies and see that they hire for every subject, language, and programming language, across several expertise levels. They provide the laborers for the model companies.
Personally, my (admittedly rough) estimate is much lower than yours. When non-instruction-tuned versions of GPT-3 were available, my perception was that most of the abilities and characteristics we associate with talking to an LLM were already there, just more erratic: e.g., you asked a question and the model might answer it, or might continue it with another question (which is also a plausible continuation of the provided text). But if it did "choose" to answer, it could do so with accuracy comparable to the instruction-tuned versions.
Instruction tuning made them more predictable, and made them tend to give the responses that humans prefer (e.g. actually answering questions, maybe using answer formats that humans like, etc.), but I doubt it gave them many abilities that weren't already there.
What does "thinking" even mean? It turns out that some intelligence can emerge from this stochastic process. An LLM can do math and play chess despite not being trained for it. Is that not thinking?
Also, could it be that our brains do the same: generating muscle output or spoken output somehow based on our senses and some "context" stored in our neural networks?
Modern chat-oriented LLMs are not simply statistical models trained on web scale datasets. Instead, they are the result of a two-stage process: first, large-scale pretraining on internet data, and then extensive fine-tuning through human feedback. Much of what makes these models feel responsive, safe, or emotionally intelligent is the outcome of thousands of hours of human annotation, often performed by outsourced data labelers around the world. The emotional skill and nuance attributed to these systems is, in large part, a reflection of the preferences and judgments of these human annotators, not merely the accumulation of web text.
So, when you interact with an advanced LLM, you’re not just engaging with a statistical model, nor are you simply seeing the unfiltered internet regurgitated back to you. Rather, you’re interacting with a system whose responses have been shaped and constrained by large-scale human feedback—sometimes from workers in places like Kenya—generalized through a neural network to handle any topic you bring up.
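As a sketch of how those annotator judgments actually get into the model: reward models in RLHF-style pipelines are commonly trained with a pairwise (Bradley-Terry) objective over labeler preferences. The scores below are toy numbers; a real reward model would compute them from the text itself.

    import math

    def pairwise_loss(score_chosen, score_rejected):
        # Bradley-Terry style objective: push the reward model to score
        # the annotator-preferred response above the rejected one.
        return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

    # Toy reward scores for two candidate replies to the same prompt.
    print(pairwise_loss(score_chosen=2.0, score_rejected=0.5))  # small loss
    print(pairwise_loss(score_chosen=0.5, score_rejected=2.0))  # large loss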
For example, numbers are the difference between a bridge collapsing and not collapsing.
> To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.
Of course some are skeptical these tools are useful at all. Others still don’t want to use them for moral reasons. But I’m inclined to believe the majority of the conversation is people talking past each other.
The skeptics are skeptical of the way LLMs are being presented as AI. The non-hype promoters find them really useful. Both can be correct. The tools are useful and the con is dangerous.
So the AI I use today is the same AI I used last year. And based on the current trajectory, the same AI I will use next year.
Future AIs will be more powerful, but probably influenced to push users to spend money or toward a political opinion. So they may enshittify...
Ultimately these machines work for the people who paid for them.
It's like if we'd said the YouTube we used in 2015 was going to be the worst YouTube we'd ever use.