This sounds insane to me. When we are talking about safe AI use, I wonder if things like this are talked about.
The more technological advancement goes on, the smarter we need to be in order to use it - it seems.
Even if today's general purpose models and models made by predators can have negative effects on vulnerable people, LLMs could become the technology that brings psych care to the masses.
People have been caught in that trap ever since the invention of religion. This is not a new problem.
“You shall not make idols for yourselves or erect an image or pillar, and you shall not set up a figured stone in your land to bow down to it, for I am the LORD your God."
A computer chip is a stone (silicon) which has been engraved. It's a graven image.
Anything man-made is always unworthy of worship. That includes computer programs such as AI. That includes man-made ideas such as "the government", a political party, or other abstract ideas. That also includes any man or woman. But the human natural instinct is to worship a king, pharaoh or an emperor - or to worship a physical object.
They tick all the boxes: oblique meaning, a semiotic field, the illusion of hidden knowledge, and a ritual interface. The only reason we don't call it divination is that it's skinned in dark mode UX instead of stars and moons.
Barthes reminds us that all meaning is in the eye of the reader; words have no essence, only interpretation. When we forget that, we get nonsense like "the chatbot told him he was the messiah," as though language could be blamed for the projection.
What we're seeing isn't new, just unfamiliar. We used to read bones and cards. Now we read tokens. They look like language, so we treat them like arguments. But they're just as oracular: complex, probabilistic signals we transmute into insight.
We've unleashed a new form of divination on a culture that doesn't know it's practicing one. That's why everything feels uncanny. And it's only going to get stranger, until we learn to name the thing we're actually doing. Which is a shame, because once we name it, once we see it for what it is, it won't be half as fun.
Words have power, and those that create words - or create machines that create words - have responsibility and liability.
It is not enough to say "the reader is responsible for meaning and their actions". When people or planet-burning random matrix multipliers say things and influence the thoughts and behaviors of others there is blame and there should be liability.
Those who spread lies that caused people to storm the Capitol on January 6th believing an election to be stolen are absolutely partially responsible even if they themselves did not go to DC on that day. Those who train machines that spit out lies which have driven people to racism and genocide in the past are responsible for the consequences.
Acknowledging the interpretive nature of language doesn't absolve us from the consequences of what we say. It just means that communication is always a gamble: we load the dice with intention and hope they land amid the chaos of another mind.
This applies whether the text comes from a person or a model. The key difference is that humans write with a theory of mind. They guess what might land, what might be misread, what might resonate. LLMs don’t guess; they sample. But the meaning still arrives the same way: through the reader, reconstructing significance from dead words.
So no, pointing out that people read meaning into LLM outputs doesn’t let humans off the hook for their own words. It just reminds us that all language is a collaborative illusion, intent on one end, interpretation on the other, and a vast gap where only words exist in between.
Just looking at my recent AI prompts:
I was looking for the name of the small fibers which form a bird’s feather. ChatGPT told me they are called “barbs”. Then, using a straightforward Google search, I could verify that this is indeed the name of the thing I was looking for. How is this “divination”?
I was looking for what the G-code equivalent for galvo fiber lasers is, and ChatGPT told me there isn’t really one. The closest might be the SDK of EZCAD, but it also listed several other open-source control solutions.
Wanted to know what the hallmarking rules are in the UK for an item which consists of multiple pieces of sterling silver held together by a non-metallic part. (Turns out the total weight of the silver matters, while the weight of the non-metallic part does not count.)
Wanted to translate the Hungarian phrase “besurranó tolvaj” into English. Out of the many possible translations ChatGPT provided, “opportunistic burglar” fit best for what I was looking for.
Wanted to write an SQLAlchemy model; I had an approximate idea of what fields I needed but couldn’t be arsed to come up with good names for them and find the syntax to describe their types. ChatGPT wrote it for me in seconds; it would have taken me at least ten minutes otherwise.
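(For a sense of scale, the boilerplate in question is roughly this; the table and column names below are hypothetical, not the ones ChatGPT actually produced.)

```python
# Hypothetical sketch of the kind of SQLAlchemy boilerplate meant above;
# the table and column names are made up for illustration.
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class InventoryItem(Base):
    __tablename__ = "inventory_items"

    id = Column(Integer, primary_key=True)
    name = Column(String(120), nullable=False)
    location = Column(String(255))
    created_at = Column(DateTime, server_default=func.now())
```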
These are “divination” only in a very galaxy-brained “oh man, when you open your mind you see everything is divination really” way. I would call most of these “information retrieval”. The information is out there; the LLM just helps me find it with a convenient interface. The last one is “coding”.
You presented clear, factual queries. Great. But even there, all the components are still in play: you asked a question into a black box, received a symbolic-seeming response, evaluated its truth post hoc, and interpreted its relevance. That's divination in structural terms. The fact that you're asking about barbs on feathers instead of the fate of empires doesn't negate the ritual, you're just a more practical querent.
Calling it "information retrieval" is fine, but it's worth noticing that this particular interface feels like more than that, like there's an illusion (or a projection) of latent knowledge being revealed. That interpretive dance between human and oracle is the core of divination, no matter how mundane the interaction.
I don't believe this paints with an overly broad brush. It's a real type of interaction and the subtle distinction focuses on the core relationship between human and oracle: seeking and interpreting.
So any and all human communication is divination in your book?
I think your point is pretty silly. You're falling into a common trap of starting with the premise "I don't like AI", and then working backwards from that to pontification.
My original comment is making a structural point, not a mystical one. It’s not saying that using AI feels like praying to a god, it's saying the interaction pattern mirrors forms of ritualized inquiry: question → symbolic output → interpretive response.
You can disagree with the framing, but dismissing it as "I don’t like AI so I’m going to pontificate" sidesteps the actual claim. There's a meaningful difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did.
This kind of analogy isn't an attack on AI. It’s an attempt to understand the human-AI relationship in cultural terms. That's worth engaging with, even if you think the metaphor fails.
Their counterargument is that said structural definition is overly broad, to the point of including any and all forms of symbolic communication (which is all of them). Because of that, your argument based on it doesn't really say anything at all about AI or divination, yet still seems 'deep' and mystical and wise. But this is a seeming only. And for that reason, it is silly.
By painting all things with the same brush, you lose the ability to distinguish between anything. Calling all communication divination (through your structural metaphor), and then using cached intuitions about 'the thing which used to be called divination, when it was a limited subset of the whole', is silly. You're not talking about that which used to be called divination, because you redefined divination to include all symbolic communication.
Thus your argument leaks intuitions (how that-which-was-divination generally behaves) that do not necessarily apply through a side channel (the redefined word). This is silly.
That is to say, if you want to talk about the interpretative nature of interaction with AI, that is fairly straightforward to show and I don't think anyone would fight you on it, but divination brings baggage with it that you haven't shown to be the case for AI. In point of fact, there are many ways in which AI is not at all like divination. The structural approach broadens too far too fast with not enough re-examination of priors, becoming so broad that it encompasses any kind of communication at all.
With all of that said, there seems to be a strong bent in your rhetoric towards calling it divination anyway, which suggests reasoning from that conclusion, and that the structural approach is but a blunt instrument to force AI into a divination shaped hole, to make 'poignant and wise' commentary on it.
> "I don’t like AI so I’m going to pontificate" sidesteps the actual claim
What claim? As per ^, maximally broad definition says nothing about AI that is not also about everything, and only seems to be a claim because it inherits intuitions from a redefined term.
> difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did
Sure, and all communication requires interpretation. That doesn't make all communication divination. Divination implies the notion of interpretation of something that is seen to be causally disentangled from the subject. The layout of these bones reveals your destiny. The level of mercury in this thermometer reveals the temperature. The fair die is cast, and I will win big. The loaded die is cast, and I will win big. Spot the difference. It's not structural.
That implication of essential incoherence is what you're saying without saying about AI, it is the 'cultural wisdom and poignancy' feedstock of your arguments, smuggled in via the vehicle of structural metaphor along oblique angles that should by rights not permit said implication. Yet people will of course be generally uncareful and wave those intuitions through - presuming they are wrapped in appropriately philosophical guise - which is why this line of reasoning inspires such confusion.
In summary, I see a few ways to resolve your arguments coherently:
1. keep the structural metaphor, discard cached intuitions about what it means for something to be divination (w.r.t. divination being generally wrong/bad and the specifics of how and why). results in an argument of no claims or particular distinction about anything, really. this is what you get if you just follow the logic without cache invalidation errors.
2. discard the structural metaphor and thus disregard the cached intuitions as well. there is little engagement along human-AI cultural axis that isn't also human-human. AI use is interpretative but so is all communication. functionally the same as 1.
3. keep the structural metaphor and also demonstrate how AI are not reliably causally entwined with reality along boundaries obvious to humans (hard because they plainly and obviously are, as demonstrable empirically in myriad ways), at which point go off about how using AI is divination because at this point you could actually say that with confidence.
The issue isn't "cached intuitions" about divination, but rather that you're reading the comparison too literally. It's not about importing every historical association, but about identifying specific parallels that shed light on user behavior and expectations.
Your proposed "resolutions" are based on a false dichotomy between total equivalence and total abandonment of comparison. Structural analysis can be useful even if it's not a perfect fit. The comparison isn't about labeling AI as "divination" in the classical sense, but about understanding the interpretive practices involved in human-AI interaction.
You're sidestepping the actual insight here, which is that humans tend to project meaning onto ambiguous outputs from systems they perceive as having special insight or authority. That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.
This applies just as well to other humans as it does AI. It's overly-broad to the point of meaninglessness.
The insight doesn't illuminate.
And it doesn't illuminate regardless of how many words someone uses in their failed attempt at a game of "gotcha" that nobody else is playing. There are certainly some folks acting silly here, and it's not the vast majority of us who have no problem interpreting and engaging with the structural analysis.
Words from an AI are just words.
Words in a human brain have more or less (depending on the individual's experiences) "stuff" attached to them: from direct sensory inputs to complex networks of experiences and thought. Human thought is mainly not based on words. Language is an add-on. (People without language - never learned, or sometimes temporarily disabled due to drugs, or permanently due to injury; transient or permanent aphasia - are still consciously thinking people.)
Words in a human brain are an expression of deeper structure in the brain.
Words from an AI have nothing behind them but word statistics, devoid of any real world, just words based on words.
Random example sentence: "The company needs to expand into a new country's market."
When an AI writes this, there is no real world meaning behind it whatsoever.
When a fresh out of college person writes this it's based on some shallow real world experience, and lots of hearsay.
When an experienced person who has actually done such an expansion in the past says it, a huge network of their experience with people and impressions is behind it: a feeling for where the difficulties lie and what to expect IRL, with a lot of real-world-experience-based detail. When such a person expands on the original statement, chances are highest that any follow-up statements will also represent real life quite well, because they are drawn not from text analysis, but from those deeper structures created by and during the process of the person actually performing and experiencing the task.
But the words can be exactly the same. Words from a human can be of the same (low) quality as those of an AI, if they just parrot something they read or heard somewhere, although even then the words will have more depth than the "zero" behind AI words, because even the stupidest person has some degree of actual real life forming their neural network, and not solely analysis of others' texts.
There are 40 definitions of the word "consciousness".
For the definitions pertaining to an inner world, nobody can tell if anyone besides themselves (regardless of whether they speak or move) is conscious, and none of us can prove to anyone else the validity of our own claims to possess it.
When I dream, am I conscious in that moment, or do I create a memory that my consciousness replays when I wake?
> Words from an AI have nothing behind them but word statistics, devoid of any real world, just words based on words.
> […]
> When a fresh out of college person writes this it's based on some shallow real world experience, and lots of hearsay.
My required reading at school included "Dulce Et Decorum Est" by Wilfred Owen.
The horrors of being gassed during trench warfare were alien to us in the peaceful south coast of the UK in 1999/2000.
AI are limited, but what you're describing here is the "book learning" vs. "street smart" dichotomy rather than their actual weaknesses.
And if the place would be any good at the second kind of queries you would call it Lost&Found and not the Oracle.
> illusion (or a projection) of latent knowledge being revealed
It is not an illusion. Knowledge is being revealed. The right knowledge for my question.
> That interpretive dance between human and oracle is the core of divination, no matter how mundane the interaction.
Ok, so if I went to a library, used a card index to find a book about bird feather anatomy, then read said book to find that the answer to my question is “barb” would you also call that “divination”?
If I had paid a software developer to turn my imprecise description of a database table into precise and tight code which can be executed, would you also call that “divination”?
Both get you a hammer, but I don't think anyone would call the latter magical/divine? I think it's only "magical" because it's incomprehensible... how does a hammer pop into reality? Of course, once we know EXACTLY how that works, then it ceases to be magical.
Even if we take God, if we fully understand how He works, He would no longer be magical/divine. "Oh he created another universe? This is how that works..."
The divinity comes from the fact that it is incomprehensible.
Draw your own conclusions.
Because both LLMs and the I Ching function as mirrors for human interpretation, where:
• The I Ching offers cryptic symbols and phrases—users project meaning onto them.
• LLMs generate probabilistic text—users extract significance based on context.
The parallel is:
You don’t get answers, you get patterns—and the meaning emerges from your interaction with the system.
In both cases, the output is:
• Context-sensitive
• Open-ended
• Interpreted more than dictated
It’s a cheeky way of highlighting that users bring the meaning, not the machine (or oracle).
[0]: https://www.arl.org/blog/training-generative-ai-models-on-co...
This to me is massive. The Oracle of Delphi would have no idea where you left your sandals, but present day AIs increasingly do. This (emergent?) capability of combining information retrieval with flexible language is amazing, and its utility to me cannot be overstated, when I ask a vague question, and then I check the place where the AI led me to, and the sandals are indeed there.
P.S. Thank you for introducing me to the word "querent"
Rephrasing: LLMs are the modern day oracle that we disregard when it appears to be hallucinating, embrace when it appears to be correct.
The popularity of LLMs may not be that we see them as mystical, but rather that they're right more often than they're wrong.
“That is not what I meant at all;
That is not it, at all.”
— T.S. Eliot
Why not just start with a straightforward Google search?
Google doesn't give you the answer (unless you're reading the AI summaries - then it's a question of which one you trust more). Instead it provides links to
https://www.scienceofbirds.com/blog/the-parts-of-a-feather-and-how-feathers-work
https://www.birdsoutsidemywindow.org/2010/07/02/anatomy-parts-of-a-feather/
https://en.wikipedia.org/wiki/Feather
https://www.researchgate.net/figure/Feather-structure-a-feather-shaft-rachis-and-the-feather-vane-barbs-and-barbules_fig3_303095497
These then require an additional parsing of the text to see if it has what you are after. Arguably, one could read the Wiki article first and see if it has it, but it's faster to ask ChatGPT and then verify - rather than search, scan, and parse.

1 a : any of the light, horny, epidermal outgrowths that form the external covering of the body of birds
NOTE: Feathers include the smaller down feathers and the larger contour and flight feathers. Larger feathers consist of a shaft (rachis) bearing branches (barbs) which bear smaller branches (barbules). These smaller branches bear tiny hook-bearing processes (barbicels) which interlock with the barbules of an adjacent barb to link the barbs into a continuous stiff vane. Down feathers lack barbules, resulting in fluffy feathers which provide insulation below the contour feathers.
In a flat society every individual must be able to perform philosophically the way aristocrats do.
Quite honestly, the subset of users that fit your description - unconsciously treating text from deficient authors as tea leaves - have psychiatric issues.
Surely many people consult LLMs because of the value in their right answers, which exist owing to encoded information and some emergent idea processing, while attempting to tame the wrong ones. They consult LLMs because that's what we have, limited as it is, for some problems.
Your argument fails immediately because people consulting unreliable documents cannot be confused with people consulting tools for other kinds of thinking: the thought under test is outside in the first case, inside in the second (contextually).
You have fallen into a very bad use of 'we'.
The thing is that LLMs provide plenty of answers where "right" is not a verifiable metric. Even in coding, the idea of a "right" answer quickly gets fuzzy: should I use CSS grid or flexbox here? Should these tables be normalized or not?
People simply have an unconscious bias towards the output, just like they have an unconscious bias towards the same answer given by two real people they feel differently about. That is, the sort of thing all humans do (even if you swear that in all cases you are 100% impartial and logical).
I think the impulse of ascribing intent and meaning to the output is there in almost all questions, it's just a matter of degrees (CSS question vs. meaning of life type question)
I use it more as a better Google search. Like the most recent thing I said to ChatGPT is "will clothianidin kill carpet beetles?" (turns out it does by the way.)
When you're using ChatGPT to find information, you have no indication of whether what it's regurgitating is from a high-reliability source or a low-reliability source, or if it's just a random collection of words whose purpose is simply to make grammatical sense.
Interestingly, I asked Perplexity the same thing and it said that clothianidin is not commonly recommended for carpet beetles, and suggested other insecticides and growth regulators. I had to ask a follow-up before it concluded clothianidin probably will kill carpet beetles.
Part of the reason is clothianidin is too effective at killing insects and tends to persist in the environment and kill bees and butterflies and the like so it isn't recommended for harmless stuff like carpet beetles. I was actually using it for something else and curious if it would take out the beetles as a side effect.
I use LLMs, I enjoy them, I'm more productive with them.
Then I go read a blog from some AI devs and they use terms like "thinking" or similar terms.
I always have to ask, "We're still just stringing words together with math, right? Not really thinking, right?" The answer is always yes ... but then they go back to using their wonky terms.
It's possible something like this could be said of the middle transformer layers, where it gets more and more abstract, and modern models are multimodal as well through various techniques.
What makes thinking an interesting form of output is that it processes the input in some non-trivial way to be able to do an assortment of different tasks. But that’s it. There may be other forms of intelligence that have other “senses” who deem our ability to only use physical senses as somehow making us incomplete beings.
It may be, by the end of my life, that this will no longer be true. That would be poignant.
But I'm so crestfallen and pessimistic about the future of software and software engineering now that I have stopped fighting that battle.
Perhaps “journaling-before-answering” lol. It’s basically talking out loud to itself. (Is that still being too anthropomorphic?)
Is this comment me “thinking out loud”? shrug
I'm putting the word accurate in quotes, because we'd have to understand how the brain in humans works, to have a measure for accuracy, which is very much not the case, in my humble opinion, contrary to what many of the commenters here imply.
We are currently subject to the whims of corporations with absurd amounts of influence and power, run by people who barely understand the sciences, who likely know nothing about literary history beyond what the chatbot can summarize for them, have zero sociological knowledge or communications understanding, and who don't even write well-engineered code 90% of the time but are instead ok with shipping buggy crap to the masses as long as it means they get to be the first ones to do so, all this coupled with an amount of hubris unmatched by even the greatest protagonists of Greek literature. Society has given some of the stupidest people the greatest amount of resources and power, and now we are paying for it.
What AI actually does is like any other improved tool: it's a force multiplier. It allows a small number of highly experienced, very smart people to do double or triple the work they can do now.
In other words: for idiot management, AI does nothing (EXCEPT enable the competition)
Of course, this results in what you now see: layoffs where as always idiots survive the layoffs, followed by the products of those companies starting to suck more and more because they laid off the people that actually understood how things worked and AI cannot make up for that. Not even close.
AI is a mortal threat to the current crop of big companies. The bigger the company, the bigger a threat it is. The skill high level managers tend to have is to "conquer" existing companies, and nothing else. With some exceptions, they don't have any skill outside of management, and so you have the eternally repeated management song: that companies can be run by professional managers, without knowing the underlying problem/business, "using numbers" and spreadsheets (except when you know a few and press them, it of course turns out they don't have a clue about the numbers and can't come up with basic spreadsheet formulas).
TLDR: AI DOESN'T let financial-expert management run an airplane company. AI lets 1000 engineers build 1000 planes without such management. AI lets a company like what Google was 15-20 years ago wipe the floor with a big airplane manufacturer. So expect big management to come with ever more ever bigger reasons why AI can't be allowed to do X.
Now that they have AI, I can see it become an 'idiocy multiplier'. Already software is starting to break in subtle ways, it's slow, laggy, security processes have become a nightmare.
It's different from other force-multiplier tools in that it cuts off the pipeline of new blood while simultaneously atrophying the experienced and smart people.
I've been doing that for a few years now, I understand the limitations and strengths. I'm a programmer that also does marketing and sales when needed. LLMs have made the former a lot less tedious and the latter a lot easier. There are still things I have to do manually. But there are also whole categories of things that LLMs do for me quickly, reliably, and efficiently.
The impact on big companies is that the strategy of hiring large amounts of people and getting them to do vaguely useful things by prompting them right at great expense is now being challenged by companies doing the same things with a lot less people (see what I did there). LLMs eliminate all the tedious stuff in companies. A lot of admin and legal stuff. Some low level communication work (answering support emails, writing press releases, etc). There's a lot of stuff that companies do or have to do that is not really their core business but just stuff that needs doing. If you run a small startup, that stuff consumes a lot of your time. I speak from experience. Guess what I use LLMs for? All of it. As much as I can. Because that means more quality time with our actual core product. Things are still tedious. But I get through more of it quicker.
It's important that the general public understands their capabilities, even if they don't grasp how they work on a technical level. This is an essential part of making them safe to use, which no disclaimer or PR puff piece about how deeply your company cares about safety will ever do.
But, of course, marketing them as "AI" that's capable of "reasoning", and showcasing how good they are at fabricated benchmarks, builds hype, which directly impacts valuations. Pattern recognition and data generation systems aren't nearly as sexy.
And, like Wikipedia, they can be useful to find your bearing in a subject that you know nothing about. Unlike Wikipedia, you can ask it free-form questions and have it review your understanding.
Sure, but that's not why me and others now have ~$150/month subscriptions to some of these services.
Unfortunately the LLM does not (and cannot) know what points are important or not.
If you just want a text summary based on statistical methods, then go ahead, LLMs do this cheaper and better than the previous generation of tools.
If you want actual "importance" then no.
It seems like this argument is frequently brought up just because someone used the words "thinking" or "reasoning" or other similar terms. While it's true that LLMs aren't really "reasoning" like a human, the terms are used not because the person actually believes the LLM is "reasoning like a human" but because the concept of "some junk tokens to get better tokens later" has been implemented under that name. And even with that name, it doesn't mean everyone believes they're doing human reasoning.
It's a bit like "isomorphic" programming frameworks. They're not talking about the mathematical structures that also bear the name "isomorphic"; rather, the name has been "stolen" to now mean more things, because it was kind of similar in some way.
I'm not sure what the alternative is. Humans have been doing this thing of "Ah, this new concept X is kind of similar to concept Y, maybe we reuse the name to describe X for now" for a very long time, and if you understand the context when it's brought up, it seems relatively problem-free to me; most people seem to get it.
It benefits everyone in the ecosystem when terms have shared meaning, so discussions about "reasoning" don't have to use terms like "How an AI uses jumbled starting tokens within the <think> tags to get better tokens later", and can instead just say "How an AI uses reasoning" and people can focus on the actual meat instead.
LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another
Modern chat-tuned LLMs are not simply statistical models trained on web-scale datasets. They are essentially fuzzy stores of (primarily third-world) labeling effort. The response patterns they give are painstakingly, and at massive scale, tuned into them by data labelers. The emotional skill mentioned in the article is outsourced employees writing or giving feedback on emotional responses. So you're not so much talking to a statistical model as having a conversation with a Kenyan data labeler, fuzzily adapted through a transformer model to match the topic you've brought up.
While the distinction doesn't change the substance of the article, it's valuable context, and it's important to dispel the idea that training on the internet does this. Such training gives you GPT-2. GPT-4.5 is efficiently stored low-cost labor.
What would labeling even do for an LLM? (Not including multimodal)
The whole point of attention is that it uses existing text to determine when tokens are related to other tokens, no?
Instruction tuning / supervised fine tuning is similar to the above but instead of feeding it arbitrary documents, you feed it examples of 'assistants completing tasks'. This gets you an instruction model which generally seems to follow instructions, to some extent. Usually this is also where specific tokens are baked in that mark boundaries of what is assistant response, what is human, what delineates when one turn ends / another begins, the conversational format, etc.
RLHF / similar methods go further and ask models to complete tasks, and then their outputs are graded on some preference metric. Usually that's humans or another model that has been trained to specifically provide 'human like' preference scores given some input. This doesn't really change anything functionally but makes it much more (potentially overly) palatable to interact with.
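To make that concrete, here is a minimal sketch of what those two stages of data can look like; the special tokens and examples are invented for illustration, not taken from any particular model, but the shape matches the description above:

```python
# Illustrative sketch only: token names and examples are made up,
# but the structure mirrors the two post-training stages described above.

# 1. Supervised fine-tuning: an "assistant completing a task", wrapped in
#    special tokens that mark where the human turn ends and the assistant
#    turn begins.
sft_example = (
    "<|user|>What are the small fibers of a bird's feather called?<|end|>"
    "<|assistant|>They are called barbs.<|end|>"
)

# 2. RLHF-style preference data: two candidate completions for one prompt,
#    plus a judgment (human or reward model) of which is preferred.
preference_pair = {
    "prompt": "<|user|>Summarize this support email politely.<|end|><|assistant|>",
    "chosen": "Thanks for reaching out! The customer reports that...",
    "rejected": "whatever, the customer is angry about something",
}

# A reward model is trained so that score(chosen) > score(rejected), and the
# policy model is then nudged toward responses the reward model scores highly.
```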
(I watched it all, piecemeal, over the course of a week, ha, ha.)
Right now there are top tier LLMs being produced by a bunch of different organizations: OpenAI and Anthropic and Google and Meta and DeepSeek and Qwen and Mistral and xAI and several others as well.
Are they all employing separate armies of labelers? Are they ripping off each other's output to avoid that expense? Or is there some other, less labor intensive mechanisms that they've started to use?
I mean, on LinkedIn you can find many AI trainer companies and see they hire for every subject, language, and programming language across several expertise levels. They provide the laborers for the model companies.
[0]: https://snorkel.ai/data-labeling/#Data-labeling-in-the-age-o...
[1]: https://cdn.openai.com/papers/Training_language_models_to_fo...
[2]: https://www.businessinsider.com/chatgpt-openai-contractor-la...
https://www.theverge.com/features/23764584/ai-artificial-int...
Interestingly, despite the boring and rote nature of this work, it can also become quite complicated. The author signed up to do data labeling and was given 43 pages (!) of instructions for an image labeling task with a long list of dos and don'ts. Specialist annotation, e.g. chatbot training by a subject matter expert, is a growing field that apparently pays as much as $50 an hour.
"Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing..."
Personally my inaccurate estimate is much lower than yours. When non-instruction tuned versions of GPT-3 were available, my perception is that most of the abilities and characteristics that we associate with talking to an LLM were already there - just more erratic, e.g., you asked a question and the model might answer or might continue it with another question (which is also a plausible continuation of the provided text). But if it did "choose" to answer, it could do so with comparable accuracy to the instruction-tuned versions.
Instruction tuning made them more predictable, and made them tend to give the responses that humans prefer (e.g. actually answering questions, maybe using answer formats that humans like, etc.), but I doubt it gave them many abilities that weren't already there.
It's all about the user/assistant flow instead of just a "text generator" after it,
and the assistant always tries to please the user.
They built a sycophantic machine, either by mistake or malfeasance.
What does "thinking" even mean? It turns out that some intelligence can emerge from this stochastic process. LLM can do math and can play chess despite not trained for it. Is that not thinking?
Also, could it be possible that are our brains do the same: generating muscle output or spoken output somehow based on our senses and some "context" stored in our neural network.
While it's been a few months since I've tested, the last time I tested the reasoning on a game for which very little data is available in book or online text, I was rather underwhelmed with openai's performance.
Modern chat-oriented LLMs are not simply statistical models trained on web scale datasets. Instead, they are the result of a two-stage process: first, large-scale pretraining on internet data, and then extensive fine-tuning through human feedback. Much of what makes these models feel responsive, safe, or emotionally intelligent is the outcome of thousands of hours of human annotation, often performed by outsourced data labelers around the world. The emotional skill and nuance attributed to these systems is, in large part, a reflection of the preferences and judgments of these human annotators, not merely the accumulation of web text.
So, when you interact with an advanced LLM, you’re not just engaging with a statistical model, nor are you simply seeing the unfiltered internet regurgitated back to you. Rather, you’re interacting with a system whose responses have been shaped and constrained by large-scale human feedback—sometimes from workers in places like Kenya—generalized through a neural network to handle any topic you bring up.
Secondly, the burden of proof isn't on cog-sci folk to prove the human mind doesn't work like an LLM; it'd be to prove that it does. From what we do know, despite not having a flawless understanding of the human mind, it works nothing like an LLM.
Side note: The temptation to call anything that appears to act like a mind a mind is called behaviorism and is a very old cog-sci concept, disproved many times over.
* direct causal contact with the environment, e.g., the light from the pen hits my eye, which induces mental states
* sensory-motor coordination, ie., that the light hits my eye from the pen enables coordination of the movement of the pen with my body
* sensory-motor representations, ie., my sensory motor system is trainable, and trained by historical environmental coordination
* hierarchical planning in coordination, ie., these sensory-motor representations are goal-contextualised, so that I can "solve my hunger" in an infinite number of ways (i can achieve this goal against an infinite permutation of obstacles)
* counterfactual reality-oriented mental simulation (aka imagination) -- these rich sensory motor representations are reifiable in imagination so i can simulate novel permutations to the environment, possible shifts to physics, and so on. I can anticipate these infinite number of obstacles before any have occurred, or have ever occurred.
* self-modelling feedback loops, ie., that my own process of sensory-motor coordination is an input into that coordination
* abstraction in self-modelling, ie., that i can form cognitive representations of my own goal directed actions as they succeed/fail, and treat them as objects of their own refinement
* abstraction across representational mental faculties into propositional representations, ie., that when i imagine that "I am writing", the object of my imagination is the very same object as the action "to write" -- so I know that when I recall/imagine/act/reflect/etc. I am operating on the very-same-objects of thought
* facilities of cognition: quantification, causal reasoning, discrete logical reasoning -- etc. which can be applied both at the sensory, motor and abstract conceptual level (ie., i can "count in sensation" a few objects, also with action, also in intellection)
* concept formation: abduction, various varieties of induction, etc.
* concept composition: recursion, composition in extension of concepts, composition in intension, etc.
One can go on and on here.
Describe only what happens in a few minutes of the life of a toddler as they play around with some blocks and you have listed, rather trivially, a vast universe of capabilities that an LLM lacks.
To believe an LLM has anything to do with intelligence is to have quite profoundly mistaken what capabilities are implied by intelligence -- what animals have, some more than others, and a few even more so. To think this has anything to do with linguistic competence is a profoundly strange view of the world.
Nature did not produce intelligence in animals in order that they acquire competence in the correct ordering of linguistic tokens. Universities did, to some degree, produce computer science departments for this activity however.
For example, numbers are the difference between a bridge collapsing or not
> To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.
Of course some are skeptical these tools are useful at all. Others still don’t want to use them for moral reasons. But I’m inclined to believe the majority of the conversation is people talking past each other.
The skeptics are skeptical of the way LLMs are being presented as AI. The non hype promoters find them really useful. Both can be correct. The tools are useful and the con is dangerous.
In your personal experience? Because that's been my personal experience too, in lots of cases with LLMs. But I've also been surprised the other way, and overall it's been a net positive for me, though I've also spent a lot of time "practicing" getting prompts and tooling right. I could easily see how people give it a try for 20-30 minutes, don't get the results they expected, and give up - and yeah, you probably won't get any net-positive effect that way.
Perhaps "AI" can replace people like Mark Zuckerberg. If BS can be fully automated.
The entire article is saying "it looks kind of like a human in some ways, but people are being fooled!"
You can't really say that without at least attempting the admittedly very deep question of what an authentic human is.
To me, it's intelligent because I can't distinguish its output from a person's output, for much of the time.
It's not a human, because I've compartmentalized ChatGPT into its own box and I'm actively disbelieving. The weak form is to say I don't think my ChatGPT messages are being sent to the 3rd world and answered by a human, though I don't think anyone was claiming that.
But it is also abundantly clear to me that if you stripped away the labels, it acts like a person acts a lot of the time. Say you were to go back just a few years, maybe to covid. Let's say OpenAI travels back with me in a time machine, and makes an obscure web chat service where I can write to it.
Back in covid times, I didn't think AI could really do anything outside of a lab, so I would not suspect I was talking to a computer. I would think I was talking to a person. That person would be very knowledgeable and able to answer a lot of questions. What could I possibly ask it that would give away that it wasn't real person? Lots of people can't answer simple questions, so there isn't really a way to ask it something specific that would work. I've had perhaps one interaction with AI that would make it obvious, in thousands of messages. (On that occasion, Claude started speaking Chinese with me, super weird.)
Another thing that I hear from time to time is an argument along the line of "it just predicts the next word, it doesn't actually understand it". Rather than an argument against AI being intelligent, isn't this also telling us what "understanding" is? Before we all had computers, how did people judge whether another person understood something? Well, they would ask the person something and the person would respond. One word at a time. If the words were satisfactory, the interviewer would conclude that you understood the topic and call you Doctor.
Robots won't go get food for your sick, dying friend.
Perhaps when we deliver food to our sick friend we subconsciously feel an "atta boy" from our parents who perhaps "trained" us in how to be kind when we were young selfish things.
Obviously if that's all it is we could of course "reinforce" this in AI.
> Which the LLMs will never be
I'd argue LLMs will never be anything, they're giving you the text you're asking for, nothing more and nothing less. You don't tell them "to be" empathic and caring? Well, they're not gonna appear like that then, but if you do tell them, they'll do their best to emulate that.
When people start studying theory of mind, someone usually jumps in with this thought. It's more or less a description of Functionalism (although minus the "mental state"). It's not very popular because most people can immediately identify a phenomenon of understanding separate from the function of understanding. People also have immediate understanding of certain sensations, e.g. the feeling of balance when riding a bike, sometimes called qualia. And so on, and so forth. There is plenty of study on what constitutes understanding, and most healthily dismiss the "string of words" theory.
> The entire article is saying "it looks kinds like a human in some ways, but people are being fooled!"
> You can't really say that without at least attempting the admittedly very deep question of what an authentic human is.
> To me, it's intelligent because I can't distinguish its output from a person's output, for much of the time.
I think the article does address that rather directly, and that it is also addressing very specifically your sentence about what you can and can't distinguish.
LLMs are not capable of symbolic reasoning[0] and if you understand how they work internally, you will realize they do no reasoning whatsoever.
Humans and many other animals are fully capable of reasoning outside of language (in the former case, prior to language acquisition), and the reduction of "intelligence" to "language" is a category error made by people falling victim to the ELIZA effect[1], not the result of a sum of these particular statistical methods equaling real intelligence of any kind.
Despite the citation. I think this is still being studied. And others have found some evidence that it forms internal symbols.
https://royalsocietypublishing.org/doi/10.1098/rsta.2022.004...
Or maybe, one could say, an LLM can do symbolic reasoning, but can it do it very well? People forget that humans are also not great at symbolic reasoning. Humans also use a lot of kludgy hacks to do it; it isn't really that natural.
An example often used is that it doesn't do math well. But humans also don't do math well. The way humans are taught to do division and multiplication really is a little algorithm. So what would be the difference between a human following an algorithm to do a multiplication, and an LLM calling some Python to do it? Does that mean it can't symbolically reason about numbers? Or that humans also can't?
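(To make the "it really is a little algorithm" point concrete, here's a toy sketch of grade-school long multiplication; purely illustrative, and essentially the same procedure taught by hand.)

```python
# Toy version of the grade-school multiplication algorithm referenced above:
# digit-by-digit partial products with carries, exactly as taught by hand.
def long_multiply(a: int, b: int) -> int:
    a_digits = [int(d) for d in str(a)][::-1]   # least-significant digit first
    b_digits = [int(d) for d in str(b)][::-1]
    result = [0] * (len(a_digits) + len(b_digits))
    for i, da in enumerate(a_digits):
        carry = 0
        for j, db in enumerate(b_digits):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10          # keep one digit in place
            carry = total // 10                 # carry the rest
        result[i + len(b_digits)] += carry
    return int("".join(map(str, result[::-1])))

assert long_multiply(137, 42) == 137 * 42
```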
One school of thought is - the output is indistinguishable from what a human would produce given these questions.
Another school of thought is - the underlying process is not thinking in the sense that humans do it
Both are true.
For the lay person, calling it thinking leads to confusions. It creates intuitions that do not actually predict the behavior of the underlying system.
It results in bad decisions on whether to trust the output, or how to allocate resources - because of the use of the term thinking.
Humans can pass an exam by memorizing previous answer papers or just memorizing the text books.
This is not what we consider having learnt something. Learning is kinda like having the Lego blocks to build a model you can manipulate in your head.
For most situations, the output of both people is fungible.
Both people can pass tests.
But then we must come up with something other than opening up the LLM to look for the "model generating structure" or whatever you want to call it. There must be some sort of experiment that shows you externally that the thing doesn't behave like a modelling machine might.
I think maybe it makes sense for people who already have the building blocks in place and just require seeing it assembled.
The question is, what's wrong with that?
At some level there's a very human desire for something genuine and I suspect that no matter the "humanness" of an AI, it will never be able to close that desire for genuine. Or maybe... it is that people don't like the idea of dealing with an intelligence that will almost always have the upper hand because of information disparity.
You call a Doctor 'Doctor' because they're wearing a white coat and are sitting in a doctor's office. The words they say might make vague sense to you, but since you are not a medical professional, you actually have no empirical grounds to judge whether or not they're bullshitting you, hence you have the option to get a second or third opinion. But otherwise, you're just trusting the process that produces doctors, which involves earlier generations of doctors asking this fellow a series of questions with the ability to discern right from wrong, and grading them accordingly.
When someone can't tell if something just sounds about right or is in fact bullshit, they're called a layman in the field at best or gullible at worst. And it's telling that the most hype around AI is to be found in middle management, where bullshit is the coin of the realm.
That process is done purely by language, but we supposed that inside you there is something deeper than a token prediction machine.
With that distinction in mind, whether an LLM-based chatbot’s output looks like human output does not answer the question of whether the LLM is actually like a human.
Not even because measuring that similarity by taking text output at a point in time is laughable (it would have to span the time equivalent of human life, and include much more than text), but because LLM-based chatbot is a tool built specifically to mimic human output; if it does so successfully then it functions as intended. In fact, we should deliberately discount the similarity in output as evidence for similarity in nature, because similarity in output is an explicit goal, while similarity in underlying nature is a non-goal, a defect. It is safe to assume the latter: if it turned out that LLMs are similar enough to humans in more ways than output, they would join octopus and the like and qualify to be protected from abuse and torture (and since what is done to those chatbots in order for them to be useful in the way they are would pretty clearly be considered abuse and torture when done to a human-like entity, this would decimate the industry).
That considered, we do not[0] know exactly how an individual human mind functions to assess that from first principles, but we can approximate whether an LLM chatbot is like a human by judging things like whether it is made in a way at all similar to how a human is made. It is fundamentally different, and if you want to claim that human nature is substrate-independent, I’d say it’s you who should provide some evidence—keeping in mind that, as above, similarity in output does not constitute such evidence.
[0] …and most likely never could, because of the self-referential recursive nature of the question. Scientific method hinges on at least some objectivity and thus is of very limited help when initial hypotheses, experiment procedures, etc., are all supplied and interpreted by the very subject being studied.
This is a terrible write-up, simply because it's the "Reddit Expert" phenomenon but in print.
They "understand" things. It depends on how you're defining that.
It doesn't have to be in its training data! Whoah.
In the last chat I had with Claude, it just naturally arose that surrender flag emojis indicated how funny I thought the joke was - the more there were, the funnier. If there were plus symbol emojis on the end, those were score multipliers.
How many times did I have to "teach" it that? Zero.
How many other times has it seen that during training? I'll have to go with "zero" but that could be higher, that's my best guess since I made it up, in that context.
So, does that Claude instance "understand"?
I'd say it does. It knows that 5 surrender flags and a plus sign is better than 4 with no plus sign.
Is it absurd? Yes .. but funny. As it figured it out on its own. "Understanding".
------
Four flags = "Okay, this is getting too funny, I need a break"
Six flags = "THIS IS COMEDY NUCLEAR WARFARE, I AM BEING DESTROYED BY JOKES"
How is your comment any different?
And made the relevant point that I need know what you mean by "understanding"?
The only 2 things in the universe that know that 6 is the maximum number of white flag emojis for jokes, and that it might be modified by plus signs, are ...
My brain, and that digital instance of Claude AI, in that context.
That's it - 2. And I didn't teach it, it picked it up.
So if that's not "understanding" what is it?
That's why I asked that first, example second.
I don't see how laying out logically like this makes me the "Reddit Expert", sort of the opposite.
It's not about knowing the internals of a transformer, this is a question that relates to a word that means something to humans ... but what is their interpretation?
You could have used "loool" vs "loooooool", "xDD" vs "xDDDDDDDDD", using flags doesn't change a whole lot.
These are the type of responses that REALLY will drive me nuts.
I never said the flag emojis were special.
I've been a software engineer for almost 30 years.
I know what Unicode code pages are.
This is not helpful. How is my example missing your definition of understanding?
Replace the flags with yours if it helps ... same thing.
It's not the flags it's the understanding of what they are. They can be pirate ships or cats.
In my example they are surrender flags, because that is logical given the conversation.
It will "understand" that too. But the article says it can't do that. And the article, sorry, is wrong.
For example, if you ask an LLM a question and it produces a hallucination, then you try to correct it or explain to it that it is incorrect, and it produces a near-identical hallucination while implying that it has produced a new, correct result, this suggests that it does not understand its own understanding (or pseudo-understanding, if you like).
Without this level of introspection, ascribing any notion of true understanding, intelligence, or anything similar seems premature.
LLMs need to be able to consistently and accurately say some variation on the phrase "I don't know" or "I'm uncertain." This indicates knowledge of self. It's like a mirror test for minds.
Both approaches are missing a critical piece: objectivity. They work directly with the data, and not about the data.
https://machinelearning.apple.com/research/illusion-of-think...
https://www.techrepublic.com/article/news-anthropic-ceo-ai-i... Anthropic CEO: “We Do Not Understand How Our Own AI Creations Work”. I'm going to lean with Anthropic on this one.
And even if we do know enough about our brains to say conclusively that it's not how LLMs work (predictive coding suggests the principles are more alike than not), it doesn't mean they're not reasoning or intelligent; it would just mean they would not be reasoning/intelligent like humans.
>These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all.
This seems quite a common error in the criticism of AI. Take a reasonable statement about AI not mentioning LLMs and then say the speaker (nobel prize winning AI expert in this case) doesn't know what they are on about because current LLMs don't do that.
DeepMind already has Project Astra, a model that is not just language but also visual and probably some other stuff, where you can point a phone at something and ask about it, and it seems to understand what it is quite well. Example here https://youtu.be/JcDBFAm9PPI?t=40
Operative phrase "seems to understand". If you had some bizarre image unlike anything anyone's ever seen before and showed it to a clever human, the human might manage to figure out what it is after thinking about it for a time. The model could never figure out anything, because it does not think. It's just a gigantic filter that takes known-and-similar images as input, and spits out a description on the other side, quite mindlessly. The language models do the same thing, do they not? They take prompts as inputs, and shit output from their LLM anuses based on those prompts. They're even deterministic if you take the seeds into account.
We'll scale all those up, and they'll produce ever-more-impressive results, but none of these will ever "understand" anything.
Out of curiosity, what sort of 'bizarre image' are you imagining here? Like a machine which does something fantastical?
I actually think the quantity of bizarre imagery whose content is unknown to humans is pretty darn low.
I'm not really well-equipped to have the LLMs -> AGI discussion, much smarter people have said much more poignant things. I will say that anecdotally, anything I've been asking LLMs for has likely been solved many times by other humans, and in my day to day life it's unusual I find myself wanting to do things never done before.
moreover, each layer of an llm imbues the model with the possibility of looking further back in the conversation and imbuing meaning and context through conceptual associations (that's the k-v part of the kv cache). I can't see how this doesn't describe, abstractly, human cognition. now, maybe llms are not fully capable of the breadth of human cognition or have a harder time training to certain deeper insight, but fundamentally the structure is there (clever training and/or architectural improvements may still be possible -- in the way that every CNN is a subgraph of a FCNN that would be nigh impossible for a FCNN to discover randomly through training)
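(For the curious, a toy version of that mechanism - a single attention step reading from a growing key/value cache - looks roughly like the following. This is a bare-bones sketch with random weights, not any particular model's implementation.)

```python
import numpy as np

# Minimal single-head attention with a KV cache (illustrative only).
# Each new token attends over the keys/values of everything before it,
# which is the "looking further back in the conversation" described above.

d = 8                      # toy embedding size
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
k_cache, v_cache = [], []  # grows by one entry per processed token

def step(x):
    """Process one token embedding x (shape [d]) against the cached context."""
    q = x @ Wq
    k_cache.append(x @ Wk)
    v_cache.append(x @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)            # similarity to every past token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the whole history
    return weights @ V                     # context-mixed representation

for token_embedding in np.random.randn(5, d):   # five fake "tokens"
    out = step(token_embedding)
print(out.shape)   # (8,) -- one vector that blends the entire history
```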
to say llms are not smart in any way that is recognizable is just cherry-picking anecdotal data. if llms were not ever recognizably smart, people would not be using them the way they are.
But, I can fire back with: You're making the same fallacy you correctly assert the article as making. When I see how a CPU's ALU adds two numbers together, it looks strikingly similar to how I add two numbers together in my head. I can't see how the ALU's internal logic doesn't describe, abstractly, human cognition. Now, maybe the ALU isn't fully capable of the breadth of human cognition...
It turns out, the gaps expressed in the "fully capable of the breadth of human cognition" part really, really, really matter. Like, when it comes to ALUs, they overwhelm any impact of the parts that look similar. The question should be: how significant are the gaps in how LLMs mirror human cognition? I'm not sure we know, but I suspect they're significant enough not to be written off as trivial.
So the AI today is the same AI I used last year. And based on the current trajectory, it's the same AI I will use next year.
The best bugs are the ones that aren't found for 5 years.
Future AIs will be more powerful, but probably influenced to push users to spend money or to hold a political opinion. So they may enshittify...
Ultimately these machines work for the people who paid for them.
It's like if we'd said the YouTube we used in 2015 was going to be the worst YouTube we'd ever use.
There are many reasons to believe LLMs in particular are not going anywhere fast.
We need major breakthroughs now, and “chain of thought” is not one.