> "We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly," he adds. "We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren’t going to stay like they are."
I agree with Hinton: We only know that things will change, not how they will change. We can only make guesses.
Anyone claiming to know with certainty is full of baloney.
Your comment would be fine without that first bit.
By narrow measures of outcome, AI synthesises answers that meet questioners' needs. I think intelligence includes aspects of a system's behaviour (such a word) that go beyond simply providing answers. I don't think AI can do this yet, if it ever will.
That very phrasing belies the problem with the word: There is no consensus on what intelligence is, what a clear test for it would be or whether such a test could even exist. There are only people on the internet with personal theories and opinions.
So when people say AI is not intelligent, my next questions are whether rocks, trees, flies, dogs, dolphins, humans and “all humans” are intelligent. The person will answer yes/no immediately in a tone that makes it sound like what they’re saying must be obvious, and yet their answers frequently do not agree with each other. We do not have a consensus definition of intelligence that can be used to include some things and exclude others.
The fact that there are degrees of intelligence (dogs > flies) isn't that big of an issue, imo. It's logically the night/day argument: just because we can't point to a clear cut-off between the two concepts doesn't mean they aren't distinct concepts. The same goes for intelligence. It doesn't require consensus, just the same way that "is it night now?" doesn't require consensus.
If there's one thing I've found never came true for me, it's almost any sentence of substantive opinion about "philosophy" which starts with "I think we'll agree"
And I do think this AI/AGI question is a philosophy question.
I don't know if you'll agree with that.
Whilst your analogy has strong elements of "consensus not required", I am less sure that applies right now to what we think about AI/AGI. I think consensus is pretty important, and also absent.
At what point does a human become intelligent? Is a 12-cell embryo intelligent? Is a newborn intelligent? Is a 1-year-old intelligent?
> It's the logically night is day argument - just because we can't point to a clear cut off point
Um...what? There may be more than one of them, but precise definitions exist for the transitions between day and night. I think that is a very poor analogy to intelligence.
There are not just degrees of intelligence but different kinds. It is easier for us to understand and evaluate intelligence that is more similar to ours, and it becomes increasingly harder the more alien it becomes.
Given that, I don't see how you could reject that assertion that LLMs have some kind of intelligence.
Yes, we don't have clear definitions of intelligence, just like we don't for life, and many other fundamental concepts. And yet it's possible to discuss these topics within specific contexts based on a generally and colloquially shared definition. As long as we're willing to talk about this in good faith with the intention to arrive at some interesting conclusions, and not try to "win" an argument.
So, given this, it is safe to assert that we haven't invented artificial intelligence. We have invented something that mimics it very well, which will be useful to us in many domains, but calling this intelligence is a marketing tactic promoted by people who have something to gain from that narrative.
They're useful. They're not intelligent. He invited the reproach.
The conversation (about whether AI is “intelligent”) was already absurd, I’m just pointing it out ;)
The more important conversation is about whether AI is useful, dangerous, and/or worth it. If AI is competent enough at a task to replace a human for 1/10 the cost, it doesn’t really matter if it “has a mortal soul” or “responds to sensory stimuli” or “can modify its weights in real time”, we need to be talking about what that job loss means for society.
That’s my main frustration: that the “is it intelligent” debate devolves into pointless unsettleable philosophical questions and sucks up all the oxygen, and the actual things of consequence go undiscussed.
This is interesting in its own right, and has propelled the computing industry since it was proposed, but it's not a measurement of intelligence. The reality is that we don't have a good measurement of intelligence, and struggle to define it to begin with.
Original proposal:
"I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous [...] Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."
Clearly Turing is saying "we cannot precisely define what thinking means, so let's instead check if we can tell apart a human and a machine when they communicate freely through a terminal". It's not about fooling humans (what would be the point of it?) but about replacing the ambiguous question "can they think" with an operative definition that can be tested unambiguously. What Turing is saying is that a machine that passes the test is "as good as if it were thinking".
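A minimal sketch of that operative reading, with `judge_ask`, `judge_guess`, `human_reply` and `machine_reply` as hypothetical stand-ins for the real participants:

```python
import random

def imitation_game(judge_ask, judge_guess, human_reply, machine_reply, turns=5):
    """Turing's test as an operative procedure: the judge converses with
    two hidden parties over a terminal, then guesses which is the machine.
    All four callables are hypothetical stand-ins, not a real protocol."""
    machine_side = random.choice(["A", "B"])   # hide which side is the machine
    transcripts = {"A": [], "B": []}
    for _ in range(turns):
        for side in ("A", "B"):
            question = judge_ask(transcripts[side])
            reply = (machine_reply if side == machine_side else human_reply)(question)
            transcripts[side].append((question, reply))
    # The machine "passes" this round if the judge points at the wrong side.
    return judge_guess(transcripts) != machine_side
```

Note that nothing in the harness defines "thinking"; it only checks whether the two transcripts are distinguishable, which is exactly the substitution Turing proposed.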
> Machines have arguably been able to do this for decades.
Absolutely not, and it's surprisingly uninformed to claim so.
Whether AI has a more powerful effect than its predecessors remains to be seen. It could.
there is a ton more but you asked for one :)
An alternative view could be, this is just the same as every other technological innovation.
I can't help but compare his takes with Stuart Russell's, which are so well grounded, coherent and clearly presented. I often revisit Stuart Russell's discussion with Steven Pinker on AI for the clarity he brings to the topic.
Hinton published the seminal paper on backpropagation. He also invented Boltzmann machines, unsupervised learning methods and mixture-of-experts models. He championed machine learning for 20 years even though there was zero funding for it through the 80s and 90s. He was Yann LeCun's PhD adviser. That means Yann LeCun didn't know ass from tea kettle until Hinton introduced him to machine learning.
Know perchance a fellow by the name of Ilya Sutskever? ChatGPT ring any bells? Also a student of Hinton's. The list is very long.
I know the backprop paper. I read it in the early 2000s, and I remember Hinton as a co-author. Same with Boltzmann machines: co-author. "Advisor to that great guy", "teacher of this great guy", "Nobel prize together with that guy" <- all of this leads me to the above conclusion. YMMV
Frankly, this all sounds like hero worship and the language is very cringe.
"Frankly, I just want to be a contrarian"Do these historical accolades give him a blank check to be wrong in the present?
"sounds like he's a poster boy who rode on the success of others"
The person who wrote that didn't even bother checking who Hinton was before pulling that sentence out of their ass.
For context: he once argued AI could handle complex tasks but not drawing or music. Then when Stable Diffusion appeared, he flipped to "AI is creative." Now he's saying carpentry will be the last job to be automated, so people should learn that.
The pattern is sweeping, premature claims about what AI can or can't do that don't age well. His economic framing is similarly simplified to the point of being either trivial or misleading.
> "Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't yet looked down. It's just completely obvious that within five years deep learning is going to do better than radiologists. … It might be 10 years, but we've got plenty of radiologists already."
https://www.youtube.com/watch?v=2HMPRXstSvQ
This article has some good perspective:
https://newrepublic.com/article/187203/ai-radiology-geoffrey...
His words were consequential. The late 2010s were filled with articles that professed the end of radiology; I know at least a few people who chose alternative careers because of these predictions.
---
According to US News, radiology is the 7th best paying job in 2025, and the demand is rising:
https://money.usnews.com/careers/best-jobs/rankings/best-pay...
https://radiologybusiness.com/topics/healthcare-management/h...
I asked AI about radiologists in 2025, and it came up with this article:
https://medicushcs.com/resources/the-radiologist-shortage-ad...
The Radiologist Shortage: Rising Demand, Limited Supply, Strategic Response
(Ironically, this article feels spammy to me -- AI is probably being too credulous about what's written on the web!)
---
I read Cade Metz's book about Hinton and the tech transfer from universities to big tech ... I can respect him for persisting in his line of research for 20-30 years, while others said he was barking up the wrong tree
But maybe this late-life vindication led to a chip on his shoulder
The way he phrased this is remarkably confident and arrogant, and not like the behavior of a respected scientist (now with a Nobel Prize) ... It's almost like Twitter-speak that made its way into real life, and he's obviously not from the generation that grew up with Twitter
Of course, because you have different people all predicting a different future, some of them are bound to get it right. That doesn't mean the same person will be right again.
1. The medical world doesn't accept new technologies easily. Humans get a much bigger pass on bad performance than technology does, especially new technology. Things need to be extensively tested and certified, so adoption is slow.
2. AI is legally very different from a radiologist. The liability structure is completely different, which matters a lot in an environment that deals with life-or-death decisions.
3. Image analysis is not language analysis and generation. It is not the bit of machine learning that has advanced enormously in the past two years. General knowledge of the world doesn't help that much when the task is to look at pixels and determine whether it's cancer or not. This could be improved by integrating the image analysis with all the other possibly relevant information (case history etc.) and diagnosing the case via that route.
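A toy sketch of that last route, folding an image model's output together with non-image signals; the weights here are invented for illustration, not taken from any real diagnostic system:

```python
import math

def combined_score(image_prob: float, prior_history: bool, age: int) -> float:
    """Fold a pixel-level model's probability together with non-image
    signals via a simple logistic combination. All weights are made up."""
    logit = math.log(image_prob / (1.0 - image_prob))
    logit += 1.2 if prior_history else 0.0   # assumed weight for case history
    logit += 0.02 * (age - 50)               # assumed weight for age
    return 1.0 / (1.0 + math.exp(-logit))

# e.g. a 0.30 image-only score for a 68-year-old with prior history
print(round(combined_score(0.30, prior_history=True, age=68), 3))
```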
Russell is much more measured in his statements and much more policy driven.
In my mind you need to listen to both and try to figure out where they're coming from.
Ugh, scientism at its best (worst?). Do you also back up Watson's statements about race? I'm sure you don't, as that's not part of your training.
Accomplished researchers can say dumb things too, it happens all the time.
Some condensed source I found on the topic:
https://www.ing.com/Newsroom/News/The-more-famous-an-expert-...
Calling it "scientism" to care about these things as a way of dismissing the argument out of hand is anti-intellectualism at its worst.
Those are not arguments, that's scientism.
I upvoted you anyway, as you're at least trying.
I wonder if he's a HN commenter as well, in that case.
I do appreciate your mention of Stuart Russell however. I've recently been watching a few of his talks and have found them very insightful.
The issue is the increasing imbalance of capital being overvalued compared to labor, and how that has a negative impact on most individuals.
A statement like this from someone influential is important to break that narrative, despite the HN crowd finding it obvious.
Inequality has increased but it’s no longer clear that it’s as severe an increase as Piketty and Saez once argued. So, yes, things could certainly be much better. The US could, for example, benefit from a more progressive taxation and a stronger social safety net. But at the same time, we aren’t all headed to hell.
If I make 100 tokens and that buys me 100 food, that's better than making 1000 tokens that buys me 1 food.
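The same point as throwaway code, with the numbers from the comment above (nothing here is real economic data):

```python
def food_purchasable(income_tokens: float, food_price_tokens: float) -> float:
    """Real purchasing power: nominal income divided by the price level."""
    return income_tokens / food_price_tokens

print(food_purchasable(100, 1))      # 100.0 units of food
print(food_purchasable(1000, 1000))  # 1.0 unit of food
```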
https://en.wikipedia.org/wiki/Robot_tax
Make it 50% of the sales price like with cigarettes, since "AI" makes people dumber.
Rich, greedy people ruin everything.
Moreover, most of the rest of the world’s poverty exists so that a few greedy pigs here can be even more wealthy. We have the CIA and the one-party system that controls it to thank for that.
B2C (sell to people)
B2B (sell to B2C companies)
If the “C” is broke, it seems like there won't be any rich people. In other words, if the masses are poor and jobless, who is sending money to the rich?
What would they need it for? Remember, money is just the accounting of debt. Under the old world model, workers offer a loan to the rich — meaning that they do work for the rich and at some point in the future the rich have to pay that work back with something of equal value [e.g. food, gadgets, etc.].
But if in the new world the rich have AI to do the work they want done, the jobless masses can simply be cut out of the picture. No need to be indebted to them in the first place.
As one individual, you don't really owe anything to anyone. The only time you owe something is in social terms, when you borrow it in your name, or promise reward for work. And even then, people try to get out of paying things back, but in most cases, the courts, the police or the payees themselves get them to do it anyway.
If you own some land, and suddenly, you can get work on it done without giving almost anything in return (except electrical power), you don't owe anything to anyone. And if you can defend that land effectively, you don't physically need anyone else.
This concept of the social contract, where some abstract group of rich owes something to an abstract group of workers, is actually just a series of consequences that happened to a bunch of individuals when debts weren't paid. But if you're rich, so the consequences are no longer an issue, and you aren't motivated by something else (morals or empathy, for example), the social contract breaks down in your favor.
It's a good thing to remind oneself that social contracts don't maintain themselves; we need to maintain them.
The debt to the workers almost never goes unpaid. The workers quickly call the debt to get food and shelter in return.
More often the workers fail to repay their debts to the rich. This is how you get entities like Berkshire Hathaway or Apple sitting on mountains of money. That money is the symbol of the loans that were extended to the workers, with the workers not being able to offer equivalent value in return.
Even among the rich, holding money is unusual, though. They usually like to call the debt for something of real value (e.g. land) as well.
does not mean you cannot get rich by some other means
I was really surprised to hear a scientist like him, who knows how the tech works, go all-in on a Skynet AI scare.
Most revolutions (the Bolsheviks, Cuba, Iran, the Arab Spring, etc.) have made people significantly poorer, while most innovations have made people significantly richer (railroads, electricity, the first and second agricultural revolutions, manufacturing).
I assume it will be technologically possible to run a "medium LLM" (for lack of a better name) on your phone. A medium LLM is something that knows a limited vocabulary (say, slightly larger than Simple English), and perhaps doesn't remember the capital of France; but it can reason well within that limited vocabulary. So it can answer what the capital of France is by reading Wikipedia, and likewise, it can work with complicated words using their definitions in terms of simpler words.
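A rough sketch of how such a medium LLM could be wired up; `lookup` and `generate` here are hypothetical stand-ins, not real APIs:

```python
def lookup(query: str) -> str:
    """Fetch facts from an external source (e.g. Wikipedia); stubbed here."""
    facts = {"capital of France": "Paris is the capital and largest city of France."}
    return facts.get(query, "")

def answer(question: str, generate) -> str:
    """The model keeps a small vocabulary and no encyclopedic memory;
    it reasons over retrieved text instead of recalling facts.
    `generate` stands in for the small on-device model."""
    context = lookup(question)
    prompt = f"Using only this context:\n{context}\nAnswer briefly: {question}"
    return generate(prompt)
```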
Now, everybody would run an AI like that on their phone. It would help people solve real-world problems of navigating the world and talking to others. Most importantly, it would help people unite by surpassing the Dunbar number. If you (with the help of your phone AI) can keep track of 15 million contacts rather than 150, it is life-changing and increases trust-building by orders of magnitude. And soon, machines will be able to do that for us.
Socialists have always emphasized education and communication, for everyone, because these are the true constituents of sovereignty and emancipation. We have, I believe, the technological means (widely available universal computation and telecommunication) to provide a kind of extension of the mind that will surpass our neocortex and allow everyone to engage with a much larger number of people. A lot more human cooperation will result.
I think activists should look into building and embracing such an app: a decentralized communication frontend agent which would let you build trust with a large number of people without much effort, and help you by coordinating with them (really, learning their skills and their struggles from them). We don't need social media giants to do this for us in a centralized way.
So I posit this techno-anarchism as an antidote against techno-feudalism.
> “What’s actually going to happen is rich people are going to use AI to replace workers,” he says. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”
It's kind of curious how that would happen.
In the old days, if you wanted to maintain a monopoly, you could try to drain the talent pool so no one else could hire the best people to do the work you were doing, and you could also try a patent wall to delay your competitors from launching their products.
But if a worker can be replaced by an AI, it could also mean that the competitiveness of the work is significantly reduced, to the point that theoretically everybody can do it. The only way (I guess) to remain a monopoly is then to tighten control of the AI while optimizing the process to kill off all potential competitors, etc. It's all Red Ocean policies (https://www.wallstreetprep.com/knowledge/red-ocean-strategy/).
"Massive unemployment" maybe, but I don't think "huge rise in profits" is guaranteed.
Back then, you had people, who are hard to duplicate and can thus act as a barrier to entry. But AI is just a program, which can be copied with ease and runs on maybe-expensive but standardized hardware.
I'd prefer a second opinion from someone with credentials that aren't cosmetically related to the source material.