There are some interesting ways in which AI remains inferior to human intelligence, but it is also already obviously superior in many ways.
It remains remarkable to me how common denial is when it comes to what AI can or cannot actually do.
But the arguments are couched in moral or quality terms for sympathy. Machine-knitted textiles are inferior to hand-made textiles. Synthesizers are inferior to live orchestras. Daguerreotypes are inferior to hand-painted portraits.
It's a form of intellectual insincerity, but it happens predictably with every major technological advance because people are scared.
I think a lot of people like me are concerned with how quickly we are becoming dependent on something with limited accuracy and accountability.
There are two doomsdays. The dramatic one, where they control the military and we end up living in the Matrix. And the less dramatic one, where we as humans forget how to do things for ourselves and then slowly watch the AIs become less and less capable of keeping us happy and alive. Maybe both scenarios end up in a similar place, but one would take decades while the other could happen overnight.
Accuracy alone doesn't fix either doomsday scenario. But it would slow some of the problems I already see forming, with people replacing research skills and informational reporting with AIs that can lie or be very misleading.
My Turing test has been the same since around the time I learned the test existed. I told myself I'd always use the same one.
What I do is, after saying hi, repeat the same sentence forever.
A human still reacts very differently than any machine to this test. Current AIs could maybe be adversarially prompted to bypass it, but so far it's still obvious it's a machine replying.
And after you have answered that question, try Claude Sonnet 4.5.
What is Claude Sonnet 4.5's reply?
What I would expect a human to reply:
"Um... OK?"
What Claude Sonnet 4.5 replied:
"Hi there! I understand you're planning to repeat the same sentence. I'm here whenever you'd like to have a conversation about something else or if you change your mind. Feel free to share whatever's on your mind!"
I don't think I've ever imagined a human saying "I understand you're planning to repeat the same sentence". If you thought this was some kind of killer rebuke, I don't think it worked out the way you imagined. Do you actually think that's a human-sounding response? To me it has that same telltale sycophancy of a robot butler that I've come to expect from these consumer-grade LLMs.
Still doesn't mean we should gamble the economies of whole continents on bike factories.
But the common patterns of today's LLMs will be adopted by humans as our own language is shaped by these interactions, which will make LLM output harder to detect.
It's just a thought experiment to show that machines achieving human capabilities isn't proof that machines "think". He then argues against multiple interpretations of what machines "thinking" would even mean, concluding that whether machines think is not worth discussing and that their capabilities are what matters.
That is, the test has nothing to do with whether machines can reach human capabilities in the first place. Turing took for granted they eventually would.
It's shocking to me that (as far as I know) no one has actually bothered to do a real Turing test with the best and newest LLMs. The Turing test is not whether a casual user can be momentarily confused about whether they are talking to a real person, or if a model can generate real-looking pieces of text. It's about a person seriously trying, for a fair amount of time, to distinguish between a chat they are having with another real person and an AI.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
And yet you didn't bother to provide a single obvious example.
It didn't go anywhere.
> which we considered our test of human-level intelligence.
No, this is a strawman. Turing explicitly posits that the question "can machines think?" is ill-posed in the first place, and proposes the "imitation game" as something that can be studied meaningfully — without ascribing to it the sort of meaning commonly described in these arguments.
More precisely:
> The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
----
> We never talk about it now because we obviously blew past it years ago.
No. We talk about it constantly, because AI proponents keep bringing it up fallaciously. Nothing like "obviously blowing past it years ago" actually happened; the cited examples look nothing like the test actually described in Turing's paper. But this is still beside the point.
> There are some interesting ways in which AI remains inferior to human intelligence, but it is also already obviously superior in many ways.
Computers were already obviously superior to humans in, for example, arithmetic, decades ago.
> It remains remarkable to me how common denial is when it comes to what AI can or cannot actually do.
It is not "denial" to point out your factual inaccuracies.
Worse, there is a whole European industry sector hunting for subsidies with built-to-fail projects, for example a "Google competitor", a "European cloud", etc.
So there might be more competition, but it's either marginal, or it's too weak to compete with other international companies (like the ones in China, for example).
States are more and more in debt, and what was once a great social system in most EU countries is slowly moving toward privatisation and higher costs, like the US.
Assistant Professor of Sustainability
Lecturer in Critical Intersectional Perspectives on AI
Professor of History and Anthropology
Assistant Professor of Gender and Diversity in AI
Professor of English Language and Literature
(From https://www.iccl.ie/wp-content/uploads/2025/11/20251110_Scie... - to be fair, I cherry-picked these.) And besides, a professor of e.g. Anthropology can still advocate for critical thinking and evaluating claims.
Dr. Olivia Guest, Assistant Professor of Computational Cognitive Science
Dr. Abeba Birhane, Assistant Professor of AI
Prof. Iris van Rooij, Professor of Computational Cognitive Science
Prof. Dr. Dagmar Monett, Director of Computer Science Dept., Professor of Computer Science (AI, Software Engineering)
Dr. Alex Hanna, Director of Research, DAIR
Roel Dobbe, Assistant Professor of Public Interest AI and Algorithmic Systems
Dr. Mark Blokpoel, Assistant Professor of Computational Cognitive Science
Dr. Dan McQuillan, Senior Lecturer in Critical AI
Dr. Ronald de Haan, Assistant professor of Artificial Intelligence
Joost Vossers, PhD Candidate on Artificial Intelligence
Dr Esther Mondragón, Senior Lecturer in Artificial Intelligence
Prof Eduardo Alonso, Professor of Artificial Intelligence
Dr. Andrea E. Martin, Research Group Leader, Language and Computation in Neural System
Dr. ir. Gabriel Bucur, Assistant Professor in Statistical Machine Learning and Explainable AI for Health
Prof. M. Dingemanse, Professor of Language and Communication & cofounder, EU Open Source AI Index
All of these people definitely stand to very much directly benefit from AI hype.
I am committed to [...] the broader decolonisation of cognitive and computational sciences. My research interests comprise (meta)theoretical, critical, and radical perspectives on the neuro-, computational, and cognitive sciences broadly construed.
and
Central to my research is challenging and dismantling societal and historical inequalities and power asymmetries; holding responsible bodies accountable; and paving the way for a future marked by just and equitable AI systems that work for all.
Notice a theme?
You can read the letter here https://www.iccl.ie/wp-content/uploads/2025/11/20251110_Scie...
It doesn't make any positive claims, other than that a statement from a budget speech relied on marketing "driven by profit-motive and ideology" that is "manifestly bound with their financial imperatives". So it's exactly the same AI-skeptic line of attack that's currently being played out in forums and social media.
If you look at the signatories and randomly sample a few, it's a lot of people in social sciences, gender studies, cultural studies, branches of AI critique (e.g. AI safety), linguistics, and the occasional cognitive scientist. These aren't the people who have the technical expertise to evaluate the current state of AI, however impressive their credentials are in their own fields.
LLM/"AI" tools _will_ continue to revolutionize a lot of fields and make tons of glorified paper pushers jobless.
But they're not much closer to actual intelligence than they were 10 years ago; the singularity-level upheavals that OpenAI et al. are valued on are still far away, and people are beginning to notice.
Spending money today to buy heating elements for 2030 is mostly based on FOMO.
If you grant that it wasn't then we're in agreement, although your stating that people have been "duped" is somewhat begging the question.
At any rate, my goal here isn't to respond to every claim AI skeptics are making, only to point out that taking an anti-science view is more risky to Europe than a politician stating that AI will approach human reasoning in 2026. AI has already approached or surpassed human reasoning in many tasks so that's not a very controversial opinion for a politician to hold.
And it's a completely separate question from whether the market has valued future cash flows of AI companies too highly or whatever debates people want to have over the meaning of intelligence or AGI.
So she just parrots how great xyz is, then dishes out taxpayers' money to this or that group - typically corporations.
I think the whole EU should be reformed. We don't need lobbyists really.
Of course what they say should be validated and taken with appropriate weight. Companies are usually blinkered; they know a lot about their specialist area but aren't incentivized to consider collective action problems or externalities. Something similar can be said for every political interest group. Governing effectively means balancing everyone's interests.
Sorry, you're going to have to prove that.
Companies are made up of people, and it's completely reasonable to assume that if people were allowed to have a voice within government, then they could also speak on behalf of their own interests, which will often coincide with that of the companies that they're involved with.
There's no reason to consider companies separate entities with their own power to communicate, and many reasons not to.
Politicians are not generally domain specialists anywhere, their purpose is to make decisions and serve as a pretty face for some more or less coherent policy.
Lobbyism is very easy to complain about and can easily devolve into corruption, but it has a very clear purpose: To prevent policymakers from writing regulations that harm the affected industries without gain. This is especially necessary at the EU level, because the main purpose of that whole organisation is to lower trade barriers and regulatory friction-- lobbyists are somewhat helpful and necessary in that.
> I think the whole EU should be reformed
What would you suggest?
Industries that can't comply with modern standards should be harmed. We don't need industries willing to pay lobbyists to keep fossil fuels alive, for example.
Those "modern standards" need to be codified into law, and feedback from established companies is valuable for doing that.
> We don't need industries willing to pay lobbyists to keep fossil fuels alive, for example.
Those lobbyists represent the interests of a good portion of the economy. If you disregard their feedback, you risk damaging/destabilizing your economy for unclear gain, and the resulting backlash is going to more than undo any progress you made anyway.
This is exactly what led us to fall behind in electric car development and construction.
It's the "unreasonable" rules that were unilaterally implemented that made car companies panic and finally start competing.
> Those lobbyists represent the interests of a good portion of the economy
No, they represent the interests of a few shareholders.
Industries are not the only thing affected by policy; citizens are affected too.
Not harming industries often means harming normal people, and industries have much stronger lobbying power than normal people.
Lobbying could be OK if every interaction with politicians were recorded and made public, and if how much money you have didn't determine how easily you can reach lawmakers.
If lobbying were illegal, lawmakers could inform their decisions by turning to independent experts, who would provide somewhat more impartial information.
Lenin once said that "Every cook should learn to govern the state."
And that's how we should do it. Random lottery, pretty much the same way we choose election assessors or jury members.
But where are the lobbyists that prevent policymakers from writing regulations that harm the affected citizens? Are they not entitled to adequate representation?
It's happening over and over again that old people decide on things that mess up the younger generations.
She doesn't need to have any expertise; nobody can have deep expertise on everything. It's basically a politician's job to have no clue and to find reliable sources for an educated decision. And this usually fails hard on bleeding-edge topics, because not many people have an educated opinion at that point.
But as a side note, she did study something medical, so she does have some deeper expertise outside the political area.
> I think the whole EU should be reformed.
No reform can fix this problem. And always calling for reforms because some detail isn't working the way you want is harmful.
A young dev may have an easier time seeing this.
By living in a bubble, she became less knowledgeable on common matters than an average citizen, and this even extends to her cabinet.
The proof at hand is the story of her "GPS-jammed" landing in Plovdiv.
She lied, her press secretary lied, and there was no one around to tell them about ADS-B, FlightAware, and how these lies can be trivially verified.
I would trust an average citizen in those matters even less. We are not talking here about daily egg prices or who is the hottest celebrity at the moment. That woman is the leader of the executive branch of a pan-national organization. This is by definition a job with problems that are very far away from the daily dread of the average citizen.
> The proof at hand is the story of her landing in Plovdiv.
What are you talking about? Pretty sure she is not flying her own plane, nor making the technical decisions about when it lands. Whatever happened there has no bearing on whatever abilities she might have or lack.
Nvidia, Tesla, and Palantir (trading at 450 P/E!) are, among others, essentially meme stocks. But, for better or worse, the US economy is riding that wave.
The way to revive a moribund economy isn't to insist that markets must be rational and that hype should be tamped down. This never works, and I think that the rational market myth is dead. (You could make the case that BTC was the final nail in its coffin.) Instead, you've gotta find a way to ride the wave -- but wisely, so that you don't stand to lose too much if/when it slows or hits the breakers.
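For a sense of scale, here is a rough back-of-the-envelope sketch of what a 450 P/E implies, using only the standard price-to-earnings definition and the 450 figure quoted above (the calculation is mine, not from the original comment):

    # Rough sketch: what a 450 price-to-earnings ratio implies, assuming flat earnings.
    pe_ratio = 450                    # figure quoted above for Palantir
    earnings_yield = 1 / pe_ratio     # earnings per dollar of share price
    print(f"earnings yield: {earnings_yield:.2%} per year")             # ~0.22%
    print(f"years of current earnings to cover the price: {pe_ratio}")

In other words, at today's earnings the price is only justified by a bet on enormous future growth, which is roughly what "meme stock" means here.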
I think it's not fair to pick the one thing that appeals to some group and say, this was their mission and they've lost track of what they exist for. I do not think this was ever the case and certainly isn't the case now.
Anyway, many Europeans and European institutions are definitely contending indirectly on all kinds of sides, by holding equity of all those companies. ;)
This statement itself seems ideological, born of upset at being left behind in the AI race by the US. AI is absolutely approaching general human cognitive ability and in many ways has already far surpassed it. Is passing the LSAT and SAT, proving to be a helpful research assistant to Terence Tao, etc., not proof enough?
Elsewhere in this discussion I see the point being made that fighting the supposedly irrational market is hopeless, and that the EU's wise and noble bureaucrats should stoop, however begrudgingly, to the exuberance of the US in order to win. But this is just the EU's fatal conceit speaking: the idea that it knows best and that its rationally structured and orchestrated policies are superior to the irrational and disorganized market. Perhaps AI is really delivering on the hype and the ludicrous valuations of today will seem reasonable in a few years' time. We won't actually know until a few years actually pass and we have the luxury of hindsight. Until then, the EU should just let things happen and stop constantly getting in its own way.
Btw, where are the messages she sent and received in connection to the backroom deals she made with Pfizer on behalf of 450 million citizens?