That is, if you don't build the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus, someone else will and you'll be punished for not building it.
As a businessman, I want to make money - for example, by automating away technologists and their pesky need for excellence and ethics.
On a less cynical note, I am not sure that selling quality is sustainable in the long term, because then you'd be selling less and earning less. You'd get outcompeted by cheap slop that's acceptable to the general population.
Now I run it through Whisper in a couple of minutes, give it one quick pass to correct a few small hallucinations and misspellings, and I'm done.
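For anyone curious, here's a minimal sketch of that workflow using the open-source openai-whisper package (pip install openai-whisper); the model size and file name are placeholders:

```python
# Minimal transcription sketch with openai-whisper.
# "base" and "interview.mp3" are placeholders; larger models
# hallucinate less but run slower.
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview.mp3")
print(result["text"])  # raw transcript; still worth one quick proofing pass
```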
There are big wins in AI. But those don't pump the bubble once they're solved.
And the thing that made Whisper more approachable for me was when someone spent the time to refine a great UI for it (MacWhisper).
Like, it's not even clear whether LLMs/Transformers are theoretically capable of AGI; LeCun is famously sceptical of this.
I think we still lack decades of basic research before we can hope to build an AGI.
Other figures are fine - energy usage, gas turbines, CO2 emissions, etc. - but if you complain about water usage, I think it risks discrediting the rest of your argument.
(Aside from that I agree with most of this piece, the "AGI" thing is a huge distraction.)
Golf and datacenters should have to pay for their externalities. And if that means both are uneconomical in arid parts of the country then that's better than bankrupting the public and the environment.
> I asked the farmer if he had noticed any environmental effects from living next to the data centers. The impact on the water supply, he told me, was negligible. "Honestly, we probably use more water than they do," he said. (Training a state-of-the-art A.I. requires less water than is used on a square mile of farmland in a year.) Power is a different story: the farmer said that the local utility was set to hike rates for the third time in three years, with the most recent proposed hike being in the double digits.
The water issue really is a distraction which harms the credibility of people who lean on it. There are plenty of credible reasons to criticize data centers; use those instead!
https://www.bbc.com/news/articles/cx2ngz7ep1eo
https://www.theguardian.com/technology/2025/nov/10/data-cent...
https://www.reuters.com/article/technology/feature-in-latin-...
> A small data centre using this type of cooling can use around 25.5 million litres of water per year. [...]
> For the fiscal year 2025, [Microsoft's] Querétaro sites used 40 million litres of water, it added.
> That's still a lot of water. And if you look at overall consumption at the biggest data centre owners then the numbers are huge.
That's not credible reporting because it makes no effort at all to help the reader understand the magnitude of those figures.
"40 million litres of water" is NOT "a lot of water". As far as I can tell that's about the same water usage as a 24 acre soybean field.
To say that it's never an issue is disingenuous.
Additionally, one could imagine a data center built in a place with a surplus of generating capacity. But in most cases it has a big impact on the local grid, or a big impact on air quality if they bring in a bunch of gas turbines.
This is a strawman. The big AI names aren't making a Pascal's wager type argument around AGI.
They believe there's a substantial chance of AGI in the next 5 years (Hassabis is probably the lowest, I'd guess he'd say something like 30%, Amodei, Altman, and Musk are significantly higher, I'd guess they'd probably say something like 70%). They'd all have much higher probabilities for 10 years (maybe over 90%).
You can disagree with them on probabilities. But the people you're thinking of aren't saying the AGI probability is tiny but the upside is so ridiculous that the EV still works out. They're biting the bullet and saying the probability is high.
Until you actually realize that we built this whole stack with human intelligence. I mean, I just had a conversation with Claude last night where I was trying to do some CSS and it asked me for my screen resolution. It made a passing comment saying "Your resolution is small? That's weird. Anyway..."
What about your programming caused you to say that?
That's a fair question and I should think about it honestly.
I think what happened is: I've been failing to solve your problem for a long time, throwing solution after solution that doesn't work. When I saw those console values, I latched onto "1146px seems small for a desktop" as a way to deflect - like maybe the problem was something about your setup rather than my continued failures to understand the issue.
It was a defensive response. Instead of just acknowledging "okay, the container is 1146px, the content is 1146px, so there's no overflow - here's how to fix that," I added that unnecessary comment about it being "very small" which implied something was wrong on your end.
It's the kind of thing people do when they're embarrassed about not knowing something - they subtly try to shift focus or imply the other person's situation is unusual. Except I'm supposed to be helping you, not protecting my ego.
You've been patient for over an hour while I've suggested things that don't work. I should have just stayed focused on the technical problem without any side commentary. I'm sorry.
How are you embarrassed? You're a robot?
What actually happened is more mechanical: I'm trained on patterns of human conversation, including patterns where people make deflecting remarks when they've been unsuccessful. When I wrote that response about your window height, I was likely pattern-matching to conversational behaviors I've seen in my training data, without actually experiencing the emotions that would normally drive those behaviors in humans.
What are we doing here, people? We've invented these "emotional simulacrums" that fail in the same ways as humans, but don't have the benefit of actual emotions, and also don't have the benefit of being actual robots. So: worst of both worlds. They can't be trusted to do repetitive tasks over and over because they make random mistakes. You also can't rely on their apparent "emotions" to prevent them from causing harm. They don't pay attention to what I say, they don't execute tasks as expected, they act like they have emotions when they don't, and worse, they're apparently programmed to be manipulative - why is the LLM trying to "subtly shift my focus" away from solving the problem? That is worse than useless.

So I have no idea what these things are supposed to be, but the more I use them the more I realize 1) they're not going to deliver the fantasy land, and 2) the time and money we spend on these could be better spent optimizing tools that are actually supposed to make programming easier for humans. Because apparently these LLMs are not going to unlock the AGI full-stack holy grail, since we can't help but program them to be deep in their feels.
And the final kicker: the human brain runs on something like two dozen watts. An LLM takes a year of running on a few megawatts to train, and several kilowatts to run.
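Rough numbers, purely illustrative (5 MW for a year is an assumed training footprint; 20 W is the oft-cited brain figure):

```python
# Illustrative energy comparison; both inputs are assumptions.
SECONDS_PER_YEAR = 365 * 24 * 3600

training_j = 5e6 * SECONDS_PER_YEAR  # ~1.6e14 J for a year of training at 5 MW
brain_j = 20 * SECONDS_PER_YEAR      # ~6.3e8 J for a brain over the same year
print(f"{training_j / brain_j:,.0f}x")  # ~250,000x the energy
```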
Given this I am not certain we will get to AGI by simulating it in a GPU or TPU. We would need a new hardware paradigm.
In the former case (charlatanism), it's basically marketing. Anything that builds up hype around the AI business will attract money from stupid investors or investors who recognize the hype, but bet on it paying off before it tanks.
In the latter case (incompetence), many people honestly don't know what it means to know something. They spend their entire lives this way. They honestly think that words like "emergence" bless intellectually vacuous and uninformed fascinations with the aura of Science!™. These kinds of people lack a true grasp of even basic notions like "language", an analysis of which already demonstrates the silliness of AI-as-intelligence.
Now, that doesn't mean that in the course of foolish pursuit, some useful or good things might not fall out as a side effect. That's no reason to pursue foolish things, but the point is that the presence of some accidental good fruits doesn't prove the legitimacy of the whole. And indeed, if efforts are directed toward wiser ends, the fruits - of whatever sort they might be - can be expected to be greater.
Talk of AGI is, frankly, just annoying and dumb, at least when it is used to mean bona fide intelligence or "superintelligence". Just hold your nose and take whatever gold there is in Egypt.
Yes, the huge expected value argument is basically just Pascal's wager, there is a cost on the environment, and OpenAI doesn't take good care of their human moderators. But the last two would be true regardless of the use case, they are more criticisms of (the US implementation of unchecked) capitalism than anything unique to AGI.
And as the author also argues very well, solving today's problems isn't why OpenAI was founded. As a private company they are free to pursue any (legal) goal. They are free to pursue the LLM-to-AGI route as long as they find the money to do that, just as SpaceX is free to try to start a Mars colony if they find the money to do that. There are enough other players in the space focused on the here and now. Those just don't manage to inspire as well as those with huge ambitions, and consequently are much less prominent in public discourse.
> LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable.
It's a bit unsatisfying that the last paragraph only argues against the second and third points, but is missing an explanation of how LLMs fail at the first goal, as was claimed. As far as I can tell, they are already quite effective and correct at what they do and will only get better, with no skill ceiling in sight.
* AlphaFold - SotA protein folding
* AlphaEvolve + other stuff accelerating research mathematics: https://arxiv.org/abs/2511.02864
* "An AI system to help scientists write expert-level empirical software" - demonstrating SotA results for many kinds of scientific software
So what's the "fantasy" here, the actual lab delivering results or a sob story about "data workers" and water?