Second, that page is obviously meant to shed light on AI issues from many different viewpoints; it would have been a serious omission not to mention environmental concerns.
This is massively overstated. We ought to be more careful in performing such calculations.
> Alex de Vries is a Ph.D. candidate at VU Amsterdam and founder of the digital-sustainability blog Digiconomist. In a report published earlier this month in Joule, de Vries has analyzed trends in AI energy use. He predicted that current AI technology could be on track to annually consume as much electricity as the entire country of Ireland (29.3 terawatt-hours per year).
That's a lot of power, and the availability of electricity is already seen as a bottleneck in the development of AI models [2].
[1] https://spectrum.ieee.org/ai-energy-consumption
[2] https://www.tomshardware.com/tech-industry/artificial-intell...
The population of Ireland is about 0.06% of the world's population, btw.
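Quick sanity check on that figure (a rough sketch; the population numbers are approximate assumptions on my part, not from the report):

```python
# Rough population-share check (approximate ~2023 figures assumed).
ireland_pop = 5.1e6   # Ireland population, approx.
world_pop = 8.0e9     # world population, approx.

share_pct = ireland_pop / world_pop * 100
print(f"{share_pct:.2f}%")  # prints "0.06%"
```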
He didn’t though, did he? If you click through and read the report, it is quite clear in saying that this is a worst case scenario if you ignore things like it being unrealistic for Nvidia to even produce the requisite number of chips. It’s also based on the efficiency of 2023 models; he assumes 3 Wh per prompt when today it is 0.34 Wh.
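Back-of-envelope: rescaling the worst-case Ireland figure by that per-prompt efficiency gain. This is a toy calculation that assumes total inference energy scales linearly with per-prompt cost, which is an oversimplification, but it shows how sensitive the headline number is to the 2023 assumptions:

```python
# Toy rescaling of the 29.3 TWh/yr worst case by the cited per-prompt drop.
worst_case_twh = 29.3   # de Vries's worst-case annual estimate
wh_2023 = 3.0           # Wh per prompt assumed in the 2023 analysis
wh_today = 0.34         # commonly cited current Wh per prompt

rescaled_twh = worst_case_twh * wh_today / wh_2023
print(f"{rescaled_twh:.1f} TWh/yr")  # prints "3.3 TWh/yr"
```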
It also completely ignores the fact that AI use can displace greater energy use:
The carbon emissions of writing and illustrating are lower for AI than for humans
> Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts. […] the use of AI holds the potential to carry out several major activities at much lower emission levels than can humans.
— https://www.nature.com/articles/s41598-024-54271-x
It isn’t a bad thing if AI uses x Wh if not using AI uses 1500x Wh. Looking at the absolute usage without considering displacement only gives you half the picture.
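To make the displacement point concrete, here's a toy model using the paper's lower-bound ratio of 130x for text. The per-page human emissions figure is a placeholder assumption for illustration, not a number from the paper:

```python
# Toy displacement model: net emissions if AI writes a page at 1/130th
# of the human-equivalent CO2e (the paper's lower-bound factor for text).
human_g_per_page = 1000.0   # placeholder: human CO2e per page, grams
ratio = 130                 # lower-bound factor from the paper

ai_g_per_page = human_g_per_page / ratio
saved_g = human_g_per_page - ai_g_per_page
print(f"AI: {ai_g_per_page:.1f} g, net saving: {saved_g:.1f} g per page")
```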
Not only that, both keep getting less expensive in various ways. Small models, though still not cheap to train, are making inference vanishingly cheap.
But the huge amount of fresh water going to waste for cooling makes me very uncomfortable. In an ideal world it really should go the other way, with the heat from the data center used to desalinate salt water.
Why is that, may I ask? Always interested to learn about alternative viewpoints.
There’s no proof nor counter-proof that the human brain doesn’t work like that.
I'm surprised they didn't make a point that is especially painful for open-source projects: AI might reduce coding effort, but it increases review effort, and maintainers generally spend the majority of their time on review anyway. Making it easier to generate bad pull requests is seen as well-poisoning.
What shocks me most is that we have found something less useful than bitcoin mining. Remember all the articles about the environmental impact of bitcoin? That is peanuts compared to what the world's largest companies are building to power the next LLM.
Dumb or evil, either way they are not people to be celebrated.
So apart from the use of violence, the Luddites were essentially equivalent to modern tech culture. It's weird that they don't get more sympathy on HN.
Regardless, the complaints of the Luddites resonate because they would eventually apply to everyone, including the working class. It just happens that they came for the textile workers first.
You mean like Doctors and lawyers? Make no mistake. Laws are threats. You can tell yourself it's about some imagined Quality concern, but the extreme resource burden to enter the field creates a stark class boundary that is in general very hard to get past.
If anything, the Luddites actually understood what society was doing even if that same society said they weren't.
The only difference between a craftsman destroying a machine, and a lawyer or doctor erecting entry barriers through political means is the tools being employed at the time.
>Dumb or evil, either way they are not people to be celebrated.
I see them as neither dumb nor evil. I see them as wise, intelligent, motivated, and thoroughly realistic about the direction society was headed, recognizing that no one wanted to be on the hook for answering the question "what about me?"
It's a natural response from a class suddenly evicted from a metastable position in the social order. If anything should be vilified, it's the kind of rapid progress-at-all-costs, bundled with a lack of care for who gets left behind, that so characterizes the modern incarnation of tech-capitalism.
As opposed to the status quo at all costs, bundled with a lack of care for who gets to stay in the gutter?
They were just another special interest group that wanted to keep more of the pie for themselves, at the expense of people poorer than them. And they used violence to try to achieve that.
How about slower, incremental change with actual effort put into assessing the costs and downstream effects/externalities? I know, who has time for that? Someone else's problem, right?
>They were just another special interest group that wanted to keep more of the pie for themselves, at the expense of people poorer than them.
Ever heard the phrase "every accusation is an admission"? Often we end up projecting what we can't consciously confront onto the intents of others. It's too destructive to the conscious narrative, so it ends up externalized onto the "other" despite blossoming just as fruitfully in the self. You've demonstrated to me that you at least see the game, recognize how it's played, and understand the consequences of not playing it.
>And they used violence to try to achieve that.
They broke machines. And they were beaten, strike-broken, vilified, and abused by capital wielders who were, in point of fact, not much poorer than them, as they could afford the machines in question, and when faced with the opportunity to divest themselves of the burden of doing business with those pesky tradesmen, were only too happy to do so.
Statistical voyeurism is a capitalist's favorite pastime. All the sexy of number-go-up, none of the burden of the other side of the balance sheet.
The introduction where they claim LLMs are useless for software engineering is just incorrect. They are useful for many software engineering tasks. I do think that vibe coding is rubbish however, and more junior SWEs very regularly misuse LLMs to produce nonsense code.
The only substantive point is that the LLM may regurgitate pieces of proprietary training data; although it seems unlikely that it would be incorporated wholesale into the codebase in such a way that it matters or opens them up to liability.
I do question whether LLMs would even be useful for such a niche project -- but I think this should be left up to developers to figure out how it complements their workflows rather than ruling out all uses of LLMs.
EDIT: I want to point out that I think the Asahi Linux project is a jewel of engineering and is extremely impressive.
A small disclaimer: I am not an AI booster. I think LLMs do have issues, and one should be careful with them.
I've found that there's a large group of people who strongly dislike LLMs and claim they are totally useless or even pernicious. This is grounded in truth -- they are trained on copyrighted work, they are used for spreading misinformation, they can produce sh^t code. But some folks take a radical/extremist approach and dismiss them entirely as useless -- often without actually using LLM-powered tools in any meaningful way.
They are useless for many applications, but programming is not one of them. I think a blanket ban on LLMs has to be somewhat unfounded/radical because they do have practical applications in writing code. The tab-completion models are extremely useful for example.
For this more niche project, I would think LLMs might not be as useful as they are for others. That said, I still think there would be a variety of use cases where they could be helpful.
And yet, weather prediction works. Therefore, LLMs work?
On the rest, especially the confidently incorrect argument.. Not. so. much.
Firstly, models are stochastic parrots, but that truth is irrelevant because they're useful stochastic parrots.
Second, hallucinations and confidently incorrect outputs may yet be a thing of the past, so we should keep an open mind. It's possible that mechanistic interpretability research (a fancy term for "understanding what the model is thinking as it produces your response") will allow the UI to warn the user that uncertainty was detected in the model's response.
Unfortunately none of that matters because the IP point is a blocker. Bummer.
Yes, the article might have some wording issues, but for an operating-system project that inherently needs good security to choose not to allow AI-written code, and instead opt for "think before you write and fully understand what you are doing" -- I don't think that choice is invalid.
I wouldn’t want to get on a plane where half the core systems were written by hallucinating AI.
Fade_Dance•5h ago
Using that term throughout the article makes it hard to take seriously. I know nothing of this project, but right off the bat it seems like a project with little credibility just because of the tone used throughout.
There's no need to turn it into a full-on tirade against this set of technologies either. Is this an appropriate place to complain about Reddit comments?
Ironically, the author could well benefit from running this slop through an llm to make it more professional.
doesnt_know•4h ago
Personally I think the term is well deserved and am glad it continues to be popularised.
Fade_Dance•4h ago
_____________
As for the individual points:
The initial concerns about copyright are convincing.
The point about resource impact ending with "these resources would literally be better spent anywhere else" devolved into meaningless grandstanding. I wouldn't mind seeing a project take a stand because of environmental impact, but again it just ends up sounding like the author has a bone to pick rather than a genuine concern about the environment. If that's not the case, then that's a prime example of why tone matters in communication.
The Reddit comment paragraph where the author berates users for using LLMs on social media is just odd and out of place. Maybe better suited to the off-topic section of their community forum/discord.
And the last point I simply disagree with. Highly knowledgeable people in a field that requires precision use LLMs every day. It's a tool like any other. I use it in financial trading (ex: it's great for scanning reams of SEC filings and earnings report transcripts), I know others who use it successfully in trading, and I know firms like Jane Street have it deeply integrated in their process.
RUnconcerned•4h ago
Have you considered that it is not the intent of the author to appear professional? That running it through the Slop Generator would obfuscate their intent to be snarky towards those who outsource all their thinking to Slop Generators?
JimDabell•4h ago
They have no obligation to sound professional but if they intended to appear incredibly childish then they succeeded, and people will judge them accordingly.
electricboots•4h ago
The author's lack of professionalism is a reasonable counter to the completely unhinged mainstream takes on AI/LLMs that we hear daily.
I think the Reddit example provides useful, generally relatable context, otherwise missing to the average reader.
My opinions are not to detract from the use of the tech or engineers working in the space, but motivated by a disgust for the hype.
Fade_Dance•2h ago
Marketing is completely ridiculous when it comes to this topic, but when isn't that the case with the next shiny thing? They even extolled the life-changing virtues of 3D TVs for one of the cycles.
I honestly hear far more unhinged AI-doomer stuff and constant pessimism (which makes me sort of sad; after all, it is cool tech that will do a lot of new things) than AI sycophancy. Do you not? If so, where? Granted, this is a US perspective, where there is currently a deep-seated pessimistic undercurrent about just about everything right now.
tonypapousek•4h ago
It's an impressive one, to say the least. It's worth taking a closer look and weighing the excellence created by the human mind before completely dismissing the article's arguments.
> Ironically, the author could well benefit from running this slop through an llm to make it more professional.
True, that would effectively strip out all the heart and soul from the prose.
nurettin•4h ago
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
Reading the first two vulnerability reports makes it very clear.