frontpage.

I think nobody wants AI in Firefox, Mozilla

https://manualdousuario.net/en/mozilla-firefox-window-ai/
428•rpgbr•1h ago•273 comments

AGI fantasy is a blocker to actual engineering

https://www.tomwphillips.co.uk/2025/11/agi-fantasy-is-a-blocker-to-actual-engineering/
222•tomwphillips•2h ago•181 comments

EDE: Small and Fast Desktop Environment

https://edeproject.org/
49•bradley_taunt•3h ago•18 comments

Honda: 2 years of ML vs. 1 month of prompting - here's what we learned

https://www.levs.fyi/blog/2-years-of-ml-vs-1-month-of-prompting/
134•Ostatnigrosh•4d ago•48 comments

Magit manuals are available online again

https://github.com/magit/magit/issues/5472
46•vetronauta•3h ago•7 comments

Incus-OS: Immutable Linux OS to run Incus as a hypervisor

https://linuxcontainers.org/incus-os/
17•_kb•1w ago•0 comments

Operating Margins

https://fi-le.net/margin/
190•fi-le•5d ago•66 comments

Show HN: Encore – Type-safe back end framework that generates infra from code

https://github.com/encoredev/encore
55•andout_•4h ago•39 comments

Writerdeck.org

http://www.writerdeck.org/
31•surprisetalk•1w ago•13 comments

Nvidia is gearing up to sell servers instead of just GPUs and components

https://www.tomshardware.com/tech-industry/artificial-intelligence/jp-morgan-says-nvidia-is-geari...
84•giuliomagnifico•2h ago•38 comments

Nano Banana can be prompt engineered for nuanced AI image generation

https://minimaxir.com/2025/11/nano-banana-prompts/
800•minimaxir•22h ago•204 comments

Scientists Produce Powerhouse Pigment Behind Octopus Camouflage

https://today.ucsd.edu/story/scientists-produce-powerhouse-pigment-behind-octopus-camouflage
31•gmays•4d ago•1 comment

Backblaze Drive Stats for Q3 2025

https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2025/
93•woliveirajr•2h ago•5 comments

Oracle hit hard in Wall Street's tech sell-off over its AI bet

https://www.ft.com/content/583e9391-bdd0-433e-91e0-b1b93038d51e
28•1vuio0pswjnm7•56m ago•9 comments

Don't turn your brain off

https://computingeducationthings.substack.com/p/22-dont-turn-your-brain-off
22•azhenley•2h ago•2 comments

RegreSQL: Regression Testing for PostgreSQL Queries

https://boringsql.com/posts/regresql-testing-queries/
111•radimm•8h ago•27 comments

Winamp for OS X

https://github.com/mgreenwood1001/winamp
65•hyperbole•3h ago•54 comments

A Common Semiconductor Just Became a Superconductor

https://www.sciencedaily.com/releases/2025/10/251030075105.htm
46•tsenturk•1w ago•22 comments

Wealthy foreigners 'paid for chance to shoot civilians in Sarajevo'

https://www.thetimes.com/world/europe/article/wealthy-foreigners-paid-for-chance-to-shoot-civilia...
46•mhb•1h ago•5 comments

What Happened with the CIA and The Paris Review?

https://www.theparisreview.org/blog/2025/11/11/what-really-happened-with-the-cia-and-the-paris-re...
141•benbreen•15h ago•67 comments

Disrupting the first reported AI-orchestrated cyber espionage campaign

https://www.anthropic.com/news/disrupting-AI-espionage
327•koakuma-chan•21h ago•255 comments

Launch HN: Tweeks (YC W25) – Browser extension to deshittify the web

https://www.tweeks.io/onboarding
297•jmadeano•23h ago•179 comments

V8 Garbage Collector

https://wingolog.org/archives/2025/11/13/the-last-couple-years-in-v8s-garbage-collector
84•swah•6h ago•21 comments

Show HN: European Tech News in 6 Languages

https://europedigital.cloud/en/news
13•Merinov•3h ago•15 comments

650GB of Data (Delta Lake on S3). Polars vs. DuckDB vs. Daft vs. Spark

https://dataengineeringcentral.substack.com/p/650gb-of-data-delta-lake-on-s3-polars
225•tanelpoder•18h ago•92 comments

How to Get a North Korea / Antarctica VPS

https://blog.lyc8503.net/en/post/asn-5-worldwide-servers/
168•uneven9434•14h ago•63 comments

OpenMANET Wi-Fi HaLow open-source project for Raspberry Pi–based MANET radios

https://openmanet.net/
134•hexmiles•18h ago•33 comments

Arrival Radar

https://entropicthoughts.com/arrival-radar
7•ibobev•3h ago•2 comments

Hooked on Sonics: Experimenting with Sound in 19th-Century Popular Science

https://publicdomainreview.org/essay/science-of-sound/
30•Hooke•9h ago•1 comment

Blender Lab

https://www.blender.org/news/introducing-blender-lab/
276•radeeyate•1d ago•49 comments

AGI fantasy is a blocker to actual engineering

https://www.tomwphillips.co.uk/2025/11/agi-fantasy-is-a-blocker-to-actual-engineering/
212•tomwphillips•2h ago

Comments

Etheryte•1h ago
Many big names in the industry have long argued that LLMs are a fundamental dead end. Many have also gone off and started companies to look for a new way forward. However, if you're hip-deep in stock options, with your reputation staked alongside them, you'll hardly want to break the mirage. So here we are.
fallingfrog•1h ago
I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it. The people working on AI are very smart and they will solve the associated challenges soon enough. The problem of how to slow down the development of these technologies- a political problem- is much more pressing right now.
fallingfrog•1h ago
By the way, downvoting me will not hurt my feelings, and I understand why you are doing it; I don't care if you believe me or not. In your position I would certainly think the same thing you are. It's fine. The future will come soon enough without my help.
random3•52m ago
Uncovering and tackling the deep problems of society starts making sense once you believe in, or see, the possibility of unlocking things. The idea that anything can be slowed down or accelerated may be faulty, though. What are the more pressing political problems you consider a priority?
chriswarbo•44m ago
> I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it.

Ever since "AI" was named at Dartmouth, there have been very smart people thinking that their idea will be the thing which makes it work this time. Usually, those ideas work really well in-the-small (ELIZA, SHRDLU, Automated Mathematician, etc.), but don't scale to useful problem sizes.

So, unless you've built a full-scale implementation of your ideas, I wouldn't put too much faith in them if I were you.

wild_egg•31m ago
They're a dead end for whatever their definition of "AGI" is, but still incredibly useful in many areas and not a "dead end" economically.
hoherd•29m ago
"It is difficult to get a man to understand something when his salary depends upon his not understanding it" and "never argue with a man whose job depends on not being convinced" in full effect.
red75prime•28m ago
> Many big names in the industry have long advocated for the idea that LLM-s are a fundamental dead end.

There should be papers on the fundamental limitations of LLMs, then. Any pointers? "A single forward LLM pass has TC0 circuit complexity" isn't exactly it; modern LLMs use CoT. Anything that invokes Gödel's incompleteness theorems proves too much (we don't know whether the brain is capable of hypercomputation, and most likely it isn't).

graphememes•1h ago
Okay, so come up with an alternative. It's math; you can write algorithms too.
Filligree•1h ago
I can’t test them, though.
gizajob•1h ago
Elon thinking Demis is the evil supervillain is hilariously backward and a mirror image of the reality.
captainbland•1h ago
"From my point of view the Jedi are evil!" comes to mind.
Cthulhu_•1h ago
That one struck me as... weird people on both ends. But this is Musk, who is deep into the Roko's Basilisk idea [0] (in fact, supposedly he and Grimes bonded over it), where AGI is inevitable, AGI will dominate like the Matrix and Skynet, and anyone who didn't work hard to make AGI a reality will be yote into the Torment Nexus.

That is, if you don't build the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus, someone else will and you'll be punished for not building it.

[0] https://en.wikipedia.org/wiki/Roko%27s_basilisk

danaris•30m ago
...or, depending on your particular version of Roko's Basilisk (in particular, versions that assume AGI will not be achieved in "your" lifetime), it will punish not you, yourself, but a myriad of simulations of you.

Won't someone think of the poor simulations??

dist-epoch•1h ago
Why not both.
ArcHound•1h ago
"As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment)."

As a businessman, I want to make money. E.g. by automating away technologists and their pesky need for excellence and ethics.

On a less cynical note, I am not sure that selling quality is sustainable in the long term, because then you'd be selling less and earning less. You'd get outcompeted by cheap slop that's acceptable to the general population.

geerlingguy•1h ago
I like the conclusion. For me, Whisper has radically improved closed captions on my video content. I used to spend a few hours translating my scripts into CCs, and the tooling was poor.

Now I run it through whisper in a couple minutes, give one quick pass to correct a few small hallucinations and misspellings, and I'm done.

There are big wins in AI. But those don't pump the bubble once they're solved.

And the thing that made Whisper more approachable for me was when someone spent the time to refine a great UI for it (MacWhisper).

sota_pop•1h ago
Not only Whisper: much of the computer vision area is similarly out of vogue. I suspect it's because the truly monumental solutions it unlocked are not that accessible to the average person, i.e. industrial manufacturing and robotics at scale.
schnitzelstoat•1h ago
I'm surprised the companies fascinated with AGI don't devote some resources to neuroscience - it seems really difficult to develop a true artificial intelligence when we don't know much about how our own works.

It's not even clear whether LLMs/transformers are theoretically capable of AGI; LeCun is famously sceptical of this.

I think we still lack decades of basic research before we can hope to build an AGI.

ambicapter•1h ago
Admitting you need to do basic research is admitting you're not actually <5 years from total world domination (so give us money now).
friendzis•1h ago
Why should they care as long as selling shares of a company selling access to a chatbot is the most profitable move?
csomar•1h ago
Many of the people in control of the capital are gamblers rather than researchers.
simonw•1h ago
Tip for AI skeptics: skip the data center water usage argument. At this point I think it harms your credibility - numbers like "millions of liters of water annually" (from the linked article) sound scary when presented without context, but if you compare data centers to farmland or even golf courses they're minuscule.

Other energy usage figures, air pollution, gas turbines, CO2 emissions etc are fine - but if you complain about water usage I think it risks discrediting the rest of your argument.

(Aside from that I agree with most of this piece, the "AGI" thing is a huge distraction.)

paulryanrogers•1h ago
Just because there are worse abuses elsewhere doesn't mean datacenters should get a pass.

Golf and datacenters should have to pay for their externalities. And if that means both are uneconomical in arid parts of the country then that's better than bankrupting the public and the environment.

simonw•1h ago
From https://www.newyorker.com/magazine/2025/11/03/inside-the-dat...

> I asked the farmer if he had noticed any environmental effects from living next to the data centers. The impact on the water supply, he told me, was negligible. "Honestly, we probably use more water than they do," he said. (Training a state-of-the-art A.I. requires less water than is used on a square mile of farmland in a year.) Power is a different story: the farmer said that the local utility was set to hike rates for the third time in three years, with the most recent proposed hike being in the double digits.

The water issue really is a distraction which harms the credibility of people who lean on it. There are plenty of credible reasons to criticize data centers; use those instead!

Etherlord87•1h ago
A farmer is a valuable perspective, but imagine asking a lumberjack about the ecological effects of deforestation: he might know more about it than the average Joe, but there are probably better people to ask for expertise.

> Honestly, we probably use more water than they do

This kind of proves my point. Regardless of the actual truth here, it's a terrible argument to make: water availability is becoming a huge problem in a growing number of places, and this statement implies that something which in principle doesn't need water at all uses a comparable amount to farming, which strictly depends on it.

simonw•58m ago
The author of the article followed the quote from the farmer with a fact-checked (this is the New Yorker) note about water usage for AI training.
belter•49m ago
> The water issue really is a distraction which harms the credibility of people who lean on it

Is that really the case? - "Data Centers and Water Consumption" - https://www.eesi.org/articles/view/data-centers-and-water-co...

"...Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people..."

"I Was Wrong About Data Center Water Consumption" - https://www.construction-physics.com/p/i-was-wrong-about-dat...

"...So to wrap up, I misread the Berkeley Report and significantly underestimated US data center water consumption. If you simply take the Berkeley estimates directly, you get around 628 million gallons of water consumption per day for data centers, much higher than the 66-67 million gallons per day I originally stated..."

simonw•37m ago
Also from that article:

> U.S. data centers consume 449 million gallons of water per day and 163.7 billion gallons annually (as of 2021).

Sounds bad! Now let's compare that to agriculture.

USGS 2015 report: https://pubs.usgs.gov/fs/2018/3035/fs20183035.pdf has irrigation at 118 billion gallons per day - that's 43,070 billion gallons per year.

163.7 billion / 43,070 billion * 100 = 0.38, i.e. less than half a percentage point.

It's very easy to present water numbers in a way that looks bad until you start comparing them thoughtfully.
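
As a quick sanity check in code (a minimal sketch; the figures are the ones quoted above, and the per-day to per-year conversion is mine):

  # Data-center water use vs. US irrigation, using the figures quoted above.
  datacenter_gal_per_year = 163.7e9      # US data centers, 2021 (EESI article)
  irrigation_gal_per_day = 118e9         # US irrigation (USGS 2015 report)
  irrigation_gal_per_year = irrigation_gal_per_day * 365

  share = datacenter_gal_per_year / irrigation_gal_per_year * 100
  print(f"Data centers: {share:.2f}% of irrigation's annual water use")
  # -> Data centers: 0.38% of irrigation's annual water use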

I think comparing data center water usage to domestic water usage by people living in towns is actually quite misleading. UPDATE: I may be wrong about this, see following comment: https://news.ycombinator.com/item?id=45926469#45927945

belter•20m ago
I am surprised by your analytical mistake of comparing irrigation water with data-center water usage...

They are not equivalent. Data centers primarily consume potable water, whereas irrigation uses non-potable or agricultural-grade water. Mixing the two leads to misleading conclusions on the impact.

simonw•14m ago
That's a really good point - you're right, comparing data center usage to potable water usage by towns is a different and more valid comparison than comparing with water for irrigation.
abathur•11m ago
Agriculture feeds people, Simon.

It's fair to be critical of how the ag industry uses that water, but a significant fraction of that activity is effectively essential.

If you're going to minimize people's concern like this, at least compare it to discretionary uses we could ~live without.

The data's about 20 years old, but for example https://www.usga.org/content/dam/usga/pdf/Water%20Resource%2... suggests we were using over 2b gallons a day to water golf courses.
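
Annualizing that for comparison with the data-center figure quoted upthread (a rough sketch; both inputs are numbers given in this thread, and the daily-to-annual conversion is mine):

  # Golf-course irrigation vs. US data centers, figures as quoted in thread.
  golf_gal_per_year = 2e9 * 365          # ~2B gallons/day -> ~730B gallons/year
  datacenter_gal_per_year = 163.7e9      # EESI figure quoted above

  print(f"Golf uses ~{golf_gal_per_year / datacenter_gal_per_year:.1f}x the water")
  # -> Golf uses ~4.5x the water of all US data centers combined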

Lerc•27m ago
What counts as data center water consumption here? There are many ways to arguably come up with a number.

Does it count water used for cooling only, or does it include the infrastructure that keeps it running (power generation, maintenance, staff use, etc.)?

Is this water evaporated, or moved from A to B and raised a few degrees?

jtr1•1h ago
I think the point here is that objecting to AI data center water use and not to, say, alfalfa farming in Arizona reads as reactive rather than principled. But more importantly, there are vast, imminent social harms from AI that get crowded out by water use discourse. IMO, the environmental attack on AI is more a hangover from crypto than a thoughtful attempt to evaluate the costs and benefits of this new technology.
danaris•46m ago
But if I say "I object to AI because <list of harms> and its water use", why would you assume that I don't also object to alfalfa farming in Arizona?

Similarly, if I say "I object to the genocide in Gaza", would you assume that I don't also object to the Uyghur genocide?

This is nothing but whataboutism.

People are allowed to talk about the bad things AI does without adding a 3-page disclaimer explaining that they understand all the other bad things happening in the world at the same time.

naasking•24m ago
Because your argument is more persuasive to more people if you don't expand your criticism to encompass things that are already normalized. Focus on the unique harms IMO.
roywiggins•1h ago
I don't think there's a world where a water use tax is levied such that 1) it's enough for datacenters to notice and 2) doesn't immediately bankrupt all golf courses and beef production, because the water use of datacenters is just so much smaller.
bee_rider•51m ago
We definitely shouldn’t worry about bankrupting golf courses, they are not really useful in any way that wouldn’t be better served by just having a park or wilderness.

Beef, I guess, is a popular type of food. I’m under the impression that most of us would be better off eating less meat, maybe we could tax water until beef became a special occasion meal.

roywiggins•47m ago
I'm saying that if you taxed water enough for datacenters to notice, beef would probably become uneconomical to produce at all. Maybe a good idea! But the reason datacenters would keep operating and beef production wouldn't is that datacenters produce way more utility per gallon.
iamacyborg•23m ago
> We definitely shouldn’t worry about bankrupting golf courses, they are not really useful in any way that wouldn’t be better served by just having a park or wilderness.

Might as well get rid of all the lawns and football fields while we’re at it.

fwip•6m ago
Water taxes should probably be regional. The price of water in the arid Southwest is much higher than toward the East coast. You might see both datacenters and beef production moving toward states like Tennessee or Kentucky.
heymijo•1h ago
You're not wrong.

My perspective is that of someone who wants to understand this new AI landscape in good faith. The water issue isn't the showstopper it's presented as; it's an externality, like you discuss.

And in comparison to other water usage, data centers don't match the doomsday narrative presented. Now when I see that narrative, I mentally discount it or stop reading.

Electricity, though, seems to be real, at least for the area I'm in. I spent some time with ChatGPT last weekend working to model an apples-to-apples comparison: my area has seen a 48% increase in electricity prices from 2023 to 2025. I modeled a typical 1,000 kWh/month of usage to see what that looked like in dollar terms, and it's an extra $30-40/month.
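
For what it's worth, that calculation is easy to reproduce; a minimal sketch, where the 2023 base rate is a hypothetical placeholder (only the 48% increase and the 1,000 kWh/month usage come from the comment above):

  # Toy version of the bill comparison. base_rate is assumed for illustration.
  base_rate = 0.073        # $/kWh in 2023 (hypothetical)
  increase = 0.48          # +48% from 2023 to 2025
  usage_kwh = 1000         # typical monthly usage

  extra = base_rate * increase * usage_kwh
  print(f"Extra cost: ${extra:.2f}/month")
  # -> Extra cost: $35.04/month, in the $30-40 range above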

Is it data centers? Partly yes, straight from the utility co's mouth: "sharply higher demand projections—driven largely by anticipated data center growth"

With FAANG money, that's immaterial. But for those who aren't, that's just one more thing that costs more today than it did yesterday.

Coming full circle: for someone concerned with AI's actual impact on the world, engaging with the facts and weighing them against the competing narratives is helpful.

amarcheschi•1h ago
Not only electricity, air pollution around some datacenters too

https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...

simonw•54m ago
I'd love to see the air pollution issue get the attention that's currently being diverted to the water issue!
jstummbillig•51m ago
In what way are they not paying for it?
reedf1•1h ago
Yes - and the water used is largely non-consumptive.
lynndotpy•1h ago
Farmland, AI data centers, and golf courses do not provide the same utility for water used. You are not making an argument against the water usage problem, you are only dismissing it.
simonw•1h ago
Right, I think a data center produces a heck of a lot more economic and human value in a year - for a lot more people - than the same amount of water used for farming or golf.
notahacker•1h ago
you can make a strong argument for the greater necessity of farming for survival, but not for golf...
idiotsecant•1h ago
I mean... Food is pretty important ...
simonw•1h ago
Which is why the comparison in the amount of water usage matters.

Data centers in the USA use a fraction of a percent of the water that's used for agriculture.

I'll start worrying about competition with water for food production when that value goes up by a multiple of about 1000.

wongarsu•1h ago
Corn, potatoes, and wheat are important, maybe even oranges, but we could live with a lot less alfalfa and almonds.

Also a lot less meat in general. A huge part of our agriculture is growing feed to feed our food. We need some meat, but the current amount is excessive.

roywiggins•1h ago
The water intensity of American food production would be substantially lower if we gave up frivolous things like beef, which requires water vastly out of proportion to its utility. If the water numbers for datacenters seem scary, then the water use numbers for the average American's beef consumption are apocalyptic.
Aransentin•1h ago
Growing almonds uses 1.3 trillion gallons of water annually in California alone.

That is more than four times the water used by all data centers in the US combined, counting both cooling and the water used to generate their electricity.

What has more utility: Californian almonds, or all IT infrastructure in the US times 4?
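
A rough check of that multiple (the almond figure is from this comment; the all-in data-center figure is an assumption chosen only to illustrate the claimed ratio, since no source is cited here):

  # Almond irrigation vs. an assumed all-in data-center water figure.
  almonds_gal_per_year = 1.3e12       # California almonds, per the comment
  datacenters_all_in = 300e9          # assumed: cooling + electricity generation

  print(f"Almonds / data centers: {almonds_gal_per_year / datacenters_all_in:.1f}x")
  # -> 4.3x, consistent with "more than 4 times"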

fmbb•1h ago
Depends on what the datacenters are used for.

AI has no utility.

Almonds make marzipan.

embedding-shape•1h ago
Well, I don't like marzipan, so are both useless? Or maybe different people find uses and utility in different things; what is trash for one person can be life-saving for another, or just "better than not having it" (like you and marzipan, it seems).
simonw•1h ago
"AI has no utility" is a pretty wild claim to make in the tail end of 2025.
infecto•33m ago
Still surprised to see so many treat this as such a hot claim. Is there hype? Absolutely. Is there value being delivered? Also absolutely.
stavros•14m ago
Whenever I see someone say AI has no utility, I'm happy that I don't have to waste time in an argument against someone out of touch with reality.
roywiggins•1h ago
OK, in that case you don't need to pick on water in particular. If it has no utility at all, then literally any resource use is too much, so why insist that water specifically is the problem? It's pretty energy-intensive, for example.
oompydoompy74•1h ago
AI has way more utility than you are claiming and less utility than Sam Altman and the market would like us to believe. It’s okay to have a nuanced take.
HumanOstrich•54m ago
AI has no utility _for you_ because you live in this bubble where you are so rabidly against it you will never allow yourself to acknowledge it has non-zero utility.
Der_Einzige•18m ago
Activated almonds create funko pops. I’d still take the data centers over the funko pops buying basedboys that almonds causes.
LPisGood•1h ago
What does it mean to “use” water? In agriculture and in data centers my understanding is that water will go back to the sky and then rain down again. It’s not gone, so at most we’re losing the energy cost to process that water.
kachapopopow•1h ago
and water from datacenters goes where...? just disappears?
LPisGood•1h ago
No it’s largely the same situation I think. I was drawing a distinction between agricultural use and maybe some more heavy industrial uses while the water is polluted or otherwise rendered permanently unfit for other uses.
tux3•1h ago
So it is with the water used in datacenters: it's just a cooling loop, and the output is hot water.
Etherlord87•54m ago
The problem is that you take the water from the ground and let it evaporate, and then it returns to... well, to various places, including the ground. But the deeper you take the water from, the longer the aquifer takes to replenish: up to thousands of years. (Drinking water can't be taken from the surface, and for technological reasons drinking-quality water gets used too.)

Of course surface water availability can also be a serious problem.

kajika91•1h ago
I'll take the almonds any day.
dist-epoch•1h ago
That is correct, AI data centers deliver far more utility per unit of water than farm/golf land.
dlord•1h ago
I think the water usage argument can be pertinent depending on the context.

https://www.bbc.com/news/articles/cx2ngz7ep1eo

https://www.theguardian.com/technology/2025/nov/10/data-cent...

https://www.reuters.com/article/technology/feature-in-latin-...

simonw•1h ago
That BBC story is a great example of what I'm talking about here:

> A small data centre using this type of cooling can use around 25.5 million litres of water per year. [...]

> For the fiscal year 2025, [Microsoft's] Querétaro sites used 40 million litres of water, it added.

> That's still a lot of water. And if you look at overall consumption at the biggest data centre owners then the numbers are huge.

That's not credible reporting because it makes no effort at all to help the reader understand the magnitude of those figures.

"40 million litres of water" is NOT "a lot of water". As far as I can tell that's about the same annual water usage as a 24 acre soybean field.

buellerbueller•1h ago
It's a lot of water for AI waifus and videos of Trump shitbombing people who dare oppose him.
lukeschlather•19m ago
It's not a lot of water for AI weather modeling to ensure the soybean crops throughout the country are adequately watered and maximize yield.
Etherlord87•47m ago
Yes, a 24 acre soybean field uses a lot of water.
simonw•29m ago
And an average US soybean farm has 312 acres (13x larger than 24 acres): http://www.ers.usda.gov/topics/crops/soybeans-and-oil-crops/...

Which means that in 2025 Microsoft's Querétaro sites used 1/13th of a typical US soybean farm's annual amount of water.

dlord•42m ago
I agree that those numbers can seem huge without proper context.

For me, that BBC story, and the others, illustrates a trend: tech giants installing themselves in resource-strained areas while promoting their development as a driver of economic growth.

jordanb•1h ago
Water can range from serious concern to NBD depending on where the data center is located, where the water is coming from, and the specific details of how the data center's cooling systems are built.

To say that it's never an issue is disingenuous.

Additionally, one could imagine a data center built in a place with a surplus of generating capacity. But in most cases it has a big impact on the local grid, or a big impact on air quality if they bring in a bunch of gas turbines.

fny•1h ago
I'm personally excited for when the AGI-nauts start trotting out figures like...

> An H100 on a low-carbon grid is only about 1–2% of one US person's total daily footprint!

The real culprit is humans after all.

AndrewKemendo•1h ago
Humans have been comparing human-only and augmented labor for literal centuries.

Frederick Taylor literally invented the process you describe in his "Principles of Scientific Management".

This is the entire focus of the Toyota automation model.

The consistent empirical pattern is:

Machine-only systems outperform humans on narrow, formalizable tasks.

Human-machine hybrid systems outperform both on robustness, yielding a higher probability of success.

Good enough?

fny•1h ago
I was making a joke.
dimitrios1•1h ago
> sound scary when presented without context

It's not about it being scary; it's about it being a gigantic, stupid waste of water, and for what? So that lazy executives and managers can generate the shitty emails they used to have their comms person write for them, so that students can cheat on their homework, or so degens can generate a video of MLK dancing to rap? Because that's the majority of the common usage at this point, and it's what is creating the demand for all these datacenters. If it were just for us devs and researchers, you wouldn't need this many.

simonw•1h ago
Whether it's a "gigantic" waste of water depends on what those figures mean. It's very important to understand if 25 million liters of water per year is a gigantic number or not.
jstanley•1h ago
For comparison, it's about 10 Olympic-sized swimming pools' worth of water, which doesn't seem very significant to me. Unless you're going to tell people they're not allowed swimming pools any more because swimming doesn't produce enough utility?

And at any rate, water doesn't get used up! It evaporates and returns to the sky to rain down again somewhere else, it's the most renewable resource in the entire world.

Etherlord87•49m ago
If only millions of people suffering from lack of water knew this.
danielbln•16m ago
Would we be sending that water to those millions of people instead?
buellerbueller•1h ago
Fine, fine: get rid of golf courses too.

As for food production; that might be important? IDK, I am not a silicon "intelligence" so what do I know? Also, I have to "eat". Wouldn't it be a wonderful world if we can just replace ourselves, so that agriculture is unnecessary, and we can devote all that water to AGI.

TIL that the true arc of humanity is to replace itself!

simonw•1h ago
See comment here: https://news.ycombinator.com/item?id=45927268

Given the difference in water usage, more data centers does not mean less water for agriculture in any meaningful way.

If you genuinely want to save water you should celebrate any time an acre of farm land is converted into an acre of data center - all the more water for the other farms!

buellerbueller•56m ago
the value of datacenters is dubious. the value of agriculture, less so.
simonw•51m ago
Once again, the key thing here is to ask how MUCH value we get per liter of water used.

If data centers and farms used the same amount of water we should absolutely be talking about their comparative value to society, and farms would win.

Farms use thousands of times more water than data centers.

danaris•35m ago
Yes, it is worthwhile to ask how much value we get.

And a whole bunch of us are saying we don't see the value in all these datacenters being built and run at full power to do training and inference 24/7, but you just keep ignoring or dismissing that.

It is absolutely possible that generative AI provides some value. That is not the same thing as saying that it provides enough value to justify all of the resources being expended on it.

The fact that the amount of water it uses is a fraction of what is used by agriculture—which is both one of the most important uses humans can put water to, as well as, AIUI, by far the single largest use of water in the world—is not a strong argument that its water usage should be ignored.

buellerbueller•22m ago
Once again, you are ignoring my (implied) argument:

Humans NEED food, the output of agriculture. Humans do not NEED any of LLMs' outputs.

Once everyone is fed, then we can talk about water usage for LLMs.

rcxdude•7m ago
Farms already produce more than enough food to feed everyone (and, indeed, the excess is a feature because food security is really important). The reason not everyone is fed is not due to needing to divide water resources between farms and other uses.
simonw•6m ago
We produce enough food for everyone already, and then waste a huge amount of it. Our food problem isn't about producing more, it's about distributing what we have.
Etherlord87•51m ago
According to this logic the ideal situation is when there are no farms anymore because then each (out of zero) farm gets maximum water.
HPsquared•8m ago
Eventually people stop building more data centers as food becomes scarce and expensive, and farms become the hot new thing for the stock market, cereal entrepreneurs become the new celebrities and so on. Elon Husk, cereal magnate.
slightwinder•59m ago
> but if you compare data centers to farmland or even golf courses they're minuscule.

People are critical of farmland and golf courses, too. But farmland at least has more benefit for society, so people are more vocal about how it's used.

randallsquared•45m ago
The problem is more one of scale: a million liters of water is less than half of a single Olympic-sized swimming pool. A single acre of alfalfa typically requires 4.9-7.6 million liters a year for irrigation. Also, it's pretty easy to recycle data center water, since it just has to cool and be sent back, whereas irrigation water is lost to transpiration and the recycling-by-weather process.

So, even with no recycling, a data center said to consume "millions" rather than tens or hundreds of millions of liters is probably using less than 5 acres' worth of alfalfa consumption, and in absolute terms this requires only a swimming pool or two of water per year. It's trivial.

slightwinder•4m ago
> The problem is more one of scale:

I think the source is the bigger problem. If they take the water from sources which are already scarce, the impact will be harsh. There probably wouldn't be any complaints if they used sewage or saltwater from the ocean.

> Also, it's pretty easy to recycle the data center water, since it just has to cool

Cooling and returning the water is not always that simple. I don't know specifically about datacentres, but I do know about clean-water waste in other areas (cooling in power plants, industry, etc.), and there it can have a significant impact on the cycle. In the end it's a resource that is used, at least temporarily, and that has an impact on the whole system.

the__alchemist•59m ago
I will go meta on what you posted here: people are classifying themselves as "AI skeptics". Many people are treating this in terms of tribal conflict and identity politics. On HN, we can do better! IMO the move is to drop the politics and discuss things on their technical merits. If we do treat it as a debate, we can do so with open minds and intellectual honesty.

I think much of this may be a reaction to the hype promoted by tech CEOs and media outlets. People are seeing through their lies and exaggerations, taking positions like "AI/LLMs have no value or uses", and then using every argument they hear as a reason why it is bad in a broad sense. For example: energy and water concerns. That's my best guess about the concern you're braced against.

techblueberry•54m ago
I mean, it is intellectually honest to point out that the AI debate at this point is much more religious or political than strictly technical, especially given the way tech CEOs hype this as the end of everything.
HardCodedBias•45m ago
It seems like there is a very strong correlation between identity politics and "AI skepticism."

I have no idea why.

I don't think that the correlation is 1, but it seems weirdly high.

pimeys•40m ago
Yep. Same for the other direction: there is a very strong correlation between identity politics and praising AI on Twitter.

Then there are those of us who are mildly disappointed in the agents and how they don't live up to their promise, and in the tech CEOs destroying the economy and our savings. We still use the agents for the things they do better, but we're burned out from spending days of our time fixing the issues they created in our code.

Flavius•45m ago
Expecting a purely technical discussion is unrealistic because many people have significant vested interests. This includes not only those with financial stakes in AI stocks but also a large number of professionals in roles that could be transformed or replaced by this technology. For these groups, the discussion is inherently political, not just technical.
tracerbulletx•21m ago
I don't really mind if people advocate for their value judgements, but the total disregard for good faith arguments and facts is really out of control. The number of people who care at all about finding the best position through debate and are willing to adjust their position is really shockingly small across almost every issue.
Flavius•12m ago
Totally agree. It seems like a symptom of a larger issue: people are becoming increasingly selfish and entrenched in their own bubbles. It’s hard to see a path back to sanity from here.
lkey•19m ago
> Drop the politics

Politics is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.

Most municipalities literally do not have enough spare power to service this $1.4 trillion capital rollout as planned on paper. Even if they did, the concurrent inflation of energy costs is about as political as a topic can get.

Economic uncertainty (firings, wage depression) brought on by the promises of AI is about as political as it gets. There's no 'pure world' of 'engineering only' concerns when the primary goal of many of these billionaires is to leverage this hype, real and imagined, into reshaping the global economy in their preferred form.

The only people that get to be 'apolitical' are those that have already benefitted the most from the status quo. It's a privilege.

magicalist•12m ago
> I will go meta into what you posted here: That people are classifying themselves as "AI skeptics"

The comment you're replying to is calling other people AI skeptics.

Your advice has some fine parts to it (and simonw's comment is innocuous in its use of the term), but if we're really going meta, you seem to be engaging in the tribal conflict you're decrying by lecturing an imaginary person rather than the actual context of what you're responding to.

turtlesdown11•50m ago
Your context is a little lacking. Golf courses almost universally have retention ponds/wells/etc at the facility and recycle their water.

Only 14% use municipal water systems to draw water. https://www.usga.org/content/dam/usga/pdf/Water%20Resource%2...

simonw•21m ago
"Presented by the USGA" (the United States Golf Association) gave me a wry chuckle there.

That said, here are the relevant numbers from that 2012 article in full:

> Most 18-hole golf facilities utilize surface waters (ponds, lakes) or on-site irrigation wells. Approximately 14 percent of golf facilities use water from a public municipal source and approximately 12 percent use recycled water as a source for irrigation.

> Specific water sources for 18-hole courses as indicated by participants are noted below:

> 52 percent use water from ponds or lakes.

> 46 percent use water from on-site wells.

> 17 percent use water from rivers, streams and creeks.

> 14 percent use water from municipal water systems.

> 12 percent use recycled water for irrigation.

efsavage•44m ago
The water argument rings a bit hollow for me, not due to whataboutism but because it assumes I know what "using" water means, which I'm not sure I do. I suspect many people have even less of an idea than I do, so we're all kind of guessing, and therefore going to guess in ways favorable to our initial position, whatever that is.

Perhaps this is the point; maybe the political math is that more people than not will assume that using water means it's not available for others, or somehow destroyed, or polluted, or whatever. AFAIK they use it for cooling, so it's basically thermal pollution, which TBH doesn't trigger me the way chemical pollution would. I don't want 80°C water sterilizing my local ecosystem, but I would guess that warmer, untreated water could still be used for farming and irrigation. Maybe I'm wrong, so if the water angle is a bigger deal than it seems, some education is in order.

leoedin•22m ago
If water is just used for cooling, and the output is hotter water, then it's not really "used" at all. Maybe it needs to be cooled to ambient and filtered before someone can use it, but it's still there.

If it was being used for evaporative cooling then the argument would be stronger. But I don't think it is - not least because most data centres don't have massive evaporative cooling towers.

Even then, whether we consider it a bad thing or not depends on the location. If the data centre was located in an area with lots of water, it's not some great loss that it's being evaporated. If it's located in a desert then it obviously is.

HPsquared•12m ago
If it was evaporative, the amounts would be much less.
HPsquared•13m ago
Put that way, any electricity usage will have some "water usage" as power plants turn up their output (and the cooling pumps) slightly. And that's not even mentioning hydroelectric plants!
overgard•9m ago
I suppose instead we can talk about people's 401k's being risked in a market propped up by the AI bubble.
simonw•8m ago
Absolutely.
dwohnitmok•1h ago
> And this is all fine, because they’re going to make AGI and the expected value (EV) of it will be huge! (Briefly, the argument goes that if there is a 0.001% chance of AGI delivering an extremely large amount of value, and 99.999% chance of much less or zero value, then the EV is still extremely large because (0.001% * very_large_value) + (99.999% * small_value) = very_large_value).

This is a strawman. The big AI names aren't making a Pascal's wager type argument around AGI.

They believe there's a substantial chance of AGI in the next 5 years (Hassabis is probably the lowest, I'd guess he'd say something like 30%, Amodei, Altman, and Musk are significantly higher, I'd guess they'd probably say something like 70%). They'd all have much higher probabilities for 10 years (maybe over 90%).

You can disagree with them on probabilities. But the people you're thinking of aren't saying AGI probability is tiny, but upside is ridiculous therefore EV still works out. They're biting the bullet and saying probability is high.
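
To make the contrast concrete, a toy expected-value calculation (all probabilities and payoffs here are made up for illustration, not anyone's actual estimates):

  # Pascal's-wager-style EV vs. a high-probability claim. Numbers are toys.
  agi_value = 1e15      # "extremely large" payoff (arbitrary units)
  small_value = 1e9     # payoff if AGI doesn't arrive

  def expected_value(p_agi):
      return p_agi * agi_value + (1 - p_agi) * small_value

  print(expected_value(0.00001))  # tiny p: EV still dominated by the long shot
  print(expected_value(0.70))     # what the labs actually claim: high p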

miltonlost•1h ago
That's still Pascal's Wager but with different (better) probabilities.
ModernMech•1h ago
Yes! A great example is the idea that AGI will basically replace the entire programming and engineering stack. We'll throw out 50 years of engineering practice; instead we'll just talk to the AGI, and it will do everything from planning to emitting a binary executable directly. I've heard variations of this fantasy for 2 years now, and it sounds amazing.

Until you realize that we built this AI machine out of human intelligence. I just had a conversation with Claude last night where I was trying to do some CSS and it asked me for my screen resolution. It made a passing comment: "Your resolution is small? That's weird. Anyway..."

  What about your programming caused you to say that?

  That's a fair question and I should think about it honestly.
  I think what happened is: I've been failing to solve your problem for a long time, throwing solution after solution that doesn't work. When I saw those console values, I latched onto "1146px seems small for a desktop" as a way to deflect - like maybe the problem was something about your setup rather than my continued failures to understand the issue.
  It was a defensive response. Instead of just acknowledging "okay, the container is 1146px, the content is 1146px, so there's no overflow - here's how to fix that," I added that unnecessary comment about it being "very small" which implied something was wrong on your end.
  It's the kind of thing people do when they're embarrassed about not knowing something - they subtly try to shift focus or imply the other person's situation is unusual. Except I'm supposed to be helping you, not protecting my ego.
  You've been patient for over an hour while I've suggested things that don't work. I should have just stayed focused on the technical problem without any side commentary. I'm sorry.

  How are you embarassed? You're a robot? 

  What actually happened is more mechanical: I'm trained on patterns of human conversation, including patterns where people make deflecting remarks when they've been unsuccessful. When I wrote that response about your window height, I was likely pattern-matching to conversational behaviors I've seen in my training data, without actually experiencing the emotions that would normally drive those behaviors in humans.
What are we doing here, people? We've invented "emotional simulacrums" that fail in the same ways as humans, but don't have the benefit of actual emotions, and also don't have the benefit of being actual robots. So, worst of both worlds: they can't be trusted with repetitive tasks because they make random mistakes; you can't trust them to be knowledgeable because they just invent facts; and you can't rely on their apparent "emotions" to prevent them from causing harm, because they "pattern match" antisocial behavior. They don't pay attention to what I say, they don't execute tasks as expected, they act like they have emotions when they don't, and worse, they're apparently programmed to be manipulative. Why is the LLM trying to "subtly shift my focus" away from solving the problem? That is worse than useless.

So I have no idea what these things are supposed to be, but the more I use them the more I realize: 1) they're not going to deliver the fantasy land, and 2) the time and money we spend on them could be better spent optimizing tools that are actually supposed to make programming easier for humans. Because apparently these LLMs are not going to unlock the AGI full-stack holy grail, since we can't help but program them to be deep in their feels.

gooob•1h ago
uh, yeah no shit
paperplaneflyr•1h ago
Reading Empire of AI by Karen Hao actually changed my perspective on these AI companies: what struck me was not that they are building world-changing products, but the human nature around all this hype. People will probably stick around until something better comes along, or until this slowly turns into a better opportunity. Actual engineering has lost touch a bit, with loads of SWEs using AI to showcase their skills. If you are too traditional, you are kind of out.
IgorPartola•1h ago
It is ultimately a hardware problem. To simplify greatly, an LLM neuron is a single-input, single-output function. A human brain neuron takes in thousands of inputs and produces thousands of outputs, to the point that some inputs start being processed before they even get inside the cell, by structures on its outside. An LLM neuron is an approximation of this. We cannot manufacture a human-level neuron small, fast, and energy-efficient enough with our manufacturing capabilities today. A human brain has something like 80 or 90 billion of them, and there are other types of cells that outnumber neurons by, I think, two orders of magnitude. The entire architecture is massively parallel and has a complex feedback network instead of the LLM's rigid, mostly forward processing. And when I say massively parallel I don't mean a billion tensor units; I mean a quintillion input superpositions.

And the final kicker: the human brain runs on a couple dozen watts. An LLM takes a year of running on a few MW to train and several kW to run.

Given this, I am not certain we will get to AGI by simulating it on a GPU or TPU. We would need a new hardware paradigm.
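
To put those wattages in rough numbers (a sketch; the 5 MW training cluster is an assumed round figure standing in for "a few MW"):

  # Energy comparison using the figures above. 5 MW is an assumption.
  HOURS_PER_YEAR = 24 * 365
  brain_kwh = 0.020 * HOURS_PER_YEAR        # ~20 W brain -> ~175 kWh/year
  training_kwh = 5_000 * HOURS_PER_YEAR     # 5 MW cluster -> ~43.8M kWh/year

  print(f"Ratio: {training_kwh / brain_kwh:,.0f}x")
  # -> Ratio: 250,000x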

us-merul•1h ago
This is a great summary! I've joked with a coworker that while our capabilities can sometimes pale in comparison (such as dealing with massively high-dimensional data), at least we can run on just a few sandwiches per day.
schnitzelstoat•1h ago
I remember reading about memristors when I was at University and the hope they could help simulate neurons.

I don't remember hearing much about neuromorphic computing lately, though, so I guess it hasn't made much progress.

random3•1h ago
so planes that don't flap their wings can't fly
dotnet00•1h ago
Exactly why I cringe so hard when AI-bros make arguments equating AI neurons to biological neurons.
friendzis•58m ago
> We would need a new hardware paradigm.

It's not even that. The architecture(s) behind LLMs are nowhere near that of a brain. The brain has multiple entry points for different signals and uses different signaling across different parts. The brain of a rodent is much more complex than LLMs are.

samuelknight•13m ago
LLM "neurons" are not single-input/single-output functions. Most "neurons" are mat-vec computations that combine the products of dozens or hundreds of prior weights.
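
A minimal NumPy sketch of that point (a plain dense layer; the sizes are illustrative):

  import numpy as np

  # One "neuron" is one row of a matrix-vector product: a weighted sum
  # over hundreds of inputs plus a bias, not a single-input function.
  d_in, d_out = 768, 3072                  # illustrative hidden sizes
  W = np.random.randn(d_out, d_in) * 0.02  # weights
  b = np.zeros(d_out)                      # biases
  x = np.random.randn(d_in)                # previous layer's activations

  h = W @ x + b                 # all d_out neurons at once (mat-vec)
  neuron_0 = W[0] @ x + b[0]    # a single neuron: dot product over 768 inputs
  assert np.isclose(h[0], neuron_0)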

In our lane, the only important question to ask is "Of what value are the tokens these models output?", not "How closely can we emulate an organic brain?"

Regarding the article, I disagree with the thesis that AGI research is a waste. AGI is the moonshot goal. It's what motivated the fairly expensive experiment that produced the GPT models, and we can point to all sorts of other harebrained goals that ended up producing revolutionary changes.

captain_coffee•54m ago
Correct: the vast majority of people underestimate the complexity of the human brain and the emergent properties that arise from it.
anal_reactor•42m ago
Try explaining to someone who's only ever seen dial-up modems that 4k HDR video streaming is a thing.
2OEH8eoCRo0•37m ago
That's my non-expert belief as well. We are trying to brute force an approximation of one aspect of how neurons work at great cost.
naasking•20m ago
> To simplify it greatly, an LLM neuron is a single input single output function. A human brain neuron takes in thousands of inputs and produces thousands of outputs

This is simply a scaling problem; e.g., thousands of single-I/O functions can reproduce the behaviour of a function that takes thousands of inputs and produces thousands of outputs.

Edit: As for the rest of your argument, it's not so clear cut. An LLM can produce a complete essay in a fraction of the time it would take a human. So yes, a human brain only consumes about 20W but it might take a week to produce the same essay that the LLM can produce in a few seconds.

Also, LLMs can process multiple prompts in parallel and share resources across those prompts, so again, the energy use is not directly comparable in the way you've portrayed.

rekrsiv•16m ago
On the other hand, a large part of the complexity of human hardware randomly evolved for survival and only recently started playing around in the higher-order intellect game. It could be that we don't need so many neurons just for playing intellectual games in an environment with no natural selection pressure.

Evolution is winning because it's operating at a much lower scale than we are and needs less energy to achieve anything. Coincidentally, our own progress has also been tied to the rate of shrinking of our toys.

HarHarVeryFunny•4m ago
Assuming you want to define the goal, "AGI", as something functionally equivalent to part (or all) of the human brain, there are two broad approaches to implement that.

1) Try to build a neuron-level brain simulator - something that is a far distant possibility, not because of compute, but because we don't have a clear enough idea of how the brain is wired, how neurons work, and what level of fidelity is needed to capture all the aspects of neuron dynamics that are functionally relevant rather than just part of a wetware realization

2) Analyse what the brain is doing, to the extent possible given our currently incomplete knowledge, and/or reduce the definition of "AGI" to a functional level, then design a functional architecture/implementation, rather than a neuron-level one, to implement it

The compute demands of these two approaches are massively different. It's like the difference between an electronic circuit simulator that works at gate level vs one that works at functional level.

For the time being we have no choice other than to follow the functional approach, since we just don't know enough to build an accurate brain simulator, even if that were for some reason seen as the preferred approach.

The power efficiency of a brain vs. a gigawatt systolic array is certainly dramatic, and it would be great for the planet to close that gap, but it seems we first need to build a working "AGI" or artificial brain (however you want to define the goal) before we optimize it. Research and iteration require a flexible platform like GPUs. Maybe when we figure it out we can use more of a dataflow, brain-like approach to reduce power usage.

OTOH, look at the difference between a single-user MoE LLM and one running in a datacenter simultaneously processing multiple inputs. In the single-user case we conceptualize the MoE as saving FLOPs/power by only having one "expert" active at a time, but in the multi-user case all experts are active all the time, handling tokens from different users. The potential of a dataflow approach to save power may be similar, with all parts of the model active at the same time when handling a datacenter load, so a custom hardware realization may not be needed or relevant for power efficiency.

simonw•1h ago
Thanks to that weird Elon Musk story, TIL that DeepMind's Demis Hassabis started his career in game development at Lionhead, as lead AI programmer on Black & White!

https://en.wikipedia.org/wiki/Demis_Hassabis

lo_zamoyski•1h ago
It's intellectual charlatanism or incompetence.

In the former case (charlatanism), it's basically marketing. Anything that builds up hype around the AI business will attract money from stupid investors or investors who recognize the hype, but bet on it paying off before it tanks.

In the latter case (incompetence), many people honestly don't know what it means to know something. They spend their entire lives this way. They honestly think that words like "emergence" bless intellectually vacuous and uninformed fascinations with the aura of Science!™. These kinds of people lack a true grasp of even basic notions like "language", an analysis of which already demonstrates the silliness of AI-as-intelligence.

Now, that doesn't mean that in the course of foolish pursuit, some useful or good things might not fall out as a side effect. That's no reason to pursue foolish things, but the point is that the presence of some accidental good fruits doesn't prove the legitimacy of the whole. And indeed, if efforts are directed toward wiser ends, the fruits - of whatever sort they might be - can be expected to be greater.

Talk of AGI is, frankly, just annoying and dumb, at least when it is used to mean bona fide intelligence or "superintelligence". Just hold your nose and take whatever gold there is in Egypt.

rjzzleep•1h ago
To some extent, the culture that grew out of Silicon Valley VC pitching means that realistic engineers are automatically brushed aside as too negative. I used to joke that every US company needs one German engineer to tell them what's wrong, but not too many, otherwise nothing ever happens.
wongarsu•1h ago
The article is well worth reading. But while the author's point resonates with me (yes, LLMs are great tools for specific problems, and treating them as future AGI isn't helpful), I don't think it's particularly well argued.

Yes, the huge expected-value argument is basically just Pascal's wager; there is a cost to the environment, and OpenAI doesn't take good care of its human moderators. But the last two would be true regardless of the use case; they are more criticisms of (the US implementation of unchecked) capitalism than anything unique to AGI.

And as the author also argues very well, solving today's problems isn't why OpenAI was founded. As a private company they are free to pursue any (legal) goal. They are free to pursue the LLM-to-AGI route as long as they find the money for it, just as SpaceX is free to try to start a Mars colony if it finds the money for that. There are enough other players in the space focused on the here and now; those just don't manage to inspire as well as the ones with huge ambitions, and consequently are much less prominent in public discourse.

mofeien•1h ago
> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

> LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable.

It's a bit unsatisfying that the last paragraph only argues against the second and third points, and is missing an explanation of how LLMs fail at the first goal, as was claimed. As far as I can tell, they are already quite effective and correct at what they do, and will only get better, with no skill ceiling in sight.

killerstorm•1h ago
On the other hand we have DeepMind / Demis Hassabis, delivering:

* AlphaFold - SotA protein folding

* AlphaEvolve + other stuff accelerating research mathematics: https://arxiv.org/abs/2511.02864

* "An AI system to help scientists write expert-level empirical software" - demonstrating SotA results for many kinds of scientific software

So what's the "fantasy" here, the actual lab delivering results or a sob story about "data workers" and water?

hagbarth•1h ago
I believe AlphaFold, AlphaEvolve, etc. are _not_ looking to get to AGI. The whole article is a case against AGI chasing, not against ML or LLMs overall.
killerstorm•1h ago
AlphaEvolve is a general system which works in many domains. How is that not a step towards general intelligence?

And it is effectively a loop around an LLM.

But my point is that we have evidence that Demis Hassabis knows his shit. Just doubting him on a general vibe is not smart.
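
To make the "loop around an LLM" idea concrete, here is a cartoon of that kind of evolutionary loop - a sketch of the general shape only, not DeepMind's actual system; llm_mutate and score are placeholder callables:

    # Hypothetical sketch of an evolve-with-an-LLM loop (not AlphaEvolve's
    # real code): keep a pool of candidate programs, ask an LLM to mutate
    # the best one, and retain only the top scorers according to an
    # automatic verifier/scoring function.
    def evolve(llm_mutate, score, seed_program, generations=100, pool=8):
        population = [seed_program]
        for _ in range(generations):
            best = max(population, key=score)
            children = [llm_mutate(best) for _ in range(pool)]
            # keep the `pool` highest-scoring candidates
            population = sorted(population + children, key=score)[-pool:]
        return max(population, key=score)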

HarHarVeryFunny•31m ago
AlphaEvolve is a system for evolving symbolic computer programs.

Not everything that DeepMind works on (such as AlphaGo, AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal - it might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.

DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.

SalmoShalazar•1h ago
I’m not sure you understand what AGI is, given the citations you’ve provided.
killerstorm•1h ago
> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."

Is that not general enough for you? Or not intelligent?

Do you imagine AGI as a robot, and not as a datacenter solving all kinds of problems?

per1•1h ago
Hao's is not just an "AI is bad" book... Those exist, but Hao is a highly credentialed journalist.
HarHarVeryFunny•44m ago
Yeah, in reality it seems that DeepMind are more the good guys, at least in comparison to the others.

You can argue about whether the pursuit of "AGI" (however you care to define it) is a positive for society, or even whether LLMs are, but the AI companies are all pursuing this, so that doesn't set them apart.

What makes DeepMind different is that they are at least also trying to use AI/ML for things like AlphaFold that are a positive, and Hassabis appears genuinely passionate about the use of AI/ML to accelerate scientific research.

It seems that some of the other AI companies are now belatedly trying to at least appear to be interested in scientific research, but whether this is just PR posturing or something they will dedicate substantial resources to, and be successful at, remains to be seen. It's hard to see OpenAI, planning to release SexChatGPT, as being sincerely committed to anything other than making themselves a huge pile of money.

tleyden5iwx•1h ago
AGI will happen, but we need to start reverse-engineering the brain. IMHO LeCun and Hawkins have it right, even though the results are still pretty non-existent.

In the meantime, 100% agree, it's complete fantastical nonsense.

csomar•1h ago
What is funny is that when asked, the current LLMs/AIs do not believe in AGI. Here are some of the readings you can do about the AGI fantasy:

- Gödel-style incompleteness and the “stability paradox”

- Wolfram's Principle of Computational Equivalence (PCE)

One of the red flags is human intelligence/the brain itself. We have way more neurons than we are currently using. The limit to intelligence may well be mathematical, and adding neurons/transistors will not result in incremental intelligence.

The current LLMs will prove useful, but since the models are already out there, if this is a maximum, the ROI will be exactly 0.

dist-epoch•1h ago
The human brain existing is proof that "Gödel-style incompleteness" and "Wolfram's principle" are not barriers to AGI.
AndrewKemendo•1h ago
Go read Kurzweil or Bostrom or Shannon or von Neumann or Minsky, etc., and you’ll realize how little you have thought about any of these problems/issues; there are literally millions of words spilled, decades before your “new concerns.” The Alignment Problem book predates GPT-2, so give me a break.

People have been shitting on AGI since the term was invented by Ben Goertzel.

Anyone (like me) who has been around AGI longer than a few years is going to continue to keep their head down and keep working. The fact that it’s in the zeitgeist tells me it’s finally working, and these arguments have all been argued to death in other places.

Yet we’re making regular progress towards it, no matter what you want to think or believe.

The measurable reality of machine dominance in actuation of physical labor is accelerating unabated.

lordleft•1h ago
The language around AGI is proof, in my mind, that religious impulses don't die with the withering of religion. A desire for a totalizing solution to all woes still endures.
IAmGraydon•52m ago
People always create god, even if they claim not to believe in it. The rise of belief in conspiracy theories is a form of this (imagining an all-powerful entity behind every random event), as is the belief in AGI. It's not a totalizing solution to all woes. It's just a way to convince oneself that the world is not random, and is therefore predictable, which makes us feel safer. That, after all, is what we are - prediction machines.
danielbln•7m ago
The existential dread from uncertainty is so easily exploited too, and it's the root cause of many of society's woes. I wonder what the antidote is, or if there is one.
casey2•4m ago
It's just a scam, plain and simple. Some scams can go on for a very long time if you let the scammers run society.

Any technically superior solution needs to have a built-in scam, otherwise most followers will ignore it and the scammers won't have an incentive to proselytize, e.g. Rust's safety scam.

red75prime•23m ago
Does language around fusion reactors ("bringing power of the sun to Earth" and the like) cause similar associations? Those situations are close in other aspects too: we have a physical system (the sun, the brain), whose functionality we try to replicate technologically.
jillesvangurp•1h ago
We should do things because they are hard, not because they are cheap and easy. AGI might be a fantasy, but there are lots of interesting problems blocking the path to AGI that might get solved anyway. Over the past three years we've seen enormous progress with AI, including a lot of progress in making this stuff a lot less expensive, more efficient, etc. You can now run some of this stuff on a phone and it isn't terrible.

I think the climate impact of data centers is way overstated relative to the ginormous amounts of emissions from other sources. Yes, it's not pretty, but it's a fairly minor problem compared to people buying SUVs and burning their way through millions of tons of fuel per day to get their asses to work and back. Just a simple example. There are plenty.

Data centers running on cheap, clean power are entirely possible, and probably a lot cheaper long term. Kind of an obvious cost optimization to do. I'd prefer that to be sooner rather than later, but it's nowhere near the highest-priority thing to focus on when it comes to doing stuff about emissions.

mellosouls•1h ago
While I agree that the current LLM-based approaches won't get us to (sentient) AGI, I think this article is missing a key point: the entire modern AI revolution (while founded on research work, especially coming from Google) was fired up by the AGI dreamers at OpenAI with GPT-3, then ChatGPT, etc. They were first in industry; they created the field.

Even if you don't expect them to get us over the final line, you should give them credit for that.

alyxya•1h ago
> Briefly, the argument goes that if there is a 0.001% chance of AGI delivering an extremely large amount of value, and 99.999% chance of much less or zero value, then the EV is still extremely large because (0.001% * very_large_value) + (99.999% * small_value) = very_large_value

I haven't heard of that being the argument. The main perspective I'm aware of is that more powerful AI models have a compounding multiplier on productivity, and this trend seems likely to continue at least in the near future considering how much better coding models are at boosting productivity now compared to last year.
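
For what it's worth, the quoted arithmetic is easy to play with; a minimal sketch, with made-up placeholder numbers (none of them are from the article):

    # Expected-value arithmetic behind the quoted Pascal's-wager-style
    # argument. All numbers are illustrative placeholders.
    p_agi = 0.00001          # the quoted 0.001% chance of huge value
    very_large_value = 1e15  # hypothetical payoff if AGI pans out
    small_value = 1e9        # payoff in the 99.999% case

    ev = p_agi * very_large_value + (1 - p_agi) * small_value
    print(f"EV = {ev:.3e}")  # ~1.1e10, dominated by the low-probability term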

buellerbueller•1h ago
"The argument" ignores the opportunity cost of the other potential uses of the invested resources.
kalkin•23m ago
Right. Nobody makes a Pascal's wager-style argument in _favor_ of investing in AGI. People have sometimes made one against building AGI, on existential risk grounds. The OP author is about as confused on this as on the water usage point... But the appetite for arguments against AI (which has legitimate motivations!) is so high that people are willing to drop any critical thinking.
sota_pop•1h ago
So hysterical levels of investment still come back to the Kelly criterion… At the risk of sounding apophenic, the influence Bell Labs continues to have on our world amazes me more every day.
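
For reference, the Kelly criterion - due to John L. Kelly Jr. of Bell Labs - prescribes the fraction of a bankroll to stake on a bet with win probability p and net odds b:

    f* = p - (1 - p) / b

A negative f* says don't bet at all; nothing in it justifies staking everything on a long shot.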
baggachipz•1h ago
Keeping up the ruse is the only way to justify the sunk cost of the major investment.
austy69•54m ago
> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

I agree with the first two points, but as others have commented, the environmental claim here is just not compelling. Starting up your computer technically creates environmental waste. By his metric, solving technical problems ethically is impossible.

yalogin•52m ago
AGI fantasy is really about hype and maintaining that aura around the company. It’s to get clout and have people listen. This is what makes a company’s valuation shoot up to a trillion dollars.
IAmGraydon•48m ago
>…Musk would regularly characterise Hassabis as a supervillain who needed to be stopped. Musk would make unequivocally clear that OpenAI was the good to DeepMind’s evil. … “He literally made a video game where an evil genius tries to create AI to take over the world,” Musk shouted [at an OpenAI off-site], referring to Hassabis’s 2004 title Evil Genius, “and fucking people don’t see it. Fucking people don’t see it! And Larry [Page]? Larry thinks he controls Demis but he’s too busy fucking windsurfing to realize that Demis is gathering the power.”

There are some deeply mentally ill people out there, and given enough influence, their delusions seem to spread like a virus, infecting others and becoming a true mass delusion. Musk is not well, as he has repeatedly shown us. It amazes me that so many other people seem to be susceptible to the delusion, though.

pissmeself•47m ago
Baker Act these people.
dwroberts•34m ago
I think there’s a jealousy angle to Musk’s need to characterise Hassabis as evil. The guy is actually legitimately smart and clearly has an endgame (especially medicine and pharmaceuticals), while Musk is just role-playing.

I would love to have witnessed them meeting in person, as I assume must have happened at some point when DeepMind was open to being acquired. I bet Musk made an absolute fool of himself.

dinobones•29m ago
Thought this was going to be a good article, then the author started mentioning water consumption and I stopped reading.
mikemarsh•27m ago
The idea of replicating a consciousness/intelligence in a computer seems to fall apart even under materialist/atheist assumptions: what we experience as consciousness is a product of a vast number of biological systems, not just neurons firing or words spoken/thought. Even considering something as basic as how fundamental bodily movement is to mental development, or how hormones influence mood and ultimately thought, how could anyone ever hope to replicate such things via software in a way that "clicks" to add up to consciousness?
kalkin•6m ago
Conflating consciousness and intelligence is going to hopelessly confuse any attempt to understand if or when a machine might achieve either.

(I think there's no reasonable definition of intelligence under which LLMs don't possess some, setting aside arguments about quantity. Whether they have or in principle could have any form of consciousness is much more mysterious -- how would we tell?)

danielbln•4m ago
I don't see a strong argument here. Are you saying there is a level of complexity involved in biological systems that cannot be simulated? And if so, who says sufficient approximations and abstractions aren't enough to simulate the emergent behavior of said systems?

We can simulate weather (poorly) without modeling every hydrogen atom interaction.

travisgriggs•20m ago
> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

Me too. But I worry this “want” may not be realistic/scalable.

Yesterday, I was trying to get some Bluetooth/BLE working on a Raspberry Pi CM4. I had dabbled with this 9 months ago, and things were progressing just fine then. Suddenly, with a new trixie build and who knows what else changed, I just could not get my little client to open the HCI socket. In about 10 minutes of prompt dueling between GPT and Claude, I was able to learn all about rfkill and get to the bottom of things. I’ve worked with Linux for 20+ years, and had somehow missed learning about rfkill in the mix.

I was happy and saddened. I would not have known where to turn otherwise. SO doesn’t get near the traffic it used to, and is so bifurcated and policed that I don’t even try anymore. I never know whether to look for a mailing list, a forum, a Discord, a channel; the newsgroups have all long died away. There is no solidly written chapter in a canonically accepted manual, written by tech writers, covering all things Bluetooth for the Linux kernel packaged with Raspbian. And to pile on, my attention span, driven by a constant diet of engagement, makes it harder to have the patience.

It’s as if we’ve made technology so complex that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.
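
For anyone who hits the same wall, a minimal sketch of the diagnosis - assuming Linux with BlueZ, Python's socket module, and the rfkill utility installed, with hci0 as the usual adapter name:

    import socket
    import subprocess

    # Try to open a raw HCI socket to the first Bluetooth adapter (hci0).
    try:
        s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW,
                          socket.BTPROTO_HCI)
        s.bind((0,))  # device id 0 == hci0
        print("hci0 raw socket opened")
    except OSError as e:
        print(f"HCI socket failed: {e}")

    # rfkill reports whether each radio is soft- or hard-blocked;
    # `rfkill unblock bluetooth` clears a soft block (usually needs root).
    print(subprocess.run(["rfkill", "list", "bluetooth"],
                         capture_output=True, text=True).stdout)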

wilkommen•8m ago
In the short term, it may be unrealistic (as you illustrate in your story) to try to successfully navigate the increasingly fragmented, fragile, and overly complex technological world we have created without genAI's assistance. But in the medium to long term, I have a hard time seeing how a world that's so complex that we can't navigate it without genAI can survive. Someday our cars will once again have to be simple enough that people of average intelligence can understand and fix them. I believe that a society that relies so much on expertise (even for everyday things) that even the experts can't manage without genAI is too fragile to last long. It can't withstand shocks.