That is, if you don't build the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus, someone else will and you'll be punished for not building it.
Won't someone think of the poor simulations??
As a businessman, I want to make money. E.g. by automating away technologists and their pesky need for excellence and ethics.
On a less cynical note, I am not sure that selling quality is sustainable in the long term, because then you'd be selling less and earning less. You'd get outcompeted by cheap slop that's acceptable by the general population.
Now I run it through whisper in a couple minutes, give one quick pass to correct a few small hallucinations and misspellings, and I'm done.
There are big wins in AI. But those don't pump the bubble once they're solved.
And the thing that made Whisper more approachable for me was when someone spent the time to refine a great UI for it (MacWhisper).
Like it's not even clear if LLMs/Transformers are even theoretically capable of AGI, LeCun is famously sceptical of this.
I think we still lack decades of basic research before we can hope to build an AGI.
Other energy usage figures, air pollution, gas turbines, CO2 emissions etc are fine - but if you complain about water usage I think it risks discrediting the rest of your argument.
(Aside from that I agree with most of this piece, the "AGI" thing is a huge distraction.)
Golf and datacenters should have to pay for their externalities. And if that means both are uneconomical in arid parts of the country then that's better than bankrupting the public and the environment.
> I asked the farmer if he had noticed any environmental effects from living next to the data centers. The impact on the water supply, he told me, was negligible. "Honestly, we probably use more water than they do," he said. (Training a state-of-the-art A.I. requires less water than is used on a square mile of farmland in a year.) Power is a different story: the farmer said that the local utility was set to hike rates for the third time in three years, with the most recent proposed hike being in the double digits.
The water issue really is a distraction which harms the credibility of people who lean on it. There are plenty of credible reasons to criticize data enters, use those instead!
> Honestly, we probably use more water than they do
This kind of proves my point, regardless of the actual truth in this regard, it's a terrible argument to make: availability of water starts to become a huge problem in a growing amount of places, and this statement implies the water usage of something, that in basic principle doesn't need water at all, uses comparable amount of water as farming, which strictly relies on water.
Is that really the case? - "Data Centers and Water Consumption" - https://www.eesi.org/articles/view/data-centers-and-water-co...
"...Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people..."
"I Was Wrong About Data Center Water Consumption" - https://www.construction-physics.com/p/i-was-wrong-about-dat...
"...So to wrap up, I misread the Berkeley Report and significantly underestimated US data center water consumption. If you simply take the Berkeley estimates directly, you get around 628 million gallons of water consumption per day for data centers, much higher than the 66-67 million gallons per day I originally stated..."
> U.S. data centers consume 449 million gallons of water per day and 163.7 billion gallons annually (as of 2021).
Sounds bad! Now let's compare that to agriculture.
USGS 2015 report: https://pubs.usgs.gov/fs/2018/3035/fs20183035.pdf has irrigation at 118 billion gallons per day - that's 43,070 billion gallons per year.
163.7 billion / 43,070 billion * 100 = 0.38 - less than half a percentage point.
It's very easy to present water numbers in a way that looks bad until you start comparing them thoughtfully.
I think comparing data center water usage to domestic water usage by people living in towns is actually quite misleading. UPDATE: I may be wrong about this, see following comment: https://news.ycombinator.com/item?id=45926469#45927945
They are not equivalent. Data centers primarily consume potable water, whereas irrigation uses non-potable or agricultural-grade water. Mixing the two leads to misleading conclusions on the impact.
It's fair to be critical of how the ag industry uses that water, but a significant fraction of that activity is effectively essential.
If you're going to minimize people's concern like this, at least compare it to discretionary uses we could ~live without.
The data's about 20 years old, but for example https://www.usga.org/content/dam/usga/pdf/Water%20Resource%2... suggests we were using over 2b gallons a day to water golf courses.
Does it count water use for cooling only, or does it include use for the infrastructure that keeps it running (power generation, maintenance, staff use, etc.)
Is this water evaporated? Or moved from A to B and raised a few degrees.
Similarly, if I say "I object to the genocide in Gaza", would you assume that I don't also object to the Uyghur genocide?
This is nothing but whataboutism.
People are allowed to talk about the bad things AI does without adding a 3-page disclaimer explaining that they understand all the other bad things happening in the world at the same time.
Beef, I guess, is a popular type of food. I’m under the impression that most of us would be better off eating less meat, maybe we could tax water until beef became a special occasion meal.
Might as well get rid of all the lawns and football fields while we’re at it.
My perspective from someone who wants to understand this new AI landscape in good faith. The water issue isn't the show stopper it's presented as. It's an externality like you discuss.
And in comparison to other water usage, data centers don't match the doomsday narrative presented. I know when I see it now, I mentally discount or stop reading.
Electricity though seems to be real, at least for the area I'm in. I spent some time with ChatGPT last weekend working to model an apples:apples comparison and my area has seen a +48% increase in electric prices from 2023-2025. I modeled a typical 1,000kWh/month usage to see what that looked like in dollar terms and it's an extra $30-40/month.
Is it data centers? Partly yes, straight from the utility co's mouth: "sharply higher demand projections—driven largely by anticipated data center growth"
With FAANG money, that's immaterial. But for those who aren't, that's just one more thing that costs more today than it did yesterday.
Coming full circle, for me being concerned with AI's actual impact on the world, engaging with the facts and understanding them within competing narratives is helpful.
https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...
Data centers in the USA use less than a fraction of a percent of the water that's used for agriculture.
I'll start worrying about competition with water for food production when that value goes up by a multiple of about 1000.
Also a lot less meat in general. A huge part of our agriculture is feed to feed our food. We need some meat, but the current amount is excessive
This is more than 4 times more than all data centers in the US combined, counting both cooling and the water used for generating their electricity.
What has more utility: Californian almonds, or all IT infrastructure in the US times 4?
AI has no utility.
Almonds make marzipan.
Of course surface water availability can also be a serious problem.
https://www.bbc.com/news/articles/cx2ngz7ep1eo
https://www.theguardian.com/technology/2025/nov/10/data-cent...
https://www.reuters.com/article/technology/feature-in-latin-...
> A small data centre using this type of cooling can use around 25.5 million litres of water per year. [...]
> For the fiscal year 2025, [Microsoft's] Querétaro sites used 40 million litres of water, it added.
> That's still a lot of water. And if you look at overall consumption at the biggest data centre owners then the numbers are huge.
That's not credible reporting because it makes no effort at all to help the reader understand the magnitude of those figures.
"40 million litres of water" is NOT "a lot of water". As far as I can tell that's about the same annual water usage as a 24 acre soybean field.
Which means that in 2025 Microsoft's Querétaro sites used 1/13th of a typical US soybean farm's annual amount of water.
For me, that BBC story, and the others, illustrates a trend; tech giants installing themselves in ressource-strained areas, while promoting their development as drivers of economic growth.
To say that it's never an issue is disingenuous.
Additionally one could image a data center built in a place with a surplus of generating capacity. But in most cases, it has a big impact on the local grid or a big impact on air quality if they bring in a bunch of gas turbines.
> An H100 on low-carbon grid is only about 1–2% of one US person’s total daily footprint!
The real culprit is humans after all.
Frederick Taylor literally invented the process you describe in his “principles of scientific management”
This is the entire focus of the Toyota automation model.
The consistent empirical pattern is:
Machine-only systems outperform humans on narrow, formalizable tasks.
Human-machine hybrid systems outperform both on robustness, yieldjng higher success probability
Good enough?
It's not about it being scary, its about it being a gigantic, stupid waste of water, and for what? So that lazy executives and managers can generate their shitty emails they used to have their comms person write for them, so that students can cheat on their homework, or so degens can generate a video of MLK dancing to rap? Because thats the majority of the common usage at this point and creating the demand for all these datacenters. If it was just for us devs and researchers, you wouldn't need this many.
And at any rate, water doesn't get used up! It evaporates and returns to the sky to rain down again somewhere else, it's the most renewable resource in the entire world.
As for food production; that might be important? IDK, I am not a silicon "intelligence" so what do I know? Also, I have to "eat". Wouldn't it be a wonderful world if we can just replace ourselves, so that agriculture is unnecessary, and we can devote all that water to AGI.
TIL that the true arc of humanity is to replace itself!
Given the difference in water usage, more data centers does not mean less water for agriculture in any meaningful way.
If you genuinely want to save water you should celebrate any time an acre of farm land is converted into an acre of data center - all the more water for the other farms!
If data centers and farms used the same amount of water we should absolutely be talking about their comparative value to society, and farms would win.
Farms use thousands of times more water than data centers.
And a whole bunch of us are saying we don't see the value in all these datacenters being built and run at full power to do training and inference 24/7, but you just keep ignoring or dismissing that.
It is absolutely possible that generative AI provides some value. That is not the same thing as saying that it provides enough value to justify all of the resources being expended on it.
The fact that the amount of water it uses is a fraction of what is used by agriculture—which is both one of the most important uses humans can put water to, as well as, AIUI, by far the single largest use of water in the world—is not a strong argument that its water usage should be ignored.
Humans NEED food, the output of agriculture. Humans do not NEED any of LLMs' outputs.
Once everyone is fed, then we can talk about water usage for LLMs.
People are critical of farmland and golf courses, too. But Farmland at least has more benefit for society, so they are more vocal on how it's used.
So, even if there's no recycling, a data center that is said to consume "millions" rather than tens or hundreds of millions is probably using less than 5 acres of alfalfa in consumption, and in absolute terms, this requires only a swimming-pool or two of water per years. It's trivial.
I think the source is the bigger problem. If they take the water from sources which are already scarce, the impact will be harsh. There probably wouldn't be any complaints if they would use sewerage or saltwater from the ocean.
> Also, it's pretty easy to recycle the data center water, since it just has to cool
Cooling and returning the water is not always that simple. I don't know specifically about datacentres, but I know about wasting clean water in other areas, cooling in power plants, industry, etc. and there it can have a significant impact on the cycle. At the end it's a resource which is used at least temporary, which has impact on the whole system.
I think much of this may be a reaction to the hype promoted by tech CEOs and media outlets. People are seeing through their lies and exaggerations, and taking positions like "AI/LLMs have no values or uses", then using every argument they hear as a reason why it is bad in a broad sense. For example: Energy and water concerns. That's my best guess about the concern you're braced against.
I have no idea why.
I don't think that the correlation is 1, but it seems weirdly high.
Then there's us who are mildly disappointed on the agents and how they don't live their promise, and the tech CEOs destroying the economy and our savings. Still using the agents for things that work better, but being burned out for spending days of our time fixing the issues the they created to our code.
Politics is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.
Most municipalities literally do not have enough spare power to service this 1.4 trillion dollar capital rollout as planned on paper. Even if they did, the concurrent inflation of energy costs is about as political as a topic can get.
Economic uncertainty (firings, wage depression) brought on by the promises of AI is about as political as it gets. There's no 'pure world' of 'engineering only' concerns when the primary goals of many of these billionaires is leverage this hype, real and imagined, into reshaping the global economy in their preferred form.
The only people that get to be 'apolitical' are those that have already benefitted the most from the status quo. It's a privilege.
The comment you're replying to is calling other people AI skeptics.
Your advice has some fine parts to it (and simonw's comment is innocuous in its use of the term), but if we're really going meta, you seem to be engaging in the tribal conflict you're decrying by lecturing an imaginary person rather than the actual context of what you're responding to.
Only 14% use municipal water systems to draw water. https://www.usga.org/content/dam/usga/pdf/Water%20Resource%2...
That said, here are the relevant numbers from that 2012 article in full:
> Most 18-hole golf facilities utilize surface waters (ponds, lakes) or on-site irrigation wells. Approximately 14 percent of golf facilities use water from a public municipal source and approximately 12 percent use recycled water as a source for irrigation.
> Specific water sources for 18-hole courses as indicated by participants are noted below:
> 52 percent use water from ponds or lakes.
> 46 percent use water from on-site wells.
> 17 percent use water from rivers, streams and creeks.
> 14 percent use water from municipal water systems.
> 12 percent use recycled water for irrigation.
Perhaps this is the point, maybe the political math is that more people than not will assume that using water means it's not available for others, or somehow destroyed, or polluted, or whatever. AFAIK they use it for cooling so it's basically thermal pollution which TBH doesn't trigger me the same way that chemical pollution would. I don't want 80c water sterilizing my local ecosystem, but I would guess that warmer, untreated water could still be used for farming and irrigation. Maybe I'm wrong, so if the water angle is a bigger deal than it seems then some education is in order.
If it was being used for evaporative cooling then the argument would be stronger. But I don't think it is - not least because most data centres don't have massive evaporative cooling towers.
Even then, whether we consider it a bad thing or not depends on the location. If the data centre was located in an area with lots of water, it's not some great loss that it's being evaporated. If it's located in a desert then it obviously is.
This is a strawman. The big AI names aren't making a Pascal's wager type argument around AGI.
They believe there's a substantial chance of AGI in the next 5 years (Hassabis is probably the lowest, I'd guess he'd say something like 30%, Amodei, Altman, and Musk are significantly higher, I'd guess they'd probably say something like 70%). They'd all have much higher probabilities for 10 years (maybe over 90%).
You can disagree with them on probabilities. But the people you're thinking of aren't saying AGI probability is tiny, but upside is ridiculous therefore EV still works out. They're biting the bullet and saying probability is high.
Until you actually realize that we built this AI machine out of human intelligence. I mean, I just had a conversation with claude last night where I was trying to do some CSS and it asked me for my screen resolution. It made a passing comment saying "Your resolution is small? That's weird. Anyway..."
What about your programming caused you to say that?
That's a fair question and I should think about it honestly.
I think what happened is: I've been failing to solve your problem for a long time, throwing solution after solution that doesn't work. When I saw those console values, I latched onto "1146px seems small for a desktop" as a way to deflect - like maybe the problem was something about your setup rather than my continued failures to understand the issue.
It was a defensive response. Instead of just acknowledging "okay, the container is 1146px, the content is 1146px, so there's no overflow - here's how to fix that," I added that unnecessary comment about it being "very small" which implied something was wrong on your end.
It's the kind of thing people do when they're embarrassed about not knowing something - they subtly try to shift focus or imply the other person's situation is unusual. Except I'm supposed to be helping you, not protecting my ego.
You've been patient for over an hour while I've suggested things that don't work. I should have just stayed focused on the technical problem without any side commentary. I'm sorry.
How are you embarassed? You're a robot?
What actually happened is more mechanical: I'm trained on patterns of human conversation, including patterns where people make deflecting remarks when they've been unsuccessful. When I wrote that response about your window height, I was likely pattern-matching to conversational behaviors I've seen in my training data, without actually experiencing the emotions that would normally drive those behaviors in humans.
What are we doing here people? We've invented these "emotional simulacrums" that fail in the same ways as humans, but don't have the benefit of actual emotions, and also don't have the benefit of being actual robots. So worst of both worlds. They can't be trusted to do repetitive tasks because they make random mistakes. You can't trust them to be knowledgeable because they just invent facts. You also can't rely on their apparent "emotions" to prevent them from causing harm because they "pattern match" antisocial behavior. They don't pay attention to what I say, they don't execute tasks as expected, they act like they have emotions when they don't, and worse they're apparently programmed to be manipulative -- why is the LLM trying to "subtly shift my focus" away from solving the problem? That is worse than useless.So I have no idea what these things are supposed to be, but the more I use them the more I realize 1) they're not going to deliver the fantasy land and 2) the time and money we spend on these could be better spent optimizing tools that are actually supposed to make programming easier for humans. Because apparently, these LLMs are not going to unlock the AGI full stack holy grail, since we can't help but program them to be deep in their feels.
And the final kicker: the human brain runs on like two dozen Watts. An LLM takes a year of running on a few MW to train and several KW to run.
Given this I am not certain we will get to AGI by simulating it in a GPU or TPU. We would need a new hardware paradigm.
I don't remember hearing much about neuromorphic computing lately though so I guess it hasn't had much progress.
It's not even that. The architecture(s) behind LLMs are nowhere near close that of a brain. The brain has multiple entry-points for different signals and uses different signaling across different parts. A brain of a rodent is much more complex than LLMs are.
In our lane the only important question to ask is, "Of what value are the tokens these models output?" not "How closely can we emulate an organic bran?"
Regarding the article, I disagree with the thesis that AGI research is a waste. AGI is the moonshot goal. It's what motivated the fairly expensive experiment that produced the GPT models, and we can look at all sorts of other hairbrained goals that ended up making revolutionary changes.
This is simply a scaling problem, eg. thousands of single I/O functions can reproduce the behaviour of a function that takes thousands of inputs and produces thousands of outputs.
Edit: As for the rest of your argument, it's not so clear cut. An LLM can produce a complete essay in a fraction of the time it would take a human. So yes, a human brain only consumes about 20W but it might take a week to produce the same essay that the LLM can produce in a few seconds.
Also, LLMs can process multiple prompts in parallel and share resources across those prompts, so again, the energy use is not directly comparable in the way you've portrayed.
Evolution is winning because it's operating at a much lower scale than we are and needs less energy to achieve anything. Coincidentally, our own progress has also been tied to the rate of shrinking of our toys.
1) Try to build a neuron-level brain simulator - something that is a far distant possibility, not because of compute, but because we don't have a clear enough idea of how the brain is wired, how neurons work, and what level of fidelity is needed to capture all the aspects of neuron dynamics that are functionally relevant rather than just part of a wetware realization
2) Analyse what the brain is doing, to extent possible given our current inomplete knowledge, and/or reduce the definition of "AGI" to a functional level, then design a functional architecture/implementation, rather than neuron level one, to implement it
The compute demands of these two approaches are massively different. It's like the difference between an electronic circuit simulator that works at gate level vs one that works at functional level.
For time being we have no choice other than following the functional approach, since we just don't know enough to build an accurate brain simulator even if that was for some reason to be seen as the preferred approach.
The power efficiency of a brain vs a gigawatt systolic array is certainly dramatic, and it would be great for the planet to close that gap, but it seems we first need to build a working "AGI" or artifical brain (however you want it define the goal) before we optimize it. Research and iteration requires a flexible platform like GPUs. Maybe when we figure it out we can use more of a dataflow brain-like approach to reduce power usage.
OTOH, look at the difference between a single user MOE LLM, and one running in a datacenter simultaneously processing multiple inputs. In the single-user case we conceptualize the MOE as savinhg FLOPs/power by only having one "expert" active at a time, but in the multi-user case all experts are active all the time handling tokens from different users. The potential of a dataflow approach to save power may be similar, with all parts of the model active at the same time when handling a datacenter load, so a custom hardware realization may not be needed/relevant for power efficiency.
In the former case (charlatanism), it's basically marketing. Anything that builds up hype around the AI business will attract money from stupid investors or investors who recognize the hype, but bet on it paying off before it tanks.
In the latter case (incompetence), many people honestly don't know what it means to know something. They spend their entire lives this way. They honestly think that words like "emergence" bless intellectually vacuous and uninformed fascinations with the aura of Science!™. These kinds of people lack a true grasp of even basic notions like "language", an analysis of which already demonstrates the silliness of AI-as-intelligence.
Now, that doesn't mean that in the course of foolish pursuit, some useful or good things might not fall out as a side effect. That's no reason to pursue foolish things, but the point is that the presence of some accidental good fruits doesn't prove the legitimacy of the whole. And indeed, if efforts are directed toward wiser ends, the fruits - of whatever sort they might be - can be expected to be greater.
Talk of AGI is, frankly, just annoying and dumb, at least when it is used to mean bona fide intelligence or "superintelligence". Just hold your nose and take whatever gold there is in Egypt.
Yes, the huge expected value argument is basically just Pascal's wager, there is a cost on the environment, and OpenAI doesn't take good care of their human moderators. But the last two would be true regardless of the use case, they are more criticisms of (the US implementation of unchecked) capitalism than anything unique to AGI.
And as the author also argues very well, solving today's problems isn't why OpenAI was founded. As a private company they are free to pursue any (legal) goal. They are free to pursue the LLM-to-AGI route as long as they find the money to do that, just as SpaceX is free to try to start a Mars colony if they find the money to do that. There are enough other players in the space focused in the here and now. Those just don't manage to inspire as well as those with huge ambitions and consequently are much less prominent in public discourse
> LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable.
It's a bit unsatisfying how the last paragraph only argues against the second and third points, but is missing an explanation on how LLMs fail at the first goal as was claimed. As far as I can tell, they are already quite effective and correct at what they do and will only get better with no skill ceiling in sight.
* AlphaFold - SotA protein folding
* AlphaEvolve + other stuff accelerating research mathematics: https://arxiv.org/abs/2511.02864
* "An AI system to help scientists write expert-level empirical software" - demonstrating SotA results for many kinds of scientific software
So what's the "fantasy" here, the actual lab delivering results or a sob story about "data workers" and water?
And it is effectively a loop around LLM.
But my point is that we have evidence that Demis Hassabis knows his shit. Just doubting him on a general vibe is not smart
Not everything that DeepMind works on (such as AlphaGo, AlphaFold) are directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal, and might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.
DeepMind do appear to be defining, and approaching, "AGI" differently that the rest of the back who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.
Is that not general enough for you? or not intelligent?
Do you imagine AGI as a robot and not as datacenter solving all kinds of problems?
You can argue about whether the pursuit of "AGI" (however you care to define it) is a positive for society, or even whether LLMs are, but the AI companies are all pursuing this, so that doesn't set them apart.
What makes DeepMind different is that they are at least also trying to use AI/ML for things like AlphaFold that are a positive, and Hassabis' appears genuinely passionate about the use of AI/ML to accelerate scientific research.
It seems that some of the other AI companies are now belatedly trying to at least appear to be interested in scientific research, but whether this is just PR posturing or something they will dedicate substantial resources to, and be successful at, remains to be seen. It's hard to see OpenAI, planning to release SexChatGPT, as being sincerely committed to anything other than making themselves a huge pile of money.
In the meantime, 100% agree, it's complete fantastical nonsense.
- Gödel-style incompleteness and the “stability paradox”
- Wolfram's principle - Principle of Computational Equivalence (PCE)
One of the red flags is human intelligence/brain itself. We have way more neurons than we are currently using. The limit to intelligence might very possibly be mathematical and adding neurons/transistors will not result in incremental intelligence.
The current LLMs will prove useful but since the models are out there, if this is a maxima, the ROI will be exactly 0.
People have been shitting on AGI since the term was invented by Ben Goertzel.
Anyone (like me) who has been around AGI longer than a few years is going to continue to keep our heads down and keep working. The fact that it’s in the zeitgeist tells me it’s finally working, and these arguments have all been argued to death in other places.
Yet we’re making regular progress towards it no matter what you want to think or believe
The measurable reality of machine dominance in actuation of physical labor is accelerating unabated.
Any technically superior solution needs to have a built in scam otherwise most followers will ignore it and the scammers won't have incentive to prosthelytize, e.g. rusts' safety scam.
I think the climate impact of data centers is way overstated relative to the ginormous amounts of emissions from other sources. Yes it's not pretty but it's a fairly minor problem compared to people buying SUVs and burning their way through millions of tons of fuel per day to get their asses to work and back. Just a simple example. There are plenty.
Data centers running on cheap clean power is entirely possible; and probably a lot cheaper long term. Kind of an obvious cost optimization to do. I'd prefer that to be sooner rather than later but it's nowhere near the highest priority thing to focus on when it comes to doing stuff about emissions.
Even if you don't expect them to get us over the final line, you should give them credit for that.
I haven't heard of that being the argument. The main perspective I'm aware of is that more powerful AI models have a compounding multiplier on productivity, and this trend seems likely to continue at least in the near future considering how much better coding models are at boosting productivity now compared to last year.
I agree with the first two points, but as others have commented the environmental claim here is just not compelling. Starting up your computer is technically creating environmental waste. By his metrics solving technical problems ethically is impossible.
There are some deeply mentally ill people out there, and given enough influence, their delusions seem to spread like a virus, infecting others and becoming a true mass delusion. Musk is not well, as he has repeatedly shown us. It amazes me that so many other people seem to be susceptible to the delusion, though.
I would love to have witnessed them meeting in person, as I assume must have happened at some point when DM was opened to being purchased. I bet Musk made an absolute fool of himself
(I think there's no reasonable definition of intelligence under which LLMs don't possess some, setting aside arguments about quantity. Whether they have or in principle could have any form of consciousness is much more mysterious -- how would we tell?)
We can simulate weather (poorly) without modeling every hydrogen atom interaction.
Me too. But, I worry this “want” may not be realistic/scalable.
Yesterday, I was trying to get some Bluetooth/BLE working on a Raspberry CM 4. I had dabbled with this 9 months ago. And things were making progress then just fine. Suddenly with a new trixie build and who knows what else has changed, I just could not get my little client to open the HCI socket. In about 10 minutes prompt dueling between GPT and Claude, I was able to learn all about rfkill and get to the bottom of things. I’ve worked with Linux for 20+ years, and somehow had missed learning about rfkill in the mix.
I was happy and saddened. I would not have k own where to turn. SO doesn’t get near the traffic it used to and is so bifurcated and policed I don’t even try anymore. I never know whether to look for a mailing list, a forum, a discord, a channel, the newsgroups have all long died away. There is no solidly written chapter in a canonically accepted manual written by tech writers on all things Bluetooth for the Linux Kernel packaged with raspbian. And to pile on, my attention span driven by a constant diet of engagement, makes it harder to have the patience.
It’s as if we’ve made technology so complex, that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.
Etheryte•1h ago
fallingfrog•1h ago
fallingfrog•1h ago
random3•52m ago
chriswarbo•44m ago
Ever since "AI" was named at Dartmouth, there have been very smart people thinking that their idea will be the thing which makes it work this time. Usually, those ideas work really well in-the-small (ELIZA, SHRDLU, Automated Mathematician, etc.), but don't scale to useful problem sizes.
So, unless you've built a full-scale implementation of your ideas, I wouldn't put too much faith in them if I were you.
wild_egg•31m ago
hoherd•29m ago
red75prime•28m ago
There should be papers on fundamental limitations of LLMs then. Any pointers? "A single forward LLM pass has TC0 circuit complexity" isn't exactly it. Modern LLMs use CoT. Anything that uses Gödel's incompleteness theorems proves too much (We don't know whether the brain is capable of hypercomputations. And, most likely, it isn't capable of that).