Electricity use of AI coding agents

https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
37•linolevan•6h ago

Comments

linolevan•1h ago
Had a small discussion about this on the OP's post on bsky. A somewhat interesting discussion over there.

https://bsky.app/profile/simonpcouch.com/post/3mcuf3eazzs2c

HNisCIS•1h ago
LLMs don't use much energy at all to run; they use it all at the beginning, for training, which is happening constantly right now.

TLDR this is, intentionally or not, an industry puff piece that completely misunderstands the problem.

Also, even if everyone is effectively running a dishwasher cycle every day, this is still a problem we can't just ignore; that's still a massive increase in ecological impact.

linolevan•1h ago
I'm not convinced that LLM training uses so much energy that it really matters in the big picture. You can train a (terrible) LLM on a laptop[1], and frankly that's less energy efficient than just training it on a rented cloud GPU.

Most of the innovation happening today is in post-training rather than pre-training, which is good for people concerned with energy use because post-training is relatively cheap (I was able to post-train a ~2b model in less than 6 hours on a rented cluster[2]).

[1]: https://github.com/lino-levan/wubus-1
[2]: https://huggingface.co/lino-levan/qwen3-1.7b-smoltalk
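
For a rough sense of scale on that 6-hour post-training run, here is a sketch with assumed numbers; the cluster size and per-GPU power below are placeholders, not what was actually used:

    # Very rough estimate of what a ~6-hour post-training run might draw.
    # GPUS and KW_PER_GPU are assumptions, not the actual rented cluster.
    GPUS = 8           # assumed number of GPUs in the cluster
    KW_PER_GPU = 0.7   # assumed average draw per GPU, including overhead
    HOURS = 6          # run length mentioned above

    energy_kwh = GPUS * KW_PER_GPU * HOURS
    print(f"~{energy_kwh:.0f} kWh")   # ~34 kWh under these assumptions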

simonw•1h ago
The training cost for a model is constant. The more individual use that model gets, the lower the training-cost-per-inference-query gets, since that one-time training cost is shared across every inference prompt.

It is true that there are always more training runs going, and I don't think we'll ever find out how much energy was spent on experimental or failed training runs.
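
A toy calculation of that amortization argument; all of the numbers below are made-up placeholders, only the shape of the curve matters. The per-query share of training falls toward the marginal inference cost as usage grows:

    # One-time training energy spread over every query, plus marginal inference.
    TRAINING_ENERGY_KWH = 50_000_000   # hypothetical one-time training cost
    ENERGY_PER_QUERY_KWH = 0.001       # hypothetical marginal cost per query

    def total_energy_per_query(num_queries: int) -> float:
        return TRAINING_ENERGY_KWH / num_queries + ENERGY_PER_QUERY_KWH

    for n in (10**6, 10**9, 10**12):
        print(f"{n:>15,d} queries -> {total_energy_per_query(n):.6f} kWh/query")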

dietr1ch•39m ago
> The training cost for a model is constant

Constant until the next release? The battle for the benchmark-winning model is driving cadence up, and this competition probably puts a higher cost on training and evaluation too.

simonw•38m ago
Sure. By "constant" there I meant it doesn't change depending on the number of people who use the model.

kingstnap•1h ago
You underestimate the amount of inference and very much overestimate what training involves.

Training is more or less the same as doing inference on an input token twice (forward and backward pass). But because it's offline and predictable, it can be done fully batched with very high utilization (efficiently).

Training is, as a guesstimate, maybe 100 trillion total tokens, but these guys apparently do inference at quadrillion-token monthly scales.
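
Taking those guesses at face value, a quick comparison (both token counts are the guesses above, not measurements):

    # Treat training as ~2x the per-token compute of inference (forward + backward).
    TRAIN_TOKENS = 100e12            # ~100 trillion training tokens (guess)
    INFER_TOKENS_PER_MONTH = 1e15    # ~1 quadrillion inference tokens/month (guess)

    train_as_inference_tokens = 2 * TRAIN_TOKENS
    months = train_as_inference_tokens / INFER_TOKENS_PER_MONTH
    print(f"inference matches the whole training run in ~{months:.1f} months")
    # -> ~0.2 months, i.e. roughly a week of inference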

jeffbee•59m ago
Training is pretty much irrelevant in the scheme of global energy use. The global airline industry uses the energy needed to train a frontier model every three minutes, and unlike AI training, the energy for air travel is 100% straight-into-your-lungs fossil carbon.
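
A quick sanity check on that "every three minutes" claim, using rough assumed figures for global jet fuel burn and for one training run; neither number below is sourced:

    JET_FUEL_TONNES_PER_YEAR = 300e6   # assumed global jet fuel burn per year
    KWH_PER_TONNE_JET_FUEL = 12_000    # approximate energy content of jet fuel
    TRAINING_RUN_KWH = 20e6            # assumed energy for one frontier training run

    aviation_kwh_per_minute = (JET_FUEL_TONNES_PER_YEAR * KWH_PER_TONNE_JET_FUEL
                               / (365 * 24 * 60))
    minutes = TRAINING_RUN_KWH / aviation_kwh_per_minute
    print(f"one training run every ~{minutes:.1f} minutes of global aviation")
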
pluralmonad•36m ago
Not to mention doesn't aviation fuel still make heavy (heh) use of lead?

TSiege•25m ago
I think that's only true for propeller planes, which use leaded gasoline. Jet fuel is just kerosene.

simonw•1h ago
At first glance this looks like a credible set of calculations to me. Here's the conclusion:

> So, if I wanted to analogize the energy usage of my use of coding agents, it’s something like running the dishwasher an extra time each day, keeping an extra refrigerator, or skipping one drive to the grocery store in favor of biking there.

That's for someone spending about $15-$20 a day on Claude Code, estimated as the equivalent of 4,400 "typical queries" to an LLM.
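
For scale, the arithmetic behind that analogy looks roughly like this; the per-query energy figure is an assumed placeholder, not a number from the article:

    QUERIES_PER_DAY = 4_400        # equivalent "typical queries" cited above
    WH_PER_QUERY = 0.3             # assumed energy per typical query, in Wh
    DISHWASHER_CYCLE_KWH = 1.2     # rough figure for one dishwasher cycle

    daily_kwh = QUERIES_PER_DAY * WH_PER_QUERY / 1000
    print(f"{daily_kwh:.2f} kWh/day ~ {daily_kwh / DISHWASHER_CYCLE_KWH:.1f} dishwasher cycles")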

ggm•1h ago
As long as it's unaccounted for by users, it's at best an externality. I think it may demand regulation to force this cost to the surface.

Electricity and cooling incur wider costs and consequences.

simonw•1h ago
That's hardly unique to data centers.

I'm all for regulation that makes businesses pay for their externalities - I'd argue that's a key economic role that a government should play.

jeffbee•1h ago
I don't see how this follows. Data center operators buy energy and this is almost their only operating expense. Their products are priced to reflect this. The fact that basic AI features are free reflects the fact that they use almost no energy.
arrowleaf•50m ago
I would be surprised if AI prices reflect their current cost to provide the service, even inference costs. With so much money flowing into AI the goal isn't to make money, it's to grow faster than the competition.
simonw•39m ago
I remain confident that most AI labs are not selling API access for less than it costs to serve the models.

If that's so common then what's your theory as to why Anthropic aren't price competitive with GPT-5.2?

scottcha•1h ago
That is a pretty good article, although one factor not mentioned that we see having a huge impact on energy is batch size; that would be hard to estimate with the data he has.

We've only launched to friends and family, but I'll share this here since it's relevant: we have a service which actually optimizes and measures the energy of your AI use: https://portal.neuralwatt.com if you want to check it out. We also have a tools repo we put together that shows some demonstrations of surfacing energy metadata into your tools: https://github.com/neuralwatt/neuralwatt-tools/

Our underlying technology is really about OS-level energy optimization and datacenter grid flexibility, so if you are on the pay-by-kWh plan you get additional value as we continue to roll out new optimizations.

DM me with your email and I'd be happy to add some additional credits to you.

ccgibson•32m ago
To add a bit more to what @scottcha is saying: overall GPU load has a fairly significant impact on the energy per result. Energy per result is inversely related to load: since the idle TDP of these servers is significant, the more requests the energy gets spread across, the more efficient the system becomes. I imagine Anthropic is able to harness that efficiency, since their servers are presumably far from idle :)
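
A toy model of that effect, with illustrative numbers; none of these figures are real server measurements:

    IDLE_POWER_W = 600        # assumed idle draw of a GPU server
    PER_REQUEST_POWER_W = 50  # assumed extra draw per concurrent request
    SECONDS_PER_REQUEST = 2   # assumed time to serve one request

    def energy_per_result_wh(batch_size: int) -> float:
        total_power = IDLE_POWER_W + PER_REQUEST_POWER_W * batch_size
        return total_power * SECONDS_PER_REQUEST / 3600 / batch_size

    for b in (1, 8, 64):
        print(f"batch={b:>3} -> {energy_per_result_wh(b):.3f} Wh/result")
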
nospice•32m ago
I'm not sure I like this method of accounting for it. The critics of LLMs tend to conflate the costs of training LLMs with the cost of generation. But this makes the opposite error: it pretends that training isn't happening as a consequence of consumer demand. There are enormous resources poured into it on an ongoing basis, so it feels like it needs to be amortized on top of the per-token generation costs.

At some point, we might end up in a steady state where the models are as good as they can be and the training arms race is over, but we're not there yet.
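
One way to express that accounting, with invented totals; only the structure of the calculation is the point, since it shows the ongoing-training surcharge landing on top of the per-token generation cost:

    INDUSTRY_TRAINING_KWH_PER_MONTH = 500e6     # assumed industry-wide training energy per month
    INDUSTRY_INFERENCE_TOKENS_PER_MONTH = 1e15  # assumed industry-wide inference tokens per month
    MARGINAL_KWH_PER_TOKEN = 1e-6               # assumed per-token generation energy

    surcharge = INDUSTRY_TRAINING_KWH_PER_MONTH / INDUSTRY_INFERENCE_TOKENS_PER_MONTH
    print(f"training surcharge per token: {surcharge:.1e} kWh")
    print(f"effective energy per token:   {MARGINAL_KWH_PER_TOKEN + surcharge:.1e} kWh")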

TSiege•30m ago
The challenge with no longer developing new models is keeping your model up to date, which as of today requires an entire training run. Maybe they can do that less often, or they'll come up with a way to update a model after it's trained. Maybe we'll move on to something other than LLMs.

mikeaskew4•22m ago
I have kids and a dishwasher (which, with kids, runs quite often), but I'm not convinced I'm doing worse at energy consumption.

bramhaag•9m ago
Only tangentially related, but today I found a repo that appears to have been developed with AI assistance, and the costs for running the agents are reported in the PRs. For example, 50 USD to remove some code: https://github.com/coder/mux/pull/1658

A 26,000-year astronomical monument hidden in plain sight (2019)

https://longnow.org/ideas/the-26000-year-astronomical-monument-hidden-in-plain-sight/
311•mkmk•6h ago•64 comments

California is free of drought for the first time in 25 years

https://www.latimes.com/california/story/2026-01-09/california-has-no-areas-of-dryness-first-time...
184•thnaks•1h ago•76 comments

The challenges of soft delete

https://atlas9.dev/blog/soft-delete.html
63•buchanae•3h ago•38 comments

Instabridge has acquired Nova Launcher

https://novalauncher.com/nova-is-here-to-stay
118•KORraN•5h ago•88 comments

Cloudflare zero-day: Accessing any host globally

https://fearsoff.org/research/cloudflare-acme
36•2bluesc•8h ago•9 comments

Provably unmasking malicious behavior through execution traces

https://arxiv.org/abs/2512.13821
16•PaulHoule•2h ago•3 comments

The Unix Pipe Card Game

https://punkx.org/unix-pipe-game/
172•kykeonaut•7h ago•49 comments

Electricity use of AI coding agents

https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
37•linolevan•6h ago•22 comments

Which AI Lies Best? A game theory classic designed by John Nash

https://so-long-sucker.vercel.app/
28•lout332•2h ago•19 comments

I'm addicted to being useful

https://www.seangoedecke.com/addicted-to-being-useful/
473•swah•13h ago•233 comments

Are Arrays Functions?

https://futhark-lang.org/blog/2026-01-16-are-arrays-functions.html
8•todsacerdoti•1d ago•1 comments

Inside the secret world of Japanese snack bars

https://www.bbc.com/travel/article/20260116-inside-the-secret-world-of-japanese-snack-bars
81•rmason•3h ago•52 comments

Running Claude Code dangerously (safely)

https://blog.emilburzo.com/2026/01/running-claude-code-dangerously-safely/
271•emilburzo•12h ago•223 comments

Our approach to age prediction

https://openai.com/index/our-approach-to-age-prediction/
54•pretext•5h ago•108 comments

RCS for Business

https://developers.google.com/business-communications/rcs-business-messaging
24•sshh12•20h ago•25 comments

Building Robust Helm Charts

https://www.willmunn.xyz/devops/helm/kubernetes/2026/01/17/building-robust-helm-charts.html
10•will_munn•1d ago•0 comments

Show HN: Agent Skills Leaderboard

https://skills.sh
28•andrewqu•3h ago•14 comments

Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs

https://github.com/mastra-ai/mastra
72•calcsam•7h ago•30 comments

Maintenance: Of Everything, Part One

https://press.stripe.com/maintenance-part-one
56•mitchbob•5h ago•11 comments

Unconventional PostgreSQL Optimizations

https://hakibenita.com/postgresql-unconventional-optimizations
256•haki•10h ago•34 comments

Dockerhub for Skill.md

https://skillregistry.io/
16•tomaspiaggio12•9h ago•10 comments

Lunar Radio Telescope to Unlock Cosmic Mysteries

https://spectrum.ieee.org/lunar-radio-telescope
6•rbanffy•2h ago•0 comments

TopicRadar – Track trending topics across Hacker News, GitHub, ArXiv, and more

https://apify.com/mick-johnson/topic-radar
14•MickolasJae•9h ago•3 comments

DOGE employees may have improperly accessed social security data, DOJ says

https://www.axios.com/2026/01/20/doge-employees-social-security-information-court-filing
45•belter•1h ago•2 comments

IPv6 is not insecure because it lacks a NAT

https://www.johnmaguire.me/blog/ipv6-is-not-insecure-because-it-lacks-nat/
27•johnmaguire•5h ago•12 comments

Claude Chill: Fix Claude Code's Flickering in Terminal

https://github.com/davidbeesley/claude-chill
4•behnamoh•1h ago•0 comments

LG UltraFine Evo 6K 32-inch Monitor Review

https://www.wired.com/review/lg-ultrafine-evo-6k-32-inch-monitor/
52•tosh•3d ago•87 comments

Nvidia Stock Crash Prediction

https://entropicthoughts.com/nvidia-stock-crash-prediction
336•todsacerdoti•8h ago•282 comments

Channel3 (YC S25) Is Hiring

https://www.ycombinator.com/companies/channel3/jobs/3DIAYYY-backend-engineer
1•aschiff1•12h ago

Fast Concordance: Instant concordance on a corpus of >1,200 books

https://iafisher.com/concordance/
28•evakhoury•4d ago•2 comments