frontpage.

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•6m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•18m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•23m ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•29m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•30m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•30m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•32m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•38m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•41m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
3•DesoPK•45m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•47m ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
32•mfiguiere•52m ago•17 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•54m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•56m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
4•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comment

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
4•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
5•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Mistral's new "environmental audit" shows how much AI is hurting the planet

https://arstechnica.com/ai/2025/07/mistrals-new-environmental-audit-shows-how-much-ai-is-hurting-the-planet/
19•pjmlp•6mo ago

Comments

linotype•6mo ago
I’ll stop using ChatGPT when private jets are banned, families drop down to one car and stop having more than two kids. Seriously there are probably more emissions from a few supertankers/cargo carriers than all LLMs combined.
Zacharias030•6mo ago
3 if you raise them vegan!
zekrioca•6mo ago
No one is asking people to stop using the Internet. However, becoming aware of one’s consumption is important, and as of now, people are generally oblivious to their digital footprint beyond power consumption, which is only one of the aspects.
nsksl•6mo ago
Of course they are. We continuously hear that we have to stop having cars, kids, and consuming in a way that the elites have described as irresponsible, all whilst they travel the world in private jets where they definitely do not eat tofu.
zekrioca•6mo ago
I guess you should just walk outside a bit.
readthenotes1•6mo ago
It made me look. A private jet emits 4,900 g of CO2 per mile (how the author's mind didn't explode mixing the measurement systems is beyond me) vs. one prompt emitting 1.14 g.

https://flybitlux.com/what-is-the-carbon-footprint-of-a-priv...

BriggyDwiggs42•6mo ago
Honestly not as dramatic as I'd have hoped. 5000 ChatGPT prompts propelling a jet for a mile is a pretty surprising amount of energy usage.
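A quick sanity check of the arithmetic in this exchange (both figures are as quoted in the thread, not independently verified):

```python
# Figures quoted upthread: a private jet emits ~4,900 g CO2 per mile,
# and one average prompt emits ~1.14 g CO2 (per Mistral's audit).
JET_G_PER_MILE = 4_900
PROMPT_G_CO2 = 1.14

prompts_per_jet_mile = JET_G_PER_MILE / PROMPT_G_CO2
print(f"~{prompts_per_jet_mile:.0f} prompts per private-jet mile")
# roughly 4,300 prompts, in the ballpark of the "5000" quoted above
```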
lostmsu•6mo ago
Yeah, but it's multiple millions per flight
BriggyDwiggs42•6mo ago
Yeah, but ChatGPT has like a hundred million users, right?
lostmsu•6mo ago
I fail to see the point of this comment. We are comparing 100 million users to a single jet flight. If you believe that a flight is more important than a few million conversations for anyone, sure, I can see how you can be concerned.
linotype•6mo ago
Most private jet flights are more than one mile in length, and there are many thousands of them per day. There's nowhere you can go (effectively) that you can't get to with a major airline.

https://www.pbs.org/newshour/science/carbon-pollution-from-h...

Buxato•6mo ago
WTF, the level of .... in this phrase is astonishing IMO, especially when you said the kids "issue". Seriously WTF
linotype•6mo ago
> WTF, the level of .... in this phrase is astonishing IMO, especially when you said the kids "issue". Seriously WTF

What?

votepaunchy•6mo ago
Let’s talk about fewer kids after people stop owning pets.
M4v3R•6mo ago
> environmental impact of a single average prompt (generating 400 tokens' worth of text, or about a page's worth) was relatively minimal: just 1.14 grams of CO2 emitted and 45 milliliters of water consumed

While it’s non-zero, it doesn’t strike me as “hurting the planet”, as some people would want me to believe I’m doing when I decide to use LLMs.

Yes, the training has a much bigger impact, but the benefits of training are shared with all users and it’s a one-time cost per model.

I did the math, and if I’m right the environmental footprint of a single LLM training run, emitting 13,600 metric tons of CO2 and consuming 187,333 cubic meters of water annually, represents 0.000026% of global greenhouse gas emissions and 0.0000047% of freshwater use.
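The percentages above can be reproduced with rough global totals. The training figures come from the thread; the global totals (~52 Gt CO2e emitted and ~4,000 km³ of freshwater withdrawn per year) are my own assumed denominators, not from the article:

```python
# Training footprint quoted above vs. assumed global annual totals.
TRAIN_CO2_T = 13_600        # metric tons CO2 per training run (from the thread)
TRAIN_WATER_M3 = 187_333    # cubic meters of water annually (from the thread)
GLOBAL_CO2_T = 52e9         # assumed: ~52 Gt CO2e global emissions per year
GLOBAL_WATER_M3 = 4e12      # assumed: ~4,000 km^3 freshwater withdrawal per year

co2_pct = TRAIN_CO2_T / GLOBAL_CO2_T * 100
water_pct = TRAIN_WATER_M3 / GLOBAL_WATER_M3 * 100
print(f"{co2_pct:.6f}% of global CO2e, {water_pct:.7f}% of freshwater use")
# matches the ~0.000026% and ~0.0000047% quoted above
```

With these denominators, multiplying by the "hundred simultaneous trainings" discussed below still lands at hundredths of a percent.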

zekrioca•6mo ago
How many trainings and retrainings are happening as we speak? How many more do you expect once the transition replaces millions of jobs with AI? What about inference across all of that?
M4v3R•6mo ago
> How many trainings and retraining are happening as we speak?

Quite a lot. Let's assume a hundred different LLMs of this scale are being trained at the same time. If you multiply the global use percentages by a hundred you'll get: 0.0026% of global greenhouse gas emissions and 0.00047% of freshwater use. Still a literal drop in the bucket.

> How many more when the transition from millions of jobs replaced by AI do you expect?

Dunno, but the argument is that I should feel bad about my current impact on the environment as I use my LLM to autocomplete my code or answer my questions. We have no idea what the future will hold. We can and of course should do everything to minimize the environmental impact of everything we do, but that's a different discussion. For example switching to clean energy sources will make a big positive impact on these numbers.

> What about inference across all of that?

The report speaks about that; the inference cost is marginal when compared to the training cost (~15% for CO2 and ~9% for water consumption).

zekrioca•6mo ago
> Quite a lot. Let's assume a hundred different LLMs of this scale are being trained at the same time.

It won’t be 100, you are underestimating it to make the number be small, ignoring the fact that people are talking about GWs’ worth of continuous power, and not counting the refresh rate of GPUs (every 3-5 years the whole infrastructure is renewed).

> Dunno, but the argument is that I should feel bad about my current impact on the environment as I use my LLM to autocomplete my code or answer my questions.

That’s not the argument. The argument is that you should be aware of your consumption and therefore the impact it has. Right now people use everything as a “dumb” magical API that just spits things out from nowhere with no apparent impact.

> The report speaks about that, the inference cost in marginal when compared…

Don’t ignore how many of these are happening as we speak. ChatGPT went from 0 to 100 million users within months, all submitting hundreds of queries.

M4v3R•6mo ago
> It won’t be 100, you are underestimating it to make the number be small

Make it 1,000 (I seriously doubt there are one thousand simultaneous training runs of Mistral Large 2-scale models going on at any given moment) and it's still a drop in the bucket.

> not counting the refresh rate of GPUs (every 3-5 years the whole infrastructure is renewed)

I am accounting for this by citing annual usage instead of one-time cost.

zekrioca•6mo ago
Not sure what you think demand is, but operators are building 10 GW AI datacenters. Assuming a GPU consumes ~1 kW, the number is potentially (upper bound) 10 GW / 1 kW = 10 million GPUs, way larger than ‘1000’. For one company.
lostmsu•6mo ago
Still a drop in the bucket considering world total electricity production is about 10 TW.

Where did you read one company? I found 10 GW new capacity next year for the entire industry.
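The GPU-count and power-share arithmetic in this exchange, sketched out. All three inputs (10 GW build-out, ~1 kW per GPU, 10 TW world electricity) are the thread's own rough figures, not verified numbers:

```python
DATACENTER_W = 10e9       # 10 GW AI datacenter build-out (from the thread)
GPU_W = 1e3               # ~1 kW per GPU, rough upper-bound assumption
WORLD_ELECTRIC_W = 10e12  # ~10 TW, the figure used upthread

gpus = DATACENTER_W / GPU_W
share_pct = DATACENTER_W / WORLD_ELECTRIC_W * 100
print(f"~{gpus:.0e} GPUs; {share_pct:.1f}% of the quoted world electricity figure")
# 10 million GPUs, yet still only ~0.1% of the quoted world total
```

This is why both sides can be right on the raw numbers: 10 million GPUs is vastly more than "1,000" training runs, while 10 GW remains a small fraction of the quoted global figure.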

zekrioca•6mo ago
It is not a drop in the bucket when we are talking about a factor of 1,000,000 (and not 100 as you initially calculated), on par with buildings and transportation, and only behind agriculture.
lostmsu•6mo ago
Me? Also, 100 was the number of trained LLMs. Do you think there will be 1 000 000 trained at the same time at some point?
zekrioca•6mo ago
Sorry, @OP, not you :)

Yes, there will be. But the point is that potentially, all of these GPUs will be at 100% at all times, which makes the 1 000 000 realistic.

tomhow•6mo ago
Mistral reports on the environmental impact of LLMs - https://news.ycombinator.com/item?id=44651661 - July 2025 (56 comments)
riffraff•6mo ago
Is this article showing an AI slop header image with credits to Getty Images? I'm deeply confused.
joegibbs•6mo ago
It’s definitely a render: look how clear the small text is, and you can see the lower-resolution ground texture in the foreground.
M95D•6mo ago
The surprising conclusion for me from this article is not how much LLMs hurt the planet, but how much video streaming hurts the planet!