While it’s non-zero, it doesn’t strike me as “hurting the planet,” as some people would have me believe I’m doing when I decide to use LLMs.
Yes, the training has a much bigger impact, but the benefits of training are shared with all users, and it’s a one-time cost per model.
I did the math, and if I’m right, the environmental footprint of a single LLM training run, emitting 13,600 metric tons of CO2 and consuming 187,333 cubic meters of water annually, represents 0.000026% of global greenhouse gas emissions and 0.0000047% of freshwater use.
Quite a lot. Let's assume a hundred different LLMs of this scale are being trained at the same time. If you multiply the global use percentages by a hundred you'll get: 0.0026% of global greenhouse gas emissions and 0.00047% of freshwater use. Still a literal drop in the bucket.
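Here's a quick sanity check of those percentages. The per-training-run figures (13,600 t CO2; 187,333 m³ water) are from the cited report; the global baselines are my own rough assumptions (~52.4 Gt CO2e/year of greenhouse gas emissions, ~4,000 km³/year of freshwater withdrawals):

```python
# Global baselines -- rough assumptions, not from the report:
GLOBAL_GHG_T = 52.4e9      # metric tons CO2e per year
GLOBAL_WATER_M3 = 4.0e12   # cubic meters per year

# Per-training-run figures from the cited report:
train_co2_t = 13_600       # metric tons CO2
train_water_m3 = 187_333   # cubic meters of water

co2_pct = train_co2_t / GLOBAL_GHG_T * 100
water_pct = train_water_m3 / GLOBAL_WATER_M3 * 100

print(f"one run:  {co2_pct:.6f}% of CO2, {water_pct:.7f}% of water")
print(f"100 runs: {co2_pct * 100:.4f}% of CO2, {water_pct * 100:.5f}% of water")
```

With those baselines, one run comes out to ~0.000026% of CO2 and ~0.0000047% of water, and a hundred simultaneous runs to ~0.0026% and ~0.00047%, matching the numbers above.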
> How many more do you expect during the transition, when millions of jobs are replaced by AI?
Dunno, but the argument is that I should feel bad about my current impact on the environment as I use my LLM to autocomplete my code or answer my questions. We have no idea what the future will hold. We can, and of course should, do everything possible to minimize the environmental impact of everything we do, but that's a different discussion. For example, switching to clean energy sources will make a big positive impact on these numbers.
> What about inference across all of that?
The report speaks about that: the inference cost is marginal compared to the training cost (~15% for CO2 and ~9% for water consumption).
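Taking the report's overheads at face value, adding inference on top of one training run barely moves the global shares. The baselines are the same rough assumptions as above (~52.4 Gt CO2e/year, ~4,000 km³/year):

```python
# Global baselines -- rough assumptions, not from the report:
GLOBAL_GHG_T = 52.4e9      # metric tons CO2e per year
GLOBAL_WATER_M3 = 4.0e12   # cubic meters per year

# Training plus the report's inference overhead (~15% CO2, ~9% water):
total_co2_t = 13_600 * 1.15
total_water_m3 = 187_333 * 1.09

co2_pct = total_co2_t / GLOBAL_GHG_T * 100
water_pct = total_water_m3 / GLOBAL_WATER_M3 * 100
print(f"{co2_pct:.6f}% of global CO2, {water_pct:.7f}% of freshwater use")
```

That lands around 0.00003% of CO2 and 0.0000051% of water per model, training and inference combined.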
It won’t be 100; you are underestimating it to make the number look small. You're ignoring the fact that people are talking about gigawatts of continuous power, and not counting the GPU refresh rate (every 3-5 years the whole infrastructure is renewed).
> Dunno, but the argument is that I should feel bad about my current impact on the environment as I use my LLM to autocomplete my code or answer my questions.
That’s not the argument. The argument is that you should be aware of your consumption and therefore the impact it has. Right now people use everything as a “dumb” magical API that just spits things out from nowhere with no impact.
> The report speaks about that, the inference cost in marginal when compared…
Don’t ignore how many of these are happening as we speak. ChatGPT went from 0 to 100 million users within months, all submitting hundreds of queries.
Make it 1,000 (I seriously doubt there are a thousand simultaneous training runs of Mistral Large 2-scale models going on at any given moment) and it's still a drop in the bucket.
> not counting the refresh rate of GPUs (every 3-5 years the whole infrastructure is renewed
I am accounting for this by citing annual usage instead of one-time cost.