Mistral’s new “environmental audit” shows how much AI is hurting the planet | Individual prompts don’t cost much, but billions together can have aggregate impact

https://arstechnica.com/ai/2025/07/mistrals-new-environmental-audit-shows-how-much-ai-is-hurting-the-planet/

by Hrmbee

1 comment
  1. Details:

    >Through its audit, Mistral found that the marginal “inference time” environmental impact of a single average prompt (generating 400 tokens’ worth of text, or about a page’s worth) was relatively minimal: just 1.14 grams of CO2 emitted and 45 milliliters of water consumed. Through its first 18 months of operation, though, the combination of model training and running millions (if not billions) of those prompts led to a significant aggregate impact: 20.4 ktons of CO2 emissions (comparable to 4,500 average internal combustion-engine passenger vehicles operating for a year, according to the Environmental Protection Agency) and the evaporation of 281,000 cubic meters of water (enough to fill about 112 Olympic-sized swimming pools).
    >
    >Comparing Mistral’s environmental impact numbers to those of other common Internet tasks helps put the AI’s environmental impact in context. Mistral points out, for instance, that the incremental CO2 emissions from one of its average LLM queries are equivalent to those of watching 10 seconds of a streaming show in the US (or 55 seconds of the same show in France, where the energy grid is notably cleaner). It’s also equivalent to sitting on a Zoom call for anywhere from four to 27 seconds, according to numbers from the Mozilla Foundation. And spending 10 minutes writing an email that’s read fully by one of its 100 recipients emits as much CO2 as 22.8 Mistral prompts, according to numbers from Carbon Literacy.
    >
    >…
    >
    >Mistral’s numbers are broadly comparable to other studies that have sought to estimate AI’s environmental impact. A study from researchers at the University of California, Riverside, for instance, estimated the average US AI data center used for OpenAI’s GPT-3 consumed nearly 17 ml of water per LLM prompt. And a 2024 study published in the journal Nature estimated an average of 2.2g of CO2 emissions per query for ChatGPT (across training and inference time).
    >
    >Compared to those previous third-party estimates, the fact that Mistral provided information directly for this latest study definitely lends some additional weight to its reported numbers. Still, Mistral writes that its data represents “a first approximation” of the model’s total environmental impact, with important estimates used for the life-cycle impact of GPUs, for instance.
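    As a back-of-envelope check, the quoted per-prompt figures scale to aggregates straightforwardly. This is an illustrative sketch, not Mistral's methodology: the one-billion prompt count is a hypothetical input, and it covers inference only (Mistral's 20.4 kt total also includes model training).

```python
# Scale Mistral's quoted per-prompt inference figures to a hypothetical
# prompt count. Per-prompt numbers are from the article excerpt above.
CO2_PER_PROMPT_G = 1.14      # grams of CO2 per average 400-token prompt
WATER_PER_PROMPT_ML = 45.0   # milliliters of water per prompt
OLYMPIC_POOL_M3 = 2_500      # nominal Olympic pool volume, cubic meters

def aggregate_impact(prompts: int) -> dict:
    """Return aggregate CO2 (metric tons) and water (m^3) for N prompts."""
    co2_tonnes = prompts * CO2_PER_PROMPT_G / 1e6    # grams -> metric tons
    water_m3 = prompts * WATER_PER_PROMPT_ML / 1e6   # ml -> cubic meters
    return {
        "co2_tonnes": co2_tonnes,
        "water_m3": water_m3,
        "olympic_pools": water_m3 / OLYMPIC_POOL_M3,
    }

# One billion prompts works out to roughly 1,140 t of CO2 and 45,000 m^3
# of water (about 18 Olympic pools) at inference time alone.
print(aggregate_impact(1_000_000_000))
```

    The same scaling is consistent with the article's own figures: 281,000 m³ of water divided by a 2,500 m³ pool gives the "about 112 Olympic-sized swimming pools" quoted above.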

    It’s good to see additional research on the resources needed to run these systems and on some of their downstream environmental consequences. More data will help, and hopefully this company can help shift the culture in the sector toward being more open about what these companies are doing and how.