GPT-5 may consume as much power daily as 1.5 million US homes

OpenAI’s latest model, GPT-5, is drawing scrutiny from researchers for what they say could be an immense increase in energy consumption – and for the company’s refusal to disclose official power usage figures.

Experts say GPT-5’s enhanced capabilities, from building websites to solving PhD-level science problems, come at a significant resource cost.

Early independent benchmarks suggest it may consume vastly more electricity per response than earlier versions of ChatGPT.

In mid-2023, answering a simple question – like providing a recipe – might have taken GPT-4 roughly 2 watt-hours, about the energy a standard incandescent light bulb uses in two minutes.

Researchers at the University of Rhode Island’s AI lab found that GPT-5 could use as much as 40 watt-hours for a similar-length answer of about 1,000 tokens, with an average usage of just over 18 watt-hours, significantly higher than GPT-4o, OpenAI’s previous flagship model.

Nidhal Jegham, a researcher at the lab, said this aligns with expectations for a model believed to be several times larger than GPT-4.

At GPT-5’s average consumption rate, and assuming ChatGPT’s reported 2.5 billion daily requests, the model could theoretically consume enough electricity each day to supply 1.5 million US homes.
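As a rough sanity check, the arithmetic behind that figure can be reproduced in a few lines. The per-home consumption value – roughly 10,500 kWh of electricity per year, or about 29 kWh per day – is an assumption based on typical US household figures, not a number given in the article:

```python
# Back-of-the-envelope check of the "1.5 million homes" figure.
# Assumed (not from the article): an average US home uses ~10,500 kWh/year.

avg_wh_per_query = 18.0          # URI lab's reported average, just over 18 Wh
daily_queries = 2.5e9            # ChatGPT's reported daily requests
home_kwh_per_day = 10_500 / 365  # assumed average US household consumption

daily_energy_kwh = avg_wh_per_query * daily_queries / 1_000
homes_equivalent = daily_energy_kwh / home_kwh_per_day

print(f"Daily energy: {daily_energy_kwh / 1e6:.1f} GWh")
print(f"Equivalent homes: {homes_equivalent / 1e6:.2f} million")
```

Under those assumptions the total works out to roughly 45 GWh per day, or on the order of 1.5 million homes – consistent with the researchers’ estimate.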

OpenAI has not released energy data for any of its models since GPT-3 in 2020, which had 175 billion parameters.

CEO Sam Altman published general resource figures in June – 0.34 watt-hours and 0.000085 gallons of water per query – but these were not tied to a specific model and lacked documentation.

Rakesh Kumar, a professor at the University of Illinois, said larger models inevitably draw more energy, both in training and in inference.

“A more complex model like GPT-5 consumes more power both during training and during inference. It’s also targeted at long thinking … I can safely say that it’s going to consume a lot more power than GPT-4,” Kumar said.

Shaolei Ren of the University of California, Riverside, noted that GPT-5’s multimodal reasoning abilities and extended computation time likely push its footprint even higher.

“Based on the model size, the amount of resources [used by GPT-5] should be orders of magnitude higher than that for GPT-3,” said Ren, who studies the resource footprint of AI.

However, GPT-5’s efficiency may be aided by a “mixture-of-experts” architecture, which activates only parts of the model per query, and by deployment on more advanced hardware than previous iterations. Still, both Ren and Kumar warned that GPT-5’s reasoning mode could multiply resource use five- to tenfold.
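The efficiency argument rests on how mixture-of-experts routing works: a small gating network scores a pool of expert sub-networks and only the top-scoring few run for each token, so most of the model’s parameters sit idle on any given query. The sketch below is a toy illustration of that idea with made-up sizes and a simple top-k gate; it does not reflect OpenAI’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2   # toy sizes, not GPT-5's
x = rng.normal(size=d_model)           # one token's hidden state

# Each "expert" is a tiny feed-forward layer; the router is a linear gate.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(n_experts, d_model))

scores = router @ x                    # gate score per expert
chosen = np.argsort(scores)[-top_k:]   # keep only the top-k experts
weights = np.exp(scores[chosen])
weights /= weights.sum()               # softmax over the chosen experts

# Only the selected experts run; the rest stay idle, which is why
# active compute per query is far below the total parameter count.
output = sum(w * (experts[i] @ x) for w, i in zip(weights, chosen))
print(f"ran {top_k}/{n_experts} experts; output shape {output.shape}")
```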

A recent study by French AI company Mistral reinforced the industry’s concerns, finding a strong correlation between model size and environmental impact: a model ten times larger produces roughly ten times the carbon, water and power footprint for the same output.

OpenAI continues to argue that scaling up model size is essential to reaching artificial general intelligence (AGI).

“It appears that you can spend arbitrary amounts of money and get continuous and predictable gains,” Altman said in February, although he recently acknowledged that GPT-5 still cannot learn autonomously – a crucial requirement for matching human capabilities.

On Friday, he added that the company should focus on long-term growth and invest heavily in training and computing power, even if that means postponing profitability.