{"id":281522,"date":"2025-07-22T04:05:18","date_gmt":"2025-07-22T04:05:18","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/281522\/"},"modified":"2025-07-22T04:05:18","modified_gmt":"2025-07-22T04:05:18","slug":"sam-altman-says-openai-could-own-100-million-gpus-by-the-end-of-the-year-estimated-at-3-trillion-worth-of-silicon-chatgpt-maker-to-cross-well-over-1-million-ai-gpus-by-end-of-year","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/281522\/","title":{"rendered":"Sam Altman says OpenAI could own 100 million GPUs by the end of the year, estimated at $3 trillion worth of silicon \u2014 ChatGPT maker to cross &#8216;well over 1 million&#8217; AI GPUs by end of year"},"content":{"rendered":"<p>OpenAI CEO Sam Altman <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/tsmc-execs-allegedly-dismissed-openai-ceo-sam-altman-as-podcasting-bro\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tech-industry\/tsmc-execs-allegedly-dismissed-openai-ceo-sam-altman-as-podcasting-bro\" target=\"_blank\" rel=\"noopener\">isn\u2019t exactly known for thinking small<\/a>, but his latest comments push the boundaries of even his usual brand of audacious tech talk. <a data-analytics-id=\"inline-link\" href=\"https:\/\/x.com\/sama\/status\/1947057625780396512\" target=\"_blank\" data-url=\"https:\/\/x.com\/sama\/status\/1947057625780396512\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\">In a new post on X<\/a>, Altman revealed that OpenAI is on track to bring \u201cwell over 1 million GPUs online\u201d by the end of this year. 
That alone is an astonishing number\u2014consider that Elon Musk\u2019s xAI, <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/elon-musks-xai-allegedly-powers-colossus-supercomputer-facility-using-illegal-generators\" target=\"_blank\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/elon-musks-xai-allegedly-powers-colossus-supercomputer-facility-using-illegal-generators\" rel=\"noopener\">which made waves earlier this year with its Grok 4 model<\/a>, runs on about 200,000 Nvidia H100 GPUs. OpenAI will have five times that capacity, and even that isn\u2019t enough for Altman. \u201cVery proud of the team&#8230;\u201d he wrote, \u201cbut now they better get to work figuring out how to 100x that lol.\u201d<\/p>\n<blockquote class=\"twitter-tweet hawk-ignore\" data-lang=\"en\">\n<p lang=\"en\" dir=\"ltr\">we will cross well over 1 million GPUs brought online by the end of this year! very proud of the team but now they better get to work figuring out how to 100x that lol<a href=\"https:\/\/twitter.com\/cantworkitout\/status\/1947057625780396512\" data-url=\"https:\/\/twitter.com\/cantworkitout\/status\/1947057625780396512\" target=\"_blank\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" rel=\"noopener\">July 20, 2025<\/a><\/p>\n<\/blockquote>\n<p>The \u201clol\u201d might make it sound like he\u2019s joking, but Altman\u2019s track record suggests otherwise. 
Back in February, he admitted that OpenAI had to slow the rollout of GPT\u20114.5 because they were literally \u201c<a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/openai-has-run-out-of-gpus-says-sam-altman-gpt-4-5-rollout-delayed-due-to-lack-of-processing-power\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/openai-has-run-out-of-gpus-says-sam-altman-gpt-4-5-rollout-delayed-due-to-lack-of-processing-power\" target=\"_blank\" rel=\"noopener\">out of GPUs.<\/a>\u201d That wasn\u2019t just a minor hiccup; it was a wake-up call considering <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/nvidias-blackwell-gpus-are-sold-out-for-the-next-12-months-chipmaker-to-gain-market-share-in-2025\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/nvidias-blackwell-gpus-are-sold-out-for-the-next-12-months-chipmaker-to-gain-market-share-in-2025\" target=\"_blank\" rel=\"noopener\">Nvidia is also sold out till next year<\/a> for its premier AI hardware. Altman has since made compute scaling a top priority, pursuing partnerships and infrastructure projects that look more like national-scale operations than corporate IT upgrades. When OpenAI hits its 1 million GPU milestone later this year, it won\u2019t just be a social media flex\u2014it\u2019ll be cementing itself as the single largest consumer of AI compute on the planet.<\/p>\n<p>Anyhow, let\u2019s talk about that 100x goal, because it\u2019s exactly as wild as it sounds. At current market prices, 100 million GPUs would cost around $3 trillion\u2014almost the GDP of the UK\u2014and that\u2019s before factoring in the power requirements or the data centers needed to house them. 
There\u2019s no way <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/nvidia-expected-to-produce-450000-blackwell-ai-gpus-in-q4-potential-dollar10b-in-revenue-for-the-chipmaker\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/nvidia-expected-to-produce-450000-blackwell-ai-gpus-in-q4-potential-dollar10b-in-revenue-for-the-chipmaker\" target=\"_blank\" rel=\"noopener\">Nvidia could even produce that many chips<\/a> in the near term, let alone could anyone supply the energy needed to power them all. Yet that\u2019s the kind of moonshot thinking that drives Altman. It\u2019s less about a literal target and more about laying the foundation for AGI (Artificial General Intelligence), whether that means custom silicon, exotic new architectures, or something we haven\u2019t even seen yet. OpenAI clearly wants to find out.<\/p>\n<p>The clearest proof of this ambition is <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/openais-gargantuan-data-center-is-even-bigger-than-elon-musks-xai-colossus-worlds-largest-300-mw-ai-data-center-in-texas-could-reach-record-1-gigawatt-scale-by-next-year\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/openais-gargantuan-data-center-is-even-bigger-than-elon-musks-xai-colossus-worlds-largest-300-mw-ai-data-center-in-texas-could-reach-record-1-gigawatt-scale-by-next-year\" target=\"_blank\" rel=\"noopener\">OpenAI\u2019s Texas data center, now the world\u2019s largest single facility, which consumes around 300 MW<\/a>\u2014enough to power a mid-sized city\u2014and is set to hit 1 gigawatt by mid-2026. 
Such massive and unpredictable energy demands are already drawing scrutiny from Texas grid operators, who warn that stabilizing voltage and frequency for a site of this scale requires costly, rapid infrastructure upgrades that even state utilities struggle to match. Regardless, the buildout presses on, and the industry is betting the bubble won&#8217;t burst.<\/p>\n<blockquote class=\"twitter-tweet hawk-ignore\" data-lang=\"en\">\n<p lang=\"en\" dir=\"ltr\">Fun math: 100,000,000 GPUs \u00d7 $30,000 per GPU = $3,000,000,000,000, or $3 trillion<a href=\"https:\/\/twitter.com\/cantworkitout\/status\/1947081267767607480\" data-url=\"https:\/\/twitter.com\/cantworkitout\/status\/1947081267767607480\" target=\"_blank\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" rel=\"noopener\">July 20, 2025<\/a><\/p>\n<\/blockquote>\n<p>The company isn\u2019t just hoarding Nvidia hardware, either. While <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tag\/microsoft\" data-auto-tag-linker=\"true\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tag\/microsoft\" target=\"_blank\" rel=\"noopener\">Microsoft<\/a>\u2019s Azure remains its primary cloud backbone, <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/oracle-has-reportedly-placed-an-order-for-usd40-billion-in-nvidia-ai-gpus-for-a-new-openai-data-center\" target=\"_blank\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/oracle-has-reportedly-placed-an-order-for-usd40-billion-in-nvidia-ai-gpus-for-a-new-openai-data-center\" rel=\"noopener\">OpenAI has partnered with Oracle<\/a> to build its own data centers and is rumored to be exploring Google\u2019s TPU accelerators to diversify its compute stack. 
It\u2019s part of a larger arms race, where everyone from <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/meta-plans-multi-gw-data-center-thats-nearly-the-size-of-manhattan-zuckerberg-promises-enormous-ai-splash-as-company-uses-tents-to-try-and-keep-up-with-rate-of-expansion\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/meta-plans-multi-gw-data-center-thats-nearly-the-size-of-manhattan-zuckerberg-promises-enormous-ai-splash-as-company-uses-tents-to-try-and-keep-up-with-rate-of-expansion\" target=\"_blank\" rel=\"noopener\">Meta<\/a> to <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/amazons-usd8-billion-anthropic-investment-rumors-suggest-it-would-rather-sell-ai-infrastructure-than-compete-with-chatgpt-and-gemini\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/tech-industry\/artificial-intelligence\/amazons-usd8-billion-anthropic-investment-rumors-suggest-it-would-rather-sell-ai-infrastructure-than-compete-with-chatgpt-and-gemini\" target=\"_blank\" rel=\"noopener\">Amazon<\/a> is building in-house AI chips and betting big on high-bandwidth memory (HBM) to keep these monster models fed. Altman, for his part, has hinted at OpenAI\u2019s own custom chip plans, which would make sense given the company\u2019s growing scale.<\/p>\n<p>Altman\u2019s comments also double as a not-so-subtle reminder of how quickly this field moves. A year ago, a company boasting 10,000 GPUs sounded like a heavyweight contender. Now, even 1 million feels like just another stepping stone toward something much bigger. OpenAI\u2019s infrastructure push isn\u2019t just about faster training or smoother model rollouts; it\u2019s about securing a long-term advantage in an industry where compute is the ultimate bottleneck. 
And, of course, Nvidia would be more than happy to provide the building blocks.<\/p>\n<p>Is 100 million GPUs realistic? Not today, and not without breakthroughs in manufacturing, energy efficiency, and cost. But that\u2019s the point. Altman\u2019s vision isn\u2019t bound by what\u2019s available now; it\u2019s aimed at what\u2019s possible next. The 1 million GPUs coming online by year\u2019s end mark a new baseline for AI infrastructure, one that seems to be <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/tensorwave-just-deployed-the-largest-amd-gpu-training-cluster-in-north-america-features-8-192-mi325x-ai-accelerators-tamed-by-direct-liquid-cooling\" target=\"_blank\" data-before-rewrite-localise=\"https:\/\/www.tomshardware.com\/pc-components\/gpus\/tensorwave-just-deployed-the-largest-amd-gpu-training-cluster-in-north-america-features-8-192-mi325x-ai-accelerators-tamed-by-direct-liquid-cooling\" rel=\"noopener\">diversifying by the day<\/a>. Everything beyond that is ambition, and if Altman\u2019s history is any guide, it would be foolish to dismiss it as mere hype.<\/p>\n<p>Follow <a data-analytics-id=\"inline-link\" href=\"https:\/\/news.google.com\/publications\/CAAqLAgKIiZDQklTRmdnTWFoSUtFSFJ2YlhOb1lYSmtkMkZ5WlM1amIyMG9BQVAB\" target=\"_blank\" data-url=\"https:\/\/news.google.com\/publications\/CAAqLAgKIiZDQklTRmdnTWFoSUtFSFJ2YlhOb1lYSmtkMkZ5WlM1amIyMG9BQVAB\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" rel=\"noopener\">Tom&#8217;s Hardware on Google News<\/a> to get our up-to-date news, analysis, and reviews in your feeds. 
Make sure to click the Follow button.<\/p>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI CEO Sam Altman isn\u2019t exactly known for thinking small, but his latest comments push the boundaries of&hellip;\n","protected":false},"author":2,"featured_media":281523,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-281522","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114894911878686814","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/281522","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=281522"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/281522\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/281523"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=281522"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=281522"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=281522"}],"curies":[{"
name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}