{"id":14713,"date":"2025-08-21T21:08:08","date_gmt":"2025-08-21T21:08:08","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/14713\/"},"modified":"2025-08-21T21:08:08","modified_gmt":"2025-08-21T21:08:08","slug":"deepseek-launches-gpt-5-competitor-optimized-for-chinese-chips","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/14713\/","title":{"rendered":"DeepSeek launches GPT-5 competitor optimized for Chinese chips"},"content":{"rendered":"<p>Chinese AI startup DeepSeek shocked the world in January with an AI model, called R1, that rivaled OpenAI\u2019s and Anthropic\u2019s top LLMs. It was built at a fraction of the cost of those other models, using far fewer <a href=\"https:\/\/fortune.com\/company\/nvidia\/\" target=\"_blank\" aria-label=\"Go to https:\/\/fortune.com\/company\/nvidia\/\" class=\"sc-19cc8fd2-0 kuWizV\" rel=\"nofollow noopener\">Nvidia<\/a> chips, and was released for free. Now, just two weeks after OpenAI debuted its latest model, GPT-5, DeepSeek is back with an update to its flagship V3 model that experts say matches GPT-5 on some benchmarks\u2014and is strategically priced to undercut it.<\/p>\n<p>DeepSeek\u2019s new V3.1 model was quietly released in a message to one of its groups on WeChat, China\u2019s all-in-one messaging and social app, as well as on the Hugging Face platform. Its debut touches several of today\u2019s biggest AI narratives at once. DeepSeek is a core part of China\u2019s broader push to develop, deploy, and control advanced AI systems without relying on foreign technology. (And in fact, DeepSeek\u2019s new V3 model is specifically tuned to perform well on Chinese-made chips.)<\/p>\n<p>While U.S. companies have been hesitant to embrace DeepSeek\u2019s models, they\u2019ve been widely adopted in China and increasingly in other parts of the world. Even some American firms have built applications on DeepSeek\u2019s R1 reasoning model. 
At the same time, researchers warn that the models\u2019 outputs often hew closely to Chinese Communist Party\u2013approved narratives \u2014 raising questions about their neutrality and trustworthiness.<\/p>\n<p>China\u2019s AI push goes beyond DeepSeek: Its industry also includes models such as Alibaba\u2019s Qwen, Moonshot AI\u2019s Kimi, and Baidu\u2019s Ernie. DeepSeek\u2019s new release, however, coming just after OpenAI\u2019s GPT-5\u2014a rollout that fell short of industry watchers\u2019 high expectations\u2014underscores Beijing\u2019s determination to keep pace with, or even leapfrog, top U.S. labs.<\/p>\n<p>OpenAI is concerned about China and DeepSeek<\/p>\n<p>DeepSeek\u2019s efforts are certainly keeping U.S. labs on their toes. At a recent dinner with reporters, OpenAI CEO Sam Altman <a href=\"https:\/\/fortune.com\/2025\/08\/05\/openai-launches-open-source-llm-ai-model-gpt-oss-120b-deepseek\/\" target=\"_self\" aria-label=\"Go to https:\/\/fortune.com\/2025\/08\/05\/openai-launches-open-source-llm-ai-model-gpt-oss-120b-deepseek\/\" class=\"sc-19cc8fd2-0 kuWizV\" rel=\"nofollow noopener\">said that rising competition<\/a> from Chinese open-source models, including DeepSeek\u2019s, influenced his company\u2019s decision to release its own open-weight models two weeks ago.\u00a0<\/p>\n<p>\u201cIt was clear that if we didn\u2019t do it, the world was gonna be mostly built on Chinese open source models,\u201d Altman said. \u201cThat was a factor in our decision, for sure. Wasn\u2019t the only one, but that loomed large.\u201d<\/p>\n<p>In addition, last week the U.S. granted Nvidia and AMD licenses to export China-specific AI chips \u2014 including Nvidia\u2019s H20 \u2014 but only if they agree to hand over 15% of revenue from those sales to Washington. 
Beijing quickly pushed back, moving to restrict purchases of Nvidia chips after Commerce Secretary Howard Lutnick told CNBC on July 15: \u201cWe don\u2019t sell them our best stuff, not our second-best stuff, not even our third-best.\u201d\u00a0<\/p>\n<p>By optimizing V3.1 for Chinese-made chips, DeepSeek is signaling resilience against U.S. export controls and a drive to reduce reliance on Nvidia. In its WeChat post, the company noted that the new model format is optimized for \u201csoon-to-be-released next-generation domestic chips.\u201d\u00a0<\/p>\n<p>Altman, at that same dinner, warned that the U.S. may be underestimating the complexity and seriousness of China\u2019s progress in AI \u2014 and said export controls alone likely aren\u2019t a reliable solution.<\/p>\n<p>\u201cI\u2019m worried about China,\u201d he said.<\/p>\n<p>Less of a leap, but still striking incremental advances<\/p>\n<p>Technically, what makes the new DeepSeek model notable is how it was built, with a few advances that would be invisible to consumers. But for developers, these innovations make V3.1 cheaper to run and more versatile than many closed and more expensive rival models.\u00a0<\/p>\n<p>For instance, V3.1 is huge: 685 billion parameters, which is on the level of many top \u201cfrontier\u201d models. But its \u201cmixture-of-experts\u201d design means only a fraction of the model activates when answering any query, keeping computing costs lower for developers. 
And unlike earlier DeepSeek models, which split tasks that could be answered instantly based on the model\u2019s pre-training from those that required step-by-step reasoning, V3.1 combines both fast answers and reasoning in one system.<\/p>\n<p>GPT-5, as well as the most recent models from Anthropic and <a href=\"https:\/\/fortune.com\/company\/alphabet\/\" target=\"_blank\" aria-label=\"Go to https:\/\/fortune.com\/company\/alphabet\/\" class=\"sc-19cc8fd2-0 kuWizV\" rel=\"nofollow noopener\">Google<\/a>, have a similar ability. But few open-weight models have been able to do this so far. V3.1\u2019s hybrid architecture is \u201cthe biggest feature by far,\u201d Ben Dickson, a tech analyst and founder of the TechTalks blog, told Fortune.\u00a0<\/p>\n<p>Others point out that while this DeepSeek model is less of a leap than the company\u2019s R1 model (a reasoning model distilled from the original V3 that shocked the world in January), the new V3.1 is still striking. \u201cIt is pretty impressive that they continue making non-marginal improvements,\u201d said William Falcon, founder and CEO of AI developer platform Lightning AI. But he added that he would expect OpenAI to respond if its own open-source model \u201cstarts to meaningfully lag,\u201d and pointed out that the DeepSeek model is harder for developers to get into production, while OpenAI\u2019s version is fairly easy to deploy.\u00a0<\/p>\n<p>For all the technical details, though, DeepSeek\u2019s latest release highlights that AI is increasingly seen as part of a simmering technological cold war between the U.S. and China. With that in mind, if Chinese companies can build better AI models for what they claim is a fraction of the cost, U.S. competitors have reason to worry about staying ahead.\u00a0\n<\/p>\n<p><strong>Introducing the 2025 Fortune Global 500<\/strong>, the definitive ranking of the biggest companies in the world. 
<a href=\"https:\/\/fortune.com\/ranking\/global500\/?&amp;itm_source=fortune&amp;itm_medium=article_tout&amp;itm_campaign=plea_text\" target=\"_self\" aria-label=\"Go to https:\/\/fortune.com\/ranking\/global500\/?&amp;itm_source=fortune&amp;itm_medium=article_tout&amp;itm_campaign=plea_text\" class=\"sc-19cc8fd2-0 kuWizV\" rel=\"nofollow noopener\">Explore this year&#8217;s list.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Chinese AI startup DeepSeek shocked the world in January with an AI model, called R1, that rivaled OpenAI&hellip;\n","protected":false},"author":2,"featured_media":14714,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,289,290,381,8736,18,19,17,307,82],"class_list":{"0":"post-14713","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-china","12":"tag-deepseek","13":"tag-eire","14":"tag-ie","15":"tag-ireland","16":"tag-openai","17":"tag-technology"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/14713","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=14713"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/14713\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/14714"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-js
on\/wp\/v2\/media?parent=14713"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=14713"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=14713"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}