{"id":59574,"date":"2025-04-29T07:10:11","date_gmt":"2025-04-29T07:10:11","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/59574\/"},"modified":"2025-04-29T07:10:11","modified_gmt":"2025-04-29T07:10:11","slug":"alibaba-unveils-qwen3-a-family-of-hybrid-ai-reasoning-models","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/59574\/","title":{"rendered":"Alibaba unveils Qwen3, a family of &#8216;hybrid&#8217; AI reasoning models"},"content":{"rendered":"<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Chinese tech company Alibaba on Monday <a href=\"https:\/\/qwenlm.github.io\/blog\/qwen3\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">released<\/a> Qwen3, a family of AI models the company claims matches and in some cases outperforms the best models available from Google and OpenAI.<\/p>\n<p class=\"wp-block-paragraph\">Most of the models are \u2014 or soon will be \u2014 available for download under an \u201copen\u201d license from AI dev platform <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen3-67dd247413f0e2e4f653967f\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Hugging Face<\/a> and <a href=\"https:\/\/github.com\/QwenLM\/Qwen3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">GitHub<\/a>. They range in size from 0.6 billion parameters to 235 billion parameters. Parameters roughly correspond to a model\u2019s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.<\/p>\n<p class=\"wp-block-paragraph\">The rise of China-originated model series like Qwen have increased the pressure on American labs such as OpenAI to deliver more capable AI technologies. They\u2019ve also led policymakers to implement restrictions aimed at limiting the ability of Chinese AI companies to obtain the <a href=\"https:\/\/techcrunch.com\/2025\/04\/15\/nvidia-h20-chip-exports-hit-with-license-requirement-by-us-government\/\" target=\"_blank\" rel=\"noopener\">chips<\/a> <a href=\"https:\/\/techcrunch.com\/2025\/04\/16\/amd-takes-800m-charge-on-us-license-requirement-for-ai-chips\/\" target=\"_blank\" rel=\"noopener\">necessary<\/a> to train models. <\/p>\n<blockquote class=\"wp-block-quote twitter-tweet is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\">Introducing Qwen3! <\/p>\n<p class=\"wp-block-paragraph\">We release and open-weight Qwen3, our latest large language models, including 2 MoE models and 6 dense models, ranging from 0.6B to 235B. Our flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general\u2026 <a rel=\"nofollow\" href=\"https:\/\/t.co\/JWZkJeHWhC\">pic.twitter.com\/JWZkJeHWhC<\/a><\/p>\n<p class=\"wp-block-paragraph\">\u2014 Qwen (@Alibaba_Qwen) <a rel=\"nofollow noopener\" href=\"https:\/\/twitter.com\/Alibaba_Qwen\/status\/1916962087676612998?ref_src=twsrc%5Etfw\" target=\"_blank\">April 28, 2025<\/a><\/p>\n<\/blockquote>\n<p class=\"wp-block-paragraph\">According to Alibaba, Qwen3 models are \u201chybrid\u201d models in the sense that they can take time and \u201creason\u201d through complex problems or answer simpler requests quickly. 
Some of the models also adopt a mixture-of-experts (MoE) architecture, which can make answering queries more computationally efficient. In an MoE model, the network is split into many smaller, specialized "expert" sub-networks, and only a few of them are activated for any given piece of input, so each query exercises just a fraction of the model's total parameters.
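To make the routing idea concrete, here is a toy top-k MoE layer. It is a generic illustration in PyTorch, not Alibaba's implementation; the layer sizes, expert count, and top_k value are invented for the example.

```python
# Toy top-k mixture-of-experts layer in PyTorch (illustrative only; not Qwen3's
# actual architecture, and all sizes here are made up).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small, independent feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                                # (num_tokens, n_experts)
        weights, expert_idx = scores.topk(self.top_k, dim=-1)  # keep only the best experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is where MoE's compute savings come from.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask][:, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(10, 64)       # a batch of 10 token embeddings
print(ToyMoE()(tokens).shape)      # torch.Size([10, 64])
```

A learned router scores the experts for each token and only the top-scoring few actually run, which keeps per-query compute well below the full parameter count. The flagship's name reflects the same principle: per Qwen's announcement, Qwen3-235B-A22B holds 235 billion parameters in total but activates roughly 22 billion per token.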
The Qwen3 models support 119 languages, Alibaba says, and were trained on a dataset of nearly 36 trillion tokens. Tokens are the raw bits of data that a model processes; 1 million tokens is equivalent to about 750,000 words. Alibaba says Qwen3 was trained on a combination of textbooks, "question-answer pairs," code snippets, AI-generated data, and more.

These improvements, along with others, greatly boosted Qwen3's capabilities compared to its predecessor, Qwen2, Alibaba says. None of the Qwen3 models is head and shoulders above top-of-the-line recent models like OpenAI's o3 and o4-mini, but they're strong performers nonetheless.

On Codeforces, a platform for programming contests, the largest Qwen3 model, Qwen3-235B-A22B, just beats out OpenAI's o3-mini and Google's Gemini 2.5 Pro. Qwen3-235B-A22B also bests o3-mini on the latest version of AIME, a challenging math benchmark, and on BFCL, a test of a model's ability to "reason" about problems.

But Qwen3-235B-A22B isn't publicly available, at least not yet.

[Image: Alibaba's internal benchmark results for Qwen3. Image credits: Alibaba]

The largest public Qwen3 model, Qwen3-32B, is still competitive with a number of proprietary and open AI models, including Chinese AI lab DeepSeek's R1. Qwen3-32B surpasses OpenAI's o1 model on several tests, including the coding benchmark LiveCodeBench.

Alibaba says Qwen3 also "excels" in tool-calling capabilities, as well as in following instructions and copying specific data formats. In addition to the downloadable weights, Qwen3 is available from cloud providers including Fireworks AI and Hyperbolic.

Tuhin Srivastava, co-founder and CEO of AI cloud host Baseten, said that Qwen3 is another point in the trend line of open models keeping pace with closed-source systems such as OpenAI's.

"The U.S. is doubling down on restricting sales of chips to China and purchases from China, but models like Qwen 3 that are state-of-the-art and open … will undoubtedly be used domestically," he told TechCrunch. "It reflects the reality that businesses are both building their own tools [as well as] buying off the shelf via closed-model companies like Anthropic and OpenAI."