{"id":19862,"date":"2025-04-14T18:22:09","date_gmt":"2025-04-14T18:22:09","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/19862\/"},"modified":"2025-04-14T18:22:09","modified_gmt":"2025-04-14T18:22:09","slug":"openai-debuts-its-gpt-4-1-flagship-ai-model","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/19862\/","title":{"rendered":"OpenAI debuts its GPT-4.1 flagship AI model"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI has introduced GPT-4.1, a successor to the GPT-4o multimodal AI model launched by the company last year. During <a href=\"https:\/\/www.youtube.com\/watch?v=kA-P9ood-cE\" target=\"_blank\" rel=\"noopener\">a livestream on Monday<\/a>, OpenAI said <a href=\"https:\/\/openai.com\/index\/gpt-4-1\/\" target=\"_blank\" rel=\"noopener\">GPT-4.1 has an even larger context<\/a> window and is better than GPT-4o in \u201cjust about every dimension,\u201d with big improvements to coding and instruction following.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">GPT-4.1 is now available to developers, along with two smaller model versions. That includes GPT-4.1 Mini, which, <a href=\"https:\/\/www.theverge.com\/2024\/7\/18\/24200714\/openai-new-cheaper-smarter-model-gpt-4o-mini\" target=\"_blank\" rel=\"noopener\">like its predecessor,<\/a> is more affordable for developers to tinker with, and GPT-4.1 Nano, an even more lightweight model that OpenAI says is its \u201csmallest, fastest, and cheapest\u201d one yet.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">All three models can process up to one million tokens of context \u2014 the text, images, or videos included in a prompt. That\u2019s far more than GPT-4o\u2019s 128,000-token limit. 
\u201cWe trained GPT\u20114.1 to reliably attend to information across the full 1 million context length,\u201d OpenAI says <a href=\"https:\/\/openai.com\/index\/gpt-4-1\/\" target=\"_blank\" rel=\"noopener\">in a post announcing<\/a> the models. \u201cWe\u2019ve also trained it to be far more reliable than GPT\u20114o at noticing relevant text, and ignoring distractors across long and short context lengths.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The launch comes as OpenAI plans to phase out its <a href=\"https:\/\/www.theverge.com\/2023\/3\/14\/23638033\/openai-gpt-4-chatgpt-multimodal-deep-learning\" target=\"_blank\" rel=\"noopener\">two-year-old GPT-4 model<\/a> from ChatGPT on April 30th, announcing in a changelog that recent upgrades to GPT\u20114o make it a \u201cnatural successor\u201d to replace it. OpenAI also plans to deprecate the GPT-4.5 preview in the API on July 14th, as \u201cGPT\u20114.1 offers improved or similar performance on many key capabilities at much lower cost and latency.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">OpenAI is also set to debut the full version of its o3 reasoning model and an o4 mini reasoning model any day now, with references having already been spotted in the latest ChatGPT web release <a href=\"https:\/\/x.com\/btibor91\/status\/1910237861674353108\">by AI engineer Tibor Blaho<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI has introduced GPT-4.1, a successor to the GPT-4o multimodal AI model launched by the company last year.&hellip;\n","protected":false},"author":2,"featured_media":19863,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,12,1318,326,53,16,15],"class_list":{"0":"post-19862","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-news","11":"tag-openai","12":"tag-tech","13":"tag-technology","14":"tag-uk","15":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114337713437521174","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/19862","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=19862"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/19862\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/19863"}],"wp:attachment":[{"h
ref":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=19862"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=19862"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=19862"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}