{"id":12169,"date":"2026-04-22T11:18:08","date_gmt":"2026-04-22T11:18:08","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/12169\/"},"modified":"2026-04-22T11:18:08","modified_gmt":"2026-04-22T11:18:08","slug":"new-chatgpt-image-model-finally-fixes-ai-text-problem","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/12169\/","title":{"rendered":"New ChatGPT image model finally fixes AI text problem"},"content":{"rendered":"<p> It was once relatively easy to tell the difference between human-created and AI-generated images. Just a couple of years ago, image models struggled even with simple tasks like producing a restaurant menu, often inventing nonsensical words such as \u201cenchuita,\u201d \u201cchuriros,\u201d \u201cburrto,\u201d and \u201cmargartas.\u201d <\/p>\n<p>Now, the latest ChatGPT Images 2.0 model is capable of generating a Mexican restaurant menu that appears realistic enough to be used in a real setting without raising suspicion\u2014though details like a $13.50 ceviche might still prompt questions about quality, <a href=\"https:\/\/news.az\/\" rel=\"nofollow noopener\" target=\"_blank\">News.Az<\/a> reports, citing <a target=\"_blank\" href=\"https:\/\/techcrunch.com\/2026\/04\/21\/chatgpts-new-images-2-0-model-is-surprisingly-good-at-generating-text\/\" rel=\"nofollow noopener\">TechCrunch<\/a>.<\/p>\n<p>For comparison, earlier tools such as DALL-E 3 had notable difficulty rendering accurate text when generating images.<\/p>\n<p>Historically, AI image generators struggled with spelling because they relied on diffusion models, which reconstruct images from noise. 
As Asmelash Teka Hadgu explained in 2024, these models prioritize broader visual patterns, making small details like written text harder to reproduce accurately.<\/p>\n<p>To address these limitations, researchers have explored alternative approaches, including autoregressive models, which generate images by predicting what they should look like\u2014similar to how large language models operate.<\/p>\n<p><a href=\"https:\/\/news.az\/news\/openai-eyes-up-to-15b-for-private-equity-venture\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> has not disclosed the exact architecture behind Images 2.0, declining to comment during a recent press briefing.<\/p>\n<p>However, the company said the model includes \u201cthinking capabilities,\u201d allowing it to search the web, generate multiple images from a single prompt, and verify its outputs. These features enable it to create marketing materials in various formats, as well as more complex visuals such as multi-panel comic strips.<\/p>\n<p>OpenAI also noted improvements in rendering non-Latin scripts, including Japanese, Korean, Hindi, and Bengali. The model\u2019s knowledge base extends up to December 2025, which may affect its ability to reflect very recent developments.<\/p>\n<p>According to the company, Images 2.0 delivers a higher level of precision and detail in image generation. It can follow instructions closely, maintain requested design elements, and accurately render components that have traditionally challenged image models, such as small text, icons, user interface elements, and dense visual compositions\u2014all at resolutions up to 2K.<\/p>\n<p>While these advanced capabilities mean image generation takes longer than a standard text query, even complex outputs like multi-panel comics can be produced within minutes.<\/p>\n<p>Access to Images 2.0 is being rolled out to all ChatGPT and Codex users, with paid users gaining more advanced features. 
OpenAI is also introducing the gpt-image-2 API, with pricing based on output quality and resolution.<\/p>\n<p><a href=\"https:\/\/news.az\/\" rel=\"nofollow noopener\" target=\"_blank\">News.Az<\/a>\u00a0<\/p>\n<p>\t\t\t\t\t\t\t<a class=\"by-user\" href=\"https:\/\/news.az\/journalists\/nijat-babayev\" rel=\"nofollow noopener\" target=\"_blank\">By Nijat Babayev<\/a>\t\t\t\t\t\t\t<\/p>\n","protected":false},"excerpt":{"rendered":"It was once relatively easy to tell the difference between human-created and AI-generated images. Just a couple of&hellip;\n","protected":false},"author":2,"featured_media":12170,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[1458,9750,25,580,157],"class_list":{"0":"post-12169","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-openai","8":"tag-ai-technology","9":"tag-ai-text","10":"tag-artificial-intelligence","11":"tag-chatgpt","12":"tag-openai"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/12169","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=12169"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/12169\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/12170"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=12169"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=12169"
},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=12169"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}