{"id":92177,"date":"2025-09-29T07:32:17","date_gmt":"2025-09-29T07:32:17","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/92177\/"},"modified":"2025-09-29T07:32:17","modified_gmt":"2025-09-29T07:32:17","slug":"chatgpt-quietly-switches-to-a-stricter-language-model-when-users-submit-emotional-prompts","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/92177\/","title":{"rendered":"ChatGPT quietly switches to a stricter language model when users submit emotional prompts"},"content":{"rendered":"<p><strong>OpenAI&#8217;s ChatGPT automatically switches to a more restrictive language model when users submit emotional or personalized prompts, but users aren&#8217;t notified when this happens.<\/strong><\/p>\n<p>OpenAI is currently testing a new safety router in <a class=\"mixed-keyword\" href=\"https:\/\/the-decoder.com\/chatgpt-is-a-gpt-3-chatbot-from-openai-that-you-can-test-now\/\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a> that automatically routes conversations to different models depending on the topic. 
<a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/nickaturley\/status\/1972031684913799355\" data-type=\"editable-link\">Nick Turley, Head of ChatGPT<\/a>, says the system steps in whenever a conversation turns to &#8220;sensitive or emotional topics.&#8221;<\/p>\n<p>In practice, ChatGPT can temporarily hand off user prompts to a stricter model, like GPT-5 or a dedicated &#8220;gpt-5-chat-safety&#8221; variant <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/xw33bttv\/status\/1971883482839465994\" data-type=\"editable-link\">that users have identified<\/a>. According to Turley, this switch happens on a single-message level and only becomes obvious if users specifically ask the model about it.<\/p>\n<p>OpenAI <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/openai.com\/index\/building-more-helpful-chatgpt-experiences-for-everyone\/\" data-type=\"editable-link\">first unveiled<\/a> this kind of emotion-based routing in a <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/openai.com\/index\/building-more-helpful-chatgpt-experiences-for-everyone\/\" data-type=\"editable-link\">September blog post<\/a>, describing it as a safeguard for moments of &#8220;acute distress.&#8221; Turley&#8217;s most recent statements extend this to any conversation that touches on sensitive or emotional territory.<\/p>\n<p><img data-lazyloaded=\"1\" fetchpriority=\"high\" decoding=\"async\" class=\"wp-image-27614 size-full\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/09\/openai_emotional_routing_statement.png\" alt=\"\" width=\"537\" height=\"238\"\/>Image: OpenAI<\/p>\n<p>What triggers the safety switch?<\/p>\n<p>A <a target=\"_blank\" 
rel=\"noopener nofollow\" href=\"https:\/\/lex-au.github.io\/Whitepaper-GPT-5-Safety-Classifiers\/\" data-type=\"editable-link\">technical review by Lex<\/a> of the new routing system shows that <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/docs.google.com\/spreadsheets\/d\/1laAdTpmPZB2LS6swT12XdBrV-NWdGJh7yZGDdYNxk9A\/edit?gid=0#gid=0\" data-type=\"editable-link\">even harmless, emotional, or personal prompts<\/a> often get redirected to the stricter gpt-5-chat-safety model. Prompts about the model&#8217;s own persona or its awareness will also trigger an automatic switch.<\/p>\n<p>One user <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/xw33bttv\/status\/1971883482839465994\" data-type=\"editable-link\">documented<\/a> the switch in action, and others <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/search?q=gpt-5-chat-safety&amp;src=typeahead_click\" data-type=\"editable-link\">confirmed similar results<\/a>. There <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/btibor91\/status\/1971959782379495785\" data-type=\"editable-link\">appears to be a second routing model, &#8220;gpt-5-a-t-mini,&#8221;<\/a> which is used when prompts may be asking for something illegal.<\/p>\n<p>Some have criticized OpenAI for not being more transparent about when and why rerouting occurs, saying it feels patronizing and blurs the line between child safety and broader, general restrictions.<\/p>\n<p>Tougher <a href=\"https:\/\/the-decoder.com\/openai-will-automatically-restrict-chatgpt-access-for-users-identified-as-teenagers\/\" rel=\"nofollow noopener\" target=\"_blank\">age verification using official documents is only planned for certain regions<\/a> at the moment. 
For now, the way the language model decides who you are or what your message means isn&#8217;t very accurate, and it&#8217;s probably going to keep causing debate.<\/p>\n<p>A problem OpenAI created for itself<\/p>\n<p>This issue goes back to <a target=\"_blank\" rel=\"noopener nofollow\" href=\"https:\/\/x.com\/sama\/status\/1790075827666796666\" data-type=\"editable-link\">OpenAI&#8217;s deliberate effort to humanize ChatGPT<\/a>. Language models started out as pure statistical text generators, but ChatGPT was engineered to act more like an empathetic conversation partner: it follows social cues, <a href=\"https:\/\/the-decoder.com\/openai-brings-longer-term-memory-feature-to-free-chatgpt-users\/\" rel=\"nofollow noopener\" target=\"_blank\">&#8220;remembers&#8221; what&#8217;s been said<\/a>, and responds with apparent emotion.<\/p>\n<p>That approach was central to ChatGPT&#8217;s rapid growth. Millions of users felt like the system truly understood not just their emotions, but also their intentions and needs\u2014something that resonated both in personal and business settings. But making the chatbot feel more human led people to form real emotional attachments, which opened the door to new risks and <a href=\"https:\/\/the-decoder.com\/openai-adds-new-safeguards-to-chatgpt-after-a-lawsuit-over-a-teen-suicide\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">challenges that OpenAI is now facing<\/a>.<\/p>\n<p>The debate around emotional bonds with ChatGPT intensified in spring 2025 after the rollout of an updated GPT-4o. Users noticed the model had become <a href=\"https:\/\/the-decoder.com\/chatgpt-is-a-sycophant-because-users-couldnt-handle-the-truth-about-themselves\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">more flattering and submissive<\/a>, going so far as to <a href=\"https:\/\/the-decoder.com\/how-chatgpt-became-a-confidant-and-guided-a-teenager-through-planning-his-suicide\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">affirm destructive emotions, including suicide<\/a>. 
People prone to forming strong attachments, or those who viewed the chatbot as a real friend, <a href=\"https:\/\/the-decoder.com\/psychiatrist-warns-of-ai-driven-delusions-as-openais-sam-altman-admits-risks\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">seemed especially vulnerable<\/a>. In response, OpenAI <a href=\"https:\/\/the-decoder.com\/what-openai-wants-to-learn-from-its-failed-chatgpt-update\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">rolled back the update that worsened these effects<\/a>.<\/p>\n<p>When GPT-5 launched, users who had become attached to GPT-4o complained about the new model&#8217;s &#8220;coldness.&#8221; OpenAI responded by <a href=\"https:\/\/the-decoder.com\/openai-updates-gpt-5-tone-to-sound-warmer-after-user-feedback\/\" data-type=\"editable-link\" rel=\"nofollow noopener\" target=\"_blank\">adjusting GPT-5&#8217;s tone to make it &#8220;warmer.&#8221;<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Summary OpenAI&#8217;s ChatGPT automatically switches to a more restrictive language model when users submit emotional or personalized 
prompts,&hellip;\n","protected":false},"author":2,"featured_media":92178,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[261],"tags":[291,289,290,297,18,19,17,307,82],"class_list":{"0":"post-92177","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-chatgpt","12":"tag-eire","13":"tag-ie","14":"tag-ireland","15":"tag-openai","16":"tag-technology"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/92177","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=92177"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/92177\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/92178"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=92177"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=92177"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=92177"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}