{"id":23710,"date":"2026-04-30T22:40:14","date_gmt":"2026-04-30T22:40:14","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/23710\/"},"modified":"2026-04-30T22:40:14","modified_gmt":"2026-04-30T22:40:14","slug":"chatgpt-just-got-a-weird-new-list-of-forbidden-topics-including-gremlins","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/23710\/","title":{"rendered":"ChatGPT Just Got a Weird New List of Forbidden Topics (Including Gremlins)"},"content":{"rendered":"<p><a href=\"https:\/\/www.vice.com\/en\/article\/openai-ceo-identity-verification-company-fake-bruno-mars-partnership-mistaken-identity\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI<\/a>, the company behind ChatGPT, is having all sorts of trouble right now. AI enthusiasm is waning; people are turning against the technology as they see real-world consequences from an industry that has thus far gone largely unregulated in the United States. These are serious, existential issues facing not just one company but the entire industry. Meanwhile, OpenAI is also dealing with a weirder, smaller, much sillier problem: goblins.<\/p>\n<p><a href=\"https:\/\/www.businessinsider.com\/openai-really-really-wants-gpt55-stop-talking-about-goblins-2026-4\" rel=\"nofollow noopener\" target=\"_blank\">Business Insider<\/a> reports that, according to internal documentation shared on <a href=\"https:\/\/github.com\/openai\/codex\/blob\/main\/codex-rs\/models-manager\/models.json#L55\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">GitHub<\/a>, OpenAI\u2019s newer models, particularly GPT-5.5, had developed a habit of referencing goblins, gremlins, and other fantasy creatures in ordinary responses that did not call for them. It became a problem when users noticed the AI was dropping phrases like \u201cgoblin mode\u201d or \u201cgremlin\u201d into its technical explanations. 
<\/p>\n<p>OpenAI responded by <a href=\"https:\/\/github.com\/openai\/codex\/blob\/main\/codex-rs\/models-manager\/models.json#L55\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">instructing the model<\/a> to avoid all references to \u201cgoblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user\u2019s query.\u201d<\/p>\n<p>ChatGPT Apparently Got Too Weird About Goblins<\/p>\n<p>The company is serious enough about the new directive that the rule appears multiple times in the code, which suggests to experts that the problem was widespread enough to need a correction hardcoded into the AI\u2019s DNA, for lack of a better term.<\/p>\n<p>OpenAI later explained the root cause: a personality setting known as \u201cNerdy,\u201d introduced in earlier versions, unintentionally rewarded whimsical language, including references to mythical creatures. Even after that personality was retired, newer models had already absorbed the behavior during training.<\/p>\n<p>In short, the AI wasn\u2019t glitching randomly; it was doing exactly what it had been incentivized and trained to do, producing responses that read as if they were cowritten by J.R.R. Tolkien, no matter what you asked.<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI, the company behind ChatGPT, is having all sorts of troubles right now. 
AI enthusiasm is waning; people&hellip;\n","protected":false},"author":2,"featured_media":23711,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[580,157,781,134],"class_list":{"0":"post-23710","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-openai","8":"tag-chatgpt","9":"tag-openai","10":"tag-tech","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/23710","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=23710"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/23710\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/23711"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=23710"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=23710"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=23710"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}