{"id":595146,"date":"2025-11-26T14:17:15","date_gmt":"2025-11-26T14:17:15","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/595146\/"},"modified":"2025-11-26T14:17:15","modified_gmt":"2025-11-26T14:17:15","slug":"welcome-to-the-slopverse-the-atlantic","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/595146\/","title":{"rendered":"Welcome to the Slopverse &#8211; The Atlantic"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Bill Lowery, a sales executive, is confused when a workmate asks where he should take a date out for dinosaur. \u201cYou\u2019re planning to take this girl out for dinosaur?\u201d Lowery asks. \u201cThat\u2019s right,\u201d the colleague responds, totally nonchalant. Lowery presses him, agitated: \u201cWait a minute. You\u2019re saying dinosaur? What is this, some sort of new-wave expression or something\u2014saying dinosaur instead of lunch?\u201d When Lowery returns home later in the day, his wife reports on their sick son while buttering a slice of bread. \u201cHe\u2019s so pale and awfully congested\u2014and he didn\u2019t touch his dinosaur when I took it in to him.\u201d The salesman loses it.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This is the premise of \u201cWordplay,\u201d <a data-event-element=\"inline link\" href=\"https:\/\/www.youtube.com\/watch?v=bGF5_6x0bNE\" target=\"_blank\" rel=\"noopener\">an episode of the 1980s reboot of The Twilight Zone<\/a>. As time progresses, people around Lowery begin speaking in an even more jumbled manner, using familiar words in unfamiliar ways. Eventually, Lowery resigns himself to relearning English from his son\u2019s ABC book. 
The last scene shows him running his hands over an illustration of a dog, underneath which is printed the word Wednesday.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">\u201cWordplay\u201d offers a lesson on the nature of error: Small and inconspicuous changes to the norm can be more disorienting and dangerous than larger, wholesale ones. For that reason, the episode also has something to teach about truth and falsehood in ChatGPT and other such generative-AI products. By now everyone knows that large language models\u2014or LLMs, the systems underlying chatbots\u2014tend to invent things. They make up <a data-event-element=\"inline link\" href=\"https:\/\/www.forbes.com\/sites\/mollybohannon\/2023\/06\/08\/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions\/?sh=3fd97317c7f3\" target=\"_blank\" rel=\"noopener\">legal cases<\/a> and recommend <a data-event-element=\"inline link\" href=\"https:\/\/socket.dev\/blog\/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks\" target=\"_blank\" rel=\"noopener\">nonexistent software<\/a>. People call these \u201challucinations,\u201d and that seems at first blush like a sensible metaphor: The chatbot appears to be delusional, confidently asserting the unreal as real.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But this is the wrong idea. Hallucination implies that a mistake is being made under a false belief. But an LLM doesn\u2019t believe the \u201cfalse\u201d information it presents to be true. It doesn\u2019t \u201cbelieve\u201d anything at all. Instead, an LLM predicts the next word in a sentence based on patterns that it has learned from consuming extremely large quantities of text. An LLM does not think, nor does it know. It interprets a new pattern based on its interpretation of a previous one. 
A chatbot is only ever chaining together credible guesses.<\/p>\n<p id=\"injected-recirculation-link-0\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 1\" data-event-element=\"injected link\" data-event-position=\"1\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/07\/why-are-computers-still-so-dumb\/683524\/\" target=\"_blank\" rel=\"noopener\">Read: The AI mirage<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In \u201cWordplay,\u201d Lowery is driven mad not because he is being lied to\u2014his colleague and wife really do think the word for lunch is dinosaur, just like a chatbot will sometimes assert that <a data-event-element=\"inline link\" href=\"https:\/\/www.theverge.com\/2024\/5\/23\/24162896\/google-ai-overview-hallucinations-glue-in-pizza\" target=\"_blank\" rel=\"noopener\">glue belongs on pizza<\/a>. Lowery is driven mad because the world he inhabits is suddenly just a bit off, deeply familiar but jolted from time to time with nonsense that everyone else perceives as normal. Old words are fabricated with new meanings.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">AI does invent things, but not in the sense of hallucinating, of seeing something that isn\u2019t there. Fabrication can mean \u201clying,\u201d or it can mean \u201cconstruction.\u201d An LLM does the latter. It makes new prose from the statistical raw materials of old prose. The invented legal case and the made-up software are not actual things in the real universe but credible\u2014even plausible\u2014entities in an alternate universe. They are, in another word, fictional.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Chatbots are convincing because the fictional worlds they present are highly plausible. And they are plausible because the predictive work that an LLM does is extremely effective. 
This is true when chatbots make outright errors, and it\u2019s also true when they respond to imaginative prompts. This distinctive machinery demands a better metaphor: It is not hallucinatory but multiversal. When generative AI presents fabricated information, it opens a path to another reality for the user; it multiverses rather than hallucinates. The fictions that result, many of them so small and meaningless, can be accepted without much trouble.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The multiverse trope\u2014which presents the idea of branching, alternate versions of reality\u2014was once relegated to theoretical physics, esoteric science fiction, and fringe pop culture. But it has since become widespread in mass-market media. Multiverses are everywhere in the Marvel Cinematic Universe. Rick and Morty has one, as do Everything Everywhere All at Once and Dark Matter. The alternate universes depicted in fiction set the expectation that multiverses are spectacular, involving wormholes and portals into literal, physical parallel worlds. It seems we got stupid chatbots instead, though the basic idea is the same. The nonexistent legal case that AI suggests could exist in a very similar universe parallel to our own. So could the fictional software.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The multiversal nature of LLM-generated text is easy to see when you use chatbots to do conceptual blending, the novel fusion of disparate topics. I can ask ChatGPT to produce a Charles Bukowski poem about <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/family\/archive\/2025\/08\/labubu-popularity-kidulthood\/683752\/\" target=\"_blank\" rel=\"noopener\">Labubu<\/a> and it gives me lines like, \u201cThe clerk said, they call it art toy, \/ like that explained anything. 
\/ Thirty bucks for a goblin that grins \/ like it knows the world\u2019s already over.\u201d Even as I know with certainty that Buk never wrote such a poem, the result is plausible; I can imagine a possible world in which the poet and the goblin toy coexisted, and this material resulted from their encounter. But running such a gut check against every single sentence or reference an LLM offers would be overwhelming\u2014especially given that increasing efficiency is a major reason to use an LLM. Chatbots flood the zone with possible worlds\u2014\u201cslopworlds,\u201d we might call them, together composing a slopverse.<\/p>\n<p id=\"injected-recirculation-link-1\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 2\" data-event-element=\"injected link\" data-event-position=\"2\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/07\/openai-audacity-crisis\/679212\/\" target=\"_blank\" rel=\"noopener\">Read: AI\u2019s real hallucination problem<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The better the LLMs become, the worse the slopverse gets. Think about it in terms of multiversal fiction: The most terrifying or uncanny alternate universes are the ones that appear extremely similar to the known world, with small changes. In \u201cWordplay,\u201d language is far more threatening to Bill Lowery because familiar words have shifted meanings rather than because English has been replaced by a totally different language. In Dark Matter, a parallel-universe version of Chicago as a desolate wasteland is more obviously counterfactual\u2014and thus less uncanny\u2014than a parallel universe in which the main character\u2019s wife had not given up her career as an artist to have children. 
Parallel universes that wildly diverge from accepted reality are easily processed as absurd or fantastical\u2014like the universe in Everything Everywhere All at Once where people have fingers made of hot dogs\u2014while familiar ones convey subtler lessons of contingency, possibility, and regret.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Near universes such as the one Lowery occupies in The Twilight Zone can create empathy and unease, the uncanny truth that life could be almost the same yet profoundly different. But the trick works only because the audience knows that those worlds are counterfactual (and they know because the stories tell them directly). Not so for AI chatbots, which leave the matter a puzzle. Worse, LLMs are functional rather than narrative multiverses\u2014they produce ideas, symbols, and solutions that are actually put to use.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The internet already acclimated users to this state of affairs, even before LLMs came on the scene. When one searches for something on Google, the resulting websites are not necessarily the best or most accurate but the most popular (along with some that have paid to be promoted by the search engine). Their information might be correct, but it need not be in order to rise to the top. Searching for goods on Amazon or other online retailers yields results of a kind, but not necessarily the right ones. Likewise, social-media sites such as Facebook, X, and TikTok surface content that might be engaging but isn\u2019t necessarily correct in every, or any, way.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">People were misled by media long before the internet, of course, but they have been even more so since it arrived. For two decades now, almost everything people see online has been potentially incorrect, untrustworthy, or otherwise decoupled from reality. 
Every internet user has had to run a hand-rolled, probabilistic analysis of everything they\u2019ve seen online, testing its plausibility for risks of deception or flimflam. The slopverse simply expands that situation\u2014and massively, down to <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/03\/gpt4-arrival-human-artificial-intelligence-blur\/673399\" target=\"_blank\" rel=\"noopener\">every utterance<\/a>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Faced with the problems a slopverse poses, AI proponents would likely make the same argument they do about hallucinations: that eventually, the data, training processes, and architecture will improve, increasing accuracy and reducing multiversal schism. Maybe so.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But another worse and perhaps more likely possibility exists: that no matter how much the technology improves, it will do so only asymptotically, making the many multiverses every chat interaction spawns more and more difficult to distinguish from the real world. 
The worst nightmares in multiversal fiction arrive when an alternate reality is exactly the same save for one thing, which might not matter, or which might change everything entirely.<\/p>\n","protected":false},"excerpt":{"rendered":"Bill Lowery, a sales executive, is confused when a workmate asks where he should take a date out&hellip;\n","protected":false},"author":2,"featured_media":595147,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-595146","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115616431705949322","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/595146","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=595146"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/595146\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/595147"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=595146"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=595146"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=595146"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}