{"id":347695,"date":"2025-08-15T23:07:11","date_gmt":"2025-08-15T23:07:11","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/347695\/"},"modified":"2025-08-15T23:07:11","modified_gmt":"2025-08-15T23:07:11","slug":"dont-believe-what-ai-told-you-i-said","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/347695\/","title":{"rendered":"Don\u2019t Believe What AI Told You I Said"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">John Scalzi is a voluble man. He is the author of several New York Times best sellers and has been nominated for nearly every major award that the science-fiction industry has to offer\u2014some of which he\u2019s won multiple times. Over the course of his career, he has written millions of words, filling dozens of books and 27 years\u2019 worth of posts on his personal blog. All of this is to say that if one wants to cite Scalzi, there is no shortage of material. But this month, the author noticed something odd: He was being quoted as saying things he\u2019d never said.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">\u201cThe universe is a joke,\u201d reads a meme featuring his face. \u201cA bad one.\u201d The lines are credited to Scalzi and were posted, atop different pictures of him, to two Facebook communities boasting almost 1 million collective members. But Scalzi never wrote or said those words. He also never posed for the pictures that appeared with them online. The quote and the images that accompanied them were all \u201cpretty clearly\u201d AI generated, Scalzi <a data-event-element=\"inline link\" href=\"https:\/\/whatever.scalzi.com\/2025\/08\/06\/ai-slop-strikes-again\/\" target=\"_blank\" rel=\"noopener\">wrote<\/a> on his blog. \u201cThe whole vibe was off,\u201d Scalzi told me. 
Although the material bore a superficial similarity to something he might have said\u2014\u201cit\u2019s talking about the universe, it\u2019s vaguely philosophical, I\u2019m a science-fiction writer\u201d\u2014it was not something he agreed with. \u201cI know what I sound like; I live with me all the time,\u201d he noted.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Bogus quotations on the internet are not new, but AI chatbots and their hallucinations have multiplied the problem at scale, misleading many more people, and misrepresenting the beliefs not just of big names such as Albert Einstein but also of lesser-known individuals. In fact, Scalzi\u2019s experience caught my eye because a similar thing had happened to me. In June, a blog post appeared on the Times of Israel website, written by a self-described \u201ctech bro\u201d working in the online public-relations industry. Just about anyone can start a blog at the Times of Israel\u2014the publication generally does not edit or commission the contents\u2014which is probably why no one noticed that this post featured a fake quote, sourced to me and The Atlantic. \u201cThere\u2019s nothing inherently nefarious about advocating for your people\u2019s survival,\u201d it read. \u201cThe problem isn\u2019t that Israel makes its case. It\u2019s that so many don\u2019t want it made.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">As with Scalzi, the words attributed to me were ostensibly adjacent to my area of expertise. 
I\u2019ve covered the Middle East for more than a decade, including countless controversies involving Israel, most recently the <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/international\/archive\/2025\/07\/corrupt-bargain-behind-gazas-catastrophe\/683690\/\" target=\"_blank\" rel=\"noopener\">corrupt political bargain<\/a> driving Prime Minister <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/newsletters\/archive\/2025\/08\/netanyahus-decisions-rosenberg\/683808\/\" target=\"_blank\" rel=\"noopener\">Benjamin Netanyahu\u2019s actions<\/a> in Gaza. But like Scalzi, I\u2019d never said, and never would say, something so mawkish about the subject. I wrote to the Times of Israel, and an editor promptly apologized and took the article down. (Miriam Herschlag, the opinion and blogs editor at the paper, later told me that its blogging platform \u201cdoes not have an explicit policy on AI-generated content.\u201d)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Getting the post removed solved my immediate problem. But I realized that if this sort of thing was happening to me\u2014a little-known literary figure in the grand scheme of things\u2014it was undoubtedly happening to many more people. And though professional writers such as Scalzi and myself have platforms and connections to correct falsehoods attributed to us, most people are not so lucky. Last May, my colleagues Damon Beres and Charlie Warzel <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/05\/ai-written-newspaper-chicago-sun-times\/682861\/\" target=\"_blank\" rel=\"noopener\">reported<\/a> on \u201cHeat Index,\u201d a magazine-style summer guide that was distributed by the Chicago Sun-Times and The Philadelphia Inquirer. 
The insert included a reading list with fake books attributed to real authors, and quoted one Mark Ellison, a nature guide rather than a professional writer, who never said the words credited to him. When contacted, the author of \u201cHeat Index\u201d admitted to using ChatGPT to generate the material. Had The Atlantic never investigated, there likely would have been no one to speak up for Ellison.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The negative consequences of this content go well beyond the individuals misquoted. Today, chatbots have replaced Google and other search engines as many people\u2019s primary source of online information. Everyday users are employing these tools to inform important life decisions and to make sense of politics, history, and the world around them. And they are being deceived by fabricated content that can leave them worse off than when they started.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This phenomenon is obviously bad for readers, but it\u2019s also bad for writers, Gabriel Yoran told me. A German entrepreneur and author, Yoran recently published a book about the degradation of modern consumer technology called The Junkification of the World. Ironically, he soon became an object lesson in a different technological failure. 
Yoran\u2019s book made the Der Spiegel best-seller list, and many people began reviewing and quoting it\u2014and also, Yoran soon noticed, misquoting it.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">An influencer\u2019s review on XING, the German equivalent of LinkedIn, included a passage that Yoran never wrote. \u201cThere\u2019s quotes from the book that are mine, and then there is at least one quote that is not in the book,\u201d he recalled. \u201cIt could have been. It\u2019s kind of on brand. The tone of voice is fitting. But it\u2019s not in the book.\u201d After this and other instances in which he received error-ridden AI-generated feedback on his work, Yoran told me that he \u201cfelt betrayed in a way.\u201d He worries that in the long run, the use of AI in this manner will degrade the quality of writing by demotivating those who produce it. If material is just going to be fed into a machine that will then regurgitate a sloppy summary, \u201cwhy weigh every word and think about every comma?\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Like other online innovations such as social media, large language models do not so much create problems as supercharge preexisting ones. The internet has long been awash with fake quotations attributed to prominent personalities. 
As Abraham Lincoln once said, \u201cYou can\u2019t trust every witticism superimposed over the image of a famous person on the internet.\u201d But the advent of AI interfaces churning out millions of replies to hundreds of millions of people\u2014ChatGPT and Google\u2019s Gemini have more than 1 billion active users combined\u2014has turned what was once a manageable chronic condition into an acute infection that is metastasizing beyond all containment.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The process by which this happens is simple. Many people do not know when LLMs are lying to them, which is unsurprising given that the chatbots are very convincing fabulists, serving up slop with unflappable confidence to their unsuspecting audience. That compromised content is then pumped at scale by real people into their own online interactions. 
The result: Meretricious material from chatbots is polluting our public discourse with Potemkin pontification, derailing debates with <a data-event-element=\"inline link\" href=\"https:\/\/www.independent.co.uk\/news\/world\/americas\/us-politics\/hunter-debutts-joe-biden-pardon-b2658300.html\" target=\"_blank\" rel=\"noopener\">made-up appeals<\/a> to authority and <a data-event-element=\"inline link\" href=\"https:\/\/apnews.com\/article\/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c\" target=\"_blank\" rel=\"noopener\">precedent<\/a>, and in some cases, defaming living people by attributing things to them that they never said and do not agree with.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">More and more people are having the eerie experience of knowing that they have been manipulated or misled, but not being sure by whom. As with many aspects of our digital lives, responsibility is too diffuse for accountability. AI companies can <a data-event-element=\"inline link\" href=\"https:\/\/youtu.be\/DB9mjd-65gw?si=WR3nWyHoCN8_S1R3&amp;t=1009\" target=\"_blank\" rel=\"noopener\">chide<\/a> users for trusting the outputs they receive; users can blame the companies for providing a service\u2014and charging for it\u2014that regularly lies. And because LLMs are rarely credited for the writing that they help produce, victims of chatbot calumny struggle to pinpoint which model did the deed after the fact.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">You don\u2019t have to be a science-fiction writer to game out the ill effects of this progression, but it doesn\u2019t hurt. \u201cIt is going to become harder and harder for us to understand what things are genuine and what things are not,\u201d Scalzi told me. 
\u201cAll that AI does is make this machinery of artifice so much more automated,\u201d especially because the temptation for many people is \u201cto find something online that you agree with and immediately share it with your entire Facebook crowd\u201d without checking to see if it\u2019s authentic. In this way, Scalzi said, everyday people uncritically using chatbots risk becoming a \u201cwilling route of misinformation.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The good news is that some AI executives are beginning to take the problems with their products seriously. \u201cI think that if a company is claiming that their model can do something,\u201d OpenAI CEO Sam Altman told Congress in May 2023, \u201cand it can\u2019t, or if they\u2019re claiming it\u2019s safe and it\u2019s not, I think they should be liable for that.\u201d The bad news is that Altman never actually said this. Google\u2019s Gemini just told me that he did.<\/p>\n","protected":false},"excerpt":{"rendered":"John Scalzi is a voluble man. 
He is the author of several New York Times best sellers and&hellip;\n","protected":false},"author":2,"featured_media":347696,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-347695","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115035297718242946","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/347695","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=347695"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/347695\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/347696"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=347695"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=347695"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=347695"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}