{"id":199891,"date":"2025-06-20T12:51:11","date_gmt":"2025-06-20T12:51:11","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/199891\/"},"modified":"2025-06-20T12:51:11","modified_gmt":"2025-06-20T12:51:11","slug":"the-emperor-has-no-clothes","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/199891\/","title":{"rendered":"\u2018The emperor has no clothes\u2019"},"content":{"rendered":"<p>Before Emily Bender and I have looked at a menu, she has dismissed <a href=\"https:\/\/www.ft.com\/artificial-intelligence\" data-trackable=\"link\" target=\"_blank\" rel=\"noopener\">artificial intelligence<\/a> chatbots as \u201cplagiarism machines\u201d and \u201csynthetic text extruders\u201d. Soon after the food arrives, the professor of linguistics adds that the vaunted large language models (LLMs) that underpin them are \u201cborn shitty\u201d. <\/p>\n<p>Since <a href=\"https:\/\/www.ft.com\/stream\/e3402603-d253-4aa1-ac4d-fc9bdbf4ccb8\" data-trackable=\"link\" target=\"_blank\" rel=\"noopener\">OpenAI<\/a> launched its wildly popular ChatGPT chatbot in late 2022, AI companies have sucked in tens of billions of dollars in funding by promising scientific breakthroughs, material abundance and a new chapter in human civilisation. AI is already capable of doing entry-level jobs and will soon \u201cdiscover new knowledge\u201d, OpenAI chief <a href=\"https:\/\/www.ft.com\/stream\/d91e0e17-f152-4a82-99bb-e4232a092b5d\" data-trackable=\"link\" target=\"_blank\" rel=\"noopener\">Sam Altman<\/a> told a conference this month. <\/p>\n<p>According to Bender, we are being sold a lie: AI will not fulfil those promises, and nor will it kill us all, as others have warned. AI is, despite the hype, pretty bad at most tasks and even the best systems available today lack anything that could be called intelligence, she argues. Recent claims that models are developing a capacity to understand the world beyond the data they are trained on are nonsensical. 
We are \u201cimagining a mind behind the text\u201d, she says, but \u201cthe understanding is all on our end\u201d.<\/p>\n<p>Bender, 51, is an expert in how computers model human language. She spent her early academic career in Stanford and Berkeley, two Bay Area institutions that are the wellsprings of the modern AI revolution, and worked at YY Technologies, a natural language processing company. She witnessed the bursting of the dotcom bubble in 2000 first-hand.<\/p>\n<p>Her mission now is to deflate AI, which she will only refer to in air quotes and says should really just be called automation. \u201cIf we want to get past this bubble, I think we need more people not falling for it, not believing it, and we need those people to be in positions of power,\u201d she says. <\/p>\n<p>In a recent book called The AI Con, she and her co-author, the sociologist Alex Hanna, take a sledgehammer to AI hype and raise the alarm about the technology\u2019s more insidious effects. She is clear on her motivation. \u201cI think what it comes down to is: nobody should have the power to impose their view on the world,\u201d she says. Thanks to the huge sums invested, a tiny cabal of men has the ability to shape what happens to large swaths of society and, she adds, \u201cit really gets my goat\u201d.<\/p>\n<blockquote class=\"n-content-pullquote o3-editorial-typography-pullquote n-content-pullquote--no-image\" aria-hidden=\"true\">\n<p>It feels like people are mad that I am undermining what they see as the crowning achievement of our field<\/p>\n<\/blockquote>\n<p>Her thesis is that the whizzy chatbots and image-generation tools created by OpenAI and rivals Anthropic, Elon Musk\u2019s xAI, Google and Meta are little more than \u201cstochastic parrots\u201d, a term that she coined in a 2021 paper. 
A stochastic parrot, she wrote, is a system \u201cfor haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning\u201d.<\/p>\n<p>The paper shot her to prominence and triggered a backlash in AI circles. Two of her co-authors, senior members of the ethical AI team at Google, lost their jobs at the company shortly after publication. Bender has also faced criticism from other academics for what they regard as a heretical stance. \u201cIt feels like people are mad that I am undermining what they see as the sort of crowning achievement of our field,\u201d she says. <\/p>\n<p>The controversy highlighted tensions between those looking to commercialise AI fast and opponents warning of its harms and urging more responsible development. In the four years since, the former group has been ascendant.<\/p>\n<p>We\u2019re meeting in a low-key sushi restaurant in Fremont, Seattle, not far from the University of Washington where Bender teaches. We are almost the only patrons on a sun-drenched Monday afternoon in May, and the waiter has tired of asking us what we might like after 30 minutes and three attempts. Instead we turn to the iPad on the table, which promises to streamline the process. <\/p>\n<p>It achieves the opposite. \u201cI\u2019m going to get one of those,\u201d says Bender: \u201cadd to cart. Actual food may differ from image. Good, because the image is grey. This is great. Yeah. Show me the\u2009.\u2009.\u2009.\u2009where\u2019s the otoro? There we go. Ah, it could be they don\u2019t have it.\u201d We give up. The waiter returns and confirms they do in fact have the otoro, a fatty cut of tuna belly. Realising I\u2019m British, he lingers to ask which football team I support, offers his commiserations to me on Arsenal finishing as runners-up this season and tells me he is a Tottenham fan. 
I wonder if it\u2019s too late to revert to the iPad. <\/p>\n<p>Menu<\/p>\n<p><strong>Kamakura Japanese Cuisine and Sushi<\/strong><br \/>3520 Fremont Ave N, Seattle, 98103<\/p>\n<p>Otoro nigiri x2 $31.90<br \/>Salmon nigiri x2 $8<br \/>Agedashi x2 $8<br \/>Avocado maki $5.95<br \/>Edamame $3.50<br \/>Barley tea x2 $5<br \/><strong>Total<\/strong> (including tax and tip) $82.56<\/p>\n<p>Bender was not always destined to take the fight to the world\u2019s biggest companies. A decade ago, \u201cI was minding my own business doing grammar engineering,\u201d she says. But after a wave of social movements, including Black Lives Matter, swept through campus, \u201cI started asking, well, where do I sit? What power do I have and how can I use it?\u201d She set up a class on ethics in language technology and a few years later found herself \u201chaving just unending arguments on Twitter about why language models don\u2019t \u2018understand\u2019, with computer scientists who didn\u2019t have the first bit of training in linguistics\u201d.<\/p>\n<p>Eventually, Altman himself came to spar. After Bender\u2019s paper came out, he tweeted \u201ci am a stochastic parrot, and so r u\u201d. Ironically, given Bender\u2019s critique of AI as a regurgitation machine, her phrase is now often attributed to him. <\/p>\n<p>She sees her role as \u201cbeing able to speak truth to power based on my academic expertise\u201d. The truth from her perspective is that the machines are inherently far more limited than we have been led to believe. <\/p>\n<p>Her critique of the technology is layered on a more human concern: that chatbots being lauded as a new paradigm in intelligence threaten to accelerate social isolation, environmental degradation and job loss. Training cutting-edge models costs billions of dollars and requires enormous amounts of power and water, as well as workers in the developing world willing to label distressing images or categorise text for a pittance. 
The ultimate effect of all this work and energy will be to create chatbots that displace those whose art, literature and knowledge are AI\u2019s raw data today. <\/p>\n<p>\u201cWe are not trying to change Sam Altman\u2019s mind. We are trying to be part of the discourse that is changing other people\u2019s minds about Sam Altman and his technology,\u201d she says. <\/p>\n<p><strong>The table is now filled<\/strong> with dishes. The otoro nigiri is soft, tender and every bit as good as Bender promised. We have both ordered agedashi tofu, perfectly deep-fried so it remains firm in its pool of dashi and soy sauce. Salmon nigiri, avocado maki and tea also dot the space between us.<\/p>\n<p>Bender and Hanna were writing The AI Con in late 2024, which they describe in the book as the peak of the AI boom. But since then the race to dominate the technology has only intensified. Leading companies including OpenAI, Anthropic and Chinese rival DeepSeek have launched what Google\u2019s AI team describe as \u201cthinking models, capable of reasoning through their thoughts before responding\u201d.<\/p>\n<p>The ability to reason would represent a significant milestone on the journey towards AI that could outperform experts across the full range of human intelligence, a goal often referred to as artificial general intelligence, or AGI. A number of the most prominent people in the field \u2014 including Altman, OpenAI\u2019s former chief scientist and co-founder Ilya Sutskever and Elon Musk \u2014 have claimed that goal is at hand. <\/p>\n<p>Anthropic chief Dario Amodei describes AGI as \u201can imprecise term which has gathered a lot of sci-fi baggage and hype\u201d. But by next year, he argues, we could have tools that are \u201csmarter than a Nobel Prize winner across most relevant fields\u201d, \u201ccan control existing physical tools\u201d and \u201cprove unsolved mathematical theorems\u201d. 
In other words, with more data, computing power and research breakthroughs, today\u2019s AI models or something that closely resembles them could extend the boundaries of human understanding and cognitive ability. <\/p>\n<p>Bender dismisses the idea, describing the technology as \u201ca fancy wrapper around some spreadsheets\u201d. LLMs ingest reams of data and base their responses on the statistical probability of certain words occurring alongside others. Computing improvements, an abundance of online data and research breakthroughs have made that process far quicker, more sophisticated and more relevant. But there is no magic and no emergent mind, says Bender.<\/p>\n<blockquote class=\"n-content-pullquote o3-editorial-typography-pullquote n-content-pullquote--no-image\" aria-hidden=\"true\">\n<p>The more we build systems around this technology, the more we push workers out of sustainable careers and also cut off the entry-level positions<\/p>\n<\/blockquote>\n<p>\u201cIf you\u2019re going to learn the patterns of which words go together for a given language, if it\u2019s not in the training data, it\u2019s not going to be in the output system. That\u2019s just fundamental,\u201d she says. <\/p>\n<p>In 2020, Bender wrote a paper comparing LLMs to a hyper-intelligent octopus eavesdropping on human conversation: it might pick up the statistical patterns but would have little hope of understanding meaning or intent, or of being able to refer to anything outside of what it had heard. She arrives at our lunch today sporting a pair of wooden octopus earrings.<\/p>\n<p>There are other sceptics in the field, such as AI researcher Gary Marcus, who argue the transformational potential of today\u2019s best models has been massively oversold and that AGI remains a pipe dream. A week after Bender and I meet, a group of researchers at Apple publish a paper echoing some of Bender\u2019s critiques. 
The best \u201creasoning\u201d models today \u201cface a complete accuracy collapse beyond certain complexities\u201d, the authors write \u2014 although researchers were quick to criticise the paper\u2019s methodology and conclusions.<\/p>\n<p>Sceptics tend to be drowned out by boosters with bigger profiles and deeper pockets. OpenAI is <a href=\"https:\/\/www.ft.com\/content\/d21037d1-7b84-4f07-8d82-b417badbe96e\" data-trackable=\"link\" target=\"_blank\" rel=\"noopener\">raising $40bn from investors led by SoftBank<\/a>, the Japanese technology investor, while rivals xAI and <a href=\"https:\/\/www.ft.com\/content\/05c90475-84fb-4f88-bbfc-6b1a60af90db\" data-trackable=\"link\" target=\"_blank\" rel=\"noopener\">Anthropic<\/a> have also secured billions of dollars in the last year. OpenAI, Anthropic and xAI are collectively valued at close to $500bn today. Before ChatGPT was launched, OpenAI and Anthropic were valued at a fraction of that and xAI didn\u2019t exist.<\/p>\n<p>\u201cIt\u2019s to their benefit to have everyone believe that it is a thinking entity that is very, very powerful instead of something that is, you know, a glorified Magic 8 Ball,\u201d says Bender.<\/p>\n<p><strong>We have been talking for an hour and a half,<\/strong> the bowl of edamame beans between us steadily dwindling, and our cups of barley tea have been refilled more than once. As Bender returns to her main theme, I notice she has quietly constructed an origami bird from her chopstick wrapper. AI\u2019s boosters might be hawking false promises, but their actions have real consequences, she says. \u201cThe more we build systems around this technology, the more we push workers out of sustainable careers and also cut off the entry-level positions\u2009.\u2009.\u2009.\u2009And then there\u2019s all the environmental impact,\u201d she says.<\/p>\n<p>Bender is entertaining company, a Cassandra with a wry grin and twinkling eye. 
At times it feels she is playing up to the role of nemesis to the tech bosses who live down the Pacific coast in and around San Francisco.<\/p>\n<p>But where Bender\u2019s b\u00eates noires in Silicon Valley might gush over the potential of the technology, she can seem blinkered in another way. When I ask her if she sees one positive use for AI, all she will concede is that it might help her find a song.<\/p>\n<p>I ask how she squares her twin claims that chatbots are bullshit generators and capable of devouring large portions of the labour market. Bender says they can be simultaneously \u201cineffective and detrimental\u201d, and gives the example of a chatbot that could spin up plausible-looking news articles without any actual reporting \u2014 great for the host of a website making money from click-based advertising, less so for journalists and the truth-seeking public.<\/p>\n<blockquote class=\"n-content-pullquote o3-editorial-typography-pullquote n-content-pullquote--no-image\" aria-hidden=\"true\">\n<p>Users think it can see everything and so it has this view from nowhere. There is no view from nowhere<\/p>\n<\/blockquote>\n<p>She argues forcefully that chatbots are born flawed because they are trained on data sets riddled with bias. Even something as narrow as a company\u2019s policies might contain prejudices and errors, she says.<\/p>\n<p>Aren\u2019t these really critiques of society rather than technology? Bender counters that technology built on top of the mess of society doesn\u2019t just replicate its mistakes but reinforces them, because users think \u201cthis is so big it is all-encompassing and it can see everything and so therefore it has this view from nowhere. I think it is always important to recognise that there is no view from nowhere.\u201d<\/p>\n<p>Bender dedicates The AI Con to her two sons, who are composers, and she is especially animated describing the deleterious impact of AI on the creative industries. 
<\/p>\n<p>She is scathing, too, about AI\u2019s potential to empathise or offer companionship. When a chatbot tells you that you are heard or that it understands, this is nothing but placebo. \u201cWhen Mark Zuckerberg suggests that there\u2019s a demand for friendships beyond what we actually have and he\u2019s going to fill that demand with his AI friends, really that\u2019s basically tech companies saying, \u2018We are going to isolate you from each other and make sure that all of your connections are mediated through tech\u2019.\u201d <\/p>\n<p>Yet employers are deploying the technology, and finding value in it. AI has accelerated the rate at which software engineers can write code, and more than 500mn people regularly use ChatGPT. <\/p>\n<p>AI is also a cornerstone of national policy under US President Donald Trump, with superiority in the technology seen as being essential to winning a new cold war with China. That has added urgency to the race and drowned out calls for more stringent regulations. We discuss the parallels between the hype of today\u2019s AI moment and the origins of the field in the 1950s, when mathematician John McCarthy and computer scientist Marvin Minsky organised a workshop at Dartmouth College to discuss \u201cthinking machines\u201d. In the background during that era was an existential competition with the Soviet Union. This time the Red Scare stems from fear that China will develop AGI before the US, and use its mastery of the technology to undermine its rival. <\/p>\n<p>This is specious, says Bender, and beating China to some level of superintelligence is a pointless goal, given the country\u2019s ability to catch up quickly, which was demonstrated by the launch of a ChatGPT rival by DeepSeek earlier this year. \u201cIf OpenAI builds AGI today, they\u2019re building it for China in three months.\u201d<\/p>\n<p>Nonetheless, competition between the two powers has created huge commercial opportunities for US start-ups. 
On Trump\u2019s first full day of his second term, he invited Altman to the White House to <a href=\"https:\/\/www.ft.com\/content\/4541c07b-f5d8-40bd-b83c-12c0fd662bd9\" data-trackable=\"link\" target=\"_blank\" rel=\"noopener\">unveil Stargate<\/a>, a $500bn data centre project designed to cement the US\u2019s AI primacy. The project has since expanded abroad, in what those involved describe as \u201ccommercial diplomacy\u201d designed to bolster America\u2019s sphere of influence using the technology. <\/p>\n<p>If Bender is right that AI is just automation in a shiny wrapper, this unprecedented outlay of financial and political capital will achieve little more than the erosion of already fragile professions, social institutions and the environment. <\/p>\n<p>So why, I ask, are so many people convinced this is a more consequential technology than the internet? Some have a commercial incentive to believe, others are more honest but no less deluded, she says. \u201cThe emperor has no clothes. 
But it is surprising how many people want to be the naked emperor.\u201d<\/p>\n<p>George Hammond is the FT\u2019s venture capital correspondent<\/p>\n","protected":false},"excerpt":{"rendered":"Before Emily Bender and I have looked at a menu, she has dismissed artificial intelligence chatbots as 
\u201cplagiarism&hellip;\n","protected":false},"author":2,"featured_media":199892,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-199891","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114715786335499289","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/199891","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=199891"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/199891\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/199892"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=199891"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=199891"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=199891"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}