{"id":4892,"date":"2026-04-14T12:21:21","date_gmt":"2026-04-14T12:21:21","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/4892\/"},"modified":"2026-04-14T12:21:21","modified_gmt":"2026-04-14T12:21:21","slug":"ai-is-not-the-end-of-the-world","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/4892\/","title":{"rendered":"AI is not the end of the world"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">In 1964, science fiction writer Arthur C. Clarke <a href=\"https:\/\/www.techradar.com\/pro\/we-should-regard-it-as-a-privilege-to-be-stepping-stones-to-higher-things-how-arthur-c-clarke-predicted-the-rise-of-agi-and-the-looming-demise-of-humanity-back-in-1964\" rel=\"nofollow noopener\" target=\"_blank\">predicted<\/a> that computers would overtake human evolution.\u201cPresent-day electronic brains are complete morons, but this will not be true in another generation,\u201d he told the BBC. \u201cThey will start to think, and eventually, they will completely out-think their makers.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Daniel Roher opens his new documentary <a href=\"https:\/\/www.imdb.com\/title\/tt39150120\/\" rel=\"nofollow noopener\" target=\"_blank\">The AI Doc: Or How I Became An Apocaloptimist<\/a> (2026) with this cheerful prophecy. And in the hundred-some minutes that follow, he tries to make sense of a technology that, by his own admission, he does not understand \u2014 and a world that is rapidly being changed by it. 
Explaining that he conceives of AI as a \u201cmagic box floating in space,\u201d he enlists the help of experts to provide him with a crash course in what, exactly, AI is.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Roher\u2019s real concern, however, isn\u2019t so much about the workings of AI \u2014 though some of his subjects do attempt to explain them to him \u2014 as about whether it might displace us, as Clarke\u2019s prediction suggests it will.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">While making the film, Roher learns that his wife Caroline is pregnant with their first child. He tracks his wife\u2019s pregnancy and the birth of his son in parallel with the advent of AI. It\u2019s a smart choice that builds on a fear all parents share: What sort of world are we making for our children? And behind that question is another, vibrating in anxious silence: What happens after our offspring replace us? This twinned existential angst drives his efforts to hear from the doomers, the techno-optimists, and the in-between \u201capocaloptimists\u201d whose ranks he ultimately joins.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The AI Doc, as its sweeping title suggests, wants to shape and lead the narrative around AI. 
It\u2019s certainly set up to do that \u2014 Roher is <a href=\"https:\/\/www.sundance.org\/blogs\/2023-oscars-navalny-wins-best-documentary-feature-academy-honors-sundance-alums\/\" rel=\"nofollow noopener\" target=\"_blank\">fresh off<\/a> an Oscar win for his documentary Navalny, and the film opened in <a href=\"https:\/\/editorial.rottentomatoes.com\/article\/weekend-box-office-project-hail-mary-continues-to-soar\/#:~:text=Beyond%20the%20Top%2010:%20The,the%20A24%20release%20grossed%20$100%2C000.\" rel=\"nofollow noopener\" target=\"_blank\">nearly 800 theaters<\/a>, which counts as a wide release for a nonfiction title. The final product is indicative of the ways that public attitudes around AI are in massive flux. Roher hopes to reach people of my grandmother\u2019s generation who conflate AI with smartphones and spellcheck, as well as people who don\u2019t seem to care whether a video was AI-generated.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">But I think that this documentary has come too late to steer the conversation, something the film itself <a href=\"https:\/\/www.hollywoodreporter.com\/movies\/movie-reviews\/the-ai-doc-or-how-i-became-an-apocaloptimist-review-1236485368\/\" rel=\"nofollow noopener\" target=\"_blank\">acknowledges<\/a>. For all its transformative potential, AI isn\u2019t actually unique among emerging technologies yet \u2014 it has not been cataclysmic or ushered in a golden age of prosperity \u2014 but Roher and many of those he interviews tend to treat it as a radical break with all that has come before. As a result, they fixate on the binary extremes of doom or salvation. 
It\u2019s an approach that reinforces our own helplessness in the face of AI-driven change, while also muddying our understanding of what we might yet be able to do as we seek to adapt, mitigate harm, and shape the world that AI could otherwise truly start remaking.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Roher, contemplating his child\u2019s future, opts to hear the bad news first. Tristan Harris, the cofounder of the <a href=\"https:\/\/www.humanetech.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Center for Humane Technology<\/a>, doesn\u2019t mince words: \u201cI know people who work on AI risk who don\u2019t expect their children to make it to high school.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Many of the film\u2019s other interviewees are similarly gloomy. Geoffrey Hinton, the \u201cgodfather of AI,\u201d for example, argues that as AI becomes smarter, it will become better at manipulating humanity. But no one is more pessimistic than Eliezer Yudkowsky, the well-known AI doomer and co-author of the <a href=\"https:\/\/www.vox.com\/future-perfect\/461680\/if-anyone-builds-it-yudkowsky-soares-ai-risk\" rel=\"nofollow noopener\" target=\"_blank\">controversial<\/a> book <a href=\"https:\/\/www.hachettebookgroup.com\/titles\/eliezer-yudkowsky\/if-anyone-builds-it-everyone-dies\/9780316595643\/?lens=little-brown\" rel=\"nofollow noopener\" target=\"_blank\">If Anyone Builds It, Everyone Dies<\/a>. 
As the title suggests, Yudkowsky believes that superintelligent AI would wipe out humanity \u2014 a position that he stands by and lays out for Roher.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Turning his back on these storm clouds \u2014 and taking the advice of his wife, Caroline, who tells him that he needs to find hope for the future \u2014 Roher tunes into the chorus of AI optimists. They tell him, variously, that there are more potential benefits than downsides to AI; that technology has made the world better in every way; that this will be the tool that helps us solve all our greatest problems. Not to mention: AI will bring the best health care on the planet to the poorest people on Earth, extend our healthspan by decades, and enable us to live in a postscarcity utopia free of drudgery. Oh, and: We will become an interplanetary species, all thanks to AI.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">These promises initially reassure Roher, perhaps because he seems easily led by whomever he\u2019s spoken to most recently. It is Harris who ultimately convinces him that we can\u2019t separate the promise of AI from the peril it presents. The conclusions that result will be obvious to anyone who\u2019s thought about these issues for more than a moment or two: If AI automates work, for example, how will people make a living?<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">It doesn\u2019t help that many of the most invested players reflect on these questions superficially, if at all. OpenAI CEO Sam Altman tells Roher that he\u2019s worried about how authoritarian governments will use AI \u2014 a claim that is followed in the film by a cut to images of Altman posing with authoritarian leaders. 
Other tech CEOs fall back on PR pleasantries in response to the filmmaker\u2019s questions, and Roher too often goes easy on them, never diving deeper when they admit that even they aren\u2019t confident that everything will go well. That these are the leaders of AI companies racing against each other to make the technology more and more advanced does little to inspire confidence.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">(Some of the techno-pessimistic people interviewed for the documentary have expressed their <a href=\"https:\/\/bsky.app\/profile\/emilymbender.bsky.social\/post\/3mdj523d5v22z\" rel=\"nofollow noopener\" target=\"_blank\">strong<\/a> <a href=\"https:\/\/www.linkedin.com\/posts\/timnit-gebru-7b3b407_ghost-in-the-machine-activity-7430060424978415617-mRUT\/\" rel=\"nofollow noopener\" target=\"_blank\">displeasure<\/a> with the final result.)<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">\u201cWhy can\u2019t we just stop?\u201d Roher asks these tech CEOs. He\u2019s told that a moratorium is a pipe dream: Many groups around the world are building advanced AI, all with different motivations. Legislation lags far behind the rate of technological progress. Even if we could pass laws in the US and EU that would stop or slow things down, says Anthropic CEO Dario Amodei, we\u2019d have to convince the Chinese government to follow suit.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">If we don\u2019t create it, the thinking goes, our enemies will. 
It\u2019s best to get ahead of them.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">This is, of course, the logic of <a href=\"https:\/\/www.carnegiecouncil.org\/explore-engage\/key-terms\/nuclear-deterrence\" rel=\"nofollow noopener\" target=\"_blank\">nuclear deterrence<\/a>: If we don\u2019t mitigate the risk of ending the world through mutually assured destruction, there\u2019s nothing stopping someone else from pressing the button first.<\/p>\n<p>An apocalypse in every generation<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The atomic comparison is apt, if only because Roher sees the stakes in similarly stark terms. \u201cWill my son live in a utopia, or will we go extinct in 10 years?\u201d he wonders aloud. It\u2019s a question that\u2019s central to the film. But he never really sits with the more likely scenario that AI will neither lead to human extinction nor end all disease and drudgery. Every generation faces the specter of its own annihilation \u2014 and yet the ends of days keep accumulating, no matter how <a href=\"https:\/\/www.vox.com\/future-perfect\/476745\/doomsday-clock-dario-amodei-anthropic-artificial-intelligence-existential-risk\" rel=\"nofollow noopener\" target=\"_blank\">close the doomsday clock gets<\/a> to apocalypse.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">The point, then, isn\u2019t that AI won\u2019t be bad for us, but that by framing the question in strictly utopian or dystopian terms, we miss the messy reality that lies between hell on earth and heaven in the stars. Although The AI Doc tries to chart an \u201capocaloptimist\u201d course between two extremes, it doesn\u2019t grasp the real stakes. 
AI doesn\u2019t really create new risks as such \u2014 it\u2019s a force multiplier for <a href=\"https:\/\/www.vox.com\/future-perfect\/466368\/openai-for-profit-restructure-biodefense-valthos\" rel=\"nofollow noopener\" target=\"_blank\">existing<\/a> <a href=\"https:\/\/www.vox.com\/technology\/484250\/los-alamos-nuclear-ai-openai-chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">ones<\/a> like the <a href=\"https:\/\/www.vox.com\/artificial-intelligence-nuclear-weapons\" rel=\"nofollow noopener\" target=\"_blank\">threat<\/a> of <a href=\"https:\/\/www.vox.com\/future-perfect\/464678\/house-of-dynamite-movie-netflix-nuclear-risk\" rel=\"nofollow noopener\" target=\"_blank\">nuclear warfare<\/a> and the <a href=\"https:\/\/www.vox.com\/future-perfect\/471726\/where-lab-made-dna-is-created-and-barely-policed\" rel=\"nofollow noopener\" target=\"_blank\">development<\/a> and use of <a href=\"https:\/\/www.vox.com\/future-perfect\/417791\/ai-bioweapons-detection-pandemics-ginkgo-endar-bioradar\" rel=\"nofollow noopener\" target=\"_blank\">biological weapons<\/a>. The chief existential risks of AI are human-made and human-driven. And that means, as Caroline says in the film\u2019s ending narration, \u201cWe get to decide how this goes.\u201d She\u2019s right, but her husband never seems to understand how she\u2019s right.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1agbrixi lg8ac51 lg8ac50 xkp0cg1\">Like too many Big Issue Documentaries, Roher\u2019s film is heavy on problems and light on solutions. It does offer some, calling for international cooperation, transparency, legal liabilities for companies if something goes wrong, testing before release, and adaptive rules to match the speed of progress. But just as this is a strictly introductory course in AI \u2014 one that will probably irritate those who\u2019ve already moved on to AI 102 \u2014 these recommendations are only a starting point. 
For Roher, they offer reason to be hopeful. For the rest of us, they\u2019re just the beginning of an opportunity to meaningfully steer the course of our future.<\/p>\n","protected":false},"excerpt":{"rendered":"In 1964, science fiction writer Arthur C. 
Clarke predicted that computers would overtake human evolution. \u201cPresent-day electronic brains are&hellip;\n","protected":false},"author":2,"featured_media":4893,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,3144,4523,684,134],"class_list":{"0":"post-4892","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-culture","11":"tag-future-perfect","12":"tag-innovation","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/4892","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=4892"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/4892\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/4893"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=4892"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=4892"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=4892"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}