{"id":6599,"date":"2026-04-17T07:57:08","date_gmt":"2026-04-17T07:57:08","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/6599\/"},"modified":"2026-04-17T07:57:08","modified_gmt":"2026-04-17T07:57:08","slug":"davis-ai-follows-old-script-of-delayed-prevention","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/6599\/","title":{"rendered":"Davis: AI follows old script of delayed prevention"},"content":{"rendered":"<p>A new documentary about artificial intelligence arrives at a revealing moment.\u00a0\u201cThe AI Doc: Or How I Became an Apocaloptimist\u201d is framed as a broad public conversation about what AI may mean for humanity. This is useful, but it does not go quite far enough. AI is no longer a story about innovation; it is becoming a story about prevention.<\/p>\n<p>The temptation is to treat AI as wholly unprecedented. In one sense, it is. The pace of change is remarkable, and the range of possible effects is unusually wide. In another sense, the pattern is familiar. Societies often mishandle preventable harm in recognizably similar ways. Warning signs appear. Evidence accumulates unevenly. Institutions hesitate. Deployment races ahead. Only later do we ask whether safeguards should have come first. That is the pattern now emerging around AI.<\/p>\n<p>The public discussion still swings between utopian promise and apocalyptic dread. Both extremes can distract from the central problem. The most immediate danger is not only what AI may someday become. It is that society is normalizing deployment at scale before building the oversight, accountability and public safeguards that such systems require.<\/p>\n<p>That concern is no longer hypothetical. AI-generated deepfakes are expected to spread in the\u00a02026 U.S. midterm campaigns, with experts warning that they could further erode already fragile public trust. 
A technology capable of distorting public reality is already loose in civic life, while the institutions meant to manage that threat are still improvising their response.<\/p>\n<p>A different example can be seen in the infrastructure behind AI.\u00a0A typical AI-focused data center can consume as much electricity as\u00a0100,000 households.\u00a0Lawmakers\u00a0have introduced legislation to pause AI data center construction until federal safeguards are in place for workers, consumers and the environment. The basic public costs of AI are now being confronted, even as expansion is already underway.<\/p>\n<p>However, deepfakes and data centers are not the core issue. The central question is governance. Who sets the boundaries for systems that can influence political information, education, employment, finance, medicine and everyday civic life? Who decides what risks are acceptable before these tools become deeply embedded? Who bears responsibility when automated systems mislead, discriminate, manipulate or otherwise cause foreseeable harm?<\/p>\n<p>AI may indeed bring extraordinary benefits. That possibility strengthens the case for governance; it does not weaken it. The more powerful a technology is, the harder it becomes to retrofit safety and accountability after the fact.<\/p>\n<p>A generation from now, the central question may not be whether AI was transformative. It almost certainly will be. However, transformation without governance is not foresight.<\/p>\n<p>Barry R. 
Davis is a physician, statistician and author of \u201cThe Preventioneers: Diseases, Disasters and the Discoveries That Changed Our World.\u201d \/InsideSources<\/p>\n","protected":false},"excerpt":{"rendered":"A new documentary about artificial intelligence arrives at a revealing moment.\u00a0\u201cThe AI Doc: Or How I Became an&hellip;\n","protected":false},"author":2,"featured_media":6600,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,39,2657],"class_list":{"0":"post-6599","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-data-centers","11":"tag-deepfakes"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/6599","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=6599"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/6599\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/6600"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=6599"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=6599"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=6599"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}