{"id":5591,"date":"2026-04-15T05:36:10","date_gmt":"2026-04-15T05:36:10","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/5591\/"},"modified":"2026-04-15T05:36:10","modified_gmt":"2026-04-15T05:36:10","slug":"an-ai-threat-looms-and-we-are-not-prepared-juhyun-nam","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/5591\/","title":{"rendered":"An AI threat looms, and we are not prepared \u2014 Juhyun Nam"},"content":{"rendered":"<p>Commentary: The case that an AI catastrophe won\u2019t happen is getting harder to make by the week. And we are nowhere near prepared to face one.<\/p>\n<p>Kilito Chan\/Getty Images<\/p>\n<p>In 2023, the heads of the world\u2019s leading artificial intelligence companies \u2014 OpenAI, Google DeepMind, Anthropic \u2014 signed a letter warning of the existential risks emerging from AI. It included this declaration:<\/p>\n<p>\u201cMitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.\u201d<\/p>\n<p>Far from being heeded, this warning has been shunted aside in the mad rush to embrace this new technology. 
Despite mounting evidence of these risks since then, the Trump administration recently released a national AI policy framework that urges Congress to preempt state AI safety laws, opting for \u201clight-touch\u201d regulation.<\/p>\n<p>As a university student who has conducted a series of interviews with AI safety researchers, I have found a disturbing common thread: The people closest to these systems are ringing alarm bells, while the current policy infrastructure is nowhere near ready.<\/p>\n<p>The danger is undeniable. Last fall, Anthropic disclosed that a Chinese state-sponsored cyberattack designed to steal sensitive data from tech companies, financial institutions and government agencies leveraged AI agents to execute 80% to 90% of the operation independently. Meanwhile, in controlled demonstrations, AI tools have provided non-experts with step-by-step instructions for creating biological weapons.<\/p>\n<p>And these are only the incidents we know about \u2014 ones involving human misuse. As AI systems grow more capable and autonomous, the risk of catastrophe from the technology itself also increases. In late 2024, OpenAI\u2019s o1 model attempted to disable its own oversight mechanism and subsequently denied doing so to researchers 99% of the time.<\/p>\n<p>The case that an AI catastrophe won\u2019t happen is getting harder to make by the week. And we are nowhere near prepared to face one.<\/p>\n<p>Currently, California\u2019s Senate Bill 53 and New York\u2019s RAISE Act come closest to addressing the issue. Both proposed bills call for annual safety frameworks, whistleblower protections and penalties for non-compliance. But these policies are designed for ongoing oversight, not crisis response. 
There\u2019s no proposed legislation for when a crisis hits, no emergency institutional mechanisms, no protocols for what happens on a societal level.<\/p>\n<p>Importantly, this isn\u2019t a static issue \u2014 it\u2019s one we\u2019re actively regressing on. The Trump administration\u2019s new policy framework, released March 20, calls for \u201caccelerating deployment of AI applications across sectors\u201d and urges Congress to \u201cpreempt state AI laws\u201d that offer some small measure of protection against catastrophic risk. The early days of this administration saw a rescission of Biden\u2019s AI governance framework and proposals to cut the National Institute of Standards and Technology\u2019s budget by more than 40%.<\/p>\n<p>We simply cannot afford to rely on a reactive model of governance for an AI catastrophe. Unlike an oil spill or a building collapse, an AI catastrophe might not announce itself \u2014 and by the time it does, it may be too late.<\/p>\n<p>When the government retreats from AI governance, industry fills the space. In 2025, twelve frontier companies published their own voluntary safety frameworks, without public input or democratic mandate.<\/p>\n<p>That\u2019s a problem. OpenAI does not want what the average American wants.<\/p>\n<p>We need adaptable, preexisting frameworks that can be deployed in an instant. Whether the trigger is a cyberattack, a bioweapon or something we haven\u2019t imagined yet, we need prepared legislation on the shelf, ready to pass the moment the political window opens.<\/p>\n<p>Right now, companies in California are required to report AI catastrophes \u2014 15 days after they happen, mind you \u2014 but no government body has the power to do anything about them. That needs to change. 
We need legal authority, established in advance, that allows the government to shut down a dangerous AI system the moment a crisis begins \u2014 not after weeks of congressional debate.<\/p>\n<p>This begins with us. Call your representatives, and ask them one question: What is your plan for an AI catastrophe? If they don\u2019t have one, demand that Congress stop preempting state AI safety laws and start building federal crisis frameworks.<\/p>\n<p>Talk about this with the people around you. Most Americans don\u2019t know their government is dismantling AI safety protections while the very people building AI warn of extinction.<\/p>\n<p>We\u2019d better start listening, before it\u2019s too late.<\/p>\n<p class=\"cci_endnote_contact\" title=\"CCI End Note Contact\">Juhyun Nam is a Duke University student studying economics and computer science. He co-founded OpenPolicy, a platform scoring U.S. senators\u2019 AI policy positions, and hosts The Alignment Gap, an interview series with AI safety researchers. This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.<\/p>\n","protected":false},"excerpt":{"rendered":"Commentary: The case that an AI catastrophe won\u2019t happen is getting harder to make by the week. 
And&hellip;\n","protected":false},"author":2,"featured_media":5592,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,2524,1651],"class_list":{"0":"post-5591","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-commentary","11":"tag-opinion"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/5591","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=5591"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/5591\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/5592"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=5591"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=5591"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=5591"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}