{"id":30190,"date":"2026-05-06T22:11:09","date_gmt":"2026-05-06T22:11:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/30190\/"},"modified":"2026-05-06T22:11:09","modified_gmt":"2026-05-06T22:11:09","slug":"wh-studying-ai-security-executive-order","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/30190\/","title":{"rendered":"WH \u2018studying\u2019 AI security executive order"},"content":{"rendered":"<p>The Trump administration is considering issuing an executive order to ensure new artificial intelligence models are secure before they\u2019re released publicly, according to a top White House official.<\/p>\n<p>Kevin Hassett, director of the National Economic Council, compared the approach to how the Food and Drug Administration evaluates drugs for safety.<\/p>\n<p>\u201cWe\u2019re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AI that also potentially create vulnerabilities should go through a process so that they\u2019re released in the wild after they\u2019ve been proven safe, just like an FDA drug,\u201d Hassett said during an interview on Fox Business on Wednesday.<\/p>\n<p>Hassett\u2019s comments come as government and private sector leaders continue to respond to Anthropic\u2019s disclosure of its powerful \u201cMythos\u201d model. 
The company <a href=\"https:\/\/red.anthropic.com\/2026\/mythos-preview\/\" target=\"_blank\" rel=\"noopener nofollow\">previewed<\/a> last month how Mythos could quickly find and exploit decades-old vulnerabilities in widely used software, sparking <a href=\"https:\/\/labs.cloudsecurityalliance.org\/wp-content\/uploads\/2026\/05\/mythosreadyv1.0.pdf\" target=\"_blank\" rel=\"noopener nofollow\">concerns<\/a> that cyber attackers will be able to use AI to quickly discover new vulnerabilities and create exploits before defenders can react.<\/p>\n<p>Anthropic has limited the release of the Mythos model to a handful of partner companies.<\/p>\n<p>Hassett said he was \u201chighly confident\u201d in National Cyber Director Sean Cairncross\u2019s work to coordinate the government\u2019s response to Mythos.<\/p>\n<p>\u201cWe have scrambled an all-of-government effort and all the private sector to coordinate and to make sure that before this model is released out into the wild, that it\u2019s been tested left and right to make sure that it doesn\u2019t cause any harm to the American businesses or the American government,\u201d Hassett said.<\/p>\n<p>The shift toward more government oversight of AI would mark a change in direction for the Trump administration, which has touted its largely hands-off approach to the technology.<\/p>\n<p>It would also likely increase the responsibilities of the Center for AI Standards and Innovation (CAISI), a unit within the Commerce Department\u2019s National Institute of Standards and Technology.<\/p>\n<p>Earlier this week, CAISI <a href=\"https:\/\/www.nist.gov\/news-events\/news\/2026\/05\/caisi-signs-agreements-regarding-frontier-ai-national-security-testing\" target=\"_blank\" rel=\"noopener nofollow\">announced<\/a> new agreements with Google DeepMind, Microsoft and xAI that will allow the center to conduct \u201cpre-deployment\u201d evaluations of the firms\u2019 respective frontier AI models. 
The NIST center had already struck similar agreements with Anthropic and OpenAI.<\/p>\n<p>So far, CAISI has conducted 40 evaluations, including on some models that have yet to be released.<\/p>\n<p>\u201cIndependent, rigorous measurement science is essential to understanding frontier AI and its national security implications,\u201d CAISI Director Chris Fall said in a statement. \u201cThese expanded industry collaborations help us scale our work in the public interest at a critical moment.\u201d<\/p>\n<p>CAISI was initially established as the \u201cAI Safety Institute\u201d under the Biden administration. The Trump administration rebranded the center as part of its AI Action Plan. Commerce Secretary Howard Lutnick has designated CAISI \u201cto serve as industry\u2019s primary point of contact within the U.S. government to facilitate testing, collaborative research and best practice development related to commercial AI systems.\u201d<\/p>\n<p>But some outside experts have raised concerns that CAISI lacks the resources needed to adequately carry out its mission.<\/p>\n<p>The Trump-aligned America First Policy Institute, in a recent <a href=\"https:\/\/www.americafirstpolicy.com\/issues\/building-ai-readiness-in-the-u.s-government\" target=\"_blank\" rel=\"noopener nofollow\">issue brief<\/a>, called CAISI \u201cchronically underfunded,\u201d with approximately 30 total staff. 
The think tank said the center has received $30 million since it was established in 2024, less funding than comparable AI centers in Canada and Singapore have each received.<\/p>\n<p>The issue brief argued Congress should provide CAISI with $50-100 million in annual funding.<\/p>\n<p>Meanwhile, a <a href=\"https:\/\/fas.org\/publication\/a-national-center-for-advanced-ai-reliability-and-security\/\" target=\"_blank\" rel=\"noopener nofollow\">proposal<\/a> published by the Federation of American Scientists last year advocated for a \u201csignificantly enhanced\u201d CAISI with an annual operating budget of up to $155 million, as well as $155-275 million in \u201cset up costs\u201d for things like high-security compute facilities.<\/p>\n<p>The enhanced center would have \u201cexpanded capacity for conducting advanced model evaluations for catastrophic risks, provide direct emergency assessments to the president and National Security Council (NSC), and drive critical AI reliability and security research, ensuring America is prepared to lead on AI and safeguard its national interests.\u201d<\/p>\n<p class=\"article-copyright\">Copyright<br \/>\n                            \u00a9\u00a02026 Federal News Network. All rights reserved. 
<\/p>\n","protected":false},"excerpt":{"rendered":"The Trump administration is considering issuing an executive order to ensure new artificial intelligence models are secure before&hellip;\n","protected":false},"author":2,"featured_media":30191,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,19633,53,25,19542,19634,4868,19635,19636,19637,353,18745,19638],"class_list":{"0":"post-30190","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-america-first-policy-institute","10":"tag-anthropic","11":"tag-artificial-intelligence","12":"tag-center-for-ai-standards-and-innovation","13":"tag-chris-fall","14":"tag-commerce-department","15":"tag-federation-of-american-scientists","16":"tag-howard-lutnick","17":"tag-kevin-hassett","18":"tag-mythos","19":"tag-national-institute-of-standards-and-technology","20":"tag-sean-cairncross"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/30190","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=30190"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/30190\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/30191"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=30190"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":
"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=30190"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=30190"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}