{"id":129,"date":"2026-04-08T04:26:17","date_gmt":"2026-04-08T04:26:17","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/129\/"},"modified":"2026-04-08T04:26:17","modified_gmt":"2026-04-08T04:26:17","slug":"should-states-treat-ai-incidents-like-aviation-accidents-and-investigate","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/129\/","title":{"rendered":"Should states treat AI incidents like aviation accidents, and investigate?"},"content":{"rendered":"<p>A new policy framework from the Aspen Policy Academy, a nonpartisan policy training program, is urging state officials to build formal systems to investigate incidents when artificial intelligence tools make mistakes or cause harm.<\/p>\n<p><a href=\"https:\/\/aspenpolicyacademy.org\/project\/genai-forward-incidents-2026\/\" rel=\"nofollow noopener\" target=\"_blank\">The guide<\/a>, published last month, proposes a standardized incident investigation framework specifically designed for Utah\u2019s Office of Artificial Intelligence Policy, a statewide agency that operates one of the nation\u2019s few AI regulatory sandboxes. Regulatory sandboxes (not to be confused with AI sandboxes used for technical testing) allow the state to test technologies under the close watch of regulators checking for legal and policy compliance. According to the office\u2019s website, Utah\u2019s Regulatory Relief program is designed to provide compliance exemptions for AI companies whose tools may benefit the state in the future.<\/p>\n<p>The guide argues that the agency lacks clear processes for responding when those tools produce biased decision-making, unsafe recommendations or other failures, with financial, physical or societal repercussions, which can erode public trust.<\/p>\n<p>\u201cTrust is not a milestone that you hit, it\u2019s something that you earn and you maintain,\u201d Aspen Policy Academy fellow Michelle Sipics, who authored the report, said in an interview. 
\u201cBoth regulators and members of the public watch what you do when something goes wrong.\u201d<\/p>\n<p>As more state governments turn to generative AI tools, officials are increasingly grappling with how to manage real-world risks, such as algorithmic discrimination\u00a0in hiring, housing and government services. Colorado lawmakers are still debating legislative changes to the <a href=\"https:\/\/statescoop.com\/colorado-releases-new-ai-policy-framework-aimed-revising-the-states-2024-law\/\" rel=\"nofollow noopener\" target=\"_blank\">state\u2019s landmark 2024 AI law<\/a>,\u00a0including how responsibility should be assigned to developers and deployers in case something goes wrong. <\/p>\n<p>Sipics said the framework would establish a structured investigative process that brings together government officials, developers and industry experts to investigate so-called \u201cGenAI incidents,\u201d cases when AI systems cause direct harm through their development, deployment or outputs.<\/p>\n<p>She said she modeled the framework after safety practices in aviation and health care, which emphasize root-cause analysis and prevention, rather than enforcement.<\/p>\n<p>\u201cSafety has continued to improve over the decades, and one of the reasons for that is the dedication to investigating incidents. From those investigations, the industry feeds what they learn back into everything, from how they train pilots, how they train air traffic control, designing aircraft maintenance operations, everything,\u201d she explained, adding, \u201cI feel like GenAI needs that same discipline.\u201d <\/p>\n<p>The recommendations build on Utah\u2019s broader push to position itself as a national leader in AI governance. 
A previous <a href=\"https:\/\/statescoop.com\/utah-aspen-institute-policy-academy-ai-governance\/\" rel=\"nofollow noopener\" target=\"_blank\">Aspen Policy Academy collaboration<\/a> outlined evaluation standards focused on transparency, accountability and public trust \u2014 which, according to the <a href=\"https:\/\/commerce.utah.gov\/ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Office of AI Policy\u2019s website<\/a>, are central to the state\u2019s AI strategy.<\/p>\n<p>The framework also calls for companies participating in Utah\u2019s sandbox to sign a pledge committing to publicly share investigation findings, similar to the incident reports published by the National Transportation Safety Board, the independent federal agency that investigates aviation accidents. Sipics said this transparency would show the public that companies and government agencies \u201chave earned their continued trust to keep innovating with this technology.\u201d<\/p>\n<p>\u201cPeople are not using this technology in a vacuum. It exists in the world. It exists for people,\u201d Sipics said. \u201cEverybody should be able to learn the lessons learned as we go along, so that we can improve safety for everyone.\u201d <\/p>\n<p>The guide frames incident investigation as the next phase of AI governance, one that could help states move from reactive regulation to continuous learning, potentially offering a model for federal policymakers seeking more consistent AI oversight. Though, Sipics said, we are \u201ca ways off\u201d from that future.<\/p>\n<p>\u201cRealistically, I think transparency is probably the best path to scale because best practices like this build in a community,\u201d she said. 
\u201cWhen people see you being responsible and sharing what you\u2019ve learned and continuously improving the safety of your products, that has value, that gets buy-in.\u201d<\/p>\n<p>\t\t\t\t\t<img decoding=\"async\" class=\"author-card__image\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/sophiafox.jpg\" alt=\"Sophia Fox-Sowell\"\/><\/p>\n<p>\n\t\t\tWritten by Sophia Fox-Sowell<br \/>\n\t\t\tSophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor\u2019s in anthropology at Wagner College and master\u2019s in media innovation from Northeastern University.\t\t<\/p>\n","protected":false},"excerpt":{"rendered":"A new policy framework from the Aspen Policy Academy, a nonpartisan policy training program, is urging state 
officials&hellip;\n","protected":false},"author":2,"featured_media":130,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,221,25,111,222,223,224,225,185,226],"class_list":{"0":"post-129","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-ai-policy","10":"tag-artificial-intelligence","11":"tag-artificial-intelligence-ai","12":"tag-aspen-institute","13":"tag-generative-ai","14":"tag-michelle-sipics","15":"tag-safety","16":"tag-state-local-news","17":"tag-utah-office-of-ai-policy"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/129","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=129"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/129\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/130"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=129"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=129"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=129"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}