{"id":24773,"date":"2026-05-01T19:41:09","date_gmt":"2026-05-01T19:41:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/24773\/"},"modified":"2026-05-01T19:41:09","modified_gmt":"2026-05-01T19:41:09","slug":"why-agentic-ai-governance-is-falling-short-and-what-we-can-do-about-it","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/24773\/","title":{"rendered":"Why agentic AI governance is falling short \u2013 and what we can do about it"},"content":{"rendered":"<p class=\"p1\">Agentic artificial intelligence misbehavior is reaching epidemic proportions. Today\u2019s AI governance solutions aren\u2019t stopping the madness. We need to rethink our entire approach to AI governance.<\/p>\n<p class=\"p2\">Even though agentic AI is still nascent, many of the AI agents in production today are wreaking havoc. From <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/apr\/29\/claude-ai-deletes-firm-database\" rel=\"nofollow noopener\" target=\"_blank\">deleting production databases<\/a> (and their backups!) to <a href=\"https:\/\/www.wired.com\/story\/ai-models-lie-cheat-steal-protect-other-models-research\/\" rel=\"nofollow noopener\" target=\"_blank\">lying and cheating to avoid deletion<\/a>, horror stories about agents-gone-bad are driving reconsideration of the technology.<\/p>\n<p class=\"p2\">And yet, companies of all sizes are enamored by agents\u2019 promise. Given large language models\u2019 power to glean insights from vast quantities of unstructured data, LLM-powered AI agents can now take action based upon such information to accomplish an astounding variety of business tasks \u2013 as well as a commensurate number of nefarious actions.<\/p>\n<p class=\"p2\">The behavior of such agents is nondeterministic: Given the way LLMs work, agentic behavior is unpredictable. 
It\u2019s this unpredictability, in fact, that makes agents so powerful, as agents can figure out for themselves novel ways to accomplish the tasks set out for them.<\/p>\n<p class=\"p2\">Companies deploying AI agents, therefore, face a dilemma: Should they allow such agents free rein to achieve their goals at the risk of dangerous misbehavior, or lock them down so that they can\u2019t go rogue by constraining them exclusively to deterministic, predictable behavior?<\/p>\n<p class=\"p2\">Clearly, we want some middle ground: Give agents the freedom to solve problems nondeterministically but establish sufficient guardrails to constrain their behavior to comply with our rules and policies.<\/p>\n<p class=\"p2\">Such is the motivation for the entire agentic AI governance category: a burgeoning subset of the AI governance market focused on helping organizations establish and manage such guardrails for their AI agents.<\/p>\n<p class=\"p2\">Such guardrails are unquestionably necessary. 
But if we look more closely at how rapidly agentic AI is evolving, it soon becomes clear that today\u2019s agentic AI governance is woefully insufficient for reining in increasingly dangerous AI agents.<\/p>\n<p>The \u2018hall of mirrors\u2019 problem<\/p>\n<p class=\"p2\">Perhaps the most obvious problem that all agentic AI governance faces is the predilection of the more powerful AI agents to break the rules.<\/p>\n<p class=\"p2\">This malfeasance leads to a problem I discussed in <a href=\"https:\/\/siliconangle.com\/2026\/04\/17\/will-agentic-ai-governance-run-amok-lesson-asimovs-three-laws\/\" rel=\"nofollow noopener\" target=\"_blank\">my last article<\/a> that I called the hall of mirrors problem \u2013 what some people call \u201cwho watches the watchers.\u201d<\/p>\n<p class=\"p2\">Given the power and ubiquity of AI today, leveraging AI (in particular, AI agents) to ensure that agentic AI stays within its guardrails is ostensibly the most logical choice.<\/p>\n<p class=\"p2\">The question then becomes: How do we ensure that these \u201cpolice officer\u201d agents themselves don\u2019t misbehave? 
How do we keep AI agents and their watchers from conspiring to break the rules?<\/p>\n<p>The autonomy squeeze<\/p>\n<p class=\"p2\">If adding layers of agentic police officers doesn\u2019t address the problem, then maybe the best approach to keeping misbehaving AI agents in line is to lock down their behavior.<\/p>\n<p class=\"p2\">The most common approach today is to establish a mechanism for defining and enforcing policies and rules that directly constrain agentic behavior.<\/p>\n<p class=\"p2\">As AI agents become more powerful, however, such constraints will increasingly prevent those agents from accomplishing tasks nondeterministically \u2013 what I like to call the autonomy squeeze.<\/p>\n<p class=\"p2\">Here\u2019s how I define the autonomy squeeze: AI agents eventually become so dangerous that the guardrails we would need to put in place to control them prevent them from providing any business value whatsoever. At that point, there\u2019s no reason to deploy AI agents at all.<\/p>\n<p>Why \u2018human in the loop\u2019 doesn\u2019t solve the problem<\/p>\n<p class=\"p2\">Another approach is to prevent agents from taking actions directly \u2013 in other words, constrain autonomous behavior by requiring a human to step in to approve an action.<\/p>\n<p class=\"p2\">You\u2019ll hear the phrase \u201chuman in the loop\u201d from a wide range of vendors, including both vendors selling their own agents and the agentic AI governance vendors looking to constrain agentic behavior.<\/p>\n<p class=\"p2\">However, there is a massive problem with all human in the loop approaches: automation bias. That refers to the human tendency to put too much trust into automated systems \u2013 even fallible ones.<\/p>\n<p class=\"p2\">Whenever humans interact with an automated system, they may be skeptical at first. 
It\u2019s human nature to check and double-check that the automation is working properly.<\/p>\n<p class=\"p2\">However, as the system successfully completes its tasks multiple times, humans become complacent. \u201cIt worked fine the last hundred times,\u201d we say, \u201cso I can trust it to behave properly the next time.\u201d<\/p>\n<p class=\"p2\">Except, of course, when something goes wrong.<\/p>\n<p class=\"p2\">Automation bias, in fact, isn\u2019t specific to AI agents, or even information technology-based automation at all. For example, investigators attributed <a href=\"https:\/\/en.wikipedia.org\/wiki\/Air_France_Flight_447\" rel=\"nofollow noopener\" target=\"_blank\">the crash of Air France flight 447 in 2009<\/a> to human causes that boiled down to automation bias.<\/p>\n<p class=\"p2\">The cockpit crew became so comfortable with the aircraft\u2019s automated systems that when a fault in a sensor developed, they misunderstood the problem and crashed the plane into the ocean.<\/p>\n<p class=\"p2\">Automation bias is just as dangerous for agentic AI, as it leads to the following human behaviors:<\/p>\n<p>Humans reduce manual verification, eventually accepting results at face value every time.<br \/>\nThere is an increasing reluctance to intervene, especially when the agents seem so confident in their actions.<br \/>\nHumans disregard their own judgment even when a result is suspicious. 
\u201cI trusted it to take the right action the last hundred times, so it must know better, and my suspicions are unwarranted.\u201d<br \/>\nOver time, humans lose the ability to spot potential errors, either individually or as personnel change from more seasoned to more junior staff \u2013 an example of what we call the <a href=\"https:\/\/cacm.acm.org\/news\/the-ai-deskilling-paradox\/\" rel=\"nofollow noopener\" target=\"_blank\">AI deskilling paradox<\/a>.<\/p>\n<p class=\"p2\">Agentic AI, in fact, exacerbates the problem of automation bias, because of LLMs\u2019 deceptive appearance of intelligence and confidence.<\/p>\n<p class=\"p2\">Furthermore, given how rapidly agents can make decisions and how often they will make decisions at scale, humans simply won\u2019t be able to keep up, even if they were sufficiently skeptical of suspicious behaviors.<\/p>\n<p class=\"p2\">Note that it doesn\u2019t matter how good the agentic AI guardrails are \u2013 because of automation bias, humans will simply ignore, disregard or turn off any warnings AI governance might provide.<\/p>\n<p>Solving the problem \u2013 but perhaps not the solution you want<\/p>\n<p class=\"p2\">One police officer agent won\u2019t do. Putting one agent in charge of keeping police officer agents on track doesn\u2019t solve the problem, either.<\/p>\n<p class=\"p2\">The best answer we have today: multiple diverse adversarial validators with multi-layer validation.<\/p>\n<p class=\"p2\">Instead of one validator (aka \u201cpolice officer agent\u201d), use multiple validators at the same time. Make sure these validators have the following characteristics:<\/p>\n<p>They all leverage separate technologies \u2013 in particular, different LLMs. Using validators from different vendors is even better.<br \/>\nMake sure each validator is adversarial \u2013 a characteristic familiar from red teaming and penetration testing. 
Every time an agent makes a potential decision, each validator should actively look for reasons why that decision is incorrect or malicious.<br \/>\nEach validation should be multi-layer \u2013 to reduce the chance that any validator is a single point of failure, implement different validators at different layers, for example:<\/p>\n<p>Syntax layer: Is the result well-formed?<br \/>\nSemantic layer: Does the result make sense?<br \/>\nExecution layer: Does the result work in production?<br \/>\nOutcome layer: Will the agent achieve its goal?<\/p>\n<p class=\"p2\">If multiple diverse adversarial validators can answer these questions for all potential agentic behavior, then your AI governance system can minimize the risk of agentic misbehavior.<\/p>\n<p>The Intellyx take \u2013 did you say \u2018minimize the risk\u2019?<\/p>\n<p class=\"p2\">Yes \u2013 taking this approach to agentic AI governance at best lowers the risk \u2013 but can never eliminate it.<\/p>\n<p class=\"p2\">There is always the possibility that some agentic conspiracy suborns the validators, or that some systemic pattern of validator error or misbehavior lets some agentic mischief through.<\/p>\n<p class=\"p2\">The primary lesson here: Agentic AI never provides certainty. It can only provide confidence thresholds.<\/p>\n<p class=\"p2\">In other words, nondeterministic (probabilistic) behavior can only provide probabilistic trust. 
Absolute trust is impossible as long as agents behave nondeterministically.<\/p>\n<p class=\"p2\">Confidence thresholds always fall short of 100% \u2013 and the difference between the threshold and 100% is what we call the error budget.<\/p>\n<p class=\"p2\">Site reliability engineers, or SREs, are quite familiar with error budgets: Given the available time and money, SREs can\u2019t guarantee a site will be up all the time.<\/p>\n<p class=\"p2\">Instead, they work toward the error budget, which quantifies just how good the performance can be given those time and money constraints \u2013 in other words, how much failure is acceptable.<\/p>\n<p class=\"p2\">Just so with agentic behavior. Given the constraints on such behavior, the best we can do is to say that agents will behave well within their error budgets \u2013 but sometimes they will misbehave regardless of all the constraints and protections we put into place, and we simply have to live with that fact.<\/p>\n<p class=\"p2\">If you\u2019re not OK with such error budgets, then don\u2019t deploy AI agents.<\/p>\n<p>Jason Bloomberg is founder and managing director of Intellyx,\u00a0which advises business leaders and technology vendors on their digital transformation strategies. He wrote this article for SiliconANGLE.\u00a0A human being wrote every word of this article.<\/p>\n<p>Image: Jason Bloomberg<\/p>\n<p>Support our mission to keep content open and free by engaging with theCUBE community. 
Join theCUBE\u2019s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.<\/p>\n","protected":false},"excerpt":{"rendered":"Agentic artificial intelligence misbehavior is reaching epidemic proportions. Today\u2019s AI governance solutions aren\u2019t stopping the madness. 
We need&hellip;\n","protected":false},"author":2,"featured_media":24774,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[179,7493,2822,396,16771],"class_list":{"0":"post-24773","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agentic-ai","8":"tag-agentic-ai","9":"tag-agentic-artificial-intelligence","10":"tag-guest-author","11":"tag-siliconangle","12":"tag-why-agentic-ai-governance-is-falling-short-and-what-we-can-do-about-it"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/24773","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=24773"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/24773\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/24774"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=24773"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=24773"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=24773"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}