{"id":21611,"date":"2026-04-29T14:24:08","date_gmt":"2026-04-29T14:24:08","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/21611\/"},"modified":"2026-04-29T14:24:08","modified_gmt":"2026-04-29T14:24:08","slug":"are-your-ai-agents-a-security-risk-you-cant-see","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/21611\/","title":{"rendered":"Are Your AI Agents a Security Risk You Can&#8217;t See?"},"content":{"rendered":"<p>Ask any CX leader to produce a complete list of every AI agent currently running in their contact centre environment \u2013 what it is connected to, what data it can access, and who approved its permissions. Most cannot. This issue is growing by the quarter as major enterprise platform vendors ship autonomous agents directly into customer-facing operations.<\/p>\n<p>Speaking on <a href=\"https:\/\/www.youtube.com\/watch?v=UoWNdgZzwug\" rel=\"nofollow noopener\" target=\"_blank\">The Verge\u2019s Decoder podcast<\/a>, Okta co-founder and CEO Todd McKinnon outlined the issue: <\/p>\n<p>\u201cAI agents need to log into stuff. You need to have a system to keep track of them, define their role, define their permissions, what they can connect to and what they can do.\u201d<\/p>\n<p>For an industry that has spent the past two years racing to deploy agentic AI, governance infrastructure has failed to keep pace with automation.<\/p>\n<p>Why Is AI Agent Governance in the Contact Center So Difficult?<\/p>\n<p>The vendor deluge has arrived faster than most IT and CX teams anticipated. 
<a href=\"https:\/\/www.cxtoday.com\/contact-center\/microsoft-dynamics-365-contact-center-ai-agents\/\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft has been pushing AI agents deep into Dynamics 365 Contact Center<\/a>, equipping them with the ability to autonomously handle customer interactions across voice and digital channels.<\/p>\n<p><a href=\"https:\/\/www.cxtoday.com\/contact-center\/sap-ai-ticket-resolution-ai-support\/\" rel=\"nofollow noopener\" target=\"_blank\">SAP is automating ticket resolution through its own AI support capabilities<\/a>, operating independently on support cases at a speed and volume that no human team can supervise in real time.<\/p>\n<p>Salesforce, meanwhile, has been defending AgentForce directly to <a href=\"https:\/\/www.cxtoday.com\/crm\/marc-benioff-ai-saas-investor-concerns\/\" rel=\"nofollow noopener\" target=\"_blank\">investors nervous about what agentic AI means for the future of SaaS<\/a>, with CEO Marc Benioff framing the technology as an expansion of the platform rather than a disruption to it.<\/p>\n<p>Agents are arriving from every direction simultaneously, each with its own identity model, data connections, and permission logic. No single team in most enterprises has the full picture across all of them.<\/p>\n<p>What Happens When No One Is Tracking AI Agent Permissions?<\/p>\n<p>The consequences of that visibility gap are not abstract. In a contact centre context, AI agents are routinely credentialed into CRM systems, customer data platforms, ticketing infrastructure, and telephony layers. 
They operate on behalf of customers, with access to purchase histories, account credentials, and personal data.<\/p>\n<p>When the permissions governing those connections are distributed across vendor dashboards, IT procurement records, and individual team deployments, the risk is not just operational inefficiency \u2013 it is a customer trust exposure.<\/p>\n<p>McKinnon is deliberately unsentimental about what happens next:<\/p>\n<p>\u201cStuff is going to go wrong. There are going to be issues and threats and prompt injection.\u201d<\/p>\n<p>The question for CX leaders is not whether a misconfigured or compromised agent will cause harm \u2013 it is whether they will know about it quickly enough to act, and whether they have the mechanism to respond when they do.<\/p>\n<p>This is the logic behind what Okta calls a \u201ckill switch\u201d: not shutting down the agent itself but revoking its access to every system it touches. \u201cWe\u2019re pulling the access to everything the agent can access, not access to the agent,\u201d McKinnon explains. \u201cAlmost like you would take a machine off the network.\u201d For a contact centre running SAP\u2019s autonomous ticket resolution at scale, or Microsoft\u2019s Dynamics 365 agents handling live customer calls, that capability \u2013 and the speed with which it can be deployed \u2013 may prove to be an important governance feature in the stack.<\/p>\n<p>How Should CX Leaders Build an AI Agent Governance Framework?<\/p>\n<p>McKinnon\u2019s blueprint for what he calls the \u201csecure agentic enterprise\u201d translates into three practical obligations for CX technology buyers \u2013 none of which are vendor-specific, and all of which should be applied to Microsoft, SAP, and Salesforce deployments equally.<\/p>\n<p>Build the inventory first. Before any governance framework can function, organisations need a system of record for every agent in the environment. 
\u201cJust giving enterprises a list of the agents they have \u2013 sounds simple, but they need a list of the agents they have,\u201d McKinnon says. For CX teams, this means mapping platform-native agents, third-party CCaaS AI layers, and internally built automations into a single, maintained register.<\/p>\n<p>Define and control connection points. An agent\u2019s value is proportional to its data access \u2013 but so is its risk. McKinnon argues that the long-term model is not consolidating everything into a single data warehouse and running agents on top of it, but ensuring that each agent holds precisely scoped access tokens for the systems it legitimately needs. \u201cThere\u2019s no good standard for how agents connect to a bunch of other systems they need to get their data,\u201d he acknowledges \u2013 which means CX buyers must ask vendors directly how agent authentication is managed, and what audit trail exists.<\/p>\n<p>Retain the ability to revoke. The kill-switch model McKinnon describes \u2013 pulling access to everything an agent can touch, rather than the agent itself \u2013 only has value if the revocation mechanism exists, and has been tested, before a misconfigured or compromised agent causes harm.<\/p>\n<p>The Governance Gap Is Growing<\/p>\n<p>There is a harder truth embedded in McKinnon\u2019s analysis. The behaviour of these systems is, by his own admission, \u201cnon-deterministic.\u201d There is, as he puts it, \u201cno free lunch\u201d \u2013 you either grant agents sufficient data access to be genuinely useful, or you constrain them to the point of irrelevance. That tension is not going away. It is the defining operational trade-off of the agentic era in CX.<\/p>\n<p>What separates organisations that navigate it well from those that face avoidable incidents is not the choice of platform vendor. 
It is whether they treat AI agents with the same governance discipline they would apply to a new human hire \u2013 knowing what access they have, who authorised it, and how to take it back.<\/p>\n<p>FAQs<\/p>\n<p>What is AI agent governance in the contact centre?<\/p>\n<p>AI agent governance is the process of tracking, permissioning, and controlling every autonomous AI agent operating within your contact centre environment.<\/p>\n<p>Why do enterprises struggle to manage AI agent permissions?<\/p>\n<p>Most organisations are deploying agents from multiple vendors simultaneously, with no single system of record spanning all of them.<\/p>\n<p>What is an AI agent identity?<\/p>\n<p>An agent identity is a hybrid credential \u2013 part human profile, part system account \u2013 that defines what an AI agent can access, on whose behalf, and under what conditions.<\/p>\n<p>What is an AI agent kill switch?<\/p>\n<p>A kill switch revokes an agent\u2019s access to every connected system instantly, effectively removing it from the network without shutting down the agent itself.<\/p>\n<p>How should CX leaders start building an AI agent inventory?<\/p>\n<p>Begin by mapping every agent in your environment \u2013 platform-native, third-party, and internally built \u2013 into a single maintained register before addressing permissions or governance policy.<\/p>\n<p>Is AI agent behaviour predictable?<\/p>\n<p>No \u2013 Okta CEO Todd McKinnon describes agent behaviour as inherently non-deterministic, meaning governance frameworks must account for unexpected actions, not just intended ones.<\/p>\n","protected":false},"excerpt":{"rendered":"Ask any CX leader to produce a complete list of every AI agent currently running in their 
contact&hellip;\n","protected":false},"author":2,"featured_media":21612,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[179,509,510,405,7537,513,1681],"class_list":{"0":"post-21611","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agentic-ai","8":"tag-agentic-ai","9":"tag-agentic-ai-in-customer-service","10":"tag-agentic-ai-software","11":"tag-ai-agents","12":"tag-artificial-intelligence-agents","13":"tag-autonomous-agents","14":"tag-security-and-compliance"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/21611","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=21611"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/21611\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/21612"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=21611"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=21611"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=21611"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}