Agentic AI, Cloud Security, Data Security

Governance Gaps, Cross-Team Risks Challenge Enterprise AI Adoption

Tom Field (Security Editor) • April 22, 2026

Christopher Martin, senior AI security solutions architect, Palo Alto Networks

Organizations race to deploy agentic artificial intelligence, yet many lack visibility into who uses it and how data flows across teams, said Christopher Martin, senior AI security solutions architect, Palo Alto Networks.

Enterprises are adopting AI agents to automate workflows, generate documentation, and support customers, but these gains come with governance gaps. Martin said visibility remains the top challenge: employees access widely available AI tools without consistent oversight, so security teams must track usage, define policies, and enable safe adoption without blocking innovation.

He also pointed to cross-team collaboration as a key weakness. Data often moves between development, operations, and business units without clear controls, and attackers exploit these gaps. Organizations need centralized visibility, policy enforcement, and proactive testing to reduce risk and protect sensitive information and brand reputation.

“The problem is, who’s doing what and how do we get visibility and traction into those types of individuals within the organization,” said Martin, citing reputational harm rather than data compromise as AI’s biggest risk.

In this video interview with ISMG, Martin also discussed:

Common enterprise use cases for agentic AI deployment;
Visibility and governance challenges tied to shadow AI;
How platform-based security improves AI oversight and protection.

Martin is an experienced technology solutions architect with more than 20 years of experience helping organizations drive secure innovation across the AI, cloud, and infrastructure domains. He bridges the gap between executive strategy and hands-on implementation, advising leadership on AI governance, risk, and adoption while remaining deeply technical across AI, microservices, and infrastructure.