Ahead of RSAC 2026, a conversation with David Brauchler, Technical Director and Head of AI/ML Security at NCC Group, highlighted a growing concern: many organizations are fundamentally unprepared to secure AI-powered systems.
As enterprises rapidly adopt agentic AI, traditional security assumptions are breaking down, exposing critical gaps in how risk is understood and managed.
The AI Security Skills Gap and Misplaced Priorities
One of the central challenges is a lack of expertise. The cybersecurity industry itself does not yet have enough practitioners deeply versed in AI-specific risks.
As a result, organizations often focus on superficial controls — such as guardrails — while overlooking more significant vulnerabilities embedded in how AI systems interact with data.
Brauchler argues that this misdirection has created “security red herrings,” diverting attention from the real threats that can lead to data breaches, privilege escalation, and full system compromise.
AI Shifts Risk to the Data Layer
At the core of this issue is a fundamental shift in where risk resides.
Traditional security models emphasize the component layer — applications, infrastructure, and access controls.
However, AI systems invert this paradigm by moving risk into the data layer.
In agentic AI systems, behavior is not strictly defined by static code but dynamically influenced by input data.
As Brauchler explained, “AIs are functions of their inputs,” meaning that malicious or untrusted data can directly alter system behavior.
This creates an environment where trust is no longer fixed but fluid, continuously reshaped by incoming information.
Why Traditional Security Models Fail in Agentic Systems
This shift introduces new complexities. In classical systems, permissions are tied to users or services, and trust levels remain relatively constant.
In contrast, agentic systems must account for varying levels of trust within the data itself.
Inputs from multiple sources — such as users, third-party APIs, or web content — can introduce conflicting trust levels, increasing both the likelihood and impact of attacks.
Consequently, traditional access control models are insufficient because they fail to account for the dynamic nature of AI-driven workflows.
Applying Zero Trust Principles to AI Systems
To address these challenges, Brauchler and his team advocate for a zero trust approach tailored to AI environments.
A key principle is binding AI behavior to input provenance. In other words, AI systems should inherit the trust level of the data they process.
If an AI model is exposed to untrusted input, its capabilities must be restricted accordingly.
This concept, known as Dynamic Capability Shifting, ensures that potentially malicious data cannot trigger high-privilege actions.
Dynamic Capability Shifting and Data Provenance
Another critical strategy involves improving data provenance awareness.
While it may not be feasible to trace every piece of data to its exact origin, organizations must establish mechanisms to evaluate and assign trust levels relative to execution context.
This enables more informed decisions about what actions an AI system should be allowed to perform.
Moving Beyond Monolithic AI: Gated Agent Architectures
Equally important is the move away from monolithic AI architectures. Many organizations rely on a single, highly capable agent to perform a wide range of tasks.
This “one agent to rule them all” model significantly amplifies risk, as any compromise can grant broad access to sensitive systems.
Instead, Brauchler recommends Gated Agent architectures, where tasks are distributed across multiple specialized agents with limited privileges.
These agents operate in sandboxed environments, and their outputs are converted into safe, validated data types before being passed to higher-trust components.
Trust Segmentation and Secure Design Patterns
This architectural shift aligns with the broader principle of trust segmentation.
High-trust agents should never be exposed to unvalidated or polluted data, while low-trust agents should not have access to privileged functions.
By enforcing strict boundaries and validating data as it moves between components, organizations can reduce the risk of cross-context attacks, such as prompt injection or data exfiltration.
Supporting Controls for Agentic AI Security
In addition to these design patterns, several supporting controls are essential.
Input masking prevents sensitive data from being exposed to untrusted contexts, while I/O synchronization ensures that outputs from one agent are properly validated before being used elsewhere.
Task minimization further reduces risk by limiting the scope of actions any single agent can perform.
Together, these measures create a layered defense model that reflects the dynamic nature of AI systems.
Rethinking AI as an Extension of User Identity
Ultimately, securing agentic AI requires a mindset shift.
Organizations must stop treating AI as just another application component and instead recognize it as an extension of user identity and intent.
As Brauchler emphasized, AI systems act on behalf of users and should be governed by the same security principles.
Any relaxation of controls simply because an action originates from an AI system introduces unacceptable risk.
A Roadmap for Securing AI Systems
The path forward involves both technical and organizational changes.
In the short term, organizations should identify stakeholders for AI integration and begin threat modeling their AI risk landscape.
Over the medium term, they must design secure inference pipelines and train developers on AI-specific security practices.
Long-term success depends on embedding these controls into all application environments and establishing standardized architectures for future deployments.
Securing the Future of Agentic AI
The rise of agentic AI demands a redefinition of trust in cybersecurity.
By focusing on data-centric risk, enforcing dynamic privilege controls, and adopting segmented architectures, organizations can begin to close the gap between innovation and security.
While the challenges are real, as Brauchler noted, secure AI is achievable — if organizations are willing to rethink their approach and act decisively.