TrendAI has expanded its work with Nvidia on security measures for agentic AI, focusing on Nvidia OpenShell, an open-source runtime designed for autonomous AI agents.
The integration embeds security into the runtime architecture used to run agents that can plan, use tools, and operate for extended periods. Agentic AI systems often run across multiple environments and can take actions without direct user prompts.
TrendAI, the enterprise business unit of Trend Micro, says TrendAI Vision One will provide a layered security architecture for OpenShell. The architecture includes governance controls and monitoring for systems in which agents can change behaviour over time.
Shift in Risk
Agentic AI has drawn interest from enterprises seeking automated workflows across applications, data sources, and infrastructure. It has also raised new security questions. Traditional AI security has tended to focus on short, session-based interactions between a user and a model. By contrast, agentic systems behave more like long-running software processes that can call tools and take action.
Nvidia positions OpenShell as a runtime for “long-lived, self-evolving agents” with planning, memory, and tool execution. That design broadens the range of risk scenarios, including unauthorised skills, hidden behaviours, prompt injection, and unintended system access.
Rachel Jin, Chief Platform and Business Officer and Head of TrendAI, said systems that can act on their own present a different security profile.
“Agentic AI changes the security equation. When AI systems can plan, take action, and interact with other tools on their own, the risk profile looks very different from traditional AI. Our collaboration with NVIDIA allows us to bring security directly into the architecture so organisations can adopt agentic AI with the visibility and control they expect,” Jin said.
Runtime Controls
TrendAI says its approach covers controls before, during, and after an agent runs. This includes defining trust boundaries, enforcing policy at runtime, and maintaining ongoing visibility into agent activity.
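As an illustration only, the before/during/after pattern described above can be sketched as a policy wrapper around an agent run: a trust boundary is defined up front, each tool call is checked at runtime, and every decision is logged for later visibility. The class and field names here are hypothetical, not part of the OpenShell or Vision One APIs.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical trust boundary defined before the agent runs."""
    allowed_tools: set = field(default_factory=set)
    max_actions: int = 50

class GuardedAgentRun:
    """Sketch of pre-run, runtime, and post-run controls around one agent."""

    def __init__(self, policy: AgentPolicy):
        self.policy = policy
        self.audit_log = []        # post-run visibility into agent activity
        self.actions_taken = 0

    def authorise(self, tool: str) -> bool:
        """Runtime enforcement: check each tool call against the policy."""
        self.actions_taken += 1
        allowed = (tool in self.policy.allowed_tools
                   and self.actions_taken <= self.policy.max_actions)
        self.audit_log.append({"tool": tool, "allowed": allowed})
        return allowed

run = GuardedAgentRun(AgentPolicy(allowed_tools={"search", "summarise"}))
print(run.authorise("search"))      # within the trust boundary → True
print(run.authorise("shell_exec"))  # outside it: blocked and logged → False
```

The point of the sketch is the separation of stages: the policy exists before execution, enforcement happens on every action, and the audit log survives the run.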
The companies framed the work as part of a broader effort to make agentic AI deployments more manageable in large organisations, where governance, audit, and security operations processes often apply once systems move beyond pilot stages.
Pat Lee, Vice President of Strategic Enterprise Partnerships at Nvidia, said the collaboration aims to improve visibility and control for developers running autonomous agents.
“Agentic AI opens the door for a new class of applications that can plan, reason, and take action. By working with TrendAI, we’re helping developers add visibility and controls to make it safer to run autonomous AI,” Lee said.
Broader Integration
The expanded collaboration extends beyond OpenShell to other Nvidia agentic AI initiatives, including the Nvidia AI-Q blueprint and the Nvidia NeMo Agent Toolkit. TrendAI says the alignment provides a consistent approach to security, governance, and observability as agentic systems expand across enterprise environments.
The OpenShell security architecture in TrendAI Vision One includes centralised governance and compliance controls within the agent runtime. It also scans agent skills and Model Context Protocol integrations, which are often used to connect agents to external tools and data sources.
The system also performs behavioural analysis to detect hidden or malicious actions. Inline policy enforcement is designed to block untrusted skills and actions at runtime.
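In the abstract, skill scanning of the kind described above amounts to checking what a skill asks to do against what an organisation has approved. The manifest format and capability names below are invented for illustration; the actual scanning logic in Vision One is not public.

```python
# Hypothetical illustration: a static scan over a skill manifest, flagging
# capabilities the skill requests that fall outside an approved set.
APPROVED_CAPABILITIES = {"read_docs", "web_search"}

def scan_skill(manifest: dict) -> list:
    """Return the requested capabilities that are not on the approved list."""
    return [cap for cap in manifest.get("capabilities", [])
            if cap not in APPROVED_CAPABILITIES]

skill = {"name": "report-writer",
         "capabilities": ["read_docs", "filesystem_write"]}
flagged = scan_skill(skill)
if flagged:
    # Inline enforcement would refuse to load the skill at runtime.
    print(f"Blocking skill '{skill['name']}': requests {flagged}")
```

A real scanner would also inspect behaviour rather than declarations alone, which is where the behavioural analysis mentioned above comes in.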
TrendAI also highlighted AI-specific threat protections, including detection of prompt injection and sensitive data exposure. Monitoring and audit features include agent telemetry and SIEM integration, used by security teams for centralised logging, alerting, and incident response.
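SIEM integration of the sort mentioned above typically means emitting agent activity as structured events a logging pipeline can ingest. This is a generic sketch of that idea using JSON lines; the event schema and field names are assumptions, not the product's actual telemetry format.

```python
import json
from datetime import datetime, timezone

def agent_event(agent_id: str, action: str, verdict: str) -> str:
    """Serialise one agent action as a JSON line for SIEM ingestion."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "verdict": verdict,   # e.g. "allowed", "blocked", "flagged"
    })

# One event per agent action gives security teams a queryable trail
# for centralised alerting and incident response.
print(agent_event("agent-042", "tool_call:web_search", "allowed"))
```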
Enterprise Adoption
Enterprises adopting agentic AI often need controls that map to existing risk management processes. Security teams typically look for policy enforcement, monitoring, and evidence for audits and investigations. Developers also need a framework that avoids excessive friction when agents are updated, new tools are connected, or workloads shift between environments.
TrendAI says its security layer governs which tools agents can access and how risk is detected and enforced across the lifecycle of an agent’s execution. It positions the work as part of a broader security model covering AI infrastructure, data pipelines, models, and applications.
TrendAI says it brings decades of cybersecurity experience and threat intelligence to this area, supported by global security operations and research that track large volumes of threats each day.
As agentic AI moves from experiments to production systems, organisations evaluating OpenShell and related Nvidia tooling are likely to watch how well security controls integrate into developer workflows, how policies can be managed at scale, and what telemetry is available to security operations teams.