The New Zealand National Cyber Security Centre (NCSC) and its Five Eyes counterparts are urging organisations to take a cautious approach to deploying agentic AI systems, warning that autonomous AI tools introduce security and governance risks beyond those associated with traditional generative AI.
The guidance, titled Careful Adoption of Agentic AI Services, was jointly authored by New Zealand’s NCSC, Australia’s ACSC, the US CISA and NSA, the UK NCSC and the Canadian Centre for Cyber Security. It focuses on large language model (LLM)-based agentic AI systems that can independently plan, make decisions and execute actions with limited human oversight.
Unlike conventional generative AI systems that primarily create content for human review, agentic AI systems can autonomously interact with software tools, external data sources and enterprise environments to complete tasks. The agencies warn this expanded autonomy significantly increases attack surfaces and operational complexity.
The report identifies several categories of risk, including privilege escalation, insecure tool integrations, rogue agents, cascading system failures and accountability gaps. One scenario outlined in the guidance describes a compromised procurement agent using excessive privileges to modify contracts and approve payments without detection.
The agencies also caution that interconnected multi-agent systems can amplify failures. According to the guidance, a single orchestration flaw or compromised third-party tool could trigger widespread disruption across downstream systems through autonomous delegation and implicit trust between agents.
For channel partners, MSPs and systems integrators building AI-enabled services, the guidance recommends applying traditional cyber security principles such as least-privilege access, defence-in-depth strategies, strong identity management and continuous monitoring to agentic AI deployments.
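In practice, the least-privilege principle means placing a deny-by-default gate between an agent and the tools it can invoke, with every call logged for monitoring. The following is a minimal illustrative sketch only, not from the guidance; the role names, tool names and permission mapping are all hypothetical:

```python
# Hypothetical sketch: a deny-by-default gate in front of an AI agent's
# tool calls, with audit logging of every allowed and denied invocation.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

# Hypothetical role-to-tool mapping: a "reader" procurement agent can view
# contracts but cannot modify them or approve payments.
ROLE_PERMISSIONS = {
    "procurement_reader": {"read_contract", "list_suppliers"},
    "procurement_admin": {"read_contract", "modify_contract", "approve_payment"},
}

# Stub tool implementations for the sketch.
TOOLS = {
    "read_contract": lambda cid: f"contract {cid} text",
    "modify_contract": lambda cid, text: True,
    "approve_payment": lambda pid: True,
}

class ToolAccessError(PermissionError):
    """Raised when an agent attempts a tool call outside its role."""

def invoke_tool(role: str, tool: str, *args):
    """Allow a tool call only if it is explicitly permitted for the role."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    if tool not in allowed:
        audit.warning("DENIED: role=%s tool=%s", role, tool)
        raise ToolAccessError(f"role {role!r} may not invoke {tool!r}")
    audit.info("ALLOWED: role=%s tool=%s", role, tool)
    return TOOLS[tool](*args)
```

Under this pattern, the compromised-procurement-agent scenario described in the report is contained: an agent provisioned as a reader cannot approve payments even if its instructions are subverted, and the denied attempt appears in the audit log.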
The agencies also advise organisations to limit agentic AI deployments to low-risk and non-sensitive tasks until standards and security practices mature. The report states that businesses should consider whether simpler automation approaches could achieve the same outcomes with lower risk.
“Organisations should assume that agentic AI systems may behave unexpectedly,” the guidance says, recommending resilience, reversibility and risk containment take priority over efficiency gains.
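One way to make reversibility concrete is to pair every agent action with a compensating undo step, so an operator can roll back unexpected behaviour. This is a hypothetical sketch, not a mechanism described in the guidance; the class and action names are illustrative:

```python
# Hypothetical sketch of "reversibility": each agent action records a
# compensating undo step, so unexpected behaviour can be rolled back.

class ReversibleActionLog:
    def __init__(self):
        self._undo_stack = []

    def perform(self, description, do, undo):
        """Execute an action and remember how to reverse it."""
        result = do()
        self._undo_stack.append((description, undo))
        return result

    def rollback(self):
        """Reverse all recorded actions, most recent first."""
        undone = []
        while self._undo_stack:
            description, undo = self._undo_stack.pop()
            undo()
            undone.append(description)
        return undone

# Usage: an agent "approves" a payment, then an operator reverses it.
approved = set()
actions = ReversibleActionLog()
actions.perform("approve payment P-1",
                do=lambda: approved.add("P-1"),
                undo=lambda: approved.discard("P-1"))
```

The design trade-off is the one the agencies name: every action becomes slightly more expensive to perform, in exchange for the ability to contain and unwind a misbehaving agent.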