Agentic AI
,
Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development

NIST Seeks Input to Protect AI Systems Used in Government, Critical Infrastructure

Chris Riotta (@chrisriotta) •
January 12, 2026    

NIST Calls for Public to Help Better Secure AI Agents
Image: grandbrothers/Shutterstock

The U.S. National Institute of Standards and Technology is seeking public input on how to secure agentic artificial intelligence systems as their use – and potential risks – expands across government and critical infrastructure networks.

See Also: Proof of Concept: Bot or Buyer? Identity Crisis in Retail

In a notice published Thursday, NIST’s Center for AI Standards and Innovation issued a request for information asking industry, researchers and system operators to weigh in on security risks and mitigation strategies tied to agentic AI systems. The institute defines AI agents as any AI system deployment combining one or more generative AI models with scaffolding software that enables planning and discretionary action, sometimes through multiple coordinated sub agents.

Agentic AI systems can introduce security risks that differ significantly from traditional software or non-agentic AI tools. Agentic systems may be susceptible to hijacking, backdoor attacks and other exploits that could undermine public safety, erode consumer confidence and curb adoption of advanced AI technologies.

Security leaders told Information Security Media Group those risks are already materializing as federal agencies deploy GenAI and emerging tech capabilities faster than the controls designed to protect them. April Lenhard, principal product manager for cyberthreat intelligence at Qualys, said AI vulnerabilities are increasingly visible in real world incidents.

“The next phase of federal AI security is moving past chasing alerts and toward managing mission‑driven risk,” Lenhard said. Agencies deploying GenAI or AI analytics without controls for risks like data poisoning, prompt injection and model drift “are exposing themselves to adversaries potentially rewriting alerts, exfiltrating sensitive data or even disabling defenses.”

The NIST request describes a variety of potential threats linked to the non-secure use of agentic AI systems, including data poisoning and indirect prompt injection, as well as the risk of models being deployed with intentionally placed backdoors. The agency flagged concerns that even uncompromised models may behave in ways that threaten confidentiality, integrity or availability through specification gaming or pursuit of misaligned objectives.

NIST said the request for information is aimed at helping the agency develop technical guidelines, evaluation methods and best practices to better secure AI agent systems before they become deeply embedded in high impact government functions. The notice describes current mitigation techniques, with some drawing on established cybersecurity principles such as least privilege and zero trust architectures even as agentic AI introduces new challenges that require additional controls. The new technology may require restrictions on tool use, tighter data boundaries and continuous monitoring of model behavior.

NIST is also seeking input on how security risks vary across deployment environments, including cloud, on premises and edge systems, as well as the added complexity by introducing multi-agent systems that coordinate actions across multiple models.

The request comes as the White House administration has rapidly expanded the use of AI across federal agencies over the past year, rolling out experimental cross-agency initiatives and pilot programs designed to accelerate adoption of generative and agentic AI capabilities for mission support, analytics and service delivery. Those efforts have included shared-service style initiatives aimed at reducing duplication while boosting AI deployment (see: China, AI and a Federal Retreat Set Cyber Agenda for 2026).

Stakeholders have until March 9 to submit responses through regulations.gov.