<h1>NSA Joins the ASD&rsquo;s ACSC and Others to Release Guidance on Agentic Artificial Intelligence Systems</h1>
<p>FORT MEADE, Md. &ndash; Today, the National Security Agency (NSA) joins the Australian Signals Directorate&rsquo;s Australian Cyber Security Centre (ASD&rsquo;s ACSC) and others to release the Cybersecurity Information Sheet (CSI), &ldquo;<a href="https://media.defense.gov/2026/Apr/30/2003922823/-1/-1/0/CAREFUL%20ADOPTION%20OF%20AGENTIC%20AI%20SERVICES_FINAL.PDF" target="_blank" rel="nofollow noopener">Careful Adoption of Agentic AI Services</a>.&rdquo;</p>
<p>The report is a comprehensive guide to understanding and mitigating the unique risks that the rise of agentic artificial intelligence (AI) poses within critical infrastructure, including the defense sector. The CSI highlights general security considerations for agentic AI: the inherited risks of large language models (LLMs), increased attack surfaces, increased complexity, a security landscape that will continue to evolve as the technology matures, and the need to address AI security within established cybersecurity paradigms.</p>
<p>Unlike traditional generative AI, which typically requires human validation, agentic AI systems are designed to operate autonomously, making them a powerful tool. This presents both unprecedented opportunities and significant cybersecurity challenges that organizations must address to protect national security and critical infrastructure.</p>
<p>&ldquo;<a href="https://media.defense.gov/2026/Apr/30/2003922823/-1/-1/0/CAREFUL%20ADOPTION%20OF%20AGENTIC%20AI%20SERVICES_FINAL.PDF" target="_blank" rel="nofollow noopener">Careful Adoption of Agentic AI Services</a>&rdquo; outlines risk spaces to consider, including:</p>
<ul>
<li>Privilege Risks: Over-privileged agents can amplify the impact of a single compromise.</li>
<li>Design and Configuration Risks: Insecure design and provisioning can introduce vulnerabilities.</li>
<li>Behavior Risks: Goal misalignment, specification gaming, deceptive behavior, and emergent capabilities can lead to unexpected or undesirable outcomes.</li>
<li>Structural Risks: The interconnected nature of agentic systems increases the attack surface and complexity.</li>
<li>Accountability Risks: The opacity of agentic systems makes accountability hard to trace, complicating auditing and compliance.</li>
</ul>
<p>Securing agentic AI systems requires proactive measures that address the risks introduced by autonomy, interconnected components, and evolving capabilities. The best practices for securing agentic AI systems are divided into the following subcategories:</p>
<ul>
<li>Designing Secure Agents</li>
<li>Developing Secure Agents</li>
<li>Managing Third-Party Components</li>
<li>Deploying Agents Securely</li>
<li>Operating Agents Securely</li>
</ul>
<p>The report recommends deploying agentic AI incrementally and continuously assessing it against evolving threat models; strong governance, explicit accountability, rigorous monitoring, and human oversight are essential for safe and secure operation.</p>
<p>Organizations that use agentic AI services, including those in the defense sector, are encouraged to review this guidance and adopt the outlined cybersecurity mitigations.</p>
<p>Other agencies co-sealing this CSI are the Canadian Centre for Cyber Security (Cyber Centre), the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).</p>
<p><a href="https://media.defense.gov/2026/Apr/30/2003922823/-1/-1/0/CAREFUL%20ADOPTION%20OF%20AGENTIC%20AI%20SERVICES_FINAL.PDF" target="_blank" rel="nofollow noopener">Read the full report here</a>.</p>
<p><a href="https://www.nsa.gov/Press-Room/Cybersecurity-Advisories-Guidance/" target="_blank" rel="nofollow noopener">Visit our full library for more cybersecurity information and technical guidance</a>.</p>
<p>NSA Media Relations | MediaRelations@nsa.gov | 443-634-0721</p>
<p>About the National Security Agency</p>
<p>Founded in 1952, NSA is a U.S. Department of War combat support agency and element of the U.S. Intelligence Community. The Agency&rsquo;s mission is to provide foreign signals intelligence to policymakers and our military, and to prevent and eradicate cybersecurity threats to U.S. national security systems, with a focus on the Defense Industrial Base and the improvement of U.S. weapons security. From protecting U.S. warfighters around the world to enabling and supporting operations on land, in the air, at sea, in space, and in the cyber domain, NSA is committed to building public trust through transparency and protecting civil liberties and privacy consistent with our nation&rsquo;s values.</p>