{"id":28278,"date":"2026-05-05T17:25:08","date_gmt":"2026-05-05T17:25:08","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/28278\/"},"modified":"2026-05-05T17:25:08","modified_gmt":"2026-05-05T17:25:08","slug":"nvidia-and-servicenow-partner-on-new-autonomous-ai-agents-for-enterprises","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/28278\/","title":{"rendered":"NVIDIA and ServiceNow Partner on New Autonomous AI Agents for Enterprises"},"content":{"rendered":"<p>Enterprise AI has learned to generate. It has learned to reason. Now companies are asking the next question: How should AI act?<\/p>\n<p>Early agent systems have shown what\u2019s possible, moving beyond simple prompts to take on more complex tasks. The next step is bringing those capabilities into enterprise environments \u2014 where agents must operate with context, control and consistency across real workflows.<\/p>\n<p>At ServiceNow Knowledge 2026, NVIDIA founder and CEO Jensen Huang joined ServiceNow chairman and CEO Bill McDermott during the opening keynote to discuss the next phase of enterprise AI.\u00a0<\/p>\n<p>The companies are expanding their collaboration across the full stack, delivering specialized autonomous AI agents that are safe and easy to adopt \u2014 powered by NVIDIA accelerated computing, open models, domain-specific skills and secure agent execution software, and bringing together enterprise workflow context from ServiceNow Action Fabric and governance from ServiceNow AI Control Tower.<\/p>\n<p>ServiceNow is introducing <a target=\"_blank\" href=\"https:\/\/newsroom.servicenow.com\/press-releases\/details\/2026\/ServiceNow-extends-agentic-AI-governance-from-desktops-to-data-centers-with-NVIDIA\/default.aspx\" rel=\"nofollow noopener\">Project Arc<\/a>, a long-running, self-evolving autonomous desktop agent designed for knowledge workers, including developers, IT teams and administrators.\u00a0<\/p>\n<p>Unlike standalone AI agents, Project Arc connects 
natively to the ServiceNow AI Platform through ServiceNow Action Fabric to bring governance, auditability and workflow intelligence to every action the autonomous desktop agent takes. It can access local file systems, terminals and applications installed on a machine to complete complex, multistep tasks that traditional automation can\u2019t handle, but with the controls enterprises actually need to deploy AI at scale.<\/p>\n<p>The work is designed around three requirements every company will need for long-running, autonomous agents: open models and domain-specific skills that can be customized, security that helps agents act without exposing sensitive data or systems, and AI factories that deliver efficient <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-tokens-explained\/\" rel=\"nofollow noopener\" target=\"_blank\">tokenomics<\/a>.<\/p>\n<p>Bringing this level of autonomy to enterprises requires control from the start.<\/p>\n<p>Project Arc uses <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/openshell\" rel=\"nofollow noopener\">NVIDIA OpenShell<\/a>, an open source secure runtime for developing and deploying autonomous agents in sandboxed, policy-governed environments. ServiceNow is building on and contributing to OpenShell to advance a common foundation for secure, enterprise-grade agent execution. With OpenShell, enterprises can define what an agent can see, which tools it can use and how each action is contained.\u00a0<\/p>\n<p>\u201cProject Arc represents the next step in our ongoing collaboration with NVIDIA, bringing autonomous execution to the desktop,\u201d said Jon Sigler, executive vice president and general manager of AI Platform at ServiceNow. 
\u201cBy combining OpenShell\u2019s runtime layer with ServiceNow AI Control Tower, and powered by ServiceNow Action Fabric, we\u2019re delivering the governance and security that enterprise AI requires.\u201d\u00a0<\/p>\n<p>Open Models and Agent Skills Scale Enterprise AI<\/p>\n<p>To be effective, enterprise AI systems must be adaptable. NVIDIA and ServiceNow are building on an open ecosystem that allows organizations to tailor models and applications to their specific domains and data.<\/p>\n<p>NVIDIA agent skills enable specialized agents, such as ServiceNow AI Specialists, to deliver targeted capabilities across enterprise workflows. For example, the <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/nvidia\/aiq\" rel=\"nofollow noopener\">NVIDIA AI-Q Blueprint<\/a> for building specialized deep research agents empowers ServiceNow AI Specialists to gather context, synthesize information and support more complex decision-making across business functions.\u00a0<\/p>\n<p>In addition, the NVIDIA Agent Toolkit, including <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/foundation-models\/nemotron\/\" rel=\"nofollow noopener\">NVIDIA Nemotron<\/a> open models, provides flexible building blocks and specialized skills for developing customized AI applications. To help ensure these systems perform reliably in real-world conditions, the companies are also advancing NOWAI-Bench, an open benchmarking suite for enterprise AI agents, integrated with the NVIDIA NeMo Gym library. NOWAI-Bench includes <a target=\"_blank\" href=\"https:\/\/enterpriseops-gym.github.io\/\" rel=\"nofollow noopener\">EnterpriseOps-Gym<\/a>, one of the industry\u2019s most challenging enterprise agent benchmarks, where Nemotron 3 Super currently ranks No. 
1 among open source models.<\/p>\n<p>Unlike general benchmarks, these evaluations focus on multistep workflows \u2014 where enterprise AI systems often encounter real challenges \u2014 helping teams build agents that perform reliably in production environments.<\/p>\n<p>Efficient AI Factories<\/p>\n<p>As AI agents become long running and always on, scaling them across millions of workflows requires not just capability but efficiency \u2014 making token economics central to enterprise AI.<\/p>\n<p>NVIDIA AI factories are built to deliver the lowest-cost, most-efficient tokenomics for production AI. The <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/technologies\/blackwell-architecture\/\" rel=\"nofollow noopener\">NVIDIA Blackwell<\/a> platform delivers more than 50x greater token output per watt than NVIDIA Hopper, resulting in nearly 35x lower cost per million tokens. For enterprises running agents across millions of workflows, that efficiency can determine how quickly AI moves from pilots to broad production use.<\/p>\n<p>ServiceNow AI Control Tower integrates with the <a target=\"_blank\" href=\"https:\/\/www.nvidia.com\/en-us\/solutions\/ai-factories\/validated-design\/\" rel=\"nofollow noopener\">NVIDIA Enterprise AI Factory<\/a> validated design, extending governance and observability to large-scale AI workloads. With added agent observability capabilities, organizations can monitor behavior in real time and manage AI systems across their full lifecycle \u2014 from deployment to optimization.<\/p>\n<p>AI is becoming a new way that work gets done. 
What\u2019s changing now is that the core pieces required to deploy it at scale \u2014 capable agents, built-in guardrails and proven performance \u2014 are all coming together.<\/p>\n<p>The companies that move fastest will be the ones that give agents the infrastructure to act, the context to make decisions and the governance to keep every action accountable \u2014 and NVIDIA and ServiceNow are making this a reality for the world\u2019s enterprises.<\/p>\n<p>Learn more about <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/openshell\" rel=\"nofollow noopener\">NVIDIA OpenShell<\/a> and the <a target=\"_blank\" href=\"https:\/\/build.nvidia.com\/nvidia\/aiq\" rel=\"nofollow noopener\">NVIDIA AI-Q Blueprint<\/a>.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"Enterprise AI has learned to generate. It has learned to reason. Now companies are asking the next question:&hellip;\n","protected":false},"author":2,"featured_media":28279,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[179,24,2321,25,293,7267,18678,10293,335],"class_list":{"0":"post-28278","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-agentic-ai","9":"tag-ai","10":"tag-ai-factory","11":"tag-artificial-intelligence","12":"tag-events","13":"tag-nemotron","14":"tag-nvidia-blueprints","15":"tag-nvidia-nemo","16":"tag-open-source"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28278","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=28278"}],"version-his
tory":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/28278\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/28279"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=28278"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=28278"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=28278"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}