At RSAC 2026, network security and AI agents were top of mind for exhibitors and attendees alike. “Why AI agents?” “What work should AI agents be tasked with on the network?” “What human oversight is needed?” were just a few of the questions swirling around the event. To get some answers, SmartBrief sat down with Erez Tadmor, CTO at Tufin, a network security policy management platform provider, to discuss the company’s focus on automated compliance for complex regulatory mandates, robust network security posture management to enforce policy intent, and the integration of agentic AI to drive operational efficiency. Tadmor also detailed Tufin’s new Multi-Vendor Agentic Network Security offering.

 

Tufin is moving more toward agentic network security. How do IT leaders and people on the ground ensure autonomous agents stay within defined security boundaries?

Tadmor: This is a great question. The two questions we get most frequently when customers visit our booth are: 1) What are you doing? 2) How do you secure it?

Tufin has more than 1,500 customers, mostly Fortune 2000 and up, and we help manage their networks. We work with them to develop automation playbooks that outline the guardrails they should follow when making changes to their networks. And now, with AI entering the field, we see more and more customers and organizations have an appetite to use AI to increase the velocity and agility of how their teams operate. So it’s very important to keep these AI agents within the same playbooks that have been tested, trusted and proven by Tufin and its customers.

Tadmor (Photo credit: Susan Rush)

We have already released a few AI agents that are still in beta. The first is our Compliance Agent. Compliance is a serious challenge for large organizations, and with AI driving change, the network is no longer standing still. Organizations are moving increasingly toward continuous validation of compliance. AI agents are a great way to govern that, because they can work at scale and at a velocity humans cannot match. The Compliance Agent can focus on your compliance framework and scan your networks and devices to make sure they are in compliance with a given benchmark. We train our agents to understand compliance, whether it’s corporate compliance or PCI, HIPAA, NIST, etc. The agents scan continuously, identify where there are gaps, and then you can ask the agent to help fix those issues or recommend what to do to fix them. The Compliance Agent is still in beta, but we have more than a handful of customers already testing it.
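The continuous-validation loop Tadmor describes can be sketched in a few lines of Python. This is an editorial illustration only, not Tufin’s implementation: the device model, the benchmark format and every name here are hypothetical, and a real benchmark (PCI DSS, NIST, etc.) would be far richer than a flat settings map.

```python
# Hypothetical sketch of a continuous compliance-scan loop.
# All names (Device, PCI_BENCHMARK, etc.) are illustrative, not Tufin's API.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    config: dict  # simplified device configuration

# A "benchmark" here is reduced to a mapping of setting -> required value.
PCI_BENCHMARK = {
    "default_deny": True,
    "logging_enabled": True,
}

def scan_device(device: Device, benchmark: dict) -> list:
    """Return a list of compliance gaps for one device."""
    gaps = []
    for setting, required in benchmark.items():
        if device.config.get(setting) != required:
            gaps.append(f"{device.name}: '{setting}' should be {required}")
    return gaps

def scan_network(devices: list, benchmark: dict) -> list:
    """Scan every device and aggregate gaps; a real agent runs this continuously."""
    findings = []
    for d in devices:
        findings.extend(scan_device(d, benchmark))
    return findings

devices = [
    Device("fw-edge-1", {"default_deny": True, "logging_enabled": False}),
    Device("fw-core-1", {"default_deny": True, "logging_enabled": True}),
]
print(scan_network(devices, PCI_BENCHMARK))
```

The point of the sketch is the shape of the loop: scan every device against the benchmark, surface gaps, and leave remediation as a separate, recommendable step.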

A second agent we have introduced is the Network Security Posture Agent, which prioritizes vulnerabilities based on real connectivity exposure, attack paths and critical assets. One of the major challenges organizations face today is silos within the organization. Security teams and network teams usually have different incentives and goals. Network teams are all about uptime; they need to make sure the network is always connected, applications are working seamlessly and there is no downtime. The security team, meanwhile, is focused on making sure there are no breaches and the organization is fully secured all the time. To do that, security often needs data from the network in order to do their job better, faster and more accurately.
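Prioritizing by “real connectivity exposure” rather than raw severity can be sketched as a simple scoring function. This is a hedged illustration, not Tufin’s algorithm: the weights, field names and example CVE labels are all invented for the example.

```python
# Illustrative sketch (not Tufin's algorithm): rank vulnerabilities by whether
# the vulnerable asset is actually reachable, weighted by asset criticality.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    asset: str
    severity: float        # base severity, e.g. a CVSS-style 0-10 score
    reachable: bool        # is there a live network path exposing the asset?
    asset_critical: bool   # is this a business-critical asset?

def priority(v: Vuln) -> float:
    """Weight raw severity by real exposure and asset criticality."""
    score = v.severity
    score *= 2.0 if v.reachable else 0.5   # unreachable vulns drop in urgency
    score *= 1.5 if v.asset_critical else 1.0
    return score

vulns = [
    Vuln("CVE-A", "db-1", 9.8, reachable=False, asset_critical=True),
    Vuln("CVE-B", "web-1", 7.5, reachable=True, asset_critical=True),
]
ranked = sorted(vulns, key=priority, reverse=True)
print([v.cve for v in ranked])
```

Note the outcome the example is built to show: the lower-severity but actually reachable CVE-B outranks the higher-severity but unreachable CVE-A, which is exactly the network context a posture agent adds to a SOC analyst’s view.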

For example, when a security engineer discovers a vulnerability or an asset deemed vulnerable, one of the things they want to know is whether that vulnerability is actually accessible, and whether it is actually exploitable. So an AI agent that handles posture could work with a security or SOC analyst to gather information from the network and determine whether a certain vulnerability is accessible or not. But the agent doesn’t stop there; it can also make changes within the network. The engineer could ask the agent: “Hey, can you help me block the connectivity into these assets until I’m done patching them?” because a patch isn’t always readily available. The agent will then open a ticket within our playbooks to securely determine what needs to happen inside the network in order to block that connectivity. And of course, our agent is always human-on-the-loop, so anything the agent does, an engineer should approve. This is a very early-stage agent, but this is where we see organizations going.
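The human-on-the-loop gate described here, where the agent proposes a change and nothing happens until an engineer signs off, can be sketched as follows. Every class and method name is hypothetical; this shows the control pattern, not Tufin’s ticketing system.

```python
# Minimal sketch of a human-on-the-loop gate: the agent proposes a change
# (e.g. block connectivity to an unpatched asset) and nothing is applied
# until an engineer approves the ticket. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ChangeTicket:
    description: str
    approved: bool = False
    applied: bool = False

@dataclass
class PostureAgent:
    tickets: list = field(default_factory=list)

    def propose_block(self, asset: str) -> ChangeTicket:
        t = ChangeTicket(f"Block inbound connectivity to {asset} until patched")
        self.tickets.append(t)
        return t

    def apply(self, ticket: ChangeTicket) -> bool:
        # Guardrail: refuse to act without explicit human approval.
        if not ticket.approved:
            return False
        ticket.applied = True
        return True

agent = PostureAgent()
ticket = agent.propose_block("db-server-7")
assert agent.apply(ticket) is False  # blocked: no engineer sign-off yet
ticket.approved = True               # engineer reviews and approves
assert agent.apply(ticket) is True   # only now does the change go through
```

The design choice worth noting is that the refusal lives in the apply path itself, so even a misbehaving agent cannot skip the approval step.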

We have two more agents that we’re talking about here at RSAC. The Application Deployment Agent, which defines application connectivity requirements, validates them against policy and helps deploy compliant network access. And our fourth new agent is the Policy Recertification Agent, which maps rules to owners, requests approval and helps eliminate unnecessary access.

 

How does Tufin help IT leaders map and control AI agents across the network?

Tadmor: In developing agents, we make sure we train them on truth and proprietary data, and we validate and continuously ensure that the agents are doing what they’re supposed to do according to the trusted playbooks we have developed with our customers over the years. Any agent that works within Tufin will operate based on these playbooks. This is actually one of our advantages, because lots of network security companies have started to talk about and do things around agents. Obviously, you could have an agent that goes to your firewall and makes a change, and it will make the change. The problem is that the agent could do lots of things besides just changing the firewall; it could do something wonky around that change that you may not want. Second, the agent doesn’t always have the context and the guardrails it should work within. Tufin is a control plane that looks over all of the devices, understands the entire network and can provide global context.

 

What do you think is the biggest misconception out there about AI agents and the network?

Tadmor: I think the biggest misconception is that the network is under control, that the network is assumed to be protected, especially as it relates to AI. AI is going to change that equation, because AI is going to work at a velocity we have never seen before. The changes the network will have to accommodate are going to be rapid, driven by application developers who already develop code with AI. They will not, and cannot, wait for the network to connect the application, so the network will also need to use AI to make that connectivity happen. As a result, we’re going to start to see many more changes and revisions inside the network.

We see it with our customers. As soon as they plug Tufin into their network access controls, they start to see how the network is maybe not under control. For example, lots of organizations, as soon as they plug in, start to see: I have an access rule that is enabling traffic into one of my databases. Why do I have this rule? Who does it serve? Or somebody realizes they have “any” rules on their devices, because four months ago, or, God forbid, a year and a half ago, someone needed access, and someone decided that the best way to grant it, without doing due diligence on how exactly to make the change in the firewall, was to do the easy thing and open any-to-any connectivity. By the way, AI without guardrails is going to do the same thing. So the misconception is that the network is under control, that the network is a domain that is resolved. I think AI is going to change that equation, and organizations will need to pay much more attention to their network posture.

 

Looking at the trends and the threats being discussed at RSAC, what do you think is going to shape network strategy and security strategy over the next several years?

Tadmor: How do you make sure that AI, agentic AI specifically, but really any type of AI that runs in the network, is doing and playing as it is intended to? Setting up the right guardrails and the right playbooks for AI agents to utilize: I think this is one of the major things security will need to put a lot of attention on, because it’s a matter of trust.

We see lots of organizations that have an appetite for using AI in their networks, but they have zero appetite for letting AI do things in the network without trusting it or knowing what the end result will be. We see lots of our customers running AI in their lab environments to test it, to see what’s going on and what these AI agents are actually doing. So I think it comes down to trust. Really, the question is: How do you apply the zero-trust framework in an environment or domain that moves at machine speed? That’s going to be one of the hottest topics in security.

 

If you like these insights, sign up for the ISACA SmartBrief on Cybersecurity, a daily look at the top news and workforce education topics.