Exabeam is expanding its New-Scale platform with security functionality specifically aimed at AI agents. With this expansion, the company aims to give organizations better insight into the risks arising from the increasing use of autonomous AI in business processes.
According to SiliconANGLE, the new release combines several components that were previously often used separately. Exabeam brings together behavioral analysis of AI agents, investigation based on a uniform timeline, and visibility of the security status surrounding AI use within a single security workflow. In doing so, the company is explicitly positioning the expansion as a further development of its existing user and entity behavior analytics, which is now also being applied to non-human users.
The use of AI agents is growing rapidly. These agents increasingly function as a digital workforce with access to sensitive data and business-critical systems. According to Exabeam, governance and continuous monitoring of these agents are often lacking. This increases the risk that agents share data outside their intended context, or circumvent or change internal policies without it being clear who issued the instruction.
New types of insider threats
This development is also changing the classic image of insider threats. Whereas security was traditionally focused on human users, it now increasingly concerns autonomous software entities that make decisions independently. Traditional SIEM and XDR solutions are primarily based on known threat patterns and fixed rules. This makes it difficult to recognize deviant behavior by AI agents as long as it is not explicitly defined as a threat.
The expansion of the New-Scale platform centers on behavioral analysis as its core mechanism. By recording what constitutes normal behavior for an AI agent within a specific role, the platform can identify deviations more quickly: for example, when an agent accesses systems that are not part of its task, or when it processes unusual amounts of sensitive data. When such behavior is detected, a detailed timeline is automatically constructed to speed up the investigation.
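The baselining idea behind this kind of behavioral analysis can be illustrated with a minimal sketch. This is not Exabeam's actual model; the baseline data, threshold, and function names are hypothetical, and real UEBA systems use far richer features than a single metric.

```python
from statistics import mean, stdev

# Hypothetical baseline: sensitive records an agent accessed per hour
# during a learning window. All figures are illustrative.
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]

def is_anomalous(observed: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag an observation deviating more than `threshold` standard
    deviations from the agent's learned baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(11, baseline))   # within the normal range
print(is_anomalous(250, baseline))  # far outside the baseline
```

The point is that the anomaly is defined relative to the agent's own history, not to a predefined threat signature, which is what lets such systems catch behavior that was never explicitly labeled as malicious.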
In addition, Exabeam places more emphasis on securely connecting AI agents to data and systems. Agents need access to function effectively, but according to the company, that access must be strictly limited. By centralizing onboarding and data access, Exabeam aims to prevent organizations from setting up ad hoc integrations that are difficult to secure and control.
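The contrast between ad hoc integrations and centralized onboarding can be sketched roughly as follows. The class and resource names are invented for illustration and do not reflect Exabeam's API; the point is a single registry where every agent's access is declared and checked.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistration:
    """One onboarded agent with an explicitly scoped set of resources."""
    agent_id: str
    allowed_resources: set[str] = field(default_factory=set)

class AgentRegistry:
    """Single point of onboarding and authorization for AI agents,
    instead of each integration granting access on its own."""
    def __init__(self) -> None:
        self._agents: dict[str, AgentRegistration] = {}

    def onboard(self, agent_id: str, allowed_resources: set[str]) -> None:
        self._agents[agent_id] = AgentRegistration(agent_id, set(allowed_resources))

    def authorize(self, agent_id: str, resource: str) -> bool:
        reg = self._agents.get(agent_id)
        return reg is not None and resource in reg.allowed_resources

registry = AgentRegistry()
registry.onboard("invoice-agent", {"erp:invoices"})
print(registry.authorize("invoice-agent", "erp:invoices"))  # True
print(registry.authorize("invoice-agent", "hr:salaries"))   # False
```

Because every grant passes through one place, access can be audited and revoked centrally, which is the control that ad hoc point-to-point integrations lack.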
In addition to operational detection, the expansion also focuses on strategic overview. Security teams and administrators gain insight into how much AI activity is actually being monitored and how mature the security approach around AI agents is. Targeted recommendations are also made to reduce gaps and improve the security position, including in relation to existing compliance frameworks such as NIST and GDPR.
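A maturity overview of this kind typically starts from a simple coverage ratio. The figures below are invented purely to show the shape of such a metric; nothing here reflects Exabeam's actual scoring.

```python
# Hypothetical coverage metric: share of known AI agents whose
# activity is actually monitored. Figures are illustrative.
monitored_agents = 42
total_agents = 60

coverage = monitored_agents / total_agents
print(f"Monitored AI agents: {coverage:.0%}")  # Monitored AI agents: 70%
```

A dashboard built on ratios like this can then point at the unmonitored remainder as the gap to close, which is the kind of targeted recommendation the article describes.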
Exabeam argues that monitoring AI agents will inevitably become a structural part of the security landscape. According to the company and analysts, security around autonomous AI will develop into a fully-fledged category within cybersecurity, alongside identity, cloud, and data protection.