Your users aren’t always visiting your website themselves anymore. Increasingly, they’re sending AI agents to act on their behalf. ChatGPT, Claude, and other agentic systems can now register accounts, log in, make purchases, manage profiles, and execute transactions without direct human oversight.

But with this shift comes a critical new threat: agentic fraud.

According to Forrester, agentic traffic has become a distinct category requiring specialized controls, prompting the firm to rename Bot Management to Bot & Agent Trust Management. McKinsey research estimates that by 2030, agentic commerce could drive up to $1 trillion in US B2C retail revenue, and $3-5 trillion globally.

The challenge for security teams is increasingly complex: how do we distinguish legitimate, helpful AI agent traffic from malicious agentic fraud? 

The expanding attack surface of agentic fraud

Agentic fraud represents fraudulent or abusive behavior enabled, amplified, or executed by AI systems at machine scale. 

While some of the underlying attack patterns—credential abuse, session hijacking, account takeover—aren’t new, agentic AI accelerates these familiar threats dramatically, compressing what used to take hours of human effort into seconds of automated execution. 

Unlike conventional bots that rely on preset strategies, AI agents leverage the adaptability and reasoning capabilities of large language models (LLMs) to execute sophisticated attacks that closely mimic legitimate user behavior.

6 emerging agentic attack surfaces 

AI agents have created entirely new fraud patterns that traditional security can’t detect. Here are the specific attack vectors security teams are encountering and should prepare for: 

1. Delegated AI logins

A user authorizes an AI assistant to manage their subscriptions and log in using approved credentials. But attackers can mimic this exact pattern through agentic AI that claims to be user-sanctioned automation.

The risk: Credential misuse disguised as legitimate agent activity. Traditional detection can’t distinguish between authorized and unauthorized agent access without real-time intent analysis.
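
One practical starting point is to treat a self-declared agent identity as untrusted until it is corroborated by something the client can’t trivially spoof. The sketch below (agent names and CIDR ranges are hypothetical placeholders; real agent vendors publish their own egress ranges) checks a claimed agent identity in the User-Agent string against known infrastructure:

```python
import ipaddress

# Hypothetical allowlist of published egress ranges for known agent vendors.
# The agent name and CIDR below are illustrative placeholders only.
KNOWN_AGENT_RANGES = {
    "example-agent": [ipaddress.ip_network("192.0.2.0/24")],
}

def classify_agent_claim(user_agent: str, client_ip: str) -> str:
    """Treat a self-declared agent identity as untrusted until corroborated."""
    ip = ipaddress.ip_address(client_ip)
    for agent_name, ranges in KNOWN_AGENT_RANGES.items():
        if agent_name in user_agent.lower():
            if any(ip in net for net in ranges):
                return "verified_agent"      # claim matches published infrastructure
            return "spoofed_agent_claim"     # claims to be an agent, wrong network
    return "unclassified"                    # no agent claim; fall through to other checks
```

IP corroboration alone is a coarse signal; it answers “is this agent who it says it is,” not “is this delegation authorized,” which still requires intent analysis.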

2. AI-assisted session manipulation

Once authenticated, an AI agent may begin operating within a user session, updating preferences, changing settings, or navigating the application. This creates a blind spot where mid-session hijacking becomes nearly impossible to detect.

The risk: The line between legitimate agent activity and session takeover blurs dangerously. A verified ChatGPT or Gemini session controlled by a fraudster passes authentication checks while executing malicious transactions.

3. Autonomous account management

AI agents can update payment methods, modify profile data, or change security settings autonomously. These behaviors may appear legitimate from a technical standpoint, but could represent unauthorized account manipulation or data exfiltration.

The risk: The agent has proper credentials and authorization, but the underlying purpose is fraudulent.

4. Business logic abuse 

Catalog scraping and price monitoring predate AI agents. What changes is the efficiency: agents can gather your entire product catalog, pricing strategy, and inventory levels in minutes. Beyond data collection, agents can probe systematically for policy weaknesses—think mass discount code enumeration, loyalty point exploitation, or automated return fraud at scale.

The risk: Testing promo codes and identifying return policy gaps isn’t new, but agents probe more systematically and execute exploitation faster than human fraudsters ever could.

5. Agent credential poisoning

Attackers aren’t just hijacking user accounts—they’re compromising the trust relationship between users and their AI agents. By stealing OAuth tokens, API keys, or session identifiers that connect users to their agents, threat actors can control legitimate agents without touching the user’s primary credentials.

The risk: The agent itself remains verified and passes all authentication checks. But when the connection is compromised, the agent executes attacker commands while appearing to act on the user’s behalf. Identity-based detection alone can’t catch this because the agent’s credentials remain valid—only behavioral and intent analysis can identify the compromise.
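
Because the credentials stay valid, detection has to fall back on behavior. One simple behavioral signal is novelty: actions the user-agent pair has never performed before, weighted by sensitivity. The sketch below is a toy scoring function (the action names, weights, and threshold are assumptions for illustration):

```python
# Actions that warrant extra weight when they appear for the first time.
SENSITIVE_ACTIONS = {"update_payment_method", "change_email", "disable_mfa"}

def compromise_score(baseline_actions: set[str], session_actions: list[str]) -> float:
    """Score 0.0-1.0: how novel (and how sensitive) this session's actions are."""
    if not session_actions:
        return 0.0
    score = 0.0
    for action in session_actions:
        if action not in baseline_actions:
            # Novel sensitive actions count fully; other novel actions partially.
            score += 1.0 if action in SENSITIVE_ACTIONS else 0.3
    return min(score / len(session_actions), 1.0)

def is_suspect(baseline: set[str], session: list[str], threshold: float = 0.5) -> bool:
    return compromise_score(baseline, session) >= threshold
```

A production system would combine many more signals, but the principle holds: when identity checks pass by design, deviation from the established behavioral baseline is what surfaces the compromise.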

6. Cross-platform agent pivoting

AI agents often require access to multiple platforms and services to complete tasks. An agent booking travel might need access to email, calendar, payment systems, and loyalty accounts. If one connection is compromised, attackers can pivot through the agent’s authorized access to compromise additional accounts and services.

The risk: A single compromised agent connection becomes a lateral movement vector across your entire digital ecosystem. 

The business impact of agentic fraud

Fraud & revenue risks

Under-enforcement opens pathways to credential abuse, session hijacking, and transaction fraud at unprecedented scale. Agentic fraud campaigns can operate 24/7 with human-level sophistication and machine-level speed.

Compliance & liability risks

Without clear agent classification and policy frameworks, organizations face mounting uncertainty around liability. If an AI agent autonomously makes a purchase that the end user disputes, who is financially liable? The merchant platform that processed it? The company that built the agent? Or the user who deployed it? 

As regulations like the EU AI Act establish requirements for autonomous AI systems, the ability to verify, audit, and attribute agentic actions shifts from a security best practice to a compliance mandate.

User experience risks

Over-aggressive blocking frustrates legitimate users who rely on AI agents for convenience, productivity, and accessibility. Poor agent classification erodes trust, damages your brand, and creates friction. 

Competitive disadvantages

70% of consumers have used AI for shopping in the last 12 months. As agentic commerce—commercial activities executed or assisted by AI agents—becomes mainstream, organizations that can’t safely facilitate legitimate agent interactions will lose business to those that can.

Defending against agentic fraud: The Agent Trust framework

Agent Trust is an emerging discipline for assessing and ensuring the trustworthiness of AI agents by implementing several layers of protection: 

Identity: Is this a recognizable agent with established credentials?
Intent: What is the underlying purpose of this action?
Behavior: Do the observable actions align with expected patterns?
Authorization: Is this agent permitted to perform this specific action?
Continuous session monitoring: Can we validate the legitimacy of this agent’s actions through the full user journey? 
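
The five layers above can be thought of as signals feeding one composite assessment. As a rough illustration (the field names mirror the layers, but the weights are an assumption, not a documented scheme):

```python
from dataclasses import dataclass

@dataclass
class AgentSignals:
    identity_verified: bool      # recognizable agent with established credentials
    intent_plausible: bool       # inferred purpose matches the action taken
    behavior_in_profile: bool    # observable actions align with expected patterns
    action_authorized: bool      # agent is permitted to perform this specific action
    session_consistent: bool     # legitimacy holds across the full user journey

def trust_score(s: AgentSignals) -> float:
    """Combine the five Agent Trust signals into a 0.0-1.0 score."""
    weights = [
        (s.identity_verified, 0.2),
        (s.intent_plausible, 0.25),
        (s.behavior_in_profile, 0.25),
        (s.action_authorized, 0.2),
        (s.session_consistent, 0.1),
    ]
    return sum(w for ok, w in weights if ok)
```

The point of a graded score rather than a pass/fail check is that a verified agent with valid authorization can still fail on intent and behavior, which is exactly the credential-poisoning scenario.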

Real-world example: A verified ChatGPT agent logs into 47 different retail accounts within 90 seconds, each time updating payment methods and initiating purchases. Traditional security sees valid credentials and a recognized agent. Intent-based detection sees impossible human behavior patterns executed through a compromised agent connection and blocks it in real time.
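
“Impossible human behavior” is often just a velocity question: no supervised agent session plausibly touches dozens of accounts in seconds. A minimal sliding-window velocity check (the thresholds are illustrative) captures the pattern in that example:

```python
def exceeds_velocity(login_times: list[float], max_events: int, window: float) -> bool:
    """Return True if any sliding window of `window` seconds holds > max_events logins."""
    times = sorted(login_times)
    left = 0
    for right in range(len(times)):
        # Shrink the window until it spans at most `window` seconds.
        while times[right] - times[left] > window:
            left += 1
        if right - left + 1 > max_events:
            return True
    return False
```

Forty-seven logins in 90 seconds trips this check almost immediately; a human managing a handful of accounts over an afternoon never does.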

From binary blocking to contextual trust

This framework shifts account security from binary allow/block decisions to contextual trust assessments. It enables organizations to:

Classify agentic traffic separately from human and traditional bot traffic
Analyze behavioral intent through context like login frequency, device consistency, session patterns, and action sequences
Apply policy-driven outcomes based on business logic rather than one-size-fits-all rules
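
Put together, these capabilities replace a single allow/block rule with a policy table keyed on traffic class and trust. A minimal sketch of what policy-driven outcomes could look like (the classes, thresholds, and outcomes here are hypothetical examples of business logic, not a prescribed policy):

```python
from enum import Enum

class Outcome(Enum):
    ALLOW = "allow"
    CHALLENGE = "challenge"
    RATE_LIMIT = "rate_limit"
    BLOCK = "block"

def decide(traffic_class: str, trust_score: float, is_sensitive_action: bool) -> Outcome:
    """Map a (class, trust, sensitivity) triple to a graded outcome."""
    if traffic_class == "human":
        return Outcome.ALLOW
    if traffic_class == "verified_agent":
        if is_sensitive_action and trust_score < 0.7:
            return Outcome.CHALLENGE   # step-up verification for risky actions
        return Outcome.ALLOW if trust_score >= 0.4 else Outcome.RATE_LIMIT
    # Unverified automation defaults to the most restrictive posture.
    return Outcome.BLOCK if trust_score < 0.4 else Outcome.RATE_LIMIT
```

The key design choice is the middle rows: a verified agent with middling trust gets challenged or throttled rather than blocked outright, preserving legitimate agentic commerce while containing risk.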

Preparing for the agentic future

Agentic AI isn’t a distant possibility; it’s accelerating rapidly, and the volume of agentic traffic will only increase. Organizations that adapt early will gain a competitive advantage in the era of agentic commerce. But that requires real-time visibility into agent behavior, intent analysis that goes beyond credentials, and protection that doesn’t add friction. 

DataDome provides visibility and control of all agentic traffic, distinguishing between agents acting on behalf of your users and business, and those executing agentic fraud. DataDome detects and blocks malicious agents in real time, without disrupting genuine users or slowing down legitimate transactions.

Book a demo to learn how DataDome’s Agent Trust management solution protects your business against agentic fraud.