Experts Advise Moving From Verifying Identities to Knowing Agent Intentions
Suparna Goswami (gsuparna) • February 6, 2026

Financial institutions are rushing to deploy AI agents capable of autonomously initiating transactions, approving payments and freezing accounts in real time. But these innovations are creating a “dual authentication crisis” that traditional security frameworks cannot address, according to fraud prevention experts.
Banks must now verify two distinct elements simultaneously: intent – whether the user authorized the agent to act – and integrity – whether the agent is operating as designed. This represents authentication’s most fundamental shift since digital banking began, moving beyond simple identity verification to validating delegated authority.
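The dual check described here can be pictured as two independent gates that must both open before an agent transacts. A minimal sketch, assuming a hypothetical setup in which "intent" is a user-signed delegation message and "integrity" is an attestation that the running agent matches a vetted build; the keys, messages, and function names below are illustrative, not any bank's or vendor's protocol:

```python
import hashlib
import hmac

# Stand-ins for real cryptographic material (hypothetical values).
USER_SECRET = b"user-shared-secret"
VETTED_AGENT_HASH = hashlib.sha256(b"agent-v1.0 binary").hexdigest()

def verify_intent(delegation_msg: bytes, signature: str) -> bool:
    """Intent gate: did the user sign off on this delegation?
    An HMAC stands in for a real mandate signature scheme."""
    expected = hmac.new(USER_SECRET, delegation_msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def verify_integrity(agent_binary: bytes) -> bool:
    """Integrity gate: is the agent the exact build the bank vetted?"""
    return hashlib.sha256(agent_binary).hexdigest() == VETTED_AGENT_HASH

msg = b"agent-123 may purchase_tickets up to $900"
sig = hmac.new(USER_SECRET, msg, hashlib.sha256).hexdigest()

# Both gates must pass; a tampered agent fails integrity even with a valid mandate.
assert verify_intent(msg, sig) and verify_integrity(b"agent-v1.0 binary")
assert not verify_integrity(b"tampered agent")
```

The point of separating the two checks is that each can fail independently: a stolen mandate fails intent, and a modified or misbehaving agent fails integrity, mirroring the two halves of the "dual authentication crisis."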
The industry is moving from “are you who you say you are” to “did you authorize this agent to do these things?” said David Barnhardt, strategic advisor for fraud and AML at Datos Insights. “The assumption that we are dealing with a human on the other end goes out the window.”
The Authentication Gap
Traditional authentication relies on point-in-time verification, such as a password plus multifactor authentication, after which access is granted. Over the years, banks have built fraud models around human spending patterns. But AI agents that purchase around the clock while hunting for optimal deals have rendered that model obsolete.
“With autonomous agents transacting on behalf of users, the distinction between legitimate and fraudulent activity is blurred, and a single compromised identity could trigger automated losses at scale,” said Ajay Patel, head of agentic commerce at Prove.
For example, a customer could authorize an AI agent to purchase concert tickets with explicit instructions not to spend more than $900 per ticket. The agent might ignore the price limit and find better seats down front for $25,000 a ticket. The agent has legitimate credentials, authorized access to the account, and is technically fulfilling its mission of securing tickets – but it wildly exceeded its authorized parameters.
Traditional fraud models would struggle to flag this error. The transaction originated from an authorized agent, used valid credentials and targeted a legitimate merchant. There’s no device fingerprint anomaly, no geographic impossibility, no velocity pattern that screams fraud. The agent is simply interpreting its instructions differently than the human intended.
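Catching this kind of overrun requires enforcing the user's delegated scope at authorization time rather than relying on post-hoc anomaly scoring. A minimal sketch of a mandate-style check, where the mandate record, field names, and dollar cap are illustrative assumptions rather than any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """User-granted authority delegated to an agent (illustrative schema)."""
    agent_id: str
    allowed_action: str        # e.g. "purchase_tickets"
    max_price_per_item: float  # user-set cap, in dollars

def check_transaction(mandate: Mandate, action: str, price_per_item: float) -> bool:
    """Approve only if the requested action and price fall inside the
    delegated scope, regardless of how valid the agent's credentials are."""
    if action != mandate.allowed_action:
        return False
    return price_per_item <= mandate.max_price_per_item

mandate = Mandate(agent_id="agent-123",
                  allowed_action="purchase_tickets",
                  max_price_per_item=900.0)

# The $25,000 seats are rejected even though the agent is fully authenticated.
assert check_transaction(mandate, "purchase_tickets", 25_000.0) is False
assert check_transaction(mandate, "purchase_tickets", 850.0) is True
```

In this framing the ticket scenario is not a credential failure at all; it is a scope failure, which is exactly the signal that device fingerprints and velocity rules never see.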
The problem can be compounded when risk models encounter legitimate agent activity that resembles an attack. When a highly anticipated product launches, such as the latest iPhone or Taylor Swift tickets, millions of AI agents might simultaneously converge on merchant sites seeking the best deal for their users.
“We thought that aggregators were, in fact, a DDoS attack because they would all rush in at once,” Barnhardt said. “There is a lot of talk about cryptographic proof, knowing your agent, but not only to verify this is my legitimate agent that I have authorized, but what have I authorized that agent to do?”
But before banks can address the authentication problem, they need to fix their data infrastructure, said Carey Ransom, managing director at BankTech Ventures. AI agents need clean, contextually appropriate data, and banks don't yet have standardized ways to provide it.
So, when mistakes occur, who is at fault, and who is liable for making things right? When AI agents can spawn sub-agents that delegate tasks to other AI systems throughout a transaction chain, the liability question gets murky.
“Autonomous agentic actions are where people want to get to, but there is a lot to figure out with human-in-the-loop agentic transactions,” Patel said. “Every industry participant has to weigh the cost and benefits of newly emerging channels, and ultimately the commercial gain has to be higher than the cost of the fraud.”
The solution could follow the way banks currently function. Banks today allow account aggregators to access customer data, but consumers typically assume liability for those third-party services through user agreements.
“The more agents are provisioned and managed as a human analogy with rights, permissions and authentication, the clearer it will be to manage and arbitrate an issue like this,” Ransom said. He added that current regulations offer little guidance.
“It is a classic situation where the product is being built at light speed, and we are going to have to start thinking about regulation when it gets out of hand,” Barnhardt said.
Vendors Building Products
Despite the regulatory vacuum, financial institutions and vendors are actively developing frameworks to address the dual authentication challenge.
Prove launched a Know Your Agent initiative that enables continuous life cycle identity authentication. Mastercard in January launched its Agent Suite – a comprehensive platform that helps businesses build, test and deploy customizable AI agents with built-in security. The company also published agentic commerce standards and rules of the road to help enterprises prepare for agentic transactions.
Layered authentication that balances security with speed will reduce agentic AI risks, Ransom said.
“Variant transaction requests might require a new layer or type of authentication to ensure it is legitimate and reflecting the desired activity,” he said. “Checks and balances will be a prevailing approach to protect both sides, while still enabling the autonomy and efficiency the market desires.”
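One way to read Ransom's suggestion is risk-based step-up: routine requests inside the mandate proceed automatically, requests near the edge of the delegated scope pause for fresh human confirmation, and out-of-scope requests are denied. A hedged sketch; the tiering ratio and function names are assumptions for illustration, not a described product:

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    STEP_UP = "step_up"   # pause and re-confirm intent with the human
    DENY = "deny"

def route_request(amount: float, cap: float, step_up_ratio: float = 0.8) -> Decision:
    """Illustrative tiering: small requests pass, near-cap requests trigger
    a fresh authentication layer, and over-cap requests are refused."""
    if amount > cap:
        return Decision.DENY
    if amount >= cap * step_up_ratio:
        return Decision.STEP_UP
    return Decision.APPROVE

assert route_request(100.0, cap=900.0) is Decision.APPROVE
assert route_request(850.0, cap=900.0) is Decision.STEP_UP
assert route_request(25_000.0, cap=900.0) is Decision.DENY
```

The step-up tier is the "checks and balances" compromise: it preserves agent autonomy for the bulk of activity while reserving human sign-off for the variant requests most likely to reflect a misread instruction.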
Patel called for a “proactive, consortium approach” to developing standards quickly to keep pace with industry-wide adoption.
“The market expects this emerging channel to mature significantly faster than e-commerce, which arguably has not even fully matured yet,” Patel said.
The good news is momentum is building. “No one is sticking their head into the sand and just saying, ‘Well, it will be what it will be,'” Barnhardt said. “People are really preparing and setting forth a good effort to try and get ready for what is to come.”