The digital storefront is undergoing its most radical transformation since the invention of the shopping cart, but this time, the customer navigating the aisles might not be human.

This is the second in a two-part series on the subject.

As agentic AI—autonomous systems capable of researching, selecting, and purchasing products—moves from science fiction to a trillion-dollar reality, the retail landscape is bracing for a collision between rapid innovation and legal liability.

While retailers deploy shopping agents to remove friction and improve the shopping experience, they are rewriting their terms and conditions to ensure that when the AI agent makes a mistake, the consumer bears the cost. But there’s a tricky balance here, and shifting liability onto the shopper could sour the relationship between customers and retailers.

This shift marks the end of the experimental phase of AI and the beginning of a high-stakes era of accountability. Retailers are finding themselves in a precarious position. They want the efficiency of AI-driven commerce, but they are terrified of the financial fallout when those agents hallucinate an order or misinterpret a budget. The scale of the problem is already immense. Returns are currently an $890 billion annual headache for retailers globally, and experts warn that autonomous shopping could cause that figure to skyrocket.

Matt Maher, an independent researcher and technology advisor at M7 Innovations, told Sourcing Journal that these policy changes are a preemptive strike against a looming logistical nightmare. He expects a wave of retailers and brands to update their legal language because agentic commerce provides the “perfect scapegoat” for consumers to point to when a transaction goes sideways.

Maher cited Walmart’s recent updates to its policies regarding Sparky, its own AI shopping agent, as a watershed moment for the industry. It is a move that Maher finds telling, as the retail giant is essentially seeking to skirt liability for its own proprietary technology. This, he argued, proves that we are living in the “wild west” of e-commerce, where even the architects of the tech tools are not yet ready to stand behind their performance. At least for now.

Meanwhile, the technical reality of agentic shopping remains far from seamless. Maher, who has personally tested various agentic solutions, puts his current success rate at only around 70 percent. His experiences include agents purchasing items that were out of stock, failing to select appropriate shipping methods, and causing multi-week delays in delivery.

For a consumer, these are frustrating glitches, and for a retailer, they are the harbingers of a broken relationship. Maher warned that the “customer is always right” adage is being tested by this tsunami of AI, as users will likely direct their blame at both the AI provider and the brand when errors occur.

Beyond the immediate legal protection, retailers have ulterior motives for tightening their terms. There is a desperate need to protect first-party data, which becomes muddied when bots, rather than humans, navigate a site. Retailers must also safeguard their own media networks. These platforms generate massive advertising revenue by showing ads to humans. As of now, there is no established framework for how to monetize or handle an AI agent that bypasses traditional visual advertisements.

Owen Carr, chief marketing officer for Spreetail, agreed that the speed of technology has outpaced social and legal norms. He said that as AI moves from simple product discovery into the actual transaction process, retailers are desperate to draw clear lines around responsibility. However, Carr warned that legal disclaimers may not be enough to save a brand’s reputation.

To a consumer, the AI is the retailer.

If a tool gets an order wrong, it isn’t viewed as a legal technicality; it is experienced as a failure in product content, search and customer support. Carr said the real issue is not just who pays for a mistake, but whether the underlying retail system is robust enough to prevent mistakes in the first place. In the world of conversational shopping, “weak product data and weak execution get exposed fast,” he told Sourcing Journal.

This sentiment is echoed by Anthony Ferry, chief executive officer of Wayvia, whose company works with over 32,000 retailers globally. Ferry said major retail sites are currently being used as “live testbeds,” and the industry is watching closely to see what breaks. While giants such as Walmart and Target may have the leverage to enforce strict liability waivers, smaller retailers might take the opposite approach, absorbing AI risk as a way to differentiate themselves and earn customer trust.

Ferry said consumers do not experience these errors as policy issues. Instead, they experience them as “you got my order wrong.” Whether a competitor’s product is accidentally dropped into a cart or a house brand is pushed unfairly, the result is an immediate trust problem.

Data from the front lines of consumer behavior suggests that the public is already wary. Piyush Patel, chief ecosystem officer at Algolia, pointed to recent surveys showing that only 13 percent of consumers are “very likely” to purchase based solely on AI recommendations. Nearly half of all shoppers polled expressed concerns regarding accuracy and data privacy. Perhaps most significantly, 36 percent of consumers have already returned products because of inaccurate or inconsistent information provided during the digital journey.

Patel views the shift in liability as a defensive tactic to buy time while infrastructure catches up to marketing promises. Retailers rushed to announce AI capabilities, but many are now struggling to operationalize those tools across fragmented data systems and inconsistent catalogs.

When a retailer tells a customer they are responsible for an AI’s error, it reinforces the fear that the system is fundamentally unreliable. Patel said the winners in this space will be the brands that don’t just disclaim risk, but “quietly reduce it” by investing in superior data and retrieval infrastructure. He said while AI hallucinations are discussed less often, subtle errors like incomplete answers or misinterpretations of enterprise data persist.

The transition to agentic commerce is also forcing a shift in how retailers view their own “product.” Sid Vangala, senior AI systems engineer and architect at MasTec, said the industry is entering an era where responsibility boundaries are completely undefined. Vangala said if customers feel they are being held responsible for mistakes made by tools the retailer introduced, the resulting friction could derail adoption entirely.

The industry is now forced to create clearer standards for an era where the “human in the loop” is becoming increasingly rare.

Ultimately, the friction between AI innovation and consumer protection is creating a new hierarchy in retail. As Ferry noted, “policy is now part of the product.” Consumers will gravitate toward what works and what feels safe. While shoppers are currently comfortable letting AI lead them to a product, they are not yet ready to hand over the “keys to the kingdom” and let an agent own the entire transaction.

The retailers that thrive will be those that realize a legal disclaimer is no substitute for a reliable experience. They must decide how much risk they are willing to absorb to make the future of shopping feel like a service rather than a gamble. For now, the message from the retail industry is clear: the AI may be doing the shopping, but the human is still on the hook.