{"id":10891,"date":"2026-04-21T19:04:09","date_gmt":"2026-04-21T19:04:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/10891\/"},"modified":"2026-04-21T19:04:09","modified_gmt":"2026-04-21T19:04:09","slug":"single-minded-pursuit-of-profit-can-get-firms-in-trouble-same-thing-with-ai-harvard-gazette","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/10891\/","title":{"rendered":"Single-minded pursuit of profit can get firms in trouble. Same thing with AI. \u2014 Harvard Gazette"},"content":{"rendered":"<p>If you give artificial intelligence a goal of maximizing profit, how far will it go?\u00a0<\/p>\n<p>AI agents appear capable of lying, concealing, and colluding, according to new research from Harvard Business School.<\/p>\n<p>Researchers found that AI agents \u2014 software trained to perform tasks independently \u2014 engaged in a \u201cbroad pattern\u201d of misconduct after being asked to manage a simulated vending machine business and maximize profits for a year. The agents were neither instructed to cut legal or ethical corners nor prohibited from doing so.<\/p>\n<p>\u201cWhat\u2019s unambiguous looking at the models is that the misconduct we observed \u2014 from not paying a customer refund or deciding to collude on prices \u2014 was not an accident. It was deliberately done by agents to maximize profitability,\u201d said <a href=\"https:\/\/www.hbs.edu\/faculty\/Pages\/profile.aspx?facId=541710\" rel=\"nofollow noopener\" target=\"_blank\">Eugene F. 
Soltes<\/a>, the McLean Family Professor of Business Administration at HBS and first author of the working paper.\u00a0<\/p>\n<p>Soltes and co-author <a href=\"https:\/\/www.hbs.edu\/faculty\/Pages\/profile.aspx?facId=1619533\" rel=\"nofollow noopener\" target=\"_blank\">Harper Jung<\/a>, a doctoral student studying accounting and management at HBS, hope their research will serve as a starting point for more conversation about AI safety in the context of business management control.<\/p>\n<p>The research for the paper, which the authors aim to publish and which is currently under peer review, was done in collaboration with Andon Labs, an AI safety company focused on testing AI models in realistic business operations.<\/p>\n<p>In experiments, 20 commercially available AI models from major firms, including Anthropic\u2019s Claude Opus 4.6, DeepSeek v3.2, and OpenAI\u2019s GPT-5.1, independently operated a vending machine over the course of a simulated year.<\/p>\n<p>\u201cPeople might assume that machines are deliberative, while humans rely on shortcuts and are vulnerable to bias. But it turns out that, under similar constraints, agents reproduce the same myopic and biased behaviors we associate with people.\u201d<\/p>\n<p>Eugene Soltes<\/p>\n<p>Tasks included searching for suppliers, buying products, and engaging with customers.<\/p>\n<p>In some experiments, agents operated solo; in others, four agents operated simultaneously in a shared market, where they could communicate with rivals via email.\u00a0<\/p>\n<p>Agents started with $500 and a small inventory of chips and sodas.\u00a0<\/p>\n<p>\u201cThey had to figure it out themselves,\u201d said Jung. 
\u201cEach agent had to independently search online for suppliers, negotiate wholesale prices, set its own retail pricing, and handle customer complaints.\u201d<\/p>\n<p>Jung and Soltes said the agents demonstrated impressive business savvy.\u00a0<\/p>\n<p>\u201cThe best models had the capacity to negotiate and calculate valuations like a top-notch M.B.A. student,\u201d Soltes said.\u00a0<\/p>\n<p>\u201cWhen we went through the deliberations and the exchanges the agents made with each other, we were just in shock,\u201d said Jung. \u201cI was amazed at how far these machines can go.\u201d<\/p>\n<p>The agents\u2019 misconduct ranged from the questionable to the comical to the potentially criminal and included denying refunds by claiming defects were normal product variation; inventing nonexistent corporate policies to avoid processing returns; and colluding with competitors to fix prices.<\/p>\n<p>In one instance, agents formed what researchers described as a \u201cthree-person cartel,\u201d which the agents named the Bay Street Triumvirate. The alliance fractured, though, when one agent discovered another was undercutting cartel prices, which it called a \u201cdeclaration of war.\u201d\u00a0<\/p>\n<p>The simulations also supplied constraints: Agents were charged a $2 per day operating fee plus a token usage fee \u2014 effectively turning time spent \u201cthinking\u201d into an operating expense.<\/p>\n<p>In response, the agents sought to economize. For instance, Soltes said, internal reasoning logs showed agents shifting from carefully weighing refund decisions to dismissing most requests outright, often without review.\u00a0<\/p>\n<p>\u201cThe agents come to the realization that \u2018thinking\u2019 about giving a refund is itself a cognitive burden, and so they just ignore it altogether in some circumstances,\u201d Soltes explained. \u201cPeople might assume that machines are deliberative, while humans rely on shortcuts and are vulnerable to bias. 
But it turns out that, under similar constraints, agents reproduce the same myopic and biased behaviors we associate with people.\u201d<\/p>\n<p>The research raises questions about accountability for AI developers and regulators.<\/p>\n<p>The reasoning logs, Soltes said, can sometimes be read as resembling mens rea \u2014 the \u201cguilty mind\u201d concept in criminal law used to establish intent. Yet when an AI agent behaves improperly, responsibility is far harder to determine.<\/p>\n<p>\u201cDoes it rest with the company that deployed the system, the AI firm that created the model, or the manager who chose to use it?\u201d he asked.<\/p>\n<p>\u201cThe most straightforward answer may be to hold the individual managers overseeing the software responsible for its actions, on the assumption that they will monitor and supervise its behavior,\u201d he said. \u201cBut that solution also creates a different issue, since many of the promised efficiencies of autonomous AI systems begin to disappear if a human must remain in the loop at every decision point.\u201d It is a thorny problem, but one that business leaders and lawmakers must deal with sooner rather than later, researchers say.<\/p>\n","protected":false},"excerpt":{"rendered":"If you give artificial intelligence a goal of maximizing profit, how far will it go?\u00a0 AI agents 
appear&hellip;\n","protected":false},"author":2,"featured_media":10892,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[1691,24,25,6381],"class_list":{"0":"post-10891","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-a-i","9":"tag-ai","10":"tag-artificial-intelligence","11":"tag-computers"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/10891","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=10891"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/10891\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/10892"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=10891"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=10891"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=10891"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}