{"id":34347,"date":"2026-05-11T08:58:10","date_gmt":"2026-05-11T08:58:10","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/34347\/"},"modified":"2026-05-11T08:58:10","modified_gmt":"2026-05-11T08:58:10","slug":"the-ai-paradox-more-humanlike-means-less-autonomous-machine-learning-times","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/34347\/","title":{"rendered":"The AI Paradox: More Humanlike Means Less Autonomous \u00ab Machine Learning Times"},"content":{"rendered":"<p><img decoding=\"async\" class=\"size-full wp-image-14145\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/05\/The-AI-Paradox-More-Humanlike-Means-Less-Autonomous-.webp\" alt=\"\" width=\"100%\"\/><\/p>\n<p>Originally published in\u00a0<a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2026\/01\/26\/the-ai-paradox-more-humanlike-means-less-autonomous\/\" target=\"_blank\" rel=\"noopener nofollow\">Forbes<\/a><\/p>\n<p>The AI executives are at it again, promising human-level machines in the near future. In Davos, the CEOs of Google DeepMind and Anthropic each doubled down on the near-term arrival of artificial general intelligence \u2013 the hypothetical capacity for a machine to do most anything a human can \u2013 giving it\u00a0<a href=\"https:\/\/www.investing.com\/news\/stock-market-news\/google-deepmind-ceo-discusses-ai-progress-and-timeline-for-agi-93CH-4455818\" target=\"_blank\" rel=\"noopener nofollow\">50% odds of arriving by 2030<\/a> and expecting it to\u00a0<a href=\"https:\/\/www.yahoo.com\/news\/articles\/expect-agi-within-few-years-232812156.html\" target=\"_blank\" rel=\"noopener nofollow\">arrive this year or next<\/a>, respectively.<\/p>\n<p>Is AI overpromised? Will the hype cost us dearly when the widespread narrative is recognized as overzealous and disillusionment sets in? 
Or is human-level machine intelligence around the corner?<\/p>\n<p>As hard as\u00a0<a class=\"gmail-color-link\" href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2024\/04\/10\/artificial-general-intelligence-is-pure-hype\/\" rel=\"nofollow noopener\" target=\"_blank\">I\u2019ve argued against the AI hype<\/a>, I have to admit that it\u2019s a religious debate. We\u2019re not approaching consensus. There will always be a contingent who believes even AI\u2019s most grandiose promises.<\/p>\n<p>Yet there\u2019s a lot to be gained by clarifying what it is that we\u2019re arguing about. After all, business leaders and investors need to understand exactly what they\u2019re betting on as they struggle to pursue sound strategies rather than wishful thinking.<\/p>\n<p>To Hype AI Is To Promise Extraordinary Machine Autonomy<\/p>\n<p>The question of whether the AI hype overpromises is a question of goodness: Will AI soon become as good as promised?<\/p>\n<p>But that opens a can of worms: How do we measure goodness? The most obvious answer is intelligence. The more intelligent, the better. Pursuing intelligence has won the day in the public\u2019s eye. After all, the notion that\u2019s making the world salivate is called artificial intelligence.<\/p>\n<p>But \u201cintelligence\u201d does not represent a viable yardstick. It\u2019s subjective. How could we know when it\u2019s been achieved \u2013 or even when there\u2019s been progress toward it?\u00a0<a href=\"https:\/\/hbr.org\/2023\/06\/the-ai-hype-cycle-is-distracting-companies\" target=\"_blank\" rel=\"noopener nofollow\">Any test designed to measure \u201cintelligence\u201d only diminishes it<\/a>, because it only assesses a narrow capability.<\/p>\n<p>On the other hand, the most grandiose AI goal is, ironically, easier to define \u2013 albeit still unmeasurable. Artificial general intelligence simply represents the whole enchilada. 
An AGI system would effectively be a \u201cvirtual human.\u201d This notion is defined in terms of what humans can do, rather than in terms of a subjective quality that humans hold, intelligence.<\/p>\n<p>AGI would mean supreme autonomy \u2013 by definition. Since it could do everything humans can, we would need no human in the loop. I have argued that AGI is not a feasible goal for the foreseeable future \u2013 that\u00a0<a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2024\/07\/29\/the-great-ai-myth-these-3-misconceptions-fuel-it\/\" target=\"_blank\" rel=\"noopener nofollow\">we are not even making concrete headway toward it<\/a>. But even if you believed that research was making viable progress, there would be practical challenges to measuring it. How do you prove a machine can run a large company for years or fully educate a child without giving it a try on such tasks?<\/p>\n<p>Instead, let\u2019s get concrete and realistic: The suitable benchmark is autonomy. Rather than asking whether a system seems to exhibit \u201cintelligence,\u201d or whether it is headed toward wholesale human-level capabilities, ask how autonomous it is. How much work can it automate? Or, to what degree is it not autonomous, requiring humans remain in the loop?<\/p>\n<p>Autonomy is a measurable criterion that reflects AI goodness. It represents the value of a system, since automation is the goal \u2013 of any machine. Machines exist to do things that would otherwise need to be done by humans. That\u2019s why we build them. The more autonomous, the more potentially valuable.<\/p>\n<p>AI hype promises unrealistic autonomy. By viewing AI goodness as its degree of potential autonomy, we can identify an AI promise as hype when it promises infeasible autonomy. For example, the story of near-term AGI represents the epitome of AI hype, since it promises supreme autonomy. 
The\u00a0<a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2025\/07\/14\/agentic-ai-is-the-new-vaporware\/\" target=\"_blank\" rel=\"noopener nofollow\">ill-defined buzzword agentic AI<\/a> is also generally guilty of\u00a0<a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2025\/07\/28\/the-agentic-ai-hype-cycle-is-insane--dont-normalize-it\/\" target=\"_blank\" rel=\"noopener nofollow\">promising unrealistic autonomy<\/a>.<\/p>\n<p>Predictive AI Is More Autonomous Than Generative AI<\/p>\n<p>With enterprise applications of generative AI, such as providing strategic advice or helping write marketing creatives or computer code,\u00a0<a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2025\/10\/20\/our-last-hope-before-the-ai-bubble-detonates-taming-llms\/\" target=\"_blank\" rel=\"noopener nofollow\">you generally need a human in the loop<\/a> reviewing each output \u2013 every assertion, suggestion, inference, statement, segment of computer code and draft document that it generates. GenAI positions itself to take on consequential human tasks \u2013 activities that attract scrutiny because they would require high levels of performance for the computer to operate without constant human supervision.<\/p>\n<p>In contrast, by taking on functions that are more forgiving, many\u00a0<a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2024\/03\/04\/3-ways-predictive-ai-delivers-more-value-than-generative-ai\/\" target=\"_blank\" rel=\"noopener nofollow\">predictive AI<\/a> projects can capture the immense value of full autonomy across the largest-scale operational functions. Bank systems\u00a0<a href=\"https:\/\/www.predictiveanalyticsworld.com\/machinelearningtimes\/real-time-machine-learning-why-its-vital-and-how-to-do-it\/12166\/\" target=\"_blank\" rel=\"noopener nofollow\">instantly decide whether to allow a credit card charge<\/a>. 
Websites instantly decide which ad to display, and marketing systems make a million yes\/no decisions as to who gets contacted. So do\u00a0<a href=\"https:\/\/bigthink.com\/articles\/team-obama-mastered-the-science-of-mass-persuasion-and-won\/\" target=\"_blank\" rel=\"noopener nofollow\">the analytics systems of political campaigns<\/a>. E-commerce systems set the price for each purchase, from flights to flashlights. Safety systems decide\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=1wJ6D7HpJnE&amp;t=11s\" target=\"_blank\" rel=\"noopener nofollow\">which bridge, manhole and restaurant to inspect<\/a>. No human is in the loop for those specific decision-making steps.<\/p>\n<p>This is\u00a0The AI Paradox:\u00a0Even as genAI seems so humanlike, since it is meant to take on human tasks, it generally demands human supervision at each step and for each output. Ironically, this means genAI is less potentially autonomous than predictive AI.<\/p>\n<p>Recognizing this paradox could reorient many decision makers. People get excited over genAI because it is so humanlike and advanced. GenAI\u2019s extraordinary capabilities are unprecedented, so it does indeed present many new valuable propositions. But if value excites you more than sexiness \u2013 if initiatives that would deliver the greatest improvements to enterprise efficiencies are your goal \u2013 then you should\u00a0<a class=\"gmail-color-link\" href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/2024\/03\/04\/3-ways-predictive-ai-delivers-more-value-than-generative-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">bump predictive AI projects far up your priority list<\/a>, placing them at least as high as most genAI initiatives for the foreseeable future.<\/p>\n<p>About the author<\/p>\n<p>Eric Siegel, Ph.D., is a former Columbia University professor who helps companies deploy machine learning. 
He is the cofounder and CEO of <a href=\"https:\/\/www.gooder.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Gooder AI<\/a>, the founder of the long-running <a href=\"https:\/\/www.machinelearningweek.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Machine Learning Week<\/a> conference series, the instructor of the acclaimed online course \u201c<a href=\"http:\/\/machinelearning.courses\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Machine Learning Leadership and Practice \u2013 End-to-End Mastery<\/a>,\u201d executive editor of <a href=\"http:\/\/machinelearningtimes.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">The Machine Learning Times<\/a>, and a <a href=\"http:\/\/www.machinelearningspeaker.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">frequent keynote speaker<\/a>. He wrote the bestselling <a href=\"https:\/\/www.machinelearningkeynote.com\/predictive-analytics\" target=\"_blank\" rel=\"noopener nofollow\">Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die<\/a>, which has been used in courses at hundreds of universities, as well as <a href=\"http:\/\/www.bizml.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">The AI Playbook: Mastering the Rare Art of Machine Learning Deployment<\/a>. Eric\u2019s interdisciplinary work bridges the stubborn technology\/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. 
A <a href=\"https:\/\/www.forbes.com\/sites\/ericsiegel\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Forbes contributor<\/a>, Eric publishes <a href=\"http:\/\/www.civilrightsdata.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">op-eds on analytics and social justice<\/a>.<\/p>\n<p>Eric has <a href=\"https:\/\/www.machinelearningkeynote.com\/press\" target=\"_blank\" rel=\"noopener nofollow\">appeared on<\/a>\u00a0Bloomberg TV and Radio, BNN (Canada), Israel National Radio, National Geographic Breakthrough, NPR Marketplace, Radio National (Australia), and TheStreet. Eric and his books have been <a href=\"https:\/\/www.machinelearningkeynote.com\/press\" target=\"_blank\" rel=\"noopener nofollow\">featured in<\/a>\u00a0BBC,\u00a0Big Think, Businessweek, CBS MoneyWatch, Contagious Magazine, The European Business Review, Fast Company, The Financial Times, Fortune, GQ, Harvard Business Review, The Huffington Post, The Los Angeles Times, Luckbox Magazine, MIT Sloan Management Review, The New York Review of Books, The New York Times, Newsweek, Quartz, Salon, The San Francisco Chronicle, Scientific American, The Seattle Post-Intelligencer, Trailblazers with Walter Isaacson, The Wall Street Journal, The Washington Post, and WSJ MarketWatch.<\/p>\n","protected":false},"excerpt":{"rendered":"Originally published in\u00a0Forbes The AI executives are at it again, promising human-level machines in the near future. 
In&hellip;\n","protected":false},"author":2,"featured_media":34348,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[6744,8362,3013,1085,3328,6098,10726,10725],"class_list":{"0":"post-34347","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agi","8":"tag-agi","9":"tag-analytics","10":"tag-artificial-general-intelligence","11":"tag-data-science","12":"tag-data-mining","13":"tag-predictive-analytics","14":"tag-predictive-analytics-jobs","15":"tag-predictive-analytics-news"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/34347","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=34347"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/34347\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/34348"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=34347"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=34347"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=34347"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}