{"id":371206,"date":"2025-08-25T02:13:10","date_gmt":"2025-08-25T02:13:10","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/371206\/"},"modified":"2025-08-25T02:13:10","modified_gmt":"2025-08-25T02:13:10","slug":"a-better-way-to-think-about-ai","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/371206\/","title":{"rendered":"A Better Way to Think About AI"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">No one doubts that our future will feature more automation than our past or present. The question is how we get from here to there, and how we do so in a way that is good for humanity.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Sometimes it seems the most direct route is to automate wherever possible, and to keep iterating until we get it right. Here\u2019s why that would be a mistake: imperfect automation is not a first step toward perfect automation, any more than jumping halfway across a canyon is a first step toward jumping the full distance. Recognizing that the rim is out of reach, we may find better alternatives to leaping\u2014for example, building a bridge, hiking the trail, or driving around the perimeter. This is exactly where we are with artificial intelligence. AI is not yet ready to jump the canyon, and it probably won\u2019t be in a meaningful sense for most of the next decade.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Rather than asking AI to hurl itself over the abyss while hoping for the best, we should instead use AI\u2019s extraordinary and improving capabilities to build bridges. 
What this means in practical terms: We should insist on AI that can collaborate with, say, doctors\u2014as well as teachers, lawyers, building contractors, and many others\u2014instead of AI that aims to automate them out of a job.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Radiology provides an illustrative example of automation overreach. In a widely discussed study<a data-event-element=\"inline link\" href=\"https:\/\/economics.mit.edu\/sites\/default\/files\/2024-04\/agarwal-et-al-diagnostic-ai.pdf\" target=\"_blank\" rel=\"noopener\"> published<\/a> in April 2024, researchers at MIT found that when radiologists used an AI diagnostic tool called <a data-event-element=\"inline link\" href=\"https:\/\/stanfordmlgroup.github.io\/competitions\/chexpert\/\" target=\"_blank\" rel=\"noopener\">CheXpert<\/a>, the accuracy of their diagnoses declined. \u201cEven though the AI tool in our experiment performs better than two-thirds of radiologists,\u201d the researchers wrote, \u201cwe find that giving radiologists access to AI predictions does not, on average, lead to higher performance.\u201d Why did this good tool produce bad results?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">A proximate answer is that doctors didn\u2019t know when to defer to the AI\u2019s judgment and when to rely on their own expertise. When AI offered confident predictions, doctors frequently overrode those predictions with their own. When AI offered uncertain predictions, doctors frequently overrode their own better predictions with those supplied by the machine. Because the tool offered little transparency, radiologists had no way to discern when they should trust it.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">A deeper problem is that this tool was designed to automate the task of diagnostic radiology: to read scans like a radiologist. 
But automating a radiologist\u2019s entire diagnostic job was infeasible because CheXpert was not equipped to process the ancillary medical histories, conversations, and diagnostic data that radiologists rely on for interpreting scans. Given the differing capabilities of doctors and CheXpert, there was potential for virtuous collaboration. But CheXpert wasn\u2019t designed for this kind of collaboration.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">When experts collaborate, they communicate. If two clinicians disagree on a diagnosis, they might isolate the root of the disagreement through discussion (e.g., \u201cYou\u2019re overlooking this.\u201d). Or they might arrive at a third diagnosis that neither had been considering. That\u2019s the power of collaboration, but it cannot happen with systems that aren\u2019t built to listen. Where CheXpert\u2019s and the radiologist\u2019s assessments differed, the doctor was left with a binary choice: go with the software\u2019s statistical best guess or go with her own expert judgment.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">It\u2019s one thing to automate <a data-event-element=\"inline link\" href=\"https:\/\/academic.oup.com\/qje\/article-abstract\/118\/4\/1279\/1925105\" target=\"_blank\" rel=\"noopener\">tasks<\/a>, quite another to automate whole jobs. This particular AI was designed as an automation tool, but radiologists\u2019 full scope of work defies automation at present. 
A radiological AI could be built to work collaboratively with radiologists, and it\u2019s likely that future tools will be.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">Tools can generally be divided into two buckets: In one bucket, you\u2019ll find automation tools that function as closed systems that do their work without oversight\u2014ATMs, dishwashers, electronic toll takers, and automatic transmissions all fall into this category. These tools replace human expertise in their designated functions, often performing those functions better, cheaper, and faster than humans can. Your car, if you have one, probably shifts gears automatically. Most new drivers today will never have to master a stick shift and clutch.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In the second bucket you\u2019ll find collaboration tools, such as chain saws, word processors, and stethoscopes. Unlike automation tools, collaboration tools require human engagement. They are force multipliers for human capabilities, but only if the user supplies the relevant expertise. A stethoscope is unhelpful to a layperson. A chain saw is invaluable to some, dangerous to many.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Automation and collaboration are not opposites, and are frequently packaged together. Word processors automatically perform text layout and grammar checking even as they provide a blank canvas for writers to express ideas. Even so, we can distinguish automation from collaboration functions. 
The transmissions in our cars are fully automatic, while their safety systems collaborate with their human operators to monitor blind spots, prevent skids, and avert impending collisions.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">AI does not fit neatly into either the automation bucket or the collaboration bucket. That\u2019s because AI does both: It automates away expertise in some tasks and fruitfully collaborates with experts in others. But it can\u2019t do both at the same time in the same task. In any given application, AI is going to automate or it\u2019s going to collaborate, depending on how we design it and how someone chooses to use it. And the distinction matters because bad automation tools\u2014machines that attempt but fail to fully automate a task\u2014also make bad collaboration tools. They don\u2019t merely fall short of their promise to replace human expertise at higher performance or lower cost; they interfere with human expertise, and sometimes undermine it.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The promise of automation is that the relevant expertise is no longer required from the human operator because the capability is now built-in. (And to be clear, automation does not always imply superior performance\u2014consider self-checkout lines and computerized airline phone agents.) But if the human operator\u2019s expertise must serve as a fail-safe to prevent catastrophe\u2014guarding against edge cases or grabbing the controls if something breaks\u2014then automation is failing to deliver on its promise. The need for a fail-safe can be intrinsic to the AI, or caused by an external failure\u2014either way, the consequences of that failure can be grave.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The tension between automation and collaboration lies at the heart of a notorious aviation accident that occurred in June 2009. 
Shortly after Air France Flight 447 left Rio de Janeiro for Paris, the plane\u2019s airspeed sensors froze over\u2014a relatively routine, transitory instrument loss due to high-altitude icing. Unable to guide the craft without airspeed data, the autopilot automatically disengaged as it was set to do, returning control of the plane to the pilots. The MIT engineer and historian David Mindell described what happened next in his 2015 book, <a data-event-element=\"inline link\" href=\"https:\/\/sts-program.mit.edu\/book\/robots-robotics-myths-autonomy\/\" target=\"_blank\" rel=\"noopener\">Our Robots, Ourselves<\/a>:<\/p>\n<blockquote class=\"\">\n<p>When the pilots of Air France 447 were struggling to control their airplane, falling ten thousand feet per minute through a black sky, pilot David Robert exclaimed in desperation, \u201cWe lost all control of the airplane, we don\u2019t understand anything, we\u2019ve tried everything!\u201d At that moment, in a tragic irony, they were actually flying a perfectly good airplane \u2026 Yet the combination of startle, confusion, at least nineteen warning and caution messages, inconsistent information, and lack of recent experience hand-flying the aircraft led the crew to enter a dangerous stall. Recovery was possible, using the old technique for unreliable airspeed\u2014lower the pitch angle of the nose, keep the wings level, and the airplane will fly as predicted\u2014but the crew could not make sense of the situation to see their way out of it. The accident report called it \u201ctotal loss of cognitive control of the situation.\u201d<\/p>\n<\/blockquote>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This wrenching and ultimately fatal sequence of events puts two design failures in sharp relief. One is that the autopilot was a poor collaboration tool. It eliminated the need for human expertise during routine flying. 
But when expert judgment was most needed, the autopilot abruptly handed control back to the startled crew, and flooded the zone with urgent, confusing warnings. The autopilot was a great automation tool\u2014until it wasn\u2019t, when it offered the crew no useful support. It was designed for automation, not for collaboration.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The second failure, Mindell argued, was that the pilots were out of practice. No surprise: The autopilot was beguilingly good. Human expertise has a limited shelf life. When machines provide automation, human attention wanders and <a data-event-element=\"inline link\" href=\"https:\/\/journals.sagepub.com\/doi\/abs\/10.1177\/1555343420962897\" target=\"_blank\" rel=\"noopener\">capabilities decay<\/a>. This poses no problem if the automation works flawlessly or if its failure (perhaps due to something as mundane as a power outage) doesn\u2019t create a real-time emergency requiring human intervention. But if human experts are the last fail-safe against catastrophic failure of an automated system\u2014as is currently true in aviation\u2014then we need to vigilantly ensure that humans attain and maintain expertise.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Modern airplanes have another cockpit navigation aid, one that is less well known than the autopilot: the heads-up display. The HUD is a pure collaboration tool, a transparent LCD screen that superimposes flight data in the pilot\u2019s line of sight. It does not even pretend to fly the aircraft, but it assists the pilot by visually integrating everything that the flight computer digests about the plane\u2019s direction, pitch, power, and airspeed into a single graphic called the flight-path vector. Absent a HUD, a pilot must read multiple flight instruments to intuitively stitch this picture together. 
The HUD is akin to the navigation app on your smartphone\u2014if that app also had night vision, speed sensors, and intimate knowledge of your car\u2019s engine and brakes.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The HUD is still a piece of complex software, meaning it can fail. But because it is built to collaborate and not to automate, the pilot continually maintains and gains expertise while flying with it\u2014which, to be clear, is typically not the whole flight but crucial moments such as low-visibility takeoff, approach, and landing. If the HUD reboots or locks up during a landing, there is no abrupt handoff; the pilot\u2019s hands are already on the control yoke. Although HUDs offer less automation than automatic landing systems, airlines have discovered that their planes suffer fewer costly tail strikes and tire blowouts when pilots use HUDs rather than auto-landers. Perhaps for this reason, HUDs are <a data-event-element=\"inline link\" href=\"https:\/\/www.aviationtoday.com\/2024\/10\/17\/heads-up-display-hud-avionics-systems-increasingly-prevalent-in-cockpits\/\" target=\"_blank\" rel=\"noopener\">integrated<\/a> into newer commercial aircraft.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Collaboration is not intrinsically better than automation. It would be ridiculous to collaborate with your car\u2019s transmission or to pilot your office elevator from floor to floor. 
But in some domains, occupations, or tasks where full automation is not currently achievable, where human expertise remains indispensable or a necessary fail-safe, tools should be designed to collaborate\u2014to amplify human expertise, not to keep it on ice until the last possible moment.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">One thing that our tools have not historically done for us is make expert decisions. Expert decisions are high-stakes, one-off choices where the single right answer is not clear\u2014often not knowable\u2014but the quality of the decision matters. There is no single best way, for example, to care for a cancer patient, write a legal brief, remodel a kitchen, or develop a lesson plan. But the skill, judgment, and ingenuity of human decision making determine outcomes in many of these tasks, sometimes dramatically so. Making the right call means exercising expert judgment, which means more than just following the rules. Expert judgment is needed precisely where the rules are not enough, where creativity, ingenuity, and educated guesses are essential.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But we should not be too impressed by expertise: Even the best experts are fallible, inconsistent, and expensive. Patients receiving surgery on Fridays <a data-event-element=\"inline link\" href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2830842\" target=\"_blank\" rel=\"noopener\">fare worse<\/a> than those treated on other days of the week, and <a data-event-element=\"inline link\" href=\"https:\/\/academic.oup.com\/qje\/article\/140\/2\/943\/7925870\" target=\"_blank\" rel=\"noopener\">standardized test takers<\/a> are more likely to flub equally easy questions if they appear later on a test. Of course, most experts are far from the best in their fields. 
And experts of all skill levels may be unevenly distributed or simply unavailable\u2014a shortage that is more acute in less affluent communities and lower-income countries.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Expertise is also slow and costly to acquire, requiring immersion, mentoring, and tons of practice. Medical doctors\u2014radiologists included\u2014spend at least four years apprenticing as residents; electricians spend four years as apprentices and then another couple as journeymen before certifying as master electricians; law-school grads start as junior associates, and new Ph.D.s begin as assistant professors; pilots must log at least 1,500 hours of flight before they can apply for an Airline Transport Pilot license.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The inescapable fact that human expertise is scarce, imperfect, and perishable makes the advent of ubiquitous AI an unprecedented opportunity. AI is the first machine humanity has devised that can make high-stakes, one-off expert decisions at scale\u2014in diagnosing patients, developing lesson plans, redesigning kitchens. AI\u2019s capabilities in this regard, while not perfect, have consistently been improving year by year.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">What makes AI such a potent collaborator is that it is not like us. A modern AI system can ingest thousands of medical journals, millions of legal filings, or decades of maintenance logs. This allows it to surface patterns and keep up with the latest developments in health care, law, or vehicle maintenance that would elude most humans. It offers breadth of experience that crosses domains and the capacity to recognize subtle patterns, interpolate among facts, and make new predictions. 
For example, Google DeepMind\u2019s AlphaFold AI overcame a central challenge in structural biology that has confounded scientists for decades: predicting the labyrinthine folded structures of proteins. This accomplishment is so significant that its designers, Demis Hassabis and John Jumper, colleagues of one of us, were awarded the <a data-event-element=\"inline link\" href=\"https:\/\/www.nobelprize.org\/prizes\/chemistry\/2024\/press-release\/\" target=\"_blank\" rel=\"noopener\">Nobel Prize in Chemistry last year<\/a> for their work.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The question is not whether AI can do things that experts cannot do on their own\u2014it can. Yet expert humans often bring something that today\u2019s AI models cannot: situational context, tacit knowledge, ethical intuition, emotional intelligence, and the ability to weigh consequences that fall outside the data. Putting the two together typically <a data-event-element=\"inline link\" href=\"https:\/\/academic.oup.com\/jeea\/article-abstract\/23\/4\/1203\/8175003\" target=\"_blank\" rel=\"noopener\">amplifies<\/a> human expertise: Oncologists can ask a model to flag every recorded case of a rare mutation and then apply clinical judgment to design a bespoke treatment; a software architect can have the model retrieve dozens of edge-case vulnerabilities and then decide which security patch best fits the company\u2019s needs. The value is not in substituting one expert for another, or in outsourcing fully to the machine, or indeed in presuming that human expertise will always be superior, but in leveraging human and rapidly evolving machine capabilities to achieve the best results.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">As AI\u2019s facility in expert judgment becomes more reliable, capable, and accessible in the years ahead, it will emerge as a near-ubiquitous presence in our lives. 
Using it well will require knowing when to automate versus when to collaborate. This is not necessarily a binary choice, and the boundaries between human expertise and AI\u2019s capabilities for expert judgment will continually evolve as the technology advances. AI already collaborates with human drivers today, provides autonomous taxi services in some cities, and may eventually relieve us of the burden and risk of driving altogether\u2014so that the driver\u2019s license can go the way of the manual transmission. Although collaboration is not intrinsically better than automation, premature or excess automation\u2014that is, automation that takes on entire jobs when it\u2019s ready for only a subset of job tasks\u2014is generally worse than collaboration.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The temptation toward excess automation has always been with us. In 1984, General Motors opened its \u201cfactory of the future\u201d in Saginaw, Michigan. President Ronald Reagan delivered the dedication speech. The vision, as MIT\u2019s Ben Armstrong and Julie Shah wrote in Harvard Business Review in 2023, was that robots would be \u201cso effective that people would be scarce\u2014it wouldn\u2019t even be necessary to turn on the lights.\u201d But things did not go as planned. The robots \u201cstruggled to distinguish one car model from another: They tried to affix Buick bumpers to Cadillacs, and vice versa,\u201d Armstrong and Shah wrote. \u201cThe robots were bad painters, too; they spray-painted one another rather than the cars coming down the line. GM shut the Saginaw plant in 1992.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">There has been much progress in robotics since then, but the advent of AI invites automation hubris to an unprecedented degree. 
Starting from the premise that AI has already attained superhuman capabilities, it is tempting to think that it must be able to do everything that experts do, minus the experts. Many people have therefore adopted an automation mindset, in their desire either to evangelize AI or to warn against it. To them, the future goes like this: AI replicates expert capabilities, overtakes the experts, and finally replaces them altogether. Rather than performing valuable tasks expertly, AI makes experts irrelevant.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Research on people\u2019s use of AI makes the downsides of this automation mindset ever more apparent. For example, while experts use chatbots as collaboration tools\u2014riffing on ideas, clarifying intuitions\u2014novices often treat them mistakenly as automation tools, oracles that speak from a bottomless well of knowledge. That becomes a problem when an AI chatbot confidently provides information that is misleading, speculative, or simply false. Because current AIs don\u2019t understand what they don\u2019t understand, those lacking the expertise to identify flawed reasoning and outright errors may be led astray.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">The seduction of cognitive automation helps explain a worrying pattern: AI tools can boost the productivity of experts but may also actively mislead novices in expertise-heavy fields such as <a data-event-element=\"inline link\" href=\"https:\/\/dho.stanford.edu\/wp-content\/uploads\/Legal_RAG_Hallucinations.pdf\" target=\"_blank\" rel=\"noopener\">legal services<\/a>. Novices struggle to spot inaccuracies and lack efficient methods for validating AI outputs. 
And methodically fact-checking every AI suggestion can negate any time savings.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Beyond the risk of errors, there is some early evidence that overreliance on AI can impede the development of critical thinking, or <a data-event-element=\"inline link\" href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5104064\" target=\"_blank\" rel=\"noopener\">inhibit learning<\/a>. <a data-event-element=\"inline link\" href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5082524\" target=\"_blank\" rel=\"noopener\">Studies<\/a> suggest a negative correlation between frequent AI use and critical-thinking skills, likely due to increased \u201ccognitive offloading\u201d\u2014letting the AI do the thinking. In high-stakes environments, this tendency toward overreliance is particularly dangerous: Users may accept incorrect AI suggestions, especially if delivered with apparent confidence.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The rise of highly capable assistive AI tools also risks disrupting traditional pathways for developing expertise that is still clearly needed now and will be for the foreseeable future. When AI systems can perform tasks previously assigned to research assistants, <a data-event-element=\"inline link\" href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/0001839217751692\" target=\"_blank\" rel=\"noopener\">surgical residents<\/a>, and pilots, the opportunities for apprenticeship and learning-by-doing disappear. This threatens the future talent pipeline, as most occupations rely on experiential learning\u2014like the radiology residencies discussed above.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Early field evidence hints at the value of getting this right. 
In a <a data-event-element=\"inline link\" href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2426153122\" target=\"_blank\" rel=\"noopener\">PNAS<\/a> study published earlier this year and covering 2,133 \u201cmystery\u201d medical cases, researchers ran three head-to-head trials: doctors diagnosing on their own, five leading AI models diagnosing on their own, and then doctors reviewing the AI suggestions before giving a final answer. That human-plus-AI pair proved most accurate, correct on roughly 85 percent more cases than physicians working solo and 15 to 20 percent more than an AI alone. The gain came from complementary strengths: When the model missed a clue, the clinician usually spotted it, and when the clinician slipped, the model filled the gap. The researchers engineered human-AI complementarity into the design of the trials, and saw results. As these tools evolve, we believe they will surely take on autonomous diagnostic tasks, such as triaging patients and ordering further testing\u2014and may indeed do better over time on their own, as some early studies <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2025\/02\/02\/opinion\/ai-doctors-medicine.html?unlocked_article_code=1.t04.AeZg.kT0qka6kerAi&amp;smid=url-share\" target=\"_blank\" rel=\"noopener\">suggest<\/a>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Or, consider an example with which one of us is closely familiar: Google\u2019s Articulate Medical Intelligence Explorer (AMIE) is an AI system built to assist physicians. AMIE conducts multi-turn chats that mirror a real primary-care visit: It asks follow-up questions when it is unsure, explains its reasoning, and adjusts its line of inquiry as new information emerges. 
In a blinded study recently published in <a data-event-element=\"inline link\" href=\"https:\/\/www.nature.com\/articles\/s41586-025-08866-7\" target=\"_blank\" rel=\"noopener\">Nature<\/a>, specialist physicians compared the performance of a primary-care doctor working alone with that of a doctor who collaborated with AMIE. The doctor who used AMIE ranked higher on 30 of 32 clinical-communication and diagnostic axes, including empathy and clarity of explanations.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">By exposing its reasoning, highlighting uncertainty, and grounding advice in trusted sources, AMIE pulls the user into an active problem-solving loop instead of handing down answers from on high. Doctors can potentially interrogate and correct it in real time, reinforcing (rather than eroding) their own diagnostic skills. These results are preliminary: AMIE is still a research prototype and not a drop-in replacement. But its design principles suggest a path toward meaningful human collaboration with AI.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Full automation is much harder than collaboration. To be useful, an automation tool must deliver near-flawless performance almost all of the time. You wouldn\u2019t tolerate an automatic transmission that sporadically failed to shift gears, an elevator that regularly got stuck between floors, or an electronic tollbooth that occasionally overcharged you by $10,000.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">By contrast, a collaboration tool doesn\u2019t need to be anywhere close to infallible to be useful. A doctor with a stethoscope can better understand a patient than the same doctor without one; a contractor can build a squarer house frame with a laser level than by line of sight. These tools don\u2019t need to work flawlessly, because they don\u2019t promise to replace the expertise of their user. 
They make experts better at what they do\u2014and extend their expertise to places it couldn\u2019t go unassisted.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Designing for collaboration means designing for <a data-event-element=\"inline link\" href=\"https:\/\/www.digitalistpapers.com\/essays\/getting-ai-right\" target=\"_blank\" rel=\"noopener\">complementarity<\/a>. AI\u2019s comparative advantages (near limitless learning capacity, rapid inference, round-the-clock availability) should slot into the gaps where human experts tend to struggle: remembering every precedent, canvassing every edge case, or drawing connections across disciplines. At the same time, interface design must leave space for distinctly human strengths: contextual nuance, moral reasoning, creativity, and a broad grasp of how accomplishing specific tasks achieves broader goals.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Both AI skeptics and AI evangelists agree that AI will prove a transformative technology\u2014indeed, this transformation is already under way. The right question, then, is not whether but how we should use AI. Should we go all in on automation? Should we build collaborative AI that learns from our choices, informs our decisions, and partners with us to drive better results? The correct answer, of course, is both. Getting this balance right across capabilities is a formidable and ever-evolving challenge. Fortunately, the principles and techniques for using AI collaboratively are now emerging. We have a canyon to cross. We should choose our routes wisely.<\/p>\n","protected":false},"excerpt":{"rendered":"No one doubts that our future will feature more automation than our past or present. 
The question is&hellip;\n","protected":false},"author":2,"featured_media":371207,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,53,16,15],"class_list":{"0":"post-371206","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-uk","12":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115086989937403193","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/371206","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=371206"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/371206\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/371207"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=371206"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=371206"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=371206"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}