{"id":334938,"date":"2025-10-27T00:20:19","date_gmt":"2025-10-27T00:20:19","guid":{"rendered":"https:\/\/www.europesays.com\/us\/334938\/"},"modified":"2025-10-27T00:20:19","modified_gmt":"2025-10-27T00:20:19","slug":"the-age-of-de-skilling-the-atlantic","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/334938\/","title":{"rendered":"The Age of De-Skilling &#8211; The Atlantic"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">The fretting has swelled from a murmur to a clamor, all variations on the same foreboding theme: \u201c<a data-event-element=\"inline link\" href=\"https:\/\/www.media.mit.edu\/publications\/your-brain-on-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">Your Brain on ChatGPT<\/a>.\u201d \u201c<a data-event-element=\"inline link\" href=\"https:\/\/www.forbes.com\/sites\/dimitarmixmihov\/2025\/02\/11\/ai-is-making-you-dumber-microsoft-researchers-say\/\" rel=\"nofollow noopener\" target=\"_blank\">AI Is Making You Dumber<\/a>.\u201d \u201c<a data-event-element=\"inline link\" href=\"https:\/\/www.edtechdigest.com\/2025\/05\/27\/ai-is-killing-critical-thinking-but-it-doesnt-have-to-be-that-way\/\" rel=\"nofollow noopener\" target=\"_blank\">AI Is Killing Critical Thinking<\/a>.\u201d Once, the fear was of a runaway intelligence that would wipe us out, maybe while turning the planet into a paper-clip factory. Now that chatbots are going the way of Google\u2014moving from the miraculous to the taken-for-granted\u2014the anxiety has shifted, too, from apocalypse to atrophy. Teachers, especially, say they\u2019re beginning to see the rot. The term for it is unlovely but not inapt: de-skilling.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The worry is far from fanciful. 
Kids who turn to Gemini to summarize Twelfth Night may never learn to wrestle with Shakespeare on their own. Aspiring lawyers who use Harvey AI for legal analysis may fail to develop the interpretive muscle their predecessors took for granted. In a recent study, several hundred U.K. participants were given a standard critical-thinking test and were interviewed about their AI use for finding information or making decisions. Younger users leaned more on the technology, and scored lower on the test. \u201cUse it or lose it\u201d was the basic takeaway. Another study looked at physicians performing colonoscopies: After three months of using an AI system to help flag polyps, they became less adept at spotting them unaided.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But the real puzzle isn\u2019t whether de-skilling exists\u2014it plainly does\u2014but rather what kind of thing it is. Are all forms of de-skilling corrosive? Or are there kinds that we can live with, that might even be welcome? De-skilling is a catchall term for losses of very different kinds: some costly, some trivial, some oddly generative. To grasp what\u2019s at stake, we have to look closely at the ways that skill frays, fades, or mutates when new technologies arrive.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">Our chatbots are new: The \u201ctransformer\u201d architecture they rely on was invented in 2017, and ChatGPT made its public debut just five years later. But the fear that a new technology might blunt the mind is ancient. In the Phaedrus, which dates to the fourth century B.C.E., Socrates recounts a myth in which the Egyptian god Thoth offers King Thamus the gift of writing\u2014\u201ca recipe for memory and for wisdom.\u201d Thamus is unmoved. 
Writing, he warns, will do the opposite: It will breed forgetfulness, letting people trade the labor of recollection for marks on papyrus, mistaking the appearance of understanding for the thing itself. Socrates sides with Thamus. Written words, he complains, never answer your particular questions; reply to everyone the same way, sage and fool alike; and are helpless when they\u2019re misunderstood.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Of course, the reason we know all this\u2014the reason the episode keeps turning up in Whiggish histories of technology\u2014is that Plato wrote it down. Yet the critics of writing weren\u2019t entirely wrong. In oral cultures, bards carried epics in their heads; griots could reel off centuries of genealogy on demand. Writing made such prowess unnecessary. You could now take in ideas without wrestling with them. Dialogue demands replies: clarification, objection, revision. (Sometimes \u201cVery true, Socrates\u201d did the trick, but still.) Reading, by contrast, lets you bask in another\u2019s brilliance, nodding along without ever testing yourself against it.<\/p>\n<p id=\"injected-recirculation-link-0\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 1\" data-event-element=\"injected link\" data-event-position=\"1\"><a href=\"https:\/\/www.theatlantic.com\/technology\/2025\/09\/if-anyone-builds-it-excerpt\/684213\/\" rel=\"nofollow noopener\" target=\"_blank\">Eliezer Yudkowsky and Nate Soares: AI is grown, not built<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">What looks like a loss from one angle, though, can look like a gain from another. Writing opened new mental territories: commentary, jurisprudence, reliable history, science. Walter J. Ong, the scholar of orality and literacy, put it crisply: \u201cWriting is a technology that restructures thought.\u201d The pattern is familiar. 
When sailors began using sextants, they left behind the seafarer\u2019s skycraft, the detailed reading of stars that once steered them safely home. Later, satellite navigation brought an end to sextant skills. Owning a Model T once meant moonlighting as a mechanic\u2014knowing how to patch tubes, set ignition timing by ear, coax the car\u2019s engine back to life after a stall. Today\u2019s highly reliable engines seal off their secrets. Slide rules yielded to calculators, calculators to computers. Each time, individual virtuosity waned, but overall performance advanced.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">It\u2019s a reassuring pattern\u2014something let go, something else acquired. But some gains come with deeper costs. They unsettle not only what people can do but also who they feel themselves to be.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In the 1980s, the social psychologist Shoshana Zuboff spent time at pulp mills in the southern United States as they shifted from manual to computerized control. Operators who had once judged pulp by touch (\u201cIs it slick? Is it sticky?\u201d) now sat in air-conditioned rooms watching numbers scroll across screens, their old skills unexercised and unvalued. \u201cDoing my job through the computer, it feels different,\u201d one told Zuboff. \u201cIt is like you\u2019re riding a big, powerful horse, but someone is sitting behind you on the saddle holding the reins.\u201d The new system was faster, cleaner, safer; it also drained the work of its meaning.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The sociologist Richard Sennett recorded a similar transformation at a Boston bakery. 
In the 1970s, the workers there were Greek men who used their noses and eyes to judge when the bread was ready and took pride in their craft; in the 1990s, their successors interacted with a touch screen on a Windows-style controller. Bread became a screen icon\u2014its color inferred from data, its variety chosen from a digital menu. The thinning of skills brought a thinning of identity. The bread was still good, but the kitchen workers knew they weren\u2019t really bakers anymore. One told Sennett, half-joking, \u201cBaking, shoemaking, printing\u2014you name it, I\u2019ve got the skills.\u201d She meant that she didn\u2019t really need any.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The cultural realm, certainly, has had a long retreat from touch. In the middle-class homes of 19th-century Europe, to love music usually meant to play it. Symphonies reached the parlor not by stereo but by piano reduction\u2014four hands, one keyboard, Brahms\u2019s Symphony No. 1 conjured as best the household could manage. It took skill: reading notation, mastering technique, evoking an orchestra through your fingers. To hear the music you wanted, you had to practice.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Then the gramophone took off, and the parlor pianos started to gather dust. The gains were obvious: You could summon the orchestra itself into your living room, expand your ear from salon trifles to Debussy, Strauss, Sibelius. The modern music lover may have been less of a performer but, in a sense, more of a listener. Still, breadth came at the expense of depth. Practicing a piece left you with an intimate feel for its seams and contours. Did your kid with the shiny Victrola get that?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">That sense of estrangement\u2014of being a step removed from the real thing\u2014shows up whenever a powerful new tool arrives. 
The slide rule, starting in the 17th century, reduced the need for expertise at mental math; centuries later, the pocket calculator stirred unease among some engineers, who feared the fading of number sense. Such worries weren\u2019t groundless. Pressing \u201cCos\u201d on a keypad got you a number, but the meaning behind it could slip away. Even in more rarefied precincts, the worry persisted. The MIT physicist Victor Weisskopf was <a data-event-element=\"inline link\" href=\"https:\/\/www.mit.edu\/people\/sturkle\/pdfsforstwebpage\/ST_Seeing%20thru%20computers.pdf\" rel=\"nofollow noopener\" target=\"_blank\">troubled<\/a> by his colleagues\u2019 growing reliance on computer simulations. \u201cThe computer understands the answer,\u201d he told them when they handed him their printouts, \u201cbut I don\u2019t think you understand the answer.\u201d It was the disquiet of an Egyptian king, digital edition, convinced that output was being mistaken for insight.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">In what Zuboff called \u201cthe age of the smart machine,\u201d automation was mainly confined to the workplace\u2014the mill, the industrial bakery, the cockpit. In the age of the PC and then the web, technology escaped into the home, becoming general purpose, woven into everyday life. By the 2000s, researchers were already asking what search engines were doing to us. You\u2019d see headlines such as \u201c<a data-event-element=\"inline link\" href=\"https:\/\/spectrum.ieee.org\/this-is-your-brain-on-google-2650251257\" rel=\"nofollow noopener\" target=\"_blank\">This Is Your Brain on Google<\/a>.\u201d Although the panic was overplayed, some effects were real. 
A <a data-event-element=\"inline link\" href=\"https:\/\/scholar.harvard.edu\/files\/dwegner\/files\/sparrow_et_al._2011.pdf\" rel=\"nofollow noopener\" target=\"_blank\">widely cited study<\/a> found that, in certain circumstances, people would remember where a fact could be found rather than the fact itself.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In truth, human cognition has always leaked beyond the skull\u2014into instruments, symbols, and one another. (Think of the couples you know: One person remembers birthdays, the other where the passports live.) From the time of tally bones to the era of clay tablets, we\u2019ve been storing thought in the world for tens of millennia. Plenty of creatures use tools, but their know-how dies with them; ours accumulates as culture\u2014a relay system for intelligence. We inherit it, extend it, and build upon it, so that each generation can climb higher than the last: moving from pressure-flaked blades to bone needles, to printing presses, to quantum computing. This compounding of insight\u2014externalized, preserved, shared\u2014is what sets Homo sapiens apart. Bonobos live in the ecological present. We live in history.<\/p>\n<p id=\"injected-recirculation-link-1\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 2\" data-event-element=\"injected link\" data-event-position=\"2\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2016\/07\/the-six-main-arcs-in-storytelling-identified-by-a-computer\/490733\/\" rel=\"nofollow noopener\" target=\"_blank\">Adrienne LaFrance: The six main arcs in storytelling, as identified by an A.I.<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Accumulation, meanwhile, has a critical consequence: It drives specialization. As knowledge expands, it no longer resides equally in every head. 
In small bands, anyone could track game, gather plants, and make fire. But as societies scaled up after the agrarian revolution, crafts and guilds proliferated\u2014toolmakers who could forge an edge that held, masons who knew how to keep a vault from collapsing, glassblowers who refined closely guarded recipes and techniques. Skills once lodged in the body moved into tools and rose into institutions. Over time, the division of labor became, inevitably, a division of cognitive labor.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The philosopher Hilary Putnam once remarked that he could use the word elm even though he couldn\u2019t tell an elm from a beech. Reference is social: You can talk about elms because others in your language community\u2014botanists, gardeners, foresters\u2014can identify them. What\u2019s true of language is true of knowledge. Human capability resides not solely in individuals but in the networks they form, each of us depending on others to fill in what we can\u2019t supply ourselves. Scale turned social exchange into systemic interdependence.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The result is a world in which, in a classic <a data-event-element=\"inline link\" href=\"https:\/\/en.wikisource.org\/wiki\/I,_Pencil\" rel=\"nofollow noopener\" target=\"_blank\">example<\/a>, nobody knows how to make a pencil. An individual would need the skills of foresters, saw millers, miners, chemists, lacquerers\u2014an invisible network of crafts behind even the simplest object. Mark Twain, in A Connecticut Yankee in King Arthur\u2019s Court, imagined a 19th-century engineer dropped into Camelot dazzling his hosts with modern wonders. Readers went with it. But drop his 21st-century counterpart into the same setting, and he\u2019d be helpless. Manufacture insulated wire? Mix a batch of dynamite? Build a telegraph from scratch? 
Most of us would be stymied once we failed to get onto the Wi-Fi.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">The cognitive division of labor is now so advanced that two physicists may barely understand each other\u2014one modeling dark matter, the other building quantum sensors. Scientific mastery now means knowing more and more about less and less. This concentration yields astonishing progress, but it also means grasping how limited our competence is: Specialists inherit conceptual tools they can use but can no longer make. Even mathematics, long romanticized as the realm of the solitary genius, now works like this. When Andrew Wiles proved Fermat\u2019s Last Theorem, he didn\u2019t re-derive every lemma himself; he assembled results that he trusted but didn\u2019t personally reproduce, building a structure he could see whole even if he hadn\u2019t cut each beam.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The widening of collaboration has changed what it means to know something. Knowledge, once imagined as a possession, has become a relation\u2014a matter of how well we can locate, interpret, and synthesize what others know. We live inside a web of distributed intelligence, dependent on specialists, databases, and instruments to extend our reach. The scale tells the story: The Nature paper that announced the structure of DNA had two authors; a Nature paper in genomics today might have 40. The two papers announcing the Higgs boson? Thousands. Big science is big for a reason. 
It was only a matter of time before the network acquired a new participant\u2014one that could not just store information but imitate understanding itself.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The old distinction between information and skill, between \u201cknowing that\u201d and \u201cknowing how,\u201d has grown blurry in the era of large language models. In one sense, these models are static: a frozen matrix of weights you could download to your laptop. In another, they\u2019re dynamic; once running, they generate responses on the fly. They do what Socrates complained writing could not: They answer questions, adjust to an interlocutor, carry on a conversation. (Sometimes even with themselves; when they feed their own outputs back as inputs, AI researchers call it \u201creasoning.\u201d) It wasn\u2019t hard to imagine Google as an extension of memory; a large language model feels, to many, more like a stand-in for the mind itself. In harnessing new forms of artificial intelligence, is our own intelligence being amplified\u2014or is it the artificial kind that, on little cat feet, is coming into its own?<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">We can\u2019t put the genie back in the bottle; we can decide what spells to have it cast. When people talk about de-skilling, they usually picture an individual who\u2019s lost a knack for something\u2014the pilot whose hand-flying gets rusty, the doctor who misses tumors without an AI assist. But most modern work is collaborative, and the arrival of AI hasn\u2019t changed that. 
The issue isn\u2019t how humans compare to bots but how humans who use bots compare to those who don\u2019t.<\/p>\n<p id=\"injected-recirculation-link-2\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 3\" data-event-element=\"injected link\" data-event-position=\"3\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/09\/youtube-ai-training-data-sets\/684116\/\" rel=\"nofollow noopener\" target=\"_blank\">Alex Reisner: AI is coming for YouTube creators<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Some people fear that reliance on AI will make us worse in ways that will swamp its promised benefits. Whereas Dario Amodei, the CEO of Anthropic, sanguinely imagines a \u201ccountry of geniuses,\u201d they foresee a country of idiots. It\u2019s an echo of the old debate over \u201crisk compensation\u201d: Add seatbelts or antilock brakes, some social scientists argued a few decades ago, and people will simply drive more recklessly, their tech-boosted confidence leading them to spend the safety margin. Research eventually showed a more encouraging result: People do adjust, but only partially, so that substantial benefits remain.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Something similar has seemed to hold for the clinical use of AI, which has been common in hospitals for more than a decade. Think back to that colonoscopy study: After performing AI-assisted procedures, gastroenterologists saw their unaided rate of polyp detection drop by six percentage points. But when another <a data-event-element=\"inline link\" href=\"https:\/\/www.giejournal.org\/article\/S0016-5107(24)03471-0\/fulltext\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a> pooled data from 24,000 patients, a fuller picture emerged: AI assistance raised overall detection rates by roughly 20 percent. 
(The AI here was an expert system\u2014a narrow, reliable form of machine learning, not the generative kind that powers chatbots.) Because higher detection rates mean fewer missed cancers, this \u201ccentaur\u201d approach was plainly beneficial, regardless of whether individual clinicians became fractionally less sharp. If the collaboration is saving lives, gastroenterologists would be irresponsible to insist on flying solo out of pride.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In other domains, the more skillful the person, the more skillful the collaboration\u2014or so some recent studies suggest. One of them found that humans outperformed bots when sorting images of two kinds of wrens and two kinds of woodpeckers. But when the task was spotting fake hotel reviews, the bots won. (Game recognizes game, I guess.) Then the researchers paired people with the bots, letting the humans make judgments informed by the machine\u2019s suggestions. The outcome depended on the task. Where human intuition was weak, as with the hotel reviews, people second-guessed the bot too much and dragged the results down. Where their intuitions were good, they seemed to work in concert with the machine, trusting their own judgment when they were sure of it and realizing when the system had caught something they\u2019d missed. With the birds, the duo of human and bot beat either alone.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The same logic holds elsewhere: Once a machine enters the workflow, mastery may shift from production to appraisal. A 2024 study of coders using GitHub Copilot found that AI use seemed to redirect human skill rather than obviate it. Coders spent less time generating code and more time assessing it\u2014checking for logic errors, catching edge cases, cleaning up the script. 
The skill migrated from composition to supervision.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">That, more and more, is what \u201chumans in the loop\u201d has to mean. Expertise shifts from producing the first draft to editing it, from speed to judgment. Generative AI is a probabilistic system, not a deterministic one; it returns likelihoods, not truth. When the stakes are real, skilled human agents have to remain accountable for the call\u2014noticing when the model has drifted from reality, and treating its output as a hypothesis to test, not an answer to obey. It\u2019s an emergent skill, and a critical one. The future of expertise will depend not just on how good our tools are but on how well we think alongside them.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">But collaboration presupposes competence. A centaur goes in circles if the human half doesn\u2019t know what it\u2019s doing. That\u2019s where the panic over pedagogy comes in. You can\u2019t become de-skilled if you were never skilled in the first place. And how do you inculcate basic competencies in an age when the world\u2019s best homework machine snuggles into every student\u2019s pocket?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Those of us who teach have a lot of homework of our own to do. Our old standbys need a rebuild; in the past couple of years, too many college kids have, in an unsettling phrase, ended up \u201cmajoring in ChatGPT.\u201d Yet it\u2019s too soon to pronounce confidently what the overall pedagogical effect of AI will be. Yes, AI can dull some edges. Used well, it can also sharpen them.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Consider a recent randomized trial in a large Harvard physics course. 
Half of the students learned two lessons in the traditional \u201cbest practice\u201d mode: an active, hands-on class led by a skilled instructor. The other half used a custom-built AI tutor. Then they switched. In both rounds, the AI-tutored students came out ahead\u2014by a lot. They didn\u2019t just learn more. They worked faster, too, and reported feeling more motivated and engaged. The system had been designed to behave like a good coach: showing you how to break big problems into smaller ones, offering hints instead of blurting out answers, titrating feedback and adjusting to each student\u2019s pace.<\/p>\n<p id=\"injected-recirculation-link-3\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 4\" data-event-element=\"injected link\" data-event-position=\"4\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-mass-delusion-event\/683909\/\" rel=\"nofollow noopener\" target=\"_blank\">Charlie Warzel: AI is a mass-delusion event<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">That\u2019s what made the old-style tutorial system powerful: attention. I remember my first weeks at Cambridge University, sitting one-on-one with my biochemistry tutor. When I said, \u201cI sort of get it,\u201d he pressed until we were both sure that I did. That targeted focus was the essence of a Cambridge supervision. If custom-fitted in the right way, large language models promise to mass-produce that kind of attention\u2014not the cardigan, not the burnished briar, not the pensive moue, but the steady, responsive pressure that turns confusion into competence.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Machines won\u2019t replace mentors. What they promise to do is handle the routine parts of tutoring\u2014checking algebra, drilling lemmas, reminding students to write the units, and making sure they grasp how membrane channels work. 
This, in theory, can free the teacher to focus on other things that matter: explaining the big ideas, pushing for elegance, talking about careers, noticing when a student is burning out.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">That\u2019s the upbeat scenario, anyway. We should be cautious about generalizing from one study. (A <a data-event-element=\"inline link\" href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=4895486\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a> of Turkish high-school students found no real gains from the use of a tutor bot.) And we should be mindful that those physics students put their tutor bot to good use because they had in-class exams to face\u2014a proctor, a stopwatch, a grader\u2019s cold eye.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We should also be mindful that what works for STEM courses may not work for the humanities. The term paper, for all its tedium, teaches a discipline that\u2019s hard to reproduce in conversation: building an argument step by step, weighing evidence, organizing material, honing a voice. Some of us who teach undergrads have started telling ambitious students that if they write a paper, we\u2019ll read and discuss it with them, but it won\u2019t count toward their grade. That\u2019s a salve, not a solution. In a curious cultural rewind, orality may have to carry more of the load. Will Socrates, dialogue\u2019s great defender, have the last word after all?<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">Erosive de-skilling remains a prospect that can\u2019t be wished away: the steady atrophy of basic cognitive or perceptual capacities through overreliance on tools, with no compensating gain. Such deficits can deplete a system\u2019s reserves\u2014abilities you seldom need but must have when things go wrong. 
Without them, resilience falters and fragility creeps in. Think of the airline pilot who spends thousands of hours supervising the autopilot but freezes when the system does. Some automation theorists distinguish between \u201chumans in the loop,\u201d who stay actively engaged, and \u201chumans on the loop,\u201d who merely sign off after a machine has done the work. The second, poorly managed, produces what the industrial psychologist Lisanne Bainbridge long ago warned of: role confusion, diminished awareness, fading readiness. Like a lifeguard who spends most days watching capable swimmers in calm water, such human supervisors rarely need to act\u2014but when they do, they must act fast, and deftly.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The same dynamic shadows office work of every kind. When lawyers, project managers, and analysts spend months approving what the system has already drafted or inferred, they become \u201con the loop,\u201d and out of practice. It\u2019s the paradox of partial automation\u2014the better the system performs, the less people have to stay sharp, and the less prepared they are for the rare moments when performance fails. The remedy probably lies in institutional design. For example, a workplace could stage regular drills\u2014akin to a pilot\u2019s recurrent flight-simulator training\u2014in which people must challenge the machine and ensure that their capacities for genuine judgment haven\u2019t decayed in the long stretches of smooth flight.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Reserve skills, in many cases, don\u2019t need to be universal; they just need to exist somewhere in the system, as with those elm experts. That\u2019s why the Naval Academy, alarmed by the prospect of GPS jamming, brought back basic celestial-navigation training after years of neglect. 
Most sailors will never touch a sextant on the high seas, but if a few of them acquire proficiency, they may be enough to steady a fleet if the satellites go dark. The goal is to ensure that at least some embodied competence survives, so that when a system stumbles, the human can still stand\u2014or at least stay afloat.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The most troubling prospect of all is what might be called constitutive de-skilling: the erosion of the capacities that make us human in the first place. Judgment, imagination, empathy, the feel for meaning and proportion\u2014these aren\u2019t backups; they\u2019re daily practices. If, in Jean-Paul Sartre\u2019s fearful formulation, we were to become \u201cthe machine\u2019s machine,\u201d the loss would show up in the texture of ordinary life. What might vanish is the tacit, embodied knowledge that underwrites our everyday discernment. If people were to learn to frame questions the way the system prefers them, to choose from its menu of plausible replies, the damage wouldn\u2019t take the form of spectacular failures of judgment so much as a gradual attenuation of our character: shallower conversation, a reduced appetite for ambiguity, a drift toward automatic phrasing where once we would have searched for the right word, the quiet substitution of fluency for understanding. To offload those faculties would be, in effect, to offload ourselves. Losing them wouldn\u2019t simply change how we work; it would change who we are.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">Most forms of de-skilling, if you take the long view, are benign. Some skills became obsolete because the infrastructure that sustained them also had. 
Telegraphy required fluency in dots and dashes; linotype, a deft hand at a molten-metal keyboard; flatbed film editing, the touch of a grease pencil and splicing tape, plus a mental map of where scenes lived across reels and soundtracks. When the telegraph lines, hot-metal presses, and celluloid reels disappeared, so did the crafts they supported.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Another kind of de-skilling represents the elimination of drudgery. Few of us mourn the loss of hand-scrubbing laundry, or grinding through long division on paper. A neuroscientist I know swears by LLMs for speeding the boilerplate-heavy business of drafting grant proposals. He\u2019s still responsible for the content, but if his grant-writing chops decline, he\u2019s unbothered. That\u2019s not science, in his view; it\u2019s a performance demanded by the research economy. Offloading some of it gives him back time for discovery.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Occupational de-skilling can, in fact, be democratizing, widening the circle of who gets to do a job. For scientists who struggle with English, chatbots can smooth the drafting of institutional-review-board statements, clearing a linguistic hurdle that has little to do with the quality of their research. De-skilling here broadens access. Or think of Sennett\u2019s bakery and the Greek men who used to staff the kitchen floor. 
The ovens burned their arms, the old-fashioned dough beaters pulled their muscles, and heavy trays of loaves strained their backs. By the \u201990s, when the system ran on a Windows controller, the workforce looked different: A multiethnic mix of men and women stood at the screens, tapping icons. The craft had shrunk; the eligible workforce had grown. (And yes, their labor had grown cheaper: a wider gate, a lower wage.)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We often lose skills simply because tech lets us put our time to better use and develop skills further up the proverbial value chain. At one of Zuboff\u2019s pulp mills, operators who were freed from manual activity could spend more time anticipating and forestalling problems. \u201cSitting in this room and just thinking has become part of my job,\u201d one said. Zuboff called this reskilling: action skills giving way to abstraction and procedural reasoning, or what she termed \u201cintellective skills.\u201d Something similar happened with accountants after the arrival of spreadsheet programs such as VisiCalc; no longer tasked with totting up columns of numbers, they could spend more time on tax strategy and risk analysis.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">More radically, new technologies can summon new skills into being. Before the microscope, there were naturalists but no microscopists: Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of seeing and interpreting the invisible. Filmmaking didn\u2019t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now. Working with large language models, my younger colleagues insist, is already teaching a new kind of craftsmanship\u2014prompting, probing, catching bias and hallucination, and, yes, learning to think in tandem with the machine. 
These are emergent skills, born of entanglement with a digital architecture that isn\u2019t going anywhere. Important technologies, by their nature, will usher in crafts and callings we don\u2019t yet have names for.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">The hard part is deciding, without nostalgia or inertia, which skills are keepers and which are castoffs. None of us likes to see hard-won abilities discarded as obsolete, which is why we have to resist the tug of sentimentality. Every advance has cost something. Literacy dulled feats of memory but created new powers of analysis. Calculators did a number on mental arithmetic; they also enabled more people to \u201cdo the math.\u201d Recorded sound weakened everyday musical competence but changed how we listen. And today? Surely we have some say in whether LLMs expand our minds or shrink them.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Throughout human history, our capabilities have never stayed put. Know-how has always flowed outward\u2014from hand to tool to system. Individual acumen has diffused into collective, coordinated intelligence, propelled by our age-old habit of externalizing thought: stowing memory in marks, logic in machines, judgment in institutions, and, lately, prediction in algorithms. The specialization that once produced guilds now produces research consortia; what once passed among masters and apprentices now circulates through networks and digital matrices. Generative AI\u2014a statistical condensation of human knowledge\u2014is simply the latest chapter in our long apprenticeship to our own inventions.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The most pressing question, then, is how to keep our agency intact: how to remain the authors of the systems that are now poised to take on so much of our thinking. 
Each generation has had to learn how to work with its newly acquired cognitive prostheses, whether stylus, scroll, or smartphone. What\u2019s new is the speed and intimacy of the exchange: tools that learn from us as we learn from them. Stewardship now means ensuring that the capacities in which our humanity resides\u2014judgment, imagination, understanding\u2014stay alive in us. If there\u2019s one skill we can\u2019t afford to lose, it\u2019s the skill of knowing which of them matter.<\/p>\n","protected":false},"excerpt":{"rendered":"The fretting has swelled from a murmur to a clamor, all variations on the same foreboding theme: \u201cYour&hellip;\n","protected":false},"author":3,"featured_media":334939,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[691,738,158,67,132,68],"class_list":{"0":"post-334938","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-united-states","12":"tag-unitedstates","13":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/115443271215067700","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/334938","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=334938"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/334938\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/334939"}
],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=334938"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=334938"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=334938"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}