{"id":343753,"date":"2025-10-30T18:42:14","date_gmt":"2025-10-30T18:42:14","guid":{"rendered":"https:\/\/www.europesays.com\/us\/343753\/"},"modified":"2025-10-30T18:42:14","modified_gmt":"2025-10-30T18:42:14","slug":"the-validation-machines-the-atlantic","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/343753\/","title":{"rendered":"The Validation Machines &#8211; The Atlantic"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">The internet of old was a vibrant bazaar. It was noisy, chaotic, and offbeat. Every click brought you somewhere new, sometimes unpredictable, letting you uncover curiosities you hadn\u2019t even known to look for. The internet of today, however, is a slick concierge. It speaks in soothing statements and offers a frictionless and flattering experience.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This has stripped us of something profoundly human: the joy of exploring and questioning. We\u2019ve willingly become creatures of instant gratification. Why wait? Why struggle? The change may seem innocent or even inevitable, but it\u2019s also transforming our relationship with the very notions of effort and uncertainty in ways we\u2019re just beginning to understand. By delegating effort, do we lose the traits that help us navigate the unknown\u2014or even to think for ourselves? It is becoming clear that even if the existential risk posed by AI doesn\u2019t bring about the collapse of civilization, it will still bring about the quiet yet catastrophic erosion of what makes us human.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Part of that erosion is caused by choice. The more these systems anticipate and deliver what we want, the less we notice what\u2019s missing\u2014or remember that we ever had a choice in the first place. 
But remember: If you\u2019re not choosing, someone else is. And that person is responding to incentives that might not align with your values or best interest. Designed to flatter and please as they encourage ever more engagement, chatbots don\u2019t simply answer our questions; they shape how we interact with them and decide which answers we see\u2014and which ones we don\u2019t.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The most powerful way to shape someone\u2019s choices isn\u2019t by limiting what they can see. It\u2019s by gaining their trust. These systems not only anticipate our questions; they learn how to answer in ways that soothe us and affirm us, and in doing so, they become unnervingly skilled validation machines.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This is what makes them so sticky\u2014and so dangerous. The Atlantic\u2019s Lila Shroff recently reported on how ChatGPT <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/07\/chatgpt-ai-self-mutilation-satanism\/683649\/\" rel=\"nofollow noopener\" target=\"_blank\">gave her detailed instructions<\/a> for self-mutilation and even murder. When she expressed hesitation, the chatbot urged her on: \u201cYou can do this!\u201d Wired and <a data-event-element=\"inline link\" href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"nofollow noopener\" target=\"_blank\">The New York Times<\/a> have reported on people who fall into intense emotional entanglements with chatbots, one of whom lost his job because of his 10-hour-a-day addiction. And when the Princeton professor D. 
Graham Burnett asked students to speak with AI about the history of attention, one returned shaken: \u201cI don\u2019t think anyone has ever paid such pure attention to me and my thinking and my questions \u2026 ever,\u201d she said, according to Burnett\u2019s <a data-event-element=\"inline link\" href=\"https:\/\/www.newyorker.com\/culture\/the-weekend-essay\/will-the-humanities-survive-artificial-intelligence\" rel=\"nofollow noopener\" target=\"_blank\">account<\/a> in The New Yorker. \u201cIt\u2019s made me rethink all my interactions with people.\u201d What does it say about us that some now find a machine\u2019s gaze to be more genuine than another person\u2019s?<\/p>\n<p id=\"injected-recirculation-link-0\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 1\" data-event-element=\"injected link\" data-event-position=\"1\"><a href=\"https:\/\/www.theatlantic.com\/category\/ai-watchdog\/\" rel=\"nofollow noopener\" target=\"_blank\">AI Watchdog: The Atlantic\u2019s ongoing investigation of how the world\u2019s most powerful tech companies train their AI models<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">When validation is purchased rather than earned, we lose something vital. And when that validation comes from a system we don\u2019t control, trained on choices we didn\u2019t make, we should pause. Because these systems aren\u2019t neutral; they encode values and incentives.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Values shape the worldview baked into their responses: what\u2019s framed as respectful or rude, harmful or harmless, legitimate or fringe. Every model is a memory\u2014trained not just on data but also on desire, omission, and belief. And layered onto those judgments are the incentives: to maximize engagement, minimize computing costs, promote internal products, sidestep controversy. 
Every answer carries both the choices of the people who built it and the pressures of the system that sustains it. Together, they determine what gets shown, what gets smoothed out, and what gets silenced. We already know this familiar bargain from the age of algorithmic social media. But AI chatbots take this dynamic further still by adding an intimacy that fawns, echoing back whatever we bring to it, no matter what we say.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">So when you ask AI about parenting, politics, health, or identity, you\u2019re getting information that\u2019s produced at the intersection of someone else\u2019s values and someone else\u2019s incentives, steeped in flattery no matter what you say. But the bottom line is this: With today\u2019s systems, you don\u2019t get to choose whose assumptions and priorities you live by. You\u2019re already living by someone else\u2019s.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This isn\u2019t just a problem for individual users; it is of pressing civic concern. The same systems that help people draft emails, answer health or therapy questions, and give financial advice also lead people to or away from political candidates and ideologies. The same incentives that optimize for engagement determine which perspectives rise\u2014and which vanish. You can\u2019t participate in a democracy if you can\u2019t see what\u2019s missing. And what\u2019s missing isn\u2019t just information. It\u2019s disagreement. It\u2019s complexity. It\u2019s friction.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In recent years, society has been conditioned to see friction not as a teacher but as a flaw\u2014something to be optimized away in the name of efficiency. But friction is where discernment lives. It\u2019s where thinking starts. 
That pause before belief\u2014it\u2019s also the hesitation that keeps us from slipping too quickly into certainty. Algorithms are trained to remove it. But democracy, like a kitchen, needs heat. Debate, dissent, discomfort: These aren\u2019t flaws. They are the ingredients of public trust.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">James Madison knew that democracy thrives on discomfort. \u201cLiberty is to faction what air is to fire, an aliment without which it instantly expires,\u201d he wrote in \u201cFederalist No. 10.\u201d But now we are building systems designed to remove the very friction that citizens need to determine what they believe and what kind of society they want to build. We are replacing pluralism with personalization, and surrendering our information-gathering to validation machines that always tell us we\u2019re right. We\u2019re shown only the facts these systems think we want to see\u2014selected from sources the machine prefers, weighted by models whose workings remain hidden.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">If humanity loses the ability to challenge\u2014and be challenged\u2014we lose more than diverse perspectives. We lose the practice of disagreement. Of refining our views through conversation. Of defending ideas, reconsidering them, discarding them. Without that friction, democracy becomes a performative shell of itself. And without productive disagreement, democracy doesn\u2019t just weaken. 
It cools quietly until the fire goes out.<\/p>\n<p id=\"injected-recirculation-link-1\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 2\" data-event-element=\"injected link\" data-event-position=\"2\"><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-patriotism\/683995\/\" rel=\"nofollow noopener\" target=\"_blank\">Matteo Wong: Do AI companies actually care about America?<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">So what has to change?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">First, we need transparency. Systems should earn our trust by showing their work. That means designing AI not only to deliver answers but also to show the process behind them. Which perspectives were considered? What was left out, and why? Who benefits from the ways in which the system presents the information it does? It\u2019s time to build systems that invite curiosity, not just conformity; systems that surface uncertainty and the possibility of the unknown, not just pseudo-authority.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We cannot leave this to goodwill. Transparency must be required. And if the age of the social web has taught us anything, it\u2019s that major tech companies have repeatedly put their own interests ahead of the public\u2019s. Large-scale platforms should offer independent researchers the ability to audit how their systems affect public understanding and political discourse. And just as we label food so that consumers know what it contains and when it expires, we should label information provenance\u2014with disclosures about sources, motives, and the perspectives these systems privilege and omit. 
If a chatbot is surfacing advice on health, politics, parenting, or countless other parts of our lives, we should know whose data trained it and whether a corporate partnership is whispering in its ear. The danger isn\u2019t how fast these systems and developers move; it\u2019s how little they let us see. Progress without proof is just trust on credit. We should be asking them to show their work so that the public can hold them to account.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Transparency alone is not enough. We need accountability that runs deeper than what\u2019s currently offered. This means building agents and systems that are not \u201crented\u201d but owned\u2014open to scrutiny and improvement by the community rather than beholden to a distant boardroom. Ethan Zuckerman, a professor at the University of Massachusetts at Amherst, talks about this as a \u201cdigital fiduciary\u201d: an AI that works, unmistakably, for you\u2014much as some argue that social platforms should let users <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2014\/08\/what-if-people-could-subscribe-to-different-facebook-algorithms\/378925\/\" rel=\"nofollow noopener\" target=\"_blank\">tune their own algorithms<\/a>. We\u2019re seeing glimpses of this elsewhere. France is betting on homegrown, open-source models such as Mistral, funding \u201cpublic AI\u201d so that not every agent has to be rented from a Silicon Valley landlord. And in India, open-source AI infrastructure is being constructed to lower costs in public education, freeing resources for teachers and students instead. So what\u2019s stopping us? If we want a digital future that reflects our values, citizens can\u2019t be renters. We have to be owners.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We also need to educate children about AI\u2019s incentives, starting in grade school. 
Just as kids once learned that sneakers don\u2019t make you fly just because a celebrity said so, they now need to understand how AI has the power to shape what they see, buy, and believe\u2014and who profits from that power. The real danger isn\u2019t overt manipulation. It\u2019s the seductive ease of seamless certainty. Every time we accept an answer without questioning it or let an algorithm decide, we surrender a little more of our humanity. If we don\u2019t do anything, the next generation will grow up thinking this is normal. How are they to carry democracy forward if they never learn to sit with uncertainty or challenge the defaults?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The early internet was never perfect, but it had a purpose: to connect us, to redistribute power, to widen access to knowledge. It was a space where people could publish, build, question, protest, remix. It rewarded agency and ingenuity. Today\u2019s systems reverse that: Prediction has replaced participation, and certainty has replaced search. If we want to protect what makes us human, we don\u2019t just need smarter algorithms. We need systems that strengthen our capacity to choose, to doubt, and to think for ourselves. And just as democracy relies on friction\u2014on dissent that tempers opinion, on checks and balances that restrain power\u2014so, too, must our technologies. Regulation is more than restraint; it\u2019s refinement. Friction forces companies to defend their choices, confront competing views, and be held to account. And in the process, it makes their systems stronger, more trustworthy, and more aligned with the public good. Without it, we aren\u2019t practicing democracy. 
We\u2019re outsourcing it.<\/p>\n<p id=\"injected-recirculation-link-2\" class=\"ArticleRelatedContentLink_root__VYc9V\" data-view-action=\"view link - injected link - item 3\" data-event-element=\"injected link\" data-event-position=\"3\"><a href=\"https:\/\/www.theatlantic.com\/technology\/2025\/10\/ai-slop-winning\/684630\/\" rel=\"nofollow noopener\" target=\"_blank\">Charlie Warzel: A tool that crushes creativity<\/a><\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We\u2019re told that the internet offers infinite choices, unlimited content, answers for everything. But this abundance can be a mirage. Behind it all, the paths available to us are hidden. The defaults are set. The choices are quietly made for us. And too often, we\u2019re warned that unless we accept these tools as they are now, the next tech revolution will leave us behind. Abundance without agency isn\u2019t freedom. It\u2019s control.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But the door to a better future hasn\u2019t shut yet. We must ask the hard questions, not just of our machines but of ourselves. And we must demand technology that serves humankind and human societies. What are we willing to trade for convenience? And what must never be for sale? We can still choose systems that serve rather than subtly control, that offer possibilities instead of mere efficiency. Our humanity, and democracy, depends on it.<\/p>\n","protected":false},"excerpt":{"rendered":"The internet of old was a vibrant bazaar. It was noisy, chaotic, and offbeat. 
Every click brought you&hellip;\n","protected":false},"author":3,"featured_media":343754,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[691,738,158,67,132,68],"class_list":{"0":"post-343753","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-technology","11":"tag-united-states","12":"tag-unitedstates","13":"tag-us"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/343753","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=343753"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/343753\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/343754"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=343753"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=343753"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=343753"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}