{"id":348518,"date":"2025-11-01T18:15:17","date_gmt":"2025-11-01T18:15:17","guid":{"rendered":"https:\/\/www.europesays.com\/us\/348518\/"},"modified":"2025-11-01T18:15:17","modified_gmt":"2025-11-01T18:15:17","slug":"did-american-astrophysicist-neil-degrasse-tyson-really-admit-the-earth-is-flat","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/348518\/","title":{"rendered":"Did American astrophysicist Neil deGrasse Tyson really &#8216;admit&#8217; the earth is flat? |"},"content":{"rendered":"<p> <img src=\"https:\/\/www.europesays.com\/us\/wp-content\/uploads\/2025\/11\/neil-de-grasse-tyson.jpg\" alt=\"Did American astrophysicist Neil deGrasse Tyson really 'admit' the earth is flat?\" title=\"\u201cIt\u2019s Getting Harder to Know What\u2019s Real,\u201d  the viral StarTalk episode where Neil deGrasse Tyson appears to \u201cadmit\u201d the Earth is flat, thanks to a disturbingly convincing deepfake\/ Image: Screengrab Youtube\" decoding=\"async\" fetchpriority=\"high\"\/>\u201cIt\u2019s Getting Harder to Know What\u2019s Real,\u201d  the viral StarTalk episode where Neil deGrasse Tyson appears to \u201cadmit\u201d the Earth is flat, thanks to a disturbingly convincing deepfake\/ Image: Screengrab Youtube <\/p>\n<p>It\u2019s flat.<\/p>\n<p>The words sound jarring coming from <a href=\"https:\/\/timesofindia.indiatimes.com\/topic\/neil-degrasse-tyson\" styleobj=\"[object Object]\" class=\"\" commonstate=\"[object Object]\" frmappuse=\"1\" rel=\"nofollow noopener\" target=\"_blank\">Neil deGrasse Tyson<\/a>, a man who has spent decades dismantling pseudoscience and explaining the cosmos with unwavering clarity. In a viral clip from his StarTalk YouTube channel, he says, \u201cLately, I have been doing calculations as well as looking back at old NASA footage and raw data from satellites hovering above Earth. And I just can&#8217;t escape the conclusion that the Earth might actually be flat,&#8221;\u201d Except it isn\u2019t him. 
Moments later, the real Tyson appears on screen, holding up a phone playing the same video. "That's not me," he says evenly. "It was never me. Those aren't my words." The clip is a deepfake, an AI-generated fabrication indistinguishable from the real thing.

It's fitting, in a way. Tyson's voice and likeness have become a staple of the internet's science-adjacent culture and its hyper-stimulated content formats, stitched into split-screen videos, layered over Roblox gameplay loops, and engineered to keep the doomscrolling masses from ever scrolling away. His credibility, once a safeguard against misinformation, now makes him its most convincing vessel, an unwilling participant in an era where truth itself can be forged, remixed, and repackaged.

Neil gets deepfaked

The surprising declaration, and the AI-generated clip that sparked it, appeared during a recent episode of StarTalk, Neil deGrasse Tyson's YouTube show. The video, titled "It's Getting Harder To Know What's Real," features Alexandru Cosoi, Chief Security Strategist at Bitdefender, a cybersecurity expert who leads the company's cyber-intelligence team in darknet investigations, post-breach forensics, and international cybercrime prevention. Together, they discuss how artificial intelligence can now clone a person's face, voice, and cadence with startling accuracy, and the growing challenge of distinguishing parody from manipulation in the digital age.

"I didn't think much about deepfakes, until I got deepfaked," Tyson admits. At first, he didn't see the harm. "The early stuff is fine if it's parody," he says. "One of my favorite examples is when I was 'babyified' in a real conversation I had with Theo Von on his podcast. You're not thinking to yourself, 'Did Neil actually become a baby to do this?' Because it's parody.
It's one of the most cherished means of expression we have in the United States." But that line, between parody and deception, is fast disappearing. "When you do this and the viewer doesn't know it's parody, then you're crossing a line," he says. He's seen his likeness repurposed for fabricated science scripts written by others, the deepfake Tyson earnestly delivering false explanations in his voice. "Some of them try to spread more science through my persona," he says. "But often, the science is wrong."

Even his friends have been fooled. A convincing video of Tyson narrating a grand theory about a Type III civilization, set to the Interstellar soundtrack, led actor Terry Crews to message him in admiration, only to learn it wasn't real. "I'm flattered that people want to put me into content in ways that attract audiences," Tyson says. "But if it's fooling people, and they're not thinking, 'Oh, this is parody' or 'This is just for fun,' then it violates the integrity we've worked so hard to build. Something's got to be done about that. And something will."

The stakes of political deepfakes

"Of course, a science video or a celebrity deepfake may not have the same global consequences as a political one that affects peace or stability," notes Cosoi. He recalls the early months of the Russia–Ukraine war, when a hacked Ukrainian TV station broadcast a fabricated video of President Zelenskyy announcing a surrender to Russia, followed by another showing Vladimir Putin declaring, "We're finally getting to peace." "They weren't technically very good, Zelenskyy's head looked slightly larger than normal, but people with limited internet access or few media options might still believe it," Cosoi explains.
Zelenskyy later had to appear on video himself to confirm it was fake. Similar tactics have surfaced during election campaigns: deepfakes depicting politicians taking bribes or discussing wars have been released just before polling days, when candidates are legally barred from responding, tilting public sentiment at the last moment.

Cosoi says the same technology now powers a darker trade: scams that mimic loved ones, bosses, or entire virtual meetings. "Scamming isn't new," he says. "But with AI in the hands of bad actors, it's been taken to another level." He outlines the main types:

- Romance or investment scams, where fraudsters build trust over chat before persuading victims to invest.
- Business email compromise scams, such as a Hong Kong case where a worker was tricked into transferring $25 million during a deepfaked video call with fake 'executives'.
- Family or 'relative in distress' scams, using cloned voices to mimic children or parents pleading for money.

Asked by Tyson how people can protect themselves, Cosoi admits the defences are limited. "I stopped answering unknown calls," he says. "In the past year, almost every one has been a scammer." Still, there are new tools on the horizon. AI "honeypots" such as Bitdefender's Scamio now engage scammers to waste their time and collect data, helping improve detection systems. Researchers are also developing technology that can analyse videos, images, and audio, not only to assess how fake something is but to highlight which parts were altered. "It's a race," Cosoi concludes, "between how fast we can build detection and how fast deepfakes can evolve."

Are we losing against deepfakes?
Deepfake technology, once a parlour trick for internet pranksters, has become one of the most disruptive forces shaping how truth circulates online. Built on deep learning, it uses artificial intelligence to generate uncanny audio, video, and imagery, making people appear to say or do things that never happened.

Consumer apps have only accelerated the trend. Platforms like Sora have democratised deepfake creation, putting the technology in the hands of millions, and fooling countless Facebook mums in the process. What began as a playful novelty for the tech-curious has evolved into a production line of deception, churning out synthetic faces and false realities that even good old common sense can't detect.

Tyson, meanwhile, has become one of its most recognisable victims, the archetype of a deepfaked intellectual. "Will there come a time when deepfake AI becomes so good that no tool can detect it, rendering these defences useless?" he asked. Maybe. One day, a deepfake might be more appealing to a person than the truth, even if detection tools say it's fake. People might say, "No, no, this has to be true."

For Tyson, that's already the reality. He's watched digital versions of himself hawk everything from sneakers to soft drinks, and deliver pseudoscientific sermons he never wrote. "Let me be clear," he said. "I have never, and will never, do that. If you see me endorsing something, it's not me. It's a deepfake. Pure and simple." Tyson, as ever, resists instruction. "I don't tell you what to do," he said. "Except for one thing I do tell you every single day, and you know what that is?
Just look up."