{"id":6665,"date":"2026-04-17T09:59:09","date_gmt":"2026-04-17T09:59:09","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/6665\/"},"modified":"2026-04-17T09:59:09","modified_gmt":"2026-04-17T09:59:09","slug":"american-politics-is-already-inundated-with-ai-deepfakes-its-only-getting-worse","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/6665\/","title":{"rendered":"American Politics Is Already Inundated With AI Deepfakes. It\u2019s Only Getting Worse."},"content":{"rendered":"<p>In 2024, election officials, social media platform managers, and artificial intelligence developers were poised for an \u201c<a href=\"https:\/\/ash.harvard.edu\/articles\/the-apocalypse-that-wasnt-ai-was-everywhere-in-2024s-elections-but-deepfakes-and-misinformation-were-only-part-of-the-picture\/\" rel=\"nofollow noopener\" target=\"_blank\">AI apocalypse<\/a>\u201d that didn\u2019t arrive. They feared that fake content would deceive voters and sway election outcomes. But the fakes that voters did run into\u2014like the 2024 New Hampshire presidential primary robocall imitating former President Joe Biden\u2019s voice that told voters to wait until November to cast their ballots\u2014had limited impact.<\/p>\n<p>AI-generated visuals also carried telltale signs that made them easier to catch, like hands with extra or missing fingers or short-form videos shot from a single angle. 
But back then, campaign operatives also <a href=\"https:\/\/cdt.org\/insights\/promise-and-peril-generative-ais-experimental-debut-in-u-s-political-campaigns\/\" rel=\"nofollow noopener\" target=\"_blank\">feared<\/a> public blowback from either openly adopting tools that were unfamiliar to most voters or surreptitiously employing AI-generated content and being found out.<\/p>\n<p>Today, Americans find themselves in a political ecosystem where democratic norms have been shattered. The risks of AI-generated fakes are more pronounced, but public objections are less of a deterrent. In January, a White House account <a href=\"https:\/\/www.theguardian.com\/us-news\/2026\/jan\/22\/white-house-ice-protest-arrest-altered-image\" rel=\"nofollow noopener\" target=\"_blank\">posted<\/a> a photograph of an African American protester in Minnesota, altered to darken her skin and depict her in tears. Anti-ICE activists have <a href=\"https:\/\/www.npr.org\/2026\/01\/08\/nx-s1-5671740\/ice-minneapolis-grok-ai-renee-nicole-good\" rel=\"nofollow noopener\" target=\"_blank\">shared<\/a> images of immigration officers who have been \u201cunmasked\u201d by AI\u2014an \u201cill-advised\u201d tactic because of AI\u2019s penchant for hallucinating facial features and other details.<\/p>\n<p>Fast-forward to the 2026 midterm election season. The National Republican Senatorial Committee released a deepfake video of Texas state Rep. James Talarico, the Democratic Senate candidate, \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=qIDhhVah4xE\" rel=\"nofollow noopener\" target=\"_blank\">reading<\/a>\u201d his own years-old social media posts. The tiny words \u201cAI Generated\u201d appear in all caps at the bottom right-hand corner of the video. 
But many voters might miss that and believe the politician had recorded words that he\u2019d never actually said.<\/p>\n<p>The AI experts and media literacy experts interviewed by the Prospect say it\u2019s more important than ever for voters to maintain good digital hygiene, especially when governments and tech companies fall short in helping voters distinguish between authentic political media and material expressly created to deceive them.<\/p>\n<p>Deepfakes\u2014hyperrealistic audio, video, or photographic representations of a person saying or doing something that did not occur\u2014can fundamentally subvert viewers\u2019 sense of the truth. Adam Rose, a fellow at the Starling Lab for Data Integrity, a joint project of Stanford University and the University of Southern California, explains that bad actors can use <a href=\"https:\/\/news.mit.edu\/2023\/explained-generative-ai-1109\" rel=\"nofollow noopener\" target=\"_blank\">generative AI<\/a> to fabricate content, leaving voters susceptible to the \u201c<a href=\"https:\/\/www.cambridge.org\/core\/journals\/american-political-science-review\/article\/liars-dividend-can-politicians-claim-misinformation-to-evade-accountability\/687FEE54DBD7ED0C96D72B26606AA073\" rel=\"nofollow noopener\" target=\"_blank\">liar\u2019s dividend<\/a>\u201d: A perpetrator\u2019s goal is to increase their own credibility by sowing doubt and eroding trust in real images, and thereby shredding the public\u2019s faith in political content and the news and information systems.<\/p>\n<p>\u201cWithout visual evidence, we lack the ability to understand what\u2019s happening and to make judgments both in a court of public opinion and in a court of law,\u201d Rose says. 
\u201cIf people in either of those courts do not trust the evidence, it makes it very difficult to function as a civil society.\u201d<\/p>\n<p>Rose <a href=\"https:\/\/www.poynter.org\/commentary\/2026\/the-real-threat-of-ai-is-the-collapse-of-trust-deepfakes\/\" rel=\"nofollow noopener\" target=\"_blank\">cites<\/a> several examples of real evidence being dismissed as fake, including the actual videos of Alex Pretti\u2019s earlier confrontation with immigration officers days before his fatal interaction with them in Minneapolis.<\/p>\n<p>Anyone with a computer, internet access, and decent IT skills can produce and distribute deepfakes. \u201c[They] literally can come from anywhere: from bad actors who are politicians, people who are getting involved in the campaigns and being paid, people who have a stake in a campaign that are not being paid, but are just doing it, people who are just random people on the internet, whether it\u2019s someone in the U.S. or internationally,\u201d Rose says.<\/p>\n<p>Yet the prospects for effective federal action are dim. The Trump administration has taken a hard line against any kind of AI regulation. In March, President Trump <a href=\"https:\/\/www.whitehouse.gov\/releases\/2026\/03\/president-donald-j-trump-unveils-national-ai-legislative-framework\/\" rel=\"nofollow noopener\" target=\"_blank\">called on Congress<\/a> to \u201ctake steps to remove outdated or unnecessary barriers to innovation [and] accelerate the deployment of AI across industry sectors.\u201d Democrats in Congress led by Sen. Brian Schatz (D-HI) and Rep. Don Beyer (D-VA) have pushed back with <a href=\"https:\/\/mauinow.com\/2026\/03\/27\/schatz-colleagues-introduce-legislation-to-repeal-trumps-ai-moratorium\/\" rel=\"nofollow noopener\" target=\"_blank\">companion<\/a> <a href=\"https:\/\/beyer.house.gov\/news\/documentsingle.aspx?DocumentID=9009\" rel=\"nofollow noopener\" target=\"_blank\">proposals<\/a> that would repeal Trump\u2019s AI moratorium, while Sen. 
Mark Warner (D-VA) has suggested more than a dozen <a href=\"https:\/\/www.warner.senate.gov\/public\/_cache\/files\/f\/5\/f5565f53-419d-4c54-aa1f-b5f9ca2ae71b\/9EC20AF60C22B9908D4B399BB360EECC2BAF4C72934DBD47D6399193F3ED06C8.final-combined-genai-2026-election-commitments-letter-word.pdf\" rel=\"nofollow noopener\" target=\"_blank\">measures<\/a> that social media companies could take to respond to \u201cmaliciously manipulated media.\u201d<\/p>\n<p>More than two dozen states have <a href=\"https:\/\/www.ncsl.org\/elections-and-campaigns\/artificial-intelligence-ai-in-elections-and-campaigns\" rel=\"nofollow noopener\" target=\"_blank\">enacted<\/a> laws concerning the dissemination of deepfaked political content during the election season. But most of those laws only require that political actors disclose AI usage; they don\u2019t limit or prohibit its use. State lawmakers, moreover, legislate too slowly to keep up. \u201c[Legislation is] a noble effort, but the technology is moving so fast,\u201d says Sarah Kreps, director of the Cornell Brooks School Tech Policy Institute. \u201cYou are going to be addressing yesterday\u2019s problems.\u201d<\/p>\n<p>Social media platforms and voters can still act to stanch the flow of false information. Tim Harper, the elections and democracy project lead at the Center for Democracy and Technology, says platforms should re-up their <a href=\"https:\/\/securityconference.org\/en\/aielectionsaccord\/\" rel=\"nofollow noopener\" target=\"_blank\">2024 commitments<\/a> to election safety. 
They can increase public awareness about deepfakes, help voters spot AI-generated content, and assure people that platforms are actively searching out that content.<\/p>\n<p>\u201cThe [political] campaigns should invest heavily in using content provenance\u2014watermarking any of their authentic press releases and videos and images\u2014not only to give a trust signal to voters that this content is coming from them, but also to prevent the risk that they would be deepfaked,\u201d Harper adds.<\/p>\n<p>To educate Latino voters about deepfakes and the wider world of mis\/disinformation, Factchequeado, a fact-checking and media literacy network, distributes reliable Spanish-language news to its media outlet partners across the country. The group also manages a WhatsApp chatbot through which users can submit claims and posts for Factchequeado\u2019s staff to verify or debunk. The service <a href=\"https:\/\/factchequeado.com\/institucional\/20230522\/factchequeado-and-nyc-media-lab-expand-ai-technology-to-combat-misinformation\/\" rel=\"nofollow noopener\" target=\"_blank\">encourages<\/a> users to pause before they share potentially misleading information.<\/p>\n<p>Laura Zommer, Factchequeado\u2019s co-founder and CEO, wants voters to get into the habit of consulting trusted organizations: \u201cYou need to continue using your eye and continue training your eye to look for details that can show you a clue that it is not necessarily true or authentic content, but you don\u2019t need to 100 percent trust your own ability.\u201d She also cautions that while voters must be discerning about the types of material they consume and share, creating media-literate consumers is just the first step in stopping the spread of false information.<\/p>\n<p>The Poynter Institute, a nonprofit newsroom and journalism training organization, does similar work. 
MediaWise, its media and AI literacy initiative, empowers people to critically interrogate the content they encounter and builds resources and tool kits for libraries, newsrooms, and specific demographic groups. For example, the organization created a brief instructional video for reverse image searching aimed at <a href=\"https:\/\/www.poynter.org\/mediawise\/programs\/seniors\/\" rel=\"nofollow noopener\" target=\"_blank\">seniors<\/a>. For teens, MediaWise\u2019s \u201cAI Unlocked\u201d <a href=\"https:\/\/www.poynter.org\/mediawise\/programs\/ai-unlocked\/\" rel=\"nofollow noopener\" target=\"_blank\">tool kit<\/a> has tips for visually recognizing AI-generated materials and using AI responsibly.<\/p>\n<p>Sean Marcus, an interactive learning designer at MediaWise, says that the best thing voters can do is remain vigilant and \u201cexpect to see more and more extreme misinformation, twisted information, and out-of-context information.\u201d Still, he warns against fatalism. \u201cWe don\u2019t necessarily have to accept the fact that misinformation [and] disinformation can flood in and flow in without us taking action against it, and without audiences being sharp enough to discern good information from bad.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"In 2024, election officials, social media platform managers, and artificial intelligence developers were poised for an \u201cAI 
apocalypse\u201d&hellip;\n","protected":false},"author":2,"featured_media":6666,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,1069,1670,2657,625,6103,464,1109],"class_list":{"0":"post-6665","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-big-tech","11":"tag-congress","12":"tag-deepfakes","13":"tag-donald-trump","14":"tag-election-2026","15":"tag-politics","16":"tag-social-media"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/6665","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=6665"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/6665\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/6666"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=6665"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=6665"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=6665"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}