By
Max Greenwood

04/10/2026 10:12 AM EDT

State laws requiring disclaimers on political ads that use artificial intelligence are sowing distrust among voters, according to a new study.

The research, conducted by the American Association of Political Consultants Foundation in conjunction with a bipartisan group of practitioners, found that the appearance of an AI disclosure on a political ad creates a measurable "disclaimer effect": viewers reported feeling more mistrust and skepticism toward the advertised message, even if the ad didn't actually feature AI-generated content.

“What was important to understand is: was there a penalty or consequence to this broad sense of transparency and disclosure of any time you use the tool?” Julie Sweet, the director of advocacy and industry relations at the AAPC, said. “And what this research says is that, yes, there is a penalty on trust and credibility and believability.” 

That poses a significant challenge for campaigns at a time when more and more practitioners are adopting AI to varying extents as part of the ad-making process. At the same time, Sweet said, the study’s findings should serve as a wake-up call to policymakers who have rushed to enact rules and regulations governing the use of AI in politics.

“Policymakers should know that the language that they are sending out into policy does have a significant effect, and the formatting requirements have a significant effect,” Sweet said. On the other hand, practitioners should know that “when you’re using the technology, you need to be really thoughtful.”

The Study

The study from the AAPC Foundation focused on a mock mayoral election ad modeled after a traditional local campaign, in which a candidate boosts his own campaign and contrasts himself against his opponent. Two versions of the ad were produced as part of the study: one that didn’t use any AI-generated content and another that used the same script and message, but used AI to generate the candidate’s voice and facial expressions.

Both ads were shown to different respondents with and without the disclaimer “This ad has been manipulated or generated by artificial intelligence.” According to the study, the language for the disclaimer was chosen because it “represents the least descriptive and most generic form of AI disclosure currently mandated under existing state laws.”

Respondents then recorded moment-by-moment reactions as they watched the ads.

The Findings

The ultimate takeaway: as soon as viewers saw the AI disclosure language, “approval of the ad message declined sharply,” according to the study.

That drop in approval didn't mean that viewers were less interested in the ad. On the contrary, viewers actually paid more attention to the ad once the disclaimer message appeared on screen. That heightened attention, however, led viewers to scrutinize the ad more closely.

“In effect, the disclaimer functioned as a cognitive speed bump, heightening viewer skepticism and reducing receptivity to the ad’s message at the very moment it appeared,” the study says.

Of course, there are several variables at play. When the disclaimer message was shown in larger type, viewer trust measurably decreased, while many viewers failed to notice the disclaimer at all when it appeared in smaller type. At the same time, "higher-tech viewers" – those more familiar with the technology – were less likely than "lower-tech audiences" to report reduced trust in the ad message when they saw the AI disclaimer.

Still, the findings of the study underscore an issue with how policymakers have approached AI disclaimers in politics, Sweet said. The intention of those disclaimers is to inform viewers and increase transparency. But the study argues that current disclaimer frameworks are producing different effects among different populations of voters, raising questions about a “one-size-fits-all approach to AI transparency.”

“It’s not actually solving, I think, the problem that policymakers and responsible practitioners are trying to get at, which is: how do you clarify for people and how do you be transparent?” Sweet said. “And that’s just not happening.”