Study Shows AI Labels May Be Undermining Ad Credibility. That Matters for Political Buyers—and for Local Media Sellers.
(4-minute read)
Artificial-intelligence disclosures on political advertising are intended to promote transparency. A new study suggests they may also be doing something else: making audiences trust the message less.
That finding has implications well beyond campaign strategy. For local media account executives, station managers, digital sellers and agency professionals who work with political advertisers, it points to a growing challenge in the ad marketplace: how to use AI efficiently without weakening credibility at the very moment trust matters most.
Research from the American Association of Political Consultants Foundation, conducted with a bipartisan group of industry practitioners, found that viewers became more skeptical of political ads when they saw a disclosure stating that the ad had been “manipulated or generated by artificial intelligence.” The effect appeared even when the ad itself did not actually contain AI-generated content.
In other words, the disclaimer alone changed how people felt about the message.
For local media sellers, that is a notable development. Political advertising has long relied on urgency, emotional resonance and message discipline. But in local races especially—where campaigns often depend on TV, radio, cable, digital video and increasingly cross-platform buys to influence narrow groups of persuadable voters—credibility can be fragile. Any on-screen or in-audio cue that prompts viewers to question authenticity could reduce the effectiveness of the ad and complicate conversations between buyers and media partners.
Julie Sweet, director of advocacy and industry relations at AAPC, said the research points to a real tradeoff. The intent behind disclosure laws is openness. The practical effect, she said, may be a drop in trust, believability and perceived credibility.
That is a problem at a moment when more campaign professionals are using AI tools in some part of the creative process, whether for voice work, image cleanup, script iteration, translation, versioning or rapid production of multiple formats. The study suggests that policymakers may have moved faster than the evidence.
For media companies and agencies, the issue is not simply whether AI is used. It is how audiences interpret that use once a disclaimer appears.
The study centered on a mock mayoral campaign ad designed to resemble a standard local political spot. In one version, the ad used no AI-generated content. In another, the script and message were the same, but AI was used to generate the candidate’s voice and facial expressions.
Both versions were then shown to different groups of respondents, with and without the disclosure: “This ad has been manipulated or generated by artificial intelligence.” According to the researchers, that wording was selected because it reflects one of the broadest and least descriptive forms of AI disclosure now required under some state laws.
Respondents provided moment-by-moment reactions while watching the ads.
The most striking finding was that viewer approval fell quickly once the AI disclosure appeared. The warning did not cause people to tune out. In fact, it often made them pay closer attention. But that added attention did not help the message. It made viewers more critical of it.
The study described the disclosure as a kind of cognitive speed bump. Rather than simply informing the audience, it interrupted the flow of the message and introduced doubt at exactly the point where persuasion was supposed to happen.
That dynamic should sound familiar to experienced local sellers and agency strategists. Advertising works best when it reduces friction, reinforces relevance and keeps the audience moving toward the message. Anything that creates hesitation—whether it is clutter, poor creative, weak targeting or now a mistrust-triggering disclosure—can lower performance.
The research also found that presentation matters. Larger disclaimer text caused trust to fall more sharply. Smaller disclosures were often missed altogether. Viewers who were more comfortable with technology were less likely to react negatively than those with lower familiarity.
That variation raises another issue for local media and agency professionals: audience response to AI disclosures is unlikely to be uniform. In some markets, voter segments may be highly skeptical of AI-assisted messaging. In others, the reaction may be muted. That makes a one-size-fits-all compliance approach less attractive from both a policy and planning standpoint.
For local broadcasters, digital publishers, cable operators and agency teams, the practical takeaway is not to avoid AI entirely. It is to think more carefully about where AI appears in the workflow, how much of the final product it shapes, and what kind of disclosure environment surrounds it.
That is especially important in local political advertising, where trust in the medium can influence trust in the message. A television station, radio brand, local news site or community publication is not just delivering impressions during campaign season. It is lending context. If AI labels cause viewers to question what they are seeing or hearing, then media companies may need to work harder to position themselves as trusted distribution partners rather than passive carriers of campaign creative.
Agencies should be paying attention, too. AI can improve speed and lower production costs, but those efficiencies may come with persuasion risk if the output triggers legal disclaimers that undercut credibility. Buyers and strategists may need to weigh the savings from AI-assisted execution against the possibility of weaker audience receptivity.
For local media AEs, this opens the door to a more strategic conversation with political advertisers and consultants. The best sellers will not frame the issue as anti-technology. They will frame it as performance protection. How is the ad being made? Will a disclaimer be required? How prominent will it be? Could it affect response among less tech-comfortable voters? Are there ways to preserve speed and compliance without damaging believability?
Those are no longer abstract policy questions. They are campaign effectiveness questions.
The broader lesson is that transparency mechanisms do not always work the way lawmakers expect. In this case, a disclosure meant to reassure audiences may instead be warning them to distrust the message. For political practitioners, that is a creative and regulatory headache. For local media sellers and agencies, it is another reminder that in a fragmented, skeptical marketplace, trust is not a soft metric. It is often the thing that determines whether the ad works at all.
On that point, the study offers a useful caution for everyone in the local ad business: the tools used to create advertising matter, but the cues audiences receive about those tools may matter just as much.
Source: Campaigns & Elections