The radio ad posted online by Republican gubernatorial candidate Brian Shortsleeve features a voice that sounds exactly like Governor Maura Healey, a Democrat. “We have one of the highest electricity rates in the nation. Thanks to me for slapping on excessive fees to fund my climate agenda,” the fake Healey says.

While the average listener is likely to understand that Healey isn’t actually bemoaning her economic record, some may be fooled, as there’s no disclosure that her words are fake. The ad is a symptom of a larger problem. As artificial intelligence technology improves and proliferates in political advertising, there is no longer an easy way to know whether a picture or voice in an ad is authentic.

According to the National Conference of State Legislatures, 26 states have regulated the use of deepfakes — realistic images, videos, or voices — in political ads. Massachusetts doesn’t, and it’s an oversight that should be corrected.

While Massachusetts’ consumer protection law prohibits the use of AI to deceive someone in a business transaction, there is no campaign finance law prohibiting deepfake political ads. (A 2024 law temporarily regulated them, but that provision expired and was never in effect for a statewide election.)

The simplest way to regulate deepfakes is by requiring disclosure. The Massachusetts House on Feb. 11 passed a bill, originally sponsored by House Minority Leader Bradley Jones Jr., that would require political ads made with artificial intelligence to include a disclaimer that the ad “contains content generated by AI.” Violations would be punishable by fines up to $1,000.

Disclosure is the method of regulation adopted by 24 of the 26 states that regulate political deepfakes, according to the National Conference of State Legislatures. Disclosure requirements are easy to implement because they are already familiar to campaigns from campaign finance laws. The Federal Communications Commission proposed a rule in 2024 requiring broadcasters to disclose when political ads use AI-generated content, but that rule was never implemented. State lawmakers should pass a disclosure law and ensure that any enforcement mechanism, whether fines or a right to sue, is strong enough to matter.

The Massachusetts House also passed a separate bill that would go further, prohibiting the intentional distribution of “materially deceptive” audio or video within 90 days of an election if it depicts a candidate in a way intended to injure the candidate’s reputation or deceive voters into voting for or against them, or if it provides misinformation about the logistics of an upcoming election. A candidate could sue to prevent the material’s distribution. The restriction would not apply to satire and parody, or to news publications and broadcasters that use the content with a disclaimer.

Government has a clear interest in prohibiting deepfakes that intentionally interfere with election administration, such as by giving the wrong election date or implying that there are requirements for voting that don’t exist. It makes sense to ensure that’s prohibited, whether through a new law or under existing laws on voter suppression or fraud.

When it comes to speech about candidates, however, the challenge is narrowly crafting a law that doesn’t run afoul of the First Amendment. There may be ways to pass a law that avoids constitutional concerns, for example by closely paralleling existing defamation laws. But lawmakers have a high legal bar to clear.

The US Supreme Court ruled in 2012 that false statements — in that case, about obtaining military honors — are generally protected free speech. Massachusetts’ Supreme Judicial Court in 2015 struck down as unconstitutional a state law that made it a crime to publish false statements about political candidates. A California law prohibiting the use of political deepfakes that harm a candidate’s electoral prospects was struck down last year, although the court left open the possibility that a more narrowly tailored law could pass constitutional muster. A federal judge struck down Hawaii’s law prohibiting the dissemination of “materially deceptive media” last month. A legal challenge to a Minnesota law prohibiting political deepfakes within 90 days of an election is pending.

Lawmakers need to be careful not to infringe on political speech, while at the same time ensuring that AI doesn’t create a free-for-all where it becomes impossible to know what a candidate actually said or did. A disclosure law with a strong enforcement mechanism is the best way to accomplish that.

Editorials represent the views of the Boston Globe Editorial Board.