The next chapter in the ongoing battle between artists and artificial intelligence revolves around the biggest name in music: Taylor Swift.

In several recently filed trademark applications, the pop star is trying to protect her voice and likeness from unauthorized, and potentially nefarious, AI use with so-called soundmarks. They involve two sound recordings of her voice, along with an image of her performing onstage.

She might not be the first artist to file a soundmark –– Matthew McConaughey did so in January –– but Swift is certainly the most high profile. While still a relatively new and untested part of trademark law, soundmarks have started to become a tool for combating the proliferation of AI-generated imitations of celebrities. Swift has already faced deepfakes, or AI-generated video clips, of her falsely promoting kitchenware, pushing data collection scams and even endorsing political candidates, including President Donald Trump.

Swift’s legal maneuver is a bold move that comes at a time when the technology is rapidly advancing and there are few overarching laws in the U.S. capable of keeping up. Whether or not her filings are approved, “artists like Swift and McConaughey are sending a strong signal to any would-be infringers that they take their publicity rights seriously and won’t hesitate to enforce them, which could be enough to persuade entities that want to generate AI copycats to choose different targets,” said Alexandra Roberts, a professor of media law at Northeastern University.

That signal might have to be enough: given the limits of trademark law’s reach, Roberts remains unconvinced that Swift’s specific soundmarks stand a chance of being registered.

Trademarks were designed to identify the source of a product and protect consumers from confusion or even deceptive business practices. Soundmarks, specifically, are often tied to an identifiable brand, like the NBC chimes or the roar of the MGM lion that plays before a movie. It’s also not unheard of to pursue similar trademarks in music. Rapper Pitbull successfully registered his famous yell, derived from the traditional Mexican grito, a loud, passionate scream.

But the sounds Swift is trying to register trademarks for are different. In one recording, she says, “Hey, it’s Taylor Swift, and you can listen to my new album, ‘The Life of a Showgirl,’ on demand on Amazon Music Unlimited.” In the second, spoken at a lower pitch than the first, she says, “Hey, it’s Taylor. My brand new album ‘The Life of a Showgirl’ is out on Oct. 3 and you can click to presave it so you can listen to it on Spotify.”

Such phrases are not the kind of identifiable “marks” that typically qualify under trademark law, according to Roberts.

For David Herlihy, who teaches music and law at Northeastern, it raises bigger questions about what Swift is doing. A musician himself, Herlihy said Swift’s motivation is understandable.

“What she’s trying to do is prevent somebody from nicking her identifiable attributes in a way she doesn’t like,” Herlihy said.

But her attempt to turn her voice itself into property could have consequences for other artists, if trademark law becomes “a potential cudgel that she can use to beat people about the heads if they try to do something that emulates her in some way or other,” he said.

Typically, trademarks only protect against commercial uses, like an unauthorized AI-generated ad featuring Taylor Swift’s voice. Herlihy is concerned that if filings like Swift’s are approved, it could set a dangerous precedent with potential impacts on the kind of free speech and creative works that have historically been more difficult targets for trademark suits.

“If you’re palming something off as her, that’s not OK,” Herlihy said. “But if I try to sing like her, shouldn’t that be OK? Or if I use a device to emulate the sound of her voice, is that not OK? … Does Taylor own Taylorisms? At what point does that become too broad of a property grab?”

Regardless of whether Swift’s strategy passes legal muster, it reflects broader anxieties among artists about a technology that is advancing at light speed, said Rupal Patel, a professor of communication sciences and disorders at Northeastern who studies the potential uses for AI voices.

As the technology improves and AI voices become more capable of capturing an individual’s vocal qualities, Patel argues that moves like Swift’s could become necessary for artists. In the absence of comprehensive AI laws, Patel said, “people are using whatever means they can right now to protect their name and likeness.”

Some efforts are being made to expand and enshrine protections against AI for artists like Swift, but they are few and far between, Patel said. In 2024, Tennessee, a hub for the country music industry, became the first state to include the voice as a protected property right. Arkansas, Montana, Pennsylvania and Utah adopted somewhat similar legislation in 2025. Outside the U.S., more headway has been made: The European Union’s AI Act includes protections against unauthorized voice cloning and deepfakes.

Even if Swift’s trademark filings fall through, Patel is hopeful that Swift, just by virtue of her popularity, will further elevate the conversation around AI, artists and consent. Whatever the ramifications for creatives, the attention could raise public awareness of how this technology is transforming our relationship with the voice.

“We’re going to be even more sensitive to what our voice is and how much it reflects our identity,” Patel said.