Artificial intelligence has become startlingly good at faking reality. With just 15 seconds of audio, it can clone a person’s voice; with a short prompt, it can generate videos that look indistinguishable from everyday life. Faces move naturally, lighting behaves as it should, and the small visual glitches that once gave away a fake are rapidly disappearing.

Even for digital forensic experts, the line between real and fake is dissolving. Hany Farid, a UC Berkeley professor who has spent more than two decades studying manipulated media, says the problem is no longer just that AI can generate convincing videos and images — it’s that it is improving faster than humans can adapt.

“We used to measure progress in years. Now it’s happening in weeks,” Farid tells TODAY.com.

The consequences are no longer theoretical. As these tools become cheaper and easier to use, they are increasingly being deployed in scams that rely not on technical sophistication, but on emotional urgency. Older adults, who did not grow up navigating a digital information ecosystem engineered for manipulation, are particularly vulnerable.

A 2025 report from the Federal Trade Commission (FTC) found that fraud losses among adults 60 and older have surged in recent years, rising from about $600 million in 2020 to $2.4 billion in 2024. Much of that increase was driven by cases in which victims lost more than $100,000, often through investment schemes, impersonation scams or online relationships that turned fraudulent.

Increasingly, these scams are powered by AI. Criminals can now clone a loved one’s voice with seconds of audio, impersonate them on a phone or video call, and invent a crisis that demands immediate action: a car accident, an arrest, a ransom. In those moments, hesitation disappears, and with it, sometimes, life savings.

How to protect yourself from AI scams

Create a family ‘code word’

But for many people, the greater risk is not viral content; it’s the messages and calls that feel personal. Protecting against those scams often comes down to simple habits. One of the most effective, Farid says, is also one of the simplest: agree on a family code word. In an urgent call, real or not, asking for that word can create a moment of pause, and a way to verify who is actually on the other end.

“You have to test each other every once in a while,” Farid says, a reminder that even simple safeguards only work if people remember to use them.

Call back a known number

A code word works best alongside other small habits. One of the most important, Farid says, is to hang up and call back. Even if a call appears to come from a loved one’s number, that number cannot be trusted. Scammers can “spoof” caller ID, making it look as if a call is coming from a child, spouse or friend when it is not. Calling back on a number you already know creates a second layer of verification, and a chance to confirm what is real before acting on what may not be.

The goal, Farid says, is not to outsmart the technology, but to change how people respond to it.

“You’re not going to detect your way out of this,” he says. “You have to protect yourself.”

Create a habit of using fact-checking sites

The same principle applies to the steady stream of viral videos and stories circulating online, particularly those depicting dramatic or emotional events overseas. Rather than trying to parse visual clues, Farid says, viewers are better off turning to established fact-checking organizations. He points to sites like Snopes, PolitiFact and FactCheck.org, which routinely investigate widely shared claims. “Most of this content has been debunked by the time you’re seeing it,” he says. “It’s often just a search away.”