Hang up and create a secret word, FBI says


Update, Dec. 06, 2024: This story, originally published Dec. 05, now includes more details on reporting smartphone crime to the FBI along with additional input from security experts regarding the AI-driven cyberattack landscape as the new year fast approaches.

As recent reports have revealed, the use of AI in smartphone cyber attacks is increasing: tech support scams targeting Gmail users, fraudulent gambling apps and banking fraud, to name but a few. Now the Federal Bureau of Investigation has issued a public service announcement warning of how generative AI is being used to facilitate such fraud, and advising smartphone users to hang up and create a secret word to help mitigate these cyber attacks. Here’s what the FBI warned you must do.


FBI Warns Of Generative AI Attacks Against Smartphone Users

In public service alert number I-120324-PSA, the FBI has warned of cyber attackers increasingly looking to generative AI to commit fraud on a large scale and increase the believability of their schemes. “These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud,” the FBI said. Given that, as the FBI admits, it can be difficult to tell what is real and what is AI-generated today, the public service announcement serves as a warning for everyone when it comes to what to look out for and how to respond to mitigate the risk. Although not all the advice is aimed directly at smartphone users, given that this remains a primary delivery mechanism for many AI deepfake attacks, especially those using both facial and vocal cloning, it is this advice that I am focusing on.

The FBI warned of the following examples of AI being used in cyber attacks, mostly phishing-related.

The use of generative AI to produce photos to share with victims so as to convince them they are speaking to a real person.
The use of generative AI to create images of celebrities or social media personas promoting fraudulent activity.
AI-generated short audio clips containing the voice of a loved one or close relative in a crisis situation to ask for financial assistance.
AI-generated real-time video chats with alleged company executives, law enforcement, or other authority figures.
AI-created videos to “prove” the online contact is a “real person.”

AI is going to start blurring our everyday reality as we head into the new year, said Siggi Stefnisson, cyber safety chief technical officer at trust-based security platform Gen, whose brands include Norton and Avast. “Deepfakes will become unrecognizable,” Stefnisson warned. “AI will become sophisticated enough that even experts may not be able to tell what’s authentic.” All of which means, as the FBI has suggested, that people are going to have to ask themselves every time they see an image or watch a video: is this real? “People with bad intentions will take advantage,” Stefnisson said. “This can be as personal as a scorned ex-partner spreading rumors via fake photos on social media or as extreme as governments manipulating entire populations by releasing videos that spread political misinformation.”


The FBI Says To Hang Up And Create A Secret Word

To mitigate the risk of these smartphone-based AI cyber attacks, the FBI has warned that the public should do the following:

Hang up the phone, then verify the caller’s identity by researching their contact details online and calling the number you find directly.
Create a secret word or phrase known only to your family and close contacts, so it can be used for identification in the event of a genuine emergency call.
Never share sensitive information with people you have met only online or over the phone.

How To Report AI-Powered Smartphone Fraud Attacks To The FBI

If you believe you have been a victim of a financial fraud scheme, please file a report with the FBI Internet Crime Complaint Center. The FBI requests that when doing so, you provide as much of the following information as possible:

Any information that can assist with the identification of the attacker, including their name, phone number, address and email address, where available.
Any financial transaction information, including dates, payment types and amounts, account numbers, the name of the financial institution that received the funds and any recipient cryptocurrency addresses.
As complete a description of the attack as possible: the FBI asks that you describe your interaction with the attacker, how contact was initiated, and what information you provided to them.