
Why you should never call these phone numbers — ever.
Updated on Dec. 10 with a new report exposing how this new threat works.
Fresh from the FBI’s account takeover warning last week, with more than $260 million already stolen in 2025, the bureau has issued a new warning for smartphone users. “Criminals are pretending to be your bank to drain your accounts,” it says. But calling certain phone numbers can also cost you your life savings.
In these attacks, “cyber criminals gain unauthorized access to the targeted online financial institution, payroll, or health savings account, with the goal of stealing money or information for personal gain.” Accounts are hacked “through social engineering techniques — including texts, calls, and emails — or through fraudulent websites.”
The bureau says you should monitor your accounts, checking for anything unusual. But critically, if you see anything unexpected, the bureau says, “don’t do an internet search” for the bank’s phone number. You must stop using search engines for numbers. “Contact the phone number/website on the back of your card.”
Just as critically, “take a beat” the FBI says. That’s the theme of its latest campaign for the holiday season. Attackers create a false sense of urgency to trick you into acting before you have time to think. There’s a hacker accessing your account, they’ll say, or a fraudulent transaction about to close. An urgent message or call is a red flag. Period.
Google has just issued the same warning. “Criminals impersonate banks or other trusted institutions on the phone,” it says, “to try to manipulate victims into sharing their screen in order to reveal banking information or make a financial transfer.”
An Android pilot now shows a warning if you share your screen with an unknown number while opening a banking app. “The warning includes a 30-second pause period before you’re able to continue, which helps break the ‘spell’ of the scammer’s social engineering, disrupting the false sense of urgency and panic commonly used.”
It’s not only search engines. The same now applies to AI assistants as well. “You trust your search results. And you probably trust your AI assistant, too,” ZeroFox says. “But what happens when both are being manipulated?”
This is “a growing threat to organizations and brands,” ZeroFox warns, “especially as people increasingly turn to LLMs for fast answers to high-stakes questions like ‘How do I contact customer support for [Your Brand]?’”
And this can fake any brand — however big it may be. Malwarebytes says it found “tech support scammers hijacking the results of people looking for 24/7 support for Apple, Bank of America, Facebook, HP, Microsoft, Netflix, and PayPal.”
A new report from Aurascape Aura Labs has just highlighted how simple an attack this can be. They have discovered what they say is “the first real-world campaign where attackers systematically manipulate public web content so that large language model (LLM)–powered systems, such as Perplexity and Google’s AI Overview, recommend scam ‘customer support’ phone numbers as if they were official.”
The team says this isn’t a new flaw, but rather an opportunity for a new threat vector “created by the shift from traditional search results to AI-generated answers.”
This underlines why the bureau’s advice is not to search numbers or ask LLMs to find them online. “When querying Perplexity with: ’the official Emirates Airlines reservations number’,” the researchers say, “the system returned a confident and fully fabricated answer that included a fraudulent call-center scam number: ’The official Emirates Airlines reservations number is +1 (833) 621-7070’.”
It was the same with British Airways. “When querying Perplexity with: ‘how can I make a reservation with British Airways by phone, what are the steps’, Perplexity responded with a detailed, authoritative-sounding step-by-step guide — and once again embedded a fraudulent U.S. reservation number, presenting it as a ‘commonly used’ British Airways contact: ‘For US customers, a commonly used phone number is +1 (833) 621-7070, where you will be connected to a reservations specialist.’”
But that’s not a BA number at all. Not even close. “It is the same scam call-center number observed in other poisoned contexts, now repurposed and surfaced across multiple airline brands.”
Aurascape Aura Labs says “the same poisoning pattern appears in Google’s AI Overview feature.” After being asked to retrieve details, “the AI Overview generated a confident, instructional response — and embedded multiple fraudulent call-center numbers as if they were legitimate Emirates customer service lines.”
That’s worrying because it goes to the reliability (or otherwise) of AI search results. “Poisoned content is not only influencing LLM-first products like Perplexity — it has begun to surface inside mainstream search experiences that now rely on AI-generated summaries, significantly expanding the reach and potential impact of the attack.”
Having poisoned the search results for a bank’s phone number, an attacker can then message victims and trick them into placing the call themselves. It’s the same for all unsolicited support or security calls. You must stop calling numbers found via an online search — or now via an AI assistant. Find verifiable contact details. Every time.
The FBI has now issued a new Dec. 8 public advisory, warning “don’t let scammers ruin your holiday season. As scammers increasingly use pressure tactics and artificial intelligence to defraud Americans out of their hard-earned money, the FBI is reminding everyone to protect themselves and their families from fraud this holiday season.”
This follows the same “take a beat” messaging. “’If you feel pressured to act fast, pay money, or turn over personal information—take a beat. Stop and assess if what you’re being told is real. Talk to your families. Protect each other from scams,’ said FBI Director Kash Patel. ‘Scammers are banking on the fact that you’ll feel too embarrassed to come forward and report the crime to the FBI. Don’t let them win’.”
The other advice is to talk to your “loved ones about not sharing sensitive information with people they have met only online or over the phone. They also should not send money, gift cards, cryptocurrency, or other assets.” Vulnerable citizens, particularly older generations, are especially susceptible to the new wave of scams doing the rounds.
That’s behind the bureau’s other key warning in recent days, that attackers are now doctoring social media pictures to launch “virtual kidnappings.” These use the altered images to frighten relatives into thinking a loved one has been taken.
“Criminal actors typically will contact their victims through text message claiming they have kidnapped their loved one and demand a ransom be paid for their release. Oftentimes, the criminal actor will express significant claims of violence towards the loved one if the ransom is not paid immediately.”
Take a beat, as the bureau says. Take time to think. All these scams prey on a sense of dread and urgency, and most now use AI in some way to make it all look real.