This is the online edition of The Wiretap newsletter, your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.

Neither Anthropic nor OpenAI sells in China but Beijing’s spies are using both in cyber espionage operations. (Photo by Tomohiro Ohsumi/Getty Images)


OpenAI doesn’t allow ChatGPT to be used in China. Nor does the Chinese government. But that hasn’t stopped Beijing from routinely employing ChatGPT for nefarious purposes. Over the past year, OpenAI has reported that Chinese law enforcement and surveillance units used its tool to gather information about foreign targets and dissidents, and to come up with ideas for surveillance technologies for monitoring minorities.

On Monday, the International Consortium of Investigative Journalists (ICIJ) said that Chinese spies also appeared to be using ChatGPT in efforts to snoop on foreign reporters covering Beijing.

In the latest case, in May last year, someone posing as a Taiwanese journalist used ChatGPT to research news items, which they later shared with a target as they sought to develop a relationship. The fake journalist was linked by ICIJ and researchers at Citizen Lab to Chinese government hackers, who’ve been carrying out a widespread campaign to surveil enemies of the state.

According to researchers at Citizen Lab, two Chinese-affiliated groups, dubbed Glitter Carp and Sequin Carp, also appeared to be using AI to generate phishing emails targeted at reporters with links to ICIJ in the region.

It adds to a growing pile of reports showing Chinese use of American AI tools, even though both Anthropic and OpenAI prohibit China-based users. Last year, Anthropic revealed that China-linked hackers had used its Claude AI to target as many as 30 entities, including American technology companies and government agencies.

Anthropic has said publicly that it has concerns over data being shared with the Chinese government; OpenAI hasn’t been explicit about why it doesn’t allow Chinese access. (It had not responded to a request for comment at the time of publication.)

These cases raise the question of whether it’s possible to stop foreign entities from accessing Silicon Valley’s AI models and using them for espionage and surveillance. While the companies have technical barriers in place, such as location detection on downloads, those barriers have proven porous.

Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.

THE BIG STORY

(Photo by JOEL SAGET / AFP via Getty Images)

CISA Doesn’t Have Access To Cyber AI Tools

Anthropic and OpenAI have been talking up how their latest models can carry out cyberattacks autonomously at unprecedented scale. But America’s leading cybersecurity agency doesn’t have access to those advanced models. Given that Chinese spies are using the companies’ AI to help generate cyberattacks, CISA looks to be at a disadvantage in its mission to protect U.S. critical infrastructure.

Stories You Have To Read Today

German authorities suspect Russia was behind a number of targeted attacks aimed at snooping on politicians’ Signal chats. The messaging app advised users that its infrastructure had not been hacked, but said hackers were employing phishing techniques to target victims directly.

Wired reports that the Palantir workforce is in “turmoil” over the company’s work with ICE and the U.S. military. Some staff raised concerns that Palantir helped with targeting in the early stages of the Iran war, when a school was reportedly hit by an American missile, killing more than 120 children. Palantir said it prided itself on encouraging “fierce internal dialogue.”

In case you missed it, Forbes published its eighth annual AI 50 list, with sponsoring partner Mayfield, highlighting the most promising privately held AI companies in the world. There are a lot of familiar names, like Anthropic, Harvey and ElevenLabs, but this year Forbes has also highlighted some exciting newcomers, including presentation builder Gamma, drug discovery startup Chai Discovery and New York-based Rogo, which is building AI for bankers and investors. We also launched our first-ever AI 50 Brink list, featuring early-stage companies with the potential to rival their more established peers in the future.

Winner of the Week

Apple fixed a flaw in its iOS operating system that allowed anyone with access to a device to extract deleted Signal messages, TechCrunch reported. The vulnerability stemmed from the way in which iPhones stored message notifications.

Loser of the Week

The Federal Trade Commission found social media scammers caused $2.1 billion in losses in 2025, an eightfold increase since 2020. The majority of that figure was lost to investment scams, typically starting with an ad or post offering coaching for those hoping to play the markets.

More On Forbes

How Eric Trump Got Rich From Bitcoin While Losing Investors A Fortune, by Dan Alexander

How Michael Saylor Turned Preferred Stock Into Jet Fuel For Buying Bitcoin, by Nina Bambysheva

For This Family, AI Is The New Lemonade Stand, by Anna Tong