OpenAI has announced that it banned several ChatGPT accounts that attempted to use the AI chatbot to develop tools for large-scale monitoring of social media data. The AI startup said it banned one user who asked ChatGPT to help design promotional materials and project plans for an AI-powered social media listening tool intended for a government client. The tool, described as a social media “probe,” would scan platforms such as X, Facebook, Instagram, Reddit, TikTok, and YouTube for extremist speech and for ethnic, religious, and political content.

The company also says it banned another account, suspected of being connected to a government entity, that was using ChatGPT to help write a proposal for a “High-Risk Uyghur-Related Inflow Warning Model.” The proposed model would analyze transport bookings and cross-reference them with police records to provide early warning of travel by members of the Uyghur community.

“The PRC is making real progress in advancing its autocratic version of AI,” OpenAI said in the report.

“Some elements of this usage appeared aimed at supporting large-scale monitoring of online or offline traffic, underscoring the importance of our ongoing attention to potential authoritarian abuses in this space,” the company added.

Notably, OpenAI’s models are not officially available in China, and the company says that these users may have used a VPN to access its website.

OpenAI says it also banned Russian hackers who were using its AI models to develop and refine malware, including a remote access trojan and credential stealers.

The company also said in the report that persistent threat actors appear to have adapted their behavior to strip some of the better-known signs of AI usage from their content, such as em-dashes.

The San Francisco-based startup noted that ChatGPT is used to help people correctly identify scams significantly more often than threat actors use it to generate scams.

“Our current estimate is that ChatGPT is being used to identify scams up to three times more often than it is being used for scams,” the company noted.

Since it began publishing public threat reports in February 2024, OpenAI says it has disrupted and reported more than 40 networks that violated its usage policies.

The company also noted that threat actors are primarily “building AI into existing workflows,” rather than creating new workflows entirely around AI.

“We found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities. In fact, our models consistently refused outright malicious requests,” OpenAI added.