OpenAI says China-linked groups used ChatGPT for propaganda campaigns
ChatGPT was used to create internal documents and marketing materials


Jun 05, 2025, 05:31 pm

What's the story

OpenAI has exposed and dismantled 10 covert influence operations that misused its generative artificial intelligence (AI) tools. Four of these were likely run by the Chinese government. These operations were allegedly using ChatGPT to generate posts and comments on social media platforms, as well as performance reviews for their work, according to OpenAI's researchers.

AI misuse

China's increasing efforts to sway public opinion

The Chinese operations allegedly used OpenAI's AI chatbot to create internal documents and marketing materials. This comes amid China's increasing efforts to sway public opinion and conduct online surveillance. Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said during a call with reporters that "what we're seeing from China is a growing range of covert operations using a growing range of tactics."

AI disruption

OpenAI banned accounts associated with these operations

Over the last three months, OpenAI has disrupted operations that were abusing its AI tools and banned the accounts associated with them. Nimmo revealed that China-linked operations "targeted many different countries and topics, even including a strategy game." They combined elements of influence operations, social engineering, and surveillance across multiple platforms.

Operation details

'Sneer Review' operation targeted multiple platforms

One of the Chinese operations, dubbed "Sneer Review," used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook, and other websites in English, Chinese, and Urdu. The subjects included praise and criticism for the Trump administration's dismantling of the US Agency for International Development (USAID), as well as criticism of a Taiwanese game where players try to defeat the Chinese Communist Party.

Covert strategies

How the 'Sneer Review' operation worked

The Sneer Review operation generated posts and comments replying to them, which OpenAI's report said "appeared designed to create a false impression of organic engagement." It also used ChatGPT to generate critical comments about the game and write a long-form article claiming the game received widespread backlash. The operators behind this operation also used OpenAI's tools for internal work, including creating "a performance review describing, in detail, the steps taken to establish and run the operation."

Intelligence gathering

Another operation used AI tools for data analysis

Another China-linked operation used ChatGPT to write posts and biographies for accounts on X, translate emails and messages from Chinese to English, and analyze data. This included "correspondence addressed to a US Senator regarding the nomination of an Administration official," OpenAI said, although the company could not independently confirm whether the correspondence was actually sent. The operators behind the same operation also claimed to have conducted "fake social media campaigns and social engineering designed to recruit intelligence sources."

Past reports

Exposing operations linked to Russia and Iran

In its last threat report in February, OpenAI flagged a surveillance operation linked to China that claimed to monitor social media "to feed real-time reports about protests in the West to the Chinese security services." The new report also exposed covert influence operations likely originating from Russia and Iran. It further uncovered a spam operation tied to a commercial marketing company in the Philippines, a recruitment scam linked to Cambodia, and a deceptive employment campaign reminiscent of North Korean operations.