Russian, Chinese, Israeli groups using AI to spread disinformation: OpenAI
OpenAI has disclosed that its AI tools were manipulated for covert influence operations. In a 39-page report, the company revealed how disinformation campaigns originating in countries including Russia, China, Israel, and Iran used its generative AI models to create propaganda content, translate it into multiple languages, and spread it on social media platforms. Despite these efforts, none of the campaigns achieved significant engagement or reached large audiences, according to the report.
Covert influence operations identified and banned
OpenAI identified and banned accounts linked to five covert influence operations over the past three months. The report details that these operations involved a mix of state and private actors. Two of the operations originated in Russia and created content criticizing the US, Ukraine, and several Baltic nations. Meanwhile, the Chinese operation generated text in English, Chinese, Japanese, and Korean, which operatives then posted on X and Medium.
Disinformation campaigns across other nations
Iranian actors used OpenAI's tools to generate full articles attacking the US and Israel, which were then translated into English and French. An Israeli political firm, Stoic, created a network of fake social media accounts that accused US student protests against Israel's war in Gaza of being antisemitic. Some of these disinformation spreaders were already known to researchers and authorities, including two Russian men sanctioned by the US Treasury and Stoic itself, which Meta had banned for policy violations.
AI's role in propaganda and disinformation campaigns
The report underscores the growing role of generative AI in disinformation campaigns, where it is used to scale up content production and produce more convincing foreign-language posts. However, it notes that "all of these operations used AI to some degree, but none used it exclusively." "Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet," the report added.
OpenAI's response to misuse of AI tools
Over the past year, bad actors around the world have exploited generative AI to influence politics and public opinion. This misuse, including deepfake audio, AI-generated images, and text-based campaigns aimed at disrupting elections, has increased pressure on companies like OpenAI to curb abuse of their tools. In response, OpenAI plans to publish similar reports on covert influence operations periodically and to remove accounts that violate its policies.