OpenAI bans suspected malicious users in China and North Korea
What's the story
OpenAI, a leading artificial intelligence (AI) firm, has terminated a number of user accounts from China and North Korea.
The company believes these accounts were abusing its technology for malicious purposes like surveillance and shaping public opinion.
The revelation comes from a recent report by OpenAI, which also emphasized the risk of AI being abused by authoritarian regimes against their own people and other countries.
Misuse instances
OpenAI's technology misused for propaganda and fraud
The report highlighted several cases of abuse, including one where users misused ChatGPT, OpenAI's popular AI chatbot, to generate Spanish-language news articles attacking the US.
These articles were subsequently published by mainstream Latin American news organizations under a Chinese company's name.
In another case, people possibly affiliated with North Korea employed AI to generate fake resumes and online profiles for fictitious job seekers, in a bid to fraudulently land jobs at Western firms.
Fraudulent activities
OpenAI's technology used for financial fraud and surveillance
The report also revealed a financial scam operation based in Cambodia that leveraged ChatGPT accounts to translate and post comments across various social media and communication platforms.
Meanwhile, several Chinese accounts were found using ChatGPT to draft sales pitches and debug code for a suspected social media surveillance firm.
These accounts were promoting an AI assistant to gather real-time information on anti-China protests in the US, UK, and other Western countries.
Surveillance tool
Banned accounts used special software to track discussions in West
The terminated accounts were allegedly using software called "Qianyue Overseas Public Opinion AI Assistant" to relay surveillance reports to Chinese authorities, intelligence agents, and embassy staff.
The software is specifically designed to detect online discussions in Western countries about human rights protests in China.
OpenAI's threat report stressed that it's against company policy to use its AI for communications surveillance or unauthorized monitoring of individuals, including on behalf of governments that seek to suppress personal freedoms and rights.