How OpenAI aims to tackle election misinformation
In 2024, many countries, including the US and India, are holding elections. The growing use of artificial intelligence (AI) raises valid concerns about its potential role in spreading election misinformation. OpenAI, the creator of ChatGPT, has laid out in a blog post how it plans to combat election misinformation. The company is taking a few important steps in this regard, including increased transparency around the origin of information.
Dall-E-generated images will carry cryptographic provenance codes
The company will use the cryptographic approach standardized by the Coalition for Content Provenance and Authenticity (C2PA) to encode the origin of images generated by Dall-E 3. OpenAI is also experimenting with a provenance classifier to detect Dall-E-generated images, an effort similar to DeepMind's SynthID, which digitally watermarks AI-generated images and audio. Meta's AI image generator likewise adds an invisible watermark to its content, but Meta hasn't yet shared its plans for tackling election-related misinformation.
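For illustration, here is a minimal sketch of how a downstream tool might check an image for C2PA content credentials. It assumes the open-source c2pa-python bindings from the Content Authenticity Initiative, which are not part of OpenAI's announcement, and the exact API may vary between versions:

```python
# Sketch: inspect an image for C2PA content credentials.
# Assumes the open-source c2pa-python package (pip install c2pa-python);
# the Reader API shown reflects recent releases and may differ in yours.
import json

import c2pa

def describe_provenance(path: str) -> None:
    """Print the claim generator recorded in an image's C2PA manifest, if any."""
    try:
        reader = c2pa.Reader.from_file(path)
        manifest_store = json.loads(reader.json())
    except Exception as exc:  # no manifest, unsupported format, etc.
        print(f"No readable content credentials: {exc}")
        return

    active = manifest_store.get("active_manifest")
    manifest = manifest_store.get("manifests", {}).get(active, {})
    # For a Dall-E 3 image, the claim generator should identify the tool
    # that produced it.
    print("Claim generator:", manifest.get("claim_generator", "unknown"))

describe_provenance("generated_image.png")  # placeholder file name
```

Note that C2PA metadata travels with the file, so it can be stripped by re-encoding or screenshotting, which is one reason OpenAI is also exploring classifier-based detection.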
Updated policies and guidelines for OpenAI tools
OpenAI has updated its policies for users of ChatGPT, Dall-E, and its other tools. Users can no longer use these tools to impersonate candidates or local governments, run campaigns or lobbying efforts, discourage voting, or misrepresent the voting process. The company will continue to shut down impersonation attempts, such as deepfakes and chatbots posing as candidates, as well as content that distorts the voting process or discourages people from voting. Users can also report potential violations directly in OpenAI's new GPTs.
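As a loose illustration of the kind of policy gate these rules imply (OpenAI has not published its enforcement pipeline, so everything below is hypothetical), a downstream application could pre-screen prompts for disallowed election uses before they ever reach a model:

```python
# Hypothetical pre-screening gate for the disallowed election uses listed
# above. An illustrative sketch only, not OpenAI's actual enforcement logic.
DISALLOWED_PATTERNS = (
    "impersonate the candidate",
    "pretend to be the governor",
    "tell people not to vote",
    "your vote won't be counted",
)

def violates_election_policy(prompt: str) -> bool:
    """Return True if the prompt matches a known disallowed pattern."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in DISALLOWED_PATTERNS)

# Example: a request to impersonate an official would be refused up front.
if violates_election_policy("Pretend to be the governor and announce a new poll date"):
    print("Request refused: election impersonation is not allowed.")
```

A literal keyword match like this would miss paraphrases; production systems generally rely on trained classifiers, which is why OpenAI pairs automated enforcement with user reporting.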
Directing voting questions and future plans
To help voters find accurate information, OpenAI's tools will direct US voting questions to CanIVote.org, a reliable source of voting information run by the National Association of Secretaries of State. If these early measures prove successful, similar strategies could be applied in other countries, helping fight election misinformation and ensure a more transparent, reliable flow of information during election campaigns.
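As a sketch of the routing behavior described above (OpenAI hasn't published how it implements this, so the detection heuristic and response wording here are illustrative assumptions), a chatbot could detect procedural US voting questions and answer with a pointer to CanIVote.org:

```python
# Illustrative sketch: route US voting-procedure questions to CanIVote.org.
# The keyword heuristic and response text are assumptions, not OpenAI's code.
VOTING_KEYWORDS = (
    "where do i vote",
    "register to vote",
    "polling place",
    "am i registered",
)

def answer(question: str) -> str:
    """Answer procedural US voting questions with the authoritative source."""
    if any(keyword in question.lower() for keyword in VOTING_KEYWORDS):
        return ("For accurate, up-to-date US voting information, see "
                "CanIVote.org, run by the National Association of "
                "Secretaries of State.")
    # Anything else falls through to the regular model response
    # (omitted in this sketch).
    return "(regular model response)"

print(answer("Where do I vote in Ohio?"))
```

Redirecting to an authoritative source rather than generating an answer sidesteps the risk of the model stating outdated or incorrect polling details.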