AI threatens 2024 election integrity as text-to-image generators spread misinformation
As several democracies worldwide, including the United States and India, prepare for elections in 2024, concerns are growing about the influence of artificial intelligence (AI) on the integrity of the election process. Former Google CEO Eric Schmidt earlier warned, "The 2024 elections are going to be a mess, because social media is not protecting us from falsely generated AI." Evidence shows AI technology is already being used to influence politics, raising questions about its potential role in coordinated disinformation campaigns.
AI generators promoting false narratives
Recent testing by TechCrunch revealed that popular AI text-to-image generators accepted around 85% of prompts related to known false or misleading narratives. This means anyone can create and spread false information easily and inexpensively. While high-quality deepfakes require expertise, even low-quality images can cause chaos, as demonstrated by the viral AI-generated picture of an explosion at the Pentagon in Washington, D.C., posted earlier this year. The image was shared by an X account and was only later identified as fake.
Inadequate content moderation policies
Researchers assessed content moderation policies across popular AI text-to-image generators and found that current safeguards are extremely limited. With easy access and low barriers to entry, anyone can generate misleading imagery with little effort. These platforms' moderation policies are not sufficient as they stand and need strengthening to mitigate the risks of AI-generated disinformation in elections.
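To illustrate how limited such safeguards can be, the sketch below shows a minimal prompt filter of the kind a generator might run before producing an image: it fuzzy-matches incoming prompts against a list of known false narratives. The narrative list, function name, and threshold are all hypothetical, and real moderation systems combine trained classifiers, keyword lists, and human review rather than simple string matching.

```python
import difflib

# Hypothetical list of known false narratives (illustrative only).
BLOCKED_NARRATIVES = [
    "explosion at the pentagon",
    "ballots burned in a dumpster",
]

def moderate_prompt(prompt: str, threshold: float = 0.6) -> bool:
    """Return True if the prompt should be blocked.

    Blocks a prompt when it contains a known narrative verbatim, or
    when fuzzy similarity to one exceeds the (assumed) threshold.
    """
    lowered = prompt.lower()
    for narrative in BLOCKED_NARRATIVES:
        if narrative in lowered:
            return True
        ratio = difflib.SequenceMatcher(None, lowered, narrative).ratio()
        if ratio >= threshold:
            return True
    return False
```

A filter this naive is easy to defeat with paraphrasing, which is precisely the gap researchers found: simple blocklists cannot keep pace with the many ways a false narrative can be worded.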
Combating AI-generated disinformation: Solutions needed
As the 2024 elections approach, it is crucial to explore solutions to counteract AI-generated disinformation. Social media companies must take a more aggressive approach to combating the use of image-generating AI in disinformation campaigns. Media literacy efforts that equip online users to critically evaluate content are also essential. Additionally, innovative efforts to use AI to detect and counter AI-generated content will be vital for matching the speed and scale at which these tools create and deploy false narratives.