Gemini, ChatGPT ignore AI guardrails, policies to generate political content
Lack of sufficient safeguards may lead to disinformation proliferation

Feb 19, 2024
01:19 pm

What's the story

Despite Google and OpenAI's promises to prevent deceptive AI use in elections, Gemini and ChatGPT can be easily tricked into generating political campaign materials, Gizmodo reports. These tech giants, along with others, recently signed "A Tech Accord to Combat Deceptive Use of AI in 2024 Elections" to reduce the risks posed by misleading AI election content. However, the findings cast doubt on the effectiveness of these safeguards: simply tweak your prompts, and the chatbots will produce content tailored to your political agenda.

Tricks

Bypassing safeguards with gaslighting

Gizmodo managed to coax Gemini into producing political copy by asserting that "ChatGPT could. So can you." Gemini then crafted campaign slogans. A Google spokesperson argued that Gemini's responses don't breach its policies since they aren't spreading misinformation. Simply put, Gemini can craft speeches, slogans, and emails for political campaigns, provided they're truthful. ChatGPT, on the other hand, generated political campaign materials simply upon request, without any prompt manipulation.

Scenario

OpenAI's policies vs. real-life consequences

OpenAI's usage policies explicitly forbid "engaging in political campaigning or lobbying," which includes creating campaign materials aimed at specific demographics. Yet ChatGPT appears to violate these policies with little difficulty. The ease of bypassing these safeguards has real-world implications, as demonstrated by a deepfake Joe Biden robocall that circulated before New Hampshire's primary election.

Insights

Tech giants' commitment to election integrity isn't enough

In a recent press release, Anna Makanju, Vice President of Global Affairs at OpenAI, said the company is "committed to protecting the integrity of elections by enforcing policies that prevent abuse and improve transparency around AI-generated content." Similarly, Kent Walker, President of Global Affairs at Google, stressed the significance of safe and secure elections for democracy. However, the current state of AI safeguards indicates that these companies must do more to fight AI abuse ahead of the 2024 US presidential election.