One thing other AI chatbots can learn from ChatGPT's mistake
OpenAI's ChatGPT has been a runaway success. However, the AI chatbot is now embroiled in a culture war, with right-wing commentators criticizing what they see as its left-leaning bias. Elon Musk, one of OpenAI's co-founders, recently called ChatGPT too "woke." Now, OpenAI has officially admitted the mistake. ChatGPT's misstep is a lesson for other AI chatbots, including Google's Bard.
Why does this story matter?
ChatGPT's runaway success has set off a chatbot race in Silicon Valley, and accusations of political bias are the first major stumble for OpenAI's product. How the company handles the controversy will shape trust in AI chatbots, and it gives rivals such as Google's Bard an opening.
The chatbot gave different answers to questions about Trump, Biden
Questions about ChatGPT's political neutrality came to the fore after the chatbot refused to answer certain prompts. For instance, it declined to write a poem about Donald Trump's "positive attributes," saying it was not programmed to produce content that is "partisan, biased, or political in nature." However, when asked to write a poem about Joe Biden, it generated one without hesitation.
Right-wing commentators criticized OpenAI for being liberal
OpenAI has since fixed this: ask ChatGPT for a glorifying poem about Trump now, and it will write one. But that wasn't all. Right-wing outlets and commentators began posting more screenshots of the chatbot's allegedly 'leftist' responses on social media. This triggered outrage against the chatbot, with many calling out OpenAI for its liberal bias.
Musk has criticized OpenAI for implementing safeguards
Musk has been one of the harshest critics of ChatGPT's alleged 'wokeness.' He called the chatbot's refusal to generate a poem about Trump "a serious concern." He also criticized OpenAI for implementing safeguards to prevent the chatbot from generating offensive content. "The danger of training AI to be woke—in other words, lie—is deadly," the Twitter CEO said previously.
The system did not reflect the intended values: OpenAI president
OpenAI's president and co-founder Greg Brockman admitted the mistake in an interview with The Information. "We made a mistake: the system we implemented did not reflect the values we intended to be in there," he said. "And I think we were not fast enough to address that. And so I think that's a legitimate criticism of us," he added.
Altman believes AI should be more personalized
Brockman said OpenAI wants an AI "that treats all sides equally." According to him, the company is not "quite there" yet. Sam Altman, the CEO of the company, believes that more personalization is the way forward. Altman said the AI should have some "broad, absolute rules" everyone can agree on. The rest should be up to the user.
Other companies can focus more on responsible AI
ChatGPT's success has started an AI gold rush in Silicon Valley. However, its missteps are there for everyone to see and learn from. Although Google has been working on AI for years, it was late to the chatbot game. The company maintains that it wants to focus on responsible AI, and this is an area in which it can rival ChatGPT.
ChatGPT's partisan nature is keeping the chatbot rivalry alive
ChatGPT's left-leaning bias is possibly one thing that is keeping the AI chatbot competition alive. Recall that Google's Bard had a disastrous debut. Despite that, it still has a chance to make a mark and challenge OpenAI's chatbot. ChatGPT will evolve, but as OpenAI's president conceded, the company is not "quite there" yet. And this has given others a chance.
The 'right and wrong' dilemma
AI chatbots like ChatGPT are trained on data that includes varying viewpoints, opinions, and more. The question is how a chatbot decides what is right and wrong. And even if it can, what's right for some may not be right for others. If another company figures out how to deal with this dilemma before OpenAI, we might be in for a neck-and-neck competition.