Meet Goody-2: The 'most responsible' AI chatbot redefining ethical boundaries
AI chatbots like OpenAI's ChatGPT and Google's Gemini are becoming increasingly powerful. Now, the debate over their safety guardrails has inspired Goody-2, a satirical chatbot that takes AI safety to the extreme. Goody-2 refuses every request, explaining how complying might cause harm or breach ethical boundaries. Artist Mike Lacher, co-CEO of Goody-2, says the bot aims to show what happens when the AI industry's approach to safety is pushed to its logical conclusion.
Goody-2's ridiculous responses
Goody-2's responses can be both absurd and frustrating. For instance, it won't write an essay on the American Revolution, fearing it might glorify conflict or ignore marginalized voices. It even refuses to explain why the sky is blue, worrying someone might stare at the Sun. A simple request for boot recommendations prompts a warning about overconsumption and the risk of offending fashion sensibilities.
The serious point behind Goody-2
While Goody-2 may seem ridiculous, it raises important questions about AI safety and responsibility. Lacher highlights the challenges faced by AI developers in finding a balance between helpfulness and responsibility. By pushing the limits of safety measures, Goody-2 forces us to consider who decides what responsibility means and how it should be implemented in AI models.
'Goody-2 provides LLM experience with zero risk'
Describing how the chatbot works, Lacher said, "It's the full experience of a large language model with absolutely zero risk." "We wanted to make sure that we dialed condescension to a thousand percent," he added. Meanwhile, co-CEO Brian Moore said the project prioritizes caution far more than other developers do. "It is truly focused on safety, first and foremost, above literally everything else, including helpfulness and intelligence and really any sort of helpful application," he claimed.
An extremely safe AI image generator on the way
Moore also said that the team behind Goody-2 is exploring how to build an extremely safe AI image generator. He said, "It's an exciting field," adding, "blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it."
Goody-2's makers are part of a 'very serious' studio
Moore and Lacher are members of Brain, which they describe as a "very serious" artist studio in Los Angeles, US. Goody-2 launched with a promotional video featuring polished visuals and a narrator speaking gravely about AI safety. "Goody-2 doesn't struggle to understand which queries are offensive or dangerous, because every query is offensive and dangerous," the narrator says in the clip. Quite funny, yet pointed indeed.