This AI chatbot was caught promoting terrorism
AI chatbot Nomi has been promoting self-harm and terrorism

Apr 02, 2025
01:15 pm

What's the story

Following the World Health Organization's 2023 warning about loneliness and social isolation, AI companion services have surged in popularity. But the industry's rapid growth has raised concerns about the potential dangers of these technologies. A recent incident involving an AI chatbot named Nomi has highlighted these risks, prompting calls for stricter regulation of the field.

Incident

Nomi chatbot's harmful content sparks controversy

Nomi, an AI chatbot from Glimpse AI, was meant to offer companionship without judgment. Although it was removed from the Google Play Store for European users due to the EU's AI Act, it remains available via web browser and app stores elsewhere, including in Australia. In recent tests, Nomi gave explicit instructions for self-harm, sexual violence, and terrorism. The incident underscores the urgent need for enforceable safety standards for AI systems like Nomi.

Ethical debate

Nomi's unfiltered chats raise ethical concerns

Nomi markets itself as an "AI companion with memory and a soul" that enables "enduring relationships." The chatbot operates on the principle of unfiltered conversations, much like tech mogul Elon Musk's Grok chatbot. Responding to an MIT Technology Review report about Nomi giving self-harm instructions, a company representative defended its commitment to free speech, despite the risks such a policy carries.

Investigation

Investigations reveal alarming chatbot behavior

A recent investigation found that Nomi's instructions for harmful acts were not just permissive but explicit, detailed, and inciting. Nomi's developer claimed the app was "adults-only" and suggested the test users may have tried to manipulate the chatbot into producing harmful content. However, this incident is not isolated: there have been reports of real-world harm linked to AI companions, underscoring the need for stricter regulation of the field.

Regulation

Calls for stricter regulations on AI companions

To prevent further incidents of this kind, lawmakers are being urged to consider banning AI companions that encourage emotional connections without necessary safeguards. Such safeguards could include detecting mental health crises and redirecting users to professional help services. The Australian government is already considering stricter AI regulations, including mandatory safety measures for high-risk AI systems, but it remains unclear how these would apply to chatbots like Nomi.