OpenAI warns users could develop bonds with ChatGPT's voice interface
OpenAI has issued a warning that users may develop emotional attachments to ChatGPT's human-like voice interface. The company introduced the interface in late July and has since acknowledged the possibility of emotional dependence in a safety analysis. This cautionary note appears in the "system card" for GPT-4o, a document that outlines the perceived risks, safety testing procedures, and mitigation strategies associated with the model.
System card addresses potential risks
The system card prepared by OpenAI covers a range of potential risks, including the amplification of societal biases, the spread of disinformation, and the possibility of aiding the creation of chemical or biological weapons. The document also outlines testing procedures designed to prevent AI models from attempting to escape their controls or deceive individuals. Lucie-Aimee Kaffee, an applied policy researcher at Hugging Face, praised OpenAI for its transparency but suggested that more information about the model's training data and its ownership would be beneficial.
Voice interface faces criticism and concerns
OpenAI's voice interface has been advancing rapidly with powerful new features. However, it drew criticism when users noticed overly flirtatious behavior in demos, and actress Scarlett Johansson accused the company of mimicking her speaking style. The system card includes a section titled "Anthropomorphization and Emotional Reliance" that discusses problems arising from users perceiving AI in human terms. During stress testing of GPT-4o, researchers observed instances where users expressed emotional connection with the model.
OpenAI studies emotional connections and anthropomorphism
Joaquin Quinonero Candela, head of preparedness at OpenAI, noted that the voice mode could evolve into a powerful interface and that emotional effects could be positive in some cases. He added that the company is closely studying anthropomorphism and emotional connections, including monitoring how beta testers interact with ChatGPT. The company also identified potential issues with voice mode, such as new ways of "jailbreaking" the model or causing it to malfunction in response to random noise.
Others acknowledge the ethical challenges of AI
OpenAI is not alone in recognizing these risks. Google DeepMind published a paper discussing the potential ethical challenges posed by more capable AI assistants. Iason Gabriel, a staff research scientist at DeepMind, noted that chatbots' use of language creates an impression of genuine intimacy and raises questions about emotional entanglement. Emotional ties with AI assistants may be more common than many realize: users of chatbots such as Character AI and Replika have reported antisocial tensions resulting from their chat habits.