Is OpenAI prioritizing AGI over safety? This ex-employee thinks so
William Saunders, a former OpenAI employee, criticized the company's approach to Artificial General Intelligence (AGI) on a tech podcast, comparing the firm's trajectory to that of the ill-fated Titanic. Those concerns ultimately led to his resignation. Saunders spent three years on OpenAI's Superalignment team while the company pursued AGI and launched paid products, and his criticism centers on the company prioritizing product releases over safety measures. He shared his views on an episode of YouTuber Alex Kantrowitz's podcast on July 3.
Saunders draws parallels between OpenAI and historical events
Saunders compared OpenAI's direction to two historical undertakings: NASA's Apollo space program and the Titanic. He praised the Apollo program for carefully assessing risks while pushing scientific boundaries, and likened OpenAI instead to the Titanic, arguing the company favors newer products over safety. Saunders expressed concern that OpenAI may be overly reliant on its current AI safety measures and research, suggesting that more 'lifeboats' are needed.
Advocating for more research before new AI releases
During his time at OpenAI, Saunders led a team dedicated to understanding the behavior of AI language models. He stressed the need for deeper knowledge of these systems, stating that if future AI systems become as intelligent as humans, or more so, techniques will be needed to discern whether they are hiding capabilities or motivations. He suggested delaying the release of new language models until their potential harms can be thoroughly researched.
OpenAI's response and industry oversight concerns
Saunders left OpenAI in February 2024, and the company dissolved its Superalignment team in May. That decision came shortly after the announcement of GPT-4o, OpenAI's most advanced AI product available to the public. The company has yet to respond to Saunders's comments.
Call for greater corporate governance in AI development
The swift advancement of AI technology has raised concerns about the need for stronger corporate governance. In June, a group of current and former employees of Google DeepMind and OpenAI, including Saunders, published an open letter warning that existing industry oversight standards are inadequate to avert potential catastrophes from AI development. Meanwhile, OpenAI's Chief Scientist Ilya Sutskever, who co-led the Superalignment division, recently resigned and went on to found a new start-up, Safe Superintelligence Inc.