OpenAI sets up safety committee to evaluate AI models
OpenAI has announced the formation of a 'Safety and Security Committee' today. Its primary role is to provide safety and security recommendations on OpenAI's projects and operations to the board. Over the next 90 days, the committee will evaluate OpenAI's existing safeguards and processes. A report based on its findings will then be submitted for board review. "Following the board's review, OpenAI will share an update on adopted recommendations in a manner that is consistent with safety and security," said the company.
Committee members include CEO and external experts
The Safety and Security Committee will be led by OpenAI's CEO, Sam Altman, and includes directors Bret Taylor, Adam D'Angelo, and Nicole Seligman. In line with its commitment to safety and security, OpenAI will also consult external experts. Among these advisers are Rob Joyce, a Homeland Security adviser to former President Donald Trump, and John Carlin, a Justice Department official under President Joe Biden.
Committee formation follows concerns over AI advancements
The decision to form this committee comes amid recent apprehensions about potential risks associated with OpenAI's advancements in artificial intelligence. These concerns contributed to the brief ouster of Altman last year, following disagreements with co-founder and chief scientist Ilya Sutskever. In May, Sutskever and his key deputy Jan Leike left OpenAI, citing struggles over computing resources as their primary reason for departing.