OpenAI's o1 model could be misused for creating bioweapons
OpenAI, a leading artificial intelligence (AI) research organization, has acknowledged the possibility of misuse of its latest model, known as o1 or "Strawberry." The company has rated this model as a "medium risk" for aiding in the creation of chemical, biological, radiological, and nuclear (CBRN) weapons, the highest risk level it has ever assigned to any of its AI technologies.
Enhanced capabilities and potential threats
The newly launched o1 model possesses advanced capabilities, including improved reasoning skills, the ability to solve complex mathematical problems, and the ability to answer scientific research questions. These enhancements mark significant progress toward artificial general intelligence (AGI). However, they also increase the potential for misuse in dangerous applications if exploited by malicious actors.
AI's advanced reasoning abilities
The advanced reasoning abilities of the o1 model could potentially be exploited to develop bioweapons more effectively, raising ethical and safety concerns within the AI community. Prominent AI researcher Professor Yoshua Bengio has underscored the need for immediate regulation to mitigate these risks. He supports a proposed bill in California, SB 1047, which would require AI developers to take steps to reduce the risk of their models being used to create bioweapons.
OpenAI's cautious approach to public release
In response to these concerns, OpenAI's Chief Technology Officer, Mira Murati, has stated that the firm is proceeding with caution in releasing the o1 model to the public. The model will be accessible to ChatGPT's paid subscribers and to developers through an API after undergoing rigorous testing by "red-teamers," experts responsible for identifying possible vulnerabilities. Despite the risks associated with its use, OpenAI has deemed o1 safe to deploy under its policies.