No one is ready for AGI, warns OpenAI's departing leader
Miles Brundage, the outgoing senior adviser for Artificial General Intelligence (AGI) readiness at OpenAI, has issued a stark warning that the world is unprepared for the technology. In his departure statement, Brundage emphasized that "neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready." He added that this view is not controversial among OpenAI's leadership.
Safety teams see high-profile departures
Brundage's exit comes on the heels of several high-profile departures from OpenAI's safety teams, including Jan Leike and co-founder Ilya Sutskever. Leike left after raising concerns that "safety culture and processes have taken a backseat to shiny products," while Sutskever departed to found his own AI startup dedicated to safe AGI development, a venture that secured $1 billion in funding in September. The string of exits highlights the growing friction between OpenAI's original mission and its commercial ambitions.
Shift toward commercialization raises concerns
The dissolution of Brundage's "AGI Readiness" team follows the disbandment of the "Superalignment" team, which focused on long-term AI risk mitigation. Both moves come amid mounting pressure on OpenAI to convert from a non-profit into a for-profit public benefit corporation within two years, or risk having to return money from its recent $6.6 billion investment round. Brundage had voiced concerns about this commercial shift as early as 2019, when OpenAI first formed its for-profit division.
Brundage cites research constraints as reason for departure
Brundage cited increasing constraints on his research and publication freedom at OpenAI as the reason for his departure. He stressed the importance of independent voices in AI policy discussions, free from industry biases and conflicts of interest. After advising OpenAI's leadership on internal preparedness, he now believes he can make a greater impact on global AI governance from outside the organization.
OpenAI's cultural divide and resource allocation issues
Brundage's exit may also point to a wider cultural rift at OpenAI. Many researchers joined the company to advance AI research and now find themselves working in an increasingly product-driven environment. Internal resource allocation has become a point of contention: reports suggest Leike's team was denied computing power for safety research before it was ultimately disbanded. Despite these tensions, Brundage said OpenAI has offered to support his future work with funding, API credits, and early model access, with no strings attached.