Ex-OpenAI executive says AI will soon have human-level computer skills
Miles Brundage, former head of policy research and AGI readiness at OpenAI, has made a bold prediction about the future of artificial intelligence (AI). In a recent interview on the tech podcast Hard Fork, Brundage said that within a few years, the industry will develop "systems that can basically do anything a person can do remotely on a computer." This includes tasks like operating a mouse and keyboard or even appearing as a "human in a video chat."
Brundage's insights on AI's future impact
Brundage stressed that governments should consider the implications of such advances in AI, urging them to think about "what that means in terms of sectors to tax and education to invest in." His insights carry particular weight given his six years at OpenAI, where he advised executives and board members on preparing for AGI and led significant safety research efforts.
Industry leaders echo Brundage's AI predictions
The timeline for artificial general intelligence (AGI) remains a hotly debated topic among industry watchers. However, influential figures like John Schulman, OpenAI co-founder and research scientist, and Dario Amodei, CEO of Anthropic (an OpenAI rival), share Brundage's view that AGI could be achieved within a few years. Amodei has even suggested that some form of it could arrive as early as 2026.
Brundage's departure from OpenAI amid safety concerns
Brundage's exit from OpenAI comes amid a wave of departures by high-profile safety researchers and executives, some of whom have raised concerns about how the company balances AGI development against safety. However, Brundage clarified that his decision to leave wasn't driven by specific safety concerns. He suggested that preparedness is a challenge across the industry rather than a problem unique to OpenAI, saying, "I'm pretty confident that there's no other lab that is totally on top of things."
Brundage's future plans in AI policy research
After leaving OpenAI, Brundage plans to work as a policy researcher or advocate in the nonprofit space, where he hopes to have a bigger impact. He said he left partly because he wanted to work on issues facing the industry as a whole, not just those internal to OpenAI. He also wanted the independence of an outside voice, so that his views wouldn't be "rightly or wrongly dismissed as this is just a corporate hype guy."