OpenAI co-founder's new safety-focused AI start-up raises $1 billion
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has secured $1 billion in funding for his new artificial intelligence (AI) venture, Safe Superintelligence (SSI). The three-month-old start-up is committed to developing "safe" AI systems that surpass human abilities. The backing signals continued investor confidence in foundational AI research and high-profile talent, even amid widespread skepticism about whether heavy investments in AI technology will pay off.
Prominent venture capital firms back SSI
The funding round for SSI saw participation from prominent venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. The start-up plans to use the funds to acquire computing power and hire additional talent. Currently operating with a team of just 10 employees, SSI aims to build out its research teams in Palo Alto, California, and Tel Aviv.
SSI's impressive valuation and founding team
While SSI has not officially disclosed its valuation, sources have told Reuters that it stands at roughly $5 billion. This is a remarkable figure for a company founded just three months ago that has yet to release any publicly known product. The start-up was co-founded by Sutskever along with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.
Sutskever's departure from OpenAI and future plans
Sutskever's exit from OpenAI reportedly followed dissatisfaction with the resources allocated to his "superalignment" research team, as well as his role in the brief removal of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would "pursue safe superintelligence in a straight shot, with one focus, one goal, and one product."
SSI's focus on AI safety
SSI has declared its commitment to "AI safety," a field premised on the idea that powerful AI systems could pose existential threats to humanity. The company plans to spend a few years on research and development before bringing a product to market. AI safety remains a contentious topic within the tech industry, with companies and AI experts split over proposed safety regulations such as California's controversial SB 1047 bill.