OpenAI's ex-chief scientist Ilya Sutskever launches new AI start-up
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new artificial intelligence (AI) company called Safe Superintelligence Inc. (SSI). The announcement comes just one month after his departure from OpenAI. Sutskever is joined in the venture by Daniel Gross, a former partner at Y Combinator, and Daniel Levy, an ex-OpenAI engineer.
Sutskever's departure from OpenAI
Sutskever's departure from OpenAI in May was reportedly due to disagreements with the company's leadership over AI safety strategies. During his tenure, he worked closely with Jan Leike, who co-led OpenAI's Superalignment team with him and left the company within hours of Sutskever's departure. Leike now leads a team at Anthropic, the AI firm known for its Claude chatbot.
SSI's mission and approach to AI safety
Sutskever has been a strong advocate for addressing the complex issues surrounding AI safety. In a 2023 blog post, he predicted that AI surpassing human intelligence could emerge within the decade, and emphasized the need for research into how to control and restrict such systems. Announcing SSI on X, Sutskever stated, "SSI is our mission, our name, and our entire product roadmap because it is our sole focus." He also outlined the company's approach to safety and capabilities.
Business model and future plans
Speaking with Bloomberg about the new company, Sutskever declined to disclose details about SSI's funding or valuation. Unlike OpenAI, which began as a non-profit before restructuring due to financial needs, SSI is being designed as a for-profit entity from the outset. Co-founder Gross expressed confidence in their ability to raise capital for SSI. The company has already set up offices in Palo Alto and Tel Aviv, and is currently recruiting technical talent.