OpenAI Co-founder Establishes New Company
Ilya Sutskever, a co-founder of OpenAI, recently announced the creation of a new artificial intelligence (AI) company focused on developing “safe superintelligence.” The company, named Safe Superintelligence Inc. (SSI), has two other co-founders: Daniel Levy, a former OpenAI engineer, and Daniel Gross, a former head of AI at Apple.
SSI believes that the emergence of superintelligence is imminent and that ensuring its safety for humanity is the most critical technological challenge of our time. The company’s mission is to operate as a dedicated laboratory for safe superintelligence, prioritizing safety while conducting research and developing technology.
“We are assembling a lean and efficient team consisting of the world’s best engineers and researchers, solely dedicated to creating safe superintelligence without any other distractions,” wrote Safe Superintelligence Inc. in a post on X.
According to Bloomberg, Safe Superintelligence Inc. is a pure research organization with no current plans to commercialize AI products or services; its sole focus is building a safe and powerful AI system.
In an interview with Bloomberg, Sutskever declined to disclose the names of the company’s financial backers or the total amount of funds raised, though Gross stated that fundraising would not be a problem for the company. Safe Superintelligence Inc. will be headquartered in Palo Alto, California, with an office in Tel Aviv, Israel.
Sutskever’s departure from OpenAI in May 2024 stems from an internal dispute within the company. He played a key role in the boardroom “coup” of November 2023, which briefly resulted in the removal of Sam Altman, the CEO of OpenAI. Sutskever’s focus lies in pure scientific research and technological innovation, with a strong emphasis on public AI safety rather than commercial interests. Altman, by contrast, excels in the commercial application of technology and market promotion, transforming OpenAI’s research into tangible products and services. These differences in strategic direction and technical development ultimately led to their split.
Additionally, Vox reported that OpenAI researchers Jan Leike and Gretchen Krueger recently left the company over concerns about AI safety. The report also revealed that at least five “safety-conscious employees” have left OpenAI since November 2023.
With the establishment of Safe Superintelligence Inc., Sutskever has once again brought attention to the issue of AI safety. Building a powerful yet safe AI system is a significant challenge in technological innovation, and it is essential preparation that must be undertaken before AI becomes fully integrated into daily human life.
Sources: CryptoSlate, Bloomberg, Vox