OpenAI co-founder launches new AI safety venture with a presence in Israel’s tech scene

The article was last updated by verifiedtasks on June 23, 2024.

OpenAI co-founder Ilya Sutskever, who departed the company in May 2024, has launched Safe Superintelligence, Inc. (SSI), a new AI venture aimed at safely developing superintelligent AI systems, with operations in Palo Alto and Tel Aviv.

Short Summary:

  • Ilya Sutskever launches Safe Superintelligence, Inc., focused on AI safety.
  • SSI co-founders include Daniel Gross and Daniel Levy, with global offices.
  • SSI aims to avoid distractions and commercial pressures to prioritize safety.

SSI: Setting the Stage for Safe Superintelligence

Ilya Sutskever, a pioneering figure in AI, has embarked on a new journey by founding Safe Superintelligence, Inc. (SSI). After a notable departure from OpenAI, which he co-founded, Sutskever is now concentrating on the critical mission of creating AI systems that exceed human intelligence while ensuring their safety.

Founder’s Vision and Mission

Sutskever articulated SSI’s mission crisply: “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”

“Building safe superintelligence (SSI) is the most important technical problem of our time,” SSI announced in a post on X.

This clear-cut focus underscores the venture’s commitment to safety over rapid, risky advancements.

Internal Turmoil at OpenAI

Sutskever’s resignation from OpenAI, following the board’s failed attempt in November 2023 to remove CEO Sam Altman, marks a significant chapter in his career. The attempted ousting stirred debate over whether OpenAI was prioritizing safety or commercial opportunity. Sutskever later said he “deeply regretted” his role in the internal upheaval.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the SSI team stated.

Core Team and Collaborators

Alongside Sutskever, SSI’s founding team includes two other notable figures:

  • Daniel Gross, former AI lead at Apple and a seasoned startup entrepreneur
  • Daniel Levy, a former technical team member at OpenAI

The trio’s collaborative experience and deep roots in AI position SSI to forge new pathways in AI safety.

Pioneering Safe AI Development

SSI’s sole focus on safety is aimed at insulating the company’s mission from typical commercial pressures. This unique approach means the company can avoid potential distractions stemming from “management overhead or product cycles,” allowing it to maintain a tight hold on safety as it advances AI capabilities.

“We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace,” said the SSI founders in a statement.

Global Operations with a Strategic Presence

SSI’s operations span from Palo Alto in California to Tel Aviv in Israel, leveraging a rich pool of technical talent. This strategic positioning is set to harness the relentless ambition and drive found in Israel’s tech ecosystem, complementing the innovation hubs already established in Silicon Valley.

“Israel has the talent density, and its entrepreneurs have the relentlessness, drive, ambition that can give the nation incredible prosperity both in terms of AI research and AI applications,” noted OpenAI CEO Sam Altman.

Historical Context and Expertise

Sutskever’s journey in AI began at the University of Toronto under the mentorship of Geoffrey Hinton, a figure often called the “Godfather of AI.” Their research venture was acquired by Google, where Sutskever further honed his skills before co-founding OpenAI in 2015. His expertise is widely recognized; Elon Musk described recruiting Sutskever as one of the “toughest recruiting battles” he had ever encountered.

Deepening the Talent Pool

SSI aims to recruit world-class engineers and researchers, creating a “lean, cracked team” dedicated exclusively to developing safe superintelligent AI systems. The ambition is not only to make technological breakthroughs but also to ensure these advancements are ethically and safely integrated into society.

“A small cracked team,” as Sutskever puts it, will drive this ambitious mission forward.

Public and Professional Reactions

While some industry leaders like Musk remain skeptical, quipping, “Any given AI startup is doomed to become the opposite of its name,” others see promise in SSI’s focused mission. Debate around AI safety continues, but the need for prioritizing safe advancement is widely acknowledged.

A Look Ahead

As SSI sets out on its path, it will face the daunting challenge of developing superintelligent AI systems that are not only groundbreaking but also robustly safe. This goal aligns with the increasing global focus on ethical AI, ensuring that technological progress does not compromise human welfare.

“AI will be a very powerful technology used for amazing applications to cure diseases but could also be used to create a disease if not controlled,” Sutskever highlighted in a past discussion.

Concluding Thoughts

Sutskever’s departure from OpenAI and subsequent founding of SSI mark a significant pivot in the AI landscape. With an unwavering focus on safety alongside groundbreaking advancement, SSI is positioned to become a notable player in the future of artificial intelligence. The journey will undoubtedly be challenging, but the stakes—ensuring a safe, superintelligent future—couldn’t be higher.