OpenAI Co-Founder Ilya Sutskever Raises $1 Billion to Build Superintelligent, Safe AI

Safe Superintelligence (SSI), co-founded by AI pioneer Ilya Sutskever, has raised $1 billion to develop advanced, safe AI systems. Backed by top investors such as Andreessen Horowitz, SSI aims to build a trusted team focused on AI safety and innovation, with plans for several years of research before bringing a product to market.

SAN FRANCISCO/NEW YORK, Sept 4 – In a bold move to push the boundaries of artificial intelligence, Safe Superintelligence (SSI), co-founded by former OpenAI Chief Scientist Ilya Sutskever, has secured a massive $1 billion in funding to develop cutting-edge AI systems that could one day surpass human intelligence.

SSI, currently a team of just 10 people, plans to use the funding to acquire computing power and hire top talent, building a small, highly trusted group of researchers and engineers. With offices in Palo Alto, California, and Tel Aviv, Israel, the company is setting its sights on a new wave of AI innovation that puts safety first.

Backing from Leading Venture Capital Firms

SSI’s remarkable $1 billion funding round includes investments from prominent venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. Nat Friedman, alongside SSI’s CEO Daniel Gross, also contributed through their investment partnership, NFDG. These heavyweights are betting big on SSI’s mission to create “safe superintelligence,” even as much of the tech world’s appetite for funding unprofitable AI research companies has waned.

Gross emphasized the importance of having investors who support SSI’s long-term vision, saying, “We aim to make a straight shot to safe superintelligence, focusing on several years of R&D before bringing our product to market.”

The Growing Importance of AI Safety

AI safety has never felt more urgent, as fears grow that rogue AI systems could act against humanity’s interests. A proposed bill in California seeks to regulate AI safety standards and has sparked debate: AI giants OpenAI and Google oppose it, while Anthropic and Elon Musk’s xAI support it.

At the heart of SSI’s mission is the belief that AI can surpass human intelligence without endangering humanity—a concern that Sutskever, one of AI’s most influential minds, has long prioritized. Co-founding SSI in June alongside Gross and former OpenAI researcher Daniel Levy, Sutskever now serves as Chief Scientist, while Levy leads research as Principal Scientist.

A Fresh Start After OpenAI

Sutskever’s journey to SSI came after a dramatic exit from OpenAI earlier this year. Known for his work on the “Superalignment” team, which aimed to keep AI aligned with human values, Sutskever was part of the internal turmoil that led to the temporary ousting of CEO Sam Altman. After he reversed course and backed Altman’s return, his role at OpenAI diminished, and he departed the company in May.

Now, at SSI, he’s forging a new path. “I identified a mountain that’s a bit different from what I was working on,” Sutskever explained. Unlike OpenAI, whose unusual corporate structure led to governance conflicts, SSI operates as a standard for-profit company focused squarely on its core mission.

Talent with “Good Character”

SSI is committed to assembling a close-knit team of extraordinary talent. Gross noted that they spend significant time ensuring new hires align with the company’s culture and values. “We’re not just looking for credentials; we’re looking for people with passion and character,” Gross said. “It’s about finding those interested in doing the work, not getting caught up in the hype.”

The team plans to partner with cloud providers and chip manufacturers to power their ambitious AI goals, but has yet to finalize partnerships with firms such as Microsoft or Nvidia, which commonly serve AI startups.

Redefining AI Scaling

Sutskever, an early proponent of the scaling hypothesis—the idea that AI models improve dramatically with vast computing resources—will approach scaling in a new way at SSI. “Everyone talks about scaling, but few ask the real question: What are we scaling?” Sutskever said, hinting at a different strategy for advancing AI technology. “If you take a new approach, you can achieve something truly special.”