The OpenAI mafia is quietly building the rest of the AI industry

From Anthropic to SSI and Eureka Labs, former OpenAI employees are shaping the next generation of AI startups. Here is where the talent went.

If you want to see where the AI industry is heading, looking at OpenAI's product roadmap only gives you half the picture. The other half is being written by the people who left.

We love to talk about the "PayPal Mafia," that famous group of early employees who went on to found Tesla, LinkedIn, Palantir, and YouTube. But I keep thinking about what's happening right now with OpenAI alumni. It feels bigger. The smartest people in the room aren't just starting random tech companies. They are building direct competitors, specialized research labs, and entirely new product categories.

I genuinely don't know if we've ever seen a talent dispersion quite like this in such a short timeframe. Here is a look at where the key players went and what they are building.

The Anthropic exodus: safety first, Claude second

The most obvious branch of the mafia tree is Anthropic. When Dario and Daniela Amodei left OpenAI in 2020, they took a significant chunk of the research team with them. The disagreement was fundamentally about safety priorities and commercial direction.

They founded Anthropic to build safe AI systems, which eventually led to the Claude family of models. What gets me is that this wasn't a one-time event. The pipeline from OpenAI to Anthropic stayed open. Recently, key figures like OpenAI co-founder John Schulman and former Superalignment co-lead Jan Leike made the exact same jump.

Anthropic didn't just survive. It raised billions and built Claude models that consistently trade blows with OpenAI's flagship GPT releases.

Ilya Sutskever and the singular focus of SSI

This one still feels surreal. Ilya Sutskever was the technical soul of OpenAI. After the wild boardroom drama of late 2023, his departure felt inevitable, but his next move was a mystery.

He founded Safe Superintelligence (SSI). The name is literal. Sutskever set up a lab with one specific goal: achieving safe superintelligence. There is no enterprise sales team, no rush to ship an API, and no consumer chatbot.

It is a massive bet. Half the industry thinks you need the iterative feedback of commercial products to fund the compute for AGI. Sutskever is betting you can reach it faster without those distractions. The truth is probably somewhere in the middle, but having one of the world's best researchers completely isolated from product pressures is fascinating.

Andrej Karpathy's push into education with Eureka Labs

Andrej Karpathy has one of the most interesting resumes in tech. He was a founding member of OpenAI, left to lead AI at Tesla, came back to OpenAI, and then left again.

Instead of building another frontier model, he launched Eureka Labs. The goal is to build an AI-native school. Education is one of those sectors that everyone agrees AI will disrupt, but nobody has quite figured out the interface yet. Karpathy wants to use generative AI to scale high-quality teaching, pairing AI teaching assistants with human-designed curriculum.

It is a completely different direction from his peers. Honestly, it is refreshing to see top-tier talent focus on a specific vertical instead of just chasing larger parameter counts.

Aravind Srinivas and the search wars

Not every OpenAI alum is building foundation models. Some are building products on top of them. Aravind Srinivas worked as a research scientist at OpenAI before leaving to co-found Perplexity.

Perplexity changed how I search for information. Instead of returning blue links, it acts as an answer engine. Google's researchers invented the transformer architecture that underpins modern AI, yet a former OpenAI researcher is the one forcing them to fundamentally rethink their core product.

Why this talent dispersion matters

Monopolies are incredibly hard to maintain when your smartest employees keep leaving to start rival companies.

OpenAI still has a massive lead in brand recognition and enterprise adoption. But the sheer density of talent that has dispersed across the ecosystem means the future of AI is not going to be centralized in one San Francisco office building. It is being distributed across a network of alumni who learned how to build frontier models, recognized the limitations of a single company, and decided to do it their own way.

Conclusion

The next major breakthrough in AI might come from OpenAI. Or it might come from the people who used to work there. Either way, the original team's DNA is now woven into almost every major player in the industry.

If you are trying to keep up with these shifts, the ecosystem moves fast. Keep exploring our latest articles to track where the talent and the technology flow next.

Frequently Asked Questions

What is the OpenAI Mafia?

The "OpenAI Mafia" refers to the group of former OpenAI employees, researchers, and co-founders who have left the company to start or lead other major AI companies, similar to the original PayPal Mafia.

Why did Dario and Daniela Amodei leave OpenAI?

Dario and Daniela Amodei left OpenAI in 2020 due to disagreements over the company's direction and safety priorities, going on to found Anthropic, the makers of the Claude models.

What is Ilya Sutskever's new company?

Ilya Sutskever, former Chief Scientist at OpenAI, founded Safe Superintelligence (SSI) to focus exclusively on developing safe, highly capable AI without the pressure of near-term commercial products.