Ilya Sutskever, OpenAI co-founder and controversial player in last year’s boardroom coup against CEO Sam Altman, has left the company.
He’s been followed by Jan Leike, a key AI safety researcher at the company.
And the departures raise serious questions about the future of AI safety at one of the world’s top AI companies.
Sutskever announced his exit on X on May 14, in one of his first public statements since the boardroom coup.
In his farewell post, Sutskever had nothing but praise for Altman and co-founder Greg Brockman, and both posted complimentary statements about his departure.
After almost a decade, I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…
— Ilya Sutskever (@ilyasut) May 14, 2024
Leike, however, was a little more critical in his statement on X.
I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.
— Jan Leike (@janleike) May 17, 2024
Both Sutskever and Leike were working on the superalignment team at OpenAI, a group focused on ensuring superintelligent AI ends up being safe and beneficial for humanity.
That team was dissolved after Sutskever and Leike’s departures.
So, how worried should we be?
I got the answer from Marketing AI Institute founder and CEO Paul Roetzer on Episode 98 of The Artificial Intelligence Show.
The importance of superalignment
It helps to understand a bit more about the superalignment initiative at OpenAI.
The superalignment team was announced in July 2023 with the express goal of solving the problem of how to build superintelligent AI safely within four years.
The team, co-led by Sutskever and Leike, was supposed to receive 20% of OpenAI's compute to achieve that goal.
According to Leike's farewell post, it didn't turn out as planned.
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
— Jan Leike (@janleike) May 17, 2024
The criticism is particularly pointed given that OpenAI appears to have had strict non-disparagement clauses in some employee contracts that forced departing employees to give up their equity if they said anything negative about the company.
At the end of his statement, Leike sounded alarm bells over AGI (artificial general intelligence), encouraging OpenAI employees to “feel the AGI.”
That’s a specific phrase used within OpenAI “often in some tongue-in-cheek ways,” says Roetzer. “But there is a serious aspect to it. ‘Feel the AGI’ means refusing to forget how wild it is that AI capabilities are what they are, recognizing that there is much further to go and no obvious human-level ceiling.”
So, in this context, Leike is encouraging the team to take seriously their moral obligations to shape AGI as positively as possible.
It’s a stark, serious final warning from one of the people who was in charge of doing just that at OpenAI.
Disagreement about the dangers ahead
While it sounds like Leike is dead serious about the risks of AGI, not everyone agrees with him.
Yann LeCun at Meta, one of the godfathers of AI, regularly and vocally argues that humanity hasn't even figured out how to design a superintelligent system, so fears of one are severely overblown.
“It’s very important to remember this isn’t binary,” says Roetzer. “There are very differing opinions from very smart, industry-leading people who have completely opposing views of where we are right now in AI.”
However, there does seem to be cause for concern that, if superintelligence arrives, OpenAI is now less prepared for it.
In a recent interview on The Dwarkesh Podcast, OpenAI co-founder John Schulman appeared to dodge some tough questions about how ready OpenAI was for AGI.
Host Dwarkesh Patel, after talking through what Schulman sees as the limitations of increasingly intelligent AI (of which there appear to be few), said:
“It seems like, then, you should be planning for the possibility you would have AGI very soon.”
Schulman responded:
“I think that would be reasonable.”
“So what’s the plan if there’s no other bottlenecks in the next year or something, you’ve got AGI,” responds Patel. “What’s the plan?”
Notes Roetzer:
“This is where John, I think, starts wanting the interview to end.”
Schulman says that, if AGI came sooner than expected, it might be wise to slow development down to make sure it can be handled safely.
“Basically, he has no idea. No plan,” says Roetzer.
“This is why superalignment existed.”