A high-profile departure from OpenAI has sparked fresh concerns about the AI industry's preparedness for artificial general intelligence (AGI)—and the warning comes from someone who would know.
Miles Brundage, OpenAI's Senior Advisor for AGI Readiness, has left the company after six years to pursue independent AI policy research. His parting message? When it comes to AGI readiness:
"In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready."
What does this mean for the rest of us? I spoke with Marketing AI Institute founder and CEO Paul Roetzer on Episode 121 of The Artificial Intelligence Show to find out.
Brundage wasn't just any employee. He was instrumental in establishing key safety practices at OpenAI, including its external red teaming program and the "system cards" that document each model's capabilities and risks.
In a post explaining his departure, Brundage said several factors drove his decision.
First were constraints on publishing research at major AI labs. Places like OpenAI face so much scrutiny and so many competing internal incentives that getting research published can be difficult.
He was also concerned that, by staying at a major AI lab, he couldn't maintain impartiality in policy discussions or function as an independent voice in the industry.
These factors are all related. They stem from his overarching concern that increasingly advanced AI requires everyone involved in its development to take a more active role in guiding the technology.
"AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision makers and governments, nonprofits, civil society and industry," Brundage wrote.
Brundage’s departure highlights the biggest problem in how we’re preparing for more advanced AI:
A lack of urgency.
“There’s not enough discussion about the hard topics because I don’t think enough people really understand how urgent this needs to be,” says Roetzer.
“I think people just assume this is going to take 3 or 5 or 10 years and ‘we’ll figure it out’ or ‘somebody will figure it out,’ and that’s not how this is going to work.”
He points out that even the smartest people in AI, like Sam Altman, are just guessing at what impact hyper-powerful AI, or even AGI, will have on business and society.
“We can’t just assume that the frontier model companies building these things have this all figured out, what 1-2 years from now looks like,” he says.
Roetzer says current policies, laws, actions, and discussions simply aren’t matching the breakneck pace of AI development.
Brundage's departure and warnings highlight several crucial needs: greater urgency, more honest discussion of the hard topics, and policies and laws that keep pace with AI development.

And this needs to start happening now.
“It’s hard for people to step out of their daily roles and the things they’re already thinking about and say ‘Well, what if everything is totally different in 24 months?’,” says Roetzer.
But we have to nonetheless.