"Neither OpenAI Nor Any Other Frontier Lab Is Ready": Senior AI Safety Expert Sounds Alarm


A high-profile departure from OpenAI has sparked fresh concerns about the AI industry's preparedness for artificial general intelligence (AGI)—and the warning comes from someone who would know.

Miles Brundage, OpenAI's Senior Advisor for AGI Readiness, has left the company after six years to pursue independent AI policy research. His parting message? When it comes to AGI readiness:

"In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready."

What does this mean for the rest of us? I spoke with Marketing AI Institute founder and CEO Paul Roetzer on Episode 121 of The Artificial Intelligence Show to find out.

Why This Departure Matters

Brundage wasn't just any employee—he was instrumental in establishing key safety practices at OpenAI, including:

  • The external red teaming program
  • System cards implementation
  • AGI readiness initiatives

In a post diving into his departure, Brundage said several factors drove his decision.

First was research publishing constraints at major AI labs. (Places like OpenAI face so much scrutiny and competing incentives internally that getting research published can be difficult.)

He was also concerned that, if he stayed at a major AI lab, he couldn't maintain impartiality in policy discussions or function as an independent voice in the industry.

These factors are related. They stem from his overarching concern that increasingly advanced AI requires everyone involved in its development to take a more active role in guiding the technology.

"AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision makers and governments, nonprofits, civil society and industry," Brundage wrote. 

What Should We Do About This?

Brundage’s departure highlights the biggest problem in how we’re preparing for more advanced AI: 

A lack of urgency.

“There’s not enough discussion about the hard topics because I don’t think enough people really understand how urgent this needs to be,” says Roetzer. 

“I think people just assume this is going to take 3 or 5 or 10 years and ‘we’ll figure it out’ or ‘somebody will figure it out,’ and that’s not how this is going to work.”

He points out that even the smartest people in AI, like Sam Altman, are just guessing at what the impacts of hyper-powerful AI—or even AGI—will be on business and society.

“We can’t just assume that the frontier model companies building these things have this all figured out, what 1-2 years from now looks like,” he says.

Roetzer says current policies, laws, and public discussions simply aren't keeping pace with the breakneck speed of AI development.

Brundage's departure and warnings highlight several crucial needs:

  • More independent research and advocacy
  • Increased urgency in policy discussions
  • Better preparation at both company and societal levels
  • Greater transparency in AI development
  • Broader public discussion about AI's impacts

And this needs to start happening now.

“It’s hard for people to step out of their daily roles and the things they’re already thinking about and say ‘Well, what if everything is totally different in 24 months?’,” says Roetzer.

But we have to do it nonetheless.
