A series of explosive essays and interviews from former OpenAI researcher Leopold Aschenbrenner is sending shockwaves through the AI world with a chilling message:
AGI is coming this decade. And it could mean the end of the world as we know it.
Aschenbrenner, a one-time member of OpenAI's superalignment team, says he's one of perhaps a few hundred AI insiders who now have "situational awareness" that superintelligence, machines smarter than humans, is going to be a reality by 2030.
His mammoth 150-plus-page thesis, titled "Situational Awareness: The Decade Ahead," outlines the evidence behind this jaw-dropping claim and paints an urgent picture of a possible future where machines outpace humans...
And it's a future very few individuals, companies, and governments are truly prepared for.
Not to mention, this unpreparedness could create serious problems for entire nations, economies, and international security itself.
What do you need to know about these bombshell predictions?
I got the inside scoop from Marketing AI Institute founder and CEO Paul Roetzer on Episode 102 of The Artificial Intelligence Show.
First, some context on the man sounding the AGI alarm.
Aschenbrenner is a certified genius, for one. He graduated valedictorian from Columbia at age 19 (after entering college at 15) and worked on economic growth research at Oxford's Global Priorities Institute before joining OpenAI.
At OpenAI, he worked on the superalignment team co-led by AI pioneer Ilya Sutskever.
But that all unraveled in April 2024 when Aschenbrenner was fired from OpenAI for allegedly leaking confidential information. (He claims he simply shared a benign AI safety document with outside researchers, not sensitive company material.)
Regardless, the incident freed him up to talk openly about all things AGI and superintelligence. And given his pedigree, the AI world is taking notice.
"This is someone who has a proven history of being able to analyze things very deeply and learn topics very quickly," says Roetzer.
The topic itself also caught Roetzer's interest because it aligns closely with an AI development timeline he recently outlined himself.
The crux of Aschenbrenner's argument rests on something called scaling laws.
These laws describe how, as we give AI models more computing power and make their algorithms more efficient, their capabilities improve in predictable ways.
By tracing these trendlines, Aschenbrenner says we'll go from the "smart high schooler" abilities of GPT-4 to a "qualitative jump" in intelligence that makes AGI "strikingly plausible" by 2027.
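The "trace the trendlines" exercise is simple enough to sketch yourself. Here's a minimal back-of-the-envelope version in Python, assuming roughly half an order of magnitude (OOM) of gains per year each from raw compute and from algorithmic efficiency; those rates are illustrative stand-ins in the spirit of the argument, not Aschenbrenner's exact figures:

```python
# Back-of-the-envelope version of the "count the OOMs" trendline. The growth
# rates below are illustrative assumptions, not exact figures from the essay.

BASE_YEAR = 2023              # take GPT-4-era models as the starting point
COMPUTE_OOMS_PER_YEAR = 0.5   # assumed: ~0.5 orders of magnitude/year from more compute
ALGO_OOMS_PER_YEAR = 0.5      # assumed: ~0.5 orders of magnitude/year from better algorithms

def effective_compute_ooms(year: int) -> float:
    """Cumulative orders of magnitude of 'effective compute' gained since BASE_YEAR."""
    return (year - BASE_YEAR) * (COMPUTE_OOMS_PER_YEAR + ALGO_OOMS_PER_YEAR)

for year in (2024, 2025, 2026, 2027):
    ooms = effective_compute_ooms(year)
    print(f"{year}: +{ooms:.1f} OOMs -> {10 ** ooms:,.0f}x GPT-4-era effective compute")
```

Four years at that pace works out to roughly 10,000x the effective compute of GPT-4-era models by 2027, a jump on the order of the one that took us from GPT-2 to GPT-4. That's the shape of the extrapolation behind the "strikingly plausible" claim.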
But it won't stop there. Once we hit AGI, hundreds of millions of human-level AI systems could automate AI research itself, rapidly compounding into "vastly superhuman" abilities in a phenomenon known as an "intelligence explosion."
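And where does a figure like "hundreds of millions" come from? At bottom, it's division: take the aggregate inference compute of a future GPU fleet and divide by the compute needed to run one human-researcher-equivalent model. A toy sketch, where both inputs are hypothetical placeholders rather than numbers from the essay:

```python
# Toy arithmetic behind "hundreds of millions of human-level AIs."
# Both inputs are hypothetical placeholders, not figures from the essay.

fleet_flops = 1e22        # assumed: aggregate inference compute of a late-2020s fleet (FLOP/s)
flops_per_worker = 1e14   # assumed: compute to run one human-researcher-equivalent model (FLOP/s)

workers = fleet_flops / flops_per_worker
print(f"~{workers:,.0f} human-equivalent AI workers")  # ~100,000,000 under these assumptions
```

Swap in your own estimates and the headline number moves up or down by orders of magnitude; the point is simply that it falls out of fleet size and per-model costs, not hand-waving.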
According to Aschenbrenner, the AGI race is already underway.
He says that the "most extraordinary techno-capital acceleration has been set in motion" as tech giants and governments race to acquire and build the vast quantities of chips, data centers, and power-generation infrastructure needed to build more advanced AI models.
He continues:
"As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade.”
But the runaway progress isn't without major risks.
Aschenbrenner alleges AI labs are treating security as an "afterthought," making them sitting ducks for IP theft by foreign adversaries.
Worse, he says superalignment—reliably controlling AI systems smarter than us—remains an unsolved problem. And a failure to get it right before an intelligence explosion "could be catastrophic."
To avoid this fate, Aschenbrenner calls for a massive government-led AGI effort.
No startup can handle superintelligence, he says. Instead, he envisions the U.S. embarking on an AGI project on the scale of the Apollo moon missions—this time with trillions in funding.
Doing so, Aschenbrenner argues, will be a national security imperative in the coming decade, with the very survival of the free world at stake.
“He says superintelligence is a matter of national security, which I agree with 100%,” says Roetzer. “If I were the US government, I would be aggressively putting a plan in place to spend trillions of dollars over the next five to ten years to house all the infrastructure in the United States.”
But the clock is ticking, Aschenbrenner says, to take this seriously and get it right.
While Aschenbrenner's predictions may sound far-fetched, Roetzer says we can't afford to ignore them.
“I know this is a lot, and it’s kind of overwhelming, but we all have to start thinking about these things,” he says. “We’re talking about a few years from now. We have to figure out what this means. What does it mean to government? What does it mean to business? What does it mean to society?”
Because if Aschenbrenner is even partially right, the future is coming faster than anyone is ready for.