
The Government Knows AGI Is Coming (And We’re Not Ready)


A new episode of The Ezra Klein Show just sent some serious shockwaves through the AI world.

Titled “The Government Knows AGI Is Coming,” the interview features Ben Buchanan, a former special adviser for artificial intelligence in the Biden White House. 

According to Buchanan, the government is actively preparing for artificial general intelligence (AGI)—systems that can handle virtually any cognitive task a human can do—and he thinks it might be only a few years away, likely during Donald Trump’s second term.

This revelation has host Ezra Klein rattled. Klein says that, for the past few months, he’s been hearing the same message from insiders across AI labs and government agencies:

AGI is coming faster than expected.

Says Klein:

“For the last couple of months, I have had this strange experience: Person after person — from artificial intelligence labs, from government — has been coming to me saying: It’s really about to happen. We’re about to get to artificial general intelligence.”

However, he concludes that almost nobody is prepared for what that actually means in practice.

To see just how unprepared we might be, I spoke to Marketing AI Institute founder and CEO Paul Roetzer on Episode 139 of The Artificial Intelligence Show.

Why This Matters

Previously, says Klein, experts believed AGI to be 5-15 years away. Now, it looks like it’s coming in the next few years. This is not hype. It’s not a fad technology. It’s the sober assessment of many different cohorts of insiders across the public and private sectors.

“The people in the know are trying very, very hard to get everyone else to pay attention,” says Roetzer. “But as Klein illuminates right away: Nobody has a plan for this.”

That’s because this time is different. Buchanan, who has a deep background in cybersecurity, notes that every other revolutionary technology in the last century—the internet, the microprocessor, GPS, space tech—all had deep Department of Defense funding and involvement from the start.

Not so with modern AI.

Today’s generative AI systems caught most parties, including the US government, by surprise. They were developed with essentially no government involvement. The latest frontier models didn’t come out of DARPA (the Defense Advanced Research Projects Agency) or the Department of Defense. The government didn’t have a real seat at the table when they were created.

As a result, the government’s scrambling to understand and shape a technology that’s barreling forward without them.

“They’ve basically been playing catch up,” says Roetzer. 

Cybersecurity Fears and a Race Against China

Based on the interview, the US government thinks about AI first and foremost, says Roetzer, as a national security and military dominance issue. Today, that means making sure the US stays competitive against China in the AI arms race. If AGI-like systems can analyze data or hack adversary networks at massive scale, whoever holds that technology will have a huge offensive (and defensive) cyber advantage.

And it’s not just about writing better malware or finding more exploits. Once an adversary collects mountains of data, advanced AI could pore through it instantly, surfacing critical intelligence in a way that no human team could match.

Buchanan touches on this subject at length, essentially saying that the US does not want China to reach AGI first.

But ironically, the biggest vulnerability may be the labs building AGI. Hackers, foreign or otherwise, have enormous incentive to steal advanced AI model weights or details about how new frontier models are being built. And even the best security measures may struggle against a determined state-level actor.

Given these very real concerns, it’s possible that parties within the US government have considered nationalizing AI labs.

The interview even touches on a claim made by venture capitalist Marc Andreessen, who has said he was told by a senior Biden official that AI development would be locked down to just two or three big companies—partly for safety, partly for security. (In fact, Andreessen claims this incident is why he threw his support behind Trump during the election.)

Did that really happen? When asked about it, Buchanan sidesteps a direct yes or no.

But the logic is inescapable if you're a big government trying to play catch up, says Roetzer.

“You start to understand why nationalization of the labs might actually be a strategy that's explored if they become convinced that they need to get there first, and these models are going to become more and more powerful."

A Bigger Threat to Jobs Than We’ve Ever Seen?

Ezra Klein’s biggest personal worry in the interview is the impact on jobs—especially “cognitively demanding” roles that revolve around knowledge work. Think coding, marketing, research. He emphasizes that if AI can suddenly handle these tasks better, faster, and cheaper, the disruption to the labor market might dwarf anything we’ve seen before.

But, ironically, the government’s main lens is military and intelligence. That means labor displacement is secondary. Buchanan acknowledges that. Right now, Washington is more focused on preventing a scenario where an adversary state gets a lead.

That's a problem. Because Klein says it's very clear that not enough people in government or the private sector are thinking about this, even pressing Buchanan at one point:

“I will promise you the labor economists do not know what to do about AI. You were the top adviser for AI. You were at the nerve center of the government’s information about what is coming. If this is half as big as you seem to think it is, it’s going to be the single most disruptive thing to hit labor markets ever, given how compressed the time period is in which it will arrive."

[...]

"You must have heard somebody think about this. You guys must have talked about this.”

Unfortunately, Buchanan doesn't have many concrete answers about what the government thinks will happen with workforce disruption. And, while Washington wavers, AI labs are forging ahead.

Around the time the interview dropped, The Information reported that OpenAI executives have told some investors they plan to sell different tiers of AI agents that can autonomously perform specialized tasks typically handled by knowledge workers. These include:

  • Low-end agents priced at around $2,000/month (targeted at high-income knowledge workers).
  • Mid-tier agents at around $10,000/month (designed for complex software development).
  • High-end “PhD-level” agents that could cost $20,000/month (aimed at advanced research work).

Those prices may seem high compared to today’s ChatGPT subscription fees.

But if you do the math, says Roetzer, it’s easy to justify that cost for certain roles, especially if the AI can handle tasks that currently require multiple full-time employees. Even at $20,000 per month ($240,000 per year), an agent handling the work of people who make $200,000 to $500,000 a year (financial analysts, attorneys, AI researchers, and others) can pay for itself. If the agent works 24/7 or covers the workload of multiple professionals, the return on investment becomes clear.
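Roetzer's back-of-the-envelope math can be sketched in a few lines. This is a minimal illustration using the article's published figures; the 1.3 overhead multiplier for benefits and payroll costs is an assumed, illustrative number, not something from the article or from OpenAI.

```python
# Back-of-the-envelope comparison: annual cost of an AI agent subscription
# vs. the fully loaded cost of a human employee in the same role.

def annual_agent_cost(monthly_price: float) -> float:
    """Yearly cost of an agent subscription billed monthly."""
    return monthly_price * 12


def annual_employee_cost(salary: float, overhead: float = 1.3) -> float:
    """Fully loaded yearly cost of an employee.

    The 1.3 overhead multiplier (benefits, payroll taxes, office space)
    is an assumed illustrative figure, not a number from the article.
    """
    return salary * overhead


# The reported $20,000/month "PhD-level" tier vs. a $300,000 analyst salary
agent = annual_agent_cost(20_000)         # 240,000 per year
employee = annual_employee_cost(300_000)  # 390,000 per year fully loaded

print(f"Agent: ${agent:,.0f}/yr | Employee: ${employee:,.0f}/yr")
print(f"Difference per role: ${employee - agent:,.0f}")
```

Under these assumptions the agent costs less than a single mid-range analyst, and the gap widens with every additional professional whose work one agent can absorb.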

Now, publicly, few AI companies outright say they’re building technology to replace knowledge workers. Instead, they talk about “augmenting” or “enhancing” people in existing roles. But the marketing spin doesn’t entirely mask where this is headed.

Roetzer pointed to Endex, an AI startup powered by OpenAI that markets itself as an autonomous financial analyst, boasting that it’s like having an AI workforce running 24/7. That’s pretty close to describing a future where knowledge work is handled by machines, day and night, without breaks, benefits, or paid time off.

So, what do we do about all this?

Well, that's the problem.

In the end, the Klein interview leaves us with more questions than answers.

And that, in its own way, is an answer:

The government knows AGI is coming. But it’s scrambling to figure out next steps. And not enough other entities in society are filling in the gaps about what comes next when it comes to AGI and labor displacement.
