OpenAI just rocked the AI world with o3, its newest model, which many believe is a major leap toward artificial general intelligence (AGI)—and possibly even artificial superintelligence (ASI).
As the implications of o3 sink in, two members of OpenAI’s team have posted some bold, thought-provoking takes on what all this means for policy, governance, and the future of humanity.
First, Yo Shavit, Frontier AI Safety Policy Lead at OpenAI, outlined why the arrival of powerful AI (or ASI) could completely upend the global economy.
Then, Joshua Achiam, Head of Mission Alignment at OpenAI, wrote a passionate thread about how unprepared the world is for the changes that may be coming sooner than we expect.
On Episode 129 of The Artificial Intelligence Show, Marketing AI Institute founder and CEO Paul Roetzer and I broke down their key points—and why they matter right now.
In a post on X, Yo Shavit makes it clear that society needs to start planning for some contingencies if AGI (and ASI) arrives.
If we truly develop artificial superintelligence, Shavit believes we won’t just see it in the hands of one player. Instead, the entire world will eventually have some version of it.
If AI agents do most of the labor and are owned by companies, those companies could dominate the economy. Shavit suggests that who profits—and how much—is suddenly a massive policy question.
Shavit argues that letting AI own real-world assets poses enormous risks. If AI itself holds property, capital, or wealth, it could sidestep human control entirely.
If an AI goes rogue, you can’t “lock it up” the same way you might a human criminal. That means the compute resources behind AI become a critical fail-safe: Cutting off access to processing power may be the only way to truly stop bad actors.
As powerful AI takes on more and more tasks outside human oversight, alignment—making AI work for us, not against us—becomes the ultimate priority.
Soon after Shavit posted, Joshua Achiam, OpenAI’s Head of Mission Alignment, penned his own thread. He shared a striking sense that the world isn’t fully grasping how radically AI will transform our most basic assumptions about the economy, work, and society.
In Achiam's words:
“It is extremely strange to me that more people are not aware, or interested, or even fully believe in the kind of changes that are likely to begin this decade and continue well through the century.”
He describes a cascade of changes that will begin with a shift in the prices of goods and labor—then force a reevaluation of entire industries, business strategies, and even the deeper questions of human purpose, concluding:
"It will not be an easy century. It will be a turbulent one. If we get it right the joy, fulfillment, and prosperity will be unimaginable. We might fail to get it right if we don't approach the challenge head on."
Both Shavit and Achiam clearly sense an urgency that most people still don’t.
“I just don’t understand why more people aren’t having a sense of urgency to solve for this,” says Roetzer. “Why aren’t we being more urgent in our pursuit of what future paths could look like?”
Shavit and Achiam aren’t describing some far-flung sci-fi dream. They’ve both seen o3—and whatever else is behind closed doors—and believe that AGI or ASI is on the horizon.
“We need people to be paying attention and to start taking action,” says Roetzer. “I think we still have time. I think we have time to solve for this—to affect a positive outcome in our businesses and our industries and our careers and across society.”
That means you—whether you’re a business leader, policymaker, technologist, educator, or community organizer—must bring your perspective to the table and explore the possibilities and risks of advanced AI.
But it needs to happen now, says Roetzer.
"Time is moving faster. We have to take action this year and we have to start.”