The news out of OpenAI this week isn’t all chaos…
CEO Sam Altman also published a prophetic post titled The Intelligence Age that offers a bold vision of AI’s near-future impact on society.
And it’s worth paying attention to. This isn’t the first time Altman has published a post that accurately predicted the future. In 2021, he published Moore’s Law for Everything, an essay that mapped out where AI was headed—20 months before ChatGPT even came out.
If you’d read that essay then—and taken it seriously—you’d have gotten a massive leg-up on AI understanding and adoption. We think the same is true of The Intelligence Age.
I talked through why with Marketing AI Institute founder and CEO Paul Roetzer on Episode 117 of The Artificial Intelligence Show.
Altman's track record lends credibility to his forecasts.
In March 2021, he published Moore's Law for Everything, which accurately predicted many AI developments two years before the launch of GPT-4 and 20 months before ChatGPT came out.
In it, Altman outlines how we need to prepare for intense disruption to business and society thanks to the growing powers of AI—and the fact that it’s powered by a “recursive loop of innovation,” where smart machines help us build smarter machines.
He continued in the article:
“The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.”
The article went largely unnoticed by business and government leaders at the time. But, in retrospect, it provided valuable foresight into where AI was going—and what to do about it.
“People weren’t ready yet in 2021 to hear this stuff,” says Roetzer. “Most people outside of AI in the technical world didn’t know who Sam Altman was or care who Sam Altman was.”
Yet Altman was telling us where the future was going. And now, he’s doing the same thing in The Intelligence Age.
The core thesis of The Intelligence Age is that, thanks to AI:
“In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.”
Altman gives examples of what this magic looks like:
“It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.
With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.”
What’s more, Altman believes this puts us squarely on the path to artificial general intelligence (AGI), and perhaps even artificial superintelligence (ASI), or AI that outperforms the smartest humans at every cognitive task.
He even goes so far as to write:
“This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”
In short, Altman is telling you to buckle up, because we’re about to enter an age where the power and availability of intelligence simply skyrockets.
“I hope people are listening this time,” says Roetzer.
Even if we don’t reach some type of AGI or ASI, it may not matter, says Roetzer. Instead, focus on the broader trend Altman outlines in the essay:
“In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems.”
That trend shows no sign of abating. So, whether it results in AGI/ASI or not, it’s going to fundamentally change our world.
“People need to think about the reality of a world where we have intelligence on demand and it gets smarter every day, and we can reasonably predict one to two years out what these models are going to be capable of doing,” says Roetzer.
That’s where we stand today. We are seeing intelligence become increasingly available. It has become significantly smarter than it was even a couple of months ago, let alone a couple of years ago. And we have a decent idea of where we’re headed in the short term: increasingly capable models that are multimodal and starting to become agentic.
We’re in another moment, like back in 2021, where we can begin to see the contours of the future, if we only take the time to observe them. We don’t have all the answers. But we can start to plan based on a reasonable assessment of AI’s trajectory over the next couple of years.
The leaders who do, like last time, will get a huge leg up. The ones who don’t, also like last time, will be left scrambling, except this time they’ll be even further behind.
“Think about how many companies didn’t do anything until ChatGPT came out,” says Roetzer. “Had no idea that generative AI was even a thing until that moment. It had been a thing for years.”