A bombshell report from The Information suggests OpenAI might be hitting an unexpected wall with its next-generation AI model, code-named "Orion."
The report claims that while Orion shows improvements over previous models, the gains are notably smaller than the dramatic leap we saw between GPT-3 and GPT-4.
Even more concerning? The model isn't consistently better at certain tasks, including coding—despite higher operational costs.
But OpenAI insiders are pushing back hard against suggestions of an AI slowdown. And the reality might be more complex than it first appears.
Marketing AI Institute founder and CEO Paul Roetzer broke it all down for me on Episode 123 of The Artificial Intelligence Show.
The Training Data Challenge
At the heart of the issue is a fundamental AI challenge: high-quality training data is becoming scarce.
According to The Information, OpenAI has largely exhausted publicly available text sources and has begun experimenting with AI-generated training data—a move that comes with its own complications.
The publication wrote:
“One reason for the GPT slowdown is a dwindling supply of high-quality text and other data that LLMs can process during pretraining to make sense of the world and the relationships between different concepts so they can solve problems such as drafting blog posts or solving coding bugs, OpenAI employees and researchers said.
In the past few years, LLMs used publicly available text and other data from websites, books and other sources for the pretraining process, but developers of the models have largely squeezed as much out of that type of data as they can, these people said.”
Inside Perspectives Paint a Different Picture
“Everyone was reacting to this,” says Roetzer.
While some saw The Information’s report as proof that scaling laws are hitting diminishing returns, others fundamentally dispute that idea, including some people who work at OpenAI.
“There are now two dimensions of scaling that factor into models like the o1 series - train time and now test (inference) time. Traditional "scaling laws", which focus on (pre-) training larger models for longer, is absolutely still a thing. That aspect of scale is still… https://t.co/3cBct4Zihy”
— Adam.GPT (@TheRealAdamG) November 10, 2024
OpenAI CEO Sam Altman, in a recent interview with Y Combinator's Garry Tan, expressed confidence: "These things are going to compound...The models are going to get so much better, so quickly." When asked what he's excited about in 2025, Altman's response was simple: "AGI."
Dan Shipper, co-founder and CEO of Every and a vocal AI industry watcher, also disagreed, pointing out that The Information’s report is starkly at odds with what many AI researchers working on the technology appear to believe.
“The message that this headline conveys is at odds with what people inside the big labs are actually feeling / saying. It's technically correct, but the takeaway for the casual reader (AI progress is slowing) is exactly the opposite of what I am hearing https://t.co/6CRTBvMps2”
— Dan Shipper 📧 (@danshipper) November 10, 2024
Despite the debate, there are major unanswered questions about whether anyone can create another breakthrough model that stays ahead of the competition for years, like GPT-4 did, says Roetzer.
"When they introduced GPT-4 in March of '23, everybody chased that model since then, and it seems like everybody just sort of caught up," says Roetzer. "No one is clearly ahead of a GPT-4o model, but they're all sort of comparable.”
He says it seems up in the air at the moment whether we’ve hit a real plateau—or if we’re about to get the next big thing in frontier models.
Rather than a slowdown, we might be at a crossroads where companies need to bet on different paths forward beyond simply giving models more data and compute.
Roetzer identifies four potential breakthrough areas:
- Reasoning, which is already being pioneered by OpenAI's o1 model.
- Multimodal training, or training models simultaneously on text, video, audio, and images.
- Symphony of models, where we use a frontier model as a conductor for specialized smaller models (see the sketch after this list).
- Self-play and recursive self-improvement, areas where Google has significant advantages thanks to its work on game-playing AI like AlphaGo.
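To make the “symphony of models” idea a bit more concrete, here is a minimal sketch of the conductor pattern. Everything in it is an illustrative assumption: the specialist functions, the keyword-based routing, and the stubbed outputs stand in for real model calls and are not any vendor’s actual API.

```python
# A minimal, hypothetical sketch of the "symphony of models" pattern:
# a frontier model acts as a conductor, routing each task to a
# specialized smaller model. The specialists, routing rule, and
# outputs below are stand-ins for illustration only.

def coding_specialist(task: str) -> str:
    # Stand-in for a small model tuned for code tasks.
    return f"[coding model] proposed fix for: {task}"

def writing_specialist(task: str) -> str:
    # Stand-in for a small model tuned for marketing copy.
    return f"[writing model] draft copy for: {task}"

def conductor(task: str) -> str:
    """Stand-in for a frontier model deciding which specialist to call.

    A real conductor would reason over the task itself; here that
    decision is approximated with simple keyword matching.
    """
    if any(kw in task.lower() for kw in ("bug", "refactor", "function")):
        return coding_specialist(task)
    return writing_specialist(task)

if __name__ == "__main__":
    print(conductor("Fix the bug in this function"))
    print(conductor("Draft a blog post announcing the launch"))
```

The design point is simply that the expensive frontier model only decides who should do the work, while cheaper specialized models produce the actual output.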
But even this speculation misses the bigger picture for business leaders, says Roetzer.
What This Means for Businesses
While the debate over AI's next breakthrough is fascinating, Roetzer emphasizes that it shouldn't distract from the massive untapped potential in the AI we already have today.
The reality for business leaders, says Roetzer, is:
“It’s irrelevant to you if they make a leap forward next year. The absorption of the current capabilities is so low, that the value you can create in your company using today’s models is so significant and so untapped.”
It’s fun to speculate when or if we’ll get GPT-5 or Gemini 2 or Claude 4, he says. But your real focus should be on using what we have today to succeed next year by building a more efficient team, driving productivity, and increasing creativity and innovation.
“I think diffusion of the current capabilities would be enough disruption to last us five years,” says Roetzer. “And they're going to get smarter, they're going to get more generally capable, they're going to be able to take actions—things like that may not create value for your company for another year or two, but what we have today can transform your company right now.”