Major AI labs are reportedly hitting roadblocks in their race to build next-generation models. But industry insiders are pushing back hard against suggestions that AI development is slowing down.
Recent reports from Bloomberg and The Information suggest OpenAI, Google, and Anthropic are experiencing diminishing returns in their efforts to develop more advanced AI models—despite massive investments in computing power and data. Yet plenty of AI leaders say betting against scaling laws is a terrible idea.
Who’s right could determine how much (or how little) AI actually progresses in the next year, and whether the next crop of frontier models, like GPT-5, actually delivers on its promises.
To better understand the AI scaling debate currently raging, I talked to Marketing AI Institute founder and CEO Paul Roetzer on Episode 124 of The Artificial Intelligence Show.
“Scaling laws” refer to the basic assumption, so far proven out, that AI models continue to get smarter the more compute and data you train them on.
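The article doesn’t spell out the math, but a widely cited formalization of this idea comes from the Chinchilla paper (Hoffmann et al., 2022), which fits a model’s training loss as a power law in parameters and data. It’s included here purely as illustration; it isn’t something the reporting itself cites:

```latex
% One common formalization of LLM scaling laws (Hoffmann et al., 2022).
% Shown as illustration only; the article itself does not cite a formula.
% N = number of model parameters, D = number of training tokens.
% E, A, B, \alpha, \beta are constants fitted empirically to training runs.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Under this equation, loss keeps falling as N or D grows, so models keep getting “smarter,” but each doubling of compute or data buys a smaller improvement than the last. That shape is why “diminishing returns” and “no wall” can both be honest readings of the same curve.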
However, recent reports make the case that scaling laws may not be holding up like they did previously.
The argument here is two-fold:

1. Labs are running short of the high-quality training data that scaling depends on.
2. Even massive new investments in compute are reportedly yielding smaller gains with each new training run.

With one or both pillars of scaling laws under threat, proponents of this perspective believe those laws are starting to hit a wall.
However, plenty of AI insiders immediately pushed back against that characterization.
OpenAI CEO Sam Altman tweeted “there is no wall” in reference to reports that scaling laws have hit their limit.
Google DeepMind VP of Research Oriol Vinyals responded “what wall?” to a new benchmark showing Google’s forthcoming model jumping to the top of a popular AI leaderboard.
And former OpenAI senior advisor Miles Brundage warned that “betting against AI scaling continuing to yield big gains is a bad idea.”
Notably, dissent often comes from the people actually building the technology within AI labs.
"Media reports and some AI antagonists are claiming the scaling laws are slowing down, or plateauing,” says Roetzer. “But many voices inside the labs say there's no end in sight.”
So, are scaling laws slowing down?
We won’t know for sure until the next generation of models comes out and we can see exactly how much progress has been made from one generation to the next.
But there’s more complexity here than the headlines might have you believe, says Roetzer.
The year isn’t even over yet, so declarations that scaling laws are failing may be premature. “There’s certainly still the possibility we’re going to get smarter, bigger, more generally capable models,” says Roetzer.
We also need to be careful with our expectations, he says. The “delays” being reported may not be actual delays at all, but a product of our own assumed timelines, timelines the labs don’t necessarily share.
"The labs don't share their model release plans, so while we may have been anticipating these models by year end, they may not have,” says Roetzer.
It’s also possible the models are taking longer precisely because of how advanced they are.
“These models are complex,” says Roetzer. “They are not traditional software where you just brute force a bunch of code and you release a model that does what you want it to do and then you fix some flaws after you release it.”
They don’t work like traditional software, and they don’t always do what their creators intend. Roetzer points out that, often, it’s not until you actually train a model that you discover its flaws or deficiencies, all of which require retraining or fine-tuning.
And the more advanced models get, the more security risks need to be addressed during the training phase.
We saw this play out already with OpenAI’s Advanced Voice Mode, which was delayed for months because of the company’s concerns about how it might be misused.
“As these models get bigger, they get more complicated to train,” he says.
So, what do you actually do with this information?
Be careful how seriously you take the headlines, says Roetzer. There isn’t yet much proof that scaling laws are actually slowing down. Labs are still aggressively adding more data and more compute to their models. And whether scaling laws hold or not, they’re exploring multiple other routes to much more capable models: advanced reasoning, increasing multimodality, and memory and self-improvement capabilities.
“The labs and the governments will spend tens of billions of dollars next year on training and building these models,” says Roetzer. “Within two to three years, they will be spending hundreds of billions of dollars, to build bigger, more generally capable models.”
That means, no matter what, you’re going to get dramatically more capable AI in the near future—and this technology will have a material effect on your company and career.
“So whether the scaling laws as we have known them remain exactly true or not, I don't think it really matters,” says Roetzer.