In a revealing five-hour interview with Lex Fridman, Anthropic CEO Dario Amodei pulled back the curtain on what it really takes to build cutting-edge AI models.
In the process, he gave us a sneak peek at where AI is headed—and those insights can help you better prepare for the disruption that “powerful AI” (as Amodei calls it) will bring to every business and industry.
On Episode 124 of The Artificial Intelligence Show, Marketing AI Institute founder and CEO Paul Roetzer unpacked for me what you need to pay attention to in Amodei’s interview.
The top takeaway, if you can boil down a five-hour interview into a single insight, is:
Scale.
Amodei says he doesn’t see issues with scaling laws continuing, though he admits he could be wrong and unforeseen barriers could arise. Still, he seems to think the use of synthetic data and major advances in giving AI reasoning capabilities (like OpenAI’s o1 model) will keep progress moving at its current breathtaking pace.
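(If you’ve never seen what a “scaling law” actually looks like, it’s an empirical power-law relationship between a model’s size, its training data, and its training loss. Here’s a minimal Python sketch of the Chinchilla-style form published by DeepMind researchers in Hoffmann et al., 2022. The constants are that paper’s fitted values, not Anthropic’s, which aren’t public.)

```python
# Minimal sketch of an LLM "scaling law" (Chinchilla-style power law,
# Hoffmann et al., 2022). Constants are that paper's published fits;
# they are illustrative here, not Anthropic's actual numbers.

def predicted_loss(params: float, tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted training loss as a function of model size and data."""
    return E + A / params**alpha + B / tokens**beta

# Each 10x jump in scale buys a smaller but still predictable drop in
# loss -- the smooth curve Amodei is betting will keep going.
for params in [1e9, 1e10, 1e11, 1e12]:
    tokens = 20 * params  # Chinchilla-optimal ratio of ~20 tokens/param
    print(f"{params:.0e} params -> loss ~ {predicted_loss(params, tokens):.3f}")
```

The point is the smoothness: each order of magnitude of scale has reliably bought a predictable improvement, and that pattern continuing is exactly what Amodei expects.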
As a result, he expects the money that frontier companies spend on training to explode. He guesses that, today, training runs cost around a billion dollars. Next year, that will grow to a few billion per training run. In 2026, it may be above $10 billion to train a single model. By 2027, he anticipates that model companies will have ambitions to build $100 billion training clusters.
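Run the rough arithmetic on those figures and the implied growth rate is striking. (This is a back-of-envelope sketch, not Amodei’s math: I’m treating “a few billion” as roughly $3 billion and the $100 billion cluster ambition as a cost proxy.)

```python
# Back-of-envelope check on the implied growth in Amodei's numbers
# (costs in billions of USD; the 2027 figure is a cluster ambition,
# not a confirmed training-run budget).
costs = {2024: 1, 2025: 3, 2026: 10, 2027: 100}

years = sorted(costs)
for prev, nxt in zip(years, years[1:]):
    print(f"{prev} -> {nxt}: ~{costs[nxt] / costs[prev]:.0f}x year-over-year")
# Overall: ~100x in three years, or about 4.6x compounded per year.
```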
He also described just how complex the training process has become for Anthropic’s latest models as they get bigger, more capable, and more compute-intensive.
That’s an important point as we see more and more headlines claiming that scaling laws have hit a wall. Just because models may run into delays or roadblocks doesn’t mean scaling laws are to blame, says Roetzer.
“It may have nothing to do with the scaling laws,” says Roetzer. “It may just mean models are getting bigger and more complex. And these different steps just take longer and they're finding more and more kinds of hiccups or weaknesses or threats or whatever it may be within the models.”
That’s why it pays to listen to people like Amodei: They’re actually building the models and seeing the complexities of what goes into them.
“The media is going to write whatever they write, it may have nothing to do with the reality of what's going on,” says Roetzer.
What does this actually mean for you?
Well, says Roetzer, it means you probably want to bet on AI getting a lot, lot smarter very soon.
(In other words, the death of scaling laws may be greatly exaggerated.)
Amodei still believes we’ll get to AGI (or “powerful AI” as he prefers to call it) by 2026 or 2027 if you “eyeball” the rate at which capabilities are increasing.
“He doesn't really see any obstacles that aren't able to be overcome,” says Roetzer.
Roetzer also encourages us to take these kinds of timelines seriously. Amodei, Sam Altman, and many other AI leaders are essentially staking their entire reputations and careers on these predictions being directionally correct. It’s likely they’d be hedging their comments quite a bit more if they were lying about the rate of progress.
“This is near-term stuff,” says Roetzer. “We'll know when the next models come out if we hit scaling law walls or not. And they don't think we did.”