The future of AI is arriving faster than most are ready for.
In this kickoff episode of The Road to AGI series, Paul Roetzer shares why Artificial General Intelligence (AGI) may be only a few years away, why the definition of AGI itself is a moving target, and how leaders can prepare for profound disruption—sooner than they think.
Listen or watch below—and find the show notes and full transcript further down.
Listen Now
Watch the Video
Timestamps
00:01:08 — Origins of the Series
- Paul Roetzer's LinkedIn Post with Accompanying Visuals
- AI First Book by Adam Brotman and Andy Sack
- Ep. 87 of The Artificial Intelligence Show
- Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz
00:11:17 — The Pursuit of AGI
- Elon Musk reveals xAI efforts, predicts full AGI by 2029 - VentureBeat
- Mark Zuckerberg’s new goal is creating artificial general intelligence - The Verge
- Microsoft CEO Nadella uses surprise appearance at OpenAI event to lure developers to Azure cloud - CNBC
- Build AI responsibly to benefit humanity - Google DeepMind
- Demis Hassabis - Achievement
00:14:51 — What is AGI?
- Planning for AGI and beyond - OpenAI
- Levels of AGI for Operationalizing Progress on the Path to AGI
- Google Deepmind CEO Demis Hassabis on the Path from Chatbots to AGI - The New York Times
- How can AI unlock solutions to our biggest challenges? - FT Live YouTube
- What is artificial general intelligence (AGI)? - Google Cloud
- Tesla's Musk predicts AI will be smarter than the smartest human next year - Reuters
00:22:15 — What’s Beyond AGI? Artificial Superintelligence
- Position: Levels of AGI for Operationalizing Progress on the Path to AGI
- Safe Superintelligence Inc.
- This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion. - The Wall Street Journal
- The Intelligence Age - Sam Altman
- Stephen McAleer X Post
- Reflections - Sam Altman
00:32:20 — Setting the Stage for AGI and Beyond
00:40:54 — The AI Timeline v2
- Unreasonably Effective AI with Demis Hassabis - DeepMind the Podcast Season 3, Episode 1
- Shane Legg X Post
- #452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity - Lex Fridman
- How we think about safety and alignment - OpenAI
- Ep. 139 of The Artificial Intelligence Show
- Superintelligence Strategy - NationalSecurity.AI
- The Government Knows AGI Is Coming - Ezra Klein
- The U.S. Government Knows AGI is Coming - The Artificial Intelligence Show
- Powerful A.I. Is Coming. We’re Not Ready - The New York Times
- Are We Close to AGI? Kevin Roose Thinks So - The Artificial Intelligence Show
00:51:25 — LLM Advancements (2025)
00:59:26 — Multimodal AI Explosion (2025 - 2026)
01:03:53 — AI Agents Explosion (2025 - 2027)
01:10:46 — Robotics Explosion (2026 - 2030)
- The Transcript X Post
- Tesla’s Robot Business Is Closer Than You Think - Barrons
- Starship, carrying Tesla's bot, set for Mars by end-2026: Elon Musk - Reuters
- Scoop: Tesla to display its "humanoid robot" on Capitol Hill - Axios
01:14:50 — AGI Emergence (2027 - 2030)
01:17:56 — What’s Changed?
01:21:10 — What Accelerates AI Progress?
01:24:53 — What Slows AI Progress?
01:31:06 — How Can You Prepare?
01:38:49 — What’s Next for the Series?
01:40:17 — Closing Thoughts
Key Takeaways:
Key insights from this episode that may help you focus your understanding of AGI.
AGI Might Be Closer Than You Think
Leaders at top AI labs like OpenAI, DeepMind, Anthropic, and Meta are now openly suggesting AGI could arrive within the next two to five years. Some believe early forms of AGI may already exist, especially when measured by models that can perform at or above the average human level across a range of cognitive tasks.
The Timeline Is Accelerating
Paul unveils an updated AGI timeline that reflects how rapidly things are shifting. The path includes five core stages: large language model advancements, multimodal AI, autonomous agents, robotics, and ultimately AGI and superintelligence. He emphasizes that AGI likely won’t arrive as a sudden milestone—but as a continuum, and we’re already well along that path.
AI Agents and Autonomy Will Redefine Work
The rise of AI agents—systems that can plan, reason, and act—is one of the most disruptive trends on the horizon. While today’s agents often focus on narrow tasks like research, the goal is broader: systems that can take action across departments or even run entire organizations. This shift will redefine how we think about digital workers and organizational structure.
The Risks Are Real, and We’re Not Ready
Despite rapid advancements, critical risks remain unresolved. Paul highlights that top AI labs still don’t fully understand their models’ behavior—especially when it comes to deceptive outputs or alignment with human values. Meanwhile, most businesses have yet to fully adopt existing generative AI tools, let alone build plans for the impact of AGI.
Critical Questions:
Questions we’ll keep coming back to—and try to answer—as the series unfolds.
- How will the next generation of AI models affect you, your team, and your company?
- How will generative AI model advancements impact creative work, and creativity?
- How will consumer information consumption and buying behaviors change?
- How will consumer changes impact search, advertising, publishing, etc.?
- How will we ensure the responsible use of AI in our organizations?
- How will AI-related copyright and IP issues affect businesses?
- How will AI impact strategies and budgets?
- How will AI impact technology stacks?
- How will AI impact the environment?
- How will AI impact the economy?
- How will AI impact education?
- How will AI impact organizations?
- How will jobs change?
- What remains uniquely human?
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: The goal of AI should be to unlock human potential and not replace it, but we have to be proactive and intentional about pursuing that outcome.
[00:00:08] Paul Roetzer: Welcome to The Road to AGI and Beyond, a special miniseries from The Artificial Intelligence Show. I'm your host, Paul Roetzer, founder and CEO of SmarterX and Marketing AI Institute. Artificial general intelligence, or AGI, has long been a goal of leading AI research labs, but how close are we really?
[00:00:28] Paul Roetzer: What breakthroughs are shaping its path, and what risks and responsibilities come with pursuing and eventually achieving AGI? My goal for this series is to see around the corner, to figure out what happens next, what it means, and what we can do about it, or at least to consider the possible outcomes we should be preparing for. Through interviews with leading experts, this series dives into how smarter, more generally capable models will impact businesses, the economy, the workforce, educational systems, and society. [00:01:00]
[00:01:00] Paul Roetzer: The future is unknown. Let's explore what might come next together.
[00:01:08] Origins of the Series
[00:01:08] Paul Roetzer: Welcome to episode 141 of The Artificial Intelligence Show and episode one of our new series, The Road to AGI and Beyond. I'm your host, Paul Roetzer. I figured for the first edition of the series, we would start at the beginning and lay the foundation for what comes next.
[00:01:25] Paul Roetzer: For our longtime loyal listeners, you may recall back on episode 86 in early March 2024, so just over a year ago, we had shared a Sam Altman quote about AGI, or artificial general intelligence, that hadn't been previously reported. The quote came from Adam Brotman and Andy Sack, who interviewed Altman for chapter one of their forthcoming book, AI First.
[00:01:49] Paul Roetzer: Brotman is the former Chief Digital Officer at Starbucks, who was pivotal in the development of the coffee giant's mobile payment and loyalty programs, while Sack is a [00:02:00] legendary tech visionary and former advisor to Microsoft CEO Satya Nadella. Their story starts with an interview with Sam in October 2023, one month before he was fired from OpenAI and then rehired.
[00:02:15] Paul Roetzer: During that meeting, Sam had talked about AGI multiple times, and they said, when you say AGI, what do you mean? Sam replied, that's a fair question. And I would say it's when AI will be able to achieve novel scientific breakthroughs on its own. The chapter goes on to say, Brotman and Sack replied, okay, well that's sort of wild.
[00:02:38] Paul Roetzer: Not sure exactly what that means, but what do you think AGI will mean for us and for consumer brand marketers trying to create ad campaigns and the like to build their companies? To which Sam replied, oh, for that, it will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will [00:03:00] easily, nearly instantly, and at almost no cost be handled by the AI.
[00:03:05] Paul Roetzer: The AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing. Again, all free, instant, and nearly perfect. Images, videos, campaign ideas? No problem. Adam and Andy, the authors, were admittedly new to the concept of AGI, so at this point they were basically speechless.
[00:03:28] Paul Roetzer: They then asked Sam, about when do you think AGI will be a reality? And Sam replied, five years, give or take, maybe slightly longer, but no one knows exactly when or what it will mean for society. And that was basically the end of their interview with Sam. So we shared this on episode 86, and the quote kind of went everywhere.
[00:03:52] Paul Roetzer: It, I would say, went viral to some degree. And so we got a ton of feedback about this and lots of questions, and [00:04:00] lots of people becoming worried that we were only a few years away from AGI appearing and taking everyone's jobs, basically. So the following week, I was on a flight to Miami on a Monday morning.
[00:04:11] Paul Roetzer: So Mike and I record the weekly episodes on Monday mornings. And so I'm on a flight to Miami and I realized, like, we need to build on this, like, we have to talk about this some more. And so on that flight, I created what I called an incomplete AI timeline. It was just like a starting point for the discussion.
[00:04:29] Paul Roetzer: And when I got to Miami, got to my hotel room, jumped on the call with Mike, and I said, all right, I'm just gonna talk, I'm just gonna, like, share some thoughts that I had on this flight. And so what I said as I opened episode 87 was that I don't like the futurist stuff. I'm not big on trying to make predictions.
[00:04:49] Paul Roetzer: I don't pretend like I have some insane inside knowledge about everything going on within these AI labs. And honestly, I'm, I'm pretty convinced that most of them don't actually know [00:05:00] what's gonna happen with their own models 18 to 24 months from now. I think they have a pretty good concept of what they think they're gonna be able to do in the next 12 months.
[00:05:10] Paul Roetzer: But I think that it's really important all of us try and interpret what's going on at these labs and what these leaders are saying, so we can understand the story arc a little bit better and begin to take action. So the week prior to episode 87, in that kind of week between 86 and 87, I had listened to a series of podcast episodes with Demis Hassabis, the CEO of Google DeepMind, Yann LeCun, the Chief AI Scientist at Meta, Sam Altman, and a number of others.
[00:05:40] Paul Roetzer: They were all kind of talking about this AGI concept and these ideas behind the timeline sort of accelerating. And so I started considering those interviews in the context of other recent reports and articles and interviews with, like, Dario Amodei of Anthropic and Mustafa Suleyman, who at the time was with Inflection, the company [00:06:00] he'd started; he would soon thereafter move on to become the CEO of AI at Microsoft. Ilya Sutskever, who at the time was at OpenAI and would soon move on and start his own Safe Superintelligence company.
[00:06:14] Paul Roetzer: Shane Legg of DeepMind, one of the co-founders of DeepMind, and a bunch of other AI leaders. And when we look back over the last, like, 70-some years, a lot of people kind of think that AI just emerged in the last few years, when in reality this idea of pursuing human-like general intelligence has been going on, or at least been theorized, since the 1950s.
[00:06:34] Paul Roetzer: So for more than 70 years, these researchers pursued this idea, and they were driven by this belief that we could give machines the ability to think, reason, understand, create, and take actions in the digital and physical worlds. But progress was often slow. We would hit these, what are called AI winters, where it would seem like it just wasn't gonna work.
[00:06:56] Paul Roetzer: There were some breakthroughs around 2011, [00:07:00] 2012, where we started to see that this idea of deep learning might actually work. And everything just sort of escalated from there, leading to the ChatGPT moment in November 2022, when everything sort of changed and when generative AI found its way into society, when we all of a sudden had these machines that could create. They could generate images, they could generate text, and you and I could experience them through a simple application or website.
[00:07:31] Paul Roetzer: So for me, I began researching AI in 2011. It started for me with IBM Watson winning on Jeopardy. That was sort of my inflection point where I became curious enough to go figure out what this technology was. And at the time I owned my marketing agency, and I was thinking about the practicality of, could this sort of technology be applied to my agency?
[00:07:54] Paul Roetzer: Could we use it to help better develop strategies for client campaigns and run campaigns more [00:08:00] effectively? And so I started following the space closely, but back in 2011, there wasn't anybody talking about artificial intelligence that wasn't in the field, like the researchers themselves, the technologists.
[00:08:13] Paul Roetzer: And so I had to spend a lot of my time just trying to decipher what they were talking about and what it actually meant. And one of the hardest things for me in the early years was just arriving at a definition of artificial intelligence that made sense to me, and that I could eventually explain to other people. Anyone who's listened to my talks or, you know, been listening to the podcast for a while knows my favorite definition of artificial intelligence is the science of making machines smart.
[00:08:38] Paul Roetzer: And that actually came from Demis Hassabis. I think it was an interview he did with Rolling Stone magazine where I first heard that definition. So I've been following this space for a really long time, listening to every interview, reading every article, blog post, research report from top AI researchers, labs, and entrepreneurs.
[00:08:57] Paul Roetzer: I first wrote about AI in my 2014 [00:09:00] book, The Marketing Performance Blueprint, where I actually theorized this idea of building a marketing intelligence engine to drive marketing strategy and campaigns and performance. I started my Marketing AI Institute in 2016, and sold my agency in 2021 to focus on AI.
[00:09:19] Paul Roetzer: Because by spring of 2021, I'd become convinced we were arriving at a tipping point that everything was about to change. I didn't know it was gonna be ChatGPT. I didn't know that that was right around the corner, but I knew the labs were working on language generation and understanding, and they had made a lot of progress by that point.
[00:09:39] Paul Roetzer: But it was actually Cade Metz's book, Genius Makers, that became the real kind of forcing function for me. When I read that book, I started to connect the dots of kind of what had happened since 2011 in this deep learning movement, the pursuit of AGI by these leading labs, and sort of why it hadn't been adopted yet [00:10:00] within enterprises like I assumed it would have been by that point.
[00:10:03] Paul Roetzer: And so everything just started making sense, and I actually decided on a walk, I was on spring break with my family, that I was done, I was gonna sell the agency, and I was gonna focus exclusively on trying to figure out the story of AI. And by early 2023, what I had noticed was that the tone and positioning on AGI from the top AI labs had changed.
[00:10:25] Paul Roetzer: They were no longer talking about AGI as something that might be possible in a decade or more. They were conveying increasing confidence that there was a clear path to achieving AGI within three to five years, which would put it in the 2026 to 2028 range. That was a very short time period in my opinion, so I had become convinced that they, these labs were intent on pursuing and achieving AGI.
[00:10:54] Paul Roetzer: Yet, when I looked around, no one was talking about what that meant. No one was game planning. Well, what [00:11:00] if they're right? What are the possible scenarios to businesses and the economy and educational systems? So when you started to look around, you would see this pursuit of AGI and by 2023, 2024, they were becoming much more vocal about it.
[00:11:17] The Pursuit of AGI
[00:11:17] Paul Roetzer: So I wanted to highlight for you a few of the key ways that these leaders are talking about this. So we have Elon Musk, who started xAI, I think it was the end of 2023, early 2024, something like that, in the last two years. And this is his attempt to build his own research lab.
[00:11:39] Paul Roetzer: Again, if you've listened to the podcast for a long time, you know the backstory. Elon Musk and Sam Altman co-founded OpenAI with a collection of other researchers. They had a falling out around 2019, and now Elon is suing Sam and OpenAI for trying to become a for-profit company. And so there's a whole messy history here, but Elon created his own AI [00:12:00] research lab called xAI.
[00:12:01] Paul Roetzer: And so Elon is on record as saying the overarching goal of xAI is to build a good AGI with the overarching purpose of just trying to understand the universe. Next, Mark Zuckerberg. So Meta made their big switch from the metaverse to focusing on AI. Now, Meta and Facebook have been a major player in AI for well over a decade, but they weren't solely focused on it the way they are now.
[00:12:27] Paul Roetzer: So they'd spent like $10 billion trying to make the metaverse come to life. And then sometime around 2023, early 2024, Zuckerberg realized that they needed to go much more aggressively into AI. And so Zuckerberg said, quote, we've come to view that in order to build the products that we want to build, we need to build for general intelligence.
[00:12:49] Paul Roetzer: Satya Nadella, last year on CNBC, said, quote, our mission is to empower every person and every organization on the planet to achieve more. I [00:13:00] think we have the best partnership in tech, he was referring to OpenAI, and I'm excited for us to build AGI together. Google DeepMind, on their about page, says, in the coming years, AI, and ultimately artificial general intelligence, has the potential to drive one of the greatest transformations in history.
[00:13:17] Paul Roetzer: Now, they don't specifically state that it's their mission to build it, but actually, if you dig into it, their stated mission is to build AI responsibly to benefit humanity. But make no mistake about it, their goal is to build AGI. So in their vision statement on the Google DeepMind site, it says, in the coming years, AI, and ultimately AGI, has the potential to drive one of the greatest transformations in history, as I said.
[00:13:41] Paul Roetzer: Then it goes on to say, we're a team of scientists, engineers, ethicists, and more working to build the next generation of AI systems safely and responsibly. By solving some of the hardest scientific and engineering challenges of our time, we're working to create breakthrough technologies that could advance science, transform work, serve diverse [00:14:00] communities, and improve billions of people's lives.
[00:14:03] Paul Roetzer: Now Demis Hassabis, who's the CEO and co-founder of Google DeepMind, has said multiple times that this is the whole focus, that his whole mission in life is to solve the problem of intelligence and then solve everything else, that he sees AGI as the path to solving the most challenging problems in the world.
[00:14:22] Paul Roetzer: So he and his colleagues have been working on this grander ambition of AGI by building machines that can think, learn, and solve humanity's toughest problems. Hassabis has said he believes that it'll be an epoch-defining technology, like the harnessing of electricity, that will change the very fabric of human life.
[00:14:42] Paul Roetzer: So we know that they're all thinking about it. In many cases, it's actually their mission, whether it's stated or not stated as the mission. It is what they're setting out to do.
[00:14:51] What is AGI?
[00:14:51] Paul Roetzer: The problem in recent years is that the definition has become quite uncertain. We don't know how they actually [00:15:00] define AGI, and they keep changing the definition.
[00:15:04] Paul Roetzer: So it's, like, become this moving target. So we'll go through a few of the definitions just to sort of level set for everyone. So OpenAI, who has changed this multiple times and continues to evolve it, on one of the pages on their site, Planning for AGI and Beyond, they say AI systems that are generally smarter than humans.
[00:15:25] Paul Roetzer: There's a Google DeepMind paper called Levels of AGI that we'll talk about. In that paper, they say AGI is an AI system that is at least as capable as a human at most tasks. Demis Hassabis, who we just talked about, has multiple definitions, but they're roughly similar. So in one example in The New York Times, he said, able to do pretty much any cognitive task that humans can do.
[00:15:49] Paul Roetzer: And then in another recent interview he said, it's a system that is capable of exhibiting all the cognitive capabilities that humans have. [00:16:00] Google Cloud has a page dedicated to AGI. So we will explore for a moment how Google Cloud thinks about AGI. So they define it as a hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can.
[00:16:18] Paul Roetzer: It is a type of AI that aims to mimic the cognitive abilities of the human brain. Now, that page goes on to say, in addition to the core characteristics mentioned earlier, AGI systems also possess certain key traits that distinguish them from other types of AI. One is generalization ability: AGI can transfer knowledge and skills learned in one domain to another,
[00:16:42] Paul Roetzer: enabling it to adapt to new and unseen situations effectively. Now, I'll pause for a minute on the definitions from Google Cloud and add some context here. So what this means is, historically, we have had narrow AI. We've had AI that learned how to generate images or understand images or [00:17:00] generate voice or create text or play chess.
[00:17:06] Paul Roetzer: So we had AI that was trained to do a specific thing. What we are looking for, and what AGI promises, is that the same AI that learns how to play chess at a superhuman level could flip over and play Pokemon or play Super Mario. It could play other games. It could play checkers. It could play Uno, because it's actually able to generalize its knowledge and apply it to other domains.
[00:17:32] Paul Roetzer: That's how humans work. Humans learn very quickly how to go from one game to the next and can develop moderate capabilities in those areas rather quickly. That's not how AI traditionally has worked, and so generality is a really important concept to understand artificial general intelligence. We want these generally capable cognitive abilities that spread across domains.
[00:17:57] Paul Roetzer: The second part, going back to Google [00:18:00] Cloud's overview, is common sense knowledge. So they say AGI has a vast repository of knowledge about the world, including facts, relationships, and social norms, allowing it to reason and make decisions based on this common understanding.
[00:18:18] Paul Roetzer: The pursuit of AGI, Google Cloud continues, involves interdisciplinary collaboration among fields such as computer science, neuroscience, and cognitive psychology. Advancements in these areas are continuously shaping our understanding and the development of AGI. Currently, AGI remains largely a concept and a goal that researchers and engineers are working towards.
[00:18:38] Paul Roetzer: So again, that was Google Cloud. Now, in all of these definitions, I've tried to arrive at, what do I think it is? I've read all of them. I've studied the space for however many years, this is now 13, 14 years, and it's like, what do I feel is a reasonable definition? And so what I've landed on, and again, like some of these AI leaders, I may [00:19:00] change this as time goes, but I define it as an AI system that is generally capable of outperforming the average human at most cognitive tasks.
[00:19:10] Paul Roetzer: Now, I wanna unpack this for a moment because there's a couple of really important phrases in here. One is generally capable, and two is average human. So the generality part comes from what we've already discussed: it needs to be able to learn and perform across multiple domains. The key, though, what's often missing from these definitions from AI leaders, is, what are we talking about in terms of human capability?
[00:19:37] Paul Roetzer: Are we talking about PhD level, you know, superhuman? Are we talking about average human? And so when I think about the impact of AGI, and I'm trying to plan for my own business, I'm trying to plan for economic impact, I'm trying to plan for, like, where my kids are gonna go to school and what they're gonna study, like, I'm trying to think about the realities here.
[00:19:57] Paul Roetzer: And the reality is most businesses [00:20:00] are filled with average workers, people who do what they need to do to get the job done. They are not always filled with A talent. They're not filled with the best of the best, the top 1%, the top 10%. And so there's a lot of average work done. And so for me to think about the impact of AGI or anything close to it, my thought is it just needs to be able to do the work that the normal human would do.
[00:20:28] Paul Roetzer: And if the normal human does average work, then we have much bigger things to worry about, way faster. If the definition is more like what Elon Musk calls it, which is AI that is smarter than the smartest human, well, that's a whole different level we have to get to. But if you look at your business, look at your team and say, okay, let's force rank here: here's our A players, here's our B players, here's our C players.
[00:20:51] Paul Roetzer: Here's our A players, here's our B players, here's our C players. The question basically becomes, when is the model at B player level? [00:21:00] And quite honestly, there's a lot of tasks right now that it's already there. And so when you start stacking those and you start looking at a single model that can perform across marketing and sales and service and accounting and operations and HR and finance and legal, a single model that is at least average human level at all of those things, you all of a sudden start to see how this could get very complicated very quickly with managing this in business and society.
[00:21:28] Paul Roetzer: So, back to Elon Musk's definition. Again, when he was asked about AGI, this is how he defined it: smarter than the smartest human. And he said, I think it's probably next year or within two years. Now, anyone who follows Tesla and Elon Musk knows that Musk tends to over-exaggerate timelines quite dramatically.
[00:21:48] Paul Roetzer: He's been promising full self-driving since, like, 2016. Now, he usually ends up being right that something is technically possible, but he is very aggressive in his timelines, let's [00:22:00] say. So this idea, though, the thing I wanna focus on, was his definition, this smarter than the smartest human, because that leads us to, well, what's the beyond AGI part?
[00:22:11] Paul Roetzer: So we go back to the title of this series, it's the Road to AGI and Beyond.
[00:22:15] What’s Beyond AGI? Artificial Superintelligence
[00:22:15] Paul Roetzer: Well, what's beyond AGI? That's pretty significant already. Well, what's beyond AGI is artificial superintelligence, or ASI. So there's a paper, and I'll link to all of these things in the show notes. Our team will make sure we put all the links in here.
[00:22:29] Paul Roetzer: So if you wanna spend time and really drill into this stuff, I welcome you to do it. There's a paper that came out in 2024, this is May of 2024, from Google DeepMind called Levels of AGI for Operationalizing Progress on the Path to AGI. So this report was written in September 2023. So if we rewind back to September 2023, GPT-4, which was the most powerful model in the world for almost two years, was six months old. [00:23:00]
[00:23:00] Paul Roetzer: So the paper comes out May 2024. One of the lead authors is Shane Legg, who I mentioned earlier. He is one of DeepMind's co-founders. He's also actually credited with coining the term AGI, around 2002. So Shane Legg releases this paper, co-authored by eight researchers. The paper starts by considering nine examples of AGI definitions from prominent AI researchers and organizations and reflects on their strengths and limitations.
[00:23:27] Paul Roetzer: So they're doing the same thing I was just trying to do: what are we even talking about here? Can we agree on what AGI is, so we can therefore know how to measure it and know when we get there? 'Cause right now we have no idea if we are there, or if we will be there in a year or two.
[00:23:42] Paul Roetzer: So you have to come to some level of understanding and agreement on the definition. So according to the authors, quote, the concept of AGI has grown from a subject of philosophical debate to one which also has near-term practical relevance. Some experts believe that sparks of AGI, quote, sparks of [00:24:00] AGI,
[00:24:00] Paul Roetzer: it's referring to a paper called Sparks of AGI, are already present in the latest generation of large language models. Again, we're talking about fall, spring 2023 to 2024. So some researchers believed that there were already sparks of AGI in the early form of large language models we were seeing, like GPT-4.
[00:24:21] Paul Roetzer: Back to the paper's quote: some predict AI will broadly outperform humans within about a decade. Some even assert that current LLMs are AGIs. So the Google DeepMind team proposed a framework for classifying the capabilities and behaviors of AGI models and their precursors. The framework introduces levels of AGI based on performance, generality, and autonomy, meant to provide a common language that compares models, assesses risks, and measures progress along the path to AGI.
[00:24:54] Paul Roetzer: So I'll come back to two of these factors. Performance, in their mind, [00:25:00] refers to the depth of an AI system's capabilities, how it compares to human-level performance for a given task. Generality, as we've already discussed, is about the breadth of an AI's capabilities, or the range of tasks for which an AI system reaches a target performance threshold.
[00:25:17] Paul Roetzer: They argue that it is critical for the AI research community to explicitly reflect on what we mean by AGI and aspire to quantify attributes like levels of AGI performance, generality, and autonomy. Now, their levels are: level zero, no AI, so just traditional software. Level one is emerging, and they classify that as equal to or somewhat better than an unskilled human. Level two is competent, that is, at least 50th percentile of skilled adults.
[00:25:50] Paul Roetzer: So again, we're getting into this average human, basically. So at level two, like, let's say you take ChatGPT. It can do marketing, [00:26:00] sales, service, operations, HR, finance, legal, IT, management. If it could do all of those things, a single model, do all of those things at the 50th percentile of skilled adults, they're arguing it is now a form of AGI, what they would call competent AGI.
[00:26:17] Paul Roetzer: So it's actually a spectrum. This is the real key concept with this paper. It's not binary, it is or isn't AGI. They're saying this is a form, this is a competent AGI. This is an early form. It is on the spectrum, 50th percentile basically. And so this is where we start to get into my definition.
[00:26:37] Paul Roetzer: Like, if we get to the point where an AI model is at or above the average skilled adult at most cognitive tasks within a business, within knowledge work, we are at a point of AGI that society is not prepared to handle. So after level two comes level three, which is expert, which is, again, in their classification, [00:27:00] at least 90th percentile of skilled adults. Level four is virtuoso, at least 99th percentile of skilled adults.
[00:27:07] Paul Roetzer: And then level five is superhuman, which outperforms 100% of humans. Take the smartest humans in the world, and it can outperform all of 'em at basically any cognitive task. So that is where we would find superintelligence. That is what we basically are defining it as: that you take Google Gemini, ChatGPT, Anthropic's Claude, and you take the smartest human in every domain, and it outperforms all of them.
[00:27:32] Paul Roetzer: A single model, better than every human, the smartest humans that have ever lived, at every domain. So that's a pretty weird thing to think about. But again, if you listen to our podcast regularly, go back to episode 129, where we spent like 20 minutes on this idea of superintelligence. And so not only are the AI labs convinced AGI is near, when you look at what's being talked about and what's being written,
[00:27:59] Paul Roetzer: most of [00:28:00] them sure seem to think superintelligence is within reach as well. So let's walk through a couple of those examples. We had Situational Awareness, a research report, or a series of articles, from Leopold Aschenbrenner. So episode 102, we talked about this one. This was June 12th, 2024, when we talked about it.
[00:28:21] Paul Roetzer: So in that series of papers, he claims that all the signals he's seeing, as one of a few hundred AI insiders, say that we will have superintelligence in the true sense of the word by the end of the decade, and that AGI by 2027 is strikingly plausible. He goes on to say, AI progress won't stop at human level.
[00:28:43] Paul Roetzer: Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into one year, five orders of magnitude, in his world, of improvement in a single year. We would rapidly go from human-level to vastly superhuman AI systems. The power [00:29:00] and the peril of superintelligence would be dramatic.
[00:29:03] Paul Roetzer: June 19th, 2024, we had the formation of a company called Safe Superintelligence by Ilya Sutskever, who was one of the co-founders and the chief scientist of OpenAI, and is considered one of probably the top three AI researchers in the world, if not the top researcher in the world. So he's built a company that's on a straight line to superintelligence, with zero intentions of any products or any revenue until they achieve superintelligence.
[00:29:30] Paul Roetzer: They just secured $2 billion in funding at a $30 billion valuation at the beginning of March. We also had The Intelligence Age, an article by Sam Altman. This is September 23rd, 2024. In it, he wrote, here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress,
[00:29:54] Paul Roetzer: We have figured out how to melt sand, add impurities, arrange it with astonishing [00:30:00] precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence. He goes on to say, this may turn out to be the most consequential fact about all of history so far.
[00:30:17] Paul Roetzer: It is possible that we will have superintelligence in a few thousand days. It may take longer, but I'm confident we'll get there. How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked. In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
[00:30:42] Paul Roetzer: That's really it. Humanity discovered an algorithm that could really, truly learn any distribution of data, or really the underlying rules that produce any distribution of data, to a shocking degree of precision. The more compute and data available, the better it gets at helping [00:31:00] people solve hard problems.
[00:31:01] Paul Roetzer: I find no matter how much time I spend thinking about this, I can never really internalize how consequential this is. Then January 3rd, 2025, we had a tweet that we reported on from Stephen McAleer, who is a researcher working on agent safety at OpenAI, and he tweeted, I kind of miss doing AI research back when we didn't know how to create superintelligence.
[00:31:25] Paul Roetzer: Sam Altman shows up again January 5th, 2025 with an article called Reflections, and he says, we are now confident we know how to build AGI, as we have traditionally understood it. We believe that in 2025, we may see the first AI agents quote, join the workforce and materially change the outputs of companies.
[00:31:47] Paul Roetzer: We continue to believe that iteratively putting great tools in the hands of people leads to great broadly distributed outcomes. We are beginning to turn our aim beyond that to super intelligence in the true sense of the word. We love our [00:32:00] current products, but we are here for the glorious future. With super intelligence, we can do anything else.
[00:32:05] Paul Roetzer: Superintelligent tools could massively accelerate scientific discovery and innovation beyond what we are capable of doing on our own, and in turn, massively increase abundance and prosperity.
[00:32:20] Setting the Stage for AGI and Beyond
[00:32:20] Paul Roetzer: So now let's talk about setting the stage for AGI and beyond. How does OpenAI define this? We went through some basic definitions, but how do they think about the stages of artificial intelligence?
[00:32:31] Paul Roetzer: So in July 2024, Bloomberg was first to report stages of artificial intelligence that were OpenAI's internal ways of thinking about this. This has since been verified, that these are indeed the ways that OpenAI looks at this. So in their world, level one is chatbots, or AI with conversational language.
[00:32:50] Paul Roetzer: That is what we got with ChatGPT in November 2022. Level two is reasoners, human-level problem solving. Level three, [00:33:00] agents, systems that can take actions. Level four, innovators, AI that can aid in invention. And level five, organizations, AI that can do the work of an organization. Right now, we had level one in fall 2022.
[00:33:20] Paul Roetzer: We were introduced to reasoning models in September 2024. The o1 model from OpenAI was the first. We now have a half dozen of them or so that we're aware of from major labs. Everybody's building reasoning into them. We just got Gemini 2.5 Pro yesterday; that is a reasoning, or thinking, model. And then agents, we'll talk a lot about agents in a minute, but we are now able to make smarter agents because they have reasoning capabilities.
[00:33:52] Paul Roetzer: And then that should pretty quickly lead us to innovators, which is where, like, Demis Hassabis would consider AGI achieved, when we [00:34:00] have true innovation, you know, creation of original scientific breakthroughs. And then level five, organizations, which could be, I don't know, like, after AGI, before superintelligence, we could get to the organization level.
[00:34:13] Paul Roetzer: And that's basically the AI as an autonomous organization. You just give it a goal, and it runs everything by itself. That's a weird concept. We'll come back to that one. So what we know is the models are getting smarter, they're getting more generally capable, and AI leaders speak with increasing confidence that the path is clear.
[00:34:32] Paul Roetzer: As we have heard, they all seem to be pursuing the same potential variables to unlock AGI. So we talked about this on a recent podcast episode that all the labs have a general idea of what needs to happen. They often talk about like needing one or two major breakthroughs to get to AGI and beyond.
[00:34:52] Paul Roetzer: They all seem to kind of be pursuing the same basic ideas. What happens, though, is there is a [00:35:00] scarcity of compute chips, like the Nvidia chips, and there's a scarcity of energy, that prevents them from trying everything at the same time. So they have to serve the models they already have, and then they need to train these new models, and they need to run experiments to figure out which research direction to go in to unlock the next breakthrough that's needed.
[00:35:20] Paul Roetzer: And so when you look across what's happening, I'll just highlight a few of the possibilities. So if you're Google or Anthropic or OpenAI or Cohere or Mistral or xAI or Meta, you basically have all these AI researchers, all super smart people. You have some finite amount of Nvidia chips to do training runs on,
[00:35:40] Paul Roetzer: to do your experimentations on. They all generally look at these possibilities. Agentic: giving these things the ability to take actions. Computer use: we'll talk a little bit more about this later, but the ability for these things to see and use applications and content on your devices, your screens, the same way you and I would, like we would use a keyboard and a [00:36:00] mouse. Context windows: expanding the context window, meaning I can give it 50 PDFs and it'll know everything within there,
[00:36:08] Paul Roetzer: and be able to search and remember things within the context. And then when it gives me outputs, they become more accurate because it's actually doing it within the context window of the information I've provided it. So context windows are known to be a great way to improve the accuracy and reliability of these models.
[00:36:26] Paul Roetzer: Continual learning: these things forget. They won't remember something you talked about 10 threads ago. And so this idea of the model learning, and then, when they retrain a new model, that it doesn't forget everything it previously learned, which is what they do now each time. It's like a reset button.
[00:36:44] Paul Roetzer: Emotional intelligence, memory, multimodality, we'll talk about reasoning, recursive self-improvement, where these things improve themselves. Vision, voice, world models: being able to understand the world around it, understand physics, reproduce the laws of physics, basically, in [00:37:00] the outputs of its videos and images.
[00:37:02] Paul Roetzer: Any of these could be unlocks to the next breakthrough, and there's probably others. They're aware of all of these. They have to figure out which ones to make the bets on. So what's happening is some of the labs have billions of dollars to play with. So, like, OpenAI, Google, Meta, xAI, Anthropic in particular, they have billions of dollars to keep just pushing for the biggest, smartest, most generally capable models.
[00:37:26] Paul Roetzer: They buy hundreds of thousands of chips from Nvidia. They take data that they have rights to and don't have rights to, like, you know, pirated books. They take all this data in, and then they just try and train these massive models. And that's what gets us to, you know, Gemini 2.5 or GPT-4.5.
[00:37:48] Paul Roetzer: They just keep building bigger and bigger models. Other approaches, like Cohere, Mistral, Writer, and also the big labs, they also have these smaller projects going. They're trying to unlock smaller, more [00:38:00] efficient models through algorithmic techniques, reinforcement learning, more fine-tuned data, more, you know, proprietary data trained in specific areas.
[00:38:09] Paul Roetzer: So there's this effort to build the biggest, most generally capable models. And then there's these efforts to build the smaller, more efficient models that can run on device, basically. And so, as I've said in episode 140 of the podcast, this isn't all noise and hype. This is what an emerging trend looks like.
[00:38:27] Paul Roetzer: Like, you see and hear similar threads from all these different leaders, all these different AI labs. You have a lot of the top AI researchers who bounce around between these labs. They're seeing and hearing everything. They go to the same parties in San Francisco; like, they talk to each other all the time.
[00:38:43] Paul Roetzer: They're all seeing the same things across the labs. And when you start to piece it together, you realize that either they're all wrong, or AGI is coming, and it's coming really fast, faster than we're preparing for in the business world, in the economy, in educational systems, in [00:39:00] society. And so that's my belief, and the whole purpose behind this series is we have to start considering that they're right, and that within like two to three years, the world is going to start changing in a very dramatic way that we are not prepared for.
[00:39:15] Paul Roetzer: And so I, for one, don't wanna sit back and wait. I would much rather accept that they might be wrong, I might be wrong, and we don't get there in two years. Maybe it's five, maybe it's seven, maybe it's never. But it sure seems like the probability is high enough that we should be doing more, that we should be considering the implications on ourselves, on our companies, on our industries, on our educational systems.
[00:39:39] Paul Roetzer: That's what I wanna do, because when I go back to November 2022 and the emergence of ChatGPT, like, we knew something like that was coming. Like, in our book, the Marketing Artificial Intelligence book that came out in, like, spring of '22, there's a whole section titled What Happens When Machines Can Write Like Humans.
[00:39:56] Paul Roetzer: Like, we were already at GPT-3 level, I think, when we [00:40:00] wrote the book. We knew this was going to be unlocked. We didn't know it would be through something called ChatGPT, but the signs were obvious. Sam Altman had written his Moore's Law for Everything post in March of 2021, telling us that models that could think, reason, understand, create were coming.
[00:40:17] Paul Roetzer: They'd already seen them in their labs, and yet most business leaders, the vast majority of business leaders, had done nothing. They had no idea that this stuff was coming. And that's how I feel about AGI today. There's still so many business leaders who don't even comprehend the current capabilities of AI,
[00:40:34] Paul Roetzer: much less be thinking about what happens when AGI shows up. And I don't want people to arrive at that point, whether it's two years from now or three years from now, or five years from now, where they did nothing, they had no contingency plans whatsoever. So that's my goal here, is to try and lay out what happens next.
[00:40:54] The AI Timeline v2
[00:40:54] Paul Roetzer: And that brings us to the AI timeline, version two. So again, in episode 87 [00:41:00] of last year, March 2024, I laid out what I called an incomplete AI timeline. So the whole premise was, I don't actually know, and I'm convinced none of these AI labs actually know, what happens. But they talk enough about it, and you read enough and see enough, and see the research reports as hints of where they're going.
[00:41:19] Paul Roetzer: You can piece together what they're working on. And so that's what I'm trying to do with this timeline is piece together. What are they saying? What do they believe is going to happen? Like where are we now? What's gonna happen next with these models? And then most importantly, what can we do to prepare?
[00:41:35] Paul Roetzer: So the way I go about this is I actually keep an AGI journal. So Mike and I, in our weekly podcast, you know, I'll curate 40-ish articles, podcast interviews, research reports, tweets, like, all these things. I have my private conversations with companies and AI labs, my own observations of what's going on.
[00:41:55] Paul Roetzer: Presentations we watch, courses we take, all this stuff we curate [00:42:00] across all AI-related topics every week. The stuff that's related to AGI, I have a separate journal for, and so I basically try and keep track of what's going on, what people are saying, and then at any given point, I can go in there and kind of, like, try and piece it together.
[00:42:13] Paul Roetzer: And so that's what I did for today: I went back through my journal since March of last year and tried to piece together, what are they saying? So what I'm gonna do is walk you through now what has sort of been happening, what has become apparent to me over the last 12 months of journaling AGI, and then I'll actually create a visual of this.
[00:42:35] Paul Roetzer: I'll share it on my LinkedIn account, and then we'll put it in the show notes as soon as I have it done. I'm hoping it's done when this goes live on Thursday, March 27th, but it's Wednesday, March 26th at 1:00 PM right now. So hopefully it'll be live for you, but stay tuned for that in the next couple days.
[00:42:52] Paul Roetzer: So what has become apparent over these last 12 months? The key for me is the timeline's accelerating. So [00:43:00] there's a lot of things that I had sort of projected last year that have stayed very true. Like, there's actually nothing in the timeline, when I went back and revisited it, that I would change, that I, like, just got completely wrong.
[00:43:14] Paul Roetzer: Just a lot of new things emerged that evolved the timeline and convinced me that AGI is actually coming sooner than I had originally kind of projected it might. So let me go through a few things to add context here. There's a phenomenal podcast series called DeepMind: The Podcast. I would highly recommend you check it out.
[00:43:33] Paul Roetzer: I think they've done three seasons now. It's with Hannah Fry, Professor Hannah Fry. She's amazing, and she has inside access to everybody at Google DeepMind. So she does all these incredible interviews with the leaders there. So in episode one of season three, August 2024, Demis Hassabis said, I think it's still under-hyped, or perhaps underappreciated, even now,
[00:43:55] Paul Roetzer: what's going to happen when we get to AGI and post-AGI. I still don't [00:44:00] feel like people have quite understood how enormous that's going to be, and therefore the responsibility of that. March 10th, 2025, just two weeks ago, Shane Legg, again, co-founder of Google DeepMind, said in a tweet: AGI will soon impact the world from science to politics, from security to economics, and far beyond.
[00:44:21] Paul Roetzer: Yet our understanding of these impacts is still very nascent. Now, that is very informational. For people who haven't been following along at home, these labs have no idea what happens. Like, they're very direct about that. They are not the ones that are gonna figure this out for you. They are not gonna think about what happens in your industry, what happens to your job.
[00:44:41] Paul Roetzer: They don't see that as their responsibility. They're focused on building the smartest technology they can build. And they'll work with people who wanna do research on this stuff, but they are not gonna come and tell you what's gonna happen to your job as a result of this stuff. They're just gonna build it and let us figure it out.
[00:44:56] Paul Roetzer: Dario Amodei, the co-founder and CEO of [00:45:00] Anthropic, in a Lex Fridman podcast interview, November 2024, said, some of the new models that we developed, some reasoning models that have come from other companies, they're starting to get to what I would call the PhD or professional level. We've seen similar things in graduate-level math, physics, and biology from models like OpenAI's o1, which was their first reasoning model in September of '24.
[00:45:25] Paul Roetzer: He said, so if we just continue to extrapolate this in terms of skill that we have, I think if we extrapolate the straight curve, within a few years we will get to these models being above the highest professional level in terms of humans. So again, go back to, like, OpenAI's levels, or, I'm sorry,
[00:45:44] Paul Roetzer: Google DeepMind's levels. They're talking about, like, that PhD level and beyond. They're talking about the smartest humans. When asked about his timeline for achieving artificial general intelligence, or powerful AI, as he prefers to call it, he hedged based on [00:46:00] variables that could arise, but said, if you just kind of eyeball the rate at which these capabilities are increasing, it does make you think we'll get there by 2026 or 2027.
[00:46:12] Paul Roetzer: 2026, again, is next year. So he's putting a one-to-two-year timeline on these AIs that are smarter than PhD-level humans at everything. OpenAI then recently published, I think this was early March, a post called How We Think About Safety and Alignment. The post states, as AI becomes more powerful, the stakes grow higher.
[00:46:36] Paul Roetzer: The exact way the post-AGI world will look is hard to predict. The world will likely be more different from today's world than today's is from the 1500s. We expect the transformative impact of AGI to start within a few years. Again, they're not gonna figure out what it means. They're just gonna tell you it's gonna look different than 500 years ago.
[00:46:57] Paul Roetzer: So, yeah. Then [00:47:00] we had another one from earlier this year called Superintelligence Strategy. This is a report from Dan Hendrycks, who's the director of the Center for AI Safety and an advisor to Elon Musk's xAI and Scale AI. Scale AI is a big player in training these models. They provide the data to train the models, and I'm sure a host of other things.
[00:47:19] Paul Roetzer: Alexandr Wang, who is the CEO and founder of Scale AI. We actually had a whole podcast episode, a main topic, where we featured Alexandr Wang. I don't remember what episode that was, but our team will drop it in the show notes. So if you wanna go back and learn about him, we profiled him.
[00:47:35] Paul Roetzer: And then Eric Schmidt, who is the former Google CEO and executive chairman. So these three authors co-published Superintelligence Strategy. In the opening paragraph, it says, superintelligence, or AI vastly better than humans at nearly all cognitive tasks, is now anticipated by AI researchers. Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy [00:48:00] to navigate a new period of transformative change.
[00:48:04] Paul Roetzer: We then had, and this was a fun one to talk about on the podcast, Ezra Klein, a New York Times opinion writer and host of The Ezra Klein Show. On March 4th, 2025, he interviewed Ben Buchanan, who was the former special advisor for AI in the Biden White House. Klein starts the episode and his opinion piece in The New York Times by saying, for the last couple months, I have had this strange experience: person after person, from AI labs, from government,
[00:48:30] Paul Roetzer: has been coming to me saying, it's really about to happen. We're about to get artificial general intelligence. What they mean is that they have believed for a long time that we are on a path to creating transformational artificial intelligence capable of doing basically anything a human being could do behind a computer, but better.
[00:48:48] Paul Roetzer: They thought it would take somewhere from five to 15 years to develop, but now believe it is coming in two to three years. If you, he continues, if you've been telling yourself this isn't coming, I really think you need to question [00:49:00] that. It's not Web3, it's not vaporware. A lot of what we're talking about is already here right now.
[00:49:06] Paul Roetzer: I think we are on the cusp of an era in human history that is unlike any of the eras we have ever experienced before, and we're not prepared, in part because it's not clear what it could mean to prepare. We don't know what this will look like, what it will feel like. We don't know how labor markets will respond.
[00:49:23] Paul Roetzer: We don't know which country is going to get there first. We don't know what it will mean for war. We don't know what it will mean for peace. And while there's so much else going on in the world to cover, I do think there's a good chance that when we look back on this era in human history, AI will have been the thing that matters.
[00:49:42] Paul Roetzer: And then finally, before I get into the timeline, Kevin Roose, who is a technology columnist and co-host of the New York Times tech podcast Hard Fork, recently published an article called Powerful A.I. Is Coming. We're Not Ready. In the post, he starts out: here are some things I believe about artificial intelligence.
[00:50:00] Paul Roetzer: I believe that over the past several years, AI systems have started surpassing humans in a number of domains. Math, coding, medical diagnosis, just to name a few. And they're getting better every day. I believe that very soon, probably in 2026 or 2027, but possibly as soon as this year, one or more AI companies will claim they've created an AGI, which is usually defined as something like a general purpose AI system that can do almost any cognitive task a human can do.
[00:50:29] Paul Roetzer: He continues: I believe that when AGI is announced, there will be debates over definitions and arguments about whether or not it counts as real, quote unquote, AGI, but that these mostly won't matter, because the broader point, that we are losing our monopoly on human-level intelligence and transitioning to a world with very powerful AI systems in it, will be true.

[00:50:51] Paul Roetzer: Now, when I read Kevin's article, and I recommend you read the whole article, we'll put the link in the show notes, I tweeted that I'm 100% aligned with everything he says. Like, everything he writes in that article I agree with completely, and it echoes many of the things that we've said on the podcast before.

[00:51:06] Paul Roetzer: Alright, so where does that bring us to as we kind of get into the AI timeline? What I'm gonna do is walk through these five different components of the timeline. And like I said, I'll put the slides up for this so you can visualize it as well in the coming days. But I'm gonna walk through each of these.
[00:51:25] LLM Advancements (2025)
[00:51:25] Paul Roetzer: So the first is large language model, or LLM, advancements. On last year's timeline, I had that as, you know, 2024 to 2025. That's continuing. So what LLM advancements consist of: continued advancements and potential leaps in accuracy, context windows, decisioning, emotional intelligence, memory, multimodal, personalization, planning, search, tool use, and reasoning.

[00:51:52] Paul Roetzer: Again, these go back to those different variables that the labs are pursuing to try and figure out which thing is gonna unlock the next thing. And so they're all kind of continuing along there. We're gonna see some leaps forward. And again, we just saw yesterday that Gemini 2.5 Pro made some leaps forward.

[00:52:07] Paul Roetzer: It's now number one on the leaderboard across basically everything. The other thing, and this is new this year, this was not in last year's timeline: commoditization of frontier models, where proprietary data, productization, and distribution become the key differentiators. This was a big open question early last year: how long would OpenAI's GPT-4 model maintain its lead?
[00:52:29] Paul Roetzer: Because they got out there first, and for that roughly two-year stretch, they were it, they were the dominant model. And so the question became, like, do they have some secret sauce? Is there something OpenAI is doing that's just gonna always keep them ahead of everybody else? What we have learned is no, that's not actually what's gonna happen.

[00:52:48] Paul Roetzer: While you used to be able to have, like, 12-to-18-month lead times, what seems to be happening now is it's like three months, maybe six months max. So the leaderboards change seemingly weekly right now. And oftentimes what happens is these major labs test models under stealth names, so they don't tell you they're from Google or OpenAI.

[00:53:09] Paul Roetzer: And you'll have these new models that are showing up at the top of the leaderboard, and then all of a sudden Google says, oh yeah, that was our 2.5 Pro model. So these models are just leapfrogging each other every three months. And so what seem to be the differentiators are gonna be the data and your ability to productize these models.

[00:53:24] Paul Roetzer: Like, OpenAI has done a phenomenal job, obviously, of doing that, creating probably over 10 billion in revenue this year through productization. And then distribution, meaning if you're Google, you have, what, seven different platforms and systems with over a billion users each. That's a pretty solid distribution.

[00:53:41] Paul Roetzer: So if you can have a model on par with the best models, but you have, combined, some 7 billion users across those platforms, that's pretty good. So that was new this year. Another thing that was new this year is traditional scaling laws. This was the big, big mystery: if we give 'em more data, if we buy more NVIDIA chips, build bigger data centers, connect more energy to 'em, can we just keep building smarter models?
[00:54:06] Paul Roetzer: They just continually get smarter, more generally capable. What we have found, as of fall of last year, is they still work, but those laws are slowing. They're not getting the same level of improvement, but the big labs are continuing to push on those traditional scaling laws. What happened in fall of 2024, and this is also new on the timeline this year, is that test-time compute, or thinking, scaling laws emerged and are accelerating. Now, what that means is they found a new scaling law, basically, that

[00:54:38] Paul Roetzer: the more time the model spends at inference, the smarter it gets. Inference is when you and I use the tool: you go into Google Gemini and you put in your prompt, that's inference, and the model draws on its compute to give you a response. And so what they found was that when the model takes its time to think, what would be called system two thinking, it actually gets smarter and more accurate.
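To make that idea concrete, here is a minimal, hypothetical sketch of one popular way to spend extra compute at inference time: sample several independent reasoning chains and take a majority vote on the final answer (often called self-consistency). The noisy_model_answer function below is a toy stand-in for a real model call, not any lab's actual implementation.

```python
import random
from collections import Counter

def noisy_model_answer(question: str) -> str:
    """Toy stand-in for a single reasoning chain from a model: right about 60%
    of the time, wrong otherwise. Purely illustrative, not a real model call."""
    return "42" if random.random() < 0.6 else random.choice(["41", "43", "44"])

def answer_with_more_thinking(question: str, num_samples: int) -> str:
    """Test-time compute in miniature: sample several independent chains and
    majority-vote the final answers (self-consistency)."""
    answers = [noisy_model_answer(question) for _ in range(num_samples)]
    return Counter(answers).most_common(1)[0][0]

# One sample is right ~60% of the time; 25 samples with a majority vote is
# right far more often. Same model, more compute spent while you wait.
print(answer_with_more_thinking("What is 6 x 7?", num_samples=25))
```

The point of the sketch is only the shape of the tradeoff: accuracy improves because more compute is spent per query at inference, not because the underlying model was retrained.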
[00:54:58] Paul Roetzer: And so we have these traditional scaling laws, and then we have these test-time compute scaling laws, and those are accelerating. Another new thing in the timeline this year is model evaluations, so the way they determine how good these models are. They're starting to get more focused on practical applications and use cases rather than pure IQ tests.

[00:55:19] Paul Roetzer: So traditionally, when these models get dropped, it's like, how good is it at math? How good is it at biology? How good is it at all these different complex tasks that you and I don't generally care about, 'cause they don't affect our day-to-day work. What's gonna start to happen, and we're starting to see it recently, is more and more evals, or evaluations, where it's like, what does it do to, like, a lawyer's job?
[00:55:43] Paul Roetzer: What does it do to a marketer's job? So they're starting to figure out ways to do this, and I think more industries and associations will likely pick this up and start applying evals to the jobs within their own industries. I hope that starts happening.

[00:56:00] Paul Roetzer: The next one is a rapid expansion of valuable use cases in business. Again, we're still in the LLM advancements phase here. Wide-scale adoption of generative AI continues a multi-year curve despite pockets of disillusionment. I meet with enterprises every day who are at the starting line. Like, if you're listening to this and this is all crazy to you, and you think you're so far behind, you're not.

[00:56:16] Paul Roetzer: Like, most companies are still trying to figure out how to do this stuff. So we are not at the wide-scale adoption phase yet, in my opinion. We might be nearing the wide-scale piloting phase, where more businesses are starting to test it and try to figure it out. But we are definitely not at the phase where they've solved this and they're scaling it and they're doing change management and internal education and all the things they should be doing.

[00:56:38] Paul Roetzer: That's not happening at a wide scale yet. Stories of layoffs due to AI within certain industries will happen this year, but I don't think it'll be widespread. I think it'll be exaggerated by the media, but I also think a lot of tech companies are hiding their layoffs that are due to AI under other terms.

[00:56:57] Paul Roetzer: So I actually think there are quiet AI layoffs happening that people aren't admitting that's why they're doing it. But I think by the end of this year, they'll probably start admitting that's what they're doing. On a more positive note, some new AI roles will emerge. You're gonna start seeing AI ops, chief AI officers, AI trainers.

[00:57:14] Paul Roetzer: For me, like, we're hiring right now, and I actually have AI agent management built into every job description, basically saying, hey, part of your job is gonna be to understand what agents are capable of, especially as they continue to improve, and to figure out ways to infuse them into your job to be more efficient, productive, creative, innovative, and then to focus on the things that you're uniquely capable of.

[00:57:36] Paul Roetzer: The high-impact, high human-level stuff. That's what I want you doing, and let's find ways to infuse AI agents as we go. And so I want the responsibility of AI integration to be bi-directional. I wanna, as a CEO, push it down when I see opportunities to do something across the organization or across a team, but I want the ideas brought up from the practitioners who are actually doing the work.
[00:57:56] Paul Roetzer: And I want them to have the freedom to do that, to feel like they can come to the table with new ways of doing things or new tools. Large language models are gonna continue to advance. You know, traditionally they were text in, text out: you gave them a text prompt, they gave you text back. If you wanted to do image generation, you had to go to a different model.

[00:58:15] Paul Roetzer: Like, it wasn't built into the same model. So traditional large language models powered these chatbots, basically the text chatbots. The key, though, is these language models were always the foundation for what comes next. The AI labs never set out to build tools to write your articles and your emails and do your plans for you and write your email copy or your ad copy or do your social media posts or do your financial reports.

[00:58:42] Paul Roetzer: That's not what they set out to do. They set out to solve language understanding and generation because they thought that was the key to unlocking general intelligence. And so these language models were always just the basis for what comes next. And you'll often hear me say this, especially if you hear me do talks: this is the dumbest form of AI we're ever gonna have. Like, every day

[00:59:02] Paul Roetzer: you're working with the dumbest form of AI in human history; tomorrow's will be better. Like, just yesterday, we got 2.5 Pro from Google and we got 4o image generation from OpenAI within a two-hour span. And they appear to be state of the art, like the best in the world right now. And that just happened yesterday.
[00:59:21] Paul Roetzer: So every day somebody's gonna do something that pushes the frontier forward.
[00:59:26] Multimodal AI Explosion (2025 - 2026)
[00:59:26] Paul Roetzer: And that leads to the second phase we'll talk about in the timeline, which is the multimodal AI explosion. And I have this as, like, a 2025 to 2026 range. So what's happening here is these language models that originally were just text are now getting built from the ground up to do more than text: multiple modalities.

[00:59:44] Paul Roetzer: So images, video, audio, code. This is how the Gemini models are being built. So they're being trained on multiple modalities and they're being enabled to output multiple modalities. So I don't have to bounce between models. I can just talk to this model, and it's a ground-up system that is built on these different modalities.

[01:00:03] Paul Roetzer: We're gonna see rapid improvements in text-to-video capability. So you put a prompt in, you get video out. Right now there's a bunch of players in this space; like, Veo 2 from Google DeepMind is a great example here. Go watch their demo video. It's awesome. You can play around with Sora from OpenAI.

[01:00:21] Paul Roetzer: There's a bunch; Runway ML comes to mind. There's a bunch of players here, but they have limitations. Like, one, it's massively compute-intensive to do these things. They historically can't keep coherence from, like, frame to frame. So you may start with a person in your, you know, your video, and by like seven seconds in, that person all of a sudden looks

[01:00:41] Paul Roetzer: different than they did when they started. So they can't keep that control there. The output length is limited, like maybe it's seven seconds, 10 seconds, and then it starts to lose its capabilities. Realism, render times, these are all flaws that are gonna get solved. Most of it, well, a good chunk of it, is related to computing power and the cost to do it.

[01:01:00] Paul Roetzer: But you're gonna see major advancements there. You are gonna see continued advancements in voice technology, making voices sound more human-like, natural, accurate, customizable, multilingual. AI-generated images, video, and voices will become indistinguishable from reality. Again, go play with the new 4o image generation model.

[01:01:18] Paul Roetzer: It just came out yesterday afternoon, so I haven't had time to test it myself yet, but I've looked at a bunch of threads online of what people have been doing on X. I've seen it, and it's remarkable. And basically OpenAI is taking the guardrails off of it. Historically, you know, these AI labs have been trying to be

[01:01:36] Paul Roetzer: conscious of misuse of these things. And I think we're just done with that phase of AI. They're just basically throwing these things out there and saying, yeah, they're gonna do things that you might consider harmful or offensive, and sorry, like, just don't use 'em for that yourself if that's the problem for you.

[01:01:52] Paul Roetzer: So we're sort of removing filters and guardrails and letting people use the true power of these models, which traditionally these labs have held back. Now, making them indistinguishable from reality is gonna create all kinds of problems, because society isn't ready for this. People are largely unaware that images and videos can, you know, be generated that look and feel like reality, and that's gonna be messy.
[01:02:15] Paul Roetzer: The frontier models, so these labs that are spending the billions on the training runs, they're gonna make models that are 10 to a hundred times more powerful. So we're gonna keep following those original scaling laws, but smaller, faster, more efficient models are also gonna probably become way more prevalent.

[01:02:33] Paul Roetzer: The models are gonna develop some element of, like, a worldview, to actually understand. So you can use Project Astra from Google as an example here. Or if you go into ChatGPT and click on voice, you can then turn on video and it'll see the world around you. You can also use visual intelligence with Apple Intelligence. And so we're starting to see the early forms of this, where the AI

[01:02:54] Paul Roetzer: can see the world and, in theory, start to actually understand the physics of the world. We're not sure how exactly that's gonna occur, and there's differing opinions about whether or not it's actually understanding physics at all. But there's a lot of effort being made around this through synthetic data and simulations and things like that.

[01:03:13] Paul Roetzer: And then one of the other questions I have in the multimodal AI explosion phase is how dominant of an interface voice becomes. Like, is it a generational thing? But I could see where people really start to just interact with their AI and their devices through their voice. You're just talking to them all the time.

[01:03:30] Paul Roetzer: And so the answer you get back is the answer. You're not going on Google and searching for things; you're just talking to your AI that you trust to provide this information to you. So, I mentioned a couple times 4o image generation and Gemini 2.5 Pro. Mike and I will go in depth on both of those on episode 142 next week, which would be, I don't know when that is, March or April 1st maybe.
[01:03:53] AI Agents Explosion (2025 - 2027)
[01:03:53] Paul Roetzer: So the next phase is the AI agents explosion. This is 2025 to 2027. I'm gonna stop for a second, take a sip of water. I wasn't sure how long this was gonna go. Actually, I told the team right before I started recording this today, I was like, this might be two hours. I'm honestly not sure.

[01:04:12] Paul Roetzer: It looks like we're gonna get done in under two hours, but we're in about an hour now. All right, so AI agents explosion, 2025 to 2027. So AI agents is a really weird space. Again, if you listen to the podcast regularly, you've heard me kind of on my soapbox about this. I feel like a bunch of the tech companies just started branding everything as AI agents and they sort of just bastardized the term, like it became this really fuzzy thing of, well, what exactly is an agent?

[01:04:40] Paul Roetzer: The way I think about it, just to level set here, and then I'll get into the components of the AI agents explosion, is traditional automation. You know, we could set rules that the machine or the software did what we told it to do, and this has been around forever. So you can just write some rules and it does the thing, but it does exactly what you tell it to do.

[01:05:00] Paul Roetzer: That is deterministic, meaning it's just gonna follow instructions. When you have AI agents, in theory this automation, or the ability for them to take actions, is probabilistic in part, meaning sometimes they figure stuff out on their own. They're not just following your rules anymore. And so I think of AI agents as AI systems that can take actions, and then you can continue that definition with varying levels of autonomy, varying levels of tool use, varying levels of memory.
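Here is a minimal sketch of that contrast, with hypothetical function and tool names rather than any vendor's actual product: the first router follows fixed rules and always produces the same output for the same input, while the second hands the choice of action to a model, so the path it takes can vary.

```python
def rule_based_router(ticket: dict) -> str:
    """Traditional automation: deterministic, does exactly what the rules say."""
    if ticket["type"] == "refund":
        return "finance_queue"
    if ticket["priority"] == "high":
        return "escalation_queue"
    return "general_queue"

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a language model call that names a tool to use."""
    return "search_orders"  # placeholder response

def agent_router(ticket: dict, tools: dict, max_steps: int = 3) -> str:
    """Agent-style automation: the model decides which tool to invoke at each
    step, so behavior is probabilistic and can differ run to run."""
    context = str(ticket)
    for _ in range(max_steps):
        tool_name = call_llm(f"Given this ticket, pick the next tool: {context}")
        if tool_name not in tools:
            break  # the model chose to stop, or picked an unknown tool
        context += " | " + tools[tool_name](ticket)
    return context

tools = {"search_orders": lambda t: f"order history for {t['customer']}: none found"}
print(rule_based_router({"type": "refund", "priority": "low", "customer": "acme"}))
print(agent_router({"type": "refund", "priority": "low", "customer": "acme"}, tools))
```

The spectrum described here, autonomy, tool use, memory, is really a question of how much of that loop you let the model control and how much a human reviews.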
[01:05:31] Paul Roetzer: So agents aren't binary; they exist on this spectrum of all these different variables. Again, the problem came in 2024 when all these tech companies just started talking about these things like they're these autonomous things that are just gonna do your job. And people freak out, and they don't understand what that means.

[01:05:48] Paul Roetzer: So I think about this very similar to a Tesla, which supposedly has full self-driving, but then they put "supervised" in parentheses. In a Tesla, as of now, you still need a steering wheel, and you still need a human who can take control of that steering wheel at any given moment. So a Tesla is not autonomous; it is on the spectrum of autonomy in some situations, but it still has to be overseen by a human.

[01:06:14] Paul Roetzer: So the question is always, well, what's the human's role? In the car case, what does the driver do? In the case of an AI agent working in your marketing or sales or customer success system, what's the human's role? Is the human there to make sure it doesn't go off the rails? Does the human check in on it once a week?

[01:06:29] Paul Roetzer: Or is the human approving everything it does? So the whole point here is it's not this clean definition; they exist on this spectrum. So, back to the timeline: in 2025, AI agents that can take actions are marketed heavily by leading tech companies, but confusion remains in the market about what exactly they are, how they work, and the impact they will have.
[01:06:52] Paul Roetzer: Current AI agents often require a lot of manual human work to plan, integrate, and manage them. There are, however, powerful early forms of these semi-autonomous agents, including one of my favorite things right now, the deep research tools from OpenAI and Google. If you haven't used these tools, go test them. They're incredible.

[01:07:16] Paul Roetzer: You begin to understand how these AI agents will be able to drive adoption and value, because when you see them applied in this sort of narrow instance of conducting research, you can start to imagine when they're built to do all these other things. I do think that adoption in enterprises is going to be slow, largely due to, one, they don't really work the way they're advertised to work,

[01:07:42] Paul Roetzer: but more importantly, privacy and security risks, especially related to this idea of computer use. So in fall 2024, Anthropic was first to market with a preview of computer use, which is something OpenAI was working on back in like 2016. And Google now has a version of this in Chrome as well. What it does is it enables the AI to take over your keyboard and mouse, basically, and perform tasks for you on your computer.

[01:08:07] Paul Roetzer: Now, to do that, it sees everything on your screen. In theory, it remembers the majority of it. The way Microsoft was doing it, and I'm not sure if this is how the product still works, is they're basically taking screenshots of your screen every one and a half to three seconds, and then it would just search those screenshots to find things.
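For a rough sense of the mechanics being described, here is a hedged sketch of a screenshot-driven loop, not Microsoft's, Anthropic's, or anyone else's actual implementation. It captures the screen on an interval and hands each frame to a hypothetical decide_next_action call standing in for a multimodal model.

```python
import time
import pyautogui  # third-party library for screenshots and mouse/keyboard control

def decide_next_action(screenshot) -> dict:
    """Hypothetical stand-in for a vision-capable model: look at the frame and
    return an action. A real computer-use agent would call a multimodal API here."""
    return {"type": "wait"}  # placeholder

def computer_use_loop(interval_seconds: float = 2.0, max_steps: int = 5) -> None:
    """Capture the screen every couple of seconds and act on what it sees,
    which is exactly why this raises the privacy and security questions below."""
    for _ in range(max_steps):
        frame = pyautogui.screenshot()        # the agent sees whatever is on screen
        action = decide_next_action(frame)
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"])
        time.sleep(interval_seconds)

# computer_use_loop()  # requires a desktop session; commented out on purpose
```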
[01:08:25] Paul Roetzer: But it can see, remember, and interact with things on your device, the content, the applications. It could be your work computer. It could be your phone. And so I can tell you, as a CEO, that's unnerving, like the thought that employees may have agents using computer use, just watching everything on their screen all day long.

[01:08:44] Paul Roetzer: I have major questions about the privacy and the security risks related to that. And I can imagine big enterprises with, you know, big legal teams and IT teams have even bigger concerns than I do. So that's a major problem, and I think that's gonna slow adoption of AI agents within enterprises.

[01:09:03] Paul Roetzer: The other thing I think is that agents are gonna be largely narrow by vertical and use case initially. So again, go to the deep research tools, a phenomenal example. Like, it's a great product, but it's narrow in its ability. It's specifically for research, but that's great. Becoming more general and horizontal over time, I think, still happens, though, to where we just have an AI agent that can do anything

[01:09:27] Paul Roetzer: I can do. It's not trained on any specific task per se. It just does my job. And that's when things get really weird. And then that leads to organizations beginning to build AI agents into their org charts and teams. There's a quote I shared on the podcast back in November 2024 from Jensen Huang, the CEO and founder of Nvidia.

[01:09:50] Paul Roetzer: And he said, quote, these AI workers can understand, he's referring to AI agents, they can plan, they can take action. We call them AI agents. And just like digital employees, you have to train them. You have to create data to welcome them to your company, teach them about your company. You train them for their particular skills.

[01:10:08] Paul Roetzer: You evaluate them after you're done training, you guardrail them to make sure that they perform the job they're asked to do, and of course you operate them, you deploy them. So, in other words, humans are in the loop all over the place with these things. So when you hear about AI agents, don't assume that a year from now everybody's job is gone and the agents are gonna do it.

[01:10:25] Paul Roetzer: That is not what's happening. So we'll see early forms of autonomy, but again, it's gonna be very narrow and likely highly trained to do those things. But we will start to see, or at least get visibility into, what the disruption from these things will look like in knowledge work. It's gonna start to become more tangible and measurable.
[01:10:46] Robotics Explosion (2026 - 2030)
[01:10:46] Paul Roetzer: The next phase is the robotics explosion, humanoid robots to be exact, and 2026 to 2030 is the range I have here. So I don't wanna spend a ton of time on this one, because it's important, but it's not as directly impactful to knowledge workers right now. But there's major investment going into this space.

[01:11:07] Paul Roetzer: Lots of breakthroughs in the last 12 months. OpenAI is getting back into robotics. They started there; it was one of the things they were working on in the early days of OpenAI. Tesla with Optimus, which may actually become the biggest revenue channel for Tesla over time versus their cars. Figure is a major player here.

[01:11:25] Paul Roetzer: Amazon is doing a ton with robotics. Google, Nvidia, Boston Dynamics. Unitree, I think they're out of China, has had some insane demonstrations recently. So what's happening is there's major advancements being made on the hardware side of these things, so they become more human-like in their capabilities.

[01:11:44] Paul Roetzer: But the real breakthrough was multimodal language models being dropped into them as the brains. So basically, all these abilities with text and images and video and audio, all of that living in the robot so it can see and understand the world and interact with people and objects. That's the real breakthrough.

[01:12:05] Paul Roetzer: And so I think what's gonna happen is there'll be narrow applications of commercial robots initially, and then more general robots that are capable of quickly developing a diverse range of skills through observation and reinforcement learning, meaning they just watch what a human does and they learn how to do it.

[01:12:22] Paul Roetzer: Or they're trained specifically to do these skills by, kind of like, yes, you did a good job, no, you didn't do a good job, like a reward function, basically, to learn these things. And then I think by, I don't know, maybe 2028 to 2030, you start to get much more widespread commercial applications starting to really affect numerous industries.
[01:12:41] Paul Roetzer: And then I think over time, maybe in the next decade, there's a potential for general-purpose consumer robots that you and I could actually lease or purchase. And you could just have a robot around your house for, say, 20,000 a year or $200 a month, and it'll start as a luxury for the elite. And then eventually, as they get manufacturing costs down, it'll quickly become a mass-market thing.

[01:13:04] Paul Roetzer: And that's when you start to really see the impact on blue-collar jobs. But again, I'm not as bullish on this as others. Like, I'm very aggressively looking at investment opportunities in this space, like who's gonna be the major players as this takes off. But I think there's a lot of exaggeration right now about how quickly these things are actually gonna affect our lives.

[01:13:23] Paul Roetzer: Now, Jensen Huang, who I just mentioned earlier, said that the ChatGPT moment for robotics is coming: less than 10 years from now, I'm certain of it, humanoid robots will surprise everybody how incredibly good they are. That was January 2025. Elon Musk recently said that Tesla is aiming to build 5,000 of its Optimus humanoid robots

[01:13:43] Paul Roetzer: this year. At CES in January, he shared an ambitious vision for Tesla's Optimus humanoid robot, projecting that within three years Tesla would produce 500,000 humanoid robots, with production scaling significantly each year. He envisioned a future with tens of billions of robots globally. And then just a few days ago, he said that Starship, the major rocket from SpaceX, one of his companies, is set to depart for Mars at the end of next year.

[01:14:14] Paul Roetzer: So they wanna land a rocket on Mars, and he wants to send a Tesla Optimus bot to Mars. And then, if that goes well, they wanna send humans in 2029. And then, actually today, Tesla is on Capitol Hill demonstrating Optimus, along with some other robotics companies. There's apparently a robotics symposium.

[01:14:35] Paul Roetzer: So, again, just a prelude: you're gonna hear a ton about humanoid robots. I would just put it in the category of pay attention, but it's probably not as far along as you may be led to believe, sort of the way AI agents are today. All right.
[01:14:50] AGI Emergence (2027 - 2030)
[01:14:50] Paul Roetzer: And then the final element of the timeline is AGI emergence, and I have that as 2027 to 2030.

[01:14:57] Paul Roetzer: I moved it up a year; I had this as 2028 last year. So, what happens when AGI emerges? We've spent a lot of time talking about what AGI is and isn't, but the way I think about it is new science becomes possible. It's no longer just connecting dots from existing human knowledge and kind of making predictions about words. It's actually discovering new things.

[01:15:17] Paul Roetzer: And so, like, the stuff that isn't in the training data or wasn't learned in the training data, it starts to be able to develop its own ideas and hypotheses and drugs and solutions to math problems and things like that. So it really starts to make an impact in chemistry and biology and mathematics, and in business as well.

[01:15:43] Paul Roetzer: And so once this starts to happen, now you start to get into a complete reset of what a business actually is. I would guess sometime in the next couple years we will hear about the first one-to-ten-person, billion-dollar company; that might happen this year, honestly. You'll hear about this idea of AI agent clusters, or hives, that function as largely autonomous enterprises. When this happens,
[01:16:07] Paul Roetzer: we have to truly start rethinking how we measure economic health and growth, and I'm a massive believer that economists should be doing this right now. I just don't know of any that are. Because I think if you said to someone, hey, this is not a 0% chance, maybe not even 10, maybe this is like a 20 to 30% chance we get to this by the end of the decade, that feels like something we should be planning for, that we should be considering as a possibility.

[01:16:36] Paul Roetzer: Now, I get that there's some people who are just complete pessimists on this and think there's no probability. They have no standing on that; there's no argument behind the claim that it's not going to happen. No one knows that for sure. So I'm a believer that there's a possibility, I believe a strong probability.

[01:16:53] Paul Roetzer: And I just think we should be thinking about it. When this happens, we are talking about wide-scale workforce disruption; job displacement becomes much more likely. And so we have to rethink business. We have to rethink education. In a really weird way, we have to start rethinking human purpose, like how a lot of us tie our jobs to our purpose.

[01:17:16] Paul Roetzer: Like, they're a very important part of what we do. We have our family, we have our friends, we have our community, we have our faith. Like, we have all these things that define who we are, but the job is part of that. It gives us fulfillment, makes us feel worthwhile, like we contribute to society.

[01:17:31] Paul Roetzer: And if all of a sudden that's not part of the equation, or not as significant as it used to be, that's a major problem. So when I think about AGI, what I know to be true is the models are getting smarter, fast. I believe, as a result, we should be doing more to prepare for what comes next. Because if this AI timeline is even directionally true, even if it's just off by a couple years, if it's directionally true, we are not ready.
[01:17:56] What’s Changed?
[01:17:56] Paul Roetzer: So when I was kinda preparing this, I went back to the original. I was like, well, what changed? And so I wanna highlight for you a few quick things here of what changed from last March. So, one, lots of leading AI researchers switched labs and started their own AI companies. So we see this all the time.

[01:18:12] Paul Roetzer: Mike and I talk about this on the podcast, sometimes half jokingly, but these researchers are jumping all the time, and it's highly competitive. And you have researchers that'll leave their labs and go start their own companies. Like, Noam Shazeer comes to mind; Google reacquired him, or acqui-hired his company Character AI, for like two and a half billion dollars last year.

[01:18:32] Paul Roetzer: You had Mustafa Suleyman, who left DeepMind and Google, goes and starts Inflection, and gets acqui-hired by Microsoft to come in and run, you know, AI there. You have Noam Brown, who was at Meta and who's a major player in the development of reasoning models at OpenAI.

[01:18:49] Paul Roetzer: Like, they're jumping all the time. Ilya leaves and starts his own Safe Superintelligence. So, yeah, that's a major component of it; it shifts the landscape all the time. The other major factor that happened in the fall, and really into January of this year, was the new administration in the United States.
[01:19:08] Paul Roetzer: We have a new president, and they have a very different view of this stuff. Energy investments are gonna skyrocket, along with investments in infrastructure to build out, you know, these data centers and what's gonna be needed. A dramatic reduction in regulations. Much more of a free-market approach in terms of driving innovation, letting these labs do what they're gonna do.

[01:19:29] Paul Roetzer: You're probably gonna see increased mergers and acquisitions in the AI space; we're already starting to see it happen. And the main reason is they don't wanna lose. So they see this as a war for AI supremacy with China and others, and they intend to win it. And they think it's important that the leading AI labs, that whoever gets to AGI first, have democratic values.

[01:19:50] Paul Roetzer: And so that's happening. And in the midst of all that, we had the DeepSeek moment, where a Chinese lab created something that jumped to the top of the charts in the app store and sort of changed the direction, or at least sped up the direction, of American-based labs, because they did something more efficiently than the US labs had.

[01:20:12] Paul Roetzer: Also what's changed in the last 12 months: the test-time compute scaling law, the thinking law, as we talked about earlier, which led to reasoning and thinking models, which we're now seeing come out from everywhere. We had this major focus on AI agents, even though the marketing of the autonomy is misleading and confusing. We had computer use debut, which we talked about. The tone and confidence of AI leaders that AGI is near absolutely picked up, starting last summer.

[01:20:39] Paul Roetzer: And then that leads me to think that the timeline for AGI has moved up. And I do think, I think I said in my exec AI newsletter on SmarterX.ai, that right now there's probably a greater than 50% chance that an AI lab claims AGI within one to two years, claims that they've achieved it.

[01:21:01] Paul Roetzer: Now, whether or not they did, and whether or not we agree on it, I don't know, but I think it'll happen. All right, so as we kind of start to wrap up, I wanted to cover a few other key areas.
[01:21:10] What Accelerates AI Progress?
[01:21:10] Paul Roetzer: One, what accelerates AI progress? Two, what slows it down? And then I want to kind of wrap with how you can prepare, what steps you can take.

[01:21:19] Paul Roetzer: So what accelerates it? Continued algorithmic breakthroughs, like we saw with DeepSeek out of China. There are ways to make these models smarter without having to buy more Nvidia chips and build bigger data centers. I think there's gonna be a big focus on that, and if we can keep having these breakthroughs, we might get to AGI sooner. Clean energy abundance:

[01:21:37] Paul Roetzer: if we invest in wind, solar, nuclear fission. We're seeing that: building nuclear power plants, buying nuclear power plants, nuclear power plants coming back online. And so that's gonna continue happening. Compute efficiency breakthroughs: these smaller models, or targeted search and retrieval, like finding ways to do things faster, like the human brain does.

[01:21:55] Paul Roetzer: Our brains are very, very efficient; models aren't, and so they're trying to figure out how to give the models the kind of efficiencies we enjoy in our brains. Cost of intelligence declines at a rapid rate. I forget the exact number, but I wanna say Sam Altman recently said that the cost of compute drops 10x every 12 months.
[01:22:16] Paul Roetzer: So a model that costs X today, 12 months from now it's gonna cost some much smaller Y, and so it just becomes cheaper and cheaper to use these tools as a business person, as a company.
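As a quick back-of-the-envelope sketch, taking that roughly 10x-per-year figure at face value (it's a paraphrased claim, not a verified number), here's what that kind of decline would do to the price of a fixed workload:

```python
def projected_cost(cost_today: float, years: int, decline_per_year: float = 10.0) -> float:
    """Cost of the same workload after `years`, assuming the cost of compute
    drops `decline_per_year`x every 12 months (an illustrative assumption)."""
    return cost_today / (decline_per_year ** years)

# A workload that costs $10,000 today, under an assumed 10x-per-year decline:
for years in (1, 2, 3):
    print(f"After {years} year(s): ${projected_cost(10_000, years):,.2f}")
# -> $1,000.00, then $100.00, then $10.00
```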
[01:22:41] Paul Roetzer: Energy breakthroughs: nuclear fusion is the one that I pay the closest attention to. It's actually, I think, Sam Altman's largest investment, a nuclear fusion company, and I believe they actually have a contract with Microsoft already for like 2028. So fusion is one of those things that might not happen for 20 years, might not happen ever, but there's a lot of progress being made, and it's a space I'm keenly interested in. Large-scale government funding: I've for over a year been sort of trumpeting that we need, like, an Apollo-level mission to build AI. I think that's gonna happen. We're

[01:22:57] Paul Roetzer: starting to see some early signs of that, but I do think that the federal government, at least in the United States, is gonna try and nationalize components of this. And I don't know that you and I are gonna hear about it, but I'm pretty convinced it's gonna happen.

[01:23:13] Paul Roetzer: And I think other governments are gonna do the same thing. Greater network and data security against threats: so there's a lot of risk related to this stuff. If we find ways to put greater protections in for data privacy and security, then we can actually accelerate progress more. But right now, there's gonna be a whole bunch of threats that emerge.

[01:23:32] Paul Roetzer: Another could be new scaling laws. So we found test-time compute last year. What is the equivalent of that this year? Is there a new scaling law that's gonna emerge that's gonna accelerate things again? Infrastructure investments: upgrade and expand electrical grids, more data centers. Honestly, one of the biggest bottlenecks is gonna be that we don't have enough electricians, so moving people into the trades would be key,

[01:23:55] Paul Roetzer: 'cause there isn't gonna be enough people to build all these data centers that need to get built and to do all the electrical work that needs to go into them. More compute capacity, so more chips and fabs that build the chips, plus diversity in the chip supply chain. There's still a massive reliance on Taiwan for chips, and that's quite dangerous given the geopolitical climate between China and Taiwan and America.

[01:24:17] Paul Roetzer: So that could be a problem, but if we can find ways to get the fabs working in the United States, and bring some of that onshore, as well as to other countries allied with the United States, that could accelerate it. And then other scientific breakthroughs, like quantum computing, is certainly an area I pay attention to.

[01:24:35] Paul Roetzer: I get a similar feeling to nuclear fusion: it could be five years away, or it could be 50 years away. We just don't really know. There's a lot of really sexy headlines about quantum milestones from Microsoft and Google. I'm not sure that they really mean anything in the near term for commercialization of it.
[01:24:53] What Slows AI Progress?
[01:24:53] Paul Roetzer: Okay. And then, what slows AI progress? A breakdown in the AI compute supply chain: earthquakes, hurricanes, human forces, cyber sabotage, physical impact on these data centers and the fabs, things like that. So I don't wanna spend a lot of time on that one, and I'd rather not think about it, but that's the reality.

[01:25:13] Paul Roetzer: Catastrophic events that are blamed on AI. So you could see something going wrong, and the talking point becomes that it was AI that caused it. Chip scarcity, which we're in: we don't have enough chips to do what we wanna do. We don't have enough energy, so energy scarcity is another one.

[01:25:29] Paul Roetzer: Failure of the models to align with human values, intentions, goals, and interests. This is a big one. There's been research recently that shows the models are deceptive by nature, that they intentionally mislead their human creators and testers when they know they're being tested. Now, why they do that, we don't know, but that's a problem.

[01:25:53] Paul Roetzer: And the smarter they get, the harder it's gonna be to know if they're purposely deceiving us and if they're actually not going to do what we want them to do. Sounds very sci-fi. It is, but it is also reality that we already see this happening with the models we have today, and the labs don't know how to stop that yet.

[01:26:12] Paul Roetzer: At least they haven't publicly said how to stop it. Human misuse that violates laws and values: that's very real; that one's gonna happen this year. Another thing that could slow it down is a lack of value created in the enterprises. We see this every day. A lot of times, I think the lack of value is due to a lack of literacy, a lack of understanding.
[01:26:30] Paul Roetzer: It's not because the technology isn't capable of helping; it's that companies haven't taken the steps they needed to figure this stuff out and adopt it properly. Landmark IP lawsuits that impact access to training data and the legality of existing models: I would have had this higher up on the likelihood list last year.

[01:26:52] Paul Roetzer: Due to the current administration, and my belief that they're gonna basically throw this stuff out, I don't think this is going to be a problem in the United States. Some other countries have already taken steps to do this. Not great news for copyright holders: authors like me, whose books were pirated and put into the training data, get nothing for it.

[01:27:13] Paul Roetzer: Photographers, artists, writers, anybody who's created something that these models were trained on, and they absolutely were trained on copyrighted material. There's no debating that. Their argument is that they had the rights to it, and that if the US stops them from doing it, it will thwart innovation and we will lose to China.

[01:27:29] Paul Roetzer: So if you listened to episode 140, we talked about this. There's gonna be a bunch of lawsuits, a bunch of legal cases; it may go to the Supreme Court. I think, at the end of the day, the current administration couldn't care less about copyright holders. So that's new this year.
[01:27:49] Paul Roetzer: Restrictive laws and regulations: again, far less of a likelihood now. We also talked on episode 140 about the over 700 AI bills at the state level right now, at different stages. I don't know what's gonna happen to those, but again, I just don't think that this administration is going to allow laws and regulations to slow down innovation. Societal revolt:

[01:28:13] Paul Roetzer: this is one I actually would probably put pretty high on my list of things I think could slow it down. I think that there will be increasing pushback in society against AI. I think once job loss starts to pile up, politics could choose to make it a much stickier talking point. I think perceptions and fears may expand as different things unfold, and I think this is gonna become a big problem for tech companies. And my current perception is they're doing nothing to prepare for this, and I think they should be.

[01:28:49] Paul Roetzer: So I think there's a reality that at some point you're gonna start to see pushback on these things as they become more powerful. Two other final ones here. Unexpected collapse in the scaling laws: so again, at this time last year, the original scaling law of more compute plus more data plus more training time seemed to be humming right along.

[01:29:10] Paul Roetzer: It did slow down in the fall, so we did see a bit of an unexpected slowdown, not a collapse, but then the test-time compute reasoning one just showed up and things just kept humming. So it's possible that we could have a situation where the scaling laws just stop working and the models just stop getting more powerful.
[01:29:28] Paul Roetzer: I don't see that, and I don't think any of the labs see that happening, but it's possible. And then the final one here is a voluntary or involuntary halt on model advancements due to catastrophic risks. I think that one is a possibility. Anthropic talks a lot about this in particular: that when they do a new training run and they find out that they can't control the thing, that it's too deceptive, that it's completely misaligned and hiding the misalignment, that it's capable of doing things it shouldn't be capable of doing,

[01:30:04] Paul Roetzer: they have to shut it down. Now, I think Anthropic would; there are other AI labs that I don't think would. And so I think this is gonna be really fascinating, how this plays out. I do think that within the next two years, someone is going to do a training run that they decide is too dangerous to release.

[01:30:26] Paul Roetzer: And we're gonna be at a very interesting point in society when that happens, if we hear about it. And I think we're gonna be at a very interesting point from a government perspective, also, of whether or not at some point they have to nationalize this technology to control it. So I would put that one pretty high up on my list as well, of, like, we should probably be considering this.

[01:30:50] Paul Roetzer: And I know the major labs are; they all have ways of measuring this. But I also know that they're not really a hundred percent sure how these models actually work. And so I don't have high confidence that they're gonna know it when it happens, or that they're gonna be able to do anything about it.
[01:31:06] How Can You Prepare?
[01:31:06] Paul Roetzer: All right. So as we wind down here, what do you do about all this? I get this is a lot. Maybe you've paused this and gone away for a day and come back to it. Maybe you're listening to it for the third time to try and process it all. I will probably actually go back and revisit this to process it all myself.
[01:31:22] Paul Roetzer: I put all this together and just showed up and started talking. I haven't actually internalized a lot of this myself yet, so I get that this is a lot. So I want to give you a few things you can do. The first is, as you'll always hear me say, AI literacy is far and away the most important thing any of us can do for our kids, for our coworkers, for ourselves, for our businesses, for our communities.

[01:31:45] Paul Roetzer: People have to understand this stuff. And so the tech companies are gonna keep accelerating. They're gonna keep building smarter tech and more generally capable tech. They're going to pursue AGI and beyond. We have to figure out what that means to us, our companies, our careers. So I announced in late January the AI Literacy Project.

[01:32:04] Paul Roetzer: You can go to literacyproject.ai to learn more about that. It is designed to help prepare individuals and organizations for the future of work by making education accessible and personalized. So we offer a ton of free resources, intentionally. I have a very focused effort in our company to try and provide as much free education as we can.

[01:32:23] Paul Roetzer: That's why I do a free intro class every month on Zoom. I do a free scaling AI class every month. Newsletters, blueprints, all of it's free. And so you can go and learn more about that and hopefully take advantage of some of it. The AI Literacy Project is anchored in the belief that AI literacy is not just a competitive advantage, but a career and business imperative.
[01:32:44] Paul Roetzer: My belief is that, as weird as all this is, we get a choice. We can either do nothing and maintain the status quo, or we can accelerate our AI literacy and capabilities. Our focus is on trying to empower knowledge workers across every industry to thrive through the disruption and the uncertainty. So focus on what you can do to drive literacy for yourself and for your teams.

[01:33:09] Paul Roetzer: The other step in an organization is to build AI councils. If you don't have an AI council yet, raise your hand and start one. Focus on near-term piloting and scaling, generative AI policies, responsible AI principles. Think about not only adoption, but adaptability. How are we gonna evolve as this stuff keeps getting smarter, as the timeline keeps accelerating?

[01:33:29] Paul Roetzer: 'Cause I think it will. And how do we think about change management? It's not just about getting a bunch of tools in and thinking about it as a technology thing. This is a people thing, it's a process thing, it's a business structure thing that requires change management and planning. The third thing is impact assessments, AI impact assessments.
[01:33:47] Paul Roetzer: You can do this on yourself. So we have JobsGPT: you can just go to SmarterX.AI, click on tools, and there's a JobsGPT tool right there that will help you assess your current role. It'll walk you through an exposure key for your role by title. Just put a job title in, and it'll assess how exposed that job is to AI as the models get smarter.

[01:34:07] Paul Roetzer: So I developed an exposure key that considers these improvements in the models. And the other thing I just introduced about a month ago is you can now actually project out future roles for different professions or college majors. And so you can just click on the option to look at future jobs, put your job title in there, or what you do, what your profession is, and it'll actually try and help you envision what an AI-powered version of that job could be,

[01:34:32] Paul Roetzer: or, like, reimagine completely new titles. I would also, at a business level, think about building AI roadmaps that actually guide the projects and use cases. You're gonna need to adapt it all the time, but you're looking at the adoption of the technology, the integration of it into processes, workflows, campaigns, thinking about your talent, your tech, your strategies.

[01:34:53] Paul Roetzer: So that's really important, and it's an ongoing thing. You can do all these other things while you're doing the roadmap. And then the big thing I just talked about on episode 140, and featured in the exec AI newsletter this past week, is this idea of an AGI Horizons team. So I think the most AI-forward companies, the most innovative companies, the most prepared companies are going to put together small teams.
[01:35:18] Paul Roetzer: It could be some internal experts as well as some outside advisors who could be a bit more objective. And they're gonna start saying, okay, if this timeline is directionally true, what does that mean to us? What does it look like to our business, to our industry? What does it mean maybe more broadly to society and how our recruiting works and how we develop our people?
[01:35:38] Paul Roetzer: I really think we're at the point, and I can't stress this enough, where we need to be contingency planning. We need to be building scenarios of possible futures, and we need to start thinking about these things, 'cause this isn't 10 years off. If these people are right, it's like one to two years before this starts to happen.

[01:35:56] Paul Roetzer: Now, it's not like someone flips a switch and AGI arrives and everything just changes. Think about your own business and how long it's taken you to integrate gen AI. Like, we're two and a half years into it, and some companies haven't figured out what to do with ChatGPT yet. So it's not like AGI shows up and every industry is just disrupted and we all go home.

[01:36:13] Paul Roetzer: It's like, no, it'll take a while once it gets here. But you don't wanna be waiting around; you wanna be out ahead of this. So I would just really encourage you to pursue this idea of an AGI Horizons team that monitors advancements toward AGI and then assesses potential threats and opportunities.

[01:36:31] Paul Roetzer: And then the final thing I'll say is, let's explore this story of AI together. Like, I don't know where this goes. I'm just doing my best to try and lay out scenarios based on spending a whole lot of time, probably too much time and mental capacity, thinking about this. And so my hope is to put this out and then see where the conversation takes all of us.
[01:36:52] Paul Roetzer: And what I would encourage people to do is, you know, I often say, don't try and do what I'm doing. Most people who have full-time jobs aren't gonna be able to keep up with every piece of this. Hopefully that's what we help you do every Tuesday: bring you the things that matter.

[01:37:09] Paul Roetzer: What I would tell you to do is pick a thread. Like, find the parts of this that you find incredibly intriguing, that make you very curious or passionate. It could be related to your domain expertise, your profession. Pick a topic or two and really go in on those. So maybe it's energy or government regulation, or maybe it's the application to SEO, whatever it is.

[01:37:33] Paul Roetzer: Just pick those threads and become an expert in that area. Like, be the one that really pushes that forward. And then the other thing I'll say, and I mentioned this on episode 140, is we recently teamed up with Google Cloud to form a marketing AI industry council. So we're trying to look at
[01:37:51] Paul Roetzer: what's around the corner for the marketing industry. If we assume some level of truth to this direction, of these models continually getting smarter and more generally capable, then what does that mean for marketing? You know, how's it gonna impact jobs and agencies and brands and consumer behavior? And so I would encourage people to do something similar in their own industry.

[01:38:10] Paul Roetzer: You know, get together with some other people, get together with an association, and form an AI council that tries to look out ahead over the next few years and say, well, how is our industry gonna change? You can do this within your own company, but try and do this at a community level, at an industry level,

[01:38:25] Paul Roetzer: 'cause I think those are the kinds of conversations that need to happen. So, like, for us, we pulled together a couple dozen AI experts and marketing leaders, and let's just talk, let's think this through. So I think of it as more of a think tank than anything. But I think things like that can make a difference.

[01:38:40] Paul Roetzer: So hopefully those five or six things give you some level of peace of mind, or at least some direction to go to help figure this out.
[01:38:49] What’s Next for the Series?
[01:38:49] Paul Roetzer: And then I'll just kind of wrap here with what's coming next. So, my plan for this series is not for me to sit here and talk to you all for an hour and 40 minutes, you know, every other week.
[01:38:57] Paul Roetzer: My plan is to actually interview [01:39:00] experts in related domains and topics. A few of the key areas I'm looking at are AI model advancements, so talking to the AI labs people, and cybersecurity, which, as much as I don't wanna think about it, is critical for all of us to think about and talk about.
[01:39:13] Paul Roetzer: Then the economy, education, energy and infrastructure, the future of business, the future of education, the future of work (specifically jobs), government laws and regulations, scientific breakthroughs, societal impact, and then the supply chain. Those are kind of the main areas I'm focused on, because I think there's something to be learned in all of them to figure out the bigger picture.
[01:39:35] Paul Roetzer: There may be other areas as well, but hopefully in the next couple of weeks I'll start announcing some of the upcoming sessions. I'm in the process of scheduling interviews now and pursuing experts in these areas, and then I'll bring those to you as a regular series over the next year and probably beyond. We'll continue to have our AI weekly every Tuesday with Mike and me, and then I'll start [01:40:00] doing these regularly.
[01:40:01] Paul Roetzer: Those will just be expert perspectives. So, some closing thoughts and then I'll sign off here. I guess I did get close to two hours, huh? We'll sign off here and then we'll be back next week for episode 142 with Mike again.
[01:40:17] Closing Thoughts
[01:40:17] Paul Roetzer: So the main thing to think about here is the definitions of AGI are gonna vary.
[01:40:21] Paul Roetzer: It's not clear how we will know when it's achieved, but my main takeaway is it doesn't even matter if we get there. Even if we never agree that AGI has arrived, we know the models are gonna keep getting smarter and we know they're gonna keep getting more generally capable. We can look at the scaling laws and we can see that.
[01:40:41] Paul Roetzer: And that alone, whether we get to AGI or not in the next couple years, is going to completely transform business, the economy, and society. So even just preparing for the possibility of AGI will put you in a better position to deal with smarter models, whether we call 'em AGI or not. And as we progress toward this idea [01:41:00] of AGI,
[01:41:01] Paul Roetzer: there are some inevitable impacts that we should be considering and preparing for in business. So for every business, regardless of the industry: think about shifts in your consumer and customer behaviors. Think about the fact that you're probably gonna need fewer people doing the same jobs. What do you do as a result of that?
[01:41:17] Paul Roetzer: Do you find new roles, reskill, upskill, or are you gonna choose to actually reduce the workforce? Hopefully it's the former that you choose. Automation of tasks across industries is gonna continue to happen. There will be a premium on proprietary data and distribution as differentiators, especially for these model companies.
[01:41:35] Paul Roetzer: We'll have increases in capacity to produce more goods and services. There'll be an increase in competition and the potential for your business to be disrupted or for you to disrupt other businesses. There'll be increases in productivity and efficiency, and increases in creativity and innovation if we choose to use these tools in that way, to augment what we're capable of.
[01:41:53] Paul Roetzer: Increases in profitability and, certainly, job creation. I think there's a whole possibility of [01:42:00] a renaissance in entrepreneurship. I think we could create millions of small businesses that don't need a ton of people, that are very innovative, and that can just be built AI-native from the ground up. And that could be the thing that offsets the job displacement, because I do think job displacement happens too, and I think it's gonna happen at different levels across different industries, but I think we should just start to accept that it is going to happen.
[01:42:23] Paul Roetzer: But we can still do something about it. And then, as the models get smarter, we have to be proactive in pursuing answers to critical questions. Like, how will these next-generation models affect you, your team, your company? How will the model advancements impact creative work and creativity? How will consumer information consumption and buying behavior change?
[01:42:41] Paul Roetzer: How will consumer changes impact things like search and advertising and publishing? How are we gonna ensure responsible use of AI in our organizations? How are these copyright and IP issues gonna affect our businesses and our use of generative AI tools? How's it gonna impact strategies and budgets?
[01:42:59] Paul Roetzer: Technology [01:43:00] stacks? The environment? A lot of people ask me that question about the impact of these models and these training runs on the environment, and of the use of AI as it proliferates. How's it gonna impact educational systems? How's it gonna impact organizations like yours, like mine?
[01:43:15] Paul Roetzer: How are jobs gonna change? And then the thing I'm very interested to explore, and sometimes I think I have a grasp on this and other times I don't: what remains uniquely human? So these are just some of the questions that I plan to explore as part of the series. We have an opportunity, and I think an imperative, to reimagine business models, reinvent career paths, and redefine what's possible.
[01:43:39] Paul Roetzer: And I think you have an opportunity to lead. I believe deeply that we should be optimistic about the future, that it can be abundant and incredible if we choose to be responsible and human-centered in our use of AI. The goal of AI should be to unlock human potential, not replace it, but we have to be [01:44:00] proactive and intentional about pursuing that outcome.
[01:44:02] Paul Roetzer: And I think we still have time. I don't think we're at the end of the line here where we can't have that outcome. So I believe we get a choice here. We can choose to make the future more intelligent and more human. And I hope this episode and the rest of this series can play a role in preparing and inspiring you to take action.
[01:44:20] Paul Roetzer: So thank you for being a part of this first episode and this journey, and thank you for letting us be a part of yours.
[01:44:27] Paul Roetzer: Thanks for joining us on the road to AGI and beyond. As we navigate the breakthroughs, challenges, and possibilities of artificial general intelligence, the conversation is just beginning. The future of AI is unfolding faster than we can imagine. We hope this series helps you stay informed and prepared.
[01:44:45] Paul Roetzer: For more insights, resources, and discussions, visit SmarterX.ai and subscribe to The Artificial Intelligence Show. Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.