When a topic is making headlines from all directions, you know it’s something important—and this week, that something is AGI.
AGI remains a major focus for government officials and AI experts alike, and this week on The Artificial Intelligence Show, Mike and Paul weigh in with their insights. Our hosts break down the latest AGI news, the strategy behind superintelligence, OpenAI’s rumored $20,000-per-month AI agents, Andreessen Horowitz’s latest Top 100 Gen AI Apps, Google’s AI Overviews, and more in our rapid-fire segment.
Listen or watch below, and scroll down for the show notes and transcript.
Listen Now
Watch the Video
Timestamps
00:04:08 —The Government Knows AGI Is Coming
- Opinion: The Government Knows AGI Is Coming - The New York Times
- The Artificial Intelligence Show Episode 127
- The Artificial Intelligence Show Episode 136
- JD Vance's AI Speech in Europe: "AI Future Will Not Be Won By Hand-Wringing About Safety" - Marketing AI Institute Blog
00:26:08 — AGI and Jobs
- OpenAI Plots Charging $20,000 a Month For PhD-Level Agents - The Information
- China’s Autonomous Agent, Manus, Changes Everything - Forbes
- Endex builds the future of financial analysis, powered by OpenAI’s reasoning models - OpenAI
- AI-Powered Lawyering
- Okay, I’m Starting to Think AI Can Do My Job After All - Big Technology
00:35:28 — What to Do About AGI and Beyond
- How we think about safety and alignment - OpenAI
- Miles Brundage X Post
- Superintelligence Strategy
- Anthropic’s Recommendations
- Tim Urban's AI Revolution
00:44:59 — This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.
- This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion. - The Wall Street Journal
- Paul Roetzer Opening Keynote: The Road to AGI - MAICON 2024 (9/11/24)
- The Artificial Intelligence Show Episode 87
00:48:48 — Ex-DeepMind Researchers’ Startup Aims for Superintelligence
- Ex-DeepMind Researchers’ New Startup Aims for Superintelligence - Bloomberg
- Misha Laskin X Post
- Towards Superintelligence: Reflection AI - Lightspeed
- Partnering with Reflection: Toward Superintelligence, with Autonomous Coding - Sequoia Capital
00:54:35 — Human-to-Machine Scale for Writers Recap
01:00:11 — Google AI Overviews
- Google is adding more AI Overviews and a new ‘AI Mode’ to Search - The Verge
- Expanding AI Overviews and introducing AI Mode - Google Blog
- New Data Shows Just How Badly OpenAI And Perplexity Are Screwing Over Publishers - Forbes
01:03:57 — The Top 100 Gen AI Consumer Apps
01:07:21 — A Quarter of Startups in YC’s Current Cohort Have Codebases Almost Entirely AI-Generated
01:09:52 — The Humanoid 100: Mapping the Humanoid Robot Value Chain
01:13:12 — Listener Questions
- What’s the biggest misconception about AI right now in your opinion?
- Marketing AI Institute: Intro to AI
Summary
AGI is Coming
“The Government Knows AGI Is Coming.”
That is the striking warning that serves as the title of a new episode of The Ezra Klein Show, in which the journalist interviews Ben Buchanan, the former special advisor for AI in the Biden White House.
In the episode, both Klein and Buchanan agree that AGI—or systems that can do any type of cognitive task that a human can do—is likely to arrive in the next few years.
The episode opens with Klein recounting how experts from AI labs and the government have recently told him that AGI is imminent. Many who once projected it was 5 to 15 years away now believe AGI could emerge within just two to three years, potentially during Donald Trump’s second term.
Klein and Buchanan cover a lot of ground related to what this means, including AI competition between the US and China, how the Trump administration will approach AI, and what AGI could mean for jobs, national security, and cybersecurity.
Klein strongly argues that we’re not remotely prepared as a society for what’s coming in the next few years, especially when it comes to AI’s impact on the economy.
AGI and Jobs
According to The Information, OpenAI executives have told some investors that the company plans to sell a variety of AI agents—agents that seem pretty explicitly targeted at doing the work that knowledge workers do today.
Says The Information:
“OpenAI executives have told some investors it planned to sell low-end agents at a cost of $2,000 per month to “high-income knowledge workers”; mid-tier agents for software development costing possibly $10,000 a month; and high-end agents, acting as PhD-level research agents, which could cost $20,000 per month, according to a person who’s spoken with executives.”
At the same time, we’ve seen projects and papers come out in areas like financial services and law that strongly suggest agents and reasoning models using retrieval-augmented generation may be able to significantly transform how even the highest-paid knowledge work in fields like finance and legal is done.
One project, Endex, is an agentic AI assistant publicized by OpenAI that is built on their technology.
Endex’s agents autonomously process financial reports, market data, and firm-specific knowledge to complete tasks, all thanks to OpenAI’s reasoning models. Using these models, they’re able to achieve the high accuracy that’s critical to complicated financial services work.
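To make that concrete, here is a minimal sketch of the agentic pattern such products are built on: a model that can call tools in a loop until the task is done. The tool, its hard-coded data, the ticker, and the model name below are illustrative assumptions for the sketch, not Endex’s actual implementation.

```python
# Minimal agent-loop sketch: a model calls tools until the task is done.
# The tool, its data, and the model name are illustrative assumptions,
# not Endex's actual implementation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy stand-in for a real data source (filings, market feeds, internal docs).
FUNDAMENTALS = {"ACME": {"revenue_2024": 1.2e9, "revenue_2023": 0.9e9}}

def get_fundamentals(ticker: str) -> str:
    """Return revenue figures for a ticker as a JSON string."""
    return json.dumps(FUNDAMENTALS.get(ticker.upper(), {}))

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_fundamentals",
        "description": "Return revenue figures for a stock ticker as JSON.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

messages = [{"role": "user",
             "content": "What was ACME's 2024 revenue growth? Use the tool."}]

# Agent loop: let the model request tools until it produces a final answer.
while True:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any tool-capable model works
        messages=messages,
        tools=TOOLS,
    )
    message = response.choices[0].message
    messages.append(message)
    if not message.tool_calls:  # no more tool requests: final answer
        print(message.content)
        break
    for call in message.tool_calls:  # execute each requested tool call
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": get_fundamentals(**args),
        })
```

The loop’s exit condition is the core design choice: the agent keeps working until the model stops requesting tools and returns a final answer.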
A paper that just came out also shows what’s possible.
The paper, called “AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice,” found that law students using OpenAI’s o1-preview produced higher-quality work and saved 12-28% of their time.
And with RAG-based AI that had access to legal source material, hallucinations dropped to roughly human levels.
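For readers curious what RAG looks like in practice, here is a minimal sketch of the pattern the paper describes: retrieve relevant passages, then instruct the model to answer only from them. The toy keyword-overlap retriever stands in for a real vector search, and the three-passage corpus is illustrative; this is not the paper’s actual pipeline.

```python
# Minimal RAG sketch: retrieve relevant passages, then answer only from them.
# The keyword-overlap retriever is a stand-in for real vector search, and
# the corpus, question, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CORPUS = [
    "Fed. R. Civ. P. 56(a): summary judgment is proper when there is no "
    "genuine dispute as to any material fact.",
    "Celotex Corp. v. Catrett, 477 U.S. 317 (1986): the moving party bears "
    "the initial burden of showing the absence of a genuine issue.",
    "Anderson v. Liberty Lobby, 477 U.S. 242 (1986): a dispute is genuine "
    "only if a reasonable jury could return a verdict for the nonmovant.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap (real systems use embeddings)."""
    terms = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda p: -len(terms & set(p.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    passages = retrieve(question)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="o1-preview",  # reasoning model the paper tested; swap in any current model
        messages=[{
            "role": "user",
            "content": (
                "Answer using ONLY the numbered passages below, citing them "
                "by number. If they are insufficient, say so.\n\n"
                f"{context}\n\nQuestion: {question}"
            ),
        }],
    )
    return response.choices[0].message.content

print(answer("What is the standard for summary judgment?"))
```

Grounding the prompt in retrieved sources is what pushes hallucinations down: the model cites passages it was handed instead of recalling case law from memory.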
AGI Strategy
At the same time as the Klein interview, several major AI players have released updated thoughts on how we need to approach AGI.
OpenAI’s article, “How we think about safety and alignment,” predicts AGI’s transformative impact could begin within a few years, leaving the world as different from today’s as today’s is from the 1500s. Their approach prioritizes iterative deployment, gradually introducing more capable AI rather than unveiling a sudden AGI breakthrough, to manage risks and allow society to adapt safely.
Anthropic projects AGI-like systems emerging by late 2026 or early 2027 and urges the U.S. government to prepare for the economic and national security challenges AI will bring. Their six-part strategy includes rigorous security testing, tightening AI hardware export controls, accelerating AI adoption in government, and anticipating economic disruptions.
Meanwhile, a new report, “Superintelligence Strategy,” by AI experts Dan Hendrycks, Alexandr Wang, and Eric Schmidt, proposes an AI security framework modeled on Cold War nuclear deterrence. Their concept, Mutual Assured AI Malfunction (MAIM), suggests that nations attempting to dominate superintelligent AI will face inevitable sabotage by rivals to prevent a destabilizing power shift. The report calls for AI-focused espionage, cyber sabotage, and strategic transparency—potentially enforced by AI itself—to maintain global stability.
Additionally, the authors highlight the critical importance of ensuring that advanced AI chips, essential to economic and military power, are not concentrated solely in politically volatile regions like Taiwan.
This episode is brought to you by Goldcast.
Goldcast was the presenting sponsor of our AI for Writers Summit and is a Gold partner of the Institute.
We use Goldcast for our virtual Summits, and one of the standout features for us is their AI-powered Content Lab. It takes event recordings and instantly turns them into ready-to-use video clips, transcripts, and social content—eliminating hours of manual work. If you're running virtual events and want to maximize your content effortlessly, check out Goldcast.
Visit goldcast.io to learn more.
This episode is also brought to you by our 2025 State of Marketing AI Report:
Last year, we uncovered insights from nearly 1,800 marketing and business leaders, revealing how AI is being adopted and utilized in their industries.
This year, we’re aiming even higher—and we need your input. Take a few minutes to share your perspective by completing this year’s survey at www.stateofmarketingai.com.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: The VC money is funding companies that will build the equivalent of human workers and far beyond that because they don't sleep, they don't need benefits, they don't need time off. They cost $20,000 a month and they do the work of 10 people that cost a half a million a year. Like, yep, it's coming fast.
[00:00:22] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:51] Paul Roetzer: Join us as we accelerate AI literacy for all.
[00:00:58] Paul Roetzer: Welcome to episode [00:01:00] 139 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording 11:00 AM March 10th, Mario Day, for all those who celebrate. I was just telling Mike, my family's big into Nintendo and Mario Kart and Lego, and they just dropped a Lego Mario Kart set today, which is pretty exciting.
[00:01:20] Paul Roetzer: All right, so back to AI. This episode is brought to us by Goldcast. Goldcast was the presenting sponsor of our AI for Writers Summit that just happened last week. It was incredible. Thank you to everyone who came. We had 4,600 people from 96 countries attend the Writers Summit.
[00:01:40] Paul Roetzer: Pretty remarkable. It was a half-day virtual event, if you weren't aware, and it was amazing. And the Goldcast technology platform was essential: not only were they the presenting sponsor, they were the platform that we used to run that conference. We do
[00:01:57] Paul Roetzer: three virtual events per year right now, [00:02:00] and Goldcast is the platform we use for all of them. So not only is it a great platform for those events, they have an AI-powered Content Lab that takes all of our event recordings and instantly turns them into ready-to-use video clips, transcripts, and social content, which saves our team tons of manual work and hours.
[00:02:18] Paul Roetzer: So if you're running virtual events and want to maximize your content effortlessly and create a great experience for your attendees, check out Goldcast at goldcast.io. That is goldcast.io. And then, also, you've heard me mention this a few times recently: we are currently collecting data for our fifth annual State of Marketing AI Report.
[00:02:41] Paul Roetzer: Last year's report shared never-before-seen data from almost 1,800 marketers and business leaders on how they actually use and adopt AI. This year we're aiming for even more respondents, and you can help out by going to stateofmarketingai.com. You'll see a link at the top to participate in the [00:03:00] 2025 survey.
[00:03:00] Paul Roetzer: You can also download the 2024 report while you are there. So again, that is stateofmarketingai.com, and once we publish the 2025 report, you'll get an email with a copy to review and download. Okay. So it was a huge week in AGI and artificial superintelligence news. Anybody who gets my Exec AI Insider newsletter on Sundays
[00:03:26] Paul Roetzer: knows this was the theme of the week for me. You know, as Mike and I were going through, there were almost 50 topics this week that we looked at. The thing that jumped out to me immediately was AGI and AI's impact on jobs. And so that's where we're going to start and linger for these first three main topics: this overall theme of AGI, which is artificial general intelligence, and ASI, which is artificial superintelligence,
[00:03:54] Paul Roetzer: and what this might mean for the near-term future. Because there's a lot of chatter, there's a lot [00:04:00] of buzz. And like I said in the newsletter, either they are all wrong, or we should be doing more to prepare.
[00:04:08] The Government Knows AGI is Coming
[00:04:08] Mike Kaput: Well, how about this for an opening statement and title here, Paul? The government knows AGI is coming.
[00:04:15] Mike Kaput: That is the warning that serves as the title of a new episode of the Ezra Klein Show, the podcast in which the journalist interviews Ben Buchanan, the former special advisor for AI to the Biden White House. And in this episode, both Klein and Buchanan agree that AGI, or systems that can do any type of cognitive task that a human can do,
[00:04:45] Mike Kaput: is likely to arrive in the next few years. Klein starts this episode by saying: For the last couple of months I have had this strange experience. Person after person, from artificial intelligence labs, from government, has been coming to me [00:05:00] saying, it's really about to happen. We're about to get artificial general intelligence.
[00:05:06] Mike Kaput: What they mean is that they have believed for a long time that we are on a path to creating transformational artificial intelligence, capable of doing basically anything a human being could do behind a computer, but better, says Klein. They thought it would take somewhere from five to 15 years to develop, but now they believe it's coming in two to three years, during Donald Trump's second term.
[00:05:31] Mike Kaput: In this episode, Klein and Buchanan cover a lot of ground related to what this all means. They talk about AI competition between the US and China, how the Trump administration will approach AI, and what AGI could mean for jobs, national security, and cybersecurity. Now, this may be one of the more important interviews you listen to this month, or maybe even this year, if things play out.
[00:05:57] Mike Kaput: You know, Buchanan clearly knows this stuff and shares a [00:06:00] lot of great perspective from his time working on AI in the White House, but this podcast also leaves a lot of things unanswered. Klein strongly argues that we're not remotely prepared as a society for what's coming in the next few years, especially when it comes to AI's impact on the economy.
[00:06:19] Mike Kaput: So, Paul, I'll turn it over to you here. Why don't you walk us through what you found most worth paying attention to in this very extensive and kind of unnerving interview.
[00:06:30] Paul Roetzer: Yeah, it's definitely worth listening to, and we will link to the opinion piece in The New York Times that has the full transcript as well.
[00:06:36] Paul Roetzer: So if you prefer to read, it's there. Yeah, I mean, there's just a lot of noteworthy things to touch on here. So first, if people aren't familiar with Ezra Klein: he has been writing New York Times opinion pieces since 2021. He was the founder and editor-in-chief, and then editor-at-large, of Vox, and has the Ezra Klein Show, obviously, which is number nine on Apple [00:07:00] Podcasts' top shows charts.
[00:07:01] Paul Roetzer: So he is a top 10 podcast in the world on Apple Podcasts, and I assume similar on Spotify. So he's someone that a lot of people listen to, and it was a very aggressive interview. You know, I think Buchanan, who I wasn't familiar with honestly before this, I'm sure we saw his name, but I don't know that we've ever talked about him on the show.
[00:07:21] Paul Roetzer: Yeah, he was obviously involved in the Biden White House pretty significantly, so I did look into him a little bit because I was curious who this guy was that played such a key role. And his background is very, very heavy national security and cybersecurity. He's the former director of the Cyber AI project at Georgetown's Center for Security and Emerging Technology.
[00:07:44] Paul Roetzer: He is currently an assistant professor at Johns Hopkins University. He was director for technology and national security for the National Security Council for one year during the Biden White House. He did his postdoc at the Harvard Kennedy School's Belfer Center, where he was working on a cybersecurity [00:08:00] project.
[00:08:00] Paul Roetzer: So he is heavy, heavy into this. And this is important context for the conversation: his background is actually in cybersecurity, and the fact that the Biden White House had someone with a cybersecurity background sort of leading the charge tells you where they think the conversation needs to be when it comes to this stuff.
[00:08:18] Paul Roetzer: So I want to start with the opening of the opinion piece, and I think he said something similar in the podcast. Klein says: if you've been telling yourself this isn't coming, I really think you need to question that. It's not Web3, it's not vaporware. A lot of what we're talking about is already here right now.
[00:08:36] Paul Roetzer: I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before and we're not prepared in part because it's not clear what it would mean to prepare. I think that's a very important point. We don't know what this will look like, what it will feel like. We don't know how labor markets will respond.
[00:08:54] Paul Roetzer: We don't know which country is going to get there first. We don't know what it will mean for war. We don't know what it will [00:09:00] mean for peace. And while there is so much else going on in the world to cover, I do think there's a good chance that when we look back on this era in human history, AI will have been the thing that matters.
[00:09:11] Paul Roetzer: We're at this moment of a big transition in policymakers, and they are probably going to be in power when artificial general intelligence, or something like it, hits the world. So what are they going to do? What kinds of decisions are going to need to be made? And what kinds of thinking do we need to start doing now to be prepared for something that virtually everybody who works in this area is trying to tell us, as loudly as possible, is coming.
[00:09:36] Paul Roetzer: So that, to me, just kind of encapsulates what we've been saying on the show, Mike. This isn't just us. It's not like a couple of talking heads on a podcast who've been following AI for a while who think it's important. This is fundamentally, across organizations, across government, across society, the people in the know trying very, very hard to get everyone else to pay attention.[00:10:00]
[00:10:00] Paul Roetzer: But as Klein illuminates right away, nobody has a plan for this. And this is what I keep preaching: let's just be proactive. So Mike, you had touched on this definition, but I think it's important that, you know, Buchanan talks about a canonical definition. He stressed right away, and it was kind of funny,
[00:10:17] Paul Roetzer: Klein called him out on this. He's like, I don't like AGI as a term. And Klein's like, we get it, man, stop telling us every time you use the term. So it's a system capable of doing almost any cognitive task a human can do. Now this is Buchanan: I don't know that we will quite see that in the next four years or so, but I do think we will see something like that,
[00:10:36] Paul Roetzer: where the breadth of the system is remarkable, but also its depth, its capacity to, in some cases, exceed human capabilities regardless of the cognitive discipline. Klein then says: systems that can replace human beings in cognitively demanding jobs. And Buchanan says: yes, or key parts of cognitive jobs.
[00:10:55] Paul Roetzer: So then we get into AI and government. Now, this is where I actually paused it. I was [00:11:00] listening to this in my car and I paused it after hearing this part, and I'll explain why in a second. So Buchanan says: what's fascinating to me is that this is the first revolutionary technology that is not funded by the Department of Defense.
[00:11:13] Paul Roetzer: Basically, if you go back historically over the last hundred years or so: nukes, space, the early days of the internet, the early days of the microprocessor, the early days of large-scale aviation, radar, the global positioning system, the list is very, very long. All of that tech fundamentally comes from Department of Defense money.
[00:11:33] Paul Roetzer: It's the private sector inventing it, to be sure, meaning AI, but the central government role gave the Department of Defense and the US government an understanding of the technology that, by default, it does not have in AI. It also gave the US government a capacity to shape where that technology goes that, by default, we don't have. What he's saying is most of the innovation, especially over the last 50 years, came out of DARPA, the Defense Advanced Research Projects Agency, in the Department of Defense, [00:12:00] meaning they either invented it or they were funding the private development of these technologies, so they were on the inside as it was emerging, and they could better plan ahead.
[00:12:10] Paul Roetzer: The US government has been funding AI for the last 10, 20 years through DARPA. They got caught off guard by generative AI, and all of that basically happened over the last three years. They were not prepared. They are not the ones that built it. The top language model didn't come out of DARPA. It came out of OpenAI, which the government had no involvement with.
[00:12:31] Paul Roetzer: It originated out of Google's labs, which the government didn't have access to. So that's a really important point: they've basically been playing catch-up, trying to understand this technology, and it appears one of the most important people in that process is a cybersecurity guy. So it tells you, again, the importance.
[00:12:50] Paul Roetzer: So then they get into AI in China, which they spent a good portion of the conversation on. I think the real key here is, as soon as Klein brought up China, Buchanan [00:13:00] directed it to cybersecurity. So this is why his cybersecurity background is so important: it helps illuminate how the government thinks about AI.
[00:13:07] Paul Roetzer: First and foremost, this is national security and military dominance. So jobs and the economy are secondary. It's not that the government isn't aware it might have this massive impact; they are just far more concerned about military dominance and national security, which is what DARPA's existence is all about: military dominance and protection of democracy in the US.
[00:13:28] Paul Roetzer: So in the AI and China debate, Buchanan says it's pretty out in the open that if you had a much more powerful AI capability, that would probably enable you to do better cyber operations, on offense and on defense. What is a cyber operation? He asks: breaking into an adversary's network to collect information, which, if you're already collecting in a large enough volume, AI systems can help you analyze.
[00:13:51] Paul Roetzer: We actually did a whole big thing through DARPA, which I mentioned, called the AI Cyber Challenge, to test out AI's capabilities [00:14:00] to do this. And I would not want to live in a world in which China has that capability on offense, defense, and cyber, and the United States does not. Meaning, he's seen what they're capable of doing with current AI.
[00:14:14] Paul Roetzer: And they can project out what they'll be able to do with more powerful AI. And they know they don't want China getting there first. They did touch on cybersecurity in these AI labs, which, Mike, you and I have talked about on these shows before. I remember specifically an instance where Dario Amodei was talking about how, basically, we spend probably billions on trying to protect our models and our weights within Anthropic.
[00:14:36] Paul Roetzer: Only a few people even know the weights to the system, but he said, if a foreign government wants it, they are going to get it. These people are really good at this, and no matter how strong our protections are, they are going to get it if they want it. And so there's this whole idea that these foreign actors are trying to get in through cyber, but also at San Francisco parties, where all [00:15:00] these AI researchers sort of openly talk about what they are doing and what they are working on.
[00:15:04] Paul Roetzer: So this was another instance where I paused, and I was like, oh my God, nationalization of these labs actually seems like something that the Biden administration very likely considered, and that the current administration, I don't think, would be as likely to consider. But you start to understand why nationalization of the labs might actually be a strategy that's explored.
[00:15:27] Paul Roetzer: Because if they become convinced that they need to get there first, and these models are going to become more and more powerful, then the government wants full control of protecting those models. So that was really interesting. And then the one where my ears really perked up was when Klein asked about Marc Andreessen, and I don't remember,
[00:15:46] Mike, what episode it was?
[00:15:48] Mike Kaput: Yeah,
[00:15:48] Paul Roetzer: was it, like, January or something? We talked about that. I think it was the
[00:15:51] Mike Kaput: beginning of the, right around the beginning of the year. I can look it up. Okay. Yeah.
[00:15:54] Paul Roetzer: So we will put the link in the show notes, but if you don't recall: Marc Andreessen, of a16z, [00:16:00] Andreessen Horowitz, the VC firm.
[00:16:03] Paul Roetzer: He had this very interesting quote where he claimed that he was in a meeting with someone from the Biden administration, I assume in 2024, where they basically told him: don't worry about investing in startups doing AI. There are only going to be two or three labs, and the Biden administration, in a second term, is going to make sure that all of this is centered on those two to three companies.
[00:16:28] Paul Roetzer: Again, they didn't say nationalization, but basically that they could then better control and protect these models. At the time we thought, that's really weird, and I would be shocked if they said it. But that was the argument Andreessen gave as to why they threw their support behind Trump and why they then pushed very heavily for Trump to get elected.
[00:16:47] Paul Roetzer: So I've been anxiously awaiting the other side of this story, and this is the first time I've heard from anyone who might have been in the room. So Klein says: were you part of the conversation that Andreessen was [00:17:00] describing? Buchanan said: I met him once. I don't know exactly. He then said Andreessen talked about concerns related to startups and competitiveness, and: I think my view on this is, look at our record on competitiveness, and it's pretty clear that we wanted a dynamic ecosystem.
[00:17:16] Paul Roetzer: Now, I do think there are structural dynamics related to scaling laws and the like that will force things toward big companies, which in many respects we were pushing against. I think our track record on competition is pretty clear. That is a very clear non-answer on this topic, in my opinion.
[00:17:33] Paul Roetzer: Did you read it the same way, Mike? I was like, well, that didn't answer the question.
[00:17:36] Mike Kaput: Oh yeah. I was reading the transcript again this morning. I was so excited for the next paragraph, and then, a few words in, I was like, ah. You're not going to answer this, are you?
[00:17:46] Mike Kaput: Nothing?
[00:17:47] Paul Roetzer: Yeah, no. And so he basically was like, I might've been in the room for the conversation he's referring to, but maybe that's not exactly what was said. So then Klein says: the view that I understand Andreessen arguing with, which is a view I have heard from people in the AI safety [00:18:00] community, but is not a view I had necessarily heard from the Biden administration, was that you will need to regulate the frontier models of the biggest labs when they get sufficiently powerful.
[00:18:10] Paul Roetzer: And in order to do that, you will need controls on those models. You just can't have the model and everything floating around so everybody can run this on their home laptop. So, yeah, we didn't get the answer I was hoping for. I don't know who else might've been in that room. I kind of got the impression it probably came from a meeting with Buchanan.
[00:18:29] Paul Roetzer: Yeah. But he just didn't want to get into the specifics there. It was a very politician-like answer for a non-politician.
[00:18:38] Paul Roetzer: It really was. The thing, Mike, that jumped out to me, and we will timestamp this, this is kind of the second major topic, was when Klein and Buchanan talked about the impact of AI on jobs.
[00:18:52] Paul Roetzer: So Klein says, one of the other things that Vance... so this is going back to JD Vance's AI Paris Summit talk, or [00:19:00] manifesto, from a few weeks ago, which we covered on the podcast, and we will drop that link in the show notes as well. One of the other things, this is Klein again, that Vance talked about, and that you said you agreed with, is making AI pro-worker.
[00:19:14] Paul Roetzer: What does that mean? So Buchanan then said: I think we want to have AI deployed across our economy in a way that increases workers' agency and capabilities. And I think we should be honest that there's going to be a lot of transition in the economy as a result of AI. "Transition" is doing a lot of work in that sentence.
[00:19:33] Paul Roetzer: No kidding. He continues: I don't know what that will look like. You can find Nobel Prize-winning economists who will say it won't be much. You can find other folks who will say it will be a ton. I tend to lean toward the side that says it's going to be a lot, but I'm not a labor economist. The line that Vice President Vance used is the exact same phrase that President Biden used, which is give [00:20:00] workers a seat at the table in that transition. I just laughed, honestly, when I heard that line.
[00:20:04] Paul Roetzer: That means literally nothing. It's one of the most useless lines. So Klein then says, and this is where it got kind of, um, chippy, I would say. Klein was going hard here, and that was the note I made to myself. I was like, wow, 51 minutes in, he goes really hard.
[00:20:22] Paul Roetzer: So Klein says: I will promise you, and this is what I have been saying, I'll promise you the labor economists do not know what to do about AI. You were the top advisor for AI. You were at the nerve center of the government's information about what is coming. If this is half as big as you seem to think it is, it's going to be the single most disruptive thing to hit labor markets ever, given how compressed the time period is in which it will arrive.
[00:20:49] Paul Roetzer: Again, Klein continues: it took a long time to lay down electricity. It took a long time to build railroads. [00:21:00] AI is going to come really quickly. It's going to be harder for the big firms to integrate it, but what you're going to have is new entrants who are built from the ground up, where their organization is built around one person overseeing these, like, seven systems.
[00:21:12] Paul Roetzer: This is the part where I was just like, whoa. He said: so you might just begin to see triple the unemployment among marketing graduates. As people who run Marketing AI Institute, that one again sort of caught my attention. He then says: there are just a lot of jobs that are doing work behind a computer. And as companies absorb machines that can do work behind a computer for you, that will change everything.
[00:21:34] Paul Roetzer: And he says to Buchanan: you must have heard somebody talking about this. You guys must have talked about this. Buchanan again kind of sidesteps this, and this became his answer basically for everything: we did talk to economists to try to texture this debate in '23, '24. The trend line is even clearer now than it was then.
[00:21:57] Paul Roetzer: We knew this was not going to be a [00:22:00] 2023, 2024 question. Frankly, to do anything robust about this was going to require Congress, and that just was not in the cards at all. So it was more of an intellectual exercise than it was policy. Now, I found this fascinating, Mike, because all of 2024, during election season, I kept saying you didn't hear anything about AI.
[00:22:21] Paul Roetzer: Neither side was talking about AI. And then as soon as the administration flips, it just dominates the conversation. And my belief at the time was that you couldn't win votes talking about it. There was no point in talking about it, because, one, they didn't have answers about what it meant, and two, the public didn't seem to care enough to take a side in this debate.
[00:22:42] Paul Roetzer: And so what we have here is the Biden administration basically saying: we know this is going to decimate the economy and jobs, or it might be great for the economy and GDP, but jobs it's probably going to decimate in the near term. But it's not going to be on our watch unless we win this election, and we're going to need [00:23:00] Congress, and we just can't do anything about this.
[00:23:02] Paul Roetzer: Hmm. So we explored it a little bit, and then I'll kind of wrap up here, Mike, with Klein's example of using Deep Research, which, you know, we've talked about as one of those moments for us where we're like, whoa, this changes things for the future of work. So Klein says: I recently used Deep Research, which is a new OpenAI product.
[00:23:23] Paul Roetzer: It's on their pricing tier. Most people, I think, have not used it. He's correct. But it can build out something that's more like a scientific analytical brief in a matter of minutes. Klein continues: I work with producers on the show. I hire incredibly talented people to do very demanding research work.
[00:23:41] Paul Roetzer: I asked Deep Research to do this report on the tensions between the Madisonian constitutional system and the highly polarized, nationalized parties we now have. I don't even know what that means. I need Deep Research to explain to me what that sentence means. And what it produced in a matter of minutes was at least the [00:24:00] median of what any of the teams I've worked with on this could produce within days.
[00:24:05] Paul Roetzer: I've talked to a number of people at firms that do high amounts of coding, and they tell me that by the end of this year or next year, they expect most code will not be written by human beings. And then, Mike, this is a lot, but Klein wasn't the only one talking about AI and jobs this week. So why don't you kind of talk us through a couple of other things.
[00:24:27] Paul Roetzer: Because again, what I said in the newsletter, and what I led out with this, is: when you zoom out, you just see the trends emerge. Yep. This on its own was noteworthy, but in the context of the other stuff Mike's about to walk through, all happening in a five-day period, you start to realize something different is happening.
[00:24:46] Paul Roetzer: And again, either they are all wrong, or we need to be doing more as a business world and as a society to be prepared.
[00:24:55] Mike Kaput: I'm going to dive into that in one second, but I want to add one final note here that just made me laugh out [00:25:00] loud in disbelief. You said Klein was really pushing him on stuff, and he was like, hey, why didn't you game this out?
[00:25:06] Mike Kaput: Why didn't you think more about this? Did you have conversations? And he even says at one point: did you drop this into, like, Claude and game out what could happen? And he literally says no, basically alluding to the fact that the government had restrictions on using the technology. And Klein says, well, that's a bit damning in and of itself, isn't it?
[00:25:26] Mike Kaput: And they kinda move on. So I was like, we are starting from a place that is way further behind than where I would've anticipated.
[00:25:34] Paul Roetzer: Yeah, and again, it's a theme of this show all the time: I keep trying to stress to people, if you think someone else is out there doing this research, they are not. This came up in the Situational Awareness episodes, Mike.
[00:25:47] Paul Roetzer: We did those two episodes back to back on Leopold Aschenbrenner's Situational Awareness, and that's what he said. He's like, dude, if you think someone else is figuring this out, there are, like, 200 of us in Silicon Valley who are even [00:26:00] aware of what's happening. So that's it. There is not some army coming that's going to figure this all out for everybody.
[00:26:08] AGI and Jobs
[00:26:08] Mike Kaput: Alright, so you talked about the Klein episode, and then you talked about AGI, ASI, and AI and jobs in the Exec AI newsletter. Now, tying this all together, we are seeing, like you mentioned, some of these signals that something is up, because one of the things that came out recently, according to The Information, is that OpenAI executives have apparently told some investors that the company plans to sell a variety of AI agents.
[00:26:38] Mike Kaput: Agents that seem pretty explicitly targeted at doing the type of knowledge work that we all do today. So this is straight from The Information, quote: OpenAI executives have told some investors they plan to sell low-end agents at a cost of $2,000 per month to, quote, high-income knowledge workers; mid-tier [00:27:00] agents for software development costing possibly $10,000 a month;
[00:27:04] Mike Kaput: and high-end agents, acting as PhD-level research agents, which could cost $20,000 per month, according to a person who's spoken with executives. So that in and of itself is quite big news if it ends up coming to fruition. And at around the same time, we've started to see some more of these projects and papers come out that strongly suggest agents and reasoning models,
[00:27:27] Mike Kaput: some of them using RAG, retrieval-augmented generation, may be able to transform how some of this really highly paid knowledge work in fields like finance and legal is done. So one quick project to note is called Endex. This is an agentic AI assistant that's been publicized by OpenAI on their website because it's built on their technology. Endex's agents autonomously process financial reports, market data, and firm-specific knowledge to complete tasks,
[00:27:58] Mike Kaput: all thanks to OpenAI's [00:28:00] reasoning models. And interestingly, using reasoning models, they are able to achieve the high levels of accuracy that are critical to this type of financial services work. They basically call it an AI financial analyst. There's also a paper that just came out that got a lot of attention, called "AI-Powered Lawyering: AI Reasoning Models, Retrieval-Augmented Generation, and the Future of Legal Practice."
[00:28:22] Mike Kaput: This is 80-plus pages of research that looked at what happened when lawyers started using some of the most advanced reasoning tools. They found, for instance, that law students using OpenAI's o1-preview saw work quality increase and saw time savings of 12 to 28%. And thanks to RAG-based AI with access to legal material, the hallucinations using this technology were reduced to a human level.
[00:28:49] Mike Kaput: So the whole point here is we're starting to see these domain-specific AI assistants, or, some might call them, replacements for this type of [00:29:00] really sophisticated knowledge work, not
[00:29:02] Paul Roetzer: the actual companies themselves per se. They will not say that. Correct.
[00:29:05] Mike Kaput: So to kind of wrap this all up, this is why the tech journalist Alex Kantrowitz, who we've talked about a bunch,
[00:29:12] Mike Kaput: and who does Big Technology, chose this week to write an article literally titled "Okay, I'm Starting to Think AI Can Do My Job After All," in which he concludes that some work that once seemed safe now looks like it's directly in the path of machines. And then, last but not least, and then, Paul, I want to turn this back over to you to get your take on what OpenAI is doing:
[00:29:34] Mike Kaput: there was a ton of firestorm and buzz on the internet over this AI agent out of China called Manus, M-A-N-U-S. Now, we have since found that this is probably just a wrapper around Claude that uses some agentic capabilities, but people are sharing all sorts of links and demonstrations of this thing, calling it a truly autonomous general agent that can go [00:30:00] do plenty of stuff for you without human involvement.
[00:30:03] Mike Kaput: So it doesn't look like we're quite there yet, or there's a lot more to that story. But Paul kind of maybe walk us through, tie these threads together. We've got this big, bold statement from OpenAI that they are going to charge all this for agents, like what's going on here?
[00:30:19] Paul Roetzer: So the Manus thing I was watching over the weekend, 'cause, I mean, Friday it was just blowing up.
[00:30:23] Paul Roetzer: It was DeepSeek-esque, where everyone all of a sudden was on it. Well, that and the MCP thing; I won't get into that, but that was crazy too. Yeah. So then yesterday I saw somebody who basically got the system to tell them that it was using Claude 3.7 Sonnet, I think, to do what it was doing, and that it was connected to, like, dozens of tools. Regardless,
[00:30:45] Paul Roetzer: it was definitely a more advanced computer use demo than we have seen, and I think it does show the promise. I don't think you'll look back and say, oh, Manus was this great breakthrough. Yeah. But I think it moved the [00:31:00] conversation forward about what these AI agents with computer use will be able to do, and it may accelerate the timeline in some ways,
[00:31:09] Paul Roetzer: which, again, is all inevitable. It's just how quick it happens and how fast it diffuses throughout society and the economy. So on the OpenAI pricing thing: we'd heard the $2,000 a month floated, and I said at the time, no-brainer. It's only $24,000 a year. You're paying that
[00:31:28] Paul Roetzer: when you make the business case for it. $20,000 a month is a different story. Also, I think demand would skyrocket. So at $20,000 a month, you're talking about $240,000 a year, which means you're now in the realm of financial analysts, attorneys, hedge fund managers, AI researchers, computer programmers. That's the people making, you know, $200,000 to $500,000 a year,
[00:31:51] Paul Roetzer: where, to your point, Mike, if we're talking about replacement value, yep, now I'm like, yeah, because that one $20,000-a-month agent can do the work of five of those [00:32:00] people once, you know, a year from now or whatever, we fast forward and the capabilities are there and the reliability is there. So while they are not saying replace your workers, if you go to the Endex site, which again is in the financial world, this is straight up the messaging from their homepage.
[00:32:16] Paul Roetzer: The autonomous financial analyst. It says: Meet Endex, your next financial coworker. That's softening things a little bit. Then they say: the first AI agent for financial services. Optimize your team's most common workflows and enhance the quality of every output. And then they have: scale your workforce, multiply your results. Use Endex to launch multiple tasks that will continue working in the background, like having an AI workforce 24/7.
[00:32:40] Paul Roetzer: You know what's great about an AI workforce, Mike? They don't need benefits. They don't need paid time off. Yep. Their mood never changes. Yep. They just do what you tell 'em to do, 24/7, as long as you've got enough NVIDIA chips humming in the background. So again, our whole point with all of this is not that Mike and I are proposing a [00:33:00] future where digital workers take over the workforce.
[00:33:02] Paul Roetzer: All we are telling you is: the VC money is funding companies that will build the equivalent of human workers, and far beyond that, because they don't sleep, they don't need benefits, they don't need time off. They cost $20,000 a month and they do the work of 10 people that cost a half a million a year.
[00:33:23] Paul Roetzer: Like, yep, it's coming fast. And I think that's the whole point of the first conversation leading off to this Klein thing: the government isn't prepared for how fast, the labs aren't doing the research to tell you how fast, but it's coming, and we need to do more.
[00:33:42] Paul Roetzer: There's still time to prepare, especially in the downstream industries, like, you know, manufacturing and healthcare and, to a degree, retail, these industries where it's going to take a few years. It's not like we're going to flip a switch and, by the end of 2025, it's just everywhere.
[00:33:58] Paul Roetzer: But it's a hundred percent going to [00:34:00] be infused into financial companies and hedge funds and law firms and AI research firms. It's going to be there. It's going to, this year, start to have a disruptive impact.
[00:34:11] Mike Kaput: Yeah. And the scariest thing, in a way, though I give him a lot of credit, is Klein asked some questions in that interview where I was like, these are the most in-depth and smartest questions anyone's been asking about this so far.
[00:34:23] Mike Kaput: Like, right. And it's, like, gaming it out.
[00:34:25] Paul Roetzer: Yeah. And now I would love... well, you know, I don't know if JD Vance is the right guy to do this, because I don't know that he would have any depth to his answers. But I would love, like, Andreessen... I honestly feel like there are just these people who are so pro-acceleration that the answer is always: we will figure it out.
[00:34:45] Paul Roetzer: It will create more jobs. It's going to, it always does. But Klein's questions, to your point: how do you possibly answer those questions with any depth whatsoever? That's my concern: the people who are driving this [00:35:00] innovation don't have good answers to any of the hard questions about the impact.
[00:35:06] Mike Kaput: So to kind of wrap all this up with a bow, this was not the only discussion of AGI and even superintelligence happening among some of these major players. At the same time, several major AI players, coincidentally or not, depending on what you want to think here, have released some updated thoughts on how we need to approach AGI.
[00:35:28] What to Do About AGI and Beyond
[00:35:28] Mike Kaput: So OpenAI recently published an article titled "How we think about safety and alignment." In it, they bluntly state, quote: as AI becomes more powerful, the stakes grow higher. The exact way the post-AGI world will look is hard to predict. The world will likely be more different from today's world than today's is from the 1500s.
[00:35:50] Mike Kaput: We expect the transformative impact of AGI to start within a few years. They then outline their current thinking on how to develop safe, beneficial AGI. [00:36:00] This is a process that emphasizes the principle of iterative deployment: basically, gradually introducing increasingly capable AI into real-world settings, not keeping it bottled up in a lab.
[00:36:16] Mike Kaput: They argue that by releasing systems incrementally, we can better identify and manage potential risks, and they highlight several risks they are working to mitigate, like human misuse, misalignment, and broader societal disruptions. Anthropic also released its own recommendations on how to keep what it calls powerful AI safe.
[00:36:38] Mike Kaput: That's kind of their term for AGI, which they expect to emerge in late 2026 or early 2027. They published this guidance on how to keep it safe, and their recommendations emphasize the urgency for the US government to prepare strategically for the economic and national security challenges that powerful [00:37:00] AI will bring.
[00:37:01] Mike Kaput: They suggest a six-part approach that includes things like enhancing national security testing for AI systems and tightening export controls. On top of all this, there's a new report that came out called "Superintelligence Strategy." It's making waves primarily for the people behind it. It's co-authored by Dan Hendrycks, who's the director of the Center for AI Safety and an advisor to xAI and Scale AI.
[00:37:28] Mike Kaput: Also co-authors are Alexandr Wang, Scale AI's founder and CEO, and Eric Schmidt, the former CEO of Google. In this report, they propose a framework that mirrors Cold War nuclear strategies. They literally define this idea called Mutual Assured AI Malfunction. And this is basically akin to the nuclear deterrence strategies used during the Cold War.
[00:37:55] Mike Kaput: And it suggest Mutually
[00:37:56] Paul Roetzer: assured destruction. Mutually assured
[00:37:57] Mike Kaput: destruction. Yes. The [00:38:00] idea that we would both annihilate each other if even some of these weapons, one, we
[00:38:04] Paul Roetzer: shoot 'em all. That's, that's the idea of Yes.
[00:38:07] Mike Kaput: And so this kind of builds on that idea saying that any country that aggressively attempts to monopolize superintelligence will inevitably face covert sabotage by rival nations seeking to prevent a destabilizing imbalance.
[00:38:22] Mike Kaput: So they basically say, look, to enforce stability, they argue for, you know, AI-focused espionage, sabotage, and strategic transparency: verifying rival states' compliance without revealing sensitive information about how far along they are as we're designing really advanced AI. So, Paul, that is a lot to unpack.
[00:38:43] Mike Kaput: A lot of it is kind of terrifying. But I guess to really sum up what I'm taking away from this: OpenAI is like, guess what, safety equals releasing this stuff. Anthropic is providing guidance, but the train is leaving the station. And we also have people, you know, literally [00:39:00] treating this like nuclear weapons technology in terms of nation-state competition.
[00:39:05] Mike Kaput: Do I have that right?
[00:39:06] Paul Roetzer: Yeah. I mean, there's no sugarcoating on this episode. Let's just be straight up: this is a problem. There is a lot of very dangerous territory ahead, and everything they are talking about in Superintelligence Strategy and Anthropic's powerful AI recommendations is happening already.
[00:39:25] Paul Roetzer: This isn't "three years from now, we should be thinking this way." There are already leaders who are thinking this way. All the AI labs know foreign actors are trying to infiltrate their systems, if they aren't already aware that they have infiltrated their systems. So, in the US, we know that China and Russia probably are within our electrical grids.
[00:39:48] Paul Roetzer: They are probably within the infrastructure that powers the country. And in a similar way, we're probably in their infrastructure. And the whole idea there is: don't take down our infrastructure, and we don't take down yours. That's [00:40:00] basically what this is. The labs know that foreign actors will be trying to get access to their systems.
[00:40:07] Paul Roetzer: The government knows this is happening. So they are fully aware of the national security risks of what they are doing. The jobs and the economy thing, again, kind of bringing it back to that: Dario and Sam are fully aware of what they are doing, the thing they are driving, and not just them: throw Google in the mix, and Meta, and all the others.
[00:40:27] Paul Roetzer: They are the ones who are building the technology that's likely going to be massively disruptive to industries and professions, but they have no idea what it looks like. In this instance, OpenAI literally says, quote, as you already highlighted: the exact way the post-AGI world will look is hard to predict.
[00:40:44] Paul Roetzer: The world will likely be more different from today's world than today's is from the 1500s.
[00:40:51] Mike Kaput: That's a crazy analogy.
[00:40:52] Paul Roetzer: That's like the Middle Ages, right? Isn't it the 1400s, 1500s? Like the Middle Ages, or right after.
[00:40:57] Mike Kaput: It's like, yeah, the early... I actually, fun fact, put [00:41:00] this into Grok and a couple of others.
[00:41:01] Mike Kaput: I said, look, here's what I am: a 38-year-old man in 2025. What if I went back to the 1500s? What would that look like? And it's just wild answers, like colonial England, you know? Okay, whatever. So there was
[00:41:13] Paul Roetzer: that. That reminds me, we will have to put the link in the show notes. Tim Urban wrote this great post about, like, AI, where, I forget what he called it,
[00:41:22] Paul Roetzer: it was that factor where, like, things would be so different you would literally just die. Yeah. He's like, if you go back to this period, you're just dead, because you can't comprehend how different it is. Or if you came forward from the past, you would get to that point and be like, oh my God, and you just die.
[00:41:36] Paul Roetzer: Like it's just nuts.
[00:41:37] Mike Kaput: That's why I mentioned how crazy this analogy is. And Sam knows what he is doing with it, or OpenAI knows what they are doing with it.
[00:41:42] Paul Roetzer: Yeah. So that means they think the next five years is so different, it's basically like taking a leap forward of, like, five... And the thing that's very clear now, as I said earlier: they are not going to solve this.
[00:41:56] Paul Roetzer: They are not going to sit down and play out what a post-AGI [00:42:00] world looks like in education, in business, in your profession, your industry. They are not going to do it, which means it's on governments, think tanks, associations, individual businesses, the consulting firms. And every research report I see out of consulting firms is asking people who have no idea about AI, what's the future of your business because of AI?
[00:42:20] Paul Roetzer: So they go and talk to a bunch of CEOs who themselves aren't really sure about it, who certainly couldn't tell you what AGI and ASI are, and the impacts of them. And yet that's what the consulting firms are giving us: these predictions about 2030 based on people who've never sat down and actually thought about a post-AGI world.
[00:42:40] Paul Roetzer: So, I don't know, Mike. I've mentioned this a couple times on the show, but we're going to launch a Road to AGI and Beyond podcast series as part of the Artificial Intelligence Show. I was looking at my schedule, trying to figure out when I can actually do this. So the plan right now is to launch the first episode on March 27th.
[00:42:59] Paul Roetzer: So again, we're going to [00:43:00] continue doing our weekly episodes. The idea with the Road to AGI series is to start with an updated AI timeline. I did the first one back in March 2024 on episode 87, and then I actually played that out in my Road to AGI keynote at MAICON last year, and we will put the links to both of those.
[00:43:16] Paul Roetzer: You can watch the full keynote. The idea here is to try and figure out what happens next, what it means, and what we can do about it, through interviews with people who are actually on the frontiers in all these key areas. So we're going to start releasing them at the end of March as regular episodes.
[00:43:34] Paul Roetzer: My goal is going to be, like, every other week. We will see how the schedule plays out. The whole idea is: what are the impacts of continued AI advancement on business, the economy, education, and society? So what I want to do is interview experts related to AI literacy, AI models, cybersecurity, the economy, energy, infrastructure, the future of business, the future of education, the future of work, government, legal, scientific [00:44:00] breakthroughs, societal impact, supply chain.
[00:44:01] Paul Roetzer: These are just some of the topics where I want to go and get the top minds who are actually thinking about AGI and beyond in those areas, and find out what is actually happening, and hopefully, throughout this series, start to see around the corner a little bit. I don't know. I feel like sitting here just talking about it every week does nothing.
[00:44:23] Paul Roetzer: And this kind of goes back to like the AI literacy project when I launched that. It's like, let's just do something like, I don't know what comes of it. I don't know what we learn, but just highlighting the fact that it's a problem is doing nothing. So like, let's go try and do something about it with that series.
[00:44:37] Mike Kaput: I love that. I really look forward to it, because yeah, you're right. There's not enough creativity and commentary around what this stuff actually looks like. Yeah. Alright, so let's dive into rapid fire for this week. There are still a couple more AGI/ASI-type topics, but we've also got some other things on the docket.
[00:44:56] Mike Kaput: But first, it's truly dominated the news.
[00:44:58] Paul Roetzer: Ilya. It's crazy. Yeah.
[00:44:59] This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.
[00:44:59] Mike Kaput: You gotta pay [00:45:00] attention when this many things related to it come at once. First up here, Ilya Sutskever's startup, Safe Superintelligence (SSI), has secured approximately $2 billion in funding at a $30 billion valuation. The startup has no product, it has just 20 employees, and its fundraising success, according to a recent report in the Wall Street Journal, appears to be driven by one thing and one thing only: Ilya himself and his reputation in the AI research community.
[00:45:32] Mike Kaput: Top venture capital firms like Sequoia Capital and Andreessen Horowitz have poured money into SSI based largely on their faith in Ilya's technical brilliance and vision. But good luck figuring out what exactly they are trying to do. They operate very secretly. They have a bare-bones website that is little more than a 200-word mission statement.
[00:45:52] Mike Kaput: Employees are apparently discouraged from even mentioning that they work at the place on their LinkedIn profiles. They have no plans to release any [00:46:00] products until they develop what the industry calls superintelligence: an AI system that can outsmart humans in every single field. Sutskever has told his associates he is not developing advanced AI using the same methods they used at OpenAI, where he used to work.
[00:46:18] Mike Kaput: Instead, he has identified a different mountain to climb that is showing early signs of promise. Paul, we've talked plenty about Ilya, but we have to mention this again based on everything we've talked about related to AGI and ASI this week. What kind of jumps out: one investor in this report called this a super high-risk bet,
[00:46:37] Mike Kaput: 'cause aren't you basically betting that one person's approach can not only solve superintelligence, but safe superintelligence? Like, how likely is that to be the case?
[00:46:49] Paul Roetzer: Yeah, my guess is they are all going into it assuming the money's gone, okay? Because there's a reasonable chance, whatever the pursuit is, whatever they've [00:47:00] unlocked or think they've unlocked in terms of a path forward,
[00:47:02] Paul Roetzer: my guess is he's not telling anybody what that is. These VCs are just kind of trusting that they've got a different path to go. There's no way he is disclosing to them what that path is. There's no plan for revenue. Not only is there no product or revenue, there are no plans for product or revenue.
[00:47:18] Paul Roetzer: There is a reasonable chance... keep in mind, Ilya, if you haven't been following along for a while, he's the guy who triggered Sam Altman getting fired. He's the co-founder of OpenAI who became so concerned with the direction of OpenAI, and with their plans to release their reasoning model, which was Strawberry at the time, work that Ilya led on,
[00:47:35] Paul Roetzer: that it led to his demise, briefly, at OpenAI. Yeah. And then they couldn't work things out, and he eventually, you know, leaves and does his own thing. So there's a chance that whatever Ilya unlocks, he decides isn't actually safe, and so they don't ever bring anything to market. So yeah, I'm guessing that these VC firms are like, well, let's [00:48:00] throw a billion at this and see where it goes.
[00:48:02] Paul Roetzer: If nothing else, it gives us a front-row seat to whatever may happen. Which, by the way, is what Elon Musk did with DeepMind before it got acquired by Google back in the day. He made friends with Demis Hassabis and became so concerned with what Demis knew and what they were building at DeepMind.
[00:48:17] Paul Roetzer: Then it gets acquired by Google, which triggers Elon to build OpenAI with Sam Altman as a counterweight to it. So this is all, in some ways, history repeating itself. But I don't know. I mean, if you had to stack up the most respected AI researchers in the world, and maybe in history, he's on the Mount Rushmore.
[00:48:37] Paul Roetzer: I mean, this is a top three-to-five researcher, if not the most respected of all of them. So everyone's going to pay attention to what he does
[00:48:48] Ex-DeepMind Researchers’ Startup Aims for Superintelligence
[00:48:48] Mike Kaput: in some other news. A new AI startup with its own ambitious vision for superintelligence emerged from stealth mode this week. This one is called Reflection AI, and they have raised [00:49:00] $130 million in funding at a $555 million valuation to build what they call autonomous coding agents.
[00:49:07] Mike Kaput: They believe this represents a crucial step toward achieving superintelligence. Now, this company was founded by Misha Laskin and Ioannis Antonoglou, two elite researchers from Google DeepMind. Antonoglou was a founding engineer at DeepMind who helped create AlphaGo, the breakthrough AI system that defeated world champion Lee Sedol at the board game Go in 2016, the moment many people consider a watershed in AI history.
[00:49:38] Mike Kaput: Now, unlike the coding assistant tools out there that just help you write code more efficiently, Reflection AI aims to create fully autonomous agents that can handle entire programming tasks from start to finish. They believe that by combining reinforcement learning with large language models, they can tackle the essential complexities of software [00:50:00] development.
[00:50:02] Mike Kaput: And early results suggest their models outperform traditional code generation approaches by a wide margin. Now, as they develop this technology, they plan to expand the capabilities of their coding agents. The vision is that eventually developers become directors of autonomous coding agents. And in the long term, this could extend to all knowledge work, not just coding.
[00:50:28] Mike Kaput: Laskin actually said, quote, our team pioneered reinforcement learning and large language models, and we decided that now is the time to bring both of these advancements together and build out a practical superintelligence that will do work on a computer. Now Paul, we're seeing a lot of agent startups out there, a lot of autonomous coding agents.
[00:50:50] Mike Kaput: Seems like, with the background of these guys, this one might be a bit special.
[00:50:55] Paul Roetzer: Yeah, and you know, I think this is a lesson we've mentioned many times on this show, which is: you follow the [00:51:00] top researchers from the top labs. It's, you know, Noam Shazeer, when he launched Character.AI. I think we talked about that on the show,
[00:51:09] Paul Roetzer: and then he eventually goes back to Google. So Noam was at Google multiple times: he goes and builds Character.AI, and Google acquires the technology. They didn't buy the company, I don't think they could, but they basically acqui-hired him and the team back for a few billion dollars. The top researchers are fundamental to understanding the research direction and to following along with what develops in this space.
[00:51:38] Paul Roetzer: So yeah, will this one work out? I don't know. Will they eventually get pulled back to DeepMind for a couple billion dollars in two years? Maybe. But it's always noteworthy. Now, the question here is: why pursue autonomous coding agents? You hear us talk about that a lot. You hear the, what was the thing called?
[00:51:55] Paul Roetzer: Manus. Manus. Yeah. Yeah, yeah. Cursor. You hear about all these things. Here's the thing about [00:52:00] AI researchers. There are tens of thousands of AI researchers. There's probably a few hundred, maybe up to a thousand, who would be top-tier AI researchers that everyone would compete for, would pay million-dollar-plus bonuses to get to come to their labs.
[00:52:15] Paul Roetzer: I'm not an AI researcher, but in my understanding of the space, one of the key values or traits of an AI researcher is their taste: their knowledge of which direction to pursue. So all of these labs are trying to get to AGI and beyond. The reason Ilya is so valued is because he has a history of very high taste, meaning he tends to know which research direction to go in that leads to the greatest-value output.
[00:52:44] Paul Roetzer: So if you're sitting in a major lab today, you all kind of have the same ideas of how these models are improving. You gotta pick where your NVIDIA chips are going to get used and which things your top researchers are going to work on. So is it multimodal? Is it improving [00:53:00] memory? Is it planning capabilities? Is it improving context windows?
[00:53:03] Paul Roetzer: Is it computer use? Is it reasoning? Is it agents? Is it reinforcement learning? Is it understanding world models? You have to make bets as to where to put your energy. So what does an autonomous coding agent do? It gives you almost infinite shots on goal. You can now be running these things, pursuing all of these paths through low-compute experiments, and then when you hit on something, you go.
[00:53:30] Paul Roetzer: And so that's what these labs do. They take all of these experiments, and they fight over compute access within the companies every day. It happens at Google, it happens at Meta, it happens at OpenAI. They fight over access to compute to run their experiments, to prove their hypotheses. Once you prove a hypothesis, you go. So reasoning models were that. That's what Ilya did with Strawberry.
[00:53:52] Paul Roetzer: He proved the test-time compute scaling law was likely going to hold, and that enabled OpenAI to [00:54:00] double down on reasoning. So that's why this matters. It's why we keep talking about these coding agents. You may be a VP of marketing or a CEO thinking, what do I care about coding?
[00:54:09] Paul Roetzer: You care about coding agents, because they drive everything once they solve how to do this.
[00:54:15] Mike Kaput: And it's probably at least an element of a lot of the talk around AGI and ASI. Even if this stuff feels far away, the moment you start cracking some of these autonomous coding agents is the moment we have kind of a fast takeoff,
[00:54:29] Mike Kaput: right? Where, yeah, 'cause you can
[00:54:30] Paul Roetzer: run millions of experiments instead of dozens. Right.
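To make the "shots on goal" point concrete, here is a minimal toy sketch in Python, not any lab's actual system, of the loop autonomous coding agents could enable: screen thousands of research hypotheses with cheap automated experiments, then promote only the winners to large compute runs. The function names, scoring, and threshold are all hypothetical.

```python
# A toy sketch of agent-driven research screening (hypothetical, illustrative only).
import random

def run_small_experiment(hypothesis: str) -> float:
    """Stand-in for an agent writing, running, and scoring one experiment.
    Here we just return a random score; a real agent would generate code,
    execute it on a small compute budget, and report a metric."""
    return random.random()

def research_loop(hypotheses: list[str], threshold: float = 0.9) -> list[str]:
    """Screen many directions cheaply; return the ones worth scaling up."""
    winners = []
    for h in hypotheses:
        if run_small_experiment(h) >= threshold:
            winners.append(h)  # promote to a large-compute run
    return winners

directions = ["memory", "planning", "longer context", "computer use",
              "reasoning", "agents", "reinforcement learning", "world models"]
# Thousands of agent-run trials instead of dozens of human-run ones:
trials = [f"{d} / variant {i}" for d in directions for i in range(1000)]
print(f"Screened {len(trials)} experiments; scaling up {len(research_loop(trials))}.")
```

The design point is the ratio: when each experiment costs agent time instead of researcher time, the lab's bottleneck shifts from people to compute, which is exactly the "millions instead of dozens" dynamic described above.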
[00:54:35] Human-to-Machine Scale for Writers Recap
[00:54:35] Mike Kaput: Alright, next up. Last week we wrapped up our AI for Writers Summit, a half-day virtual event that had 4,500-plus registrants from 90-plus countries. And this entire event was about how writers can begin to reimagine their work and careers in the age of AI.
[00:54:54] Mike Kaput: So Paul, to kick that event off, you gave a keynote on the state of AI for writers and creators, [00:55:00] which was an overview of how the latest AI models are reinventing the future of creativity. And as part of the keynote, you debuted something called the Human-to-Machine Scale for Writers, which is a framework anyone can use to better understand their way forward with AI.
[00:55:19] Mike Kaput: Could you walk us through that scale and what inspired it?
[00:55:23] Paul Roetzer: Yeah, so we will put the link to a LinkedIn post that I shared at the end of last week that actually has the 12-slide excerpt from the full presentation that plays out this whole Human-to-Machine Scale for Writers. In essence, what I did is iterate on a framework I had developed a few years back called the Human-to-Machine Scale, which looked at levels of autonomy:
[00:55:43] Paul Roetzer: what is the human's role, at a use-case level, when AI is applied to their job or to the tasks within their job? And so as I was trying to answer this question of when we should use AI to write, I realized I could probably adapt that Human-to-Machine Scale to this. [00:56:00] And so that's basically what we did. We hear from professionals all the time, specifically creative professionals, who struggle with this question of, when do I let the AI help, and when do I let it actually do the writing for me?
[00:56:13] Paul Roetzer: Because I'm a writer, it's like my art, my passion. It's the thing that gives me fulfillment. If people aren't familiar with me, that's my background. I actually came out of journalism school. I've authored three books. We do the podcast. I consider myself a writer and storyteller by trade.
[00:56:27] Paul Roetzer: For me, writing is a very important part of my process. It's how I think, it's how I learn topics, it's how I develop an understanding. I can't just take an article, have AI spit out a summary for me, and then talk to you all about the key points in it. It doesn't work for me. I don't develop a true comprehension of the topic.
[00:56:47] Paul Roetzer: And so the litmus test I gave, I think, during the talk, 'cause again, I didn't script the talk, so I'm not actually sure exactly what I said, but I think I said something to the effect of: anybody can use deep [00:57:00] research or ChatGPT to write a summary of a topic. But to actually understand that topic in a deep way, to the point where you could be a thought leader on it,
[00:57:08] Paul Roetzer: imagine throwing all that aside and sitting there for 30 minutes answering questions about the topic. That's my goal with everything we do with this show. I want to be so deeply ingrained in the things we analyze, the things I read and watch and listen to, that I can throw away any script and just talk about the topic, right?
[00:57:26] Paul Roetzer: And so that's kind of one of the fundamental things I shared with this idea. Level zero is all human: the human is the sole creator. Your voice matters tremendously. The audience expects authenticity. They expect you to just be sharing your knowledge. So that's all you.
[00:57:43] Paul Roetzer: Level one is mostly human. That's where the human author is still leading, but you're using AI for things like research, or refining your work, or brainstorming. Level two gets into half and half. It's like a co-writer situation where the author and the AI truly start to work together. There's an increasing [00:58:00] focus on efficiency of rewriting, but the voice and the human touch still matter.
[00:58:04] Paul Roetzer: Level three gets into mostly machine. That's where it's largely AI-driven. The AI's probably writing the first draft; the human maybe tweaks it, refines it, approves it. So efficiency starts to take on far greater meaning. And then level four is all machine, where the human is basically removed from the loop.
[00:58:20] Paul Roetzer: It's an AI writer that purely, autonomously, you know, writes the stuff with little or no human oversight. And so, again, I would encourage people to go download the PDF from my LinkedIn post, because it goes into examples and characteristics at each level. And then it gives some tips at the end,
[00:58:36] Paul Roetzer: like, when does more human writing matter, and when is it more okay to work with machines? But the big point I made is, it's not a binary decision, do I or do I not use AI. It exists on a spectrum, and that spectrum, level zero to level four, is very subjective and personal.
[00:58:54] Paul Roetzer: The thing I didn't really address during the talk that's important is, some people aren't very good writers, [00:59:00] and they want to express themselves, but they don't have the ability to. And so level two, the co-writer situation, may be the sweet spot for you because you're not a writer by trade.
[00:59:10] Paul Roetzer: Whereas for me, I would say probably 80 to 90% of mine is level zero: podcast stuff, my keynotes, my LinkedIn posts. I have zero use for AI for that stuff. I want that to come from me, and the process is the purpose, as I said on LinkedIn. Going through the process is why I do it. But there's a lot more that's become level one, where it's still mostly me, but I'm increasingly using AI on the research front, outlining, refining, brainstorming. And that's okay, as long as it's clear with the people reading it or hearing it.
[00:59:42] Paul Roetzer: So yeah, thank you to everyone who's commented on that LinkedIn post. There are, I don't know, maybe a hundred comments by now, and it sounds like it was a helpful framework for people. So, you know, definitely go check it out. Honestly, it was one of those things I finished at 11:00 PM the night before the talk, so no one had seen it [01:00:00] except my daughter.
[01:00:01] Paul Roetzer: I was laying in bed with my daughter, like, can I just show you this? Because I gotta make sure this makes sense. So yeah, she's the only one who'd even seen the framework before I did the talk the next morning.
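For readers who think in code, here is a rough sketch of the five levels as a simple data structure, with a toy helper for reasoning about where a piece of writing might land. The level names and descriptions are paraphrased from Paul's explanation above; the helper's decision logic is purely illustrative, since, as he notes, the choice is subjective and personal.

```python
# A rough sketch of the Human-to-Machine Scale for Writers as data
# (descriptions paraphrased from the discussion; helper logic is hypothetical).
from dataclasses import dataclass

@dataclass
class Level:
    number: int
    name: str
    description: str

SCALE = [
    Level(0, "All Human", "Human is the sole creator; authenticity is the point."),
    Level(1, "Mostly Human", "Human leads; AI assists with research, refining, brainstorming."),
    Level(2, "Half and Half", "Co-writer: human and AI work together; voice still matters."),
    Level(3, "Mostly Machine", "AI drafts; human tweaks, refines, and approves."),
    Level(4, "All Machine", "AI writes autonomously with little or no human oversight."),
]

def pick_level(is_writer_by_trade: bool, content_is_personal: bool) -> Level:
    """Illustrative only: maps two of the considerations discussed above
    to a starting point on the scale."""
    if content_is_personal:
        # Personal-voice content: writers keep it human; non-writers may co-write.
        return SCALE[0] if is_writer_by_trade else SCALE[2]
    # Routine content: writers lean on AI assistance; non-writers can let AI draft.
    return SCALE[1] if is_writer_by_trade else SCALE[3]

# e.g., a trained writer producing personal-voice work lands at "All Human":
print(pick_level(is_writer_by_trade=True, content_is_personal=True).name)
```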
[01:00:11] Google AI Overviews
[01:00:11] Mike Kaput: That's awesome. So Google is doubling down on its incorporation of AI into search.
[01:00:18] Mike Kaput: The company announced last week it'll show AI Overviews for even more queries and add Gemini 2.0 to AI Overviews to make those results more useful. It's also getting closer to debuting AI Mode. AI Mode is a new feature that will generate the answer to a search query based on everything in Google's search index,
[01:00:38] Mike Kaput: basically just like you'd expect from Perplexity or ChatGPT search. Currently, this is only available if you pay for Google One AI Premium, their paid-tier service, but it will be rolling out a bit to users in the future. Now, with all these updates, the official line here is kind of that [01:01:00] more AI Overviews means more AI in Search, and
[01:01:03] Mike Kaput: none of this will really cannibalize people going to websites via links, which is the behavior, of course, that powers today's SEO and search ad ecosystem. Google claims that people are still clicking and going to websites through AI Overviews, and that AI Overviews and AI Mode will bring new people to Google for new things, according to The Verge.
[01:01:26] Mike Kaput: There is some other data that seems to tell a different story. Forbes actually reported that some new research from a content licensing platform called TollBit, which was shared exclusively with Forbes, says AI search engines send 96% less referral traffic to news sites and blogs compared to traditional search.
[01:01:49] Mike Kaput: The report actually analyzed 160 websites, including some news sites and consumer blogs, over the last three months of 2024 to understand how this was [01:02:00] all working. So Paul, we keep hearing that SEO isn't necessarily dead, it's just going to change. Do you believe that? I mean, we're going to need to make sure, of course, we show up in LLMs, but beyond that, it just seems like this whole traditional model is on its way out.
[01:02:20] Paul Roetzer: I mean, all I can say is, from my personal experience, I certainly go to fewer links. Yeah. Traditionally, if I go to Google and I'm doing research, I'm clicking on every link and curating it. You know, if I think about research for the show, or research for writing a book, or research for planning a trip: if I go into Google and I get 10 links, or however many it is, I'm going to click 'em.
[01:02:42] Paul Roetzer: If I go into Google and I get an AI Overview that answers my question directly, even if the links are prominently shown, I generally look at the links to make sure they are pulling from legitimate sources that I would trust. And if they are, I'm kind of assuming it gave me the answer I needed. Or if I use deep [01:03:00] research, the better it gets, the less I need to go into the citations.
[01:03:03] Paul Roetzer: I just look and make sure they are legitimate. Now, I'm not saying my personal use is representative of the market, right? But those seem like really logical assumptions. My hypothesis would be, sure, you'd have less traffic coming from it. So regardless of what Google and others say, I just have to believe that how people consume information is dramatically going to change.
[01:03:31] Paul Roetzer: For sure. Yeah. What it does to SEO, all I'll say is, in our Intro to AI class that I teach every month, we're getting way more questions than I used to get about how do I show up in large language models, right, like ChatGPT. So I think people are starting to catch on to the fact that maybe that's the new SEO: how do we show up in ChatGPT and AI Overviews? Is it different or the same as past search and how the algorithms work?
[01:03:57] The Top 100 Gen AI Consumer Apps
[01:03:57] Mike Kaput: Next up, Andreessen Horowitz has come out with their [01:04:00] latest Top 100 Gen AI Consumer Apps report. This report, which comes out every six months, ranks the top 50 AI-first web products by unique monthly visits, per Similarweb, and the top 50 AI-first mobile apps by monthly active users, per Sensor Tower.
[01:04:19] Mike Kaput: Some highlights from this latest report. First, ChatGPT's explosive resurgence. We talked about how it has reached 400 million weekly active users as of last month, and the mobile story is equally impressive. ChatGPT is consistently growing its active user base by 5 to 15% every month over the past year, and approximately 175 million of those 400 million weekly active users now access it through the mobile app.
[01:04:49] Mike Kaput: Second is the meteoric rise of DeepSeek. They launched their public chatbot on January 20th, 2025, and accumulated enough traffic in just 10 days to [01:05:00] rank as the second most popular AI product globally in January. The Chinese hedge-fund-backed AI tool reached 1 million users in 14 days, which was slower than ChatGPT's five-day mark,
[01:05:12] Mike Kaput: then surged to 10 million users in just 20 days, which, according to Andreessen, outpaces ChatGPT's 40-day timeline. By February, they had claimed the number two spot on mobile as well, capturing 15% of ChatGPT's mobile user base, with engagement levels slightly higher than competitors like Perplexity and Claude.
[01:05:33] Mike Kaput: Now, in total, 17 new companies entered the rankings. AI video apps are on the rise; they are, quote, bringing true usability with reliable outputs, according to a16z. There are three new video app entries on the list: Hailuo at number 12, Kling AI at number 17, and Sora from OpenAI at number [01:06:00] 23.
[01:06:01] Mike Kaput: AI coding tools are also really taking off. These include agentic integrated development environments, or IDEs, and text-to-web-app platforms for non-technical users. Tools here include things like Cursor, which we've talked about, at number 41, and Bolt at number 48. So Paul, this certainly seems to be a solid barometer of some recent trends we've seen in AI.
[01:06:25] Mike Kaput: Did anything jump out to you here?
[01:06:28] Paul Roetzer: on top web products? I don't see meta AI anywhere and I don't see Gemini anywhere. That's probably not a good sign. Yeah. Top, top gen AI mobile apps. Gemini's coming in at 22. yeah, I mean I, the tops are interesting, but I also think the middle to back of the top 50 or non-existent on the top 50.
[01:06:49] Paul Roetzer: Yeah. 50 omissions indicative two of where the market is. So yeah, it's, it's fascinating. I do think the deepsea thing is just, it's gotta just burn anthropic and meta in particular, I would [01:07:00] imagine Google to a degree too with Gemini. that they just sort of show up out nowhere and skyrocket up there with.
[01:07:07] Paul Roetzer: None of the marketing that these other ones have had.
[01:07:10] Mike Kaput: Yeah. Well, like we talked about last week, also likely a reason Meta is spinning out its own Meta AI app, right? Yep. Getting left off all these lists and all this attention. Yep.
[01:07:21] A quarter of startups in YC’s current cohort have codebases that are almost entirely AI-generated
[01:07:21] Mike Kaput: All right. So in some other news, according to Y Combinator managing partner Jared Friedman, a quarter of startups in YC's current W25 batch now have 95% of their codebases generated by AI.
[01:07:39] Mike Kaput: Now, what's really interesting here is these aren't non-technical founders building businesses by leveraging AI as a crutch. Friedman emphasized, quote, every one of these people is highly technical, completely capable of building their own products from scratch. A year ago, they would've built their product from scratch, but now 95% of it is built by an AI.
[01:07:59] Mike Kaput: [01:08:00] So essentially, developers are starting to become directors of AI systems rather than hands-on coders, describing what they want built and letting AI handle the implementation details, which Y Combinator says has some big implications. For one, it dramatically accelerates development cycles. It also potentially lowers the barrier to creating software, allowing people with good ideas but limited coding experience to bring their visions to life.
[01:08:28] Mike Kaput: However, there are some new challenges here. YC general partner Diana Hu noted during a discussion that, even when relying heavily on AI, founders still need the skill to evaluate the quality of the generated code. And YC CEO Garry Tan emphasized the point further, raising a crucial question about the long-term sustainability of this approach.
[01:08:50] Mike Kaput: He said, quote, let's say a startup with 95% AI-generated code goes out, and a year or two out they have a hundred million users on that product. Does the [01:09:00] code fall over or not? Paul, what can we learn here about the bigger picture? This isn't just about coding or Y Combinator; it just seems like the barriers to building are falling so fast thanks to AI.
[01:09:12] Paul Roetzer: Yeah, it's one of my hopes, actually, for what I think will be significant job displacement in the coming years: that we're going to go through a renaissance of entrepreneurship, this entirely new age of entrepreneurship where everyone can be an entrepreneur. Where, you know, if you don't have a job, or you're coming out of college, or you're looking for a transition, you can build something. Because one, two years out, you're going to be able to just use words to build apps.
[01:09:41] Paul Roetzer: You can do it now in some early demos and stuff, but I think the chance to offset the disruption is through the growth of startups.
[01:09:52] The Humanoid 100: Mapping the Humanoid Robot Value Chain
[01:09:52] Mike Kaput: Very cool. Very exciting as well. So in some other news, Morgan Stanley has released a new research report [01:10:00] diving into the fast-growing market for humanoid robots.
[01:10:03] Mike Kaput: They are calling this new frontier the physical embodiment of AI. They have compiled what they call the Humanoid 100, a carefully selected list of publicly traded companies that represent different parts of the humanoid robot ecosystem. Basically, it's a guide to which companies are poised to benefit as humanoid robots go from experiments to actually moving into homes, offices, and factories.
[01:10:27] Mike Kaput: They segment all these companies into three big categories: the brain, which includes foundational AI models, semiconductors, and software; the body, which represents components like sensors, actuators, and batteries; and integrators, which are companies currently building full-scale humanoids or capable of doing so in the near future.
[01:10:50] Mike Kaput: Interestingly, more than half of these companies are already actively involved in humanoid robot development, and nearly half are seen as having significant potential to [01:11:00] join the market soon. It also turns out that Asia, especially China, is leading the humanoid robot race. Over half of the listed companies involved in humanoids, and more than three-quarters of the integrators actively developing full humanoids, are based in Asia.
[01:11:17] Mike Kaput: Now, one last point here that's really interesting is that Morgan Stanley frames the global market size, the TAM, the total addressable market, for humanoid robots at a staggering $30 trillion, roughly equivalent to about 30% of the global economy. In practical terms, by 2040 they expect 8 million humanoid robots operating in the US alone,
[01:11:41] Mike Kaput: replacing jobs worth $357 billion annually in wages. By 2050, that number could reach 63 million humanoids, replacing nearly $3 trillion in annual wages. So Paul, you and I have talked a lot about this idea that the potential for an industry or area of work to be disrupted [01:12:00] will be a function of how much the reward is for disrupting it with smarter technology.
[01:12:05] Mike Kaput: And it seems like there are some particularly rich rewards for disrupting physical labor.
[01:12:12] Paul Roetzer: Yeah. When I talk about job disruption, I'm not even talking about the physical labor, right? So this is a whole other ball game, and I do think we're still a few years away from, you know, these things truly working.
[01:12:24] Paul Roetzer: There are amazing demos happening. You'll see these incredible videos from Figure and other places like that. I don't think they're a reality anytime soon, but I do think, by the end of this decade, this starts to come into view. I think I talked about that on a recent episode of the show. Yeah, I think you start to see some specific industries that get impacted in the next few years here, and then eventually it moves into the consumer world.
[01:12:47] Paul Roetzer: The other thing I mentioned on a previous episode was, if you want to look at the next investing frontier, find the robotics supply chain. And here comes this report, which perfectly [01:13:00] frames this out for us.
[01:13:04] Paul Roetzer: I'd say they probably just used OpenAI deep research to do it and then put it together with cool visuals. No, I'm just kidding. Morgan Stanley, I'm sure, spent tons of time on this, so thank you for doing it. Saved me from having to do it with deep research.
[01:13:12] Listener Questions
[01:13:12] Mike Kaput: I love that. Yeah. So you're welcome audience. Go check it out. Maybe if you want to go make some investments. Okay. All right. So to end up here this week, we are going to revisit our recurring segment here on listener questions where we are answering the questions that listeners have about ai.
[01:13:30] Mike Kaput: And we get tons of these each week. So we wanna start answering them as best we can. And this week's question, Paul is. What is the biggest misconception about AI right now, in your opinion?
[01:13:43] Paul Roetzer: Yeah, there are a lot. But let's zoom this in on business, because I've been in some meetings, even in the last couple weeks, where I saw this playing out again. I think that AI is seen as this overwhelming and oftentimes [01:14:00] abstract thing: we have to wait until we get the data right, or we have to wait until IT and legal give us clearance to go, or we have to wait until we get licenses to something. And people are just waiting for permission to move forward, and oftentimes delaying adoption or piloting projects because they don't really understand it.
[01:14:23] Paul Roetzer: And so it's just easier; they've got other things to deal with. So I think the biggest misconception is that it's hard to get started, that you can't just find a couple of use cases. Use ChatGPT, build a custom GPT, run a deep research project. Just do something that is part of your usual workflow, part of the tasks you already do, and go find a way to use these tools that doesn't need any proprietary data, doesn't need IT or legal involved.
[01:14:51] Paul Roetzer: It's just like, let me go see if I can save myself a few hours, or let me go see if I can improve this presentation a little bit with this technology. So I think that's the biggest [01:15:00] misconception, that it's hard to get started. And it's not. If you just find the right use case, you can go. And as I mentioned earlier, we teach this Intro to AI class every month.
[01:15:12] Paul Roetzer: I think I'm on like the 45th one or something. I started doing this in November 2021. We will put the link in, but you can sign up for free, and I just go through this 30-minute intro and then we do 30 minutes of Q&A. And I promise you, if you don't know where to start, you'll know where to start after that talk.
[01:15:29] Paul Roetzer: It gives you enough to just go and get rolling with some pilot projects. So that's my biggest thing, and I guess my biggest urge to you would be: just do something, you know, just get going. So yeah, I think that's a great question. Keep 'em coming. Quick programming note: Mike and I are both out next week, so we're going to skip March 17th.
[01:15:52] Paul Roetzer: I actually feel bad doing this. I'm anticipating the messages we're going to get about this, but we're not going to be around [01:16:00] Friday or Monday to record this thing. So we're going to skip the weekly episode on the 17th. We will be back on the 24th with the weekly, and that's the week we will also plan to do the special edition: the first episode of the Road to AGI and Beyond series.
[01:16:15] Paul Roetzer: So you'll get a Tuesday and a Thursday that week, but nothing next week. Follow me on LinkedIn; I'll put the, you know, key things out as they are happening. And if you don't subscribe to the SmarterX Exec Insider newsletter, get that. I'll still publish that on Sunday, like I always do. We will put the link in the show notes, but it's SmarterX.ai, and then just click on Newsletter.
[01:16:37] Paul Roetzer: I publish that every week, and it's kind of like a preview too, so I'll cover the stuff we're not going to be getting to on the podcast next week.
[01:16:45] Mike Kaput: Yeah. And I would just add, while I don't ever want to skip a week with our audience, I think the content of this episode is more than enough to think about for two weeks.
[01:16:54] Mike Kaput: So maybe listen to this again next week. Yeah. This is an opportunity, perhaps a sign from the [01:17:00] universe, to spend a little time considering the implications.
[01:17:03] Paul Roetzer: agree. Mike, I think you and I could probably both use the week to Yeah. Ponder some of the stuff in this. I may actually go back and listen. I've never listened to one of our episodes.
[01:17:12] Paul Roetzer: I may go back and listen to the first like 30 minutes of this one. think that there's just a lot there that has much deeper, meaning and impact than. Might appear right away. And it's, and it's honestly, there's a lot of the stuff we touched on there is feeding into my like version two of the AI timeline.
[01:17:29] Paul Roetzer: There's a lot of things that kind of been trying to piece together in my head and that is part of it.
[01:17:33] Mike Kaput: Cool. Can't wait.
[01:17:35] Paul Roetzer: All right. Thanks, everyone, for being with us, as always. We appreciate you listening and watching on YouTube, and we will be back on March 24th. Thanks for listening to the AI Show.
[01:17:46] Paul Roetzer: Visit MarketingAIInstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, [01:18:00] attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.
[01:18:07] Paul Roetzer: Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.