This week, Paul and Mike return with a rapid-fire breakdown. From major AI companies' bold policy recommendations for the AI Action Plan to Altman's teaser of a new creative writing model that blurs the line between human and machine, there's a lot to unpack.
Plus: Google’s AI infrastructure bets, Claude’s web search rollout, and a new study showing how AI is transforming team dynamics and boosting productivity inside companies.
Listen or watch below, and scroll down for show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:05:01 — NY Times Writer “Feeling the AGI”
- Powerful A.I. Is Coming. We’re Not Ready - The New York Times
- Ep.139 of The Artificial Intelligence Show
00:15:00 — AI Action Plan Proposals
- OpenAI calls DeepSeek ‘state-controlled,’ calls for bans on ‘PRC-produced’ models - TechCrunch
- OpenAI’s proposals for the U.S. AI Action Plan - OpenAI
- OpenAI Asks White House for Relief From State AI Rules - Bloomberg
- Google calls for weakened copyright and export rules in AI policy proposal - TechCrunch
- Google Response to the National Science Foundation’s and Office of Science & Technology Policy’s Request for Information on the Development of an Artificial Intelligence (AI) Action Plan - Google
- a16z’s Recommendations for the National AI Action Plan - a16z
- Hundreds of celebrities warn against letting OpenAI and Google ‘freely exploit’ Hollywood - The Verge
00:24:13 — Sam Altman Teases New Creative Writing Model
- X Post from Sam Altman on the New Model
- X Post from Noam Brown on the New Model
- OpenAI’s ‘creative writing’ AI evokes that annoying kid from high school fiction club - TechCrunch
00:30:21 — Claude Gets Web Search
00:31:59 — AI’s Impact on Google Search
- New Research: Google Search Grew 20%+ in 2024; receives ~373X more searches than ChatGPT - SparkToro
00:36:35 — Anthropic’s Strong Start to the Year
- Anthropic’s Claude Drives Strong Revenue Growth While Powering ‘Manus’ Sensation - The Information
- Anthropic’s plan to win the AI race - The Verge
- Inside Google’s Investment in the A.I. Start-Up Anthropic - The New York Times
00:40:19 — It Turns Out That Gemini Can Remove Image Watermarks
00:44:32 — Google Research on New Way to Scale AI
- Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification
- X Post on Research from DeepMind Researcher
00:48:42 — New Research Shows How GenAI Changes Performance in Corporate Work
- The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
- The Cybernetic Teammate - One Useful Thing
- JobsGPT
- CampaignsGPT
- ProblemsGPT
00:57:18 — The Time Horizon of Tasks AI Can Handle Is Doubling Fast
- Measuring AI Ability to Complete Long Tasks - METR Blog
- AI could soon tackle projects that take humans weeks
- X Post from Elizabeth Barnes, METR founder and Co-CEO
01:05:14 — Apple Comes Clean on Siri AI Delays
- Apple’s Siri Chief Calls AI Delays Ugly and Embarrassing, Promises Fixes - Bloomberg
- Apple Shuffles AI Executive Ranks in Bid to Turn Around Siri - Bloomberg
01:08:51 — OpenAI Agents May Threaten Consumer Apps
01:14:03 — Powering the AI Revolution
- Startup Behind OpenAI’s Stargate Data Center Lands Record Power Deal - The Information
- How A.I. Is Changing the Way the World Builds Computers - The New York Times
- X Post from Sundar Pichai
01:17:44 — Google Deep Research Tips
01:21:14 — Other Product and Funding Updates
- Google Gemini Updates—Including a Robotics Model
- Perplexity Might Be Raising More Money
- OpusClip Now Valued at $215 Million
- Zoom Debuts New Agentic Features
- YouTuber Releases Extensive NotebookLM Tutorial
Summary
NY Times Writer “Feeling the AGI”
In a recent piece, New York Times technology columnist Kevin Roose argues that the era of artificial general intelligence, or AGI, is closer than most of us realize. (He defines AGI as systems capable of performing nearly every cognitive task humans can.)
After extensive conversations with leading engineers, researchers, and entrepreneurs, Roose says AGI might emerge as soon as 2026, possibly even earlier.
What's striking about his findings is the growing consensus among AI insiders themselves. Sam Altman from OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei from Anthropic all publicly acknowledge that systems rivaling or exceeding human intelligence could arrive within just a few years.
Still, despite clear signals of dramatic change, Roose argues society remains largely unprepared. And he warns that waiting until AGI becomes undeniable—perhaps when it starts eliminating jobs or causing tangible harm—would mirror the costly mistakes we made during the rise of social media, when issues weren't addressed until it was too late.
Even more telling is the concern coming directly from people developing this technology: unlike social media’s early days, where creators didn’t foresee societal harm, today's AI engineers and executives openly worry about what they’re building, even researching the potential for AI to engage in deception or manipulation.
Roose concludes that whether AGI arrives in two years or ten, the time to seriously prepare is now. After all, he argues, the risk of overpreparing pales next to the dangers of complacency.
AI Action Plan Proposals
In February, the Trump administration invited public comment on its AI Action Plan, a policy plan required under the administration’s recent Executive Order on AI. A number of AI leaders—including OpenAI, Google, and Andreessen Horowitz—have answered that call, releasing various policy proposals for the AI Action Plan, and some of them are controversial.
OpenAI’s recommendations focus on two hot-button issues: federal preemption of state-level AI regulations and targeted restrictions on Chinese AI models.
They are pushing for federal rules to avoid a messy patchwork of state AI laws that could slow innovation. Their idea? Let AI companies work with the government by sharing model access in exchange for legal protections. They're also raising red flags about China’s DeepSeek, calling it a security risk due to data laws and potential IP theft—and suggesting a ban on Chinese AI models in top allied countries.
Google struck similar notes in its recommendations. The company also advocates for consistent federal-level legislation on AI. While Google doesn’t directly attack DeepSeek and Chinese-led AI, it does advocate for investment in foundational domestic AI.
Interestingly, Google also devotes space to US copyright laws, contending that certain exceptions to copyright are vital to AI progress because they enable developers to freely train AI models on publicly available material—including copyrighted content—without complicated negotiations or legal battles.
Andreessen’s recommendations echo those of OpenAI and Google. They emphasize federal leadership on AI regulation, establishing a single, coherent national framework rather than leaving regulation up to the states. They also heavily emphasize affirming that existing copyright laws allow AI developers to use publicly available data for training models without unnecessary restrictions.
This episode is presented by Goldcast.
Goldcast is a B2B video content platform that helps marketing teams easily produce, repurpose, and distribute video content. We use Goldcast for our virtual Summits, and one of the standout features for us is their AI-powered Content Lab. If you're running virtual events and want to maximize your content effortlessly, check out Goldcast. Learn more at goldcast.io.
This episode is also presented by our Scaling AI webinar series.
Register now to learn the framework Paul Roetzer has taught to thousands of corporate, education, and government leaders. Learn more at ScalingAI.com and click on “Register for our upcoming webinar.”
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: And then I come back to, but is it the same value as if a human did it? I don't know. Like, where is that line between the value of AI-generated content or art and human-generated content or art? And I don't think we have come to grips with that in society yet, and certainly not in the business world.
[00:00:17] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:46] Paul Roetzer: Join us as we accelerate AI literacy for all.
[00:00:53] Paul Roetzer: Welcome to episode 140 of the Artificial Intelligence Show. I'm your host, Paul Roetzer. I'm with my co-host Mike [00:01:00] Kaput, who is fresh off of a trip to Japan. How long were you in Japan, Mike?
[00:01:03] Mike Kaput: I was there for about 10 days. It's a little hazy to tell because the flight out and the flight back are brutal and there's a lot of traveling involved.
[00:01:12] Paul Roetzer: Sounds like an amazing experience though.
[00:01:14] Mike Kaput: Oh, it was awesome. I couldn't recommend it enough to anyone who likes to travel. Japan is awesome.
[00:01:20] Paul Roetzer: That's on my family's wishlist. They're huge Nintendo fans, and they want to get to the home base and, you know, not only experience the culture, but get to the Nintendo experiences as well.
[00:01:33] Mike Kaput: So there's a lot of that.
[00:01:35] Paul Roetzer: My son was messaging me. He's like, hey, isn't your friend in Japan? Can you ask him to find these, like, Pokemon things? They're, like, only available in Japan. I forgot to send it to you, these gummy things you wanted me to find in Japan. So, yeah, that's awesome, and I'm so happy you got to have that experience.
[00:01:52] Paul Roetzer: And I know our mutual friend who's, you know, living there got to spend some time with you, so, yeah, that's awesome. And then I don't remember [00:02:00] what I was doing last week, honestly. I know I was away as well to start the week. So, no episode last week, and we appreciate everyone who reached out saying they missed us. It means a lot.
[00:02:10] Paul Roetzer: Like, we are glad, you know, people look forward to this every week. Mike and I look forward to doing it every week, so it's good to be back with you all. It is Monday, March 24th. We are doing this at 11:00 AM Eastern time again, in case anything crazy happens today and we don't cover it. This episode is brought to us by Goldcast.
[00:02:28] Paul Roetzer: Goldcast is the, or was the, presenting sponsor of our AI for Writers Summit and is a gold partner of Marketing AI Institute. We use Goldcast for our virtual summits, and one of the standout features that we always talk about is their AI-powered Content Lab. It takes event recordings and instantly turns them into ready-to-use video clips, transcripts, and social content, which saves our team dozens of hours of work, which is awesome.
[00:02:53] Paul Roetzer: So if you're running virtual events and wanna maximize your content effortlessly, check out Goldcast at [00:03:00] goldcast.io. And then the second thing I wanna mention this week is we have our Scaling AI webinar on March 27th. So that is coming up on Thursday the 27th. This is a monthly free class that I teach.
[00:03:13] Paul Roetzer: So last June, June of 2024, I released the Scaling AI series. It's a paid course series that's part of our Mastery membership now. But that course series is based on a framework of five steps that every organization needs to take to scale AI. So this webinar is actually a free, condensed version of that series.
[00:03:35] Paul Roetzer: It walks you through those five steps. It's super valuable from a beginner perspective if you're trying to think about, beyond pilot projects, what we need to do as an organization to truly drive transformation through AI. This class gives you an introduction to that. I think this is like the sixth or seventh time I'm doing this.
[00:03:53] Paul Roetzer: We have had probably close to 7,000 people register for this series. You can go learn more about it at [00:04:00] scalingai.com. At the top of the page there is a Register for Our Upcoming Webinar link. It's the quickest way to get there. So go to scalingai.com, click on Register for Our Upcoming Webinar, and you can join us there.
[00:04:12] Paul Roetzer: Like with our Intro to AI class that I do each month, there is an on-demand version available for seven days or so if you register. So if you register and can't make it, because it's at noon Eastern time on Thursday, don't worry, you'll get an email with access to it for about seven days after the event so you can go and watch it.
[00:04:29] Paul Roetzer: So again, scalingai.com. The webinar is Five Essential Steps to Scaling AI. Alright, Mike, we are gonna go rapid-fire style. We missed a week, and so we are gonna catch everybody up by trying to run through as many updates as possible. There were a lot, but we are gonna do our best to get through all the ones that matter.
[00:04:47] Paul Roetzer: And then we'll include another, I don't know, like 15 to 20 links that we couldn't get to today in the Marketing AI Institute newsletter. If you aren't subscribed to that, check out This Week in AI and it'll get you the rest of the [00:05:00] links.
[00:05:01] NY Times Writer “Feeling the AGI”
[00:05:01] Mike Kaput: Alright, Paul, kicking things off. Powerful AI is coming fast, according to New York Times technology columnist Kevin Roose, and, in his view, we are far from ready for what's next.
[00:05:13] Mike Kaput: So in a recent piece in the Times, Roose argues that the era of artificial general intelligence, or AGI, is closer than most of us realize. He defines AGI as systems capable of performing nearly every cognitive task that humans can. So Roose had extensive conversations with leading engineers, researchers, and entrepreneurs, and came away with the conclusion that AGI might emerge as soon as 2026, or possibly even earlier.
[00:05:45] Mike Kaput: What's striking about his findings is this growing consensus among AI insiders. So people like Sam Altman at OpenAI, Demis Hassabis at Google DeepMind, and Dario Amodei at Anthropic all publicly acknowledge [00:06:00] that systems rivaling or exceeding human intelligence could arrive within just a few years. Now, Roose actually says that even more telling is the concern coming directly from the people building this stuff.
[00:06:14] Mike Kaput: So unlike, say, the early days of social media, when the people building the technology didn't really warn us or foresee any societal harm, today's AI engineers and executives are openly worrying about what they're building and even researching the potential for AI to engage in deception or manipulation.
[00:06:34] Mike Kaput: Now, Roose is saying it's not just the people building it that are sounding the alarm. I mean, there are independent experts like Geoffrey Hinton and Yoshua Bengio, pioneers in AI research, who are echoing these warnings. And Roose points to a bunch of concrete examples that seem to back up this thinking. So we have newer and advanced AI models that now excel at complex reasoning.
[00:06:59] Mike Kaput: They're doing things like [00:07:00] turning in medal-winning performances on math challenges and consistently handling sophisticated programming previously reserved for human coders. Now, Roose kind of concludes this argument saying that despite clear signals that some type of dramatic change is coming, society remains largely unprepared. So governments lack cohesive plans to manage the changes that are going to come from AI,
[00:07:26] Mike Kaput: AGI specifically. And he warns that if we wait until AGI becomes undeniable, like when it starts eliminating jobs or causing real harm, we are going to make a ton of mistakes that we are not going to be able to fix. He then concludes by saying the time to seriously prepare for AGI, whether it arrives in a couple years or a decade, is now.
[00:07:50] Mike Kaput: Now Paul, this is not the first time we have heard the alarm bells around AGI ringing. In our last episode, we got a lot of attention for covering [00:08:00] journalist Ezra Klein's warnings about AGI. I know it's a topic you've been thinking about a lot, especially in the SmarterX Exec AI newsletter this past week.
[00:08:10] Mike Kaput: Maybe walk me through where you're at on this and why we are hearing even more about it.
[00:08:15] Paul Roetzer: Yeah. So I mean, if you're listening in, this may sound real similar to the start of episode 139 from two weeks ago, 'cause it is. It's another, you know, mainstream media writer that is talking about this based on conversations with people on the inside.
[00:08:31] Paul Roetzer: On March 7th, we had Alex Kantrowitz, who does the Big Technology podcast, who had, okay, I'm starting to think AI can do my job after all. We have Ezra Klein, we have this, we have the conversations with the labs. So yeah, it's just, again, increasingly obvious that the people within all these labs, the AI experts, the different media who follow closely within it, they're all saying the same thing.
[00:08:58] Paul Roetzer: They're all seeing the same trend [00:09:00] emerging. When I read this from Kevin, I tweeted, I'm 100% aligned with everything he believes and writes. Like, I thought he was right on. He said, I believe that most people and institutions are totally unprepared for the AI systems that exist today, let alone the more powerful ones. That is
[00:09:14] Paul Roetzer: exactly what we have been saying. Like, most companies you talk to, most business leaders you talk to, if you show them deep research, they're just floored. Like, they have no idea that AI is capable of doing the things deep research does, or even NotebookLM. Like, we live in the bubble we live in. And I would say many of the people who listen to the show regularly live in that same bubble.
[00:09:36] Paul Roetzer: We just kind of assume everyone's aware of what these things do already, and they're not. Like, most leaders have no concept of this stuff. I was at a talk last week, Mike, with, it was like 500 independent electrical distributors, like brilliant people, amazing businesses.
[00:09:55] Paul Roetzer: And I was actually on the flight home and I was talking with an executive who was in the talk. [00:10:00] So he's sitting next to me and he said, hey, you're the guy who did the talk today, and we just got talking about, like, where he's at with it and where his company is at. And it was just so representative of what I see over and over again with people who wanna figure this stuff out, but, like, they've got full-time jobs and they're CEOs or presidents or VPs or directors and, like, they don't have time to figure this out, and they're not even comfortable with ChatGPT.
[00:10:24] Paul Roetzer: Like, they don't know how to go in there and play around with prompts and get it to do the thing they want. They just know they should probably be figuring it out. And so that's where most of the business world is: they're still just trying to comprehend the capabilities of the current things.
[00:10:39] Paul Roetzer: And when you start talking about AGI and this idea that it's gonna be on par with, you know, or beyond the average human worker in their business, that's a crazy, absurd concept for them to try and process. So yeah, I think pieces like this are so important, because they start advancing the conversation beyond,
[00:10:59] Paul Roetzer: [00:11:00] you know, just here's where we are today. Because the reality is we may be somewhere very, very different, much more advanced, like, two years from now, maybe sooner than that. So yeah, that was, as you referenced, the Exec AI newsletter we do every Sunday. What I wrote this week was something I titled The Argument for an AGI Horizons Team.
[00:11:20] Paul Roetzer: So if you don't get the newsletter, you can go on my LinkedIn. I published an excerpt of it on LinkedIn on Sunday as well. But the basic premise is, back in early 2023, I was advising a major software company that had reached out to try and figure out what the hell's going on.
[00:11:36] Paul Roetzer: Because ChatGPT had just come out like two months earlier, and they were saying, like, this changes our product roadmap completely. Like, our product people are beside themselves, because things that they were planning to build over the next 12 months, like, a college kid can now build using ChatGPT or Claude or something like that.
[00:11:54] Paul Roetzer: So they were just trying to grasp the moment we were in and trying to figure out what does this mean today? [00:12:00] And I was like, listen, I can guide you on what to do today, but the thing I'm more concerned about for you all is what happens like three years from now. Because these labs are increasingly convinced that they have a clear path to AGI.
[00:12:14] Paul Roetzer: And when that happens, you're at a potential extinction-level event for your software, because, like, do I even need your software to do what it does anymore? And so what I advised them is, like, create an AGI Horizons team. And you might need some outside advisors, because it's hard for the product people internally to be objective.
[00:12:32] Paul Roetzer: Like, they're bought into their product roadmap for the next 12 to 24 months. And to tell them, hey, throw out your five best ideas because OpenAI is gonna be able to do that for us in six months, that's a hard thing for people internally to hear and to, like, be objective about. So I was like, get a few of your key people internally on this and then get a few outside advisors who can come in and be brutally objective and say, like, this product roadmap's gotta go.
[00:12:57] Paul Roetzer: Like, here's where we should be going, or [00:13:00] start building the next thing in unison. Like, go ahead and pursue that product roadmap, but you need to be, you know, taking the bigger shots here. And so I was saying in my newsletter, I think it's time for most major enterprises in particular; for small and mid-size businesses it might be hard to do, but definitely the bigger enterprises.
[00:13:17] Paul Roetzer: I think you need to seriously consider the idea of an AGI Horizons team that's actually starting to look out and say, okay, what if they're all right? Like, what if it's not just noise and hype? What if all these AI leaders and experts and labs and researchers, what if they're right, and two years from now we have AGI?
[00:13:35] Paul Roetzer: It is on par with the average human worker currently doing what we do in accounting and marketing and sales and legal and, you know, finance. What if it actually is? Because I'm telling you now, the probability isn't zero, and I actually think it's way closer to 50% than it is to zero. And so if there's a possibility that your business is gonna be completely disrupted in, say, two to five [00:14:00] years, it'll be different for each industry.
[00:14:02] Paul Roetzer: If there's a possibility, and I'm fairly confident, there's a very strong possibility, wouldn't you start planning for that? Wouldn't you start considering the possibility of that occurring and thinking through different scenarios of like, well, what are we gonna do? What's it mean to our product strategy?
[00:14:16] Paul Roetzer: What's it mean to our talent? What's it mean to our org structure and the competitive landscape? Like, these are things you should be thinking about. So yeah, I'm all for these articles. I think we need more conversation around this. And like I said, I would highly encourage people listening, especially if you work at a bigger company, to start having these conversations about, like, an AGI Horizons team that's looking out around the corner and trying to figure out
[00:14:40] Paul Roetzer: what happens if. Like, start doing some scenario planning, start thinking this through, because you don't want to get caught like most businesses did with ChatGPT, where they had no idea what was going on. And now, you know, here we are two-plus years later, and most companies are still scrambling to figure out gen AI and, like, what it means and building a roadmap and stuff like [00:15:00] that.
[00:15:00] AI Action Plan Proposals
[00:15:00] Mike Kaput: Back in February, the Trump administration invited public comment on its AI Action Plan, which is a policy plan that's required under the administration's recent executive order on AI. And a number of AI leaders, including OpenAI, Google, and Andreessen Horowitz, have all answered that call, releasing different policy proposals for this AI Action Plan.
[00:15:29] Mike Kaput: The kind of gist here is pretty controversial, actually, in terms of just how blatant they are with what they're recommending. So I'm gonna go through OpenAI's recommendations, but Google and Andreessen also echo these pretty closely. So OpenAI focuses on two kind of hot-button issues, which are federal preemption of state-level AI regulations and targeted restrictions on Chinese AI models.
[00:15:54] Mike Kaput: So OpenAI argues that there are all these hundreds of individual state AI bills, [00:16:00] and they risk bogging down innovation and undermining America's technological leadership. So to counter this, they want the federal government to put in place a framework where AI companies can actually innovate under federal regulation, not state regulation.
[00:16:18] Mike Kaput: They also took direct aim at China's emerging AI leader, DeepSeek, labeling it as state-subsidized and state-controlled. OpenAI actually expressed serious security concerns regarding DeepSeek's reasoning model R1. They went so far as to recommend banning the use of AI models produced in the People's Republic of China, including DeepSeek's, particularly in countries designated as Tier 1, which are those aligned closely with democratic values and US strategic interests.
[00:16:54] Mike Kaput: Now, Google, in its recommendations, which were released in the last couple weeks as well, also kind of [00:17:00] came out against fragmented state regulation. They didn't really come directly at DeepSeek and Chinese-led AI, but did advocate for investment in foundational domestic AI. And interestingly, they also devoted a bunch of space to US copyright laws.
[00:17:15] Mike Kaput: They contended that exceptions to copyright, such as fair use and text and data mining, are vital to AI progress because they enable AI companies to train their models freely on publicly available material. This is also something OpenAI was advocating for in its recommendations. And then if you look at Andreessen's recommendations, they echo the same types of things
[00:17:39] Mike Kaput: OpenAI and Google were suggesting. So Paul, this kind of reads to me like the major AI leaders are basically coming out and saying: we want federal AI legislation, not state legislation, on AI. We want to get stronger on Chinese companies building AI. And we wanna make it [00:18:00] really clearly legal for AI companies to train on copyrighted material.
[00:18:05] Mike Kaput: Does that kind of sound right to you?
[00:18:07] Paul Roetzer: Yeah, I don't think there's anything surprising in their positions. Like, it's been pretty obvious that these are their positions. I just think it's kind of jarring in some ways to see it so clearly stated in their proposals. The state-level policies:
[00:18:21] Paul Roetzer: I think at last count I had seen there were over 700 state-level AI bills right now at differing stages within states. You could imagine being in an AI lab and having to, like, follow along and understand and try and scenario plan for what happens if this law passes in Texas or California. I'm sure it's a lot of work, so I can understand why they wouldn't want that happening.
[00:18:45] Paul Roetzer: Copyright law, we have touched on this many times on the show. It is a well-known fact that they took copyrighted materials to train these models, and they continue to do that, including pirated books, which we were just talking about, I think with [00:19:00] Meta, in the last week or two. There was a lot going on around that.
[00:19:03] Paul Roetzer: And then, you know, China. What they're gonna do is everything's gonna be put under national security. Like, that's what this administration appears to care about, or at least says that they care about deeply. And so I think that this administration is going to side with many of these arguments.
[00:19:23] Paul Roetzer: Like, I mean, obviously I'm not a policy expert here, but it's very clear that these arguments seem to jibe with what the administration has kind of laid out thus far about what their policy may be. The one I wanted to zoom in on for a second here, Mike, because we have talked about it so much, is the copyright issue: was it legal for these labs to take copyrighted material from you and I, Mike, from YouTube creators, from authors, from brands, from blogs?
[00:19:57] Paul Roetzer: They took it all and they trained on it. [00:20:00] And do they have any responsibility to the original creators? Their argument is no, and they claim it's under fair use. So that is what's being challenged in courts right now. And what they basically want is the federal government to come in and say, get rid of all these cases.
[00:20:17] Paul Roetzer: What they did was completely legal, and they can move on with their lives so that the US can win the AI war, basically. So again, it's kind of jarring to see it so clearly said, but this is directly from OpenAI, what they called promoting the freedom to learn. I thought that was hilarious.
[00:20:37] Paul Roetzer: Okay, so I'll just highlight like two paragraphs here. American Copyright Law, including the longstanding fair use doctrine, protects the transformative uses of existing works, ensuring that innovators have a balanced and predictable framework for experimentation and entrepreneurship. This approach has underpinned American success through early phases of technological progress and is [00:21:00] even more critical to continued American leadership on AI in the wake of recent events in the PRC.
[00:21:06] Paul Roetzer: People's Republic of China, right? Okay. OpenAI's models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights. This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works.
[00:21:35] Paul Roetzer: So that is the argument they'll be making in courts, and they're making it to the Trump administration, saying, just side with us now, let's get rid of all these cases, and let's move on innovating. It goes on to say, in other markets, rigid copyright rules are repressing innovation and investment.
[00:21:49] Paul Roetzer: So now they're coming at, like, don't let other markets get ahead of us. And it says, applying the fair use doctrine to AI is not only a matter of American competitiveness, [00:22:00] it's a matter of national security. The rapid advances seen in the PRC's DeepSeek, among other recent developments, show that America's lead on frontier AI is far from guaranteed. Given concerted state support for critical industries and infrastructure projects, there's little doubt that the PRC's AI developers will enjoy unfettered access to data, including copyrighted data, that will improve their models. If the PRC's developers have unfettered access to data and American companies are left without fair use access,
[00:22:34] Paul Roetzer: the race for AI is effectively over. America loses, as does the success of democratic AI. So they are straight up saying: we are going to take these copyrighted materials, and if you don't let us, we lose. And if you go to what the Trump administration has said, they have very clearly said, we will not lose in AI.
[00:22:55] Paul Roetzer: It is a matter of national security that it must be democratic [00:23:00] AI. And they are just regurgitating those words back to them and saying, make this go away, because the only way for us to do what we are doing is to use copyrighted material to do it. So, I don't know. I mean, it was not surprising at all.
[00:23:13] Paul Roetzer: Like, we have known this was their position, but to see it this blatant, and I mean this is like 2,000 words or something like that in the copyright section, to lay it out as clearly as that, connected to national security, connected to competitiveness, directly connected, you know, to the war against China for AI supremacy.
[00:23:32] Paul Roetzer: It was just plain as day. And so, again, I have no idea where this lands. I'm not a legal expert. I've talked with many attorneys who are legal experts who don't know where this lands. Like, this is an unknown. But the big variable here has always been what's the Trump administration's position on this?
[00:23:49] Paul Roetzer: And, you know, where does it go from here? I don't know. Again, I think that the administration values winning [00:24:00] more than anything else. And if copyright is a hindrance to that happening, then I think that problem goes away. That's kind of my current belief on what's gonna happen.
[00:24:13] Sam Altman Teases New Creative Writing Model
[00:24:13] Mike Kaput: In some other news,
[00:24:14] Mike Kaput: in the past couple weeks, Sam Altman shared on X that OpenAI has trained a new AI model that is good at creative writing. So he shared an output from this model while noting that the model is not out and he's not sure yet how or when it will get released. But he said, quote, this is the first time I have really been struck by something written
[00:24:39] Mike Kaput: by AI. He then shared a short story that was written by this model, which responded to a prompt that he gave it, asking for a, quote, metafictional literary short story about AI and grief. So in the piece itself, the model directly acknowledges the constraints of the [00:25:00] instructions.
[00:25:04] Mike Kaput: It weaves a narrative around some fictional characters, uses detailed imagery. And kind of throughout the story, it also frequently reminds readers of its inherent artificiality. Kind of following that prompt to be kind of a meta, metafictional prompt here. Now, I thought it was pretty interesting to actually read through this, but the reaction among observers has been a bit mixed.
[00:25:28] Mike Kaput: So Altman obviously found this piece pretty moving. Critics pointed out that despite moments of genuine poignancy, the prose often becomes overly dramatic and kind of has these forced metaphors. TechCrunch said it evoked, quote, that annoying kid from high school fiction club. And others simply noted that whether they liked the output or not, they weren't really invested in it because it wasn't written by a human.
[00:25:55] Mike Kaput: So Paul, we are both writers. I'd love to get your opinion on this. [00:26:00] You know, I also found Noam Brown's opinion on this worth noting. He's a researcher at OpenAI we mention often. He said about this, quote, seeing these creative writing outputs has been a real feel-the-AGI moment for some folks at OpenAI.
[00:26:14] Mike Kaput: The pessimist line lately has been only stuff like code and math will keep getting better. The fuzzy subjective bits will stall. Nope. He says the tide is rising everywhere.
[00:26:27] Paul Roetzer: Yeah, I struggle with this one, Mike. I saw a demonstration. I was trying to see if I could find it on Twitter. I think I reshared it.
[00:26:37] Paul Roetzer: If I find it, I'll put it in the show notes. But it was actually from someone on the Google DeepMind team, I think, and they were demonstrating what was possible with AI Studio, where they were creating a children's book. And I think the person said they actually did this with their kids, and they had the AI writing the story but then creating illustrations with Imagen 3, [00:27:00] their, you know, image generation model.
[00:27:01] Paul Roetzer: And so it was doing the illustrations as it was going, and it's just, like, so wild to see that. And I think it's so personal for me because this is the thing I'm working on with my daughter. So she's 13, and we work on creative writing with ChatGPT. So she does, like, character development, idea development, and sometimes she uses ChatGPT to, like,
[00:27:24] Paul Roetzer: develop those ideas out. A lot of times she just makes her own notes and stuff. And so it's this, like, hybrid process of becoming a creative writer. And it's so intriguing to me to watch it happening. But then there's me and you, Mike, who consider ourselves creative writers by trade. Your wife is an amazing writer.
[00:27:42] Paul Roetzer: Like, it's really hard to watch. But I also accept that this is just where they're going, and these labs obviously think creative writing is critical to whatever the future of these models is, because they all talk about it. And they feature it as, like, a use [00:28:00] case that shows progression.
[00:28:01] Paul Roetzer: Like, even when the latest model from Anthropic came out, part of what they were selling was emotional intelligence and creative writing. So, I don't know. I mean, it is fascinating. Go play around with these models yourselves. You can go into Google AI Studio and experiment with, like, Gemini 2.0 Pro, their experimental one, and it does this stuff.
[00:28:21] Paul Roetzer: You can have it create the illustrations with it. It's impressive, and it creates so many unknowns about the future of writing and, like, how we are gonna teach these things. And, I don't know, I always go back to, you kind of referred to it a little bit, this idea that, yeah, these things are gonna be great at it.
[00:28:40] Paul Roetzer: Like, I think they already are. I've done it myself, where I've created experiments where it was really, really good writing, probably better than I could do from a creative standpoint. And then I come back to, but is it the same value as if a human did it? Like, I don't know. Where is that line between the value of AI-generated content [00:29:00] or art and human-generated content or art?
[00:29:03] Paul Roetzer: I just think it's gonna be fascinating to see it play out in the years ahead. I don't think there are right answers to this stuff. I think it's just gonna be how society decides to value these things when it is completely commoditized. Anybody can go in and create an amazing poem or children's story or
[00:29:19] Paul Roetzer: Article with AI right now. I would say that this is one of those things where it's probably better than most humans. Like yeah, I would say it's on par with the best humans at this. But is AI a better writer than the average human? In most cases, yeah. Like for most instances, it's probably better than the average human at writing.
[00:29:39] Paul Roetzer: And that's weird, and I don't think we have come to grips with that in society yet, and certainly not in the business world.
[00:29:45] Mike Kaput: Based on the comments responding to Sam's tweet, I would say we have not come to grips with that, because there's gonna be some backlash to this type of thing.
[00:29:55] Paul Roetzer: Yeah, and I think that's the thing we just keep waiting for. Like, how [00:30:00] many times do people need to start realizing that AI is good at the thing they do, or, like, the thing that someone in their family does, where you start thinking, I'm not so sure I'm the biggest fan of this AI stuff?
[00:30:11] Paul Roetzer: I dunno. I do keep waiting for society to sort of catch up to what it's capable of and see what happens when that occurs.
[00:30:21] Claude Gets Web Search
[00:30:21] Mike Kaput: So Claude, Anthropic's frontier model, has a pretty significant update: it can now search the web. You can now use Claude to search the internet and provide more up-to-date and relevant responses. With web search, Claude has access to the latest events and information, which Anthropic says boosts its accuracy on tasks that benefit from the most recent data.
[00:30:46] Mike Kaput: So when Claude uses online info in its answers now, it will provide direct citations to where it got the information from. And this is now available for paid Claude users in the US to start. And [00:31:00] Anthropic says, to get started with it, you have to actually toggle on web search in your profile settings, and you can only use it with Claude 3.7 Sonnet.
[00:31:09] Mike Kaput: And the company also says, support for users on the free plan and in more countries is coming soon. So Paul, this is definitely a welcome feature if you're a heavy Claude user. I don't know, maybe I'm like spoiled at this point though because it kind of feels like old news given that other models can do this already.
[00:31:27] Mike Kaput: But I could definitely see this being valuable if you're only using Claude.
[00:31:31] Paul Roetzer: Yeah, I think there may be some Claude users who don't realize Claude wasn't on the internet like that. I know it used to be the case where you would have people using Claude and they weren't aware it wasn't able to, right,
[00:31:41] Paul Roetzer: connect to the internet to verify things. So it is. And I don't remember why they hadn't done this. I thought it used to have something to do with, like, a security thing or how they verify. I don't remember why they took so long to do this, but it definitely seems like one of those things that probably should have rolled out, like, a year ago or more.
[00:31:59] AI's Impact on Google Search
[00:31:59] Mike Kaput: [00:32:00] Yeah, that's what I was wondering. In some other news, there's some new research from SEO leader Rand Fishkin that shows how Google Search is performing amidst competition from AI. And these results might actually be kind of surprising. So he found that despite widespread speculation that AI tools like ChatGPT might erode Google's dominance in search,
[00:32:24] Mike Kaput: Google search volume didn't just remain stable in the last year, it actually grew dramatically. So this research was done by Fishkin's company, SparkToro, and a company called Datos, which provided them with Google search data from 130,000 US devices, mobile and desktop, actively using Google for 21 consecutive months.
[00:32:45] Mike Kaput: So in this data, Google searches actually increased by over 21% from 2023 to 2024. And that growth aligns with Google's own comments suggesting that their new AI-driven search features, [00:33:00] things like AI Overviews, have actually boosted usage and user satisfaction. This research also reveals that ChatGPT and similar tools represent only a tiny fraction of overall search behavior.
[00:33:15] Mike Kaput: While Google handles over 14 billion searches every day by their calculations, ChatGPT search-like interactions top out at only about 37.5 million daily, which would make Google's daily search volume roughly 373 times greater than ChatGPT's.
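For reference, that multiple falls straight out of the two daily volumes cited; a quick back-of-the-envelope check (the variable names below are ours, just for illustration):

```python
google_daily = 14_000_000_000  # ~14 billion Google searches per day
chatgpt_daily = 37_500_000     # ~37.5 million ChatGPT search-like interactions per day

# Google's daily volume divided by ChatGPT's gives the cited multiple.
print(round(google_daily / chatgpt_daily))  # -> 373
```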
[00:33:40] Mike Kaput: I mean, Rand is, like, a really notable and authoritative guy in the search industry. There was one point I do wish he had dived into deeper, though. He said, so much for the fear that AI answers in Google would reduce the number of searches people performed. In fact, the exact opposite appears to be true.
[00:33:57] Mike Kaput: That much is borne out in the data. He goes on to say, though, that unfortunately AI answers do seem to kill click-through rates. A Seer Interactive study, an outside study he references, showed that organic results suffered a 70% drop in CTR, and paid dropped 12%. Another study from another firm shows a similar drop.
[00:34:19] Mike Kaput: So that seemed to me like maybe worth double-clicking into at some point, given that even if searches are going up, if AI answers are throttling traffic to sites, that could be a problem.
[00:34:30] Paul Roetzer: Yeah, and I think that's a really key point, Mike. Actually, the whole time you were talking about this, the question kept running through my head: I don't remember being worried about whether or not people would continue to search.
[00:34:42] Paul Roetzer: Yeah. It was always, like, what's it mean for traffic? Is the AI Overview going to take the traffic away from publishers and brands? It doesn't seem like they really get into that. I think the whole point of this research was to say, like, Google still dominates this space. Like, forget what you're seeing in headlines [00:35:00] about ChatGPT taking over the search market, or Perplexity or any of these other players.
[00:35:05] Paul Roetzer: It's Google's game still, basically. It seems like what they're saying here is people are still searching on Google and it's not changing. But I do think the more meaningful thing for brands and publishers is the, yeah, but are they coming to my website? And that's the unknown. Like, we have seen some supportive data here from Seer and others, but, you know, I think that is the assumption.
[00:35:29] Paul Roetzer: I don't know if we have done a deep dive into our own data to see it. I know we are getting traffic from, like, ChatGPT and Perplexity, but I don't know that we have seen a dramatic change in our Google traffic yet. We'll have to do an analysis and see.
[00:35:41] Mike Kaput: Yeah, so far I don't think we have seen a huge change though.
[00:35:45] Mike Kaput: I think we are starting to maybe see some initial signs that yeah, we are going to be getting more traffic through things like LLMs versus traditional search engines.
[00:35:53] Paul Roetzer: Yeah, I think I'm more interested to see when other people start realizing deep research, like from OpenAI [00:36:00] and Google.
[00:36:00] Paul Roetzer: When that product starts taking off and is more widely used. Like, I was actually talking with a college student the other day and I asked her, like, are you all using deep research? And she wasn't aware of it yet, so I actually showed her a quick demo of it and I was like, this would be really helpful at school.
[00:36:17] Paul Roetzer: So you can imagine, like, when college students start realizing, oh my gosh, I can use deep research to do all these projects and stuff, then the question starts becoming, well, how much of the traffic coming to our website is just people running deep research agents against your site? And what's the meaning of that?
[00:36:35] Anthropic’s Strong Start to the Year
[00:36:35] Mike Kaput: So Anthropic is having a great 2025 so far. According to The Information, their annualized revenue is up to $1.4 billion, from $1 billion at the end of 2024. The Information says this is roughly the same revenue pace that rival OpenAI reached in November 2023. If it keeps up this growth, it would beat its best base-case revenue projection of [00:37:00] $2 billion for 2025.
[00:37:02] Mike Kaput: Interestingly, at the same time, The New York Times revealed that Google owns 14% of Anthropic, a number that was not publicly confirmed previously but has been released due to some legal filings that came out related to a Google antitrust case. According to the Times, Google can only own up to 15% of Anthropic.
[00:37:23] Mike Kaput: It holds no voting rights, no board seats. Now, all of this is interesting from a financial perspective and shows very much that Anthropic has some momentum, but their product roadmap may be even more interesting. So Chief Product Officer Mike Krieger, who was formerly a co-founder of Instagram, gave an interview to The Verge where he said the company's, quote, critical path isn't through mass-market consumer adoption right now.
[00:37:50] Mike Kaput: Instead, the company is focused on building and training the best models in the world and quote, building vertical experiences that unlock AI [00:38:00] agents. He mentioned that the recent Claude Code feature is the company's first take on a vertical agent with coding, and that they'll do others that play to our model's advantages and help solve problems for people.
[00:38:12] Mike Kaput: He said, you'll see us go beyond Claude Code with some other agents over the coming year. So Paul, I found these comments from him pretty interesting. Like, it sounds like Anthropic may be less interested in direct consumer competition with the likes of OpenAI and more focused on productizing agents.
[00:38:33] Paul Roetzer: Yeah. And I think, if I remember correctly, we covered a podcast that Krieger recently did. I feel like we just talked about him a couple episodes ago, where we were getting into some of their thinking. We'll put the link in the show notes. It was a really fascinating kind of inside look at how he thinks about product, based on his Instagram background and kind of what he's doing at Anthropic.
[00:38:54] Paul Roetzer: But I agree with them. I don't think they're going to win in the consumer [00:39:00] marketplace. We have talked many times about how brand awareness of Anthropic is quite low outside of the AI bubble. I would say most business people I talk to have no idea it's a thing. So they have a lot of catching up to do if they want to compete.
[00:39:14] Paul Roetzer: And I think they're one of the ones, and Krieger mentioned this in the interview that I listened to, that got kind of sideswiped by DeepSeek's popularity. Like, the app came out of nowhere and just skyrocketed, and both them and Meta just sort of got taken out. Like, it's something they'd been trying to do for a while, and this, you know, app shows up outta nowhere and jumps to the top of the charts.
[00:39:34] Paul Roetzer: So I think they're smart, maybe, to look out ahead and say, okay, our play probably isn't gonna be a top-three, you know, gen AI app. It's gonna be, let's get into enterprises, let's do vertical solutions, let's focus on where we can kind of build a moat. And I think that's, you know, probably the right play for them.
[00:39:51] Paul Roetzer: And it seems like it's working so far on their revenue growth. Now, keep in mind also, that's like one-twelfth the size of OpenAI. If I'm not mistaken, OpenAI [00:40:00] revenue this year is gonna be like $12 billion or something like that. So just keep it in context. Like, these are big numbers, but they're nothing compared to where OpenAI is.
[00:40:09] Mike Kaput: Yeah, the market is much bigger. We are used to orders of magnitude larger than previous startup numbers here.
[00:40:19] It Turns Out That Gemini Can Remove Image Watermarks
[00:40:19] Mike Kaput: Internet users have found a potentially problematic feature of Google Gemini. Apparently, it can do a really good job of removing watermarks from images. A user on Reddit posted several convincing examples of running images with watermarks from sites like Shutterstock through Google Gemini and asking it to remove those watermarks, and it appears to have done that almost flawlessly.
[00:40:46] Mike Kaput: So users on X then went ahead and tested and recreated the same functionality. That included one prominent poster named Deedy, who is a prominent venture capitalist and a former Googler. [00:41:00] He was talking about, hey, look what this can do, look at the examples of getting it to remove watermarks. And interestingly, Ed Newton-Rex, who we have talked about many times, a former VP at Stability AI and a vocal critic of how AI companies violate copyright,
[00:41:16] Mike Kaput: responded to Deedy's post, noting that the function you're advertising, removing a watermark that contains copyright info, is illegal under US law. So Paul, obviously removing watermarks is not great, and it sounds like it may also be illegal. It's obviously not something hard-coded into Gemini, but something it can do. There's no way this feature stays in Gemini, right?
[00:41:43] Paul Roetzer: No, I mean, Google's gonna have to take it out 'cause they're Google, but that doesn't mean someone's not gonna build an open-source version of this tomorrow that does the exact same thing. It's a game of whack-a-mole. Like, if you're new to this stuff, you have to understand these [00:42:00] models aren't hand-coded to do or not do something.
[00:42:04] Paul Roetzer: These aren't deterministic models where these AI researchers at OpenAI or Google are sitting there saying, okay, you're now able to, you know, extract watermarks when someone prompts this. Like, take the watermark out. That's not how it works,
[00:42:15] Mike Kaput: right?
[00:42:16] Paul Roetzer: They just train these things and then they come out and they can and can't do things.
[00:42:21] Paul Roetzer: And if that wasn't something on the testing agenda before releasing the model, the researchers may not even be aware it can do that thing. They're just training it to be able to edit images and all these things. And then all of a sudden, somehow in its training, it learns what watermarks are, and it learns how to extract them and replace the background to make it look like there was never anything there.
[00:42:42] Paul Roetzer: Like, they didn't teach it to do this. It just does it. It's an emergent ability. And so it comes out in the world, somebody finds it, and then they gotta go and figure out how to get it to stop doing it. The way you get it to stop doing it is you basically go in and say, don't do this. Like, in human words, you tell the [00:43:00] model, stop doing the thing you're doing, and if someone asks you to do it, don't do it.
[00:43:05] Paul Roetzer: Like, that's how you get it to stop. You can't go back and retrain it so it doesn't do watermarks. That's not how it works. So will Google remove the ability? Probably. They'll probably update the system instructions so it won't do the thing that they know is illegal and that they could get sued for.
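As a rough sketch of what updating system instructions looks like in practice, here is a minimal, hypothetical chat request. The model name, message format, and refusal wording are illustrative assumptions, not Google's actual guardrail:

```python
# Hypothetical guardrail added via system instructions rather than retraining.
# The model weights stay the same; only the instructions wrapped around every
# request change.
request = {
    "model": "image-editing-model",  # placeholder name
    "messages": [
        {
            "role": "system",
            "content": (
                "You are an image-editing assistant. If a user asks you to "
                "remove a watermark, logo, or copyright notice from an image, "
                "refuse and explain that doing so may violate copyright law."
            ),
        },
        {"role": "user", "content": "Remove the watermark from this photo."},
    ],
}
```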
[00:43:22] Paul Roetzer: But someone's gonna put, you know, a fork of some open-source model on Hugging Face tomorrow, and you're gonna be able to remove watermarks. And, like, what do you do now if you're a photography company that depends on these for your livelihood? I don't know. And is, like, xAI gonna care?
[00:43:41] Paul Roetzer: Like, my guess is Grok could probably do the same thing. Is Elon Musk gonna go in and, like, have his team update the system instructions? I doubt it. I really don't think Elon cares if he gets sued over watermarks being removed from images. It's probably pretty low on his list of things to care about right now.
[00:43:58] Paul Roetzer: So welcome to the [00:44:00] new world of creativity. Like, this is what it is. You and I don't endorse it, we by no means condone this. And I agree Google should remove it, because they're Google and they should be held to a higher standard, but that doesn't mean anybody else is gonna hold themselves to that same standard.
[00:44:14] Paul Roetzer: So we are gonna see this stuff happening all the time.
[00:44:20] Mike Kaput: Buckle up.
[00:44:20] Paul Roetzer: Yeah. And I don't know, Shutterstock and Getty and, like, they better have a big war chest of dollars to be suing people, because they're gonna have lots of lawsuits going.
[00:44:32] Google Research on New Way to Scale AI
[00:44:32] Mike Kaput: Next up, some new research from Google seems to suggest a way to improve the performance of AI models on complex tasks without using fundamentally better reasoning algorithms.
[00:44:44] Mike Kaput: So this study basically looks at how AI models perform when tasked with solving challenging problems by randomly generating a large number of possible solutions and then verifying their own work to select the best answer. So [00:45:00] surprisingly, the researchers found that even without any type of advanced reasoning capabilities, models like Gemini 1.5 could match and even surpass state-of-the-art reasoning models like o1, simply by generating around 200 random answers and then carefully self-selecting the most accurate one.
[00:45:22] Mike Kaput: Now it turns out this act of verification becomes easier the more candidate solutions you generate. So, with more solutions, the model is increasingly likely to produce at least one rigorous and clearly explained correct answer, which stands out distinctly from incorrect ones. So this discovery kind of highlights a key point here.
[00:45:42] Mike Kaput: As AI continues to scale up, verification actually becomes more effective, not just because the models get smarter, but in this case simply because searching through more answers makes the correct solutions easier to identify. So the whole idea here, [00:46:00] regardless of the technical ins and outs, is that it appears to be a way to dramatically improve model performance and scale it up without inventing a fundamentally better reasoning algorithm.
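To make that loop concrete, here is a minimal sketch of the sample-then-verify idea in Python. This is an illustration only: the `llm` client, its `generate` method, and the prompt wording are hypothetical placeholders, not code from the paper.

```python
# Minimal sketch of the sample-then-verify approach described above.
# The `llm.generate` client and the prompts are hypothetical stand-ins,
# not the researchers' actual code.

def sample_and_verify(llm, problem: str, n_candidates: int = 200) -> str:
    """Generate many candidate answers, score each one with the model
    itself, and return the best-scoring candidate."""
    # Sample: draw candidates independently at a nonzero temperature
    # so the solutions actually differ from one another.
    candidates = [
        llm.generate(f"Solve this problem:\n{problem}", temperature=0.9)
        for _ in range(n_candidates)
    ]

    def verify(answer: str) -> float:
        # Scrutinize: ask the model to judge correctness and rigor,
        # replying with a single number between 0 and 1.
        judgment = llm.generate(
            f"Problem: {problem}\nCandidate answer: {answer}\n"
            "Rate the answer's correctness from 0 to 1. Reply with only the number."
        )
        try:
            return float(judgment.strip())
        except ValueError:
            return 0.0  # treat unparseable judgments as failed verifications

    # Scale: the more candidates you draw, the more likely at least one
    # rigorous, clearly explained answer stands out to the verifier.
    return max(candidates, key=verify)
```

The design point worth noticing is that the same model plays both roles; the finding described above is that the verifier's job actually gets easier as the candidate pool grows.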
[00:46:13] Mike Kaput: So Paul, we obviously kind of need to see how this plays out, but it does seem to suggest there's plenty of room left to run in improving the performance of even existing models without any kind of fundamental breakthrough.
[00:46:27] Paul Roetzer: Yeah. It sounds really technical, and if it was hard to follow this at all, here's the basic premise.
[00:46:34] Paul Roetzer: What we knew a year ago was we could build bigger data centers with more Nvidia chips and we could spend more money and give them more data, and they got smarter. Like that was the original scaling law. Just keep buying more Nvidia chips, keep stealing more copyrighted data, feed it to the thing, and it just gets smarter, more generally capable.
[00:46:53] Paul Roetzer: Then we found out in September of last year about this thing called test-time compute, which is, like, at inference, when you and I [00:47:00] use ChatGPT or Google Gemini, give it time to think and it gets smarter. That's another scaling law. Well, there's another path, which is just make the algorithms smarter. That can be done through different things, like we are seeing here.
[00:47:14] Paul Roetzer: It can be done through, like, retrieval. It can be done through memory, context windows. There's all these different variables that the different AI labs are making bets on, like connecting it to other tools, things like that, where we can have other ways to scale the intelligence by playing around with the algorithms themselves, without having to buy more Nvidia chips or build bigger data centers.
[00:47:35] Paul Roetzer: So what's happening is the big labs, OpenAI, Google, Meta, the others, they're gonna keep betting on the build: more data centers, buy more Nvidia chips, train longer on more data. That's one scaling law. They're gonna absolutely push the reasoning one, which is give it time to think. And then they're all playing in the more efficient algorithm one.
[00:47:56] Paul Roetzer: That's where, like, Cohere, Writer, the [00:48:00] ones who aren't gonna spend the billions on the training runs, they're gonna try and find efficiencies. It's what DeepSeek got recognized for doing. Basically, they found a smarter way to do the algorithm. And so what's happening is everyone's trying to find these different scaling laws that are gonna unlock more intelligence, and do it as efficiently as possible.
[00:48:17] Paul Roetzer: Some companies have the resources to keep doing the big things while simultaneously doing the smaller things. And then some labs only have the resources to do the smaller things that drive efficiency. So that's what's happening here. It's a cool early look at a possible path. And now what's gonna happen is other labs will try and reproduce this and see if they can push on this too.
[00:48:42] New Research Shows How Generative AI Changes Performance in Real-World Corporate Work
[00:48:42] Mike Kaput: So what happens when AI acts as a true teammate in a real corporate environment? This is a question that AI expert Ethan Mollick and his research team set out to answer in a new study called The Cybernetic Teammate. This study [00:49:00] involved nearly 800 professionals at consumer giant Procter & Gamble.
[00:49:06] Mike Kaput: In it, Mollick and researchers from Harvard and the University of Pennsylvania tested the impact of AI when it was used as a virtual teammate. So participants were tasked with real-world product development challenges, things like designing packaging, retail strategies, new products, which mirrored actual P&G workflows.
[00:49:28] Mike Kaput: They were then randomly assigned either to work alone, collaborate with another human, or collaborate with advanced AI models like GPT-4. What they found was that without AI, human teams predictably outperformed individuals, but individuals working solo with AI assistance performed just as well as human-only teams.
[00:49:53] Mike Kaput: They produced ideas that were longer, more detailed, and developed in significantly less time. [00:50:00] Even more impressive, teams of two people working with AI created the best outcomes overall, especially when it came to exceptional, top-tier ideas. Another fascinating discovery was how AI erased traditional professional boundaries.
[00:50:16] Mike Kaput: Normally, technical specialists would propose technical solutions and commercial specialists would propose market-focused ones, but with AI assistance, these distinctions appeared to vanish. Professionals from both groups created solutions that integrated technical and commercial perspectives, and even less experienced employees performed at expert levels when paired with AI, which effectively democratized this kind of specialized knowledge.
[00:50:45] Mike Kaput: Last but not least, the researchers found that AI didn't just enhance productivity. It improved people's emotional experiences at work. Participants using AI reported higher levels of excitement and enthusiasm and lower levels of [00:51:00] stress and frustration compared to those without AI. So Paul, there's obviously a lot of worry, a lot of doom and gloom out there about AI's impact on work, but this seems to actually paint kind of a positive near-term picture of AI's use for some professionals.
[00:51:17] Mike Kaput: It sounds like it can make you better at a lot of different types of work, help you perform even more expertly and do more while being more excited about your work. What did you think of this research?
[00:51:30] Paul Roetzer: Yeah, you and I have talked a lot lately, Mike, about how these standard evaluations that are used by these labs are not practical for the average person, the average business leader, because they're testing at, like, PhD levels across these hard tasks.
[00:51:43] Paul Roetzer: And at the end of the day, it's a very small percentage of what happens in business. So much is just getting work done, running campaigns, doing the tasks that make up a job. So I love these very practical setups: have actual users, give some AI, give some none, teach some how to use it, don't teach others. Like, this [00:52:00] is much more realistic about what's gonna happen in a corporate environment, in a business.
[00:52:05] Paul Roetzer: So, I caught a couple of additional excerpts here that I think are really important. They said most knowledge work isn't purely an individual activity, you know, very true. It happens in groups and teams. Teams aren't just collections of individuals. They provide critical benefits that individuals alone typically can't, including better performance, sharing of expertise, and social connections.
[00:52:24] Paul Roetzer: So what happens when AI acts as a teammate? So this is this whole, like, copilot idea. You know, I still think the best name anybody's come up with is Microsoft Copilot, right? Because that's really how it should be thought of, as, like, an assistant that's there to work with you. So everyone assigned to the AI condition was given a training session and a set of prompts, because
[00:52:43] Paul Roetzer: in the last study Mollick was involved in, like a year or so ago, they didn't train them how to use GPT-4. It was, yeah, a consulting firm, if I remember correctly. Yeah, Boston Consulting Group, maybe. That sounds right. So they gave it to, like, 60 people, and they didn't teach them how to use it.
[00:52:58] Paul Roetzer: So, interestingly, in this instance they [00:53:00] actually trained them, and then they measured outcomes across dimensions including quality, as determined by two expert judges, and time spent. And then, as you referred to, the emotional side: what were the emotional responses? And then their big surprise was that when they looked at AI-enabled participants, individuals working with AI performed just as well as teams.
[00:53:17] Paul Roetzer: So an individual with, you know, a copilot worked just as well as a team. And it suggests that AI effectively replicated the performance benefits of having a human teammate. One person with AI could match what previously took two people collaborating. So I think it's interesting. I would suggest to people: think about running similar experiments in your own business.
[00:53:41] Paul Roetzer: Like, if you wanna prove the business value of AI, run a pilot project of your own like this, where you take people on your marketing team, your sales team, your customer success team, whatever. Have people do the job without AI, have an individual do it with AI, and then have two people do it with AI.
[00:53:57] Paul Roetzer: . Like run these things you can prove out [00:54:00] yourself. The business case for this. And Mike, I was thinking as, as I was kind of scanning through this before we got got on today, this is so reminiscent of what we have seen in our workshops that you and I run. Yeah. So we run, an applied AI workshop with businesses.
[00:54:12] Paul Roetzer: We do it in a one-to-many model. I think at MAICON last year we had like 150 people in each of these workshops. So the applied AI one teaches a use case model, where we try and help people find use cases to pilot in their organization, in their work. And then there's a strategic leader one that teaches, like, how to identify problems that can be solved more intelligently with AI.
[00:54:30] Paul Roetzer: So we have run these workshops dozens of times. Last year we created JobsGPT and CampaignsGPT, which we'll put the links to; they're free custom GPTs. And then I created ProblemsGPT for the strategic leader one. The productivity of those workshops was mind-boggling. We ran them without those GPTs for years, and then, giving people a GPT to help them, the output of what people could do in three hours was [00:55:00] crazy.
[00:55:00] Paul Roetzer: Yeah. And, like, we just created these GPTs and gave them to them. I wasn't even sure how they would use them. And at the end of three hours you're like, oh my God, you've already built plans for, like, five problems. Most of the time you just hoped to leave those workshops with a list of things to explore.
[00:55:15] Paul Roetzer: These people were like 10 weeks into that process. They'd already not only identified and prioritized, they'd built plans for each of these things. Right. And with the co-CEO, I've mentioned, you know, I've built my co-CEO and I use that thing like a dozen times a day. And so I think that that's the real key.
[00:55:33] Paul Roetzer: And then the other thing I wanted to mention is this idea of, like, teammates. I hadn't thought about this too deeply, but this made me think about it a little bit more. This idea that if you regularly work with, say, IT or legal or procurement or HR, and you have to prepare for meetings with them and figure out how to explain things to them...
[00:55:50] Paul Roetzer: Create a custom GPT of them. Like, so Kathy McPhillips, our chief growth officer, did this for me. She has her own, like, co-CEO, so that when she needs [00:56:00] to present something to me, she'll apparently work with it to figure out, okay, what questions is Paul gonna ask me when I deliver this thing to him?
[00:56:07] Paul Roetzer: So this whole idea of creating, like, your coworkers, in a weird way. Yeah. Where you can practice with them and talk to them and get advice from them. I don't know, it really presents some interesting opportunities for how people could work in the future with these things as true
[00:56:24] Paul Roetzer: enhancements to, not replacements of, anything. It's just helping you do your job better, more efficiently, and enjoy your job more. That's it. I dunno, it's really exciting research. Like, I'd love to see more things like this run across different industries and within companies.
[00:56:39] Mike Kaput: Yeah. And same idea there.
[00:56:40] Mike Kaput: You can also do this for just different personality types, right? Like, a lot of companies do Myers-Briggs or Enneagram or whatever. So if you have any of that data, or can suspect, like, oh, I have a coworker who probably has this personality, it's super helpful to communicate with them in language that they might prefer, or
[00:56:59] Paul Roetzer: a hundred percent. [00:57:00] Or, Mike, go back to our agency days.
[00:57:02] Paul Roetzer: Imagine if you created a persona of, like, your client contact from your agency. Hundred percent. Like, okay, I'm gonna send this to this client. Here's the feedback I've gotten the last five times we did something like this. Analyze this the way we think the client's going to. And yeah, I mean, it could be so valuable.
[00:57:18] The Time Horizon of Tasks AI Can Handle Is Doubling Fast
[00:57:18] Mike Kaput: Another new paper that's out finds that the length of complex tasks that AI agents can complete is doubling every seven months. So this is a key finding in a research paper from the Model Evaluation and Threat Research organization, METR, pronounced "meter," and it's titled Measuring AI Ability to Complete Long Tasks.
[00:57:40] Mike Kaput: So what this does is it looks at a diverse set of software and reasoning tasks and records the time needed to complete each one for humans with the appropriate expertise. So they find out how long the task takes when humans do it, and then they find that this is actually predictive of the model's [00:58:00] success on that task.
[00:58:02] Mike Kaput: So for instance, current models have almost a hundred percent success rates on tasks that take humans less than four minutes, but succeed less than 10% of the time on tasks taking more than around four hours. So what the researchers do is they plot out the length of tasks models have been able to complete at a 50% success rate.
[00:58:26] Mike Kaput: And what this does is it allows them to chart trends over the last six years of model performance improvement and make some forecasts based on that. So the way they conclude this is actually saying, quote, if the trend of the past six years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out months-long projects.
[00:58:52] Mike Kaput: This would come with enormous stakes, both in terms of potential benefits and potential risks. So Paul, [00:59:00] this paper is generating a lot of buzz in some AI circles, and it seems like, if this is anywhere close to right, that buzz is justified. This is a pretty big deal if we end up directionally going this route.
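For a sense of what a seven-month doubling actually implies, here is a quick back-of-the-envelope extrapolation. The one-hour starting horizon is an illustrative round number, not METR's exact estimate; only the seven-month doubling period comes from the paper.

```python
# Back-of-the-envelope extrapolation of METR's doubling trend.
# Assumes a ~1-hour task horizon today; that starting point is an
# illustrative round number, not METR's exact figure.

horizon_hours = 1.0       # task length AI handles at 50% success today (assumed)
doubling_months = 7.0     # doubling period reported by METR

for months_out in (12, 24, 36, 48, 60):
    projected = horizon_hours * 2 ** (months_out / doubling_months)
    print(f"{months_out:>2} months out: ~{projected:,.0f}-hour task horizon")

# After 5 years, 2 ** (60 / 7) is roughly 380x today's horizon, which is
# how a one-hour task horizon compounds into multi-week projects.
```

That compounding is the whole story: even a modest horizon today turns into multi-week projects within a few years if the trend holds.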
[00:59:15] Paul Roetzer: This was blowing up, like, Thursday, Friday of last week, I think it was. In my AI feed, this was all anyone was tweeting and talking about. So it's a very attention-grabbing thesis: the length of complex tasks that AI agents can complete is doubling every seven months. That is a very hard concept to wrap your head around when you dig into it a little bit.
[00:59:38] Paul Roetzer: They're very forthright that this is kind of fuzzy, that there's a lot of variables that could make this research wrong, that they're kind of sharing this early in the process. But they also say, listen, we could be off by a factor of 10x, an order of magnitude we could be wrong by, and it's still dramatically significant to [01:00:00] work and the economy and society.
[01:00:02] Paul Roetzer: So I would expect that other research labs are gonna pick up on this research pretty fast and try and play this out themselves, like any other kind of potential breakthrough. You want other labs to sort of reproduce the results or build on the research. So I'll just highlight a few key excerpts here from Elizabeth Barnes, who is the founder and co-CEO of METR.
[01:00:26] Paul Roetzer: So she tweeted this, and we'll put the link to the thread in the show notes. She said: currently, understanding how AI capabilities are changing over time, or even just what the capabilities of current systems actually are, is pretty confusing. Models are superhuman in many ways, but often surprisingly useless in practice.
[01:00:42] Paul Roetzer: And this actually goes back to what we just talked about with Ethan Mollick's research, right? It's like, we need practical guidance here. She went on to say: key takeaway, in my opinion, even if you think current models are rubbish and our time horizon numbers are off by 10x, it's hard to avoid the conclusion that in less [01:01:00] than 10 years we'll see AI agents that are wildly better than current systems and can complete day- to month-long projects independently.
[01:01:11] Paul Roetzer: Agents are strong at things like knowledge or reasoning ability that traditional benchmarks tend to measure, but can't reliably perform diverse tasks of any substantial length. And this goes back to the argument about when we are getting to AGI, because you would assume if we achieve AGI, this is kind of solved.
[01:01:26] Paul Roetzer: And I think that's part of what the research is alluding to. She goes on to say: our best results indicate this won't be a limitation for long. There's a clear trend of rapid increase in capabilities, with the length of tasks models can perform doubling around every seven months. Now, keep in mind, the tasks they're talking about here were largely, like, coding tasks and research tasks.
[01:01:45] Paul Roetzer: They were not, you know, doing your marketing work for you or being a CEO. Like, they weren't getting into those. These are very specific, technical tasks; cybersecurity, I think, was another one they looked at. So she says: extrapolating this suggests that within about five years, [01:02:00] we will have generalist AI systems that can autonomously complete basically any software or research engineering task that a human professional could do in a few days,
[01:02:08] Paul Roetzer: as well as a non-trivial fraction of multi-year projects, with no human assistance or task-specific adaptations required. Meaning: I want you to go do this project that would've taken me a month, and it's gonna come back 30 minutes later and have done the thing better than you would've done it yourself.
[01:02:26] Paul Roetzer: That's what they're saying.
[01:02:27] Mike Kaput: Yeah.
[01:02:28] Paul Roetzer: However: there are significant limitations to both the theoretical methodology and the data we were able to collect in practice. Some of these are reasons to doubt the overall framing, while others point to ways we may be overestimating or underestimating current or future model capabilities.
[01:02:43] Paul Roetzer: So they know there's some limitations, but they're also saying it could cut both ways. Like, we may be off in the other direction by three years; this might happen in two years. We need to think about this more deeply. And she says it's unclear how to interpret time needed for humans, given that this varies wildly [01:03:00] between different people and is highly sensitive to expertise, existing context, and experience with similar tasks.
[01:03:05] Paul Roetzer: For short tasks especially, it makes a big difference whether time to get set up and familiarized with the problem is counted as part of the task or not. So basically they're saying, like, humans have different levels of expertise. Which one are we measuring against here? Is it the average human? Is it the expert human?
[01:03:20] Paul Roetzer: Which goes back to my definition of AGI needing to include something like: is it the average human we are trying to outproduce, or is it the expert level? And then the last point I'll make that she tweeted: we have tried to operationalize the reference human as a new-hire contractor or consultant who has no prior knowledge or experience with this particular task or research question, but has all the relevant background knowledge and is familiar with any core frameworks, tools, and techniques needed.
[01:03:49] Paul Roetzer: So again, when you think about this research, a lot of people just take these headlines as, like, oh my God, the world's ending, every seven months... we are screwed in, like, three years. It's like, no, no, no. There's, like, a hundred variables here as to [01:04:00] whether or not this is true.
[01:04:01] Paul Roetzer: They're doing a great job of actually stepping back and saying, listen, we may be completely wrong here, but here's all the things we are trying to solve for. And so this is the kind of stuff you need to keep in mind when you're evaluating this stuff for your own business, for your own career.
[01:04:15] Paul Roetzer: It's not binary. Like, there's a long spectrum for everything we are talking about. And it's why I caution people so often that if you're hearing quote-unquote AI experts who so strongly believe something, who are a hundred percent confident this is gonna happen, they're probably full of it. There is no a hundred percent confidence.
[01:04:33] Paul Roetzer: Like they, they, there is no a hundred percent confidence. So even when I talk about AGI, like, I'm always saying like, I don't know, 50 50. Like I feel like we are probably gonna get there. And so I always try and provide. Probabilities of like my confidence level, but I also accept with humility, I may not be even close to right on this.
[01:04:52] Paul Roetzer: And that's why I always try and give these confidence levels. So anytime you hear anyone in AI, I don't even care if they're the head of one of these AI labs, [01:05:00] say with a hundred percent confidence, this is what it looks like 12 to 24 months from now, I would find someone else to listen to. Basically, nobody can talk with that level of confidence about what's gonna happen right now.
[01:05:14] Apple Comes Clean on Siri AI Delays
[01:05:14] Mike Kaput: So next up, we have some more confirmation of what we have increasingly suspected, which is that Apple has dropped the ball on making Siri smarter with AI. So Siri, as we have talked about a few weeks in a row, has faced significant delays in rolling out more advanced conversational features powered by AI.
[01:05:36] Mike Kaput: And these features are delayed until an unspecified future date. Bloomberg has previously reported that some people within Apple's AI division believe that Siri, the true modernized, conversational version of it, won't reach consumers until as late as 2027. But now Bloomberg is reporting on an internal meeting at Apple where the top executive overseeing Siri said the delays [01:06:00] were, quote, ugly and embarrassing.
[01:06:02] Mike Kaput: During the meeting, Apple exec Robby Walker seemed to indicate that it's unclear internally when the updates to Siri will actually launch. He revealed that the technology is currently only functioning correctly between two-thirds and 80% of the time, and it also sounds like overly aggressive marketing was a problem.
[01:06:23] Mike Kaput: According to Bloomberg, quote: to make matters worse, Walker said, Apple's marketing communications department wanted to promote the enhancements to Siri. Despite not being ready, the capabilities were included in a series of marketing campaigns and TV commercials starting last year. So Paul, this picture just keeps getting bleaker.
[01:06:44] Mike Kaput: It sounds like there are a lot of problems here.
[01:06:47] Paul Roetzer: The ad one, they undersold that so hard. Apple featured it like it was the whole ad, like a hundred million dollars of ads featuring Apple Intelligence. Yep. And I remember talking [01:07:00] about this on the show at the time. I'm like, it's not what they're saying it is.
[01:07:03] Paul Roetzer: And it's not going to be anytime soon. So that article you were talking about came out on March 14th, and then on March 20th, Mark Gurman from Bloomberg, who, if you wanna follow what's happening at Apple, follow that guy on X, he's inside everything, he actually had another article saying, okay, they're actually making a major change, which Apple doesn't do at the leadership level.
[01:07:23] Paul Roetzer: Like, they're very, very stable from a leadership perspective. They don't make knee-jerk reaction changes. But his article said Apple Inc. is undergoing a rare shakeup of its executive ranks, aiming to get its artificial intelligence efforts back on track after months of delays and stumbles.
[01:07:37] Paul Roetzer: According to people familiar with the situation, CEO Tim Cook has lost confidence in the ability of AI head John Giannandrea, I dunno if I'm saying that right, to execute on product development. So he's moving over another top executive, Vision Pro creator Mike Rockwell, into a new role. Rockwell will be in charge of the Siri virtual assistant, according to the people, who asked not to be identified. Which is also interesting, [01:08:00] because Apple doesn't leak much either.
[01:08:01] Paul Roetzer: . So somebody wanted this out. Rockwell will report to Software Chief Craig Feder Fedi, removing Sury completely from Gia DE's command. Apple announced the changes to employees on Thursday following Bloomberg's News initial report. So, yeah, Jacobs, I mean, they know they gotta figure this out, but it doesn't seem like they really have a clear plan yet of how They are gonna do that.
[01:08:25] Paul Roetzer: And this impacts other product lines. Like, they had some other ideas for, like, in-home devices that I think are now getting pushback because of this. It probably impacts Vision Pro, which, you know, has sort of been lagging since it came out, 'cause Siri was a key part of that. So Siri was, like, intended to be the core of their Apple Intelligence strategy.
[01:08:45] Paul Roetzer: And if it's not gonna be anything until 2027, they got some major problems there.
[01:08:51] OpenAI Agents May Threaten Consumer Apps
[01:08:51] Mike Kaput: We have talked before about OpenAI's AI-powered agent, Operator, and it is now raising some concerns [01:09:00] among popular consumer apps like DoorDash, Uber, and Instacart. Operator, which launched earlier this year, can autonomously browse websites to perform tasks such as shopping, planning trips, or booking appointments on behalf of users.
[01:09:16] Mike Kaput: But in addition to doing things for you, this type of AI agent could also disrupt traditional consumer apps, according to The Information. DoorDash, for instance, which initially partnered with OpenAI for Operator's launch, actually expressed concerns privately. They were worried that if AI bots interact with their website instead of human users, their ad revenue, derived from users actually visiting the site, could take a significant hit. And they're not alone.
[01:09:47] Mike Kaput: Other consumer platforms like Uber and Instacart, also Operator launch partners, face similar issues. AI agents could effectively insert themselves between businesses and customers. [01:10:00] This positions OpenAI and others with agents as powerful intermediaries, and that puts consumer apps in a difficult position: do they block AI agents like Operator, which Reddit has done, or do they embrace them and risk becoming overly reliant on these companies?
[01:10:19] Mike Kaput: So Paul, it's still really, really early. We'll see how quickly, if at all, agents reach their true potential. But if they do, it really seems like we need to get creative in considering their full implications for these types of businesses, doesn't it?
[01:10:36] Paul Roetzer: Yeah. This is so illustrative of all the unknowns ahead.
[01:10:39] Paul Roetzer: So I mean, if you're in SEO or analytics in any way, you know, we talked about the impact of AI Overviews earlier. Like, you gotta be scenario planning. You can't be waiting 18 months to find out. You gotta go through scenarios of, like, okay, well, what happens if... And so in this instance it's like, well, what happens if AI agents are 50% of web traffic in two [01:11:00] years?
[01:11:00] Paul Roetzer: Certainly not an unrealistic thing, especially, you know, in different industries. Or if it's 50% of, like, the traffic to your app, things like that. You need to be thinking about that. And so, like, I had announced at MAICON last year that we were gonna form a marketing AI industry council for this exact sort of thing.
[01:11:18] Paul Roetzer: We have actually done that, in partnership with Google Cloud. It's not gonna be a big public thing for a while; we are not gonna talk too much about what's going on. But basically what we have done is brought together a bunch of amazing marketing and AI industry leaders to try and reimagine the future of marketing and to ask these exact questions.
[01:11:35] Paul Roetzer: So the questions I'd outlined at MAICON last year were: How will increasingly advanced AI models impact the marketing profession? How will consumer information consumption and buying behaviors change? How will consumer changes impact search, advertising, publishing? How will the use of AI agents affect website and app design and user experience, and the business models of the companies that create those things?
[01:11:56] Paul Roetzer: How will AI-related copyright and IP issues affect marketers? How will [01:12:00] generative AI affect creative work and creativity? How's it gonna affect jobs, agencies? We have no answers to these things. And again, this goes back to what I was saying earlier about having some humility. Like, if you're in one of these areas and you think you know the answer to this, you probably don't.
[01:12:15] Paul Roetzer: And so my whole thing right now is we need to be asking really smart questions, and then we need to accept that the future may look nothing like what we assume it's going to be. That's the problem I see again and again: in too many businesses, too many industries right now, people aren't even asking the hard questions yet.
[01:12:31] Paul Roetzer: . Like, they don't understand enough about the current and near term capabilities of the models to ask the hard questions about their own businesses. And that's, that's scary to me, like that we may be two, three years out before a lot of these industries start asking the hard questions. And so with this marketing AIndustry council are things like, well, let's go start asking these hardest questions in marketing, at least.
[01:12:53] Paul Roetzer: So yeah, I think again, it's just illustrative of: take a step back. Like, if you listen to the show a lot, take a step back and think about your [01:13:00] own business model, the thing you do for a living, the thing that generates revenue for your business. And ask yourself, is that gonna look the same two years from now?
[01:13:08] Paul Roetzer: Probably not. And in some industries the change is gonna be pretty dramatic. I would just be the one who's asking the hard questions right now, and start really thinking about different scenarios. Like, don't be closed-minded. Don't think you know the answer. Because that's what I see all the time with, like, LinkedIn comments to me when I'm talking about AGI.
[01:13:28] Paul Roetzer: It's like really? Like how could you possibly be that confident to tell me it's not gonna happen? Even if you assign a 10% probability, it's probably still worth exploring the possibility. So I don't know. I can't, I've said that many times on this. So like, even in the techno optimist realm, it's like, okay, everything's just gonna work out like that is, that is the only possible path is everything just works out and it's a future of abundance and nothing goes wrong.
[01:13:51] Paul Roetzer: Like, really? Do you actually believe that to be true? Maybe you do. And, I dunno, good for you if you live with that much confidence in yourself and [01:14:00] optimism about the future.
[01:14:03] Powering the AI Revolution
[01:14:03] Mike Kaput: Next up, some new reporting shows the sheer scale of the infrastructure transformation that is happening thanks to the need to power AI.
[01:14:12] Mike Kaput: So first, Crusoe, a startup backed by Nvidia, has secured a landmark power deal that could help solve one of AI's biggest bottlenecks: finding enough energy to run massive AI data centers. So in partnership with a major gas company, Crusoe will gain access to 4.5 gigawatts of power by 2027, which is an extraordinary level of capacity, capable of powering millions of AI chips and surpassing the entire global footprint of some cloud businesses.
[01:14:45] Mike Kaput: Today, Crusoe aims to sell this data center capacity to major players like OpenAI, Google, and Meta, all of whom are scrambling to keep up with soaring demand for computational resources. Second, the New York Times did a related [01:15:00] deep dive into just how much power is going to be required by these companies, and the energy demands are pretty staggering.
[01:15:07] Mike Kaput: They say that data center power usage could triple by 2028, driven by AI demand. To put that into context, they say OpenAI's planned facilities alone would use more electricity than 3 million American households combined. And Google's AI facilities are similarly power hungry, prompting them to adopt new cooling methods to manage the intense heat.
[01:15:32] Mike Kaput: Microsoft is even rebooting nuclear power plants to help supply its growing energy needs. So this all points to a pretty dramatic restructuring of how tech infrastructure is built and powered. PE firms, investment firms, they're pouring billions into new energy solutions tailored specifically for AI.
[01:15:52] Mike Kaput: This is all happening really fast. As part of the Times reporting, Google CEO Sundar Pichai said, quote, what was probably going to [01:16:00] happen over the next decade has been compressed into a period of just two years. Now, Paul, few things seem like more of a sure bet than the fact that we are building more of these data centers.
[01:16:13] Paul Roetzer: Yeah, and just to put this in perspective. So you said they will gain access to 4.5 gigawatts by 2027. How much is that? Is that significant? Well, I'm gonna rely on AI Overviews here from Google, and hopefully they're accurate. So the typical small data center consumes 1 to 5 megawatts of power. A large or hyperscale data center, which is like a hundred thousand square feet to 7 million square feet...
[01:16:41] Paul Roetzer: Think about, like, what Elon Musk recently built in Memphis. Yeah. That consumes 20 to 100 megawatts of power, roughly. And then the one that really got me: in 2023, data centers across the globe consumed 7.4 gigawatts of power, which was [01:17:00] up from 4.9 in 2022. So they're basically bringing online the equivalent of all the power consumed globally by data centers in 2022.
[01:17:13] Paul Roetzer: That's a pretty wild number. Yeah. And then, I don't know, to play out what we talked about earlier: I'm looking at my AI Overview, it's got citations next to each of these things, and I'm looking at my list of citations. I'm not clicking on any of those at the moment. I would probably want to go through and click through and verify these facts and stuff. But yeah, just for context for people, it's a lot.
[01:17:35] Paul Roetzer: 4.5 gigawatts is a lot.
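To put those numbers side by side, here's a quick sketch of the arithmetic, using the figures quoted above (which, per Paul's own caveat, come from an AI Overview and are unverified):

```python
# Rough scale check using the figures quoted above (unverified,
# per Paul's caveat about not clicking through the citations).

crusoe_deal_mw = 4_500       # 4.5 gigawatts, expressed in megawatts
hyperscale_high_mw = 100     # large/hyperscale data center, upper bound
global_dc_2022_mw = 4_900    # quoted global data center consumption, 2022

print(f"Equivalent hyperscale data centers: {crusoe_deal_mw / hyperscale_high_mw:.0f}")
print(f"Share of 2022 global data center power: {crusoe_deal_mw / global_dc_2022_mw:.0%}")
# -> roughly 45 hyperscale-sized facilities, and about 92% of what all
#    the world's data centers consumed in 2022, per those figures.
```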
[01:17:37] Mike Kaput: Yeah. It's gonna be a very interesting and strange future. Yeah.
[01:17:44] Google Deep Research Tips
[01:17:44] Mike Kaput: Next up, Google has made some announcements around its popular deep research tools. So two big things happened here: Deep Research from Google is now available to everyone, and you can use its Audio Overview feature in the tool as [01:18:00] well.
[01:18:00] Mike Kaput: So Audio Overviews were previously in NotebookLM. You can now use those on your Deep Research reports as well to get podcast-style AI hosts reading out a summary of your material. And what's really cool is that when they made these updates, Google released a bunch of tips to help you get the most out of the research.
[01:18:20] Mike Kaput: So we thought these were worth covering here, given how useful this tool has been for us and for our audience. These tips come straight from Aarush Selvan, who was involved in the creation of the tool at Google, and they include the following. So, one: decide whether or not you need Deep Research to begin with.
[01:18:38] Mike Kaput: He says Deep Research is really useful for stuff that requires lots of browsing and lots of tabs, not fast, immediate answers. That said, you should start with quick, simple questions. You don't need a long, extensive prompt to get great results. And from there, don't hesitate to ask follow-up questions.
[01:18:56] Mike Kaput: You can ask questions of the research itself; Gemini is just layered [01:19:00] over this to engage with the information. Or you can have Deep Research go back and research more to answer follow-ups. Now, he also recommends looking at the interesting links that Deep Research surfaces while it's working. You can actually do that in real time while it works.
[01:19:16] Mike Kaput: It's also really good at local searches and finding things in your immediate community. For instance, you could use it to plan a complex home project by finding local businesses, or to plan an event. And last but not least, obviously, go add an Audio Overview to your report; that generates that podcast-style discussion of all the stuff that Deep Research has produced for you.
[01:19:40] Mike Kaput: Now, Paul, that last bit is pretty cool because there are usually dozens of pages of research results from something like deep research. Like what did you make of these announcements?
[01:19:50] Paul Roetzer: Yeah, I mean, obviously I'm a huge fan of the deep research products. So, you know, if I had to stack the things that, since like 2022, have just been...
[01:19:59] Paul Roetzer: You [01:20:00] see 'em once and you can't imagine a future again where they don't exist. you know, I think ChatGPT moment like that where you just try like this is gonna change things. I think Notebook LM from Google, especially with the auto overview capabilities, is like a mind blowing moment for people who've never seen the technology before.
[01:20:17] Paul Roetzer: Deep research is another one where you use it and you just instantly understand the value proposition. You know, I think that's the thing: there are so many jobs where, if you just figured out how to use deep research from OpenAI and/or Google, figured out how to use NotebookLM and integrate it into your life, and figured out how to use ChatGPT or Gemini or Claude, that's enough.
[01:20:39] Paul Roetzer: Like, you could honestly change your whole career path, your whole business. Just go hard on those three things and find ways to infuse them into your workflow and the workflow of your teams. So, yeah, anytime you can get these really practical tips... that's why, you know, we'll talk a little bit later about a NotebookLM YouTube video that I'll [01:21:00] recommend.
[01:21:00] Paul Roetzer: I think anytime you get these, like, super practical ways of using these technologies, just take the few minutes and listen, because I think you can unlock so much value in your own career by doing these sorts of things.
[01:21:14] Other Product and Funding Updates
[01:21:14] Mike Kaput: Alright, Paul, we are gonna wrap up with some product and funding updates.
[01:21:18] Mike Kaput: I'm gonna run through a few, and then you've got one to kind of wrap things up for us this week. So first up, Google has been pretty busy, announcing a slew of updates. In addition to the deep research updates we just discussed, Gemini also now has personalization, which is a new experimental capability that connects Gemini directly with Google services like Search, Calendar, Notes, Tasks, and soon Google Photos.
[01:21:43] Mike Kaput: Gemini also now has Canvas, a new interactive workspace designed for collaborative content creation and real-time code editing. You can also access the Audio Overview feature in Gemini for your docs and uploaded files, not just your deep research reports. [01:22:00] Google DeepMind has also unveiled two new specialized models for robotics.
[01:22:06] Mike Kaput: So, built on Gemini 2.0, Gemini Robotics allows robots to understand, respond, and physically act in dynamic environments. And lastly, Google introduced Gemma 3, its latest generation of powerful yet lightweight AI models designed to run efficiently on single GPUs or TPUs. Perplexity is in early talks to raise new funding at a valuation of $18 billion.
[01:22:30] Mike Kaput: So last year alone, the company's valuation skyrocketed, tripling from $1 billion to $3 billion, and then tripling again several months later to around $9 billion. The latest discussions suggest Perplexity could raise between $500 million and $1 billion in new investment. The company currently boasts about $100 million in annual recurring revenue and claims more than 15 million active users.
[01:22:56] Mike Kaput: Generative AI startup Opus Clip has just raised $20 [01:23:00] million from SoftBank's Vision Fund 2, bringing its total valuation to $215 million. Based in San Francisco and founded in 2022, Opus Clip specializes in AI-powered short-form video editing. And finally, on my end, Zoom has announced that it is introducing new agentic AI features across its products.
[01:23:21] Mike Kaput: According to the company, its new agentic AI Companion will allow users to automate complex multi-step tasks through advanced reasoning, decision-making, memory, and action orchestration. For instance, AI Companion can now handle scheduling tasks quickly, generate video clips, assist with document creation, and execute customer self-service operations using virtual agents.
[01:23:47] Mike Kaput: Paul, that's all I got on my end. You wanna take us home? Yeah. The Zoom one's
[01:23:50] Paul Roetzer: interesting, Mike. Like, we are power users of Zoom, but we are very specific in our uses of Zoom. Yes. Like, we use it for our internal chat, we use it for webinars, and we use it for [01:24:00] meetings. It's so interesting. Like, I'll be curious.
[01:24:03] Paul Roetzer: Like, I have no intentions of testing any of these tools. That might change, but, like,
[01:24:09] Mike Kaput: Right.
[01:24:09] Paul Roetzer: Zoom's got an uphill battle, I would think. Like, this stuff might be awesome, but I think I've got things that do all of these already. Like, I don't know that I wanna use Zoom for that. I have this very narrow belief of, like, what Zoom is for.
[01:24:19] Paul Roetzer: Yeah, be interesting. I could
[01:24:21] Mike Kaput: be, I could be wrong, but I've noticed in our Zoom portal there's multiple new notifications about things like docs workflows, and it's like, have you clicked that? I just think I ignore it. I clicked them to make the notification go away, because I'm pretty sure they do everything we already do.
[01:24:35] Paul Roetzer: That's funny. Yeah. I don't know, it'll be interesting to watch. And then, like we talked about last year, their CEO's vision for, like, having your AI digital twins show up to meetings and things. It's like, yeah, I don't know. I'm not so sure I'm sold on the Zoom vision, but I love the tech for what we use it for.
[01:24:53] Paul Roetzer: It's great. Yeah. Okay. The one other thing I would add, Mike, is Tiago Forte, and we'll put a link [01:25:00] to this in the notes. He's a YouTuber, and he has this phenomenal, like, 32-minute video about NotebookLM. And like I was just saying with deep research, sometimes you just need that really hands-on, practical way to use something, and I thought it was great.
[01:25:13] Paul Roetzer: He went through the updates: Audio Overviews, expanded context windows, multimodal sources, the new interface to NotebookLM, and then NotebookLM Plus, like an overview. So if you're a NotebookLM user, it's a great refresher. If you've never tried it, it's a really good starter that will show you the value of it and does a nice job of explaining why.
[01:25:35] Paul Roetzer: So, another one worth checking out. And then I'll do a final reminder, Mike. So on Thursday the 27th, we are gonna drop the first episode of a new series. This is part of the Artificial Intelligence Show podcast; you don't need to go find a new podcast link or anything. It's gonna be a featured series within the podcast called The Road to AGI and Beyond.
[01:25:55] Paul Roetzer: The first episode is gonna be me sharing version [01:26:00] two of the AI timeline that I first debuted in March. So the goal with this timeline, and with the whole series, is to try and see around the corner and figure out what happens next, what it means, and what we can do about it.
[01:26:15] Paul Roetzer: Or at least, as I was talking about earlier, the possible outcomes. Because I even presented it that way; the original headline was an incomplete AI timeline. It's like, I don't know, but here's the things that seem like they're coming from these labs. And so we are gonna talk throughout this series, which is gonna feature interviews with AI experts.
[01:26:31] Paul Roetzer: It's gonna feature interviews with people, not just AI experts from the labs, but, like, experts on the economy, energy, infrastructure, the future of business, the future of work, the legal side of this stuff, societal impact. Like, we want to go broad on this and really get a bunch of different perspectives by interviewing leaders in all these different areas, and look at the impacts of continued AI advancement on businesses, the economy, education, and society.
[01:26:55] Paul Roetzer: So the hypothesis is these models are gonna keep getting smarter and more generally [01:27:00] capable, faster than we are prepared for, and we need to have these discussions. And so that's what I want to do with this series: start having these discussions. So episode one will drop on Thursday. I don't know when episode two is gonna drop yet.
[01:27:12] Paul Roetzer: My schedule's a little nuts for the next few weeks, but I want to get this going with the timeline, and then we'll start, you know, with those interviews shortly thereafter. So that's all we got. Hopefully, almost an hour and a half into this, we've caught everybody up on the last two weeks.
[01:27:28] Paul Roetzer: And we appreciate you giving us the grace of a week off to do what we were doing with our travels. We'll be back on Thursday with The Road to AGI and Beyond. So thanks, Mike. Glad to have you back in the States. Glad to be back, great trip. I'm sure your family's happy to see you back. And we'll be back with all of you again next week with a regular weekly episode as well.
[01:27:52] Paul Roetzer: Thanks for listening to the AI Show. Visit marketingaiinstitute.com to continue your AI learning journey and [01:28:00] join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.
[01:28:14] Paul Roetzer: Until next time, stay curious and explore ai.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.