46 Min Read

[The AI Show Episode 133]: DeepSeek, US vs. China AI War, Anthropic CEO: AI Could Surpass Humans by 2027, OpenAI Plans AI for Advanced Coding, & Meta’s 1.3 Million GPUs


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

Silicon Valley is buzzing, and it's not about OpenAI this time.

DeepSeek has sent shockwaves through Silicon Valley, shaking up conversations about AI companies' futures, and what’s next for policies and infrastructure. Join Mike and Paul as they unpack the far-reaching implications of DeepSeek, the growing AI rivalry between the US and China, bold claims from Anthropic's CEO about AI capabilities, the potential for AI to extend human life, and much more in our rapid-fire section.

Listen or watch below, and find the show notes and the transcript further down.

Listen Now

Watch the Video

Timestamps

00:05:08 — DeepSeek

00:18:47 — The AI War Between the US and China

00:25:13 — Amodei Comments on AI Surpassing Human Intelligence

00:32:24 — Humanity’s Last Exam

00:37:56 — OpenAI Targets AGI with System That Thinks Like a Pro Engineer

00:42:17 — Zuckerberg Says Meta Will Have 1.3M GPUs by Year’s End

00:44:54 — Gemini 2.0 Flash Thinking

00:48:01 — Imagen 3 Hits #1 Image Generation Model on Leaderboard

00:50:57 — Altman Backed Retro Biosciences Raises $1B to Extend Human Life

00:55:47 — AI for Public Speaking Prep

Summary 

DeepSeek

DeepSeek, a Hangzhou-based company founded just last year, has created an AI model that rivals top US systems while spending a fraction of the time and money—and is entirely open source.

The company's latest system, DeepSeek-V3, was built using only about 2,000 specialized Nvidia chips—compared to the 16,000 or more chips that major U.S. companies typically use. DeepSeek also claims it spent just $6 million on computing power, roughly one-tenth of what Meta invested in its latest AI technology.

This efficiency hasn't come at the expense of performance. The system can match leading chatbots in answering questions, solving logic problems, and writing computer programs. 

The company's AI assistant recently overtook ChatGPT to become the top-rated free application on Apple's App Store in the United States.

The breakthrough also has particular significance because it occurred despite US government restrictions on sending advanced AI chips to China. 

On the heels of DeepSeek-V3, DeepSeek also released R1, an open source competitor to OpenAI’s advanced reasoning model, o1, that costs 90% less to use—further baffling technologists as to how the company is able to create such powerful technology at such low cost.

DeepSeek’s rapid success has shaken AI builders and investors in the US, raising questions about whether the major labs can maintain their dominance when a smaller player can create similar tech at a fraction of the cost and make it widely accessible. It’s challenging the belief that cutting-edge AI demands massive spending and the latest chips.

Perhaps most significantly, DeepSeek's emergence is changing the narrative about China's AI capabilities. 

The AI War Between the US and China

Hot on the heels of DeepSeek’s success, some US-based AI leaders are now speaking up louder than ever about the need for America to win the AI war against China.

Most notably, Alexandr Wang, founder and CEO at Scale AI, a major data platform used by companies building AI, took out a full-page ad in the Washington Post the day after President Donald Trump was inaugurated titled “Dear President Trump, America Must Win the AI War.”

The letter outlines a five-point plan to maintain US leadership in AI, especially in light of Wang’s warning that China’s government “outspends” the US government “by about 10 times on AI implementation and adoption.”

Wang's proposed strategy centers on a fundamental restructuring of how the US government approaches AI development. He highlights a critical misalignment in current government spending, where 90% of investments focus on algorithms, contrary to industry best practices that allocate resources across three pillars: compute (60%), data (30%), and algorithms (10%).
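To make the math behind that reallocation concrete, here is a minimal sketch in Python. The $10 billion annual budget is a hypothetical figure, and the 5%/5% split of the non-algorithm remainder in the "current" scenario is an assumption; the letter itself only specifies the 90% algorithms figure and the 60/30/10 target pillars.

```python
# Illustrative sketch only. The $10B budget is hypothetical, and the 5%/5% split
# of the non-algorithm remainder under "current" is an assumption; Wang's letter
# only states the 90% algorithms figure and the 60/30/10 target pillars.
BUDGET = 10_000_000_000

current = {"algorithms": 0.90, "compute": 0.05, "data": 0.05}
proposed = {"compute": 0.60, "data": 0.30, "algorithms": 0.10}

def allocate(shares: dict, total: int) -> dict:
    """Convert percentage shares into dollar allocations."""
    return {pillar: share * total for pillar, share in shares.items()}

for label, shares in (("current", current), ("proposed", proposed)):
    dollars = allocate(shares, BUDGET)
    breakdown = ", ".join(f"{p}: ${d / 1e9:.1f}B" for p, d in dollars.items())
    print(f"{label:>8} -> {breakdown}")
```

Under those assumptions, compute spending would jump from roughly $0.5 billion to $6 billion of a $10 billion budget, which is the scale of shift Wang is arguing for.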

The plan calls for five specific actions in the administration's first 100 days. Beyond realigning AI investments, Wang advocates for building an AI-ready workforce, with projections suggesting AI could create 50 million new jobs by 2030.

He also emphasizes the need to modernize federal agencies' AI capabilities by 2027, noting that while the US government is the world's largest data producer, it's not effectively leveraging this advantage.

The proposal also addresses two critical infrastructure challenges: energy and regulation. Wang calls for an aggressive national energy plan to support AI's substantial power demands, while simultaneously advocating for a balanced regulatory framework that ensures safety without hampering innovation.

Amodei Comments on AI Surpassing Human Intelligence

In a striking new interview from the annual World Economic Forum conference in Davos, Anthropic CEO Dario Amodei has made his most definitive predictions yet about artificial intelligence's trajectory, suggesting that AI could surpass human intelligence by 2027.

Amodei revealed that his confidence about rapid AI advancement has increased dramatically in recent months. While he previously maintained uncertainty about the timeline for transformative AI, he now says he is "relatively confident" that within the next two to three years, we'll see AI systems that are "better than us at almost everything."

The Anthropic chief also disclosed unprecedented details about his company's growth. To meet surging demand, Anthropic is planning a massive expansion of its computing infrastructure. 

Amodei predicted that by 2026, his company might deploy more than one million AI chips to power its systems.

The CEO also provided a glimpse into Anthropic's immediate roadmap, indicating that significant updates to their Claude AI assistant are coming within months.

Perhaps most notably, Amodei broke with industry norms by speaking candidly about the societal implications of advanced AI. He argued that by 2027, society will need to fundamentally rethink how we organize our economy as AI becomes increasingly capable. 


This episode is brought to you by our AI Mastery Membership: 

This 12-month membership gives you access to all the education, insights, and answers you need to master AI for your company and career. To learn more about the membership, go to www.smarterx.ai/ai-mastery

As a special thank you to our podcast audience, you can use the code POD100 to save $100 on a membership. 


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: You just showed how to disrupt the U. S. economy in three days, like in a very hands off, we had nothing to do with this kind of way. And it may be what we saw with TikTok is U. S. consumers don't care if their private data is going to China. They just want convenience and personalization. So if an app from China offers value and use, like U. S. consumers have shown time and again, they're going to use it regardless. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host.

[00:00:39] Paul Roetzer: Each week, I'm joined by my co host and Marketing AI Institute Chief Content Officer, Mike Kaput. As we break down all the AI news that matters and give you insights. and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.[00:01:00] 

[00:01:02] Paul Roetzer: Welcome to episode 133 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. This is our second episode we are recording on January 27th. So it is Monday morning, 10:50 a.m. Eastern time, January 27th. NVIDIA stock continues to go down, which we're going to explain why in a moment.

[00:01:24] Paul Roetzer: so if you didn't listen to episode 132, just a quick recap, normally we do one weekly episode that recaps the previous week's news. last week's news was so crazy that we decided to do this over two episodes. So I guess you could think of episode 132 as part one, episode 133 is part two. So we are continuing on with the major news from last week.

[00:01:49] Paul Roetzer: This episode is brought to us by the AI Mastery Membership Program. this is our 12 month membership program that includes, [00:02:00] quarterly classes like Ask Me Anything sessions, generative AI mastery classes, trends briefings, and then we announced, and if you listened to episode 132, you heard us talk about this, the AI Literacy Project.

[00:02:12] Paul Roetzer: where we announced some major changes to our AI Mastery Membership Program, including, as of now, the Piloting AI and Scaling AI course series are, bundled into the membership. And then in spring of this year, we are going to be launching a new AI powered learning management system and user experience, expanding courses and professional certificates, and then a new turnkey AI Academy solution for businesses.

[00:02:40] Paul Roetzer: We're going to launch the AI Fundamentals course series, which is kind of like AI 101 for all knowledge workers. New Piloting AI and Scaling AI courses, a new weekly Gen AI app course series that we're really excited about, AI for Industries, AI for Departments, so just a ton coming, and it's all going to be built into that same AI Mastery membership.

[00:02:59] Paul Roetzer: So if you join [00:03:00] now, you will get first access to all the new stuff as it's coming online this spring. You can go to SmarterX.ai and just click on Education and it's right there. Just look at AI Mastery Membership in the dropdown, or go to smarterx.ai/ai-mastery. We'll put the links in the show notes.

[00:03:19] Paul Roetzer: If you wanna just click on the link in the show notes, there is a promo code you can use, POD100, that will get you $100 off the membership. And then, also a quick reminder, we have open submission right now to speak at MAICON 2025, that is taking place October 14th to the 16th in Cleveland. You can register too, registration is open, and we're expecting probably north of 1,500 people this year.

[00:03:49] Paul Roetzer: I think we had 1,100 last year in Cleveland, if I remember correctly. So this year we're thinking 1,500 plus. We'll see. I mean, it's hard to predict these things in the event [00:04:00] world, but it's looking like it's going to be another amazing event. This is our sixth year again. So, if you want to speak, get those applications in soon.

[00:04:08] Paul Roetzer: It's a rolling basis, so as phenomenal applications come in, we'll reach out to people and get them on the agenda. But I think last year we had, I don't know, it was like close to 200 or more submissions to speak, and there obviously aren't that many slots. So get those submissions in early. Just go to MAICON.AI,

[00:04:27] Paul Roetzer: that's M-A-I-C-O-N dot A-I, and click on Submit Your Speaker Application; there's a button right there on the homepage. Okay, Mike, the thing that was all the rage, I think it started on Thursday, I don't remember when this started taking over the news cycle, at least in the Twitterverse, but all that I saw all weekend was this.

[00:04:47] Paul Roetzer: I listened to probably three podcasts about it. I've read probably 20 articles about it and I have been watching NVIDIA stock plummeting this morning as a result of it. So let's talk about [00:05:00] DeepSeek, which also is the number one app in the app store, I think, as of Sunday night. So it's just, it's everywhere.

[00:05:05] Paul Roetzer: I've never seen anything quite take off like this. 

[00:05:08] DeepSeek

[00:05:08] Mike Kaput: Yes. So DeepSeek is a Chinese AI lab that is sending shockwaves through Silicon Valley because it's had some breakthroughs that are challenging some fundamental assumptions about AI development. So DeepSeek has actually created AI models that rival or surpass top US-based or US-created systems.

[00:05:31] Mike Kaput: While spending a fraction of the time and money on these models and releasing them in an open fashion so others can use and build on them. So one of the company's models, DeepSeek V3, was built using only about, they claim, 2,000 specialized NVIDIA chips, compared to 16,000 or more chips that major U.S. companies are using. Even more striking is the cost. DeepSeek claims it spent just $6 [00:06:00] million on computing power to train the model. And the model, they say, is comparable to something like GPT-4o. So it challenges the closed models, and cost only $6 million to train, which is literally, you know, one-tenth of what someone like Meta invested in their latest AI technology.

[00:06:21] Mike Kaput: And the system can match the leading chatbots out there, apparently in answering questions, solving logic problems, writing code, I mentioned it's open so developers can freely access and build on it. And as a result, Paul, like you mentioned, DeepSeek recently overtook ChatGPT to become the top rated free application in the app store in the US.

[00:06:42] Mike Kaput: Now, this also happened because of, or despite, U.S. government restrictions on sending advanced AI chips to China. So rather than hindering progress, these constraints may have forced Chinese engineers to develop more efficient approaches. [00:07:00] Now on the heels of DeepSeek V3, DeepSeek also released R1, which is an open source competitor to OpenAI's advanced reasoning models, specifically o1, and then eventually o3, which is coming out.

[00:07:12] Mike Kaput: Now R1 actually costs like 90 percent less to use than O1, and this is kind of further baffling everybody as to how this company is able to create such powerful AI at such a low cost. So it's raising a bunch of uncomfortable questions, which is why you're hearing about it a lot. So if what they say is true, and that's a big if we'll dive into, that really calls into question how much money is required to actually build AI, how much, how many advanced chips are required to build advanced AI systems.

[00:07:44] Mike Kaput: Have all these big labs just lit money on fire and way over-engineered this when there's an easier way? And then also, DeepSeek's emergence has people worried about China's AI capabilities. Former Google CEO Eric Schmidt previously estimated [00:08:00] China was two to three years behind the US in AI development.

[00:08:03] Mike Kaput: But now acknowledges they have caught up. So Paul, a bunch of different angles here, but let's first talk about what DeepSeek says they have achieved. How credible are these claims? How likely is it that they were actually able to achieve this level of performance this fast for this cheap? 

[00:08:22] Paul Roetzer: I have no idea.

[00:08:23] Paul Roetzer: I mean, so I've been trying to track this as closely as possible, look at all the different angles, listen to the different players involved. there are some who think they just innovated, that, that the U. S. reducing the amount of chips they're allowed to have just drove innovation and they found out they built more efficient algorithms to, to do this training more efficiently.

[00:08:48] Paul Roetzer: And there are some who are very strongly opinionated that they're lying and there's no way they actually trained it this efficiently and they probably have way more illegal NVIDIA chips than they're saying they have. [00:09:00] And they're never going to disclose that they do because that's illegal, and there are certainly a fair amount of people who think they probably stole the data and illegally trained the models on U.S.

[00:09:11] Paul Roetzer: model outputs, like, obviously we have no idea, like, we have no inside knowledge on this, there are lots of opinions flying around. Whatever the truth ends up being, there, there is a lot of, concern in Silicon Valley at the moment and in the U.S. stock market that they may have actually just found more efficient ways to do this.

[00:09:37] Paul Roetzer: I think there's probably little debate that they leveraged U.S. innovation to do this. That is, that is pretty much guaranteed. They shortcut their path to success by leveraging what the U.S. had done. But throughout the weekend, the app kept climbing the app store, which is hilarious. Like, I don't even, I don't even know like people who, like, what, what, what would you know to [00:10:00] do with this?

[00:10:00] Paul Roetzer: Like, I don't, I don't know how it climbed. Like, and even that, there was some question about whether it was like bots and just paid things that were getting it to the top. And then there were questions about, is this like a psyop by the Chinese government to actually like just mess with the U.S. stock market?

[00:10:14] Paul Roetzer: I don't know. I don't know. It, it gets really wild, really deep. But we referenced at the start of 132, and we referenced at the beginning here, NVIDIA's stock is like crashing this morning as a result of this. The reason is, NVIDIA's value is based on the belief that we are going to keep building massive data centers filled with millions of NVIDIA chips to not only train more powerful, generally capable models, but to run the inference on these models when all of us consumers use AI apps and devices.

[00:10:45] Paul Roetzer: So NVIDIA's entire future, at least in the stock market, is based on this belief that we're going to keep building, we're going to keep needing millions of chips. Well, this puts into question, do we really need all these data centers and [00:11:00] infrastructure that we just got $500 billion for with Stargate, and we've got trillions more coming?

[00:11:04] Paul Roetzer: Like, is that all going to be necessary? So, that's why NVIDIA's stock sort of, like, just dropped, because stock, Wall Street doesn't like uncertainty, and this was, this is very uncertain. Like, there's way more questions than answers. There's, there are risks. There's massive risks here. So like one, these models, if you ask them about, you know, things related to like democracy or Tiananmen Square, things like that, like they just won't answer, like they're, they're obviously, in some ways controlled by, they're from a company within China.

[00:11:33] Paul Roetzer: So they have to adhere to the policies of the government. The data is stored on Chinese servers, I believe. Like, I saw something last night about, like, where's this data going? You're sending personal data, whatever you put in, whatever the inputs and outputs are of the model, like you're, you know, sending that through the app, you're using the app.

[00:11:50] Paul Roetzer: but, you know, again, like, there's differing opinions. So we had Satya at Davos, who said: to see the DeepSeek new model, it's super impressive in [00:12:00] terms of both how they have really effectively done an open source model that does this inference time compute and is super compute efficient. We should take the developments out of China very, very seriously.

[00:12:10] Paul Roetzer: Then Satya also tweeted, I may mispronounce this, Jevons paradox strikes again. So, the idea that because something became cheap, we weren't, aren't gonna need the data centers. So his exact tweet was: Jevons paradox strikes again, as AI gets more efficient and accessible through things like DeepSeek,

[00:12:36] Paul Roetzer: we will see its use skyrocket, turning it into a commodity we just can't get enough of. So what he's saying is, NVIDIA stock should actually be going up, is the basic, like, interpretation here. That because something was made cheaper, there actually will be an increasing demand for it. So, Wikipedia, because it's the quickest thing I can get:

[00:12:55] Paul Roetzer: In economics, the Jevons paradox, or Jevons [00:13:00] effect, occurs when technological progress increases the efficiency with which a resource is used, reducing the amount necessary for any one use, but the falling cost of use induces increases in demand enough that resource use is increased rather than reduced.

[00:13:16] Paul Roetzer: Meaning, we're going to need more data centers, we're going to need more NVIDIA chips. So, the stock market is making the assumption, oh, we won't need as many NVIDIA chips or data centers. What Satya is saying is, no, that's not the case. This is the Jevons paradox. Like, it'll actually increase, we'll need more as we go.
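To make the Jevons point concrete, here is a minimal sketch with made-up numbers (none of these figures come from the episode or from NVIDIA): if demand for AI queries is price-elastic enough, a 10x drop in the cost per query can increase total compute consumed, even though each query now needs 10x less compute.

```python
# Minimal illustration of the Jevons paradox with hypothetical numbers.
# Demand is modeled as constant-elasticity in the price per query; with
# elasticity > 1, cheaper and more efficient AI leads to MORE total compute.

def total_compute(cost_per_query: float, compute_per_query: float,
                  base_demand: float, elasticity: float) -> float:
    """Total compute consumed = number of queries x compute per query."""
    queries = base_demand * cost_per_query ** (-elasticity)
    return queries * compute_per_query

# Before: expensive, compute-hungry queries. After: 10x cheaper, 10x more efficient.
before = total_compute(cost_per_query=1.00, compute_per_query=1.0,
                       base_demand=1_000_000, elasticity=1.5)
after = total_compute(cost_per_query=0.10, compute_per_query=0.1,
                      base_demand=1_000_000, elasticity=1.5)

print(f"total compute before: {before:,.0f}")  # 1,000,000
print(f"total compute after:  {after:,.0f}")   # ~3,162,278 (higher, despite efficiency)
```

With an elasticity below 1, total compute in this toy model would fall instead, which is essentially the disagreement playing out between Wall Street's reaction and Nadella's tweet.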

[00:13:33] Paul Roetzer: and then just like to, to drill home the significance of this, there's an article from The Information that said Meta scrambles after Chinese AI equals its own, upending Silicon Valley. And so I'm just going to read a couple of quick excerpts here because I think this gives the mentality of, and again, this is like five days old, that just happened, like Thursday or whatever.

[00:13:52] Paul Roetzer: So it said leaders, including AI infrastructure director Mathew Oldham, have told numerous colleagues they are concerned that [00:14:00] the next version of Meta's flagship AI, Llama, won't perform as well as DeepSeek. Now, this is why I was kind of surprised Meta's stock was up today. This month, Hangzhou-based High-Flyer Capital Management upped the ante by releasing another version of DeepSeek that you had mentioned.

[00:14:16] Paul Roetzer: App developers can freely download DeepSeek or buy access to it through cloud-based APIs. Researchers at OpenAI, Meta and other top developers have been scrutinizing the DeepSeek model to see what they could learn from it, including how it manages to run more cheaply and efficiently than some American made models.

[00:14:35] Paul Roetzer: Noam Brown, who we've talked about many times on the show, he tweeted, DeepSeek shows you can get very powerful AI models with relatively little compute. The article goes on to say, even more surprising than the quality of DeepSeek's results was High-Flyer's claim that developing it cost a fraction of the amount American competitors spent on developing similar models.

[00:14:56] Paul Roetzer: A claim that various researchers have met with skepticism. [00:15:00] Underscoring the efficiency of its models, High-Flyer also sells a cloud-hosted version that is 17 to 27 times cheaper than OpenAI's comparable offerings. The arrival of DeepSeek is particularly galling to researchers at Meta because, like Llama, it is freely available for other developers to use with publicly accessible settings that control the model's behavior, a concept known as open weights.

[00:15:25] Paul Roetzer: so this is really important because, as we talked about on the, the podcast last year, Zuckerberg's play was to undercut the market by making a free open source model. Well, he just got undercut by a Chinese company with a model that's better, based on this article, than what we have today. Meta hasn't even released their next model yet, and there's concerns at Meta that this thing is not only more efficient, it's actually better than what they were going to release.

[00:15:53] Paul Roetzer: So the article says that researchers at leading American AI labs, while probably impressed with the results, [00:16:00] think High-Flyer may have taken some shortcuts to mimic already released models, including training its own models on answers from o1 and Llama. And then they said that managers and engineers from Meta's AI group and infrastructure team have started four war rooms to learn how DeepSeek works.

[00:16:17] Paul Roetzer: Two are mobilized to try to understand how they lowered the cost of training and running DeepSeek. Meta wants to apply whatever they can learn. And then they said, okay, there were two other war rooms dedicated to different elements of DeepSeek.

[00:16:33] Paul Roetzer: So it's surreal. Like, again, they don't know yet if they're being truthful in, in their research and what they're saying, and maybe there are some like larger issues at play here. But it seems like there's enough to this that it has AI research labs scrambling to figure out what is going on. And then you throw in the fact that it all of a sudden is number one in the App Store, and now consumers are seeing and using this thing.

[00:16:57] Paul Roetzer: It's like, It just, it took the world by [00:17:00] storm. It was really crazy. 

[00:17:02] Mike Kaput: And it seems like one way or another, whether they have in fact found some breakthroughs to do this or are hiding something, there's a lot of money and interest in figuring that out because all of the investors, all the business models of these major players seem to be under threat from this, right?

[00:17:21] Paul Roetzer: Yeah. I mean, I, I didn't do the math, but I mean, NVIDIA is a $3 trillion company. If they lost 14 percent market value this morning, I mean, we're talking about a $400 to $500 billion market cap swing in two hours.

[00:17:38] Mike Kaput: Right. And 

[00:17:39] Paul Roetzer: it's a, it's a massive economic impact.

[00:17:43] Mike Kaput: So you don't think there's some corporate espionage that happened around this?

[00:17:46] Paul Roetzer: That's a hundred percent. There wasn't something more nefarious at work here. You just showed how to disrupt the US economy in three days. Like, in, in a very [00:18:00] hands off, we had nothing to do with this kind of way. So even if it didn't come from that, you may now see future copycat things done where, because what we, and it may be what we saw with TikTok is U.S.

[00:18:13] Paul Roetzer: consumers don't care if it's from China. Like, they don't care if their private data is going to China or it's owned by some, a holding company in China. They just want convenience and personalization. They want that experience. So, if an app from China offers value and use, like, U.S. consumers have shown time and again they're going to use it regardless.

[00:18:34] Paul Roetzer: So, I don't know, man, it's It is heavy stuff. For a second episode we're recording in the same day, like it hurts my brain to be trying to like process this. And I know we're only going to talk more about it in the next topic. 

[00:18:47] The AI War Between the US and China

[00:18:47] Mike Kaput: Yeah, for sure. Because the second topic is pretty intimately related to this. It kind of zooms out from just this DeepSeek drama and looks at how some US-based AI leaders are now [00:19:00] speaking up louder than ever about the need for America to win the AI war against China.

[00:19:06] Mike Kaput: So the most notable example of this is Alexandr Wang, the founder and CEO at Scale AI, which is a major data platform company that's used by a lot of companies building AI. He took out a full-page ad in the Washington Post the day after President Donald Trump was inaugurated titled, literally, Dear President Trump, America Must Win the AI War.

[00:19:29] Mike Kaput: This ad linked to a letter from Wang to Trump that is published on the Scale AI website, and it outlines a five-point plan to maintain U.S. leadership in AI. And while that is generally, you know, leading all the countries, it's especially focused on China. Wang warns that the Chinese government, quote, outspends the U.S.

[00:19:50] Mike Kaput: government by about 10 times on AI implementation and adoption. So his proposed strategy centers on a fundamental restructuring [00:20:00] of how the U.S. government approaches AI development. He says that there's a critical misalignment in current government spending, where 90 percent of investments focus on algorithms, contrary to what's actually a best practice in the industry, which is allocating resources across three pillars.

[00:20:18] Mike Kaput: Compute at about 60%, data at about 30%, and algorithms at about 10%. The plan also calls for five specific actions in the administration's first 100 days. Beyond realigning those AI investments, Wang advocates for building an AI ready workforce with projections suggesting AI could create 50 million new jobs by 2030.

[00:20:43] Mike Kaput: He also emphasized the need to modernize federal agencies' AI capabilities by 2027, and he notes that while the U.S. government is the world's largest data producer, it's not effectively leveraging this advantage. And the proposal also [00:21:00] addresses two critical infrastructure challenges, energy and regulation.

[00:21:03] Mike Kaput: He calls for an aggressive national energy plan to support AI's substantial power demands while simultaneously advocating for a balanced regulatory framework that ensures safety without hampering innovation. So Paul, it is not news that there's a brewing AI arms race between the U.S. and China, but

[00:21:24] Mike Kaput: based on the DeepSeek news, it seems that this is now at the forefront of everyone's mind. Is the U.S. falling behind? How critical is this scenario?

[00:21:35] Paul Roetzer: I don't know. I've seen a lot of charts in the last week or so on, you know, how much China's built out in energy and infrastructure. yeah, so, so you and I both read AI Superpowers by Kai-Fu Lee years ago.

[00:21:50] Paul Roetzer: I think anybody who wants to understand the dynamics here should read it, because they are becoming very important and starting to become reality. So AI Superpowers, the subtitle was China, [00:22:00] Silicon Valley, and the New World Order. Kai-Fu Lee was the former president of Google China, and now he runs a venture fund in China.

[00:22:08] Paul Roetzer: so he, he, he knows what he's talking about. And so he tweeted just yesterday: I think this was in my book, AI Superpowers. I predicted that U.S. will lead breakthroughs, but China will be better and faster in engineering. Many people simplified that to be China will beat U.S., and many claimed I was wrong with Gen AI. With the recent DeepSeek releases, I feel vindicated.

[00:22:31] Paul Roetzer: So, it is exactly what he laid out in his book that U. S. will drive innovation. We will build the data centers. We will build the biggest models. We will have the breakthroughs in memory and reasoning and all these things. Like, that is what we do in America. And China will very quickly follow and improve on them.

[00:22:49] Paul Roetzer: And that is what has always happened in innovation for decades. And he said AI was going to be no different. And the other thing that China has going for them is they don't have, the civil rights around [00:23:00] like privacy and data usage of civilians and things like that. So they're going to use all the data.

[00:23:05] Paul Roetzer: One of the big question marks was always, would they allow a large language model to exist? Like, could they allow something that could talk about the actual history of Tiananmen Square? Like, would they let something like that exist? And the answer seems to be yes, that, that they will. And, you know, if they're going to do that, it definitely creates a whole lot of new wrinkles in this, I don't know what else to call it.

[00:23:28] Paul Roetzer: I don't know if we have a better name for it. Like, US-China war, I don't really like referring to it like that, but it is a, is an AI war for sure. And, it's going to be fought on a lot of different levels. and sometimes we're not going to know that that's what's happening. And we may find out years later that that's what was happening and how it all played out.

[00:23:47] Paul Roetzer: But, yeah, I don't know. I mean, people listen to Wang. Scale AI is a very important company. They, they work, you can't even like step back and say, okay, so Alexandr works with Sam Altman at OpenAI, but he doesn't [00:24:00] like Elon Musk.

[00:24:00] Mike Kaput: Like, no, they all work with them. Like, 

[00:24:02] Paul Roetzer: Meta uses them. I'm sure Elon uses them.

[00:24:04] Paul Roetzer: Like, they're a critical component of the data infrastructure that trains these models. And, so people, people do listen to him. And I think that, This, I'm sure that this, you know, letter to the president has been seen and, you know, I, I think there's elements of it that I certainly agree with and I think it'll be fascinating to see how this all plays out, but this is going to be a major ongoing news thing.

[00:24:31] Paul Roetzer: This is not going away. This is going to only grow in importance. 

[00:24:34] Mike Kaput: Yeah, I think we talked about it a bit last year, there were all these scenarios where we could see AI becoming this like hot button political issue, and it kind of didn't really hit right away. But now this is certainly one area.

[00:24:47] Paul Roetzer: I had the same thought last night when I was like scanning through, getting ready for today.

[00:24:51] Mike Kaput: Yeah. 

[00:24:52] Paul Roetzer: That we were saying like up until November, how like AI just didn't play a role in the election. It wasn't really talked about as a campaign item. And then day one, [00:25:00] it's all that's talked about. Like it is like, you know, not all, there's obviously immigration, a bunch of other stuff going on, but it became very obvious day one, minute one, that AI was fundamental to the administration.

[00:25:13] Amodei Comments on AI Surpassing Human Intelligence

[00:25:13] Mike Kaput: All right. And our third big topic for this episode. So the World Economic Forum had their annual conference in Davos. On episode 132, we talked about a few interesting interviews with AI leaders. On this episode, we wanted to deep dive in a more formal way into one of them, from Anthropic CEO Dario Amodei, because he made some pretty interesting predictions

[00:25:40] Mike Kaput: about AI's trajectory. He suggested that AI could surpass human intelligence by 2027. And of course, people then started quoting him and asking other AI leaders to respond to this. He actually revealed during this interview that his confidence about rapid AI advancement increased [00:26:00] dramatically in recent months.

[00:26:01] Mike Kaput: While he previously maintained uncertainty about the timeline for transformative AI, he now says he is, quote, relatively confident that within the next two to three years, we will see AI systems that are, quote, better than us at almost everything. He also talked a bit about Anthropic's growth, their fundraising,

[00:26:23] Mike Kaput: and their immediate roadmap, indicating significant updates coming to Claude within coming months. But really it's this 2027 prediction, Paul, that kind of got everyone's attention, because he also said society is going to need to fundamentally rethink how we organize our economy as AI becomes increasingly capable.

[00:26:46] Mike Kaput: He said there are a lot of assumptions we made when humans were the most intelligent species on the planet that are going to be invalidated by what's happening with AI. So, Paul, like, can you maybe walk us through what [00:27:00] he is seeing that's leading him to make this prediction and then even further accelerate his timeline?

[00:27:07] Paul Roetzer: Yeah, I don't, it's interesting. I don't think he's accelerating his timeline really. Okay. So if you go back to, when was this, we talked about the Machines of Loving Grace article he wrote in, let's say this is October 15th, 2024, episode 119 of the podcast. Yeah. Okay. So he had published this Machines of Loving Grace, article where he had sort of like radical predictions for AI.

[00:27:32] Paul Roetzer: And at that time he talked about that what he calls powerful AI, he doesn't like AGI, he thinks it's kind of like a marketing term, so he refers to it as powerful. But he had said then, like he thought as early as 2026, so I don't, I don't know, he doesn't do many interviews, so I think some, in some ways this may just, it may have gotten a lot of run.

[00:27:51] Paul Roetzer: because he was out at Davos World Economic Forum doing, doing these interviews. But he said, like, could be 2026, purpose of this essay, you [00:28:00] know, we're looking at maybe 5 to 10 years, like anywhere in that realm, basically. So, he, he historically tends to be quite vague. Like, he's hard to pin down on exactly what he means by things, and when exactly he thinks things are gonna happen, or why he thinks things are happening the way they're happening.

[00:28:19] Paul Roetzer: he, he, more than most, he speaks in pretty broad generalities, and he's hard to drill into specifics. So, I, I found, I always listen to what Dario has to say, but I think he often presents these outlandish scenarios And then basically says, like, I have no idea what it means. Like, that's his general answer, is like, we don't know.

[00:28:46] Paul Roetzer: Okay, why are you accelerating? If you're so worried about this, why are you accelerating development? Well, we, like, we think, you know, it can be good, and we're gonna figure it out, and we're gonna build AI that figures out why, you know, the risks are, and things like that. So, [00:29:00] I, I, it's, it's weird. I get unsettled listening to him, I think is, like, what I'm trying to say.

[00:29:04] Paul Roetzer: I, I think he's, I think Anthropic is probably making breakthroughs, like I said on episode 132, I think they're holding back right now, for whatever the reasons, maybe it's a safety thing, maybe it's, training run didn't work exactly how they wanted it to, but I think they have way more than they're saying they do, or that they're currently sharing with us, but I find interviews with him unsettling because he never seems to have answers to like, what does this mean, and he more than most people throws caution to the dangers of what they're doing and never has the answer to, like, what we're going to do about it, other than when we see the risk has emerged, we will solve for it.

[00:29:49] Paul Roetzer: And so, I don't know, like, I listened to a couple of interviews with him last week and it's a lot of the same stuff. But he, you know, this 2027 [00:30:00] timeline, you know, I, I think it's coming from something, because we're now, what, four months removed, three months removed from when he did the Machines of Loving Grace thing.

[00:30:08] Paul Roetzer: Right. And, I, I just, I do think that he thinks and that others think that we are very near significant advancements in AI and I, I believe that. I don't know that he vocalizes it the best, but I, I, I think that he thinks that this is very real. 

[00:30:28] Mike Kaput: Yeah, on that note, and we've talked about it in previous episodes, you don't see a lot of the major leaders saying, whoa, whoa, whoa, pump the brakes, this is slowing down a bit.

[00:30:40] Paul Roetzer: Because they're all racing for the same funding, they're all racing for the same influence, they all think that they're probably best situated to identify and solve for the risks, but I, I do, I don't know, like, I, I almost wonder if sometime this year or next year, we don't [00:31:00] start seeing much more collaboration between these players.

[00:31:03] Paul Roetzer: Like, I, I think at some point Altman and Amodei, obviously, you know, Amodei left OpenAI. I don't know how, what kind of terms Sam and Dario are on these days, but Dario took 10 percent of the staff with him when he left in 2021. I think at some point we, we really need Demis Hassabis and Dario Amodei and Sam Altman or whoever the lead engineer is at OpenAI, like, these people need to be in a room talking about the reality of: what if we do get to AGI or superintelligence by 2027?

[00:31:35] Paul Roetzer: They all talk about the need for some international council to exist and somebody to, like, figure this out. I think they need to get together and figure this out. Like, they're the ones building the technology and they're just hoping someone else comes along and solves for what happens as a result of the technology that they're all building.

[00:31:55] Paul Roetzer: Yeah. And, and so I, I don't know. I don't know if something needs to happen for, [00:32:00] for them to then get together. I can't imagine Elon Musk wanting to get in the room with Sam and some of the other guys, but there's like five or six people in the world who are leading companies that are building something that they think changes society within three to five years, and they're not talking to each other that I'm aware of.

[00:32:20] Paul Roetzer: about what to do about that. 

[00:32:24] Humanity’s Last Exam

[00:32:24] Mike Kaput: All right, let's dive into some rapid fire for this episode. The first up rapid fire topic is a provocative new benchmark that is called Humanity's Last Exam. And this is highlighting just how quickly AI is advancing. And raising concerns in the process about our ability to measure its capabilities.

[00:32:45] Mike Kaput: So this Humanity's Last Exam was released this week by researchers at the Center for AI Safety and Scale AI. Humanity's Last Exam is basically being billed as the most challenging test ever created for AI [00:33:00] systems. It consists of roughly 3,000 questions, spanning fields from analytic philosophy to rocket engineering.

[00:33:07] Mike Kaput: And each question is crafted by leading experts, some of whom were paid up to $5,000 per accepted submission. And these are not just typical test questions, they are specifically designed to push the boundaries of what AI can achieve, often matching or exceeding the difficulty of PhD-level challenges.

[00:33:29] Mike Kaput: The creation of this test was spurred by an urgent problem. Existing AI benchmarks are becoming obsolete. And they're becoming obsolete very, very quickly, because new models from OpenAI, Google, Anthropic, et cetera, have been consistently mastering graduate-level tests. So researchers are kind of stuck trying to figure out even more difficult challenges.

[00:33:52] Mike Kaput: Now, right now, the most advanced AI models out there are struggling with this test. OpenAI's [00:34:00] latest systems have scored the highest among those tested, but still only got an 8.3 percent accuracy. However, the test creator, Dan Hendrycks, predicts these scores could surpass 50 percent by the end of the year.

[00:34:14] Mike Kaput: And that's a threshold that would suggest AI systems have become world-class oracles capable of outperforming human experts across virtually any academic domain. Now, Paul, like this is a pretty interesting name. Seems like a little hyped up and wild, but definitely addressing like a real problem we're seeing, like how closely should we be watching model performance on this particular test?

[00:34:42] Paul Roetzer: so I think these like super advanced tests, like the ARC-AGI test, this one, I think they matter to the research labs a lot because they get to benchmark the, you know, the overall potential and [00:35:00] power of these models. My, this has become like my soapbox thing, I think, like. I want to see the evals by profession, like, I don't really care, like, I, so like, I assume this is gonna be achieved in the next one to two years, like, I just, anytime I see something like this, like, oh, we figured out a way to answer, like, questions I've never thought of, and it's gonna, it's not gonna be anywhere in the training data, and it's gonna be amazing, it's gonna be so hard, humanity's last, and then, like, 12 months from now, somebody will have, like, done it, and it's like, oh, okay, like, well, now we're gonna do this one.

[00:35:32] Paul Roetzer: What are we proving? Like, at the end of the day, I want to know how much of a writer's job can it do? How much of a doctor's job, a consultant's job, a psychologist's job, like, I want the same energy doing evals of people's careers. Because that's what actually matters in the economy is like, when are we going to get to the point where this thing can do 80 percent of an attorney's job?

[00:35:58] Mike Kaput: Right 

[00:35:59] Paul Roetzer: Now we got a [00:36:00] problem. And, and that's like, that's the part where I think we're way closer to the answer to those kinds of questions being yes than most people want to accept. And, and so I want, and maybe this is something like, again, kind of like the literacy project. Part of the reason I did that was because

[00:36:20] Paul Roetzer: we talked all last year about how someone has to step up and do this. Like we need to drive literacy across America and throughout the world. And it just wasn't happening. So I was like, all right, let's just do it. I almost feel like maybe this needs to like fit under the umbrella of the literacy project.

[00:36:32] Paul Roetzer: It's like somebody has to start doing these evals at the professional level, at the job level. And looking and saying, okay, this job, within 12 to 18 months, is going to achieve like 30 to 50 percent automation. Okay, what are we doing about it? Like, let's be proactive here. Let's not wait until we get to 2027 and these things are, have passed humanity's last exam and they can do 90 percent of the jobs.

[00:36:55] Paul Roetzer: Like, I'm not saying that's going to happen. I'm not like, don't quote me on like 90 percent of the jobs by [00:37:00] 2027. I'm saying jobs are going to be transformed. These things are going to increasingly do the tasks that make up the jobs. And no one's doing anything about it, like nobody's running evals on that and nobody's like proactively reskilling and upskilling based on that or telling us what all these amazing 50 million new AI jobs are going to be.

[00:37:17] Paul Roetzer: Like, I don't see it. I don't see 50 million new jobs being created by AI by 2027 or 2030. I would love for somebody to tell me what they're going to be. Or at least, like, directionally say what they're gonna be. again, it's where Dario and Sam, they talk in these generalities, like, it's just gonna be okay.

[00:37:36] Paul Roetzer: It's gonna, it always happens when we have general purpose technologies, like, new jobs come and it's gonna be amazing. No, it's not. It's not gonna happen that fast. Maybe ten years from now we'll get there, but not in three years. So until somebody lays out that plan for me, I have a really hard time believing that, that it's just going to work out.

[00:37:56] OpenAI Targets AGI with System That Thinks Like a Pro Engineer

[00:37:56] Mike Kaput: So this next topic is actually quite related to this because it's kind of [00:38:00] looking at something like this, what you're talking about, in practice, because we got news that OpenAI is reportedly developing a new AI system that aims to match the capabilities of expert software engineers. So this comes from The Information, and they're reporting that this advanced coding agent

[00:38:17] Mike Kaput: is designed to handle complex programming tasks that typically require senior-level engineering expertise. So, you know, there's existing tools already like ChatGPT and other assistants like GitHub Copilot that help with programming tasks, but this new agent is being designed to tackle really sophisticated challenges like code refactoring and system architecture.

[00:38:41] Mike Kaput: That's work typically performed by, like, high-level, what they would call an L6 or senior staff engineer. So Sam Altman actually views this as crucial to the company's revenue targets. So they want to reach 1 billion daily active ChatGPT users in the next year, and they want to generate a hundred billion in revenue by [00:39:00] 2029.

[00:39:01] Mike Kaput: Those kinds of targets require them going after these types of jobs. So like, Paul, I'm not a software engineer, but the reason I mentioned it's related to the previous topic is, like, this is an example of at least one AI lab explicitly targeting a high-level, highly paid knowledge work task. I mean, Zuckerberg's talked about this as well, that AI is going to replace mid-level engineers.

[00:39:25] Mike Kaput: It seems like the impact on highly skilled jobs is like happening now, but to your point, we're not really preparing for this, even when they're telling us what the roadmap is.

[00:39:34] Paul Roetzer: Yeah. And so this, you know, it's interesting as you're reading this, it made me think back to the quote I've mentioned many times on the podcast, from Automate This by Christopher Steiner, where he talked, and this is the book I read in 2012 that sort of like tipped me into like insane curiosity around AI and the world.

[00:39:54] Paul Roetzer: his equation was the potential to disrupt plus the reward for disruption. So if you're going to automate [00:40:00] jobs, if you're going to apply your ability to build these agents to take actual entire jobs, What is the most valuable job to an AI research firm? It is an engineer or an AI researcher. So, the ones we're going to hear about, as you're pointing out, we are now hearing about.

[00:40:15] Paul Roetzer: We have Meta Zuckerberg telling us by the middle of this year they're going to have a mid-level engineer. We have OpenAI telling us they're going to build this. So, what are you going to build first if you're capable of fulfilling an entire job of a human? You're going to, you're going to build an AI

[00:40:29] Paul Roetzer: researcher and AI engineer, because the compound value of that engineer is massive and, if it can work with other engineers, now you can build more stuff. So it's not saying the job of engineers goes away. It's saying like, we can employ a thousand of these super engineers and we only need a hundred or two hundred or whatever human engineers to like manage these thousand or million, you know, AI engineers.

[00:40:52] Paul Roetzer: Yeah. So, this is kind of the canary, was it canary in the coal mine? Is that the right? Yeah. Yeah, like, this is it. We [00:41:00] build, once we do this, once we've built the thing that's really complex, Now, what's stopping us from going and building the next thing that offers massive value? And so you start working from the top down of how much value can be created by building an AI version of a whole job or profession.

[00:41:17] Paul Roetzer: so it's, this, this is my point. We're not going to get to the end of 2025 and have just replaced the need for humans to do all these jobs, right? You're going to start to see the highest value jobs where the AI now can do 90 to 95 percent of the work. It doesn't eliminate the profession, but it does dramatically change what that profession looks like when we have an AI accountant or an AI attorney or whatever it is that can now do the majority of what that high performing human would do.

[00:41:47] Paul Roetzer: So this is the stuff we're not modeling enough. People aren't talking about this enough. Economists just are ignoring this, which I just cannot comprehend, but they're not thinking about the reality of this and the impact on the economy or the impact [00:42:00] on education, like as we think about the jobs for our kids and stuff.

[00:42:03] Paul Roetzer: So. Yeah, it was so funny. Like, I had no intention of, like, this being, like, the thread of this podcast episode. It just so happened that as you're going through these topics, they're all building on this same concept. 

[00:42:17] Zuckerberg Says Meta Will Have 1.3M GPUs by Year’s End

[00:42:17] Mike Kaput: All right. So, Meta CEO Mark Zuckerberg has announced that the company plans to spend up to $65 billion this year on AI infrastructure, which is nearly double their spending from last year and well above Wall Street's expectations.

[00:42:34] Mike Kaput: So this investment includes construction of a new data center with more than two gigawatts of computing power. That's enough to cover a significant portion of Manhattan. They also plan to amass an arsenal of over 1.3 million GPUs by year's end. This would cement them as one of the largest buyers of NVIDIA chips.

[00:42:54] Mike Kaput: So this comes, of course, just days after the unveiling of the $500 billion Stargate initiative, [00:43:00] which is going to benefit OpenAI primarily. And Zuckerberg really just emphasized the strategic importance of the investment. He thinks 2025 is a defining year for AI and wants to expand Meta's AI assistant to serve more than a billion people by year end.

[00:43:18] Mike Kaput: So, Paul, like the big elephant in the room here is that this is like, part of this is likely DeepSeek putting the fear of God into Meta, perhaps, but they also were probably going to be very aggressive with R&D regardless, and infrastructure investment, rather.

[00:43:35] Paul Roetzer: Yeah, I mean, this was, it was just such a weirdly timed flex.

[00:43:39] Paul Roetzer: Like, the day when everyone else is like, Oh my God, Llama 4 just got surpassed and they're gonna have to not release it. And Zuckerberg's entire strategy was to undercut the market with open source models and now China just undercut him. And like, he's tweeting a picture of Manhattan with like the size of their forthcoming data centers.

[00:43:57] Paul Roetzer: It's like, I guess you're just doubling down right or wrong on [00:44:00] this whole thing. So yeah, I mean, it's been in the works forever and they already had this CapEx allocated for the year, the 60 some billion, but it was just so bizarre to like see that tweet or the threads or wherever he put it, that this is what they're doing when everyone's like, Dude, did Llama just get completely undercut?

[00:44:17] Mike Kaput: Right. Yeah, and you're spending more. And like you and I were talking about before the episode, there's no way to tell why some of these stocks are moving the way they do, but I don't understand how they're up a bit. Yeah, I'm truly 

[00:44:30] Paul Roetzer: This is why I don't day trade. This is why I just buy and hold, like, the stocks I really believe in.

[00:44:35] Paul Roetzer: Last week, I said I was losing faith in Apple and they're up two and a half percent. I assume Meta would be down more than Nvidia today, which is now down 17%. Jeez, oh man, I am not looking at my retirement portfolio. I want to get off of here. and then Meta is up like 2%. I was like, what? It makes no sense.

[00:44:54] Gemini 2.0 Flash Thinking

[00:44:54] Mike Kaput: All right. Our next update here this week is that Google has come out with a big update to Gemini [00:45:00] 2.0 Flash Thinking, their experimental thinking model. The new model showcases some remarkable capabilities. It can process up to 1 million tokens of text, which is five times more than OpenAI's o1 models.

[00:45:14] Mike Kaput: It also has faster response times. It has achieved unprecedented scores on advanced math and science benchmarks. And what sets this release apart is really how Gemini 2.0 Flash Thinking goes about doing all these reasoning tasks. It actually explicitly shows its work, which makes its decision-making process transparent to users.

[00:45:37] Mike Kaput: The model has already claimed the top spot, for the time being, on the Chatbot Arena leaderboard, and leads in categories including hard prompts, coding, and creative writing. Now what's interesting is, at least for the time being, it is also free. Google offers the model to anyone during its experimental beta testing phase in the Google AI Studio platform.
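If you want to try it yourself, here is a minimal sketch of calling the model from Python with a free AI Studio API key, using the google-generativeai SDK. The model ID shown is an assumption based on the experimental naming at the time of the episode; check Google AI Studio for whichever identifier is currently live.

```python
# Minimal sketch: calling Gemini 2.0 Flash Thinking via the google-generativeai SDK.
# Assumes a free API key from Google AI Studio in the GOOGLE_API_KEY env var.
# The model ID below is an assumption (the experimental name at the time of the
# episode); check AI Studio for the identifier that is currently available.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
response = model.generate_content(
    "In how many ways can you arrange the letters of the word MAICON? "
    "Walk through your reasoning step by step."
)
print(response.text)
```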

[00:45:58] Mike Kaput: So Paul, I just look at [00:46:00] something like 2.0 Flash Thinking, and I'm like, just have to appreciate how fast things are moving. Like we just quickly got a more transparent thinking model. It's accessible, it's cheap. This is like such a change from 12 months ago.

[00:46:15] Paul Roetzer: Yeah. I mean, things have changed so dramatically.

[00:46:17] Paul Roetzer: I do think some of what you just said, like, is kind of what I saw people over the weekend talking about with DeepSeek. Like, one of the most fascinating parts of DeepSeek is to see the reasoning process and it's almost like, people would say like, it's like listening to a human's internal monologue, like the challenges it has, like, oh, the human wants this, oh no, I got to do this.

[00:46:34] Paul Roetzer: And it's like, debates with itself. So I think the more we see the underlying reasoning, because I think, like, o1 from OpenAI, if I'm not mistaken, like, it almost summarizes the reasoning, it's like, yeah, it does, yeah, there's like, this, like, DeepSeek, you, you truly see, like, the thoughts inside of a mind, it's like, okay, they're asking me for this, but if I give them this, then that's not going to answer their question, so I need to do, and it's like, this is what it's doing in, like, milliseconds.[00:47:00]

[00:47:00] Paul Roetzer: And so I think we're going to learn a lot about how these models work, the more exposed we are to the thought, the chain of thought that they're going through to, to create the output for us humans. I think it's gonna be a really fascinating part of people actually starting to realize why three years ago you had Google engineers worried that these things were conscious.

[00:47:22] Paul Roetzer: Like when you start to really see what they do, it feels very human. And it's very odd to have to separate yourself and realize that that's not actually what's happening, we don't think. 

[00:47:34] Mike Kaput: And interesting too, I realize this is just an experimental model, and Google obviously charges for a bunch of its AI stuff, but we even saw this with their Google Workspace pricing for Gemini: they are releasing stuff for not that much money, or for free.

[00:47:50] Mike Kaput: Because they don't necessarily have to rely on these subscriptions to power their business like someone like OpenAI might. 

[00:47:56] Paul Roetzer: Yep, which is a very large advantage. 

[00:48:01] Imagen 3 Hits #1 Image Generation Model on Leaderboard

[00:48:01] Mike Kaput: So the hits don't stop here for Google. They're having a great week as well with their Imagen 3 image generation AI system, which has now claimed the top spot on lmarena.ai's popular AI text-to-image leaderboard.

[00:48:12] Mike Kaput: So this model is now leading all the other image generation models out there from competitors like OpenAI, and it's leading right now by a wide margin. This leaderboard, which we've talked about a bunch, ranks AI model capabilities based on a number of factors, including which models people actually prefer to use based on human votes.

[00:48:37] Mike Kaput: So in this particular leaderboard for these image generation models, the site doesn't just rank overall how good the model is, but also how good it is in specific categories. So there's one category that they have titled "user prompts only" that basically evaluates how well these models handle real-world use cases.

[00:48:55] Mike Kaput: Imagen 3 is number one in that area as well. [00:49:00] So Paul, with everything going on, especially news around reasoning models, we haven't talked too, too much about image models in the last episode or two, but it sounds like innovation has been moving at light speed here too, especially from Google. 

[00:49:14] Paul Roetzer: Yeah. I've heard lots of good things about this model.

[00:49:17] Paul Roetzer: I haven't personally tested it in a little while. DALL-E, OpenAI's model, seems to be standing still. I'm not sure what their plans are there, but you can just get a lot of the same generic outputs you did a year ago on DALL-E. So it seems like Google's made a lot of progress on not only image generation, but video generation, like we talked about with Veo.

[00:49:37] Paul Roetzer: So this is their whole vision of this multimodal model, multiple modalities in, trained on images and videos and text, and able to output those things. And so I do think we're going to see this vision really come together for Google, and maybe it is with 2.0 in the spring, or, you know, before then,

[00:49:59] Paul Roetzer: where you truly [00:50:00] have this really powerful model. I actually saw Logan Kilpatrick, I think it was on Sunday, tweet something in response to DeepSeek being number one in the App Store. It was a weird tweet. He said, if we packaged AI Studio as an app, it would be number one. I think he was saying there's so much amazing stuff happening within Google's AI Studio.

[00:50:21] Paul Roetzer: And if you just made all that super easy to access, it would just crush, because people would realize all the value that's sitting here in these different little products. So something interesting to watch. 

[00:50:32] Mike Kaput: Yeah. That is interesting to mention because I feel like even among very savvy people in our audience, I think a lot of them forget, like through AI studio and like a couple other sandbox areas Google has, you can access a bunch of these experimental models.

[00:50:46] Mike Kaput: Yeah. 

[00:50:46] Paul Roetzer: There are labs you can go in and test stuff out. Like, I think that's where Veo 2 is, isn't it? I think it might be a lab. Yeah. They've got all kinds of stuff in Google Labs. It's cool. 

[00:50:57] Altman Backed Retro Biosciences Raises $1B to Extend Human Life

[00:50:57] Mike Kaput: Our next topic is [00:51:00] about Sam Altman doubling down on his mission to extend human life. So we had referenced in a past episode a startup he funded, Retro Biosciences.

[00:51:10] Mike Kaput: They are now launching an ambitious billion-dollar fundraising round. So this San Francisco-based company was initially seeded by Altman with $180 million to develop AI-powered treatments aimed at increasing human lifespan by a decade. They have now partnered with OpenAI to create a specialized AI model that designs proteins capable of temporarily converting regular cells into stem cells, potentially reversing the aging process.

[00:51:37] Mike Kaput: They plan to begin clinical trials this year, starting with a potential Alzheimer's treatment in Australia. They're also looking to accelerate the traditional drug development timeline. Rather than the typical 10 to 15 years required to bring a drug to market, they are targeting their first drug release by [00:52:00] the end of the decade.

[00:52:01] Mike Kaput: They're pursuing three main drug candidates right now. One is a pill that restores cells' internal recycling processes, one is a therapy to replace brain cells linked to Alzheimer's, and one is a treatment to rejuvenate blood stem cells. For this funding, the company is currently in talks with family offices, venture capitalists, sovereign wealth funds, and a major U.S. data center provider to secure the massive compute needed for these AI models.

[00:52:31] Mike Kaput: So, Paul, kind of related to what Demis Hassabis was saying in his interview at Davos, which we covered in the last episode, definitely seems like we're seeing some big moves in AI for scientific progress. 

[00:52:44] Paul Roetzer: Yeah. So I just made a note to myself, and it was kind of a joke, but human life extension is like the new rocket company for billionaires.

[00:52:51] Paul Roetzer: So, you know, 10 years ago, if you were a billionaire, you needed to be building a rocket company like SpaceX or Blue Origin. Now you need to be all in on human life extension. [00:53:00] Seems like all these guys are talking about this. So I'm back to Dario Amodei's Machines of Loving Grace essay. In that October article, he talked about biology and health.

[00:53:12] Paul Roetzer: He said, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over 50 to 100 years into five to 10 years. And he talked about doubling the human lifespan. This might seem radical, but life expectancy increased almost two times in the 20th century, from about 40 to 75 years.

[00:53:30] Paul Roetzer: And so it's on trend that the compressed 21st century would double it again, to 150. So Dario, back in October, is talking about living to be 150. In the Demis interview, he talked about biology and life extension, and he said that the current understanding in biology is that 120 seems to be the natural limit, but that he would be very surprised if that is in fact the limit on human lifespan, and that he definitely sees the ability in this [00:54:00] generation to have people living commonly past 120.

[00:54:03] Paul Roetzer: So I think part of it is these AI people see the future and see how AI is being applied to advancements in biology, like AlphaFold, and in chemistry. And they think of aging as a disease that's solvable. Like, they don't see anything that actually prevents the rejuvenation of cells and stuff like that.

[00:54:24] Paul Roetzer: So to them, it's not even that crazy to talk about human life extension. And so it's logical that they would want to play a part in that and live longer themselves and benefit other people, I guess. It's a whole other topic, man. 

[00:54:39] Mike Kaput: Yeah, no, that feels a lot like what you were trying to communicate with the point

[00:54:44] Mike Kaput: Demis made about how good AlphaFold is at this stuff compared to humans, like it's the equivalent of a billion years of PhD research. It's like, okay, we'll see if the life extension thing is actually proven out. But the point being, these models are already capable of doing [00:55:00] things that we couldn't even have dreamt up.

[00:55:01] Mike Kaput: So 

[00:55:02] Paul Roetzer: why not? And what Demis talked about is that their next frontier is building virtual cells. They're actually trying to build a human cell simulation. They took a bunch of the AlphaFold people and put them on cell creation, and once you achieve that, which he thinks they can do in the next five years, you can actually test drugs in a simulated human cell to see how they would work.

[00:55:24] Paul Roetzer: So he thinks within five years they'll have the ability to simulate human cells, and then once you have that ability, you can run all these simulations. So he sees massive advancements within a decade, because he's very confident they're going to have the ability to run these simulations within five years.

[00:55:41] Paul Roetzer: It's wild. Like it's so, it's so crazy to think about like the possible outcomes of this stuff. 

[00:55:47] AI for Public Speaking Prep

[00:55:47] Mike Kaput: All right. Our last topic today: we're trying to share practical AI use cases where we find them, either from others or stuff we're doing. So I just wanted to quickly share one use [00:56:00] case I found for some of the Google AI Studio tech.

[00:56:03] Mike Kaput: These are the experimental models that we just referenced in a previous topic. So I'll just quickly, Paul, go through this, and then we can wrap this up. But what I was able to jump into Google AI Studio and use some of their experimental models for was a pretty cool experiment with leveling up my speaking skills.

[00:56:22] Mike Kaput: So I do quite a few talks. Paul, you're on the road all the time doing talks. I was in San Francisco last week doing a talk, and, you know, obviously you prep quite a bit, even for short talks. And what I found really helpful is I was able to actually turn AI into my own personal speaking coach.

[00:56:41] Mike Kaput: So I used a custom prompt in Google AI Studio. I basically recorded myself practicing, uploaded the talk to Google AI Studio, the prompt basically being: you're a speaking coach, here's how I want you to analyze my talk. And then it went ahead and actually analyzed it for tone, pacing, delivery, [00:57:00] and more. And the reason I did this with Google AI Studio is because you can actually upload audio and video into a very large context window, and I was doing a 35-minute talk.

[00:57:10] Mike Kaput: And what's really cool is it caught all this stuff I just never would have even been aware of. Like, it timestamped my best moments. It talked about when my energy peaked. It said, hey, you nailed this stat or this example, and it highlighted a ton of other stuff to focus on or improve. And then I was able to actually compare across different practice runs.

[00:57:31] Mike Kaput: So I'd do one, read the feedback, do it again, try to apply it, and so on and so forth. It was just a really cool, really practical, immediately useful way to use some of these tools. 
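To make that workflow concrete, here is a minimal sketch of how the same idea could be wired up with Google's google-generativeai Python library. The model name, file name, and coaching prompt are illustrative assumptions, not the exact prompt Mike used, and the same thing can be done entirely in the AI Studio interface without writing any code.

```python
# Minimal sketch of the "AI speaking coach" workflow: upload a recorded practice
# run and ask a Gemini model to critique it. Model name, file name, and prompt
# wording are illustrative assumptions.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

# Upload the recording via the File API (audio or video both work).
recording = genai.upload_file("practice_talk.mp4")

# Audio/video files are processed asynchronously; wait until the file is ready.
while recording.state.name == "PROCESSING":
    time.sleep(5)
    recording = genai.get_file(recording.name)

coaching_prompt = (
    "You are an experienced public speaking coach. Analyze this recorded talk "
    "for tone, pacing, energy, and delivery. Timestamp the strongest moments, "
    "flag filler words and rushed sections, and give three concrete things to "
    "improve before the next practice run."
)

model = genai.GenerativeModel("gemini-2.0-flash-exp")  # assumed model name
response = model.generate_content([coaching_prompt, recording])
print(response.text)
```

Re-running the same prompt against each new practice recording is what makes the compare-across-runs step possible.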

[00:57:43] Paul Roetzer: That's awesome. I know you put it on LinkedIn, which is when I saw it. It just so perfectly demonstrates what we always talk about.

[00:57:53] Paul Roetzer: And again, going back to the importance of AI literacy: understanding what AI is capable of enables you to then figure [00:58:00] out ways to use it, not just for the mundane things, like the repetitive, data-driven stuff you don't enjoy doing, but for the creative, innovative things. That's the potential we all have: to find ways to use this technology in a positive way, to make us better at our jobs, to make us enjoy our jobs more.

[00:58:17] Paul Roetzer: It doesn't all have to be just replacing repetitive stuff and driving efficiency. It can be about creativity and innovation assisted by AI. But first, you've got to understand that it's possible. If you didn't even know Google had an AI Studio, you would never think to do this. And I think each week that's what Mike and I try and bring: this

[00:58:36] Paul Roetzer: really foundational knowledge, so that hopefully you can go take it and do all kinds of cool, incredible things that Mike and I might not even think to mention on the podcast. So yeah, I think these kinds of real-world examples are awesome. It's fun for me, because I see more and more people in my LinkedIn network who share stuff like this on LinkedIn.

[00:58:55] Paul Roetzer: Like, hey, I did this cool thing last week with AI. I think that's inspirational for me, [00:59:00] because you just see people taking knowledge about AI and going and doing cool things. And that's the opportunity. In any profession, any business, once you get through the fear of this stuff and the uncertainty we all face, and you just say, I'm just going to go use it to the best of my ability and we'll see what happens, figure the rest out later, you really start to get excited about what's possible, and you start showing yourself what you can do to kind of reimagine your career. And that's what keeps us optimistic about the future.

[00:59:30] Mike Kaput: Yeah, for sure. It's a lot of fun to also just mess around with and be able to discover this stuff for yourself. All right, Paul, that is a wrap on our second episode of today, the second episode we're releasing this week. Just a quick note for everyone: if you haven't checked out our newsletter, go to MarketingAIInstitute.com/newsletter.

[00:59:47] Mike Kaput: It has all the news that we've covered today and everything we couldn't fit into the episode. And leave us a review if you can on your podcast platform of choice. We'd really, [01:00:00] really appreciate it. Paul, thanks so much. How was that? That was a lot. 

[01:00:04] Paul Roetzer: While we've been on this, I got like four messages from people asking me about DeepSeek.

[01:00:07] Mike Kaput:

[01:00:09] Paul Roetzer: I was just going to say, this was episode 133. Alright, yeah, thanks everyone. Who knows what this week is going to bring. I can't imagine it's going to be as wild as last week, but I assume we'll be back to one weekly episode next week. I don't think we're going to make this a regular practice.

[01:00:28] Paul Roetzer: Our entire Monday has been taken up doing these podcasts. But alright, hopefully it was helpful for everyone. We will be back next week with our regular weekly episode. Thanks again. Thanks for listening to the AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey.

[01:00:46] Paul Roetzer: And join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and [01:01:00] engaged in the Slack community. Until next time, stay curious and explore AI.
