Powerful AI by 2026? Tesla's steering wheel-free future? Nvidia's compute explosion? Join Paul Roetzer and Mike Kaput as they unpack Dario Amodei's bold 15,000-word manifesto predicting the arrival of "powerful AI" by 2026 and its transformative impact on society. Our hosts also examine the future implications of Tesla's "We, Robot" event, Nvidia CEO Jensen Huang's revealing interview, AI-related Nobel prizes, AI’s impact on search, and more in our rapid-fire section.
Listen or watch below, and scroll down for show notes and the transcript.
Today’s episode is brought to you by rasa.io. Rasa.io makes staying in front of your audience easy. Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant but compelling.
Visit rasa.io/maii and sign up with the code 5MAII for an exclusive 5% discount for podcast listeners.
Listen Now
Watch the Video
Timestamps
00:05:26 — Dario Amodei’s Manifesto
- Machines of Loving Grace - Dario Amodei
- Anthropic CEO goes full techno-optimist in 15,000-word paean to AI - TechCrunch
- Episode 94 of The Artificial Intelligence Show
00:26:47 — Tesla’s We Robot Event + Elon Musk
- Everything Announced at Tesla's 'We, Robot' Event - CNET
- Tesla’s big ‘We, Robot’ event criticized for ‘parlor tricks’ and vague timelines for robots, Cybercab, Robovan - VentureBeat
- Starship - SpaceX
- SpaceX X Status
00:41:09 — Jensen Huang BG2 Interview
00:51:14 — AI-Related Nobel Prizes
- Geoff Hinton Wins Nobel Prize
- Demis Hassabis Wins Nobel Prize
00:59:23 — AI’s Impact on Search
- Google’s Grip on Search Slips as TikTok and AI Startup Mount Challenge - The Wall Street Journal
- How to Fight Back Against a Traffic-Less Web - SparkToro
00:58:32 — New Apple Paper Says AI Can’t Reason
- GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
- Mehrdad Farajtabar X Status
- This AI Pioneer Thinks AI Is Dumber Than a Cat - The Wall Street Journal
01:02:54 — Zoom AI Announcement and Updates
- Zoom will let AI avatars talk to your team for you - The Verge
- Zoom introduces AI Companion 2.0 and the ability to customize AI Companion with a new add-on - Zoom News
- Episode 104 of The Artificial Intelligence Show
01:06:05 — Apple Intelligence Release Date
01:08:33 — The Rise of AI-Powered Job Applications
Summary
Dario Amodei’s Manifesto
Anthropic CEO Dario Amodei has made waves in the AI community with a 15,000-word essay painting an incredibly optimistic picture of AI's future impact on society.
The essay “Machines of Loving Grace,” subtitled “How AI Could Transform the World for the Better,” provides Amodei’s radical predictions for how this positive future might play out.
Amodei provides a series of predictions over the course of the paper, predicting that what he calls “powerful AI” (he doesn’t like the term AGI) could arrive as soon as 2026. Powerful AI, by his definition, is AI that surpasses Nobel Prize winners in most fields in terms of intelligence, can engage in actions and accomplish tasks autonomously, and can collaborate with millions of copies of itself to achieve goals.
With this powerful AI, Amodei then predicts major impacts on biology, neuroscience, economic development, governance, and the future of work in the next decade.
Additionally, Amodei acknowledges AI will disrupt the job market but believes humans will retain some comparative advantage for some time, where we are able to complement AI’s capabilities.
However, he also says that “in the long run” AI will become so broadly effective and so cheap that at some point our current economic setup will no longer make sense.
Tesla’s We Robot Event + Elon Musk
Tesla's highly anticipated "We, Robot" event showcased the company's ambitious vision for the future of autonomous transportation and robotics.
The event featured key announcements and upcoming innovations from Tesla.
One of the announcements was the Cybercab: a two-seater autonomous vehicle designed without a steering wheel or pedals, expected to cost under $30,000, with a projected operating cost of just $0.20 to $0.30 per mile.
Next is the Robovan: a spacious autonomous vehicle capable of transporting up to 20 passengers or carrying goods, aimed at addressing high-density urban transport needs.
Additionally, plans were unveiled to launch unsupervised Full Self-Driving (FSD) in Texas and California by 2025, beginning with the Tesla Model 3 and Model Y.
The video that brought the most attention to the event featured the Optimus humanoid robot. The company showcased Optimus robots performing tasks like bartending and playing games, and projected a retail price of $20,000 to $30,000.
However, the event also drew criticism for its lack of concrete timelines and potentially misleading demonstrations.
Musk admitted to being "optimistic with timeframes," with production dates for Cybercab set vaguely for "before 2027." Reports also suggested that the Optimus robots were controlled remotely by humans rather than fully autonomous.
Questions also remain about Tesla's ability to gain regulatory approval for vehicles without traditional controls like steering wheels and pedals. There are also safety concerns related to existing FSD technology, which has faced scrutiny following accidents, raising doubts about its readiness for full autonomy.
Critics also noted a shortage of concrete information on how these technologies would handle adverse situations or complex scenarios.
The market also weighed in: The day after the event, Tesla’s stock dropped 9%.
Jensen Huang BG2 Interview
One of the top CEOs in AI just shared a glimpse of the near-future of the industry. On Sunday, Oct. 13, the popular BG2 Pod, hosted by noted VCs Brad Gerstner and Bill Gurley, sat down with Nvidia’s CEO Jensen Huang to talk about the future of Nvidia and AI.
They covered a ton of incredibly useful points that give us some clues as to where we’re headed.
Huang predicts a massive increase in demand for AI compute, with clusters of 200,000 to 300,000 GPUs becoming common.
Inference is also becoming increasingly important, with Huang expecting it to grow "by a billion times" due to chain-of-reasoning capabilities. Nvidia is focusing on enabling fast, interactive inference experiences for AI assistants.
Huang believes AI will dramatically increase productivity across industries, potentially leading to a new era of economic growth. He envisions a future where everyone has personal AI assistants capable of complex reasoning and task completion.
While Huang acknowledges that AI will transform every job, he does not anticipate widespread job losses. Instead, he expects that companies leveraging AI for increased productivity will likely expand and hire more employees, not fewer.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: So I see the autonomous vehicle industry as a great preview of what we are all going to face in our own industries. As we look at their effort to drive autonomy into the auto industry, you can step back and think about how this will play out as agents start coming into your industry. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable.
[00:00:27] Paul Roetzer: My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:48] Paul Roetzer: Join us as we accelerate AI news.
[00:00:55] Paul Roetzer: Welcome to episode 119 of the Artificial Intelligence Show. I'm your host, Paul [00:01:00] Roetzer, along with my co host, Mike Kaput. We have an interesting week. There was like, not a crazy amount of like product announcements and news, but just some big things. We had the Dario Amodei essay we're going to kind of dive into.
[00:01:15] Paul Roetzer: We had a We, Robot event from Tesla. We had Jensen Huang, the CEO of NVIDIA, doing a wide ranging podcast. So. There's like some macro stuff to talk about today. And I think like, I don't know, Mike, this is one of, I spent more time prepping for this podcast than most because the depth of the stuff we needed to talk about and sort of synthesize and figure out the meaning in between everything, like, I don't know, there was just, there was a lot.
[00:01:42] Paul Roetzer: So my Sunday night was my, my brain was on overdrive when I was prepping. Finally went to bed Sunday night getting ready for this.
[00:01:48] Mike Kaput: Yeah, I feel like this week was just like, the theme is like, buckle up for AGI or something. Yeah, it was like,
[00:01:53] Paul Roetzer: setting the stage. And like, Amodei's essay on the, on the back of Altman's essay, you [00:02:00] know, a few weeks back, and it's like all these big AI research entrepreneurs are preparing us, I think, for something that's coming soon, as we'll talk about, Dario definitely thinks it's coming soon.
[00:02:13] Paul Roetzer: Right. So, plenty to talk about. We are recording this Monday morning, 9 a.m. Eastern time, October 14th, just to timestamp it for you. Today's episode is brought to us by Rasa.io. We've talked about Rasa recently. If you're looking for an AI tool that makes staying in front of your audience easy, you should definitely check out Rasa.io.

[00:02:32] Paul Roetzer: Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant but compelling, and it also saves you time. We've known the team at Rasa.io for about six years now. They were one of our early partners at Marketing AI Institute when I first launched it.
[00:02:54] Paul Roetzer: And no one else is doing newsletters like they are. They're offering a 5 percent discount with the code 5MAII [00:03:00] when you sign up. So visit rasa.io/maii today. Once again, that's rasa.io/maii today. And this episode is also brought to us by our Scaling AI webinar. So we have our Scaling AI paid course series.
[00:03:20] Paul Roetzer: I also do a free monthly webinar where we go through the five core steps so you can build a smarter version of any business with AI. AI for marketing, sales, and service are our obvious value drivers, with hundreds or thousands of use cases. Leaders who scale AI for the greatest impact will infuse it across all functions of the organization, which means you have an unprecedented opportunity to reimagine your business and reinvent your industry.
[00:03:44] Paul Roetzer: So how do you do that? Register for the upcoming webinar, Five Essential Steps to Scaling AI in Your Organization. it's coming up on October 16th, so that is this week. You will learn a framework that I've taught to thousands of corporate, education, and government leaders. In the [00:04:00] webinar, we'll go through key trends in AI impacting businesses today, how to overcome obstacles to AI adoption, effects of AI in the economy, industries and career paths, processes to build an AI savvy workforce, and more, including those five steps that we talked about.
[00:04:14] Paul Roetzer: The webinar takes place Wednesday, October 16th at noon Eastern time. That's 12 p.m. Eastern time. You can go to ScalingAI.com and click on register for our upcoming webinar. It's right at the top. You can buy the series, but right under that is a button to register for our upcoming webinar. Just click on that and you'll get signed up.
[00:04:33] Paul Roetzer: If you cannot make it at noon on October 16th, we will send you on-demand access to it. It's live for about a week after. So definitely register, get your email in there, and you will get an email from our team with access to that webinar. I go for about 35 minutes or so presenting the framework and then we do Q&A live.
[00:04:53] Paul Roetzer: So the whole thing is live. So again, I run it every four weeks. I think this is our fourth or fifth edition of this [00:05:00] webinar. So definitely check it out again at scalingai.com. You can learn more about the webinar and you can also learn more about the course series. Okay, Mike, this is, this is what I was like, I don't know, man.
[00:05:12] Paul Roetzer: This is going to be hard to like go back to a regular Monday after we finished talking about all this stuff is some big, big things. It was hard to pick, honestly, the main topics today. There was a lot going on that, that would have, warranted spending some time exploring.
[00:05:26] Dario Amodei’s Manifesto
[00:05:26] Mike Kaput: All right. So let's dive in then.
[00:05:28] Mike Kaput: So first up, Anthropic CEO Dario Amodei is making some big waves in the AI community and beyond. He's released a massive 15,000-word essay that paints a very optimistic picture of AI's future impact on society. Now he outlines in this essay a very bold vision that's quite inspiring, but it's also raising eyebrows and sparking debate about the realistic potential and risks of AI.
[00:05:59] Mike Kaput: So [00:06:00] this essay is titled Machines of Loving Grace, and it has the subtitle How AI Could Transform the World for the Better. Now, the title of this essay immediately gives you a footnote, because your first question is, what does this title mean? It goes to a poem by the poet Richard Brautigan, titled All Watched Over by Machines of Loving Grace.
[00:06:22] Mike Kaput: And this basically is a short poem that envisions a perfect, harmonious existence between technology, humanity, and nature. The essay itself then dives into some radical predictions that Amodei has for how this positive future could play out with AI. So, we're going to talk about a lot of them, but a few really jump out, which he goes in depth into over literally thousands of words.
[00:06:49] Mike Kaput: First up, he kind of basically predicts that what he calls, quote, powerful AI, he says he doesn't like the term AGI, but he's basically talking about extremely competent, extremely powerful AI. [00:07:00] He predicts it could arrive as soon as 2026, and literally in his definition, which he outlines, powerful AI is AI that surpasses Nobel Prize winners in most fields in terms of intelligence.
[00:07:15] Mike Kaput: It's AI that can engage in actions, can go accomplish tasks autonomously, and potentially even collaborate with millions of copies of itself to achieve goals. So, once we have this powerful AI, Amodei then predicts huge impacts on a bunch of key areas he decides to highlight a handful that are really relevant to him in his opinion, though he admits beyond these few, there's also a ton of other impacts.
[00:07:41] Mike Kaput: So, he focuses on biology, neuroscience, kind of global economic development, governance, and the future of work. So, I'm just gonna list out a few of the more radical predictions; there are literally tons of them in there. But, for instance, he claims that AI could [00:08:00] cure most diseases, eliminate cancers, and halt Alzheimer's within 7 to 12 years, and predicts AI will develop drugs for mental health conditions within 5 to 10 years.
[00:08:10] Mike Kaput: He predicts AI could cause the average human lifespan to double to 150 years. He believes AI could solve world hunger and reverse climate change, and that it could also cause dramatic economic growth in developing countries, and he suggests Sub-Saharan Africa's GDP, for instance, could end up matching China's current level within a decade.
[00:08:34] Mike Kaput: Additionally, Amodei kind of acknowledges that AI is going to disrupt the job market, but he believes humans will retain some comparative advantage for at least a little while, where we're basically able to complement AI's capabilities. But he also says that, quote, in the long run, this AI is going to become so broadly effective and so cheap that at some point, basically, our current economic setup will no longer make [00:09:00]
[00:09:00] Mike Kaput: So Paul, that's just kind of a taste of all the stuff he goes into here. There's a lot to unpack here, I'd say let's start at the beginning, like, why is Amodei's perspective on this subject important? Which parts of this essay, like, strike you as particularly worth paying attention to?
[00:09:19] Paul Roetzer: Just a reminder who he is, why does he matter, why does his opinion matter, so he led the team at OpenAI that was building GPT 2 and then GPT 3.
[00:09:30] Paul Roetzer: After doing that, well, prior to doing that, he was a senior research scientist at Google Brain for about a year, but after leading the team building GPT 3, he saw the opportunity to build a language model company, to build a frontier model company. He saw a window of opportunity to do this. He left OpenAI in 2021, took about 10 percent of the staff with him.
[00:09:53] Paul Roetzer: Founded Anthropic, so he is the co-founder and CEO. They have raised $7.6 billion to date. [00:10:00] We talked about them in one of the recent podcast episodes; they're rumored to be raising again at a $40 billion valuation, is what's being talked about. So when we look out to the companies that will build the largest, most generally capable models, there's probably five.
[00:10:15] Paul Roetzer: We have OpenAI, obviously. We have Google, Meta, Anthropic, and xAI, Elon Musk's company, which we'll talk more about in a few minutes. There's some other players building frontier models, and people are gonna build smaller models. Mistral, you know, is kind of in the running right now, but probably won't be.
[00:10:38] Paul Roetzer: It's going to be really, really hard to raise at the levels these companies are going to raise at. So those five appear to be the primary ones that are going to build the biggest models. So what Dario says, like we should care about and what he thinks happens next matters to us because if that's what he, he is envisioning, he's one of the five main people that are kind of building the [00:11:00] future.
[00:11:00] Paul Roetzer: And as we talked about in, like, previous episodes going back to last year, he doesn't talk much. Like, he doesn't do many interviews. He's not like Sam Altman. He's not active on Twitter. Like, it's hard to get at where Dario is at, but he does show up every once in a while and either do long form podcast episodes, like with Dwarkesh, or he drops an essay of 15, 000 words and lets us know what he's up to.
[00:11:24] Paul Roetzer: So, I think right away in the opening, he, he makes an important point. So he says, people sometimes draw the conclusion that I'm a pessimist or a doomer who thinks AI will be mostly bad or dangerous. Now keep in mind, they supported SB 1047. So Anthropic was for the legislation that we talked a lot about that was going on in California that eventually got vetoed by the governor.
[00:11:47] Paul Roetzer: so that's why he's saying like, people think I'm like a doomer, I'm the one that's trying to get regulation. He said, I don't think that at all. In fact, one of the main reasons for focusing on risks is that they're the only thing standing between us and what I see as [00:12:00] a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
[00:12:12] Paul Roetzer: So he says that in this essay, he wanted to sketch out what, he would say, quote, a world with powerful AI might look like if everything goes right. And then, as you said, he focused on kind of these five core areas because he has a background in neuroscience and biology, he has a reasonable background in economic development, and then just kind of his grand visions for peace and governance and work and meaning.
[00:12:33] Paul Roetzer: So, you touched a little bit on, you know, the powerful AI. We've talked so many times about this. Like, we can't agree on what AGI is, and then we have some people who don't even want to use the term AGI, so now we've got powerful AI in the mix. And so I thought it was interesting how he kind of broke it out.
[00:12:49] Paul Roetzer: So he said, I have in mind an AI model likely similar to today's LLMs in form. Though it might be based on a different architecture, meaning maybe not the transformer, [00:13:00] or a new version of the transformer is kind of what he's implying there, might involve several interacting models and might be trained differently.
[00:13:07] Paul Roetzer: So these are a lot of, like, mights for something that he thinks is very near term, which I immediately found kind of interesting and would imply to me that maybe they already have some new approaches to architectures, kind of thing. So, Mike, you touched on the Nobel Prize winner thing. Now, the interesting thing here is, and again, I'm going to come back to his timelines in a moment, but I think it's really important to set the stage for where he's envisioning this.
[00:13:34] Paul Roetzer: So, smarter than a Nobel Prize winner across most relevant fields. On a recent episode, we talked about Google DeepMind's levels of AGI, which we'll put the link to in the show notes. Their levels go from zero, no AI, all the way up to level 5, which is superhuman, outperforming 100 percent of humans.
[00:13:55] Paul Roetzer: Level 4 is virtuoso, at least 99th percentile of skilled [00:14:00] adults. He's basically saying that this powerful AI will be level 4 to level 5 in Google DeepMind's levels of AGI. So that's an important distinction. He says it'll be, this is my words summarizing, multimodal and agentic. So he specifically says, quote, it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access.
[00:14:31] Paul Roetzer: So it can do everything we can do in an online world, basically. It can engage in any actions, communications, remote operations enabled with this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, etc. So again, we're going to come back to his timeline.
[00:14:51] Paul Roetzer: Keep in mind what he thinks this powerful AI is. This is a significant advancement. The next one I summarized again is [00:15:00] advanced reasoning and agentics. So in his words, it does not just passively answer questions. It can be given tasks that take hours, days, or weeks to complete, and then go off and do those tasks autonomously in the way a smart employee would, asking for clarification as necessary.
[00:15:18] Paul Roetzer: It can control existing physical tools, robots, and laboratory equipment through a computer. This is, this is an important one. The resources used to train the model can be repurposed to run millions of instances of it, and the model can absorb information and generate actions at roughly 10 to 100 times human speed.
[00:15:43] Paul Roetzer: It may, however, be limited by response time of the physical world or of software it interacts with. Now, again, this is starting to sound really sci fi to people. If you're not, like, following along really closely with what's going on and kind of the frontiers here, this is starting to sound sci fi. But to people [00:16:00] who have been paying close attention, this all aligns.
[00:16:02] Paul Roetzer: This lines up with the Situational Awareness paper from Leopold. It lines up with what we're hearing from OpenAI. Like, nothing I've seen here is anything new to me, because this is what they're all talking about. And then the one that gets kind of crazy is: each of these millions of copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
[00:16:31] Paul Roetzer: Now this brings in the OpenAI levels of AI we've talked about. So in OpenAI's levels of AI, we have Level 1 Chatbots, Level 2 Reasoners, which we got a preview of with O1, Level 3 Agents, which we're seeing demonstrations of right now, and you know, in a year or two, they might be reliable. Level 4 innovators, that's where they're coming up with new science, basically.
[00:16:54] Paul Roetzer: Level 5 organizations, dozens, hundreds, thousands, millions of agents [00:17:00] working together to run a company. Which brings us to the Ilya Sutskever quote from July 2023 that we've talked about where he said, The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization.
[00:17:16] Paul Roetzer: Its constituent AIs would work and communicate at high speeds like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles. Ilya and Dario worked together at OpenAI. They certainly probably shared ideas around these things. I'm going to come back to Jensen Huang's quote about this in a minute.
[00:17:36] Paul Roetzer: But what Dario then finishes with is, he thinks of this future as a country of geniuses in a data center. So, when is he envisioning all of this? Well, as you said, he thinks as early as 2026. So everything I just outlined, if he's envisioning that by 2026, that means [00:18:00] he sees nothing we just talked about as impossible within one to two generations of models from now.
[00:18:08] Paul Roetzer: GPT 6, Claude 5, they're seeing a scientifically possible path to achieving everything that was just outlined. So that brought me back last night, and again, why I was now having trouble sleeping on a Sunday night, to episode 94, which you'll remember well, Mike. We will put the link to episode 94 in the show notes. Episode 94 is when we talked about Anthropic's AI safety levels.
[00:18:35] Paul Roetzer: So this is when we went into the Ezra Klein interview with Dario Amodei. And in that interview they talked about these safety levels. So they have ASL-1, or AI Safety Level 1, which refers to systems that pose no meaningful catastrophic risk, like 2018-era LLMs.
[00:18:57] Paul Roetzer: Level 2 is present large language models. This is [00:19:00] where they would put Claude and Gemini and things like that. It refers to systems that show early signs of dangerous capabilities, but they can kind of control it. Level 3, significantly higher risk, refers to systems that substantially increase the risk of catastrophic misuse.
[00:19:15] Paul Roetzer: So this is, we didn't know yet, these are kind of theoretical levels at this point. And level 4 is speculative and higher; it's not defined yet. Like, we don't even know how to measure it. So in the interview that we talked about on episode 94, Ezra said, when you imagine how many years away, just roughly, ASL 3 is and how many years away ASL 4 is, you've thought a lot about this exponential scaling curve.
[00:19:42] Paul Roetzer: If you just had to guess What are we talking about? And he said, yeah, I think ASL 3 could easily happen this year or next. I think ASL 4, at which point Ezra Klein interrupted and said, oh Jesus Christ. And then Dario said, no, no, no. I told you I'm a believer in exponentials. I think ASL 4 could happen anywhere [00:20:00] between 2025 and 2028.
[00:20:03] Paul Roetzer: To which Ezra then said, if you don't believe these models will ever become so powerful they become dangerous, fine. But if you do believe that, how do you imagine this actually playing out? To which Dario said, look, I don't have an answer for that. Like, I'm one of a significant number of players trying to navigate this.
[00:20:42] Paul Roetzer: So, the reason I bring this back up is because everything in this essay continues to imply that the models they're building will be ASL 3 or 4, their own levels of threat, basically, within the next one to two years. And that interview was from, I think, April of this year, when we talked about that one?
[00:20:42] Paul Roetzer: Yeah, so we're talking about like five months later or so, six months later, continuing on this path. And so I don't know, like, his, his vision for where this goes, the way I think about this to kind of summarize, I guess, is there's a handful of entrepreneurs and AI researchers [00:21:00] who are envisioning and building the future, as Dario said, he's one of them.
[00:21:03] Paul Roetzer: He doesn't think he has that much ability to affect this, and yet they're the ones building the models. So, this is why Mike and I are spending time on this kind of thing. The article is, is out there, like, it feels weird when you're reading it. It feels too sci fi, but it's not, like. They're making decisions right now, and the paths that they choose to pursue will affect all of us, all of our kids, all of our communities, like our businesses, all of it's going to be affected by the decision these five companies are making.
[00:21:34] Paul Roetzer: And so what they think, say, and do matters. Now, I choose to be optimistic about the future. But the reality is there's a bunch of known risks and there's a bunch of unknown risks. And so the only way we can see through this is by being informed about it and then being intentional at our individual levels, like the best we can in choosing a responsible application of AI in our [00:22:00] business, in our industry, in our career, in our schools.
[00:22:03] Paul Roetzer: Because I do truly believe, like Dario, that the future can be abundant and incredible if we choose to be responsible and human centered. And I still have concerns that the five main companies that are driving the future of humanity, I don't know that they're really that aligned on what that looks like, because we can't agree on what AGI is.
[00:22:25] Paul Roetzer: We can't even call it the same thing anymore. And so I don't, I don't know, man. Like I, this is, like I said, this is a hard one to prep for because this was a lot to think about on a Sunday night and then to go back to that episode 94, which I found kind of unnerving at the time. Yeah.
[00:22:41] Mike Kaput: Yeah. I think one thing that jumped out to me is you and I have been following some version of the AGI conversation long before it became in vogue to start talking about it, but if he had published this essay three years ago, it would have been laughed out of the room.[00:23:00]
[00:23:00] Paul Roetzer: It's like Moore's law for everything from Sam Altman in 2021. People just ignored it outside of the AI research community. Like it was just too out there. And I do. I agree with you. I think too many business leaders, government leaders, educational system leaders will look at this and think it's crazy and it's sci fi and it's not something they have to think about.
[00:23:19] Paul Roetzer: All we're saying is they think you should be thinking about it. Like they think that we are within one to two years of this being a reality. And that can be incredible over the next decade. It can lead to all these incredible breakthroughs in biology and, you know, society and solving hunger and diseases and all these things can happen, but they are not promised.
[00:23:39] Paul Roetzer: Like, if everything goes right is basically what he said. This is an essay of if everything goes right. And so he spends all his time thinking about how it goes wrong to try and ensure that we have the best possible future, but he even at the end says, I'm not even sure that this is the future most people would want.
[00:23:56] Paul Roetzer: It's just what he thinks is a beautiful future. [00:24:00]
[00:24:00] Mike Kaput: So to kind of wrap this topic up here, I guess like the burning question I have is if you kick off your essay saying, I believe in the next handful of years we're going to have AIs that are Nobel Prize winners, functionally, at many, if not most, things. How do you not spend half this essay talking about the impact on everyone else's work who's not a Nobel Prize winner?
[00:24:24] Mike Kaput: Like, I just found it very hard to imagine in a world like that, that humans would have any comparative advantage for a second.
[00:24:33] Paul Roetzer: Well, he, and he does address that, because the economy and work is the one at the end. And he said, let's see, he started off with: with the AIs doing everything, how will humans have meaning? For that matter,
[00:24:43] Paul Roetzer: how will they survive economically? And then he basically says, like, hey, this is kind of the one I have the least to say about, like, that I know the least about. But he said it's worth saying a few words, while keeping in mind that the brevity of this section is not at all meant to be taken as a sign that I don't take these issues seriously; on the [00:25:00] contrary, it's a sign of a lack of clear answers.
[00:25:02] Paul Roetzer: So he gets into, like, the meaning of things. Humans and the meaning of work. And yeah, it's, he's basically just saying like, it's going to get strange. And I don't really know what to tell you, but I think the GDP will go up dramatically and that should create more opportunities for people to do work. And even if they're only doing 10 percent of what they used to do, they should still get some meaning from that.
[00:25:25] Paul Roetzer: It's very clear they don't know. And again, Sam Altman has been asked point blank about this. He doesn't know. Like, this is my biggest concern. It's, like, not to, like, I never want to create this, like, again, fear factor out of any of this, but the reality is they all think that within two years, the economy looks very different, jobs look very different, and yet none of them have an answer for what that means, or even a reasonable idea.
[00:25:50] Paul Roetzer: Like, there's no policy ideas other than universal basic income. Like, that's the thing, and he even threw that out. He's like, maybe it's that, or maybe it's not, I don't know. Like, how is [00:26:00] that not the thing that we're spending more time studying? Yeah, yeah. I just don't get it.
[00:26:04] Mike Kaput: Yeah, that's kind of what I'm getting at is like, yeah, there's, I totally respect this is not an easy problem to solve, but it's like, if it's coming this soon,
[00:26:13] Paul Roetzer: we are nowhere near ready.
[00:26:16] Paul Roetzer: Somebody, dude, yeah, someone in the government, Google, Microsoft, I don't care who it is. Somebody needs to make this their thing. Like, we, we gotta figure this out. We need deep studies about the future of the economy. We, we can't be looking backwards. That's what economists do. They look backwards to try and figure out the future.
[00:26:32] Paul Roetzer: That is not going to help us here.
[00:26:34] Mike Kaput: And we know some of those people are listeners. So I hope they hear this episode.
[00:26:36] Paul Roetzer: Somebody do something. We'll help. Like, call me. Like, I will do whatever we can, but I don't have the money it's going to take to do this.
[00:26:47] Tesla’s We Robot Event + Elon Musk
[00:26:47] Mike Kaput: All right. So our second big topic this week, Tesla had a highly anticipated event called We Robot that showcased the company's ambitious vision for the future of autonomous [00:27:00] transportation and robotics.
[00:27:02] Mike Kaput: There were some pretty key announcements at this event that included things like the CyberCab, which is a two-seater autonomous vehicle without a steering wheel or pedals that has an expected cost, allegedly, of under $30,000 and a projected operating cost of $0.20 to $0.30 per mile. They also unveiled the RoboVan, a large autonomous vehicle for up to 20 passengers or goods transport, that's basically being positioned as a solution for high-density urban transport.
[00:27:32] Mike Kaput: Elon Musk talked about full self-driving (FSD). There are plans to launch unsupervised FSD in Texas and California by 2025, and that's going to start with the Tesla Model 3 and Model Y. And, last but certainly not least, they showcased the Optimus humanoid robot. They showed Optimus robots performing tasks like bartending and playing games at the event, and [00:28:00] projected that the retail price would be $20,000 to $30,000.
[00:28:04] Mike Kaput: Elon Musk also claimed Optimus would be able to perform a wide range of household tasks once it comes out. However, the event also drew a bit of criticism for a lack of concrete timelines and maybe some misleading demonstrations. Musk admitted to being optimistic with timeframes, with production dates for CyberCab set vaguely for, quote, before 2027.
[00:28:30] Mike Kaput: Reports did start coming out that suggested the Optimus robots at the event were teleoperated rather than fully autonomous. There are a ton of questions about whether Tesla can actually gain approval for vehicles without traditional controls like a steering wheel or pedals. There are existing safety concerns already related to current FSD technology.
[00:28:52] Mike Kaput: There's been scrutiny following some accidents, and basically a bunch of questions: are we actually ready for full autonomy? [00:29:00] And critics also just noted a shortage of concrete information on how any of these technologies would handle adverse situations or complex scenarios. Now, the biggest thing was the market also weighed in, because the day after the event, Tesla stock plummeted by 9%.
[00:29:18] Mike Kaput: So, Paul, let's first talk about the event itself. You're a Tesla owner, you're an AI expert, a close follower of what Tesla's doing. You've got a more solid perspective on this than most people out there. Like, what did you take away from this event and what it means for AI?
[00:29:34] Paul Roetzer: Yeah, so I think the reason we're going to cover this as a main topic today is because what Tesla is doing matters, in the sense that it's a prelude to other industries pursuing autonomy.
[00:29:46] Paul Roetzer: So if you think about, and I'll explain what that means in a moment, but basically, as we look at their effort to drive autonomy into the auto industry, you can step back and think about how this will play out as [00:30:00] agents start coming into your industry. That, in theory, can do the work of the human, initially with human supervision, and then maybe eventually without human supervision.
[00:30:08] Paul Roetzer: So I see the autonomous vehicle industry as a great preview of what we are all going to face in our own industries. So, with that being said, this show, this was originally supposed to be in August. They delayed it to October; for whatever reason, they didn't say. But the assumption was they just weren't ready.
[00:30:29] Paul Roetzer: It's, it's probably best to describe this as a show. You know, they did show the RoboTaxi or CyberCabs, they previewed this van, they had the robots that were absolutely human controlled. Like, I'm actually still shocked that people weren't sure on site whether they were or not. Like, within three seconds of hearing one of them speak the first time, it was very obvious that these were not robots talking.
[00:30:53] Paul Roetzer: This was not a language model talking to you. So, the robots were absolutely controlled. They, they said, I [00:31:00] think if you asked the robot on site, they said they were human assisted is how they explained it, but they were a hundred percent being controlled by humans, and the voices were coming from humans.
[00:31:08] Paul Roetzer: So, it was a show. The most near-term potential one is the CyberCab, which is the two-seater with no steering wheel. If you read the Elon Musk bio by Walter Isaacson, it actually tells the origin story of the CyberCab. There's, like, a great background of Elon Musk's efforts to have them build the thing with no steering wheel and his engineers fighting him over it and all this stuff.
[00:31:33] Paul Roetzer: So he did say that, you know, between probably 2026, before 2027, and this is where his timelines become optimistic. Here's, here's my thoughts on the full self-driving. So my current Tesla has full self-driving supervised is what they call it. If there's anybody that also has one, 12.5.4.1. That is the current release version I have of this. [00:32:00]
[00:32:00] Paul Roetzer: So they will, they've been on a schedule where they're updating the software every, like, two to three weeks. You'll get a push and it'll go 12.4, and eventually they'll jump to 13 whenever the next breakthrough is, I guess. So I'm going to share the experience here because I think it is instructive of how agentic systems might learn and might play out in your industry.
[00:32:22] Paul Roetzer: So the Tesla today that I have, the full self-driving, which you can just put an address in and hit go, and it will drive you on city streets, on highways, and everything. It is level 2 in terms of autonomy; we'll put a link to the SAE levels of driving automation in the show notes. There is actually an industry standard for what the levels of autonomy are in the industry.
[00:32:43] Paul Roetzer: Basic premise is, what does the driver do? That's how they kind of determine these. So, the full self-driving supervised that Tesla currently has is level 2. So if you think about this in your job, let's say your CRM, or your accounting system, or your [00:33:00] legal analysis system. The human's got to be there at all times, in control, responsible for the outcomes of the system.
[00:33:05] Paul Roetzer: That's kind of what level 2 is. The human is totally in the loop at all times. Waymo is level 4, if anybody's been in a Waymo out in, you know, California or Arizona or whatever. But Waymo can't scale commercially, because they have a hardware problem. They're ugly, they have massive systems on the cars; like, they're not like a beautiful Tesla.
[00:33:25] Paul Roetzer: So Waymo's got a hardware problem; Tesla's trying to solve a software problem. The problem they currently face, and this is why I don't understand how the CyberCab works: if it is drizzling out, my full self-driving is useless. It will either shut itself off or it'll say it's degraded and it can't do what it's supposed to do because there's raindrops on one of the cameras.
[00:33:47] Paul Roetzer: It can't function if it's snowing out for the same reasons. If one of the cameras gets blocked, the system is useless. They never talk about this. I've asked online, Can someone explain to me how they're going to solve this? [00:34:00] I don't know how they solve that. It still makes simple errors. It'll swerve into the berm when you're trying to get off the freeway.
[00:34:06] Paul Roetzer: It'll go at wrong speeds. It'll change lanes unnecessarily. It'll cut people off. Like, it still does this. Now, if you follow Elon and, like, the Tesla fan people online, you don't hear this. Now, part of it is because in California, I think they actually label the data specifically for routes where there's a lot of Teslas, including Elon's personal routes.
[00:34:29] Paul Roetzer: So Elon's experience driving a Tesla is actually very different than mine, because the engineers are labeling the data on his routes to make sure that it's, like, perfect for what he's doing. So the way they do this is they monitor miles per disengagement. To disengage the system, I have to take over; like, the human has to grab control of the system and say, okay, I'm, I'm back in control.
[00:34:53] Paul Roetzer: When you do that in a Tesla, a prompt pops up and says, why did you disengage? Send us a voice thing, [00:35:00] and you can actually click the voice button and record up to 15 seconds and tell them why you did it. So think of this: when you start using agents in your company next year, you have to think of actions per intervention, which is kind of the term I started coming up with myself this morning, thinking about this.
[00:35:17] Paul Roetzer: So if I'm going to use an AI agent to do my emails or to do my customer service or to do my sales outreach, how often do I, as the human, have to intervene and stop the agent from doing something wrong or fix something it did? And so that's basically how they're doing it. And I think I heard
[00:35:37] Paul Roetzer: Elon recently say they want to get to 10,000 miles per disengagement as their current goal, and then eventually, like, a million miles driven per disengagement, meaning they don't ever want the human to have to, like, interact with this thing. When I use the thing on city streets, I can't imagine going more than two to three miles without a disengagement.
[00:35:58] Paul Roetzer: I do it all the time. On [00:36:00] highways, maybe you can get a dozen, two dozen on a straight stretch. You have to intervene all the time. This is what I think agents are going to be like. So when we hear about agents in our careers, in our businesses, I envision having these cool systems. And when they work, they're amazing.
[00:36:16] Paul Roetzer: Like full self driving is incredible when it works. But a lot of times it doesn't, and then I have to interact with it to tell it what's wrong. And I think that's how agents are going to work. Like, we're going to have these email and customer service and sales agents. But we're going to have to oversee them, we're going to have to intervene, and the software companies are going to have to monitor how often are we intervening, like what's going on?
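The "actions per intervention" metric Paul describes can be illustrated with a tiny monitoring sketch. This is a hypothetical example, not a real Tesla or agent-platform API; the class and method names (`AgentMonitor`, `record_intervention`) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMonitor:
    """Tracks how often a human has to step in and correct an AI agent."""
    actions: int = 0
    interventions: int = 0
    reasons: list[str] = field(default_factory=list)

    def record_actions(self, n: int = 1) -> None:
        # Each autonomous action the agent completes (email sent, ticket closed).
        self.actions += n

    def record_intervention(self, reason: str) -> None:
        # Analogous to a Tesla disengagement: the human takes over and
        # (optionally) explains why, so spikes can be diagnosed later.
        self.interventions += 1
        self.reasons.append(reason)

    def actions_per_intervention(self) -> float:
        # Higher is better; a falling value suggests the agent is misbehaving.
        if self.interventions == 0:
            return float("inf")
        return self.actions / self.interventions

monitor = AgentMonitor()
monitor.record_actions(120)  # e.g., 120 drafted customer emails sent
monitor.record_intervention("wrong tone in reply to key customer")
monitor.record_intervention("emailed the wrong contact")
print(monitor.actions_per_intervention())  # 60.0
```

The same ratio works at fleet scale: a vendor watching this number drop across customers knows its agents aren't working properly, exactly as Paul suggests.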
[00:36:37] Paul Roetzer: And if they see spikes in it, they're going to know their agents aren't working properly. And so that's like something to keep in mind. And then I realized the other day, like the breakthrough for them is going to come when I can talk to my Tesla. So right now I can send a voice clip to Tesla saying, here's why I disengaged it.
[00:36:54] Paul Roetzer: But I was thinking about it the other day because like when you have your GPS and you say, I want to go to work and [00:37:00] then it's routing me, it's like, no, no, no, I'm going to go around. I want to go a different direction. I have to disengage the system, put a new address in or I have to tell it to go a different way.
[00:37:08] Paul Roetzer: I just have to, like, disengage and go the other way. It won't hear me. It won't, like, know to change the route. And so I realized, like, when these language models are built in, when xAI builds Grok into a Tesla and I can now talk to Grok. And tell it, hey, reroute me this way, because I know there's construction up ahead that you, car, don't know about.
[00:37:28] Paul Roetzer: And now I'm just talking and prompting my car. And I don't know if that's near-term, but, like, that will change the way these Teslas work. And so, again, I think it's amazing what they're doing. I don't think the tech is anywhere near what they're presenting it to be. I don't think we're really that close to having robocabs that don't need steering wheels without all the cameras that Waymos have.
[00:37:53] Paul Roetzer: But as a final thought here, on Sunday morning, I woke up [00:38:00] and I watched the Starship launch. I don't know if you saw this, Mike, but this is the fifth flight of Starship. This is Elon Musk's rocket that he wants to take humans to Mars with. So if you haven't followed along, SpaceX is an incredible company, engineering marvel they're putting out.
[00:38:16] Paul Roetzer: So me and my kids and my wife watched the launch of this thing. Now if you didn't follow along, the goal of this one, because it's all experimental still, was to get the Starship up into orbit, or into the atmosphere, above the atmosphere, and then bring the booster back and land a 232-foot, almost the size of a football field, 250-metric-ton booster,
[00:38:38] Paul Roetzer: and catch it with basically two arms coming off of the tower. It was the most asinine thing when they first presented this. People were like, there's no way you're ever going to do this. Mechazilla, I think they call it. The arms that kind of, you know, come and go. They did it. On the first try, they caught a 250-ton booster out of the air [00:39:00] with two arms.
[00:39:01] Paul Roetzer: Like, it just lands on this thing gently. The booster just sits in the air and then it just lands on this thing. I mean, it almost brings tears to your eyes. I'm watching this thinking, how did humans do this? It is, it was like one of the most awe inspiring things I've ever seen happen. And people thought he was crazy to do this.
[00:39:20] Paul Roetzer: So what I realized in that moment is, like, as much as Elon's current behavior on X drives me insane, like, as much time as he's spending on politics and creating, like, divides and, like, it drives me nuts. And I don't care what side of political party you're on; it's like, the way he's using his platform is divisive to one party or the other.
[00:39:48] Paul Roetzer: I just wish he was spending more time accelerating us toward a better world for humanity. Because even with all the time he's spending tweeting and doing all the other stuff he's [00:40:00] doing, he's still driving humanity forward in a way very few people have ever done in human history. And it is amazing to watch and it's like a beautiful thing to be alive, to see these kinds of innovations and scientific breakthroughs and consistently doing the impossible.
[00:40:18] Paul Roetzer: And so I just, like, as much as I get bothered by, like, the online behavior, I have to recognize, like, the impact he's making on humanity. And so that leads me back to, like, he can achieve anything. Like, what he's building with xAI, what he's doing with all of it, it's remarkable. And there's other people out there trying to do similar things, and so it's just kind of one of those moments where, like, you realize humans are pretty incredible, and, and just kudos to the engineers at SpaceX that did this.
[00:40:48] Paul Roetzer: And so while the We, Robot event was sort of disappointing, and the CyberCabs aren't coming for two, three, four years, and the Robovan may never happen, and the Tesla Optimus robots may not be functional for five years. Like, [00:41:00] he'll get there if he wants to, is kind of my main takeaway here. And, it's a pretty, pretty amazing thing that humans can do this sort of stuff.
[00:41:09] Jensen Huang BG2 Interview
[00:41:09] Mike Kaput: So for our third big topic, we've kind of been hearing from, you know, really influential CEOs and builders in AI, and we just got a wide-ranging interview with another one of them that kind of gives us a glimpse of the near future of the industry. So on Sunday, October 13th, the popular BG2 podcast, hosted by VCs Brad Gerstner and Bill Gurley,
[00:41:31] Mike Kaput: sat down with NVIDIA CEO Jensen Huang to talk about the future of NVIDIA and AI. And they covered a ton of really useful points that kind of give us some clues as to where we're headed. So, they talked about things like the future of compute. Huang predicts a massive increase in demand for AI compute, with clusters of 200,000 to 300,000 GPUs becoming common.
[00:41:58] Mike Kaput: Huang expects [00:42:00] inference to become increasingly important and grow by, quote, a billion times due to chain-of-reasoning capabilities. NVIDIA's actually focused on enabling fast, interactive inference experiences when you interact with an AI assistant. He believes AI will dramatically increase productivity across industries.
[00:42:19] Mike Kaput: He envisions a future where everyone has personal AI assistants capable of complex reasoning and task completion. And Huang actually said he believes AI will change every job, but won't necessarily lead to widespread job losses. He anticipates companies becoming more productive with AI, and if they do that, they'll likely grow and hire more, not less.
[00:42:42] Mike Kaput: So, Paul, I know you found a lot to love in this interview. Can you kind of walk us through what is worth paying attention to in this conversation?
[00:42:51] Paul Roetzer: Yeah, if you've never watched or listened to an interview with Jensen, you have to do it. Like, outside of Steve Jobs, I was trying to think about this [00:43:00] last night, like, I don't know that there's ever been a more impressive CEO, like, to listen to speak.
[00:43:05] Paul Roetzer: and so I would, this, this podcast is quite technical, like, it does get into some, some technical details, that, that can be a little abstract if you're not familiar with the terms and things like that, but it's mind blowing. And I think what. I love about him is he has this amazing command of the details of his own business, his industry, the technical side.
[00:43:28] Paul Roetzer: He has an incredible confidence in who they are and what they're doing, including the fact that he gives their like five year roadmap to what would appear to be their biggest competitors, like because they're his partners too. He has respect for his peers and competitors, and he has this unwavering belief in his vision for the future.
[00:43:45] Paul Roetzer: And he just sounds like the coolest guy in the world. Like, any interview I've ever seen with him, you can tell he's just present. Like, that is such a hard thing for a CEO, a public leader, when you know your time is being [00:44:00] pulled everywhere and everyone wants to talk to you, to be present and to, like, make the person you're talking to feel like you're it.
[00:44:07] Paul Roetzer: You're the only thing they're thinking about. You're the only thing they're doing at that moment. And he did that for an hour and a half with Brad. Like, Brad kept saying, like, hey, I want to be respectful of your time. He's like, no, no, man, whatever you need. I'm here until we're done. We're gonna do this the right way, like, let's go. And he just kept putting it off and, like, giving him the time.
[00:44:25] Paul Roetzer: And it's just remarkable. So one, you'll learn something from him. Two, if you're a leader, like, watch how this guy leads. It's, it's amazing to see; just tremendous respect for him. So a few key ones, Mike, that I wanted to call out. AGI. So we'll get back to the AGI thing. So Brad asked him when will we have AGI, which Brad then defined as: has perfect memory of me, can communicate with me, can book a hotel for me, or maybe book a doctor's appointment for me.
[00:44:49] Paul Roetzer: When you look at the rate of change in the world today, do you think we're going to have that personal assistant in our pocket? And Jensen said, soon, in some form. Yeah, in some form. And that assistant will get better over time. [00:45:00] A lot of this is happening because we drove the marginal cost of computing down 100,000 times over the course of 10 years.
[00:45:06] Paul Roetzer: So he talks about, like, what NVIDIA is doing. Another thing he said about AI infrastructure that I found fascinating: he talks a lot about how people think NVIDIA is just a chip company. They just make the GPUs that train these models and allow you to do the inference, meaning, you know, use ChatGPT and get responses or have, you know, AI on devices and get responses.
[00:45:26] Paul Roetzer: Inference is our use of the AI tool, basically. He said the data center is now the unit of computing. And what he means by that is they're not just building single chips. They're building the infrastructure that allows the building of 100,000-GPU data centers like Elon Musk just built in Memphis. They're everything that allows the building of that.
[00:45:45] Paul Roetzer: The wiring, the stacks, the chips, everything. So he said, we're trying to build a brand new one every single year. And every single year or two, we deliver two or three times more performance. As a result, every single year, we reduce the cost [00:46:00] by two or three times. What this means is their innovation in building these data centers, building these clusters of 100,000, 200,000, eventually half a million GPUs together,
[00:46:10] Paul Roetzer: and then driving that cost down 2 to 3x every year and improving the performance 2 to 3x every year. That is what is making the intelligence explosion possible. That is why Dario and Sam and Sundar can look out 2, 3 years from now and say, well, NVIDIA is going to do this, which enables us to do that. And so he said, and I love this quote as a business lesson as much as anything: as a company, we want to be situationally aware.
[00:46:40] Paul Roetzer: And I'm very situationally aware of everything around our company and our ecosystem. I'm aware of all the people doing alternative things and what they're doing. And sometimes it's adversarial to us. Sometimes it's not, I'm very aware of it. And that doesn't change what the purpose of the company is.
[00:46:56] Paul Roetzer: They were pushing him on, like, Groq, for example, like, faster inference; [00:47:00] that's Groq with a Q, not Grok with a K. They talked about Intel and, like, the lessons learned from Intel, like, having this dominant market position. And he's like, we don't care. Like, we're aware of it, but we are so focused on this accelerated computing idea that that's all we think about.
[00:47:18] Paul Roetzer: So he talked about the future of software. First thing we're doing is reinventing computing. So he said, you know, the future is going to be machine-learned software everywhere; almost everything from Word, Excel, PowerPoint, Photoshop, Premiere, AutoCAD, it's all going to be machine learned. And for that to happen, we need entirely different computing systems.
[00:47:38] Paul Roetzer: And he said there's trillions of dollars in data centers that need to be modernized; we're about $150 billion into it. So he's, like, kind of laying out their market opportunity. He talked about Elon and this Memphis supercluster they built, and he's like, nobody in the world could do this but Elon. They did in 19 days what would have taken anyone else years to do.
[00:47:58] Paul Roetzer: And so again, going back to [00:48:00] this idea of Elon and his relentless innovation, you know, hearing Jensen talk about that was so key. And then the one that really jumped out to me, you touched on this a little bit with like the reasoning and stuff, but they asked him, like, are you using these models at NVIDIA?
[00:48:15] Paul Roetzer: Like, how is this impacting your company? And he said, our cybersecurity system team today can't run without our own agents. We have agents helping design chips. Hopper, one of their chips, Blackwell, Rubin, none of it would be possible without it. We have AI chip designers, AI software engineers, AI verification engineers.
[00:48:36] Paul Roetzer: We build them all inside because we have the ability, and we would rather use the opportunity to explore the technology ourselves. So Brad then said, NVIDIA is in a league of its own at about $4 million of revenue per employee, which is insane, and $2 million of profits per employee. So he said, you've built a culture of efficiency that really has unleashed creativity and innovation and ownership and responsibility.
[00:48:59] Paul Roetzer: You've [00:49:00] broken the mold about, kind of, functional management. Everybody likes to talk about all your direct reports. Is leveraging AI the thing that's going to continue to allow you to be hyper-creative while at the same time being efficient? He said, NVIDIA has 32,000 employees today. I'm hoping NVIDIA someday has 50,000 with 100 million AI assistants in every single group.
[00:49:25] Paul Roetzer: So I just, like, I don't know. He's envisioning a near future where they're going to keep scaling. I think they'll be a $10 trillion company within five years; they're $2.5 trillion now or something like that. So they're massive, and they're going to do this. They'll probably triple the value of the company, triple their revenue, if not more, by only adding another 18,000 employees, because they're going to have hundreds of millions of AI agents.
[00:49:49] Paul Roetzer: And they're already building these agents within the company. And so he's seeing the future. And so, again, it's so critical to listen to these interviews. If nothing else, [00:50:00] synthesize it, drop it into NotebookLM and have a conversation with like, get the key points out because the future is happening right in front of us.
[00:50:09] Paul Roetzer: And these leaders are telling us what it's going to look like. And if we just sit back and think it's all sci fi, we are not going to get there and see around the corner. And that's the opportunity we all have is to listen to this stuff. And then prepare for it, prepare our companies, prepare our industries for it.
[00:50:24] Mike Kaput: Well, when you hear what NVIDIA is actually doing and what he envisions happening very soon, a lot of the stuff we've talked about in this episode doesn't sound like sci fi anymore. It's the exact playbook we just talked about.
[00:50:36] Paul Roetzer: Yeah, and you understand why they're worth so much money, why their stock is crazy.
[00:50:40] Paul Roetzer: I've told this story before. Like, the day ChatGPT came out, their stock closed at $169 a share. It touched $1,200 a share before a 10-for-1 split. I don't know where it's at today, like, back at $120, $130, something like that, but that's after the 10-for-1 split. The reason is, and the reason why people like me are so bullish on [00:51:00] NVIDIA in the future, is, when you hear this interview, you realize none of this happens,
[00:51:04] Paul Roetzer: none of this intelligence explosion is possible, without continued innovation driven by NVIDIA. They are the core of all of it.
[00:51:12] AI-Related Nobel Prizes
[00:51:12] Mike Kaput: All right, let's dive into some rapid fire this week. So first up, two of the 2024 Nobel Prizes, one in physics, one in chemistry, have put a spotlight on AI because they have been related to AI advancements.
[00:51:26] Mike Kaput: So the Nobel Prize in Physics was awarded to John Hopfield and Jeff Hinton for their pioneering work on artificial neural networks. Hopfield's 1982 development of the Hopfield network model explained how the brain recalls memories using partial information. And Hinton, who we've talked about many times, often called the godfather of AI, developed the Boltzmann machine in 1985 and further advanced neural network research.
[00:51:53] Mike Kaput: Now, Hinton expressed both excitement and concern about the future of AI in his [00:52:00] acceptance, saying that this will be comparable with the Industrial Revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability.
[00:52:12] Mike Kaput: He hopes his Nobel Prize will bring more attention to the risks associated with advanced AI. Now, soon after, the Nobel Prize in Chemistry this year was awarded to three scientists for their work on protein structure prediction and design, which heavily involves AI. David Baker of the University of Washington received half the prize for his work on computational protein design.
[00:52:35] Mike Kaput: And, another familiar name, comes into the picture because the other half was jointly awarded to Demis Hassabis and John Jumper of Google DeepMind for their development of AlphaFold, which predicts protein structures with unprecedented accuracy using artificial intelligence. Paul, we've talked about Jeff Hinton and Demis Hassabis countless times on the podcast.
[00:52:57] Mike Kaput: You've said yourself you expect Demis to [00:53:00] win a Nobel Prize at some point, and here we are.
[00:53:03] Paul Roetzer: Yeah, I laughed when this one happened, because, like I've said on the podcast before, I use Demis's definition of what is AI, the science of making machines smart. I've been using that in my keynote presentations for six years.
[00:53:15] Paul Roetzer: I'll poll the room: who knows who Demis is? If you're lucky, you get one or two hands. And I'll often say he's gonna be one of the most important people in human history, he'll win multiple Nobel Prizes, and here you go, here's the first one. He'll win more. Like, he's not done. But yeah, it's fascinating.
[00:53:32] Paul Roetzer: It was interesting to see some of the scientific community push back, like, these aren't chemists, these aren't people who should be winning these awards, they're AI people. But I think the lines have now been crossed and the barrier broken down, and I think you're going to see a lot of AI researchers being recognized for their impact in a lot of different industries in the future.
[00:53:52] Paul Roetzer: So, you know, congratulations to them. It's an incredible achievement.
[00:53:58] AI’s Impact on Search
[00:53:58] Mike Kaput: So next up, we want to [00:54:00] revisit some thoughts around AI's impact on search. We have a couple of reports and stats coming out that paint a tough picture for Google, which is seeing its grip on the market loosen as competitors and AI chip away at its dominance.
[00:54:15] Mike Kaput: According to research from eMarketer, Google's share of the U.S. search ad market is expected to drop below 50 percent next year, the first time in over a decade that will have happened. Amazon is emerging as a formidable challenger, expected to capture about 22 percent of the market this year, with a growth rate of almost 18 percent compared to Google's 7.6 percent growth.
[00:54:34] Mike Kaput: TikTok, the short-form video platform, is making waves by allowing brands to target ads based on user search queries, which directly challenges Google's core business. Perplexity, the AI-powered search engine, plans to introduce ads under its AI-generated answers. The company processed 340 million queries in September.
[00:54:59] Mike Kaput: And [00:55:00] Google is now rolling out ads in AI generated summaries at the top of search results, starting with mobile searches in the U. S. So basically, Google is getting attacked on a number of different fronts here. We've known that AI was eating into search for a while, but also other companies are eating into ad revenue.
[00:55:18] Mike Kaput: So this poses a big challenge to traditional web traffic models, and SEO expert Rand Fishkin, a very well known voice in the SEO industry, pointed out this past week that Google's not just facing competition from Amazon, TikTok, and AI. We're also seeing all these platforms increasingly try to keep users on their sites by providing answers without you having to click away.
[00:55:43] Mike Kaput: So he cites one survey from Ahrefs that shows nearly 97 percent of webpages get zero traffic from Google. He talks about Andy Crestodina's 2024 blogging statistics report that shows getting traffic and attracting visitors is [00:56:00] now the number one challenge for bloggers out there. And Fishkin's own experience shows that what he calls zero-click searches are on the rise.
[00:56:07] Mike Kaput: Basically, AI generated overviews, even in Google, are taking over a huge part of the results page for many queries. So basically, he wraps this up saying, My position on this is that zero click is taking over everything. Google is trying to answer searches without clicks. Facebook wants to keep people on Facebook.
[00:56:24] Mike Kaput: LinkedIn wants to keep people on LinkedIn. He also thinks AI overviews in Google are taking tremendously more traffic than when he measured them back in June of this year. And he basically says, look, content creators, marketers, you have to adapt to this new reality: you're competing for the limited traffic that's still available through traditional means, but also really focusing on a zero-click content approach, which is creating native content directly on platforms where people are engaging.
[00:56:55] Mike Kaput: Instead of including links trying to get them away from the platform, stop trying to fight this, [00:57:00] go zero click. So, Paul, we definitely have seen some of this start to play out. It seems like zero-click content may be a viable path forward. Do you agree with that? Like, what does this all mean for search traffic?
[00:57:15] Paul Roetzer: I'm not going to disagree with Rand and Andy. I mean, they're far greater experts at this stuff than I am. What I would say is, like, we're in October 2024. If you are involved in your company's content strategy, impact on website traffic, or online advertising, you better be, like, going deep on this stuff.
[00:57:32] Paul Roetzer: Like, there's a lot of uncertainty going into 2025. And you got to be getting locked in and figuring out where it's at. The approach we take, and Mike and I will go into this a little bit more another time. Mike's our chief content officer, so he thinks more about this than I do. but I mean, the way I've kind of thought about it as a CEO is like, I just don't think about organic traffic that much anymore.
[00:57:53] Paul Roetzer: Like I used to care a lot about it. I used to think about our website traffic all the time. I think far more about the growth of our podcast, the [00:58:00] growth of our YouTube channel. I just think about where are people at? Where, where can we be that they are? And where can we create value in a proprietary, like unique point of view?
[00:58:09] Paul Roetzer: And so, I've said this before, I think, but we're a podcast-driven company. Like, we create this original content, we have these unique perspectives, we come up every week and we show up and we do it. We turn that into videos, we cut it into shorts, we put, you know, the transcripts online, and hopefully that gets found in language models. Like, we're just going to create value for people and hope the rest sort of takes care of itself, and then try and be in as many distribution channels as we can.
[00:58:35] Paul Roetzer: So we're there to be helpful. And to me, that's like the safest approach right now. Like, if our organic traffic went away tomorrow, I don't even know that I'd care. Like, it's not critical anymore, and it took us a couple of years to shift to that. Yeah. But I think, I mean, is that fair to sum up our strategy, Mike?
[00:58:53] Paul Roetzer: I mean, it's basically what it's become.
[00:58:54] Mike Kaput: Yeah. A hundred percent. And I think we've also been both smart and lucky in building up [00:59:00] an owned audience in the form of our contact database as well over time. So, I mean, we have that too. Yeah, if you said organic traffic went to zero, we have ways to get in front of our audience that are not just hoping they stumble upon
[00:59:13] Paul Roetzer: us, right?
[00:59:13] Paul Roetzer: Slack community. Yeah. Yeah. Just create value for people and, you know, have options. If you're relying on organic traffic, it's gonna be a painful 2025.
[00:59:23] New Apple Paper Says AI Can’t Reason
[00:59:23] Mike Kaput: All right, next up, researchers at Apple just released a paper that questions the true reasoning abilities of large language models. This study suggests that even state-of-the-art AI models may not be thinking or reasoning in the way that we as humans understand these concepts.
[00:59:40] Mike Kaput: They basically conducted a simple experiment where they gave AI models, including OpenAI's o1-mini, basic math problems. When given a straightforward math problem, the models could usually solve it. But when they presented the problem with additional irrelevant pieces of information, the models [01:00:00] often stumbled.
[01:00:01] Mike Kaput: So instead of just asking them to solve a math problem, they might mention some, you know, extraneous information about some of the details of the problem that the LLM would then latch onto and hit a stumbling block. They found this pattern of failure consistent across hundreds of similar questions with slight modifications.
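[Editor's note: a minimal sketch of the perturbation idea Mike describes. The kiwi problem is paraphrased from the paper's GSM-NoOp examples; the code itself is our own illustration, not the authors' benchmark code.]

```python
# Sketch of the GSM-Symbolic "irrelevant information" test: take a simple
# word problem, append a clause whose number has no bearing on the answer,
# and check whether a model still answers correctly. The paper reports
# that models often incorporate the irrelevant number.

baseline = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "On Sunday he picks double the number he picked on Friday. "
    "How many kiwis does Oliver have?"
)

# The injected clause is irrelevant: smaller kiwis still count as kiwis.
perturbed = baseline.replace(
    "How many kiwis",
    "Five of them were a bit smaller than average. How many kiwis",
)

correct_answer = 44 + 58 + 2 * 44   # 190, for both versions of the problem
trap_answer = correct_answer - 5    # 185, if a model "uses" the extra number

print(perturbed)
print(correct_answer, trap_answer)
```

In the paper, distractor clauses like this dropped accuracy substantially even for frontier models, which is the basis for the hypothesis Mike quotes below.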
[01:00:20] Mike Kaput: And basically, they state that, quote, we hypothesize that this decline, the decline in reasoning performance, is due to the fact that current LLMs are not capable of genuine logical reasoning. Instead, they attempt to replicate the reasoning steps observed in their training data. So Paul, does this result surprise you at all?
[01:00:38] Mike Kaput: I mean, we talk quite often, colloquially, about these models' ability to reason and how that capability is growing. Like, should we be talking about these models differently, thinking about this differently?
[01:00:51] Paul Roetzer: All this tells me is the leading labs can't agree on anything. Like, you know, again, you have Yann LeCun, Turing Award winner at Meta, who thinks language [01:01:00] models are as dumb as a cat.
[01:01:00] Paul Roetzer: Like, literally there was an article, like, I don't know if it was the New York Times or whatever. Yeah. That was the headline. Like, he thinks they're basically as smart as a cat, if that. So you have OpenAI thinking we're within a year of a breakthrough in building agents and automated organizations.
[01:01:15] Paul Roetzer: You have Dario Amodei thinking 2026, everything changes, we have powerful AI. Right. And then you have Apple saying these things aren't even really reasoning. It's wild. And so that's all it tells me, it just reaffirms that people don't agree on any of this stuff, even the people embedded building it.
[01:01:33] Paul Roetzer: And some people will say the language models don't really understand. Geoff Hinton, who's worried about it, thinks that they do. And so, like, the smartest people in the world have been working on these things for decades and can't agree on what they're actually doing and how they do it.
[01:01:49] Paul Roetzer: That, that's really all this tells me.
[01:01:52] Mike Kaput: So take it with a grain of salt, anyone telling you they know for sure.
[01:01:55] Paul Roetzer: It's, it's like anything else in life. It's why you have to listen to all [01:02:00] sides. It's why you have to be open minded and like allow people to present cases and allow yourself to consider their arguments. And then at some point you arrive at like, well, here's what I actually think.
[01:02:12] Paul Roetzer: Like, I kind of believe this. And that's, like, the problem with our society: people just aren't that open-minded. People get stuck on, like, they're right about everything, and they have to stand their ground instead of stepping back and saying, well, maybe it's not reasoning. I don't know.
[01:02:26] Paul Roetzer: Let's dig back and like look at the research. So hopefully the people at OpenAI are looking at this saying, that's an interesting finding. Let's, let's see what they found. Like, and maybe it affects what they do. So. I don't know, that's how I always feel about everything, it's like a life lesson, I guess.
[01:02:37] Paul Roetzer: It's like, I don't understand why people have to be so set in their ways and so convinced they're right about everything that they can't just step back and, like, listen to solid arguments. This seems like a well-researched paper. Yeah. I'll read through it and, you know, see if there's anything in there that kind of changes my perspective on things or not.
[01:02:54] Zoom AI Announcement and Updates
[01:02:54] Mike Kaput: All right. So next up, Zoom has a couple new big AI features that may be worth paying attention [01:03:00] to. One is a feature coming out that's going to allow you to create a custom AI avatar of yourself. This is allegedly coming out in early 2025, and you'll be able to record a video of yourself, and Zoom's AI will then create a lifelike avatar that looks and sounds like you.
[01:03:16] Mike Kaput: You can then use this to send a brief video message to colleagues using Zoom's Clips feature. Basically, just type out what message you want your avatar to deliver, and AI takes care of the rest. Zoom also, a little closer to now, introduced AI Companion 2.0, the next generation of their AI virtual assistant.
[01:03:34] Mike Kaput: So, this basically is a side panel interface that has AI baked right into Zoom, with integration with third-party apps like Outlook and Gmail. You can generate action items from meeting summaries, convert them into tasks, summarize unread messages in your team chat channels, and use real-time web search capabilities to get up-to-date information.
[01:03:56] Mike Kaput: They're also adding a custom add-on for AI Companion. Priced at [01:04:00] $12 per month per user, it basically allows you to access a bunch of other customizable features to tailor the AI experience to your specific needs, including integration with apps like Jira, Zendesk, and HubSpot, and personalized coaching for improving communication skills.
[01:04:19] Mike Kaput: So, Paul, in episode 104, we covered how Zoom CEO Eric Yuan saw us moving towards a world he was hyping up where, quote, digital twins attend meetings on our behalf. Both you and I were deeply skeptical of this future, but are we getting closer to it?
[01:04:39] Paul Roetzer: Yeah. So I'm going to back up my previous soapbox about being open minded.
[01:04:59] Paul Roetzer: And I'm going to tell you, I'm not open-minded on this. I don't want to talk to your avatar, and I don't want your avatar showing up in meetings instead of you. So, like, if we're ever doing business together or having a conversation, please don't send your avatar instead. And if you want to send me a video message,
[01:04:59] Paul Roetzer: Just [01:05:00] record it and send it. I really don't want to see your avatar. I'm aware the technology exists. I'm not going to be impressed that you're using an avatar to send me a message. Like, I don't think those opinions are going to change. I will try and be open minded about the future, but neither of those things is interesting to me in the least.
[01:05:16] Paul Roetzer: So, yes, I would say, and if you've been watching, I'm repping my Cleveland Guardians today if you're watching on YouTube. We'll be heading toward game two of the championship series by this time if you listen to this on Tuesday. Zoom AI Companion was all over the American League Division Series games, like the ads were everywhere on the pitcher's mound.
[01:05:38] Paul Roetzer: So, yeah, they're making a big splash with AI companion. Listen, I love some of the AI features of Zoom. I don't love the vision for where the CEO wants to take Zoom. so yeah, I like good, good on them, but please don't send me your AI avatar once you can do it in Zoom.
[01:05:54] Paul Roetzer: Yeah.
[01:05:54] Mike Kaput: I fear having to take the time to type out the message your avatar is reading. I think an email [01:06:00] could also be helpful here.
[01:06:01] Paul Roetzer: Just click the record button. Yeah, email, whatever. Whatever, yeah.
[01:06:05] Apple Intelligence Release Date
[01:06:05] Mike Kaput: Alright, so next up, Apple's highly anticipated iOS 18.1 update promises to introduce the first wave of Apple Intelligence features. This is reportedly now set for release on October 28th, 2024, a little later than expected; Apple needed some extra time to make sure everything was working properly.
[01:06:27] Mike Kaput: this will bring a suite of new AI powered features to iPhone 15 Pro and iPhone 16 users, including writing tools, a new Siri UI, though the full revamped AI experience will not arrive until later, notification summaries, memory creation in photos, a cleanup feature in photos, and intelligent breakthrough with notifications.
[01:06:49] Mike Kaput: Now, some of the more eagerly anticipated AI features are coming out much later. We're supposed to get ChatGPT support and Image Playground [01:07:00] in December 2024, and early 2025 is when we are expected to get the fully revamped Siri with much smarter intelligence. So we'll see. Many are questioning why Apple heavily marketed all these AI features for the iPhone 16 launch when some of the most interesting stuff won't be available till next year.
[01:07:22] Mike Kaput: So, Paul, I guess I have the same question. Like, why is Apple hyping up all of this if it's not ready for primetime?
[01:07:29] Paul Roetzer: I have no idea. It's one of the weirdest things Apple's ever done. I don't understand it. It's on par with launching the Vision Pro and then doing nothing to support it for six months, except this is higher profile because you're running all these Snoop Dogg ads.
[01:07:44] Paul Roetzer: Right? It's Snoop Dogg and, who else was in it? I forget who else. There was some influencer my daughter knew that I had no idea about. So they have, like, three highly paid celebrities running these ads non-stop during, like, the baseball playoffs. I mean, they're spending tens of millions of dollars promoting something that [01:08:00] doesn't exist, pissing people off when they buy the phone and it's the same phone they just had.
[01:08:04] Paul Roetzer: It has no new capabilities. I don't know. I don't understand it. I have no idea why they're doing this, other than it must've been a last-minute decision to yank Apple Intelligence from the launch, and they'd already done all the ad buys and all the ad creative and had no choice. That's my only guess: the ball was rolling on the marketing side and they had to basically just take the black eye. And I imagine this isn't going over well within Apple.
[01:08:29] Paul Roetzer: This is not the kind of mistake Apple would normally make.
[01:08:33] The Rise of AI-Powered Job Applications
[01:08:33] Mike Kaput: Alright, so our last topic this week is about AI powered job applications. So according to some new reporting from 404 Media, AI powered bots are now applying for jobs on behalf of human candidates. They profile a new tool that's currently trending on GitHub that kind of shows what's possible.
[01:08:54] Mike Kaput: It's called Auto Jobs Applier AI Hawk, and it allows users to automatically [01:09:00] apply for jobs on LinkedIn at scale using AI. So I'm going to quote 404 Media's description of the tool here: it is essentially a Python program that navigates LinkedIn and uses an LLM, you can use OpenAI's GPT products or Gemini, both work, to generate custom cover letters and resumes based on a series of biographical details that a user codes into a script.
[01:09:23] Mike Kaput: And tweaks as necessary based on the job description and other information that the company has put on LinkedIn. Apparently this program can be set up in about 15 minutes, requires some basic coding knowledge and an API key for the AI tool that you're using. But some users report after that they're getting interviews and job offers within days.
[01:09:43] Mike Kaput: One of them was quoted as saying, I've been using the platform for a little over three months now, during which I applied to 2,843 roles. In that time, I've had four interviews, received one offer for a senior data engineer role at 85,000, I assume in the [01:10:00] UK, and I'm awaiting feedback on another offer pending the result of a test.
[01:10:04] Mike Kaput: This also appears to be cost-effective. One user applied to 17 jobs in an hour at a cost of 34 cents in OpenAI tokens. Now, 404 Media says LinkedIn is apparently aware of this tool and prohibits these types of automated applications. One of the tool's co-founders says he's already been banned from LinkedIn.
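[Editor's note: to make the mechanics concrete, here's a rough sketch of the prompt-templating step a tool like this performs per job posting. The function and field names are our own illustration, not AI Hawk's actual code; the real tool also handles LinkedIn navigation and form filling.]

```python
# Hypothetical sketch: fill a template with the candidate's biographical
# details and the scraped job description, producing the prompt that would
# be sent to an LLM (GPT or Gemini) to draft a tailored cover letter.

PROMPT_TEMPLATE = """You are writing a short, tailored cover letter.
Candidate background: {background}
Job description: {job_description}
Address the specific requirements in the job description above."""

def build_cover_letter_prompt(background: str, job_description: str) -> str:
    """Return the LLM prompt for a single job posting."""
    return PROMPT_TEMPLATE.format(
        background=background,
        job_description=job_description,
    )

prompt = build_cover_letter_prompt(
    background="Five years as a data engineer; Python, Spark, Airflow.",
    job_description="Senior Data Engineer: build batch pipelines in Spark.",
)
print(prompt)
```

At the reported rate of 17 applications for 34 cents in tokens, each generated letter costs about two cents, which is why this scales so easily.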
[01:10:25] Mike Kaput: So, Paul, like, how ready is the world of HR and recruiting for what's possible now?
[01:10:32] Paul Roetzer: Not ready, that I'm aware of. I'll just, I'm going to read, like, one other quote from the article. People should go read it. It's just a prelude. Like, this is one of those early demonstrations. Someone proved this could work.
[01:10:44] Paul Roetzer: It required a little bit of coding. Give it two months, somebody will have another one of these things, and it'll be like whack-a-mole for LinkedIn. It is against their terms of use, but that's not going to stop people from doing it. Right. But here's the excerpt: The sudden explosion in popularity of AIHawk means that we now live in a world where people are using AI-[01:11:00]generated resumes and cover letters to automatically apply for jobs, many of which will be reviewed by automated AI software, and where people are sometimes interviewed by AI, creating a bizarre loop where humans have essentially been removed from the job application and hiring process.
[01:11:14] Paul Roetzer: Essentially, robots are writing the cover letters for other robots to read, with uncertain effects for human beings who apply for jobs in the old-fashioned way. Again, like we always say, the future's happening right now. Like, this is already going on, and if you're not aware of it, welcome to reality.
[01:11:35] Paul Roetzer: Like, it's weird, and it's gonna get more bizarre, and no one is really prepared for what this means. The same thing is probably happening in colleges. You have admissions letters, you know, all these things being created to try and get into schools and apply to school. Like, everything is going to be automated.
[01:11:53] Paul Roetzer: And then, on the business side, we're going to create automation systems to review the automatically submitted things. And [01:12:00] at the end of the day, it comes back to who you know, because human relationships are probably going to be the only thing left that is differentiated in this world.
[01:12:07] Mike Kaput: No kidding. Well, that's a note to end on here.
[01:12:12] Mike Kaput: Oh, it's tough, I think. Some final reminders as we wrap up here: we have our weekly newsletter that goes out and summarizes everything going on in AI each and every week. It's called This Week in AI. You can find it at marketingaiinstitute.com/newsletter. Please also leave us a review of the podcast.
[01:12:31] Mike Kaput: If you have not already, we very much appreciate and value all the feedback. And one fun fact I'm going to leave us with here, Paul: as we've talked about AGI this whole time, I keep reminding myself of one of the initial authors you and I read way back when Marketing AI Institute was getting started, our friend Ray Kurzweil, who's like the original OG predictor of AI and, you know, where it's going.
[01:12:57] Mike Kaput: And he has consistently maintained, according [01:13:00] to Perplexity, since 1999, His arrival of artificial general intelligence prediction was 2029, so people thought he was crazy back then. We are getting less crazy now.
[01:13:12] Paul Roetzer: I'll add to that. So Shane Legg, one of the co-founders of Google DeepMind along with Demis, when they started DeepMind back in 2010, said they saw AGI as a 20-year project, which would put us around 2028, 2029.
[01:13:28] Paul Roetzer: Demis was asked recently, are you on track? And he said something to the effect of, it's rare that a 20-year project ends up being right, but yeah, we're roughly on track for it to be a 20-year project, meaning Demis also sees the next few years being the achievement of whatever we decide AGI is.
[01:13:48] Mike Kaput: Wow.
[01:13:48] Mike Kaput: That's it. And that's a way to start the week here.
[01:13:51] Paul Roetzer: There you go.
[01:13:52] Mike Kaput: All right, Paul. Well thank you as always for breaking everything down for us.
[01:13:56] Paul Roetzer: Thanks, Mike. Everybody have a great week. We'll talk to you next week. Thanks for [01:14:00] listening to The AI Show. Visit marketingaiinstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter,
[01:14:12] Paul Roetzer: downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community. Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.