54 Min Read

[The AI Show Episode 117]: OpenAI’s Wild Week, Sam Altman’s Prophetic Post, Meta’s AR Breakthrough & California’s AI Bill Veto


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


It’s been a wild and busy week full of drama and updates at OpenAI. Mike and Paul break down OpenAI's rollercoaster week, from the rollout of Advanced Voice mode to the shocking executive departures and reports of internal chaos. We also dissect Sam Altman's prophetic new article, "The Intelligence Age," Meta Connect 2024, updates to SB-1047, the FTC’s AI crackdown, Anthropic’s valuation, and more. 

Listen or watch below, and find the show notes and the transcript further down.

Listen Now

Watch the Video

Timestamps

00:03:08 — OpenAI’s Wild Week

00:26:14 — The Intelligence Age

00:35:56 — Meta Connect 2024

00:44:45 — SB 1047 Vetoed by Governor Newsom

00:48:44 — Anthropic Valuation

00:52:44 — FTC AI Crackdown

00:55:45 — Scale AI

01:02:40 — The Rapid Adoption of Generative AI

01:07:16 — AI Use Case Warning

Summary

OpenAI’s Wild Week 

The news started innocently enough early last week, when OpenAI finally rolled out Advanced Voice mode to all ChatGPT Plus and Team users. 

At the same time, reports came out that Sam Altman and other tech leaders had pitched the White House on building data centers that are 5 gigawatts each (the equivalent of five nuclear reactors) across the US to power the AI revolution. 

Now here’s where things start to go downhill…

Shortly after these developments, Chief Technology Officer Mira Murati announced she was leaving the company. She was quickly followed by Chief Research Officer Bob McGrew and VP of Research Barret Zoph, who both also announced their resignations.

The very next day, Reuters reported that OpenAI is working on a plan to restructure into a for-profit company and give Sam Altman equity. This was followed by a series of unflattering reports about chaos within the company.

The first, from Karen Hao in The Atlantic, presents a pattern of persuasion and consolidation of power by Altman. 

The second, from The Wall Street Journal, says the “chaos and infighting among [OpenAI] executives [is] worthy of a soap opera.”

A third, from The Information, reveals that OpenAI is now being forced to train a new version of its Sora video model, as “several signs point to the video model not being ready for prime time when OpenAI first announced it earlier this year.”

Overall, the departures and insider reports together appear to paint a picture of chaos within OpenAI.

The Intelligence Age

Sam Altman has published a new prophetic article called “The Intelligence Age” that predicts the incredible, awe-inspiring, and potentially disruptive road that AI is about to lead humanity down.

Sam’s writings are incredibly important to pay attention to. For instance, back in 2021, he published an article called “Moore’s Law for Everything” that basically outlined where AI was going two years before the launch of GPT-4. 

And this time is no different.

In his latest article, Altman outlines how “we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI,” including having a personal AI team, virtual AI tutors who can provide personalized instruction in any subject, and “shared prosperity to a degree that seems unimaginable today.”

He also predicts we will not just reach artificial general intelligence (AGI) soon, but also possibly artificial superintelligence (ASI) when AI is smarter than the smartest humans at all cognitive tasks.

Meta Connect 2024

On September 25, Meta held its 2024 Connect event, where the company debuted a ton of new AI-related products and developments.

First up were the company’s Orion AR glasses: a prototype of augmented reality glasses that look similar to regular eyewear.

They use advanced projection technology and include the same generative AI capabilities as the company’s current Ray-Ban smart glasses.

Meta also announced the release of Llama 3.2 in 11B and 90B parameter models that can process and reason about images. The release also includes 1B and 3B parameter models specifically designed for edge and mobile devices.

That wasn’t all to come out of Connect 2024. Meta also dropped the following:

  • A new Quest 3S VR headset, which costs just $299.99…
  • Updates to Ray-Ban smart glasses, including improved AI responsiveness…
  • New tests of AI-generated content in Facebook and Instagram feeds…
  • And new celebrity voices available when you use the Meta AI chatbot.

Today’s episode is brought to you by rasa.io. Rasa.io makes staying in front of your audience easy. Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant but compelling.

Visit rasa.io/maii and sign up with the code 5MAII for an exclusive 5% discount for podcast listeners. 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: Google has to just be salivating right now, like it's, like we've said this before, like I would never bet against Google in the end here. And all these things that Sam is now trying to solve for. He's got talent leaving left and right. He's got to raise money just to like solve for the fact that they're losing 5 billion.

[00:00:17] Paul Roetzer: He's trying to convince people to build the data centers. Like who's got all of that already? Google has it all. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host.

[00:00:37] Paul Roetzer: Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all.

[00:00:59] Paul Roetzer: Welcome [00:01:00] to episode 117 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co-host, Mike Kaput. We basically could just do an entire episode on the week that was for OpenAI. It was an insane week. Like, it was, I mean, Mike and I were joking before we got on, this may be the most links for a single main topic we've ever had to deal with for OpenAI.

[00:01:25] Paul Roetzer: Yeah. Yeah. So that is, that is going to be main topic number one. This is going to be a rather comprehensive main topic. But before we get into all of that, today's episode is brought to us by rasa.io. If you're looking for an AI tool that makes staying in front of your audience easy, you should try rasa.io.

[00:01:43] Paul Roetzer: Their smart newsletter platform does the impossible by tailoring each email newsletter for each subscriber, ensuring every email you send is not just relevant, but compelling. It also saves you time. We've known the team at rasa.io for a long time. They've been an early supporter of Marketing AI Institute [00:02:00] going back to almost the very beginning.

[00:02:02] Paul Roetzer: And no one else is doing newsletters like they're doing it. True personalization based on behaviors is a real key if you wanna scale a newsletter. Plus, they're offering a 5% discount with the code 5MAII. Again, that is 5MAII. When you sign up, visit rasa.io/maii today. Once again, that's rasa.io/maii.

[00:02:25] Paul Roetzer: So Mike, the, it was honestly hard to follow everything happening with OpenAI this week. Like it was a, it was a flood of, it was. And it just seemed to, like, overwhelm everything else that happened last week. Oh my god, I know. Like, it was, it was wild. So, kick us off with what in the world we went through with OpenAI last week.

[00:02:54] Mike Kaput: We also may be setting a record for most things that are going into the newsletter only this week. Yes. [00:03:00] Because there was like 30, 30 topics we considered that I'm not going to get into. Thanks largely, in part, to the drama at OpenAI. So, it has been a huge week at OpenAI, but not always in a good way. Now, the week started innocently enough early last week.

[00:03:18] Mike Kaput: OpenAI finally rolled out Advanced Voice Mode to all ChatGPT Plus and Team users. I will say, it is pretty amazing. I'm really enjoying it. At the same time, some big reports came out that Sam Altman and other tech leaders had actually been pitching the White House on building enormous data centers. Ones that are 5 gigawatts each.

[00:03:43] Mike Kaput: That is the equivalent of 5 nuclear reactors. All across the US, they're pitching this idea to build these to power the AI revolution. These would roughly cost about a cool 100 billion each. But, [00:04:00] things quickly got a little messy, because right around the same time as this report, CTO Mira Murati said that she was leaving OpenAI, and posted a letter revealing why that was the case, saying it was time to step back and explore different opportunities.

[00:04:19] Mike Kaput: At the same time, we also saw the Chief Research Officer, Bob McGrew, and the VP of Research, Barret Zoph, will also leave the company at the exact same time. So, we now, instead of just having Mira Murati leave, we also had multiple other people. So we have Bob McGrew, the company's Chief Research Officer, and Barret Zoph, Vice President of Research, departing at the exact same moment.

[00:04:49] Mike Kaput: Now, you may wonder. Pretty quickly, Sam Altman goes into damage control mode. He posted on X thanking them for their work and noting, quote, Mira, Bob, [00:05:00] and Barret made these decisions independently of each other and amicably, but the timing of Mira's decision was such that it made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership.

[00:05:15] Mike Kaput: The very next day Reuters reports that OpenAI is working on a plan to restructure into a for profit company and give Sam Altman equity. Now we had known something like this was in the works, but it seems to confirm that this is moving forward quickly. Now this is then followed by a series of unflattering reports about what's going on within the company.

[00:05:41] Mike Kaput: So one of them, from Karen Hao in The Atlantic, presents a pattern of persuasion and consolidation of power by Altman internally at OpenAI. A second report, from The Wall Street Journal, said, quote, the chaos and infighting among executives at OpenAI is [00:06:00] worthy of a soap opera. And a third, from The Information, wasn't directly about the executive shakeups, but it did reveal that OpenAI is now being forced to train a new version of its Sora video model, as, quote, several signs point to the video model not being ready for prime time when OpenAI first announced it earlier this year.

[00:06:21] Mike Kaput: Overall, all these departures, the scrambling by Sam Altman to address the issue, Greg Brockman also had posted about it, in the moment, and then these kind of insider reports of shakeups, delays, uncertainties, together are kind of painting a bit of a chaotic picture at OpenAI. And Paul, I really liked, in this week's edition of the Exec AI Newsletter, which is a new weekly newsletter you're writing through Marketing AI Institute's sister company, SmarterX.

[00:06:50] Mike Kaput: You wrote, quote, I've spent 13 years monitoring and studying the AI industry. This was one of the crazier weeks I can recall. [00:07:00] OpenAI alone had what seemed like a year's worth of news condensed into five days. Can you maybe like take a step back for us here if you tell us what the heck is going on at the company?

[00:07:11] Mike Kaput: Like how worried should we be about this? What's the deal? 

[00:07:15] Paul Roetzer: Yeah, I mean, it really was hard to kind of follow and understand everything that was happening. And each of these things, like the data center thing, we could probably spend 20 minutes talking about what all that means and unpack that. Like, each of these items on their own could probably be a main topic, but what we're gonna try and do is connect the dots: why all this is happening at the same time and what it probably means.

[00:07:41] Paul Roetzer: So at a very high level, this is a non profit research company that many of the top AI people in the world went to work at, starting back in 2015, to work on building artificial general intelligence, to be at the frontier of [00:08:00] developing the most advanced intelligence, non human intelligence that the world has ever seen.

[00:08:06] Paul Roetzer: And that was what they were there for. And at some point, around 2022, ChatGPT emerges and OpenAI all of a sudden, catches, you know, lightning in a bottle and they start becoming a product company. And that appears to, since that time, be creating enormous friction within the organization. There are people who still are there for the pure research side of this to be at that frontier to have access to the computing power to do incredible things and build incredible things.

[00:08:42] Paul Roetzer: And then there's people who are business people, like Sam Altman, who are trying to capitalize on a potentially once in a generation or, I don't know, once in a lifetime opportunity to build a massive company. And so, [00:09:00] there was an article that came out, I think it actually came out like after I wrote the Executive Newsletter on Friday, and it said OpenAI is, is growing fast and burning through piles of money. This was a New York Times article, and they had gotten access to financial documents that I believe are being shared with potential investors. So as we've talked about, you alluded to, Mike, they're raising money; word is they're actually going to

[00:09:25] Paul Roetzer: finalize the decision this week of who is going to be allowed to invest. So they're kind of, they have more investors lined up than they're going to take. So, a couple of key insights here. I think this is really relevant. So, in the New York Times article, they said monthly revenue for OpenAI hit $300 million in August, up 1,700 percent since the beginning of 2023.

[00:09:47] Paul Roetzer: Now, that's, that's kind of like a, it's one of those things where you can make data say whatever you want it to say. So, beginning of 2023, ChatGPT was two months old. So, the 1,700 percent is kind of like a useless number. I'm not sure why they do that. [00:10:00] This number, though, is significant: the company expects about $3.7 billion in annual sales this year.

[00:10:06] Paul Roetzer: So that, that's a big number. OpenAI estimates that its revenue will balloon to $11.6 billion next year. So this is actually very relevant. We've heard the rumor that OpenAI was being valued at around $150 billion in this investing round. Which, like a couple months ago when they were doing some internal stuff, it was like $86 billion was the number being thrown around.

[00:10:31] Paul Roetzer: So $150 billion sounds like a lot, but if they're projecting like a forward revenue, a one-year forward revenue, of $11.6 billion, then that's actually a pretty reasonable multiple range. And like, well, I'll explain the multiple range a little bit more when we talk about Anthropic's valuation in one of the rapid fire items.

[00:10:49] Paul Roetzer: But a, but a roughly like 10 to 13 times forward-looking revenue is not unheard of in, in the technology world. So, that $150 billion valuation [00:11:00] now starts to make a little sense. But the New York Times also said they expect to lose $5 billion this year. So even though they're doing $3.7 billion in annual sales this year, they're, they're gonna lose $5 billion.

[00:11:10] Paul Roetzer: ChatGPT on its own is bringing in $2.7 billion this year, so that's 2.7 out of 3.7. So, you know, the vast majority of their revenue is coming from ChatGPT. They did $700 million last year, for comparison. And then their, the other billion is coming from other businesses using its technology, I assume, through the API.

[00:11:31] Paul Roetzer: So, roughly 10 million ChatGPT users pay the company a $20 monthly fee. The New York Times had that they're planning on raising that by $2, I assume per month, by end of the year, and then aggressively raising it to $44 over the next five years, which to me seems absurd that they even forecast that. Because, like, what happens when there are different versions of the model, and more intelligent, and maybe it's $2,000 per month? So I wouldn't put a ton of stock in that.

[00:11:58] Paul Roetzer: The big kind of grand finale from the New [00:12:00] York Times is OpenAI predicts its revenue will hit $100 billion in 2029. So this, this kind of like is the first time that I know of that we're seeing these sorts of numbers, like a true inside look at what's going on. And, but it does confirm what I've said on the podcast recently, which is that the six to seven billion they're raising isn't enough money.

[00:12:22] Paul Roetzer: Like, this is just a prelude to everything else. And I don't remember if it was this article or another one I read over the weekend that was talking about the complexity they're going to deal with. So to raise this money, they have to convert the company into a for-profit. But to convert into the for-profit, they can't just, like, wipe away what existed in the non-profit.

[00:12:45] Paul Roetzer: Like, the non-profit has to get assets and value out of this thing, and it's going to be really complicated. So apparently what they're doing is raising this money, and then they have two years to complete the process of converting over from the non-profit. [00:13:00] So this is going to be really, really messy and weird and almost, like, unparalleled.

[00:13:05] Paul Roetzer: It's not like we've never had a non-profit become a for-profit, but probably not of this size. And so, but again, like, they're losing $5 billion this year. Raising six to seven isn't gonna solve anything. And so, I still feel like everything happening right now is just a prelude to an IPO, like, as quickly as they can probably get to an IPO.

[00:13:28] Paul Roetzer: And so, you know, again, just keep in context why this is all happening. Like, all this drama is likely coming from the fact that we had this research firm that's trying to become this massive trillion-dollar company within a couple of years. Because at $100 billion in revenue in 2029, I mean, you're talking about a one to two trillion dollar market cap publicly traded company at that point.

[00:13:51] Paul Roetzer: So, you know, one of the, what, 15 biggest companies in the world, basically, is what they're projecting in five years. So, this leads to another really good [00:14:00] article from the Wall Street Journal, titled "Turning OpenAI Into a Real Business Is Tearing It Apart." This one was fascinating. I think this one came out either Friday night or Saturday morning, if I remember correctly.

[00:14:10] Paul Roetzer: So, I'm just going to go through a few key excerpts from here, because this is stuff we've never heard before. Like, there's some insights in here that, to my knowledge, we had not seen. So, the first is, it says, some tensions are related to conflicts between OpenAI's original mission to develop AI for public good, and then deploying money making products.

[00:14:29] Paul Roetzer: But this gets into Mira leaving and some of the other people leaving. So Mira is now one of 20 OpenAI researchers and executives who have quit this year, including multiple co-founders: John Schulman, who we talked about recently on an episode, Andrej Karpathy, Ilya Sutskever. Like, all of them were, they were co-founders.

[00:14:48] Paul Roetzer: They've been there from the beginning and they've all quit. That, that's, that's a trend. That is not like an anomaly. That is like something is going on. The article said that Altman has been [00:15:00] largely detached from the day-to-day, a characterization that the company disputes, but you can see it. Like, he's everywhere globally trying to raise tons of money.

[00:15:10] Paul Roetzer: He's doing lots of interviews. He's, he's all over the place, but he's not involved in the technical side of the business in the day-to-day operations. Meanwhile, he's the CEO of a company that went from 770 employees last November, when he was fired as CEO for five days and then brought back, to 1,700 employees now.

[00:15:28] Paul Roetzer: So, lots of growth, so you could just look at this and say this is just standard growth stuff and it's complicated. But that doesn't actually seem to be the case. So it got into Ilya and when he left the company, so it said that Mira and President Greg Brockman actually went to Sutskever's house and tried to convince him to come back when he left because the company was in disarray and might collapse without him.

[00:15:56] Paul Roetzer: Because they saw all the top researchers [00:16:00] were going to leave if Ilya was out. And so apparently, and this is again the first time I'm seeing this anywhere, he was ready to come back. Like, Ilya was kind of thinking, okay, maybe I'll come back, maybe we'll work this out. And then he got a call from Greg Brockman, who said, well, we're rescinding the offer.

[00:16:16] Paul Roetzer: Like, you're not welcome back, basically. And apparently, what the article said is that they couldn't figure out his new role. Like, they'd already replaced him, and they couldn't make that guy, like, step back down for Ilya. So, they couldn't come to an agreement on what his role was going to be, so he left.

[00:16:33] Paul Roetzer: Then, the other one that I thought was really intriguing is, you know, when Greg Brockman took his sabbatical the day after GPT-4o came out, if I remember correctly. So, it's weird. Like, we had Advanced Voice come out, Mira leaves, and these other two executives. A few months earlier, GPT-4o comes out, Greg Brockman leaves the next day on sabbatical.

[00:16:51] Paul Roetzer: And what I said at the time, and I've said multiple times since, is it's a very oddly timed sabbatical, like, with very little information. Well, it ends up that [00:17:00] apparently Sam and Greg agreed, quote unquote, mutually, that Greg should maybe step aside for a little bit. So, what this article is saying is that people haven't always loved Greg's management style.

[00:17:13] Paul Roetzer: It says, quote, his management style caused tension. Though President Brockman didn't have any direct reports, he tended to get involved in any projects he wanted, often frustrating those involved, according to current and former employees. They said he demanded last-minute changes to long-planned initiatives, prompting other executives, including Murati, to intervene to smooth things over. For years, staffers urged Altman to rein in Brockman, saying his actions demoralized employees.

[00:17:40] Paul Roetzer: Those concerns persisted through this year, when Altman and Brockman agreed he should take a leave of absence. So that was interesting, and that kind of jibes with why, all of a sudden, the sabbatical. And then the final thing I'll end with here is this idea, again, of Sam pushing growth, because that's [00:18:00] what Sam does, versus the research team and the technical team, led by Mira, that often was pushing back saying, we're not ready to do these things.

[00:18:10] Paul Roetzer: And if you recall, like ChatGPT was this, like Sam's the one that green lighted ChatGPT and said, basically you have like three weeks to launch this product. And maybe the technical team, Ilya, Mira, maybe they didn't agree even back then. And so they gave a couple of examples. So it said that Mira repeatedly delayed the planned launches of products, including Search, which we still don't have, SearchGPT, which was announced months ago, Voice, which we finally just got, but Mira's the one that took the PR hits over Voice.

[00:18:40] Paul Roetzer: and then Sora, which is the one you mentioned, which is the video, which again, Mira was the one getting interviewed saying, well, what's the training data and her having to lie to people, basically saying, I don't, I don't know what the training data is. Like, of course, you know what the training data is, Mira, come on.

[00:18:54] Paul Roetzer: So, they said that The Information had it that it's just not ready, that [00:19:00] what we saw the demos of was misleading, probably at best. That it takes, the one source said, 10 minutes to create like a few seconds of video, like what we were seeing demos of. And that it can't create consistent characters and objects and all these things.

[00:19:15] Paul Roetzer: So, what I wrote on, on LinkedIn, and I'll just kind of read this and then, you know, see if we have any other thoughts here, Mike. So the Wall Street Journal article that we'll link to, the, the one thing that jumped out to me is it said: this spring, tensions flared up internally over the development of a new AI model called GPT-4o,

[00:19:35] Paul Roetzer: which is the one where Greg left the next day, that would power ChatGPT and business products. Researchers were asked to do more comprehensive safety testing than initially planned, but given only nine days to do it. Executives wanted to debut GPT-4o ahead of Google's annual developer conference and take attention away from their big rival.

[00:19:55] Paul Roetzer: Which is funny, because we always joke about that. Like, just look and see the calendar of events and you know when the next products [00:20:00] are going to come, because they're just going to launch them the day before Google does their stuff. it went on to say the safety staffers worked 20 hour days and didn't have time to double check their work.

[00:20:10] Paul Roetzer: The initial results, based on incomplete data, indicated GPT-4o was safe enough to deploy. After the model launched, people familiar with the project said, a subsequent analysis found the model exceeded OpenAI's internal standards of persuasion. Now, that's interesting, because that's what I've said multiple times recently on the podcast: these things are far more capable of persuasion than they're letting on, and they're actually extracting that capability by trying to find when it's doing it and stop it from doing it.

[00:20:39] Paul Roetzer: But, but natively, these models are insanely persuasive, and they probably have been since GPT-4o was first created. So then I said, this is a microcosm of the industry now. Companies are racing to preview and launch more advanced models and products before their competitors in order to win market share, investment [00:21:00] dollars, stock price increases, and ego boosts.

[00:21:02] Paul Roetzer: This leads to half-baked hardware like Rabbit and the Humane AI Pin, product demos we won't see in production for months or years, like Meta Orion, which we'll talk about in a minute, and Apple Intelligence, where they launched phones that don't actually have Apple Intelligence, and yet they're running ads featuring Apple Intelligence, which makes no sense.

[00:21:19] Paul Roetzer: OpenAI, Sora, and Advanced Voice, and Google's Project Astra. And then the models may be more capable and dangerous than what we're being led to believe. In fact, I know that they're more capable than what we're being led to believe. And yet, the balanced side of this is the tech is still very real. Its impact on businesses, the economy, society is still completely underestimated and underappreciated.

[00:21:42] Paul Roetzer: And then I said, for the first time in human history, we have intelligence on demand. It's going to be messy. We're going to have instances like this where the companies at the frontier have really bad days and weeks and take a ton of PR hits and lose really talented people because [00:22:00] this is unparalleled.

[00:22:01] Paul Roetzer: Like, what we're doing here is like nothing we've ever seen. And so these companies that are actually out there leading, and the leaders who are pushing us into this frontier of the intelligence age, it's not going to be a straight line. And so that's kind of where it leaves us. And then the one final thought I'll leave us with, Mike, is Google has to just be salivating right now.

[00:22:20] Paul Roetzer: Like if, like we've said this before, like, I would never bet against Google in the end here. And all these things that Sam is now trying to solve for, he's got talent leaving left and right. He's got, he's gotta raise money just to, like, solve for the fact they're losing five billion. He's trying to convince people to build the data centers.

[00:22:39] Paul Roetzer: Like, who's got all of that? Alright, Google has it all. Google has the ability to pay two and a half billion to bring back Noam Shazeer from Character.ai, who was one of the authors of the Attention Is All You Need paper. They have data centers. They have infrastructure. They have everything. They're the ones that created all the innovations that drove everything.

[00:22:58] Paul Roetzer: And so I just, I [00:23:00] mean, if I'm Google, I am just sitting back and I am, like, watching all this turmoil, and I am looking for the opportunity to re-seize the leadership position that you had for two decades. And I just, I couldn't help myself all weekend, thinking about that they have to be ready to pounce.

[00:23:18] Paul Roetzer: Like, right now is when you go. Like, all, like, the Sora model's not ready. You've got Project Astra sitting there. You've got NotebookLM, which obviously alludes to way more advanced voice capabilities than the public knew they had. There's so much stuff. I would just be, like, I don't think Anthropic can do it.

[00:23:37] Paul Roetzer: They're trying to raise their own money. They've got all these problems. They don't have their own chips. They don't have, you know, they don't have distribution. They don't have anything. This is, this is Google's game to win again all of a sudden, and I find that fascinating. 

[00:23:48] Mike Kaput: Especially the whole on device AI thing as they're baking Gemini into, you know, all of your different apps, all of your different phones, especially where we talk about Apple kind of whiffing a little bit at the [00:24:00] moment.

[00:24:00] Mike Kaput: It's really, really an interesting time. 

[00:24:02] Paul Roetzer: Yeah. And to that point, like, there's rumors now that part of the reason Apple Intelligence isn't out on time is because Apple's pulling back from their OpenAI relationship. That they're not taking the board seat, they may not invest in the next round, and maybe they're seeing these conflicts as well and saying, well, maybe that's not the horse to bet on.

[00:24:23] Paul Roetzer: And now I'm not saying that they won't still integrate, ChatGPT capabilities into Apple Intelligence. My guess is they will. 

[00:24:30] Mike Kaput: Yeah. 

[00:24:31] Paul Roetzer: But I would assume they are now far more aggressively building their own capabilities. So six to 12 months from now, they don't need ChatGPT in there. It's going to be Apple models doing everything.

[00:24:40] Paul Roetzer: Or, in a crazy turn of events, maybe it's Google's models. Maybe it is Gemini. Maybe it, maybe it's somebody else. They've done plenty of big deals before, so I don't know. Like I, I, I mean, OpenAI is going to raise their money. They're going to announce sometime this week or next week, probably, that they raised 7 billion or 10 billion or whatever that number is.

[00:24:59] Paul Roetzer: [00:25:00] And they have all kinds of amazing investors. But it is not a sure thing right now. That's all I'm saying. It's like, there is a lot of instability and this is the stuff that surfaced to like New York Times and Wall Street Journal. Imagine what's happening that hasn't surfaced yet. And then I did one other quick note.

[00:25:15] Paul Roetzer: I saw a former Anthropic employee, because there were some people at Anthropic sort of taking a victory lap here that OpenAI was, you know, having a bad week. And that person was like, listen, my NDA is up, you better watch yourselves, because everything that's going on at OpenAI is happening at Anthropic too, so don't be pretending like you've got this all figured out.

[00:25:38] Paul Roetzer: So, yeah, it's hard. Companies building at this pace and these kinds of valuations, there isn't much precedent for how to do this, right? And so it's going to be messy, and maybe we have some unexpected left turns, I guess, for some of these companies that seem like the winners right now.[00:26:00] 

[00:26:00] Mike Kaput: Well, I actually think that's a really good transition into the second topic, because the second topic is also OpenAI related, but kind of shows, zooming out, where at least some of OpenAI's leaders and researchers think things are going. Because, in addition to all this stuff going on, I'm sure, regardless of what Sam Altman has said, he did not plan all of this happening.

[00:26:24] Mike Kaput: No. He tries to pretend that these are just contingencies and things, which I don't think they are for this stuff. But he did find the time to actually publish what, I don't think it's crazy to say, is a prophetic article called "The Intelligence Age," and it basically predicts this awe-inspiring and very disruptive road that AI is about to lead humanity down. Now, this next point is really important: this is not just Sam Altman writing a bunch of corporate hype. I realize, love him or hate him or be skeptical of him, his writings [00:27:00] are actually really, really important to pay attention to, because back in 2021, he published an article called "Moore's Law for Everything" that basically outlined where AI was going almost two years before the launch of GPT-4.

[00:27:15] Mike Kaput: I think we actually read that article at the time and got, like, literally a peek into the future. And this article, which is incredible and well worth reading today, went largely unnoticed by business and government leaders, who honestly would have done well to pay attention to that writing at the time.

[00:27:33] Mike Kaput: They would have gotten a leg up on the changes that are coming down the line. And I would argue this time is no different, because in this article Altman outlines how, quote, we'll soon be able to work with AI that helps us accomplish much more than we ever could without AI. And that includes things like having a personal AI team working for you.

[00:27:52] Mike Kaput: Having virtual AI tutors who can provide personalized instruction in any subject. And, quote, shared [00:28:00] prosperity to a degree that seems unimaginable today. Now, Altman in this essay also predicts we will not just reach Artificial General Intelligence, AGI, soon, but we might also possibly have Artificial Super Intelligence, ASI.

[00:28:15] Mike Kaput: This is when AI is smarter than the smartest humans at all cognitive tasks. He even goes so far as to say, quote, it is possible that we will have superintelligence in a few thousand days. And he put an exclamation mark in parentheses after that. It may take longer, but I'm confident we'll get there. So Paul, I mean, I think it's interesting. With the OpenAI chaos, there was a tweet, or a post on X rather, that we covered when there was the first round of people leaving, where Benjamin De Kraker, who's an AI dev at Grok right now, tweeted something to the effect of: if you

[00:28:54] Mike Kaput: were so convinced you were building AGI, why would you leave? And we've seen, with the drama, there are [00:29:00] some human elements on this rocky road. But really, it's important to remember that that is the key mission here. That is what they're going for. We've talked a bit about why we need to take Sam's writing on this stuff seriously. Why is this article so important for us to pay attention to?

[00:29:19] Paul Roetzer: Because he's been right before, and people didn't listen. So this is how the week started, actually. This was the Monday thing. So the OpenAI week starts with Sam posting this, and then, you know, the voice rollout, and then all the articles start flowing. But yeah, March 16th, 2021, was when the "Moore's Law for Everything" article came out.

[00:29:39] Paul Roetzer: I remember vividly, I posted that on LinkedIn, and I could go back and find it. And my guess is there were like a thousand impressions, which is not a lot. People weren't ready yet in 2021 to hear this stuff. They didn't care about AGI. Most people outside of AI in the technical [00:30:00] world didn't know who Sam Altman was or care who Sam Altman was.

[00:30:03] Paul Roetzer: They just weren't known enough yet for people to pay attention. And I remember I started using an excerpt from that "Moore's Law for Everything" at the start of my talk. So, I've been doing an intro to AI class once a month since fall of 2021, and I use that excerpt every time. Forty-two times I've run that class now.

[00:30:26] Paul Roetzer: And a lot of times when I do keynotes, I use an excerpt from that, because my point was always: we have known this was all coming. If we had just listened to what the research labs were doing, from back in 2015 all the way through to when Sam wrote this in 2021, they had already seen the early forms of ChatGPT.

[00:30:50] Paul Roetzer: Google had early forms of language models that could do these kinds of things. We knew what they were working on. And so, "Moore's Law for [00:31:00] Everything," this was published two years before GPT-4 came out. It was 20 months before ChatGPT came out. And he said, the coming change will center around the most impressive of our capabilities, the phenomenal ability to think, create, understand, and reason.

[00:31:16] Paul Roetzer: Those are what these models now do. So he was telling us where the future is going. So when we look at where we are now: in "The Intelligence Age," he writes, as you said, we'll have these personal AIs, we'll have teams of virtual experts in different areas working together to create almost anything we can imagine.

[00:31:35] Paul Roetzer: Our children will have virtual tutors who can provide personalized instruction in any subject. So he's sort of looking out ahead, and then he addresses the question of, well, why? Why have we arrived at this moment? And I thought he was putting a stake in the ground, because there are people who still believe that we're not on the path to AGI.

[00:31:56] Paul Roetzer: We're not on the path to superintelligence. And so what he says is, in [00:32:00] 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it. That's truly it. Humanity discovered an algorithm that could really, truly learn any distribution of data. To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems.

[00:32:22] Paul Roetzer: I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is. Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

[00:32:40] Paul Roetzer: And so my point when I put this one up is: I hope people are listening this time. Yes, there may be some hyperbole here, and maybe the superintelligence thing doesn't happen in a few thousand days, but who cares about the superintelligence thing? The road to the general intelligence thing, progression along that, is enough [00:33:00] to disrupt everything.

[00:33:01] Paul Roetzer: The superintelligence conversation is fine, we can have that, but it's so abstract to people. We're still trying to wrap our heads around what GPT-4o can do, much less what superintelligence can do. So yeah, I just think people need to process this stuff. They need to think about the reality of a world where we have intelligence on demand and it gets smarter every day, and we can reasonably predict one to two years out what these models are going to be capable of doing.

[00:33:30] Paul Roetzer: And if we can look out ahead, then we need to be doing way more with educational systems, governments, the economy, businesses. We just have to accept that we're heading into a very different timeline. 

[00:33:45] Mike Kaput: You and I have talked about this, in one way or another, quite often, but I really do want to emphasize to the audience: despite all the chaos and news and stuff to learn and keep up with, there are only, I would argue, a few things you have to take [00:34:00] really deeply seriously.

[00:34:02] Mike Kaput: Things like these predictions, to get at least directionally correct on where we're going. Even if you put aside all the other noise, if you had only taken seriously the "Moore's Law for Everything" article, and now this one, I think you'd still be quite far along in terms of preparedness, in terms of at least having a sense of the broad contours of the future.

[00:34:22] Mike Kaput: So I think that's like an underrated point. People think they always need to learn more, do more. It's like, no, take this seriously. 

[00:34:28] Paul Roetzer: Yeah, like back in our book, which we published in the summer of 2022, there's a section that says, what happens when AI can write like humans? The reason I wrote that section was because I knew that was what was about to happen.

[00:34:42] Paul Roetzer: I didn't know ChatGPT was coming out that fall, but we had already seen early forms of GPT. We knew where the labs were going. And so, yeah, to your point, if you look out ahead and you can make these assumptions about what AI is going to be capable of one to two years out, think about the head start you can get.

[00:34:59] Paul Roetzer: Think about how many [00:35:00] companies didn't do anything until ChatGPT came out. Had no idea that generative AI was even a thing until that moment. It had been a thing for years. We knew what they were building. So yeah, being prepared, as Leopold would say, that situational awareness: be aware of what is happening right now.

[00:35:18] Paul Roetzer: And the stuff that we can reasonably predict will be true one to two years out. Anything beyond that, that's a fool's errand. But like, one to two years out, we have a reasonable idea. And we should start planning for that, not for the current capabilities. 

[00:35:32] Mike Kaput: And I would just lastly add, don't assume it all has to be perfect and accurate for it to be disruptive.

[00:35:38] Mike Kaput: Like we've talked about before, even if we get 30% of the way down the road that they're predicting, that is one of the biggest disruptions we will see in a long time. It just has to be good enough, not perfect, not superintelligent, to have a huge effect. All right. So our third big topic this week: on September 25th, Meta had [00:36:00] its 2024 Connect event, and they debuted a ton of new AI-related products and developments.

[00:36:06] Mike Kaput: So first up, kind of the headliner of the show was the company's Orion augmented reality glasses. Now, this is not a full-fledged product, but a prototype they revealed of augmented reality glasses that look similar to regular eyewear. These use advanced projection technology and include the same generative AI capabilities

[00:36:29] Mike Kaput: as the company's current Ray-Ban smart glasses to basically augment your reality as you are wearing them and engaging with the world. Now, they fully acknowledge this product is not being sold yet. Meta CTO Andrew "Boz" Bosworth actually posted on X the following message. Quote, we just unveiled Orion, our full AR glasses prototype that we've been working on for nearly a decade.

[00:36:53] Mike Kaput: When we started on this journey, our teams predicted that we had a 10 percent chance, at best, of success. [00:37:00] This was our project to see if our dream AR glasses, with a wide FOV (field of view) display, less than 100 grams, wireless, were actually possible to build. Not only do they work, we'll be using them internally as a time machine to help build the core experiences and interaction paradigms needed for the consumer AR glasses we plan to launch in the coming years.

[00:37:22] Mike Kaput: Now, Meta also announced the release of Llama 3.2. This now includes 11 billion and 90 billion parameter models that can also process and reason about images and are performing comparably to closed models on a range of benchmarks. That release also includes 1 billion and 3 billion parameter models that are specifically designed for edge and mobile devices.

[00:37:48] Mike Kaput: Now, there are also a couple of other really interesting updates for other Meta products and areas. There's a new Quest 3S VR headset that costs just [00:38:00] $299.99. There are updates to the existing Ray-Ban smart glasses, including improved AI responsiveness. Facebook and Instagram are testing AI-generated content right in the platform, and there are some new fun celebrity voices available when you use Meta's AI chatbots.

[00:38:18] Mike Kaput: So Paul, as you're looking at these, what jumped out to you here as worth paying attention to?

[00:38:25] Paul Roetzer: That this thing is not being produced any time soon. 

[00:38:26] Mike Kaput: Like that, I think. Not at all. 

[00:38:28] Paul Roetzer: Yeah, like, everybody, the media went nuts, and Twitter went crazy with this Orion thing. And one, I don't know if they're trolling OpenAI by using the Orion name.

[00:38:40] Paul Roetzer: Maybe that's been the project name all along, but that's been the rumored name of the next model from OpenAI. Yeah, I think the key thing people have to know is these glasses are not coming any time soon. The Ray-Ban ones are there, you can go buy them, you know, if you want. But I don't even see Meta winning here.

[00:38:57] Paul Roetzer: Like, so Zuckerberg is obviously all in, and you gotta [00:39:00] keep in mind the history here. Zuckerberg hates Apple. So, he, he despises the fact that his app is controlled by another platform and another company. So anything he wants to do with all of his different apps, you know, Instagram and Facebook and WhatsApp, is controlled by the App Store and Apple.

[00:39:20] Paul Roetzer: He does not want the future to live on someone else's platform. So the reason he tried to do the metaverse, and now he's doing glasses, is because he wants people off of phones. He wants to control the platform where all of this stuff lives. And he even said in an interview a couple of days ago that he thinks that by 2030, these glasses replace phones.

[00:39:43] Paul Roetzer: And he's not shy about this. Go listen to his interviews. He hates Apple. So, that being said, I'm betting on Apple and Google in this one. If I'm placing futures on who wins for [00:40:00] glasses, it's not Meta, in my opinion. And the reason is, they have no hardware capabilities.

[00:40:06] Paul Roetzer: I mean, they have Quest and stuff, but they can't manufacture at scale the way Apple does. So if we're talking about a hardware problem, which is what this now is, this is a manufacturing problem. Intelligence will be a commodity. They're all gonna have really smart models.

[00:40:23] Paul Roetzer: They're all gonna have the capability to see and understand the world around them, the multimodal, computer vision stuff. All of that's gonna be table stakes. This is a hardware thing in the future, and a software thing. I'm betting on Apple when it comes to that. Vision Pro: amazing technology, but it isn't gonna scale.

[00:40:42] Paul Roetzer: Too heavy, too expensive. Apple knows that. They put it out anyway. Google worked on Google Glass 10 years ago. Sergey Brin is back in the building every day working with the team on Gemini, and likely working on hardware for Project Astra, which is their demonstration of seeing and understanding the [00:41:00] world around you.

[00:41:01] Paul Roetzer: I just feel like chips, batteries, supply chain, logistics, manufacturing expertise, that's what this becomes, and Google and Apple will, in my opinion, crush Meta when it comes to that. Now, it's hard to bet against Zuckerberg; he obviously has the will and the vision and the money to do really complicated things. But I just feel like this is gonna be a three-horse race, and right now I would probably lean more in the direction of Google and Apple

[00:41:33] Paul Roetzer: eventually winning this. But Zuckerberg's got probably more motivation than everybody else, because he's basically betting the future again on kind of blurring his metaverse and his new AI thing together. And his hatred for Apple. 

[00:41:52] Mike Kaput: That could be a powerful motivator. 

[00:41:54] Paul Roetzer: Yeah. So again, while I think this could change, right [00:42:00] now I would be putting odds on probably Google and Apple to eventually figure this out. And, you know, maybe it'd be interesting to see who buys what, you know, eyewear companies and things like that.

[00:42:15] Paul Roetzer: I don't know. It's going to be fascinating to see it play out, but I agree that we will experience the world through our phones, which can see, which is Project Astra, and through glasses. I think those are the two things, but I don't see phones going away. That was the whole play with the Humane AI Pin, Rabbit, and all that crap.

[00:42:33] Paul Roetzer: Phones aren't going anywhere. It's a great form factor. We did have an article we didn't talk about recently that Jony Ive, who, you know, created the iPhone with Steve Jobs, is supposedly working on something with Sam Altman, something that supposedly isn't a phone. We don't know what it is yet, but there are going to be lots of attempts made at embodying this intelligence in different form factors.

[00:42:57] Paul Roetzer: But I still feel like glasses and phones are probably [00:43:00] the more likely outcome of how this plays out. 

[00:43:03] Mike Kaput: Yeah, it's interesting this got so much attention, because, honestly, I thought some of the implications of Llama 3.2 were more exciting, when you're talking about open, robust intelligence that can go on devices.

[00:43:15] Mike Kaput: Correct. I mean, yeah. 

[00:43:16] Paul Roetzer: They cannibalized their own news with a product that, obviously, Zuckerberg just wanted to show off, that they did something with the 10 billion dollars he spent over the last 10 years, basically, is what this looked like. And yeah, it's like Elon Musk showing off some car or future concept that we're not going to see for 5 or 10 years, and it gives a nice little stock boost and ego boost. And like we said earlier, we get these hardware examples of things that aren't going to be here anytime soon and likely won't look like anything. And it's interesting, because Apple is the one company who never gave in to that.

[00:43:51] Paul Roetzer: Apple kept things secret until the day they came out. And even Apple gave in finally and introduced Apple Intelligence [00:44:00] months before it was available, and even then they launched the hardware without it in it. So the pressure to do things in AI is massive right now for private companies and public companies.

[00:44:11] Paul Roetzer: And once Apple gave in and did it, I was like, okay, it's game over, everybody's just gonna start showing off all this stuff. And by the way, yes, I'm aware, for any listeners, that Snap came out with glasses too. Irrelevant. They're not gonna be a player in this. And plus they're terrible.

[00:44:28] Paul Roetzer: But, yeah. You heard it here first. 

[00:44:30] Mike Kaput: I like that. All right. Let's dive into some rapid fire this week. We've got a lot of really interesting things going on that honestly, if OpenAI was not just dominating headlines, many of these could have easily been main topics as well. So first up, California Governor Gavin Newsom has vetoed the AI safety bill SB 1047 in California.

[00:44:54] Mike Kaput: This is formally known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. [00:45:00] The bill would have implemented an extremely strict framework for AI in California. It would have required large AI companies in California to implement safety measures like a kill switch and testing protocols to prevent catastrophic harm.

[00:45:15] Mike Kaput: It would have applied to AI companies with AI models costing over 100 million to train or 10 million to fine-tune. And Governor Newsom actually cited a few reasons for vetoing this bill, which some had criticized as being far too strict and stifling innovation. He said it didn't account for whether AI systems are deployed in high-risk environments.

[00:45:38] Mike Kaput: It applied stringent standards even to basic AI functions. He also said it could give a false sense of security about controlling rapidly advancing technology, and that smaller specialized models could potentially be more dangerous than those being targeted by the bill. Now, this bill had faced opposition from tech companies; Meta, Amazon, and Google argued it [00:46:00] would stifle innovation.

[00:46:01] Mike Kaput: Some supporters, like Geoffrey Hinton, for instance, one of the godfathers of AI, thought it was reasonable and had published things supporting it. Elon Musk also was a fan, but that is irrelevant now, because the bill is not becoming law. So Paul, were you surprised by the veto actually going through?

[00:46:21] Mike Kaput: Like we had known this was a possibility? I mean, what does this mean for AI regulation and legislation? 

[00:46:28] Paul Roetzer: I'm not surprised. I don't remember exactly what I said on the recent episode; I feel like maybe I said there was like a 60 percent chance this thing was going to get vetoed. But yeah, I was kind of in that 60-40 camp.

[00:46:41] Paul Roetzer: And I thought that was because there seemed to be far more appetite for the current administration to get involved here, and that there was going to be a lot of pressure, once Pelosi got involved, on Newsom to sort of pump the brakes on this and let some stuff play out [00:47:00] before California stepped in and did something that could seem a little bit overreaching, based on the model capabilities today and where we kind of knew they were going.

[00:47:08] Paul Roetzer: So it's not the end of the story. The big picture here is there's a lot of push from one side that wants regulation at the application layer, not the model layer. Again, I think I used the example of guns or bombs or whatever: you can create dangerous things that aren't dangerous until you use them in dangerous ways.

[00:47:28] Paul Roetzer: And that's kind of the argument here: the model itself, yes, it has capabilities, just like the internet, to be used to do evil things. But it's the use of it to do the evil thing that should be regulated, not the actual general-purpose technology. And that's kind of what won the day here, I think: okay, we need to think this through a little bit more, we need to allow innovation to take hold.

[00:47:51] Paul Roetzer: That doesn't mean we're done. I mean, we said on a recent episode, there are like 700 bills currently related to AI at the state level, in different stages. [00:48:00] And I do think once we get through this election cycle, regardless of what the administration in the U.S. is, there is going to be a far greater appetite for the federal government to get involved.

[00:48:11] Paul Roetzer: And I think that's, in essence, what's being allowed to happen here: let's slow down for a second, let's see where these models are going, and let's have higher-level conversations at the federal level to figure out if we should be doing some variation of the AI Act that's in the EU, a more nationalized approach to this.

[00:48:33] Paul Roetzer: And I'm not a believer that's gonna happen, you know, in the near future, but I think that's basically what's going on here: slow down and let's think about this at a higher level. 

[00:48:44] Mike Kaput: Alright, next up, Anthropic is exploring a new funding round that could significantly boost its valuation.

[00:48:51] Mike Kaput: So, this company has reportedly floated a potential valuation of 30 to 40 billion in early talks with investors, [00:49:00] which would be roughly double its valuation from earlier this year. Now, as a reminder, we just talked about how OpenAI is trying to raise several billion dollars at a valuation around 150 billion.

[00:49:12] Mike Kaput: So, Anthropic appears to be attempting to capitalize on the intense investor interest in AI companies. Now, it's worth noting Anthropic's financial position. The company projects annualized revenue of about 800 million by the end of 2024, but it is also expected to burn through about 2.7 billion this year.

[00:49:34] Mike Kaput: And a significant portion of its revenue is shared with Amazon, its cloud provider and reseller. In comparison, OpenAI is generating about five times more monthly revenue than Anthropic. And if Anthropic secures funding at a 40 billion valuation, the multiple on revenue, which is about 50x, would be higher than OpenAI's, which would be less than 40x at the current revenue numbers.

[00:49:59] Mike Kaput: [00:50:00] So, Paul, as you're looking at the numbers being thrown around here, obviously nothing is set in stone until they raise this money. Does their valuation, given their position behind OpenAI, make sense? 

[00:50:13] Paul Roetzer: Yeah, it does. And actually, one of my favorite podcasts is, I've probably referenced it on the show before, but, BG2 with Brad Gerstner and Bill Gurley.

[00:50:22] Paul Roetzer: It's a masterclass in how VCs see the world every time they publish. And I just happened on Sunday to be listening to their podcast, and they talked about this stuff. And so what they said was, OpenAI's 150 billion valuation is about 15 times forward revenue. The key here is forward revenue: we're looking one year out at what the revenue is going to be, not what it was

[00:50:47] Paul Roetzer: in the 12 months preceding. They said Google IPO'd in 2004 at about 10 times forward revenue. Microsoft invested in Meta, which people may not know or have forgotten, in 2007 [00:51:00] at 50 times revenue. But when Meta IPO'd in 2012, they were at about 13 times revenue. So yeah, the 10 to 15 times forward revenue for this kind of company is a reasonable range to be looking at.

[00:51:13] Paul Roetzer: And so when you look at Anthropic's projected growth 12 months out, to get into the 40 to 50 billion range, my guess is they're probably somewhere in that 10 to 15 times forward-looking revenue range. And that's a reasonable way to assess this kind of company. 

[00:51:26] Mike Kaput: Gotcha. Gotcha. That is good context, because I do think people sometimes, especially if you don't know anything about how these numbers are arrived at, you just see these eye-popping numbers.

[00:51:36] Mike Kaput: But as we've talked about, all the numbers are eye popping now in AI. But also, there is a logic behind some of this, at least in certain circles. 

[00:51:45] Paul Roetzer: Yeah, we'll drop the link to that podcast episode in there, because they talked about nuclear as well, and energy and infrastructure and stuff. But yeah, I mean, Brad and Bill are just brilliant.

[00:51:55] Paul Roetzer: So, again, highly recommend it. They take the perspective of VCs, but they've been involved in some of the biggest investments and deals of the last 20, 25 years. So they just have so much insight. I mean, there are some podcasts where I would gladly pay to listen each time.

[00:52:12] Paul Roetzer: This is one of those where I would happily pay, because I feel like the value I get is so insane. 

[00:52:17] Mike Kaput: And I would also argue, too, for some people that might not be as, you know, investing-inclined: what do you have to learn from a VC? Well, these are the people driving the behavior at these companies.

[00:52:28] Mike Kaput: So I would argue even if you have zero interest in public markets or private investments, you can understand the levers that people are pulling and the motivations that are driving these founders and CEOs as well. Alright, so next up, the Federal Trade Commission, the FTC, has launched Operation AI Comply, a law enforcement sweep targeting companies using AI to deceive or harm consumers.

[00:52:57] Mike Kaput: Now, these actions from the FTC span a [00:53:00] range of what we would call AI-related deceptions. They can include both false claims about AI's capabilities, or the use of AI tools to actually cause harm, like generating fake reviews. A few notable cases have already been swept up in this. A company called DoNotPay

[00:53:20] Mike Kaput: made some misleading claims about an AI lawyer product, claiming it does some things the FTC disagrees it actually does. A company called Ascend Ecom made some false promises of AI-powered passive income that have gotten swept up in this operation. And then there's an AI tool called Rytr, spelled R-Y-T-R, not the word Writer, which is another tool we have talked about.

[00:53:47] Mike Kaput: Rytr's AI tool has also been flagged for creating fake product reviews. So, this initiative is part of a broader effort by the FTC to address AI [00:54:00] related consumer protection issues. Now, Paul, this is clearly somewhat of a problem if the FTC is bothering to take action around it. And you and I have talked about this; we've literally spent years observing AI companies, like, exaggerate, or even sometimes, unfortunately, deceive about what their technology is capable of.

[00:54:21] Mike Kaput: Like, how big a problem is this today? 

[00:54:23] Paul Roetzer: This is the application layer at work, which is what I was saying on the previous topic. The model level is: we stop the models from being capable of doing things like this; we protect consumers in that way. At the application layer, which is where the government kept saying, we have existing laws that cover this stuff.

[00:54:40] Paul Roetzer: This is misuse of AI at that layer, and this is the government showing its muscle, saying, we will stop this. Now, the timing is interesting, with SB 1047 being vetoed and the government coming out with this at the federal level at the same basic time, saying, hey, we've got this. It may just be a coincidence, but I think the government's trying to [00:55:00] demonstrate that we already have laws in place to protect consumers and we will enforce them.

[00:55:05] Paul Roetzer: And, so yeah, I mean, it is so prevalent. There's no way the FTC has enough employees, or in the near term enough AI agents, to monitor and pursue all the ways AI is being misused already and going to be misused. But it's like speeding tickets: everybody's speeding, but every once in a while you've got to catch a few people so that, hopefully, you slow everybody else down.

[00:55:32] Paul Roetzer: That's kind of what this is. It's like, we're here, we have laws, we can choose to enforce them when we want to, and we'll make some examples out of people, and hopefully that deters everyone else.

[00:55:45] Mike Kaput: Alright, so, Scale AI is a company that provides data labeling services for AI developers, and they have been getting a ton of attention lately thanks to their remarkable growth.

[00:55:58] Mike Kaput: The company's sales nearly [00:56:00] quadrupled to almost 400 million compared to the same period last year, with annualized revenue approaching 1 billion. Now, that's because Scale AI actually pivoted from primarily serving self-driving car companies with their data labeling to becoming a crucial infrastructure provider for major AI developers like Meta Platforms and Google.

[00:56:22] Mike Kaput: The company employs hundreds of thousands of hourly workers to fine-tune data for AI models and positions itself as a sort of hybrid human-AI system for producing high-quality data at a low cost, which is necessary to train the models that we all use every day. Not to mention, it's got a pretty high-profile and outspoken founder, Alexandr Wang, who is just 27 years old.

[00:56:48] Mike Kaput: Now Wang has just given an interview on the A16Z podcast that gave us a look under the hood at Scale AI's success and why this company is important in the general AI [00:57:00] ecosystem. So, Paul, I know you found a lot to pay attention to in this interview with Alexandr. What's worth noting here?

[00:57:08] Paul Roetzer: We've talked about him a few times on the podcast, but again, I think the point here is this is someone everyone should know and be paying attention to. He is heavily influential in the training of basically every major frontier model. His company is kind of leading the way in working with all the major frontier model companies.

[00:57:24] Paul Roetzer: I don't know, I think when you pay attention to so many different channels, you start to notice these trend lines real quick. And so, he was featured in The Information and the Wall Street Journal, and then the A16Z episode dropped simultaneously, which tells me this is a proactive PR effort. They're telling their story.

[00:57:48] Paul Roetzer: Yes. But there's more to this. There are reasons why you do this. Mike and I both came from PR backgrounds; I owned a PR firm. It's pretty obvious when the stage is [00:58:00] being set to do something bigger. So I'll just go through it. Honestly, I wanted to do this one as a main topic, but, Mike, as you said, there's so much else going on.

[00:58:08] Paul Roetzer: So I'm going to do a quick rapid fire with some of the key things from this interview, and I would encourage people to go listen to the A16Z podcast in its totality, because he talks in generalities about things he knows very specific details about. What I mean by that is he can't say most of what he knows, so he's speaking in these general terms, but if you know what he does and the companies he works with, you can read between the lines about a lot of things. So I'll call a few things out.

[00:58:37] Paul Roetzer: So first, the three pillars of AI. He talks about compute, the models or algorithms, and then the data. So compute has been powered by folks like NVIDIA, the algorithmic advances have been led by large labs like OpenAI, Google, and others, and data is Scale. That's his company, Scale AI. So they are the data source, what he calls a data foundry.

[00:58:57] Paul Roetzer: They want to be the place that provides all this [00:59:00] data, and I'll explain why that matters more in a minute. Because prior to this, the data foundry was scrape everything from the internet, and that's not going to get us to the next level of models. He talked about three phases of the state of models.

[00:59:12] Paul Roetzer: Phase one was pure research. This was like the invention of the transformer, smaller models, up to GPT-3, basically. So this gets us to 2022, roughly. Phase two is realizing that the scaling laws seemed to be true: we give them more compute, more NVIDIA chips, more data, more time to train, and we experiment with those models.

[00:59:35] Paul Roetzer: We get more powerful, generally capable models. So that's GPT-3 up until now. And then phase three he considers heavy research and algorithm innovation. So he said, we're entering a phase where research is going to start mattering a lot more. I think there will be a lot more divergence between the labs in terms of what research directions they choose to explore and which ones ultimately have breakthroughs.

[00:59:58] Paul Roetzer: So, for example, Ilya Sutskever [01:00:00] leaving OpenAI to create Safe Superintelligence. Ilya's probably going to push on some areas of research that the other labs aren't yet, so you're going to have some bets made, basically. And he said one of the hallmarks of this next phase is actually going to be data production.

[01:00:14] Paul Roetzer: So now the data is going to matter. It's going to diverge; it's going to be different between the different labs. He also talks about AI agents and how they suck and don't work, which kind of echoes what I was saying last week. We always talk about agents, but the reality is we're not there yet. These are GPT-1, GPT-2 level things.

[01:00:30] Paul Roetzer: We're still just looking at experimentation, with heavy human involvement in the building and oversight of them. He said the reason, which echoes what we had said, is these things have an inability to string together tools through a chain of thought, things like the internet, a calculator, a content management system, knowledge bases. They're not good at tool use yet.

[01:00:47] Paul Roetzer: They can use individual tools, but they can't do what humans do and jump around between all these different tools. And, as we've said multiple times, there's a lack of reasoning data. The internet is full of output data, the final product. [01:01:00] It is not full of how humans arrived at the final product.

[01:01:04] Paul Roetzer: And so that's what his data foundry will do. They will hire experts, PhDs, to teach the models how to think at an expert level across all these different domains. So he said, these reasoning chains: when humans are solving complex problems, we naturally use a bunch of tools. We'll think about things, we'll reason through what needs to happen next.

[01:01:23] Paul Roetzer: We'll hit errors and failures, and then we'll go back and sort of reconsider a lot of these reasoning chains. For these agentic chains, the data just doesn't exist. So that's an example of something that needs to be produced. And then the final thing is he talked about enterprise adoption, and he said the proof of concepts just haven't scaled.

[01:01:42] Paul Roetzer: There's been too much focus for the last couple of years on efficiency and productivity and not enough focus on innovation. And with that I agree 1000%. This is what we see all the time, and what I talk to enterprises about all the time. Productivity and efficiency gains are the low-hanging fruit. Now, a lot of companies aren't doing those well yet, but [01:02:00] that's like table stakes.

[01:02:01] Paul Roetzer: It's applying these things to drive innovation, to find new markets, new product ideas, and that's where I'm just not seeing it yet. There's such a lack of understanding about how to do that. And that, to me, is the opportunity across every industry: be the ones that figure out how to drive innovation and accelerate it with AI.

[01:02:20] Paul Roetzer: And again, to hear him saying it, it's like, okay, good. It's not just me. Sometimes I think I'm just hallucinating that these problems exist, and then I hear someone like him say it, and it's like, okay, I'm glad I'm not alone on this one. So, yeah, just a fascinating interview. He's a major player.

[01:02:38] Paul Roetzer: You're going to hear his name a lot more moving forward. 

[01:02:41] Mike Kaput: Alright, so next up, we have a new research paper out called The Rapid Adoption of Generative AI. And it's making some waves because it's reporting on results, quote, from the first nationally representative U.S. survey of generative AI adoption at work and at home.

[01:02:58] Mike Kaput: This paper is the work [01:03:00] of researchers from the Federal Reserve Bank of St. Louis, the National Bureau of Economic Research, and Vanderbilt. And basically it shows surprisingly rapid and widespread uptake of generative AI. The survey was conducted in August of 2024 and shows that 39 percent of U.S. adults aged 18 to 64 have used generative AI.

[01:03:20] Mike Kaput: 28 percent said they've used it at work, and just over 10 percent say they use it daily at work. Now, when asked about what tools they most commonly use, ChatGPT was the most common, followed by Google Gemini and Microsoft Copilot. And the study also notes that two years after being introduced widely to the U.S. population, generative AI has reached a level of adoption at a rate that outpaces, quote, both personal computers and the internet in their early stages.

[01:03:58] Mike Kaput: The researchers estimate that [01:04:00] between 0.5 percent and 3.5 percent of all work hours are currently assisted by generative AI, and they say that generative AI appears to currently be most helpful in writing, administrative tasks, and summarizing information. Now, to get at this data, the researchers used something called the Real-Time Population Survey, or RPS, a nationally representative survey designed to collect data on various labor market trends.

[01:04:28] Mike Kaput: And so what they did is they incorporated a module within this survey to measure generative AI adoption. The survey was fielded, like I said, in August 2024; it had just over 5,000 responses and focused on workplace use and non-work use. So, Paul, this certainly seems pretty significant just given where it's coming from and who's doing it.

[01:04:53] Mike Kaput: Like, did these findings surprise you at all? 

[01:04:57] Paul Roetzer: No, but here's what I'll say. And again, I feel like I could do a main topic on this one, but I'm glad that the Federal Reserve is involved in this. They're assuming correlation between past adoption rates to predict the future impact of generative AI.

[01:05:14] Paul Roetzer: My general take is this is interesting, but not very helpful. Economists generally will look at precedents. They will take a historical view to try and figure out the future. And we're trying to figure out a future that looks nothing like the past. So we need an approach that layers in the exposure levels we've talked about.

[01:05:34] Paul Roetzer: So when I built JobsGPT, what I was trying to do is say, we can't look at the past to figure this out. I mean, if anything, this is an indicator that that holds even more true. We're looking at adoption rates far beyond what we saw with PCs and the internet, so this is not apples to apples even then.

[01:05:51] Paul Roetzer: But until we look out one to two years and say these things are going to be expert level at persuasion and reasoning, they're going to have computer vision to understand the world around them, they're going to have all [01:06:00] these things they don't currently have, and it's going to be omnipresent in everything we do...

[01:06:05] Paul Roetzer: Then we could get into a realistic view of what the impact on jobs and the economy is. And that's where, again, I've had conversations with leading economists and I see the same problem every time: they're looking at the world through the past and the present. They are not truly understanding a future that

[01:06:23] Paul Roetzer: seems quite apparent when you look at what the labs are building. And that's my concern. This was, by the way, a great use for NotebookLM from Google. I dropped the paper in there and was having conversations with it, chatting with it. Which, by the way, we didn't have time for this rapid fire item, but they announced YouTube and audio support for NotebookLM, so you can now drop in a YouTube link. I'll continue to plug it.

[01:06:46] Paul Roetzer: It's just an awesome product. But again, these studies are interesting but not very helpful, and I hope that the government isn't basing decisions about the economy on this kind of research.

[01:06:59] Mike Kaput: Yeah, I think [01:07:00] that's also worth mentioning: just understanding that even very, very smart people, like government economists, have blind spots or preferred ways of doing things that may not always map to the transformative effects we're going to see with AI.

[01:07:16] Mike Kaput: Alright, so our last topic today is a little bit of a warning about AI use cases. A post on X from a machine learning expert has a warning for anyone using AI on work or business calls. This comes from Alex Bilzerian, who's a machine learning head at Hive.ai, and he posted the following. Quote, a VC firm I had a Zoom meeting with used Otter.ai to record the calls.

[01:07:45] Mike Kaput: And after the meeting, it automatically emailed me the transcript, including hours of their private conversations afterward, where they discussed intimate, confidential details about their business. He then noted in a follow-up [01:08:00] post, and read this with sarcasm, that Otter.ai users can rest easy knowing that, quote, DFJ Dragon Fund China is on the board watching closely.

[01:08:10] Mike Kaput: He included a screenshot of PitchBook data showing that certain companies on Otter's board may not be ones that are super friendly to your confidential information. So Paul, we've kind of griped about this trend before in other contexts, with people being really free about how they use AI recorders on calls without asking permission.

[01:08:31] Mike Kaput: This adds a whole new dangerous element here. Should people broadly be rethinking how they're using these types of tools on calls?

[01:08:42] Paul Roetzer: Yes. I am shocked, honestly, by how often notetakers are showing up to meetings without permission. There's no permissions level. So if you think back to Zoom, you know, at first, pre-pandemic, you didn't assume we were going to be on video together.

[01:08:58] Paul Roetzer: You kind of came to an [01:09:00] agreement that we were going to turn the videos on. And then the pandemic led to everyone just assuming we were going to be on video all the time. But even now with Zoom, if I want to record our call, it pops up with an alert saying, hey, Mike is recording this call.

[01:09:12] Paul Roetzer: And I say, okay, I hopped in and I know it's being recorded. But the notetakers just show up. Now, most people are familiar with Otter, but sometimes you'll get a notetaker show up and it's like, I've never even heard of that app or that company. What is this, and where's this information going?

[01:09:29] Paul Roetzer: And personally, I find it kind of offensive that people just assume I'm cool with their random notetaker showing up for our conversation. I immediately become extremely guarded about anything I'll say, because you just don't know. And then we're seeing this play out with DNA data, with 23andMe, I think, is the company that got acquired.

[01:09:50] Paul Roetzer: And it's like, well, good luck if you sent your DNA to them. Whoever buys them at auction is going to own your DNA, which is an interesting concept. [01:10:00] And so, yeah, this is again the kind of buyer beware: be aware of the tools you're using, be aware of who is invested in those tools, where those apps came from.

[01:10:11] Paul Roetzer: And I would just take a very cautionary approach to this stuff. This is why I think, at the end of the day, the companies that win in AI are the big existing tech companies, because, for better or worse, they already have all our data and you already trust them to some degree. Whereas there are plenty of these apps people use where I'm just shocked they're giving them the data they're giving them.

[01:10:31] Mike Kaput: Yeah. I also wonder how much people understand where these tools can go wrong, too. Because as it becomes normal to do AI recordings, with companions sending out summaries, what about someone who wasn't on the call? Does that person understand that AI summaries can just be bad or wrong?

[01:10:51] Mike Kaput: Like if I go on an intro call, right, representing our company to customers, clients, potential speaking engagements, and I get this crappy summary [01:11:00] after that doesn't give any of the context to what I said, and someone else who wasn't on the call is reading it, I mean, that's just not a good look either.

[01:11:08] Paul Roetzer: Yeah, again, it goes back to that people just don't understand.

[01:11:11] Paul Roetzer: yeah, you're right. You could misrepresent something you said and people assume it was fact and they're not going to go take the time to check it. Yeah, this is one of those that's just, it's become such a popular use case that is, there's so little understanding of, of it and there is this assumption that it just works and it's cool if I send my note taker and, yeah, I don't know, man.

[01:11:32] Paul Roetzer: And people just have it automatically join every call. It's like, oh, that's my notetaker, it's just invited to every meeting, it's all automated, and then you forget you're even doing it. Slow down on that. If you're one of those people who's just sending your notetaker to every meeting, don't assume that people are just cool with everything they say being recorded.

[01:11:52] Paul Roetzer: I personally am not, but people do it anyway. So, [01:12:00] that's a good warning. A public service announcement, I guess.

[01:12:05] Mike Kaput: Alright, Paul, that's all we got this week. A big, busy week in AI, as always, this one a little more so. As a final reminder to everyone, check out our newsletter at MarketingAIInstitute.com/newsletter.

[01:12:16] Mike Kaput: Like I said, this week we have literally dozens of things that we could have covered if we had unlimited time. They did not make it into the episode, but they'll be in the newsletter and we'll dive deeper into them there. Also, if you can and have not yet left us a review on your podcast platform of choice, we'd very much appreciate it.

[01:12:36] Mike Kaput: It helps us improve and get to more listeners. Paul, thanks so much for breaking everything down this week. I hope this week is a little slower.

[01:12:46] Paul Roetzer: I could use a little less drama this week. So, yeah, hopefully we will be back next week, but hopefully it is not with the chaos that was the week prior.

[01:12:59] Paul Roetzer: All right. Thanks, Mike. [01:13:00] Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:13:23] Paul Roetzer: Until next time, stay curious and explore AI.
