51 Min Read

[The AI Show Episode 96]: Sam Altman Interview: “ChatGPT Is Mildly Embarrassing,” The Email Microsoft Doesn’t Want You to See, and Amazon Q


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


Sam Altman shares AI insights, Microsoft's rushed OpenAI decision comes to light, and Amazon Q hits the market! Join Paul Roetzer and Mike Kaput as they explore Altman's thoughts on AI infrastructure, AGI dangers, and the mysterious gpt2-chatbot. This week’s episode also examines Microsoft's accelerated partnership with OpenAI, driven by fears of Google's AI capabilities, and Amazon's new AI-powered assistant, Q, which aims to boost productivity for developers and businesses alike.

Listen or watch below, and scroll down for the show notes and transcript.

Listen Now

Watch the Video

Timestamps

00:04:22 — Sam Altman’s Possibilities of AI Talk at Stanford

00:15:27 — Microsoft’s Rushed OpenAI Decision

00:29:57 — Amazon Q is now available

00:36:24 — Why Perplexity Might Not Last

00:41:17 — Major U.S. Newspapers Sue OpenAI and Microsoft

00:43:45 — Elon Musk’s Plan For AI News

00:50:29 — First Commissioned Music Video Made 100% with Sora

00:52:47 — New Bill Would Require a Kill Switch for AI Models

00:56:32 — More Bad News for AI Hardware

01:01:29 — Quickfire AI Product Updates

Summary

Sam Altman Interview at Stanford

We now have access to the full interview with Sam Altman that recently came out of Stanford, where he said that ChatGPT was, at best, "mildly embarrassing."

The interview is wide-ranging and covers a number of important topics that give us some clues as to AI’s near-term future. Some of these topics include: the importance of AI infrastructure, the dangers of AGI, OpenAI’s iterative approach to AI, and AI-generated code and research assistance.

Altman has also been posting on X in increasingly cryptic, and perhaps tongue-in-cheek, ways about the latest mystery in AI:

A mysterious new model has been released called gpt2-chatbot… The chatbot publicly appears to have no affiliation with OpenAI, but seems to have the power of a GPT-4 class model. (Some commentators and testers even say it’s better than GPT-4.)

The model appeared randomly on LMSYS.org, a well-known model benchmarking site.

And, amid a firestorm of speculation, Altman has published a couple of cryptic posts on X referencing GPT-2.

Microsoft’s Rushed OpenAI Decision

An email Microsoft would have preferred stay unseen details how the company rushed into its partnership with OpenAI.

The email was recently unsealed due to the US Justice Department's antitrust suit against Google, which alleges that the company has engaged in anticompetitive practices through its dominance of search and search advertising. The New York Times requested the email be made public after its existence was revealed as part of the suit.

There’s a lot to this case, but the government’s core argument is that Google’s practices, like its deal with Apple to make Google the default search engine on Apple devices, have been intentional moves to monopolize search.

Google, however, claims that its ownership of such a large share of the search market is due to the quality of its product, not anticompetitive actions.

In the email, which is from mid-2019, Microsoft’s CTO and executive VP of AI, Kevin Scott, said he was “very, very worried” that he had made “a mistake” in underestimating Google’s AI capabilities.

Scott said that Google appeared to be building critical AI infrastructure, not simply paying public lip service to AI. That work, Scott said, was already paying off, and it could take Microsoft “multiple years” to even attempt to compete with Google.

Satya Nadella took notice and immediately forwarded the email to his CFO, saying it explained “why I want us to do this,” which appears to refer to the company’s investment in OpenAI. Weeks later, the company invested $1B in OpenAI, followed by billions more since.

This stands in stark contrast to the high-flying language Microsoft used to justify its investment in OpenAI. It turns out they may have pulled the trigger on the partnership because they were terrified of Google.

Amazon Q

Amazon has announced that Amazon Q is now generally available. Q is the company’s AI-powered assistant for internal teams and aims to use your company’s internal knowledge to help your employees be more productive.

There are two main Q applications: Q Developer and Q Business.

Q Developer assists developers and IT professionals with coding, testing, and troubleshooting; it can generate, test, and debug code. Q Business answers questions, provides summaries, generates content, and securely completes tasks based on your company’s unique data.

In addition to the two main Q applications, Amazon also launched a preview of Amazon Q Apps, which enables employees to build their own generative AI applications without any prior coding experience.

Q Developer has two plans: a free tier and a Pro tier for $19/user/month. Q Business has two subscription tiers: $3/user/month for the Lite version and $20/user/month for the Pro version.

When you dive into the fine print, it appears Q Developer and Q Business have some regional availability restrictions.
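
For readers who want a sense of what wiring Q Business into their own tooling might look like, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes a Q Business application has already been created and connected to your data sources; the application ID and question are placeholders, and the exact parameters and response fields should be verified against the current AWS documentation.

```python
import boto3

# Hypothetical sketch: ask an existing Amazon Q Business application a question.
# Assumes the app is already set up and indexed against your company's data
# sources; the IDs and question below are placeholders, not real values.
client = boto3.client("qbusiness", region_name="us-east-1")

response = client.chat_sync(
    applicationId="YOUR_Q_BUSINESS_APP_ID",           # placeholder application ID
    userMessage="Summarize our current PTO policy.",  # answered from internal content
)

# The synchronous response includes the generated answer plus source
# attributions pointing back to the documents it drew from.
print(response.get("systemMessage"))
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))
```

One design point worth noting: answers are scoped to what the calling user is permitted to see, which is the access-control behavior Paul discusses later in the episode.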

Links Referenced in the Show

Today’s episode is brought to you by our AI for B2B Marketers Summit, presented by Intercept.  This virtual event takes place on June 6 from 12pm - 5pm EDT and is designed to help B2B marketers reinvent what’s possible in their companies and careers.

To learn more go to https://www.marketingaiinstitute.com/events/ai-for-b2b-marketers-summit

Today’s episode is also brought to you by our 2024 State of Marketing AI Survey.

Our 2024 State of Marketing AI Survey is an annual deep dive into how hundreds of marketers actually use and apply AI in their work.

To fill out the survey, go to www.stateofmarketingai.com 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: the way that the open source side and some of the closed models would argue is, we're just building the models. If bad actors use the models in bad ways, that's their choice, but the model itself isn't inherently bad. But can it be used to do bad things? Of course it can.

[00:00:14] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:44] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:51] Paul Roetzer: Welcome to episode 96 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We are coming to [00:01:00] you, let's see, we were recording this May 6th, Monday, May 6th at 10am Eastern time. We have some interesting topics today. We're going to get into some new Sam Altman quotes, always a popular topic on, on the podcast.

[00:01:16] Paul Roetzer: And I feel like I could just sit around and like ponder things Sam says in tweets all the time. Pretty intriguing leaked email, or I guess, email that came out in a court hearing, from Microsoft that provides some very fascinating context on kind of the history of the last five years in AI development.

[00:01:37] Paul Roetzer: And some big news from Amazon with their Amazon Q products are going to be our main topics today and then a bunch of rapid fire. So, we got all that and more to get into today. 

[00:01:46] Paul Roetzer: This episode is brought to us by the AI for B2B Marketers Summit presented by Intercept. That is the sponsor of the event.

[00:01:55] Paul Roetzer: Uh, this is a Marketing AI Institute virtual event that's happening, June [00:02:00] 6th from 12pm to 5pm Eastern time. And it's designed to help B2B marketers reinvent what's possible in their companies and careers. During the event, you'll learn how to use AI to create dynamic customer experiences, bridge the gap between marketing and sales, build an AI council, and more.

[00:02:16] Paul Roetzer: Uh, thanks again to our presenting sponsor, Intercept. There is a free registration option. So the way we do these virtual events a lot of times through the Institute is a free option. And then there's an upgrade option for $99 if you want on demand, but the event itself is free. The AI for Writers Summit we previously talked about, we had 4,600 people from 93 countries in that one.

[00:02:39] Paul Roetzer: The B2B Marketers Summit, we're up to 1,700 people registered so far with a month to go. And we just are announcing the agenda like this week, I think. So I believe the agenda is now live on the site, but we previously hadn't announced the agenda yet. So that, that should be good to go. You can go to marketingainstitute.

[00:02:58] Paul Roetzer: com and click on Events [00:03:00] in the top navigation, and AI for B2B Marketers Summit is right there. So you can go check that out and get registered today. And then this episode is also brought to us by the 2024 State of Marketing AI Survey. We've been talking about this one on the recent episodes. We have, how many people, Mike, have taken this so far?

[00:03:17] Mike Kaput: Uh, almost 700 have taken this year's survey.

[00:03:20] Paul Roetzer: Yeah. So we were hoping to, you know, clear well over a thousand respondents to the survey this year. So this survey is open right now. We're going to publish this research later in the summer. So you can go and be a part of the research. And this is something we've done, what is it, three or four years now?

[00:03:36] Paul Roetzer: I'm like, I forget. Four years, four years now with Drift. Yeah. So thanks to Drift for continuing to support this research. The State of Marketing AI Report is our annual deep dive into how hundreds or thousands of marketers actually use and apply AI in their work. By filling out the survey, you're helping the entire industry grow smarter about AI.

[00:03:54] Paul Roetzer: It only takes a few minutes. You can go to stateofmarketingai.com and click the link for the [00:04:00] 2024 survey. And while you're there, you can download 2023 and see where we were at around this time last year. And then, like I said, we'll publish that research this summer. So again, stateofmarketingai.com. And thanks again to Drift for supporting that research.

[00:04:16] Paul Roetzer: All right, Mike. What did Sam Altman have to say this time?

[00:04:22] Sam Altman’s Possibilities of AI Talk at Stanford 

[00:04:22] Mike Kaput: Well, quite a bit, because we now have access to the full interview with Sam Altman that recently came out of one of his appearances at Stanford. And this is the interview we had alluded to briefly last week, where he said that GPT-4 was, at best, quote, mildly embarrassing. So this interview

[00:04:43] Mike Kaput: aside from that quote, is really wide ranging and covers a number of important topics that give us some clues as to AI's near term future. So these include topics like the importance of AI infrastructure, the dangers of AGI, OpenAI's iterative [00:05:00] approach to AI development and AI generated code and research assistance and how important those things will be.

[00:05:07] Mike Kaput: You know, it's always been, Paul, like you talked about, important to kind of unpack what Altman's saying and the words of other AI leaders to kind of understand where everything's going. But it might not ever have been as important as it is

[00:05:20] Mike Kaput: today because Altman is also posting

[00:05:24] Mike Kaput: increasingly cryptic and maybe tongue-in-cheek posts on X about the latest mystery sweeping the world of AI, because concurrently we're kind of also seeing this mysterious new model that has been released called gpt2-chatbot, and this publicly appears to have no affiliation with OpenAI, but it seems to have the power of a GPT-4 class model, and some commentators and testers even say it's better than GPT-4.

[00:05:56] Mike Kaput: This model appeared randomly on lmsys.org, [00:06:00] which is a well-known benchmarking site.

[00:06:02] Mike Kaput: And it has everyone kind of speculating, this firestorm of speculation about like, what is this? How is it this good? Who's behind it? And, all the while, Altman has started posting a couple cryptic things referencing GPT-2 on X.

[00:06:20] Mike Kaput: So, Paul, like, first up, kind of, maybe set us up with what jumped out to you about the interview, and then maybe talk to us a bit about what's the deal here with GPT-2?

[00:06:32] Paul Roetzer: I, I mean, he tweeted on Sunday, I'm a good GPT-2 chatbot. That was the whole tweet. So yeah, he's either definitely just messing with people or this is like obviously something they're doing. So you, you hit on a couple of topics. So I'll just highlight a few excerpts from the interview. It's a, it's a good watch.

[00:06:54] Paul Roetzer: Like it's on YouTube, so anybody can go watch it. We'll put the show notes in. It's for a class at Stanford, [00:07:00] basically. And for you, for those who don't recall, Sam actually went to Stanford, I think for two years before he dropped out and started his company. he was in the first class of YC Combinator, I think, who are the Y Combinator companies.

[00:07:12] Paul Roetzer: So, he talked about infrastructure, you know, because there's this rumor about them raising seven trillion dollars and, you know, he's never really, like, committed to that number, but he says, yeah, we're thinking about energy, like, we're thinking about, you know, data and chips and, you know, chip capacity.

[00:07:29] Paul Roetzer: And, they're thinking about everything that's going to be required to build much, much larger, more powerful models. And so he does talk about that being maybe the thing that OpenAI is doing different than everybody else. It's the ambition with which they're looking at the amount of infrastructure needed to achieve what's, you know, ahead.

[00:07:51] Paul Roetzer: So he talks a little bit about that. It's about a, about a 40-minute interview, but they take like 10 or 15 minutes of questions from students. So, um, [00:08:00] he also talked about GPT-4. You mentioned the quote about ChatGPT is like mildly embarrassing. So the full quote there is, ChatGPT is mildly embarrassing at best.

[00:08:09] Paul Roetzer: GPT-4 is the dumbest model any of you will ever have to use again by a lot, but it's important to ship early and often. And we believe in iterative deployment. If we go build AGI in a basement and then the world is kind of blissfully walking blindfolded along, I don't think that's gonna make us very good neighbors.

[00:08:31] Paul Roetzer: So, again, this is just reinforcing why they do what they do, why they preview Sora, why they put ChatGPT in the world when they think it's an embarrassing product. Like, they're trying to prepare society for what comes. So then he did get a little bit more into the iterative deployment thing, and I thought this was maybe the most fascinating part of the whole thing, because he also then got into the cost of doing this, and so he said, we learn and we get better, and it does kind of suck to [00:09:00] ship a product that you're embarrassed about, but it's much better than the alternative, and in this case in particular, I think we really owe it to society to deploy iteratively. One thing we've learned is that AI and surprise don't go well together.

[00:09:14] Paul Roetzer: People don't want to be surprised. They want a gradual rollout and the ability to influence these systems. That's how we're going to do it. And there may be a point where we could totally be things. And oh, it says that sometimes it's a little, there's a lot of likes. You got to jump around a little bit.

[00:09:32] Paul Roetzer: the ability to influence the systems. So in the future, things could change where we'd think iterative deployment isn't such a good strategy, but it does feel like the current best approach that we've had, and I think we've gained a lot from doing this, and hopefully the larger world has gained something too.

[00:09:52] Paul Roetzer: And this is the part where I was like, wow. he says whether we burn, because he got asked about like how much money they're burning, [00:10:00] and he basically said, I could care less. He said it, he literally said it. There's probably someone at OpenAI who cares about and tracks how much money we're burning, but it's not me.

[00:10:10] Paul Roetzer: So he said, we could burn 500 million a year, or 5 billion, or 50 billion a year. I don't care. I genu-, I genuinely don't, as long as we can stay on a trajectory where eventually we create way more value for society than that. And as long as we can figure out how to pay the bills, we're building AGI. It's going to be expensive, but it's totally worth it. So I thought that was a fascinating one. And then he got a little bit more into AGI. The interviewer asked him about his definition. So he, he kind of, the interviewer said to Sam, you have defined it as software that could mimic the median competence of a competent human at tasks they do. And Sam was basically like, well, that's a terrible definition.

[00:10:56] Paul Roetzer: And he's like, well, I'm just telling you what you've said. And [00:11:00] so Sam basically said, You know, I think we need a more precise definition of AGI. and then he started getting into basically like where he thought we were going to go. And he pretty much said like, listen, these things are just going to get smarter.

[00:11:16] Paul Roetzer: So he said, the most important thing about GPT-5 or whatever we call it, maybe it's GPT-2, is just that it's going to be smarter. And this sounds like a dodge when being asked about what is AGI. But I think that's among the most remarkable facts in human history, that we can just do something. And we can say right now with a high degree of scientific certainty, GPT-5 is going to be smarter than, a lot smarter than GPT-4.

[00:11:43] Paul Roetzer: GPT-6 is going to be a lot smarter than GPT-5. And we are not near the top of this curve and we kind of know what to do. And this is not like it's going to get better in one area. It's not like we're going to, you know, get better at this eval or this [00:12:00] subject or this modality. They're just going to generally get smarter.

[00:12:04] Paul Roetzer: And then he said, I think the gravity of that statement is still underrated. So that was another one that I thought was just totally fascinating. Because again, this is the thing we keep stressing. Whether you believe it or not, whether Yann LeCun believes it or not at Meta, they have a very high level of confidence that there is currently no upper limit to what these models can do.

[00:12:27] Paul Roetzer: And the curve appears to continue to show that if they just build bigger models with more data, give it more computing power, train it over longer periods of time, that these things just keep getting generally smarter. and then the final one I'll note, cause, cause I hadn't heard him explain it this way.

[00:12:46] Paul Roetzer: He was asked about, you know, getting fired and the structure at OpenAI and why in the world they had this like crazy structure with the non profit overreaching everything. And Sam basically says like, listen, man, like we didn't know what we know now. Like when we [00:13:00] started this thing, it was just a research lab.

[00:13:02] Paul Roetzer: There was no product. There was no plan for APIs. Like we were just trying to push the frontier of research and this is how it got set up. Would we go back and do it different now? Of course. But like, Basically, give us a break. This isn't, we, we ended up building something we never envisioned building, basically.

[00:13:19] Paul Roetzer: So, those are, those are kind of the key aspects for me, but again, it's a, it's a, it's always fascinating to hear him talk. but I think he said some things here that reinforced what we've been hearing, but also some kind of new perspective on things.

[00:13:34] Mike Kaput: So, how do you, how are you looking at this speculation around GPT-2? Because I've seen, just like, take it for what you will, but there's insane theories out there that this is like GPT-5, or slightly less insane, that maybe OpenAI is testing a new variation of an existing

[00:13:56] Mike Kaput: model, or I saw some speculation that maybe it's a [00:14:00] new technology that makes old models, like GPT-2, perform better, or it's just a random thing.

[00:14:06] Mike Kaput: Do you have any sense of this?

[00:14:08] Paul Roetzer: I don't. I mean, I think that there's something to be learned in how Sam talks about what OpenAI, how they look at product development, which is iterative deployment.

[00:14:18] Paul Roetzer: And there's a chance that this is some form of iterative deployment where they're putting something out in the world, whether it's a finely trained version of the original GPT.

[00:14:27] Paul Roetzer: I do think they're going to change the naming convention. Like he, he, every interview he does, he mentions like, well, whatever we call GPT-5. So I wouldn't be surprised at all if they actually like restart the naming convention. I think it would be weird to just throw GPT out the window now from a naming standpoint.

[00:14:47] Paul Roetzer: but yeah, I have no idea. Like I, and I don't think anybody really does. Like until Sam tweeted what he did last night, like I'm a good GPT chatbot or whatever. I hadn't seen anything where OpenAI appeared to take [00:15:00] any credit for it. That was the first one where I thought, Oh, well maybe it, maybe it actually is them doing something like testing out some capability.

[00:15:07] Paul Roetzer: But, I don't know. there's a lot of rumors going on, like, you know, there's increasing rumors that they're going to announce a search engine, maybe even this week. I think it's just a big guessing game, and I haven't really seen any, you know, until we hear, I think, from OpenAI, whether it is or is not them.

[00:15:24] Paul Roetzer: It's just guessing. 

[00:15:27] Microsoft’s Rushed OpenAI Decision

[00:15:27] Mike Kaput: So, in our next big topic this week, there is an email that we now know about that Microsoft preferred we didn't see. And it details how the company may have rushed into its partnership with OpenAI. So, this email has come out because it's been recently unsealed, thanks to the U.S. Justice Department's antitrust suit against Google and the efforts of the New York Times, which requested that the email be unsealed after its existence was [00:16:00] made public as part of that suit.

[00:16:02] Mike Kaput: Before we get to that email, just a little bit about what this lawsuit against Google is and why Microsoft's involved. So this lawsuit is a major antitrust action being brought by the U.S. Department of Justice against Google, and it alleges that they've engaged in anticompetitive practices, namely through their dominance of search and search advertising.

[00:16:26] Mike Kaput: Now, there's a ton to the case, but basically the government is trying to prove that Google's practices, like its deal with Apple to make Google the default

[00:16:36] Mike Kaput: search engine on Apple devices, have been these intentional moves to like monopolize the search market. Google, however, says that its ownership of such a large share of the search market is due to the quality of its product, not anticompetitive actions.

[00:16:52] Mike Kaput: So as part of this lawsuit, Satya Nadella at Microsoft is taking the stand at times and, you know, all these documents and exchanges [00:17:00] are being analyzed and unsealed. So now to this email.

[00:17:04] Mike Kaput: In the email, which is from mid 2019, Microsoft CTO and Executive VP of AI, Kevin Scott, emails Satya Nadella saying he's quote, very, very worried that he had made, quote, a mistake in underestimating Google's AI capabilities.

[00:17:21] Mike Kaput: He had at first thought they were just kind of like doing an AI stunt, paying kind of lip service in public to AI, but he said that actually Google appeared to be building critical AI infrastructure. And that this was already paying off, again, back in 2019, and it could take Microsoft quote, multiple years to even attempt to compete with Google in AI.

[00:17:44] Mike Kaput: So according to reporting around this subject, Nadella took notice, immediately forwarded the email to his CFO saying it explained quote, why I want us to do this, and that appears to refer to the company's investment in OpenAI. [00:18:00] Now, just weeks after this scenario happened, the company invested a billion dollars in OpenAI, which was followed by billions and billions more that have been invested since then.

[00:18:12] Mike Kaput: This kind of starts to stand in a bit of contrast to the high-flying language that Microsoft has been using

[00:18:18] Mike Kaput: and has always used to kind of justify its investment in OpenAI. They talk a lot about, hey, partnership accelerates AI breakthroughs and shares all their benefits with the world, but it turns out they may have just pulled the trigger on this partnership because they were actually really terrified of Google. So, Paul, kind of kicking this off here, I have to say, like, well into the past year, Google has been taking some serious heat for like being behind in the AI race or not knowing what it's doing or not having its act together. And it just doesn't sound like that was totally the case. Like Microsoft appears to have been terrified.

[00:18:54] Mike Kaput: Like what does this email tell us about the conflict between Microsoft, Google, and the rest of the leading [00:19:00] players?

[00:19:00] Paul Roetzer: I mean, I think a belief that Google was behind in AI is a pretty uneducated, like, opinion. They were never behind in AI research. They were behind in the productization of the AI. Like they, they certainly were caught off guard by OpenAI releasing ChatGPT. But Google's been doing AI research for two decades.

[00:19:22] Paul Roetzer: They have greater infrastructure than anybody, more data than anybody, probably more AI researchers than anybody. I mean, thousands of AI researchers, two AI labs, like world leading AI labs. Like they weren't behind technically ever. They probably still aren't technically behind. Now, can they get out of their own way and like, you know, get the right products to market?

[00:19:42] Paul Roetzer: That, that's to be determined, but, you know, I don't, I don't think that they ever truly were behind. So, I think it's helpful to have a little context here of like, who is Kevin Scott? It's not a name we've mentioned on the podcast maybe before. I mean, maybe once or twice we've dropped his name. [00:20:00] So Kevin spent five years at Google as a senior engineering manager from 2003 to 2007, and then again, 2010 to 2011 for a nine-month stint.

[00:20:10] Paul Roetzer: He then was the senior VP of engineering and operations at LinkedIn for four years, from 2013 to 2017. Microsoft, if you'll recall, buys LinkedIn in June 2016. So Kevin is the Senior Vice President of Engineering and Operations at LinkedIn when the Microsoft acquisition happens. He then is moved into the Chief Technology Officer role at Microsoft in January of 2017.

[00:20:37] Paul Roetzer: So, what was happening in January 2017 in AI? Well, DeepMind, which is the company I think he's referring to, they were playing, basically teaching AI to play games. That was pretty much it. Demis's strategy still is. Like, DeepMind, Google DeepMind, before they were acquired by Google in 2014, Demis believed video gameplay was the way to do reinforcement [00:21:00] learning to train these models.

[00:21:01] Paul Roetzer: So I think Kevin's saying, yeah, we thought they were just playing games, basically. Like, we didn't know it was, like, gonna be this massive thing. And so, DeepMind won with AlphaGo, which is the documentary we've talked about, where they beat the world Go champion Lee Sedol, in March 2016. So, right around that time, DeepMind has this breakthrough in the game of Go.

[00:21:22] Paul Roetzer: Smart Replies, which, you'll recall on your phone, so if you use Gmail, it would recommend replies to you. That was Google Smart Replies. That was 2017. 2017 is also the year the Transformer was invented by the Google Brain Team. So the Attention Is All You Need paper came out in June of 2017, if I recall correctly.

[00:21:46] Paul Roetzer: That led to Google Smart Compose in 2018. So again, now we're, we're a year before this email happens. And so Google, who has been working on language generation for two decades by this point, is [00:22:00] now starting to make some significant progress in their ability to predict the next token or word in an email.

[00:22:07] Paul Roetzer: So, Google Smart Compose in 2018, the product release, so this combined a couple of language models that Google had already built. The product release said, Email makes it easy to share information with just about anyone, friends, colleagues, and family, but drafting a message can take some time. Last year, we introduced Smart Reply in Gmail to help you quickly reply to incoming emails.

[00:22:30] Paul Roetzer: Today, we're announcing Smart Compose, a new feature powered by AI to help you draft emails from scratch, faster. The same day, Google Research put out a post that kind of gave the technical details of how Smart Compose was working. So they said, we are constantly working on improving the suggestion quality of the language generation model by following state-of-the-art architectures.

[00:22:54] Paul Roetzer: In parentheses, for example, transformer. So again, the transformer at [00:23:00] this point is about a year old. The paper is about a year old, but it is not powering Smart Compose yet. Like they're playing with transformers. They're trying to figure out how to productize them. So they're starting to experiment with this stuff.

[00:23:13] Paul Roetzer: And so we're kind of at this point where transformers are becoming a thing. Google invented them.

[00:23:21] Paul Roetzer: Google has made progress on Smart Compose. They've been building their own chips. By that point, they had the TPUs. So they'd invented their own chips to help accelerate AI development. They have Google DeepMind and Google Brain.

[00:23:32] Paul Roetzer: They have two major AI research labs. So they're doing a lot, but it appears at that point that Kevin and Microsoft maybe are under appreciating all of the progress they're making. So now let's come back, back to the email, which is, I mean, it's fascinating. Like, we'll put the link in the show notes. You can go read it for yourself.

[00:23:55] Paul Roetzer: But I would say, 90% of the email is [00:24:00] redacted. So it's not like we have the whole email. Like they, Microsoft was able to argue to the judge not to release it for competitive reasons, most of it. But what we do have is quite fascinating. So I'll just read the couple of paragraphs that we actually have from this email.

[00:24:14] Paul Roetzer: So again, this is from Kevin Scott to Satya Nadella, the CEO, and Bill Gates, who I think Bill's the chairman of Microsoft still, June 12th, 2019. So now again, we have the context of what else was going on at this time. And it is thoughts on OpenAI. So after the redactions, the thing that's interesting about what OpenAI and DeepMind and Google Brain are doing is the scale of their ambition and how that ambition is driving everything from data center design to compute silicon to networks and distributed systems architecture to numerical optimizers, compilers, programming frameworks, and high level abstractions that model developers have at their disposal.

[00:24:56] Paul Roetzer: When all these programs were doing was competing with one another to [00:25:00] see which reinforcement learning system could achieve the most impressive game-playing stunt. Again, this perception that they were just like playing around with games. I, I was highly dismissive of their efforts. So this is Kevin speaking.

[00:25:14] Paul Roetzer: That was a mistake. When they look at, when we look at all the infrastructure they had built to build natural language processing models that we couldn't easily replicate, I started to take things more seriously. And as I dug in to try to understand where all the capability gaps were between Google and us for model training, I got very, very worried.

[00:25:35] Paul Roetzer: Turns out just replicating BERT, which was one of the early language models within Google, wasn't easy for us to do. Even though we had the template for the model, it took us approximately six months to get the model trained because our infrastructure wasn't up to the task. Google had BERT for at least six months prior to that, so in the time that it took us to hack together [00:26:00] the capability to train a 300 million parameter model, 340 million parameter model, they had a year to figure out how to get it into production and move to larger scale, more interesting models.

[00:26:12] Paul Roetzer: We are already seeing the results of that work in our competitive analysis of their products. One of the Q&A competitive metrics that we just, that we watched just jumped by 10 percentage points on Google Search because of BERT-like models. Their autocomplete in Gmail, which is especially useful in the mobile app, is getting scarily good.

[00:26:32] Paul Roetzer: And then we have one other paragraph of this three-page email that isn't redacted, and it says, we have very smart machine learning people in Bing, in the vision team, and in the speech team. So again, they had all these, they had thousands of AI researchers themselves. So he's saying they're good. But the core deep learning teams within each of these bigger teams are very small, and their ambitions have always been, have also been constrained.

[00:26:57] Paul Roetzer: Which means that even as we start to feed them [00:27:00] resources, they still have to go through a learning process to scale up. And we are multiple years behind the competition in terms of machine learning scale. So I mean, again, like, there's so many fascinating details here. So back in like 2018, 2019, one of the ways I would assess like how far along organizations were, how far along SaaS companies were, cloud companies were, is I would go to LinkedIn Sales Navigator and look at how many machine learning engineers they had.

[00:27:30] Paul Roetzer: There were some major AI software companies, business and marketing AI software companies that had less than 20. And so I, my feeling was they are not, they don't understand what's about to happen. They're not taking this seriously enough. When you looked at Microsoft and Google and Amazon and IBM, they had thousands.

[00:27:48] Paul Roetzer: So you knew they were doing stuff. But what they were doing was, as Kevin highlights, they were doing these very, very narrow specific things using machine learning to make predictions about outcomes and [00:28:00] behaviors. They, they didn't have this grand vision for what happens when AI can think and reason and understand and generate.

[00:28:07] Paul Roetzer: That's what OpenAI had. So OpenAI fell into this space where all of a sudden, they realized the potential of the transformer architecture that Google invented, and they built GPT-1. Then you had some other companies start to apply this to, like, image generation and eventually video generation. All the while, Microsoft wasn't there.

[00:28:27] Paul Roetzer: So why did they invest in OpenAI? Why? Maybe it is because they realized it was going to take them way longer than they wanted to catch up. That's what Kevin's saying, basically. He's like, even if we throw a billion dollars at this, we go hire a thousand researchers,

[00:28:43] Paul Roetzer: we are not set up to do this. Like, we haven't had this vision, this ambition before this.

[00:28:49] Paul Roetzer: And, I think that's the thing I took away from it, and again, like, I'm not gonna name names, but like, I have talked to some major software companies. [00:29:00] And a lack of vision and ambition, 

[00:29:02] Paul Roetzer: was, was, was pretty commonplace around that time. So, even after ChatGPT, you'll have these conversations and you realize, like, they just missed it.

[00:29:15] Paul Roetzer: Like, they just didn't see what was coming. Um, it's not that they didn't have talented people and they weren't executing really well on their roadmap. But they didn't, they hadn't seen around the corner yet. and so it seems like that's basically what Kevin is admitting is like, we were doing this stuff for 20 years and we just didn't, we didn't see it.

[00:29:34] Paul Roetzer: And we didn't realize what Google was building over there. Um, so yeah, I mean, it's fascinating, man. Like, I feel bad for Microsoft. It's out there, but at least they got the rest of this thing redacted. I would love to read the rest of it. That's for sure.

[00:29:48] Mike Kaput: Yeah. That's, that's just an incredible look at kind of the inside baseball that's really driving these decisions.

[00:29:57] Amazon Q

[00:29:57] Mike Kaput: All right, our next topic today is that [00:30:00] Amazon has announced that Amazon Q is now generally available. We talked about the preview of Q in a previous podcast episode at the end of last year, but Q is the company's AI-powered assistant for internal teams. So Q uses your company's internal knowledge to help employees be more productive.

[00:30:20] Mike Kaput: And there's two main Q applications, what they're calling Q Developer and Q Business. So Q Developer assists developers and IT with coding, testing, and troubleshooting; it can generate, test, and debug code. Q Business answers questions, provides summaries, generates content, and completes tasks based on your company's unique data.

[00:30:44] Mike Kaput: So in addition to these two applications, Amazon also talked about a preview of Amazon Q Apps, which enables employees to build their own generative AI applications without any coding experience. So [00:31:00] Q Developer, the tool, has two plans,

[00:31:01] Mike Kaput: a free tier and a pro tier for $19 per user per month. Q Business has two subscription tiers: $3 per user per month for a Lite version, $20 per user per month for the Pro version. Now, when I dived into the fine print, it does look like Q Developer and Q Business have some regional availability restrictions, so I'd say just definitely check out the FAQs that we'll link in the show notes for details as you kind of experiment with them. Paul, these two broad use cases, developer assistance and knowledge worker assistance generally, seem really, really tangible

[00:31:39] Mike Kaput: to me

[00:31:39] Mike Kaput: and kind of high value, low hanging fruit. Like, are these areas that companies should be focused on in terms of immediate applications?

[00:31:48] Paul Roetzer: Yeah, I mean, certainly a developer one, it's probably not, I don't think our podcast audience has as many developers in it. I mean, maybe there are a lot of developers listening as well, but you know, I think [00:32:00] generally our audience is going to be more on the practitioner and business leader side. And so the Amazon Q business is the one that I focused on a little bit more personally when I was kind of like assessing this.

[00:32:09] Paul Roetzer: And so I do think that this, this builds on a trend we have seen, which is that recent research is saying people are going to be more likely to work with generative AI tools that connect to their existing cloud provider. So AWS, still the dominant cloud provider. The idea that they're going to make it easy to access your data and then to build these tools you can actually trust and that have high reliability ratings.

[00:32:39] Paul Roetzer: So they talk about in the blog post about it, that Amazon Q Business connects seamlessly to over 40 popular enterprise data sources and stores documents and permissions, including Microsoft 365 and Salesforce. And then it ensures you can access content securely with existing credentials using single sign on according to your [00:33:00] permissions and also enterprise level access controls.

[00:32:39] Paul Roetzer: This is like a really important thing because, you know, I think that on the surface, we all think, oh, it'd be amazing if I could just talk to our data. Like if, if I connect ChatGPT or whatever it is to our server or all of our data sources, our CRM, well, all of those things have permission controls, and so you can't, like, train a chatbot on everything in there, because what happens if all of the sudden, employee X has access to, like, HR data, and can query it about people's pay scales or something.

[00:33:37] Paul Roetzer: Like, I don't know, like it's just data living there, but that data is segmented and controlled so people have different permissions. So you have to have generative AI chatbots, if you're using them for internal purposes, that follow those permissions, that like walls off certain data so people can't access things they shouldn't be accessing and have conversations about it.

[00:33:57] Paul Roetzer: So I think just this whole idea [00:34:00] that Microsoft, Google, and Amazon have just such a massive potential here to control this market because big enterprises have to have these things. These aren't like nice-to-have things. This is essential for them to do a deal with a company to build generative AI into their organization.

[00:34:20] Paul Roetzer: It goes on to say like some of the use cases, get answers to questions on company policies, products, business results, or code, using it as a web-based chat assistant. You can point Amazon Q Business at data repositories. It'll search across all data, summarize, analyze trends, engage. These are amazing use cases.

[00:34:38] Paul Roetzer: Like this is the stuff where real value will start happening with generative AI within corporations. If you can do this, like I've said this of HubSpot, like that's our CRM. I just want to talk to my data. Like, I don't want to go in and learn how to build custom reports. I don't have the time to learn how to build custom reports.

[00:34:56] Paul Roetzer: I don't, I don't have the time to learn the capabilities even of custom reports. I [00:35:00] just want to talk to my data. And so that's where I think we have to get to with all of these products is like unlocking all of that knowledge that's sitting there. Not even like, learning about the data, but having it be an intelligent assistant to tell me what to do with that data.

[00:35:15] Paul Roetzer: Like what, what does this mean? So I think this is a step toward creating real value within enterprises using generative AI. The other thing that, they talk about this ability to kind of build your own apps. Definitely a trend. Like, you can build GPTs in OpenAI. You can build copilots in Microsoft. You can build apps in Amazon.

[00:35:36] Paul Roetzer: No-code apps. So that the average knowledge worker can build it. You don't have to be a developer anymore. If you have a repetitive process, you can build an app using natural language to do that repetitive process for you. So I think these are the things where we get past the disillusionment of like, hey, we've got a hundred licenses or a thousand licenses to Copilot, and we're not really getting value.

[00:35:59] Paul Roetzer: This is the [00:36:00] stuff where the value becomes like immediate. So I'll be interested to see what adoption looks like and how this maybe impacts. I don't know if in Amazon's earnings, if they break this kind of stuff out of like the impact and you know, how many, you know, subscriptions they have. But it'll definitely be fascinating to watch, how this plays out.

[00:36:18] Paul Roetzer: And now, uh,

[00:36:19] Mike Kaput: All right,

[00:36:20] Mike Kaput: let's dive into some rapid fire topics this week. 

[00:36:24] Perplexity Fade

[00:36:24] Mike Kaput: So, first up, Perplexity, the popular AI search tool which is challenging Google, just might not last.

[00:36:32] Mike Kaput: At least, that's according to an assessment from the noted AI investor and commentator Ben Tossell. Tossell recently posted online that despite the fact that he uses Perplexity every day and he loves it, he actually thinks this tool is going to end up losing out in the market. And he says that because while Perplexity is valuable, he says it still ends up being reliant on the main search market that it's trying to disrupt [00:37:00] until it innovates a really radically different product experience.

[00:37:04] Mike Kaput: Because right now, Perplexity does like a pretty good job at getting information from the internet and kind of synthesizing it, but it isn't really using AI for intent, to kind of determine exactly what you're trying to find. Aside

[00:37:18] Mike Kaput: from summarizing those top search results, which is the same info you get on Google and Bing, Perplexity, quote, has no unique model, according to Tossell.

[00:37:28] Mike Kaput: And even though it's valued at more than a billion dollars, it hasn't really innovated on the core product since it launched. Now, Tossell says he suspects that if ChatGPT, Claude, or Gemini

[00:37:41] Mike Kaput: can crack the code on finding information from the web right within those tools, that's what will win the search market.

[00:37:50] Mike Kaput: So Paul, you know, Tossell loves Perplexity. We love and use Perplexity. But like, do these critiques make sense to you?

[00:37:58] Paul Roetzer: Yeah, so I, I, [00:38:00] as we've said, I'm a big fan of Perplexity. I think it's very valuable. I probably do use it more in a given day than I do Google search, right now. That being said, I think he's a hundred percent right. I think this company is not going to last. And I don't say that because it's not a good company and it's, they don't have a vision for something.

[00:38:20] Paul Roetzer: I, I don't think they have anything defensible. It doesn't mean they won't like get acquired and it won't be like a successful exit or something like that. But if you think about like, how does Perplexity work, they pipe in Google search results in essence, like it's not their search engine.

[00:38:36] Paul Roetzer: They rely on other people's frontier models, Claude, you know, GPT-4 or whatever, you pick your model. I don't see any way they raise enough money to build a competing frontier model. Like, I think that's, they're, they're going to be on top of someone's model. It could be Llama, it could be Claude, like whatever it is, but they're going to be relying on those models.

[00:38:57] Paul Roetzer: So if they don't have [00:39:00] their own search engine, if they don't have their own model, well, Google's gonna do Search Generative Experience better

[00:39:09] Paul Roetzer: at some point. Like, they will figure, they will, I believe Google will figure out how to do it as good or better than them. And the reality is, as good as it actually is, they're not taking market share from Google right now.

[00:39:19] Paul Roetzer: Like, it's not like they're taking one or two points off of Google's market share. It's like, maybe a tenth of a point or something. Like, nothing meaningful right now. 

[00:39:30] Paul Roetzer: Honestly, what like Perplexity has right now is they don't have ads. Like, I love that they don't have ads. They're going to have ads.

[00:39:36] Paul Roetzer: Like they will introduce ads into this product and it'll probably make the product not as usable. So I feel like there's a decent chance if ChatGPT, if OpenAI drops a functional search engine, if they build their own search engine, which they have the resources to do, or they find some way to partner and make Bing, like, way more usable right within OpenAI.[00:40:00] 

[00:40:00] Paul Roetzer: I just, I think I stopped using Perplexity that day. Like, so while it's really valuable today, it's,

[00:40:06] Paul Roetzer: There's nothing a bigger company couldn't replicate that Perplexity is doing, and I don't think Perplexity can replicate what the bigger companies are doing. They can't build their own search engine, most likely.

[00:40:16] Paul Roetzer: They can't build their own frontier model, I don't think. They're not going to raise enough money to do it. Can't get enough Nvidia chips to do it. Whatever it is. So I just feel like it's a really good product right now, while there is a gap in what the existing players can do or are willing to do. But as soon as those big players do the obvious thing, well, I don't know that this product is needed anymore.

[00:40:45] Paul Roetzer: So, again, I could be completely wrong here, but as of right now, I don't see anything defensible about their product, their platform, in a year from now or whatever. So I hope I'm wrong. Like, I hope [00:41:00] they figure it out, I hope they do it without injecting ads and making, you know, the experience not good. And maybe they find a way to build a model and, you know, build a better search engine.

[00:41:10] Paul Roetzer: I don't know, but, yeah, yeah, it's a better product today. I think it's going to be hard for them to keep it that way. 

[00:41:17] Major U.S. Newspapers Sue OpenAI and Microsoft 

[00:41:18] Mike Kaput: So in some other news, eight U.S. newspapers are suing OpenAI and Microsoft for copyright infringement. So they're adding to the legal troubles that these companies have after being sued late last year by the New York Times in what is kind of becoming a landmark copyright lawsuit. Some of the newspapers include the New York Daily News, Chicago Tribune, and the Denver Post.

[00:41:43] Mike Kaput: So similar to the Times, these papers allege that OpenAI and Microsoft used millions of

[00:41:49] Mike Kaput: copyrighted articles without permission to train their models. The newspapers also claim that in some cases copyright information was removed from their [00:42:00] material and the companies used their trademarks without authorization.

[00:42:03] Mike Kaput: So Paul, it seems here like even more publications are giving weight to the claims brought by the Times. Does that strengthen the Times' overall case? Like, where does this go? Where does this end?

[00:42:15] Paul Roetzer: I mean, like, we, this, we knew this was coming. So back when we first talked about the New York Times lawsuit, one of the things referenced in their suit was the Common Crawl data, where they're like the number two, I think, source, number one or number two source in the Common Crawl data, which is used to train these models.

[00:42:32] Paul Roetzer: And then I remember reading off the list of the next, like, ten, and it was LA Times, Newsweek, like, it was all these other media. It's like, well, they're all coming for them, too, then, like, because if New York Times wins, or appears to have, you know, be making progress toward, it just opens up the floodgates and everybody else is going to get in on that too.

[00:42:49] Paul Roetzer: So, yeah, I think it's inevitable that all these companies are going to sue them and try and get something here. I don't know where it goes. and I haven't really seen any great commentary recently that changes my [00:43:00] perspective that the next versions of these models, they're going to try and license more and more of the data.

[00:43:05] Paul Roetzer: And eventually, they'll maybe have to admit that they took the copyrighted material, which we all know they did. And the courts may decide that that was illegal. And it was three models ago, and there's, we're not gonna go back and like, destroy those models. Like, what does it matter? At that point, we're on GPT-7 by that point, or 8.

[00:43:24] Paul Roetzer: And so maybe they pay a few billion dollars in fines, and they figure out a way to compensate creators in the future. I don't know. Like, and this leads into the next one, Elon Musk, you just screw everybody. We're not even gonna cite you. I don't know. Maybe, maybe Elon ends up having the plan around all of this by just ignoring the source.

[00:43:43] Paul Roetzer: But, um. yeah. 

[00:43:45] Elon Musk’s Plan For AI News

[00:43:45] Mike Kaput: You know, I am really glad you came out with that, because that's the thread in this next news item. You know, I had it teed up as saying Elon Musk says he has big plans for AI news, but genuinely I think the plans are just to [00:44:00] screw them, because

[00:44:01] Paul Roetzer: It sure seems that

[00:44:02] Mike Kaput: Yeah,

[00:44:03] Mike Kaput: yeah, because Musk actually just talked to Alex Kantrowitz at Big Technology.

[00:44:08] Mike Kaput: We've covered a couple of his articles before, and Musk says he now aims to use Grok, their AI, to provide a real-time synthesis of breaking news and social media reactions right on X. Musk says as more information becomes available, the news summary will update to include that information, and the goal is simple: to provide maximally accurate and

[00:44:30] Mike Kaput: timely information, citing the most significant sources.

[00:44:34] Mike Kaput: Now here's the kicker. He says Grok will summarize what people say on X about a news story, not look directly at the news item itself. So, kind of sounds like he's planning on giving us a real time pulse on the news but without any type of citation. So, can you talk a little bit about the implications here?

[00:44:57] Mike Kaput: Like, does this just kill [00:45:00] backlinks on X to media outlets?

[00:45:02] Paul Roetzer: Well, I mean, I think backlinks are dead on X already, unfortunately.

[00:45:05] Paul Roetzer: So mini

[00:45:07] Paul Roetzer: rant, I guess, to start this off. So obviously Elon doesn't want you to leave X. So he devalues, and I say he, because he has his finger on the algorithm. So he devalues including the link to a source already. So what's running rampant on X, if you use X ever, is people comment on a news article.

[00:45:28] Paul Roetzer: They will then put the screenshot to the news article that they're commenting on in the tweet. And then they don't put the link to the source. And we're all playing this game. Because this is what the algorithm rewards. Forget the fact that, okay, this is really interesting that you, Mike, have tweeted about this.

[00:45:48] Paul Roetzer: Great for the screenshot, thanks. I can read the lead paragraph now. Where's the article? Like, I want to go and read the source, or watch the source. Not here. I want to go there and do it. [00:46:00] But X and Elon have made a very concentrated effort to not do that, have you leave X, even though it is not in the best interest of the user.

[00:46:10] Paul Roetzer: So, the fact that they're pursuing this path where they will be the AI news real-time source makes sense. Absolutely, I find X way more valuable for what's going on in the world at this moment than I do mainstream media. It's way more up to date and I can get a greater diversity of sources through curated lists.

[00:46:30] Paul Roetzer: I have no problem with that. I want it to excel in this area. But the idea that you're not going to cite the source and it gives me an ability to go see it for myself versus a bunch of random people on X that I don't want to hear from necessarily. So I worry about that. But now, so the irony of like how this came to be, and Alex is great.

[00:46:53] Paul Roetzer: Big Technology Podcast is awesome. I listen to every episode. Um, this is how it happened. So he's telling the story. He [00:47:00] says, so there was a Time story about Trump's, what Trump's second term would look like, with interviews with Trump, and it's circulated on social media, and Alex says, I got access to Grok, which is X's AI bot, recently, and I clicked into the Time story.

[00:47:15] Paul Roetzer: And I noticed that there was an AI summarization that the chatbot created without a link to the Time story. So I said, all right, this is interesting. Why isn't it citing it? So he then emails Elon. So he says, like, everybody cites it. So I wrote to Elon and emailed him, should Grok link to the news sites it summarizes.

[00:47:35] Paul Roetzer: And then, so he's like, yeah, it was great, he didn't respond with a poop emoji for once. Elon isn't a huge fan of media people, so he will often just respond with a poop emoji if people reach out to him. Anyway, so he said, I wrote to Elon and said: Hi Elon, I got access to Grok and have started playing around with it.

[00:47:52] Paul Roetzer: The bot does a good job summarizing news, but should it link to the stories it discusses? That could be nice for users, plus a [00:48:00] worthwhile value exchange for publishers. You can see the example with Time's coverage below. What do you think? So he shared the screenshots of the Time coverage. And Musk's response was, well, I'll just read it.

[00:48:13] Paul Roetzer: What Musk responded was pretty... So we went back and forth a little bit, and then he's like, all right, we're going to improve citation. Basically, I don't think Elon gives a crap about citation, has no care for it, but he's like, yeah, fine, whatever, we'll improve it. But what he responded with was even more interesting.

[00:48:29] Paul Roetzer: He says, Grok analyzes tens of thousands of X posts to render a news story. He says, as more information becomes available, the news summary will update to include that information. So basically what they're trying to do when they filter news on the platform is create summaries of the news stories that update in real time, summaries that have not only the news nuggets but also the commentary.
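To make the mechanism described here a bit more concrete, below is a minimal sketch of what a "summarize the posts, then refresh the summary as new posts arrive" loop could look like. This is not Grok's actual implementation; the OpenAI Python client is used only as a stand-in LLM API, and the fetch_recent_posts helper, the model name, and the refresh interval are hypothetical placeholders.

```python
# A minimal sketch, NOT Grok's implementation. The OpenAI client is a stand-in
# LLM API; fetch_recent_posts(), the model name, and the refresh interval are
# hypothetical placeholders for illustration only.
import time

from openai import OpenAI  # assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()


def fetch_recent_posts(topic: str, limit: int = 100) -> list[str]:
    """Hypothetical helper: a real system would pull recent public posts about the topic."""
    return [f"Sample post {i} about {topic}" for i in range(limit)]


def summarize_topic(topic: str, prior_summary: str | None = None) -> str:
    """Summarize what people are posting about a topic, optionally updating a prior summary.

    Note that this only sees the posts, not the underlying articles, which is
    exactly the citation concern discussed in the episode.
    """
    posts = fetch_recent_posts(topic)
    prompt = (
        (f"Previous summary to update:\n{prior_summary}\n\n" if prior_summary else "")
        + f"Write a short, neutral news brief on '{topic}' based on these posts:\n"
        + "\n".join(posts)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You write concise, neutral news summaries."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


# Re-run on a schedule so the brief updates as new posts come in.
summary = None
for _ in range(3):  # capped for illustration; a real service would loop indefinitely
    summary = summarize_topic("example breaking story", prior_summary=summary)
    print(summary)
    time.sleep(300)  # refresh every five minutes
```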

[00:48:46] Paul Roetzer: So to your point, he just values what people on X are saying about the story more than the actual story itself. And if forced to, maybe he'll actually link to them, but he really doesn't care, and he doesn't want people going to the news stories, so he's going to [00:49:00] avoid doing it at all costs. Totally separate side note about this.

[00:49:08] Paul Roetzer: I don't know if you've played with this, Mike, but Apple recently, and I don't think we talked about this on the show, introduced transcripts into podcasts.

[00:49:14] Mike Kaput: Yeah.

[00:49:14] Paul Roetzer: So when you're on an episode now, like, I usually listen at 1.5x speed. What I've found is, for something like this, where I want to go back and find what was said at specific times, I usually have my Apple Notes open and I'm trying to type out, or I'll use voice dictation to capture, what was said in the thing.

[00:49:30] Paul Roetzer: So I can have it for this podcast. Now I'll actually just listen to the podcast at 2x speed, because I realized I actually read probably closer to 2 to 2.5x speed. So I'll follow along in the transcript in Apple Podcasts on my phone while I'm listening at 2x speed, and then it's way easier to copy and paste exact excerpts from the thing, which is how I do it.

[00:49:51] Paul Roetzer: I did this one. So, total side note. And then I think the other thing I was going to note is, I believe we're starting to see versions of how this is going to [00:50:00] work. If you go into the X app, formerly Twitter, and you click on search, the For You section is now these summaries. And actually, I don't like it.

[00:50:09] Paul Roetzer: It's harder for me to follow. It used to be like three words about what was going on, and now it's these summaries. So yeah, I think we're just going to see a lot more of this. I'm not sure it's going to work. I really want the citations, but I don't think we're going to get that choice.

[00:50:24] Mike Kaput: It'll be interesting at the very least.

[00:50:28] Paul Roetzer: All right.

[00:50:29] First Commissioned Music Video Made 100% with Sora

[00:50:29] Mike Kaput: So we actually just also got what is being called

[00:50:34] Mike Kaput: the first official commissioned music video collaboration between a music artist and a filmmaker made using OpenAI's Sora video model. So this is OpenAI's video generation model, which is not yet publicly available. And this music video is for a song called The Hardest Part from a band called Washed Out, and it is entirely generated by Sora.

[00:50:58] Mike Kaput: The video's director, [00:51:00] a guy named Paul Trillo, wrote that he had wanted to film the, quote, infinite zoom concept that you see in the video in some type of music video for over a decade, but he had never really attempted it, and that AI now allowed him to actually realize his vision. He said, quote, I was specifically interested in what makes Sora so unique.

[00:51:20] Mike Kaput: It offers something that couldn't quite be shot with a camera, nor could it be animated in 3D. It was something that could have only existed with this specific technology. So Paul, this is the first

[00:51:33] Mike Kaput: example of this, I think, that we've really seen commercially, and it's definitely not the last. Are we going to get a wave of these types of videos created with Sora or, pick your poison, other tools?

[00:51:45] Paul Roetzer: And the hardest part was watching the video, honestly. Like, it was brutal. I had to turn it off after three seconds.

[00:51:51] Paul Roetzer: It was making me nauseous. So, I mean, good on them that they took a decade to do this, but I couldn't watch the thing. Sure, I mean, once it's [00:52:00] not just a research project and the average person has access to it, we're going to see an explosion of creativity, kind of like we're seeing with Runway right now.

[00:52:07] Paul Roetzer: You can go use Runway. They're having AI film festivals with Runway, like competitions of people building things. So it's not like OpenAI invented text-to-video. There are other tools out there you can use right now, like Pika and Runway and others.

[00:52:21] Paul Roetzer: So yeah, I guess it's interesting from the standpoint that it's being positioned as this big thing and it made NBC News. Once mainstream media starts picking this stuff up, it creates greater awareness within society at large.

[00:52:37] Paul Roetzer: But yeah, I think once Sora is actually available, we'll start to see a ton of this stuff. Some of it will be more watchable than the first one.

[00:52:47] New Bill Would Require a Kill Switch for AI Models

[00:52:47] Mike Kaput: So on last week's episode, we briefly mentioned some noteworthy legislation moving through California's state Senate that may be worth paying attention to. It is called SB [00:53:00] 1047, and it is titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

[00:53:07] Mike Kaput: And this bill aims to mandate additional due diligence testing for certain AI systems before they become available to the public. It requires that the models covered by the bill include specific technical and physical cybersecurity protections, including, interestingly, the ability to enact what they call, quote, a full shutdown of the model,

[00:53:31] Mike Kaput: i.e., what we might call a kill switch. So while this bill says it primarily targets large, state-of-the-art AI models, specifically those that use a certain quantity of computing power, critics say that it could seriously harm open-source AI, and AI safety and innovation as a whole. Jeremy Howard, who is an AI researcher and entrepreneur we've talked about a bit before on this podcast,

[00:53:59] Mike Kaput: he follows a lot of [00:54:00] these issues and he criticized the bill this past week.

[00:54:03] Mike Kaput: He said that the definitions of which models are covered are so broad that the bill could, quote, inadvertently criminalize the activities of well-intentioned developers working on beneficial AI projects, and he says it could seriously restrict open-source development. So this bill seems to go pretty hard, it sounds like.

[00:54:27] Mike Kaput: Is this at all reasonable or advisable as a way to make AI safer?

[00:54:33] Paul Roetzer: I don't think there's any way something like this passes in its current form. But, I mean, the way this stuff works is you go to an extreme and you land somewhere in the middle, and starting with the extreme, the middle doesn't look so bad. It's how negotiations work, it's how politics works. So, I mean, I'm not surprised by it.

[00:54:54] Paul Roetzer: I think there are probably some elements of this that make sense. The reason it's viewed [00:55:00] as such a threat to open source is you can't have a kill switch on an open model that's already out in society. It's branched off, it's forked into all kinds of other things. There is no kill switch for that.

[00:55:10] Paul Roetzer: So I think that's why people view it as just a threat to open source, and more regulatory capture in essence, where the big guys, who can adhere to some of these things more reasonably, come out further ahead. Then the argument gets into the liability of the developers and the people building the models, and whether they're potentially liable for what they build.

[00:55:33] Paul Roetzer: The argument on the other side of it is, is the person who manufactures the hammer responsible for hammers used to injure someone? It's a tool. So the way the open-source side, and some of the closed-model companies, would argue it is: we're just building the models. If bad actors use the models in bad ways, that's their choice, that's their freedom to make those decisions, but the model itself isn't [00:56:00] inherently bad, just like a knife or a hammer isn't inherently bad. But can it be used to do bad things?

[00:56:06] Paul Roetzer: Of course it can. So, yeah, I think we're going to get into a lot of these philosophical debates, a lot of legal debates, and it's just going to be something that continues to play out as we move forward. But it'll be interesting to see if certain states or certain countries jump ahead and get really restrictive, like we've seen in the EU with their AI Act.

[00:56:25] Paul Roetzer: They're obviously way further ahead in terms of regulations around this stuff than we are in the United States.

[00:56:32] More Bad News for AI Hardware

[00:56:32] Mike Kaput: So another piece of AI hardware is getting some really tough reviews.

[00:56:37] Mike Kaput: We've talked quite a bit about Humane's AI Pin, which got brutally bad reviews that we covered in a previous episode, and now it is the Rabbit R1's turn.

[00:56:49] Mike Kaput: This is an AI hardware device whose pre-sales we covered on a previous episode, and it claims to run what it calls a large action model.

[00:56:59] Mike Kaput: So [00:57:00] it says that it can learn and complete tasks on various apps. It's a little hardware device that is touted as basically a way to do everything your phone does, but better and faster.

[00:57:11] Mike Kaput: And the first round of these devices after pre-sale has started to ship, and people have started to get them. One reviewer from The Verge noted that, quote, all the Rabbit R1 does right now is make me tear my hair out. Another reviewer in The Atlantic said that the use cases for the device were super limited.

[00:57:28] Mike Kaput: It sometimes can't even complete basic tasks, and it supports only four apps at launch: Spotify, DoorDash, Uber, and Midjourney. So that's literally all you can use it for. All these big claims around this piece of hardware, and around these kinds of AI wearables we're seeing enter the space, also just seem to be extremely overhyped.

[00:57:51] Mike Kaput: I mean, Paul, do we think the AI hardware space, in this format, is kind of dead at the moment? This is another terrible review.

[00:57:59] Paul Roetzer: [00:58:00] I mean, you've got to take it. You've got to put stuff out in the world, I guess, that leads to progress. Do either of these companies have a chance? Probably not. I don't know how much money they've got. I don't know how much runway Rabbit's going to get to figure this out.

[00:58:18] Paul Roetzer: But yeah, I think what we're seeing here, and what I guess needed to be learned, though I thought this was known already, is that iterative deployment works in software. You can put ChatGPT out and it can be mildly embarrassing at best, and people will still use it, and they will get some value from it, and then you put out the next thing, and the next thing, and the next thing. But ChatGPT was free, so if it didn't work, it didn't work, fine. And then even when it was like 20 bucks, it's like, eh, whatever.

[00:58:50] Paul Roetzer: It's 20 bucks. If it doesn't do what it's supposed to do, I'll stop paying after a month. So iterative deployment in software can work. [00:59:00] But if you're Elon and you do this with Teslas and supposed Full Self-Driving, where for like six years people have paid $12,000 to $16,000 for software that doesn't work, generally speaking.

[00:59:11] Paul Roetzer: And I'm a Tesla customer. I have paid for Full Self-Driving for four years. I'm saying this as a loyal Tesla owner. He's gotten away with it because he's Elon, but the reality is it's kind of irresponsible; harm can come from that one. And in this case, hardware doesn't seem to work so well with iterative deployment.

[00:59:35] Paul Roetzer: So the Humane Pin comes out, it overheats instantly, it doesn't do like 90 percent of what they showed in their trailer video, and people lambasted the thing, rightfully so. The Rabbit, predictably, doesn't do what it showed in its preview. When they teased the thing and started opening up pre-sales for $199 or whatever they're charging, it was pretty predictable that it wasn't going to do [01:00:00] anything close to what they were claiming it was going to do.

[01:00:03] Paul Roetzer: And so the tolerance for iterative deployment in hardware that you're spending a bunch of money on is just way, way lower, and my feeling is it's just borderline irresponsible to put out a product with so much hype that you know doesn't work. This isn't like software, where we're aware it doesn't work but we're hoping you all help us improve it.

[01:00:29] Paul Roetzer: No, you sold a product for $199 under the promise it worked, and it doesn't. That's a problem. That's when the government comes in and says, hey, this might have been something almost illegal. You can't do this, it's false advertising. There are lots of different things you could get them for.

[01:00:50] Paul Roetzer: And so I don't think it's a critique on the long-term potential of hardware per se. But I would be okay if [01:01:00] Silicon Valley maybe learned not to rush out hardware that wasn't ready for users, or at least not to charge full price for it if it is, or to position it as an experiment you're going to help us develop, not as here's a $199 product that doesn't work.

[01:01:18] Mike Kaput: So in our final segment here, we're going to rapid-fire, within the rapid fire, a few notable AI product updates. I'm just going to go through these real quick and get your take.

[01:01:29] Quickfire AI Product Updates

[01:01:29] Mike Kaput: First up, we found out that all ChatGPT Plus users, except for those in Europe and Korea, now have access to the memory feature.

[01:01:37] Mike Kaput: So this allows ChatGPT to remember any information you want it to remember. Second, Anthropic has now launched a Claude Team plan and an iOS app for its popular AI model. The Team plan allows teams to create a workspace for multiple users and get increased usage at 30 bucks a month. And the app finally gives you [01:02:00] access to

[01:02:00] Mike Kaput: Claude on your phone, which I'm super excited about.

[01:02:03] Mike Kaput: Third, you can now use a new shortcut in the Chrome desktop address bar to instantly chat with Gemini. You just type @gemini in the Chrome address bar

[01:02:13] Mike Kaput: and you can seamlessly start prompting the model, which is really helpful. And then fourth and finally, Yelp is launching an AI chatbot, Yelp Assistant, powered by OpenAI.

[01:02:24] Mike Kaput: This chatbot tries to contextually understand what you're looking for, then match you with service professionals in your area. You can also have Yelp Assistant write a message to businesses on your behalf. So Paul, do any of these updates intrigue or excite you?

[01:02:42] Paul Roetzer: Yeah, I mean, Claude, or Anthropic, continuing to try to compete with OpenAI and Google and others is interesting. But I think the memory feature from ChatGPT is probably the most interesting, because I think it's the most significant to the future. And talk about iterative deployment:

[01:02:58] Paul Roetzer: Memory is absolutely a part [01:03:00] of that iterative deployment from OpenAI. Memory is very, very important to the future of generative AI models, and this is a way for them to start putting it out into the world and letting people experiment with it. Yeah, I think our friend Chris Penn had a great take on the memory part, which is, if you're using ChatGPT for a bunch of different things, like say you use it personally to plan trips and you also use it for clients.

[01:03:22] Paul Roetzer: Like if you're a service provider and you use it for a bunch of clients, I don't know that it's going to be that helpful. You have to be really, really on top of curating what it remembers and what it doesn't, because you can go in and delete memories, basically. So it'll be interesting to see how that plays out.

[01:03:36] Paul Roetzer: But I think that's the most intriguing of all, for sure. The other ones are interesting. Gemini in Chrome certainly has the potential to be impactful down the road, with more people having access to it. But yeah, memory's the one I'm most intrigued by.

[01:03:55] Mike Kaput: All right, Paul, that's another wrap on a week in AI. Appreciate you [01:04:00] breaking everything down for us. A few quick reminders for our audience. We're getting a ton of awesome reviews from people on their favorite podcast platforms, so please keep those coming. Every review

[01:04:11] Mike Kaput: you send us really helps us improve the podcast and get it to more people. So please go ahead and review us, if you haven't already, on your podcast platform of choice. Also, we cover not only what we talked about today, but tons of other pieces of AI news, in the Marketing AI Institute newsletter, This Week in AI.

[01:04:32] Mike Kaput: Every single week we send you all the news that you need to know in artificial intelligence and break it down into a very easy-to-consume brief. Go to marketingaiinstitute.com/newsletter to sign up for that if you haven't already. Paul, thanks again.

[01:04:49] Paul Roetzer: Real quick on the reviews, just a note. I put this on LinkedIn, but we really appreciate people taking the time. When you're hosting a podcast, and if you don't have a podcast you may not realize this, you don't know anything about your audience. It's really hard [01:05:00] to learn about who's listening and

[01:05:02] Paul Roetzer: what matters to them, the things you're doing that they find valuable, and maybe ways you can improve. So reviews are great because, one, they help get the podcast found, but two, they really help the hosts, not just us but any podcast you listen to, learn who their audience is and what impact they may be having.

[01:05:18] Paul Roetzer: So I would just encourage you, feel free to review us, great, but also take the time to leave reviews on books you appreciate and podcasts you listen to regularly. Part of this is a good reminder for myself: there are like 10 podcasts I listen to all the time, and I don't think I've ever left a review on any of them.

[01:05:35] Paul Roetzer: So it matters. It matters to the podcasters, it matters to the authors. So yeah, just kind of pay it forward today. Take the time, for a podcaster or an author you really appreciate who's helped you along on your journey, to let them know. Just leave a quick little rating or review, and it goes a long way.

[01:05:54] Paul Roetzer: So we appreciate everybody who took the time to leave them for us, and we appreciate [01:06:00] you all listening each week. So, yeah, thanks. And Mike, thanks again for another great week of curating the madness. All right, we'll talk with everyone next week on episode 97. We're fast approaching episode 100, Mike.

[01:06:12] Paul Roetzer: We gotta figure out what we're doing for episode 100. All right. Thanks everyone. Have a great week.

[01:06:17] Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey. And join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:06:40] Until next time, stay curious and explore AI.
