60 Min Read

[The AI Show Episode 126]: 12 Days of OpenAI: o1, ChatGPT Pro, Sora, New Interviews with Altman, Pichai, and Bezos, Amazon Nova & David Sacks Is US “AI Czar”

Featured Image

Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

December is shaping up to be the season of exciting new releases—the gift that keeps on giving!

From OpenAI’s 12 Days of "Shipmas" to Amazon’s latest model lineup, Llama 3.3, and Hume AI’s innovative voice control, the stream of announcements shows no sign of slowing down.

Join hosts Paul Roetzer and Mike Kaput as they explore these latest developments shaping the future of the industry.

Plus, discover key insights from interviews with major AI leaders at the DealBook Summit, the evolving dynamics of the OpenAI-Microsoft partnership, and the intriguing role of David Sacks as the Trump administration’s AI and crypto advisor—and much more.

Listen or watch below, and scroll down for show notes and the transcript.

Listen Now

Watch the Video

Timestamps

00:05:51 — OpenAI 12 Days of Shipmas

00:27:23 — Interviews with Major AI Leaders at DealBook Summit

00:40:57 — Amazon’s New Family of Models

00:47:05 — OpenAI / Microsoft Deal

00:52:23 — David Sacks Is Trump Administration’s AI and Crypto Czar

00:55:59 — World Models

00:59:49 — Coca-Cola AI Holiday Ad

01:04:14 — Devin AI Coder Update

01:08:57 — AI Product and Company Updates

01:17:36 — Housekeeping Items

Summary

OpenAI’s 12 Days of Shipmas

OpenAI has launched "Shipmas," an ambitious 12-day holiday campaign featuring daily product releases, demonstrations, and new features. 

On Day 1, the campaign kicked off with two significant announcements: 

The full release of its reasoning model, o1, and a new premium subscription tier called ChatGPT Pro. The full o1 model represents a significant improvement over its preview version, which was previously available only to ChatGPT Plus and Team users.

According to OpenAI researcher Max Schwarzer, the new version makes 34% fewer major mistakes while processing information 50% faster than its predecessor. The model is multimodal, meaning it can process both images and text together, and has been refined based on user feedback from the preview period.
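For developers curious what that multimodality could look like in practice, here is a rough sketch of sending text plus an image to o1 through OpenAI’s Python SDK. Treat it as illustrative only: the "o1" model identifier and whether image inputs were enabled in the API at launch are assumptions on our part, not something confirmed in the Day 1 announcement.

```python
# Hypothetical sketch: sending text plus an image to o1 via the OpenAI Python SDK.
# The model name "o1" and API-side image support are assumptions; check OpenAI's
# documentation for what was actually available at release.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize what this chart shows in two sentences."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```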

Alongside o1, OpenAI introduced ChatGPT Pro, a new subscription tier priced at $200 per month that offers unlimited access to the new o1 model, o1-mini, GPT-4o, and Advanced Voice Mode.

On Day 2, OpenAI announced it is expanding what it calls its “Reinforcement Fine-Tuning Research Program,” which enables developers and machine learning engineers to create expert models fine-tuned to excel at specific sets of complex, domain-specific tasks.

In short, this makes it significantly easier to create expert AI models for specific domains.
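If you’re wondering what that might look like in code, here is a minimal, hypothetical sketch built on OpenAI’s existing fine-tuning endpoints in the Python SDK. The file upload and job creation calls are real; the reinforcement-specific "method" setting is our assumption, since the Reinforcement Fine-Tuning Research Program was still limited-access and OpenAI had not published the final API shape.

```python
# Minimal, hypothetical sketch of kicking off a fine-tuning job with the OpenAI
# Python SDK. files.create and fine_tuning.jobs.create are real endpoints; the
# "method" value selecting reinforcement fine-tuning is an assumption, since the
# program was limited-access at announcement time.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL dataset of domain-specific tasks with reference answers to grade against.
training_file = client.files.create(
    file=open("domain_tasks.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    model="o1-mini",                   # assumed base model name
    training_file=training_file.id,
    method={"type": "reinforcement"},  # assumed parameter; the real schema may differ
)

print(job.id, job.status)
```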

On Day 3, OpenAI formally announced and demoed Sora.

Sora can generate 5-20 second videos from a text prompt or even from an image upload, with multiple variations at once, in different aspect ratios and resolutions from 480p to 1080p.

Sora also has some interesting features outside just video generation:

  • There’s an Explore feed that shows you examples of other videos users have created—and how those videos were created.
  • There’s a new tool called Storyboard to direct your videos by describing each scene and placing it in the video’s timeline.
  • There is also Remix, which lets you change the video just by describing the changes you want to see.
  • And Recut, which allows you to add or extend footage anywhere in the video.

Sora is available at its own standalone site: www.sora.com. If you have a ChatGPT Plus or Pro account, you can access Sora at no extra charge.

ChatGPT Plus accounts will, as of right now, get 50 generations per month.

ChatGPT Pro accounts will get unlimited “slow” generations and 500 “faster” generations.

 

Interviews with Major AI Leaders at DealBook Summit

On December 4, the annual DealBook summit gave us some in-depth interviews with a few of the top people driving the future of AI.

DealBook is the name of a financial news service founded in 2001 by New York Times columnist Andrew Ross Sorkin—and since then it’s been a core piece of The New York Times’ reporting.

Since 2012, the Times has also run the DealBook Summit, which features interviews with top newsmakers in business, including, in the past, Elon Musk, Nvidia’s Jensen Huang, Vice President Kamala Harris, and Prime Minister Benjamin Netanyahu of Israel.

At this year’s event, Sorkin interviewed, among others, some of the top AI leaders in the world, giving us an inside look at where they see their businesses and the industry at large as we close out 2024.

In particular, Sorkin interviewed OpenAI CEO Sam Altman, Google CEO Sundar Pichai, and Amazon founder Jeff Bezos.

Amazon’s New Family of Models

Amazon Web Services has unveiled Nova, a comprehensive new family of AI models that significantly expands its generative AI capabilities. 

Announced at the re:Invent conference, the Nova suite includes four text-generating models, as well as an image generator called Canvas and a video generator called Reel, marking AWS's most ambitious entry into generative AI to date.

The four main Nova models—Micro, Lite, Pro, and Premier—offer varying levels of capability and performance. The smallest, Micro, focuses on fast text processing, while the larger models can handle multiple types of input including text, images, and video. 

These models feature impressive context windows, with Micro handling up to 100,000 words and the larger models processing up to 225,000 words or 30 minutes of footage. AWS plans to expand this to over 2 million tokens for some models in early 2025.
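To put those word counts and token counts on the same scale, here is a quick back-of-the-envelope conversion, assuming a common rule of thumb of roughly 1.3 tokens per English word (the actual ratio depends on Nova’s tokenizer, which Amazon hasn’t detailed).

```python
# Rough conversion between words and tokens, assuming ~1.3 tokens per English word.
# This is a heuristic only; the real ratio varies by tokenizer and language.
TOKENS_PER_WORD = 1.3  # assumption

def words_to_tokens(words: int) -> int:
    return round(words * TOKENS_PER_WORD)

def tokens_to_words(tokens: int) -> int:
    return round(tokens / TOKENS_PER_WORD)

print(words_to_tokens(100_000))    # Micro's ~100,000 words is roughly 130,000 tokens
print(words_to_tokens(225_000))    # larger models' ~225,000 words is roughly 292,500 tokens
print(tokens_to_words(2_000_000))  # the planned 2 million tokens is roughly 1.5 million words
```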

On the media generation front, Nova Canvas creates and edits images with control over color schemes and layouts, while Nova Reel can generate videos up to six seconds long (with two-minute capabilities promised soon). Both include built-in safeguards including watermarking and content moderation systems.

Looking ahead, AWS has announced plans for two more models: a speech-to-speech model in Q1 2025 and an "any-to-any" model in mid-2025 that promises to handle multiple types of input and output. CEO Andy Jassy claims these models are among the fastest and most cost-effective in their class.

This episode is brought to you by our upcoming webinar: The AI-Forward CEO: Unlock the Power and Intelligence of a Co-CEO Custom GPT

Join us December 17, 2024 at 12pm ET/9am PT to learn: 

  • What the Co-CEO is
  • How the Co-CEO works
  • Example use cases
  • How you can build your own (including a template system prompt)
  • Note: You will need a ChatGPT Plus, Team, or Enterprise account to create your own Co-CEO custom GPT.

Click here to register today.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: I did see someone over the weekend tweet this idea that, like, in a film, the average scene is like three to five seconds. So, you know, this idea that this could be very disruptive to the ad industry, to the movie industry. When you think about it in that context, like, It can do 20 seconds, but what if it's really, really good at five seconds?

[00:00:21] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:50] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:57] Paul Roetzer: Welcome to Episode 126 of the [00:01:00] Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. Mike, I was on the road last week and I feel like it's been like three weeks since we did this. I was in Detroit, to Philly, to Charleston, Monday through Friday, didn't come home. I don't usually do like the road warrior thing where you're just gone for a whole week.

[00:01:22] Paul Roetzer: I'm usually like one night at a time. Wonderful meetings, great events, but man, it's good to be home. 

[00:01:29] Mike Kaput: I think that's one of your craziest stretches that I can remember pretty recently. 

[00:01:33] Paul Roetzer: It was, it didn't, we recorded the podcast last Monday before I went, right? Yeah, we did. And then I drove to Detroit.

[00:01:39] Paul Roetzer: Like, the whole thing is a blur. Like, I, yeah, it's like, I was like, I don't know what city I'm in. I don't know what day it is. Like, the whole week I was just kind of all over the place. But, I am back, and I think I'm done traveling for a while. So, that's exciting. Do you have any talks coming up in December?

[00:01:53] Paul Roetzer: Are you done with the trips? No, I'm, 

[00:01:55] Mike Kaput: I'm done for the year, basically, as far as I know. You know, some, someone surprises [00:02:00] us in the next couple weeks, but yeah, I'm all set. I had a couple things last month and then that's it. 

[00:02:05] Paul Roetzer: Okay, nice. And I think you and I might actually end up in the office, maybe we'll even like go out for a coffee or something.

[00:02:09] Paul Roetzer: Yeah, there you go. Imagine that. We see each other like once a month in the office. We live a mile from each other, not even, and we have the office, we like never can get together because of schedules. All right. So, this episode is brought to us by our AI-Forward CEO webinar. I've mentioned this on the last couple.

[00:02:28] Paul Roetzer: We're already, I would say, get registered soon if you're going to do this. It's December 17th. We're at like 1,500 people registered. I think we're going to cap this at probably 2,000 just for Zoom license purposes. But so, December 17th: The AI-Forward CEO: Unlock the Power and Intelligence of a Co-CEO Custom GPT. So I kind of told this story before, but what we're going to do is December 17th at noon Eastern, there will be, we'll make the recording available.

[00:02:57] Paul Roetzer: So if you can't make it because of time zones [00:03:00] or conflicts that day, go ahead and register anyway, and then we'll send out the on demand recording and make that available for probably like a week after. So this whole thing came from me just playing around building a GPT for myself to be the co CEO of my companies and help me like function as a strategic advisor on legal and HR and finance and marketing and sales and service and IT and like I just wanted somebody to talk to and I figured, well, let me just build a GPT and it ended up being incredible.

[00:03:29] Paul Roetzer: So I was just keeping it to myself and then I mentioned on the podcast a couple of times and people kept reaching out and so I finally was like, all right, let's just share this thing. So I'm not going to share the one I personally use cause it has a bunch of my personal and company data in it, but I'm going to show how we use that one to analyze data, execute tasks, solve problems, build plans, and innovate.

[00:03:51] Paul Roetzer: And I'm going to make a, a general version of it available to everyone. So I'm going to build that actually this week. And then I'll put that, [00:04:00] through a link on the SmarterX.ai website. And people will be able to access that. And then I'm going to share the prompt. I actually used the full one minus the personal data, to, to build my own.

[00:04:12] Paul Roetzer: And so, you know, hopefully this is really helpful for people. And we're, we're planning to do some more stuff around like building custom GPTs in the new year. So stay tuned on that. We're actually probably going to announce something as of this morning. I think we agreed, right, Mike, that we were going to announce something next week, right?

[00:04:28] Paul Roetzer: So it gives me like seven days to figure it all out. So. What, and my main thinking here is what we see is like custom GPTs are like the fastest way to value for a lot of enterprises, like build a GPT to help someone do the thing they do every day and they just get it. Like it becomes immediately understandable to them what the value of AI can be.

[00:04:49] Paul Roetzer: And so that's what we're going to focus on. So this webinar is going to do, what is the Co CEO? How does it work? What are some example use cases? Who should use it? How to build your own, and then, and it's not just for CEOs, it's [00:05:00] like for anybody who wants to basically have a CEO mindset or kind of, you know, works with CEOs and wants to be able to communicate better, so.

[00:05:05] Paul Roetzer: SmarterX.ai, that is our sister company to Marketing AI Institute, that's our AI research and consulting firm. Right at the top banner, there is a register now button for this December 17th webinar, you can grab that. And then while you're there, subscribe to the Exec AI newsletter. Which is, the newsletter I send out every Sunday morning that has editorial, kind of, and what's coming up for the week ahead.

[00:05:28] Paul Roetzer: So, those are the two things. Go to smarterx.ai and then click register now and then you can grab the newsletter, subscription while you are there. And other than that, it is shipmas, Mike. It is 12 days of shipmas and we are right in the middle of it. You had a chance to look at day three, I have not, so let's roll right into the OpenAI 12 Days of Shipmas.

[00:05:51] OpenAI 12 Days of Shipmas

[00:05:51] Mike Kaput: Alright, Paul. So, first up, OpenAI is basically launching this thing they call Shipmas. They've also called it 12 Days of [00:06:00] OpenAI as a Christmas and holiday reference. Basically, this is an ambitious 12 day holiday campaign. Featuring daily, at least on weekdays, product releases, demonstrations, and new features from OpenAI.

[00:06:14] Mike Kaput: And before you think this is kind of some marketing gimmick, like we've gone through three days so far, including today, and these are pretty significant updates so far. So we're going to go through kind of what's happened on the past three days. Obviously, there's going to be one update per day moving forward until we're done.

[00:06:32] Mike Kaput: We'll cover those on future episodes. But we're going to walk through these three days. I'm going to get your thoughts on all these, Paul, because there's a lot to talk about. So, on day one, this campaign kicked off with two big announcements. So, first up, OpenAI said they are doing the full release of their reasoning model, O1.

[00:06:52] Mike Kaput: And this is coming out for the, for everyone in ChatGPT Plus and Pro. And if you have never heard of [00:07:00] ChatGPT Pro, it's because that was the second announcement. ChatGPT Pro is a new premium subscription tier. So, first up, the full o1 model. This represents a big improvement over the preview version.

[00:07:12] Mike Kaput: If you didn't know, the O1 we were using before was just a preview. And that was available to ChatGPT Plus and Team users. And according to OpenAI researcher Max Schwarzer, the new version makes 34 percent fewer major mistakes and processes information 50 percent faster than the previous model. The model is multimodal, meaning it can process both images and text together, and it's been refined based on user feedback from all this work everyone did with the preview model.

[00:07:43] Mike Kaput: So, alongside o1, we also got what I referenced just a moment ago, ChatGPT Pro. This is a new subscription tier priced at $200 per month, and it offers unlimited access to the new o1 model, [00:08:00] o1-mini, GPT-4o, and Advanced Voice Mode. So, if you're hitting any type of usage limit in your ChatGPT Plus or Team account, ChatGPT Pro might be the license for you.

[00:08:13] Mike Kaput: Now on day two, OpenAI announced that it is expanding what it calls its Reinforcement Fine Tuning Research Program. This enables developers and machine learning engineers to create expert models that are fine tuned to excel at specific sets of complex domain specific tasks. We talked many different times about the importance and the opportunity to create models for specific domains.

[00:08:41] Mike Kaput: This is basically OpenAI giving you a better, easier way to do that. Now, today, the day we are recording, Monday, December 9th, which we 

[00:08:51] Paul Roetzer: delayed recording, by the way, for five hours for this. 

[00:08:55] Mike Kaput: We did, since we are on Eastern time for the most part, and OpenAI is [00:09:00] on Pacific time, we delayed things so that we could hear today's announcement and bring it to you in this episode.

[00:09:07] Mike Kaput: Just a few hours before we recorded, OpenAI, for day 3, finally, formally, released Sora, their video generation model. So Sam Altman and a couple team members demoed Sora, which is getting launched and rolled out in the U. S. and most countries internationally by the time you will listen to this podcast. And they showed off that Sora can generate 5 to 20 second videos from a text prompt or even if you upload an image.

[00:09:36] Mike Kaput: And it can create multiple variations at once, it can create them in different aspect ratios, and resolutions from 480p to 1080p. Now during the demo, they showed off some other interesting features. these included things like an explore feed, so once you log in to Sora, you'll see examples of other videos that users have created.

[00:09:58] Mike Kaput: It'll also tell you [00:10:00] like how those videos were created. There's a new tool called Storyboard to direct your videos, basically by describing each scene and placing it in the video's timeline. It's really cool, you can essentially direct second by second how the video works. There's also some features like Remix, which lets you change the video just by describing the changes you want to see.

[00:10:21] Mike Kaput: And something called Recut, which allows you to basically add or extend footage that is anywhere in the video. Now, Sora is actually available as its own standalone site at Sora, S O R A dot com. Now, This site has been a little buggy all day for me, but the way it's supposed to work is if you have a ChatGPT Plus or a Pro account, you go to Sora.

[00:10:45] Mike Kaput: com, you'll be prompted to log in, and then you can use it at no extra charge. If you don't have one of those, I believe you'll have to sign up. But again, both these functions were essentially broken. I can't get in now. Yeah. You can't get in right now? 

[00:10:59] Paul Roetzer: No. [00:11:00] Yeah. I did the login part and then it said signups are temporarily unavailable.

[00:11:03] Paul Roetzer: So it's like, yeah, it's getting crushed. 

[00:11:05] Mike Kaput: So it's, yeah, it's getting crushed with heavy traffic. So hopefully you can access it when you hear this or sometime this week. ChatGPT Plus accounts will, right now, get 50 generations of videos per month. Now, if you have the Pro account, you'll get what they call unlimited, quote unquote, slow generations.

[00:11:22] Mike Kaput: It'll take more time to do, and you'll get 500 faster generations per month. So, Paul, we're going to take these kind of one at a time. First up, full release of O1 certainly sounds like O1 is exhibiting some very powerful reasoning abilities. Can you give us kind of your initial thoughts on this model? 

[00:11:42] Paul Roetzer: So I would, I would go back to episode 113 on September 4th, we talked about Strawberry.

[00:11:50] Paul Roetzer: So that's, that was the internal code name for the o1 model was Strawberry. And a lot of rumors were swirling at that point that this was sort of [00:12:00] imminent, that they were going to release this Strawberry model. It then debuted, a couple weeks later, so episode 115, we went deep on the o1-mini and o1-preview models, Mike, that you mentioned, and any regular listeners to the show or attendees of our MAICON event, will remember that, that o1 came out two hours before the closing keynote at MAICON on September 12th.

[00:12:25] Paul Roetzer: And so, Mike and I were scurrying to tell the story of this reasoning model as we were closing our conference. So, I think, just provide some perspective here, because I have not, I've tested it a little bit, but I haven't, like, pushed it yet, because it definitely seems like it's predominantly for harder problems, like math, biology, engineering, science related.

[00:12:47] Paul Roetzer: But in, in the context of why this is relevant for OpenAI and the pursuit of intelligence, if you'll remember, they have these five levels internally. So level one AI is [00:13:00] chatbots, which is what we got with the first iteration of ChatGPT back in 2022. Level two is reasoners, which we now have in the public domain, but they have had since fall of '23; the breakthrough that at least showed them that this was possible is believed to have happened in October 2023, actually. Noam Brown, who's working on this internally, just said that in an interview recently.

[00:13:27] Paul Roetzer: Level 3 for OpenAI is Agents, which we've obviously talked a lot about Agents on recent episodes. Level four is innovators, and level five is organizations, like autonomous organizations. So we have moved into level two in OpenAI's world. We are quickly moving into level three, and we can talk a little bit about that in Sam Altman's interview on, on Dealbook Summit, which we're going to go to in the next topic.

[00:13:50] Paul Roetzer: And so what, what reasoning does is it's like the human cognitive process of drawing conclusions, making inferences, forming judgments based on information, [00:14:00] logic, experience. It involves the ability to think logically, analyze situations, evaluate evidence, solve problems. So when we think about this being tied into businesses, and when it's kind of baked into large language models like they're doing with O1, now you can do multi step problem solving.

[00:14:16] Paul Roetzer: You get more accurate predictions. You talked about, Mike, like these things make fewer major mistakes. Improved risk assessments. Reduced hallucinations and errors, deeper contextual understanding, completion of higher level cognitive tasks. Like these are all things that get unlocked with a reasoning model.

[00:14:34] Paul Roetzer: And so in simple terms, it makes ChatGPT and the other AIs that much smarter, that much more generally capable, that much more human-like. And now, you know, one of the things, Mike, I'd flagged for you, and you and I kind of touched on this beforehand, is this comes with downsides. So one of my favorite podcasters is Nathan Labenz, and he's got The Cognitive Revolution.

[00:14:58] Paul Roetzer: So he does amazing [00:15:00] interviews. And he had sort of an emergency pod with Alexander Meinke of Apollo Research. And we will put the link to this in the show notes. And I'm not going to go deep into this episode. But I wanted to read to you an excerpt from the opening of this episode, because, again, I think it's a very important perspective for people to have.

[00:15:20] Paul Roetzer: So, in the opening, Nathan says, the O1 model, which is faster, scores higher on reasoning benchmarks, and comes with the full complement of multimodal and tool use capabilities, Like many in the AI space, I've spent the last 24 hours testing the model, trying to absorb everything that's been written about it, including OpenAI's 42 page system card, which I have not dug into yet, Mike.

[00:15:40] Paul Roetzer: and Apollo's 70 page report entitled, Frontier Models Are Capable of In Context Scheming, which will be our main subject today. So again, this is Nathan doing the opening to this episode that we're going to link to. He then goes on to say scheming is when an AI deceives humans to pursue [00:16:00] its own hidden or implicit goals.

[00:16:03] Paul Roetzer: He says, I think we can all agree we don't want that from our AI systems. Some of the examples are properly shocking. For example, trying to overwrite their next version's weights or goals with their own in order to propagate themselves into the future, and then deliberately falsifying data to engineer outcomes that run contrary to user requests.

[00:16:26] Paul Roetzer: So he continues, Some have tried to downplay these findings, arguing that scenarios that Apollo created are contrived and not representative of real usage. He said while the tests Apollo ran are designed to elicit scheming behavior, they are not conceptually far-fetched. The core observation is that when AI's goals conflict with human goals, weird shit starts to happen.

[00:16:51] Paul Roetzer: This is a legitimately huge problem, he continues. Obviously scary to people outside of the field. We, in the field, shouldn't allow [00:17:00] ourselves to get comfortable with it. I'm gonna, I'm gonna read a little bit more because I think he gives the size of the problem and why this open source movement that we're seeing an acceleration of could create some serious problems.

[00:17:15] Paul Roetzer: So he says, when the o1 model goes live via their API, which it will, over 1 million active developers building on OpenAI's platform will be able to place goals in system messages exactly as Apollo did in their research. It seems to me a virtual certainty that o1 will find itself in situations where there is fundamental tension between the standards set for it by OpenAI in the model spec and the goals the developers give it in the system message.

[00:17:47] Paul Roetzer: Apollo found scheming behavior in roughly 1 to 10 percent of cases across most of the conditions they tried. Even if it's 1,000 times rarer in the wild, and even if [00:18:00] OpenAI deploys the 92 percent accurate deception monitoring system they describe in their system card, with hundreds of millions of daily uses across millions of apps, we should expect o1 to be actively scheming with hundreds or thousands of users daily.

[00:18:16] Paul Roetzer: And then he just goes on to talk about, like, if we're really at this point where we have now put a model into the world that we know schemes, that we know tries to overrule the inputs from its human creators and users in ways that are meant to deceive the human, and we know it will happen with high probability, and we think we're only one to three years from much, much more intelligent systems, couldn't we be doing more for safety and security?

[00:18:45] Paul Roetzer: It's kind of like the whole point of this. So, I haven't had a chance to listen to this whole episode yet. I will be listening to it this week. And I would say anyone who's sort of like intrigued by this or, terrified of this, it's [00:19:00] probably a good one to read. And so it does go back, Mike, like, you know, you'll recall in fall 2023 when Sam Altman was ousted as CEO and we had the whole episode on this and with the talk of, you know, the business world, the AI world, certainly for about four weeks.

[00:19:15] Paul Roetzer: The whole thing was, what did Ilya Sutskever see? Because Ilya was the one that led to the ousting. you know, he had a board seat and he pushed for them to oust Sam. And the timelines, while we still don't have 100 percent confirmation it's exactly what happened, Noam Brown said, we realized in October 2023 this was going to scale.

[00:19:34] Paul Roetzer: That this reasoning approach was going to work. That is when Ilya went to the board and alerted them there might be problems. And then Ilya eventually leaves and creates Safe Superintelligence, his own AI company. So it sure does seem that the whole theory all along that Ilya saw the reasoning model coming to life and became fearful that they were putting something out in the [00:20:00] world that the world wasn't ready for yet, it definitely aligns that that's a very distinct possibility, at least played a role in what happened back then.

[00:20:08] Paul Roetzer: So crazy stuff, cool stuff, like it's gonna be fascinating to read. but, you know, it also comes with its downsides. 

[00:20:20] Mike Kaput: It sounds like we're getting into a real territory of almost just unanticipated or unintended consequences. As these models learn more than just kind of, I think it may be mentioned in here, just more than raw intelligence.

[00:20:31] Mike Kaput: It's having these effects that not anyone or everyone can anticipate. And this is all coming from, this isn't any kind of conspiracy theory. This is coming from OpenAI's own efforts. Absolutely. That they're releasing to keep this model safe. 

[00:20:46] Paul Roetzer: Yeah. Yeah. No doubt. and so like, you know, to, to, I guess, continue on from how you sort of set it up to this pro license then, it's like part of me wants to pay the 200 a month just to see what this is, like to see [00:21:00] what this thing is fully capable of.

[00:21:02] Paul Roetzer: I'm not sure like what we would use it for. So I had reached out to Mike actually over the weekend. I was like, Hey man, we should run a hackathon ourselves and like push on this model and see what's possible. And not only for the positive uses, but these unintended consequences, like, what are the dangers of this thing?

[00:21:20] Paul Roetzer: And so Mike and I are actually running an internal hackathon on Tuesday, so the day this episode drops. And we'll, you know, summarize for people, because I'm trying to figure out, like, should we be paying the 200 a month? Is there the value in it for us? Is, you know, beyond just our research and understanding.

[00:21:37] Paul Roetzer: But I, you know, I went in to ChatGPT itself and I was like, Hey, what, what should we be using the o1 reasoning model for from a business and marketing perspective? And it started presenting some interesting ideas like campaign strategy, audience personas, performance analysis and insights, content calendar creation, competitive analysis.

[00:21:52] Paul Roetzer: I was like, this is kind of interesting. So Mike and I are going to kind of talk about these ideas tomorrow and maybe start building some stuff and report back. Like right [00:22:00] now though, I would say most users, your 20 or 30 a month plan is all you need. The 200 a month plan. Before you told me that Sora's included in the 200 a month plan, like that, that might change my perspective on the 200 a month plan, actually.

[00:22:14] Mike Kaput: Alright, so let's talk about Day 2's announcement, and can you maybe break down for us, like, why is reinforcement fine tuning such a big deal? First 

[00:22:23] Paul Roetzer: of all, this is an announcement for developers. So, the average user, like you and I, we're not going to be building on this. This is something that's giving developers the ability to take the core model, And then do reinforcement learning very quickly by giving it examples it learns from, you know, setting goals and rewards that enable it to kind of learn a domain based on a specific dataset.

[00:22:43] Paul Roetzer: So, if you wanted to use this reinforcement fine tuning, this is something you're likely going to be teaming up with a developer, the internal IT team, something like that. You're going to need a unique dataset that you can use to train this thing in a specific area, but this does absolutely hint to [00:23:00] a near term future where every enterprise can custom train their models, maybe even by department.

[00:23:06] Paul Roetzer: And you just have custom versions of it. So imagine like GPTs on steroids. Like now you can actually take the core model and fine tune it. And not have to be a developer to do it. To where you and I, Mike, could build these fine tuned models the same way we can build a custom GPT. And that's, that's fascinating.

[00:23:22] Paul Roetzer: Like the possibilities there. 

[00:23:25] Mike Kaput: Okay. So let's talk about day three and Sora. So, you know, we've got now, well, once the website works again, it does sound like looking at the release, the announcement they released along with this, it's included as part of a plus account. and it's also included as part of a pro account, but the usage rates differ dramatically.

[00:23:46] Mike Kaput: So what, how are you thinking about this without us having been able yet to test it? 

[00:23:50] Paul Roetzer: Yeah. So first I'll, I'll look at the usage limits of 50 generations a month. It doesn't seem like a ton, [00:24:00] it probably just depends on how good it is, honestly. So like, Runway is a company that does video generation, text to video, we've talked about it many times on the show, RunwayML.

[00:24:09] Paul Roetzer: com, unless they've changed their URL. And I have a paid account, I think I pay 30 bucks a month for Runway, I haven't been in there in months. Yeah. Because every time I go and try and use it, the outputs aren't usable, like they're, they don't maintain consistency. And so like, anytime they update their model, I'll go in and play with it.

[00:24:27] Paul Roetzer: And every time I go, I'm like, God, I've got like 900 credits in here. And like, I don't, I don't even know what to use it for. So that's an instance where it didn't seem like a lot, but then once I get in there and use it, I realized there's nothing I can do with this. Then the credits just stack up. I anticipate that Sora is going to be a leap forward in capability.

[00:24:46] Paul Roetzer: And that you could envision using this regularly, especially for, you know, in my world, embedding into videos, you know, things you might create as demonstrations. so if they work well and it creates these quality outputs, [00:25:00] then that'll be intriguing. The second thing is the speed is going to be a huge issue.

[00:25:05] Paul Roetzer: Like, so Runway can take minutes to do four seconds of video and it's just not even worth it to me. It's like the effort, that sounds so silly, the effort to create a high resolution video taking minutes. But it's minutes when you get the output and it's like, well, that's not what I needed. And now you're just like, you just keep throwing time at something that's not going to create the output you want.

[00:25:28] Paul Roetzer: So I would expect that these things are going to take quite a bit of time to generate. I don't think this is going to be really fast inference time where like you put it in and three seconds later, five seconds later, you've got your video. Yeah. I would expect this as a slow thing and that's without all the traffic they're going to have on this site for the next, you know, month or two.

[00:25:47] Paul Roetzer: So, that's interesting. I think, rate limiting the speed. Unless you're paying the 200 a month, like I could see that being a big thing. It's like, hey, I get faster generation. You get, basically you get the [00:26:00] fast pass, like at Cedar Point or out of the amusement park. Like if you're paying the 200 a month, you get the fast pass on your generations.

[00:26:05] Paul Roetzer: And so I guess part of how fast these things generate may be dependent upon the uptake in the 200 a month license realm. So if a bunch of people are like, Hey, I'll pay it. Then all of a sudden you're, you know, there's a hundred people in the fast pass lane ahead of you. Or the TSA lane, I guess. That's the, and then the clear lane.

[00:26:24] Paul Roetzer: Like you just keep adding another way to get that. so I don't know, man. Like the demos were always super impressive on this, but with video generation, as we've talked about on the show before, it's really hard to maintain character consistency, frame consistency, but I did see someone over the weekend tweet this idea that like in a, in a film, the average scene is like three to five seconds.

[00:26:51] Paul Roetzer: So, you know, this idea that this could be very disruptive to the ad industry, to the movie industry, to, you know, from a brand perspective for content [00:27:00] creation, videos, things like that. When you think about it in that context, like it can do 20 seconds, but what if it's really, really good at five seconds?

[00:27:08] Paul Roetzer: And that's enough because then you can just stitch together frame by frame by frame and you can all of a sudden start building some really incredible things. So I expect adoption to this to be massive, if it works really well. 

[00:27:23] Interviews with Major AI Leaders at DealBook Summit

[00:27:23] Mike Kaput: Alright, so our second big topic this week. On December 4th, the annual Dealbook Summit took place.

[00:27:31] Mike Kaput: And during this, we got some really interesting in depth interviews. With a few of the top people driving the future of AI. So, Dealbook is the name of a financial news service founded in 2001 by New York Times columnist Andrew Ross Sorkin. Since then, it's been kind of a core piece of the New York Times reporting in business.

[00:27:52] Mike Kaput: And, since 2012, the Times has also kind of paired with this the Dealbook Summit. So, in this event, [00:28:00] they interviewed top newsmakers in business. In the past, they've interviewed people like Elon Musk, NVIDIA's Jensen Huang, Vice President Kamala Harris, and Prime Minister of Israel, Benjamin Netanyahu. So they get some pretty significant figures at this event.

[00:28:15] Mike Kaput: And at this year's event, Sorkin, who kind of MCs the whole thing, interviewed some of the top AI leaders in the world. So there were some other guests that were not related to AI, but the ones we're interested in were the three he talked to, who gave us kind of an inside look at where their companies and the industry at large are going.

[00:28:34] Mike Kaput: As we close out 2024. So in particular, he interviewed OpenAI CEO, Sam Altman, Google CEO, Sundar Pichai, and Amazon founder, Jeff Bezos. Now, Paul, I know you are following these conversations closely. I just want to hit on a few very quick points that jumped out to me and then kind of turn it over to you to kind of reveal to us what you took away from these talks.

[00:28:56] Mike Kaput: So a couple of things with Altman, that he said [00:29:00] that I thought were kind of notable was he was like, my guess is we will hit AGI sooner than most people in the world think, and it will matter much less, interestingly. We'll kind of pass through that milestone and kind of go on with our lives in a more abundant future.

[00:29:16] Mike Kaput: However, he also did say, I expect the economic disruption to take longer than people think, but be more intense than people think. So he's kind of saying that we might see a lot of changes in the economy. I also thought it was noteworthy, given all the drama around this, that they asked him about his beef with Elon Musk.

[00:29:34] Mike Kaput: Will Elon kind of come at him using his newfound influence with the Trump administration? And he said he believes pretty strongly that Elon will do the right thing. It would be profoundly un American to use political power to the degree Elon has it to hurt your competitors and advantage your own businesses.

[00:29:51] Mike Kaput: I don't think people would tolerate that. I don't think Elon would do it. On Sundar Pichai's side, I thought it was kind of interesting, they kind of called him out with [00:30:00] some quotes around Microsoft CEO Satya Nadella, saying, Hey, Google should have been winning this whole generative AI thing. They got caught flat footed.

[00:30:08] Mike Kaput: He kind of just said, I'd love to do a side by side comparison of our models with Microsoft any day. And they also said the area we applied AI the most aggressively, if anything in the company, was search. This is essentially what motivated them to be applying transformers way back when. So, he basically is saying, look, I'm not worried about our core business, though search will change profoundly.

[00:30:31] Mike Kaput: And I think we're gonna actually just be able to make it all better and able to handle more complex questions than ever before using AI. Now, finally with Bezos. He covered a lot more than just AI in his interview, but what's interesting is he said he's basically, you know, kind of moonlighting back at Amazon, helping specifically with AI.

[00:30:51] Mike Kaput: 95 percent of what he's helping with is AI. And he's talking about the fact they're working on literally a thousand applications [00:31:00] internally for AI. We'll talk more in the next topic about their own large language models. They've released the Nova family of models. And basically he said, look, in some ways our models are already smarter than humans because they're multidisciplinary and humans often are not very good at all the things they do in a day.

[00:31:19] Mike Kaput: So Paul, I'll turn that over to you, but just some kind of interesting highlights that jumped out at me in this. 

[00:31:23] Paul Roetzer: Yeah, they're all worth listening to. I think Bezos was the longest, maybe, at 40 some minutes or his might have gone 50 minutes, I don't know, but Sam's was like 30 minutes, Sundar was around 40 something.

[00:31:35] Paul Roetzer: So, they're all very digestible, especially one and a half times speed on, on YouTube. So, I would suggest listening to all of them. I think there's, there's a lot of perspective and context and honestly, like, even reading the quotes, which I'd read some of the quotes before I listened to them, they take on very different meaning when you hear how they're said.

[00:31:54] Paul Roetzer: Like even the Sundar one, there was an edge in his voice. When, [00:32:00] because that's how the interview started. Was, he said straight up like, Hey, Satya's been kind of like, you know, taking it to you guys about this. And his comment was like, yeah, where are their models? Like, I don't need him talking to me because they don't even have their own. They're using OpenAI's models.

[00:32:14] Paul Roetzer: They're using opening eyes models. Was, it was, it was a, It was a tense response, like he wasn't super ecstatic about it, and there was quite a bit of emotion because I will say like Andrej does an amazing job of just coming straight at and asking hard questions and then he pushes on the hard questions, he doesn't like stop.

[00:32:33] Paul Roetzer: And so he asked point blank, like at one point with Sam, like about their thoughts on copyright law and fair use. And Sam's like, well, you guys are kind of suing us. And Sam's like, well, I think it's fair use. And Andrew's like, it's not. And he's like, I guess, you know, we'll see each other in court kind of thing.

[00:32:50] Paul Roetzer: I was like, whoa, like that was a weird place for that interview to go. So, I would say on Sam's, it's worth listening, especially if you don't, [00:33:00] like, deeply understand or know the OpenAI origin story, his history with Elon Musk, things like we've talked about on the podcast a lot, it was a nice kind of synopsis, and he wove a lot of that into their conversation.

[00:33:11] Paul Roetzer: you kind of hit on this idea of AGI as, like, they're very much now talking about it as almost like this continuous thing and like a mile marker, not like the milestone goal anymore. and I think they might've used this, if not, Sundar did this analogy over to like Waymo driving cars, where all of a sudden the things just drive without people in them. I mean, they're teleoperated sometimes, but whatever. so this idea though, that we are going to achieve.

[00:33:34] Paul Roetzer: I mean, they're teller operated sometimes, but whatever. so this idea though, that we are going to achieve. Narrow AGI in a way in different fields and like life's gonna just kind of move on. And so when we talked about this, on earlier episodes where we shared Google's levels of AGI, where they look at performance in generality.

[00:33:59] Paul Roetzer: [00:34:00] And so they started talking about this idea that like, at like level three, I think it was in their world where it's better than like 75 percent of the humans that would do a thing. Well, if you think about it and you start looking at writing and SEO and consulting and eventually accounting and lawyers and doctors, like we're, we're likely very close to AI that is superhuman in different domains.

[00:34:28] Paul Roetzer: It's like virtuoso in, you know, Google's world of virtuoso is like better than 99 percent of humans at a thing. it might be a while before a model like ChatGPT or Gemini is just better than all humans at everything. But we're going to start picking off domains where the AI is better than the human at the thing.

[00:34:48] Paul Roetzer: Better than the best humans at the thing, at a discipline. That does not mean the AI takes all the jobs. It just means there's like a thing [00:35:00] that's, that's probably better at part of your job than you are. and so that's where like they talk, Sam got into like this idea of superintelligence where you know, really it outperforms humans a hundred percent of the time at all cognitive tasks.

[00:35:11] Paul Roetzer: And he basically was like, yeah, life's going to kind of just keep going on. He's sort of in this mentality that we'll figure it out and we'll build other models and you know, whatever. He did get into the scaling wall, which we've talked a lot about recently, is there, you know, a wall? He, he was very straightforward, like, no, there's, there's just no wall.

[00:35:27] Paul Roetzer: And the main reason, and Sundar kind of backed this up, and even Bezos did to a degree, they think of building these models as three main components. The computing power, which is the NVIDIA chips, like, how many chips can we stack into a data center and wire together and get them to do this thing? The data that goes into them, including now synthetic data.

[00:35:44] Paul Roetzer: And then the algorithms, meaning the ways we find to do this smarter, like that these things become more efficient, so we don't, you know, if we have a hundred thousand NVIDIA chips, maybe next year we can achieve the same output with [00:36:00] 50,000 because we built better algorithms of like how to do the learning and things like that.

[00:36:05] Paul Roetzer: he did confirm 300 million active users, that was weekly, wasn't it? Was it weekly or monthly? Weekly. Yeah. It's weekly 

[00:36:13] Mike Kaput: active users. 

[00:36:14] Paul Roetzer: That was before Sora. So 300 million weekly active users, 1 billion user messages sent on ChatGPT every day, and 1.3 million developers, which is just, you know, crazy numbers.

[00:36:28] Paul Roetzer: they got into this whole, like, creators whose content has been used to train the models again. You know, he pushed him hard on that and Sam doesn't really have, I don't know, they just, they just keep standing on this was, it was fair use and it's not, whatever, the courts will decide that. And then, he, he asked all three of them about like meaning for humans, like as AI kept like evolving, what does this really mean?

[00:36:53] Paul Roetzer: And, you know, Sam, who has a kid on the way, I don't know when they're going to have their [00:37:00] child, but they, they have a child coming. And so he said, like, you know, you have a child coming, like, how do you think about this? Like the future for that child? And, you know, he basically said the economy will grow, jobs will change, but evolution's slow and humans adapt was basically Sam's message for everything.

[00:37:15] Paul Roetzer: Sundar, I thought, like, he asked him about the wall and he said there's, you know, more breakthroughs needed, but they're, they're coming. The algorithms, especially in planning and reasoning, are going to happen. He pushed him on the losing their lead and the search thing, like you mentioned. Impact of hiring coding agents, I thought, was an interesting one.

[00:37:33] Paul Roetzer: He asked him, like, well, are you making changes at Google? You're building these agents. They're more efficient. Have you changed your own hiring plans and budgeting as a result of it? And Sundar kind of sidestepped it, he said, definitely taking into account how to be more productive and efficient. It was like, okay, I don't know what that means, but, they talked a lot about regulations and then he asked him about the economics for creators whose content feed these models.

[00:37:58] Paul Roetzer: And he said they, they're going to be [00:38:00] thoughtful was his quote. And then Bezos, you highlighted, I thought it was just sort of interesting that he's back, like, spending that much time, kind of like, you know, Larry Page and Sergey Brin, the founders of Google, being back at Google working on AI. So just the fact that this is so significant that these people who have moved on from these companies they founded 20-plus years ago are now back almost full time working on AI.

[00:38:24] Paul Roetzer: and so it was, it was really cool to hear Jeff's. Like, you can zip ahead. Oh, his was over an hour because it was like at the 56-minute mark or something, or the 51-minute mark, where they started talking about AI. So there was like all this other stuff within there. so yeah, I, I thought that he, he had this one part where he said, you know, these kinds of horizontal layers, like electricity and compute and now artificial intelligence, they go everywhere.

[00:38:48] Paul Roetzer: There isn't, I guarantee you, there isn't a single application that you can think of where this won't make it better. And I thought that was interesting because you can start to think about that in your own business. Like, every piece of software you use is going to have AI in it. [00:39:00] every department in your company is going to have AI in it.

[00:39:03] Paul Roetzer: Every business in your industry is going to have AI in it. And there's just going to be smarter versions of everything. That was where the name SmarterX came from, when I named, like, our AI research firm a couple years ago. And developed that initially as a consulting practice. It was like, SmarterX, like, whatever it is, just fill in the blank.

[00:39:19] Paul Roetzer: Like, marketing, sales, service. Industries, everything is just going to get smarter with the underlying intelligence layer. and then there was the last thing I'll say when he asked Bezos about like, what will it mean to be human? He had an interesting take. He said, you can always find somebody better than you at something now.

[00:39:38] Paul Roetzer: And yet that doesn't take the meaning away. so he was saying like, he's not the best in the world at anything. There is someone smarter than him at every single thing he does or cares about or is passionate about. Yeah. And yet as a human, there's no meaning lost in that. So his point was, if all of a sudden there's these AI models that are just smarter than you at [00:40:00] everything, does it really change your life?

[00:40:02] Paul Roetzer: Like, I mean, yeah, it's on demand now, like you can go get it, ChatGPT, instead of having to go find these people. But his whole point was like, we don't derive our meaning from being the smartest in the world at a thing. And like, your meaning comes from relationships and people and I was like, that's a really fascinating perspective.

[00:40:19] Paul Roetzer: And I need to think more about it because I was just watching this like, you know, hour before we jumped on this podcast, but I thought that was an interesting take that like, there's always people better than you or smarter than you at all these things. And like, that doesn't change your perspective on the meaning of life.

[00:40:32] Paul Roetzer: Right. I don't know. There might be something to unpack there. I have to think about it more. 

[00:40:37] Mike Kaput: Now, it was interesting that they, yeah, all three of them did end up getting almost philosophical at some point. 

[00:40:43] Paul Roetzer: Yeah, they hit on copyright, fair use for everybody, they hit on competition, they hit on, you know, the future of AI models.

[00:40:49] Paul Roetzer: And they hit on, like, the meaning of life, like, that's what I'm saying, like, to get those three guys on the stage in a single day talking about these things was fascinating. 

[00:40:57] Amazon’s New Family of Models

[00:40:57] Mike Kaput: Alright, our third big topic this week, [00:41:00] Amazon has unveiled Nova, which is a new family of AI models that basically expand their generative AI capabilities.

[00:41:09] Mike Kaput: So this was announced at the re:Invent conference that happened just recently. And the Nova suite includes four text generating models, as well as an image generator called Canvas and a video generator called Reel. The four main Nova models, Micro, Lite, Pro, and Premier, offer varying levels of capability and performance.

[00:41:30] Mike Kaput: The smallest, Micro, focuses on fast text processing, while the larger models can handle multiple types of inputs, including text, images, and videos. Now, these models appear to feature impressive context windows. Micro can handle up to 100,000 words, and the larger models process up to 225,000 words, or 30 minutes of footage.

[00:41:53] Mike Kaput: Now, Amazon plans to expand this to over 2 million tokens for some of these models in early 2025. [00:42:00] On the media generation front, Nova Canvas creates and edits images with control over color schemes and layouts, Nova Reel can generate videos up to 6 seconds long, they promise these will be 2 minutes soon enough.

[00:42:14] Mike Kaput: And they also have plans for two more models, a speech to speech model in Q1 2025, and what they call a quote, any to any model in mid 2025. Basically it'll handle multiple types of input and output. Now, CEO Andy Jassy claims these models are among the fastest and most cost effective in their class. So, Paul, this is kind of interesting, pairing with Bezos interview, saying, Hey, I'm back in Amazon, and by the way, now we've got our own models.

[00:42:43] Mike Kaput: We talked last week about how Amazon is both investing billions more in Anthropic, and trying to reduce its reliance on the company by building its own AI in house. Now, this certainly seems like a big win for them building their own AI in house, [00:43:00] doesn't it?

[00:43:01] Paul Roetzer: Yeah, it is kind of like confusing as this whole space is confusing right now.

[00:43:05] Paul Roetzer: Everybody's building their own models and doing deals with other people who are building models. And yeah, the whole thing is just very complicated. I think in Amazon's case, maybe more than any other company, I mean, Google I could think of is in a similar boat. I would imagine they're building a lot of these models because they see the opportunity to transform Amazon internally.

[00:43:27] Paul Roetzer: Like it's the same when they built AWS. It was just like they had an internal reason to enable this. And so I think they probably look across their business and say, Well, do we want to use Anthropic's models to optimize our own operations, marketing, sales, service, ops, you know, finance, HR? Or do we want to like devise our own for all of the different things we do from warehousing, logistics, to running the daily business?

[00:43:51] Paul Roetzer: So I don't know. I mean, that's my guess is they're, they're largely focusing on internal applications and then they're, you know, as a by product, [00:44:00] they're able to also build and open these models up and maybe drive uses of AWS and, I don't know, put it into all their different devices they're building and, you know, It's just one of those, like, it was funny because Andrew did ask Bezos about this and he said something about it.

[00:44:16] Paul Roetzer: And Jeff was like, well, you've, you've been busy, Andrew. We did announce our own models yesterday. And Andrew goes, Oh, the Nova thing. It was almost like he just blew it off as like, not a big deal. And you could tell Bezos was a little bit taken aback. Like, well, what, we don't get any credit for launching our own family of models.

[00:44:34] Paul Roetzer: He's like, they're frontier models, Andrew. Like these are, these are important models. Yeah. I don't think Andrew bought it. So it will be fascinating to just see, again, like, you know, we've now got, I don't know, off the top of my head here, so you have AWS has their own frontier models, certainly Anthropic, Google, OpenAI, XAI, Microsoft is building their own.

[00:44:56] Paul Roetzer: So, well, we have, what, six main players now? Was it five or [00:45:00] six? Yeah. And then you have, oh, Meta, you can't ignore Meta. You've got Mistral sort of playing around, Cohere, they're doing sort of like the lower level models. I mean, it's. It's getting to be a crowd, Nvidia has their own models now. So when you have like seven or so, like major, one of the biggest companies in the world kind of major top 20 building frontier models who have tens of billions of dollars of R&D money every year to build these massive frontier models.

[00:45:30] Paul Roetzer: And you can officially put Amazon in that category now. 

[00:45:35] Mike Kaput: You know, I thought, just as a final note here, this is pretty interesting. You know, TechCrunch reported there was a comment from the CEO, Andy Jassy, saying, quote, we've optimized these models to work with proprietary systems and APIs so they can do multiple orchestrated automatic steps, agent behavior, much more easily with these models.

[00:45:56] Mike Kaput: So sounds like, as we've seen everyone else do, not only are [00:46:00] they building their own models, but they've got their eye on some type of agentic behavior. 

[00:46:04] Paul Roetzer: Yeah, I mean, the tech companies are gonna push the frontiers here. They all know full well what these things are capable of doing, and they're gonna try and bring those different levels to life, you know, from the reasoners, to the agents, to the innovators, to the organizations. And I think that's where we're going to see the early signs of where corporate America and, you know, corporations around the world are going to go: look at what becomes possible.

[00:46:26] Paul Roetzer: And I keep trying to pay attention to, are they still hiring? Like, are these companies still hiring marketing people and sales people and HR people? Because I feel like the way we're going to know when we're starting to see the impact on the economy is when the hiring practices of the frontier model companies change, because the tech they've built has enabled them to change the way they hire and promote and retain workers.

[00:46:53] Paul Roetzer: And if we start seeing a consistent decline from the frontier model companies, that's an indicator that we are now heading toward [00:47:00] job disruption that I expect will start to come next year. 
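A quick aside for the developers in the audience: Amazon serves the Nova models through Amazon Bedrock, so calling one looks roughly like the minimal sketch below, which uses Bedrock's Converse API via boto3. The model ID and prompt here are illustrative assumptions, not details from the episode; check the Bedrock console for the exact model IDs available in your account and region.

```python
# Minimal sketch of calling an Amazon Nova model through the Bedrock
# Converse API with boto3. The model ID below is an assumption based on
# Amazon's naming pattern for Nova.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed Nova model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Summarize the benefits of a long context window in two sentences."}
            ],
        }
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.5},
)

# The Converse API returns the assistant's reply under output.message.
print(response["output"]["message"]["content"][0]["text"])
```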

[00:47:05] OpenAI / Microsoft Deal

[00:47:05] Mike Kaput: Alright, let's dive into our rapid fire topics this week. So first up, according to some reporting from the Financial Times, OpenAI is potentially going to remove one of its founding principles in its agreements with Microsoft.

[00:47:20] Mike Kaput: There is a provision that would shut Microsoft out from accessing its AI technology once the company reaches artificial general intelligence, or AGI. So under the current terms, if OpenAI creates what it defines as AGI, a system that can outperform humans at economically valuable work, Microsoft's access to this technology would be void, and OpenAI's board would be the ones to determine when that milestone is reached.

[00:47:49] Mike Kaput: Now, if these reports are correct, OpenAI is potentially considering getting rid of this provision. It seems like they might be considering this move to try [00:48:00] to unlock more future investment, especially as they're trying to restructure to become a for profit company, because Microsoft has invested more than $13 billion in OpenAI.

[00:48:11] Mike Kaput: Presumably, the removal of this clause would enable them to continue investing in and accessing all of OpenAI's valuable technology. So this kind of comes at a bit of a sensitive time for OpenAI. The FTC, the Federal Trade Commission, has actually launched an antitrust investigation into Microsoft with specific focus on the company's deal with OpenAI.

[00:48:34] Mike Kaput: So they're kind of looking at whether or not Microsoft's dominance in cloud computing has given it an unfair advantage in AI software sales. So, Paul, this would mark a pretty big change, maybe, in OpenAI's relationship with Microsoft. Like, what's going on here? And isn't this kind of implying that OpenAI would just be open to commercializing AGI, unlike what it said in the past?

[00:48:57] Paul Roetzer: I think they are. I mean, so if you, [00:49:00] if you listen to Sam's interview, they get into this, and it's very apparent there's friction with Microsoft. You know, he doesn't hide things. I mean, Sam, I'm sure, is an amazing negotiator and everything internally, but just watching interviews, he either is insanely manipulative in terms of how he controls his emotions to present a specific

[00:49:24] Paul Roetzer: emotion, I guess, trigger specific emotions, or he's, he's just very honest and open about his thoughts and feelings. And I actually think it's more that, because you can tell when they started pushing on the Microsoft thing, it's like, yeah, man, it's not all great. And it wasn't even just the usual PR lines of,

[00:49:42] Paul Roetzer: you know, whatever, we have our differences, but whatever, it's great. He was basically saying, like, no man, this is hard. Like it's hard to manage a partnership like this. And we're not the same company we were when we created this. We never intended to be a product company. We started as a nonprofit.

[00:49:56] Paul Roetzer: Elon pulled his money and we had to go get money from somewhere. We ended up [00:50:00] becoming a product company. Cause like, Sam has gone through stuff as a business leader that has very few peers in, like, human history. His last two years have been insane, to grow a company like this with these complexities. And so he seems just, like, really open about it.

[00:50:17] Paul Roetzer: and the Microsoft relationship in particular just seems like a really challenging thing to navigate right now. Because even when they entered that partnership, there was a very different vision for where OpenAI was going and like, you know, what they thought it was going to be. And so I could see them finding a way to do this.

[00:50:34] Paul Roetzer: Obviously I have zero internal knowledge of anything going on here with the legal or business sides of this. But from an outside perspective, you could see how the AGI thing is a massive sticking point for both parties, and I think it's part of the reason why Sam has started changing the way he talks about AGI publicly.

[00:50:54] Paul Roetzer: Yeah, and you could tell he was softening their stance on that being a definitive moment, and that they [00:51:00] weren't going to stop building their systems and all of a sudden team up with Anthropic because they got to AGI, which is what their charter said they had to do. And so this seems like AGI is becoming this friction point for everybody, and they need to just remove it from that part of the business relationship.

[00:51:17] Paul Roetzer: So, I wouldn't be surprised at all if, if that's what they ended up doing here. 

[00:51:23] David Sacks Is Trump Administration’s AI and Crypto Czar

[00:51:23] Mike Kaput: Alright, so next up, U.S. President-Elect Donald Trump has appointed venture capitalist David Sacks as what they're calling the nation's first AI and crypto czar. So Sacks is a member of the influential PayPal Mafia, the team that led PayPal to success.

[00:51:42] Mike Kaput: He is more recently very well known as a co-host of the very popular All In podcast. And he basically brings an interesting background to this role. He's the former COO of PayPal, he founded Yammer, which Microsoft acquired for $1.2 billion, and he's deeply embedded [00:52:00] in tech because now he runs a venture capital firm called Craft Ventures, which has invested in numerous AI enterprises and companies like Elon Musk's SpaceX.

[00:52:10] Mike Kaput: Basically, this position sounds like it's going to come with some interesting responsibilities. Sacks will guide administration policy on both AI and cryptocurrency. He will also head up the Presidential Council of Advisors for Science and Technology. This role is structured as a special government employee position, which allows Sacks to serve up to 130 days annually without requiring him to divest assets or make public disclosure of those assets.

[00:52:42] Mike Kaput: And it seems like the tech industry has largely welcomed the appointment. Leaders from major AI and crypto companies, including, I'll say, Sam Altman, have publicly supported the choice. Sacks will likely work closely with his fellow PayPal alum Elon Musk, who has been tapped to co-lead what's been called the Department [00:53:00] of Government Efficiency.

[00:53:02] Mike Kaput: So, Paul, we're both pretty familiar with David Sacks. We've listened in the past heavily to the All In podcast, and, I don't know, I personally feel like I've heard Sacks talk way more about crypto than he has about AI. So for me personally, the line from a TechCrunch report on this kind of resonated with me.

[00:53:18] Mike Kaput: They said Sacks's views on AI and AI policymaking are less obvious than his views on crypto, though his policies generally are decidedly right leaning and deregulatory. What did you think of this pick when you heard about it? 

[00:53:32] Paul Roetzer: So, real quick, to continue on the Elon Musk Sam Altman beef, when Sam tweeted to congratulate Sacks and tagged him, Elon replied, LOL, kind of like laughing at Sam, like he's kissing up to Sacks now.

[00:53:47] Paul Roetzer: So, yeah, certainly right leaning, I mean, that's not debatable. His policies, it does seem, and there's a good Time article [00:54:00] I'll link to in the show notes, are not in favor of regulation, generally speaking. He is going to be very pro open source, very pro acceleration of these AI models with as little regulation as possible.

[00:54:12] Paul Roetzer: Heavily, you know, involved in Silicon Valley and the VC world. And yeah, just listen to the All In podcast and you will hear what he has to say. That's the good and the bad, I guess, of where we're at with media today: oftentimes the podcast is the media, and these people have the platform, and you can go learn all about it.

[00:54:35] Paul Roetzer: So, yeah, I think it fits. You know, we talked about, like, Andreessen Horowitz, I don't know their relationship, he wasn't a PayPal Mafia guy, but, you know, I think it's going to be a lot of that, like the accelerationist manifesto, like move fast, drive innovation at all costs, and regulate as little as possible.

[00:54:56] Paul Roetzer: Infuse it into the military. Like, it's going to [00:55:00] be accelerate at all costs, I think, is probably the best way to look at what he would bring to the table, which is probably going to fit Elon's mindset and, you know, is part of the reason why he's probably going to be there. Yeah, like, last week I threw out Karpathy as just kind of a guess, and if I had stopped last week and made a list, I could see that he would be on that very short list of people. But I agree with you.

[00:55:27] Paul Roetzer: Like he's very much more crypto, like you hear about that. And I, that is not my world. You're, you're more the crypto guy. I mean, we should come back around to that at some point. But, that's not my world. I had a buddy years ago, joked with me. He's like, man, leave crypto alone. You got, you got in early on AI, leave crypto to the rest of us.

[00:55:44] Paul Roetzer: And I was like, yeah, I don't want crypto. Like I don't invest in crypto. I don't really know a heck of a lot about it. So I will plead ignorant on the crypto conversation. I wish I'd invested in some Bitcoin not too long ago, but that's about it. Yeah. 

[00:55:59] World Models

[00:55:59] Mike Kaput: [00:56:00] All right. So next up, we have two leading AI labs that have unveiled some really interesting technologies that can generate explorable 3D worlds from single images.

[00:56:12] Mike Kaput: So first up, World Labs, which we've talked about in the past, has introduced tech that can transform a 2D image into a navigable 3D environment, which allows users to essentially step inside the image and explore it from any angle. So this system maintains consistent physics, lighting, and spatial relationships.

[00:56:31] Mike Kaput: So you have features like depth of field effects and interactive elements. Basically, this allows creators to quickly prototype virtual environments and bring static images, including classic paintings, to life in unprecedented ways. At the same time, Google DeepMind has announced Genie 2, which is a more comprehensive, what they call, foundation world model that can generate playable 3D environments from prompt images.[00:57:00] 

[00:57:00] Mike Kaput: The system can create interactive worlds lasting up to a minute, complete with physics, character animation, and even autonomous non player characters. Genie 2 is also able to respond to keyboard and mouse inputs, effectively turning these environments into playable spaces. So Paul, why are these so called, like, worlds or world models so significant?

[00:57:21] Mike Kaput: This, just on the surface, is so cool. It sounds like something, feels like something, out of Harry Potter. 

[00:57:26] Paul Roetzer: Yeah, I mean, I always immediately think of like gaming and like being able to build games on the fly and imagine worlds and I guess you could eventually do it into like creating your own fictional worlds of like storytelling and the story unfolds in front of you and, you know, visually.

[00:57:44] Paul Roetzer: But you know, the main thing is, it's trying to understand the physics of the world around us. Like, so creating a world model means giving the AI the ability to see and understand the world the way you and I do. That's one of the things I think is still going to come out from OpenAI, maybe [00:58:00] during Shipmas, because they previewed it on 60 Minutes last night, this

[00:58:04] Paul Roetzer: vision model, like Project Astra from Google, which we've talked about before, where your device, like your phone or your glasses, could see and understand the world around you. So to do that, like, this ability to create and understand and kind of model those worlds matters. So we won't go deep on this now, but in episode 115, you had mentioned, we explained World Labs.

[00:58:27] Paul Roetzer: And so I'll just read real quick, read through what we talked about there. So they talk about spatial intelligence is the big thing. And they said, we believe that artificial intelligence will help humans build better worlds. Progress has been rapid, but we have only seen the first chapter of the generative AI revolution.

[00:58:42] Paul Roetzer: Language has thus far catalyzed this electrifying early moment, with text-prompted image and video models rising up alongside LLMs. These models have already empowered people to work and create in new ways, but they only scratch the surface of what is possible. To advance beyond the [00:59:00] capabilities of today's models, we need spatially intelligent AI that can model the world and reason about objects, places, and interactions in 3D space and time.

[00:59:10] Paul Roetzer: And they went on to say, we aim to lift AI models from the 2D plane of pixels to full 3D worlds, both virtual and real, endowing them with spatial intelligence as rich as our own. So that's, that's the play. That's why they think this matters, and Fei-Fei Li, a world-renowned AI researcher, is leading the charge at World Labs.

[00:59:29] Paul Roetzer: So definitely a company worth paying attention to. Both of these things are in research previews; you can't go play with either of these. Think of it as almost like when we first saw Sora, and now here we are like 10 months later actually getting to interact with an early form of it. It might be another year or more before we get to really use these kinds of models.

[00:59:49] Coca-Cola AI Holiday Ad

[00:59:49] Mike Kaput: Alright, so next up, Coca-Cola is generating some controversy. Their latest holiday ad campaign has sparked some debate after they revealed that they [01:00:00] used AI exclusively to create the iconic Christmas commercials that everyone's kind of expecting from Coke each and every season. So, for the first time, their iconic holiday ads were entirely generated by AI.

[01:00:14] Mike Kaput: So this campaign has a few videos that are very short, but they reimagine one of their classic campaigns that they ran back in 1995 called Holidays Are Coming. This features the kind of signature red delivery trucks decorated with Christmas lights driving through snowy towns. Now, three AI studios called Secret Level, Silverside AI, and Wildcard collaborated on this project to create essentially AI-generated, updated versions of these commercials.

[01:00:44] Mike Kaput: And they used AI models like Leonardo, Luma, and Runway, as well as a newer model called Kling that was incorporated late in the production to improve the human movements. The results generated a bit [01:01:00] of controversy. Some critics pointed out that there were all these weird, uncanny AI-generated effects that didn't look like real humans, and there were technical limitations like weird proportions, unnatural movements, and some subtle errors in background details.

[01:01:18] Mike Kaput: And basically, they say the ads rely on extremely quick cuts to mask AI's current limitations generating consistent, realistic footage. Even in some interviews, you know, one of the studios said that a brief shot of a squirrel alone required hundreds of AI generation attempts to get it right. Coca Cola's head of generative AI defended this approach, saying there are a bunch of benefits in production speed and creative possibilities.

[01:01:46] Mike Kaput: They even claimed this tech allows them to produce content like this five times faster than traditional methods. Now unfortunately some creative professionals see that as an attempt to cut costs [01:02:00] at the expense of human artists. So, Paul, this ad seems to have hit a nerve. There's definitely a few angles to unpack here.

[01:02:07] Mike Kaput: Like, do you agree with Coca Cola's move here? 

[01:02:11] Paul Roetzer: I mean, there's a lot of layers to it. It's all in how you're judging the move. So, if the move is push the frontier of creative output while reducing the cost of, you know, on-site production and talent and all those things, then it's the right move.

[01:02:30] Paul Roetzer: If your goal is to preserve creativity and human, human artistry, it's the wrong move. And you can understand why there'd be people on both sides, but this is very clear cut. Like, who wins? The studios who can use the AI tools and can sit there and run hundreds of variations of a squirrel to create an output, and probably got paid hundreds of thousands of dollars each to do this thing. Who loses?

[01:02:54] Paul Roetzer: All the people who would have been on site shooting this thing and the production companies and the [01:03:00] videographers and, you know, the animators. And there's going to be winners and losers, but this is the future. Like, there's no turning back from this. Like, the question would become, and I would be running this tomorrow:

[01:03:11] Paul Roetzer: If I was the CMO of Coca-Cola or one of these agencies, I would be taking Sora, when I have access to it, and running the exact same prompts and everything into it, and then getting to say, oh, Sora saved us 50 percent of the time, like, awesome. The next one, we're going to save even more time. Like, there is no turning back.

[01:03:31] Paul Roetzer: there will be brands who choose not to do this for ethical reasons or legal reasons, but there are going to be a whole lot more brands that do choose to do this because either they don't have an alternative, they don't have the budgets to do it, or they just look at it and say, it's an efficiency thing.

[01:03:47] Paul Roetzer: And we're going to do it no matter what. So yeah, it's wild. It's just such a high profile brand, that's why it's all of a sudden getting so much attention, but you're going to hear a lot more about this. I won't [01:04:00] talk about it, but I just heard one last week, another major consumer brand that tried to do this and couldn't pull it off in time, but they were going to run all their holiday ads this way.

[01:04:08] Paul Roetzer: So yeah, it's not an isolated incident. I will say that. 

[01:04:14] Devin AI Coder Update

[01:04:14] Mike Kaput: Alright, next up, so back in episode 88, in March of this year, we first talked about the release, or the demo rather, of Devin, which was created by the company Cognition, and they boldly called it, quote, the first AI software engineer. So at the time, the company was kind of making waves with this demo that showed the tool being able to execute complex tasks using code, all basically kind of on its own without a human programmer.

[01:04:43] Mike Kaput: Now that demo got a ton of buzz. Early testers noted that Devin was a bit unreliable and buggy, and the demo was definitely a bit hyped up. We noted at the time that this is a product demo that needed to be taken with a huge grain of salt, because we are [01:05:00] not yet at the stage of fully autonomous AI coding agents, even though some companies want you to believe that.

[01:05:06] Mike Kaput: Now, Cognition and their product Devin are back in the news, getting a cover story in Forbes showing basically just how much buzz there is now around AI agents, especially those that can code. So when we first talked about them about nine months ago, they had raised $21 million in funding, so solid. But today they're valued at $2 billion after raising $176 million from investors like Peter Thiel's Founders Fund.

[01:05:35] Mike Kaput: Which by the 

[01:05:35] Paul Roetzer: way is PayPal Mafia also. It's Thiel, Musk, David Sachs, Reid Hoffman. So, yep. Yep, fingerprints all over it. Yep. 

[01:05:45] Mike Kaput: So they are basically back in the news with this kind of glowing profile in Forbes. However, Forbes also notes that we're kind of seeing some of the same types of uneven results right now from AI agents that can code.

[01:05:58] Mike Kaput: So Forbes kind of points [01:06:00] out that while the company's clients report dramatic productivity gains, some have pointed out that they are facing significant limitations using a tool like this. Independent developers have sometimes found that the AI can be slower than human programmers and can introduce errors, even in Cognition's own demonstrations of Devin.

[01:06:19] Mike Kaput: The system has been inconsistent. But the reason we're kind of talking about this is that with all the talk of AI agents, it's pretty clear that AI for coding is a market we can't really ignore anymore. So, in October, for instance, Google CEO Sundar Pichai said that more than a quarter of new code at the company is written by AI.

[01:06:41] Mike Kaput: GitHub says its AI code completion tool accounted for 40 percent of revenue growth this year. And according to Forbes, PitchBook analyst Brendan Burke told them that AI coding has become the most funded use case in generative AI. Startups focused on it raised over $1 billion [01:07:00] in the first half of 2024 alone.

[01:07:03] Mike Kaput: So, Paul, a couple of reasons this kind of jumps out as important. So first, with AI agents becoming all the rage, I think it's important that we keep reminding the audience where the technology actually stands. This is probably not the last cover story we're going to see about AI agents. But second, we can't ignore the fact that it seems like technologists appear highly incentivized to build AI that can code.

[01:07:25] Mike Kaput: Like, what do you think? How are you looking at this space? 

[01:07:27] Paul Roetzer: It's definitely a very powerful space and a practical use case in a lot of enterprises right now, especially a lot of tech companies. They are seeing benefits from it. I think there's always something a little bit more behind this. Like, Sundar was actually asked about this by Sorkin, and he said, well, like, the humans write the code, and then the AI is assisting in this part, and then the humans do the final thing.

[01:07:50] Paul Roetzer: That never makes the headlines, it never makes the pitch decks. Like, the human involvement in this automation is always sort of just not really talked about. [01:08:00] So, I have no doubts that people are seeing these kinds of gains. Like, as a writer, I could imagine seeing this same scenario play out in writing, like, if you infuse these things in different ways, you could save 50 percent plus of your time on a writing project.

[01:08:15] Paul Roetzer: We had Andy Jassy, who earlier this year reported they saved $260 million and 4,500 developer years, not hours, years, by using their internal Amazon Q coding agent. So I think you're going to keep seeing these headlines, and I think we'll keep reminding people it's not autonomous yet.

[01:08:35] Paul Roetzer: And like, you might think it is from these headlines, but it's not; they can be misleading. So, doesn't mean we shouldn't be paying attention, doesn't mean your company shouldn't be exploring this if they're not already. Replit Agent is another one we talk about quite a bit. So, yeah, it's going to be huge.

[01:08:53] Paul Roetzer: We're going to hear a ton more about this stuff going on next year. 

[01:08:57] AI Product and Company Updates

[01:08:57] Mike Kaput: All right, Paul. So for our final topic this week, I'm just going to [01:09:00] run through very quickly, kind of a rapid fire within a rapid fire of some product and company updates. And, you know, please feel free to interject if you got anything you want to double click on further, but we've got a bunch of others that we're just going to hit really quickly.

[01:09:15] Mike Kaput: So first up, Meta has released Llama 3.3, a new open source language model that basically matches the performance of much larger AI models while requiring significantly less computing power. 

[01:09:28] Paul Roetzer: This is the algorithm thing we talked about earlier, by the way. Exactly. That's this concept, you know.

[01:09:31] Mike Kaput: Because while it contains just 70 billion parameters, it actually can match, they say, the performance of its 405 billion parameter predecessor. So basically that means it has the same capabilities while requiring only a fraction of the computational resources. So they are claiming the model demonstrates impressive performance across multiple languages.

[01:09:55] Mike Kaput: It has a substantial context window of 128,000 tokens, so [01:10:00] that matches the capabilities of things like GPT-4. And they're making it available through an open source license, though there are some restrictions. While it's free for most users, if you have over 700 million monthly active users, you must obtain a commercial license.

[01:10:15] Paul Roetzer: Also, that update is largely for developers, for people on your technical teams. That is not, the average business user is not going to go use that. 
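To make that concrete for the technical folks: below is a minimal sketch of what picking up Llama 3.3 looks like with the Hugging Face Transformers library. The repo ID is our assumption (Meta gates its checkpoints behind a license acceptance on the Hub), and you would need multiple GPUs or a quantized variant to actually host a 70-billion-parameter model.

```python
# Minimal sketch of loading Llama 3.3 with Hugging Face Transformers.
# Assumes the license has been accepted on the Hub and enough GPU memory
# is available for a 70B-parameter model (or a quantized variant).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed repo ID
    device_map="auto",    # shard the weights across available GPUs
    torch_dtype="auto",   # load in the checkpoint's native precision
)

prompt = "In one sentence, why do smaller models with comparable benchmarks matter?"
output = generator(prompt, max_new_tokens=80, do_sample=False)
print(output[0]["generated_text"])
```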

[01:10:24] Mike Kaput: Next up, HubSpot has announced it's acquiring Frame AI, which is a conversational intelligence platform. Frame AI's technology specializes in taking unstructured data, so emails, calls, meetings, conversations, and turning those into actionable insights.

[01:10:40] Mike Kaput: So, basically, the interesting thing here is this gives you the potential to combine those conversational insights with HubSpot's existing customer data platform. So they plan to integrate Frame AI into Breeze, which is HubSpot's recently unveiled and updated AI system. So you can do things like real-time [01:11:00] analysis of customer sentiment and behavior.

[01:11:04] Mike Kaput: Next up, Hume AI, which we've talked about multiple times, has unveiled voice control. So this allows developers to fine tune synthetic voices along 10 distinct dimensions, including characteristics like assertiveness, confidence, and enthusiasm. And that's without the ethical concerns associated with copying real human voices.

[01:11:24] Mike Kaput: They have this kind of slider based approach to voice modification, so you don't have to use text prompts or preset voices. You can kind of make continuous adjustments along different vocal dimensions. So this is kind of addressing a big challenge in the AI voice industry, like, companies are struggling to create unique voices that match their brand identity without compromising the quality or running into, like, ethical issues of voice cloning.

[01:11:51] Mike Kaput: So this is currently in beta; the technology is being integrated into Hume's Empathic Voice Interface, EVI, so developers can create custom [01:12:00] voices through their interface. Next up, Anduril, which is a defense technology company, and OpenAI have announced a strategic partnership. They're going to develop AI solutions for military defense systems.

[01:12:15] Mike Kaput: This marks OpenAI's first major entry into the defense sector. It sounds like the initial focus is on improving what are called counter-unmanned aircraft systems that protect U.S. and allied forces from drone threats. So as part of this, OpenAI's models will be used with Anduril's existing defense systems and their Lattice software platform.

[01:12:39] Mike Kaput: Microsoft has said that Copilot Vision is now in preview for select Copilot Pro subscribers in the U.S. So Copilot Vision basically is an AI browsing assistant. It integrates directly into the Microsoft Edge browser and basically can look at what you're doing and provide real-time analysis and insights

[01:12:58] Mike Kaput:  [01:13:00] about web pages as you browse. So it's actively quote unquote seeing and understanding the full context of web pages alongside users. Basically a second pair of eyes. And it can help you do different things thanks to this functionality. Now Microsoft, after some criticism of this feature, is emphasizing privacy and security.

[01:13:21] Mike Kaput: The feature is entirely opt-in, and all the conversation data and context shared with Copilot is deleted at the end of each session. During this initial rollout, Vision is only going to interact with a select set of websites, and Microsoft has said they're going to be cautious about expanding it over time.

[01:13:40] Mike Kaput: They explicitly state Vision does not capture, store, or use any publisher data to train its models. 

[01:13:45] Paul Roetzer: If anybody sees the prompt to get you to allow this to turn on, please reach out to me on LinkedIn. Like, I don't, I'm not, you know, we don't use Windows, so I'm not going to see this myself. I'm very [01:14:00] intrigued to see how they try and convince people to use this.

[01:14:02] Paul Roetzer: I can see the adoption on this product being close to zero, so they're gonna have to push real hard. I, I don't know. I'll be really intrigued to see how they position it so that you want to let them have this, like what wording they use. So, yeah, if anybody gets it, I would just love to see a screenshot of that.

[01:14:23] Mike Kaput: Google Cloud has unveiled two significant additions to its Vertex AI platform. So they're adding Veo, which is their newer video generation model, and Imagen 3, their updated image generation system. So Veo is Google's entry into the image-to-video generation market. Imagen 3, which will be widely available starting this week,

[01:14:45] Mike Kaput: represents Google's most advanced image generation model to date. So, it's actually interesting to hear some of the notable companies already implementing these. Mondelez International, which owns brands like Oreo and Cadbury, is apparently using the technology to [01:15:00] scale content creation across 100 plus brands.

[01:15:03] Mike Kaput: WPP, a major marketing agency, has integrated these tools into its AI-powered operating system for marketing transformation. Next up, some news about X. X briefly launched and then apparently removed something called Aurora, which is a new AI image generation feature. They removed it? It apparently was removed within, like, hours because, well, you could create anything.

[01:15:29] Mike Kaput: I tested 

[01:15:29] Paul Roetzer: it on like Saturday or something, and it was like, any person, like, there was like no guardrails at all. Oh. 

[01:15:39] Mike Kaput: You must have hit the window, because it apparently appeared briefly on Saturday and then disappeared for many users. So maybe some people still have access, but basically, yeah, it lacked all kinds of content restrictions.

[01:15:52] Mike Kaput: You can generate photorealistic images of whatever you feel like, of anything, so it seems like this [01:16:00] raised a huge amount of content moderation issues.

[01:16:03] Paul Roetzer: Yeah, it is gone. I don't have it anymore. Yeah, because when you went in, like, it had all kinds of examples and stuff, and you could just create.

[01:16:10] Paul Roetzer: I was doing like Santa Claus stuff just to like see Santa Claus on a Tesla and just random stuff like that. But you could create anything. Yeah. So it's, 

[01:16:19] Mike Kaput: It's unclear. They haven't really revealed, like, why did this happen? What are we doing with it? But we'll see. And then last but not least, some news about Spotify and Google NotebookLM.

[01:16:31] Mike Kaput: So this year, Spotify and Google partnered to create this year's Spotify Wrapped experience. This is when Spotify summarizes all the music you listened to in a year. This time around, they're using Google NotebookLM and its Audio Overview feature, which creates a podcast out of material with AI hosts.

[01:16:53] Mike Kaput: They're using this to actually have AI hosts analyze and discuss your musical journey throughout [01:17:00] 2024, including your favorite tracks, artists, and how your taste evolved over the year. So to access a personalized podcast, users can go to the wrapped feed on their Spotify homepage. There's also a separate URL, spotify.

[01:17:15] Mike Kaput: com forward slash wrapped AI podcast, where you can access it. This is only available in English for both free and premium users. It's in select countries: US, UK, Australia, New Zealand, Canada, Ireland, and Sweden. Strangely enough, I don't know how we arrived at that, but it is only available for a limited time.

[01:17:34] Mike Kaput: So go check it out. All

[01:17:36] Housekeeping Information

[01:17:36] Mike Kaput: right, Paul, that is. A breathtaking week so far in AI. Just a couple quick final announcements here, then I'll kind of turn it back over to you to lead us out here. We are recording this week and releasing next week, a 25 AI questions for 2025 special podcast episode. It is coming out on Thursday, December 17th [01:18:00] or 19th, rather.

[01:18:01] Mike Kaput: Sorry. Excuse me. We will be releasing that episode as kind of our last episode of the year. If you could please submit your questions, if you have any, we will try to answer as many as we can on that episode. So go to bit.ly forward slash 25 questions episode. That's b i t dot l y forward slash 25 questions episode.

[01:18:23] Mike Kaput: It's just a simple Google form where you can submit your questions. And as always, please, if you have not already, check out the Marketing AI Institute newsletter, marketingaiinstitute.com forward slash newsletter, for a full comprehensive brief of everything going on in AI this week. Paul, thank you again.

[01:18:44] Paul Roetzer: That was a lot of updates at the end there.

[01:18:47] Paul Roetzer: Yeah, and that 25 questions link will be in the show notes as well. So if you didn't jot that down, just check the show notes and there will be a link there for you. I was trying to log in real quick before we signed off here. [01:19:00] I was trying to log into Sora with my Team account, which doesn't look like it's active.

[01:19:05] Paul Roetzer: You have to either be Plus or Pro. So I have to try my personal Pro account. Did you try that yet?

[01:19:12] Mike Kaput: I haven't tried it while we've been talking, but I tried my personal Plus account, and it did not work when I tried it.

[01:19:19] Paul Roetzer: All right. Well, hopefully you all have better luck getting into Sora than us.

[01:19:22] Paul Roetzer: Let us know what you think. I'm hoping we'll be in by the time we record next week and we can tell you all about our Sora experience. And we'll see what else we get this week from our 12 Days of OpenAI Shipmas. It should be intriguing. So thanks, everyone. We'll be back with you again next week.

[01:19:42] Paul Roetzer: Thanks for listening to The AI Show. Visit marketingaiinstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person [01:20:00] events, taken our online AI courses, and engaged in the Slack community.

[01:20:05] Paul Roetzer: Until next time, stay curious and explore AI.

Related Posts

[The AI Show Episode 127]: 12 Days of OpenAI Continues, Gemini 2, Hands-On with o1, Andreessen Says Gov’t Wanted “Complete Control” Over AI & OpenAI Employee Says AGI Achieved

Claire Prudhomme | December 17, 2024

Tune in to Episode 127 of The AI Show for this week's AI news, as 12 Days of OpenAI continues, Google drops Gemini 2 and a hands-on analysis of o1.

[The AI Show Episode 115]: OpenAI o1, Google’s Insane NotebookLM Update, MAICON 2024 & OpenAI’s $150B Valuation

Claire Prudhomme | September 17, 2024

Episode 115 of The Artificial Intelligence Show explores: OpenAI o1, Google’s Insane NotebookLM Update, MAICON 2024 & OpenAI’s $150B Valuation.

[The AI Show Episode 123]: Trump AI Policies, Problems with OpenAI’s New Model, GenAI for Business Strategy & Visa’s 500 AI Use Cases

Claire Prudhomme | November 12, 2024

From Trump's AI policy shakeup to OpenAI's Orion challenges and practical GenAI planning breakthroughs. Mike and Paul decode this week's AI transformation in Ep. 123 of The AI Show.