Some big company-driven controversies are at the front of this week's episode. Our hosts, Paul Roetzer and Mike Kaput, discuss Apple's introduction of "Apple Intelligence," a suite of generative AI features integrated into its devices; former OpenAI superalignment researcher Leopold Aschenbrenner's claims about the rapid approach of superintelligence; and the backlash surrounding Adobe's updated terms of service.
We will be taking a break from the podcast for the next two weeks for travel. Our next episode will be released on July 2nd!
00:05:54 — Apple WWDC
00:20:14 — Leopold Aschenbrenner, AGI and Superintelligence
00:38:31 — Adobe’s Controversial New Terms of Use
00:42:38 — Microsoft Recall Backlash
00:45:40 — Chris Bakke’s Generative AI Policy PSA
00:49:05 — Underlord from Descript
00:51:17 — Perplexity Pages
00:55:12 — Spark Capital, Jared Leto Back AI Video Startup Pika
00:56:49 — McDonald’s HeyGen Campaign
Apple’s Worldwide Developer Conference Announcements
Apple is in the midst of its annual Worldwide Developer Conference (WWDC)—and in the event’s opening keynote, the company finally revealed its AI plans.
The first and biggest AI announcement was what the company calls “Apple Intelligence,” a suite of AI features baked right into all Apple devices. These include a number of generative AI features like AI text generation, image generation, and photo editing.
They also include an integration with ChatGPT, which gives you access to the tool right through iOS. And, it includes a much smarter version of Siri, Apple’s voice assistant. Siri will have significantly upgraded conversational features and the ability to take actions across your apps.
A big draw was how contextual Apple Intelligence appeared to be in on-stage demos. Apple stated that by understanding the user's personal context, Siri can assist people in ways unique to them, all while prioritizing privacy.
Privacy was a big talking point that the company kept emphasizing: Apple Intelligence can understand the context of your personal information without collecting your personal information—and it uses what Apple calls Private Cloud Compute to “handle more complex requests while protecting your privacy.”
Apple Intelligence will become publicly available later this year on the iPhone 15 Pro, iPhone 15 Pro Max, iPads, and Macs with M1 chips or later.
Leopold Aschenbrenner’s Claims on the Future of Superintelligence
Leopold Aschenbrenner, a former superalignment researcher fired from OpenAI, is making waves in the AI community with a series of provocative essays and interviews on the rapid approach of artificial general intelligence.
Aschenbrenner was fired in April for allegedly leaking confidential information, an allegation he's disputed on popular podcasts like The Dwarkesh Podcast, saying he simply shared an AI safety document he was working on with some outside researchers after making sure it contained no sensitive information.
But what’s really getting the AI world’s attention is the thesis that Aschenbrenner lays out in interviews and a series of related essays called Situational Awareness: The Decade Ahead.
In them, he claims that all the signals he’s seeing as one of a few hundred AI insiders say that we will have superintelligence “in the true sense of the word” by the end of the decade—and that AGI by 2027 is “strikingly plausible.”
He then says this will kick off great power competition between the United States and China in a national security race to build and control superintelligence—a race that could result in all-out war.
Adobe’s New Terms of Service Spark Outrage
An update to Adobe’s terms of service has the company facing some serious controversy.
On the surface, the updates seem small, totaling just a few paragraph changes. But they're having a big impact, because the changes triggered a popup where users had to agree to give Adobe access to their content through what it calls “automated and manual methods” or lose access to the software.
Some users are also taking issue with language that appears to give Adobe sweeping ownership over work created with Adobe products.
This spurred a huge backlash online, with creators worried Adobe was going to begin training its AI models on their work. In turn, that backlash prompted a response from Adobe. The company released a blog post stating that this was a misunderstanding:
The company said: “We recently made an update to our Terms of Use with the goal of providing more clarity on a few specific areas and pushed a routine re-acceptance of those terms to Adobe Creative Cloud and Document Cloud customers.”
The company also clarified its policies and reiterated that it does not train on customer content and will never assume ownership of a customer’s work.
Today’s episode is also brought to you by Scaling AI, a groundbreaking original series designed for business leaders who want to thrive in the age of AI, and help drive AI transformation within their organizations. The Scaling AI course series includes 6 hours of content, complete with on-demand courses, video lessons, quizzes, downloadable resources, a final exam, and a professional certificate upon completion.
Head to ScalingAI.com to pre-order today!
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: If the U.S. government doesn't take a massive initiative, I think they're going to regret it within three to five years.
[00:00:07] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:37] Paul Roetzer: Join us as we accelerate AI literacy for all.
[00:00:44] Paul Roetzer: Welcome to episode 102 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. This is our last episode for a little while. Mike and I have some travel and some commitments coming [00:01:00] up, so we are going to take a little two-week summer break. So episode 103 is currently scheduled for July 2nd.
[00:01:08] Paul Roetzer: Now, if OpenAI drops GPT-5 on the world sometime between now and then, I may show up for an emergency pod. But just to keep in mind, mark your calendar: July 2nd would be the next episode.
[00:01:23] Paul Roetzer: Also, thank you to everyone that joined us for the AI for B2B Marketers Summit. You probably heard us talk about that on previous episodes.
[00:01:31] Paul Roetzer: It was a huge success. We had almost 4,000 people from 92 countries join us for that event. So it was an incredible half-day virtual summit. It was the inaugural one. Yeah, I mean,
[00:01:44] Paul Roetzer: just we could probably spend a half hour talking about all the amazing stuff there. The chat was incredible. I think at our peak, we had 1,900 people on simultaneously.
[00:01:53] Paul Roetzer: So, you know, for a free event to have over 50 percent attendance was pretty remarkable. So, [00:02:00] yeah, just great stuff. Thanks to all the speakers who gave their time and insights for that event. our team. and just everybody who joined us and was so active in the chat and the community. We are grateful for that.
[00:02:12] Paul Roetzer: We will be doing it again next year. We didn't announce any dates or anything, but we'll definitely be back with another AI for B2B Marketers Summit next year. Okay, this episode is brought to us by the new Scaling AI for Business Leaders series. This has been, we haven't talked about this, I don't think, on the show, have we, Mike?
[00:02:31] Mike Kaput: no, I don't think so.
[00:02:32] Mike Kaput: Okay,
[00:02:33] Paul Roetzer: So scalingai.com is the website. This has been the last, well, I've been working on this for about two years. I told Mike and I told the team, like, this was probably more
[00:02:47] Paul Roetzer: mental weight and time than writing a book. so I've, I've been thinking about it and planning it for a couple of years.
[00:02:55] Paul Roetzer: But I've been intensely working on it for about three months, and then last week [00:03:00] was sort of the final push. And so I actually, today is Tuesday, July, or June 11th. On Sunday, I spent nine hours in studio recording the ten courses. So Scaling AI is going to be an on demand course series. It's launching June 27th.
[00:03:19] Paul Roetzer: And we're going to launch it with a free webinar. So this is the, you know, brought to you by. So go to ScalingAI.com on June 27th at noon Eastern. We are going to do five essential steps to scaling AI in your organization. The way I think about this is we get approached all the time, all the time, to help people build their AI roadmaps.
[00:03:39] Paul Roetzer: So when I finish my keynotes, I'll often end with five steps. I'll tell you, you know: focus on education and training, build an AI council, develop responsible AI principles, generative AI policies, run AI impact or
[00:03:51] Paul Roetzer: exposure assessments on your team and tech and partners, and then build your AI roadmap. And we inevitably get people coming up to us saying, can you do this for us?
[00:03:59] Paul Roetzer: And we, [00:04:00] we don't do it. We do some very limited advisory work and consulting work, but it's the kind of thing, like if we were to go in from the ground up and do this, or if we were to scale a consulting practice to do this kind of work, I mean, you're probably talking about, I don't know, like one to two to three million dollars
[00:04:17] Paul Roetzer: of consulting work. If you wanted someone to come in and, like, truly do this entire thing, it's a, it's a massive undertaking to do this in an organization with so many variables to consider. And so our choice is more of a one-to-many model. We are trying to make this knowledge as accessible to as many organizations as possible.
[00:04:36] Paul Roetzer: So rather than me or Mike going in and spending three months with one company, I spent three months building 10 courses that any company can apply the framework to. So, that's kind of the gist of it. There's a welcome to the series, a state of AI for business,
[00:04:52] Paul Roetzer: the AI forward organization, so the imperative to build AI native or AI emergent organizations, how to build an AI council.[00:05:00]
[00:05:00] Paul Roetzer: Building an Internal AI Academy, Generative AI Policies, Responsible AI Principles, the Impact Assessments, the AI Roadmap, and then What's Next: Agents, AGI, and Beyond. Those are the 10 courses in the series. So you can go to scalingai.com, learn more about that. We'll talk more about it as we, you know, I guess in July when we're back together.
[00:05:18] Paul Roetzer: But, again, scalingai.com. You can register for free for the 5 Essential Steps to Scaling AI in Your Organization, a June 27th webinar.
[00:05:26] Paul Roetzer: Okay. We, as we talked about last week, we delayed this episode by one day so that we could talk about everything Apple, which we now know is Apple Intelligence. So we are going to kick off today with a
[00:05:41] Paul Roetzer: deep dive into the Apple WWDC event. And then we got plenty more to get to after that. So I will turn it over to Mike to guide us into the Apple discussion.
[00:05:52] Mike Kaput: Thanks, Paul.
[00:05:54] Mike Kaput: So yes, we are right in the thick of WWDC, the Worldwide Developer [00:06:00] Conference, an annual event from Apple. It's happening over the next couple of days as we're recording this, but really what we were focused on was trying to see what
[00:06:08] Mike Kaput: Apple is doing with artificial intelligence as announced during its opening keynote.
[00:06:14] Mike Kaput: And they certainly told us, because the first and kind of biggest AI announcement that came out of the keynote at WWDC is something that the company is calling Apple Intelligence, which is conveniently abbreviated AI, and this is a suite of AI features that are now going to be baked right into all Apple devices.
[00:06:38] Mike Kaput: So according to Apple, quote, Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac,
[00:06:51] Mike Kaput: and empowers incredible new features to help users communicate, work, and express themselves. So this includes a number of different [00:07:00] Gen AI features.
[00:07:01] Mike Kaput: Things like AI text generation, image generation, and photo
[00:07:05] Mike Kaput: editing. There's also an integration with ChatGPT so you can access that right through iOS. And very much anticipated, there is now going to be a much smarter version of Siri, which is Apple's voice assistant. Siri will have highly upgraded conversational features
[00:07:26] Mike Kaput: and the ability to actually take some actions for you across your apps.
[00:07:31] Mike Kaput: So, kind of a big draw of this whole Apple Intelligence system was kind of how contextual it promises to be. So again, according to Apple, quote, Awareness of your personal context enables Siri to help you in ways that are unique to you.
[00:07:50] Mike Kaput: Can't remember if a friend shared that recipe with you in a note, a text, or an email? Need
[00:07:54] Mike Kaput: your passport number while booking a flight? Siri can use its knowledge of the information on your [00:08:00] device to help you find what you're looking for, without compromising your privacy. And indeed that last bit is important because Apple made privacy a pretty big talking point of its Apple Intelligence rollout.
[00:08:13] Mike Kaput: And they kept kind of emphasizing that while Apple Intelligence can understand the context of all your personal information, it is not collecting that
[00:08:22] Mike Kaput: information. It's using what Apple is calling Private Cloud Compute to, quote, handle more complex requests while protecting your privacy.
[00:08:31] Mike Kaput: So all of this was kind of a preview of what's coming.
[00:08:35] Mike Kaput: All the Apple Intelligence and Siri features are going to become publicly available to all U.S. users later this year on the iPhone 15 Pro, the iPhone 15 Pro Max, and iPads and Macs with M1 chips or later.
[00:08:51] Mike Kaput: So Paul, you're, you know, a long time Apple watcher. What were your impressions of the releases?
[00:08:57] Mike Kaput: and what did they kind of mean for where [00:09:00] Apple now stands in this AI arms race we find ourselves in?
[00:09:04] Paul Roetzer: So it was interesting, you know, we've always been waiting for this for a really long time, but I was golfing
[00:09:08] Paul Roetzer: yesterday. We had, so I'm on the board for Junior Achievement of Greater Cleveland. I've been on the board there for, gosh, almost
[00:09:13] Paul Roetzer: 10 years, and so I was out golfing with our friend Joe Pulizzi. So we were in a cart together.
[00:09:18] Paul Roetzer: And it happened we teed off at 12:30 and the thing starts at 1. So Joe and I, in between shots, have the presentation, like, streaming on our phones. So I was kind of catching bits and pieces, and, you know, I would catch little bits at a time. And then I got home late last night and that's when I started kind of really diving in.
[00:09:37] Paul Roetzer: And so I, I, I'm going to try and structure this just kind of like some very high-level takeaways. I'm not going to get into like a lot of the nitty gritty of all the features. There's some really cool stuff coming. So my first thought was no major wow moments, like nothing unexpected, splashy, or wildly innovative that was like, wow, I didn't see that coming.
[00:09:58] Paul Roetzer: That being said, they [00:10:00] did what they had to do. and I think the key takeaway, and I don't think like the average Apple user is going to know or care about this, but they're doing it on their own chips, their own devices with their own privacy controls. And the biggest thing to me was on their own models.
[00:10:18] Paul Roetzer: Now they've been unusually
[00:10:21] Paul Roetzer: open about their research in recent months. They've shared some of the research that serves as a prelude to this, that they were working on these kind of on-device models. But I think this was my biggest takeaway: they did this the Apple way, which is maintaining and building trust with the user.
[00:10:39] Paul Roetzer: So I've said this before on, on the podcast, and, you know, I think it's worth repeating. When we think about all the possibilities of people building AI agents to help take actions and connect to your apps and do all these
[00:10:51] Paul Roetzer: things, the end of the day, I trust apple more than any other company. So I trust that they will build the safest, most secure, [00:11:00] most human centric models, and that's exactly what they're doing.
[00:11:03] Paul Roetzer: So they focused a lot. I read this, they have a machine learning research arm, and that, that arm published a paper on, like, how their
[00:11:10] Paul Roetzer: models work. And so I was actually focused on that last night. And so they talk about the fact that they take like a base model and then they fine-tune for specific use cases around communication, work, expressing ourselves, and then getting things done across Apple products.
[00:11:25] Paul Roetzer: And that's exactly what they demonstrated. So. I think what people have to do is separate this, we talked so much about Anthropic and Google building these massive frontier models and OpenAI and others. That is not Apple's play. That is not what they're trying to do here. They are going to, rather than build and
[00:11:42] Paul Roetzer: release like these general purpose models pursuing AGI, they're focused on fine tuned, efficient models.
[00:11:49] Paul Roetzer: that enhance the user experience and solve for real-world applications. That's the Apple way. Well, why? Think about what Apple's mission statement is. It is to bring the best user experience [00:12:00] to its customers through innovative hardware, software, and services. That's what this does. Contrast that with OpenAI's mission, which is to ensure that artificial general intelligence benefits all of humanity.
[00:12:11] Paul Roetzer: So we have one company going and building massive frontier models that are going to be general purpose and maybe achieve AGI at some point this decade. And you have Apple saying, we have billions of devices, we're just going to make people's lives better. Like one feature at a time to where it becomes like frictionless and they don't even have to think about
[00:12:27] Paul Roetzer: it. So the Apple way is: models are trained on licensed data, or they give you the opportunity to opt out very easily if you don't want your data from the web infused in their models. Profanity is actually removed from the training data. They build algorithms to remove harmful content, and, you know, they would consider profanity harmful content.
[00:12:47] Paul Roetzer: They have image generation, but you can't do lifelike images. You can do sketches, illustrations, or animations. When they evaluate how the models perform, they don't use the standard industry evals. They use human [00:13:00] evals to benchmark models against things people actually do on devices. So they're looking at communication, work, expression of self.
[00:13:08] Paul Roetzer: So they use these, then they also look at evals against harmfulness, and their models dominate. Like if you go look at their data and these reports, they are the safest and they are the least harmful models out there. Like if you compare it to
[00:13:21] Paul Roetzer: Mistral, it was, it was scary. Like Mistral's model is not safe.
[00:13:26] Paul Roetzer: and then everything stays on device or in their private cloud that they introduced. So that is very much the Apple way. And so that was actually,
[00:13:36] Paul Roetzer: I think, my biggest question mark. One was, what were they going to do with Siri? Two was, what is this OpenAI partnership going to be? Are they going to give up all of their models to OpenAI?
[00:13:44] Paul Roetzer: Like that seemed impossible to me knowing Apple's approach. So that then takes me into the OpenAI part
[00:13:51] Paul Roetzer: of it. I honestly, like, I just expected more, and I'm not saying it's a bad thing. I actually think they made the right play not going deeper with OpenAI, but [00:14:00] it's,
[00:14:00] Paul Roetzer: the way it works is that ChatGPT is built in. So on, on the Apple Intelligence site, it says, with ChatGPT from OpenAI integrated into Siri and writing tools, you get even more expertise when it might be helpful for you. Meaning, if you need the power of a general model, a frontier model, we're gonna make it seamless for you. You don't have to pay for it.
[00:14:18] Paul Roetzer: So you're getting ChatGPT capabilities for free on your phone if you want it. And then you can choose, like, they'll alert you saying, Hey, this information goes to OpenAI, it's like, anonymized and things like that, but they're just letting you know each time you're making a choice to leave the protection of the Apple bubble, basically.
[00:14:36] Paul Roetzer: So, I, like, the big thing for me was like, are they going to turn Siri? Like, are they just going to use OpenAI's voice capabilities in Siri? And they did not do
[00:14:44] Paul Roetzer: that. And they actually left it open to infuse Gemini and other models later. Like, they basically said, like, this isn't an exclusive thing, We're just going to make it easy for you to get access to these models if you want.
[00:14:55] Paul Roetzer: So I think the Apple intelligence thing makes a ton of sense. I think [00:15:00] they demonstrated their chip, you know, capability, the hardware being a differentiator, their ability to manufacture and build intelligence into devices is unparalleled. Those are a couple things. Then you get into like, well, is this going to actually work?
[00:15:15] Paul Roetzer: We've all, over the last year and a half, seen plenty of AI demonstrations that are like, yeah, that was great. Like, and a year and a half later, we're still waiting for it to be able to do that. But Andrej Karpathy,
[00:15:25] Paul Roetzer: you know, who we often talk about on the show, he tweeted that he was impressed, but then he said, you know, someone said, oh yeah, but it's just demos.
[00:15:31] Paul Roetzer: And he's like, I, I agree. Like the proof will be in the pudding, but I will say that I think the technology exists to do what they're showing. Like there's, there's nothing that they would need to invent that isn't already possible.
[00:15:43] Paul Roetzer: And then the last couple of things. Siri, again, this was the big one I was waiting for.
[00:15:48] Paul Roetzer: So, if you go to the page and you read about, you know, what they're doing,
[00:15:53] Paul Roetzer: they say, the start of a new era for Siri. And that was the one thing, like, in between going up to the tee box and coming [00:16:00] back, my initial impression
[00:16:02] Paul Roetzer: up front was like, oh, they're introducing some stuff, but it sounds like a year from now, here's what we might do with Siri.
[00:16:09] Paul Roetzer: So I think we're going to have a much more useful Siri this fall. Like it's going to have the on-screen awareness you talked about. It's going to know what's on the screen and be able to do it, but protected on your device, like only you and your device are aware of
[00:16:22] Paul Roetzer: this. It's going to know personal context from the information on the device.
[00:16:25] Paul Roetzer: It's going to have the ability to start taking actions, kind of functioning as an agent. It's going to help with summarization and proofreading and, you know, mail replies. And it's like, it's going to start doing these things. But they're doing it in this really unique way where they're building what are called these adapters that actually function on top of the main model.
[00:16:43] Paul Roetzer: And so the model stays kind of in its static state almost, and then the adapters, like, evolve and learn. And so I think what it might lead to, you know, I've heard a couple of people who I respect say, like, this was the biggest thing they've done since the iPhone, which is [00:17:00] not the initial reaction you heard.
[00:17:01] Paul Roetzer: A lot of people are like, meh, like it didn't seem like a big deal. But I think what they're doing with Siri and where it's going to go, by building all these really efficient models right on the phone and being able to do all these things on device.
[00:17:14] Paul Roetzer: If you think about what the iPhone did, it changed the computing interface to touch.
[00:17:20] Paul Roetzer: Like it enabled us to touch screen everything.
[00:17:24] Paul Roetzer: I think now we will, we will start entering the era of the voice interface, like a true voice interface where it's able to do everything on your Mac, on your iPhone, on your iPad, with your AirPods, eventually probably with, you know, something beyond the Vision Pro, more everyday-life kind of glasses.
[00:17:42] Paul Roetzer: And so I think that voice starts to become a very, very important interface for humans with their devices in a very reliable way that maybe we haven't had before.
[00:17:54] Paul Roetzer: And then my final note is the stock was down like 1.8 percent on the day. That was the thing. Joe and I kept going, like
[00:17:59] Paul Roetzer: [00:18:00] after something would happen, we'd be like, okay, check their stock real quick.
[00:18:02] Paul Roetzer: Oh, go check Microsoft stock. Like we were kind of watching the, how is the market responding?
[00:18:06] Paul Roetzer: And so I would say that down one and a half to 2 percent was a meh from the market. Like it's up 15%, I think, since March, kind of anticipating this news. My thought was it could swing 5 to 10 percent either way, based on whether the market thought what they were doing was incredible or not.
[00:18:25] Paul Roetzer: And the fact that it was like 1 to 2 percent was just kind of like, yeah.
[00:18:29] Paul Roetzer: So I don't, I don't think the market's
[00:18:30] Paul Roetzer: pricing in the potential value of everything they've done. But I also don't think they penalized them for not doing anything massively innovative.
[00:18:37] Mike Kaput: That's a great assessment. I don't want to spend a lot of time on this piece, but we should also mention that Elon Musk came out guns blazing pretty immediately. Yeah, right, right, none of this is news, but
[00:18:53] Mike Kaput: basically he claims he is going to consider banning Apple devices at his companies [00:19:00] because they're integrating OpenAI's tech at the operating system level.
[00:19:04] Mike Kaput: And at one point he posted, quote, it's patently absurd that Apple isn't smart enough to make their own AI yet is somehow capable of ensuring that OpenAI will protect your security and privacy. So. Seems like there's some criticism here. This is not any surprise, but we probably should expect
[00:19:22] Mike Kaput: a
[00:19:23] Paul Roetzer: I had to laugh because he got community noted
[00:19:26] Mike Kaput: right away.
[00:19:26] Mike Kaput: So he's
[00:19:27] Paul Roetzer: such a big fan of these community notes. And he got
[00:19:29] Paul Roetzer: community noted hard on this. And then Marques Brownlee, who we love, you know, does amazing product reviews and stuff, and was at the
[00:19:39] Paul Roetzer: Apple event, tweeted, and he's like, listen, I asked them directly, what you're saying isn't true.
[00:19:44] Paul Roetzer: Like, it's not, this isn't how it's gonna work. My, I mean, when I saw the tweets from him, I was like, man, he really hates OpenAI. Like, this is all about
[00:19:53] Paul Roetzer: going against OpenAI, it's like that immediate overreaction and making this assumption that your data is going to go [00:20:00] nowhere. And yeah, people are going to have to, like, if they're coming to SpaceX or Tesla, put their iPhones in a Faraday cage.
[00:20:05] Paul Roetzer: So like, come on, like,
[00:20:08] Mike Kaput: it was just
[00:20:10] Paul Roetzer: an overreaction. So yeah, it is what it is.
[00:20:14] Mike Kaput: All right. So our next big topic this week is a little bit of a weird one, but a very important one. So we recently are hearing a lot about a guy named Leopold Aschenbrenner, who's a former superalignment researcher at OpenAI. He was
[00:20:32] Mike Kaput: making waves in the AI community with a series of pretty thought-provoking essays and interviews on the rapid approach of AGI and, possibly, eventually,
[00:20:43] Mike Kaput: superintelligence. Aschenbrenner is also notable because he was fired in April from OpenAI for allegedly leaking confidential information. This is an allegation he's kind of contextualized on popular podcasts
[00:20:56] Mike Kaput: like the Dwarkesh Podcast, which we both love. Basically [00:21:00] saying he simply shared an AI safety document that he was working on with some outside researchers after, you know, making sure there was no sensitive info in it. We'll, you know, talk about that piece, but really what's getting everyone's attention today is a
[00:21:16] Mike Kaput: very extensive thesis that he's laid out both in interviews and in
[00:21:21] Mike Kaput: a series of related essays that are about 150 pages long or so, called Situational Awareness: The Decade
[00:21:29] Mike Kaput: Ahead. And basically in them he claims that he's one of perhaps, right now, a few hundred AI insiders who are seeing signals that say we are going to have superintelligence, quote, in the true sense of the word, by the end of the decade, and that AGI by 2027
[00:21:49] Mike Kaput: is, quote, strikingly plausible.
[00:21:51] Mike Kaput: He then goes on to lay out very extensive arguments over why that is the case.
[00:21:57] Mike Kaput: Furthermore, he also makes kind of a big [00:22:00] argument here: that this happening is going to kick off some serious
[00:22:05] Mike Kaput: competition between the United States and China in a national security race to basically build and control AGI.
[00:22:13] Mike Kaput: And it's a race that he says, if we screw it up, could lead to all-out war. So, you know, that's fun. But basically he's
[00:22:21] Mike Kaput: arguing that, look, I'm seeing a bunch of talk in San Francisco shift from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. So basically, there's this infrastructure kind of arms race kicking off.
[00:22:36] Mike Kaput: And he says the AGI race has begun. We are building machines that can think and reason. By 2025/26,
[00:22:43] Mike Kaput: these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I.
[00:22:50] Mike Kaput: Before long, the world will wake up, but right now there are perhaps a few hundred people, most of them in San Francisco, in the AI labs that have [00:23:00] situational awareness.
[00:23:01] Mike Kaput: So, Paul,
[00:23:03] Mike Kaput: there's a ton to unpack here. I'm personally only about halfway through the full set of essays.
[00:23:07] Mike Kaput: They seem really, really good, but let's kind of take this one step at a time. First, maybe walk us through your thoughts on
[00:23:16] Mike Kaput: Leopold Aschenbrenner and just, like, his overall thesis. Like, how seriously should we be taking him and his perspective?
[00:23:24] Paul Roetzer: So, I think, you know, anybody who's listened to this podcast for a long time, or even just these last 10 episodes, knows we try and take a very balanced approach to all of this. We try and listen to the e/acc people and, you know, the techno-optimist crowd that's accelerate-at-all-costs. We try and share their perspectives.
[00:23:42] Paul Roetzer: We share the perspectives of the doomers, you know, the people who have a high p(doom), as they would call it, the probability of doom that, you know, this is all going to go really sideways. And then Mike and I generally kind of fall in the realist realm. We try and accept that there's different perspectives, and we try and uncover [00:24:00] you know, directional truth within those perspectives.
[00:24:03] Paul Roetzer: And we try and figure out what might actually happen. And so I definitely try not to get caught up, but I also listen. So I listened to the whole Dwarkesh interview, which I think was almost like three hours long.
[00:24:14] Paul Roetzer: I was familiar with Leopold. I was actually following him on Twitter, but I didn't know
[00:24:19] Paul Roetzer: his background deeply, you know, that he was valedictorian of Columbia at age 19, you know, started college at 15.
[00:24:25] Paul Roetzer: he's obviously a genius.
[00:24:29] Paul Roetzer: and. And so I think that's, his intelligence matters here. Like this is someone who has a proven history of being able to analyze things very deeply, learn topics very quickly. He's been on the inside at the super intelligence team. The situational awareness document you mentioned is dedicated to Ilya Sutskever, who we've talked about many times.
[00:24:49] Paul Roetzer: Ilya was Leopold's boss, probably, I would imagine, on the superalignment team; he's probably who he reported to. The whole thing is based on this premise [00:25:00] of scaling laws that we have talked about on the show many times. There are a lot of leading AI researchers who currently see a continued, predictable trend in the computing power.
[00:25:12] Paul Roetzer: That we, you know, give it more chips,
[00:25:14] Paul Roetzer: that these algorithms used to do the computation are becoming more efficient. They're able to get more out of these chips because they find efficiencies through algorithmic gains. And then there's what he terms "unhobbling": there's these things that are sort of in the way of progress, but there aren't any that they don't think they can solve.
[00:25:35] Paul Roetzer: So basically there's a bunch of like dumb things that kind of get in the way or prevent the progress from happening. But they think that they're largely able to get through a lot of these things through either reinforcement learning through human feedback, giving it chain of thought
[00:25:49] Paul Roetzer: reasoning, giving it tools, or just kind of like improving the algorithms.
[00:25:54] Paul Roetzer: And so the whole premise of this is: we are following these scaling laws, and if we [00:26:00] follow these, then there's these predictable leaps that will be made from GPT-2 to 4 to 5 to 6. And they, being, you know, these few hundred people who are at the forefront of this, don't see any signs that this won't hold true.
[00:26:17] Paul Roetzer: And so if you go back to episode 87, this was what I was talking about. Like, this is exactly the theory. So I've, I mean, I've been reading this with great interest, because it aligns with a lot of the timeline stuff we were talking about and it sort of goes much deeper in a lot of key areas, that I've been sort of waiting for people to start talking more about.
[00:26:36] Paul Roetzer: So I was pretty excited when I saw this. So what I'm going to do is I'll just recap the couple of sections. It is dense, like as you said, Mike, like it's, it's long. I would give this to Gemini and like
[00:26:45] Mike Kaput: have a conversation
[00:26:45] Paul Roetzer: with Google Gemini about it, probably. Okay. So the first chapter is From GPT-4 to AGI: Counting the OOMs, and OOM means order of magnitude.
[00:26:57] Paul Roetzer: A 10X improvement equals one [00:27:00] order of magnitude. So it's just kind of some technical terminology, but OOMs is a critical concept here. Because he basically goes through and says, hey, to go from this to, you know, three OOMs of compute is a hundred-billion-dollar cluster. And we already know that Microsoft and OpenAI are rumored to be working on that, and that seems plausible.
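To make the OOM terminology concrete, here's a quick sketch (ours, not from the episode): each OOM is a 10x jump, so counting OOMs is just counting factors of ten.

```python
# OOM = order of magnitude. Each OOM is a 10x jump, so OOM counts add
# while the underlying multipliers multiply.
def ooms_to_multiplier(ooms: float) -> float:
    """Convert a number of orders of magnitude into a raw multiplier."""
    return 10 ** ooms

print(ooms_to_multiplier(1))  # 1 OOM  -> 10x
print(ooms_to_multiplier(3))  # 3 OOMs -> 1000x
```

So when Aschenbrenner talks about "three OOMs of compute," he means a 1,000x scale-up over the baseline cluster.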
[00:27:16] Paul Roetzer: Now, the big stuff, like the superintelligence, starts bumping into limitations of infrastructure and energy, which we talked about as the limitation on episode 87. But so that's the first section: AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from preschooler to smart high schooler abilities. Tracing these trend lines,
[00:27:40] Paul Roetzer: we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027. The second section was From AGI to Superintelligence: The Intelligence Explosion. AI progress won't stop at human level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress,
[00:27:59] Paul Roetzer: [00:28:00] five-plus OOMs, into one year. We would rapidly go from human-level to vastly superhuman AI systems. The power and peril of superintelligence would be dramatic.
[00:28:10] Paul Roetzer: The third section is Racing to the Trillion-Dollar Cluster. The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, data center, and power buildout by the end of the decade.
[00:28:27] Paul Roetzer: The industrial mobilization, including growing U.S. electricity production by tens of percent, will be intense. The next section, and this gets into the thing you talked about with China, is Lock Down the Labs: Security for AGI. This was a big part of his focus on the Dwarkesh podcast. The nation's leading AI labs treat security as an afterthought.
[00:28:48] Paul Roetzer: It was terrifying to hear him talk about what's happening in these labs.
[00:28:52] Paul Roetzer: And then, ironically, 24 hours after this podcast dropped, OpenAI drops a blog post talking about how they're handling [00:29:00] security. So, he hit a nerve, for sure. And they were trying to kind of fight back from a PR perspective against some of it, but I don't think they have much ground to stand on.
[00:29:08] Paul Roetzer: I think what he's saying is probably true. It says, currently, they're basically handing AGI to the CCP on a silver platter. Securing the AGI secrets and weights against state actor threats will be an immense effort, and we're not on track. The next section was Superalignment, which is the team he was on with Ilya that they dissolved.
[00:29:28] Paul Roetzer: Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense. Failure could easily be catastrophic. The next: The Free World Must Prevail.
[00:29:45] Paul Roetzer: Superintelligence will give a decisive economic and military advantage. In the race to AGI, the free world's very survival will be at stake. Again, this is where some people are like, ah, this is a little much, and it may be, but he's got some really good data.
[00:29:59] Paul Roetzer: [00:30:00] And then, The Project. I specifically like this one because, I'm not saying this is the hill I'm going to die on yet, but this is kind of the direction I'm going.
[00:30:10] Paul Roetzer: I'll explain the context here. As the race to AGI intensifies, the national security state will get involved. The US government will wake from its slumber, and by 27/28, we'll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF... secure, what does that stand for?
[00:30:28] Paul Roetzer: Secure
[00:30:29] Mike Kaput: It's like, I forget what the acronym is, but it's where they securely brief people with intelligence briefings. Like, you cannot have devices in there and stuff. I forget; I'll have to look it up.
[00:30:39] Paul Roetzer: Yeah, you can look it up while I'm rambling here. So the end game, so here's my feeling.
[00:30:44] Paul Roetzer: I don't believe that the current administration, nor the former administration in the United States, nor, based on the current candidates, either administration that will come, has the [00:31:00] will and the vision to
[00:31:00] Paul Roetzer: do what is likely needed, and that is an Apollo-level-and-beyond project to build and control the infrastructure necessary for the
[00:31:10] Paul Roetzer: intelligence explosion.
[00:31:12] Paul Roetzer: So the United States government, and again, I know we have listeners all around the world, and other governments should be doing something similar, but in the United States in the 1960s, when we said we were going to put humans on the moon,
[00:31:24] Paul Roetzer: it was a decade-long initiative. At its peak, 6 percent of the entire federal budget was going to the Apollo program to build the rockets that would put us on the moon.
[00:31:35] Paul Roetzer: We need that. Like, there needs to be an effort made by the government. Now, 6 percent wouldn't be enough. Based on my research this morning, the budget in the United States is $1.7 trillion. The actual annual outlay, the spending, is $6.5 trillion. 6 percent isn't sufficient. You can't be spending a couple hundred billion.
[00:31:53] Paul Roetzer: So the Apollo program was $25 billion over a decade, which is the equivalent of about $250 billion in today's dollars. That's not going [00:32:00] to cut it. We need
[00:32:01] Paul Roetzer: trillions. So if I was the US government, I would
[00:32:04] Paul Roetzer: be aggressively putting a plan in place to spend trillions of dollars over the next five to ten years to house all the infrastructure in the United States, to keep all the best companies, the chip builders, the intelligence builders, in the United
[00:32:17] Paul Roetzer: States, because it is an imperative for the security of the country and for economic viability in the future.
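As a rough sanity check on the numbers Paul cites above, here's the back-of-the-envelope arithmetic (our calculation; the budget figures and the ~10x inflation adjustment are taken from what he says):

```python
# Back-of-the-envelope check of the figures cited above (all USD).
APOLLO_COST_1960S = 25e9       # ~$25B spent on Apollo over a decade
INFLATION_MULTIPLIER = 10      # rough 1960s-to-today adjustment Paul implies
ANNUAL_OUTLAYS = 6.5e12        # the $6.5 trillion annual spending figure
APOLLO_PEAK_SHARE = 0.06       # Apollo's ~6% peak share of the federal budget

apollo_today = APOLLO_COST_1960S * INFLATION_MULTIPLIER
six_pct_of_outlays = ANNUAL_OUTLAYS * APOLLO_PEAK_SHARE

print(f"Apollo in today's dollars: ${apollo_today / 1e9:.0f}B")
print(f"6% of annual outlays:      ${six_pct_of_outlays / 1e9:.0f}B per year")
```

Both results land in the hundreds of billions, which is Paul's point: even an Apollo-scale share of federal spending falls short of the trillions he's arguing for.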
[00:32:25] Paul Roetzer: So, like, I think, if anything, I'm sure there's people in Congress reading this. I'm sure
[00:32:33] Paul Roetzer: they're being briefed as we speak. If the U.S. government doesn't take a massive initiative, I think they're going to regret it within three to five years. Because, you know, OpenAI can build whatever they're going to build, and NVIDIA can build whatever they're going to build,
[00:32:52] Paul Roetzer: but unless we build the infrastructure to allow it to prosper, and we do it in the United States, it's just [00:33:00] not going to happen. They're going to run up against
[00:33:01] Paul Roetzer: energy issues. They're going to run up against electricity issues. They're going to run up against,
[00:33:05] Paul Roetzer: you know, where the data centers are going to go. And the $10 billion CHIPS Act the US did was a nice start, but $10 billion isn't doing it.
[00:33:13] Paul Roetzer: That's not going to cut it. I mean, Anthropic's raised $7 billion on their own already. So that was like one of the things. And then I'll just end with the synopsis
[00:33:23] Paul Roetzer: he gives at the end, which I thought was pretty solid.
[00:33:25] Paul Roetzer: So he says, what if we're right? And this is the big question for me:
[00:33:28] Paul Roetzer: is there like a 30 percent chance he's right? That's good enough for me. We should probably be really aggressively
[00:33:38] Paul Roetzer: assessing this possibility, even if it's 10 percent.
[00:33:42] Paul Roetzer: And I think it's higher than that. I think, the direction he's saying this goes, there's probably at least a 50-50 chance that he's right.
[00:33:52] Mike Kaput: Um, that should get action. So, he says, what if we're right? Before the decade is out, we will have superintelligence. This is what most of [00:34:00] the series is about. You mentioned there's like a few people, basically in San Francisco, who have this situational awareness, who are aware of this.
[00:34:08] Mike Kaput: It's hard to contemplate. Some people think deep learning is going to hit a wall, but these insiders don't. Then he takes on the doomers, who have been obsessed with AGI for years. He gives them a lot of credit for their prescience, but basically says they're not thinking about this the right way. These claims of doom and calls for indefinite pause are clearly not the way. Like, we can't just stop.
[00:34:28] Paul Roetzer: And then he says on the other end, we have the e/accs, and they're narrowly focused on, like, some good points, and progress must continue,
[00:34:35] Paul Roetzer: but, and I love this, beneath their shallow Twitter shitposting, they are a sham. And he just kind of straight up says, like, this is all for them
[00:34:42] Paul Roetzer: to just build their own products around it, in chatbots, and basically make some money. You know, a capitalistic approach to this.
[00:34:49] Paul Roetzer: We're just going to make some money off of this thing. So, he says the core tenets are: superintelligence is a matter of national security, which I agree with a hundred percent. America must lead. If you're in America, [00:35:00] you're going to agree with that. I would say democratic societies must
[00:35:03] Paul Roetzer: lead. You know, NATO must lead.
[00:35:05] Paul Roetzer: Like, I think that's more of the approach here: we need an international effort with democratic values and democratic governments that do this. That would probably be a better outcome for society. And then, like, we need to not screw it up. So he says: if we're right, these are the people that have invented and built it.
[00:35:23] Paul Roetzer: They think AGI will be developed this decade. And though there's a fairly wide spectrum, many of them take very seriously the possibility that the road to superintelligence will play out as I've described in this series. On the Dwarkesh podcast, they talk about these private parties where all the researchers from DeepMind and OpenAI and Anthropic all hang out together and compare notes.
[00:35:45] Paul Roetzer: And they're all kind of on the same page here of where this goes. So he says, like, I could get some of this wrong, but realistically, like, this is kind of what we think is going to happen. And so then he says, as you mentioned, right now, there are perhaps a few hundred people in [00:36:00] the world who realize what's about to hit us, who understand just how crazy things are about to
[00:36:04] Paul Roetzer: get. who have situation awareness. I probably either personally know or am one degree of separation from everyone who could plausibly run the project, which is the big Apollo type mission I mentioned. The few folks behind the scenes who are desperately trying to keep things from falling apart are you and your buddies and their buddies.
[00:36:21] Paul Roetzer: That's it. That's all there is. Someday it will be out of our hands, but right now, at least for the next few years of mid game, the fate of the world rests on these people.
[00:36:36] Paul Roetzer: Oh man, it
[00:36:37] Mike Kaput: That's the quote.
[00:36:38] Paul Roetzer: yeah, like
[00:36:41] Paul Roetzer: I mean, when I was listening to the Dwarkesh podcast, I was like,
[00:36:41] Paul Roetzer: holy shit. And then, you know, I was reading the report and I'm like, man, there's a lot to process here, and I honestly
[00:36:51] Paul Roetzer: don't disagree with any of it. Like, there was nothing in there where I was like, oh, okay, this is an
[00:36:55] Paul Roetzer: exaggeration. I was like, no, he's following scaling laws, and if these stay true, [00:37:00] everything he's saying is plausible. Like, there's nothing in this that's a leap to think is doable. And like the AI timelines episode, episode 87, we have to figure out: what does this mean?
[00:37:12] Paul Roetzer: And I think this is kind of reinforcing that. It's like, hey, we're further laying out the possibility here. What does it mean to government? What does it mean to business? What does it mean to society? What does it mean to educational systems? What does it mean to human purpose? Like, it's,
[00:37:24] Paul Roetzer: yeah, it's,
[00:37:26] Paul Roetzer: it's important. Like, I know this is a lot and it's kind of overwhelming, but we all have to really start thinking about these things. We're talking about a few years.
[00:37:38] Paul Roetzer: I mean, if you have a kid who's a freshman in high school, by the time they graduate, they're saying, this is where that might be by the time they go to college.
[00:37:46] Paul Roetzer: That's how fast this is going to happen.
[00:37:50] Mike Kaput: Fun fact of how small, you know, that community is: a person we talked about, I believe last week, Avital
[00:37:59] Mike Kaput: [00:38:00] Balwit, who wrote the "My Last Five Years of Work" essay, also advised on this writing, among many other people.
[00:38:08] Paul Roetzer: A lot of people were tweeting and saying, this is important, you need to read it. And my guess is they're all the people who are in the private parties sharing notes.
[00:38:18] Mike Kaput: All right. And last but not least, a SCIF is a Sensitive Compartmented Information Facility, perhaps.
[00:38:24] Paul Roetzer: I wouldn't have gotten that. I would have gotten, like, one letter.
[00:38:27] Paul Roetzer: I knew what it was, but I didn't know what the
[00:38:29] Mike Kaput: Yeah. Yeah.
[00:38:30] Paul Roetzer: initials stood for.
[00:38:31] Mike Kaput: All
[00:38:33] Mike Kaput: right. So in our third big topic this week, Paul, there's been an update to Adobe's Terms of Service that has them facing some pretty significant backlash.
[00:38:44] Mike Kaput: On the surface, these updates seem kind of run-of-the-mill and small.
[00:38:48] Mike Kaput: There's just like a few paragraph changes, but they're having this really big impact because,
[00:38:54] Mike Kaput: These changes to the Terms of Service resulted in this pop up where users had to [00:39:00] agree to give Adobe access to user content through what it calls, quote, automated and manual methods, or become unable to use the software.
[00:39:08] Mike Kaput: So you have to opt into this again. Some users are really taking issue with language in the Terms of Service that appears to give Adobe sweeping ownership over work created with Adobe products. And
[00:39:20] Mike Kaput: so a lot of creators are now worried that Adobe
[00:39:24] Mike Kaput: is going to be training all its AI models on their work. So this backlash has prompted a response from Adobe; they released a blog post stating that this is all kind of a big misunderstanding.
[00:39:38] Mike Kaput: They said, quote,
[00:39:39] Mike Kaput: we recently made an update to our terms of use with the goal of providing more clarity on a few specific areas, and pushed a routine reacceptance of those terms to Adobe Creative Cloud and Document Cloud customers.
[00:39:51] Mike Kaput: They clarified the policies and reiterated that they don't train on customer content and will never assume ownership of a [00:40:00] customer's work.
[00:40:01] Mike Kaput: So Paul, I think what really stands out here is, you know, not getting into the nitty gritty of one
[00:40:07] Mike Kaput: policy versus another, but like, this caused a severe and pretty immediate backlash
[00:40:13] Mike Kaput: from people. Like, are you seeing this? Where
[00:40:17] Mike Kaput: it just kind of highlighted to me how skeptical and untrusting a lot of consumers are about these companies using their work in any way.
[00:40:27] Paul Roetzer: Yeah, I mean, we've talked about this with OpenAI; it's just an unforced error. It's like, I don't know if legal got more control than communications, which is usually the case with stuff like this. But
[00:40:36] Paul Roetzer: It's just, it's just a bad look. it may be deceptive practices. I don't, I don't know. I wouldn't like accuse Adobe of that.
[00:40:44] Paul Roetzer: I don't know particularly, but it doesn't look good. The forcing you to accept it without really going into detail about it, and having it be a lot of, like, technical legal lingo. I mean, [00:41:00] I read it, and I was like, I don't know what that
[00:41:01] Paul Roetzer: means. And you and I are pretty, you know, knowledgeable about this stuff.
[00:41:05] Paul Roetzer: And I didn't understand the terms.
[00:41:07] Paul Roetzer: yeah, I, I mean, we saw this with Zoom last year where they did something where it's like, so are you recording?
[00:41:15] Paul Roetzer: When I record in Zoom, are you training your models on our confidential conversations? Cause that's what your terms sure sound like. And then they have to like pull back like, no, no, no, that's not what it means.
[00:41:24] Paul Roetzer: And we're sorry. I feel like there was a misunderstanding. And I just think that there's going to be an increasing level of mistrust. And I think that we need to expect more of these companies to be very transparent and clear and not even give the perception that they're trying to pull one over on us real fast with their.
[00:41:44] Paul Roetzer: Fancy legal language that isn't really understandable to anyone, but a a lawyer on the adobe team.
[00:41:52] Paul Roetzer: So yeah, I kind of treat this one as a rapid fire, almost, in a way. Like, bad look. We need to [00:42:00] expect more from companies. We talked about HubSpot's AI cards a few weeks back.
[00:42:06] Paul Roetzer: Seems like a much better approach.
[00:42:08] Paul Roetzer: I haven't gone deep on HubSpot's terms, but I think we just need more of that. More just
[00:42:13] Paul Roetzer: transparency. Here's what it is, like, you know, click here to learn more. I don't know. But, yeah, this can be a problem. I mean, a lot of companies want access to your data to use in their AI in some way, and it's going to get really confusing how they're doing it.
[00:42:26] Mike Kaput: Yeah, definitely a lesson for any companies building AI out there. People are very, very sensitive to this.
[00:42:32] Paul Roetzer: Yeah.
[00:42:33] Mike Kaput: Alright, so let's dive into some rapid fires and the first one is somewhat related here because
[00:42:38] Mike Kaput: Microsoft is also facing some backlash against a new, pretty hyped-up AI
[00:42:45] Mike Kaput: feature. Now, we covered this feature on episode 99. It's called Recall, and it is due to ship with Microsoft's new line of Copilot Plus PCs.
[00:42:56] Mike Kaput: Here's how it works: Recall is AI that basically screenshots everything you [00:43:00] see and do on your machine so that you can search and reference this material later. And Microsoft says this is all recorded and stored locally on device for security purposes
[00:43:11] Mike Kaput: and that its AI is not trained on this data.
[00:43:14] Mike Kaput: However, it's now come to light from some cybersecurity researchers who have done early tests of the product that, contrary to Microsoft's statements, it may actually be possible to extract this data
[00:43:29] Mike Kaput: from Recall remotely from a user's machine. So this basically prompted Microsoft to change course immediately and mandate that Recall be turned off by default on the machines.
[00:43:42] Mike Kaput: Previously, it was going to be turned on by default.
[00:43:46] Mike Kaput: So Paul, when we talked about this on episode 99, you kind of called this one, saying that this feature was going to be a slippery slope within companies due to privacy concerns. Is that kind of what you took away [00:44:00] from this debacle?
[00:44:01] Paul Roetzer: Yeah, a hundred percent. Like, again, sometimes this stuff gets announced and you're like, really? Like, did
[00:44:08] Paul Roetzer: you, like, you're just going to turn it on natively? Like, we haven't really thought through how much people are going to hate this offering and
[00:44:16] Paul Roetzer: not want everything on their device recorded every five seconds,
[00:44:20] Paul Roetzer: and how easy it's going to be to hack it. Even though you say it's secure, it's actually saved in a text file, and you can get that text file pretty easily; it's already been proven. Like,
[00:44:30] Paul Roetzer: I mean, I think Microsoft does a lot of stuff right. I don't know that this concept was fully baked at the time, or that they thought through all
[00:44:39] Paul Roetzer: the challenges that people were going to have with it. And yeah, I think they now know, and
[00:44:49] Paul Roetzer: they'll retrench a little bit and adapt.
[00:44:53] Paul Roetzer: That's it. I mean, we see this all the time, though: Google racing out AI Overviews, Microsoft pushing out this stuff. Everyone is [00:45:00] racing to out-innovate the other player, and companies that historically were really smart about releases
[00:45:07] Paul Roetzer: are just... there's just things being missed in the race to get stuff to market. And sometimes it's just simple communications, and, I don't know, user experience and, you know, anticipating user feedback. I don't know. Maybe when we get to AGI, we'll just ask the AGI, hey, what could go wrong with this product? And it'll tell us, and then we won't do these things.
[00:45:29] Mike Kaput: Yeah. In the meantime,
[00:45:31] Mike Kaput: some PR and crisis communications might be in order.
[00:45:34] Paul Roetzer: Yeah. Don't fire your communications people. You still need your communications people to think this stuff through.
[00:45:40] Mike Kaput: All right. In our next topic, we're actually going to highlight what is turning out to be a pretty important AI public service announcement. So this comes from Chris Bakke, who is an entrepreneur who now works at X,
[00:45:53] Mike Kaput: and he posted a really important, though it's a bit tongue in cheek, kind of reminder.
[00:45:58] Mike Kaput: That everyone needs to [00:46:00] really be paying attention to how their staff may be using AI tools. So, what Bakke posted says, quote, We're not fully prepared for a world in which 20 year old summer interns are uploading thousands of pages of proprietary company financial and product information
[00:46:16] Mike Kaput: to some LLM company that has just raised $2.2 million and hasn't gotten around to creating a terms of service or privacy policy yet.
[00:46:25] Mike Kaput: Now, Paul, this is definitely a little bit sarcastic, which is pretty common for Chris's posts on X
[00:46:32] Mike Kaput: like this, but it is really a pretty important point. I mean, right now I see a lot of organizations that do not really have robust policies in place to regulate how staff are using any type of AI tool, and that don't understand the risks. Like, how do we
[00:46:49] Mike Kaput: prevent this scenario he outlined from happening?
[00:46:53] Paul Roetzer: Yeah, I thought it was hilarious.
[00:46:55] Paul Roetzer: And so, so true.
[00:46:57] Paul Roetzer: That was part of it. So, what we know [00:47:00] is, people are using AI. So we talked about the LinkedIn Microsoft report: 75 percent are already using AI at work, and 46 percent of them started in the last six months. A good percentage of them, I was trying to find the stat right now,
[00:47:14] Paul Roetzer: it was like 40 percent, are using unapproved tools. And then another 40-some percent are using tools that are outlawed. And then there's this also in the LinkedIn Microsoft report: it says employees across every age group are bringing their own AI
[00:47:29] Paul Roetzer: tools to work. So let's say you hire an intern for the summer, and that intern is responsible for using some sensitive or confidential data, and they're not allowed to use ChatGPT at the office. It's shut off.
[00:47:41] Paul Roetzer: 85 percent of Gen Z workers are bringing their own AI tools to work. Millennials, 78 percent; Gen X, 76 percent; and the boomers, 73 percent.
[00:47:52] Paul Roetzer: People are going to use AI, whether you allow them to or not, is the point
[00:47:57] Mike Kaput: here.
[00:47:57] Paul Roetzer: And the same goes for [00:48:00] schools. Like, the students are going to use the AI tools, whether you want them to or not.
[00:48:04] Paul Roetzer: You have to put the policies in place to get them to do it responsibly. Like this is the whole key
[00:48:12] Paul Roetzer: Like it's going to happen with or without you have them do it in a responsible manner. And that is why, like we, so in the scaling AI courses I mentioned, I have an entire course on building generative AI policies and like what needs to go into it and it should be day one training for interns, for associates, for
[00:48:31] Paul Roetzer: everybody. Like tomorrow there should be generative AI policies that are rolled out to your people that guide them. And put guardrails in place, but give them the freedom to use these things responsibly without the risk that IT is so worried about, and rightfully so. so yeah, I just thought it
[00:48:49] Paul Roetzer: was a really funny way, and I actually mentioned this in a talk last week.
[00:48:52] Paul Roetzer: Like, I used this quote as an example to a conservative organization, an organization that's very risk-averse. And we [00:49:00] got some laughs,
[00:49:01] Mike Kaput: and
[00:49:01] Paul Roetzer: because they
[00:49:01] Mike Kaput: they knew it
[00:49:02] Paul Roetzer: true.
[00:49:03] Paul Roetzer: And we
[00:49:05] Mike Kaput: So next up, we have some news about Descript, which is a popular AI audio and video editing tool that we love and use at the Institute. They just announced a new AI editing assistant called Underlord. So
[00:49:19] Mike Kaput: if you're wondering about that name, here's how Descript explains it. Quote, nobody wants an AI overlord, but everybody can use an AI underlord, an editing assistant that can do all the tedious stuff but leaves you in control.
[00:49:33] Mike Kaput: So from what I can tell, Underlord seems to be kind of a package of both new and existing AI features in the platform. You can do things like remove all your retakes on a video except for the best one, leverage the existing Studio Sound feature to clean up audio, and remove filler words automatically.
[00:49:54] Mike Kaput: You can use a single click to center the active speakers in clips. There's some AI [00:50:00] multicam features that automatically cut to whoever's talking, and you can do things like turn long-form video into short-form clips.
[00:50:07] Mike Kaput: And then there's a ton of other generative AI features to do things like translations, draft titles, and write social posts.
[00:50:15] Mike Kaput: So, Paul, our team is certainly heavily invested in using Descript. What did you make of the Underlord announcement?
[00:50:23] Paul Roetzer: I think the other, like, the frontier model companies should borrow the branding team from Descript, because it's a hilarious product name. I mean, a lot of the names these companies come up with are really challenged, and then they keep changing them because they realize they're not great names.
[00:50:41] Paul Roetzer: Like, as soon as I saw this, I was like, that's just
[00:50:43] Paul Roetzer: genius. Like, that's such a funny name. But Descript is great. I mean, not only do we love the platform, they make amazing videos. They have an amazing creative team. I think I've seen that they have an
[00:50:53] Paul Roetzer: outside team they work with on some of this stuff. But, yeah, I just thought it was great.
[00:50:58] Paul Roetzer: And I mean, they pump [00:51:00] out new AI stuff monthly, and Claire and our team are constantly trying to keep up with what they've got going on and infuse it into how we use Descript. So yeah, if you don't use Descript, check them out. Like, we love it. They're not a sponsor. I'm not doing this because they're paying us.
[00:51:14] Paul Roetzer: We just think it's an amazing platform.
[00:51:17] Mike Kaput: So in another kind of product focused update, we've been experimenting with Perplexity Pages. So this is a new feature in Perplexity that automatically creates pages on a topic for you based on your curated searches. And we've
[00:51:32] Mike Kaput: also been following a little controversy around the new feature. A number of media outlets are getting upset, because some news outlets say Perplexity Pages is pulling in content summaries that seem very similar to the original articles the content is pulled from.
[00:51:52] Mike Kaput: So a high-profile example of this is a story that was in Forbes. One of their editors basically [00:52:00] says it is ripping
[00:52:00] Mike Kaput: off most of our reporting. It cites us, and a few that reblogged us, as sources in the most easily ignored way possible.
[00:52:07] Mike Kaput: Perplexity's CEO responded saying, thanks for the feedback. We agree with it.
[00:52:12] Mike Kaput: We're going to make it a lot easier to find the contributing sources and highlight them more prominently.
[00:52:19] Mike Kaput: So Paul, kind of two parts to this. You've done some experiments with Perplexity Pages. Can you maybe walk us through your impressions of that, but also kind of what's going on with the controversy about this?
[00:52:30] Paul Roetzer: Yeah, so this probably falls in the category of some tech people built a cool thing, put it out into the world, and didn't consider the ramifications. So, okay, I was building the Scaling AI series, and for course 10, what's next, I wanted to talk about
[00:52:46] Paul Roetzer: superintelligence. So I built a slide about superintelligence, and then I was like, oh,
[00:52:52] Paul Roetzer: this is probably a good test for Perplexity Pages. They'd come out like two days earlier. So I went in and I built a page about artificial superintelligence, and it's [00:53:00] super easy to edit. It did a great job. Like, it was pretty cool. It was like Wikipedia on demand. Basically you build your own
[00:53:05] Paul Roetzer: Wikipedia page and then you just make it public.
[00:53:08] Paul Roetzer: So I put it on LinkedIn. I was like, hey, this is pretty slick. Didn't get into the ramifications of it. And then I see the tweet from John Paczkowski, the one you're referring to, I think. And he said, you scraped and repurposed investigative reporting gathered over months, fleshed it out with reblogs of the same story by other outlets, and do not even bother to name us in the regurgitated post beyond a sources link, which you click to expand. Mm. That's a problem. Like, that's
[00:53:36] Paul Roetzer: plagiarism. Like,
[00:53:37] Mike Kaput: um
[00:53:39] Paul Roetzer: pretty, pretty cut and dry plagiarism.
[00:53:41] Paul Roetzer: So then Sarah Emerson, who I think you mentioned, this is the Forbes one, that's when she says, Perplexity is scraping the work of journalists at Forbes, CNBC, Bloomberg, and other pubs, claiming tiny, missable footnotes are fair credit.
[00:53:53] Paul Roetzer: In our case, it also lifted text and artwork. So that was when I tweeted, not sure Perplexity [00:54:00] has seen its first lawsuit yet, but might be time to make a few phone calls and set some of that funding aside. As I've said many times, IP attorney may be the safest job in knowledge work for the next decade.
[00:54:09] Paul Roetzer: And the fact that the CEO replied and was like, oh yeah, we're gonna fix that. No, that product doesn't come to market without that being fixed. You're going to get sued.
[00:54:21] Paul Roetzer: Again, it's just move fast, break things, not just product-wise, but legal-wise.
[00:54:31] Paul Roetzer: Is it incompetence or is it arrogance? I think it's both. Like, you can't do this.
[00:54:38] Paul Roetzer: It's so frustrating to watch stuff like this, where they have this obvious major issue, and they just put it into the world and then say, oh yeah, sorry, we'll fix that in a future version. No, it's illegal. You can't do it.
[00:54:55] Paul Roetzer: I don't know. I mean, this is just the state we're going to be in in perpetuity, because this [00:55:00] is how Silicon Valley works. But man, is it frustrating to watch. So, great product, but it's probably going to get them in trouble.
[00:55:12] Mike Kaput: All right, so next up, AI video Generation Tool Pika, which we've talked about before, has raised $80 million
[00:55:18] Mike Kaput: in a series B round, led by Spark Capital, and also participated in among others, by actor Jared Leto.
[00:55:26] Mike Kaput: This values Pika at $470 million just a little over a year after it was founded. So Paul, this definitely, we know Pika is kind of a major player here, but this
[00:55:39] Mike Kaput: kind of sets them up even in a better position. we've got Sora coming from OpenAI at some point, making a ton of waves. Runway is another huge player in the space. They have 236
[00:55:50] Mike Kaput: million plus dollars in funding and are valued at 1. 5 billion. Are we expecting the AI video generation startup market to kind of take off [00:56:00] here?
[00:56:00] Paul Roetzer: I think it's gonna be huge. I'm really interested to see who gets acquired first. These just seem like natural acquisition targets for an Adobe or, you know, a Google,
[00:56:11] Paul Roetzer: or, I mean, OpenAI doesn't seem to be in the acquisition phase for stuff like this, they're building their own stuff. But I'll be fascinated to watch how this plays out.
[00:56:20] Paul Roetzer: But yeah, a lot of people are doing this. Meta's doing it. Like, everybody's in this space. And I think in 2025 we're going to see kind of an explosion of video capabilities, and they'll probably become more accessible, like built right into Gemini or ChatGPT. And I'm also just wondering, how do they do the pricing model?
[00:56:36] Paul Roetzer: Like, is this a premium? Instead of my 20 bucks a month, do I go to 30, and now I get video production capability built in? So, more from a business perspective, I'm just fascinated to watch how this plays out.
[00:56:49] Mike Kaput: All right. And our last topic today: we just got an AI-focused commercial from McDonald's, in partnership with HeyGen, which is an AI tool that generates [00:57:00] synthetic voices and avatars. In this commercial, McDonald's has unveiled kind of an experience they're calling, quote, Sweet Connections. This is a tool where anyone can record themselves giving a message in the language of their choice, then have HeyGen basically make a video of them giving that message translated into a completely different language. In the commercial, the way they set this up is
[00:57:25] Mike Kaput: they give this experience to connect younger generations with grandparents who don't speak the same language as them. So you see a bunch of different examples of this in the video.
[00:57:36] Mike Kaput: And it's, you know, a super notable, positive, kind of heartwarming commercial.
[00:57:41] Mike Kaput: What do you think of kind of the ad and the use case here? I mean, it's certainly a more optimistic use of technology that can also apparently be used to make pretty realistic deepfakes.
[00:57:53] Paul Roetzer: Yeah, I think it's interesting that it's kind of giving credibility to that technology. [00:58:00]
[00:58:00] Paul Roetzer: But I think it's just an example of how it's just, AI is just going to be absolutely everywhere. Like by the fall, it'll be in all of our devices. Billions of people who've never used an AI tool are going to now use it.
[00:58:10] Paul Roetzer: you know, there's, I think we saw the study that only like 7 percent worldwide have tried ChatGPT. Well, that's going to change. Like it's going to be built into your device for free. So I think we're just over the next six to 12 months, like AI is just going to permeate throughout society and it's going to be built into brands.
[00:58:26] Paul Roetzer: It's going to be built into campaigns. The one thing that's interesting here is like, It took humans to come up with how to use PageN in these campaigns. Like, a human conceived of this and, you know, thought of the ways to apply AI in really creative ways. And I think that goes back to that idea we talked about earlier, you know, episode of this prediction machine concept of like, the future is, you know, Knowing what to tell the machine, what to do, and then knowing what to do with the output.
[00:58:52] Paul Roetzer: And so I think this is a sign of, you know, positive things it can do to creativity. It's going to open up all these possibilities for [00:59:00] new campaigns and new ideas and new products and services. And the humans who figure that out and become savvy at this stuff have enormous potential to do some really amazing things and be more creative.
[00:59:12] Paul Roetzer: And so that's, I don't know, kind of, not a positive note. Like I think it's. It's kind of a neat thing to see how people apply this stuff.
[00:59:18] Mike Kaput: Yeah, I was gonna say, it's good to end on that positive note after, you know, companies moving fast and breaking things, and superintelligence. I think this is a good one,
[00:59:27] Mike Kaput: yeah. Awesome. Well, Paul, thanks again, as always, for breaking down what's going on in AI this week. Just a couple really quick final reminders.
[00:59:36] Mike Kaput: If you haven't left us a review on your podcasting platform of choice, please do so. It helps us get the show into as many People's hands as possible and helps us improve.
[00:59:47] Mike Kaput: also a quick note, the podcast schedule, like Paul mentioned at the, top of the episode, we are taking a two week break for some travel.
[00:59:56] Mike Kaput: So the next episode will drop on July 2nd. And last but [01:00:00] not least. Please check out, if you have not already, our newsletter at marketingAI. com. institute. com forward slash newsletter, which summarizes all of the news we covered on
[01:00:10] Mike Kaput: this episode and all the stuff we didn't have time to get to each and every week. Paul, thanks again.
[01:00:16] Paul Roetzer: Yeah, enjoy a couple of weeks off. And one final reminder again: scalingai.com, June 27th. I'm going to do the webinar, but I'll also do some Ask Me Anything after that. So if you want to catch up and you're missing the show that week, you can join us on Thursday the 27th, and we'll be doing a live webinar with Q&A.
[01:00:34] Paul Roetzer: So, thank you. Have a great couple weeks everyone. We will talk to you again on July 2nd.
[01:00:39] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI [01:01:00] courses, and engaged in the Slack community.
[01:01:03] Paul Roetzer: Until next time, stay curious and explore AI.