It’s that time of year again and as students head back to school, they're not just packing pencils and notebooks anymore… they're bringing AI.
Join our hosts, Paul Roetzer and Mike Kaput, as they discuss the complexities of AI in education as the new school year begins, the recent Grok 2 update from Elon Musk's xAI, and the return of Google DeepMind's popular podcast after a two-year hiatus. In our rapid-fire section, we will cover the power of AI innovation and creativity, Sakana AI's latest, HubSpot's AI Search Grader, AI employment risks, and more.
00:04:33 — Grok 2 + Image Generation
00:13:57 — All New Google DeepMind Podcast Episodes
00:30:30 — AI in Schools
00:39:21 — Can AI Innovate?
00:44:15 — Google takes up licensing deal with Character AI
00:50:25 — Ex-Google CEO says AI startups can steal IP and worry about the legalities later
00:53:43 — Sakana AI unveils AI scientist
00:57:16 — California AI Bill SB-1047 Updates
01:02:13 — AI Employment Risks
01:05:44 — HubSpot AI Search Grader
01:09:27 — Personhood Credentials
Grok 2 Controversy
Elon Musk's AI company, xAI, has just rolled out a major update to its Grok model, and it's causing quite a stir.
The company released an early preview of its Grok-2 and Grok-2 mini AI models. xAI claims that an early version of Grok-2 was able to outperform both Claude 3.5 Sonnet and GPT-4 Turbo on the LMSYS.org chatbot leaderboard.
The models are now linked to an image synthesis model called Flux, allowing users of X to create AI-generated images with very few restrictions using Grok-2.
But the models’ improvements in reasoning, code, and language aren’t what have everyone talking. What is raising eyebrows is the apparent lack of safeguards in Grok's image generation capabilities.
Unlike other major AI image generators, Grok does not seem to refuse prompts involving real people or add identifying watermarks to its outputs.
The Verge reported that while Grok claims to have certain limitations, such as avoiding pornographic or excessively violent content, these rules appear inconsistent in practice.
Elon Musk, known for his stance on "freedom of speech," defended the release, calling it an "intermediate step for people to have some fun" while xAI develops its own image generation system.
Google Deepmind Podcast is Back
After a two-year hiatus, Google DeepMind is back with its hit podcast, Google DeepMind: The Podcast.
Hosted by Professor Hannah Fry, the previous two seasons of the podcast covered a wide range of topics related to artificial intelligence, including the technology that powers it and how it’s being used to better business and society.
Now, the podcast is back, regularly releasing new episodes, including an initial episode where Fry interviews Google DeepMind CEO Demis Hassabis.
Interestingly, this is the first season of the podcast to launch since the debut of ChatGPT nearly two years ago.
AI in Schools
In the US, another academic year is getting underway and children and young adults are headed back to school. As they do, we are fully expecting AI usage in school to be a hot topic.
Schools still appear to be grappling with how to handle AI’s disruption to classwork, homework, and traditional methods of evaluating students.
Some are still fearful of the technology or don’t fully understand it. Others are embracing it to enrich the learning experience. But the process is messy.
AI is rapidly improving and becoming a constant in students’ lives, creating challenges and opportunities for teachers, administrators, and parents.
This week’s episode is brought to you by MAICON, our 5th annual Marketing AI Conference, happening in Cleveland, Sept. 10 - 12. The code POD200 saves $200 on all pass types.
For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai.
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: I feel like when they get out of school, they're going to be required to use these tools one, two, three years out.
[00:00:06] Paul Roetzer: So why would we keep them from learning these things? These kids at all levels, if you teach it properly, they will learn how to be responsible in their use. If you tell them no, they're just going to find ways to do it without telling you they did it.
[00:00:22] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:52] Paul Roetzer: Join us as we accelerate AI literacy for all.
[00:00:59] Paul Roetzer: [00:01:00] Welcome to episode 111 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co host, Mike Kaput, who is, where are you today, Mike?
[00:01:09] Mike Kaput: I am in beautiful Green Bay, Wisconsin.
[00:01:12] Paul Roetzer: Nice, nice. Mike's in town for a talk, so he's traveling. I'm actually catching a flight right after this as well. I will be out west this week. So, yeah, Mike and I are on the road this week, but we have a lot to talk about before we worry about any of that. There's some big topics.
[00:01:34] Paul Roetzer: We got Grok 2 and its unfiltered image generation capabilities. We have insights from Demis Hassabis of Google DeepMind in a new podcast episode. We're going to talk about AI in schools. There's a lot to go through. So, first I want to say, this episode is brought to us by the Marketing AI Conference, or MAICON. The fifth annual MAICON event is coming up September 10th to the 12th.
[00:01:58] Paul Roetzer: We've been talking about this a lot on the [00:02:00] podcast lately.
[00:02:01] Paul Roetzer: Now, this is the event I started back in 2019 as part of Marketing AI Institute. So, we're in our fifth year, and it is the biggest one yet. There's a total of 69 sessions: 33 breakouts, 10 main stage general sessions and keynotes, 16 AI tech demos.
[00:02:16] Paul Roetzer: We just announced two new main stage talks that I'm extremely excited about. The first is with Andrew Mason, the CEO and founder of Descript, which is a platform that we use for the podcast. We love Descript; it's an essential part of our tech stack, not only for the podcast, but for all the webinars we do, for all the video and audio.
[00:02:35] Paul Roetzer: So that session is going to be AI as Your Underlord: The Future of Storytelling with AI Creative Assistants. And then we also announced another main stage session titled AI-First: Future-Proof Your Business and Brand. This one, anyone who's been listening to the podcast for a while will appreciate. This is actually with Adam Brotman, the former EVP and Chief Digital Officer for Starbucks.
[00:02:58] Paul Roetzer: And Andy Sack, venture [00:03:00] capitalist and former advisor to Microsoft CEO Satya Nadella. Now, why would you know Adam and Andy's names? Well, back on episode 86, when we sort of broke the story about Sam Altman's quote about marketing in the future of AGI, where Sam said that AGI will mean that 95 percent of what marketers use agencies, strategists, and creative professionals for today
[00:03:24] Paul Roetzer: will easily, nearly instantly, and at almost no cost, be handled by the AI. Well, that quote came from Adam and Andy's book, AI First, and an interview they had done with him in October of 2023. So this session at MAICON is going to be sort of a behind the scenes of not only that interview with Sam, but after they talked to Sam, they interviewed Reid Hoffman, Bill Gates, Mustafa Suleyman, Sal Khan, all about AI and transformation in marketing and business.
[00:03:55] Paul Roetzer: And so this session is going to be a fireside chat with Adam and Andy, exploring [00:04:00] everything they learned along this journey, interviewing some of these AI leaders. So, MAICON, again, it's coming up fast. Check it out. It's at MAICON.ai. That's M-A-I-C-O-N dot AI. You can use POD200 to get $200 off any pass you want.
[00:04:17] Paul Roetzer: And so again, check it out. The agenda is up there. More than 69 sessions; I'm going to have trouble picking which ones to go to. There's just so many incredible speakers. So, MAICON.ai. All right, Mike, let's start with Grok 2.
[00:04:33] Mike Kaput: Yeah, let's dive into the madness right away. So Elon Musk's AI company, xAI, they've just rolled out
[00:04:42] Mike Kaput: an early preview of Grok 2 and the Grok 2 Mini AI models. Now, xAI says that an early version of Grok 2 was actually able to outperform both Claude 3.5 Sonnet and GPT-4 Turbo on the LMSYS.org chatbot [00:05:00] leaderboard. But the improvements in the model, in its reasoning, its code, its language capabilities,
[00:05:07] Mike Kaput: these are not what have everyone talking, because that's been overshadowed a bit by the model being linked now to an image generation model called Flux, which we talked about last week. This allows users on X to create AI-generated images. Now, what's raising eyebrows here and getting a lot of not-so-positive attention is that there apparently aren't that many safeguards, if any, in Grok's image generation capabilities right now.
[00:05:37] Mike Kaput: Grok does not seem to be refusing any type of prompts involving real people, it's not adding any type of watermark to its output, and, to put it lightly, this has led users to create some very controversial content on one of the largest communications platforms in the world. So there's been a ton of threads [00:06:00] being shared of images of political figures in compromising situations.
[00:06:05] Mike Kaput: And while Grok claims to have certain limitations on, like, pornographic or excessively violent imagery, the rules, according to The Verge, appear very inconsistent in practice. So there's basically a flood of crazy AI-generated imagery that is anywhere from really funny to very profane and very offensive.
[00:06:28] Mike Kaput: Elon Musk, known for his stance on freedom of speech, defended the release, calling it, quote, an intermediate step for people to have some fun, while xAI develops its own image generation system. Now, Paul, I've seen some of the, let's call these questionable image results that people are creating. Am I wrong in saying this seems just like a recipe for disaster?
[00:06:52] Paul Roetzer: Yeah, but it's what Musk does. He pushes the boundaries and invites lawsuits [00:07:00] and, you know, I think it keeps him entertained and motivated to do what he's doing. I don't know. It is shocking
[00:07:09] Paul Roetzer: how good it is at reproducing people and characters. My first test was Disney characters as an experiment, and it did well.
[00:07:17] Paul Roetzer: Nailed it. I did Mickey Mouse and Goofy riding in a Tesla, and it was as though Disney created it. So, I think it's important to take a quick step back on xAI and Grok. We have talked about them on the show before. But as a reminder: Elon Musk started OpenAI with Sam Altman and Greg Brockman and others as a counterbalance to Google's AI efforts and their acquisition of DeepMind, which we're going to talk about.
[00:07:44] Paul Roetzer: Next, in the next topic. Musk was an investor in it. So Elon Musk had invested in DeepMind, before it was Google DeepMind, to stay close to the frontier, because he was worried about the risk to humanity if we achieved [00:08:00] AGI. He then exited OpenAI in 2019 after trying to roll OpenAI into Tesla and losing a power struggle with Sam Altman.
[00:08:08] Paul Roetzer: So then on episode 71 of this podcast, November 7th, 2023, so just 10 months ago, they announced, well, they didn't announce it on our podcast, but we covered the announcement on our podcast, that he announced Grok, an AI agent designed to answer any question conversationally.
[00:08:26] Paul Roetzer: In their announcement about xAI, they said: Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor. A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the X platform. It also can answer spicy questions that are rejected by most other AI systems.
[00:08:48] Paul Roetzer: So from the beginning, they set this precedent that this thing's going to do some stuff that the other ones won't. By design, the real-time access to Twitter slash X was, you know, the thing that was really [00:09:00] differentiating them. Grok is a popular term. We talked about this at the time on the podcast.
[00:09:04] Paul Roetzer: Among science fiction fans, it was coined in Robert Heinlein's classic sci-fi novel, Stranger in a Strange Land. Grok basically means to develop a very deep understanding of something. Then, on May 26th, 2024, so just a few months back, they announced a $6 billion Series B funding round at a rumored valuation of $24 billion.
[00:09:28] Paul Roetzer: So, what's that? Seven, eight months after its founding, it was valued at $24 billion.
[00:09:34] Paul Roetzer: Well, it was founded, I guess, in July 2023. They released Grok 1 in November 2023. And now we have Grok 2.
[00:09:45] Paul Roetzer: The interesting thing was, we talked about Flux last week, as you said, this image generation tool.
[00:09:50] Paul Roetzer: And we were like, well, but it's kind of hard to use. Well, it's not hard to use anymore, because now it's in Grok. So if you're paying for whatever the premium license is on X, 20 bucks or something, [00:10:00] you can now do it. It will create anything. Like, I asked for images of Taylor Swift and Elon Musk together, Disney, Pixar, other celebrities, politicians; it'll do it all, and it does it in near photorealism for people. And when you ask for Disney and Pixar, it is eerily similar, if not an exact replica of the characters.
[00:10:24] Paul Roetzer: So now the interesting thing here is, as you mentioned, this sort of pushing of the limits of free speech. Some other context: now, it's not like it's only doing this for Disney. It can do it for anything. I asked for, like, Transformers, like Optimus Prime meets a Tesla bot, and things like that, and it does all that. But Disney in particular is notoriously aggressive in protecting its copyrights and trademarks, and Elon and Bob Iger have a very recent history together.
[00:10:50] Paul Roetzer: So if you'll recall, back in November of last year, at the New York Times DealBook Summit, Disney had pulled their [00:11:00] ads from X, so Disney was a big advertiser on X, and Elon was not happy about this. At the time, Elon had retweeted something that was viewed by many as being antisemitic, and that led to Disney and a bunch of other brands yanking their advertising.
[00:11:15] Paul Roetzer: Well, Elon, in a fireside chat with Andrew Ross Sorkin, was asked about this, to which Elon, to everyone's shock in the audience, said: if somebody's going to try to blackmail me with advertising, blackmail me with money, go F yourself. Go F yourself. Is that clear? I hope it is. Then, he appeared to directly call out Disney CEO Bob Iger by adding, Hey Bob, if you're in the audience, that's how I feel.
[00:11:42] Paul Roetzer: Bob was at the event. So, Elon was trying to very purposely call out the Disney CEO for yanking advertising. Tesla then yanked Disney Plus from their cars. So in the back of a Tesla, you can watch, like, Netflix and Disney Plus and [00:12:00] other shows, YouTube; they took Disney Plus out. So, you know, Elon's known to hold a bit of a grudge.
[00:12:06] Paul Roetzer: So, it's not like this is just being done to Disney, but it sure almost feels like he's just baiting Disney to sue him. Like, just go ahead. And I'm sure it's coming. Like, they can't allow this to happen. So, I don't know where this is going to go, but, you know, it's worth watching. We actually have a main stage session at MAICON about IP and generative AI.
[00:12:29] Paul Roetzer: So this is going to be a hot topic at MAICON because you're dealing with copyright infringement issues here with characters from Disney, Pixar, and other, you know, creators, trademark violations, right of publicity violations for celebrities, defamation, misrepresentation. All of these are fair game when you put a system like this out there.
[00:12:49] Paul Roetzer: is one of the few people who's got the money, and the personality to sort of take this kind of stuff on in the name of quote unquote [00:13:00] free speech
[00:13:00] Paul Roetzer: in Elon's terms, so it's going to be interesting to watch, but it is, it is quite good and it will generate whatever you want it to.
[00:13:09] Mike Kaput: should do wonders to raise awareness of people of what these tools are capable of, in good and bad ways, it
[00:13:17] Paul Roetzer: Yeah, and I will say also, I've played around with Grok 2 just as a language model, and it is way better. And Grok 2 Mini is the one that I have access to as a paid user of X, which is the only reason I'm paying to have X: to be able to test these tools. I don't know what else you would pay for, since the blue check mark is sort of not worth anything these days.
[00:13:44] Paul Roetzer: So, it is quite good. I mean, I believe it's supposed to be on par with, like, the GPT-3.5 range, and it does seem to be there in my limited testing.
[00:13:57] Mike Kaput: Okay, so another [00:14:00] big topic we are seeing this week is that after a two-year hiatus, Google DeepMind is back with its hit podcast, Google DeepMind: The Podcast, which was previously hosted by Professor Hannah Fry and still is. The previous two seasons of the podcast covered a very wide range of fundamental topics in AI, including, you know, looking behind the scenes at the technology that actually powers the tools and models and how it's being used to better business and society. Well, now the podcast is back. They are regularly releasing new episodes, including an initial episode where Fry interviews Google DeepMind CEO Demis Hassabis. And, Paul, I know you've been following this series for a while. I think, interestingly, this is the first season of the podcast to launch since the debut of ChatGPT nearly two years ago.
[00:14:52] Mike Kaput: Like, what's important to know about this effort, this show, and this latest season, in your [00:15:00] opinion?
[00:15:00] Paul Roetzer: First, Hannah Fry, the host, is amazing. So she has a book called Hello World: Being Human in the Age of Algorithms, which I would recommend to everyone. I think that came out around 2021, 2022; I read it a while ago. But she's a brilliant author, professor, mathematician, and she does an incredible job with the interviews.
[00:15:19] Paul Roetzer: Listen, I would suggest listening to the first two seasons. I think they're each about seven or eight episodes. and so you can binge them and they're fantastic. You just learn a ton about the inner workings of DeepMind. But season three
[00:15:32] Paul Roetzer: started off with an interview with Demis, and you can go back to season two, and it's fascinating to sort of benchmark where he thought AI was
[00:15:40] Paul Roetzer: back in season two, you know, a couple of years ago, and where we think we are today. So I would start off with Demis Hassabis. We talk about Demis a lot on this show. If you're a regular listener to the show, it's a name familiar to you.
[00:15:54] Paul Roetzer: But having done hundreds of keynotes in the last few years on [00:16:00] AI and in almost every one of them using Demis's definition of AI, and then asking how many people in the room are familiar with Demis Hassabis
[00:16:09] Paul Roetzer: It is rare in a room of hundreds of people to get more than one or two hands to go
[00:16:14] Paul Roetzer: up. So, DEMIS is extremely well known within the AI world, certainly within technology. But when you get outside into the business world, into the marketing world, it's just not a name people know. And I think, it should be. So what I'll often say is I'll use his definition. So the definition Demis gives of AI, and this goes back like a Rolling Stone article in like 2016, 17 is when I first saw it.
[00:16:40] Paul Roetzer: AI is the science of making machines smart. And I always loved that definition because of its simplicity. And what I tell people is like replace machines with software and it actually makes a ton of sense. That the software we use to do our jobs, whether it's marketing or HR or finance or legal or whatever it is.
[00:16:57] Paul Roetzer: The software you use is getting smarter. It's [00:17:00] developing the ability to think, to create, to understand, to reason, to plan. And software before didn't do those things. So I like to use his definition because I think it's extremely approachable and it makes sense to a lot of people. And then I'll often say, like, Demis is going to be one of the most important people of our generation, if not all of human history.
[00:17:20] Paul Roetzer: And then when you start to talk about the work he has done, like with AlphaGo and AlphaFold,
[00:17:26] Paul Roetzer: just everything they're doing at DeepMind, it's really important people understand who Demis is and follow along. So he's a child chess prodigy. He's 48, I believe; I think he was born in 1976. Chess prodigy, a renowned video game designer and developer by the age of 17.
[00:17:41] Paul Roetzer: Co-founded DeepMind in 2010. PhD in cognitive neuroscience with a focus on imagination and memory. So, like, he's really focused on understanding the human mind, but then a lot of their research was through gameplay. And so early on it was Atari games, and then the game of Go.
[00:17:58] Paul Roetzer: And so their team, [00:18:00] after they were acquired in 2014 by Google, built AlphaGo, which defeated world champion Lee Sedol in 2016.
[00:18:08] Paul Roetzer: AlphaFold predicts the structure of nearly all known proteins. They open sourced that research. They just recently came out with AlphaFold 3 that'll accelerate, like drug discovery, for example. So they do incredible work. So anytime Demis does a podcast, I, I listen to it. I think I've listened to basically every podcast he's ever done.
[00:18:26] Paul Roetzer: So. I think everyone should go listen to the episode. I'm just going to highlight maybe like five or six key topics that were discussed, but it's worth going and listening to the full thing. So the title of the podcast episode, if I'm not mistaken, is something like Unreasonably Effective. And that starts out early in the episode where they talk about large language models being unreasonably effective, which is how Demis described them.
[00:18:53] Paul Roetzer: So when Professor Frey asked about that, he said,
[00:18:57] Paul Roetzer: Somehow, these systems, if you give them enough data, [00:19:00] they do seem to learn and generalize from examples, not just rote memorization, but actually somewhat understand what they're processing. And it's sort of a little bit unreasonably effective in the sense that I don't think anyone would have thought it would work as well as it has, say, five years ago.
[00:19:17] Paul Roetzer: So, for me, that's a really interesting one because, again, he is at the forefront, and he's saying five years ago we wouldn't have guessed these models would be doing what they're doing. And so, what we're always trying to do on the show is capture the moment: like, where are we, and where do we think we're going to go?
[00:19:34] Paul Roetzer: But I think that excerpt is representative of the fact that even the people at the leading edge of this aren't really clear. And I remember, in Genius Makers by Cade Metz, one of my favorite books on AI, he told the story of when DeepMind achieved defeating the world champion of Go, that there were other labs working on it.
[00:19:54] Paul Roetzer: And I think at the time, even, like, Yann LeCun at Meta thought they were crazy, that it wasn't something that [00:20:00] would be achievable in the near term. And then, like, six months later, they announced that they had done it. So you can't ever
[00:20:08] Paul Roetzer: Feel like any one AI expert knows what's going to happen because they don't agree on this stuff.
[00:20:14] Paul Roetzer: And oftentimes they underestimate or overestimate what's going to happen. The other one I wanted to touch on is underhyped versus overhyped. So she asked: do you think that where we are right now, how things are at this moment, is overhyped or underhyped?
[00:20:28] Paul Roetzer: Or is it just hype perhaps in the wrong direction?
[00:20:30] Paul Roetzer: to which Demis responded, yeah, I think it's more the latter. So I would say that in the near term, it's hyped too much. I think people are claiming it can do all sorts of things it can't. There's all sorts of startups and VC money chasing crazy ideas that are just not ready. On the other hand, I think it's still underhyped.
[00:20:49] Paul Roetzer: I think it's underhyped or perhaps underappreciated even now, what's going to happen when we get to AGI and post AGI. Now, now keep in mind here, he's not saying [00:21:00] if.
[00:21:00] Paul Roetzer: he believes it is a when, as does Sam Altman, as does many of the others. So I still don't feel like people have quite understood how enormous that's going to be, and therefore the responsibility of that.
[00:21:13] Paul Roetzer: So it's both really, I think it's a little bit overhyped in the near term, and at the moment. Another one I liked was she asked about, well, how do you evaluate all these AI startups? Like, how do you know
[00:21:24] Paul Roetzer: what's real? And this is something,
[00:21:25] Paul Roetzer: like you and I talk to, you know, companies about a lot, where it's like, how do we trust these vendors? And so he said, I think you need to look at obviously, the technical due diligence, have some understanding of the technology and the latest sort of trends.
[00:21:39] Paul Roetzer: I think also look at perhaps the background of the people. How technical are they? Have they just arrived in AI like last year from somewhere else?
[00:21:48] Paul Roetzer: I don't know, were they doing crypto last year? You know, these might be clues that perhaps they're jumping on the bandwagon. And it doesn't mean to say they can't come up with good ideas like this. But it's [00:22:00] more like a lottery ticket, shall we say,
[00:22:02] Paul Roetzer: meaning they might get lucky and switch from crypto to this, but, you know, they needed to have been there for a while.
[00:22:07] Paul Roetzer: She did ask him about Gemini and how it's different from other models. He basically talked about how they've built it multimodal from the ground up, so it has a better understanding of the world around it. And, you know, multimodal being not just text input, but image and video and audio and code.
[00:22:23] Paul Roetzer: And then he also talked about their big innovation in memory, which they were the first to market with: the million-token context window. And now it's 2 million, and they've talked about 10 million. Context being how much content you can give it, so, you know, whether it's videos or books or whatever it may be.
[00:22:40] Paul Roetzer: And then he did say he basically felt like Gemini, ChatGPT, and Anthropic's Claude are the three main models to, you know, pay attention to. And he said the other ones, like Meta and Mistral, are doing interesting things, but it's pretty much a three-horse race at this point.
[00:22:55] Paul Roetzer: did ask him about open source.
[00:22:57] Paul Roetzer: We talk a lot about that. We'll talk about [00:23:00] SP1047 again, later in this episode. And he said they're huge proponents of it. Like they, you know, they've been big believers of it. They've open sourced AlphaFold, their AlphaGo research. but he did say it's okay to do it now because the systems aren't that powerful.
[00:23:14] Paul Roetzer: Quote, but in two, three, four years time, especially when you start getting agent like systems and agentic behaviors, then
[00:23:23] Paul Roetzer: something, if something's misused by someone or perhaps even a rogue nation, that could cause serious harm. So then I think I don't have a solution for that, but as a community, we need to think about what does that mean for open source.
[00:23:34] Paul Roetzer: And then this is what he proposed: perhaps the frontier models need to have more checks on them, and then only after they've been out for a year or two can they be open sourced. And he did say that's basically what they're doing with their Gemma model. So Gemma is their open source model. And he pretty much said, like, we're giving you year- or two-year-old technology.
[00:23:53] Paul Roetzer: We're giving you the smaller version that's basically, like, a year old. It's not our frontier model, and on the frontier models, we need to [00:24:00] have a bigger focus. And then two other thoughts here. So he talked about the role of planning and agents. This is a recurring topic for us. So he said they still can't do actions in the world for you.
[00:24:12] Paul Roetzer: They're very much like passive Q&A systems. I think that agentic is the next era: these sort of more agent-based systems, as we would call them, have these agent-like behaviors where they can do things for you. And then he said, and this is where my biggest argument is for why I think, in the end, Google still has the advantage.
[00:24:31] Paul Roetzer: I mean, obviously OpenAI is doing insane stuff, Anthropic's doing insane stuff, but I always go back to AlphaGo as the thing that no one else has, nor has built. And so he says: This agent area is what we're expert in. That's what we used to build all of our game agents, including AlphaGo, and all the other things we've talked about in the past.
[00:24:53] Paul Roetzer: So a lot of what we're doing is bringing, marrying that work together: the large language models with [00:25:00] their reinforcement learning systems like AlphaGo. So the next generation of systems you'd think of as combining AlphaGo with Gemini. So I'm going to say one other part. Mike, I'm actually going to read an excerpt from our book, because I think
[00:25:14] Paul Roetzer: this sets the stage for the significance of AlphaGo, but it also is a prelude to some of the other things we're going to talk about a little later in this episode. So, in our book, we kind of talked about this Move 37, this famous move that happened during the 2016 match with Lee Sedol.
[00:25:31] Paul Roetzer: And then we talked about the relation of that move. And what this AlphaGo system was able to do, and the impact it may have on creativity and planning and knowledge work. So I'm just going to read this quick excerpt from, from our book. So it says, this debate over machine creativity was brought to life in game two of the AlphaGo versus Go world champion Lee Sedol match in 2016.
[00:25:56] Paul Roetzer: Move 37, as it became known, saw the DeepMind machine [00:26:00] place a stone on the Go board in a non-traditional spot that had human Go experts baffled. Michael Redmond, a commentator on the live English broadcast and a top Go player himself, said at the time, quote, I wasn't expecting that. I don't really know if it's good or bad.
[00:26:15] Paul Roetzer: It's a very strange move. Sedol, who had already lost game one to the machine, was in disbelief. He
[00:26:21] Paul Roetzer: stared at the board for a moment, sat back in his chair, and spent the next 12 minutes assessing his options before finally making his next move.
[00:26:30] Paul Roetzer: Seadal never recovered. He would lose Game 2, and AlphaGo would go on to win the best of five series 4 1.
[00:26:37] Paul Roetzer: The DeepMind team had trained AlphaGo, and this is the real important stuff here. The DeepMind team had trained AlphaGo using deep learning, specifically a type called reinforcement learning, in which AlphaGo learned to discover new strategies by playing millions of games of Go against versions of itself.
[00:26:55] Paul Roetzer: In addition, AlphaGo learned the ancient game of Go by studying [00:27:00] millions of moves made by top human Go players. Cade Metz, who I mentioned earlier, an author and technology correspondent with the New York Times, was in Seoul, South Korea, covering the match for Wired Magazine. He spoke with DeepMind's David Silver, the lead researcher on the AlphaGo project, about move 37.
[00:27:18] Paul Roetzer: Metz summarized what happened in this way. Now, as I'm reading this, think about the future of these models and their ability to learn, and to plan, and to reason. So AlphaGo learns from human moves, and then it learns from moves made when it plays itself. It understands how humans play, but it can also look beyond how humans play to an entirely different level of the game. This is what happened with move 37. AlphaGo had calculated that there was a 1 in 10,000 chance that a human would make that move, but when it drew on all the knowledge it had accumulated by playing itself so many times and looked ahead to the future of the game, it decided to make the move anyway. And the [00:28:00] move was genius.
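The selection logic Metz describes can be sketched in a toy way: a policy prior estimates how likely a human is to play each move, while a value estimate learned from self-play scores how promising the move actually looks. Everything below is an illustrative simplification with made-up numbers, not DeepMind's actual system.

```python
# Toy illustration of the move-37 idea: each candidate move carries a
# "human prior" (how likely a human is to play it) and a value estimate
# (how good self-play experience says it is). Hypothetical numbers.

def pick_move(moves):
    # moves: list of (name, human_prior, value_estimate) tuples.
    # A pure imitation system would maximize the human prior;
    # an AlphaGo-style system can override it on value.
    return max(moves, key=lambda m: m[2])

candidates = [
    ("conventional move", 0.35, 0.52),    # what most pros would play
    ("move 37",           0.0001, 0.57),  # ~1-in-10,000 human prior, higher value
]

best = pick_move(candidates)
print(best[0])  # the low-prior move wins on its value estimate
```

The point of the sketch is only that maximizing over the value estimate, rather than the human prior, is what lets the system choose a move almost no human would make.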
[00:28:01] Paul Roetzer: In AlphaGo the movie, which everyone should watch, it's free on YouTube, Silver
[00:28:06] Paul Roetzer: said of move 37 that AlphaGo, quote, went beyond its human guide and it came up with something new and creative and different. But in the documentary, Silver also made the point that this is not human versus machine, rather human plus machine.
[00:28:23] Paul Roetzer: Quote, AlphaGo is human created, and I think that's the ultimate sign of human ingenuity and cleverness. Everything that AlphaGo does, it does because a human has either created the data that it learns from, or created the learning algorithm that learns from that data.
[00:28:46] Paul Roetzer: All these things have come from humans. So really, this is a human endeavor. And then in the final excerpt here, Sedol would later say, [00:29:00] I thought AlphaGo was based on probability calculation and that it was merely a machine.
[00:29:05] Paul Roetzer: But when I saw this move, I changed my mind. Surely AlphaGo is creative.
[00:29:10] Paul Roetzer: This move was really creative and beautiful. So
[00:29:13] Paul Roetzer: in a later rapid-fire topic, we're going to talk about this idea of whether large language models can create new ideas and innovate. And I want you to remember this excerpt when we get into that conversation.
[00:29:24] Mike Kaput: Yeah, and just to reiterate, that's why we are covering this and why it's so important, as we say so often, to follow these leading voices in AI.
[00:29:33] Mike Kaput: I mean, there's relatively few people like Demis driving this type of AI innovation, and he's been here since the beginning. So, hearing what he has to say is not just an exercise in listening to someone making, you know, a public or PR appearance, but you can really get clues to what the future holds, it sounds like.
[00:29:54] Paul Roetzer: Yeah, and again, it's one perspective, but for me, Demis has always been [00:30:00] sort of the leading perspective. So, you know, while we listen to and comment on all these other AI leaders, I've just always found him to be, like, the most authentic, and someone who I felt was truly just pursuing knowledge and intelligence and answers.
[00:30:18] Paul Roetzer: And I don't feel like he's ever
[00:30:21] Paul Roetzer: promoting anything. Like I, I just, he's a researcher who became a CEO, but I think at his core, he's still just trying to find answers to things.
[00:30:30] Mike Kaput: All right. So our third big topic this week: it is that time of year again in the U.S. Another academic year is getting underway, and kids and young adults are all headed back to school.
[00:30:45] Mike Kaput: Now, as they do, we're really expecting AI usage in school to again be a hot topic, because schools still appear to be grappling with how to handle AI's disruption to classwork, homework, and traditional methods of [00:31:00] evaluating students. It still seems some schools are fearful of the technology, many don't fully understand it, though some are embracing it to enrich the learning experience. The process, however, is unfortunately a bit messy. I mean, while AI is rapidly improving and becoming ubiquitous in students' lives and the lives of their parents, teachers, and administrators, it's also creating a bunch of challenges that schools appear to need to solve relatively quickly, especially given the pace of innovation we've been discussing in these topics.
[00:31:35] Mike Kaput: So, Paul, you had recently posted about this topic on LinkedIn, saying of your kids, who are 11 and 12, quote,
[00:31:43] Mike Kaput: Omnipresent AI will be their reality. Students today in middle school, high school, and college will never know a professional world in which AI isn't infused into everything they do, regardless of the career path they choose.
[00:31:56] Mike Kaput: What does this mean? What does this mean for schools, and what's kind of [00:32:00] your take on where schools are at with AI, where they need to be going, and the future of all this in education?
[00:32:06] Paul Roetzer: Yeah, so this, this was kind of triggered by a couple of things. So one, our friend David Meerman Scott had shared an article that he had recently written and he tagged me on Twitter, and it kind of got me thinking about this time of year last year. I think we did an episode on AI policies in schools, and I'd put on LinkedIn at that time, asking if parents had
[00:32:27] Paul Roetzer: gotten AI policies from their schools yet. And so I feel like last school year, 23 to 24, was this very messy period where schools all of a sudden felt like they had to do something, and generally it seemed as though most schools defaulted to this approach of letting the teachers decide individually what they do in their classrooms and whether or not they teach it.
[00:32:50] Paul Roetzer: But then there was no preparation, again, broadly speaking, to give teachers the knowledge they needed to make that decision. It was just kind of like, Hey, we have to [00:33:00] say something. So it's plagiarism if they use ChatGPT to write something. And otherwise it's up to teachers if they want to do this. So I started thinking this kind of Saturday morning, I think I wrote this on a Saturday or Sunday morning.
[00:33:12] Paul Roetzer: And like my kids don't start back for like three more weeks, but I know other kids are starting back to school already. So I thought, wow, okay, we're back in this situation again. So here we are a year later, maybe policies have advanced. Maybe there's been some education and training provided to teachers at different levels.
[00:33:29] Paul Roetzer: I mean, certainly there's a ton of people in my network, like on LinkedIn and in our Institute community, who are professors, who are academics, who are doing incredible things. So this is by no means meant to be a universal statement about what's happening. There are individual teachers, professors, and academic leaders who are very proactively trying to solve for this.
[00:33:51] Paul Roetzer: But broadly speaking, as a parent of two middle school children, and my children go to a wonderful school, they're [00:34:00] not taught this stuff. And there's this fine line I have to walk as a parent who understands deeply what's possible: what do I teach my kids and when? How do I empower them to use these tools?
[00:34:13] Paul Roetzer: And so I've been very selective in doing it, but I've found in recent months, especially over the summer, I've been way more proactive in integrating AI tools into their learning. And
[00:34:25] Paul Roetzer: so not just their learning, but just their overall experiences. Like my kids have incredible imaginations. They, it's, it's still amazing to me, like how their minds work, how they invent characters and stories and, and games and things like this.
[00:34:41] Paul Roetzer: And so I've started using these tools to assist them. So, like, my son was trying to create, like, names for Pokemon characters. And so he had these ideas of what this Pokemon would do. And so we used ChatGPT and, like, okay, what are some other names we could come up with? And then we'd be like, okay, what could this character look like?
[00:34:56] Paul Roetzer: And we would go in and do things like that. you know, I've [00:35:00] taught them how to use perplexity, but how to use it where they're evaluating and verifying the sources of the information. So, I mean, my, my daughter could tell you this over and over again, like, every time she comes to me or something, it's like, dad, I vetted the sources.
[00:35:13] Paul Roetzer: Here's where they're from. Like, I double checked it. And so, like, they already do this at this age. And so my whole point of this post on LinkedIn was, like, as parents,
[00:35:25] Paul Roetzer: we have an obligation to prepare our kids for the future, whether their schools are ready to do it or not. It is more likely than not that the school your kids go to, whether it's middle school, high school, maybe into college, isn't going to be overly proactive.
[00:35:43] Paul Roetzer: They are more likely to say it is cheating or plagiarism, or that they're not allowed to use these tools. And I feel like that's just the wrong approach. I feel like when they get out of school, or when my kids go to high school, or when they, you know, go on to college, they're going [00:36:00] to be required to use these tools one, two, three years out.
[00:36:03] Paul Roetzer: So why would we prevent them from learning these things? And so, like I put in the post, what I tell my kids all the time is these tools are never a replacement for your own imagination, creativity, or critical thinking. And so when they use them, they will actually come back to me and say, well, here's what I did first.
[00:36:22] Paul Roetzer: And then I used it to do this. So like they get it and they're 11 and 12. Like these kids at all levels, if you teach it properly, they will learn how to be responsible in their use. If you tell them, no, they're just going to find ways to do it without telling you they did it. And
[00:36:39] Paul Roetzer: so I, I feel like, you know, as a parent, it's just being proactive, accepting that your children are going to use these tools.
[00:36:46] Paul Roetzer: These tools are going to be embedded into the social platforms they have access to, or eventually have access to, whether it's an Instagram, or a Snapchat, or a WhatsApp, or like, whatever these kids are using these days. The tools are going to be there anyway.
[00:36:58] Paul Roetzer: Yeah. Grok,
[00:36:59] Paul Roetzer: And if they're on [00:37:00] Twitter, like, so I just feel like as parents, we need to be, you know, responsible for being proactive and integrating it into their lives.
[00:37:07] Paul Roetzer: And then ideally academics, if you're involved at any level, again, raise your hand and be the one that says, like, okay, let's get our teachers educated. And that was kind of the plea I made in the post: AI literacy for teachers, like, teach the teachers. If we're going to put this responsibility on teachers at all levels to be the ones to make these decisions, at least give them the knowledge of what it is they're deciding.
[00:37:31] Paul Roetzer: So yeah, it's kind of, like, I don't know, it's like a soapbox thing, but I mean, thank you to everyone who commented on it. The post has almost 10,000 impressions and
[00:37:41] Paul Roetzer: dozens of incredible comments. So, you know, I appreciate all that, and go in and look at the comments. There's
[00:37:50] Paul Roetzer: comments from educators, there's comments from parents.
[00:37:57] Mike Kaput: Might be a worthwhile experiment or follow [00:38:00] up is just grab all the comments, drop them into ChatGPT and pull out some themes and sentiment. Be curious to see that.
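Mike's suggested follow-up can be sketched as a small script: gather the LinkedIn comments and assemble a single prompt asking an LLM for themes and sentiment. The function name, budget, and comment text below are made up for illustration, and the actual model call is intentionally omitted.

```python
# Sketch: build one analysis prompt from a list of comments, trimming to a
# rough character budget so it fits a model's context window. Hypothetical
# helper; you would pass the returned string to ChatGPT (or any LLM).

def build_analysis_prompt(comments, max_chars=12000):
    body, used = [], 0
    for i, c in enumerate(comments, 1):
        line = f"{i}. {c.strip()}"
        if used + len(line) > max_chars:
            break  # stop once the budget is exhausted
        body.append(line)
        used += len(line)
    return (
        "Below are comments on a LinkedIn post about AI in schools.\n"
        "Identify the main themes and the overall sentiment of each theme.\n\n"
        + "\n".join(body)
    )

comments = [
    "As a teacher, I agree we need AI literacy training first.",  # invented examples
    "My district still bans ChatGPT outright.",
]
print(build_analysis_prompt(comments).splitlines()[0])
```

From there, the response could be skimmed for recurring themes, which is essentially the experiment Mike proposes.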
[00:38:07] Paul Roetzer: Yeah, that's a good thought. And again, we'll probably come back to this throughout the year. And
[00:38:12] Paul Roetzer: I don't know, maybe at some point we do a special episode, like deep dive on this, cause I just think it's so relevant to all of us. We all have kids or grandkids or nieces and nephews or whatever it is.
[00:38:22] Paul Roetzer: and this is just a real issue that's going to affect everyone in society. So yeah, it's an important one.
[00:38:28] Mike Kaput: Yeah. I'd probably argue most educational outfits or organizations, you're essentially at the front lines of this stuff, because there's no delay in, you know, adopting the technology from your board or your executives, or a long rollout like in the enterprise.
[00:38:44] Mike Kaput: It's like, you can use these tools immediately to very, significantly disrupt how education works.
[00:38:50] Paul Roetzer: Yeah, and I'll again make the plea I've made before: the government's got to get involved here. We need more funding for AI education. If I was a big tech company, if I was a [00:39:00] Google, Microsoft, one of those players, I would be seriously thinking about earmarking funding to accelerate this stuff across, you know, different countries, because I just feel like we're building these tools.
[00:39:12] Paul Roetzer: We have to prepare people for the future with them, and we just need more big picture thinking, I think, overall about how to accelerate AI literacy.
[00:39:21] Mike Kaput: All right, let's dive into some rapid-fire topics. First up is another really interesting, important, kind of deep topic: can large language models, LLMs, come up with new ideas and innovate?
[00:39:36] Mike Kaput: That's a question we've been kind of increasingly thinking about as we see AI with new capabilities and increased reasoning power coming out. Because in theory, LLMs can only make predictions and create outputs based on their training data, which largely comes from existing human knowledge and content, like we talked about in the case of Google.
[00:39:57] Mike Kaput: So many researchers, among them [00:40:00] Yann LeCun at Meta, believe that AI is essentially incapable of original thoughts or ideas. But others, like researcher Erik Brynjolfsson, we've talked about multiple times on this podcast. Note that this opinion
[00:40:13] Mike Kaput: Opinion may actually misunderstand the nature of a lot of innovation.
[00:40:18] Mike Kaput: Brynjolfsson recently posted, for instance, the following about this topic, saying, quote, a lot of people say that LLMs merely recombine existing content and don't create anything truly new. But that misunderstands the nature of innovation. Most innovation is recombinant innovation. And the scope for additional recombinations is staggering.
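Brynjolfsson's point about the "staggering" scope of recombination can be made concrete with a quick count; the pool of 1,000 concepts below is an arbitrary assumption for illustration.

```python
# Recombinant innovation, counted: even a modest pool of existing ideas
# yields an enormous number of possible pairings and triplets, which is
# the search space an LLM brainstorming partner can help explore.
import math

concepts = 1000  # assume 1,000 existing ideas/technologies
pairs = math.comb(concepts, 2)
triples = math.comb(concepts, 3)
print(pairs)    # 499500 two-way combinations
print(triples)  # 166167000 three-way combinations
```

Growing the pool even slightly makes the combination count explode, which is the "staggering scope" the quote refers to.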
[00:40:40] Mike Kaput: Paul, I know you have a lot
[00:40:41] Mike Kaput: of thoughts on this topic and kind of wrote out some things about it. Like, could you walk us through the nuance here? Can AI innovate or can't it?
[00:40:50] Paul Roetzer: Yeah, so I've said before, just a little background context here, like, one of the ways that I stay, like, real time with what's [00:41:00] happening in AI is notifications on X. So I get notifications from both Yann LeCun and, and Erik Brynjolfsson, the economist. And so when those two were talking with each other, I saw this thread.
[00:41:10] Paul Roetzer: And so this was like Friday morning, I think. And so that thread is what then inspired me to go and, like, put these thoughts on, on LinkedIn. So that's kind of how a lot of this stuff happens. So I'll just, I'll read the couple last paragraphs from the post, cause I think it sort of summarizes it. And again,
[00:41:26] Paul Roetzer: you know, thanks to everyone who commented on this one. There were some great comments on the LinkedIn post related to this one. So
[00:41:33] Paul Roetzer: I said, I think what sometimes gets lost in the technical part of the debate is whether or not it matters in the business world if AI is truly coming up with new ideas. Most innovation in business is just connecting dots from existing information and knowledge that are seemingly unrelated, and figuring out how to build products and brands around them.
[00:41:50] Paul Roetzer: My experience with large language models as brainstorming partners is that they are often far more capable than many senior strategists I've worked with in my career, [00:42:00] except you, Mike,
[00:42:00] Paul Roetzer: like, you know, Oh, It's smarter than me.
[00:42:03] Mike Kaput: I already
[00:42:03] Paul Roetzer: No, no, Mike and I have worked together now for 13 years, I think. And Mike is one of the best strategists I've worked with.
[00:42:10] Paul Roetzer: So, okay,
[00:42:12] Paul Roetzer: but these, these models are only getting smarter. So while we may need additional technical breakthroughs before AI can discover or invent new things that weren't in the training data, as Yann LeCun would sort of propose, my call to people was don't wait for that day to leverage the power of large language models to drive innovation in your business, because they are more than capable of that now.
[00:42:35] Paul Roetzer: So if you are not using these things as a strategist, as a planning partner, do it. Like, I do it all the time. It is one of the primary use cases for me as a business leader. As a strategist, I always use them as a sounding board when I'm developing ideas, even if I think I'm, like, locked in and I've got it.
[00:42:56] Paul Roetzer: I will vet that idea with the model, and I almost [00:43:00] always start with my own ideas, my own kind of outlines for things, and then I'll just kind of bounce it around, and I will do it in Gemini, ChatGPT, and Claude. Like, I will use all three of them with the same prompts, and I'll just kind of collect those ideas and then use that to develop and drive my own innovation and new ideas.
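The workflow Paul describes, sending the same prompt to Gemini, ChatGPT, and Claude and collecting the ideas, can be sketched as a simple fan-out. The model callables below are stand-ins; in practice each would wrap the respective API.

```python
# Minimal sketch of a same-prompt, multi-model brainstorming fan-out.
# The "models" here are stub functions for illustration only.

def fan_out(prompt, models):
    # models: dict mapping a label to a callable that takes the prompt
    # and returns that model's response text.
    return {name: ask(prompt) for name, ask in models.items()}

models = {
    "gemini": lambda p: f"[gemini] ideas for: {p}",
    "chatgpt": lambda p: f"[chatgpt] ideas for: {p}",
    "claude": lambda p: f"[claude] ideas for: {p}",
}

results = fan_out("positioning for our new course launch", models)
for name, answer in results.items():
    print(name, "->", answer)
```

Collecting the three answers side by side is what makes it easy to compare framings and then synthesize your own direction, which is the "sounding board" use case.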
[00:43:18] Mike Kaput: You know, it's funny you mention that, because recently, especially with ChatGPT voice mode, I've just been putting in AirPods and having it talk me through, and talking it through, say, a presentation I'm creating, a document I'm writing. You're almost, like, pair programming, but for knowledge work, for strategy.
[00:43:37] Mike Kaput: And it's remarkable.
[00:43:39] Paul Roetzer: And, and to your point, like, even if it's only on par with, like, a good strategist, which it's not, like, I think it's above that, but let's say that's all it was. It's on call for you 24/7. Like, if you're, you know, walking, you're in the car, laying in bed, like, whatever you're doing, you can have a conversation with a [00:44:00] good to great strategist about anything you're working on.
[00:44:03] Paul Roetzer: That alone is worth the 20 bucks a month. Like, again, you go back to this, like, is it worth the money? I'm like, yeah, it's worth the 20 bucks for each of them to just have that alone as a use case.
[00:44:15] Mike Kaput: Alright, another big topic we're tracking is that Google has struck a deal with Character AI, which is a startup known for its AI avatar technology.
[00:44:25] Mike Kaput: So, this move includes a licensing agreement and the acquisition of some top talent from Character AI. Now, Character AI, for anyone who's totally unfamiliar, was
[00:44:37] Mike Kaput: founded in 2022.
[00:44:38] Mike Kaput: It made a huge name for itself, allowing users to create AI avatars. It's extremely popular. The company saw rapid growth. It reached a valuation of a billion dollars in March 2023.
[00:44:51] Mike Kaput: They're now, however, making a strategic shift, because they're now going to license their technology to Google. And Character AI's founding [00:45:00] team, which includes some significant names you're about to hear, Noam Shazeer and Daniel De Freitas, along with some members of the research team, are also joining Google. Now, Shazeer was previously an engineer at Google, heavily involved in some of the earlier fundamental research on AI that made today's generative AI possible.
[00:45:21] Mike Kaput: He'll be rejoining the company as part of the DeepMind team. And basically, this marks a big change in how Character is approaching the market. They used to build their own models, but now they're going to leverage pre-trained models and focus on post-training capabilities. Now, what nobody has really confirmed here is whether this deal is considered an acquisition.
[00:45:45] Mike Kaput: Sources close to the agreement have stated that it's not an acquisition or an acqui-hire, though it certainly does sound like one. So Paul, I think what would be important here is to kind of maybe just unpack what Character [00:46:00] AI being a part of Google essentially means now. What does it also mean for consumer behavior?
[00:46:06] Paul Roetzer: Yeah, so I, I mean, I think one of the big things is the distribution. So, you know, as, as Google and OpenAI, and I believe OpenAI was trying to, you know, get, get Character.ai too, I think everybody was trying to get at them, basically. We always talk about this idea of distribution. So, you know, Demis has talked about this in the podcast, this idea that,
[00:46:29] Paul Roetzer: You know, if you're just a research lab, you don't have products to test your models with, like you don't have that distribution where now the new Pixel phone has Gemini baked into it. So they have like this massive distribution, how many, you know, Gemini phones there are, whatever, but like, let's say it's a billion users.
[00:46:49] Paul Roetzer: They can now push models out. to a billion people. And so character is potentially like a whole nother generation of hundreds of millions of users. So Bill Walsadu, who's a [00:47:00] former Googler and host of the TED AI show and also a scout for Andreessen Horowitz, he tweeted, that I thought was really smart, he tweeted some key points here.
[00:47:09] Paul Roetzer: So he said first, and we'll put the link to this tweet in the show notes, he said, Character AI is now number four on the iOS App Store in entertainment and number 28 overall. Over half its users are Gen Z, 18 to 24, averaging two hours daily on the app. This is mind-boggling to me. So what that means is, like, these people are spending all this time interacting with AI characters, not real people.
[00:47:35] Paul Roetzer: The tweet continues, number two, as these character role-playing experiences go multimodal (audio, video, screen share), they'll become serious substitutes for time spent on Netflix and Discord. Number three, in response, movie and TV properties will likely create their own character experiences on these platforms to continue engaging with their audience.
[00:47:57] Paul Roetzer: So now you're starting to think about, like, the impact on brands. It's like, oh [00:48:00] wow, we gotta, we gotta be in this world, basically.
[00:48:04] Paul Roetzer: Then number four, what do you think the experience of Project Astra, which is DeepMind and Google, or GPT-4 plus Advanced Voice and Vision, will bring to creative experiences? Number five, they'll surely be helpful real world assistants, but I think we're on the cusp of experiences that go far beyond AI
[00:48:22] Paul Roetzer: waifus. W-A-I-F-U-S. I don't know that word.
[00:48:27] Paul Roetzer: look that one up.
[00:48:27] Mike Kaput: It's like a slang term. Like, I'm looking it up right now in ChatGPT, but
[00:48:35] Mike Kaput: I've, I'm feeling very uncool right now that
[00:48:37] Paul Roetzer: I don't know
[00:48:38] Mike Kaput: Actually, I think it's kind of important in this context, because a waifu is, like, basically a term originating from Japanese anime, and it refers to, basically, like, a female character you have a crush on, or something.
[00:48:51] Mike Kaput: It's like an AI, I don't know if it's strictly romantic, but it's your AI girlfriend.
[00:48:58] Mike Kaput: basically. [00:49:00] Yeah,
[00:49:04] Paul Roetzer: We're on the cusp of experiences that go far beyond AI waifus and well into lightly interactive entertainment territory. And then number six was, really makes you wonder what new consumption experiences are waiting to be discovered. So again, part of the thinking of this show is to present this information because, as a business leader, as a brand leader, as a marketer, we may be talking about entirely new experiences. And this is a company that was seeing significant growth and adoption that now is in partnership with Google, whatever we want to call this, which not only then enables them to distribute these models more widely, but this whole multimodal concept comes into play.
[00:49:46] Paul Roetzer: And then what does that do to entertainment and, and communications? It's, it's just wild. And these are the kinds of updates where you could step back and think, wow, probably a year from now we're going to look back and think that probably should have been a main topic, kind of [00:50:00] thing.
[00:50:00] Mike Kaput: Yeah, no, I love that. And it's so important, because I would pretty strongly argue that people often don't think of Character AI as this hugely popular platform and modality, oftentimes just because of generational things. You know, we're not 18-year-olds who are using this stuff.
[00:50:20] Mike Kaput: And it really is a huge sea change in how people behave, it seems.
[00:50:24] Paul Roetzer: Yeah, definitely.
[00:50:25] Mike Kaput: Okay, next up, the former Google CEO and chairman, Eric Schmidt, has ignited some controversy with some recent comments about AI, startups, and intellectual property. So during a talk at Stanford, Schmidt suggested that entrepreneurs could use AI to rapidly create and iterate on products, even if it means potentially infringing on existing intellectual property.
[00:50:51] Mike Kaput: He gave an example of using large language models, for instance, to create a TikTok clone. You could tell your model, he said, quote, make me a copy of TikTok, [00:51:00] steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines.
[00:51:10] Mike Kaput: So, a pretty stunning statement about what may be possible with LLMs, but also, it's kind of what he said next that really got people talking, because he advised that, say, if a product like this took off, the next step would be to, quote, hire a whole bunch of lawyers to go clean up the mess. He then added, quote, but if nobody uses your product, it doesn't matter that they stole all of the content.
[00:51:34] Mike Kaput: He says, quote, Silicon Valley will run these tests and clean up the mess, and that's typically how those things are done. Now, the remarks were made during a recorded talk at Stanford that has since been removed from the university's YouTube channel, given the controversy around the statements. So Paul, we've talked about this issue basically in one way or another every single week, and it kind of seems like Schmidt is just saying the quiet part [00:52:00] out loud. Like, is this really how Silicon Valley AI startups are looking at this stuff?
[00:52:05] Paul Roetzer: Yeah, and the interview was being conducted by Erik Brynjolfsson, who we previously mentioned, at Stanford. Yeah, and Erik reminded Eric Schmidt
[00:52:13] Paul Roetzer: at one point, hey, like, there are cameras in the room, and Schmidt didn't seem to register it at the time. Then there's
[00:52:22] Paul Roetzer: one point in the video where he kind of realizes that there are actually cameras in the room and everything he's saying is being recorded.
[00:52:29] Paul Roetzer: And I think that was the moment when you knew that video was not going to be online much longer. So yeah, there are still clips of this if you want to go find them, but the original source has definitely taken it down. So this is one of those moments where everything we've sort of talked about on this show is validated when it comes to this stuff.
[00:52:49] Paul Roetzer: We talked about NVIDIA last week, I think, in the episode, you know, taking this stuff, we talked about Gen 3 from Runway and how they, you know, stole, well, took a bunch of copyrighted [00:53:00] material to train their models. And yeah, he's basically just saying the quiet part out loud of like, this is what we do in Silicon Valley and the lawyers will clean it up.
[00:53:10] Paul Roetzer: But everybody's doing it, so go do it. And if you're not successful, no one's going to care. If you are successful, you'll be able to afford the lawyers to do it. This is what we talked about with Grok, too.
[00:53:21] Paul Roetzer: This is,
[00:53:21] Paul Roetzer: just take it, do it, put it out there, no one else is doing it. I'm Elon Musk, and we got the money, like, just go.
[00:53:27] Paul Roetzer: Like, we just raised six billion, we put, you know, a billion aside for legal fees, maybe, and let's just go. So yeah, like it or not, this is how the world of AI startups and AI frontier model companies works.
[00:53:43] Mike Kaput: So next up, a prominent AI company called Sakana AI has unveiled what they're calling the AI Scientist, which they say is a comprehensive system for fully automated scientific discovery. Now, the platform, which was developed in [00:54:00] collaboration with researchers from the University of Oxford and the University of British Columbia,
[00:54:05] Mike Kaput: could actually mark a significant leap towards enabling AI to conduct its own research. The AI Scientist is designed to automate the entire research life cycle: it'll generate novel ideas, then write code to execute experiments and produce full scientific manuscripts. One of the most interesting features of the AI Scientist is its automated peer review process.
[00:54:29] Mike Kaput: The company says the system can evaluate generated papers with near-human accuracy, provide feedback, and even improve results. So basically it can start building on its own work. And the company also says that developing each idea into a full paper is very, very cost-effective: it can cost just 15 bucks to do it.
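The research loop described above, generate an idea, run an experiment, write it up, review it, and feed the review back in, can be sketched with stubs. Every function name and return value below is a placeholder for illustration, not Sakana's actual code.

```python
# Stub sketch of an automated research loop: idea -> experiment -> paper
# -> review, with each review added to the history the next idea draws on.

def generate_idea(history):
    return f"idea-{len(history) + 1}"  # placeholder: would be LLM-generated

def run_experiment(idea):
    return {"idea": idea, "result": "toy result"}  # placeholder experiment

def write_paper(experiment):
    return f"paper on {experiment['idea']}"  # placeholder manuscript

def review(paper):
    # Sakana claims an automated reviewer with near-human accuracy;
    # here it is just a stub score.
    return {"paper": paper, "score": 7}

history = []
for _ in range(3):  # each iteration can build on prior reviews
    idea = generate_idea(history)
    paper = write_paper(run_experiment(idea))
    history.append(review(paper))

print(len(history), history[-1]["paper"])
```

The point of the loop structure is the closed feedback cycle: because reviews land back in the history the idea generator reads, the system can, in principle, build on its own prior work.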
[00:54:51] Mike Kaput: Paul, this sounds exactly like what some people in the AI community have been predicting or anticipating: AI that can [00:55:00] do novel scientific research, which in turn leads to AI being able to advance the field of AI. Is that what we're seeing here?
[00:55:08] Paul Roetzer: So, there's definitely a part of me on this one that would caution that this might
[00:55:15] Paul Roetzer: be hype, more, more hype than reality at the moment. Doesn't mean it's not going to be possible that the research direction isn't, heading in a very promising area, but I wouldn't assume that next month we're all going to have the ability to go do this kind of stuff.
[00:55:32] Paul Roetzer: So my initial reaction is probably more hype than reality. That being said, this is a company that was formed in January 2024, so they're only about eight months old. They got a 30 million dollar initial round. The founding team includes David Ha, formerly of Google Brain, and Llion Jones, who was one of the co-creators of the transformer,
[00:55:57] Paul Roetzer: so the paper behind GPT. They raised [00:56:00] another hundred million dollar round in June, so six months after founding. Jeff Dean, Google's chief scientist, is one of their angel investors.
[00:56:09] Paul Roetzer: So, initial reaction is, eh. Then you drill in a little bit and say, oh, okay, like, this is interesting. So when they announced the company, they said the main focus of their research and development is new kinds of foundation models based on nature-inspired intelligence.
[00:56:26] Paul Roetzer: The name Sakana is derived from the Japanese word for fish. And then they have a logo where a school of fish is basically swimming together and forming a coherent entity, and there's a red fish that's swimming away from everyone. And that's them. It's this idea of, like, we're going to go in a different research direction.
[00:56:43] Paul Roetzer: And so they talk about evolution and collective intelligence being a key part of the research. So, I don't think this changes anything in the near term for people. I think it is promising research, but I would absolutely pay attention to this company [00:57:00] and what they do next, because I think they're going to be major players based on who they've gotten funding from and who is involved in the company.
[00:57:08] Paul Roetzer: And based on the trend line of the last six months, probably an acqui-hire by somebody in the next six months.
[00:57:16] Mike Kaput: So next up, California lawmakers have made some significant amendments to SB 1047. This is a bill we've talked about a bunch: a California Senate bill that is a bit controversial and is aimed at trying to prevent serious safety issues from AI and AI companies. The changes come in response to pressure from Silicon Valley, including some suggestions from Anthropic, and they represent a pretty significant softening of the bill's original strict rules.
[00:57:47] Mike Kaput: A couple of the changes include a reduction in government power. The California Attorney General can no longer sue AI companies for negligent safety practices before a catastrophic event has occurred. The bill no [00:58:00] longer creates what was called the Frontier Model Division, or FMD, a new government agency.
[00:58:05] Mike Kaput: However, it does still establish the Board of Frontier Models, now expanded to nine members. Safety requirements have also been relaxed: AI labs are no longer required to submit certifications of safety test results under penalty of perjury. And there's a new protection for open-source fine-tuned models: if someone spends less than 10 million dollars fine-tuning a covered model, they are explicitly not considered a developer under SB 1047.
[00:58:34] Mike Kaput: So, this seems promising if you are someone who is against the extreme safety measures in the bill, but not everyone is satisfied. There have been growing calls from congresspeople from California for Governor Gavin Newsom to veto the bill if it lands on his desk, and AI leaders like Yann LeCun appear to support this position, all over ostensibly fears that these rules go way too far and will stifle [00:59:00] innovation. So, Paul, it sounds like Silicon Valley is kind of starting to throw its weight around with this bill. Like, what can we learn from this as it stands at the moment?
[00:59:10] Paul Roetzer: I mean, we've talked about this bill quite a bit, but a lot of times we're just sort of like, here's the quick update, here's the quick update, because it does seem like it's gained a lot of traction and it could be a prelude to other laws and regulations in the United States.
[00:59:22] Paul Roetzer: So I'll just give a quick counter here.
[00:59:25] Paul Roetzer: So, a letter was released, this was on August 7th, and I don't think we covered it at the time, but it has four authors: Yoshua Bengio, who's a Turing Award winner; Geoff Hinton, who we've talked about, a former Googler and also a Turing Award winner; Hinton and LeCun actually won the Turing Award together, the same year.
[00:59:47] Paul Roetzer: Lawrence Lessig, who is a Harvard Law School professor and founder of Creative Commons, which I did not know. And then Stuart Russell, a computer science professor at UC Berkeley and director of the Center for Human-Compatible AI, and also an author of [01:00:00] AI books we've read. So they published a letter, and I'm just going to read a couple of excerpts because this is
[01:00:06] Paul Roetzer: the counterpoint stating why this should exist.
[01:00:10] Paul Roetzer: So there are lots of voices, like Fei-Fei Li and Andreessen Horowitz and others, who are like, this is ridiculous and it's going to destroy innovation. This is the counterpoint from some of the leading minds in AI.
[01:00:21] Paul Roetzer: So it says, as senior AI technical and policy researchers, we write to express our strong support for California Senate Bill 1047. Throughout our careers, we have worked to advance the field of AI and unlock its immense potential to benefit humanity.
[01:00:35] Paul Roetzer: However, we are deeply concerned about severe risks posed by the next generation of AI if it is developed without sufficient care and oversight. SB 1047 outlines the bare minimum, they say, for effective regulation of this technology. It doesn't have a licensing regime, doesn't require companies to receive permission from a government agency before training or deploying a model, it relies on company self-assessments of risk, and it doesn't even hold companies strictly liable in the event that [01:01:00] a catastrophe does occur.
[01:01:02] Paul Roetzer: They say some investors have argued that SB 1047 is unnecessary and based on science fiction scenarios. We strongly disagree. The exact nature and timing of these risks remain uncertain, as we talked about in the situational awareness episode where we went through that report. But, as some of the experts who understand these systems most, we can say confidently that these risks are probable and significant enough to make safety testing and common-sense precautions necessary.
[01:01:30] Paul Roetzer: So, we'll drop the link to that letter in the show notes. You can go read it for yourself. But again, it's important to be informed on both sides. Mike and I oftentimes sit in the middle and try and understand both perspectives without offering, like, a strong opinion on which one we think is necessarily correct.
[01:01:46] Paul Roetzer: And this is definitely one of those where I just keep taking in new information and listening to both sides and trying to see what is reasonable, but more just observing where this ends up going, because I think it's going to have an effect on a lot of things. One of the big issues is this [01:02:00] isn't just for models developed in California.
[01:02:01] Paul Roetzer: This is for models that are used in California. So it would have widespread effect. It's not just a California law that would only affect California companies.
[01:02:13] Mike Kaput: So in another big topic this week, there's a new study out that reveals a bit of a striking shift in how the largest U.S. companies perceive AI. According to some research from a company called Arize AI, as reported by the Financial Times, 56 percent of Fortune 500 companies now cite AI as a, quote, risk factor in their most recent annual reports.
[01:02:40] Mike Kaput: Arize AI, I believe, is in the business of analyzing and processing these types of reports.
[01:02:45] Mike Kaput: This is a huge increase because back in 2022, just 9 percent of companies held that view. And the perception of AI
[01:02:53] Mike Kaput: risk is pretty pronounced in certain sectors. Over 90 percent of media and entertainment companies and 86 percent [01:03:00] of software and technology groups express concerns about AI. And companies are concerned about a range of things.
[01:03:06] Mike Kaput: These include increased competition, reputational and operational issues, ethical concerns, financial risks due to unpredictable costs and investment needs, and legal and regulatory uncertainties. So, Paul, like we always say, you have to take every bit of research you read with a grain of salt, but it's pretty interesting to see the huge jump from the numbers this research was showing in 2022.
[01:03:33] Mike Kaput: Just 9 percent back then saw AI as a risk factor, versus 56 percent today. Does that align with what you're seeing and hearing in the market?
[01:03:42] Paul Roetzer: Yeah, I just think that people are a couple of years behind. So back in May of 2022, I wrote that article, The Future of Business Is AI or Obsolete. And the basic premise of it was, I said, with each day that passes and each advancement in artificial intelligence, language, and vision technology, it is becoming more apparent [01:04:00] that there will be three types of businesses in every industry: AI native, AI emergent, and obsolete.
[01:04:05] Paul Roetzer: So I think the business world is just finally realizing that this is the future. You either become AI emergent or you get disrupted. And as the AI advances, as the cost of that intelligence plummets, it becomes easier to innovate, but it also becomes easier to disrupt. And so that's a challenging position for a lot of organizations that can't move quickly, as they all of a sudden see the threat that these other organizations bring.
[01:04:34] Paul Roetzer: And even in that Eric Schmidt interview, one of the things that was taken down when they took the original video down: they asked, how could someone like OpenAI take on Google? And he said, which I think he regretted was recorded, that they prioritized work-life balance and they let people work from home.
[01:04:50] Paul Roetzer: In essence, they lost their edge. And that's the fear that big companies have: when you get bigger, you have obligations to your employees, to your [01:05:00] stakeholders, that sometimes reduce the way you ran your business before, the way that created a lot of the innovation and drove a lot of the growth.
[01:05:10] Mike Kaput: Mm-hmm.
[01:05:11] Paul Roetzer: And so that's the challenge here: these existing legacy companies across every industry run the risk of being disrupted, because smaller, more nimble, more resource-efficient companies can come up out of nowhere and just build smarter versions of their business.
[01:05:26] Paul Roetzer: And it's going to happen everywhere.
[01:05:29] Paul Roetzer: Law firms, HR practices, consulting firms, research firms, marketing agencies, manufacturing companies, all of them.
[01:05:38] Paul Roetzer: It's a very real thing. This number should be 99 percent. Like, every company should be worried about being disrupted right now.
[01:05:44] Mike Kaput: Alright, next up, HubSpot has released what it's calling AI Search Grader. This is a free tool that shows you how visible your brand is in what they call AI search engines. So you type in your company name, the country you're in, the type of business you are, [01:06:00] and a quick description of your products or services.
[01:06:03] Mike Kaput: HubSpot's tool then scores your sentiment and share of voice within AI search engines. Now, the full report provided by the tool basically breaks down for you how your company is being discussed in things like ChatGPT. So, for instance, we ran Marketing AI Institute through this. The report noted that one of the Institute's strengths was our educational value and insights.
[01:06:27] Mike Kaput: It said something like, quote, practicality is a big win for you. ChatGPT mentions that you provide actionable insights and practical advice multiple times. This includes case studies and real-world examples, which are helpful for businesses looking to adopt AI. So there are a lot of insights like that in the report.
[01:06:44] Mike Kaput: It'll also break down your share of voice compared to other companies doing similar things to you. Right now, it looks like HubSpot is only analyzing ChatGPT responses, but they say they're rolling out more AI search engines, [01:07:00] presumably including Perplexity, soon enough here. So, Paul, trying this out, this seems like one of the first really valuable tools or assessments that I've seen to actually help you understand how you might be appearing in AI results.
[01:07:15] Mike Kaput: Like, is this something companies should be using, or thinking about using?
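For readers wondering what a tool like this might be doing under the hood, here's a toy sketch of a share-of-voice calculation: count how often your brand is mentioned in AI responses relative to competitors. This is purely illustrative and is not HubSpot's actual methodology.

```python
# Toy sketch of how an "AI search grader" might score brand visibility
# in model responses. Hypothetical; not HubSpot's real method.

def share_of_voice(responses: list[str], brand: str, competitors: list[str]) -> float:
    """Fraction of all brand/competitor mentions that belong to `brand`."""
    names = [brand] + competitors
    # Case-insensitive mention counts across every response.
    counts = {n: sum(r.lower().count(n.lower()) for r in responses) for n in names}
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# Example: two imagined model responses about AI marketing education.
responses = [
    "Marketing AI Institute offers practical advice; HubSpot has tools too.",
    "For AI education, Marketing AI Institute is frequently recommended.",
]
sov = share_of_voice(responses, "Marketing AI Institute", ["HubSpot"])
```

A real tool would first gather the responses by prompting the model many times with category-level questions, then layer sentiment analysis on top, but the scoring idea is roughly this simple.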
[01:07:19] Paul Roetzer: Yeah, I think the use case is going to expand. You know, we've always, from an SEO perspective, you always wanted to know how you were ranking in organic search results. And now the question in the future is, how am I showing up within these language models? How am I showing up in Perplexity and ChatGPT?
[01:07:37] Paul Roetzer: And then from there, the question becomes, well, how do we influence that? So if you're HubSpot, and you provide a marketing platform that was built on inbound marketing, which drew people to your website and pushed the idea of creating content to optimize your sites and draw people in, you would want to play in the realm of, well, what if people aren't using organic search as much and they're actually going to these models?
[01:07:58] Paul Roetzer: And so, for people who aren't [01:08:00] familiar, the way HubSpot grew so fast in the early days is that Dharmesh Shah, one of the co-founders, created a tool called Website Grader, and it was this massive lead funnel. They would enable you to go in, put your domain name in, and it would grade your website on these different variables that determine the strength of your website.
[01:08:19] Paul Roetzer: So they're obviously playing on this Website Grader idea and trying to build the next generation of it. I just tried it and got a 404 error, so it might just be temporarily down. But it'll be interesting to see how reliable this kind of thing is. Kudos to HubSpot for another lead magnet effort.
[01:08:39] Mike Kaput: Yeah, they got me pretty easily, but I will say, at least from testing it this morning, I found the full report pretty helpful and valuable. Again, you kind of need to figure out on your own what to do about what it tells you, but it was definitely not just kind of a fluffy lead magnet thing. It was pretty useful, I found.
[01:08:58] Paul Roetzer: And so was Website Grader. [01:09:00] Like, it was a great free tool. It generated millions of leads a year for HubSpot, but it also provided immense value, which is what a great lead magnet is: value creation in exchange for contacts. So that's...
[01:09:14] Mike Kaput: It's a good play. They did mention this morning, when I was looking at it, that there was really high demand for the tool.
[01:09:20] Mike Kaput: So I'm assuming that's what the 404 was.
[01:09:21] Paul Roetzer: Oh, yeah, it'll blow up for sure. They know what they're doing. I mean, that's a smart company.
[01:09:26] Mike Kaput: Our last topic this week is that a group of researchers from OpenAI, Microsoft, MIT, and some other major institutions has proposed a new concept called, quote, Personhood Credentials, or PHCs,
[01:09:42] Mike Kaput: as a potential solution to the growing challenge of AI-powered deception online. So basically, personhood credentials are digital certificates that would allow users to prove they are real people without revealing their identities. And the core idea is to balance the internet's commitment to [01:10:00] anonymity with the need for trustworthy interactions in an era where AI can basically act as a human.
[01:10:07] Mike Kaput: So this includes two fundamental requirements. First, credential limits: each issuer would give at most one credential to an eligible person. And then unlinkable pseudonymity: users could interact with services anonymously through service-specific pseudonyms and essentially be untraceable by the issuer and unlinkable across service providers.
[01:10:29] Mike Kaput: So they argue that this could really help address many key challenges, including fake accounts pretending to be distinct individuals, mitigating bot attacks, and allowing for verified delegation to AI assistants. So, Paul, this may seem a little theoretical, a little sci-fi, but it does seem to be hitting on a really important need: we need to verify who is real in an era where AI can increasingly pretend to be human or be indistinguishable [01:11:00] from humans.
[01:11:00] Mike Kaput: Is that how you're looking at this?
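For the technically curious, the "unlinkable pseudonymity" requirement Mike describes can be illustrated in a few lines of Python: derive a different, stable pseudonym per service from a single user-held secret. Real personhood-credential proposals rely on blind signatures or zero-knowledge proofs; this HMAC derivation is just a simplified sketch of the unlinkability idea, not the paper's protocol.

```python
# Simplified sketch of unlinkable pseudonymity: the user derives a
# different, stable pseudonym for each service from one private secret,
# so services can't correlate the same person across sites.
import hashlib
import hmac
import secrets

def service_pseudonym(user_secret: bytes, service_id: str) -> str:
    # Keyed hash: deterministic for (secret, service), infeasible to
    # link across services without knowing the secret.
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

user_secret = secrets.token_bytes(32)  # held only by the user
pid_a = service_pseudonym(user_secret, "service-a")
pid_b = service_pseudonym(user_secret, "service-b")
# Same service always sees the same pseudonym; different services see
# unrelated-looking values.
```

The piece this sketch omits is the "one credential per person" guarantee, which is exactly where an issuer and the fancier cryptography come in.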
[01:11:01] Paul Roetzer: Well, I have not read this full paper yet. This is one of those where it's just like, I mean, I know we probably need this, but it's really hard for me to accept, and I need to go down this
[01:11:19] Paul Roetzer: rabbit hole to, like, understand it at a deeper level.
[01:11:22] Paul Roetzer: I thought this was like Sam Altman's other company. Worldcoin is designed for, like, scanning your eyeballs to verify your personhood. Yeah, I would just say, unfortunately, the topic of whether or not you're an actual person is going to be relevant down the road, maybe sooner than some of us, including myself, would care to admit.
[01:11:47] Paul Roetzer: And research like this is probably going to become important. And I would imagine this kind of stuff is sitting in the offices of government leaders right now, and they're looking at big [01:12:00] picture things. And, yeah, at some point your driver's license is just one form of verification; you may have some anonymous personhood verification as well. It's a weird,
[01:12:13] Paul Roetzer: weird, weird world we're heading into.
[01:12:15] Mike Kaput: Interesting. In the shorter term, too, I found the point about verified delegation to AI assistants interesting. Like we've talked about, we're already seeing people's AI note takers showing up to meetings. We're not many steps away from realizing that we could have a total AI stand-in for the CEO or someone. But how do I know that's real, or yours?
[01:12:37] Mike Kaput: Lots of challenges.
[01:12:38] Paul Roetzer: Yeah, that goes back to the episode where we were talking about the CEO of Zoom. And by the way, just a heads up:
[01:12:45] Paul Roetzer: if you're a listener and we ever have a Zoom meeting and you send your avatar instead of you to a meeting, we will never do business again. Like, I feel like we're going to have to have these conversations at some point in the future.
[01:12:59] Paul Roetzer: [01:13:00] Like, yeah, you can send your note taker or your avatar, you may have that capability one, two, three years out, but we're going to have to have a conversation around business etiquette when your AI can go somewhere instead of you. And me personally, I don't want to do business with someone's avatar two years from now.
[01:13:20] Paul Roetzer: So I'm just planting that seed now.
[01:13:24] Mike Kaput: Like, if you can't attend the meeting, why are we having the meeting?
[01:13:27] Paul Roetzer: Yeah, yeah. There's going to be an article in the New York Times, like, two or three years from now, about a job seeker who sent their avatar to the interview. Like, it's just going to happen. It's inevitable.
[01:13:39] Mike Kaput: All right. Well, that is a very packed week in AI, Paul. I appreciate you breaking everything down for us. Just a couple quick reminders for everyone. As always, go check out our newsletter at MarketingAIInstitute.com forward slash newsletter. It has all the news that we have been covering this week in AI, [01:14:00] including plenty of stuff that's not in today's podcast episode.
[01:14:04] Mike Kaput: And also, if you can, if your platform allows you to, please leave us some type of review if you have not already. All the feedback is really taken into account to try to make this show as good as possible, and feedback and reviews help us get this into the earphones of more people. So we'd love if you took a minute to do that.
[01:14:24] Mike Kaput: Paul, thanks again.
[01:14:25] Paul Roetzer: Yeah, thanks, Mike. Safe travels. Enjoy the event. We'll catch up with you soon. And thanks, everyone, for listening. We'll talk to you again next week.
[01:14:32] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.
[01:14:55] Paul Roetzer: Until next time, stay curious and explore [01:15:00] AI.