Marketing AI Institute | Blog

[The AI Show Episode 113]: OpenAI’s Upcoming “Strawberry” Launch and $100B Valuation, SB-1047 Passes, Oprah Gets Into AI & Viggle Scrapes YouTube Videos

Written by Claire Prudhomme | Sep 4, 2024 12:15:00 PM

Less than a week remains until MAICON 2024, and the AI news keeps rolling in. Join Paul Roetzer and Mike Kaput as they look into the recent developments at OpenAI, including insights on their "Strawberry" system and their latest fundraising efforts. In our final main topic, listen in to our hosts as they dissect the implications of California's contentious SB-1047 bill on AI development. All this and more in our rapid fire section!

Listen or watch below, then scroll down for the show notes and transcript.

Listen Now

Watch the Video

Timestamps

00:02:15 — The Story Behind MAII and MAICON

00:13:18 — OpenAI Strawberry

00:25:22 — OpenAI Fundraising

00:31:24 — SB-1047 + AI Regulation

00:41:04 — OpenAI + US AI Safety Institute

00:44:16 — OpenAI, Adobe and Microsoft support AI Watermark Bill

00:50:22 — Oprah + AI ABC Special

00:58:07 — AI Changes in Schools

01:04:27 — Gemini Updates

Summary

OpenAI Strawberry

OpenAI is racing to launch a new AI system codenamed "Strawberry," possibly as soon as this fall, according to The Information. Previously known as Q* (pronounced Q Star), Strawberry represents a major leap forward in AI problem-solving abilities, particularly in mathematics and programming.

According to sources involved in the project, OpenAI researchers aim to integrate Strawberry into a chatbot, possibly within ChatGPT, and possibly very soon (as early as this fall, by one estimate).

Strawberry can reportedly solve math problems it hasn’t seen before, something today’s AI tools can’t do. The system can also perform better on more subjective tasks when given additional time to think through problems (for instance, answering questions about a product marketing strategy).

OpenAI employees also demonstrated that Strawberry can solve a complex word puzzle like the New York Times Connections game.

Strawberry also appears essential to OpenAI’s upcoming new flagship model, codenamed “Orion.”

The Information also reports from a source that OpenAI is using a bigger version of Strawberry to generate synthetic training data for the new model. If true, and if it works as intended, this basically means that OpenAI could overcome the limitations inherent in AI training right now—the need to find more and higher-quality real world data to feed models.
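To make the synthetic training data idea concrete, here is a minimal sketch of the general teacher-student generation loop, written against the OpenAI-style chat completions API. The teacher model parameter, prompts, and the filtering heuristic are placeholders for illustration; nothing here is OpenAI’s actual Orion pipeline.

```python
# A minimal sketch of teacher-student synthetic data generation, assuming the
# openai Python SDK (v1). The teacher model name and the quality filter are
# hypothetical stand-ins; real pipelines use verifiers, deduplication, and
# much more aggressive filtering.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_synthetic_examples(prompts: list[str], teacher_model: str) -> list[dict]:
    """Use a stronger 'teacher' model to produce training pairs for a new model."""
    examples = []
    for prompt in prompts:
        response = client.chat.completions.create(
            model=teacher_model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        if answer and len(answer) > 50:  # toy quality gate, illustration only
            examples.append({"prompt": prompt, "completion": answer})
    return examples
```

The appeal of this approach is that the teacher’s filtered outputs become training data the lab never had to collect from the real world, which is exactly the data bottleneck described above.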

OpenAI Fundraising

According to a recent report from the Wall Street Journal, OpenAI is discussing a funding round of several billion dollars that would value the company at over $100B (up from its previous $86B valuation).

The funding round is being led by venture capital firm Thrive Capital, which is expected to invest about $1 billion.

Microsoft, Nvidia, and Apple are also rumored to be interested in the round.

This new round would be the biggest infusion of outside capital into the company since Microsoft invested about $10B in 2023, according to the Journal.

SB-1047 + AI Regulation

California’s controversial AI safety bill, called SB 1047 for short, has cleared both the California State Assembly and Senate, which means it just needs one more process vote before heading to Governor Gavin Newsom, who must then decide by the end of September whether to sign or veto the bill.

SB 1047 would require AI companies operating in California to implement several safety measures before training advanced foundation models. 

These precautions include the ability to quickly and completely shut down a model, protection against unsafe post-training modifications, and maintaining testing procedures to evaluate potential critical harm risks.

The legislation has faced criticism from major players in the AI industry, including OpenAI and Anthropic, as well as politicians and business groups. Critics have argued that the bill focuses too heavily on catastrophic harms and could negatively impact small, open-source AI developers.

In response to these concerns, the bill underwent several amendments. These changes replaced potential criminal penalties with civil ones, narrowed the enforcement powers granted to California's attorney general, and adjusted requirements for joining a "Board of Frontier Models" created by the bill.

Links Referenced in the Show

This week’s episode is brought to you by MAICON, our 5th annual Marketing AI Conference, happening in Cleveland, Sept. 10 - 12. The code POD200 saves $200 on all pass types. 

For more information on MAICON and to register for this year’s conference, visit www.MAICON.ai

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content. 

[00:00:00] Paul Roetzer: The reality is there's always going to be people who take shortcuts. There's always going to be people who are going to use these tools as a way to just get through stuff. And you know what? At the end of the day, they're the people who probably won't advance in their careers. It's always been true that the people who do the hard work, who put in the hours when no one's looking, they're the people who end up being the most successful in their careers.

[00:00:23] Paul Roetzer: And AI is just maybe going to weed other people out faster. 

[00:00:26] Paul Roetzer: I don't know. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:57] Paul Roetzer: Join us as we accelerate AI literacy [00:01:00] for all.

[00:01:04] Paul Roetzer: Welcome to episode 113 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. We are recording this on Tuesday, September 3rd. Seven days before MAICON, Mike,

[00:01:18] Mike Kaput: Seven days. 

[00:01:19] Mike Kaput: Don't remind me. 

[00:01:20] Paul Roetzer: Oh my gosh, there's so much to do in seven days. We have, I have five sessions to get ready for,

[00:01:29] Mike Kaput: Yikes.

[00:01:29] Paul Roetzer: so, I won't, well, I probably shouldn't say out loud that I haven't created the presentation yet for my opening keynote. The workshop one's basically done, and

[00:01:40] Paul Roetzer: the other, yeah.

[00:01:41] Mike Kaput: I mean, the workshops are done, but we have to be pretty timely in AI, like, we can't be making these, like, three months in advance.

[00:01:48] Paul Roetzer: It's true, it's true.

[00:01:49] Paul Roetzer: So this episode is brought to us by MAICON. So again, our fifth annual MAICON is coming up. And to kind of change it up a little bit, this week we've been talking a lot about MAICON. I had put on [00:02:00] LinkedIn a few days ago, sort of the origin story of MAICON. And so we thought maybe we'd kick off today's episode with, with that story, but it's not just the story of MAICON, our conference, but actually kind of the origin of Marketing Institute, which led to the origin of this podcast.

[00:02:15] A Quick Background

[00:02:15] Paul Roetzer: So I figured I'd take a minute or two and just sort of give the background here, Mike, because I think we have so many people who've joined us along this journey at some point in the process, and maybe don't even know how this all started. So I'll just kind of go through what I put on LinkedIn. And then if you have anything to add, Mike, any context to it, because you were there for the whole ride.

[00:02:35] Paul Roetzer: So again, if people aren't familiar, Mike and I worked together at my agency prior to this, so Mike started there in 2011. Mike, was it?

[00:02:43] Mike Kaput: Yeah, I think it was end of 2012.

[00:02:49] Paul Roetzer: So we worked together at the agency. So actually, as I'm telling this kind of quick background story, you can kind of parallel that with Mike's time at the agency and then the work we did together. [00:03:00] So for me, my interest in AI started back in 2011 when IBM Watson won on Jeopardy.

[00:03:05] Paul Roetzer: So that was my spark that got me curious. That following year, in 2012, I read a book called Automate This by Christopher Steiner. And that book told the story of what he called intelligent algorithms. It talked about artificial intelligence, it did categorize it as AI, but basically he would call them bots, these intelligent algorithms being strung together.

[00:03:26] Paul Roetzer: So think of this as AI before Gen AI. And the book talked a lot about the disruption of industries, and the formula that Christopher Steiner gave in the book to know when the next industry would be disrupted was the potential to disrupt plus the reward for disruption. And so that formula stuck with me, and I thought deeply about it.

[00:03:50] Paul Roetzer: And again, if you've listened to the podcast, I guess you could say through the years now, you may have heard me explain this before. But the basic premise is back in 2011, [00:04:00] 2012, when that book was created, and when I was starting to pay attention to AI, there were only so many AI researchers in the world who could build these intelligent algorithms, build these bots for different industries.

[00:04:10] Paul Roetzer: And so the premise went that those people would be concentrated on the industries with the greatest reward for disruption. So Wall Street, as an example, lots of money to be made. So the people who could build things were building AI for Wall Street and similar industries. So I looked out ahead and said, okay, well, at some point marketing is going to be in line, like, marketing will be the next industry at some point.

[00:04:33] Paul Roetzer: And so I became convinced at that time that it was only a matter of time, in my opinion. I thought only a matter of years before the marketing industry was transformed by AI. So then in 2014, I wrote my second book, The Marketing Performance Blueprint. That book was about talent, tech and strategy.

[00:04:50] Paul Roetzer: And so in the technology section there's about a thousand words on my theory of AI and marketing and the opportunity to eventually build what I was calling a marketing [00:05:00] intelligence engine that would use these intelligent algorithms to automate strategy and campaigns. Mike helped me with the research for that section of the book.

[00:05:08] Paul Roetzer: So I pulled Mike in, I was like, hey, this is what I want to look into. I want to go deeper on IBM Watson. And we kind of set this research agenda that became the basis for that section of the book. So that kind of piqued Mike's interest, I think. Mike was paying attention to it at the time, but that was, like, Mike's first real project centered around AI.

[00:05:26] Paul Roetzer: So, published that book, I start doing public speaking about AI. By 2016, 

[00:05:29] Paul Roetzer: I'd become convinced that AI was going to transform businesses, the economy, society. But when I looked around, it just wasn't happening. Like, we weren't seeing these examples of it. So then, in, I think, around summer 2016, I made the decision to create the Marketing AI Institute, and the premise of that institute was to tell the story of AI and to try and accelerate our own understanding of AI and help other people understand it, and then to accelerate adoption.

[00:05:57] Paul Roetzer: So this is six years before [00:06:00] ChatGPT. It's one year before the invention of the transformer by Google Brain. That became the basis for

[00:06:06] Paul Roetzer: ChatGPT. So that was the start of it. It was a DBA, it was just an offshoot basically of my agency, a "doing business as," if you don't know the terminology. So it wasn't its own company yet.

[00:06:17] Paul Roetzer: It was just like a research project within the agency that started sucking resources away from the agency. A lot of my time, my energy, and eventually some more senior people's time and energy as well. So in the beginning of 2019, I split it off as its own company, funded it myself, and made the decision to launch the conference.

[00:06:36] Paul Roetzer: Now, in retrospect, that was a crazy decision. So we had about 7,000 subscribers or email contacts in our CRM at that time. I originally wanted to get to 10,000 before we created a conference. But I made the decision to go ahead and go with it. So the business at the time, as I said, was self funded.

[00:06:55] Paul Roetzer: I don't think I properly estimated how much money [00:07:00] I could lose in that first year. So that self funding thing was kind of challenging at the start. But I believed we were approaching this tipping point, we had to go. So 2019 is the first MAICON. We had 300 people from 12 countries, the business lost hundreds of thousands of dollars that year, but I felt like we'd kind of reached the point where we were building this movement and everything was about to change.

[00:07:23] Paul Roetzer: And so going into 2020, we were all in on AI, and then, as I said in the LinkedIn post, in the interest of a character limit, in this case, the interest of time limit, 2020 pandemic hits, we cancel the event, we are on the hook for all the contracts we had signed with the hotels, the convention center, all this stuff.

[00:07:42] Paul Roetzer: So we just lost a boatload of money and it became dire, financially, for me personally and for the company. But I raised a million dollar seed round that fall to kind of get us through the uncertain years ahead. 2021, MAICON returns as a virtual event. I sold my agency at that time [00:08:00] to focus on AI.

[00:08:00] Paul Roetzer: Mike came with me to the Institute from the agency as our chief content officer for the Institute. 2022, MAICON's back in person. But, people remember, events at that time were kind of slow to return. A lot of corporate budgets weren't there yet for travel. So we only had 200 people that year, lost even more money.

[00:08:19] Paul Roetzer: I think we lost, probably shouldn't even be disclosing this kind of stuff, but

[00:08:23] Paul Roetzer: I think in 2022, we lost $700,000, something like that. So again, it was bad. And so by fall of 2022, the funds were running really, really low. I mean, I had like four months of burn rate left. And so I was in the position where I had contingency plans A through, like, Z basically of how I was going to fund the company.

[00:08:46] Paul Roetzer: Because I was completely convinced that we were on the right path. We just had to survive financially long enough to get there. And then two weeks after I go through all this financial planning and figuring out how I'm going to keep this thing going, [00:09:00] ChatGPT launches. And within 30 days, we started seeing just a crazy uptick in traffic and interest in what we were doing.

[00:09:07] Paul Roetzer: We had launched this weekly podcast by that point, and, you know, we were seeing growth there. And then 2023, the generative AI age emerges. March 2023 is the first profitable month in the seven-year history of the Institute. MAICON that summer draws 700 attendees and we're kind of off and running. And then we get to 2024, we're going to be well over a thousand attendees this year.

[00:09:32] Paul Roetzer: You know, September 10th in Cleveland for our fifth annual MAICON. And so what I said in the post was, like, MAICON is so much more than an event. And again, I understand, like, a lot of people, maybe they come and it's their first experience, they don't know the backstory, or you listen to this podcast every week, but you don't know the backstory.

[00:09:49] Paul Roetzer: At the end of the day, it's a small business with seven people who have just, like, committed over these seven-plus years to build this community, because we [00:10:00] believed in this vision that things were going to transform and we wanted to help people figure this out. And so, you know, really MAICON, the Institute, SmarterX, all this stuff.

[00:10:10] Paul Roetzer: It's, it's really just a testament to what a team of people can do when they believe in an idea and each other. And so. Yeah. I mean, hopefully you'll be with us in Cleveland next week. And now maybe you have a little different appreciation for the story behind the event. but if not, you know, we just appreciate you being along for the journey, whatever that may be, if it's just listening in once a week, then, you know, we're, we're happy to have you with us.

[00:10:31] Paul Roetzer: So yeah, that's kind of like, it wasn't meant to be a main topic per se, but, you know, we thought it might be cool to give a little context as we keep talking about the event to, to help you understand where the event came from. Yeah, the trials and tribulations. I always, I always laugh at that whole idea of like overnight success.

[00:10:49] Paul Roetzer: And, you know, what we've gone through is very representative of what a lot of entrepreneurs go through. And so if you're an entrepreneur yourself, you know the drill, like, you know how uncertain everything is all the [00:11:00] time, the risk you have to take on. If you're not an entrepreneur, but you know one, maybe it gives you a new appreciation for what they go through to build what they build.

[00:11:09] Paul Roetzer: But, you know, I, I believe entrepreneurship is just the fuel to the economy and to, you know, innovation and growth, everywhere in society. And so, you know, it's just a small piece of our entrepreneurial story, I guess, to, to share the context of how we got where we are today.

[00:11:26] Mike Kaput: Paul, that's an awesome summary of where we've been. I mean, when you put it like that, it's like, well,

[00:11:32] Paul Roetzer: It's kind of wild, and that's the thing is, like, Mike and I've been through it all together, like, he knows from the inside. And when you're in it, you just, you believe what you're doing is going to work. And every day you just keep grinding and trying things. But when you zoom out and you're

[00:11:46] Paul Roetzer: like, man. You know, when I had it on the whiteboard: I have four months of burn rate left.

[00:11:53] Paul Roetzer: In the moment, you're just solving for it. You don't, like, really process, what does that actually mean to my life? And at what point do I [00:12:00] have to explain to my wife, like, this is kind of where we're at financially. And, you know, luckily for me, my wife has been so insanely supportive my entire entrepreneurial life, which has been the better part of my career.

[00:12:12] Paul Roetzer: And so I've always had kind of that support system and her believing in what I was doing. And so I never had to worry about that piece of it, like, she was always going to be there to help me solve it. But, yeah, I mean, being an entrepreneur and taking big risks is no joke. It's not for the faint of heart.

[00:12:29] Mike Kaput: Yeah. And I feel like we need an annual tradition on November 30th to just send flowers to OpenAI or something for, basically as an afterthought, releasing ChatGPT.

[00:12:42] Paul Roetzer: And I think, you know, it would have worked. This is my thing is, like, I didn't know ChatGPT was coming, but we had GPT-

[00:12:47] Paul Roetzer: 3 at that point, like, we knew we were at a tipping point. Like, I deeply believed we were going to get there. And I think ChatGPT just accelerated it for sure, for us and for a lot of other people, but, you know, we'd have found a [00:13:00] way, we'd have raised more money.

[00:13:01] Paul Roetzer: I had, you know, I had money lined up if we needed it. I had my own money lined up from selling the agency that I'd set aside to fund it if I had to. Like we weren't stopping. But ChatGPT sure made my life easier.

[00:13:12] Mike Kaput: All right, you ready to dive into this week's topics, Paul? All right. 

[00:13:17] Paul Roetzer: More strawberries, huh?

[00:13:18] OpenAI Strawberry

[00:13:18] Mike Kaput: More strawberries, yeah. Speaking of OpenAI, they're heavily represented this week. Because first up in our main topics this week, OpenAI is apparently racing to launch a new AI system, which we've talked about before, codenamed Strawberry, according to The Information, possibly as soon as this fall.

[00:13:39] Mike Kaput: So Strawberry was previously known as the Q* (Q Star) project, or elements of it, which we've talked about in the past couple of years. It basically represents a major leap forward in problem-solving abilities, particularly in math and programming. So according to sources involved in the project interviewed by The Information, [00:14:00] OpenAI researchers are aiming to integrate Strawberry into a chatbot.

[00:14:05] Mike Kaput: Possibly within ChatGPT, and possibly very, very soon. One source estimates it could be as early as this fall. Now, Strawberry is pretty unique, reportedly, in a couple of different ways.

[00:14:18] Mike Kaput: So, first, it can reportedly solve math problems it hasn't seen before, which is something that today's AI tools can't do, and we'll talk a bit about why that's important. Second, when it is given additional time to think through problems, it can also better perform more subjective tasks. So some of the examples given by The Information include, say, answering questions about a product marketing strategy. OpenAI employees also demonstrated that Strawberry can solve a complex word puzzle like the New York Times Connections game.

[00:14:53] Mike Kaput: Now, interestingly, Strawberry also appears essential to OpenAI's upcoming new [00:15:00] flagship model, which is codenamed Orion. The Information reports from a source that OpenAI is using a bigger version of Strawberry to generate synthetic training data for Orion. So if this is true, and if it works as intended, it basically means OpenAI could overcome these limitations that right now are inherent in AI training.

[00:15:23] Mike Kaput: This need to find more and higher-quality real-world data to feed models so they can become smarter. So Paul, there's been just an upswing in talk and rumors about Strawberry recently. How seriously are you taking this most recent report, and just generally the commentary around the capabilities and potential release date of Strawberry?

[00:15:48] Paul Roetzer: When we first started talking about it, you know, it was a lot of rumors, but I think it was Reuters that had the article, back in episode 106. We kind of went a little deeper on this topic and explained what the Reuters article said [00:16:00] about it. If you listened to that episode, or want to go back and listen to it, we talked about maybe the origin of the code name being intended to troll Elon Musk for a 2017 Vanity Fair interview where he talked about, you know, self-improving

[00:16:14] Paul Roetzer: AI. Basically, you know, the quote was, let's say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more and it's self-improving, so all it really wants to do is pick strawberries, so then it would have all the world be strawberry fields, strawberry fields forever, and there'd be no room for human beings.

[00:16:33] Paul Roetzer: So that was Elon Musk in 2017 talking about the dangers of AI. And so, you know, the theory was that maybe OpenAI was having some fun at Elon Musk's expense with the Strawberry codename. But the basic premise back in episode 106 is, as you had kind of mentioned, it's all about reasoning capabilities and improvement in reasoning capabilities.

[00:16:53] Paul Roetzer: The project's goal in the Reuters article was said to enable AI to plan ahead and navigate the internet autonomously to perform what [00:17:00] OpenAI calls deep research. But it's basically tightly guarded.

[00:17:05] Paul Roetzer: Now, the reason it's significant is if we go back in, in episode 106, we talked about the stages of AI from OpenAI.

[00:17:13] Paul Roetzer: So these are the levels 1 through 5. Level 1 being chatbots with conversational language, where we are today. Level 2

[00:17:21] Paul Roetzer: is reasoners, human level problem solving. So that's what, in theory, Strawberry takes us to. That then enables level three, which is agents. So we talk a lot about AI agents. That's systems that can take actions, which leads to innovators at level four, AI that can aid in innovation, and then level five is organizations.

[00:17:39] Paul Roetzer: So this is a step toward the, you know, AGI that OpenAI pursues. The article had some new information, like I hadn't heard about the government demos, which is going to become relevant in a minute when we're talking about laws and regulations. And then it talked about Ilya, and, you know, what Ilya Sutskever saw.

[00:17:59] Paul Roetzer: So we've talked, [00:18:00] again, if you've been a long time listener of the podcast, you've heard us talk about Ilya quite a bit. One of the co-founders of OpenAI, he left to form his own AI lab not too long ago, earlier this year. And so in May of 2023, Ilya and, I think, about seven or eight other researchers at OpenAI published an article called Improving Mathematical Reasoning with Process Supervision.

[00:18:22] Paul Roetzer: And so I think this becomes very important. And so I'll step back for a second and explain why this matters. They also at that time published a research paper called Let's Verify Step by Step. And we'll put the links to both this article and that research paper in the show notes. But I think this is probably going to end up being the early blueprint. Like, OpenAI often publishes things far in advance before they release the things.

[00:18:50] Paul Roetzer: And I think this paper is probably the blueprint to Strawberry if we go back and look at it. So the basic premise here is, when you're training these models, they use something called reinforcement learning. [00:19:00] So basically they train the model, and then the model is taught to do something by giving it a reward, like a goal.

[00:19:08] Paul Roetzer: So if you think about, we've talked about AlphaGo, when Google DeepMind, you know, trained AlphaGo to win at the game of Go, the reward function is winning, like, it's earning points or, you know, something. That's why they often train in game systems, because in games, there's a definitive endpoint.

[00:19:25] Paul Roetzer: There's a reward function that the AI is seeking to achieve.

[00:19:30] Paul Roetzer: But when you're building these generative AI models, there is no perfect outcome per se. There's no points to be earned. There's no game to win at the end. So you have to develop a way to reward the model through this reinforcement learning in the process.

[00:19:47] Paul Roetzer: And so, when you're doing, like, say you're writing an article, how do you know if it's written the article well? Well, right now they use this reinforcement learning from human feedback, where a human looks at output A and output B and says [00:20:00] output B is better. Like, that's the reward function, basically, is the human saying this is a better option, then it learns to do it like that.

[00:20:06] Paul Roetzer: So to give these models, though, what we call system two thinking, the ability to reason over time, to go through 10, 20, 50 steps in a process, to plan, to make decisions, they have to know how each step in the process performed. And so this Let's Verify Step by Step research paper, what it said is, and I'll quote, we've trained a model to achieve a new state of the art in mathematical problem solving, and this is the real key point, by rewarding each correct step of reasoning, what they were calling process supervision, instead of simply rewarding the correct final answer, which they call outcome supervision.

[00:20:44] Paul Roetzer: So, what it does is, it is able to evaluate itself step by step and to know if that step is accurate. So if we have to solve a problem that requires 10 or 15 steps, take, you know, a hard math problem, [00:21:00] it can't just know it got the right answer. You know, you can't just go to the outcome supervision side.

[00:21:04] Paul Roetzer: What we want to do is know that it took the right step in the process each time. And so if you go back a few episodes ago, we talked about how AI agents aren't reliable yet, because they may only get something right, like, 95 percent of the time. So to get to that level three that we just talked about with OpenAI, the AI agent level, we have to remove hallucinations and errors.

[00:21:27] Paul Roetzer: How do you remove hallucinations and errors? You make sure every step of the process is done correctly. How do you do that? By giving the model the ability to do this process supervision. So, back in May of 2023, I think OpenAI laid out for us exactly what Strawberry was going to be. It was a way to not only enable these models to do reasoning and planning and make decisions and eventually take actions, but it was actually the process of the model evaluating each step along the way, [00:22:00] having a reward function for doing the step correctly.
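To make the distinction Paul is describing concrete, here is a minimal sketch of outcome supervision versus process supervision. The verifier below is a hypothetical stand-in for the learned reward model in Let's Verify Step by Step, which is trained on human step-level labels; this is an illustration of the idea, not OpenAI's implementation.

```python
# Outcome supervision rewards only the final answer; process supervision
# rewards every reasoning step, so errors can be located and fixed.

def outcome_supervision(final_answer: str, correct_answer: str) -> list[float]:
    """One reward for the whole chain: the model never learns which step failed."""
    return [1.0 if final_answer == correct_answer else 0.0]

def process_supervision(steps: list[str], verify_step) -> list[float]:
    """One reward per reasoning step: an error at step 2 shows up at index 1."""
    return [1.0 if verify_step(step) else 0.0 for step in steps]

def toy_verifier(step: str) -> bool:
    # Hypothetical checker for steps written as "expression = result".
    expression, result = step.split(" = ")
    return eval(expression) == int(result)  # eval is acceptable in a toy example

# Toy chain of thought for "What is (2 + 3) * 4?" with a deliberate slip:
chain = ["2 + 3 = 5", "5 * 4 = 24"]

print(outcome_supervision("24", "20"))          # [0.0] -> wrong, but where?
print(process_supervision(chain, toy_verifier)) # [1.0, 0.0] -> step 2 is the error
```

That per-step credit is what lets a model locate a bad step and retry it, which is the self-correction Paul turns to next.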

[00:22:03] Paul Roetzer: And so that's where I think we're going to hear a lot more conversation going. And The Information article did talk about this idea. It said Strawberry has its roots in research started years ago by Ilya Sutskever, then OpenAI's chief scientist, who recently left to start a competing lab. Before he left, OpenAI researchers Jakub Pachocki and Szymon Sidor built on Sutskever's work by developing a new math-solving model, Q*, alarming researchers focused on AI safety.

[00:22:32] Paul Roetzer: Because again, once these models can do this, once they can remove the error rates or dramatically reduce the error rates, they can remove the hallucinations and they can give themselves the rewards along the way. Now they develop the ability to improve themselves, because now they can find errors. So if you take a 10-step process and it finds an error in step 3, it can go back and fix the error.

[00:22:55] Paul Roetzer: So self correcting AI, that's when we get into a situation where these [00:23:00] things can take off pretty quickly in their capabilities and their impact on jobs and businesses and all that stuff. And that's why we talk a lot about not where models are today, but where they're going. And you can follow the path of this research from, you know, Google, from OpenAI, from Anthropic.

[00:23:17] Paul Roetzer: Like, you can see where this stuff is going. You just have to read the research papers they've published. And you look at the capabilities within those papers that aren't here yet, but you can make an assumption one to two models out, they're going to be there. And this is the whole premise of my opening talk at MAICON.

[00:23:31] Paul Roetzer: It's like the road to AGI, like it's laid out before us. We, we can look at the research and we know what they're working on. And if you just make some assumptions about those things becoming true, then we, we have a general idea of what the future, the near term future looks like.

[00:23:45] Mike Kaput: Yeah, that's a really important point.

[00:23:47] Mike Kaput: It seems every time we talk about AGI, you know, we always want to kind of qualify, like, oh man, this sounds a little out there, but this is the base principle they're assuming could happen at some point when [00:24:00] they make these arguments. And now when you layer on top of that, oh my gosh, we're actually hearing reports that they're starting to do something related to it, then you have to start completely reframing this as not exactly science fiction, but possibly the near future.

[00:24:16] Paul Roetzer: Yeah. And I think one of the other, you know, key points The Information article made is that, and we've talked about this idea on the show before, like, the GPT-4 model we all got in spring of '23 is not GPT-4 full. Like, we got the scaled down version, the safe, quote unquote, version of that model. And they said in this article, Strawberry, when it comes out, if it comes out in ChatGPT, like if it's part of the chatbot, they said the chatbot version

[00:24:44] Paul Roetzer: is a smaller, simplified version of the original Strawberry model. It seeks to maintain the same level of performance as the bigger model while being easier and less costly to operate. They give us the safer versions of things. So whatever we end up getting [00:25:00] access to, after the government demos and after they've, you know, kind of made it as safe as they can, it's not going to be the full version.

[00:25:07] Paul Roetzer: We talked about this with advanced voice mode.

[00:25:09] Paul Roetzer: Like when we get that in ChatGPT, we're not going to have the full scaled version of it. Maybe it's 80%. Maybe it's 60%. Like, who knows? Um, but these things are more powerful than what we get access to. 

[00:25:22] OpenAI Fundraising

[00:25:22] Mike Kaput: Alright, so in our second big topic today, another OpenAI item. Because according to a recent report from the Wall Street Journal, OpenAI is discussing a funding round of several billion dollars that would value the company at over a hundred billion.

[00:25:39] Mike Kaput: That's up from its previous valuation of $86 billion. This funding round is reportedly being led by venture capital firm Thrive Capital, and they are expected to invest about a billion dollars. Microsoft, NVIDIA, and Apple are also rumored to be interested in the round. It's quite likely, people are saying, that [00:26:00] Microsoft at the very least, given its past relationship, is going to be investing.

[00:26:04] Mike Kaput: And this new round would actually be the biggest infusion of outside capital into the company since Microsoft invested about 10 billion in 2023, according to the journal. So Paul, maybe can you put this into context for us? Why does OpenAI need to raise so much money?

[00:26:23] Paul Roetzer: Well, they need to raise it because they burn a bunch of money.

[00:26:26] Paul Roetzer: I mean, so they're making money, like, their revenue is skyrocketing. So the article, I think, said earlier this year it was reported that OpenAI's revenue was $3.4 billion on an annualized basis. They're up to $283 million per month or something like that.

[00:26:42] Paul Roetzer: So they're generating money, but they're spending way more than that. My first reaction to this is billions doesn't seem significant enough. Like, I honestly read the article twice because I was trying to figure out what I was missing. I don't understand why they would raise 3 billion, 5 billion, 10 billion.

[00:26:59] Paul Roetzer: [00:27:00] Like, that's just not enough money, if you consider xAI, Musk's startup, raised 6 billion as its first round. Anthropic, I think their latest round was, what, 4 billion or something like that. So anything south of 10 billion just seems irrelevant, like, unless they're just trying to clear off some vested stock from employees or something. Like, I don't know, it just makes no sense to me. If I had seen a round of, like, 50 billion, I would have been like, okay, yeah, that's legitimate.

[00:27:35] Paul Roetzer: That's a decent amount of money to get them through like 18 months, which is normally what you would raise a round to do. So my only assumption here is it's more likely they're setting the stage for an IPO. But if it's anything south of 10 billion, it has to either be to clear out some employee stock or

[00:27:59] Paul Roetzer: just a [00:28:00] bridge to an IPO. But they have that complex corporate structure that they would have to take care of before they could think about an IPO. So again, if people have listened to the podcast a long time, you know they're actually a nonprofit and then there's a for-profit entity underneath that. But the investors in that for-profit entity don't actually get equity.

[00:28:20] Paul Roetzer: They get profit share, in essence. So, like, Microsoft has a 49 percent share of OpenAI's profits, I think up to, like, 100 billion. Like, there's a cap on that profit. But they would have to change the structure, I believe, to eventually have an IPO. So that's my big take: it's just not enough money.

[00:28:39] Paul Roetzer: Like, what Sam's trying to do, I mean, we talked episodes ago about seven trillion, being like half joking, but I think that it's way more likely that they would be raising a hundred billion or five hundred billion, and in that case, it's like, you might as well just IPO.

[00:28:56] Paul Roetzer: Right. I don't know. I, I, I just thought it was weird.

[00:28:59] Paul Roetzer: Like, [00:29:00] the article was bizarre to me. And, like, the billion from Thrive? Who cares? Like, again, it seems weird to say a billion is nothing, but a billion is nothing in this world. And so what is a billion going to do? It's literally

[00:29:14] Paul Roetzer: nothing in the grand scheme of what they're building. And then we have the, the word that came out that Apple and Nvidia may partake in the round as well.

[00:29:22] Paul Roetzer: Which makes total sense, because Apple has a strategic partnership with OpenAI for Apple Intelligence, and they're going to rely on them pretty heavily for intelligence in their devices. And then NVIDIA obviously stands to benefit greatly from training these future frontier models, as well as inference, meaning the use of the AI in the different devices and software we're going to use.

[00:29:42] Paul Roetzer: But again, like, what? What are they going to invest, you know, 100 million? Like, who cares? So, I don't know. I'm really intrigued to see what this ends up actually being. I totally believe they're actively raising the money. I think the hundred billion dollar valuation that's being reported is low. [00:30:00] I think it ends up being way more than that.

[00:30:02] Paul Roetzer: That's probably a pre-money valuation. So, you know, if they raise 50 billion, then it's a 150 billion post-money valuation, which is a little bit more reasonable. But I don't know. The numbers just aren't adding up, is kind of my main takeaway here.

[00:30:15] Mike Kaput: Well, to your point, the Financial Times also had a report that basically said OpenAI is weighing changes to its corporate structure amid these funding talks, according to a few sources they interviewed. So yeah, very possible, I guess, per your theory, that this money is just to clear people off the books, get ready to either pivot or IPO.

[00:30:37] Paul Roetzer: Yeah, and I think they got to blow up the Microsoft deal. So the

[00:30:41] Mike Kaput: Yeah. Yeah. 

[00:30:42] Paul Roetzer: where, you know, they get the first whatever percentage of the first hundred billion profits. But if you just restructure the company and you give Microsoft true equity, then

[00:30:52] Paul Roetzer: then that deal can go away. And so I think to get, like, Apple and NVIDIA and other partners in, they're going to have to just, like, clear [00:31:00] the table of how they set this all up to begin with, and that's going to take time.

[00:31:04] Paul Roetzer: So as I said, maybe this is just a bridge round to clear the table so that they can do the bigger thing, because they just got a mess to clean up with their corporate structure.

[00:31:13] Mike Kaput: Yeah, what a wonderful world where a few billion is a bridge round in AI, you know? It's not even

[00:31:20] Paul Roetzer: That's not even going to cover the training run for Orion. Like, I don't know. It's just nuts.

[00:31:25] AI Safety Bill

[00:31:25] Mike Kaput: All right, so in our third big topic today, an update on an AI safety bill we've talked about quite a bit, California's controversial AI safety bill called SB 1047. It has now cleared both the California State Assembly and the California Senate, which means it just needs one more process vote before heading to Governor Gavin Newsom, who must then decide by the end of September whether to sign or veto.

[00:31:51] Mike Kaput: The bill. As a reminder, SB 1047 requires AI companies operating in California to implement several safety measures [00:32:00] before training advanced foundation models. So these precautions include things like the ability to quickly shut down a model in the case of a safety breach, protection against unsafe post-training modifications, and maintaining testing procedures to evaluate potential critical harm risks. Now, this legislation, like we've talked about, has faced some criticism from major players in the AI industry. OpenAI is pretty much against it. Anthropic has pushed back on some things, but appears to be largely for it. And critics have argued that the bill focuses too heavily on catastrophic harm and could negatively impact innovation, open source development, and other areas where AI is moving forward at a pretty rapid clip. Now, the bill has undergone some amendments, some of which Anthropic proposed. These included replacing potential criminal penalties with [00:33:00] civil ones and narrowing some of the enforcement powers under the bill.

[00:33:04] Mike Kaput: However, people are still not entirely happy with this. So Paul, can you kind of just walk us through what the significance is of this particular bill in California for U.S. regulation of AI as a whole?

[00:33:20] Paul Roetzer: Yeah, the key here is, like, it's not just companies in California, it's companies that do business in California. So it's a really important distinction. And I think we talked about this on the last episode, that, you know, California is a massive economy. I mean, I want to, I don't know, like, focus here on why this matters.

[00:33:39] Paul Roetzer: Like we keep talking about SB 1047 and like, what is the significance to people? I think that it comes down to a few things. So there was an article, I think it was Wall Street Journal, I'll put the link in, it said AI regulation is coming. Fortune 500 companies are bracing for impact. So I thought this is a good one because it's like, what does this mean to corporations?

[00:33:58] Paul Roetzer: So this article said [00:34:00] roughly 27 percent of Fortune 500 companies cited AI regulation as a risk in recent filings with the SEC, one of the clearest signs yet of how rules could affect businesses. A recent analysis by Arize AI, which is a startup building a platform for monitoring AI models, shows 137 of the Fortune 500 cited AI regulation as a risk factor in annual reports, with issues ranging from higher compliance costs and penalties to a drag on revenue and AI running afoul of regulations. The inability to predict how regulation takes shape, and the absence of a single global regulatory framework for AI, creates uncertainty, that was a quote from credit card company Visa.

[00:34:39] Paul Roetzer: And then it said, some corporations are hoping to get ahead of regulation by setting their own guidelines. And this is, I think, an important takeaway: we just don't know. And this is, like, why having your own internal policies is really important. Understanding the regulations of your own industry that AI may fall under already is really important.

[00:34:58] Paul Roetzer: So, I think there's an [00:35:00] element here that this uncertainty matters to businesses. That if this gets signed into law in the next 30 days, then, you know, maybe you've got like six months to comply with this. Well, that's going to boil down to everybody. Like, the CMO is all of a sudden going to have to care about this law.

[00:35:15] Paul Roetzer: Everybody's going to have to, like, understand it. And this isn't it. There's, like, hundreds of AI-related laws going through states right now. So there's just massive uncertainty. The one thing that seems almost like a given in this is the models are going to take longer to come out. So, what I mean by that is, whether SB 1047 passes or not, we're going to talk in the next rapid-fire item coming up about the US AI Safety Institute and what's happening there.

[00:35:44] Paul Roetzer: But what's going to happen is these companies are going to be working with the government, even if it's voluntarily, to try and convince them that these models are actually safe. And so they're going to open up access to the models, they're going to [00:36:00] demonstrate these to governments, as we learned in the previous topic with OpenAI demoing Strawberry to the government.

[00:36:07] Paul Roetzer: So what's going to happen is the models will be done, but now they've got to go through additional layers of safety and eventually additional layers of regulation. So we may go from an 8-to-12-month cycle of the next frontier model coming out to, like, an 18-to-24-month cycle. So what that means to us as users, as business leaders, as practitioners, is when we finally get GPT-5, whether SB 1047 is in place or not, whether the federal government puts some regulations in place or not, we may be on a two-year run now before we see GPT-6.

[00:36:44] Paul Roetzer: Because they're going to train this thing, they're going to build it, and then they're going to do their own red teaming for five months, and then they're going to bring the government in and show them what they've got. And it's just going to take longer. And so I think what we'll continue to see as users of this technology is the iterative [00:37:00] deployment that we're actively seeing from Google and OpenAI in particular, and Anthropic is following the same cadence. Where rather than doing massive model drops, where we just go from one and we get an order of magnitude better model 12 months later, I think we're just going to see over an 18-month period, every three, six months, some new capability.

[00:37:22] Paul Roetzer: Now we have video capability. Now we have advanced voice mode capability. And they build these models in an iterative way, where now they can just go show the government or the regulators: okay, we're going to launch voice mode in three months, here's everything we've done with it. And so rather than a single model drop, they do it iteratively.

[00:37:42] Paul Roetzer: And then you can give yourself the runway you need to cover the regulation. So I think these big companies, OpenAI, Anthropic, Google, they may all be opposed to this. There may be government leaders opposed to it, there may be open source advocates opposed to it, but I think they're all under the assumption the regulations are coming.

[00:37:59] Paul Roetzer: Whether it's [00:38:00] this one or the next one, we're going to have regulations of some sort, and so I think they're going to line up to voluntarily participate in whatever the federal government is doing, because it's going to give them a lot of cover to kind of keep moving forward. So I don't know. And again, I still don't know where exactly I fall on this.

[00:38:21] Paul Roetzer: I do think that the way this thing is designed is pretty arbitrary. So it basically tries to say, if you're training a model over this size, or if you're fine-tuning a model in this way, then you have to,

[00:38:35] Paul Roetzer: you know, we would have to approve of that, basically. But with an exponential growth curve of these things, the capability and how they're trained, it's like: the thing we think is not going to be safe today, a year from now we'll laugh at it, like, oh, that was kind of obsolete technology. And that obsolete technology wouldn't even be allowed to be

[00:38:56] Paul Roetzer: built without the government's approval under this premise. So, [00:39:00] I don't know. I feel like we're just too early, but I do also worry that these capabilities are going to just explode. And if we don't have some kind of regulation, then we're going to get in trouble fast. So I don't know. I continue to kind of sit on the fence and listen to both sides of this.

[00:39:15] Paul Roetzer: And I just don't know that I have enough information personally to say like definitively that this is a good or bad idea.
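For context on "a model over this size": as widely reported, the bill's covered-model definition keys on fixed training thresholds, roughly 10^26 FLOPs of training compute and $100 million in training cost. A minimal sketch of that kind of static test (thresholds approximate, function names ours) shows why a fixed line can age oddly as compute gets cheaper:

```python
# A simplified sketch of a static "covered model" test like SB 1047's.
# Thresholds reflect the bill as reported (about 1e26 FLOPs and $100M in
# training cost) and should be treated as approximate.

FLOP_THRESHOLD = 1e26          # training compute trigger
COST_THRESHOLD = 100_000_000   # training cost trigger, in USD

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Static test: both triggers must fire, and neither adapts over time."""
    return training_flops >= FLOP_THRESHOLD and training_cost_usd >= COST_THRESHOLD

print(is_covered_model(3e26, 250_000_000))  # True: a frontier-scale run today
print(is_covered_model(3e26, 40_000_000))   # False: same compute, cheaper hardware
```

As training costs fall, ever more capable models can slip under a fixed dollar threshold, which is one version of the arbitrariness Paul is pointing at.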

[00:39:23] Mike Kaput: Yeah, that last point you made is really interesting. I saw some posts that basically equated it to: what if we had tried to pass a law like this in California in 1994? We would have throttled,

[00:39:36] Mike Kaput: like, the internet revolution. Because how would you even, not that it's bad to pass this type of law, but how do you even, from a technical perspective, wrap your head around what's going to come 10 years from now?

[00:39:48] Paul Roetzer: Yep, yeah. I do buy, I think, the argument of the people against it that I align with best, which is that we should be regulating at the application level, not the

[00:40:00] Paul Roetzer: now, so that this is a general purpose technology. It's going to have all kinds of capabilities. And it can be used for good or for bad.

[00:40:07] Paul Roetzer: So, like, I think Andrew Ng, in his article in Time Magazine, you know, equated it to an engine. Well, an engine can be used in a bomb or it can be used in a car. So do we regulate the invention of the engine and the improvement of the engine? Or do we regulate the use of it in bombs? And I think that's the kind of concept here, where it makes a lot of sense to look at the application layer and to allow existing laws to cover illegal use of things and people doing harm, and let this play out a little bit.

[00:40:38] Paul Roetzer: So, I don't know. If I had to, like, pick a side today, I would probably err on the side of: I think we need to be thinking deeply about this, but I don't think we're at the point yet where we need to step in and stop innovation, because I think innovation is critical to growth and GDP, to, you know, the safety and security of our country.

[00:40:56] Paul Roetzer: Like, I just feel like we don't want to stop this yet, [00:41:00] and I don't think a pause or anything like that is very helpful at the moment.

[00:41:05] OpenAI + US AI Safety Institute

[00:41:05] Mike Kaput: All right, let's dive into a bunch of rapid fire this week. First up, like we've alluded to a bit, OpenAI, along with Anthropic actually, has agreed to share their AI models with the U.

[00:41:17] Mike Kaput: S. AI Safety Institute. This is a federal agency established in 2023 by the White House's executive order on AI. So under this arrangement, both companies will provide access to their, quote, major new models before and after public release. The U.S. AI Safety Institute is part of the National Institute of Standards and Technology, and as part of this, will offer safety feedback

[00:41:46] Mike Kaput: to help improve these models. Now, apparently the agreement comes in the form of a Memorandum of Understanding, which is formal but non-binding.

[00:41:54] Mike Kaput: While OpenAI and Anthropic are the first to formally [00:42:00] publicize their involvement in this, other major players in AI may also follow suit. A Google spokesperson also indicated that the company is in discussions with this federal agency, and they may share some more information when that's available.

[00:42:16] Mike Kaput: So, Paul, reading this, does this kind of mean what it sounds like, that if they follow this, the US government gets access and input on major AI models before they're released and after?

[00:42:28] Paul Roetzer: I assume so, and I would guess they're already doing this. So I think part of this is, I mean, it's hard to ignore the timing of this: SB 1047 is now heading toward Governor Newsom's desk at some point in the next month, most likely, and then two of the, you know, obviously frontier model companies who have largely opposed SB 1047 have agreed in principle to do this with the government.

[00:42:52] Paul Roetzer: I also think, you know, it's not irrelevant that this is a Biden-Harris administration [00:43:00] executive order that created this safety institute. Governor Newsom is a leading Democratic governor. Nancy Pelosi, a very powerful Democrat, opposes SB 1047, and she's from California. And so, again, I can be completely off on this, but this would lead me to think that there's going to be some pretty significant pressure on Governor Newsom not to sign this bill.

[00:43:27] Paul Roetzer: I think this is the federal government trying to step in and say, we've got this, like, we're going to adhere to these principles, we don't need states to step in at this point and put these restrictions in place. So I would be shocked if Governor Newsom isn't having conversations with Pelosi and Harris and others, and that they aren't trying to align their efforts.

[00:43:57] Paul Roetzer: They all work closely together. So, [00:44:00] I don't know if there's odds anywhere. I would love to know what

[00:44:02] Paul Roetzer: the odds are of this bill getting signed or not. But I would put the odds, like, 60-40 that he does not sign, that these efforts and others stop that bill from coming to fruition at the moment. But I could be wrong on that.

[00:44:17] OpenAI, Adobe and Microsoft support AI Watermark Bill

[00:44:17] Mike Kaput: So in some other AI legislative news, OpenAI has announced its support for another California bill, which would require tech companies to label AI generated content.

[00:44:29] Mike Kaput: So this bill, which has kind of flown a bit more under the radar with all the SB 1047 stuff, is known as AB 3211. And it aims,

[00:44:40] Mike Kaput: there's going to be a quiz afterwards, by the way, like, remembering what all these bills are.

[00:44:44] Mike Kaput: I'm learning a lot as we do research for this. So AB 3211 aims to address concerns about the potential misuse of AI-generated media,

[00:44:57] Mike Kaput: especially in the context of political misinformation. [00:45:00] And so it aims to do that by requiring watermarks in the metadata of AI-generated photos, videos, and audio. Interestingly, it also requires large online platforms to label AI-generated content in a way that your average user can understand.

[00:45:17] Mike Kaput: So, unlike its opposition to SB 1047, OpenAI has expressed support for this bill and the overall mission of watermarking synthetic content. And this bill has passed the California State Assembly, and it's going to go to a full vote in the State Senate. If it does pass that, it'll be in the same kind of position as SB 1047, going to the governor's desk for a decision by the end of September. So, Paul, this seems a little more tangibly focused on what we know to be [00:46:00] an existing problem in AI. Like, how necessary is some type of legislation like this around watermarking, around figuring out what's real and what's not?

[00:46:01] Paul Roetzer: Yeah, this would be the application layer we were talking about. This makes total sense to me. Like, I would love to hear the opposing opinions of, like, why we wouldn't do this. But I think it's pretty obvious that misinformation, disinformation, synthetic content is flooding the internet, and it's only going to get worse.

[00:46:18] Paul Roetzer: And it's going to look very real in all modalities, text, image, video, audio, like it's going to be indistinguishable. And so the idea of being able to know where something came from and how it was created makes a ton of sense. Like, I think we're going to want to know when something was human generated or not.

[00:46:36] Paul Roetzer: You know, how much AI played a role in it. So, yeah, I mean, this is a very logical application layer bill, in my opinion.
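For a concrete picture of what "watermarks in the metadata" can look like, here is a simplified sketch that writes a provenance record into a PNG text chunk with Pillow. Production systems use the C2PA standard with cryptographically signed manifests; the field names here are invented and the record is unsigned, so treat this as illustration only.

```python
# A simplified, unsigned sketch of metadata provenance labeling. Real
# systems (C2PA) cryptographically sign the manifest; these field names
# are invented for illustration.

import json
from typing import Optional

from PIL import Image  # pip install Pillow
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Write a provenance record into a PNG text chunk."""
    manifest = {
        "ai_generated": True,
        "generator": generator,  # e.g. the model or tool that made the image
        "note": "illustrative only; a real manifest would be signed",
    }
    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("provenance", json.dumps(manifest))
    image.save(dst_path, pnginfo=info)

def read_provenance(path: str) -> Optional[dict]:
    """Read the record back; a platform would surface this as a user-facing label."""
    text_chunks = Image.open(path).text  # dict of the PNG's text chunks
    raw = text_chunks.get("provenance")
    return json.loads(raw) if raw else None
```

The obvious weakness of plain metadata is that a simple re-encode strips it, which is one reason schemes like this get paired with platform-side labeling duties of the kind the bill places on large platforms.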

[00:46:43] Amazon Turns to Anthropic for Alexa Help

[00:46:43] Mike Kaput: Alright, so next up, Amazon is actually set to unveil a revamped version of its Alexa voice assistant.

[00:46:52] Mike Kaput: But it's going to be powered primarily by Anthropic's Claude AI models, rather than their own in house technology. [00:47:00] So, this shift comes as Amazon is getting ready in October to release this version of Alexa, which is a paid version, that they're calling the quote, remarkable version. And according to sources familiar with it, with the matter who spoke to Reuters, Amazon is deciding to use Anthropx AI because they have some performance issues with their own models internally.

[00:47:23] Mike Kaput: According to Reuters, initial versions using Amazon's in house software reportedly struggled with response time, sometimes taking six or seven seconds to acknowledge and reply to prompts. So, this new remarkable version of Alexa is going to need more sophisticated AI because it offers more advanced capabilities, including the ability to engage in complex conversations, provide shopping advice, and do things like order food or draft emails for you.

[00:47:55] Mike Kaput: actually plans to charge for this. Between 5 to 10 per month, [00:48:00] while still offering the classic Alexa for free. Amazon has not officially commented on this yet, this is just from Reuters. But Amazon does have a minority stake in Anthropic.

[00:48:13] Mike Kaput: So Paul, this kind of seems like quite an interesting development.

[00:48:17] Mike Kaput: If true, it would seem to indicate Amazon's a little behind the ball in the generative AI race?

[00:48:25] Paul Roetzer: I'm going to save them about a year of testing. I'm going to tell you right now, no one is paying $5 to $10 a month for Remarkable Alexa. You can do whatever you want; that is not a business model I would be pursuing. There's going to be too many options. Intelligence is going to be commoditized within devices.

[00:48:43] Paul Roetzer: I don't need Remarkable Alexa for $5 a month.

[00:48:47] Paul Roetzer: so that's, that's one. Two, if I'm not mistaken, Amazon, you know, Anthropic's raised like 7 billion. I think Amazon's like four and a half of

[00:48:56] Mike Kaput: yeah, It's a significant chunk.

[00:48:58] Paul Roetzer: Yeah. So if we were in [00:49:00] the age of like these frontier model companies actually getting acquired, I would be like very bullish on Amazon acquiring Anthropic at some point.

[00:49:08] Paul Roetzer: I don't think that's going to happen. And an acquihire, which is, you know, the playbook right now being run in big tech? I don't think that's going to happen either. I don't see Anthropic selling out, but I could be wrong there. I can't help but think how crazy it is that Siri and Alexa were, like, a decade ahead of their time.

[00:49:33] Paul Roetzer: And yet, Apple and Amazon both somehow missed this. Like, Google Assistant, too, I guess you could say. They were in voice assistants so far before the generative AI age hit us, and yet here we have Amazon turning to Anthropic for help, to fix something they've probably spent a hundred billion dollars developing and marketing over the years.

[00:49:55] Paul Roetzer: And then, I mean, Apple supposedly is, you know, not going to infuse [00:50:00] OpenAI's voice technology. They're doing their own, but maybe they eventually change their mind on that when they realize Amazon's or OpenAI's is better. Just wild that this is where these voice assistants are ending up. It'll make for a great Harvard Business School case study 10 years from now: how did they miss this massive transformation of a product they were so far out ahead of?

[00:50:23] Oprah + AI ABC Special

[00:50:23] Mike Kaput: No kidding. All right, so next up, AI is about to get a pretty serious spotlight on it.

[00:50:30] Mike Kaput: And this is coming from one of the most famous media celebrities alive today, because Oprah is running an ABC special on

[00:50:40] Mike Kaput: AI that airs on September 12th. In it, she's going to talk to AI leaders like Sam Altman and Bill Gates. She's going to talk to popular tech reviewer Marques Brownlee and government officials like FBI Director Christopher Wray, to, quote, provide a serious, entertaining, and meaningful base for every viewer [00:51:00] to understand AI, and something that empowers everyone to be part of one of the most important global conversations of the 21st century, according to ABC. The special is titled AI and the Future of Us: An Oprah Winfrey Special. So Paul, why are we talking about this? Why does it actually matter that someone like Oprah, with her kind of profile, is now covering AI like this?


[00:51:25] Paul Roetzer: It takes AI mainstream. So, you know, we all live it every day. We talk about it on this podcast every week. To us, it's everywhere. Like, we see it and hear about it all the time. To the average citizen, it's nothing. They're aware of ChatGPT, they see the ads for Google AI or Microsoft AI, but they don't really know what's going on.

[00:51:49] Paul Roetzer: And so I think, you know, this is the kind of thing where you could look back five years from now and be like, oh yeah, the Oprah special. That's when things started tipping. Because, you know, we've talked about this when it comes to [00:52:00] politics. Like, why isn't it a bigger conversation from either political party in the United States right now?

[00:52:06] Paul Roetzer: It's because they don't know where the votes would come from.

[00:52:11] Paul Roetzer: So it's not a big enough issue in the United States yet that they're even talking about it on the campaign trail, when it may be the most important thing to the future of education and the economy in the next administration. Like, in the next four years, everything changes.

[00:52:27] Paul Roetzer: The infrastructure of the United States, the educational system, the business systems, the economy, jobs, and it's not even being talked about. Why? Because nobody cares yet. And so, an Oprah special like this has the potential to change things to where the average American citizen all of a sudden cares and has an opinion or has questions about how it impacts them and their family, their schools, and that matters.

[00:52:52] Paul Roetzer: So, yeah, I mean, I'll be very interested to see how they cover it. They have a diverse perspective. You know, they've [00:53:00] got Gates and Altman, but then they also have Tristan Harris, the guy who did the Netflix special, I think, about AI being bad. So they're going to give varying opinions, and I'll be very interested to see how they treat it, then how people react to it, and if it becomes a political campaign topic after this.

[00:53:19] Paul Roetzer: I can see this as a tipping point where all of a sudden the campaigns are talking about AI because Oprah talked about it.

[00:53:26] Mike Kaput: Yeah, I think that last point's interesting, because I wonder, just day in, day out, how many people in our kind of network and circle you hear saying, oh, AI's played out, it's overhyped. And there's plenty of hype, but I think people underestimate how it has not really broken containment yet for the vast majority of people out there.

[00:53:48] Paul Roetzer: Yep. Yeah. And again, like I said, consider the topics or the issues that'll matter most. And again, I'm just focusing on the United States; I know we have people outside the United States as listeners as well. But in our [00:54:00] political cycle right now, and in our economy, if you looked out ahead and asked what's going to impact it the most over the next four years, it'd be hard to find something bigger than AI. And yet nothing, crickets on the campaign trail.

[00:54:12] Paul Roetzer: So yeah, it'd be really fascinating to see if it changes things.

[00:54:16] Viggle Training on YouTube Videos

[00:54:16] Mike Kaput: Alright. So popular AI video startup Viggle has raised about $19 million in a Series A funding round. But it is drawing a bunch of controversy for scraping YouTube videos to train its AI models. So, Viggle is a tool that went viral shortly after launch this past year.

[00:54:37] Mike Kaput: the first version of this, which I'm sure you have seen in some form, is a video of Joaquin Phoenix's portrayal of the Joker replacing a rapper's performance at a music festival. And then they did tons of spinoff memes based on the same thing, showing a bunch of, Other popular celebrities and characters in the same environment, but in a recent interview with TechCrunch, [00:55:00] Viggle's CEO also revealed that the company has been using YouTube to train that AI tool.

[00:55:07] Mike Kaput: This is a charge several other companies have faced as well. We've talked about them, including NVIDIA and Anthropic, and YouTube's CEO has said in interviews that these types of activities are a clear violation of the company's terms. Now, a spokesperson for Viggle actually tried to walk back the comments after the interview, but then, when told the interview had been on the record, was forced to admit that yes, in fact, the company has trained on YouTube data.

[00:55:34] Mike Kaput: So Paul, I want to read a really quick reaction to this story from Ed Newton-Rex, who used to work at Stability AI. He resigned there over his concerns about how the company's models were being trained on data like this. He said, quote, AI companies will hope they can normalize training on copyrighted work without permission.

[00:55:54] Mike Kaput: When it's clear they're all doing it, it's more likely the public will assume it must be legal. But a [00:56:00] heist doesn't become more permissible because of its speed and scale. We cannot let this be normalized. Is that what's happening here? Like, is this just a heist in plain sight at this point with YouTube?

[00:56:10] Paul Roetzer: 100%. Like, again, we've talked about this so many times recently with, you know, NVIDIA and others. It's absolutely what they're all doing. And again, we had Eric Schmidt admit it on video, which was later taken down, that this is what you do. Like, if you're an entrepreneur, you go take the stuff, and if you're not a success, no one cares you took it.

[00:56:35] Paul Roetzer: If you are, then you hire lawyers to fight the legal battles. Like, this is the playbook. This poor Ed Newton-Rex guy, man, he fights the good fight on Twitter.

[00:56:47] Paul Roetzer: If you follow his stuff, this dude is just day after day hammering these same points. And it's just like shouting into the wind. I feel for the guy. Like, I appreciate what he's doing.

[00:56:58] Paul Roetzer: And, you know, the principles he stood [00:57:00] on to leave Stability AI and take this position in the marketplace, I'm glad he's doing it. But it's got to be a very unrewarding thing day in and day out when you feel like nobody cares. And maybe this goes back to the larger public not even understanding it.

[00:57:15] Paul Roetzer: Like, what do you mean, training video models on it? They'll, you know, watch the Oprah special, they'll see it can do video generation, and they'll be blown away. They'll think it's the coolest thing they've ever seen, that they can do 10-second videos by going into this bot.

[00:57:29] Paul Roetzer: And then you're like, yeah, they trained on YouTube videos, but who cares? It doesn't affect me. Like, I don't know how this becomes a bigger issue that society cares about. But right now, I just get a sense that people don't care. It just is what it is. And IP attorneys care, and the people who created the stuff care.

[00:57:48] Paul Roetzer: But other than that, everybody else just kind of assumes this is what it is. I'm not saying I'm in that camp. I'm not saying we shouldn't care. I'm just stepping back, looking at it, [00:58:00] and saying it seems like an issue that is going to really, really struggle to get traction with a larger audience of people who think it's worth their time to care about.

[00:58:08] AI Changes in Schools

[00:58:08] Mike Kaput: Alright, next up, AI expert Ethan Mollick just published another great article on AI's impact in education. This article is called Post Apocalyptic Education, and he leads it off by writing, quote, Last summer, I wrote about the homework apocalypse, the coming reality where AI could complete most traditional homework assignments, rendering them ineffective as learning tools and assessment measures.

[00:58:35] Mike Kaput: My prophecy has come true and AI can now ace most tests, yet remarkably little has changed as a result, even as AI use became nearly universal among students. He goes on to cite some research: a survey in the US found that 82 percent of undergraduates and 72 percent of K-12 students had used AI for school.[00:59:00] 

[00:59:00] Mike Kaput: Now, he notes that, look, AI itself doesn't cause people to cheat on homework; other technology in the past has helped them with their homework, things like internet answers. But the overall idea here is that this homework apocalypse appears to have already happened, and yet we still haven't done much about it.

[00:59:19] Mike Kaput: He argues there's two big illusions people have around this that are kind of preventing us from seeing the issue clearly. First, teachers and administrators think they can detect AI-generated content, which they cannot. Second, students often don't realize that using AI to, say, just do their homework for them could actually be harming their ability to learn and apply what they learned in the real world.

[00:59:45] Mike Kaput: So, in all the cases here, Mollick basically encourages educators to look beyond homework and help everyone figure out how to reinvent education, to use AI to help people think better, not just do their work for them. [01:00:00] So Paul, we've talked about plenty of schools that are taking really proactive and serious steps to address what Mollick is talking about here, but from what he's writing, it definitely sounds like not enough are.

[01:00:15] Mike Kaput: Like, you've done plenty of AI advising and teaching at educational institutions. What were your thoughts going through this?

[01:00:23] Paul Roetzer: There was one paragraph that really stuck with me. He said, quote, people are exquisitely good at figuring out ways to avoid things they don't like to do. And as a major new analysis shows, most people don't like mental effort. So they delegate some of that effort to the AI. In general, I'm in favor of delegating tasks to AI.

[01:00:41] Paul Roetzer: But education is different; the effort is the point. So this actually takes me back to my junior year of high school. I was in math class. I hated math class. I hated word problems more than anything. Mentally, I just couldn't do them. For whatever reason, to this day, I struggle with word problems.

[01:00:59] Paul Roetzer: And I remember [01:01:00] doing them and I was getting really upset. And so I raised my hand, I said to Mr. Flandero at St. Ignatius High School, I said, why do we have to do this? I'm never going to have to do this in my real life. Like, why are we doing this? And he, he looked at me and he said, because it's hard. And I just stared at him.

[01:01:15] Paul Roetzer: And I realized in that moment, like, oh, I get it. The effort was the point. The point was learning to do something that is difficult and having to fight through it. And I took that with me every day forward in my life, and applied it to college, to the rest of my high school career, and certainly to my professional career.

[01:01:35] Paul Roetzer: There's things you do that are just hard. They take time. They take effort. They're not easy. And that's the point. And so I think about this with my own kids. They're 12 and 11. I think about it when I consult with schools, you know, middle schools, high schools, colleges.

[01:01:49] Paul Roetzer: How do we raise the next generation of professionals when

[01:01:55] Paul Roetzer: the effort still has to happen, but now they have this crutch, this [01:02:00] ability to take shortcuts?

[01:02:02] Paul Roetzer: So I even think about it with this podcast. Every week we do this podcast, I don't use AI to develop any of the talking points. I could very easily, with today's AI, take an article, put it in, and say, what are the five key points here? I could give it a video, I could give it a podcast. But I don't retain things that way.

[01:02:20] Paul Roetzer: I have to listen to, watch, or read every single thing that we talk about on this show. I have to do the hard work to process it because then it stores in the long term memory. And then you can draw connections between pieces of information. And so for me, that is how you learn. And I don't like, that's not going to change.

[01:02:40] Paul Roetzer: The AI can't simulate the learning in your brain for you. It can give you shortcuts. And so, as I mentioned when we talked about AI education on a recent episode, it's how I teach my kids to use AI every day. It's not a replacement for creativity, imagination, critical thinking. It's an assistant.

[01:02:56] Paul Roetzer: And I know that isn't always going to resonate with them, especially the critical thinking part. I think the creativity and imagination they get at a young age; the critical thinking is a little bit more of a stretch. I have to explain that and give them examples. But the point is to teach people it is an assistant.

[01:03:11] Paul Roetzer: But the reality is there's always going to be people who take shortcuts. There's always going to be people who use these tools as a way to just get through stuff. And you know what? At the end of the day, they're probably the people who won't advance in their careers. It's always been true that the people who do the hard work, who put in the hours when no one's looking, are the people who end up being the most successful in their careers.

[01:03:34] Paul Roetzer: And AI is just maybe going to weed other people out faster. I don't know. But yeah, I think it's really important that we remember that yes, these tools can summarize podcasts and, you know, give you the two-second recap of an article, or you can watch a short video on TikTok for 10 seconds, but that's not learning. That is not how

[01:03:54] Paul Roetzer: we process things, how the human mind works and how it retains things over time and draws connections. [01:04:00] And those are the things you need to really develop intelligence and capabilities and true competency and mastery of a topic.

[01:04:12] Mike Kaput: have as human professionals.

[01:04:15] Mike Kaput: Your unique blend of experience and connecting the dots in that long term memory is going to give you the only unique advantage to using these tools for all the other stuff, right?

[01:04:26] Paul Roetzer: correct. Yep.

[01:04:28] Gemini Updates

[01:04:28] Mike Kaput: Alright, so our last topic for today is that Google has announced two significant updates to Gemini. So first is the introduction of custom gems for Gemini Advanced.

[01:04:40] Mike Kaput: So Gemini Advanced, Business, and Enterprise subscribers can now create and chat with custom AI experts called Gems. You can create Gems by writing instructions, naming them, and then interacting with them on specific topics or for particular tasks. They're also going to be offering pre-made Gems for scenarios like [01:05:00] coaching, brainstorming, career guidance, etc.

[01:05:03] Mike Kaput: These are rolling out on desktop and mobile devices in over 150 countries and most languages. The second big update is improved image generation in Gemini. Google is introducing its latest image generation model, Imagen 3, across all Gemini versions. It offers enhanced image quality and can generate images from very short text prompts. The model supports a bunch of different styles, including photorealism, oil paintings, etc. And it includes built-in safeguards as well as SynthID, which is Google's watermarking tool for AI-generated images.
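For developers wondering how to approximate a Gem today: Gems are a consumer feature of the Gemini app rather than an API, but the closest analogue in the Gemini API is a reusable system instruction. Here is a minimal sketch using the google-generativeai Python SDK; the persona text and model name are illustrative, and you would supply your own API key.

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

# Approximate a "career guidance" Gem with a fixed system instruction.
# The instruction text is illustrative, not Google's actual Gem prompt.
coach = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=(
        "You are a supportive career coach. Ask clarifying questions "
        "before giving advice, and keep answers practical and concise."
    ),
)

response = coach.generate_content("How should I prepare for a role in AI marketing?")
print(response.text)
```

The design idea is the same as a Gem or a custom GPT: define the persona and ground rules once, then reuse the configured model across many conversations.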

[01:05:45] Mike Kaput: So Paul, just to kind of wrap up here, do you want to talk about the significance of Gems? This appears to be Google's answer to GPTs.

[01:05:53] Paul Roetzer: Yeah, I had asked Chris Penn. I hadn't had time to test the Gems yet,

[01:05:57] Paul Roetzer: but I'd asked him on LinkedIn, where he had [01:06:00] posted about it. You know, is it a true alternative to custom GPTs? And I think he said you could see it as that, but right now you can't share them, and there were a couple of other limitations. But if they solve some of those limitations, you could definitely see it being that way.

[01:06:14] Paul Roetzer: I mean, I'm definitely going to play around with them. I think the context window with Gemini is going to be bigger; that could help. I was glad to see the image generation coming back. You know, that's been gone for a few months now, I think. So, yeah, I'm excited to see Gemini getting better and smarter and having more feature capabilities.

[01:06:34] Paul Roetzer: And I don't know when Gemini 2 is going to come out, but I'm anxious to see kind of where they go from here. So yeah, definitely something we'll be testing. Actually, we may be playing around with this for our final session at MAICON.

[01:06:47] Paul Roetzer: Mike and I are doing AI in Action, and we're planning on using Gemini in some cool ways. So we may experiment with this and have some stories to share post-MAICON with everybody.

[01:06:58] Mike Kaput: All right, Paul, that's another [01:07:00] big, busy week in AI. A couple quick final notes here. If you have not subscribed yet to the Marketing AI Institute newsletter, I'd encourage you to do so. It recaps and dives deeper into what we've talked about today, as well as all the topics we don't have a chance to get to, of which there are always many every week.

[01:07:18] Mike Kaput: So, marketingaiinstitute.com/newsletter will get you there, to that quick weekly brief that tells you everything going on in AI. And last but not least, if you have not left us a review yet on your podcast platform of choice, and you are able, we would highly, highly appreciate that. It will help us get the podcast to more people and improve it. Paul, thanks so much for another great recap.

[01:07:44] Paul Roetzer: Yeah, thanks everyone, and we will be back at the normal time next week. It is MAICON week, but Mike and I are going to record on Monday and drop on Tuesday, the first day of MAICON. So we won't be missing a beat with MAICON in Cleveland next week, and hopefully we'll see a bunch of you there. Looking forward to it.

[01:08:00] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:08:23] Paul Roetzer: Until next time, stay curious and explore AI.