82 Min Read

[The AI Show Episode 103]: Claude 3.5, Suno and Udio Get Sued, OpenAI’s Ex-NSA Board Member, Ilya’s New Company, and Perplexity’s Woes

Featured Image

Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

Paul and Mike are back! After a two-week hiatus, the longest episode of The Artificial Intelligence Show has dropped! Paul Roetzer and Mike Kaput discuss changes at the big companies: Anthropic’s Claude 3.5 Sonnet is released; Perplexity, Suno, and Udio are in legal trouble; Ilya Sutskever, co-founder and former chief scientist at OpenAI, launches a new company; and, speaking of OpenAI, learn more about their ex-NSA board member.

Listen Now

Watch the Video

Timestamps

00:09:12 — Claude 3.5

00:17:28 — Record Labels Sue Suno and Udio

00:24:08 — OpenAI Updates and Revenue

00:29:08 — OpenAI Buys Rockset

00:30:59 — OpenAI Acquires Collaboration Startup Multi

00:32:40 — OpenAI Appoints Retired U.S. Army General to Board of Directors

00:37:37 — OpenAI Voice Mode Update

00:40:29 — Elon Musk Drops Suit Accusing OpenAI of Breaching Founding Mission

00:43:00 — Ilya’s New Venture

00:46:45 — Perplexity Updates

00:56:04 — Should Frontier AI Companies Fund Journalism?

01:01:45 — The AP Fund for Journalism launches

01:05:50 — Pope Francis addresses the G7 Summit

01:08:17 — Political deepfakes top list of malicious AI use, DeepMind finds

01:11:04 — NVIDIA Releases Open Synthetic Data Gen Pipeline for Training LLMs

01:13:33 — Toys "R" Us Branded Sora Video

01:19:43 — WPP unveils AI-powered Production Studio

01:22:19 — Does EU AI Act Require AI Literacy?

01:26:40 — McDonald's to end AI drive-thru test with IBM

01:29:37 — Funding, starting with Mistral

01:31:40 — HeyGen Series A

01:34:29 — Stability AI Secures Significant New Investment and Appoints CEO

01:36:30 — EvolutionaryScale raises $142M for protein-generating AI

01:38:37 — Product Updates starting with Adept

01:42:39 — ElevenLabs Text Reader

01:44:50 — Runway Gen-3 Alpha

01:47:46 — Microsoft Delays Recall Feature

01:49:28 — Microsoft GPT Builder Retired

Summary

Claude 3.5 Sonnet has been released

We just got the launch of Claude 3.5 Sonnet, Anthropic's latest AI model and the first in their upcoming Claude 3.5 family. The new model sets new industry benchmarks for intelligence while maintaining the speed and cost-effectiveness of Anthropic's mid-tier offerings.

Claude 3.5 Sonnet outperforms competitor models and its predecessor, Claude 3 Opus, on a wide range of evaluations. It excels in graduate-level reasoning, undergraduate-level knowledge, and coding proficiency. The company even claims it outperforms GPT-4o.

Not only does Claude 3.5 bring some serious firepower, but it’s also fast—operating twice as fast as Claude 3 Opus, the company’s previously most powerful model.

In vision capabilities, Claude 3.5 Sonnet surpasses previous models, showing particular strength in visual reasoning tasks such as interpreting charts and graphs. Claude 3.5 Sonnet maintains Anthropic's commitment to safety and privacy. The model has undergone rigorous testing and has been trained to reduce misuse.

Claude 3.5 Sonnet is now available for free on Claude.ai and the Claude iOS app, with higher rate limits for Claude Pro and Team plan subscribers. It's also accessible via the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI.
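For developers, access through the Anthropic API takes only a few lines. Here is a minimal sketch using the Anthropic Python SDK; it assumes the `anthropic` package is installed and an `ANTHROPIC_API_KEY` environment variable is set, and the model identifier shown is the one Anthropic published at launch (check the current docs before relying on it):

```python
# Minimal sketch: calling Claude 3.5 Sonnet through the Anthropic Python SDK.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # launch-era model ID; verify against current docs
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Rewrite this workshop description to be tighter and clearer: ..."}
    ],
)

# The response content is a list of blocks; the first block holds the text.
print(message.content[0].text)
```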

Paul and Mike have been experimenting…so you won’t want to miss this.

Suno/Udio Lawsuits

The Recording Industry Association of America (RIAA) has filed lawsuits against AI music generation companies Suno AI and Uncharted Labs Inc., the developer of Udio AI, on behalf of Universal Music Group, Warner Music Group, and Sony Music Entertainment. Suno and Udio both use AI to generate jaw-droppingly good music from simple text prompts in the style of real artists. These lawsuits allege that both companies are unlawfully training their AI models on massive amounts of copyrighted sound recordings.

The RIAA is seeking damages of up to $150,000 per infringed work, potentially amounting to billions of dollars. The lawsuits claim that music generated by these services can sound remarkably similar to copyrighted music, even reproducing authentic producer tags and vocals indistinguishable from famous recording artists.

This legal action follows earlier moves by the music industry to protect their copyrights. Universal Music Group previously sued Anthropic over similar claims, and Sony Music sent warning letters to hundreds of AI companies about using their copyrighted material without permission.

OpenAI Updates

Revenue:

OpenAI's annualized revenue has more than doubled in the past six months, reaching $3.4 billion, according to reporting from The Information based on recent comments made by CEO Sam Altman. This is up from $1.6 billion in late 2023 and about $1 billion last summer.

Most of OpenAI's revenue comes from subscriptions to its chatbots and fees from developers accessing its models through an API. OpenAI also receives a cut from Microsoft's sales of OpenAI models to Azure cloud customers, amounting to about $200 million annually, or roughly 20% of Microsoft's revenue from that business.

The company was recently valued at about $86 billion in a sale of employee shares, putting its valuation at about 25 times forward revenue. An OpenAI spokeswoman stated that the financial details cited by The Information were "inaccurate," without providing further clarification.
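Those figures are easy to sanity-check. A quick back-of-the-envelope calculation using only the numbers reported above (all per The Information, which OpenAI disputes):

```python
# Back-of-the-envelope check on the reported figures (per The Information).
annualized_revenue = 3.4e9   # $3.4B annualized revenue
valuation = 86e9             # ~$86B from the employee share sale

print(f"{valuation / annualized_revenue:.1f}x forward revenue")  # ~25.3x, matching the reported multiple

azure_cut = 200e6            # OpenAI's reported cut of Microsoft's Azure OpenAI sales
print(f"${azure_cut / 0.20 / 1e9:.1f}B implied Azure OpenAI business")  # ~$1.0B total for Microsoft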

Rockset:

OpenAI has acquired Rockset, an enterprise analytics startup, to enhance its retrieval infrastructure across its products. This is OpenAI's first acquisition where it will integrate both a company's technology and its team, a spokesperson told Bloomberg.
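Rockset's specialty is fast indexing and retrieval over fresh data, which is the backbone of retrieval-augmented generation (RAG): retrieve relevant records first, then ground the model's answer in them. The toy sketch below shows the general shape of that pattern; it is illustrative only, with a trivial word-overlap retriever and a stubbed model call, not OpenAI's implementation:

```python
# Toy sketch of retrieval-augmented generation (RAG), the pattern a retrieval
# layer like Rockset's supports. Illustrative only: the retriever is a trivial
# word-overlap scorer and call_llm is a stub, not OpenAI's implementation.

DOCUMENTS = [
    "Rockset provides real-time indexing and search over structured data.",
    "Retrieval infrastructure feeds relevant records into model prompts.",
    "OpenAI earns revenue from chatbot subscriptions and a developer API.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Score documents by shared words; production systems use vector search.
    words = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"[answer grounded in a {len(prompt)}-character prompt]"

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_with_rag("What does retrieval infrastructure do?"))
```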

Multi:

OpenAI has also acquired Multi, a collaboration startup, in another deal with undisclosed terms. Multi's technology allows developers to screen share and work on code together in real time.

This acquisition suggests that OpenAI may be looking to enhance its enterprise products and build more features for real-time collaboration.

OpenAI/Musk Lawsuit:

Elon Musk has dropped his lawsuit against OpenAI and its CEO Sam Altman, which alleged that the company had breached its founding agreement by prioritizing business operations over its mission to build AI that benefits humanity.

The lawsuit, filed earlier this year, claimed that OpenAI had become a "de facto subsidiary" of Microsoft, violating what Musk said was an agreement to remain a non-profit organization dedicated to developing AI for the benefit of humanity. Musk withdrew the complaint just one day before a California judge was scheduled to hear OpenAI's request for dismissal.

This legal battle is part of a larger, ongoing public dispute between Musk and OpenAI. Musk, an early backer and founding team member of OpenAI, had a falling out with the company and is now raising money for his own AI venture, which he positions as an alternative to OpenAI.

That dispute has recently escalated, with Musk threatening to ban Apple devices from his companies if OpenAI's AI software is integrated at the operating system level, citing security concerns.

There’s so much more…we’re just scratching the surface on this episode, as there were far more than three main topics. Plus, Mike shares a really great update in the first minute of the show, right after the introduction. You must tune in!

Links Referenced in the Show

Today’s episode is also brought to you by Scaling AI, a groundbreaking original series designed for business leaders who want to thrive in the age of AI, and help drive AI transformation within their organizations. The Scaling AI course series includes 6 hours of content, complete with on-demand courses, video lessons, quizzes, downloadable resources, a final exam, and a professional certificate upon completion.

Head to ScalingAI.com to pre-order today!

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: If they can put a hundred million in and fund this and then even do it at a broader level, now all of a sudden we can have the local journalism that's critical to democracy, they can do licensing deals to get real time news, and they can use this writing to train future models, but it's still not going to replace what the journalists do.

[00:00:17] So I just, I don't know, when I step back and look at this from an AI model perspective, from a business perspective, from a societal perspective, from a journalism perspective, I don't see anyone losing here. It's like one of those, like, everyone can win, but I don't hear any of them talking about this. I don't get it.

[00:00:34] Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights [00:01:00] and perspectives that you can use to advance your company and your career.

[00:01:05] Join us as we accelerate AI literacy for all.

[00:01:12] Welcome to episode 103 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. We have been on a two week hiatus. I was traveling a little bit. I was in Italy, actually, for about two weeks, 12 days or so. I had a talk over in Bologna and got to take the family to Bologna and then Venice, which I'd never been to.

[00:01:36] It was pretty incredible. And Mike, you've had a busy couple of weeks as well. What's been going on with you?

[00:01:42] Mike Kaput: Yeah, I was out as well, in Florida for a week or so. Yeah, for the last couple of years, me and my wife have been going through the adoption process. So we finally, this past couple of weeks, brought home a little guy.

[00:01:57] And kind of are now a family of three. So we were [00:02:00] in Florida for a week, wrapping some stuff up with that. And, yeah, just brought him home this past weekend.

[00:02:06] Paul Roetzer: Amazing. I cannot wait to meet him. We're so excited for you. Yeah. So just to put some things in perspective, I mean, it's, Mike and I have an important job to show up every week and have, have these conversations and keep everyone posted on AI.

[00:02:20] But taking time away to be with my family, and, for Mike, taking time away to extend his family in an amazing way, it was a pretty, pretty good two weeks spent. So, yeah. No kidding. We are, we are happy to be back, but we took full advantage of the time away. So, lots to talk about, lots to catch up on.

[00:02:42] Keeping things in perspective as we go. and obviously with. 4th of July week, you know, hope everybody has time with their family and friends this week as well. If you're celebrating 4th of July, be safe, have fun. But we're going to get this out of the way. We're going to cover a ton of stuff [00:03:00] today and, give you a lot to think about over the long holiday weekend.

[00:03:04] So today's episode is brought to us by Scaling AI, the new course series that we launched last week. This is an original series that I've spent probably north of 300 hours this year working on, and a couple of years kind of conceiving of and building prior to that. So Scaling AI is built for business leaders.

[00:03:23] It's meant to be kind of a step by step process to, not just adopt and pilot AI in your organization, but to actually scale it. It teaches like state of AI, fundamentals of business, building an AI academy, creating an AI council, developing generative AI policies, responsible AI principles, doing impact assessments, building a roadmap.

[00:03:44] So there's 10 courses. It's about six hours of content. There's, each course has a quiz at the end, and then there's a final 50 question exam. 80 percent or better on the exam gets you a professional certificate of completion. I've actually seen people starting to post their [00:04:00] certificates on LinkedIn already.

[00:04:01] So we have some advanced users who jumped right in. And again, I mean, we announced this at one o'clock on Thursday. So, some people spent their weekend getting their certificates, which was awesome to see. So you can go to scalingai.com. Those courses are live now. And again, it's only six hours, so you can, in a day, get, get that knocked out and, and then really start applying those principles and frameworks and templates to your business.

[00:04:26] So, if you have any questions, feel free to reach out to me. So scalingai.com to check out those courses. We also last week introduced a new company. So there's something I've been working on for a couple of years, basically trying to figure out how to advance the story of AI beyond marketing.

[00:04:46] So as many of our loyal long time listeners know, This show used to be called the Marketing AI Show. So the first, kind of domino in this change was rebranding the podcast to the Artificial Intelligence [00:05:00] Show. As you know, we don't, I mean, we talk about marketing certainly on the show, but this show has become about a much bigger story than just marketing.

[00:05:09] And so we needed a brand for that to live. So I do a lot of public speaking to CEOs, government leaders, heads of universities, and my feeling has always been, the Marketing AI Institute just isn't a logical home base for those people. So if someone wants to go learn more, find out ways we can help them, that's not a natural place for, you know, the C suite and leaders to go.

[00:05:30] So we've known we needed this other brand and I've been working on ways to do it for a long time. So, back in 2022, I had announced SmarterX as the consulting arm of Marketing AI Institute. And what we've now done is spun that brand out into its own AI research and consulting firm. And that firm is, is built to educate and empower leaders, to help them reimagine business models, reinvent industries, and basically rethink what's possible.

[00:05:58] So that, that brand is [00:06:00] now live, smarterx.ai. You can go check that out. It is mainly going to be focused on research and then kind of one to many. So events, education, workshops, speaking engagements. So we will do consulting, some limited advisory work with some very select companies. But basically the premise here is, a lot of what we talk about on the podcast.

[00:06:23] And so I think what you're going to see is a lot of the research we're going to do through SmarterX is, is going to live in real time on the podcast. We'll probably debut a lot of research. We'll probably talk about a lot of the initiatives, but the main thing is where is AI going, what happens next?

[00:06:37] What's it mean to business, the economy, educational systems, society, democracy, humanity? What's it mean to you and your industry, your career? What's it mean to me, you know, my family, my friends, what can accelerate AI development, what can slow it down, and then most importantly, what do we do about it as professionals, as people, as business leaders, and so that's, that's where we're going with [00:07:00] SmarterX.

[00:07:00] So again, smarterx.ai, you can go check that out. For Marketing AI Institute, for our loyal community members and followers of the Institute, nothing's changing there. That team, the mission is staying in place. It is continuing to focus on marketing practitioners and leaders through our content and courses and events.

[00:07:17] We've got our marketing AI conference coming up September 10th to the 12th. That's MAICON, M A I C O N. And so, yeah, everything on the Institute is continuing to grow. The Institute's doing great. We're up to, I think, north of 70,000 subscribers, 75,000 subscribers now. So the Institute continues to, to go, but, SmarterX, again, SmarterX.

[00:07:37] ai is being built to tell the story of AI beyond marketing. And it is really focused on business leaders and AI-forward organizations that want to kind of reinvent what's possible in their industry. So, yeah, so it was a busy week, man. I got back from vacation, I got back like the Friday before, and I was like, okay, so we were launching a new company and the Scaling AI [00:08:00] series on Thursday.

[00:08:01] So my week last week was sort of a mad dash to get everything ready. So yeah, it was a, it was a good week though. I mean, weeks like that are fun. It's, it's energizing and, you know, it's the adrenaline gets going and, and you just pull the late nights and you get everything ready to go. So thanks to our team that did an incredible job getting everything ready.

[00:08:24] Mike Kaput: All right, Paul. So we have been off for a couple of weeks here and AI has not, unfortunately, slowed down while we were out of the office. So we've got kind of a huge mega episode today where we're going to cover all the developments that have been happening the last couple of weeks. We've got

[00:08:45] at least a couple dozen topics on the docket today. So given that, I am going to dive right in. Sound good?

[00:08:53] Paul Roetzer: Yeah, let's go.

[00:08:54] Mike Kaput: All right.

[00:08:55] Paul Roetzer: I, this may be an hour and a half or maybe, I don't even know. Yeah, this

[00:08:58] Mike Kaput: may be a long one. We'll see. [00:09:00] You might want to put that on the

[00:09:00] Paul Roetzer: 1.5, 1.75 speed for this one. We'll see.

[00:09:04] Mike Kaput: So, first up, we just actually got the launch of Claude 3.5 Sonnet, which is Anthropic's latest AI model and the first model in their upcoming Claude 3.5 family. This new model sets some industry benchmarks for intelligence while maintaining the speed and cost effectiveness of Anthropic's mid tier offerings.

[00:09:29] Now, they actually say that Claude 3.5 Sonnet outperforms some competitor models and its predecessor, Claude 3 Opus. It excels in graduate level reasoning, undergraduate level knowledge, and coding proficiency. The company even says it outperforms GPT-4o. Not only does Claude 3.5 bring some serious firepower, but it is also fast.

[00:09:54] It operates twice as fast, according to Anthropic, as Claude 3 Opus, which is the [00:10:00] company's previously most powerful model, which only came out a few months ago as it is. In vision capabilities, Claude 3.5 Sonnet surpasses previous models, showing particular strength in visual reasoning tasks, such as interpreting charts and graphs, and it also comes with Anthropic's long time commitment to safety and privacy.

[00:10:22] They say the model has undergone rigorous testing and has been trained to reduce misuse. If you would like to try out Claude 3.5 Sonnet, it's now available for free at Claude.ai and in the Claude iOS app, and if you have a Claude Pro or Team plan, there are higher rate limits for your paid plan. You can also access it via the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI.

[00:10:51] Now, Paul, first up, I, you know, I know you've been experimenting quite a bit with Claude 3.5 Sonnet, I have as well. Like, what did you think of the new [00:11:00] model?

[00:11:01] Paul Roetzer: It's been very impressive in the early tests I've done. So one of the first use cases I gave it was I was, so we have these four workshops that we offer through SmarterX now and through the Institute.

[00:11:13] So there's an applied AI, there's strategic AI leaders, there's prompting for professionals, and there's an AI innovation workshop. And so we have drafts of those abstracts for those workshops. And, so as I was writing the copy and finalizing everything for SmarterX.ai, I, I took those descriptions and I gave them to Claude and I was like, you know, here's what I'm trying to do.

[00:11:36] Like help me improve these, make these better. And so I, I kind of bounced around with a couple of prompts and, and then it outputs them. I was like, that's just better than what I've done. Like, it's a better way to write these descriptions and it's using everything I had created, but it's just an improved version.

[00:11:52] And so I played around with that a little bit and got to a point where I was really happy with the output of the first workshop. And I just took the other workshops and ran it through and [00:12:00] said, just do the same thing for this one. And it rewrote it within that format. And so I did that for all four of them.

[00:12:06] And, that was it. Like they were, they're ready to go. I sent them to the team. I said, here, put these on the website. These are the new descriptions for the workshop. Now, this is something that would have taken me like, you know, easily two to three hours. And for me, when I'm writing something like that, I have to be locked in, like turn off alerts, turn off everything else.

[00:12:24] I have to like deeply, deeply think about what I'm doing and stay focused for two to three hours. I did it all in like 20 minutes. And that was while I was doing a couple other things actually. And the reality is like, and I think I'm like, you were on that thread. I said, it's just better than me at this.

[00:12:41] Like, and I'm okay with that. Like, this is a task where Claude is now superhuman compared to me, and I've been writing these descriptions for 24 years. Like I've been doing public speaking in workshops for a really long time and it's just better than me. And so it's one of those where you have to put in perspective, I'm paying [00:13:00] $20 a month for Claude, and you hear people complain about the $20 a month thing all the time. And so just this single task, like, I would have paid happily $500 to $1,000 to have someone professionally write these. If we rewound back to pre-2021, when I owned an agency and Mike and I were working at a marketing agency together,

[00:13:22] we would have charged $1,000 probably to write these descriptions for someone. So, I know relatively what this task is worth doing. So, in other words, I got 2 to 3x the value of my annual subscription for Claude, $240 a year, to do a single task in 20 minutes. Like the rest of the year, it's just like, so I think, yeah, it's really impressive, but more importantly, it demonstrates the value that can be created with these tools when you know how to use them or you have the right use cases for them.

[00:13:57] So yeah, I, I mean, [00:14:00] people seem to really love it. It grades well, obviously, on their benchmark tests. They introduced this new artifacts capability that people really seem to love, where, you know, as you're doing your prompts and as you're looking at the outputs, it can create snippets and text docs and website designs alongside you, and then you can manipulate those and edit them.

[00:14:19] People are, you know, loving that. I've seen some crazy applications of that, where people are designing interactive graphs and tools and, you know, web pages. And then they also introduced the, this projects feature, which is basically like GPTs for Anthropic, where you can give it a set of documents and then give it some system prompts to guide it how to, you know, respond to specific prompts.

[00:14:39] So they're definitely moving toward the enterprise play. I mean, that's the thing that just seems really obvious here is they're continually expanding on the enterprise components. And then the other thing that I found really interesting was, there was a post they put up about Claude character. So how they, how they're tuning their models [00:15:00] to not just not do harm, but to actually respond in specific ways.

[00:15:05] So we'll put the link to the, in the show notes, but they said, companies developing AI models generally train them to avoid saying harmful things and to avoid assisting with harmful tasks. The goal of this is to train models to behave in ways that are harmless. But when we think of the character of those we find generally admirable,

[00:15:25] we don't just think of harm avoidance. We think about those who are curious about the world, who strive to tell the truth without being unkind, and who are able to see many sides of issues without becoming overconfident or overly cautious. So they go on to explain, like, we're actually doing this character training.

[00:15:42] So I think in 3.5, you're going to start to feel a little bit of a difference in the character of the model because they're very intentionally training it to be a type of character, not a person. They call that out. Like, we're not saying it's like a person, but you want it to [00:16:00] have these traits that a good person would or someone you would want to have a conversation with.

[00:16:04] So, they're just really busy. Like Anthropic did a lot in the two weeks we were away.

[00:16:10] Mike Kaput: Yeah, no kidding. And it seems like they are, with some of these additional features, moving into almost creating these tools not just as a chat interface, but as a collaborative workspace.

[00:16:21] Paul Roetzer: Yep. Yeah. And they, cause I, you called out the team thing.

[00:16:25] So they had, it says Claude team users can now share snapshots of their best conversations with Claude, into your team shared project activity feed. We are developing new modalities and features to support more use cases for businesses, including integrations with enterprise applications. Teams are also exploring memory, which we've talked about with OpenAI, which will enable Claude to remember users' preferences and interaction history as specified, making their experience even more personalized and efficient.

[00:16:51] So they're all kind of working toward the same direction here for sure. We should, we don't have a Claude team license, right? I don't believe so. No, we have, we have [00:17:00] ChatGPT. You would think I would know this, but, I mean, again, these, it's shocking. Like these things move so fast. I'm even saying like, do we already have Gemini 2 or

[00:17:10] 2.5? Like you get lost in all the details sometimes. So yeah, that's one we may have to look into and kind of compare it to. We do have Gemini for the team and then we have ChatGPT team, but we should probably be testing Claude team as well as they introduce these new capabilities.

[00:17:28] Mike Kaput: All right, so next up, the Recording Industry Association of America, the RIAA, has filed some major lawsuits against AI music generation companies, specifically Suno AI, which we've talked about in the past, and Uncharted Labs, Inc.,

[00:17:43] which is the developer of Udio AI, which we've also talked about. These are both music generation tools that use AI to, based on simply a text prompt, create hyper-realistic, really, really jaw-dropping music in kind of any style and in the [00:18:00] tone of any artist you desire. So this lawsuit is being filed on behalf of Universal Music Group, Warner Music Group, and Sony Music Entertainment.

[00:18:09] And the lawsuits allege that both companies are unlawfully training their AI models on massive amounts of copyrighted sound recordings. So the RIAA is actually seeking damages of up to $150,000 per infringed work, which could potentially amount to billions of dollars. The lawsuits claim that the music being generated by Suno and Udio sounds remarkably similar to a range of copyrighted music.

[00:18:41] In some cases, they allege that it's even reproducing authentic producer tags and vocals that are indistinguishable from real famous recording artists. And, you know, this legal action kind of follows some earlier moves by music industry players to kind of protect their copyrights. Universal Music Group has [00:19:00] previously sued Anthropic over some claims around lyrics, and Sony Music has sent warning letters to hundreds of AI companies in the past about using their copyrighted material without permission.

[00:19:13] So, Paul, this is definitely a landmark lawsuit in this space. Like, how in the wrong are Suno and Udio here, in your opinion?

[00:19:22] Paul Roetzer: Yeah, we, I mean, so we're not IP attorneys, we say it all the time. This will be for the courts to decide. They're admitting to doing what they're accused of. Like, they're not hiding from the fact that they are doing the stuff that they're being accused of within these lawsuits.

[00:19:37] They either just don't think it's illegal, or they think they're going to be able to bend the laws to the future they envision. So, Udio tweeted, like a day or two after this came out, groundbreaking technologies entail change and uncertainty. Let us offer some insight into how our technology works.

[00:19:52] Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has [00:20:00] listened to and learned from a large collection of recorded music. Now, pause for a second. The argument that people will make against this argument from the AI companies is that no human can listen to all recorded music in human history and synthesize and be able to output something from it.

[00:20:21] And so they're saying, like, basically the laws don't, don't cover the fact that this is a different way of learning and processing information. So Udio went on to say the goal of model training is to develop an understanding of musical ideas, the basic building blocks of musical expression that are owned by no one.

[00:20:37] Our system is explicitly designed to create music reflecting new musical ideas. We are completely uninterested in reproducing content in our training set and, in fact, have implemented and continued to refine state of the art filters to ensure our model does not reproduce copyrighted works or artists voices.

[00:20:56] We stand behind our technology and believe that generative AI will [00:21:00] become a mainstay in modern society. So that is the argument they will make in court. Then there was a tweet that I, I really, really liked, found interesting, from Bilawal Sidhu; he is an ex-Google product manager and host of The TED AI Show.

[00:21:14] So he said in a tweet, and I, I thought this really summarized how Silicon Valley approaches this well, publicly available data is a euphemism for scraping pretty much anything you can access via an internet browser, including YouTube, podcasts, and clearly commercial music libraries. And as a consequence, pissing off record labels and celebrity artists.

[00:21:35] The crux of the issue, what is ethical is not always legal, and what is legal is not always ethical. Silicon Valley's modus operandi. Our technology is transformative. It is designed to generate completely new outputs, not to memorize and regurgitate pre existing content. That is why we don't allow user prompts that reference specific artists, said Suno's CEO, [00:22:00] Mikey Shulman, in a statement.

[00:22:02] Then this is Bilawal. In other words, this practice will absolutely continue until courts rule that training generative AI models on copyrighted material is not considered fair use, that the output is not transformative, and/or restricted through a new regulatory framework. Otherwise, the Silicon Valley dictum applies.

[00:22:22] Don't bother asking for permission. Ask for forgiveness later. Keep making the case that your usage is transformative, and then as your startup raises more money, retroactively strike licensing deals with the parties whose content you trained on, much like OpenAI. And I don't like that those two tweets basically summarize where we are.

[00:22:42] This is going to keep happening. These AI startups will continue to take everything available on the internet. They will claim fair use and transformative purposes. The people whose content it is will sue them for doing it. And until the courts decide one way or the other [00:23:00] definitively, this is, this is going to continue happening.

[00:23:02] The one other thing I, I thought was really interesting, there was a, a tweet from Udio. Let's see, this is from June 28th. Oh, by the way, we are recording this on Monday, July 1st. So I didn't say that up front, I usually say that. They tweeted, New feature, rate songs, get credits. You'll listen to pairs of songs and select your preferred track, helping to improve Udio and earning credits for each pair you rate.

[00:23:25] This is totally reinforcement learning through human feedback, but for credits. Yep. It's actually pretty smart. Like, I don't know if anybody else is doing something like this, but, um, it's basically what's going to happen is they're going to give you two outputs and, and you, the human, will say, I like this output better.

[00:23:41] And you're now training their AI model how to do future outputs. That's how reinforcement learning from human feedback works. I don't know, that was kind of an interesting side note. So, yeah, I, I, we're just going to keep hearing about these lawsuits. And until somebody says one way or the other, but in the

[00:23:57] U.S. it would have to be the Supreme Court eventually, I would imagine. [00:24:00]
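The rate-two-songs mechanic Paul describes is the standard recipe for collecting RLHF preference data: show two outputs, record which one the human prefers, and train a reward model to score the preferred output higher. A minimal sketch of that pairwise loss, illustrative only and not Udio's actual system:

```python
# Illustrative sketch of the pairwise preference loss used in RLHF-style
# reward modeling: penalize the model when the "chosen" output does not
# outscore the "rejected" one. Not Udio's actual system.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # -log(sigmoid(r_chosen - r_rejected)); small when the reward model
    # already ranks the pair the way the human rater did.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(preference_loss(1.2, 0.3))  # ~0.34: model agrees with the rater
print(preference_loss(0.4, 0.9))  # ~0.97: model disagrees, larger penalty
```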

[00:24:02] Mike Kaput: Alright, so next up we have a ton of different OpenAI updates. So we're going to hit these in rapid fire fashion, one by one. First up is that OpenAI's annualized revenue has more than doubled in the past six months. It's reached

[00:24:16] $3.4 billion, according to The Information, and they're basing that on some recent comments made by Sam Altman. This is up from $1.6 billion in late 2023, and about $1 billion as of last summer. Most of this revenue appears to come from subscriptions to OpenAI's chatbots and fees from developers who are accessing its models through the API.

[00:24:40] OpenAI also apparently receives a cut from Microsoft sales of OpenAI models to Azure cloud customers, and The Information says that amounts to about $200 million annually, or roughly 20 percent of Microsoft's revenue from that business. Now, OpenAI was recently valued at about [00:25:00] $86 billion in a recent sale of employee shares, and that puts its valuation at about 25 times forward revenue.

[00:25:08] Now, The Information also said an OpenAI spokeswoman did say some of the financial details cited here, which again, many of them came from internal commentary by Altman, were, quote, inaccurate, and she did not provide further clarification there. So take them with a grain of salt. But Paul, this seems like a pretty significant jump forward.

[00:25:30] I mean, what does this mean for OpenAI? What does it tell us about the market for generative AI right now?

[00:25:38] Paul Roetzer: I think for, for the business people on the call, or, you know, the podcast, we're just at the very beginning phase of generative AI adoption in enterprises. Like this, all of this is happening at the earliest phases of adoption.

[00:25:54] OpenAI is being valued at, you know, rumor is actually a hundred billion or more. [00:26:00] And, I really think there's a chance they're a trillion dollar company within two to three years. Like I, I actually think they'd have to have missteps or some major limitations on future development of their models to not become a trillion dollar company.

[00:26:15] Which, if I'm not mistaken, makes them the most valuable private company in the world within, you know, two to three years. I just, I'm kind of like intrigued to see where does this go? Like, at some point, do they find a way to get out from underneath the original non profit structure and, and IPO at some point?

[00:26:35] Keep in mind, Microsoft owns like, I think, 49 percent of OpenAI. What does that mean to Microsoft as the value of OpenAI, you know, starts skyrocketing? So I don't know. I'm just more intrigued by all of that, but I think it's just a continued demonstration of the significant impact this is going to have on enterprises and the way that we do, like, the future of work.

[00:26:58] So, and then we also [00:27:00] already covered, like, Anthropic's going the same direction. They're making this massive enterprise play. And then I actually listened to a podcast yesterday or an interview with Aidan Gomez of Cohere. We haven't talked about Cohere in a little while, but he was one of the authors of the

[00:27:13] Attention Is All You Need paper in 2017 that invented the transformer. So Cohere is taking a kind of a different approach than OpenAI and Anthropic, but they are going after the enterprise as well. But in their case, they're not trying to train these like massive general models that are going to compete, per se, with,

[00:27:31] like, a GPT-5 or a Claude 4 when those come out. They're trying to build these like very finely tuned models. And they're going to, right now they're going horizontal, but they're going to go vertical, it sounds like, in the near future, and start building specific applications that focus in on like RAG, retrieval-augmented generation, and then working with enterprise tools, but

[00:27:51] by industry, so like for lawyers and accountants and HR professionals. So it's just so fascinating to watch this market develop, but the values [00:28:00] are just astronomical for the companies that are making it work. You know, not the Inflections, or, we'll talk about it in depth in a little bit here, but for the ones that are figuring this out.

[00:28:10] There's just a massive runway here.

[00:28:13] Mike Kaput: So OpenAI has also been busy making some acquisitions. And one of these is a company called Rockset, which is an enterprise analytics startup. And they have acquired this company to enhance retrieval infrastructure across OpenAI products. This is actually OpenAI's first acquisition, where it will actually integrate both the company's tech and its team.

[00:28:37] Bloomberg reported, per a spokesperson. The terms of the acquisition, though, were not really disclosed yet. But based on what we know, Paul, like, what do you kind of make of this acquisition in particular? It seems like it's for quite a specific purpose in OpenAI's tech stack.

[00:28:55] Paul Roetzer: As they focus more on enterprises, it becomes more important that these things are [00:29:00] accurate.

[00:29:00] So right now, these language models are imprecise, so it's very hard to use them for use cases within enterprises that require precision and accuracy. So in order to make the models more accurate, more precise, more reliable, so like, let's say, you know, eventually we get to like a 99.9 percent reliability.

[00:29:19] I don't even know if that's possible, but let's say 99 percent reliability that what they output is correct. To get there, you need to be better at retrieving data, analyzing the data, and then outputting that into whatever the final product is that you're creating, like a report or an email or proposal.

[00:29:35] So, it appears this enables them to advance their retrieval-augmented generation capabilities, the ability to infuse data into outputs. On the Rockset site, they list a number of enterprise uses that would certainly, maybe, imply where OpenAI intends to use this tool. So they highlight personalization, dynamic pricing, customer segmentation, fraud detection, [00:30:00] spam fighting, threat detection, real time reporting, live dashboards, monitoring and alerting, and semantic search.

[00:30:07] Those are all, obviously, things that every enterprise does and needs to continue to improve. So I think it's, it's more of a prelude to Models that are more reliable, that, that allows the application across industries where precision matters a lot. So if you think about marketing, most of the use cases for these models is creative.

[00:30:30] Like it's, it doesn't have to be precise. It's ideation, it's brainstorming, it's developing drafts. If you start using it for business intelligence purposes, or for accounting, or for HR practices, or legal, like, you know, building briefs, it can't be wrong. And, and so until they solve for precision, we're not going to have massive adoption within all those industries.

[00:30:53] So it seems like a smart play and kind of an obvious play.

[00:30:59] Mike Kaput: So OpenAI has [00:31:00] also made another acquisition of a company called Multi, which is a collaboration startup. And this is another deal with undisclosed terms, but basically Multi's technology allows developers to screen share and work on code together in real time.

[00:31:16] So this acquisition seems to suggest that OpenAI may be looking to, again, kind of enhance its enterprise products and build some more features for some type of collaboration here, but I did want to ask you, Paul, like what's going on here? Like, why does a collaboration app make sense for OpenAI?

[00:31:34] Paul Roetzer: I, I don't, I don't know if I'm going to be right on this one or not, but my initial reaction is this has to be an agent play, like it's not actually about letting you and me and 10 other people in our company work together on a screen to do coding because that's not the future of coding.

[00:31:47] So, I just, I don't know. I looked at it. I tried to find additional details and I couldn't really find anything. I was looking again at the, what's the name of the company? Multi. Multi. Multi. [00:32:00] Like I was looking at their site and they've got like, oh, simultaneously screen sharing, keyboard first, stay in the flow with commands and customizable shortcuts, shared control.

[00:32:07] It's like, okay, like none of this is that interesting to the future of work. Hmm. But, if AI agents need to work with each other, maybe they need to share screens. So, I don't know. I actually, I think this is probably a prelude to something that's gonna be built into agents working together.

[00:32:22] Mike Kaput: Gotcha. Okay, yeah, that makes sense.

[00:32:24] It seems like if they were just looking for some collaborative functionality, that's probably something they have the ability to bake in.

[00:32:31] Paul Roetzer: I would think. Yeah, just nothing about this seemed on the surface like it's going to be what it appears.

[00:32:40] Mike Kaput: All right, so some more OpenAI news. Retired U. S. Army General Paul M.

[00:32:45] Nakasone has joined OpenAI's board of directors. So Nakasone is a leading expert in cybersecurity and he was actually pivotal in creating the U.S. Cyber Command. He was also its longest serving leader. [00:33:00] Interestingly, he also led the National Security Agency, the NSA. He is going to join OpenAI's Safety and Security Committee, which is the board committee that makes recommendations on critical safety and security decisions for OpenAI's projects.

[00:33:17] Now, Paul, this is hitting some of those notes that we talked about when we talked about that situational awareness essay that we recently covered from Leopold Aschenbrenner, where he basically theorizes, like, given the acceleration and importance in AI, national security apparatuses and national governments are going to start getting more involved in private AI labs and companies.

[00:33:41] Like, is that what's going on here?

[00:33:45] Paul Roetzer: I think this is going to end up being a pretty significant addition to the board. So, certainly, board appointments can go under the radar for obvious reasons. I mean, it's just usually not that interesting to people. You don't [00:34:00] appoint this person without a much bigger story behind why it's happening.

[00:34:04] And so the NSA, if people aren't familiar, leads the U.S. government in cryptology that encompasses both signals intelligence, or SIGINT, insights and cybersecurity products and services that enable computer network operations to gain a decisive advantage for the nation and our allies. So they provide intelligence support to military operations through signals intelligence, and then cybersecurity personnel, products, and services ensure military communications and data remain secure and out of the hands of adversaries.

[00:34:36] So in essence, the NSA conducts extensive global monitoring and data collection, often outside of the oversight of Congress. This leads to lots of questions about the purpose of the NSA, whether or not they overstep their bounds, whether or not it's an invasion of privacy for U.S. citizens, for [00:35:00] people abroad.

[00:35:01] So the NSA isn't without controversy for sure. So my initial take is for OpenAI, they need to secure their models. So we learned that in Leopold's thing. We've heard that from ex-OpenAI employees, that they're not doing enough to prevent these models from being taken by state actors, like outside of the

[00:35:19] U.S. They certainly themselves have to worry about cyber threats to their company, to their models. Like OpenAI, Anthropic, they need to do more to protect themselves from cyber threats. And the models themselves have implications of use with cyber threats, so being able to use an advanced model to conduct cyber espionage and activities, like, so there's lots of concerns.

[00:35:42] You're building a very powerful thing, and you have to protect that thing at all costs, so having someone who does that at the highest levels of the U.S. government makes a lot of sense. On a broader scale, these models are now, or are very quickly becoming, a matter of [00:36:00] national security and, and power.

[00:36:02] It's, it could be OpenAI in part showing to the U.S. government they are serious about security as efforts for laws and regulations ramp up. So if the U.S. government comes calling, you know, over the next 12 to 24 months saying we want to more heavily regulate these, OpenAI now has the ultimate insider

[00:36:22] at the highest levels that can help navigate that process to a favorable outcome for OpenAI, whatever that looks like. It does open the possibility that OpenAI is, is going to be used for military and defense purposes. They historically weren't allowed to, I don't think. I think that was part of their governance.

[00:36:39] I believe that was removed at some point. So now you have to wonder about like OpenAI's models being applied to the government for military and defense purposes and monitoring purposes and espionage and things like that. And then I've seen a couple of people tweet, you know, along the lines of, if you don't think the government has [00:37:00] access to their models, like they do now, like that someone like this isn't being put in a position like this without deep access to these models so the government can figure out what's going on.

[00:37:09] And then you can let all the conspiracy theory stuff do what, do what they do with these kinds of, I didn't even go down that rabbit hole. I'm sure there's all kinds of interesting threads about conspiracies related to stuff like this. But yeah, I think there's an obvious OpenAI purpose, but I think there's probably a much broader story here that probably starts to get into some of the situational awareness topics we talked about on episode 102, I think it was.

[00:37:35] Mike Kaput: All right, so OpenAI updates. OpenAI has also announced a delay in the rollout of ChatGPT's new advanced voice mode feature. So if you recall, we talked about this was originally unveiled in May. There were plans to quickly release this feature to paying users, but the company now says it needs more time to refine the technology.

[00:37:58] Now, if you recall, this advanced [00:38:00] voice mode is like a step above what is currently in ChatGPT right now. It aims to understand and respond with emotions and understand and respond to nonverbal cues and basically bring AI conversations closer to natural human interactions. And OpenAI back in May demoed this quite extensively.

[00:38:18] Now, OpenAI has explained that while they had initially planned to start an alpha release to a small group of ChatGPT Plus users in late June, there have been some lingering issues that have forced them to push this back. Now, this feature may not launch for ChatGPT Plus customers until the fall. So, Paul, what do you make of this delay?

[00:38:41] I mean, OpenAI certainly had a couple of missteps around Voice with the Scarlett Johansson controversy, and now this. Is this something to kind of be worried about?

[00:38:51] Paul Roetzer: I don't know. I mean, from what I had seen, from people who had had access to it, or at least had demonstrations of it, it seemed like it was [00:39:00] kind of ready to go.

[00:39:02] Like it wasn't that far from a technological capability standpoint from being publicly released. So I would guess there's just more on the back end here. And I think it's just a, it's going to be a very important product launch and it's going to have lots of uses. Some of them may be not good uses.

[00:39:20] Maybe the NSA thing has something to do with it. I don't know. Like maybe there's bigger concerns around cyber risks related to the release of this kind of technology. And it might just be more time to prepare for the implications of this technology on society. I, I don't know. It could be similar to why they're not releasing Sora yet.

[00:39:39] Maybe they're just, they don't think we're ready, that the defenses aren't in place yet, to allow this kind of technology out there. I'm not sure. I'll be interested to see what happens, but I could, I could imagine a very fascinating fall. 'Cause I don't know that we're going to get like GPT-5 by the end of summer.

[00:39:59] But [00:40:00] if you start thinking about all this stuff stacking up, like they've got Sora, they've got the voice. We know GPT 5 is at least in training, if not already in, you know, deep red teaming. So I could imagine a fall event where OpenAI just drops it all at once and it's Sora and voice and everything baked right into GPT 5 like multi modal from the ground up kind of thing.

[00:40:20] So I don't know, maybe they're just holding it all to kind of roll it out together.

[00:40:26] Mike Kaput: All right, our last OpenAI-themed update this week is that Elon Musk has dropped his lawsuit against OpenAI and CEO Sam Altman. This lawsuit alleged that the company had breached its founding mission by prioritizing essentially business operations and profits over its mission to build AI

[00:40:45] that benefits humanity. This lawsuit claimed that OpenAI had become a, quote, de facto subsidiary of Microsoft, and it violated what Musk said was an agreement to remain a non-profit organization, again, dedicated to kind of [00:41:00] altruistic ends, not, you know, profit over people. Now, Musk actually withdrew this complaint just one day before a California judge was scheduled to hear OpenAI's request to dismiss it.

[00:41:11] And of course, this legal battle is part of a larger, ongoing public dispute between Musk and OpenAI, because Musk has kind of fallen out with the company after being an early backer, a founding team member of it when it was strictly a non-profit. And of course, he's now in the AI game as well. So, Musk most recently threatened to actually ban Apple devices from his companies if OpenAI's software was integrated into them at the operating system level.

[00:41:38] Citing security concerns, but really, Paul, like we've talked about in a previous episode, this is all just kind of Elon's beef with OpenAI and with Sam Altman. Has he realized he doesn't have a case here? Is the PR juice just done with this thing? Like, what's going on here?

[00:41:55] Paul Roetzer: I would imagine that he, I mean, he knew he didn't have a case probably [00:42:00] from day one.

[00:42:00] I think it was just trying to, you know, create a headache for them. So I wouldn't be surprised at all if it's just a legal thing where it's like, obviously this is going to go nowhere and it's going to be a waste of your time. And there might be discovery on both ends that he doesn't want to have to go through.

[00:42:14] So it could just be as simple as that. Maybe there's something more to it. You know, maybe they've, maybe they've come to a peace. I don't, I don't know. Like maybe we find out Sam and Elon have made their peace and moved on. I highly doubt that. But, yeah, I would guess it's just a legal thing. And, and Elon's got enough other stuff going on.

[00:42:35] It's not worth the headache. Has he, I don't think he's tweeted about this. I would be, I would be fascinated, 'cause he usually will drop what actually happened in a, like a reply to somebody else's, like, thread, like it just randomly shows up and explains it. So as far as I know, we haven't gotten an explanation from Elon in an official tweet or reply to somebody else's or anything like that, but we'll keep an eye on it.

[00:42:57] Mike Kaput: All right. So switching gears here, some more [00:43:00] news, Ilya Sutskever, who is co-founder of OpenAI and is now an ex-employee of the company, has actually come out of kind of his internet silence and unveiled a new venture called Safe Superintelligence, Inc. or SSI. And SSI aims to create a powerful and safe artificial intelligence system within a pure research organization.

[00:43:22] And Bloomberg says this, quote, has no near term intention of selling AI products or services. Sutskever told Bloomberg that the company's sole focus will be on developing safe superintelligence, free from the pressures of product development and market competition. The venture doesn't have a ton of details right now, but we do know there are two co-founders alongside Sutskever.

[00:43:46] Daniel Gross, who is a well-known investor and former Apple AI lead, and Daniel Levy, who worked with Sutskever at OpenAI on training. Sutskever, interestingly, describes their [00:44:00] approach to AI safety as akin to nuclear safety, rather than the typical trust and safety measures we hear about, typically, from these companies.

[00:44:08] So, Paul, we've followed Ilya for a very long time. Like, what do you think of his next move here and its kind of likelihood for success?

[00:44:17] Paul Roetzer: It's an interesting model. So they're going to be based in Silicon Valley and Tel Aviv, the two offices. It's, I think they described it as like a small crack team. Like they're not planning on having a massive team here because it's not, they don't need marketing, sales,

[00:44:30] customer support. It is a pure research play. And again, like kind of like what OpenAI was meant to be in the early days. Not disclosing funding, but one of the co-founders did confirm that's not going to be a problem. Like, we have access to all the money in the world we could ever want. I would imagine they, they have, there's no product roadmap, no path to revenue.

[00:44:52] I, I wouldn't be surprised if they raised one to two billion dollars initially, just for talent and chips. They're going to, they can pay, they're going to be able [00:45:00] to pay whatever they want to pay for the best researchers in the world.

[00:45:08] I would think, and again, there's probably, we probably won't get any confirmation of this for years from now, but I would think this is going to be, if not have significant government funding, certainly have access to significant government funding if they chose to have it, so I could see the U. S.

[00:45:28] government, certainly the Israeli government, willing to put funding into it. A massive amount of money into this kind of research. It's the kind of thing DARPA, you know, Defense Advanced Research Projects Agency in the US. This is the kind of stuff they would fund in a second. So I don't know that we're going to hear much about this, honestly, because they're not going to release products.

[00:45:47] It's going to be whatever they choose to share, but it's going to be a pretty stealthy organization. And, and my guess is they'll raise billions, if not tens of billions of dollars and. We may not hear [00:46:00] from them for another six to 12 months. So I don't know, we'll keep an eye on it, but I think this one's going to be pretty close to the best.

[00:46:06] Mike Kaput: It's kind of funny, that and the OpenAI board topic. I remember one of the seminal books you and I read at the beginning of the Institute was The Pentagon's Brain, about DARPA, and that stuff appears to be coming full circle. It was a little quiet for a few years there and now we're back.

[00:46:22] Paul Roetzer: I've been wanting a Pentagon's Brain part two for years, like, yeah, so if this is interesting stuff to you, Mike's right, like The Pentagon's Brain is a great book and it tells the story of DARPA and the

[00:46:35] S. government's efforts to emulate a human brain going back decades, so yeah, it's the sci fi stuff is real, I'll say that.

[00:46:45] Mike Kaput: All right, so we've got a couple other news items about Perplexity. Two kind of big things going on, neither of them good, unfortunately, because Perplexity is now facing even more scrutiny over its business practices and the [00:47:00] accuracy of its AI-powered search engine.

[00:47:02] So we had talked about how there have been some lingering issues around Perplexity and how it sources information. Well, WIRED did a recent investigation that uncovered some concerning issues with the platform. They found that, despite claiming to respect a website's robots.txt file, Perplexity appears to be doing some unauthorized web scraping, according to WIRED.

[00:47:26] It is occasionally providing inaccurate summaries and hallucinations. It's producing things that are either not totally correct or completely fabricated, and this includes false claims about, say, news reports that never occurred. So when you're getting these kind of natural language search results from Perplexity, you're assuming they are factual, like something you would get from Google.

[00:47:48] That doesn't seem to be the case in every search. And then there's a lack of transparency: Perplexity's methods for accessing and summarizing content are unclear. [00:48:00] WIRED raised some concerns that they may actually just be using reconstructions based on metadata rather than directly reading some of the articles they are then summarizing for you.
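For context on the robots.txt piece: robots.txt is just a plain-text file of crawl rules that well-behaved bots voluntarily honor; nothing technically stops a scraper from ignoring it. Here's a minimal sketch of how a compliant crawler checks it, using Python's standard library (the domain and the "ExampleBot" user-agent are made-up placeholders, not Perplexity's actual crawler):

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt (hypothetical example domain).
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A compliant crawler asks permission before fetching each URL.
url = "https://example.com/articles/some-story"
if rp.can_fetch("ExampleBot", url):
    print("Allowed to crawl:", url)
else:
    print("Disallowed by robots.txt:", url)
```

The key point in WIRED's reporting is that this check is entirely voluntary; a crawler that skips it faces no technical barrier, only reputational and potentially legal consequences.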

[00:48:13] And then, of course, and we'll talk about this in the next topic a little further, there are copyright issues. Forbes has accused Perplexity of plagiarizing its content. So there's this issue that if you can now create Pages in Perplexity based on publicly available content, you are undercutting the people who are actually publishing that.

[00:48:32] Perplexity's CEO, Aravind Srinivas, has disputed some of the findings, but didn't really provide specific rebuttals to what WIRED laid out in their report. So, Paul, it certainly seems like the outcry against some of how Perplexity does things is growing. Like, how concerned should users be here?

[00:48:55] Paul Roetzer: So, in an effort to be as balanced as possible, I actually went to [00:49:00] the effort of listening to almost four hours of Aravind's interviews last week. He did the Product-Led AI podcast with Seth Rosenberg, which is about a 30-minute show. And then he did a three-hour podcast recently with Lex Fridman. So, I listened to both of those.

[00:49:15] I came away from the Lex Fridman podcast in particular with an appreciation for his genius. I mean, the guy's obviously, like, a genius, a math prodigy kind of thing, and he gave some of the best explanations of how large language models work. Really smart guy, fascinating to listen to, fascinating to hear his backstory of Perplexity, how he arrived at the idea, all of these things.

[00:49:42] It's also extremely apparent when you listen to him that the Silicon Valley MO we talked about earlier is alive and well within Perplexity. So, how Udio and Suno are gonna keep taking until, you know, they're told they're not allowed to, how OpenAI does the same [00:50:00] thing. Like, they all do this. They take until they're told it's illegal, and then they try and find ways around that.

[00:50:06] Like, that is just what they do. So, Conor Grennan, the Chief AI Architect at NYU Stern School of Business, had shared another Perplexity article on LinkedIn. That one was from The Verge; it was called Perplexity's Grand Theft AI. The one Mike was referring to described Perplexity as a bullshit machine.

[00:50:25] So these aren't great headlines. These aren't exactly the kind of headlines you want about your company. So the comment I put on Conor's LinkedIn post is, if you listen to the recent Lex Fridman interview with Aravind, it's very clear he has a proud history of pushing the limits of what's ethical and legal in his research and business practices, by his own admissions.

[00:50:49] So in the Lex Fridman podcast, he details how they illegally created a bunch of fake academic accounts on Twitter, including publishing [00:51:00] fake GPT-generated research papers to get those accounts, just to scrape Twitter data from the APIs. This is pre-Elon. Then he talked about how they scraped LinkedIn data, against the terms of use, so that they could build demos of tools that would summarize that information.

[00:51:21] And he laughs when he's telling it. Like, this is funny to him, that he blatantly disregards terms of use. So I say that because I think it's safe to say he is happy to share how he has previously chosen to skirt ethical and legal barriers to build things. And so if it comes out that Perplexity is willingly doing these things, including ignoring

[00:51:51] robots.txt files and going behind paywalls, none of that would surprise me after listening to the interview with him, none [00:52:00] of it. So am I saying they're doing it? I have no idea, but the WIRED article was an extremely well done piece of journalism. You want to talk about good journalism, you bring the data to the table.

[00:52:12] And they did it. They went through a rigorous process and they proved exactly what they were claiming Perplexity was doing. So until someone from Perplexity disputes that, they've got nothing to stand on. So this is the challenge. We have these amazing tools. I love Perplexity. You love Perplexity. I still use it every day.

[00:52:33] Is it an ethical company? Is it a legally functioning company? Like, I don't know. It's not looking good. They're taking a lot of PR hits, and they don't seem to have much rebuttal to these accusations.

[00:52:51] Mike Kaput: So, the main lawsuit right now, or rather, the possible legal issue that they're going to be facing, is a [00:53:00] potential legal action from Forbes

[00:53:02] over allegations of copyright infringement. So, in the past couple weeks, Forbes sent a letter to Perplexity's CEO accusing the company of, quote, willful infringement by stealing text and images from Forbes content. We covered this in a past episode. Basically, Forbes' chief content officer accused Perplexity of copying Forbes reporting without proper attribution.

[00:53:25] The company not only used Forbes' original content, but also cited other, quote, sourced reports that were actually basically just aggregations of Forbes' original story. Forbes also says that Perplexity's content, which it remixed and reused, now outranks Forbes' original reporting in search results.

[00:53:45] So they're basically demanding that Perplexity remove infringing content, reimburse Forbes for any advertising revenue earned from it, and provide assurances that they will not use Forbes' intellectual property or infringe on their copyrights in the future. Now, [00:54:00] we talked about this when we covered the initial claims here. Perplexity's CEO did say stuff like, hey, we are smoothing out the, quote, rough edges in the new Pages product, which is where this all stemmed from, and that they're improving the product based on feedback, and that they need to make it easier to find and highlight contributing sources

[00:54:21] more prominently. So Paul, building on what we just talked about, like, do we foresee them getting hit with more lawsuits like this? Is this going to be like a Suno and Udio situation?

[00:54:32] Paul Roetzer: So I'm going to tell you what's going to happen there. I'm just looking right now. Okay, CB Insights says they've raised $171 million.

[00:54:44] I don't remember when the last round was for them. My guess is at some point in the second half of this year, they're going to raise a billion or more. And they're going to set a hundred million aside for lawsuits, maybe a quarter billion. They're going to go do [00:55:00] a bunch of licensing deals.

[00:55:01] They're going to work on ways to compensate these publications. They're going to keep going. They're backed by all of the same VCs that follow the same Silicon Valley MO. Until someone stops this from happening, this is the way the businesses will be built. And so the VCs behind this aren't going to let them go under because of a couple of lawsuits.

[00:55:29] So you just raise enough money to pay the lawyers, and you keep pushing this down the road until three, four years from now they're a $50 billion company, and then you can pay a couple billion in fines. It's just how it works. I don't have to like it. You don't have to like it, but they're going to keep getting sued.

[00:55:48] I don't think it's going to stop the business from continuing to exist. I could be wrong. Maybe there's one massive lawsuit that just takes them out, but I think they're just going to keep raising enough money to pay [00:56:00] the legal bills to make it so it doesn't.

[00:56:04] Mike Kaput: So, in another recent experiment, kind of a related, more murky or darker side of some AI capabilities, a company called 404 Media just did an experiment revealing the ease and low cost of creating AI-powered news websites that plagiarize content from legitimate sources.

[00:56:25] So, the journalist at 404 Media, Emanuel Maiberg, did an experiment where he spent only about $365 to set up a fully automated site that published dozens of AI-generated articles daily, and many of these he intentionally made as rewritten versions of stories from 404 Media, his employer, and other tech news outlets.

[00:56:50] Now, he found that while these AI-generated articles were grammatically correct and factually accurate, they of course lacked depth and were pretty generic [00:57:00] and not really packed with any type of original reporting. On one hand, Maiberg's experiment kind of heartened him. He was like, look, this is low-quality AI spam; it doesn't come close to replacing the real work of actually useful journalism.

[00:57:16] But on the other hand, he was pretty shocked at just how easy and affordable it was to spin up AI-generated content at scale using, without permission, existing content published by legitimate sources online. So, Paul, it wasn't really new to me that this is possible, but it did strike me as, like, pretty depressing.

[00:57:40] It just seems really, really easy to use AI to rip off websites and create junk websites. Clearly, these can outrank legitimate content. I'm sure he's not the first person to discover this is possible. Like, how worried should we be about this trend?

[00:57:57] Paul Roetzer: This is a big problem. We've known it was coming.

[00:57:59] To see [00:58:00] concrete examples like this played out just makes you realize, okay, it's actually really happening. I've talked about it on the show before. Like, go back to 2010, 2011, content marketing's becoming the thing, inbound marketing and HubSpot are pushing this whole premise of, like, create content, create the lead magnets on your sites.

[00:58:17] Publish blog articles, do it three times a day, whatever. You just do all this stuff, create content, people will come. And that was what our agency did. We were a content strategy and production agency, so we created content for people's sites. And at the time, the threat was content farms. You would get these companies that were hiring hundreds of writers, and they would create this crap content, and they would do it for,

[00:58:37] you know, $0.05 a word or $0.10 a word. Or then marketplaces emerged where you could buy crappy, even crappier content for $0.03 a word or a penny a word. And any brand that thought they were getting legitimate, cited, non-plagiarized content for two cents a word was kidding themselves. And we knew that back then, but companies were still [00:59:00] doing it.

[00:59:00] Like, you would have new business meetings with companies and they're like, oh, we can go through this content farm in Boston and they're going to do stuff for two cents a word, and you guys are charging like $500 an article, so we're just going to do that. And it's like, okay. This has now just hyperscaled, like, you know, spun up in a day.

[00:59:21] The problem, I mean, on my end, is that Google rewards it. That, like, there isn't a way yet to filter out content that shouldn't be there. And then the other thing is, this becomes the training data for future models. So I see that as the bigger issue here. Creating cheap content at scale that steals from other people has been going on for decades.

[00:59:47] The fact that this content now becomes training data for future models, that Google is rewarding it by surfacing it in their search results, like, those are the problems. And I don't know of [01:00:00] any solutions to that. I don't know where that's going to come from. Perplexity is probably going to be surfacing this kind of stuff within their results.

[01:00:06] That's the problem: when the majority of the content on the internet is AI-generated, now what happens to our models, to, you know, the originality of what we create? Like, I haven't heard a good answer for that one yet.

[01:00:23] Mike Kaput: Yeah, I've already heard of people starting to specify that their Google searches should surface results pre-2023 to avoid the problem.

[01:00:32] Because that's really when things kicked off and that shows you how acute this has become.
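As a concrete example of that workaround: Google supports a before: search operator that date-restricts results, so a query can be pinned to the pre-ChatGPT web. Here's a quick sketch of building such a search URL in Python (the search terms are just placeholders):

```python
from urllib.parse import urlencode

# "before:" is a Google search operator that restricts results by date.
# Pinning it before 2023 filters out most of the post-ChatGPT content flood.
query = "content marketing tips before:2023-01-01"
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)
# -> https://www.google.com/search?q=content+marketing+tips+before%3A2023-01-01
```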

[01:00:38] Paul Roetzer: Or it's going to be the models. And I think we're actually hearing elements of this, that as you ramp up these models, like GPT-5, GPT-6, it may not be about more data. It's going to be about more finely curated data, like the quality of the data.

[01:00:56] And so you could imagine a world where they just license from [01:01:00] 10, 20, 50 sources and ignore the rest of it, which actually could accelerate the ability to compensate the original creators by licensing content. So rather than just sucking up all of the internet, which is what everybody's doing, you only suck up the portions you have permission and licenses to, but that's the best of the best of human content.

[01:01:20] And in theory, as we train these models, we may need less quantity of content and greater quality of content. And it seems like every research lab is actually moving in that direction. And then you can create more synthetic content based off of the best human content. So, I don't know, there might be a way to fix this, but curating the sources is like the only way I can think of doing it.

[01:01:43] Mike Kaput: So in other news, the Associated Press is launching a non-profit to raise at least a hundred million dollars to expand state and local news. This is called the AP Fund for Journalism, and the money it raises would be used in a number of ways to support local [01:02:00] news organizations, including, according to some speculation from Axios, to help local newsrooms better use AI, especially since AP has signed a two-year licensing deal with OpenAI.

[01:02:13] Now the interesting part of this, Paul, is that you posted about this topic saying that AI might be able to play an even bigger role here. You wrote, quote, the frontier AI model companies, Google, OpenAI, Anthropic, et cetera, should fund this, save local journalism, which is necessary to a functioning democracy, and do licensing deals to infuse real-time news into AI systems.

[01:02:37] Can you walk us through your premise here?

[01:02:40] Paul Roetzer: So I've talked about this a number of times. I don't remember when I first threw this idea out, it was probably 50 episodes ago, but for background, I came out of journalism school. I still spend a lot of time talking with the journalism school at Ohio University, which is where I graduated from, and with other journalism schools.

[01:02:55] So I have a deep passion for journalism. I think [01:03:00] I truly do believe local journalism is critical. You know, we've seen it. My dad worked for the Plain Dealer when I was growing up, the Plain Dealer being the daily newspaper for Cleveland, Greater Cleveland, Northeast Ohio.

[01:03:14] So, like, I was around it as a kid. I remember going down and seeing the newspaper coming off the printing press. And I think we've lost a lot of that. Knowing what's going on, holding local government officials responsible, making sure that policies are being adhered to, like, there's a lot of good that comes from local journalism.

[01:03:37] If we lose that, we lose a very important part of transparency within our communities. But who's going to pay for that anymore? That's why it's going away. There's just no revenue model for this. So my argument for years now has been the AI model companies could save journalism.

[01:03:56] They need each other. So, if we don't rely on an [01:04:00] ad revenue model for journalism anymore, if we just have funding for it, then we can get back to doing great journalism, human journalism that the AI is not going to do. The AI isn't going to go meet with sources and knock on doors and have important conversations and follow from one person to the next of who do I go interview.

[01:04:19] It's not what an AI does. And so there's an element of journalism that is very uniquely human and for the foreseeable future will remain that way. But there's no revenue model to support it. So if these model companies can come in and fund this, a hundred million dollars is nothing to these people. Like, nothing.

[01:04:35] So if they can put a hundred million in and fund this, and then even do it at a broader level, now all of a sudden we can have the local journalism that's critical to democracy, they can do licensing deals to get real-time news, and they can use this writing to train future models. But it's still not going to replace what the journalists do.

[01:04:53] So I just, I don't know. When I step back and look at this from an AI model perspective, from a business perspective, from a societal perspective, from [01:05:00] a journalism perspective, I don't see anyone losing here. It's one of those, like, everyone-can-win situations, but I don't hear any of them talking about this. I don't get it.

[01:05:10] I don't understand why Microsoft or Google or OpenAI or Anthropic, or the Bill Gates Foundation, doesn't just step up and say, we'll fund it. Like, someone just step up and put this money in. This is nothing to these people, and it can have a massive positive impact on society. So yeah, I don't know.

[01:05:26] I don't know if I'm not saying this enough, or if I feel like I have to do something about this. I've talked about it like five times and I haven't heard anything about it. And it just seems like one of the most obvious things that needs to happen right now in AI. So I don't know, maybe I gotta go make a couple phone calls and talk to some people about this.

[01:05:41] Mike Kaput: Well, talking about it on the podcast is one good step, I hope!

[01:05:45] Paul Roetzer: Somebody do something!

[01:05:48] Mike Kaput: Alright, so next up, Pope Francis has made history as the first pontiff to address a G7 summit, and he used at least part of the time to raise [01:06:00] concerns about artificial intelligence. So, when he was speaking recently to world leaders in Italy, the pope emphasized the need for human-centric AI development and regulation.

[01:06:11] He warned that without proper safeguards, AI risks turning human relations into mere algorithms. And he also stressed the importance of maintaining human control over AI-driven decisions, particularly in high-stakes areas like weapons. In response, G7 leaders pledged to better coordinate AI governance and regulatory frameworks, and their final statement acknowledged the potential impacts of AI on labor markets and justice systems.

[01:06:40] So, Paul, this is not, you know, anything really binding. It's a little outside the realm of marketing and business, but it is important in showing just how broad a concern AI has become to society as a whole when you have the head of a major religion deeply thinking about and weighing in on this topic.

[01:06:59] What did [01:07:00] you think of hearing that the Pope's getting involved in AI?

[01:07:04] Paul Roetzer: Yeah, I feel like in a past episode, I don't know, maybe it's like 20, 30 episodes ago, we had some initial conversations around the Pope and, you know, thoughts on AI. So I don't think it's like totally new that he's talking about it, but.

[01:07:17] I mean, my feeling at this point is, any leader of any important organization in society that is raising awareness about the importance of the topic, and the need to not just be talking about it but starting to take action, it's great. I'm all for it. So yeah, I didn't dive deep into what he had to say, or whether or not the G7 leaders are actually going to, you know, do anything based on it.

[01:07:45] But again, I think it's good. Like, I think, I don't care if it's my local councilman, you know, governor of your state, the president of your country, like whoever, as long as they're talking about this and starting to think about it at a more macro level, I'm all for it. [01:08:00]

[01:08:00] Mike Kaput: Also just interesting: I imagine his commentary around AI will reach a fair number of people who may not have been interested in the topic to begin with, kind of outside our typical audience, you know.

[01:08:12] Alright, so next up, Google DeepMind has released its first study on the most common malicious uses of AI. This research found that AI-generated deepfakes that impersonate politicians, celebrities, etc. are actually far more prevalent than, for instance, the use of AI to assist cyberattacks. The researchers analyzed about 200 observed incidents of misuse between January 2023 and March 2024.

[01:08:44] And they sourced these from social media, online blogs, and media reports. And basically, they found some really interesting key findings. First up, creating realistic but fake images, video, and audio of people was almost twice as common as the [01:09:00] next highest misuse of generative AI tools, which was falsifying information using text-based tools.

[01:09:07] The most common goal of actors misusing generative AI was to shape or influence public opinion; that accounted for 27 percent of the uses they studied. And the second most common motivation behind AI misuse was to make money through services like creating deepfakes or generating fake news articles.

[01:09:28] According to the research, most incidents of AI misuse involve easily accessible tools, quote, requiring minimal technical expertise. So, Paul, we talk about this topic all the time. Now we have some research on it. Does anything in here surprise you?

[01:09:46] Paul Roetzer: Nothing surprises me. I didn't go deep on the report, so I don't know if they expand on generative AI being used to shape or influence public opinion.

[01:09:59] That's the one we've got to [01:10:00] worry about right now. I mean, we're, what, five months away from an election in the United States. I know other countries are also going through election cycles, and this is going to be wildly prevalent. It'll be done by, you know, our own organizations within the United States, but absolutely it's going to be done by foreign adversaries who are trying to shape and influence public opinion.

[01:10:22] It's happening. Honestly, the U.S. government probably uses this kind of technology to shape and influence other countries' elections. It's just what people do. And so, I think this is going to be a major problem. And it's going to expand well beyond elections. It'll become a problem for businesses.

[01:10:43] Like, we've talked before about a crisis communications plan. You need to have in your crisis communications plan what happens when your CEO gets deepfaked and says things, or board members get deepfaked and do and say things online that never happened. These are very real things that are going to need to [01:11:00] be a part of enterprise plans.

[01:11:03] Mike Kaput: So next up, NVIDIA has announced a family of open models that are designed to generate synthetic data to train LLMs. This is called Nemotron-4 340B, you know, it really rolls off the tongue, but what it does is it's a model that creates data that mimics real-world data. So this basically is creating synthetic data that can be used to then train large language models.

[01:11:29] So this kind of helps address some of those challenges we just discussed around obtaining high-quality training data, because you need high-quality training data to create custom LLMs that are extremely powerful and accurate, but it's often expensive and difficult to get that data. So, Nemotron-4 340B features a permissive open model license, which allows for free and scalable synthetic data generation, and it is optimized for NVIDIA NeMo, which is an [01:12:00] open source framework for end-to-end model training.

[01:12:03] Paul, that's a bit technical, but it is important. Can you kind of walk us through why synthetic training data matters so much to the future of LLMs?

[01:12:13] Paul Roetzer: Yeah, it's a topic we'll probably be talking a lot more about, but the basic premise is there are a lot of people who think that in the training of these models, you're going to hit a data wall where we just run out of data to train them on.

[01:12:24] So there's continued research into the idea that the AI can create synthetic training data that can then be used to train AI models. And they can actually have AI that assesses the quality of that data. And so it's just this loop of data generation and evaluation that helps advance the models.

[01:12:47] So in the Aidan Gomez interview I just referenced earlier, Aidan talked about the significance of synthetic data. At Anthropic, you often hear Dario talk about synthetic data; you've heard about it from OpenAI. [01:13:00] It seems as though this is a research path that is working. A year and a half, two years ago, it was a theory that you could use synthetic training data to build better models.

[01:13:11] It now seems they've found a lot of methodologies to actually figure out how to do this. And as we move into one, two, three versions of these models down the road, as one, two, three years pass, more and more of the training data within these models is going to be synthetic data.
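To make the generate-and-judge loop Paul describes concrete, here's a minimal sketch in Python. Everything in it is a stand-in: generate_candidates and score_quality are hypothetical placeholders for calls to a generator model and a judge/reward model, not any actual Nemotron or NeMo API:

```python
import random

def generate_candidates(prompt: str, n: int) -> list[str]:
    # Placeholder for a call to a generator LLM.
    return [f"{prompt} -- synthetic sample {i}" for i in range(n)]

def score_quality(sample: str) -> float:
    # Placeholder for a call to a judge model that rates each sample.
    return random.random()

def build_synthetic_dataset(prompts: list[str], per_prompt: int = 8,
                            threshold: float = 0.7) -> list[str]:
    """Generate candidates, keeping only those the judge scores highly."""
    kept = []
    for prompt in prompts:
        for sample in generate_candidates(prompt, per_prompt):
            if score_quality(sample) >= threshold:
                kept.append(sample)
    # The filtered set becomes training data for the next model iteration.
    return kept

if __name__ == "__main__":
    data = build_synthetic_dataset(["Explain robots.txt", "Summarize a 10-K"])
    print(f"Kept {len(data)} high-scoring synthetic samples")
```

Real pipelines are far more elaborate, with instruction taxonomies, deduplication, and safety filters, but the loop structure of generate, score, filter, retrain is the core idea.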

[01:13:31] Mike Kaput: Alright, so next up, Toys R Us has stirred up some controversy with what it claims is the first-ever, quote, brand film

created using OpenAI's text-to-video AI tool, Sora. The company partnered with an ad agency called Native Foreign to produce a short film telling the origin story of Toys R Us founder Charles Lazarus. The film shows a child version of Lazarus falling asleep and dreaming of flying through a toy [01:14:00] land where he encounters the company's mascot, Geoffrey the Giraffe.

[01:14:03] Again, all generated by AI from Sora. Now, while Toys R Us is kind of touting this as a groundbreaking use of AI technology, the commercial has also faced some criticism. Many online have pointed out that there are some obvious tells that this is AI-generated imagery. It has some unnatural movements and visual artifacts.

[01:14:26] And then, of course, some critics argue that using AI for commercial production like this could potentially replace human creative jobs. It sounds like, though, the company did not need actors or cameras to create this film, but they did use human scriptwriters and VFX artists to splice together and finalize all the footage that was produced by Sora.

[01:14:49] Now, one popular comment online came from advertising copywriter Dan Goldgeier, who posted on X the following, quote: Mock that Toys R Us [01:15:00] AI spot all you want, but it's just the beginning. Most consumers won't know the difference or care, and most marketers will be more than happy to make this kind of spot for less money.

[01:15:11] Paul, given that we've come from the agency world, I mean, that last quote kind of hits pretty hard when you think about the implications here. What are your thoughts on this?

[01:15:20] Paul Roetzer: Yeah, I wouldn't have expected Toys R Us to be the brand that did this. My first thought was like, you know, that it was Toys R Us.

[01:15:28] Awesome. I loved Toys R Us as a kid. So I actually found the ad kind of cool, because I loved the original song for Toys R Us. And yeah, I thought it was pretty nostalgic. So I think anybody who gets caught up criticizing these outputs because of flaws, like unnatural movements and visual artifacts, that is not

[01:15:59] the argument you want to [01:16:00] be making. Any flaw you see is going away. It might take 12 months, might take 18 months, but just assume these are going to be as good as human-level production and consistency. That is the future we're heading pretty quickly towards. So I think you've got to throw away these arguments about, yeah, but it can't do this or that. That's going to get fixed.

[01:16:22] So then you've got to deal with the bigger issue here: yeah, you're not going to need as many people. Like, the creatives who can use these tools. They hired an agency to do this, and the agency made money figuring out how to apply this new technology. So someone's still going to do the work. But if you and I, like six months from now, or maybe even when we get Gen 3 access from Runway, can go in and create a 30-second video, we're probably going to do it.

[01:16:51] And the reality for us, and for a lot of brands, is it may just be something you wouldn't have done. Now, in this case, they were creating a demonstration, you know, creating an ad for Toys R Us. But if you're a brand that otherwise just wouldn't create videos like this, and now you can, I don't know how that changes the dynamic of hiring big ad agencies and creatives to do spots that you're going to do anyway.

[01:17:14] What I'm saying is it's going to democratize the ability to create this stuff at scale for anybody. And that's what'll change things: now all of a sudden we all have access to create things, like we can now create images on the fly and text on the fly. We're going to be able to create movies and videos and ads and trailers and all these things on the fly, and that's going to affect the future of the ad industry, marketing industry, creative industry.

[01:17:40] So yeah, I mean, the floodgates are really starting to open. I think throughout the rest of this year, we're going to start seeing more and more of that as Sora becomes available and more people get access to Runway's Gen 3. The video generation stuff's going to become a major part of the creative process.

[01:17:58] Mike Kaput: Well, we're an example of [01:18:00] this. I mean, we would not be hiring a major ad agency to create videos, as far as I'm aware, for Marketing AI Institute or for our events. We create videos, of course, but this could really unlock some incredible capabilities for MAICON, for, who knows, for anything, I mean, webinars.

[01:18:19] Anything like that. So that is an interesting angle of this that maybe isn't being talked about as much.

[01:18:24] Paul Roetzer: Well, I mean, honestly, one parallel to it is with this podcast. We use AI to turn this one hour, or in this case probably an hour and a half or two hours, of video into dozens of video clips that can be shared across Instagram Reels and TikTok and YouTube.

[01:18:43] If we didn't have AI, we would probably just have a one-hour show put on YouTube each week, and then that same audio put onto podcast networks. But because of AI, we have more creative freedom to do all of these other things at a cost that [01:19:00] we can afford as a small business. And I think that's basically what's going to happen here.

[01:19:03] Previously, a big brand could do the same thing we're doing with their podcast, but they were probably paying thousands of dollars a week to some studio to cut the thing up into all these different shorts and stuff. We now do that for basically like $20 a month. I think that's what's going to happen here: right now, big brands can afford top-end creative working with studios.

[01:19:29] We can't. Six months from now, we'll be able to do the same thing those big brands are doing for tens or hundreds of thousands of dollars, for $20 a month. That's basically where this is going.

[01:19:40] Mike Kaput: That's a really nice lead-in to our next topic, because WPP, you know, the leading ad agency, has just released an AI tool they're calling Production Studio.

[01:19:49] And this tool combines AI with some of their human creatives and some proprietary 3D workflows they've developed to help clients generate hyper-realistic and [01:20:00] accurate content at an unprecedented volume. So, Production Studio is going to help clients do things like quickly create 3D product models, centralize content updates, produce high-quality visual content,

generate multilingual copy that maintains brand consistency, and get a number of other benefits for anyone who's creating a high volume of content. WPP has said that they've done some successful pilot tests with companies like Ford and L'Oreal, and they claim that advertisers could potentially create exponentially more content when using Production Studio's AI-enabled content supply chain.

[01:20:39] So they actually developed this in collaboration with NVIDIA and integrated it into what they're calling their Intelligent Marketing Operating System, WPP Open. So Paul, we actually covered on previous episodes that WPP is investing hundreds of millions of dollars into AI, and it now seems like they're making some progress putting that [01:21:00] money to use.

[01:21:01] I mean, based on what we just talked about, what are the implications of someone like WPP bringing Production Studio to their clients?

[01:21:11] Paul Roetzer: It's smart. I mean, they've got to do stuff like this. I think that it's unknown, you know, one, two, three years from now, what the value of this is. Right now it seems like a very logical play, but if Sora and Gen 3, or whatever the future generation from Runway is, and if Adobe builds stuff in, like, I don't know, I think that a lot of this stuff is going to get commoditized.

[01:21:37] The ability to do all of these things is going to be so readily available, but the play here is to build a platform that pulls it all in, and you have your existing distribution. So again, this goes back to the AI-emergent concept, that an existing business with an existing customer base and distribution and data and technology tries to build a smarter business model.

[01:21:58] And I admire the [01:22:00] fact that's what they're doing. This is the play they have to make. I just don't know, two, three years from now, how it all plays out when all of these capabilities are available to everybody, when you don't need a platform to do it. So, I don't know. It's good. It's the right play.

[01:22:16] We'll see how it plays out.

[01:22:18] Mike Kaput: All right, so next up, a lawyer and AI law expert named Barry Scannell at law firm William Fry has posted an interesting observation about the EU's new AI law, which is called the AI Act. In it, Scannell says that the AI Act has created a legal obligation for businesses that develop and deploy AI systems to, quote, this is according to the actual law, quote, ensure to their best extent a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.

[01:22:54] Scannell says that this clause in the AI Act means that businesses may face [01:23:00] legal obligations to ensure that staff using AI are adequately trained in the technology, and that this could apply to non-EU entities that put AI systems and models into the EU market. So he posted, quote, a significant number of organizations remain unaware of this crucial aspect of the AI Act, essentially a legal obligation to provide AI training to staff who use AI.

[01:23:25] Now, the continual caveat: we're not lawyers, and we're certainly not qualified to legally interpret EU law. I did run the AI Act through Google Gemini, and it's not the first time I've done it; it's like 450 pages long. I also manually reviewed the sections that Scannell called out. It does appear that the Act mandates education to foster AI literacy. It kind of confirms that there are clauses in this law saying you must do this.

[01:23:56] I wasn't able to find any mention of, like, specifics. [01:24:00] It doesn't say, hey, you must take X amount of classes, courses, or hours of something. But it does seem like they are leaning into this idea that businesses must have AI-literate employees. So, while we don't have all the details yet, it certainly aligns with what we've consistently advocated for here at Marketing AI Institute around AI literacy. Like, how are you interpreting this, Paul, when you're reading it?

[01:24:25] Paul Roetzer: Yeah, I need to dig into the details as well around the timing of this, like, what exactly the requirements are and when you have to meet them by, how they're going to monitor it, things like that. But at a fundamental level, I think it's great. I think it's something the U.S. should very quickly be looking to as a

model. I'm aware of eight states in the US that have incentives or grants to advance technology education, not necessarily specifically AI. And if I remember correctly, the [01:25:00] Canadian government has some sort of, I believe so, yeah, benefits toward AI education. Because requiring it is one thing; funding it is another.

[01:25:08] And again, this goes back to the journalism argument. If I was a major foundation, or a major AI company, I would consider significant funding for AI literacy, because if you want to accelerate adoption of your technology, adoption of your models in enterprises, the fastest way to do that is to drive AI literacy within,

you know, communities and business leadership and within enterprises and knowledge workers. And that's what we're doing. I mean, that's our whole thing, AI literacy for all. The reason is because the way to do AI responsibly is to teach people about it. The way to drive adoption is to make sure they understand it and they're competent with it.

[01:25:47] And the best way to do that is AI literacy. So, I'm not saying this from a self-serving perspective. I'm saying it because this is fundamental to success in society and in business. We have to drive literacy. So any [01:26:00] efforts made to push that forward, I think, are great. And this is a very forward-thinking stipulation to have within the AI Act, as long as it's actually able to be administered and advanced.

[01:26:15] So yeah, I would again make a plea in the US: if you're a listener and you have some contacts or organizations that you should talk to, or we should talk to, we need to do more in the United States to push for this and to fund literacy. And we can't rely on just universities and high schools to do it; this needs to be a post-grad thing.

[01:26:38] Mike Kaput: Alright, so in other news, McDonald's is actually winding down an AI experiment. They were experimenting with drive-thrus powered by IBM's AI across more than a hundred restaurants, and they now plan to deactivate this. It's called automated order taker technology, and it began in 2021 as an AI partnership between McDonald's and IBM, basically to enable voice-activated [01:27:00] ordering.

[01:27:04] And McDonald's now says that by July 26, they are deactivating the technology because they have faced challenges, particularly in interpreting various accents and dialects, which then impacted order accuracy, reports CNBC. So, this kind of follows a pattern. McDonald's had previously sold its McD Tech Labs to IBM in 2021.

[01:27:28] They also sold Dynamic Yield, an AI technology company they had bought, passing that off to Mastercard. So, it sounds like a couple of, you know, not exactly stellar AI pilots going on here. However, they have since announced a new partnership with Google Cloud, though the specifics remain undisclosed.

[01:27:49] So, Paul, there's a couple things going on here. First, it seems like this technology was a bit of a flop for McDonald's. But second, it sounds like they may just be switching to Google Cloud to enable [01:28:00] AI ordering. What were your thoughts on this?

[01:28:03] Paul Roetzer: I think it's a technology that like three to five years from now will seem so obvious and it'll work really well.

[01:28:10] I think they're just early on the tech. Like, they got in before it was reliable and precise. So I think that'll all be solved, and I don't think we've seen the last of AI taking your order at the drive-thru. It does seem like what's going to happen is these companies that are out on the frontier taking these risks are going to get some stuff wrong.

[01:28:31] And so I don't fault them for trying. I think it's good that they were making those efforts. Again, the AI-emergent idea: you've got to be willing to go out there and test technology, and sometimes it's not going to work. And when it doesn't, you've got to move on and get to the next thing. Maybe the Google Cloud thing, you know, is a bigger play here and they're going to build these capabilities back in.

[01:28:50] Again, I don't think that AI ordering is going to be back this fall at McDonald's. I think it's going to take a little time. So, yeah, I mean, [01:29:00] it's, I think, kind of a lesson for companies that are looking to take these risks: it's not always going to work, and that's okay. Just

[01:29:09] test and move on and keep going, but also don't give up on these things because of current limitations of the technology. It's like the Google Glass thing 10 years ago, way before its time. Now is probably the time for the Google Glass thing, you know, version two, to come back.

[01:29:27] Mike Kaput: All right. So as we wrap up the episode here, we're going to dive into a bunch of funding updates.

[01:29:30] So, we'll take these one at a time and get your thoughts on these, Paul. First up, Mistral has secured a massive Series B funding round of 600 million euros, which is approximately, as of today, 640 million dollars. This values the company at six billion dollars. And as a reminder, Mistral was founded by alumni from Meta and Google DeepMind, and they focus on developing foundational models that rival GPT-4o,

Claude [01:30:00] 3, and Llama 3. They've also gained a lot of popularity and a following for releasing several major open source models. However, their most advanced models, like Mistral Large, are proprietary and designed for API-first products. So, with that being said, Paul, with Mistral being so involved in open source, this certainly seems like a vote of confidence from some investors, at least in that direction.

[01:30:28] Is that how you look at a funding announcement like this?

[01:30:31] Paul Roetzer: Yeah, but I also look at it as: $640 million sounds like a lot of money. It'll buy you 21,000 NVIDIA GPUs. 21,000 NVIDIA GPUs isn't going to compete. So, yeah, these are the table scraps for what it takes to try and compete right now on frontier models.

[01:30:48] But, you know, there's a reason that xAI raised six billion: because they're planning on hundreds of thousands of NVIDIA GPUs, not 21,000. So, I think it is good, like, Mistral is a player for now. I think it's going to be really hard for companies like this to remain competitive on the frontier models without a pivot in their strategy.

[01:31:12] So we'll see where it goes. But relative to what else is gonna be raised, and I would imagine we'll hear announcements on some other funding rounds in the billions later this year, $640 million is, I would imagine, a bit of a bridge to a much larger round, or to being rolled up into someone else, or an acqui-hire like we're seeing with some of these other companies.

[01:31:37] So we'll see what happens.

[01:31:40] Mike Kaput: Next up, HeyGen, a startup we've talked about quite often that creates realistic AI avatars, has raised some money. HeyGen, a global company, has raised a $60 million Series A that values it at $500 million. This brings HeyGen's total funding to $74 million since it was founded in 2020.

[01:31:58] HeyGen says it has [01:32:00] attracted over 40,000 paying customers and grown its annualized recurring revenue to more than $35 million in just a year. Not to mention, they say they have been profitable since Q2 of 2023. The company says it's using its new funding to accelerate the product roadmap and double down on investments in enterprise security, AI ethics, and trust and safety.

[01:32:23] So Paul, maybe talk to me here about the opportunity for AI video and avatar generation. I mean, based on these numbers, it seems pretty significant.

[01:32:34] Paul Roetzer: I think it's just the recurring theme here of enterprise. Like, every one of these startups, I'm sure they knew it when they were starting, but maybe they just realized it after the fact:

[01:32:45] you've got to figure out the enterprise play to make a sustainable business. And there were a lot of enterprise keywords lumped into what you just said: security, AI ethics, trust and safety. I'm guessing those are the objections they're hearing on every sales call. [01:33:00] And so they have to say that's why they're raising the funding, and maybe it is why they're raising the funding. But to build the market they need to build, to justify their funding and get the next round, you've got to have those elements to get the enterprise sales.

[01:33:16] So it'll be fascinating. All these AI startups that start as this fun little toy where you can deepfake yourself, and that's cute, and then, oh wait, we've got to make a real business model here, we've got to go upstream. And these are the kinds of things that come with it. Again, you know, it's so funny, like, you see

these $60 million rounds and it's like, oh, okay. That used to be a massive amount of money. Here, $60 million for an AI startup, and I think, oh, they don't have product-market fit yet. Like, my immediate response if I don't hear hundreds of millions is: they haven't found their product-market fit yet, and they're still trying to figure it out.

[01:33:51] So here's $60 million to, like, get you through. Now, over the next eight to 12 months, prove product-market fit at the enterprise, get a few big enterprise clients, [01:34:00] and then we'll do $250 million or whatever.

[01:34:04] Mike Kaput: It's like how they say, everything's bigger in Texas. Same goes for AI, you know, everything's just an order of magnitude bigger than you'd expect.

[01:34:13] Paul Roetzer: It's just funny. These VCs are just throwing money at them because they've got a user base and they've got, you know, maybe a good product team. There's traction there, but nobody really knows exactly what the play is here.

[01:34:28] Mike Kaput: All right. So Stability AI has reached a deal with its cloud providers and other suppliers as part of a bailout effort.

[01:34:37] So the suppliers have agreed to forgive approximately $400 million in debt and future obligations owed by Stability AI. That includes about a hundred million dollars in current debt and $300 million in future obligations. Now, this deal is part of a larger cash lifeline being extended to Stability AI by a group of investors, and that group includes [01:35:00] former Facebook president Sean Parker and former Google CEO Eric Schmidt, and they're injecting 80 million bucks

[01:35:06] into the company. So Paul, we've talked a bit about Stability's implosion a couple of times. Like, is this just like the final death knell here? Or is this actually going to change their trajectory?

[01:35:22] Paul Roetzer: I don't know. Like, I have to like think about this one a little bit more and spend more time on whatever messaging we're hearing from them.

[01:35:31] Eric Schmidt, of all people, is interesting to me. It actually makes me wonder about a change in direction, because Eric Schmidt is pretty heavily involved on the government side and with the Defense Department and stuff like that. So that kind of hints to me that there might be a repurposing of Stability's technology for other uses down the road, maybe.

[01:35:55] I don't know. I've got to think about this one a little more. I haven't followed Sean [01:36:00] Parker closely, so I'm not sure what his other investments, what his portfolio, look like, and why he would get involved in something like this. Yeah, this one's a head-scratcher for me at the moment as to what's going on. But I think there's probably some technology worth saving, and maybe there's a pivot in terms of the direction of where the technology is going to be developed, the uses it'll be for, things like that.

[01:36:26] I gotta, I gotta think about this one.

[01:36:30] Mike Kaput: All right. So for our final funding update, a company called EvolutionaryScale, which is focused on developing AI models for protein generation, has secured $142 million in seed funding. This round was led by ex-GitHub CEO Nat Friedman, Daniel Gross, and Lux Capital, and Amazon and NVIDIA also participated.

[01:36:51] So this company has released something called ESM3, which they describe as a frontier model for biology. It can actually generate novel [01:37:00] proteins for use in drug discovery and materials science. It is trained on a data set of 2.78 billion proteins, and it can reason over the sequence, structure, and function of proteins, enabling it to create new proteins, similar to what we've talked about Google DeepMind's AlphaFold being able to do.

[01:37:18] The company actually aims to make the process of designing proteins more efficient and less costly compared to traditional lab methods. So, Paul, again, this is not directly related, say, to AI for marketing or to your creative and strategic business workflows, but it is actually some really good AI news, and it hints at the potential of AI to accelerate fundamental scientific progress.

[01:37:45] Can you maybe walk us through why this investment and this company are so important?

[01:37:52] Paul Roetzer: We're going to hear a lot about these kinds of things. I think this is fantastic. In the AI Timeline, episode 87, we talked about, [01:38:00] you know, this idea of scientific breakthroughs being enabled and accelerated. We've talked about DeepMind and their partnership with biology institutes to, you know, advance biology.

[01:38:10] For me, this is probably what I'm most excited about AI for in the near term: scientific advancement. And so to see this kind of funding going into it, and to see the promise of what may be near-term benefits to society, it's great. I love this kind of stuff, and I hope we hear a lot more about it.

[01:38:33] Mike Kaput: All right, so we are getting to the end of this massive amount of updates. We've got several product updates as we wrap up the episode here. The first one is about Adept, which is an AI startup focused on building enterprise AI agents, and they kind of quietly announced significant changes to their strategy and structure.

[01:38:55] They say they're shifting their focus entirely to solutions that enable [01:39:00] agentic AI, powered by their existing models, data, web interaction software, and custom infrastructure. Now, as part of this kind of restructuring, Adept's co-founders and some team members are joining Amazon's AGI organization to continue their mission of building useful general intelligence.

[01:39:19] Amazon is also licensing Adept's agent technology, its family of multimodal models, and certain datasets. And there are some leadership changes. Zach Brock, previously head of engineering, is taking over as CEO, and their head of product is staying in place. The company said the decision was made to avoid spending significant time on fundraising for foundation models and instead focus on bringing their agent vision to life.

[01:39:45] So, Paul, this seemed like kind of a brief announcement, but a pretty big one. I mean, on one hand, it's what you mentioned with companies pivoting away from getting into the arms race over foundation models, but they also kind of announced an acqui-hire [01:40:00] by Amazon, in some way, or at least of some of the team. Like, what's going on here?

[01:40:05] Paul Roetzer: So my first reaction: this was a Friday news dump thing. I think they published this blog post at like 4 p.m. on Friday, I think is when I saw it, something like that. Never good. And ironically, I had just listened, like this Thursday, I think, to a podcast. It was Product-Led AI again, the Seth Rosenberg one.

[01:40:23] I actually just found this podcast last week, from Seth. So I listened to a few of the episodes, and it was Adept's CEO, David Luan, on up-leveling human work. This is from May 14th, so a little over a month and a half ago. So I had, like, fresh in my mind his vision for the future of work and agents and all these things.

[01:40:41] And then, like, oh, they got the Inflection deal. My first reaction was: this sounds exactly like Inflection getting bailed out by Microsoft. And I think that's exactly what it is. It's just, like, Amazon's not going to acquire them, but we'll just take all your top talent. So, [01:41:00] we won't spend a lot of time on this, but I think it's important for people to understand how this technology is intended to work.

[01:41:05] So imagine you go through a sales process, and you're going to go into your pipeline, and you're going to click on a deal, and then you're going to look at that deal, and then you're going to summarize notes, and you're going to go through these steps to prep for, like, a sales call.

[01:41:21] The idea behind Adept is that, let's say there's a module on the side, in your browser, and there's a learn button. You would hit learn, and what would happen is Adept would now watch your screen. It would learn all the clicks you're taking, the process you're going through. It would build a list of actions as you're doing them, and then you would hit stop at some point when you're done with the activity.

[01:41:45] And then you would review the list of tasks that Adept has now identified as part of this workflow. And if everything looks good, you have now trained an agent to do that thing. And then the next time you need to [01:42:00] do it, you just run that agent, my call prep agent or whatever it is. That's the vision behind how these agents are going to work in the near term: before they just start recording everything and learning everything, you will train them to do specific tasks by showing them how to do it.

[01:42:15] So in essence, that means Amazon greatly values the ability for you to train things on your data. And so what's more interesting to me is what Amazon's play with this technology is, rather than another AI company taking a lifeline from a bigger hyperscaler. So, something to watch for sure.
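To make that learn-button workflow concrete, here's a minimal sketch of the record-then-replay pattern Paul describes. To be clear, this is not Adept's actual implementation or API; every class and action name here is a hypothetical stand-in:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str        # e.g., "click", "read", "type"
    target: str      # the UI element or field the action applies to
    value: str = ""  # text typed or data extracted, if any

@dataclass
class TrainedAgent:
    name: str
    actions: list[Action] = field(default_factory=list)

    def record(self, action: Action) -> None:
        # Called while "learn" mode watches the user's clicks.
        self.actions.append(action)

    def review(self) -> None:
        # The user confirms the captured workflow before saving it.
        for i, a in enumerate(self.actions, 1):
            print(f"{i}. {a.kind} -> {a.target} {a.value}".strip())

    def run(self) -> None:
        # Replay the stored actions; a real agent would drive the UI here.
        for a in self.actions:
            print(f"Executing: {a.kind} on {a.target}")

# Hypothetical "call prep" workflow captured during a learn session.
agent = TrainedAgent("call_prep_agent")
agent.record(Action("click", "pipeline > Acme Corp deal"))
agent.record(Action("read", "deal notes"))
agent.record(Action("type", "prep summary", "Key points for the call..."))
agent.review()
agent.run()
```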

[01:42:38] Mike Kaput: Alright, next up.

[01:42:39] ElevenLabs, an AI voice generation company, has introduced the ElevenLabs Reader app, which lets you listen to any text using one of the company's AI-generated voices. Give it a PDF, an EPUB, an article, a newsletter, or any other text, and it will read that content using one of ElevenLabs' hyper-realistic [01:43:00] voices.

[01:43:01] Initially, the app supports only English, but they suggest support for 29-plus languages is coming soon, and it's free to download from the App Store. Now, Paul, you posted about this, and you're a big fan of this app. Do you want to walk us through your initial impressions of it?

[01:43:18] Paul Roetzer: This came out around the middle of last week, I think, and I went and got it right away. It's fantastic. I used it like half a dozen times the first day, and it's super simple. You just pick a PDF and upload it. I think I used the "GPTs are GPTs" paper, the research report I happened to be looking at at that moment. Upload the PDF and you're good to go.

[01:43:40] It starts reading it to you. Give it a link, and it starts reading that to you. They're also going to embed this technology into web pages; I think they already did a deal with Time Magazine. So rather than having to grab the link, you'll be able to just click and listen to any page at whatever speed, and you can pick from any of a dozen voices or so.

[01:43:58] So when I first tried it, I was like, [01:44:00] how had nobody done this yet? I actually got a couple of comments on LinkedIn from people with similar questions; there have been text-to-speech apps, but nobody had done anything like what they've been able to do here. To me, this is just one of those things that's so simple and streamlined and obvious.

[01:44:15] I could definitely see using this all the time. Since I first tested it last week, I've continued to use it daily to read articles to me, because now I can consume this stuff in the passive times, on my rides to work, at the gym, just like you do with podcasts. It takes anything and turns it into something you can listen to while multitasking.

[01:44:35] So now I can listen to the articles we're going to talk about on the podcast while I'm doing other things. I don't have to actively be reading if I don't want to be. It changes my consumption patterns. It's definitely worth checking out.
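The Reader app itself is a consumer product, but the same voices are exposed through ElevenLabs' public text-to-speech REST API. Here is a minimal sketch of a direct call; the API key and voice ID are placeholders, and the model name is an assumption, so verify the details against the current ElevenLabs documentation before relying on any of this.

```python
import requests

API_KEY = "your-elevenlabs-api-key"   # placeholder: set your real key
VOICE_ID = "your-voice-id"            # placeholder: pick one from the voice library

def read_aloud(text: str, out_path: str = "speech.mp3") -> None:
    """Send text to ElevenLabs' text-to-speech endpoint and save
    the returned MP3 audio to disk."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={
            "text": text,
            "model_id": "eleven_multilingual_v2",  # assumed model name
        },
        timeout=60,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)          # response body is raw audio bytes

read_aloud("Anthropic released Claude 3.5 Sonnet this week.")
```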

[01:44:50] Mike Kaput: AI video generation company Runway has just introduced Gen-3 Alpha, a new AI model for high-fidelity, controllable video [01:45:00] generation.

[01:45:00] This represents a significant improvement over its predecessor, Gen-2. Gen-3 Alpha is trained on both videos and images, which helps it power different Runway tools like text-to-video, image-to-video, and text-to-image. Updates include an improved ability to generate expressive human characters with a wide range of actions, gestures, and emotions.

[01:45:24] Runway is also now providing companies a way to work with them to customize the model for specific industry needs. Think of an industry-specific video model that's more stylistically controlled and has more consistent characters, tailored to the artistic and narrative requirements of your industry.

[01:45:44] Interestingly, Runway pretty prominently says Gen-3 is a step toward its larger goal of building general world models, its long-term research effort to essentially give AI the ability to simulate the [01:46:00] environment around it. So, Paul, you're a long-time Runway user. What's the significance of Gen-3? And maybe also talk a little bit about this overall pursuit of general world models.

[01:46:11] Paul Roetzer: The people who have access to Gen-3 are over the moon about it. My Twitter feed has been full of people testing it, sharing examples, and loving it. So I'm anxious to get access to it. We've been Runway users for probably five years now.

[01:46:29] The world model stuff, we talked about this with the introduction of Sora. These AIs don't understand physics. They don't understand how the world works or really comprehend gravity. They just don't understand the world like we do.

[01:46:46] So they have to be given these physics engines, these models of the world. The same thing occurs in video game development: the characters don't understand the physical world, so they have to be programmed [01:47:00] to. You use something like, NVIDIA has one, I forget what it's called, Omniverse?

[01:47:05] Yeah, I think that's it. Basically, you build a physics engine that sets the rules of physics, so that whatever's created follows those rules. I think what they're implying here is that as they develop these models, they can continually build more and more training data, so that eventually the models may not have to be programmed to understand the world; they'll actually understand it.

[01:47:31] Efforts are being made by Meta, and by OpenAI with Sora, everyone's trying to do this, where the AI just understands the physical world out of the box. And we're not there.
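To make the contrast concrete, here is a toy example of the kind of hand-coded physics rule a game engine applies. Nothing below comes from Runway or NVIDIA; it simply illustrates that today the rules of the world are written in by a programmer, whereas a general world model would have to learn those dynamics from data.

```python
# Hand-coded physics: simple vertical motion under gravity.
# The constant below is the programmer *telling* the system how the
# world works; a learned world model would have to infer it from video.
GRAVITY = -9.81  # m/s^2

def step(pos_y: float, vel_y: float, dt: float = 1 / 60) -> tuple[float, float]:
    """Advance one frame (1/60 s) of a falling object."""
    vel_y += GRAVITY * dt
    pos_y += vel_y * dt
    if pos_y <= 0.0:              # another explicit rule: a solid floor
        pos_y, vel_y = 0.0, 0.0
    return pos_y, vel_y

# Drop a ball from 10 meters and simulate 5 seconds of frames.
y, v = 10.0, 0.0
for _ in range(300):
    y, v = step(y, v)
print(f"height after 5 seconds: {y:.2f} m")  # it has hit the floor
```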

[01:47:45] Mike Kaput: So, Microsoft has announced another delay for its controversial Recall feature, which was intended to debut with the new Copilot+ PCs we talked about in a previous episode.

[01:47:58] Recall is this [01:48:00] AI-powered feature that essentially captures periodic screenshots on your machine and creates a searchable database of everything you've viewed on your PC. Instead of launching as planned, Recall is now going to go through the Windows Insider program, which Microsoft uses for testing and validation of its Windows features.

[01:48:19] This delay comes after the security concerns about Recall that we talked about in a previous episode. Initial versions were saving data to disk without additional encryption or protection, which made it easily accessible to anyone with access to the PC. Moreover, Recall was set to be enabled by default on Copilot+ PCs.

[01:48:43] So, Paul, I don't know if this is necessarily a surprise to us. We talked about how Recall had issues right from the moment it was announced. Is this the right move from Microsoft?

[01:48:53] Paul Roetzer: I think it's the right move and hopefully a lesson learned. I think I mentioned previously, [01:49:00] I was just kind of shocked that a technology with so many blatant concerns was going to be rushed to market.

[01:49:08] I didn't understand that. So yeah, this is the right decision, and hopefully they can apply these lessons to future releases of things that cause a lot of concern and anxiety around privacy and risk.
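To see why the security criticism landed so hard, here is a rough sketch of the pattern Recall implements: periodic screenshots, OCR'd and indexed into a searchable local database. The library choices (mss, pytesseract) and the plain SQLite file are our illustration, not Microsoft's implementation; the point is that an unencrypted index like this is readable by anyone, or any process, with access to the machine.

```python
import sqlite3
import time

import mss                # cross-platform screen capture
import pytesseract        # OCR (requires the Tesseract binary installed)
from PIL import Image

# A plain, unencrypted SQLite file: exactly the exposure critics flagged,
# since anyone with file access can query the full screen history.
db = sqlite3.connect("recall_demo.db")
db.execute("CREATE TABLE IF NOT EXISTS snapshots (ts REAL, text TEXT)")

def capture_and_index() -> None:
    """Grab the primary display, OCR it, and store the text."""
    with mss.mss() as screen:
        shot = screen.grab(screen.monitors[1])           # primary monitor
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    text = pytesseract.image_to_string(img)
    db.execute("INSERT INTO snapshots VALUES (?, ?)", (time.time(), text))
    db.commit()

def search(term: str) -> list[tuple[float, str]]:
    """Search everything that has ever been shown on screen."""
    return db.execute(
        "SELECT ts, text FROM snapshots WHERE text LIKE ?", (f"%{term}%",)
    ).fetchall()

capture_and_index()
print(search("password"))   # any local process could run this same query
```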

[01:49:23] Mike Kaput: All right, our last item today. Microsoft has also quietly announced that it's retiring its GPT Builder functionality within Microsoft Copilot.

[01:49:36] At least, it seems it's doing that for consumers. GPT Builder lets you create your own customized version of Microsoft Copilot for specific purposes. Microsoft says it's removing the ability to do that starting July 10th. After that, it's going to remove all the GPTs created by Microsoft and customers, [01:50:00] along with all the associated data, between July 10th and 14th, according to documentation on their website.

[01:50:07] There's not a ton of documentation, but in a brief FAQ on the site, Microsoft gave the following reason: quote, "We are continuing to evaluate our strategy for consumer Copilot extensibility and are prioritizing core product experiences, while remaining committed to developer opportunities.

[01:50:25] "To this end, we are shifting our focus on GPTs to commercial and enterprise scenarios and are stopping GPT efforts in consumer Copilot." So Paul, from what I can gather, again, it's not super well explained on the website, this looks to affect just the consumer-facing version of Copilot. There's not a ton to go on, but what should a Copilot user be paying attention to here?

[01:50:48] Paul Roetzer: I have no idea what that statement means. Mustafa Suleyman, the former Inflection CEO who got the lifeline to go to Microsoft, now leads its consumer AI. [01:51:00] This is his call, I would assume; he's in charge of this. I don't have any insights here beyond what we can read.

[01:51:11] So, yeah, if anything, at a high level, it just shows how dynamic this space is going to be: don't fall in love with any one tool or feature, because it might just go away next month. Which is one of the challenges. I actually saw, I forget who tweeted it, but somebody tweeted about how Claude 3.5

[01:51:33] was better and how they were going to move their GPTs from ChatGPT over to Claude Projects and start building everything over there. And I replied: are you going to move everything back to ChatGPT when GPT-5 comes out and it's better than Claude 3.5? I think the lesson here is that it's a very dynamic space; constantly be testing different tools.

[01:51:55] And don't bet your business on someone's feature that they could turn off tomorrow. [01:52:00] When I used to own an agency, if you were going to bet your service model on someone else's technology, you had to be really, really confident. That's why we bet on HubSpot all those years ago.

[01:52:12] That's the challenge now: if you build your business around these kinds of things and all of a sudden they get yanked, you're stuck. So always try to think ahead and future-proof your business. Understand there are risks involved with building around features these companies can turn off whenever they want.

[01:52:30] Mike Kaput: And document all this stuff in a prompt library somewhere outside these tools, because you can transport it pretty easily with you. We ran into this just the other day with GPTs we created. I think we were demoing them and they hiccuped, they just weren't working for a second, but thankfully we had all those instructions saved.

[01:52:48] You can copy and paste them into a different tool, a different prompt, whatever.

[01:52:51] Paul Roetzer: Yep, that's a good point.
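One lightweight way to act on Mike's advice is to keep prompts in a plain, tool-agnostic file rather than inside any single vendor's builder. A minimal sketch follows; the format and field names are just our suggestion, not anything the hosts prescribe.

```python
import json

# A portable prompt library: plain JSON you can paste into ChatGPT,
# Claude, or whatever tool replaces them next month.
PROMPTS = {
    "call-prep": {
        "description": "Summarize a deal before a sales call",
        "prompt": (
            "You are a sales assistant. Given the deal notes below, "
            "produce a one-page call-prep brief: key contacts, open "
            "questions, and suggested next steps.\n\nNotes:\n{notes}"
        ),
    },
}

def render(name: str, **kwargs: str) -> str:
    """Fill a saved prompt's placeholders and return paste-ready text."""
    return PROMPTS[name]["prompt"].format(**kwargs)

# Store the library next to your docs so it outlives any one tool.
with open("prompt_library.json", "w") as f:
    json.dump(PROMPTS, f, indent=2)

print(render("call-prep", notes="Acme Corp renewal, Q3 expansion..."))
```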

[01:52:54] Mike Kaput: All right, that's all we've got today. We're back to our regularly scheduled programming. As a final reminder to [01:53:00] everyone, you can find everything we talked about today, and yes, there was more we didn't get to, in our weekly newsletter.

[01:53:10] Go to marketingaiinstitute.com/newsletter. It's called This Week in AI, and it summarizes everything you need to know each week in this very fast-moving industry. So go check that out if you haven't already. And if you haven't yet, please, please, please leave us a review. We really appreciate any and all feedback, and it helps us get the podcast in front of far more people.

[01:53:35] So we'd love it if you took a minute to do that for us, if you haven't already.

[01:53:39] Paul Roetzer: Yeah, it was great to be back with everyone. For me personally, I was going into withdrawal not doing this, as nice as it was to have the mental break, with all of this stacked up and no outlet to talk about it. Like I said before, this podcast is a forcing function for me to think about this stuff.

[01:53:59] When you're just [01:54:00] throwing things into the thread to talk about, you don't take the time to stop and think about it all. So personally, for my own mental well-being, I need to do this podcast once a week, because it helps me keep up, and then we don't have 50 or 60 items in the sandbox to deal with.

[01:54:16] All right, well, it was great to be back together. Congrats again to you and your family on the addition, and thanks to all of you for listening. We'll be back together July 9th for the next episode, that'll be episode 104. So again, it's 4th of July week in the United States; have a safe, happy time with your family and friends.

[01:54:41] Maybe take an extra day off and enjoy some time. We're going to shut our office down for an extra day and just try to enjoy that extended weekend. Be safe. Thanks, everyone. We'll talk to you again next week.

[01:54:51] Mike Kaput: Thanks, Paul.

[01:54:51] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey and [01:55:00] join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:55:14] Until next time, stay curious and explore AI.
