52 Min Read

[The Marketing AI Show Episode 54]: ChatGPT Code Interpreter, the Misuse of AI in Content and Media, and Why Investors Are Betting on Generative AI


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


As generative AI continues to improve, iterate, and integrate, there are news stories to discuss and advancements to break down. That’s why we’re happy Paul Roetzer and Mike Kaput are back for episode 54 of The Marketing AI Show.

Listen or watch below, and scroll down for show notes and the transcript.

This episode is brought to you by Jasper, on-brand AI content wherever you create.

Listen Now

Watch the Video

Timestamps

00:02:56 — ChatGPT Code Interpreter

00:16:22 — The misuse of AI in content and media

00:24:49 — Why and how investors are betting on generative AI

00:34:27 — Synthetic Data and the Future of Large Language Models

00:39:50 — Revisiting the World of Bits

00:41:24 — OpenAI Introduces Superalignment

00:45:53 — Google DeepMind’s CEO Says Its Next Algorithm Will Eclipse ChatGPT

00:47:35 — Google announces the first Machine Unlearning Challenge

00:51:22 — Google could use public data for AI training, according to new policy

00:53:04 — Congress sets limits on staff ChatGPT use

00:53:50 — OpenAI Legal Troubles Mount With Suit Over AI Training on Novels

00:55:58 — LinkedIn Changed Its Algorithms — Here's How Your Posts Will Get More Attention Now

00:58:17 — The Vatican Releases Its Own AI Ethics Handbook

01:00:58 — Meta Releases Threads

01:03:41 — MAICON final agenda with keynote highlights

Summary

ChatGPT Code Interpreter available for all

OpenAI announced on July 6 that ChatGPT’s Code Interpreter feature will be made available to all ChatGPT Plus users. Previously, only select users received access after signing up for a waitlist.

Code Interpreter gives ChatGPT the ability to run code, use files you upload to produce outputs, analyze data, create charts, and perform sophisticated math. That unlocks all sorts of data analysis and code-dependent tasks ChatGPT couldn’t do well before.

People are already using Code Interpreter in interesting ways including customer segmentation, data visualization, and data analysis.
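Under the hood, Code Interpreter writes and runs Python (typically pandas) against your uploaded file. Here is a minimal sketch of the kind of code a customer-segmentation request might produce; the column names, thresholds, and data are invented for illustration:

```python
import pandas as pd

# Hypothetical customer list, standing in for an uploaded spreadsheet
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "annual_spend": [120, 4500, 80, 2300, 9800, 150],
    "orders_per_year": [2, 14, 1, 9, 30, 3],
})

def segment(spend):
    # Simple rule-based tiers (thresholds are arbitrary examples)
    if spend >= 5000:
        return "high-value"
    if spend >= 1000:
        return "mid-value"
    return "low-value"

customers["segment"] = customers["annual_spend"].apply(segment)

# Average order frequency per segment
summary = customers.groupby("segment")["orders_per_year"].mean()
print(summary)
```

In practice you would just upload the spreadsheet and describe the segments you want; Code Interpreter generates and executes code along these lines for you.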

The misuse of AI in content and media

A handful of stories in the past several weeks are shedding light on the dangers and misuse of AI in content and media. A report from misinformation tracking site NewsGuard shows that content farms using AI to generate hundreds of low-quality articles a day are raking in programmatic ad dollars—and hundreds of brands are unwittingly supporting them.

And otherwise legitimate media sites are following their lead. Tech site Gizmodo recently started publishing AI-generated content and the results were problematic. One article on Star Wars movies was riddled with inaccuracies and prompted an outcry from Gizmodo staff, who said these types of stories were “actively hurting our reputations and credibility” and showed “zero respect” for journalists.

Last, but certainly not least, news came out of a leaked email from German tabloid Bild detailing how the publication plans to replace over a hundred jobs with AI.

Investors are betting on generative AI. Why and how?

Research recently published by McKinsey estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion of annual value to the global economy.

The firm estimates that about 75% of this value will accrue through four use cases: customer operations, marketing and sales, software engineering, and R&D. The impact will be felt across all industries and sectors, but McKinsey specifically points out that banking, high-tech, and life sciences could see the largest impact.

The full research report is well worth a read. But the larger point here is that the possible market impact of generative AI is massive. And investors are clearly responding to that, having just written some huge checks to leading generative AI companies.

One big example: Inflection AI announced it raised $1.3 billion in a fresh fundraising round led by Microsoft, LinkedIn founder Reid Hoffman, Bill Gates, and NVIDIA.

Inflection AI has been around just over a year and, in that time, the company has built one of the world’s most sophisticated large language models, which powers Pi, its personal AI assistant product. The company is also building what it calls the “largest AI cluster in the world, comprising 22,000 NVIDIA H100 Tensor Core GPUs.” It’s also important to note that Inflection AI’s CEO and co-founder Mustafa Suleyman also co-founded DeepMind, which was acquired by Google and forms the backbone of its AI work.

Another example: At the same time, Runway, which builds generative AI tools for creators, announced a $141 million extension to its Series C funding round from companies like Google, NVIDIA, and Salesforce Ventures.

This is our longest episode in some time, so be sure to tune in! The Marketing AI Show can be found on your favorite podcast player, and you can explore the links below.

Links referenced in the show


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: Produce good content. Use AI to assist you in the creation of it, the curation of the content, the enhancement of the content. The outlining, like use it, but don't think you can take humans out of the loop. It's just not a viable thing right now.

[00:00:15] Paul Roetzer: And nor do I think it should be the goal. Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:36] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:44] Paul Roetzer: Welcome to episode 54 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. Hey, Mike. Hey, Paul. How's it going? Good, man. I feel like we haven't seen each other. We've been traveling, we've been, you know, holidays. It's, God, I feel like we've been outta rhythm here, so it's good to be back with the weekly.

[00:01:04] Paul Roetzer: I think I'm on vacation next week. We may have to record on Friday. I didn't even think about this. Heads up. We might have to record Friday. I'm, I'm out of the office next week. Hopefully. We are now back in our weekly schedule, back in our regular flow. So for our regular listeners, welcome back. Hopefully we are smooth sailing from here on out.

[00:01:25] Paul Roetzer: Do you have any vacation coming up I need to know about? I don't know what you have, maybe at the end of the month. Yeah,

[00:01:29] Mike Kaput: a little bit in August, but nothing that interrupts the podcast.

[00:01:33] Paul Roetzer: All right. All right, so hopefully we are back. Alright, so this episode, episode 54, again, is brought to us by Jasper.

[00:01:40] Paul Roetzer: We're big fans of Jasper. We've been using Jasper for a while. So, Jasper's a generative AI platform, transforming marketing content creation for teams and businesses. Unlike other AI solutions, Jasper leverages the best cross section of models and can be trained on your brand voice for greater reliability and brand control.

[00:02:01] Paul Roetzer: With features like brand voice and campaigns, it offers efficiency with consistency that's critical to maintaining a cohesive brand. Jasper has won the trust of more than 100,000 customers, including Canva, Intel, DocuSign, CB Insights, Sports Illustrated, and Marketing AI Institute. As a customer, I can say that Jasper works anywhere, with extensions, integrations, and APIs that enable on-brand content acceleration on the go.

[00:02:30] Paul Roetzer: Sign up free or book a custom demo with an AI expert at jasper.ai. That is J-A-S-P-E-R dot A-I.

[00:02:41] Paul Roetzer: All right, Mike, we are back. Let's do it. If you're new to the podcast, three main topics and rapid fire. We have lots of rapid fire today, so we're going to try and move quick and get this thing done as efficiently as possible.

[00:02:53] Paul Roetzer: Let's go. All

[00:02:54] Mike Kaput: right. First up, we have ChatGPT Code Interpreter, which is now available, excuse me, for everyone. So OpenAI announced on July 6th that, over the next week following that date, ChatGPT's Code Interpreter feature is going to be made available to all ChatGPT Plus users. I actually just got access this morning, so if you are a Plus user, you're probably going to see it rolling out very, very soon by the time you're listening to this.

[00:03:23] Mike Kaput: So previously, only select users had received access after signing up for a waitlist. So now everybody who pays 20 bucks a month for ChatGPT Plus has Code Interpreter. And what this does is it gives you the ability to do a lot of interesting things using code and data. So it gives you the ability to run code, to use files that you upload to actually produce output.

[00:03:48] Mike Kaput: So you can ask questions of, or create visualizations from, data in certain files you upload. You can analyze data, you can create charts, and you can perform sophisticated math. So you're essentially using an add-on feature within ChatGPT to do these types of data analysis and code-dependent tasks.

[00:04:08] Mike Kaput: Now, here's a few examples of ways we've seen people already using Code Interpreter. So first, you could have ChatGPT segment your customer list. If you upload a spreadsheet with customer information, you can instruct ChatGPT to segment it effectively. I saw several examples of this online. You can have it gather and visualize data.

[00:04:30] Mike Kaput: One user I saw used ChatGPT with Code Interpreter to actually pull public data from the International Monetary Fund and automatically turn that into a visualized chart. You can also analyze data really effectively. So Wharton Professor Ethan Mollick, who we follow very closely, has actually done things like upload unformatted data from a PDF into Code Interpreter and ask it to perform full data analysis.

[00:04:59] Mike Kaput: And Code Interpreter was actually able to figure out the table layout, restructure the data and the formatting; it could run models and then reason about the results from seemingly pretty simple prompts. So this is a really powerful add-on, and Mollick, I think, considers it one of the more powerful AI tools that we're seeing out there in the market today.

[00:05:20] Mike Kaput: So Paul, first off, I think you've, you know, read about it, played around with the tool a bit. What kind of use cases do you see for something like this for marketers or business

[00:05:29] Paul Roetzer: leaders? I actually, I've had access to it for like four weeks and I hadn't, I haven't tested it until recently. Finally, I finally found time to get in there and play around with it.

[00:05:40] Paul Roetzer: I think Ethan Mollick's takes are probably the best I have seen on this tool, because he's been sharing his experimentation with it for a while. And just a quick plug: Ethan Mollick is actually one of the keynotes at MAICON this year, July 26th to 28th. He will be our day one closing keynote; I think it's the July 27th keynote.

[00:05:59] Paul Roetzer: So I'm really looking forward to interviewing him because, excuse me, fireside chat stuff. But there's so many amazing things he's written, many related to Code Interpreter recently. He posted one just on July 9th on Twitter, and he's just @emollick, M-O-L-L-I-C-K. So he has a Code Interpreter for analysis FAQ.

[00:06:21] Paul Roetzer: Yes, it is a useful and powerful tool for real work. No, it is not appropriate for all types of tasks today. Yes, it will get more capable. No, it doesn't replace data analysts; it's a complement. Yes, you'll need to know some statistics and check the AI. So he then went on to say, it operates at the level of a pretty good grad student on most data analysis tasks.

[00:06:42] Paul Roetzer: It's surprisingly adept, but still makes occasional mistakes and bad choices that a more experienced human would not; however, you can quickly check the analysis. So for me, based on my experience, my understanding of the technology, what I've read about it, and now what I've experimented with myself.

[00:07:01] Paul Roetzer: It does give data analysis capabilities where you previously probably didn't have them. Like if, if you don't have data analysts on staff, you now can have a grad, you know, grad or post grad level data analyst to do a lot of this work. So anywhere where you need to analyze data from, you know, take, take your email campaigns and drop it in and say, what's interesting about this data?

[00:07:24] Paul Roetzer: Where's it, you know, where's it performing? Well, what are the anomalies in the data sets? Run graphs, run charts. The thing I found, so I, the experiment I ran was, last year I created this AI talent index. So you're, you're familiar with this, Mike? We basically, what I was doing was I took sales navigator data, LinkedIn sales navigator data, and I built a workbook where I was trying to find the most likely job titles and industries where AI adoption would be, sooner than others.

[00:07:53] Paul Roetzer: And because, you know, pre-ChatGPT, it was kind of hard to find people who were adopting ai, who were piloting and scaling into organizations. So I had this theory if, if you looked at different job titles and you looked at different verticals and then seniority levels, we could probably find some. Kind of get ahead of the adoption curve and find the personas and the people that were more likely to pursue AI knowledge and tools, and therefore, as an institute, be interested in the content we create and our events and our online courses.

[00:08:22] Paul Roetzer: So I have this database. I was like, well, that's a good starting point. And so, what I had learned from kind of following Ethan's guidance and some others is sometimes you just start with, like, a really basic thing. So I took this data set, downloaded it from Google Sheets, and it's pretty straightforward data, but again, it's just a bunch of Sales Navigator data. And I just said, review the data set and identify the best titles and industries to target for AI education and events.

[00:08:45] Paul Roetzer: So again, just real simple, not giving it much. It first went through and did an analysis of the data set. So it says, like, the data set seems to be a list of different job titles with corresponding functions and metrics related to adoption and impact of AI and automation. It includes counts of roles globally, blah, blah.

[00:09:01] Paul Roetzer: And then it goes through like column by column and gives you its interpretation of what the data is and. It was all accurate. So, it says to determine the best titles and industries to target for education and events, we can consider roles with high adoption likelihood. And it kind of goes on. It's like, okay, this is pretty interesting.

[00:09:18] Paul Roetzer: And then it says, let's continue with the analysis. And then, then goes through and actually starts analyzing the data. And then it comes back with, based on the combined rating of adoption likelihood and automation impact, which were two variables I created, like subjective assessments of these things.

[00:09:33] Paul Roetzer: Here's the top 10 titles to target for AI education and events. It then, like, force ranks the top 10. It then goes through and ranks the top seven industries based on size. So then I said, can you create visualizations to help tell the story of the data and market opportunities? It then created two bar charts that showed the data by title and by industry.

[00:09:53] Paul Roetzer: And then it analyzed those and explained what the data was showing. I then said, adoption likelihood and intelligent automation impact are subjective ratings that I created and estimated; what else should I be thinking about for target audiences? It then went through and gave me eight different things I should think about enriching this data set with.

[00:10:13] Paul Roetzer: Now code interpreter is not connected to the internet, so it couldn't go do this for me. But it said you could look at current AI usage, growth of a role or industry, techno, technological readiness, regulatory environment, like all these really interesting things that it's like, hmm, those would be interesting data sets.

[00:10:31] Paul Roetzer: And then I said, what data sources do you have access to that you could use to enrich it? And that's when it's like, well, hey, I don't have access to the internet, but here's ways I could go get this data. And then I said, what questions should we be asking about this data set that could lead to interesting strategic insights?

[00:10:46] Paul Roetzer: This, to me, is where it starts getting really interesting. This is the data analyst side, where I may be a business leader, I may be a marketer, and I look at a data set and it's like, I don't even know what to ask this thing. Like, I don't live in data sets all day long. I don't know how to build charts outside of a bubble chart and a bar graph and, you know, heat maps.

[00:11:02] Paul Roetzer: It's like I don't, I don't know all the different visualizations that can exist. So I just said like, what, what should I be asking of it? And it came back with six really good questions, like things, some of them I hadn't thought to ask of the data. And then I said, what are some creative ways we could visualize this data?

[00:11:18] Paul Roetzer: And it came back with a heat map, bubble chart, tree map, box-and-whisker plots, I don't even know if that's a real thing, like stacked bar chart and network graph. And then I said, can you please build these for me? Build me a heat map. So all of these things, it did. So everything I'm explaining right now, it just did in real time.
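The force-ranking step Paul describes maps onto just a few lines of pandas. Here is a minimal sketch of what Code Interpreter likely ran; the column names, titles, and subjective 1-to-5 ratings are invented stand-ins for the real AI talent index data:

```python
import pandas as pd

# Hypothetical slice of the uploaded AI talent index workbook
titles = pd.DataFrame({
    "job_title": ["CMO", "Content Manager", "Data Analyst", "SEO Specialist"],
    "adoption_likelihood": [4, 5, 5, 3],  # subjective 1-5 rating
    "automation_impact": [3, 5, 4, 4],    # subjective 1-5 rating
})

# Combine the two subjective ratings and force-rank the titles
titles["combined_rating"] = (
    titles["adoption_likelihood"] + titles["automation_impact"]
)
ranked = titles.sort_values("combined_rating", ascending=False)
print(ranked[["job_title", "combined_rating"]].to_string(index=False))
```

From a ranked frame like this, a follow-up prompt for visualizations would typically generate a matplotlib bar chart over the same columns.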

[00:11:36] Paul Roetzer: And so, take any dataset, anything about churn, about growth, about audience, about persona building, about target markets, whatever it is, that's what this now does. You can upload any style document to it and it'll just analyze it. So as a marketer, the potential uses of this are massive. And it really does.

[00:11:57] Paul Roetzer: Like, I don't, again, it's not a replacement to data scientists and data analysts. It does have the ability to code and do all these interesting things, but I think in the near term, it's just going to be additive. It's going to give marketers and business leaders access to technology and capabilities that they probably just didn't have on their team or they didn't have the time or resources to see through.

[00:12:19] Paul Roetzer: So, I don't know. I mean, if you imagine this, Mike, like, we were big HubSpot users. Can you imagine if this capability just lived in HubSpot, right? And we could just ask questions of our data and build charts on the fly, rather than having to learn, you know, how to go in and build custom reports and all that stuff.

[00:12:36] Paul Roetzer: Like, I don't want to do that. I just want to ask it to do it for me.

[00:12:41] Mike Kaput: That's really interesting you mention that because, you know, you and I come from the agency world, and I was thinking about how I would've killed to have this tool on the agency side, right? Not only for our own business, but for our clients.

[00:12:55] Mike Kaput: I mean, this constituted a good amount of the questions we had hoped to be asking about clients' businesses when we could actually

[00:13:01] Paul Roetzer: get at the data. Yeah. The one I was thinking about as I was doing this is, you know, we've, so if anyone's taken our state of the industry survey that I think is we're closing that down this week, I believe.

[00:13:11] Paul Roetzer: Yep. So we're going to be going through and do the data analysis. Well, Mike and I have been doing that for years on our own, like we, we export the data from the survey. I think we have over, what, eight or 900 responses this year? Yep. It's like to 19 or 20 different questions, and then you have all the profile data.

[00:13:27] Paul Roetzer: Well, that's me and Mike building pivot tables and going through and doing analysis and then building charts. I mean, we're talking about dozens of hours to analyze the survey data. That's a perfect use case for it. Just dump it in and say, what correlations do you see in this data? Like, what is interesting about that?

[00:13:44] Paul Roetzer: And then just kind of like ask it questions. But we can do that because we have worked that data for years, right? We, we already know the insights we look for. You could even feed it the PDF of last year's report. And say, here's last year's report. You know, draft findings based on this format for this year.

[00:14:02] Paul Roetzer: Start there and then say like, what else are we missing? So my mind just immediately, when you start playing with this, it's like, oh my gosh. Like all the use cases, all the times we're working with data where we don't really love it. Like, I mean, I love finding the insights in the data. I hate having to go in and remember how to build the pivot tables properly and like, right.

[00:14:21] Paul Roetzer: I don't do it more than like a few times a year. So it's just not my expertise. But I'm not going to pay a data analyst to do it. Like we, we can do it. We just don't necessarily enjoy that part of it.

[00:14:31] Mike Kaput: Yeah. Yeah, it's going to be really interesting. Spoiler. We will absolutely be using that for the state of the industry report even, you know, I think in some of the examples too, I don't think people realize even the low hanging fruit with stuff like this in the state of the industry example, even my ability to upload the spreadsheet and say build tables out of these different columns that took hours before and we had to do it just to see numbers, right?

[00:14:56] Mike Kaput: Baseline numbers to even extract insights from.
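The survey cross-tabs Paul and Mike describe building by hand are a one-liner for Code Interpreter. A minimal sketch, with invented survey columns and answer values, using `pd.crosstab` (the pivot-table equivalent for simple counts):

```python
import pandas as pd

# Hypothetical export of survey responses, standing in for the real
# State of the Industry data (columns and values are invented)
responses = pd.DataFrame({
    "role": ["Marketer", "Marketer", "Executive", "Executive", "Marketer"],
    "ai_maturity": ["Piloting", "Scaling", "Piloting", "None", "Piloting"],
})

# Counts of maturity level by role -- the table built by hand before
table = pd.crosstab(responses["role"], responses["ai_maturity"])
print(table)
```

The same `table` can then be fed straight into a chart request, which is exactly the "baseline numbers first, insights second" workflow described above.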

[00:14:59] Paul Roetzer: Yeah. Or, it may be, like, we've historically used pie charts and bar graphs, because that's what we know to do. You could say, what are some other creative ways we could be visualizing this in this report? Yeah. Now, we may still need to go build them, you know, from a high-res perspective or whatever; I don't know how well the charts export.

[00:15:18] Paul Roetzer: But yeah. The other thing I'll, I'll share real quick. I had this like perfect use case in my head that I can't do now. So for Ethan's fireside chat at MAICON, I have like 15 of his articles from his newsletter that had like, I want to base the interview on just like things I want to explore with him.

[00:15:36] Paul Roetzer: And so what I thought about doing was creating an Excel file that had those links in it and then just uploading that to code interpreter and saying, go step by step through each of these links and find, you know, five to seven interesting questions to ask about each of the articles. Now, I was going to still go through myself and I've read every article multiple times, like so I could assess it, but because it's not connected to the internet, I can't do that, right?

[00:16:00] Paul Roetzer: So I'm probably actually going to have to do a one-off where I'll use GPT-4 with the browser connection and I'll just go one by one and do it, just to see what happens. And I may actually use it as an experiment with Bard, and maybe Anthropic or something. Like, I may play around with Pi from Inflection.

[00:16:16] Paul Roetzer: Yeah. And see how it, how it does with that, use case. Yeah. That's awesome.

[00:16:22] Mike Kaput: In less awesome news, the second big topic we're talking about today is that there have been a handful of stories in the past several weeks that are kind of shedding light on the dangers and misuse of AI in content and media.

[00:16:37] Mike Kaput: There's some horror stories kind of coming out about this. First up, a report from a misinformation tracking site called NewsGuard showed that there are a ton of content farms that are now using AI to generate hundreds of low-quality articles a day and then raking in programmatic ad dollars. You know, with programmatic ads, companies might be showing up on sites that they have no knowledge about by signing up to a programmatic ad network like Google's. Hundreds of brands, NewsGuard found, are unwittingly ending up displayed on these sites. And unfortunately, otherwise legitimate media sites are kind of following the lead here.

[00:17:15] Mike Kaput: So in another story, tech site Gizmodo recently started publishing AI-generated content, trying to publish more frequently using AI, and they got very problematic results. For instance, one article that they published on the Star Wars movies was just kind of like a list of them chronologically and some details about the movies.

[00:17:37] Mike Kaput: It was riddled with inaccuracies, and it prompted this big backlash and outcry from Gizmodo staff, because they're looking at this story and saying things like, quote, they're actively hurting our reputations and credibility, and they show, quote, zero respect for journalists. So the company is trying to use AI, failing at it, and their human writers are up in arms about it.

[00:18:00] Mike Kaput: And then kind of last, but certainly not least, news recently came out of a leaked email from a German tabloid named Bild, and it details how this publication plans to pretty overtly just replace over a hundred jobs with AI. So in the email, the tabloid said that it will, quote, unfortunately be parting ways with colleagues who have tasks that in the digital world are performed by AI and/or automated processes.

[00:18:25] Mike Kaput: The email also detailed that the people who'd be replaced by AI include, quote, editors, print production staff, sub-editors, proofreaders, and photo editors, and that these careers, quote, will no longer exist as they do today. Now, that's just kind of one publication, but it is notable to say that they are owned by a company called Axel Springer,

[00:18:45] Mike Kaput: which is a major European publishing house, and they also own other well-known publications like Business Insider and Politico. So there are some fears that this could be kind of more widespread. So Paul, I want to start with the Gizmodo story specifically, because you had posted about this on LinkedIn saying every brand and media company should be experimenting with AI-assisted content, but not like this.

[00:19:09] Mike Kaput: So how should companies be experimenting with AI

[00:19:13] Paul Roetzer: content? Yeah, the next sentence was: it's lazy and gives the appearance of a complete lack of understanding around how the technology works. I mean, I'm honestly just shocked that a major media outlet would make this kind of misstep. Like, it just looks really, really bad.

[00:19:32] Paul Roetzer: I mean, I think it destroys the trust certainly of the team, the audience. You know, if I'm reading something and I'm expecting it to be correct from a trusted source, and it's not, I'm going to get really annoyed. So I just feel like it was obvious and inevitable that people were going to use this technology to take shortcuts.

[00:19:54] Paul Roetzer: I think there's a lot of financial pressure on these media outlets. Certainly there's probably a lot of pressure to be, you know, integrating the technology. There's probably a lot of push to drive efficiency and reduce reliance on human writers. Like, I mean, nobody wants to talk about it, but that's it.

[00:20:10] Paul Roetzer: It is absolutely a reality for these outlets, you know, whether they're owned by private equity or they're publicly traded. There is going to be massive pressure to downsize teams if they think they can achieve high-level outputs. Now, Gizmodo and other outlets are basically like, hey, this is a piece of content that just wouldn't have been written otherwise.

[00:20:27] Paul Roetzer: Like, we're not replacing writers necessarily, we're just creating it. It's like, yeah, right. But you're testing AI to write this stuff. At the end of the day, you're all about generating revenue through ads and clicks. So if you can do that through crap content written by AI instead of quality content written by humans, there's going to be a pretty high motivation to do that.

[00:20:51] Paul Roetzer: So it's just a downward spiral that inevitably leads to, like, collapse of these companies. Like, I just really don't get how it's not obvious that this doesn't end well. So whether it is, you know, just having some random stuff written by AI and not even, like, somebody left a comment on my LinkedIn post about verify and edit.

[00:21:14] Paul Roetzer: It's like, just verify and edit. Like it's, you can have the AI write the Star Wars story but have someone on the staff who knows the Star Wars chronology or can use Wikipedia, have an intern. Yeah. Somebody goes through and makes sure this is correct. So this idea that you're just trusting this tech to just pump something out.

[00:21:32] Paul Roetzer: So either they don't understand how the tech works and that it makes stuff up or they don't care. Neither of those looks good on the editorial team. Yeah. And so if you're a brand or a media company, just you, you should care about the output of your content. It shouldn't just be about like ad dollars or search results, because eventually you're going to just destroy your brand doing that and it's not going to take very long.

[00:21:54] Paul Roetzer: So I mean stuff like this is just maddening to me. Either it is just really bad leadership that just doesn't get it or they just don't care and neither of those is a good scenario. No.

[00:22:07] Mike Kaput: So when it comes to, like, the AI-generated content farms, I was just kind of curious: if these businesses, for instance, or the Gizmodos of the world, think that they're able to make money

[00:22:17] Mike Kaput: doing this, then clearly they're attracting some type of traffic to their sites by doing this, who they then show programmatic ads to. Like, is this going to be a problem that Google needs to solve? Especially as Google has not come out and said AI-generated content necessarily is bad.

[00:22:33] Paul Roetzer: Yeah, you would think.

[00:22:34] Paul Roetzer: And there was an article, I don't believe we have it in the show notes, and I'd have to go back and find it, but there was an article a week or so ago about all these major brands that were appearing alongside this horrible AI-generated content. Mm. And so, yeah, I mean, I think there's going to be a reckoning from an ad perspective too.

[00:22:51] Paul Roetzer: And also it's just like, how long does the content farm game last? Like, you and I, when did you start working at PR 20/20? Was it like 2012? 2012, yeah. Okay. So again, people don't know the history. So I owned an agency called PR 20/20. We were HubSpot's first partner back in '07. Mike joined me there in 2012 as a content specialist.

[00:23:09] Paul Roetzer: A writer. Yep. That was around the time when content farms were exploding. So people were paying like one to three cents per word. I started in the industry in 2000, and the going rate was like about a dollar per word, you know? But depending on how highly technical it was, maybe it was two to three dollars per word for, like, scientific papers or advanced B2B papers, something like that.

[00:23:33] Paul Roetzer: But then it started like kind of coming down and by 2012, 2013, you had these content farm companies that were cropping up, that were charging literally like one to 3 cents per word. Yeah. And that's what we were fighting against as an agency was like, no, we're not going to discount the quality of our content.

[00:23:49] Paul Roetzer: You're paying for junk. Like, they're not even verifying this stuff. And it's almost like we're living through that again, except now AI can be the content farm, and it can probably be less than a penny per word. Mm. But to what end? Like, the content farm game didn't work. AOL tried to do it with, what was it, Patch? Something or other?

[00:24:06] Paul Roetzer: They became that junk site. And from the outside you watch this stuff happen and you're like, this isn't going to work. Like, yeah, you may make some money for a year or two, but this is a horrible business model, and I just feel like I'm watching it all over again, except at a whole other scale.

[00:24:22] Paul Roetzer: And it just never works. Just, I don't know, produce good content. Yeah. Use AI to assist you in the creation of it, the curation of the content, the enhancement of the content, the outlining. Like, use it, but don't think you can take humans out of the loop. It's just not a viable thing right now.

[00:24:42] Paul Roetzer: And nor do I think it should be the goal. Amen.

[00:24:47] Mike Kaput: Um, all right. Third topic for today. We're going to kind of talk through why and how investors are betting on generative AI, because we've seen some research recently by McKinsey that estimates that generative AI as a category could add the equivalent of $2.6 to $4.4 trillion in annual value to the global economy.

[00:25:11] Mike Kaput: And they actually estimate that about 75% of that value will accrue through four use cases, which they categorize as customer operations, marketing and sales, software engineering, and R&D. They also detailed that the impact will be felt across all industries and sectors, but specifically point out that banking, high tech, and life sciences could see a huge impact.

[00:25:34] Mike Kaput: Now, this full research report is far beyond the purview of this discussion. Maybe we can cover it in depth another time. It's well worth a read. But the larger point here is that this possible market impact of generative AI is simply massive. And we've seen in the past week or two investors are clearly responding to that because they've just written a couple of huge checks to leading generative AI companies.

[00:25:58] Mike Kaput: Now, first up, Inflection AI, who we've talked about quite a bit, announced it raised $1.3 billion in a fresh fundraising round that was led by Microsoft, Reid Hoffman, who started LinkedIn, Bill Gates, and Nvidia. And what's crazy is Inflection AI has been around just over a year, and in that time they've built one of the world's most sophisticated large language models, and this model powers Pi, their personal AI assistant product.

[00:26:26] Mike Kaput: The company is also building the quote largest AI cluster in the world, comprising 22,000 Nvidia H100 Tensor Core GPUs. It's also important to note that Inflection AI CEO and co-founder Mustafa Suleyman also co-founded DeepMind, which was acquired by Google back in 2014 and forms the backbone of that company's AI projects.

[00:26:50] Mike Kaput: Now at the same time, I think it was on the exact same day, Runway, which is a suite of generative AI tools for creators, announced a $141 million extension to its Series C funding round. And that was participated in by companies like Google, Nvidia, Salesforce Ventures, and other huge investors. Now, I wanted to start off here, Paul, by connecting some dots. Like, why are investors writing such enormous checks for generative AI companies, especially as the overall startup funding environment seems to be slowing down a little bit?

[00:27:26] Paul Roetzer: Well, these are the companies building the foundational models, for one. So, you know, we've talked about the generative AI tech stack: you have the kind of infrastructure companies, like the cloud and the chip companies, like Nvidia, and then Google, AWS, Microsoft. And then you have the model companies that are building the foundational models that you can then build applications or software on top of.

[00:27:49] Paul Roetzer: So these investments: Runway is building their own models, and Inflection is certainly building a massive foundational model. They also announced last week that within a few months they'll have the second largest supercomputer in the world. So they're raising all this money because they're investing massively in the infrastructure, in the chips, the GPUs.

[00:28:12] Paul Roetzer: So they can have this, you know, this massive cluster of these GPUs to do this stuff, to train these models. And it takes a lot of money. So I think the reason you're seeing massive checks is that a lot of this money ends up going back to Nvidia to buy GPUs, to build and train these massive models.

[00:28:30] Paul Roetzer: Cohere, you know, they just raised, what was it, a hundred and some million? Yeah. A couple-billion-dollar valuation. So that's why: it takes a lot of money to build these foundational models and to train them. So that's why we're seeing it there. And then at the application layer, you've seen less, at least we're not hearing about as much at the SaaS model layer, because it's really hard to know who to make a bet on.

[00:28:53] Paul Roetzer: So right now it seems like a lot of the money is funneling into these five or six foundation model companies that appear to be the best bets moving forward to be, you know, built on top of. Inflection in particular is quite interesting. We talked about it recently on the podcast a few times. I've been testing Pi.

[00:29:12] Paul Roetzer: It's designed to be more personal and empathetic and conversational. So if you haven't explored it yet, you can go test it out for free. It asks a lot of questions. Like, when you ask it to do something, rather than just outputting something like ChatGPT would, where you say write me an essay on this and GPT just writes it,

[00:29:30] Paul Roetzer: Inflection will say, tell me more about the audience you're writing for, or what are the goals of the piece? So it's designed to be more conversational, I guess, and iterative. The one other thing that caught my attention yesterday was, so, Mustafa Suleyman was not active on Twitter for a pretty long time.

[00:29:48] Paul Roetzer: He tweeted very, you know, periodically. Which, by the way, we don't have Threads in the conversation this week. We should probably have a word or two about Threads at the end of this. Yeah, yeah. Anyway, so, Twitter, which is still my primary vehicle for learning things and keeping up on the AI space. He tweeted yesterday:

[00:30:06] Paul Roetzer: Soon LLMs will know when they don't know. They'll know when to say, I don't know, or instead ask another AI, or ask a human, or use a different tool or a different knowledge base. This will be a hugely transformative moment. So this is kind of a little, I guess, side conversation here, but one of the big issues we've talked about with LLMs is they make stuff up, they hallucinate, and you can't trust them.

[00:30:33] Paul Roetzer: They'll make up citations, they'll make up facts, places, people, whatever. One of the key areas of pursuit right now within the research labs is how do we solve this? How do we solve hallucination? We've talked about how Sundar Pichai, on that 60 Minutes episode back in like March or April, said, we're making progress, but we don't know.

[00:30:50] Paul Roetzer: So the general consensus up until now is that they don't really know how to solve it. But here we have Inflection raising $1.3 billion, and I believe they're actively raising more now. I think they plan to raise like $10 billion plus. You have the CEO and the founder saying, this is something we think is solvable.

[00:31:10] Paul Roetzer: And oftentimes what happens is, like a Sam Altman or Mustafa, they're tweeting something that they're actively working on and that they feel like they've made significant progress on, to the point where they're going to publicly say, this is solvable. So one of the challenges of large language models, especially in corporations, for marketers, for brands, for businesses, is this unreliability issue.

[00:31:34] Paul Roetzer: And so now you have a company that just raised $1.3 billion with a CEO saying, hey, I think we can solve this. And so, again, you just never know when the state of this stuff is going to flip. And if this kind of funding enables them to build the kind of models that actually solve for hallucination, in whatever path they choose to do it.

[00:31:54] Paul Roetzer: It's a really big deal. And that's why, when somebody raises that kind of money, like I put on LinkedIn: hey, if you're not following Inflection, you've got to follow them. Yeah, this is where the innovation comes from, and you've got to pay attention to what they're saying.

[00:32:06] Mike Kaput: So if I'm a marketer or a business leader looking at generative AI tools, models, I mean, how should I be thinking about making my own bets?

[00:32:16] Mike Kaput: Not necessarily from an investment perspective, but from where to allocate time and budget to generative AI companies and platforms, given how fiercely competitive this space is?

[00:32:26] Paul Roetzer: Yeah, it's tricky. I've been thinking about this a lot lately. Like, what is the buying process for an LLM, or for the applications?

[00:32:34] Paul Roetzer: And I don't know that it's defined yet. I don't know that there's a correct way to do this. Like, what I'll tell organizations: there's a big enterprise we've been talking to lately, kind of doing some consulting work for, and I said, okay, you're a Microsoft shop, start with Microsoft.

[00:32:49] Paul Roetzer: Like, you start with the Copilot stack, because if you're in a big enterprise, you're talking about massive procurement. You've got to get through security, you've got to get through all these issues, regulatory, if you're, you know, in a highly regulated industry. So your best bet is going to be, if you have AWS, Google, or Microsoft, there's a really good chance you're working with one of those companies.

[00:33:09] Paul Roetzer: And if you're a big enterprise, you have a relationship with them, a high-up relationship. Start there. And then you're going to go out and look at, okay, what are the other language models we should be talking to? What are the application layer companies we should be talking to? And I think what's going to happen is, let's say you take a big enterprise, like a financial services company. There's a chance that the marketing team is going to just move forward with a Jasper or a Writer or something at that application layer, just so they can get started.

[00:33:36] Paul Roetzer: Because there are obvious use cases, and these are solutions built for the enterprise for those specific use cases. And so then you're going to have the enterprise overall that's going to be looking at, long-term, what's our play with language models, and how do we build them into the organization?

[00:33:51] Paul Roetzer: But that's going to take a long time, whereas you can jump in and start using an application company, a SaaS company, tomorrow for marketing. So I think there's going to be this mix of near term, let's just go get an application layer company, and then mid to long term, let's start talking to these foundation model companies and seeing if we can build our own, or go get an open source model, like a Falcon 40B or LLaMA, whatever, and build our own language models internally.

[00:34:17] Paul Roetzer: It's just, I haven't seen anybody who's solved for this yet, but a lot of people are asking us these questions. Yeah.

[00:34:23] Mike Kaput: All right. Let's jump into rapid fire topics. We have a ton of them, so we'll move pretty fast here. First up, VentureBeat recently published a really interesting, in-depth Q&A with Aidan Gomez, who is

[00:34:36] Mike Kaput: the co-founder and CEO of Cohere, a leading AI company we talk about often on the podcast. Now, this interview covered a ton of ground and is well worth reading, but what jumped out to us, I think, was Gomez's thoughts on the future of large language models. So what stood out to you as notable in that context in the interview?

[00:34:55] Paul Roetzer: Yeah, so I'll just read the excerpt that caught my attention, and then I'm going to follow up with some context from a Lex Fridman podcast that I happened to be listening to a few days after I read this. So the excerpt that caught me was, he said: I think there's a way in which synthetic data leads to the exact opposite of model collapse, like information, knowledge discovery, expanding a model's knowledge beyond what is shown in its human data.

[00:35:18] Paul Roetzer: So what we mean here is, right now these models learn from human data. They read the internet, they watch videos, they look at images, whatever. They learn from stuff that humans created. He went on to say: that feels like the next frontier, the next unlock for another very steep increase in model performance.

[00:35:36] Paul Roetzer: Getting these models to be able to self-improve, to expand their knowledge by themselves without a human having to teach them that knowledge. So synthetic data, meaning it just makes its own data. He went on to say, I think that because we're starting to run up on the extent of human knowledge, so meaning we're going to run out of human knowledge to give it, it will have learned everything we've created, and then as we create new stuff, it's keeping up with it at a superhuman scale.

[00:36:02] Paul Roetzer: So it's like, it's not going to learn anything new from us. He went on to say: we're starting to run up on the extent and breadth of the data we can provide these models that gives them new knowledge. As you start to approach the performance of the best humans in a particular field, there are increasingly few people for you to turn to to get new knowledge.

[00:36:20] Paul Roetzer: So these models have to be able to discover new knowledge by themselves without relying on humans' existing knowledge. It's inevitable. I see that as the next major breakthrough. So that, to me, again, is interesting. (We're going to talk about World of Bits in a minute; I know it's one of our items today.)

[00:36:37] Paul Roetzer: Back in February, I looked out and said, okay, everyone's working on action right now. They're working on AI agents, being able to take these models beyond just prompts-to-outputs to actually doing action. And so when you see a quote like this, it's like, oh, okay, so action is definitely happening, and now synthetic data is becoming a major focus.

[00:36:54] Paul Roetzer: Because if he's talking about this, then every other research lab is thinking similar ideas and probably exploring it. So then I happened to be listening to Marc Andreessen (Andreessen Horowitz, Netscape, you know, the legendary Silicon Valley guy), and he's on an interview with Lex Fridman, and this topic comes up. They start talking about large language model training data, and is it possible the majority of future training data will be human conversations with AI agents?

[00:37:19] Paul Roetzer: So they're actually saying, like, when you're talking to Inflection's Pi, that conversation and its outputs to you, what it's saying to you, is synthetic data. It's creating it, and that becomes training data for future LLMs. So now, all of a sudden, all these outputs from these machines, all these conversations, when you ask it something, that data becomes a new training data set.

[00:37:42] Paul Roetzer: So in theory that kind of makes sense, but then they went into, is it actually learning anything, though? So his point, what Andreessen was saying, was: you can ask the LLM to create anything. Give it a hypothetical scenario, like, you know, debating the future of business and the role of LLMs in the future of marketing tech stacks, whatever it may be.

[00:38:08] Paul Roetzer: And then that output becomes training data for the next set, that synthetic data that it created. So he's saying, is there a signal in there? This is the exact quote: Is there a signal in there that's additive to the content that was used to train it in the first place? So one argument, the information theory argument that he presented, is that it's completely useless.

[00:38:30] Paul Roetzer: The AI output comes from the human-generated content it was trained on, so there's actually nothing new. The analogy he gave is, like, empty calories. It doesn't help. So I ask it a question, it generates some synthetic data, but that synthetic data came from its training set of human data. So is it really creating anything new to train on?

[00:38:52] Paul Roetzer: And then there's the opposite argument that says, no, no, no, it actually is. And so he said, this is the trillion-dollar question. Like, someone will make or lose a trillion dollars depending on whether synthetic data actually works as training data or not. So it's a little bit more of an abstract topic to be thinking about, but it's critical to the future here.

[00:39:10] Paul Roetzer: And you're going to be hearing a lot more about synthetic data and whether or not it can actually be used to improve these models.
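
The "empty calories" argument above can be made concrete with a toy sketch. This is purely illustrative: the function names and the unigram "model" are invented for this example, and real LLM training is vastly more complex. The point it shows is that a model which only samples from its own learned distribution cannot produce tokens it has never seen, so retraining on its synthetic output cannot expand its vocabulary.

```python
import random

random.seed(0)

def train(corpus):
    """Toy 'language model': a unigram frequency table."""
    model = {}
    for token in corpus:
        model[token] = model.get(token, 0) + 1
    return model

def generate(model, n):
    """Sample n synthetic tokens from the model's own distribution."""
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

human_data = ["the", "cat", "sat", "the", "cat"]
m1 = train(human_data)
synthetic = generate(m1, 1000)          # synthetic data from the model itself
m2 = train(human_data + synthetic)      # retrain on human + synthetic data

# The retrained model's vocabulary cannot exceed the original's:
print(set(m2) == set(m1))
```

The open question the hosts describe is whether richer models can combine what they know into genuinely new signal, which this trivial sketch, by design, cannot capture.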

[00:39:17] Mike Kaput: So another related topic here that you touched on is this idea of autonomous AI agents, systems that can take actions for you online. So one example of this is, like, if you could ask an AI agent: hey, go find me the three cheapest flights to Cleveland for

[00:39:34] Mike Kaput: July 26th to the 28th, and then book one that is within my budget, or whatever, and it can go actually do these actions for you in your web browser. So it's still really early days for these types of systems, but we've definitely thought it's worth paying attention to. And you mentioned your concept called World of Bits that you had published about earlier this year, and you've kind of started revisiting some of these thoughts in the last few weeks.

[00:40:02] Mike Kaput: Can you tell us a little bit more about World of Bits and how it relates to this overall topic?

[00:40:07] Paul Roetzer: Yeah, so I would say check out the link in the show notes. We weren't going to spend a ton of time on this today, but basically the premise was, World of Bits was a research paper from like 2017 at OpenAI, from this guy Andrej Karpathy, who we've talked about on the show before, who then went on to head up AI at Tesla, and now he's back at OpenAI. And back in February,

[00:40:24] Paul Roetzer: I was trying to figure out, why did he go back? Like, what is he doing back at OpenAI? And that's when I realized their theory of World of Bits, this idea that the machines could actually take actions: mouse clicks, keyboard inputs, you know, filling out forms. They thought the breakthroughs had occurred that could now make this possible.

[00:40:43] Paul Roetzer: That we could now build AI agents that could take actions on our behalf. So if you're a marketer, rather than doing the 21 steps to send an email in HubSpot, you tell the machine to send an email. Or the example I gave at the beginning: rather than me learning how to build custom reports and then doing these visualizations and figuring out what charts to use, I just ask Code Interpreter to do it, and it just builds the stuff for me.

[00:41:03] Paul Roetzer: So we're now entering this world where these AI agents will be taking actions. It's early, but it's accelerating very quickly, and there's lots of money being poured into it. So if you want to understand why and where this is going, go read the World of Bits article that I wrote, and it'll give you all the context of what's happening and kind of why it's moving so quickly.
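
The agent idea discussed above boils down to a loop: the model observes the current page state, emits a keyboard or mouse action, the browser applies it, and the cycle repeats until the task is done. Here is a minimal, hypothetical sketch of that loop; `fake_model` and `fake_browser` are invented stand-ins (not any vendor's actual API), hard-coded around Mike's flight-booking example just to make the control flow visible.

```python
def fake_model(task, page):
    """Stand-in for an LLM: returns the next UI action for the task."""
    if "search box" in page:
        return ("type", "flights to Cleveland, July 26-28")
    if "results" in page:
        return ("click", "cheapest result within budget")
    return ("done", None)

def fake_browser(page, action):
    """Stand-in for a browser: applies an action, returns the new page state."""
    kind, _ = action
    if kind == "type":
        return "results page"
    return "booking confirmed"

def run_agent(task, page, max_steps=10):
    """Observe/act loop: ask the model for an action until it says done."""
    trace = []
    for _ in range(max_steps):
        action = fake_model(task, page)
        trace.append(action)
        if action[0] == "done":
            break
        page = fake_browser(page, action)
    return trace

print(run_agent("book a flight", "home page with a search box"))
```

Real systems replace the two fakes with a language model and a browser automation layer, and add safeguards around irreversible actions like payments; the loop structure, though, is the core of the "World of Bits" framing.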

[00:41:24] Mike Kaput: So OpenAI actually just published an article titled Introducing Superalignment, in which they announced the company's intent to build a team around aligning a superintelligence. OpenAI says they believe superintelligence, which is, kind of, AI systems vastly smarter than humans, could arrive in the next decade.

[00:41:47] Mike Kaput: But they worry that no solution exists to make sure that a superintelligent system would actually follow human intent. And OpenAI has now said that they intend to create a solution to this, saying they're going to dedicate 20% of their current compute and a team led by co-founder Ilya Sutskever to solve this problem, ideally within four years.

[00:42:11] Mike Kaput: Now, Paul, can you kind of put this into context for us? Because superintelligence arriving this decade seems like a pretty bold prediction, but one they're taking seriously.

[00:42:21] Paul Roetzer: Yeah. And this all connects back to like this, AI is a threat to humanity and some of the stuff we've talked about recently that's been in the news about some of the concerns long-term.

[00:42:29] Paul Roetzer: So, again, just a little historical context here. Ilya was a co-founder of DNNresearch with Geoff Hinton, who is the Google guy who left and said, yeah, AI could destroy humanity. They created something called AlexNet back in 2012 that achieved this amazing benchmark within the ImageNet competition, which is, like, computer vision basically, and set off what we're seeing today.

[00:42:55] Paul Roetzer: The deep learning movement that is being commercialized now in language and vision and video and all this stuff really kind of goes back to that moment in 2012 at ImageNet. So Ilya is a very important figure in the deep learning movement of the last 10-plus years, co-founder and chief scientist at OpenAI.

[00:43:13] Paul Roetzer: So the fact that he's working on this matters. That's a significant thing. This is one of the things that has been talked about in trying to solve these threats: that we need to put way more resources toward thinking about this and working on these solutions now. When superintelligence occurs, and how we sort of jumped from AGI to superintelligence all of a sudden as, like, the thing, nobody really knows. But Nick Bostrom wrote the Superintelligence book.

[00:43:43] Paul Roetzer: Was it 20 years ago? I mean, this isn't a new concept, or at least the original research paper was from like 1997 or 1998. Yeah. So a couple quick excerpts that jumped out to me. When you look at the article from OpenAI, they say: Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world's most important problems.

[00:44:04] Paul Roetzer: But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. So, you know, a little light Tuesday conversation here. The post continues: Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years.

[00:44:24] Paul Roetzer: So we hope to aim for the more difficult target, to align a much more capable system. So basically, if we try and solve superintelligence, we'll figure out the AGI thing along the way, and then, you know, we'll save humanity, basically. And just a quick definition of superintelligence; I'll go to the Nick Bostrom definition.

[00:44:44] Paul Roetzer: By superintelligence we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. So basically, you know, it's game over. Like, these things are just better than us at everything. And so it's a world that's really hard to imagine.

[00:45:05] Paul Roetzer: It's hard to, you know, sit here and have a realistic conversation about it, because it's so abstract to conceive of. But yeah, it's interesting that they're working on it. They're putting a lot of resources toward it. They obviously have a lot of compute and money to throw at this, and they have a lot of brains to work on it.

[00:45:24] Paul Roetzer: So, I mean, if nothing else, I think there's going to be some insane research coming out of this, and I assume they're going to be sharing a lot of their findings along the way. So I think it will lead to a lot of innovation around safety and alignment, which is good. Who knows if they'll ever solve for the superalignment, or if it's ever going to be needed, but I think it's going to accelerate a lot of the safety and alignment innovation, which is, I guess, a good thing.

[00:45:53] Mike Kaput: So Google DeepMind CEO Demis Hassabis, who we've talked about quite a bit on the podcast, says that his team is using techniques from AlphaGo, which is the famous AI system they built that beat the human world champion at the game of Go, to create a system called Gemini. And Hassabis says this will be more capable than ChatGPT.

[00:46:17] Mike Kaput: So Gemini is a large language model similar to GPT-4, which powers ChatGPT, and by injecting some of AlphaGo's intelligence, Hassabis and his team are, quote, aiming to give the system new capabilities such as planning or the ability to solve problems, according to Wired. Now Paul, given your long history following DeepMind, what do you make of this story?

[00:46:40] Paul Roetzer: I'm very intrigued. I mean, if you haven't watched the AlphaGo documentary, I think it's still free on YouTube. Watch it. It was life-changing for me, just in terms of my understanding of what was happening and where it could go. Mike and I also did a pretty in-depth summary of AlphaGo and the competition in 2016

[00:47:01] Paul Roetzer: in the book, the Marketing Artificial Intelligence book. There's kind of some context around this and why it matters. So yeah, anything Demis has to do or say, I find worth listening to. And I wouldn't be surprised if, by the end of this year, they do something pretty significant in this space that changes what you expect these agents to be able to do.

[00:47:25] Mike Kaput: So interestingly, Google is also taking some steps to make AI systems safer and more responsible. They just announced what they're calling their first Machine Unlearning Challenge. And according to Google, this is a competition that considers a realistic scenario in which, after training, a certain subset of the training images must be forgotten to protect the privacy or rights of the individuals concerned.

[00:47:49] Mike Kaput: So basically, they're setting up a fictional scenario where we need to stop an AI system from having access to data that it was trained on and forgetting essentially that it was ever trained on it. So this concept of machine unlearning can be used to protect privacy in the future, and it can also be used to erase inaccurate or harmful information from models in the future.

[00:48:12] Mike Kaput: So Paul, it sounds like Google is kind of accelerating steps to make AI systems safer in this way. Was that your read on this?

[00:48:20] Paul Roetzer: I think this is a huge deal. Yeah. I honestly hadn't really thought about this, and then I saw this and read it, and my guess is there's way more money and resources going into this than we were thinking about or aware of.

[00:48:34] Paul Roetzer: It's basically a lobotomy for a language model. So, you know, these use cases we just covered are pretty obvious. But if we go back to the synthetic data example: let's say GPT-5 is trained on some synthetic data, and then partway through the training they realize that that synthetic data adds zero value, and it actually maybe harms the eventual foundational model of GPT-5.

[00:48:58] Paul Roetzer: How do you get that training data out of it? Do you have to start all over again and just remove it? Or can you actually go in and extract that from its knowledge base? And so as these models continue to learn from either human knowledge or synthetic data, whatever it is, if you decide that there's harmful stuff in there for whatever reason, either it, you know, degrades the output of the machine, or it has harmful information in it, or it learned from total falsehoods, you're going to want to go in and get that out.

[00:49:30] Paul Roetzer: Or if you want to steer these things to have certain political leanings. So if you're a certain government that wants to have more control around what your citizens see and believe, then you may need to extract things out of the models that you don't want them knowing. Well, to my knowledge, there's no way to do that.

[00:49:49] Paul Roetzer: Yeah. Which is why they're doing this. This is like a DARPA-style thing. I wouldn't be surprised if DARPA, the Defense Advanced Research Projects Agency, has some stealth effort going on to do this exact same thing. So the more I thought about this, the more I realized, this is a grand challenge in the future of AI: being able to figure out how to get knowledge out of these things.

[00:50:09] Paul Roetzer: I have no idea how you do it. I'll be fascinated to read the research papers about it. Yeah.

[00:50:13] Mike Kaput: And even just speculating totally here, I have no, you know, in-depth knowledge of this, but it sounds like it could be a solution to some of these models being trained on copyrighted information, and to preventing them from being sued by the EU or individuals.

[00:50:26] Paul Roetzer: So, yep, I thought the same thing. That they could say, okay, fine, we'll go extract this data set out. Yeah. You know, if you're Stability AI and you're getting sued by, who's the photo company? Getty Images. Yeah. If you're getting sued by Getty, for them to say, listen, we'll pay the $200 million fine and we'll go get all the Getty Images out of that.

[00:50:46] Paul Roetzer: Right. Today, that's not possible, to my knowledge. They would have to build a new foundational model that doesn't use those images to start with, to get them out. They could not go retroactively and yank them out. So I agree a hundred percent, that's a really interesting use case as well. If you accidentally vacuum up a bunch of copyrighted content, or the AI Act in the EU.

[00:51:08] Paul Roetzer: Yeah, if there's stuff you can't have, and you retroactively go back and get out the stuff that the AI Act doesn't allow, that's interesting too. Yeah. Well, again, it's all theoretical here; we're just kind of, totally, yeah, brainstorming. But interesting.
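
To make the unlearning idea concrete: the only universally valid approach today, which is the baseline the hosts allude to when they say you'd "have to build a new foundational model," is to drop the forget set and retrain from scratch. This toy sketch (hypothetical, invented function names, with a trivial "model" that is just a label average) shows what "exact unlearning" means; the research challenge Google's competition targets is approximating this result without the full retrain, which is prohibitively expensive for large neural networks.

```python
def train(examples):
    """'Train' a toy model: here, just the mean of the labels."""
    labels = [y for _, y in examples]
    return sum(labels) / len(labels)

def unlearn(examples, forget_set):
    """Exact unlearning baseline: drop the forget set, retrain from scratch."""
    retained = [ex for ex in examples if ex not in forget_set]
    return train(retained)

data = [("a", 1.0), ("b", 2.0), ("c", 9.0)]
model_full = train(data)                    # influenced by ("c", 9.0)
model_clean = unlearn(data, [("c", 9.0)])   # as if ("c", 9.0) was never seen

print(model_full, model_clean)
```

The goal of approximate unlearning methods is to produce a model statistically indistinguishable from `model_clean` at a fraction of the retraining cost; how to do that for a foundation model is, as Paul says, an open problem.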

[00:51:22] Mike Kaput: Interestingly as well, Google also just changed its privacy policy to say that it can now use public data to help train and create its AI models.

[00:51:32] Mike Kaput: So this was a change; it had had some limited language in there to that effect before July 1st. But afterwards, Mashable was reporting that it seems like the privacy policy language is much more expansive, allowing Google to use information from searches or search behavior to more widely train the company's AI models.

[00:51:54] Mike Kaput: Now, Paul, I know some people seemed upset about the changes online, but from my perspective this seems like a kind of unsurprising move from Google. What did you think about that?

[00:52:05] Paul Roetzer: I was surprised it was a story. Like, I mean, I just assumed all of this. I don't, I mean, at this point, aren't we all just conditioned that everything we're doing is training AI in some way? Like, I don't know. Yeah.

[00:52:17] Mike Kaput: It might have come as a surprise, I guess, to the people complaining online.

[00:52:21] Paul Roetzer: Perhaps. I don't know. I think I saw a quote from somebody at Google who was kind of like, really? This is what we do. So yeah, I don't know. I mean, I didn't really think it was too big of a deal.

[00:52:34] Mike Kaput: So if you're seeing this come across your feed, maybe just consider that it's been already trained on you.

[00:52:39] Paul Roetzer: Just click yeah.

[00:52:43] Mike Kaput: All right, so the US House of Representatives just circulated a memo saying that offices within the House are only allowed to use the paid version of ChatGPT moving forward, because there are privacy concerns and ChatGPT Plus has some improved privacy features. Additionally, House offices are not allowed to use any other large language models.

[00:53:04] Mike Kaput: Not to mention this memo circulated some guidelines for how to use the tool. It kind of seems, Paul, like Congress, like many other organizations, is waking up to the fact that misuse of ChatGPT can land you very quickly in hot water and lead to major privacy issues. So what did you think about that?

[00:53:23] Paul Roetzer: I mean, just another reminder: if your organization does not have generative AI policies that guide the use of generative AI within your organization, get them tomorrow.

[00:53:32] Paul Roetzer: Like, I mean, how many months now? Nine months since ChatGPT came out? It's time to set some policies for your team about what they're allowed to do and not allowed to do. So yeah, I'm kind of surprised it took them this long to do it.

[00:53:50] Mike Kaput: So OpenAI has been hit by another class action lawsuit, and this one is a copyright lawsuit that claims ChatGPT is trained on full novels or books without permission from the authors.

[00:53:57] Mike Kaput: A couple of authors leading this lawsuit basically claim that ChatGPT was able to very accurately summarize their books when asked, which, given some language in ChatGPT's documentation saying it has trained on some book repositories, led them to believe that OpenAI has essentially hoovered up their books and used them as training data, which it was not allowed to do.

[00:54:20] Mike Kaput: This is kind of the latest in a series of lawsuits alleging that OpenAI has used protected data and information to train its models without obtaining consent to do so.

[00:54:39] Mike Kaput: So how do you see these lawsuits shaping up? I mean, it's impossible to predict the exact outcomes, but on one hand they keep happening. On the other hand, the models have already been trained one way or another. So what kind of is the end game here?

[00:54:52] Paul Roetzer: I have no idea. I mean, years of legal battles ahead.

[00:54:56] Paul Roetzer: I mean, they obviously didn't release the training data set for GPT-4, so we don't know what it's trained on. But it does seem to be eerily good at, like, writing in the style of a certain novel and continuing on from previous novels. So yeah, I mean, like we've said before, it's a great time to be an IP attorney.

[00:55:14] Paul Roetzer: You know, they're going to be staying very busy for a very long time here. There are going to be class action lawsuits, there are going to be all kinds of interesting things to follow. I have no idea how it all plays out. I just assume lots of penalties are going to be paid, and I think it probably impacts the future foundational models more, and what they're allowed to do with those.

[00:55:34] Paul Roetzer: And I think there's just going to be a lot of, you know, what is it, begging for forgiveness instead of asking for permission. Like, they just trained these things however they wanted to, and if they find out they weren't allowed to do it, then they'll fix it for the future ones and pay a few penalties.

[00:55:49] Paul Roetzer: That's my un-lawyerly educated guess of how this all eventually plays out. But we'll see.

[00:55:58] Mike Kaput: So if you want an example of how changes to AI algorithms are affecting us in a day-to-day fashion, you can look no further than some recent news from LinkedIn. LinkedIn just changed their algorithms significantly, which resulted in changes to how certain posts and types of content show up in feeds.

[00:56:19] Mike Kaput: It looks like LinkedIn is trying to make the content in your feed less viral, clickbait content and much more relevant to your needs. They've actually indicated that the algorithms will now reward posts that share knowledge and advice and have actual meaningful content or comments, as they try to crack down on some of the more game-the-algorithm style content that we've seen on LinkedIn.

[00:56:45] Mike Kaput: So Paul, you are very active on LinkedIn, so I know you had some thoughts on this.

[00:56:50] Paul Roetzer: Well, yeah, you and I traded some messages because there was one day I put something up, I think it was the Inflection, like, the $1.3 billion thing. Yep. And I don't know what my average post gets, but it's thousands of impressions.

[00:57:01] Paul Roetzer: Sometimes it'll hit 10, 20, 50,000, and that thing after two hours had, like, under a hundred. And I messaged Mike, I'm like, something changed. This is really weird. There's nothing different about this post. I tweaked a couple of things in it to try and figure out the algorithm change, and then we did some research and found out that they were messing with the algorithm.

[00:57:22] Paul Roetzer: In a sense, it seemed to check out. But yeah, I think it's just a reminder of how captive we are to how these algorithms work. We had the introduction of Threads by, you know, Meta last week, and people are already trying to figure out, like, how does this algorithm work? How is it determining what to show?

[00:57:37] Paul Roetzer: And it's so funny, because I have almost 20,000 followers on Twitter and I get almost no engagement. I have about a hundred followers right now on Threads, and I get more engagement on posts with a hundred followers than I do with 20,000. So it's always this game of trying to figure out how these algorithms are making the predictions they are, clustering things together, and doing recommendations. It's a never-ending game, but we're always captive to whatever they decide to do.

[00:58:03] Mike Kaput: Always a reminder for us marketers, as our friend Joe Pulizzi says: don't rely a hundred percent on someone else's media to distribute your message, because it's not your own audience. Yep. Okay. So interestingly, the Vatican just released its own handbook on the ethics of artificial intelligence. It's called Ethics in the Age of Disruptive Technologies: An Operational Roadmap.

[00:58:27] Mike Kaput: And basically it is a bunch of guidelines and suggestions that are meant to guide the tech industry through ethics in AI, machine learning, encryption, data usage, tracking, et cetera. Now, according to Gizmodo, rather than wait for governments to set rules for the industry, they are hoping to provide guidance for people within tech companies who are already wrestling with AI's most difficult questions.

[00:58:55] Mike Kaput: So, Paul, what did you make of this? I thought it was pretty interesting, but kind of a demonstration that, you know, ethical guidelines don't just have to come from Silicon Valley.

[00:59:05] Paul Roetzer: I don't know. Did you, did you read it?

[00:59:08] Mike Kaput: I've read through coverage of it. So far it seems like pretty standard guidelines that you might see.

[00:59:15] Mike Kaput: I would have to dive super deep into it. You know, the Vatican comments fairly often on social issues, but I just don't know how they're thinking about this.

[00:59:28] Paul Roetzer: Yeah, I guess that was my take. It's like, I guess that's nice? I don't know that the Vatican is who I turn to to guide, you know, AI business.

[00:59:37] Paul Roetzer: Right. But I don't want to prejudge it. It says the big anchor principle is broken down into seven guidelines, such as respect for human dignity and rights, and promote transparency and explainability. I mean, okay, cool. If a lot of people listen and attribute credibility and authority to what they're saying,

[01:00:01] Paul Roetzer: and it helps align this in a responsible way, wonderful. I don't think of the Vatican as my go-to source for this kind of thing. But again, if that massive influence positively affects this stuff in a responsible way, then wonderful. But yeah, it just didn't jive with me at first.

[01:00:21] Paul Roetzer: It's like, okay, I get the White House releasing something, but the Vatican sort of felt like out of left field to me. But yeah, maybe they're looked to for this more than I know of.

[01:00:30] Mike Kaput: They might have seen that viral Midjourney deepfake of Pope Francis and decided to get in the game.

[01:00:36] Mike Kaput: Dressed up, and yeah, they wanted to comment, I suppose. Okay. As we wrap up here, we're going to talk in a second, as our final topic, about the MAICON final agenda and keynote highlights. But we did want to talk quickly about Meta releasing Threads. So for anyone who's been living under a rock, Meta released essentially a competitor to Twitter, which, I'm seeing a report from an hour ago from The Verge, has now already reached a hundred million users.

[01:01:05] Mike Kaput: It's very, very easy to sign up through Instagram, so that probably has something to do with it. But basically, they're trying to, I guess, directly compete with Twitter and some of the perceived weaknesses that platform has. Paul, have you tried out Threads? Are you interested in this?

[01:01:24] Paul Roetzer: I have not. No. I hate that it exists, honestly. And I'm just being straight up: I do not want another social network to have to go to. I just want Twitter to work. I want it to not be as unpleasant of a place to be. You know, I feel like in recent months it has not been as valuable as maybe previously.

[01:01:49] Paul Roetzer: But my use of Twitter is so specific. Like, I have very specific lists. I have a highly curated group of people that I get alerts from, mainly around AI news and sports. That's how I use Twitter. So a lot of the other noise, I don't even really experience it. I just want

[01:02:09] Paul Roetzer: this highly curated list of people. I want to know what they're thinking, what they're sharing, and what they're reading, because it's how I stay up on stuff. Threads has none of that. There are no lists. It's a free-for-all of whoever's even over there. They're saying they don't want it to be a news site.

[01:02:24] Paul Roetzer: It's like, well, okay. So no, personally, I just want Twitter to work and to be done with all the craziness around it. Just be functional. I don't know. I mean, I guess competition's good. I always think it's funny how tech companies just literally copy each other. You know, I think there's already a $90 million lawsuit against Zuckerberg and Meta for supposedly poaching talent and copying, which I have no idea if it's true or not. But it's funny that it literally just looks like Twitter.

[01:02:56] Paul Roetzer: Like, if you're in the app, you actually have to scroll to the top and check, wait, am I in Threads or in Twitter right now? They look so similar. But Threads has almost no functionality. You can't build lists, you can't affect the newsfeed. I'm sure they're going to add all that, but no, I don't want to have to go into another place and rebuild a following and build all these lists again.

[01:03:17] Mike Kaput: Yeah. I saw a lot of journalists jumping over there and being like, hey, come follow me. And it's like, I can't imagine.

[01:03:21] Paul Roetzer: And people are just cross-posting again, right? It's just like, whatever I put on Twitter, I'm going to put on here and see what happens. And nobody knows how to use it yet. It's just messy.

[01:03:31] Paul Roetzer: And no, I don't like it. But, well, there you go. I'm on there, if you want to follow me on Threads. Follow me on Threads.

[01:03:41] Mike Kaput: Let's end on something that we both definitely like, which is the Marketing AI Conference, MAICON, which is happening in a couple weeks here, July 26th to the 28th, in our hometown of Cleveland, Ohio.

[01:03:54] Mike Kaput: It is two weeks away, Paul. I know. I didn't want to put too specific a number on it because I don't want to stress out, or stress you out.

[01:04:06] Paul Roetzer: Oh, yeah. Anybody who listened to our AI Writers Summit stuff earlier this year, and how long it took me to get my final presentation ready, we are entering that zone again.

[01:04:14] Paul Roetzer: I've not started the presentation. I'm doing, like, the state of AI for marketing and business. I'm going to try and figure out, where are we at? What is going on? Pick the major themes. So I have to build that. And then, yeah, check out the agenda. This is MAICON.ai, that's M-A-I-C-O-N dot ai. You can use the code AIPOD100 for a hundred dollars off.

[01:04:33] Paul Roetzer: It's an amazing agenda. There's actually a final keynote I cannot announce right now. It may be announced by the time this podcast comes out, but I can't announce it yet. We have an amazing final keynote we're adding. It's just an incredible lineup. It's really going to be an amazing experience.

[01:04:52] Paul Roetzer: It's trending towards selling out. So if you're thinking about going, I would get your tickets this week. We may not sell out, but it's definitely trending in that direction. Mike and I are each teaching workshops on day one, the 26th. Those are optional. And then the 27th and 28th are just going to be, you know, amazing content.

[01:05:10] Paul Roetzer: There's three tracks, a bunch of incredible general sessions. So yeah, check it out. We'd love to see you in Cleveland. You've still got a couple weeks to get yourself situated, get your flights, get your hotels, and be with us there. So yeah, check that out: MAICON.ai.

[01:05:28] Mike Kaput: Awesome. Well, Paul, I think we've certainly covered plenty of material. We've got to stop taking weeks off, man.

[01:05:33] Paul Roetzer: Yeah, right. It's too much to cover when we do two weeks now.

[01:05:35] Mike Kaput: Yeah, like 30 rapid-fire topics in two weeks. So thank you as always for sharing your insights with our audience. I know everyone appreciates it.

[01:05:43] Paul Roetzer: All right, we'll talk to you all next week. Thanks for being with us.

[01:05:47] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:06:08] Paul Roetzer: Until next time, stay curious and explore AI.
