48 Min Read

[The Marketing AI Show Episode 60]: AI Is Going to Eliminate Way More Jobs Than Anyone Realizes, AI’s Impact on Schools, and the New York Times Might Sue OpenAI


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


It’s been another interesting week in the world of AI…with a few things we need to keep our eyes on. Paul and Mike break down three big topics—and then some—on this week’s episode of The Marketing AI Show.

Listen or watch below, and scroll down for the show notes and transcript.

This episode is brought to you by MAICON, our Marketing AI Conference. Main stage recordings are now available for purchase, and a $50 discount code is mentioned at the start of the show.

Listen Now

Watch the Video

Timestamps

00:03:52 — AI is going to eliminate more jobs than anyone realizes

00:17:20 — AI’s exciting and uncertain impact on schools

00:29:25 — New York Times considers legal action against OpenAI

00:35:54 — Andreessen Horowitz AI Town simulator

00:38:55 — Anthropic gets $100M to build custom LLM for the telecom industry

00:40:23 — Hollywood studios offer writers a new deal

00:43:23 — You can search and learn while using generative AI

00:48:48 — The Godfather of AI has a hopeful plan for keeping AI future-friendly

00:52:54 — Why we must pass the Create AI Act

00:54:41 — Ethan Mollick on implementing generative AI

00:57:01 — Adobe Express now has AI-powered features to take on Canva

00:59:09 — Jasper releases AI usage statement

01:01:48 — AP standards around generative AI

01:03:12 — US federal judge says AI-generated art cannot be copyrighted

01:04:55 — The “Great AI Backlash” came from a tiny startup you’ve never heard of

Summary

AI is going to eliminate way more jobs than anyone realizes

AI is going to eliminate way more jobs than anyone realizes, according to a new in-depth article from Business Insider. The publication says AI could disrupt over 300 million jobs worldwide but also add trillions in value to the economy. The article dives into a number of data points from various sources that support this conclusion, including the estimate that non-generative and generative AI together could add between $17 trillion and $26 trillion to the global economy. While it's very hard for economists and technologists to predict exactly what happens next, the article does a solid job of curating the current thinking from some of the top minds and institutions, including on AI's impact on employment and career skills.

AI’s exciting and uncertain impact on schools

Kids are in full swing going back to school here in the U.S., but there are equal parts excitement and uncertainty as schools everywhere try to grapple with the chaos and opportunity provided by AI tools like ChatGPT. We're seeing more schools release policies or guidance on the use of AI in the classroom, but those policies and guidelines often differ in tone and content. Some schools are cracking down on AI use in the classroom and restricting how students are able to use it. Others appear to be taking a positive view of the technology, attempting to guide students and educators on how to make the most of AI tools in a sensible way. Given how important these topics are, and how much uncertainty surrounds these policies, we wanted to explore them in more depth, especially since AI has so quickly upended education as usual.

New York Times considers legal action against OpenAI as copyright tensions swirl

The New York Times is exploring suing OpenAI over using its articles to train AI models like ChatGPT without permission, according to reporting from NPR, setting up a potential major copyright battle over generative AI. The Times is concerned ChatGPT competes with it by answering questions using the paper's original reporting. If AI tools replace visiting news sites, it threatens the Times' business. The Times is also concerned about how OpenAI’s systems get information by scraping the internet, and potentially copyrighted material, to train models. The Times and OpenAI have been discussing a licensing agreement for the Times’ content, but NPR seems to indicate this has gone so poorly the Times is now considering legal action.

And, unsurprisingly, there’s a lot more covered. Tune in!  


Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: This whole idea of, you know, the internet took decades. The reality is AI has taken 70 years, seven decades, and, and there are milestones that have enabled this moment to all of a sudden happen. So, you know, people are thinking like, oh, the AI thing's just gonna take off way faster. Well, yeah, it's taking off way faster because cloud exists and we can do this compute through Google or Amazon or Microsoft.

[00:00:22] Paul Roetzer: Nvidia chips exist because they started building them for video games 20 years ago and realized that the GPUs could do deep learning, advancements in AI research labs, the transformer paper, like all of these things are the infrastructure that now enables this to happen seemingly overnight.

[00:00:40] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:01:00] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:01:10] Paul Roetzer: Welcome to episode 60 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. Good morning, Mike. Morning Paul. Welcome back. You were traveling this weekend, I believe, out on the West Coast. I was,

[00:01:24] Mike Kaput: I was in Portland,

[00:01:24] Paul Roetzer: Oregon. So not speaking, right?

[00:01:27] Paul Roetzer: This was like,

[00:01:27] Mike Kaput: no, this, this is just family stuff for, for traveling once, for, for personal

[00:01:32] Paul Roetzer: reasons in the last, there you go. I don't remember what that's like, for personal reasons. Oh, man. All right. So it is Monday morning, August 21st. So this episode will come out on August 22nd. And, there's a lot going on.

[00:01:48] Paul Roetzer: I mean, to the point where we were seriously considering a second episode this week, but we're gonna plow forward and, and get it all in, in one episode, and we may be revisiting this idea of a couple times a week as we get into the fall, we'll see. There's just so much going on. Again. Sometimes it feels like a slow week.

[00:02:05] Paul Roetzer: And then I don't, I mean, our, like I've, I think we've said before, we, we keep a sandbox in Zoom where we just post links throughout the week. I think there was like 60. Like, it, it was, it was crazy. There were a lot. Yeah. So I was messaging Mike yesterday, probably as he's flying back from Portland, like, hey, we might, we might need to split this up, but all right, we're gonna, we're gonna definitely move quickly through the rapid fire items to make sure we get to everything.

[00:02:29] Paul Roetzer: But there is, there's a lot going on in AI. All right. This episode is brought to us again by MAICON 2023 on demand, where we have 17 keynotes, panels, and breakout sessions that are available right now on demand, including The State of AI for Marketing and Business, which was my opening keynote, and the amazing fireside chat with Ethan Mollick.

[00:02:50] Paul Roetzer: Beyond the Obvious. I still, like, I have to go back and re-watch that. I did the interview, but I haven't watched it since. Ethan just continues to pour out amazing content, and I just want to go back and kind of relive that moment. Cassie Kozyrkov from Google on Whose Job Does AI Automate, Chris Penn on language models.

[00:03:09] Paul Roetzer: Dan Slagan on the org chart of tomorrow. Just a, a bunch of incredible content. So, if you want to relive that, if you were there, or if you want to check it out because you were not able to join us in Cleveland in July, just go to MAICON.ai, that's MAICON.ai, and right toward the top of the page is a Buy MAICON 2023 on demand button.

[00:03:32] Paul Roetzer: You can use AIPOD50 for $50 off. So again, MAICON.ai for MAICON 2023 on demand. With that, Mike, let's dive into our three main topics and our, I think there were like 16 rapid fires, so we got a lot to cover today. We'll move

[00:03:51] Mike Kaput: fast. So first up, AI is going to eliminate way more jobs than anyone realizes. That's at least according to a new in-depth article from Business Insider.

[00:04:03] Mike Kaput: So in this kind of in-depth article, the publication rounds up a bunch of data points and facts to support this theory that AI could disrupt over 300 million jobs worldwide, but, at the same time, add trillions in value to the economy. Now, like I mentioned, there's a ton of really interesting stats and data points in this article we're going to dive into from various authoritative sources.

[00:04:31] Mike Kaput: One of them that jumped out to me is the fact they say that non-generative and generative AI together are estimated to add between $17 trillion and $26 trillion in value to the global economy. Now, as we've said before, anytime we analyze any type of report or stats or predictions, it's really, really hard for economists and technologists to predict exactly what's going to happen next with AI.

[00:05:00] Mike Kaput: However, this article is really notable in the sense that it does a solid job of kind of curating some of the current thinking from some of the top minds and institutions. So if you have not been following the more recent analysis of AI's potential impact on employment and career skills, this does a really good job of getting us up to speed on where everyone's heads are at in this debate.

[00:05:25] Mike Kaput: So I want to kick things off and, just like, let's talk about the elephant in the room right away. Like, how many jobs are we thinking we are going to lose thanks to AI, either as this article outlines or kind of in your opinion?

[00:05:40] Paul Roetzer: So this is obviously a topic we've been talking quite a bit about in recent months on this podcast and previously.

[00:05:49] Paul Roetzer: We mainly looked at the United States. So, you know, I've, I've shared the stats before. There's 132 million full-time jobs in the US. A hundred million are knowledge workers, people who think and create for a living. About 10 or 12 million of those are for private equity-owned companies that generally focus on efficiency and cost reduction as a driver of growth and profitability.

[00:06:13] Paul Roetzer: So, you know, I think we've said millions is kind of like, generally what I've said is like, it, it appears as though there is the possibility of millions of jobs being lost. What was interesting to me is this does broaden this to the world realm, which I had not dove into the details of. So, it mentioned the World Economic Forum estimated 83 million jobs worldwide could be lost over the next five years, with 69 million jobs created, leaving 14 million that cease to exist.

[00:06:44] Paul Roetzer: And the World Economic Forum also says 44% of workers' core skills are expected to change in the next five years. I think that number's low, the, the 44%. Okay, well, I guess again, it depends on how they're classifying this, but when we look at knowledge work, so again the United States, a hundred million, I feel like it's like 90%, like

[00:07:06] Paul Roetzer: every, everything everyone does in knowledge work is going to be changed in the next five years. And I just, I find it so hard to project out beyond like, you know, one to two years. So five years is just, I mean, we might have GPT-7 five years from now, and like, what does that even mean? So, I think a key takeaway, you know, upfront for this is the number of jobs.

[00:07:29] Paul Roetzer: No one's gonna get this right. They may be off by 30% one way or the other. But the point is it seems almost indisputable that there will be massive disruption to jobs. Now, we'll get into, like, you know, what more could be created or not. But my whole point is we need to be talking way more about this topic, because there is absolutely the possibility, if not the probability, that we lose more than we gain in the very near future.

[00:08:01] Paul Roetzer: And so, like, I put this on LinkedIn on Sunday morning, and as of the recording of this podcast, there's 36,000 impressions of this post and 48 re-shares and almost 60 comments. So this, like, obviously, you know, captured some people's interest. So you get all these, like, arguments about, well, you know, people are wrong all the time about these prognoses.

[00:08:23] Paul Roetzer: That is a hundred percent correct. They are almost always wrong about these things. But the thing that often seems to be overlooked in people's arguments that this isn't going to cause a loss is when you drill into these very specific instances, like we've done on this podcast before: take writers, take graphic designers, take software engineers, whatever.

[00:08:45] Paul Roetzer: My whole argument and concern is that we might just need fewer humans doing the existing jobs. So it's not that the AI can do the job of an engineer or a writer or a designer, like, full, full, full autonomy. It's not replacing the human. But if you can gain, you know, they cited, I think, like almost 56% or so, I forget what the number was, in terms of efficiency.

[00:09:12] Paul Roetzer: Here it was, a study out of MIT found software developers completed tasks 56% faster with generative code completion software. And another study found that professional document writing was 40% faster using generative AI. Those are massive gains. So if you work in an industry or at a company where you can just increase the output, where there's demand for the product you create or the service you provide, that you can just do more, then you don't have to have any job loss.

[00:09:45] Paul Roetzer: But if your demand is relatively fixed or increasing incrementally, and your ability to do that work quickly is almost on an exponential, then you need fewer, fewer people to produce the same level of output. And so that's my concern, is that in the next one to two years, the job loss is gonna come from these companies that just don't need as many people doing the same job and work.

[00:10:08] Paul Roetzer: And, and then that leads us into the next debate of, you know, do do more jobs get created, basically.

[00:10:14] Mike Kaput: Yeah. So talk to me a bit more about that, because they do say, you know, 69 million jobs will be created. Now again, it remains to be seen exactly what those will be or who will be equipped for them, but it sounds like there are opportunities, that AI won't just make our existing work more productive, but will create new opportunities.

[00:10:34] Mike Kaput: I mean, is that kind of how you also see the silver lining

[00:10:37] Paul Roetzer: here? I do believe it will, I think over the next decade, you know, when we do start to think out five to 10 years, there's gonna be all kinds of career paths that we just can't comprehend right now. That don't exist. I don't know what those are gonna be.

[00:10:52] Paul Roetzer: I've yet to really see a really solid prognosis of what those career paths could be. People largely just rely on the fact that when these kinds of disruptions happen, when general purpose technologies emerge into the world, it generally creates jobs. I think they had a stat in there that something like 85% of the current jobs didn't exist prior to, like, the Industrial Revolution or something like that.

[00:11:16] Paul Roetzer: Where it's like, okay, yes, over decades, we, we did recreate entire career paths and industries, so that will happen. I just don't see it happening on the same time horizon as the job disruption and the loss. So I think it'll be a longer tail of creation of new jobs. We talked about in one of the previous episodes this idea of an explosion of entrepreneurship.

[00:11:42] Paul Roetzer: I do believe in that, and I think that is gonna create a lot of new jobs and career paths. But again, they're, they're just gonna need fewer people to build businesses. So, like we talked about with the Institute, you know, we have five employees. We just hired two more, so we're at seven, functioning as a team of probably 15 to 20, right?

[00:11:59] Paul Roetzer: Because we just, we function differently. We're using AI tools in all these different ways, and I think that's the future of organizations, is you're just gonna be able to build more nimble, like less capital-intensive businesses from the ground up, faster, cheaper. And so I think we're gonna see a lot of those, but I don't know that it nets out to where

[00:12:19] Paul Roetzer: those create more jobs. So I, I think that's my, my concern, is not that the AI is not going to open up all these possibilities, it's just they won't open up as quickly as the jobs are going to maybe go away. Now, I, the other caveat we've talked about previously is there are industries that can't hire enough people.

[00:12:38] Paul Roetzer: And, and so that might offset this a little bit as well, like accounting and insurance, we've used as examples. Like, they just, they can't get enough people for the roles that they have open. So in that case, AI is not even replacing anybody. It's almost filling a, a capacity need in industries that lack people wanting to go into those industries.

[00:12:57] Mike Kaput: Yeah. At least, it should be noted, in the US we're still currently at record low unemployment.

[00:13:02] Paul Roetzer: Correct. I mean, we're already having, and you can't hire enough people. Right,

[00:13:05] Mike Kaput: right. So a lot of that could absorb some of this and.

[00:13:09] Paul Roetzer: Yeah, I was just gonna, like, I think the, what people have raised is, like, we are not economists.

[00:13:13] Paul Roetzer: I took a bunch of economy classes, or economics classes, in college, but that was a long time ago. And I studied the economy, but I'm not an economist. We are not working in these AI research labs. We're observing, reading research papers, following people. Like, we are not like the expert in any of these.

[00:13:31] Paul Roetzer: But yet what I'm finding is when you go synthesize all the information from all the people who are the leading economists in the world, and then the leading technologists, they don't seem to have a clue. And, and they're like, they come up with all these numbers and that's why I thought this article was so good.

[00:13:48] Paul Roetzer: It just, like, summarized, like curated, all the top things. And that's why, in my LinkedIn post, I was just kind of calling out the things that jumped out to me. But what becomes very clear within this is it's completely uncertain, that the top minds in the world in these specific areas don't have a clear

[00:14:06] Paul Roetzer: perspective on what happens next. And that's why my whole takeaway was we just have to talk more about this. We have to have dialogue about it. We have to have people in different industries and different professions thinking critically about what this means to them and their industry and their company.

[00:14:22] Paul Roetzer: Because we're not gonna solve this as two guys, you know, talking on a podcast. But my whole point is like, we need to advance the conversation because it is critical that this is happening, even if it doesn't result in job loss, which is what I hope happens. We have to be aware of it and trying to solve for it so we don't get caught when it all of a sudden is here and we didn't do what we needed to do.

[00:14:45] Mike Kaput: And it sounds like not only could this happen faster than a lot of people think. Like, they actually mentioned, you know, when the internet kind of came along and hit mass adoption, that required software, network protocols, all this infrastructure and devices. So it took a long time for every home and office to have all these personal computers and internet access.

[00:15:07] Mike Kaput: But the article argues that today, AI's adoption could happen much, much faster, since the tech infrastructure to run it is really already in place. So it sounds like not only could this happen a lot faster than we anticipate, but also, like, are we ready for this as a society, as businesses, as

[00:15:26] Paul Roetzer: policy makers?

[00:15:28] Paul Roetzer: I mean, this whole idea of, you know, the internet took decades. The reality is AI has taken 70 years, seven decades, and, and there are milestones that have enabled this moment to all of a sudden happen. So, you know, people are thinking like, oh, the AI thing's just gonna take off way faster. Well, yeah, it's taking off way faster because cloud exists and we can do this compute through Google or Amazon or Microsoft.

[00:15:51] Paul Roetzer: Nvidia chips exist because they started building them for video games 20 years ago and realized that the GPUs could do deep learning, advancements in AI research labs, the transformer paper, like all of these things are the infrastructure that now enables this to happen seemingly overnight. So again, it's like, it is, it kind of runs parallel to the internet.

[00:16:12] Paul Roetzer: It's like we had to have all this infrastructure and people are thinking it was just there. It's not, it took years to build, but it is there now. And, and so yes, this can happen quickly, because we have the ability to do this stuff. And you know, I think the thing we ended with, in my summary and, and with the article, was this idea that, no, we're not ready, because we're not putting enough into training and re-skilling people.

[00:16:35] Paul Roetzer: They said there are 43 federal employment training programs, this is in the United States, whose total budget is $20 billion, or less than 0.1% of US GDP. And then they said this is an alarmingly trivial amount for an economy of $25 trillion GDP and over 150 million workers. And then they talked about the idea of potentially incentivizing retraining through tax credits, like New York and Georgia, that could spur employers to take action, or if they can use, you know, tax money or grant money to do this upskilling, retraining.

[00:17:06] Paul Roetzer: So I do think there are answers to this, but we're not gonna find them if we don't talk about it. It, it's kind of my whole point of making this a main topic today and putting it on LinkedIn yesterday. So we just have to have more conversations.

[00:17:20] Mike Kaput: So another really important conversation to have is AI's exciting and very uncertain impact on schools.

[00:17:28] Mike Kaput: So in the US at least, kids are in full swing right now going back to school here as our school year starts. There are equal parts excitement and uncertainty as schools everywhere are trying to grapple with all this chaos and opportunity provided by AI tools like ChatGPT. So we're actually seeing much more, kind of in our circles, more schools releasing policies or guidance on the use of AI in the classroom.

[00:17:56] Mike Kaput: But these policies and guidelines are very, very different depending on who you're looking at. So some schools are cracking down on AI use in classrooms. They're restricting how students are able to use it. You can get punished for using it in certain contexts. Other schools appear to be taking some positive views of the technology, and they're trying to help guide students and educators on how to make the most of AI tools in a sensible way.

[00:18:21] Mike Kaput: So we just kind of wanted to explore a little more in depth on today's episode: what is going on with some of these policies, and what schools might want to be thinking about, just given how quickly AI seems to have upended education as usual. So Paul, you also posted recently about this on LinkedIn, and you asked your network if schools, their kids' schools, have AI policies in place.

[00:18:46] Mike Kaput: Like what did you learn from that?

[00:18:49] Paul Roetzer: So first off, I just want to say, let's be understanding of the position administrators and teachers are in right now. This is a really complicated thing. We, as business professionals, as marketers, as executives, we're struggling to understand this stuff, and we're more living it daily and, and even experimenting with it daily.

[00:19:09] Paul Roetzer: So we're asking administrators and, and teachers and professors, who have other things to deal with, to all of a sudden also understand AI. So, I just want to be clear up front that while what we're gonna talk about here is challenging the system a bit and, and encouraging them to take faster and more thorough action in this area,

[00:19:32] Paul Roetzer: it, it's unrealistic probably to expect them to solve this on their own. So what happened is, about three weeks ago, I think a buddy of mine shared a high school policy, and that was the first one I'd seen. And then I think, you know, Tracy in our office, her, her children, young kids, they had an updated policy that she had to sign off on.

[00:19:56] Paul Roetzer: And so that kind of triggered it for me. And then I've had conversations with universities, have had conversations with leaders of high schools, and I've also talked with leaders of middle schools. So I've had all this context, and then I saw those and I realized, like, oh my gosh, school is starting in parts of the United States in like a week, when I was first thinking about this.

[00:20:16] Paul Roetzer: And I realized, like, there was no uniformity of how this was gonna be handled in school systems. So I did put on LinkedIn and Facebook, I might've put it on Twitter too, and I was basically just asking, like, yeah, is anyone seeing handbooks, like updates to the handbooks, or specific AI policies?

[00:20:34] Paul Roetzer: The general response has been no, like, nothing. And there was, even on Facebook, I have friends who are teachers in middle schools and they said largely, no, we don't have anything yet. So imagine these teachers, who, you know, might be anywhere in middle school or even into high school, or even professors, who are going into school right now with no guidance on whether or not AI is cheating, or how they're supposed to teach it in the classroom.

[00:21:02] Paul Roetzer: You know, it's, it's, it's a very unrealistic position to put them in. It's an unfair position to put them in. But again, having talked to the leaders of a lot of these schools, I realize that they themselves don't really understand what's going on or what the implications are to the future of education and the future of work.

[00:21:19] Paul Roetzer: So, couple of notes. So the first is most kind of have nothing. There was an example I saw from a sixth-to-eighth-grade handbook update. And this just basically said, cheating is defined as, and there were, like, three things. One of them was using AI to complete assignments. That, that is all it said there, like, very broad.

[00:21:40] Paul Roetzer: So for me that's a, a useless policy. Like, it, it, there's obviously lots of categories and variables to the use of AI. But then it does go on to say plagiarism, as an example, using AI to create work and claiming it as your own. It's like, okay, well, I mean, claiming it as your own, I, I guess I could see that if it wrote it, but again, it needs more detail of what exactly that means.

[00:22:03] Paul Roetzer: So here's an example where they did update a handbook, but it's so general that it's almost of no value. Like, it, it creates more confusion. There was another person that posted, I think this might've been a high school one, it just said, any use of generative AI will result in a disciplinary review.

[00:22:21] Paul Roetzer: That was the entire update. A high school, it said, students may, at the administration and faculty's discretion, learn to use AI text generators and other AI-based assistive resources, AI tools generally, to enhance 21st-century learning. However, AI use can constitute academic dishonesty and plagiarism in many cases.

[00:22:43] Paul Roetzer: It's like, okay, I like that they're at least saying, hey, we understand this is a key to the future. But then it goes into saying, you must cite AI tools when used, even if only for ideas. Not terrible, but it doesn't tell you what happens if you do it, since you're not allowed to, because it says students may not use AI unless permitted.

[00:23:04] Paul Roetzer: So they're basically like, you're not allowed to do it, but if you do do it and you're, like, given permission by the teacher, then you have to cite them. So it's like, okay, that's, that's not, like, awful. It does go on to say, use it to deepen understanding and support learning, which is good on the surface. It says teachers will seek to understand how AI tools work and optimize value for student learning.

[00:23:25] Paul Roetzer: So my perception right now, based on this kind of guidance, again, this is all under an artificial intelligence policy, a separate policy that was sent to parents, is that we may allow students to use it, but our teachers don't understand it. We're not explaining to you how they're going to learn this.

[00:23:41] Paul Roetzer: We're just saying they commit to learning it in some capacity. Meaning this is probably gonna be very isolated. So you may have a teacher or a couple at this school that choose to figure this out for the good of their students and to prepare them for their future. Then you're gonna have a whole bunch that want nothing to do with this, that outlaw it completely in the classroom and don't want to hear anything about it being used, no matter if it's good for the students or not.

[00:24:07] Paul Roetzer: So it just seems like this is going to be poorly applied and unevenly distributed, as we like to say. The benefits of AI will largely depend on who your teacher is and whether or not that teacher chooses to understand AI. So to me, while heading in the right direction, this seems very problematic. And then they do say, employ AI detection tools where appropriate, that the teachers will use 'em, which we know don't work.

[00:24:37] Paul Roetzer: So now the guidance teachers are being given is use Turnitin, or whatever other tool, to identify whether something was written by AI. And we know that kids are gonna be falsely failed and accused of things that they may or may not have done. But it's gonna create all kinds of friction. So if me as a parent who understands this stuff gets, you know, a letter home that my, my child used AI and they therefore failed this thing, we're gonna have problems.

[00:25:05] Paul Roetzer: Like, it's gonna create all kinds of friction between parents and administrators and faculty. That's not necessary. So the thing I always say is like, integrated and honest use is the key. Like we should be encouraging them to use it and teaching them the capabilities of AI and, and using it to advance what the students are able to do and how they learn.

[00:25:26] Paul Roetzer: But if the administrators and teachers don't understand it, how are they supposed to teach it? So I think it's sort of irresponsible and counterproductive for schools to ignore the potential and just label AI as cheating. Mm. It, it kind of shows a complete lack of vision for what we need to be thinking about for these kids.

[00:25:46] Paul Roetzer: So, I don't know, like, I, I don't know if sometimes inaction is even better than some of the action. But look, what I've told some leaders is, like, I don't see how we get through even the first half of this school year, like, till the turn of the calendar year. I don't know how a school doesn't have some formal policy or point of view on this stuff.

[00:26:09] Paul Roetzer: Like, it just seems like we are running into an absolute train wreck of cheating scandals that were, were not cheating scandals, of accusing, accusations of plagiarism when it wasn't plagiarism, and, and it's just gonna create so much confusion and friction that is unnecessary right now.

[00:26:29] Mike Kaput: So given how fast all this is moving, what can schools be doing to try to get more prepared?

[00:26:37] Mike Kaput: It sounds like at least one step is realizing that AI detection tools are not a hundred percent accurate, so probably don't build policies around those. Is there anything, anything else they should be doing or thinking about?

[00:26:50] Paul Roetzer: I mean, I think they need to look to the associations and the organizations that are, are leading the way in this, in education and, and we'll, we need to do more ourselves.

[00:27:01] Paul Roetzer: And, and maybe in future episodes we'll curate some resources for people. But I have seen solid, like, here's, you know, AI curriculum in the classroom that you can be using. Here's, you know, template policies that are for the good of the student. Like, I think we need more of that, where there are these places that people can look to.

[00:27:22] Paul Roetzer: We need formal education and training for teachers and professors so that they arrive at kind of a level benchmark of understanding of this technology. And then we need more sharing of kind of best practices. So again, some of this may be occurring at, and I kind of assumed it was at, like, the association level of teachers and administrators and things, but given the lack of policies in schools, now I'm questioning whether or not it's as wide scale as I assumed it was.

[00:27:51] Paul Roetzer: And I'm kind of leaning in the direction of there's no uniform effort right now being made in education to solve for this, given how many schools I'm seeing that have nothing. So I just think we just, we need more urgency to get some things in place to help these students versus, you know, just taking the initial action of accusing 'em of cheating.

[00:28:12] Paul Roetzer: And again, they're probably using these things. I know they are, at all levels, without permission. Maybe they are writing papers, I'm sure they are, and helping with their math homework and all these things, but they have no guidance of what they're allowed and not allowed to do. So of course they are. One other quick analogy.

[00:28:32] Paul Roetzer: Somebody, this came up in a university, I was having a conversation recently with some university leaders, and someone equated it to telling someone in, like, 2000, you're not allowed to use Google for your homework. Like, you have to go to the library and take out the encyclopedia. It's cheating to use a search engine.

[00:28:51] Paul Roetzer: And I believe back at that time, that was how some professors and administrators saw the internet: it was cheating if you used that. And it feels like that's where we are again. It's like you have this incredible knowledge base and resource to help these students and you're just gonna straight up tell 'em it's cheating because it's not what you're used to.

[00:29:08] Paul Roetzer: It's different. And that has proven time and time again throughout history to be the wrong approach. And I just, I, I would encourage people to take action in the direction of integrating it in honest ways and using it as teaching tools.

[00:29:25] Mike Kaput: So in our kind of third main topic here, the New York Times is actually

[00:29:30] Mike Kaput: exploring suing OpenAI over the use of its articles to train AI models like ChatGPT. This is according to some reporting from NPR. Basically, the Times is considering taking this action because ChatGPT competes with it by answering questions using the paper's original reporting.

[00:29:54] Mike Kaput: That's what the Times is concerned about. They are also concerned that if AI tools replace visiting the actual news sites, this could threaten the New York Times' business. They're also generally concerned, like many publications are, about how OpenAI's systems get information by scraping the internet, and potentially scraping copyrighted material, in order to train models.

[00:30:19] Mike Kaput: Now, according to NPR, the Times and OpenAI have actually been discussing a licensing agreement to move forward, where OpenAI can use some of the Times' content to train its models and provide answers through ChatGPT. But NPR seems to indicate this has gone so badly that the Times is now considering legal action.

[00:30:43] Mike Kaput: One other really interesting note here: NPR says that if OpenAI is found to have violated any copyrights in this process, federal law allows for the infringing articles to be destroyed at the end of the case. In other words, if a federal judge finds that OpenAI illegally copied the Times' articles to train its AI model, the court could actually order the company to destroy ChatGPT's dataset, forcing the company to recreate it using only work that it is authorized to use.

[00:31:13] Mike Kaput: So first up, Paul, can you just put this into context for us? Why is this potential lawsuit, and we noted it is potential, according to the people disclosing details to NPR it has not happened yet, why is this such a big deal?

[00:31:27] Paul Roetzer: I assume this is being leaked by someone on the Times' side just to let OpenAI know they're very serious about these negotiations.

[00:31:35] Paul Roetzer: It, it just seems to build on a topic. It was either one or two episodes ago, we talked about the New York Times, you know, pulling out of the AI coalition, and that Google had a licensing deal in place for the New York Times, for a hundred million was what it was rumored to be. And that this was the future of these language models, that these companies know that there is the chance that there are some lawsuits that could make it through and that they could be found to have infringed on copyrights.

[00:32:00] Paul Roetzer: And so it seems like the logical play is build the next version of these models on licensed materials, properly licensed materials. And so this just seems to verify that that is indeed what's happening. These language model companies are trying to find valuable sources that they can do licensing deals with, and the media companies are trying to,

[00:32:25] Paul Roetzer: you know, play hardball, but they're also, they're concerned about the future viability of their business model. So if, like right now, GPT-4 isn't connected to the internet unless you're using Bing. But if I go in there and I can in real time ask questions about something that happened yesterday, and it's curating from sources, including New York Times articles, and it writes me an amazing summary that is factual, that's the next step for these models, you know, in two to three years, where the hallucinations are gone and they find ways to make these things as factual as, you know, the best elements of the search engines, then what do I need to go to the Times for?

[00:33:04] Paul Roetzer: That's a really valid issue. And so, you know, I think these media companies are aware that, that their future wellbeing is threatened if these things get really, really good at doing real-time synopses of what's going on. And so it just, yeah, it seems to validate that we're looking at future models that are much more responsible about licensing of the data that they train on.

[00:33:30] Paul Roetzer: There's probably a massive race right now to do deals with all the top media companies and publishers, and the threat of, you know, destroying the model, while probably not realistic, like, I just don't see that as a, an outcome that would result from this, it's certainly an interesting leverage point, because

[00:33:51] Paul Roetzer: OpenAI can't go into the model and just extract New York Times data. It's, it's in there, it's embedded. They don't, they can't get it out. So we talked about this before, like, you can't unlearn things. They, they don't know how to make the models unlearn what they've learned. So this is really more about, I think, the future versions of these foundation models and the data that they're trained on, in a more legal and ethical way, probably.

[00:34:17] Mike Kaput: We're saying it's

[00:34:18] Mike Kaput: not going to be likely that OpenAI is going to be forced to destroy any type of data

[00:34:25] Paul Roetzer: set here? I think that seems very unlikely, but this is gonna take years and probably end up in the Supreme, Supreme Court. Not this case in particular, because again, it's just the threat of a lawsuit.

[00:34:36] Paul Roetzer: But there are other related lawsuits that are moving forward. And I, I do think that we'll have some sort of landmark case that defines what these models are allowed to train on, what they're not allowed to train on. And I would guess that has to happen in the next, you know, probably two years.

[00:34:53] Paul Roetzer: Like, this is a really critical topic and these models are gonna keep moving forward. And it seems like the language model companies' current play is just be proactive and license as much of the data as possible, and then pay the fines for whatever they were found to have done illegally in the first versions of this technology.

[00:35:10] Mike Kaput: Gotcha. So

[00:35:10] Mike Kaput: the way forward seems to be licensing deals combined with kind of ask-forgiveness for what's already been

[00:35:17] Paul Roetzer: done, pay some fines and move on, figure out how to, you know, let the artists benefit from it. That's the other, it's like, I don't know, class action lawsuit kind of thing. But then how do you figure out whose work was used?

[00:35:30] Paul Roetzer: And I don't know. This is, I, I've said many times, half jokingly but not really: IP attorneys, like, it's the safest profession in the world for the next 10 years, because there's gonna be so many messy things that have to get worked out around all of

[00:35:45] Mike Kaput: this stuff. All right, let's dive into a bunch of rapid fire topics.

[00:35:50] Mike Kaput: We're going to move pretty quick. We have a ton of them on the list today. First up: the well-known venture capital firm Andreessen Horowitz has unveiled an exciting new open source project. They're calling it AI Town, and it's basically a simulated virtual world that lets developers create their own AI-powered environments and characters.

[00:36:12] Mike Kaput: So this is inspired by recent Stanford research on generative AI agents. So basically, they're creating these little sandboxes where AI-generated characters can evolve organically and essentially have unscripted conversations, take unscripted actions, and essentially retain memories of those conversations and actions, which develops really distinct narratives over time that kind of totally organically spin out of the interactions between

[00:36:40] Mike Kaput: these agents. So Andreessen Horowitz actually created this in tandem with one of their portfolio companies and is open sourcing it so that developers can go build their own mini AI worlds on top of it, essentially. So are these AI-simulated worlds kind of the next big thing we're going

[00:37:00] Mike Kaput: to see here?

[00:37:01] Paul Roetzer: It definitely seems like one of 'em, and we talked about this with Fable, I think that was the example I used in my MAICON keynote, where they trained it on South Park episodes and then these characters sort of develop and evolve. Yeah, I mean, you could see massive uses for this. You know, specifically, like, the video game industry comes to mind, where right now, you know, these characters are largely rules-based in terms of how they, you know, do their conversations.

[00:37:26] Paul Roetzer: But, you know, you can think about, like, Pokemon for example, or, you know, any of these action games where the characters continue

[00:37:46] Paul Roetzer: And I mean, it, it's the Truman show, in essence with ai. Ifind these quite disturbing because I know that they're, efforts towards a g I like, you know, creating almost these world experiences for these characters or these AI agents. So they're learning from things around them. They're developing very human-like traits.

[00:38:11] Paul Roetzer: It's a wild world. It's probably a space that I need to dive into a little bit more and, and try and understand what's going on and the intentions behind it. But I think you're gonna see an explosion of this sort of thing in a lot of different applications for it.

[00:38:27] Mike Kaput: So next up, a company we've talked about quite a bit.

[00:38:30] Mike Kaput: Anthropic is raising an additional hundred million dollars from a South Korean telecom company called SK Telecom. And they're raising this money specifically to build a custom large language model for the telecom industry. So this basically will be customized towards that particular business, that particular industry, and be much more tailored to telecoms than the general language models on the market today.

[00:38:55] Mike Kaput: So Anthropic has raised hundreds and hundreds of millions of dollars, actually 1.5 billion in total. This is the latest effort, but it is, I think, the first that is really customized towards building a specific LLM for an industry. Now, do we expect to see more companies kind of directly funding industry-specific models here?

[00:39:18] Paul Roetzer: It's definitely one of the assumptions about the future that we've been making, that you would, these models would go vertical with proprietary training sets, and I think that's when they can start to be more reliably used in customer service and sales and marketing and operations and things like that.

[00:39:35] Paul Roetzer: So, yeah, I would imagine that these different language model companies, that, you know, to date have largely been competing on these kind of horizontal capabilities, where they're all building things that can generally create any kind of language and, in some cases, like Inflection's Pi, have conversations about generally anything.

[00:39:53] Paul Roetzer: Now imagine them being able to be trained on specific data sets and for specific capabilities. I absolutely think this is where it's gonna go. I think in the latter half of 2023, and certainly into 2024, we're probably gonna see an explosion of these things, both from the existing foundation model companies and then probably a whole bunch of startups that are building, like, vertical-specific, either on open source models or, I guess, starting from scratch.

[00:40:23] Mike Kaput: So, as we know, there's a pretty extensive writer and actor strike going on right now in Hollywood, and AI is a huge part of this story. So, Hollywood studios offered screenwriters a new deal this past Friday that includes concessions on artificial intelligence. So major studios have agreed that writers, not AI, will get credit for screenplays.

[00:40:47] Mike Kaput: Though writers are still working on securing concessions that guarantee AI won't also impact their compensation. Now, there's a bunch of other details unrelated to AI in the new deal that we won't get into here, but how likely do you think it is that, regardless of what deal is reached, studios moving forward avoid their use of AI, or use AI totally responsibly to augment the work of actors and writers?

[00:41:14] Paul Roetzer: I just don't see how it doesn't significantly disrupt this industry. Like, I don't know what's gonna be in the final agreement, but it kind of goes back to the, you know, thing we talked about with the job disruption. It's, you know, if these things can take any genre or any, you know, existing movies, and you can feed it as the context, a prompt, and say, write me a script based on this.

[00:41:38] Paul Roetzer: And you have the whole history of all the characters and everything that's ever happened. Or you, you're starting from scratch and you're envisioning this world as a producer or director and you're just saying like, what, what could happen? And you can instantly get draft scripts or concepts and build them like scene by scene.

[00:41:55] Paul Roetzer: And it, it just changes the way you can do this. And so the question just becomes, do we need as many screenwriters? And, and, and I don't, I mean, my instinct is probably not, because this goes to, like, well, are we just gonna make more movies, more shows? Maybe, if, if Netflix and Hulu and Disney and, and Amazon Prime, like, all, if, if demand is infinite for content, then maybe we don't get rid of screenwriters.

[00:42:22] Paul Roetzer: We just make way more content. That's a, that's a possibility. But I think those are kind of the two scenarios. You know, demand remains relatively stable for movies and shows and, you know, video shorts and things like that, and so we, you know, just use the same screenwriters to do it, or we need fewer screenwriters. Or there's almost infinite demand for content.

[00:42:47] Paul Roetzer: And so, you know, we don't have to get rid of any of the screenwriters. We're just gonna double the amount of content that we create as a studio. And then you get into the other aspect about the virtual beings, and, you know, the extras being cloned, and then just being able to, like, simulate extras in scenes, and you don't need as many actors.

[00:43:05] Paul Roetzer: I don't know how you avoid that. That, that's, that's a tricky one. And I would think that's a pretty solid sticking point in the negotiations, but I haven't seen exactly how they're planning to address that. But I don't know. I mean, it just sure seems like this industry is gonna be disrupted quite a bit.

[00:43:23] Mike Kaput: So Google actually recently announced some upgrades to its AI-powered generative search experience, which is SGE. This is its AI-powered search results that we're seeing roll out across the main search engine. They now have the ability to hover over definitions in AI responses about different topics, so you can hover over underlined terms and words to see definitions and related images. If you are asking coding questions,

[00:43:50] Mike Kaput: AI overviews now have color-coded syntax highlighting in certain code snippets, and they've also launched an experiment called SGE while browsing in mobile and desktop search. This generates key points and questions on long web articles, letting users jump to relevant sections faster. So this is all available in their Search Labs, where you can opt in and give feedback to get access to some of these features.

[00:44:15] Mike Kaput: So Paul, what were your thoughts seeing some of these updates in SGE?

[00:44:19] Paul Roetzer: It's interesting to see it just keep evolving and I haven't seen any solid reports yet about people's experience and how people are responding to the search experience, you know, changes. I've personally been testing 'em on my, on my personal Gmail.

[00:44:36] Paul Roetzer: Again, we don't, we don't have access to it in our organization account. But you know, I think everyone's waiting to see how this is gonna impact search and organic traffic. It's a major question mark for media and for publishers and for marketers, brands that rely on organic traffic. So I think everyone should probably just continue to test and explore it themselves and keep an eye on what's going on in this space, because it's gonna affect all of us, for sure.

[00:45:05] Mike Kaput: Yeah, definitely. Go test it out if you can, because it's worth seeing how this is going to impact everyone. So Google is actually also reportedly testing some AI tools that could become essentially personal life coaches, according to the New York Times. So they're actually evaluating over 20 different life and work assistants to help you do a range of things, like get life advice, get help planning things, do tutoring, and more.

[00:45:34] Mike Kaput: So this is actually really interesting as a shift for Google, because earlier, in December of last year, safety experts actually warned Google that chatbots like these, ones that are highly, you know, personal and emotional and help you with complicated topics, could cause people to become too emotionally attached to them.

[00:45:57] Mike Kaput: So, on one hand, a really cool idea, an application of AI; on the other, something that might have some pitfalls. Are these a good idea, in your opinion? I mean, is Google getting more lax about safety considerations here?

[00:46:10] Paul Roetzer: Good or bad, I think it's inevitable. This is what's gonna happen. And so, you know, if you, if you haven't tested Inflection's Pi yet, if you want to feel what this is gonna be like, go, go there and start telling it about yourself.

[00:46:22] Paul Roetzer: So it's just pi.ai. Inflection we've talked about before; they've raised 1.3 billion. That's what it does, basically. I mean, it just asks you questions, and you can talk to it about your mental health. You can talk to it about, you know, wanting to create a healthier lifestyle. You can talk to it about anything.

[00:46:41] Paul Roetzer: And it's meant to remember those conversations and, and learn about you as an individual. So definitely, again, when we look to the future and we make some assumptions about what it looks like, everyone having a personal assistant does seem to be a part of that future. And maybe, as we've said before, a symphony of personal assistants that specialize in certain things, like trip planning and health advice and marriage counseling and whatever it is.

[00:47:09] Paul Roetzer: Like, there, there's gonna be assistants trained to do these very specific things. You can get GPT-4 to do these kinds of things if you know how to prompt it. And you can certainly get Inflection's Pi to move in this direction if, again, you, you know, experiment with it. But I do think that these tools that are custom built for very specific personal assistant use cases are absolutely a part of what's gonna happen, in both your personal life and in business.

[00:47:35] Paul Roetzer: So you'll have an on-call advisor in business, and, you know, something that functions almost like an attorney that knows all, all business law, and you can ask these questions in real time. And that's why I say, like, you know, these industries have to realize what's gonna happen. You know, if you're in professional services of any kind, in accounting and, and law, marketing, people are going to have real-time access to AI advisors that do everything like that, have all of the knowledge we have, and what does that mean to the future of our industries?

[00:48:07] Paul Roetzer: And the, you know, when do you give your kids, like on the first line, when do you give your kids access to something like this? Like, there, there's so many ramifications of this kind of technology that we are not even talking about as a society, that need different backgrounds, sociologists, psychologists, ethicists.

[00:48:26] Paul Roetzer: Like we, we just need more conversation around stuff like this because it's just gonna be out and available to the world and we're not gonna be prepared for it. I, I'm excited in some ways about this kind of technology. I think it could be extremely helpful, but I also could see it having a lot of downsides to it.

[00:48:48] Mike Kaput: So we've talked quite a bit in the past about Geoffrey Hinton, who is one of the godfathers of AI, and he used to work at Google on leading-edge AI topics. He actually left Google because he wanted to speak openly about the dangers of AI technology, partially AI technology he helped create. And we actually found this past week a new interview with Wired where he expands on some of the reasons he's worried about AI.

[00:49:15] Mike Kaput: He makes a number of different points in the interview about this, but some of the main ones are: he thinks that AI agents may become smarter than humans within five to 20 years.

[00:49:34] Mike Kaput: His perspective changed because he realized a few different things that chatbots can understand language really well. They can share knowledge easily, and they have superior learning algorithms when compared to human brains, at least in his opinion. So he's not totally pessimistic though he is worried about the future concerns of super smart artificial intelligence.

[00:49:57] Mike Kaput: But he also sees some ways forward where AI doesn't necessarily get out of our control and is developed responsibly. One of those is using analog computing, rather than digital computing, to build these systems. He claims there are some serious advantages to doing that which would prevent a superintelligent system from going off the rails in the ways that he fears.

[00:50:23] Mike Kaput: So Paul, there's a lot to unpack here, but does anything really surprise you about these further details he's giving? Or is it more him building on what he's been saying since the beginning of the year?

[00:50:38] Paul Roetzer: Yeah. I think he came out without a clear messaging platform and plan.

[00:50:43] Paul Roetzer: It was more just to raise the alarm, and it seems like he's starting to hone his messaging. So again, going back to my public relations background, it's like now he's getting some guidance, or working on it himself, on how to convey what it is he's worried about and what his plan is to help.

[00:50:59] Paul Roetzer: The five to 20 years, the 20 years specifically, seems surprising in a way, because just this past weekend I listened to, I think, three different podcasts. One was with Mustafa Suleyman, the founder of Inflection, on the Have a Nice Future podcast from Wired. Another was with Jan Leike, who's leading the Superalignment initiative at OpenAI, because they think superintelligence is possible within four years.

[00:51:21] Paul Roetzer: I think 10 years is their far-out estimate, but they're planning for four. And then there was Dario Amodei, the CEO of Anthropic, on the Dwarkesh podcast. And then Emad Mostaque, who's the CEO of Stability AI, on the Peter Diamandis Moonshots and Mindsets podcast. So I was listening deeply to what these people are thinking.

[00:51:46] Paul Roetzer: And across the board, they all seem to feel like this human-level AI, this AGI if we want to call it that, is within reach now, like within the next three to five years. So nothing surprises me, but I would say that his concerns are being shared by a lot of the leaders of these major AI companies, if not with more aggressive timelines for this to happen.

[00:52:18] Paul Roetzer: So while in the past we've said that this existential risk to humanity from AI is probably more than the average person needs to be worrying about right now, I will say that there does seem to be kind of uniform agreement that we're heading into a really precarious position if we don't figure out ways to solve for the threats of AI beyond just the obvious misinformation, disinformation, downfall-of-democracy-in-election-cycles kind of stuff.

[00:52:54] Mike Kaput: So, some interesting developments on the legislative front. The bipartisan CREATE AI Act was recently introduced in Congress. This is a law that would establish a national AI research resource, giving compute and data access to academics, nonprofits, and startups. At Stanford University, there's a highly influential group called the Institute for Human-Centered AI, or HAI, and they're strongly advocating for the passage of this law.

[00:53:24] Mike Kaput: What did you think of this proposed legislation? It seems like HAI, at least, is saying that it's very good to get some of the control out of the hands of profit-motivated AI companies. Not that it's bad that they have some control, but they cite the fact that most, if not all, of the industry breakthroughs are coming out of these companies and not out of academic labs.

[00:53:49] Paul Roetzer: I mean, I'm a huge proponent that the US government, and again, I know we have listeners around the world, so any government, but the US in our case, needs to take a moonshot on this. They have to be making unparalleled investments into this, on par with or greater than the moon program and probably the Manhattan Project.

[00:54:15] Paul Roetzer: I really think it's important enough to the future of society that they should be putting every resource possible into finding ways to do this. So, yeah, I haven't studied the CREATE AI Act deeply enough to know if there are elements of it I don't agree with.

[00:54:32] Paul Roetzer: But I think conceptually we need a nationalized approach to AI technology.

[00:54:41] Mike Kaput: So we've also talked quite a bit on this podcast about Ethan Mollick, who's a leading voice in AI. He spoke at our MAICON conference, and he specifically talks a lot about AI in higher education, but he also does a ton of consulting and advisory work for companies trying to apply AI.

[00:54:57] Mike Kaput: And he tweeted recently something that seems like a pretty important point. He wrote that a lot of the advice on implementing generative AI at companies ignores the speed of improvement in foundation models. For example, McKinsey's options for CTOs implementing AI call for a year or so of development work.

[00:55:17] Mike Kaput: He says, "I have talked to companies that built on GPT-3. They regret it." So he's really talking through the length of time it takes to implement some of these models and how quickly they improve. Can you unpack for us what he's talking about here and why this is really important for companies?

[00:55:35] Paul Roetzer: We've touched on this idea before: one of the great challenges ahead for businesses is what language model or models do you build on, what is the time horizon to do that, and who's involved in the organization. We've seen large enterprises where marketing is racing forward doing its own thing.

[00:55:52] Paul Roetzer: Meanwhile, the CIO or CTO is working on some bigger play for an organization-wide LLM. I think that just highlights the challenge further. If you're relying on something like a McKinsey to do this, and you're spending a year and five or ten million dollars to build something,

[00:56:11] Paul Roetzer: two versions of that language model may have emerged since you started the project. And so just this idea that we really need a lot more open dialogue within organizations solving for this, and then how you make these plans dynamic, building in the fact that these models are gonna keep improving. While you're building on GPT-3, someone may come into your industry and launch a better vertical foundation model, and now all of a sudden you just spent $5 million building on top of GPT-3 and there's something better.

[00:56:41] Paul Roetzer: So it's gonna be really challenging. Not that tech hasn't always evolved, but it's never evolved at this speed, where there's such a dramatic difference in some cases between a version three and a version four, or a four and a five, because these things are doubling in their capabilities every 12 months or so.

[00:56:57] Paul Roetzer: So it's a really challenging space to navigate right now.
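To make the model-churn point concrete: one way teams hedge against a language model being superseded mid-project is to keep the model behind a thin abstraction layer, so swapping GPT-3 for a newer or vertical model becomes a configuration change rather than a rebuild. Here is a minimal sketch of that idea in Python; the class names and registry are hypothetical, purely illustrative, and not from any vendor SDK or anything discussed on the show:

    from abc import ABC, abstractmethod

    class TextModel(ABC):
        # Thin interface so the rest of the codebase never names a vendor.
        @abstractmethod
        def complete(self, prompt: str) -> str:
            ...

    class GPT3Backend(TextModel):
        # Hypothetical stand-in; a real version would call the vendor's SDK here.
        def complete(self, prompt: str) -> str:
            return f"[gpt-3 completion for: {prompt}]"

    class VerticalBackend(TextModel):
        # Hypothetical industry-specific model that might ship mid-project.
        def complete(self, prompt: str) -> str:
            return f"[vertical-model completion for: {prompt}]"

    BACKENDS = {"gpt3": GPT3Backend, "vertical": VerticalBackend}

    def get_model(name: str) -> TextModel:
        # Swapping models becomes a one-line config change, not a rebuild.
        return BACKENDS[name]()

    model = get_model("gpt3")  # later: get_model("vertical")
    print(model.complete("Summarize this quarter's campaign results."))

Under a design like this, the $5 million question Paul raises gets smaller: the prompt logic and data pipelines survive a model swap, even if evaluation and tuning work has to be redone.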

[00:57:02] Mike Kaput: So we've kept tabs on some of Adobe's AI work with its Firefly generative AI model, and they recently announced they're rolling out new AI features in Adobe Express, their cloud-based design platform, using that Firefly model. You can now generate custom text and image effects using prompts in a hundred different languages.

[00:57:23] Mike Kaput: In this platform, you can also automatically remove backgrounds from visuals and add animations. Users can now access all of these AI features for free on the desktop web version, and it sounds like a mobile rollout is coming soon. What's really interesting with Firefly in general is that it's trained on Adobe's own content versus external content.

[00:57:49] Mike Kaput: So what did you make of this update, Paul? And do you see it being a really important differentiator that Adobe is rolling out features trained on content it has a license to train on?

[00:58:03] Paul Roetzer: It probably implies the quality of the output isn't as good. I mean, that's what we've seen previously: Midjourney's image generation is gonna be superior to Adobe's, but Midjourney has questionable training data in its models.

[00:58:16] Paul Roetzer: So I think in the long run Adobe's probably taking the smarter route, and I think a lot of enterprises that are maybe more risk averse are going to be okay with the outputs in the near term not being as high quality as others, because, from a legal perspective or an ethical perspective or both, they're more confident in how the model was trained, and thereby in the legitimacy of whatever they're creating within Adobe.

[00:58:46] Paul Roetzer: And again, Adobe's not new to this game. They've been investing in AI for a long time. They were a little slow on the generative AI movement, kind of delayed in their launch, but it sure seems like they're gonna come aggressively into the space and continue to build out capabilities.

[00:59:05] Paul Roetzer: So it'll be interesting to keep an eye on what they're doing.

[00:59:09] Mike Kaput: So Jasper, which is a leading AI writing tool and a friend of the Institute, just released a responsible AI usage template that companies can take and adapt to their own needs to build their own AI usage policies. This template was actually used by Jasper to create its own AI usage guidelines.

[00:59:28] Mike Kaput: And it includes details on things like how your company will treat transparency of AI usage, tool selection, bias, privacy, and a bunch of other factors. It also includes some guidelines for how employees should be trained on how to use AI. You know, this coming week we're releasing some new

[00:59:48] Mike Kaput: 2023 State of Marketing AI research. And as a quick preview of that, our research found that the vast majority of companies we survey don't have these types of policies. So this is a big step forward, and it's really good to see them providing resources that companies can use.

[01:00:05] Mike Kaput: So why do all these companies need to prioritize having these policies?

[01:00:09] Paul Roetzer: It kind of goes back to the policies in schools. If you don't provide the policies, and you don't provide guidance on what human-centered use of these technologies looks like, then your employees and your leaders have no idea what to do.

[01:00:24] Paul Roetzer: So you referenced the data: I think only 22% of companies have generative AI guidelines, and 21% have AI ethics or responsible AI policies. My hope is that when we do that research next year, it's more like 80 to 90%. I just see it as absolutely foundational to figuring this stuff out: you have to give your team guidance on what they're allowed to do and not allowed to do, and how to use the technology for good within the organization.

[01:00:50] Paul Roetzer: So yeah, I'm a huge fan. And thanks to Meghan Keaney Anderson at Jasper, who gave us a shout-out in her post announcing this for our original responsible AI policies that we put out under Creative Commons. I just want to see more of it. I love that they're putting it out there. I think the challenge for every organization, when you're putting these out there, is to live by them too.

[01:01:10] Paul Roetzer: And so I think a lot of organizations, as they develop these policies, are gonna have to gut-check: are we adhering to our own policies, and is everyone in our community, all of our vendors, all of our partners, adhering to these kinds of policies too? I think that's gonna be a challenging aspect of this, because not everybody follows the same policies, but I think it's gonna be important.

[01:01:33] Paul Roetzer: I think over time you're gonna start to see employees choosing the places they work based on this kind of stuff, because it's gonna be as important as culture, in my opinion, to the future of businesses.

[01:01:46] Mike Kaput: So, also policy related, the Associated Press just released public standards for how it's using generative AI.

[01:01:52] Mike Kaput: It actually says overall that its journalists' role of gathering and evaluating facts for news stories is not going to change despite the use of AI; they don't see AI as replacing journalists. Their guidelines include disclosing that the AP has a licensing agreement with OpenAI, and they also cover how journalists will and won't use generative AI, including the fact that they will treat any output from generative AI as unvetted source material.

[01:02:24] Mike Kaput: So do you expect to see similar standards and policies from every media outlet moving forward?

[01:02:30] Paul Roetzer: Yeah, definitely. And I kind of liked how they positioned that: just assume it's wrong and you have to vet everything, and it's our responsibility, the human is still the owner and has to take responsibility for the output.

[01:02:44] Paul Roetzer: I think that's a really good baseline as you're building generative AI policies for your own company: you cannot rely on the AI for the output. The human has to be in the loop and has to be able to sign off on responsibility for the final product that goes out there, meaning verification of all facts and sources and citations and everything like that.

[01:03:04] Paul Roetzer: It's a really good baseline.
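To show what "treat it as unvetted until a human signs off" could look like if you encoded it in a workflow tool, here is a minimal sketch in Python. The Draft class, the sign-off step, and the publish gate are all hypothetical, purely to illustrate the shape of a human-in-the-loop policy, not anything the AP or Jasper actually ships:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Draft:
        # AI output starts life unvetted; only a named human can clear it.
        text: str
        facts_verified: bool = False
        reviewer: Optional[str] = None

        def sign_off(self, reviewer: str, facts_verified: bool) -> None:
            # The human reviewer, not the model, takes responsibility here.
            self.reviewer = reviewer
            self.facts_verified = facts_verified

    def publish(draft: Draft) -> str:
        # Refuse to ship anything a human has not vetted and signed.
        if not (draft.reviewer and draft.facts_verified):
            raise ValueError("Unvetted AI output: human sign-off required.")
        return f"PUBLISHED (approved by {draft.reviewer}): {draft.text}"

    draft = Draft(text="AI-generated summary of the story.")
    draft.sign_off(reviewer="staff editor", facts_verified=True)
    print(publish(draft))

The point of the gate is exactly the baseline Paul describes: the system defaults to assuming the output is wrong, and nothing goes out until a specific person has verified facts, sources, and citations.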

[01:03:08] Mike Kaput: Just a couple more stories in this packed week of AI news. We actually just saw that a US district judge ruled on Friday that AI-generated artwork cannot be copyrighted. This was in response to a lawsuit against the US Copyright Office from a man named Stephen Thaler, who tried multiple times to copyright an image created by an algorithm he had himself created.

[01:03:36] Mike Kaput: The Copyright Office rejected his request, stating that AI creations lack the human authorship required for copyright. After being rejected several times, Thaler sued the office. But the judge has now said that copyright has always required human creativity guiding the work. To their credit, the judge did acknowledge that AI art copyright is going to raise a lot of challenges.

[01:04:02] Mike Kaput: But it sounds like, at least based on this ruling, AI-generated artwork cannot be copyrighted. So what are the implications of this for people and companies trying to use AI-generated art?

[01:04:15] Paul Roetzer: It seems like one of the first confirmations we've seen of the March updated guidance from the US Copyright Office that a human has to be the author, as has been the case since 1870.

[01:04:25] Paul Roetzer: And that prompting does not equal human authorship. I think it's just the first of many cases we're gonna see brought that challenge copyright law, and until the US adapts that law, or not, we're probably gonna see a lot of rulings like this where people aren't given the copyright they seek because the courts don't deem that the human was involved enough in the final output.

[01:04:55] Mike Kaput: So, last but not least, there was a tiny company called Prosecraft that is no more after a viral backlash from people enraged that the company used copyrighted material without permission. Specifically, these were authors, because Prosecraft was a simple tool that basically just analyzed the style of different authors in different books.

[01:05:19] Mike Kaput: But the problem is it did so using models trained on books scraped from piracy websites. After this outcry from authors online who discovered what was being done, the founder apologized and deleted the site. It was honestly more of a side project, it sounds like, than any large commercial concern.

[01:05:38] Mike Kaput: He eventually deleted everything, including the dataset used to train it. And though Prosecraft is kind of a minor target here, it's really a symbol of some of the bigger issues, especially in creative industries, with these systems being trained on protected data. Interestingly, at least in this story, the creator of the company fully apologized.

[01:06:01] Mike Kaput: He's like, I understand why everyone's upset. He did say: what I thought would happen in the long term is that if I could show people this thing, they would say, wow, it's so cool and it's never been done before, it's so useful and interesting, and then people would willingly contribute their books and their manuscripts

[01:06:21] Mike Kaput: to this project. He says there was no way to convey what this thing could be without building it first, so he went about getting the data the only way he knew how, which was: it's all there on the internet. So what are your thoughts on this, Paul? Some interesting conflicts between people wanting to build things but also doing it in the wrong way.

[01:06:43] Paul Roetzer: So, I think we're gonna need to go deeper on this as a main topic next week, because an article just came out that exposes how prevalent pirated books are in the core foundation models we all talk about. It's an amazing piece of journalism that actually found the source the books are coming from, and the guy who created that source, with something like 170,000 pirated books that are being used to train these models.

[01:07:12] Paul Roetzer: So this is a much bigger problem than a single startup that had to shut down. And I guess that's a teaser for next week's episode: we're gonna go a little deeper into this topic because it's gonna be a fundamental challenge moving forward. And like I said, it's a great piece of journalism that no large language model could have written.

[01:07:32] Paul Roetzer: It sort of exposed everything that's happening in this space, and it'll be fascinating to see the ramifications of that investigative piece.

[01:07:42] Mike Kaput: Awesome. Well, Paul, thanks for breaking down this week in AI for everyone. There are lots of topics here, obviously, but you make it a lot clearer and easier to understand, kind of sorting the signal from the noise, so we appreciate it.

[01:07:54] Paul Roetzer: Yeah, thanks everyone for listening. Another action-packed week, and we'll be back next week, maybe kicking off with the pirated books scandal. Alright, thanks Mike. Thanks everyone.

[01:08:09] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:08:30] Paul Roetzer: Until next time, stay curious and explore AI.
