31 Min Read

[The Marketing AI Show Episode 36]: OpenAI Plans for AGI, the Rise of More Human Content, and ChatGPT Get-Rich-Quick Schemes


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


This week’s episode of The Marketing AI Show brings out some strong opinions from Paul and Mike. The common thread in the three stories covered? Humans.

OpenAI drops a big announcement planning for AGI.

OpenAI, the creator of ChatGPT, just published a bombshell article titled “Planning for AGI and Beyond.” AGI, or artificial general intelligence, refers to AI systems that are smarter than humans at many different tasks. OpenAI says that AGI “has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.” But it also notes the serious risks of misusing such a hyper-intelligent system. Because of this, OpenAI outlines short- and long-term principles to “carefully steward AGI into existence.”

AI-generated content will lead to more human content.

Paul recently posted on LinkedIn about the “rise of more human content,” and it’s gotten some attention. In the post, he outlines one possible future for content in the age of AI-generated content, saying “As AI-generated content floods the web, I believe we will see authentic human content take on far greater meaning and value for individuals and brands.”

Readers had some things to say, including Alvaro Melendez, who said, “I totally agree I think we will see a rise in relevance and appreciation of artisan content. Human-crafted stories will gain in value.” Paul and Mike discuss their thoughts and observations. See the show notes below for a link to Paul’s post.

Get-rich-quick schemes are on the rise as ChatGPT takes center stage.

Internet scammers are now selling get-rich-quick advice on how to use ChatGPT to churn out content that makes money.

In one noted example, the editors of Clarkesworld, a popular science fiction and fantasy magazine that accepts short story submissions, recently estimated that 500 out of 1,200 submissions received in February were AI-generated by tools like ChatGPT. The problem got so bad, the magazine had to suspend submissions. And Clarkesworld is not alone.

This trend is impacting far more than fiction. Similar advice on how to make a quick buck generating content across book publishing, e-commerce, and YouTube is prevalent. In fact, there are already 200+ books on Amazon that now list ChatGPT as an author or co-author.

Paul and Mike have a lot to say on this topic!

Plus, stick around for the rapid-fire questions at the end, covering Bain x OpenAI and Meta AI’s LLaMA release.

Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.

Timestamps

00:03:12 — OpenAI plans for AGI

00:17:28 — The rise of more human content

00:23:50 — ChatGPT get-rich-quick schemes

00:31:37 — Rapid fire topics: Bain x OpenAI and Meta AI’s LLaMA release

Links referenced in the show


Watch the Video


Read the Interview Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I think humans generally just ruin everything. Like marketers ruin good stuff. Like this is a very valuable, cool technology. And of course it was spun into this, like, scamming thing right away. So I think it's going to be widespread because people are generally lazy and like to find quick ways to make money.

[00:00:18] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:38] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:47] Paul Roetzer: Welcome to episode 36 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. Hello, Mike. Hey, Paul. How's it going? We got a deep topic today. We gotta talk about AGI. I wasn't really planning on going into this. I've been working on some AGI-related stuff for a while.

[00:01:10] Paul Roetzer: And then Sam Altman just like goes and ruins my Friday and drops his like planning for AGI post. So we will get into AGI as well as a couple other big topics for the week. But first, this episode is brought to you by the AI for Writers Summit. Artificial intelligence won't replace writers, but writers who use AI will replace writers who don't.

[00:01:31] Paul Roetzer: AI writing tools, which we're going to talk a little bit about today, are rapidly transforming the art and science of storytelling, writing and editing. Career paths are being redefined. Media companies, brands, and agencies must move quickly to reimagine their content teams and strategies to stay competitive.

[00:01:47] Paul Roetzer: This is very relevant today, actually, now that I'm thinking about the ChatGPT stuff. So that's why we're bringing together thousands of writers and marketers at our virtual event, the AI for Writers Summit, on March 30th from 12 to 4 Eastern Time. I think we're over 1,500 or 1,600 people registered for that event already.

[00:02:04] Paul Roetzer: So, check it out. There's a free pass option, so you can join for free. We're going to talk through the state of AI writing tools. We're going to get into how generative AI can make writers and content teams more efficient. Go through dozens of writing use cases and tools. Consider the impact on career paths.

[00:02:23] Paul Roetzer: Look at potential negative effects on writers, and then give you a chance to connect with other writers and marketers and attendees. So again, there's a free option. It's aiwritersummit.com, March 30th from 12 to 4. It is a virtual event, so no reason not to check that out.

[00:02:41] Paul Roetzer: And with that, I'm going to turn it over to Mike. If you're new to the show, we go through three topics. Mike and I kind of pick the hot topics. We swapped two of them out this morning. There was so much going on that we had our three set, we are recording this on Monday, February 27th, and we took two of them out.

[00:02:59] Paul Roetzer: So we're going to do a rapid fire at the end, because there were a couple we wanted to get to that got bumped. So, all right, Mike, let's go.

[00:03:06] Mike Kaput: All right, well, like you mentioned, we're going to dive in, kind of coming in very hot here, because OpenAI is planning for AGI. So OpenAI, the maker of ChatGPT, DALL-E 2, and GPT-3, just published a bombshell of a blog post titled, quote, Planning for AGI and Beyond.

[00:03:27] Mike Kaput: So here, AGI means artificial general intelligence, or AI systems that are smarter than humans at many different tasks, not just one task. So OpenAI says that AGI, quote, has the potential to give everyone incredible new capabilities. We can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

[00:03:57] Mike Kaput: But OpenAI also notes the serious risks of misusing such a hyper-intelligent system. They then outlined some short- and long-term principles to, quote, carefully steward AGI into existence. Now, this is a really, really important announcement, Paul. I want to ask you: why are they making this announcement, and why now?

[00:04:23] Paul Roetzer: It appears that they think they're making progress toward AGI.

[00:04:27] Paul Roetzer: I mean, I think that's the largest takeaway here, and they've implied this before. You know, that is their mission, as you said: to benefit humanity with AGI. So they have believed since the beginning that this was possible, and they seem to think that they're making progress now. So they put this out on Friday, the 24th.

[00:04:50] Paul Roetzer: Sam then tweeted, I think the next day, a new version of Moore's Law that could start soon: the amount of intelligence in the universe doubles every 18 months. So I think, one, they have a very specific point of view on this. I think that's really important for everyone to understand.

[00:05:15] Paul Roetzer: They see things that many of us don't see, most of us don't see. There are certainly people in other AI research labs that are making similar progress, and may or may not believe that they're also making progress toward AGI. There are a lot of problems with this post, but the big thing that jumps out to me for a moment is they never really define AGI.

[00:05:39] Paul Roetzer: So I mean, they give a definition, but it differs from their other definitions they've previously given. And so, you know, I had made a note that there was a sort of glaring gap here with what they're saying. And that is that they don't really state what it is or how we're going to know when we're going to get there.

[00:05:58] Paul Roetzer: Like, what are the measurements we're actually tracking against? Because the definitions of AGI I've seen are very vague. Take OpenAI themselves: they say OpenAI's mission is to ensure that AGI, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.

[00:06:21] Paul Roetzer: So that's, that's the one they say on their website, which is very vague. It's very general in its definition. Then in this post they say AI systems that are generally smarter than humans. Those are very different things. And so that, to me, right up front, is the biggest issue here: we're throwing up these red flags and creating maybe some unnecessary fear.

[00:06:46] Paul Roetzer: A lot of AI researchers think that it's just bluster, like it's not real stuff, like they aren't actually making progress toward AGI, and it's unnecessary to create this kind of fear around it. So my big thing is, whether or not we agree that we're making a path toward AGI, or that it is even possible,

[00:07:10] Paul Roetzer: we need clear definitions and ways to measure progress toward it. Because one of the key takeaways is they want to have guardrails to prevent it from going bad. Well, if we don't know what it is, or how we're going to know when we get to it, how do we avoid accidentally reaching it before taking the necessary steps to prevent it?

[00:07:35] Paul Roetzer: So my biggest problem with this post: I understand if they think it's important to put it out there, like, that's fine. But there were lots of really vague statements that seemed very extreme, without much clarity around anything. Like, there was very little you could do reading this and say, okay, what can I do to help here?

[00:07:56] Paul Roetzer: It's almost like a, hey, trust us, we're building it, we're making progress. You're probably not going to know when we're getting anywhere, but we think we should involve a bunch of people to help figure this out. That was my challenge. So it might help to go through a few of these points, because I think the average marketer or business leader who might listen to our podcast could go read this post and just be like, I don't understand anything they're saying.

[00:08:21] Paul Roetzer: So I think it might be helpful to unpack a few of these key points from it. And maybe, Mike, if you have any context, like, feel free to jump in here. So it starts off with: AGI has the potential to give everyone incredible new capabilities. We can imagine a world where all of us have access to help with almost any cognitive task,

[00:08:40] Paul Roetzer: providing a great force multiplier for human ingenuity and creativity. Sounds amazing, like, just on the surface. Almost any cognitive task is a really big market and lots of applications. As we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world.

[00:09:01] Paul Roetzer: We believe this is the best way to carefully steward AGI into existence. A gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it's better to adjust to this incrementally. So there are some times in the post where they actually talk about it as an inevitability, like, hey, we're going to get there and we're going to actually make it happen.

[00:09:20] Paul Roetzer: And then there's other times where they talk about it as almost like something we don't want to have happen, and yet that's what they're doing: bringing it into existence. I'll stop here. As we're thinking about this, it's almost helpful to throw the AGI part out of this. It is kind of a weird idea, but because the top AI researchers can't agree, we're not going to agree.

[00:09:43] Paul Roetzer: Like, is AGI possible? I have no idea if AGI is possible. I don't work in these research labs. We just follow what they're saying and we go read the research papers and try and synthesize it. So I have no idea if AGI is a year away or a hundred years away, or if it's never going to happen.

[00:09:58] Paul Roetzer: But what we know is going to happen is more advanced AI systems live in the research labs right now, and they will find their way into society and business in the very near future. So let's look at the remaining points we're going to go through here under the lens of more advanced AI systems that the public hasn't previously seen, that they know exist, and that we're probably not ready for.

[00:10:20] Paul Roetzer: So let's just assume we're talking about advanced AI systems; then we don't have to have this debate about is AGI possible or not, 'cause I don't think any of the AI researchers who debate around AGI would disagree that there are more advanced AI systems that haven't been released yet. Okay. So a gradual transition of these more advanced AI systems gives people, policymakers, and institutions time to understand what's happening, personally experience the benefits and downsides, adapt the economy, and put regulation in place.

[00:10:46] Paul Roetzer: Those are important, and I think those are very critical things to be discussing. There's nothing concrete in here about how any of that is happening. I would love to see that built out. Then we go on: generally speaking, we think more usage of AI in the world will lead to good, and we want to promote it by putting models in our API, open-sourcing them, et cetera.

[00:11:08] Paul Roetzer: We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas. In other words, we're going to keep putting this stuff out there, for good or bad. There's going to be a lot of downsides, but we're going to keep doing it.

[00:11:22] Paul Roetzer: As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies. I think we talked about this one before, but basically same premise. At some point, the balance between the upsides and the downsides of deployments could shift our thinking.

[00:11:40] Paul Roetzer: So basically if things get worse and we get more powerful systems, we may pull back on this. We think it's important that society agree on extremely wide bounds of how AI can be used, but within those bounds, individual users have a lot of discretion. This goes back to what we talked about last week, I think, where they're going to have these personalized systems.

[00:11:57] Paul Roetzer: They're going to do things you probably don't agree with, that a lot of humans probably think are bad, but they're going to default to letting people kind of figure this out on their own. This next one is the one that worries me. So the default setting of our products will likely be quite constrained, but we plan to make it easy for users to change the behavior of AI they're using.

[00:12:15] Paul Roetzer: We believe in empowering individuals to make their own decisions, and the inherent power of diversity of ideas. So this is where I started thinking: if they have these more advanced AI systems and they're restraining them from release, how is this any different than the people that left Google in 2017, '18, '19, because Google wouldn't release the language models they had?

[00:12:44] Paul Roetzer: So you have these AI researchers who work on these really advanced systems, and then they sit in research labs for years because of the barriers to releasing them. So what's stopping OpenAI employees who work on this stuff and know what's possible from saying, yeah, I don't agree with OpenAI anymore. I don't think OpenAI

[00:13:03] Paul Roetzer: knows what they're doing in terms of what's best for society. I'm going to leave OpenAI and go advance AGI on my own, or do a startup and advance AGI, because I think it's more important. So we have this tiered system of the technology companies making the decisions, and then the individuals within those companies, whether or not they agree with the decisions being made.

[00:13:27] Paul Roetzer: So this is the one that actually started unsettling me: you could have these researchers who say, I'm out, I'm done with OpenAI. They're being too cautious now. They got too big, they're too worried about government regulations and all these other things. I'm going to leave and go do my own AGI project.

[00:13:44] Paul Roetzer: And that's absolutely what's going to happen. Like, you can just foresee that right now: they'll have these research papers over the coming years about AGI, and then they're just going to leave and go do their thing. A couple other quick notes. They hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits, and how to fairly share access.

[00:14:05] Paul Roetzer: Again, they just basically say, we're going to work on these things. They don't get into how they're going to ensure these things. They also talk about the idea of independent audits. My question is: from whom? Like, who would even do these audits? Is it the research labs auditing each other?

[00:14:25] Paul Roetzer: Who would even have the knowledge to do these audits? So a lot of this depends on these, like, guardrails and things that seem really, really hard to put in place. And then it kind of wraps with a couple of notes, around the first AGI will just be a point along the continuum of intelligence. It's like, oh geez, now we're just

[00:14:43] Paul Roetzer: almost intentionally throwing things out that are really hard for the average person to comprehend. And then, successfully transitioning to a world with superintelligence is perhaps the most important, and hopeful, and scary, project in human history. So it was a lot. I mean, honestly, when I read this on Friday, I was like, oh my gosh.

[00:15:04] Paul Roetzer: Like, I did not plan to end my week trying to process this post. But I do think it matters. They're a very impactful company, there's no two ways around that. They're going to be a very influential company on marketing, business, and human history, and they think they're making very significant progress toward much more advanced systems, which they're going to call AGI.

[00:15:30] Paul Roetzer: And that matters. What they're doing matters, why they're doing it matters. And it's an important thing that we talk about it as an industry.

[00:15:43] Mike Kaput: So reading between the lines here, obviously we can disagree with the way they've kind of gone about this, but the fact that they're doing it, does this mean some type of breakthrough is imminent when it comes to, let's toss out AGI, let's just say advanced artificial intelligence coming down the line?

[00:16:05] Paul Roetzer: It sure seems that way. I mean, everything we're seeing, not just from them, but from Meta, which, you know, we'll talk about their new language model, Stability AI's CEO, everybody is increasingly talking about some stuff coming out in the near term. And we were talking about this, you know, in the fall and end of 2022, that things were building.

[00:16:28] Paul Roetzer: I think it's pretty safe to assume that in the coming months there are going to be some advancements in AI that are going to be mind-blowing, whether we think they're actually a path to AGI or not. I don't think it really matters at this point. I think the thing we have to prepare for is that there are going to be advancements specifically in generative AI.

[00:16:47] Paul Roetzer: I mean, that's the area we're focusing on right now: images, videos, audio, language. Advancements that businesses, marketers, and governments are not prepared for. And that's, I think, the most important takeaway here: that we have to be prepared for this kind of rapid, perpetual change that's about to occur as these systems get more intelligent and more powerful.

[00:17:15] Mike Kaput: Gotcha. And you know, I think that's a good transition into our second topic here, which is more about what we can do in response as marketers, as business people. So, Paul, you posted recently on LinkedIn about the rise of more human content, and that post got a ton of attention, and in it you outline one possible future for content marketing in the age of AI-generated content.

[00:17:43] Mike Kaput: And you said, quote, as AI-generated content floods the web, I believe we will see authentic human content take on far greater meaning and value for individuals and brands. Can you tell us a little bit more about what you mean by authentic human content?

[00:18:02] Paul Roetzer: Yeah. The basic premise here is the content that can't be easily faked.

[00:18:07] Paul Roetzer: So we've touched on this, I think, in past episodes. I talked about it a little bit on the This Old Marketing podcast with Robert Rose. The premise is, as you see all this content on the internet, you almost start to just kind of assume a lot of it is written by AI, and that's not a bad thing, as long as the content's valuable. I still don't have a problem with using AI writing tools.

[00:18:33] Paul Roetzer: Like, it's fine. And if you're not a great writer and you come to depend on these AI writing tools to create your content, that's not necessarily a bad thing. I'm just saying, I believe what's going to happen is the pendulum will swing to a preference for, in many cases, stuff you know is very human.

[00:18:54] Paul Roetzer: And by that I mean, I kind of categorize it as: in person, you can't fake in person. Like, if I'm standing there on the stage and I'm talking to you and answering questions, that's real. Unscripted, so like this podcast: Mike and I have some bullet points of what we're going to talk about, but other than that, it's off the cuff and we're just having a conversation and sharing our point of view and offering things we hope help you figure out the space.

[00:19:16] Paul Roetzer: And then uniquely human, which basically says, like, you have a very clear point of view. It infuses emotions, it infuses your experience. It's very obvious that it is me talking, or that I'm interviewing someone who is sharing their point of view from their human perspective of emotion and experience.

[00:19:37] Paul Roetzer: And so those are things that you just can't really fake. And so I feel like people are going to crave that kind of content. And that was the basic premise: you know, think about the kinds of content you can create where they know that it's coming from the human mind, the human imagination, coming from your heart, where it's really you.

[00:19:59] Paul Roetzer: And that's, you know, kind of what moves us into evolving our content strategies, I think.

[00:20:07] Mike Kaput: So it sounds like we're kind of talking about, you know, brands and individuals that create content may have to get quite a bit more introspective, and perhaps vulnerable and transparent, and share more of what makes them them.

[00:20:23] Paul Roetzer: Yeah, and it's interesting, because in retrospect, it's kind of what we did with our content strategy back in November, before ChatGPT. It wasn't even by design per se, like, with this thought in mind, the more human idea. But we used to publish a ton of content on the Institute's site that was listicles, how-tos, you know?

[00:20:43] Paul Roetzer: And this is valuable stuff. Like, it's not like that isn't important. But our blog, if you go through the last three or four months of content, it's largely summaries of podcast episodes. So it's our podcast, it's the video from the podcast, the audio from the podcast. We then take and synthesize each topic into a post and kind of summarize the points we made within that.

[00:21:05] Paul Roetzer: So the more human content is at the core of our content strategy, and then it's played out in these different channels through that. Now, we still publish the how-tos and the lists and things like that that people find. But generally, our content strategy has completely shifted to this more human idea.

[00:21:23] Paul Roetzer: And so I think in the LinkedIn post I'd highlighted, you know, you might see more resources toward newsletters with strong editorials. So not just lists, but things you can't fake: again, a strong point of view on something, or experience. Podcasts, videos, and live events. Those are the things that jumped out to me as the obvious options to do this.

[00:21:42] Paul Roetzer: And I think that's largely what we're doing, again, without even really thinking about it, because I'd been thinking about this idea for a couple months. But until I wrote that LinkedIn post last week, I hadn't really, like, solidified exactly what it was in my head. And then as I did it, I realized, oh, that's kind of funny.

[00:21:58] Paul Roetzer: I guess that's actually what we've been doing for the last few months without really thinking about it in this context.

[00:22:05] Mike Kaput: So it sounds like, you know, as people explore AI writing tools and content creation tools, maybe you might be missing the point if you're just looking at it as a way to do what you've always done, just at a higher velocity.

[00:22:21] Mike Kaput: Right. It sounds like many brands may need to actually take a step back and reevaluate their strategy as a whole.

[00:22:27] Paul Roetzer: Yeah. Or it could be just you keep doing what you've been doing, but you do that more efficiently. So you're using the AI writing tools to help you create the stuff you've always been creating, and then you take the time you're saving to create the more human content.

[00:22:43] Paul Roetzer: So maybe we weren't doing the podcast or investing what we should have been, or we haven't really thought about doing some live events, or we don't use editorials in our newsletter, or we don't go out and interview anybody. Like, the things that the AI's not going to do for you. Let's say you were spending 200 hours on content creation a month, and AI writing tools got you down to a hundred hours, or 50 hours; then redistribute that time into the more human content.

[00:23:07] Paul Roetzer: Like, that's how I would look at it. We're not saying stop doing the other stuff. As long as the other stuff's valuable and the AI writing tools are helping you, great, keep doing it. If it's not, then stop doing that and focus on the other stuff. But in many existing media companies, brands, and agencies, the stuff you've been creating still has value, and we're not saying that's going to go away.

[00:23:29] Paul Roetzer: I think I would just look at this as a multi-tiered approach and really think about the fact that your readers, listeners, viewers may, in the near future, be craving content that they know is you, that has your point of view, your perspective, your experience mixed into it. And that's going to be more valuable to them in the long run.

[00:23:51] Mike Kaput: So one really good example of how bad this can go is our third topic, which is about ChatGPT get-rich-quick schemes.

[00:24:00] Paul Roetzer: This one, this one pissed me off. I just read it, and I was so annoyed when I read this article.

[00:24:04] Mike Kaput: It is. It does not sound fun to have to be on the receiving end of what we're about to talk about.

[00:24:09] Mike Kaput: So one example of what we're talking about here is the editors of Clarkesworld, which is a popular science fiction and fantasy magazine that accepts short story submissions. They recently estimated that in February alone, 500 out of the 1,200 story submissions they received were AI-generated by tools like ChatGPT.

[00:24:32] Mike Kaput: The problem got so bad the magazine had to suspend the submission process entirely, which is literally how the magazine gets its content. Now, what's really interesting here is the editors say that they suspect a new online trend was to blame. So there are internet scammers out there now selling get-rich-quick advice on how to use ChatGPT to churn out various types of content that makes money, and it's going far beyond submitting to a fiction magazine.

[00:25:04] Mike Kaput: There's similar advice out there on how to make a quick buck generating content in book publishing on Amazon, e-commerce, and YouTube. In fact, there are already 200 or so books on Amazon that now list ChatGPT as an author or a co-author, and I'm sure many, many more that have been created with it but don't list it.

[00:25:25] Mike Kaput: So how big of a problem do you anticipate this being?

[00:25:29] Paul Roetzer: I think it's going to be a huge problem. I mean, honestly, I think I read this on Saturday morning, and that's when I put it up on LinkedIn. I immediately thought, wow, the Institute accepts submissions. I don't know if I want to anymore. And we don't get a ton, 'cause we don't promote the fact that we accept submissions that much.

[00:25:50] Paul Roetzer: But we do get, you know, I don't know, five to 10 a month or something like that. We have thought previously about scaling up our content through guest submissions, because that's what a lot of institutes and media companies do: they get other people to write content for them. Maybe they pay them; a lot of times they don't.

[00:26:07] Paul Roetzer: They just give them promotion on the platform to grow their personal brand. And we've thought about that strategy in the past, and I almost felt like that strategy died Saturday morning for me. It's like, oh my God, the thought of having to sift through all these how-tos and lists, and it's like, dude, you didn't even write this thing.

[00:26:28] Paul Roetzer: And then just the idea of having to go through what that editor's going through, of trying to even figure out, is this legit content? And again, not that AI-generated content is bad or isn't valuable, but I've said before on this podcast, I almost can't even look at the homepage, or the For You page, whatever it is,

[00:26:47] Paul Roetzer: because every single thread is just, like, obviously they just put in a prompt and they got this list of 10 things to know as an entrepreneur, or 10 things about this. I was like, ugh, I just can't read that stuff anymore. And so that was my first reaction: wow, we're not expanding our submitted articles, ever, because there's just no way to avoid this happening.

[00:27:09] Paul Roetzer: And I think humans generally just ruin everything. Like, marketers ruin good stuff. This is a very valuable, cool technology, and of course it was spun into this scamming thing right away. So I think it's going to be widespread, because people are generally lazy and like to find quick ways to make money.

[00:27:28] Paul Roetzer: And if there's a scheme to be had, they're going to have it. And I, you know, I get it. I understand the money's there to be made and they're going to make it, but it doesn't mean I have to like it or agree with the approach. It's a very frustrating thing to see, honestly. So if you're

[00:27:49] Mike Kaput: a brand that accepts submitted content today or relies on it for a significant part of your strategy, should you just be rethinking this entirely?

[00:27:59] Paul Roetzer: I would. I mean, you have to. I don't know how you can't, unless you just don't care about the integrity of the brand and it's really just all a game of clicks and views and affiliate links. Like, I don't know, I would be seriously rethinking it. And again, we do have a site that does accept submissions, so I guess it applies to us.

[00:28:27] Paul Roetzer: But yeah, if I was building a media company or a brand that was built upon submissions from third parties, I would at minimum revisit the process of who is allowed to submit. Like, I wasn't familiar with this site, but it seems like they basically accept from anyone, and then the editor goes through.

[00:28:47] Paul Roetzer: So I mean, if you're accepting from a trusted group of writers, maybe your responsible AI policy is clear with them and they agree to that, or maybe you update your terms of service or terms of use: here's what we expect in terms of the use of AI within content you submit. Like, at minimum, I'm updating our policies and making clear our stance on, you know, how AI writing tools should be used in the process.

[00:29:14] Paul Roetzer: Because this isn't that differentiated, honestly, from the conversations we've had around higher education and whether or not students should be allowed to submit papers that are written by the AI, right? It's like, I want to know that you actually thought this through. You're recommending these five steps for building a more intelligent content strategy to our readers because you've actually done it for a living and you know what the hell you're talking about.

[00:29:35] Paul Roetzer: Not because you went in and figured out the 20 prompts to give ChatGPT, and here's the output, and in 10 seconds you gave me these things that you have no idea what you're talking about. I still think experience and critical thinking matter, and I want to know that if I'm getting strategic advice from someone, they've actually been through it and they can validate that what they're writing is real and valuable to the reader.

[00:30:06] Paul Roetzer: So one

[00:30:07] Mike Kaput: thing that jumped out at me in this story is that the Clarkesworld editors mentioned that some of the ChatGPT detection software they were using was, in their opinion, just absolutely terrible at catching the AI-generated content. So, you know, as we're talking about this and our audience is thinking through, oh, well maybe I could just use one of these tools, it doesn't sound like that's a consistent solution.

[00:30:33] Paul Roetzer: Yeah, we've touched on that topic before on the show. I have yet to see any proof that those things are going to be usable at scale, that they're going to be reliable, that you're going to be able to make real business decisions on them. Call someone out: you used AI to write this. No, I didn't. Yeah, you did, because we're 83% confident that you did.

[00:30:53] Paul Roetzer: Like, I just don't see how that's going to work. And I get that everybody wants to build the magic tool for figuring out whether or not AI wrote the whole thing. But until someone shows me a tool that actually works, stands up to research and testing, and is usable at, like, a professor level in college, where you're going to accuse a student of submitting an AI-written article that they didn't have anything to do with.

[00:31:19] Paul Roetzer: . I haven't seen it yet, and. Checked out most of what's been out there. So yeah, until there's some breakthrough in this space, which I don't anticipate, I don't know how those tools are going to be. Very helpful.

[00:31:33] Mike Kaput: Gotcha. And, you know, continuing kind of the talk of language here. We've got two rapid fire topics today.

[00:31:40] Mike Kaput: And the first one deals with a new large language model that was actually released by Meta, so Facebook's parent company. It's called LLaMA, and this stands for Large Language Model Meta AI. And it's basically a foundational large language model that is being kind of open sourced by Meta.

[00:32:02] Mike Kaput: It's smaller, but performs extremely well compared to some of the other models out there. And so by being smaller, they claim it's actually easier for researchers to understand better how the model works, and some of the possible areas where bias or risk could be involved. What did you make of this

[00:32:21] Paul Roetzer: announcement, Paul?

[00:32:23] Paul Roetzer: Well, I think it's certainly interesting. Whether or not it's open source seems to be up for debate, because as soon as they released it, they started getting the counterpoints of: it's not really open source, you have to give your contact information through a Google form to get it.

[00:32:38] Paul Roetzer: It's only for the research community, and blah, blah, blah. So it's almost like the AGI thing, like the AI researchers just bicker with each other back and forth on technicalities around terminology and what it really means. But I think the key takeaway, and you highlighted it, is a smaller model trained with more specific data that performs at the level of, or above, larger models, which we've talked about previously on this show.

[00:33:01] Paul Roetzer: That is one of the areas where this is going: there's been an increasing number of research papers related to this idea that we don't actually have to build bigger models, that we might be able to achieve greater outcomes with smaller models that require less computing power and less energy, because they're more specialized, either in terms of how they behave, where they're only using a portion of the model, or because they're trained on specific data sets.

[00:33:27] Paul Roetzer: So that to me was the big breakthrough here. Again, we don't know enough about how this is all going to play out in terms of the research community, but they seemed to signal that they're releasing it in this specific way in part because maybe it could be used in other ways. And so they don't want, like, a true open source release.

[00:33:47] Paul Roetzer: Like, here you go, here's the code. Now there's probably going to be a race to build this thing on Hugging Face or Stability AI. Like, you know, someone's going to release an open source version of this, probably in the next month or so. And that's kind of the race we're in right now. Meta, ironically, seems to be, like, the most open of the research labs right now.

[00:34:06] Paul Roetzer: They continue to share what they're doing. They're sharing their models, their papers, and in some cases the weights behind the models, while OpenAI is, you know, seemingly becoming more closed with what they're sharing. So Yann LeCun, who heads up the FAIR research lab at Facebook/Meta, this was his thing from day one. Like, when he joined Facebook back in, I think it was 2013, that was his requirement with Zuckerberg: that it was going to be an open model.

[00:34:34] Paul Roetzer: They were going to share their research, because LeCun believes that is how you advance AI and society. And they've stuck to that, to their credit. I've never been, like, the biggest advocate for Facebook and Meta, you know, from a personal perspective, but I have a lot of respect right now for how they're sticking to their roadmap of what they're doing.

[00:34:55] Paul Roetzer: And I think Yann LeCun's a really fascinating figure in all of this. Going back to the AGI topic, he's, you know, more of a human-level intelligence guy, and I always look at his kind of counterpoints related to this. But yeah, that was my main takeaway from it: these research labs have a lot more behind the scenes that we're not seeing, and I would expect this kind of stuff to continue to come out in the months ahead.

[00:35:21] Paul Roetzer: Yeah, and it sounds

[00:35:22] Mike Kaput: like, while we can't be sure that the trend will continue, you know, some people have pointed very rightly to possible hardware and compute constraints on some of these models as they grow. But the point here is that may not actually come into play with AI advancements; we may actually be able to do more with less.

[00:35:43] Mike Kaput: Yep. So our last topic for today is a pretty interesting announcement where Bain & Company, which is a major consulting firm, I think they've got a venture capital or investment arm as well, announced that they are partnering with OpenAI to offer services to expand the potential business applications of artificial intelligence, specifically OpenAI's technology.

[00:36:11] Mike Kaput: And kind of merging that with the consulting and expertise they already bring to the table. So they're offering now services around applying AI technologies to different areas of business and business strategy. What did you think when you saw this, Paul? I know we've talked quite a bit about the potential

[00:36:31] Mike Kaput: consulting opportunities in this area and actually helping businesses apply this technology.

[00:36:39] Paul Roetzer: It was interesting. Like, Bain obviously wanted to make a big splash with this. On the surface, I'm not sure that it's really that big of a deal. Yeah, I saw somebody was talking about this, and somebody from OpenAI actually tweeted, like, hey, this actually isn't that big of a deal.

[00:36:57] Paul Roetzer: Like, this is what we've been doing ourselves, we just can't scale to meet the demand. So, like, these enterprises are coming to OpenAI asking them to do custom models and build all these solutions, and they need service partners that can help them, because they can't scale services, and maybe they don't even want to scale services to do this.

[00:37:17] Paul Roetzer: So to me, it was probably more of a PR play than anything. I'm guessing Bain's not the only firm they're working with. Like, maybe there's an Accenture and a McKinsey, and, you know, they're probably doing this with other firms. It's like, if I'm Bain and I take my client to OpenAI and say, let's build some custom solutions, and then Bain spins up the ability to help build on top of OpenAI, it's like, great.

[00:37:40] Paul Roetzer: So I don't know. I mean, I saw it as more of a PR play, most likely, for Bain, but I think you're going to see a lot more of this. I did think it was interesting. I saw Amjad, the CEO of Replit, who I think I might have mentioned last week, had tweeted on February 22nd: professional services industry companies like Accenture are almost a $1 trillion market.

[00:38:01] Paul Roetzer: That's the addressable market for Replit bounties. So he was talking about their bounties, which is where I can say, okay, I need an AI tool that does X, and I can actually put it up on the Replit system and somebody will build that AI tool for me. I just set an amount. And so it's very obvious that this is going to be an exploding space, that all these big consulting firms and analyst firms are going to get into the game of trying to build AI solutions for their clients, because that's where the demand is going to be.

[00:38:29] Paul Roetzer: And then there was actually, like, a Wall Street Journal article last week as well that talked about, I know Andrew had posted it, but it was this massive pressure that's mounting on CIOs of major enterprises, because they're getting demand internally from all these different, you know, divisions or departments wanting AI tools for X, Y, and Z, and they don't have the resources to build these things.

[00:38:53] Paul Roetzer: So, I think you're going to see a lot of movement in this space of people that are building up capabilities and doing partnerships to help build smarter solutions for enterprises. So probably just indicative of where we're going more than like some major milestone in the industry or something like that.

[00:39:11] Paul Roetzer: Yeah, it definitely

[00:39:11] Mike Kaput: seems like one of those signals of the many we've seen that kind of validates, like, okay, perhaps we are at the inflection point of actually wide scale

[00:39:20] Paul Roetzer: adoption. Yeah, and I think any consulting firm or marketing agency that doesn't have these kinds of capabilities, you know, in the next 12 to 18 months is just not going to be relevant.

[00:39:31] Paul Roetzer: Like, every client is going to be demanding this kind of stuff, and most agencies still don't have a clue what they're talking about. So I think you're going to see a rapid consolidation in the industry, where, just like, if you don't understand this stuff: acquire, get acquired, merge, whatever.

[00:39:49] Paul Roetzer: But I don't see how agencies and consultancies survive without being AI emergent in the next one to two years. Like, the demand for that expertise is going to be so massive, they're going to have to, or they just won't be able to compete. Probably another story for another time. Yeah, right.

[00:40:05] Mike Kaput: We certainly have more to share on that topic, actually.

[00:40:08] Mike Kaput: Yeah. Well, Paul, as always, thank you so much for the time and sharing your expertise. I think this was an awesome rundown of kind of what's going on this week in AI, and I'm sure the audience appreciates it, and I appreciate it as

[00:40:22] Paul Roetzer: well. Thank you everybody for joining us as always, and we will be back next week.

[00:40:26] Paul Roetzer: I have a topic I'm very excited about for next week, so I won't get into it right now, but there's a big topic to be discussed next week, so definitely come back, subscribe, give us a rating. We'd love to see that. We've noticed the show's really been moving up the charts, which is awesome, in the business and marketing realm in particular.

[00:40:46] Paul Roetzer: So we appreciate all of our listeners and everybody who's kind of finding the show each week. It's fun to hear from you all, so don't hesitate to reach out as well and engage with us. We love to hear where you're at and what you're working on, and, you know, what parts of the show you find interesting, so we can keep evolving there and creating value for everyone.

[00:41:03] Paul Roetzer: So thanks again. We'll talk with you next week.

[00:41:06] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:41:27] Paul Roetzer: Until next time, stay curious and explore ai.
