59 Min Read

[The Marketing AI Show Episode 56]: Meta’s Incredible New (Free!) ChatGPT Competitor, Elon Musk Changes Twitter to X, GPT-4 Might Be Getting Dumber, and AI Can Now Build Entire Websites


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

With MAICON 2023 just around the corner, Paul Roetzer and Mike Kaput break down very different directions of AI this week. From incredible to dumb, from thorough to questionable, there’s lots to break down.

Listen or watch below, and keep reading for the show notes and transcript.

This episode is brought to you by Jasper, On-brand AI content wherever you create.

Listen Now

Watch the Video

Timestamps

00:03:48 — Meta LLaMa 2

00:15:01 — Elon Musk’s X and xAI

00:25:28 — Meta, Google, and OpenAI make AI responsibility promises to the White House

00:34:25 — Introducing ChatGPT custom instructions

00:40:06 — OpenAI 2Xs the number of messages ChatGPT Plus customers can send to GPT-4

00:42:40 — Is GPT-4 getting dumber?

00:47:04 — OpenAI API data privacy commitments

00:48:39 — New tool from Wix creates entire websites from prompts

00:53:20 — Authors Guild asks members to sign letter to gen AI leaders

00:58:03 — McKinsey partners with Cohere

01:00:48 — Microsoft closes at record after pricing announcement

01:03:29 — Bing Chat Enterprise offers better privacy

01:05:04 — Apple preps Apple GPT

01:07:20 — Google tests AI tool to write news articles

01:09:45 — AI-generated South Park episodes

01:12:54 — Descript’s eye contact feature

01:14:43 — Playground AI

Summary

Meta’s incredible new (free!) ChatGPT competitor is here

Meta’s latest announcement has big implications for the world of AI. The company announced that its new, powerful large language model, LLaMA 2, will be available free of charge for research and commercial use. The model is “open source,” which means anyone can copy it, build on top of it, remix it, and use it however they see fit. This puts an extremely powerful large language model into anyone’s hands—and gives them the appropriate permissions to build products with it.

But that’s not all. It signals a major strategic direction that Meta is taking to compete with other AI companies, one that could have an effect on AI safety. Some major AI players place serious restrictions on the use and release of their models, often due to concerns about how models might be misused if they’re put in the wrong hands without guardrails. Meta is taking the opposite approach, believing that getting the technology into anyone and everyone’s hands will make the technology better much faster, and more quickly help Meta reveal and address safety issues, like the use of the model to produce misinformation or toxic content. Will this new approach be successful?

Elon Musk changes Twitter to X

Musk is in the news again. As of the morning of the podcast recording (July 24, 2023), he has formally rebranded Twitter as X. The platform formerly known as Twitter hasn’t changed much aside from its logo, but it seems like Musk and leadership are viewing it as just one piece of a much larger entity.

In a somewhat cryptic set of tweets, CEO Linda Yaccarino said:

“X is the future state of unlimited interactivity – centered in audio, video, messaging, payments/banking – creating a global marketplace for ideas, goods, services, and opportunities. Powered by AI, X will connect us all in ways we’re just beginning to imagine.”

In the past couple weeks, Musk has also announced xAI, his new company dedicated to building “good” artificial general intelligence and competing with OpenAI, among others. Time will tell what this means for the future of the brand.

Meta, Google, and OpenAI make AI responsibility promises to the White House

Seven major AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—have all agreed to safety commitments proposed by the White House.

The commitments include promises to engage in “security testing” carried out by independent experts, using digital watermarking to identify AI-generated vs. human-generated content, testing for bias and discrimination in AI systems, and several other safety-related actions.

It should be noted these are simply voluntary commitments publicly announced by the companies and the White House, not any type of formal regulation or legislation. We discuss this on this week’s episode, and will keep our eyes on this for you.

Tune in to the last pre-MAICON 2023 episode! We’ll be back next week with more news, more insights, and lots of MAICON takeaways to share with you all. The Marketing AI Show can be found on your favorite podcast player and be sure to explore the links below.

Links Referenced in the Show

 

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I think the future is licensing of data. Again, Twitter shut off their APIs, Reddit's been trying to avoid people scraping things, the New York Times is doing a deal with OpenAI. I think what you're gonna see is all the major data brokers, and if we think of data as content (images, text, video), that is the asset.

[00:00:22] Paul Roetzer: And so, licensing that to these companies that are building these models will solve this issue in the future.

[00:00:30] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:50] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:01:00] Paul Roetzer: Welcome to episode 56 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are both in MAICON week mode. We managed to still show up Monday morning, July 24th to record this. So it's dropping Tuesday, July 25th, and MAICON is in Cleveland July 26th to the 28th. We have a busy week ahead of us.

[00:01:23] Paul Roetzer: I know you're teaching an Applied AI workshop, right? That's how you're starting off on Wednesday. And I've got the Strategic AI Leader workshop. What's your talk? What did the final title end up being?

[00:01:34] Mike Kaput: 45 AI Tools in 45 minutes. So it's aggressive. It's very aggressive.

[00:01:41] Mike Kaput: We've done versions of this talk a couple times, but this is easily the most tools in the same amount of time, so I'm gonna have to practice talking much faster.

[00:01:52] Paul Roetzer: There's no upfront storytelling, there's no context. No time. We are just hardcore going through 45 apps. That's awesome.

[00:02:00] Paul Roetzer: Yeah. So we're pumped. I think we're approaching 700 attendees. It looks like we're gonna hit 700 here in the next day or two. So yeah, just an incredible response, incredible support for the event. Can't wait for it. So for next week's episode, we'll show back up again, day off of our MAICON hangover.

[00:02:21] Paul Roetzer: Probably just show up on Monday, and we'll talk about some of the amazing stuff we learned or heard at MAICON. But hopefully a lot of you listening are gonna be there. We'd love to meet the audience in person. It's always great to connect with people who listen to the show. So, yeah, that's what's going on.

[00:02:37] Paul Roetzer: And so we got a lot to do, so let's go ahead and jump into this episode so we can get back to MAICON stuff. All right. This episode is brought to us by Jasper, the generative AI platform that is transforming marketing content creation for teams and businesses. Unlike other AI solutions, Jasper leverages the best cross section of models and can be trained on your brand voice for greater reliability and brand control.

[00:03:02] Paul Roetzer: With features like brand voice and campaigns, it offers efficiency with consistency that's critical to maintaining a cohesive brand. Jasper has won the trust of more than 100,000 customers, including Canva, Intel, DocuSign, CB Insights, and Sports Illustrated. Jasper works anywhere with extensions, integrations and APIs that enable on-brand content acceleration on the go.

[00:03:26] Paul Roetzer: Sign up free or book a custom demo with an AI expert at jasper.ai. All right, Mike, let's do it. We got a bunch of stuff to cover. We got our three topics, and then we got a lot of rapid fire. Even this Monday morning we've had to add like three rapid fire items, and it's only 9:50 AM Eastern time. So yeah,

[00:03:45] Mike Kaput: Let's roll.

[00:03:46] Mike Kaput: That seems to be a trend here. All right, let's dive in. First up: Meta just dropped a bombshell with pretty big implications for the world of AI. The company announced that its new, powerful large language model, something called LLaMA 2, will be available free of charge for research and for commercial use.

[00:04:09] Mike Kaput: So what this means is that the model is what we would call open source, which means anyone can copy it, build on top of it, remix it, and use it however they see fit. Now, one reason this is so important is it puts an extremely powerful large language model right into anyone's hands and gives them the appropriate permissions to build commercial products with it.

[00:04:32] Mike Kaput: Now, another reason this matters is it actually signals a major strategic direction that Meta is taking here in order to compete with some of the other AI companies out there. And this direction could have some effect on AI safety. That's because some major AI players in this space place serious restrictions on the use and release of their models.

[00:04:58] Mike Kaput: And that's often, at least partially, due to concerns about how models could be misused if they're put in the wrong hands without guardrails. Meta is taking the exact opposite approach. They believe that getting this technology out there into the world, into anyone and everyone's hands, will actually make the technology better at a much faster cadence, and it will more quickly help Meta reveal and address safety issues, like using the model, say, to produce misinformation or toxic content.

[00:05:32] Mike Kaput: So as we look at this topic, Paul, how big a deal is this?

[00:05:35] Paul Roetzer: Yeah, I mean, it definitely shakes up the whole industry from an open source perspective. The general feedback I've seen is that it is the most advanced open source model overall. Falcon 40B, I think, was maybe the most powerful one before.

[00:05:53] Paul Roetzer: But it seems as though LLaMA 2 is on par with GPT-3.5, and in some cases maybe even GPT-4, in terms of its capabilities and testing. So that's a huge deal. And then the thing that really was surprising to me was that they came to market with Microsoft. Yes. So the fact that they announced this through a partnership with Microsoft, it was almost a joint announcement.

[00:06:17] Paul Roetzer: I just thought that was really interesting. Like at first I stepped back and I'm like, okay, what did I miss here? How did I not see this one coming? Or was there nothing to see? They just kind of came out of nowhere and did this. But I think what caught me off guard was, because of Microsoft's obviously deep relationship with OpenAI and everything being powered by GPT-3 and GPT-4, that they would also make a major bet in the open source space and do it with Meta.

[00:06:42] Paul Roetzer: Like, I don't know, it was kind of out of nowhere to me. So I thought that was interesting, but it obviously adds immediate legitimacy to what they're doing. We've talked about on the show many times that Meta is a sleeper in all of this. Everyone pays so much attention to Google, and DeepMind within Google, and Microsoft and OpenAI, and AWS is beginning to enter the conversation. And Meta, outside of maybe Google and OpenAI, has probably the most advanced AI research lab in the world.

[00:07:07] Paul Roetzer: And so the fact that they're doing stuff like this is a huge deal. And putting the power of an advanced open source model into the wild is a game changing thing. I usually am not a fan of the whole game changing nomenclature for everything.

[00:07:32] Paul Roetzer: But this one feels very significant. And I think even in the first week you've seen just massive amounts of innovation being built on top of this model. So yeah, it just seems like it's a prelude for a lot of innovation to come based on this.

[00:07:48] Mike Kaput: What did you make of these kind of competing approaches to AI safety?

[00:07:53] Mike Kaput: So on the one hand, we've got some major people thinking that safety comes from careful, controlled building and release of AI technology. Others, like Meta, are truly just saying, we will get it out as fast and as far as possible, believing that transparency and usage are some of the best guardrails against how AI models can go wrong.

[00:08:15] Mike Kaput: Do you have any kind of read there on which approach makes the most sense? What the merits are of each one?

[00:08:20] Paul Roetzer: I hope they're right. I mean, we've talked about the closed versus open source before. Honestly, the open source idea terrifies me. Like I really think that when we look at what can go wrong with AI, I feel like open source accelerates that dramatically.

[00:08:37] Paul Roetzer: We're gonna find out way faster what can go wrong with AI, because the bad actors have access to the same powerful tools as the good guys. And so I find that scary, honestly. I don't know which is the better approach. I can sit and listen to debates on this all day about who's right in this scenario.

[00:08:57] Paul Roetzer: I feel like OpenAI was obviously the open approach, and they very clearly changed their direction, and they've been very straightforward about why they changed their direction, and they seem extremely confident it's the right play. But I feel like there's no turning back. Open source is going to be a part of this ecosystem.

[00:09:16] Paul Roetzer: We don't get that choice now. It's only gonna become more advanced, and people are gonna build capabilities on top of these foundational models that are now out in the wild. So it's kind of a debate that just doesn't matter. It's already out there. We now get to find out if it was good or bad.

[00:09:32] Paul Roetzer: And like I said, I think the availability of these advanced open source models is going to accelerate innovation in a good way, so all the positives that can come from AI are gonna come faster, and it's gonna accelerate them in a bad way too. And I think we're gonna be hit with some crazy stuff in the second half of this year in terms of how these things are used, stuff that presents risk to security and privacy and all kinds of things.

[00:09:56] Paul Roetzer: And so, yeah, I mean, I think it's kind of like we're gonna find out. It's out there. There's no changing the direction now.

[00:10:05] Mike Kaput: We're doing an experiment in real time, basically.

[00:10:06] Paul Roetzer: Yeah. On humanity, basically.

[00:10:10] Mike Kaput: Here we go. Oh boy. So it does beg the question: this is obviously important, and open source models appear to be becoming more prevalent.

[00:10:19] Mike Kaput: What does your average marketing or business leader need to be thinking about here, if anything?

[00:10:25] Paul Roetzer: This is something I'm very intrigued by, and I also don't know the answers yet. I don't think anybody really does. If you are a CMO or a CIO or a CTO, whatever it is, you're going to be charged with figuring out how to infuse large language models into your marketing, sales, service, ops, HR. Every aspect of the business is gonna have language models at the foundation of it.

[00:10:49] Paul Roetzer: What do you build on? And so you can go to someone like Jasper, sponsor of the show, and they've kind of got a symphony of language models based on the use case, and you can customize them based on brand tone and style and policies and all these things. So that's one solution: you go to an application company that's kind of picking the best of these models and figuring this out for you.

[00:11:13] Paul Roetzer: Or you go and get an open source model and you customize it on your own data. You keep everything in house, nothing leaves the guarded walls, you protect confidential information, privacy. I don't know, and I haven't yet seen anything definitive that clearly states this.

[00:11:31] Paul Roetzer: I know we're gonna talk later on about Cohere teaming up with McKinsey, so I think you're gonna start to see this, where a lot of organizations are gonna turn to their trusted advisors and consultants and say, what do we do? And so a lot of those middle companies, like the McKinseys, are gonna have to try and make some bets.

[00:11:50] Paul Roetzer: But again, just look at Microsoft and the fact that they have OpenAI at the core of everything, building it into their own technology, and yet they're doing a deal with Meta. So they're playing both sides. And even Amazon's the same way. You can go get Amazon's, what was it, Titan or Bedrock or whatever their language model is, but you can also get Anthropic and Cohere and these other ones.

[00:12:08] Paul Roetzer: So it just feels to me right now like no one actually knows whether open or closed is gonna be best within an enterprise. And the reality is it may be a mix of them. So I don't know. I mean, I think the thing, as a marketer, as a business leader, I'd be really thinking about here is: what do we do?

[00:12:31] Paul Roetzer: How do we build language models into our organization moving forward? Who do we work with?
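One pragmatic way to read "it may be a mix of them" is a thin routing layer that decides, per request, whether a prompt can go out to a hosted model or must stay on a self-hosted open source one. The sketch below is only an illustration of that idea: the two backends are stubs, and the sensitivity markers and function names are hypothetical, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    handler: Callable[[str], str]

def hosted_model(prompt: str) -> str:
    # Stub standing in for a call to a hosted, closed model API.
    return "[hosted] " + prompt

def in_house_model(prompt: str) -> str:
    # Stub standing in for a self-hosted open source model
    # (e.g., a LLaMA 2 deployment behind your own firewall).
    return "[in-house] " + prompt

# Hypothetical markers of data that should never leave your infrastructure.
SENSITIVE_MARKERS = ("ssn", "salary", "medical", "diagnosis")

def route(prompt: str) -> Route:
    """Keep anything that looks sensitive on infrastructure you control;
    send everything else to the hosted model."""
    if any(marker in prompt.lower() for marker in SENSITIVE_MARKERS):
        return Route("in-house", in_house_model)
    return Route("hosted", hosted_model)

print(route("Summarize this patient's medical history").name)    # in-house
print(route("Write a product tagline for our fall launch").name)  # hosted
```

In practice the classification step would be far more robust than keyword matching, but the shape (one interface, multiple interchangeable backends) is what keeps the open-versus-closed decision reversible.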

[00:12:38] Mike Kaput: Gotcha. Yeah, and we did mention, I believe on last week's episode, the importance of marketers, both individually and at the leadership level, testing multiple tools, models, and platforms as much as possible.

[00:12:50] Mike Kaput: Even though it is a bit messy and a bit of a lift, you have to be looking at different models relatively regularly for a handful of your core use cases, because of how quickly they move and advance, and open source just supercharges that.

[00:13:06] Paul Roetzer: Yeah, and I think that brings up an interesting point about how quickly these things evolve and how one can leapfrog the other so quickly.

[00:13:14] Paul Roetzer: But also that this is not traditional software. This isn't like version five remembers everything version four had, and they just make it way better and add a bunch of features. When you make version five, you're basically rebuilding from the ground up. It doesn't remember all it did.

[00:13:34] Paul Roetzer: That's why they have to train these things. That's why they have to stop the training data at a certain time period. They have to then red team the thing, make sure it's safe, make sure it's aligned, make sure it has the proper guardrails. So it's just not like, if you pick a model today,

[00:13:49] Paul Roetzer: it doesn't seem conceivable that you can just assume for the next three to five years you're good and whatever they bring out next works. Because oftentimes what happens, and we'll talk about this in a later topic, is changes can be made and all of a sudden it just doesn't work as well as it did.

[00:14:07] Paul Roetzer: And the reality, again, we'll touch on this in a minute, is they don't know why. They can't just go into the code like you would in traditional software and be like, oh, there's the bug, and fix the bug. You have to try and analyze these things, in some cases almost philosophically: what could it possibly be doing differently based on something that changed?

[00:14:26] Paul Roetzer: Is it the training data? Is it a change we made? Is it a guardrail we put in place that had downstream effects we didn't expect? It's unknown how these things really work. So there's no very clear, here's the blueprint, here's how you pick a language model and go. And I don't even expect those to emerge.
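The regression problem described above (a change lands upstream and the model "just doesn't work as well as it did") is why many teams freeze a small set of prompts with expected behaviors and re-run them whenever anything changes. A minimal sketch of that idea, with stubbed models standing in for real API calls (the cases, model functions, and pass criterion are all illustrative assumptions):

```python
from typing import Callable

def evaluate(model: Callable[[str], str], cases: dict[str, str]) -> float:
    """Fraction of frozen prompts whose output contains the expected substring."""
    passed = sum(1 for prompt, expected in cases.items()
                 if expected.lower() in model(prompt).lower())
    return passed / len(cases)

# Frozen prompts representing your core use cases.
CASES = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
}

def model_v1(prompt: str) -> str:
    # Stub for the model as it behaved when you adopted it.
    answers = {"What is 2 + 2?": "The answer is 4.",
               "Capital of France?": "Paris."}
    return answers[prompt]

def model_v2(prompt: str) -> str:
    # Stub for a silently changed model that regressed on arithmetic.
    answers = {"What is 2 + 2?": "I cannot help with that.",
               "Capital of France?": "Paris."}
    return answers[prompt]

baseline = evaluate(model_v1, CASES)  # 1.0
current = evaluate(model_v2, CASES)   # 0.5
if current < baseline:
    print(f"Regression detected: {baseline:.2f} -> {current:.2f}")
```

Real evaluation suites use richer scoring than substring matching, but the principle is the same: since you can't read the model's "code," you detect drift by observing behavior on a fixed benchmark.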

[00:14:46] Paul Roetzer: I don't know that there can be a definitive source on how to do this in an enterprise. And I think that's why it's gonna be such an unknown and critical area for businesses moving forward. This is the wild west right now.

[00:14:59] Mike Kaput: That's a really good point. Okay, so for our second topic, speaking of the Wild West:

[00:15:06] Mike Kaput: This story is quickly evolving, because a major piece of it happened the morning we are recording this podcast, Monday, July 24th. Elon Musk is making waves with only a single letter. That letter is X. As of the morning we are recording, Twitter has been formally rebranded as X. The platform formerly known as Twitter doesn't seem to have changed much right now aside from the logo, but it does seem like Musk and leadership are now viewing this as just one piece of a much larger entity.

[00:15:43] Mike Kaput: This entity, they are calling X. And in a somewhat cryptic set of tweets, CEO Linda Yaccarino said that X is the future state of unlimited interactivity, centered in audio, video, messaging, payments/banking, creating a global marketplace for ideas, goods, services, and opportunities. Powered by AI, X will connect us all in ways we're just beginning to imagine.

[00:16:10] Mike Kaput: In the past couple weeks as well, Musk has also announced an entity called xAI, his new company dedicated to building what he calls good artificial general intelligence and designed to compete with OpenAI, among others. So there's a lot going on X-related, and it's moving fast, and I'm not sure how much anyone is actually saying. But Paul, first off, let me just express what might be on the minds of a lot of listeners as of this morning, and tomorrow morning when this podcast drops, which is: what the hell is going on at Twitter?

[00:16:46] Paul Roetzer: Yeah, I mean, if anyone follows Elon, welcome to the show. This is how things work. He went on like a tweet binge Friday night into Saturday morning. I'm not sure he slept Friday night, because he was tweeting at like 3:00 AM still. And at one point, because I was watching some of this before I went to bed Friday night, he tweeted something like, if someone posts a great X logo tonight, we'll use it.

[00:17:19] Paul Roetzer: And so I think they actually crowdsourced the X logo on Twitter Friday night. And ironically, it seems like it's actually something based on some code, like someone just threw it up and he liked it. But I don't even think they're gonna be able to trademark the thing, because it's actually just generated by some computational code.

[00:17:37] Paul Roetzer: It creates this X that he's using now. But who cares about copyrights and trademarks when you're running the businesses he's running? So, yeah, I think a lot of people are gonna sign on to Twitter this morning who don't follow Twitter closely or Elon Musk closely, and they're just gonna have that reaction, like, what in the hell is this?

[00:17:58] Paul Roetzer: Like, what is the X? Where'd the bird go? And they're not gonna have a clue. Good luck trying to understand what they're doing, because I don't think they know what they're doing, honestly. If you read the CEO's quote, Linda's quote, you have to go look at the tweet. As someone who came from a communications PR background, I ran an agency, so we did this for a living.

[00:18:21] Paul Roetzer: This to me feels like, I mean, I know Linda's the CEO, but she's obviously still taking direction from Elon. It feels like Friday night at midnight he messaged her and said, we're changing the name to X tomorrow, can you tweet something? That's how this reads. Because literally, you read the one line, and it starts with: it's an exceptionally rare thing, in life or business, that you get a second chance to make

[00:18:48] Paul Roetzer: another big impression. Twitter made one massive impression and changed the way we communicate. Now X will go further, transforming the global town square. And then it just goes on: for years, fans and critics alike have pushed Twitter to dream bigger, to innovate faster, and build our great potential.

[00:19:04] Paul Roetzer: It sounds like ChatGPT wrote this. X will do that and more. We've already started to see X take shape over the last eight months. But the part you read about "we're just beginning to imagine," in other words, we actually don't know what it's gonna be. My take on it is twofold. One, he's always wanted x.com; he created that company in 1999.

[00:19:24] Paul Roetzer: He's obsessed with x.com. He always has been. The guy doing his biography, Walter Isaacson, who you and I have both read many books from, actually broke it down in a tweet: how, going back to 1999, this was the company he created. He actually tried to change the name of PayPal to x.com and got overruled.

[00:19:42] Paul Roetzer: So he is literally obsessed with this idea of x.com being like the everything app. And back in the day, it was all about online payments and banking and things like that. So that is a thing. And the other part is, and this is just totally thinking out loud here:

[00:20:04] Paul Roetzer: Twitter's value is rumored to have gone from the $44 billion he bought it for down to $5 billion. So as an asset, he has just tanked the thing he spent all this money on. If you make it something much bigger, it's like a startup, and now it's whatever value someone is willing to give you for it.

[00:20:22] Paul Roetzer: So if Twitter is no longer just a social network that's getting run into the ground, and it is actually this x.com, whatever you can imagine, banking and goods and services and whatever, now you can be like, oh yeah, it's worth like 200 billion. And now all of a sudden you've inflated an asset, which he's been known to do at times with different companies.

[00:20:43] Paul Roetzer: So I don't know. I mean, I think it is just his lifelong mission to turn x.com into something. I think when he tried to get out of the Twitter deal and couldn't, he basically said, all right, fine, we'll just make it x.com then. So again, if you don't pay attention to Elon and you don't know his history and you don't know the history of Twitter, all of this is all of a sudden like, what is going on?

[00:21:09] Paul Roetzer: There's actually like 24 years of history behind this idea. And then, when you go back to the Twitter deal, when he tried to get out of it and couldn't, he even told Isaacson at some point, and Isaacson put this up in a tweet, I'm just gonna turn it into x.com.

[00:21:29] Paul Roetzer: It was just like a random, oh, here it is. It's like, in the days leading up to his takeover of Twitter at the end of October 2022, Musk's moods fluctuated wildly. I quote: I'm very excited about finally implementing x.com as it should have been done, using Twitter as an accelerant. He texted me out of the blue at 3:30 in the morning.

[00:21:50] Paul Roetzer: So again, this was coming. But it's crazy. I don't know. Like I said about previous ones, I just want Twitter to work, right? I just want my lists and my alerts. It's how I keep up on AI news and information. Like 90% of what I do around researching and understanding AI comes from Twitter and research papers I find on there, and influencers who share all kinds of amazing insights.

[00:22:18] Paul Roetzer: I just want it to be that. But that's very personal and selfish of me, probably. I just want it to work in that way, and I think it's gonna become something much bigger. And maybe it works, maybe it doesn't. I hope it's great. But I don't even think the CEO knows what it is.

[00:22:33] Mike Kaput: Yeah, for sure. It's definitely hard with the public statements to unpack anything that's actually going on. One last bit of additional context that may be interesting to people: one possible precedent or model I've read about multiple times as we've heard rumors of X are several consumer-facing apps in China that are basically all-in-one apps.

[00:22:57] Mike Kaput: You can message everyone through them, communicate, make calls, but there's also a very strong culture there of being able to trust and use these types of messaging apps for payments, for buying things, for certain identity verification things you might need to do business or to do commerce.

[00:23:17] Mike Kaput: So I think that is one possible model they're considering here. But good luck. I probably just described that more clearly than they seriously have.

[00:23:28] Paul Roetzer: I'm not trying to, but I think I started to tweet something and I just deleted it. It was like, if they even have a communications team left at Twitter, which I don't know that they do.

[00:23:38] Paul Roetzer: They just needed 24 hours to work with this. That tweet announcing this could go down in the history books and be studied at colleges as how not to communicate something. It's just a word salad of stuff. Yeah. It's really bad. So again, I don't know. I hope they get it right.

[00:23:59] Paul Roetzer: I hope they fix it for society's sake. Like it's a valuable tool and system. I love it personally, and I just want it to work.

[00:24:09] Mike Kaput: Agreed. So before we wrap this up, and with the audience fully understanding there are not a lot of details to go on here, do you have any ideas of how you see AI fitting into this?

[00:24:19] Mike Kaput: Because on one hand we have xAI being a significant AI play in its own right, based on the people involved. But it also sounds like AI is just going to be critical to this bigger idea of X, I guess.

[00:24:32] Paul Roetzer: I mean, of all the things the CEO's statement said, the "powered by AI" part I actually laughed at.

[00:24:38] Paul Roetzer: It's like, do we really need to say that? There's nothing they're gonna do that wouldn't be infused with AI. So I just thought it was hilarious that they started the sentence with powered by AI. Yeah, I mean, like anything we've said before, no software exists three years from now that isn't powered by AI.

[00:24:54] Paul Roetzer: Maybe even one year from now. I just don't even know why you'd be building software right now if you weren't infusing AI into it; you're just gonna get destroyed. So obviously everything they're gonna do is AI powered. And I think, like we've talked about before, Elon has like six companies that are powered by AI, from SpaceX and Neuralink to Tesla and whatever else he's got going on.

[00:25:17] Paul Roetzer: And I think all of that data in some capacity eventually all powers the future stuff.

[00:25:23] Mike Kaput: All right, so for kind of our third major development happening this week: we saw seven major AI companies (Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI) all agree to safety commitments proposed by the White House.

[00:25:42] Mike Kaput: Now, this is basically the White House offering up specific commitments around safety that these companies have now said they will follow, and that includes promises to engage in different types of security testing carried out by independent experts. It includes using digital watermarking to identify AI-generated versus human-generated content.

[00:26:04] Mike Kaput: It includes testing for bias and discrimination in AI systems and several other safety-related actions. Now, these are simply voluntary commitments publicly announced by these companies and the White House, not any type of formal regulation or legislation. Despite that, it is getting a lot of attention because of the people involved and the fact that some of these directives or commitments are coming from the highest levels of government.

[00:26:31] Mike Kaput: So Paul, how legit is this? Is this meaningful or a PR move?

[00:26:36] Paul Roetzer: A little bit of both. I actually think it's more meaningful than a PR move, honestly. Okay. When you go through the eight things, I think a lot of these labs have been working on all eight of them anyway. Some of them, I'm not sure how practical they are. Watermarking is always talked about, and yet every piece of research I've seen says that's easily manipulated.

[00:26:57] Paul Roetzer: It's not hard to do away with that in different capacities. But the reason I think it's legitimate is there's a lot of things that are not gonna be said publicly that are gonna be tied to these supposed voluntary commitments, such as government funding for initiatives and projects with these companies.

[00:27:15] Paul Roetzer: They're all gonna be facing some massive ongoing legal disputes where they could use the government in their corner. And so I would imagine there's some promises being made, wink wink, like these cases with the DOJ or headed to the Supreme Court, it might be nice if you played nice with the government.

[00:27:35] Paul Roetzer: So, yeah, voluntarily commit to these things, but if you don't do 'em, then maybe we won't be as helpful in your future endeavors. And then the other thing is just the US interest in winning the AI battle. I think more and more it's becoming apparent. There's an article I just read this morning in The Atlantic, we'll talk about it on next week's episode because I have to process this one a little bit more.

[00:27:56] Paul Roetzer: But they were asking Sam Altman about the US versus China. And I think sometimes in article quotes, if you know what to look for, there's some really telling things that are said by the people at OpenAI and some other executives. But OpenAI executives in particular seem to like to tip their hand a bit with their quotes.

[00:28:15] Paul Roetzer: And I just feel like the US government is extremely aware of the importance of not falling behind in generative AI and in AGI and in superintelligence. And I think they need these organizations that are at the table right now to be a part of that. And I think there's going to be a lot of things we'll read about 20 years from now, of deals that were made that allowed for the continued advancement of this technology, and some of the things that would've been hindrances to that happening sort of just magically go away with some agreements.

[00:28:53] Paul Roetzer: And this is the kind of thing that you would publicly do: here's some commitments we're gonna make. And it's voluntary, but it's not really voluntary. And the other thing is, Biden already said there's going to be more executive action still, and they're still saying summer. So sometime in the next month and a half or whatever, summer ends in late September, so within two months, there's supposedly gonna be some more executive actions related to this stuff.

[00:29:17] Paul Roetzer: Gotcha. Yeah. I don't think it's PR. I really think that they're actually trying to find ways to accelerate the innovation, while putting guardrails in place to try and provide some protections.

[00:29:31] Mike Kaput: So this is just one of many actions that the US government has been taking recently to further understand and establish guardrails around AI.

[00:29:40] Mike Kaput: We've talked about several others on previous episodes. For people that haven't been following along as closely with the US government's actions, do you see the US getting involved in creating sweeping AI legislation like the EU's AI Act? Or are their priorities here a little different from how you see it?

[00:30:01] Paul Roetzer: It's always hard, because next year's an election year and you always have to look at who's in office and who controls Congress to be able to project out what's really gonna happen. I would say in the current climate, and based on the urgency to win with AI, as we talked about, I don't think they're gonna put anything crazy restrictive in place.

[00:30:27] Paul Roetzer: I think there will be legislation. I think there will be things around obvious areas like copyright and misuse. That's stuff that's covered under the FTC laws, like we've talked about. But I just don't see it from the US and the current administration. I think they view it more as a competitive advantage than they do a threat to the US.

[00:30:50] Paul Roetzer: And I think they're very realistic about the threats it's going to present, the challenges it's gonna present with bias and synthetic content being spread and all these things, copyright infringement. They're aware of all of that, but I think they're going to prioritize it. And again, I don't want this to become some crazy conspiracy theory kind of thing.

[00:31:09] Paul Roetzer: Just go back over the last 20-some years and look at the research DARPA has done, the Defense Advanced Research Projects Agency for the US government, billions of dollars into AI. The government didn't wake up to AI in November like the rest of the business world did. The government has been putting billions into AI for 20 years.

[00:31:24] Paul Roetzer: Longer than that, probably. They're fully aware of the power and potential of AI. And I don't see them stopping, because like I said, I think they view this as more of a competitive advantage in many areas than they do a threat, and they just can't stop it. It would not be wise for them to do that.

[00:31:46] Paul Roetzer: Yeah.

[00:31:47] Mike Kaput: DARPA is a really good example, because people sometimes don't realize, and this is not at all conspiracy, that for decades, technologies like GPS or the internet were first developed for government, military, defense, or government lab applications, and then became commercially available.

[00:32:06] Mike Kaput: So it doesn't seem like AI is any different.

[00:32:08] Paul Roetzer: Yeah, the Musk companies, that was DARPA technology. I mean, go read The Pentagon's Brain. I know this isn't all business and marketing related, but if this thread fascinates you like it does me and Mike, Mike and I sit around and have beers and talk about this stuff all the time.

[00:32:24] Paul Roetzer: Go read The Pentagon's Brain by Annie Jacobsen. It'll blow your mind. And that book was written seven years ago. I mean, I would love a Pentagon's Brain two. I would love to know what Annie knows about what has been going on in the last 10 years. It's just a fascinating thing.

[00:32:41] Paul Roetzer: We'll switch gears and get back to marketing and business talk now.

[00:32:46] Mike Kaput: So to wrap this up, I'm curious, what kind of weight do you give to the criticisms people are making about these commitments? A bunch of people have said, look, these don't go nearly far enough to hold any of these players accountable.

[00:33:03] Mike Kaput: You know, they allude to what you alluded to, that it's not necessarily a transparent or open look if you're having these closed-door meetings where presumably other conversations are happening. Is there weight to these criticisms at all?

[00:33:17] Paul Roetzer: Yeah, they're viable. I mean, it's natural to be skeptical of what the government's doing and whether or not this matters and whether these voluntary commitments mean anything.

[00:33:27] Paul Roetzer: But the government has a lot of leverage here, and I think it would be misguided to assume they're not using it. So I really don't think they're voluntary. I think these are commitments that are going to be required under laws and regulations to come, and they got them to agree in advance in exchange for something.

[00:33:54] Paul Roetzer: There's a legal term, consideration. There's consideration somewhere in here: you're gonna voluntarily agree to this, this is what we're gonna work on with legislation and future laws, but you do it now, and in exchange for X, we will do this for you, or you will have the opportunity for something. That's how this stuff works.

[00:34:13] Paul Roetzer: So my assumption right now is that these are more meaningful than just voluntary commitments.

[00:34:21] Mike Kaput: All right. Let's dive into a ton of different rapid-fire topics. First up come several updates from OpenAI about various products and initiatives they have going on. First, they just rolled out what they're calling custom instructions for ChatGPT.

[00:34:38] Mike Kaput: So custom instructions essentially give you the ability to set preferences for how ChatGPT responds to you, and then these instructions are saved across all your future conversations. This removes the friction of having to start each conversation with ChatGPT from scratch, like always having to remind it of key details about you and the context of your work or your query.

[00:35:03] Mike Kaput: So here are some quick examples of how you might use custom instructions that might be relevant to different personas or people using it. For instance, if you were a teacher using ChatGPT for lesson planning, classroom-related work, what have you, you could create custom instructions that remind ChatGPT what grade you teach.

[00:35:23] Mike Kaput: Say you teach the third grade, and it will remember that every time you start a chat. If you're using ChatGPT for writing, you can use custom instructions to remind it to apply the same voice and style each time you have it write for you. So you're writing one instruction and it is carried through all these future chats, so you don't have to write it again or remind ChatGPT.

[00:35:45] Mike Kaput: One other example: in any field, you could use custom instructions to remind ChatGPT of your level of expertise, so it can give you the appropriate level of explanation each time you're asking about a topic. So it might not bore you with the basics if you happen to know a certain topic really, really well.

[00:36:05] Mike Kaput: Now, right now, custom instructions are in beta for ChatGPT Plus users only, but OpenAI has said they'll be rolling them out to all users soon. So Paul, how big a deal is this feature?

[00:36:18] Paul Roetzer: I think until people experiment with it and assess how the outputs vary, it's hard to tell. It definitely appears to be a prelude to more personalized chat experiences, which is something Sam Altman has said is specifically a direction they're going.

[00:36:35] Paul Roetzer: So I do see this as probably a building block for GPT-4.5 or GPT-5. The thing that immediately jumped out to me was, I do wonder how it affects the quality of the outputs, because it's, I assume, gonna use these instructions on every prompt. And so is it gonna actually start giving you less variability in the outputs? Is it gonna start just writing as the third-grade teacher, that kind of thing?

[00:37:01] Paul Roetzer: And I don't know if that would affect how good they are. So I was thinking about someone like me, where I don't use it the same way every time. I have dozens of use cases I'll test it on. And so I feel like if I set the instructions, it may actually lessen the value of GPT-4 to me, because I want it to be diverse in how it responds based on what my use case is.

[00:37:27] Paul Roetzer: And so when I went in to turn it on and check it out, that actually kind of confirmed for me that I'm probably not going to use this, because in essence you're just giving it a pre-prompt. And so if you haven't done it yet, you just go into your profile at the bottom left and make sure it's turned on in beta features.

[00:37:43] Paul Roetzer: Again, you need to be a ChatGPT Plus user, I think, to have access to this. Yeah. So you turn it on and then it gives you two questions. The first is, what would you like ChatGPT to know about you to provide better responses? And it gives you thought starters: Where are you based? What do you do for work?

[00:38:00] Paul Roetzer: What are your hobbies and interests? What subjects can you talk about for hours? What are some goals you have? So again, it's just kind of understanding you. But how that plays out in the outputs, nobody seems to know, and they didn't provide a ton of guidance. And then the second question I actually found more intriguing, as a prelude to whatever the next model is: how would you like ChatGPT to respond?

[00:38:21] Paul Roetzer: And then again, the thought starters are: how formal or casual should ChatGPT be? How long or short should responses generally be? How do you want to be addressed? And the one that was the kicker for me: should ChatGPT have opinions on topics or remain neutral? That is the one that Sam has specifically addressed when talking about whether they allow right-wing and left-wing stuff in there, and trying to set guardrails.

[00:38:45] Paul Roetzer: And he said, in the future, you will pick your own guardrails. In essence, if you want it to be right wing or left wing or in the middle, whatever, you're gonna be able to tell it, I like this, this is what I believe, kind of thing. So my question becomes, if you and I were to go in there and set these parameters, these thought starters, and then give similar prompts, is it just gonna be totally different than what we would've gotten before?

[00:39:09] Paul Roetzer: Right. So with and without these instructions, how different are the outputs gonna be? Because I feel like it could start narrowing the value of the tool if it always answers within these instructions. But I have no idea. I haven't seen anything yet, because it's brand new, on whether anyone has tested outputs with and without these custom instructions.
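For anyone who wants to run that with-and-without comparison themselves, custom instructions behave roughly like a standing system message prepended to every conversation. Here's a minimal Python sketch of that idea; the instruction text and prompt are invented examples, and the commented-out API call assumes the OpenAI Python client:

```python
# Sketch: custom instructions act roughly like a standing system message
# prepended to every new conversation. The instruction text and prompts
# below are illustrative placeholders, not OpenAI's actual internal wiring.

def build_messages(user_prompt, custom_instructions=None):
    """Return a chat message list, optionally prefixed with instructions."""
    messages = []
    if custom_instructions:
        messages.append({"role": "system", "content": custom_instructions})
    messages.append({"role": "user", "content": user_prompt})
    return messages

instructions = "I teach third grade. Keep explanations simple and friendly."
with_ci = build_messages("Plan a science lesson on magnets.", instructions)
without_ci = build_messages("Plan a science lesson on magnets.")

# To compare outputs, you would send both lists to the API, e.g.:
# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=with_ci)
print(len(with_ci), len(without_ci))  # 2 1
```

Sending the same prompt both ways and diffing the responses is the simplest version of the test Paul describes.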

[00:39:29] Paul Roetzer: So I don't know, just something to check out. Yeah.

[00:39:32] Mike Kaput: Definitely a good reminder. With a lot of the things we discuss here, new features, new tools, a lot of experimentation is needed. Just because there's a new feature doesn't mean it's necessarily going to benefit you.

[00:39:45] Paul Roetzer: I probably won't use it. I don't have time right now to run a bunch of tests.

[00:39:49] Paul Roetzer: I'll wait for Ethan Mollick to run the experiments for us. But yeah, if we see cool experiments or hear feedback on it, we'll certainly share it here on the podcast. Or if you race forward and do some of this testing, let us know. Reach out and ping us and let us know how it's going.

[00:40:06] Mike Kaput: So OpenAI also announced that they're doubling the number of messages that ChatGPT Plus customers can send to GPT-4. The new limit is 50 messages every three hours; it used to be 25. A pretty simple, straightforward update here, but important. And I'm curious, Paul, this raises a bigger question: between being able to use GPT-4 and value-adds like Code Interpreter,

[00:40:32] Mike Kaput: should marketers be paying for a ChatGPT Plus subscription?

[00:40:35] Paul Roetzer: I mean, I've said before on this show, I think it's the greatest value in software history. I don't know why you wouldn't be paying 20 bucks a month for it. Literally one simple prompt could save you 20 bucks a month in time.

[00:40:48] Paul Roetzer: So yeah, and even if you have Jasper or Writer or whatever, I would still be paying the 20 bucks a month. Now, if you have a team of 600 marketers, I don't know that I would have everyone on it unless you had specific use cases for it. But as someone like me or like you who's constantly testing this and experimenting with use cases: can I save time on podcast scripting, or can I save time summarizing this research article that takes 40 minutes to read when I'm not sure it's worth the 40 minutes?

[00:41:17] Paul Roetzer: Let me throw it in there, get a summary real quick, and then decide if I'm gonna spend the 40 minutes on it. If I do that five times a week, I could literally be saving three, four, five hours a week just using summarization. Of course I'm gonna pay the 20 bucks for that. So I think once you have a case for how you're gonna use it or why you're testing it, it would be crazy to me to not be paying the 20 bucks to test it.

[00:41:38] Paul Roetzer: Yeah.

[00:41:38] Mike Kaput: And we've talked about before, with the addition of things like Code Interpreter, it really does seem like a no-brainer. And I realize not everyone's made of money, but I mention this because there is often some pushback from marketers saying, well, why would I pay 20 bucks for something when I can get something comparable for free?

[00:41:54] Mike Kaput: And it's a valid question, but this is just one of the easiest, most cost-effective ways to be experimenting with one of the leading tools out there that can do many, many things.

[00:42:05] Paul Roetzer: Yeah. And in that case, that's another argument to do it, Mike: if you want to know where this stuff's going,

[00:42:11] Paul Roetzer: OpenAI is going to continue leading the way. And so just being able to pop in and test Code Interpreter and be like, oh, that was pretty amazing. It's worth it. So if you're a leader in your organization, or if you're a marketer trying to stay at the forefront of this stuff, pay the 20 bucks and test the tech out, even if you're not using it in your daily workflows.

[00:42:30] Paul Roetzer: it's worth it to stay at the forefront of this stuff.

[00:42:33] Mike Kaput: And it is important to stay on top of it, because another OpenAI topic here is a question being raised in a few different circles: is GPT-4 actually getting dumber? Researchers from Stanford and Berkeley recently published a paper claiming that GPT-4 is becoming less capable over time.

[00:42:55] Mike Kaput: So they evaluated GPT-4 on certain tasks, like solving math problems and generating code, in highly structured evaluations. And they found that overall, GPT-4 appeared to be getting worse at reasoning-based tasks. However, the paper doesn't identify why GPT-4 might be getting dumber.

[00:43:16] Mike Kaput: Not to mention it's possible that this is just a totally unintended or unanticipated consequence of other changes to the model. Now, OpenAI has responded to these claims, saying it is looking into this and is now aware that there could be an issue. Again, this is just one paper, and some have pointed out that the research itself may be flawed.

[00:43:38] Mike Kaput: One tweet from a Princeton professor named Arvind Narayanan says that the researchers structured their evaluations in a way that may have made a decline in performance more likely. As such, their research may be misinterpreted. So there are some competing claims flying around. Paul, have you noticed any of this?

[00:43:58] Mike Kaput: What do you make of these claims that GPT-4 might be getting dumber?

[00:44:01] Paul Roetzer: Yeah, I thought it was interesting. I kind of followed it throughout the week. You were seeing these claims pop up, and then there's a guy I think we both follow, @OfficialLoganK on Twitter. Logan works for OpenAI and he's, I think, an informal spokesperson in some ways.

[00:44:17] Paul Roetzer: He's one of the most vocal and active in engaging with the community, so he's a great person to follow. But he tweeted out: "Just wanted to say generally thank you to everyone reporting their experiences with GPT-4 model performance. Everyone at OpenAI wants the best models that help people do more of what they're excited about.

[00:44:34] Paul Roetzer: We are actively looking into the reports people shared." And he'd had a couple tweets in prior days addressing this. And so the thing that jumped out to me is just what we talked about earlier: they don't know. They take the reports seriously, but because it doesn't function like normal software, they're not sure if it's actually getting dumber.

[00:44:54] Paul Roetzer: So they see this research report and they're like, I don't know, let's go do our own testing. Maybe some guardrails we put in place caused it to get dumber. Maybe some other change, some fine-tuning, affected the downstream system. We don't know; we have to go figure it out ourselves and see if we can replicate the research.

[00:45:11] Paul Roetzer: And so as the days went on toward the end of last week, I saw what you were seeing, where it seems like maybe it was just a flawed research project and the paper wasn't accurate. But again, I think it's just a reminder that this stuff is still so new to humanity, to business.

[00:45:28] Paul Roetzer: They don't know exactly how it works. Yeah. And so stuff like this can pop up; maybe the models got dumber, or maybe they didn't. But here's another thing that's interesting: if you're relying on the GPT-4 APIs for a third-party solution, like you're building applications on top of this that you're selling, what if it is getting dumber?

[00:45:48] Paul Roetzer: What if you've built a company on top of a model that somehow got dumber, and they don't even know how it happened, much less you, and now your product is jeopardized? And this goes back to the enterprise question of which LLMs you build on. We don't understand the technology we're building on.

[00:46:06] Paul Roetzer: You're betting the future of your organization on models that are like a year old and that we just don't fully understand. So all of this, to me, reinforces how early we are in this technology, and how little people really know about how it works or where it's gonna take us in the future.
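If you are building a product on someone else's model API, one hedged safeguard against the scenario Paul describes is a small regression eval you re-run whenever the model changes, so a silent capability drop shows up in your own tests before it reaches customers. This is a sketch, not OpenAI's method; `ask_model` stands in for whatever API call you actually use, and the prompts and expected answers are invented:

```python
# Sketch of a tiny regression eval for a third-party LLM API.
# `ask_model` stands in for your real API call (ideally against a pinned,
# dated model snapshot); the eval cases are invented examples.

EVAL_SET = [
    {"prompt": "What is 17 * 6?", "expect": "102"},
    {"prompt": "Name the capital of France.", "expect": "Paris"},
]

def run_eval(ask_model, eval_set=EVAL_SET):
    """Return the fraction of prompts whose answer contains the expected string."""
    passed = sum(
        1 for case in eval_set if case["expect"] in ask_model(case["prompt"])
    )
    return passed / len(eval_set)

# Fake model for demonstration; swap in your real API call.
def fake_model(prompt):
    return "The answer is 102." if "17" in prompt else "It is Paris."

score = run_eval(fake_model)
print(score)  # 1.0
```

In practice you would run this before and after any model or prompt change and alert if the score drops below a threshold you choose.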

[00:46:25] Paul Roetzer: And that can be kind of scary, but I think it's also such an opportunity for everyone. If you're just starting to listen to this podcast, you can jump into this stuff right now and, in a week or two, get caught up in certain areas. I've seen it happen. I've seen people who developed an interest in this stuff three months ago who now know more than me about large language models.

[00:46:45] Paul Roetzer: Because they just went all in, 50 hours a week. That's all they did. You can learn this stuff pretty quick if you develop an interest, and everybody's trying to figure it out. So I just think it's such an opportunity in your career, whether as an entrepreneur or within an organization, to help solve this stuff.

[00:47:00] Paul Roetzer: It's so unknown right now.

[00:47:04] Mike Kaput: So one last piece of OpenAI news. They've published some updates to their data privacy policies, specifically related to their API, signaling that the company does not train its models on the inputs and outputs created through using its API, which connects you to GPT-4, GPT-3.5, and some other models.

[00:47:25] Mike Kaput: So it seems OpenAI is being very straightforward in indicating that you can use and build on top of their API without privacy and security concerns. However, it is really important to note that these policies very clearly state that this level of privacy does not apply to consumer use of tools like ChatGPT.

[00:47:52] Mike Kaput: So Paul, it sounds like OpenAI is trying to communicate here that they are responding to some of the data privacy concerns, especially among possible enterprise users and customers. Is that how you see this policy update?

[00:48:05] Paul Roetzer: I mean, it certainly could be part of their strategy to address the growing momentum around open source and LLaMA 2 and things like that, because the argument around open source is gonna be that none of this stuff's an issue.

[00:48:18] Paul Roetzer: So I think they have to. I mean, part of it might just be strategically the direction they were going anyway, but it sure seems like, with everything else going on and maybe some of the government regulations, they have to take these steps now to deal with the competitive environment.

[00:48:35] Mike Kaput: So moving on to some non-OpenAI news and developments: popular website-building platform Wix just announced something they're calling the AI Site Generator tool, and the tool does exactly what it says. It uses AI to automatically generate an entire website from scratch based on your prompts. Wix already has a number of AI features as part of its platform that generate portions of websites, like AI image generation tools and a tool that writes product descriptions and copy.

[00:49:08] Mike Kaput: But this AI Site Generator goes one step further. It literally generates everything you need for an entire website, and each website it generates is actually unique. It's not a template, and you'll be able to use prompts to make edits to your website as well. Now, Paul, we've predicted generative AI applications like this being possible, and now we're seeing these predictions come true.

[00:49:34] Mike Kaput: Are we entering a new phase of generative AI, one where we're going way beyond having AI just write and create art for us?

[00:49:43] Paul Roetzer: Yeah, for sure. And there's another example we'll talk about toward the end here with rapid fire. Yeah, I mean this is inevitable. Like, again, if this is shocking to people, it's been coming for a while.

[00:49:57] Paul Roetzer: There were early versions of this. You can see it with PowerPoint slides; anywhere words and images and videos mesh, this is gonna happen. We've talked about this idea that every knowledge worker, every organization that does knowledge work, provides services: consulting, marketing services, web development, web design.

[00:50:21] Paul Roetzer: You have to be realistic about the future. And so the three questions I always advise people to ask are: what will we lose, what will we gain, and when? So if you are a web design and development shop and Wix succeeds at this, and it truly can just build these on the fly, or if you're a HubSpot and you have template builders internally, or if you're a firm that builds website templates and sells them for a living, that's part of your revenue.

[00:50:42] Paul Roetzer: It's done. I'm sorry, you're not gonna need it. The template library is gone. You're just gonna prompt it for the kind you want. You can provide images as inspiration, like we see with Runway, where you just give it an image and it builds a video off of it. So I just feel like the "and when" part of this is coming very soon.

[00:51:07] Paul Roetzer: And so if you're doing this kind of work, it's not that it's gonna go away completely. But certainly in the SMB world, where they don't want to spend 5, 10, 15, a hundred thousand dollars on building a website, they will use this in a second. There's just no reason to pay for that and deal with the challenges of working with outside providers.

[00:51:29] Paul Roetzer: So again, this is coming from someone who owned a marketing agency for 16 years; we did web development work. We saw this coming five years ago. It probably didn't come as fast as I thought it would, but once this rolls out and other people emulate the same capabilities, I think it changes the way these things are done, it changes the need for these services, and it affects that part of the industry.

[00:51:53] Paul Roetzer: I just don't see how it doesn't transform that. So yeah, I think that's just interesting to note. And then the other thing: I thought the CEO of Wix, or maybe it's their communications team, going back to having a good communications team, did a really good job of, here's our point of view, here's the vision for this.

[00:52:14] Paul Roetzer: Here's a roadmap of what we are going to do with AI, here's what we already have. And it's shocking to me how few SaaS CEOs have done that yet. It's one of the things we always advise, we advise it in our book: find out what their point of view is on AI. And I want it coming from the CEO; if it's just some director of marketing or product or whatever,

[00:52:36] Paul Roetzer: it doesn't matter. I want the CEO saying, this is our vision for building a smarter solution for you, the customer. If it's not coming from the CEO, and they're not talking about it on earnings calls if they're publicly traded, or putting out blog posts and videos if they're private, then they likely are not serious about AI yet.

[00:52:54] Paul Roetzer: So I think this is a good example of what we need to see. Asana's done a good job of this. Another one that jumps to mind: Aaron Levie from Box does a great job with this, CEO-level points of view, vision, roadmap updates. That's where it needs to be coming from. So if you work in the communications department or marketing department, get your CEO talking about this stuff.

[00:53:15] Paul Roetzer: Get them caring about this stuff.

[00:53:20] Mike Kaput: So next up, the Authors Guild, which is America's largest and oldest professional organization for writers, is asking its members to sign an open letter to generative AI leaders. This letter says the Guild calls on the CEOs of OpenAI, Alphabet, Meta, Stability AI, and IBM to compensate writers fairly for the use of copyrighted materials in their generative AI programs.

[00:53:44] Mike Kaput: Among other things, we seek to create a licensing solution that will bring money back to writers whose works are used on an ongoing basis. So if you're someone unfamiliar with how a lot of generative AI models are trained, at least the ones that use the written word, they are fed tons of data.

[00:54:01] Mike Kaput: Often it's scraped from places online. And some of that data, as many lawsuits are now alleging, and as letters like this allege, is coming from books, articles, essays, and other written works that are copyrighted by certain authors and/or publishers. So with these works being used in AI models, at no point are authors or writers being compensated for literally giving up data, or having it taken from them without consent, to train AI models.

[00:54:35] Mike Kaput: So Paul, you yourself have written several books; we co-authored one. How do you, as an author, view these concerns that the Guild has, and that writers generally are starting to have, about how these models are being trained?

[00:54:48] Paul Roetzer: Yeah, I mean, well you've also written a couple books. Give yourself some credit. I think we have five books between us.

[00:54:54] Paul Roetzer: So I signed it, let's start there. I think this is a sign of where this will go in the future. You can't go into the existing foundational models and extract this knowledge if they scraped the content, which they most likely did in many cases. We talked about this last week.

[00:55:15] Paul Roetzer: I think with the machine unlearning project that Google's gonna run, they don't know how to just go into the model and say, let's get Mike's book out of there. Like, okay, we shouldn't have used Mike's book, we're gonna take it out. They can't do that. So this is all about the future foundational models.

[00:55:30] Paul Roetzer: And I think the future is licensing of data. So again, Twitter shut off their APIs, Reddit's been trying to avoid people scraping things, the New York Times is doing a deal with OpenAI. I think what you're gonna see is all the major data brokers, and if we think of data as content, images, text, video, that is the asset.

[00:55:54] Paul Roetzer: And so licensing that to these companies that are building these models will solve this issue in the future. So, I don't know. I mean, maybe the Supreme Court finds that GPT-4 in 2022 or 2021 did scrape some illegal data, and OpenAI pays a half a billion dollar fine, and that half a billion gets spread out among however many millions of authors there are.

[00:56:20] Paul Roetzer: And everybody gets their $5, and like, okay, we're done. That takes care of the past. Now, in the future, we have things like this where the Guild is trying to say, you're gonna license collectively, not individually. You're not gonna go to Paul and Mike and everybody else and say, okay, we want your book,

[00:56:37] Paul Roetzer: and your book, and your book. You're gonna have this one collective license, and it's gonna be worth 2 billion, whatever that number is. And then that's gonna get spread out amongst all the authors that are part of the Guild, and everybody's gonna get their $22 or whatever it is. Again, we're just talking theory here.

[00:56:55] Paul Roetzer: But this one doesn't seem super hard to project out how it plays out. Again, I think there's little debate that they probably took stuff they shouldn't have. They may win the case, say it was fair use or not, I don't know. We'll see. Let the legal precedents come forward on that, but likely they took things they shouldn't have taken.

[00:57:15] Paul Roetzer: It's now known they did that. In the future, they're not gonna be able to get away with that as quickly or as easily, and there's probably a lot of incentive for them to not do it that way. Therefore, you have to license the content. How the licensing plays out, how it gets distributed to the people who created the content.

[00:57:29] Paul Roetzer: We don't know. There's a great FAQ actually on the Authors Guild site, we'll put it in the show notes, that kind of plays this out, how they're thinking about it. And you can tell a lot of this is, like, theory of here's how we think it could work for everyone to get compensated. But we don't know. We just don't want them doing this again.

[00:57:46] Paul Roetzer: So, yeah, I think it will positively affect authors in some way. But I don't think we're gonna be sitting around waiting for our royalty checks to show up every six months. You know, it's not gonna be like that.

[00:58:02] Mike Kaput: So consulting firm McKinsey announced that it's partnering with AI company Cohere, which makes large language models, in order to provide AI solutions to its enterprise clients.

[00:58:15] Mike Kaput: Now, this is the first partnership McKinsey has announced with a large language model company. McKinsey has said it's working with Cohere to build customized solutions to help improve customer engagement and workflow automation for clients. And it added that it's also considering using Cohere to increase its internal efficiency and power its own internal knowledge management system at McKinsey.

[00:58:39] Mike Kaput: Now Paul, this isn't the first time a major consulting firm has partnered with some of these big AI model companies, but why are we seeing this trend? Why are big consulting firms entering into these partnerships specifically for enterprise clients?

[00:58:52] Paul Roetzer: This plays into the challenge we talked about earlier about knowing which models or companies to build with.

[00:58:59] Paul Roetzer: And so I think it's a smart play by McKinsey and by Cohere, because realistically you're gonna trust your known advisors, consultants, strategists. McKinsey obviously has a massive book of business and a strong reputation. And so it's understandable that big enterprises are gonna go to McKinsey and say, what should we be doing with LLMs?

[00:59:21] Paul Roetzer: And they're gonna have solutions baked in. My guess is they will also have LLaMA 2 solutions, just like Microsoft. Like, they're gonna diversify. It's not like they're exclusive with Cohere, but they're gonna make some bets on some of the closed models and some of the open models, and then they're gonna build services around it.

[00:59:36] Paul Roetzer: It's what I would be doing if I still ran a service firm. If I still ran a consulting firm, I'd probably do deals with one or two of the closed models directly, I'd probably do a deal with one of the leading application companies, and I'd probably have a team dedicated to building open source capabilities for clients that want that, too.

[00:59:55] Paul Roetzer: We did the opposite at my agency back in the day. We bet on HubSpot in 2007, we were HubSpot's first partner, and we basically built services around a closed partner. So we did value-add services to HubSpot solutions. We didn't work with Marketo, Pardot, Eloqua. Like, over time I phased all those other closed applications out, because we just bet on HubSpot as the main thing, and it worked.

[01:00:16] Paul Roetzer: But I wouldn't do that right now. Like, if I was running a consulting firm or an agency, there's not a single model or application company that I would bet the future of my consulting firm on, 'cause we just don't know. At that time, I was very confident HubSpot was going to be a major player and a winner in the space, and I was willing to bet my company on 'em.

[01:00:36] Paul Roetzer: There isn't a company I've met with yet that I'd say has this all figured out and is gonna remain the leader for the next five years. It's just impossible to know.

[01:00:48] Mike Kaput: So in some bits of Microsoft news, a couple of big developments. The company just announced pricing for its AI features that are being added to Microsoft 365.

[01:00:59] Mike Kaput: So we've gotten some clarity on, essentially, Microsoft's Copilot subscriptions. Copilot essentially injects AI right into popular Microsoft products like Word, Excel, and Teams. And Microsoft has announced that Copilot will cost an additional $30 per month per user, which, according to CNBC, could raise prices for enterprise customers.

[01:01:22] Mike Kaput: With, keep in mind, lots of users, lots of seats for each tool, by as much as 83% when all is said and done, if they kind of go all in on this additional pricing. Paul, does this pricing make sense to you? Because on one hand it seems like a huge jump for some companies that rely on this software, but on the other, based on what we know about Copilot, it seems likely to deliver massive productivity gains.

[01:01:45] Paul Roetzer: As soon as companies figure out how to make the business case for this and understand the impact it's gonna have on efficiency and productivity across the organization. And they have plans to implement it, not just turn on 30 bucks a month for a thousand more workers and jack up the bill with no plan for how they're gonna activate that software, that technology.

[01:02:06] Paul Roetzer: But once you have a plan, like, let's say you start with the marketing department in a big enterprise and you say, okay, these are the 10 use cases we are going to use Copilot for, we have a business goal, our KPI is a 30% increase in efficiency across these 10 use cases. Go. Because now it's like a no-brainer.

[01:02:25] Paul Roetzer: 30 bucks a month is a joke for the value you could create if you have a plan to use the technology. And so I think the adoption may be slower. Like, I don't know that you're gonna see whole enterprises just buy in and increase their payout 83% a month, or whatever, right? But I think if you do it in a smart way, by division, and you have action plans and you have specific use cases and you have systems in place to monitor the use and to measure the productivity gains, then I think over one to two years, why wouldn't you pay this for everyone in the organization?

[01:02:58] Paul Roetzer: But based on our understanding, and all the big enterprises I've talked to, I don't know anybody ready to scale this across the entire organization. They're just not ready. I mean, we work really closely with some big marketing organizations, and they're just trying to figure it out, and they're gonna be ahead of most other departments in the company.

[01:03:14] Paul Roetzer: So again, HR, legal, finance, I can't see paying that per user, because you're just gonna have a bunch of unused licenses, in essence. Yeah.
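For context, the "as much as 83%" figure discussed above is straightforward arithmetic once you assume a baseline plan price. The $30/user/month Copilot add-on comes from the episode; the ~$36/user/month baseline below is an assumption (roughly a mid-tier Microsoft 365 enterprise plan at the time), so treat this as an illustrative sketch, not Microsoft's official math:

```python
# Rough sketch of the pricing math behind the "as much as 83%" claim.
# Assumption: a ~$36/user/month baseline plan. Only the $30 Copilot
# add-on price comes from the episode itself.

def copilot_increase_pct(base_per_user: float, copilot_per_user: float = 30.0) -> float:
    """Percentage price increase per user from adding the Copilot add-on."""
    return copilot_per_user / base_per_user * 100

seats = 1000          # a hypothetical 1,000-seat enterprise
base = 36.0           # assumed baseline $/user/month

monthly_before = seats * base
monthly_after = seats * (base + 30.0)

print(round(copilot_increase_pct(base)))   # 83 (% increase per user)
print(monthly_before, monthly_after)       # 36000.0 66000.0
```

That extra $30,000 a month for 1,000 seats is why the hosts frame adoption as a business-case question rather than a line-item one.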

[01:03:23] Mike Kaput: It all starts with the use cases and the strategy, like we've talked about for many years now. So Microsoft has also announced a version of Bing's AI-powered chat, and this version has enhanced security features designed specifically for enterprises.

[01:03:39] Mike Kaput: So they're calling this Bing Chat Enterprise, and they say the tool offers a higher level of data protection for businesses with privacy and security concerns about generative AI tools. Microsoft says that no data about chats with this new tool is saved, and the company cannot view your data. Now, this tool is actually going to be free for existing Microsoft 365 customers, and Microsoft also is reportedly going to release a $5 a month standalone version.

[01:04:08] Mike Kaput: So Paul, it certainly seems like the trend here is AI vendors rushing to kind of bake increased security into their products to serve, especially enterprise needs. Is that kind of what you're seeing or how you're reading this?

[01:04:21] Paul Roetzer: Yeah, definitely. And as you're talking, this thought popped into my mind: I wonder at what point these vendors start paying us for our data.

[01:04:30] Paul Roetzer: So, like, it's 30 bucks a month standard, but if we're allowed to use this data from your users, then it's $20 a month. It's almost like the Twitter mentality of paying creators to drive ad revenue, now you get a split of the ad revenue. It's what we've always talked about: we are the product, it's our data that they're building on top of. Like, at some point, do they pay us for the data?

[01:04:53] Paul Roetzer: I dunno, I'm just, again, thinking out loud here. But yeah, definitely. I mean, privacy, security, it's kind of been a recurring theme of, like, three of the topics we've talked about today. Yeah.

[01:05:04] Mike Kaput: So Bloomberg just broke a story that Apple is secretly working on AI tools, as well as building and using an internal chatbot nicknamed Apple GPT.

[01:05:14] Mike Kaput: Now, there's not a lot of details at the moment, but it sounds like, as part of these efforts, Apple is now using an in-house generative AI system called Ajax to develop a large language model that powers Apple GPT. Now, Paul, I know you're a longtime Apple watcher. This seems long overdue. I mean, Apple uses plenty of AI in its products and has done plenty of interesting things there, but we've seen them largely sit on the sidelines when it comes to this big generative AI race.

[01:05:43] Mike Kaput: Can you break this down for us, kind of where you see them being at right now?

[01:05:47] Paul Roetzer: Yeah, I mean, they have more AI in their products than probably any other company on the planet. I mean, the whole iPhone is powered by AI, all the apps, like everything it does operationally. So they have incredible AI capabilities and researchers.

[01:06:00] Paul Roetzer: They just don't release things before they're ready. I think that's the biggest thing with Apple: they're perfectionists, largely. And this isn't traditional software, as we've talked about a couple times today. So the user experience is definitely at risk if you put a product out that doesn't work all the time, or most of the time, that they can't fix with a quick patch.

[01:06:20] Paul Roetzer: So that's just not like Apple, to let go of control. At the same time, as we've said on this show before, to me the dream of this, and maybe the thing that takes over the market, is if Siri works. If you build an actual intelligent assistant that can not only answer your questions accurately every time, but can take actions on your behalf, it's game over.

[01:06:45] Paul Roetzer: With their distribution of the iPhone. And I would not be surprised at all if you hear nothing from them until September, or maybe even until next spring when they do the developer conference, whatever that is, and then all of a sudden they just drop, you know, Siri 2.0 on us, or Siri GPT or whatever it is, and the thing changes everything, just like that.

[01:07:07] Paul Roetzer: Yeah. I have to imagine there are dozens of people within Apple working on that exact thing right now, because to me it might be the biggest thing they could build at the moment.

[01:07:21] Mike Kaput: So in other big tech news, Google is now testing an AI tool that actually generates news stories. And this was revealed in some reporting by the New York Times.

[01:07:30] Mike Kaput: This tool is known internally at Google by the name Genesis, and it can generate news content based on information about current events. It's actively being pitched to outlets like the New York Times, the Washington Post and News Corp, which owns like Fox and a bunch of other media properties. A Google spokesperson said that this tool is not intended to replace human journalists and cannot replace human journalists.

[01:07:56] Mike Kaput: They were very, very stark about that. Now, Paul, do you believe the commentary that tools like this aren't meant to replace journalists? And do you even think it's likely that news organizations are going to adopt this type of thing?

[01:08:08] Paul Roetzer: I'm really intrigued by what exactly it is that's different about it from what we're seeing right now.

[01:08:15] Paul Roetzer: Like, I was trying to think about that when I was reading the article: what could be specifically related to news here that isn't already available with these other tools? I think it's assuming people are more naive than they are to say this doesn't replace jobs, or to go and try to pretend like the tech isn't gonna replace jobs.

[01:08:35] Paul Roetzer: It's going to. I think at some point we have to just accept that and stop pretending like that isn't the outcome, or at least a partial outcome, of what this technology does. So I just think it's offensive, honestly, to journalists that you're gonna show up and present this technology and pretend like it's not gonna have some effect on them.

[01:08:56] Paul Roetzer: So what that is, and how significant it is, and how close we are to that happening, I have no idea. But I think we need to be realistic here about where this technology is going, and it's gonna affect people.

[01:09:09] Mike Kaput: And one thing you've repeatedly said in the past is to look at incentives, especially business models, and journalism is not the world's most healthy business model these days.

[01:09:20] Mike Kaput: So there is direct incentive to cut costs, do more with less, potentially automate some of this work. Yep. All right, we're gonna wrap up here with a few updates under what I would file under jaw-dropping AI technology. And the first is a stunning series of AI-generated videos, which have just come out and shown us a kind of brave new frontier in generative AI.

[01:09:46] Mike Kaput: So a company called Fable has released what it's calling showrunner AI technology, which allows you to generate new episodes of TV shows from scratch. And this is all part of a project that it's calling, sort of creepily, The Simulation. The technology has been on display, and we linked to this in the show notes, on their Vimeo channel, where the company has released full episodes, generated totally by AI, of new content of the hit show South Park.

[01:10:10] Mike Kaput: So each episode is a completely new creation. It features a cohesive plot and the exact visuals and character voices from the show. It is functionally a full episode of television created by AI. Now, Fable actually trained the AI on hundreds of creative assets from the show, which allowed it to generate episodes based on high-level prompts, like, give me a South Park episode about A, B, or C.

[01:10:44] Mike Kaput: These results are wild. And if you've seen South Park at all, you should definitely check these out, because they're pretty uncanny. What were your thoughts when you first saw this, Paul?

[01:10:53] Paul Roetzer: It was pretty wild. So it is research only. They have no rights to the IP of South Park. There's no apparent relationship with South Park or the creators of South Park.

[01:11:02] Paul Roetzer: That was rumored for a while, 'cause this tech has been known for a little while. But as of right now, it doesn't appear there's any connection there. It's pretty remarkable. I actually built this into my opening keynote at MAICON this week, so I have an example of this. We'll probably have to dive into this one more, maybe as a main topic down the road. But they're not trying to just build TV shows. That is part of it, but they're trying to build AGI, artificial general intelligence.

[01:11:32] Paul Roetzer: Their basic premise is, when you chat with ChatGPT or Inflection Pi or Anthropic Claude 2, whatever it is, the chatbot pops into existence and then it goes out of existence. They don't want it to go out of existence. They want to create AI simulations that live on, even when you turn your computer off, and continue learning from the world around them.

[01:11:51] Paul Roetzer: And they want it to, in essence, be the Truman Show. They want you to raise AGIs. They want me or Mike or whomever to be able to go in, create simulations, create little worlds, create little AI characters, and those AI characters continue to move around the world and exist even when we're not with them.

[01:12:09] Paul Roetzer: And they learn and they grow and they develop intelligence. They're creating a sandbox for intelligence. It is crazy. Like, I don't know, I was having trouble over the weekend processing this one and thinking about it. But they want to create reality shows for AI, where we watch 'em grow up and become super intelligent.

[01:12:29] Paul Roetzer: Again, topic for another time. I can't get into all of it in a rapid fire, but it's wild.

[01:12:36] Mike Kaput: Part of the value, though, of these conversations is you could read a story about this and be like, wow, that's pretty wild, a South Park AI episode. But there's so much more going on beneath the surface of what these are.

[01:12:49] Paul Roetzer: There's always some crazier story beneath the surface.

[01:12:54] Mike Kaput: So another interesting development: popular AI video and audio editing software Descript, which we rely on here at the Institute, has released a new feature called Eye Contact. Eye Contact uses AI, quote, to subtly adjust your gaze in video, so it appears you're looking directly into the camera even when you're reading something off camera.

[01:13:15] Mike Kaput: So that means you could easily be looking to either side, looking at a script, slides, notes, while still creating an authentic connection with viewers, through AI essentially correcting your eye contact with the camera. Now Paul, this is a pretty good example of the advanced innovation we're seeing in what I would classify as very affordable AI tools.

[01:13:39] Mike Kaput: So what should marketers be paying attention to here and thinking about?

[01:13:43] Paul Roetzer: This fits under the law of uneven AI distribution: just because the tech exists doesn't mean you'll accept what you have to give up to use it. This one's weird to me. Like, it's a cool tech. I love Descript. I don't know that we'll use it.

[01:13:57] Paul Roetzer: Yeah, I just dunno if I would use it. I was explaining this when my family was over Friday night for a picnic in our yard, and I was explaining this tech, and they were all creeped out. Yeah. And I was like, well, it's probably gonna be on all social media platforms, like anywhere you use it.

[01:14:09] Paul Roetzer: Like, this is not gonna be a hard thing to replicate. I don't know, this is one that's gonna take a little while for me to get used to. I think it's kind of creepy.

[01:14:17] Mike Kaput: Yeah, I'll have to try it out, because I could see, maybe it's not like this, but it could be a bit unnerving. I don't think humans stare at each other a hundred percent of the time when they're talking.

[01:14:29] Paul Roetzer: So it's natural to look away. Yeah. Like, when I talk, I think I look away, it's just how it is. So it'd be really weird.

[01:14:35] Mike Kaput: Yeah. Well, so we'll see if this one backfires, but it is out there if you want to try it. People will use it, I'm sure. Yep. All right. Last but not least, we're looking at some exciting new AI tech that we're experimenting with, called Playground AI.

[01:14:51] Mike Kaput: Now, we have no affiliation with them. As of today, Playground AI is just an image creation and editing tool that allows you to create really great images and easily do professional-grade image editing using only text prompts. So this is just something we discovered that we thought was worth talking about, because, Paul, you've been playing around with the tool recently.

[01:15:10] Mike Kaput: Can you kind of walk us through why it's worth paying attention to?

[01:15:13] Paul Roetzer: Yeah, I'd check it out. Just, like, quick thoughts: what it does is it gives you access to five different image generation models. So there's Playground v1, Stable Diffusion 1.5, Stable Diffusion 2.1, Stable Diffusion XL, and then DALL-E 2.

[01:15:26] Paul Roetzer: And so you can go in and just build images like anything else, but the one thing I really liked is they have these pre-trained filters. So you have, like, Saturated Space, Lush Illumination, Warm Box, Cinematic, Masterpiece, Black and White. And so whatever your prompt is, you can apply this filter, and it basically layers a pre-prompt to give you that kind of output.

[01:15:47] Paul Roetzer: And so I experimented with it for my keynote for MAICON, and it was pretty cool. So yeah, just a fun tool. I don't even think I'm paying for it, I think it's just a free tool, and it gives you a chance to experiment in a little bit more ways with these different tools and add some different variables that maybe you wouldn't know to add yourself.

[01:16:01] Paul Roetzer: So, cool tech. Just check it out.
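The "filter as layered pre-prompt" idea Paul describes can be sketched in a few lines. This is a hypothetical illustration, not Playground AI's actual implementation: the filter names come from the episode, but the style text attached to each one is made up for the example:

```python
# Hypothetical sketch of a pre-trained "filter" mechanism: the filter
# layers a canned style pre-prompt onto whatever prompt the user typed.
# Filter names are from the episode; the style strings are invented
# for illustration and are not Playground AI's real pre-prompts.

FILTERS = {
    "cinematic": "dramatic lighting, anamorphic lens, film grain, 35mm",
    "lush illumination": "soft volumetric light, glowing highlights",
    "black and white": "monochrome, high contrast, fine grain",
}

def apply_filter(user_prompt: str, filter_name: str) -> str:
    """Combine the user's prompt with the chosen filter's style pre-prompt."""
    style = FILTERS[filter_name.lower()]
    return f"{user_prompt}, {style}"

print(apply_filter("a lighthouse at dusk", "Cinematic"))
# a lighthouse at dusk, dramatic lighting, anamorphic lens, film grain, 35mm
```

The point of the design is exactly what's described above: the user doesn't need to know the 70 extra words of prompt craft, because the filter appends them automatically.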

[01:16:05] Mike Kaput: Yeah, that's awesome. And that assistive feature with those filters can be really helpful. I know, on my end, using some of the image generation software, you see these amazing results people get, but it really takes some trial and error. I don't know exactly how to get those results.

[01:16:18] Paul Roetzer: Right? Yeah, that's exactly it. Like, I'm not an advanced designer. Like, I don't know how to explain what I want, and DALL-E is like, okay, digital art, illustration, like these boring samples. And this thing is like 70 words of additional prompt when you pick these things. It's like, okay, I couldn't do that. Yeah.

[01:16:36] Mike Kaput: All right, Paul. Well, we've covered quite a lot of ground this episode. As always, thanks for shedding some light on some of the complex topics going on in AI. We really appreciate the time and the insights.

[01:16:45] Paul Roetzer: Yeah. Thanks everyone for listening. Hopefully we'll be seeing a lot of you in Cleveland this week for MAICON.

[01:16:51] Paul Roetzer: If you want to grab a last minute ticket and you're still around, that's MAICON.ai, M-A-I-C-O-N dot ai, and I think we have the AIPOD100 code still set up. If you grab a last minute ticket or a flight to Cleveland, we'd love to see you. Otherwise, we'll talk to you again next week.

[01:17:06] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:17:27] Paul Roetzer: Until next time, stay curious and explore AI.
