
[The Marketing AI Show Episode 57]: Recap of 2023’s Marketing AI Conference (MAICON), Does Sam Altman Know What He’s Creating? and Generative AI’s Impact on Jobs


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


That’s a wrap on MAICON 2023, and Paul and Mike break down some common themes, key takeaways, thoughts on what’s next, and much, much more on this week's episode of The Marketing AI Show.

And while our annual Marketing AI Conference was top of mind, a story on Sam Altman and more news on generative AI’s impact on jobs were two topics that needed to be covered.

Listen or watch below, and scroll down for show notes and the transcript.

This episode is brought to you by Jasper: on-brand AI content wherever you create.

Listen Now

Watch the Video

Timestamps

00:02:51 — The recap of MAICON 2023

00:18:45 — Sam Altman and the article published by The Atlantic

00:34:36 — Generative AI and the future of work - according to McKinsey

00:44:39 — Altman’s Worldcoin launches

00:47:58 — Publishers want billions, not millions, from AI companies

00:52:01 — Netflix’s AI-focused new role, potentially paying $900K

00:55:18 — Musk may have issues with trademarking X

00:57:29 — The big players launch Frontier Model Forum

00:59:12 — Microsoft’s Copilot could be delayed until 2024

01:01:39 — Cohere introduces Coral

01:04:28 — iPhone’s new Rewind feature

01:06:50 — AWS expands Amazon Bedrock

Summary

A recap of MAICON 2023

In case you missed it, it’s been a huge week here at Marketing AI Institute as we just wrapped up our 2023 Marketing AI Conference (MAICON) last Friday. This event was our biggest yet by far, with 700+ amazing marketers and business leaders coming together in Cleveland (our home base) to share, collaborate, learn, and grow together. We had a spectacular lineup of speakers, 2+ days of incredible content, and world-class conversations and connections between some of the top professionals in AI, marketing, and business. Paul and Mike talked through some of the highlights. Whether you attended or weren’t able to make it, we hope this portion of the podcast creates some value for you and helps you learn more about this unique event in our industry.

The Atlantic posts an interesting story on Sam Altman

The Atlantic just published one of the most comprehensive deep dives into OpenAI—its history, where it stands today, and where it’s going. And this article was informed by several in-depth interviews with CEO and co-founder Sam Altman. Titled “Does Sam Altman Know What He’s Creating?”, the article looks at how OpenAI went from near-failure trying to develop rudimentary AI models to GPT-4, which Altman described to the reporter as an “alien intelligence.” This article is long but well worth reading in full. The link is below. There’s too much to summarize in this short paragraph, so be sure to tune in. You won’t regret it!

Generative AI and another look at the future of work

Will AI take your job? According to some new research from McKinsey, it’s complicated. McKinsey just released a report called “Generative AI and the future of work in America.” In this report, they attempt to forecast AI’s impact on employment in the U.S. Overall, McKinsey said that employment changes caused by AI that they’ve been tracking in earlier research “are happening even faster and on an even bigger scale than expected.”

Some of the research’s key findings include:

  • By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI.
  • Generative AI could enhance the way STEM, creative, and business and legal professionals work rather than eliminating a significant number of jobs outright.
  • Automation’s biggest effects are likely to hit other job categories, which include office support, customer service, and food service employment.
  • An additional 12 million occupational transitions may be needed by 2030. As people leave shrinking occupations, the economy could reweight toward higher-wage jobs. Workers in lower-wage jobs are up to 14 times more likely to need to change occupations than those in highest-wage positions, and most will need additional skills to do so successfully. Women are 1.5 times more likely to need to move into new occupations than men.
  • The United States will need workforce development on a far larger scale as well as more expansive hiring approaches from employers. Employers will need to hire for skills and competencies rather than credentials, recruit from overlooked populations (such as rural workers and people with disabilities), and deliver training that keeps pace with their evolving needs.

There’s plenty more data in this research that is worth checking out, and this is a segment in the podcast worth listening to.

We can’t thank our attendees, speakers, sponsors, and team enough for an incredible MAICON 2023, and we hope you enjoyed the recap. We’ll be back next week with more AI news…all the news we can fit in just about an hour! The Marketing AI Show can be found on your favorite podcast player, and be sure to explore the links below.

Links Referenced in the Show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: You always worry people aren't going to want to stand up and share strategic secrets or competitive stuff. But I mean, we had tons of people standing up, sharing, here's the problem I'm looking at solving, here's how I'm thinking about it, and other people saying, yeah, I've got the same problem. So again, just truly collaborative and pretty beautiful to see, actually.

[00:00:17] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:38] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:47] Paul Roetzer: Welcome to episode 57 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are fresh off of MAICON 2023 in Cleveland. I don't know about you, but I'm still trying to get my brain power back after three days. Oh my gosh. I know we're going to talk, we'll dig into MAICON 2023 to start, but wow, what a week.

[00:01:12] Paul Roetzer: It's kind of hard to just go back to the normal job after doing that. I always say, early in my career I did a lot of events, specifically the Junior PGA Championship. I used to run media relations for that event early in my life. And you always go through this, like, event hangover period where the normal day-to-day doesn't meet up with that kind of high you get from being around that many people and going through the grind of putting on an event like that.

[00:01:41] Paul Roetzer: So yeah, it was always that like, kind of let down the first day you're starting back to work. It's like, ah, I want to go back to that week. That was fun though. That was awesome. That was awesome. Alright, we'll get into MAICON 2023 in a minute. But this episode is brought to us by Jasper, who was actually one of our sponsors at MAICON 2023.

[00:01:58] Paul Roetzer: Jasper's a generative AI platform transforming marketing content creation for teams and businesses. Unlike other AI solutions, Jasper leverages the best cross section of models and can be trained on your brand voice for greater reliability and brand control. With features like brand voice and campaigns, it offers efficiency with consistency that's critical to maintaining a cohesive brand.

[00:02:21] Paul Roetzer: Jasper's won the trust of more than 100,000 customers, including Canva, Intel, DocuSign, CB Insights, and Sports Illustrated. Jasper works anywhere with extensions, integrations, and APIs that enable on-brand content acceleration on the go. Sign up free or book a custom demo with an AI expert at Jasper.ai.

[00:02:43] Paul Roetzer: All right, Mike, episode 57. Let's go. We got three main topics and we got a bunch of rapid fire.

[00:02:49] Mike Kaput: Sounds great. Well, Paul, like you mentioned, it's been a huge past week here at Marketing AI Institute since we just wrapped up MAICON, our Marketing AI Conference. The event was our biggest yet by far. We had 700-plus incredible marketers and business leaders coming together in Cleveland, where we are based, to share, collaborate, learn, and grow together.

[00:03:10] Mike Kaput: And we actually had a spectacular lineup of speakers. We're going to mention a couple of them in a minute. We had two plus days of incredible content. And you know, I, for one at least experienced plenty of world-class conversations and connections with some of the top people in a number of different industries across AI, marketing and business.

[00:03:29] Mike Kaput: So as one of our topics today, we wanted to go through some of the highlights here and, in the process, both create some value for anyone who couldn't attend, share some of what was shared at MAICON, and just really give our audience an idea of why this is such a unique event in the industry.

[00:03:47] Mike Kaput: So to kick that off, Paul, this year was somewhat different than past years. It was our biggest year yet. I want you to maybe give us a sense of the context of what made 2023's MAICON so special and so different from previous years.

[00:04:01] Paul Roetzer: It definitely felt different, I think from the very beginning.

[00:04:07] Paul Roetzer: We did the workshops on day one, you and I. You did the applied AI workshop. I did the strategic AI leader workshop. Mine had like 150 people in it. I think yours was probably close to a hundred people. Yeah. So last year's event, the 2022 conference, had 200 people at the entire conference. And we had over, what, 250 in the optional first-day workshops.

[00:04:31] Paul Roetzer: So it just immediately was a different level of attendance, certainly engagement. I think for me the biggest change this year, well, there were a lot of them, but one of the biggest ones was, I feel like up until this year everyone just looked to us for the answers. Like, things weren't happening that much in the industry without them trying to find the answers from us.

[00:04:57] Paul Roetzer: And this was the first year where I felt like everyone was out on the frontier experimenting. What happened with ChatGPT and generative AI as a whole just opened up the use of these tools to the masses. And so everyone there, all 700-plus people, were engaged in experimenting themselves and wanting to share and wanting to learn from each other.

[00:05:18] Paul Roetzer: So it just felt like a community all of a sudden, like a truly engaged group of peers who were all working to solve for this. So that was the one thing, and it really felt like the early days of inbound to me. You know, I think everybody who listens to this show knows we were HubSpot's first agency partner back in 2007, before I built and sold that agency, and Mike worked at that agency with me for a long time.

[00:05:43] Paul Roetzer: And those early days of the Inbound conference, it was just this amazing community feel. All the partners were working together, collaborating with each other, sharing ideas and frameworks and templates. And that's what this really felt like to me. It just felt like the beginnings of a really valuable community of peers.

[00:06:06] Paul Roetzer: The other thing that jumped out to me was the responsible AI theme, which I absolutely built into the agenda. Like I wanted that theme to come through, but it really came through way more than I expected. It was almost like every speaker had that integrated into what they were saying. Everyone you met there seemed to be all working toward that same direction of trying to find a human-centered way to do this stuff.

[00:06:30] Paul Roetzer: There was no feeling of, this is a bunch of cheap tricks and shortcuts to just do marketing more efficiently and make a bunch of money. There was not a single person I talked to who I felt was there for that reason, like, how do we just take shortcuts and stuff. And then the other thing that kept emerging to me was, I didn't meet a single person where I felt there was any ego whatsoever.

[00:06:54] Paul Roetzer: It didn't matter if it was, you know, the top speakers at the event or attendees who were wildly accomplished. We had major brands there, like leaders from big companies. Nobody seemed to feel they were better than anybody. It was truly just about collaboration and learning and sharing.

[00:07:11] Paul Roetzer: So it was just a really special time for me. I mean, for me and you, this goes back a long time. We've been working on this idea of trying to create this community and bring people together because we saw AI transforming everything. And I feel like this was the moment where it all started to click, like everyone else was on board and helping move in that direction.

[00:07:34] Paul Roetzer: And now it just feels like it's going to, you know, really snowball from here, because now everyone is experimenting. I feel like next year we're going to have a groundswell of speaker submissions of amazing stories of people trying to figure this stuff out. So yeah, I mean, that was my take. How did you feel about it?

[00:07:51] Paul Roetzer: Like what did you think of last year versus this year?

[00:07:53] Mike Kaput: Yeah, very similar, in that everyone was sharing how to get to solutions instead of expecting answers. You know, we often, after every talk, get tons of questions, rightly so, about how do I solve for this? Or how do I solve for that? Or what tools should I use?

[00:08:08] Mike Kaput: Those are all really valuable, and I want people to keep asking them, but I definitely saw more diversity of people actually being able to exchange information and insight with each other to solve that for themselves. And I think it especially came out to me during the workshop. The workshop I ran, Applied AI for Marketers, was designed to walk people through, over several hours, a framework of how to get at high-priority AI use cases.

[00:08:33] Mike Kaput: And when we all came together at the end as a group, the discussion was incredible. I was a little worried because I just expected there was a chance people would just ask me, okay, how do I go solve for every single use case? I don't know how to solve for every single use case. However, 85% of the conversation was people in the audience sharing with each other and saying, oh, you have that use case?

[00:08:55] Mike Kaput: Here's a couple tools that do that. Here's what we're looking at. And it was insanely valuable to have that kind of cross pollination of ideas and

[00:09:03] Paul Roetzer: solutions. Yeah, it really was. It was cool. Ours was the same with the strategic leader workshop is, you know, that one's more problem based and you always worry people aren't going to be, want to stand up and share Yeah.

[00:09:13] Paul Roetzer: You know, strategic secrets or competitive stuff. But I mean we, we had tons of people standing up, sharing, here's the problem I'm looking at solving, here's how I'm thinking about it. But other people saying, yeah, I got the same problem. So again, just truly collaborative and pretty beautiful to see actually.

[00:09:29] Paul Roetzer: I mean, just how it all kind of emerged. Yeah, it was really exciting.

[00:09:34] Mike Kaput: I did want to ask you, you know, we had dozens of incredible speakers, but maybe could you share a few highlights of the main stage speakers, who they were, kind of just to show the diversity of people and topics that we brought together this past week?

[00:09:48] Paul Roetzer: Yeah. So the way this event comes together is I largely own kind of the agenda and the speakers and the story. And again, it's weird, I've been doing this now for five years. I never thought of myself as an event person until I was driving in Tuesday, heading downtown.

[00:10:06] Paul Roetzer: 'Cause we're like 10 minutes from the convention center where we live. And I'm like, oh, I guess I'm kind of an event person now. It's like five years of doing this. So the way I think about a conference is I try and think of the story that I want told and what are the key themes that need to come through.

[00:10:25] Paul Roetzer: So certainly ethics and responsible AI was a critical theme. I wanted, what is going on at bigger brands? How are they thinking about it? I wanted people who are actually building AI councils and roadmaps. And then obviously use cases and tools and case studies and things like that. So you want this diversity of perspectives, but you want it to be kind of a cohesive story.

[00:10:46] Paul Roetzer: So right off the bat you can actually see it, because I did the state of AI in marketing and business, and I just kind of laid the groundwork with 10 things people should know about what's going on. And at the core of that was large language models: one, what they are and their importance. You know, I said LLMs are the new CRMs, not in function, but in foundational importance to marketing, sales, and service.

[00:11:07] Paul Roetzer: That LLMs are just getting started, but they're the foundation for what comes next. And we talked about AI agents and multimodal training and all these other things that are going to happen, and kind of set up. Then Christopher S. Penn came up and went deep on large language models, how they work, what the opportunities are, bunches of use cases, and it was perfect.

[00:11:25] Paul Roetzer: It just kind of teed everybody up, because I wanted them to understand large language models at a more fundamental level, because the rest of the conference would make much more sense if you understood that. So that was great. Then Dan Slagen, the CMO of Tomorrow.io, did an amazing talk on the future org chart in marketing and what teams look like.

[00:11:47] Paul Roetzer: And the beauty there was he doesn't know, and he was straight up saying that, like, this is what we're thinking. And he showed these amazing frameworks, and like a report card to grade where you're at, which I thought was beautiful. But he was showing, this is what we're doing, and next year it might look different.

[00:12:01] Paul Roetzer: But this is where we're at today. You had Meghan Keaney Anderson from Jasper do an amazing talk on the transformation we're going through in marketing strategy. Day one ended with, you know, probably the highlight for me, with Professor Ethan Mollick from the Wharton School of Business. And beyond the obvious, it was just a fireside chat I did with him, and it was amazing.

[00:12:23] Paul Roetzer: I could have sat there and talked to him for three more hours. I had people texting me afterwards like, dude, I would've stayed for three more hours if you would've kept going. I think a lot of attendees didn't know Ethan, like, weren't aware of his writing, his One Useful Thing newsletter and blog, but I can almost guarantee you everyone in that room is a subscriber.

[00:12:43] Paul Roetzer: After that 45-minute conversation, it was just mind-blowing. It really was an incredible talk. And then Cassie Kozyrkov, the chief decision scientist at Google, started day two with whose job does AI automate, and her thinking-versus-thunking analogy. You know what thunking is? Look it up. It is a cool concept.

[00:13:04] Paul Roetzer: We had a Building the Future panel with Gary Survis from Insight Partners and some of their portfolio companies. We had Jessica Hreha from VMware go deep on how they're building their marketing AI council internally. We had the CMO Perspectives panel, and then Olivia Gambelin ended with an ethics and AI talk.

[00:13:22] Paul Roetzer: So those are just some of the main stage talks, but you can get a sense of the themes of the conference through them. And it's very much by design, which talks were selected, which speakers were selected, and then how it flowed throughout. Like, I wanted the conference ending on the ethics part, the more human part.

[00:13:40] Paul Roetzer: You know, it was very important to me that that was what people left with, thinking about. So, yeah, I mean, it's so crazy. Building an agenda and getting all those speakers is a challenging thing. And to see it come together and to, you know, see the impact it had on people is a really special thing.

[00:13:59] Paul Roetzer: And just the early ratings are like through the roof. The speaker ratings were incredible. So yeah, I mean, just amazing. All the talks were incredible, even tons of breakouts. You did a super popular breakout; I think yours was actually the highest rated, the 45 Tools in 45 Minutes.

[00:14:16] Paul Roetzer: People love the tactical thing. Just endless amounts of incredible content, and I can't wait to do it again next year.

[00:14:25] Mike Kaput: Yeah, same. I'm already pumped. As we wrap up here, I did want to just share a couple of my quick takeaways. These are by no means comprehensive, and I tried to pick ones that, you know, I think actually resonate even if you weren't at this particular session or keynote, or if you weren't at the event. But you had mentioned, Paul, during your keynote, as part of many really important reminders, that no matter how fast you think things are moving today, no matter how many advancements we've talked about every single week on this podcast, you said, quote, this is the least capable AI you will ever use.

[00:14:59] Mike Kaput: And this being in reference to every tool, every model, every technology you use today. That could be a little scary, but I also found it exciting, because I don't think this technology is slowing down. We're seeing it accelerate. So I'm very excited for what positive use cases there could be moving forward.

[00:15:18] Mike Kaput: I think there was a helpful reminder to people, though, that this stuff is moving fast. You can't just sit still. Another takeaway I got was from Christopher S. Penn's keynote. Chris actually shared, and I love this framing: thanks to large language models, everyone is a developer, everything is now software, and every word is now an opportunity.

[00:15:39] Mike Kaput: I found that really empowering, especially as someone with a writing, not a coding, background. That was awesome. And he also just gave some incredible breakdowns of what could be coming next. I won't get into all of it, but one thing I really enjoyed is he was talking about LLMs' possible impact on unbranded organic search and the fact that we may not see nearly as much unbranded organic search as people increasingly turn to conversational search experiences.

[00:16:07] Mike Kaput: And he offered some actual ways to defend against that. And I won't go through every single one, but number one is build your brand. The way you get into models is being mentioned more on the internet. In a lot of ways, we might be going back to some more traditional PR and kinda link building, or being featured on certain interviews, podcasts, media appearances, et cetera, as a way to get in front of more people and therefore more models.

[00:16:35] Mike Kaput: And, you know, Ethan Mollick obviously had probably hundreds of takeaways per minute, but I did love his really practical advice for how to get started with AI, which I'm even going to try to adopt more and more. If you've done nothing with AI or you're struggling to kind of get traction, don't worry about applying 18 different tools to your workflow just yet.

[00:17:00] Mike Kaput: Invite one AI tool into everything you do. He gave an example of, like, look, just throughout your entire day, every single action you're taking, every single thing you're trying to accomplish, use, for instance, ChatGPT, or try to. It won't work for everything, but you'll quickly start relying on this as almost like an assistant or a colleague.

[00:17:19] Mike Kaput: And you're going to really get deeply masterful with these tools really, really quick by doing that.

[00:17:24] Paul Roetzer: Yeah. I don't want to make this a big promotion at all, obviously, but we get asked all the time, is it recorded? Yes. So we did record all the main stage sessions. There's like 15 main stage sessions, and that is available on demand.

[00:17:39] Paul Roetzer: There is a cost associated, but if you go to MAICON.ai, that's M-A-I-C-O-N dot ai, there is an on-demand package. And MAICON 2024 tickets are going on sale this week. It'll be September 10th to the 12th in Cleveland in 2024. If you missed out this year, you want to be there. We're coming back bigger and better than ever.

[00:18:02] Paul Roetzer: I don't know how big. We were planning for 1,200 to 1,500, you know, going from 700 to 1,500. I had a lot of attendees tell me that's not enough, that they're going to bring, you know, I had a number of people come up and say, I'm bringing my whole company next year. We're bringing 20 people from our department next year.

[00:18:19] Paul Roetzer: So it definitely resonated with the audience, and we'll figure out how big we can go, you know, next year. I, again, don't have the brain power for that at the moment. I'm going to hopefully regroup later this week and start thinking about that. But definitely bigger and better next year. If you weren't there, we would love to have you there next year.

[00:18:39] Paul Roetzer: So save the date, September 10th to the 12th, in Cleveland again.

[00:18:44] Mike Kaput: All right, so for our second topic, speaking of not having the brainpower for something, this is a doozy. The Atlantic just published what I would consider one of the most comprehensive, deep dives into OpenAI, Sam Altman, their history, where the company stands today and where it's going.

[00:19:02] Mike Kaput: And this is all kind of underpinned by several in-depth interviews with Sam Altman, the CEO and co-founder of OpenAI. Somewhat worryingly, the article is titled, Does Sam Altman Know What He's Creating? And it looks at how OpenAI kind of went from near failure many years ago, trying to develop the initial GPT models, to getting to things like GPT-4, which Altman actually described to a reporter as a, quote, alien intelligence.

[00:19:32] Mike Kaput: Now this article is really long and has a lot of different points to it that we're going to get into, but it is well worth reading in full. We'll link to it in the show notes.

[00:19:40] Paul Roetzer: And you do need a subscription, by the way, to The Atlantic. But honestly, if you can get a monthly subscription, I would pay for it just to read this article.

[00:19:49] Paul Roetzer: Yes, I think they have a free trial as well. So whatever you need to do, you have to read this article for yourself.

[00:19:54] Mike Kaput: And some of the things that stood out, and Paul, I know you had a lot of other things that also stood out to you, is that, you know, it's pretty clear that Altman, despite knowing and acknowledging there are risks and safety measures needed to shepherd forward responsible AI, believes that advanced AI is not stopping and is going to usher in a golden age.

[00:20:16] Mike Kaput: However, that golden age is going to potentially be built on pretty serious disruption because Altman readily admits in this article, he does not know how powerful AI will become or what that will even mean to your average person. And he had this kind of standout quote that said quote, A lot of people working on AI pretend it's only going to be good.

[00:20:38] Mike Kaput: It's only going to be a supplement. No one is ever going to be replaced. Jobs are definitely going to go away, full stop. However, somewhat paradoxically, he does note that, you know, better jobs will fill their place, and he's also at the same time kind of advocated for possibly pursuing universal basic income (UBI) measures, basically giving people a set amount of income to even out economic disruption if there's mass unemployment due to AI.

[00:21:05] Mike Kaput: But what they really get to throughout this kinda wide-ranging interview and exploration of OpenAI is their work toward AGI, artificial general intelligence. And there are a lot of quotes in here, Paul, and I know a lot of these jumped out to you, that really just show Altman taking 110% seriously the arrival of AGI, the arrival of potentially, you know, better-than-human artificial intelligence across a range of functions, and that happening relatively soon.

[00:21:37] Mike Kaput: So Paul, first off, as you're reading this, and I'll kinda let you just take and run with what jumped out to you, how much of this was like, this seems like what Altman and OpenAI appear to actually believe, versus kind of a PR move? Or did you see it another way?

[00:21:52] Paul Roetzer: I don't think this is a PR move.

[00:21:57] Paul Roetzer: No, it's a disturbing article. I mean, honestly, we probably are closer to this stuff than most. You know, I've read pretty much every interview Altman's ever done, read pretty much everything you can read on these topics. I'll say, during Ethan Mollick's fireside chat at MAICON, he said, if you haven't lost at least three nights of sleep, you don't properly understand AI.

[00:22:23] Paul Roetzer: Like, what's going on? I may lose three nights of sleep this week because of this article. It changed me. It's really hard to explain this, but there have been, I don't know, a handful of moments over the last 12-plus years I've been studying AI where I just looked at everything differently.

[00:22:42] Paul Roetzer: This article is one of those. The last time it happened was probably when I read Genius Makers in 2021, and I sold my agency because of that book. Like, I just knew I had to go all in on all of this. I don't know what exactly the outcomes of this are for me, but I definitely look at AI and the future differently after reading this interview and kind of the summary of what's going on. We can dig into some specifics, but there are a few things I noted as just highlights, and I just read this this morning.

[00:23:13] Paul Roetzer: I saw this last week. It's in the September issue of The Atlantic, so it's not in print yet, but you can get it digitally. So I read it through Apple News, which I have a subscription for. So I did not read it last week during MAICON. I was like, I can't do this. I read the first two paragraphs and I thought, I can't go here right now.

[00:23:29] Paul Roetzer: So I went there this morning. Okay, so one, it starts off with the fact that they have something they can't release. The opening paragraph, like the lead, says: on a Monday morning in April, Sam Altman sat inside OpenAI's San Francisco headquarters telling me about a dangerous artificial intelligence that his company had built but would never release.

[00:23:51] Paul Roetzer: His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers. The author did come back around to that idea of this thing they wouldn't release later on, but they did not get into details. You're kind of left theorizing what that could be, and I do have some theories that I probably won't get into right now.

[00:24:12] Paul Roetzer: So they have something way more powerful. They have said before they're not training GPT-5, but they also say they could be doing a training run at any moment. So I think it said specifically: Altman insisted that they had not yet begun GPT-5's training run. When I visited OpenAI's headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale.

[00:24:39] Paul Roetzer: They want to keep going bigger, to see where this paradigm leads. After all, Google isn't slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. We are basically always prepping for a run, the OpenAI researcher told me. So they're not just sitting around with GPT-4 out in the wild and not doing things.

[00:25:01] Paul Roetzer: There's other stuff going on, the impact on knowledge work, which we'll come back around to. There was a concept in here that started disturbing me that I hadn't thought a lot about: they talked about AI agents as an autonomous corporation, and I gotta find that passage because I think it's important to understand what they're talking about.

[00:25:24] Paul Roetzer: The basic premise is that once you build these, like, AGIs or superintelligent agents, they can start to work together. And once they do that, they actually start to function more like an autonomous corporation than individual AI agents. And when you start to think about it in those terms, you start to realize why the fears exist.

[00:25:54] Paul Roetzer: Like, I think that was part of what really struck me about this article. We've talked a lot about, you know, the Future of Life Institute letter and these fears about existential risk to humanity and so on. This was the first article I read where I actually saw how they were connecting those dots.

[00:26:09] Paul Roetzer: Where I actually understood the fears that Geoff Hinton and these other people have, and why they may be more near-term than we thought, or than has been previously discussed. So let's see... actively develop true agency. We're not talking about GPT, we're talking about autonomous corporations. Okay, here's Sutskever.

[00:26:29] Paul Roetzer: Who's Ilya? Ilya Sutskever is one of the co-founders, and also one of Geoff Hinton's proteges from, you know, 2011, who we've talked about before. I think he's the chief scientist or something at OpenAI. We're not talking about GPT-4, we're talking about an autonomous corporation, said Sutskever. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused.

[00:26:59] Paul Roetzer: This is incredible: tremendous, unbelievably disruptive power. Then the author says: presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goals should we give an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?

[00:27:24] Paul Roetzer: That was when I was like, I gotta stop for a minute. I just had to kind of get up and process these ideas, because again, the article goes into great depth, actually providing examples of how this stuff happens and why they think it could happen. They talked extensively about the red teaming that occurred for GPT-4, which is the safety and alignment testing it went through before they released it in March of 2023.

[00:27:48] Paul Roetzer: It was one of the first times I've read about specific examples of what that red team unearthed within GPT-4, sort of these emergent capabilities it demonstrated that they weren't prepared for and that they had to guardrail in. Which then led me to, like, oh my gosh, they're not guardrailing this in the open-source models.

[00:28:08] Paul Roetzer: So, yeah, we always have this debate of closed versus open source, and right, you know, you unveil what they were saying in this article that GPT-4 could do before they neutered it so it couldn't do it. And you realize, like, oh wow, these open-source models can do that right now. So that affected me a little bit.

[00:28:27] Paul Roetzer: And then, you know, I think more than anything I took away that they aren't slowing down, they're speeding up. And no matter what they say about wanting regulations and laws, they clearly don't actually want them. They talked about the idea that Altman said something related to, like, a democratic society wouldn't want to slow down.

[00:28:54] Paul Roetzer: You don't want this in the hands of autocratic, communist societies, so, like, you have to keep going. So there's definitely that talking out of both sides of their mouth: oh yeah, we want regulation. What they really want is to control the regulation, and to influence, you know, the direction this research goes, and to convince the government that they should let them keep going or else the other countries will get ahead on this.

[00:29:20] Paul Roetzer: So that affected me a little bit. And then I think I just walked away with a deeper understanding of the importance of alignment and the safety research that's going on. It's a lot, but I do think it might affect what I do next, what we do next, in some profound ways. And I won't get into that right now.

[00:29:46] Paul Roetzer: I have some thoughts about what that has to be, but I don't know that we're doing enough with just talking about responsible AI and ethics and safety in some talks. I don't think that's going to be sufficient. I think we're probably going to have to do more, and this article gave me a sense of urgency for that.

[00:30:08] Mike Kaput: So one of the things, you know, obviously we're getting into some pretty deep, in-the-weeds topics that are definitely important. But one of the things that really jumped out to me as, like, oh shoot, okay, here's some real urgency to this, is when Altman talks very starkly about how many jobs will go away.

[00:30:25] Mike Kaput: Full stop. What did you make of that take? 'Cause we have talked about the impact on knowledge work, but I did find it interesting that he just readily admitted it. I suspected for a while that he believed it, but just hearing him say it, I was like, oh man, okay, we gotta start preparing now.

[00:30:43] Paul Roetzer: Yeah, I've heard him say that before. I agree. I mean, we've said it on the show, I think I said it on stage at MAICON: we're being unrealistic if we pretend like this isn't going to affect things. And you go back to the thing you quoted from my opening keynote about the least capable AI you'll ever use.

[00:31:02] Paul Roetzer: This article makes that come to life. You quickly realize that there are just way more powerful versions of what already exists that weren't released onto the world. They're training more advanced models that are trained on images, on video, on audio, on code. Right now, these models we use were trained on text, like, that's it.

[00:31:24] Paul Roetzer: So its understanding of the world, its ability to reason, its ability to write, its ability to do all these things is largely based on text training. Once it can see and understand images, which it can already, they're already training it. Bard is able to do this now: you give it an image, it interprets what's in the image and can analyze it for you.

[00:31:41] Paul Roetzer: So you can see that now. Do the same thing with video and audio and code, and you start to give it a representation of the world around it. And then they talk in this article about embodying that, that they want this in humanoid robots and they want it in robotics. Just last week, DeepMind, I think, put out an article that they're using language models to train robotic arms, you know, to develop more intelligence within these robotics.

[00:32:06] Paul Roetzer: So again, we don't want this to be overwhelming and stray too far from marketing and business, but this is real. You have to understand what they're actually building, because of the implications for jobs. If we sit here and say, listen, there's millions of knowledge work jobs at risk next year.

[00:32:27] Paul Roetzer: You could have somebody come back and be like, oh, you're so full of it, that's not true, it's just fearmongering. No, it's not. If you actually understand what they're building and what these things are capable of, it's almost irresponsible to not plan for that to happen. Because at this point, it's getting to where, I know we're going to talk about the McKinsey generative AI report in a minute, but I'm starting to feel like the probability is greater that there's a massive impact on jobs in the next, like, two to three years than not.

[00:32:59] Paul Roetzer: And I felt like it was like 50-50 before. But when you read stuff like this, it just affects the way you think about these things. And I still think it's the people who are willing to sort of just absorb this, have the sleepless night over it, and then wake up and start saying, okay, what do we do about this?

[00:33:18] Paul Roetzer: And that was the theme I kept saying throughout: this isn't going to be easy, it's not going to be clean. There are going to be disruptions, it's going to be hard for people. But as soon as we can get through that and just accept that there's an uncertain future ahead, and we all start working in the same direction to do this responsibly with humans at the center, that gives us the best possible outcome.

[00:33:41] Paul Roetzer: If we ignore it and pretend like this isn't happening and there aren't way more advanced models already in these research labs, then you've got problems. And so I think that's one of the big things we try and do at the show: we never fearmonger, never, like, overhype what's going on. This is just reality.

[00:34:00] Paul Roetzer: And we just need more people thinking about this critically: what does it mean, and how's it going to impact your career, your organization, whatever it is, because it's going to be challenging. Yeah. So I don't know. I mean, I think it's an important topic. Like I said, I don't want to go too far outside the boundaries of marketing and business and education, but I really think that the next two to three years is going to be way more disruptive than people think.

[00:34:31] Mike Kaput: That actually pivots perfectly into kind of our third main topic, because this whole idea of, you know, the question on a lot of people's minds is: will AI take my job? And according to new research from McKinsey, it's complicated. It's not super straightforward. They just released this report called Generative AI and the Future of Work in America.

[00:34:52] Mike Kaput: And in it, they're attempting to forecast AI's impact on employment in the US. It's not the first time they've tried to do this. And overall, they basically said the employment changes caused by AI that they've been tracking in earlier reports, quote, are happening even faster and on an even bigger scale than expected.

[00:35:10] Mike Kaput: So some of the research's key findings include things like: by 2030, McKinsey predicts the activities that account for up to 30% of hours currently worked across the US economy could be automated, and that trend is being accelerated by improvements in generative AI. However, they see gen AI enhancing the way certain professions work, like STEM, creative, business, and legal.

[00:35:37] Mike Kaput: And they say that that type of work might actually be enhanced, rather than a ton of jobs in those areas being eliminated overnight. However, automation's biggest effects are likely to hit other job categories. Those include things like office support, customer service, and food service employment. And what was really striking is they said an additional 12 million, what they call occupational transitions, i.e., needing to switch jobs.

[00:36:06] Mike Kaput: Forcefully.

[00:36:08] Paul Roetzer: Occupational transitions. Yeah.

[00:36:09] Mike Kaput: Yeah, it's a little bit jargony, but basically up to 12 million people will be forced, not choose, will be forced to switch jobs by 2030.

[00:36:21] Paul Roetzer: This is in the US, right? 12 million, US. This is US only. By 2030, that's 10% of all jobs in the US, roughly, yes?

[00:36:27] Mike Kaput: 9% or above, yeah. And that's outside any possible, you know, other disruption there. Workers in lower-wage jobs are up to 14 times more likely to need to switch jobs than those in higher-wage positions, and most will need additional skills to do that successfully. Women are going to be 1.5 times more likely to need to move into new occupations than men.

[00:36:54] Mike Kaput: As a result, McKinsey is saying, look, the US is going to need workforce development on a far larger scale and more expansive hiring approaches from employers. So employers will need to, they say, hire for skills and competencies rather than credentials and deliver training that keeps pace with their evolving needs.

[00:37:11] Mike Kaput: Now, this report is really great, and there's plenty more data in the research worth checking out, but the high-level point here seems to be that they're predicting a re-weighting of the economy in the US by 2030, with all the positives and negatives that could mean, thanks to generative AI. And right now it looks like that re-weighting is in favor of higher-wage jobs, leaving lower-wage workers much more likely to have to forcefully deal with this.

[00:37:36] Mike Kaput: Now, it's unclear, they say, if this will lead to long-term mass unemployment. But, you know, and we always note this, Paul, take every projection with a grain of salt. This is just one perspective, though it's important. But what stood out to me is that these changes appear to be coming faster than even McKinsey expected.

[00:37:57] Mike Kaput: What do you make of that? I mean, it seems like, despite their prediction that we don't know for sure if it will lead to long-term mass unemployment, there's some really rocky road ahead.

[00:38:08] Paul Roetzer: Yeah, we're going to see a lot of reports like this where they're accelerating the impact. So I think what has happened, mainly, 'cause we've talked about this before on the show, but you have to have the context that generative AI, to most of the world, started November 30th, 2022.

[00:38:23] Paul Roetzer: Now, it had been around before that. We had DALL-E 2 in the spring, we had Midjourney in the spring, like, things started happening in the spring of '22. But for most of the world it started November 30th, 2022, and you've got the holidays, you come back, it's January, February, and the world starts waking up saying, what does all of this mean?

[00:38:41] Paul Roetzer: So the research on this is so new, and the theories of what is going to happen are so new, that right now what we're seeing is just accelerating beliefs around what the impact is going to be. There is no authoritative way to say what's actually going to happen. If you study economists, they say one thing, depending on who the economists are.

[00:39:03] Paul Roetzer: If you look at consulting firms like McKinsey, they may have a different take on it. If you talk to the AI researchers like Sam Altman or, you know, Ilya Sutskever, they think AGI is here in three years, and they would laugh at a report like this. Like, who cares? It's basically obsolete information a year from now if they think they're approaching AGI.

[00:39:24] Paul Roetzer: So it's just an unknown, period. And this just goes to what I was saying earlier: you have to assume disruption is coming way faster and more significantly than what is currently believed. Because I assume this study also looked at existing capabilities; it probably didn't try and project out, what if GPT-5 has X, Y, and Z capabilities, then what happens?

[00:39:51] Paul Roetzer: Yeah, so you could play this out all day long: well, if the technology freezes in the current moment and never gets smarter, this is what could happen. But six months from now, we have to assume we're going to see at least the same level of innovation we saw last year, probably more, from the AI models we're using.

[00:40:12] Paul Roetzer: So my guess is you could do this report six months from now and they'll probably have accelerated these predictions to like 2025, 2027. It's just going to keep moving up. So it's a really hard space to get accurate data on right now. I think you can't look at these reports as anything other than context. And what I think you're going to see is everyone is going to agree it's going to disrupt the workforce, and it's going to happen quicker than we previously thought.

[00:40:40] Paul Roetzer: Some people will say two years from now, some will say eight years from now, but I don't know that you're going to find anyone willing to say it's not going to be massively disruptive. I wouldn't believe it. If you find someone who says that, I would try and figure out what their motivations are, because I don't know how you could make that argument confidently right now.

[00:41:02] Mike Kaput: Well, that's why I think it's worth talking about, because I wouldn't get too hung up on whether it's going to be 8 million, 12 million, or 15 million jobs. It's more that this does seem to be pretty strongly directionally correct.

[00:41:21] Paul Roetzer: Yes.

[00:41:21] Mike Kaput: This trend is happening. So given that, if I'm a business leader right now and I assume this is directionally correct, what should I be doing about this?

[00:41:22] Paul Roetzer: Yeah, I mean, the thing we talked about at MAICON. I ended my, like, kind of 10 things to know, I think I had action items in number nine, and it was: education and training is critical. You have to develop an AI council within the organization. We've talked about this before, when we did the knowledge work episode.

[00:41:37] Paul Roetzer: An AI council charged with monitoring this stuff. Just putting policies and principles in place isn't enough; you have to be actively monitoring the shifts in this stuff and the impact. One of the things I recommend is looking at doing impact assessments of your team. What are the key roles across your marketing team, across your organization?

[00:41:56] Paul Roetzer: What are their tactical responsibilities? And then, like, constantly doing a six-to-12-month outlook as new technology emerges. So let's say, as GPT-5 comes online, maybe it is this year, maybe it's next year, or as Microsoft Copilot emerges into your organization and you have generative AI infused everywhere.

[00:42:15] Paul Roetzer: You need to then look at that and say, okay, how does this affect our writers, our designers, our email marketers, our paid ad specialists, our social media professionals? Their jobs are going to evolve. What do we call it, occupational redistribution? What was the term?

[00:42:30] Mike Kaput: The kind of jargon here was, let me find the exact term.

[00:42:36] Mike Kaput: Occupational transitions.

[00:42:38] Paul Roetzer: Yes. So there'll be occupational transitions occurring throughout your marketing team and organization, and you can't wait until it's already happened to be the one to figure that out. So you have to get to this baseline understanding of AI. You have to be proactive in staying on top of it.

[00:42:55] Paul Roetzer: And then you have to have infrastructure in place to constantly monitor it and reassess. Like, when we did the interview with Jessica of VMware for the AI council thing: their AI council started as two people, just like a grassroots thing. She said they have 30 people now, they're meeting weekly.

[00:43:13] Paul Roetzer: They have information sharing, and, like, I even joked on stage, well, that's going to be a full-time job for someone, leading the AI council. And she laughed, and it was probably like, yeah, it probably is going to be a full-time job for somebody, because I think you just have to have these systems in place to constantly monitor.

[00:43:28] Paul Roetzer: That's what the best companies and the best leaders are going to do: not figure out an AI roadmap for today. They're going to embrace the fact that this stuff's going to keep evolving, and they're going to be the ones who put infrastructure in place to figure it out every day, every week, every month as it keeps changing.

[00:43:45] Paul Roetzer: And they're going to look at their tech stack, their strategies, budgets, people; they're going to constantly evaluate this stuff. And I think that's the biggest thing people need to realize: there are no more 12-month plans. They just don't exist in business. You can have a vision, you can have a product roadmap, you can have strategies for the year, you can do your annual budgets.

[00:44:06] Paul Roetzer: But if you think that we're able to predict out, as strategists, as marketers, as business leaders, what is going to happen 12 months from now, you are in for a rude awakening. And I think that's what people have to start understanding: we're going to have to lead in the most dynamic environment we have ever experienced. The change is going to keep coming, and it's going to be very tricky to solve for that.

[00:44:33] Mike Kaput: Interesting times ahead. Let's jump into some rapid-fire topics here. First up is that Sam Altman is busy in another area of business. He just had a coming-out party, essentially, for another company that could have a major impact on society. And it is an impact that not everyone agrees is positive.

[00:44:51] Mike Kaput: The company that Altman has co-founded and is involved with is called Worldcoin. And this has been around for several years, but really in the last week they've kind of formally announced what it is, what they're doing, and why it matters, and they're really spinning up operations. What this is, is a cryptocurrency project that creates digital passports.

[00:45:11] Mike Kaput: And these digital passports help people prove that they're human, not AI, as they interact online and in the world. What's turning many people off to the project is how Worldcoin verifies your human identity: you have to go, in person, and scan your irises into one of Worldcoin's, what they call, Orbs. It's like an orb. It looks as sci-fi as it sounds.

[00:45:32] Paul Roetzer: It really does.

[00:45:34] Mike Kaput: Yeah. And so in return, if you do that, you get what's called a World ID and also some WLD, which is the company's crypto token. So this is something that Altman appears to think is one way to address the economic effects of AI. He thinks that one possible outcome, like we discussed, of very intelligent AI is universal basic income, which will be needed to kind of soften the disruptive effects of AI on the economy, essentially governments distributing income. And World ID and Worldcoin may be a way to prevent fraudulent use of that system. So Paul, as a longtime Altman watcher, what's going on here?

[00:46:20] Paul Roetzer: Yeah, I mean, you hit it at the end. It's not meant to just be universal basic income, but it does certainly appear to be one of his strategies to redistribute the wealth that AI might create. So, you know, for context, OpenAI started as a nonprofit and had a for-profit arm emerge in 2019, I think it is.

[00:46:38] Paul Roetzer: They have capped returns, or capped profits, in the company. So, you know, for example, I don't know what Microsoft's cap is, but the initial cap was like a hundred-x return on their investment. So if you put in, I'm going to get these numbers wrong, but if you put in a billion, the most you could return out of it is a hundred x that billion.

[00:46:57] Paul Roetzer: And then after that, that money is distributed. So their belief is that AGI, superintelligence, will create trillions of dollars in wealth, and that those trillions of dollars are going to need to be distributed to society. And that's part of the vision for how this stuff, in essence, creates utopia: it creates so much money that everyone can have endless amounts of money, basically.

[00:47:19] Paul Roetzer: So it is partially that, and it is partially about verification of human identity. It is very, very sci-fi. Like, I've looked at this for six months or whatever, and it's just like, you can't get over it. I will not be scanning my iris. You can't do it in the United States anyway; it's not currently available in the US.

[00:47:35] Paul Roetzer: But I would not be getting into these long lines that are happening to scan my iris and give it over to Sam Altman and Worldcoin at the moment. It's bizarre. Honestly, it's just weird. But it's not going away, so be familiar with Worldcoin. You're going to hear that company name a lot moving forward.

[00:47:58] Mike Kaput: Next up is that some leading publishers, including IAC, which is a big media conglomerate, have told some reporters at the publication Semafor that they are expecting billions in compensation from the AI companies they're suing right now, or planning to sue. Now, publishers, as we've talked about in the past, are increasingly taking legal action against AI companies.

[00:48:21] Mike Kaput: They claim AI companies are using copyrighted materials, sometimes from authors or back catalogs of books, to train models like GPT-4. Now, in the past, when tech companies have been sued by publishers, specifically in a lot of social media-related copyright suits, they've paid out, in some cases, tens of millions of dollars.

[00:48:42] Mike Kaput: But it sounds like publishers appear to think that the threat from AI is orders of magnitude larger and should be compensated accordingly. So, according to Semafor, quote, unless the publishers lower their expectations, or the tech companies adjust their fundamental sense of what it is to be a platform, this high-stakes conflict is likely to escalate.

[00:49:02] Mike Kaput: Now, Paul, as you're looking at this, is this expectation of billions in compensation at all realistic? Like, what's the likely outcome of a wider escalation here between publishers and AI companies?

[00:49:15] Paul Roetzer: I don't know. I mean, it's really fascinating, because when you step back and, you know, get away from the verbal back and forth, these language models need data.

[00:49:26] Paul Roetzer: They have taken that data from these publishing companies, whether it's, you know, in front of a paywall, most likely, and apparently it was also taken from behind paywalls somehow, but they needed that data to train these models. So if all of these companies can't survive financially, then the data that trains these models eventually goes away.

[00:49:49] Paul Roetzer: Then you get into the scenario of like, well, how will the future models learn if all the data is gone? So it seems like there is some symbiotic relationship here where the language models, the foundational models need the media companies to keep creating content. And therefore the media companies need to be able to survive financially.

[00:50:07] Paul Roetzer: If we get rid of search, or search becomes something much different than what we know today, where these companies can't make money on ad dollars, can't make money on, you know, traffic being supported through search engines, it's like, what does this world look like?

[00:50:24] Paul Roetzer: And I have yet to see a great analysis of what happens if all of this traffic goes away and these media companies no longer can get the ad dollars to live. So then it becomes, well, do these foundational model companies financially license them to create this stuff, because they need them to keep creating this content?

[00:50:41] Paul Roetzer: There's an argument to be made that we need more great human-written content, we need more journalists, and that if I was OpenAI or Google, I would be considering funding journalism schools and, you know, making journalism a career where you can continue to make a living in the future. Because for these models to be good, they're going to need quality content.

[00:51:01] Paul Roetzer: And as we talked about in a previous episode, we don't know that synthetic data, like the AI-generated stuff, is going to work the same way that human-generated content does. Right, right. So, I don't know. But I put this on LinkedIn when I first saw it, and I said: do you want to know how major publishers really feel about large language models and generative AI?

[00:51:18] Paul Roetzer: This quote sums it up pretty well. It's a quote from one of the executives in the article: search was designed to find the best of the internet. These large language models, or generative AI, are designed to steal the best of the internet. So I think that pretty much encompasses how media companies currently look at what's going on. But it seems to me that they need each other, and somehow this is going to play out.

[00:51:41] Paul Roetzer: I don't think paying billions in penalties is the solution. I think you have to get into licensing, and you have to get to a place where these technology companies truly support journalism. Which, I don't know what that looks like. To be determined, I guess.

[00:52:01] Mike Kaput: Netflix is actually making some waves as well online with a new AI job posting that is being shared quite a bit.

[00:52:07] Mike Kaput: And this is a posting for a product manager on its machine learning platform, with a starting salary range of $300,000 to $900,000 a year. So if you have the requisite skills, that's a pretty good gig. It sounds like this is a wide-ranging AI role that they're advertising; it's designed to do things like make Netflix's platform, content, and recommendations more intelligent.

[00:52:28] Mike Kaput: But one of the reasons it's making waves is because of the timing. I mean, there are ongoing, serious actors' and writers' strikes currently kind of bringing Hollywood to a standstill. And a key concern of striking professionals is that AI is going to be used to replace or devalue their work and their earning power.

[00:52:48] Mike Kaput: Now, Paul, neither of us has really worked in the entertainment industry, but it sure seems like a possibility that studios and companies like Netflix are going to begin investigating how AI can help fill gaps in human employment if, say, strikes continue or people with the requisite skills aren't available.

[00:53:08] Mike Kaput: Do you kind of see it that way?

[00:53:11] Paul Roetzer: Yeah, this is a sticky one. Again, they need each other. Like, we're not going to go to digital actors, synthetic versions of actors and actresses. No one's paying to watch that at scale. I mean, obviously in select movies it's cool, but we're not going to replace humans as actors and actresses.

[00:53:28] Paul Roetzer: But I think that scriptwriters are going to find out real quick how good this tech is if this strike goes on for a while. We've talked about it before: if you're a movie production company, or TV, whatever, and you own the rights to, say, Friends, an example we used recently, and you want to reboot that series, you feed it however many seasons and episodes there were, and you teach a model to write like that.

[00:54:00] Paul Roetzer: I'm pretty confident that today's language models could probably generate some pretty solid scripts based on what those shows are like: tone, humor, character development, whatever it is. So I don't think anybody wants to get rid of writers and, you know, having humans involved in that stuff. But the reality is, the AI's pretty good at it.

[00:54:24] Paul Roetzer: And I think they need to come to an agreement real quick, before the production companies become too dependent on the AI that they're worried about, you know? So again, I don't have a side in this. I want it to work out; as you always do in negotiations like this, you want it to be best for everybody.

[00:54:44] Paul Roetzer: I just think both sides need to be realistic and find a way to work together, because they're not agreeing to anything where the AI's not a part of this in the future, right? So I think you gotta try and work to the best possible outcome, and be realistic about what the technology's going to be capable of and how it's going to affect the human side of this.

[00:55:05] Paul Roetzer: And like we've always said, you have to take the human-centered approach. And I don't know that the movie and TV show side is necessarily going to do that. We'll see.

[00:55:18] Mike Kaput: So speaking of drama, with Elon Musk there is never a dull moment, as I believe listeners probably know. We covered last week the recent, kind of chaotic rebranding of Twitter to X.

[00:55:32] Mike Kaput: Some more details have surfaced that indicate this rebranding could have perhaps benefited from a little more thought and deliberation, because it's come to light that Microsoft apparently owns some type of trademark on X as a name. Now, it sounds like that's specifically related to its Xbox video game console.

[00:55:51] Mike Kaput: So it's unclear how directly problematic that is for X, the social media network. But it also turns out some other companies have X trademarks for specific lines of business too, and one of those is Meta, which has an X trademark directly related to social media services. So that means we could now see not only the general chaos from the rebranding, but also X and/or Elon Musk engaged in serious and damaging trademark lawsuits over the actual name of the company.

[00:56:24] Mike Kaput: Paul, kind of from your perspective as a comms, PR, and marketing guy, how big a screw-up are we looking at here?

[00:56:30] Paul Roetzer: I don't think he cares. I mean, his history is basically like, sue me. Everything he does, he just does: he digs the tunnel without asking permission, takes down the Twitter sign without permission, and then San Francisco shows up and tells him to stop.

[00:56:46] Paul Roetzer: So he puts a big strobe-light X on top of the building, starts flashing the thing in Morse code over the weekend, and starts paying his thousand-dollar-a-day fine. It's like, fine me; he doesn't care. I think he just assumes that whatever headache emerges, he can throw billions and his lawyers at it and just deal with it.

[00:57:05] Paul Roetzer: So I don't see him all of a sudden going mea culpa, like, sorry I took the X. Whatever, it just goes to the courts. And I don't think he's going to change the name. Who knows? I don't know how the lawyering stuff works, but I don't think he's losing any sleep over whether or not Meta or Microsoft or anybody else owns X.

[00:57:24] Paul Roetzer: He's got plenty of attorneys to battle that out.

[00:57:29] Mike Kaput: So in other kind of big tech news, Anthropic, a major AI company, along with Google, Microsoft, and OpenAI, say they're launching something called the Frontier Model Forum. This is an industry body that is focused on ensuring safe and responsible development of frontier AI models.

[00:57:45] Mike Kaput: So the forum says that it aims to advance AI safety research, identify safety best practices for these models, share that knowledge with relevant policymakers, academics, et cetera, and support efforts to leverage AI to address society's biggest challenges. So this is not the first time we've seen somewhat similar partnerships or initiatives, Paul, but how significant is this new attempt at cooperation between these companies?

[00:58:12] Paul Roetzer: I mean, it's interesting that it's those four. Google, Microsoft, and OpenAI alone, it seems interesting that they're working together. Anthropic, you know, the CEO came from OpenAI. It seems like there could be some bad blood between all these organizations. I think it's positive that they're working together, whether or not it actually plays out how all the press statements suggest. You know, they each made their own individual announcement related to it, like 6:00 AM on Thursday or Friday, whenever it was.

[00:58:36] Paul Roetzer: So they came out in unison. It's interesting that Meta and Amazon and Cohere and Inflection, that the others aren't included. I found that kind of noteworthy. But I think, you know, as we talked about with that Atlantic article, at this moment I'm taking anything that shows progress towards figuring this out as a positive step.

[00:58:56] Paul Roetzer: It's usually not going to play out in the most optimistic way the press announcements suggest. But I think I'm cheerleading anything right now that helps solve for this in a safe and responsible way.

[00:59:13] Mike Kaput: So we've talked a ton over the months and years about Microsoft, a major AI leader, but their stock actually just took a hit after they failed to provide specifics about exactly when they're releasing these much-talked-about Copilot AI features.

[00:59:24] Mike Kaput: These are the AI features across Microsoft 365 products that essentially help you use tools like Word, Excel, and others in a much more natural-language-prompting way, with more generative AI capabilities. Now, we actually talked the other week about Microsoft releasing details on Copilot's pricing, saying it would cost 30 bucks per user per month. But now the company is being kind of vague about when it will actually roll out, and it seems to be indicating that the rollout might happen sometime in 2024, not this year like a lot of people expected.

[01:00:01] Mike Kaput: Do you see this as something to worry about?

[01:00:04] Paul Roetzer: No. I mean, I think it gives us a little more time to figure out the impact it's going to have. My guess is Google probably isn't any closer either, just given the competition here. I think they're probably figuring that if Google Workspace's instance of this is on par with what Bard's currently capable of, then they have nothing to worry about, 'cause Bard kind of sucks right now.

[01:00:28] Paul Roetzer: So maybe they're looking at it like, well, let's not rush it out; even if Google brings theirs out, it's not going to be any good and people aren't going to be that impressed. By the way, I don't think that's the case. I think Google will come out with more advanced versions of what they have when they do the Workspace integration.

[01:00:42] Paul Roetzer: I did talk with someone who has access to the Microsoft Copilot, and I said, is it on par with the demo video? Because if it is, then it's going to be game-changing. And this person indicated it is indeed extremely impressive, that it's changing everything about how they work. So just one data point, but it was someone who has intimate access to it.

[01:01:09] Paul Roetzer: So, I don't know. I mean, I'm optimistic. I'd love to see it, love to try it, but it does sound like it might take a little extra time. I don't think there's anything to worry about, unless you're waiting for it. But right, I have yet to meet an organization that's ready for it, so I don't know that anybody's going to be upset if it takes a little extra time to get to market.

[01:01:29] Mike Kaput: Cool. So in our final few updates here, we're kind of trying to do more of a, you know, tech-of-the-week segment, spotlighting some interesting products being released, features, updates, and things like that. And first up we have one from Cohere. Cohere, which we've talked about a bunch, is a major player in AI, and it creates AI foundation models.

[01:01:48] Mike Kaput: They just announced something called Coral, a knowledge assistant for enterprises. So Coral layers on top of your existing enterprise documents and knowledge, and then allows you to find answers across these docs using a conversational, ChatGPT-like interface powered by natural language prompts.

[01:02:06] Mike Kaput: So for instance, you could just ask Coral, hey, overview specific industries for me based on the docs we have internally and all the research we've done, extract data insights from various company documents, or even generate new documents based on existing knowledge. So, two big things here. Cohere says the generated responses are verifiable with citations to sources, which mitigates against hallucinations, a key problem with these systems.

[01:02:30] Mike Kaput: Not to mention, it is able to be managed within your own secure cloud, either through cloud partners that you already use or virtual private clouds. Coral data is never sent to Cohere; it stays within your own environment. So the reason this is worth noting, I think, Paul, is it seems like a solution we've talked about in the past as being really important for enterprises.

[01:02:53] Mike Kaput: This ability to query your own documents and data with ChatGPT-like capabilities, but actually feel comfortable about doing it because there's enterprise-grade security and compliance. Is that how you see something like this?

[01:03:07] Paul Roetzer: Yeah. Cohere definitely seems to be steering real hard into the enterprise and addressing the items you just mentioned.

[01:03:14] Paul Roetzer: So rather than trying to win at the horizontal large language model game, where it's just applicable to any use case and any organization, they do seem to be very focused on building specific use cases for enterprises and considering the buying obstacles that might exist within a company. So yeah, they're definitely an organization we mention when enterprises ask us who they should be talking to.

[01:03:37] Paul Roetzer: We always say, you know, at least have a conversation with Cohere, because they are building in this direction. So yeah, I mean, this is a great use case. I think a year, two years from now, everyone will have an agent doing this.

[01:03:52] Paul Roetzer: Like, I can't imagine going into a search engine, you know, or your internal server and trying to find a document, and then, where was that graph in that slide we used in last year's board deck? You're just going to go in and ask it, right? I need this, where's the data? And it just pops up in the conversation and links right to the slide within the deck that has the source of the data.

[01:04:11] Paul Roetzer: So it's one of those where it's going to feel so antiquated to do it how we're currently doing it, when you can just have a conversation with the data within your organization, or your CRM, or whatever the system may be that has the data in it.

[01:04:26] Mike Kaput: So another update here: Rewind AI is a company we've followed for a while and heard a lot about from our audience.

[01:04:33] Mike Kaput: It's an AI tool that captures everything you've seen and done on your Mac or PC while you're working, so that you can then go back and access, search, and query all this information and data for a ton of different use cases. And one user actually quoted on the website says it feels like a superpower, because you never forget anything ever again and you have everything you've ever done right at your fingertips.

[01:04:56] Mike Kaput: They have now released a version of this for iPhone; previously it was desktop only. It has all the same features on your mobile device. And Rewind, for what it's worth, says this solution is completely private; it's not like everything you're doing on your devices is made public. I'm certainly intrigued by it, but haven't used it yet.

[01:05:16] Mike Kaput: Like what are your thoughts on this? I'm definitely interested, but a little wary.

[01:05:21] Paul Roetzer: I've said this before: I would put this in the category of that law of uneven AI distribution that I wrote about earlier this year, where you have to accept what you have to give up to use the tool. So this tool exists, and it has amazing capabilities.

[01:05:38] Paul Roetzer: But to think that one company has access to everything you do, from screenshots to private messages to chats with your family, like, everything is stored in there infinitely. That's a really disturbing concept to me, and whatever the benefit of that is, I get it, but for me personally, I just have no interest in it.

[01:06:05] Paul Roetzer: And I know that they have patents, or patents pending, on doing this through your AirPods and through whatever, you know, Apple's glasses become. Like, they want to literally record your entire life: virtual, digital, offline, in person, whatever. They want it all recorded so you have instantaneous memory for your entire life.

[01:06:28] Paul Roetzer: I have no use for it. I just don't want that level of intimacy with a single company having access to everything. So, I know they raised a bunch of money, I know a lot of people love it, and a lot of tech people think it's amazing. It is incredible tech. I won't ever personally use it.

[01:06:50] Mike Kaput: Last but not least here, AWS just announced it is expanding its Amazon Bedrock service.

[01:06:54] Mike Kaput: This is their AI service that gives you access to a range of foundation AI models. So Bedrock will now include Cohere as a foundation model provider and offer the latest models from Anthropic and Stability AI. Amazon says you'll also be able to create fully managed AI agents in just a few clicks.

[01:07:17] Mike Kaput: Now, Paul, we talked about Bedrock several months ago when they announced it, but it kind of bears repeating here given the expanded updates. Like, why is this so important?

[01:07:24] Paul Roetzer: I'll give you an example, some context from conversations I was having at MAICON. One of the big debates we were talking about is, well, who do you work with?

[01:07:33] Paul Roetzer: If you want to infuse a large language model into your marketing, sales, and service, do you go with a foundation model company like Cohere or OpenAI or Anthropic? Or do you go to an application company like a Jasper or Writer or somebody like that? And so when you say, oh, you could start looking at AWS, they have Bedrock, they have access to like four or five different language models within there.

[01:07:57] Paul Roetzer: The immediate response you get from people is, oh, well, my data's already in AWS, I trust them with my data, and you're saying I could just connect my data right to a model there. So I think AWS has a really interesting play here, being kind of the everything store for LLMs. So that's one thing: I think they could win a lot of market share from the fact that people already trust them with their data.

[01:08:20] Paul Roetzer: The second thing that really jumped out to me is this whole ability to build AI agents that take actions. We've talked about that over and over again as the next major thing with these models: stringing them together to take actions. And so the fact that Amazon is now playing directly in that space, as all these other major players are going to at some point this year, I think that's one of the things I'll watch really closely: what happens with that, and how real is that technology?

[01:08:47] Paul Roetzer: And is it something you or I could go into, you know, three months from now and start building our own AI agents with? If we get to the point where non-developers like us can go in and build agents to do things, it has a massive ripple effect through marketing, in many, many ways we'll get into in future episodes.

[01:09:08] Mike Kaput: Awesome. Well, Paul, as always, thanks for sharing your insight and your time to break down the latest in AI for us. Congratulations on a successful MAICON. Excited to see what's next, and we'll be back here next week.

[01:09:21] Paul Roetzer: Yeah. And thanks to all our listeners who came to MAICON. We were doing our book signing, and we must have had, I don't know, two dozen people who were like, oh, we're here 'cause of the podcast. So yeah, we know the audience has really grown for the podcast, and a lot of you actually joined us in Cleveland. So thank you for being there and, you know, coming up to us and saying hello while you were there.

[01:09:42] Paul Roetzer: We love to see that. So yeah, another wild week of AI, and this episode was honestly probably shortened because Mike and I were at MAICON three of the days, so we may have missed some stuff last week. But I feel like that was enough for one episode, for sure. Alright, we'll talk to you again next week.

[01:09:59] Paul Roetzer: Thanks, Mike. Thanks, Paul.

[01:10:02] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:10:23] Paul Roetzer: Until next time, stay curious and explore AI.
