47 Min Read

[The Marketing AI Show Episode 43]: AWS Gets Into the Generative AI Game, AutoGPT and Autonomous AI Agents, and How AI Could Impact Millions of Knowledge Workers Sooner Than You Think


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


This week's episode of The Marketing AI Show, hosted by Paul Roetzer and Mike Kaput, talks about big generative AI announcements from AWS, the impact of AutoGPT, and how AI could impact millions of knowledge workers sooner than we think.

Check out the audio, video, and transcript below.

This episode is brought to you by BrandOps, built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.

Timestamps

00:05:07 — AWS Gets Into the Generative AI Game

00:15:21 — AutoGPT and Autonomous AI Agents

00:25:06 — How AI Could Impact Millions of Jobs Sooner Than You Think

00:57:58 — Rewind.ai shares its investor deck publicly

01:01:20 — Elon Musk creates new AI company X.ai

01:03:57 — Anthropic’s plan to beat OpenAI

01:06:42 — Is artificial intelligence advancing too quickly? What AI leaders at Google say

Listen to the Audio


Links referenced in the show

Read the Interview Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: I think what needs to happen is you need to think about the workers in your organization, what they do and how essential the work is and how critical it is that they do it right every time.

[00:00:10] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:30] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:38] Paul Roetzer: Welcome to episode 43 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput, Chief Content Officer at Marketing AI Institute and co-author of our book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business, available now in audio, digital, and print formats.

[00:00:57] Paul Roetzer: If you don't have the book, grab a copy at marketingaibook.com. I guess that's our first plug of the day: marketingaibook.com. All right, so this episode, which is going to be a doozy, is brought to us by our friends at BrandOps. Thank you to BrandOps for supporting the show. BrandOps is built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.

[00:01:25] Paul Roetzer: Leaders use it to know which messages and activities will most effectively improve results. BrandOps also improves your generative marketing. With BrandOps, your content is more original, relevant to your audience, and connected to your business. Find out more and get a special offer: visit brandops.io/marketingshow.

[00:01:46] Paul Roetzer: That's brandops.io/marketingshow. And episode 43 is also brought to us by the fourth annual Marketing AI Conference, or MAICON, which returns to Cleveland this summer. You can join us July 26th to 28th in person for the largest and most exciting event of the year. Oh, no, no. The AI for Writers Summit was pretty exciting, but this is going to be in person.

[00:02:08] Paul Roetzer: All right. The conference brings together hundreds of professionals to explore AI in marketing and business, experience AI technologies, and engage with other forward-thinking marketers and business leaders. You will leave MAICON prepared for the next phase of your AI journey with a clear vision and a near-term strategy you can implement immediately.

[00:02:27] Paul Roetzer: I'm working on the agenda right now, and I promise you it is going to be incredible. There are so many topics I want to cover in the agenda and so many speakers I would love to have. So it's a challenging process to go through to kind of filter this down to what's going to make the cut at the event.

[00:02:42] Paul Roetzer: But definitely, hope you can join us. It is MAICON.ai - M A I C O N dot A I. We hope to see you in Cleveland in July. So I want to kind of preface today's show before turning it over to Mike. Again, if you're new to this show, we have hundreds, maybe thousands of new listeners every week. If you're new to it, the format is we pick three topics.

[00:03:05] Paul Roetzer: Well, we try to pick three topics that are the most timely and relevant to you as a marketer or business leader, and kind of give our thoughts on them, the status and where we're at and what's going on. So something kind of emerged while I was on spring break last week with my family. I was doing a talk for the Association of National Advertisers.

[00:03:25] Paul Roetzer: I met with some amazing marketers and leaders at some of the world's biggest brands. And then on the ride home, we took a road trip, which was crazy, but we did it, 17 hours total I think it was. I got a chance to listen to a number of podcasts and really, for the first time in a while, just think about what was happening in AI and what the implications were, specifically related to knowledge workers.

[00:03:49] Paul Roetzer: And this is a topic I've been kind of drafting ideas around for the last few weeks, intensely, and probably for the last few months I've been really trying to think about my positioning on this and what I think is going to occur. And so, literally on the ride home, my wife was driving, I put something up on LinkedIn on Saturday morning, and I think we need to expand on that thought, because it definitely created some interesting reactions.

[00:04:21] Paul Roetzer: A lot of support for the idea. A lot of, I wouldn't say challenging, just like, we don't want it to be true. I think that's probably the simplest way to think about this. And so what I wanted to do on today's show is really unpack our current thoughts, because this is definitely a work in progress.

[00:04:39] Paul Roetzer: I will say, I spent this morning really trying to flesh out a little bit more the intention behind my post. And so we're going to spend most of the show talking about that. But Mike's going to walk us through a couple of other topics to get started that in some ways set the stage for the knowledge work conversation.

[00:04:59] Paul Roetzer: So with that, I will turn it over to Mike to kind of guide us through our conversation today. It's, it's a big one.

[00:05:06] Mike Kaput: Indeed. So first up, we have some big news from Amazon, who just made a big play into generative AI, and it's a play that immediately makes them a very serious contender, if they weren't before, in the AI arms race.

[00:05:20] Mike Kaput: With all these announcements we're hearing from Google, Microsoft, and the OpenAIs of the world. So they announced a few things related to generative AI capabilities being baked right into Amazon Web Services (AWS), their flagship cloud product. And these announcements include something called Amazon Bedrock, which is going to make foundational text and image models from companies like Anthropic and Stability AI accessible

[00:05:48] Mike Kaput: to AWS customers right in the platform. So Bedrock will actually help customers find the right model for their needs, including some that Amazon has also created, as well as customize models securely with their own data, and then integrate those models into their own applications. So according to AWS, this ability to customize models is one of Bedrock's most important capabilities.

[00:06:14] Mike Kaput: So customers only need to provide a few labeled examples to fine-tune models to their specific needs. So AWS gives the following marketing-related example. They say, imagine a content marketing manager who works at a leading fashion retailer and needs to develop fresh, targeted ad and campaign copy for an upcoming new line of handbags.

[00:06:38] Mike Kaput: To do this, they provide Bedrock a few labeled examples of their best-performing taglines from past campaigns, along with the associated product descriptions, and Bedrock will actually automatically start generating effective social media, display ad, and web copy for the new handbags. So as part of the announcement, Amazon also announced some advancements in its AI infrastructure that it sells to customers, as well as the general availability of its CodeWhisperer tool, which, like other tools on the market, aids developers by generating code.
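To make "a few labeled examples" concrete, here is a rough sketch of what that kind of labeled training data might look like, packaged as JSONL, a common format for fine-tuning datasets. The field names and records are illustrative assumptions, not Amazon Bedrock's actual API or schema.

```python
import json

# Hypothetical labeled examples: past product descriptions paired with the
# best-performing taglines they produced. Field names are illustrative only,
# not Bedrock's actual schema.
examples = [
    {"prompt": "Product: leather tote, roomy interior, fall colors.",
     "completion": "Carry the season. The tote that works as hard as you do."},
    {"prompt": "Product: compact crossbody bag, hands-free, city-ready.",
     "completion": "Small bag. Big plans."},
]

def to_jsonl(records):
    """Serialize labeled examples to JSONL, one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

The point is the shape of the data: a handful of input/output pairs in your brand voice is what "customizing" a foundation model starts from, whatever the vendor-specific format turns out to be.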

[00:07:14] Mike Kaput: So this was a major announcement that got a ton of play in AI circles. Paul, just at a high level, why does this move by Amazon matter?

[00:07:27] Paul Roetzer: There's a few thoughts I have on this one. The first is, the more I've been thinking about this, I feel like the LLM, the large language model, is the new CRM.

[00:07:38] Paul Roetzer: And not replacing the CRM, but in terms of the importance to the tech stack for the last, you know, 10, 15 years. Again, for people who don't know my background and Mike's background, I built HubSpot's first partner agency. So back in 2007, I had started my agency in 2005, we became HubSpot's first partner in 2007, and then I built and eventually sold that agency.

[00:08:01] Paul Roetzer: But the foundation of that agency was content creation, you know, driving growth through content creation, and then the infusion of CRM into companies. So the last 12, 15 years of our lives were spent helping organizations with their tech stacks, specifically foundationally on the CRM.

[00:08:19] Paul Roetzer: And the more conversations I'm having with large enterprises, which will trickle down to the smaller enterprises, smaller companies, in the months and years ahead, the language model is, it's like everyone is trying to figure out: what do we do? Do we build our own language model? Do we use OpenAI?

[00:08:36] Paul Roetzer: Do we go to Cohere? Do we go to Anthropic? Do we use Stability? Do we build on top of Jasper or Writer or HyperWrite? Like, there are so many options right now and so few people in organizations that have any idea what to do about it. So even organizations that have, you know, top-notch CIOs and data teams and business intelligence teams, large language models are new to them.

[00:09:00] Paul Roetzer: And one of the challenges is the horizontal application of it. So this isn't a marketing thing, this is a marketing, sales, service, finance, HR thing. Like, as we look at the infusion of what Microsoft 365 is going to do with Copilot and what Google's going to do with Workspace, it's bringing large language models into every aspect of the organization.

[00:09:21] Paul Roetzer: And so I feel like the large language model is going to be as fundamental to the tech stack as a CRM has been for the last 10 to 15 years. So that's kind of my main thing. And I'm getting questions every day from large enterprises asking for introductions to these language model companies and the application layer companies.

[00:09:39] Paul Roetzer: And asking our opinion on what should they even do? Like, how should they move forward with language models? So that's one kind of thought. The other is, I saw, I don't remember who put this on Twitter, or maybe at this point it was just a thought I had. I have no idea.

[00:09:55] Paul Roetzer: But Amazon is the everything store, and they basically became the everything store for language models. This is Amazon's model. Like, you can use ours, you can buy the Amazon product, or you can buy from the other people, and maybe the Amazon language model will be cheaper than the other ones, because they can do that.

[00:10:11] Paul Roetzer: But this is the Amazon model, and I think it's really smart. I think it will work. I think Amazon's stock price jumped quite a bit the day they did this. And I could see this being a major play for them. I mean, I think if you're an enterprise and you're trying to think about what to do with large language models, you have to now look at AWS and say, okay, is that the right play for us?

[00:10:35] Paul Roetzer: Because we have a diversity of models we can choose from, these foundational models. We don't have to pick just one. There are open source models in there. There are some custom, you know, proprietary models in there. And the reality is, in your organization, you likely may not have a single language model, a single foundational model.

[00:10:52] Paul Roetzer: You probably will have kind of a blend of these things, at least for the foreseeable future. So, yeah, I mean, I think if you're a big enterprise, your first calls are where they've always gone. You're going to call Microsoft, you're going to call Google Cloud, and you're going to call Amazon AWS, and you're going to say, what do you got?

[00:11:07] Paul Roetzer: How are the large language models structured? And then a quick side note: I actually recently got outreach from Google offering a significant, it was like a quarter million dollars in, credits to use their AI in their cloud. And I was like, oh, that's the model. So if you think about Microsoft, Amazon, Google, I think what's going to happen is they're going to be giving away credits or time to get you into using their language models.

[00:11:38] Paul Roetzer: So it's going to be a really tricky period moving forward for the people in kind of the buying seats in organizations that have to figure this stuff out. But major player, major news. It's going to change the way people think about and buy large language models. And it's

[00:11:51] Mike Kaput: probably worth noting, for audience members that are just starting to get caught up on what's possible here, that with the advancements in generative AI, there's really no foreseeable future where your company is not using some type of generative AI in some context moving forward.

[00:12:09] Mike Kaput: So as companies start looking into the technology, the reason we get these questions about custom models is because all of these companies, especially large enterprises, see the immediate value in saying, okay, I don't just want a generic language model, though sometimes that fits their use case. I want one that is privately and securely using my own data to customize the model to my brand standards, voice editorial guidelines, whatever, even our specific data.

[00:12:39] Mike Kaput: So we're getting into this world where it's much, much easier now for brands to access that enterprise-grade model customization, because sometimes, at least historically, some of the vendors have not had that out of the box. So can you talk just really briefly about, I mean, why is this ability to customize models just so critical for these companies moving

[00:13:01] Paul Roetzer: forward?

[00:13:02] Paul Roetzer: Yeah. It's the future of how every organization will use these models. You can't, like, I mean, just a real simple example is, you brought up privacy and security. You can't put private data into ChatGPT. It's why we previously talked about, you know, Italy shutting it down, and there was the example we used, was it Samsung, I think, maybe in the last episode.

[00:13:21] Paul Roetzer: So we talked about, you know, they were putting confidential meeting notes into ChatGPT and asking it to summarize them. Like, you just can't have that. Now, you could choose to use the APIs from OpenAI, like you could build on top of that. So, you know, if you're comfortable with OpenAI, you could do this on top of there. But you could go to Cohere or Anthropic, or you could go to Writer, who's, you know, recently announced their own language models, like variations of them, with APIs open to them.

[00:13:50] Paul Roetzer: Jasper, I think, last week announced APIs to their models. So it's a really confusing space at the moment. But fundamentally, you have proprietary data, you want to tune these models on that data, whether it's for internal or external use cases, and you want to be highly confident in the privacy and security of that data and how these models work within your organization.

[00:14:12] Paul Roetzer: So that's, I mean, that's basically the use case. So any organization that cares about the privacy and protection of its data is going to want to have kind of closed systems where they're using these APIs, maybe from different models, but then they're building on top of it. It doesn't seem likely you'd build your own models from the ground up, like, Bloomberg, was that the example we used?

[00:14:35] Paul Roetzer: Yeah. Most organizations are not going to have the resources to build their own large language models from the ground up. They're likely going to start with either an open source model from Stability AI, or Hugging Face, or wherever it's going to be from, or they're going to pull the APIs from one of these proprietary models.

[00:14:54] Paul Roetzer: It's going to be very limited. And honestly, the rate of return of building your own model, the incentive is just not going to be there. It's going to be way faster to build on top. Again, go back to the CRM. You need a CRM. Is it better to build your own, or just use HubSpot's? Or just use Salesforce?

[00:15:11] Paul Roetzer: Like in most cases, people are going to build on top for speed and for innovation purposes. Gotcha.

[00:15:19] Mike Kaput: So next up, we are seeing some major attention being paid to this broad category of AI tools called, roughly, autonomous AI agents. So these are AI tools that are now coming out that are able to start performing tasks completely on their own, without human involvement.

[00:15:39] Mike Kaput: So one project that has gone viral is called AutoGPT. AutoGPT is a project from an independent developer, and it takes goals that you give it and executes the actions to achieve those goals on its own, without oversight from you. So in one example displayed in a demo video of AutoGPT, the tool is able to, for instance, Google topics on its own, read news articles to learn about a specific market, and then build a requested business plan for a new company in that market.
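To make that "goal in, actions out" pattern concrete: conceptually, tools like AutoGPT run a loop in which a language model proposes the next action toward a goal, the action is executed, and the result is fed back in until the model decides it's done. This is a toy sketch of that loop with a stubbed-in model scripting a fixed plan; it is not AutoGPT's actual code, and a real agent would call an LLM at each step.

```python
# Toy autonomous-agent loop: plan -> act -> observe, repeated until done.
# The "model" is a stub returning a fixed plan; a real agent would ask an
# LLM to choose each action based on the goal and the history so far.

def stub_model(goal, history):
    """Pretend LLM: return the next action for the goal, or 'DONE'."""
    plan = ["search_market", "read_articles", "draft_business_plan"]
    return plan[len(history)] if len(history) < len(plan) else "DONE"

def execute(action):
    """Pretend tool execution; a real agent would browse, read, write files."""
    return f"result of {action}"

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):  # hard cap so the loop can't run forever
        action = stub_model(goal, history)
        if action == "DONE":
            break
        history.append((action, execute(action)))
    return history

steps = run_agent("build a business plan for a new market")
print([action for action, _ in steps])
```

Note the `max_steps` cap: the "can run indefinitely if you don't tell it to stop" danger discussed below is exactly what happens when a loop like this has no such guard.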

[00:16:11] Mike Kaput: You don't have to tell it how to do any of these things. If you say, build me this business plan for a new company in a new market, it will literally start attempting to achieve that goal. Now, at the same time, we actually saw another autonomous AI assistant unveiled by HyperWrite, a company we're very familiar with and know well, that currently sells an AI writing tool.

[00:16:36] Mike Kaput: In the case of HyperWrite, they gave a closed demo to VentureBeat, who reported on this, and in it, HyperWrite showed off this agent's ability to use the internet on its own like a human would, to go ahead and actually take all the actions required to order a pizza online. So a little different from a ChatGPT, where you're giving it a prompt and it's producing an output.

[00:16:59] Mike Kaput: These are agents that are able to autonomously act on their own. You know, in the future you could see this leading to us having our own personal AI assistants that automatically go do things for us without us needing to kind of give them input or oversight. Now, a really important qualifier here.

[00:17:16] Mike Kaput: Dr. Jim Fan, who is a popular AI scientist at NVIDIA, tweeted about some very real limitations of things like AutoGPT. He calls it a fun experiment, which its creators also fully admit. It is a fun experiment today. It is unreliable for the really complex tasks you'd actually want it to do. So we're not saying that this AutoGPT system is out there in the wild working really well at many different things.

[00:17:46] Mike Kaput: It's just not today. Dr. Fan also talks about its autopilot mode being dangerous, which is something the creators of this tool will also warn about. Because in certain cases, given enough permissions and enough autonomy, this system can go perform tasks and execute code with no restrictions. So it can be doing things indefinitely if you don't tell it to stop working.

[00:18:14] Mike Kaput: So this is getting a ton of play online for obvious reasons, but there are very real limitations here. But it is very interesting to see what is now starting to become possible with these autonomous AI agents. Do you see this, Paul, as kind of the next stage in AI?

[00:18:33] Paul Roetzer: Yeah, my, so my first thought on this was I was on spring break last week, and I did not, like, I really wanted to just shut off from the world.

[00:18:42] Paul Roetzer: Like, I feel as though the last five months, since ChatGPT came out, have been honestly overwhelming, in terms of the processing of information, the pace of this stuff, our ability to synthesize the things we're reading and seeing and trying and learning. It's a lot to do mentally.

[00:19:05] Paul Roetzer: And so I really tried last week to just not think about AI for 24 hours. So of course, the day I did that, I come back to Twitter in the evening, and it's like AutoGPT and BabyAGI are just blowing up, at least within our bubble. I don't know that the average Twitter user would have seen or heard about AutoGPT, but certainly within the AI circles we follow.

[00:19:31] Paul Roetzer: It was the only thing being talked about for that stretch of time. So I think I even messaged you, I was like, what the hell are AutoGPT and BabyAGI? Like, are these product launches from somebody legit? Like, what is this? And then you dig in, and it's like, oh, okay. No, they're GitHub. Like, somebody put this code up on GitHub and people are playing around with it, and it's not actually a user experience that, like, you or I could go necessarily get value out of.

[00:19:55] Paul Roetzer: It's like, okay, well, at least I understand what in the world is going on. But it definitely starts bringing back the World of Bits story. So if anybody listened, I don't remember what podcast episode it was, it was in the twenties, we'll put it in the show notes. But I had written this piece about World of Bits in February, where it was sort of connecting the dots about Andrej Karpathy going back to OpenAI to work on action transformers, and Adept and Inflection AI, you know, raising hundreds of millions of dollars to do action transformers.

[00:20:19] Paul Roetzer: This is it. Like, it was the idea that we could not only have these language models that could generate things based on prompts, but they could actually take actions, and that is where we said at the time everyone was racing to. I think it was in mid-March we maybe did the podcast about this.

[00:20:36] Paul Roetzer: And it is absolutely where everyone is racing to. So we're seeing that happen. I think, you know, based on my synthesis of information and kind of what we're seeing, and the people we trust in the space that we're looking to, it's very early. But very early doesn't mean what it used to mean. Very early may mean it's months away from being usable technology for the average person.

[00:21:01] Paul Roetzer: It could mean it's a year away, but it sure seems like the shorter timeframe is more likely. Sort of like we've let the genie out of the bottle, in a way. Like, we're not going back. Like, we're not going to just stop trying to build these action transformers. But I do think it's a sign of things to come, and I think it's dangerous.

[00:21:22] Paul Roetzer: So Ethan Mollick, who is a professor at Wharton, who we've quoted before on this podcast, tweeted yesterday, well, this is April 16th, just yesterday, we're recording this on Monday the 17th: I've been playing with various open source efforts to give AI access to other systems. On one hand, they can make AI much more useful and powerful.

[00:21:42] Paul Roetzer: On the other hand, this is something the OpenAI GPT-4 white paper cautions about, as it multiplies the potential of unexpected risks. So, again, I try so hard not to overhype this stuff. I really, really worry about how easy it is for people to build this stuff. I mean, we're talking about a weekend project for people who know what they're doing.

[00:22:06] Paul Roetzer: I love the guys at HyperWrite. Like, you know, Matt and Jason are awesome. I've had some amazing conversations with Matt. I have not had a chance to talk with him about their personal assistant and, you know, kind of what they're doing from a safety and security standpoint. But I really hope that the people that are building these things understand the potential ramifications of what they're building.

[00:22:34] Paul Roetzer: And do everything in their power, internally and with their peers who are working on similar technology, to do it in a responsible way, and to do it with safety first and foremost, and alignment, as the main drivers of this. Given how easy it is to build stuff like this, I just feel like we're racing toward a future.

[00:23:00] Paul Roetzer: We're just not ready for it. Not, you know, from a knowledge work standpoint, which we'll get into, but from a safety and alignment standpoint. And I think that's what the Future of Life Institute open letter was all about: we're heading down this path, and it just seems like we're building tech for tech's sake at the moment.

[00:23:20] Paul Roetzer: Like, it's possible, so we're just going to do it and we'll figure it all out later on. And I, you know, I think we've been, I don't know if it's obvious, but I think we've been somewhat critical at times of OpenAI. But the more I think about what they're doing, you know, holding off for seven months before they released GPT-4 and doing all the safety and red teaming work and the alignment work, it at least appears Open

[00:23:49] Paul Roetzer: AI is making massive investments to try and do this the right way. And I think Google, and Microsoft, like, they're, they're trying, they're, yet, you can question the ethics and you can question, you know, decisions they've made around their ethics teams. But at the end of the day, they have lots of people dedicated to trying to do this in a safe way.

[00:24:10] Paul Roetzer: Whether we agree with the timing of these releases or not. My concern is all of these smaller companies that don't have those teams in place, that have the ability to build this stuff very quickly. And I just don't think most people are aware of how dangerous that can be. And so, more than anything, with AutoGPT, you're not going to go race, as the average listener to this podcast, to build something on AutoGPT today, or BabyAGI, or whatever the next thing is going to be.

[00:24:41] Paul Roetzer: But people who know how to build are building dozens, hundreds of applications on top of these things, and they're going to get really good, really fast. And I think it's a topic we're going to keep coming back to on the show. That is a really

[00:24:54] Mike Kaput: good lead-in to kind of this main topic that we want to talk through today, because we've kind of structured this to lead up to this third and final topic on knowledge work.

[00:25:06] Mike Kaput: So if you think about, okay, the AWSes, the Microsofts, the Googles of the world, the top AI research outfits on the planet, are all racing toward AI developments. We have things like AutoGPT improving in capabilities that allow it to do much more cognitive labor autonomously. And really, where this ends up, at least in the way you've written and many others have started to commentate on recently, is that we are probably looking at a future in the next 24 months where AI impacts millions of knowledge work jobs sooner than anyone thinks, and possibly with greater impact than we have anticipated.

[00:25:50] Mike Kaput: So you published on LinkedIn, like you alluded to, a post about the impact of AI on knowledge workers. And I just want to read a couple quick excerpts here to tee this up, and then just have you run with your thoughts on how you're looking at the impact of AI on labor. So you've said things like: GPT-4 and other advanced generative AI tech has accelerated the ability of AI to perform knowledge work, including strategic and creative work, in ways that we're still exploring and trying to comprehend.

[00:26:23] Mike Kaput: And on top of this, we're seeing rapid advancements, such as AutoGPT, that are giving AI the ability to complete complex tasks and actions with minimal human intervention. Now, you say that AI may not replace workers directly in the near term, but if companies can 5X or 10X the productivity of each knowledge worker, we certainly don't need as many employees as we have in marketing, sales, service, finance, HR, operations, et cetera, at least in for-profit companies driven by rules of efficiency and profits.

[00:26:57] Mike Kaput: And we don't really have any clear path, despite some of the moves made by the EU recently, to government oversight or regulation. And there are no unified efforts by businesses or educational systems to address the impact on workforces and students. And you conclude with: we are potentially looking at millions of jobs

[00:27:16] Mike Kaput: Being impacted in the next one to two years. And you go into a bit more of a thread from Jason Kakais, who is a popular investor and host of the Mega Popular All In podcast, a great business podcast we both listened to, where he kind of put a much more, put it in a, maybe a more abrasive or direct way by saying AI is going to nuke the bottom third of performers and jobs done on computers, even in creative ones in the next 24 months.

[00:27:44] Mike Kaput: So I'll leave it there. Do you want to walk us through what you're thinking? Why this post? Why now?

[00:27:52] Paul Roetzer: As I alluded to in the open, this is something I've been thinking a lot about. So, you know, in my talks, starting probably back in November, I don't remember when I first started saying it, but: AI won't replace marketers, but marketers who use AI will replace marketers who don't.

[00:28:08] Paul Roetzer: Inspired by the Stanford study from 2018 about AI not replacing radiologists, we have been trying to use that premise to, one, I believed it, and two, to give hope to people that this is not definitely going to replace lots of jobs. It's kind of like, there is a path forward where we can avoid a really negative outcome.

[00:28:42] Paul Roetzer: In recent weeks, I have come to believe that it might not be entirely true. And so I started changing that slide in my presentations, including at the AI for Writers Summit in March, to say, you know, AI won't replace writers, but writers who use AI will replace writers who don't, or at least they will have the best chance of thriving.

[00:29:04] Paul Roetzer: In the coming years. So it's almost like a caveat to say, listen, I don't know for a fact that if you dive all in on AI, you aren't still at risk of losing your job. When we look at the realities of how much efficiency can be gained and productivity can be generated, there is the chance that you're still going to be impacted.

[00:29:22] Paul Roetzer: So I've been debating how to say this on LinkedIn and how to talk about it on the podcast without sounding overly alarmist or being perceived as trying to create fear as a motivating factor, which is the opposite of what we try to do. I've talked about on this podcast before, though, that I think a sense of urgency is critical to this.

[00:29:46] Paul Roetzer: So let me kind of set the stage. There are approximately 132 million full-time workers in the United States. I'm not going to get into a global audience today. I'll focus on what we know a little better. Of those, it's estimated about 100 million are knowledge workers, what we previously would have called white-collar jobs.

[00:30:06] Paul Roetzer: That's about 75% of the total US workforce. Knowledge workers, according to Wikipedia, not GPT-4, are workers whose main capital is knowledge. Examples include programmers, physicians, pharmacists, architects, marketers, engineers, scientists, design thinkers, public accountants, lawyers, editors, academics, whose job is to, quote, "think for a living."

[00:30:27] Paul Roetzer: So, just so we're on the same page about what we're talking about as knowledge workers, think about the industries where these people largely exist: professional services, including marketing agencies and many marketers; software; information technology; finance and insurance; healthcare; publishing.

[00:30:42] Paul Roetzer: Just some examples. Okay. So when I first had a kind of, we need to do something about this, moment, there was a report that appeared to be leaked from Goldman Sachs, because the PDF of the report was not distributed directly by Goldman Sachs that I had seen. But a PDF of this report ended up online, and Goldman Sachs ended up writing a definitely more watered-down summary of the report on their blog last week.

[00:31:06] Paul Roetzer: So the headline that the media ran with was that a report by Goldman Sachs predicts as many as 300 million jobs could be affected by generative AI. When you go into the actual PDF of the report, it says, if generative AI delivers on its promised capabilities, the labor market could face significant disruption.

[00:31:25] Paul Roetzer: Some two-thirds of US jobs are exposed to automation by AI, Goldman says, adding that of those positions affected, as much as 50% of their workload could be automated. And it went on to say, although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted.

[00:31:47] Paul Roetzer: But this is where it starts to get kind of hard to comprehend. Some 7% of US jobs could be replaced by AI. So quick math: 10% of 132 million is about 13 million, so 7% is roughly 9 to 10 million jobs. And they find that roughly two-thirds of current jobs are exposed to some degree. Okay. So as we started looking at this data, and I'll get into the OpenAI report in a moment and some other personal thoughts, my general feeling is it is one of the most important things we can be doing with the platform.
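The back-of-the-envelope math Paul walks through here can be sketched in a few lines of Python. The 132 million, 75%, 7%, and two-thirds figures are the approximations cited in the episode; everything else is simple arithmetic, and the script is purely illustrative:

```python
# Rough workforce math from the episode (figures are approximations,
# not new data): US full-time workers, the knowledge-worker share,
# and the Goldman Sachs exposure/replacement estimates.

us_full_time_workers = 132_000_000   # approx. US full-time workers cited
knowledge_worker_share = 0.75        # ~75% are knowledge workers, per the episode
replaced_share = 0.07                # Goldman: ~7% of US jobs could be replaced
exposed_share = 2 / 3                # Goldman: ~two-thirds exposed to automation

knowledge_workers = us_full_time_workers * knowledge_worker_share
jobs_replaced = us_full_time_workers * replaced_share
jobs_exposed = us_full_time_workers * exposed_share

print(f"Knowledge workers: ~{knowledge_workers / 1e6:.0f}M")   # ~99M
print(f"Jobs replaced (7%): ~{jobs_replaced / 1e6:.1f}M")      # ~9.2M
print(f"Jobs exposed (2/3): ~{jobs_exposed / 1e6:.0f}M")       # ~88M
```

Note that 7% of 132 million works out to roughly 9.2 million, consistent with the "roughly 9 to 10 million jobs" estimate in the discussion.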

[00:32:23] Paul Roetzer: We have to be creating awareness that there is a possibility, a very real possibility, that knowledge work is going to be disrupted in the near future. So, as I said in the post, when I start looking at the possibilities and you just start adding up some very simple numbers, just in the SaaS industry or just in publishing, you pretty quickly get to millions of jobs being impacted and potentially lost in the next one to two years.

[00:32:50] Paul Roetzer: So the LinkedIn post has, as of this moment, almost 53,000 impressions, 74 comments, 22 reposts. The vast majority of the comments are, yeah, I totally see it, this is definitely important, it's really critical, we've got to be talking about this. And then there were definitely some that had a bit more of a visceral reaction to what can appear to be hype.

[00:33:12] Paul Roetzer: And I think in some cases it's the Jason Calacanis quote, and just Jason in general, that maybe creates this reaction from people, because he is definitely a tech insider. He lives and breathes, you know, SaaS and tech. And so within that world, where coding is the dominant thing they're looking at, it's a very real thing.

[00:33:30] Paul Roetzer: and, and so then they're extending that knowledge to other industries. And you know, I think you could certainly say that there, there could be some hyperbole within there, but once you sift through that, there are very real and imminent issues that are facing business leaders, education systems, governments, professionals, knowledge workers that we're not ready for.

[00:33:48] Paul Roetzer: So when you consider the current state and what appears to be coming next, my general feeling at the moment is I would much rather organizations and leaders work under the assumption that significant job loss is imminent, that you are going to have to make choices in your organization that could affect large percentages, beyond single-digit percentages, of your workers.

[00:34:13] Paul Roetzer: So I would rather organizations prepare for that possibility and be wrong, that it doesn't happen, than do nothing and say nothing and be right. Because think about what we've done at the Institute: we started researching AI in 2011, started talking about it publicly in 2014 with my second book, started the Institute in 2016, started the conference in 2019, our online academy in 2020.

[00:34:45] Paul Roetzer: And the reality is it's been a very slow build. It's been hard to get people to listen, to care, and to take action. So just to quantify it: in 2022, prior to ChatGPT, we would add, rough average, about 500 new subscribers to the Institute a month. Contacts: people who download assets, people who subscribe to the newsletter, attend a webinar.

[00:35:12] Paul Roetzer: For the first time, about 500 per month. Since ChatGPT, we're averaging close to 3,000. So in 2022, we added about 6,000 new contacts. We're adding 6,000 every two months now. So what ChatGPT did was drive massive awareness and interest in the topic, but in many cases it's for the tools. It's for, like, ChatGPT, or they just want to learn what to use for email, what to use for advertising, and that's fine as an entry.

[00:35:42] Paul Roetzer: What we've said is the platform we've created has to do more than connect people to tools. There are plenty of people who've jumped into AI or built newsletters about, you know, the 20 tools you should go get today, and all this. And we do some education around tools for sure, because that's valuable to people.

[00:35:58] Paul Roetzer: But we need to go beyond that. So what I'm basically saying is, for seven years we've been telling people you have to pay attention. You have to look at what AI is going to do. It's coming, it's going to intelligently automate large portions of your work. And only a small percentage of people were listening, you know, 500 new people a month.

[00:36:19] Paul Roetzer: Now more people are listening, but they're not necessarily aware of the overall impact. So my feeling is, now that people are listening, we have to do more to make them care about the really important topics. So the LinkedIn post was a starting point, and I realized I can't tell the full story there.

[00:36:35] Paul Roetzer: So I spent time this morning and last night really thinking about this. So I want to apply a little bit of my reasoning to this, and I would love for people to, you know, reach out to me on LinkedIn. We'll put this video up, probably on LinkedIn, so comment there. But here's some additional thinking and context.

[00:36:53] Paul Roetzer: So why do I feel there is a reasonable probability that job loss, significant job loss, could occur in the near term? I'm going to go through a few points. Number one, capitalism. Companies are driven by efficiency and profits. AI enables both of those, potentially on a level we have never seen before. In terms of speed of transition, we're not talking about a five-year transition to this, we're talking about five months of transition.

[00:37:16] Paul Roetzer: Two, the economy sucks. Companies that can save money have to save money. And so if they're in a for-profit world and AI enables the ability to cut costs, they're going to do it. Third, private equity firms. Almost 12 million employees in the US, so what is that, almost 10%, right? Yeah, almost 10%.

[00:37:41] Paul Roetzer: Okay. Well, roughly 7% of the US labor force works for private equity-backed businesses, according to the American Investment Council, an industry lobbying group. A 2019 study by the National Bureau of Economic Research lays out what happens when private equity firms buy companies.

[00:38:01] Paul Roetzer: Researchers analyzed almost 10,000 debt-fueled buyouts between 1980 and 2013, and found that employment fell by 13% when a private equity firm took over a public company. Employment declined by even more, 16%, when private equity acquired a unit or a division of a company. Private equity firms buy businesses to maximize profits.

[00:38:25] Paul Roetzer: You maximize profits initially by reducing expenses. Workforces are often the largest expense. If AI becomes a tool in the private equity toolset, it is going to be the one they wield first. There is a theory we have, going back to May of 2022, that the future of all business is AI or obsolete. I am more confident now than ever that you either build an AI native company to disrupt an existing company, or you build an AI emergent company that reduces cost and accelerates revenue through the application of AI.

[00:38:58] Paul Roetzer: As you mentioned, Mike, there is no clear path to government stepping in and slowing this down, and there's no obvious unified effort by businesses or educational systems to address the impact on workforces and students. So these are all reasons this could likely occur. Another one: companies are generally unaware of and unprepared for how prevalent this technology is going to be within knowledge work by the end of 2023.

[00:39:22] Paul Roetzer: Just look at Microsoft 365 Copilot and Google Workspace, and consider that the tech is going to be infused into every function of your business if you use those platforms, whether you go seek AI tools out or not. Next, we know from our own State of Marketing AI research, which we've done two straight years, that the vast majority of companies do not have any AI-specific internal education or training programs.

[00:39:48] Paul Roetzer: 67% of people in our survey, which had almost 800 respondents last year, say lack of education and training is the top obstacle to adoption. 81% say they do not have AI-focused education and training in their organization. Next, the tech is really good and largely untapped in its potential. In language, just look at GPT-4. In image generation, look at Midjourney and Stability AI. And in video,

[00:40:13] Paul Roetzer: look at what Runway is doing. In coding, look at Replit's Ghostwriter and GitHub Copilot. Then consider, as we saw with AutoGPT, the fact that people are building thousands of apps and plugins on top of these foundational models. Next, there is a paper from March 27th from OpenAI, OpenResearch, and the University of Pennsylvania called "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models."

[00:40:42] Paul Roetzer: So what does "GPTs are GPTs" mean? They conclude that large language models such as GPTs, generative pre-trained transformers, exhibit traits of general purpose technologies, indicating that they could have considerable economic, social, and policy implications. If you are not familiar with general purpose technologies:

[00:41:02] Paul Roetzer: They can affect an entire economy, usually at a national or global level. General purpose technologies have the potential to drastically alter societies through their impact on preexisting economic and social structures. Examples would be the steam engine, electricity, and information technology. You can also consider railways, interchangeable parts, electronics, things like that.

[00:41:21] Paul Roetzer: Basic premise: it is a massive disruption to the way things have previously been done. In the OpenAI research paper, they propose a rubric for assessing large language model capabilities and their potential impact on jobs. They look at what they call exposure of tasks to language models. They do not distinguish between labor-augmenting or labor-displacing effects.

[00:41:44] Paul Roetzer: They just look at exposure of tasks and roles. And they say they are accounting for other generative models and complementary technologies in their human estimates. So these are people who looked at jobs and then estimated the impact of large language models. The paper itself is kind of dense and scientific, but I'm going to try and give you the highlights here.

[00:42:04] Paul Roetzer: Their human estimates indicate that up to 49% of workers could have half or more of their tasks exposed to large language models. Occupations with higher wages generally present higher exposure. Sample occupations with the greatest exposure: interpreters and translators, survey researchers, writers and authors, PR specialists.

[00:42:27] Paul Roetzer: I would throw marketers in there. Accountants, web and digital designers. Two more thoughts on why I think this is going to happen. Based on my own experiences and research, as well as the context of dozens of conversations, it is reasonable to assume the time to complete most knowledge tasks, such as writing, design, coding, planning, et cetera, can be reduced on average 20 to 30% at minimum with current generative AI technology.

[00:42:55] Paul Roetzer: We are not at the peak of this technology. We are at the very beginning stages. It's actually very early in terms of understanding and adoption of the current forms, and the tech is getting faster and smarter at a compounding rate. So these percentages are only going to rise as what it's capable of doing grows.

[00:43:13] Paul Roetzer: Okay, so those are the reasons. I'm going to get into what could prevent it from being true. Mike, do you have any thoughts to add on these ideas of why it might actually

[00:43:23] Mike Kaput: happen? No, I think I would just emphasize for anyone listening, if you haven't already, go look up what Paul mentioned about Microsoft 365 and Google Workspace.

[00:43:33] Mike Kaput: You know, we've talked to a couple of groups recently and we've emphasized again and again, these things are real. They're being baked into the technology. They're coming for every part of your tech stack. This is all going to be in your world. Even if, you know, the people using this software can't even spell AI, they're going to absolutely need to understand it and what's possible, because it will be in every part of your tech stack by the end of the year.

[00:43:59] Mike Kaput: So I think as we're talking about kind of more macro ideas here, realize that this is not just wild speculation. We're already seeing the velocity of AI adoption in all the tools that power modern

[00:44:13] Paul Roetzer: businesses. Yeah, and I mean, I've shown the Microsoft 365 Copilot trailer to a room full of CEOs. It changes things. Like, it is extremely tangible to them, the impact this is going to have across their business.

[00:44:28] Paul Roetzer: And so if you just look at that stuff, it is really hard to debate this premise. Okay, so maybe a little silver lining here: what could keep this from happening? Keep us from losing millions of knowledge worker jobs? Now, keep in mind, I don't want this to happen. I am putting myself out there with these ideas because I am trying to get people to care, so we can work together to stop this from happening.

[00:44:55] Paul Roetzer: Okay? So what would prevent this? Government regulation: low probability, in my opinion. It's coming, there's going to be heavy regulation of this space, but it is not coming in 2023. Massive legal precedent that challenges how the foundation models were trained, thus slowing down progress: this would be, GPT-4 illegally scraped content and therefore the foundation model gets shut down by the government, or Stability

[00:45:23] Paul Roetzer: AI illegally took copyrighted images to train their model, which they did, and therefore, illegal case shuts down the foundation model. But that's like whack-a-mole. It's like, okay, great. We shut down G P D four, but you can still get an open model off a hugging face or GitHub or wherever. Like I could still go get models to build on top of.

[00:45:42] Paul Roetzer: So it's not going to solve it. There will be cases, there will be fines paid, there may be some other discipline, but it's not going to halt progress. Okay. Other reasons this could be prevented from being true? AI creates more new jobs in the near term than are lost. I do believe it's going to create lots of new jobs and career paths.

[00:46:04] Paul Roetzer: We can't imagine. We addressed this in our book. We, we thought about all this, like in our book, like, so we, we addressed the idea that these new roles would be created. We even hypothesize what some of them could be. They are not going to come online as fast as the jobs are going to be going away. That that is just a reality.

[00:46:21] Paul Roetzer: So the idea that new jobs, reduce this from happening, it's a low probability, low impact companies, next companies rapidly adapt to re-skill or up-skill their talent and thereby redistribute their time and the workforces so that there isn't job loss. That is a low probability, but it is our best chance in my opinion.

[00:46:44] Paul Roetzer: So, again, as I highlighted in our state of marketing eye, It's just not happening. There is not a wide scale effort within the organizations to, to build internal AI academies and things like that. Now, I will say we have seen the needle move in 2023. So we're seeing our piloting ad for marketers series, which is like the 17 on-demand courses lumped into like eight hours of content to get you started.

[00:47:07] Paul Roetzer: We have seen large organizations buying dozens of licenses for their teams and actually using that as a training mechanism. And then, like, I'll show up and do moderated Q&A with their teams. So we're seeing way more effort this year than we did previously, but it's still not at scale by any means.

[00:47:24] Paul Roetzer: So again, low probability of it changing anything, but it is our best chance. Next: adoption rates in companies continue to lag behind the technology's capabilities. I would assess this as a medium probability. I think there's a reasonable chance that we overestimate the impact because we overestimate how quickly this is going to get adopted.

[00:47:45] Paul Roetzer: That goes back to our law of uneven AI distribution that we have talked about before. I did a whole podcast on this. The idea is that the value you get from AI, and how quickly and consistently that value is realized, is directly proportional to your understanding of, access to, and acceptance of the technology.

[00:48:03] Paul Roetzer: There are going to be plenty of companies, plenty of industries, that just do not jump into this. There are going to be some outliers that do, but generally you may have slow adoption, and that could prevent this disruption. So this, to me, is the greatest variable: understanding and adoption by industries and professions.

[00:48:19] Paul Roetzer: But again, it's only going to slow it down. So rather than one to two years, it may be two to four years, or, you know, whatever that may be. Next: the flaws and limitations of generative AI are greater than are being discussed in the media and will prevent mass disruption in the near term. I think this is a medium probability, but again, it's not evenly distributed across use cases.

[00:48:43] Paul Roetzer: So, like, coding: it's happening, it's going to happen in coding. Is it going to happen in writing as quickly? To be determined. So let me give you an actual example of this one, of how the flaws and limitations could reduce the curve a little bit. In preparation for this podcast, last night I went in and gave a prompt to ChatGPT.

[00:49:04] Paul Roetzer: I did it with the browsing version and the GPT-4 version that is not connected to the browser. I don't think I need to read this whole prompt, but just to give you a sense of it: You are an expert market researcher working at a top consulting firm. Your task is to analyze the knowledge worker market in the United States and forecast the impact of AI on knowledge workers in the next three to five years.

[00:49:25] Paul Roetzer: Your analysis should include, but not be limited to: a definition of a knowledge worker. How many total employees are in the US? Has the number of knowledge workers increased or decreased? What is the breakdown of knowledge workers? It keeps going on. Include an overview of how AI, specifically large language models and generative AI, will impact knowledge workers, and what new knowledge work jobs and careers may be created by large language models and generative AI.

[00:49:46] Paul Roetzer: Now, having told you that prompt, I will say everything I'm saying to you and everything I said beforehand was not generated by AI. I didn't use any of the stuff it gave me other than as context. So again, I used GPT-4 and I used the browsing plugin. My first takeaway was that the GPT-4 output was insanely impressive.

[00:50:06] Paul Roetzer: Like, if the facts were all true, it was better than what most people I ever hired in my agency could have written, and I'm talking about people with 10 years of experience. So, practical experience, it was really, really well done. The trick was there were a bunch of statistics that I had no idea were right or not.

[00:50:25] Paul Roetzer: So in terms of automating this task, I would still need to go through and verify everything. And at the end of the day, it probably actually wouldn't save me that much time, other than the outlining or ideation, as well as improving what I write. When I used the browsing plugin, again, it was really impressive, but the sources it cited were terrible.

[00:50:47] Paul Roetzer: So as a trained journalist, I would never have used the sources it cited. The browsing plugin has solved for the fact that there aren't citations by adding citations, and the citations suck. So again, I haven't actually changed this process. This means there is a spectrum of use cases, with associated benefits and risks, that will actually determine the true impact on workers.

[00:51:13] Paul Roetzer: So I tried to think about this visually. And again, this is a lot of stream-of-consciousness thought here; we'll probably play around with this stuff more moving forward. But think about an X-Y axis. The Y axis is the ability to automate a task: think about translation, transcription, drafting, outlining, whatever.

[00:51:32] Paul Roetzer: It's, you just stay in the writing, like degenerative AI writing example, and the the X access is the risk of errors occurring. So in this example where I am putting myself out there and saying, I think there's the potential that we are going to lose millions of knowledge, work jobs, I cannot rely on a GPT-4 model to output something that has factual errors.

[00:51:55] Paul Roetzer: Because if I start spitting out a bunch of factual errors and someone goes and fact-checks me, then they're going to say, okay, this guy doesn't know what he's talking about. And now I've ruined my reputation because I relied on this model. So the risk of error is very high. So can I write this analysis with GPT-4?

[00:52:11] Paul Roetzer: It is very high on the ability to automate; I can definitely do it, but the risk of error is so significant I would never use it for that use case. Now, if I want to think about transcription of this podcast: it can definitely do it. Our experience is it's like 90 to 95% accurate with some light editing.

[00:52:29] Paul Roetzer: And if there's a little bit of air in the transcript, it's, it's not the end of the world. We can go in and fix it. Like the it's, it's, and we're going to scan it. So that one is much more likely. So if I think about the, the knowledge workers in my company and I say, okay, this person does analysis of data.

[00:52:44] Paul Roetzer: They have to be right all the time. Names, places, dates, statistics, whatever it is, can't be wrong. That person's not going anywhere. They are essential to continue to move forward. And there is no indication that these large language models are going to be solved for the hallucination, the making stuff up, anytime soon.

[00:53:04] Paul Roetzer: Sundar Pichai, we'll talk about it in a minute, was on 60 Minutes last night. They asked him explicitly, can we solve for hallucination? He said, I think we can make progress. They don't know how to solve for the fact that it makes stuff up. They gave the example on that one: 60 Minutes asked it to write some economic theory paper, something like that, and then cite books, and it made up the five books.

[00:53:26] Paul Roetzer: Like did, so we're not solving this. So I think what needs to happen is you need to think about the workers in your organization, what they do and how essential the work is and how critical it is that they do it right every time. So like sending emails, social media shares may not be as high in that risk factor.

[00:53:43] Paul Roetzer: The other thing, and I'll, I'll kind of go into what do we do about it? We created this marketer to machine scale that's in the book as, and then we've adapted to like a human to machine scale. And so if you think about task level work, there's a few variables you have to consider. The inputs needed for the AI to do its job, the oversight needed by the human to make sure it does, it does, it does its job correctly.

[00:54:05] Paul Roetzer: The dependence of the machine on the human to make sure it's accurate and safe and aligned with human values. The improvement rate, so, does it get better on its own? And then I would throw in, which was not on the original scale, the risk. What is the risk of it being wrong or performing an unintended or misaligned action?
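The automate-versus-risk idea Paul describes, score each task on how automatable it is against how costly an error would be, can be sketched as a tiny decision rule. The example tasks, the 0-10 scores, and the threshold below are hypothetical illustrations, not figures from the episode or the book's actual scale:

```python
# A hypothetical sketch of the task assessment discussed above:
# rate each task on automation ability (Y axis) and error risk
# (X axis), then flag which tasks are reasonable candidates for
# generative AI today. All scores and thresholds are made up
# for illustration.

TASKS = {
    # task: (automation_ability 0-10, error_risk 0-10)
    "podcast transcription": (9, 2),          # small errors tolerable
    "market analysis with statistics": (8, 9),  # facts must be right
    "social media share drafts": (7, 3),
}

def good_ai_fit(ability: int, risk: int, risk_threshold: int = 5) -> bool:
    """Candidate for AI only if highly automatable AND low-stakes."""
    return ability >= 7 and risk <= risk_threshold

for task, (ability, risk) in TASKS.items():
    verdict = "good AI fit" if good_ai_fit(ability, risk) else "keep human-led"
    print(f"{task}: {verdict}")
```

Under these assumed scores, transcription and social drafts come out as good fits while the statistics-heavy analysis stays human-led, which mirrors the transcription-versus-analysis contrast in the discussion.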

[00:54:24] Paul Roetzer: So as you can see, this is a complicated one. So if I go back to again the this main thought, the what could prevent millions of jobs from being. I think that the flaws and limitations of generative AI being greater than it appears online, where it's sort of like in the Instagram version of everything, everything looks amazing.

[00:54:43] Paul Roetzer: There's all these awesome demos and like it just works. And auto gps going to disrupt everything in Baby A a g I is a, you know, our first, it it's bullshit. Like, it is not reality. Those five use cases you're seeing are not going to be in the world tomorrow. So it is a sign of things to come, but there are flaws and limitations to this technology.

[00:55:02] Paul Roetzer: Okay, I will wrap here with: what do we do about it? Again, we could spend an entire episode just on that. Education and training: you have to train individuals and teams in your organization. You have to find people in the industry that you can trust who are talking about these topics.

[00:55:21] Paul Roetzer: There are a lot of overnight AI experts who are spending their time doing Twitter threads about the 10 things to do with chat GPT posts. They're a dime a dozen, most likely written by chat. GPT I am not saying we don't need to find use cases for ChatGPT and generative ai, but if that's all the expert is talking about, they are not an expert in ai.

[00:55:44] Paul Roetzer: Find voices you trust in this space, and trust them to synthesize things you don't have time for yourself. So that's one. Two, if I was a company, I don't care if you're a small company or a large company, I would create an internal AI council that is charged with developing policies and practices, and I would have them consider the near- and long-term impact of AI

[00:56:06] Paul Roetzer: On the company across all functions. Three, and maybe this could be one because there's already templates to do this. Develop responsible AI principles for your organization that guides your use of this technology and your point of view on it and the impact on your team. And I would've generative AI policies that explicitly say how you're using language tools, image generation tools, video generation tools, coding generation.

[00:56:29] Paul Roetzer: So those are two things I would do right now. I would then conduct an impact or exposure assessment of your teams. So the likelihood of intelligent automation impacting people in your teams. At an individual level, at a team level of function level. I would build an AI roadmap for your company that looks, you know, one to three years out and says, how are we going to adapt, become AI emergent as an organization, get someone internally or externally who can lead the charge on that.

[00:56:55] Paul Roetzer: And then at a high community level, I would engage lawmakers at the city, state, and federal levels. They, they have to get involved. This is. It is so critical that there is some level of government oversight, and regulation. And, I just think if that doesn't happen sooner than later, we're going to have much bigger problems a year from now.

[00:57:18] Paul Roetzer: So, thank you for like listening. It was a lot. But I'll turn it back over to you, Mike, and take a sip of my coffee.

[00:57:27] Mike Kaput: No, it's an absolutely mission-critical topic, so I'm glad we're getting some thinking out there. Because, to your point, I mean, yes, it is daunting. It can be intimidating and scary, but preparation is the only remedy.

[00:57:43] Mike Kaput: For what could potentially happen regardless of how scary that might seem. So as we wrap up here, let's dive into just a, a handful of rapid fire topics and then we will, we'll close out this week in ai. So first up, a company called rewind.ai shared their investor deck publicly, so they're getting some, ton of attention for this because it's a really cool gesture of transparency to the market.

[00:58:10] Mike Kaput: For anyone who doesn't know what Rewind is, it's an AI tool that acts as your AI-assisted memory. It will record anything you've seen, said, or heard; you enable a plugin on your desktop, and then it makes it all searchable. This is not only a move for transparency with Rewind.ai, but also founder Dan Siroker tweeted, we don't have time to meet with everyone,

[00:58:35] Mike Kaput: So instead we're sharing our investor presentation with the world since they've gotten so many inquiries about their tool and their funding. So Paul, what did you make of this when you saw this?

[00:58:45] Paul Roetzer: I've been following them for a while. The guy who created it is the founder of Optimizely. I don't know if you said that.

[00:58:50] Paul Roetzer: But, so I mean, the guy, I've followed the guy for a while. It's a really hot tech. This is actually one of the technologies I thought about when I wrote the law of uneven AI distribution. I won't use it like again, awesome tech, but the thought of no. However, how secure they say they are or how private your data is, the thought that some AI is not just like transcribing a meeting where I'm picking like, okay, yes, record this one.

[00:59:17] Paul Roetzer: Desktop action, but every desktop action. And every email and every conversation and every Zoom chat, even private DMs. All of it, recording everything. That is the level of acceptance of AI, and access granted to it, that I thought about with the law of uneven AI distribution. I just won't do it.

[00:59:39] Paul Roetzer: and I worry actually about an environment where you are doing it. And you and I are having private dms, and I'm assuming Right. That's now their argument's going to be Yeah. But it's just like augmenting the human memory. And like, you would have that person would have that anyways. Like Yeah.

[00:59:53] Paul Roetzer: But the risk of that being packed and like other people having like really worries. And the roadmap has like, apple ar glasses on it. So not only is it going to be your digital life on the screen, they're envisioning everything you do is going to be recorded everywhere you go. In, in the, the world of atoms, like not just the world of bits, the world of atoms where I'm out in the world and seeing people in human form, everything you do and say is recorded for like me to just query on the fly.

[01:00:23] Paul Roetzer: Like, I don't know, man. I get why people would do it. It's fascinating tech. Good on them. They're going to raise a ton of money. You probably can't stop this from happening. But I personally would choose not to use this kind of technology. I mean, I love my memories in Facebook.

[01:00:45] Paul Roetzer: I love that I can query April 16th, 2016 and see where I was and what I was doing in my Apple photo album on my phone. I love the idea of access to these memories and knowledge. But the idea of that level of access is terrifying to me personally.

[01:01:05] Mike Kaput: And to your point, this comes down to these individual trade-offs.

[01:01:08] Mike Kaput: We're going to have to accept not reaping the benefits of a tool like this because of personal comfort levels with what's required to get those benefits. A

[01:01:18] Paul Roetzer: understand.

[01:01:20] Mike Kaput: All right, next up, Elon Musk is back in the AI conversation again. He just founded a new company dedicated to AI. It's called X.ai.

[01:01:29] Mike Kaput: Basically nothing is known about this company today. I believe it was just reported on because the incorporation documents were found to have been filed in early March. But some people are speculating the company is a possible response to OpenAI. Musk helped found OpenAI, and he has been openly critical of them, and broadly critical of the direction taken by AI research.

[01:01:55] Mike Kaput: He went so far as to be a signatory on a recent open letter calling for a pause on AI research for at least six months, which was signed by all sorts of other AI experts. What did you make of this? Do you have any sense of what his play could be here?

[01:02:14] Paul Roetzer: Yeah, I think we've known for a long time that he owned X.ai as a website, and he talked about it when he bought Twitter. I think when he tried to back out of the Twitter purchase and realized he couldn't, then, almost as a way to save face, he seemed to say, well, fine, it's just going to enable X to happen faster.

[01:02:33] Paul Roetzer: So, I think, and I could be wrong here, this is more of a theory: they're shutting off all the APIs to the Twitter knowledge base, all that learning data that exists within Twitter. Not just, you know, tweets and words, but images, videos, memes, everything. You think about how you build a really valuable foundational language model.

[01:02:55] Paul Roetzer: You have to have proprietary data, and the other foundational models wouldn't have the Twitter data. And so my assumption is he is likely planning to build AGI through Twitter data as the unique part of the recipe. And then, when you mix it with what they're doing with Optimus at Tesla, his play is to build an AGI and then embody it within a humanoid robot.

[01:03:26] Paul Roetzer: And I'm sorry, this sounds really sci-fi, but this is what he's doing. He is not going to go down without a massive fight. I think he feels burned by OpenAI. I think he worries about the application of this stuff, and he may feel that he has to build it himself to have the best chance to apply it.

[01:03:49] Paul Roetzer: So I don't know if it's going to work, but that is my guess of what's happening.

[01:03:57] Mike Kaput: So, in another competitive story here, Anthropic, a major AI player that builds foundational AI models, and one we've talked about before as someone to follow, is reportedly looking to raise as much as $5 billion over the next couple of years to challenge OpenAI.

[01:04:15] Mike Kaput: A pitch deck for their Series C round was leaked or released, and the company said that it plans to build what it calls a quote frontier model that it claims will be 10 times more powerful than today's most advanced AI. The deck goes on to say that the model quote could begin to automate large portions of the economy.

[01:04:37] Mike Kaput: Example use cases of what that could look like include using the model for things like customer service emails and chat, coding, productivity-related search, document editing and content generation, HR tasks like job descriptions and interview analysis, and much, much more. Now, Paul, you had posted about this as well on LinkedIn.

[01:04:56] Mike Kaput: What did you think of this when you saw it?

[01:04:58] Paul Roetzer: I can't believe this was a week ago. This is probably when I actually started having the existential crisis around knowledge work. It's really hard for the human mind to conceive of what a 10x version of a language model could be capable of. That's probably my main takeaway when I saw this. And again, we've known Anthropic was going to do some major stuff here, as well as these other players.

[01:05:28] Paul Roetzer: It's just continued reinforcement that business leaders, government leaders, and education system leaders are so unprepared for the next two years and beyond. And I think that's why I've felt the motivation to put the knowledge work conversation out there despite, you know, the challenges and disagreements that may come from it.

[01:05:56] Paul Roetzer: Yeah. I just think it's so critical, and so few people have a line of sight to what is happening and what is about to happen. And this is just another example. Again, they may miss, they may not get there. Maybe it's 2x, maybe it's 5x. It's massive and it's disruptive no matter where it lands.

[01:06:17] Paul Roetzer: Yeah.

[01:06:17] Mike Kaput: The size of the purported money they're looking for, and the timeline, should make people wake up. Because you're right: even if they fall short, there will still be an impact. But the fact they even think this is possible gives you a sense of what some of the leading minds in AI believe is achievable in terms of how quickly these things can advance.

[01:06:40] Mike Kaput: All right, last but not least: Is artificial intelligence advancing too quickly? That is the question broadly asked by 60 Minutes in a recent interview where they sat down with several leaders at Google, including CEO Sundar Pichai, to understand what's coming next in AI.

[01:06:59] Mike Kaput: And they covered a lot of different topics and use cases and things Google is working on. But as part of the interview, Pichai said that AI will be as good or as evil as human nature allows, and that regardless, the revolution is coming faster than we know or anticipate. He also mentioned specifically that knowledge work would be disrupted to kind of bring everything full circle.

[01:07:22] Mike Kaput: Paul, what were your reactions to this interview, which I highly recommend everyone watch?

[01:07:26] Paul Roetzer: Yeah, I didn't get to watch the whole interview yet. I was actually listening to sound bites of it on the ride to work today, with Demis Hassabis, who we've talked about many times on the show, and Sundar.

[01:07:40] Paul Roetzer: I would go watch it. I'm going to watch the full thing. But you know, I think that what they do with Bard, what they do with their language models, is still largely to be determined, and the impact they'll have is to be determined. But I just wouldn't get caught up watching GPT-4 and ChatGPT because it's the thing everyone has access to at the moment.

[01:08:05] Paul Roetzer: And thinking that this conversation is done. It's just starting. There is more advanced technology than GPT-4 in multiple labs. GPT-4 on its own will become multimodal at some point. Right now it's just text; it's going to infuse images and video. We are just starting, and that's why I think the knowledge work conversation has to happen now.

[01:08:30] Paul Roetzer: We cannot make assumptions about the future of work based on the tech we're looking at right now. Even if this was all we had, it's still going to be a massive disruption. But it is not all we're going to have.

[01:08:45] Mike Kaput: Well, I think on that note, that's a good place for us to wrap the conversation. There's a lot I'm sure people are chewing over now, but it's good to have the conversation out there.

[01:08:53] Mike Kaput: And Paul, as always, thank you for your time and your insight in demystifying some of this for us.

[01:08:59] Paul Roetzer: Yeah, we appreciate everybody listening. We will talk to you again next week.

[01:09:03] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:09:25] Paul Roetzer: Until next time, stay curious and explore AI.
