47 Min Read

[The AI Show Episode 105]: What Economists Get Wrong About AI, The AI Agent Landscape, and the Urgent Need for AI Literacy


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

Paul Roetzer and Mike Kaput explore a future where AI could turn one cent of electricity into hundreds of dollars worth of top-tier professional work. Join our hosts as they unpack Carl Shulman's provocative insights on superintelligence, explore the rising world of AI agents, and examine the importance of AI literacy for all. Plus, take a look at the latest from OpenAI, Google Gemini, Anthropic and more in our rapid-fire section.

Listen Now

Watch the Video

Timestamps

00:03:32 — 80,000 Hours Podcast with Carl Shulman

00:20:10 — The AI Agent Landscape

00:33:03 — AI Literacy for All

00:42:02 — OpenAI Superintelligence Scale

00:47:02 — Lattice and AI Employees

00:52:18 — OpenAI Exits China

00:54:23 — Microsoft, Apple and the OpenAI Board

00:56:20 — Gemini for Workspace

00:58:48 — Claude Prompt Playground

01:01:48 — Captions Valued at $500M

Summary

AI Expert Carl Shulman on the Economy, National Security, and Society in the Race Towards AGI

In a marathon interview on the 80,000 Hours Podcast, a somewhat reclusive but highly influential AI expert argues that we are headed toward massive disruption thanks to the coming development of cheap, superhuman AI.

Carl Shulman is an independent AI researcher with ties to the Machine Intelligence Research Institute and the Future of Humanity Institute at Oxford University.

And in this rare interview, he spends more than 6 hours unpacking the huge—and massively disruptive—implications of our race towards artificial general intelligence (AGI) and what it means for the world economy and life on Earth.

A vast range of topics is covered in this episode, but they all essentially boil down to the following argument made by Shulman (this comes right from 80,000 Hours); a rough back-of-the-envelope version of the arithmetic is sketched just after the list:

  • The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
  • Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labor.
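To make the numbers concrete, here is a minimal back-of-the-envelope sketch in Python. The electricity price and the $100-per-hour value of professional work are illustrative assumptions, not figures taken from the interview.

```python
# Rough sketch of the 20-watt argument (illustrative assumptions only).
BRAIN_WATTS = 20                  # approximate power draw of the human brain
PRICE_PER_KWH = 0.15              # assumed retail electricity price, USD/kWh
PRO_VALUE_PER_HOUR = 100.0        # assumed value of top professional work, USD/hour

kwh_per_hour = BRAIN_WATTS / 1000             # 0.02 kWh to run for one hour
cost_per_hour = kwh_per_hour * PRICE_PER_KWH  # ~$0.003, a fraction of a cent

# If that hour of "brain-efficient" compute produced $100 of professional work:
value_per_dollar_of_power = PRO_VALUE_PER_HOUR / cost_per_hour

print(f"Electricity cost per hour: ${cost_per_hour:.4f}")
print(f"Output per $1 of electricity: ${value_per_dollar_of_power:,.0f}")
# Roughly $0.003 of power per hour versus $100 of output, i.e. about
# $33,000 of work per dollar of electricity under these assumptions.
```

Under those assumptions, one cent of electricity maps to a few hundred dollars of output, which is the leverage Shulman is pointing at.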

So, Shulman is essentially saying that it is likely that, in the near future, we will have not only superintelligence, but extremely cheap superintelligence.

And that is going to change everything.

The AI Agent Landscape

AI agents are poised to define the next major wave of progress in AI. Agents are autonomous systems that can pursue open-ended goals, make long-term plans, and use tools to complete complex tasks with minimal human input.

The concept of AI agents has emerged gradually over the past two years, building on advances like chain-of-thought prompting, tool use capabilities, and multi-agent architectures.
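As a loose illustration of what an "agent" is doing under the hood, here is a minimal, hypothetical Python sketch of the decide-act-observe loop. The tools and the scripted planner are invented for illustration only; real agent systems put a language model behind the decide step and add memory, error handling, and human oversight.

```python
# Minimal, hypothetical agent loop: decide -> act -> observe.
# The tools and scripted planner are stand-ins, not any real product's API.
from dataclasses import dataclass, field

def search_web(query: str) -> str:
    """Stand-in tool: pretend to search the web."""
    return f"Top result for '{query}' (stubbed)."

def write_summary(text: str) -> str:
    """Stand-in tool: pretend to summarize text."""
    return f"Summary: {text[:60]}..."

TOOLS = {"search_web": search_web, "write_summary": write_summary}

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def decide(self):
        """Pick the next (tool, argument) pair; an LLM would do this for real."""
        if not self.history:
            return ("search_web", self.goal)
        if len(self.history) == 1:
            return ("write_summary", self.history[-1])
        return None  # treat the goal as complete

    def run(self, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            step = self.decide()
            if step is None:
                break
            tool_name, argument = step
            observation = TOOLS[tool_name](argument)  # act
            self.history.append(observation)          # observe
        return self.history[-1] if self.history else "No result."

print(Agent(goal="the current AI agent landscape").run())
```

The point of the sketch is the shape of the loop, not the stubbed tools: the system keeps choosing its own next action until it decides the goal is met, which is what separates an agent from a single prompt-and-response call.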

OpenAI's Greg Brockman envisions a future where advanced models integrate deeply into our world, transforming user interactions.

In a recent article in Forbes, venture capitalist Rob Toews explores AI's potential in areas like customer support, regulatory compliance, data science automation, and personal assistants, with companies like Klarna and startups like Mindy leading the way.

There is no question the major AI labs are working on agents, and now we are starting to see some startups enter the space.

The Importance of AI Literacy for All

This topic is so important that, earlier this year, we made it the mission of Marketing AI Institute: AI literacy for all.

At the beginning of this year, we first talked with our audience about the concept when we announced our fully updated 2024 Piloting AI course and our brand new Scaling AI course series (which is available now!).

We are seeing signal after signal that companies are falling behind in AI literacy, which is why we are talking about this now and why it’s more urgent than ever. The latest datapoint comes from our upcoming 2024 State of Marketing AI Report.

This is the fourth annual State of Marketing AI Report, and every year, a lack of AI education and training is cited as the most common barrier to AI adoption in marketing.

This year is no different. 67% of respondents said a lack of education and training was the top barrier to AI adoption in their marketing, and that number actually rose slightly from last year, when 64% gave the same response.

Despite the challenges companies and executives face in adapting and devising AI roadmaps, a commitment to AI education, training, and AI literacy across the organization can help build a smarter version of any business.

Links Referenced in the Show

Today's episode is brought to you by Marketing AI Institute's 2024 State of Marketing AI Report. This is the fourth annual report we've done in partnership with Drift to gather data on AI understanding and adoption in marketing and business.

This year, we’ve collected never-before-seen data from more people than ever, with almost 1,800 marketers and business leaders revealing to us how they adopt and use AI…

And we’re revealing the findings during a special webinar on Thursday, July 25 at 12pm ET.

Go to stateofmarketingai.com to register.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: If you fast forward two, three years out, it seems to me there's at least a 50-50 chance that the AI is getting real close to this idea of AGI, where it's like generally as capable as us at pretty much every cognitive task. 

[00:00:15] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:45] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:52] Paul Roetzer: Welcome to episode 105 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my [00:01:00] co-host, Mike Kaput. We are recording this on Friday, July 12th at about 3 p.m. Eastern time.

[00:01:06] Paul Roetzer: We are doing it a little early this week because I'm out on Monday and we have some rather complex and heavy topics

[00:01:16] Paul Roetzer: to end the week

[00:01:17] Paul Roetzer: Mike and I. So we're going to get into some interesting stuff today. No big breaking news, no major models released, unless there's a Friday news dump, nothing too crazy this week.

[00:01:28] Paul Roetzer: I guess we could talk about the AT&T breach. That was kind of nuts, but not really an AI thing. But we're going to talk about some big issues, some, some topics related to jobs, the economy, AI agents, AI literacy, you know, some updates on some tech, but some interesting and important topics to get into.

[00:01:46] Paul Roetzer: So before we dive into that, today's episode is brought to us by Marketing AI Institute's 2024 State of Marketing AI Report. This is the fourth annual report we've done in partnership with [00:02:00] Drift to gather data on AI understanding and adoption in marketing and business. This year is our biggest sample ever, with more than 1,800 marketers and business leaders sharing with us how they adopt and use AI.

[00:02:14] Paul Roetzer: We will be revealing the findings during a special webinar with myself and Mike on July 25th, Thursday, July 25th, at 12 p.m. Eastern Time. During the webinar, we will go through key findings from the report. If you register for the free webinar, you will get an ungated copy of the report that will come to you by email.

[00:02:35] Paul Roetzer: So the webinar is going to go through the current state of AI adoption, which AI tools are most popular, top barriers to AI usage and adoption, one of which we will talk about in a minute, in a couple topics from now, how the market's feeling about AI job loss, whether or not they have generative AI policies and responsible AI principles, all kinds of amazing stuff.

[00:02:53] Paul Roetzer: So we, the survey was in the field, what, Mike, April to June, roughly?

[00:02:57] Mike Kaput: Yep.

[00:02:58] Paul Roetzer: Yeah. So three months, [00:03:00] more than 1,800 people, tons of data. Mike's been crunching numbers the last couple of weeks, pulling it all together. So you can join us for that webinar. Go to stateofmarketingai.com. Again, that is stateofmarketingai.

[00:03:14] Paul Roetzer: com and you can register for the free webinar. Okay. So for our first topic today, we are going to talk about a rather

[00:03:22] Paul Roetzer: intriguing

[00:03:23] Paul Roetzer: and, uh, far-reaching interview with Carl Shulman. So Mike, tell us about Carl Shulman and his interview.

[00:03:32] 80,000 Hours Podcast with Carl Shulman

[00:03:32] Mike Kaput: Absolutely, Paul. So, Carl Shulman is an independent AI researcher.

[00:03:38] Mike Kaput: He has ties to the Machine Intelligence Research Institute and the Future of Humanity Institute at Oxford University.

[00:03:47] Mike Kaput: And he's pretty reclusive, like he doesn't do a lot of interviews, kind of keeps to himself, has a very low profile, but recently he gave a very, like a marathon interview on the 80,000 Hours podcast. [00:04:00] And as part of this, like, multi-hour interview, he basically said, we are headed towards massive disruption thanks to the coming development of cheap superhuman AI. So he spends, across two podcast episodes, more than like six hours kind of unpacking these huge implications of humanity's race towards AGI, artificial general intelligence.

[00:04:26] Mike Kaput: And basically his argument, even though he takes it in a ton of different directions, is pretty simple. It says, according to 80,000 Hours, quote, the human brain does what it does with a shockingly low energy supply. Just 20 watts, a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution had already managed and could accomplish the work of top human professionals, given a 20-watt power supply?

[00:04:58] Mike Kaput: Many people consider that [00:05:00] hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced

[00:05:11] Mike Kaput: how leaders

[00:05:12] Mike Kaput: in artificial general intelligence picture the world that they're creating. Carl simply follows the logic to its natural conclusion.

[00:05:20] Mike Kaput: This is a world where one cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual activity. And of course, this is going to change everything in Shulman's view.

[00:05:43] Mike Kaput: Now, Paul, we're going to kind of give you the reins here. You did a pretty serious deep dive on this interview. Can you maybe walk us through kind of what he gets at here and why this matters?

[00:05:56] Paul Roetzer: Yeah, so this, this interview is over four hours. It is, [00:06:00] it's a lot to process. It does go through some pretty complex stuff. It seems quite theoretical at times, very sci-fi at times. And I kept finding myself thinking, is this actually possible and on what timeline is this possible? And so we wanted to try and break it down a little bit here.

[00:06:21] Paul Roetzer: Now, if you're interested in these kinds of very deep, very technical, very philosophical, very theoretical interviews, this is a great one. I mean, the guy is obviously brilliant. Um, he is well educated, well versed in every topic that they went through, from regulation and energy consumption to ideas around how to capture more solar energy to fuel intelligence, the economics of an intelligence explosion, safety and security, consciousness, rights of AIs.

[00:06:52] Paul Roetzer: I mean, it was a lot.

[00:06:54] Paul Roetzer: So I'm going to zero in on two things that I think are highly relevant to all of us. [00:07:00] The first is this idea of scarcity versus abundance of intelligence. So this is one of the key points he makes, although he doesn't frame it this way. So this is kind of like my interpretation of what he was saying: this idea that today we have scarcity, we will in the near term have abundance of intelligence.

[00:07:18] Paul Roetzer: So right now we have scarcity, and that is scarcity of intelligence that is generally as capable as humans, especially human experts. So it makes sense that the scarce intelligence that is available will be applied to the jobs where the most value per hour is created. And this is like one of the things like, I'm not sure that I had.

[00:07:37] Paul Roetzer: Thought it through completely in this way. So when we think about the impact AI is going to have, we focus so much on just any role, right? And a lot of times we're talking with marketers or accountants or lawyers, and you just break that job into tasks. And you have a logical conversation on what AI can and can't do.

[00:07:54] Paul Roetzer: But if you take a step back and you look at kind of a more macro view, the question is, well, what is the [00:08:00] greatest value for AI to be applied? Like what roles is it most likely to be applied to? And that's when you start to think about managers, directors, executives, as well as specialists in different industries like lawyers and doctors.

[00:08:14] Paul Roetzer: AI researcher is another job, and maybe the most obvious area. Anthropic, OpenAI, Google, Microsoft, and other research labs have all talked about the idea of applying AI to the job of an AI researcher in order to scale AI faster. So if you only have, let's just pick a number and say there's 10,000 AI researchers in the world, like expert AI researchers.

[00:08:37] Paul Roetzer: But if AI is able to do 80 percent of what those people do at their level, well now all of a sudden you can have 100,000 AI researchers and some of them are just AI agents basically. And so the idea is that by accelerating what AI is capable of doing, you're able to accelerate the AI's ability to accelerate AI.

[00:08:57] Paul Roetzer: It's like a really weird thing to think about. [00:09:00] But Shulman actually talks about AI researchers. He said, when it's the choice between different occupations where AI advantages are similar, then it goes to the domains with the highest value. OpenAI researchers, if they're already earning millions of dollars, then applying AI to an AI capabilities explosion is an incredibly lucrative thing to do.

[00:09:19] Paul Roetzer: So he says an AI model running on brain-like efficiency, as you had mentioned, is going to be working all the time. And this is the one that I think is really important for people to consider. Once we get to the point where these AIs are able to do things at the level we do them, all the tasks, the key, they say, is it doesn't sleep.

[00:09:39] Paul Roetzer: It doesn't take time off. It doesn't spend most of its time and career in education or retirement or leisure. So if you do 8,760 hours per year, so rather than all those other things, like the average person works probably about 2,000 hours a year, 2,000 to 2,200 a year if you're a full-time employee. So what he's saying is, these things can work 8,700 [00:10:00] hours a year, and they can do it at 100 percent efficiency with complete focus, no distractions.

[00:10:07] Paul Roetzer: And so if they're doing the equivalent of $100 per hour worth of work, so if someone making $100 per hour is probably making about, what, about $220,000 a year, there's millions of people making that kind of money. It's a lot of money, but there's a lot of people that make that kind of money. But if the AI is capable of doing that work, then that starts to impact these high-paying jobs.

[00:10:28] Paul Roetzer: So the, again, right now it's not really there, but what he's proposing is we're not far from it being there and we don't really need that many technical breakthroughs to make it happen. So as the intelligence explosion occurs and intelligence becomes more abundant due to things like more and cheaper energy, which he talks extensively about, more efficient and cheaper, cheaper compute, meaning it can do things.

[00:10:55] Paul Roetzer: like closer to the efficiency of the human brain, then it'll start at the top, the [00:11:00] top earning, you know, wages, the most value creation, and then it'll descend down like a waterfall. So, as I was reading this or listening to this podcast, it took me back to 2012 when I read Automate This by Christopher Steiner, and that was one of the books that inspired me to sort of pursue and explore AI.

[00:11:19] Paul Roetzer: Now, what Steiner said back in 2012 was when you wanted to determine what industry was going to be affected by AI the most, or which one was going to come next, you would just look at a simple formula of what is the potential to disrupt that industry and what is the reward for disruption. And you could basically stack industries and say, well, this one's worth more than this one to disrupt.

[00:11:39] Paul Roetzer: And the same basic thing can now be done for jobs. So if you take an AI researcher making a million a year, a doctor making $300,000 a year, an accountant making $200,000 a year, and an online marketing manager at $80,000, where is the AI going to go first? Like, what are people going to build AI to do first? They're going to build it to do AI [00:12:00] research jobs and attorney jobs and doctor jobs, because it's the highest-value employment.

[00:12:05] Paul Roetzer: And so as I, you know, I'm thinking about this and you're like looking at the possibilities and where AI is going to go, I'm, I'm just like getting this increasing feeling. Like I had back in 2016. So back in 2016, when I created the AI Institute, I'd been studying AI for about five years, and I kept looking around, trying to figure out why isn't anyone talking about AI?

[00:12:26] Paul Roetzer: Like why in marketing and sales and service and business, aren't people looking at this as like a transformational thing? And Mike, that's when you and I, you know, started having conversations and, you know, eventually started like writing about AI and, you know, created AI Institute and all this other stuff, but at the time I couldn't figure it out.

[00:12:44] Paul Roetzer: Like this seems so obvious, that this was an inevitable outcome, that AI would eventually change business. And I kind of feel that way about jobs now. Like I am increasingly convinced that it is going to be insanely disruptive to jobs in the future of work. And [00:13:00] yet you look around and there just isn't that much talk about it.

[00:13:03] Paul Roetzer: Like it's, it's like they haven't figured out what's going to happen next. And so, listening to this interview and then listening to him kind of put it into these economics terms and financial terms, it really just started to hit home for me that there's not enough conversation happening around this. Because assuming that the technological advancements keep occurring as they seem to be, like, again, these models keep following these scaling laws, they keep getting smarter and more generally capable, then the only real roadblocks, or the most obvious roadblocks, are a lack of energy to make it possible, which Shulman talks a lot about, laws and regulations that slow it down, and then the biggest one, and maybe the most likely barrier, is humans' resistance to change.

[00:13:48] Paul Roetzer: There's just so much friction to this kind of change, where if you fast forward two, three years out, it seems to me there's at least a 50-50 chance that the AI is [00:14:00] getting real close to this idea of AGI, where it's like generally as capable as us at pretty much every cognitive task. There's certainly not a 0 percent chance.

[00:14:08] Paul Roetzer: Like, it seems like leading AI researchers kind of feel like there's a decent probability. And you would think that we would be planning for that, we would be working on scenarios where, if this is true, this is what we would do about it. And we just don't see it. So that was my first observation. I don't know, Mike, if you had any thoughts when you were reading through this stuff as

[00:14:34] Paul Roetzer: as well.

[00:14:34] Mike Kaput: Yeah, you hit the nail on the head. I mean, you know, there were a lot of different paths he went down, but really what kept jumping out to me is the concept of intelligence as a resource, right? I mean, it's supply and demand,

[00:14:48] Mike Kaput: he's

[00:14:49] Mike Kaput: talking about we're getting more intelligence available to us, whether it's in the form of machines or humans, for less money.

[00:14:56] Mike Kaput: So that's going to inherently, from an economics perspective, have a [00:15:00] massive impact as long as that trend continues. And that's, this is kind of the latest in a lot of different things we've talked about on the podcast that have really communicated to me that the supply-demand equation of intelligence is getting

[00:15:13] Mike Kaput: pretty wonky now and that's going to have some really interesting effects.

[00:15:18] Paul Roetzer: Yeah, and I guess, like, then the second thing that is related to this, and this is where I hear people point to a lot, is, well, the economists don't see it this way. Like, they don't, they don't see this explosion of intelligence and, you know, the doubling of GDP every year versus every 15 years. Like, they're not, they're not seeing it.

[00:15:36] Paul Roetzer: And so, Rob, the interviewer, asked Shulman, like, well, what about the economists? Why do they seem to be missing this? And it sort of echoed what my experience has been, having talked with some leading economists, and obviously having read all of the research that they've come up with. His general feeling, and he had a lot of thoughts, I mean they spent like 30 minutes on this topic, but his [00:16:00] general feeling is that they just seem to fall back on: it doesn't seem possible.

[00:16:06] Paul Roetzer: That historically, when we look back at general-purpose technologies and what happened to society and what happened to the economy, that there's no way we could ever approach a scenario where the productivity expands this much. And so, the economists he's talked to just kind of aren't even that interested in entertaining the possibility because they think it's so out of the realm of a scenario that could happen.

[00:16:35] Paul Roetzer: And so what I, what I found and kind of, again, what he said is the same thing is occurring with economists as is with business leaders. They look at what the AI can do today. And then they build their reports or projections or forecasts based on their understanding of that. What they're not seeming to do is develop a deep understanding of [00:17:00] what one to two models out could look like.

[00:17:02] Paul Roetzer: So, you know, 12 to 24 months out, what does it look like when we're at GPT 6 or Gemini 3 or Claude 5 or whatever it's going to be?

[00:17:14] Paul Roetzer: And you can talk to these researchers and get a decent sense of the things they think they're going to be able to do. And I feel like we need to start building some economic models and workforce models that follow those assumptions through.

[00:17:28] Paul Roetzer: Like, okay, if they're right, even if they're only 50 to 80 percent right, then we're in a totally different world for the economy, we're in a totally different world of what jobs look like, and we're talking about three years. Like this isn't, this isn't like decades out. And I think when you listen to an interview like this, you can get caught up in like, Oh, we're not going to capture 10 percent of all the solar energy that's coming in.

[00:17:53] Paul Roetzer: So we're not going to be able to power these kinds of things and like all the math that he was getting into. But when you just step out and say, [00:18:00] But the idea of these things getting smarter and getting one to two models out and they're better at reasoning and they can take action and they can follow a chain of thought and they can, you know, they don't make errors anymore and they don't hallucinate.

[00:18:10] Paul Roetzer: Like, if we solve all that, which seems all solvable, then the world is just different. And I don't feel like economists seem to be that smart.

[00:18:22] Paul Roetzer: understanding that. And for a lot of business leaders, it's just so abstract. And I think that's probably it. And it's, I mean, it's honestly like, I've been following AGI for like nine years, like the concept of it, researching it, and I purposely didn't even talk about it on LinkedIn until like a year and a half ago, because the concept of AGI was so abstract that people would just tune out. And Mike and I would have these conversations.

[00:18:49] Paul Roetzer: I'd be like, Hey, I don't think we can really get into this topic yet. Cause I don't think people are ready for it. And that's sort of where we are with jobs and the economy. And I was, I'm just not sure that people are really ready [00:19:00] because they, they haven't seen it yet to realize, Oh my gosh, like it really is capable of doing 80 percent of the things I do better than me.

[00:19:09] Paul Roetzer: And maybe, maybe we just need to get there. Maybe we need to have that ChatGPT moment for the next, you know, milestone on the path to AGI where people step back and say, Oh my gosh, it's doing my job now. It's doing, you know, 50 percent of the tasks or whatever. So, I don't know. I mean, again, it's a long interview.

[00:19:29] Paul Roetzer: It's a lot to process, but if, if you want to geek out on this stuff and you love these, like, deeper topics and thinking big picture, have at it. And Rob talks really fast, the interviewer. So I usually listen to things at 1.5, but if you want to like process Rob's questions, you almost have to drop to 1x speed for Rob's questions.

[00:19:51] Paul Roetzer: And then you can jump back to Carl's. um,

[00:19:54] Paul Roetzer: Or just got to like adapt to how fast Rob talks. Cause his regular speed is like [00:20:00] 75. So it's, it's sort of hard to follow along what he's asking, but he asks brilliant questions. Like he's obviously a genius too, to even conduct this interview.

[00:20:10] The AI Agent Landscape

[00:20:10] Mike Kaput: So, our second topic here is somewhat related, and there's definitely a theme running through these main topics this week. So, we've talked a lot about AI agents, so autonomous AI systems that eventually can pursue open-ended goals and actions, complete tasks for you with minimal human input, using a browser, doing digital actions.

[00:20:36] Mike Kaput: And over the past couple years, we've definitely seen some advances in AI agent-like capabilities, like chain-of-thought prompting, the ability of AI to use tools, and multi-agent architectures. Now we're hearing from OpenAI President Greg Brockman that agents are an integral part of where OpenAI and AI in general is going.

[00:20:58] Mike Kaput: So recently he posted, [00:21:00] quote, As models continue to become much more capable, we expect they'll start being integrated with the world more deeply. Users will increasingly interact with systems composed of many multimodal models plus tools, which can take actions on their behalf, rather than talking to

[00:21:18] Mike Kaput: a single model with just text inputs and outputs. Now that's very aspirational and kind of future

[00:21:27] Mike Kaput: looking, but in a recent article in Forbes, noted venture capitalist Rob Toews actually got into the weeds on where AI agents are starting to show particular promise when it comes to actual use cases and the startups driving

[00:21:42] Mike Kaput: So I just want to go through a few examples that he cites here and then kind of talk through where we're actually at when it comes to AI agents and these AI systems that are increasingly going to be doing the work that people do. So Toews outlines a few areas where he's seeing some [00:22:00] interesting successes.

[00:22:01] Mike Kaput: So first up is customer support. So companies like Klarna, who we've talked about, are using AI agents to automate customer service, handling

[00:22:10] Mike Kaput: millions of conversations. According to Klarna, they're driving significant cost savings. There's some startups in this space like Sierra, Decagon, and Maven AGI that he cites that are building specialized agent solutions.

[00:22:25] Mike Kaput: He also says regulatory compliance is a big area. AI agents are well suited to handle structured, repeatable compliance workflows. So some startups like Norm AI and Greenlight AI are developing agents in that space. Data science is also a huge area. Delfina is one startup building agents to handle the full data science life cycle. And personal assistants.

[00:22:49] Mike Kaput: So AI personal assistants that basically can help you perform tasks in your day. Some startups like Mindy and Ario are developing agent-based assistants. [00:23:00] And of course, as we're going to talk about, AI is being built increasingly into the apps that you use with platforms like Google, Microsoft, et cetera.

[00:23:09] Mike Kaput: So, to kind of tie all this together, Paul, there's no question the major AI labs are working on agents. We're starting to see some startups enter the space. Like, when you look at this in a practical sense, as a business owner, leader, investor, what does the near-term landscape for agents and the startups trying to build them look like in your mind?

[00:23:32] Paul Roetzer: You know, I was thinking about this one coming into today's episode. I just went back to the AI Timeline episode 87 from March and I was revisiting the notes we had around the AI agents explosion. And so I feel like I'll just recap what I said back then, because I don't know that I've changed my position at all about where we are yet.

[00:23:55] Paul Roetzer: So, what I said then was 2025 to 2027 is [00:24:00] when you would see AI agents really become viable and, you know, just like change the way work was done. So the notes I had were: lots of talk about agents that can take action, like using your keyboard and mouse, this year in 2024, but we're mostly going to see experimentations and demonstrations that are maybe the equivalent of like GPT-1, GPT-2 level, meaning very early versions of AI agents, very unreliable, but, you know, early. There would be lots of manual work to get the agents to function reliably, which I am guessing is true in all of the examples you just listed and all the companies that we're talking about, plus lots of human oversight of these agents, because they're nowhere close to full autonomy yet.

[00:24:44] Paul Roetzer: And most people generally won't be willing to give up the data and privacy needed to get the benefits. Starting in 2025, AI can now take actions reliably with limited human oversight, probably in select domains or verticals initially, then more generally, horizontally [00:25:00] capable, which kind of plays out what you're saying.

[00:25:02] Paul Roetzer: You're like building these ones specific to an industry. So we're starting to see the early levels of this, is my assumption. Some early instances of full autonomy, you know, start to occur, i.e., humans provide the goal, the desired outcome, the AI agent does all the work with no additional human inputs, and then disruption to knowledge work starts to become more tangible and measurable because the AI agents are the, kind of, the key to true disruption to knowledge work.

[00:25:28] Paul Roetzer: So we talked in the Apple Intelligence episode about how, like, your iPhone might be the earliest interaction many of us will have with, you know, real AI agents, where your phone maybe has access to the different apps and can actually work within them, almost like functioning as though you're touching the screen and going from app to app and clicking around.

[00:25:48] Paul Roetzer: So we're going to probably start to see elements of that. I think, for people wondering what AI agents are or will be, you can probably just play around with Claude's Artifacts and like [00:26:00] see how it's going through steps to build things for you. You give it a prompt, it doesn't just give you a text output, it actually builds things.

[00:26:07] Paul Roetzer: It's writing code, it's creating things. Perplexity, since they made the update to Perplexity Pro, where it'll show you all the steps it's going through, it's the same idea, that this thing doesn't just output something. It actually goes through, creates its own task list, not a human-created task list.

[00:26:24] Paul Roetzer: It doesn't do 10 things a human programmed it to do. It actually creates steps and goes and does something. And then I thought it would be, you know, helpful to just provide a little bit of context as to what exactly it is we're talking about. 

[00:26:38] Paul Roetzer: So I went back to World of Bits, which is, something that we talked about in February 2023.

[00:26:46] Paul Roetzer: I think I'll have to go see what episode this was. But I wrote something called World of Bits and what it means to marketing and business. So that article in February '23 started with: we are so caught up right now in figuring out AI [00:27:00] writing tools and large language models that most marketing and business leaders, as well as SaaS executives and investors, are missing the bigger picture.

[00:27:09] Paul Roetzer: This is all just the foundation for what comes next. So again, this is kind of what I was trying to say in the previous topic, where if you rewind back to February 2023, all anybody was thinking about was that these things could write articles, like output text, and their entire business plan and strategy was around creation of text, and they didn't realize at the time that for six years, all of the research labs had been working on AI agents that could take action, and they had had breakthroughs.

[00:27:40] Paul Roetzer: So in February '23, we're looking at saying, Hey, wait, like, don't get caught up in just language models. Like this is just step one. They're going to become multimodal and they're going to be able to do things. But again, at that time, for most people, it's just too abstract because they hadn't seen it yet. So that post went on to explore early work from Andrej Karpathy, who we [00:28:00] talk about all the time, during his first stint at OpenAI, so in a 2017 research paper.

[00:28:06] Paul Roetzer: Titled World of Bits: An Open-Domain Platform for Web-Based Agents, Karpathy and other authors explored the potential of agents to complete tasks such as booking flights and completing forms through simulated usage of a keyboard and mouse. They made progress, but obstacles remained. In the conclusion of their paper, they said, we tried to do this World of Bits thing, basically, um, there is an opportunity here, but the gap between, this is how they ended it, the gap between agents and humans remains large and welcomes additional modeling advances.

[00:28:41] Paul Roetzer: So in 2017, they had the vision to give these AIs the ability to take action. But the technological capabilities weren't there. Then in October 2022, Karpathy did an interview with Lex Fridman, and he implied in that interview, so this was right before he went back to OpenAI for a year, [00:29:00] he implied that those barriers were now gone.

[00:29:03] Paul Roetzer: He said, when I was at OpenAI, I was working on this project, World of Bits, and basically it was an idea of giving neural networks access to the keyboard and mouse. And the idea is that, basically, you perceive the input of the screen pixels, and the state of the computer is visualized for human consumption in images in the browser and things like that.

[00:29:21] Paul Roetzer: And then you give the network the ability to press the keyboard and use the mouse. World of Bits was too early at OpenAI. And this is around 2015 when he started working on it. He then said it is time to revisit that and OpenAI is interested in this. Companies like Adept, who we talked about last week, got acqui-hired by Amazon, are interested in this and so on.

[00:29:43] Paul Roetzer: And the idea is coming back because the interface is very powerful. And then he said, this is the real key. You are taking the GPT as the initialization. So what they didn't have in 2015 was the transformer. They didn't have large language models. By [00:30:00] 2022, we were on the precipice of ChatGPT being released.

[00:30:03] Paul Roetzer: Keep in mind, this is a month before ChatGPT was being released, which he knew was coming. And he said, you're going to take the GPT as the initialization. The GPT is pre trained on all the text and eventually all the images. And it understands what a booking is. It understands what a submit is. It understands quite a bit more.

[00:30:22] Paul Roetzer: And so he then also tweeted a follow-on paper that someone else published, and in that paper's conclusion, this was February 2022, Humans use digital devices for billions of hours every day. Your computer, your phone, your iPad. If we can develop agents that can assist with even a tiny fraction of these tasks, we can hope to enter a virtuous cycle of agent assistance followed by human feedback on failures, and hence,

[00:30:50] Paul Roetzer: to agent improvement and new capabilities.

[00:30:53] Paul Roetzer: This is what they're working on, this is what everyone's been working on since 2015. They know humans [00:31:00] doing cognitive tasks spend billions of hours collectively every day. And that if they could build agents that could do those tasks for the humans, It would unlock all kinds of possibilities. So this is what they are all chasing. It's what they've all been chasing. Right now, these agents are imprecise.

[00:31:19] Paul Roetzer: They are error-prone. They lose focus as task lists grow. They have limited memories. They lack common sense and intuition. They're largely black boxes. So it's difficult to interpret why they do or how they do what they do. But the assumption Shulman and others are making is that all of that is solvable.

[00:31:38] Paul Roetzer: All of that will change. And once that changes, once they solve these things, then an intelligence explosion occurs. And everything we talked about in the AI timeline becomes true. And what Shulman is presenting, all of a sudden, starts to make way more sense. And if economists and business leaders [00:32:00] don't start thinking about those possibilities now, then it's going to get ugly in 18 or 24 months when the reality hits. Again, they're telling us this is what you should look at.

[00:32:12] Paul Roetzer: I always go back to that Sam Altman article, Moore's Law for Everything, in 2021, where he told everybody this was coming. Think, create, understand, reason. They're going to be able to do it all. And people are like, eh, whatever, Sam. Like, the average business leader, like, didn't even know who Sam Altman was at that point.

[00:32:29] Paul Roetzer: Like, and I, that's what I'm saying. Like, I feel like this is 2016 all over for me again. Like I'm sitting here looking at this thing. Okay. This all seems really obvious. Like every one of these researchers is saying the same thing. Like they're all telling us this is all going to happen. and people just keep making plans for 2025.

[00:32:47] Paul Roetzer: Like it's 2020. And like, I don't know. I, I don't know. Sometimes I think I'm crazy. I could just,

[00:32:55] Mike Kaput: Yeah, I feel like that's pretty normal in what we do and [00:33:00] follow. I feel that like once a week, easily.

[00:33:03] AI Literacy for All

[00:33:04] Mike Kaput: Well, this is a perfect lead-in to the third big topic, and we obviously intentionally kind of structured these this way because we wanted in this last main topic to kind of spend a couple minutes talking about something that we just see as being more and more important each and every week when we're following these developments, talking about them on the podcast.

[00:33:29] Mike Kaput: It's something so important that literally earlier this year, you made it the mission of Marketing AI Institute.

[00:33:36] Mike Kaput: this

[00:33:37] Mike Kaput: idea of AI literacy for all. So at the beginning of this year, we first talked about this concept with our audience when we announced our fully updated 2024 Piloting AI course.

[00:33:51] Mike Kaput: And our brand new Scaling AI course series, which is available right now. To explain this idea, I wanted to read a really brief excerpt [00:34:00] from the announcement, Paul, that you wrote.

[00:34:03] Mike Kaput: And in that announcement, in that blog post, you wrote, Continuing AI advancements in language, vision, prediction, persuasion, reasoning, decisioning, action

[00:34:12] Mike Kaput: will augment human capabilities and redefine knowledge work at a rate and scale that the economy never seen.

[00:34:17] Mike Kaput: seen. Millions of jobs will be impacted as companies realize the power and potential of AI to drive productivity, efficiency, and profits.

[00:34:27] Mike Kaput: Now, we have presented to and talked with thousands of marketers and business leaders over the last year, we have

[00:34:34] Mike Kaput: seen first hand how executives are scrambling to adapt and devise AI roadmaps while facing complex challenges, including a lack of AI talent,

[00:34:44] Mike Kaput: legacy tech stacks, a rapidly expanding AI tech landscape, fear of change from staff, industry regulations, privacy and security concerns, and mounting competitive pressure.

[00:34:55] Mike Kaput: Now, what has become clear is that our mission must evolve to [00:35:00] pursue a North Star of accelerating AI literacy for all. We believe you can build a smarter version of any business through a responsible, human-centered approach to AI, but success requires a commitment to AI education and training across the organization.

[00:35:17] Mike Kaput: So Paul, maybe to kick this off, could you just walk us through where this idea came from, why it's so important? It just sounds like it came from this lack of preparedness around AI within all these companies in the face of this massive change headed towards us.

[00:35:38] Paul Roetzer: Yeah, I mean, like, if you're listening to this podcast, you get it. Like you, you, you're

[00:35:44] Paul Roetzer: obviously taking the initiative to figure this stuff out. And think about the level of confidence it gives you to know what's happening in this space. And you start to think about your own career differently. You think about the challenges in your business [00:36:00] differently.

[00:36:00] Paul Roetzer: You think about strategies differently. You ask really good questions about the technology you're, you're using. and think about if everyone in your organization had that power, like that's kind of how. It works right now. It's like we go into these companies and we'll meet with, you know, 300 marketers or 500 accountants or, you know, 120 lawyers or whatever it is, whatever the room consists of that day.

[00:36:27] Paul Roetzer: And every single time, the first thing I tell people is you have to commit to literacy.

[00:36:32] Paul Roetzer: Like, you got to start an AI academy in your, your enterprise. You have to build it into professional development. Um, because until we level up understanding, until we just develop a baseline understanding of AI, like intro-to-AI, level 101 stuff, across the organization,

[00:36:48] Paul Roetzer: then you're going to keep having these same issues, the lack of AI-savvy talent, the fear of change. Like, until you develop transparency in the organization and start helping people, [00:37:00] you're going to keep running into these and you're not going to be able to develop optimal strategies and outcomes for your organization.

[00:37:08] Paul Roetzer: And so I, I just, I think it's so fundamental to figuring all this stuff out. And the thing, you know, I always, always stress is: what we need are people with the fundamental knowledge who then apply their domain expertise to go be the thought leader in, in their company or in their industry. And, you know, take the lawyer of 30 years who takes an interest in AI, or the CFO, or, you know, the HR person or the marketer, whomever it is, and takes their domain expertise and experience and intuition and layers in

[00:37:45] Paul Roetzer: an understanding of what AI can do. Now you can, you know, really race forward as an organization. So yeah, I just, I mean, I think that it's so important and I, I found that it's a, it's a message that resonates [00:38:00] really well with people. And, you know, I, I do hope that our listeners carry that forward into their own organizations and like sort of be a spark plug to drive, um, literacy, because I think it just gives people the confidence and it gives people the perspective to work together to solve this stuff.

[00:38:18] Mike Kaput: Yeah, and it couldn't be more needed, honestly, because, you know, like we said at the top of the episode, we are having a webinar in two weeks where we're going to unveil

[00:38:31] Mike Kaput: the findings of our 2024 State of Marketing AI Report. And if you go to stateofmarketingai.com, you can sign up for that webinar, get a copy of the report.

[00:38:41] Mike Kaput: But this year it's the fourth annual report done

[00:38:44] Mike Kaput: in partnership with Drift. We surveyed nearly 1,800 marketers and business leaders about AI usage and adoption, and I just wanted to maybe preview a couple stats related to what we just talked about, because [00:39:00] every year a lack of AI education and training is cited as the most common barrier to AI adoption in marketing.

[00:39:10] Mike Kaput: And unfortunately, like, this year, the fourth annual, is no different. So from the initial data analysis we've done, as we're finalizing the report, we found that 67% of respondents said that a lack of education and training was the top barrier to them adopting AI in their marketing. And actually, this number rose slightly from last year.

[00:39:34] Mike Kaput: Last year, 64 percent said it was a barrier. And we also asked outright if respondents' organizations offer AI-focused education and training for the marketing team. Now, collectively, 75 percent either said, no, there is no education at all, that was about 47 percent; 24 percent said, you know, it's in development.

[00:39:59] Mike Kaput: And [00:40:00] 4 percent said they're not sure if training exists. So, Paul, you know, unfortunately, these numbers are not a surprise to me or you, given the dozens, if not hundreds, of conversations and talks we have with companies. But, you know, the numbers are improving, but it's still such a huge proportion of companies that appear not to be making their employees as AI-literate as they can be.

[00:40:25] Paul Roetzer: Yeah, I mean, I, again, I think it just comes back to a lot of the leadership doesn't know to make it a priority. Like if you, if you don't understand, like, the disruption, if it's treated in silos, like the CIO is owning this and it's not really being decentralized out to the other departments and other leaders.

[00:40:45] Paul Roetzer: And, you know, in a lot of organizations we talked to, like, they're just not even allowed to use it. Like the generative AI is just turned

[00:40:51] Paul Roetzer: off and they have to make, you know, business cases to even be allowed to have access to the tools. So. I don't know. It's just still so early. And we [00:41:00] talked about this with that census research last week and some of the other, you know, research about the gaps and the lack of adoption.

[00:41:07] Paul Roetzer: And I do think that's the issue. Like I, I actually just tweeted this morning, shared something Ethan Mollick had tweeted. And I said, like, this is the same thing we're seeing. There's no change management plans. There's no AI roadmaps. There's no education and training. There's no plan to actually scale this stuff.

[00:41:22] Paul Roetzer: It's just isolated pilot projects and experiments.

[00:41:26] Paul Roetzer: And the companies that are doing more than that are far ahead, and they often feel like they're not. And then we go in there and it's like, no, no, no. Like the fact that you're even doing this is ahead of the curve right now. So yeah, I think the key here is just take, keep taking the next step on, you know, the literacy: listen to this podcast, listen to these other podcasts, go read stuff, take a course, like keep pushing forward, but bring the people in your organization along with you.

[00:41:53] Paul Roetzer: Like we, we really need more momentum in these companies to, to start driving that kind of change. [00:42:00] And honestly, like, just,

[00:42:02] OpenAI Superintelligence Scale

[00:42:02] Mike Kaput: Amen to that. So let's dive in real quick to a bunch of rapid fire this week. So first up, OpenAI has developed a new classification system to track its progress towards creating AI that can outperform humans. Now, this system of classification was apparently shared with employees during a recent all-hands meeting, according to Bloomberg.

[00:42:28] Mike Kaput: And it has five levels that range from the current capabilities of AI systems to AI systems that can do the work of an organization.

[00:42:39] Mike Kaput: So the five levels are chatbots, which is the current level they say we're at. This is AI with conversational language. What they call reasoners, which is human level problem solving AI.

[00:42:50] Mike Kaput: Agents, of course, systems that can take actions. Innovators, AI that can aid in invention. And organizations, AI that can do the work of an [00:43:00] organization. OpenAI executives believe the company is currently at level one, but on the verge of reaching level two, the reasoners, the human level problem solving.

[00:43:11] Mike Kaput: During this meeting, apparently, OpenAI leadership demonstrated a research project involving GPT-4, which they believe showed new skills approaching human-like reasoning. So, Paul, we've talked a ton about OpenAI's quest to build AGI, better-than-human AI, comparable-to-human AI. Like, this is nothing new that they're interested in this question, but I am curious, like, why are they doing this now?

[00:43:40] Mike Kaput: Why are they bothering to classify in this way?

[00:43:44] Paul Roetzer: One, I think a new model is coming, obviously, and they're trying to give some context. Earlier this year, we had the Google DeepMind paper on levels of AGI, which also had five levels. Very different descriptions of the [00:44:00] levels and ways of measuring them. So I think that's like a big component of it, is trying to kind of put some standards around how they're going to measure the improvement in the models and define them.

[00:44:10] Paul Roetzer: I don't find the categories terribly helpful. I mean, obviously they didn't choose to release this. I think this was like leaked out to

[00:44:18] Mike Kaput: it was like a leak, yeah.

[00:44:19] Paul Roetzer: I'll be interested to read in more detail when they explain it. The two things I would comment on here: one is, reasoning matters. We talk about it all the time.

[00:44:29] Paul Roetzer: You know, as these models improve their reasoning capabilities, they're able to solve more complex multi-step problems. They can automate higher-level cognitive tasks like we talked about already. It gives the machine the ability to apply logic and knowledge in more human-like ways, and reasoning is needed to draw conclusions and make decisions.

[00:44:48] Paul Roetzer: And if we want it to make decisions or have it assist in decisioning, it needs to be able to follow reasoning. Now, the last level, organizations, AI that can do the [00:45:00] work of an organization. That one made me go back to the Ilya Sutskever quote from The Atlantic, almost a year ago, July 24th, 2023.

[00:45:11] Paul Roetzer: We covered this on episode 57, but this quote, sometimes I wake up in cold sweats because of this excerpt.

[00:45:17] Paul Roetzer: So, what he said at the time, which now tells you where part of the inspiration for their ranking of AI's capabilities comes from, the article was called, Does Sam Altman Know What He's Building?

[00:45:31] Paul Roetzer: And Ilya's quote was, The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing. Suppose OpenAI, and this is the author adding some context, suppose OpenAI braids a few strands of research together and builds an AI with a rich conceptual model of the world and awareness of its immediate surroundings and an ability to act not just with one [00:46:00] robot body, but with hundreds or thousands.

[00:46:02] Paul Roetzer: Sutskever then went on to say, we're not talking about GPT 4. We're talking about an autonomous corporation. Its constituent AIs would work and communicate at high speed like bees in a hive. A single such AI organization would be as powerful as 50 Apples or

[00:46:20] Paul Roetzer: Googles, he mused. This is incredible, tremendous, unbelievably disruptive power.

[00:46:26] Paul Roetzer: So, level five in OpenAI's apparent ranking is 

[00:46:32] Paul Roetzer: the hive. It is the dozens, hundreds, millions of AI agents working together, Um, as a functioning organization with potentially very little to no human oversight. When that's possible, I don't know, but that is, 

[00:46:49] Paul Roetzer: um, it is what is being envisioned. I can tell you that.

[00:46:53] Mike Kaput: I like the hive for the name of level five. Rather than [00:47:00] organizations. all right. 

[00:47:02] Lattice and AI Employees

[00:47:02] Mike Kaput: So next up, we have a company called Lattice, which is a people management and HR platform. They have announced what they're calling a groundbreaking initiative to introduce digital worker employee records. And they say they are the first company to do this.

[00:47:19] Mike Kaput: Here's what this means according to a blog post from Lattice's CEO. she had written, quote, Today, Lattice is making history.

[00:47:29] Mike Kaput: We will be the first to give digital workers official employee records in Lattice. Digital workers will be securely onboarded, trained, and assigned goals, performance metrics, appropriate systems access, and even a manager, just as any person would be.

[00:47:44] Mike Kaput: So, she is referencing AI workers that are working for, hired by, or built by a company. But if that sounds a little out there to you at this stage, Lattice seems to believe that soon enough, companies will

[00:47:57] Mike Kaput: be hiring AI [00:48:00] workers en masse. She also writes in the post, quote, When we asked our Resources for Humans community of more than 22, 000 HR leaders representing over 3 million employees about their plan for digital workers, over half told us they were already planning to hire them. So Paul, there's a lot to unpack here.

[00:48:21] Mike Kaput: confess I'm fascinated by this, like, with everything we just about, but I'm a little confused

[00:48:27] Mike Kaput: because I don't really know exactly what this announcement means in practice, or kind of how you execute on this. Like, what, what's going on here?

[00:48:37] Paul Roetzer: Well, my first thought was, I would love to know how they phrased the question where they got a response claiming over half of 22,000 HR leaders plan to hire digital workers. I just can't even imagine how that question was worded to elicit that response. Listen, [00:49:00] I, I don't, I don't want to like crush this idea or this company.

[00:49:05] Paul Roetzer: Um, I think this whole thing's kind of laughable. I mean, it seems safe to say, like, my perception is this is a PR play to try to boost revenue, draw attention. I don't know if they need to raise money. Like, I'm not sure what the real motivation here is, but my initial reaction to the tweet thread from the CEO, and then reading the post twice, is I have no idea what they're talking about.

[00:49:32] Paul Roetzer: Like, it is,

[00:49:34] Paul Roetzer: it's really hard to understand what exactly it is they're doing or why they think it needs to exist. So the post says we're seeing AI personas like Devin the engineer, which

[00:49:47] Paul Roetzer: isn't, I mean, it was an agent that doesn't work. Like, it's not like Devin is some groundbreaking AI agent that's everywhere in the world.

[00:49:55] Paul Roetzer: It was a demonstration of a technology that wasn't proven. It's not viable yet. [00:50:00] Then there's Harvey the lawyer, Einstein, which is referring to Salesforce's service agent, and Piper the sales agent. And then, quoting the post: At Lattice, we're committed to leading the way in the responsible employment of AI. Okay. We can't sit idly by and leave our customers to figure things out on their own.

[00:50:14] Paul Roetzer: We need to employ AI as responsibly as we employ people. No, we don't. 

[00:50:20] Paul Roetzer: Uh, sorry, that's my commentary. And, quote, to empower everyone to thrive working together, we must navigate the rise of digital workers with trans... I mean, this is the kind of stuff that just makes people start thinking AI is just overhyped.

[00:50:33] Paul Roetzer: Like this is, it's just, ugh. God. 

[00:50:37] Paul Roetzer: God. We will be the first to give digital workers official employee records in Lattice. Digital workers will be securely onboarded, trained, and assigned goals. Okay. All right. So this is a rapid fire. I'm not going to go there.

[00:50:53] Paul Roetzer: The need for this is extremely premature. I am not saying there won't be some time down the road where something like [00:51:00] this might make sense,

[00:51:01] Paul Roetzer: and that we shouldn't maybe have some of these conversations, but to have the CEO announcing this as something groundbreaking is probably a little much. And honestly, the announcement should have just been, you can now add your co AI coworkers to your org chart. Like, thanks.

[00:51:18] Paul Roetzer: Like, Just, 

[00:51:20] Mike Kaput: yeah, 

[00:51:21] Paul Roetzer: I don't know. It's like a feature within a product that no one's going to care about or use for a few years. So positioning it as some sort of groundbreaking thing. I get it. Like Mike and I did PR back in a past life. Like I understand why things like this get published, but, man.

[00:51:45] Paul Roetzer: It is too crazy not to talk about it like,

[00:51:47] Mike Kaput: for sure,

[00:51:48] Paul Roetzer: but no, I don't think you need this software in your company at the moment. but I, you know, I think the idea of thinking about AI coworkers assistants as part of your org chart, isn't a bad idea like [00:52:00] that. Conceptually. I like that. And I could see needing a feature to like, figure out how to do that in there, but like onboarding and training and assigning goals and.

[00:52:11] Paul Roetzer: It just seems like a little bit too early for this kind of stuff

[00:52:18] OpenAI Exits China

[00:52:18] Mike Kaput: In some other news, last week OpenAI notified users in China that they would be blocked from accessing its tools and services starting on July 9th, so just a few days ago.

[00:52:29] Mike Kaput: This decision of course comes amidst rising tensions between the U.S. and China over a number of AI-related issues, including the export of semiconductors that are crucial for building AI. OpenAI has not elaborated on its reasons, but the block is now coming from the U.S. side. So previously, because of restrictions in China, ChatGPT was blocked by their government.

[00:52:54] Mike Kaput: Um, but now OpenAI is the one driving the ban on using [00:53:00] the tool. At the same time, Chinese AI firms are basically trying to flood into the gap here, offering a ton of free tokens or credits, essentially, to use their software. The country already has

[00:53:14] Mike Kaput: 130 large language models, which account for about 40 percent of the world's total, second only to the US, so those are now jockeying for position as OpenAI exits the market. So Paul, we've definitely been talking a lot about This type of thing the last weeks. it seems like we're hearing more AI and national security or geopolitical concerns along the lines of Ashenbrenner's situational awareness essay. Like, what did you make of this particular development? I

[00:53:47] Paul Roetzer: I think it's a complex geopolitical world, and there's going to be all kinds of stuff happening that we aren't privileged to know the inner workings of, and this is probably an [00:54:00] example of it. I thought it was interesting that you can still get access to the same models through Azure in China, like Microsoft didn't turn off access,

[00:54:08] Paul Roetzer: but so I, I don't know what the dynamics are there as well.

[00:54:13] Paul Roetzer: Yeah. I mean, I think it's going to become an increasingly important topic, throughout the world and how different governments are working with each other.

[00:54:23] Microsoft, Apple and the OpenAI Board

[00:54:23] Mike Kaput: All right,

[00:54:24] Mike Kaput: So next up. We had talked in previous weeks about how Apple had gotten a board observer seat with OpenAI, and how Microsoft already had one. Now it seems like both of them are not

[00:54:35] Mike Kaput: getting board seats, seems like. Microsoft has voluntarily given up the observer seat it has on OpenAI's board, effective immediately, and Apple, which had not yet taken the board seat, their observer role as

[00:54:47] Mike Kaput: part of their work with OpenAI to integrate ChatGPT into iPhones has also said they're not joining the board either.

[00:54:55] Mike Kaput: So these changes have come kind of amidst increasing [00:55:00] regulatory scrutiny of big tech's investments in AI startups. OpenAI plans to host regular meetings with key partners and investors like Microsoft and Apple instead of board representation. But as of right now, none of them are getting those promised observer board seats.

[00:55:19] Mike Kaput: So Paul, this seems like a pretty quick about-face from the news of the past couple weeks. Like, what's going on here?

[00:55:27] Paul Roetzer: It was definitely fast. It seems, at least from everything I can gather, it's just regulatory scrutiny around big tech and AI, that there's just too much heat right now, and consolidation of power is not being looked upon well by different governments. And I think everybody decided it was probably not worth the headaches to do this.

[00:55:49] Paul Roetzer: So I don't know. I'm again, love to know the inner conversations going on here, but we don't have any insights that any, you know, somebody else doesn't have already. [00:56:00] but yeah, that seems to be the issue is just regulatory scrutiny and fear around, they already got enough headaches. Let's not bring on any more right now.

[00:56:07] Paul Roetzer: And I guess they must have some level of trust in where OpenAI is going and what the board's going to do now, that maybe they didn't have when the, you know, the board wasn't, um, structured the way it currently is.

[00:56:20] Gemini for Workspace

[00:56:20] Mike Kaput: All right. So Google has been rolling out Gemini for Workspace where Gemini is being built right into Gmail, Docs, Sheets, and more, all with kind of enterprise grade security and privacy.

[00:56:35] Mike Kaput: And within those apps, Gemini is now going to be able to assist you directly, using information and context from your email, your documents, your files, to kind of help you be more productive. So for instance, Gemini in Gmail allows you to essentially prompt the tool just like you would ChatGPT to help you find specific files in your inbox, summarize email threads, surface [00:57:00] specific details from past conversations.

[00:57:02] Mike Kaput: In Drive, you can prompt Gemini to summarize files and documents. You can ask questions about all that information you store in your documents on Drive and learn about and find different files that you've saved.

[00:57:15] Mike Kaput: A Gemini Business plan costs $20 per user per month with a one-year commitment, on top of your current business Workspace plan.

[00:57:24] Mike Kaput: And it gives you access to Gemini in Gmail, Docs, Slides, Sheets, and Meet. There's also an enterprise plan.

[00:57:32] Mike Kaput: Paul, we just saw this get turned on in our Institute Google Workspace account. Like, what are your impressions so far of this?

[00:57:40] Paul Roetzer: Well, I've been experimenting with it in my personal Gmail for a couple of months, I think I've had it. And, you know, I think it got turned on like two days ago, maybe. But the uses in Gmail alone are awesome. Like I've always said, Gmail search is just, like, insanely useless. And so to be able to just have a [00:58:00] conversation with my inbox is amazing.

[00:58:03] Paul Roetzer: And I've started to use it within Drive as well to find things. So it seems like it's going to be really helpful. I have not built out a bunch of the use cases to share with the team and say, okay, here are the 10 ways to use this, or anything like that. But it seems like the intelligence is starting to find its way into these core technologies, which is going to be very helpful for businesses.

[00:58:23] Paul Roetzer: And if you're a Microsoft shop, it's the same kind of thing that Copilot enables within their platform.

[00:58:29] Mike Kaput: Yeah, same as you, I've been using the personal version for quite a bit, and I would say the email capabilities alone are worth the price of admission.

[00:58:37] Mike Kaput: Like, it's really, really good at surfacing everything you need very quickly.

[00:58:40] Paul Roetzer: Yeah. I've definitely found some hard to find things by just asking questions. Yeah. uh,

[00:58:48] Claude Prompt Playground

[00:58:48] Mike Kaput: up, Anthropic has introduced some new tools to help you improve AI applications using Google Claude. And so these features, which were released this past Tuesday, are designed to [00:59:00] partially automate the process of prompt engineering.

[00:59:03] Mike Kaput: So the new tools, available right within the Anthropic Console's Evaluate tab, include a built-in prompt generator, which can take a short task description and create a more detailed prompt using Anthropic's prompt engineering techniques. There's also a testing environment now where developers can upload

[00:59:22] Mike Kaput: real world examples or generate AI created test cases to evaluate how effective their prompts are.

[00:59:29] Mike Kaput: And you can use side-by-side comparison tools to assess different prompts and rate sample answers on a five-point scale.
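To make that workflow concrete, here is a minimal sketch of the same idea done directly against Claude's API rather than in the Console. This is not Anthropic's Evaluate tab and is not from the episode; the model name, prompt templates, test emails, and the ask-the-model-to-grade rubric are illustrative assumptions, using only the standard Anthropic Python SDK.

```python
# A minimal, illustrative sketch of comparing two prompt variants with the
# Anthropic Python SDK. This is NOT the Console's Evaluate tool; the prompt
# templates, test cases, and grading rubric below are hypothetical examples.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-20240620"

# Two hypothetical prompt variants for the same task.
PROMPTS = {
    "v1_terse": "Summarize this customer email in one sentence:\n\n{email}",
    "v2_detailed": (
        "You are a support analyst. Summarize the customer email below in one "
        "sentence, naming the product and the customer's main complaint.\n\n{email}"
    ),
}

# Hypothetical real-world test inputs.
TEST_EMAILS = [
    "My Acme X200 blender stopped working after two uses. I want a refund.",
    "Love the new app update, but dark mode resets every time I log out.",
]

def run_prompt(template: str, email: str) -> str:
    """Fill the template with a test input and return Claude's response text."""
    message = client.messages.create(
        model=MODEL,
        max_tokens=200,
        messages=[{"role": "user", "content": template.format(email=email)}],
    )
    return message.content[0].text

def grade(summary: str, email: str) -> str:
    """Ask the model to rate a summary on a 1-5 scale, a rough stand-in for
    the Console's five-point rating."""
    rubric = (
        "Rate the following summary of the email on a 1-5 scale for accuracy "
        "and completeness. Reply with only the number.\n\n"
        f"Email:\n{email}\n\nSummary:\n{summary}"
    )
    message = client.messages.create(
        model=MODEL,
        max_tokens=5,
        messages=[{"role": "user", "content": rubric}],
    )
    return message.content[0].text.strip()

if __name__ == "__main__":
    # Run every prompt variant over every test case and print a comparison.
    for name, template in PROMPTS.items():
        for email in TEST_EMAILS:
            summary = run_prompt(template, email)
            score = grade(summary, email)
            print(f"{name} | score {score} | {summary}")
```

Even in this toy version, the broader point is visible: once test cases and grading are automated, iterating on the prompt wording itself becomes a loop a model can run for you.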

[00:59:37] Mike Kaput: So, these features basically aim to help developers, especially those that know a little bit about engineering, quickly

[00:59:43] Mike Kaput: improve AI application performance. So Paul, these features are kind of geared towards developers, but they definitely seem to have some bigger implications for prompt engineering as

[00:59:55] Mike Kaput: whole. Like I saw Ethan Mollick had posted about this saying, quote, [01:00:00] automated testing of prompts in Claude is another sign that a lot of current prompt engineering is going to be done by AI soon. So how important do you see prompting as a skill moving forward? I

[01:00:13] Paul Roetzer: mean, for right now, there's definitely still advantages to knowing how to do it well. I mean, we see it all the time, even with, Um, my own experiences building some GPTs recently. Like there, there's an art and a science to it and being able to test and revise based on that, uh, is very valuable. But for the last two years, we've been saying this is the number one friction point to getting value out of these systems.

[01:00:37] Paul Roetzer: So, these things. These technology companies know that they don't want to have to rely on the user to learn how to prompt to get value. So this idea of automated testing or rewriting of prompts, um,

[01:00:49] Paul Roetzer: makes ton of sense. And I do think we'll probably a year or two from now look back and it will feel like this was the Boolean era of search.

[01:00:56] Paul Roetzer: Like, I feel like that's kind of what it is [01:01:00] 3 we talked about last week. Like, you gotta really know how to prompt to get any use out of it.

[01:01:06] Paul Roetzer: 3 is what I've seen. And I just think that the AI is going to keep getting smarter at rewriting your prompts and you may be aware it happened or you may not, but this is definitely a solvable thing for AI models to where, I, I don't think you're going to have to be an expert in prompting or know specific ways to do it.

[01:01:29] Paul Roetzer: Down the road, but for right now, totally, you can get way more value out of these things if you know how to talk to them the right way. And you know, certain tricks to get value from them. So it is still a thing and it will be probably for the foreseeable future, but I could certainly envision a day where it's, it's just not that important.

[01:01:48] Captions Valued at $500M

[01:01:50] Mike Kaput: All right, our last piece of news this week. Captions and a startup focused on AI powered video creation and editing just raised 60 [01:02:00] million in a new funding round led by Index Ventures, and that puts the company's valuation at 5 billion. 100 million dollars. So they have raised so far about 100 million dollars.

[01:02:12] Mike Kaput: Previous investors include Andreessen Horowitz and Sequoia, and the company is also associated with the actor Jared Leto. So basically, Captions allows you to create, edit, and distribute

[01:02:22] Mike Kaput: videos. It's geared towards producing videos that feature a person speaking. So users can input text to be spoken by a AI avatar or provide a topic for the software to generate a script.

[01:02:38] Mike Kaput: You can also offer translation capabilities and convert captions and dialogue into different languages.

[01:02:46] Mike Kaput: The app has over 10 million downloads and nearly 3. 5 million videos a year. published each month. Paul, this is kind of on trend here, the latest in a slew of funding announcements

[01:02:59] Mike Kaput: AI [01:03:00] video startups. You had just mentioned Runway, which we talked about is potentially raising a bunch money soon enough here.

[01:03:08] Mike Kaput: Is video kind of the next like hot space in AI?

[01:03:12] Paul Roetzer: Definitely. Yeah, I mean, we, we talked about this a number of times, you know, images and text and then video and audio are like the hot things right now. I mean, this feels like

[01:03:26] Paul Roetzer: I was at late 2022, maybe right, maybe right before ChatGPT when all the language model companies, the wrappers for the language models started getting, you know, a hundred million here, a hundred million there in funding.

[01:03:38] Paul Roetzer: I feel like we're entering this very frothy stage of funding of video tools and a year. Year and a half from now, there'll be some shakeout and half of these companies are going to be worth zero, like what's happening with some of the language companies right now. but it's hard to keep up right now with, with this, but I do feel like [01:04:00] video is going to continue to have.

[01:04:02] Paul Roetzer: These very significant improvements in what it's capable of, the quality of the output, the consistency of the output, the length of the output. um, there's, there's a lot of progress being made in, in these areas, both from a research perspective and a product perspective, which is cool to see. Again, aside from the copyright issue where they're all going to get sued.

[01:04:23] Paul Roetzer: Like it's, it's, it's cool to see the technology emerging when you can, when you set the IP issues aside. 

[01:04:30] Mike Kaput: All right that's all we got this week Thanks as always for breaking it all down please subscribe to the Marketing AI Institute newsletter at marketingaiinstitute. com /newsletter. It is looking at what is going on in AI this week. all down more in depth, as well as a bunch of other stories we didn't get to.

[01:04:59] Mike Kaput: So go check that [01:05:00] out if you have not already. And last but not least, if you have not left us a review, we would love to hear your feedback. It helps us improve. Each and every week here as we deliver you all the news and AI. So we very much appreciate whatever you have in terms of feedback to give us about the show and it helps us reach more people and also improve.

[01:05:20] Mike Kaput: All right, Paul, thanks so much. 

[01:05:23] Paul Roetzer: Thanks,

[01:05:23] Paul Roetzer: appreciate it. Everyone have a great week.

[01:05:25] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute. com to continue your AI learning journey and join more than 60, 000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in person events, taken our online AI courses, and engaged in the Slack community.

[01:05:49] Paul Roetzer: Until next time, stay curious and explore AI.
