46 Min Read

[The AI Show Episode 94]: Stanford’s 2024 AI Index Report, News on AI Agents at Microsoft, OpenAI, and Google, and the Rise of AI Job Titles


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

In a special bonus episode this week, hosts Paul Roetzer and Mike Kaput jump into a range of quick-fire subjects. We start the episode by highlighting Dario Amodei, CEO of Anthropic AI, featured on the Ezra Klein Show, discussing the exponential growth curve of AI, and the societal adjustments required for AI integration. Additional rapid fire topics include Stanford's 2024 AI Index Report, tech giants' pursuit of AI Agents, AI mind-reading, and more.

Listen or watch below—and see below for show notes and the transcript.

Today’s episode is brought to you by rasa.io.

Rasa.io is the ultimate platform for AI-powered newsletters. If you’re looking to transform your email newsletter into a powerful, engaging tool that truly resonates with your audience, rasa.io is the game-changer you need. Join the 500+ organizations already making their newsletters smart.

Visit rasa.io/maii today.

Listen Now

Watch the Video

 

Timestamps

00:04:09 — Anthropic CEO on the Trajectory of AI

00:23:07 — Stanford Releases 2024 AI Index Report

00:29:19 — Microsoft, Google, OpenAI Are Racing Towards Agents

00:32:40 — Rise in AI Job Roles

00:35:30 — AI is Already Reshaping Newsrooms

00:37:43 — How to Opt Out of AI

00:41:13 — AI Mind Reading

00:45:47 — Google Makes Structural Changes to Support AI Efforts

00:48:33 — The Ethics of Advanced AI Assistants

00:54:13 — On the Economics of Frontier Models

00:58:59 — Perplexity Raises $62.7M, Announces Enterprise Pro

Links Referenced in the Show

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: If we as a society, and we've talked about this on the show before, if we can't agree on truth, which we can't. In the United States, we cannot agree on truth.

[00:00:07] Paul Roetzer: How do we build models that agree on truth? And now you start to realize, governments that build frontier models will control what truth is.

[00:00:14] Paul Roetzer: Open source models. Anyone can build whatever they believe truth to be

[00:00:18] Paul Roetzer: It can create cults, it can create new religions, it can get all these things, because these things are insanely good, superhuman, at persuading people to believe something.

[00:00:29] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co host, and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:59] Paul Roetzer: Join [00:01:00] us as we accelerate AI literacy for all.

[00:01:06] Paul Roetzer: Welcome episode 94 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. We are coming at you for the second time this week. If you listened to episode 93, we said we were going to do part two this week. So this is basically still trying to catch up on everything that we had in our sandbox, in our rapid fire items.

[00:01:28] Paul Roetzer: There was just too much to get into one show and too many of these topics that justified having a conversation about and not dropping newsletter only. So I don't know. I mean, and I just told Mike, like, the first one we're going to lead off with is this conversation from Dario Amodei from Anthropic. And I said to Mike before he came on, like, honestly, I could talk for an hour just about this podcast interview.

[00:01:51] Paul Roetzer: So we're going to have basically, it's going to be a little different format this time, one, cause we're dropping on a different day. You might as well mix up the format slightly, [00:02:00] but we're going to pretty much have like one main topic, the Dario Amodei interview. It was going to be a rapid fire, but there's just no way to do this as a rapid fire, and then we're just going to do a bunch of rapid fires.

[00:02:10] Paul Roetzer: So, this episode is probably going to come in a little shorter than our usual weekly. um, It's not a change in format in terms of like, we're going to start doing two a week necessarily, but this week, like I said, on the previous episode, there was just too much to get into one episode. So, I don't know.

[00:02:27] Paul Roetzer: It should be interesting. The Dario one is just really swirling in my head. We have a lot to discuss on that one. Alright, so episode 94 is brought to us by Rasa.io, the ultimate game changer for AI powered newsletters. Rasa.io's smart newsletter platform tailors your newsletter content for each and every subscriber and automates tedious newsletter production tasks.

[00:02:49] Paul Roetzer: We've known the team at Rasa.io for years now and think their solution is well worth checking out. Join the 500 plus organizations leveraging Rasa.io and get a [00:03:00] demo today at rasa.io/maii, and again that's r-a-s-a dot io slash m-a-i-i. And then also, I mentioned in episode 93, we have our 2024 State of Marketing AI survey in the field right now, and you can be a part of that research.

[00:03:19] Paul Roetzer: Every year we do this survey in partnership with Drift to produce the State of Marketing AI report. This is our, what did we say, Mike? Third year?

[00:03:26] Paul Roetzer: Fourth year? Yes. Third year. I believe so. Okay.

[00:03:28] Paul Roetzer: Uh, annual deep dive into how hundreds or thousands, hopefully this year, of marketers actually use and apply AI in their work. By filling out this survey, you're helping the entire industry grow smarter about AI.

[00:03:40] Paul Roetzer: It only takes a few minutes to complete and we'll send you a copy of the 2024 State of Marketing AI report as a thank you once it's done, which will be released this summer. Um, to fill out the survey, go to stateofmarketingai.com and click the link for the 2024 survey at the top of the page.

[00:03:58] Paul Roetzer: You can also download the [00:04:00] 2023 report if you have not previously done so. Read that. Okay. So one main topic and a bunch of rapid fire. Take it away, Mike.

[00:04:09] Anthropic AI’s CEO on AI’s Trajectory

[00:04:09] Mike Kaput: Alright, Paul, we are back. The CEO of Anthropic, which is the company behind the powerful Claude 3 Opus model, which is a ChatGPT challenger.

[00:04:22] Mike Kaput: Well, he just gave a wide-ranging interview with some pretty interesting implications. This interview was between CEO Dario Amodei and Ezra Klein at the New York Times, and they talked about AI's near-future trajectory. And Amodei talked to Klein about things like scaling laws, AI's exponential growth curve, and how society might need to adapt to this intelligence.

[00:04:49] Mike Kaput: It's got some really, really compelling takeaways, I think, for anyone who's trying to get kind of a better handle on what to expect from AI in the next one to two years, [00:05:00] I would say. Now, Paul, we both listened to this podcast. I know you had a lot of thoughts on kind of what Amodei was saying here.

[00:05:06] Mike Kaput: Could you walk us through what you took away from this?

[00:05:10] Paul Roetzer: Yeah, so I think I've explained this before, but, and people find this interesting, the way I try and process information every week and still, like, have some balance in my life is I actually, like, I'll drop my kids off at school and then I go to the gym for an hour. And when I'm at the gym is when I listen to the podcasts, so I'll put them on 1.25 or 1.5. And so I will, like, often get a workout in while getting through the podcast, and I'll be making notes in my Apple Notes app while I'm doing this stuff. This interview, no joke, like at least three times I stopped the workout and had to, like, rewind and listen to it again to make sure I was hearing it correctly.

[00:05:55] Paul Roetzer: It is one of, like, the craziest interviews, in retrospect, that I've ever listened to, [00:06:00] and I've listened to probably a half a dozen Dario Amodei interviews, you know, LeCun, Sam, like, listened to all of these. This is one of the crazier ones. And just initial reaction, like, Mike, I told you about this podcast, I know you went and listened to it, did you have the same reaction, like, were there times where you're just like, what the hell?

[00:06:17] Mike Kaput: Yeah, I was listening to it while walking to and from the gym yesterday, and there were times where, yeah, I had to stop and kind of pull out my phone and make a quick note to revisit a concept.

[00:06:31] Paul Roetzer: I mean, there was, there was one moment, now I'll get to that moment in a minute, I literally, like, I think I just swore out loud, I had my headphones on and I couldn't hear myself, but I was like, WTF? Like, are you serious right now? Like, this is actually what's happening? Okay, so, with that being said, as a reminder, Dario Amodei is a co-founder of Anthropic with Jack Clark and I think a couple other people.

[00:06:51] Paul Roetzer: Um, Dario led the team that built GPT 3. So before he left OpenAI, he led the building of GPT 3, which is a very important [00:07:00] context to what we're about to say. Um, left to start Anthropic. We just talked about this an episode or two ago, where I explained like the interview with Jack Clark that I had listened to, where he disclosed that they didn't actually leave to build a safer model.

[00:07:11] Paul Roetzer: They just left to build a model and then wrapped safety into it after the fact. Okay. That all being said, what I'm going to do is just go through about six key highlights with some excerpts. We honestly could spend the rest of the hour or 45 minutes, whatever, just talking about this and going further into it.

[00:07:31] Paul Roetzer: Cause I actually stopped, like, I had another dozen findings from this episode that I'm not even going to get into today. So the first is scaling laws. If you listen to this podcast, you hear us constantly refer to the idea of scaling laws: more data, more computing power, more time given, leads to smarter, more powerful models.
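To make that idea a little more concrete, here is a minimal, purely illustrative sketch of the kind of smooth power-law curve scaling-law work describes, where loss drops predictably as parameters and training data grow. The functional form loosely echoes published scaling-law papers, but every constant below is invented for illustration; none of these numbers come from Anthropic or this interview.

```python
# Illustrative sketch only: a toy power-law of the kind scaling-law papers fit.
# The shape loosely follows published work (loss = irreducible term plus terms
# that shrink as parameter count and training tokens grow), but every constant
# here is made up purely to show "more parameters + more data -> smoothly lower loss."

def toy_loss(params: float, tokens: float,
             E: float = 1.7, A: float = 400.0, B: float = 600.0,
             alpha: float = 0.34, beta: float = 0.28) -> float:
    """Hypothetical loss for a model with `params` parameters trained on `tokens` tokens."""
    return E + A / params**alpha + B / tokens**beta

# Each row scales the (imaginary) model and dataset up by roughly 10x.
for params, tokens in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"{params:.0e} params, {tokens:.0e} tokens -> loss ~ {toy_loss(params, tokens):.2f}")
```

The exact numbers are meaningless; the point is the smooth, predictable improvement as both inputs scale, which is what the "scaling laws keep holding" claim refers to.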

[00:07:52] Paul Roetzer: It is the foundation of what all these frontier model companies are doing, and it is, to date, holding true. It's [00:08:00] why they keep just building bigger models. So, this is foundational to why they left OpenAI and started Anthropic. He said, so I worked at OpenAI for five years. I was one of the first employees to join and they built a model in 2018 called GPT 1.

[00:08:15] Paul Roetzer: So that was right after the transformer was invented by Google Brain. So it was the first version of this based on the transformer model, which used something like 100,000 times less computational power than the models we build today. I looked at that, and I and my colleagues were among the first to run what are called scaling laws.

[00:08:35] Paul Roetzer: Which is basically studying what happens as you vary the size of the model, its capacity to absorb information, and the amount of data that you feed into it. And we found these very smooth patterns. And we had this projection that, look, if you spend $100 million, or a billion, or $10 billion on these models, instead of the $10,000 we were spending then, projections that all of these wondrous things would happen.

[00:08:59] Paul Roetzer: And we [00:09:00] imagined that they would have enormous economic value. That is the basis for why they left and built Anthropic, and the scaling laws continue to hold true today. Then another key part was he talked about, like, kind of what's next and when is it going to happen. So he got into some of the things he thinks are coming for these models.

[00:09:18] Paul Roetzer: He said, I think going further in the direction of models having personalities while still being objective, while still being useful and not falling into various ethical traps, that will be, I think, a significant unlock for adoption. The models taking actions in the world is going to be a big one. This is the AI agent stuff we talk about all the time.

[00:09:36] Paul Roetzer: I know basically all the big companies that work on AI are working on that. I think all of that is coming in the next, I would say, I don't know, 3 to 18 months with increasing levels of ability. I think that's going to change how people think about AI. So, I guess the good thing for us is our AI Timeline episode.

[00:09:56] Paul Roetzer: This jives with everything we said in there. This was one of [00:10:00] the ones where I stopped and I was like, hold on a second. Did I hear him correctly? So the next one is rate of change. Quote, In terms of what we need to make it work, one thing is literally we just need more scale. And I think the reason we're going to need more scale is to do all the things a junior software engineer does.

[00:10:17] Paul Roetzer: They involve long chains of actions, right? I have to write this code. I have to run this test. I have to write a new test. I have to check how it looks in the app after I interpret it or compile it. And these things can easily get 20 or 30 layers deep. Now, zoom back out. He's referring to, I mean, in this case, he's specifically talking about engineers, software designers, but think about any job, any part of knowledge work that involves steps. That's what he's talking about. There are steps to do something. There are tasks involved in doing this thing. So then he goes on to say,

[00:10:49] Paul Roetzer: and if accuracy of any given step is not very high, it's not like 99.9 percent, as you compose these steps, the probability of making a mistake becomes itself very high.
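A quick back-of-the-envelope sketch of the compounding-error point being made here: even a per-step success rate that sounds high collapses over a 20- or 30-step chain of actions. The per-step accuracies below are arbitrary examples, not figures from the interview.

```python
# Back-of-the-envelope illustration: a high per-step success rate still
# compounds badly over a 20-30 step chain of agent actions.
# The accuracies chosen here are arbitrary examples, not Anthropic figures.

for per_step_accuracy in (0.90, 0.99, 0.999):
    for steps in (20, 30):
        p_all_correct = per_step_accuracy ** steps  # chance every step succeeds
        print(f"{per_step_accuracy:.1%} per step over {steps} steps "
              f"-> {p_all_correct:.1%} chance the whole chain succeeds")
```

At 99 percent per step, a 30-step chain only finishes cleanly about three times out of four, which is why the argument is that agents need something much closer to 99.9 percent reliability per step.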

[00:10:59] Paul Roetzer: So the [00:11:00] industry is going to get a new generation of models, probably, boldface here, me boldfacing, every four to eight months. He's projecting we will have new models coming up. And so my guess, this is back to him, I'm not sure, is that to really get these things working well, meaning doing all these 20 to 30 layers of steps, um, we need maybe one to four more generations.

[00:11:25] Paul Roetzer: So that ends up translating to, like, three to 24 months or something like that. So in essence, he's saying not only will they have agents, we'll have agents that are working at a high, like 99.9 percent, accuracy level within three to 24 months. Size of models was another one. He said, today's models cost roughly a hundred million to train.

[00:11:44] Paul Roetzer: Again, these are like decent size models, costing a hundred million, plus or minus a factor of two or three. The models that are training right now, so he's referring to what they're training, probably Claude, you know, four, basically, that will come out at various times later this year. He's also, you know, in a veiled way [00:12:00] referring to GPT 5, and probably Gemini, too.

[00:12:04] Paul Roetzer: And Meta, Llama 4, that will come out various times later this year, early next year, closer in cost to $1 billion. So that's already happening. And then I think in 2025, next year, and 2026, we'll get toward $5 or $10 billion in training runs. This next one is one that I had to rewind like three different times because it was so, like, profound, like I hadn't previously considered it, but the way it was presented, I was like, oh my god, like this is a fundamental problem.

[00:12:29] Paul Roetzer: So Ezra, who's a brilliant interviewer, by the way, like, just asks great questions and really pushes, like, does a great job. Um, Ezra says, Are you familiar with Harry Frankfurt, the late philosopher's book on bullshit?

[00:12:48] Paul Roetzer: Dario, yes, it's been a while since I read it. I think his thesis is that bullshit is actually more dangerous than lying because it has this kind of complete disregard for the truth, whereas lies [00:13:00] are at least the opposite of the truth. Ezra replies, yeah, the liar, the way Frankfurt puts it, is that the liar has a relationship with the truth.

[00:13:08] Paul Roetzer: He's playing a game against the truth. The bullshitter doesn't care. The bullshitter has no relationship to the truth, might have a relationship with other objectives. And he basically wants to say, when he, Ezra, started playing with these models, he realized this is the perfect bullshitter. Like, these things can do so much damage because they're perfect.

[00:13:28] Paul Roetzer: They don't care about the truth. They have no relationship to the truth. They just achieve whatever objective is set out.

[00:13:34] Paul Roetzer: Then they talked about persuasion within this. And the research Anthropic has done about how good these models are at persuasion. So, where I, the reason I had to kind of, like, literally just hit pause for a minute, now I'm just sitting there at the gym, like, just staring into space.

[00:13:47] Paul Roetzer: Anybody walking past me probably thought I was, like, losing my mind.

[00:13:51] Paul Roetzer: You have to realize how significant it is who builds and tunes these models. Because for them [00:14:00] to know truth, they have to have grounding in truth, which means they have to be given, in essence, rules of what is real and what is not, what is truth.

[00:14:10] Paul Roetzer: If we as a society, and we've talked about this on the show before, if we can't agree on truth, which we can't. In the United States, we cannot agree on truth.

[00:14:17] Paul Roetzer: How do we build models that agree on truth? And now you start to realize, like, governments that build frontier models will control what truth is.

[00:14:26] Paul Roetzer: Open source models. Anyone can build whatever they believe truth to be.

[00:14:30] Paul Roetzer: It can create cults, it can create new religions, it can get all these things, because these things are insanely good, superhuman, at persuading people to believe something. And that is like, the implications of that are so massive, that it really is hard to, like, not believe it, not get pulled into that, and to think about how bad of an outcome this could lead to.

[00:14:50] Paul Roetzer: This led to the safety conversation, which was the moment where I literally just [00:15:00] wanted to turn the podcast off because my head was going to explode. So Anthropic has actually done more in responsible AI research than most.

[00:15:09] Paul Roetzer: They have constitutional AI, which is supposed to guide the model, give it some grounding to truth, some grounding to morals and ethics, and they have a responsible scaling policy paper they publish. Now in that paper, there are four levels, ASL 1 through ASL 4. ASL 1, the smaller models, refers to systems that pose no meaningful catastrophic risk.

[00:15:33] Paul Roetzer: ASL 2 are the present large models. These are systems that show early signs of dangerous capabilities, for example, giving instructions on how to build bioweapons, but they're generally considered not very dangerous yet. And I think most of these AI researchers would put the current models in that ASL 2 realm.

[00:15:54] Paul Roetzer: Um, ASL 3 is significantly higher risk. It's systems that substantially increase [00:16:00] the risk of catastrophic misuse compared to non-AI baselines, or show a level of autonomous capability where they can, like, self-improve and things like that. ASL 4, they called in the report last year, speculative. Um, and we've got major, major problems if we get to ASL 4.

[00:16:18] Paul Roetzer: So Ezra pushed him and said, okay, but like, give me specifics. Like, when would you see something in a training run, if they're doing billion-dollar training runs, when would you see something that would get Anthropic to stop building it? And Dario kind of was like, oh, you know, like this and this and this, and he would kind of go through it.

[00:16:37] Paul Roetzer: So then he kept pushing him, like, no, no, give me a very specific example. So here's an excerpt. Dario: So for example, on biology, the way we've defined it, and we're still defining the test, but the way we've defined it is, relative to use of Google search, there's a substantial increased risk, as it would be evaluated by, say, the national security community, of misuse of biology, creation of bioweapons, proliferation or spread of this kind of stuff.

[00:16:59] Paul Roetzer: Um, we'll [00:17:00] probably have, we will probably have some quantitative thing, so this comes in like 30 seconds. We will probably have some exact quantitative thing working with folks who are ex-government biodefense, but something like this accounts for 20 percent of the total source of risk of biological attacks, and he basically starts getting into, like, this is what ASL 3 could look like.

[00:17:23] Paul Roetzer: ASL 4 is going to be more about the misuse side, enabling state-level actors to greatly increase their capability, which is much harder than enabling random people. It's like, okay, well, ASL 4, maybe in the next 5 to 10 years, they'll figure this out, they'll get these ex-government people in, they'll interview them, they'll build some standards, and they'll know when to shut their models off.

[00:17:43] Paul Roetzer: No. Ezra then says, when you imagine how many years away, just roughly, ASL 3 is, and how many years away ASL 4 is, you've thought a lot about this exponential scaling curve. If you just had to guess, what are we talking about? [00:18:00] Dario: Yeah, I think ASL 3 could easily happen this year or next. I think ASL 4, at which point Ezra says, Oh, Jesus Christ.

[00:18:09] Paul Roetzer: Like, that was the moment where I did my WTF moment out loud. And I was like, hold on. And so now I go over and I go pull the responsible scaling policy from like previously, because I'm sitting there just on my phone. Like, hold on a second. What 4? And Dario says, no, no, I told you, I'm a believer in exponentials.

[00:18:30] Paul Roetzer: I think ASL 4 could happen anywhere between 2025 and 2028. Then the final one. And again, I don't know, there are a dozen of these things I could talk about. The final one I will leave it with is, um, Ezra says, and this is where I think the title of the podcast comes from, what if Dario Amodei is right?

[00:18:50] Paul Roetzer: If you don't believe these models will ever become so powerful they become dangerous, fine. So he's saying, like, hey, if you didn't think what you're building could [00:19:00] be catastrophic, then this is fine that you keep racing forward. Like, I get that. But Ezra says, but because you believe that ASL 3 and ASL 4 are possible within the next year.

[00:19:11] Paul Roetzer: How do you imagine this actually playing out? To which Dario says, look, I don't have the answer. Like, I'm one of a significant number of people trying to navigate this. Many are well intentioned, some are not. I have a limited ability to affect it.

[00:19:29] Paul Roetzer: So if you had to put on one hand the people who are building the frontier models that are leading us to ASL 3 and 4, Dario's one of those people. Time and time again, I've said it on the show, these people do not know how to prevent catastrophe. Like, I consider Anthropic to be a leader in this space, to be one of the ones thinking most deeply about this. They don't know, and they do think it is, like, [00:20:00] one to two years out from being an issue. You can understand the doomer perspective we've talked about on the show way better when you listen to an interview like this.

[00:20:09] Paul Roetzer: So, I don't know. I mean, I'll stop there. Like I said, I don't want to do the whole episode on this, but you have to go listen to this episode. Like, I really encourage people to go listen to this. And Mike, did you have any others that, like, really jumped out at you or any context you want to add?

[00:20:23] Mike Kaput: I simply arrived at the same question Ezra Klein was asking, and I think it is a really core question I'm never going to have the answer to, which is, if you really believe everything he said, are they, are he and Sam Altman and a handful of other people having conversations behind closed doors about what the future looks like? Because he can't dodge this question the way he did. He has to have a perspective on this, and I think the reason we don't hear it is because it's uncomfortable, would be my [00:21:00] guess.

[00:21:00] Paul Roetzer: I think that's possibly right. I do think, and they have said, they've all said this many times, is they are all under the assumption now that if they don't build it, someone else will. And that if they build it first, it will help them figure out how to make it safe. So Sam, I know, has specifically said that. Dario has basically said as much, because they got into competition and, like, this race to build these things.

[00:21:27] Paul Roetzer: And like, if we don't build it, other governments will build it. So we got to build it. And it has to be in the U.S., or has to be over here. So I really think we are just in an arms race, and it is that if we don't, they will. So we're just going to keep going. And we trust ourselves to build it safer than they do. Uh, but this really presents the challenges of the Zuckerberg interview we talked about in episode 93 and their belief that we're not there yet, we're not that worried, we're just going to keep open sourcing and maybe we'll open source Llama 4 or maybe we won't.

[00:21:59] Paul Roetzer: But [00:22:00] now the question is, well, how's Zuckerberg going to know? Like, they're not all going to agree on this. There aren't going to be any standards unless there's government regulation, which the e/acc people and the techno optimists, like, think is regulatory capture. Like, it is. It's such a weird spot to be in, where they all think this could go very bad.

[00:22:21] Paul Roetzer: And they, publicly, the only thing they're saying is, we just got to build the smartest version possible as quick as possible, and it'll help us figure all this out. That is pretty much how I think they're approaching it, versus the other people, like, yeah, just build it all, it'll all work out somehow.

[00:22:34] Paul Roetzer: Society will figure it out. Like that's kind of the techno optimist approach. Holy cow.

[00:22:40] Paul Roetzer: I mean, crazy interview.

[00:22:42] Mike Kaput: Yeah, it's unreal.

[00:22:43] Mike Kaput: I think it's, I think to your point, it's, even if this kind of thing makes you a little nervous, you have to understand where these people are coming from and you have to engage with some of these topics, because it is going to affect us in one way or [00:23:00] another, sooner rather than later, would be my guess.

[00:23:03] Paul Roetzer: Well, that was our one main topic. Now we have a bunch of rapid fires.

[00:23:07] 2024 AI Index Report

[00:23:07] Mike Kaput: Alright, so, let's also jump into some research that has come out of Stanford University. They have released the 7th edition of their popular AI Index Report. Now, the 2024 AI Index Report comes out of the Stanford Institute for Human-Centered Artificial Intelligence, HAI.

[00:23:29] Mike Kaput: And it's absolutely packed with a bunch of original data from tons of different sources on every aspect of AI and how it relates to business, to the economy, to society. This report, full disclosure, is a beast. It is more than 500 pages, so you're probably not going to read the whole thing. You might want to drop it into Gemini 1.5 and see if its context can help you read the whole thing quickly.

[00:23:57] Mike Kaput: It covers nine key areas: [00:24:00] R&D, Technical Performance, Responsible AI, Economy, Science and Medicine, Education, Policy and Governance, Diversity, and Public Opinion. And thankfully the researchers seem to recognize that readers might need some help, so they provide takeaways for the report overall and for each chapter.

[00:24:18] Mike Kaput: So I wanted to just quickly highlight, Paul, a couple that jumped out at me as I kind of went through, read different sections of the report, and looked through the takeaways.

[00:24:28] Mike Kaput: So first up, they said that despite a decline in overall AI private investment last year, funding for generative AI surged. It grew by nearly 8x from 2022 to reach $25.2 billion. They also said that in 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. And these studies also [00:25:00] demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers.

[00:25:06] Mike Kaput: They found AI was mentioned in 394 earnings calls, nearly 80 percent of all Fortune 500 companies, which was an increase from 266 mentions in 2022. Now, I won't go through every single stat here, but the chapter on public opinion really jumped out at me as kind of a fascinating look at how people are really feeling about AI, based on a number of different surveys and survey data they collected.

[00:25:36] Mike Kaput: So what I found really interesting was that 52 percent of people that were surveyed expressed nervousness towards AI products and services. And that was a 13 percentage point rise from 2022. In fact, in America, 52 percent of Americans report feeling more concerned than excited about AI, which [00:26:00] rose from 38 percent in 2022.

[00:26:04] Mike Kaput: Now, also, about 37 percent of respondents feel that AI will improve their job. That's a relatively small proportion, it sounds like, and only 34 percent anticipate AI will boost the economy.

[00:26:17] Mike Kaput: And some final stats here that are interesting is that 63 percent of respondents said they were aware of ChatGPT, and of those aware, around half report using ChatGPT once a week.

[00:26:31] Mike Kaput: So, Paul, there's a lot to unpack here, but it seems like the interesting story that is kind of beginning to emerge here is, you know, the data on funding, productivity, earnings calls, it's pretty clear that this is AI's economic moment, and we're getting going with adoption and investment. But overall, sentiment really seems to have taken a downward trend here. I mean, it feels like people are [00:27:00] beginning to feel the acceleration of AI and not, not always loving the effects.

[00:27:04] Mike Kaput: Like, what did you think as you were going through this?

[00:27:06] Paul Roetzer: Understanding what the effects are going to be. Yeah, so, one interesting note, one of the co-directors who authors the foreword is Jack Clark, the co-founder of Anthropic, who we just mentioned, so just some context there. My overall take on this study is I think it's a phenomenal macro-level report.

[00:27:24] Paul Roetzer: Like if you're new to the AI space, you know, just started paying attention the last few months, don't understand the macro level of this, great read. Just, at minimum, read the report highlights by chapter from pages 14 to 26. Take you about five minutes, and you will get a really solid high level of where they see AI right now.

[00:27:45] Paul Roetzer: Um, so that's the first thing. The one you mentioned about the funding, I think any time you read these reports, any research reports, you have to understand how the data is gathered, what the sources of the data are. So like the one about funding, I didn't look at that [00:28:00] chart, but my assumption is the long tail got short.

[00:28:02] Paul Roetzer: And what I mean by that is the disproportionately large funding of these model companies gives the impression that funding overall was skyrocketing, when in reality it might've gone to, like, six companies, and then the long tail of all these smaller funding rounds got shorter. Cause you and I both know a whole bunch of AI tech companies who couldn't sniff money last year from anybody, because the investors don't know what their moat is. And so they couldn't raise anything. So I wouldn't look at that and say, oh, everybody's getting money who has AI. That's not the case. The second thing is a lot of the research they cite was from research that was done by other people

[00:28:41] Paul Roetzer: early to mid 2023, in some cases before or shortly after GPT 4 came into the world. So given that we have new frontier models every four to eight months, as, as Dario tells us, you have to understand the shelf life of some of this research. So all that being [00:29:00] said, really good macro level report. They do a phenomenal job.

[00:29:04] Paul Roetzer: It is a massive thing that only Gemini would probably be able to help you assess or summarize. Um, but, you know, I think just focus on those report highlights by chapter. It's a good read regardless of your current level of understanding of AI.

[00:29:19] Microsoft, Google, OpenAI Are Racing Towards Agents

[00:29:19] Mike Kaput: So we also got some new reporting from The Information that confirms something we've been talking about on this podcast for a while now. They report that Microsoft, OpenAI, Google, Meta, they're all actively working on AI agents. And The Information found some interesting details, kind of breaking down how some of these companies are starting to approach AI agents, which is something we've also had some updates on in the past few episodes.

[00:29:48] Mike Kaput: So The Information spoke, for instance, to current Microsoft employees who confirmed that the company is making AI agents to automate multiple actions across apps, like creating, [00:30:00] sending, and tracking invoices, as well as writing code. And the employees say that they're announcing some of these agentic capabilities at their annual Build Developer Conference next month, so we'll keep an eye on that.

[00:30:12] Mike Kaput: And The Information also reports that OpenAI is, quote, quietly designing computer-using agents that could take over a person's computer and operate different applications at the same time, such as transferring data from a document to a spreadsheet. Separately, they say OpenAI and Meta are working on agents that can handle complex web-based tasks like creating itineraries, booking travel accommodations.

[00:30:38] Mike Kaput: And as we've talked about in past, google is also working on AI agents, which they have previously.

[00:30:45] Mike Kaput: But The Information also reported they're doing so with the help of one of the co-founders of the AI agent startup Adept. Now, the whole goal here, it sounds like, is to unlock more customer spending on these [00:31:00] companies' AI products by getting people to use them more, by automating some of these harder tasks, or more of the tasks, that knowledge workers are doing every day.

[00:31:10] Mike Kaput: So, Paul, I don't think it's any surprise these companies are all working on AI agents. I did find it a pretty compelling preview of what could be coming, though. I mean, we talk to so many firms that are still barely leveraging the existing capabilities of Copilot, ChatGPT, yet it sounds like they could all be getting some pretty serious agent upgrades soon that really automate, like, some multi-step tasks that knowledge workers are doing every day, right? Was that kind of what you were thinking about as you were reading this?

[00:31:43] Paul Roetzer: Yeah, I see this as an inevitable path forward. Again, as we've talked about on our AI timeline, this is coming, but I think what we're starting to see is there's been probably a slower uptake on adoption in enterprises than these major brands would like to see. And agents are part of their [00:32:00] answer to that.

[00:32:01] Paul Roetzer: So, if the humans aren't going to adapt and be able to use these generative AI tools, you know, at scale within their organizations, then we'll just build agents to help them do it. So I think it's kind of both ends. It was inevitable, but it might be being accelerated by slower adoption than expected.

[00:32:16] Paul Roetzer: So the last thing I'll say is, May 14th is the Google I/O conference. That's their largest developer conference. So I would expect we'll hear a lot more about Google's plans for AI agents on May 14th. And then the Apple WWDC, their developer conference, is June 10th to the 14th. So over the next month and a half, we're going to be hearing a lot more about AI agents and what's going on at these big tech companies.

[00:32:40] Rise in AI Job Roles

[00:32:40] Mike Kaput: So in another piece of news that caught our eye, we saw a post from the CEO of Writer, which is a generative AI writing platform, that she shared on LinkedIn, and it kind of gives a glimpse of how AI is starting to potentially change the job market for certain roles. Now this post comes from May Habib, who we know well here at the [00:33:00] Institute, and in this post May shared the following.

[00:33:03] Mike Kaput: She said, when you start a company, what you're really hoping to start is a movement. Not like WeWork, Adam Neumann style, but just plain old-fashioned new jobs being created around your company's idea of how to do things. And it's finally happening in generative AI. And then she went on to share some new AI-focused roles that are opening up in AI program management and strategy at some companies that her company services as customers.

[00:33:34] Mike Kaput: And these include roles like an assistant VP of AI program lead at L'Oreal, a director of AI and business data operations at New York Life, a senior director of conversational and generative AI at Mars, a Director of Generative AI at Prudential, a Chief AI ML Officer at U.S. Bank, an Associate Director of Generative AI Prompt Engineer at KPMG [00:34:00] US, and a Senior Content Designer, AI TACS, adding to it.

[00:34:05] Mike Kaput: So Paul, I was looking at these and these seem like pretty interesting roles that wouldn't have existed, you know, even 12 to 24 months ago. I mean, are you expecting us to see more roles like these, AI ops or strategy roles that don't require you to be a machine learning engineer or a data scientist?

[00:34:24] Paul Roetzer: Definitely. And you and I wrote about that in our book in 2022. We were trying to kind of project out what these roles might be. I didn't have time to pull it for this episode. We'll do it for a future one. But I used to run these reports every, like, three to six months through Sales Navigator. And I would actually run audits of AI ML titles by company, by industry, just to get a sense of who the leaders were and who was kind of the most forward-thinking organizations.

[00:34:50] Paul Roetzer: I didn't have a chance to run the updated numbers to see how those benchmarks compared to today. But I would expect to see a pretty significant growth curve. So, yeah, I think the [00:35:00] titles are going to be fascinating. I don't expect that generative AI prompt engineer one to be something we're going to necessarily need a year or two from now, but I might be wrong on that one.

[00:35:09] Paul Roetzer: I've often said I just don't see that as a career path per se, or a role. I think it's more of a skill, but we'll see, we'll see what develops with these new models and, kind of, um, what responsibilities and tasks are related to the models as they emerge. But yeah, we're definitely going to see a lot of creative titles that could be coming into play here.

[00:35:30] AI Already Reshaping Newsrooms

[00:35:30] Mike Kaput: So some more research that caught our eye, a new study from the Associated Press reveals that 70 percent of newsroom staffers say that they're already using generative AI. So, their study surveyed 292 people, mostly in the US and Europe, who work in newsrooms at media companies, public broadcasters, and magazines. The respondents said they're using generative AI for everything from [00:36:00] crafting social media posts, doing newsletters, writing headlines, to translation, interview transcription, and even drafting stories.

[00:36:09] Mike Kaput: And about one fifth of respondents actually said they use generative AI for multimedia, like social media graphics and videos. Interestingly, as part of this research, the AP also found that less than half of the respondents actually have guidelines for using AI in their newsrooms. Some of the other interesting points that jumped out are 49 percent of respondents said their workflows had already changed because of generative AI.

[00:36:37] Mike Kaput: Yet 56 percent said the AI generation of entire pieces of content should be banned, and just 7 percent were worried about AI displacing jobs. So Paul, we both come from journalism backgrounds, like, what were your reactions to this data?

[00:36:55] Paul Roetzer: I mean, good, good research, good insights. The [00:37:00] one that I just find, I don't know that it's shocking, but disappointing, is the lack of generative AI policies in newsrooms. I mean, my goodness, like we, we know that there's a lack of generative AI policies in corporations and in schools. But in newsrooms, where the use is so high profile and so essential, that's got to be a hundred percent.

[00:37:19] Paul Roetzer: Like, I don't know how you're running a newsroom today that doesn't have generative AI policies that everyone in there fully knows. That's problematic. So hopefully this survey and the related research is an accelerant to get generative AI policies defined in these newsrooms to ensure responsible use of the technology and reporting.

[00:37:43] How to Opt Out of AI

[00:37:43] Mike Kaput: So in another piece of news this week, we got a new report from Wired magazine, which offers an in-depth guide on how to opt out of having your content used in AI training or having it be sold for training purposes.

[00:37:59] Mike Kaput: The [00:38:00] report outlines what, if anything, you can even do to opt out of some of the major tools. Some of the examples they go through are things like ChatGPT, Gemini, Adobe, HubSpot, Perplexity, and Slack. But unfortunately, says Wired, quote, it's worth setting some expectations, you know, as you go trying to do this.

[00:38:20] Mike Kaput: Because many companies building AI have already scraped the web, so anything you've posted is probably already in their systems.

[00:38:28] Mike Kaput: Companies are also secretive about what they have actually scraped, purchased, or used to train their systems, and unfortunately you can see that kind of reflected in this report, because they go kind of tool by tool through a bunch of examples and tips on how to opt out.

[00:38:44] Mike Kaput: Many of those are extremely helpful. I highly recommend you check them out, but every tool they discuss pretty clearly offers really different levels of possible opting out in different ways, and some of them don't even offer it at all. And plenty of them, [00:39:00] it sounds like, are not really intuitive. It's basically like, hey, go email this address and ask them if they'll remove you.

[00:39:08] Mike Kaput: So, Paul, I don't want to be, you know, a cynic here. This seems like really useful guidance. I would say everyone should check it out and if you want to try to opt yourself out of these models, go for it. But like, how realistic is it to actually think you'll be able to, like, fully opt out of most or all AI training?

[00:39:29] Paul Roetzer: I think at some point people are just going to realize like there's no hope here.

[00:39:33] Paul Roetzer: Like some companies are going to make it hard to do. Like, I was scanning some of your notes where you actually went and looked at terms for different software companies we work with. I won't name names on some of these, but they don't exactly make it easy. Um, you have to email, like, a privacy@ address or something like that to request this. Yeah. So not making it, like, push-button easy to opt out of this stuff. And then even when they do, it's probably going to be like a [00:40:00] cookies thing where it's like, yeah, whatever, a lot of cookies. Like, who cares? I just feel like, because there's no single way to do this and you have to go through and do it for all of them.

[00:40:10] Paul Roetzer: I can't even tell you, I mean, how many AI tools do we use where you turn it on and it's like, I don't know, or you look at the terms and you're like, this seems like I should probably have my attorney look at this, but I want to use the tool, like, whatever. Everybody, you know, I think people just get to the point where you're like, like, what is the Meta AI thing?

[00:40:27] Paul Roetzer: Like now I'm using Meta AI in Instagram and Facebook, like, what are they collecting now when you're in there? And somebody actually asked me that. Like if I go into meta.ai and I'm using their new language model, where I can, you know, generate stuff like ChatGPT, not just images, but have it write papers and all that stuff.

[00:40:44] Paul Roetzer: This is Meta, for God's sakes. Like, they're going to take everything you do on meta.ai and connect it to your Facebook account. We had an episode not too long ago where we talked about, like, they have something like 2,900 data sources on average for every Facebook user. You don't think they're going to use [00:41:00] everything you do, everything you type into meta.ai? I think it's just the world we're in, and the reality is the vast majority of people just aren't going to care enough to go figure this out and do this, and I think the tech companies kind of, like, bank on that.

[00:41:13] AI Mind Reading

[00:41:13] Mike Kaput: All right, so in case you didn't think from a couple of our previous segments that the future was arriving fast enough, we have a segment this week on AI mind reading, literally.

[00:41:25] Mike Kaput: So a couple new stories are out that show some of the promise and pitfalls of using AI to scan the brain.

[00:41:33] Mike Kaput: So first, there's an interview on the popular podcast The Cognitive Revolution between host and AI expert Nathan Labenz and Paul Scotti at Stability AI about a project called the MindEye2 project.

[00:41:48] Mike Kaput: Now Scotti is the head of neuroimaging at Stability AI, and as part of this project he's working on creating high-quality reconstructions of [00:42:00] visual perception from brain activity using just one hour of fMRI training data. So at a really basic level, basically, they get data when someone looks at an image while getting an fMRI scan, and that brain activity data is recorded.

[00:42:18] Mike Kaput: And then this model can go ahead and take that data and actually begin to reconstruct visually what image the person was looking at. So this is like building AI foundation models that can kind of read minds based on just brain activity data. Now, these kinds of incredible breakthroughs are giving some lawmakers pause.

[00:42:43] Mike Kaput: Because this past week, Colorado's governor also signed the first bill in the U.S. that aims to keep neural data generated by the brain truly private. Now this law aims to protect sensitive patient data obtained from consumer-level [00:43:00] brain technologies. That's key, because something like MindEye2, it appears, is using data from a clinical setting. That's already all protected by federal law. But this law wants to make sure that all these consumer products out there that are hitting the market to scan your brain activity actually end up protecting your data, because right now they're largely unregulated, which is a little terrifying. Um, we seem to be getting more devices and apps that can scan and interpret brain data, and kind of like we just talked about with your content online, you're opting in to give some of that information, if not all of it, to these companies.

[00:43:38] Mike Kaput: So that can include companies that have wearables that do things like monitor brain activity to facilitate meditation, and then also apps that try and read and interpret your brain signals while you scroll.

[00:43:51] Mike Kaput: So, Paul, this can definitely feel like science fiction, but I think it does highlight an important point, which is we're [00:44:00] basically exploring one of the next logical steps in AI development, which we're starting to see to some degree with all these wearables, which is that AI is getting integrated directly into our physical lives. And that includes interfacing with our brains and bodies. Like, what did you think of these developments as you were reading through this?

[00:44:21] Paul Roetzer: On April 17th, when I saw the New York Times one about the Colorado governor signing a bill on who owns your brain data, I retweeted, here's something I haven't really thought about before that now seems very relevant to the future.

[00:44:31] Paul Roetzer: Like, it's like, oh man, I didn't have enough to think about with this stuff.

[00:44:38] Paul Roetzer: This is a complicated one that I need to spend more time processing, because my mind immediately goes to, well, who else has access to this data? Apple Vision Pro, you know, for one, comes immediately to mind, and, like, go look at the patents on the Apple Vision Pro and what they have access to. I've heard word that there's going to be a future generation of AirPods that'll actually have electrodes built [00:45:00] into the AirPods and be able to look at, like, um, brain activity through your AirPods.

[00:45:05] Paul Roetzer: Um, just this morning, so we're recording this on Tuesday, April 23rd for context, Meta announced Meta AI with Vision built into the Ray-Ban glasses. So, you know, it's just, as you play this out, you realize, like, wow, yeah, I hadn't played around with this one in my head, of, like, who's going to have access to brain activity and what can they do with it.

[00:45:26] Paul Roetzer: And the real kicker is, who's building Neuralink, that embeds this stuff right into your brain, but Elon Musk.

[00:45:32] Paul Roetzer: Um, make your own call if you trust Elon Musk with your brain data or not. But it's, it's, it's just a whole nother realm that I hadn't really spent a lot of time thinking about that I guess we'll talk about on future episodes.

[00:45:47] Google Makes Structural Changes to Support AI Efforts

[00:45:47] Mike Kaput: So in some other news, Google CEO Sundar Pichai just released a company-wide note that details some structural changes related to how Google is pursuing AI.

[00:45:59] Mike Kaput: So [00:46:00] first, the company is consolidating teams that focus on building models across research and Google DeepMind. All of this work, says Pichai, will now sit in DeepMind. The responsible AI teams from research are also moving to Google DeepMind, quote, to be closer to where the models are built and scaled.

[00:46:21] Mike Kaput: Google's Devices and Services product area and its Platforms and Ecosystems groups are also being put into a new product area called Platforms and Devices. So, Paul, this is definitely some Google inside baseball, but it does seem to kind of indicate some talent and responsibility is being centralized into Google DeepMind.

[00:46:41] Mike Kaput: We've been talking lately on episodes about how Demis Hassabis, head of DeepMind, has been taking a more active and prominent role in the press, and seeing his public profile raised. Like, what do you see as going on here?

[00:46:55] Paul Roetzer: That was the first thing I thought of, is this is a lot more under Demis's [00:47:00] domain. The research and models under DeepMind, the responsible AI teams in research now reporting to DeepMind. I would imagine a lot of this potentially is in reaction to the failed launch of the image generation tool and the apparent miscommunications and oversights within that.

[00:47:20] Paul Roetzer: And I would imagine Demis was very unhappy. He certainly sounded unhappy in the interviews I've heard since. So, you know, maybe he had someone do this. The other interesting thing, that may be nothing but I just found noteworthy, is the last few main things related to, like, anything related to Google DeepMind and Google broadly, Sundar and Demis co-authored.

[00:47:44] Paul Roetzer: Like, they, on the blog post, they were both at the top of it, from both of them, and then it would have perspective from both of them. This, this blog post is basically about Google DeepMind, and Demis is not mentioned. He is not a co-author on it. So, again, maybe absolutely nothing, [00:48:00] but it was just noteworthy to me that they seemed to be making a play to have Sundar and Demis co-author things, and this one was not.

[00:48:07] Paul Roetzer: Um, it would have been a natural thing to have a few quotes from Demis in there about his excitement about having the models underneath the team and the responsible AI team closer, like, providing perspective. But this was very much a Sundar message. And then I always have to laugh. It's an internal message that they just released because they know someone's going to leak it anyway.

[00:48:24] Paul Roetzer: So they just, like, publish these internal emails as blog posts because they know it's going to get to The Information or The Verge or whoever within, like, 10 seconds.

[00:48:33] The Ethics of Advanced AI Assistants

[00:48:33] Mike Kaput: So researchers at Google DeepMind have also been a bit busy. They dropped a mammoth new paper. It is over 270 pages, on the ethics of advanced AI assistants.

[00:48:47] Mike Kaput: So this paper explores what the ethical and societal implications are of AI assistants, or AI agents. In a blog post talking about this, they said, quote, [00:49:00] imagine a future where we interact regularly with a range of advanced AI assistants, and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality. And as part of this, the researchers argue it's going to be really, really important to think proactively about how this phenomenon will change and disrupt our world.

[00:49:24] Mike Kaput: They seem to be most concerned with the fact that advanced AI agents could influence our behavior, for better or worse, and the fact that once they get good enough, they start to raise really difficult questions around human relationships, what we consider human, anthropomorphism.

[00:49:42] Mike Kaput: So, ultimately this paper argues that serious work is needed on alignment so that agents don't disproportionately favor certain actors over others, and that we carefully consider the human impact, not just the technological one, that agents are going to have on life as we know [00:50:00] it.

[00:50:00] Mike Kaput: Now, Paul, this is definitely, you know, pretty dense and academic, but I did find the timing kind of interesting, given that we've talked about Google and others going all in on agents. You know, DeepMind researchers releasing this, does this seem like an effort to raise concerns and considerations about the path we're headed down with agents?

[00:50:20] Paul Roetzer: Yeah, I mean, they didn't start writing this last week. You know, if you want to understand how much these companies believe this is the future, just look at the fact that there are, like, 60 authors on this paper. Like, this is a massive effort. So Iason Gabriel, who appears to be the lead author on it, you know, of the 50 or 60 authors, um, he tweeted, which is actually how I discovered the report.

[00:50:47] Paul Roetzer: And he said, we define advanced AI assistants as artificial agents with natural language interfaces, so think ChatGPT, Gemini, whose function is to plan and execute sequences of actions on behalf of a user across [00:51:00] one or more domains in line with the user's expectations. So go back to Dario Amodei, 20 to 30 layers of things that a coder would do.

[00:51:08] Paul Roetzer: That's what we're talking about, AI agents. Um,

[00:51:11] Paul Roetzer: Like, we could probably do a whole episode, honestly, on this report, but I'll just read two quick, um, paragraphs from the blog post announcing it. So they say advanced AI assistants could have a profound impact on users and society and be integrated into most aspects of people's lives.

[00:51:27] Paul Roetzer: For example, people may ask them to book holidays, manage social time, or perform other life tasks. If deployed at scale, AI assistants could impact the way people approach work, education, creative projects, hobbies, and social interaction. Over time, AI assistants could also influence the goals people pursue and their path of personal development, through the information and advice assistants give and the actions they take.

[00:51:50] Paul Roetzer: Ultimately, this raises important questions about how people interact with this technology

[00:51:55] Paul Roetzer: and how it can best support their goals and aspirations. I don't want to do it a [00:52:00] disservice, you know, this is just a rapid fire today. I feel like we'll probably have to come back and talk more about this. I think this is a very fundamental thing.

[00:52:08] Paul Roetzer: When I talk to people, I had a meeting this morning, um, with an educational leader, and this is the stuff that people go to, you know, the stuff people think about when they really understand what's going on and where this could go. If our timeline is directionally correct, what does this mean? What does it mean to humanity?

[00:52:30] Paul Roetzer: What does it mean to society? What does it mean to education? What does it mean to morals and religions and ethics? Like, we have to come to grips as a society, like, there's some really big things that we're going to have to address, and I think it's very important that they're dedicating resources to be talking about these things.

[00:52:48] Paul Roetzer: You can be skeptical about whether DeepMind's the right people to be telling you this, but these are researchers who do this for a living, who believe deeply in the importance of what they're doing. And so [00:53:00] I think it's, uh, it's worth people's time to be exploring this. And I think we need more people who, you know, I always say, like, people always ask me what they can do in AI, like, how do they get involved?

[00:53:10] Paul Roetzer: What I always say is, like, find a thread. Like take your domain expertise, take the things that you're passionate about, the things you're curious about, and pull on that thread in AI. So if you are one of those people who, once you understand it, steps back and says, Oh my gosh, like, what about humanity?

[00:53:24] Paul Roetzer: What are our kids going to study? How's this going to impact jobs? Go research that. Like, there's really smart people publishing amazing stuff. Like, go find that stuff, and

[00:53:34] Paul Roetzer: have it inspire you and, like, be a thought leader in that thing. You don't have to do what Mike and I do and try and, like, take the fire hose in every week and figure out what these 50 things mean every week.

[00:53:44] Paul Roetzer: Like, pick the one topic you're really passionate about and go on that topic. That's what we need in AI, I believe, is, like, more people who become passionate about the thing that matters to them and figure out what AI means to that.

[00:53:59] Mike Kaput: [00:54:00] That's such good advice, and it's why we counsel so much for AI literacy at scale. We need everyone in every domain to understand

[00:54:09] Mike Kaput: how to start thinking about this stuff in what they know best.

[00:54:13] On the Economics of Frontier Models

[00:54:13] Mike Kaput: So a couple final topics here. We have a new essay from VC firm Air Street Capital, which invests in AI-first companies, and they have this breakdown of the economics of frontier AI models.

[00:54:27] Mike Kaput: And it's well worth a look if you're kind of trying to predict and unpack where the near future of AI goes from kind of an investment and technology perspective. So, their core thesis here revolves around frontier models, which are the crop of core AI models today that have the most powerful capabilities.

[00:54:47] Mike Kaput: So think of GPT-4, Claude 3, etc.

[00:54:51] Mike Kaput: And their thesis is this. They say we're going to increasingly see this split in the AI world when it comes to these frontier [00:55:00] models: a split between what they call predominantly closed frontier models, charging economically viable prices to a subset of deep-pocketed users that need their capabilities, and efficient, cheaper, open source models with a specific task focus.

[00:55:17] Mike Kaput: So at the center of this argument is Air Street Capital's take that the core economics of large AI models currently just don't work that well. The models cost hundreds of millions of dollars to train, and all the major providers are losing money on their models. And this isn't really like your typical SaaS or software market.

[00:55:39] Mike Kaput: The growth and adoption of a model doesn't necessarily mean the relative cost of their infrastructure decreases, because more users can cost them way more money as they use more compute. Also, the major models appear to be

[00:55:53] Mike Kaput: converging to a place where eventually they all have similar levels of capabilities among the [00:56:00] top leaders.

[00:56:01] Mike Kaput: So basically, they conclude, quote, the combination of a relatively undifferentiated offering and high CapEx before earning a cent of revenue is highly unusual for software. And this turns the market into a competition to raise as much as possible from deep pocketed big

[00:56:18] Mike Kaput: companies and investors to, in turn, incinerate in the pursuit of market and mindshare.

[00:56:25] Mike Kaput: So Paul, it seems like they're kind of getting at this idea that there's probably going to end up being very few big players that are building and controlling closed models that do all this stuff well and are kind of the frontier models, and that the real diversity is going to be in smaller, more economical models used for narrower tasks.

[00:56:46] Mike Kaput: What did you think of this assessment?

[00:56:48] Paul Roetzer: It seems to be pretty well in line with what we're hearing. And, you know, I think for business people, you know, listening to this, the reason we've been talking about this is because this is

[00:57:05] Paul Roetzer: Is it OpenAI? Do we go get Llama and somebody who knows how to fine-tune that? Like, what do we do? Do we go get a third-party piece of software that is building their own model? And how are they even building their own model that's competitive with these bigger models? And when we get GPT-6, do we even need smaller models?

[00:57:22] Paul Roetzer: Or are these things going to be so generally intelligent that, like, all these other things are irrelevant? This is, like, this is the trillion dollar question. Like, what is the future of these models and who are the winners? And it's kind of good to know what goes on and how they work a little bit.

[00:57:39] Paul Roetzer: And, you know, even from some recent things I've been reading, just how finicky they are in their training. Like, you don't just say, okay, let's go spend a hundred million dollars and let's train the new model, and then you just flip it on and it goes and out comes this model working perfectly.

[00:57:56] Paul Roetzer: That is not how these things work. You got to start training [00:58:00] smaller models and see if what you're doing is working, and then you got to kind of fine-tune things and then keep going. It's not a perfect science. And so it makes sense that there's only going to be, like, a few big players who can spend billions or tens of billions of dollars training these future generations of models.

[00:58:17] Paul Roetzer: Um, but I think there's going to be a lot of these smaller models that do work on device, so it just runs through your iPhone rather than in the cloud, and models that are trained to do very specific things. It's a really dynamic space. And it's, again, like we said before, this is hard. Like, it's hard to know what to make bets on, especially when new models are coming every four to eight months.

[00:58:36] Paul Roetzer: Like, at some point you got to, like, make a decision and go and start building actual capabilities around AI within your company. So yeah, I mean, it makes a lot of sense. Again, if this is a thread you're interested in, go read this one. And I think that's sometimes what we're trying to do with this podcast, just sprinkle in topics that cover a lot of different areas so people can go deeper on something if it's of interest to them.

[00:58:59] Perplexity Raises $62.7M, Announces Enterprise Pro

[00:58:59] Mike Kaput: [00:59:00] So our last topic today is kind of a last-minute addition that happened right before we started recording, but we saw an announcement from Perplexity

[00:59:09] Mike Kaput: that they've raised $62.7 million in Series B1 funding.

[00:59:15] Mike Kaput: Um, basically, Series B1 is an extra in-between funding round that follows Series B and precedes Series C. And the company has not only raised this, but they've also announced that they now have an Enterprise Pro version, which is basically a more secure, paid version of Perplexity that it looks like they're billing as an internal research assistant that can dramatically accelerate your team's efficiency with the most powerful AI research assistant available.

[00:59:49] Mike Kaput: So when it comes to pricing for Enterprise Pro, um, there's a self-serve option: companies with fewer than 250 employees can access Enterprise [01:00:00] Pro for

[01:00:02] Mike Kaput: 40 bucks per month or 400 a year per seat, or there's a custom pricing version for larger enterprises.

[01:00:10] Mike Kaput: Um, for instance, they have a testimonial that Perplexity Enterprise Pro has allowed Databricks, a company we've talked about before, to make it easier for engineering, marketing, and sales teams to execute faster.

[01:00:23] Mike Kaput: We estimate it helps our team save 5,000 working hours monthly. So Paul, this was a pretty interesting announcement to see.

[01:00:31] Mike Kaput: Perplexity kind of going upmarket with the Enterprise Pro license. We talk about Perplexity all the time.

[01:00:37] Mike Kaput: Um, what were your thoughts kind of reading this?

[01:00:39] Paul Roetzer: mean, investors are stupid, like

[01:00:42] Paul Roetzer: Jeff Bezos, Andrej Karpathy, Matt Friedman, Brad Gerstner. My goodness. I don't know how these guys don't get acquired

[01:00:52] Paul Roetzer: at some point here. Like, they're just, they're that annoying fly on Google right now that is just getting [01:01:00] bigger and going to become a problem.

[01:01:02] Paul Roetzer: Um,

[01:01:04] Paul Roetzer: Yeah, although I saw an interview on CNBC this morning with the founder, and he gave a horrible answer. Like, they were asking him about, you know, are people even going to go to websites anymore? Like, you're just giving the answers. Why would people go to websites? Like, you're going to completely crush organic traffic.

[01:01:21] Paul Roetzer: And he said, his analogy was, we're giving them the movie trailer, people are still going to want to go watch the movie. And I was like, again, like, you just raised, what, how much, $62.7 million? Spend half a million on a PR

[01:01:36] Paul Roetzer: firm.

[01:01:37] Paul Roetzer: Like, the answers that people give on the most predictable questions in high-profile media interviews... just get

[01:01:45] Paul Roetzer: a PR person or two to help with the messaging. Love Perplexity, they're doing great stuff. It's really painful to listen to some of these interviews. They're getting asked these layup questions that should be core to their PR strategy, their communication strategy, and they just, like, [01:02:00] fumble it. So congrats on the funding.

[01:02:02] Paul Roetzer: Awesome investor list. Great tech. Um, get a PR firm.

[01:02:08] Mike Kaput: All right, Paul, well, thanks so much for breaking down even more AI news in this special episode this week. I just want to leave our audience with a couple final reminders. If you get value out of the podcast, we would love it if you could leave us a review, if you have not already, on your podcast platform of choice.

[01:02:30] Mike Kaput: I would also encourage you, like Paul mentioned at the top of the

[01:02:33] Mike Kaput: episode, if you have five minutes to contribute to furthering AI understanding in marketing, go check out our State of Marketing AI survey. We are creating a new report this year based on a new survey. So if you go to stateofmarketingai.com and spend five minutes taking that survey, we'll send you the report when it's done.

[01:02:55] Mike Kaput: And you can also join the hundreds or thousands of marketers who are going to tell us [01:03:00] this year exactly how they're using AI.

[01:03:01] Mike Kaput: And last but not least,

[01:03:03] Mike Kaput: for all the other AI news that we are unable to fit in each podcast episode, I'd highly encourage you to go check out our newsletter at MarketingAIInstitute.com forward slash newsletter.

[01:03:16] Mike Kaput: Every week we round up all the news we discussed today and all the other items we just didn't have time for. So it's a great weekly brief to get you caught up and up to speed on AI. Paul, thanks again.

[01:03:30] Paul Roetzer: Thanks, Mike. And unless GPT-5 happens to drop in the next few days, we won't talk to you again until our regularly scheduled podcast next Tuesday. So thanks everyone for listening.

[01:03:40] Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey. And join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI [01:04:00] courses, and engaged in the Slack community.

[01:04:03] Until next time, stay curious and explore AI.
