What do the next two years hold for AI? Sam Altman says the progress will far surpass even the rapid developments we've seen in AI recently.
This week our hosts, Paul Roetzer and Mike Kaput, unpack Sam Altman’s bold claims about the rapid pace of AI advancements, Anthropic's new initiative to quantify AI's impact on jobs, and the largest-ever ChatGPT deployment.
Plus, in our rapid fire section this week, new safety measures from Anthropic, Google, and Meta to keep AI development in check, ByteDance's groundbreaking deepfake system, the EU’s latest AI bans, and more.
Listen or watch below, then scroll down for the show notes and the transcript.
00:04:22 — Sam Altman on GPT-5
00:26:55 — The Anthropic Economic Index
00:33:31 — OpenAI and the CSU system bring AI to 500,000 students & faculty
00:41:40 — Gemini 2.0
00:46:34 — Meta, Google, Anthropic Safety Measures
00:54:19 — Boom Times For ChatGPT
00:58:42 — Omni-Human1
01:02:13 — New EU AI Bans
01:08:25 — Figure and OpenAI Breakup
01:11:01 — Schulman Leaves Anthropic, Joins OpenAI Ex-CTO’s Company
01:12:46 — Sutskever’s startup to fundraise at $20B valuation
01:14:53 — New AI Case Studies from Google and Microsoft
01:17:18 — Listener Questions
Altman, GPT-5, and the Future of AI
OpenAI CEO Sam Altman made some striking predictions about AI (with a mention of GPT-5) during a recent panel discussion. In one of his most direct statements yet about AI's trajectory, Altman expressed strong confidence that the next two years will bring even more dramatic advances than we've seen recently.
Altman emphasized that OpenAI knows how to improve its models significantly, with no obvious roadblocks ahead. Most notably, he suggested that the progress we'll see from February 2025 to February 2027 will feel more impressive than what we've witnessed over the previous two years.
Altman was particularly enthusiastic about AI's potential impact on scientific discovery. He predicted that within a few years, AI systems will be able to compress ten years of scientific progress into just one year, potentially accelerating breakthroughs in areas like climate change and disease treatment.
On Sunday, Altman also released an essay titled “Three Observations,” laying out three trends that he says point to dramatic changes ahead.
The first observation: the intelligence of AI models scales with the logarithm of resources used to train and run them.
The second observation: the price to use a given level of AI will fall by roughly 10 times every 12 months.
The third observation: the socioeconomic value of linearly increasing AI intelligence is super-exponential.
This suggests we'll continue to see exponentially increasing investment in AI development for the foreseeable future. He envisions a future where AI agents function as virtual co-workers, particularly in knowledge work, and suggests that by 2035, any individual should be able to marshal intellectual capacity equivalent to everyone alive in 2025.
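To make the second and third observations concrete, here is a minimal numeric sketch. It is our illustration, not Altman's math: the $200 starting price and the value-doubling function are assumptions chosen only to show the shape of the curves.

```python
# Illustrative sketch of Altman's second and third observations.
# Assumptions (ours, not from the essay): a $200/month starting price,
# and value that doubles per unit of capability as a stand-in for
# "super-exponential" returns on linearly increasing intelligence.

def price_for_capability(months: int, start_price: float = 200.0) -> float:
    """Price of a fixed level of AI capability, falling ~10x every 12 months."""
    return start_price * 0.1 ** (months / 12)

def value_of_intelligence(level: float) -> float:
    """Toy value function: each +1 in capability doubles economic value."""
    return 2.0 ** level

for year in range(4):
    price = price_for_capability(year * 12)
    value = value_of_intelligence(year)
    print(f"year {year}: same capability costs ${price:,.2f}/mo; "
          f"value of capability level {year} is {value:.0f}x baseline")
```

Run forward, the cost of a fixed capability level collapses toward zero within a few years while the value of each incremental gain compounds, which is the dynamic Altman argues will keep investment flowing.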
Anthropic Economic Index
A new study from Anthropic, based on analysis of real-world conversations with their AI assistant Claude, reveals fascinating patterns about which jobs and tasks are seeing the most AI adoption.
It’s called the Anthropic Economic Index. In essence, Anthropic organized Claude conversations by occupational task to determine which professions are using AI the most.
This initial research finds that AI usage is heavily concentrated in computer-related and technical writing tasks, with these fields accounting for nearly half of all AI interactions.
However, the technology's reach extends more broadly across the economy—about 36% of occupations are now using AI for at least a quarter of their associated tasks.
Perhaps most interestingly, AI isn't outright automating most jobs. Instead, the study found that 57% of AI usage involves augmenting and enhancing human capabilities, while 43% involves automation. This suggests AI is largely serving as a collaborative tool rather than a replacement for human workers.
The relationship between AI adoption and wages shows a surprising pattern. Usage peaks in mid-to-high wage occupations like computer programmers and data scientists, but drops off at both the highest and lowest ends of the wage spectrum. This likely reflects both the current limitations of AI capabilities and practical barriers to adoption.
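Anthropic's methodology maps anonymized Claude conversations onto occupational tasks (drawn from the U.S. Department of Labor's O*NET database) and then tallies the shares. A rough sketch of that final aggregation step might look like the following; the pre-labeled records below are placeholders standing in for Anthropic's actual privacy-preserving classification pipeline.

```python
from collections import Counter

# Hypothetical pre-labeled records: in the real study, a classifier assigns each
# anonymized conversation to an occupational task and flags whether the usage
# looks like augmentation (iterating with the human) or automation (full delegation).
conversations = [
    {"occupation": "Computer Programmers", "task": "Debug code", "mode": "augmentation"},
    {"occupation": "Technical Writers", "task": "Draft documentation", "mode": "automation"},
    {"occupation": "Data Scientists", "task": "Analyze datasets", "mode": "augmentation"},
    # ...millions more records in the actual dataset
]

total = len(conversations)
by_occupation = Counter(c["occupation"] for c in conversations)
by_mode = Counter(c["mode"] for c in conversations)

for occupation, count in by_occupation.most_common():
    print(f"{occupation}: {count / total:.0%} of conversations")
print(f"augmentation: {by_mode['augmentation'] / total:.0%}, "
      f"automation: {by_mode['automation'] / total:.0%}")
```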
OpenAI and Education
The California State University system (CSU) is making history with the largest deployment of ChatGPT ever attempted, bringing AI to more than half a million people across its 23 campuses.
Through a groundbreaking partnership with OpenAI, the university system will provide ChatGPT Edu—a version of ChatGPT specifically customized for educational institutions—to over 460,000 students and 63,000 staff and faculty members.
This initiative transforms CSU into the first AI-powered university system in the United States. The implementation includes several key components: faculty can use ChatGPT for curriculum development and create course-specific GPTs, while students get access to personalized tutoring and study guides.
The university is also launching a dedicated platform offering free AI training programs and certifications, along with apprenticeship programs connecting students to AI-driven industries.
The scale of this deployment is particularly noteworthy given ChatGPT's rapid growth to over 300 million weekly active users worldwide.
The initiative uses ChatGPT Edu, which launched in May 2024 and provides universities with access to OpenAI's latest models, enterprise-level security, and specialized pricing.
This episode is brought to you by our AI for Writers Summit:
Join us and learn how to build strategies that future-proof your career or content team, transform your storytelling, and enhance productivity without sacrificing creativity.
The Summit takes place virtually from 12:00pm - 5:00pm ET on Thursday, March 6. There is a free registration option, as well as paid ticket options that also give you on-demand access after the event.
To register, go to www.aiwriterssummit.com
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: The models are going to be commoditized. They're going to roughly have the same capabilities. And so brand preference is actually going to become very important for these AI models and the platforms. And I would think that they're very aggressively trying to solve for how do they become the preferred brand of the next generation of workers.
[00:00:18] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter, grow faster, by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:47] Paul Roetzer: Join us as we accelerate
[00:00:54] Paul Roetzer: Welcome to episode 135 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co [00:01:00] host, Mike Kaput. We are recording Monday, February 10th, that's about 11 a.m. Eastern time. We've already had some late breaking launches this morning, so, or at least a launch this morning that we had to remix the main topics like literally 20 minutes ago.
[00:01:16] Paul Roetzer: So, hot off the presses with some Anthropic economic research, which should be fascinating to talk about. And we had Sam Altman saying all kinds of things last week on his Tokyo tour. He was over in, I think, Japan for a series of events, maybe, but he did a bunch of interviews and said a whole bunch of stuff.
[00:01:34] Paul Roetzer: So we're going to get into a little bit about what Sam had to say. And then he dropped an article on us last night, while the Super Bowl was happening, if I'm not mistaken. It was like right around 4 p.m., but I don't know if it was Eastern time or Pacific time. So right before the Super Bowl, Sam dropped a new article on us.
[00:01:51] Paul Roetzer: So, lots to talk about, like big picture macro stuff, big week of AI safety, there's, I don't know, it was just a lot of, it was actually kind of like a [00:02:00] relatively slow week, Mike, I felt, like it wasn't a bunch of crazy news. Yeah, and then all of a sudden like Friday hit and things just started kind of adding up.
[00:02:08] Paul Roetzer: So, lots to get through. This episode is brought to us by the AI for Writers Summit. We've been talking about this, the Marketing AI Institute event. This is our third annual event. The first two years we've had over 4,000 attendees for this virtual summit. I think last year we had 90 countries represented in the attendee base, which was pretty incredible.
[00:02:29] Paul Roetzer: So, this is all about reimagining the futures of writing and creativity, really. So, it's an event we created a few years back to try and help writers figure out where AI was going and what it meant to their careers, whether you're an author, a copywriter, an ad creative, an editor, or, again, just a creative professional who, you know, focuses on storytelling through different mediums.
[00:02:52] Paul Roetzer: That's what this event is for. The beauty of this event is there is a free registration option. So it goes from noon to five Eastern on [00:03:00] Thursday, March 6th. Again, there's a free registration option, thanks to our sponsors, and you can go to aiwriterssummit.org. All right, Mike, aiwriterssummit.com. Yep, that is the URL.
[00:03:12] Paul Roetzer: You can learn more about it, or if you're on the Marketing AI Institute site, just click on Events. It is right there in the dropdown for events. So, we're going to talk a lot about sort of the state of AI for writers and creators. I actually swapped my opening keynote. I think last week I mentioned a different keynote I was planning, and then I sort of had some inspiration on a trip last week and decided I was going to do a state of AI for writers and creators and talk about model advancements, where they're going, and what it means to writers, storytellers, creators.
[00:03:42] Paul Roetzer: So I'm pretty excited, actually, about this talk. I think there's going to be a lot of new stuff to go over. We've got a session on AI copyright and IP, a prompting session, AI-powered research, which is what Mike's going to do, talking about all these new deep research tools and how they can be integrated into your [00:04:00] role.
[00:04:00] Paul Roetzer: We still have one more keynote we're going to announce, and then an Ask Me Anything session. So, aiwriterssummit.com, if you are a creator, a writer, an editor, or if anyone on your team is, it's a great event for them to join. And again, there's that free option. All right, so, Mike, take it away with Sam Altman and everything he had to say last week.
[00:04:22] Mike Kaput: Well, Sam made some pretty striking comments about AI and pretty explicitly mentioned GPT-5 during a recent panel discussion as part of, as you noted, his trips abroad. In one of the most direct statements yet about AI's trajectory, Altman expressed very strong confidence that the next two years will bring even more dramatic advancements than we've seen recently.
[00:04:50] Mike Kaput: He emphasized that OpenAI knows how to improve their models significantly, and they see no obvious roadblocks ahead. [00:05:00] Most notably, he suggested that the progress we'll see from February 2025 to February 2027 will feel more impressive than what we've witnessed over the last two years. That's pretty remarkable given the rapid advancement we've already seen.
[00:05:16] Mike Kaput: He was particularly enthusiastic in these discussions about AI's potential impact on scientific discovery. He predicted that within a few years, AI systems will be able to compress 10 years of scientific progress into just a year and potentially accelerate major breakthroughs in areas like climate change and disease treatment.
[00:05:39] Mike Kaput: Now, as part of this discussion, he came out and talked about GPT-5 briefly, saying, quote, How many people here at this panel feel smarter than GPT-4? And, you know, people kind of laughed, some hands went up, and then he said, How many of you still think you're going to be smarter than GPT-5? [00:06:00] Still some more laughter, but not nearly as many hands; in fact, not many people at all appeared to raise their hands during this.
[00:06:07] Mike Kaput: And he said, I don't think I'm going to be smarter than GPT-5, and I don't feel sad about it, because I think it just means that we'll be able to use it to do incredible things. Now, these kinds of comments were followed, Paul, like you mentioned, by this essay that Altman dropped on Sunday titled Three Observations. It lays out three key observations about what is coming.
[00:06:33] Mike Kaput: So he talks about basically the predictable yet astonishing pace of AI advancement. So, very briefly, the observations he outlines are, number one: the intelligence of AI models scales with the logarithm of resources used to train and run them. So, according to Altman, companies can spend virtually unlimited amounts of money and achieve continuous, predictable gains, which is a pattern that [00:07:00] holds true across many orders of magnitude.
[00:07:03] Mike Kaput: Observation number two: the price to use a given level of AI falls by roughly 10 times every 12 months. He notes that this rate of improvement far outpaces Moore's Law, which historically doubled computing power every 18 months. And observation number three: the socioeconomic value of linearly increasing AI intelligence is super-exponential, which suggests we'll continue to see exponentially increasing investment in AI development for the foreseeable future.
[00:07:37] Mike Kaput: He also mentions he envisions a future where AI agents function as virtual coworkers, especially in knowledge work, and even suggested that by 2035, any individual should be able to marshal intellectual capacity equivalent to everyone alive in 2025. All right, Paul, so I have to say, like, you [00:08:00] can love Sam Altman, you can hate him, you can honestly, if you want, think he is a complete con artist, as some people do, but he is making really, really bold predictions that we will literally know in the next 24 months if he is right or not. So, I don't think I'm naive, maybe I am, but it does seem strange to me that you would commit so fully to very specific predictions simply to lie and drum up investment.
[00:08:28] Mike Kaput: Now, I'm not saying people don't lie and drum up investment, but it seems like there's like easier, way less risky ways to build up the AI hype train if you're Altman and you want to go that route. Like what's going on here?
[00:08:41] Paul Roetzer: Yeah. I mean, certainly there's the people that immediately jumped to this as just hype.
[00:08:46] Paul Roetzer: He's not really saying anything more than he has said before, and he's just trying to raise this 40 billion. And certainly there's no denying that he is actively raising money, if they haven't already got the commitments to it. And, you know, I think that there [00:09:00] might be some component of that, but Sam's history isn't to do that.
[00:09:04] Paul Roetzer: Like, Sam's history is to lay out what he thinks the near-term future looks like, hope people listen, and go about building the future. And so anytime he's done this, like I often reference on this podcast the Moore's Law for Everything post from March 2021. And I remember at the time, like, I was trying to get people to listen, like, pay attention.
[00:09:27] Paul Roetzer: I'd share it on LinkedIn, you know, talk about it at business events, and it was just too soon. Like, people weren't ready. We hadn't had the ChatGPT moment yet, but Sam was laying out that this was coming. These things are gonna be able to think and understand and reason, like, his exact words, and people didn't believe it.
[00:09:43] Paul Roetzer: The average business community, I would say, didn't, didn't believe it. So I think it's helpful to take this actually at face value and assume Sam isn't just hyping. Like, he is laying out what he thinks to be true about the future, and we need to figure out what to [00:10:00] do with that. So I thought it'd be helpful to kind of break each of these observations down a little bit.
[00:10:04] Paul Roetzer: So this first one, about the intelligence of AI models scaling with the log of resources used to train them. This actually jibes with what Demis Hassabis just said last week. He did an interview, I think it was Big Technology, maybe. He said original scaling laws are working, but slowing down. So that's exactly what Sam's saying here.
[00:10:25] Paul Roetzer: He's saying that you can keep increasing the amount of computing power, training data, and other resources you put into building the model, and the intelligence performance grows, but it grows more and more slowly over time. So in other words, like, if you double, say we go from a 5 million training run to a 10, or a 10 to a 20, or a 50 to a 100 million training run.
[00:10:47] Paul Roetzer: The doubling of the resources doesn't equal a doubling of the intelligence, but you do get kind of these incremental gains. So this also syncs with what we were talking about in fall 2024, when the [00:11:00] media started latching on to the "scaling laws are hitting a wall" idea. I think this is kind of what was being referred to: we were seeing these things sort of taper off.
[00:11:09] Paul Roetzer: and what that means over time is that the gains start to flatten out. You eventually hit a point where these frontier model companies are going to have to decide is the next billion dollar, five billion dollar, ten billion dollar training run worth the gain that we're going to get from it. It also means that You're going to continue to continue to see these pushes in new reach research directions.
[00:11:34] Paul Roetzer: So where they're looking for efficiencies in the architecture and the algorithms themselves and the training methods. And this is where we've heard a lot about like, you know, these more efficient models with DeepSeq like we were talking about in the last couple episodes. So and again, like you go back.
[00:11:49] Paul Roetzer: A year ago, and Demis Hassabis was saying this exact thing, like we were going to keep pushing the frontier. We're going to keep building the bigger and bigger models because there's still [00:12:00] gains to be had there. And we don't know where the upper limit is. We don't know when You stop gaining enough to validate doing it, but we're also going to push on the lower end.
[00:12:10] Paul Roetzer: We're going to find more efficient ways to build these models and to train these models and to post train these models. So in essence, if you've been listening to the AI leaders for the last two years, what they have said is, was going to happen is what's happening, which is why Articles like this are, are helpful.
[00:12:27] Paul Roetzer: Like it usually plays out that they're generally, the other thing that, that this first observation brought to mind for me is what I've been saying recently on this podcast, which is at the end of the day, I think there's two to five frontier model companies. And when this all shakes out, which is in like two to three years, what I mean by that is there's two companies I think for sure keep building the biggest models that is OpenAI and Google.
[00:12:57] Paul Roetzer: The maybes. That maybe they keep playing [00:13:00] this game of billion dollar, five billion dollar, ten billion dollar training runs, CapEx of a hundred billion, two hundred billion, like, what it's going to take to do this. XAI, because I'm not sure Elon wants to lose to Sam in this, so I think that his ego will keep him in this game for another couple years.
[00:13:16] Paul Roetzer: Meta. I don't know, like, I feel like they got undercut hard by DeepSeek, and it probably bruised the egos a little bit, and made them, maybe made them question a little bit, like, their plans to try and compete with OpenAI and Google here, but I think they stay in the game for a while, and then Microsoft's a bit of a dark horse here, like, do they ever try and actually get into the Frontier game themselves, either through acquisition or through building their own team?
[00:13:42] Paul Roetzer: Major models because they have been relying on OpenAI's models to date and they obviously have a massive stake in OpenAI. So I think that they don't play in this game. So OpenAI and Google for sure. I think XAI and Meta are possible. The likely not in the Frontier model game in one to two years. I would put [00:14:00] Anthropic in that moat.
[00:14:02] Paul Roetzer: I would put Amazon there and Mistral I think is just eventually gonna fade to building nice, More efficient models. and then I guess you could throw like DeepSeq in there as probably like a likely not. I don't think they can compete at the high end. I think we, they were in this Goldilocks zone where they had enough GPUs to compete with OpenAI in their current model.
[00:14:24] Paul Roetzer: But they are not going to be able to compete in the future model due to export controls. So I think OpenAI, Google are the major players that two years from now have basically run away with, When it comes to frontier models and they're the only ones that are still really pursuing that true. I, again, I'm like, if it's legally possible from a regulatory standpoint, I think Google buying Anthropic is like the most obvious acquisition that could happen in 2025 because we'll talk a little bit more about what Anthropic's doing with their research and safety and stuff.
[00:14:57] Paul Roetzer: They're just such a perfect brand fit, I think, for where [00:15:00] Google's going with their models. and I can't imagine Dario going back to OpenAI, and I can't imagine working with Elon Musk at XAI, so it's like, they're only, if they want to stay in the frontier model game, I think they have to get acquired, and Google's the only one that actually makes sense.
[00:15:16] Paul Roetzer: Okay, so that was, that was one. Two, this one's really important for business people. The price to use a given level of AI falls by roughly 10 times every 12 months. That is an insane thing to actually try and process. So, in essence, the way to think about this is the cost of intelligence, the cost of having 01, 03 models on demand, is racing towards zero.
[00:15:41] Paul Roetzer: Like, it's the cost to deliver. So if we move forward a year from now, and let's say you're paying 200 a month for the O1 model from OpenAI, you're going to get the equivalent of that for basically, like, 2 a month. Like, that level of intelligence is going to cost next to [00:16:00] nothing. And so, when you try and imagine a world one to two years out where any business leader has the current most advanced level of technology available to them for next to nothing.
[00:16:13] Paul Roetzer: Like, the current best models will be open sourced 12 months from now. So you literally could have them for zero. That's a really bizarre world to try and imagine. And so, what Sam's saying here is like, Moore's law was about the computing power of these chips doubling every 18 months. And so in essence, like, the cost of the hardware comes down every 18 months, but it follows a relatively predictable pattern.
[00:16:41] Paul Roetzer: What he's saying here is the cost of intelligence is scaling way, way faster. Like it's becoming dramatically cheaper every 12 months. It's actually moving faster than Moore's law. And so what this means is you start true democratization of AI, that as the advanced AI becomes more affordable, [00:17:00] More people and organizations can afford to use it.
[00:17:02] Paul Roetzer: It lowers the barriers to entry that drives innovation across different industries. Innovation can kind of come from anywhere. This pushes towards one of the things I've said I'm, I'm most excited about with the future, which is a rise of entrepreneurship. I think AI native companies will come to dominate industries.
[00:17:20] Paul Roetzer: So again, take, take any industry you want. legal industry is certainly right for this, consulting industry, marketing agencies, HR, finance, wealth management, take your pick. in the next three to five years, I think AI native companies, like built AI from the ground up, are just gonna dominate almost every industry.
[00:17:43] Paul Roetzer: Like, there'll be some industries that figure this out and sort of evolve, but as the cost drops so steeply, It becomes so cheap for new players to enter the market that have access to this, like, PhD level intelligence in every aspect of running a business. So, [00:18:00] the companies that don't keep pace with the cost and efficiencies gained by these AI native companies just have no chance of competing in it.
[00:18:07] Paul Roetzer: And I see this every day myself, like, I mean, Mike, you, you kind of have a front row seat to this, and you and I talk about this stuff all the time, but, you As I'm thinking about building SmarterX, so again, like, Marketing Institute is our core business. I started in 2016, and then we sort of spun out SmarterX to, to tell the story of AI to all knowledge work, not just marketing.
[00:18:27] Paul Roetzer: And as I think about the building of SmarterX, and more specifically about the building of our AI Academy, I'm doing it, ground up, AI native. So like, every decision I make is like, what is a smarter way to do this than has traditionally been done? I have, I don't know, five to ten conversations a day with O1 in, in
[00:18:49] Paul Roetzer: ChatGPT.
[00:18:49] Paul Roetzer: I'm literally pushing like every decision, every thought about staffing structure, about pricing models, like everything I talk to O1 about. [00:19:00] And I can't even like, If you haven't done this as a business leader, it is really hard to convey the amount of value you get in minutes that would, I just wouldn't even honestly had access to.
[00:19:14] Paul Roetzer: And so sometimes it's things I've already decided on and I just go into O1 just to like vet my, my thought process and I'll talk to it just straight up like an advisor. Others, I'm just stuck. And it's like, I don't even know how to think about this. Like for example, like, and this is real stuff, like Customer success team.
[00:19:32] Paul Roetzer: So if you build an online education business that has tens of thousands of learners, what does a customer success team look like in that environment? I've never built that. I don't know the answer to that question. So I can sit there and talk to O1 about it. And in five to 10 minutes, basically have the knowledge of someone who spent 10 years probably building customer success.
[00:19:54] Paul Roetzer: Now I haven't done it. I'm the experience, but I now have that same knowledge base. That's a [00:20:00] really, really weird. thing to have the power to have on demand for next to no cost. So I think the impact on human labor, again, kind of on this number two umbrella, I think the impact on human labor becomes massive.
[00:20:15] Paul Roetzer: I think industries that have a talent gap like accounting, insurance, healthcare, where they can't hire enough people, they can't find enough professionals in those industries. I think you start to see, quote unquote, AI digital workers filling those gaps in the next two to three years. But by doing that, you're creating.
[00:20:31] Paul Roetzer: The replacements, too, for the professionals who are still there. And then we start to deal with job displacement, which then leads to observation number three, which is the socioeconomic value of linearly, linearly, linearly increasing AI intelligence is super exponential. This is the most like, Jargon y one of all, so we'll break this down real quick.
[00:20:52] Paul Roetzer: So what he means here is even modest gains in the AI system's capability, which he's calling linearly increasing intelligence, so modest gains in the [00:21:00] system, can generate outsized, disproportionately large socio economic benefits, thus super exponential. So, Because these incremental improvements are so valuable across different industries, investors see strong returns or potential massive value gains, so they just keep putting the money into it.
[00:21:20] Paul Roetzer: So like, why is SoftBank willing to put 40 billion in? Because it sees the entire I don't know, hundreds of millions of knowledge workers around the world as the total addressable market. And it's saying, 40 billion is nothing. We're talking about tens of trillions in value if we solve this. So, the investors see the gain potential, so they just keep going.
[00:21:40] Paul Roetzer: So, the research advances quickly, the deployment starts to accelerate. And so, the way to understand this super exponential, so in a, in a, in a linear growth, 1 becomes 2, 2 becomes 3, 3 becomes 4. Linear is literally just like this very predictable, same amount of value over a given period of time.
[00:21:58] Paul Roetzer: Exponential is [00:22:00] 1 becomes 10, 10 becomes 100, 100 becomes 1, 000, 1, 000 becomes 1, 000, 000. That's exponential. It's some given multiple of a value over a given period of time. So when you hear 10X something, you're literally talking about 10 times the value. the current amount. And imagine 10 times compounding every year, you start to see the massive impact that this can have.
[00:22:22] Paul Roetzer: So the ratios may differ, the time periods may differ, but you start to see this massive economic growth, you see job and industry transformation, what they're hoping for is massive social improvements. And so I'll kind of wind up my thoughts here, Mike, with a couple of other highlights that I think just drive this home.
[00:22:40] Paul Roetzer: I really recommend people read this article, and if you Think Sam is a hype man, set that aside for a moment and, and just read it as objectively as you can and think about some of these things. So, one, again, they're always sort of revising their AGI, what is AGI definition. So in this one, they say system [00:23:00] that can tackle increasingly complex problems at human level in many fields.
[00:23:04] Paul Roetzer: They equate AGI to a general purpose technology. So they talk about like electricity, the transistor, computer, the internet. so as the economic growth in front of us looks astonishing and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.
[00:23:22] Paul Roetzer: This is really important because this is the techno optimist future view that justifies all the risk. And all the dangers. So because this is possible, then whatever else comes from it, we will solve for because we want the abundant future. That is the mindset of Sam and other techno optimists. they talk about virtual co workers, so let's, and this one's a good example, let's imagine the case of a software engineering agent, which is an agent that they're actively trying to build, and they think these will eventually be capable of doing Most things, and this is a quote, most things a software engineer at a top company [00:24:00] with a few years of experience could do for tasks up to a couple days long.
[00:24:05] Paul Roetzer: So they don't have enough compute to do inference on something that might take a senior professional a year to do, but they envision building AI within the next 12 to 18 months. that could do something that might take Mike or I two, three days to do. So that's the kind of thing we're solving for here.
[00:24:23] Paul Roetzer: It will re they say it will require lots of human supervision and direction, and it'll be great at some things but surprisingly bad at others. Then this is the real important part. Still, this quote, imagine it as a real but relatively junior virtual coworker. Now imagine 1, 000 of them or 1 million of them.
[00:24:43] Paul Roetzer: Now imagine such agents in every field of knowledge work. So again, we don't have to build PhD level at every cognitive task. They want to build average human level, junior quality if it needs to be. But because they [00:25:00] can do it once, they can do it a thousand times. So instead of building a team of a thousand people, you can say, we have this task that needs to be done, or this job that needs to be done.
[00:25:11] Paul Roetzer: Let's hire three people, and they will oversee a thousand AI agents. They'll supervise them, they'll train them, they'll monitor them, that's the future. And this is actually, Jensen Wang said this exact thing in November of last year. He envisioned a future where there was tens of millions of AI agents working at NVIDIA.
[00:25:29] Paul Roetzer: They all see the same possibility. a couple other quick ones said the future will come at us in a way that is impossible to ignore. Long term changes in society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but the jobs will not look like they do today.
[00:25:46] Paul Roetzer: Now, interesting here, he never describes in, in detail the tangible detail of what this future looks like. He talks in these very broad strokes. I honestly think he, he hopes that [00:26:00] economists, philosophers, sociologists, science fiction writers, figure out what the actual future looks like. He's so set on building it, I don't think he actually Has the ability to envision what it really looks like, talks about the price of goods falling, right now the cost of intelligence, cost of energy can strain a lot of things, and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically.
[00:26:23] Paul Roetzer: I actually envision him, like this is like thinking out loud, where he's like, ah, I wonder what it could look like, ah, land. And I also envision him talking to, like, GPT 5, which I'm pretty sure, right, row four, and, like, imagining these details. so yeah, so I don't know, I think there's, there's a lot to this article, and I think dismissing it as hype to raise money is very, very nearsighted, and I would not, you know, fall into that trap because I think you miss the chance to actually think about the bigger picture if you do that.
[00:26:55] Mike Kaput: So somewhat related to this, there's a new study that we literally kind of [00:27:00] just stumbled on right before we started recording, from Anthropic, that reveals some fascinating patterns about which jobs and tasks might be seeing the most AI adoption and disruption. So they released this thing called the Anthropic Economic Index, and according to them, it, quote, provides first-of-its-kind data and analysis based on millions of anonymized conversations on Claude.ai, revealing the clearest picture yet of how AI is being incorporated into real-world tasks across the modern economy.
So basically what Anthropic did is they took a ton, literally millions and millions, of anonymized conversations between Claude and users, and they organized them by occupational tasks.
[00:27:51] Mike Kaput: So what were the people trying to accomplish with Claude? Not necessarily in their jobs, but what tasks were they trying to do that might [00:28:00] map to what we are doing in our professions? And then from there, they were trying to. Extrapolate, okay, well, what professions, what areas are going to be most possibly affected right now or are being affected right now by AI?
[00:28:14] Mike Kaput: So the initial research finds that AI usage is heavily concentrated in computer related and technical writing tasks. So these fields account for nearly half of all the AI interactions with Claude. However, it, this extends more broadly across the economy, they found that about 36 percent of occupations would be using AI for at least a quarter of their associated tasks by this methodology they're kind of using.
[00:28:43] Mike Kaput: Now, perhaps most interestingly, they don't really see AI completely automating many jobs. The study found that 57% Of AI usage involves augmenting and enhancing human capabilities while 43 percent involves automation. [00:29:00] Now the relationship between AI adoption and wages shows a surprising pattern here. So usage peaks in the mid to high wage occupation.
[00:29:10] Mike Kaput: So people are using it for things like computer programming and data science, but it drops off at both the highest and lowest ends of the wage spectrum. So this kind of. Reflects both probably the current limitations of AI capabilities, as well as some of the barriers we're seeing to adoption. So. Paul, this actually just takes like a really interesting approach to unpacking AI's impact on labor.
[00:29:35] Mike Kaput: They're using people's actual conversations with AI to determine what AI is being used for. What did you take away from that approach and that research from it?
[00:29:46] Paul Roetzer: Yeah, I was really happy to see the research. And as you mentioned, I mean, this, like, literally popped up 20 minutes before, and we'd made the decision to swap this in, and it actually so nicely transitions from the socioeconomic impact stuff we were just talking [00:30:00] about.
I don't honestly know how relevant this data is because, for one, I don't think most people outside of the kind of people who would listen to this show actually have a clue what Claude is. I don't know that its awareness or usage in the general public is much of note. I know there was, like, a recent thing that just came out of, like, top apps and top websites, and Anthropic is nowhere to be found.
So I don't think that they have very broad awareness or usage in the general public. Claude 3.5 is known as being a top development coding tool, so it makes sense that this would be skewed heavily toward technical users. And then it also has, I think, a bit of a following in the writer world. So it makes total sense that those are the two things.
[00:30:47] Paul Roetzer: I think that if you had this kind of data from ChatGPT, one, it would be extremely fascinating. And I hope OpenAI follows the lead on this and does something similar with their data. [00:31:00] Love to see it from Gemini as well. But I think ChatGPT, given the usage, which we'll talk about in one of the rapid fire items, you're going to get the most representative data of the general public.
[00:31:10] Paul Roetzer: Anthropic. actually in their tweet sharing this, which is how we saw it this morning, they said, one of their tweets was, like all analyses, ours comes with caveats. We can't be certain all these tasks were performed at work, or what people did with Claude's outputs. this might undercut augmentation.
[00:31:30] Paul Roetzer: And since Claude doesn't generate images, we're likely missing some important AI use cases. And if I'm not mistaken, like, Claude also isn't connected to the internet yet. Right?
[00:31:39] Mike Kaput: right.
[00:31:39] Paul Roetzer: So that's why I'm saying like Claude is not representative of Gen AI right now. Like it is, it seems like it's got a great underlying model that's really good at coding and really good at writing and some other things.
[00:31:51] Paul Roetzer: But it is, it is not a tool in the, in the sense of what Gemini and ChatGPT are currently doing. Where they're multimodal and connected to the internet, have [00:32:00] reasoning capabilities, things like that. So, yeah. Appreciate the effort. Like I, again, I think it's good research and I hope other, platforms with more awareness and usage, follow suit and start publishing it.
[00:32:12] Paul Roetzer: Cause I think it would be helpful.
[00:32:14] Mike Kaput: Yeah, not to mention, aside from the limitations in what it is tracking, you're not even touching on any type of combinatorial task or creative ways of using the tools, like you described with o1, right? That are reshaping how we're even doing work in some of those ways.
Like, this is going to probably be looking more like, oh, did you use Claude to refactor code? Did you use Claude to generate headline ideas? Which is all really helpful, but did you use Claude to reinvent the entire business strategy? Right, right. I'm sure some of that's in there, but I would bet that some of the more exciting transformative use cases kind of get lost in that.
[00:32:57] Paul Roetzer: Yeah, they don't have vision, they don't have video generation, they don't [00:33:00] have image generation, they don't have audio generation. Yeah, it's not multimodal. So again, great starting point, very valuable if other companies follow suit, but on its own, I wouldn't read too much into the data and assume the use cases they highlight are representative of the broader market.
[00:33:18] Mike Kaput: All right, because I haven't looked at it, but I guarantee you there's a headline out there that's like, X percent of computer programmers are now using Cloudera. Oh
[00:33:25] Paul Roetzer: yeah, it's going to be clickbait for the next like 72 hours. I'm sure.
[00:33:31] Mike Kaput: All right, our third big topic this week: the California State University system, CSU for short, is making some history with the largest deployment of ChatGPT ever attempted, bringing AI to more than half a million people across its 23 campuses.
[00:33:49] Mike Kaput: So this was announced on OpenAI's website. They have a partnership with OpenAI through which the university system will provide ChatGPT Edu, [00:34:00] which is a version of ChatGPT specifically customized for educational institutions, to over 460,000 students and 63,000 staff and faculty members. So this turns CSU into kind of the first real, like, AI-powered university system in the U.S. And the implementation includes a few key components. So, one, faculty can use ChatGPT for curriculum development and create course-specific GPTs, while students get access to personalized tutoring and study guides. The university is also launching a dedicated platform offering free AI training programs and certifications, along with apprenticeship programs connecting students to AI-driven industries.
[00:34:48] Mike Kaput: So, early research into how ChatGPT is actually impacting education shows that this could actually be really substantial. Like, Harvard researchers have found that AI-powered tutoring [00:35:00] doubled student engagement and improved problem solving. A Microsoft study indicates that individuals with AI skills are 70 percent more likely to be hired.
[00:35:10] Mike Kaput: So, like we mentioned, this uses ChatGPT Edu. That's not special to CSU; it is something that launched in May 2024 and provides universities with access to OpenAI's latest models, enterprise-level security, and specialized pricing. So if you are just hearing about this now, go check that out for sure if you're a higher ed institution.
[00:35:34] Mike Kaput: So Paul, this seems really promising, especially in its scale. What is there to like about this partnership between CSU and OpenAI? And what are some of the bigger picture benefits?
[00:35:48] Paul Roetzer: Yeah, I mean, it's, it's fantastic from a CSU perspective. Now, it's weird for me to say CSU because we're from Cleveland, Cleveland State University is CSU.
[00:35:56] Paul Roetzer: So the first time I read this, I was like, oh, awesome, CSU is [00:36:00] doing a massive program. I was like, oh, it's California State University. Hopefully our CSU also follows suit. So I think it's great from a university perspective to have a vision for this, not just infusion into the curriculum but into administration, you know, into the academics.
[00:36:18] Paul Roetzer: And to provide the training is so critical here of how to use these platforms, not just get the licenses and hand them out, but actually like make sure that they're being used in a responsible way. that's critical. I think as a parent, now again, my kids are 13 and about to be 12. they're in, still in grade school, but if I was looking at high schools, which we will be soon.
[00:36:43] Paul Roetzer: I would say if If we had two choices that were comparable, again, I, my, my belief is the kids should choose where they go. My parents gave me that choice as a, a high schooler, and I think my kids should have the same. But as a parent, [00:37:00] I would very aggressively look at how schools are doing, and if I saw one of the schools that.
[00:37:07] Paul Roetzer: My daughter was, say, considering, had a very aggressive program to integrate AI, drive literacy, drive usage, responsible usage, and one, outlawed it, or considered it plagiarism. And there's, like, a really, really good chance I'm going to, try and help her see the opportunity of, the more modernized school.
[00:37:28] Paul Roetzer: I guess I would say, same is true in college, I would push more heavily. Like, if I had a child right now, you know, junior, senior in high school, this would be in the top three of my list of students. How are they handling AI? what are they going to teach you over the next four years to prepare you for the reality of the workforce in four years from now?
[00:37:51] Paul Roetzer: And again, if it was a college that wanted me to be paying them 20, 000 to 50, 000 a year or more, and they weren't integrating AI and didn't have AI systems, I'm, I'm sorry, but [00:38:00] as a parent, like, Let's look at the next option, because you're, you're not going to be ready for the real world. So, I think that you're going to see a lot more of this.
[00:38:07] Paul Roetzer: I think the ChatGPT EDU program will explode in a very positive way, and I think this is a really important aspect when we go back to the models that are the winners in the end. There is a battle for the next generation of workers, and when the kids come out of school, they're either going to be loyal to Gemini or ChatGPT is my current, given who I think are going to be the major frontier models two years from now, I think there's a battle for mindshare and usage, from those students and I think whoever you come out of school Relying on is probably the model you stay loyal to, because as we talked about, the models are going to be commoditized.
[00:38:52] Paul Roetzer: They're going to roughly have the same capabilities, and so brand preference is actually going to become very important for these, these [00:39:00] AI models and the platforms. And so I think that Google and OpenAI would imagine, know that, and I would think that they're very aggressively trying to solve for how do they become the preferred brand of the next generation of workers.
[00:39:14] Mike Kaput: So really quickly, I'd be curious as to your perspective on, like, what kind of training actually needs to be in place here for this to work. Because they say they're providing it, but, like we've talked about tons of times, both in higher ed and outside of it, the people getting access to powerful AI being turned on need training to be able to use it effectively.
[00:39:39] Mike Kaput: Like what kinds of change management are they going to have to overcome? I mean, no offense, but some teachers are now notorious for hating ChatGPT and AI in general. Like, what does that look like to you?
[00:39:51] Paul Roetzer: That's, that's gonna be the painful part. You can sit here and lay out a vision for what this looks like, what the change management is going to need: building centers [00:40:00] of excellence for teachers so that they can share best practices.
They can, you know, you can build into your training, here's the personalized use cases that are going to be most relevant to each department, each teacher. So when you hand them the license, here's the three to five ways to use this in curriculum development, assessing students, grading, you know, homework, whatever it is, like, whatever the main ways they're going to use it are, and teach those as, like, the core. Maybe even give them pre-built GPTs for specific use cases.
[00:40:28] Paul Roetzer: Like that's best case scenario. Yeah. But as you highlighted, you're going to have a whole bunch of teachers, professors who want nothing to do with this. They, they just don't care. They, they are, you're not going to convince them this isn't cheating, and it's not short cutting critical thinking, and and that's gonna be the biggest problem, honestly, because, especially at the university level, when there's tenure involved, like, you, you can't force change at the university level, especially if they're state funded, like, it doesn't work like that.
[00:40:57] Paul Roetzer: They, they can't, it's not like we can bring in [00:41:00] Elon Musk and have him, you know, create Doge and like, just, you know, totally change universities in 30 days doesn't happen that way. So, I think that there's going to be some universities that solve for this and find out ways to move forward and bring the professors and teachers along.
[00:41:19] Paul Roetzer: And then there's going to be some that just really struggle against the friction of resistance to change that Is what humans tend to do, especially when you've been doing something for a really long time, a very specific way.
[00:41:34] Mike Kaput: Okay, let's dive into this week's Rapidfire. We've got a bunch of interesting topics on the docket this week.
[00:41:40] Mike Kaput: So, first up, Google has expanded the Gemini AI model family. Gemini 2.0 is widely available in three distinct variants. The release kind of marks a pretty significant step forward on the stuff we were talking about: more intelligence at lower costs. So this lineup [00:42:00] includes Gemini 2.0 Flash, which is now generally available for production use.
This is Google's kind of workhorse model for high-volume tasks. It has a massive 1 million token context window. And for developers seeking an even more cost-efficient option, Google has introduced 2.0 Flash-Lite, which maintains better performance than the previous 1.5 Flash version while keeping the same speed and cost structure.
The most powerful addition is Gemini 2.0 Pro Experimental. This is obviously an experimental model that Google claims shows superior performance in coding and handling complex prompts. It has an even larger 2 million token context window and can integrate with external tools like Google Search and code execution.
[00:42:48] Mike Kaput: It is available to developers in Google AI Studio, Vertex AI, and as well you can use it in Gemini Advanced if you have one of those accounts. Interestingly, as of recording [00:43:00] 2. 0 Pro Experimental sits in the number one spot across all performance categories on the popular chatbot arena leaderboard at lmarena.
[00:43:10] Mike Kaput: ai. Paul, what is there to Look forward to with the 2. 0 family of models with Gemini. I mean, we're getting, seems like new models every other day. Why are these ones so important?
[00:43:21] Paul Roetzer: Yeah. So if you're a Workspace customer, I actually have no idea which of these you have. So if you go into your Gemini account through Google Workspace, all you see is Gemini Advanced.
[00:43:34] Paul Roetzer: There is no dropdown. I don't know which model is currently used and I did not Google it to figure out which. I, I think it's still 1. 5 Pro, but I honestly do not know. So, first and foremost, as a Google Workspace customer, you still have Gemini Advanced. No idea what the underlying model is. If you have Gemini through your Gmail, like if you have your personal one, As you just highlighted, Mike, there are now six choices, and [00:44:00] because we gave crap to OpenAI about this, we, we have to, you know, be fair and do the same thing to, to Google.
[00:44:06] Paul Roetzer: I'm not actually sure who's, is worse right now. so, in my, I'm looking at my Gemini app right now. I have 2. 0 Flash for everyday tasks, plus more features. I have 2. 0 Flash Thinking Experimental, best for multi step reasoning. Isn't all reasoning multi step? anyway, 2. 0 Flash Thinking Experimental with apps for reasoning across YouTube, Maps, and Search.
[00:44:32] Paul Roetzer: I don't know why Flash Thinking wouldn't just have those apps, but we have a separate model for the apps. Then I have 2. 0 Pro Experimental. Best for complex tasks, which I would actually immediately think is reasoning. Like, I don't know why those are different. then I have 1. 5 Pro previous model and 1.
[00:44:52] Paul Roetzer: 5 Flash previous model. I have no idea when or why I would use The previous models that are still in my dropdown, [00:45:00] if the cost is the same to me, I'm paying my same 20 bucks a month. Why, why do I even need the 1. 5s? So we have not solved for the branding issue. apparently we have yet to achieve a model that is capable of solving this issue for these companies.
[00:45:14] Paul Roetzer: I don't honestly know. Like, I think the reasoning is the biggest thing, like the thinking, what they're calling the thinking is the, is the main thing here. Yeah. yeah. But you have to use your personal account to, to, to use it. The big question for me is when does 2. 0 Pro come out? Which I assume is like the multimodal leap that maybe all the reasoning's baked into it, and then I just need 2.
[00:45:38] Paul Roetzer: 0 Pro, and it has the apps, and it has the thinking, and I don't need to decide which model to use, but I don't, I actually don't know.
[00:45:46] Mike Kaput: So more on that, hopefully we can improve that part of the AI experience.
[00:45:50] Paul Roetzer: Yeah, and I think it's like, for all of our listeners, like, if you feel completely overwhelmed, honestly, like, I was sitting there last night, like, wait a [00:46:00] second, I'm paying the $200 a month on my personal account for OpenAI, but I also have the ChatGPT Team license that has o1.
And now o3-mini. What is, what am I getting for the $200? Like, I was literally sitting there thinking I was going to message you, Mike, to say, what's the difference again between what I'm getting? So if you are listening to this and you're like, I don't know what I'm supposed to do, I don't know which model to use, like, welcome to the club. And Mike and I do this for a living, and I get so confused by all these models and which one I should actually be using for things.
[00:46:34] Mike Kaput: Our next rapid fire topic concerns some major AI companies that are rolling out new safety measures this week. And this is kind of signaling a growing concern about controlling increasingly powerful AI systems from some of the companies building them.
[00:46:52] Mike Kaput: So in a series of announcements, Anthropic, Meta, and Google DeepMind all unveiled new frameworks for managing the [00:47:00] risks of advanced AI. Anthropic introduced what they're calling constitutional classifiers, a new defense system against AI jailbreaks, or attempts to bypass an AI's safety guardrails.
This system has apparently proven quite effective. Testing has shown it blocks over 95 percent of jailbreak attempts, while only increasing normal query refusal rates by less than half a percent. Anthropic is so confident in the system, they are offering a $20,000 bounty to anyone who can successfully break it. Meanwhile, Meta has taken an unusually strong stance on AI development, announcing they may completely halt the development of AI systems they deem too dangerous.
Their new Frontier AI Framework specifically identifies two risk categories: high-risk systems that could aid in cyber or biological attacks, and critical-risk systems that could lead to catastrophic outcomes that can't be [00:48:00] mitigated. Now, Google DeepMind, in turn, has updated its own safety framework with a particular focus on preventing what they call, quote, deceptive alignment, the risk of AI systems deliberately undermining human control.
[00:48:15] Mike Kaput: They're implementing new security protocols and deployment mitigations, especially for models that could accelerate AI development itself. Okay, that feels pretty weighty, Paul. Like the big question I have here is three of the leading AI companies make these same types of announcements at the same time.
[00:48:35] Mike Kaput: They're three very different companies. For instance, Meta is not exactly known as safety-conscious. Is it a coincidence that we're hearing from all of them about AI safety right now?
[00:48:47] Paul Roetzer: Yeah, I don't think so. I wrote about this in the Exec AI Insider newsletter. If people aren't familiar with that, every Sunday I do an editorial and a preview of what's coming on the podcast.
[00:48:58] Paul Roetzer: So we'll drop a link in [00:49:00] the show notes for that. And what I said was, in the totality of everything that happened last week, this was very unusual to me. The way we do the podcast, and I've said this before, is that throughout the week I drop a bunch of links. We use Zoom, so we keep an episode sandbox there: links, tweets, videos, courses, all this stuff.
[00:49:20] Paul Roetzer: And then Mike goes through and curates everything, starting on Friday, usually, and then throughout the weekend. So I sat down Saturday morning to write the editorial for my newsletter, and as I'm scanning through the 40 or so links, I count six or seven unrelated links, all tied to AI safety.
[00:49:42] Paul Roetzer: So we had European Union news, we had a gov.uk article we'll link to, we had these three major announcements, and there were a bunch of tweets around all of it. It's just one of those moments where you take a step back and go, huh, this is very unusual. Why would they all be doing this in the same week? And so, what [00:50:00] I said in the newsletter was, I don't think the timing is a coincidence at all.
[00:50:04] Paul Roetzer: I think they all see the reasoning models taking off on this exponential growth rate. They see the continued improvement of these frontier models that we talked about, you know, the forthcoming GPT-5 and what Sam was alluding to. And they all know that they're all going to be releasing things soon.
[00:50:23] Paul Roetzer: And so I think they're starting to try and get ahead of this. AI safety and alignment is going to become a much more mainstream topic as more people start accepting how disruptive this stuff is going to be to the economy and to jobs. So they're trying to get out ahead of it a little bit.
[00:50:43] Paul Roetzer: There's also this question of what remains uniquely human. We heard Sam struggling with that very question in his Three Observations essay. So I think the labs are all trying to get their ducks in a row. They're all trying to figure out: how do we know when it's [00:51:00] too smart to release into the world?
[00:51:02] Paul Roetzer: And so they're very aggressively looking at their own policies internally. The thing I will say, and this doesn't give us much peace of mind, is that no matter how much time they spend on these things, and how much PR they eventually put behind this frontier work and their AI safety work, what recent history shows us is that GPT-2
[00:51:26] Paul Roetzer: was considered too dangerous to release. So they did not release the full model originally. Google had their own internal capabilities very similar to ChatGPT, if not more powerful. They did not release them first because of concerns about risk and safety. OpenAI took the risk and did. OpenAI didn't want to show the thinking of the reasoning models.
[00:51:52] Paul Roetzer: DeepSeek did it. So then OpenAI followed suit within seven days and started showing more of the thinking behind the models. [00:52:00] Nobody was willing to put out an open-source model of the magnitude of Llama 3 until Meta did it. What I'm saying is, it only takes one player in the game to do the thing that's considered too risky by everyone else.
[00:52:15] Paul Roetzer: And then everybody has to follow suit. So just because Anthropic has some Level 4 threshold that they think carries end-of-humanity concerns, it takes only one other research lab to push past it. And then Anthropic either says, listen, we're just out, we're not going to go there, we'll bring our safety research to Google or somewhere else, but we're not playing in this game anymore. But you're not going to have OpenAI step out of the game, you're probably not going to have Meta step out, and certainly not xAI.
[00:52:45] Paul Roetzer: They're going to push the frontiers that someone else has made okay to push. That's what concerns me. We have seen this one-up game played out over the last five years, every single time, and I don't know that these [00:53:00] labs have the will to not do the thing that the other labs make the norm. And so I think this stuff is critical.
[00:53:10] Paul Roetzer: I think we have to have much more in-depth conversations across all industries, not just on, you know, our AI show kind of thing. We need other industries, other leaders, thinking about AI safety. I don't think the current administration is going to get involved. I don't think they're going to try and push this forward.
[00:53:31] Paul Roetzer: I think the states will, though. I think it's going to become a massive issue in state legislation. California, Texas, we've already seen it. The EU's done their thing. This is a major issue. AI safety is going to be 60 Minutes material by the end of this year; you're going to see those major episodes on AI safety and alignment, and you're going to start seeing a lot of mainstream media headlines around this. And this, I think, this and job loss, is what eventually triggers [00:54:00] society. You'll see backlash about AI.
[00:54:03] Paul Roetzer: I think it'll happen this year. I think there are going to be a couple of events this year that actually trigger true pushback on AI advancement, and this is going to be one of the two main areas where it happens.
[00:54:19] Mike Kaput: In another news story this week, ChatGPT has hit a pretty significant new milestone. The tool has reached 3.8 billion visits in January 2025 and is widening its lead over competitors in the AI chatbot space, according to some new analysis from Big Technology. ChatGPT's nearest rival in terms of sheer volume of visits, Microsoft Bing, logged
[00:54:43] Mike Kaput: 1.8 billion visits, so less than half of ChatGPT's traffic. Other major players trail even further behind: Gemini at 267 million visits, Perplexity at 99.5 million, Anthropic's Claude at 76.8 million. This [00:55:00] growth spurt, according to Big Technology, appears to have coincided with OpenAI's release of GPT-4o. They also integrated DALL-E's image generation directly into the chatbot and released enhanced models that show improved reasoning and have fewer hallucinations.
[00:55:16] Mike Kaput: So Big Technology says, quote, the traffic surge is a remarkable reversal for ChatGPT following a usage stagnation that lasted longer than a year. After reaching 1.9 billion visits in March 2023, ChatGPT didn't surpass that number until May 2024. So this is obviously pretty good timing for OpenAI. You know, DeepSeek has been gnawing away at its heels; during its peak publicity wave,
[00:55:46] Mike Kaput: it achieved one-third of ChatGPT's daily traffic almost overnight. And OpenAI is now clearly working to solidify its brand dominance in other ways, including running its first Super Bowl ad this weekend. [00:56:00] So Paul, it certainly seems like despite all the criticism ChatGPT gets when a new model comes out, and we've read thousands of threads along the lines of "ChatGPT is done for," it is positively surging in usage.
[00:56:14] Mike Kaput: So in context, how important is it to look at these user numbers? Do they tell us anything useful about the overall AI race?
[00:56:24] Paul Roetzer: Yeah, those are just huge numbers. Obviously you have a far-and-away leader. For a lot of average people, it's become like the Google of AI. You know, the way you just Google things, the way Google is synonymous with search,
[00:56:38] Paul Roetzer: I think ChatGPT is synonymous with generative AI for a lot of people. They don't even think about the other players in the game; they just sort of assume ChatGPT. SimilarWeb also put out their list of the top 10 iPhone apps in the US for January. Just for perspective here: DeepSeek's number one, still.
[00:56:59] Paul Roetzer: [00:57:00] ChatGPT was two. Threads? Who, who uses Threads? Like, I think you're forced
[00:57:04] Mike Kaput: to, I think you're forced to have an account is part of the reason that those, those numbers are the way they are. I don't know anyone that uses threads. I don't either.
[00:57:15] Paul Roetzer: and then Google Gemini is 8th on that list of top apps.
[00:57:19] Paul Roetzer: So, I mean, people are using Google Gemini too. By the way, a shout-out on the ads. We don't have this on the list to talk about, but the Super Bowl ads. ChatGPT's, my God. I was going to tweet this and I resisted, but whoever they have doing their brand naming, I think, also does their ads. It was really bad. It was just a miss for me. I don't know, maybe some people loved it; on Twitter it was not getting much love. So if you didn't see the ad, you can go look at it on their Twitter account. It's just a bunch of dots for like 50 seconds, forming all these things throughout human history, and then at the end it's just ChatGPT.
[00:57:54] Paul Roetzer: It doesn't even say ChatGPT. On the other side, Google Gemini did this amazing ad. Now, maybe it's because [00:58:00] I'm a sappy father with a 13-year-old daughter, but they did this amazing ad about a stay-at-home dad who raised his daughter and is now trying to rejoin the workforce, and he's doing a prep interview with Gemini.
[00:58:13] Paul Roetzer: You know, you had talked about this use case, Mike, preparing to give talks, and it's just so emotional. I had a hard time watching it, honestly. So Google nailed their ad for Gemini, I think. ChatGPT, you know, back to the drawing board. So anyway, yeah, they're just huge numbers.
[00:58:33] Paul Roetzer: It's really hard to overcome that kind of lead when you're talking about the market for the next few years here.
[00:58:42] Mike Kaput: ByteDance, the owner of TikTok, has demonstrated a new deepfake video system called OmniHuman-1 that might set a new standard for realism. Unlike many current deepfake tools that often leave obvious digital traces, OmniHuman-1 has [00:59:00] produced some really, really convincing results, based on the demo videos.
[00:59:04] Mike Kaput: With just a single reference image and some accompanying audio, the system can generate video clips, complete with adjustable aspect ratios and body proportions, that look hyper, hyper-real. This capability was highlighted by demos that included a fictional Taylor Swift performance, an imagined TED Talk, even a deepfaked Einstein lecture, all crafted from a training set of 19,000 hours of video.
[00:59:33] Paul Roetzer: Sure, they had permission for all 19,000 hours. Sure. I'm sure they did.
[00:59:39] Mike Kaput: So obviously, like any technology, this has flaws, but it's definitely one of the more stunning recent examples of deepfakes out there. We've talked, Paul, about how weird and dangerous hyper-realistic deepfakes are going to make the world, but it is quite another thing actually seeing this stuff.
[00:59:59] Mike Kaput: And [01:00:00] interestingly, this is from ByteDance, which created TikTok. I could certainly see some use cases for this appearing on their platform as well.
[01:00:09] Paul Roetzer: Yeah, this is a problem. This is one of those slow-moving trains we've been talking about for a year and a half now: these deepfake videos and the ability to make anything look real and sound real.
[01:00:21] Paul Roetzer: You know, I think we're going to make a lot of progress, good or bad, in 2025 on video: text-to-video, image-to-video, video-to-video, where you just continue on from scenes and make things happen that never happened, indistinguishable from reality. You're going to have companies like Google, with their Veo tool, that will put watermarks into it, so at least you can determine from the metadata whether it is or is not a fake.
[01:00:44] Paul Roetzer: You're going to have others who do not. And you know, the reward function of social media is engagement and things going viral. We just had Meta, a few weeks [01:01:00] ago, announce they were removing human reviewers of content, basically, except for the most violent stuff.
[01:01:06] Paul Roetzer: So you have the walls coming down at the social media channels, which is going to allow this stuff to spread even faster. And, yeah, I don't know. Again, I worry about this with my kids now, thinking about what they see online and whether they know it's real or not. I have these conversations with my 13- and 11-year-old already about how to know what's real and what's not online, and to make sure it's coming from a verified source.
[01:01:32] Paul Roetzer: And if you see something, look at the source. If it's not a source you know, make sure you go to the actual source, that person or that media company. Yeah, man, this is one I've always worried about, and I just think it's becoming real, where you'll be able to do this over 10, 20, 30 seconds, eventually minutes, of video that completely reinvents what happened.
[01:01:56] Paul Roetzer: Now, there are all kinds of cool things you could do with that. But they don't care about [01:02:00] IP. They don't care if they're depicting celebrities doing things. Sue them all you want; good luck. Yeah, it's a problem. I wish I had better thoughts on this one, but it honestly terrifies me.
[01:02:13] Mike Kaput: We're also starting to see the first enforcement phase of the European Union's landmark AI Act go into effect. As of Sunday, regulators there can now ban AI systems they deem to pose, quote, unacceptable risk to society. These regulations create four distinct risk categories for AI systems.
[01:02:35] Mike Kaput: The first, the highest risk level, is the unacceptable-risk systems that are now completely banned. These include things like AI that creates social scores based on behavior, AI that manipulates people's decisions subliminally, and a number of other types that can create real-world harm for people in society.
[01:02:56] Mike Kaput: The penalties for these kinds of violations under the EU's [01:03:00] AI Act are pretty steep: if you're found using these types of applications, you could face fines of up to 35 million euros or 7 percent of your annual revenue, whichever is higher. There are a few exceptions. Law enforcement can use certain biometric systems in public places, and some systems that detect emotions may be permitted if there is a legitimate medical or safety justification.
[01:03:24] Mike Kaput: So Paul, how are you viewing these regulations? The EU sometimes comes under criticism for regulating too much, but we also just talked a bunch about how the companies themselves are coming out and saying they're getting worried about some of these systems.
[01:03:38] Paul Roetzer: Yeah, the balance for them is how do you still allow for innovation to happen?
[01:03:42] Paul Roetzer: I mean, it's no secret that the EU lags dramatically behind the United States in terms of AI innovation and AI startups and things like that. Silicon Valley, and other pockets of the United States, are just so far ahead. That's [01:04:00] their biggest struggle. There are a lot of practical elements to the EU AI Act, and probably some things that are correct.
[01:04:11] Paul Roetzer: But there is this balance where you're probably thwarting innovation, making sure you're staying well behind America, I guess, when it comes to this stuff. And that's, I think, the frustration you see online. They're trying to find that balance, and I think only time will tell, but I assume this will have
[01:04:34] Paul Roetzer: that effect of creating more obstacles to innovation in the EU. That's the trade-off they're willing to make.
[01:04:41] Mike Kaput: Can you really quickly talk me through the AI literacy component of the EU's AI Act? There seems to be some confusion around it. It's not directly related to the extreme-risk systems, but it is a part of the overall legislation.
[01:04:55] Mike Kaput: Could you walk me through this?
[01:04:57] Paul Roetzer: I think what's happening is there's [01:05:00] probably a push where some people are trying to convince folks in the EU that this is all about being AI-first and driving AI adoption, and that's not what it's about. That, I think, is where the confusion mostly comes in: the idea that it's all about building AI-native companies and AI-emerging companies.
[01:05:17] Paul Roetzer: That you've got to drive AI literacy for those reasons. That doesn't seem to be their intention. So you can go read it; we'll put in the links. This is Article 4, which is titled AI Literacy, and then Recital 20, which is, I don't know, about 200 words of explanation. I will read you, very briefly, the summary of the article.
[01:05:36] Paul Roetzer: It says, the article states, that companies that create and use AI systems, and "use," I assume, means any business, someone using ChatGPT or whatever, must make sure their employees and anyone else who operates or uses these systems on their behalf are well educated about AI. That is a very broad
[01:05:55] Paul Roetzer: statement, obviously. This includes considering their technical [01:06:00] knowledge, experience, education, and training, as well as the context in which the AI systems will be used and the people or groups who will be using them. So that's the article. Then in Recital 20, they go into a little more detail, and I'll highlight a couple of excerpts here.
[01:06:17] Paul Roetzer: It says: in order to obtain the greatest benefits from AI systems while protecting fundamental rights, health, and safety, and to enable democratic control, AI literacy should equip providers, deployers, and affected persons with the necessary notions to make informed decisions regarding AI systems. Again, that's very broad, but they're basically saying that if you are selling AI systems, or if you are using AI systems, you need to understand much more than just how they work.
[01:06:45] Paul Roetzer: You need to understand: how are they trained? What are the implications of my decisions around using this? Am I going to use it in a way that's going to inject bias into our decisions? Things like that. They're really looking more at risk than anything, and at understanding these systems so you can [01:07:00] avoid that risk.
[01:07:01] Paul Roetzer: They talk about suitable ways in which you interpret the AI system's output. So: training people on what to do once it gives them something. If I use a reasoning model, how am I supposed to decide how to use what it produces? And the last thing I'll highlight is that they basically say, we still need to figure out what this article actually means, because they say the European AI Board should support the Commission to, quote, promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights, and obligations in relation to these systems, and that, in cooperation with relevant stakeholders, the Commission and the member states should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with development, operation, and use.
[01:07:48] Paul Roetzer: In other words: we haven't really figured out how we're going to execute all this, but we should get together and put some codes of conduct in place that make sure we follow best practices to do what the spirit of [01:08:00] this article is intended to do. So I don't know that that gives people any more clarity, but maybe the fact that there isn't clarity gives you some clarity, in a weird way. If you think you're supposed to understand what this means, I don't know that you actually can, per se, by reading the article and going deep into the recital.
[01:08:19] Paul Roetzer: Gotcha. Open to interpretation is kind of how I interpret this.
[01:08:25] Mike Kaput: So we've talked several times about the company Figure, which makes humanoid robots. They have announced that they're ending their high-profile partnership with OpenAI just months after the two companies joined forces. This decision comes after what Figure calls a, quote, major breakthrough
[01:08:42] Mike Kaput: in their in-house AI development. OpenAI is also an investor in Figure, and Figure has raised about $1.5 billion, achieving a valuation of $2.6 billion. Figure CEO Brett Adcock, who's quite active online, explains the split by [01:09:00] pointing to the challenges of integration. While OpenAI excels in many areas of AI, embodied AI, which brings AI to physical objects like robots, is not its primary focus, according to him.
[01:09:13] Mike Kaput: He says that the proper solution is building an end-to-end AI model specifically designed for their hardware, saying, quote, we can't outsource AI for the same reason we can't outsource our hardware. Now, the timing of this becomes more interesting given that OpenAI just filed a trademark application with the
[01:09:33] Mike Kaput: U.S. Patent and Trademark Office involving humanoid robots. So Paul, what is really going on here? Is this just PR spin that Figure is putting on the split? It certainly sounds like OpenAI could be a competitor. Or is there a legitimate business argument here?
[01:09:51] Paul Roetzer: OpenAI is building robots. I mean, reading Adcock's tweets was kind of rough, because it was [01:10:00] just trying to convince people that wasn't what was happening.
[01:10:04] Paul Roetzer: But OpenAI started trying to build robots eight years ago. This isn't new. They always thought embodiment of intelligence was critical. We just weren't there yet from a hardware perspective, and we weren't there from an intelligence perspective yet either. But the multimodal language models are the brains.
[01:10:21] Paul Roetzer: Give them vision, give them reasoning capability. This is what Elon Musk thinks the future of Tesla is. He thinks there are going to be billions of robots. NVIDIA thinks there are going to be billions of robots. So does Sam Altman. And that is the biggest addressable market of all. If you think knowledge work is an addressable market, wait till you see the size of the robot market.
[01:10:42] Paul Roetzer: So if you want to justify a $300 billion valuation for OpenAI, talk about how many billions of robots you're going to sell seven years from now. They can print the money. So they can say whatever they want; OpenAI is going to try and build robots, and that is why this deal fell apart.
[01:11:01] Mike Kaput: In some other OpenAI-related news, OpenAI co-founder John Schulman has left Anthropic, where he landed after leaving OpenAI.
[01:11:11] Mike Kaput: He left Anthropic after just five months, reportedly to join former OpenAI CTO Mira Murati's new startup venture. Schulman had originally departed OpenAI last August after nearly nine years with the company. When he first left, he explained that the move was driven by his desire to focus more deeply on AI alignment
[01:11:33] Mike Kaput: at Anthropic and to return to hands-on technical work. Now, Fortune has reported that Schulman is joining Murati's secretive new company, which has been quietly taking shape since she left OpenAI in September. Details about the company and the venture remain pretty scarce. It seems she's also attracted a former OpenAI supercomputing team researcher, Christian [01:12:00] Gibson, and as of last October they were reportedly in talks to raise over $100 million in funding.
[01:12:05] Mike Kaput: Overall, Paul, this seems like a win for Murati's company, but any guesses as to why Schulman left Anthropic so soon after joining?
[01:12:16] Paul Roetzer: Yeah, it'll be fascinating to see what they end up building. Obviously, what she's pursuing is very enticing to him, and I'm sure there's a lot of equity opportunity there that maybe he didn't have at Anthropic. But that's the thing I'm really anxious to see: just what is she building?
[01:12:32] Paul Roetzer: She's not going to build another frontier model company. We've established that; I think it's too late to enter that game. So I'll be intrigued to see what thread she pulls.
[01:12:46] Mike Kaput: Another OpenAI alum, Ilya Sutskever: his Safe Superintelligence startup is in talks to raise funding at a staggering $20 billion valuation. This represents a fourfold [01:13:00] increase from the company's $5 billion valuation just a handful of months ago in September. This despite the fact they have not yet generated any revenue.
[01:13:09] Mike Kaput: The company has raised a billion dollars to date from prominent firms like Sequoia Capital and Andreessen Horowitz. While the size of the new funding round has not been disclosed, the valuation suggests it would be substantial. And while little is known about Safe Superintelligence's actual work or technology, the company's name and Sutskever's background suggest they are trying to develop superintelligence that's both powerful and controllable.
[01:13:39] Mike Kaput: Now, Paul, it seems like it's a smart bet to go with whatever Ilya's doing; we've talked about this before. However, this is still a huge bet. Do we have any idea what this company actually does or is going to do? Because if you're an investor evaluating this opportunity, outside of Ilya, what details do you [01:14:00] actually have?
[01:14:01] Paul Roetzer: Yeah, I don't know.
[01:14:02] Paul Roetzer: I mean, not only do they not have any revenue, they have no plans to have revenue, no plans for products, no plans to make money. They have said, we are on a straight shot to superintelligence, artificial superintelligence, which is smarter than the smartest humans at every cognitive task. That's their pursuit.
[01:14:18] Paul Roetzer: The only thing you have to go on is that he was leading the team that built Strawberry, which became the reasoning models at OpenAI. I think he's probably the one internally who figured out the test-time compute scaling law, basically. And you have to assume that's what he left to go pursue. The question is, can he get there faster than OpenAI?
[01:14:40] Paul Roetzer: My guess is he thinks he can, and he's proven before that he's one of the top, if not the top, AI researchers in the world. And so you're going to have people willing to make those bets.
[01:14:53] Mike Kaput: We alluded to this a little bit before, but both Google and Microsoft keep running lists of [01:15:00] hundreds of case studies and use cases detailing how customers of their different AI solutions use those products to achieve real-world results.
[01:15:08] Mike Kaput: We've even reported on some of these in the past. Now both companies are adding to those databases. Google has released a new series of case studies on how 50 different businesses, from across all 50 states in the U.S., are using AI in Google Workspace. This includes stories about how businesses in just about any industry you can think of are using AI in Docs, Drive, Gmail, Meet, et cetera.
[01:15:33] Mike Kaput: These appear to, maybe intentionally, focus on smaller local businesses using Gemini in Workspace to do everything from writing social posts, to drafting emails, to tracking inventory. Microsoft has also added more than 50 new customer stories to its huge list of 300-plus AI transformation stories. Paul, we'll link to both of these in the show notes.
[01:15:57] Mike Kaput: It's awesome; everyone should go check [01:16:00] all of these out for inspiration and examples. One thing that jumped out at me: Google's new case studies definitely seem to be focused on normal, accessible, small local businesses using AI, not huge tech firms. Do you think that's an intentional marketing choice?
[01:16:18] Paul Roetzer: I would imagine so. In the United States, 99.7 percent of all businesses are small businesses. Granted, about half of employees work for big companies, but in terms of the number of businesses,
[01:16:28] Paul Roetzer: there are something like 24 million small to mid-sized businesses in the U.S. So I would imagine it's intentional, because it's a massive market, and it's the market that's probably struggling the most to really see the opportunity and the use cases.
[01:16:41] Paul Roetzer: So trying to personalize it to all these different businesses makes a ton of sense. And we know people love use cases, Mike. Anytime we put links to this stuff in, they're always the top links in the newsletters. People want the tangible things; it's like, all right, maybe I'll find some inspiration by seeing a company like mine and hearing what they do.
[01:16:59] Paul Roetzer: [01:17:00] So this has always been our focus. Since the day we started Marketing AI Institute back in 2016, it was about trying to connect the dots on the use cases. Because that's how you make it tangible to someone: when they see them. It's like seeing yourself in the mirror. Okay, I get it. That's how I can use these models myself.
[01:17:18] Mike Kaput: All right, our last topic for this week. We're going to end with a new segment that we're hoping to do each and every week; we did this last week as well: listener questions. We get a ton of questions, and we'd encourage you, if you have a question about AI, to reach out to us through the website, marketingaiinstitute.com.
[01:17:35] Mike Kaput: Click contact us, and we'll try to answer a bunch of interesting user and audience questions. So this week's question, Paul, is: I'm a marketer who wants to level up my career with AI. What is the path I should take? And I want to quickly add some context as to why I picked this one. It sounds like it's easy to answer.
[01:17:55] Mike Kaput: We've got tons of resources that we've literally developed over nearly a [01:18:00] decade to answer this. But the reason I picked it is that people still ask me all the time: should I go get a certain degree, a certain certificate? How should I get started here? There are so many options. I think people might be a little more lost than we sometimes realize.
[01:18:17] Paul Roetzer: Yeah, you know, I think about this all the time. My Intro to AI class, which I'm teaching tomorrow, Tuesday, is like the 44th or 45th one. I started teaching Intro to AI in November of 2021; it's a free monthly class for people on Zoom. If you happen to be listening to this before noon Eastern time, we'll drop the link to Intro to AI just in case, but we do it every month, so you can register.
[01:18:40] Paul Roetzer: I think we've had 27,000 people go through that over that time, and we get an average of probably 80 to 100 questions every time I teach it. So I'm not exaggerating when I say we've probably seen tens of thousands of questions. We see this question all the time, and then when Mike and I are out doing talks, we hear it [01:19:00] firsthand all the time.
[01:19:02] Paul Roetzer: And what I tell people, whether you're a marketer or any knowledge worker, is that you have to personalize your learning journey, and that happens in a couple of ways. One is, how do you learn best? Do you like taking online courses? Do you like reading books? Do you like listening to podcasts?
[01:19:18] Paul Roetzer: Do you like watching videos? What is your best learning vehicle? Then find the content and the experts who can help you in that way. For me, I used to read a ton of books. My whole career, that was how I learned everything. I used to just go to the bookstore and buy five or six books at a time.
[01:19:38] Paul Roetzer: Now it's a lot of podcasts, a lot of passive listening when I'm in the car or at the gym; I'm just always listening to different perspectives on things. So the first way to think about it is how you learn best. And then, I actually shared a learning journey framework with Mike literally last Friday that we're probably going to do more with, but this is a rough [01:20:00] draft.
[01:20:01] Paul Roetzer: Think about it like this: step one is curiosity. You are beginning to explore, but you're not really sure exactly where to go. Maybe that's how you found this podcast, and this is your first step. The second is understanding. This is the fundamentals of AI, and that's what our Intro to AI class is all about; it's free.
[01:20:19] Paul Roetzer: Here are the fundamentals. It's also the certificate I'm building for the AI Academy we're going to relaunch in the spring: there's an AI fundamentals certificate specifically built for this purpose, for the people who just want this fundamental AI 101 knowledge. Experimentation is the next step.
[01:20:36] Paul Roetzer: That means you've just got to get in and use ChatGPT, even if it's not allowed at work. ChatGPT, Gemini, whatever it is, just play around with these tools. Use it to plan a trip. Use it to build a coaching lesson for your kid. Whatever it is, use the tools in your life and figure it out. Fourth is integration.
[01:20:53] Paul Roetzer: You're now building it into your workflows and processes. You have found your three to [01:21:00] five to seven core uses, and every day you are using AI in that process. For Mike and me, preparing for this podcast every week, AI is infused into that workflow. And then there's transformation.
[01:21:11] Paul Roetzer: That's where I have reinvented how I do my job and how I go through my personal life, because AI is so fundamental to what I do. So whether you're a marketer, an attorney, a wealth manager, an entrepreneur, or a CEO, whatever you are: personalize it based on how you learn best, find the resources to do that, and then personalize it based on where you are in that journey.
[01:21:35] Paul Roetzer: Now, that is what I'm trying to do with our AI Academy moving forward: create a platform that allows people to follow these personalized journeys. But that's how I think about learning, all of it working toward this path to a high level of confidence and proficiency in AI.
[01:21:53] Mike Kaput: Awesome. I love that. That's a great kind of positive, clear note to end on after a week full of pretty [01:22:00] daunting AI news and developments. Paul, thank you as always for breaking everything down for us.
[01:22:07] Paul Roetzer: Thanks, Mike. And we appreciate everyone being with us again this week. We'll be back next week with a regularly scheduled episode.
[01:22:14] Paul Roetzer: Thanks for listening to the AI Show. Visit marketingaiinstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.
[01:22:36] Paul Roetzer: Until next time, stay curious and explore AI.