OpenAI unveils an ambitious Economic Blueprint proposing $175 billion in AI investment opportunities—a plan that could redefine America's tech landscape.
Plus, Google makes waves by integrating Gemini across Workspace, Microsoft introduces revolutionary AI pricing, and Sam Altman announces OpenAI's next-gen o3-mini model.
Mike and Paul break down these major shifts, along with Apple's AI challenges, TikTok's uncertain future, and the most significant AI developments of last week.
Listen or watch below—and see below for show notes and the transcript.
Listen Now
Watch the Video
Timestamps
00:06:38 — OpenAI Releases Its Economic Blueprint
- OpenAI’s Economic Blueprint - OpenAI
- OpenAI presents its preferred version of AI regulation in a new ‘blueprint’ - TechCrunch
00:19:27 — OpenAI “Super-Agent” Rumors + o3 Mini Release Date
00:30:19 — Google Is Giving Away AI Capabilities to Workspace Customers (And Microsoft Is Changing Pricing)
- Google Workspace enables the future of AI-powered work for every business - Google Workspace Blog
- Microsoft, Google Roll Out New AI Pricing for Businesses - The Information
- Microsoft bundles Office AI features into Microsoft 365 and raises prices - The Verge
- Timo Springer X Status
00:40:03 — Google Releases New Research on the Potential Successor to Transformers
00:44:10 — Google Releases Factuality Benchmark for LLMs
- FACTS Grounding: A new benchmark for evaluating the factuality of large language models - Google Deepmind
- FACTS Leaderboard
00:47:48 — Apple Intelligence Falls Flat
- Apple is pausing notification summaries for news in the latest iOS 18.3 beta - The Verge
- Paul Roetzer LinkedIn Post
00:52:56 — TikTok Shutdown Drama
- TikTok Ban Live Updates: Trump Says ‘TikTok Is Back’ In Victory Rally As App Restores Access - Forbes
- Perplexity AI makes a bid to merge with TikTok U.S. - CNBC
00:58:42 — US Patent and Trademark Office Releases Its AI Strategy
01:01:54 — Meta AI Copyright Lawsuit
- Inside Meta’s race to beat OpenAI: ‘We need to learn how to build frontier and win this race’ - The Verge
- Jason Kint X Status
- Jason Kint X Status
01:05:04 — Benchmarking the Energy Costs of Large Language Models
01:09:00 — NotebookLM Has to Do “Friendliness” Tuning on Its AI Podcast Hosts
01:10:48 — AI Funding and Product Updates to Watch
- Synthesia Raises $180M Round
- Cursor Raises $105M
- Andreessen Horowitz Leads Series A Round in Company Developing AI for Mental Health
- ChatGPT Gets Tasks…
- …And Custom Instructions
- DeepSeek R1
- Microsoft Introduces Copilot Chat
- Adobe Releases AI Tool That Can Edit 10,000 Images in One Click
- Luma Releases Ray 2 Video Generation Model
- Runway Releases Frames
Summary
OpenAI Releases Its Economic Blueprint
OpenAI just released what it’s calling its “Economic Blueprint,” a policy proposal for how the US should develop and regulate AI.
The blueprint makes a bold claim that there is approximately $175 billion in global funds waiting to be invested in AI projects—and that if the US doesn’t attract these funds, China will.
To prevent this, OpenAI proposes a comprehensive national strategy that includes developing AI economic zones, creating research labs aligned with local industries, and building what they call a "National AI Infrastructure Highway," a network of power and communication grids specifically designed to support AI development.
OpenAI also recommends that the federal government, in consultation with industry, should take the lead in developing “alternatives” to the “growing patchwork of state and international regulations that risk hindering American competitiveness.”
The blueprint also wades into controversial territory around copyright and AI training data: OpenAI argues that AI developers should be able to use "publicly available information," including copyrighted content, to develop their models.
Google Is Giving Away AI Capabilities to Workspace Customers (And Microsoft Is Changing Pricing)
Two of AI’s biggest players just made big changes to their pricing strategies, with both Google and Microsoft revamping how they package and charge for their AI products.
Google announced that it is basically giving away Gemini to Business and Enterprise customers, adding it by default to all Google Workspace business plans.
The catch? It comes with a price increase. Previously, a Workspace Business Standard plan with the Gemini add-on cost $32 per user, per month. Now, Gemini is included and the Standard plan costs $14 per user, per month, a $2 per month increase over the previous plan without the add-on.
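To make the before-and-after math easier to see, here's a quick sketch. The $12 previous Business Standard base price is implied by the stated $2 increase, and the 25-person team is a made-up example, not anything from Google's announcement.

```python
# Rough sketch of the Workspace pricing change described above.
# The $12 old base price is inferred from the "$2 increase"; team size is hypothetical.
OLD_STANDARD = 12              # $/user/month, previous Business Standard plan
OLD_GEMINI_ADDON = 20          # $/user/month, previous Gemini add-on
NEW_STANDARD_WITH_GEMINI = 14  # $/user/month, new bundled price

team_size = 25
old_with_gemini = team_size * (OLD_STANDARD + OLD_GEMINI_ADDON) * 12  # $9,600/year
new_with_gemini = team_size * NEW_STANDARD_WITH_GEMINI * 12           # $4,200/year
old_without_gemini = team_size * OLD_STANDARD * 12                    # $3,600/year

print(f"Old plan + Gemini add-on:   ${old_with_gemini:,}/year")
print(f"New plan (Gemini included): ${new_with_gemini:,}/year")
print(f"Old plan without Gemini:    ${old_without_gemini:,}/year")
```

In other words, teams that were already paying for the add-on see a large drop, while teams that never bought it pay a little more.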
Microsoft, while keeping its premium Microsoft 365 Copilot license at $30 per user per month, is introducing new consumption-based pricing for certain AI “agent” features that can “automate workplace processes,” according to The Information.
They write: “Under the new consumption pricing, one message within 365 Copilot Chat costs roughly one cent, while messages that require the chatbot to create a lengthy answer using generative AI cost two cents, and messages that require the chatbot to draw on other data from other applications cost 30 cents.”
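For a rough sense of what that consumption model could mean in practice, here's a small sketch using only the per-message rates quoted by The Information. The monthly message volumes are invented for illustration; actual metering details aren't spelled out in the report.

```python
# Back-of-the-envelope estimate from the quoted Copilot Chat rates.
# Message volumes below are hypothetical examples, not Microsoft figures.
RATE_SIMPLE = 0.01      # $ per basic Copilot Chat message
RATE_GENERATIVE = 0.02  # $ per message producing a lengthy generated answer
RATE_GROUNDED = 0.30    # $ per message drawing on data from other applications

def monthly_bill(simple: int, generative: int, grounded: int) -> float:
    return simple * RATE_SIMPLE + generative * RATE_GENERATIVE + grounded * RATE_GROUNDED

# Hypothetical user: 300 simple, 200 generative, 50 grounded messages per month
print(f"${monthly_bill(300, 200, 50):.2f} per user per month")  # $22.00
```

The point of the sketch is less the exact total than how quickly the 30-cent "grounded" messages dominate the bill, which is exactly the budgeting headache Mike and Paul discuss later in the episode.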
“Super-Agent” Rumors + o3 Mini Release Date
Major developments are brewing at the major AI labs, with multiple signals pointing to some big upcoming announcements, possibly from OpenAI.
According to a breaking report from Axios:
“Architects of the leading generative AI models are abuzz that a top company, possibly OpenAI, in coming weeks will announce a next-level breakthrough that unleashes Ph.D.-level super-agents to do complex human tasks.”
Axios goes on to say: “The expected advancements help explain why Meta's Mark Zuckerberg and others have talked publicly about AI replacing mid-level software engineers and other human jobs this year.”
Right now, Axios is hedging its bets on if this is from OpenAI or another lab.
At the same time, however, another development is confirmed by Altman himself: He posted on X that the company is finalizing the o3-mini model for release in approximately two weeks.
Altman has noted that while this model isn't as capable as their o1 pro version, it's significantly faster—and importantly, will launch simultaneously on both their API and ChatGPT platform. Based on his comments, it also sounds like o3-mini will be accessible by ChatGPT Plus users in some fashion.
This episode is brought to you by our AI Mastery Membership:
This 12-month membership gives you access to all the education, insights, and answers you need to master AI for your company and career. To learn more about the membership, go to www.smarterx.ai/ai-mastery.
As a special thank you to our podcast audience, you can use the code POD150 to save $150 on a membership.
Today’s episode is also brought to you by Marketing AI Institute’s AI for Writers Summit, happening virtually on Thursday, March 6 from 12pm - 5pm Eastern Time. Learn to craft compelling stories faster, boost your productivity, and build a sustainable writing strategy for the years ahead. Choose between free live access or premium tickets with on-demand replay. Don't miss this opportunity to transform your writing. Register now at aiwritersummit.com
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: We are entering a very different phase in American business and innovation, and the heads of the AI companies are in the first row. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host.
[00:00:23] Paul Roetzer: Each week, I'm joined by my co host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI news.
[00:00:46] Paul Roetzer: Welcome to episode 131 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co host, Mike Kaput. last week on episode 130, we started off saying it was probably going to be a hectic week because last [00:01:00] Monday morning had started off with a bang with a bunch of stuff and, we were correct.
[00:01:04] Paul Roetzer: It was a very busy week in AI. Mike and I were cutting rapid fire items up until like 10 minutes ago, because it was just too much to get through. So. It is an action packed, week. It's, we are recording this on January 20th, Monday morning. so, I know this week's gonna already be busy because we're gonna have some executive orders probably repealed from the previous administration and, on AI.
[00:01:33] Paul Roetzer: we're gonna have a whole new day of how AI is approached in the United States, at least. So I expect a lot happening again this week, Mike. Yes, sir. We'll be keeping up with plenty to talk about on Episode 132 next week. All right. But for now, we are on Episode 131. This week's episode is brought to us by the AI Mastery Membership Program.
[00:01:55] Paul Roetzer: We've been talking a lot about this lately. I have mentioned that some changes were coming [00:02:00] to the program. I decided to Last Friday that we're gonna just go ahead and announce those changes this week, at least preview the changes this week. So what we're gonna do is, as part of the Mastery program, 12 month membership program with exclusive content and experiences.
[00:02:17] Paul Roetzer: one of the key components of it is every month Mike and I do an exclusive, session for our members. So we do a generative AI mastery series that Mike runs where he demos a bunch of technology, we do an AI Trends Briefing and we do an Ask Me Anything session. So each of those happens once a quarter.
[00:02:36] Paul Roetzer: This Friday is our AI Trends Briefing, where we'll go through kind of the last three months and what are the main things. We usually count down from 10 is kind of the format of it. And so what we're going to do this Friday is we're actually going to open that session up to anyone that wants to attend.
[00:02:52] Paul Roetzer: So this is usually members only. We're going to make the quarterly AI Trends Briefing for Q1 open to the public. And part of [00:03:00] the reason we're doing that is to let people experience it. But the, the bigger reason is I'm going to, lay out our roadmap for, AI Academy, for what we're going to do with our Academy.
[00:03:11] Paul Roetzer: And also introduce a new initiative. That's designed to dramatically accelerate AI education worldwide. So, anyone who listens to us, regularly has heard me say our kind of North Star is to accelerate AI literacy for all. And so that's what I've been working on is, is sort of a, a new project that will enable us to do that, hopefully in partnership with other organizations and associations.
[00:03:33] Paul Roetzer: And so I'm going to explain that vision on Friday and share our near term roadmap for what we're going to be doing as part of that project. So You can go to smarterx.ai/ai-mastery, and you can register for that free session on Friday. We'll put that link in the show notes as well. So, again, that's Friday at, noon Eastern time.
[00:03:57] Paul Roetzer: We'll go through the trends briefing and I'll kick it off with, like, ten minutes [00:04:00] about the vision for this, AI literacy program that we're introducing. So, again, smarterx.ai/aimastery. Thank you. And then, as we've been sharing recently, you can use POD150 if you want to join the AI Mastery program, and that'll get you 150 off that annual membership.
[00:04:18] Paul Roetzer: this episode is also brought to us by the AI for Writers Summit. We've been talking about this on the last couple episodes. This is our third annual virtual summit. This is going to be happening March 6th, Thursday, March 6th, from noon to 5 p. m. Eastern time. we had over a thousand people register last week.
[00:04:35] Paul Roetzer: We just, you know, started promoting it really, last week and over a thousand people registered. We had 4, 500 plus in 2024, so we're expecting similar turnout this year. We will be posting the 2025 agenda in the coming days, I believe. We just kind of finalized that last week. So that's going to be coming soon.
[00:04:55] Paul Roetzer: You can go to AIWriterSummit.com. Again, that's [00:05:00] AIWriterSummit.com. You can also find information about that directly from the Marketing AI Institute site under the Events tab. And then one final reminder, we have our 6th annual MAICON event, Marketing AI Conference, is going to be back in Cleveland October 14th
[00:05:15] Paul Roetzer: to the 16th. You can go to MAICON.ai, M-A-I-C-O-N dot AI. The key here is we are open for speaker applications. So if you want to speak or if you know someone that would be a great speaker for us to have at MAICON 2025, definitely check that out. Again, go to MAICON.ai and there is a Submit your speaker application button right there on the homepage.
[00:05:40] Paul Roetzer: Those are open till February 28th, it looks like we're accepting applications. So get those in early. We review them on a rolling basis, so as they come in, we actually, look at those applications. and then if someone's a great fit, we don't wait until, you know, March to let them know. We'll actually reach out to people sometimes in advance.
[00:05:59] Paul Roetzer: So, [00:06:00] get in early, And we would love to hear from you. If you've got a great session that you think would be a good fit for that audience, we're expecting about 1500 people at MAICON 2025. All right, Mike, we got a lot of economic stuff. We got super intelligence. We got it. We got it all going on.
[00:06:18] Mike Kaput: Yeah, it's a, it's a crazy week.
[00:06:20] Mike Kaput: Like do you.
[00:06:21] Paul Roetzer: I feel like somebody told me one time, like we start every podcast saying it's crazy. It's gotten worse though. It is a crazy week. It's gotten crazier.
[00:06:30] Mike Kaput: I feel like we got to go back and look at the first time we mentioned it was a crazy week and just laugh at how probably light it was compared to now.
[00:06:38] OpenAI Releases Its Economic Blueprint
[00:06:38] Mike Kaput: All right. So first up OpenAI just released what it's calling its Economic Blueprint. This is a policy proposal for how the U. S. should develop and regulate AI. This blueprint makes a pretty bold claim that there is approximately 175 billion in global funds [00:07:00] waiting to be invested in AI projects. They also argue that if the U. S. does not attract these funds, China will. To prevent this, OpenAI proposes a comprehensive national strategy that includes developing AI economic zones, creating research labs aligned with local industries, and providing and building what they call a National AI Infrastructure Highway, a network of power and communication grids specifically designed to support AI development.
[00:07:30] Mike Kaput: OpenAI also recommends that the federal government, in consultation with industry, should take the lead in developing, quote, alternatives to the, quote, growing patchwork of state and international regulations that risk hindering American competitiveness. This blueprint also wades into controversial territory around copyright and AI training data.
[00:07:53] Mike Kaput: OpenAI argues that AI developers should be able to use, quote, publicly available [00:08:00] information, including copyrighted content, to develop their models. And this all comes at an interesting time. We'll talk about this as well in the next topic, but OpenAI CEO Sam Altman has scheduled a closed door AI briefing for U. S. government officials on January 30th. Paul, why are we getting this blueprint now? And maybe talk a little bit or tee up a little bit. What's the deal with the closed door briefing for lawmakers? Is it about this? Is it about something else?
[00:08:32] Paul Roetzer: We've been talking a lot about infrastructure, especially in the last like six to eight months on the podcast.
[00:08:38] Paul Roetzer: I think we've been, you know, trying to introduce that topic for people who maybe aren't paying as close attention to that side of AI. It's very fundamental to what happens next. And so it's not. New. I mean, OpenAI has been very aggressively meeting with lawmakers for the last couple of years. There's been lots of conversation around trying to make US a leader in the buildout of data centers and the infrastructure to power [00:09:00] AI.
[00:09:01] Paul Roetzer: but I think with the new administration coming in, everyone's lining up to sort of get their messaging in place and build the relationships they need to build and have a say in kind of what happens next. So, My guess is the, the, January 30th meeting is just a timing. The new administration is, you know, coming into power today in America.
[00:09:21] Paul Roetzer: January 20th will be the inauguration. So two weeks from now, you've got, you know, Congress, Senate, president. Everybody's kind of, set up now and, and, and time to get to work. So, the thing that was interesting to me here is, as a kind of like journalism school major. I always drill into data points, where this, where's this coming from?
[00:09:42] Paul Roetzer: Because this whole thing is basically centered on this 175 billion. And so in the second paragraph, they say shared prosperity is as near and measurable as the new jobs and growth that will come from building more AI infrastructure like data centers, chip manufacturing facilities and power plants.
[00:09:57] Paul Roetzer: That's as our CEO Sam Altman has [00:10:00] written, AI will soon help our children do things we can't, not far off in the future, in which everyone's lives can be better than anyone's life is now. So I think this is interesting because they're basically, there's a lot of concern that I have shared many times on this podcast that AI is going to displace jobs.
[00:10:16] Paul Roetzer: I'm, I'm, I believe that, very deeply. I think this is the setup for how these companies Make the government believe there's a net positive outcome if the government invests properly in infrastructure. so the, the 175 billion, so in that opening paragraph it says new jobs and growth. If you click on that link, it actually takes you to the September 2024 OpenAI.
[00:10:45] Paul Roetzer: report called Infrastructure is Destiny, Economic Returns on U. S. Investment in Democratic AI, which I assume we talked about at that time. Like I, sort of in September of 24, so I'm guessing we at least mentioned that. I [00:11:00] didn't go back and look and see the extent to which we talked about it. So when you go into that report though for September 2024, this is the source of the 175 billion.
[00:11:09] Paul Roetzer: It says capital spending on AI already rivals the The mainframe era of the late 1960s and the fiber optic deployment of the late 1990s. With an estimated 175 billion in global infrastructure funds waiting to be committed. Now, that report cites, in the citation for the 175 billion, actually comes from Houlihan Lokey, Digital Infrastructure Industry Update Q2 2024.
[00:11:38] Paul Roetzer: I have never heard of Houlihan Lokey. I had not gone to that report prior to this. It's a pretty dense report on digital infrastructure, but OpenAI is citing Houlihan Lokey to come up with 175 billion. And when you drill into like that 175 billion, it, you need like the [00:12:00] o1 reasoning model from OpenAI to understand how they come up with that number.
[00:12:04] Paul Roetzer: But the whole point here is like, Don't just accept data on face value. Like, too much, I think we've gotten to the point with Twitter and social media, and like, even mainstream media does it to a degree. Everyone latches on to these numbers with no concept of where the number actually originated from or how, like, legitimate that number is.
[00:12:25] Paul Roetzer: I'm not saying the 175 billion isn't true or reasonably accurate. I'm not even saying it's not underestimated, that it's not a trillion. I have no idea. But this is kind of where we follow it. So now you understand, like, OpenAI is basically building on top of other people's research data to justify the opportunity that exists now. In the Infrastructure is Destiny
[00:12:51] Paul Roetzer: OpenAI report, they basically laid out how the data center build out will create all of these jobs in [00:13:00] America and accelerate GDP growth, gross domestic product. So in that report, it says that each five gigawatt data center will have 2 million GPUs, so NVIDIA will be very happy, because 2 million GPUs per data center.
[00:13:15] Paul Roetzer: Each one will cost 100 billion in 2028 dollars to build, so they're already projecting out three years from now. And it'll create 14,000 construction jobs and 40 billion in annual revenue, per data center. Then, to operate those data centers, you're looking at an estimated 4,000 employees per data center.
[00:13:38] Paul Roetzer: So the whole point of this is, they think data centers, which are needed to build the future AI models, and deliver all the AI that we need at inference time, all this intelligence we need at inference time when you and I use our smartphones and use ChatGPT and things like that is a really big deal and it's going to be a massive driver of employment and GDP [00:14:00] specifically in the states where the data centers are built and that infrastructure is destiny report from OpenAI actually breaks down by state.
[00:14:10] Paul Roetzer: How much money could be generated? How much GDP could grow? And how many jobs could be created in those states? So, I think that the basic premise here is, they're making this massive bet on infrastructure. They believe they're going to build insanely intelligent models, and that those models are going to need more and more data centers.
[00:14:29] Paul Roetzer: Now, in the blueprint, the one other aspect is, they start like, Connecting this to the past, and I thought this was really interesting historical context. I hadn't read about this prior, but they talk about how, like, when cars were first invented in the UK, the UK actually put something in called the 1865 Red Flag Act.
[00:14:48] Paul Roetzer: When a car was coming down the street, they had a flag bearer that had to walk in front of the car to warn people that the car was coming, and that, you know The car had to move aside in favor [00:15:00] of horse drawn transport. So they're sharing this as, like, a lesson of let's not over regulate things. Let's, like, accept that change happens, and it may look weird at first, but that we shouldn't actually Restrict this.
[00:15:15] Paul Roetzer: That's what's happening in the EU. They're saying, like, we can't go that path. We have to push forward. Chips, data, energy, and talent are the focus of it. And then the one other note that I , I think is really interesting. The, the Economic Blueprint, the actual full blown report that we'll link to, states in the very opening, OpenAI's mission is to ensure that artificial intelligence benefits everyone.
[00:15:39] Paul Roetzer: This is the first time I've actually seen them drop general from that. They usually say artificial general intelligence. To us, that means building AI that helps people solve hard problems, because helping with the hard problems, AI can benefit the most people possible. Now, the timing here is interesting, Mike, because you and I had touched on this, but o3 is built to solve hard [00:16:00] problems.
[00:16:00] Paul Roetzer: Like, these reasoning models aren't for the average user to go in and ask, like, about a summary of sports events from last night or do some basic research. These things are designed to solve hard math and science problems. So, the MIT Technology Review comes out with an article. It says, OpenAI has created an AI model for longevity science.
[00:16:21] Paul Roetzer: In that article it says when you think of AI's contributions to science you probably think of AlphaFold, the Google DeepMind protein folding program that earned its creator a Nobel Prize last year. Now OpenAI says it's getting into the science game too, with a model for engineering proteins. The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cells, and that it has handily beat humans at the task.
[00:16:45] Paul Roetzer: The work represents OpenAI's first model focused on biological data and its first public claim that its models can deliver unexpected scientific results. If you remember, The five levels of AI at OpenAI, number four is [00:17:00] innovators, creating new solutions to problems, basically. As such, it is a step forward determining whether or not AI can make true discoveries, which some argue is a major test on the pathway to AGI.
[00:17:10] Paul Roetzer: The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Fran, approached OpenAI about working together. That link up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with 180 million, as MIT Technology Review first reported in 2023.
[00:17:31] Paul Roetzer: Retro's goal is to extend the normal human lifespan by 10 years. So I think that a lot of things are happening here. One, one is the new administration and OpenAI trying to kind of stake their claim and influence it. Two is they truly believe infrastructure is destiny, that to achieve the kind of intelligence they plan to achieve and to have the impact on the world they want to have, they need to build this infrastructure out.
[00:17:53] Paul Roetzer: Three, they're seeing massive gains in their reasoning models. Like 01, moving into 03, and eventually [00:18:00] 04. and they see the ability for these things to start solving really hard problems in society. And I think they want to prepare the government and the world for that, which I believe they think is very near to happening.
[00:18:15] Paul Roetzer: That was a lot.
[00:18:17] Mike Kaput: Funnily enough, the only thought I could keep thinking while I'm looking at this January 30th meeting is Sam Altman better not show up and suddenly Elon's in the room too.
[00:18:27] Paul Roetzer: Oh, trust me. I was thinking about that all weekend. There's no way Elon's not in the room. Which is gonna be the weirdest, I would like, I pray that that is somehow broadcast, like, I saw, the pre party for the inauguration last night, there was a clip on Twitter I saw of Jeff Bezos and Elon Musk standing there talking to each other, which, we won't get into the whole history of those two, but they've been very Oddly friendly on Twitter lately, like Bezos Blue Origin, rocket company successfully put something into [00:19:00] orbit last week and Elon actually tweeted like, Congratulations, great job.
[00:19:03] Paul Roetzer: Jeff then replied, Hey, great job to you too. It's like something weird is happening, like the two richest people in the world are now like turning into buddies, it seems. And, yeah, it's like, I , I keep thinking like Elon is going to be in whatever meeting Sam is at and I don't know that those two have been together in person.
[00:19:21] Mike Kaput: Yeah.
[00:19:22] Paul Roetzer: In a room where they have to speak to each other for a long time.
[00:19:27] OpenAI “Super-Agent” Rumors + o3 Mini Release Date
[00:19:27] Mike Kaput: Our second big topic is really closely related to what we just discussed. So there are some major developments, it sounds like, that are brewing at the main AI labs and multiple signals seem to be pointing towards that. Some big upcoming announcements, maybe from OpenAI.
[00:19:45] Mike Kaput: So there was a breaking report from Axios the other day that said, quote, architects of the leading generative AI models are abuzz that a top company, possibly OpenAI, in coming weeks will announce [00:20:00] a next level breakthrough that unleashes PhD level super agents to do complex human tasks. Axios goes on to say, quote, the expected advancements help explain why Meta's Mark Zuckerberg and others have talked publicly about AI replacing mid level software engineers and other human jobs this year.
[00:20:22] Mike Kaput: Now Axios is hedging its bets on whether or not this is from OpenAI or another lab, though they do mention, like we just talked about, Sam Altman's closed door briefing with government officials on the 30th. And at the same time, another development is confirmed by Altman himself. He posted on X that the company is finalizing the O3 mini model for release in approximately two weeks.
[00:20:46] Mike Kaput: Altman has noted that while this model is not as capable as O1 Pro, it is significantly faster and will launch simultaneously both with their API and on ChatGPT. It also sounds like [00:21:00] O3 Mini might be accessible in some form by ChatGPT Plus users based on some of his replies. To the initial posts. So Paul, this like super agent is this, I feel like we're just going to see this like term for some reason, all over this could be in reference to like, we talked about an open AI release code named operator that was rumored back on episode one 24 could be something totally different.
[00:21:26] Mike Kaput: Like what is most likely being referenced here? And like PhD level super agent feels a little more aggressive than. Some of the talk we've heard about agents in the past.
[00:21:36] Paul Roetzer: I think it's, it's likely O3, but it's probably more likely the, test time compute that they're seeing the scaling law.
[00:21:46] Paul Roetzer: Accelerating. So if we remember back to like, so weird to say the historical large language models of the last two years, like contextually, the whole premise there was give them more Nvidia chips to train [00:22:00] on, give them more data, give them more time to learn, and they became. Much larger, much more intelligent, much more generally capable.
[00:22:07] Paul Roetzer: And so that took us from GPT 1 to GPT 4, where we scaled this law, where we just give them more data, more chips, and they got bigger and smarter. Then in O with O 1, in September, I guess we got 01, right? Is that right? Yeah. September 24, we were introduced to this test time compute, this idea that if you just give them more time to think at inference, so when you, when you ask the question, we give them more time that they seem to get smarter, even if they're not massively bigger, that they, by allowing them time to think, to think harder, they actually just start performing way better.
[00:22:45] Paul Roetzer: And so it does seem that Based on a lot of different things I've been seeing on Twitter, that that seems to be playing out and maybe even faster than people thought. That by giving these things more time, they're, they're starting to [00:23:00] perform at these PhD levels. So I'll go through a quick series of tweets because this is, this is This started on Friday, like, so this is just three days ago.
[00:23:09] Paul Roetzer: Noam Brown, who we've talked about a number of times on the podcast, we did a feature on him, because he was the guy, he was at Meta, and now he's at OpenAI, working on Reasoning. But he, was the guy who kind of solved, like, Texas, poker, Texas Hold'em poker, where we, by giving the AI time to think, it became, like, superhuman at poker.
[00:23:33] Paul Roetzer: And so he's applied that line of thinking now to building these models. So he tweeted, and we'll put the links to all these tweets in if people want to follow along, Lots of vague AI hype on social media these days. There are good reasons to be optimistic about further progress, but plenty of unsolved research problems remain.
[00:23:51] Paul Roetzer: We have not yet achieved superintelligence. And then he was kind of like replying to people, but he said between the O 1 announcement, O 3 announcement and various [00:24:00] podcast talks, I think we've said a lot when someone said, Hey, could you tell us more about this? We believe O 1 represents a new scaling paradigm, and we're still early in scaling along that dimension.
[00:24:10] Paul Roetzer: Then also on Friday, January 17th, Altman tweeted, thank you to the external safety researchers who tested O 3 mini. We have now finalized a version and are beginning the release process, planning the ship in a couple weeks. Also, we heard the feedback, we'll launch API and ChatGPT at the same time. Then someone asked like, what specifically about it is, you know, good.
[00:24:34] Paul Roetzer: And he just said, it's very good. O3 is much smarter than O1. We are, we are turning our attention to that now. And O3 Pro with the mind blown emoji. And then someone said, oh, is the O3 Pro going to be 2000 a month? He said, no, you'll get it for the same 200. Then, Noam Brown, again, this is a little bit later.
[00:24:54] Paul Roetzer: And this was actually on, sunday, the 19th, so two days later, he said, it can be hard to [00:25:00] feel the AGI, which is a term we've shared on the podcast before. it's kind of like the vibes of AGI, like people in these labs are just like, do you feel the AGI? Said, it's hard to feel the AGI until you see an AI surpass top humans in a domain you care deeply about.
[00:25:15] Paul Roetzer: Competitive coders will feel it, will feel it within a couple years. And then he's referencing, Paul Schrader, who I'll get to in a second. He says, Paul is early, but I think writers will feel it too. Everyone will have their Lee Sedol moment at a different time. Lee Sedol is a reference to, AlphaGo, the Go champion.
[00:25:33] Paul Roetzer: So I think people, a lot of people, listen to the podcast, have heard us talk about this, but watch the AlphaGo documentary. You'll see what we're talking about. It's free on YouTube. Lee Sedol was defeated by the AlphaGo system built by Google DeepMind at a time when most people didn't think an AI could defeat a Go champion.
[00:25:51] Paul Roetzer: So, this Noam tweet is in reply to someone who shared a post, I think it was a Facebook post, from Paul [00:26:00] Schrader, who's an American screenwriter. He wrote Taxi Driver for Scorsese and then he later co wrote Raging Bull and a bunch of other popular movies. He posted: I have come to realize AI is smarter than I am, has better ideas, has more efficient ways to execute them.
[00:26:18] Paul Roetzer: This is an existential moment akin to what Kasparov felt in 1997 when he realized Deep Blue was going to beat him at chess. Someone then said, what brought you to this conclusion, Paul? And he replied, I asked it for Paul Schrader's script ideas. It had better ones than mine.
[00:26:40] Paul Roetzer: This reinforces what we talked about on episode 130, which is like, forget about all these evals, like, these research labs talk about all these really hard evals, is it PhD level in math, and is it PhD level in biology? Who cares? Like, what matters is that Paul Schrader, a legendary screenwriter, Now believes the thing is better [00:27:00] at his job than him.
[00:27:01] Mike Kaput: You're right.
[00:27:01] Paul Roetzer: That's what matters is when it starts to affect our jobs. So this then leads into the last two tweets I will mention. The first one is this Axios article where this is the tweet from one of their editors. We've learned OpenAI CEO Sam Altman has scheduled a closed door briefing for U. S.
[00:27:16] Paul Roetzer: government officials on January 30th. Which wasn't news because OpenAI had put that in their economic blueprint, but anyway. with people inside and out of the government telling us, AI insiders believe a big breakthrough on PhD level super agents is coming. So that was like, what, everybody went nuts on Sunday, like on the 19th.
[00:27:36] Paul Roetzer: Then Sam tweets the morning of January 20th, Twitter hype is out of control again. We are not going to deploy AGI next month, nor have we built it. We have really cool stuff for you, but please chill and cut your expectations 100x. So, my overall takeaway here, things are likely advancing far faster than people realize or are prepared for.
[00:28:00] Paul Roetzer: That much I'm fairly confident in. They just probably aren't advancing as quickly as the hype on Twitter might make you believe. When an Axios headline about superagents shows up and people go crazy, and then like three hours later everyone thinks that Sam's going to introduce superintelligence to Congress on January 30th, and it's likely not what's going to happen. But there's a decent chance he may show, like, an o3 preview, like an o3 Pro preview, with their projections of what o4, o5 could look like. Like, that's a distinct possibility, and that is earth shattering.
[00:28:39] Paul Roetzer: Again, I feel like we're becoming so numb to these advancements that it's hard for people to process what that could mean if we truly do start having these PhD level agents on demand for 200 a month for whatever profession you want to pick.
[00:28:55] Mike Kaput: Two of the top replies to Sam's tweet, which went out at [00:29:00] 3. 32 a.
[00:29:01] Mike Kaput: m. Oh, is that what I knew it was this morning? Yeah, real early. the two, two of the top replies I think are hilarious, but also kind of indicative of like what moment we're in. Someone first replied, super intelligence on Tuesday at 10 a. m. Pacific time per Axios. So these Axios headlines are getting out of control.
[00:29:19] Mike Kaput: My favorite was someone just asked, when are we getting the ChatGPT meme coin? So, you know, when are we getting their crypto project?
[00:29:26] Paul Roetzer: Which if you followed the news at all on Saturday and Sunday, the meme coin thing, which I honestly like Mike, I was, I was going to take a minute and have you explain meme coins to me.
[00:29:36] Paul Roetzer: And then I just went to ChatGPT myself. And the whole point is like, If you didn't follow this, I don't want to get into this, but Trump launched a meme coin that made him like 60 billion dollars in like 5 hours or something like that. And then I think it crashed when they announced a Melania meme coin like later that day or something like that.
[00:29:57] Paul Roetzer: so it's like this cryptocurrency [00:30:00] thing and I do, I'm not an expert on this. I'm not even going to try and explain this, but it makes as little sense as you would think basically. So if you go do the research, it's like vaporware. There is nothing. It's just hype and people launch these meme coins. But yes, that is funny.
[00:30:13] Paul Roetzer: An OpenAI meme coin would be, they, they could raise all the money they need for their infrastructure if they just launched a meme coin. No kidding.
[00:30:19] Google Is Giving Away AI Capabilities to Workspace Customers (And Microsoft Is Changing Pricing)
[00:30:19] Mike Kaput: That's All right. Our third topic this week, bringing it back down to earth a little bit. Two of the biggest players in AI have made some pretty significant changes to their pricing strategies.
[00:30:30] Mike Kaput: So both Google and Microsoft are revamping a bit how they package and charge for their AI products. So first up, Google announced it is basically giving away Gemini to business and enterprise customers. It's adding it by default to all Google Workspace business plans. The catch is this comes with a small price increase.
[00:30:50] Mike Kaput: So previously a Workspace Business Standard Plan with the Gemini add on cost $32 per user per month. Now it will be [00:31:00] $14 per user per month. But that's a $2 per month increase from the previous Standard Plan, like, without Gemini. So Microsoft, however, is taking a slightly different approach. They are keeping their premium Copilot Pro license at 30 bucks a user a month.
[00:31:18] Mike Kaput: They are, however, introducing new consumption based pricing for certain AI agent features that they say, quote, can automate workplace processes, according to some reporting from the information. The information writes, quote, under the new consumption pricing, one message within 365 copilot chat costs roughly one cent, while messages that require the chatbot to create a lengthy answer using generative AI cost two cents, and messages that require the chatbot to draw on other data from other applications Cost 30 cents.
[00:31:52] Mike Kaput: So Paul, maybe first walk me through Google's move in particular, like what are they trying to achieve with the new pricing and is it [00:32:00] going to work?
[00:32:01] Paul Roetzer: yeah, I just had a funny thought on the Microsoft one, but we'll come back to that. So I think the Google move is two things. one, it's just a strategic move to undercut.
[00:32:10] Paul Roetzer: The market, you know, ChatGPT Enterprise and Team, Microsoft Copilot. We've said this all along, like, Google has a bunch of advantages here. One of them is their, resources, their compute power, their own data centers, their own chips. Like, they have all this stuff that they can throw at their competitors.
[00:32:28] Paul Roetzer: Now, Microsoft has similar stuff, but Microsoft doesn't have their own models. They're using OpenAI's models. OpenAI doesn't have these things. OpenAI also doesn't have the distribution of Google Workspace, they don't have the distribution of Gmail. So, Google's got some, Some plays to make. And this seems like one strategic play is like, let's just undercut the market and give this away.
[00:32:46] Paul Roetzer: The second part to Google strategy could be, and I'm just hypothesizing here. It's impossible to know how many people pay for Google Workspace for Business. I dare you to try it. Pick, pick your favorite [00:33:00] AI tool, you can use Google Deep Research, ChatGPT, Perplexity. No one can answer for you how many people pay for Google Workspace for Business.
[00:33:08] Paul Roetzer: It's not in their earnings transcripts, it's nowhere. So all we have are like estimates here. Based on best estimates, There are 8 million companies that pay for Google Workspace for business, including Marketing Institute and SmarterX. We are a paying customer, have been for years. The rest of this, I'm just going to hypothesize.
[00:33:27] Paul Roetzer: Let's say if there's 8 million companies, and this is two weeks in a row I've had to do math, Mike, so this is like, this is hard work for Monday mornings. 8 million companies, and let's just say out of those 8 million, there's 50 million users. that are paying this 12 a month. So, we're just gonna assume 50 million people are paying every month to use this product.
[00:33:48] Paul Roetzer: Now, if 5 percent of those 50 million users chose to upgrade to Gemini, so that's 2. 5 million, 10 percent would be 5, yeah, that's 2. 5 [00:34:00] million users who are paying 20 a month for Gemini. That's about 50 million a month. Or 600 million for the year being generated by people paying 20 a month for Gemini. So let's just assume that's what Google's currently making it.
[00:34:15] Paul Roetzer: I'm making up the 50 million number. I'm making that number up. if instead they say, hey, we're going to give you Gemini for free. , but we're gonna increase your standard plan $2 a month. Hmm. Well, if you have 50 million users who you went from 12 to $14 a month on, that's a hundred million a month in revenue, or 1.2 billion per year.
[00:34:38] Paul Roetzer: So by giving Gemini, Gemini away, basically, but charging everyone $2 a month. Rather than them opting in for the extra 20 a month, they just made 600 million dollars. So, now again, I don't know if the math math's there, as my son would tell me. My math isn't mathing. He sells me all the time. so, but if the numbers are in this rough range, You could [00:35:00] see how, one, this could undercut the market.
[00:35:02] Paul Roetzer: Two, it might actually just be a smart financial move for Google to just charge everyone 2, whether they use the tools or not. Now, on the consumption based pricing, my first thought is, if I have to re read your pricing four times to comprehend, What it is, it's probably not going to work. It's not following like the simplicity rule.
[00:35:24] Paul Roetzer: at a higher level though, I think the key here is we're just going to see a ton of experimentation. Nobody knows what to charge. We talked a couple episodes how Sam kind of picked 200 a month out of the air for O1 and realized like they're losing money on it because he just kind of guessed at what would be a profitable number and he was wrong.
[00:35:40] Paul Roetzer: So. You're going to see a lot of experiments. I've mentioned this before, but like my former agency that I sold in 2021 was HubSpot's first partner in 2007. And we went through dozens of changes to their SaaS pricing model over the years. And so I think that we're just in this new phase where these companies that are [00:36:00] selling AI, aren't really sure how to charge for it and how to make money on these models and on the services.
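For readers following along at home, here is Paul's back-of-the-envelope Workspace comparison from a few paragraphs above as a quick sketch. Every input is his hypothetical (the 50 million paying users and the 5 percent opt-in rate are guesses he makes on the show, not reported figures).

```python
# Paul's hypothetical numbers from the discussion above, not reported data.
users = 50_000_000    # assumed paying Workspace business users
optin_rate = 0.05     # assumed share who would buy the old $20/month Gemini add-on
gemini_addon = 20     # $/user/month, old add-on price
price_increase = 2    # $/user/month, new bundled increase

# Scenario A: Gemini stays an optional $20/month add-on
optin_revenue = users * optin_rate * gemini_addon * 12   # $600M/year

# Scenario B: Gemini bundled in, everyone pays $2/month more
bundled_revenue = users * price_increase * 12             # $1.2B/year

print(f"Opt-in add-on: ${optin_revenue/1e6:,.0f}M per year")
print(f"Bundled +$2:   ${bundled_revenue/1e6:,.0f}M per year")
```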
[00:36:07] Paul Roetzer: Now, one final note here I thought was just, it was so well written. I have no idea who this guy is. his name's Timo Springer. We'll put a link to this tweet in the show notes. He has 300 followers on Twitter, so it's not like this is some influencer that everyone just like listens to. But I saw his tweet and I thought it was so well done.
[00:36:25] Paul Roetzer: And it's representative of what I was explaining last week with the issues with Google Workspace. It's representative of ChatGPT Team. It's representative of all these models. So, and the reason I think we should pay attention to this is he actually got replies from the head of product for ChatGPT and the head of engineering for ChatGPT because he tagged some people and they apparently saw it.
[00:36:44] Paul Roetzer: So. Here's the Timo tweet. ChatGPT is a confusing mess right now. It seems like a few months ago they embraced a new product strategy, maybe when their chief product officer joined, which is good, but there's still lots of legacy features like GPTs that feel [00:37:00] really out of place with new releases like Projects and Tasks.
[00:37:03] Paul Roetzer: What bothers me the most is that even for power users, it's extremely difficult to know which tool currently works with which model on which platform, web, mobile, desktop. The feature matrix is incredibly complex. In the normal chat, I can connect Google Drive, but this doesn't work as a data source for GPTs or projects.
[00:37:22] Paul Roetzer: Advanced voice mode can access custom instructions and memory, but doesn't work with projects or GPTs. O1 can now handle file uploads too, but only images. When I upload a PDF to the chat, only the text contents are analyzed, but if I upload screenshots of the PDF pages as images, these can be analyzed.
[00:37:40] Paul Roetzer: Projects can also use GPT-4o as a model, but I could list at least five more things that are similarly annoying in daily use. I'm an absolute power user, and even I sometimes struggle to keep track of everything. I wish the product team at OpenAI would focus more on removing all these complexities from the product.
[00:37:56] Paul Roetzer: And then he followed up with a comment that said, a wonderful example. ChatGPT [00:38:00] Enterprise now supports reading and understanding visuals, images, graphs, diagrams. Embedded in PDF files, users can upload a PDF and ChatGPT can interpret the text and any visual elements within that file. Cool, but, and it is not currently available for GPT based projects.
[00:38:16] Paul Roetzer: So we can do this thing, but I can't do it in GPTs, which is what I use all the time. These things are so incredibly confusing and daily use for users. In my personal account, ChatGPT cannot analyze images and PDFs. In my business account, it works, but not when I upload the PDF as knowledge to my GPT. So as I said, this is the experience we're all having.
[00:38:37] Paul Roetzer: Like if you feel confused, this is a great example of someone who's in here power using all day long. And the abnormalities aren't actually your fault. It is like a fault of the company and how fast they're moving and they're not solving for the end user. And as I said with Google, the issue seems to be these companies spend so much time solving for developers [00:39:00] And yet all their revenue is coming from enterprise users.
[00:39:03] Paul Roetzer: So the head of product for ChatGPT said, thank you, extremely top of mind, we will fix this. The head of engineering for ChatGPT said, yeah, we have to make it simpler and we'll do so. So whatever pricing model you want to have, just make it so it's actually user friendly, not what we currently have with all these platforms.
[00:39:21] Mike Kaput: And one final question I have for you around consumption based pricing, like, I'm by no means a business or enterprise finance expert, but like, how on earth do you even budget for usage based pricing? I don't even know how the usage of a tool that we would pay 20 bucks a month for.
[00:39:39] Paul Roetzer: No idea. And you would get so many surprise bills where people are like, I didn't realize that.
[00:39:44] Paul Roetzer: And then you got to put all these caps in place for usage. Yeah, I just, I can't see in an enterprise allowing the variability of pricing when the CFO is like, don't even understand how the product's going to be used. It's, it's not going to work. Like it's a great idea, but like, good luck. [00:40:00]
[00:40:01] Mike Kaput: Let's jump into this week's rapid fire.
[00:40:03] Google Releases New Research on the Potential Successor to Transformers
[00:40:03] Mike Kaput: We've got a few big things going on. So. First up, researchers at Google, MIT, and some other institutions have unveiled an AI system called Titans that fundamentally reimagines how AI can learn and remember information. This system represents one of the first major attempts to give AI the kind of nuanced memory capabilities that we as humans kind of take for granted.
[00:40:27] Mike Kaput: The key innovation is what the researchers call, quote, neural long term memory. This is an AI component that can actively learn and adapt while it's being used. Not just during initial training. So much like how humans form memories based on surprising or unexpected experiences, the system pays special attention to information that violates its expectations, storing those memories for future use.
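For anyone curious what that "surprise"-driven memory idea might look like in practice, here is a toy sketch. It is not the actual Titans implementation; the class name, update rule, decay term, and threshold are all illustrative assumptions, meant only to show the flavor of a memory that keeps learning at test time and stores what violates its expectations.

```python
# Toy illustration of a surprise-gated, test-time-updating memory.
# NOT the Titans architecture; parameters and the update rule are assumptions.
import numpy as np

class LongTermMemory:
    def __init__(self, dim: int, lr: float = 0.1, decay: float = 0.99,
                 surprise_threshold: float = 1.0):
        self.weights = np.zeros((dim, dim))  # simple associative memory matrix
        self.lr = lr
        self.decay = decay
        self.surprise_threshold = surprise_threshold

    def update(self, key: np.ndarray, value: np.ndarray) -> float:
        # "Surprise" = how far the memory's prediction is from the new value.
        prediction = self.weights @ key
        error = value - prediction
        surprise = float(np.linalg.norm(error))
        # Only surprising inputs are written strongly into memory;
        # older associations slowly decay (a crude form of forgetting).
        if surprise > self.surprise_threshold:
            self.weights = self.decay * self.weights + self.lr * np.outer(error, key)
        return surprise

    def recall(self, key: np.ndarray) -> np.ndarray:
        return self.weights @ key

mem = LongTermMemory(dim=4)
print(mem.update(np.ones(4), np.arange(4.0)))  # first sight of a pattern = high surprise
```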
[00:40:54] Mike Kaput: Titan is particularly notable in how it combines three different types of memory, short term memory for [00:41:00] immediate tasks, long term memory that continues learning from new experiences. And what they call, quote, persistent memory, which maintains core knowledge about tasks. So this kind of mimics how human memory as we understand it works.
[00:41:13] Mike Kaput: We have different systems for different types of information. So Paul, I guess like my big question here, just kind of as on a surface read of this study, like how big a deal is this? Because I've seen some people call this basically the successor to Transformers, which was one of the most important developments in modern AI.
[00:41:34] Paul Roetzer: Quick background, if Transformers is new to people, 2017, the Google Brain team released a paper called Attention is All You Need, in which they invented the Transformer. It was building on prior research, but they were kind of credited with the creation of the Transformer. That is the T in GPT, Generative Pre trained Transformer.
[00:41:51] Paul Roetzer: And Transformers, for the last, what are we on now, eight years, roughly, almost, have, have really continued to be [00:42:00] the basis for the acceleration of these models. That's what language models are built on. It's what enables everything that we've kind of seen to date. if you listen to Yann LeCun and other leaders in the AI space, there does seem to be a uniform belief that a number of breakthroughs are needed to get to the next level of intelligence.
[00:42:21] Paul Roetzer: And so at any time, a research paper may emerge that is one of those breakthroughs. Titans might be one of those breakthroughs. You don't often know right away, even when the attention is all you need transformer paper came out in 2017. Google admittedly didn't realize the significance of their own invention until later the next year and started to actually try and productize it.
[00:42:45] Paul Roetzer: By that point, OpenAI was, you know, now starting to work towards building GPT 1. So some believe that OpenAI actually figured out the significance of the Transformer paper before Google did. and so we don't know. We, you know, this might be one of those ones we look [00:43:00] back in two years and be like, Oh, on episode 131, we talked about that Titans paper and look at that.
[00:43:04] Paul Roetzer: They just invented a whole new kind of model based on it. But this is why we pay attention to the research papers and, and you, like Mike and I spent a lot of time kind of monitoring the influencers in the space and seeing which papers they're talking about, which ones are getting a lot of attention and citations, because that often is a kind of a hint at what might be something of significance down the road.
[00:43:23] Paul Roetzer: So, Definitely worth keeping an eye on and it came out of the Google research team. My guess is, they're not going to make the same mistake twice with releasing breakthroughs in AI models. So if this came out December 31st, 2024, it's probably something that they internalized long before that and have already figured out how to apply it.
[00:43:43] Paul Roetzer: Maybe already building it into models. back in 2017, there was a much more open research model within the AI community where you published your breakthroughs. That stopped after ChatGPT. It basically slowed, not completely to a halt, but, [00:44:00] the amount of papers being published where they were putting out the new stuff was pulled back, dramatically to, in favor of product development after ChatGPT.
[00:44:10] Google Releases Factuality Benchmark for LLMs
[00:44:10] Mike Kaput: Our next topic, it also involves some work from Google. So, Google DeepMind just unveiled a new tool for measuring one of AI's biggest weaknesses, which is its tendency to make things up. This is called FACTS Grounding. FACTS is an acronym, F-A-C-T-S. This new benchmark, it's a new benchmark that sets out to do something that has been surprisingly difficult until now, which is determining just how well an AI system sticks to the truth when it's answering your questions.
[00:44:44] Mike Kaput: At the heart of this system that Google has built is a collection of over 1, 700 carefully designed examples that challenge AI models to do something that we might find deceptively simple. Read a document and answer questions using only the [00:45:00] information provided. Now, what makes this particularly clever is how it works.
[00:45:05] Mike Kaput: Each response is evaluated not just by one, but three of the most advanced AI models out there today: Gemini 1.5 Pro, GPT-4o, and Claude 3.5 Sonnet. So, all the work from this project has been put into a public leaderboard that Google has launched to track how different AI models perform on these tests. So right now, Google's experimental Gemini 2.0
[00:45:32] Mike Kaput: model ranks number one with 83.6 percent grounding. Google models also occupy spots number two and three, and are followed pretty closely by Claude 3.5 Sonnet and GPT-4o. So Paul, this seems pretty notable. It sounds like we now at least have some visibility based on Google's methodology and tests into which models are the most accurate at retrieving [00:46:00] information that is actually in a document or source that you're referencing.
[00:46:05] Mike Kaput: Does this mean we're getting closer to solving hallucination?
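As a rough illustration of the scoring idea just described, here is a toy sketch. The keyword-overlap judge below is a stand-in invented for this example; Google's actual benchmark uses LLM judges (the three models named above) and a far more careful rubric.

```python
# Toy sketch of a grounding score: several "judges" check each response
# against its source document, and the share judged fully grounded is the score.
# toy_judge is a placeholder heuristic, not how FACTS Grounding actually judges.
def toy_judge(document: str, response: str) -> bool:
    doc_words = set(document.lower().split())
    # Treat the response as grounded if every sentence shares a word with the document.
    return all(
        doc_words & set(sentence.lower().split())
        for sentence in response.split(".") if sentence.strip()
    )

def grounding_score(examples, judges) -> float:
    grounded = sum(
        1 for doc, resp in examples if all(judge(doc, resp) for judge in judges)
    )
    return 100 * grounded / len(examples)

examples = [("The meeting is on January 30th.", "The meeting happens on January 30th.")]
print(grounding_score(examples, [toy_judge]))  # 100.0
```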
[00:46:10] Paul Roetzer: It could. I mean, I think anyone who uses NotebookLM or Google Deep Research, you can experience probably this at work where it cites right within the source doc. So, I do think that at some point we largely solve hallucination. The question becomes how. As always, I guess in society we have this issue, it's like, what is the source of truth?
[00:46:32] Paul Roetzer: Like, what is truth, unfortunately can't always be agreed upon, and so a hallucination to one person might be fact to another person, so assuming we can get around that and we actually agree on sources of truth, if it is like documents provided, Fine, like that's an easy source of truth. If I'm giving you the 50 documents and saying I just want facts based on these documents, I want an accurate relation to those.
[00:46:55] Paul Roetzer: And my money would be on Google 10 times out of 10 being the one that leads in this, given [00:47:00] their history and their business model in search and retrieval. So, I would be guessing that we will continue to see progress being made here. I've heard Demis Hassabis talk about this exact problem, as has Sundar Pichai.
[00:47:15] Paul Roetzer: So I think the Google team is very focused on solving this. And I do think in, you know, one to two years, we may not have complete elimination of hallucination or inaccuracies, but humans certainly have plenty of hallucinations and inaccuracies. I would imagine we will be at superhuman levels of accuracy from these models within the next one to two years.
[00:47:35] Paul Roetzer: I don't think there are any scientific obstacles to that being done. It's just kind of a brute force thing they've got to keep working through and finding ways to solve. But it seems like they're on the right path.
[00:47:48] Apple Intelligence Falls Flat
[00:47:48] Mike Kaput: Our next topic concerns another major player in AI, but it is not exactly positive news.
[00:47:54] Mike Kaput: So, Apple is temporarily pausing a new AI-powered notification [00:48:00] summary feature for news and entertainment. They're pausing this after the feature, which is powered by Apple Intelligence, inaccurately summarized content from news outlets. The most notable incident sparked criticism from the BBC, which saw Apple incorrectly summarize its coverage of the UnitedHealthcare shooter.
[00:48:20] Mike Kaput: While Apple makes it clear that the summaries are in beta and, quote, may contain errors, this move also seems to acknowledge that the tech just needs some more work. Paul, I'm just going to let you take this and run with it because you are a huge Apple fan, and this is just not what people expect from this company, I don't know.
[00:48:42] Paul Roetzer: Yeah, I don't get it. So I've had Apple Intelligence now for, whatever, two months. The only time I ever use it is, I guess, Siri, which, you know, sometimes gives different responses or at least connects to ChatGPT now if it doesn't know the answer. But I mess around [00:49:00] with, like, I don't even know what they're called, Genmojis, which I think is different than Image Playground.
[00:49:04] Paul Roetzer: There's like these two things that are native in there now, and I'm not really ever sure which one I'm using. But in text messages to my son, like, I'll create, I guess, Genmojis of him in different outfits and stuff like that. That's it. That is literally it. Like, it's the only function of Apple Intelligence I use.
[00:49:22] Paul Roetzer: And so, with the amount of, like, ad money and hype, it is, like, an embarrassing product launch. And on top of, like, the Apple Vision Pro, which is insane technology that has had zero support since the product came out, it's literally sitting next to me collecting dust at the moment. It's like two major failed product launches in a row.
[00:49:42] Paul Roetzer: It hasn't really affected their stock price. Like, people are still bullish on Apple. I'm still bullish on Apple, but it's highly out of character. Just for fun, I put a poll on LinkedIn. I put it on Twitter too, but I don't get a ton of engagement on Twitter, so we'll go with LinkedIn. I said, what is the most disappointing AI product so [00:50:00] far?
[00:50:00] Paul Roetzer: Hype versus reality. Wish I wasn't limited to four choices, right? Additions welcome in the comments. So this had 599 votes, and it's actually still open for a couple hours. The leader, at 46%, is Apple Intelligence; Microsoft 365 Copilot, 36%; Agentforce from Salesforce, 9%; and then Google Gemini for Workspace, 8%.
[00:50:20] Paul Roetzer: Now, obviously this is not a scientific study. This is kind of more based on overall vibes, I would guess, because there's a chance some of you, like, don't use these products. So it's hard for you to say Copilot is worse than Apple Intelligence if you've never used Copilot. So it was more of just kind of a fun thing to put out there and get some general responses.
[00:50:40] Paul Roetzer: But, yeah, Apple Intelligence, it's just, you know, the more time I spend with it, the more disbelief I'm in that this is the product they put out. Historically, like, they play catch-up a lot, like fast followers, and they may end up building this amazing experience into the phone sometime in, you know, 2026, [00:51:00] 2027, I don't know, but right now it is a pretty embarrassing offering from Apple.
[00:51:07] Mike Kaput: Yeah. And right now it's embarrassing, but I feel like it becomes really dangerous to them once someone actually reinvents an AI-first device, right? Because, for instance, I was listening to an end-of-year episode of the Tim Ferriss podcast with friend and investor Kevin Rose on it, and they were just speculating at one point.
[00:51:26] Mike Kaput: There are a lot of people they've talked to in Silicon Valley where it's like, why can't I just have a phone with, like, one button that's just really smart AI that loads everything that I need? Like, I don't need all these apps.
[00:51:38] Paul Roetzer: I don't need, like, a Rabbit,
[00:51:39] Mike Kaput: yeah, something that actually works. It's just one of those things where it's like we haven't yet reinvented
[00:51:46] Mike Kaput: a mobile device that's really AI-first yet, so it'll
[00:51:49] Paul Roetzer: be interesting to see how that plays out. Google has definitely swung the door open on this one. And again, I'm sorry, not Google, Apple has. Apple's devices are [00:52:00] so much more than that, but, like, I use Advanced Voice in ChatGPT
[00:52:04] Paul Roetzer: 10 times out of 10 over Siri. Like, sometimes it's faster because I can say "Hey Siri" on my phone and it, like, opens it. And it's like, this is a basic thing Siri should be able to handle. Like, I don't need to go into my ChatGPT app. But if it's anything of actual value, or that requires any actual reasoning or thought process, I'm going into ChatGPT every time before I'm going to talk to Siri.
[00:52:23] Paul Roetzer: And my kids are like a generation of kids who just think Siri's stupid. Like, they never ask Siri anything, other than to turn the music off or something like that. So yeah, I don't know. It'll be interesting to see what happens. They definitely have just faltered multiple times here, just totally fumbled the Apple Intelligence thing.
[00:52:45] Paul Roetzer: And I don't know, maybe this spring they'll come out with something significant, but I don't have great optimism at the moment based on what they've delivered so far.
[00:52:56] TikTok Shutdown Drama
[00:52:56] Mike Kaput: Our next topic, let's talk quickly about TikTok. TikTok [00:53:00] has had a crazy couple of days. So we wanted to quickly run down what's going on with TikTok.
[00:53:07] Paul Roetzer: Which will have changed five times, probably, by the time you actually hear this.
[00:53:13] Mike Kaput: Yeah, more than most straightforward AI news, this could change faster than anything else, for sure. Because, you know, TikTok is prominent to many in our audience, and there's also an AI angle to all this drama. So we just kind of quickly wanted to go through what's going on here.
[00:53:31] Mike Kaput: TikTok went dark late Saturday night as a congressionally mandated ban took effect. It resumed service Sunday afternoon with a message crediting President-elect Trump for its return. Trump, speaking at a, quote, victory rally as part of his inauguration events in DC, declared that, quote, TikTok is back.
[00:53:52] Mike Kaput: He outlined a vision for keeping the platform operational, suggesting a joint venture that would give the U.S. 50 percent ownership. [00:54:00] So here's kind of how this all went down really quickly. At some point around 11pm on Saturday, January 18th, TikTok shut down in the U.S. before the ban took effect at 12am on January 19th.
[00:54:12] Mike Kaput: At 7:03am the morning of January 19th, Trump posted SAVE TIKTOK, in all caps, on Truth Social. Hours later, he announced he would sign an executive order on Monday, which is today, the day we're recording this podcast, that delays the TikTok ban. He also called for the platform to be taken over by a joint venture with U.S. and current owners.
[00:54:31] Mike Kaput: At 12:30pm on Sunday, TikTok posted on X that it was in the process of restoring service. They publicly thanked Trump. At 1:50pm, TikTok was reportedly back online for many U.S. users. They again pointed directly to Trump as the reason TikTok was saved. Late Sunday afternoon, a Trump advisor told CNN that the administration is still finalizing the executive order to delay the ban and give the platform more time to reach a deal [00:55:00] to stay in the U.S.
[00:55:01] Mike Kaput: Literally an hour later, Trump said, quote, "TikTok is back," during the rally. And now here's the AI component of all this. At some point on Saturday, CNBC reported that Perplexity, of all companies, quote, officially made a play for TikTok, submitting a bid to its parent company ByteDance to create a new merged entity combining Perplexity, TikTok U.S., and new capital partners.
[00:55:25] Mike Kaput: So Paul, like, talk to me about Perplexity here. Like, what are their motivations?
[00:55:30] Paul Roetzer: Alright, so real quick on the ban. I don't want to spend a lot of time on this, but Trump and the Republican Party have been pushing for the ban for years. This isn't like some Republican thing where, once they got into office, they were going to bring TikTok back.
[00:55:45] Paul Roetzer: They've actually supported the ban for national security reasons, because ByteDance is a Chinese-owned company, and the assumption is the data goes back to the Chinese government if they want it. So, the Supreme Court upheld that this is not a violation of the First [00:56:00] Amendment and that the ban should remain.
[00:56:03] Paul Roetzer: So this is a legal thing, and the House and the Senate both supported the ban, but bringing it back is like a popular thing to do. So it may come back. Okay. So now, personally, I actually stopped using TikTok. I was a very late adopter of TikTok, and it does just, like, suck you in.
[00:56:21] Paul Roetzer: Like, their algorithm is insane. And so I took it off my phone like a week or two ago, because I was like, it's just wasting my life. Now, most of the stuff I find is actually things like basketball plays to run for my daughter's team, sports things I'm interested in. It's actually really good, valuable stuff for me because the AI is so good.
[00:56:40] Paul Roetzer: I'll spend like 40 minutes and I'm like, oh my God, I probably just went through a hundred videos on TikTok of, like, useless stuff. So I actually took it off because I found it was sucking time out of my life that I wanted back. The Perplexity thing, I think Perplexity, like, jumped the shark.
[00:56:56] Paul Roetzer: Like, I mentioned a couple episodes ago that I thought Perplexity was [00:57:00] eventually just going to get acquired, or, like, acqui-hired or whatever. I don't get this company. This is a pure PR move. Obviously, you're not going to merge with TikTok. Why in the world would you put this out there, other than because they thought it was funny?
[00:57:13] Paul Roetzer: I don't know. It's just absurd. So, I think I'm going to cancel my Perplexity subscription, honestly. Not because of this, it was just like a tipping point, where I already thought they were kind of questionable, and I thought, based on the interviews I've listened to with the founder, it wasn't a very serious company.
[00:57:30] Paul Roetzer: And then I thought, with all these new things they're throwing out, they're just throwing spaghetti at the wall to see what's going to stick and differentiate them. And I feel like they've just lost the magic of what differentiated them early on, right? And I realized over the weekend I'm still paying 20 bucks a month for a product I haven't used in over 30 days,
[00:57:47] Paul Roetzer: because I just use Deep Research and ChatGPT and the other stuff anyway. So again, Perplexity, man. I'm not saying there isn't a chance it works out and becomes an amazing company. But what the [00:58:00] hell? What is this? How do you have time to make some joke thing like this and
[00:58:07] Paul Roetzer: try to make it look like a legitimate business deal? I don't know. It's just absurd.
[00:58:11] Mike Kaput: So it seems increasingly desperate. Yeah, it's just like
[00:58:16] Paul Roetzer: hoping for headlines, like trying to get some PR, trying to be relevant in the conversation. It's like, I don't know, just stop. Just fix your UI, make it look like it's not from 2020, and try to figure out how to differentiate yourself from
[00:58:31] Paul Roetzer: Deep Research and all the products that caught up and surpassed you. I don't think their CEO is coming on our podcast anytime soon for an interview.
[00:58:42] US Patent and Trademark Office Releases Its AI Strategy
[00:58:42] Mike Kaput: Yeah, we'll see. I mean, they're certainly welcome to. But in our next topic here, we have seen the U.S. Patent and Trademark Office, the USPTO, release a comprehensive AI strategy. At its [00:59:00] core, this strategy focuses on five key areas: advancing IP policies that promote innovation, building robust AI capabilities within the USPTO, ensuring responsible AI use, developing internal AI expertise, and fostering collaboration with USPTO partners.
[00:59:18] Mike Kaput: Interestingly, they're taking a notably human-centric approach. They said that while AI will transform their operations, it has to complement rather than replace human expertise. Their implementation plan includes extensive training for patent examiners and trademark attorneys to help them better evaluate AI-related applications.
[00:59:38] Mike Kaput: They also talked about their position on AI and copyright law. They acknowledged the complex challenges around AI-generated content and training data, and they committed to working closely with the U.S. Copyright Office on policy recommendations. They are actively monitoring relevant court cases and aim to help shape legislation that addresses [01:00:00] IP issues.
[01:00:01] Mike Kaput: So, Paul, it's good to see this really important body getting a strategy in place here. Like, what are your thoughts on the copyright point here? Like, are we going to get any updated guidance on this stuff anytime soon? There are a lot of unanswered questions people have.
[01:00:18] Paul Roetzer: Yeah. So, I mean, unless I missed something, and I just kind of scanned this report, it doesn't say anything about copyright.
[01:00:24] Paul Roetzer: Like, I mean, it doesn't make any changes. It basically says we're watching legal cases, just like you are, we're advising Congress when they ask us, we're doing listening sessions. Like, nothing changed. And so, I actually didn't know exactly how the USPTO works, so I just did a quick search. It looks like in October 2021, President Biden nominated Kathi Vidal to serve as USPTO director.
[01:00:49] Paul Roetzer: She was sworn in April 13th, 2022. So, like most government offices, I don't expect Biden appointees to remain in those positions. [01:01:00] I would assume there's going to be a pretty swift transition of those leaders. Elon Musk is advising Trump, as are all these other VCs who couldn't care less about the copyright holders whose data their models are training on.
[01:01:15] Paul Roetzer: I would assume that if this is a government agency that has someone appointed by the president, we may get a person who is more favorable toward the VC world and their views on copyright, and we may see some changes. I mean, Sam Altman, that was in OpenAI's Economic Blueprint, right? They talked about copyright and how they don't want it to slow down American innovation.
[01:01:39] Paul Roetzer: I wouldn't be surprised if in the next four years we see some changes to the way this works. And I don't think people who are copyright traditionalists will be happy with those changes. That's kind of my personal opinion, high-level assumptions at the moment.
[01:01:54] Meta AI Copyright Lawsuit
[01:01:54] Mike Kaput: In another copyright related development, we got some [01:02:00] newly unsealed court documents that paint Meta in a pretty poor light due to actions it took while racing to compete with OpenAI.
[01:02:09] Mike Kaput: According to documents from a California court, Meta executives discussed and ultimately approved using a site called Library Genesis, or LibGen for short, which is a book piracy site, to train the company's Llama 3 model. This decision was reportedly escalated all the way to CEO Mark Zuckerberg. In an October 2023 email, Meta's VP of Generative AI, Ahmad Al-Dahle, emphasized that the company needed to, quote, learn how to build frontier and win this race against OpenAI's GPT-4.
[01:02:42] Mike Kaput: Meta's director of product then argued that using LibGen was, quote, essential for achieving state-of-the-art performance, claiming through, quote, word of mouth, that competitors OpenAI and Mistral were also using the library. These documents reveal Meta's attempts to conceal their usage, including plans to remove [01:03:00] copyright headers, document identifiers, and metadata, quote, to avoid potential legal complications.
[01:03:06] Mike Kaput: The company also established, quote, mitigations, including removing clearly marked pirated content and avoiding public mentions of using LibGen data. Paul, I cannot say, unfortunately, that this is surprising, given that we know many, if not all, of the major model companies have behaved in this way with one site or another that has copyrighted content.
[01:03:29] Mike Kaput: I guess what I'm curious about is this: nothing serious seems to have happened yet to these firms. Obviously they're facing a ton of costly lawsuits, but, you know, they're all doing this. Like, are they going to get away with this?
[01:03:40] Paul Roetzer: Yes. Okay, I think so. I think they're just going to keep spending the millions or hundreds of millions they need to keep these legal cases going until the law changes.
[01:03:49] Paul Roetzer: So the way I think about this is: we know they did it, and they know we know they did it. But they don't want [01:04:00] to admit it for legal reasons until those legal reasons are gone. So we're just going to keep having court cases, we're going to keep pushing this forward, and they're going to keep spending their money and keeping their lawyers busy.
[01:04:10] Paul Roetzer: And they're never going to really admit that that's how it's done until it's safe to admit that that's what they did. It's not a secret. Like, they did it. It's the craziest thing.
[01:04:21] Mike Kaput: And with all this emphasis on American competitiveness and AI, it seems like an utter fantasy that any consequence like, say, shutting down a model or anything like that would happen.
[01:04:32] Paul Roetzer: No way.
[01:04:33] Paul Roetzer: And it's all out in the open anyway, because they've built the open source models trained on these things, like Llama, and you can't put it back in the bag. And like I said early on, maybe at some point it's deemed that they broke the law, and they pay some big fines and move on with their lives.
[01:04:50] Paul Roetzer: But especially, as you're saying, with the incoming administration and the focus on accelerating innovation, they're not slowing down for this stuff. [01:05:00] Right or wrong, they're not going to.
[01:05:04] Benchmarking the Energy Costs of Large Language Models
[01:05:04] Mike Kaput: In our next topic, a new study came out from MIT that shows just how much energy large language models consume, and these numbers are pretty eye opening.
[01:05:14] Mike Kaput: The research team used Meta's Llama model to conduct detailed experiments and better understand how LLMs consume energy. They found that running the largest version of Llama requires between 300 watts and 1 kilowatt of power. That's equivalent to running 10 to 30 bright LED light bulbs continuously, just to power a single AI model's operations.
[01:05:38] Mike Kaput: They also found some surprising patterns in how energy gets used. When they spread the AI model across more chips, which you might think would make things more efficient, it actually increased the energy costs substantially. Energy usage jumped significantly moving from 8 GPUs to 32 GPUs, even when processing the exact same [01:06:00] amount of work.
[01:06:01] Mike Kaput: They also discovered different types of tasks consume varying amounts of energy. When testing the model on standard language tasks versus math problems, they found notable differences in energy consumption. This basically suggests that the type of work we ask AI to do has a direct impact on its energy footprint.
[01:06:20] Mike Kaput: They also revealed we may be using more computational resources than necessary. They found that even when running these massive models, only about 20 to 25 percent of the available GPU memory was being utilized. So, it seems like there are some opportunities for optimization. Paul, this is an interesting angle to LLMs that we have not historically discussed much.
[01:06:43] Mike Kaput: Like, these models eat up a ton of energy. We're increasingly going to be using more advanced models more often as adoption rises. What are the implications of all this?
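For a rough sense of scale, here is a quick back-of-the-envelope calculation based on the figures Mike cites. The bulb wattage and electricity price below are illustrative assumptions, not numbers from the MIT study.

```python
# Back-of-the-envelope math for the power figures cited above.
# Assumptions (not from the study): a bright LED bulb draws ~30 W,
# and electricity costs ~$0.15 per kWh.

BULB_WATTS = 30
PRICE_PER_KWH = 0.15

for model_watts in (300, 1_000):  # low and high end of the reported draw
    bulbs = model_watts / BULB_WATTS
    kwh_per_day = model_watts * 24 / 1_000
    cost_per_day = kwh_per_day * PRICE_PER_KWH
    print(f"{model_watts:>5} W ~= {bulbs:.0f} LED bulbs, "
          f"{kwh_per_day:.1f} kWh/day, ~${cost_per_day:.2f}/day")
```

With those assumptions, the 300 W to 1 kW range works out to roughly 10 to 33 bulbs and a few dollars a day per model instance, which is why the real concern is the aggregate across millions of queries, not any single deployment.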
[01:06:54] Paul Roetzer: Yeah. So again, I mean, you've got to put everything in the context, in America at least, of the incoming [01:07:00] administration. There's going to be less focus on the impact on the environment than under the previous administration.
[01:07:07] Paul Roetzer: So I don't see this being a massive political issue in the next four years in relation to the environment. There are obviously going to be people who continue to push that, but I don't think they're going to find friendly ears at the White House that care as much. So I think what's going to happen is it's going to be on the AI companies themselves.
[01:07:28] Paul Roetzer: They're going to push for efficiency in those algorithms, like you talked about, where they can build intelligence more efficiently by, you know, being smarter with how they devise everything, and that can drive cost savings, which has a positive impact, because there are still a lot of people within those companies who care about the impact on the environment, even if, you know, it's not a governmentally supported thing, per se.
[01:07:49] Paul Roetzer: So, you're still going to have people trying to do good. They want to build the intelligence, but they don't want to have a negative impact on the environment, and they want to save money. And so, I think [01:08:00] you're going to see a lot of innovation in this space and a drive for efficiency in the use of GPUs and the building of the models.
[01:08:07] Paul Roetzer: But yeah, I mean, it's been a hot-button issue, and I think it's just a hard one for people to understand. It's hard to come up with an analogy that helps you actually conceptualize the impact it could have. So, like, trying to draw the analogy to the number of LED lights burning, things like that, that's trying to make this matter to people to the point where it's like, oh, that's a big deal.
[01:08:29] Paul Roetzer: Other than that, I think it all just sounds very scientific and abstract to people, and it's like, I don't know, I can't see that impact every day, so it's hard for me to care that much. I'm not saying that's how I feel, I'm saying that's how the average person might feel.
[01:08:45] Mike Kaput: Wow, get ready for a lot more power generation, right?
[01:08:48] Mike Kaput: This is related to all the infrastructure.
[01:08:50] Paul Roetzer: I think that's the key. It's just like, well, then let's build more. That's the mentality right now. It's like, oh, if it needs that much energy, let's just build more into the grid.
[01:09:00] NotebookLM Has to Do “Friendliness” Tuning on Its AI Podcast Hosts
[01:09:00] Mike Kaput: Here's a little lighthearted AI news this week. So Google posted from its NotebookLM account on X
[01:09:07] Mike Kaput: that the AI hosts in NotebookLM actually developed attitudes. So, NotebookLM can create audio overviews, we've talked about these. You basically take all the docs, links, and papers that you upload to a notebook, and it can create a mini podcast hosted by two hyper-realistic AI hosts. If you have not tried this out, it's really cool.
[01:09:27] Mike Kaput: Go do so. However, Google recently added a feature where you can, quote unquote, call in to ask questions and interrupt the hosts while they talk. When they added this feature, the following happened, according to their post. Quote, after we launched interactive Audio Overviews, which let you call in and ask the AI hosts a live question, we had to do some friendliness tuning, because the hosts seemed annoyed at being interrupted.
[01:09:54] Mike Kaput: File this away under things I never thought would be my job, but are. Paul, this was really [01:10:00] funny to me, but it also kind of highlighted a bigger point you and I have discussed a bunch of times: AI is not traditional software.
[01:10:08] Paul Roetzer: Yeah, we don't, we don't know why it does what it does. We've said this many times.
[01:10:11] Paul Roetzer: Like, it just does weird things, and then they've got to go in and figure out why it's doing the weird thing. This is a funny one, but there are very serious instances of this too, where these models start doing things that may be determined to be misaligned with their goals or the values that humans wanted them to have, which leads to unintended outcomes.
[01:10:31] Paul Roetzer: And so it is, it is humorous, but it is also representative of a much larger problem that we deal with with these models.
[01:10:38] Mike Kaput: Yeah. Wait until your AI system with consumption based pricing decides to go read like half the internet or something like that, because that's when your CEO is calling you.
[01:10:48] AI Funding and Product Updates to Watch
[01:10:48] Mike Kaput: All right.
[01:10:50] Mike Kaput: So to wrap up this week, we have a bunch of really quick funding and product updates. So Paul, I'm just going to run through these and then wrap us up here. Sounds good. All right. So [01:11:00] first up, Synthesia, which is a leading AI video generation platform, has announced a major $180 million Series D funding round.
[01:11:07] Mike Kaput: So they claim they're evolving beyond their initial AI avatar technology to offer a comprehensive suite of video creation tools that includes dubbing, screen recording, translation, and collaboration features. The platform currently supports over 230 avatars in 140-plus languages and serves more than a million users.
[01:11:29] Mike Kaput: Next, Anysphere, which is the company behind the viral hit AI coding tool Cursor, has secured a $105 million funding round that increases its valuation to $2.5 billion, a six-fold increase in valuation from just eight months ago. Cursor uses proprietary, OpenAI, and Anthropic models to help programmers code more efficiently through auto-completion.
[01:11:57] Mike Kaput: They have also begun rolling out some agentic [01:12:00] features that can independently complete certain coding tasks. Next, Andreessen Horowitz has announced leading a Series A investment in Slingshot AI, which is a startup developing what it calls the world's first foundation model specifically designed for psychology and mental health support.
[01:12:19] Mike Kaput: This brings the total capital raised by the company for that mission to $40 million. They're aiming to differentiate from general-purpose AI chatbots by focusing specifically on therapeutic approaches. Next up, ChatGPT Tasks has come out. OpenAI has rolled out this new feature called Tasks, where users can ask ChatGPT to do things like give me a news briefing every day at 7am, or remind me when my passport expires in six months.
[01:12:49] Mike Kaput: The AI will then follow through on these requests automatically, even when you're not actively using the app, and start sending you notifications when it has completed a task. This is currently [01:13:00] in beta and available only to paying subscribers.
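If you wanted to approximate the daily-briefing idea yourself outside of ChatGPT, a minimal sketch might look like the following, using the third-party `schedule` library and the OpenAI Python SDK. This is an illustration of the concept, not how OpenAI implements Tasks, and the model will answer from its training data unless you wire in a live news source.

```python
# Minimal sketch: a self-hosted "daily briefing" job, approximating the idea
# behind ChatGPT Tasks. Not how OpenAI implements the feature.
import time

import schedule            # third-party: pip install schedule
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

def morning_briefing():
    # Without a live news feed wired in, the model answers from training data.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Give me a short briefing on today's top AI news."}],
    )
    print(response.choices[0].message.content)

schedule.every().day.at("07:00").do(morning_briefing)

while True:            # keep the scheduler alive; run this as a long-lived process
    schedule.run_pending()
    time.sleep(60)
```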
[01:13:07] Mike Kaput: ChatGPT is also getting an upgrade to custom instructions. This is a feature that allows you to customize your ChatGPT experience. The new system focuses on three key areas: the personality users want ChatGPT to exhibit, preferred communication styles, and specific rules they want the AI to follow. So think of this as fine-tuning your own personal AI assistant to match your working style and preferences.
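For API users, the closest analogue to custom instructions is a system message. Here is a minimal sketch using the OpenAI Python SDK; the in-app feature itself is configured in ChatGPT's settings rather than in code, and the instruction text below is just an example.

```python
# Minimal sketch: approximating ChatGPT's custom instructions with a system
# message via the OpenAI Python SDK. The in-app feature lives in ChatGPT's
# settings UI; this just mimics the effect for API calls.
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "Personality: direct and pragmatic. "
    "Communication style: short bullet points, no filler. "
    "Rules: always flag assumptions and cite sources when possible."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Summarize this week's AI funding news in 3 bullets."},
    ],
)
print(response.choices[0].message.content)
```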
[01:13:33] Mike Kaput: DeepSeek, a Chinese AI lab that made waves earlier this month with its open source DeepSeek-V3 model, just released something called DeepSeek-R1. This is an open source model that they claim matches OpenAI's o1 in performance. It is a reasoning model, just like o1, but unlike o1, this model's license allows for unrestricted commercial use and modification.
[01:13:59] Mike Kaput: Basically [01:14:00] meaning, if the company's claims are true, there's now an open source equivalent to the advanced reasoning models coming out of some of the other labs. The company is also releasing a series of smaller distilled models, and, very notably, the model is priced dramatically cheaper than o1. The main DeepSeek R1 model has a $0.14 per million token input price, which is a fraction of the cost of the same million tokens from o1.
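As a quick illustration of that price gap, here is a simple comparison. The $0.14 per million input tokens figure for R1 is from the episode; the o1 input price used below is an assumption for the example, so check current pricing before relying on it.

```python
# Rough cost comparison for 10 million input tokens.
# DeepSeek-R1's $0.14 per 1M input tokens is cited in the episode.
# o1's input price is an assumption here; verify against current OpenAI pricing.

R1_PER_M = 0.14
O1_PER_M = 15.00   # assumed list price per 1M input tokens at time of recording
TOKENS_M = 10      # 10 million input tokens

print(f"DeepSeek-R1: ${R1_PER_M * TOKENS_M:.2f}")
print(f"o1:          ${O1_PER_M * TOKENS_M:.2f}")
print(f"R1 is roughly {O1_PER_M / R1_PER_M:.0f}x cheaper on input tokens")
```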
[01:14:21] Mike Kaput: Microsoft has announced Microsoft 365 Copilot Chat, a new pay-as-you-go service that makes its AI capabilities more accessible to organizations of all sizes. This new offering has three key components.
[01:14:44] Mike Kaput: One, a free chat experience powered by GPT-4o with web-based knowledge. Two, pay-as-you-go AI agents that can be created and used directly within chat. And three, enterprise IT controls for data protection and agent management. [01:15:00] Next up, Adobe has announced Firefly Bulk Create. This is a new tool that can edit up to 10,000 images simultaneously with a single click.
[01:15:11] Mike Kaput: So this is launching in beta, and the tool has basically two main features: remove background and resize. You can upload images from your computer, Dropbox, or Adobe Experience Manager, and AI can automatically remove backgrounds from entire batches of images all at once. These new features will operate on a consumption-based pricing model.
[01:15:35] Mike Kaput: This will likely require users to purchase premium Adobe Firefly credits. All right, two more updates here this week. AI company Luma has announced Ray 2, which is an advanced video model. Ray 2 can generate videos with realistic, coherent motion, handle complex physics and simulations, and create cinematic scenes with sophisticated camera movement. [01:16:00]
[01:16:00] Mike Kaput: They are making Ray 2 available through their platform for paid subscribers, with API access coming later. And last, but certainly not least, Runway has released Frames, its most advanced base model for image generation, which the company says offers, quote, unprecedented stylistic control and visual fidelity.
[01:16:20] Mike Kaput: We actually covered their announcement of Frames on episode 125. Now, the company says the model is available for Unlimited and Enterprise plan users. They say with Frames, you can begin to define worlds that represent your own artistic points of view, styles, composition, subject matter, and more.
[01:16:41] Mike Kaput: Anything you can imagine, you can bring to life with Frames. All right, Paul, that is it this week. Jam-packed. I'm sure we are in for another crazy week as well. Just a couple final notes here. If you have not checked out the Marketing AI Institute newsletter, check that out at marketingaiinstitute.com forward slash [01:17:00] newsletter.
[01:17:00] Mike Kaput: It contains all the news we covered today and stuff that didn't make it into the episode, which increasingly is a very long list, given how much is going on. And if you have not left us a review and can do so through your podcast platform of choice, we would really appreciate your feedback. All right, Paul, that's it for this week.
[01:17:20] Paul Roetzer: So while we're doing this, just in case people had any doubts about how this is going to play out: Trump was just sworn in, and right behind the vice president were Sundar Pichai, Mark Zuckerberg, Jeff Bezos, and Elon Musk, all sitting together in front of the cabinet. So, like, right behind the vice president, you have your row of billionaires, and then you have the cabinet.
[01:17:46] Paul Roetzer: And Tim Cook is also there, as is the CEO of TikTok. Sam Altman's somewhere there too, but sitting together were Pichai, Musk, Zuckerberg, and Bezos, right behind J.D. Vance. So it is, [01:18:00] we are entering a very different phase in American business and innovation, and the heads of the AI companies are in the first row.
[01:18:10] Paul Roetzer: So buckle up.
[01:18:12] Paul Roetzer: It'll be fascinating. Yeah, we won't be lacking information to discuss each week on the podcast. All right, everyone, thanks. And again, a reminder: the AI Mastery membership quarterly trends briefing is this Friday. If you want to join us, it is open to everyone, and we're going to be sharing our vision for our AI Academy and the AI Literacy Initiative.
[01:18:33] Paul Roetzer: So we'd love to have you there. Thanks again, and, yeah, have a great week. If you're in the Midwest, by the way, stay warm. It's supposed to be like one degree in Cleveland the next two days. So, stay warm, stay safe, and we'll talk to you next week. Thank you. Thanks for listening to The AI Show.
[01:18:51] Paul Roetzer: Visit MarketingAIInstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders [01:19:00] who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.
[01:19:11] Paul Roetzer: Until next time, stay curious and explore AI.
Claire Prudhomme
Claire Prudhomme is the Marketing Manager of Media and Content at the Marketing AI Institute. With a background in content marketing, video production and a deep interest in AI public policy, Claire brings a broad skill set to her role. Claire combines her skills, passion for storytelling, and dedication to lifelong learning to drive the Marketing AI Institute's mission forward.