From AI safety to “AI opportunity,” U.S. tech companies are racing to innovate, while politicians and industry leaders continue to champion AI's rapid advancement.
This week, Mike Kaput and Paul Roetzer analyze the ripple effects of Elon Musk's bid to acquire OpenAI, JD Vance’s keynote address at the AI Action Summit in Paris, the latest GPT-4o update from OpenAI, and unfolding drama surrounding xAI. They also explore the growing impact of robotics, along with other pressing topics in our rapid-fire segment.
Listen or watch below, and scroll down for the show notes and transcript.
Listen Now
Watch the Video
Timestamps
00:05:44 — Elon Musk Bid to Buy OpenAI and Ongoing Feud
- Elon Musk Leads $97.4 Billion Bid to Control OpenAI - The New York Times
- Musk’s Shrewd OpenAI Bid - The Information
- OpenAI Board Will Reject Musk’s ‘Embarrassing’ Takeover Bid, CEO Says - The Information
- Sam Altman's Comments on Elon Musk
- Elon Musk wanted an OpenAI for-profit - OpenAI
- Post from Paul Roetzer on LinkedIn
- Post from Marty Swann on X
00:15:06 — JD Vance Keynote at AI Action Summit in Paris
- JD Vance at AI Action Summit in France - PodiumVC YouTube
- JD Vance rails against ‘excessive’ AI regulation in a rebuke to Europe at the Paris AI summit - AP News
- Post from Neil Chilson on X
- The AI Action Summit: A golden age of innovation - Google Blog
- Statement from Dario Amodei on the Paris AI Action Summit - Anthropic
- Post from Suhail on X
00:28:23 — Effect of Generative AI on Jobs
- The Labor Market Effects of Generative Artificial Intelligence - SSRN
- Post from Jon Hartley on Paper Highlights
- Sam Altman comments on economic disruption from Deep Research - Times Radio
00:34:35 — GPT-4o Update + OpenAI Roadmap
00:40:00 — Grok 3 + xAI Drama
00:45:51 — AI More Empathetic Than Humans
- Turns Out AI Is More Empathetic Than Allstate’s Insurance Reps - Wall Street Journal
- Post from Ethan Mollick on AI for Therapy
00:51:22 — Results of Major AI Copyright Case in the US
00:54:47 — OpenAI Reasoning Model Prompting Guide
00:59:40 — Rise of the Robots
- Meta Plans Major Investment Into AI-Powered Humanoid Robots - Bloomberg
- Robotics Startup Figure AI in Talks for New Funding at $39.5 Billion Valuation - Bloomberg
- Apple is reportedly exploring humanoid robots - TechCrunch
- Episode 87 of The Artificial Intelligence Show
01:04:59 — Apple’s AI for Siri Faces Issues & Delays
01:07:51 — Listener Questions
- We have a leadership team that believes they understand AI, but they do not actually understand it. They just think of using AI and building agents for coding, but don’t realize it can do so much more. What change management ideas would you recommend to get them to really understand?
Summary
Elon Musk’s Bid to Buy OpenAI
Elon Musk, along with a group of investors, has placed a staggering $97.4 billion bid to acquire the nonprofit entity that controls OpenAI—an unsolicited offer that throws a wrench into the company’s current fundraising efforts.
The consortium backing Musk’s bid includes his AI company, xAI, investment firm Vy Capital, and Hollywood power player Ari Emanuel.
OpenAI CEO Sam Altman quickly dismissed the bid with a mocking response on X, offering to buy Twitter from Musk for $9.74 billion instead. Musk shot back with a single-word reply: “Swindler.”
The timing of Musk’s offer is significant. OpenAI is in the middle of securing a massive $40 billion investment round, led by SoftBank, which would value the company at $300 billion. This would make OpenAI one of the most valuable private companies in the world, alongside Musk’s own SpaceX.
At the core of the conflict is OpenAI’s unusual corporate structure. Originally founded as a nonprofit in 2015, the company later formed a for-profit arm to attract investment. However, the nonprofit still retains legal control.
This means that despite OpenAI's enormous valuation, Musk’s bid targets the nonprofit itself—a tiny entity with just two employees and $22 million in assets—because it holds the keys to OpenAI’s future.
Musk’s bid appears to be an attempt to establish the market value of this control, potentially complicating OpenAI’s efforts to sever ties with its nonprofit parent.
JD Vance’s Keynote Speech at AI Action Summit
At the Paris AI Summit, US Vice President JD Vance took a firm stance against what he called “excessive regulation” of artificial intelligence, delivering a sharp rebuke to European efforts to impose strict controls on the rapidly advancing technology.
He started off his speech by saying: “I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity.”
Speaking at his first major policy event since taking office, Vance framed AI as an economic turning point, comparing its impact to the steam engine’s role in the Industrial Revolution. He warned that overregulation could stifle innovation, saying, “It will never come to pass if overregulation deters innovators from taking the risks necessary to advance the ball.”
The US further cemented its position by refusing to sign an international pledge on AI development, a document endorsed by over 60 countries—including China.
The agreement commits signatories to ensuring AI is safe, transparent, and ethical, while also addressing human rights and sustainability. The UK, despite agreeing with much of the pledge, also declined to sign, citing concerns over national security and a lack of clarity on governance.
Vance’s speech also highlighted increasing tensions between the US and Europe over AI regulation.
Effect of Generative AI on Jobs
A new economic analysis of generative AI’s impact on the labor market has found that adoption is accelerating, with significant implications for employment and productivity.
The study, led by Jonathan Hartley at Stanford and a team of researchers from George Mason University, Columbia University, and the World Bank, surveyed US workers and found that as of December 2024, 30.1% of respondents reported using generative AI at work, marking a substantial increase in AI’s workplace presence.
The data shows a strong correlation between AI usage and demographic factors—younger, highly educated, and high-income workers are the most frequent adopters. Industries such as customer service, marketing, and IT report the highest levels of AI integration, while sectors like agriculture, mining, and government lag behind.
One of the study’s key findings is that generative AI significantly boosts productivity. Workers who use AI tools estimate that tasks that previously took 90 minutes can now be completed in just 30 minutes—a threefold efficiency increase.
Beyond its impact on productivity, AI is also transforming job search dynamics. Over 50% of unemployed respondents who were job-seeking in the past two years used AI tools for resume writing, cover letters, and interview preparation.
As AI becomes deeply embedded in the labor market, the study suggests that its effects will be mixed—enhancing efficiency for some while displacing others. Low-skilled workers may face greater risks of job loss, while high-skilled professionals are poised to benefit from AI augmentation.
The study concludes that AI is both a substitute and a complement to human labor, making its long-term impact complex and unpredictable.
This episode is brought to you by our AI for Writers Summit:
Join us and learn how to build strategies that future-proof your career or content team, transform your storytelling, and enhance productivity without sacrificing creativity.
The Summit takes place virtually from 12:00pm - 5:00pm ET on Thursday, March 6. There is a free registration option, as well as paid ticket options that also give you on-demand access after the event.
To register, go to www.aiwritersummit.com
This episode is also brought to you by our 2025 State of Marketing AI Report:
Last year, we uncovered insights from nearly 1,800 marketing and business leaders, revealing how AI is being adopted and utilized in their industries.
This year, we’re aiming even higher—and we need your input. Take a few minutes to share your perspective by completing this year’s survey at www.stateofmarketingai.com.
Read the Transcription
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: One of the things I said that could slow this progress down and this vision for this kind of like technologist acceleration future is societal revolt. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer.
[00:00:19] Paul Roetzer: I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co-host and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.
[00:00:38] Paul Roetzer: Join us as we accelerate AI literacy for all.
[00:00:46] Paul Roetzer: Welcome to episode 136 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host, as always, Mike Kaput. We are recording this February 17th, Monday morning, 11 a.m. Eastern Time, which is relevant because we [00:01:00] expect some models to be coming out this week. So, if a new model has come out that we didn't talk about on the show, you know why.
[00:01:09] Paul Roetzer: Alright, quick reminder, you can stay updated kind of throughout the week at AI Show Pod on Twitter, or X if you prefer. So it's just @AIShowPod. And then we also are using the same handle on our YouTube channel, which is @AIShowPod. So you can check us out on Twitter and YouTube.
[00:01:29] Paul Roetzer: YouTube's cool. If you haven't, uh, been on the YouTube channel before, we not only publish the full video version of this podcast, but our team does an incredible job of cutting up like the main topics into individual videos, and then they also create shorts based on each one. So if you prefer to consume some video content or if you want to share the video content, the team does an awesome job of cutting that all up.
[00:01:52] Paul Roetzer: So thanks to Claire and Cathy who handle a lot of that work behind the scenes. All right, This episode is brought to us by AI for Writers [00:02:00] Summit. We've been talking a lot about this lately, but we have not acknowledged our presenting sponsor, GoldCast. So we are grateful for GoldCast being on board.
[00:02:07] Paul Roetzer: They've been a great partner of Marketing AI Institute for the last couple of years. We're excited to be working with them again, and the Writers Summit actually runs through GoldCast, so it's a great chance for people to experience the sponsor and the platform themselves. So the AI for Writers Summit is coming up on Thursday, March 6th, which, oh my gosh, is only three weeks away.
[00:02:27] Paul Roetzer: I have some work to do, Mike, as I'm sure you do for your presentation. Yeah. If any of our, you know, regular listeners are familiar with the, how this works, we, we have events and then Mike and I freak out two to three weeks before the event, realizing we have yet to create our presentation. so this is coming up again.
[00:02:46] Paul Roetzer: It's a virtual conference. Last year, we had 4,500 attendees from 90 countries. And it's great because, thanks to GoldCast, there is a free registration option. So you can join us for this [00:03:00] Writers Summit from noon to five Eastern on Thursday, March 6th, for free if you choose to. Again, that is thanks to the sponsor.
[00:03:08] Paul Roetzer: There's a paid option that gets you on-demand access and/or a private registration if you choose not to share your contact information with the sponsor. So you can go to AIWriterSummit.com, again, that is AIWriterSummit.com. You can also find it under the events section of the Marketing AI Institute website.
[00:03:27] Paul Roetzer: And second, our fifth annual State of Marketing AI Report is now in the field. The survey is active. This is the, as I said, fifth time we're doing this, so we have a bunch of benchmark data. We've evolved the questions a little bit this year, added some new stuff in there. This is always one of my favorite pieces of content we create every year, Mike.
[00:03:50] Paul Roetzer: It's awesome to see. Last year, we had 1,800 marketers and business leaders take it. So if you're in the marketing field or if you are involved in marketing, I mean, we oversee the [00:04:00] marketing team, we'd love to have your input. How long are we going to have this in the field, Mike? What is the, what's the schedule?
[00:04:05] Paul Roetzer: You know, I
[00:04:05] Mike Kaput: think right now we're open for about four to six weeks, depending on the response, right? So we have a little bit of time and we'll talk about the next several episodes. but I think we're going to hit our kind of target response rate pretty quick, given how much interest we see in these.
[00:04:21] Paul Roetzer: So people can go to stateofmarketingai.com.
[00:04:24] Paul Roetzer: Again, that is stateofmarketingai.com. That will actually take you to the 2024 report page. So you can go grab the report from last year, which I think we've had over 10,000 downloads of that report. So it was a pretty popular report. And at the top of that page is the link to take the 2025 survey.
[00:04:43] Paul Roetzer: So if you've got five to seven minutes, you can go through and give us your input. we'd love to have you be a part of that, and then as soon as the report's ready, we will be sending that out to everyone who participated, and that, what's the plan there, Mike? Is that this spring we're planning on releasing that?
[00:04:58] Mike Kaput: Yeah, so I think the [00:05:00] target is the end of April we'll be having kind of a big launch webinar and releasing the report, so kind of, yeah, right at the, right towards the end of, beginning of, you know, Q2, you'll have some really great data for the, for 2025.
[00:05:14] Paul Roetzer: And we'll definitely talk about, I think we usually do a special podcast episode, maybe going through like the 10 trends from the report.
[00:05:19] Paul Roetzer: So you'll definitely hear about it as we go. But again, stateofmarketingai.com if you want to be a part of that survey. All right. I don't, so much happened like last Monday and Tuesday that I felt like we already talked about it when I was looking through the brief for this podcast. I was like, oh, I guess we didn't actually talk about this since last week, so let's kick it off with OpenAI.
[00:05:40] Paul Roetzer: It's like the never ending soap opera that just keeps on giving.
[00:05:44] Elon Musk Bid to Buy OpenAI and Ongoing Feud
[00:05:44] Mike Kaput: Yeah, and the train that keeps on wrecking over here, right? And so, Elon Musk has, this past week, made another bold move, depending on your perspective, in his ongoing feud with OpenAI. So Elon Musk, along with a group of investors, has placed [00:06:00] a $97.4 billion
[00:06:00] Mike Kaput: bid to acquire the non-profit entity that controls OpenAI. This is an unsolicited offer that seems to be designed to kind of throw a wrench into the company's current fundraising efforts. This consortium backing Musk's bid includes his AI company, xAI, the investment firm Vy Capital, and Hollywood power player Ari Emanuel.
[00:06:24] Mike Kaput: Now OpenAI CEO Sam Altman quickly dismissed the bid with a mocking response on X. He offered to buy Twitter from Musk for $9.74 billion instead. Musk shot back with a single-word reply. He said, Swindler. And you know, the board then again also rejected the bid, but the timing of this offer is kind of what matters because despite the rejection, like OpenAI probably doesn't need this kind of headache because it's in the middle of trying to secure a massive 40 billion investment round led by SoftBank.[00:07:00]
[00:07:00] Mike Kaput: which would value them at 300 billion dollars. That would make them one of the most valuable private companies in the world alongside Musk's own SpaceX. And kind of at the core of all this conflict is OpenAI's unusual corporate structure. Founded as a non profit in 2015, they later formed a for profit arm to attract investment.
[00:07:20] Mike Kaput: However, the non-profit, as of right now, still retains legal control. That's something they're trying to change as they transition to a for-profit company. Now, Musk's bid seems like it may be an attempt to establish the market value of this, potentially kind of complicating OpenAI's whole efforts to get to that for-profit structure.
[00:07:42] Mike Kaput: If they can move forward there, they may now have to justify paying a significantly higher price for their independence from that non-profit. So, Paul, there's a lot of moving pieces here. First up, kind of, we have to ask, with anything Elon Musk does, is this a [00:08:00] serious offer? Is it trolling? Is it both? Is it something else?
[00:08:04] Paul Roetzer: Yeah, I mean, I think it's just largely an attempt to muddy up the waters, create friction for OpenAI and Sam, you know, creating this artificially high value that he, I don't think has any belief would actually end up going through. I don't think he actually thinks he's going to be able to buy OpenAI.
[00:08:22] Paul Roetzer: For context on the Altman tweet, it was a shot at Musk. You know, if you recall, Musk bought Twitter for 44 billion in October 2022, and then proceeded to, you know, the value proceeded to tank to somewhere roughly in the range of nine billion today. So it's definitely been getting personal, and I think Sam stayed out of the personal attacks for a while, and you can tell now with interviews that Sam's patience is basically just gone.
[00:08:51] Paul Roetzer: And so he was interviewed, I think it was like last Tuesday, and he said, I wish he, being Elon, would just compete by building a better [00:09:00] product, and then when the interviewer asked him, you know, for some more detail, he said his whole life is from a position of insecurity, I feel for the guy, I don't think he's a happy person.
[00:09:11] Paul Roetzer: So, you know, I think it's just getting to the point where Sam is just fed up with this and truly just, you know, if you're gonna compete, compete, but like stop trying to screw us over in the process. And Elon feels wronged and that doesn't end well when Elon feels like he's been wronged. So, a little context on the offer, um.
[00:09:31] Paul Roetzer: You know, so you kind of alluded to this, Mike, but there's, and we'll put it, there's a great article from The Information that kind of like had a really good breakdown here. But the key is that, you know, When Altman took over as CEO in 2019 and sort of like Elon was pushed out after he tried to roll OpenAI into Tesla.
[00:09:49] Paul Roetzer: So Elon actually wanted OpenAI to be a for-profit company. In 2017-18, when they realized they needed to raise billions of dollars, Elon was the one pushing for the for-profit entity and [00:10:00] actually tried to move it in and take full control of it under Tesla in 2019, and that's when he lost the power struggle with Sam, and so this friction has existed since, since that time.
[00:10:12] Paul Roetzer: The basic concept, as The Information reports it, is that the non-profit arm, and if you're confused by all this, like, welcome to the club, it's a really complex organizational structure. You can't just simply take a non-profit and make it a for-profit and IPO and everything's fine. Like, the non-profit owns the assets, it controls the mission, like, you can't just extract it out, you have to give the value to, to that non-profit.
[00:10:37] Paul Roetzer: And so, the, the rumor is that in this transition, the non-profit would take a 25 percent equity stake in the for-profit. So if the company, I'm going to do some quick math here, Mike. If the company is valued at 300 billion, 25 percent would be 75 billion, I think. So they're saying that [00:11:00] the equity stake from the non-profit would be roughly 75 billion.
[00:11:03] Paul Roetzer: Well, Elon's saying, fine, I'll pay 97 billion. You as a non profit need to accept this because you have a fiscal responsibility to do so. And we're offering more than you're going to value it at. So they're just trying to like make this like super complex. so there is this irony in here that like this was Elon's plan all along, OpenAI published, we'll put this link back on the site.
[00:11:24] Paul Roetzer: So again, if you haven't been listening to the show for a long time, we covered all this like episodes ago. But, OpenAI in December published like basically an expose of like all of Elon's emails, all their communications that just laid all this out that like, hey, he's the one that wanted to do this.
[00:11:40] Paul Roetzer: We're just doing what he was originally planning on anyway. Yeah. So there's all that. And then there's this weird twist here, that Bret Taylor, who is the chair of the OpenAI board right now, the nonprofit board, he was the chair of the Twitter board when Elon bought Twitter. And so I'll explain why that's [00:12:00] kind of weird in a second.
[00:12:01] Paul Roetzer: So Bret Taylor released an official statement through the OpenAI Newsroom Twitter account that said, OpenAI is not for sale, and the board has unanimously rejected Mr. Musk's latest attempt to disrupt his competition. Any potential reorganization of OpenAI will strengthen our non-profit and its mission to ensure AGI benefits all of humanity.
[00:12:21] Paul Roetzer: So what they've said is basically, we're a mission-driven non-profit, and Elon buying us does not fit the mission, so no. We're not interested, doesn't matter how much. So then Elon actually replies, of course, to the OpenAI Newsroom tweet, and he says, from a friend, quote, Bret Taylor is a scammer running an agent startup that literally has no product whatsoever and is funneling money into OpenAI.
[00:12:43] Paul Roetzer: He just does bureaucracy. So, now Elon's coming at, at Bret. So again, let's go back to 2022, when Elon tries basically a hostile takeover of Twitter, makes an unsolicited offer for 44 [00:13:00] billion or whatever it was, $54.20 a share. He does this through Bret Taylor, so like, this all goes back, they had a relationship, so he, he texts Bret Taylor back in 2022 saying, hey, I'm gonna, I'm taking over Twitter, basically, and then the next day, submits the offer. Well then, if you'll recall, Elon tries to back out of the Twitter acquisition.
[00:13:23] Paul Roetzer: So he blames it on like spam and bots and all this stuff. So he realizes like he got in over his skis and he didn't actually want to buy Twitter. And Bret Taylor and the board basically forced him to see the acquisition through. And so Elon didn't want Twitter. It's kind of worked out. I mean, other than tanking the value, it's kind of worked out okay, because it gave him a platform and influence and data to train his own AI.
[00:13:45] Paul Roetzer: But at the time he didn't want it. And Brett made him do it, basically. So there's, There's just, like, layers of drama and complexity here, and it's just so funny to, like, watch. I don't know, like, if it wasn't scary, it'd be, like, really funny.
[00:13:59] Mike Kaput: [00:14:00] So, the board has rejected this offer. I guess I have to ask, like, is this the end of it?
[00:14:05] Mike Kaput: Is there a chance this could still happen in some form? Like, what is your take on what's most likely to happen in the immediate future?
[00:14:14] Paul Roetzer: Yeah, so what triggered this was that last week a judge basically indicated in the case against OpenAI that the judge was likely to side with OpenAI in this and basically throw Musk's case out.
[00:14:26] Paul Roetzer: So in essence, like, Musk's attorneys, which he probably has an infinite number of and endless resources to pay them, basically just probably have a bunch of, you know, bullets in the chamber of like, all right, cool, like, let's do this next thing and this next thing. So I don't see Musk stopping at all on this.
[00:14:41] Paul Roetzer: And then the irony is, as we'll talk about, he's launching his own new model tonight, Monday night. And so he's directly competing with OpenAI and he truly is, I think, just trying to mess with them and slow them down and try and get out ahead of OpenAI. I don't, I don't know. So, no, I do not think he's going to stop.
[00:14:59] Paul Roetzer: He, he has [00:15:00] no reason to. I think he finds it amusing and he has the resources to do it.
[00:15:06] JD Vance Keynote at AI Action Summit in Paris
[00:15:06] Mike Kaput: Well, in our second topic, speaking of not stopping, at the Paris AI Summit, US Vice President J. D. Vance took a very firm stance against what he called, quote, excessive regulation of AI, delivering a sharp rebuke to European efforts to impose strict controls on the rapidly advancing technology.
[00:15:26] Mike Kaput: He actually started off his speech saying literally, quote, I'm not here this morning to talk about AI safety, which was the title of the conference a couple years ago. I'm here to talk about AI opportunity. So this was his first kind of major policy event since the Trump administration has taken office, and in this, Vance, over about 15 minutes, framed AI as an economic turning point.
[00:15:49] Mike Kaput: He compared its impact to the steam engine's role in the Industrial Revolution, and then he warned that over regulation could stifle innovation, saying, quote, It will never come to pass if [00:16:00] over regulation deters innovators from taking the risks necessary to advance the ball. Now, the U. S. further cemented that position by refusing to sign an international pledge on AI development, which was endorsed by over 60 countries, including China.
[00:16:15] Mike Kaput: That agreement commits signatories to ensuring AI is safe, transparent, and ethical. Vance's speech also kind of highlighted these increasing tensions between the U.S. and Europe over AI regulation, and that kind of comes at an interesting time as the EU starts to enforce its AI Act. So first, Paul, let's kind of talk about this speech itself.
[00:16:37] Mike Kaput: It's pretty short, but I would say, despite being short, its impact has been pretty significant because some people in AI circles have been calling this one of the most pro-innovation, pro-AI-accelerationist speeches they've ever heard from a politician, much less the vice president of the United States.
[00:16:57] Mike Kaput: Like Vance basically comes out and says, we're [00:17:00] accelerating. We want you to work with us, but if you don't want to work with us, get out of our way. So what did you make of this?
[00:17:08] Paul Roetzer: Yeah. So I think contextually, if people aren't aware, it's important to know who JD Vance is. So JD Vance is a former venture capitalist who was brought into political power by Peter Thiel, who co founded PayPal with Elon Musk and was the first major investor in Facebook.
[00:17:25] Paul Roetzer: And Peter Thiel was also the first Silicon Valley leader to support the Trump campaign in 2016. So Thiel, you know, has sort of like invested heavily in bringing politicians that sort of share his worldview into power and politics. So that's J.D. Vance as like a rough background, became a senator, became the vice president.
[00:17:48] Paul Roetzer: So he has Silicon Valley ties, specifically this movement towards like open source and deregulation. So, his background fits very well with this [00:18:00] administration's focus on reducing regulation, putting AI safety concerns aside, as we talked about on the last episode, massive investment in infrastructure, and accelerate at all costs.
[00:18:10] Paul Roetzer: That is the current Silicon Valley mentality, and JD Vance represents that better than any politician right now. That is, that is his MO. So I'll just go through like a couple of excerpts here that really caught my attention. It was a very, as you said, pro-acceleration, I guess would be one way to categorize this talk.
[00:18:31] Paul Roetzer: Very pro-American. I can't, I don't, I don't really know how to even like explain this talk. It is one of the bolder political speeches I have, I have seen. So he says, to restrict this development now would not only unfairly benefit incumbents in the space, but it would also mean paralyzing one of the most promising technologies we've seen in generations.
[00:18:54] Paul Roetzer: This concept goes back to things we talked about last fall of [00:19:00] 2024, this idea of regulatory capture and that the incumbents would control what the regulations would be in their favor. And so small tech, startups, things like that wouldn't have the benefit of fair competition because the people like OpenAI would sort of set the parameters of what the regulations would be.
[00:19:20] Paul Roetzer: He went on to say later, when a massive incumbent comes to us asking for safety regulations, we ought to ask whether that regulation benefits our people or the incumbent. I think he was talking to some specific people with those things. The thing that became very obvious when you read through this transcript is every single word is intentional.
[00:19:39] Paul Roetzer: Like, every word and every phrase. Many of them have meaning behind them, but they're all extremely intentional. And so I think it's worth reading a few times through the transcript because you start to really see what they're saying. So, he outlines four key things. One, his administration will ensure that American AI technology [00:20:00] continues to be the gold standard worldwide.
[00:20:02] Paul Roetzer: We aim to be the partner of choice for foreign countries and businesses as they expand their use of AI. Number two, we believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off, and we will make every effort to encourage pro-growth AI policies.
[00:20:17] Paul Roetzer: Three, we strong, we feel strongly that AI must remain free from ideological bias and that American AI will not be co opted into a tool of authoritarian censorship. I'll come back to that one. And finally, number four, the Trump administration will maintain a pro worker growth path for AI so that it can be a potent tool for job creation in the United States.
[00:20:37] Paul Roetzer: He said, AI will facilitate and make people more productive. It is not going to replace human beings, I think is in him. I think too many leaders in the industry, when they talk about the fear of replacing workers, miss the point. AI will make us more productive, more prosperous, and more free. I'll come back to that one in a moment as well.
[00:20:56] Paul Roetzer: he talked about like infrastructure and how, [00:21:00] this is when I was like, You know, I think we're stretching here. It says the US possesses all components across the full AI stack, including advanced semiconductor design, frontier algorithms, and transformational applications. The computing power the stack requires is integral to advancing technology.
[00:21:13] Paul Roetzer: They do not acknowledge the fact that TSMC, which is in Taiwan, is fundamental to all of this, and, you know, Taiwan, which China believes is part of China. That's a, that's a problem. That'll come up over the next four years a lot. He gets into regulation, and the fact that the EU AI Act and the Digital Services Act is an impediment to U.S.
[00:21:34] Paul Roetzer: technology companies, and they're not going to stand for it, so there are some veiled threats toward the EU for their current policies. Free speech, which is their, I don't know if code word is the right word, but like, that's, that's their language for we are not going to moderate anything online. So we've already seen this where Facebook a couple weeks ago announced to their employees, Zuckerberg did, that they're removing moderation off of Facebook.
[00:21:58] Paul Roetzer: They're basically gonna rely on [00:22:00] AI only. They're getting rid of human moderators. X is obviously the Wild West right now. There's no like real, you know, any information. So they literally said, The administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens right to free speech.
[00:22:15] Paul Roetzer: We can trust our people to think, consume information, develop their own ideas, and debate the open marketplace of ideas. So again, whether you support or don't support this concept, they're telling you point blank, we're not, we don't believe in moderation and we will not allow our social networks within our country to do moderation, basically.
[00:22:32] Paul Roetzer: We're going to let anything, you know, spread on there. And then the last one I wanted to just make sure we touched on is this idea of this future of work, and this is a really important thing. It's going to be the next main topic, and we're going to come back to this next week. So he said, finally, this administration wants to be clear about one last point.
[00:22:48] Paul Roetzer: And again, everything's intentional, and this is the closing. We will always center American workers in our AI policy. We refuse to view AI as a purely disruptive technology that will inevitably automate away our [00:23:00] labor force. We believe AI will make our workers more productive, and we expect they will
[00:23:04] Paul Roetzer: reap rewards with higher wages, better benefits, and safer, more prosperous communities. And then I bolded this one myself: the most immediate applications of AI involve supplementing, not replacing, the work being done by Americans. Now, I'm gonna postulate here, Mike, that most immediate was a very intentional phrase.
[00:23:24] Paul Roetzer: I, so I made a note to myself, like, most immediate is doing a lot of work in this sentence. So then, so then he says, We believe that the U. S. labor force prepared to use AI to its fullest extent will attract the attention of businesses that have offshored some of these roles. I believe that to accomplish this, the administration will ensure that America has the best trained workforce in the world.
[00:23:43] Paul Roetzer: That would be amazing. Our schools will teach students how to manage, supervise, and interact with AI enabled tools as they become part of everyday lives. I, that would be phenomenal. I think that's a great vision. As AI creates new jobs and industries, our government, business, and labor organizations have an obligation to work together.
[00:23:59] Paul Roetzer: To empower [00:24:00] workers, not just in the US, but around the world. So, I think it's really important on this last part, Mike, and then I'll kind of throw it back to you. This future of work content, this is something that just keeps coming up time and time again. And after I saw this talk last week, it sort of, kind of, like, moved me to action, I guess you could say.
[00:24:16] Paul Roetzer: I'm, I'm really, I've said this many times, I'm, I'm struggling tremendously with all of these leaders, and now our own government leaders, proclaiming that this is all just going to be fine. And it's all just going to make more jobs. And there's no actual plan or vision for how that's going to happen. So, if this final section happens, I will be the happiest person.
[00:24:38] Paul Roetzer: Like, if our schools are teaching it, if it's, if it's integrated into the workforce, if we're driving literacy across companies and industries, if we're creating more jobs than we're destroying and not displacing workers, like, I hope that that is the future. My problem is no one has a plan for that future to be created.
[00:24:55] Paul Roetzer: that so that's the thing where I feel like, you know, kind of like [00:25:00] we did a few weeks ago with this AI literacy project. Like at some point you just have to start doing something. Like it's not enough for me to sit here and complain about this every week on the podcast versus like trying to do it.
[00:25:10] Paul Roetzer: And so, you know, I started working on something last week to try and like project out what, what are those jobs, like what, what could be created that we're not envisioning, you know, versus these abstract things that every once in a while Sam Altman will respond to, or you'll hear some other leader respond to.
[00:25:27] Paul Roetzer: I think I made some progress and I wanted to share it today, but It hasn't, we haven't fully tested it out yet, but we were able to build something that can help with this. I think, at least start the conversation in different companies and industries. I shared it with Mike and kind of, you know, Mike was able to do some testing on it.
[00:25:46] Paul Roetzer: So stay tuned next, next week, we should be able to actually have it publicly live and be able to talk about it on the show. But our whole effort is to try and like, let's put some tangible ideas behind this. So if I'm a [00:26:00] lawyer, if I'm a marketer, if I'm an, you know, an accountant or if I'm a consultant or an agency leader, like what are those jobs?
[00:26:06] Paul Roetzer: Like that's, that's the big question right now is like, what jobs? Like let's, let's try and envision this. We know what the models are going to do one to two years out. That's, that's a known path. What jobs could possibly be created that don't exist today as these models get smarter and more generally capable?
[00:26:20] Paul Roetzer: So that was what I set out to solve, like, last Wednesday. Like I said, I think I made progress. Like, I think, I think there's a way we can actually help move this conversation forward. So sort of stay tuned on that, I guess.
[00:26:31] Mike Kaput: Yeah, it seems especially on jobs from what we know of this administration, he was walking a bit of a tightrope, right?
[00:26:38] Mike Kaput: Because there's a huge amount of populist rhetoric and sympathy that brought this administration into power, but also it's backed by people like Marc Andreessen, who I guarantee you do not share this view on labor. They are looking to automate. They're funding the building of AI agent companies to replace people.
[00:26:57] Mike Kaput: Hundred percent. So I [00:27:00] can very well see that this is an interesting line to keep toeing. And yeah, I mean, it points out that there is no plan right now that is at least public.
[00:27:09] Paul Roetzer: Yeah. And I think, and I've mentioned this before, The AI timeline, you know, we talked about in episode 87, I'll reference it again on the robotics conversation in a little bit here.
[00:27:19] Paul Roetzer: But one of the things I said that could slow this progress down and this vision for this kind of like technologist acceleration future is societal revolt. And I do think we're going to start to see a piece of this. There was actually, I had a tweet from, this was on the 15th, from Suhail, who's the founder of Playground AI, and he, he, I'll put it in the show notes, Mike, and you can share it.
[00:27:42] Paul Roetzer: He said, someone hung this on my door in San Francisco, people have lost their mind, Accelerate Humanity, and it was a, it was a placard, like a sign that says, stop AI, like OpenAI is trying to build AGI, they define it as this, if nothing goes wrong, the majority of jobs will be lost to AI. Like, [00:28:00] so, I mean, this is just one tweet, but like people in San Francisco are starting to like form groups and post signs about like stopping AI, and I don't know, like, it's, it'll be interesting to follow this along, but now we have the government for the first time stating what we'd seen like, you know, Andreessen Horowitz and others be stating.
[00:28:23] Effect of Generative AI on Jobs
[00:28:23] Mike Kaput: Well, related to that, in our third big topic, we actually are seeing some additional data on kind of how generative AI might actually impact the labor market. So we saw a new economic analysis come out this past week that found that AI adoption is accelerating and has significant implications for employment and productivity.
[00:28:43] Mike Kaput: So this study in particular is led by Jonathan Hartley at Stanford and a team of researchers from George Mason University, Columbia University, and the World Bank. Now, they surveyed U.S. workers and found that as of December 2024, 30.1 percent [00:29:00] of respondents reported using generative AI at work. So that marks a pretty substantial increase in AI's workplace presence compared to a few years ago.
[00:29:09] Mike Kaput: The data shows a strong correlation between AI usage and demographic factors. So younger, highly educated, high income workers are the most frequent adopters. Industries such as customer service, marketing, and IT report the highest levels of integration, while predictably maybe some sectors like agriculture, mining, and government lag behind.
[00:29:31] Mike Kaput: Now, one of the key study, the study's key findings is that generative AI significantly boosts productivity. Workers who use AI tools estimate the tasks that previously took 90 minutes can now be completed in just 30 minutes, which is a threefold efficiency increase. However, the technology is used intermittently with most employees integrating it for less than 15 hours per week.
[00:29:58] Mike Kaput: Interestingly, [00:30:00] beyond the impact on productivity, AI is also transforming job search dynamics. Over 50 percent of unemployed respondents who were job seeking in the past two years used AI tools for resume writing, cover letters, and interview preparation. So Paul, I think this is some research worth mentioning for a couple reasons, amidst everything else we've covered. I mean, it's got interesting takeaways that we can definitely talk about, but namely, it's pretty recent data, which hasn't always been the case with some of these studies.
[00:30:30] Mike Kaput: It's based on a survey of 4,000-plus people. Looking at some of the methodology, they seem to have done, at least in my novice opinion, a pretty good job of creating some pretty clear and specific questions. Like they define generative AI up front. They ask very clearly about how you're using it. Like very specific date ranges or time ranges, like once a week, twice a week, et cetera.
[00:30:54] Mike Kaput: And it's also recent data. It's from December 2024. So, what did you kind of [00:31:00] make of this, and what can it tell us about where this is all headed or where we stand today?
[00:31:06] Paul Roetzer: Yeah, it seemed really well done and like you said, the recency matters because a lot of times when we get these, you know, reports, it'll be from like six months ago and a generation ago of models.
[00:31:16] Paul Roetzer: So this is current-generation models, roughly. It is, it is in conflict with one of the studies we've talked about on the show before, which was Microsoft and LinkedIn did like a workplace study, like Gen AI in the workforce, in spring of 24, so almost a year ago, and their data at that time said 75 percent of workers had used Gen AI.
[00:31:39] Paul Roetzer: And I thought, I always thought that felt high, but it also was probably centered more on like the tech world, I would imagine, which would make sense then. So this seems like a more diverse base of like industries they were looking at, career paths. I like how they go into a lot of specifics and this gets, gets back to what we were talking about in the last couple episodes of this idea of these [00:32:00] evaluations that compare it to like the smartest humans in the world at like the most complex problem solving, like mathematics and biology.
[00:32:09] Paul Roetzer: That's good, but that doesn't tell us about like future of work stuff per se. So I really like these studies where we get into like real tangible applications in people's jobs. One thing to consider is like you can't put too much weight on any one of these studies over an extended period of time because they're all snapshots in time.
[00:32:32] Paul Roetzer: So as the models scale, we're not sure if the impact scales. So if we get a 10x improvement or like 10x compute put into the training of a model, do we see a 10x impact on the, on the jobs and the tasks that it's able to perform? So, we had this quote recently from Sam Altman when he was talking about the OpenAI Deep Research project that we talked about last week.
[00:32:57] Paul Roetzer: And he said, it's not a scientifically rigorous thing, [00:33:00] but my vibes based estimate is that it, being deep research, does about 5 percent of all tasks in the economy today. One feature, 5 percent of all tasks in the economy. So, that's the thing we're trying to figure out, is like, Where are these models and what are they actually capable of doing?
[00:33:18] Paul Roetzer: Because I, again, I don't think that the majority of leaders, whether that's our government leaders, like we just talked about, or if it's business leaders or university leaders, I don't think people realize how many of the tasks we do, we're within like 12 months of these AI doing them at or above
[00:33:37] Paul Roetzer: average human level. Right. And that's the part where, as we jump to, you know, we're, we know we're going to get 4.5. We'll talk about that in a minute. We're going to get some new version from Anthropic, a new model from them, maybe this week. We know we're getting Grok 3 this week. As these things get better and better and more generally capable, what truly is the impact on, on work?
[00:33:57] Paul Roetzer: And that's what we're trying to like figure out. [00:34:00]
[00:34:00] Mike Kaput: I think people get too focused on the almost science-fiction-like changes that very well could be coming, right, that are important to consider, but this study alone is saying that, look, we're three times more efficient with these tools today. Obviously, that's a broad statement, but if you imagine everything stopped tomorrow.
[00:34:20] Mike Kaput: And all we got was a 3x efficiency gain across the board in knowledge work. That seems like something that's going to have a pretty significant impact. It's a big deal. All right. So let's jump into a bunch of rapid fire topics for this week.
[00:34:35] GPT-4o Update and OpenAI Roadmap
[00:34:35] Mike Kaput: So first up, Sam Altman has revealed OpenAI's upcoming product plans in a post he put on X on February 12th.
[00:34:44] Mike Kaput: Now in it, Altman detailed the company's plans for GPT-4.5 and GPT-5, saying, quote, We want to do a better job of sharing our intended roadmap and a much better job simplifying our product offerings. We want AI to just work for you. We [00:35:00] realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence.
[00:35:08] Mike Kaput: We will next ship GPT-4.5, the model we call Orion internally, as our last non-chain-of-thought model. After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a wide range, a very wide range of tasks.
[00:35:31] Mike Kaput: In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model. The free tier of ChatGPT will get unlimited chat access to GPT-5 at the standard intelligence setting, subject to abuse thresholds. Plus subscribers will be able to run GPT-5 at a higher level of intelligence, and Pro [00:36:00] subscribers will be able to run GPT-5 at an even higher level of intelligence.
[00:36:04] Mike Kaput: These models will incorporate voice, canvas, search, deep research, and more. Now, as if that wasn't enough, Altman, a few days later, also announced, again via a post on X, that the company has officially updated GPT-4o. He said, we put out an update to GPT-4o. It is pretty good. It is soon going to get much better. Team is cooking.
[00:36:24] Mike Kaput: Team is cooking. Now, I've seen some early users report the updated model seems like it has way more personality and produces much better results, especially around writing. I've kind of noticed a little bit of that as well in my early testing. But this is a lot of pretty big news in a couple of pretty short informal posts on X.
[00:36:45] Mike Kaput: I mean, for me at least, it seems like a breath of fresh air. We're going to simplify picking models finally, something we've talked about. It seems also like a huge change that the o-series and GPT-series models are going to be unified at [00:37:00] some point going forward. So, Paul, what jumped out to you about these announcements?
[00:37:07] Paul Roetzer: Yeah, I think, you know, first, it was everything we've been pleading for on the podcast. So fix the naming conventions, get rid of all the model choice for the average business user who has no idea which model to pick. Yeah, I would assume they'll solve for, I saw some developers complaining on Twitter about, you know, taking away their choice.
[00:37:24] Paul Roetzer: I assume they'll just have like an advance button or something like that that lets people, you know, go through and if they still want to choose, they can. Um. I've seen some requests for like your ability to personalize the model. I think that's going to come in also, like how creative you want it to be in its outputs, you know, things like that, how personalized you want it to be, how much you want it to use its memory of you, things like that.
[00:37:45] Paul Roetzer: This is where all the models are going. Gemini will do the same thing. Logan Kilpatrick tweeted as much, like this is our vision for Gemini 2. So these multimodal models that have reasoning baked into them, that is the future. That's what all [00:38:00] the models will be in 2025. And we've known that like this was, you know, a year ago.
[00:38:05] Paul Roetzer: This is what we said was coming. So it's been a pretty obvious path that was being followed here. It's a very unusual move for Sam to share their, their roadmap. My guess is that was triggered by knowledge that xAI was likely going to do something with Grok this week. And the rumors are that Anthropic will be releasing a model this week as well.
[00:38:26] Paul Roetzer: And I expect that those may at least be on par with 4o's ability, if not, according to Elon, you know, surpass the model. He said it'll be the smartest that they'll have. We can talk a little bit more about that in the next topic. But I think they're trying to kind of get ahead of some of this. And, and so I assume that's why they did it.
[00:38:49] Paul Roetzer: He, Aravind Srinivas, the CEO of Perplexity, tried to, like, I don't know, like, take a shot at Sam on the tweet. And he's like, what is it [00:39:00] better at, or something like that? And Sam replied, among many other things, it's the best search product on the web. Check it out, let me know what you think. And then someone said something, I forget what the comment was, and he replied, oh, they wanted like o1-level reasoning, and he said how about o3 pro-level intelligence and even better personality. So I think either 4.5,
[00:39:21] Paul Roetzer: which we'll see in a couple weeks, I think maybe they'll drop it tomorrow to mess with Elon, or 5, which it sounds like we'll see in a couple months, will assume this like really advanced reasoning capability. And more personality, as you were saying, Mike, like it's kind of like more personable, there's less restrictions on what it does and says. I think they're starting to like pull some of the guardrails back.
[00:39:43] Paul Roetzer: So, yeah, I don't know, I was playing with custom GPTs over the weekend a lot, and I wasn't like, seeing a lot of it right within there, and I tried a few regular chats in GPT-4o, but I didn't personally like experience the dramatic change in personality, but I saw a lot of people online saying the same [00:40:00] thing.
[00:40:00] Grok 3 + xAI Drama
[00:40:00] Mike Kaput: So at the same time, in another piece of news, we have gotten confirmation that Grok 3 is imminent. Elon Musk has posted that Grok 3, xAI's newest model, will drop on Monday night, this is the day we're recording this, so tonight for us, at 8 p.m. Pacific time. Musk says it is, quote, the smartest AI on earth.
[00:40:21] Mike Kaput: Now, like everything Musk does, this announcement is also accompanied by some drama. On February 11th, an AI engineer at xAI, who's very vocal on X, announced his resignation from the company in a post on the platform. His name is Benjamin De Kraker, and he says xAI told him he had to delete a post from February 8th that simply acknowledged that Grok 3 existed, which he points out Musk himself had done publicly many times.
[00:40:53] Mike Kaput: He decided instead to leave the original post up and resign instead of getting fired. Now, this original [00:41:00] post was, pretty benign, like it said, quote, the ranking, currently, my opinion, for code. Meaning he's ranking a bunch of AI models and how they code. And he lists them off in order. ChatGPT 01 Pro, 01, 03 Mini, all kind of tied.
[00:41:15] Mike Kaput: Groq 3, expected, TBD, to be determined. Claude 3. 5 Sonic, DeepSeq, GPT 4. 0, Groq 2, Gemini 2. 0, Pro series. Might be higher, we'll probably move up. So pretty standard tweet or post on X that he ends up having to resign over. So I guess two pieces to this, Paul, like what can we expect from Groq 3? Will it live up to the hype?
[00:41:39] Mike Kaput: And also I wonder, just curious what you think here. I honestly wonder if he got in trouble, not just for mentioning it exists, but because he said it was behind OpenAI, and we know Elon Musk probably doesn't take kindly to that kind of thing.
[00:41:56] Paul Roetzer: Yeah, I think it's pretty safe to assume that's why he got, [00:42:00] was asked to remove it, and chose not to, is because he basically acknowledged the product he was working on wasn't as good as the competitors, so.
[00:42:08] Paul Roetzer: Yeah, and he's a pretty prominent, like, engineer there. Yes, yeah. He's a well-known guy. In terms of Grok 3, who knows, like, again, this is what Elon does with Tesla, too, like these late-night, 8 p.m. events, so you're like, you know, I'm not staying up until 11 o'clock tonight to watch this, I'll check the tweets in the morning, but yeah, he like always does these like big events late at night on, you know, the West Coast, um.
[00:42:33] Paul Roetzer: My guess is it, I mean, again, I'm just going off of traditionally how like Tesla events go with these sorts of things. There's a lot of promises and like what's gonna be able to happen. Like, you know, in the last Tesla event where they showed the Optimus robots and the robots were actually tele operated, but they didn't disclose that.
[00:42:52] Paul Roetzer: So you'll see these like capabilities that seem really outlandish and amazing, but it's not actually there yet. [00:43:00] Full self-driving's another one like it. Teslas don't fully self-drive, but like they demonstrate it. So my guess is we'll see some incredible demonstration. I doubt that it's like tomorrow morning, we're going to wake up and have like this and all these capabilities they're showing, but I don't know.
[00:43:16] Paul Roetzer: I mean, maybe, maybe this is gonna be different. He was interviewed and he said, Grok, quote, Grok 3 is scary smart and outperforms any released model we are aware of. It will definitely have reasoning capabilities. Like, that's the one thing we do know. The one thing I wanted to note here is, like, to give xAI a little bit of love on Grok, like, if you haven't used it yet, when you go into a tweet now, there's an, like, an X button, an xAI or Grok button.
[00:43:41] Paul Roetzer: I have to look and see what it looks like. Yeah, it's just xAI. And so in the top right, like, so you see the person's name, the date, and then to the right, you have your three little dots, and then an xAI logo. And if you click on that, it'll explain the post. And it uses Grok to actually, like, add context to the post.
[00:43:59] Paul Roetzer: I found [00:44:00] them incredibly useful. Like, I use it all the time when I'm looking at really technical things that I don't understand. It'll, you know, open up a conversation around it. Or like, just, I don't know, maybe like Gen Z slang or something. Like, I don't even know what they're saying. It's like, what does this mean?
[00:44:15] Paul Roetzer: So, I don't know how much people use it. I don't know that it's affecting accuracy and truth on Twitter, like people actually seeking those things out, but it is there, and if you haven't used it, I would check it out. The other thing I kind of alluded to earlier: I would not be shocked at all if Sam Altman does something tonight or tomorrow to kind of newsjack Elon.
[00:44:38] Paul Roetzer: That would be pretty shocking. I'll actually be more surprised if OpenAI doesn't do something to screw with Elon back now that we know how personal this has become.
[00:44:48] Mike Kaput: Could see like a 3 or 4 a.m. tweet from Sam saying some crazy thing that suddenly gets covered on East Coast time in the morning, right?
[00:44:56] Paul Roetzer: Now, the interesting thing here: I'm pretty sure there's a [00:45:00] standalone Grok app now. I don't have it; I just use Grok within Twitter. You know, we saw the DeepSeek app skyrocket to number one. I don't know where it's at now. You've got to wonder, that's got to bother people like Elon and Anthropic, people who haven't been able to capture what ChatGPT has.
[00:45:20] Paul Roetzer: ChatGPT is far and away the leader. Even if you go back to that gen AI jobs report, Mike, that you just alluded to, ChatGPT is number one there. ChatGPT is generative AI to a lot of people; it's synonymous with it. And so you've got to think: you're going to release this thing, it's the smartest thing in the world, scary smart, and it still might not get consumer attention, which I don't think it will, because no matter what it is, it's just really hard to come at ChatGPT right now.
[00:45:48] Paul Roetzer: Right.
[00:45:51] AI More Empathetic Than Humans
[00:45:51] Mike Kaput: In our next topic, new research and a new report suggest that AI may not just match human empathy, but may [00:46:00] in some cases exceed it. So stay tuned. There are two things, a study and a report, we want to talk about. First, the study: a team of researchers tested whether people could tell the difference between responses from GPT-4o
[00:46:12] Mike Kaput: and licensed therapists when presented with various therapy vignettes. The results: participants actually struggled to tell AI from human responses, and when asked to rate them, they actually preferred the AI-written responses in key areas like empathy, therapeutic alliance, and cultural competence.
[00:46:32] Mike Kaput: The AI responses were also linguistically richer and more structured, which may have contributed to their higher ratings. However, the study stopped short of measuring therapeutic effectiveness. The researchers made it very clear that just because AI can produce responses that sound empathetic doesn't mean it can replace real therapists.
[00:46:50] Mike Kaput: Meanwhile, in the corporate world, we also got a report about how Allstate, one of the largest insurers in the U.S., is using AI to generate nearly all its [00:47:00] emails for the back-and-forth between customers and reps after a claim is filed. The reason is that these AI-generated emails are less accusatory, use clearer language, and apparently feel more empathetic, according to the company, than those written by humans.
[00:47:17] Mike Kaput: According to Allstate's Chief Information Officer, AI removes frustration and bias from claims processing, ensuring customers get responses that are fair, patient, and well explained. The AI-generated emails acknowledge people's concerns, clarify information, and offer help proactively, whereas in the past, human agents' insurance jargon and confrontational tones created poorer customer experiences.
[00:47:45] Mike Kaput: So Paul, this seems like an area, honestly, that people kind of miss or underweight. The models have gotten a lot better at sounding at least natural and empathetic. I don't know if it's a given that humans will always [00:48:00] be the most empathetic option in these situations, especially when you factor in that AI has unlimited patience and isn't impacted by mood. What should we be thinking about here?
[00:48:11] Paul Roetzer: Yeah, this is a trend that we're going to have to get used to. A way to think about this: if you're in marketing and you do copywriting, ad copy, landing page copy, websites, emails; if you're in sales and you're doing proposals or responses; if you're in customer service, like we see with the Allstate example, or HR, just take your pick.
[00:48:37] Paul Roetzer: And imagine the best person in your department, the one that you want training everybody else. Mike, think back to our days running an agency. What we used to do is say, okay, let's take Mike's writing ability and have Mike teach other people how to do what Mike does. You go through these extensive training processes with people. You have to [00:49:00] make sure they stop making the obvious mistakes, and once you reduce that, it's okay, now let's focus on the creative component. It is a process to train people and develop them. It is a rewarding process; it is part of what I have enjoyed about running companies for the last 20 years. But it is a process. Instead, imagine that across all these different departments in your company, you can take the best person, all of their abilities, and train a model to be like that every single time.
[00:49:31] Paul Roetzer: Most empathetic, most creative, best at problem resolution, things like that. What we're going to have to come to accept in a lot of different professions and industries is that the AI is all of those things, and that's going to be the preferred output. If the human on the other end doesn't know whether the AI did it or a human did it, there's a very real chance they're going to prefer the AI output.
[00:49:58] Paul Roetzer: Now, this is not me [00:50:00] advocating for this; I am just telling you what I think is fact about the near future: a lot of professions are going to have to come to realize that humans prefer the AI-generated output, that it is more effective. And I don't know the ramifications of that. That's what we were talking about with the jobs topic, trying to look out ahead and say, okay, what are the humans doing when AI is better than the agents responding to these inquiries?
[00:50:29] Paul Roetzer: And this is the part where the government can say all it wants, we can have as many speeches as we want about no displacement of jobs, but I've yet to see the plan for what happens in each of these departments and industries when this becomes true, which it probably already is in many cases, let's be honest. You and I have talked about this before: there are so many things, if you think back to our days running an agency, whether it was building proposals or internal emails or strategic plans, things we used to spend days and weeks on, [00:51:00] that I'm fairly convinced some of these models are just better at.
[00:51:05] Paul Roetzer: So, I don't know, I think it's just a reality people are going to have to come to terms with, and then we've got to figure out what that means. It doesn't mean jobs go away necessarily, but we've got to be proactive about figuring out what it means.
[00:51:22] Results of Major AI Copyright Case in the US
[00:51:22] Mike Kaput: So in some other news, the first major US copyright ruling against an AI company has landed and it sounds like some bad news for the generative AI industry.
[00:51:32] Mike Kaput: A federal judge has ruled in favor of Thomson Reuters in its lawsuit against Ross Intelligence, a legal AI startup that trained its model on content from Westlaw, which is Thomson Reuters' legal research platform. The judge rejected all of Ross's defenses, including claims that its use of copyrighted materials fell under fair use. Judge Stephanos Bibas said, quote,
[00:51:59] Mike Kaput: "None of [00:52:00] Ross's possible defenses holds water. I reject them all." The implications are pretty significant. Gen AI companies, including OpenAI, Google, Meta, etc., sometimes argue that scraping copyrighted content to train AI models is transformative and falls under fair use. This ruling suggests courts may not end up agreeing.
[00:52:21] Mike Kaput: Now, Ross Intelligence actually shut down in 2021 due to the cost of litigation. They're no longer operating, but the ruling still matters because it may set a precedent that AI companies cannot freely train on copyrighted material without permission. Legal experts, including Cornell professor James Grimmelmann, put it bluntly in an article in Wired, saying, quote, "If this decision is followed elsewhere, it's really bad for the generative AI companies."
[00:52:50] Mike Kaput: So, this is a major media company, Thomson Reuters is huge, Reuters, the news outlet, is a big deal, but it's also suing a [00:53:00] smaller AI company. It seems unlikely, Paul, that we're going to see someone like OpenAI get bankrupted by lawsuits, right? If courts begin to uphold copyright in this way with the big labs, will the end result necessarily be that they're out of business, or that they're going to stop doing this?
[00:53:18] Paul Roetzer: Yeah, so, if people are new to this, the basic premise here is that these companies absolutely trained on copyrighted material; there's no debating that. They don't disclose it; they try to hide the fact that they did it most of the time. But they did it, and their argument is that they were allowed to do it. There was actually some recent stuff with Meta, which is in the middle of kind of a sticky court case on this, where they were caught knowingly hiding the fact that they were doing it. So it's like, they know it's not legal, they kind of try to hide it, but they know everyone else is doing it, so they do it anyway.
[00:53:57] Paul Roetzer: And then they just hope that eventually they can [00:54:00] spend enough money on legal fees to get through it. Now, the good news for the AI model companies is, I could definitely see, if this stuff ever gets to the Supreme Court, a reasonable chance that they're going to get a very favorable ruling that would basically throw all this out.
[00:54:17] Paul Roetzer: Again, go back to JD Vance's talk: does it sound like a government that is going to slow down the AI model companies? No. I am not an attorney, but my read on this is that there are probably going to be a bunch of these little wins, and I think at the end of the day, it's not going to do anything. It's going to cost these companies some money, some fines maybe, but I don't think it's slowing anything down, personally.
[00:54:47] OpenAI Reasoning Model Prompting Guide
[00:54:47] Mike Kaput: Next up, OpenAI has officially published some new guidance on how to use its reasoning models like o1 and o3, and has confirmed best practices for when and how to apply them compared to traditional [00:55:00] GPT models. Now, the core takeaway from OpenAI's guidance is that these two model families serve fundamentally different purposes.
[00:55:09] Mike Kaput: GPT models they describe as, quote, "workhorses." They're built for speed, cost efficiency, and straightforward execution. That includes things like GPT-4o. They handle well-defined tasks with lower latency, making them ideal for quick responses and content generation. Then there are reasoning models like o1, o3-mini, and o3-mini-high, which are out right now.
[00:55:33] Mike Kaput: These, on the other hand, are, quote, "planners." They're designed to take on complex, ambiguous problems, synthesizing information across multiple documents, making decisions with expert-level precision, and navigating nuanced logic. These capabilities, says OpenAI, make them better suited for industries like finance, law, and scientific research, where accuracy and reasoning over large datasets matter more than [00:56:00] speed.
[00:56:01] Mike Kaput: Now, crucially, OpenAI also gave us new guidance confirming best practices for how to prompt reasoning models effectively. Unlike GPT models, which are what we've typically been using and which benefit from step-by-step prompting, reasoning models already engage in deep internal reasoning. So previous best practices, things like telling a model to think step by step, may not improve performance.
[00:56:28] Mike Kaput: They even note that some of these traditional prompt engineering practices can sometimes hinder performance when you're using a reasoning model. Instead, they recommend keeping prompts simple and direct, starting with zero-shot prompts, using clear delimiters like Markdown or XML to separate different sections of input, and outlining the goal you want to achieve, not the steps to achieve it.
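To make that guidance a little more concrete, here is a minimal sketch of what a goal-oriented, zero-shot prompt to a reasoning model might look like. It assumes the OpenAI Python SDK; the model name, the contract-review task, and the prompt wording are illustrative assumptions, not examples taken from OpenAI's guide.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY set in the environment. The model name and task below are
# hypothetical examples for illustration only.
from openai import OpenAI

client = OpenAI()

# Zero-shot, goal-oriented prompt: state the outcome you want, use clear
# delimiters (Markdown headers and XML-style tags) to separate sections,
# and skip "think step by step" instructions -- the reasoning model handles
# its own internal reasoning.
prompt = """## Goal
Identify the key financial risks in the contract below and flag any clauses
that deviate from standard indemnification language.

## Contract
<contract>
[paste the contract text here]
</contract>
"""

response = client.chat.completions.create(
    model="o3-mini",  # substitute whichever reasoning model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

For a GPT-class "workhorse" model, the same request would typically swap in a faster, cheaper model name, and explicit step-by-step instructions or few-shot examples could be added without hurting performance.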
[00:56:54] Mike Kaput: So Paul, this advice is quite a bit different from the typical prompting advice out there for traditional [00:57:00] models. Obviously, knowing how to prompt each of these is really valuable right now. But I guess I keep thinking, at what point do we not need to do anything except talk to the model and give it as much context as possible?
[00:57:13] Mike Kaput: Like, how long are these kinds of different rules and settings going to last?
[00:57:18] Paul Roetzer: I don't know. I mean, we thought this whole idea of like prompt crafting or prompt engineering, whatever you want to call it, was sort of like, not going to be that important, but we're two and a half years into this now and like, it still matters.
[00:57:31] Paul Roetzer: And with these new models, as they keep evolving them, you have to learn new tricks to better talk to them. As someone who was playing around with custom GPTs the last five days, I can tell you they're just weird still. You have these instructions, and it's sort of doing what you want it to do, but it doesn't really do what you want it to do.
[00:57:49] Paul Roetzer: You have to figure out how to talk to it to get it to do the thing, and the only way you figure that out is experimenting. [00:58:00] So prompting does matter. I see it every day myself; I'm sure you see it too, Mike. Learning how to talk to these reasoning models matters, and learning how to prompt the traditional models still matters.
[00:58:12] Paul Roetzer: So, I don't know, again, I still don't think prompting is a career path, but when we're interviewing for hires, I want to explore their prompting abilities. That's a skill I want to know they have: talk me through an example where you've had to coax the AI into doing something that maybe it wasn't doing right, things like that.
[00:58:33] Paul Roetzer: That's the new job interview question for me. Now, interestingly, sort of related while we're on this, Sam just tweeted about trying GPT-4.5, which, again, I think is going to have... no, that won't have the reasoning model in it. GPT-5, I think, is the one he said will have the reasoning in it.
[00:58:52] Mike Kaput: Yeah, I think 5 is the one that's going to have it, yeah.
[00:58:53] Paul Roetzer: Okay, so 5 is coming in a few months. The tweet said trying GPT-4.5 has much more of a, quote, [00:59:00] "feel the AGI" moment among high-taste testers than I expected. Amjad Masad, who's the CEO and founder of Replit, replied, "Is the update coming to voice mode?" He said, "Not with 4.5, but we want to make it much better. Super important for future products."
[00:59:19] Paul Roetzer: Somebody, a random person, said, "Steal the show tonight, man. Livestream at 7:30 PM and chill." He replies, "That wouldn't be very nice. Dot, dot, dot."
[00:59:32] Paul Roetzer: So it's at least on his radar to mess with Elon. I may actually have to stay up now just to see what they do, aren't I?
[00:59:40] Rise of the Robots
[00:59:40] Mike Kaput: Yeah. Well, it's also been a huge week for humanoid robotics because a few companies have released some interesting news in this space. So first, Meta is making a major investment in AI powered humanoid robots.
[00:59:55] Mike Kaput: It's kind of marking Meta's next step into AI. The company has formed a new [01:00:00] robotics team within its Reality Labs division. They actually plan to make their own humanoid robotic hardware, with a focus first on robots for household chores. But really, the bigger focus, says Bloomberg, is on developing software, AI models, and sensor systems for robots.
[01:00:14] Mike Kaput: So they're kind of taking a similar approach to what Google did with Android, creating core technology that other manufacturers can end up using. It sounds like they've already started discussions with robotics companies like Unitree Robotics and Figure AI. Figure AI, by the way, which is one of the most well-funded humanoid robot startups, is in talks to raise a staggering
[01:00:42] Mike Kaput: $1.5 billion at a $39.5 billion valuation, which is a massive leap from its previous valuation of just $2.6 billion. They were previously backed by OpenAI, though they have, as we reported last week, terminated that technical partnership, [01:01:00] as well as by Microsoft and NVIDIA. Figure is developing humanoid robots that it hopes will take on dangerous jobs and help address the
[01:01:08] Mike Kaput: labor shortage. At the same time, Apple is quietly exploring its own robotics ambitions. A new research paper and leaked supply chain reports suggest that the company might be in the early stages of working on both humanoid and non-humanoid robots. However, they appear to be in a phase of experimentation.
[01:01:29] Mike Kaput: Analysts caution the project is still in the proof of concept stage. And if Apple does move forward, a consumer robot likely wouldn't hit the market until at least 2028. So Paul, it does seem like this market for humanoid robots is heating up. Does this track with your AI timeline that you've published in the past?
[01:01:46] Mike Kaput: I know robotics was a key milestone there.
[01:01:49] Paul Roetzer: Yeah. So the episode 87 I referenced earlier, and again, I'm working on an updated timeline. We'll probably do a March episode about the timeline. And I'll share a link on [01:02:00] LinkedIn, Mike, that we can drop in the show notes for the robotics part of this. So I'll go back.
[01:02:04] Paul Roetzer: This is verbatim from the March 2024 timeline I put together. I had robotics at 2026 to 2030. Lots of investment going into humanoid robots in 2024, which is, again, when I wrote this: OpenAI, Tesla Optimus, Figure, Amazon, Google, NVIDIA, Boston Dynamics, etc., leading to major advancements in the hardware.
[01:02:26] Paul Roetzer: Multimodal LLMs are the brains embodied in the robots. In the 26 to 30 range, we start to see widespread commercial applications. For example, a humanoid robot stocking retail shelves or providing limited nursing home care. Commercial robots will likely be narrow applications initially, trained to complete specific high value tasks.
[01:02:46] Paul Roetzer: But more general robots that are capable of quickly developing a range of skills through observation and reinforcement learning will emerge. That's what's happening now; they're working on that. There's the potential for general-purpose consumer robots in the next decade. [01:03:00] Elon Musk says there will be billions. These robots would be capable of performing in-home tasks.
[01:03:05] Paul Roetzer: For example, laundry, dishes, cleaning, maintenance, and they'd likely be available for purchase or lease. They will start as a luxury for the elite and then quickly move into the mass market as manufacturing costs rapidly fall due to technological advances and competition. Tangible impact on blue-collar jobs starts to become more clear.
[01:03:23] Paul Roetzer: So that was, again, what I wrote in March 2024, and it certainly seems to jibe with what we're seeing right now. The one note I put on Twitter the other day about this is: if you missed the 2015–2016 AI investing window, where you didn't buy NVIDIA in 2016 and things like that, I'm going to give you a tip.
[01:03:47] Paul Roetzer: Now, again, this is not investing advice, and I'm not telling you specific companies, but if you have the belief that humanoid robots are going to be a massive market, which Tesla, Apple, NVIDIA, [01:04:00] Meta, Amazon, OpenAI, and Google all seem to think, figure out the humanoid robot supply chain. Who is the TSMC of humanoid robots?
[01:04:11] Paul Roetzer: Who are the core companies that are going to power this robot revolution? And make a couple of bets. Again, I'm not saying specific companies; I'm not even sure who they are. I mean, the ones I just talked about are the obvious candidates, and I'm not saying there are any winners per se in there.
[01:04:27] Paul Roetzer: But someone's going to be building all of the pieces that go into these robots, and that would be something worth investigating, I guess, if you're looking to kind of get in on the next wave of AI.
[01:04:40] Mike Kaput: Go run a really careful OpenAI deep research report on this and justify your expenses for ChatGPT Pro for the year, right?
[01:04:49] Mike Kaput: That would be a good use of it.
[01:04:49] Paul Roetzer: Yeah. Maybe you can make your money back.
[01:04:50] Mike Kaput: Yeah. I'm going to run that after we get off this podcast, honestly.
[01:04:59] Apple’s AI for Siri Faces Issues & Delays
[01:04:59] Mike Kaput: Some other [01:05:00] Apple news this week, and this is not positive: Apple's long-awaited, AI-powered overhaul of Siri is running into serious challenges. We talked about this an episode or two ago; they are having some engineering issues and software bugs that are threatening to delay or limit the release.
[01:05:16] Mike Kaput: This is a central piece of Apple Intelligence, and it was initially set for release in April with iOS 18.4. Some sources now indicate that some features may be postponed until at least May or later. These are planned improvements to make Siri truly intelligent: the ability for Siri to pull information from your messages, emails, and files to provide better answers;
[01:05:38] Mike Kaput: a new app control system allowing Siri to more precisely interact with and manage iPhone apps; and on-screen context awareness, meaning Siri would be able to see what's currently displayed on the user's device and respond accordingly. So Paul, this news is just the latest stumble for Apple when it comes to AI.
[01:05:58] Mike Kaput: We've talked ad nauseam [01:06:00] about their missteps here, but you've also been experimenting with their current Apple Intelligence features. What is your take on where Apple is right now in the AI arms race?
[01:06:10] Paul Roetzer: Yeah, so, Apple Intelligence still sucks. Generally speaking, it's not really worth the price of admission to upgrade your phone.
[01:06:20] Paul Roetzer: That being said, I started experimenting with this Playground app, which is native to Apple Intelligence. If you have an iPhone 15 Pro or higher, I think, and you have iOS 18.2, maybe, then on your iPad, your phone, maybe your Mac, you can use this Playground. You can just ask it to create an image, but you can also give it a driving photo, like a source photo of a person, and create a character of them.
[01:06:48] Paul Roetzer: And it is honestly a blast. I spent like two hours Friday night doing this, creating images of family and friends and coworkers. I put one on LinkedIn. It's a lot of fun. [01:07:00] So, again, I'm not saying Apple Intelligence is now worth it and it's awesome; I'm saying that app is a lot of fun.
[01:07:07] Paul Roetzer: One interesting piece of context here: despite Siri not being ready, despite Apple Intelligence being a massive disappointment, Apple's stock is up 33 percent over the last 12 months. So Wall Street doesn't seem to care that they haven't figured out AI yet. And that bodes well, I would imagine, for Apple investors, because it's almost like people just expect they're not going to figure it out, like, ah, fine, whatever, they're just not going to be good at AI. If they ever actually do figure it out, if they make Siri truly competitive with something like voice mode from ChatGPT,
[01:07:42] Paul Roetzer: that gets really interesting, from an investing standpoint, just from Apple's growth potential standpoint.
[01:07:51] Listener Questions
[01:07:51] Mike Kaput: All right, our last topic this week, last but certainly not least, is continuing a new segment we've done the past couple of weeks called listener questions. [01:08:00] We get tons of questions about AI each and every week during our Intro to AI classes, online, on LinkedIn, and at our talks.
[01:08:07] Mike Kaput: So we wanted to start highlighting a few of these and answering them. Please reach out to Paul or myself if you have a question: go to MarketingAIInstitute.com, click contact us, and drop a question in, or reach out to us personally. This week's question, Paul: someone said, we have a leadership team that believes they understand AI, but they do not actually understand it.
[01:08:30] Mike Kaput: They just think of using AI and building agents just for coding. They don't realize it can do so much more. What change management ideas would you recommend to get them to really understand the potential here?
[01:08:43] Paul Roetzer: I started snickering because like, this is true of so many companies we talk to. Right, right.
[01:08:48] Paul Roetzer: So I don't know if it's the case in this organization, but what we commonly see happen is, let's say you have a CEO or a C-suite that knows they need to do something about AI but doesn't personally really [01:09:00] comprehend it. The first call you're making is to the CIO, CTO, IT; you're talking to the technical people.
[01:09:07] Paul Roetzer: Because you think they're the ones that are going to have this figured out. But they are often working on security and risk and product. They're not talking to the CMO and the head of sales and the leader of customer success and the head of HR, trying to find practical business use cases for each of those departments and then personalize them; that's not what the technical team in your organization is really there to do.
[01:09:33] Paul Roetzer: And so if you have a leadership team that basically thinks of this as a technology problem, then it's natural that that's what they're doing with it: they're coding and integrating it into some systems and things like that. So what we often tell people is that most organizations don't have people trained
[01:09:51] Paul Roetzer: in AI and the business side of AI. So you have the opportunity to raise your hand. I don't care if you're the intern asking this question or a VP [01:10:00] within the marketing department, every organization needs people to raise their hand and bring practical use cases to the team, run pilot projects, prove it out. If you have a leadership team that needs data before they're going to do anything, and there are going to be a lot of obstacles and maybe legal is going to get in the way,
[01:10:17] Paul Roetzer: run some low-key pilot projects for 30, 60, 90 days. Prove it. Say, hey, with our writing team, we got this tool and we did this thing. There was no risk involved. We didn't have any customer data in it. There was no proprietary data. It was a low-risk thing we did. We just got some ChatGPT Team licenses.
[01:10:33] Paul Roetzer: We had five people run this thing. Here's the data; here's what we did over 90 days. That's how you win. You have to talk to leadership in terms that matter to them: efficiency, productivity, revenue growth, innovation. So if you're in that kind of organization, move the needle, then show them you moved it.
[01:10:52] Paul Roetzer: Don't go in talking about AI and how you want $100,000 a month or a year for some platform that they don't understand. [01:11:00] That's not going to matter. If they're not already on their own AI learning journey, if they're not experimenting themselves and haven't figured some of this out, you've just got to show it to them.
[01:11:08] Paul Roetzer: So that would be my main advice: just run the experiments. Do the work, and show them what they care about, which is often just data and performance.
[01:11:18] Mike Kaput: That's awesome advice. Paul, thank you for breaking everything down this week, as always. we've got a big week ahead, so I'm excited for next week's episode.
[01:11:27] Mike Kaput: Got a big night ahead, apparently. Right, yeah, it's going to be a crazy Monday and Tuesday. Yep. As a quick reminder to everyone, if you have not left us a review on your podcast platform of choice, and you have the ability to do so, we would very much appreciate it. It helps us get better and get into the hands of more people.
[01:11:46] Mike Kaput: Also, if you have not subscribed yet to our weekly newsletter, go to marketingaiinstitute.com/newsletter. We round up all of the week's AI news, including not only what we talked about on this [01:12:00] episode, but all the stuff that didn't make the cut, which, increasingly, is more and more articles as we get more and more AI news.
[01:12:07] Mike Kaput: So if you want one comprehensive brief to start your week, go ahead and subscribe. Paul, thanks again.
[01:12:15] Paul Roetzer: Thank you, Mike. And, yeah, I don't know. I think we're gonna get a lot of model news this week. So we're gonna have probably a whole section on models next week. All right. Thanks everyone. Thanks for listening to the AI show.
[01:12:27] Paul Roetzer: Visit marketingaiinstitute.com to continue your AI learning journey and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.
[01:12:47] Paul Roetzer: Until next time, stay curious and explore AI.