Marketing AI Institute | Blog

[The AI Show Episode 101]: OpenAI’s Ex-Board Strikes Back, AI Job Fears, and Big Updates from Perplexity, Anthropic, and Showrunner

Written by Claire Prudhomme | Jun 4, 2024 12:04:02 PM

Following a week filled with controversy, drama, and updates in the AI world, hosts Mike and Paul are ready to bring you up to speed on all the latest happenings. Join us in Episode 101 of The Artificial Intelligence Show, where we explore OpenAI's progress on GPT-5, the formation of its Safety and Security Committee, and criticism from former board members. The episode also covers growing concerns about AI's potential to displace workers, the latest AI tech updates, Apple's AI privacy plans, and more.

Listen or watch below, and scroll down for the show notes and transcript.

Please note, Episode 102 will be released on Wednesday, June 12 instead of June 11. We are aligning the release with the Apple Developer conference on June 10 to ensure comprehensive coverage in that episode.

Listen Now

Watch the Video

Timestamps

00:04:47 — OpenAI Updates

00:18:00 — AI Job Fears and Worries

00:31:47 — AI Tech Updates

00:39:01 — xAI Raises $6B

00:42:30 — LeCun & Musk x/Twitter Feud, Hinton’s Remarks

00:53:45 — PwC is now OpenAI's Largest Customer

00:55:19 — Apple’s AI Privacy Plans

00:57:54 — Google Leak + AI Overviews

01:01:25 — NVIDIA's Jensen Huang Quote

01:05:13 — Paul’s LinkedIn Post on Prediction Machines

Summary

OpenAI Updates and Ongoing Controversies

Despite a number of recent and ongoing controversies, OpenAI is full steam ahead—though not everyone’s happy about that.

The company announced this past week that it has begun training its new frontier model, which is assumed to be GPT-5. In a blog post announcing the news, the company wrote:

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.”

The same post announced the formation of OpenAI's Safety and Security Committee, tasked with advising the board on crucial safety and security decisions. One of the committee's initial tasks is to evaluate and develop OpenAI's processes and safeguards over the next 90 days, hinting that GPT-5 may still be at least 90 days away.

As the company pushes ahead, former board members are stirring controversy. Helen Toner and Tasha McCauley penned an op-ed in The Economist urging government regulation of OpenAI and other AI firms, arguing the companies have proven unable to regulate themselves.

Toner also gave an explosive interview on The TED AI Show podcast, where she said the board decided last year to fire Altman because he was “outright lying” to them in some cases and withholding information about certain happenings at OpenAI.

Concerns on AI’s Impact on Jobs

We’re starting to see more commentary and concern around AI’s impact on jobs.

First up, Elon Musk is now saying that AI will take all of our jobs. “Probably none of us will have a job,” he said at VivaTech 2024 in Paris, describing a future where jobs would be “optional” thanks to AI.

Second, the CEO of Klarna, a fintech company, is facing backlash after posting about how AI is helping his company by essentially replacing the need for humans. Sebastian Siemiatkowski posted on X all the ways that AI was helping his company save an estimated $10M in 2024, including:

"We're spending less on photographers, image banks, and marketing agencies," he wrote. "Our in-house marketing team is HALF the size it was last year but is producing MORE!"

He also said the company cut external marketing agency expenses by 25%. He later removed the post after an overwhelmingly negative response online.

McKinsey also weighed in on AI’s impact on jobs with new research, saying:

“Our updated modeling of the future of work finds that demand for workers in STEM-related, healthcare, and other high-skill professions would rise, while demand for occupations such as office workers, production workers, and customer service representatives would decline. By 2030, in a midpoint adoption scenario, up to 30 percent of current hours worked could be automated, accelerated by generative AI (gen AI).”

Last, but certainly not least, a fantastic new article from Avital Balwit, the chief of staff to Anthropic CEO Dario Amodei, called “My Last Five Years of Work”, makes some bold predictions about the near-term future of work, writing:

“I am 25. These next three years might be the last few years that I work. I am not ill, nor am I becoming a stay-at-home mom, nor have I been so financially fortunate to be on the brink of voluntary retirement. I stand at the edge of a technological development that seems likely, should it arrive, to end employment as I know it.”

AI Tech Updates

OpenAI is adding advanced features to the free version of ChatGPT, including the ability to get responses from the web, analyze data, use images and files, and access GPTs—all features previously only available to ChatGPT paid users. OpenAI is also formally relaunching its robotics team, which it had previously shut down.

Perplexity is in talks to raise $250M at a $3B valuation and just released a new feature called Perplexity Pages, which generates entire shareable pages on any topic from the searches and research you do with the tool.

Meta is considering a paid version of its AI assistant, Meta AI. The paid assistant would be a more advanced version of the free assistant available now. The company is also reportedly developing AI agents.

Anthropic has announced that its Claude model can now use tools by interacting with external services and APIs. The company is also making waves by hiring Jan Leike, a leading AI researcher who recently left OpenAI’s superalignment team. Leike will lead a new superalignment team within Anthropic.
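To make the Claude tool use news a bit more concrete, here is a minimal sketch of what defining and passing a tool can look like with Anthropic's Python SDK. The get_weather tool, its schema, and the model ID are illustrative assumptions for this example, not details taken from Anthropic's announcement.

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Hypothetical tool definition: a weather lookup Claude is allowed to request.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Cleveland"}
            },
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-3-opus-20240229",  # example model ID; swap in whichever Claude model you use
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Cleveland right now?"}],
)

# If Claude decides to call the tool, the response contains a tool_use block with the
# tool name and structured arguments; your code runs the tool and returns the result
# in a follow-up message so Claude can finish its answer.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)

In practice, the tool result goes back to Claude as a tool_result content block in the next user message, and the model composes its final answer from it.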

IBM held its annual THINK conference and announced a number of new AI updates, including: a new class of watsonx AI assistants, among them several new code assistants; IBM Concert, a new genAI tool that provides AI-driven insights across a client’s portfolio of apps; and the open sourcing of its Granite family of language and code models.

Fable Studios has released Showrunner, an AI app that you can use to generate your own AI TV shows from prompts.

Suno has released version 3.5, which allows you to make songs up to 4 minutes long.

Links Referenced in the Show


Today’s episode is brought to you by our AI for B2B Marketers Summit, presented by Intercept. This virtual event takes place on June 6 from 12pm - 5pm EDT and is designed to help B2B marketers reinvent what’s possible in their companies and careers. Thanks to our presenting sponsor Intercept, there is a free registration option.

To register, go to www.b2bsummit.ai.

Today’s episode is also brought to you by Piloting AI, a collection of 18 on-demand courses designed as a step-by-step learning path for beginners at all levels, from interns to CMOs. Piloting AI includes about 9 hours of content that covers everything you need to know in order to begin piloting AI in your role or business, and includes a professional certification upon passing the final exam.

You can use the code POD100 to get $100 off when you go to www.PilotingAI.com.

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: Anthropic, OpenAI, Google, Meta, these are the companies with enough resources to build these frontier models and they have the ambition to do it. And, until someone steps in and stops it.

[00:00:13] Paul Roetzer: It's just going to be a race.

[00:00:15] Paul Roetzer: So trust them or don't trust them. It's who we've got, and it's not going to change.

[00:00:20] Paul Roetzer: These are the companies that are going to be building the future.

[00:00:23] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co host, and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:53] Paul Roetzer: Join us as we accelerate AI literacy for [00:01:00] all.

[00:01:00] Paul Roetzer: Welcome to episode 101 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are recording June 3rd around 10 a.m. Eastern time.

[00:01:14] 

[00:01:14] Paul Roetzer: Next week's episode is going to be a little different. Instead of dropping on a Tuesday like normal, we are actually going to drop episode 102 on Wednesday, June 12th.

[00:01:25] Paul Roetzer: So mark your calendars. Next week's episode is not its regular scheduled time. The reason is that Apple is holding its developer conference, uh, that Monday, June 10th at 1 PM Pacific. So by the time we record our podcast, it would not have happened yet. And we want to make sure we include everything from the Apple conference.

[00:01:47] Paul Roetzer: In episode 102. So we are going to record it on, on the 11th and we're going to drop it on the 12th. So again, episode 102 will be on Wednesday, June 12th, not Tuesday, June 11th. Did I get [00:02:00] that right, Mike? I didn't confuse myself. You got that. No, you got that right. We're good. Okay. Alright, so today's episode 101 is brought to us by our AI for B2B Marketers Summit presented by Intercept.

[00:02:12] Paul Roetzer: So that is happening June 6th. So if you're listening to this on June 4th or June 5th or even the morning of June 6th, if you want, you can still get in for this virtual summit. It is from noon to 5 p. m. Eastern time. It is designed to help B2B marketers reinvent what's possible in their companies and their careers.

[00:02:32] Paul Roetzer: During the event, you'll learn how to use AI to create dynamic customer experiences, bridge the gap between marketing and sales, build an AI council, and more. Go to b2bsummit.ai.

[00:02:44] Paul Roetzer: That's b2bsummit.ai. Again, there's a free registration option. There's a private registration option that is paid and there's an on-demand option as well that is paid.

[00:02:55] Paul Roetzer: But the live free virtual summit is made possible by our [00:03:00] presenting sponsor, Intercept. So b2bsummit.ai, if you want to join us for that summit. I think we're close to 4,000 people registered already, so it should be an amazing event. The last one we did like this, the AI for Writers Summit, had an incredible, incredibly active group of attendees in the, in the virtual platform.

[00:03:20] Paul Roetzer: So there's lots of connections made, resources shared. So it's a great place to learn and

[00:03:26] Paul Roetzer: also today's episode is brought to us by the Piloting AI course series, a collection of 18 on demand courses. That's designed as a step by step learning journey for beginners at all levels, from interns to CMOs.

[00:03:38] Paul Roetzer: So the series is specifically built

[00:03:39] Paul Roetzer: for marketers and marketing leaders. More than a thousand people have registered for the certificate series since it first launched in December of 2022. We did a fully updated version of it this year. Mike and I refreshed and re-recorded all 18 courses at the end of January 2024.

[00:03:57] Paul Roetzer: So it's freshly updated [00:04:00] this year. It's about nine hours of content, includes quizzes, a final exam, and a professional certificate upon completion. If you're interested in the Piloting AI course series, you can use POD100 as a promo code to get a hundred dollars off. Go to pilotingai.com to learn more.

[00:04:18] Paul Roetzer: There are team licenses and team discounts available as well. So again, pilotingai.com for a step-by-step learning journey to pilot AI in your organization. Okay, Mike, we have lots of OpenAI news, talk about job loss, tons of tech updates, some fundraising, some, some butting of heads of AI technology

[00:04:42] Paul Roetzer: titans and researchers, plenty to get through this week.

[00:04:47] OpenAI Updates

[00:04:47] Mike Kaput: Yeah, there's a ton going on this week. So first up, despite a number of recent and ongoing controversies, OpenAI is going very full steam ahead and not [00:05:00] everyone is happy about it. Because the company announced this past week that it has begun training its new frontier model.

[00:05:07] Mike Kaput: So that's assumed to be GPT 5. In a blog post announcing the news, the company wrote, quote, OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.

[00:05:25] Mike Kaput: Now, notably, the post that made this announcement was about the formation of what OpenAI is calling the Safety and Security Committee. This is a group for making recommendations to the full board on critical safety and security decisions.

[00:05:43] Mike Kaput: One of the committee's first tasks is to evaluate and develop OpenAI's processes and safeguards over the next 90 days, leading to some speculation here that we're at least 90 days away from GPT 5.

[00:05:57] Mike Kaput: Now this safety and security [00:06:00] committee consists of current board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, and Sam Altman himself. So as this company is moving full steam ahead, the ex-board members are also causing a stir. Ex-board members Helen Toner and Tasha McCauley just published an op-ed in The Economist calling for OpenAI

[00:06:25] Mike Kaput: and other AI companies to be regulated because they have proven themselves unable to self-regulate. At the same time, Toner also gave a kind of explosive interview on The TED AI Show podcast, where she said

[00:06:39] Mike Kaput: the board took the decision last year to fire Altman for, quote, outright lying to them in some cases and withholding information about certain things at OpenAI. She also revealed the board was not informed about the release of ChatGPT in advance.

[00:06:56] Mike Kaput: She says they learned about it on Twitter. She alleges that [00:07:00] Altman also didn't reveal he owned the OpenAI Startup Fund and misled the board about formal safety processes in place.

[00:07:08] Mike Kaput: Toner said of Altman, quote, for any individual case, Sam could always come up with some kind of innocuous explanation of why it wasn't a big deal or misinterpreted or whatever. But the end effect was that after years of this

[00:07:23] Mike Kaput: kind of thing, all four of us who fired him, which was her, McCauley, D'Angelo, and Ilya Sutskever, came to the conclusion that we just couldn't believe things Sam was telling us. So Paul, there's a lot to unpack here about OpenAI

[00:07:38] Mike Kaput: this week. Let's kind of start with the frontier model that's coming and the Safety and Security Committee. Can you kind of walk us through what this means, kind of, for the near future of AI, the next 90 days or so?

[00:07:52] Paul Roetzer: Yeah. I feel like we already knew the frontier model was being trained. I think it maybe is the first time where they just very directly said [00:08:00] it, but they've certainly alluded to it. Like, so I didn't feel like that was

[00:08:04] Paul Roetzer: massive news, at least saying it was being trained. I think we could have just kind of assumed that for a while. The safety and security committee, I didn't read a bunch into, because I think it just, at the end of the day, probably isn't that relevant. I don't know. Like, I feel like they're just kind of doing their own thing now, since they dissolved the superalignment team. They had to do something. They have to try and keep regulators away and

[00:08:31] Paul Roetzer: keep government off their back. And, like, they can't just release... So I don't know who else you're going to put on it but the board, who's supposed to be responsible for this, and Sam.

[00:08:40] Paul Roetzer: So I don't know that there's much to it other than someone has to run it. And those seem to be

[00:08:46] Paul Roetzer: people that are left. You know, the board members, Helen Toner coming out and doing this TED AI, um, interview where she questioned Sam's, um, honesty and his business practices. [00:09:00] The reality is, like, we have no idea what, what was happening between Sam and the board.

[00:09:05] Paul Roetzer: And, you know, I think it makes for a great headline that the, the board didn't know about ChatGPT. And when I first saw it, when I first like watched that clip, I was like, well, oh, okay. That explains why they fired him. And then I thought, wait a second. Like, they didn't think ChatGPT was gonna be a big deal.

[00:09:23] Paul Roetzer: Like maybe they just didn't tell the board because there was nothing to tell them. It was just a continuation of their research efforts. So when I tweeted, like I, I shared that Helen Toner interview and I put, you know, when ChatGPT came out in November 2022, the board was not informed in advance about that, Toner said on the podcast.

[00:09:40] Paul Roetzer: We learned about ChatGPT on Twitter. And then I said, pretty much tells you, um, what you need to know about how OpenAI leaders viewed the prior board. Or

[00:09:50] Paul Roetzer: how insignificant they thought ChatGPT would be. And I actually think it's both of those. I think that

[00:09:59] Paul Roetzer: They, you know, [00:10:00] at the time, if I remember correctly, Greg Brockman, the, what, Greg's the president,

[00:10:05] Paul Roetzer: I think, said they thought they were going to get like 5,000 downloads of ChatGPT.

[00:10:10] Paul Roetzer: They didn't think it was a thing. Like, it was powered by GPT-3.5, which the board already knew about. It wasn't like they unveiled some brand new model. They just put out this research initiative to try and give people

[00:10:21] Paul Roetzer: a chance to engage with a model that the board was already aware of. So

[00:10:24] Paul Roetzer: I could totally see the leaders of OpenAI not feeling like they really needed to go to the board to tell them they're releasing this thing that they thought was just a research effort. So, and then in an interview last week, Sam confirmed this. So he actually did an interview with Nick Thompson,

[00:10:42] Paul Roetzer: CEO of The Atlantic, at the AI for Good Global Summit. And he said this exact thing, which was good, because that was kind of what I guessed. He's like,

[00:10:50] Paul Roetzer: we didn't need to tell the board about ChatGPT. We didn't think it was a big deal. Like this isn't, it wasn't like we were doing some major change in direction. And then he basically [00:11:00] took the high road or didn't want to self-incriminate.

[00:11:02] Paul Roetzer: I don't know about the rest of these. Like I'm not, I'm not going to get into a bunch of specifics So, I think, so, you know, I was, I was thinking about this more this morning before we came on. And, and I think it's fair to say that by November, 2023, when Sam was fired, he had grown to regret

[00:11:22] Paul Roetzer: how the company was structured under the nonprofit board and it's safe to say he was probably doing everything in his power to accelerate.

[00:11:31] Paul Roetzer: research, product development, and revenue within the constraints of that structure. Was that frustrating to him? Probably, but there wasn't a heck of a lot they could do about it. Because you

[00:11:41] Paul Roetzer: have to go back and remember, in 2015 OpenAI was created as a research lab; there were no specific product ambitions.

[00:11:48] Paul Roetzer: They didn't know what was going to work. The transformer hadn't been invented yet by the Google Brain team, which didn't happen until 2017. They didn't know large language models would scale. They [00:12:00] didn't know there'd be a thing. They were working on robotics. They were working on all kinds of things. And so in 2015, when they created this nonprofit structure and this governing board that had all this control, they had no vision of what OpenAI is today.

[00:12:15] Paul Roetzer: And so I think, you know, if you go back to November of 2023, was Sam doing things that the board didn't agree with? Was he pushing growth forward? Was he

[00:12:24] Paul Roetzer: raising funding? Was he getting involved in other initiatives? Yeah, probably. And was he doing it

[00:12:29] Paul Roetzer: to spite the board? Or was he just doing it because he was trying to figure out how to take advantage of this opportunity? Within, within a roadmap that he couldn't have possibly envisioned, you know, eight years earlier. So then as I was like prepping for the show this morning, I happened to come across this Wall Street Journal article. And I think this adds some really interesting context. So we'll put this in the show notes.

[00:12:54] Paul Roetzer: so, you know, basically what it did is took a look at [00:13:00] Altman's extensive investment portfolio and influence within Silicon Valley and beyond. And so it says, like, although Altman is a prolific investor in startups and tech

[00:13:08] Paul Roetzer: companies, including ones like Helion, which is working on nuclear fusion, and Reddit, which he owns 7.6 percent of, which is valued at 2.5 billion... 754 million. So Reddit stock recently jumped

[00:13:21] Paul Roetzer: percent when OpenAI announced a partnership to bring Reddit's content to ChatGPT. So

[00:13:26] Paul Roetzer: his mounting list of side projects like Reddit and Helion and all these other things, have potential conflicts of interest, which were cited by the board when he was fired.

[00:13:36] Paul Roetzer: So the Wall Street Journal article says some of the directors who ousted Altman felt he was giving them so little information. About the size and scope

[00:13:44] Paul Roetzer: of his startup holdings that it was becoming impossible to understand how he might personally benefit from deals being pursued. So he has been quietly a major investor for more than a decade.

[00:13:57] Paul Roetzer: And again, according to the Wall Street Journal, in 2019 [00:14:00] he was asked to resign from Y Combinator, where he was the president, after partners alleged he had put personal projects, including OpenAI, ahead of his duties as president. So when he stepped aside at Y Combinator is when he took over as CEO of OpenAI. Quote, Altman's arrangement, where much of his wealth is tied up in outside ventures,

[00:14:21] Paul Roetzer: not OpenAI, pushes the boundaries of traditional corporate governance, according to tech lawyers and venture capitalists. Most startup founders have

[00:14:30] Paul Roetzer: their wealth tied to their companies, fueling motivation to make their companies successful. Few are ever in a position where they stand to make more money by benefiting a

[00:14:38] Paul Roetzer: business on the other side of the table. I had forgotten about this one, but Sam's actually a major investor in Humane, the AI pin that is now trying to find an exit. Um, he holds 15 percent of the equity in Humane, which was using OpenAI technology to do what it was doing. So, my overall take here is like, Sam is wildly ambitious. [00:15:00] He is a shrewd investor and business person.

[00:15:02] Paul Roetzer: He is aggressive, with a long history of deal making and placing big bets on hyper-growth companies. Those aren't qualities that align well with a conservative board structure that limits his ability to do what comes naturally to him. So this

[00:15:18] Paul Roetzer: isn't in any way absolving him or pretending like he may not have done things the former board members claim, but it's who he is.

[00:15:25] Paul Roetzer: And I can see why things didn't end well. So I think, moving forward, what I'm trying to say is Sam is going to continue doing what Sam does. He's invested in all these

[00:15:35] Paul Roetzer: companies. Many of those companies may end up doing deals with OpenAI or becoming partners of OpenAI. And I could completely see how that deal making and those partnerships, which don't fit under

[00:15:47] Paul Roetzer: methods and governance, would not play well with a board that is designed to, in many ways, limit and restrict growth and innovation in favor of safety and [00:16:00] security.

[00:16:01] Paul Roetzer: So, I think that seeing this new Safety and Security Committee is really just a prelude to a, a much less limited version of Sam Altman and OpenAI that are going

[00:16:12] Paul Roetzer: to favor growth and innovation and accelerated development ahead of everything else, because that's Sam's MO. And as long as Sam's CEO,

[00:16:22] Paul Roetzer: I think that that is what OpenAI is going to do.

[00:16:27] Mike Kaput: So, having given these details, Like, how confident are we that OpenAI is going to safely usher in this next wave of powerful AI? Call it AGI or even if we don't get to

[00:16:42] Mike Kaput: AGI.

[00:16:45] Paul Roetzer: I have no idea. I, I, I mean if I was forced to put like percentages to things, I, I, I would be in the middle I guess. Like, I just don't know.

[00:16:55] Paul Roetzer: I mean, I, I think I said on the last episode, we have no choice but to trust these companies right now. [00:17:00] So, Anthropic, OpenAI, Google, Meta, xAI, which we'll talk about, like, these are the companies with enough resources to build these frontier models and they have the ambition to do it. And, until someone steps in and stops it.

[00:17:18] Paul Roetzer: It's just going to be a race. and they're going to do, you know, they're going to have safety committees and they're going to have interpretability studies and they're going to do all these things, but at the end of the day, it's a capitalistic society and they're, they're going to race forward and build the biggest models possible and the most powerful models possible.

[00:17:38] Paul Roetzer: And as a society, we just basically have to figure out what that means. Cause I don't see an end in sight to what they're doing.

[00:17:46] Paul Roetzer: And they don't see an end in sight to the ability of these models to keep getting bigger and smarter. So trust them or don't trust them. It's who we've got, and it's not going to change.

[00:17:55] Paul Roetzer: These are the companies that are going to be building the [00:18:00] future.

[00:18:00] AI Job Fears

[00:18:00] Mike Kaput: All right, so our second big topic today is we're kind of starting to see more commentary and concern around AI's impact on jobs. So first up, this past week Elon Musk said AI is going to take all of our jobs. Quote, probably none of us will have a job, he said at VivaTech 2024 in Paris, where he was describing a future

[00:18:23] Mike Kaput: where jobs would be optional thanks to AI. However, he did say for this future to come to pass, there would need to be, quote, universal high income, which he did not elaborate on too much. Second, the CEO of Klarna, a fintech company, is facing some backlash after posting about how AI is helping his company by essentially replacing the need for humans.

[00:18:48] Mike Kaput: Sebastian Siemiatkowski posted on X all the ways that AI was helping his company save an estimated 10 million in 2024, including [00:19:00] saying, quote, we're spending less on photographers, image banks, and marketing agencies, he wrote. Our in-house marketing team is half the size it was last year, but is producing more.

[00:19:10] Mike Kaput: He also said the company cut external marketing agency expenses by 25%. He later removed this post after overwhelmingly negative responses online. Now, third, in kind of these threads coming together around job fears, McKinsey also weighed in on AI's impact on jobs with some new research, saying, quote, our updated modeling of the future of work finds that demand for workers in STEM-related, healthcare, and other high-skill professions would rise, while demand for occupations such as office workers, production workers, and customer service representatives would decline.

[00:19:47] Mike Kaput: By 2030, in a midpoint AI adoption scenario, up to 30 percent of current hours worked could be automated or accelerated by generative AI.

[00:19:59] Mike Kaput: Now last [00:20:00] and certainly not least, we also saw a fantastic article come out from Avital Balwit, who is the Chief of Staff to Anthropic CEO Dario Amodei, and it's called My Last Five Years of Work.

[00:20:14] Mike Kaput: And in it, she makes some really bold predictions about where this is all going, saying, quote, I am 25.

[00:20:21] Mike Kaput: These next three years might be the last few years that I work. I am not ill, nor am I becoming a

[00:20:27] Mike Kaput: stay-at-home mom, nor have I been so financially fortunate to be on the brink of voluntary retirement. I stand at the edge of a technological development that seems likely, should it arrive, to end employment as I know it. She then goes on to detail how working at Anthropic has convinced her that increasingly powerful AI models are going to lead to widespread automation of, quote, every economically useful

[00:20:52] Mike Kaput: task, and that she expects AI to first excel at any kind of online work. In closing, she [00:21:00] considers kind of the mental and emotional effects that this is going to have on workers in a post-AGI

[00:21:06] Mike Kaput: world where their work is no longer needed. So Paul, it kind of seems like we're certainly hearing more commentary around this subject from a lot of different sources. Like, are we starting to kind of see real reasons to worry here?

[00:21:23] Paul Roetzer: I hope we're starting to see a sense of urgency to have the conversations. Like, that's kind of my thing. So, I feel like the Klarna CEO is saying the quiet part out loud.

[00:21:34] Paul Roetzer: I have sat in executive meetings and board meetings over the last 12 to 18 months, and these are the exact conversations that are happening. How do we cut agency budgets?

[00:21:43] Paul Roetzer: Do we need as many outside agencies? Do we need to spend as much with our outside agencies? Are

[00:21:50] Paul Roetzer: we able to bring in house now, with our team, what the agencies were doing, so we don't have to reduce staff?

[00:21:55] Paul Roetzer: Maybe we just cut the outside agency budgets, we keep the people, and the internal people can now do the work of [00:22:00] the agencies.

[00:22:01] Paul Roetzer: I've gotten lots of questions and discussions around, how do we grow without hiring? So maybe it's not reduction of workforce, it's we can now achieve our growth goals without having to bring more people on board.

[00:22:15] Paul Roetzer: There are absolutely the discussions though of do we need as many humans doing the work? So if we're not in a massive growth mode and we don't have plans to

[00:22:23] Paul Roetzer: introduce new products and services, or there's a demand for more of what we do, do we need as many people doing the work? Do we need as many SEO people, as many writers, as many accountants?

[00:22:33] Paul Roetzer: Like, these are the discussions that are happening, whether you think they are or not, or you, you haven't been privy to those conversations. I can tell you right now, they're happening now. Is that the right path? I don't know. Like, what we're gonna do as an, like, what I believe is every company gets a choice about how you take

[00:22:50] Paul Roetzer: advantage of what AI is going to enable: productivity, creativity, innovation, decision making, like, all of those benefits. It can be to the benefit of [00:23:00] people, or it can be to, you know, maximize profits and productivity, um, often in, in lieu of the people. So

[00:23:08] Paul Roetzer: what we will do as a company is we will continue to grow, needing to hire fewer people to achieve those growth goals. So as a team

[00:23:17] Paul Roetzer: of seven at the Institute, we are probably more productive than what my agency was at its peak, which I think was around 19 people. I would argue that with a staff of seven, we out-produce the team of 19. Now, that's nothing against the team of 19. I sold that agency in 2021. If we had today's technology with that team of

[00:23:39] Paul Roetzer: people, we probably could 2x revenue without having to make a hire. Mm-hmm. I think most responsible organizations will take a human-centered approach to this and say, yeah, the AI is driving efficiency and productivity, and we're going to keep the people we have.

[00:23:54] Paul Roetzer: they're going to become more efficient, more creative, more productive. We're going to keep growing, we just don't need to [00:24:00] hire as many people moving forward as we would have.

[00:24:02] Paul Roetzer: So I think that's kind of a best case scenario in many cases. But, you know, this article, you know, that you mentioned, from Avital, I saw this, on, today's

[00:24:14] Paul Roetzer: Monday. This was, so this was Sunday morning or Saturday morning, I think, when I saw this. Yeah, it

[00:24:18] Paul Roetzer: it was Saturday morning. So I read this and it definitely, like, it was one of those articles where you start reading and you're like, God, this is a really smart article.

[00:24:27] Paul Roetzer: And I did find myself wondering if she used Claude Opus to help in writing it. but just based on, on her resume and history, like I did a lot of research. One, to make sure it was the real person. Like, cause I first saw this, I was like,

[00:24:39] Paul Roetzer: she is chief of staff to Dario Amodei? Like, what? This is interesting. So I went and kind of researched who she was and, you know,

[00:24:45] Paul Roetzer: saw she was a Rhodes Scholar, and it looks like she came up in, actually, communications at Anthropic and then eventually moved into the chief of staff position.

[00:24:52] Paul Roetzer: But she hasn't updated her LinkedIn with that information. So I had to go digging to make sure, you know, she was who it appeared she was.

[00:24:59] Paul Roetzer: So [00:25:00] anyway, when I read the article, I was like, gosh, I don't know. I, I. gotta talk about this on a episode.

[00:25:06] Paul Roetzer: and then I, I went ahead and put it on LinkedIn, and it, and the response was so overwhelming right away that I messaged Mike, and I was like, hey, we gotta make this a main topic, like, today, um, because I think, it, it strikes a nerve with people and it's a really important topic.

[00:25:19] Paul Roetzer: So I'll just read what I put on LinkedIn and then we can see if we have any other, you know, follow-on conversations from here. So, what

[00:25:26] Paul Roetzer: I wrote was, I spend a reasonable amount of my time these days considering the future of work and talking with business and education leaders who are beginning to ponder the topic with a greater sense of urgency.

[00:25:37] Paul Roetzer: I'll start by repeating what seems to be my most common refrain. No one really knows what happens next. That is the only thing about the next two

[00:25:44] Paul Roetzer: years and beyond that I can say with a high degree of confidence. The rest of what I think and what leading AI researchers, entrepreneurs, and economists believe are varying degrees of educated guesses and theories.

[00:25:57] Paul Roetzer: Some are likely more, likely more [00:26:00] directionally correct than others, but that's the point. We need to start having more serious discussions about the potential paths AI could take, and how it could impact business, education, the economy, and society, as its capabilities

[00:26:11] Paul Roetzer: grow with each new model that is released, GPT, et cetera. Assuming everything

[00:26:17] Paul Roetzer: will stay status quo or that AI will only effect gradual change, like other general purpose technologies before it, will likely lead to suboptimal outcomes for the leaders who choose that path. We need to consider different perspectives and ideas, especially from people who are on the inside of

[00:26:34] Paul Roetzer: frontier AI model companies, such as Anthropic, Cohere, Google, Meta, Microsoft, Mistral, OpenAI. For example, this piece in Palladium Magazine from Avital Balwit, a

[00:26:44] Paul Roetzer: Rhodes Scholar and Chief of Staff to Anthropic's Dario Amodei, is one of the best articles on AI I've read. And then I went on to quote a couple of excerpts. I work at a frontier AI company. With every iteration of our model, I am confronted with something more capable. [00:27:00] The general

[00:27:02] Paul Roetzer: reaction to language models among knowledge workers is one of denial.

[00:27:06] Paul Roetzer: They grasp at the ever-diminishing number of places where such models still struggle rather than noticing the ever-growing range of tasks where they have reached

[00:27:15] Paul Roetzer: or passed human level. Many will point out that AI systems are not yet writing award-winning books, let alone patenting inventions. But most of us also don't do those things. I thought this next paragraph was really important. The economically and politically relevant comparison on most tasks is not whether the language model is better than the best human. It is whether they are better than the

[00:27:40] Paul Roetzer: human who would otherwise do that task. But these systems continue to improve at all cognitive tasks.

[00:27:47] Paul Roetzer: The shared goal of the field of artificial intelligence is to create a system that can do anything. That's sort of like a very broad blanket statement about the pursuit

[00:27:55] Paul Roetzer: of AGI, but we can get to that another time. I expect [00:28:00] us to soon reach it. If I'm right, how should we think about the coming obsolescence of work?

[00:28:05] Paul Roetzer: Should we meet the possibility of its loss with sadness, fear, joy, or hope? The overall economic effects of AGI are difficult to forecast,

[00:28:13] Paul Roetzer: and here I'll focus on the question of how people will feel about the... So, you know, my main takeaway here was, and I've said this before on the show, but, like, to Elon Musk's point about, oh yeah, we're all just gonna be out of a job: anyone who makes a definitive statement about this stuff,

[00:28:30] Paul Roetzer: like, very confidently, with conviction, stating two years from now, this is what it looks like. Um, they're either intentionally exaggerating to make a point, which is what I would consider probably Elon at this point, given his history of sort of exaggerating outcomes in a shorter time. Or they're

[00:28:49] Paul Roetzer: overconfident in their own point of view about the future, or intentionally misrepresenting reality for some reason, potentially financial. So, [00:29:00] this is,

[00:29:00] Paul Roetzer: again, the thing I think we need to recap and we're going to touch on this in a, in one of the topics in rapid fire about, Musk, Yann LeCun, and Geoff Hinton.

[00:29:09] Paul Roetzer: The people that are leading the research, building these companies, they can't agree on any of this. And, like, the leading economists can't agree on this stuff. Like, you have to develop your own point of view. You have to develop some conviction about what you think is likely happening. And maybe you have A, B, and C options

[00:29:29] Paul Roetzer: and a couple of contingency paths, and you have to start thinking about that. So if you're a leader, if you're the CEO of a company, you're building

[00:29:36] Paul Roetzer: your own business, or you're even leading a department, you have to start to form an opinion about what you think is going to happen next, so you can start preparing for it that way, like doing impact assessments on your teams and things like that.

[00:29:51] Paul Roetzer: We can't just deny that it's happening. And, and I thought that that was one of the real key aspects that, that caught me was that this quote from her article about the general [00:30:00] reaction to language models among knowledge workers is one of denial. I think that is very, very true. I think most people just choose to deny that this is happening, that it's going to be disruptive in the very near future, and they just go about their lives. And I, I just don't think that that is the best path forward. I don't agree with Elon that it's just going to, like, in two years, just do all of our work for us or anything like that. But I also think that it would be naive to say it's not going to have a disruptive impact. And I think what the Klarna CEO is saying is probably what a lot of CEOs are thinking

[00:30:34] Paul Roetzer: right now. And they're watching that and wondering if that is in fact true and what it's going to

[00:30:40] 

[00:30:41] Mike Kaput: Yeah, so, as we kind of wrap this up, I'm curious, like, what were, was the sentiment of the comments you got? What kinds of things were people saying in reaction to this?

[00:30:50] Paul Roetzer: Yeah, there was, I mean, so at the time of us recording this, there were over 55 comments on

[00:30:55] Paul Roetzer: this post and it had 15,000 impressions, which again, I posted on a [00:31:00] Saturday morning.

[00:31:00] Paul Roetzer: So, um, that's a lot, like, for an average post for me, and it definitely got people thinking, and I think a lot of people appreciated the perspective. I think a lot of people shared the idea that it's important we're talking about this,

[00:31:15] Paul Roetzer: that there maybe isn't enough conversation around this. And it was a pretty diverse group of people,

[00:31:20] Paul Roetzer: like different backgrounds, practitioners, leaders, government, you know, people involved in government, people involved in education.

[00:31:27] Paul Roetzer: So I think that this post, you know, her article, not necessarily my LinkedIn post, but her article and her positioning, really resonated with people. And I think it definitely triggered the kind of conversation that I was hoping it would, that maybe we can start having these important conversations, working through scenarios of what possible outcomes

[00:31:47] AI Tech Updates

[00:31:47] Mike Kaput: So for our third main topic today, we're going to do something a little different, because

[00:31:53] Mike Kaput: there are just so many updates happening that we're going to do kind of an AI tech roundup of a ton

[00:31:59] Mike Kaput: of major [00:32:00] updates from some leading tools and technologies and kind of talk through these. So I'm going to run through them all and then kind of get your thoughts on them, Paul.

[00:32:09] Mike Kaput: So first up, OpenAI is adding advanced features to the free version of ChatGPT, including the ability to get responses from the

[00:32:17] Mike Kaput: web, analyze data, use images, use files, and access GPTs. These are all features previously only available to

[00:32:25] Mike Kaput: ChatGPT paid users. And OpenAI is also formally relaunching its robotics team, which it had previously shut down. AI search startup Perplexity is also in talks to raise about 250 million at a 3 billion valuation. And they just released a new feature called Perplexity Pages,

[00:32:47] Mike Kaput: which generates entire shareable pages on any topic from the searches and the research that you do with Perplexity. Meta is considering a paid version of its AI [00:33:00] assistant, called Meta AI. The assistant would just be a more advanced version of the

[00:33:05] Mike Kaput: free assistant available right now. And according to some reporting out there, the company is now confirmed to be actively developing AI agents.

[00:33:17] Mike Kaput: With Anthropic, they have announced that Claude can now use tools by interacting with external services and APIs.

[00:33:25] Mike Kaput: And Anthropic is making waves by hiring Jan Leike, who is a leading AI researcher we talked about recently who left OpenAI's superalignment team. Leike will lead a new superalignment team within Anthropic. IBM, who we haven't mentioned in a while, had its annual THINK conference and announced a number of new AI updates, including a new class of its watsonx AI assistants, including several new coding assistants.

[00:33:56] Mike Kaput: They announced something called IBM Concert, which is a new gen [00:34:00] AI tool that gives you AI driven insights across a

[00:34:03] Mike Kaput: portfolio of apps. And IBM also announced that it is open sourcing its Granite family of language and code models. Fable Studios has formally started to release ShowRunner, which is an AI app that you can use

[00:34:21] Mike Kaput: generate your own AI TV shows from prompts. And

[00:34:26] Mike Kaput: last but not least, popular AI music generation tool Suno has released version 3.5, which allows you to make up to 4-minute-long songs.

[00:34:37] Mike Kaput: So, Paul, maybe first talk to me about, did any of these updates really stand out to you the most? Thanks

[00:34:43] Paul Roetzer: Yeah, so the robotics one with OpenAI is interesting. That was one of their big bets early on, you know, in the early days

[00:34:50] Paul Roetzer: the research lab when they weren't sure what direction they were going to go.

[00:34:53] Paul Roetzer: They ran into the limitations of the hardware side, but also just the lack of intelligence. So, you know, the [00:35:00] language models have enabled us to now embody the intelligence within the robots. We talked about this in episode 87 and some of the other episodes.

[00:35:08] Paul Roetzer: They've got a deal with Figure, which is one of the big robotics companies, where they're basically embodying ChatGPT in the robot. You're going to see the same thing with Op, Optimus from Tesla. They're going to embody Grok, which is the, we'll talk about Grok in a minute, but Grok is the language model that'll be embedded within Optimus. And I, I assume, um, hope also, probably very near future, Grok will

[00:35:33] Paul Roetzer: be embedded within Teslas. So you have an intelligent assistant in your car. So that's interesting that OpenAI is going to kind of move back, not only into doing deals where they're going to embed their intelligence into other people's robots, maybe build

[00:35:45] Paul Roetzer: their own. Anthropic, don't sleep on the idea of tool use, that is a huge play. OpenAI is pursuing it. Every research lab is pursuing it.

[00:35:54] Paul Roetzer: We will probably hear about that with the Apple conference, although they probably won't refer to it that way. But, [00:36:00] um, being able to access different apps on your phone, to use them as tools to complete activities, is something that everyone is working on.

[00:36:09] Paul Roetzer: So Anthropic doing that is one of the key unlocks to the next generation of language models. The Showrunner stuff from Fable, if you haven't seen that, that's some crazy stuff, man. Like we, I think we talked about that last year when it first came out. They did that demo with South Park where they created an episode of South Park and they trained it on, like, a thousand-plus characters and all these scenes and everything. But their vision for the future

[00:36:30] Paul Roetzer: is personalized shows. So instead of Netflix, where you go in and pick an existing show, you'll go in and say,

[00:36:37] Paul Roetzer: I want to watch, um, something with a theme kind of like Harry

[00:36:41] Paul Roetzer: Potter, I want it to be set in, you know, the Lord of the Rings land. And I, you know, I want these kinds of characters, and like, go, and it'll just, like, build you

[00:36:51] Paul Roetzer: A show. So it's this idea of like personalized content on demand, but like long form entertainment. And then the other [00:37:00] thing is they envision this as a way to train AGI, that these characters can live within these environments and interact

[00:37:06] Paul Roetzer: with each other and learn from each other, and that this is actually a path to achieving AGI, where the agents live within an environment where they're learning their physical surroundings. And it's crazy.

[00:37:18] Paul Roetzer: So, yeah, that's one of those that I would go spend a little time, if you're interested in this stuff, looking at Showrunner and Fable. It's pretty

[00:37:27] Mike Kaput: Yeah. And just as a connecting-the-dots moment there, on episode 86 in early March, we talked about kind of how one of the main topics was how AI was maybe going to threaten creators.

[00:37:40] Mike Kaput: And in that, I believe a Netflix executive was like, look, this is an existential threat to what we do. This exact type of thing, of people being able to generate their own content, that could be where Netflix itself

[00:37:54] Paul Roetzer: goes as well. Yeah. I would think this company, don't be surprised if they get acquired by, like, a Netflix or a Disney, somebody like [00:38:00] that. I,

[00:38:00] Paul Roetzer: I think, like, we don't talk as much about it, you know, the, like, big,

[00:38:05] Paul Roetzer: um, movie studios and production companies, but like Disney's working on this too.

[00:38:10] Paul Roetzer: There's no way they're not. And, but imagine being able to do this with licensed Disney

[00:38:16] Paul Roetzer: So, you know, be able to create your own, you know, episodes, spinoffs of popular movies and shows and things like that.

[00:38:23] Paul Roetzer: So yeah, I think it's reasonable to assume that the future of entertainment is going to be, you know, you creating your own personalized shows

[00:38:32] Paul Roetzer: you could then share with people if you wanted to, like, so maybe I create a whole, you know, series of stuff. It's kind of like Roblox, I don't know if I've talked about this before.

[00:38:40] Paul Roetzer: It's like the Roblox for entertainment. So when you go into Roblox and create your own games and share them, I think you're going to

[00:38:46] Paul Roetzer: be able to do that with content like this. We're doing it with songs; you're gonna be able to do it with shows. Yeah. Everything's gonna be consumer generated through prompts, and then you can keep it to yourself or you can share it out.

[00:38:57] Mike Kaput: all right,

[00:38:59] Mike Kaput: let's dive [00:39:00] into some rapid fire

[00:39:01] xAI Raises $6B

[00:39:01] Mike Kaput: So first up, we've got some news about Elon Musk's AI company, xAI. Their flagship product is Grok, and they just announced a whopping Series B fundraising round, that is, a total of 6 billion raised, and it has participation from top investors like Andreessen Horowitz and Sequoia Capital.

[00:39:23] Mike Kaput: After this round, xAI's valuation is now 24 billion.

[00:39:29] Mike Kaput: So this almost immediately catapults them into the top ranks of AI startups that have raised the most money. So for context, Anthropic as of right now has raised just slightly more, at about 7 billion. OpenAI so far has raised 13.5 billion.

[00:39:45] Mike Kaput: In an announcement about the funding, xAI said, quote, the funds from the round will be used to take xAI's first products to market, build advanced infrastructure, and accelerate the research and development of future technologies. [00:40:00] The Information has some details on those infrastructure plans, because they recently reported that Musk told investors

[00:40:08] Mike Kaput: that xAI will need 100,000 specialized semiconductors to train and run the next version of Grok, and that he plans to string these chips together into a single, massive computer.

[00:40:20] Mike Kaput: He's calling this quote, a giga-factory of

[00:40:22] Mike Kaput: compute, a term he borrowed from Tesla's large-scale manufacturing facilities. So Paul, like, we've known xAI is a force, though just getting started, kind of, in the world of AI, but this is pretty significant.

[00:40:38] Mike Kaput: I mean, how does this change or evolve the AI landscape?

[00:40:42] Paul Roetzer: The, the thing you cited about the publication reporting he told investors, yeah, he went ahead and tweeted that last week. So

[00:40:49] Paul Roetzer: he said, given the pace of technology improvement, it's not worth sinking one gigawatt of power into H100s, being, you know, the current Nvidia chips. The xAI 100,000 [00:41:00] H100 liquid-cooled training cluster will be online in a few months, meaning he's saying, yes, we're doing the hundred thousand thing and it's going to be ready to go in the next few months.

[00:41:09] Paul Roetzer: Then he said the next big step

[00:41:10] Paul Roetzer: would probably be about 300,000 B200s, which is the next generation of Nvidia chips, uh, with CX8 networking, I don't know what CX8 networking is, next summer. So he's basically out there like, yeah, we're, we're going all the way. And it's because of who he is.

[00:41:29] Paul Roetzer: He, he obviously has the backing to raise as many billions as he wants. He has the ambition to do really big things. He has the other companies that would benefit from achieving these things. And he has a grudge that he still holds against Sam Altman and OpenAI. And I wouldn't underestimate the motivation of vengeance

[00:41:55] Paul Roetzer: for him. And I think he absolutely wants to build a bigger, more powerful, more [00:42:00] influential research lab than, and company than, OpenAI. And so all those things combined,

[00:42:07] Paul Roetzer: you gotta pay attention. Like I, you know, I think Grok is still not like that relevant to what we're doing. Like as a business person, Grok probably doesn't have any real significance to you at the moment.

[00:42:20] Paul Roetzer: You're not really considering it as an alternative to ChatGPT or Claude or anything, but that doesn't mean his ambitions won't make it a big part of the story in the future

[00:42:30] LeCun, Musk, Hinton

[00:42:30] Mike Kaput: So, in another topic hitting the news this week, Yann LeCun and Geoff Hinton are both people we've talked about a bunch on the podcast.

[00:42:40] Mike Kaput: They're both considered godfathers of modern AI, thanks to their research contributions and work in the field. And both are back in the news this week. Yann LeCun, who is Chief AI Scientist at Meta, for one, has spent this week in a bit of an online war with Elon Musk on X. This all [00:43:00] started when LeCun posted a snarky response to Musk, who had posted an advertisement for careers at xAI. LeCun said, join xAI if

[00:43:10] Mike Kaput: you can stand a boss who claims that what you are working on will be solved next year. No pressure. Claims that what you are working on will kill everyone and must be stopped or paused. Yay, vacation for six months. Claims

[00:43:21] Mike Kaput: to want a quote, maximally rigorous pursuit of the truth, but spews crazy ass conspiracy theories on his own social platform.

[00:43:29] Mike Kaput: So, as I'm sure everyone here can predict, this spiraled immediately out of control. LeCun and Musk traded a bunch of barbs on a lot of different things, different topics. At the same time, LeCun is also drawing some attention for an interview with the

[00:43:46] Mike Kaput: Financial Times, where he said that large language models will never be able to reach human intelligence and that radically different approaches to that type of superintelligence are needed before we

[00:43:56] Mike Kaput: move forward there. And then Hinton, who left Google over [00:44:00] concerns about AI safety, gave a wide-ranging interview at the Royal Institution of Great Britain, largely on large language models. And in it, he noted that Ilya Sutskever, formerly at OpenAI, was, quote, basically right about AI scaling laws by preaching that just making larger models was the way to make models more intelligent.

[00:44:21] Mike Kaput: So, Paul, you had kind of alluded to this before. Lots of conflicting opinions, uh, in the world of AI. Lots of drama

[00:44:28] Mike Kaput: in the world of AI. You've followed both of these, these people, since the very beginning. Can you kind of walk us through why all this matters and kind of why we're hearing from them now?

[00:44:38] Paul Roetzer: So I, I, I've mentioned before on the show, one of the ways that I kind of like keep up with stuff in real time is I have notifications turned on, on X.

[00:44:46] Paul Roetzer: Now, nine times out of 10 when Elon Musk tweets, I wish I didn't have a notification for his tweets, but it's when stuff like this happens where it's like, all right, like, it's kind of worth dealing with the rest of the stuff. So Yann and Elon, I [00:45:00] have notifications for both of

[00:45:01] Paul Roetzer: them. So as this is all happening, I'm kind of, like, seeing these things in real time. So when I saw Yann's post, that first tweet, I just, I was laughing hard, in part because

[00:45:12] Paul Roetzer: so my, my response, I should, I said, Yann has officially activated the Elon, Tesla, and X bots with a single tweet. Might as well turn off those notifications for a couple of days. So he knows what he's doing. Like, the equivalent of this, if you're not active on X and you don't know what I mean by, like, the Elon, Tesla, and X bots, they're like

[00:45:34] Paul Roetzer: these bots, maybe some of them are actual people, who see

[00:45:36] Paul Roetzer: nothing he does as bad. Like every single thing, they will come to his defense no matter what.

[00:45:44] Paul Roetzer: And so what I thought to myself is, doing this on Twitter, what Yann did is like walking up to a bully in a park full of his buddies and punching the bully in the face.

[00:45:54] Paul Roetzer: Like, you know, the buddies are going to take you out. In this case, it's Musk's bots and defenders, [00:46:00] you know, on X. They're coming for you, but you're going to get your shot in before all the smoke. So that was hilarious. And now we are, what, six days later, and this is still going on. And I thought Yann had a great tweet.

[00:46:16] Paul Roetzer: So this is June 2nd at 10 AM. We'll put the link in the show notes, and I think it's important to understand the context here. So I'm going to read Yann's post and then I'll offer a little bit of commentary. So he

[00:46:28] Paul Roetzer: says, my opinion of Elon Musk. I like his cars. I own a 2015 S and a 2023 S. His rockets, his solar energy systems, and his satellite communication system.

[00:46:38] Paul Roetzer: I also like his position on open source and patents, but I very much disagree with him on a number of issues. I disagree with how he treats scientists. Technology product development may not need openness and publications

[00:46:50] Paul Roetzer: to advance, but forward-looking research sure does. Whether it's in AI, neural interfaces, material science, or whatever, secrecy hampers progress and [00:47:00] discourages talent from joining the effort.

[00:47:02] Paul Roetzer: I also disagree with the hype. I mean, expressing an ambitious vision for the future is great, but telling the public blatantly false predictions like AGI next year, 1 million robotaxis by 2020,

[00:47:13] Paul Roetzer: AGI will kill us all, is very counterproductive. Parentheses, also illegal in some countries. More importantly, I think his public positions on many political issues, journalism, the media, the press, and

[00:47:24] Paul Roetzer: academia are not just wrong, but dangerous for democracy, civilization, and human welfare.

[00:47:30] Paul Roetzer: Say what you want about traditional media, but you can't really have reliable information without professional journalists working for a free and diverse press. Democracy can't exist without it, which is why only authoritarian enemies of democracy rail

[00:47:43] Paul Roetzer: against the media. Finally, he doesn't hesitate to disseminate batshit crazy conspiracy theories as long as they serve his interests.

[00:47:51] Paul Roetzer: One would expect a technological visionary to be a rationalist. Rationalism doesn't work without truth. It's been particularly concerning since he bought himself a [00:48:00] platform to disseminate his dangerous political opinions, conspiracy theories, and hype. He has been quite naive about the difficulties of

[00:48:06] Paul Roetzer: running the social network, and then it goes on and on. So basically what he's saying is, like, I really like the guy, I respect the guy, but of course this led to, you know, all these replies with basically zero engaging with the

[00:48:19] Paul Roetzer: actual balanced critique. Like, I felt it was a fairly balanced opinion. He's just kind of stating things that seem sort of obvious.

[00:48:27] Paul Roetzer: And so it just led to a whole lot of, like, whataboutism. Like, oh yeah, well, what about Zuckerberg? Or what about this? No one actually addressing his positions. And so, without this becoming a big critique of Elon or a political thing, I think Yann's position probably aligns with a lot of people who are afraid to be more vocal on Twitter due to bots and retribution.

[00:48:51] Paul Roetzer: So Elon is probably the most brilliant entrepreneur of our generation and certainly one of the leading innovators in human history. His [00:49:00] companies and his visions are audacious and inspiring. His brilliance is unquestionable. I love when he

[00:49:06] Paul Roetzer: talks about science and technology and space and business, but I hate that he's

[00:49:11] Paul Roetzer: become so political and that the majority of his tweets are pushing increasingly fringe ideals and beliefs. So my overall feeling is like, he's absolutely entitled to these opinions.

[00:49:21] Paul Roetzer: I just wish there was a way to filter out the tweets dealing with politics and only get the tweets related to the good stuff. And this isn't just him. Like, I feel this way about the All-In Podcast guys.

[00:49:30] Paul Roetzer: Like, I love their business insights, investing insights, economic insights.

[00:49:35] Paul Roetzer: I hate the politics of it, because I feel like politics right now is just so divisive and inflammatory. And I know it's that way in the United States, and I'm sure it's that way in other places, but you and I might see it in the United States more than anything. And so I don't want this to turn into a political segment, but

[00:49:49] Paul Roetzer: I think Yann in some way is trying to tell Elon, like how respected and admired he is.

[00:49:55] Paul Roetzer: And to stop letting the political stuff distract from his visions [00:50:00] for a better future for humanity. He's just one of the few people Elon respects enough, who's willing to take the hits, to say it to him on his own turf. Because if Yann does this on Facebook or on Threads, nobody cares. So Yann, I think, is basically just taking one for humanity,

[00:50:16] Paul Roetzer: saying like, Elon, man, you've got so many good things going, let's get our eye on the prize here and stop with all this craziness. And so again, like, I don't care what people's political

[00:50:26] Paul Roetzer: leanings are. I would rather not know people's leanings, honestly, a lot of the time, because

[00:50:31] Paul Roetzer: it just gets in the way of civil discussion and, like, logical thinking. And I feel that way about Elon. Like, I don't care what his political beliefs are.

[00:50:39] Paul Roetzer: Like, it doesn't affect me, what he's pushing. And I don't care if it's far right, far left, or somewhere in the middle. I just respect what he does as an entrepreneur and as an innovator, and I want him to keep doing good things for humanity. And so it's distracting to me to deal with the rest of the stuff

[00:50:56] Paul Roetzer: that I kind of tolerate because I want to get to the good stuff in the [00:51:00] process. And then real quick, 'cause I know this is just a rapid-fire item, but on LeCun's thoughts on AGI, I think it's really important people understand this, because this

[00:51:08] Paul Roetzer: is where the divergence happens. So Yann has been, for a really long time, willing to pursue a different path in AI research.

[00:51:18] Paul Roetzer: His whole career, he's played the long game, and over time he's often been proven right. So his approach takes conviction; it takes fortitude to be willing to be wrong in the eyes of your

[00:51:29] Paul Roetzer: peers for a really long time. And it doesn't mean that he's right. It doesn't mean that his thoughts about language models and where this all goes are correct.

[00:51:39] Paul Roetzer: But his track record justifies listening to what he has to say, even when it's contrary to what seem to be the accepted beliefs of everyone else.

[00:51:48] Paul Roetzer: And what he's basically saying is that large language models, all this money pouring into these things at OpenAI and Google and xAI, they are just a distraction

[00:51:59] Paul Roetzer: to [00:52:00] achieving actual human-level intelligence and beyond. He says these language models don't have the ability to do four things: understand the physical world, which means just the basic physics of reality; have persistent memory; reason; and plan. Now, there are plenty of people, including

[00:52:20] Paul Roetzer: Hinton and everyone at OpenAI, who probably wouldn't agree that the memory, reasoning, and planning things are, you know, a real limitation. But he's saying they're only doing these things in superficial ways, and we need to approach this on an entirely different path.

[00:52:35] Paul Roetzer: And he literally told a body of students: do not go into large language models in your careers. Like, don't work on them when you get out of college. Work on what's next.

[00:52:44] Paul Roetzer: So I guess we could have done this as its own topic, but that's his basic premise: what he's working on is not language models, which is ironic considering Meta's building everything around language models right now, but that's not where his focus is.

[00:52:58] Mike Kaput: [00:53:00] And just to kind of wrap that up, Hinton, it seems, appears to disagree and says that LLMs just keep getting smarter.

[00:53:09] Paul Roetzer: Yeah, I mean, there was a great interview we'll put in the show notes where he was talking about a lot of things. It was mostly around safety and things like that. But yeah, he thinks that they actually seem to understand what they're doing, and he

[00:53:22] Paul Roetzer: seems to see a path where they're just going to keep getting smarter and keep developing reasoning capabilities and understanding and the ability to plan. And

[00:53:29] Paul Roetzer: So yeah, this is what we always say: you can't say with, like, a hundred percent conviction that this is what the future looks

[00:53:36] Paul Roetzer: like, because the people at the very forefront of this stuff can't agree on these things. And that's why I think it's important to have all these perspectives.

[00:53:44] PwC is now OpenAI's Largest Customer

[00:53:44] Mike Kaput: All right. Another rapid-fire topic we've got this week

[00:53:47] Mike Kaput: is that consulting firm PwC is now set to become OpenAI's largest enterprise customer and its first-ever reseller. There's a new deal that PwC [00:54:00] has with OpenAI to roll out ChatGPT Enterprise to more than 100,000 employees, including 75,000 US employees and 26,000 UK employees.

[00:54:11] Mike Kaput: Now, neither company confirmed how much this deal is valued at, and they didn't share any details yet on what the reseller arrangement looks like.

[00:54:20] Mike Kaput: So Paul, maybe talk to me about the angles here in terms of, like, how does this impact PwC's competitiveness as a consulting firm?

[00:54:30] Paul Roetzer: Well, I think it's just where they all need to go. I mean, all these big consulting firms have to go in this

[00:54:35] Paul Roetzer: direction. If properly onboarded, if they build the proper change management needed, if they infuse education and training

[00:54:43] Paul Roetzer: throughout the employee base, then you can see, like, a Moderna-like effect. We talked about that case study a few episodes ago, and their partnership with OpenAI and how it's driving innovation within their organization.

[00:54:53] Paul Roetzer: So yeah, I think these AI-forward organizations that are looking at ways to infuse this across people, processes,

[00:54:59] Paul Roetzer: [00:55:00] technology, and truly build comprehensive roadmaps and see this all the way through are going to, you know, be really hard to compete with. And

[00:55:08] Paul Roetzer: that's at a, you know, macro level.

[00:55:10] Paul Roetzer: You can do this at any size company, any industry. Like, if you take this really thorough approach, you're gonna outcompete everybody.

[00:55:17] Apple’s AI Privacy Plans

[00:55:17] Mike Kaput: We just learned some new details about Apple's AI privacy plans, and

[00:55:23] Mike Kaput: these will likely be things that are discussed during next week's Worldwide Developers Conference.

[00:55:28] Mike Kaput: According to reporting from The Information, as Apple integrates AI into Siri and other products, it, quote, plans to process data from AI applications in a virtual black box, making it impossible for its employees to access it. Over the last three years, the company has been working on a project known internally as Apple Chips in Data Centers, ACDC for short, that enables this type of black-box processing.

[00:55:56] Mike Kaput: So The Information says, quote, if the approach works, it will allow Apple to [00:56:00] integrate AI into its products without threatening its longstanding promise to keep its user data private.

[00:56:07] Mike Kaput: Now, at the same time, we're also seeing that Apple and OpenAI have successfully closed that deal we referenced on a previous podcast to include OpenAI technology in Apple software.

[00:56:19] Mike Kaput: So Paul, as we get closer to the Worldwide Developers Conference, we're getting more and more hints about Apple's AI strategy and plans.

[00:56:27] Mike Kaput: Like, how do you see these latest details fitting into that picture?

[00:56:31] Paul Roetzer: I just

[00:56:32] Paul Roetzer: still think Apple can end up being a major winner in all of this. Like, they've been very low-key behind the scenes, you know, but they've been infusing AI into their phones for 15

[00:56:42] Paul Roetzer: years. I don't think that most

[00:56:45] Paul Roetzer: people, most investors, most business leaders probably fully comprehend all the competitive advantages that Apple holds here. But the ability to enable these models to function on device without having to go to the [00:57:00] cloud, potentially, to run efficiently, to connect to all

[00:57:03] Paul Roetzer: of your apps, to build intelligence that lets you not only talk to all of your apps, but interact with them and have them take actions for you.

[00:57:11] Paul Roetzer: Like, if they go all in on this, which I think they are, and they

[00:57:17] Paul Roetzer: have the right vision for how to execute this on the iPhone and, you know, whatever the next-generation iPhone looks like and functions like, I just think they can do what they often do, which is show up late to the party and dominate it.

[00:57:31] Paul Roetzer: Because they just have so many advantages with their distribution, their technology, their supply chain, their manufacturing prowess, their access to data, their ability to keep the data safe. It's just,

[00:57:45] Paul Roetzer: I don't think people really comprehend how big of a role Apple can play in, you know, the next three to five years of the story of AI.

[00:57:54] Google Leak + AI Overviews

[00:57:54] Mike Kaput: So

[00:57:55] Mike Kaput: in the shorter term, Google is having another bad week. [00:58:00] They had a bunch of controversy around AI Overviews, and now they've had a major document leak. Thousands of documents were leaked on GitHub and, earlier in May, shared with SEO leader Rand Fishkin, and these documents provide an unprecedented

[00:58:16] Mike Kaput: look at how Google's ranking algorithm may work. A lot of the leaks confirm kind of what SEOs have already suspected. Things like, hey, link diversity and relevance matter. Successful clicks on your content matter. The documents did not reveal kind of the holy grail of how all these

[00:58:32] Mike Kaput: factors are weighted in the algorithm. However, they also made Google look really bad, because there were things in the documents that appear to outright contradict public statements that Google has made.

[00:58:45] Mike Kaput: Fishkin wrote about the leaks, quote, many of their claims directly contradict public statements made by Googlers over the years. In particular, the company's

[00:58:53] Mike Kaput: repeated denial that click-centric user signals are employed, denial that subdomains are considered separately in [00:59:00] rankings, and denials of other features. And this even led Fishkin to say he worries that Google's much-cited E-E-A-T guidance, this is in SEO, you know, how you get high-ranking content, which is experience, expertise, authoritativeness, and trustworthiness, quote, is 80 percent propaganda,

[00:59:18] Mike Kaput: 20 percent substance. This comes as Google's VP and Head of Search, Liz Reid, also published an article about AI Overviews, completely defending them

[00:59:30] Mike Kaput: despite the outcry, saying that the feature remains popular and Google is working on fixing a few issues with it. So Paul, how bad is this getting for Google right now?

[00:59:42] Paul Roetzer: I mean, honestly, probably outside of, like, the SEO circles of Twitter and online and, you know, people who follow us closely, most people probably have no clue any of this is happening. So I guess it could be perceived as a negative within, you know, those [01:00:00] communities, and certainly there's plenty of those people who listen to this podcast.

[01:00:03] Paul Roetzer: So I would encourage you, like, if SEO is your thing and you geek out on this stuff, you're probably deep in it already and know way more about this stuff than I do. I never considered myself an SEO expert per se. So yeah, I was reading it with interest, as anybody in the marketing space probably would, and a lot of what I saw is like, yeah, there's lots in here

[01:00:24] Paul Roetzer: that they probably wish hadn't come out, but, you know, it's probably not even accurate information at this point. So yeah, I don't know. I think it's probably just bad PR and things like that, but I don't know that it's fundamentally affecting Google's business. I don't think their stock price is going to change, you know, one way or the other because of any of this information. And I think the

[01:00:44] Paul Roetzer: AI Overviews part is what I'm personally most interested in at this point. And right now, over the last week, I just keep going back to Perplexity thinking, man, it really is just a better product right now. Like, it's shocking how often Perplexity just nails [01:01:00] exactly what I'm looking for, and how that doesn't happen with Google AI Overviews right now.

[01:01:06] Paul Roetzer: Like, that's the one thing I just keep kind of taking out of this right now. So yeah, I think, you know, bad week from that perspective. SEO people are probably having a field day right now. But other than that, probably not much of a news item to the general public or to Wall Street.

[01:01:24] NVIDIA's Jensen Huang Quote

[01:01:24] Mike Kaput: So NVIDIA CEO Jensen Huang recently participated in a fireside chat with Stripe co-founder and CEO Patrick Collison. And as part

[01:01:35] Mike Kaput: of this conversation, he outlined the true size of the AI opportunity ahead. In one segment, he said that AI represents a new industrial revolution, where we are essentially, for the first time, manufacturing tokens that represent intelligence in data centers using GPUs.

[01:01:52] Mike Kaput: Of course, NVIDIA is at the forefront building these data centers and chips that produce tokens at scale. Now, [01:02:00] Huang says that there is a hundred trillion dollars' worth of industry

[01:02:05] Mike Kaput: that will be built on top of this hardware, creating an absolutely unprecedented opportunity. So Paul, that's a pretty big number. Can you maybe walk us through why this concept is so important to grasp?

[01:02:21] Paul Roetzer: Yeah, we talked about this a couple episodes ago. I may have talked about it on episode 87. But you know, in essence, I still just think people underestimate NVIDIA. I was working on our Scaling AI courses that I'm building, and last Friday I pulled their five-year stock chart.

[01:02:37] Paul Roetzer: And so on Friday, their stock was at like $1,115 or something a share. The day ChatGPT came out, on November 30th, 2022, NVIDIA's stock closed at $169 a share. So from the ChatGPT moment until today, it's gone from [01:03:00] $169 a share to over $1,100 a share. Now, they just announced a 10-for-1 stock split that'll happen sometime later in June.

[01:03:06] Paul Roetzer: So with that, you know, the price per share would drop, obviously. But it's gone that far just from training the current models. Not to dilute everything else that NVIDIA does, but most of their growth has come because of the demand for their chips to do the training of these models. These models are going to get 10 times, a hundred times, a thousand times bigger in the coming years.
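
To make the back-of-the-envelope math concrete, here is a minimal sketch that just runs the numbers Paul cites. The share prices are the approximate figures mentioned in the episode, not exact market data, and the split calculation simply divides the per-share price by ten, since a 10-for-1 split changes the share count rather than the company's market value.

```python
# Back-of-the-envelope math on the NVIDIA figures cited in the episode.
# Prices are the approximate values mentioned, not exact market data.

price_at_chatgpt_launch = 169.00   # approximate close on Nov 30, 2022, per the episode
price_last_friday = 1115.00        # approximate recent price Paul cites

# Growth multiple since the "ChatGPT moment"
multiple = price_last_friday / price_at_chatgpt_launch
print(f"Growth since ChatGPT launch: {multiple:.1f}x")  # ~6.6x

# A 10-for-1 split divides the per-share price by ten;
# total market value is unchanged because the share count multiplies by ten.
split_ratio = 10
post_split_price = price_last_friday / split_ratio
print(f"Approximate post-split price: ${post_split_price:.2f}")  # ~$111.50
```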

[01:03:33] Paul Roetzer: We just heard, you know, Elon Musk wants 300,000 of their new chips by, you know, next year to start building with. So all these big frontier model companies take their funding and they turn around and give it to Jensen and NVIDIA. That's just the training. As intelligence goes into robots, goes into your devices, goes everywhere where we have intelligence on demand to do everything we do in our jobs and in our personal lives, all of that requires [01:04:00] processing.

[01:04:00] Paul Roetzer: All of that requires compute that NVIDIA builds too. So I feel like we're on the cusp of this intelligence

[01:04:06] Paul Roetzer: explosion, and all this growth in NVIDIA's stock isn't even accounting for that yet. Like, it's just the demand for their chips to do the training that has created this run on their stock and this massive upward tick in the price.

[01:04:23] Paul Roetzer: And I just feel like we're scratching the surface of what's possible. Not investing advice. I am not telling you to go buy NVIDIA stock today. I'm just observing that I don't think we've even scratched the surface of what NVIDIA can be as a company, and they're already worth $2.8 trillion or whatever in market cap.

[01:04:42] Paul Roetzer: It's wild. And I can listen to Jensen talk all day. He just did a talk at the Computex conference in Taipei on Sunday, and he introduced a new chip called Rubin, you know, the successor to their Blackwell chips and data centers. He's just, I [01:05:00] mean, he may be the most innovative, brilliant, forward-thinking entrepreneur of our time outside of Elon, or maybe on par with Elon at this point.

[01:05:08] Paul Roetzer: And he's just so smart and authentic.

[01:05:13] Paul’s LinkedIn Post on Prediction Machines

[01:05:13] Mike Kaput: All right. And our final topic today. Paul, you recently published a LinkedIn post that got some serious attention about how AI is basically a prediction

[01:05:24] Mike Kaput: machine and what that means for knowledge workers everywhere. So you wrote, quote, the future of knowledge work is telling AI what to predict and knowing what to do with the prediction.

[01:05:35] Mike Kaput: And you also said that many knowledge work jobs are a bundle of tasks that largely have to do with making predictions. You then said, quote, so when evaluating the impact AI will have on your job, team, company, or industry, start by evaluating each job as a series of

[01:05:50] Mike Kaput: tasks. Then consider how many of those tasks are making a prediction about an outcome or a behavior. Hint: it's

[01:05:57] Mike Kaput: most of them. Can you walk us through [01:06:00] kind of what you're saying here and why it matters to knowledge workers?

[01:06:02] Paul Roetzer: Yeah, I just don't remember what I was doing the morning I posted this, but

[01:06:07] Paul Roetzer: something

[01:06:08] Paul Roetzer: popped into my mind about Prediction Machines, which is a book I'd read in 2018. And the main takeaway I had from the book, so actually, the way I read is I'll highlight everything in the book.

[01:06:21] Paul Roetzer: And Apple Books is where I read my stuff, and then I'll highlight everything, and then I'll go back and read the highlights. You used to be able to export the highlights; now they don't let you. So I was going back and re-reading all my highlights from that book, and the main takeaway I had noted at that time was that the future of knowledge work is telling AI what to predict and knowing what to do with the prediction, which was the premise of that book.

[01:06:40] Paul Roetzer: And so, like, basically, data goes in and predictions come out. That's kind of what happens: predictions of outcomes and behavior. So I gave a few examples. Doctors analyze symptoms, which is the data, to predict illness. Lawyers consider legal precedent and evidence, the data, to predict guilt or innocence.

[01:06:57] Paul Roetzer: Marketers assess historical audience behavior, the [01:07:00] data, to predict future purchases. Sales reps look at lead scores and actions to predict conversions. Like, everything we do. And then I broke it down with, like, an email campaign. The subject line is a prediction of what's going to get them to open.

[01:07:11] Paul Roetzer: The send time is a prediction of what time someone will most likely open. The first paragraph is a prediction of what will hook them and get them to the second paragraph. The call to action is a prediction of what's going to get them to click. Everything we do is a prediction. But the main takeaway here is that the machine doesn't know what to predict until the human gives it a task or a goal.

[01:07:32] Paul Roetzer: And the prediction itself isn't a decision or an action. The decision requires judgment on what action to take. And that judgment, in most cases, still comes from humans. It's humans using instinct, intuition, experience, expertise. Those are the traits that give us the ability to tell the machine what to predict, and then the judgment to know what to do with the prediction.
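
To make the division of labor Paul describes concrete, here is a minimal, hypothetical sketch of that split: the machine returns a prediction for a task the human defines, and the human applies judgment to turn that prediction into an action. The function names, data, and threshold below are illustrative assumptions, not from the book or the episode.

```python
# A hypothetical sketch of the prediction-vs-judgment split described above:
# the human frames the task, the machine predicts, the human judges and acts.
# All names, data, and thresholds here are illustrative assumptions.

def machine_predict(task: str, data: dict) -> float:
    """Stand-in for an AI model: returns a predicted probability for the given task."""
    if task == "email_open":
        # Toy heuristic in place of a real model.
        return 0.42 if data.get("subject_line_has_question") else 0.25
    raise ValueError(f"No model available for task: {task}")

def human_decide(prediction: float, threshold: float = 0.35) -> str:
    """The judgment step: a person decides what action the prediction warrants."""
    return "send the campaign" if prediction >= threshold else "rewrite the subject line"

# The human sets the task and supplies the data...
campaign = {"subject_line_has_question": True, "send_hour": 9}
p = machine_predict("email_open", campaign)

# ...and then applies judgment to the machine's prediction.
print(f"Predicted open likelihood: {p:.0%} -> decision: {human_decide(p)}")
```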

[01:07:53] Paul Roetzer: So while AI is good at the prediction, especially when it has quality data to learn from, it's human experience and judgment that are essential to [01:08:00] guide AI and turn its outputs into optimal decisions and actions. And that, to me, is hope. That means there's plenty for us to do. You know, Elon is saying all the jobs are going to be gone.

[01:08:09] Paul Roetzer: Well, we've got to solve for this. Humans are still really important. Our instinct, our experience, our intuition, everything we've learned before, our ability to go through these chains of thought, you know, use our reasoning and apply it to judgment. All of these things are needed for the foreseeable future.

[01:08:24] Paul Roetzer: So basically, I think we're in a good place. We've got time to figure this out. And I think

[01:08:30] Paul Roetzer: Prediction Machines is a great book. Even, you know, seven years later, it's still a worthwhile read, even though it was written long before ChatGPT.

[01:08:38] Mike Kaput: All right, Paul, that's a wrap on this week's news.

[01:08:42] Mike Kaput: Appreciate you demystifying everything for us. As a quick reminder, if you have not left a review of the show yet, we would very much appreciate it. It helps us get into the earbuds of other people and get in front of even more people with our content each and

[01:08:59] Mike Kaput: [01:09:00] every week. Also, if you have not checked out our newsletter, please go to marketingaiinstitute.com/newsletter, where we summarize, every week, all of

[01:09:08] Mike Kaput: the news that we covered today, and also all the news that didn't make it into today's episode. Paul, thanks again for going through another wild week in artificial intelligence.

[01:09:21] Paul Roetzer: Yeah. Thanks Mike. And one final reminder, episode 102 dropping on Wednesday, June 12th.

[01:09:27] Paul Roetzer: Instead of Tuesday, June 11th, so we can talk about everything Apple, WWDC, the developer conference from Apple, whatever it's called. That's happening on June 10th. So thanks again for being with us. We'll talk to you next week.

[01:09:41] Paul Roetzer: Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey. And join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, [01:10:00] taken our online AI courses, and engaged in the Slack community.

[01:10:04] Paul Roetzer: Until next time, stay curious and explore AI.