
57 Min Read

[The AI Show Episode 98]: Google I/O, GPT-4o, and Ilya Sutskever’s Surprise Departure from OpenAI


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.

Learn More

Get ready for a deep dive into Google's AI announcements from Google I/O 2024, including Gemini 1.5 Pro's integration into Google Workspace, Project Astra, Gems, and Veo. Hosts Paul Roetzer and Mike Kaput also examine the unveiling of OpenAI's latest flagship model, GPT-4o, which puts far more capable AI in the hands of free users, as well as the surprise departure of the company's chief scientist, Ilya Sutskever.

Listen or watch below, and find the show notes and the transcript further down the page.

Today’s episode is Episode 98, which means we’re almost to the milestone of Episode 100! To celebrate, we plan on releasing a special Episode 100 on Thursday, May 30 that is all about YOU.

In that episode, we plan on answering audience-submitted questions, and anyone can submit a question for consideration.

The link below will take you to a quick form that has a spot for entering your questions. We will curate and answer as many as we can in a couple weeks.

So, go ahead and submit your questions before May 30 at https://bit.ly/aishow100

Listen Now

Watch the Video

Timestamps

00:04:49 — Google I/O Announcements

00:28:07 — GPT-4o

00:38:23 — OpenAI’s Chief Scientist and Co-Founder Is Leaving the Company

00:57:28 — Apple’s AI Plans

00:59:51 — Behind HubSpot AI

01:02:22 — US Senate’s AI Working Group Report + Key AI Bills

01:05:19 — Co-Founder of Instagram Joins Anthropic As Chief Product Officer

01:07:23 — AI Use Case Spotlight: Interview Prep

01:10:45 — AI Impact Assessments

Summary

Google I/O’s Impressive AI Announcements

Google just made some huge AI announcements at its Google I/O developer event last week. We got a ton of updates from the event, but here's a quick rundown of the notable announcements:

Gemini 1.5 Pro will soon be infused into all your Google Workspace apps.

Google also announced that Gemini 1.5 Pro itself, which is a 1M token context model, will be available to Gemini Advanced users, which means you can literally upload a 1,500-page PDF to it if you want (a brief code sketch follows this rundown).

Google previewed what it’s calling Project Astra, an experimental multimodal AI assistant that can understand everything it sees through your device’s camera. That means it can see the world around you and answer questions about it, remember where things are in the world, and do things for you.

Google announced Gems, which is a custom chatbot creator—basically its answer to OpenAI’s GPTs, where you create a custom version of Gemini for a specific task. We also got word of Veo, Google’s answer to OpenAI’s Sora. Veo is a model that generates 1080p video based on text, image, and video prompts.

And Google Search is getting its first major overhaul in decades. Google will roll out “AI Overviews,” which used to be called Search Generative Experience when it was in limited rollout, to everyone in the US this week. These are AI-generated and summarized search results that appear at the top of a page.

There were a ton of other updates as well, dealing with a range of new Gemini capabilities for engaging with your photos, planning trips, and making your Android phone smarter, as well as the announcement of a new, smaller/faster Gemini 1.5 model called Flash and Gemini Nano that will later be added to Google Chrome.
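To make the long-context point above a bit more concrete (see the Gemini 1.5 Pro item in the rundown), here is a minimal sketch of what uploading a large PDF could look like with Google's google-generativeai Python SDK. It is illustrative only: the API key, file name, and prompt are placeholders, and exact model and method names may differ by SDK version.

```python
# Illustrative sketch only -- not from Google's announcement.
# Assumes the google-generativeai package and a valid API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Upload the large document once via the File API...
big_pdf = genai.upload_file("example_1500_page_report.pdf")  # placeholder file

# ...then pass it to Gemini 1.5 Pro along with a question that spans the whole document.
model = genai.GenerativeModel("gemini-1.5-pro")  # model name may vary by SDK version
response = model.generate_content(
    [big_pdf, "Summarize the key themes and list any figures mentioned more than once."]
)
print(response.text)
```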

OpenAI’s New Flagship Model, GPT-4o

OpenAI made a huge announcement last week when it dropped a new flagship model, GPT-4o. With GPT-4o, you can now communicate with ChatGPT in real time via live voice conversation, video streams right from your phone, and text.

What is really noteworthy here is just how much of an improvement this is over the capabilities of the previous versions of GPT-4. GPT-4o is trained as a single model end-to-end across text, vision, and audio. (The “o” in the name stands for “omni.”)

As a result, it performs just as well as previous GPT-4 models on text, reasoning, and coding, and sets new benchmarks for performance across multilingual, audio, and vision capabilities.

But perhaps one of the biggest changes here is that everyone gets access to it. Previously, you could mostly only get access to GPT-4 models with a paid ChatGPT Plus, Team, or Enterprise account.

GPT-4o will be available to all free users, giving millions of people access to a far more powerful version of AI than what they’ve been using before. (Plus users aren’t forgotten, though: they get 5X higher message limits in exchange for paying for a license, and OpenAI says they’re rolling out the new version of Voice Mode with GPT-4o in alpha within Plus in the coming weeks.)

Last, but not least, there’s a lot for developers to like here, too: GPT-4o is 2X faster, half the price, and has 5X higher rate limits compared to the previous version of GPT-4.
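For developers, a minimal sketch of calling GPT-4o with mixed text and image input through the OpenAI Python SDK might look like the following; the prompt and image URL are placeholders, and the faster speed, lower price, and higher rate limits are handled on OpenAI's side rather than in code.

```python
# Illustrative sketch only; assumes the openai Python package and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What stands out in this chart?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},  # placeholder
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```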

OpenAI’s Chief Scientist and Co-Founder Is Leaving the Company

Ilya Sutskever, OpenAI co-founder and chief scientist, is leaving the company. In an announcement on X on May 14, Ilya said he’s leaving after almost a decade.

In the announcement, he had nothing but glowing things to say about CEO Sam Altman, who he attempted to help oust late last year, and president/co-founder Greg Brockman. Both Altman and Brockman returned the favor with their own glowing statements on X about his departure.

This appears to be the first public statement Sutskever has made on X since late 2023. (His silence after the failed boardroom coup against Altman was so deafening that it prompted months of “Where’s Ilya?” memes.)

Jakub Pachocki, who worked alongside Sutskever, is replacing him as chief scientist at the company.

At the same time, a lesser-known researcher who worked with Sutskever, Jan Leike, resigned from OpenAI. Leike ran the superalignment team with Sutskever, a group working on making sure that superintelligent AI, if/when it’s developed, benefits humanity.

Links Referenced in the Show

Today’s episode is brought to you by Piloting AI, a collection of 18 on-demand courses designed as a step-by-step learning path for beginners at all levels, from interns to CMOs. Piloting AI includes about 9 hours of content that covers everything you need to know in order to begin piloting AI in your role or business, and includes a professional certification upon passing the final exam.

You can use the code POD100 to get $100 off when you go to www.PilotingAI.com

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: once you know what AI is capable of, you just find use cases in your daily life. And that to me is like the real value of AI: find these use cases in your life to help you do things.

[00:00:11] Paul Roetzer: Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of Marketing AI Institute, and I'm your host. Each week, I'm joined by my co host, and Marketing AI Institute Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career.

[00:00:41] Paul Roetzer: Join us as we accelerate AI literacy for all.

[00:00:48] Paul Roetzer: Welcome to episode 98 of the Artificial Intelligence Show. I am your host, Paul Roetzer, along with my co host, Mike Kaput. We have a pretty wild week of [00:01:00] stuff. I've explained before how we prep for this show.

[00:01:05] Paul Roetzer: Um, you know, basically we share links all week long and then we curate them into the main topics and the rapid fire items. And then Mike and I prep Sunday night, Monday morning, and we kind of get ready to go. This is one of the more complex ones I've ever prepped for, to be honest with you. We're doing this Monday morning, 7:30 AM Eastern time.

[00:01:26] Paul Roetzer: I was lying in bed last night, like 11:30, and my head was just ready to explode trying to process mainly the OpenAI and Google stuff. So we are going to do our best to unpack what was a pretty crazy week in AI. There's a lot, though. So we're going to try to curate this, make it all make sense, filter through the stuff that actually really matters to you, and then talk about some of the more long-term, far-reaching, crazy stuff.

[00:01:55] Paul Roetzer: So today's episode is brought to us by Piloting AI, a collection [00:02:00] of 18 on-demand courses designed as a step-by-step learning journey for beginners at all levels, from interns to chief marketing officers. More than a thousand professionals have registered for the certification series since it first launched in December 2022.

[00:02:16] Paul Roetzer: The fully updated piloting AI 2024 series includes more AI technology demos and vendors, revised templates, and a brand new generative AI 101 course. Mike and I refreshed and recorded all 18 courses at the end of January 2024. So they are fresh, all kinds of new information. It's about nine hours of content that covers everything you need to know in order to begin piloting AI in your role or in your company and includes a professional certification upon passing the final exam.

[00:02:47] Paul Roetzer: It's not just for individual learners, it's also designed for entire teams. We've had some of the top companies in the world put their teams through the courses as a way to dramatically accelerate their team's understanding and adoption of AI in an accessible, [00:03:00] affordable way. If you're interested in purchasing Piloting AI, you can use the code POD100 to get $100 off, and you can go check all that out at PilotingAI.com

[00:03:11] Paul Roetzer: If you're interested in a team license, there are even better discounts available. So definitely get in touch with our team. So again, PilotingAI.com. And then I also wanted to mention this before we get started. We talked about this last week. We are quickly approaching episode 100. We are just a couple away.

[00:03:27] Paul Roetzer: And so Mike and I are going to do a special episode as kind of a thank-you to everyone for listening and being a part of this growing community. And so episode 100 is going to be all about your questions. So if you have questions you'd like Mike and me to potentially address during the episode, you can go to bit.ly/AIShow100.

[00:03:44] Paul Roetzer: That's B-I-T dot L-Y slash AIShow100. Put your questions in. You don't have to give contact information even. You can just throw your questions in there if you want. Nothing's required, I don't think, right Mike? [00:04:00] Maybe? Um, so you can submit those up till May 30th. So we've got 10 days left.

[00:04:05] Paul Roetzer: Um, oh no, the episode is May 30th, so do it before May 30th. Do it by May 28th; Mike and I will probably go ahead and curate those. So again, bit.ly/AIShow100 to get your questions submitted for episode 100 of the Artificial Intelligence Show. Okay, Mike. We had Google I/O last week. We recorded episode 97 on a Friday.

[00:04:30] Paul Roetzer: Shortly after that, OpenAI announced that they were holding a surprise event on Monday. So that happened after we recorded episode 97. So we are here to talk about all things GPT-4o and Google I/O and everything else we've got to cover this week.

[00:04:47] Mike Kaput: Yep, it's been a busy week, Paul.

[00:04:49] Google I/O Announcements

[00:04:49] Mike Kaput: So first up, like you said, Google just made a ton of AI announcements at the Google I/O developer event last week. And there were a ton of updates [00:05:00] that were announced and rolled out. We're going to deep dive into a few of what we see as the most important ones, but I'm going to kind of quickly run down some of the notable announcements here to give you a sense of exactly what caliber of AI updates we're talking about here.

[00:05:15] Mike Kaput: So first up, Gemini 1.5 Pro is soon going to be infused into all Google Workspace apps. So it is going to function as a general-purpose assistant that can get any info or docs you need across Drive, summarize emails, work across Docs, Sheets, Slides, Drive, and Gmail to help you do a range of things and make you more productive.

[00:05:40] Mike Kaput: Now, Google also announced that Gemini 1.5 Pro, which is a 1 million token context model, is now going to actually be available to Gemini Advanced users. So that 1 million token context is just utterly massive. It just blows the other models out of the water. [00:06:00] And it means you can literally upload a 1,500-page PDF to it if you want and use that

[00:06:07] Mike Kaput: in concert with the model. Now, they also mentioned that Gemini 1.5 Pro is getting expanded as well to a whopping 2 million tokens. Now, we don't know when that expanded context version of the model is going to be available to all of us, but CEO Sundar Pichai did say, quote, this represents the next step on our journey towards the

[00:06:31] Mike Kaput: ultimate goal of infinite context. So, functionally, no limit to the amount of information you can be using with Google's models. Now, Google also previewed something it is calling Project Astra, which is an experimental multimodal AI assistant that can understand everything it sees through your device's camera.

[00:06:53] Mike Kaput: So it means it can see the world around you, answer questions about it, remember [00:07:00] where things are in your world as you're using your phone to show it your environment, and then perform actions for you based on what it sees. Now, Google also announced something called Gems, which is a custom chatbot creator.

[00:07:15] Mike Kaput: This is basically their answer to OpenAI's GPTs, where you can create a custom version of Gemini for a specific task. We also got word of Veo, which is Google's answer to OpenAI's Sora. Veo is a model that generates 1080p video based on text, image, and video prompts. And, just in case there weren't enough updates, Google Search is getting its

[00:07:41] Mike Kaput: first major overhaul in literally decades. So Google is rolling out what it's calling AI Overviews. Now, these used to be called Search Generative Experience. You may have actually already used it, seen it, if you've had it enabled in your Google account. But it is now rolling out to [00:08:00] everyone in the U.S.

[00:08:00] Mike Kaput: this week. So these are those AI-generated and summarized search results that appear before all the traditional links at the top of a page. Now, a ton of other updates as well. There were some that dealt with a range of new Gemini capabilities for engaging with your photos, planning trips, making your Android phone smarter.

[00:08:21] Mike Kaput: We got the announcement of a new, smaller, and faster Gemini 1.5 model called Flash. And also, Gemini Nano is getting added right into Google Chrome. So, Paul, there's a lot going on here, but in your mind, which of these announcements jumped out to you the most?

[00:08:40] Paul Roetzer: Yeah, so this was, so this was Tuesday morning, well, Tuesday afternoon, I guess. So I was actually in Columbus Tuesday morning doing a fireside chat keynote at the Angel Capital Association. So my day started talking to like investors and fund managers and things like that. So, And then I'm driving back from Columbus to Cleveland, right as this was [00:09:00] about to start.

[00:09:00] Paul Roetzer: So I'm listening to this live stream in my car. I've got it playing through my phone as I'm driving. And it just so happens that it's about a two hour drive, and it was about a two hour live stream of this portion of it. There was a number of times where I just, like, wanted to stop and be like, wait, what?

[00:09:17] Paul Roetzer: Like, how, what did they just show? 'Cause I'm obviously not really watching it; my full self-driving Tesla isn't really full self-driving, therefore you can't actually, like, watch the whole thing. So I did kind of glance down every once in a while, just like, I don't even understand what they just explained.

[00:09:34] Paul Roetzer: Like, how are they doing that? So there was a lot. I will say there were a lot of announcements, not many releases. So all the stuff Mike's talking about, there's very little of it that you can go experiment with right now. So let's just be clear: this was a bunch of stuff that's coming. Some, it sounds like, maybe in a few weeks; some, we have no idea.

[00:09:54] Paul Roetzer: Like, probably Project Astra. We don't know when it's going to be out. So as I was lying in bed last night [00:10:00] trying to prep for this, I tried to categorize this as, like, things I'm most excited about, things I'm most unsure about, and things with big potential. And then I also had a category of things that could end up being really important that we're probably glossing over right now.

[00:10:12] Paul Roetzer: So I'll run through these real quick.

[00:10:14] Paul Roetzer: Things I'm most excited about: 100 percent, Gmail search actually being good. This has always been one of the more perplexing things in my professional life: how useless Gmail search and Google Drive search are, like, just painfully bad search functions, and yet this is the greatest search company on earth.

[00:10:35] Paul Roetzer: So anything they do to make Gmail search really functional is amazing. It seems like they're actually going to go a step further than that and bring new capabilities into it that I would have never even thought of, obviously. So. They gave some examples of, you know, being able to curate receipts, for example, like to go in and search for receipts.

[00:10:58] Paul Roetzer: And then not only that, [00:11:00] to train it on tasks related to those receipts.

[00:11:04] Paul Roetzer: So they're saying you're going to basically be able to, like, have it do something and then have it learn that function, where you're starting to kind of show the capabilities around the agents, in a way. Then they gave another example that I absolutely loved, which was, you know, your kid's school emails you all the time, which a hundred percent they do.

[00:11:23] Paul Roetzer: I mean, I get like three emails a day from my kid's school, and sometimes you just don't look at them for a few days. And so they said you can go in and give it an email address and say, summarize all the recent emails from the kid's school email address, basically. And it, like, pops up all the key things that you need to know.

[00:11:39] Paul Roetzer: It's like, man, that's a really valuable function, especially in business. It's like, I've been out for 10 days. Summarize everything that Mike sent to me over the last 10 days. Like, highlight the key things for me. So to make Gmail not just a search, but to actually make it like a functional assistant is awesome.

[00:11:55] Paul Roetzer: Um, I'm extremely excited about anything they do to make [00:12:00] Gemini more valuable within Google Workspace. We are Google Workspace customers. We have Docs, Sheets, um, what's the, Slides, the PowerPoint version. So we use all of that. And so the idea of having that side panel that is connected to everything would be awesome.

[00:12:19] Paul Roetzer: Now, their blog post is a little misleading because it starts off with: starting today, Gemini in the side panel is available for Google Docs, Sheets, Drive, Slides, everything. Then when you get down to the bottom, it says: actually, Gemini in the Workspace side panel is now available for Workspace Labs and Gemini for Workspace Alpha users.

[00:12:35] Paul Roetzer: So it will be available next month on desktop for businesses and consumers through Gemini for Workspace add-ons and the Google One AI Premium plan. So I guess that is something to look forward to sometime in the next month. If you are using Google Workspace apps, you should be able to have Gemini support.

[00:12:54] Paul Roetzer: Then I saw something interesting that wasn't in their announcements, that I saw at least, but when I was looking [00:13:00] at their blog post announcing the Gemini Workspace updates, and we'll put a link in the show notes, if you go to it, there is an, um, AI summary at the top. There's a button that says read AI-generated summary.

[00:13:14] Paul Roetzer: So it's using Gemini to automatically summarize the blog post. And then when you click, it gives you the general summary that looks almost like the AI Overviews in search, but then you can also click a bullet point version, so I can, like, have the style in bullet points. It also has the option to listen to the article, which I assume is also AI-generated.

[00:13:33] Paul Roetzer: All AI generated. So as a brand, as a publisher, this is really fascinating to me. Like, I would, I would love to have that capability. Like, I would put that on our blog today if we could.

[00:13:43] Paul Roetzer: Um, so that was kind of cool. All right. So that was the stuff I was kind of excited about. The thing I'm unsure about is the AI overviews in search.

[00:13:49] Paul Roetzer: And maybe Mike will come back and talk about that one for a minute.

[00:13:52] Paul Roetzer: Um, they talked about the AI teammate, this idea of like, again, I don't, they're not, I'm not calling it agents as much, but this [00:14:00] idea of having that teammate that's always with you and things like that, I like that idea. I think that's something to build on there.

[00:14:07] Paul Roetzer: The big potential ones, certainly Project Astra. I think they debuted part of that in December when they got, you know, sort of grilled for maybe showing some capabilities that weren't actually available or weren't done in, like, single shots and things like that. So they talked about that being available later this year through Gemini Live, and Hassabis said they were testing several prototypes of smart glasses, because there was one example where they actually panned past some smart glasses.

[00:14:36] Paul Roetzer: And then Sergey Brin, one of the co-founders of Google, was at the event, and one of the reporters actually said, hey, Sergey, this seems like the Google Glass thing from 10 years ago, like, now is the time. And he said something like they had the right form factor, they were just early, basically. So I wouldn't be surprised at all if we hear something related to Google Glass being resurrected at some point here.[00:15:00] 

[00:15:00] Paul Roetzer: Um, and then in the things-that-could-end-up-being-really-important category: you mentioned Gemini 1.5 Flash, which is kind of that smaller, faster model. The Trillium TPUs, so they introduced new versions of their chips, basically their answer to GPUs. Now, they use both their own tensor processing units, TPUs, and GPUs in their AI, but this, I think they said it was like 4.5

[00:15:25] Paul Roetzer: times more efficient maybe than the current generation. You mentioned the 1 million tokens. They also said they're going to 2 million tokens, which is two hours of video, 22 hours of audio, 1.4 million words, like, it's a massive amount. But the infinite context that you mentioned Sundar said, that was one where my ears perked up as soon as I heard it.

[00:15:45] Paul Roetzer: That was the first time I've heard them talk about, like, infinite context, which basically means memory. Like, the ability for these things to not forget, and to very accurately find things within it. Along those lines, they did release a report that I think is [00:16:00] worth, we're not going to spend a lot of time on it today, but I think this might be worth coming back to.

[00:16:03] Paul Roetzer: So it's called Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens of Context. A 153-page report.

[00:16:14] Paul Roetzer: I did not read the whole thing, but I want to read a quick excerpt here because I think this kind of connects why the faster model matters to us as business professionals or as developers. So in the introduction, it says: in this report, we introduce the Gemini 1.5

[00:16:30] Paul Roetzer: family of models, representing the next generation of highly compute-efficient multimodal models, capable of recalling and reasoning over fine-grained information from millions of tokens of context. Okay, so pause there for a second. A couple of big words in there that we want to just make sure everybody, like, understands.

[00:16:47] Paul Roetzer: So "representing the next generation of highly compute-efficient" means they don't need as many chips to power this, maybe in training or inference when they actually output something, because they've [00:17:00] made the chips themselves more efficient. So that efficiency they unlock with the Trillium chips actually enables them to build these kinds of faster models.

[00:17:08] Paul Roetzer: And then multimodal, meaning text, image, video, audio. And then the idea that it's recalling and reasoning over fine-grained information from millions of tokens, meaning not only can you give it 2 million tokens or, you know, 1.4 million words, it's actually exceptionally good at finding very specific kind of needle-in-the-haystack stuff within that and recalling things very accurately over that.

[00:17:36] Paul Roetzer: So the issues with these models for the last couple of years were, one, they didn't have very much context. You know, I think Claude right now is 200,000 tokens, which is about 160,000 words or so, and that's fine. It actually needs to be able to accurately search that context, though, and that's what we've kind of been missing: the size of the context and then the quality of the ability to recall things [00:18:00] and find things within that context.

[00:18:01] Paul Roetzer: So they're solving for both of those things here. The family includes 1.5 Pro, which you touched on, and 1.5 Flash. But then the part I boldfaced, it says: studying the limits of Gemini 1.5's long-context ability, we find continued improvement in its next-token prediction and near-perfect retrieval, greater than 99 percent accuracy.

[00:18:21] Paul Roetzer: Meaning, when you look for something in that context, it's finding it. And it says, up to at least 10 million tokens. So they're going to release 2 million. But their research is already ahead of this saying this thing is staying accurate up to at least 10 million tokens, which I don't even know what the use cases are for that in business.

[00:18:39] Paul Roetzer: I mean, that's basically your entire knowledge base as a company. And then they compare it to Claude, which is 200,000, and GPT-4 Turbo, which is 128,000. And then finally: we highlight real-world use cases, such as Gemini 1.5 collaborating with professionals on and completing their tasks, achieving 26 to [00:19:00] 75 percent time savings across 10 different job categories.

[00:19:04] Paul Roetzer: So they actually went in and had professionals craft, like these advanced prompts, and then they gauged how much time would be saved using these prompts with some examples within this. So pretty cool stuff. again, not all things we're going to see today that are going to like affect our jobs tomorrow, but it definitely hints at where it's going.

[00:19:25] Paul Roetzer: Um, one final note here. Where do you get all this stuff was the question I had. So you can go to labs.google.com, and that's where a lot of this is going to be previewed. You can sign up to join waitlists. They have VideoFX, which I think is the Veo thing. They have ImageFX, MusicFX, they have NotebookLM.

[00:19:45] Paul Roetzer: So this is kind of where they test these things. So if you've never been to labs.google.com, go there, check it out, and there's some good information there. And then,

[00:19:55] Paul Roetzer: uh, maybe we'll put a pin in this one and come back to this maybe on the next episode, [00:20:00] Mike, but on May 17th, they introduced their Frontier Safety Framework.

[00:20:04] Paul Roetzer: So DeepMind released a paper or a post and a report that basically said, okay, we're now studying or we're going to start sharing what we've been studying about the high risk. So on episode 94, we talked about Anthropic's responsible scaling policy. This is kind of along the same lines of Google's Frontier Safety Framework, where they try and project risks as the models get smarter and then prepare for those.

[00:20:30] Paul Roetzer: So, it was a lot, man. Was there anything you were most excited about, like, that was different from mine? The Gmail and the search stuff?

[00:20:40] Mike Kaput: No, nothing different. Just to emphasize again, the longer context windows and the ability to retrieve that information accurately, I think, is extremely impactful. And one of those things that

[00:20:52] Mike Kaput: we've talked about a ton of times on this show is, like, figuring out where the future is going and [00:21:00] planning for that versus, you know, responding to the current capabilities. I think context is an obvious example of that for me.

[00:21:07] Paul Roetzer: Yeah, and I think this, again, is probably more of, you know, if you're not in IT, you know, on the data side, the more technical side of this stuff, and you're, like Mike and me, more of a business professional trying to figure this stuff out and find use cases and solve problems more intelligently, the context window becomes really, really important, because the way that we're currently solving for this stuff is going to change. And so, a practical example, I'll think out loud here for a second: if Gemini has access to our entire Google Drive, which has everything we've ever created as a company, like all of our information is in there, every document, everything, and it has the ability to recall and find anything within that accurately with a simple prompt.

[00:21:56] Paul Roetzer: Then all of a sudden, like when I go to use Gemini to [00:22:00] help me, say, draft a proposal, or craft a reply to something, or build a presentation, it already knows all of that. Like, it can have millions of tokens of context. And so everything in that knowledge base is already there. I don't have to, you know, You know, give it 15 examples.

[00:22:16] Paul Roetzer: I can literally just say, here's the folder on the drive, like, this is where everything is already, or go find the folder on the drive. Great, all the documents in there are what you're going to need to help me in this next project we're doing together. So I think that in the next, you know, this is probably a near-term horizon, like 12 to 18 months, it's going to start to become very commonplace where people are going to just have either Microsoft, Amazon AWS, or, you know, Google with Gemini.

[00:22:44] Paul Roetzer: They're just going to have these models connected to their knowledge bases and this context window. As it keeps getting bigger, it's going to enable this like instant recall of anything that's ever happened. And if it's connected to your Gmail, your Docs, your Sheets, [00:23:00] um,

[00:23:00] Paul Roetzer: it's all going to start to just be, literally, the Gemini module on the side.

[00:23:05] Paul Roetzer: And I can be in Gmail, Calendar, Docs, doesn't matter. I can literally say, like, hey, at a moment's notice, I'm thinking about the MAICON event last year: where were we at 17 weeks out from that event? It's going to know I'm talking about something in Sheets. It's going to go into Sheets. It's going to run a pivot table.

[00:23:23] Paul Roetzer: It's going to build the thing and it's going to spit back to me: 17 weeks out, you'd sold X amount of tickets. Great, what are you projecting our current sales trend to be at? And it's going to do it while I'm sitting in my calendar. Like, I'm not going to have to go find it. That's awesome. Like, that's the potential of where this goes in the very near term, because it has almost this infinite context where it can understand the connections between all of these things and you can just talk to your data.

[00:23:49] Mike Kaput: So, before we move on to the next topic, I just kind of wanted to get your perspective, kind of first-draft thoughts on, you know, with all these announcements, I think it's not an [00:24:00] understatement to say the SEO world is freaking out, because I've seen a ton of commentary online where a lot of people are starting to kind of sound some alarm bells here, because AI Overviews in particular are kind of causing a lot of worry that we could be starting to see search change in a way that's damaging

[00:24:21] Mike Kaput: to websites and publishers due to AI-summarized results. So, like, how concerned are you about this, specifically that they're moving to this format?

[00:24:33] Paul Roetzer: Yeah, I mean, I think people are rightfully concerned about this. We've talked about this numerous times on the show, that there's going to be an impact to organic search, that as these AI Overviews start becoming a prominent part of what's going on, you're going to see a reduction in search traffic coming from it. Um,

[00:24:51] Paul Roetzer: What is the play? How do we address that? We, we just don't know. And Google's not sharing much data yet. So Sundar did say on stage that [00:25:00] they're seeing like high levels of engagement with these AI overviews, the search generative experience as it was known.

[00:25:05] Paul Roetzer: Um, they didn't really say what that means, though, because these things aren't consistent.

[00:25:11] Paul Roetzer: Like if you go in right now, you may see AI overviews for a search you conduct, or you may see the traditional blue links. So it's part, I think they're just managing how often it's showing. I think there's just a lot of experimentation on their end going. It seems to be connected to the type of search you perform.

[00:25:30] Paul Roetzer: Um, I found them honestly quite valuable. Like I, I like the AI overviews,

[00:25:37] Paul Roetzer: Will I click on, click on fewer links? I don't know. Like I'm, I've never sat down and said like, am I, am I not going to these links? But I do think it's going to impact people. So yesterday I was doing a keynote for a group of publishers, like magazine publishers, and this was obviously a key topic for them is like, what is the impact going to be on media companies and publishers as we move forward?

[00:25:58] Paul Roetzer: So I [00:26:00] think that there's a lot still to be determined. I don't know where this goes and the true impact it has, but I think that brands need to prepare for a drop in organic traffic. And, like, I don't remember if I talked about this on the episode last week or a previous episode, but I talked about our brand, like, our sort of search-neutral strategy for our content.

[00:26:23] Paul Roetzer: So we knew that this drop was coming for the last year and a half, and so we have diversified the distribution of our content. So we record this podcast. It goes on podcast networks. It goes on YouTube. It gets cut up into, you know, 10 different videos on YouTube. It goes onto TikTok with Shorts. It goes onto Instagram.

[00:26:39] Paul Roetzer: Like, we've kind of built this out. Plus, we focus on building subscribers to our newsletter. So we have 75,000 or so subscribers to the newsletter. So if organic traffic to our site stopped today, which it's not, we've actually seen an uptick in organic traffic so far for us, but if it were to stop today,

[00:26:57] Paul Roetzer: It wouldn't have a measurable impact on our brand. [00:27:00] Like, we have basically diversified out to where our audiences are, and that enables us to kind of remain strong. So I would encourage people to really think about kind of an organic search neutral strategy where worst case scenario, it's just gone. it's not going to happen like that for most companies.

[00:27:18] Paul Roetzer: It'll be kind of more of a gradual decline and it may level off. Some may actually see an increase. But you should plan for the fact that maybe your search traffic drops in a significant way. And don't wait for that to happen before you start diversifying the strategy, basically. I don't know, Mike, you're our chief content officer, like any other thoughts on that?

[00:27:37] Paul Roetzer: I mean, that's kind of our approach, right? It's just, like, make sure we're getting content in all the different channels so we're not dependent upon it.

[00:27:43] Mike Kaput: Yeah, absolutely. Diversification, and also just nothing can compare to owning your own audience. So I think we've, you know, been very fortunate and strategic in that over the years, but yeah, those are two things that, unfortunately, if you haven't started doing those [00:28:00] things yet, that's a bit of an uphill climb, but there's no better day to start than today, I would say. 

[00:28:07] GPT-4o

[00:28:07] Mike Kaput: Alright, so next up, OpenAI also had a big announcement this past week when it dropped its new flagship model, GPT-4o. GPT-4o now allows you to communicate with ChatGPT in real time via live voice conversation, use video streams right from your phone to communicate with it, and also typical text. Now, this model was demoed this past Monday during a live event by OpenAI CTO Mira Murati and two OpenAI researchers.

[00:28:40] Mike Kaput: Now, in this demo, the researchers carried on natural live conversations with the model, and it appeared to pretty decently handle kind of pivoting in real time during the conversation when, for instance, it was interrupted. It's now also able to speak in different intonations, different tones [00:29:00] of voice.

[00:29:01] Mike Kaput: It even carried on a conversation with Murati in fluent Italian. Now, the video streaming looked interesting as well. One of the researchers filmed himself doing a math problem and had GPT-4o follow along in real time and help him reason through the problem itself. Now, what's really noteworthy here is just what an improvement this is over the capabilities of the previous versions of GPT-4.

[00:29:30] Mike Kaput: GPT-4o is trained as a single model end to end across text, vision, and audio. In fact, the "o" in the name stands for "omni." As a result, it performs just as well as previous

[00:29:45] Mike Kaput: GPT-4 models on text, reasoning, and code, and then sets a whole new set of benchmarks for performance across multilingual, audio, and vision capabilities.

[00:29:57] Mike Kaput: But perhaps one of the most exciting things [00:30:00] here is not only how capable this model appears to be, but that everyone is now getting access to it. So previously, we've talked about a bunch of times, you could only get access to GPT-4-class models typically with, like, a paid ChatGPT Plus, Team, or Enterprise account, for the most part.

[00:30:19] Mike Kaput: In a change from that, GPT-4o will now be available to all free users. So millions and millions of people are going to get access to a far more powerful version of AI than what they've been using before. Now, Plus users are still obviously paying for a subscription. They do get five

[00:30:39] Mike Kaput: times higher message limits with GPT-4o, and OpenAI also says they're rolling out the new version of Voice Mode with GPT-4o in alpha within Plus specifically. So there are still, it seems, some benefits to having the paid account. And last but not least, there's a lot for developers to like here. [00:31:00] GPT-4o is two times faster, half the price, and has five times higher rate limits compared to the previous version of GPT-4.

[00:31:09] Mike Kaput: So Paul, what were your first kind of overall impressions of GPT-4o?

[00:31:13] Paul Roetzer: Yeah, there, again, like Google, there's so much to unpack here. There are not as many announcements, obviously. They definitely focused in on the voice thing. They've had to walk some stuff back already, which is

[00:31:23] Paul Roetzer: kind of fascinating. Um, so first, again, is it available? Is it not? What they said was text and image capabilities started rolling out in ChatGPT that day on May 13th.

[00:31:37] Paul Roetzer: Um, they'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks. So I did hear some confusion from people who would go in and, like, talk, because voice was already in ChatGPT. And so people were thinking that that was the new thing, but it's not the new thing yet.

[00:31:54] Paul Roetzer: They haven't actually rolled this out. Now, Sam, I think, got himself in a little bit of trouble [00:32:00] because the day of the announcement, he tweeted "her." Like, that was the entire tweet, meaning it sounds like Scarlett Johansson from the movie Her. They then had to put out a statement yesterday, or maybe this was even this morning, um, that further explained how they trained these models.

[00:32:18] Paul Roetzer: So, this was May 20th. Yeah, it was 2:33 AM, May 20th. So this is why I think we've got a legal notice potentially involved here. Um, so the tweet from OpenAI's account was: we've heard questions about how we chose the voices in ChatGPT, especially Sky, which is the one that sounds like her. We are working to pause the use of Sky while we address them.

[00:32:42] Paul Roetzer: Now, why they would have to pause it, I don't know, because that tweet linked to a blog post on their site that says how we chose the voices, and they say that they had 400 submissions, they worked with voice actors and agencies to basically narrow it down and pick the five [00:33:00] voices, none of which was Scarlett Johansson. So why are they pausing something if it wasn't her?

[00:33:04] Paul Roetzer: I have no idea. But just a kind of fascinating side note here: maybe you'll have the Sky option when this comes out, maybe you won't, depending on how they solve for this. Now, I gotta give OpenAI props. They do, like, a wonderful job. Maybe it's GPT-4 writing these things, I

[00:33:21] Paul Roetzer: don't know. But their blog posts do a really good job of making things understandable. So I'll just go to their blog post where it explains this idea of multimodal from the ground up. Now, again, this is something Google debuted in December with Gemini; like, that was the premise of Gemini, is it's trained on multimodal data, not just text, but images, video, audio. And so, we'll put it in the show notes.

[00:33:46] Paul Roetzer: This is the OpenAI blog post. It says: prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds for GPT-3.5 and 5.4 [00:34:00] seconds for GPT-4 on average. Meaning I go in, I use Voice Mode, I say, hey, you know, what's happening in Cleveland this week? And it would take up to like 5.4

[00:34:10] Paul Roetzer: seconds to respond to me. So that's the latency. It's like, it's not human. Like, it doesn't respond in a fast manner. So this is how they do multimodal, or how they previously did it, to achieve this: Voice Mode is a pipeline of three separate models. One model transcribes audio to text.

[00:34:29] Paul Roetzer: So I speak into it, it turns it within its system into text. GPT-3.5 or GPT-4 takes that text and outputs text,

[00:34:40] Paul Roetzer: and then a third simple model converts that text back to audio. So previously, like in the current Voice Mode, all three of these things happen before it responds to you.

[00:34:51] Paul Roetzer: So this process means the main source of intelligence, GPT-4, loses a lot of information.

[00:34:58] Paul Roetzer: It can't directly [00:35:00] observe tone, multiple speakers, or background noise, and it can't output laughter, singing, or expression. So since it's, like, three layers deep before it comes back to you, it misses all the nuance of the conversation, basically. With GPT-4o, we trained a single new model end to end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network.

[00:35:24] Paul Roetzer: Because GPT-4o is our first model combining all these modalities, we are still just scratching the surface of exploring what the model can do and its limitations. So that's why this is significant: this is kind of the next generation. Now we have Gemini that's built this way. We have GPT-4o. I'm sure GPT-5, whatever they call it, is being built the same way.
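As a rough illustration of the older three-model pipeline described above (and not OpenAI's actual Voice Mode implementation), a toy version chaining speech-to-text, a text model, and text-to-speech through OpenAI's public API could look like the sketch below; the model choices, helper function, and file names are assumptions for the example.

```python
# Illustrative sketch of the legacy "three models in a row" approach.
# Each hop adds latency and strips away tone, speakers, and other audio nuance.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reply_to_audio(question_path: str, answer_path: str) -> None:
    # 1) Transcribe the spoken question to text.
    with open(question_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

    # 2) Generate a text answer with a text-only model.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": transcript.text}],
    )
    answer = chat.choices[0].message.content

    # 3) Convert the answer text back into audio.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    speech.stream_to_file(answer_path)

reply_to_audio("question.wav", "answer.mp3")  # placeholder file names
```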

[00:35:47] Paul Roetzer: So these things are now being trained on all these modalities from the ground up, which rapidly expands what they're going to be capable of doing. And then one other quick note on this one: Sam did his own blog post. Sometimes Sam [00:36:00] publishes on samaltman.com, and he said the new voice and video mode is the best computer interface I've ever used.

[00:36:08] Paul Roetzer: It feels like AI from the movies, thus the "her" tweet,

[00:36:11] Paul Roetzer: And it's still a bit surprising to me that it's real. Getting to human level response times and expressiveness turns out to be a big change. Talking to a computer has never felt natural to me, now it does. As we add, parenthesis, optional personalization. This is, again, a hint of where they're now going next.

[00:36:32] Paul Roetzer: As we add personalization, access to your information, the ability to take actions on your behalf, and more, I can really see an exciting future where we are able to use computers to do more than ever before. The question becomes: is this what Siri is?

[00:36:48] Paul Roetzer: Like, is that what we're going to get June 10th from Apple?

[00:36:51] Paul Roetzer: Are all these things he's previewing, and is OpenAI going to compete directly with Apple? Or are they going to work together to

[00:36:56] Paul Roetzer: do this kind of thing? So, um, yeah, I mean, [00:37:00] voice interface, absolutely where this is going. Vision interface is where it's going. We can see the future now starting to unfold with both of these announcements from Google and OpenAI.

[00:37:10] Paul Roetzer: It is a multimodal future. Text interface is still going to be a thing, but that is not going to be the only option, you know, one to two years from now.

[00:37:20] Mike Kaput: And just very briefly, as we kind of close this out, like, how big a deal is the GPT-4o becoming free thing, in your mind?

[00:37:28] Paul Roetzer: I think it's huge only because, you know, I've said this before: anytime I'm on stage, I'll ask the audience who's used ChatGPT, and every hand goes up; who has the paid version, and, like, most of the hands go down. And you miss a lot of the capabilities. So I still think that AI literacy is a critical step to this.

[00:37:47] Paul Roetzer: Like just giving people more powerful tools does not mean they're going to know what to do with them. again, you and I talk to people all the time, big companies, people, you know, at the engagements we do, and it's, [00:38:00] It's very rare you find people who are advanced users of these tools, who have like really built their own prompt libraries, have pushed the limits of what they're capable of.

[00:38:09] Paul Roetzer: So I don't know that just giving people more powerful tools solves it. I think the literacy still has to come with it, but it certainly means that we can accelerate adoption within the business world, and really throughout society, by doing this. 

[00:38:25] Paul Roetzer: Alright, so as if we haven't covered enough huge topics, our third one today is a pretty important one as well.

[00:38:33] Mike Kaput: Ilya Sutskever, the OpenAI co-founder and former chief scientist, is leaving the company.

[00:38:39] OpenAI’s Chief Scientist and Co-Founder Is Leaving the Company

[00:38:39] Mike Kaput: He posted on X on May 14th that he's leaving after almost a decade, and this was notable not only because he's leaving but also because we haven't heard from him in a while. This is, like, one of the first times it appears he's made a public statement on X since late 2023, right around the time when Ilya Sutskever was implicated in [00:39:00] the failed boardroom coup against Sam Altman that removed him briefly as CEO.

[00:39:05] Mike Kaput: Now, in his announcement, he had nothing but glowing things to say about Altman and about president and co-founder Greg Brockman. Altman and Brockman also kind of returned the favor. They released their own glowing statements on X about Ilya's departure.

[00:39:24] Mike Kaput: Now, Jakub Pachocki, who worked alongside Sutskever, is replacing him as the chief scientist at the company.

[00:39:32] Mike Kaput: Now, at the same time as Ilya leaving, a lesser-known researcher who worked with him, Jan Leike, resigned as well from OpenAI. Leike ran the superalignment team with Sutskever, which is a group working on making sure that superintelligent AI, if or when it's developed, ends up being safe, responsible, and benefits

[00:39:54] Mike Kaput: humanity. So, Paul, some high-profile departures and a lot to unpack [00:40:00] here, as well as a lot kind of left unsaid about the whole debacle. What do you make both of the actual announcement and kind of what it means for the future?

[00:40:09] Mike Kaput: Ha ha ha.

[00:40:13] Paul Roetzer: Let's unpack this one for a minute. Okay, so superalignment. We covered this on an episode back in July of 2023. The superalignment team of Jan Leike and Ilya Sutskever was announced July 5th, 2023. In that post, they said, quote, we need scientific and technical breakthroughs to steer and control AI systems much smarter than us.

[00:40:36] Paul Roetzer: To solve this problem within four years, we're now almost one year into this,

[00:40:41] Paul Roetzer: we're starting a new team co-led by Ilya and Jan, and dedicating 20 percent of the compute we've secured to date to this effort. We're looking for excellent machine learning researchers and engineers to join us.

[00:40:55] Paul Roetzer: Superintelligence, continuing on, will be the most impactful [00:41:00] technology humanity has ever invented and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous and could lead to the disempowerment of humanity, or even human extinction.

[00:41:16] Paul Roetzer: Uh, while superintelligence seems far off now, we believe it could arrive this decade. Now, just quick math, this decade means within six years, five and a half years now. Okay, so that was less than a year ago that we got

[00:41:32] Paul Roetzer: the thing that was going to save humanity. Um, Jan, when he left, I'll just read his thread; we'll put this in the show notes.

[00:41:43] Paul Roetzer: This is May 17th. Yesterday was my last day as head of alignment, superalignment lead, and executive at OpenAI. Now, this is after his previous tweet, which was just: I resigned. Now, there is an interesting thing with OpenAI, before I finish reading the excerpt here from [00:42:00] his thread, that no one ever says anything bad when they leave.

[00:42:03] Paul Roetzer: There have been lots of rumors that there's a non-disparagement clause when they leave, and they lose all their stock options, all their vested options, if they say anything bad about OpenAI. So as soon as I saw this thread start, I was like, oh, this might be interesting. So continue on knowing that, up until this point, the assumption was if you said anything bad, you give up all of your equity and they will claw back anything that you previously had, meaning you're going

[00:42:31] Paul Roetzer: to leave with nothing. Okay. So Jan continues.

[00:42:34] Paul Roetzer: It's been such a wild journey over the past approximately three years. My team launched the first-ever reinforcement learning from human feedback LLM with InstructGPT, published the first scalable oversight on large language models, and pioneered automated interpretability, meaning understanding what the models are doing, and weak-to-strong generalization.

[00:42:56] Paul Roetzer: More exciting stuff is coming out soon, meaning they've been working on a lot [00:43:00] of stuff that OpenAI is about to release.

[00:43:02] Paul Roetzer: He continues: I love my team. I'm so grateful for the many amazing people I got to work with, both inside and outside the superalignment team. OpenAI has so much exceptionally smart, kind, and effective talent.

[00:43:12] Paul Roetzer: Stepping away from this job has been one of the hardest things I've ever done because we urgently need to figure out how to steer and control AI systems much smarter than us. Um, I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point.

[00:43:34] Paul Roetzer: I believe much more of our bandwidth should be spent getting ready for the next generation of models, on security, monitoring, preparedness, safety,

[00:43:41] Paul Roetzer: adversarial robustness, superalignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I'm concerned we aren't on a trajectory to get there.

[00:43:51] Paul Roetzer: Over the past few months, my team has been sailing against the wind. Sometimes we were struggling for compute, despite the fact that we were promised 20%, and it was getting [00:44:00] harder and harder to get the crucial research done. Building smarter than human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

[00:44:11] Paul Roetzer: But over the past years, safety, culture, and process have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity. OpenAI must become a safety first AGI company.

[00:44:34] Paul Roetzer: To all OpenAI employees, I want to say: learn to feel the AGI. Act with the gravitas appropriate for what you're building. I believe you can ship the culture change that's needed. I am counting on you. So those were his parting shots. What does feel the AGI mean, you might ask? So we have Miles Brundage, who is at OpenAI in policy research, who tweets: What does it [00:45:00] mean to feel the AGI?

[00:45:01] Paul Roetzer: This is May 18th, the next day. I can't speak for the originator/popularizer of the term, Ilya, and it's admittedly often used in somewhat tongue-in-cheek ways, but there is a serious aspect of it, which I'll try to briefly summarize for the uninitiated. Feel the AGI means refusing to forget how wild it is that AI capabilities are what they are, recognizing that there is much further to go and no obvious human-level ceiling, and taking seriously one's moral obligation to shape the outcomes of AGI as positively as we can.

[00:45:36] Paul Roetzer: Now, who is not feeling the AGI? That would be Yann LeCun at Meta. So I will quickly, Yann retweets Jan. So the names sound the same but are spelled differently: Y-A-N-N is Yann LeCun, and J-A-N is Jan Leike.

[00:45:55] Paul Roetzer: Okay, so Yann LeCun retweets [00:46:00] Jan Leike, and he says: It seems to me that before we, quote, urgently figure out how to control AI systems smarter than us, we need to have the beginning of a hint of a design for a system smarter than a house cat.

[00:46:13] Paul Roetzer: Such a sense of urgency reveals an extremely distorted view of reality. No wonder the more based members of the organization seek to marginalize the superalignment group. It's as if someone had said in 1925, we urgently need to figure out how to control aircrafts that can transport hundreds of passengers at near the speed of sound over the oceans.

[00:46:33] Paul Roetzer: It would have been difficult to make long haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non stop. Yet we can now fly halfway around the world in complete safety. It didn't require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.

[00:46:53] Paul Roetzer: The process will be similar for intelligent systems. It will take years for them to get as smart as cats, and more years to [00:47:00] get as smart as humans, let alone smarter. Parentheses: don't confuse the superhuman knowledge accumulation and retrieval abilities of current large language models with actual intelligence.

[00:47:11] Paul Roetzer: It will take years for them to be deployed and fine tuned for efficiency and safety as they are made smarter and smarter.

[00:47:18] Paul Roetzer: I pull that up because it's very important to remind people this isn't binary. There are very differing opinions from very smart, industry-leading people who have completely opposing views of where we are right now in

[00:47:36] Paul Roetzer: AI. There are a lot of them who share Jan Leike's opinion that this is a major problem and that we are quickly approaching a point where we have to have answers. And then there are people like Yann LeCun at

[00:47:49] Paul Roetzer: Meta who just don't see it that way, who really don't think we're that far along, and who think large language models aren't all they're cracked up to be.

[00:47:57] Paul Roetzer: So I'm going to stop there for a second. I want to [00:48:00] do one more segment here on a podcast I listened to with Dwarkesh, but first, see if you have any thoughts on anything else, or anything I missed on that.

[00:48:08] Mike Kaput: Yeah, so I think that last point is really, really important, because this is one of those stories where you start to see, I think, the trends we've talked about in the past few episodes of possible, let's say, societal backlash towards this idea of rampantly developing AI.

[00:48:26] Mike Kaput: Like, this is a story I got multiple texts about from friends who are not interested in this subject. And all they're seeing is the headline, and they're starting to think some very doom-and-gloom thoughts. Now, I'm not saying they're wrong to think that, but the context and perspective you just offered seems like something 99 percent of people viewing this from the outside don't have any conception of.

[00:48:52] Mike Kaput: So I think that's really important to note. 

[00:48:54] Paul Roetzer: All right. So then one more final part, and then we'll move on to the rapid fire. So there is [00:49:00] a dude named John Schulman, who I have been following on Twitter. I think at some point I knew he was a co-founder of OpenAI, like, I must've known that at some point, but he is never talked about as a co-founder of OpenAI.

[00:49:13] Paul Roetzer: Um, but he was there early on, and he leads the post-training team that kind of refines these models. He led the building of ChatGPT. And ironically, if you go into Perplexity or ChatGPT and ask who the co-founders of OpenAI are, ChatGPT doesn't even recognize him as a co-founder; he's not one of the names that comes up.

[00:49:32] Paul Roetzer: Um, Perplexity did have him as one of the names. So, anyway, research scientist, brilliant dude, he does an interview with Dwarkesh. We've talked about the Dwarkesh Podcast before; it's brilliant. It's very technical, so it's for when you want to go deep on the technical side of this. But he does an interview with him.

[00:49:49] Paul Roetzer: So, to position it: again, he leads the post-training team at OpenAI and led the creation of ChatGPT. The podcast talks a lot about, kind of, [00:50:00] where are we with these models? What are they capable of? How important is this post-training and reinforcement learning? And where are the flaws today that prevent us from getting to AGI

[00:50:12] Paul Roetzer: that Yann LeCun is so concerned about and Ilya Sutskever is so concerned about. So I'm going to take a couple moments here and I'm just going to read you some excerpts because I think this is really

[00:50:20] Paul Roetzer: important. So this is Dwarkesh asking John these questions and I'll kind of share who's saying what. So Dwarkesh, yeah, well, okay.

[00:50:31] Paul Roetzer: So correct me if I'm wrong on this, but it seems like that implies right now we have models that are on a per token basis, pretty smart. So again, predicting the next token,

[00:50:40] Paul Roetzer: like they may be as smart as humans on a per-token basis. And the thing that prevents them from being as useful as they could be is that, five minutes from now, so again, this is Dwarkesh,

[00:50:51] Paul Roetzer: They're not going to be writing code in a way that's coherent and aligns with the broader goals. If it's the case that once you start this [00:51:00] long horizon reinforcement learning training regime, it immediately unlocks your ability to be coherent for longer periods of time, should we be predicting something that is human level as soon as that regime is unlocked?

[00:51:10] Paul Roetzer: Now, what that means almost goes back to the context window. Like, these models are super, super smart at doing a really narrow, specific thing. But if you give them a process of, like, 10 steps you want them to go through, and it's going to take, you know, five minutes, an hour, five days,

[00:51:28] Paul Roetzer: they start to lose their ability to stay coherent. They forget where they were, they kind of lose track of the goal. They're not able to do what a human does when you say, go do this thing over the next month and go through these 25 steps, and then we step back and it does it precisely the way you would expect, like, a high-performing human to do it.

[00:51:49] Paul Roetzer: That is not what they're capable of. So Dwarkesh becomes very concerned. This was reminiscent of the Dario Amodei interview with Ezra Klein. So Dwarkesh [00:52:00] starts pushing John, and you can tell, like, again, John's super smart. I don't get a sense he does a lot of these interviews. One, I've never heard an interview with him before, but that was the first indicator.

[00:52:09] Paul Roetzer: Two, you could tell he was starting to get a little shaken by the line of questioning. So,

[00:52:14] Paul Roetzer: um, okay, so it goes on. so Darkesh says, or if not, then what is re remaining after you plan for a year and execute projects that take that long? So he's basically saying like. What is the limitation? Like if, if you think we can get to the point where these things become coherent, like what, what else is there that's left before we get to to AGI?

[00:52:31] Paul Roetzer: So John says, yeah, it's not totally clear what we're going to see once we get to that regime and how fast progress will be. So it's still uncertain. I would say I wouldn't expect everything to be immediately solved by doing this. So Dwarkesh says, do you have some intuition about, right now these models can maybe act coherently for, what, five minutes?

[00:52:51] Paul Roetzer: We want them to be able to do tasks that would take a human an hour, then a week, then a month, and so forth. To get to each of these benchmarks, is it going to [00:53:00] just take more compute, and then it's going to follow, like, basic scaling laws, in essence? And he said, yeah, I would say at a high level, I would agree that long-horizon tasks are going to require more model intelligence to do well and are going to be more expensive to train for.

[00:53:15] Paul Roetzer: So Dwarkesh says, okay, I want to go back to this because I'm not sure I understood. If you have this model that is trained to be coherent for long periods of time, if you achieve this, does that imply that unless there are other bottlenecks, which there may or may not be, by next year we could have models that are potentially, like, human level in terms of acting?

[00:53:38] Paul Roetzer: Like, they're interacting with this as a colleague, and it's as good as interacting with a human colleague. You can tell it to do stuff and it gets it done. What seems wrong with that picture, if these are the capabilities you think might be possible? That's Dwarkesh. John says, yeah, it's hard to say exactly what the deficit will be.

[00:53:56] Paul Roetzer: Um, he basically kind of goes along and says like, we, we, we just don't know, [00:54:00] but I would assume there's going to be something else that we're going to have to solve. Um, it seems, and so then Dworkesh says, so given that, like that you think you can solve the coherence thing, and you don't really know of any other obvious roadblocks, Dworkesh says, it seems like then you should be planning.

[00:54:19] Paul Roetzer: for the possibility you would have AGI very soon. So John, who built ChatGPT and leads the post-training team, says, I think that would be reasonable. This is when it starts getting uncomfortable. Dwarkesh says, so what's the plan? If there's no other bottleneck, next year or something, you've got AGI.

[00:54:38] Paul Roetzer: What is the plan? And this is where John starts, I think, wanting the interview to end. He says, well, I would say that if AGI came way sooner than expected, we would definitely want to be careful about it. And we might want to slow down a little bit on training and deployment until we're pretty sure we know we can deal with it safely.

[00:54:55] Paul Roetzer: And we have a pretty good handle on what it's going to do. So I think we'd have to be really [00:55:00] careful. So Dwarkesh says, and what would being careful mean? Because presumably, you are already careful, right? You do these evaluations before deploying.

[00:55:09] Paul Roetzer: It's yeah, I would say maybe not training the smarter version. Basically, no idea. Like, no plan. Which is what I assume the super alignment team is working on.

[00:55:17] Paul Roetzer: So, yeah. It kind of keeps going on. he says, uh, are you thinking of these? What is the company doing in these scenarios? It presents some scenarios. and John says, yeah, game theory is a little tough to play all this out.

[00:55:30] Paul Roetzer: Um, but maybe it's like two to three years, like the AGI, maybe it's in two to three years, not in one year. Tarkesh says, but two to three years is still pretty soon. I do think you probably need some coordination. Like. These model companies should be talking to each other. Like we should be working on this, not thinking about working on this.

[00:55:49] Paul Roetzer: And so he kind of ends with, what is the plan? Suppose in two years we get to AGI and now everybody's freaking out, and so now the AI companies have paused. And now what? Like, he keeps pushing him. [00:56:00] Like, okay, fine, we paused. Now what? What are we doing then? What would the plan be?

[00:56:05] Paul Roetzer: What are we waiting for? And John says, yeah, I don't have a good answer for that. So, all of that: one, go listen to the podcast. I think Dwarkesh is great. Two, this is why superalignment existed. It's no longer in existence at OpenAI.

[00:56:21] Paul Roetzer: they have, the team is gone. They rolled it into the other research and safety efforts.

[00:56:25] Paul Roetzer: Is Yann LeCun right, and we have nothing to be worried about at all? Or is everyone at OpenAI, and seemingly at Google DeepMind, who thinks we're within a couple of years of AGI right? I have no idea, but enough really smart people who live this stuff every day and feel the AGI every day seem to think it's like a two-to-four-year window.

[00:56:49] Paul Roetzer: And it would seem to me that, if it's a possibility, we should probably be having all these conversations now. And they don't seem [00:57:00] to be doing that, or at least OpenAI appears to be redistributing resources.

[00:57:04] Paul Roetzer: Um, it, I don't know, it was wild. I was listening to that when I was cutting my grass and I was just like, oh God. like, So,

[00:57:11] Mike Kaput: to your point, two to four years is literally no time at all for the capabilities and the implications we're talking about here. So, that is mind boggling. 

[00:57:22] Paul Roetzer: So, there you go. that's it. And into rapid fire we go.

[00:57:28] Apple’s AI Plans

[00:57:28] Mike Kaput: Alright, so we're going to hustle through rapid fire here, but, you know, after all these announcements from OpenAI and Google, speculation is starting to get very rampant about what Apple's AI moves will be. We're going to find out what Apple's formal AI play is at the

[00:57:45] Mike Kaput: company's Worldwide Developers Conference on June 10th, but we're getting some rumors in the meantime.

[00:57:50] Mike Kaput: Two in particular are getting a lot of attention. Three anonymous sources told the New York Times that Apple plans to unveil an upgraded Siri voice [00:58:00] assistant at the event, and that the underlying technology will now include a new generative AI system that allows you to chat naturally with Siri. The Times also says Apple plans to build the improved Siri to be more private than rival AI services because it will process requests on iPhones rather than remotely in data centers.

[00:58:20] Mike Kaput: Also, Apple is rumored to be closing in on an agreement with OpenAI to use certain ChatGPT features, or ChatGPT as a whole, in Apple's iOS 18. So, Paul, obviously this is all speculation until June 10th, but it sounds like Apple's going all in on Siri and kind of leaving the text-based generative AI to OpenAI and Google. Is that kind of your read on this?

[00:58:46] Paul Roetzer: I don't know. I mean, Apple's got their own small models they've been building, and they've been releasing research, which they don't traditionally do. So I have no idea what they're going to do. I think they're going to be aggressive. I hope they're going to be aggressive, as an Apple user [00:59:00] and an Apple investor.

[00:59:00] Paul Roetzer: Like, I want them to solve this. It is, for better or for worse, the company I probably trust most with my data. So I would love it if they had amazing multimodal solutions infused into everything they do. I don't know what to expect. The idea that they turned Siri over, in essence, to OpenAI's voice assistant just seems

[00:59:23] Paul Roetzer: Very anti Apple to me. Maybe that's what they do, but I will certainly be waiting anxiously on, on June 10th to find out what their actual plan is. And hopefully we get some semblance of that plan. And June 10th is a Monday. God, we may may have to delay the recording of the

[00:59:41] Paul Roetzer: Oh, and I have a golf outing that day. Dang it. We're going to have to figure that I don't know when we're going to record that, but, and then I'm in Italy. So, ah, I don't, I don't know when we're going to talk about that one. 

[00:59:51] Behind HubSpot AI

[00:59:51] Mike Kaput: Alright, so in some other news, HubSpot just launched an initiative to improve transparency around how it uses AI in its [01:00:00] products. You can now go to the URL behind HubSpotAI.

[01:00:03] Mike Kaput: com and find a series of product cards that provide details about HubSpot's AI-powered features. Each card tells you how data is being used in the different AI features and which models power that feature. So, for instance, if I click on the card about HubSpot's image generation features for blog posts, I can see what data is used to provide that feature, data retention and deletion policies, and then details on things like, hey, DALL-E 3 from OpenAI is used for this particular feature.

[01:00:36] Mike Kaput: So, Paul, it seems like this is a solid effort to provide some more transparency into how HubSpot uses AI. How important do you see this as being to the company's AI efforts?

[01:00:47] Paul Roetzer: Uh, I think it's probably just more a fit with their culture and their way of doing business. I don't know that it's, you know, going to affect their stock price or anything; it's not having

[01:00:58] Paul Roetzer: a meaningful on [01:01:00] that kind of stuff. But I think it's a kind of thing I would expect to see from HubSpot. It's great company with great people. reminder, were,

[01:01:07] Paul Roetzer: my agency was their first partner back in 2007. So I've been working with HubSpot for 17 years. I don't have any insights. today as to what they're doing with AI have been a little critical of their AI efforts in the past, because I don't know that they've been, aggressive enough building AI capabilities in, in the pre ChatGPT era.

[01:01:26] Paul Roetzer: And, you know, I think they're, they're catching up now and they're doing a lot in this space, which is good to see. As a HubSpot user today, even though we're not an agency anymore, We use HubSpot, you know, all day long. So I'm a big fan of anything they do to make it smarter. I think the cards are cool.

[01:01:42] Paul Roetzer: Like I I would expect see other people kind of follow this thing where it's easy to see what model's being used and where the data is coming from. I did also notice on that page, they link to their ethical approach to AI.

[01:01:53] Paul Roetzer: AI, so that I hadn't seen that before. It was a good, good read, but security is our priority.

[01:01:58] Paul Roetzer: We respect privacy and our customers [01:02:00] data. Believe in human accountability, moderate and mitigate bias, transparency is key to ethical AI, and embracing, embracing growth and progress. So those are kind of like the overarching, you can go read them as well. But yeah, I mean HubSpot's a good company with good people

[01:02:15] Paul Roetzer: and I would expect them to take a, an ethical approach to how they do all this stuff. Hi

[01:02:22] US Senate’s AI Working Group Report + Key AI Bills

[01:02:22] Mike Kaput: Alright, next up, we've got some news on AI coming out of the United States Senate. So first, four senators unveiled a roadmap for AI regulation. They're recommending that we spend $32 billion per year on AI innovation. This roadmap is basically just a report that comes from something called the AI Working Group, spearheaded by Majority Leader Chuck Schumer.

[01:02:43] Mike Kaput: And they basically spent months hosting AI forums, talking to AI leaders. to outline how various government bodies, how the Senate recommends they should focus on regulating and approaching AI. This is not any type of law, just kind of a report that touches on a range of subjects, [01:03:00] including AI funding, AI in the workforce, copyright, and more.

[01:03:04] Mike Kaput: Now, a well-known policy analyst who follows this stuff, Adam Thierer, noted online that the guidance isn't calling for any broad new AI regulatory agencies or frameworks, and it does not embrace what he calls the, quote, existential risk lunacy of the people who are kind of the doomers, so to speak, worried that AI poses an existential risk to humanity.

[01:03:27] Mike Kaput: And also, this guidance does not come after open source at all, and he sees these as all positives. Now, at the same time, the Senate did start to try to take legislative action on AI. The Senate Rules Committee passed three bills that aim to safeguard elections from AI manipulation. They include things like creating reports on AI risks for election offices, a bill to prohibit deepfakes of federal candidates in certain circumstances, and a bill to force disclaimers on political ads that have been substantially [01:04:00] created or altered by

[01:04:01] Mike Kaput: AI. Now, those still need to pass the House, advance in the House, and pass the Senate. So, Paul, none of these measures actually yet constitute any formal regulations or laws, but like, what do you make of the kind of overall trajectory of Senate interest in AI policy and legislation? 

[01:04:18] Paul Roetzer: I mean, we're definitely seeing an uptick. I was just trying to look real quick. I don't know if you know, when is Congress on recess? Don't they take, like, summers off?

[01:04:25] Mike Kaput: They do. I think we've got a little time here, but I think it's getting close.

[01:04:31] Paul Roetzer: Okay. Yeah. I was just wondering if anything's actually going to happen like in the next couple of months. yeah, I mean, we definitely seeing an uptick certainly at the state level. I think we talked about how many, you know, piece of legislation are starting to move through states in the United States.

[01:04:43] Paul Roetzer: Again, we're talking here. I don't think anything major is going to get done. I still feel that way. I think you might see these ones specifically related to AI manipulation. You're probably going to see some hand slaps for the social media companies that aren't doing enough to prevent this stuff.

[01:04:59] Paul Roetzer: [01:05:00] Um, but. yeah, I don't know.

[01:05:03] Paul Roetzer: I always feel this legislation is There's always some bill or new piece to talk about, but nothing really ever seems to actually change anything. So when it does stay tuned, we'll be sure to let you know if we see something that we think is actually going to make a meaningful impact on any of this. 

[01:05:19] Co-Founder of Instagram Joins Anthropic As Chief Product Officer

[01:05:19] Mike Kaput: So in some other news, Anthropic just made a pretty big hire. Mike Krieger, who is one of Instagram's co founders, is coming on board as the company's chief product officer.

[01:05:30] Mike Kaput: So he is going to be in charge of all the company's product efforts. The Verge noted an interesting part of CEO Dario Amodei's announcement about the hire. Amodei said, quote, Mike's background in developing intuitive products and user experiences will be invaluable as we create new ways for people to interact with Claude, particularly in the workplace.

[01:05:53] Mike Kaput: And it's that last part that they kind of highlighted as quite interesting because it sounds like Anthropic is getting pretty serious [01:06:00] about productizing its technology, especially for businesses. Was that kind of your takeaway from this news? 

[01:06:06] Paul Roetzer: Yeah, I think there's an overall theme here. We saw Replit lay off 20 percent of their staff and focus on enterprise products last week, in an email from Amjad, the CEO. We're seeing OpenAI get rid of their superalignment team so they can put the 20 percent of compute back to work on enterprise solutions.

[01:06:26] Paul Roetzer: So I think these companies, you know, Anthropic, again, was supposed to be the safety company. I think we're seeing productization and commercialization and profits over, Safety. It's not, I'm not saying they're not going to be all doing safety too, but I think these companies are burning so much money, billions of dollars that they realize there's a massive market here in the enterprise for commercialization of this technology and they need people who know how to do it.

[01:06:51] Paul Roetzer: Like these companies weren't built, Sam has said it, like we weren't built to build products OpenAI. That is not what we were here for. And then they realized there was products to be [01:07:00] built. And that's a shift in everything in the structure of the the company. the way the talent is recruited, like how the organization is built.

[01:07:09] Paul Roetzer: So I think it's just going to, it's a natural thing. These big frontier model companies got to find ways to make money. And so we're going to see more and more stuff like this where product people, marketing people, brand people start going over and working for these research labs. 

[01:07:23] AI Use Case Spotlight: Interview Prep

[01:07:23] Mike Kaput: So, Paul, you had recently posted on LinkedIn about a really awesome, tangible use case for AI that it sounds like is saving you significant time each and every month, related to kind of like interview prep or getting ready to, you know, discuss with, say, journalists or being interviewed on a podcast about different topics.

[01:07:44] Mike Kaput: Could you walk us through kind of what you did and what the result is?

[01:07:48] Paul Roetzer: We always like to share really tangible examples of how to apply this stuff and get value. So yeah, what happened was last, Wednesday, I guess it was, after the AI overviews announcement [01:08:00] from Google, we had someone reach out to the Institute, a journalist for AFP, which is a French news agency, to get my thoughts on the Google AI overviews and the impact it might have on, on search.

[01:08:11] Paul Roetzer: So I was, Wednesday was nuts. Like I was in the middle of 50 different things. And so I saw it and I was like, I don't, I don't have time to do this. And normally, you know, again, I ran an agency that did PR work for 16 years. So I've done this stuff in my life. And like, normally what I would do if it was for me or for a client to prep is you would go find articles this person has written, research the news agency first, find articles the person has written.

[01:08:37] Paul Roetzer: Read those articles, take some notes on their approach, their tone, their style, to make sure I could properly prepare myself for the interview. Because the issue is, sometimes, journalists have angles they're going for. And you want to be prepared for that. So if you think they're going to ask a question out of left field, that's kind of a gotcha for Google.

[01:08:55] Paul Roetzer: Maybe I don't even want to take the interview. Like,

[01:08:56] Paul Roetzer: I don't know. So you have to do prep before you just say, yeah, great, let's talk. [01:09:00] Um, so, I go in to ChatGPT, and I say, give me a summary of the type of stories that Thomas Urbain, who's the journalist, of AFP writes. Include links to some examples. And it instantly gave me this great summary of his writing, gave me like five links to articles, and kind of summarized each of the articles.

[01:09:18] Paul Roetzer: Then I followed up, I said, what's the overall tone of his stories? Are they fact based or do they take a position that could be controversial? It said, no, he's predominantly fact based and objective in tone. He focuses on delivering comprehensive coverage of events, presenting facts clearly and accurately.

[01:09:33] Paul Roetzer: Without taking sides or injecting personal opinion, his articles are detailed and thorough, often exploring the broader implications of the topics he covered. It's like, great. Emailed him, said, Thomas, I'm available for the next 30 minutes. You want to jump on a call? Let's do it. Great interview.

[01:09:46] Paul Roetzer: Wrote a great article. Quotes were exactly what I said. It's published on a bunch of, because it's a wire service, it goes out to all these, so I saw it in Barron's. It's great. So, took me three minutes to do something that I probably would have spent an hour

[01:09:57] Paul Roetzer: on in my past life. This is [01:10:00] something you can use for anything you're doing.

[01:10:01] Paul Roetzer: Like, any interview you're going on, sales calls, like, you can use this same basic prompt. And if there's public information about a person, you're probably going to do this thing. Now, you're going to want to check the facts of it, because some things there's less information about than others, some people.

[01:10:15] Paul Roetzer: But this is the kind of thing, like, once you know what AI is capable of, you just find use cases in your daily life. It's like, oh, wait, I could do this. And honestly, I would have just passed on the opportunity. Like, traditionally, I would have said, I don't have the hour to even look into this. Just not going to respond or just respond to the guy and tell him, we don't, don't have time to do it today.

[01:10:34] Paul Roetzer: So that was it. Like, and that, and that to me is like the real value of AIs. Find these use cases in your life to help you do things. it can make a huge difference.

[01:10:45] Applying AI to Different Roles and Functions in Companies

[01:10:45] Mike Kaput: Alright, our final topic today. Paul, in another post online, you shared a thread from X about how to think about applying AI to different roles and functions within companies. And you said, quote, leaders should be performing impact assessments across the key roles in [01:11:00] their companies, not just based on today's models, but what we assume to be true for next-gen models.

[01:11:05] Mike Kaput: Can you walk us through kind of what, why you were talking in the first place about AI impact assessments and maybe a little bit about the post you had kind of quote tweeted around this?

[01:11:14] Paul Roetzer: Yeah. So this came from John Horton, who I don't even know if I was following John until I saw this, he's an economist and he was showing these great visuals of how we should be thinking about generative AI in production. And he basically said, imagine a job as a sequence of tasks, which is what we talk about all the time.

[01:11:30] Paul Roetzer: It's like just a bundle of tasks. That's what your job is. And he basically said, like, break it down task by task and say, like, Can generative AI help me with this task? And then you go through and you create your prompt or you create a series of prompts. And then eventually as you go through this chain, so let's say the podcast as an example, podcast is a series of like 25 to 30 tasks across four people in our company.

[01:11:51] Paul Roetzer: You could literally just lay out each task and go task by task and say, how can AI assist us in this task? And then you bundle that all together and [01:12:00] all of a sudden you're saving 30 percent of your time. And you're, you're producing more content for distribution across all these channels in our organic traffic neutral strategy.

[01:12:11] Paul Roetzer: So I just thought it was like a great practical way to think about this and to reinforce to people, don't think about AI as taking your job. Think of your job as a bundle of tasks, and AI can assist you with those tasks. Break it, everything you do, break campaigns, your job responsibilities, the programs you're in charge of, break those things into tasks, and then assign AI potential for each of those tasks.

[01:12:35] Paul Roetzer: That, that's how you do this stuff. So I just thought it was like a nice, simple way to think about AI in a very approachable way that anyone can do once you understand the fundamentals of what AI is capable of.

[01:12:46] Mike Kaput: Awesome. And on that note, Paul, thanks for breaking everything down this week. As a really quick reminder to our audience, we covered not only today's topics, but all the other things we didn't get to in our weekly newsletter.

[01:12:59] Mike Kaput: Go to MarketingAIInstitute. [01:13:00] com forward slash newsletter to get on that mailing list if you're not already there. Paul, thanks again for demystifying AI and everything going on this week for us.

[01:13:11] Paul Roetzer: Yeah, thanks, Mike. And again, a reminder: bit.ly, B-I-T dot L-Y, slash AIShow100. Get your questions in for Episode 100. It is coming up fast. That's going to be on May 30th. We're going to do that special episode. We would love to hear your questions, and we will talk to you next week. I think there's a Microsoft event today.

[01:13:29] Paul Roetzer: Microsoft tweeted some events, so we might have a Microsoft event to event to be talking about next week. All all right. Thanks everyone. We'll talk to you you next week.

[01:13:35] Thanks for listening to The AI Show. Visit MarketingAIInstitute.com to continue your AI learning journey. And join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community.

[01:13:58] Until next time, [01:14:00] stay curious and explore AI.
