49 Min Read

[The Marketing AI Show Episode 53]: Salesforce AI Cloud, White House Action on AI, AI Writes Books in Minutes, ChatGPT in Cars, and More


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


The band is back together! After Episode 51's Year (so far) in Review and Episode 52's Top AI Questions episodes, Paul Roetzer and Mike Kaput are back to the podcast format you're used to. Well, sort of. Because there's a bit of news to catch up on, this episode covers many topics from the past few weeks. We hope you enjoy it!

Listen or watch below—and see below for show notes and the transcript.

This episode is brought to you by BrandOps, built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.

And, check out MAICON, our 4th annual Marketing AI Conference. Taking place July 26-28, 2023 in Cleveland, OH. Paul mentioned a $100 discount code - tune in for additional savings!

Listen Now

Watch the Video

Timestamps

00:04:59 — Salesforce unveils AI Cloud offering

00:07:44 — AI firm Synthesia hits $1 billion valuation in Nvidia-backed Series C 

00:12:02 — Schumer to host first of three senator-only A.I. briefings

00:14:46 — Europe moves ahead on AI regulation

00:17:58 — OpenAI Lobbied E.U. to Water Down AI Regulation

00:20:04 — The White House's urgent push to regulate AI

00:23:19 — Anthropic and AI Accountability

00:26:58 — Meta and Microsoft to Join Framework for Collective Action on Synthetic Media

00:29:37 — Aidan Gomez: AI threat to human existence is ‘absurd’ distraction from real risks

00:34:31 — Descript - Season 4 updates

00:38:56 — Meta AI Voicebox announcement 

00:42:25 — I-JEPA: The first AI model based on Yann LeCun's vision for more human-like AI 

00:44:56 — Matt Shumer tweet - GPT Author

00:48:25 — OpenAI Considers Creating an App Store for AI Software

00:49:31 — Mercedes-Benz tests ChatGPT in cars

00:51:14 — AI and composability from Scott Brinker

00:54:43 — Inflection-1: Pi’s Best-in-Class LLM

01:01:33 — AI 100: The most promising artificial intelligence startups of 2023

01:02:36 — Coca-Cola appoints Pratik Thakar as global head of generative AI, ET BrandEquity

01:03:22 — Inside the AI Factory: the humans that make tech seem human

Summary

Salesforce Unveils AI Cloud Offering

Salesforce just launched "AI Cloud," which gives enterprises the ability to safely use enterprise-ready AI. AI Cloud is a platform that hosts the company's AI-powered products, such as Einstein, Slack, and Tableau, as well as large language models from other providers like Amazon Web Services. The firm aims to cater to enterprises by ensuring data privacy and preventing AI models from retaining sensitive customer information. According to Reuters, the AI Cloud "starter pack" will be available for $360,000 annually.

The U.S. Senate Gets Up to Speed on AI

Senate Majority Leader Chuck Schumer announced a series of educational sessions on artificial intelligence for senators as Congress explores potential regulations for the technology. The first session, led by MIT professor and machine learning expert Antonio Torralba, aims to provide a general overview of AI and its current capabilities. The initiative underscores the importance of lawmakers understanding AI, its implications, and its challenges in order to create legislation that both fosters its potential for human prosperity and mitigates its risks. This seems like a positive step forward, but is it surprising that the Senate is only just now starting to educate its members on AI’s basics?

In addition, President Biden and his advisors have been leveraging AI technology, such as OpenAI's ChatGPT, and have moved AI from a peripheral concern to a central priority in policy development, recognizing its potential for both tremendous benefits and risks. The White House is acting with urgency to establish a regulatory framework for AI through executive orders and new policies, aiming to maximize positive impact and mitigate unintended consequences, with a focus on areas such as cybersecurity, consumer protection, and economic transformation.

HyperWrite's Matt Shumer Announces GPT Author

What if we told you AI can now write entire novels in minutes based on an initial text prompt? Matt Shumer, co-founder and CEO of OthersideAI, the company behind the AI writing tool HyperWrite, recently announced an open-source project called GPT Author. The project strings together a chain of AI systems to write an entire book for you in minutes, complete with cover art and easy export to the Kindle store—all based on a description of the high-level details you want to see in your novel. Tune in for his experience and Paul's reaction.
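For readers curious what "a chain of AI systems" can look like in practice, here is a minimal, hypothetical sketch of the idea: one model call produces a chapter-by-chapter outline, then a loop drafts each chapter while feeding the previous chapter back in for continuity. This is not Matt Shumer's actual GPT Author code; the prompts, function names, and model choice are illustrative assumptions, and the real project adds cover art generation and Kindle export on top.

```python
# A minimal, hypothetical sketch of the "chain of AI systems" idea:
# outline first, then draft each chapter using the outline plus the
# previous chapter for continuity. NOT the actual GPT Author code; the
# prompts, function names, and model choice are illustrative assumptions.
# Uses the 2023-era openai Python client; newer SDK versions differ.
import openai

def ask(prompt: str) -> str:
    """Make a single chat completion call and return the assistant's text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

def write_novel(premise: str, num_chapters: int = 15) -> list[str]:
    # Step 1: turn the high-level premise into a chapter-by-chapter outline.
    outline = ask(f"Write a {num_chapters}-chapter outline for a novel about: {premise}")
    chapters, previous = [], ""
    # Step 2: draft each chapter, passing the outline plus the prior chapter
    # so the story stays consistent from one call to the next.
    for i in range(1, num_chapters + 1):
        chapter = ask(
            f"Outline:\n{outline}\n\nPrevious chapter:\n{previous}\n\n"
            f"Write chapter {i} in full prose."
        )
        chapters.append(chapter)
        previous = chapter
    return chapters
```

The design choice worth noticing is the feedback loop: each call only sees the outline and the most recent chapter, which keeps every prompt small while still giving the model enough context to stay coherent.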

This summary only scratches the surface of the topics covered, so be sure to tune in!

The Marketing AI Show can be found on your favorite podcast player and be sure to explore the links below.

Links referenced in the show 

Funding

Government

Responsible AI

Tech Updates/News

Jobs/Careers

Read the Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: this isn't something anybody can jump in and do. Building these foundational models, it's insanely complicated.

[00:00:05] Paul Roetzer: It takes a ton of compute, a lot of money. Building the foundation models inherently has some level of moat because there's only so many people that are going to be able to do it.

[00:00:13] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:33] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:43] Paul Roetzer: Welcome to episode 53 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. We are back from our world travels. We haven't done a regular weekly episode of this in, like, it's been three weeks, I think.

[00:00:58] Mike Kaput: Yeah, yeah. We have been many places on the road

[00:01:01] Paul Roetzer: in the last three or four weeks.

[00:01:03] Paul Roetzer: Mike, Mike was in Chile. I was in Romania and Italy doing talks, and then I mixed in a little family vacation. So yeah, this is our first time back. Hopefully you enjoyed the special editions of the Marketing AI Show. We had sort of the year in review, the mid-year in review, and then we did the 15 kind of questions everyone's been asking from the Intro to AI series.

[00:01:26] Paul Roetzer: So, we heard some great feedback. Hopefully you had a chance to listen to those episodes and enjoyed them. But we are back with our regular weekly format today. This episode, again, episode 53 is brought to us by BrandOps. Did you know brand health, media monitoring, social listening, competitive intel, share of voice and review tracking could all be done with the same tool?

[00:01:50] Paul Roetzer: When we sat down with the BrandOps team, it was remarkable to see what one AI-based platform could do. Having a complete view of brand marketing performance and instantly knowing where to focus to increase impact helps businesses unlock faster growth. Visit BrandOps.io/marketingaishow to learn more and see BrandOps in action.

[00:02:13] Paul Roetzer: This episode is also brought to us by the fourth Annual Marketing AI Conference, or MAICON, which is exactly a month away. It's crazy to think we're recording this on June 26th. The Marketing AI Conference happens July 26th to 28th. We're already approaching 500 registrants, I think. So we're looking at 600 to 700 people coming to Cleveland, at the Convention Center. If you haven't been to Cleveland,

[00:02:37] Paul Roetzer: it's beautiful, especially that time of year. We're going to be right on Lake Erie across from the Rock and Roll Hall of Fame and the Science Center. So we would love to see you there. We've got an amazing agenda. You can go check it out. It's almost completely wrapped. I think we, we filled the last two spots.

[00:02:53] Paul Roetzer: I don't think they've been announced yet. I won't, like, give a precursor to the final announcements, but I believe the agenda is basically finalized at this point. I brought in the last couple speakers just over the weekend. And then we're just wrapping a couple other minor things on the panels, but it's going to be an incredible agenda.

[00:03:12] Paul Roetzer: Mike and I are both teaching workshops. Mike's doing an applied AI workshop on the 26th. That's an optional thing. And then I'm doing the strategic AI leader workshop, and then we just have some incredible speakers and content. So we'd love to see you in Cleveland. It's MAICON.ai. Again, that's m a i c o n.ai.

[00:03:31] Paul Roetzer: The final price increases July 14th. So if you're going to come, get the tickets before July 14th and save a hundred bucks. And we should do a podcast. Let's do a, I'm making this up, like, literally on the fly. So Cathy, when you get to see the transcript, let's do a hundred dollars off code for our podcast listeners.

[00:03:50] Paul Roetzer: How about, we'll do AIPOD100. All right, there we go. We're making, again, we're making this up on the fly: AIPOD100. That'll save you a hundred dollars off in addition to the hundred-dollar discount that's currently active. So, use AIPOD100 if you register at MAICON.ai. All right, with that, we're going to do, so we're back to our weekly format here, but we're going to do a variation of it, because we're going to cover all the major things that happened over the last couple weeks when we weren't recording the weekly.

[00:04:21] Paul Roetzer: So we're not going to do the three big things. We're just going to kind of do more of a rapid fire of all of the news to try and get you caught up on everything that we were paying attention to while we were in our travels, putting into our sandbox of the things to talk about once we finally got back together.

[00:04:35] Paul Roetzer: So, with that, I'll turn it over to Mike to run us through all the news from the last couple weeks in AI. Thanks, Paul.

[00:04:42] Mike Kaput: We are no doubt putting the word rapid in rapid fire for this week because we have like dozens of things that have been going on. So we'll try to run through these pretty quickly, but really give people kind of a comprehensive look at what the last few weeks in AI have been looking like.

[00:04:59] Mike Kaput: First off, Salesforce just launched what it's calling AI Cloud, which gives enterprises the ability to kind of safely use enterprise-grade, enterprise-ready artificial intelligence. So, AI Cloud, as the company describes it, is a platform that hosts all of Salesforce's AI-powered products. So think Einstein, Slack, Tableau.

[00:05:23] Mike Kaput: And it also gives you the ability to use large language models from other providers. So people like Amazon Web Services. This is really a play, it seems, to cater to enterprises by giving them a way to ensure data privacy and use AI models in a way that stops them from having to retain sensitive customer information.

[00:05:45] Mike Kaput: Now, related specifically to AI Cloud, they are offering kind of what they're calling a, quote, starter pack that is available for, it's looking like, about $360,000 annually. Now, at the same time, in the same breath, Salesforce has also announced that they are doubling their venture capital fund for generative AI startups to 500 million, up from 250 million previously.

[00:06:15] Mike Kaput: How big of a development is this, in your opinion, Paul, in the sales and

[00:06:19] Paul Roetzer: marketing world? Yeah, I mean, we, we've known this was coming. Back in February, they announced the plans to do the Einstein GPT. I don't know if this is the first time they're, like, calling it the AI Cloud. I'm not, not sure about that.

[00:06:30] Paul Roetzer: But we've known for months that this was in development, that there's, I think it's been in beta. I think it's the first time we're seeing anything about the pricing of it. But I, overall, the thing I take away from this is what we've talked about of the acceleration of adoption later in 2023, as Microsoft and Google and Salesforce and others start infusing these generative AI capabilities right into the platforms that you're already using.

[00:06:56] Paul Roetzer: So you don't have to go find third party AI tools to get these benefits. That to me is the most interesting thing. Now, the price part. The price point, you know, is it prohibitive? Is that for the whole thing? Can you get individual pieces? Like if I'm a Tableau customer, can I get AI capabilities baked into Tableau?

[00:07:13] Paul Roetzer: That's all to be determined, but I think big picture, it's just accelerating adoption as we move into later 2023, which, as we've talked about, also is going to be disruptive to knowledge work. And that to me is the part I'm probably most interested in: what impact this actually has as it infuses into platforms and becomes available to everyone across business functions.

[00:07:37] Paul Roetzer: So sort of stay tuned. It'll definitely be a topic we'll be following on the podcast. Very cool.

[00:07:44] Mike Kaput: Another big funding announcement that happened recently is Synthesia, which is a company that uses AI to create videos, has raised 90 million to further develop its technology. And what they do is they basically use AI-generated avatars to lower the cost of video production.

[00:08:04] Mike Kaput: We experimented a bit with the tool a while ago. You can basically pick an avatar that is totally virtual and start generating realistic-sounding video and text by using their tool. So this round actually values the company at 1 billion. Notably, one of the investors is the AI chipmaker Nvidia, who's been having a heck of a year.

[00:08:29] Mike Kaput: And CNBC actually reports that the company has experienced significant growth, with over 12 million videos created on the platform, and more than a 450% increase in the number of users that they have. So what kind of stood out to me here is the whole area of what we would call synthetic media: your ability to create essentially a deepfake or a virtual video, depending on what you want to call it, with no real actual people.

[00:09:00] Mike Kaput: And this can extend to also just audio as well. Is this type of funding a signal that we're going to be seeing more of

[00:09:07] Paul Roetzer: this? I don't think there's any doubt we're going to see rapid deployment of synthetic media, in some cases by bad actors. You know, a topic we've talked about before. The, my experience with their content is, it's pretty obvious this is a synthetic, you know, person, that it's an avatar, digitally generated, when you're seeing this.

[00:09:26] Paul Roetzer: So this, this to me, doesn't feel like a deceptive technology in any way, the way they're applying it. I did think it was interesting, a couple of things to call out from that article that just further kind of played out how they see the market and the use cases for it. So they talked about it as digital avatars to deliver corporate presentations, training videos, or even compliments to colleagues in more than 120 different languages.

[00:09:49] Paul Roetzer: They said its ultimate aim is to eliminate cameras, microphones, actors, lengthy edits, and other costs from the professional video production process. So again, when we talk about impact on knowledge work, this is kind of when those flags start popping up in my mind of, well, this is a company that's just straight up saying it, like, we are.

[00:10:08] Paul Roetzer: We want to disrupt video production. And so I think that, you know, if you're in that space, you start to really pay attention to what it is that they're doing and is it, can it be truly disruptive? So to, to do that, Synthesia has created animated avatars, which look and sound like humans, but are generated by AI.

[00:10:26] Paul Roetzer: They're based on real-life actors who speak in front of a green screen. And then I thought it was interesting, just a couple quick quotes from someone from Accel, I think that's how you say it, A-C-C-E-L, the lead investor in the Series C round. They said productivity can be improved because you're reducing the cost of producing the video to that of making a PowerPoint.

[00:10:50] Paul Roetzer: So again, we're simplifying a process. And then went on to say, video is a much better way to communicate knowledge. When we think about the potential of the company and the valuation, we think about what it can return. And in the case of Synthesia, we're just scratching the surface. So again, just that, you know, as we hear about 90 million, oddly enough, to me that just doesn't sound like a lot of money right now to do this stuff.

[00:11:14] Paul Roetzer: Yeah. Because a lot of these companies are raising hundreds of millions because they, they need a lot of money to build the, these models. It's interesting that Nvidia's involved, because maybe part of this is, like, a lot of the money these companies are raising is just going back to Nvidia to buy the chips to power. Yeah.

[00:11:31] Paul Roetzer: Building these models. So maybe they've got some kind of crazy deal with Nvidia. But yeah, just, I mean, certainly we're going to hear tons more about synthetic media generation and there's going to be a lot of players raising money in the space.

[00:11:44] Mike Kaput: So as these funding announcements are rolling out, there's also a ton going on related to government regulation or exploration of what should be done about AI, what risks there are, and how to approach the technology from a legal and, and a federal government perspective.

[00:12:02] Mike Kaput: So on one hand, the United States Senate is trying to get up to speed, essentially on artificial intelligence. So US Senate Majority Leader, Chuck Schumer, announced that they would be having a series of educational sessions on artificial intelligence for senators as Congress kind of ramps up exploring potential regulations.

[00:12:25] Mike Kaput: So these are for senators only. The first session is led by an MIT professor and machine learning expert named Antonio Torralba, and that is basically providing a general overview of AI and what it can currently do. So the initiative is really looking to help lawmakers understand AI, understand its implications, and grasp the challenges it poses in order to ideally create legislation that is optimal in terms of both regulation and helping the technology,

[00:12:58] Mike Kaput: the good parts of it, contribute to human prosperity while mitigating risks. Now, you know, we've talked a bit about regulation in the US before, Paul. This seems like a positive step forward, but I think I was a little surprised that only now the Senate is just starting to educate their members on AI basics.

[00:13:18] Mike Kaput: I mean, what was your take on this?

[00:13:20] Paul Roetzer: Yeah, I mean I think just when you look at the totality of what's going on, there's obviously just increased interest at all levels of the government to figure this stuff out. I know there are some senators who've been proactive in learning this stuff and trying to educate other members in the Senate, and the house around this.

[00:13:38] Paul Roetzer: Honestly, there's so many businesses and educational systems that are still trying to, like, realize the basics are critical, that I'm not that surprised the government's just starting to do this, because a lot of the companies we talk to are at this foundational knowledge level where they're just starting to talk about the basics.

[00:13:57] Paul Roetzer: So again, I think it's just, it seems like everything is moving very fast, because it is. The reality is that this whole generative AI age that we've entered is seven months old, and it really was the first few months of kind of a whiplash effect for everyone before, like, March, April hit. And everyone realized, like, okay, this is real and this is going to

[00:14:20] Paul Roetzer: be transformative and we have to figure out what it means. I mean, so we're realistically just a few months into people starting to actually grasp the significance of this. So it's not that surprising. I guess it's a slow-moving body. So, any action, any forward progress is better than nothing at this point.

[00:14:42] Paul Roetzer: So I'm happy to see that they're at least taking the steps.

[00:14:46] Mike Kaput: And it seems like our counterparts in the European Union are moving a little faster. The European Union Parliament has taken a significant step towards implementing comprehensive restrictions on the use of AI. So what they, where they got to is finally getting the European Union Parliament to approve the proposed EU AI Act, which is a broad, widespread set of regulations that are designed to safeguard consumers from potential harmful applications of AI, things like misuse and surveillance, and AI's role in promoting misinformation.

[00:15:23] Mike Kaput: So at this stage, the legislation has moved forward. It still needs to be approved by the European Parliament, so it is not law yet, but EU officials have been reported to be hoping to reach a final agreement by the end of 2023. Now, some of the provisions in this legislation are introducing what they call a risk-based approach to AI.

[00:15:49] Mike Kaput: So they're banning tools that are deemed, quote, unacceptable and limiting the use of high-risk technologies. Now, not every single detail of the legislation is available yet, but some provisions include making companies label AI-generated content and requiring firms to publish summaries of which copyrighted data is being used to train their tools.

[00:16:14] Mike Kaput: So according to reporting from the Washington Post, they went ahead and said the threat posed by the legislation to such companies, you know, AI leaders, is so grave that OpenAI, the maker of ChatGPT, said it may be forced to pull out of Europe depending on what is included in the final text. As you're looking at something like this, and you know, we won't know exactly what is in the final bill until it is ratified, but

[00:16:44] Mike Kaput: How might this affect AI companies trying to do business in the European Union?

[00:16:49] Paul Roetzer: Yeah, I mean, it's obviously, I mean, they said it's kind of like a grave threat in some cases, and they may just choose to not go there. So we talked about in a previous episode, Google's generative AI technologies aren't available to the 450 million citizens in the EU because they're, I don't know, making a statement to the EU.

[00:17:08] Paul Roetzer: I, you know, we're not sure exactly why they're not doing it, but it's obvious that the current way it's written is going to be very restrictive to the EU benefiting from these technologies. And I think that they need to find a balance. You know, there's been lots of meetings you see on Twitter, like Sam Altman's been over there.

[00:17:26] Paul Roetzer: Greg Brockman, I think, you know, the CTO of OpenAI, was over there. Sundar, I think, visited the EU. Like there's lots of conversations that we aren't privy to, and I think at the end of the day, they probably find some balance in the final legislation because they realize, like, this is the future and they can't just shut it off.

[00:17:47] Paul Roetzer: So I don't know what that balance is. It'll be fascinating to follow along. And I know our next topic sort of starts getting into, some of the potential effects that lobbying is having.

[00:17:58] Mike Kaput: Yeah. So in terms of where that balance ends up landing, a new report from Time shows that OpenAI CEO Sam Altman, as he's kind of, you know, publicly advocating for global AI regulations, was in fact privately lobbying for lighter restrictions in the EU's AI Act.

[00:18:19] Mike Kaput: So it's basically looking to lessen the regulatory burdens on their AI systems: ChatGPT, GPT-3 and 4, DALL-E, et cetera. And Time claims that the lobbying efforts seem to have been successful, as this final draft of the act that we're looking at right now doesn't consider general purpose AI systems inherently high risk, which is something they were arguing for, but it does require them to comply with less stringent requirements, such as preventing the generation of illegal content and carrying out risk assessments on some of these systems.

[00:18:56] Mike Kaput: So what's also interesting is Time notes that their arguments to the EU about lessening their regulations brought OpenAI in line with Microsoft and Google, both of which have previously lobbied EU officials in favor of loosening the Act's regulatory burden on these large AI providers.

[00:19:18] Mike Kaput: Does this surprise you at all that these companies are lobbying EU lawmakers while promoting global AI safety?

[00:19:26] Paul Roetzer: Not in the least. This is how the game works. I mean, it, it's, we've known this like even when Altman had his, you know, meeting before Congress and you know, it came out that the Monday night before he was out to dinner with a bunch of them.

[00:19:38] Paul Roetzer: Like, this is how this stuff works. And, for better or for worse, they're going to influence the legislation. They have the power to do it. They have the leverage to do it. So no, I mean, nothing about this is surprising. I would be more surprised if it wasn't happening.

[00:19:58] Mike Kaput: So the pressure keeps mounting within the US government as well, because a new report from CNN is reporting that US President Joe Biden and his advisors have been using AI technology more and more and experimenting with things like ChatGPT, and CNN notes that AI has now moved from a peripheral concern to a central priority in policy development.

[00:20:25] Mike Kaput: And it appears to be being led by the White House. The White House is actually acting with urgency to establish a regulatory framework for AI through a combination of executive orders and new policies, and they're trying to focus on areas like cybersecurity, consumer protection, and economic transformation.

[00:20:47] Mike Kaput: So some of these initiatives include things like collaborating with AI companies for privacy and safety, releasing guidance for federal agencies on AI use, which we've talked about on a previous episode, creating an inventory of applicable government regulations to AI, you know, looking at some of the existing regulations out there and seeing which ones could be applied, and developing international norms around AI, all while building upon an earlier AI Bill of Rights that I believe the White House science committee actually published.

[00:21:21] Mike Kaput: And we covered it on this podcast several months ago. So do you see this as just kind of a natural outgrowth of the increased concern, regulation, and attention to AI safety issues?

[00:21:35] Paul Roetzer: I mean, I think I'll be surprised if they actually really move quickly to do something here. You know, the couple of things that stood out to me is, you know, the article that I had read said that the White House chief of staff has been meeting two to three times a week to advance AI policy on multiple fronts: misinformation, cybersecurity, economic transformation, and equity.

[00:21:59] Paul Roetzer: They're laying the groundwork for several policy actions that'll be unveiled this summer. Like, that's, I wouldn't expect it. I mean, that's, that's in the next month and a half. Yeah. But looking at executive actions around, you know, putting guardrails in place, and then just the fact that they're planning to take this action very quickly would, would be really interesting.

[00:22:22] Paul Roetzer: And if it, if it's not watered down. Like, I could see some kind of watered down stuff making it where it gives the appearance that they're being proactive, but it's not really doing anything major. So I don't know. It's kind of an ongoing topic we're obviously interested in. We've been talking a lot about this on the podcast.

[00:22:39] Paul Roetzer: And I think it's, you know, it's almost like a necessity to cover this because it's going to affect a lot. It's going to affect, at the end of the day, that kind of technology we have access to as marketers, as business professionals, as knowledge workers. These, whatever they end up doing is going to have an effect on us.

[00:22:56] Paul Roetzer: So it's worth paying attention to. There's lots going on and, again, lots that we're not hearing about because we're not privy to these meetings. Yep. So it'll be interesting to keep an eye on it.

[00:23:05] Mike Kaput: Yeah, that's a good, important point. One of the main reasons we do try to talk about these rapid fire topics as well is just to help people read between the lines a little bit, because there's a lot more going on than what is reported.

[00:23:19] Mike Kaput: So in terms of some responsible AI developments and actions that have happened in the last few weeks, one of the leading AI companies out there that we've talked about many times is called Anthropic. And recently they shared recommendations that they sent to a US government regulatory body about AI accountability.

[00:23:40] Mike Kaput: So Anthropic says it has submitted recommendations to the National Telecommunications and Information Administration, which is a regulatory arm of the Department of Commerce. And they're basically requesting comment on AI accountability. So this federal body is calling for improved evaluation processes and infrastructure for advanced AI systems, and they may be potentially taking the lead on setting standards with other government bodies related to these systems.

[00:24:15] Mike Kaput: So basically, Anthropic is responding to this request for comment as some of the experts in the space. And they have also led from the front on responsible AI development. They talk about it quite often and have a lot of policies in place around it, and they're advocating for things like increased research funding, requiring AI companies to disclose evaluation methods, creating risk-responsive assessments, establishing pre-registration for large AI training runs, involving third-party auditors, mandating external red teaming.

[00:24:52] Mike Kaput: And advancing interpretability research. So they're recommending all of these actions to help us better understand how AI systems are built, trained, and what risks they may have. So when you're looking at this, Paul, I mean, Anthropic seems like one of the leaders in more responsible AI, so not every company is necessarily sitting on their hands when it comes to commentary around AI responsibility.

[00:25:21] Mike Kaput: Is that kind of your take here?

[00:25:24] Paul Roetzer: Yeah. And they're, they're a major player, and we've talked about their CEO Dario, who was in on the major meetings with Sam Altman and Sundar, and, so they have a seat at the table. And I think, you know, their, their co-founders are former OpenAI executives. So they're, the interesting thing is all these

[00:25:42] Paul Roetzer: companies like Cohere and Anthropic and OpenAI and, like, Google, like they all know each other. Like they're all talking to each other about these same challenges. Yes, they're competitive, but they generally came out of the same research labs where they worked together on all this stuff. And so a lot of what they're saying is reflecting a lot of the things Sam was saying.

[00:26:03] Paul Roetzer: And it varies at different times, you know, what they actually think the solutions are. But generally it seems like they're all moving in a similar direction of the ways they think that this should be solved. And they have a voice. I mean, I would imagine out of these meetings that they've been having, some public, some behind closed doors, governments saying, okay, what do you, what do you think we should do?

[00:26:25] Paul Roetzer: And then things like this come out. It's like, well, here's what we think. And then they submit these papers. And so, yeah, I mean, when again, when you zoom out and you start to look at the way the different organizations and research labs are recommending to approach this, you definitely start to see some common threads of how they think this should play out.

[00:26:42] Paul Roetzer: And my guess is it probably looks something like this in the end. So yeah, worth keeping an eye on. And

[00:26:50] Mike Kaput: Anthropic isn't the only company in this space really getting involved in responsible AI initiatives. So we also just got an announcement from the Partnership on AI, PAI, which is this influential nonprofit partnership that brings together AI companies and major players to create better AI systems.

[00:27:13] Mike Kaput: They actually just announced that Meta and Microsoft have joined one of their initiatives to promote the responsible use of generative AI in creating and distributing synthetic media. So this PAI initiative is called the Responsible Practices for Synthetic Media, a Framework for Collective Action.

[00:27:36] Mike Kaput: So, this initiative was launched with some initial partners, including Adobe, the BBC, OpenAI, TikTok, and several synthetic media startups. And basically it involves taking collective action to make sure that synthetic media, generated video and audio and images, is used responsibly in a way that minimizes misinformation and protects user information while promoting creativity.

[00:28:04] Mike Kaput: So this framework was created over an entire year with the contributions of over a hundred different organizations, and basically it's aiming to provide guidelines for creating, sharing, and distributing synthetic media to balance the potential for creativity and expression with all the risks of misinformation and manipulation.

[00:28:24] Mike Kaput: And so, Paul, I know you and I have talked at length about that last part here. There seems to be a significant risk we're going to be seeing a huge rise in misinformation and manipulation. So, in your opinion, are initiatives like this an

[00:28:41] Paul Roetzer: urgent priority? Yeah, it's a great organization to follow. It's one of the ones we highlighted in our book.

[00:28:46] Paul Roetzer: We had that chapter on, like, AI for good, responsible AI, and we highlighted, I don't know, it was like maybe 10 or 12 different organizations that are working on this kind of stuff. So I think part of what gives me some peace of mind about all this is I know that there are organizations that are nonprofits, they're groups of really smart people that are thinking about these bigger issues.

[00:29:05] Paul Roetzer: We don't always hear about it, and it's not covered in mainstream media per se, that much. But there are major initiatives like this and these are the kinds of things that can actually make a difference. So, you know, bringing these organizations together that control the distribution of information around the world and getting them to work towards a unified approach to something is what's going to be required.

[00:29:26] Paul Roetzer: And certainly synthetic media is one of the today issues that we're facing. And so this kind of stuff is really positive in my opinion.

[00:29:37] Mike Kaput: Another major player also weighed in on some important AI topics, including some around responsible AI. Aidan Gomez, who we've talked about in the past, is the co-founder and CEO of a company called Cohere, which is another major AI player.

[00:29:54] Mike Kaput: And he just did a recent interview with the Financial Times about different things going on in the industry. Now, Cohere builds enterprise large language models, and they recently raised 270 million with a valuation of around 2 billion. So when someone like Aidan shares his perspective, it's worth paying attention to, which is kind of why we're covering it now.

[00:30:18] Mike Kaput: Paul, you actually posted about this on LinkedIn saying you found several of his excerpts and quotes really interesting. What stood out to you here?

[00:30:28] Paul Roetzer: The thing I love about these kind of interviews and the fact that we have access to people like Aidan through either things they tweet or interviews they do, is you, you start to learn more in depth about stuff that a lot of times we're just trying to kind of theorize about.

[00:30:42] Paul Roetzer: So there were a few things that came out in the interview that I had highlighted. So the first one was, you know, he was asked about Google being outmaneuvered by Microsoft and OpenAI in the area of generative AI. When Google was far and away the dominant player pre-ChatGPT, when it came to artificial intelligence, there was probably minimal dispute about the fact that Google was the dominant leader in AI research.

[00:31:09] Paul Roetzer: So he said, Google Brain, which is their, one of their internal research labs that has since merged into Google DeepMind, was the hub of excellence in AI. It was the best laboratory for AI that existed on the planet. But then when asked about, like, well, why didn't they come up with ChatGPT? Basically, how did they get beat to market?

[00:31:30] Paul Roetzer: He said, during the research phase of this technology, during the tinkering and discovery phase, Google Brain was the best place you could possibly be. I think we've now moved into a phase where it's about building real products and experiences with the technology that we developed. And that was where Google was not the best place to be.

[00:31:49] Paul Roetzer: Now, again, remember Aidan worked on the research paper called Attention Is All You Need. That came out in 2017 and invented the transformer architecture that enabled generative AI to exist. So he was at the forefront of this, working on a team of, I think there was like nine authors on that paper.

[00:32:07] Paul Roetzer: And so what he's saying is, like, you couldn't have done that anywhere else in the world, but when we actually needed to turn that into products, people like him needed to leave and go start their own companies to commercialize it, because that wasn't what Google Brain excelled at. So that was interesting. Then, when he talked about building large language models and the potential moat for companies like Cohere,

[00:32:27] Paul Roetzer: I thought this was kind of an interesting quote. He said, I'm realizing that what Cohere does is as complex a system as the most sophisticated engineering projects humans have taken on, like rocket engineering. In other words, this isn't something anybody can jump in and do. Building these foundational models, it's insanely complicated.

[00:32:46] Paul Roetzer: It takes a ton of compute, a lot of money. So, to him, building the foundation models inherently has some level of moat because there's only so many people that are going to be able to do it. When he was asked about the future of AI in our lives, he said, a big chunk of your time is going to be spent communicating with these models.

[00:33:03] Paul Roetzer: They're going to be your interface to the world. But that was interesting, he said, dealing with the fact that these models hallucinate or make stuff up. So we're still in the first few days of this technology, and so it will become increasingly robust over time. The other thing to remember is that humans will be in the loop for critical paths, potentially forever, which led into, will AI, you know, augment or replace humans.

[00:33:26] Paul Roetzer: He said there will be some tasks that humans currently carry out that I think get completely replaced totally. Then there are others that will never get replaced at all. And then finally, he was pretty opinionated on the idea of pausing AI development as proposed in the Future of Life Institute letter. He said, I think the six-month pause letter is absurd.

[00:33:48] Paul Roetzer: It is just categorically absurd. So he, he is not a fan of, of these efforts to kind of slow down development. He doesn't see them as practical, which I generally agree with. I said, even with the Future of Life Institute letter when we first covered it, that, yes, conceptually it was like good that it existed, but it was never going to happen.

[00:34:08] Paul Roetzer: Nor did anyone who signed that think they were ever actually going to pause. In my opinion, nobody actually thought that that was an end game. It was all about raising awareness about the bigger issues and trying to create a platform. So yeah, really, really good interview. I'd, I'd recommend reading the whole article.

[00:34:23] Paul Roetzer: It's a really good, good interview to learn more about how he sees the world and AI.

[00:34:30] Paul Roetzer: So

[00:34:31] Mike Kaput: as if there wasn't enough going on already, we have a ton of other AI tech updates and news that have also happened in the world of AI in different kind of domains here. And one of them comes from a company we love, a popular AI video editing tool called Descript, which just announced a handful of powerful new features.

[00:34:50] Mike Kaput: And if you are doing any type of video creation across different channels and video editing, you definitely should be checking out Descript if you haven't already. I mean, we rely heavily on it to create this podcast and to create the videos, and the editing for YouTube as well. So these top features that were announced, there's five of them.

[00:35:10] Mike Kaput: They include, first, Descript for web. So Descript currently is not able to be used in a web browser. You download an app and then use it from your desktop, so you'll very soon be able to just run it right in your browser. There's a feature called eye contact, and what this does is it basically rotates your eyeballs in their sockets.

[00:35:31] Mike Kaput: These are their words. Not sure there's a better way

[00:35:32] Paul Roetzer: to say that. That is, yeah, that's a disturbing description.

[00:35:36] Mike Kaput: Yeah. Yeah, so they, so that whenever you are looking at the screen throughout your video, so whether you're looking kind of just above, below to the side of the camera looking at a script, it will actually use AI to make sure that your eyes are still looking at the camera in post-production.

[00:35:53] Mike Kaput: Now, they fully admit it sounds creepy, and it is, but you'll be seriously surprised by how well it actually works. You can just record yourself reading a script straight off your computer screen, and it looks like you're staring at the camera the whole time. There's another feature called Replace Selection, or Record and Replace, where you can record over an existing script.

[00:36:17] Mike Kaput: Descript will match the new transcript to the one you wrote, intelligently expanding and contracting the length of the scenes to match the pacing of your new recording. Another feature is recording scene by scene, so you can also use this record and replace feature. You can just manually step through scenes one by one by pressing enter as you record.

[00:36:39] Mike Kaput: And so if you want to record something that's totally different from your original, or recording to a blank script, you can now do that with scene-by-scene recording. And last but not least is regenerate. This is a generative AI audio feature. Now you can literally just click in gaps between words, click the regenerate button, and then you can essentially, as they call it, grow new audio cells that make your edit sound absolutely seamless.

[00:37:08] Mike Kaput: So you can use regenerate to actually improve a weird performance. So if somebody is speaking and starts trailing off, but you want to keep what they're saying midway through, you select regenerate, and it will sound like you wish they'd said it. Which is pretty wild to me. Also, a less sci-fi use here is you can remove unexpected background noise using regenerate.

[00:37:32] Mike Kaput: What did you think of these features looking at them, Paul? I mean, they really seem to illuminate what's possible when you fundamentally build a solution from the ground up in a

[00:37:42] Paul Roetzer: smarter way. Yeah, I, that's why we always talk about Descript, and Runway is another one. They're just, they just take a smarter approach to product development and they're running circles around people.

[00:37:52] Paul Roetzer: So it's, it's what it looks like when you build a smarter company from the ground up that has, you know, an AI-first approach to things. And that's why a lot of these kind of legacy software companies run a real significant risk of being obsoleted, as they have product teams that don't think this way.

[00:38:12] Paul Roetzer: They didn't come up this way. So yeah, it's just, it's awesome tech. It's, they could use a little maybe assistance on their messaging. I don't know that eyes rolling around in your sockets is kind of the way I would want to describe this stuff, but yeah, I mean, it's, it's a really cool company that we're big fans of.

[00:38:30] Paul Roetzer: Yeah,

[00:38:31] Mike Kaput: it's probably a good reminder to people too, is just, you know, whether or not you want to use the eye contact thing, this stuff is possible in very affordable software right now. So what is possible when it comes to recording video and audio has fundamentally changed in the last year or two.

[00:38:48] Mike Kaput: So it's worth revisiting your processes and your approaches with some of this technology. Speaking of another announcement that somewhat, almost feels like science fiction, Meta AI researchers have developed a new generative AI model for speech, and it's called Voicebox. Voicebox can generate high-quality audio clips across six different languages and conduct operations like noise removal, content editing, style conversion, and audio sample generation.

[00:39:23] Mike Kaput: Now, what's cool about this is that Voicebox significantly outperforms many other models in terms of the error rate and how the audio sounds. It's actually trained, I found this interesting, on more than 50,000 hours of recorded speech and transcripts from audiobooks. So you can literally use this to generate a hyperrealistic version of a voice in six different languages.

[00:39:49] Mike Kaput: Now, Meta, however, is not releasing the model or the code publicly, but is sharing a research paper about this, because they're very, very worried this could be used for bad intentions in cloning people's voices or creating essentially audio deepfakes. How did you view this announcement?

[00:40:11] Paul Roetzer: I mean, not releasing it because of concerns about it being misused just means someone else will build it within three weeks and release it.

[00:40:21] Paul Roetzer: Like, it's just, this is, you know, it's the approach they're taking. I'm actually listening to the interview Lex Fridman did with Mark Zuckerberg right now about their decisions around the releasing of, like, Llama and some of their models. The technology is real. It is here. My general sense is within the next one to two years, it'll be extremely commonplace that people have replicated their voices.

[00:40:48] Paul Roetzer: In all kinds of different uses. Sometimes it's licensed use of it, other times it's just replicating someone's voice for bad uses. It's going to be prevalent, it'll be everywhere. The technology will be cheap. And you know, again, I go back to when you and I tried in spring of 2022 to look into, using synthetic voice to read portions of our book, just to demonstrate it was possible.

[00:41:15] Paul Roetzer: And for the audiobook, you needed like 40 hours of training data and probably lots of money to use Google WaveNet technology to do it. Yeah. And now it's going to be like, you can do a free demo of this stuff. They're not the only player in this space. I mean, you can go search for this. There's probably 10 vendors already doing variations of this to different degrees.

[00:41:35] Paul Roetzer: So this technology is going to be everywhere, whether you like it or not. People will use it for all kinds of good and bad things, like replicating the voice of loved ones. And then you mix in a language model, you train it on a history of data of that person, and now you can have conversations with deceased people in perpetuity.

[00:41:54] Paul Roetzer: Like yeah. Whether you think that's a good thing or a bad thing for society. I have opinions on that I won't get into at the moment. I just don't, you know, again, it goes back to like, I just don't think we're ready for this, like, as a society, but it's here. And I think we, we have to kind of come to grips with that.

[00:42:13] Paul Roetzer: This technology is going to be everywhere. Whether Facebook or Meta chooses not to release it or not, it doesn't matter. It's going to, it'll be available on Hugging Face within, you know, a month. Yeah.

[00:42:25] Mike Kaput: Meta is on a roll in the past couple weeks. So Meta's chief AI scientist, Yann LeCun, who we've mentioned many times here, introduced a new AI model which learns an internal model of the world by creating abstract representations of images, which is somewhat similar to how we humans process information in the real world.

[00:42:47] Mike Kaput: So they're calling this model the Image Joint Embedding Predictive Architecture model, I-JEPA. It demonstrates strong efficiency and performance, and it essentially provides this stepping stone towards human-level intelligence in AI and opens up possibilities for other applications within video and image artificial intelligence.

[00:43:10] Mike Kaput: Now, Paul, I know you're a big fan and follow Yann LeCun quite closely. Can you kind of contextualize this for us, like why he's important to follow, and what he's kind of working on and thinking about

[00:43:22] Paul Roetzer: in this space? Yeah. In the, in the spirit of trying to keep this rapid fire, I'll try and keep this simple.

[00:43:28] Paul Roetzer: Yann LeCun doesn't believe that large language models are a path to general intelligence. So basically he, he doesn't see them as an endgame. He sees them as an interesting step in his view. To oversimplify it, the AI needs to be able to experience the world around it, a worldview to actually achieve human-like intelligence.

[00:43:51] Paul Roetzer: So if you think about a toddler and how they learn from observing the world around them. They're not just learning through words, they're learning through observation. You develop common sense. You develop an understanding of the world by observing everything around you. And so his belief is that this worldview is critical to breaking through, to human level and beyond intelligence.

[00:44:13] Paul Roetzer: And so whenever they release something that is tied to Yann's view of an AI model, it's worth paying attention to. It's hard to understand sometimes, like I read this and yeah, if I didn't have all the context I already had of him and his kind of vision for AI, it probably wouldn't have made as much sense.

[00:44:31] Paul Roetzer: But that's the basic premise here, is you have everyone in all these research labs trying to, to achieve general intelligence. They're trying to build these general AI agents that have the ability to do everything a human can do, in essence. And there's different opinions of how you get there. And so Yann has, for a long time, held this belief that the worldview is a critical element of, of getting there.

[00:44:54] Paul Roetzer: Gotcha.

[00:44:56] Mike Kaput: Another fun, potentially terrifying AI use case is that AI can now write entire novels in minutes based on an initial prompt. So Matt Shumer, who we know well, co-founder and CEO of OthersideAI, which makes an AI writing tool that we have talked about quite a bit called HyperWrite.

[00:45:16] Mike Kaput: He recently shared an open source project called GPT Author. What it does is it strings together a chain of AI systems to write an entire book for you in minutes, complete with cover art and the ability to export this to the Kindle store, all based on just a high-level description of what you want to see in your novel.

[00:45:37] Mike Kaput: So Matt notes that, quote, a 15-chapter novel can cost as little as $4 to produce and is written in just a few minutes. Now, Paul, obviously I think you had some, some thoughts on this.

[00:45:50] Paul Roetzer: Yeah, so I mean, a couple of interesting things. One, this is, we've talked about building AI agents, so not just language models that output something, but agents that are capable of taking a series of actions, you know, building a list and prompting itself to kind of do the next thing.

[00:46:04] Paul Roetzer: And so the way he constructed this follows that general concept of building an agent that's able to do multiple different tasks to create an end output. It's inevitable. Like, there's, there's nothing about the fact that this exists that is shocking to me or even surprising.

[00:46:21] Paul Roetzer: It was only a matter of time. I understand people may look at this and, and hate it or, you know, have fear around it or anger around it that someone has built something that can write novels. Like, why do we need that? Isn't that what humans do? You don't have to like it. I don't have to like it. We just have to accept that this is the world, and it's using, in essence, dumb AI.

[00:46:41] Paul Roetzer: Like, fast forward 12 months and AI's going to be infinitely better and smarter than it is today. So what he's achieved is using pretty rudimentary forms of AI in the context of where this is all going. So it's going to get better, faster, smarter. I did find it interesting. Matt did comment, and again, Matt's a friend of ours and, someone I've spent a lot of time talking about AI and the future of AI with, so I have a ton of respect for what he's doing and how he's trying to build this stuff.

[00:47:09] Paul Roetzer: But he said, I think one important point to clarify is that I don't see this replacing authors, at least not anytime soon. I built this as an interesting way to get more of the content I like. Let's say I love a specific series, but I finished it. Now I can generate more books that are relatively similar.

[00:47:25] Paul Roetzer: My take, if Matt and I were having a conversation about this, would be to say, oh, so you're prompting it to write more Harry Potter novels? And then you're, like, using it, it writes like J.K. Rowling, and it, you know, are there copyright concerns to this concept? Are there, are you even allowed to do this kind of thing?

[00:47:45] Paul Roetzer: Especially if you're going to commercialize it and sell it. Like I get if you're doing that for your own consumption and you're never going to share it or sell it, but if I just say, oh, I love the way that Mike writes. I'm going to take Mike's books. I'm going to like write more books from Mike right there.

[00:48:00] Paul Roetzer: There's some really interesting conversations to be had around legal and ethical. So yeah, I don't know. I mean, I think it's, you know, again, Matt's a super smart guy. He is doing awesome stuff. I think it's an intriguing use case and we're just going to see lots more of it and it's going to get better and better.

[00:48:16] Paul Roetzer: And what that means for writers and publishing and media, I don't know, probably a topic for another time to expand on.

[00:48:25] Mike Kaput: So OpenAI is also considering, considering being the keyword here, a new major release that could have a big impact on the AI ecosystem. So The Information, a media outlet, reported that the company is, quote, considering launching a marketplace in which customers could sell AI models they customized for their own needs to other businesses.

[00:48:44] Mike Kaput: And that's according to two people with knowledge of discussions at the company. So basically, in theory, this marketplace could make it easier for companies to access and deploy more customized AI systems. What do you see as the reason for them considering this move?

[00:49:00] Paul Roetzer: I mean, it's just a massive market play.

[00:49:03] Paul Roetzer: I think it'd be interesting to see if Amazon gets involved in this. I mean, does OpenAI become the everything store of generative AI in the future? I don't know who wins at this game, or if there are other players involved. I don't know if it has anything to do with the plugin stuff not taking off, maybe, the way they thought it could.

[00:49:19] Paul Roetzer: Yeah. So, yeah, I don't know. It's all market potential and trying to figure out where this all goes and what the major, you know, billion or trillion dollar plays are. So it's interesting to keep an eye on.

[00:49:31] Mike Kaput: Another interesting market experiment: ChatGPT could be coming to a vehicle near you.

[00:49:37] Mike Kaput: Mercedes-Benz is beta testing ChatGPT as a voice assistant in its cars. Over 900,000 Mercedes-Benz vehicles in the US can activate this experimental program and basically use ChatGPT while they drive. So, Paul, I know you're a Tesla owner, so you probably have some strong opinions on in-car voice assistants.

[00:50:00] Paul Roetzer: Oh, I hate the current voice assistant systems. So yeah, I have a Tesla, I've had one for five years, and it's horrible. You have to give very specific commands. If you think about it, it's almost like it's trained on a fixed set of prompts, and if you don't nail that prompt, or learn exactly the way to ask it, you're not going to get the result.

[00:50:20] Paul Roetzer: It's kind of like how Siri works. These legacy voice systems are very rudimentary and you have to know how to guide them. But think about my car: if the assistant was just trained on the operations manual, and I could have a conversation with it, and it knew everything that was in the manual and could have a conversation back with me about it.

[00:50:40] Paul Roetzer: Awesome. That's a great use case for this stuff. It would make it infinitely better than it is right now, and it's not even hard to do. So yeah, I love that idea. I think it's a great use case. And I could see it within five to seven years, certainly in the luxury car market probably within like two to three years.

[00:50:58] Paul Roetzer: And, you know, this would be pushed out to all cars within the next decade. It would just be off the shelf: you have to have a conversational agent in the car to do these kinds of things.
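
To make the manual-grounded assistant Paul describes a bit more concrete, here is a rough sketch of one way it could work: retrieve the passages of the owner's manual most relevant to the driver's question and hand them to a model as context. The `complete` function is a hypothetical stand-in for an LLM call, and Mercedes has not described how its integration works, so this is purely illustrative.

```python
# A rough sketch of a manual-grounded voice assistant (an assumption, not
# Mercedes' or OpenAI's actual integration): pull the paragraphs of the
# owner's manual most relevant to the question and pass them to the model
# as context. `complete` is a hypothetical stand-in for an LLM call; a real
# system would use embeddings and speech I/O instead of keyword overlap.

def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError


def ask_manual(question: str, manual_text: str) -> str:
    # Naive retrieval: score each paragraph by word overlap with the question.
    paragraphs = [p for p in manual_text.split("\n\n") if p.strip()]
    question_words = set(question.lower().split())
    ranked = sorted(
        paragraphs,
        key=lambda p: len(question_words & set(p.lower().split())),
        reverse=True,
    )

    return complete(
        "Answer the driver's question using only these owner's manual excerpts:\n\n"
        + "\n\n".join(ranked[:3])
        + f"\n\nQuestion: {question}"
    )
```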

[00:51:11] Mike Kaput: So in another tech update here, Scott Brinker, who is a big voice in marketing tech and who we have gotten to know over the years in our work on the marketing agency side, wrote an article saying essentially that two major MarTech disruptions just collided: AI and what he calls composability.

[00:51:34] Mike Kaput: By that he basically means that when we combine AI tools with existing marketing and sales software platforms, we get, quote, composability: the ability to mix and match different software components according to specific user requirements. And Scott writes at length about why this combination is going to essentially give marketers new superpowers.

[00:52:01] Mike Kaput: So we can combine LLMs, large language models, with custom data. An example he leans on heavily is ChatSpot for HubSpot, which applies GPT-like capabilities to your HubSpot CRM, which we use heavily, so you can, say, query your data. It's exactly the same idea you just talked about with the car.
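
Here is a minimal sketch of that "LLM plus your own data" pattern: fetch CRM records, put them in the prompt, and let a user ask questions in plain English. The `fetch_contacts` and `complete` functions are hypothetical placeholders for a CRM API client and an LLM call; ChatSpot's actual internals aren't public, so this only illustrates the general composability idea.

```python
# A minimal sketch of the "LLM plus your own data" pattern (not ChatSpot's
# actual implementation, which isn't public). `fetch_contacts` and `complete`
# are hypothetical stand-ins for a CRM API client and an LLM call.

from typing import Dict, List


def fetch_contacts(limit: int = 50) -> List[Dict]:
    """Placeholder for a call to your CRM's contacts API."""
    raise NotImplementedError


def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError


def ask_crm(question: str) -> str:
    # In a real system you would select only the fields the question needs;
    # dumping whole records works for a small illustration.
    records = "\n".join(str(contact) for contact in fetch_contacts())
    return complete(
        "You are a marketing assistant. Answer using only this CRM data:\n"
        f"{records}\n\nQuestion: {question}"
    )

# Example: ask_crm("Which contacts opened our last email but haven't booked a demo?")
```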

[00:52:23] Mike Kaput: So when this happens, he notes that, you know, a business user doesn't have to be bound by a prescribed workflow or a user experience invented by a far away product team at a completely different company. They can invent their own. Now, he rightly, I think, points out this could have a revolutionary impact on how we do work in marketing.

[00:52:43] Mike Kaput: Now, Paul, you and I are big HubSpot users, for instance. But how many times have we encountered a feature or a marketing tool, in a different company or different context, where we're like, I don't really understand why they created this tool to work this way, or why it doesn't have this feature? This seems to address that problem.

[00:53:02] Mike Kaput: Do you see this having a big impact on MarTech?

[00:53:06] Paul Roetzer: Yeah, I'm scanning the article, trying to figure out if there's anything new here we haven't talked about. Is he just basically saying what we've been saying, that once you can be in a piece of software and ask it to help you,

[00:53:19] Paul Roetzer: and it has these capabilities, that's where this is all going? I think it's just reaffirming that. Maybe it's just giving it a name. So yeah, I think it reaffirms a lot of what we've been saying: as these capabilities get baked right into the platforms you're using, all of a sudden you have these kinds of superpowers.

[00:53:44] Paul Roetzer: ChatSpot in its current form is very rudimentary, almost like the Tesla voice system: you have to know what it's capable of to get value out of it. And I think as tools like ChatSpot become more general in their capabilities, where the user doesn't have to know which 10 prompts actually get value out of it, that's when we start seeing stuff unlocked.

[00:54:07] Paul Roetzer: And I have no idea if ChatSpot will get there in a month or in a year, or if Salesforce AI Cloud gets there in a month or in a year. But that's the future of this stuff. And Scott, for people who don't know, is the VP of platform at HubSpot, so I would assume he has some more knowledge about where ChatSpot is going and the impact it could have.

[00:54:28] Paul Roetzer: But Scott's awesome. He's a super smart guy. I've been a big fan of Scott's for 10 years, so I'll have to go back and reread the article, and maybe there are some elements of it I'm missing. But, you know, it's always worth checking out what Scott has to say.

[00:54:43] Mike Kaput: So, Inflection AI, yet another major AI player, has announced a new large language model that, surprisingly, outperforms GPT-3.5.

[00:54:52] Mike Kaput: It also outperforms Meta's LLaMA model, which we've talked about, and Google's PaLM model. The model is called Inflection-1, and it's going to power Inflection AI's personal AI assistant, their kind of flagship product, called Pi. Now, what's interesting here, if people haven't followed Inflection in the past:

[00:55:12] Mike Kaput: they were founded by a serious team of AI experts, but only about a year ago. And it seems they're already releasing a model that can compete with some of these major releases from huge tech firms. So Paul, I know you've followed Inflection AI quite closely. What did you think of this announcement?

[00:55:32] Paul Roetzer: Yeah, so we talked about them a few episodes ago, and I think I said at the time I was a little surprised that this is what they came out with, because it seemed really similar to ChatGPT and other tools and didn't seem very differentiated. Almost like they kind of rushed to market just to get something out.

[00:55:47] Paul Roetzer: I'm not sure I've fully backed off of that position, but I've just tried to stay very open-minded about what they're trying to do. And so I did listen. Their CEO, Mustafa Suleyman, teamed up with Reid Hoffman, the co-founder of LinkedIn, who I believe sits on Microsoft's board.

[00:56:03] Paul Roetzer: So they have deep access at Microsoft, just as OpenAI kind of does. But Mustafa is also a co-founder of DeepMind, which, as we've talked about, is now the lead AI research lab within Google. So he's a major player in the modern world of AI as well. And so, ironically, I listened to Kara Swisher's interview with him on my flight back from Italy last week.

[00:56:27] Paul Roetzer: There were a few notes from that interview that I wanted to call out that might give some context on them and the significance of their foundation model. I'll hit on some of these because they're relevant to other things we've talked about on AI safety. He did talk about how these things could get recursively self-improving, and that's the concern.

[00:56:46] Paul Roetzer: Right now it's not a huge risk, but as it plays out, he sees this on a multi-decade timescale. So this is a guy who's not as concerned about the existential-threat-to-humanity kind of stuff. In terms of regulation, going back to what we were hearing earlier with Anthropic, he said guardrails are important, and he thinks watermarks for synthetic media, for AI-generated content, are critical.

[00:57:10] Paul Roetzer: And then third-party red teamers. Red teamers are the people basically trying to find the flaws and the lack of safety within these models. He thinks they need to be independent third parties, and that transparency is critical. He talked about how they've raised $225 million and they're going to raise another $675 million.

[00:57:27] Paul Roetzer: So it's like, what do you do with roughly a billion dollars? He said, you train these models. Going back to Aidan Gomez and Cohere's position, it's really expensive to train these massive foundation models, so you need a ton of money to do it. I thought it started getting really interesting when he was talking about his vision for where this all goes.

[00:57:45] Paul Roetzer: He does see it as a complete transformation of computing, and he said there will be many AIs, which is actually something Mark Zuckerberg reaffirmed in his conversation with Lex Fridman. They don't see a future where there are just one or two or three general AIs that you use to kind of run your life.

[00:58:01] Paul Roetzer: You're going to have dozens or hundreds of AIs that are specifically trained in certain areas of your life, like a travel assistant, an executive coach, a life coach. You're going to have all these AIs that have unique data sets on you, that you give access to, so that you can get the benefits of them.

[00:58:23] Paul Roetzer: He talked about the key to theirs being empathetic and personal, and he sees it as a path to accessing all digital services. He specifically thinks about Pi as a personal AI, kind of an executive assistant, a chief of staff. The key, to him, is that it needs to be personalized to you, and it needs to remember you across all platforms and all devices.

[00:58:45] Paul Roetzer: So whether I'm using it on my phone, or in WhatsApp, or on my desktop, it remembers me and our conversations. It follows us, which ChatGPT does not do; it doesn't have a memory of everything you've done. The revenue model, they don't know yet.

[00:59:02] Paul Roetzer: They don't have a price for it, so it's kind of early on. He did say that over time the companies building these models are going to get less reliant on the big cloud companies like AWS and Google and Microsoft Azure, because the models are going to get smaller and more powerful and be able to exist on your phone, in essence.

[00:59:20] Paul Roetzer: He got into a couple of other key areas like safety and ethics. I thought it was interesting that he said they're going to have to get more involved in governing what the AI can and can't say. And so he was asked by Kara, isn't this going to create conflict? Because this is the issue we have with ChatGPT and others, and even social media sites: who is the arbiter of truth?

[00:59:40] Paul Roetzer: Who decides what is good and what is bad? What is fact and what is false? And what he's saying is, we are, and if you don't like it, don't pay for it. So he's basically saying they're going to have a very clear opinion on what good behavior is and is not, and that will govern what the AI does. And if you don't like it, don't use our product.

[01:00:01] Paul Roetzer: And that's the first time I've actually heard someone straight up say, we are going to be the arbiter of the truth, and use someone else's product if you don't agree with our approach to this. And she even said, you're going to alienate half of the US. And he goes, then fine.

[01:00:15] Paul Roetzer: Then the other half can be the users. So I thought that was interesting. And then he talked about hallucinations, which is these models just making stuff up, which we've talked about as an issue. He thinks they can be eliminated by June 2025, which I thought was a really random thing to say. But in essence, he's saying two years from now.

[01:00:33] Paul Roetzer: So he is saying the trajectory shows it's eminently controllable, that we can stop these models from making stuff up. He got into job loss and said, yes, there will be job loss, but it's still going to be a net positive for humanity. And then a little bit about the EU AI Act: he's supportive of it, but thinks it overreaches. We'll put the Kara Swisher interview in the show notes too, because I think it's a really good interview.

[01:00:58] Paul Roetzer: And again, the more you understand Yann LeCun and Aidan Gomez and Mustafa Suleyman and Demis Hassabis and all the major players who are building these foundation models, when you understand their perspectives on this stuff, you have a whole different view of where we are and where we're going. So interviews like these are really valuable for getting those insights.

[01:01:21] Paul Roetzer: All right,

[01:01:22] Mike Kaput: Well, we have three more very quick topics to cover here, and then I think we've done our job of making people's brains explode for one week. So very quickly here, we just got the latest annual AI 100 list from CB Insights, a big player that shares data on startups.

[01:01:42] Mike Kaput: This is their seventh annual list of the hundred most promising private AI companies in the world. They choose these companies based on a bunch of different factors from their data, including things like R&D activity and the strength of their teams. What I found interesting is that we see some notable names on this year's list.

[01:02:02] Mike Kaput: People we've talked about in this episode and in the past. Some that jumped out to me: OpenAI, of course; Cohere is on there; Descript; Inflection AI; Jasper; and Midjourney. Paul, did any of this list surprise you?

[01:02:16] Paul Roetzer: No, there wasn't a ton in the marketing space. I was looking at that; it's a very fragmented 100.

[01:02:23] Paul Roetzer: So don't assume you can just go find all the companies you need to talk to in the marketing space; there are only about six that seem obviously in the marketing space. But yeah, it's a cool list, and it gives you a sense of where the money's going.

[01:02:36] Mike Kaput: Now, an interesting hiring announcement: Coca-Cola just appointed a global head of generative AI.

[01:02:43] Mike Kaput: Pratik Thakar is now the global head of generative AI in their marketing transformation office. According to his LinkedIn, he is developing creative platforms that leverage AI technology to enhance the consumer experience across the entire brand and category portfolio of the Coca-Cola Company.

[01:03:02] Mike Kaput: Should we expect to see more jobs specifically around generative AI, with AI in the title?

[01:03:08] Paul Roetzer: I think that's going to be a really popular title, head of generative AI. Yeah, I could see that, definitely. I don't know where that sits; I don't know if that's a marketing thing, I would assume so. But yeah, that's definitely been an interesting one.

[01:03:22] Mike Kaput: And then last but not least here, a new report looks at some of the hidden work that humans do behind the scenes to make AI systems possible. A report from The Verge covers the job of data annotation. Data annotation is a necessary, temporary step when you develop AI: behind every system there are tons of people labeling data in order to train AI systems.

[01:03:47] Mike Kaput: And the report talks about how this work is often tedious, repetitive, and isolating. We talked on a previous podcast episode about some of the people being hired, often outside of the US and Europe, for relatively little money, essentially labeling all these data sets: telling AI what language is and isn't appropriate, looking at toxic images to flag them. It can be quite intense work.
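
For anyone who hasn't seen what this labeling work actually produces, here is an illustrative, entirely made-up example of the kind of records annotators generate for something like a toxicity filter. The field names and file format are hypothetical; real labeling vendors each use their own schemas and tools.

```python
# Illustrative, made-up examples of the records annotators produce for
# something like a toxicity filter. Field names and the JSONL format are
# hypothetical; labeling vendors each use their own schemas and tools.

import json

labeled_examples = [
    {"text": "Thanks so much for the quick reply!", "label": "not_toxic"},
    {"text": "You are an idiot and nobody wants you here.", "label": "toxic"},
]

# Annotations are typically stored one record per line (JSONL) so they can be
# streamed into a model-training pipeline later.
with open("annotations.jsonl", "w") as f:
    for example in labeled_examples:
        f.write(json.dumps(example) + "\n")
```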

[01:04:11] Mike Kaput: And this report talks about how there's a lot of secrecy around this industry. Workers often don't know exactly what they're working on; they often have NDAs, and they don't always understand who exactly they're working for. The whole point here is that this results in a lack of understanding about the information and the training that are shaping AI systems, and about the people involved in that process.

[01:04:40] Mike Kaput: Did anything jump out to you about this, Paul? I mean, we did discuss it a little bit on a previous episode related to OpenAI's data labeling practices.

[01:04:49] Paul Roetzer: Two things. One, I think there are a lot of people who don't know how these models are trained, and this is an interesting article that gives some background on how it works.

[01:04:58] Paul Roetzer: So if you didn't know that humans label a bunch of stuff to train the AI, give this a read and you'll understand it a little bit better. The second thing: I kept thinking of the gig economy as I was reading this, how Uber and these other players created this whole new economy of work, and we talk about where the jobs are going to be in the future.

[01:05:17] Paul Roetzer: This started, I don't know what to call it, the AI economy or whatever, but like the annotating economy. There are all these people who annotate data for a living. And I just wonder if that's going to keep being an emerging field, if it's going to become more necessary, or if eventually the AI gets so good at it that we just don't need as many people doing it.

[01:05:38] Paul Roetzer: But that was the thing that ran through my mind: there's this whole economy that most people don't even know exists, where millions of people's job is to annotate stuff.

[01:05:47] Mike Kaput: Yeah. And that it is largely, at least at this stage, not automated by something like artificial intelligence.

[01:05:53] Paul Roetzer: Yeah. AI wouldn't be as intelligent as it is without the humans training it and labeling it.

[01:06:00] Mike Kaput: Awesome, Paul. Well, thank you for helping us clear some things off the docket here.

[01:06:05] Paul Roetzer: Let's not go away for three weeks again. That was a lot of processing.

[01:06:09] Mike Kaput: Right, right. Well, as always, we appreciate the time and the insight, and I know our audience gets a ton of value out of it.

[01:06:15] Mike Kaput: We keep getting great comments from the audience. Keep them coming in; they're really encouraging.

[01:06:19] Paul Roetzer: All right. Thanks, everyone. And we'll be back, not next week, because we're taking the week off for July 4th, the Independence Day holiday in America. So we will be back, whatever the next time is, July 11th, I think, and I guess we're going to have to merge two weeks of news.

[01:06:36] Paul Roetzer: So hopefully nothing big happens in AI for the next week; we want a week of, like, nothing crazy. We'll be back in two weeks to get back to the regular weekly format. Thanks for being a part of this journey, and for letting us be a part of your AI journey. We'll talk to you next time.

[01:06:50] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[01:07:12] Paul Roetzer: Until next time, stay curious and explore AI.
