
[The Marketing AI Show Episode 42]: Meta’s Segment Anything Model (SAM) for Computer Vision, ChatGPT’s Safety Problem, and the Limitations of ChatGPT Detectors


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


One step forward, two steps back…or at least steps taken with caution. Meta announces their Segment Anything Model, and in the same breath, we're talking about ChatGPT and safety, as well as the limitations of ChatGPT detectors. Paul and Mike break it down:

Meta AI announces their Segment Anything Model

An article from Meta introduces their Segment Anything project, aiming to democratize image segmentation in computer vision. This project includes the Segment Anything Model (SAM) and the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset ever.

This process of identifying objects in images is called segmentation, hence the name "segment anything." This matters even if you're not super technical: segmentation is required to build computer vision systems, but until now, doing it took a lot of time, money, and custom training. Using SAM, you can simply click on objects to segment them, or write a text prompt telling the model which objects you want to identify. (For the curious, there's a short code sketch below showing what a point prompt looks like in practice.)
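
To make that concrete, here is a minimal sketch of point-prompted segmentation using Meta's open-source segment-anything Python package. The checkpoint filename, image path, and click coordinates are illustrative placeholders; see the project's GitHub repository for the actual model weights and setup details.

```python
# pip install git+https://github.com/facebookresearch/segment-anything.git
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM checkpoint (placeholder filename; weights come from Meta's repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_checkpoint.pth")
predictor = SamPredictor(sam)

# Hand SAM an RGB image; it computes the image embedding once up front.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# "Click" on an object: a single foreground point at pixel (x=500, y=375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 = foreground click, 0 = background click
    multimask_output=True,       # return a few candidate masks at different granularities
)

# Keep the highest-scoring mask: a boolean array the same size as the image.
best_mask = masks[np.argmax(scores)]
```

That single click replaces what previously required a custom-trained model and hand-labeled data, which is the cost collapse Paul and Mike discuss below.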

This has wide-ranging applications across different industries. Meta suggests, for example, that it could be incorporated into augmented reality glasses to instantly identify the objects you're looking at and surface reminders and instructions related to them.

In marketing and business specifically, Gizmodo calls the demo of SAM a Photoshop Magic Wand tool on steroids, and one of its reporters used it to do sophisticated image editing on the fly, simply pointing and clicking to remove and adjust objects in an image.

Right now, the model is available only for non-commercial testing, but given the use cases, it could find its way into Meta’s platforms as a creative aid.

Paul and Mike discuss the opportunities for marketers and the business world at large.

Does ChatGPT have a safety problem?

OpenAI’s website states clearly, “OpenAI is committed to keeping powerful AI safe and broadly beneficial. We know our AI tools provide many benefits to people today. Our users around the world have told us that ChatGPT helps to increase their productivity, enhance their creativity, and offer tailored learning experiences. We also recognize that, like any technology, these tools come with real risks—so we work to ensure safety is built into our system at all levels.”

And they also state, “We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”

Is this April 5 statement on their website a response to calls for increased AI safety, like the open letter signed by Elon Musk and others, and Italy’s full ban on ChatGPT?

A new article from WIRED breaks down why and how Italy’s ban could spur wider regulatory action across the European Union—and call into question the overall legality of AI tools. When banning ChatGPT, Italy’s data regulator cited several major problems with the tool. But, fundamentally, their reasoning for the ban hinged on GDPR, the European Union’s wide-ranging General Data Protection Regulation privacy law.

According to WIRED, OpenAI collected personal data from EU citizens to train ChatGPT and its underlying models, and experts cited in the article said there are just two ways OpenAI could have gotten that data legally under EU law. The first would be if they had gotten consent from each user affected, which they did not. The second would be arguing they have "legitimate interests" to use each user's data in training their models. The experts say that second argument will be extremely difficult for OpenAI to prove to EU regulators; Italy's data regulator has already told WIRED this defense is "inadequate."

This matters outside Italy because all EU countries are bound by GDPR. And data regulators in France, Germany, and Ireland have already contacted Italy’s regulator to get more info on their findings and actions.

This also isn’t just an OpenAI problem. Plenty of other major AI companies likely have trained their models in a way that violates GDPR. This is an interesting conversation and topic to keep our eyes on. With other countries follow suit?

How is OpenAI approaching AI safety?

OpenAI, the maker of ChatGPT, just published what it's calling "Our approach to AI safety," an article outlining specific steps the company takes to make its AI systems safer, more aligned, and more responsibly developed.

Some of the steps listed include delaying the general release of systems like GPT-4 to make sure they're as safe and aligned as possible before they're accessible to the public, and protecting children by requiring people to be 18 or older (or 13 or older with parental approval) to use its AI tools. The company is also looking into options to verify users, and cites that GPT-4 is 82% less likely to respond to requests for disallowed content.

Other steps for protection include respecting privacy by not using data to sell their services, advertise, or profile users. They also say they work to remove personal information from models where possible. Lastly, they’re working to improve factual accuracy. They say GPT-4 is 40% more likely to produce factual content than GPT-3.5.

Why now? Are we confident they're developing AI responsibly?

Can we really detect the use of ChatGPT?

Speaking of safety: according to the Washington Post, Turnitin, a popular plagiarism detection tool used by more than 10,000 secondary and higher education institutions, has activated a new AI writing detection feature. The problem? In the Post's experiments, the tool repeatedly failed to reliably distinguish AI-generated text from human writing, even though teachers and parents may take its scores at face value. Paul and Mike dig into why unproven detection technology in the hands of untrained educators is a risky combination.

Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.

This episode is brought to you by BrandOps, built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.

Timestamps

00:03:29 — Segment Anything from Meta AI

00:09:42 — ChatGPT’s safety problem

00:22:04 — Limitations of ChatGPT detectors

00:30:54 — Yann LeCun on large language models

00:35:30 — AI Index is published



Read the Interview Transcription

Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.

[00:00:00] Paul Roetzer: the most accomplished AI researchers in human history can disagree on the path forward and they often do. And that's healthy. I like the fact that we don't all have this echo chamber of like, this is exactly how it needs to happen.

[00:00:12] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.

[00:00:32] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.

[00:00:42] Paul Roetzer: Welcome to episode 42 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput, who is the Chief Content Officer at Marketing AI Institute, where I am the CEO. We co-authored the book Marketing Artificial Intelligence: AI, Marketing, and the Future of Business. And prior to that, we spent 10 years together at my agency.

[00:01:03] Paul Roetzer: Is that right? How long were we there, right around 10 years? Yeah. For anyone who, frankly, doesn't know my background: I owned a marketing agency prior to my life running the AI Institute. The two were simultaneous for a time in my life, but I sold the agency in 2020. And all of our energy is focused on bringing you the latest information, education, and events around artificial intelligence.

[00:01:24] Paul Roetzer: All right, so today's episode is brought to us by BrandOps. Thank you to BrandOps for supporting the show. BrandOps is built to optimize your marketing strategy, delivering the most complete view of marketing performance and allowing you to compare results to competitors and benchmarks. Leaders use it to know which messages and activities will most effectively improve results.

[00:01:47] Paul Roetzer: BrandOps also improves your generative marketing. With BrandOps, your content is more original, relevant to your audience, and connected to your business. Find out more and get a special listener offer: visit brandops.io/marketingaishow. And the 42nd episode is also brought to us by the fourth annual Marketing AI Conference, or MAICON.

[00:02:23] Paul Roetzer: I hear it said a lot of ways. We say MAICON. You can call it whatever you want. But the Marketing AI Conference returns to Cleveland this year, at the Convention Center right across from the Rock and Roll Hall of Fame and beautiful Lake Erie, July 26th to the 28th. The conference brings together hundreds of professionals, maybe thousands, depending on how many listeners want to come join us in Cleveland, to explore AI in marketing, experience AI technologies, and engage with other forward-thinking, next-gen marketers and business leaders.

[00:02:53] Paul Roetzer: You will leave MAICON prepared for the next phase of your AI journey with a clear vision and a near-term strategy you can implement immediately. Plus, you'll get to hang out with me and Mike, and we'll do like a podcast listener happy hour. I'm gonna need some drinks by July. All right. But prices increase April 14th, which is this Friday.

[00:03:14] Paul Roetzer: So save $400 on any pass by getting your MAICON passes today. It is MAICON.ai. That's m-a-i-c-o-n dot ai. All right, Mike, three topics. Let's roll.

[00:03:27] Mike Kaput: All right. Thanks, Paul. First up, we're going to dive into a new announcement from Meta, the parent company of Facebook, because they just announced what they are calling the Segment

[00:03:40] Mike Kaput: Anything Model, or SAM for short. This is a model that can essentially identify objects in an image. So even when it hasn't been trained on a specific object, SAM is able to actually identify what the object is and the constituent pieces of it. So this is really important. We're not gonna get super technical, but it's extremely important, because this process of identifying objects in images is called segmentation.

[00:04:09] Mike Kaput: So hence that name, Segment Anything. This matters even if you're not, you know, a data scientist, engineer, or technical type, because segmentation has to be done in order for computer vision systems to do what they do. Up until now, though, doing that took a ton of time and money and custom training. But now, using something like SAM, you can simply click on objects to segment all the pieces of them and understand what's in the image.

[00:04:41] Mike Kaput: You can even just write a text prompt to tell the model what objects you'd like to identify. So an example they gave was: just type in "cat" and it'll identify all the cats in a photo. Now, historically, doing that has taken a lot more time and energy and money to do. Now this has some wide-ranging applications across different industries.

[00:05:03] Mike Kaput: So Meta gives some examples of what this model might be able to do in the near future. So for instance, you could see it incorporated into augmented reality glasses. Using AR glasses, something like SAM could instantly identify objects that you're looking at and prompt you with reminders and instructions related to them.

[00:05:26] Mike Kaput: For marketing and business specifically, there was an article in Gizmodo that called the demo of SAM, quote, a "Photoshop Magic Wand tool on steroids." One of the reporters at Gizmodo actually used it to do really sophisticated image editing on the fly with ease, just by pointing and clicking to remove and adjust an image.

[00:05:47] Mike Kaput: Now, that's just kind of scratching the surface here, because right now the model is only available for non-commercial testing. But given the use cases, you can start to see there are many, many possible ways we might start seeing this really advanced computer vision model be incorporated into Meta's platforms as a creative aid.

[00:06:07] Mike Kaput: So Paul, when you saw this announcement, what were your initial

[00:06:11] Paul Roetzer: thoughts here? A few things come to mind. One, I think a lot of people don't realize Meta has a major AI research lab. People who aren't in the AI space sometimes forget, or didn't know, that Meta is a major player in this space, led by Yann LeCun, a Turing Award-winning AI researcher and one of the fathers of modern AI.

[00:06:36] Paul Roetzer: So I think it's a good reminder. Any regular listeners to our show would recall us talking about Cicero from Meta AI, which is a really cool research paper and model. So one takeaway is just a reminder that Meta is doing some really interesting things in AI. Two, the first thing that jumped out at me is the openness of this.

[00:07:01] Paul Roetzer: So that is, again, something that was key to Yann LeCun joining Facebook back in like 2013, or whenever the Facebook AI Research lab (FAIR) started. He wanted to work for an organization that adhered to open research principles, that shared everything they were creating. Now, again, if you're a regular listener paying attention to the space, there is a battle right now between open and closed.

[00:07:26] Paul Roetzer: And some of the more traditionally open labs like OpenAI are becoming very closed, while Yann LeCun is continuing to push for openness and sharing. So with this openness, it appears that the major breakthroughs here are, one, the foundational model, so the ability for people to build on top of this model.

[00:07:45] Paul Roetzer: And two, the openness of the dataset. They said specifically that the segmentation data needed to train such a model is not readily available online or elsewhere, unlike images, videos, and text, which are abundant on the internet. So if we think about how a language model like GPT-4 learns, it just goes and consumes a bunch of text data.

[00:08:05] Paul Roetzer: Think about Stable Diffusion or Midjourney: they go and consume a bunch of images. But if you think about the need to recognize objects within images, that data doesn't exist in the same way. And so they're basically building this dataset and making it available. So to me, the big takeaway: this could likely lead to an explosion in computer vision applications and companies.

[00:08:29] Paul Roetzer: So people who previously could not build this kind of stuff, because it required tons of expertise and data they didn't have, they're making it open so people can access this stuff and build on it. You can envision advertising applications, brand monitoring applications, product development.

[00:08:46] Paul Roetzer: Like, oh, there's all these ways you could probably think about using this technology. But prior to this, it was gonna be a lot harder to build stuff. And so it seems like this is a really important moment for people who want to build around objects and segmentation. And I know they say that they can't currently extract from video.

[00:09:04] Paul Roetzer: It can extract from stills of videos, but it can't extract from like the moving video itself, but the way they word it sure makes it sound like this can be applied to video over time. Like the same kind of approach will be able to be applied to extracting objects and you know, masking them within video.

[00:09:20] Paul Roetzer: So it just seems like it's gonna open a whole new realm of innovation. And any time some major paper or model comes out of Meta, it's usually worth paying attention to as sort of a prelude of other things that might come from it.

[00:09:34] Mike Kaput: Gotcha. Yeah, it seems like it's about to be a very, very exciting time in computer vision.

[00:09:42] Mike Kaput: So what is not as exciting for some of the AI companies out there, notably OpenAI? Regulatory woes. We talked last week about OpenAI's problems in Italy, namely that Italy put into effect a full ban on ChatGPT in the country. That's, unfortunately for OpenAI, just the beginning. A new article from Wired breaks down why and how Italy's ban could actually spur wider regulatory action across the European Union, and could actually call into question the overall legality of AI tools

[00:10:18] Mike Kaput: in the EU as a whole. Now, when banning ChatGPT, Italy's data regulator ended up citing several major problems they had with the tool. But fundamentally, their reasoning for the ban hinged on our good friend GDPR, the European Union's wide-ranging General Data Protection Regulation privacy law. Under GDPR,

[00:10:41] Mike Kaput: there are six ways you can legally collect data from EU citizens, of which there are 400 million. These range from things like someone giving you permission to use their data, to having data collection be part of a legal contract that you enter into with someone. So there's these six ways that you're legally allowed to get someone's data.

[00:11:02] Mike Kaput: Now here's the issue. OpenAI has collected personal data from EU citizens to train ChatGPT and its underlying models. So Wired actually interviewed several regulatory experts who said, look, there's only two ways that OpenAI could have gotten this data legally under EU laws as they are today.

[00:11:23] Mike Kaput: The first would be if they had gotten consent from each user that they trained the model on, which we know they did not. The second would be arguing they have, quote, "legitimate interests" to use each user's data in training their models. Now, the experts cited say that that second argument is going to be extremely difficult for OpenAI to prove to EU regulators.

[00:11:47] Mike Kaput: And Wired actually interviewed Italy's data regulator, who created the ban, and they said already that this type of defense is, quote, "inadequate." So this really matters outside of Italy, because all EU countries are bound by GDPR, and the article cited that data regulators in France, Germany, and Ireland have already gotten in touch with Italy's regulator to get more info on what they did and their findings.

[00:12:13] Mike Kaput: Norway's data protection director even said that if a model is built on unlawfully collected data, it raises questions about whether anyone can even legally use the tools that were built on top of those models. So this isn't just an OpenAI problem anymore. Plenty of other major AI companies likely have trained models in ways that could be problematic.

[00:12:38] Mike Kaput: So first up, Paul, I want to ask: do you expect to see additional regulatory action coming from actions like the ones happening in the

[00:12:46] Paul Roetzer: EU? I'm no expert in European law, and GDPR in particular, but it sure sounds like this is a mounting problem in Europe. And the thing we've always been guided by previously was to look at, you know, the AI Act in Europe and the efforts to build that, look at GDPR, and just assume those same levels of regulations and restrictions would be arriving in the US

[00:13:14] Paul Roetzer: you know, shortly after they are enacted in Europe. And so you start to wonder, you know, what is the fallout of this? So it does certainly appear that there is a domino effect starting to happen, where Italy comes first and then others potentially follow. So I think you have to be paying close attention to that.

[00:13:35] Paul Roetzer: Now, obviously we're talking about OpenAI here. They're not the only player; it's not the only language model company, and it's not the only application that you can use. So I immediately start thinking about, well, what's the impact on the other applications in the ecosystem that are built on top of these language models?

[00:13:50] Paul Roetzer: And again, not just, you know, GPT-4 and GPT-3 and OpenAI models. And so if you're a user of one of these AI writing tools that's using these technologies and you're in Europe, is this going to immediately affect you in the near term? So that's one thought I have: it seems like it's a domino effect, and it seems like it could go beyond just OpenAI into other areas.

[00:14:12] Paul Roetzer: You then start wondering about the same issues related to any generative AI technology in Europe, and then, again, kind of coming to the US, whether it's images or videos or anywhere you call into question the training data, basically. So I think, as we've said before, there's a lot of uncertainty about this. If you're a small business or a startup, you're probably way more willing to, you know, roll the dice and take some chances on these things and see how it plays out.

[00:14:46] Paul Roetzer: But if you're at a bigger enterprise, as a CIO, CEO, or, you know, similar, that's making recommendations to the C-suite about the integration of these technologies, I think our whole position here is you gotta at least be paying attention. And again, like we've talked about, at some point we'll probably need to start bringing in some guests.

[00:15:03] Paul Roetzer: Like you and I, riffing on some of this stuff is helpful to a point for people to surface it for them. When we start getting into these really technical issues and legal issues, it's not our domain to provide guidance to people about what you should do about this. It's our feeling that giving you a point of view that you need to be thinking about this and talking to the right people is kind of our role here.

[00:15:25] Paul Roetzer: And so I feel like, especially for our European listeners, it's right at your doorstep; you gotta be paying attention. For the US, you know, the question becomes: is this something that we should be thinking about? We know Sam Altman has been meeting in Washington with legislators, and, you know, is part of the concern here that they have concerns around the training data?

[00:15:47] Paul Roetzer: Now, there's some other thoughts I have around the US and whether or not they would follow suit. My instinct is they're not going to. Again, this is totally speculation, no inside sources. The US has every motivation in the world for US-based companies to be leading the world in the development of these technologies.

[00:16:06] Paul Roetzer: And so there's very minimal reward to the US government for slowing down this innovation. Now, they want it to be done safely, but they also want access to this level of technology. And we know just this past week, President Biden was convening his science and technology group to talk about artificial intelligence.

[00:16:26] Paul Roetzer: So this is in-the-Oval-Office stuff. They are actively looking at AI. And so I just think there's all these kind of competing interests around whether or not this is good or bad and whether or not we should be slowing this stuff down, going back to the conversation last week about the Future of Life Institute letter. And I think my general takeaway, in the US at least, is there's no real slowing this down.

[00:16:51] Paul Roetzer: I think it's full speed ahead for this stuff. I think there are legal issues around the training data, and I think they'll be resolved somehow, through court cases, through settlements, through something. It's certainly worth keeping an eye on. And again, if you're in Europe, pay real close attention, because you may not have access not only to ChatGPT, but to all other generative AI technologies

[00:17:13] Paul Roetzer: if this goes through.

[00:17:16] Mike Kaput: That's a really good point. And, you know, related to this topic, you can also see OpenAI making some moves of their own, because at the same time, they just published what they're calling, quote, "Our approach to AI safety," which is an article outlining specific steps that the company is taking, or considering taking, to make its systems safer, more aligned, and more responsibly developed.

[00:17:43] Mike Kaput: And I found it probably not a coincidence that many of the steps were directly related to the problems that Italy's regulators raised when they banned it. So we're talking about things like: they list out that they have delayed the general release of systems like GPT-4 to make sure they're safe before they're widely accessible.

[00:18:05] Mike Kaput: They're protecting children, which was specifically called out in Italy's ban, by requiring people to be either 18 or older, or 13 or older with parental approval, to use AI tools. They're also looking into, they say, options to verify users, though they don't specify what that looks like. They're saying that they are respecting privacy by not using the data that they collect to sell their services,

[00:18:29] Mike Kaput: advertise, or profile users. They also say they work to remove personal information from models where possible. And then, last but not least, they say that they are making huge strides toward improving the factual accuracy of things like ChatGPT. They say GPT-4, the underlying model, is 40% more likely to produce factual content than the previous model, GPT-3.5.

[00:18:55] Mike Kaput: So it seems pretty obvious why they are publishing these things now. What was your take on seeing their approach? Did it give you any degree of confidence that they're developing AI responsibly that you didn't have

[00:19:11] Paul Roetzer: before? No. I mean, listen, we know that they get that this is a major focus. You know, I think that you can question OpenAI's overall motivations and their, you know, path to AGI, and whether or not what they're doing is in the best interest of humanity.

[00:19:29] Paul Roetzer: But I do believe that at their core, they have a lot of really good people working at the organization who know the dangers of what they're building, and they're trying, within a competitive landscape, to build in the right security and safety precautions. Now, is it enough? I have no idea, but they're certainly

[00:19:52] Paul Roetzer: making the effort. It is interesting, the timing and the synopsis you just gave of that, because it does basically mirror the reasons why they're being banned in Italy. I could also see OpenAI's frustration, because there's a decent chance they're doing more than other organizations to protect against this.

[00:20:16] Paul Roetzer: If you're them, you say, well, what about these open models? They have no guardrails. So I think the way Sam Altman is positioned, and in the same way Greg Brockman, I think Greg's the COO, or, yeah, COO maybe, a co-founder of OpenAI, is that they're gonna do everything in their power to try and do this safely, but these

[00:20:38] Paul Roetzer: models are going to come out that don't have the guardrails in place that they have. And so I feel like, the challenge with Italy and other countries that want to take this to OpenAI, that's just the head of the snake. There's all these other models that are gonna be out there, and you're gonna be able to get more dangerous models and customize them and fine-tune them.

[00:21:04] Paul Roetzer: And I just don't know that it's gonna stop the problem. But I think, at the end of the day, it's good that they're being held to a high standard, and I think it's essential that other major technology players are held to a high standard for what these models are capable of doing.

[00:21:24] Paul Roetzer: But the reality is this isn't gonna slow down. Other models are gonna be out there. And as a society, we, we just have to bring these conversations to the forefront. And I think at an individual company level, be doing everything possible to ensure the tools you're using are gonna be used in a responsible way.

[00:21:41] Paul Roetzer: Because I do think, for a long time looking forward, it's gonna come down to individual corporations and individuals and how they choose to use these tools, because these tools aren't going away. Again, ChatGPT may be gone in Italy; that doesn't mean there aren't other ways, other models that can be used.

[00:21:59] Paul Roetzer: That makes a lot of

[00:21:59] Mike Kaput: sense. So before we dive into a couple quick rapid-fire topics, our last big topic I want to cover today is about the limitations of ChatGPT detectors. So according to a new article in the Washington Post, a company called Turnitin, which is a popular education software tool, is activating a new AI writing detection feature in its

[00:22:26] Mike Kaput: education software, which is rolling out to more than 10,000 secondary and higher education institutions. So basically, they're a plagiarism detector, and they're now including AI writing detection in the tool. Now, there's a problem here: the tool doesn't work as a reliable way to detect AI-generated content.

[00:22:49] Mike Kaput: The latest from the Post and some of their experiments shows that the tool is not reliable. It actually fails multiple times to adequately detect what is AI-written and what is human-generated. Sometimes it fails outright; other times it misses parts of the text and miscategorizes them. The latest research has shown these detectors are not reliable, and it actually does seem like the chief product officer of Turnitin appears to know this too. He was quoted as saying, "Our job is to create directionally correct information for the teacher to prompt a conversation."

[00:23:24] Mike Kaput: "I'm confident enough to put it out in the market as long as we're continuing to educate educators on how to use the data." The issue here being that the company appears to be hedging on how well its software can detect AI writing, whereas teachers, educators, and parents may be taking it at face value and assuming that it's accurate 99% of the time.

[00:23:49] Mike Kaput: Now, Paul, I know this resonated for you. You wrote a LinkedIn post about this software citing some of the problems with this approach. Could you walk us through maybe a little more of your

[00:23:59] Paul Roetzer: thinking? I think it's a terrible idea. So my basic takeaway is: we know from the latest research these things do not work.

[00:24:09] Paul Roetzer: Yes, it can, with some level of accuracy, maybe it's 50%, 70%, whatever it is, predict that something was written by AI. Is that good enough to fail a student and affect their career and their reputation and their life? No, it's absolutely not. So we're putting the power to make that decision in the hands of an untrained teacher.

[00:24:34] Paul Roetzer: So it's unproven technology in the hands of untrained teachers, where all of a sudden we're expecting teachers to become experts in large language models so they know the validity of some score that says a student may have cheated, if we're gonna consider using AI cheating now. So the problem here is twofold. One, schools haven't caught up yet to even have a point of view on whether or not students should be using AI, and if so, how.

[00:25:00] Paul Roetzer: Now, there are some individual teachers, you know, who are doing incredible work to not only embrace this technology, but to infuse it into the classroom and use it as a teaching guide and an assistant, and to show students how the technology works. Amazing, kudos to all the teachers and professors who are doing that. But that is the exception, not the rule.

[00:25:21] Paul Roetzer: So for the most part, school systems, teachers, and administrators have no idea whether or not they should be encouraging the use of these things, or how to integrate 'em into the classroom. And now we all of a sudden have a score that's telling a teacher, maybe a student cheated on this. Well, who's the arbitrator to figure out whether they did or not?

[00:25:40] Paul Roetzer: If the teacher gets to make the judgment call that this student cheated, and there's no way for that student to challenge that decision, now all of a sudden the student's at their mercy. And how does bias not fall into this? Now the teacher has to prejudge whether this person is someone who's likely to cheat, and, you know, it's just so rife with problems. And yet the tech is out there.

[00:26:02] Paul Roetzer: They released this on, what was it, April 4th, which is like, turn the feature on. So now all these teachers are getting this score, like, oh yeah, this person might have cheated, 20% chance, 80% chance. What is the teacher supposed to do with that information? So I just feel like it's the wrong time to do this.

[00:26:21] Paul Roetzer: The whole idea of "we'll educate educators": okay, well, how about we educate educators first? We have real conversations where you roll this out on a scaled basis, where you go in and explain what a language model is, and why these things make stuff up, and why the score may be completely inaccurate, and how "directionally correct" isn't sufficient to fail a student.

[00:26:42] Paul Roetzer: And I had problems with this beforehand, but it just so happened that I'm on the board of directors for Junior Achievement, and so I was teaching a class that week to students, high school students, and the first question I got was from a student who said he got a 50 out of 50 on an essay.

[00:26:59] Paul Roetzer: And then a week later, the teacher came back and told him he had a zero out of 50, because the teacher believed he had used an AI writing tool to do the essay. And the kid swore he didn't. I'm not saying the kid did or didn't, but that was it. And the kid said, well, I don't agree. And she said, there's nothing you can do.

[00:27:15] Paul Roetzer: You have a zero out of 50. So this is a real-life thing. And now teachers are gonna have these scoring systems to tell them this. So I just think it's such a failure of "just because we can doesn't mean we should." Just because we can have AI writing detection tools does not mean that they should be put in the hands of people

[00:27:35] Paul Roetzer: who make decisions around people's futures because of it. So yeah, I mean, you can tell I'm a little bit passionate about this topic. There's so many opportunities in education to infuse AI, and the fact that this is what we're having to talk about, rather than all the positives that could be happening,

[00:27:55] Paul Roetzer: it's just a shame, and I hope that this backfires, like, really badly, so that we can pull this technology back and have a real conversation about whether or not it should even be in schools right now. And again, I don't know Turnitin; I don't know anybody there. If anybody at Turnitin has listened to this podcast and has a point of view we should understand, maybe there's some missing information here, but it was a pretty thorough article from the Washington Post.

[00:28:20] Paul Roetzer: So I'm certainly open to hearing alternatives, but I just feel like there needs to be more dialogue around this kind of stuff before we just turn it loose.

[00:28:31] Mike Kaput: Is there any way here, in your opinion, to put the genie back in the bottle? I mean, is there a chance that schools will realize this is an awful idea and

[00:28:41] Mike Kaput: stop using the technology? Or is it more of a "we need to be more aware and educated moving forward" now that we're stuck with this

[00:28:49] Paul Roetzer: approach? Yeah, I don't know. I'm not sure the extent to which Turnitin is actually adopted, you know, at each individual teacher level, like if it's truly used as an authoritative source.

[00:29:00] Paul Roetzer: So I'd probably have to do a little more homework on, you know, what the utilization rates are gonna be for this feature. You may have a bunch of people just like, I don't know, I'm not gonna use it, I don't understand what it's saying. And others may just rely on it. So if they trust it as a plagiarism detection tool and it's done that confidently, then I assume whatever that software company puts out, you're gonna believe it's a truly predictive score.

[00:29:22] Paul Roetzer: And I thought I saw something to the effect of, it's accurate like 98% of the time, according to them. But, you know, in the 16 examples the Washington Post gave it, it sure was nowhere close to 98%. And I've yet to see a research paper that says any AI detection tool can get close to 98%. And again, the thing you have to keep in mind is this is a race.

[00:29:44] Paul Roetzer: Even if it came out Tuesday and it was 98% accurate, there's a chance that 30 days from now it'd be 50%, because there's ways to mask AI-created content so it doesn't come across as though AI created it. So if the game is: I'm gonna use AI, I'm a student and I know I'm gonna get in trouble

[00:30:05] Paul Roetzer: if I use the technology available to me to write my essay, I'll just find a way to mask the fact that I used it. I'll go get some other app that does an overlay of what I create to mask it so that it can't be detected by Turnitin's software. Do we really want to play that game? Encourage kids to find more creative ways to cheat?

[00:30:22] Paul Roetzer: That's basically what this is doing, because this is never gonna be foolproof. You're not gonna get a hundred percent accuracy. So why even play this game? It just seems like the wrong thing we should be doing. We should be focusing on the good this can do as a teaching aid and a personalized teaching assistant for students, and ways to enrich education with it, not teaching 'em to game the system with more advanced technology.

[00:30:48] Mike Kaput: Gotcha. So diving into a couple rapid fire topics here, first up, we have some great Twitter commentary from Yann LeCun, who we talk about quite often on this

[00:31:00] Paul Roetzer: podcast. The chief AI scientist we mentioned earlier. He's our guy, our Meta guy.

[00:31:03] Mike Kaput: Yes. He is the chief AI scientist at Meta. For people who don't know the history of AI, he is also essentially one of the godfathers of modern AI, based on his research over the last several decades.

[00:31:17] Mike Kaput: And, related to what we're about to talk about, we should also note that he has pretty vocally come out against some of the rhetoric around the open letter calling for a pause on AI research, and against some of the more extreme views around how unsafe AI may be. He often tries to talk about artificial intelligence from much more of an engineering perspective, rather than some of the other commentary and color out there.

[00:31:48] Mike Kaput: And he was responding on Twitter to an NPR story that talked about some of the nuances of chatbots and AI being used in the medical field. And he offered us some really, really good reminders about large language models. He said, repeat after me: one, current autoregressive LLMs (large language models) are very useful as writing aids.

[00:32:11] Mike Kaput: Yes, even for medical reports, which are referenced in the NPR article. Two, they are not reliable as factual information sources. And three, writing assistance is like driving assistance: your hands must remain on the keyboard/wheel at all times. Now, Paul, we just had our AI for Writers Summit, where we talked about a lot of the ins and outs of what to think about and consider when using these tools.

[00:32:37] Mike Kaput: What did you think of Yann's reminder and comments here?

[00:32:41] Paul Roetzer: Yeah, it just echoed exactly what we talked about at the Writers Summit. These things are writing assistants, not replacements. And I think the analogy of driving assistance is a good one. Even if you have Autopilot on a Tesla, you still gotta keep your hands on the wheel.

[00:32:59] Paul Roetzer: So there's this whole idea that we are not trying to intelligently automate away writers, and these language models are not capable of intelligently automating away writing jobs. You have to have the human in the loop. They make stuff up, they hallucinate, they have no idea what facts are. Even at the bottom of ChatGPT, it says it may get people, places, and facts wrong, because it doesn't know anything.

[00:33:23] Paul Roetzer: It's making predictions about words. There's so much misunderstanding of what these things are capable of, and I just thought that was a very concise three points: it is a writing aid, it is not reliable on facts, and a human has to stay in the loop. Those three things are really, really good for reminding people where we're at with language models today, and, in Yann's opinion, where we will remain with language models.

[00:33:50] Paul Roetzer: We'll get into this another time, but he doesn't believe language models are the path to true general intelligence. That will be a deeper dive another time, not a rapid-fire topic.

[00:34:01] Mike Kaput: So good. Just good to remind folks too that there isn't one answer or perspective in AI among even the most expert people in the field.

[00:34:12] Mike Kaput: I mean, as we hear all this commentary about banning AI, banning ChatGPT, the open letter: just because Elon Musk signs the letter doesn't mean every AI researcher actually agrees. People like Yann LeCun take a pretty serious opposing view to a lot of these things. So there's conversation, there's debate, and there's disagreement.

[00:34:32] Paul Roetzer: Which is good, and that's like we always say on the show. Part of our role is to bring you the information so you can form your own perspective. Like our point of view on stuff doesn't have to be your point of view, and oftentimes shouldn't be your point of view. It's surfacing really important topics for you to go do your own work on.

[00:34:50] Paul Roetzer: And then you need to find a collection of opinions. It's like anything else in life: you have to find a collection of perspectives, and then you have to synthesize those perspectives and figure out what your point of view is yourself. Like, Yoshua Bengio was one of the leading co-signers on that letter.

[00:35:06] Paul Roetzer: Well, Yoshua and Yann won the Turing Award together with Geoff Hinton. So yes, even the most accomplished AI researchers in human history can disagree on the path forward, and they often do. And that's healthy. I actually like the fact that we don't all have this echo chamber of, this is exactly how it needs to happen.

[00:35:27] Paul Roetzer: Absolutely.

[00:35:28] Mike Kaput: So last but not least, we want to give a nod to the 2023 AI Index report. The AI Index is something that's put out regularly by the Stanford Institute for Human-Centered Artificial Intelligence, and basically they work with a number of different organizations to track the progress of AI year over year across

[00:35:53] Mike Kaput: many, many different areas of development. So we won't, you know, go through every single finding here. But they found things like: industry is actually starting to take over releasing the most advanced machine learning models. In the past, up until about 2014, those were actually being released by academia. So there's definitely a major shift in the market here.

[00:36:13] Mike Kaput: So there's definitely a major shift in the market here. They've also. Cited things like year over year improvement on many AI benchmarks actually continues to be marginal. They also found that the number of AI incidents and controversies has increased 26 fold since 2012. So there's tons of great examples, data and anecdotes in the report.

[00:36:38] Mike Kaput: Definitely check it out. We'll link to it in the show notes. Paul, did you have any initial feelings about the report and some of the things you

[00:36:46] Paul Roetzer: found? No, I mean, we love the organization. We feature them in our Marketing Artificial Intelligence book, so it's a great organization to follow on Twitter, and their resources are great to follow as well.

[00:36:56] Paul Roetzer: They do an amazing, very thorough job. As someone who's kind of followed the space and read the previous reports, there was nothing that jumped out at me where I was like, oh wow, I didn't expect that. A lot of this is kind of recapping. But again, if you're newer to this and you want to understand kind of the context of what's going on and some of the bigger picture, even if you just read the top 10 takeaways at the front of the report,

[00:37:19] Paul Roetzer: it'll give you a nice summary of, in a way, the state of AI. So yeah, great organization, always a great report. I didn't personally take away a ton of aha moments, but that doesn't mean it's not a really valuable thing to, at minimum, skim through. And if you find interesting areas, like the education angle or the job impact angle, whatever, dive into it and read that section.

[00:37:42] Paul Roetzer: There's tons of information and charts in it. Awesome.

[00:37:46] Mike Kaput: Well, Paul, as always, thank you for the insight and for breaking down the world of AI this week for us and for our listeners. Appreciate your time and your insights.

[00:37:57] Paul Roetzer: Thank you, everybody. We'll talk to you next week.

[00:37:59] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.

[00:38:21] Paul Roetzer: Until next time, stay curious and explore AI.
