26 Min Read

[The Marketing AI Show: Episode 10] Senior AI Editor Talks Responsible AI


Wondering how to get started with AI? Take our on-demand Piloting AI for Marketers Series.


The Marketing AI Show—the podcast that helps businesses grow smarter by making artificial intelligence approachable and actionable—is BACK with another episode.

You can listen now on your favorite podcast app, or keep reading for more on what to expect in this episode.

Episode 10: Karen Hao, MIT Technology Review, on Responsible AI: Ethics, Innovation, and Lessons Learned from Big Tech 

In this week's episode, show host Paul Roetzer sits down with Karen Hao, senior AI editor, MIT Technology Review. This special episode took place during MAICON 2021 when Paul and Karen sat down for a fireside chat to discuss responsible AI.

In this episode, Paul and Karen explore the ethical development and application of AI.

Drawing on her expansive research and writing, Hao offers:

  • An inside look at the policies and practices of major tech companies.
  • Lessons learned that you can use to ensure your company’s AI initiatives put people over profits. 
  • A look into what's next and what's needed for responsible AI. 

Timestamps

[00:05:34] Details on Karen being a Knight Science Journalism Fellow

[00:14:16] A lesson in Google’s history with AI and ethics

[00:31:04] A discussion of Karen’s MIT Technology Review article about Facebook

[00:36:56] Rapid-fire Q&A

 


Watch the Video


Read the Interview Transcript

Disclaimer: This transcript was generated by AI, thanks to Descript.

[00:00:00] Paul Roetzer: So I am joined today by Karen Hao, senior AI editor of MIT Technology Review. Karen, thank you for being with us. It's great to see you again.

[00:00:14] Karen Hao: Thank you so much for having me here, Paul.

[00:00:15] Paul Roetzer: Yeah. When I reached out to you... because for those who weren't with us in 2019, Karen was one of our keynote speakers at the first Marketing AI Conference, presenting on “What is AI?”

And not only did it have a huge impact on me, I loved the talk, but we heard so much incredible feedback from our attendees. I had tried to get Karen to come back in 2020 when we were going to do the conference, and that conference didn't end up happening. But then I reached back out in 2021 and said, “any chance we could do this again?”

And she's just written so many incredible articles, especially in the last year, about responsible AI, AI for good, ethical AI. And it's such an important topic for us and for our audiences. That's what we wanted to bring Karen back for today: to talk about that. So, Karen, again, thank you so much for being back and for all the contributions you make to our community.

[00:01:03] Karen Hao: Yeah. Thank you so much for that really kind introduction.

[00:01:08] Paul Roetzer: So I wanted to get started with--I've heard your background--but how did you end up in AI? How did you end up writing about AI instead of building AI? So let's start with, how did you get into AI?

[00:01:26] Karen Hao: Well, to explain that part, I actually have to answer both questions together.

So I had a bit of an interesting journey into journalism, because I actually started as an engineer. I studied mechanical engineering in undergrad at MIT, and at the time, the reason I was drawn to mechanical engineering was the idea of building; I was very fascinated by the idea of using technology as a driver for social change.

And MIT has an incredible mechanical engineering program that's really focused on user-centered product design: how do you use products to change people's minds, change people's behaviors? I was intrigued by that, and I always imagined following in the footsteps of Steve Jobs or someone like that who really understood the user and was able to invent all these things that then completely transformed culture, transformed the way that we consume information, music, all of these things.

When I graduated, I ended up working at a software startup first, so I nudged away from the mechanical engineering hardware side of things and toward software. And I was really enamored at the time with the startup, because it had spun out of Google X and was focused on building architectural software, or urban development software, that would help city governments essentially optimize their city design and building design to be more sustainable and more resource efficient.

And I was very interested in sustainability and climate change. And about nine months into this experience where I had thought that I'd found my dream job working at a very mission driven technology company that was using technology for social change, the startup devolved because it wasn't actually making money.

And the private investors got very unhappy. They fired the CEO, replaced him with another CEO who was supposed to be very business oriented and help us turn a profit. And he fundamentally didn't understand what we were doing. Completely scrapped the product, started pivoting all over the place. And it made me think a lot about just incentives in Silicon Valley, like if we are constantly obsessing over making quarterly returns, then we end up reducing or constraining our ability to actually pursue the really long time scale problems like climate change, like poverty alleviation, all of these really hairy issues. And so then that's when I jumped into journalism and I started thinking about writing.

Maybe I could use writing as a tool for social change instead, because the journalism industry, I thought, might be a little bit better about being mission-driven and not as profit-driven. It was through going into journalism that I then found my way into AI. I became a general tech reporter, and I was interested in tech and society and thinking about, “how do we actually hold technology accountable and make sure that we develop it for social change?” I had experienced the flaws that Silicon Valley had, and I wanted to use a platform to try and nudge the Valley in specific directions, to incentivize better products, better technology. From there I landed in AI, because AI is just so expansive, and it's hard to talk about technology today without talking about AI.

Once I found my home there, I became very, very obsessed with it because it's the perfect microcosm of all of the tech and society issues that I wanted to talk about and explore.

[00:05:25] Paul Roetzer: And I don't think you and I have spoken since you got appointed to the Knight Science Journalism Program Fellowship class.

That just happened recently.

[00:05:34] Karen Hao: That just happened. So I proposed this project; it's going to be a story series on the idea that AI development today, if we look at it through a global lens, is really not successfully serving everyone. There are a lot of wealthier countries and wealthier companies that are extracting a lot of value and a lot of profit from the technology, but at the expense of vulnerable communities and vulnerable countries. The economic benefits are completely being distorted and concentrated among countries and companies that already have a lot of power and money. The fellowship was really excited about it. I'm super excited about it; I've been waiting for like two years at this point to write this story series. So that’s what I'm going to be focused on for the next year.

[00:06:29] Paul Roetzer: Well, congratulations. That's fantastic. I actually wasn't familiar with the fellowship, so for anybody else watching who isn't familiar, it's really cool. And what I saw on the site was that you were going to investigate the global supply chain and how it often concentrates power in the hands of wealthy people, companies, and nations, while leaving the less fortunate behind. So kind of as you outlined. And I think that's really important background, so people understand where you're coming from. You have intimate knowledge of the technology, how it works, and what it takes to build it.

And so when you're doing research and writing, you can look critically, not just at the PR messages that these big tech companies want you to write about, the messages they push out hoping you'll sort of trumpet the things they want out in the market. You can actually step back and look critically and say, but is this good?

Like, is this the right path for the technology to go down? And that's why I love your writing. I think you are optimistic about the technology, and that comes through in your writing, that there is possibility here. There are things it can help do for good, as you illuminated, like climate change and poverty and hunger. But it can also go very wrong. And that’s what I wanted to focus on today in this conversation about responsible AI. You and I met ahead of time and talked about how we have a mix of audiences. We have people at this conference, in our community, who are building AI tech; maybe they work at venture-funded tech companies, or maybe some of them are actually working at the big AI companies.

We have marketers who use AI tech. And maybe don't realize the bias that can be built in or the things that can go wrong with it. And then we have business leaders who might not get into the weeds and learn all the ins and outs of AI. But they're trying to figure out how to use this new technology to advance their business, to drive growth, to grow smarter.

So a lot of them are really just trying to figure this stuff out, and they don't know what they don't know, these unknowns around ethics and responsibility. So what you and I had talked about is that our job is to make them care. Even if you're just running your first pilot project to help optimize digital ad spend, or figuring out how to send emails more efficiently, now is when you actually want to be thinking about how your organization is going to use AI in a responsible way. And so I think a good place to start is: how do you define responsible AI? Because I see AI for good, responsible AI, ethical AI. We see different terms. What do you think of when you think of responsible AI?

[00:09:06] Karen Hao: I think the core of responsible AI is really about mitigating harm and maximizing benefit.

People may have seen terms like AI bias as a synonym used for responsible AI. That's sort of one example of a harm that you could identify with AI technologies, the AI could perpetuate discrimination, but there are many other types of harm that an AI system can potentially perpetuate.

Maybe it's infringing on your privacy, maybe it's miscategorizing your identity, something like that. So when thinking about how to actually build responsible AI, you first have to be very clear-eyed about what are you using AI for? What goal are you trying to achieve and what are the ways that it could go wrong?

How could it end up hurting people? And then you have to think, “okay, what ways can I redesign the system? What guardrails can I put in place to make sure that it doesn't do that?” And then the next step is: okay, now that we've eliminated or at least minimized the amount of harm it can do, how do we maximize the amount of benefit the system can bring? So just to make this a little bit more concrete, if we're talking about something like an AI healthcare system, there are a lot of tools now where AI is really good at detecting a cancer lesion in a particular medical scan, and one way it could potentially harm patients is through privacy infringement.

In order to train this AI model, you have to amass a lot of patient medical records, and what if a hacker then hacks that system and gets all this private patient information? So that's one harm you have to think about and minimize and then another harm is what if this AI system is discriminatory because for some reason you were only able to get scans from white patients, but you weren't able to get scans from black patients?

And then it only performs well on white patients and it's going to end up exacerbating the existing health disparities in our healthcare system. So that's another thing that you have to think about, and the ways that you go about mitigating them can be various solutions.

Sometimes it has nothing to do with like the AI system itself. Sometimes it's about cybersecurity, making sure that you have good data infrastructure to protect and prevent hackers from accessing the data, sometimes it is about redesigning the algorithm, making sure that you rebalance the data that you have so that it's not discriminatory, and then once we think about maximizing the benefit for the system, it's like, okay, now let's make sure that the doctors are using the system correctly.

That's the best way that we'll get benefit out of this system. So then that involves educational programs, training the doctors, training the nurses, communicating to patients so that they understand and feel empowered by the fact that an AI system is evaluating them and giving them a diagnosis. It encompasses like the entire pipeline of development and deployment of AI technology.
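To make the data-rebalancing idea Karen describes a bit more concrete, here is a minimal sketch in Python. It assumes a hypothetical pandas DataFrame with made-up "group" and "label" columns and uses simple upsampling via scikit-learn; it illustrates the general technique, not how any particular medical or marketing system is actually built.

```python
# Minimal sketch: rebalance training data across a demographic group before
# fitting a model, so the model is not trained almost entirely on the majority
# group. Column names and data are hypothetical.

import pandas as pd
from sklearn.utils import resample

def rebalance_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Upsample under-represented groups so every group appears equally often."""
    counts = df[group_col].value_counts()
    target = counts.max()
    balanced_parts = []
    for group_value, count in counts.items():
        part = df[df[group_col] == group_value]
        if count < target:
            # Sample with replacement until this group matches the largest one.
            part = resample(part, replace=True, n_samples=target, random_state=0)
        balanced_parts.append(part)
    # Concatenate and shuffle the rebalanced dataset.
    return pd.concat(balanced_parts).sample(frac=1.0, random_state=0)

# Toy, heavily skewed dataset: 90 records from group A, 10 from group B.
scans = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "label": [0, 1] * 45 + [0, 1] * 5,
})
balanced = rebalance_by_group(scans)
print(balanced["group"].value_counts())  # A and B are now equally represented
```

Upsampling is only one option; class weights or collecting more representative data in the first place are equally valid, and as Karen notes, the right fix depends on the system.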

[00:12:30] Paul Roetzer: And I think from a marketing standpoint, and you and I have touched on this before, it's this idea that even if you're not building it, you're going to be buying all these tools that someone else built based on some data set, because for AI to do what it does, it needs data. And so there is this motivation, as a marketer, as a brand, to capture as much data as possible.

I won't use a name, but a major telecom carrier just got hacked. And they were, for whatever reason, still collecting social security numbers tied to their customer accounts. And it was like 40 some million records, I think. And now all of a sudden, all this personal data that the marketers were using in this organization is out in the world and it has cell phone numbers, social security numbers, and who knows what other data.

And that's probably just the proprietary data, the first-party data they were capturing; then they were probably buying third-party data to further enrich it. So as a marketer, you have to understand: okay, AI doesn't happen without data. In its most basic sense, to do responsible AI in marketing and in business, you need to understand where the data's coming from and how the AI learns.

And so if you're going and buying an app tool or a content tool or whatever it is, you need to have the people in the room who know to ask those kinds of questions and understand what the vendor tells you about where they're getting the data.

[00:13:51] Karen Hao: Yeah, absolutely. I think that's such an important aspect of responsible AI. It's like: who is the data coming from? Have we actually looked at it to see if it makes sense? Did the person this data came from consent? Do they know how the data is being used? How are we storing the data and securing the data?

All of those things are critical to responsible AI.

[00:14:16] Paul Roetzer: So we talked a little bit about how it can harm people: it can affect employees, it can affect consumers, your customers, your investors, your stakeholders. So if you make a mistake with this stuff, you can be liable. And a lot of times the legal precedent hasn't been set yet, but you could potentially be opening yourself up to something like the cybersecurity example you gave.

Everybody is potentially a candidate for cybersecurity risks; it's probably only a matter of time for any organization. So think about the data you have: is that data being handled responsibly, or are you not thinking about the harm it could end up doing? For a lot of organizations, we look to these big tech companies as an example.

I know Google has an AI ethics standard. I'm sure Facebook has one. Amazon probably has one. I know Adobe has one. So everybody at least has standards of ethics. You cover these big companies. What are they trying to do? Google obviously has massive amounts of data, Facebook has massive amounts of data, and all marketers use those two tools.

We use those two companies to target ads, to do the tech we do, to run the campaigns we run. What have they tried to do to approach AI from a responsible standpoint? We'll get into whether it's working or not in a moment, but what are they trying to do?

[00:15:35] Karen Hao: Okay. So let's start with Google. So Google is an interesting, interesting case study.

If you just think about what Google's mission is, they're trying to, I forget exactly what the tagline is, but organize the world's information and essentially help retrieve relevant information for users. So when you're using Search, when you're using Gmail, when you're using their advertising tools, all of it is about trying to deliver the most relevant information to you in an efficient way. And so AI is actually a sort of shoo-in for that mission, in that AI is very, very good at processing massive amounts of information and selecting bits of it based on different signals of what might be considered relevant to the user. So from a baseline, fundamental level, it doesn't necessarily appear bad at all that Google is using AI.

It makes sense, it aligns with their mission, and it ideally makes the product a lot better and helps users. Where could this system go wrong? If you think about information retrieval and the fact that basically all of what we know, our knowledge, is filtered through Google, then you start to be concerned: what if Google is not actually retrieving accurate information for you, or what if Google is only retrieving a specific subset of information that's not giving you the whole picture about a particular topic, which leads you to have a very skewed understanding or very skewed perceptions of something?

Well, that's when you start worrying about how AI might be involved in that. If you don't design your AI systems well, that can absolutely happen. There have been known cases where Google Search will retrieve misinformation for people, because if you type in “is climate change fake?” you will get results that sort of reinforce your biases versus when you type in “is climate change real?”

There have been examples of the fact that Google’s search algorithm will associate negative terms with searches about black women. Like back in the day, if you search black women it would mostly show porn. Whereas if you search white women, it might show fashion. And those were associations that the AI was making and it was then retrieving information in these very, very discriminatory ways.

And so when Google started building their ethical AI team, that team was tasked with thinking about the different ways our technologies can go wrong as we're constantly trying to advance the way that AI helps us retrieve information.

They set up a world-class research team to just think about these problems and conduct studies on them. And the team had a very broad agenda. They weren't necessarily told, you have to do this specific study or focus on this specific product. It was literally: any AI technology that we ever deploy, you are allowed to scrutinize it and tell us where things might be going wrong and whether we need to change them. And I guess the punchline is that this team started doing that. And the moment they started criticizing certain very, very profitable aspects of Google's technology, the leaders of the team were fired.

[00:19:21] Paul Roetzer: Yeah. And so you're referring to Dr. Timnit Gebru, if I'm pronouncing that correctly. And the interesting thing for marketers is that it was related to language models, if I'm not mistaken, Karen. So her team with Dr. Mitchell, the two leads of the ethical AI team, had identified potential concerns related to these large language models, which Karen has written about extensively. In my opening talk we talked about GPT-3 and the ability for machines to start generating language at scale. Well, Google has made some insane advancements in that space, but to do it, they take in lots of information, they process tons of data, and they consume massive amounts of compute power. So there are ramifications to you being able to finish a sentence with Smart Compose, and eventually finish entire paragraphs in Google Docs, and maybe eventually write the first draft of what you're writing.

So this all sounds amazing as a marketer, but the reality is to Karen's point, there are ramifications to this. And so Dr. Gebru’s team brought those to light. And what happened, Karen? You kind of hit the punchline, but there were some other things that led into it, but in essence, that led to like, well, maybe they're not really behind the ethical AI they claim to be behind.

[00:20:44] Karen Hao: Yeah. So language models, I guess, to give more context: Google uses language models extensively. They show up in Smart Compose, as you mentioned, and they also underpin search. The reason why you can enter a couple of keywords and then get some very relevant information is because these language models are processing all of the webpages online, processing your keyword searches, and then essentially matching which web pages should be ranked higher as relevant to your query. Google search hasn't always used language models, but what they realized was that by using the latest iteration of language model technology, they were able to increase the relevancy of their search significantly across a broad swath of search results, which means that advertising revenue increases, because then ads can be more specifically targeted to users.
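As an aside, the relevance-ranking step Karen describes can be sketched in toy form. The snippet below uses TF-IDF and cosine similarity as a stand-in for the large language models Google actually uses; the pages and the query are invented purely for illustration.

```python
# Toy sketch of ranking pages by relevance to a query. TF-IDF plus cosine
# similarity stands in for a real language model; the data is made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = [
    "How to train for a marathon in 12 weeks",
    "Marathon nutrition: what to eat before race day",
    "A beginner's guide to houseplant care",
]
query = "marathon training plan"

vectorizer = TfidfVectorizer()
page_vectors = vectorizer.fit_transform(pages)   # index the candidate pages
query_vector = vectorizer.transform([query])     # encode the query the same way

scores = cosine_similarity(query_vector, page_vectors)[0]
ranked = sorted(zip(scores, pages), reverse=True)  # most relevant first
for score, page in ranked:
    print(f"{score:.2f}  {page}")
```

A production search system layers many more signals on top, but the core idea, score every candidate page against the query and rank by that score, is the same.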

And what happened was Dr. Timnit Gebru and Dr. Margaret Mitchell were looking at this technology, which is honestly relatively nascent. And they started looking at the research in the field, and essentially they didn't even do new research. They kind of just summarized the existing research, saying this is a nascent technology.

Here are some of the risks of this technology we really should be thinking about before we roll it out to affect billions of users around the world and the information retrieval process on the search engine. And one of the things that they mentioned was that these models have to be trained on so much text data that it is not even humanly possible anymore to understand what is in that text data.

So they were capturing things like really abusive language, profanity, really racist and sexist language, which then has downstream effects. But what are those downstream effects? We don't really know.

We just know that somewhere in these vast billions and billions of sentences that we've used to train these models, all of this garbage is being folded in, and at some point it could hurt a user by either returning really racist search results or, when you're using Smart Compose, suggesting you complete a sentence with really abusive language.

And so what's interesting is they literally just said, we should think about this. They didn't really say anything else. They didn't say we should pull the product. They didn't say that Google needs to shut down large parts of their business, but that was enough for Google to suddenly be up in arms because search is a huge cash cow.

Language models are powering many other things that are cash cows for Google, and one thing led to another: the paper that they were trying to release, just to notify the public and the field that this is something people should be studying more closely, ended up being something Google tried to censor.

And then as things escalated, both of the co-leads Dr. Gebru and Dr. Mitchell got fired and the team has sort of disintegrated from there.

[00:24:19] Paul Roetzer: And this was winter 2020, if I'm not mistaken, into January, February 2021. So this is recent. And again, this isn't meant to be trashing on Google.

This is hard. What's happening is these advancements, as you're saying, are nascent. Most of the abilities in language and vision, and Cade Metz is going to talk about this in his closing keynote as well, basically stem from 2012 and the realization that deep learning was actually possible. From that came this race for language generation and understanding, and vision, and all these things. Google goes and buys DeepMind, and they buy Geoff Hinton's company.

And you know, all this stuff just happened in the last nine years. And so they're racing forward, advancing the tech and putting teams in place to ask the hard questions, but then to your point, sometimes those hard questions…it's just maybe better that you don't ask that hard question. So they're not alone.

Why don't we take a moment and talk about Facebook. You had an article last year or was it earlier this year? The disinformation one? It was earlier this year. It just took off. Like I saw it everywhere in my feed, so I assume it went viral, from an MIT Tech Review standpoint. Maybe share with everybody just the premise of the article you wrote about Facebook. And I know you've written other ones since, but what was the crux of what happened at Facebook as they were also supposedly trying to build responsible AI?

[00:25:45] Karen Hao: Yeah. So to kind of start where we started with Google, let's also consider what Facebook's mission is, which is to connect everyone around the world.

And very, very early on, they started incorporating AI into everything in the platform, like anything you can possibly imagine that you can do with Facebook. It's not just advertising; the way that the newsfeed is ranked, the reason why you see certain dog pictures first or your friend's posts first, that's all AI.

Even when you're messaging in Messenger, text data is being hoovered up to train Facebook's AI systems. On Instagram when you're tagging people in photos, on Facebook when you're tagging people in photos, all of that is AI.

[00:26:31] Paul Roetzer: Or not tagging them and they recognize them anyway.

So probably four years ago when that started happening and people were like, how does it know it's Karen in the photo? It’s AI. It's everywhere.

[00:26:44] Karen Hao: Like everything you can possibly think of. There's probably not a single feature on Facebook now that doesn't have some kind of AI somewhere, hanging out in the background doing some things.

The reason why they started incorporating it is because they thought, well, if we can increase the amount of engagement on the platform, then people will spend more time on the platform. They'll connect with more people, they'll see more groups, they'll like more pages, and we will successfully connect even more people around the world. I guess that was the philosophy.

We sort of know a little bit of what started happening. At the time, in 2016, when the Trump administration came into office, the techlash kind of started, where people started questioning: wait a minute, did Facebook somehow enable this new administration to come into power?

What role did we play sort of in helping elect this person? There were a lot of questions around that, and people started wondering, wait a minute, like when we see content from our friends and family on Facebook, it seems like everyone is sort of in their own filter bubble. And everyone is sort of seeing different information.

And some people are seeing misinformation, and some people are seeing hate speech and other really abusive content. So what effect is that having on our society? As people started thinking about these things, it went back to: “Facebook has been using this AI to maximize engagement on the platform, but it seems like we have run into this issue where just as the AI maximizes engagement, it also ends up amplifying divisive content, because that's how you maximize engagement.”

So there were a lot of researchers externally that started calling for Facebook to think more deeply about this issue and Facebook decided in 2018, OK we're going to start a responsible AI team and also an integrity team, which is their name for trying to reduce badness on the platform.

And they started doing some research into, are we actually amplifying this information? Are we actually polarizing our users with these AI algorithms that we're using? And the short answer is yes. They did these studies and they determined that this was indeed happening, but the issue was, they didn't actually empower the responsible AI team to then do anything about it.

Instead they thought: if we got rid of these polarizing effects, we're going to have to get people to share less divisive content. And once people start sharing less divisive content, the platform is just not going to be as engaging anymore.

Instead, they asked the responsible AI team to pivot to focusing on things that did not gouge their bottom line. They asked them to focus on things like fairness. Like “when we deploy content moderation algorithms, does it equally impact conservative users as it impacts liberal users?”

Or “when we deploy our photo tagging algorithm, does it equally recognize white faces as it does black faces?” Those are all such important questions, but it just totally ignored this like huge, fundamental problem that was kind of lurking underneath that they had already confirmed and verified themselves internally.

And it’s very, very similar to the Google situation: the few employees that were really actively trying to ask these tough questions and were revealing ugly answers were not successfully getting leadership to actually do anything about it. And eventually they were either pushed out of the company or left voluntarily.

[00:31:04] Paul Roetzer: So the article, and I mean, it's one of the best headlines and teasers I've seen! The article is “How Facebook Got Addicted to Spreading Misinformation.” And the teaser on the article is: the company's AI algorithms gave it an insatiable habit for lies and hate speech, and now the man who built them can't fix the problem.

It was an awesome read. And I know you and I both read An Ugly Truth, a book that recently came out from a couple of writers from The New York Times, where they go very, very deep into this topic. And again, Facebook is so critical to what we do as marketers; there's a very good chance that everyone in this session spends money on Facebook in some way to target users, whether it's lookalike ads or, you know, demographic and geographic targeting or whatever it is.

You're likely using AI to market your company. And you're most definitely probably using it in your personal life as well. Maybe your kids use it and it's like, it is critical that we understand how this technology works so you understand the impact it has. So I want to kind of like step back and say like, what can we learn from this?

So as marketers, as business leaders, what can we take away? What to you is the one big takeaway when we look at Google and Facebook? And again, I'm not trying to pick on Google and Facebook; they're very high-profile cases of this, but they are not alone. The other big tech companies have problems too. I think Apple and Microsoft probably execute what they try to do a little better. They have their own flaws, but they seem like maybe they're more intentional about the application of AI in an ethical way.

But again, it's kind of universal. We could debate that probably. I’m a business leader, I'm a marketer and I'm trying to figure out what do I take away from this session?

What is like that one thing that matters to me? What do you think that people can learn from these missteps when they start to figure out responsible AI within their organization?

[00:33:13] Karen Hao: Well, I think from both these examples, you kind of see that originally these companies started with very reasonable objectives for incorporating AI into their products, and initially there was no possible way to really conceive of how badly it could go wrong. And I think the same applies for marketers and business leaders today who are thinking about incorporating AI into their products for the first time.

You should be thinking ahead: what am I trying to achieve, how would AI help enhance that, and what are some of the ways this could go wrong? But then continue revisiting that question as your project evolves and as you incorporate AI more and more into whatever you're doing. What happened with Facebook and Google is that they were so early on in the AI revolution that there hadn't yet been a muscle developed to think about responsible AI at all.

So in a way, people that are starting now have a little bit of an advantage, in that they've already seen ways that things can go wrong, and they are primed to start thinking about this early. But you do need to continue revisiting this every time. And it's not just about thinking about these things.

You also need to incorporate this into your key performance indicators. You need to incorporate it into the expectations of your employees. And when employees think about AI ethics, they should be rewarded, they should be praised, they should be promoted when they're asking tough questions that might have ugly answers.

You should be happy about that. You shouldn't think, “well, I'm going to ignore that and continue barreling forward with this AI project.” You should think about how we can modify the project, or whether we need to actually pull the plug on it. Even if it might, in the short term, have an impact on our bottom line, in the long term it'll probably be a good thing, because you won't end up in a PR scandal.

You won't end up having brand damage, and people, consumers and stakeholders, will trust you more. So I think those are all things that people should start thinking about: establishing processes to think about these issues, processes that have real teeth in them.

[00:35:44] Paul Roetzer: And, you know, the thing I've heard you say other times is this idea of people over profits. As simple as it sounds, having that north star, and I think these are your words, it must benefit humans; whatever we're going to do has to benefit the humans that matter, the people. Now, that's where it becomes hard.

Like if you're working at a VC-funded startup and they just got 20 million in funding to race forward with AI tech, and you're sitting in this, thinking, well, I could identify five ways we might be walking a gray line right now with this. What do you do? Do you have the voice? It's hard to step forward and do what Dr. Gebru and Dr. Mitchell did, to know that you might be putting your career on the line. But that's what it's going to take: this movement toward requiring responsible AI from consumers, from investors, from business leaders.

I'm so happy we got to do this session and that all the research you're doing is helping advance this conversation because it's so important to what we're doing as an industry.

[00:36:53] Karen Hao: Thank you so much for letting me talk about this, Paul.

[00:36:56] Paul Roetzer: All right. So we've got a few minutes left and I want to have a little fun and end with some rapid fire questions. Do you use any AI tools in your research and writing?

[00:37:09] Karen Hao: I do. I use Google Search, and actually I also use the Twitter algorithm, which ranks all of the people I follow and their tweets in my newsfeed. I kind of rely on it to surface relevant tweets to me within the AI conversation.

[00:37:29] Paul Roetzer: I love both of those, because in my talk, I talked about how AI is just seamlessly integrated into your life. And you start to take things for granted and that's where the marketing industry is going.

It's going to be like Facebook where everything you touch within a CRM platform has AI in some way. We're not there as an industry right now, but Google search absolutely is an example of AI and Twitter's algorithm and YouTube's algorithm. And Facebook's. All of those don't exist without AI. So that's cool.

All right. How do you demystify AI when you're explaining it to non-technical people?

[00:38:02] Karen Hao: I try to just tell them that it's fancy statistics or like fancy math. Like you just have a giant pile of data and you need to figure out what are the relationships in this data. So you have some fancy mathematical machine that will comb through the data, find the patterns, and then use those patterns to make decisions in the future.
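Karen's “fancy statistics” framing maps almost directly onto code. Here is a minimal sketch, assuming only scikit-learn and some made-up numbers: the model combs through example data, finds the pattern, and then applies that pattern to a new input.

```python
# Minimal illustration of "find the patterns, then use them to make decisions":
# fit a simple model on example data, then apply the learned pattern to new data.
# The numbers are invented purely for illustration.

from sklearn.linear_model import LinearRegression

# A (tiny) pile of data: ad spend in dollars -> leads generated.
ad_spend = [[100], [200], [300], [400], [500]]
leads = [12, 24, 35, 49, 61]

model = LinearRegression()
model.fit(ad_spend, leads)      # comb through the data and find the pattern

print(model.predict([[600]]))   # use the pattern to make a decision about new input
```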

[00:38:23] Paul Roetzer: I like it. We may borrow it sometimes. OK, favorite example of AI in your daily life that most consumers take for granted or don't even realize is made possible by AI?

[00:38:34] Karen Hao: My Netflix recommendation algorithm.

[00:38:38] Paul Roetzer: Is there a show that you got recommended by that algorithm that you wouldn't have otherwise watched, that you were like “that worked?”

Mine are space ones. I get recommendations for documentaries and space, and so I find all these cool things that I never would have found unless I went and searched for that specific topic. AI, documentaries, and space are the ones that just keep surfacing in my recommendations, and they keep sucking me in.

[00:39:04] Karen Hao: That’s such a good question.

I was about to say the last movie that I watched, but then I realized it was my Amazon Prime recommendation algorithm. But it was an Academy Award-winning movie. I watch a lot of award-winning movies, so I think I ended up getting recommended more and more award-winning movies. And I watched “One Night in Miami,” a historical drama based on a true story with Muhammad Ali, Malcolm X, and two other men whose names I can't remember right now, who just happened to intersect one night in Miami. It was a cool, fictional representation of a historical moment.

[00:39:52] Paul Roetzer: So Amazon Prime and Netflix are both getting it; Disney+, all of them... every media platform is doing it.

What worries you most about AI and how it could go wrong?

[00:40:05] Karen Hao: Just the fact that I think a lot of the public these days are still stuck in this space where they believe they don't really have agency to shape the future of AI. Which, first of all, I don't think is true.

I think everyone has agency to participate in the future of how this technology is going to be developed. And I think attending this conference is like one way that you can start doing that. I just worry that if people don't participate in co-creating this technology, then we will end up with the technology that really doesn't benefit us.

Like how is it going to benefit us when there's only a small, tiny fraction of humanity that's deciding what values get to be embedded in these systems and what they are optimized to do. So I just hope that more people can actually jump in to work on the technology.

[00:41:04] Paul Roetzer: So you obviously spend a lot of time researching the potential downsides, or the existing downsides, but you also get this front-row seat to the innovations and maybe what comes next and what good it can do. So what excites you most about AI?

[00:41:19] Karen Hao: I think there's so much potential for AI to make a really big difference in things like healthcare, education, and scientific discovery. That's one of the most exciting fields: drug discovery, material discovery, astrophysics, just understanding the universe, and really big, computationally intensive problems like climate modeling, and using climate modeling to help mitigate some of the impacts of climate change.

All of those things are very exciting applications of AI. And I just hope that more people work on them, because not all of them are very profitable.

[00:41:56] Paul Roetzer: Yeah. A cool example was Demis Hassabis’ team at DeepMind and the AlphaFold project, where they can predict the folding of proteins.

Again, I don't pretend to know exactly how all of that works, but what I know is it opens the door for the development of new pharmaceuticals and major advancements in life sciences, because they open-sourced everything. So again, we go back to: yes, Google has missteps, but Google bought DeepMind.

And the resources of Google are what is enabling Demis's team to make these leaps forward that can affect climate change and health and all these other things. There is always going to be good and bad with all of this stuff. Well, we're kind of coming up on time, so I'm just going to ask one final question: what would you want to leave our audience with?

So having spent so much time researching this stuff, writing about this stuff, thinking about it, what is the thing you would kind of want to leave to these technologists, marketers, business leaders, as they go forward and think about responsible AI?

[00:43:00] Karen Hao: Just that AI isn't scary. And it's not complicated. I think there's always this perception that, oh, it's too technical, I'm not going to understand it.

I guarantee you, just spend a little bit of time and you'll realize that AI is very simple. And I hope that you will feel empowered to do that, because we need more people thinking about AI, thinking about its future, and thinking about how to make sure it helps people.

So, yeah, it's not scary. It's not complicated. Just jump in.

[00:43:37] Paul Roetzer: You can go back and watch her “What is AI” talk from year one to learn the basics and realize it's not that abstract. So Karen, I'm just so grateful for you to be here. I love your writing. I can't wait to continue reading the research you do, especially with the new fellowship.

That's going to be amazing. So thank you for everything you're doing and for being a part of this event and a part of our community.

Karen Hao: Thank you so much, Paul.

Paul Roetzer: All right. Well, that's all for this session. I appreciate everybody being here and look forward to chatting with you after the fact. Thanks again.
