
How Should You Think About Ethics in Artificial Intelligence?



If you're trying to understand, pilot, scale, or sell AI, you need a point of view on its ethical use.

Forming a point of view on ethics in artificial intelligence isn't easy. But it's worth it.

Without clear thinking on ethics in AI, you run the risk of doing damage to your brand, your business, and your customers.

We're not talking about fighting superhuman AI run amok. That's science fiction.

We're talking about avoiding real business problems that can result when you don't consider the ethical implications of AI technology.

Ethical AI concerns fall into two categories:

  1. Bias concerns. AI is only as good as the data used to train it. There are plenty of ways the data used in an AI system can give it (intentional or unintentional) bias.
  2. Misuse concerns. Even if it's unbiased, AI can still be misused. There are plenty of ways to violate user privacy or manipulate emotions and behavior using AI.

So how should you think about ethics in AI?

It starts with asking yourself, your leadership, and your vendors three essential questions.

1. Where does the data come from?

Whether AI uses your organization's data or its own dataset, where did the data come from? Your organization and your AI tools should collect only data that users have consented to give. It's also necessary to determine how representative the data is. If the data came from too limited a pool, the resulting model may perform poorly, or unfairly, for the people that pool underrepresents.
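
To make this concrete, here's a minimal sketch of what a data provenance check might look like in practice. Everything here is hypothetical: the records, the consented and region fields, and the 60% threshold are illustrative choices, not a standard.

```python
from collections import Counter

# Hypothetical training records; real data would come from your
# CRM, analytics platform, or vendor exports.
records = [
    {"text": "Loved the product!", "consented": True,  "region": "US"},
    {"text": "Fast shipping.",     "consented": True,  "region": "US"},
    {"text": "Muy buen servicio.", "consented": True,  "region": "MX"},
    {"text": "Won't buy again.",   "consented": False, "region": "UK"},
]

# 1. Keep only data users consented to share.
usable = [r for r in records if r["consented"]]
print(f"{len(usable)} of {len(records)} records have consent")

# 2. Check how representative the remaining pool is.
by_region = Counter(r["region"] for r in usable)
total = sum(by_region.values())
for region, count in by_region.items():
    share = count / total
    # Illustrative threshold: flag any group above 60% of the pool.
    flag = "  <- dominates the pool" if share > 0.6 else ""
    print(f"{region}: {share:.0%}{flag}")
```

Even a simple audit like this forces the two questions that matter: did users agree to this, and who is missing from the data?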

One high-profile example:

AI researcher Timnit Gebru found that the sources of the data used in Google's language models could themselves be a problem. From MIT Technology Review:

"An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won't be attuned to the nuances of this vocabulary and won't produce or interpret language in line with these new cultural norms.

It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities."

Gebru herself was embroiled in controversy over the paper's publication. She says Google fired her for pointing out ethical issues; the company says she resigned.

2. How do you use AI responsibly?

How does your organization use AI responsibly? How do vendors use AI responsibly? You need to know.

This is where a formal list of ethical values comes in handy. One model is Phrasee, an AI vendor with a simple but smart ethics policy. You can take a leaf out of Phrasee's book and formulate your own AI ethics policy as a series of statements on "things we won't do" and "things we will do."

From Phrasee's ethics policy:

"Things we won't do:

  1. We won't use data to target vulnerable populations. It's possible, for example, that a machine learning algorithm could identify bi-polar people about to enter a mania phase… and then suggest you target them with extravagant product offers. Or, say someone has recently gone through a breakup, or worse, a bereavement, and people used AI to exploit their emotional state. We do not believe using AI to these ends is ethical. We believe that even though a machine's prediction may be right, that doesn't mean we should use it to exploit people. We will NEVER use data like this.
  2. We won't promote the use of negative emotions to exploit people. Some people and companies suggest selecting messages that explicitly focus on "fear, guilt and anxiety." We believe that people shouldn't be treated like this. We, as marketers, shouldn't make people feel fearful, guilty and anxious; instead, we should focus on the positive aspects of our brands. We will NEVER encourage our customers to use negative emotions to target their consumers.
  3. We will not work with customers whose values don't align with ours. In the past we have actively turned down working with companies that we believe are harmful to society or have unscrupulous business models. All potential customers go through a review process to make sure their ethics align with ours. We will not work with: gun & weapon retailers, political parties or any company that promotes hate speech or the marginalisation of segments of society. The individuals who work for these types of organisations are probably nice people, and we'd love to work with you when you switch industries. But for now, it's not for us.

Things we will do:

  1. Take action to avoid prejudice and bias. First off, we ensure our team itself is diverse. Phrasee is gender balanced, has staff members from many countries around the world, many socio-economic profiles, ages, sexual orientations, ethnic backgrounds and political beliefs. Secondly, we actively develop methods to identify and remove prejudice from our data sets. This ensures our models are generalised, and any small amount of biased data is washed out during training.
  2. Be open about what our AI does. We use AI to do two very specific tasks: to generate human-sounding, brand-compliant marketing copy, and to predict the performance of that copy in the wild. We do not use AI for other purposes, and if/when we do, we will be open about it. Our customers deserve to know what we do, and that we aren't hiding any secret, evil development schemes.
  3. We will not change this policy. We will monitor it and add to it when required. We view this ethics policy like the US Constitution. So, the core text will not change. However, ethics are not static, and adapt over time. The world around us changes. When our policy needs to be amended, we will transparently add amendments at the end of the document, dated and with reasons. This way you can always see where we stand, and what we stand for."
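
Phrasee's point about washing out biased data during training gestures at a whole family of techniques. Here's a minimal, hypothetical sketch of one of the simplest: downsampling overrepresented groups so a model sees each group equally often during training. This is our illustration of the general idea, not Phrasee's actual method.

```python
import random
from collections import defaultdict

random.seed(0)  # reproducible for illustration

# Hypothetical labeled examples, skewed 80/20 across a sensitive group.
examples = (
    [{"group": "A", "label": 1}] * 80 +
    [{"group": "B", "label": 1}] * 20
)

# Bucket examples by group, then downsample every group to the size
# of the smallest one so no group dominates training.
buckets = defaultdict(list)
for ex in examples:
    buckets[ex["group"]].append(ex)

floor = min(len(b) for b in buckets.values())
balanced = [ex for b in buckets.values() for ex in random.sample(b, floor)]

print({g: len(b) for g, b in buckets.items()})   # {'A': 80, 'B': 20}
print(len(balanced), "examples after balancing")  # 40 examples after balancing
```

Real debiasing work goes well beyond rebalancing, but the principle is the same: measure the skew in your data, then correct for it before it becomes skew in your model.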

3. What guardrails are (or should be) in place?

Asking these questions and adopting an AI ethics policy are good first steps. But ethical considerations aren't a one-off set of actions or discussions. Your company and your vendors need guardrails that ensure AI ethics policies are followed.

Think of guardrails broadly as regular touchpoints and safeguards baked into your normal work. We're talking about things like:

  • An AI ethics owner. There needs to be a point person who owns or oversees your AI ethics efforts.
  • A recurring meeting on AI ethics. Schedule a recurring discussion at least quarterly to make sure your ethics policy is being followed and working as intended.
  • Adequate publicity for your policy. It's not enough to publish your policy on your website. People at your organization need to know it exists. They also need to know why it's important. And they need to believe it is being enforced. After all, your teams are the ones who need to consider AI ethics when no one is looking.
