What Is Bias in AI—and How Do You Prevent It?

Can artificial intelligence be biased?

You bet.

Despite AI's obvious benefits, it can end up harming consumers, brands, and industries through bias.

What do we mean when we say AI can be biased? What is bias in AI?

Put simply, bias in AI is when an AI system produces systematically skewed outputs, often ones that unfairly favor or disadvantage certain groups of people.

Bias happens for two big reasons:

  • Human blindspots. Humans inject their own biases, consciously or unconsciously, into the data AI learns from or into the design of the AI system itself. These biases can include direct or indirect discrimination based on age, gender, sex, race, or other characteristics.
  • Incomplete data. Data used to train AI can also create bias when it's incomplete. AI is only as good as its training data. So, if that data lacks adequate diversity or comprehensiveness, the gaps cause issues, as the sketch below illustrates.
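
To make that second failure mode concrete, here's a minimal Python sketch. The data, the group labels, and the fallback rule are all hypothetical assumptions for illustration; real models fail in subtler ways, but the mechanism is the same: a group that's missing or underrepresented in the training data inherits predictions shaped by everyone else.

```python
# A minimal sketch (hypothetical data, standard library only) of how
# incomplete training data skews an AI system's outputs. The "model"
# here is just a group average, but the failure mode generalizes:
# underrepresented groups get predictions borrowed from everyone else.

from statistics import mean

# Hypothetical training data: (group, outcome) pairs.
# Group "B" is barely represented -- that's the gap.
training_data = [("A", 90), ("A", 85), ("A", 88), ("A", 92), ("B", 60)]

def predict(group):
    """Predict an outcome for a group from whatever data we have."""
    samples = [y for g, y in training_data if g == group]
    if len(samples) >= 3:  # enough data: use the group's own history
        return mean(samples)
    # Too little data: fall back to the overall average, which is
    # dominated by the well-represented group.
    return mean(y for _, y in training_data)

print(predict("A"))  # 88.75 -- driven by A's own data
print(predict("B"))  # 83.0  -- B inherits A's pattern, not its own
```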

Why can bias in AI be harmful? 

First is the obvious harm: AI can promote blatant bigotry.

A classic example is Tay, an AI bot Microsoft created in 2016 to post to Twitter.

The bot learned how to tweet like a person by studying conversations on Twitter. Unfortunately, much of the language it learned from was profane and bigoted.

As a result, the bot tweeted seriously inappropriate language. Microsoft shut the test down quickly.

AI can't invent bigotry on its own. It learns it from humans who display it in the datasets used for training.

If Microsoft hadn't trained its bot on a dataset that included bigoted language, it wouldn't have been bigoted. It was a mistake, not a malicious action. The company didn't anticipate the consequences of using all of Twitter as a dataset.

Yet, it still harmed people and the company's image. The result was the same as if the company had intentionally programmed the bot to be biased.

That's why bias in AI is so dangerous.

Second is the more common, but less obvious, harm: AI can become unintentionally biased due to incomplete data.

An example of this is the Apple Card.

In 2019, Apple released the Apple Card, a credit card product. An AI system automatically assigned each applicant a credit line based on characteristics like spending, credit score, and earnings.

However, Apple took a massive amount of flak when it turned out that its AI gave women smaller credit lines than men, even when controlling for other factors.

It happened because the AI system was trained on incomplete data, which didn't account for a range of gender-related factors affecting income and pay.

As a result, it concluded that women deserved less credit than men, even when financials were equal.

So, how do you address bias in AI?

Once you've built a product or system, it's usually too late.

You need to address bias at every step of the process that leads to the adoption of AI in products and operations.

The examples of Microsoft and Apple prove this. Both companies are adept at AI. Both have world-class engineering talent. Yet both were still caught off guard by bias in AI. And by the time they discovered it, it was too late.

The technology was sound. But the bias considerations were not.

That's because fixing bias in AI isn't just a technology problem.

Sure, you need to be completely confident your data is comprehensive, accurate, and clean.

But you also need to have people and processes in place across every business function, not just engineering, to assess bias risks. It's a holistic effort, and it takes time.
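
As one concrete example of a process-level check, here's a minimal Python sketch of a pre-launch disparity audit. The data, the group labels, and the 0.8 threshold (loosely borrowed from the "four-fifths rule" used in employment-discrimination audits) are illustrative assumptions, not any company's actual method.

```python
# A minimal sketch of one process-level safeguard: audit a model's
# outputs for disparity across groups before launch. All data and
# the threshold below are hypothetical.

from statistics import mean

# Hypothetical model outputs: (group, credit_line) for a batch of applicants.
decisions = [
    ("men", 12000), ("men", 15000), ("men", 11000),
    ("women", 7000), ("women", 8000), ("women", 6500),
]

def disparity_report(decisions, threshold=0.8):
    """Flag any group whose average outcome falls below `threshold`
    times the best-off group's average."""
    groups = {g for g, _ in decisions}
    averages = {g: mean(v for grp, v in decisions if grp == g) for g in groups}
    best = max(averages.values())
    for group, avg in sorted(averages.items()):
        ratio = avg / best
        flag = "FLAG FOR REVIEW" if ratio < threshold else "ok"
        print(f"{group}: avg={avg:,.0f} ratio={ratio:.2f} [{flag}]")

disparity_report(decisions)
# women's average is about 57% of men's -- well under the 0.8
# threshold, so the disparity gets flagged for human review
# before the product ships.
```

A check like this won't catch every form of bias, but it makes one class of problem visible before launch instead of after.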

An excellent place to start is to draft an AI ethics policy for your organization, whether you build AI technology or use it in your work.

An AI ethics policy is a formal, publicly posted document that outlines your company's position on AI.

It provides specifics on how your company will and won't use AI.

And it details the steps you take (or will take) to make sure ethical issues and bias don't affect the AI you build or use.
