Google's AI Image Blunder Sparks Debate: Bias or Bug?


Google was recently on the receiving end of some serious controversy over AI image generation.

The controversy centered on some users' attempts to generate images of historical figures. On multiple occasions, Gemini produced historical images that were inaccurately diverse: an image of the US Founding Fathers that was ethnically diverse, and even an image of Nazis featuring many different races.

Google promptly apologized, saying the issue stemmed from major engineering challenges it is still working to solve. But critics say the real problem is that the company is forcing its views on users.

What's really going on here? 

On Episode 85 of The Artificial Intelligence Show, Marketing AI Institute founder and CEO Paul Roetzer broke it down for me.

Google isn’t alone in facing this problem

AI models, like Gemini, learn from data found on the internet. Unfortunately, the internet is a vast repository of human bias. Even with the best intentions, engineers can unknowingly train their models to reflect existing prejudices. 

This is not only a Google problem. It’s a problem faced by every company that builds AI.

"This is widely known to be a major challenge with these models," says Roetzer.

In Google’s case, its attempts to counteract the bias learned by the model appear to have had unintended consequences. Essentially, the model overcorrected, sometimes generating images that defied historical fact.

Politics is distracting from the real problem

Unfortunately, the issue became politicized almost immediately. (People like Elon Musk jumped into the fray, claiming the tool was biased.)

Musk and his allies believe Google did this on purpose and doesn't want to fix it. They argue that unless Google cleans house, its culture won't change.

(They don't bother, however, to address the fact that every AI company has had these issues.)

There’s no question that this is an important issue, says Roetzer. But the politicization around it misses the actual problem to be solved.

"This is a very challenging technical issue," says Roetzer. And it's one with important social impacts:

"The more people rely on these models for their information, we're going to start having a generation that learns what is true and what is not based on what these models output.”

There's no single "fix" for AI bias, whether it's bias the model learned online or bias introduced by a particular company's culture. Guardrails designed to protect users can also create new points of contention. And the very concept of bias is subjective: what one person finds unfair, another may deem necessary.

The only real way to solve this, according to Roetzer, is the path OpenAI has hinted at in the past:

Allowing users to control the bias settings of AI models. That would let you “set the temperature” of any AI tool you use according to your own personal preferences, putting the responsibility for what you get from AI models into your own hands and taking it out of the hands of AI companies.

"I think that's the only way to solve this: let individuals choose what sort of experience they want through settings," says Roetzer. "Otherwise, we're going to keep having these arguments all the time."
