Marketing AI Institute | Blog

OpenAI's Scarlett Johansson Controversy

Written by Mike Kaput | May 28, 2024 2:31:18 PM

OpenAI is facing serious backlash after allegations it used actress Scarlett Johansson's voice without permission for one of the updated voices in its new GPT-4o model.

The controversy is the latest in a string of trust-eroding missteps from the AI leader.

What went down, and why does it matter?

I got the answers from Marketing AI Institute founder and CEO Paul Roetzer on Episode 99 of The Artificial Intelligence Show.

How it started

The drama began on Monday, May 13, when OpenAI demoed the new voice capabilities of GPT-4o during the model's launch event.

Among the voices showcased was "Sky," which many listeners immediately noted sounded eerily similar to Scarlett Johansson. 

The actress herself was among those who noticed the likeness. 

In a statement, she revealed that OpenAI CEO Sam Altman had approached her last September about voicing the system. 

She says he told her that her voice could "bridge the gap between tech companies and creatives" and help consumers feel comfortable with the AI shift.

Johansson declined the offer. Yet when GPT-4o debuted with the Johansson-esque "Sky" voice, Altman stoked the flames by tweeting simply "Her"—a reference to the movie in which Johansson voices an AI assistant.

How it (appears to have) ended

Altman and OpenAI denied that the voice is Johansson's, saying it was never intended to be hers and came from a voice actor they hired before any outreach to Johansson.

The company went on to say:

“Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

On May 22, The Washington Post reported that a voice actress was indeed hired in June to create the Sky voice, "months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress's agent."

So it appears that OpenAI was not legally at fault. But the backlash has been swift and severe.

"Illegal or not, it is a really bad look for OpenAI," says Roetzer. "They've had a number of unforced errors lately. This is either incompetence or arrogance. And I don't think they're incompetent."

Roetzer sees this as part of a pattern of recent missteps, not an isolated incident.

"The big picture here is not to nitpick one individual thing," says Roetzer. "They're making a collection of really apparently bad PR choices and bad business choices that just don't bode well for a company who is asking for so much trust from society."

Why it all matters

These repeated lapses in judgment could have serious consequences for OpenAI—and for the development of AI as a whole. 

"We're supposed to trust them," says Roetzer. "But the potential for employee blowback, societal blowback, government intervention becomes greater and greater the less we trust the organizations that are [building] AI."

There are already signs that public sentiment may be starting to turn against OpenAI and other AI companies that are seen as moving too fast and breaking too many things. A LinkedIn post from Roetzer about the Johansson controversy drew a flood of negative comments about OpenAI.

"There's this almost sense of invincibility right now with AI companies where they're just going to do whatever they have to do to accelerate innovation," says Roetzer. "They just think they can get away with this stuff."