Adobe's Controversial AI Policy Faces Fierce Backlash

A few small changes to Adobe's terms of service just sparked a massive backlash from creators—and highlighted the growing mistrust around how companies use customer data to power AI.

The controversy began when Adobe updated its terms of service and required users to agree to give the company access to their content via "automated and manual methods" in order to keep using its software. 

The Verge explains:

"Specifically, the notification said Adobe had 'clarified that we may access your content through both automated and manual methods' within its TOS, directing users to a section that says 'techniques such as machine learning' may be used to analyze content to improve services, software, and user experiences. The update went viral after creatives took Adobe’s vague language to mean that it would use their work to train Firefly — the company’s generative AI model — or access sensitive projects that might be under NDA."

Adobe quickly backtracked, releasing a blog post calling the controversy a "misunderstanding." The company clarified it doesn't train AI models on customer content or assume ownership of users' work.

But the damage was done.

What can we learn from Adobe's faux pas?

I got the scoop from Marketing AI Institute founder and CEO Paul Roetzer on Episode 102 of The Artificial Intelligence Show.

Transparency matters more than ever

"It's just an unforced error," says Roetzer. "It's just a bad look."

Even with Adobe's explanation, the terms are still written in confusing legalese that understandably scared users.

"I read it and I was like, 'I don't know what that means,'" says Roetzer. "And you and I are pretty knowledgeable about this stuff."

The snafu follows a similar pattern to a controversy that hit Zoom last year. The video conferencing giant had to walk back terms of service that made it sound like user conversations could be used for AI training.

In both cases, a lack of transparency gave a strong perception the companies were trying to "pull one over" on customers, says Roetzer. And in the current climate, that's a major liability.

"I think there's going to be an increasing level of mistrust," he says.

"We need to expect more of these companies—to be very transparent and clear and not even give the perception that they're trying to pull one over on us."

The stakes are only rising

As more and more companies race to develop AI, accessing quality training data is becoming a make-or-break factor. Customer content represents a potential goldmine for feeding data-hungry models.

But as Adobe just learned, tapping into that goldmine without true transparency and consent is a dangerous game. Users are increasingly sensitive about how their data and creations are being used by the AI tools they rely on.

Companies that fail to get ahead of these concerns with clear, plainspoken communication risk serious backlash and lost trust.

"A lot of companies wanting access to your data to use in their AI in some way, and it's going to get really confusing how they're doing it," says Roetzer.

The bottom line? AI builders who prioritize clear communication, informed consent, and responsible data practices are going to have a major leg up as public scrutiny intensifies.
