ChatGPT’s new 4o image generation capabilities just pulled off a massive viral moment, transforming ordinary photos into whimsical illustrations inspired by the legendary Japanese animation studio, Studio Ghibli.
But while social media timelines are flooded with these dreamy, hand-drawn aesthetic images, not everyone’s happy about it.
Some are calling it a joyful, artistic explosion of AI creativity. Others see it as a clear-cut case of unauthorized copying. In other words: The honeymoon period of AI-generated art may be coming to an end. (If it ever existed in the first place.)
On Episode 142 of The Artificial Intelligence Show, Marketing AI Institute founder and CEO Paul Roetzer and I dove into the backlash against AI tools now coming from artists, writers, and creatives.
Everyone’s Going "Ghibli Mode"
Just days after ChatGPT introduced its 4o image generator, users discovered they could apply an instantly recognizable “Studio Ghibli filter” to their personal photos. Studio Ghibli is a legendary Japanese animation studio that has produced beloved, often hand-drawn, animated films for decades.
Plenty of users found real joy in uploading their pictures to ChatGPT and seeing them get the Ghibli treatment. But plenty of artists and creatives were horrified that ChatGPT was now blatantly imitating the style of a famed animation studio, clearly without permission. It raised the question: How did ChatGPT know Ghibli's style (and the styles of others) so well?
The answer is simple, says Roetzer.
“It’s very, very obvious [this was] trained on copyrighted material.”
The resulting backlash has led to hundreds of angry comments on social channels, accusing AI models of "ripping off" beloved art styles and using creative work as training data without permission.
The Growing Tension Over Copyright
This upset over AI-driven art styles isn’t coming out of nowhere. In fact, it’s just the latest example of a wider conversation around how exactly AI developers train their models—and whether they do so using copyrighted text or images without permission.
One of the driving questions: If people can easily generate art in the style of a well-known creator or studio, what does that mean for the original artists (and the future value of their work)?
Unfortunately, AI labs don't seem to care all that much. Regardless of the answer, it's clear they're moving forward.
Feel like OpenAI and others have entered the "don't give a f*ck" phase of IP infringement.

Phase 1) Pretend models weren't trained on copyrighted materials.
Phase 2) Claim fair use.
Phase 3) Embrace and capitalize with no regard. https://t.co/WjREqn73WY

— Paul Roetzer (@paulroetzer) March 27, 2025
“I don't know if they've just become convinced they're going to win these lawsuits or they just have enough billions set aside for the lawsuits that they just don't care,” he says. “But it's obvious that they're just full steam ahead.”
“Meta Used All 3 of My Books”
The Studio Ghibli blowback isn’t happening in a vacuum. At the same time, more authors are discovering their books may have been used to train AI language models without their knowledge.
The Atlantic recently released a database showing all the pirated books that were potentially used by Meta to train its AI models. (Obviously, without the explicit permission of the books’ authors.)
For instance, bestselling writer and marketing leader Ann Handley found her entire library in the database and, understandably, took to social media to voice her anger. Hundreds of angry comments and reposts echoed her frustration. That could indicate that creatives at large are waking up to an uncomfortable fact about AI, says Roetzer.
“People are becoming aware of how this has been working for years,” says Roetzer. “This is not a secret that this is how this has been done. Your books have been stolen for years and used to train models for years, as have your creative outputs, your designs, your photography, your paintings. All of it's been stolen for years. That is not new. People's awareness of it is new.”
So, Are We About to See a Bigger Backlash?
If viral posts are any indication, the answer is yes. And it might not die down anytime soon. Some creators have spent years building their styles and livelihoods, only to discover AI can replicate their signature look with a handful of prompts because it’s been trained on work like theirs without permission.
Roetzer predicts we’ll see more of these high-profile social media flare-ups—and more frustration. The copyright concerns are one thing. The effects these tools have on creative jobs are another.
Roetzer points out that these models allow more people to do more creative work themselves, which may reduce the need (and budgets) for outside talent. Designers, writers, and other creatives are increasingly worried about losing business to AI.
“I think this is the year where people actually start to feel it,” says Roetzer. “Where maybe I’m not making what I used to make to do logo designs. Or I’m not getting paid what I used to get paid to do writing. I think this is the year where the rest of the world starts realizing what these things can do.”
What happens when you combine this copyright infringement with a negative impact on creative work?
“That is a recipe for a lot of backlash,” says Roetzer.
What It All Means for Business and Beyond
For businesses, marketers, or content creators hoping to tap into the latest “Studio Ghibli effect” or other viral AI trends, there’s a new dimension to consider: the risk of potential reputational blowback. Even if some legal experts argue these use cases fall into a gray area, the public perception might be another matter entirely.
Meanwhile, AI keeps marching on. Roetzer points out that other powerful models are right on the heels of 4o image generation—both from well-known tech giants and new AI startups. If one platform restricts style replication, another might not. It’s a race, and no one’s eager to slow down.
In the end, the “Studio Ghibli moment” is a sign that we’ve crossed another critical threshold in AI’s impact on creative work—and on the people whose livelihoods depend on it. Whether lawsuits and public pressure can rein in these models remains to be seen.
Still, Roetzer feels a personal tug-of-war: navigating the tension between the wonders these tools can produce and the questionable methods used to train them is an ongoing battle.
“I'm inspired by what you can build now, the democratization of the abilities to do these things, and I like using the tools,” says Roetzer. “And then the other part of me is like, but I know how they're trained and I know the impact they're going to have on people. Sometimes I'm not really sure how to feel about it all.”
Mike Kaput
As Chief Content Officer, Mike Kaput uses content marketing, marketing strategy, and marketing technology to grow and scale traffic, leads, and revenue for Marketing AI Institute. Mike is the co-author of Marketing Artificial Intelligence: AI, Marketing and the Future of Business (Matt Holt Books, 2022).