Celebrities Push Back Against iQIYI's AI Avatar Plans

Streaming Giant Faces Backlash Over Unauthorized AI Avatars

At its 2026 World Conference, iQIYI unveiled what it called a groundbreaking initiative: the "AI Artist Library." The Chinese streaming platform boasted that more than 100 performers, including popular actors Zhang Ruoyun and Wang Churan, had signed on to have their digital likenesses created using the company's proprietary "Nadu Pro" film production technology.

But there was one big problem: many of the named artists say they never agreed to participate.

Celebrity Denials Pour In

Within hours of the announcement, social media erupted with denials from several high-profile names. Zhang Ruoyun posted a terse "No authorization given" on Weibo, while Wang Churan shared a more detailed statement questioning how her image could be used without her consent.

"This isn't just about contracts," entertainment lawyer Li Ming told us. "It touches on fundamental rights of publicity and personality that artists fiercely protect."

The backlash highlights growing tensions in China's entertainment industry as tech companies race to develop AI solutions. While digital avatars promise cost savings for productions, the ethical boundaries remain murky.

iQIYI claims all participants signed proper authorization forms, but declined to provide specifics when pressed. Industry insiders suggest the platform may have relied on broad clauses in existing contracts rather than obtaining fresh consent for AI usage.

"Many standard acting agreements include vague language about digital rights," explained talent manager Chen Wei. "Artists are now realizing they need explicit AI clauses to protect themselves."

The controversy comes at a sensitive time for China's streaming platforms, which face increasing scrutiny over content practices. Just last month, regulators tightened rules around celebrity compensation and endorsement deals.

Broader Implications for AI Entertainment

Beyond the immediate dispute, this incident raises thorny questions about:

  • How digital likenesses should be regulated
  • What constitutes fair compensation for AI performances
  • Whether existing intellectual property laws adequately cover synthetic media

"We're entering uncharted territory," said Tsinghua University media professor Zhao Lin. "The technology is advancing faster than our legal frameworks can adapt."

For now, iQIYI maintains its project will proceed as planned, though analysts predict the company may need to revisit its artist recruitment strategy. As one industry insider put it: "Nobody wants to be the test case for AI rights violations."

Key Points:

  • Celebrity pushback: Multiple artists deny participating in iQIYI's AI library despite being named as participants
  • Consent questions: Concerns emerge about whether proper authorization was obtained for digital avatar creation
  • Industry impact: The dispute highlights growing tensions between tech innovation and performer rights in entertainment

