Stars Push Back Against iQIYI's AI Avatar Plan

Streaming Giant's AI Ambitions Meet Celebrity Resistance

At its 2026 World Conference, iQIYI unveiled ambitious plans for an "AI Artist Library" featuring digital avatars of performers. The platform boasted participation from over 100 artists, including household names like Zhang Ruoyun and Wang Churan, promising to revolutionize film production through its "Nadu Pro" technology.

But the announcement quickly turned controversial. Within hours, several named celebrities took to social media to dispute iQIYI's claims. "I haven't signed any agreement regarding AI avatars," Zhang Ruoyun stated bluntly on Weibo. His sentiment was echoed by Wang Churan and others listed as participants.

The swift celebrity pushback has cast doubt on iQIYI's entire initiative. Legal experts point out that using an artist's likeness without explicit permission could violate personal rights protections. "This isn't just about contracts," explains entertainment lawyer Li Ming. "It touches on fundamental questions of identity and creative control in the digital age."

Public reaction has been equally critical. On social platforms, fans have rallied behind the artists with hashtags like #ProtectOurStars and #AIConsentMatters. Many express discomfort with the idea of digital replicas performing without their original counterparts' ongoing involvement.

Industry at a Crossroads

iQIYI maintains that the project represents the future of entertainment production. "Our technology offers unprecedented creative possibilities while respecting artists' rights," a company spokesperson told reporters. However, the spokesperson declined to specify how many of the listed participants had actually signed agreements.

The controversy comes as global entertainment grapples with AI's expanding role. From Hollywood to Bollywood, producers see potential cost savings in digital performers, while actors fear losing control over their professional identities.

Key Points:

  • 🚨 Celebrity Denials: Multiple stars dispute participation in iQIYI's AI library
  • ⚖️ Legal Questions: Experts debate consent requirements for digital likenesses
  • 🌐 Industry Impact: Case highlights growing AI tensions in entertainment worldwide

Related Articles

News

ChatGPT rolls out smart age detection to protect young users

OpenAI is introducing an innovative age prediction system for ChatGPT that analyzes user behavior to identify minors. When the AI detects someone under 18, it automatically activates protective filters that block sensitive content. The feature includes optional identity verification through Persona, requiring selfies or IDs for confirmation. Launching first in Europe, this move shows OpenAI's commitment to creating safer digital spaces for teenagers as AI becomes more prevalent in daily life.

April 20, 2026
ChatGPT · online safety · AI ethics
News

Man's AI-generated suicide photo prank backfires, lands him in legal trouble

A domestic dispute in China's Qinghai province took a bizarre turn when a man used AI to create fake suicide photos to scare his wife. The images, showing him in the Yellow River, triggered a full-scale police search before authorities discovered the hoax. Now facing administrative detention, the case highlights growing concerns about misuse of AI technology in personal conflicts.

April 17, 2026
AI ethics · digital deception · public safety
News

Apple Nearly Booted Grok Over Deepfake Failures

Apple came close to pulling Elon Musk's AI app Grok from the App Store earlier this year after the platform failed to rein in non-consensual deepfake content. While parent company X addressed Apple's concerns, Grok reportedly still falls short of the tech giant's content moderation standards, particularly around AI-generated images targeting women and minors.

April 16, 2026
AI ethics · content moderation · deepfake technology
News

AI Lab Denies Code Copying Claims as Developer Drama Heats Up

Silicon Valley's Nous Research faces plagiarism accusations from Chinese AI team EvoMap over their Hermes Agent project. EvoMap alleges striking similarities in architecture with their Evolver engine, sparking a fiery exchange. With nearly 190,000 social media views, the dispute highlights growing tensions in competitive AI development circles.

April 16, 2026
AI ethics · open source · tech disputes
News

Apple Pressured Musk's X to Fix Grok's AI Image Risks or Face App Store Ban

Behind closed doors, Apple warned Elon Musk's X platform that its Grok AI tool violated App Store policies by generating inappropriate images. Internal documents reveal a months-long battle where Apple repeatedly rejected X's content moderation fixes before approving a revised version. While incidents have decreased, recent tests show users can still circumvent safeguards to create explicit content.

April 15, 2026
AI ethics · App Store policies · content moderation
News

DeepMind's Philosopher Hire: Why AI Labs Need More Than Engineers

Google DeepMind has made an unusual move by hiring philosopher Henry Shevlin in a full-time position, a first among leading AI labs. His focus on machine consciousness and human-AI relationships signals a shift from viewing AGI as purely an engineering challenge to recognizing its profound philosophical implications. As AI systems grow more sophisticated, questions about the boundaries of consciousness and the ethical frameworks governing it can no longer be avoided.

April 15, 2026
AI ethics · machine consciousness · AGI development