TikTok Takes On AI-Powered Celebrity Scams

In response to growing concerns about artificial intelligence being used to create fake celebrity endorsements, TikTok's Chinese counterpart Douyin has launched a major crackdown. The move comes after several high-profile cases where AI-generated versions of public figures appeared to hawk products they never endorsed.

The Impersonation Problem Goes Digital

Douyin Vice President Li Liang addressed the issue head-on during a recent press briefing. "While some reported cases didn't actually occur on our platform," Liang clarified, "we've definitely seen content where AI technology mimics celebrities for sales purposes."

The executive didn't mince words about the seriousness of these digital impersonations. "This isn't just about copyright—it erodes trust in our entire ecosystem," Liang explained. "When creators, merchants, and platforms lose credibility with consumers, everyone suffers."

Technical Arms Race Against Deepfakes

Identifying AI-generated content presents an ongoing challenge across social media platforms. As detection methods improve, so do the tools used by bad actors. "These accounts constantly evolve their techniques," Liang noted. "It's like playing whack-a-mole with generative technology."

Douyin plans to combat this by:

  • Boosting investment in detection algorithms
  • Expanding moderation teams specializing in synthetic media
  • Streamlining reporting processes for affected creators

The platform has drawn a clear line: any use of AI to impersonate others for marketing purposes—especially authoritative figures such as military or police personnel—constitutes serious misconduct.

Protecting Creators And Consumers Alike

For influencers worried about their digital likeness being stolen, Douyin offers dedicated channels for reporting impersonations. The company promises swift action on verified cases, though it has not specified response times.

The crackdown reflects broader industry concerns as AI tools become more accessible. While platforms grapple with balancing innovation against misuse, cases like these highlight why clearer regulations around synthetic media may be necessary.

The stakes are high—not just for individual creators whose reputations could be damaged by unauthorized endorsements, but for consumers who might fall victim to scams wearing a familiar face.

Key Points:

  • Douyin launches special initiative targeting AI celebrity impersonations
  • Detection remains challenging as fake accounts employ evolving techniques
  • New protections offered for creators whose likeness gets misused
  • Industry-wide implications as synthetic media becomes more sophisticated

