X Takes Hard Line Against Unmarked AI War Content


Social media giant X (formerly Twitter) announced sweeping new penalties Tuesday for creators who share AI-generated videos of armed conflicts without proper labeling. Violators will be removed from the platform's revenue-sharing program: first-time offenders lose access for 90 days, and repeat offenders face permanent removal.


Why This Matters Now

"During wars and conflicts, accurate information becomes life-or-death," explained Nikita Bier, a product executive at X. "AI tools have made it frighteningly easy to create convincing fakes that can sway public opinion or even put lives at risk."

The platform plans to combat this growing threat with a dual approach: automated detection tools will flag suspicious content, while the crowd-powered "Community Notes" system allows users to fact-check questionable posts.

Addressing Past Controversies

This policy shift directly responds to criticism of X's creator monetization program. Critics argue the incentive structure has encouraged sensationalism, with some users prioritizing engagement over truthfulness. The platform's previous hands-off approach allowed misleading content, especially about global conflicts, to spread rapidly.

"We're drawing a clear line," Bier emphasized. "Profit shouldn't come at the expense of truth during humanitarian crises."

Room for Improvement?

While experts applaud the focus on conflict zones, many note glaring gaps in X's strategy:

  • Political misinformation remains largely unchecked
  • Influencers still push dubious AI products without consequence
  • Other sensitive topics like natural disasters aren't covered

The policy also raises practical questions: How will X distinguish between obvious fakes and sophisticated deepfakes? Can detection tools keep pace with rapidly advancing AI technology?

As synthetic media becomes more convincing by the day, platforms worldwide face mounting pressure to balance creative expression against information integrity without stifling legitimate content creators.

Key Points:

  • 90-day ban: First offense removes creators from revenue sharing
  • Permanent ban: Repeat violators lose monetization permanently
  • Detection combo: AI tools + Community Notes identify violations
  • Conflict focus: Current rules only apply to war/armed conflict footage
  • Ongoing challenge: Platforms struggle to police ever-improving synthetic media


Related Articles

News

X Platform Rolls Out Mandatory AI Labels - What Creators Need to Know

X Platform (formerly Twitter) is testing compulsory 'Made with AI' tags for synthetic content, according to researcher Nima Owji. The move aims to combat misinformation as AI-generated posts flood social media. Creators who fail to label AI content risk penalties ranging from reduced visibility to account suspension. This follows similar initiatives by Meta and YouTube, signaling an industry-wide push for transparency.

February 24, 2026
AI regulation · social media policy · content moderation
News

Kuaishou Takes Action Against AI-Altered Videos Targeting Classics

Kuaishou has removed over 4,000 videos featuring inappropriate AI modifications of classic films and animations. The crackdown focuses on protecting minors from disturbing content and preserving the integrity of cultural treasures like 'Journey to the West'. The platform vows to strengthen content review systems while encouraging user participation in reporting violations.

March 3, 2026
AI regulation · content moderation · digital heritage
News

WeChat Pulls 4,000 AI-Altered Videos in Crackdown on Distorted Classics

WeChat has removed nearly 4,000 videos in February that used AI to grotesquely alter classic films and animations. The platform is targeting content that distorts cultural classics like 'Romance of the Three Kingdoms,' misrepresents historical figures, or creates disturbing versions of children's cartoon characters. This crackdown comes as part of broader efforts to maintain healthy online content and protect young users from harmful material.

March 3, 2026
AI regulation · content moderation · digital culture
News

Taobao Flash Sales Rolls Out AI-Powered Food Safety Checks

Taobao Flash Sales has launched a new '3+1+AI' food safety system in response to stricter regulations. The platform now combines artificial intelligence with rider inspections to monitor restaurant hygiene and compliance throughout the delivery process. This move comes as China tightens oversight of online food services, with major platforms racing to implement smarter safety measures.

February 27, 2026
food delivery · AI regulation · e-commerce
News

Canada Demands OpenAI Strengthen Safety Measures After Shooting Incident

Canadian officials have issued a strong warning to OpenAI following a tragic school shooting linked to a banned ChatGPT user. Justice Minister Sean Fraser emphasized the need for immediate improvements to AI safety protocols, threatening legislative action if changes aren't made voluntarily. The case highlights growing concerns about tech companies' responsibility in preventing misuse of their platforms.

February 26, 2026
AI regulation · OpenAI · public safety
News

Xiaohongshu Tightens Rules on AI Content: Label or Lose Visibility

China's popular social platform Xiaohongshu is cracking down on unmarked AI-generated content with new regulations. Starting immediately, creators must clearly label AI-made posts or face reduced visibility. The move targets fake videos, manipulated classics, and misinformation campaigns. It aligns with China's broader 'Clear and Bright 2026' initiative to clean up online spaces ahead of Lunar New Year celebrations.

February 13, 2026
AI regulation · content moderation · social media