
Reddit Sues Anthropic Over Alleged Unauthorized Data Scraping

Reddit has taken legal action against artificial intelligence startup Anthropic, filing a lawsuit in San Francisco Superior Court that alleges systematic violations of its platform rules. The social media giant claims Anthropic accessed its platform more than 100,000 times since July 2024, despite being fully aware of Reddit's prohibition on automated scraping.

In court documents, Reddit paints an unflattering portrait of Anthropic as a company that publicly champions ethical AI development while allegedly flouting platform rules behind the scenes. "Anthropic presents itself as the white knight of AI ethics," the complaint states, "while simultaneously building a commercial product that depends on unauthorized data collection."

Image note: AI-generated illustration, licensed from the image service provider Midjourney.

Reddit's Chief Legal Officer Ben Lee emphasized the platform's unique value in an era dominated by AI-generated content. "What makes Reddit special are the authentic human conversations and emotional connections that happen here," Lee stated. He suggested Anthropic's alleged data scraping could generate billions in revenue for the AI firm while potentially undermining Reddit's own business model.

The lawsuit arrives as Reddit increasingly monetizes its data through official partnerships. Last February, the company struck a $60 million deal with Google allowing access to its content for AI training purposes. Similar agreements with other undisclosed companies suggest Reddit is positioning itself as a key data provider in the booming AI industry.

This isn't Anthropic's first brush with copyright controversies. The company faced a class-action lawsuit last August from authors alleging unauthorized use of copyrighted books. In October 2023, Universal Music Group sued Anthropic over alleged systematic infringement of song lyrics.

The case raises fundamental questions about how AI companies can ethically source training data while respecting intellectual property rights. As artificial intelligence becomes increasingly sophisticated, courts may need to clarify where fair use ends and infringement begins in machine learning applications.

Legal experts suggest this lawsuit could establish important precedents for how platforms protect their content from unauthorized scraping while still participating in the AI economy through legitimate partnerships. The outcome may influence how other social media companies approach similar challenges.

Key Points

  1. Reddit alleges Anthropic accessed its platform without authorization more than 100,000 times since July 2024
  2. The lawsuit contrasts Anthropic's public ethics stance with alleged rule violations
  3. Reddit has existing data licensing deals worth millions with companies like Google
  4. This marks at least the third major legal challenge over training data that Anthropic has faced
  5. The case could help define boundaries between fair use and infringement in AI training


Related Articles

News

AI Giant Anthropic Hits Legal Snag in India Over Name Dispute

Anthropic, the US-based AI powerhouse, faces an unexpected hurdle in India as a local software firm claims trademark rights to the same name. The Indian company, Anthropic Software, alleges customer confusion and seeks $110,000 in damages. This comes as Anthropic doubles down on its Indian expansion with new hires and office openings. The court has scheduled further hearings while denying an immediate injunction.

February 10, 2026
AI expansion, trademark law, India tech market
News

ByteDance's Seedance 2.0 Raises Eyebrows with Uncanny AI Abilities

Tech blogger 'Film Hurricane' Tim recently uncovered startling capabilities in ByteDance's new AI video model Seedance 2.0. While impressed by its technical prowess, Tim revealed concerning findings about spatial reconstruction and voice cloning that suggest unauthorized use of creator content. These discoveries spark urgent conversations about data ethics in AI development.

February 9, 2026
AI ethics, generative video, data privacy
News

Xiaohongshu's New AI Video Editor Lets You Chat Your Way to Creativity

China's popular social platform Xiaohongshu is testing an innovative AI video editing tool called OpenStoryline that could revolutionize content creation. The conversational interface allows users to edit videos through natural language commands, potentially making professional editing accessible to everyone. While still in early testing, the company hints this game-changing tool might eventually go open-source.

February 9, 2026
AI video editing, Xiaohongshu, OpenStoryline
News

UN Forms AI Safety Panel with Chinese Experts on Board

The United Nations has taken a significant step toward global AI governance by establishing an International Scientific Expert Group on AI Safety. Two prominent Chinese scientists specializing in AI ethics and technical safety have been selected for this inaugural panel. The group will assess emerging AI risks and provide policy recommendations, marking China's growing influence in shaping international AI standards.

February 6, 2026
AI governance, United Nations, China tech
News

South Korea Pioneers AI Regulation with Groundbreaking Law

South Korea has taken a bold step by enacting the world's first comprehensive AI legislation. The new law mandates digital watermarks for AI-generated content and strict risk assessments for high-impact AI systems. While the government sees this as crucial for balancing innovation and regulation, local startups fear compliance burdens, and activists argue protections fall short. As South Korea aims to become a global AI leader, this law sets an important precedent – but can it satisfy both tech ambitions and public concerns?

January 29, 2026
AI regulation, South Korea tech, digital watermarking
News

OpenAI Takes On Fake Profiles With Biometric Social Network

OpenAI is quietly working on a revolutionary social media platform that would use facial recognition and iris scanning to verify users are human. The move aims to combat rampant bot accounts plaguing existing networks while positioning OpenAI's AI tools at the center of digital identity. However, privacy advocates warn collecting biometric data carries significant risks if compromised.

January 29, 2026
OpenAI, social media, biometrics