Clegg: Mandatory Artist Consent Could "Kill" UK AI Industry

At a pivotal moment for UK AI policy, Nick Clegg—former Deputy Prime Minister and Meta executive—has ignited fierce debate by arguing that mandatory artist consent for AI training data would "fundamentally kill" the nation's artificial intelligence industry.

Speaking at a book event, Clegg acknowledged creators' rights but dismissed prior consent as impractical. "These systems ingest enormous datasets," he told The Times. "I don't see how you ask permission at that scale." His warning comes as Parliament weighs stricter copyright protections that could require AI companies to disclose training materials.

Creative Backlash vs. Tech Realities

The debate reached Parliament through an amendment to the Data Bill by filmmaker Beeban Kidron, demanding transparency about copyrighted works used in AI training. Over 300 cultural figures—including Elton John and Dua Lipa—backed the proposal in an open letter. Yet lawmakers rejected it last Thursday, with Science Minister Peter Kyle stressing the need to balance both sectors: "Our economy needs thriving AI and creative industries."

Why Transparency Matters

Proponents argue disclosure would curb unauthorized use of creative works. Kidron insists knowing data origins lets copyright law function properly. But opponents counter that such requirements would cripple innovation while competitors abroad operate freely. As one tech lobbyist put it: "You can't build ChatGPT by knocking on every artist's door."

The Road Ahead

The battle isn't over. The Data Bill returns to the House of Lords in June, ensuring continued clashes between tech growth and creator rights. This struggle mirrors global tensions—how can societies harness AI's potential without undermining the very creators who fuel it? For now, Britain's attempt at compromise leaves both sides uneasy.

Key Points

  1. Nick Clegg claims mandatory artist consent for AI training data is logistically impossible
  2. Parliament rejected a transparency amendment backed by major music and literary figures
  3. The UK government seeks middle ground between tech innovation and copyright protection
  4. The debate will continue as the Data Bill moves to the House of Lords next month

Related Articles

News

AI Adoption Divide: How China and the U.S. Approach AI Tools Differently

OpenClaw founder Peter Steinberger reveals stark contrasts in AI adoption between China and the U.S. While Chinese companies mandate AI tool usage, some American firms restrict them over security concerns. Steinberger shares insights on workplace impacts and his vision for personal AI agents that could reshape how we work and interact with technology.

March 27, 2026
AI adoption, OpenClaw, tech policy
News

Google bows to UK publishers, adding opt-out for AI search summaries

In a significant shift, Google has agreed to let websites opt out of its AI-generated search summaries following pressure from UK regulators and publishers. The move addresses concerns that these automated overviews were diverting traffic from content creators. While seen as a win for publishers, questions remain about how the changes will be implemented globally and whether opting out might still impact search rankings.

March 20, 2026
Google, AI regulation, search engines
News

Beijing Cracks Down on AI Misuse with Month-Long 'AI for Good' Campaign

Beijing has launched a targeted campaign to clean up AI misuse online. The one-month initiative aims to tackle everything from deepfake scams to AI-generated pornography, focusing on five key problem areas. Authorities will work with platforms to strengthen content moderation while cracking down on illegal services that exploit AI technology.

March 18, 2026
AI regulation, deepfake crackdown, content moderation
News

Douyin Cracks Down on AI-Generated Explicit Content

Douyin has taken strong action against accounts using AI to create inappropriate content, banning over 14,000 violators this year. The platform targets black market operations that generate fake personas and suggestive videos to redirect users. Authorities have already detained suspects involved in these schemes as Douyin vows to intensify its crackdown.

March 16, 2026
content moderation, AI regulation, platform governance
News

Lobster AI Shakes Up Pharma Workflows as Platforms Draw Regulatory Lines

An AI tool called OpenClaw, recognizable by its red lobster icon, is transforming pharmaceutical workflows with extensive automation, cutting some tasks from hours to minutes. But its power raises new security concerns, and Xiaohongshu has become the first platform to ban AI agents impersonating human users, sparking industry-wide discussion about balancing innovation with responsibility.

March 12, 2026
AI regulation, pharmaceutical technology, workplace automation
News

Gracenote takes OpenAI to court over alleged data theft for AI training

Nielsen's Gracenote has filed a lawsuit against OpenAI, accusing the AI giant of illegally scraping its proprietary media metadata to train models such as ChatGPT. The company claims its carefully curated database, painstakingly assembled by human editors, was copied without permission, threatening its entire business model. OpenAI maintains it only uses publicly available data; the case could set important precedents for how AI companies source training materials.

March 11, 2026
AI litigation, copyright law, metadata