NPR Host Takes Google to Court Over Alleged AI Voice Clone

David Greene, the familiar voice behind NPR's "Morning Edition" for years, is taking legal action against tech giant Google. The veteran broadcaster alleges that the male voice generated by Google's NotebookLM artificial intelligence tool bears an uncanny resemblance to his own distinctive vocal style.

"They've Taken My Professional Identity"

The current host of KCRW's "Left, Right & Center" says the controversy began when NotebookLM launched its podcast feature. "My inbox flooded with messages from colleagues and listeners asking if I'd secretly recorded for Google," Greene explained. After listening himself, he became convinced the AI wasn't just similar - it mirrored his speech patterns down to characteristic pauses and verbal tics.

What makes this particularly personal? "Radio hosts aren't just reading scripts - we pour our personalities into every word," Greene told reporters. "That voice represents decades of my professional life."

Google Fires Back

The tech company strongly denies Greene's claims. A spokesperson told The Washington Post that NotebookLM's audio features use voices recorded by professional actors under contract with Google. "We have strict protocols regarding voice acquisition and usage," the representative stated.

This isn't just about one radio personality, though. Legal experts see this case as part of a broader pattern emerging in our AI-driven world.

When Machines Sound Too Human

The Greene lawsuit follows another high-profile AI voice controversy earlier this year involving OpenAI and actress Scarlett Johansson. After users noted ChatGPT's "Sky" assistant sounded remarkably like Johansson, the company removed the disputed voice option.

"These cases raise fundamental questions," explains media law professor Elena Carter from Stanford University. "At what point does inspiration become imitation? And who owns the rights to how someone sounds?"

The legal landscape remains murky regarding AI-generated voices that resemble real people without directly copying recordings. While copyright protects specific recordings, the general sound and style of a person's speech exist in a gray area.

For broadcasters like Greene whose careers depend on their distinctive voices, these technological advances pose both practical and existential concerns. As he put it: "If anyone can push a button and sound like me, what does that mean for my profession?"

Key Points:

  • Legal Action: NPR veteran David Greene sues Google over alleged unauthorized use of his vocal style in NotebookLM AI
  • Similar Cases: Follows OpenAI removing ChatGPT voice that resembled Scarlett Johansson
  • Industry Impact: Raises questions about ownership of vocal mannerisms in age of advanced voice cloning tech
  • Google Response: Maintains they used properly licensed professional actors
