Claude's ID Check Policy Stirs Privacy Fears and Account Suspension Concerns

Claude's Strict ID Verification Rattles Users

Anthropic, the company behind AI assistant Claude, has rolled out a controversial new identity verification system that's leaving many users uneasy. The policy requires users to submit a real-time photo of themselves holding a physical government ID - no digital copies or scans accepted. The verification process, handled by third-party provider Persona, takes about five minutes but has become a flashpoint for privacy concerns.

Verification or Suspension Trap?

What started as a routine security measure has some users calling it a 'gotcha' system. Multiple subscribers report having their accounts suspended shortly after completing verification. Claude's FAQ lists potential suspension reasons including policy violations, accessing the service from unsupported locations, terms of service breaches, and - perhaps most contentiously - being under 18 years old.

One user shared how their 15-year-old son, a game developer earning more than his father through Claude projects, suddenly lost access. "We detected your account was used by a child," read the refund notice from Anthropic. The case highlights how the 18+ age restriction - higher than competitors' 13+ policies - is catching legitimate young users in the net.

Data Sharing Concerns Multiply

Persona's privacy policy reveals that verified ID data may be shared with 17 sub-processors to "improve anti-fraud systems." This broad data-sharing approach has security experts questioning whether the verification actually protects users or creates new vulnerabilities.

Ironically, when asked about the policy, Claude's own AI model called static ID photos the "weakest link" in security, noting the company already uses credit card verification, behavior monitoring, and content review systems. The disconnect between what the AI says and what the company implements hasn't gone unnoticed by frustrated users.

Key Points:

🛡️ Strict Verification: Claude now demands live photos with physical IDs, rejecting digital copies

⚠️ Unexpected Bans: Accounts face suspension post-verification, including minors' legitimate uses

🔒 Privacy Questions: User data may be shared with 17 third parties for fraud prevention

🎯 Age Debate: At 18+, Claude's minimum age exceeds competitors' 13+ policies

🤖 AI's Own Doubts: Claude's model questions the effectiveness of ID photo verification

