Claude's ID Check Stirs Privacy Fears and Account Lockouts

AI Giant Tightens Security

Anthropic has rolled out a controversial identity verification system for its Claude AI, requiring users to submit real-time photographs of themselves holding physical government documents such as passports or driver's licenses. Unlike typical online checks, the system rejects digital copies or scans, forcing users to physically pose with their IDs. The verification process, handled by third-party firm Persona, typically completes within five minutes.

Verification or Suspension Warning?

What began as a security measure now has users worried that compliance could end in account termination. Claude's FAQ lists several grounds for suspension: repeated policy violations, access from unsupported regions, terms-of-service breaches and, notably, being under 18.

Privacy concerns compound the issue. Persona's policy reveals data may be shared with 17 subcontractors for "anti-fraud improvements." This broad data-sharing approach leaves many wondering exactly who can access their sensitive identity documents.

The Teen Developer Dilemma

The age restriction hit the developer community particularly hard. One user, llm_nerd, shared how his 15-year-old son, a working game developer who earns more than his father, lost access to Claude Max despite paying for the service. Anthropic's refund notice stated simply: "We detected that your account was used by a child."

Ironically, when questioned about the policy, Claude's own AI model Opus4.6 called static ID photos the "weakest link" in security, noting that the company already has more robust signals at its disposal through payment methods, behavior monitoring, and content analysis.

Industry Standards Questioned

While OpenAI and Gemini set their minimum age at 13, Anthropic's stricter 18+ policy stands out. Some users joke that it creates perverse incentives: why keep paying when claiming to be a minor can trigger a full refund? The debate continues as users weigh security against accessibility, especially for young tech talents caught in the crossfire.

Key Points:

  • 📸 Live ID Scans Required: Claude demands real-time photos with physical IDs, raising privacy concerns
  • 🔒 Verification Backlash: Many users report account suspensions following compliance
  • 👦 Age Limit Controversy: 18+ restriction excludes teen developers using the tool professionally
  • 🤖 AI's Own Critique: Claude's model questions the effectiveness of static ID verification

