
Claude's Strict ID Checks Leave Users Feeling Watched and Worried

Claude Tightens Security - But Users Feel the Squeeze

Anthropic's AI assistant Claude has implemented what may be the strictest identity verification in the industry - and it's not sitting well with users. The new system requires a real-time photo of you holding physical government identification like a passport or driver's license. Scans or digital IDs won't cut it.


Verification or Vigilance?

While the process through third-party provider Persona takes just five minutes, many users report their accounts were suspended shortly after verification. Claude's FAQ lists several suspension triggers: repeated policy violations, accessing from unsupported locations, terms of service breaches, and crucially - being under 18.

"My 15-year-old son had built a successful game development business using Claude," shared user llm_nerd. "He was earning more than me until his account got flagged and suspended for being underage. All we got was a refund notice saying we violated their child policy."

The Privacy Paradox

What's raising eyebrows is Persona's privacy policy, which allows sharing data with 17 subcontractors. While intended to combat fraud, this data-sharing web leaves many uneasy about where their sensitive information might end up.

Ironically, when quizzed about the security measures, Claude's own Opus 4.6 model described static ID photos as the "weakest link" in the security chain, suggesting that Anthropic's existing three-layer protection (payment verification, behavior monitoring, and content review) should be sufficient.

Age Limit Backlash

The 18+ age restriction seems particularly out of step when competitors such as OpenAI's ChatGPT and Google's Gemini set their minimum at 13. Online commentators joke that the policy will simply encourage teens to "hack" the system by posing as adults to slip past the checks.

As one Reddit user put it: "They want my ID to 'protect' me, but won't protect my data from half their business partners? Something doesn't add up."

Key Points:

🔐 Stringent Verification: Live ID photos required, no digital copies accepted

⚠️ Suspension Surprises: Many users report accounts terminated post-verification

👶 Adult-Only Access: 18+ policy catching young entrepreneurs in the crossfire

🤝 Data Sharing Concerns: Verification provider uses 17 sub-processors

🤖 AI's Own Doubts: Claude's model questions effectiveness of ID photo requirement


Related Articles

Anthropic's Top Product Chief Leaves Figma Board as AI Threatens Design Software
News

Anthropic's Chief Product Officer Mike Krieger has stepped down from Figma's board, sparking industry speculation about the AI firm's growing ambitions in design tools. The move comes as Anthropic prepares to launch its Opus 4.7 model with native design capabilities, potentially positioning it as a direct competitor to Figma. This development has sent ripples through the tech sector, raising questions about AI's growing encroachment on traditional software domains and the future of specialized design platforms.

April 17, 2026
Anthropic · Figma · AI design tools
Claude 4.7 Dials Back the Bragging, Focuses on Getting Things Right
News

Anthropic's latest Claude model takes a surprising turn - trading raw intelligence for rock-solid reliability. Version 4.7 makes fewer guesses and admits more mistakes, while still delivering impressive benchmark gains. Early testers describe it as 'the colleague who won't let you make bad decisions' rather than just a smarter chatbot. But this dependability comes at a cost - the model thinks longer and burns through more computing power on complex tasks.

April 17, 2026
Claude AI · Anthropic · AI reliability
AI Coding Assistants Clash: OpenAI's Codex Upgrade Takes On Anthropic's Claude
News

The battle for dominance in AI-powered coding tools heats up as OpenAI unveils major upgrades to Codex, introducing background operation and browser integration. Meanwhile, Anthropic's Claude Code continues gaining enterprise traction. This latest volley brings enhanced memory features, image generation, and flexible pricing - pushing AI programming assistants into new territory.

April 17, 2026
AI programming · OpenAI · developer tools
Claude AI Gets Smarter: Opus 4.7 Boosts Coding and Creativity
News

Anthropic's latest AI model, Claude Opus 4.7, brings notable upgrades that make it better at handling complex coding tasks and visual content. The new version can process larger images, remember conversation details more effectively, and includes built-in security features to block risky requests. While maintaining the same pricing as its predecessor, users will need to tweak their prompts to fully harness its enhanced capabilities. Available across multiple platforms, Opus 4.7 could change how developers work with AI.

April 17, 2026
AI development · Anthropic · machine learning
Critical Flaw in AI Protocol Leaves 200,000 Servers Vulnerable
News

A shocking security report reveals dangerous vulnerabilities in Anthropic's widely used MCP protocol, putting over 200,000 AI servers at risk of remote attacks. The design flaw allows execution of unverified system commands, affecting all major programming languages. Despite being notified months ago, Anthropic has done little to address what researchers call an architectural-level threat.

April 16, 2026
AI Security · MCP Flaw · Cybersecurity
Anthropic's Secretive Project Glasswing: What Vulnerabilities Did It Really Find?
News

Anthropic's ambitious Project Glasswing enlisted tech giants like Amazon and Google to test its AI model for security flaws. But months after launch, the project's actual discoveries remain shrouded in mystery. While researchers found 40 potential vulnerabilities, only one has been definitively linked to Glasswing. As we await Anthropic's July report, questions linger about what this powerful AI model can truly detect - and whether companies are acting fast enough on its findings.

April 16, 2026
AI Security · Anthropic · Cybersecurity