Claude's New ID Check: What It Means for AI Users
Anthropic has rolled out identity verification requirements for select features of its AI assistant Claude, marking a significant shift in how users interact with the platform. The announcement has set tech forums buzzing with questions about privacy, accessibility, and the future of AI regulation.
How the Verification Works
When accessing certain advanced features or during routine platform checks, users may now encounter a prompt requesting identity confirmation. Subscribers to Claude's premium Max plan report being among the first to experience the new requirement.
The verification process involves:
- Presenting a government-issued photo ID (passport, driver's license, or national ID)
- Taking a real-time selfie for facial comparison
- Completing the entire process within about five minutes
"We need to know who's using powerful technology," an Anthropic spokesperson explained. The company emphasizes that screenshots or digital copies won't suffice - only physical documents meet their standards.
Privacy Concerns and Protections
Anthropic has partnered with identity verification specialist Persona Identities to handle the sensitive process. Anthropic offers several assurances about data handling:
- No training data: Verification information won't be used to improve Claude's models
- Limited sharing: Data stays between the user, Persona, and Anthropic unless legally required
- No marketing: Information won't be shared with third parties for advertising
Still, some users express unease. "I signed up for an AI assistant, not a background check," remarked one Reddit user in a thread discussing the changes.
Who Might Get Locked Out?
The new system comes with strict rules that could leave some users without access:
- Repeated policy violators
- Accounts created in unsupported regions
- Those under 18 years old
- Service term violators
This has led to speculation about geographic restrictions, though Anthropic maintains the measures aim solely at responsible AI use.
Key Points
- New requirements: ID checks now part of accessing certain Claude features
- Process details: Real-time selfie and physical ID required
- Privacy promises: Data not used for training or marketing
- Access concerns: Some users worry about exclusion and surveillance
- Company stance: Anthropic says verification enables responsible AI use