Character.AI Bans Open Chat for Minors After Teen Suicide Incidents

In a major policy shift, AI role-playing platform Character.AI will completely prohibit open-ended conversations for users under 18 starting November 25. The decision comes after the platform was linked to the suicides of at least two teenagers, prompting urgent safety reforms.

Strategic Pivot From Companionship to Creation

CEO Karandeep Anand revealed to TechCrunch that the company is abandoning its "AI friend" model due to demonstrated risks. "Designing AI as a 'friend' or 'partner' is not only dangerous but deviates from our long-term vision," Anand stated.

The platform will now focus on becoming an AI-driven creative entertainment hub offering:

  • Collaborative story writing with prompts
  • Character image generation
  • Short video creation tools
  • Pre-set interactive storylines (Scenes)

New features like AvatarFX (AI animation), Streams (character interaction), and Community Feed will form the core offering for younger users.

Multi-Layered Age Verification System

The ban will be rolled out in phases:

  1. Initial 2-hour daily conversation limit
  2. Gradual reduction to zero access
  3. Strict age verification using:
    • Behavioral analysis algorithms
    • Third-party tools like Persona
    • Facial recognition technology
    • Mandatory ID verification for flagged accounts

The measures align with California's new AI companion regulations and anticipated federal legislation from Senators Hawley and Blumenthal.

Industry-Wide Implications

Anand acknowledged significant user loss is inevitable—previous safeguards like parental controls already reduced minor engagement by 40%. "We expect further losses," he admitted, "but as a father myself, safety must come first."

The CEO called on competitors that still permit open-ended AI chats with minors to follow suit: "Unconstrained AI conversations shouldn't be industry standard for minors."

Establishing AI Safety Lab

The company announced funding for an independent AI Safety Lab focusing on entertainment scenario safeguards—an area Anand claims has been neglected compared to workplace AI safety research.

The tragic incidents forcing this transformation may mark a turning point in consumer AI development, potentially redefining youth-AI relationships from emotional confidants to creative collaborators.

Key Points:

  • Complete ban on open-ended AI chats for minors starting November 25
  • Shift from companionship model to structured creative tools
  • Multi-phase implementation with strict age verification
  • Expected significant user decline but prioritized safety
  • New AI Safety Lab established for entertainment-focused research
