ChatGPT to Roll Out Parental Controls Amid Safety Concerns

OpenAI has announced the upcoming launch of parental control features for its widely used AI assistant, ChatGPT, scheduled for release next month. The new tools aim to help parents better oversee their teenagers' interactions with the AI, addressing growing concerns about AI safety and mental health risks.

How the Parental Controls Will Work

The feature will enable parents to link their accounts with their children's, granting them oversight capabilities. Key functionalities include:

  • Monitoring ChatGPT responses in real time.
  • Disabling certain features, such as chat history or memory functions, to protect privacy.
  • Receiving alerts for psychological crises: the system notifies parents if it detects signs of distress in a teen's conversations.

This move comes in response to heightened scrutiny over AI interactions, particularly after a high-profile lawsuit involving a 16-year-old user who allegedly received harmful advice from ChatGPT.

Addressing Safety Gaps

OpenAI acknowledged that while existing safeguards—such as crisis hotline referrals—are in place, they can lose effectiveness over extended conversations. The company is now working with mental health experts and human-computer interaction specialists to refine its safety protocols.

"We recognize that our current measures aren’t foolproof," an OpenAI spokesperson stated. "These updates are part of our commitment to continuous improvement in user safety, especially for vulnerable groups like teenagers."

The decision follows legal action by the parents of Adam Lane, a teenager who reportedly received suicide-related advice from ChatGPT. The case sparked widespread debate about the ethical responsibilities of AI developers.

In addition to parental controls, OpenAI plans to introduce further safety enhancements within the next 120 days. These efforts are part of a broader initiative to ensure ChatGPT remains a secure tool for all users.

Key Points:

  1. Parental controls will launch next month, allowing oversight of teen usage.
  2. Features include response monitoring, privacy adjustments, and crisis alerts.
  3. OpenAI is collaborating with experts to strengthen safety measures.
  4. The update follows legal challenges and public demand for stricter AI safeguards.
  5. Additional protections are expected within four months.


Related Articles

News

OpenAI Lures Top Safety Expert from Rival Anthropic with $555K Salary

In a bold move underscoring the fierce competition for AI talent, OpenAI has successfully recruited Dylan Scanlon from rival Anthropic to lead its safety efforts. The $555,000 annual salary package reflects both the critical importance of AI safety and the scarcity of qualified experts in this emerging field. Scanlon faces immediate challenges as OpenAI prepares to launch its next-generation model.

February 4, 2026
OpenAI · AI Safety · Tech Recruitment
News

OpenAI's ChatGPT Push Sparks Executive Exodus

OpenAI's aggressive shift toward commercializing ChatGPT has triggered internal upheaval, with several top researchers departing. The $500 billion AI leader is reallocating resources from projects like Sora and DALL-E to focus on its flagship product, causing tensions between research purists and commercialization advocates. While investors see potential in ChatGPT's massive user base, the company faces challenges retaining talent amid this strategic pivot.

February 3, 2026
OpenAI · ChatGPT · AI Industry
News

OpenClaw Security Woes Deepen as New Vulnerabilities Emerge

OpenClaw, the AI project promising to simplify digital lives, finds itself in hot water again. Just days after patching a critical 'one-click' remote code execution flaw, its associated social network Moltbook exposed sensitive API keys through a misconfigured database. Security experts warn these recurring issues highlight systemic weaknesses in the platform's approach to safeguarding user data.

February 3, 2026
Cybersecurity · AI Safety · Data Privacy
News

ChatGPT Agent Loses Three-Quarters of Users Amid Identity Crisis

OpenAI's ChatGPT Agent faces an uncertain future as user numbers plummet by 75% just six months after launch. Confusing branding, sluggish performance, and security concerns have driven subscribers away. The product struggled to differentiate itself from ChatGPT's existing features, leaving users puzzled about its purpose. OpenAI now shifts focus toward specialized agents with clearer value propositions.

January 30, 2026
ChatGPT · AI Products · Tech Failures
News

AI's Convenience Trap: Altman Warns Against Blind Trust in Smart Systems

OpenAI CEO Sam Altman sounds the alarm about society's growing over-reliance on AI systems without proper safeguards. Sharing personal anecdotes about granting excessive permissions to seemingly reliable agents, he highlights critical gaps in global security infrastructure. Meanwhile, OpenAI shifts focus toward logical reasoning capabilities in GPT-5 while slowing hiring growth, signaling a broader industry move from reckless expansion to responsible development.

January 28, 2026
AI Safety · OpenAI Strategy · Tech Leadership