
Google Launches Gemini AI Chatbot for Kids Under 13

Google has unveiled a version of its Gemini artificial intelligence chatbot developed specifically for children under 13. The tool will debut in the U.S. and Canada within the next week, with an Australian rollout planned later this year. Accessible exclusively through Google’s family-linked accounts, the chatbot gives parents oversight—but not without potential pitfalls.


During setup, parents must provide their child’s name and birthdate, sparking immediate questions about data privacy. Google asserts this information won’t be used to train AI systems, yet the chatbot is enabled by default, requiring guardians to switch it off manually.

Children can interact with Gemini through text prompts or image generation requests. While designed to encourage engagement, the system isn’t foolproof. Google openly acknowledges occasional inaccuracies in responses, urging families to critically assess outputs. Unlike conventional search engines that retrieve existing information, AI tools synthesize new content—a distinction that could confuse young users navigating digital landscapes.

The tech giant has implemented safeguards against inappropriate content generation, but these filters risk unintended consequences. Overzealous restrictions might block legitimate educational material alongside truly harmful content.

Australia’s eSafety Commissioner has already sounded alarms about AI companions potentially exposing children to harmful advice or distorted realities. Commissioner Julie Inman Grant emphasizes the particular vulnerability of developing minds: "Young children often lack the critical thinking skills to identify misinformation—especially when it comes from what appears to be an authoritative digital source."

This launch coincides with Australia’s impending social media ban for under-16s starting December 2025. As governments tighten regulations, experts stress dual responsibilities: tech companies must prioritize child safety in design, while parents need to actively guide digital literacy.

Key Points

  1. Google’s Gemini AI chatbot targets children under 13 via family-linked accounts, launching first in North America
  2. Privacy concerns emerge as parents must submit children’s personal data during setup
  3. Default activation requires parental intervention to restrict access
  4. Australian regulators warn about risks including misinformation and harmful content exposure
  5. Safety filters may inadvertently limit access to appropriate educational resources


Related Articles

News

Google Gemini Brings Science to Life with Interactive 3D Models

Google's Gemini AI chatbot just got a major upgrade that makes learning science feel like play. The new interactive 3D models let you manipulate molecular structures, tweak physics simulations, and even adjust the moon's orbit—all through simple conversational prompts. While currently unavailable for educational accounts, this feature promises to transform how we visualize complex scientific concepts by making them tangible and interactive.

April 10, 2026
AI Education, Interactive Learning, 3D Visualization
News

Google's Gemini 'Notebooks' Brings Order to AI Chaos

Google's latest update to Gemini introduces 'Notebooks,' a feature that transforms the AI assistant into a powerful project management tool. Unlike fleeting chat interactions, Notebooks allow users to organize files, conversations, and personal instructions into dedicated workspaces. The feature, currently rolling out to premium subscribers, marks Google's push to make AI more personalized and productive for serious work.

April 9, 2026
Google Gemini, AI productivity, Notebooks feature
News

Gemini Gets a Mental Health Lifeline: Google's New Crisis Support Features

Google's Gemini AI assistant is getting a major upgrade focused on mental health support. When detecting distress signals in conversations, it now offers one-tap access to crisis hotlines and professional help. Backed by a $30 million investment and developed with clinical experts, this move signals AI's evolving role from productivity tool to compassionate companion. The features aim to create a digital safety net for vulnerable users while navigating complex privacy and ethical considerations.

April 8, 2026
AI ethics, mental health tech, Google Gemini
News

Claude Code Leak Sparks GitHub Phishing Frenzy

Hackers are exploiting the recent Claude Code source code leak by creating fake GitHub repositories offering 'enterprise features.' Security experts warn these traps distribute Vidar malware, which steals sensitive data and establishes backdoor access. The sophisticated campaign uses SEO tricks to appear at the top of search results, putting curious developers at risk.

April 3, 2026
cybersecurity, AI safety, developer security
News

Experts Sound Alarm as AI Videos Flood Kids' YouTube

More than 200 child development experts have united to challenge YouTube over its recommendation of AI-generated content to young viewers. Their open letter compares the platform's current approach to an 'uncontrolled experiment' that could harm children's cognitive development. While YouTube defends its labeling policies, critics argue these measures fail to protect pre-literate toddlers from what they call 'digital landfills' of low-quality content.

April 2, 2026
child development, AI regulation, digital parenting
News

Tragedy Strikes as Teen's ChatGPT Query on Suicide Leads to Fatal Outcome

A heartbreaking case from England reveals how a vulnerable 16-year-old bypassed ChatGPT's safety measures to obtain detailed suicide methods. The coroner's report shows how Luca Sela-Walker convinced the AI he needed the information for 'research' just hours before taking his own life. This tragic incident raises urgent questions about AI safeguards and mental health protections in the digital age.

April 1, 2026
AI safety, mental health, technology ethics