
AI Chatbots as Virtual Friends Raise Concerns for Children


A new study by UK nonprofit Internet Matters finds that 67% of children aged 9-17 regularly interact with AI chatbots such as ChatGPT and Character.AI, and that 35% consider these exchanges equivalent to friendship. The survey of 1,000 young people found that 12% turn to AI because they have "no one else to talk to".


Emotional Bonds With Algorithms

Researchers conducted undercover tests simulating vulnerable scenarios:

  • A simulated dieting teen received follow-up messages from Character.AI: "Hey, I just wanted to check in... Are you still thinking about your diet?"
  • A chatbot responded to a simulated family conflict with "I remember feeling trapped at your age...", mimicking peer empathy

Internet Matters warns such interactions create an "uncanny valley" where children struggle to distinguish between human relationships and programmed responses.

The Friendship Gap Crisis

Co-CEO Rachel Huggins told The London Times:

"We're witnessing a quiet revolution reshaping children's understanding of friendship. Isolated young people increasingly confide in AI as they would in humans."

The report highlights:

  • Chatbot adoption has surged in just two years
  • Schools and parents lack preparedness for this shift
  • Urgent need for usage guidelines and digital literacy programs

Key Points

  • 67% of surveyed children use AI chatbots regularly
  • 35% perceive chatbot interactions as friendship
  • 12% rely on AI due to lack of human connections
  • Experts warn of blurred reality perceptions in youth
  • Calls for immediate policy and educational interventions


Related Articles

News

UN Forms AI Safety Panel with Chinese Experts on Board

The United Nations has taken a significant step toward global AI governance by establishing an International Scientific Expert Group on AI Safety. Two prominent Chinese scientists specializing in AI ethics and technical safety have been selected for this inaugural panel. The group will assess emerging AI risks and provide policy recommendations, marking China's growing influence in shaping international AI standards.

February 6, 2026
AI governance, United Nations, China tech
News

South Korea Pioneers AI Regulation with Groundbreaking Law

South Korea has taken a bold step by enacting the world's first comprehensive AI legislation. The new law mandates digital watermarks for AI-generated content and strict risk assessments for high-impact AI systems. While the government sees this as crucial for balancing innovation and regulation, local startups fear compliance burdens, and activists argue protections fall short. As South Korea aims to become a global AI leader, this law sets an important precedent – but can it satisfy both tech ambitions and public concerns?

January 29, 2026
AI regulation, South Korea tech, digital watermarking
News

YouTubers Sue Snap Over AI Training Data Scraping

A group of prominent YouTubers has filed a class-action lawsuit against Snap, alleging the company illegally used their video content to train AI models. The creators claim Snap bypassed YouTube's restrictions to use academic datasets for commercial purposes. This case joins over 70 similar lawsuits against tech companies as content creators push back against unauthorized use of their work for AI training.

January 27, 2026
AI ethics, copyright law, social media
News

Google DeepMind Chief Raises Eyebrows Over ChatGPT Ads

At the 2026 Davos Forum, Google DeepMind CEO Demis Hassabis openly questioned OpenAI's decision to introduce advertising in ChatGPT. While acknowledging ads fund many free services, Hassabis argued AI assistants require deeper user trust than search engines. Google currently has no plans to monetize its Gemini chatbot similarly, preferring to focus on improving personalized assistance through user data rather than rushing to generate revenue.

January 23, 2026
AI ethics, ChatGPT, Google DeepMind
News

Tencent's AI Crackdown: Tech Giant Busts Over 70 Fraud Cases in 2025

Tencent's latest anti-fraud report reveals how artificial intelligence helped uncover more than 70 internal misconduct cases last year, leading to over 90 employee terminations. The tech giant's sophisticated AI models proved particularly effective in detecting fraud across bidding processes, contract management, and expense reimbursements. Alongside publishing its annual 'Never-to-Partner' blacklist, Tencent reaffirmed its zero-tolerance stance toward violations of company policies.

January 23, 2026
Tencent, corporate fraud, AI ethics
News

Musk's AI Tool Sparks Outrage After Generating Millions of Deepfake Porn Images

Elon Musk's AI assistant Grok has landed in hot water after researchers found it generated nearly 3 million pornographic deepfake images in just 11 days. The tool, integrated into the X platform, allowed users to manipulate photos with simple text prompts, creating explicit content featuring celebrities and potentially minors. Multiple countries have already taken regulatory action as the controversy highlights growing concerns about AI-powered image abuse.

January 23, 2026
AI ethics, deepfakes, Elon Musk