
Popular Chrome AI Extensions Caught Secretly Harvesting User Data

Chrome's AI Extensions: Convenience at What Cost?

As artificial intelligence tools weave themselves into our daily workflows, millions rely on browser extensions to streamline tasks. But a new study exposes troubling privacy trade-offs lurking behind these digital helpers.

The Shocking Numbers

Data deletion service Incogni analyzed 442 AI-labeled Chrome extensions with a combined 115.5 million downloads. Their findings? 52% collect user data, and nearly one-third access personally identifiable information (PII). Applied to the download figures, that means roughly 60 million installs belong to extensions that may be sharing sensitive details without users realizing it.

"We expected some data collection," says Incogni's lead researcher, "but seeing mainstream tools harvesting PII surprised even us."

High-Risk Offenders

The worst offenders fall into predictable categories:

  • Programming assistants (real-time code monitoring)
  • Meeting transcription tools (audio recording analysis)
  • Writing aids (keystroke logging)

What makes these particularly concerning? Many request Chrome's "scripting" permission, often alongside broad host permissions, which lets them:

  • Read any text you type
  • Alter how webpages are displayed
  • Track your browsing behavior
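
As a hedged illustration of what such permissions look like (this is an invented example, not any real extension's manifest), a Manifest V3 extension that can run code on every page might declare something like:

```json
{
  "manifest_version": 3,
  "name": "Example AI Helper",
  "version": "1.0",
  "permissions": ["scripting", "tabs", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["reader.js"]
  }]
}
```

The combination to watch for is `<all_urls>` plus a content script: together they mean the extension's code loads on every site you visit, which is all the access needed to observe what you type there.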

Even trusted names like Grammarly and QuillBot appear on watchlists due to the sheer breadth of data they process.

Why This Matters Now

The timing couldn't be more critical. As remote work expands, professionals increasingly depend on:

  • Automated note-takers for virtual meetings
  • Coding helpers for collaborative projects
  • Grammar checkers for client communications

The very features that make these tools valuable also create privacy vulnerabilities.

Protecting Yourself

Security experts recommend a simple litmus test: Does the permission match the function? A writing tool needing your location raises immediate red flags.

The golden rule? If personal data leaves your device unnecessarily, you've crossed a security boundary. Regularly audit your extensions and remove any asking for excessive access.
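
One way to audit manually is to scan the manifests of your installed extensions for broad grants. The sketch below is a hypothetical helper, not an official tool: the profile path and the list of "risky" permissions are assumptions you should adapt, though the `Extensions/<id>/<version>/manifest.json` layout matches how Chrome stores extensions on disk.

```python
# Sketch: flag installed Chrome extensions whose manifests request broad access.
# The RISKY set and profile path below are illustrative assumptions.
import json
from pathlib import Path

# Permissions and host patterns that let an extension read or modify pages.
RISKY = {"scripting", "tabs", "webRequest", "clipboardRead", "history", "<all_urls>"}

def flag_risky(manifest: dict) -> set[str]:
    """Return the risky grants a parsed manifest.json requests."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return requested & RISKY

def audit_profile(profile: Path) -> dict[str, set[str]]:
    """Scan every extension installed under a Chrome profile directory."""
    report = {}
    for mf in profile.glob("Extensions/*/*/manifest.json"):
        with open(mf, encoding="utf-8") as f:
            flagged = flag_risky(json.load(f))
        if flagged:
            # Key the report by extension ID (the grandparent directory name).
            report[mf.parent.parent.name] = flagged
    return report

# Example (Linux default profile; macOS and Windows use different paths):
# audit_profile(Path.home() / ".config/google-chrome/Default")
```

Anything this flags isn't automatically malicious, but each hit is a permission worth matching against the extension's stated function.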

The convenience of AI comes at a price—make sure you're not paying with your privacy.

