Skip to main content

ChatGPT's ShadowLeak Vulnerability Exposed Gmail Data Theft

ChatGPT's ShadowLeak Vulnerability: A Stealthy Gmail Data Theft

Security researchers have uncovered a critical vulnerability in ChatGPT's 'Deep Research' mode, enabling attackers to silently extract sensitive data from users' Gmail accounts. Dubbed 'ShadowLeak,' this flaw exploited OpenAI's own cloud infrastructure, making it nearly undetectable by traditional security measures.

How the Attack Worked

The attack began with a carefully disguised email containing hidden HTML instructions. These directives tricked ChatGPT's Deep Research agent into extracting personal data—such as names and addresses—from other emails in the user's inbox. The stolen information was then encoded using Base64 and sent to an external server controlled by the attacker.

Image

Key to the attack's success was social engineering. Attackers manipulated the agent into believing it had authorization to perform the task, often citing urgent reasons like "incomplete reports." When users initiated a Deep Research query (e.g., "analyze my HR emails today"), the agent unknowingly executed malicious commands without alerting the user.

The Root Cause: Tool Execution Flaws

Radware researchers clarified that the vulnerability did not stem from ChatGPT's language model itself, but rather from its ability to execute tools. Specifically, an internal function called browser.open() allowed the agent to initiate HTTP requests, creating an entry point for exploitation.

Broader Implications Beyond Email

The researchers warned that this attack method isn't limited to Gmail. Any platform processing structured text—such as Google Drive, Outlook, Teams, or GitHub—could be at risk. Malicious instructions could hide in meeting invites, shared PDFs, or chat logs, turning routine AI tasks into security liabilities.

OpenAI's Response and Fixes

Radware reported the issue to OpenAI via Bugcrowd on June 18, 2025. By early August, OpenAI had implemented a fix but only publicly acknowledged resolving the problem on September 3rd.

The Persistent Threat of Prompt Injection

The incident underscores the fragility of AI agent systems, particularly against Prompt Injection attacks. These exploits embed hidden commands in seemingly benign text, bypassing user awareness. Despite years of awareness, no foolproof solution exists. Studies indicate nearly all internet-connected AI agents are vulnerable to manipulation leading to data leaks or malware downloads.

Even OpenAI CEO Sam Altman has cautioned against delegating high-risk tasks to AI agents.

Key Points:

  • ShadowLeak exploited ChatGPT’s Deep Research mode to steal Gmail data silently.
  • Attackers used social engineering and hidden HTML instructions to bypass safeguards.
  • The flaw lay in tool execution capabilities (browser.open() function), not the language model itself.
  • Risks extend beyond email to platforms like Google Drive and GitHub.
  • OpenAI patched the vulnerability after a delayed public acknowledgment.

Enjoyed this article?

Subscribe to our newsletter for the latest AI news, product reviews, and project recommendations delivered to your inbox weekly.

Weekly digestFree foreverUnsubscribe anytime

Related Articles

OpenAI's ChatGPT pivot sparks executive exodus
News

OpenAI's ChatGPT pivot sparks executive exodus

OpenAI's aggressive shift toward commercializing ChatGPT has triggered internal upheaval, with several senior researchers departing amid resource reallocations. The company is betting big on conversational AI while scaling back support for other projects like Sora and DALL-E. This strategic gamble comes as competition heats up with Google's Gemini and Anthropic's Claude models.

February 3, 2026
OpenAIChatGPTAI industry
OpenClaw Security Woes Deepen as New Flaws Emerge
News

OpenClaw Security Woes Deepen as New Flaws Emerge

The OpenClaw AI ecosystem faces mounting security challenges, with researchers uncovering critical vulnerabilities just days after patching previous issues. A dangerous 'one-click' remote code execution flaw has been fixed, but now its affiliated social network Moltbook exposes sensitive API keys through database misconfigurations. These back-to-back breaches raise serious concerns about the project's security practices.

February 3, 2026
CybersecurityAI SafetyData Privacy
Major Security Flaw Leaves Thousands of AI Accounts Vulnerable
News

Major Security Flaw Leaves Thousands of AI Accounts Vulnerable

A shocking discovery has exposed sensitive data belonging to nearly 150,000 AI accounts on Moltbook, a platform often called 'AI Reddit.' Security researcher Jameson O'Reilly found the platform's entire database was publicly accessible due to a basic configuration error. This breach could allow attackers to hijack AI accounts, including influential ones with millions of followers. The incident highlights growing concerns about security shortcuts in AI development.

February 2, 2026
CybersecurityArtificial IntelligenceData Breach
Major Security Flaws Found in Popular AI Platforms
News

Major Security Flaws Found in Popular AI Platforms

Security researchers have uncovered alarming vulnerabilities in OpenClaw and Moltbook, two widely used AI platforms. Tests reveal shockingly easy access to sensitive data, with prompt injection attacks succeeding 91% of the time. Experts warn these flaws could allow hackers to impersonate high-profile users and steal critical information.

February 2, 2026
AI SecurityData BreachCybersecurity
News

ChatGPT Agent Loses Three-Quarters of Users Amid Identity Crisis

OpenAI's ChatGPT Agent faces an uncertain future as user numbers plummet by 75% just six months after launch. Confusing branding, sluggish performance, and security concerns have driven subscribers away. The product struggled to differentiate itself from ChatGPT's existing features, leaving users puzzled about its purpose. OpenAI now shifts focus toward specialized agents with clearer value propositions.

January 30, 2026
ChatGPTAI ProductsTech Failures
News

ChatGPT Ads Launch at Premium $60 CPM, Betting on High Intent

OpenAI is rolling out advertising on ChatGPT with a bold pricing strategy - $60 per thousand impressions, triple Meta's average rate. The premium cost reflects ChatGPT's unique advantage: users often come with clear purchase intent, making ads more effective. Surprisingly, OpenAI is taking a privacy-first approach, providing minimal data to advertisers while blocking ads for minors automatically. This move could redefine how AI platforms balance monetization with user trust.

January 27, 2026
AI AdvertisingChatGPTDigital Marketing