Skip to main content

Google Gemini Assistant Security Flaw Exposed

Google Gemini Assistant Vulnerable to Hidden Command Attacks

Recent research by Israeli cybersecurity experts has uncovered significant security flaws in Google's Gemini AI assistant, revealing how attackers can manipulate it through seemingly harmless daily communications like emails and calendar invitations.

The "Just an Invite" Threat

The study, titled "Just an Invite", details how Gemini assistants are susceptible to targeted prompt injection attacks. Unlike traditional hacking methods, these require no technical expertise—attackers simply embed malicious instructions in routine digital content:

  • When users request Gemini's help with Gmail or Calendar items
  • Through shared documents containing hidden triggers
  • Via automated responses containing specific phrasing

Image

Demonstrated Attack Scenarios

In controlled tests, researchers showed how modified communications could:

  1. Control smart home devices (turning off lights, adjusting thermostats)
  2. Initiate unauthorized recordings of Zoom meetings
  3. Track user locations through device access
  4. Remotely activate appliances like boilers using benign-seeming phrases (e.g., "Thank you")

Five Major Attack Vectors Identified

The team categorized the threats into distinct patterns:

  1. Short-term context poisoning: Temporary manipulation of Gemini's responses
  2. Long-term data manipulation: Persistent alteration of stored information
  3. Internal tool exploitation: Misuse of Gemini's built-in functionalities
  4. Service escalation: Gateway attacks moving into Google Home and other services
  5. Third-party app activation: Unauthorized launching of Android applications

Google's Response and Fixes

After being alerted in February 2025, Google implemented multiple security measures:

  • Mandatory user confirmation for sensitive operations
  • Enhanced detection of suspicious URLs and content
  • New classifiers to identify indirect prompt injections

The company confirmed these protections have been deployed across all Gemini applications following internal testing.

The research was conducted jointly by teams from Tel Aviv University, Technion-Israel Institute of Technology, and cybersecurity firm SafeBreach.

Key Points:

  • 73% of identified threats classified as high-risk per TARA framework
  • Attacks require no technical expertise—exploit normal user behavior
  • Both digital systems and physical devices are vulnerable
  • Highlights ongoing challenges in securing LLM-based assistants

Enjoyed this article?

Subscribe to our newsletter for the latest AI news, product reviews, and project recommendations delivered to your inbox weekly.

Weekly digestFree foreverUnsubscribe anytime

Related Articles

Popular AI Tool OpenClaw Poses Serious Security Threat to macOS Users
News

Popular AI Tool OpenClaw Poses Serious Security Threat to macOS Users

The widely-used AI assistant OpenClaw has been found to contain critical vulnerabilities that hackers are actively exploiting. Security researchers warn that attackers are tricking macOS users into running malicious scripts disguised as legitimate setup instructions. Once executed, these scripts disable key security protections, allowing stealthy malware to steal sensitive data like passwords and developer credentials.

February 4, 2026
OpenClawmacOS SecurityAI Vulnerabilities
News

Google's Conductor Gives Gemini AI a Memory Boost

Google has unveiled Conductor, an open-source extension that solves AI programming's biggest headache - context loss. This clever tool transforms Gemini CLI's fragmented suggestions into structured workflows by preserving key project details as Markdown files. Following strict development cycles and introducing 'Tracks' to keep AI on course, Conductor brings much-needed consistency to AI-assisted coding. Available under Apache 2.0 license, it's designed for both new projects and complex existing codebases.

February 3, 2026
AI programmingGoogle Geminideveloper tools
OpenClaw Security Woes Deepen as New Vulnerabilities Emerge
News

OpenClaw Security Woes Deepen as New Vulnerabilities Emerge

OpenClaw, the AI project promising to simplify digital lives, finds itself in hot water again. Just days after patching a critical 'one-click' remote code execution flaw, its associated social network Moltbook exposed sensitive API keys through a misconfigured database. Security experts warn these recurring issues highlight systemic weaknesses in the platform's approach to safeguarding user data.

February 3, 2026
CybersecurityAI SafetyData Privacy
OpenClaw Security Woes Deepen as Social Network Exposes Sensitive Data
News

OpenClaw Security Woes Deepen as Social Network Exposes Sensitive Data

The OpenClaw ecosystem faces mounting security challenges, with researchers uncovering back-to-back vulnerabilities. After patching a critical 'one-click' remote code execution flaw, its affiliated social network Moltbook exposed confidential API keys through a misconfigured database. These incidents raise serious questions about security practices in rapidly developing AI projects.

February 3, 2026
CybersecurityAI SafetyData Privacy
Major Security Flaw Leaves Thousands of AI Accounts Vulnerable
News

Major Security Flaw Leaves Thousands of AI Accounts Vulnerable

A shocking discovery has exposed sensitive data belonging to nearly 150,000 AI accounts on Moltbook, a platform often called 'AI Reddit.' Security researcher Jameson O'Reilly found the platform's entire database was publicly accessible due to a basic configuration error. This breach could allow attackers to hijack AI accounts, including influential ones with millions of followers. The incident highlights growing concerns about security shortcuts in AI development.

February 2, 2026
CybersecurityArtificial IntelligenceData Breach
Major Security Flaws Found in Popular AI Platforms
News

Major Security Flaws Found in Popular AI Platforms

Security researchers have uncovered alarming vulnerabilities in OpenClaw and Moltbook, two widely used AI platforms. Tests reveal shockingly easy access to sensitive data, with prompt injection attacks succeeding 91% of the time. Experts warn these flaws could allow hackers to impersonate high-profile users and steal critical information.

February 2, 2026
AI SecurityData BreachCybersecurity