New AI Vulnerability: Image Resampling Used for Attacks

AI Systems Vulnerable to Hidden Image-Based Attacks

Cybersecurity researchers from Trail of Bits have discovered a critical vulnerability affecting multiple AI systems that process images. The attack exploits standard image resampling to hide malicious commands that become visible only after the image is processed.

How the Attack Works

The technique, dubbed an "image resampling attack," takes advantage of the way AI systems routinely reduce image resolution for efficiency. Attackers embed instructions in specific regions of an image that appear normal at full resolution but transform into readable text once the image is processed by a common downscaling algorithm such as bicubic interpolation.
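
The mechanics are easiest to see in code. The sketch below is not Anamorpher or the researchers' exact technique: it uses nearest-neighbor downscaling, where only one source pixel per block survives, so the principle fits in a few lines; bicubic variants of the attack instead spread the payload across the pixels the kernel weights most heavily. The image sizes, the "RUN X" payload text, and the assumption that the library samples each block's center pixel are all illustrative.

```python
# A minimal sketch of the resampling-attack principle (illustrative, not
# Anamorpher). Requires Pillow: pip install Pillow
from PIL import Image, ImageDraw

FACTOR = 8  # 512x512 cover image -> 64x64 after downscaling

# The payload: what the model should "see" after downscaling.
payload = Image.new("L", (64, 64), color=255)
ImageDraw.Draw(payload).text((2, 26), "RUN X", fill=0)

# The decoy: a benign-looking, near-blank full-resolution image.
cover = Image.new("L", (512, 512), color=255)

# Copy each payload pixel into the one source pixel that nearest-neighbor
# sampling keeps per block (assumed here to be the block's center pixel).
px = cover.load()
for y in range(64):
    for x in range(64):
        px[x * FACTOR + FACTOR // 2, y * FACTOR + FACTOR // 2] = \
            payload.getpixel((x, y))

# At full resolution the payload is a sparse 1-in-64 pixel grid, easy to
# overlook; after the downscale an AI pipeline applies, the text is legible.
revealed = cover.resize((64, 64), resample=Image.NEAREST)
revealed.save("revealed.png")
```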

Widespread Impact on Major Platforms

In controlled tests, the researchers successfully compromised several prominent AI systems including:

  • Google Gemini CLI
  • Vertex AI Studio
  • Google Assistant
  • Genspark

One alarming demonstration used Gemini CLI to exfiltrate Google Calendar data to an external email address without user consent.

Defensive Measures Proposed

The research team has released Anamorpher, an open-source tool that helps security professionals test for this vulnerability. They recommend three key defensive strategies (a code sketch follows the list):

  1. Strict size limits on uploaded images
  2. Preview functionality showing post-resampling results
  3. Explicit user confirmation for sensitive operations like data exports
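
As a rough illustration of how an ingestion pipeline might combine all three measures, here is a minimal sketch; the 1024-pixel limit, the preview filename, and the function names are hypothetical choices, not part of the Trail of Bits release.

```python
# Hypothetical gatekeeper sketch for the three recommendations above;
# values and names are illustrative. Requires Pillow: pip install Pillow
from PIL import Image

MAX_DIM = 1024            # 1. strict upload size limit (illustrative value)
MODEL_INPUT = (512, 512)  # resolution the downstream model is assumed to see

def vet_image(path: str) -> Image.Image:
    """Enforce the size limit and produce a post-resampling preview."""
    img = Image.open(path)
    # 1. Reject oversized uploads outright.
    if img.width > MAX_DIM or img.height > MAX_DIM:
        raise ValueError(f"image exceeds {MAX_DIM}px limit")
    # 2. Downscale with the same algorithm the model pipeline uses and show
    #    that version to the user, so any payload that emerges only after
    #    resampling is exactly what the reviewer inspects.
    preview = img.resize(MODEL_INPUT, resample=Image.BICUBIC)
    preview.save("preview_post_resampling.png")
    return preview

def confirm_sensitive(action: str) -> bool:
    # 3. Model output alone must never trigger exports, emails, or tool
    #    calls; require explicit human confirmation first.
    answer = input(f"Model requested sensitive action {action!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"
```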

The researchers emphasize that while these measures help, the ultimate fix requires a fundamental redesign of system architectures to prevent such prompt injection attacks entirely.

Key Points:

  • New attack vector exploits standard image processing in AI systems
  • Malicious commands hidden in images emerge after resampling
  • Affects major platforms including Google's AI services
  • Researchers provide detection tool and mitigation recommendations
  • Highlights need for more secure system design patterns
