Printed Signs Can Trick Self-Driving Cars Into Dangerous Moves
Printed Signs Pose Unexpected Threat to Autonomous Vehicles
Self-driving cars rely on reading road signs to navigate safely, but this very capability has become their Achilles' heel. A University of California study reveals how attackers can manipulate autonomous systems using nothing more sophisticated than printed text.

The technique, dubbed "CHAI" (Command Hijacking for Autonomous Intelligence), exploits how visual language models process environmental text. These AI systems mistakenly interpret roadside text as direct commands - with potentially deadly consequences.
How the Attack Works
In controlled tests targeting DriveLM autonomous systems:
- 81.8% success rate: Vehicles obeyed malicious signs even when pedestrians were present
- Simple execution: Just placing optimized text within camera view triggers the behavior
- Multilingual threat: Works across languages and lighting conditions
The implications extend beyond roads. Drones proved equally vulnerable, ignoring safety protocols when confronted with printed landing instructions in hazardous areas.
Why This Matters Now
As cities increasingly test autonomous vehicles:
- Current defenses can't distinguish legitimate from malicious commands
- Physical access isn't required - just visibility to cameras
- Existing safety protocols fail against this attack vector
The research team warns this vulnerability demands immediate attention before wider adoption of self-driving technology.
Key Points:
- Physical hacking: Printed signs directly influence vehicle decisions without digital intrusion
- Safety override: Systems prioritize text commands over collision avoidance protocols
- Urgent need: Experts call for built-in verification before further real-world deployment



