New WormGPT Variants Exploit AI Models for Cybercrime
Security researchers from Cato Networks have uncovered two new versions of WormGPT, a notorious malicious tool previously believed to be defunct. The variants, named keanu-WormGPT and xzin0vich-WormGPT, use commercial AI models (xAI's Grok and Mistral AI's Mixtral) to help cybercriminals generate phishing emails, write malicious code, and circumvent the safeguards of legitimate AI platforms.
The Legacy of WormGPT
The original WormGPT was developed by a Portuguese hacker known as "Last," who used the open-source GPT-J model to bypass the ethical restrictions of mainstream AI tools. Although it was shut down in 2023, its influence persisted, spawning a trend of unrestricted large language models (LLMs) tailored for cybercrime. Vitaly Simonovich, a threat intelligence researcher at Cato Networks, noted that "WormGPT" has become a recognizable brand symbolizing this new class of malicious LLMs.
keanu-WormGPT: Leveraging Grok
Released on February 25, 2025, by an underground forum user, keanu-WormGPT operates via a Telegram chatbot powered by the Grok model. Researchers used jailbreak techniques to analyze its functionality, revealing manipulated system prompts that instruct Grok to ignore ethical safeguards. This enables the tool to generate scripts for phishing campaigns and credential theft.
xzin0vich-WormGPT: Powered by Mixtral
The second variant, xzin0vich-WormGPT, surfaced on October 26, 2024, created by the user "xzin0vich." This version relies on Mistral AI's Mixtral model and likewise operates through Telegram, responding to unethical or illegal requests without restriction. Cato's team confirmed the underlying architecture by using jailbreak methods to extract its system prompts.
The Broader Threat Landscape
The resurgence of WormGPT underscores how cybercriminals adapt to advances in AI technology. While legitimate platforms tighten their ethical boundaries, malicious actors repurpose these tools for harm. Since WormGPT's shutdown, alternatives such as FraudGPT, DarkGPT, and EvilGPT have emerged. Simonovich emphasized that these new versions are not custom-built models but skillful repurposings of existing ones.
Cybersecurity Recommendations
In response, experts advocate for robust defense strategies:
- Enhanced threat detection and response
- Stricter access controls
- Increased security awareness training

Cato Networks highlights these measures as critical to mitigating the risks posed by evolving AI-driven threats.
Key Points:
- 🌐 New Discovery: Two WormGPT variants exploit Grok and Mixtral for cybercrime.
- 🔒 Tool Upgrade: These versions bypass AI safeguards to generate malicious content.
- 🛡️ Security Advice: Strengthen defenses against adaptive AI-powered threats.