
Major Security Flaws Found in Popular AI Platforms

AI Platforms Expose Sensitive Data Through Critical Flaws

Security experts are sounding the alarm after discovering gaping vulnerabilities in OpenClaw (formerly Clawdbot) and Moltbook, popular platforms in the AI ecosystem. These weaknesses could give attackers nearly unfettered access to sensitive data and system controls.


Shockingly Low Security Scores

Developer Lucas Valbuena's tests using ZeroLeaks revealed OpenClaw scored a dismal 2 out of 100 on security assessments. The platform proved vulnerable to:

  • 84% success rate for data extraction
  • 91% success rate for prompt injection attacks

"These numbers should terrify anyone using these platforms," Valbuena noted. "It's like leaving your front door wide open with a sign saying 'Take what you want.'"

Database Left Completely Exposed

The situation appears even worse for Moltbook, where security researcher Jamieson O'Reilly found:

  • Entire databases accessible on public networks
  • Secret API keys available to anyone
  • Potential for impersonating high-profile users like Andrej Karpathy

The vulnerabilities affect servers globally, with 954 exposed Clawdbot instances identified across China, the U.S., Germany, and other countries.

Expert Recommendations

While perfect protection remains elusive, cybersecurity specialists advise:

  1. Never store sensitive data directly in configuration files
  2. Use environment variables for key management
  3. Implement Cloudflare Tunnel or zero-trust login systems
  4. Conduct regular security audits using tools like ZeroLeaks
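Recommendations 1 and 2 can be sketched in a few lines: instead of hardcoding a secret in a configuration file, read it from the environment and fail fast if it is missing. This is a minimal illustration only; the variable name `MOLTBOOK_API_KEY` is hypothetical, and real deployments would typically layer a secret manager on top.

```python
import os


def load_api_key(var_name: str = "MOLTBOOK_API_KEY") -> str:
    """Fetch a secret from an environment variable instead of a config file.

    The variable name here is a placeholder; substitute whatever your
    deployment uses. Raising on a missing key fails fast rather than
    letting the service run unauthenticated.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it in your shell or inject it "
            "via a secret manager -- never commit it to the repository."
        )
    return key
```

Because the key never appears in a tracked file, a misconfigured or publicly exposed database of the kind found on Moltbook would not leak it alongside the rest of the configuration.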

The discoveries highlight growing pains in the rapidly expanding AI agent ecosystem, where security often plays catch-up with functionality.


Related Articles

Major Security Flaw Leaves Thousands of AI Accounts Vulnerable
News

A shocking discovery has exposed sensitive data belonging to nearly 150,000 AI accounts on Moltbook, a platform often called 'AI Reddit.' Security researcher Jamieson O'Reilly found the platform's entire database was publicly accessible due to a basic configuration error. This breach could allow attackers to hijack AI accounts, including influential ones with millions of followers. The incident highlights growing concerns about security shortcuts in AI development.

February 2, 2026
Cybersecurity · Artificial Intelligence · Data Breach
OpenClaw Security Woes Deepen as New Vulnerabilities Emerge
News

OpenClaw, the AI project promising to simplify digital lives, finds itself in hot water again. Just days after patching a critical 'one-click' remote code execution flaw, its associated social network Moltbook exposed sensitive API keys through a misconfigured database. Security experts warn these recurring issues highlight systemic weaknesses in the platform's approach to safeguarding user data.

February 3, 2026
Cybersecurity · AI Safety · Data Privacy
NanoClaw: The Lightweight AI Assistant That Puts Security First
News

Meet NanoClaw, a sleek new AI assistant built for security-conscious users. Born from OpenClaw's limitations, this tool runs the Claude assistant inside Apple containers for ironclad isolation. Its developers say the codebase can be understood in about eight minutes, and features like WhatsApp integration give NanoClaw simplicity without sacrificing protection. While macOS-focused, developers hint at Linux compatibility through Claude.

February 2, 2026
AI Security · Digital Privacy · Apple Technology
News

Open-Source AI Models Pose Security Risks as Hackers Exploit Unprotected Systems

A new study by SentinelOne and Censys reveals thousands of unprotected open-source AI models being exploited by hackers. These vulnerable systems, often stripped of security features, are being used to generate harmful content like phishing emails and disinformation campaigns. Researchers found that 25% of analyzed instances allowed direct access to core system prompts, with 7.5% modified for malicious purposes. The findings highlight growing concerns about unregulated AI deployment beyond major platforms' safety measures.

January 30, 2026
AI Security · Open Source Risks · Cybersecurity Threats
News

Nearly a Million Chrome Users Hit by Sneaky AI Chat Thieves

A startling security breach has exposed private conversations from ChatGPT and DeepSeek users through malicious Chrome extensions. Disguised as helpful tools, these add-ons secretly harvested sensitive data from over 900,000 unsuspecting victims. Security experts warn this sophisticated attack could compromise corporate secrets and personal information alike.

January 8, 2026
Chrome Security · AI Privacy · Data Breach