Lovable's Security Flaw Sparks Outcry as Platform Points Fingers

Lovable's Data Exposure Debacle: A Timeline of Mishandled Security

A storm of controversy has engulfed AI programming platform Lovable after security researchers uncovered what might be every developer's nightmare - a vulnerability so severe that anyone with a free account could access others' sensitive information. The discovery has sparked heated debates about corporate accountability in tech security breaches.

The Vulnerability That Shouldn't Exist

Researchers sounded alarms when they found that Lovable's API suffered from Broken Object Level Authorization (BOLA) - the system never verified that a requester actually owned the objects being requested. This oversight meant any user with a free account could:

  • View private chat histories
  • Access proprietary source code
  • Obtain database credentials

"It wasn't even hacking," explained one researcher who wished to remain anonymous. "Just five simple API calls and you're in - like walking through an unlocked door marked 'private.'"
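The flaw the researcher describes can be sketched in a few lines. This is a minimal, hypothetical illustration of a BOLA bug - the project data, function names, and fields below are invented for clarity, not taken from Lovable's actual API:

```python
# Hypothetical in-memory project store standing in for a backend database.
PROJECTS = {
    "proj-1": {"owner": "alice", "chat_history": ["private notes"], "db_creds": "alice-secret"},
    "proj-2": {"owner": "bob", "chat_history": ["deploy discussion"], "db_creds": "bob-secret"},
}

def get_project_vulnerable(requester: str, project_id: str) -> dict:
    # BOLA: the handler authenticates the requester but never checks
    # ownership, so any logged-in user can fetch any project by ID.
    return PROJECTS[project_id]

def get_project_fixed(requester: str, project_id: str) -> dict:
    project = PROJECTS[project_id]
    # Object-level authorization: confirm the requester owns this object
    # before returning chat histories, source code, or credentials.
    if project["owner"] != requester:
        raise PermissionError("403: not your project")
    return project
```

In the vulnerable version, "alice" can request "proj-2" and receive Bob's chat history and database credentials - exactly the "unlocked door" the researcher describes. The fix is a single ownership check per object access.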

Lovable's Evolving Explanations

The platform's response has been anything but consistent. Its initial statement attributed the exposure to users' "intentional actions" before pivoting to blame "poor documentation." When pressed further, the company admitted its definition of "public" projects was unclear - a startling admission for a platform handling sensitive developer data.

Social media posts from @weezerOSINT reveal the vulnerability was reported 48 days prior to public disclosure, only to be dismissed as a "duplicate submission." This delay allowed the exposure to continue until researchers escalated to HackerOne on March 3.

Passing the Buck to HackerOne?

In a surprising twist, Lovable ultimately shifted responsibility to HackerOne, claiming their partner deemed the visibility of public project chats as "expected behavior." Security experts raised eyebrows at this justification, noting that enterprise users will soon lose public project options entirely - suggesting the company knew these settings were problematic.

"They're treating security like a feature toggle rather than a fundamental requirement," commented cybersecurity analyst Mark Chen. "When your API accidentally makes private chats visible again, that's not an expected behavior - that's a failure."

The Fallout and Fixes

The company has since implemented several changes:

  • Restricted new enterprise projects from being public starting May 2025
  • Clarified permission settings in their API
  • Acknowledged their communication missteps

Yet for early free-tier users, the only path to privacy remains upgrading to a paid plan - a move some see as the company profiting from its own security lapses.

Key Points:

  • 🔓 Critical BOLA vulnerability exposed user data through simple API calls
  • 🔄 Lovable's explanations evolved from 'intentional' to 'poor docs' before blaming HackerOne
  • ⏳ Researchers reported the flaw 48 days before action was taken
  • 💰 Free users must pay for privacy features after security failures
  • 🛠️ Fixes implemented but trust may take longer to rebuild

