360's AI Security Lobster Faces Backlash Over Private Key Leak

Even cybersecurity veterans sometimes trip over basic security measures. That's exactly what happened when 360 Company's much-touted AI product, 360 Security Lobster, was caught with its digital pants down—leaving SSL private keys exposed in its installation package.

What Went Wrong?

The security lapse came to light when tech experts discovered that the installation package included the private keys for wildcard certificates covering *.myclaw.360.cn. Imagine leaving your master key under the doormat; that's essentially what happened here. With those keys in hand, attackers could theoretically impersonate 360's servers or intercept user traffic.
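For teams worried about shipping the same mistake, one crude pre-release check is simply to scan the unpacked installer for PEM private key blocks before it goes out the door. The sketch below is a generic illustration, not 360's tooling; the directory path and the set of PEM markers it looks for are assumptions.

```python
# Minimal sketch: walk an unpacked installation package and flag any files
# containing PEM private key blocks. Generic illustration only; the path and
# the marker list are assumptions, not 360's actual release tooling.
import os

# Common PEM headers that indicate private key material.
PEM_MARKERS = (
    b"-----BEGIN PRIVATE KEY-----",
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN EC PRIVATE KEY-----",
)

def find_private_keys(root):
    """Return paths of files under `root` that contain PEM private key blocks."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    data = fh.read()
            except OSError:
                continue  # unreadable entry; skip it
            if any(marker in data for marker in PEM_MARKERS):
                hits.append(path)
    return hits

if __name__ == "__main__":
    # Hypothetical location of the unpacked installer; adjust as needed.
    for path in find_private_keys("./unpacked_installer"):
        print("possible private key material:", path)
```

A check like this can sit in a release pipeline so a build fails before a key-bearing package ever reaches users.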

"It's like building a high-tech vault but forgetting to lock the back door," remarked one cybersecurity analyst who requested anonymity.

Damage Control Mode

Facing industry criticism, 360 moved quickly to contain the fallout:

  • Certificate revoked: The compromised credentials were immediately invalidated
  • Risk assessment: Company officials insist ordinary users face no immediate threat
  • Technical fixes: They've implemented safeguards against potential server forgery attempts (a generic example of one such client-side check is sketched below)
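To give a sense of what a guard against server forgery can look like on the client side, here is a minimal certificate-pinning sketch: the client fetches the certificate the server presents and compares its SHA-256 fingerprint against a value shipped with the client. The host name and the expected fingerprint are placeholders, and this is a generic illustration rather than 360's actual mitigation.

```python
# Minimal certificate-pinning sketch: compare the SHA-256 fingerprint of the
# certificate a server presents with a known-good value shipped in the client.
# HOST and EXPECTED_FINGERPRINT are placeholders, not 360's real values.
import hashlib
import ssl

HOST = "service.myclaw.360.cn"   # hypothetical host under the affected domain
PORT = 443
EXPECTED_FINGERPRINT = "replace-with-known-good-sha256-hex"  # placeholder

def server_cert_fingerprint(host, port):
    """Fetch the server's leaf certificate and return its SHA-256 fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if __name__ == "__main__":
    fingerprint = server_cert_fingerprint(HOST, PORT)
    if fingerprint == EXPECTED_FINGERPRINT:
        print("certificate matches the pinned fingerprint")
    else:
        print("WARNING: presented certificate does not match the pinned fingerprint")
```

Pinning fails closed whenever a certificate changes, so in practice it has to be paired with a way to rotate the pinned value after a revocation like this one.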

Bigger Questions Loom

For a domestic cybersecurity leader like 360, a stumble like this carries particular weight. With AI products flooding the market, the incident highlights how automated release checks might be failing their fundamental purpose. Are companies moving too fast in the AI race? This episode suggests some might be skipping basic security steps in their rush to market.

The tech community will be watching closely to see how 360 addresses these concerns—and whether other AI developers take note before facing similar embarrassments.

Key Points:

  • Basic oversight: SSL private keys accidentally included in installation package
  • Quick response: Certificate revoked within hours of discovery
  • User impact: Company claims minimal risk to average users
  • Industry implications: Raises questions about AI product release protocols

