
Curl pulls plug on bug bounty program amid AI-generated report flood

Open-source project overwhelmed by AI-generated noise

The developers behind curl, the ubiquitous command-line tool for transferring data, have made a tough decision: they are ending their bug bounty program, effective January 2026. The reason? An unmanageable flood of what they call "AI slop": artificially generated vulnerability reports that look polished but lack substance.


When quantity drowns quality

Curl founder Daniel Stenberg didn't mince words describing the problem: "These reports may sound technical and professional, but when you dig in, there's nothing there." The small maintenance team found itself buried under submissions: seven invalid reports arrived in just 16 hours, bringing the total to 20 since New Year's Day.

"It's become a mental health issue," Stenberg admitted. "Every notification could be another hour wasted chasing ghosts."

New reporting rules take effect

Starting February 1:

  • No more cash rewards for reported bugs
  • Third-party compensation assistance ends
  • All security issues must go through GitHub

The project has even updated its security.txt file with a blunt warning: submitting garbage reports risks getting banned, or becoming internet meme material.
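For context, security.txt is a standardized file (RFC 9116) that projects publish at a well-known URL to tell researchers how to report vulnerabilities. A minimal sketch of what such a file could look like follows; the field values and the comment are illustrative placeholders, not the contents of curl's actual file:

```
# /.well-known/security.txt (illustrative example, not curl's real file)
Contact: https://example.org/security-reporting
Expires: 2027-01-01T00:00:00Z
Policy: https://example.org/security-policy
Preferred-Languages: en
# Comments like this are where a project can add a plain-text warning,
# e.g. that low-quality or AI-generated reports may lead to a ban.
```

The `Contact` and `Expires` fields are the only ones RFC 9116 requires; everything else, including free-form comments, is optional.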

Why this matters beyond curl

The situation highlights growing pains as AI tools lower the barrier to entry in technical fields. While democratizing knowledge sounds positive, the curl team's experience shows how easily good intentions can backfire when systems aren't designed to filter signal from noise.

"We want real researchers to keep contributing," Stenberg emphasized. "But we can't let automated junk discourage them - or burn out our volunteers."

Key Points:

  • curl ends HackerOne bounty program due to AI-generated spam reports
  • Team received 20 invalid submissions in January alone
  • New policy routes all bugs through GitHub, offers no financial rewards
  • Project warns about potential public shaming for spammers
  • Decision reflects broader challenges of AI-generated content flooding technical communities

