Open-Source AI Tool Exposes Critical Payment Flaw - Hackers Could Get Free Credits

Critical Payment Vulnerability Found in NewAPI System

Security researchers have sounded the alarm about a dangerous flaw in the QuantumNous/new-api (NewAPI) system, widely used by developers to manage AI model interfaces. The vulnerability could let attackers credit their own accounts without paying a cent, by exploiting a weakness in payment verification.

How the Exploit Works

At the heart of the issue is how NewAPI handles Stripe's webhook verification. When administrators leave the Stripe webhook signing key unconfigured (or blank), the system accepts incoming payment notifications without authenticating them at all. Here's what makes this particularly concerning:

  • Signature verification breaks down: The system's security checks don't flag empty keys as problematic, essentially rubber-stamping any payment notification
  • Attackers can forge payments: With some basic knowledge of the system, hackers can fabricate successful payment confirmations
  • Free money glitch: The system credits accounts without requiring actual payment processing
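The flawed pattern can be illustrated with a minimal sketch (hypothetical code, not NewAPI's actual implementation, assuming an HMAC-SHA256 signature scheme modeled on Stripe's webhooks): an empty signing key is treated as "skip verification" instead of "reject everything".

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Illustrative check modeled on Stripe-style HMAC-SHA256 webhook signatures.

    Vulnerable pattern: an unconfigured (empty) secret is treated as
    "no verification needed", so any forged notification passes.
    """
    if not secret:
        return True  # BUG: empty key rubber-stamps every payload
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A forged "payment succeeded" notification sails through when no key was set:
forged = b'{"type": "checkout.session.completed", "amount": 9999}'
print(verify_webhook(forged, "any-made-up-signature", ""))            # True (exploit)
print(verify_webhook(forged, "any-made-up-signature", "whsec_real"))  # False
```

With the key configured, the forged signature fails the HMAC comparison; with it blank, the attacker never has to produce a valid signature in the first place.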

"It's like leaving your vault unlocked at a bank," explained one security researcher who asked to remain anonymous. "The system sees the forged payment confirmation and happily adds credits, while Stripe's records show zero actual transactions."

Who's Affected?

The vulnerability primarily impacts:

  • Test environments where Stripe keys might be left blank
  • Sites using alternative payment methods like Alipay or WeChat Pay
  • Newer installations where administrators may not have completed all security configurations

Immediate Actions Required

The project team has already released version v0.12.10 to address the issue, but that's just the first step. Security professionals recommend:

  1. Upgrade immediately to the patched version
  2. Configure valid Stripe webhook keys, even if you're not using Stripe payments
  3. Audit recent transactions for any suspicious activity
  4. Review all payment callbacks to ensure proper verification
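The recommendations above amount to one principle: fail closed. A minimal sketch of the corrected check (again hypothetical, assuming the same HMAC-SHA256 scheme as above) rejects every callback when the key is missing rather than accepting all of them:

```python
import hashlib
import hmac

def verify_webhook_fail_closed(payload: bytes, signature: str, secret: str) -> bool:
    """Hardened variant of an HMAC webhook check (illustrative only).

    A missing signing key now means "no callback can be trusted",
    not "every callback is trusted".
    """
    if not secret:
        return False  # fail closed: unconfigured keys reject everything
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Only a correctly signed notification is accepted:
secret = "whsec_test"
payload = b'{"type": "checkout.session.completed"}'
good_sig = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
print(verify_webhook_fail_closed(payload, good_sig, secret))  # True
print(verify_webhook_fail_closed(payload, good_sig, ""))      # False
```

The same fail-closed logic is worth applying to any payment callback route, which is why step 2 above matters even for deployments that never intend to accept Stripe payments.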

"This isn't just about losing potential revenue," warns cybersecurity expert Dr. Emily Chen. "Unchecked, this could completely undermine trust in affected platforms. Imagine users discovering they can get unlimited AI credits - it would be chaos."

Key Points

  • Vulnerability allows bypassing payment systems in NewAPI
  • Occurs when Stripe webhook keys are left unconfigured
  • Attackers can add credits without actual payment
  • Patch available in v0.12.10
  • All administrators urged to upgrade immediately
  • Financial audits recommended for past transactions
