Xiaomi Exec Warns Against AI Token Price Wars After Anthropic's OpenClaw Block

AI Industry Reckons With Unsustainable Pricing Models

Anthropic's recent decision to block third-party frameworks like OpenClaw from accessing its Claude subscription service has sent shockwaves through the AI community. The move came after some users reportedly paid just $200 while consuming computing resources worth $5,000 - an unsustainable imbalance that forced Anthropic's hand.

The Hidden Costs of Cheap Tokens

Luo Fuli, who leads Xiaomi's MiMo large model team, sees this as a cautionary tale for the industry. "Third-party frameworks often have inefficient context management," she explains. "This can lead to token consumption rates ten times higher than native frameworks."

The financial math becomes alarming when heavy users exploit these inefficiencies: delivering $5,000 worth of computing power for $200 in subscription revenue is a recipe for financial disaster no company could sustain long-term.
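The imbalance can be made concrete with a small sketch. The $200 and $5,000 figures are the ones reported in the article; the token counts and helper names below are purely illustrative assumptions, not Anthropic's or Xiaomi's actual numbers.

```python
# Sketch of the subsidy math described above. The dollar figures come from
# the reported incident; the token counts are hypothetical.

def subsidy_ratio(revenue: float, compute_cost: float) -> float:
    """Dollars of compute delivered per dollar of revenue."""
    return compute_cost / revenue

def inefficiency_multiplier(native_tokens: int, framework_tokens: int) -> float:
    """Token overhead of a third-party framework vs. a native one."""
    return framework_tokens / native_tokens

# Reported imbalance: a $200 subscription consuming $5,000 of compute.
print(subsidy_ratio(200, 5_000))                 # 25.0 -> $25 of compute per $1 paid

# Luo's claim: poor context management can burn ~10x the tokens of a
# native framework (hypothetical token counts for illustration).
print(inefficiency_multiplier(10_000, 100_000))  # 10.0
```

At a 25:1 cost-to-revenue ratio, even a small share of such heavy users erases the margin earned from everyone else, which is why the article frames flat-rate subscriptions plus inefficient third-party frameworks as structurally unsustainable.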

Beyond the Price War Mentality

Luo warns competitors against chasing market share through token price cuts alone. "Slashing prices without proper subscription strategies is a trap," she states bluntly. "The future lies in balancing efficient frameworks with high-quality models."

Xiaomi has already taken this approach with MiMo's new pay-as-you-go token plan supporting third-party access. The model aims to create sustainable economics where both platform providers and developers can thrive.

Industry at a Crossroads

The Anthropic incident highlights growing pains as AI adoption accelerates. With computing demand skyrocketing, companies must develop business models that:

  • Fairly compensate platform providers
  • Incentivize framework efficiency improvements
  • Maintain accessibility for developers

"Short-term cost pressures will push third-party developers to optimize their technology," Luo predicts. "This painful adjustment will ultimately benefit the entire ecosystem's health."

Key Points:

  • Resource imbalance: Anthropic blocked OpenClaw after users consumed $5,000 in resources while paying $200
  • Pricing warning: Xiaomi's Luo cautions against destructive token price wars
  • Sustainable path: MiMo's pay-as-you-go model offers alternative approach
  • Industry evolution: Pressure may drive needed efficiency improvements in third-party frameworks

