
China Unveils Massive 30,000-Card AI Supercluster

China's Computing Power Breakthrough: Inside the New AI Supercluster

China's artificial intelligence capabilities just got a major boost with the operational launch of what developers are calling "the country's most powerful AI computing pool." The newly activated scaleX supercluster packs an astonishing 30,000 acceleration cards at Zhengzhou's National Supercomputing Internet node.

From Concept to Reality at Breakneck Speed

What makes this achievement remarkable isn't just the scale - it's how quickly China reached this milestone. When Sunway unveiled its 10,000-card cluster last December at the HAIC conference, few expected the company would triple that capacity before spring. This blistering pace signals China's growing confidence in domestic computing solutions.

"We've effectively compressed years of development into months," explains a Sunway engineer familiar with the project. "The jump from individual cards to superclusters happened faster than anyone predicted."

Developer-Friendly Design Breaks Down Barriers

The scaleX system addresses one of developers' biggest frustrations - ecosystem compatibility. Unlike many domestic platforms that require extensive code rewrites, this cluster embraces an open architecture:

  • Full CUDA compatibility means existing projects can migrate with minimal changes (see the sketch after this list)
  • Supports mixed deployment of different domestic accelerator brands
  • Already optimized for 400+ mainstream large models
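
To make "minimal changes" concrete, here is a minimal PyTorch-style sketch of the kind of CUDA-targeted code such compatibility claims are about. The article does not name the cluster's actual software stack, so the device selection and training step below are a generic illustration under that assumption, not scaleX-specific code.

```python
# Minimal sketch: a typical CUDA-targeted PyTorch training step.
# The promise of a CUDA-compatible platform is that code like this runs
# with little or no modification; nothing here is specific to scaleX,
# whose software stack the article does not describe.
import torch
import torch.nn as nn

# Select the accelerator exactly as existing CUDA projects already do.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1024, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One dummy training step on synthetic data.
x = torch.randn(32, 1024, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```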

"It's like upgrading from a studio apartment to a mansion without needing new furniture," jokes one researcher who tested early access.

Scientific Frontiers Begin to Blur

The supercluster isn't just about raw power - it's already producing groundbreaking results:

  • A Chinese materials science model topped international rankings using scaleX resources
  • Protein research efficiency improved by 3-6 orders of magnitude
  • Supports everything from internet giants' core operations to cutting-edge AI-driven science (AI4S)

The cluster represents more than computational muscle - it's becoming what engineers call "digital energy," transforming multiple research domains simultaneously.

Just Getting Started: Million-Card Horizons Ahead

The current installation might be impressive, but Sunway insists this is merely phase one. The system architecture allows expansion up to:

  • 100,000 cards in near-term deployments
  • Future-ready support for potential 1 million-card configurations

As global competition for AI supremacy intensifies, China appears determined not just to compete but to redefine what large-scale computing means.

Key Points:

  • China debuts largest domestic AI supercluster (30,000 cards)
  • Achieved record development speed (10K→30K cards in <2 months)
  • Open architecture supports CUDA and mixed hardware environments
  • Already enabling scientific breakthroughs across multiple disciplines
  • Designed for massive future scaling (up to 1M cards)

