Alibaba's Qwen3.5-Plus Takes Open-Source AI Crown Ahead of Lunar New Year
As fireworks prepared to light up China's Lunar New Year celebrations, Alibaba launched a salvo of its own: Qwen3.5-Plus, which the company benchmarks as the world's strongest open-source large language model.
Efficiency Meets Power
The new model uses a sparse architecture that activates only a fraction of its parameters for any given token: of 397 billion total parameters, only about 17 billion are active during inference. Despite that, it outperforms its trillion-parameter predecessor Qwen3-Max while using 60% less memory.

"What makes this special isn't just raw power, but how efficiently that power gets delivered," explains Dr. Li Wei, an AI researcher at Tsinghua University. "The throughput improvements in long-context scenarios, up to 19 times faster, could reshape how we deploy these models commercially."
Budget-Friendly Brainpower
Perhaps most startling is the cost advantage. At 0.8 yuan per million tokens, Qwen3.5-Plus undercuts Gemini 3 Pro by a factor of eighteen while matching its capabilities.
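The pricing gap is easy to quantify. A minimal sketch, assuming only the article's two figures (0.8 yuan per million tokens and the eighteen-fold difference); the derived Gemini 3 Pro rate is an implication of those claims, not an official price:

```python
# Figures quoted in the article.
QWEN_PRICE_PER_M = 0.8   # yuan per million tokens for Qwen3.5-Plus
GAP_FACTOR = 18          # claimed price advantage over Gemini 3 Pro

def cost_yuan(tokens: int, price_per_million: float) -> float:
    """Cost in yuan for a token count at a per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

# Implied competitor rate: 0.8 * 18 = 14.4 yuan per million tokens.
gemini_price_per_m = QWEN_PRICE_PER_M * GAP_FACTOR

# Example workload: 100 million tokens per month.
monthly_tokens = 100_000_000
qwen_cost = cost_yuan(monthly_tokens, QWEN_PRICE_PER_M)      # 80 yuan
gemini_cost = cost_yuan(monthly_tokens, gemini_price_per_m)  # 1,440 yuan
print(f"Monthly savings: {gemini_cost - qwen_cost:,.0f} yuan")
```

At that spread, the savings scale linearly with usage, which is why the pricing alone could reshape high-volume deployments.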
The model also breaks new ground technically:
- First model in the Qwen series with native multimodal capabilities, processing both text and images
- Trained on mixed visual-text tokens for more human-like understanding
- Top-tier performance with fewer than 40% of the parameters of comparable models
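The sparsity figures above can be sanity-checked with a little arithmetic. This sketch assumes only the numbers reported in the article (397 billion total parameters, roughly 17 billion active, and a trillion-parameter Qwen3-Max) and derives the ratios from them:

```python
# Parameter figures quoted in the article.
TOTAL_PARAMS = 397e9   # 397 billion total parameters
ACTIVE_PARAMS = 17e9   # ~17 billion activated per token

# Fraction of the network that does work on any given token.
activation_ratio = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active fraction per token: {activation_ratio:.1%}")  # 4.3%

# Total footprint relative to the trillion-parameter Qwen3-Max.
QWEN3_MAX_PARAMS = 1e12
size_ratio = TOTAL_PARAMS / QWEN3_MAX_PARAMS
print(f"Size vs Qwen3-Max: {size_ratio:.1%}")  # 39.7%
```

So only about one in twenty-three parameters is active per token, and the whole model is just under 40% the size of its predecessor, consistent with the efficiency claims above.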
Benchmark Dominance
The numbers tell a compelling story:
- 87.8 on MMLU-Pro (surpassing GPT-5.2)
- 88.4 on GPQA (beating Claude 4.5)
- Record-breaking IFBench performance
The synchronized rollout across Qwen platforms gives developers immediate access to test these claims firsthand.
Key Points:
- Festival timing: Strategic release during Lunar New Year celebrations
- Cost revolution: Dramatically lower API pricing could disrupt market
- Technical leap: Multimodal capability opens new application possibilities
- Global implications: Strengthens China's position in open-source AI development