
Ant Group and Renmin University Unveil First Native MoE Diffusion Language Model

At the 2025 Inclusion·Bund Conference, Ant Group and Renmin University jointly introduced LLaDA-MoE, the industry's first native Mixture of Experts (MoE) architecture diffusion language model (dLLM). This breakthrough challenges the conventional belief that language models must be autoregressive.

Key Innovations

The LLaDA-MoE model was trained from scratch on approximately 20 trillion (20T) tokens of data, demonstrating that diffusion language models can be trained scalably and stably at industrial scale. It outperforms previous dense diffusion language models such as LLaDA-1.0/1.5 and Dream-7B, while matching the performance of comparable autoregressive models such as Qwen2.5-3B-Instruct. Notably, it achieves this while activating only 1.4 billion of its 7 billion total parameters per token.
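The sparse-activation idea behind those numbers can be sketched in a few lines. The following is a toy top-k MoE routing layer in NumPy, not Ant's implementation; the expert count, gating function, and dimensions are illustrative assumptions only.

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Toy top-k MoE layer: route a token to its top_k experts and
    mix their outputs by renormalized gate weights. Only the selected
    expert networks run, so most parameters stay idle per token."""
    logits = x @ gate_w                      # (d,) @ (d, n_experts) -> (n_experts,)
    top = np.argsort(logits)[-top_k:]        # indices of the top_k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                     # softmax over selected experts only
    return sum(g * experts[i](x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
# Each "expert" is a tiny linear map standing in for a feed-forward block.
weights = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in weights]
gate_w = rng.standard_normal((d, n_experts))

y = moe_forward(rng.standard_normal(d), experts, gate_w)
print(y.shape)  # (8,)
```

With top_k=2 of 4 experts, only half the expert parameters participate in any one forward pass, which is the same mechanism that lets LLaDA-MoE activate 1.4B of 7B parameters.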

[Image: Renmin University and Ant Group jointly launched the first MoE-architecture diffusion model, LLaDA-MoE.]

Performance Highlights

Under Ant's unified evaluation framework, LLaDA-MoE showed an average improvement of 8.4% across 17 benchmarks, including HumanEval, MBPP, and GSM8K. It leads LLaDA-1.5 by 13.2% and ties with Qwen2.5-3B-Instruct, validating the "MoE amplifier" effect in the dLLM field.

[Image: Performance metrics of LLaDA-MoE compared with other models.]

Technical Breakthroughs

Lan Zhenzhong, Director of Ant Group's General AI Research Center, emphasized that this model represents a significant step toward scaling dLLMs to larger sizes. The team rewrote the training code based on LLaDA-1.0 and used Ant's distributed framework ATorch for parallel acceleration.

Assistant Professor Li Chongxuan of Renmin University highlighted that autoregressive models, which generate strictly left to right, struggle to capture bidirectional token dependencies; LLaDA-MoE addresses this limitation through diffusion-style parallel decoding.

Open-Source Commitment

Ant Group plans to open-source not only the model weights but also a custom inference engine optimized for dLLM parallelism, which reportedly outperforms NVIDIA's fast-dLLM solution. Technical reports and code will be released on GitHub and Hugging Face.

Key Points:

  • First native MoE architecture diffusion language model (dLLM)
  • Trained on ~20T tokens with 7B total parameters (1.4B activated)
  • Outperforms dense diffusion models; matches autoregressive counterparts
  • Open-sourcing model weights and inference framework soon
