
Radical Numerics Releases Open-Source 30B-Parameter Diffusion AI Model


AI research firm Radical Numerics has publicly released RND1-Base, the largest open-source diffusion language model to date. The 30-billion-parameter model marks a significant advance in parallel text generation.

Technical Specifications

The model features:

  • 30B total parameters, with only 3B active per token via a sparse Mixture-of-Experts (MoE) design
  • Built upon the Qwen3-30B-A3B autoregressive base model
  • Trained on 500B tokens using bidirectional masking
  • Trained with batch sizes of up to 8M tokens for stability
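The "3B active of 30B total" figure comes from sparse expert routing: a gating network scores a set of expert sub-networks per token and only the top-k run. The article does not describe RND1's router in detail, so the following is a generic top-k MoE gating sketch, not the model's actual implementation; the expert count and `k=2` are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(gate_logits, k=2):
    """Pick the top-k experts for one token; only their parameters run.

    Returns a dict {expert_index: weight} with weights renormalized to sum
    to 1 over the selected experts.
    """
    probs = softmax(gate_logits)
    topk = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    selected_mass = sum(probs[i] for i in topk)
    return {i: probs[i] / selected_mass for i in topk}

# 8 hypothetical experts, top-2 routing: per token, only a small fraction
# of the total parameters is activated, which is what keeps a 30B model
# at roughly 3B active parameters.
weights = route_token([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
```

With top-2 routing over 8 experts, each token touches only two expert blocks, which is the mechanism behind the model's "selective parameter activation" efficiency claim.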

Performance Benchmarks

RND1-Base demonstrates strong performance across multiple domains, outperforming previous open-source diffusion models such as Dream-7B and LLaDA-8B while maintaining computational efficiency through selective parameter activation.

Architectural Innovations

Unlike traditional autoregressive models, RND1 treats text generation as a denoising process, enabling:

  • Parallel sequence refinement
  • Bidirectional attention mechanisms
  • Reduced inference latency
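The denoising framing above can be made concrete with a toy iterative-unmasking loop: start from a fully masked sequence, have the model propose a token plus a confidence at every masked slot, and commit the most confident positions in parallel each step. This is a minimal sketch of the general masked-diffusion decoding idea, assuming confidence-based unmasking; the `toy_denoiser` stands in for the real network and is purely hypothetical.

```python
import random

MASK = "<mask>"

def toy_denoiser(seq, vocab):
    """Stand-in for the diffusion model: propose (token, confidence)
    at every position; already-committed tokens keep confidence 1.0."""
    return [(tok, 1.0) if tok != MASK
            else (random.choice(vocab), random.random())
            for tok in seq]

def diffusion_generate(length, vocab, steps=4, seed=0):
    """Denoise a fully masked sequence into text over a few parallel steps."""
    random.seed(seed)
    seq = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in seq:
        proposals = toy_denoiser(seq, vocab)
        # Unmask the most confident masked positions in parallel this step.
        masked = [i for i, t in enumerate(seq) if t == MASK]
        masked.sort(key=lambda i: -proposals[i][1])
        for i in masked[:per_step]:
            seq[i] = proposals[i][0]
    return seq

text = diffusion_generate(8, ["a", "b", "c"], steps=4)
```

Because several positions are committed per step instead of one token at a time, the sequence is produced in far fewer model calls than autoregressive decoding would need, which is the source of the latency claim.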

The transition from the autoregressive to the diffusion paradigm was achieved through continual pre-training with layer-specific learning rates, preserving the base model's existing knowledge while it acquired the new generation behavior.
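In practice, "layer-specific learning rates" usually means building optimizer parameter groups with different step sizes per layer type. The article does not say which layers were adapted faster, so the split below (attention layers at a higher rate than the rest, since they must learn bidirectional behavior) is an assumption for illustration, as are the rate values.

```python
def layerwise_lr_groups(layer_names, base_lr=1e-4, attention_scale=10.0):
    """Build per-layer optimizer groups: attention layers get a larger
    learning rate (assumed to need the most adaptation for bidirectional
    attention), everything else stays near the pretrained weights."""
    groups = []
    for name in layer_names:
        lr = base_lr * attention_scale if "attn" in name else base_lr
        groups.append({"name": name, "lr": lr})
    return groups

groups = layerwise_lr_groups(
    ["embed", "block0.attn", "block0.mlp", "block1.attn"]
)
```

A real training setup would pass groups like these to an optimizer's parameter-group API; keeping most layers at a small rate is what preserves the autoregressive base's knowledge during the conversion.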

Research Implications

The open-source release includes:

  • Complete model weights
  • Training methodologies
  • Inference code with FlashInfer/SGLang backends

This transparency aims to accelerate community research into post-training optimization and practical applications of diffusion language models.

Future Directions

While demonstrating strong performance, challenges remain in:

  • Generalization capability
  • Memory optimization

Radical Numerics suggests that future integration with multi-objective fine-tuning could unlock additional potential.

The team, comprising researchers from DeepMind, Meta, and Stanford, positions this as foundational work toward recursively self-improving AI systems.

Key Points:

  1. Largest open-source diffusion language model released (30B parameters)
  2. Achieves state-of-the-art benchmarks while enabling parallel generation
  3. Complete technical stack made available to research community
  4. Represents shift toward non-autoregressive AI architectures
  5. Foundation for future self-improving AI systems

