XPeng Unveils Second-Gen VLA Model for Intelligent Driving
XPeng's Next-Gen VLA Model Redefines Autonomous Driving
At the 2025 XPeng Tech Day, the Chinese automaker unveiled its second-generation Vision-Language-Action (VLA) model, setting new benchmarks for intelligent driving systems. The model marks a fundamental shift from conventional architectures: it maps visual inputs directly to vehicle actions in a single end-to-end pipeline, rather than routing perception through an intermediate language representation.
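To make that architectural shift concrete, here is a minimal, hypothetical PyTorch sketch of an end-to-end vision-to-action policy, where camera pixels map straight to control outputs with no language stage in between. All names and layer sizes (VisionToActionPolicy, action_dim, the toy CNN backbone) are illustrative assumptions; XPeng has not published its model internals.

```python
import torch
import torch.nn as nn

class VisionToActionPolicy(nn.Module):
    """Toy end-to-end policy: camera frames in, control commands out.

    Purely illustrative -- the real second-generation VLA model's
    architecture and scale have not been disclosed.
    """

    def __init__(self, action_dim: int = 2):
        super().__init__()
        # Vision encoder: a small CNN standing in for the real backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Action head maps visual features directly to controls
        # (e.g., steering angle and acceleration), with no
        # intermediate language representation.
        self.action_head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.action_head(self.encoder(frames))

policy = VisionToActionPolicy()
frame_batch = torch.randn(1, 3, 224, 224)  # one RGB camera frame
controls = policy(frame_batch)             # tensor of shape (1, 2)
print(controls.shape)
```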
Technical Breakthroughs
The new VLA model features:
- Billions of parameters running on Ultra-version vehicles (2,250 TOPS of onboard compute)
- 100 million driving clips processed, equivalent to roughly 65,000 years of human driving (see the back-of-envelope check after this list)
- 13x improvement in average distance driven between human interventions on complex roads
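As a rough sanity check on the data-scale claim above, the following sketch converts 100 million clips and 65,000 driving years into an implied average clip length. The two-hours-of-driving-per-day figure is our assumption, not XPeng's; a different figure rescales the result proportionally.

```python
# Back-of-envelope check of the "65,000 human driving years" claim.
# ASSUMPTION (ours, not XPeng's): an average driver spends about
# 2 hours per day behind the wheel.
clips = 100_000_000
driving_years = 65_000
hours_per_year = 2 * 365                    # assumed driving hours/year

total_hours = driving_years * hours_per_year
minutes_per_clip = total_hours * 60 / clips
print(f"{total_hours:,} driving hours -> ~{minutes_per_clip:.0f} min per clip")
# 47,450,000 driving hours -> ~28 min per clip
```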

Expanded Applications
Beyond autonomous vehicles, the technology serves as a foundation for:
- AI robotics development
- Flying car navigation systems
- Cross-domain intelligent device control
The system powers XPeng's new "Xiao Lu" (small-road) NGP and introduces navigation-free automatic assisted driving (Super LCC+), which operates without a preset route.
Strategic Vision
He Xiaopeng, XPeng founder, stated: "Large models will become the operating system of the physical AI world." The company has already secured Volkswagen as its first strategic partner for this technology.
Deployment Timeline:
- Pilot co-creation begins in December 2025
- Full rollout to Ultra models expected in Q1 2026
Key Points:
- Architectural Innovation: first mass-produced physical-world model, bypassing the language-translation step that constrains conventional architectures
- Performance Leap: model scale exceeds typical production systems by orders of magnitude (billions of parameters versus millions)
- Multi-Domain Application: Serves as foundational technology for various AI-driven physical systems
- Commercial Readiness: a major automotive partner (Volkswagen) is already on board