Ant Group's Robotics Leap: Open-Source AI Model Boosts Robot Intelligence
Ant Group Opens the Door to Smarter Robots
In a move that could reshape the robotics landscape, Ant Group's Lingbo Technology has fully open-sourced its embodied intelligence model LingBot-VLA along with related training tools. This isn't just another tech release; it represents a tangible step toward making robots more adaptable and intelligent.

Breaking Down the Breakthrough
The LingBot-VLA model has already proven its worth through partnerships with major robot manufacturers such as Xinghai Tu, Songling, and Leju. What sets it apart? Training speed: the system processes 261 samples per second using just eight GPUs, nearly three times faster than current industry standards.
"We're seeing training efficiency that outpaces frameworks like StarVLA and OpenPI by 1.5 to 2.8 times," explains a Lingbo Technology spokesperson. "This directly translates to lower costs for developers and faster deployment times."
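To put the quoted speedup range in concrete terms, the short sketch below back-calculates what baseline throughput the 1.5x-2.8x figures imply. This is illustrative arithmetic only: the 261 samples/sec number and the speedup range come from the article, while the derived baseline figures are inferred, not measured.

```python
# Back-of-the-envelope check on the reported training-speed claims.
# Only the inputs below are from the article; the outputs are derived.

LINGBOT_THROUGHPUT = 261  # samples/sec on eight GPUs, as reported

# Reported speedup range over frameworks such as StarVLA and OpenPI
speedup_low, speedup_high = 1.5, 2.8

# Implied baseline throughput range (samples/sec) at those speedups
baseline_high = LINGBOT_THROUGHPUT / speedup_low   # slowest claimed advantage
baseline_low = LINGBOT_THROUGHPUT / speedup_high   # largest claimed advantage

print(f"Implied baseline throughput: {baseline_low:.0f}-{baseline_high:.0f} samples/sec")
```

Under those assumptions, the comparison frameworks would be processing roughly 93 to 174 samples per second on equivalent hardware.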
The Data-Performance Connection
Through extensive testing with real-world data, researchers made a crucial discovery: more data equals better performance. As training data increased from 3,000 to 20,000 hours, the model's success rate in practical tasks showed consistent improvement. In Shanghai Jiao Tong University's GM-100 benchmark tests:
- Initial success rate: 13.0%
- After optimization: 15.7%
- With depth information added: 17.3%
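The gains above can be summarized as absolute and relative improvements. The sketch below computes both from the quoted GM-100 figures; the stage labels simply mirror the article's list and are not official benchmark terms.

```python
# Success rates (%) on the GM-100 benchmark, as quoted in the article
rates = {"initial": 13.0, "optimized": 15.7, "with_depth": 17.3}

# Absolute gain in percentage points, and the relative improvement
overall_gain_pts = rates["with_depth"] - rates["initial"]
relative_gain = overall_gain_pts / rates["initial"] * 100

print(f"Overall gain: {overall_gain_pts:.1f} pts ({relative_gain:.1f}% relative)")
```

In other words, the jump from 13.0% to 17.3% amounts to about a 33% relative improvement in task success.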
Seeing in 3D: The Depth Perception Advantage
The recently launched LingBot-Depth model takes spatial understanding to new levels. Using stereo 3D cameras, it transforms noisy sensor data into precise environmental measurements, a capability crucial for applications ranging from warehouse robots to self-driving cars.
Early tests show exceptional performance in depth completion and monocular depth estimation tasks, potentially solving one of robotics' persistent challenges: accurate environmental mapping.
What This Means for the Future
By open-sourcing these technologies, Ant Group is effectively democratizing advanced robotics capabilities. Smaller manufacturers and research institutions can now access tools previously available only to well-funded corporations.
The implications are vast, from more responsive service robots to safer autonomous vehicles. As these models continue evolving through community contributions, we may be witnessing the dawn of a new era in practical robotics applications.
Key Points:
- LingBot-VLA achieves unprecedented training speeds (261 samples/sec)
- Success rates improve consistently with more training data
- Depth perception accuracy boosted by LingBot-Depth model
- Open-source approach could accelerate industry-wide innovation


