Qwen Architect Reveals: AI Models Are Learning to Act, Not Just Think
From Thought to Action: The Next Leap in AI
Lin Junyang, the architect behind Alibaba's Qwen large language model, has broken his post-departure silence with a striking vision of AI's future. Speaking publicly for the first time since leaving the company, the former lead engineer painted a picture of artificial intelligence that doesn't just think, but acts.
The Agentic Revolution
"We've been obsessed with making models think longer," Lin observed in his March 26 statement, "but that's only half the battle." His analysis reveals an industry at a crossroads, transitioning from what he calls "reasoning-based thinking" to "agentic thinking," in which AI systems continuously refine their plans through real-world interaction.
This shift represents more than just technical tweaking. It's a fundamental reimagining of how we build intelligent systems. Rather than measuring success by how deeply a model can reason, Lin suggests we should ask: Can it turn those thoughts into effective actions?
Lessons from Qwen's Growing Pains
The path to this realization wasn't smooth. Lin spoke openly about Qwen's early stumbles in 2025, when the team ambitiously tried to create a unified system that could adjust its reasoning depth based on question difficulty. "We thought we could have it all," he admitted.
Reality proved harsher. Forcing reasoning and instruction-following capabilities into one model produced a system that excelled at neither: its thoughts were verbose but indecisive, and it executed commands unreliably. These growing pains eventually led Qwen to separate its "Instruct" and "Thinking" versions, a move that became an industry reference point.
Rethinking Intelligence Metrics
Lin challenges conventional wisdom about what makes AI smart. "Longer reasoning chains don't necessarily mean greater intelligence," he argues. The blind pursuit of ever more elaborate thought processes often just wastes computing power without improving real-world usefulness.
The future, according to Lin, lies in training not just models but entire agent systems, combining AI with its environment in continuous feedback loops. It's a vision where artificial intelligence becomes less like an oracle and more like an assistant that learns by doing.
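The loop Lin describes can be illustrated with a minimal sketch. The snippet below is not Qwen's implementation; every name (`Environment`, `plan`, `run_agent`) and the toy task are hypothetical, chosen only to show the shape of an observe-plan-act cycle in which the agent re-plans from fresh feedback at every step instead of reasoning once up front.

```python
from dataclasses import dataclass


@dataclass
class Environment:
    """Toy environment: the agent must raise a counter to a target value."""
    target: int = 3
    state: int = 0

    def step(self, action: str) -> tuple[int, bool]:
        # Apply the action, then report the new state and a done flag.
        if action == "increment":
            self.state += 1
        return self.state, self.state >= self.target


def plan(state: int, target: int) -> str:
    # Stand-in for the model's "thinking" step: it picks the next action
    # from the latest observation rather than committing to a fixed plan.
    return "increment" if state < target else "stop"


def run_agent(env: Environment, max_steps: int = 10) -> list[str]:
    """Agentic loop: observe, re-plan, act, and repeat until done."""
    trace = []
    for _ in range(max_steps):
        action = plan(env.state, env.target)
        trace.append(action)
        _, done = env.step(action)
        if done:
            break
    return trace


env = Environment(target=3)
actions = run_agent(env)
print(actions)     # one action per feedback cycle
print(env.state)   # final environment state
```

The point of the sketch is the placement of `plan` inside the loop: success is measured by whether the environment reaches its goal, not by how long any single chain of reasoning runs.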
Key Points:
- Action over analysis: Future AI success hinges on execution capability, not just reasoning depth
- Hard-won lessons: Qwen's early struggles revealed the pitfalls of forcing different cognitive functions together
- New benchmarks: Traditional measures like reasoning chain length may become less relevant as agentic systems emerge

