Open-Source AI Cogito v2 Challenges Industry Giants with Machine Intuition
San Francisco, August 1, 2025 — Deep Cogito, a startup founded by former Google engineers, has launched its Cogito v2 series of large language models (LLMs), positioning itself as a formidable competitor in the open-source AI market. The company's innovative approach centers on "machine intuition" and self-improving reasoning capabilities, setting it apart from traditional parameter-heavy models.
A New Approach: Learning While Using
The Cogito v2 series includes four models with parameter sizes ranging from 70B to 671B, available in both dense and mixture-of-experts (MoE) architectures. These models are now accessible on platforms like Hugging Face and Together AI. The flagship model, Cogito v2 671B MoE, boasts a reasoning path roughly 60% shorter than DeepSeek R1's while matching or exceeding the performance of models such as Qwen1.5-72B and Claude 4 Opus.
Key Innovation: Introspective Reasoning
The core technology behind Cogito v2 is "introspective reasoning": during operation, the model generates internal reasoning paths and then distills them back into its weights, effectively internalizing intuition. This mechanism allows the model to improve with each use, much as AlphaGo refined its strategy through self-play.
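The article does not publish implementation details, but the "distill reasoning back into the weights" loop can be sketched conceptually. In this toy illustration the "model" is just a record of its currently preferred chain length per problem, so the shortening effect is visible without any actual fine-tuning; every name and number here is a hypothetical placeholder, not Deep Cogito's method.

```python
import random

def sample_chain_lengths(current_len: int, n: int = 8) -> list[int]:
    """Sample candidate reasoning-chain lengths around the current one."""
    return [max(10, current_len + random.randint(-40, 10)) for _ in range(n)]

def self_improvement_step(model: dict[str, int]) -> dict[str, int]:
    """Per problem, keep the shortest chain seen (the 'distillation' step).

    Including the current length in the min() makes improvement monotone:
    chains only get shorter or stay the same, never regress.
    """
    return {p: min([l] + sample_chain_lengths(l)) for p, l in model.items()}

# Start with deliberately long reasoning chains for two toy problems.
model = {"math_problem": 220, "legal_problem": 300}
for _ in range(5):
    model = self_improvement_step(model)
# After a few iterations, the preferred chains are substantially shorter.
```

In a real system the "keep the shortest correct trace" step would feed a fine-tuning pass rather than a lookup table, which is what makes the shortened reasoning available at inference time without re-deriving it.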
Practical Testing: Efficiency in Action
Deep Cogito has released multiple test cases showcasing its model's capabilities:
- Mathematical problems: Cogito v2 671B reaches conclusions with reasoning chains as short as 100 tokens, compared to DeepSeek R1's 200+ tokens.
- Legal reasoning: The model employs a two-step logical structure to deliver clear conclusions, outperforming many existing models and even graduate law students.
- Family logic questions: In the classic "Is Alice Charlie's grandmother?" scenario, Cogito v2 avoids the pronoun-confusion traps and correctly identifies the grandmother relationship.
Cost-Effective Training Challenges Industry Norms
Despite the models' scale, Deep Cogito claims the total training cost for all eight models was under $3.5 million, a stark contrast to the hundreds of millions spent by OpenAI and Anthropic. CEO Drishan Arora emphasized, "Better models are not about training on more data, but training on more meaningful data."
Open-Source Commitment and Future Plans
The Cogito v2 models are available for download or via API on platforms like Hugging Face, Baseten, and RunPod. To support lighter-weight deployment, Cogito 671B also ships in an FP8 quantized version, reducing hardware requirements while maintaining high inference efficiency.
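The hardware savings from FP8 come mostly from halving weight storage relative to a 16-bit format. The back-of-the-envelope arithmetic below is illustrative only; actual serving memory also includes the KV cache, activations, and runtime overhead.

```python
PARAMS = 671e9  # parameter count of the flagship MoE model

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

bf16 = weight_memory_gb(PARAMS, 2.0)  # 16-bit baseline: 2 bytes/param
fp8 = weight_memory_gb(PARAMS, 1.0)   # FP8 quantized: 1 byte/param

print(f"BF16 weights: ~{bf16:.0f} GB")  # ~1342 GB
print(f"FP8 weights:  ~{fp8:.0f} GB")   # ~671 GB
```

Halving the weight footprint is what lets a model this size fit on fewer accelerators, though FP8 inference also requires hardware and kernels that support the format.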
Deep Cogito pledges to keep all models open-source and continue iterating on its "reasoning chain feedback + self-improvement" training path. The project has already garnered support from institutions like Benchmark and South Park Commons.
Key Points:
- Machine intuition: Cogito v2 uses introspective reasoning to improve over time.
- Efficiency: Reasoning paths markedly shorter than those of competitors like DeepSeek R1.
- Cost-effective: Training costs under $3.5 million challenge industry norms.
- Open-source: Models available on Hugging Face with ongoing updates promised.