
Moonshot's Kimi-Researcher Launches for Deep Research Tasks

Moonshot AI (known in China as "Dark Side of the Moon") has officially unveiled Kimi-Researcher, its first AI-powered deep research agent, now in limited internal testing. The new model uses end-to-end agentic reinforcement learning (RL) to provide users with efficient, in-depth research capabilities.

Advanced Autonomous Research Capabilities

When tackling complex queries, Kimi-Researcher demonstrates remarkable autonomy:

  • Performs an average of 23 reasoning steps per task
  • Plans 74 search keywords per inquiry
  • Evaluates 206 URLs, retaining only the top 3.2% of highest-quality content

The system goes beyond simple information retrieval by:

  • Automatically calling tools like browsers and code interpreters
  • Processing raw data into actionable insights
  • Generating comprehensive reports with traceable sources
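The cycle described above (plan a step, call a tool, fold the result back into the working context, repeat until a report can be written) can be sketched as a generic tool-calling loop. The planner and tools below are toy stand-ins for illustration only, not Kimi-Researcher's actual implementation:

```python
# Generic research-agent loop: plan, act, accumulate, report.
# The planner and tool set here are hypothetical placeholders.

def research(query, plan_step, tools, max_steps=23):
    """Iteratively choose a tool, call it, and collect findings."""
    notes = []
    for _ in range(max_steps):
        action, args = plan_step(query, notes)   # decide the next tool call
        if action == "finish":
            break
        notes.append(tools[action](args))        # e.g. browser, code interpreter
    return notes                                 # raw material for the final report

# Toy planner: search twice, then stop.
def plan_step(query, notes):
    return ("search", query) if len(notes) < 2 else ("finish", None)

tools = {"search": lambda q: f"result for {q!r}"}
print(research("data privacy laws", plan_step, tools))
```

In a real agent the planner would be the language model itself, and the RL training signal would shape which tool calls it learns to make.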

Benchmark Performance and Real-World Applications

To validate its capabilities, developers subjected Kimi-Researcher to the rigorous Humanity's Last Exam (HLE) benchmark, which spans hundreds of professional domains including:

  • Mathematics and physics
  • Medical research
  • Political science and history

The model achieved impressive scores of 26.9% Pass@1 and 40.17% Pass@4 accuracy, outperforming several established AI systems.
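Pass@k figures of this kind are typically computed with the standard unbiased estimator over n sampled attempts per task, of which c are correct. The snippet below is a generic illustration of that estimator, not Moonshot's evaluation code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k attempts,
    drawn from n total samples of which c are correct, solves the task."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k: some draw must succeed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 4 attempts per task, 1 correct
print(pass_at_k(4, 1, 1))  # 0.25
print(pass_at_k(4, 1, 4))  # 1.0
```

Benchmark scores are then the mean of this estimate across all tasks, which is why Pass@4 is always at least as high as Pass@1.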

In practical scenarios, Kimi-Researcher has proven valuable for:

  • Algorithm engineers seeking high-value benchmarks
  • Business analysts researching industry trends
  • Legal professionals comparing international data privacy laws

The system can produce reports of over 10,000 words with approximately 26 quality references, complete with shareable interactive visualizations.

Technical Innovation and Availability

The model's unique architecture features:

  • Zero-structure design: No complex prompts or preset workflows
  • Self-adaptation: Learns entirely through trial-and-error reinforcement learning

This approach enables superior performance when handling conflicting information or adapting to environmental changes.

The service is currently in limited beta testing. Interested users can apply for access at kimi.com and activate the "Deep Research" feature after approval.

Key Points:

  1. Moonshot AI launches research agent Kimi-Researcher in beta testing
  2. System autonomously plans searches, filters content, and generates detailed reports
  3. Achieved top-tier performance on challenging Humanity's Last Exam benchmark
  4. Currently available through limited access program at kimi.com


Related Articles

News

Zhiyi LS8 Debuts with Revolutionary AI Driving: 20x Smarter Decisions

The automotive world just took a giant leap forward as Zhiyi unveils its LS8 model featuring groundbreaking AI technology. This isn't just another smart car - it's the first vehicle to integrate cockpit and driving systems like a human nervous system, powered by a new reinforcement learning model that makes decisions 20 times faster than current systems. Pre-sales begin March 26 for what could redefine our expectations of intelligent vehicles.

March 19, 2026
automotive AI, intelligent driving, Zhiyi LS8
News

Peking University and OceanBase Break New Ground in Long Video Search Technology

Researchers from Peking University and OceanBase have developed LoVR, a groundbreaking benchmark for long video retrieval that tackles key industry challenges. Accepted by WWW 2026, this innovation enables precise searches across entire videos or specific segments through advanced semantic analysis. The system features over 40,000 finely annotated clips and addresses real-world problems like semantic drift in lengthy content.

March 2, 2026
video retrieval, AI research, multimodal technology
News

Robots Get a Sense of Touch with Groundbreaking New Dataset

A major leap forward in robotics arrived this week with the release of Baihu-VTouch, the world's first cross-body visual-tactile dataset. Developed collaboratively by China's National-Local Co-built Humanoid Robot Innovation Center and multiple research teams, this treasure trove contains over 60,000 minutes of real robot interaction data. What makes it special? The dataset captures not just what robots see, but how objects feel - enabling machines to develop human-like tactile sensitivity across different hardware platforms.

January 27, 2026
robotics, AI research, tactile sensing
News

Robots Get a Sense of Touch: Groundbreaking Dataset Bridges Vision and Feeling

Scientists have unveiled Baihu-VTouch, the world's most comprehensive dataset combining robotic vision and touch. This collection spans over 60,000 minutes of interactions across various robot types, capturing delicate contact details with remarkable precision. The breakthrough could revolutionize how robots handle delicate tasks - imagine machines that can actually 'feel' what they're doing.

January 26, 2026
robotics, AI research, tactile sensors
News

AI cracks famous math puzzle with a fresh approach

OpenAI's latest model has made waves in mathematics by solving a long-standing number theory problem. The solution to the Erdős problem caught the attention of Fields Medalist Terence Tao, who praised its originality. But behind this success lies a sobering reality - AI's overall success rate in solving such problems remains low, reminding us that these tools are assistants rather than replacements for human mathematicians.

January 19, 2026
AI research, mathematics, machine learning
News

Tiny AI Model Packs a Punch, Outperforms Giants

Liquid AI's new experimental model LFM2-2.6B-Exp is turning heads in the tech world. Despite its modest size of just 2.6 billion parameters, this open-source powerhouse outperforms models hundreds of times larger in key benchmarks. Designed for edge devices, it brings PhD-level reasoning to smartphones while maintaining blazing speeds and low memory usage. Could this be the future of accessible AI?

December 26, 2025
AI innovation, edge computing, reinforcement learning