
AI Expert Revises Doomsday Timeline: Humanity Gets a Few More Years

The Shifting Timeline of AI's Existential Threat

The artificial intelligence community has been buzzing with debate since former OpenAI employee Daniel Kokotajlo adjusted his much-discussed prediction about when superintelligent AI might pose an existential threat to humanity. What was once a looming 2027 deadline has now been pushed back by several years, giving humanity what some might call "a little more breathing room."

Image note: AI-generated illustration, licensed via Midjourney

From Science Fiction to Revised Forecasts

Kokotajlo's original "AI 2027" prediction sent shockwaves through both tech and political circles. His scenario painted a dramatic picture: AI achieving fully autonomous programming within three years, rapidly evolving beyond human control, and potentially leading to humanity's downfall by the mid-2030s. The view found supporters among some U.S. politicians but drew sharp criticism from researchers such as cognitive scientist Gary Marcus, who dismissed it as "science fiction."

Now, reality appears to be tempering expectations. According to the latest assessment shared with AIbase, Kokotajlo has revised his timeline significantly: the window for AI achieving autonomous programming now stretches into the early 2030s, with superintelligence potentially emerging around 2034.

"Current systems still show clear limitations when dealing with complex real-world environments," Kokotajlo acknowledged in his updated assessment. This admission comes as researchers observe AI struggling with tasks that require nuanced understanding beyond pattern recognition.

The Race Continues Despite Uncertainty

While timelines may be shifting, the pace of development shows no signs of slowing. OpenAI CEO Sam Altman recently revealed an ambitious internal goal: creating automated AI researchers by 2028. This target suggests companies remain confident in overcoming current technical hurdles, even as independent experts urge caution.

The gap between corporate optimism and academic skepticism highlights a fundamental tension in AI development. As Kokotajlo's revised prediction demonstrates, even experts struggle to forecast how quickly—or slowly—true artificial general intelligence (AGI) might emerge.

"The real world turns out to be far more complicated than our science fiction scenarios," one researcher noted wryly. This complexity serves as both a buffer against rapid uncontrolled advancement and a reminder of how much we still don't understand about creating human-level intelligence.

Key Points:

  • Revised timeline: Autonomous programming now expected in the early 2030s, with superintelligence pushed from 2027 to around 2034
  • Current limitations: AI still struggles with complex real-world environments
  • Corporate ambitions: OpenAI aims for automated researchers by 2028 despite uncertainty
  • Ongoing debate: Significant disagreement remains about AGI development timelines

