AI Expert Revises Doomsday Timeline: Humanity Gets a Few More Years
The Shifting Timeline of AI's Existential Threat
The artificial intelligence community has been buzzing with debate since former OpenAI employee Daniel Kokotajlo adjusted his much-discussed prediction about when superintelligent AI might pose an existential threat to humanity. What was once a looming 2027 deadline has now been pushed back by several years, giving humanity what some might call "a little more breathing room."

From Science Fiction to Revised Forecasts
Kokotajlo's original "AI 2027" prediction sent shockwaves through both tech and political circles. His scenario painted a dramatic picture: AI achieving fully autonomous programming within three years, rapidly evolving beyond human control, and potentially leading to humanity's downfall by the mid-2030s. The scenario found supporters among some U.S. politicians but drew sharp criticism from researchers such as neuroscientist Gary Marcus, who dismissed it as "science fiction."
Now, reality appears to be tempering expectations. According to the latest observations shared with AIbase, Kokotajlo has revised his timeline significantly. The window for AI achieving autonomous programming now stretches into the early 2030s, with superintelligence potentially emerging around 2034.
"Current systems still show clear limitations when dealing with complex real-world environments," Kokotajlo acknowledged in his updated assessment. The admission comes as researchers observe AI struggling with tasks that require nuanced understanding beyond pattern recognition.
The Race Continues Despite Uncertainty
While timelines may be shifting, the pace of development shows no signs of slowing. OpenAI CEO Sam Altman recently revealed an ambitious internal goal: creating automated AI researchers by 2028. This target suggests companies remain confident in overcoming current technical hurdles, even as independent experts urge caution.
The gap between corporate optimism and academic skepticism highlights a fundamental tension in AI development. As Kokotajlo's revised prediction demonstrates, even experts struggle to forecast how quickly—or slowly—true artificial general intelligence (AGI) might emerge.
"The real world turns out to be far more complicated than our science fiction scenarios," one researcher noted wryly. This complexity serves as both a buffer against rapid uncontrolled advancement and a reminder of how much we still don't understand about creating human-level intelligence.
Key Points:
- Revised timeline: Superintelligence emergence pushed from 2027 to the early 2030s
- Current limitations: AI still struggles with complex real-world environments
- Corporate ambitions: OpenAI aims for automated researchers by 2028 despite uncertainty
- Ongoing debate: Significant disagreement remains about AGI development timelines