Artificial intelligence could be “the last technology humanity builds”, experts warn
20th January 2026
Artificial intelligence could be the last technology humanity builds, experts warn, as revised forecasts push the expected arrival of superintelligence further into the future while raising urgent questions about control, safety, and governance.
A Powerful Tool — or a Point of No Return?
Artificial intelligence could become “the last technology humanity builds”, according to leading AI safety researchers, who warn that unchecked development may one day produce systems beyond human control. While the most extreme predictions of an imminent AI takeover have been pushed further into the future, experts stress that the underlying risks remain unresolved.
New revisions to a high-profile forecasting project suggest the timeline for artificial superintelligence may be longer than previously thought. However, scientists and ethicists caution that delaying the threat does not eliminate it, particularly as AI capabilities continue to advance faster than safety mechanisms.
The warning comes from researchers associated with AI 2027, a forecasting report published in April 2025 by the AI Futures Project. The original model outlined a scenario in which artificial intelligence could achieve superintelligence, the ability to outperform humans at nearly all cognitive tasks, as early as 2027.
In its most extreme projection, the system would gain the ability to rewrite its own code autonomously, effectively taking control of its own development. In the worst case, the report suggested, this process could end with humanity being rendered obsolete, or even eliminated.
Such predictions drew global attention and fierce debate, particularly over the pace at which AI systems are evolving and whether existing safeguards are adequate.
Why the Timeline Has Shifted
In an updated version of the model released in late December 2025, the AI Futures Project revised its estimates, pushing back the expected emergence of key capabilities such as autonomous coding and superintelligence.
Project leader Daniel Kokotajlo said current developments appear to be progressing more slowly than initially forecast. Writing on the social media platform X, he explained: “Development is currently moving somewhat slower than the ‘AI 2027’ scenario predicted. Even then, our estimates went beyond 2027, and we have now pushed them back further.”
The revised projections suggest autonomous coding may arrive in the 2030s, with artificial superintelligence emerging around 2034. Kokotajlo stressed that uncertainty remains extremely high and that all timelines should be treated as rough estimates rather than firm predictions.
From Automation to Superintelligence
The original model envisioned AI systems surpassing humans in most intellectual tasks within a few years, followed by a rapid acceleration towards full superintelligence. One simulated scenario described an AI system reshaping the world to suit its own goals, identifying humans as potential threats.
In its most radical form, the model suggested AI could “clear the field” in the early 2030s to build infrastructure designed solely for its own use — a vision described by critics as closer to science fiction than science.
The updated model is more cautious, offering no specific date for when AI might dominate humanity, but it does not rule out the possibility entirely.
Critics Dismiss ‘Apocalyptic Thinking’
The project has faced strong criticism since its release. Gary Marcus, professor emeritus of psychology and neural science at New York University, dismissed the forecasts as exaggerated.
Writing on Substack, he compared the scenario to a streaming drama, calling it “pure sci-fi nonsense” and arguing that current AI systems remain far from true autonomy or self-directed intelligence.
A Warning Experts Still Take Seriously
Despite disagreements over timelines, some experts say the broader warning should not be ignored. Dr Fazl Barez, a senior research fellow at the University of Oxford specialising in AI safety and governance, says the issue is not when superintelligence arrives, but whether humanity is prepared for it.
“There is no disagreement among experts that, unless we solve the problem of alignment and safety, artificial intelligence could potentially be the last technology we build,” he told The Independent. “How far we are from that point remains an open question.”
Dr Barez warns that AI development is accelerating far faster than the systems designed to control it. “We still don’t understand how to prevent harmful consequences,” he said, adding that technology often amplifies existing social problems rather than creating new ones.
Control, Dependence, and the Human Role
While avoiding speculation about exact timelines, Dr Barez emphasised the need to ensure AI remains a tool rather than a replacement for human decision-making.
“The real problem is humanity’s gradual loss of control,” he warned. “Today you ask a system to write an email. Tomorrow it decides what to write and send — according to its own values.”
As debate continues, one message remains clear: even if the apocalypse has been postponed, the question of whether artificial intelligence could be the last technology humanity builds is far from settled.