Nick Bostrom is best known for his influential 2014 book “Superintelligence: Paths, Dangers, Strategies,” which helped establish AI safety as a serious academic field. That work took a notably precautionary stance, warning that superintelligent AI could pose existential risks and advocating for careful safety work before rushing toward advanced AI.
His new 2026 working paper, "Optimal Timing for Superintelligence: Mundane Considerations for Existing People," marks a striking evolution in his thinking. Rather than simply urging caution, Bostrom now argues that developing superintelligence isn't like playing Russian roulette - it's more like deciding whether to undergo risky surgery for a fatal condition. The status quo isn't safe: 170,000 people die daily from disease, aging, and other causes. The question isn't whether to take risks, but which risks to accept.
The paper analyzes optimal timing for launching superintelligent AI from a "person-affecting perspective" (focusing on people alive today) while setting aside more speculative considerations. Bostrom focuses on what he calls "mundane" factors: mortality rates, quality of life, and practical safety improvements.
Key points
Simple Models: In basic scenarios where successful AI would reduce mortality to that of a healthy 20-year-old (life expectancy ~1,400 years), launching superintelligence increases expected lifespan even with catastrophically high AI risk - up to 97% chance of extinction. This counterintuitive result stems from comparing the certain death everyone faces within decades against the possibility of radical life extension.
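To see where a threshold near 97% comes from, here is a back-of-the-envelope check (my own illustration with assumed round numbers, not a calculation from the paper):

```python
# Illustrative breakeven calculation (assumed round numbers, not Bostrom's exact figures)
baseline_years = 40      # rough remaining life expectancy for a person alive today
post_agi_years = 1400    # life expectancy at the mortality rate of a healthy 20-year-old

# If launch succeeds with probability (1 - p) and extinction means zero further years,
# expected remaining lifespan is (1 - p) * post_agi_years. Launching beats the status quo
# whenever that exceeds baseline_years.
breakeven_p = 1 - baseline_years / post_agi_years
print(f"Launching is lifespan-positive up to p(extinction) ≈ {breakeven_p:.0%}")  # ≈ 97%
```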
Timing Dynamics: When the choice involves when to launch rather than whether, optimal delays are typically modest - often measured in months or single-digit years. Very fast safety progress favors launching soon (since risk drops quickly anyway). Very slow progress also favors launching soon (since waiting yields little benefit). Only intermediate progress rates produce longer optimal delays.
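The shape of this result can be reproduced with a simple toy model (entirely my own construction, with assumed parameters): extinction risk decays exponentially as safety work proceeds, while people alive today keep dying at the background rate during the wait.

```python
import numpy as np

# Toy model (my assumptions): extinction risk decays exponentially with delay,
# while people alive today die at background rate m during the wait.
L = 1400          # post-launch life expectancy (years)
m = 1 / 40        # background mortality rate (~40-year remaining life expectancy)
p0 = 0.20         # initial extinction probability at t = 0

def expected_years(delay, k):
    """Expected remaining life-years for someone alive now, launching after `delay` years,
    with safety progress rate k (risk decays as p0 * exp(-k * delay))."""
    survive_wait = np.exp(-m * delay)        # chance of still being alive at launch
    years_waiting = (1 - survive_wait) / m   # expected years lived during the wait
    p = p0 * np.exp(-k * delay)              # residual extinction risk at launch
    return years_waiting + survive_wait * (1 - p) * L

delays = np.linspace(0, 50, 501)
# Fast and slow safety progress both favor short delays; only intermediate rates
# push the optimum out by a few years in this toy setup.
for k in (2.0, 0.3, 0.02):
    best = delays[np.argmax([expected_years(d, k) for d in delays])]
    print(f"safety progress rate k={k}: optimal delay ≈ {best:.1f} years")
```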
The "Swift to Harbor, Slow to Berth" Strategy: Bostrom's multiphase model distinguishes two periods: Phase 1 (before AGI capability exists) and Phase 2 (after capability but before deployment). Safety progress is likely "front-loaded" during early Phase 2, when researchers finally have actual AGI systems to study, test, and refine. This creates a burst of rapid safety gains that quickly faces diminishing returns.
The analysis suggests moving quickly through Phase 1 to capture this "safety windfall," then implementing a brief, purposeful pause in Phase 2 - perhaps weeks to months - to exploit the period of maximum safety progress. For many scenarios, it's actually optimal to accelerate Phase 1 development while still maintaining a short Phase 2 pause.
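A sketch of the front-loading intuition (assumed curve and numbers of my own, not the paper's model): if the reducible share of launch risk decays with a time constant of a couple of months once real systems can be studied, a pause of a few months already captures most of what pausing can ever deliver.

```python
import numpy as np

# Sketch of "front-loaded" Phase-2 safety progress (illustrative curve, not from the paper):
# assume the reducible share of extinction risk decays with a short time constant once
# researchers can study actual AGI systems.
tau_weeks = 8            # assumed time constant of Phase-2 safety gains
reducible_risk = 0.15    # share of launch risk that testing and refinement can remove

def risk_removed(pause_weeks):
    """Fraction of launch risk eliminated by pausing for `pause_weeks` in Phase 2."""
    return reducible_risk * (1 - np.exp(-pause_weeks / tau_weeks))

for w in (2, 8, 26, 52, 104):
    print(f"pause {w:>3} weeks: risk removed ≈ {risk_removed(w):.3f} "
          f"({risk_removed(w)/reducible_risk:.0%} of what pausing can ever achieve)")
```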

Complicating factors
Quality of Life: If post-AGI life is dramatically better (not just longer), this strengthens the case for earlier launch. However, the effect saturates - once post-AGI life is sufficiently attractive, further quality improvements don't change the calculus much.
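The saturation is easy to see with the same kind of breakeven arithmetic as above (again my own illustrative numbers): multiplying post-AGI life-years by a quality factor q raises the tolerable extinction risk quickly at first, then barely at all.

```python
# Illustrative saturation effect (assumed numbers): quality multiplier q on post-AGI life-years.
baseline_years = 40
post_agi_years = 1400

for q in (1, 2, 5, 10, 100):
    breakeven = 1 - baseline_years / (q * post_agi_years)
    print(f"quality multiplier {q:>3}x: tolerable extinction risk ≈ {breakeven:.2%}")
```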
Risk Aversion: When people have diminishing marginal utility in life-years (valuing the first 50 years more than years 50-100), optimal timelines become more conservative, but not dramatically so.
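A toy illustration of this moderating effect (assumed utility shapes, not from the paper): with a square-root utility over life-years, the breakeven extinction risk drops from roughly 97% to roughly 83%, more conservative but not drastically different.

```python
# Toy comparison (assumed utility shapes, not from the paper): how much extinction risk
# makes launching break even with the status quo, under linear vs. concave utility?
baseline_years = 40
post_agi_years = 1400

def breakeven(u):
    """Largest extinction probability p at which (1 - p) * u(post-AGI) still >= u(baseline)."""
    return 1 - u(baseline_years) / u(post_agi_years)

linear = lambda y: y
sqrt_utility = lambda y: y ** 0.5   # diminishing marginal utility in life-years

print(f"risk-neutral breakeven: {breakeven(linear):.0%}")        # ≈ 97%
print(f"sqrt-utility breakeven: {breakeven(sqrt_utility):.0%}")  # ≈ 83%
```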
Mortality Changes: If background mortality rises during Phase 2 (from AI-enabled bioweapons, for instance), optimal pauses shorten. If mortality plummets (from anti-aging breakthroughs), longer pauses become justifiable - though even then, not extremely long ones given temporal discounting.
Distributional Justice: The elderly, sick, and poor have shorter remaining lifespans and lower quality of life, making them more willing to accept AI risk for a shot at transformation. On prioritarian grounds (which weigh the interests of the worse-off more heavily), this shifts optimal timelines earlier than a neutral utilitarian calculation would suggest.
Critical limitations
Bostrom emphasizes this analysis doesn't directly translate to policy recommendations. Real-world AI pauses could backfire in numerous ways: driving development underground, increasing militarization, creating regulatory theater, triggering geopolitical competition, enabling cheating on international agreements, building compute overhangs that make eventual deployment more explosive, or ossifying into permanent relinquishment of superintelligence's benefits.
The paper also focuses narrowly on person-affecting considerations, excluding what might be called the "impersonal" perspective (which focuses on possible future generations) and various "arcane" considerations (simulation theory, anthropics, alien civilizations, etc.) that Bostrom considers important but saves for future work.
Bottom line
From the standpoint of people alive today seeking to maximize their own expected quality-adjusted lifespan, the optimal strategy often involves moving relatively quickly to AGI capability, then pausing briefly (not years or decades) to capture rapid safety gains from studying actual systems, before full deployment. Extremely long delays are rarely optimal unless initial risk is extraordinarily high and safety improves at just the right intermediate pace.
Critically, this finding holds even under pessimistic "doomer" assumptions about AI risk, because the baseline of continued death and suffering is itself a catastrophe unfolding in slow motion. The paper reframes the AI safety debate: we're not choosing between safety and danger, but between different risk profiles - with the seemingly "safe" option of indefinite delay carrying its own massive, certain costs.
Reactions
These days Bostrom is less influential than he used to be, but reactions are slowly beginning to pile up. Beff Jezos, the prophet of effective accelerationism, praised Bostrom for endorsing AI accelerationism, saying that the greatest existential risk is deceleration.
In another commentary, Émile Torres says that Bostrom’s paper “encapsulates much of what’s abhorrent about Bostrom’s worldview, ethical thought, and moral character.” To me, the commentary encapsulates much of what’s abhorrent about Torres. Reading the commentary, you can feel the author’s palpable hatred (there’s no other word) for the imaginative futurists who dare to imagine the possibility of avoiding death. But for Torres, hatred is justified because this possibility “is definitely on the radar of far-right American fascists.”
In “Superintelligence,” despite his more precautionary stance at the time of writing, Bostrom admitted that “[if] the machine intelligence revolution goes well, the resulting superintelligence could almost certainly devise means to indefinitely prolong the lives of the then still-existing humans, not only keeping them alive but restoring them to health and youthful vigor.”
In a commentary included in my book “Tales of the Turing Church,” I noted that many AI accelerationists “feel doomed to the irreversible non-existence of certain death, unless the superintelligence explosion happens in their lifetime, and therefore they want to push forward recklessly, as fast as possible.” I argued that this is a consequence of their bad philosophy of militant atheism, which doesn’t allow them to hope in life after death.
Since I do believe in (or at least consider possibilities of) life after death, I’m not vulnerable to this (very human and understandable) reaction to the inevitability of death without superintelligent AI. But I firmly believe that giving birth to our superintelligent mind children is our cosmic destiny and duty, and therefore I’m all for AI accelerationism with only a very tiny bit of caution. I think if we pursue our cosmic duty, the universe will reward us.