The misguided crusade against superintelligence

2025-10-25
The ban on superintelligence research recently proposed by the Future of Life Institute is, I think, misguided and potentially dangerous.

The "Statement on Superintelligence" recently issued by the Future of Life Institute (FLI) represents yet another well-intentioned but ultimately futile attempt to halt the march of technological progress. I think their proposed ban on artificial superintelligence (ASI) research is misguided and potentially dangerous.

This statement has garnered over 30,000 signatures, including well-known AI scientists like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell. It calls for an outright prohibition on developing superintelligence - defined as artificial intelligence (AI) systems that outperform humans in essentially all cognitive tasks - until there is broad scientific consensus on safety and strong public support.

The statement reads: "We call for a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in."

Proponents warn of risks like human obsolescence, loss of freedoms, national security threats, and even human extinction. While these concerns are raised with good intentions, the statement is profoundly naive, ignoring historical precedents, geopolitical realities, and the inescapable logic of technological defense. It's a document that prioritizes fear over foresight, potentially endangering humanity more than the AI it seeks to restrain.

One glaring issue is the statement's disregard for past failures of similar campaigns. Remember the FLI's 2023 open letter, which urged a six-month pause on training AI systems more powerful than GPT-4 to allow for safety assessments? Signed by more than 30,000 people, including well-known personalities, it echoed the current statement's alarmist tone about existential risks.

Yet, what happened? The pause never materialized. Major AI labs like xAI, OpenAI, Google, and Meta pushed ahead, releasing ever-more advanced models without meaningful interruption. Governments paid lip service but offered no enforceable regulations. If that high-profile plea fell on deaf ears, why would this new statement fare any better? It seems to me a waste of time and resources, recycling the same rhetoric without addressing why previous efforts floundered. The signatories are shouting into the void, deluding themselves that signatures alone can override the momentum of innovation.

The unstoppable momentum of ASI research

In the West, superintelligence research is far too entangled with financial, military, and geopolitical interests to ever be stopped by a petition. Tech giants pour billions into AI because it promises astronomical profits - think automated economies, personalized medicine, and hyper-efficient industries. Wall Street investors aren't swayed by philosophical arguments; they're driven by hard financial returns.

Meanwhile, the military sees superintelligence as a strategic imperative. The U.S. Department of Defense is moving to integrate AI into everything from drone warfare to cybersecurity, viewing it as essential for maintaining superiority over adversaries. Geopolitically, halting development would cede ground in a new arms race. No Western leader will risk falling behind, especially when AI could decide future conflicts. This alignment of incentives makes the statement's proposed ban laughable - it's like asking oil companies to voluntarily abandon drilling amid an energy crisis. The pursuit of superintelligence is, today, a core driver of economic and national power.

There's also a group of philosophically oriented people who think that giving birth to our superintelligent mind children is our cosmic destiny and duty. I'm one of them. We are not powerful, and we can only play a minor role. But if every voice must be counted, then ours must also be counted. Of course we hope humans 1.0 - old-fashioned flesh-and-blood humans like you and me - will continue to be relevant. But our philosophical stance makes it impossible for us to support, and mandatory for us to strongly oppose, any ban on superintelligence research. Trying to stop this phase transition is futile. I say, full speed ahead, and the universe will provide.


If it is outlawed, only outlaws will have it

Let's go back to more practical considerations. Even if, against all odds, the West imposed a ban, the consequences would be catastrophic. Research wouldn't stop - it would simply migrate to less-regulated corners of the globe. China, already an AI powerhouse with covert state projects and state-backed private-sector initiatives, would accelerate its programs unchecked. Beijing's approach prioritizes national strength, and a Western retreat would hand it a monopoly on superintelligence.

Rogue nations such as North Korea or Iran could pursue superintelligence covertly, weaponizing AI for asymmetric warfare or oppression. Criminal organizations and terrorist groups, unbound by laws, might develop black-market versions using leaked code or underground labs. Imagine superintelligent systems in the hands of hackers orchestrating global cyberattacks, or cartels optimizing drug empires.

The results could be devastating: unchecked proliferation leading to misuse, accidents, or deliberate harm. By stifling open, collaborative research in democratic societies, the statement inadvertently empowers the very actors it fears most, turning a manageable risk into an uncontrollable one. In other words, to paraphrase an old adage: if ASI is outlawed, only outlaws will develop it.

My conclusions

Ultimately, the only viable safeguard against "bad" superintelligence is the development of "good" superintelligence. History shows that defensive technologies are needed to counter offensive ones - think nuclear deterrence or antivirus software. A benevolent or at least collaborative superintelligence could monitor and neutralize rogue AIs, predict threats, and enforce global standards. Banning progress leaves us vulnerable, relying on hope rather than capability. Instead of prohibition, we need accelerated development, international cooperation, and robust governance to ensure superintelligence serves humanity. The statement's defeatist stance ignores this, promoting paralysis over preparation.

In conclusion, the "Statement on Superintelligence" is a shortsighted manifesto that underestimates human ingenuity and overestimates the power of petitions. By failing to learn from the FLI's previously ignored call, overlooking entrenched interests, risking dangerous shifts to adversaries, and rejecting proactive defense, it seems likely to do more harm than good. True progress demands bold advancement, not fearful retreat. If we were to heed this statement, we might wake up to a world where superintelligence exists - but is controlled by bad humans.

#AccelerationOfAGI

#GeoPolitics

#StrongAI


