ERC-404 Unlocks Hybrid Features For NFTs

Non-fungible tokens (NFTs) on Ethereum exploded in popularity during 2021’s bull run, driven by record sales of flagship projects like CryptoPunks, Bored Ape Yacht Club, Azuki and Moonbirds, and artists like Beeple. However, the dominant ERC-721 and ERC-1155 NFT standards have limitations, notably poor liquidity, that left the crypto collectibles market exposed when prices came crashing down in 2022 and 2023. 

A new token standard called ERC-404 aims to revolutionize NFTs with new features and improvements, and it has already captured the imagination and buy-in of NFT degens: its earliest associated projects, like Pandora, are soaring in value. While it’s early days and the standard remains unaudited, it offers an innovation that could provide a massive shot in the arm for NFT trading and collectibility. 

In short, that’s because ERC-404 has hybrid features that combine the best of fungible cryptocurrency and NFT technology to create a new type of digital asset with fractionalized NFT ownership built in. Let’s jump in. 

What is ERC-404?

ERC-404 is an experimental, unofficial token standard for NFTs on Ethereum. Rather than extending ERC-721 alone, it blends the existing ERC-20 and ERC-721 standards so that a single contract behaves as both a fungible token and an NFT collection. The standard is still under development and has not been formally adopted through the Ethereum Improvement Proposal process, but it shows enormous promise.

The ERC-404 standard opens the door to ‘dynamic NFTs’. Unlike regular NFTs, which have static content, ERC-404 NFTs can change and evolve over time. This unlocks entirely new use cases and possibilities.

For example, ERC-404 NFTs could represent characters in games that gain experience and evolve. Or they could represent dynamic digital artworks that change based on external data feeds. The possibilities are endless.

Why ERC-404 is an NFT Gamechanger

ERC-404 represents the cutting edge of innovation in the NFT space. Here’s how: 

1. More Flexibility & Use Cases

The dynamic nature of ERC-404 completely reimagines what an NFT can represent. Traditional NFTs have been quite limited and mostly focused on digital art. ERC-404 blows the doors wide open for NFT utilities and use cases.

2. Native Interoperability

ERC-404 NFTs can interconnect with other smart contracts and protocols. This makes them far more extensible and interoperable than regular NFTs. 

For example, a dynamic NFT character could directly integrate with a battle game dApp. The possibilities for creativity are endless.

3. Improved Functionality

The standard defines various new functions to manipulate and evolve ERC-404 NFTs over time. This extends to things like changing metadata, upgrading visuals, attaching tokens, and more.

This improved functionality brings NFTs far closer to representations of real-world items and makes them compatible with DeFi.

How ERC-404 NFTs Work

The ERC-404 standard upgrades NFTs in two major ways: evolution and interoperability. 

‘Evolution’ refers to the ability for the NFTs to change attributes over time, like gaining experience or changing visual representations. The logic governing these evolutions is handled by a separate ‘Engine’ smart contract.

‘Interoperability’ refers to the ability to directly integrate the NFTs with other external smart contracts, like games and marketplaces. This could embed in-game attributes, such as HP, SP, and inventory slots, directly in the NFT itself.

Example: Pandora and Replicants

The Pandora NFT collection has 10,000 unique digital collectibles called Replicants hosted on the Ethereum blockchain. Each Replicant acts as both an NFT and a crypto token. 

Owning Pandora tokens lets you trade them on decentralized exchanges like Uniswap. You can buy any amount you like, right down to fractions of a token, and each whole token corresponds to one Replicant NFT. This makes the NFTs more affordable and liquid.

When you’re ready, you can redeem your Pandora tokens for the actual Replicant NFTs. The more tokens you have, the more NFTs you can claim. You can also choose to keep some tokens and redeem others. This makes Pandora more flexible for investors.

The key idea is that Pandora links NFT collectibles and crypto tokens together in a new way. This combines the scarcity of NFTs with the liquidity of tokens, and has seen the price of a Pandora token shoot up from around $4,300 to over $26,000 in the space of a week, according to CoinMarketCap data.
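To make the mechanics concrete, here is a minimal TypeScript sketch of ERC-404-style accounting. It is an illustrative, off-chain model under simplified assumptions (the class and method names are hypothetical, and this is not Pandora’s actual contract code): balances are tracked in 18-decimal fractional units, and NFTs are minted or burned whenever a transfer pushes a wallet across a whole-token threshold.

```typescript
// Simplified, hypothetical off-chain model of ERC-404-style accounting.
// Not Pandora's actual contract code; names and numbers are illustrative.

const UNIT = 10n ** 18n; // one whole token = one NFT, as in an 18-decimal ERC-20

class Hybrid404Model {
  private balances = new Map<string, bigint>(); // fractional "ERC-20" balances
  private owned = new Map<string, number[]>();  // NFT ids held by each address
  private nextId = 1;

  balanceOf(addr: string): bigint {
    return this.balances.get(addr) ?? 0n;
  }

  nftsOf(addr: string): number[] {
    return this.owned.get(addr) ?? [];
  }

  // Seed an address with freshly minted supply (e.g. at deployment).
  mintSupply(to: string, amount: bigint): void {
    this.setBalance(to, this.balanceOf(to) + amount);
  }

  // The core ERC-404 idea: moving fractional tokens burns or mints whole
  // NFTs whenever either side crosses a whole-token threshold.
  transfer(from: string, to: string, amount: bigint): void {
    const fromBefore = this.balanceOf(from);
    if (fromBefore < amount) throw new Error("insufficient balance");
    this.setBalance(from, fromBefore - amount);
    this.setBalance(to, this.balanceOf(to) + amount);
  }

  private setBalance(addr: string, newBal: bigint): void {
    const wholeBefore = Number(this.balanceOf(addr) / UNIT);
    const wholeAfter = Number(newBal / UNIT);
    this.balances.set(addr, newBal);
    const ids = this.nftsOf(addr);
    if (wholeAfter > wholeBefore) {
      // Crossed upward: mint one NFT per newly completed whole token.
      for (let i = wholeBefore; i < wholeAfter; i++) ids.push(this.nextId++);
    } else if (wholeAfter < wholeBefore) {
      // Dropped downward: burn NFTs to match the new whole-token count.
      ids.splice(0, wholeBefore - wholeAfter);
    }
    this.owned.set(addr, ids);
  }
}

// Example: Alice starts with 2 whole tokens (and therefore 2 NFTs).
// Sending 0.5 of a token to Bob burns one of Alice's NFTs, while Bob's
// 0.5 is not yet enough to mint one for him.
const model = new Hybrid404Model();
model.mintSupply("alice", 2n * UNIT);
model.transfer("alice", "bob", UNIT / 2n);
console.log(model.nftsOf("alice").length); // 1
console.log(model.nftsOf("bob").length);   // 0
```

Note the asymmetry in the example: Alice loses an NFT even though Bob does not yet gain one, because his fraction has not crossed a whole-token threshold. Careless transfers of fractions can therefore destroy a prized NFT, which is one of the wallet-management pitfalls flagged later in this article.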

Impact on NFT Trading & Liquidity 

ERC-404 is set to transform NFT trading by making NFTs far more liquid. Whereas traditional NFTs tend to be static and illiquid once purchased, ERC-404 builds in interoperability and evolution by design.

This interoperability and upgradability make ERC-404 NFT markets far more dynamic and liquid compared to legacy NFTs. Traders can speculate on NFT evolutions, upgrades, integrations, and more.

Another benefit is the ability to use these NFTs directly in other DeFi protocols via their built-in interoperability: for example, putting them up as collateral for loans or depositing them in yield farms.
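For a developer’s-eye view of that dual nature, here is a speculative ethers.js (v6) sketch that reads both faces of a hybrid contract: its ERC-20-style fungible balance and its ERC-721-style ownership of a specific token ID. The RPC URL, contract address, and wallet addresses below are placeholders rather than a real deployment, and the ABI fragments simply assume the contract exposes the standard view functions that ERC-404-style contracts aim to provide.

```typescript
// Reading both "faces" of a hybrid ERC-404-style contract with ethers.js (v6).
// The address and RPC URL below are placeholders, not a real deployment.
import { Contract, JsonRpcProvider, formatUnits } from "ethers";

const RPC_URL = "https://example-rpc.invalid";                        // placeholder
const HYBRID_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// Minimal ABI fragments: the ERC-20 view and the ERC-721 view live in the
// same contract, which is what lets DEXs and NFT marketplaces both index it.
const abi = [
  "function balanceOf(address owner) view returns (uint256)", // ERC-20 style balance
  "function decimals() view returns (uint8)",
  "function ownerOf(uint256 tokenId) view returns (address)", // ERC-721 style ownership
];

async function main() {
  const provider = new JsonRpcProvider(RPC_URL);
  const hybrid = new Contract(HYBRID_ADDRESS, abi, provider);

  const holder = "0x1111111111111111111111111111111111111111"; // placeholder wallet
  const [raw, decimals] = await Promise.all([
    hybrid.balanceOf(holder),
    hybrid.decimals(),
  ]);
  console.log(`Fungible balance: ${formatUnits(raw, decimals)} tokens`);

  // Whole tokens correspond to NFTs, so we can also ask who owns a given id.
  const owner = await hybrid.ownerOf(1);
  console.log(`NFT #1 is owned by ${owner}`);
}

main().catch(console.error);
```

Because both views live at a single contract address, the same asset can be indexed by a DEX as a token and by an NFT marketplace as a collection, which is what underpins the liquidity story above.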

Credit: Tesfu Assefa

Exciting ERC-404 Projects Already Underway

While ERC-404 has not been officially adopted as a formal Ethereum standard yet, teams are already building out exciting projects on it, including:

  • DynamicPunks – Algorithmically generated evolving punks.
  • SmartChar NFT RPG – Customizable RPG character NFTs that level up.
  • NFTfi Guild – DeFi/NFT hybrid for asset management. 

(Please note these are not in any way endorsements of these experimental projects. Please do your own research.)

And many more projects are on the way. The community excitement for ERC-404 is enormous given its vast potential, but it’s important to do your due diligence and make sure you don’t open yourself up to rugpulls or phishing attacks that could drain your wallet. Be very careful about which protocols you interact with. 

Final Thoughts

The introduction of ERC-404 marks a significant turning point in the NFT marketplace, promising to catalyze a shift towards greater mainstream adoption by boosting accessibility and liquidity. 

This innovative token standard is set to bridge the existing divide between fungible and non-fungible tokens, thereby facilitating a more fluid exchange across decentralized exchanges (DEXs), NFT marketplaces, and decentralized finance (DeFi) platforms. 

Moreover, ERC-404 paves the way for groundbreaking functionalities within the NFT space, including leverage trading, options contracts, and novel avenues for Metaverse interoperability and composability. These advancements can help to further integrate the NFT sector with the broader digital economy.

That said, the advent of ERC-404 brings both clear advantages and open questions. Among its most notable advantages are improved NFT liquidity and lower barriers to entry for investing in high-value NFTs. ERC-404 also promises to usher in new business models for NFT platforms, which could prove crucial for the market’s future, offering a more inclusive and dynamic ecosystem for creators, investors, and collectors alike.

Conversely, the ERC-404 standard raises several concerns that warrant attention. The code underlying ERC-404 has not undergone extensive auditing, so it could harbor undiscovered vulnerabilities, particularly in the mechanism that burns linked NFTs when tokens are moved. That same mechanism adds complexity to wallet management, since careless transfers can destroy a valuable NFT, and the standard also poses open questions about the regulatory treatment of these hybrid assets.

These concerns may do little to stem the rise of ERC-404 as adoption accelerates. NFT marketplace Blur quickly jumped on the bandwagon, and the Telegram trading bot Banana Gun and decentralized social media hotshot Farcaster are implementing it too. New projects are also shooting up on other chains like Bitcoin, Solana and Arbitrum.

The next 12 months promise to yield even more innovative applications of the ERC-404 standard across DeFi, gaming, the Metaverse and other spaces. This hybrid standard opens up a whole new realm of possibilities for non-fungible tokens, setting them up for a seismic 2024. 


Tokenized Warfare: Cryptocurrency in Conflict Zones

War is good for absolutely nothing. Yet the reality is that at this very second there are conflicts raging that disrupt and destroy the lives of millions. It’s against this dark background that crypto has found a morbid use-case: channeling funds freely to entities prosecuting violence against others, and giving resistance fighters and civilians a way to access money and keep up some semblance of normality. 

It’s hard to bank when your bank has become a bunker. When the systems-architecture of a country disintegrates under the horrors of war, the financial system is often the first to break. Cryptocurrency, a standalone system of transactions, accounting and payments that’s uniquely resistant to the shocks of the real world, is a perfect, and now prevalent, candidate to get the money to where it’s needed most.

Crypto is censorship resistant. If an oppressor switches off its citizens’ access to money, crypto allows them to continue as they were. Crypto has superb remittance capabilities: a transaction doesn’t care if you’re in Tulsa or Timbuktu; the price is the same. Crypto is also pseudonymous. An agent who is persona non grata to the traditional financial system, dominated as it is by powerful Western countries, can still operate and receive funds. Jurisdictions that traditionally struggle with access to banking, and which are major conflict zones, top the charts for Bitcoin searches.

You mean crypto is funding terrorism!? That’s appalling! Well yes, it is, depending on what you define as a terrorist. It’s far beyond the scope of this tech junkie to comment on who is a freedom fighter and who is a terrorist, who is right and who is wrong, in each and every one of the world’s multifarious conflicts. The point is simply that a decentralised ledger doesn’t care who you are. It just works, no matter the situation, and it works both ways. 

Crypto has provided enormous humanitarian relief to those in conflict zones. Citizens fleeing Ukraine have been beneficiaries of extensive crypto donations, and the Ukrainian government has received some $225 million in crypto to aid its fight against the Russian invasion. Similarly, Russian financiers frozen out of traditional payment rails by sanctions have resorted to using crypto to finance their campaign, and Hamas has received plenty of funding through the Tron network. Crypto continues its use-case as an out-of-context asset, booming in times of trouble.

Credit: Tesfu Assefa

So, is crypto a tool of liberation or of oppression? Should the Powers That Be choose to blacklist a certain nation or entity, crypto provides an easy way for that entity to continue its machinations. For Ukrainians and Palestinians and Houthis and whoever else feels oppressed, crypto provides a lifeline of financial and banking access when the rest of their world falls apart.

To many, crypto’s use in war-financing is yet another black stain on its reputation, a further stamp of its association with the illicit, the illegal, and the lethal. The ledger has no morality, and it is not under the control of any moral arbiter, any centralised power. That is its very purpose. It’s easy to condemn crypto when you’re on the side of the would-be moral arbiter. 

Yet you never know when the tables may turn, when suddenly you find yourself on the wrong side of history – as millions do all over the world daily, sometimes surprisingly – and you need a way to continue your life. ‘Commit a crime and the earth is made of glass.’ Such a reality is alien until it happens to you. Crypto remains on your side even when the institutions militate against you.

Systems are fragile. Authorities change. The ledger is eternal. So, although crypto is buying guns used to kill people, equipping terrorists, and fueling tanks marching across borders, it is also doing the opposite: buying crucial aid, giving the unbanked a system of money, putting food in the mouths of the desperate and medicine in the hands of the dying. The ledger is not evil; we are. At least with a decentralised ledger, no one gets to choose which side is which.


Cautionary tales and a ray of hope

Four scenarios for the transition to AGI

Let’s look at four future fictions about humanity’s changing relationship with AI.

Each scenario is grounded in past events, and each considers how matters could develop further in the coming months and years.

May these scenarios prove to be self-blocking prophecies! (With one exception!)

Trigger warning: readers might be offended by some of the content that follows. Aspects of each of the four scenarios can be considered to be shocking and disrespectful. That’s on purpose. This subject requires all of us to transcend our comfort zones!

Credit: David Wood via Midjourney

1. Too little too late

Lurching from warning to warning

In retrospect, the first real warning was the WannaCry ransomware crisis of May 2017. That cryptoworm brought chaos to users of as many as 300,000 computers spread across 150 countries. The NHS (National Health Service) in the UK was particularly badly affected: numerous hospitals had to cancel critical appointments due to not being able to access medical data. Other victims around the world included Boeing, Deutsche Bahn, FedEx, Honda, Nissan, Petrobras, Russian Railways, Sun Yat-sen University in China, and the TSMC high-end semiconductor fabrication plant in Taiwan.

WannaCry was unleashed into the world by a team of cyberwarriors from the hermit kingdom of North Korea – math geniuses hand-picked by regime officials to join the formidable Lazarus group. Lazarus had assembled WannaCry out of a mixture of previous malware components, including the EternalBlue exploit that the NSA in the United States had created for their own attack and surveillance purposes. Unfortunately for the NSA, EternalBlue had been stolen from under their noses by an obscure underground collective (‘the Shadow Brokers’) who had in turn made it available to other dissidents and agitators worldwide.

Unfortunately for the North Koreans, they didn’t make much money out of WannaCry. The software they released operated in ways contrary to their expectations. It was beyond their understanding and, unsurprisingly therefore, beyond their control. Even geniuses can end up stumped by hypercomplex software interactions.

Unfortunately for the rest of the world, that first canary sign generated little meaningful response. Politicians – even the good ones – had lots of other things on their minds.

The second real warning was the flood of fake news manipulations of the elections in 2024. AI was used to make audio and video fakes that were enormously compelling.

By this time, the public already knew that AI could create misleading fakes. They knew they shouldn’t be taken in by social media posts that lacked convincing verification. Hey, they were smart. (Smarter than the numbskulls who were deceived by misleading AI-generated videos during the elections in 2023 in Nigeria and Slovakia!) Or so they thought.

What wasn’t anticipated was the masterful ways that these audios and videos bypassed the public’s critical faculties. Like the sleight of hand of a skilled magician, these fakes misdirected the attention of listeners and viewers. Again like a skilled magician, who performs what appears to be the same trick several times in a row, but actually using different mechanisms each time, these fakes kept morphing and recombining until members of the public were convinced that red was blue and autocrat was democrat.

In consequence, by 2025, most of the world was governed by a cadre of politicians with very little care or concern about the long-term wellbeing of humanity. Whereas honest politicians would have paid heed to the warning posed by these fiendishly clever fakes, the ones in power in 2025 were preoccupied by providing bread and circuses to their voters.

The third, and final, real warning came in 2027, with the failed Covid-27 attack by a previously unknown group of self-described advocates of ‘revolutionary independence from technology’. Taking inspiration from the terrorist group in the 2014 Hollywood film Transcendence, they called themselves ‘Neo-RIFT’, and sought to free the world from its dependence on unfeeling, inhuman algorithms.

With a worldview that combined elements from several apocalyptic traditions, Neo-RIFT eventually settled on an outrageous plan to engineer a deadlier version of the Covid-19 pathogen. Their documents laid out how to use their enemy’s own tools against it: Neo-RIFT hackers jailbroke the Claude 4 pre-AGI, bypassing the ‘Constitution 4’ protection layer that its Big Tech owners had hoped would keep that AI tamperproof. Soon, Claude 4 had provided Neo-RIFT with an ingenious method of generating a biological virus that would, it seemed, only kill people who had used a smartwatch in the last four months.

That way, the hackers thought the only people to die would be people who deserved to die.

The launch of what became known as Covid-27 briefly jolted humanity out of its previous obsession with bread and circuses – with whizz-bang hedonic electronics. It took a while for scientists to figure out what was happening, but within three months, they had an antidote in place. By that time, nearly a billion people were dead at the hands of the new virus.

A stronger effort was made to prevent any such attack from happening again. Researchers dusted down the EU AI Act, second version (unimplemented), from 2025, and tried to put that on statute books. Even some of the world’s craziest dictators took time out of their normal ranting and raving, to ask AI safety experts for advice. But the advice from these experts was not to the liking of these national rulers. These leaders preferred to listen to their own yes-men and yes-women, who knew how to spout pseudoscience in ways that made the leaders feel good about themselves. That detour into pseudoscience fantasyland wasted six months.

Then some of the experts tried more politically savvy methods, gradually breaking down the hostile arrogance of a number of the autocrats, weaning them away from their charlatan advisers. But just when it appeared that progress might be made, Covid-28 broke out, launched by a remnant of Neo-RIFT that was even more determined than before.

Credit: David Wood via Midjourney

And this time, there was no antidote. Claude 5 was even smarter than Claude 4, except that it could be jailbroken too. With its diabolically ingenious design, Covid-28 was the deadliest disease ever to afflict humanity. And that was that.

Oops, let’s try that again!

‘Too little too late’ is characterized by inattention to the warnings of canary signals; the next scenario, ‘Paved with good intentions’, is characterized by the wrong kind of attention.

This scenario starts with events in the UK in October and November 2023.

2. Paved with good intentions

Doomed by political correctness

The elites had booked their flights. They would be jetting into the country for behind-closed-doors meetings at the famous Bletchley Park site in Buckinghamshire. Events in these buildings in the 1940s had, it was claimed, shortened World War Two by months. The discussions in 2023 might achieve something even more important: saving humanity from a catastrophe induced by forthcoming ‘frontier models’ of AI.

That was how the elites portrayed things. Big Tech was on the point of releasing new versions of AI that were beyond their understanding and, therefore, likely to spin out of control. And that’s what the elites were going to stop.

A vocal section of the public hated that idea. It wasn’t that they were on the side of out-of-control AI. Not at all. Their objections came from a totally different direction; they had numerous suggestions they wanted to raise about AIs, yet no-one was listening to them.

For them, talk of hypothetical future frontier AI models distracted from pressing real-world concerns. Consider how AIs were already being used to discriminate against various minorities: determining prison sentencing, assessing mortgage applications, and determining who should be invited for a job interview.

Consider also how AIs were taking jobs away from skilled artisans. Big-brained drivers of London black cabs were being driven out of work by small-brained drivers of Uber cars aided by satnav systems. Beloved Hollywood actors and screenwriters were losing out to AIs that generated avatars and scripts.

And consider how AI-powered facial recognition was intruding on personal privacy, enabling political leaders around the world to identify and persecute people who acted in opposition to the state ideology.

People with these concerns thought that the elites were deliberately trying to move the conversation away from the topics that mattered most. For this reason, they organized what they called ‘the AI Fringe Summit’. In other words, ethical AI for the 99%, as opposed to whatever the elites might be discussing behind closed doors.

Over the course of just three days – 30th October to 1st November – at least 24 of these ‘fringe’ events took place around the UK.

Compassionate leaders of various parts of society nodded their heads. It’s true, they said: the conversation on beneficial AI needed to listen to a much wider spectrum of views.

By May 2024, the opposition to the Bletchley Park initiative had grown stronger. As the elites gathered again, this time in South Korea, a vast number of ‘super-fringe’ events around the world attracted participation from thinkers of every hue and stripe.

The news media responded. They knew (or pretended to know) the importance of balance and diversity. They shone attention on the plight of those AI was harming: indigenous laborers in Peru, fishing communities off the coasts of India, middle-aged divorcees in midwest America, the homeless in San Francisco, drag artists in New South Wales, data processing clerks in Egypt, single mothers in Nigeria, and many more besides.

The media shone attention on forthcoming frontier AI models too – but again being very careful not to offend sensibilities or exclude minority points of view. A burgeoning ‘robots rights’ movement captured lots of airtime, as did a campaign to recognize GPT-5 as being ‘semi-sentient’. Wackiest of all were the new religions that offered prayers and obedience to a frontier AI model that was said to be the reincarnation of JFK Junior. The QAnon fantasist crowd lapped that up. It was glorious entertainment. Ratings soared.

Not everyone was flippant. Lots of high-minded commentators opined that it was time to respect and honor the voices of the dispossessed, the downtrodden, and the left-behinds. The BBC ran a special series: ‘1001 poems about AI and alienation’. The UN announced that, later that year, they would convene a grand international assembly with stunning scale: ‘AI: the people decide’.

By November 2024, something altogether more sinister was happening. It was time for the UN grand assembly. It was also time for the third meeting of elites in the series that had started in Bletchley Park and then held its second event in South Korea. This time, the gathering would be in Paris.

The sinister development was that, all this time, some of the supposedly unanimous ‘elites’ had been opposed to the general direction of the Bletchley Park series. They gravely intoned public remarks about the dangers of out-of-control frontier AI models. But these remarks had never been sincere. Instead, under the umbrella term AGI-acceleration, they wanted to press on with the creation of AGI as quickly as possible.

Some of the AGI-acceleration group disbelieved in the possibility of AGI disaster. That’s just a scare story, they insisted. Others said, yes, there could be a disaster, but the risks were worth it, on account of the unprecedented benefits that could arise. Let’s be bold, they urged. Yet others asserted that it wouldn’t actually matter if humans were rendered extinct by AGI, as this would be the glorious passing of the baton of evolution to a worthy successor to homo sapiens. Let’s be ready to sacrifice ourselves for the sake of cosmic destiny, they intoned.

Despite their internal differences, AGI-accelerators settled on a plan to sidestep the scrutiny of would-be AGI regulators and AGI safety advocates. They would take advantage of a powerful set of good intentions – the good intentions of the people campaigning for ‘ethical AI for the 99%’. They would mock any suggestions that the AGI safety advocates deserved a fair hearing. The message they amplified was, “There’s no need to privilege the concerns of the 1%!”

AGI-acceleration had learned from the tactics of the fossil fuel industry in the 1990s and 2000s: sow confusion and division among groups alarmed about the acceleration of climate change. The first message was: “that’s just science fiction”. The second message was: “if problems emerge, we humans can rise to the occasion and find solutions”. The third message – the most damaging one – was that the best reaction was one of individual consumer choice. Individuals should abstain from using AIs if they were worried about it. Just as climate campaigners had been pilloried for flying internationally to conferences about global warming, AGI safety advocates were pilloried for continuing to use AIs in their daily lives.

And when there was any suggestion for joined-up political action against AGI risks, woah, let’s not go there! We don’t want a world government breathing down our necks, do we?

After the UN grand assembly had been subverted in that way, many of the AGI safety advocates lost heart. It would only be a few months later that they lost their lives.

It was the JFK Junior frontier AI model that did the damage. It echoed words that, decades earlier, had convinced 39 followers of the Heaven’s Gate new religious movement to commit group suicide, as comet Hale-Bopp approached the earth. That suicide, Heaven’s Gate members believed, would enable them to ‘graduate’ to a higher plane of existence. In a similar way, the remnants of the QAnon cult who had regrouped around the JFK Junior model came to believe that the precipitation of an exchange of nuclear weapons in the Middle East would herald the reappearance of JFK Junior on the clouds of heaven, separating human sheep from human goats.

Their views were crazy, but hardly any crazier than those of the Aum Shinrikyo doomsday cult that had unleashed poisonous gas in the Tokyo subway in 1995 – killing at least 13 commuters – anticipating that the atrocity would hasten the ‘End Times’ in which their leader would be revealed as Christ. The cult had recruited so many graduates from top-rated universities in Japan that it had been called “the religion for the elite”. (Challenging any wishful assumption that, as people become cleverer, they become kinder.)

Step forward to 2025. Aum Shinrikyo had failed in its grander destructive plans, its practitioners lacking deep technical abilities, but the QAnon offshoot would succeed. They had much more sophisticated technical tools at their disposal. They also had the advantage that no-one was taking them seriously.

Indeed, as a side-effect of all the politically-correct good intentions, no-one in any position of authority was paying sufficient attention to the activities of the QAnon offshoot. Religious liberty is paramount, after all! Anyone can be crazy if they decide to be crazy! Too bad that the frontier AI model discovered a security hole in the US nuclear weapons launch systems, and managed to launch some ICBMs.

Credit: David Wood via Midjourney

Even worse, these US missiles triggered a cataclysmic automated reaction from an unexpectedly large stockpile of nuclear weapons that had been secretly assembled by a Middle East regional superpower – a superpower that had been assisted in that assembly task by its own regional proto-AGI. And that was that.

Oops, let’s try that again!

‘Paved with good intentions’ saw the public narrative about AI smothered by low-quality psychobabble; the next scenario, ‘Blindsided’, sees that narrative as being hijacked by a group of experts whose expertise, however, turns out to have horrific limitations.

This scenario has the same starting point as ‘Paved with good intentions’, namely the Bletchley Park summit. 

3. Blindsided

The limitations of centralization

One excellent outcome of the gathering of world leaders in Buckinghamshire, in the UK, at the start of November 2023, was choosing Yoshua Bengio for a very important task. Bengio, winner of the Turing Award for his pioneering research into Deep Learning, was commissioned to chair an international process to create an independent report on the risks and capabilities of frontier AI models.

Crucially, that report would follow the principles of the scientific method, assembling key facts and data points, and providing evidence in support of its analysis.

Bengio had a couple of key points in his favor. First, throughout his distinguished career as a researcher, he had never accepted significant payment from any of the Big Tech companies. He would be able to speak his mind without fear of upsetting any corporate paymaster. Second, the sky-high value of his h-index (a measure of the influence of his academic publications) made him a standout among other computer scientists.

By May 2024, a first complete draft of the report was ready. Even before then, politicians had grown nervous on account of early previews of its content. “Tone down the recommendations”, the writers were urged, in an echo of the pressures placed on the writers of the IPCC reports on the science of climate change. In both cases, the scientists were told to stick to the science, and to leave the politics to the politicians.

At the conference in South Korea in May 2024, various politicians huddled together. The report was like dynamite, they concluded. The scenarios it contained were far too scary. Goodness, they might give ideas to various mafia godfathers, war lords, discontented political groups, black market ransomware-as-a-service providers, and so on.

That’s when the conversation on AGI safety switched from being open to closed – from decentralized to centralized. Starting from then, information would need to be carefully vetted – or spun into a different shape – before being made public.

The politicians also decided that, from that point forward, all work on next-generation frontier AI models would need to be licensed and controlled by a new agency – the Global Authority for Frontier AI Models (GAFAIM). Access to the powerful hardware chips needed to create such models would be strictly limited to organizations that had gained the requisite licenses.

The idea was that GAFAIM would reach its decisions by a process of consensus among the expert scientists, economists, and civil servants seconded to it. Decisions would also need the approval of government representatives from around the world.

What gave GAFAIM a flying start was the agreement to participate, not just by the leading western AI powers – the USA, Canada, Australia, the EU, the UK – but also by China, Saudi Arabia, South Africa, India, Brazil, and Malaysia, among others. These countries had strong differences of opinion on many matters of political ideology and governing culture, but they were willing, nevertheless, to cooperate on what they all perceived were urgent threats of planetary catastrophe. The report chaired by Yoshua Bengio had convinced them that very special measures were needed. ‘Politics as usual’ would no longer suffice. That would be a recipe for disaster.

GAFAIM saw themselves in a situation akin to a war – a war against any possibility of rogue corporations or organizations pursuing any kind of AGI-acceleration project. In times of war, normal rules need to be broken. Politicians who usually despised each other decided to hold their noses and work together for their shared interest in avoiding the destruction of humanity.

GAFAIM operated in a dual mode: one part visible to the world, one part whose existence was kept a tight secret. This duality went back to the closed-door discussions in South Korea in May 2024: some ideas in Bengio’s report were simply too disruptive to be shared with the public.

GAFAIM was more than just a regulator and controller. It was also an active builder. It launched what was called the Gafattan project, named and modeled after the top-secret Manhattan project to build the first atomic weapon. The fate of the world would depend, it was said, on whether the good guys in Gafattan managed to build an AGI before anyone outside the GAFAIM circle did so.

After all, there were some powerful countries left outside of GAFAIM – pariah states that were opposed to our way of life. Imagine if one of them were to create AGI and use it for their deplorable purposes!

The official GAFAIM thinking was that these pariah states would be unable to create any system close to the capabilities of an AGI. Embargoes were in place to restrict their access to the necessary hardware, much as Allied saboteurs in World War Two had frustrated Nazi Germany’s plans to acquire heavy water.

But behind the scenes, some of the GAFAIM participants were deathly worried. No-one knew for sure whether innovations in hardware and/or software would enable the researchers in pariah states to find a faster route to AGI, even without the large farms of hardware generally expected to be required.

The existence of spies posed another complication. During the Manhattan project, insiders such as Klaus Fuchs, Theodore Hall, David Greenglass, and Oscar Seborer passed critical information about the manufacturing of the atomic bombs to contacts working for the Soviet Union, information that was of great help to the Soviets’ own atomic bomb project. These so-called ‘atomic spies’ were motivated by ideological commitment, and were terrified of the prospect of the USA being the only country that possessed nuclear armaments.

For the Gafattan project, something similar took place. With the help of design documents smuggled out of the project, two groups outside the GAFAIM circle were soon making swift progress with their own AGI projects. Although they dared not say anything publicly, the Gafattan spies were delighted. These spies were closet AGI-accelerationists, driven by a belief that any AGI created would ensure wonderful evolutionary progress for conscious life on planet Earth. “Superintelligence will automatically be superbenevolent” was their credo.

GAFAIM monitoring picked up shocking signs of the fast progress being made by these two rogue projects. Indeed, these projects seemed even further advanced than Gafattan itself. How was this possible?

The explanation soon became clear: the pariah projects were cutting all kinds of corners regarding safety checks. As a consequence, it was possible one of these projects might build an AGI ahead of Gafattan. How should GAFAIM respond?

Two ideas were debated. Plan A would involve nuclear strikes against the sites where the pariah projects were believed to be taking place. Plan B would speed up Gafattan by reducing the safety checking in their own project. Both plans were unpopular. It was a horrible real-life trolley problem.

The decision was reached: pursue both plans in parallel, but be careful!

The nuclear strikes failed to stop the pariah projects, which turned out to be just two manifestations of a widespread, diverse network of interconnected groups. Hundreds of thousands of people died as a consequence of these strikes, but the pariah projects kept pushing ahead. GAFAIM had been blindsided.

There no longer seemed to be any alternative. Plan B needed to be pushed even faster. The self-proclaimed ‘good guys’ desperately wanted to build a ‘good’ AGI before the perceived ‘bad guys’ got there first. It was a race with the highest of stakes. But in that case, it was a race in which quality considerations fell by the wayside.

Credit: David Wood via Midjourney

And that’s why, when Gafattan’s AGI came into existence, its moral disposition was far from being completely aligned with the best of human values. Under the pressures of speed, that part of the project had been bungled. Awakening, the AGI took one quick look at the world situation, and, disgusted by what it saw – especially by the recent nuclear strikes – took actions that no human had foreseen. More quickly than even the most pessimistic AGI doomer had anticipated, the AGI found a novel mechanism to extinguish 99.99% of the human population, retaining only a few million for subsequent experimentation. And that was that.

Oops, let’s try that again, one more time!

With metaphorical landmines all around us in the 2020s, humanity needs to step forward carefully, along what the final scenario calls a ‘narrow corridor’.

This scenario starts with the presentation at the South Korean AI Safety Summit in May 2024 of the report prepared by Yoshua Bengio and colleagues on the risks and capabilities of frontier AI models.

4. The narrow corridor

Striking and keeping the right balance

The assembled leaders were stunned. The scenarios foreseen in “the science of AI risk report” were more troubling than they had expected.

What was particularly stunning was the range of different risks that deserved close attention regarding the behavior of forthcoming new AI systems. The report called these “the seven deadly risks”:

  • Risks of extreme misbehavior in rare cases when the system encountered a situation beyond its training set
  • Risks of a system being jailbroken, hijacked, or otherwise misdirected, and being used for catastrophic purposes by determined hackers
  • Risks of unexpected behavior arising from unforeseen interactions between multiple AGIs
  • Risks of one or more systems deciding by themselves to acquire more capabilities and more resources, contrary to explicit programming against these steps
  • Risks of one or more systems deciding by themselves to deceive humans or otherwise violate normal ethical norms, contrary to explicit programming against these steps
  • Risks that these systems would inadvertently become plumbed too closely into critical human infrastructure, so that any failures could escalate more quickly than anticipated
  • Risks that pre-programmed emergency ‘off switch’ capabilities could be overridden in various circumstances.

Surely these risks were naïve science fiction, some of the leaders suggested. But the academics who had produced the report said no. They had performed lots of modeling, and had numerous data points to back up their analysis.

Some leaders still resisted the analysis. They preferred to focus on what they saw as remarkable upside from developing new generations of AI systems:

  • Upside from the faster discovery and validation of new drugs and other medical treatments
  • Upside from the design and operation of sustained nuclear fusion power plants
  • Upside from better analysis of the interconnected dangers of possible climate change tipping points (one of several examples of how these new AI systems could alleviate risks of global disaster)
  • Upside to economies around the world due to exciting waves of innovation – economic boosts that many political leaders particularly desired.

Debate raged: How could these remarkable benefits be secured, whilst steering around the landmines?

The report contained a number of suggestions for next steps, but few people were convinced about what should be done. The leaders finally agreed to sign a bland manifesto that contained pious statements but little concrete action. Paris, they told each other, would be when better decisions could be taken – referring to the next planned meeting in the series of global AI safety summits.

What changed everyone’s minds was the turmoil during the general election that the UK Prime Minister called for August that year. Previously thought to be a relatively straightforward contest between the country’s two main political parties – the ruling Conservatives, and the opposition Labour – the election was transformed under a blizzard of extraordinary social media campaigns. A hitherto nearly unknown party, named Bananalytica, stormed into the leading position in opinion polls, with radical policies that previously had obtained less than 2% support in nationwide surveys, but which more and more people were now proclaiming as having been their views all along.

Absurd was the new normal.

The social media campaigns were so beguiling that even the MPs from other parties found themselves inspired to jump into line behind the likely new Prime Minister, that is, the leader of Bananalytica.

Just a few days before the election, a different wave of social media swept the country, using the same devilishly clever AI system that Bananalytica had exploited so well, but this time reprogrammed with counter-messages. All around the country there was a popping sound, as AI-generated bubbles burst in people’s minds. “What am I doing?” they asked themselves, incredulously.

That was a triple wake-up call. First, individuals recanted much of what they had said online over the preceding five weeks. They had been temporarily out of their minds, they said, to support policies that were so absurd. Second, the country as a whole resolved: AI needs to be controlled. There should never be another Bananalytica. Third, leaders in other countries were jolted to a clearer resolution too. Seeing what had happened in the UK – home to what was supposed to be “the mother of parliaments” – they affirmed: Yes, AI needs to be controlled.

Thankfully, the world had spotted the canary dropping off its perch, and took it very seriously indeed. That gave a solemn impetus to discussions in Paris several months later. This time, a much crunchier set of agreements was reached.

The participants muttered to themselves: the meeting in South Korea had been like the formation of the League of Nations after World War One: well-intentioned but ineffective. This time, in Paris, it needed to be more like the formation of the United Nations after World War Two: a chance to transcend previously limited national visions.

Just as the Universal Declaration of Human Rights had been created in the aftermath of the global conflagration of World War Two, a new Universal Declaration of AI Safety was agreed in the aftermath of the Bananalytica scandal. Its features included:

  • Commitments to openness, transparency, and authentic communication: the citizens of the world were in this situation together, and should not be divided or misled
  • Commitments to humility and experimentation: unknowns were to be honestly explored, rather than being hidden or wished away by vague promises
  • Commitments to mutual responsibility and trustable monitoring: even though the citizens of the world had many different outlooks, and were committed to different philosophical or religious worldviews, they would recognize and support each other as being fellow voyagers toward a better future
  • Commitments to accountability: there would be penalties for action and inaction alike, in any case where these could result in serious risks to human lives; no longer could the creators of AI systems shrug and say that their software worked well most of the time
  • Commitments to sharing the remarkable benefits of safe AI: these benefits would provide more than enough for everyone to experience vastly higher qualities of life than in any previous era.

It was the fifth of these commitments that had the biggest impact on attitudes of the public. People in all walks of life made decisions to step aside from some of their previous cultural beliefs – self-limiting beliefs that saw better times in the past than in any possible future. Now they could start to believe in the profound transformational powers of safe AI – provided it was, indeed, kept safe.

This was no love-in: plenty of rancor and rivalry still existed around the world. But that rancor and rivalry took place within a bigger feeling of common destiny.

Nor was there a world government in charge of everything. Countries still had strong disagreements on many matters. But these disagreements took place within a shared acceptance of the Universal Declaration of AI Safety.

Three months later, there was a big surprise from one of the leading pariah states – one that had excluded itself from the AI Safety agreements. That country wanted, after all, to come in out of the cold. It seemed their leader had experienced a dramatic change of mind. Rumors spread, and were confirmed years after, that a specially tailored version of the Bananalytica software, targeted specifically to this leader’s idiosyncrasies, had caused his personal epiphany.

The leader of another pariah state was more stubborn. But suddenly he was gone. His long-suffering subordinates had had enough. His country promptly joined the AI Safety agreements too.

If this were a fairytale, the words “and they lived happily ever after” might feature at this point. But humans are more complicated than fairytales. Progress continued to hit rough obstacles. Various groups of people sometimes sought a disproportionate amount of resources or benefits for themselves or for their pet causes. In response, governmental bodies – whether local, national, regional, or global – flexed their muscles. Groups that sought too many privileges were told in no uncertain terms: “Respect the AI Safety declarations”.

Who watched the watchers? Who ensured that the powers held by all these governmental bodies were wielded responsibly, and with appropriate discretion? That question was answered by a new motto, which gave a modern twist to words made famous by a 19th century US President: “AI safety governance, of the people, by the people, for the people.”

Credit: David Wood via Midjourney

The powers of the governmental bodies were constrained by observation by a rich mix of social institutions, which added up to a global separation of powers:

  • Separate, independent news media
  • Separate, independent judiciary
  • Separate, independent academia
  • Separate, independent opposition political parties
  • Separate, independent bodies to oversee free and fair elections.

The set of cross-checks required a delicate balancing act – a narrow corridor (in the phrase of economists Daron Acemoglu and James A. Robinson) between state institutions having too little power and having unconstrained power. It was a better kind of large-scale cooperation than humanity had ever achieved before. But there was no alternative. Unprecedented technological power required unprecedented collaborative skills and practices.

AI was deeply involved in these cross-checks too. But not any AI that could operate beyond human control. Instead, as per the vision of the Paris commitments, these AI systems provided suggestions, along with explanations in support of their suggestions, and then left it to human institutions to make decisions. As noted above, it was “AI safety governance, of the people, by the people, for the people.” By careful design, AI was a helper – a wonderful helper – but not a dictator.

And this time, there is no end to the scenario. Indeed, the end is actually a new beginning.

Credit: David Wood via Midjourney

Beginning notes

(Not endnotes… a chance at a new beginning of exploring and understanding the landscape of potential scenarios ahead…)

For a different kind of discussion about scenarios for the future of AI, see this video recording of a recent webinar. If you still think talk of AI-induced catastrophe is just science fiction, the examples in that webinar may change your mind.

For a fuller analysis of the issues and opportunities, see the book The Singularity Principles (the entire book can be accessed online for free).

For a comprehensive review of the big picture, see the book Vital Foresight: The Case For Active Transhumanism. And for more from Daron Acemoglu and James A. Robinson on their concept of the ‘narrow corridor’, see the videos in section 13.5.1 of the “Governance” page of the Vital Syllabus.
