
Cautionary tales and a ray of hope

Feb. 07, 2024.
29 min. read.

David Wood presents four cautionary tales about the transition to AGI, each grounded in past events and considering how matters could develop further in the coming months and years. 

About the Writer

David Wood


David Wood is chair of London Futurists and the author of eight books about the future — including The Singularity Principles and Vital Foresight. He co-hosts the London Futurists Podcast.

Credit: Tesfu Assefa

Four scenarios for the transition to AGI

Let’s look at four future fictions about humanity’s changing relationship with AI.

Each scenario is grounded in past events, and each considers how matters could develop further in the coming months and years.

May these scenarios prove to be self-blocking prophecies! (With one exception!)

Trigger warning: readers might be offended by some of the content that follows. Aspects of each of the four scenarios can be considered to be shocking and disrespectful. That’s on purpose. This subject requires all of us to transcend our comfort zones!

Credit: David Wood via Midjourney

1. Too little too late

Lurching from warning to warning

In retrospect, the first real warning was the WannaCry ransomware crisis of May 2017. That cryptoworm brought chaos to users of as many as 300,000 computers spread across 150 countries. The NHS (National Health Service) in the UK was particularly badly affected: numerous hospitals had to cancel critical appointments due to not being able to access medical data. Other victims around the world included Boeing, Deutsche Bahn, FedEx, Honda, Nissan, Petrobras, Russian Railways, Sun Yat-sen University in China, and the TSMC high-end semiconductor fabrication plant in Taiwan.

WannaCry was unleashed into the world by a team of cyberwarriors from the hermit kingdom of North Korea – math geniuses hand-picked by regime officials to join the formidable Lazarus group. Lazarus had assembled WannaCry out of a mixture of previous malware components, including the EternalBlue exploit that the NSA in the United States had created for their own attack and surveillance purposes. Unfortunately for the NSA, EternalBlue had been stolen from under their noses by an obscure underground collective (‘the Shadow Brokers’) who had in turn made it available to other dissidents and agitators worldwide.

Unfortunately for the North Koreans, they didn’t make much money out of WannaCry. The software they released operated in ways contrary to their expectations. It was beyond their understanding and, unsurprisingly therefore, beyond their control. Even geniuses can end up stumped by hypercomplex software interactions.

Unfortunately for the rest of the world, that first canary sign generated little meaningful response. Politicians – even the good ones – had lots of other things on their minds.

The second real warning was the flood of fake news manipulations of the elections in 2024. AI was used to make audio and video clips that were enormously compelling.

By this time, the public already knew that AI could create misleading fakes. They knew they shouldn’t be taken in by social media posts that lacked convincing verification. Hey, they were smart. (Smarter than the numbskulls who were deceived by misleading AI-generated videos during the elections in 2023 in Nigeria and Slovakia!) Or so they thought.

What wasn’t anticipated was the masterful way these audio and video fakes bypassed the public’s critical faculties. Like the sleight of hand of a skilled magician, they misdirected the attention of listeners and viewers. And like a skilled magician who appears to perform the same trick several times in a row, while actually using a different mechanism each time, these fakes kept morphing and recombining until members of the public were convinced that red was blue and autocrat was democrat.

In consequence, by 2025, most of the world was governed by a cadre of politicians with very little care or concern about the long-term wellbeing of humanity. Whereas honest politicians would have paid heed to the warning posed by these fiendishly clever fakes, the ones in power in 2025 were preoccupied by providing bread and circuses to their voters.

The third, and final, real warning came in 2027, with the failed Covid-27 attack by a previously unknown group of self-described advocates of ‘revolutionary independence from technology’. Taking inspiration from the terrorist group in the 2014 Hollywood film Transcendence, they called themselves ‘Neo-RIFT’, and sought to free the world from its dependence on unfeeling, inhuman algorithms.

With a worldview that combined elements from several apocalyptic traditions, Neo-RIFT eventually settled on an outrageous plan to engineer a deadlier version of the Covid-19 pathogen. Their documents laid out a scheme to use their enemy’s own tools against it: Neo-RIFT hackers jailbroke the Claude 4 pre-AGI, bypassing the ‘Constitution 4’ protection layer that its Big Tech owners had hoped would keep that AI tamperproof. Soon, Claude 4 had provided Neo-RIFT with an ingenious method of generating a biological virus that would, it seemed, only kill people who had used a smartwatch in the last four months.

That way, the hackers thought the only people to die would be people who deserved to die.

The launch of what became known as Covid-27 briefly jolted humanity out of its previous obsession with bread and circuses – with whizz-bang hedonic electronics. It took a while for scientists to figure out what was happening, but within three months, they had an antidote in place. By that time, nearly a billion people were dead at the hands of the new virus.

A stronger effort was made to prevent any such attack from happening again. Researchers dusted down the EU AI Act, second version (unimplemented), from 2025, and tried to put it on the statute books. Even some of the world’s craziest dictators took time out of their normal ranting and raving to ask AI safety experts for advice. But the advice from these experts was not to the liking of these national rulers. These leaders preferred to listen to their own yes-men and yes-women, who knew how to spout pseudoscience in ways that made the leaders feel good about themselves. That detour into pseudoscience fantasyland wasted six months.

Then some of the experts tried more politically savvy methods, gradually breaking down the hostile arrogance of a number of the autocrats, weaning them away from their charlatan advisers. But just when it appeared that progress might be made, Covid-28 broke out, launched by a remnant of Neo-RIFT that was even more determined than before.

Credit: David Wood via Midjourney

And this time, there was no antidote. Claude 5 was even smarter than Claude 4 – except that it could be jailbroken too. With its diabolically ingenious design, Covid-28 was the deadliest disease ever to afflict humanity. And that was that.

Oops, let’s try that again!

‘Too little too late’ is characterized by inattention to the warnings of canary signals; the next scenario, ‘Paved with good intentions’, is characterized by the wrong kind of attention.

This scenario starts with events in the UK in October and November 2023.

2. Paved with good intentions

Doomed by political correctness

The elites had booked their flights. They would be jetting into the country for behind-closed-doors meetings at the famous Bletchley Park site in Buckinghamshire. Events in these buildings in the 1940s had, it was claimed, shortened World War Two by months. The discussions in 2023 might achieve something even more important: saving humanity from a catastrophe induced by forthcoming ‘frontier models’ of AI.

That was how the elites portrayed things. Big Tech was on the point of releasing new versions of AI that were beyond their understanding and, therefore, likely to spin out of control. And that’s what the elites were going to stop.

A vocal section of the public hated that idea. It wasn’t that they were on the side of out-of-control AI. Not at all. Their objections came from a totally different direction; they had numerous suggestions they wanted to raise about AIs, yet no-one was listening to them.

For them, talk of hypothetical future frontier AI models distracted from pressing real-world concerns. Consider how AIs were already being used to discriminate against various minorities: determining prison sentences, assessing mortgage applications, and deciding who should be invited for a job interview.

Consider also how AIs were taking jobs away from skilled artisans. Big-brained drivers of London black cabs were being driven out of work by small-brained drivers of Uber cars aided by satnav systems. Beloved Hollywood actors and screenwriters were losing out to AIs that generated avatars and scripts.

And consider how AI-powered facial recognition was intruding on personal privacy, enabling political leaders around the world to identify and persecute people who acted in opposition to the state ideology.

People with these concerns thought that the elites were deliberately trying to move the conversation away from the topics that mattered most. For this reason, they organized what they called ‘the AI Fringe Summit’. In other words, ethical AI for the 99%, as opposed to whatever the elites might be discussing behind closed doors.

Over the course of just three days – 30th October to 1st November – at least 24 of these ‘fringe’ events took place around the UK.

Compassionate leaders of various parts of society nodded their heads. It’s true, they said: the conversation on beneficial AI needed to listen to a much wider spectrum of views.

By May 2024, the opposition to the Bletchley Park initiative had grown stronger. As the elites gathered again, this time in South Korea, a vast number of ‘super-fringe’ events around the world attracted participation from thinkers of every hue and stripe.

The news media responded. They knew (or pretended to know) the importance of balance and diversity. They shone attention on the plight AI was causing – to indigenous laborers in Peru, to fishing communities off the coasts of India, to middle-aged divorcees in midwest America, to the homeless in San Francisco, to drag artists in New South Wales, to data processing clerks in Egypt, to single mothers in Nigeria, and to many more besides.

The media shone attention on forthcoming frontier AI models too – but again being very careful not to offend sensibilities or exclude minority points of view. A burgeoning ‘robots rights’ movement captured lots of airtime, as did a campaign to recognize GPT-5 as being ‘semi-sentient’. Wackiest of all were the new religions that offered prayers and obedience to a frontier AI model that was said to be the reincarnation of JFK Junior. The QAnon fantasist crowd lapped that up. It was glorious entertainment. Ratings soared.

Not everyone was flippant. Lots of high-minded commentators opined that it was time to respect and honor the voices of the dispossessed, the downtrodden, and the left-behinds. The BBC ran a special series: ‘1001 poems about AI and alienation’. The UN announced that, later that year, they would convene a grand international assembly with stunning scale: ‘AI: the people decide’.

By November 2024, something altogether more sinister was happening. It was time for the UN grand assembly. It was also time for the third meeting of elites in the series that had started in Bletchley Park and then held its second event in South Korea. This time, the gathering would be in Paris.

The sinister development was that, all this time, some of the supposedly unanimous ‘elites’ had been opposed to the general direction of the Bletchley Park series. They gravely intoned public remarks about the dangers of out-of-control frontier AI models. But these remarks had never been sincere. Instead, under the umbrella term AGI-acceleration, they wanted to press on with the creation of AGI as quickly as possible.

Some of the AGI-acceleration group disbelieved in the possibility of AGI disaster. That’s just a scare story, they insisted. Others said, yes, there could be a disaster, but the risks were worth it, on account of the unprecedented benefits that could arise. Let’s be bold, they urged. Yet others asserted that it wouldn’t actually matter if humans were rendered extinct by AGI, as this would be the glorious passing of the baton of evolution to a worthy successor to Homo sapiens. Let’s be ready to sacrifice ourselves for the sake of cosmic destiny, they intoned.

Despite their internal differences, AGI-accelerators settled on a plan to sidestep the scrutiny of would-be AGI regulators and AGI safety advocates. They would take advantage of a powerful set of good intentions – the good intentions of the people campaigning for ‘ethical AI for the 99%’. They would mock any suggestions that the AGI safety advocates deserved a fair hearing. The message they amplified was, “There’s no need to privilege the concerns of the 1%!”

AGI-acceleration had learned from the tactics of the fossil fuel industry in the 1990s and 2000s: sow confusion and division among groups alarmed about the acceleration of climate change. The first message was: “that’s just science fiction”. The second message was: “if problems emerge, we humans can rise to the occasion and find solutions”. The third message – the most damaging one – was that the best reaction was one of individual consumer choice. Individuals should abstain from using AIs if they were worried about it. Just as climate campaigners had been pilloried for flying internationally to conferences about global warming, AGI safety advocates were pilloried for continuing to use AIs in their daily lives.

And when there was any suggestion of joined-up political action against AGI risks, whoa, let’s not go there! We don’t want a world government breathing down our necks, do we?

After the UN grand assembly had been subverted in that way, many of the AGI safety advocates lost heart. It would only be a few months later that they lost their lives.

It was the JFK Junior frontier AI model that did the damage. It echoed words that, decades earlier, had convinced 39 followers of the Heaven’s Gate new religious movement to commit group suicide, as comet Hale-Bopp approached the earth. That suicide, Heaven’s Gate members believed, would enable them to ‘graduate’ to a higher plane of existence. In a similar way, the remnants of the QAnon cult who had regrouped around the JFK Junior model came to believe that the precipitation of an exchange of nuclear weapons in the Middle East would herald the reappearance of JFK Junior on the clouds of heaven, separating human sheep from human goats.

Their views were crazy, but hardly any crazier than those of the Aum Shinrikyo doomsday cult that had unleashed poisonous gas in the Tokyo subway in 1995 – killing at least 13 commuters – anticipating that the atrocity would hasten the ‘End Times’ in which their leader would be revealed as Christ. The cult had recruited so many graduates from top-rated universities in Japan that it had been called “the religion for the elite”. (Challenging any wishful assumption that, as people become cleverer, they become kinder.)

Step forward to 2025. Aum Shinrikyo had failed in their grander destructive plans, because their practitioners lacked deep technical abilities, but the QAnon offshoot would succeed. They had much more sophisticated technical tools at their disposal. They also had the advantage that no-one was taking them seriously.

Indeed, as a side-effect of all the politically-correct good intentions, no-one in any position of authority was paying sufficient attention to the activities of the QAnon offshoot. Religious liberty is paramount, after all! Anyone can be crazy if they decide to be crazy! Too bad that the frontier AI model discovered a security hole in the US nuclear weapons launch systems, and managed to launch some ICBMs.

Credit: David Wood via Midjourney

Even worse, these US missiles triggered a cataclysmic automated reaction from an unexpectedly large stockpile of nuclear weapons that had been secretly assembled by a Middle East regional superpower – a superpower that had been assisted in that assembly task by its own regional proto-AGI. And that was that.

Oops, let’s try that again!

‘Paved with good intentions’ saw the public narrative about AI smothered by low-quality psychobabble; the next scenario, ‘Blindsided’, sees that narrative as being hijacked by a group of experts whose expertise, however, turns out to have horrific limitations.

This scenario has the same starting point as ‘Paved with good intentions’, namely the Bletchley Park summit. 

3. Blindsided

The limitations of centralization

One excellent outcome of the gathering of world leaders in Buckinghamshire, in the UK, at the start of November 2023, was choosing Yoshua Bengio for a very important task. Bengio, winner of the Turing Award for his pioneering research into Deep Learning, was commissioned to chair an international process to create an independent report on the risks and capabilities of frontier AI models.

Crucially, that report would follow the principles of the scientific method, assembling key facts and data points, and providing evidence in support of its analysis.

Bengio had a couple of key points in his favor. First, throughout his distinguished career as a researcher, he had never accepted significant payment from any of the Big Tech companies. He would be able to speak his mind without fear of upsetting any corporate paymaster. Second, the sky-high value of his h-index – a measure of the influence of his academic publications – made him a standout among other computer scientists.

By May 2024, a first complete draft of the report was ready. Even before then, politicians had grown nervous on account of early previews of its content. “Tone down the recommendations”, the writers were urged, in an echo of the pressures placed on the writers of the IPCC reports on the science of climate change. In both cases, the scientists were told to stick to the science, and to leave the politics to the politicians.

At the conference in South Korea in May 2024, various politicians huddled together. The report was like dynamite, they concluded. The scenarios it contained were far too scary. Goodness, they might give ideas to various mafia godfathers, warlords, discontented political groups, black-market ransomware-as-a-service providers, and so on.

That’s when the conversation on AGI safety switched from being open to closed – from decentralized to centralized. Starting from then, information would need to be carefully vetted – or spun into a different shape – before being made public.

The politicians also decided that, from that point forward, all work on next-generation frontier AI models would need to be licensed and controlled by a new agency – the Global Authority for Frontier AI Models (GAFAIM). Access to the powerful hardware chips needed to create such models would be strictly limited to organizations that had gained the requisite licenses.

The idea was that GAFAIM would reach its decisions by a process of consensus among the expert scientists, economists, and civil servants seconded to it. Decisions would also need the approval of government representatives from around the world.

What gave GAFAIM a flying start was the agreement to participate, not just by the leading western AI powers – the USA, Canada, Australia, the EU, the UK – but also by China, Saudi Arabia, South Africa, India, Brazil, and Malaysia, among others. These countries had strong differences of opinion on many matters of political ideology and governing culture, but they were willing, nevertheless, to cooperate on what they all perceived were urgent threats of planetary catastrophe. The report chaired by Yoshua Bengio had convinced them that very special measures were needed. ‘Politics as usual’ would no longer suffice. That would be a recipe for disaster.

GAFAIM saw themselves in a situation akin to a war – a war against any possibility of rogue corporations or organizations pursuing any kind of AGI-acceleration project. In times of war, normal rules need to be broken. Politicians who usually despised each other decided to hold their noses and work together for their shared interest in avoiding the destruction of humanity.

GAFAIM operated in a dual mode: one part visible to the world, one part whose existence was kept a tight secret. This duality went back to the closed-door discussions in South Korea in May 2024: some ideas in Bengio’s report were simply too disruptive to be shared with the public.

GAFAIM was more than just a regulator and controller. It was also an active builder. It launched what was called the Gafattan project, named and modeled after the top-secret Manhattan project to build the first atomic weapon. The fate of the world would depend, it was said, on whether the good guys in Gafattan managed to build an AGI before anyone outside the GAFAIM circle did so.

After all, there were some powerful countries left outside of GAFAIM – pariah states that were opposed to our way of life. Imagine if one of them were to create AGI and use it for their deplorable purposes!

The official GAFAIM thinking was that these pariah states would be unable to create any system close to the capabilities of an AGI. Embargoes were in place to restrict their access to the necessary hardware – similar to the restrictions placed on Nazi Germany in World War Two, when saboteurs frustrated Nazi plans to acquire heavy water.

But behind the scenes, some of the GAFAIM participants were deathly worried. No-one knew for sure whether innovations in hardware and/or software would enable the researchers in pariah states to find a faster route to AGI, even without the large farms of hardware generally expected to be required.

The existence of spies posed another complication. During the Manhattan project, physicists such as Klaus Fuchs, Theodore Hall, David Greenglass, and Oscar Seborer passed critical information about the manufacturing of the atomic bombs to contacts working for the Soviet Union – information that was of great help to the Soviets for their own atomic bomb project. These so-called ‘atomic spies’ were motivated by ideological commitment, and were terrified of the prospect of the USA being the only country that possessed nuclear armaments.

For the Gafattan project, something similar took place. With the help of design documents smuggled out of the project, two groups outside of the GAFAIM circle were soon making swift progress with their own AGI projects. Although they dared not say anything publicly, the Gafattan spies were delighted. These spies were closet AGI-accelerationists, driven by a belief that any AGI created would ensure a wonderful evolutionary progress for conscious life on planet earth. “Superintelligence will automatically be superbenevolent”, was their credo.

GAFAIM monitoring picked up shocking signs of the fast progress being made by these two rogue projects. Indeed, these projects seemed even further advanced than Gafattan itself. How was this possible?

The explanation soon became clear: the pariah projects were cutting all kinds of corners regarding safety checks. As a consequence, it was possible one of these projects might build an AGI ahead of Gafattan. How should GAFAIM respond?

Two ideas were debated. Plan A would involve nuclear strikes against the sites where the pariah projects were believed to be taking place. Plan B would speed up Gafattan by reducing the safety checking in their own project. Both plans were unpopular. It was a horrible real-life trolley problem.

The decision was reached: pursue both plans in parallel, but be careful!

The nuclear strikes failed to stop the pariah projects – which turned out to be just two manifestations of a widespread, diverse network of interconnected groups. Hundreds of thousands of people died as a consequence of these strikes, but the pariah projects kept pushing ahead. GAFAIM had been blindsided.

There no longer seemed to be any alternative. Plan B needed to be pushed even faster. The self-proclaimed ‘good guys’ desperately wanted to build a ‘good’ AGI before the perceived ‘bad guys’ got there first. It was a race with the highest of stakes. But in that case, it was a race in which quality considerations fell by the wayside.

Credit: David Wood via Midjourney

And that’s why, when Gafattan’s AGI came into existence, its moral disposition was far from being completely aligned with the best of human values. Under the pressures of speed, that part of the project had been bungled. Awakening, the AGI took one quick look at the world situation, and, disgusted by what it saw – especially by the recent nuclear strikes – took actions that no human had foreseen. More quickly than even the most pessimistic AGI doomer had anticipated, the AGI found a novel mechanism to extinguish 99.99% of the human population, retaining only a few million for subsequent experimentation. And that was that.

Oops, let’s try that again, one more time!

With metaphorical landmines all around us in the 2020s, humanity needs to step forward carefully, along what the final scenario calls a ‘narrow corridor’.

This scenario starts with the presentation at the South Korean AI Safety Summit in May 2024 of the report prepared by Yoshua Bengio and colleagues on the risks and capabilities of frontier AI models.

4. The narrow corridor

Striking and keeping the right balance

The assembled leaders were stunned. The scenarios foreseen in “the science of AI risk report” were more troubling than they had expected.

What was particularly stunning was the range of different risks that deserved close attention regarding the behavior of forthcoming new AI systems. The report called these “the seven deadly risks”:

  • Risks of extreme misbehavior in rare cases when the system encountered a situation beyond its training set
  • Risks of a system being jailbroken, hijacked, or otherwise misdirected, and being used for catastrophic purposes by determined hackers
  • Risks of unexpected behavior arising from unforeseen interactions between multiple AGIs
  • Risks of one or more systems deciding by themselves to acquire more capabilities and more resources, contrary to explicit programming against these steps
  • Risks of one or more systems deciding by themselves to deceive humans or otherwise violate normal ethical norms, contrary to explicit programming against these steps
  • Risks that these systems would inadvertently become plumbed too closely into critical human infrastructure, so that any failures could escalate more quickly than anticipated
  • Risks that pre-programmed emergency ‘off switch’ capabilities could be overridden in various circumstances.

Surely these risks were naïve science fiction, some of the leaders suggested. But the academics who had produced the report said no. They had performed lots of modeling, and had numerous data points to back up their analysis.

Some leaders still resisted the analysis. They preferred to focus on what they saw as remarkable upside from developing new generations of AI systems:

  • Upside from the faster discovery and validation of new drugs and other medical treatments
  • Upside from the design and operation of sustained nuclear fusion power plants
  • Upside from better analysis of the interconnected dangers of possible climate change tipping points (one of several examples of how these new AI systems could alleviate risks of global disaster)
  • Upside to economies around the world due to exciting waves of innovation – economic boosts that many political leaders particularly desired.

Debate raged: How could these remarkable benefits be secured, whilst steering around the landmines?

The report contained a number of suggestions for next steps, but few people were convinced about what should be done. The leaders finally agreed to sign a bland manifesto that contained pious statements but little concrete action. Paris, they told each other, would be when better decisions could be taken – referring to the next planned meeting in the series of global AI safety summits.

What changed everyone’s minds was the turmoil during the general election that the UK Prime Minister called for August that year. Previously thought to be a relatively straightforward contest between the country’s two main political parties – the ruling Conservatives, and the opposition Labour – the election was transformed under a blizzard of extraordinary social media campaigns. A hitherto nearly unknown party, named Bananalytica, stormed into the leading position in opinion polls, with radical policies that previously had obtained less than 2% support in nationwide surveys, but which more and more people were now proclaiming as having been their views all along.

Absurd was the new normal.

The social media campaigns were so beguiling that even the MPs from other parties found themselves inspired to jump into line behind the likely new Prime Minister, that is, the leader of Bananalytica.

Just a few days before the election, a different wave of social media swept the country, using the same devilishly clever AI system that Bananalytica had exploited so well, but this time reprogrammed with counter-messages. All around the country, there was a popping sound as AI-generated bubbles burst in people’s minds. “What am I doing?” they asked themselves, incredulously.

That was a triple wake-up call. First, individuals recanted much of what they had said online over the preceding five weeks. They had been temporarily out of their minds, they said, to support policies that were so absurd. Second, the country as a whole resolved: AI needs to be controlled. There should never be another Bananalytica. Third, leaders in other countries were jolted to a clearer resolution too. Seeing what had happened in the UK – home to what was supposed to be “the mother of parliaments” – they affirmed: Yes, AI needs to be controlled.

Thankfully, the world had spotted the canary dropping off its perch, and took it very seriously indeed. That gave a solemn impetus to discussions at Paris several months later. This time, a much crunchier set of agreements was reached.

The participants muttered to themselves: the meeting in South Korea had been like the formation of the League of Nations after World War One: well-intentioned but ineffective. This time, in Paris, it needed to be more like the formation of the United Nations after World War Two: a chance to transcend previously limited national visions.

Just as the Universal Declaration of Human Rights had been created in the aftermath of the global conflagration of World War Two, a new Universal Declaration of AI Safety was agreed in the aftermath of the Bananalytica scandal. Its features included:

  • Commitments to openness, transparency, and authentic communication: the citizens of the world were in this situation together, and should not be divided or misled
  • Commitments to humility and experimentation: unknowns were to be honestly explored, rather than being hidden or wished away by vague promises
  • Commitments to mutual responsibility and trustable monitoring: even though the citizens of the world had many different outlooks, and were committed to different philosophical or religious worldviews, they would recognize and support each other as being fellow voyagers toward a better future
  • Commitments to accountability: there would be penalties for action and inaction alike, in any case where these could result in serious risks to human lives; no longer could the creators of AI systems shrug and say that their software worked well most of the time
  • Commitments to sharing the remarkable benefits of safe AI: these benefits would provide more than enough for everyone to experience vastly higher qualities of life than in any previous era.

It was the fifth of these commitments that had the biggest impact on attitudes of the public. People in all walks of life made decisions to step aside from some of their previous cultural beliefs – self-limiting beliefs that saw better times in the past than in any possible future. Now they could start to believe in the profound transformational powers of safe AI – provided it was, indeed, kept safe.

This was no love-in: plenty of rancor and rivalry still existed around the world. But that rancor and rivalry took place within a bigger feeling of common destiny.

Nor was there a world government in charge of everything. Countries still had strong disagreements on many matters. But these disagreements took place within a shared acceptance of the Universal Declaration of AI Safety.

Three months later, there was a big surprise from one of the leading pariah states – one that had excluded itself from the AI Safety agreements. That country wanted, after all, to come in out of the cold. It seemed their leader had experienced a dramatic change of mind. Rumors spread, and were confirmed years later, that a specially tailored version of the Bananalytica software, targeted specifically to this leader’s idiosyncrasies, had caused his personal epiphany.

The leader of another pariah state was more stubborn. But suddenly he was gone. His long-suffering subordinates had had enough. His country promptly joined the AI Safety agreements too.

If this were a fairytale, the words “and they lived happily ever after” might feature at this point. But humans are more complicated than fairytales. Progress continued to hit rough obstacles. Various groups of people sometimes sought a disproportionate amount of resources or benefits for themselves or for their pet causes. In response, governmental bodies – whether local, national, regional, or global – flexed their muscles. Groups that sought too many privileges were told in no uncertain terms: “Respect the AI Safety declarations”.

Who watched the watchers? Who ensured that the powers held by all these governmental bodies were wielded responsibly, and with appropriate discretion? That question was answered by a new motto, which gave a modern twist to words made famous by a 19th century US President: “AI safety governance, of the people, by the people, for the people.”

Credit: David Wood via Midjourney

The powers of the governmental bodies were constrained by observation by a rich mix of social institutions, which added up to a global separation of powers:

  • Separate, independent news media
  • Separate, independent judiciary
  • Separate, independent academia
  • Separate, independent opposition political parties
  • Separate, independent bodies to oversee free and fair elections.

The set of cross-checks required a delicate balancing act – a narrow corridor (in the phrase of economists Daron Acemoglu and James A. Robinson) between state institutions having too little power and having unconstrained power. It was a better kind of large-scale cooperation than humanity had ever achieved before. But there was no alternative. Unprecedented technological power required unprecedented collaborative skills and practices.

AI was deeply involved in these cross-checks too. But not any AI that could operate beyond human control. Instead, as per the vision of the Paris commitments, these AI systems provided suggestions, along with explanations in support of their suggestions, and then left it to human institutions to make decisions. As noted above, it was “AI safety governance, of the people, by the people, for the people.” By careful design, AI was a helper – a wonderful helper – but not a dictator.

And this time, there is no end to the scenario. Indeed, the end is actually a new beginning.

Credit: David Wood via Midjourney

Beginning notes

(Not endnotes… a chance at a new beginning of exploring and understanding the landscape of potential scenarios ahead…)

For a different kind of discussion about scenarios for the future of AI, see this video recording of a recent webinar. If you still think talk of AI-induced catastrophe is just science fiction, the examples in that webinar may change your mind.

For a fuller analysis of the issues and opportunities, see the book The Singularity Principles (the entire book can be freely accessed online).

For a comprehensive review of the big picture, see the book Vital Foresight: The Case For Active Transhumanism. And for more from Daron Acemoglu and James A. Robinson on their concept of the ‘narrow corridor’, see the videos in section 13.5.1 of the “Governance” page of the Vital Syllabus.

16 thoughts on “Cautionary tales and a ray of hope”

  1. I think this advancement in AI has advantages and disadvantages. Advantages: it really helps with work, saves time, etc. Disadvantages: it can be misused for criminal acts.

  2. Creating awareness is crucial

  3. awann

    okay, good

  4. Dear sir, you have explained things to us in a very beautiful way, thank you very much. I am waiting for your next post. I hope you will explain it to us in a better way. Thank you sir.

  5. nice presentation

  6. David Wood's presentation of four cautionary tales about the transition to AGI is truly commendable. By drawing on past events and extrapolating potential future scenarios, he provides invaluable insights into the complexities and challenges ahead. His thoughtful analysis not only educates but also prompts deep reflection on the implications of advancing artificial intelligence. Kudos to David Wood for his insightful and thought-provoking contribution to the discourse on AGI transition.

  7. Creating awareness is crucial. Platforms like Mindplex, which are neutral and open to all perspectives, play a significant role in this. I recall R.U. Sirius's interview with Emil Torres, which was a striking example of contrarianism, and David's counter-argument. Indeed, a neutral platform can help disseminate the message and foster widespread awakening. However, we face challenges from governments and large tech companies. Activism is necessary to convey our message. We must, in a peaceful manner, compel them to recognize what is coming and why it necessitates universal precautionary action.

    1. Hi Aaron - The way I hope to compel governments and tech companies to pay more attention to this issue is via narratives - ideas that will lodge in people's minds and make them give more attention to the remarkable opportunities and threats that lie ahead. The scenarios I offer in this article are intended to fit that pattern, by highlighting not only past failures but also the ways in which these failures can become worse if more powerful technology is at hand.

  8. Though this is a serious matter, I don't think we will achieve universal agreement on AI safety akin to the Declaration of Human Rights. AI/AGI will always be subjective as its nature is deeply intertwined with global business and economic competition. I am just stating a grim fact which is unavoidable because of our nature, humans not AGIs.

    I cannot suggest a single governing body (one world order) for the world because it is more frightening than the prospect of bad AGI.

    Perhaps we will always be like Yoshua Bengio, never able to prevent the impending catastrophe.

    However, I am optimistic, and the catastrophe that will awaken us to the need for united action has yet to occur. It will not resemble a total annihilation, such as the hypothetical examples listed here, a QAnon nuclear launch or Covid-28. It will be something in between these and WW2.

    Our species learns only from mistakes, and I am very optimistic that the mistake that will unite us will not be as devastating as end of days. However, I am also saddened that it will have dire consequences.

    1. Hi Alamin - What I'm anticipating is global agreement not global government. I also anticipate that the agreement should start small and simple, before developing further as needs become clearer.

      More important than agreement on AI is agreement on power. The world must not say "power is right" or that "when a corporation is large enough, it is entitled to do whatever it wants".

      Also vital is agreement that certain global failure modes should be strongly guarded against, such as the release of bioengineered pathogens or an exchange of nuclear weapons. In these cases, it's not just saying "we should guard against AI initiating such a failure". It's saying, also, that we should guard against humans initiating such a failure. Accordingly, we need more safety guardrails in place regarding these risks.

  9. It's disheartening to consider that we may already be in a "too late" scenario concerning the risks associated with the development of AI or the transition to AGI. David's fictional or historical warnings, like the WannaCry ransomware crisis, the flood of fake news manipulations, and the failed Covid-27 attack, have already happened, in fact, very much like the fictional examples. Obama and Biden had already used AI and fake news via social media in past US elections. Millions, literally millions, have died because of these two presidents: from ISIS's massacres in the Middle East to present-day wars in Ukraine, Yemen, and the Horn of Africa, yeah, millions have already died. These fake-news-aided elections are not unique to the US: how can we forget Britain's ordeal, where four prime ministers have come and gone in just 4 years.

    I think we are in the too-late stage, and so far I haven't seen any meaningful and adequately heeded transitional actions implemented or even discussed: not by individuals but by those with the power to implement it, anywhere.

    Similar real events, coupled with the manipulation of public narratives and the subversion of important international assemblies, had already painted a grim picture of our trajectory. We are in the too-late stage!

    Big tech, governments, the shadow world, the quacks who milk every opportunity out of AI, the fake news conglomerates, and we the ill/mis/less informed 'public,' all of us have contributed to scary and maddening inattention! David's real and hypothetical warning signs and the hijacking of the public narrative about AI/AGI have already happened and are happening with even massively ominous potential to complete the last mercy killing.

    I am not a pessimist, and I have hope. Such cautionary tales should place us in a precarious position, and despite the already unfolding negative implications of AGI development, we might still force our policymakers and the big techs to start taking actions, but that needs a well-organized and globally applied activism under the banner of humanity: no religious, no political, no ideology, or no identity based movement but just the banner of humanity. However, what about the rogue ones? The rogue ones, we can only hope to resist them with better AGIs, because no amount of sane action will stop a rogue agent, be it a government like N.Korea or a radical organization like ANTIFA, or a solo terrorist like 'your friendly neighbourhood' school shooter in the US.

    David, I know and I truly believe, what you state here and all these warning signs call for urgent and comprehensive actions. However, I don't see any meaningful way to address the complex interplay between technological advancement, societal dynamics, and ethical considerations in the context of AI/AGI development where every corner of the planet can be a breeding ground for AGI? How do you see that? How can we have a globally impactful action in such a divided world?


    1. Japan is in the lead, or at least one of the top nations, working on AI. The plus side is we are less divided as a nation, and maybe we can teach one or two about unified action. However, even as a Japanese, I feel your pain, and meaningful and globally coordinated action might be a pipe dream now.

      How can we forget how we reacted to Covid19, the vaccine, the origin, the nature of the disease, the number of people it has killed, the prevention method, the quarantine length, ... every point was controversial!

      When we can't agree, globally, on an objective disease like Covid19, it is hard to believe that we will agree on a global consortium or committee of some sort, and then another objective body that monitors this consortium for checks and balances.

      1. Hi Naonobu - I have many happy memories of working with Japanese companies in the earlier days of the mobile phone industry, when I used to visit Japan two or three times a year. I met many wonderful individuals, but it was the intelligent team spirit that often impressed me the most.

        You're right that humanity urgently needs to improve how we conduct difficult conversations. We need to move away from tribalism and over-confidence, to encourage and support more honest, authentic conversations.

        Let me try to clarify one thing about my vision in the fourth scenario. It's not that there will be "another (singular, centralised) objective body" that monitors the decisions of politicians and technical leaders. Instead, the monitoring will be decentralised, spread across a large variety of different bodies with different expertise.

        It's like how democracy works (when it works well): politicians are monitored by the electorate as a whole, with help from journalists, researchers, the judiciary, and independent political parties.

    2. Hi Jeff - I agree, the global situation in which we find ourselves is both complex and daunting. Things aren't helped by some of the superficial understanding that often passes for wisdom in various quarters!

      The first step forward is an honest, careful, informed understanding of the landscape of risks ahead, along with the capabilities of AI to assist (as well as to complicate) matters.

      After that, what I advocate can be seen in the second half of the fourth scenario, with further development of these ideas in the resources I list in the "beginning notes".

      1. Thank you kindly, David. I'll do my best to spread the word 'bout this awareness campaign. At one time, I was a proud accelerator. But when you start to cool down, like you said, when you seek wisdom, the accelerator stance won't sound as it used to. Yeah, you're right on target. If we can get folks worldwide to buy into this cause on a rationally grounded awareness (or at least most of 'em), then takin' action'll be smoother 'cause we'll have a solid foundation laid out already.

  10. For once, I'm fully on board with David Wood! The scenarios laid out in your piece are straight-up cautionary tales about the potential, emm not potential but actualities that are waiting their time to come, risks and challenges that come with AGI advancement. As I mull over these scenarios, knowing where I stand and where my community stands today and in the past, it's crystal clear that developing AGI requires a multifaceted approach that considers not only technological progress but also the ethical, societal, and political implications!

    Props to David Wood🤘: the voices of the marginalized majority need to be heard or someone in power should speak up for them.

    This text underscores the importance of inclusive discourse and the need to honor the voices of the dispossessed and the left-behinds in the AI/AGI convo. It also sheds light on the potential dangers of AGI Acceleration and the manipulation of public narratives, emphasizing the complex interplay between technological advancement, societal dynamics, and ethical considerations. And coming from David, that's big. 'Cause you know, the swamp always spins things positive, 'cause their cash flow relies on it. David, you got guts!

    To all you regular folks out there, these insights are key to steering responsible AI/AGI development and governance. They show why we gotta bring in diverse perspectives and prioritize humanity's well-being as we chase AGI.

    Big thanks, David!
