The Big One

2026-04-03
34 min read.
Opposition to the precautionary principle has become a kind of fad. But if we really understand risks such as “The Big One”, we’ll see that opposition is a major mistake.

The greatest danger is not that we fail to foresee catastrophe, but that we foresee it and still fail to act

Public health experts increasingly warn that the next pandemic could be far more lethal than Covid-19.

I therefore invite you to consider some of the implications of:

  • A pandemic that spreads via invisible particles in the air, exhaled silently by people who are already infected by the virus, but who don’t yet show any signs of being ill
  • A virus that, when enough time has passed for it to establish itself deep inside someone’s body, kills them at a rate approaching that of Nipah (40%+), Ebola (50%+), or even Rabies (90%+)
  • An outbreak whose seriousness is repeatedly denied by high-profile maverick social media influencers, who keep pushing the wishful hope that the disease has already peaked, even as new mutations continue to emerge, in wave after wave of deadly new variants.

Imagine the utter chaos that such a pandemic would cause to humanity.

Credit: Image produced by the author using ChatGPT

That’s the theme of roughly one third of the content of the 2025 book The Big One: How We Must Prepare for Future Deadly Pandemics, co-authored by Dr. Michael T. Osterholm, founding director of the Center for Infectious Disease Research and Policy, and communications specialist Mark Olshaker.

That book develops a fictional scenario, set in the near future, featuring many echoes of the world’s response to Covid-19, raised to higher levels of intensity. At times, the narrative is gripping. It’s also depressing, as the extent of humanity’s ill-preparedness is laid bare. That’s despite everything we should have learned from our miserable recent encounter with Covid-19, but which, alas, we have collectively pushed out of mind.

Your first reaction might be: That’s just science fiction scare-mongering.

But the authors devote about one third of their book, in content intermingled with the unfolding scenario, to explaining why that scenario is credible. Of course, events won’t unfold in precisely the ways depicted. However, the authors make the case that events somewhat similar to the scenario could indeed happen:

  • An outbreak that starts in a distant location but which spreads quickly due to the multiple transport links which criss-cross the planet
  • Insufficient or poor-quality protective equipment
  • Confused and contradictory public health messaging
  • Vaccines that are only partially effective
  • Public frustration with policies of social distancing
  • The evolution of new variants of the virus that complicate treatment options
  • Healthcare systems collapsing under excess strain
  • The failure of international manufacturing distribution networks, causing major disruptions to numerous aspects of the economy.
Credit: Image produced by the author using ChatGPT

Your next reaction might be: But won’t forthcoming benevolent AI systems save us from the kind of medical and political dramas described in this book?

That’s where my experience while reading the book, earlier this year, brought me to a number of new aha realisations. It’s these realisations that I want to share in this article.

These realisations arise from my reflections on the final one third of the content of the book—spread among its various chapters—where the authors speculate on how our state of collective preparedness can, and should, improve. Such improvements won’t just put humanity in a better place to handle forthcoming new biological pandemics. They will also strengthen our ability to handle a different kind of perilous outbreak: fast-spreading misinformation propelled by out-of-control AI.

Credit: Book cover (The Big One)

Drawing the wrong lesson from past occurrences

When we hear reports of a potentially devastating pandemic, that is currently only in its early stages, how should we react?

One response is to remember previous warnings of potential pandemics that turned out to be false alarms—and to conclude that any new warnings are likely to be false alarms too.

Consider the case of the outbreak of Swine Flu in 2009.

In the early months of that year, reports started circulating of deaths of several dozen people in Mexico as a result of infection by a new type of flu. Analysis of the disease showed that it was a novel strain of the same subtype of flu, H1N1, that had killed perhaps 100 million people worldwide in the years 1918-1920. The virus showed close similarity to one that infects pigs worldwide—hence the “swine flu” moniker.

Influenza specialists at the WHO (World Health Organization) quickly became concerned. They remembered how two relatively recent outbreaks of flu had each led to millions of human deaths: H2N2, also known as “Asian flu”, in 1957-58, and H3N2, also known as “Hong Kong flu”, in 1968-69.

Calculations at the WHO headquarters in Geneva, Switzerland, suggested that the new swine flu could kill between two and seven million people—assuming that the effects of the virus were “relatively mild”—although the forecast climbed into the tens of millions if slightly different assumptions were made.

News soon started arriving of fatalities from the disease in the United States, Brazil, India, China, Turkey, Russia, and numerous other countries. On 25th April, 2009, the WHO declared that the disease posed a “public health emergency of international concern”, using that “PHEIC” designation for the first time in its history. A few weeks later, Margaret Chan, WHO Director General, raised the influenza pandemic alert from phase 5 to phase 6, and issued a sombre press statement:

“This particular H1N1 strain has not circulated previously in humans. The virus is entirely new. The virus is contagious, spreading easily from one person to another, and from one country to another. As of today, nearly 30,000 confirmed cases have been reported in 74 countries.”

Chan warned against any complacency:

“Most cases of severe and fatal infections have been in adults between the ages of 30 and 50 years. This pattern is significantly different from that seen during epidemics of seasonal influenza, when most deaths occur in frail elderly people… It is important to note that around one third to half of the severe and fatal infections are occurring in previously healthy young and middle-aged people.”

Credit: Image produced by the author using ChatGPT

Different countries took their own drastic measures to try to curb the disease. Egypt initiated a mass slaughter of pigs. Mexico banned spectators from attending football matches. The German state of Saarland passed local legislation forbidding greeting someone with a kiss.

The world braced for greater numbers of deaths. But rather than expanding, the disease soon seemed to contract. Rather than two to seven million people dying, the WHO’s official count of fatalities peaked at just eighteen thousand. (Though it should be noted that later reassessments pushed up that estimate into the hundreds of thousands.)

That figure of eighteen thousand deaths was far lower than the annual death toll from so-called “normal” seasonal flu. The evident discrepancy between forecast and reality drew sharp criticism of the WHO. Richard Schabas, former chief medical officer in Ontario, referred to the WHO as “the World Hysteria Organization”. Schabas went on:

“They’ve just been [champing] at the bit waiting for a pandemic for the last 10 years and I think they dramatically overreacted.”

Why might the WHO have “overreacted”?

Wolfgang Wodarg, chairman of the Health Committee of the Parliamentary Assembly of the Council of Europe, had a theory. He pointed to the alleged adverse influence of pharmaceutical companies:

“In order to promote their patented drugs and vaccines against flu, pharmaceutical companies influenced scientists and official agencies, responsible for public health standards, to alarm governments worldwide and make them squander tight health resources for inefficient vaccine strategies and needlessly expose millions of healthy people to the risk of an unknown amount of side-effects of insufficiently tested vaccines.”

A subsequent investigation by the prestigious British Medical Journal (BMJ), in conjunction with the Bureau of Investigative Journalism, came down on the side of the critics of the WHO. The verdict was summarised by Fiona Godlee, Editor-in-Chief of the BMJ:

“The cost has been huge… countries like France and the United Kingdom who have stockpiled drugs and vaccines are now busy unpicking vaccine contracts, selling unused vaccine to other countries, and sitting on huge piles of unused [drugs]. Meanwhile drug companies have banked vast [revenues]: $7bn to $10bn from vaccines alone.”

Godlee highlighted that there were indeed conflicts of interest which had not been properly declared, and that the WHO operated with insufficient transparency:

“Some of the experts advising WHO on the pandemic had declarable financial ties with drug companies that were producing antivirals and influenza vaccines. As an example, WHO’s guidance on the use of antivirals in a pandemic was authored by an influenza expert who at the same time was receiving payments from Roche, the manufacturer of oseltamivir (Tamiflu), for consultancy work and lecturing.”

That’s the context which should be borne in mind when assessing the sceptical comments made, ten years later, by various media pundits as news of Covid-19 started to spread.

For example, on 3rd March 2020, radio talk show host Dennis Prager was quick to condemn “hysteria after hysteria”. Only six people in the United States had died of the disease, Prager claimed, “most, if not all, of whom were already ill”. As for the WHO, that “should be renamed the World Hysteria Organization”, Prager suggested, perhaps unaware he was repeating a suggestion already made ten years earlier. According to Prager, the WHO had acted hysterically in 2009 over swine flu, it had acted hysterically in 2003 over SARS, which had actually killed only 774 people, and was now acting hysterically again over Covid-19.

Allegations about conflicts of interest were heard too. Some companies were poised, it was claimed, to make financial killings from Covid-19 vaccination programmes. A variant claim was that the potency of the disease was being exaggerated for various political purposes—to alter the effect of a forthcoming election, or to inculcate a submissive attitude among the general public (by insisting that face masks be worn).

The result, in the first half of 2020, was a mix of two modes of thinking about Covid-19. On the one hand, critics acknowledged the theoretical possibility that the disease could become (as Prager put it) “a worldwide mass killer”. But that rational assessment coexisted with an emotive hostility toward apparent doom mongers and hysteria merchants. Yes, there was a risk of people dying from Covid-19. But there was said to be an even bigger risk of an unnecessary “panic mode” caused by “breathless” newspaper reporting. The second risk dominated the minds of many opinion-formers.

Operation Cygnus: the lessons that weren’t learned

The same two modes of thinking—rational assessment coupled with emotive hostility—can be seen in the response by the UK government to an exercise carried out in October 2016 to assess the country’s state of preparation for a possible H2N2 influenza pandemic. The exercise, codenamed “Cygnus”, took place over three days, with government ministers, scientific advisers, and other officials being given the opportunity to respond to fictionalised developments. Cygnus participants were reportedly left “ashen-faced” as they came to terms with the rapid impact of the infection. Mortuaries could not cope with the number of dead bodies. Medical staff suffered from a woeful shortage of PPE (personal protective equipment). Hospitals lacked sufficient ventilators and beds. In summary, the NHS would be “stretched beyond breaking point”.

Credit: Image produced by the author using ChatGPT

That exercise brought home to participants some of the dreadful possible consequences of the NHS’s lack of readiness for an influenza pandemic. That intellectual realisation could not be disputed. Nevertheless, few changes took place subsequently in pandemic preparedness planning, despite the strong recommendations in the report issued at the conclusion of the exercise. In practice, resources were actually reduced from that field, rather than being added to it.

Commenting on that subsequent lack of action, Martin Green, Chief Executive of Care England, expressed bitter surprise, during the early months of the Covid-19 outbreak:

“It beggars belief. This is a report that made some really clear recommendations that haven’t been implemented. If they had put in place a response to every one, we would have been in a much better place at the start of this pandemic.”

Evidently, gaining good foresight of forthcoming risks is only part of how humanity should prepare for these risks. That foresight needs to be transformed into meaningful action. That foresight must spread beyond brain to heart and to hand. Alas, lots of psychological and institutional treacle prevents society from taking advantage of foresight.

One of the impediments to taking action as a follow-up to Exercise Cygnus was the persistence of a sceptical frame of mind. The scenario being modelled depended on assumptions of rapid transmission of infection, and on a high proportion of the people infected becoming seriously ill. Forecasts involving high numbers for these parameters had been heard before, but had been found wanting on those previous occasions, for example in the UK’s response to the 2009 H1N1 swine flu, mentioned above. The very idea of a major pandemic simply seemed… too scary to take seriously for long enough to drive real action. Instead, inaction won the day.

More precisely, people preferred to continue with “business as usual”, deciding that the risks of major forthcoming pandemics could be ignored. If a pandemic did arise, they reasoned, they could deal with it when it happened. They could “learn about the pandemic when that knowledge could be applied straightaway”, rather than taking resources away from current projects in attempts to “anticipate the unthinkable”.

The dreadful outcome of that type of thinking was that the resources needed once the Covid-19 pandemic was in full swing far exceeded what would have been needed for advance preparations. That type of thinking, therefore, led to a terrible economic outcome, as well as a terrible humanitarian crisis.

That’s the kind of thinking which is poised to get humanity into even worse problems in the near future.

That kind of thinking has a name: anti-precautionary. It takes pride in promoting (as it sees things) “bold risk-taking”. “You can’t make an omelette without breaking a few eggs”, it proclaims breezily. It positions itself as a contrast to what has become known as the precautionary principle.

Revisiting the precautionary principle

Here’s the formulation of the precautionary principle from the 1992 Rio Declaration on Environment and Development:

“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

To restate this in a more general form: if an activity introduces a threat of serious damage that could not subsequently be reversed, and there is uncertainty over the science involved (so the threat cannot be ruled out), then any such activity must be accompanied by the development of interventions ready to be used, if need be, to prevent that damage.

In such cases, caution should move from being one consideration among many, to being the paramount consideration.

Here, the “activity” includes “continuing with business as usual”—in cases when there are concerns that extending the current trajectory could be catastrophic.

“Activity” includes “continuing to spread potentially dangerous pesticides”, when analysis suggests that these pesticides will cause larger damage to biological ecosystems than merely killing one species of insects.

“Activity” includes “continuing to allow mass gatherings of people”, when there are indications that such gatherings could become superspreader events for a new disease that has started to raise health concerns.

However, the precautionary principle has generated a strong tide of criticism. The issue is that safety concerns surrounded the early days of countless technological innovations. A blunt application of the precautionary principle would have indefinitely delayed progress in areas such as railways, aeroplane flights, x-rays, new surgical techniques, agricultural improvements from the Green Revolution, radar, vaccines, and numerous new drugs. Since each of these innovations contained the possibility of causing death—surely an example of “serious or irreversible damage”—the precautionary principle would have throttled innovation, preventing it from subsequently bringing huge benefits to countless people.

Moreover, some governments have used versions of the precautionary principle to block developments with technologies such as GMOs (genetically modified organisms) and next-generation nuclear reactors, even when scientific consensus nowadays seems strongly in favour of these developments. In a similar way, slow-moving drug regulatory authorities, constrained by their own bureaucracies, have been blamed for holding up approvals of novel therapeutics that might have saved millions of lives. In these cases, what prevailed was an excess of precaution.

Reacting to these excesses, a campaign to oppose the precautionary principle has developed momentum. According to this campaign, the precautionary principle is said in practice to mean “don’t do anything for the first time”—which would make it absurd. More generously, the principle is said to mean “don’t do anything if there are risks of harm associated”—which it is easy to scorn as being anti-progress.

However, as I see it, the essential core of the precautionary principle that needs to be kept in mind is this:

“If there are risks not just of harm, but of what could be called ruin, then only go down this course of action if there are appropriate countermeasures ready to be applied.”

In other words, the vital distinction is not risk vs. safety, but harm vs. ruin.

The distinction between “harm” and “ruin” involves the extent of the disaster involved. By “harm” I mean an outcome that is broadly within usual human expectations: yes, sometimes people die, in tragic ways, but this is a relatively rare outcome, with reasonably predictable statistics.

For example, in the eight years from 2012 to 2019, out of a population of around 9 million people living in London, an average (mean) of 48,822 people died each year, with a standard deviation (sigma) of 1,498. Under normal conditions, the actual number of Londoners dying in a given year will lie within three sigma of the mean more than 99 times in a hundred. Therefore, if 2020 were a “normal” year, the number of Londoners dying that year would lie between 44,329 and 53,315. The actual number of deaths of Londoners that year was 59,187, showing that something decidedly abnormal had happened in those twelve months. That result is nearly seven sigma above the mean, an outcome with a probability of around 1 in 400 billion. Yikes!
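As a sanity check, the z-score and tail probability quoted above can be recomputed. This is a minimal sketch, using only the mean and standard deviation figures given in the text, and assuming deaths in a “normal” year follow a normal distribution:

```python
import math

mean = 48_822        # average annual deaths in London, 2012-2019 (from the text)
sigma = 1_498        # standard deviation over the same period (from the text)
deaths_2020 = 59_187

# How many standard deviations above the mean was 2020?
z = (deaths_2020 - mean) / sigma

# One-tailed probability of a result at least this far above the mean
p = 0.5 * math.erfc(z / math.sqrt(2))

print(f"z-score: {z:.2f}")                    # roughly 6.9 sigma
print(f"probability: about 1 in {1 / p:,.0f}")
```

The result lands close to seven sigma, with a tail probability in the region of one in several hundred billion, consistent with the figure quoted above.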

Credit: Image produced by the author using Microsoft Excel

London was by no means the city worst affected by Covid. Other cities had death rates by mid-May 2020 that were considerably higher—including Iquitos in Peru, Manaus in Brazil, and Guayaquil in Ecuador.

To be clear, 59,187 deaths is still less than one percent of the population of London. If we go back in time to 1665, the year of the Great Plague of London, it is estimated that around one quarter of the inhabitants of London died in an eighteen-month period. Mega yikes! If a new pandemic credibly threatened a repeat of such an outcome—similar to the scenario that unfolds in the book The Big One—it would be utterly irresponsible to ignore these warnings, to advocate “bold risk-taking”, and to chuckle about the need for eggs to be broken for the sake of progress and freedom.

Anticipating exponential magnification

Of course, there’s no sharp dividing line between “the kind of harm that isn’t hugely surprising” and “a ruinous outcome that is far beyond normal variations”. There’s a fuzzy boundary in between.

Where you draw the dividing line depends, not only on the extent of the ruin, but also on your estimate of the likelihood of that outcome. If you think there’s only a one in a billion chance of that kind of ruinous outcome, you’ll be more inclined to continue with “business as usual”, than if you think the probability is 10%.

Crucially, where you draw the dividing line depends as well on your assessment of the likelihood of an exponential take-off. That’s the factor which is hardest to estimate. It’s the factor that is most alien to our natural instincts.

But a wider appreciation of exponential magnification may be one of the most important principles that humanity needs to grasp, so we can properly prepare for the rapid transformations ahead.

If we watch a child growing up, there’s a broadly constant increase in height from year to year. It’s the same when we observe the development of a puppy, or a duckling, or a tree. These are all examples of what can be called linear growth. The growth rate can vary from time to time, and generally stops altogether in due course. But there’s no sustained explosion of growth.

Contrast linear growth with exponential growth—growth whose speed increases in line with the existing size of the entity. The bigger the entity, the faster the growth. Whereas linear growth looks like 2, 4, 6, 8, 10, exponential growth looks like 2, 4, 8, 16, 32. And if each sequence were to continue another five steps, the linear one would reach 20, whereas the exponential one would reach just beyond 1,000. By the time the former sequence has reached 40, the latter one has reached over one million.

Exponential growth did take place in prehistoric times, but our ancestors rarely observed it directly. The number of cells in an embryo in a womb initially doubles on a regular basis—that is, it grows exponentially—but this growth soon slows down to a linear rate, and was in any case something outside the conscious awareness of any human observer. The number of locusts in a swarm can grow exponentially with each new generation, so long as sufficient food is available, but what our ancestors would have observed was the sudden devastation that a huge migrating swarm could inflict, rather than the generation-by-generation growth in numbers.

Credit: Image produced by the author using ChatGPT

That’s why we’re unused to exponential thinking. And therein lies grave peril.

For a disease in which the number of people infected can double every ten days, it doesn’t take long for an initial single infection to spread to an entire population of, let’s say, eight billion people. Eight billion is roughly what a single infection becomes after thirty-three doublings. So, everyone on the earth could have fallen ill from the disease in just 330 days. That’s one month less than a year. The problem is that, initially, there doesn’t seem much to be concerned about. After the first month, there are just eight people ill. Who would worry about eight people being ill? Even after three months, the outbreak only comprises around 500 people. That’s a tiny blip, compared to the number of people who die each year from, for example, drowning (300 thousand), or road traffic accidents (1.2 million).
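The arithmetic of that doubling scenario can be checked directly. This is a minimal sketch: the ten-day doubling time is the hypothetical figure from the text, not a property of any real disease:

```python
infected = 1        # a single initial infection
days = 0
doubling_time = 10  # days; the hypothetical rate used in the text

milestones = {}
while infected < 8_000_000_000:
    infected *= 2
    days += doubling_time
    if days in (30, 90):
        milestones[days] = infected

print(milestones)       # after 1 month: 8 people; after 3 months: 512 people
print(days, infected)   # 330 days to pass eight billion (2**33)
```

The same numbers appear in the paragraph above: eight cases after a month, around five hundred after three months, then the entire planet in under a year.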

This highlights the challenge posed by exponentials. At first, it’s too early to be unduly worried about any instance of exponential growth: the absolute numbers involved are paltry. There is some harm, but it is contained. However, by the time the numbers have grown large, they’re on the point of becoming much larger again, and it’s now too late for many kinds of preventive action. Harm has become ruin.

But that’s only the start of the problems with coming to terms with exponential growth.

Not one but multiple overlapping growth trajectories

In the real world, exponential growth usually quickly staggers to a halt, as the trend runs out of the resources needed to fuel it.

Critics of the precautionary principle are well aware of this tendency. They often speak about trends “hitting a wall” or “hitting a ceiling”. They point out that what looks like an exponential growth trajectory is often better described as an S-shaped curve, where the ‘S’ is drawn in old-style lettering, slanted forward, like the integral sign ‘∫’ used in mathematics.

Progress along any one of these S curves tends to be slow at first: that’s the lead-in part at the bottom of the letter ‘S’. It subsequently accelerates, but can then slow down again, once the full potential of this particular setting has been reached.

Is that the end of the matter? It depends. What can happen next is a partial change in the overall system, which allows the growth curve to continue, with renewed vigour, in a different mode.

As a famous example, Ray Kurzweil draws attention to a sequence of different S-curves for the progression of computing, which compound together to create a period of extended exponential growth:

  1. Mechanical calculators and tabulators, as used to process data collected in the 1890 national census in the United States
  2. Systems incorporating electrical relays, as used in telephone switchboard equipment
  3. Replacement of relays with electronic valves—as in the secret code-breaking computers at Bletchley Park during World War Two
  4. Replacement of valves with individual transistors
  5. Computers incorporating integrated circuits formed from ever larger numbers of ever smaller transistors
  6. Integrated circuits might shortly change from two-dimensional to three-dimensional—or computer architectures might undergo other fundamental transformations.
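The way a sequence of saturating S-curves can compound into what looks like sustained exponential growth can be illustrated numerically. This is a minimal sketch, not a model of real computing history: the ceilings and timings of the curves are invented purely for illustration:

```python
import math

def logistic(t, ceiling, midpoint, rate=1.0):
    """A single S-curve: slow start, rapid growth, then saturation at a ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Five successive paradigms (parameters invented for illustration):
# each matures, and its successor arrives later with a tenfold higher ceiling.
paradigms = [(10 ** k, 10 * k) for k in range(1, 6)]

for t in range(0, 60, 10):
    total = sum(logistic(t, ceiling, midpoint) for ceiling, midpoint in paradigms)
    print(f"t={t:2d}  capability={total:12.1f}")
```

Each individual curve flattens out on its own, yet the sum keeps climbing by roughly a factor of ten per interval: the envelope of the overlapping S-curves traces out an exponential, which is essentially Kurzweil’s point.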

The same idea of a sequence of multiple S-curves is close to the heart of one of the books that has had the greatest influence over Silicon Valley personnel over the decades: Only The Paranoid Survive, by Intel co-founder Andy Grove.

Credit: Book Cover (Only The Paranoid Survive)

According to Grove, lasting progress in technology and business comes not from extending a single trajectory indefinitely, but from jumping from one S-curve to another at the right moment.

Each successive S-curve involves a novel technological idea, a new business model, an innovative reconfiguration of previous components, or some other breakthrough competitive advantage. Each S-curve has a three-stage lifecycle:

  • Early phase: slow progress, uncertainty
  • Rapid growth: steep incremental improvement
  • Maturity: diminishing returns

The key insight is that, in the maturity phase, further optimisation yields only small gains, but a new approach—jumping to the next S-curve—can offer an order-of-magnitude improvement. Therefore, long-term success comes from timely transitions between S-curves, not from perfecting a single one.

Note that each new S-curve tends to become possible only as a result of changes in industry, mindsets, and skills, which were enabled by the previous curve. The new curve differs in significant ways from its predecessor: it may require different organisational structures, different metrics, and different processes. But the appetite for the new curve arose from conditions stirred up by the success of what came before.

I saw this in my own professional career, in the late 2000s and early 2010s, as the more powerful smartphone operating systems of Apple (iOS) and Google (Android) superseded the paths previously pioneered by Palm, BlackBerry, and Symbian—but took advantage of many ideas that were popularised by these earlier platforms.

Critically, the same pattern applies in two more fields, as I’ll now discuss in more detail:

  • How the capabilities of AI systems can leap ahead from one S-curve to another
  • How the dangers posed by a virus can leap ahead as that virus mutates to acquire more deadly characteristics.

In both cases, the anti-precautionary stance is often blind to these possibilities. That stance forecasts that the current trajectory—of AI capabilities, or of disease deadliness—is bound to come to an end, sooner rather than later. If that’s true, there’s no special need to be unduly worried about consequences. However, it’s the possibility of a leapfrog to a new level of capability that deserves more attention.

The delusion of herd immunity

In retrospect, one of the biggest mistakes in theorising about Covid-19, during the early phase of that pandemic, was the idea that, once someone had been infected by the disease and then recovered from it, they would be immune from further infection. Accordingly, once a sufficient proportion of the population had experienced the disease, the disease would no longer be able to spread to new victims. In the jargon, the population would have acquired “herd immunity”. That thought lay behind the suggestion, discussed by various politicians in 2020 and 2021, that the best approach to Covid-19 might be to “let it rip”—that is, spread throughout the population, with many older people being likely to die as a result, but with the younger members of the population acquiring immunity. That would defang the pandemic, and allow society to return to something approaching normal (although with fewer elderly members).

Credit: Image produced by the author using ChatGPT

In reality:

  • Immunity to Covid, after a previous infection, waned over time, allowing a growing number of reinfections
  • New variants of Covid, such as Delta and Omicron, increased transmissibility
  • The more often someone was infected, the greater the probability of life-changing longer-term effects
  • Variations in how members of the population came into contact with each other allowed the disease to keep circulating.

A closely related mistake was the idea that mutations in an infectious disease invariably make that disease less deadly. In retrospect, that idea was essentially just wishful thinking—a clever-sounding concept to which champions of the anti-precautionary stance could appeal, even though it lacked any firm foundation.

Indeed, evolution by no means aims for kindness; it favours whatever maximises transmission.

It’s true that pathogens may evolve toward lower virulence if killing or incapacitating people who are infected reduces spread. This can apply:

  • If transmission requires mobile hosts (such as close personal contact)
  • If severe illness quickly isolates the infected person.

However, that principle fails to apply when transmission can occur early in the course of an infection, before a patient has developed severe symptoms, while they are still mobile, and while the virus can be carried in the air they exhale.

It’s the same with the evolution of new capabilities in AI systems. AI systems are already enabling highly personalised scams and political persuasion at scale—demonstrating early examples of the kind of exponential influence that could rapidly escape control. Automated misinformation campaigns happen increasingly often, with ever-greater sophistication. However, supporters of the anti-precautionary stance often suggest that members of the public will soon acquire a kind of immunity to being manipulated by misaligned AI. After having been misled on an initial occasion by some AI-generated news that turned out to be “fake”, they would become smarter on subsequent occasions. In the words of the proverb, once bitten, they would become twice shy.

Once again, that idea is delusional:

  • People often lack the ability to generalise from one experience of being misled to recognising other situations in which they are also being deceived
  • AI systems are constantly acquiring unexpected new abilities to manipulate members of the public—drawing on richer data, stronger algorithms, and more beguiling graphics, narratives, and other distractions
  • The outcome of someone being misled can be much worse than mere social embarrassment; they can be led into financial ruin; they can also be recruited, without their full awareness, into pressure groups that tilt political decisions in very destructive directions.

Once again, it would be far better to put guardrails or other precautionary measures in place well before dangerous capabilities have grown too powerful to be easily constrained.

Flawed social priorities

Perhaps the saddest takeaway from the narrative in The Big One is that the need for improved preparations for a forthcoming pandemic is well understood, but isn’t being acted upon. In other words, we’re not collectively benefiting from our best insights.

Measures that aren’t receiving anything like enough resourcing include:

  • Accelerating research into better vaccine platforms, which could be adapted and deployed more quickly
  • Building resilient manufacturing capacity—especially for vaccines and drugs
  • Improving the monitoring of waste water for early signs of a new outbreak
  • Preparing reliable high-quality masks, for greater personal protection
  • Improving air circulation in buildings
  • Developing other methods to kill or control viruses, including ultraviolet radiation
  • Repairing the global health response coordination mechanisms which failed to work well for Covid.

As I read these sections of The Big One, I had flashbacks to many discussions about how society has many of its priorities upside down—how the best ideas are often sidelined and ignored, rather than being honestly appraised. I see these prioritisation failures regarding:

  • Actions which could reduce the chance of accelerated climate chaos
  • Research into the main factor which worsens all chronic diseases, namely biological aging
  • Measures to avoid misaligned AI systems developing well beyond human understanding and human control.

I have sometimes thought that these three fields each have their own special complications, which prevent otherwise rational people from thinking clearly about them:

  • Climate change is a hotly contested subject, with many partisan overtones
  • Many people suffer from “terror management” issues, which lead them to the irrational conclusion that no-one should attempt to reverse aging
  • AI is another hotly contested subject, with partisan overtones of its own.

However, the subject of pandemic preparedness ought to be free from similar psychological confusion:

  • Pandemics have happened many times in the past, so clearly cannot be classified as “just science fiction”
  • Pandemics maim and kill people of all political persuasions, of all ethnic groups, and of all religions—so opposition to pandemics ought to be a unifying issue
  • Measures to reduce the impact of pandemics are far from mysterious or alien: they involve medical practices that are, in principle, relatively easy to understand.

Why, then, are we failing to take actions to reduce the risks of catastrophe? What explains this profound social irrationality?

I see four main reasons:

  1. Too many of our resource-allocation decisions, as a society, are driven by short-term commercial pressures. That means that funding is directed, instead, to projects offering the person or institution supplying the funds a straightforward route to financial profit
  2. Society has become too partisan and tribal, with an increasing number of issues moving from “non-partisan” to “a favourite cause of a particular political party”—with partisan considerations overriding objective evaluations
  3. We humans are too prone to cognitive flaws, with mental instincts suited to the simpler circumstances of the hundreds of thousands of years of our evolutionary history rather than to the very different circumstances of more recent times
  4. We are failing to communicate key insights and issues with each other, in ways that capture attention, convey understanding, and leave each other feeling grateful for the new information rather than resentful or defensive.

Credit: Image produced by the author using ChatGPT

The fourth of these answers receives significant space in The Big One. The authors of that book review many ways in which public communications about Covid fell flat:

  • Information was presented without nuance or uncertainty, which caused a loss of trust when things turned out to be more complicated than the simple messaging had suggested
  • Communications were frequently inconsistent across different agencies and over time
  • Public health messaging became entangled with partisan politics: different groups interpreted the same facts through political lenses; this led to masking, vaccines, and lockdowns becoming identity markers, and reduced public willingness to follow guidance
  • Official communication was too one-way: it was perceived as broadcasting instructions, rather than engaging in dialogue
  • Messaging did not sufficiently account for fear, denial, fatigue, and cognitive biases, and therefore failed to overcome emotional barriers
  • The “infodemic” problem was not controlled: social media amplified false claims, conspiracy theories, and pseudo-experts, and then governments and institutions failed to respond effectively; this meant that accurate information was competing on equal footing with misinformation
  • In short: communication was treated as secondary, not as a central pillar of pandemic response; governments and institutions failed to communicate in ways that built trust, understanding, and sustained cooperation.

These observations apply beyond the crisis that a pandemic will cause. They apply more generally to the full spectrum of risks and opportunities that we are likely to face in the next few years. As such, they may be the most urgent message of The Big One. Unless we can improve how we communicate about ongoing and forthcoming challenges, we’re likely to end up in a catastrophically bad situation.

Hence the burning need for communication that embodies and supports:

  • Two-way engagement
  • Trusted intermediaries
  • Prebunking misinformation
  • Narratives that acknowledge uncertainty.

Flattening the curve

I’ll finish with one more idea about how pandemics can be managed—an idea which turns out to be profoundly applicable to other existential risks and opportunities as well. That’s the idea of “flattening the curve”.

As discussed in The Big One, “flattening the curve” is an important public-health strategy aimed at slowing the spread of an infectious disease, so that the number of people needing care at any one time stays within the capacity of the healthcare system.

The point is that healthcare systems (such as the NHS in the UK) have finite capacity: limits on hospital beds, intensive-care units, ventilators, staff, and so on.

If too many people get sick at once:

  • Care quality drops
  • Mortality rises—not just from the disease, but from all conditions.

Flattening the curve aims to apply various interventions (social distancing, reduced travel, quarantines, wider use of protective equipment, prompt roll-out of vaccinations, and so on) to keep demand below this capacity threshold:

  • Without intervention: cases rise rapidly, forming a tall, sharp peak
  • With interventions: cases spread out over time, forming a lower, wider curve.
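The capacity-threshold logic can be sketched with a minimal SIR epidemic model. This is my own illustration, not a model from The Big One, and every parameter below (including the 5% capacity figure) is hypothetical; interventions are represented crudely as a lower transmission rate:

```python
def sir_peak(beta, gamma=0.1, i0=1e-4, days=400, dt=0.1):
    """Integrate the classic SIR equations (Euler steps) and return the
    peak fraction of the population infected at any one time.

    beta: transmission rate per day; gamma: recovery rate per day;
    i0: initially infected fraction.
    """
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return peak

# Hypothetical: the health system can treat 5% of the population at once.
capacity = 0.05

unmitigated = sir_peak(beta=0.30)  # R0 = 3: a tall, sharp peak
mitigated = sir_peak(beta=0.13)    # distancing etc. cut R0 to 1.3

print(f"unmitigated peak: {unmitigated:.0%} of the population sick at once")
print(f"mitigated peak:   {mitigated:.0%} of the population sick at once")
print(f"capacity exceeded? {unmitigated > capacity} vs {mitigated > capacity}")
```

The same total outbreak, slowed down, produces a far lower peak: with these illustrative numbers the unmitigated epidemic overwhelms the 5% threshold many times over, while the mitigated one stays below it, which is the whole point of the strategy.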

If successful, the strategy buys time:

  • To develop better treatments
  • To develop better vaccines
  • To improve other aspects of social coordination.

The result will be to reduce total deaths and other adverse consequences of the pandemic. Individuals need to accept a curtailment of their personal freedoms in the short term in order to gain fuller freedoms in the medium and long term.

The same principle surely applies for “flattening the development of potentially dangerous AI”:

  • If too many new AI features are released in parallel, they could combine in ways that are hard to predict and even harder to control
  • With more safety checks and design reviews spread out over more time, society has a greater chance of devising and implementing appropriate AI guardrails.

It’s no surprise that there is considerable overlap between two groups who advocate anti-precautionary stances: those who disregard warnings about exponential growth of pandemics, and those who disregard warnings about exponential growth of dangerous AI systems. There’s a strong psychological parallel:

  • The former assert their rights to travel wherever catches their fancy, to ignore advice about vaccinations, and to exhale their possibly infectious breath into any confined space, regardless of who else is present there
  • The latter assert their rights to develop AI in any way that catches their fancy, to ignore warnings about malign consequences of AIs, and to spread their poorly tested software all over the Internet, regardless of who might interact with it.

Both cases display shocking short-sightedness. Both sets of people fail to think through the consequences if more and more people behave in the reckless ways that they are advocating.

Both sets of people justify their actions by appeal to various highly dubious arguments:

  • “Vaccines exist only to make money for pharmaceutical companies”
  • “AI safety advocates are just trying to further their own careers”
  • “Pandemics always die back before serious damage is done”
  • “AIs will always provide humanity with timely solutions to whatever problems earlier versions of AIs have created”
  • “Precaution is for weaklings—we should just ‘let it [the pandemic] rip’”
  • “Precaution is for weaklings—we should just ‘let it [advanced AI] rip’”
  • “Even if I am to die from this disease, I would prefer to die unmasked, unvaccinated, and unrepentant”
  • “Even if AI is to cause a dreadful catastrophe, I would prefer to be the person who causes that catastrophe, rather than letting my rivals beat me to the punch.”

Countering the anti-precautionary stance will be far from straightforward. That stance has strong links into dangerous aspects of human psychology. These links won’t be easy to dissolve. It’s going to take more than wishful thinking, more than tub-thumping bravado, and more than impassioned pleas for culture change.

As I said earlier, a vital part of overcoming entrenched hostility to safety measures will be all-round improvements in our communications skills. We’re going to need better narratives, but also better listening skills. Flattening the curve will be much easier when people appreciate that we’re all in this together, rather than being fragmented into warring tribes.

The real enemy we’re all facing is not each other, but confusion, distraction, fear, and ignorance. If we fail to overcome these shared enemies, the next major crisis—whether biological or technological—will truly be “the big one”: one that overwhelms whatever resilience our civilisation has left.

These shared enemies are what we need to target—with clarity, focus, vision, and wisdom.

Credit: Image produced by the author using ChatGPT

#AIEthics

#AIInPublicHealth

#DeadlyGlobalPandemic

#GlobalCrisis

#HealthPolicies

#Outbreak

#PrecautionaryPrinciple

#PublicHealth

#RiskAnalysisFramework


