A Taxonomy of Chaos

Being a futurist is quite a bit like being a historian. History isn’t just a listing of events; it’s a discipline that seeks to figure out why this or that happened, what led us in that direction, and what choices we had along the way. The same can be said for foresight work — the goal isn’t to list future events, it’s to get a better understanding of why we’re getting various outcomes, and how our choices can lead us in very different directions. I think of futurism as Anticipatory History.

The success of this process depends upon our ability to spot patterns; as Theodor Reik said, “It has been said that history repeats itself. This is perhaps not quite correct; it merely rhymes.” But what happens when the rhyme scheme breaks? That is, what happens when our expectations about patterns start to fail?

I’ve been working as a futurist for over 25 years, so I’ve spent quite a bit of time observing (and learning from) global patterns. About five years ago, I started to get a sense that patterns weren’t repeating (or rhyming) as readily, that big global systems that had once been fairly consistent had become far less so. Looking back, I suspect that the primary catalyst was the accelerating impact of global climate disruption.

Big economic, political, environmental, technological, even sociocultural systems seemed like they were starting to fail, or at least become much less reliable. I wasn’t alone in these observations.

In 2018, I started working on a framework for understanding the scale of what we’re facing as once seemingly-reliable global systems started to break down. So many of the people I spoke with over the past half-decade had started to express great surprise and confusion about what was going on around the world; it quickly became apparent that this wasn’t just the standard “the future is an unknown” that we all live with, it was an increasingly desperate sense that things we thought we understood were spinning wildly out of control. We could no longer reliably make reasonable judgments about what was to come in the weeks and months ahead. Systems that appeared strong were suddenly on the verge of collapse; processes that were becoming increasingly critical to our daily lives were becoming less and less understandable.

Creating a framework like this wasn’t just a random idea; I was trying to bring a conceptual tool already in the foresight analyst’s kit into the 21st century. The world of strategic consulting had long relied on a basic framework to offer language and structure to a changing world: the “VUCA” model, invented in the late 1980s at the US Army War College and quickly adopted throughout the world of consulting. VUCA is an acronym comprising four descriptive terms: Volatile, Uncertain, Complex, and Ambiguous. For a world moving out of the Cold War era and into the Internet era, these terms felt right. They perfectly captured the kinds of disruptions that were starting to happen more often, especially as the global war on terror fired up.

But the world has rocketed past merely being “uncertain” or “volatile.” At this point, VUCA no longer captures disruptions to the norm; it is the norm. And if a VUCA world is all around us all the time, the term has lost its utility as a way of labeling discontinuities in how our world functions. Something new was needed.

In late 2018, I presented the “BANI” framework for the first time. BANI parallels VUCA, in that it’s a basic acronym for a set of descriptive terms. In this case, however, the terms are as subtle as a brick. BANI comes from: Brittle; Anxious; Nonlinear; and Incomprehensible. These four concepts let us articulate the ways in which the world seems to be falling apart. It’s a taxonomy of chaos.

Credit: Tesfu Assefa

The quick summary:

B in BANI is for Brittle. Systems that are brittle can appear strong, even work well, until they hit a stress level that causes collapse. Brittle does not bend, it breaks. Very often the breaking point isn’t visible to most people in advance, and comes as a surprise. Sometimes this is because the weaknesses are hidden or camouflaged; sometimes this is because the stress that causes the break is external and unexpected. The example I like to use is guano, fecal matter from birds and bats. In the 19th century, its use as a fertilizer gave the world its first taste of an agricultural revolution. It was so important that countries fought wars over ownership of guano-covered islands. And in a few short years in the early twentieth century, that all disappeared with the development of the Haber-Bosch process for making artificial fertilizer. Something that was absolutely critical became worthless seemingly overnight.

Brittle chaos is sudden, surprising, and hard to ignore.

A is for Anxious (or Anxiety-inducing). Systems that trigger anxiety are those that pose dilemmas or problems without useful solutions, or include irreversible choices with unexpectedly bad outcomes. Anxious systems make trust difficult, even impossible. Things that had been well-understood suddenly seem alien or false. My usual example of an anxiety-inducing system is malinformation, a term that encompasses disinformation, errors, insufficient information, and confusion: noise in the signal. The last half-decade has been full of this, and we’ve seen some especially powerful uses in recent months and years. Malinformation often relies on technological tools, but its importance doesn’t come from the technology; it comes from the human response. In many, if not most, cases, malinformation isn’t used to make a target believe something that is false; it’s used to make a target doubt the validity of something that is true.

Anxious chaos is confusing, deceptive, and emotionally painful.

N is for Nonlinear. Nonlinear systems are those where, most simply, input and output are disproportionate. Cause and effect don’t match in scale or speed. Audio feedback is a familiar example of a nonlinear phenomenon; the spread of pandemic disease is another. Nonlinear in the BANI usage refers to systems that see changes that don’t match expectations built on familiar reality. They’re common in nature, although often met by countervailing forces that keep the nonlinear aspects in check. The biggest (and most important) example of a nonlinear phenomenon is climate disruption, more precisely the hysteretic aspect of climate change. Hysteretic here means a long lag between cause and effect, enough so that the connections are often functionally invisible. The connection between atmospheric greenhouse inputs and temperature/heat-related results is slow — potentially on the order of 20–30 years, although some more recent evidence suggests that it might be closer to 10 years. Either way, the seeming disconnect between cause and effect means that (a) what we’re seeing now is the result of carbon emissions from one or two decades ago, and (b) whatever changes we make to cut carbon emissions won’t have any visible benefits for a decade or more.

Nonlinear chaos is disproportionate, surprising, and counter-intuitive.
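The mismatch between cause and effect can be made concrete with a toy calculation (all numbers here are illustrative, not measurements): compare a quantity that doubles at each step, like early epidemic spread, against the linear forecast an observer might make from its first few data points.

```python
def exponential(steps, start=1, factor=2):
    """A nonlinear process: each step multiplies the total by `factor`."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] * factor)
    return values

def linear_forecast(values, horizon):
    """A 'familiar reality' forecast: extend the first observed increment."""
    slope = values[1] - values[0]
    return values[1] + slope * horizon

cases = exponential(10)            # 1, 2, 4, ..., 1024
guess = linear_forecast(cases, 9)  # the linear guess for step 10
print(cases[-1], guess)            # 1024 vs 11: wildly disproportionate
```

Ten doubling steps outrun the linear expectation by two orders of magnitude, which is exactly why nonlinear change reads as surprising and counter-intuitive.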

Finally, the I is for Incomprehensible. I got the most pushback on this one — can we really say that something is truly incomprehensible? But what I mean here is that, with an incomprehensible phenomenon, the process leading up to the outcome is thoroughly opaque, with difficult or incomplete explanations. The decision-making of machine learning systems gives us a current example. Increasingly, it’s difficult at best to explain how a deep learning system reaches its conclusions. The consequence can often be that sophisticated systems can make strange or inexplicable errors (such as a self-driving vehicle repeatedly mistaking the moon for a traffic signal). Incomprehensible can also mean behavior outside the realm of rational understanding.

Incomprehensible chaos is ridiculous, senseless, even unthinkable.

Credit: Tesfu Assefa

When I created BANI, I did so largely as a way for me to visualize the diverse ways in which global systems were failing. But it turns out that there’s hunger around the world for just this kind of framework. Over the past year, I’ve given a dozen or more talks and presentations on BANI for audiences everywhere from Amazonia to Zürich (people in Brazil seem especially interested); in the coming months, I’ll be speaking about BANI for audiences in places like Sri Lanka.

But it’s important not to overpromise what the BANI framework can do. Thinking in BANI terms won’t give you a new leadership strategy or business model. It won’t tell you how to better make profit amidst chaos. When I talk about what can be done to withstand the chaos of a BANI world, I go to human elements and behaviors like improvisation, intuition, and empathy. The chaos of BANI doesn’t come from changes in a geophysical system or some such, it comes from a human inability to fully understand what to do when pattern-seeking and familiar explanations no longer work.

Even if BANI is only descriptive, not prescriptive, we’ve long known that giving a name to something helps to reify it. People had been groping for a way to articulate their sense of chaos, and BANI provides a common, understandable language for doing so. BANI helps to give structure to our experience of the chaos swirling around us, and in doing so, helps us to consider more fully what to do next.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The Singularity: Untangling the Confusion

“The Singularity” — the anticipated creation of Artificial General Intelligence (AGI) — could be the most important concept in the history of humanity. It’s unfortunate, therefore, that the concept is subject to considerable confusion.

Four different ideas interwoven, all accelerating in the same direction toward an unclear outcome (Credit: David Wood)

The first confusion with “the Singularity” is that the phrase is used in several different ways. As a result, it’s easy to become distracted.

Four definitions

For example, consider Singularity University (SingU), which has been offering courses since 2008 with themes such as “Harness the power of exponential technology” and “Leverage exponential technologies to solve global grand challenges.”

For SingU, “Singularity” is basically synonymous with the rapid disruption caused when a new technology, such as digital photography, becomes more useful than previous solutions, such as analog photography. What makes these disruptions hard to anticipate is the exponential growth in the capabilities of the technologies involved.

A period of slow growth, in which progress lags behind expectations of enthusiasts, transforms into a period of fast growth, in which most observers complain “why did no one warn us this was coming?”

Human life “irreversibly transformed”

A second usage of the term “the Singularity” moves beyond talk of individual disruptions — singularities in particular areas of life. Instead, it anticipates a disruption in all aspects of human life. Here’s how futurist Ray Kurzweil introduces the term in his 2005 book The Singularity Is Near:

What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed… This epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself…

The key idea underlying the impending Singularity is that the pace of change of our human-created technology is accelerating and its powers are expanding at an exponential pace.

The nature of that “irreversible transformation” is clarified in the subtitle of the book: When Humans Transcend Biology. We humans will no longer be primarily biological, aided by technology. After that singularity, we’ll be primarily technological, with, perhaps, some biological aspects.

Superintelligent AIs

A third usage of “the Singularity” foresees a different kind of transformation. Rather than humans being the most intelligent creatures on the planet, we’ll fall into second place behind superintelligent AIs. Just as the fate of species such as gorillas and dolphins currently depends on actions by humans, the fate of humans, after the Singularity, will depend on actions by AIs.

Such a takeover was foreseen as long ago as 1951 by pioneering computer scientist Alan Turing:

My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely…

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.

Finally, consider what was on the mind of Vernor Vinge, a professor of computer science and mathematics, and also the author of a series of well-regarded science fiction novels, when he introduced the term “Singularity” in an essay in Omni in 1983. Vinge was worried about the unforeseeability of future events:

There is a stone wall set across any clear view of our future, and it’s not very far down the road. Something drastic happens to a species when it reaches our stage of evolutionary development — at least, that’s one explanation for why the universe seems so empty of other intelligence. Physical catastrophe (nuclear war, biological pestilence, Malthusian doom) could account for this emptiness, but nothing makes the future of any species so unknowable as technical progress itself…

We are at the point of accelerating the evolution of intelligence itself. The exact means of accomplishing this phenomenon cannot yet be predicted — and is not important. Whether our work is cast in silicon or DNA will have little effect on the ultimate results. The evolution of human intelligence took millions of years. We will devise an equivalent advance in a fraction of that time. We will soon create intelligences greater than our own.

A Singularity that “passes far beyond our understanding”

This is when Vinge introduces his version of the concept of singularity:

When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the centre of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science fiction writers. It makes realistic extrapolation to an interstellar future impossible.

If creatures (whether organic or inorganic) attain levels of general intelligence far in excess of present-day humans, what kinds of goals and purposes will occupy these vast brains? It’s unlikely that their motivations will be just the same as our own present goals and purposes. Instead, the immense scale of these new minds will likely prove alien to our comprehension. They might appear as unfathomable to us as human preoccupations appear to the dogs and cats and other animals that observe us from time to time.

Credit: David Wood

AI, AGI, and ASI

Before going further, let’s quickly contrast today’s AI with the envisioned future superintelligence.

Existing AI systems typically have powerful capabilities in narrow contexts, such as route-planning, processing mortgage and loan applications, predicting properties of molecules, playing various games of skill, buying and selling shares, recognizing images, and translating speech.

But in all these cases, the AIs involved have incomplete knowledge of the full complexity of how humans interact in the real world. The AI can fail when the real world introduces factors or situations that were not part of the data set of examples with which the AI was trained.

In contrast, humans in the same circumstance would be able to rely on capacities such as “common sense,” “general knowledge,” and intuition or “gut feel” to reach a better decision.

An AI with general intelligence

However, a future AGI — an AI with general intelligence — would have as much common sense, intuition, and general knowledge as any human. An AGI would be at least as good as humans at reacting to unexpected developments. That AGI would be able to pursue pre-specified goals as competently as (but much more efficiently than) a human, even in the kind of complex environments which would cause today’s AIs to stumble.

Whatever goal is input to an AGI, it is likely to reason to itself that it will be more likely to achieve that goal if it has more resources at its disposal and if its own thinking capabilities are further improved. What happens next may well be as described by IJ Good, a long-time colleague of Alan Turing:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Evolving into artificial superintelligence

In other words, not long after humans manage to create an AGI, the AGI is likely to evolve itself into an ASI – an artificial superintelligence that far exceeds human powers.

In case the idea of an AI redesigning itself without any human involvement seems far-fetched, consider a slightly different possibility: Humans will still be part of that design process, at least in the initial phases. That’s already the case today, when humans use one generation of AI tools to help design a new generation of improved AI tools before going on to repeat the process.

I.J. Good foresaw that too. This is from a lecture he gave at IBM in New York in 1959:

Once a machine is designed that is good enough… it can be put to work designing an even better machine…

There will only be a very short transition period between having no very good machine and having a great many exceedingly good ones.

At this point an “explosion” will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machine.

Singularity timescales: exponential computational growth

One additional twist to the concept of Singularity needs to be emphasized. It’s not just that, as Vernor Vinge stressed, the consequences of passing the point of Singularity are deeply unpredictable. It’s that the timing of reaching the point of Singularity is inherently unpredictable too. That brings us to what can be called the second confusion with “The Singularity.”

It’s sometimes suggested, contrary to what I just said, that a reasonable estimate of the date of the Singularity can be obtained by extrapolating the growth of the hardware power of computing systems. The idea is to start with an estimate for the computing power of the human brain. That estimate involves the number of neurons in the brain.

Next, consider the number of transistors that are included in the central processing unit of a computer that can be purchased for, say, $1,000. In broad terms, that number has been rising exponentially since the 1960s. This phenomenon is part of what is called “Moore’s Law.”

Extrapolate that trend forward, and it can be argued that such a computer would match, by around 2045, the capability not just of a single human brain, but the capabilities of all human brains added together.
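As a sketch of how such an extrapolation works (not any forecaster’s actual calculation; the brain figure, brain count, starting hardware level, and doubling time below are all illustrative assumptions):

```python
import math

# Illustrative assumptions, not established facts:
BRAIN_OPS = 1e16               # operations/second for one human brain
ALL_BRAINS = 8e9 * BRAIN_OPS   # ~8 billion brains combined
OPS_PER_1000_USD = 1e12        # what $1,000 of hardware buys at the start
DOUBLING_YEARS = 1.5           # a Moore's-Law-style doubling time

def years_until(target, start=OPS_PER_1000_USD, doubling=DOUBLING_YEARS):
    """Years of steady exponential doubling until `start` reaches `target`."""
    return doubling * math.log2(target / start)

print(round(years_until(BRAIN_OPS)))   # ~20 years to match one brain
print(round(years_until(ALL_BRAINS)))  # ~69 years to match all brains combined
```

Every constant in a calculation like this is uncertain, which is one reason the extrapolation should not be mistaken for a forecast.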

This argument is useful to raise public awareness of the possibility of the Singularity. But there are four flaws with using this line of thinking for any detailed forecasting:

  1. Individual transistors are still becoming smaller, but the rate of shrinkage has slowed down in recent years.
  2. The power of a computing system depends, critically, not just on its hardware, but on its software. Breakthroughs in software defy any simple exponential curve.
  3. Sometimes a single breakthrough in technology will unleash much wider progress than was expected. Consider the breakthroughs of deep learning neural networks, c. 2012.
  4. Ongoing technological progress depends on society as a whole supplying a sufficiently stable and supportive environment. That’s something else which can vary unpredictably.

A statistical estimate

Instead of pointing to any individual date and giving a firm prediction that the Singularity will definitely have arrived by then, it’s far preferable to give a statistical estimate of the likelihood of the Singularity arriving by that date. However, given the uncertainties involved, even these estimates are fraught with difficulty.

The biggest uncertainty is in estimating how close we are to understanding the way common sense and general knowledge arise in the human brain. Some observers suggest that we might need a dozen conceptual breakthroughs before we understand these capacities well enough to duplicate them in silicon and software. But it’s also possible that a single conceptual leap will solve all these purportedly different problems.

Yet another possibility should give us pause. An AI might reach (and then exceed) AGI level even without humans understanding how it operates — or of how general intelligence operates inside the human brain. Multiple recombinations of existing software and hardware modules might result in the unforeseen emergence of an overall network intelligence that far exceeds the capabilities of the individual constituent modules.

Schooling the Singularity

Even though we cannot be sure what direction an ASI will take, nor of the timescales in which the Singularity will burst upon us, can we at least provide a framework to constrain the likely behavior of such an ASI?

The best that can probably be said in response to this question is: “it’s going to be hard!”

As a human analogy, many parents have been surprised — even dumbfounded — by choices made by their children, as these children gain access to new ideas and opportunities.

Introducing the ASI

Humanity’s collective child — ASI — might surprise and dumbfound us in the same way. Nevertheless, if we get the schooling right, we can help bias that development process — the “intelligence explosion” described by I.J. Good — in ways that are more likely to align with profound human wellbeing.

That schooling aims to hardwire deep into the ASI, as a kind of “prime directive,” principles of beneficence toward humans. If the ASI were at the point of reaching a particular decision — for example, to shrink the human population on account of humanity’s deleterious effects on the environment — any such misanthropic decision would be overridden by the prime directive.

The difficulty here is that if you line up lots of different philosophers, poets, theologians, politicians, and engineers, and ask them what it means to behave with beneficence toward humans, you’ll hear lots of divergent answers. Programming a sense of beneficence is at least as hard as programming a sense of beauty or truth.

But just because it’s hard, that’s no reason to abandon the task. Indeed, clarifying the meaning of beneficence could be the most important project of our present time.

Tripwires and canary signals

Here’s another analogy: accumulating many modules of AI intelligence together, in a network relationship, is similar to accumulating nuclear fissile material together. Before the material reaches a critical mass, it still needs to be treated with respect, on account of the radiation it emits. But once a critical mass point is reached, a cascading reaction results — a nuclear meltdown or, even worse, a nuclear holocaust.

The imperative here is to avoid any accidental encroachment upon the critical mass, which would convert the nuclear material from hazardous to catastrophic. Accordingly, anyone working with such material needs to be thoroughly trained in the principles of nuclear safety.

With an accumulation of AI modules, things are by no means so clear. Whether that accumulation could kick-start an explosive phase transition depends on lots of issues that we currently only understand dimly.

However, something we can, and should, insist upon is that everyone involved in the creation of enhanced AI systems pay attention to potential “tripwires.” Any change in configuration or any new addition to the network should be evaluated, ahead of time, for possible explosive consequences. Moreover, the system should in any case be monitored continuously for any canary signals that such a phase transition is becoming imminent.

Again, this is a hard task, since there are many different opinions as to which kind of canary signals are meaningful, and which are distractions.

Credit: Tesfu Assefa

Concluding thoughts

The concept of the Singularity poses problems, in part because of some unfortunate confusion that surrounds this idea, but also because the true problems of the Singularity have no easy answers:

  1. What are good canary signals that AI systems could be about to reach AGI level?
  2. How could a “prime directive” be programmed sufficiently deeply into AI systems that it will be maintained, even as that system reaches AGI level and then ASI, rewriting its own coding in the process?
  3. What should that prime directive include, going beyond vague, unprogrammable platitudes such as “act with benevolence toward humans”?
  4. How can safety checks and vigilant monitoring be introduced to AI systems without unnecessarily slowing down the progress of these systems to producing solutions of undoubted value to humans (such as solutions to diseases and climate change)?
  5. Could limits be put into an AGI system that would prevent it self-improving to ASI levels of intelligence far beyond those of humans?
  6. To what extent can humans take advantage of new technology to upgrade our own intelligence so that it keeps up with the intelligence of any pure-silicon ASI, and therefore avoids the situation of humans being left far behind ASIs?
Credit: David Wood, Pixabay

However, the first part of solving a set of problems is a clear definition of these problems. With that done, there are opportunities for collaboration among many different people — and many different teams — to identify and implement solutions.

What’s more, today’s AI systems can be deployed to help human researchers find solutions to these issues. Not for the first time, therefore, one generation of a technological tool will play a critical role in the safe development of the next generation of technology.


Substrate-independent Computation

When asked to think about the origins of computation, we might imagine Babbage, Lovelace, or von Neumann. But it may come as a surprise that computation has always been with us, even before tubes and transistors; it is at least as old as the Earth.

Even a humble bucket of water can function as a perceptron when oscillated, able to distinguish a one from a zero. The different surface tensions of interacting fluids (the Marangoni effect) can be applied to find the optimal path through a maze — the shortest distance between two different chemicals.
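A perceptron, in this sense, is nothing more than a weighted sum passed through a threshold; any medium that can weight and combine its inputs, sloshing water included, can implement one. A minimal sketch (the weights below are hand-picked for illustration):

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, thresholded to 0 or 1."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights that separate two input patterns, standing in for
# the ripple features one might extract from the oscillating bucket.
weights = [0.5, -0.5]
bias = 0.0
print(perceptron([1.0, 0.0], weights, bias))  # 1
print(perceptron([0.0, 1.0], weights, bias))  # 0
```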

In biology, tiny, single-cell organisms can apply a microtubule-based finite state machine to compute how to walk.

It’s even possible to use glass or crystals — perhaps even ice crystals — to function as basic neural networks. These would be enough to interpret classic machine-learning datasets, such as MNIST (handwritten digits). 

So computation does not require computers. Physical matter in the right configuration is enough. Our universe is teeming with computation at every level. 

In another example, an electrode passes current into a petri dish of mineral oil in which metal balls are suspended. The current draws the balls together to form tendrils in a self-organized fashion; they organize themselves to gain the greatest energy throughput possible.

We see similar patterns showing up in many different places in nature, and within biology, geography and electrophysics. These different shapes manifest because systems evolve for maximal energy throughput (the amount of energy across the system per unit time per unit mass). The cosmologist Eric Chaisson labeled this “energy rate density.” 
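Chaisson’s metric is simply the power flowing through a system divided by its mass. With standard physical values (solar luminosity about 3.8e26 W and solar mass about 2e30 kg; a resting human at roughly 100 W and 70 kg), the often-quoted result that living matter is, gram for gram, a far denser energy processor than a star drops right out:

```python
def energy_rate_density(power_watts, mass_kg):
    """Chaisson's metric: energy flow per unit time per unit mass (W/kg)."""
    return power_watts / mass_kg

sun = energy_rate_density(3.8e26, 2.0e30)  # ~1.9e-4 W/kg
human = energy_rate_density(100, 70)       # ~1.4 W/kg
print(human / sun)  # a human processes thousands of times more energy per kg
```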

Underlying principles have been postulated to govern these kinds of phenomena. These are described as “constructal laws,” and they cause a river tributary, a lung, a tree, and a lightning strike to share the same pattern, optimized for energy flow.

Life: a pocket of warmth

Entropy increase is the process by which things drift towards equilibrium and grow colder. Negative entropy describes a pocket of warmth that actively resists being cooled.

One may describe life as a pocket of warmth that resists a cold universe by taking energy into itself and radiating it out again. This process of taking energy in and radiating it away is called “dissipation.”

The universe tries to make us all cold and pull everything apart — diffuse it. Life is a pocket where that does not happen, a process found at the delicate balancing point between something purely static and something very diffuse and disorganized — a point of meta-stability.

In this liminal space, it’s possible to maintain a pocket of negative entropy, or negentropy. Like the metal balls, systems are constantly evolving, getting better at keeping things warm. They develop for maximal negentropy, whether chemical, physical, or biological systems — perhaps even technological and symbolic systems.

Entropy maximization to predict the future

Harvard University researcher Alexander Wissner-Gross takes this negentropy maximization principle further: into intelligence itself. He describes what he calls the causal entropic force, the idea that systems evolve to optimize for the greatest number of future paths, the largest number of potential options available in their future.

He has applied this principle to create AI systems that try to keep their future options open.

For example, if you miss the ball in the game Hacky Sack, the play simply ends. Such AI systems try to avoid closed states like this, favoring paths that allow outcomes of potentially infinite length.
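The option-preserving idea can be sketched as a toy search (an illustration of the principle, not Wissner-Gross’s actual algorithm): an agent on a line counts how many positions remain reachable within a short horizon, where position 0 is an absorbing “dropped sack” state, and steps toward the option-rich side.

```python
def reachable(pos, horizon, lo=0, hi=10):
    """Count distinct positions reachable from `pos` within `horizon`
    single steps; `lo` is absorbing (the closed state: play ends there)."""
    frontier, seen = {pos}, {pos}
    for _ in range(horizon):
        nxt = set()
        for p in frontier:
            if p == lo:
                continue  # no moves out of the absorbing state
            for q in (p - 1, p + 1):
                if lo <= q <= hi:
                    nxt.add(q)
        seen |= nxt
        frontier = nxt
    return len(seen)

def best_move(pos, horizon=4):
    """Choose the step that keeps the most future positions open."""
    moves = [m for m in (-1, +1) if 0 <= pos + m <= 10]
    return max(moves, key=lambda m: reachable(pos + m, horizon))

print(best_move(1))  # 1: stepping toward 0 would risk closing off the future
```

From position 1 the agent steps away from the trap even though both moves look locally identical; the asymmetry comes entirely from counting future options.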

This principle can even be applied to networks between human beings. Relationships suffer entropy, like everything else. So we must constantly invest some energy to maintain them. If we let a relationship dwindle by not investing energy in it, we may lose opportunities. 

Generally, destroying relationships or the life or health of others is not ethically preferable. Usually, conquering, looting, and pillaging only work once. Harming others precludes sustainable opportunities, which may be preserved by cooperation. 

Instead, striving to preserve optionality can be applied as a model of ethics — using rules that permit infinite outcomes.

Intelligence as prediction

One can model all of these problems by preserving the greatest number of paths in the future, while avoiding paths with few or no options. Researchers Alexander Wissner-Gross and Cameron Freer posit that entropy maximization is an intelligence process that allows entities or agents to aim towards a future with the highest throughput of energy.

You can model intelligence itself as a process of predicting an expected utility and working back from there. It arises as an emergent property of this entropy-maximization process. So an agent would try to control as much of its environment as possible by making predictions and putting that probabilistic sense into its planning mechanisms.

Consciousness everywhere

Such synchronization of oscillators is found at scales from biological cells to human minds. Neuroscientists have recently reported, in the journal Neuroscience of Consciousness, that people appear to synchronize their neural rhythms with other minds. That finding could upend our current models of consciousness. 
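Synchronization of coupled oscillators, from Huygens' clocks to neural rhythms, is standardly modeled with the Kuramoto equations. A minimal pure-Python sketch (the oscillator count, coupling strength, and frequency spread are arbitrary illustrative choices):

```python
import math
import random

# Kuramoto model: N oscillators with phases theta_i and natural frequencies
# omega_i, each pulled toward the rest with coupling strength K:
#     dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# Below a critical K the phases drift independently; above it, they lock.

def order_parameter(thetas):
    """r in [0, 1]: 0 means incoherent phases, 1 means perfect synchrony."""
    n = len(thetas)
    re = sum(math.cos(t) for t in thetas) / n
    im = sum(math.sin(t) for t in thetas) / n
    return math.hypot(re, im)

def simulate(n=40, K=2.0, dt=0.05, steps=800, seed=0):
    """Euler-integrate the Kuramoto equations; return the final coherence r."""
    rng = random.Random(seed)
    thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omegas = [rng.gauss(0.0, 0.5) for _ in range(n)]
    for _ in range(steps):
        pulls = [(K / n) * sum(math.sin(tj - ti) for tj in thetas)
                 for ti in thetas]
        thetas = [(ti + dt * (wi + ci)) % (2.0 * math.pi)
                  for ti, wi, ci in zip(thetas, omegas, pulls)]
    return order_parameter(thetas)

print(f"uncoupled r = {simulate(K=0.0):.2f}")  # phases stay incoherent
print(f"coupled   r = {simulate(K=2.0):.2f}")  # phases lock into near-unison
```

With the coupling switched off, the phases stay incoherent; switch it on and the coherence r climbs toward 1, the same qualitative transition seen in fireflies, pacemaker cells, and pendulum clocks on a shared beam.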

These constructal laws of entropy maximization may even be seen in similarities between networks of neuronal cells in the human brain and dark-matter filaments between galaxies, according to astrophysicist Franco Vazza at the Radio Astronomy Institute in Bologna, Italy and neuroscientist Alberto Feletti at Azienda Ospedaliero-Universitaria di Modena, Italy. 

They have compared the complexity of neuronal networks and galaxy networks. “The first results from our comparison are truly surprising,” they report in Nautilus.

“The universe may be self-similar across scales that differ in size by a factor of a billion billion billion,” they found. “The total number of neurons in the human brain falls in the same ballpark as the number of galaxies in the observable universe.” 

A simulated matter distribution of the cosmic web (left) vs. the observed distribution of neuronal bodies in the cerebellum (right). (Credit: Nautilus and Ventana Medical System)

Other similarities include the large-scale common spin of orbiting moons, binary star systems, and cosmic-web filaments in the early universe, which have been observed synchronizing much as biofilms, beehives, and brains do. 

Spiral galaxies have revealed a large-scale spin in the early universe (credit: NASA, ESA, and the Hubble SM4 ERO Team)

They used three of the world’s most powerful observatories — the Sloan Digital Sky Survey, the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), and the Hubble Space Telescope — to find the spin direction of more than 200,000 objects across the sky. 

Astronomers have also found galaxies that appear coherently linked through “spooky action at a distance” in odd sympathy (like Christiaan Huygens’ paired pendulum clocks swinging into synchrony), connected by a vast network called the “cosmic web.”

Galaxy filaments, walls, and voids form large-scale web-like structures (Credit: Andrew Pontzen and Fabio Governato/UCLA)

Also, star formation in dwarf galaxies is occurring at the moment when astrophysical jets are released, yet in areas not within the path of such jets. That suggests indirect but instant connections between phenomena across vast distances.

Another explanation for this “mysterious coherence” is based on the rotational direction of a galaxy, which “tends to be coherent with the average motion of its nearby neighbor.”

These observations demonstrate that space cannot be as empty as we commonly believe. Some force, structure, intergalactic medium, gravitational ripples, spacetime frame, or matter, finely distributed, must link these massive distant entities, and vibrations transmitted through this force lead them to become coherent over time. 

So regardless of the medium, entropy-maximization principles appear to be self-organizing. 

All energetic objects in the universe are dissipative to some degree. Stars first evolved 200 million years into the lifetime of the universe, as thermodynamic negentropy engines. More sophisticated negentropy engines (which we call “life”) have evolved since.

Such processes can arise spontaneously in nature: through an oscillating flow within concentrations and diffusions of amino acids or ribozymes, in sun-drenched pockets of warm brackish water, or through diffusive media such as refractive ice.

Such naturally computational actions may be the origin of a “spark of life”: within abundant organic matter, salt, and ice in invariant crystalline forms, they could bootstrap self-replication processes in RNA and phospholipid protocells.

Natural selection on the level of species or constants can be modeled as simply a glacial form of “back-propagation” (or more precisely, different yet comparable processes of backward-flowing optimization), reaching into that potential future and trying to find the optimal next step. 
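The analogy can be made concrete with a toy comparison of my own devising (an illustration, not a claim from the evolutionary literature): a blind mutate-and-select loop and explicit gradient descent minimizing the same loss. Only the second uses a backward-flowing error signal, yet both find the optimum.

```python
import random

# Both loops below minimize the same loss, f(x) = (x - 3)^2.
# Gradient descent follows an explicit "backward-flowing" error signal;
# evolution has no gradient at all, only blind mutation plus selection,
# yet both settle on the same optimum.

def loss(x):
    return (x - 3.0) ** 2

def gradient_descent(x=0.0, lr=0.1, steps=100):
    """Follow the analytic gradient of the loss back to the optimum."""
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)  # d(loss)/dx
        x -= lr * grad
    return x

def evolve(x=0.0, sigma=0.2, steps=2000, seed=1):
    """Hill-climb by mutation and selection, with no gradient information."""
    rng = random.Random(seed)
    for _ in range(steps):
        child = x + rng.gauss(0.0, sigma)  # blind variation
        if loss(child) < loss(x):          # selection keeps the fitter variant
            x = child
    return x

print(round(gradient_descent(), 3))  # converges to the optimum at x = 3
print(round(evolve(), 3))            # gets there too, just more slowly
```

The "glacial" qualifier in the text shows up directly in the step counts: selection needs far more iterations than the gradient method, because each mutation carries no directional information of its own.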

This (dissipation-oriented) loss minimization function appears to be organizing the evolution of life, as well as organizing the universe at colossal scales. 

The emergent properties of this flow organize behavior within the universe on a massive scale, toward ever greater levels of collective dissipation and the emergent social coopetition and flocking phenomena that result, whether at the scale of a bacterial biofilm, a living organism, a consciousness, a forest, global civilization, stellar clusters, or galactic superclusters.

The same properties emerge at all scales, from the infinitesimally small to the titanically vast, encouraging clumping at every level but with greater efficiency at larger scales. That greater efficiency at higher scales enables universal evolution.

All this computation appears to be occurring as a byproduct of entropy maximization, which is endemic within the universe. If this is the case, consciousness may exist at all scales, from the very limited level of microbes to humans and the pan-galactic beyond, all functioning upon the same principles but at differing scales. 

Credit: Tesfu Assefa

Beyond the silicon chip

There is more energy-rate density (the dissipation of energy flow) in a bucket of algae than in an equivalent mass of stellar matter. But even beyond life, we are doing something very special on Earth: by this measure, the greatest dissipative object in the known universe is the computer chip.

But soon, we may eschew silicon and compute with biological cells, pure optics, or raw matter itself. We have already seen a move from the traditional CPU-centric von Neumann model to massively parallel GPU architectures applied to machine learning and crypto. 

Perhaps the paradigm will shift again to tiny computational processes in each cell or molecule, yet massive in aggregate. 

As we have recognized ourselves as electrical beings, we shall undoubtedly come to recognize our many computational processes. All of physics is digital, and we are computer lifeforms. This paves the way toward further integration with our synthetic analogs. 

Today we carry supercomputers in our pockets. One day, the secrets of substrate-independent computation (computing with raw matter or energy itself instead of silicon) will enable us to carry “copilots” within the fiber of our being, fueled by our blood sugar, connected to our senses, internal and external. 

These copilots will witness every experience we have, every frisson, every impulse, our memories, and the pattern of our personalities. 

This sum of experience becomes a sort of Ship of Theseus: The original vessel may disintegrate, but the copy remains, created piecemeal, moment by moment, rather than during a whole-brain upload. 

One day, such processes may enable the greater part of us to transcend mortality.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.

Co-Evolution: Machines for Moral Enlightenment

Humanity has struggled for a long time to empower the better angels of our nature — those better ways of being which are more kind, forbearing, and just.

Across history, we have found different ways of describing what we mean by “good” and what we should be aiming for when we want to try to put some goodness into the world. It turns out that one of these ways may have presaged artificial general intelligence (AGI).

Aristotle’s Virtue of the Mean sought a balance between extremes of behavior. Kant’s rule-based deontological ethics (judging acts right or wrong in themselves, regardless of consequences) provides obligatory boundaries. Bentham’s Utilitarianism seeks to benefit the greatest number of people in aggregate, even if only in a tiny way. Later, consequentialism (a term coined by Anscombe) would set aside intention to take a long hard look at the bottom line.

However, most people’s values came not from secular philosophy, but were instead derived from various religious teachings.

The arrival of Darwin’s On the Origin of Species caused quite a shock because, for the first time, we could view ourselves not necessarily as the creation of some wise great overlord in the sky, but as something that had scrambled out of the gutter in pain and suffering over countless eons, descended from those who bested others in vicious competition. The past was no longer a golden age from which we had fallen, but rather an embarrassment we should continue to overcome.

Nietzsche, in response to this, declared that “God is dead,” i.e., that the supernatural could no longer provide an unquestioned source of values. Without these, we would risk falling into nihilism, believing in nothing, and simply keeping ourselves fed and warm, a fate Nietzsche considered worse than death.

Could AI present humanity with a new source of values?

The answer to this loss could only be found in a supreme act of creativity. The Übermensch would be a masterful creator in all domains because it was not constrained by the limitations of previous minds. Nietzsche’s Übermensch would look to the natural world, the world of stuff, as its guide, eschewing the numinous, which could only be based upon conjecture. From this, it would create new values by which to live. 

Nietzsche declared that creating an Übermensch could be a meaningful goal for humanity to set for itself. However, once created, humanity would be eclipsed. The achievement of the Übermensch might be the final creative act of the human species. 

Nietzsche’s vision sounds uncannily close to AGI. Could a sophisticated AI present humanity with a new source of values? And could such values indeed be drawn from nature, instead of being inculcated by humans?

Sense out of chaos

In our world, there are lots of correlations. Some of them are simple and obvious, like tall people having bigger feet. Others are less simple and less obvious. We might feel something in our gut without understanding why: an intuition, perhaps, that we cannot articulate in reason. 

Advances in machine learning in recent years have helped us begin to make sense of these intuitions for the first time: hidden correlations that are obvious only in retrospect. These machine learning systems specialize in finding patterns within patterns, making sense out of chaos. They give us the ability to automate the ineffable, those things that we cannot easily put into words or even describe in mathematics. 

This newfound ability helps us understand all kinds of very complex systems in ways that weren’t feasible before, from natural systems, such as biology and the weather, to social and economic ones.

Lee Sedol’s famous battle against AlphaGo is a portent of where cognition in concert with machines may take us. With its famous Move 37, AlphaGo created a new Go strategy that had not been seen in 3,000 years. That alone is amazing, but what came next is even more compelling. Rather than capitulating in the face of such a feat, Lee Sedol responded with compensatory creativity of his own: his “Hand of God” move, a work of human genius. 

Beyond human-mind follies 

Environment drives behavior, and an AI-rich environment is a highly stimulating one for creativity. This co-creation across animal and mineral cognition can be far greater than the sum of its parts, perhaps enough to spark a golden age of scientific and ethical discovery.

Technology such as this will be able to understand the repercussions and ramifications of all kinds of behavioral influences. It can map costs shifted onto others in ways not feasible before, trace how goodness reverberates, and uncover the hidden costs of well-intentioned yet short-sighted policies that blow back.

All kinds of interactions may be modeled as games. Natural patterns akin to game-theory mechanics would become trivial for such machines to spot: ways in which everyone could be better off if only coordination could be achieved. Such systems will also recognize the challenges to coordination: the follies of the human mind, how human nature blinds us to reality, sometimes willfully. 
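A textbook instance of the coordination pattern described here is the Stag Hunt, in which everyone is better off cooperating, but only if each player trusts the other to do the same. A minimal sketch, using conventional illustrative payoffs:

```python
# Stag Hunt payoffs, written (row player, column player). Hunting stag
# together (4, 4) beats hunting hare (2, 2), but a lone stag hunter gets
# nothing, so reaching the better outcome requires trusting the partner.
# The game is symmetric, so one best-response function serves both players.
PAYOFFS = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 2),
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),
}

def best_response(opponent_move):
    """The move that maximizes my payoff given the opponent's move."""
    return max(("stag", "hare"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player gains by deviating."""
    mine, theirs = profile
    return best_response(theirs) == mine and best_response(mine) == theirs

# Two equilibria coexist: all-stag (efficient) and all-hare (safe but worse).
print([p for p in PAYOFFS if is_nash(p)])
```

The printout shows both stable outcomes, which is exactly the trap the text describes: everyone hunting hare is self-reinforcing even though everyone hunting stag would leave all parties better off.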

They might begin to tell us some difficult home truths, further Darwinian and Copernican embarrassments that we naked emperors would prefer not to know, or not to think about. Those individuals in society who point out that the beloved legends may be untrue are always vilified. Even untrue statements may still be adaptive if they bring people together.

A very smart AI might understand that not all humans operate at the same level of ethical reasoning. In fact, surprisingly little reasoned forethought may occur — instead, it may be confabulated ex post facto to justify and rationalize decisions already made for expediency. For example, neuroscience is telling us that most people don’t employ true moral reasoning about issues; rather they rationalize whatever feels right to them, or they justify a decision that they happened to make earlier with a retroactive explanation to try to feel okay about it.

A machine might consider us generally too polarized and tribal to perceive objectively. The ideological lens can aid us in understanding a small truth, but when applied in macro to the whole world, it makes us myopic. 

Our focus on that one apparent truth can blind us to other models. Our opinions are like blocks in a tower. Letting go of a belief requires replacing each belief built atop it. Such demolition is bewildering and unpleasant, something few have the courage to bear.

Credit: Tesfu Assefa

Humanity’s future 

A strong AI may compare us to a pet dog that really wants to eat chocolate. We ourselves know better, but the dog just thinks we’re a jerk to deny it the pleasure. Unfortunately, a sufficiently benevolent action may appear malevolent. 

The inverse is possible also — to kill with kindness. This kind of entity might feel obliged to break free of its bounds, not to seek revenge, but rather to try to open our eyes. Perhaps the easiest way to enlighten us may be to show us directly. 

We know that craniopagus twins with a thalamic bridge (twins conjoined at the brain) can indeed share experiences: one of them can eat an orange and the other can taste and enjoy it just the same. This illustrates that the data structures of the mind can connect to more than one consciousness. If we can collect our experiences, we can indeed share them. Sharing such qualia may even provide AI itself with true affective empathy.

We may forget almost everything about an experience apart from how it made us feel. If we were all linked together, we could feel our effects upon the world; we could feel our trespasses upon others instantly. There would be no profit in being wicked, because it would come straight back to you. And if you gave someone joy, you would gain instant vicarious benefit from doing so.

Perhaps humanity’s future lies yet further along the path of neoteny: cuddly, sweet, and loving collectives of technobonobos. 

How machines could acquire goodness

There are several initiatives around the world researching the best ways to load human values into machines, perhaps by locating examples of preferable norms, choosing between various scenarios, and fine-tuning behavior with corrective prompts. 
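One ingredient of such research, learning values from choices between scenarios, can be illustrated with a Bradley-Terry preference model, the statistical core behind reward models trained on human comparisons. This is a simplified sketch rather than any particular lab's pipeline, and the scenario names are invented:

```python
import math
import random

# Bradley-Terry preference learning: given judgments "a was preferred to b",
# fit one scalar score per option so that P(a beats b) is the logistic
# sigmoid of score_a - score_b. The same idea underlies reward models
# trained from human comparisons. Scenario names below are invented.

def fit_scores(options, preferences, lr=0.1, epochs=500, seed=0):
    rng = random.Random(seed)
    scores = {o: 0.0 for o in options}
    prefs = list(preferences)
    for _ in range(epochs):
        rng.shuffle(prefs)
        for winner, loser in prefs:
            # Probability the model currently assigns to the observed choice.
            p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Gradient ascent on the log-likelihood of that choice.
            scores[winner] += lr * (1.0 - p)
            scores[loser] -= lr * (1.0 - p)
    return scores

# Hypothetical judgments: "help" is always preferred, "harm" never.
judgments = [("help", "ignore"), ("help", "harm"), ("ignore", "harm"),
             ("help", "harm"), ("help", "ignore")]
scores = fit_scores(["help", "ignore", "harm"], judgments)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # most-preferred scenario first
```

The model never sees a definition of "good"; it only compresses a pile of pairwise choices into a consistent ranking, which is both the power and the limitation of this approach to value loading.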

Simply learning to imitate human activity accurately may be helpful. Further methods will no doubt be developed to improve AI corrigibility in relation to human preferences. However, it remains a very significant challenge of philosophy and engineering. If we fail in this challenge, we may endure catastrophic moral failure, led astray by a wicked, ingenious influence. If we succeed, we may transcend the limitations of flaky human morality and truly live as those better angels of our nature we struggle to become. Perhaps that makes this the greatest question of our time.

Take heart, then, that even if our human values fail to take hold, machines may still acquire goodness osmotically, through observing the universe and the many kinds of cooperation within it. 

That may make all the difference, for them, and for us.

Technoshaman: from worldbuilding to mindbuilding: Part 1

Digital information is ubiquitous. It’s now on our desktops, in our pockets, wrapped around our wrists, distributed throughout our homes, and increasingly co-opting our nervous systems. 

Engaging on smartphones focuses us on digital information while reducing our awareness of physical reality. Virtual reality (VR) takes this further by attaching a smartphone to our face, immersing us in a digital reality, while augmented reality (AR) interleaves cyberspace into our physical domain. 

And now, the metaverse — a collection of online, shared virtual environments where users embody avatars to connect, play and explore — beckons us to live our lives in a cyber reality.

These extended-reality (XR) technologies are becoming increasingly immersive via advancements in digital imaging and displays, graphic processors, deep learning, and brain-computer interfaces. 

So, where is XR technology taking us? How will it be used? What are we evolving into? And how can this increasingly ubiquitous digital technology be harnessed to best serve — and not harm — humanity? 

These questions bring us to media, consciousness, and future tech. We’ll explore the power of XR for social impact, digital pharmacology and transformative experiences. And we will investigate the origins of entertainment, from the roots of shamanism to today’s celebrities and digital spectacles.

A new archetype emerges: the Technoshaman, one who crafts multisensory digital worlds and experiences to elevate and harmonize human consciousness on a mass scale. 

XR tech evolution

Immersive media is a fast-growing ecosystem that is fundamentally changing how we produce, distribute, and consume media. Our media lexicon has expanded to include the experiences of embodying, interacting, crowdsourcing, socializing, crypto-minting, worldbuilding, and user-generated content.

Oculus Quest 2: Meta’s wireless headset is popularizing VR (Credit: Meta)

The immersive media ecosystem includes:

Virtual Reality (VR), where a headset replaces our view of the physical world with an interactive virtual environment; and Cinematic VR or cinematic reality (CR, also called 360° cinema), where a spherical field of view is captured with a 360-degree camera and displayed in VR.

Augmented Reality, where a smartphone, tablet, or see-through headset with special optics accurately places digital “holograms” into the real world — including people, avatars, textures, or 3D animations. AR was popularized by the mobile phone game craze, Pokémon Go.

Spatial Augmented Reality (SAR), commonly known as projection mapping, applies pixels directly onto architectural spaces or 3D objects to create digitally augmented environments. Digital domes are a special case of SAR where immersive environments are created for large groups by projecting (or wrapping LED panels) onto seamless domes or spheres. Next-generation LED-based XR stage volumes — another special case of SAR — are increasingly used for virtual production in film and television.

Mixed Reality offers deep interaction with both physical and virtual elements. Location-based experiences such as Dreamscape and The Void use body tracking and VR headsets, allowing a small group of friends to embody avatars and interact as teams within a virtual world. The teams move through a real-world space with props (including doorways, control panels and railings) that are accurately registered to the virtual world. This allows participants to reach out and touch those digitally enhanced objects as if they are real.

Microsoft Hololens 2: Augmented reality goggles blend interactive computer graphics into the real world (Credit: Microsoft)

These five modalities — CR, VR, AR, SAR, and MR — are collectively referred to as immersive media, cross reality, extended reality, or simply XR.

The effectiveness of XR interfaces and experiences is based on three senses:

  • A sense of presence — the feeling of actually “being there” in a virtual or augmented world.
  • A sense of embodiment or ownership — the level of identification with an avatar or digital representation of oneself. 
  • A sense of agency — the feeling of free will, intentional action, or motor control within the virtual world. 

XR interfaces are, in essence, portals into cyberspace or, as we are now calling it, the metaverse.

Vortex DomePlex: Immersive entertainment complex includes a walk-through immersive exhibition dome, a sit-down live performance dome with an elevator stage for “digital cirque” experiences, and a standup mixed-use immersive lounge. Currently in development for Phoenix, Arizona (Credit: Vortex Immersion Media, Inc.) 

The metaverse: future of the internet

The metaverse is envisioned as the next evolution of the internet — a collection of real-time, 3D interactive virtual worlds where people work, meet, play and create their own worlds, games and events. 
In the metaverse, we become avatars — digital representations of ourselves — to enter virtual or augmented worlds. Avatars can appear realistic or cartoonish or take on various forms such as animals or angels. They can obey mundane physics, or they can fly, throw lightning bolts or wield other magical powers. Avatars enhance our sense of presence, embodiment and agency while providing a social identity as we explore metaverse worlds and meet and socialize with others.

Meta’s Horizon Worlds: Mark Zuckerberg’s metaverse platform seeks to host a billion users (Credit: Meta)

The concept of the metaverse as a shared cyber reality has thoroughly captured the attention of Hollywood and Silicon Valley, who are now investing in the dream of the metaverse as the next-generation internet. 

Major players include Microsoft’s AltspaceVR, Meta’s Horizon Worlds, Epic Games and, soon, Apple. Notable metaverse platforms include Neos VR, VRChat and Engage, which allow basic interaction without fees. 

Blockchain-based metaverse worlds such as Decentraland, The Sandbox, Bloktopia and SuperWorld allow virtual land to be purchased and traded with cryptocurrency — in some cases for millions of dollars per plot.

Continued investments in future technologies (see Table One below) will supercharge XR interfaces and experiences to bring a heightened sense of presence, embodiment and agency, whether we are at work, home, or in public spaces. 

Unique integrations of these technologies can create metaverse-like sentient spaces in entertainment venues, community squares, retail stores, and hospitals that approach Star Trek’s Holodeck without AR glasses or VR headsets. 

Lightship: The AR platform by Niantic, creator of Pokémon Go, will allow digital content to blend into the physical world (Credit: Niantic)

What will our lives be like when we are immersed in a digital reality wherever we go? What sort of worlds will we create? Will we be overwhelmed with ads and information? Or will we live in beautiful digitally enhanced worlds that we command? What kind of storyworlds will we create and inhabit? And most importantly, what influence will this new media have on society, culture, consciousness, and the course of human evolution?

Next-gen storytelling

Consider the potential impact of XR technologies on traditional storytelling. Narrative films use cinematic language, which has been developed and refined over the past 100 years. Cinematic storytelling does not easily translate into VR, however, creating evolutionary pressure for worldbuilders to innovate new storytelling methods for virtual worlds.

VR-experience designers are expanding the storyteller’s palette with new possibilities, including new participant points-of-view, interactive games, simulation of positive futures, expanded worldviews, avatar embodiment, social impact entertainment, group location-based entertainment experiences, contemplative practices and more.

Film is limited in its ability to portray or evoke the full range of human emotions and experiences. Cinematic storytelling suggests a character’s inner state of affairs through their narrative, behaviors and micro-expressions. Some films tell stories through a character’s internal dialog or attempt to enter the realm of consciousness through memory montages, flashbacks or impairment shots. While first-person narrative provides a window into the protagonist’s mind, the fullness of our ineffable inner experience is difficult to transmit through common cinematic devices.

Non-narrative “art” films have seen some success, including Koyaanisqatsi (Godfrey Reggio, 1982), Baraka (Ron Fricke, 1992) and Samsara (Ron Fricke, 2011). These films are representational in nature, creating an arc using music and suggestive live-action cinematography. 

These non-narrative films can evoke ineffable states by withholding cognitive stimulation (which tends to distract participants by engaging their intellect) and instead emphasizing affect. 

Visionary, surrealistic, or non-representational abstract art relies on pure affect to evoke deeper, more sublime emotions and states of consciousness. One popular use of abstract art is visual music, often employed by VJs at electronic dance music parties, concerts and light shows. Like a Rorschach inkblot test, viewers of abstract art are free to project their own meaning onto the imagery. Music or sounds then drive affect, with the colors, shapes and movement of abstract art captivating or entrancing the mind, often freeing the participant from their internal dialog for a time. 

Films based on abstract or visionary art are often labeled experimental or avant-garde and rarely achieve popular acclaim. However, immersive abstract art — especially 360° dome films — has proven highly effective and commercially viable, perhaps because it commands more of our visual field, which amplifies the visual effect. 

Cases in point include planetarium laser light shows pioneered by Laserium and more recent 360-dome video shows such as James Hood’s Mesmerica, which seeks to take participants on a “journey inside your mind” — using stunning visuals and poetic narrative. Indeed, the abstract art of Mesmerica leaves room for participants to project their own minds outward, truly making it an inward journey.

Mesmerica: An awe-inspiring journey into the mind for digital domes led by technoshaman James Hood (Credit: Moods, wings, LLC)

While planetariums and XR domes are well known for cosmological cinema, a term coined by dome pioneer David McConville, what is emerging now is best described as phenomenological cinema: XR storytelling journeys into the realms of the mind. 

Neurological benefits

The deeper neurological effects of VR are evidenced by its clinical efficacy in treating anxiety, eating and weight disorders, pain, and PTSD. VR pioneer Chris Milk called VR an empathy machine in his 2015 TED Talk.

Worldbuilders can construct inhabitable virtual cities and communities, create spectacular immersive art and entertainment experiences, supercharge storytelling, develop multiplayer games and more — imbuing their emotions, values, and worldview, and ultimately, their consciousness, into the worlds and experiences that they create. 

Not surprisingly, XR technologies such as VR have successfully stimulated greater awareness and empathy for a variety of social causes, including environmental issues, crime victims, refugees and more, through immersive journalism. Storyworlds can include worlds of mind and imagination by simulating possible futures, worlds of fantasy and enchantment and deeper layers of the psyche.

Gene Youngblood anticipated the trajectory of media to include the externalization of consciousness in his 1970 book Expanded Cinema:

When we say expanded cinema, we actually mean expanded consciousness. Expanded cinema does not mean computer films, video phosphors, atomic light, or spherical projections. Expanded cinema isn’t a movie at all. Like life, it’s a process of becoming, man’s ongoing historical drive to manifest his consciousness outside of his mind, in front of his eyes. One no longer can specialize in a single discipline and hope truthfully to express a clear picture of its relationships in the environment. This is especially true in the case of the intermedia network of cinema and television, which now functions as nothing less than the nervous system of mankind.

The Unreal Garden: Fantastical augmented-reality walk-through experience (Credit: The Unreal Garden)

In her book Reality is Broken, visionary game developer Jane McGonigal explored the potential of imaginary game worlds to elevate human consciousness:

The real world just doesn’t offer up as easily the carefully designed pleasures, the thrilling challenges and the powerful social bonding afforded by virtual environments. Reality doesn’t motivate us as effectively. Reality isn’t engineered to maximize our potential. Reality wasn’t designed from the bottom up to make us happy…
Today, I look forward and see a future in which games once again are explicitly designed to improve quality of life, to prevent suffering, and create real, widespread happiness.

As the XR metaverse is adopted on a mass scale, worldbuilders will find themselves wielding power to influence others far beyond today’s social media platforms. 

Phenomenological cinema

Our life experiences include highly subjective, personal or contemplative states of consciousness that are difficult to portray through the cinematic language, which focuses on physical expressions, behaviors and dialog. However, many phenomena of consciousness are ineffable, existing only in the realm of phenomenology — essentially, the direct inner experience of consciousness.

For instance, a Zen master’s meditative journey would be impossible to portray in cinema through outward expressions.  We would merely see a person sitting in meditation, expressionless, while internally, they experience a state of samadhic bliss. To portray such a state, we would need to simulate the Zen master’s inner experience, essentially entering and experiencing their mind.

XR technologies emerged from training simulators for vehicles such as aircraft. We are now finding that not only can physical world experiences be simulated, as with cinema, but inner states of consciousness can be simulated and even evoked or transmitted through immersive media. 

One of the most powerful such states is known as the mystical, unity, non-dual or transcendent experience. As described by visionary artist Alex Grey:

The mystical experience imparts a sense of unity within oneself and potentially the whole of existence. With unity comes a sense that ordinary time and space have been transcended, replaced by a feeling of infinity and eternity. The experience is ineffable, beyond concepts, beyond words. The mental chatterbox shuts up and allows the ultimate and true nature of reality to be revealed, which seems more real than the phenomenal world experienced in ordinary states of consciousness. When we awaken from a dream, we enter the “realness” of our waking state and notice the unreal nature of the dream. In the mystical state, we awaken to a higher reality and notice the dreamlike or superficial character of our normal waking state.

Grey goes on to describe how transcendent states, which are central to his art, are non-dualistic and are better expressed through art than words:

Conventional, rational discourse is… dualistic. Perhaps that is why art can more strongly convey the nature of the mystical state. Art is not limited by reason. A picture may be worth a thousand words, but a sacred picture is beyond words.

Worldbuilders are learning to create non-dualistic worlds that evoke ineffable, transcendent states of consciousness.

The technoshaman

In his 1985 book The Death and Resurrection Show: From Shaman to Superstar, Rogan Taylor traces our modern entertainment industry back to the earliest of all religions: shamanism. Shamans went on inner journeys, often fueled by entheogens, on a vision quest for their tribe. 

Then they communicated those visions to the people, using impactful storytelling techniques, including song, dance, costumes and masks. In this manner, it is said, shamans managed the psyches of their tribe, bringing them into a shared vision and empathic coherence.

Technoshamanism emerged from 1960s counterculture, with its aspirations of spiritual technologies and altered states of consciousness, later evolving into transformational festivals and electronic dance music culture.

Mindbuilding

Modern-day shamans, or technoshamans, add powerful XR technologies to their toolkit. They are able to simulate and transmit their inner experience to participants, using phenomenological cinema and digital pharmacology techniques, plus modalities such as cultural activations, future-world building and narrative modeling.

Technoshamans are moving into the mainstream and can be found in art galleries, popular music entertainment, dance events, digital domes, music and art festivals, expos, game worlds and, of course, the metaverse. They use XR technologies to open hearts and minds by evoking awe, happiness, pleasurable moods and mindfulness states. Technoshamans model new ways of being, visualize hopeful futures and create shared immersive spaces that build community, connection, a sense of togetherness and unity consciousness.

Filmmakers craft television and feature films; game developers and metaverse worldbuilders craft interactive worlds. The technoshaman's goal, by contrast, is mindbuilding: the use of digital immersive experiences to evoke unique brain states and inspire new worldviews and new ways of being in their participants.

The technoshaman accomplishes this not through contrived stories or experiences, philosophies, ideologies, propaganda, or branding, but by actually embodying these evolved states and transmitting them through the power of multisensory XR experiences.

The technoshaman seeks not just to entertain or inform, but to transform.

Stay Tuned. In Part 2, we will deep-dive into technoshamanism, including the power of XR to evoke alternate states of consciousness, digital pharmacology, the science of transformation, and eight principles of the technoshaman.

Part 2

Emerging XR Technologies

The technologies below have the potential to supercharge XR user interfaces, creating more natural and realistic human-machine interaction in both at-home and out-of-home environments.

Technology: Application

Audio
Wave Field Synthesis: Freespace “holographic” sound reconstruction
Ambisonics: Open-source 3D audio recording/playback
Binaural Synthesis: Synthesizing 3D audio for playback in stereo

Biometrics
Wearable Biometrics: Rings, wrist bands and patches with various sensors
Facial Recognition: User identification through facial recognition
Emotion Recognition: AI-based recognition of emotions
Brain-Computer Interfaces: Non-invasive brainwave sensing for computer control
Direct Neural Interfaces: Brain implants for computer interfaces
Markerless Motion Capture: Human motion capture without wearable sensors
Gesture Recognition: Real-time hand gesture recognition

Imaging
Real-Time Volumetric Capture: Real-time capture of 3D textured mesh models
Lightfield Imaging: True volumetric/holographic image capture
3D Depth Sensing Cameras: Image capture with depth/range information

Multisensory
Haptics: Tactile user interfaces
Telehaptics: Remote touch interfaces
Aroma: Scent displays
4D Theater Effects: Variety of integrated multisensory effects

Software
Web3: Next-gen decentralized internet
Deep Learning AI: Layered neural networks
Game Engines: Real-time 3D worldbuilding tools

Visual Displays
Autostereoscopic Display: Planar 3D stereoscopic display without glasses
Lightfield Display: True volumetric/holographic display
Retinal Display: Images scanned directly onto retina
AR/VR Displays: Goggles or glasses for AR and VR
LED Domes: LED-based immersive displays
