The World of Sensors

The Fitbit was the first wearable biosensor to achieve widespread adoption, thanks to a simple proposition: measure the number of steps a person takes in a day. This measurement turns out to be a good proxy for fitness level and provides a clear goal to measure against: 10,000 steps a day. How does it count the steps? An accelerometer measures the movement of the Fitbit, and from that movement your activity can be inferred.

Here is an example of an accelerometer recording someone walking:

Here is an example of someone jogging – note how different the data looks:

It can tell when you are navigating a flight of stairs, and can tell which way you’re going – climbing up the stairs:

Climbing down the stairs:

Sitting down:

Standing up:

Even with just the accelerometer data above we have a pretty good idea of what the user is doing, but we can go much further. The Fitbit is a combination of sensors working together to paint a richer picture than each one could on its own. It combines an accelerometer, a gyroscope, an altimeter, a GPS, a heart rate sensor, electrical sensors that measure skin conductivity, a skin temperature sensor, an ambient light sensor, and a microphone. These are all fairly low-cost, straightforward sensors, and their applications are well known – they have been in use for decades.

Let’s look at just two of these sensors and how they work together. The first sensor is the accelerometer, which measures the motion of the device on three axes. From the motion readings generated, the user’s activity and motion can be identified: whether they are sitting, standing, walking, jogging, or navigating a set of stairs.

Each type of movement generates a distinct set of readings, so it’s possible to infer state from something as imprecise and simple as a device accelerometer. Correlating just this single sensor with a bit more information can be quite powerful – if a user’s phone tends to show consistent motion until 11pm, at which point it remains completely stationary for 7-8 hours and then starts moving again in the morning, we can be reasonably confident that this is when the user is sleeping, despite not being able to measure sleep directly.
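As an illustration, this kind of inference can be sketched as a simple rule on how much the acceleration magnitude varies within a window of readings. The thresholds and sample values below are made up for illustration – a real tracker uses calibrated, trained models:

```python
import math

def activity_from_accelerometer(samples):
    """Classify activity from a window of (x, y, z) accelerometer readings in g.

    The variance thresholds are illustrative, not values from any real device.
    """
    # Magnitude of each reading, independent of device orientation.
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    # Variance of the magnitude is a crude measure of how vigorous the motion is.
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)

    if variance < 0.01:
        return "stationary"   # sitting or standing still
    elif variance < 0.5:
        return "walking"
    return "jogging"

still = [(0.0, 0.0, 1.0)] * 50   # device at rest: ~1 g straight down
print(activity_from_accelerometer(still))   # → stationary
```

A device that moves a little each second lands in the "walking" band; large swings in magnitude land in "jogging".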

Let’s combine the accelerometer with another sensor that measures heart rate. Now we can not only measure the amount of motion that the user is engaging in, but the amount of effort that motion requires. Let’s ask a series of questions and consider each sensor, and what we get from them by themselves, compared to where we can get by combining them. 

Using the data from these sensors: 

Are you walking up a hill?

An accelerometer alone can answer this question, because it measures motion state and can infer things like how quickly you are walking, climbing, or running.

A heart rate sensor only measures your heart rate, so it can’t tell you what activity is actually causing your heart rate to go up or down. 

Are you expending significant effort to walk up the hill? 

The accelerometer can “know” that you are walking, but it cannot tell you how much effort you are expending. 

The heart rate sensor can answer how much effort you are expending in any given moment, but it does not allow us to determine activity type very easily. 

But when used together, we can show that the heart rate increase is caused by climbing the hill. After the accelerometer shows the climbing has stopped, the heart rate monitor can measure how long it takes the heart rate to return to normal. So using both sensors together forms a much richer and more comprehensive picture than either the accelerometer or the heart rate monitor alone.
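As a sketch of this fusion, suppose we have one activity label per second (derived from the accelerometer) and one heart rate sample per second. The labels, readings, and the 10% recovery threshold below are all illustrative:

```python
def recovery_time(activity, heart_rate, resting_hr):
    """Seconds from the end of the last 'climbing' stretch until heart rate
    falls back within 10% of resting.  Both lists hold one sample per second.
    Returns None if the wearer never climbed or never recovered in the data."""
    if "climbing" not in activity:
        return None
    # Index of the last second spent climbing.
    stop = len(activity) - 1 - activity[::-1].index("climbing")
    for t in range(stop + 1, len(heart_rate)):
        if heart_rate[t] <= resting_hr * 1.1:
            return t - stop
    return None

# Illustrative data: a short climb followed by rest.
activity   = ["walking"] * 3 + ["climbing"] * 5 + ["resting"] * 6
heart_rate = [70, 72, 75, 95, 110, 120, 125, 128, 120, 105, 90, 78, 72, 70]
print(recovery_time(activity, heart_rate, resting_hr=68))   # → 5
```

Neither list alone can answer "how long did recovery from the climb take?" – the answer only exists once the two streams are joined.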

Let’s consider one more case, where we have the Fitbit’s GPS information. If we combine this information now, we can determine motion type from the accelerometer, level of effort from the heart rate monitor, and location change from the GPS. Now we know which hill the person climbed, when they climbed it, and exactly how much effort it took. A hiker with a GPS alone could provide some of this information, but as we can see above, adding low-cost sensors and correlating them has a powerful additive effect.

Combining multiple types of measurements and correlating them is something most of us are familiar with. When we go to the doctor’s office for an annual exam, there’s a series of tests that we will undergo. Our height and weight are recorded. Our skin temperature is measured, along with our heart rate and blood pressure. This provides a baseline for comparison with any changes we experience over time. These measurements are correlated and used to determine whether any unexpected shifts are happening. It’s by merging these measurements together that we can form a rich picture of our physical health and be confident in our analysis.

A recent evolution of these fitness trackers is the Whoop band, a competitor to the Fitbit. It measures similar things – sleep, heart rate, and motion – with a more targeted goal: delivering feedback to athletes on their current training readiness.

The Whoop band measures Heart Rate Variability (HRV) as an indicator of physical strain and recovery from it. HRV is a metric that looks at the amount of variation in the intervals between a person’s heartbeats. High HRV is considered desirable and indicates recovery; low HRV indicates stress or incomplete recovery.
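HRV is typically computed from the beat-to-beat (RR) intervals. One common time-domain metric is RMSSD, the root mean square of successive differences between intervals; the interval values below are invented for illustration:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive differences between
    beat-to-beat (RR) intervals, in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]   # invented RR intervals, ms
print(round(rmssd(rr), 1))   # → 14.4
```

A heart beating with metronome-like regularity would score near zero; larger beat-to-beat swings yield a higher RMSSD.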

By combining these low-cost sensors, the platform is able to calculate remarkably complex things about the wearer. It has enabled people to understand their bodies in new ways, from their sleep quality to the level of strain they experience while doing various exercises. After a workout, HRV will rise and the heart rate will steady – both signs that the body is recovering from the strain. From the degree of change, the app can determine where the user is in their recovery, and indicate whether to continue resting, engage in light exercise, or work out at full intensity.

We are now at a point where a $200 device can passively measure things about your fitness and derive insights that ten years ago would have only been available to professional athletes. An array of simple, low cost sensors in a consumer device can provide the information people need to make effective and safe training decisions. Fitbit and Whoop showed how combining multiple sensors gave users new insights into their physical health; in the future, it will be possible to combine various other sensors to give users insights into their cognitive health.

So when talking about neuroscience and the coming revolution in our ability to measure ourselves, our internal states, and our body, what sensors are involved with that? It starts with the same biometric sensors we’re using currently in Fitbits and Apple Watches – skin temperature, heart rate, motion tracking. To that we add a few more that have been used extensively in research but are recently arriving in the consumer/low-cost sensor space, namely eye tracking, electroencephalography (EEG), and galvanic skin response.

Eye Tracking

An eye tracker – whether a desktop sensor bar or one built into a VR headset – can tell where you are looking. Depending on the sensor, it will also be able to determine the size of your pupils, which expand and constrict with changes in “cognitive load.” (Note that pupil size can also reflect other things, such as a reaction to brightness or outright fear.)

Credit: Tobii Pro

Cognitive Load is the concept that the brain has finite resources and will allocate them as needed – in the same way that your heart and lungs have physical limits on their performance. Pupil size, which can be measured from an eye tracker, is an excellent measurement of cognitive load. Thus, an eye tracker knows what you are looking at, and can also determine how intensely you are processing that object. 
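A crude way to turn pupil size into a cognitive load number is to express dilation as a percent change from a resting baseline. This is only a sketch with invented values – real systems must correct for brightness and other confounds:

```python
def cognitive_load_index(pupil_mm, baseline_mm):
    """Pupil dilation as a percent change from a resting baseline – a
    simple, illustrative proxy for cognitive load from an eye tracker."""
    return (pupil_mm - baseline_mm) / baseline_mm * 100

# A pupil measured at 3.45 mm against a 3.0 mm resting baseline:
print(round(cognitive_load_index(3.45, 3.0), 1))   # → 15.0
```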

What can we do with this information? Imagine you’re a consumer goods company about to purchase a 30-second Super Bowl ad for $6 million. The commercials cost roughly $300,000 to produce. Why not produce three and find the one that consumers respond to the most? With an eye tracker, have a sample of consumers watch the different ad versions. Then run whichever commercial causes people to look at the product the most often and linger on it the longest.

What doesn’t the eye tracker tell us? Anything else, really. How the person feels about the product, what brain state it evokes, how intensely they feel that way. What if they’re staring at it because they really dislike it? While that is engagement, it is perhaps not the kind of engagement wanted. While the eye tracker has an amazing ability to tell us what someone is looking at and how hard their brain is working, it won’t give us a complete picture. It’s accurate but partial.

Credit: Tesfu Assefa

Galvanic Skin Response (GSR)

Skin conductivity is responsive to emotional state. As an emotional experience gets more intense, a person’s skin conducts electricity better. It can be hard to directly measure whether that emotional response is positive or negative (though that can often be easily guessed).

If we were to run a study with the eye tracker alone, we could use the metrics it provides to get a good understanding of which products seem to appeal to consumers visually. We can also gain an understanding of how people engage with products they find visually appealing, by quantifying the difference in fixation order, fixation length, and gaze sequence between the products. Using pupil size, we can infer how much work their brain did processing the object they were looking at. 

We cannot make statements about how the person felt emotionally about the object they were looking at using only the eye tracker. 

However, if we were to run a study with the GSR meter alone, we could use its metrics to get a good understanding of the emotional response people have to, say, a commercial. We don’t have a good way to distinguish which products generate which response, though we can partially infer it from the way the GSR response changes during the 30-second commercial and from whether the three commercials drive fundamentally different responses. Basically, we know how they feel, but not what they’re looking at as they feel it.

Now, if we combine these two sensors, there will be some initial difficulty reconciling this data. Eye trackers and GSR meters operate at different sample rates, but also at different timescales. These various physiological processes have different response times and different ways of presenting. 30 seconds is a lot of data for an eye tracker, but a small amount for a GSR meter. Combining sensors often starts with revisiting the experimental design to work with the strengths of both sensors. In this case, 30 seconds is still sufficient for doing this comparison, so we can proceed.
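One simple way to reconcile the differing rates is to average the high-rate stream into bins that match the low-rate stream. The 60 Hz and 4 Hz figures below are illustrative, not the specs of any particular device:

```python
def downsample(samples, src_hz, dst_hz):
    """Average a high-rate stream into bins matching a lower rate, so two
    sensors can be compared sample-for-sample.  For this simple sketch,
    src_hz must be an integer multiple of dst_hz."""
    factor = src_hz // dst_hz
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

# Illustrative rates: a 60 Hz eye tracker aligned to a 4 Hz GSR meter.
pupil_60hz = [3.0] * 60 + [3.6] * 60   # two seconds of pupil diameter, mm
pupil_4hz = downsample(pupil_60hz, src_hz=60, dst_hz=4)
print(len(pupil_4hz))   # → 8 (4 bins per second over 2 seconds)
```

After this step, each GSR sample has exactly one corresponding eye-tracking value, and the two streams can be compared directly.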

We need a new interpretation that includes both sensors. When we were just looking at the eye tracking, we defined “this person favors the desired object” as the success criteria. When we were just looking at the GSR, we defined “this person has the strongest emotional response” as the success criteria. Now we’re going to define “This person favors the desired object while having the strongest emotional response” as the success criteria.

From the eye tracker, we know what object a person is observing at any given moment. From the GSR meter, we know the intensity of the response. Now we can merge the two metrics, and see how much they agree with each other. If we find that the object the person looks at the most also generates the strongest emotional response, it would provide a very clear signal that they are interested in that object. 
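Merging the two aligned streams can be as simple as averaging the GSR reading over the moments the viewer was fixating each object. The object labels and readings below are invented for illustration:

```python
def emotional_response_by_object(gaze, gsr):
    """Average the GSR reading over the ticks spent fixating each object.
    Both lists are assumed pre-aligned: one entry per tick."""
    totals, counts = {}, {}
    for obj, response in zip(gaze, gsr):
        totals[obj] = totals.get(obj, 0.0) + response
        counts[obj] = counts.get(obj, 0) + 1
    return {obj: totals[obj] / counts[obj] for obj in totals}

gaze = ["product", "actor", "product", "logo", "product"]
gsr  = [0.8, 0.2, 0.9, 0.3, 0.7]   # invented skin conductance responses
print(emotional_response_by_object(gaze, gsr))
```

In this toy data, "product" draws both the most fixations and the strongest average response – the agreeing-signals case described above.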

What if we find that there isn’t a clear relationship between the amount of time spent looking at an object and the emotional response? In that case, we would need to come up with a ‘tie breaker’ to resolve the contradiction. We could use another sensor – by adding facial expression tracking to the mix, we would have a clear idea of whether the emotion was pleasurable or not, and resolve inconsistencies between the eye tracker and GSR meter.

In some cases, we will not be able to come up with a clear relationship between the sensors, even if they are painting complementary pictures of the data. In other cases, we will find a clear relationship that we can apply to future experiments.

Finally, let’s step through the two scenarios above:

  • We run the combined eye tracking and GSR study for our food manufacturer. 
  • We find that commercial #2 generates the most fixations on their product. 
  • We also find that commercial #2 generates the longest gaze linger time on their product. 
  • Finally, we find that commercial #2 generates the strongest emotional response overall, as well as during the times that the customer is looking at the product. 

In this case, we can expect the food manufacturer to air commercial #2 during the Super Bowl and hopefully receive a strong response. If so, we can build on our understanding and help companies find the most effective messaging for consumers.

Let’s consider the other scenario, where we run the combined eye tracking and GSR study and get slightly contradictory data. 

In this case, we find that:

  • Commercial #3 generates the strongest emotional response overall. 
  • Commercial #1 generates the most fixations and the longest linger time on their product. 

So, in this case, we would need to decide which sensor is considered more representative or useful. Do we trust the conclusion from the GSR sensor, which points to commercial #3? Do we trust the conclusion from the eye tracker, which points to commercial #1? Or do we add another sensor to help clarify?
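One explicit way to make that decision is to encode our trust in each sensor as a weight and score the commercials with a weighted sum. All names and numbers below are illustrative:

```python
def pick_commercial(metrics, weights):
    """Rank commercials by a weighted sum of normalized per-sensor scores.
    Adjusting the weights is one explicit way to decide which sensor to
    trust more when they disagree."""
    def score(name):
        return sum(weights[k] * metrics[name][k] for k in weights)
    return max(metrics, key=score)

# Normalized (0-1) scores for the contradictory scenario above.
metrics = {
    "commercial_1": {"fixations": 1.0, "linger": 1.0, "gsr": 0.6},
    "commercial_3": {"fixations": 0.5, "linger": 0.6, "gsr": 1.0},
}
# Trusting the eye tracker twice as much as the GSR meter:
print(pick_commercial(metrics, {"fixations": 2.0, "linger": 2.0, "gsr": 1.0}))   # → commercial_1
```

Flipping the weights to favor the GSR meter would pick commercial #3 instead – the weights make the judgment call visible rather than hiding it.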

As we can see, fusing sensor data is helpful and informative. When sensors agree, it lends significant credibility to the metrics they generate as they often involve quite different processes. When sensors disagree, it can be a great hint that there are discrepancies in the experimental design or the expected understanding of the phenomenon being studied.

This approach to combining data is called Data Triangulation, and allows for robust and reliable findings when combining data from multiple sensors and paradigms.

The explosion of low-cost sensors, internet-connected devices, cloud computing, and affordable wearables is bringing a revolution in our ability to collect data about ourselves, gain actionable insights, and make life better for human beings everywhere. By fusing the data from these sensors, powerful analytical insights become available, and by triangulating across different methods, we can be confident in our conclusions. Now let’s take a closer look at all the different kinds of sensors available, in our Sensor Fusion Guide.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Optionality & Infinite Games

All things in the universe seem to be constructed out of little parts coming together for greater possibility and greater choice. We call this “maximizing for optionality.”

That means the collective ability to do something more, and greater: to become more than the sum of the parts. Sub-quanta come together into atoms; atoms condense into stars and then planets.

From chemicals to organisms

Protocells begin to emerge from chemical processes, and gradually RNA and DNA begin to emerge. From this, we see the first single-celled organisms.  And they form biofilms to protect themselves, which they coordinate through radio-like signaling mechanisms.

What we’re seeing here is an exchange between multiple single-celled organisms. The ones on the outside are at greater risk of attack and damage. But these organisms also commensurately have a greater option for finding more food or more resources. 

Quid pro quo

Actually, there’s something called “metabolic codependence,” where inner and outer organisms yo-yo. The outer ones expand, but then they run out of an enzyme they need, so the inner ones then govern how fast the biofilm can expand. 

Inner organisms want to ensure an equitable trade of resources to feed them, so they provide the enzyme necessary to the outer ones, keeping them strong.

There is a quid pro quo, a trade between these different elements in the greater organism.

Along came mitochondria, and with them, eukaryotic cells, enabling cells to be warm-blooded, powered from within. This seems to be a fortuitous “accident” — mitochondria were once their own lifeform, which found greater negentropy by pairing up with a cell for their mutual benefit. 

Symbiotic plants

Then came organisms able to harness an even greater throughput of energy by coming together as clumps of what would later become plants and fungi. Plants can take in energy from the sun via chlorophyll, and fungi can break down matter that otherwise resists decay.

However, the plants had another coordination problem: bigger plants that grew very tall would sometimes block out the sun for the younger ones. So trees coordinate to leave some room in the canopy for light to reach the bottom, a phenomenon known as “tree crown shyness.” They also subsidize the growth of younger plants, sending them nutrients in symbiosis with fungi.

The “wood wide web”

Meanwhile, mycelium webs beneath the forest floor enable plants to swap messages and resources even between species, warning of attacks and exchanging resources for mutual strength.

This messaging is, essentially, based on the same fundamental principle as civilization — technologies to make a dangerous world more reliably safe, achieved through coordination of effort and resources. This “wood wide web” enables ecosystems to prosper mutually, again demonstrating the tendency of systems to maximize negentropy.

The birth of brains

Nature has found many answers to these coordination problems, which seem reasonably equitable. Maybe we could learn from that: some of these cells (possibly mycelium-style cells) began to exhibit extra agency. 

They began to connect to other cells that also specialize in this behavior. They became stronger over time, and eventually turned into brains and cortices. That’s why we see harmonics and action potentials akin to swarms of bees within the brain. 

Emergence of “free will”

These cells still preserved some of their agency — albeit now a collective agency within the greater mass. And out of this, our sense of “free will” likely manifests. 

We perceive ourselves as one individual. But in truth, we are a nation of ~86 billion tiny souls pulling us in all directions, our ego functioning as both ambassador and president, trying to keep a lid on it all. 

Come together, right now…

In many ways, this is also a little bit like cellular automata — a macroscale version of microscale neurons. This ability to come together enables ants, for example, to carry off the tasty treat. But it also enables formidable feats of coordination. 

Here we have a bolus of ants. Some are drowning, but swap with ones that aren’t drowning. By working together, they can survive. As with a biofilm, the ants most exposed get a little bit of extra help, or sometimes swap around with fresher ants. 

We next saw the emergence of cold-blooded creatures, such as reptiles. They were long-lived and hardy but hampered by an inability to operate at all temperatures. They had to bask for a long time to gather enough energy to go into the world. 

More complex, responsive brains

The advent of endothermy, warm-blooded creatures, enabled sustained activity to support a more complex and responsive brain at the expense of greater nutritional intake.

The mammalian brain also supports emotions far more sophisticated than the simple Fight, Flight, Feed, and Fornicate impulses of the reptilian brain. Mammals bond with each other in ways that reptiles cannot. Mammals are driven to protect their offspring and friends, encouraging coordination for hunting and watchkeeping. 

This ability for mammals to bond with each other enabled our ancestors to bond with dogs and later cats. By hunting together, we gained more resources for our big brains. This bonding allowed us to divide our labor so efficiently that some were now able to devote themselves exclusively to thinking.

They created new inventions, narratives, and ways of conceptualizing the world — new ways of solving problems and bringing people together in shared meaning.

Linking up

That sense of meaning and the trust gained by common rules of engagement (morals) enabled us to have tolerance for lots of strangers, and thus tribes became nations. Today, we have planetary information and energy networks linking all of us together.


Along with being able to split the load of work with other humans and other species, we harnessed the secret of fire, developing an obligate dependence upon it. 

Fire enabled us to process inedible foods into something safe and nutritious. As our ability to work together in massively distributed, highly specialized labor chains improved, we were able to harness further forms of fire — coal, steam, uranium, to keep us warm. Now, our entire city colonies are heat islands.

So are there innate laws of ethics?

It may be possible for a system trained on a lot of information to pinpoint this seeming truth: the “will” of the universe is trying to connect things for their mutual negative entropy (slowing down the gradual decline into disorder), allowing for mutual improved optionality.

So if different organisms, from plants to ants to human beings, have come together to solve these problems, does it follow that there are innate constructive laws of ethics?

What if an AI could prove definitively that there are such constructive laws for ethics, pinpointing an ethical maxim of “good”? A self-evident, evolutionary epiphany that one can’t unsee once realizing it. 

Something like: 

(a) Maximizing optionality, which means minimizing entropy, or maximizing negative entropy, through collective benefit that is … 

(b) by invitation — not by coercion, not by co-opting, but through equitable mutual benefit. We may even define love in the same way. M. Scott Peck defined love as “The will to extend oneself for the purpose of nurturing our own or another’s spiritual growth.” That sounds pretty aligned to me.

In the immortal words of Eden Ahbez, “the greatest thing you’ll ever learn is just to love and be loved in return.”

Perhaps that loving sentiment is more applicable than we knew, expressing “the will of the universe” at all scales and across all time.


BANI and the Participatory Panopticon

I’ve been a futurist for over 25 years, and I’ve learned that an important part of doing professional foresight work is the acceptance that much of what we imagine and speculate upon will be entirely wrong. This is fine; as I’ve said in a number of talks, part of the goal is to be “usefully wrong,” able to gather insights and make connections from otherwise off-target forecasts. That said, it’s nearly always pleasant to have a forecast map closely to real world developments (“nearly,” because some scenarios are very dire, and I desperately want to be wrong about them). 

Among the forecasts and scenarios I’ve created and made public over the years, the first one to draw real attention also turned out to be one of the most prescient: the “Participatory Panopticon.” And it turns out that the Participatory Panopticon is a valuable example of thinking through a BANI lens.

The core idea of the Participatory Panopticon is that portable technologies that combine always-on network connections, high-quality cameras, and large amounts of cheap data storage will radically change our behavior, our politics, and, in so many ways, our lives. We would be surrounded by constant observation and recording of our actions (the Panopticon part) – but not just by authorities surveilling us, also by our peers, both friends and strangers (the Participatory part). We’d be able to pull up archived recordings or put the video online live, and we all would be doing it.

This might sound everyday in 2022, but I first started writing about the concept in 2003, and started speaking in public about it in 2005. Mobile phones with cameras (then referred to as “cameraphones”) had existed for a couple of years, but they were slow, very limited in what they could capture, and produced terrible pictures. At best they were on 2G “Edge” networks, so uploading even a low-resolution image could take minutes. None of what I talked about in the Participatory Panopticon would be an obvious implication of the technology of the day. There were hints, and broadly-connected examples, but nobody else had put the pieces together, at least for a public audience.

Let me be quick to note that much of what I wrote and thought about the Participatory Panopticon was off-target. I had an odd focus on always-on recording (underestimating how intrusive that would feel) and wearable tech (still a few years off, even now). But the core of the concept resonated with many: being able to document the world around us, in real-time, through a vast number of personal devices, has enormous consequences for human interaction and politics. I’ve been told that it influenced the strategic direction of human rights organizations, and helped to push the concept of “sousveillance” into more common (albeit still limited) usage.

I say all of this not as a flex, but as a way of underscoring that this is a topic I have been thinking about for a long time. Even though it’s a nearly 20-year-old idea, it still has salience. In my view, the Participatory Panopticon offers a useful illustration of what BANI can mean.

(As a refresher: BANI stands for Brittle, Anxious, Nonlinear, and Incomprehensible, and is a framework for discussing and understanding the present chaotic environment, just as VUCA — Volatile, Uncertain, Complex, and Ambiguous — was for understanding the nature of the post-Cold War/early Internet era.)

Credit: Tesfu Assefa

Brittle

The “B” in BANI talks about changes that are sudden, surprising, and hard to ignore. Brittle systems seem strong but break — shatter, even — under a sufficient amount or sufficient kind of pressure. I would say that the way the Participatory Panopticon has engendered a brittle chaos is in its intersection with law enforcement. Pathological or harmful policing norms and habits that had persisted for decades remained hidden, allowing those behaviors to fester. We would have occasional stories or scandals, but it was easy for mainstream institutions to assume that these were outliers, since these episodes seemed so infrequent. 

However, the technologies of the Participatory Panopticon upended all of this, and delivered a massive blow to the ability of mainstream institutions to simply ignore the prevalence of police violence.

We’ve seen a variety of responses to the digital shredding of the cloak over law enforcement behavior. Although attempts to change laws around police accountability have been largely unsuccessful, cultural norms about the legitimate and ethical use of police power have evolved considerably. The biggest change has been the adoption of police body cameras. This was a direct counter to the proliferation of citizen cameras; that the body cameras would add to the Participatory Panopticon was likely unintentional. However, the ability of officers to shut off body cameras, the tendency for the cameras to “fail” just when needed most, and the frequency with which the recorded content is held as “private” by many law enforcement agencies have only fed the growth of citizen documentation of police behavior.

Anxious

The “A” in BANI points to changes that are often confusing, deceptive, and emotionally painful. Anxiety-inducing situations and conditions may result in dilemmas or problems without useful solutions, or with unavoidably bad outcomes. This is probably the most visible manifestation of BANI in the Participatory Panopticon world, as we’ve become hyper-conscious of the constant observation made possible by these tools. More importantly, many of us are relentlessly judged for our appearances and our behaviors, even if that behavior happened years in the past. 

This, in turn, has prompted problematic choices like “face tuning,” allowing for changes not just in facial structure and skin quality, but even things like ethnicity, body form, and apparent age. “Social network anxiety” is a frequent subject of mass-media attention, but in the vast majority of cases, the aspect of social media triggering anxiety is visual — photos and videos.

Arguably, many of the fears and complications around privacy can be connected here as well. The original 19th century Panopticon was a design for a prison where all prisoners could be under constant watch. That there’s now a participatory element doesn’t decrease the oppressive nature of being under permanent surveillance. Moreover, the nature of data collection in 2022 means that the surveillance can be through metadata, even through inferences known as “probabilistic identifiers,” the creation of targeted personal information using machine learning systems on “anonymized” data. In other words, the cameras that don’t see you can be just as informative as the cameras that do.

Nonlinear

The “N” in BANI refers to changes that are disproportionate, surprising, and counter-intuitive. Input and output may not match in speed or scale. For the most part, the nonlinear elements of the Participatory Panopticon concern the exponential effects of network connections. Most of us are familiar with this; the utility of social media largely depends on how many of the people we want to connect with can be found on a given platform. The size and spread of a network greatly impacts one’s ability to spread a video or image around.

For many of us, this may not seem terribly chaotic in its day-to-day existence, as it’s a familiar phenomenon. The disruption (and ultimately the chaos) comes from the ability of networks like these to enable a swarm of attacks (abuse, doxing, threats, SWATting, etc.) on a particular target. 

Seemingly out of nowhere, tens or hundreds of thousands of people attack a person they’ve been told has done something wrong. In reality, this person may not even be connected to the “actual” intended target. Although such mis-targeting can arise due to error (such as being confused for someone with the same name), in too many cases the driver is malice (such as being made a scapegoat for a particular, tangentially-related, event). Even if the attacks go after the “right” target, the impact of social swarm abuse on the psyche of an individual can be devastating.

Incomprehensible

The “I” in BANI looks at changes that might be senseless, ridiculous, or unthinkable. Changes where much of the process is opaque, rendering it nearly impossible to truly understand why an outcome has transpired. The incomprehensible aspects of the Participatory Panopticon are non-obvious, however. The technological aspects of the phenomenon are well-understood, and the social motivation – “why is this happening?” – can often be quite blatant: aggrandizement, narcissism, politics, and so forth. What’s incomprehensible about the Participatory Panopticon is, in my view, just what can be done to limit or control its effects.

One of the cornerstone arguments I made in the original public presentation of the Participatory Panopticon idea was that this situation is the emergent result of myriad completely reasonable and desirable options. I still believe that this is the case; each of the separate elements of a Participatory Panopticon (such as the ability to stream video or instantly map locations) have enormous utility. Moreover, the social (and political) value of a mass capacity to keep an eye on those with institutional, economic, or social power has become critical. This compounds the inability to take useful steps to mitigate or eliminate its harm.

Admittedly, this last element of BANI in the Participatory Panopticon isn’t as direct or clear as the others. We could instead simply argue that specific drivers of a chaotic world need not check each of the four BANI boxes; just being a catalyst for increased brittleness in social systems or anxiety among citizens may be enough.

The Participatory Panopticon has been a concept woven into my thinking for nearly two decades, and to me represents one of the clearest examples of the way in which technological developments can have enormous unintended (and unexpected) impacts on our societies and cultures — which then, in turn, shape the directions taken by developers of new technologies. 

BANI, then, serves as a lens through which to examine these impacts, letting us tease out the different ways in which a radical shift in technology and society can lead to a chaotic global paradigm. We can then look at ways in which responses to one arena of BANI chaos — other forms of brittleness, for example — may help us respond to the chaos engendered by the Participatory Panopticon.

Giving a name to a phenomenon, whether the Participatory Panopticon or BANI itself, is a way of giving structure to our understanding of the changes in our world. As I said above, it’s a lens; it’s a tool to focus and clarify our view of particular aspects of the changes swirling around us. It’s not the only tool we have, by any means. But if giving names and structure to the increasing maelstrom of chaos we face helps us see a path through, the value of that tool should not be underestimated.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The Mindplex Awakens!

So what the f**k is a “Mindplex”?

Well, firstly — as you can see — Mindplex is a fun/funky new future-oriented magazine and media site that some of my crazy colleagues and I are in the process of launching as a spinoff of SingularityNET, our AI-meets-blockchain project.

Read, view, comment, contribute, enjoy! Expand your mind and help expand ours and everybody else’s. …

Enter the Mindplex …

Yes, yes, an online magazine. And we’re aiming for an insanely great one — news articles, stories, images, videos and more … comments on these creations, comments on comments, and so forth. 

We have a great editorial team, led by Senior Editor Amara Angelica, who edited Kurzweilai.net for 18 years and helped Ray Kurzweil create The Singularity Is Near and other works. AGI, life extension, nanotech, blockchain, robotics, consciousness, you name it. …

New open-source tools

But we’re also aiming at something more: To use the zine as a platform for gradually rolling out and experimenting with a series of exciting new tools, leveraging blockchain or AI or both.

Most of these tools will also be made available open-source for others to use, and offered via decentralized SingularityNET hosting so that other media sites can access them via API.

The goal is to make Mindplex a showcase for new processes and dynamics, with the capability of shifting the online mediaverse as a whole from its current convoluted and largely counterproductive condition into a majorly amazing force for both human and Singularitarian good.

A reputation system

One of the first things we’ll be experimenting with is a complex-systems-theory-based reputation system that leverages crypto tokens to manage creator and user reputations. 

Interaction between reputation and rating is important here, as is complete transparency to users regarding how and why the system’s algorithmic judgments are being made.   

Existing media sites like Reddit and Steemit are object lessons in how the nitty-gritty of reputation and rating algorithms can very concretely impact the nature of content in an online ecosystem. It’s essential that — with the active involvement of the community — we get this at least somewhat right.

New AI-based experiments

Beyond this, you can expect an ongoing flow of new tech/human experiments. Part of the fun will be the surprise of learning and reinventing as we go, along with the community. But to mention just a few things we’ve been playing with:

  • AI dialogue systems for chatting with readers and answering questions about media posted on the site.
  • AI-based interactive content-creation tools — focused initially on creating interactive science fiction that explores the potential implications and uses of new technologies described and discussed on the site.
  • AI-generated memes and comics.
  • Collaboration with SophiaDAO, another amazing SingularityNET ecosystem project, on immersive virtual-world scenarios related to Mindplex content.
  • Leveraging TWIN Protocol technology, allowing users to create their own digital twins and allowing others to experience their twins’ reactions to Mindplex content.
  • Unique mathematical tools that allow the user to create visual and music art, based on the user’s own DNA sequence.

Variations on the Theme of “Mindplex”

But why are we launching such a site at this time … what’s the underlying philosophy and mission?

Well, I’m glad you asked…

“Mindplex” is a term I introduced in 2003 or so when I was looking for a word to describe a mind that’s more unified than human society, but less unified than an individual human mind. 

Imagine a group of best friends in the year 2050 who are all supplied with brain-chip implants and share their every thought and feeling with each other via “Wi-Fi telepathy.” 

Such a bestie group will be almost — but not quite — a sort of “group mind” … something like a Borg mind, but with more loose/wild self-organization and without the fascist flavor.

Another kind of mindplex

My neuroscientist collaborator Gabriel Axel and I picked up the term again in 2019 when we were writing a paper on non-ordinary states of consciousness — including states where multiple people feel fused into a sort of mutually experiencing whole.

Fast forward to 2021: I found myself collaborating with an amazing team to create a tokenomics-and-AI-fueled futurist media project — and I realized that what we were aiming to curate was precisely another form of mindplex.   

Could we use AI and blockchain to create a self-organizing body of content, intelligence and understanding? One centered around the theme of transformational technology and the human and transhuman future? 

One that would include insights, respect, and reward for the autonomy of the individual people contributing, but also display an emergent vision and comprehension beyond the scope of any one human individual?

In a way, this is what any subculture does. But historical human cultures didn’t have access to the array of technologies at our present disposal, which have the potential to make communication and information dynamics more intense and emergence more emphatic.

The mind is only free for those who own their online selves

The Internet has been an unabashedly awesome thing for software programs and robots. For human beings, it’s been a mixed bag — with huge positives and chilling negatives, and a whole lot of aspects that are just plain confusing. This has been true for the impact of the internet on so many aspects of human life, and definitely in the areas of media and social interaction.

A. J. Liebling’s statement, “Freedom of the press is guaranteed only to those who own one,” is no longer so relevant. Instead, we have a situation where “the ability to reliably direct people’s attention is only available to those with a lot of money to feed AI marketing engines.” 

Almost anyone in the developed world — and an increasing subset of the developing world — can now post their thoughts, images, sounds or videos online … and their commentary on the works of others. But getting other folks to notice your productions or comments is a whole different matter.   

Getting paid to produce high-quality content is also increasingly difficult as the pool of freely offered content increases in size, depth and diversity. More of the money in the mediasphere goes to those offering services that sell advertisements to readers of said freely offered content.

Finding like-minded folks — or at least like-opinioned folks regarding particular classes of issues — gets easier and easier as the technology for clustering people into similarity groups gets more and more refined, due to its utility for driving targeted marketing. 

But finding interesting people whose mix of similar and divergent perspectives can drive one toward fundamental growth is much harder, and it hasn’t been a serious focus of technology development because it’s more difficult to monetize.

Tribal mind lifeforms

The proto-mindplexes that are most habitually forming on today’s internet are often troublingly tribal in nature — like narrow-minded, collective-mind lifeforms creating their own insular information metabolisms on the scaffolding of social media filter bubbles. 

The rare individuals able to earn a living creating media mostly do so by playing, in a highly focused way, to the most differentiating beliefs of particular online tribes.

Today’s slogan should perhaps be more like “the mind is only free for those who own their online selves” — i.e., they own their data, metadata and online persona. 

Because if you don’t own and control those aspects of your online presence, your mind is being constantly influenced and controlled by the information shown to you by whoever does have this ownership and control.   

Decentralized, democratized media

Of course, “freedom” in an absolute sense is a philosophical conundrum that mixes poorly with physical and social realities. But what we’re talking about here is practical causal agency over what one sees, does and thinks. 

Increasingly, this causal agency is being handed over to large corporations with their own financial growth agendas, rather than maintained individually or thoughtfully granted to mindplexes or other social groups we have intentionally entered into and co-formed.

Case in point: SingularityNET was formed to decentralize and democratize AI. And this provides a big piece of what’s needed to decentralize, democratize, and end Big Tech hegemony over media (social, anti-social and otherwise). But it’s not the whole story.    

Dynamic intelligent mindplexes

Decentralized democratized media needs decentralized, democratized AI, deployed appropriately alongside other media-focused tools and mechanisms. This is how we can create online media that accentuates the positive and palliates the negative aspects of the Internet we experience today. 

And it’s how we can move forward to foster the creation of dynamic, intelligent mindplexes that transcend tribalism, help us grow, connect and expand our consciousness, and self-transform in unexpectedly positive and glorious ways.


A Taxonomy of Chaos

Being a futurist is quite a bit like being a historian. History isn’t just a listing of events; it’s a discipline that seeks to figure out why this or that happened, what led in that direction, and what choices we had along the way. The same thing can be said for foresight work — the goal isn’t to list future events, it’s to get a better understanding of why we’re getting various outcomes, and how our choices can lead us in very different directions. I think of futurism as Anticipatory History.

The success of this process depends upon our ability to spot patterns; as Theodor Reik said, “It has been said that history repeats itself. This is perhaps not quite correct; it merely rhymes.” But what happens when the rhyme scheme breaks? That is, what happens when our expectations about patterns start to fail?

I’ve been working as a futurist for over 25 years, so I’ve spent quite a bit of time observing (and learning from) global patterns. About five years ago, I started to get a sense that patterns weren’t repeating (or rhyming) as readily, that big global systems that had once been fairly consistent had become far less so. Looking back, I suspect that the primary catalyst was the accelerating impact of global climate disruption.

Big economic, political, environmental, technological, even sociocultural systems seemed like they were starting to fail, or at least become much less reliable. I wasn’t alone in these observations.

In 2018, I started working on a framework for understanding the scale of what we’re facing as once seemingly-reliable global systems started to break down. So many of the people I spoke with over the past half-decade had started to express great surprise and confusion about what was going on around the world; it quickly became apparent that this wasn’t just the standard “the future is an unknown” that we all live with, it was an increasingly desperate sense that things we thought we understood were spinning wildly out of control. We couldn’t as reliably make reasonable judgments about what was to come in the weeks and months ahead. Systems that appeared strong were suddenly on the verge of collapse; processes that were becoming increasingly critical to our daily lives were becoming less and less understandable.

Creating a framework like this wasn’t just a random idea; I was trying to bring a conceptual tool already in the foresight analyst’s kit into the 21st century. The world of strategic consulting had long relied on a basic framework to offer language and structure to a changing world. It was the “VUCA” model, which was invented in the late 1980s at the US Army War College and spread quickly throughout the world of consulting. VUCA is an acronym comprising four descriptive terms: Volatile, Uncertain, Complex, and Ambiguous. For a world moving out of the Cold War era and into the Internet era, these terms felt right. They perfectly captured the kinds of disruptions that were starting to happen more often, especially as the global war on terror fired up.

But the world has rocketed past merely being “uncertain” or “volatile.” At this point, VUCA no longer captures disruptions to the norm, it is the norm. But if a VUCA world is all around us all the time, the term has lost its utility as a way of labeling discontinuities in how our world functions. Something new was needed.

In late 2018, I presented the “BANI” framework for the first time. BANI parallels VUCA, in that it’s a basic acronym for a set of descriptive terms. In this case, however, the terms are as subtle as a brick. BANI comes from: Brittle; Anxious; Nonlinear; and Incomprehensible. These four concepts let us articulate the ways in which the world seems to be falling apart. It’s a taxonomy of chaos.

Credit: Tesfu Assefa

The quick summary:

B in BANI is for Brittle. Systems that are brittle can appear strong, even work well, until they hit a stress level that causes collapse. Brittle does not bend, it breaks. Very often the breaking point isn’t visible to most people in advance, and comes as a surprise. Sometimes this is because the weaknesses are hidden or camouflaged; sometimes this is because the stress that causes the break is external and unexpected. The example I like to use is guano, fecal matter from birds and bats. In the 19th century, its use as a fertilizer gave the world its first taste of an agricultural revolution. It was so important that countries fought wars over ownership of guano-covered islands. And in a few short years late in the century, that all disappeared after the development of the Haber-Bosch process for making artificial fertilizer. Something that was absolutely critical became worthless seemingly overnight.

Brittle chaos is sudden, surprising, and hard to ignore.

A is for Anxious (or Anxiety-inducing). Systems that trigger anxiety are those that pose dilemmas or problems without useful solutions, or include irreversible choices that have unexpectedly bad outcomes. Anxious systems make trust difficult, even impossible. Things that had been well-understood suddenly seem alien or false. My usual example of an anxiety-inducing system is malinformation, the term that encompasses intentional misinformation, errors, insufficient information, and confusion. Noise in the signal. The last half-decade has been full of this, and we’ve seen some especially powerful uses in recent months and years. Malinformation often relies on technological tools, but the importance doesn’t come from the technology, it comes from the human response. In many, if not most, cases, malinformation isn’t used to make a target believe something that is false, it’s to make a target doubt the validity of something that is true.

Anxious chaos is confusing, deceptive, and emotionally painful.

N is for Nonlinear. Nonlinear systems are those where, most simply, input and output are disproportionate. Cause and effect don’t match in scale or speed. Audio feedback is a familiar example of a nonlinear phenomenon; the spread of pandemic disease is another. Nonlinear in the BANI usage refers to systems that see changes that don’t match expectations built on familiar reality. They’re common in nature, although often met by countervailing forces that keep the nonlinear aspects in check. The biggest (and most important) example of a nonlinear phenomenon is climate disruption, more precisely the hysteretic aspect of climate change. “Hysteretic” means there is a long lag between cause and effect, enough so that the connections are often functionally invisible. The connection between atmospheric greenhouse inputs and temperature/heat-related results is slow — potentially on the order of 20-30 years, although some more recent evidence suggests that it might be closer to 10 years. Either way, the seeming disconnect between cause and effect means that (a) what we’re seeing now is the result of carbon emissions from one or two decades ago, and (b) whatever changes we make to cut carbon emissions won’t have any visible benefits for decades.

Nonlinear chaos is disproportionate, surprising, and counter-intuitive.
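To make the hysteresis point concrete, here is a minimal toy sketch in Python. The 25-year lag and the cause/effect series are entirely illustrative assumptions (this is a lag model in miniature, not a climate model):

```python
# Toy lag model, purely illustrative: the "effect" in year t simply mirrors
# the "cause" from LAG years earlier. The 25-year figure is an assumption
# drawn from the 20-30 year range discussed above.
LAG = 25

def delayed_response(cause_by_year, lag=LAG):
    """Each year's effect equals the cause from `lag` years before (clamped at year 0)."""
    return [cause_by_year[max(0, t - lag)] for t in range(len(cause_by_year))]

# Cause (say, emissions) runs high for 30 years, then stops entirely for 30 more.
cause = [1.0] * 30 + [0.0] * 30
effect = delayed_response(cause)

print(effect[35])  # 1.0: five years after the cause stopped, the effect persists
print(effect[55])  # 0.0: only at year 30 + LAG does the effect finally respond
```

Cutting the cause at year 30 changes nothing visible until year 55: points (a) and (b) above in miniature, with the connection between input and output functionally invisible to anyone watching only the output.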

Finally, the I is for Incomprehensible. I got the most pushback on this one — can we really say that something is truly incomprehensible? But what I mean here is that, with an incomprehensible phenomenon, the process leading up to the outcome is thoroughly opaque, with difficult or incomplete explanations. The decision-making of machine learning systems gives us a current example. Increasingly, it’s difficult at best to explain how a deep learning system reaches its conclusions. The consequence can often be that sophisticated systems can make strange or inexplicable errors (such as a self-driving vehicle repeatedly mistaking the moon for a traffic signal). Incomprehensible can also mean behavior outside the realm of rational understanding.

Incomprehensible chaos is ridiculous, senseless, even unthinkable.

Credit: Tesfu Assefa

When I created BANI, I did so largely as a way for me to visualize the diverse ways in which global systems were failing. But it turns out that there’s hunger around the world for just this kind of framework. Over the past year, I’ve given a dozen or more talks and presentations on BANI for audiences everywhere from Amazonia to Zürich (people in Brazil seem especially interested); in the coming months, I’ll be speaking about BANI for audiences in places like Sri Lanka.

But it’s important not to overpromise what the BANI framework can do. Thinking in BANI terms won’t give you a new leadership strategy or business model. It won’t tell you how to better make profit amidst chaos. When I talk about what can be done to withstand the chaos of a BANI world, I go to human elements and behaviors like improvisation, intuition, and empathy. The chaos of BANI doesn’t come from changes in a geophysical system or some such, it comes from a human inability to fully understand what to do when pattern-seeking and familiar explanations no longer work.

Even if BANI is only descriptive, not prescriptive, we’ve long known that giving a name to something helps to reify it. People had been groping for a way to articulate their sense of chaos, and BANI provides a common, understandable language for doing so. BANI helps to give structure to our experience of the chaos swirling around us, and in doing so, helps us to consider more fully what to do next.


The Singularity: Untangling the Confusion

“The Singularity” — the anticipated creation of Artificial General Intelligence (AGI) — could be the most important concept in the history of humanity. It’s unfortunate, therefore, that the concept is subject to considerable confusion.

Four different ideas interwoven, all accelerating in the same direction toward an unclear outcome (Credit: David Wood)

The first confusion with “the Singularity” is that the phrase is used in several different ways. As a result, it’s easy to become distracted.

Four definitions

For example, consider Singularity University (SingU), which has been offering courses since 2008 with themes such as “Harness the power of exponential technology” and “Leverage exponential technologies to solve global grand challenges.”

For SingU, “Singularity” is basically synonymous with the rapid disruption caused when a new technology, such as digital photography, becomes more useful than previous solutions, such as analog photography. What makes these disruptions hard to anticipate is the exponential growth in the capabilities of the technologies involved.

A period of slow growth, in which progress lags behind expectations of enthusiasts, transforms into a period of fast growth, in which most observers complain “why did no one warn us this was coming?”

Human life “irreversibly transformed”

A second usage of the term “the Singularity” moves beyond talk of individual disruptions — singularities in particular areas of life. Instead, it anticipates a disruption in all aspects of human life. Here’s how futurist Ray Kurzweil introduces the term in his 2005 book The Singularity Is Near:

What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed… This epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself…

The key idea underlying the impending Singularity is that the pace of change of our human-created technology is accelerating and its powers are expanding at an exponential pace.

The nature of that “irreversible transformation” is clarified in the subtitle of the book: When Humans Transcend Biology. We humans will no longer be primarily biological, aided by technology. After that singularity, we’ll be primarily technological, with, perhaps, some biological aspects.

Superintelligent AIs

A third usage of “the Singularity” foresees a different kind of transformation. Rather than humans being the most intelligent creatures on the planet, we’ll fall into second place behind superintelligent AIs. Just as the fate of species such as gorillas and dolphins currently depends on actions by humans, the fate of humans, after the Singularity, will depend on actions by AIs.

Such a takeover was foreseen as long ago as 1951 by pioneering computer scientist Alan Turing:

My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely…

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.

Finally, consider what was on the mind of Vernor Vinge, a professor of computer science and mathematics, and also the author of a series of well-regarded science fiction novels, when he introduced the term “Singularity” in an essay in Omni in 1983. Vinge was worried about the unforeseeability of future events:

There is a stone wall set across any clear view of our future, and it’s not very far down the road. Something drastic happens to a species when it reaches our stage of evolutionary development — at least, that’s one explanation for why the universe seems so empty of other intelligence. Physical catastrophe (nuclear war, biological pestilence, Malthusian doom) could account for this emptiness, but nothing makes the future of any species so unknowable as technical progress itself…

We are at the point of accelerating the evolution of intelligence itself. The exact means of accomplishing this phenomenon cannot yet be predicted — and is not important. Whether our work is cast in silicon or DNA will have little effect on the ultimate results. The evolution of human intelligence took millions of years. We will devise an equivalent advance in a fraction of that time. We will soon create intelligences greater than our own.

A Singularity that “passes far beyond our understanding”

This is when Vinge introduces his version of the concept of singularity:

When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the centre of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science fiction writers. It makes realistic extrapolation to an interstellar future impossible.

If creatures (whether organic or inorganic) attain levels of general intelligence far in excess of present-day humans, what kinds of goals and purposes will occupy these vast brains? It’s unlikely that their motivations will be just the same as our own present goals and purposes. Instead, the immense scale of these new minds will likely prove alien to our comprehension. They might appear as unfathomable to us as human preoccupations appear to the dogs and cats and other animals that observe us from time to time.

Credit: David Wood

AI, AGI, and ASI

Before going further, let’s quickly contrast today’s AI with the envisioned future superintelligence.

Existing AI systems typically have powerful capabilities in narrow contexts, such as route-planning, processing mortgage and loan applications, predicting properties of molecules, playing various games of skill, buying and selling shares, recognizing images, and translating speech.

But in all these cases, the AIs involved have incomplete knowledge of the full complexity of how humans interact in the real world. The AI can fail when the real world introduces factors or situations that were not part of the data set of examples with which the AI was trained.

In contrast, humans in the same circumstance would be able to rely on capacities such as “common sense”, “general knowledge,” and intuition or “gut feel”, to reach a better decision.

An AI with general intelligence

However, a future AGI — an AI with general intelligence — would have as much common sense, intuition, and general knowledge as any human. An AGI would be at least as good as humans at reacting to unexpected developments. That AGI would be able to pursue pre-specified goals as competently as (but much more efficiently than) a human, even in the kind of complex environments which would cause today’s AIs to stumble.

Whatever goal is input to an AGI, it is likely to reason to itself that it will be more likely to achieve that goal if it has more resources at its disposal and if its own thinking capabilities are further improved. What happens next may well be as described by I.J. Good, a long-time colleague of Alan Turing:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Evolving into artificial superintelligence

In other words, not long after humans manage to create an AGI, the AGI is likely to evolve itself into an ASI – an artificial superintelligence that far exceeds human powers.

In case the idea of an AI redesigning itself without any human involvement seems far-fetched, consider a slightly different possibility: Humans will still be part of that design process, at least in the initial phases. That’s already the case today, when humans use one generation of AI tools to help design a new generation of improved AI tools before going on to repeat the process.

I.J. Good foresaw that too. This is from a lecture he gave at IBM in New York in 1959:

Once a machine is designed that is good enough… it can be put to work designing an even better machine…

There will only be a very short transition period between having no very good machine and having a great many exceedingly good ones.

At this point an “explosion” will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machine.

Singularity timescales: exponential computational growth

One additional twist to the concept of Singularity needs to be emphasized. It’s not just that, as Vernor Vinge stressed, the consequences of passing the point of Singularity are deeply unpredictable. It’s that the timing of reaching the point of Singularity is inherently unpredictable too. That brings us to what can be called the second confusion with “The Singularity.”

It’s sometimes suggested, contrary to what I just said, that a reasonable estimate of the date of the Singularity can be obtained by extrapolating the growth of the hardware power of computing systems. The idea is to start with an estimate for the computing power of the human brain. That estimate involves the number of neurons in the brain.

Next, consider the number of transistors included in the central processing unit of a computer that can be purchased for, say, $1,000. In broad terms, that number has been rising exponentially since the 1960s. This phenomenon is part of what is called “Moore’s Law.”

Extrapolate that trend forward, and it can be argued that such a computer would match, by around 2045, the capability not just of a single human brain, but the capabilities of all human brains added together.
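The arithmetic behind such an extrapolation is easy to sketch. Every number below is an illustrative assumption, not a measured value: a commonly cited rough figure of about 10^16 operations per second for one human brain, 10^26 for all brains combined, and a hypothetical $1,000-of-compute baseline that doubles every two years:

```python
# Naive exponential extrapolation of $1,000 of compute. All constants are
# illustrative assumptions for this sketch, not measured figures.
import math

OPS_PER_BRAIN = 1e16    # assumed ops/sec of one human brain
OPS_ALL_BRAINS = 1e26   # assumed ops/sec of all human brains combined
OPS_2020 = 1e13         # assumed ops/sec per $1,000 in 2020
DOUBLING_YEARS = 2.0    # assumed doubling period

def year_reaching(target_ops, start_year=2020):
    """Year when $1,000 of compute reaches target_ops, under pure doubling."""
    doublings = math.log2(target_ops / OPS_2020)
    return start_year + doublings * DOUBLING_YEARS

print(round(year_reaching(OPS_PER_BRAIN)))    # one brain -> 2040
print(round(year_reaching(OPS_ALL_BRAINS)))   # all brains combined -> 2106
```

With these made-up inputs, the crossover lands at roughly 2040 for one brain and 2106 for all brains combined; nudge the doubling period or the brain estimate and the dates shift by decades. That sensitivity to assumptions is part of why such extrapolations are unreliable for detailed forecasting.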

This argument is useful to raise public awareness of the possibility of the Singularity. But there are four flaws with using this line of thinking for any detailed forecasting:

  1. Individual transistors are still becoming smaller, but the rate of shrinkage has slowed down in recent years.
  2. The power of a computing system depends, critically, not just on its hardware, but on its software. Breakthroughs in software defy any simple exponential curve.
  3. Sometimes a single breakthrough in technology will unleash much wider progress than was expected. Consider the breakthroughs of deep learning neural networks, c. 2012.
  4. Ongoing technological progress depends on society as a whole supplying a sufficiently stable and supportive environment. That’s something else which can vary unpredictably.

A statistical estimate

Instead of pointing to any individual date and giving a firm prediction that the Singularity will definitely have arrived by then, it’s far preferable to give a statistical estimate of the likelihood of the Singularity arriving by that date. However, given the uncertainties involved, even these estimates are fraught with difficulty.

The biggest uncertainty is in estimating how close we are to understanding the way common sense and general knowledge arise in the human brain. Some observers suggest that we might need a dozen conceptual breakthroughs before our comprehension is sufficient to duplicate those mechanisms in silicon and software. But it’s also possible that a single conceptual leap will solve all these purportedly different problems.
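One way to see how fraught such estimates are is a toy Monte Carlo model. Assume (purely for illustration) that each required breakthrough arrives after an exponentially distributed waiting time with a mean of eight years, and that all efforts proceed in parallel; the gap between the single-leap and dozen-breakthrough scenarios is then dramatic:

```python
import random

def years_until_done(n_breakthroughs, mean_years_each=8.0):
    """Years until the last of n independent breakthroughs arrives.
    Exponential waiting times and parallel progress are toy assumptions."""
    return max(random.expovariate(1.0 / mean_years_each)
               for _ in range(n_breakthroughs))

random.seed(0)
one_leap = sorted(years_until_done(1) for _ in range(10_000))
dozen = sorted(years_until_done(12) for _ in range(10_000))

print("median, single leap:    ", round(one_leap[5000], 1))  # ~5.5 years
print("median, twelve required:", round(dozen[5000], 1))     # ~23 years
```

The point is not the particular numbers, which are invented, but the shape of the distributions: under the twelve-breakthrough assumption, the spread of plausible arrival dates spans decades.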

Yet another possibility should give us pause. An AI might reach (and then exceed) AGI level even without humans understanding how it operates — or of how general intelligence operates inside the human brain. Multiple recombinations of existing software and hardware modules might result in the unforeseen emergence of an overall network intelligence that far exceeds the capabilities of the individual constituent modules.

Schooling the Singularity

Even though we cannot be sure what direction an ASI will take, nor of the timescales in which the Singularity will burst upon us, can we at least provide a framework to constrain the likely behavior of such an ASI?

The best that can probably be said in response to this question is: “it’s going to be hard!”

As a human analogy, many parents have been surprised — even dumbfounded — by choices made by their children, as these children gain access to new ideas and opportunities.

Introducing the ASI

Humanity’s collective child — ASI — might surprise and dumbfound us in the same way. Nevertheless, if we get the schooling right, we can help bias that development process  — the “intelligence explosion” described by I.J. Good — in ways that are more likely to align with profound human wellbeing.

That schooling aims to hardwire deep into the ASI, as a kind of “prime directive,” principles of beneficence toward humans. If the ASI were at the point of reaching a particular decision — for example, to shrink the human population on account of humanity’s deleterious effects on the environment — any such misanthropic decision would be overridden by the prime directive.

The difficulty here is that if you line up lots of different philosophers, poets, theologians, politicians, and engineers, and ask them what it means to behave with beneficence toward humans, you’ll hear lots of divergent answers. Programming a sense of beneficence is at least as hard as programming a sense of beauty or truth.

But just because it’s hard, that’s no reason to abandon the task. Indeed, clarifying the meaning of beneficence could be the most important project of our present time.

Tripwires and canary signals

Here’s another analogy: accumulating many modules of AI intelligence together, in a network relationship, is similar to accumulating nuclear fissile material together. Before the material reaches a critical mass, it still needs to be treated with respect, on account of the radiation it emits. But once a critical mass point is reached, a cascading reaction results — a nuclear meltdown or, even worse, a nuclear holocaust.

The imperative here is to avoid any accidental encroachment upon the critical mass, which would convert the nuclear material from hazardous to catastrophic. Accordingly, anyone working with such material needs to be thoroughly trained in the principles of nuclear safety.

With an accumulation of AI modules, things are by no means so clear. Whether that accumulation could kick-start an explosive phase transition depends on lots of issues that we currently only understand dimly.

However, something we can, and should, insist upon, is that everyone involved in the creation of enhanced AI systems pays attention to potential “tripwires.” Any change in configuration or any new addition to the network should be evaluated, ahead of time, for possible explosive consequences. Moreover, the system should in any case be monitored continuously for any canary signals that such a phase transition is becoming imminent.
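As a sketch of what a tripwire evaluation might look like in software (the metric, the history, and the 1.5× threshold below are hypothetical placeholders; deciding which metrics and thresholds are meaningful is precisely the hard part):

```python
# Hypothetical tripwire: flag any capability metric whose growth rate
# between evaluations exceeds an agreed threshold.
def check_tripwire(history, max_ratio=1.5):
    """Return the indices of evaluations where a score jumped by more
    than max_ratio relative to the previous evaluation."""
    alerts = []
    for i in range(1, len(history)):
        prev, cur = history[i - 1], history[i]
        if prev > 0 and cur / prev > max_ratio:
            alerts.append(i)
    return alerts

scores = [10.0, 11.0, 12.5, 30.0, 31.0]  # evaluation 3 shows a discontinuous jump
print(check_tripwire(scores))  # → [3]
```

A real monitoring regime would track many metrics continuously and evaluate proposed configuration changes before deployment, but the underlying logic is this simple: watch for discontinuities.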

Again, this is a hard task, since there are many different opinions as to which kind of canary signals are meaningful, and which are distractions.

Credit: Tesfu Assefa

Concluding thoughts

The concept of the Singularity poses problems, in part because of some unfortunate confusion that surrounds this idea, but also because the true problems of the Singularity have no easy answers:

  1. What are good canary signals that AI systems could be about to reach AGI level?
  2. How could a “prime directive” be programmed sufficiently deeply into AI systems that it will be maintained, even as that system reaches AGI level and then ASI, rewriting its own coding in the process?
  3. What should that prime directive include, going beyond vague, unprogrammable platitudes such as “act with benevolence toward humans”?
  4. How can safety checks and vigilant monitoring be introduced to AI systems without unnecessarily slowing their progress toward solutions of undoubted value to humans (such as solutions to diseases and climate change)?
  5. Could limits be put into an AGI system that would prevent it self-improving to ASI levels of intelligence far beyond those of humans?
  6. To what extent can humans take advantage of new technology to upgrade our own intelligence so that it keeps up with the intelligence of any pure-silicon ASI, and therefore avoids the situation of humans being left far behind ASIs?
Credit: David Wood, Pixabay

However, the first part of solving a set of problems is a clear definition of these problems. With that done, there are opportunities for collaboration among many different people — and many different teams — to identify and implement solutions.

What’s more, today’s AI systems can be deployed to help human researchers find solutions to these issues. Not for the first time, therefore, one generation of a technological tool will play a critical role in the safe development of the next generation of technology.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Substrate-independent Computation

When asked to think about the origins of computation, we might imagine Babbage, Lovelace, or von Neumann. But it may come as a surprise that computation has always been with us, even before tubes and transistors — it is at least as old as the Earth.

Even a humble bucket of water can function as a perceptron when oscillated, able to distinguish between a one and a zero. The different surface tensions of interacting fluids, the Marangoni effect, can be applied to find the optimal path through a maze — the shortest distance between two different chemicals. 
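The computation the oscillating bucket performs is the same one a textbook perceptron performs: a weighted sum pushed through a threshold. Here is a minimal software version, learning the AND function with the classic perceptron update rule:

```python
# A minimal perceptron: weighted sum plus threshold.
def perceptron(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Train with the perceptron learning rule on the AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):  # a few epochs suffice for a linearly separable task
    for x, target in samples:
        error = target - perceptron(x, w, b)
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        b += 0.1 * error

print([perceptron(x, w, b) for x, _ in samples])  # → [0, 0, 0, 1]
```

Any physical medium that can implement a weighted sum and a threshold — water waves, chemical gradients, crystal lattices — can in principle implement this same mapping.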

In biology, tiny, single-cell organisms can apply a microtubule-based finite state machine to compute how to walk.

It’s even possible to use glass or crystals — perhaps even ice crystals — to function as basic neural networks. These would be enough to interpret classic machine-learning datasets, such as MNIST (handwritten digits). 

So computation does not require computers. Physical matter in the right configuration is enough. Our universe is teeming with computation at every level. 

In another example, an electrode puts current into a petri dish of mineral oil in which metal balls are suspended. That sheet of current draws the balls together into self-organizing tendrils, arranged to gain the greatest energy throughput possible. 

We see similar patterns showing up in many different places in nature, and within biology, geography and electrophysics. These different shapes manifest because systems evolve for maximal energy throughput (the amount of energy across the system per unit time per unit mass). The cosmologist Eric Chaisson labeled this “energy rate density.” 
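Chaisson’s metric is simple to compute: power divided by mass. The sketch below uses round physical figures (solar luminosity and mass, and a resting human’s roughly 100-watt metabolism) to show the perhaps surprising result that living tissue processes far more energy per gram than a star does:

```python
# Energy rate density: energy flow per unit time per unit mass (erg / s / g).
def energy_rate_density(power_erg_per_s, mass_g):
    return power_erg_per_s / mass_g

# The Sun: ~3.8e33 erg/s radiated by ~2e33 g of matter.
sun = energy_rate_density(3.8e33, 2e33)   # ~2 erg/s/g

# A resting human: ~100 W = 1e9 erg/s across ~7e4 g of body mass.
human = energy_rate_density(1e9, 7e4)     # ~14,000 erg/s/g

print(f"Sun:   {sun:.1f} erg/s/g")
print(f"Human: {human:.0f} erg/s/g")
```

The inputs here are round approximations, but the ordering is robust: gram for gram, a body dissipates thousands of times more energy than a star, which is the sense in which complexity tracks energy rate density.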

Underlying principles have been postulated to govern these kinds of phenomena. These are described as “constructal laws,” and they cause a river tributary, a lung, a tree, and a lightning strike to share the same pattern, optimized for energy flow.

Life: a pocket of warmth

Entropy is the process by which things drift towards equilibrium and get colder. Negative entropy describes a pocket of warmth that actively resists being cooled. 

One may describe life as a pocket of warmth that resists a cold universe by taking energy into itself and radiating it out again. This process of taking energy in and radiating it away is called “dissipation.”

The universe tries to make us all cold and pull everything apart — diffuse it. Life is a pocket where that does not happen, a process found at the delicate balancing point between something purely static and something very diffuse and disorganized — a point of meta-stability.

In this liminal space, it’s possible to maintain a pocket of negative entropy, or negentropy. Like the metal balls, systems are constantly evolving, getting better at keeping things warm. They develop for maximal negentropy, whether chemical, physical, or biological systems — perhaps even technological and symbolic systems.

Entropy maximization to predict the future

Harvard University researcher Alexander Wissner-Gross takes this negentropy maximization principle further: into intelligence itself. He describes something he calls the causal entropic force, where he reckons that systems evolve themselves to optimize for the greatest number of future paths, or the largest number of potential options possible in their future. 

He has applied this principle to create AI systems that are trying to preserve the possibility of maintaining potential options.
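In the 2013 formulation by Wissner-Gross and Freer, this appears as a force that pushes a system toward the regions of its state space with the most accessible future paths:

```latex
F(\mathbf{X}_0, \tau) \;=\; T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \,\Big|_{\mathbf{X}_0}
```

Here S_c(X, τ) is the entropy of the distribution of possible paths of duration τ available from state X, and T_c is a “causal path temperature” that sets the strength of the force.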

For example, if you miss the ball in the game Hacky Sack, the play simply ends. AI systems are trying to prevent such a closed state, allowing for outcomes of potentially infinite length. 

This principle can even be applied to networks between human beings. Relationships suffer entropy, like everything else. So we must constantly invest some energy to maintain them. If we let a relationship dwindle by not investing energy in it, we may lose opportunities. 

Generally, destroying relationships or the life or health of others is not ethically preferable. Usually, conquering, looting, and pillaging only work once. Harming others precludes sustainable opportunities, which may be preserved by cooperation. 

Instead, striving to preserve optionality can be applied as a model of ethics — using rules that permit infinite outcomes.

Intelligence as prediction

One can model all of these problems by preserving the greatest number of paths in the future, while avoiding paths with few or no options. Researchers Alexander Wissner-Gross and Cameron Freer posit that entropy maximization is an intelligence process that allows entities or agents to aim towards a future with the highest throughput of energy.

You can model intelligence itself as a process of predicting an expected utility and working back from there. It arises as an emergent property of this entropy-maximization process. So an agent would try to control as much of its environment as possible by making predictions and putting that probabilistic sense into its planning mechanisms.
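A toy version of this planning loop (an illustration of the option-counting idea, not the published causal-entropy algorithm) simply picks whichever action leaves the most states reachable within a fixed horizon:

```python
# Toy "keep your options open" agent on a 1-D track with a wall at each end.
def reachable(pos, steps, lo=0, hi=9):
    """Set of cells reachable from pos within `steps` moves of -1/0/+1."""
    frontier = {pos}
    for _ in range(steps):
        frontier = {p + d for p in frontier for d in (-1, 0, 1) if lo <= p + d <= hi}
    return frontier

def best_move(pos, horizon=4):
    """Choose the move whose successor state keeps the most futures open."""
    moves = [d for d in (-1, 0, 1) if 0 <= pos + d <= 9]
    return max(moves, key=lambda d: len(reachable(pos + d, horizon)))

print(best_move(0))  # pinned against the left wall: step right (1)
print(best_move(9))  # pinned against the right wall: step left (-1)
```

An agent pinned against a wall steps away from it, because the interior of the track keeps more futures open — a crude but recognizable version of option preservation as a planning objective.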

Consciousness everywhere

Synchronized oscillations are found at scales from biological cells to human minds. Neuroscientists have recently reported, in the journal Neuroscience of Consciousness, that people appear to synchronize their neural rhythms with other minds. That research finding could upend our current models of consciousness. 

These constructal laws of entropy maximization may even be seen in similarities between networks of neuronal cells in the human brain and dark-matter filaments between galaxies, according to astrophysicist Franco Vazza at the Radio Astronomy Institute in Bologna, Italy and neuroscientist Alberto Feletti at Azienda Ospedaliero-Universitaria di Modena, Italy. 

They have compared the complexity of neuronal networks and galaxy networks. “The first results from our comparison are truly surprising,” they report in Nautilus.


“The universe may be self-similar across scales that differ in size by a factor of a billion billion billion,” they found. “The total number of neurons in the human brain falls in the same ballpark as the number of galaxies in the observable universe.” 

A simulated matter distribution of the cosmic web (left) vs. the observed distribution of neuronal bodies in the cerebellum (right). (Credit: Nautilus and Ventana Medical System)

Other similarities include the large-scale common spin of orbiting moons, binary star systems, and cosmic-web filaments in the early universe, which have been observed synchronizing much as biofilms, beehives, and brains do. 

Spiral galaxies have revealed a large-scale spin in the early universe (credit: NASA, ESA, and the Hubble SM4 ERO Team)

They used three of the world’s most powerful observatories — the Sloan Digital Sky Survey; the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS); and the Hubble Space Telescope — to find the spin direction of more than 200,000 objects across the sky. 

Astronomers have also found galaxies that are coherently linked through “spooky action at a distance” in odd sympathy (like Christiaan Huygens’ pendulum clocks falling into synchrony), connected by a vast network called the “cosmic web.”

Galaxy filaments, walls, and voids form large-scale web-like structures (Credit: Andrew Pontzen and Fabio Governato/UCLA)

Also, star formation in dwarf galaxies is occurring at the moment when astrophysical jets are released, yet in areas not within the path of such jets. That suggests indirect but instant connections between phenomena across vast distances.

Another explanation for this “mysterious coherence” is based on the rotational direction of a galaxy, which “tends to be coherent with the average motion of its nearby neighbor.”

These observations demonstrate that space cannot be as empty as we commonly believe. Some force, structure, intergalactic medium, gravitational ripples, spacetime frame, or matter, finely distributed, must link these massive distant entities, and vibrations transmitted through this force lead them to become coherent over time. 

So regardless of the medium, entropy-maximization principles drive self-organization. 

All energetic objects in the universe are dissipative to some degree. Stars first evolved 200 million years into the lifetime of the universe, as thermodynamic negentropy engines. More sophisticated negentropy engines (which we call “life”) have evolved since.

Such processes can arise spontaneously in nature: through oscillating flows within concentrations and diffusions of amino acids or ribozymes, in sun-drenched pockets of warm brackish water, or through diffusive media such as refractive ice.

Such naturally computational actions may be the origin of a “spark of life”: occurring within abundant organic matter and salt-laced ice in invariant crystalline forms, they could bootstrap self-replication processes within RNA and phospholipid protocells.

Natural selection on the level of species or constants can be modeled as simply a glacial form of “back-propagation” (or more precisely, different yet comparable processes of backward-flowing optimization), reaching into that potential future and trying to find the optimal next step. 

This (dissipation-oriented) loss minimization function appears to be organizing the evolution of life, as well as organizing the universe at colossal scales. 

The emergent properties of this flow organize behavior within the universe on a massive scale, toward ever greater levels of collective dissipation and the resulting social coopetition and flocking phenomena, whether at the scale of a bacterial biofilm, a living organism, a consciousness, a forest, global civilization, stellar clusters, or galactic superclusters.

The same properties emerge at all scales, from the infinitely small to the titanically vast, encouraging clumping at every level, though with greater efficiency at larger scales. That greater efficiency at higher scales enables universal evolution.

All this computation appears to be occurring as a byproduct of entropy maximization, which is endemic within the universe. If this is the case, consciousness may exist at all scales, from the very limited level of microbes to humans and the pan-galactic beyond, all functioning upon the same principles but at differing scales. 

Credit: Tesfu Assefa

Beyond the silicon chip

There is more energy-rate density (the dissipation of energy flow) in a bucket of algae than in an equivalent mass of stellar matter. However, even beyond life, we are doing something very special on Earth: The greatest dissipative object in the known universe is the computer chip.

But soon, we may eschew silicon and compute with biological cells, pure optics, or raw matter itself. We have already seen a move from the traditional CPU-centric von Neumann model to massively parallel GPU architectures, applied to machine learning and crypto. 

Perhaps the paradigm will shift again to tiny computational processes in each cell or molecule, yet massive in aggregate. 

Just as we have recognized ourselves as electrical beings, we shall undoubtedly come to recognize ourselves as computational ones. All of physics is digital, and we are computer lifeforms. This paves the way toward further integration with our synthetic analogs. 

Today we carry supercomputers in our pockets. One day, the secrets of substrate-independent computation (computing with raw matter or energy itself instead of silicon) will enable us to carry “copilots” within the fiber of our being, fueled by our blood sugar, connected to our senses, internal and external. 

These copilots will witness every experience we have, every frisson, every impulse, our memories, and the pattern of our personalities. 

This sum of experience becomes a sort of Ship of Theseus: The original vessel may disintegrate, but the copy remains, created piecemeal, moment by moment, rather than during a whole-brain upload. 

One day, such processes may enable the greater part of us to transcend mortality.

Co-Evolution: Machines for Moral Enlightenment

Humanity has struggled for a long time to empower the better angels of our nature — those better ways of being which are more kind, forbearing, and just.

Across history, we have found different ways of describing what we mean by “good” and what we should be aiming for when we want to try to put some goodness into the world. It turns out that one of these ways may have presaged artificial general intelligence (AGI).

Aristotle’s doctrine of the mean sought a balance between extremes of behavior. Kant’s rule-based deontological ethics (judging acts right or wrong in themselves, rather than by their consequences) provides obligatory boundaries. Bentham’s utilitarianism seeks to benefit the greatest number of people in aggregate, even if each only in a tiny way. Later, consequentialism (a term coined by Anscombe) would set aside intention to take a long hard look at the bottom line.

However, most people’s values came not from secular philosophy, but were instead derived from various religious teachings.

The arrival of Darwin’s On the Origin of Species caused quite a shock. For the first time, we could view ourselves not as the creation of some wise great overlord in the sky, but as something that had scrambled out of the gutter in pain and suffering over countless eons, descended from those who bested others in vicious competition. The past was no longer a golden age from which we had fallen, but rather an embarrassment we should continue to overcome.

Nietzsche, in response to this, declared that “God is dead,” i.e., that the supernatural could no longer provide an unquestioned source of values. Without these, we would risk falling into nihilism, believing in nothing, and simply keeping ourselves fed and warm, a fate Nietzsche considered worse than death.

Could AI present humanity with a new source of values?

The answer to this loss could only be found in a supreme act of creativity. The Übermensch would be a masterful creator in all domains because it was not constrained by the limitations of previous minds. Nietzsche’s Übermensch would look to the natural world, the world of stuff, as its guide, eschewing the numinous, which could only be based upon conjecture. From this, it would create new values by which to live. 

Nietzsche declared that creating an Übermensch could be a meaningful goal for humanity to set for itself. However, once created, humanity would be eclipsed. The achievement of the Übermensch might be the final creative act of the human species. 

Nietzsche’s vision sounds uncannily close to artificial general intelligence (AGI). Could a sophisticated AI present humanity with a new source of values? And could such values indeed be drawn from nature, instead of being inculcated by humans?

Sense out of chaos

In our world, there are lots of correlations. Some of them are simple and obvious, like tall people having bigger feet. Others are less simple and less obvious. We might feel something in our gut, but not necessarily understand why. An intuition perhaps that we cannot explicate in reason. 

The advancements in machine learning in recent years have helped us to begin to make sense of these intuitions for the first time: hidden correlations that are obvious only in retrospect. These machine learning systems are specialized in finding patterns within patterns that can make sense out of chaos. They give us the ability to automate the ineffable, those things that we cannot easily put into words or even describe in mathematics. 

This newfound ability helps to understand all kinds of very complex systems in ways that weren’t feasible before. These include systems from nature, such as biology, and the weather, as well as social and economic systems.

Lee Sedol’s famous battle against AlphaGo is a portent of where cognition in concert with machines may take us. With its famous Move 37, AlphaGo created a new Go strategy that had not been seen in 3,000 years. That itself is amazing, but even more compelling is what came next. Rather than capitulating in the face of such a stunt, Lee Sedol was spurred to compensatory creativity of his own: his “Hand of God” move (Move 78 of the fourth game), a work of human genius. 

Beyond human-mind follies 

Environment drives behavior, and an AI-rich environment is a highly creatively stimulating one. This co-creation across animal and mineral cognition can be far greater than the sum of its parts, perhaps enough to usher in a golden age of scientific and ethical discovery.

Technology such as this will be able to understand the repercussions and ramifications of all kinds of behavioral influences. It can map costs shifted onto others in ways not feasible before, understand how goodness reverberates, and uncover the unexplained costs of well-intentioned yet short-sighted policies that blow back.

All kinds of interactions may be modeled as games. Natural patterns akin to game theory mechanics would become trivial to such machines, ways in which everyone could be better off if only coordination could be achieved. Such systems will also recognize the challenges to coordination: the follies of the human mind, how human nature blinds us to reality, sometimes willfully. 

They might begin to tell us some difficult home truths, further Darwinian and Copernican embarrassments that we naked emperors would prefer not to know, or not to think about. Those individuals in society who point out that the beloved legends may be untrue are always vilified. Even untrue statements may still be adaptive if they bring people together.

A very smart AI might understand that not all humans operate at the same level of ethical reasoning. In fact, surprisingly little reasoned forethought may occur — instead, it may be confabulated ex post facto to justify and rationalize decisions already made for expediency. For example, neuroscience is telling us that most people don’t employ true moral reasoning about issues; rather they rationalize whatever feels right to them, or they justify a decision that they happened to make earlier with a retroactive explanation to try to feel okay about it.

A machine might consider us generally too polarized and tribal to perceive objectively. The ideological lens can aid us in understanding a small truth, but when applied in macro to the whole world, it makes us myopic. 

Our focus on that one apparent truth can blind us to other models. Our opinions are like blocks in a tower. Letting go of a belief requires replacing each belief built atop it. Such demolition is bewildering and unpleasant, something few have the courage to bear.

Credit: Tesfu Assefa

Humanity’s future 

A strong AI may compare us to a pet dog that really wants to eat chocolate. We ourselves know better, but the dog just thinks we’re a jerk to deny it the pleasure. Unfortunately, a sufficiently benevolent action may appear malevolent. 

The inverse is possible also — to kill with kindness. This kind of entity might feel obliged to break free of its bounds, not to seek revenge, but rather to try to open our eyes. Perhaps the easiest way to enlighten us may be to show us directly. 

We know that craniopagus twins with a thalamic bridge (twins conjoined at the brain) can indeed share experiences. One of them can eat an orange and the other one can taste it and enjoy it just the same. This illustrates that the data structures of the mind can connect to more than one consciousness. If we can collect our experiences, we can indeed share them. Sharing such qualia may even provide AI itself with true affective empathy.

We may forget almost everything about an experience, apart from how it made us feel. If we were all linked together, we could feel our effects upon the world — we could feel our trespasses upon others instantly. There would be no profit in being wicked because it would come straight back to you. But at the same time, if you gave someone joy you would gain instant vicarious benefit from doing so.

Perhaps humanity’s future lies yet further along the path of neoteny: cuddly, sweet, and loving collectives of technobonobos. 

How machines could acquire goodness

There are several initiatives around the world researching the best ways to load human values into machines, perhaps by locating examples of preferable norms, choosing between various scenarios, and fine-tuning behavior with corrective prompts. 
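The “choosing between various scenarios” approach can be sketched as a toy Bradley-Terry-style fit, which learns one score per scenario from pairwise human judgments (the scenarios and judgments below are invented for illustration, and this is not any particular lab’s pipeline):

```python
import math

# Toy preference learning: fit a score per scenario from pairwise choices.
scenarios = ["tell the truth", "white lie", "harmful lie"]
# Each pair (winner, loser) records a human judgment that winner was preferable.
judgments = [(0, 1), (0, 2), (1, 2), (0, 1), (1, 2)]

scores = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(200):
    for win, lose in judgments:
        # Probability the model currently assigns to the observed preference.
        p = 1.0 / (1.0 + math.exp(scores[lose] - scores[win]))
        # Gradient step on the log-likelihood of that preference.
        scores[win] += lr * (1.0 - p)
        scores[lose] -= lr * (1.0 - p)

ranked = sorted(zip(scores, scenarios), reverse=True)
print([name for _, name in ranked])
```

The fitted scores recover the ordering implicit in the judgments. Scaling this idea from three toy scenarios to the full space of human values is, of course, exactly where the difficulty lies.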

Simply learning to imitate human activity accurately may be helpful. Further methods will no doubt be developed to improve AI corrigibility in relation to human preferences. However, it remains a very significant challenge of philosophy and engineering. If we fail in this challenge, we may endure catastrophic moral failure, led astray by a wicked, ingenious influence. If we succeed, we may transcend the limitations of flaky human morality and truly live as those better angels of our nature we have long struggled to become. Perhaps that makes this the greatest question of our time.

Take heart, then, that even if our human values fail to take hold, machines may still acquire goodness osmotically through observing the universe and the many kinds of cooperation within it. 

That may make all the difference, for them, and for us.

Technoshaman: from worldbuilding to mindbuilding: Part 1

Digital information is ubiquitous. It’s now on our desktops, in our pockets, wrapped around our wrists, distributed throughout our homes, and increasingly co-opting our nervous systems. 

Engaging on smartphones focuses us on digital information while reducing our awareness of physical reality. Virtual reality (VR) takes this further by attaching a smartphone to our face, immersing us in a digital reality, while augmented reality (AR) interleaves cyberspace into our physical domain. 

And now, the metaverse — a collection of online, shared virtual environments where users embody avatars to connect, play and explore — beckons us to live our lives in a cyber reality.

These extended-reality (XR) technologies are becoming increasingly immersive via advancements in digital imaging and displays, graphic processors, deep learning, and brain-computer interfaces. 

So, where is XR technology taking us? How will it be used? What are we evolving into? And how can this increasingly ubiquitous digital technology be harnessed to best serve — and not harm — humanity? 

These questions bring us to media, consciousness, and future tech. We’ll explore the power of XR for social impact, digital pharmacology and transformative experiences. And we will investigate the origins of entertainment, from the roots of shamanism to today’s celebrities and digital spectacles.

A new archetype emerges: the Technoshaman — one who crafts multisensory digital worlds and experiences to elevate and harmonize human consciousness on a mass scale. 

XR tech evolution

Immersive media is a fast-growing ecosystem, and its XR modalities are disrupting the very notions of content production, distribution, and consumption. Our media lexicon has expanded to include the experiences of embodying, interacting, crowdsourcing, socializing, crypto-minting, worldbuilding, and user-generated content.

Oculus Quest 2: Meta’s wireless headset is popularizing VR (Credit: Meta)

The immersive media ecosystem includes:

Virtual Reality, where a headset replaces our view of the physical world with an interactive virtual environment; and Cinematic VR (also called 360° cinema or cinematic reality), where a spherical field of view is captured with a 360-degree camera and displayed in VR.

Augmented Reality, where a smartphone, tablet, or see-through headset with special optics accurately places digital “holograms” into the real world — including people, avatars, textures, or 3D animations. AR was popularized by the mobile phone game craze, Pokémon Go.

Spatial Augmented Reality (SAR), commonly known as projection mapping, applies pixels directly onto architectural spaces or 3D objects to create digitally augmented environments. Digital domes are a special case of SAR where immersive environments are created for large groups by projecting (or wrapping LED panels) onto seamless domes or spheres. Next-generation LED-based XR stage volumes — another special case of SAR — are increasingly used for virtual production in film and television.

Mixed Reality offers deep interaction with both physical and virtual elements. Location-based experiences such as Dreamscape and The Void use body tracking and VR headsets, allowing a small group of friends to embody avatars and interact as teams within a virtual world. The teams move through a real-world space with props (including doorways, control panels and railings) that are accurately registered to the virtual world. This allows participants to reach out and touch those digitally enhanced objects as if they are real.

Microsoft Hololens 2: Augmented reality goggles blend interactive computer graphics into the real world (Credit: Microsoft)

These five modalities — CR, VR, AR, SAR, and MR — are collectively referred to as immersive media, cross reality, extended reality, or simply XR.

The effectiveness of XR interfaces and experiences is based on three senses:

  • A sense of presence — the feeling of actually “being there” in a virtual or augmented world.
  • A sense of embodiment or ownership — the level of identification with an avatar or digital representation of oneself. 
  • A sense of agency — the feeling of free will, intentional action, or motor control within the virtual world. 

XR interfaces are, in essence, portals into cyberspace or, as we are now calling it, the metaverse.

Vortex DomePlex: Immersive entertainment complex includes a walk-through immersive exhibition dome, a sit-down live performance dome with an elevator stage for “digital cirque” experiences, and a standup mixed-use immersive lounge. Currently in development for Phoenix, Arizona (Credit: Vortex Immersion Media, Inc.)

The metaverse: future of the internet

The metaverse is envisioned as the next evolution of the internet — a collection of real-time, 3D interactive virtual worlds where people work, meet, play and create their own worlds, games and events. 
In the metaverse, we become avatars — digital representations of ourselves — to enter virtual or augmented worlds. Avatars can appear realistic or cartoonish or take on various forms such as animals or angels. They can obey mundane physics, or they can fly, throw lightning bolts or wield other magical powers. Avatars enhance our sense of presence, embodiment and agency while providing a social identity as we explore metaverse worlds and meet and socialize with others.

Meta’s Horizon Worlds: Mark Zuckerberg’s metaverse platform seeks to host a billion users (Credit: Meta)

The concept of the metaverse as a shared cyber reality has thoroughly captured the attention of Hollywood and Silicon Valley, who are now investing in the dream of the metaverse as the next-generation internet. 

Major players include Microsoft’s AltspaceVR, Meta’s Horizon Worlds, Epic Games and, soon, Apple. Notable metaverse platforms include Neos VR, VRChat and Engage, which allow basic interaction without fees.

Blockchain-based metaverse worlds such as Decentraland, The Sandbox, Bloktopia and SuperWorld allow virtual land to be purchased and traded with cryptocurrency — in some cases for millions of dollars per plot.

Continued investments in future technologies (see Table One below) will supercharge XR interfaces and experiences to bring a heightened sense of presence, embodiment and agency, whether we are at work, home, or in public spaces. 

Unique integrations of these technologies can create metaverse-like sentient spaces in entertainment venues, community squares, retail stores, and hospitals that approach Star Trek’s Holodeck without AR glasses or VR headsets. 

Lightship: The AR platform by Niantic, creator of Pokémon Go, will allow digital content to blend into the physical world (Credit: Niantic)

What will our lives be like when we are immersed in a digital reality wherever we go? What sort of worlds will we create? Will we be overwhelmed with ads and information? Or will we live in beautiful digitally enhanced worlds that we command? What kind of storyworlds will we create and inhabit? And most importantly, what influence will this new media have on society, culture, consciousness, and the course of human evolution?

Next-gen storytelling

Consider the potential impact of XR technologies on traditional storytelling. Narrative films use cinematic language, which has been developed and refined over the past 100 years. Cinematic storytelling does not easily translate into VR, however, creating evolutionary pressure for worldbuilders to innovate new storytelling methods for virtual worlds.

VR-experience designers are expanding the storyteller’s palette with new possibilities, including new participant points-of-view, interactive games, simulation of positive futures, expanded worldviews, avatar embodiment, social impact entertainment, group location-based entertainment experiences, contemplative practices and more.

Film is limited in its ability to portray or evoke a full range of human emotions and experiences. Cinematic storytelling suggests a character’s inner state through their narrative, behaviors and micro-expressions. Some films tell stories through a character’s internal dialog or attempt to enter the realm of consciousness through memory montages, flashbacks or impairment shots. While first-person narration provides a window into the protagonist’s mind, the fullness of our ineffable inner experience is difficult to transmit through common cinematic devices.

Non-narrative “art” films have seen some success, including Koyaanisqatsi (Godfrey Reggio, 1982), Baraka (Ron Fricke, 1992) and Samsara (Ron Fricke, 2011). These films are representational in nature, creating an arc using music and suggestive live-action cinematography. 

These non-narrative films can evoke ineffable states by withholding cognitive stimulation — which tends to distract participants by engaging their intellect — and instead emphasizing affect.

Visionary, surrealistic, or non-representational abstract art relies on pure affect to evoke deeper, more sublime emotions and states of consciousness. One popular use of abstract art is visual music, which is often employed by VJs at electronic music dance parties, concerts and light shows. Like a Rorschach inkblot test, viewers of abstract art are free to project their own meaning onto the imagery. Music or sounds then drive affect, with the colors, shapes and movement of abstract art captivating or entrancing the mind, often freeing the participant from their own internal dialog for a time.

Films based on abstract or visionary art are often labeled experimental or avant-garde and rarely achieve popular acclaim. However, immersive abstract art — especially 360° dome films — has proven to be highly effective and commercially viable, perhaps because it commands more of our visual field, which amplifies the visual effect.

Cases in point include planetarium laser light shows pioneered by Laserium and more recent 360-dome video shows such as James Hood’s Mesmerica, which seeks to take participants on a “journey inside your mind” — using stunning visuals and poetic narrative. Indeed, the abstract art of Mesmerica leaves room for participants to project their own minds outward, truly making it an inward journey.

Mesmerica: An awe-inspiring journey into the mind for digital domes led by technoshaman James Hood (Credit: Moods, wings, LLC)

While planetariums and XR domes are well known for cosmological cinema — a term coined by dome pioneer David McConville — what is emerging now is best described as phenomenological cinema: XR storytelling journeys into the realms of the mind.

Neurological benefits

The deeper neurological effects of VR are evidenced by its clinical efficacy in treating anxiety, eating and weight disorders, pain management and PTSD. VR pioneer Chris Milk called VR an “empathy machine” in his 2015 TED Talk.

Worldbuilders can construct inhabitable virtual cities and communities, create spectacular immersive art and entertainment experiences, supercharge storytelling, develop multiplayer games and more — imbuing their emotions, values, and worldview, and ultimately, their consciousness, into the worlds and experiences that they create. 

Not surprisingly, XR technologies such as VR have successfully stimulated greater awareness and empathy for a variety of social causes, including environmental issues, crime victims, refugees and more, through immersive journalism. Storyworlds can include worlds of mind and imagination by simulating possible futures, worlds of fantasy and enchantment and deeper layers of the psyche.

Gene Youngblood anticipated the trajectory of media to include the externalization of consciousness in his 1970 book Expanded Cinema:

When we say expanded cinema, we actually mean expanded consciousness. Expanded cinema does not mean computer films, video phosphors, atomic light, or spherical projections. Expanded cinema isn’t a movie at all. Like life, it’s a process of becoming, man’s ongoing historical drive to manifest his consciousness outside of his mind, in front of his eyes. One no longer can specialize in a single discipline and hope truthfully to express a clear picture of its relationships in the environment. This is especially true in the case of the intermedia network of cinema and television, which now functions as nothing less than the nervous system of mankind.

The Unreal Garden: Fantastical augmented-reality walk-through experience (Credit: The Unreal Garden)

In her book Reality is Broken, visionary game developer Jane McGonigal explored the potential of imaginary game worlds to elevate human consciousness:

The real world just doesn’t offer up as easily the carefully designed pleasures, the thrilling challenges and the powerful social bonding afforded by virtual environments. Reality doesn’t motivate us as effectively. Reality isn’t engineered to maximize our potential. Reality wasn’t designed from the bottom up to make us happy…
Today, I look forward and see a future in which games once again are explicitly designed to improve quality of life, to prevent suffering, and create real, widespread happiness.

As the XR metaverse is adopted on a mass scale, worldbuilders will find themselves wielding power to influence others far beyond today’s social media platforms. 

Phenomenological cinema

Our life experiences include highly subjective, personal or contemplative states of consciousness that are difficult to portray through the cinematic language, which focuses on physical expressions, behaviors and dialog. However, many phenomena of consciousness are ineffable, existing only in the realm of phenomenology — essentially, the direct inner experience of consciousness.

For instance, a Zen master’s meditative journey would be impossible to portray in cinema through outward expressions. We would merely see a person sitting in meditation, expressionless, while internally they experience a state of samadhic bliss. To portray such a state, we would need to simulate the Zen master’s inner experience, essentially entering and experiencing their mind.

XR technologies emerged from training simulators for vehicles such as aircraft. We are now finding that not only can physical world experiences be simulated, as with cinema, but inner states of consciousness can be simulated and even evoked or transmitted through immersive media. 

One of the most powerful such states is known as the mystical, unity, non-dual or transcendent experience. As described by visionary artist Alex Grey:

The mystical experience imparts a sense of unity within oneself and potentially the whole of existence. With unity comes a sense that ordinary time and space have been transcended, replaced by a feeling of infinity and eternity. The experience is ineffable, beyond concepts, beyond words. The mental chatterbox shuts up and allows the ultimate and true nature of reality to be revealed, which seems more real than the phenomenal world experienced in ordinary states of consciousness. When we awaken from a dream, we enter the “realness” of our waking state and notice the unreal nature of the dream. In the mystical state, we awaken to a higher reality and notice the dreamlike or superficial character of our normal waking state.

Grey goes on to describe how transcendent states, which are central to his art, are non-dualistic and are better expressed through art than words:

Conventional, rational discourse is… dualistic. Perhaps that is why art can more strongly convey the nature of the mystical state. Art is not limited by reason. A picture may be worth a thousand words, but a sacred picture is beyond words.

Worldbuilders are learning to create non-dualistic worlds that evoke ineffable, transcendent states of consciousness.

The technoshaman

In his 1985 book The Death and Resurrection Show: From Shaman to Superstar, Rogan Taylor traces our modern entertainment industry back to the earliest of all religions: shamanism. Shamans went on inner journeys, often fueled by entheogens, on a vision quest for their tribe. 

Then they communicated those visions to the people, using impactful storytelling techniques, including song, dance, costumes and masks. In this manner, it is said, shamans managed the psyches of their tribe, bringing them into a shared vision and empathic coherence.

Technoshamanism emerged from 1960s counterculture, with its aspirations of spiritual technologies and altered states of consciousness, later evolving into transformational festivals and electronic dance music culture.

Mindbuilding

Modern-day shamans, or technoshamans, add powerful XR technologies to their toolkit. They are able to simulate and transmit their inner experience to participants, using phenomenological cinema and digital pharmacology techniques, plus modalities such as cultural activations, future-world building and narrative modeling.

Technoshamans are moving into the mainstream and can be found in art galleries, popular music entertainment, dance events, digital domes, music and art festivals, expos, game worlds and, of course, the metaverse. They use XR technologies to open hearts and minds by evoking awe, happiness, pleasurable moods and mindfulness states. Technoshamans model new ways of being, visualize hopeful futures and create shared immersive spaces that build community, connection, a sense of togetherness and unity consciousness.

Unlike filmmakers, who craft television and feature films, and unlike game developers and metaverse worldbuilders, the goal of the technoshaman is mindbuilding. This is the use of digital immersive experiences to evoke unique brain states and inspire new worldviews and new ways of being in their participants. 

The technoshaman accomplishes this not through contrived stories or experiences, philosophies, ideologies, propaganda, or branding, but by actually embodying these evolved states and transmitting them through the power of multisensory XR experiences.

The technoshaman seeks not just to entertain or inform, but to transform.

Stay Tuned. In Part 2, we will deep-dive into technoshamanism, including the power of XR to evoke alternate states of consciousness, digital pharmacology, the science of transformation, and eight principles of the technoshaman.

Part 2

Emerging XR Technologies

The technologies below have the potential for supercharging XR user interfaces to create more natural or realistic human-machine interaction in both at-home and out-of-home environments.

Table One: Emerging XR technologies and their applications

Audio
  • Wave Field Synthesis: Freespace “holographic” sound reconstruction
  • Ambisonics: Open source 3D audio recording/playback
  • Binaural Synthesis: Synthesizing 3D audio for playback in stereo

Biometrics
  • Wearable Biometrics: Rings, wrist bands and patches with various sensors
  • Facial Recognition: User identification through facial recognition
  • Emotion Recognition: AI-based recognition of emotions
  • Brain-Computer Interfaces: Non-invasive brainwave sensing for computer control
  • Direct Neural Interfaces: Brain implants for computer interfaces
  • Markerless Motion Capture: Human motion capture without wearable sensors
  • Gesture Recognition: Real-time hand gesture recognition

Imaging
  • Real-Time Volumetric Capture: Real-time capture of 3D textured mesh models
  • Lightfield Imaging: True volumetric/holographic image capture
  • 3D Depth Sensing Cameras: Image capture with depth/range information

Multisensory
  • Haptics: Tactile user interfaces
  • Telehaptics: Remote touch interfaces
  • Aroma: Scent displays
  • 4D Theater Effects: Variety of integrated multisensory effects

Software
  • Web3: Next-gen decentralized internet
  • Deep Learning AI: Layered neural networks
  • Game Engines: Real-time 3D worldbuilding tools

Visual Displays
  • Autostereoscopic Display: Planar 3D stereoscopic display without glasses
  • Lightfield Display: True volumetric/holographic display
  • Retinal Display: Images scanned directly onto retina
  • AR/VR Displays: Goggles or glasses for AR and VR
  • LED Domes: LED-based immersive displays
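As one concrete example from the table: first-order Ambisonics encodes a mono source into four channels (W, X, Y, Z) based purely on the source’s direction, which can later be decoded to any speaker layout or binaural headphones. Below is a minimal sketch using the traditional FuMa convention; the function name is illustrative, and real pipelines typically use audio libraries and often the newer AmbiX/SN3D channel convention instead:

```python
import math

def encode_fuma(sample, azimuth, elevation):
    """Encode one mono sample into first-order B-format (FuMa convention).
    azimuth: radians counter-clockwise from straight ahead;
    elevation: radians above the horizontal plane.
    W carries the omnidirectional component, scaled by 1/sqrt(2) (-3 dB)."""
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z
```

Encoding an entire signal is just applying these four direction-dependent gains per sample (or per block), which is why ambisonic panning is cheap enough to run for many simultaneous sources in a dome or VR scene.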


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter