Risk and Precarity Part 1: The Early Digital Age

“Cyberculture” — the embrace of rising digital technology in the 1990s — was attractive to the hip (a word which, according to some, translates into “knowing”). Avant-gardistes are instinctive lovers of risk, always experimenting with the new; always pushing the broader culture forward while pushing its triggers. 

The culture then was at once kindly and brutal. It promised to inform the masses; to give the average person open access to the means of communication — taking it away from the monied, well-connected elites. It touted production technologies that could end scarcity — at the extreme, there was the oft-expressed hope for achieving Drexlerian nanotechnology. This sort of nanotech could, in theory, program matter to make whatever was needed or desired. (They promised me a self-replicating paradise and all I got was these lousy stain-resistant pants.) Declarations about scientists having achieved cold fusion for clean energy were known to be dubious, but surely were indicative of breakthroughs to come.

The hacker ethic, as it was then understood, was all about making everything as free as possible to as many people as possible. Data, at least, was to be free really soon. Unlike physical goods, data can be copied and shared while you still keep the original. Over the internet, you could share it with everyone who had internet access. There was to be no scarcity in anything that was made from data. In theory, with the kind of advanced nanotechnology advocated by Eric Drexler in his 1986 book Engines of Creation, you could share data over the internet that would self-create material commodities. Today’s 3D printer is a primitive version of the idea of turning data into material wealth.

On the flip side of all this noblesse oblige was the arrogance of those who ‘got it’ towards those who didn’t. And hidden within the generous democratic or libertarian emphasis of the cultural moment was the contradictory certainty that everyone was going to have to participate or wind up pretty well fucked. Stewart Brand, very much at the center of things (as mentioned in earlier columns), wrote, “If you’re not part of the steamroller, you’re part of the road.” Note the brutality of this metaphor. In other words, the force that promised to liberate everyone from the coercive powers of big government and big money by decentralizing and distributing computing power to the masses contained its own coercive undertow. Brand was saying you would be forced (coerced) into participating with the digital explosion by its inexorable takeover of economies and cultures.

At its inception in 1993, Wired magazine shouted that “the Digital Revolution is whipping through our lives like a Bengali typhoon,” another metaphor for disruption that sounded exciting and romantic but is basically an image of extreme material destruction and displacement. In my own The Cyberpunk Handbook, coauthored with (early hacker) St. Jude, we characterized the “cyberpunk” pictured on the cover as having a “derisive sneer.” Much was made of the cyberpunk’s sense of having a kind of power that was opaque to the general public. Hacker culture even had its own spellings for people who were a million times more talented with computers and the online world than the “newbies” — eleet, 31337, or l33t. Technolibertarian (and Mondo and Wired contributor/insider) John Perry Barlow whipped out the line about “rearranging the deck chairs on the Titanic” every time the political or economic mainstream tried to even think about bringing the early chaos under some semblance of control. In 1996, he wrote A Declaration of the Independence of Cyberspace, declaiming, “I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

Barlow imagined cyberspace as a separate state largely unconnected to the realities of governments and other concerns of the physical world, an idea that seems preposterous now that access to the internet is pretty much a requirement to get work, transfer money, and access most medical services.

Even the mainstream’s shiny young boomer president, Bill Clinton, told people that the average person would have to “change jobs seven times” in the new economy (from burger flipper at McDonald’s to barista at Starbucks to lap dancer and back again). He tried to make it sound like it was all exciting, part of changing times, and he and the more directly cyberculture-oriented VP Al Gore touted retraining as a solution for the displaced. (Has retraining been replaced with re-education among the “center-left” politicians of the fading neoliberal consensus? A case can be made.)

As in all these cases, there was not much thought or sympathy for vulnerable people who might not be in a situation or condition that would allow them to cope with this jazzy, exciting, rapidly changing future. Which brings us to…

Credit: Tesfu Assefa

The Precariat  

“We are the 99%.”

Class in America has always tended to be unspoken and, during the pre-digital age, there was a strong, comfortable middle class. My own parents, born in the mid-1920s and right in the middle of the middle, never feared slipping into poverty or homelessness. They bought homes. They rented. The cost wasn’t absurd. They got sick and were kept overnight in hospitals without having their savings wiped out. There was a comfortable sense that there would always be a nine-to-five job available with modest but adequate pay and benefits. And there was an additional sense that the companies or institutions they would work for were solid. Whatever it was, it was likely to remain open, functional, and not inclined towards mass firings. They wouldn’t have to “change jobs seven times” as suggested by President Clinton.

The idea of a class called the “precariat” — a portmanteau of ‘precarious’ and ‘proletariat’ — was popularized by the economist Guy Standing to describe the increasing numbers of people who lack predictable work or financial security. The precariat need extra work (‘side hustles’) to plug the gap in their income: gig work, underground economic activity, and extended education or that good ol’ Clintonian ‘retraining’. Members of the precariat mint lines of NFTs hoping they will haul them out of precariousness, or at least give them a temporary lifeline. Ride-sharing businesses can only exist where there is a precariat.

There is an equal or perhaps greater cause for precarity in the state’s hands-off approach towards monopolies, and towards what Rebecca Giblin and Cory Doctorow call ‘monopsonies’ (they didn’t originate the word). Wikipedia explains this economic trap as one in which “a single buyer substantially controls the market as the major purchaser of goods and services offered by many would-be sellers.” Amazon is a world-historic example. Yet the backlash is directed towards digital technology as a whole, rather than just Amazon or some other monopoly company.

Occupy Wall Street & the 99%

Although the people who initiated Occupy Wall Street probably were not using the term back in 2011, their genius was in recognizing that precarity could extend to as much as 99% of the public – as middle-class, upper-middle-class, and even a few wealthy people’s investments crashed, homes went “underwater,” businesses folded, and so on. When Occupy started and gained attention, some polls showed that a greater percentage supported than opposed the movement (44% versus 35%, according to Pew Research). This may not seem impressive, but it was a good stat in a land where most people are persuaded that they can achieve “the American dream” with hard work and good luck.

Identity: We Are Not The 99%

Many blame social media for spreading hostility among the public, both in the US and elsewhere. And there can be no doubt that seeing what huge numbers of other people have on their minds is the most irritating thing imaginable. (Cyber-romantics of the 90s rhapsodized about the idea of a noosphere — a kind of collectivized global brain. On learning what’s going on in a lot of brains, I would suggest that this idea was, at best, premature. Détourning Sartre for the digital age: Hell is other people’s tweets.) Still, dare I suggest that there was a quantum leap in emphasis on identity divisions and anxieties in the immediate aftermath of Occupy? Was there, perhaps, a subterranean effort to convince us that we are decidedly not the 99%? I try to stay away from conspiracy theories, but the thought nags at me.

Not Happy To Be Disrupted

As I noted in an earlier column, a lot of people, living in precarity, are not happy to learn about new disruptive technologies. More people, including many who were techno-romantics back in the 90s, now feel more like the road than the steamroller in Stewart Brand’s metaphor. Programmers are now panicking about losing jobs to AI and I hear talk that some in the libertarian bastion of Silicon Valley are opening up to more populist ideas about engaging the state in some form of guaranteed income security.

A follow-up column, Risk and Precarity Part 2: The Age of Web3, is coming.


The Politics of Appropriation and the Active Use of Content-Creating AI

Will AI be used constructively or destructively? It possibly depends on sociological and political factors external to AI or technology. An irruption of barbarism (for example, in the much-ballyhooed upcoming American civil war) would bring destructive uses of AI, mainly fraud and trickery. It’s hard to think of a turn towards empathy doing much good, since it only takes a minority of bad actors to wreak havoc, but one can dream. From Wall Street traders laughing about screwing middle-class investors out of their 401(k)s during the financial collapse and bailout of 2008, to the relentless news of corporate predations, manipulative politics, and the plague of more street-level grifters tricking the elderly out of their cash, the evidence is strong that the abuse of AI will be front and center in our minds and discussions going into the immediate future.

But there is one area in which your relationship to AI may be more self-selective: the active versus the passive use of these apps and opportunities for creative work and experimentation. Here we have a richer and more complicated set of relations and possibilities.

Inappropriate Appropriation?

The artist Molly Crabapple recently posted a totalist objection to the use of AI in the arts, writing, “There’s no ethical way to use the major AI image generators. All of them are trained on stolen images, and all of them are built for the purpose of deskilling, disempowering, and replacing real artists.” On reading this, I started thinking about how, at Mondo 2000 (the magazine that I co-created), the use of appropriation in creative work was consistently advocated. The main idea, of course, was that you appropriate — use found materials — to make something original, drawing a line between plagiarism and use, although there were exceptions. Our writer Gareth Branwyn amusedly quoted the Austin, Texas-based Tape Beatles slogan “Plagiarism Saves Time”. Even our ever-provocative Mondo 2000 softened that for our own “appropriation saves time.”

One might compare the incursion of AI for creative use in visual art, writing, music, etc., to the advent of both the cassette recorder and the digital synthesizer. We saw the same reactions from musicians and the music industry. With home taping, users of the technology could make copies of recorded music by taping from the radio or a friend’s record collection. The tape itself could then also be copied. In the early ‘80s, the music industry adopted the slogan “Home Taping Is Killing Music”, engaged in several lawsuits, and lobbied the US Congress (as well as other institutions in Canada and Europe) for legal action to cover their perceived losses from the cassette-taping menace. With the advent of the digital synthesizer — the sampler — the floodgates opened to a deluge of conflicts over ownership of music content. Old musicians and their lawyers demanding money from young sampling whippersnappers fuelled the disappointment that GenXers felt about the Baby Boom generation.

For Mondo 2000, Rickey Vincent, author of Funk: The Music, The People and the Rhythm of the One, wrote about the connection between hip-hop, rap, and the cyberpunk aesthetic as enacted by that genre’s playful use of found materials via the technology of the digital sampler: “Sampling is the auditory form of hacking through a database. A certain functional anarchy is involved which one might argue is good for the soul. For hip-hop, a sampler is not a toy. It’s an important instrument in the function of the rap song statement.”

More broadly, in the pages of Mondo 2000, the audio-collage band Negativland, whose use of found material sometimes landed them in lawsuits and hot water, were given the kind of coverage that Rolling Stone would have reserved for Janet Jackson. Our friend and frequent subject, the literary avant-gardiste Kathy Acker, blatantly lifted entire pages out of classic texts, mashing them up with biographical material, fantasy, philosophy, and whatever else seemed to work to create her well-regarded (by some) novels. In his Mondo interview with Negativland, Beat historian Stephen Ronan declaimed, “appropriation is the hallmark of postmodernism.”

Mondo art director Bart Nagel’s playful take on our love affair with appropriation from Issue #10 is too amusing not to share in full:

Some guidelines for appropriation

1. Remember: Appropriation saves time.

2. Appropriate your images from old books and magazines where, chances are, all parties who could make a case against you are dead or failingly old.

3. Unfocus the image slightly to avoid the moiré pattern (in Photoshop try a 0.8 Gaussian blur).

4. Morph, tweak or otherwise alter the image unrecognizably.

5. Don’t alter the image at all; have Italian craftsmen sculpt a simulacrum (not guaranteed to work).

6. Appropriate images from MONDO 2000 – these may already have been appropriated. Let’s confuse the trail. 

7. Appropriate images from ads in RAY GUN and submit them to MONDO — now it’s come full circle — and it’s ecologically sound (recycling is good).

8. It’s hip hop.

9. And finally, this: if you take someone else’s image it’s appropriation, or resonating, or recommodification; if someone takes your image — it’s stealing.

Self-satire aside, the complications over use and reuse are myriad.

Credit: Tesfu Assefa

Culture Uses Culture: News Uses News

In journalism, the hard work of the person who “gets the story” will lead to multiple news items, most of which don’t credit the original source. For those engaged in consequential investigations, it is more important that the information spread accurately than for the originator to be repeatedly credited. Just as songs enter common usage for people to sing or play as they will in daily life, the hard work of the journalist becomes fodder for other news stories, dinner table debates, opinion columns, tantrums on TV or combat at conferences.

All of this is to say that the ownership of one’s content is the blurriest of lines. It certainly keeps our courts busy.

But Does AI Make It All Too Easy?

It’s true that using AI for creativity might be different from the sampling we’ve seen so far. Sometimes more becomes different. It’s a matter of degree: the amount of content grabbed by AIs, and the degree to which the origins of AI-created content may be obscured, arguably make it a different situation. The first cause for concern is that AIs may be good enough — or may get good enough soon — at some types of content creation that creative people will no longer be required. This is a situation touched on by my previous column about the writers’ strike. AI alienates human creatives in a way that sampling didn’t, and the concerns about it putting people out of work are being widely expressed — and are legitimate. When it comes to alienating types of labor, one response is some sort of guaranteed income, and a movement towards a sense of purpose around unpaid activities. The identity and self-esteem of the engaged creative is deeply embedded in that social role, and getting paid defines one as a capital-A Artist or capital-W Writer, because otherwise everybody does the same thing you do.

The artists’ natural affinity and passion for quality work is another source of angst, as covered in my previous article on ‘facsimile culture’. The replacement of quality work with the facsimile of quality strikes many creatives deeply; the war against mediocrity is a great motivator, particularly for alienated young creators finding their footing.

Back in the day, you couldn’t switch on your sampler or even your synthesizer, tell it to “make a recording that sounds just like Public Enemy with Chuck D rapping about Kanye West’s weird fashion sense”, and have it spit out something credible with no intervention from creators/copiers. The AI creation of a “fake” Drake and The Weeknd collaboration freaked some people out — mainly because they suspect that it took less creative effort than an actual collaboration between them would have. But sometimes laziness in music can also produce good results.

Finally, and probably most importantly, the degree to which creative AIs are tied into the billionaire and corporate classes validates Crabapple’s broad-brush claim that their primary intended uses are to serve those interests, and to disempower more freelance or democratic or unionized groups of creative workers. The list of large corporations and billionaires engaged in AI development includes Musk, Bezos, Brin, Peter Thiel, Google, Microsoft, and Baidu. These persons and organisms are all suspect. The notion that Big Tech wants to deliver us cool tools in a non-exploitive way has lost its luster since the more trusting days of early internet culture. The trend towards unionization increases the likelihood that these companies are acting out of anxiety to get rid of expensive and messy humans, as does the recent spate of layoffs.

For The Individual: The Passive v. Active Uses of AI

Still, there’s room for us to work and play with the tools handed down to us by the corporate monsters. (I type this on a Mac, designed by one of the world’s richest and most litigious corporations.)

Passive uses of AI range from the obvious things we are subjected to, like phone-answering bots that declaim “I understand full sentences. What can I help you with?”, to the automated checkouts at supermarkets, to whatever your bank or financial institutions are doing with your money. If you’ve been reading CNET or Buzzfeed and didn’t know that some articles were written by bots, you might, in some sense, feel you’re being used by chatbots. And if you were to use ChatGPT and give it one simple instruction — maybe asking it to write something for you about the current House of Representatives in the style of Hunter S. Thompson — that would be a passive use of ChatGPT. (The results would also be disappointing. The bot is prevented from getting mean about individuals, even when they deserve it.)

Active users have multiple strategies. With most creativity-oriented AIs you can, in a sense, hold a dialogue. You can respond, ask for more, use multiple prompts. The writer and sound engineer Paul Boutin (Wired, NY Times) posted impressive results on Facebook after asking ChatGPT to write a column in the style of the acerbic and critical tech-culture magazine Valleywag (now inactive). I asked him what his strategy was for getting more out of that app:

Boutin: “To get ChatGPT to write more like yourself, think about not only your prose style and vocabulary, but what mindset do you bring to your writing? I got it to do a pretty good version of an old Valleywag blog post by instructing it several times over a half dozen drafts: Don’t be even-handed, be unapologetic. Don’t say but convey that you know you’re smarter than the people you’re criticizing, even though they presume the opposite. Don’t hedge — you don’t care what they think or fear repercussions. Be funny, in a sarcastic condescending way. Don’t insult people with names, but express amusement at their foolish actions and words in the specific topic. Also to never say but always convey: Everyone thinks they can do what you do, but they couldn’t no matter how hard they try. They foolishly think they will be able to match your skills with enough prompt engineering.”

“Here’s what ChatGPT gave me on its sixth try:

 AI Authors: The Future of Laziness

Bravo, AI-author enthusiasts! Why work hard on your literary masterpieces when you can let an AI remix internet content? Skip the pesky interviews and firsthand accounts—let’s keep literature artificial and unoriginal. Who needs talent, depth, or hard work when you’ve got an algorithm? Just remember, every time you sip your latté while the AI ‘writes’, you’re celebrating a new era of mediocrity. Cheers to the rise of the machine!”

Bravo, Paul and “Chat”.
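For readers who want to try this kind of iterative, ‘active’ prompting programmatically rather than in a chat window, here is a minimal sketch using the OpenAI Python client. The model name, the style instructions, and the revision loop are placeholder assumptions for illustration, not the recipe Boutin actually used.

```python
# A minimal sketch of "active" prompting: keep the conversation, critique the
# draft, and ask for another pass, instead of accepting the first answer.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment;
# the model name and the instructions below are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a sarcastic, condescending tech-culture blogger. "
                                  "Never hedge; convey that you are smarter than your targets."},
    {"role": "user", "content": "Write a short post mocking AI-author enthusiasts."},
]

for round_number in range(3):  # a few drafting rounds, not just one shot
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = reply.choices[0].message.content
    print(f"--- draft {round_number + 1} ---\n{draft}\n")

    # The "active" part: respond to the draft with more specific direction.
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": "Funnier and more specific. Don't name-call; "
                                                "express amusement at foolish actions instead."})
```

The point of the loop is the same as Boutin’s half-dozen drafts: the human stays in the conversation, steering, rather than taking whatever comes out first.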

Another strategy for creativity has long been used by creatives engaged in appropriation. A lot of musicians (including David Bowie and Mick Jagger) used William S. Burroughs’ “cut-up technique” for recombining words to launch a lyric. A fragment of chatbot text could be deployed in the same manner: to get started, break writer’s block, write a transition, or sum up.
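As a toy illustration of the cut-up idea applied to machine text, here is a small sketch; the sample text, fragment length, and seed are arbitrary.

```python
import random

def cut_up(text, fragment_len=3, seed=None):
    """Burroughs-style cut-up: chop the text into short word fragments and reshuffle them."""
    words = text.split()
    fragments = [words[i:i + fragment_len] for i in range(0, len(words), fragment_len)]
    random.Random(seed).shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

# Paste any fragment of chatbot output here and rearrange it as raw material.
chatbot_fragment = ("the future of laziness arrives on schedule while the algorithm "
                    "sips its latte and celebrates a new era of mediocrity")
print(cut_up(chatbot_fragment, fragment_len=3, seed=42))
```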

It could, in fact, be argued that for a truly creative piece of writing built on a skeleton of facts, the facts are the boring part. It might not be a crime against writing to grab your skeleton entirely or almost entirely from a chatbot and flesh it out with your own imagination or insight. In the visual arts, AI might help you rapidly generate alternative samples of a work, varying shading, color, proportions, etc. This is very likely something you already use a machine to do. AI will simply be making the work happen faster. In other words, the active user is engaged in some conscious way with creative AI and doesn’t need to be told what tools to use. 

Risk and Precarity

In an economically, socially, sexually and environmentally anxious moment, the excitability of those inclined towards neophilia (love of the new) brushes up not just against neophobia, but against the very real conditions of our historical moment. Very few of us can dismiss the fears of being displaced, mislabeled, denied or messed about by people and institutions using AI. Technoculture was built on the romance of risk and “disruption”, and, now that the chickens are coming home to roost, culture is not altogether happy to be disrupted. A column about risk and precarity in relation to the culture of technology (which now is, of course, culture itself) beckons sometime soon…


Holographic Memories

What’s the first thing that comes to mind when I mention holograms? Is it Artoo-Detoo’s projection of Princess Leia? Perhaps Salvador Dalí’s The Crystal Grotto? Maybe you’re thinking of those shiny postcards that seem to change pictures when you tilt them in different directions. Or, if you’re a bit more modern, you might be thinking of Magic Leap’s video of a humpback whale jumping through a high school gymnasium. These are all exceptionally creative examples of using a medium to display more than what meets the eye. The trouble is, none of them are true holograms.

Credit:  Star Wars/Disney

By definition, a hologram is created by superimposing wavefronts on a physical medium. In physics, a wavefront is simply the set of points on a wave that are oscillating in unison at a given moment. Although it is possible to make holograms from non-optical waves, such as sound, we’ll mostly focus on holography by laser light. The technique is called transmission holography; holography itself was invented back in the late 1940s by Dennis Gabor, well before lasers made the version described here practical. A laser (Light Amplification by Stimulated Emission of Radiation) emits photons that are all lined up, which just means they’re in phase in the wave of light being emitted. So that’s what one laser does, and they’re pretty great for playing with cats. But if you have one laser, a beam splitter, a lens, a mirror, and a special material called a photographic plate, you can make a hologram.

The laser is shone at the beam splitter, which splits the light into two beams. The first beam is passed through a lens, which spreads out the light and projects it onto the object you’re trying to make a hologram of. Some of that light bounces off the surface of the object toward the photographic plate, which is placed to capture it. The second beam of laser light is redirected by a mirror so that it also lands on the photographic plate. When the two light waves reconvene on the plate, they arrive from different directions: one from the reflection off the surface of the object, the other from the mirror. Where they meet, the two waves mix together, reinforcing and cancelling each other at different points, which we call interference.
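For the numerically inclined, here is a minimal sketch of what the plate records when two coherent beams overlap. The wavelength, beam angle, and plate size are arbitrary illustrative values, not part of the description above.

```python
import numpy as np

# Two coherent plane waves (stand-ins for the object and reference beams)
# meet on a 1D strip of "plate" at a small relative angle. The plate records
# only intensity, and that intensity carries the interference fringes.
wavelength = 633e-9                    # red helium-neon-style laser, metres (assumed)
k = 2 * np.pi / wavelength             # wavenumber
theta = np.deg2rad(2.0)                # angle between the beams (assumed)

x = np.linspace(0.0, 100e-6, 4000)     # positions across 100 micrometres of plate

reference_beam = np.exp(1j * k * np.sin(theta) * x)   # tilted reference beam
object_beam = np.ones_like(x, dtype=complex)          # object beam arriving head-on

recorded_intensity = np.abs(reference_beam + object_beam) ** 2

# The bright/dark fringes repeat every wavelength / sin(theta).
print(f"predicted fringe spacing: {wavelength / np.sin(theta) * 1e6:.1f} micrometres")
print(f"recorded intensity range: {recorded_intensity.min():.2f} to {recorded_intensity.max():.2f}")
```

The fringes are the whole trick: the plate never stores a picture of the object, only this interference pattern.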

As an analogy, imagine that inside the plate there are tiny mirrors that partially reflect the light. As the light interferes within the plate, the pattern is captured by those tiny mirrors. Microscopically, you can see the interference in the form of abstract patterns in the plate. If you’re very careful and nothing moves during this process, the hologram is complete. When you want to view the hologram, you shine the original laser beam through the plate. The tiny mirrors reflect the recorded pattern, illuminating the plate so that it reconstructs an image of the object.

Now here is where the magic happens: you can take that hologram plate, cut out a tiny square, and throw away the rest. What will the result be when you shine the light through the tiny square? You’ll still see the whole object. The pattern of interference for that object is distributed over the entire plate. The information about how to reconstruct the image of the object is mixed all over the plate, which means you only need a piece of it to get the whole image back.

Distributed information isn’t a new concept, and it’s more relevant today than ever. Blockchain technologies have been rapidly gaining adoption among cryptocurrency users and in internet-of-things applications, banking, supply-chain management, and more. One of the core principles of blockchain is to distribute the information shared between parties in a way that increases the security of the entire collection of information. The information is secured by being distributed, not held in one location but spread over the entire blockchain network. If you’re not familiar with blockchain, this works because many computers, called nodes, each hold a copy of the blockchain record. Even if a very large portion of the nodes are deleted, the information stored in the remaining nodes still holds the whole record. This distribution of information is very similar to the distribution of light patterns all over the photographic plate in a hologram. In a blockchain, the distribution is curated by a mathematical algorithm, whereas in a hologram it is curated by interference.

However, there is a type of math that can describe the interference. Fourier (pronounced “Four-Yay!”) math is often used to describe how light information is transformed through holography. A Fourier transform converts a pattern into a combination of waveforms, and vice versa. In the case of our hologram, a wave was converted into a pattern of interference and then back into a wave. This process is often referred to as frequency analysis, since it analyzes which frequencies make up the pattern when moving between the pattern state and the wave state.
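To make the pattern-versus-wave idea concrete (and to echo the cut-a-piece-out-of-the-plate trick above), here is a small numerical sketch: keep only a small patch of an image’s Fourier spectrum and you still get back a blurry version of the whole image, not a missing corner. The image, patch size, and noise level are arbitrary choices for illustration.

```python
import numpy as np

# Build a simple "object": a bright square on a faintly noisy background.
rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[32:96, 32:96] = 1.0
image += 0.05 * rng.standard_normal(image.shape)

# The 2D Fourier transform spreads information about every pixel across the
# whole frequency plane, loosely the "interference pattern" of the image.
spectrum = np.fft.fftshift(np.fft.fft2(image))

# Throw most of the spectrum away, keeping only a small central patch,
# roughly like cutting a small square out of the holographic plate.
keep = np.zeros_like(spectrum)
c = spectrum.shape[0] // 2
keep[c - 12:c + 12, c - 12:c + 12] = spectrum[c - 12:c + 12, c - 12:c + 12]

reconstruction = np.fft.ifft2(np.fft.ifftshift(keep)).real

# The whole square is still there, just blurred: its region remains much
# brighter than the background in the reconstruction.
inside = reconstruction[32:96, 32:96].mean()
outside = reconstruction[:32, :32].mean()
print(f"mean inside the square: {inside:.2f}, mean outside: {outside:.2f}")
```

The analogy is loose (a real hologram reconstructs from any fragment of the plate, not just a central one), but it shows how information about the whole image can live everywhere in the transformed representation at once.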

In 1979, a vital discovery made Fourier transforms an interesting topic among neuroscientists. Karen and Russell De Valois discovered that brain cells in the visual cortex respond to the waveform (spatial-frequency) representation of a pattern, not only to the pattern itself. This led to a series of experiments by many neuroscientists trying to determine whether brain cells detect features of patterns in the world or patterns of frequencies in the waveforms coming from the world. While the jury is still out on this debate, the concept remains important today, as many deep learning models rely on feature detection.

Setting aside whether or not it’s true, the idea lends itself to an interesting debate about the brain. The brain may be using the same mathematical principles as holography. So the question arises: is the brain holographic? This question is explored in depth in Michael Talbot’s good read, “The Holographic Universe”, which describes the “holographic theory” first and independently introduced by David Bohm and Karl Pribram in the ‘80s. While for the sake of this discussion it works less as a theory and more as an analogy, the holographic theory can give us important insights into our own minds.

Let’s think about memories. When the brain is slightly damaged, the person doesn’t end up remembering only the first 12 letters of the alphabet; the memory as a whole might simply get fuzzier. This is what you would expect of distributed information. Memories are not located in a single neuron or a small group of neurons; they’re mixed about across various sectors of the brain. In an iconic and ethically questionable series of experiments, Paul Pietsch diced, sliced, and reshuffled salamander brains across some 700 operations. After the salamanders healed, they remembered how to feed and returned to normal behavior (see his book “Shufflebrain” for more details). The information about how to be a salamander was distributed across the brain in such a way that it was recoverable, even when it was all jumbled up.

Talbot describes how proponents of the holographic theory suggest that neurons fire according to the frequency of the input stimulus. For vision, the stimulus is a light wave; the cochlea in the ear captures sounds according to their frequencies; and skin responds to frequencies of vibration. Neurons then fire according to these input frequencies and interfere with surrounding neurons. As we know, when neurons fire together, the connections between them get stronger. This system is then thought to store the pattern of interference of the firing neurons.

The implications of memories being holographically represented in the brain help us understand memory better. First, a memory is distributed. Unlike in a von Neumann computer, there’s no central location where the memory of an object is stored. You can’t go to neuron A26 and find a memory, which means you can’t delete a memory like you would an unwanted file on your computer, nor can you upload it to the web. This helps explain how the brain is so beautifully adept at alternating between learned tasks with a finite number of cells: neurons most likely contribute to many different memories. The number of neurons needed is conserved because they are being multiplexed.

Secondly, when we remember something, many of the neurons that fired during the original experience fire again. We can understand this with our analogy by thinking back to the hologram: when the laser is shone through the exposed plate, the image is reconstructed. Likewise, if the right selection of neurons is stimulated, the memory will flood the person’s mind. The right light reconstructs the corresponding object; a different light can reconstruct a different object recorded in the plate. In the same way, our brains encode lots of information that doesn’t get mixed into one undifferentiated blob; it gets mixed into nearly precise and reconstructable representations. This matters if you want to remember not to stick your hand over a hot stove versus setting a pan on the hot stove to cook some pancakes. And this is consistent with what we know about learning. Memories are used as part of the learning process. If you saw a hot stove, stuck your hand over it, and got burned, you learned that it is probably not desirable to do that again. If later in life you learn that you can cook on a hot stove, you don’t suddenly forget not to touch it. The new memories do not replace the old memories; they are carefully mixed in.

Third, when only a partial stimulus is present, the brain will fill in the gaps with what it expects to be there. This concept is the basis of optical illusions, and it is why we perceive vision as a continuous experience even though each eye has a blind spot where the optic nerve exits the retina. Similarly, if partial or very weak light is used on the hologram, the image will be fuzzy and only partially reconstructed. To see the whole picture, we have to increase the amount of light going through the plate. The same is true for our brains. There is a mechanism for filling in the gaps, reproducing the stimulus we expect to be there even when all of the information isn’t available. This allows us to operate in a seemingly continuous flow of information while conditions are radically changing (think of the billions of photons crashing into our retinas throughout the day).
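As a rough computational analogy to this gap-filling (not a model of real neurons), here is a minimal sketch of a classic Hopfield-style associative memory: the stored patterns live spread across the whole weight matrix, and a damaged cue settles back into the nearest complete memory. The pattern count, sizes, and corruption level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 100
memories = rng.choice([-1, 1], size=(3, n_units))   # three stored "memories"

# Hebbian storage: units that are active together strengthen their connection.
# No single weight holds a memory; each memory is spread over the whole matrix.
weights = sum(np.outer(m, m) for m in memories) / n_units
np.fill_diagonal(weights, 0)

# Partial stimulus: take one memory and scramble 30 of its 100 units.
cue = memories[0].copy()
cue[:30] = rng.choice([-1, 1], size=30)

# Let the network settle: each unit repeatedly lines up with its weighted input.
state = cue.copy()
for _ in range(5):
    for i in rng.permutation(n_units):
        state[i] = 1 if weights[i] @ state >= 0 else -1

before = np.mean(cue == memories[0])
after = np.mean(state == memories[0])
print(f"cue matched the memory {before:.0%} before settling, {after:.0%} after")
```

The recall is content-addressable: you retrieve the memory by presenting a piece of it, not by looking it up at an address, which is exactly the contrast with a von Neumann machine drawn above.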

But Talbot and others who have tackled the holographic theory have expanded on these points to ask whether reality itself is a hologram. If we really are experiencing reconstructions of waveforms in our minds, then what exactly is the nature of reality? Is nature itself a hologram? The emerging concept of “simulation theory” (again, still not technically a theory), which proposes that we are actually living in a simulation, dovetails well with the holographic theory. The simulation hypothesis can be traced back to ancient origins in “The Butterfly Dream” by Zhuangzi in the 4th century BC, but it has recently gained traction due to its formulation by the popular AI philosopher Nick Bostrom. In 2003, Bostrom proposed the simulation argument trilemma, which holds that at least one of three propositions is almost certainly true, one of them being that we are living in a simulation. Several prominent physicists have debated the topic, and the discussion has treated the simulation hypothesis as at least plausible. Popular participants in the discussion include Elon Musk, Jürgen Schmidhuber, and Neil deGrasse Tyson.

In essence, the hypothesis proposes that all of reality is being computed by a simulation: that our lives are created for us by a computing device capable of simulating not only the world we live in but even ourselves, and that this simulated reality is computed so convincingly that we believe it is real. The connection between simulation theory and holographic theory lies in the nature of reality being abstracted away from our subjective experience of it. We experience patterns and images, but the true source of reality is something different. Or rather, whatever the innate source of our experience is, it is not at all like what we experience.

In holography, this is easier to understand. We experience an image of an object with our vision, but the object itself does not actually exist in this form; it exists as a waveform. Or, according to simulation theory, it exists as some computed source, akin to the bits that compose the letters you’re reading now. Until more science is served, we may never know the real reality.
