Holographic Memories

What’s the first thing that comes to mind when I mention a hologram? Is it Artoo-Detoo’s projection of Princess Leia? Perhaps Salvador Dali’s The Crystal Grotto? Maybe you’re thinking of those shiny postcards that seem to change pictures when you tilt them in different directions. Or, if you’re a bit more modern, you might be thinking of Magic Leap’s video of a humpback whale leaping through a high school gymnasium. These are all exceptionally creative examples of using a medium to display more than what meets the eye. The trouble is, none of them are true holograms.

Credit:  Star Wars/Disney

By definition, a hologram is created by superimposing wavefronts on a physical medium. In physics, a wavefront is simply a set of points on a wave that move in unison over time, all sharing the same phase. Although it is possible to make holograms with non-optical waves, such as sound, we’ll mostly focus on holography by laser light. The technique is called transmission holography, and the underlying method was invented back in the 1940s by Dennis Gabor. A laser (Light Amplification by Stimulated Emission of Radiation) emits photons that are all lined up, meaning they’re in phase with one another in the emitted wave of light. So that’s what one laser does, and they’re pretty great for playing with cats. But if you have one laser, a beam splitter, a lens, a mirror, and a special material called a photographic plate, you can make a hologram.

The laser is shone at the beam splitter, which splits the light into two beams. The first beam passes through a lens, which spreads out the light and projects it onto the object you’re trying to make a hologram of. Some of this light bounces off the entire surface of the object, and the photographic plate is placed to capture where those reflections land. The second beam of laser light is redirected by a mirror so that it also lands on the photographic plate. When the two light waves meet on the plate, they arrive from different directions: one from the reflection off the surface of the object, the other from the mirror. Where they overlap, the two waves mix together, which we call interference: they reinforce each other where they’re in phase and cancel where they’re out of phase. The video below shows this process well.

As an analogy, imagine that inside the plate there are countless tiny mirrors that partially reflect the light. As the light interferes inside the plate, the pattern is captured by these tiny mirrors. Under a microscope, you can see the interference as abstract patterns in the plate. If you’re very careful and nothing moves at all during this process, the hologram is complete. Now when you want to view the hologram, you shine the original laser beam through the plate. The recorded pattern redirects the light so that it reconstructs the original object wave, and the plate lights up to show the object.
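
To make this concrete, here is a minimal numerical sketch of the recording step, under simplified assumptions of my own (a single point scatterer standing in for the object, a paraxial approximation, arbitrary units). It is not a full optical simulation, just the essence of a reference wave and an object wave meeting on a plate.

```python
import numpy as np

# A minimal 1-D sketch of recording a hologram (illustrative assumptions:
# monochromatic light, a single point scatterer as the "object", a flat
# plate sampled along x, arbitrary units).
wavelength = 0.5
k = 2 * np.pi / wavelength                      # wavenumber of the laser light
x = np.linspace(-50, 50, 2000)                  # positions along the plate

# Reference beam: a plane wave arriving at a 10-degree angle via the mirror.
reference = np.exp(1j * k * x * np.sin(np.deg2rad(10)))

# Object beam: light scattered from a point 30 units in front of the plate
# (paraxial approximation of a spherical wave).
z_object = 30.0
object_wave = np.exp(1j * k * x**2 / (2 * z_object)) / z_object

# The plate can't record phase directly; it records the intensity of the two
# superimposed waves. The fringes in `plate` are the hologram.
plate = np.abs(reference + object_wave) ** 2
```

The recorded intensity contains a cross term between the reference and object waves, so multiplying the developed plate by the reference wave again yields a term proportional to the original object wave. That, in essence, is why shining the laser back through the plate reconstructs the object.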

Now here is where the magic happens: you can take that hologram plate, cut out a tiny square, and throw away the rest. What will the result be when you shine the light through the tiny square? You’ll still see the whole object. The interference pattern for that object is distributed over the entire plate. The information needed to reconstruct the image of the object is mixed all over the plate, which means you only need a piece of it to get the whole image back.

Distributed information isn’t a new concept, and it’s more relevant today than ever. Blockchain technologies have been rapidly gaining adoption among cryptocurrency users, internet-of-things applications, banking solutions, supply-chain management, and more. One of the core principles of blockchain is to distribute the information shared between parties in such a way that the security of the entire collection of information increases. Here, the information is secured by being distributed not in one location but over the entire blockchain network. If you’re not familiar with blockchain, this works because many computers, called nodes, each hold a copy of the blockchain record. Even if a very large portion of the nodes is deleted, the remaining nodes still hold the full record. This distribution of information is very similar to the distribution of light patterns all over the photographic plate in a hologram. In blockchain, the distribution is curated by a mathematical protocol, whereas in a hologram it arises from interference.

However, there is a type of math that can describe the interference. Fourier (pronounced “four-yay”) math is often used to describe how light information is transformed through holography. A Fourier transform converts a pattern into a sum of waves of different frequencies, and vice versa. In the case of our hologram, a wave was converted into a pattern of interference and then back into a wave. This process is often referred to as frequency analysis, since it analyzes the frequencies that link the pattern state to the wave state.
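
The plate-cutting trick from earlier can be illustrated with a few lines of code. This is only an analogy (a Fourier-style hologram rather than the transmission setup described above, and the image size and test object are arbitrary choices of mine): because every point of the transform receives contributions from every point of the object, keeping only a tiny piece of the “plate” still reconstructs the whole image, just at lower resolution.

```python
import numpy as np

# A rough analogy: treat the 2-D Fourier transform of an image as the "plate".
object_img = np.zeros((256, 256))
object_img[100:160, 80:180] = 1.0                    # a simple stand-in object

plate = np.fft.fftshift(np.fft.fft2(object_img))     # the whole "plate"

# "Cut out a tiny square" around the centre and throw away the rest.
piece = np.zeros_like(plate)
c, half = 128, 16
piece[c - half:c + half, c - half:c + half] = plate[c - half:c + half, c - half:c + half]

whole_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(plate)))
piece_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(piece)))
# piece_recon still shows the entire rectangle, only blurrier: the information
# was spread over the whole plane rather than stored in any one spot.
```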

In 1979, a vital discovery made Fourier transforms an interesting topic amongst neuroscientists. Karen and Russell De Valois discovered that brain cells in the visual cortex respond to the waveform representation of a pattern, not only to the pattern itself. This led to a series of experiments by many neuroscientists trying to determine whether brain cells detect features of patterns in the world or patterns of frequencies from waveforms in the world. While the jury is still out on this debate, the question remains important today, as many deep learning models rely on feature detection.

Setting aside whether or not it’s true, the idea lends itself to an interesting debate about the brain. The brain may be using the same mathematical principles as holography. So the question arises: is the brain holographic? This question is explored in depth in Michael Talbot’s good read, “The Holographic Universe”, which describes the “holographic theory” first introduced, independently, by David Bohm and Karl Pribram in the 1980s. While for the sake of this discussion it serves less as a theory and more as an analogy, the holographic theory can give us important insights into our own minds.

Let’s think about memories. When the brain is slightly damaged, the person doesn’t end up remembering, say, only the first 12 letters of the alphabet; instead, the memories might just get fuzzier. This characteristic is a signature of distributed information. A memory is not located in a single neuron or a small group of neurons; it’s mixed across various sectors of the brain. In an iconic and ethically questionable series of experiments, Paul Pietsch diced and sliced salamander brains some 700 times and reconfigured them. After the salamanders healed, they still remembered how to feed and returned to normal behavior (see his book “Shuffle Brain” for more details). The information on how to be a salamander was distributed across the brain in such a way that it was recoverable, even after being jumbled up.

Talbot describes how proponents of the holographic theory suggest that neurons fire according to the frequency of the input stimulus. For vision, the stimulus is a light wave; the cochlea in the ear captures sounds according to their frequencies; and skin responds to frequencies of vibration. Neurons then fire according to these input frequencies and interfere with surrounding neurons. As we know, when neurons fire together, the connections between them get stronger. This system is then thought to store the pattern of interference of the firing neurons.

The implications of memories being holographically represented in the brain help us understand memories better. First, a memory is distributed. Unlike in a von Neumann computer, there’s no central location where the memory of an object is stored. You can’t go to neuron A26 and find a memory, which means you can’t delete a memory like you would an unwanted file on your computer, nor can you upload it to the web. This also helps explain how the brain is so beautifully adept at alternating between learned tasks with a finite number of cells: each neuron most likely contributes to many different memories. The number of neurons needed is conserved because they are multiplexed.

Secondly, when we remember something, many of the neurons that fired during the original experience fire again. We can understand this with our analogy: when the laser is shone back through the exposed plate, the image is reconstructed. Likewise, if the right selection of neurons is stimulated, the memory floods the person’s mind. The right light reconstructs the corresponding object, and a different light can reconstruct a different object from the same plate. In the same way, our brains encode lots of information that doesn’t get mixed into one indistinguishable blob; it gets mixed into nearly precise and reconstructable representations. This is important if you want to remember not to stick your hand over a hot stove versus setting a pan on the hot stove to cook some pancakes. And this is consistent with what we know about learning. Memories are used as part of the learning process. If you saw a hot stove, stuck your hand over it, and got burned, you learned that doing it again is probably not desirable. If later in life you learn that you can cook on a hot stove, you don’t suddenly forget not to touch it. The new memories do not replace the old memories; they are carefully mixed in.

Third, when only a partial stimulus is present, the brain fills in the gaps with what it expects to be there. This concept is the basis of optical illusions and of why we perceive vision as a continuous experience, even though each eye has a blind spot where the optic nerve exits the retina. If partial or very weak light is used on the hologram, the image will be fuzzy and only partially reconstructed. To see the whole picture, we have to increase the amount of light going through the plate. The same is true for our brains. There is a mechanism for filling in the gaps, reproducing the stimulus we expect to be there, even when all of the information isn’t available. This allows us to operate in a seemingly continuous flow of information while conditions are radically changing (think of the billions of photons crashing into our retinas throughout the day).
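
All three properties (distributed storage, recall from partial cues, and new memories mixing into old ones) show up in a toy associative memory. To be clear, this is not the holographic model itself, just a small Hopfield-style sketch of my own with arbitrary sizes, showing how information spread across many connection weights can be recovered from a partial cue.

```python
import numpy as np

# Toy Hopfield-style associative memory: memories are stored as a sum of
# outer products, spread across every connection weight (no single "memory
# neuron" exists), and a partial cue is completed back into a whole memory.
rng = np.random.default_rng(0)
n = 200
memories = np.sign(rng.standard_normal((3, n)))   # three +/-1 patterns

# Hebbian storage: every memory is mixed into the same weight matrix.
W = sum(np.outer(m, m) for m in memories) / n
np.fill_diagonal(W, 0)

# Partial cue: keep the first half of memory 0, scramble the rest.
cue = memories[0].copy()
cue[n // 2:] = np.sign(rng.standard_normal(n // 2))

# Recall: let the "neurons" settle; the gaps get filled in.
state = cue
for _ in range(10):
    state = np.sign(W @ state)

print(np.mean(state == memories[0]))   # typically 1.0: the whole memory returns
```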

Talbot and others working on the holographic theory have expanded on these points to ask whether reality itself is a hologram. If we really are experiencing reconstructions of waveforms in our minds, then what exactly is the nature of reality? Is nature itself a hologram? The emerging concept of the “simulation hypothesis” (again, technically not a theory), which proposes that we are actually living in a simulation, coincides well with the holographic theory. The simulation hypothesis can be traced back to ancient origins such as Zhuangzi’s “The Butterfly Dream” from the 4th century BC, but it has recently gained traction thanks to a formulation by the philosopher Nick Bostrom. In 2003, Bostrom proposed the simulation argument, a trilemma which posits that at least one of three propositions must be true, one of them being that we are almost certainly living in a simulation. Several prominent physicists have debated the topic, and the discussion has framed the simulation hypothesis as plausible. Popular voices in the discussion include Elon Musk, Jürgen Schmidhuber, and Neil deGrasse Tyson.

In essence, the hypothesis proposes that all of reality is being computed by a simulation: that our lives are created for us by a computing device capable of simulating not only the world we live in, but even ourselves, and that this simulated reality is computed so convincingly that we believe it is real. The connection between the simulation hypothesis and the holographic theory lies in the nature of reality being abstracted away from our subjective experience of it. We experience patterns and images, but the true source of reality is something different. Or rather, whatever the innate source of our experience is, it is not at all like what we experience.

In holography, this is easier to understand. We experience an image of an object through our vision, but the object itself does not actually exist in this form; it exists as a waveform. Or, according to the simulation hypothesis, it exists as some computed source, akin to the bits that compose the letters you’re reading now. Until more science is served, we may never know the real reality.


AGI Lessons from Holography

When we think about creating an artificial general intelligence (AGI), we mostly think about what types of tasks it should be able to solve, which environments it should be able to perform in, or what types of learning it should be capable of. However, in most people’s vision of AGI there is an implicit, desired ability that is often overlooked.

If we want our AGI to be able to solve many of the tasks that humans solve, then it must share a basis of reality with us. Reality here can be likened to the fundamental substrate of all things, the thing the scientific field of physics strives to find laws of order for. To understand this shared basis of reality, indulge in the following thought experiment.

Credit: Rachel StClair

Suppose you are playing the video game Super Mario Bros, but instead of playing the game as you would on your computer, you are the character in the game, controlling your own in-game choices. Super Mario Bros is your universe, and the mechanics of that game are your reality. Now you want to complete a task in Super Mario Bros. You’ve been living in the Super Mario Bros game as a player all your life, so you have some understanding of how its universe works. You’ve built a model of what actions you can and can’t take, and of how you might plan to achieve your task. Now suppose you’re suddenly dropped from Super Mario Bros into the old Atari game Pong. Without forgetting everything you know about the universe of Super Mario Bros and learning the universe of Pong, you won’t be able to operate, because there is virtually nothing shared between the two universes. If you play as though the rules of Super Mario Bros are the rules of Pong, you’ll fail.

This is what it’s like trying to create an AGI. We have something that doesn’t understand the rules of the game, and we’re expecting it to operate as if it understands the world as we do. A computer program doesn’t understand, or at least doesn’t have a conceptual understanding of, our reality. We need to teach it how our universe works, as best we can. Even more so, we need to teach our AGI how to learn for itself how our reality works: a reality that includes video games, quantum mechanics, and the fact that cats can sometimes act as liquids even though they should be solids, among other things.

If we, as humans, have learned how to interpret whatever reality really is, then our learning method, or rather interpretation method, might benefit an AGI. So when we are considering how to make AGI, let us consider the nature of reality. 

After all, the more advanced lifeforms on Earth share an experience, some sort of objective reality which we can all agree upon. This points to the brain as an interpretation device. It takes in signals from the universe and converts them into a representation that helps brain owners navigate and learn to survive in their environment. But how does the brain do this?

There are many answers to this question, provided by neuroscience, ancient civilizations, mystics, and the like. However, there is one particular answer that affords an interesting perspective for AGI design: the holographic theory. Since it isn’t what most scientists would classify as a real theory, complete with tests to check its assumptions, we’ll refer to it as the holographic model. The holographic model was first introduced, independently, by David Bohm and Karl Pribram in the 1980s. The model is explored in depth in Michael Talbot’s good read, “The Holographic Universe”.

Starting with Pribram’s work, Talbot describes how various neuroscience and psychology studies suggest that neurons fire according to input stimuli represented as frequencies. For vision, the stimulus is a light wave; the cochlea in the ear captures sounds according to their frequencies; and skin responds to frequencies of vibration. Neurons then fire according to these input frequencies and interfere with surrounding neurons. As we know, when neurons fire together, the connections between them get stronger. Thus, neurons are thought to respond to wave frequencies, and the connection strengths between neurons are thought to store the pattern of interference created by wavefronts.

The second facet of the holographic model comes not from brain-related research, but from one of Einstein’s colleagues, David Bohm. Experimental results that were difficult to make sense of under the prevailing interpretation of quantum mechanics found a natural description in Bohm’s alternative quantum philosophy. Among these results, early research on spooky action at a distance is the most useful for understanding Bohm’s idea. Particles (e.g. electrons) behave in such a way that two of them can be observed very far away from each other and yet show perfectly correlated spin behavior. Think of spin like a special dance electrons do while they float around. So how could two separate objects exhibit correlated behavior even when they are very, very far away from each other?

Bohm postulated that it is because the particles are actually not far away from each other; in a deeper sense, they are in the same place. This is what physicists call non-locality, meaning that the notion of separate locations doesn’t apply. The concept can be confusing, and a full treatment can’t be given so briefly. Bohm attributes this behavior to the explicate and implicate orders of reality: reality is enfolded in the implicate order, which is non-local, and when we observe it, reality unfolds into the explicate order, with locality, patterns, and distinction. When the electrons are unfolded so that their spin states can be measured, they come from the implicate order, where they are enfolded and where information is shared and distributed across the whole, because everything is one. Thus, when they are unfolded, the information about their spin states is retained from the enfolded state.

What Bohm and Pribram have in common is that both touch on the nature of reality being a hologram. A hologram is made by two waves interfering with each other as they meet on a photographic plate. The plate records the interference pattern such that when light is shone back through the plate, the recorded object is projected.

The analogy of the holographic model helps us understand a different kind of system: a system that can’t be reduced to the sum of its parts. This means that a reductionist approach is the wrong approach to understanding reality under the holographic model. In science, reductionism is a way of breaking a larger concept or phenomenon down into a minimal number of smaller parts and the interactions of those parts. In complex systems science, emergent phenomena occur when the parts of a system combine to create the whole. Here, the parts don’t simply add up to the whole system, because new behavior emerges as the parts interact. Complex systems are non-reductionist: disorder is organized into order, true randomness doesn’t exist, every action has a precise reaction, and the cycle continues.

Likewise, when the brain processes information (according to the holographic model), it doesn’t break the incoming information into bits and store each bit as a separate part. If this were the case, it would be impossible to store each memory in its own neuron; we just don’t have enough neurons, and we’re not growing new ones every time we form a memory. Instead, we multiplex neurons. We distribute information, such as memories, all over large modules of the brain.

Credit: Tesfu Assefa

Even so, traditional artificial intelligence and some neuroscientists still cling to the reductionist model. Deep learning constructs feature maps that build up parts of the original inputs in order to classify the correct output. With enough feature layers, we can trace back which object was seen from a particular neural net output.
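
As a concrete illustration of that part-by-part style of representation, here is a tiny, untrained sketch (the layer sizes and input shape are arbitrary choices of mine, not a reference architecture): each channel produced by a convolutional layer is a feature map, a local detector for one small part of the input.

```python
import torch
import torch.nn as nn

# A minimal, untrained convolutional feature extractor (illustrative only).
features = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # first-layer maps: edges, blobs
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # second-layer maps: parts built from parts
    nn.ReLU(),
)

image = torch.randn(1, 1, 28, 28)   # a stand-in for one grayscale input image
maps = features(image)              # shape: (1, 16, 28, 28)
print(maps.shape)
# Each of the 16 channels is a "feature map": a localized, part-by-part
# representation of the input -- the reductionist style discussed above.
```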

Instead of storing and processing information bit-by-bit, perhaps we should consider a non-reductionist approach. One that doesn’t try to force the disorder of reality into order, but instead one that tries to interpret the disorder in a meaningful way. 

The results are likely to lead to an AGI that is more like humans: more understandable, in the same way that you can try to understand another person’s decisions. It may also lead to a stronger shared interpretation of reality and afford some measure of safety. The more we share, the more likely we are to come to the same conclusions and the same goals, like protecting the human race and making the world a better place to live.
