
AGI Lessons from Holography

May 11, 2023.
7 min. read.

Delving into the brain's interpretation of reality, Rachel St. Clair tackles the challenge of creating AGI and proposes a holographic model approach.

About the Writer

Rachel St. Clair is the co-founder and CEO of Simuli, a company focused on developing powerful computers that help humanity face existential risks. With a Ph.D. in Complex Systems and Brain Sciences, she aims to create technology that enhances human cognition and behavior. Dr. St. Clair is also a Quantum Computing researcher with a background in Biological Science.

Credit: Tesfu Assefa

When we think about creating an artificial general intelligence (AGI), we mostly think about what types of tasks it should be able to solve, which environments it should be able to perform in, or what types of learning it should be capable of. However, most people's vision of AGI also rests on an implicit ability that is usually overlooked.

If we want our AGI to be able to solve many of the tasks that humans solve, then there must be a basis of shared reality. Reality, here, can be likened to the fundamental substrate of all things, the substrate for which the scientific field of physics strives to find laws of order. To understand this shared basis of reality, indulge in the following thought experiment.

Credit: Rachel St. Clair

Suppose you are playing the video game Super Mario Bros, but instead of playing the game as you would on your computer, you are the character in the game, controlling your own in-game choices. Super Mario Bros is your universe, and the mechanics of that game are your reality. Now you want to complete a task in Super Mario Bros. You've been living in the Super Mario Bros game as a player all your life, so you have some understanding of how its universe works. You've built a model of what actions you can and can't take, and of how you might plan to achieve your task. Now suppose you're suddenly dropped from Super Mario Bros into the old Atari game Pong. Unless you forget everything you know about the universe of Super Mario Bros and learn the universe of Pong, you won't be able to operate, because virtually nothing is shared between the two universes. If you play as though the rules of Super Mario Bros were the rules of Pong, you'll fail.

This is what it's like trying to create an AGI. We have something that doesn't understand the rules of the game, and we're expecting it to operate as if it understands the world as we do. A computer program doesn't understand our reality, or at least has no conceptual understanding of it. We need to teach it how our universe works, as best we can. Even more, we need to teach our AGI how to learn for itself how our reality works: a reality that includes video games, quantum mechanics, and cats that can sometimes act as liquids even though they should be solids, among other things.

If we, as humans, have learned how to interpret whatever reality really is, then our learning method, or rather our interpretation method, might benefit an AGI. So when we consider how to make AGI, let us also consider the nature of reality.

After all, the more advanced lifeforms on Earth share an experience: some sort of objective reality we can all agree upon. This points to the brain as an interpretation device. It takes in signals from the universe and converts them into a representation that helps its owner navigate, learn, and survive in its environment. But how does the brain do this?

There are many answers to this question, provided by neuroscience, ancient civilizations, mystics, and the like. However, one particular answer affords an interesting perspective for AGI design: holographic theory. Since most scientists would not classify it as a proper theory, complete with tests to check its assumptions, we'll refer to it as the holographic model. The holographic model was introduced independently by David Bohm and Karl Pribram in the 1970s and '80s, and is explored in depth in Michael Talbot's good read, "The Holographic Universe".

Starting with Pribram's work, Talbot describes how various neuroscience and psychology studies suggest that neurons fire according to input stimuli represented as frequencies. For vision, the stimulus is a light wave; the cochlea in the ear captures sounds according to their frequencies; and skin responds to frequencies of vibration. Neurons fire according to these input frequencies, and their activity interferes with that of surrounding neurons. As we know, when neurons fire together, the connections between them get stronger. Thus, neurons are thought to respond to wave frequencies, while the strengths of the connections between them store the interference patterns created by the wavefronts.
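This wave-storage idea can be sketched in a few lines of NumPy. This is only a toy analogy, not Pribram's actual model: two random complex "wavefronts" stand in for neural firing patterns, their interference is stored as a matrix of "connection strengths", and replaying one wave through the matrix reconstructs the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "wavefronts": a reference pattern and an object pattern,
# modeled as unit-magnitude complex phase vectors (hypothetical
# stand-ins for neural firing frequencies).
n = 256
reference = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, n))

# The "plate": connection strengths recording the interference
# of the two waves (outer product of one with the other's conjugate).
plate = np.outer(obj, reference.conj())

# "Shining" the reference wave through the plate reconstructs the
# object wave (the reference correlates with itself, scale factor n).
reconstruction = plate @ reference / n

print(np.allclose(reconstruction, obj))  # True
```

The same trick, recording a reference-times-object interference pattern and replaying the reference, is how an optical hologram reconstructs its image.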

The second facet of the holographic model comes not from brain research, but from one of Einstein's colleagues, David Bohm. Experimental results that were hard to make sense of under the standard interpretation of quantum mechanics found a natural reading in Bohm's new quantum philosophy. Among these, early research on "spooky action at a distance" is the most useful for understanding Bohm's idea. Particles (e.g. electrons) behave in such a way that two of them, observed very far apart, show perfectly correlated spin behavior. Think of spin as a special dance electrons do as they float around. How could two different objects exhibit correlated behavior even when they are very, very far away from each other?

Bohm postulated that this is because the particles are not actually far away from each other; they are, in a sense, in the same location. This is what physicists call non-locality: the particles have no definite, separate locations. The concept can be confusing, and a full account can't be given so briefly. Bohm attributes this ability to two orders of reality, the implicate and the explicate. Reality is enfolded in the implicate order, with non-locality; when we observe it, reality is unfolded into the explicate order, with locality, patterns, and distinction. When the electrons are unfolded to measure their spin states, they come from the implicate order, where they are enfolded and information is shared and distributed across the whole, because everything is one. So when they are unfolded, the information about the spin state is retained from the enfolded state.

What Bohm and Pribram have in common is that both touch on the nature of reality being a hologram. A hologram is made by two waves of light interfering with each other as they collide on a photographic plate. The plate records the interference pattern such that when light is later shone through the plate, the recorded object is projected.

The analogy of the holographic model helps us understand a different kind of system: one that can't be reduced to the sum of its parts. This means that, under the holographic model, a reductionist approach is the wrong way to understand reality. In science, reductionism breaks a larger concept or phenomenon down into a minimal number of smaller parts and the interactions of those parts. In complex-systems science, emergent phenomena occur when the parts of a system combine to create a whole. Here, the parts don't simply add up to the whole, because new behavior emerges as the parts interact. Complex systems are non-reductionist: disorder is organized into order, true randomness doesn't exist, every action has a precise reaction, and the cycle continues.
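Emergence of this kind can be sketched with an elementary cellular automaton, an example of my own choosing rather than one from the article. In Rule 110, each cell obeys one trivial local rule, yet the row as a whole develops moving structures that no single cell's rule describes.

```python
# Rule 110: the rule number's binary digits are the update table
# for the 8 possible (left, center, right) neighborhoods.
RULE = 110

def step(cells):
    """Apply Rule 110 once to a row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        out.append((RULE >> idx) & 1)
    return out

row = [0] * 31 + [1] + [0] * 31   # start from a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Running it prints a growing triangular pattern of interacting structures; nothing in the three-cell rule table hints at that large-scale behavior.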

Likewise, when the brain processes information (according to the holographic model), it doesn't break the incoming information into bits and store each bit as a separate part. If that were the case, it would be impossible to store each of our memories in a single neuron: we just don't have enough neurons, and we don't grow a new one every time we form a memory. Instead, we multiplex neurons, distributing information, such as memories, across large modules of the brain.

Credit: Tesfu Assefa

Even so, traditional artificial intelligence and some neuroscientists still cling to the reductionist model. Deep learning constructs feature maps that build up parts of the original inputs in order to classify the correct output. With enough feature layers, we can investigate what object was seen from a particular neural net output.
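The feature-map idea can be sketched with a single hand-written convolution. The image, kernel, and sizes below are invented for illustration: a vertical-edge filter responds only where the input has a vertical edge, exactly the kind of "part" a first network layer extracts before later layers compose parts into objects.

```python
import numpy as np

# A toy 6x6 "image": a bright square on a dark background.
img = np.zeros((6, 6))
img[1:5, 1:5] = 1.0

# A hypothetical first-layer filter: a vertical-edge detector
# (positive on the left column, negative on the right).
kernel = np.array([[1.0, 0.0, -1.0]] * 3)

def conv2d(x, k):
    """Valid-mode 2D cross-correlation, as in a conv layer."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

feature_map = conv2d(img, kernel)
print(feature_map)  # strong responses only at the square's two vertical edges
```

The feature map is zero over the square's interior and large only at its left and right edges, a local "part" of the input, which is the reductionist style of representation the article contrasts with distributed, holographic storage.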

Instead of storing and processing information bit-by-bit, perhaps we should consider a non-reductionist approach: one that doesn't try to force the disorder of reality into order, but instead tries to interpret the disorder in a meaningful way.

The results are likely to lead to an AGI that is more human-like and more understandable, in the same way that you can try to understand another person's decisions. It may also lead to a stronger shared interpretation of reality and afford some measure of safety. The more we share, the more likely we are to come to the same conclusions and the same goals, like protecting the human race and making the world a better place to live.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.

4 thoughts on “AGI Lessons from Holography”

  1. Love the idea. I have thought to replace teachers with this method forever. This is the real deal, no child left behind.
  2. I enjoyed reading this article and I applaud Dr. St. Clair for her work. I have two questions: 1) The article establishes that our neurons interpret reality and fire according to input stimuli from the outside world, whether it is a hologram or not. How can we replace or reverse engineer this in AGI development? Does this mean that the embodiment approach is a must in order to incorporate external stimuli? 2) A hologram is a flat object and the encoded information makes it look like 3D. So reality for humans depends on perception and perception is mostly related to the outside world and the 3D. What can be done for an AGI that is always inside this 2D? Even if we use simulated worlds to teach the AGI, it is still 2D for it, isn’t it?
    1. Hi @alamin! Thanks for your encouragement. As for your questions, I will respond within the context of the holographic model, although there are many other approaches to AGI and many types of AGI that could be created. Personally, I think the lesson here is that we want a shared understanding of reality. The holographic model may or may not be true, but its dynamical-systems approach is somewhat valuable. 1. Stimuli can be external, e.g. vision, audio, etc., or internal, e.g. neurons in one cortical layer sending waves of firing patterns to another. Embodiment is an essential component for a shared reality, but the form of embodiment will likely be different from our own. Rocks, octopuses, cats... they all have different embodiments but take part in our shared reality. There are some recurring themes to embodiment, though, on which I highly recommend Gary Marcus' book "Kluge". We may want to look at these themes and see what is fundamental to creating the types of neuron activity shared across them, for example camera eyes. 2. We could philosophize on whether perception is external or internal and what constitutes dimensionality, but I think what you're asking is whether it is important that we start bringing more sensory dimensions to AI/AGI. There are many "3D" training environments; OpenAI's "hide-and-seek" was quite interesting (although not AGI), and AI robotics focuses on this. I think the core thing is the perception-action loop: that the actions available to the AGI are similar to what we could do as humans. Whether they perceive some trippy one-billion-D reality or a typical 2D reality, their ability to relate that information to the laws of physics as humans know them will be crucial (at least for a human-like AGI). This is kind of the question with automatic theorem proving... that approach might make a superintelligence, but at the point where we can't interpret the maths, it becomes difficult to see the benefit until we can catch up.
      1. Dearest Dr. St. Clair, first I am super impressed to see you reply to my comment, and thank you very much. Your answer to question one has satisfied me, and for question 2, emm, yeah, for me it is more of a philosophy, and perception is external. I am so thankful for your examples. Keep up the good work!