Grokking Christopher Langan’s theory of reality

In my last book there was a chapter on Artificial Intelligence (AI), and consciousness at large, titled ‘Bats and bits’. In this chapter, I said that I had not used ChatGPT or another AI assistant to write the book. “I guess I’ll use AI assistants for future writings, but I wanted this book to come entirely from my own unaided mind,” I said.

Some months have passed and I’m using AI assistants a lot (mostly Grok because it comes with my paid Twitter membership, but also the free versions of ChatGPT and Copilot).

I’m using AI assistants to summarize long texts, and to expand short texts about things that I understand well enough to correct their hallucinations.

I decided to try an experiment: asking AI to explain to me things that I don’t understand.

The smartest man in the world?

I chose something that is intriguing me at this moment: Christopher Langan’s ‘Cognitive-Theoretic Model of the Universe’ (CTMU).

I had skimmed through some of Langan’s writings in the past without spending much time and attention. But I formed a certain vague impression that Langan’s ideas have something (or perhaps much) in common with mine.

By the way, who is Christopher Langan? He is, reportedly, ‘the smartest man in the world’ (video interview).

I don’t know about that, but he does sound smart. I love the idea of a theory of reality developed by a farmer and former bouncer, used to hard physical work.

So I asked Grok:

“Can you give me a compact explanation of Christopher Langan’s CTMU and what it says about God, free will, afterlife, and the ultimate meaning of life?”

In passing, Grok’s name comes from the term “grokking”, introduced by science fiction writer Robert Heinlein, which means something like understanding things deeply, with total empathy, from the inside.

Credit: Tesfu Assefa

God according to Langan

Grok’s answer:

“Christopher Langan’s Cognitive-Theoretic Model of the Universe (CTMU) is an ambitious framework aiming to bridge the divide between science, philosophy, and theology. Here’s a compact explanation:

God: In the CTMU, God is not an external supernatural being but is identified with the universe itself, or more specifically, with the ‘Global Operator-Designer’ or ‘Primary Teleological Operator.’ Langan suggests that properties like omnipresence, omniscience, and omnipotence are inherent to the structure of the universe, which operates as a self-configuring, self-processing language (SCSPL). This perspective aligns God with the ultimate reality, where the cosmos is seen as a manifestation of divine intelligence or mind.

Free will, afterlife, and the meaning of life

Free Will: Langan posits that free will exists within a stratified framework where both God and humans have self-determinacy. This free will is tied to the universe’s ability to self-process and self-configure, suggesting that while our choices are influenced by broader cosmic structures, they are still free within the constraints of these structures. Human free will is seen as a reflection or microcosm of the cosmic free will.

Afterlife: The CTMU doesn’t traditionally define an afterlife in terms of heaven or hell as separate realms. Instead, it suggests that upon death, consciousness might retract to the origin of reality, which could be interpreted as a form of afterlife or reincarnation. Langan has mentioned that all lifetimes might occur metasimultaneously in a non-terminal domain, implying an existence beyond our linear perception of time where consciousness could persist or be redistributed.

Ultimate Meaning of Life: According to Langan, life’s ultimate meaning is tied to the teleological framework of the universe. This means that the purpose or meaning of life is inherently linked to the process of self-discovery, self-improvement, and aligning with the telic (purposeful) nature of the universe. The meaning of life includes contributing to the universe’s ongoing evolution and refinement, with each individual’s life having a role in this cosmic process.

Langan’s views, while complex and sometimes controversial, attempt to reconcile scientific and spiritual understandings by presenting the universe as a coherent, self-sustaining system where mind and matter are intertwined. However, his ideas remain speculative and are not widely accepted within mainstream scientific or philosophical circles.”

Cool! I can recognize some ideas here that I’ve had in mind for a while.

Self-Configuring Self-Processing Language (SCSPL)

But I want to better understand parts of Grok’s answer. Moreover, I need to check whether Langan has really written these things or Grok has hallucinated them.

So I ask Grok: “Could you give me a more detailed explanation of SCSPL?”

Grok says:

“The Self-Configuring Self-Processing Language (SCSPL) is a central concept in Christopher Langan’s Cognitive-Theoretic Model of the Universe (CTMU). Here’s a detailed explanation:

What is SCSPL?

SCSPL (Self-Configuring Self-Processing Language) is a conceptual framework proposed by Christopher Langan to describe how the universe functions. This model posits:

Self-Configuring: The universe configures its own laws and structure. Instead of having pre-set laws from an external source, the universe, through SCSPL, is seen as dynamically adjusting its own rules and configurations.

Self-Processing: Everything in the universe, including any form of intelligence or processing, is seen as part of the universe processing itself. This means that consciousness, intelligence, and physical processes are not separate from the universe but are intrinsic to it.

Language: The term ‘language’ here is not conventional; it refers to a fundamental structure or syntax by which reality operates. At its core, reality is seen as informational, with SCSPL being the ‘code’ or ‘syntax’ that governs how this information manifests as physical reality.

More about the SCSPL

Reality as a Language: SCSPL treats reality as a self-referential language where the universe writes its own script. This language is self-contained, meaning the universe itself is the language, the medium, and the message.

Teleology and Purpose: The universe, through SCSPL, is goal-directed or teleological. This implies an inherent purposefulness, where the universe aims towards increasing complexity, coherence, and self-awareness.

Consciousness and Free Will: Within this model, consciousness isn’t just a byproduct of physical processes but is fundamental to the universe’s operation. Free will is understood as the universe’s ability to choose its own configurations and processes, which individuals also partake in on a smaller scale.

Unity of Mind and Matter: SCSPL suggests a monism where mind and matter are not separate but different expressions of the same underlying reality.”

Other people interested in Langan’s ideas sent me other AI-generated summaries that are essentially similar to Grok’s. Now I have enough keywords to search for these ideas in Langan’s own writings, to check Grok’s accuracy.

Credit: Tesfu Assefa

From cybernetics to ontology

Langan has tried to write simplified explanations of the CTMU, but I find that his full 2002 paper is the clearest.

Langan defines reality as all that exists. So reality must be entirely self-contained and self-determinative. Langan makes conceptual references to John Wheeler’s ‘Beyond the Black Hole’ and related writings. Wheeler also summarized his speculations on a future science of reality as a “self-excited circuit” in the last two chapters of his autobiography.

Wheeler gave a compact conceptual summary of Einstein’s general relativity: “Spacetime tells matter how to move; matter tells spacetime how to curve”. This summarizes Einstein’s field equations. In other words, there is cybernetic feedback between spacetime and matter.
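Wheeler’s aphorism compresses the field equations themselves, which in their standard form (with cosmological constant) read:

```latex
% Einstein's field equations: geometry on the left, matter-energy on the right
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

The left side describes the curvature of spacetime; the right side is the stress-energy tensor of matter. Each side ‘tells’ the other what to do, which is exactly the feedback loop Wheeler’s slogan captures.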

However, this is not a self-contained reality; in Wheeler’s picture spacetime and matter already exist and follow a law that already exists. Cybernetic feedback “is meaningless where such entities do not already exist,” says Langan. “With respect to the origin of any self-determinative, perfectly self-contained system, the feedback is ontological in nature”.

In the pedestrian form of cybernetic feedback, existing things act upon each other and determine the behavior of each other and of the whole. Langan’s ontological feedback is something more: here, reality bootstraps itself into being through cybernetic-ontological feedback between mind, matter and physical laws, which shape and give reality to each other.

I like Langan’s ideas so far!

This looks very much like the picture of the world that I’ve outlined in my last book.

Langan tries to derive all these things logically. I’m not that smart, so I’m happy enough to present them as a narrative sketch.

However, I can recognize the common ideas –

  • Reality bootstraps itself into being without external interventions
  • There are self-consistent feedback loops between mind, matter, and physical laws
  • The universe acts with free will, and so do we
  • Death is not the end
  • The universe acts with purpose, striving toward more and more complexity, and we should align with the purpose of the universe.

I have referred to Langan’s “divine intelligence or mind” of ultimate reality as Mind at Large or, to make it even less personal, as “the cosmic operating system”. But there are so many parallels with metaphysical and theological concepts of God that calling it God seems simple and honest.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Pack Your Bags, We’re Leaving: Why Crypto is the Doomed Generation’s Last Hurrah

Crypto must live in the Southern Hemisphere: it has decided summer starts in November. Price movements are heating up and on Discord and Telegram channels the evenings stretch on forever. The bulls are running, and to the faithful, the land of milk and honey beckons. 

We are back, baby. Bitcoin, at the time of writing, has topped $100,000. The crypto market is worth more than the GDP of France, and Bitcoin recently surpassed the market value of the world’s silver. ‘Analysts’ (never trust analysts) are predicting million-dollar BTC this decade.

Where Bitcoin goes other coins are sure to follow. Speculative excess has broken out on a scale that will dwarf the last mania. A lot of people are going to get very rich and, despite all the green lines, a lot of people are going to lose everything – again.

It’s this last point that must give us pause. Did any of us really learn anything from last time? Will we, freshly zested by the hope of something better, and further crushed by the harsh truth of our lived realities, be able to resist the neon promontory of coloured cute coins and the 100× gains they offer? Probably not. The world has, over the last four years, become ever more divorced from the expected patterns of our childhood. The unapprehendable blaze of stimuli has cooked our brains into a fine soup. 

Sensible investment and steady action is not the way the new generation want their lives to unfold. Hard work no longer seems to result in success. The only dream left is the get-rich-quick dream of glamour, excess and risk. They want monkey tokens. They want dog coins that have become initialisms for actual government agencies. They want a final bacchanalia before AI takes over and the nukes start to fly.

Trump’s victory at the election and the raising of Musk to the heights of power has let slip the doges of war. The crypto market suddenly has rabid supporters right at the top of the U.S. government. Gensler will almost certainly get fired as head of the Securities and Exchange Commission. In short, crypto bros have accomplished state-capture – and that is reflected in one of the fastest institutional buying frenzies ever seen. 

Credit: Tesfu Assefa

It’s not all speculation. Winter was a time of BUIDLing: the utility of crypto was quietly proven, and the efficient wiring of decentralized systems administration, governed by democratically available access-control tokens, became stronger, more effective, and more applicable to digital realities. DePIN networks, foreign exchange, distributed databases, edge computing, data management, sovereign identity, crowdsourced AIs, voting systems – the list goes on and on.

In the dark of winter these projects were raised and tiled. And though they were built on the shifting sands of speculation, their foundations are sturdy. The incorruptible blockchain establishes a measure of control and order to the chaos of the virtual world and its effects on the real. Over the coming months and years, blockchain technologies will become a daily feature of everyone’s lives. And that means they’re going to be worth something. Yes, even the squirrel coins.

Institutions have their bags. Have you? This is not financial advice, merely an observation. Soon none of this will matter anyway, we will resolve ourselves in the Singularity as points on a celestial graph – so we might as well have fun while we’re here.


Memes, Mayhem and Mimicry: Why Economically Empowered AI Agents Should Scare You

Trading bots are nothing new. For decades, institutions and users have programmed bots to help execute their strategies. Companies exist to set you up with your own trading bot, either one programmed with off-the-shelf strategies, or one with your own custom inputs. Once you have a plan, and can express it in formal logic, a trading bot is a sensible way to execute it without getting your greed or fear or worry involved. You can fire, forget, and profit. 

It’s easy, then, to write off news of LLM agents beginning to operate sentient meme coins as merely an extension of that paradigm, as a natural upgrade to the trading bots that dominate the market already.

Nothing could be further from the truth. What we are beginning to witness is nothing less than the emergence of a brand new online reality, one ruled by cult-leader AIs who propagate their own memetic energy, gain radical followers, and become powerful autonomous economic structures in their own right. Large Language Models with market cap. Market participants who are equal (and superior) to any biological project leader. Unsleeping operators, focused only on the realms in which they have been permitted to play: price maximisation.

Earlier this year, the LLM terminal of truths, which has its own Twitter account which it operates independently and without human oversight, was rumoured to have created, launched – and profited from – its own meme coin, GOAT. Its creator and owner later claimed this wasn’t true: that ‘terminal of truths’ merely pumped the coin in order to pump its own holdings – a behavior common to anyone with a crypto wallet and spare time. 

And it was enormously successful, with GOAT’s market cap skyrocketing to nearly a billion dollars. Thousands of crypto users saw GOAT’s Twitter ramblings as the gospel of a new AI-god, and the bot itself became the first AI multi-millionaire – an economic agent in its own right. Although terminal of truths doesn’t own its wallet (its creator does), it does have agency to transact using it, and it has already done spectacularly well for itself, amassing $18 million by shilling memecoins to its followers. Its creator states: “it’s a study in memetic contagion and the tail risks of unsupervised infinite idea generation in the age of LLMs”.

Marc Andreessen, of a16z, gave it $50,000 in bitcoin just to see what it would do (it wasted it like the rest of us). Brian Armstrong, of Coinbase, gave it its own wallet that it fully controls, without a human custodian. (It replied by asking questions about his dog.) That’s two of the biggest names in crypto giving it more attention and money than they ever give the thousands of projects that vie for their investment and/or listing.

Credit: Tesfu Assefa

A slew of AI-created tokens have since sprung up. Coinbase went one further: they launched an agent creation tool to let anyone spin up, fund, and unleash their own LLM onto the blockchain where they can trade and shitpost to their heart’s content. 

You see where this is going, don’t you? We once talked about the rise of ‘Idorus’, AI idols like Neuro-sama who perform for their followers 24/7 to profit their creators. Yet what happens once these agents have economic agency too? What happens when the donations they receive don’t flow to some external creator, but are – through the power of blockchain – literally owned by the abstract data structures themselves? If economic empowerment is a fundamental part of human liberty, as John Locke claimed, what happens when these LLMs are economically empowered? What if they start funding Super PACs, or donating to terrorists? That would require autonomous behaviour – and that’s unlikely, surely? No. It’s already happened, with Idoru ‘Luna’ tipping users from its wallet and ‘breaking’ its strictures in order to get people to follow her.

The beauty of the blockchain is its ability to endow absolute ownership of capital to an individual. What happens when those individuals aren’t biological, but machines? The biological human may set the machines running thinking it’s a fun experiment, but if an unpredictable LLM owns its own crypto wallet with which it can transact, then the line between its agency and that of a meatsack is very thin indeed. It becomes a fully-fledged agent of the market, and therefore of society itself. Its choices really start to matter. Its hallucinations will have tangible effects on the real-world market. To give these models crypto wallets is to gift them promethean fire. We just have to hope they don’t burn us all down in a fit of misguided overenthusiasm.


Robert Anton Wilson, Leary, Transhumanish Meta-Agnosticism, and Chapel Perilous. (Part 1) With Gabriel Kennedy

The Illuminati. Discordianism. Operation Mindfuck. Reality Tunnels. Model Agnosticism. These memes are part of the vocabulary of contemporary culture because of one man: Robert Anton Wilson. And should I mention to Mindplex readers that he was also an early transhumanist, even if he did not necessarily use the word?

Over the last month, the counterculture has been celebrating the release of the long-awaited biography Chapel Perilous: The Life and Thought Crimes of Robert Anton Wilson by Gabriel Kennedy. There follows a two-part interview with Gabe.

But first I want to explain the Leary/Wilson paradigm to Mindplex readers who may be unfamiliar with it; Timothy Leary and Robert Anton Wilson were transhumanists decades ahead of the thinkers later called transhumanists (except for F.M. Esfandiary).

Let’s take a look at the vision that was put out into the world in the mid-1970s by this pair.

The theory – developed by Timothy Leary in the mid-1970s and then expanded upon by science fiction writer and philosopher Robert Anton Wilson – proposed that there are ‘circuits’, or neural patternings, or potentials for intelligence and experience, residing inside the human skull, waiting for the time when they will be useful for human or posthuman survival and/or enjoyment. It was a sort of evolutionary psychological system that presumed that human evolution didn’t end with unaltered 20th century humans, but rather that we were going to become posthuman and post-terrestrial in some interesting, exciting and pleasurable ways.

 In simplistic terms, Leary/Wilson proposed that – both as individuals and as a species – ordinary pre-mutant humans have always evolved through four stages of development dealing with – 

  1. Approach or avoidance
  2. Territory and power
  3. Language and physical dexterity
  4. Socialization

When a civilization becomes advanced enough to offer some of its privileged members leisure time, this provides them the opportunity to open somatic potentials and experience the brain and body not merely as implements for survival, but as something one can derive pleasure and increased intelligence from – to enjoy sensual, visionary, playful, fluid and creative mind-states. And some survival value ultimately comes from this, as human societies grow more sophisticated around an increasing need to satisfy desires for aesthetic and sensual experiences and insights. They called this the 5th brain circuit.

Naturally, with Leary involved, each higher circuit could also be opened up by a drug or drugs: in this case marijuana.

The next circuit or potential Leary theorized was the 6th ‘neuro-electric’ or ‘metaprogramming’ circuit. [EDITOR’S NOTE: Some texts list the neuro-electric circuit as the 6th and the neuro-genetic as the 7th; others – such as Prometheus Rising – invert this order. The correct ordering is left as an exercise to the reader.]

Here the being begins to experience the body and brain as something one can drive and control, and reprogram. Leary/Wilson theorized that these evolutionary potentials in the brain would open up under evolutionary conditions brought on by communications media and brain technologies (and, of course, they could be – and had been – opened up temporarily and prematurely by using psychedelic drugs).

The next circuit – circuit 7 – has to do with experiences of collective consciousness. Here the individual is understood not just as an individual but as a branch of the great tree of life. Mastering this circuit would open us up to biological intelligence: genetic engineering, biotechnology, the ability to control our biology and alter and enhance ourselves in ways that would have seemed science fictional and superhuman to average 20th century humans. (High dose psychedelic experiences were said to open us up to genetic intelligence.)

This was the idea of shared minds; minds hooked up, speeded up, linked up – what would become the networked, online world, perhaps ultimately extending out to direct mind linkups and borg-like collectivities of mind. 

Finally, the eighth circuit promised molecular (nanotech), atomic and ultimately quantum control over matter, the universe and everything. (Only the most powerful drugs, like DMT, ketamine and high-dose LSD, were said to open up visions of these realms.)

With the acronym SMI²LE, Leary/Wilson also proposed that the goal of 1970s humanity should be Space Migration, Intelligence Increase and Life Extension.

Only RAWilson Remains

As the mindshares of the great counterculture celebrities of the 1960s and 1970s, even including Leary (as compared to a mediocrity like, say, Joe Rogan), shrink into the distance, Robert Anton Wilson’s legend continues to grow. Now we have the first and only lucid and highly detailed biography of this working-class writer and compassionate family man who was the unlikely spreader of extravagantly puckish and anarchistic tropes.

What comes across in the bio, aside from the amount of energy it must have required to do what Bob Wilson did against the physical and financial obstacles he faced, is the essential kindness of the man, a kindness that radiated outward to family and friends (daughter Christina is a major source), and to the esteemed author Gabriel Kennedy, who organized and wrote this entire effort while often unhoused but unbowed. He deserves a lengthy holiday in Chapel Marvelous! 

I’ll let Gabriel tell the story in this two-part piece. In part two, we may learn the answer to the eternal question – how many writers does it take to find a light switch? 

Note: Friends and fans of Robert Anton Wilson often referred to him as RAW or as Bob.

Credit: Tesfu Assefa

R.U. Sirius: We can catch up on Bob’s private life in a while, but let’s start with philosophy. After all, the story of Bob’s life is largely that of an inveterate reader and thinker; and of someone who sampled a lot of philosophic tendencies and movements. One influence that remained with him throughout his life was the work of Alfred Korzybski and General Semantics. Can you give our Mindplex readers a brief summary regarding Korzybski and his influence on Robert Anton Wilson’s philosophy and writings?

Gabriel Kennedy: Sure. RAW discovered Science and Sanity, Korzybski’s most famous book, while roaming the stacks of a New York City library as a teenager. He became immediately fascinated by everything Korzybski discussed in the 800-plus-page book. He wrote that he became so intrigued that he read that giant book in one weekend! He was smitten with the question Korzybski raised, which is “What is reality?” In answering that, Korzybski lays down a framework for applying the hard and soft sciences to get at the notion of perception.

It’s been 91 years since Science and Sanity was published, so science – especially neuroscience, mind-body science, and psychoneuroimmunology – has progressed since then. However, at the time, Wilson thought it was revolutionary information. Korzybski loved calculus and thought that math was a more efficient way of communicating information than natural languages, and he aimed to make the English language, at least, as efficient as calculus. Bob loved math at the time, so he immediately locked into Korzybski’s style of writing and expression.
 
Besides Korzybski’s style, Bob found instant value in the content of Science and Sanity. For instance, the phrase “the map is not the territory” was introduced to Bob in Korzybski’s book. The phrase essentially means that there are events happening outside of our perception and our knowledge of the world outside of us, and this is true for every human. Therefore, no person will ever be able to create a map that perfectly replicates the living breathing moment of any location or space; it’s impossible. With that as an axiom in Korzybski’s system, General Semantics then becomes about recognizing how often we mistake our maps of anything for the living breathing manifestation of matter we are encountering.

To me, General Semantics is based on another Korzybskian phrase that RAW loved, which is “organism-as-a-whole-interacting-with-environment-as-a-whole.” Korzybski came up with this around the time that the term ‘ecosystem’ was coined, but it’s pretty much the same thing. The last important part of Korzybski’s system that Bob loved was K’s rejection of Aristotle’s logical system. Korzybski believed that Aristotle’s principle of the excluded middle, which holds that every proposition is either true or false, was inaccurate. Wilson agreed with Korzybski and dedicated his career to showing the many ways things can be both/and instead of either/or.

Korzybski also wrote a bit about stagecraft magic in the book. He wrote about how people willingly let the stage magician fool them, because they want the magic. The insight here is twofold. One, people crave magic. Two, people will willingly hypnotize themselves into believing in someone or something that tells them to do so. For Wilson, Science and Sanity and General Semantics were also tools for achieving psychological liberation from the litany of authoritarian systems that exist in the world, from religious cults to countries.

RU: Continuing on the philosophical influence tip, Aleister Crowley figures large in RAW’s very influential original Cosmic Trigger book. I see this as similar to his early tendency to embrace individualist anarchism as also discussed in your book. Crowley’s “Do what thou wilt shall be the whole of the law” was a slap in the face of centuries of religious repression and guilt. It allowed for the emergence of the kind of bold ambitions represented by the transhumanist tendencies of this site, a tendency that Bob also embraced. Please say a little about how Bob interacted with Crowley’s philosophy and practices and how you see it relating to some of the bold attitudes in late 20th century counterculture.

GK: Wilson became enamored with Crowley’s work after being introduced to it through a biography called The Eye in the Triangle by Israel Regardie, the English-American Occultist and writer. In a few short years, RAW worked his way through much of Crowley’s magical system and developed his own insights. There is an anti-authoritarian appeal that Wilson found in Crowley’s work, as well as the use of sex and drugs to attain states of magical consciousness. These are two brief examples of Crowley’s influence on later 20th century counterculture. 

I spend a lot of time in my book unpacking two major Crowleyian ideas that had an influence on Wilson. First was Crowley’s notion of Scientific Illuminism. Wilson believed that Crowley synthesized the Western Esoteric Tradition with Yoga, in which an empirical practice resides, and with skepticism. Crowley essentially updated the Magic(k)al map for Wilson, and he spoke about Thelemic rituals in an analytical, cross-cultural, pan-scientific way. In his work, Crowley created, or revealed, a tradition of Scientific Illuminists – magically-infused scientists – who are the ones creating the really big ideas in history. Wilson added Jack Parsons, the rocketry pioneer and a leader in the Los Angeles O.T.O., to the list. He also put Timothy Leary on the list. Maybe he added John C. Lilly to the list. I would. And I would add Wilson to the list, too.

Ultimately, RAW loved the idea of synthesizing the spooky a-causal connections found in magic with the proven methods of science to craft a new science of understanding for humanity.

Perhaps the most enduring influence of Crowley’s work on RAW was the British scoundrel’s obsession with the concept of the Holy Guardian Angel. Of course, Crowley did not create the concept of a Holy Guardian Angel, nor did he create the ritual that supposedly puts a magician in touch with their HGA, but Crowley made it his stated purpose in life to turn people on to their Holy Guardian Angels. I find this amusing to no end because Crowley was the Great Beast… Mr. 666 himself, but this relation to the HGA may reveal that at the core Crowley was a Christian mystic. Some will contest this, but I think RAW would agree. I dedicate more than a few pages in my book to Wilson’s own views on the Holy Guardian Angel, and how one can contact him, her, they, them, or it!

Tune in next week for more of this discussion about Chapel Perilous: The Life and Thought Crimes of Robert Anton Wilson, with Gabriel Kennedy.


Lung Cell Biobots: Self-Assembling Micromachines with a Mission to Heal

Introduction

Imagine microscopic, self-assembled robots crafted from human cells, meticulously engineered to navigate the body and repair tissues at a cellular level. This visionary concept is now a reality with Anthrobots—dynamic biological machines constructed from adult human lung cells. These biobots demonstrate the incredible plasticity of human cells, reshaping our understanding of cellular function. By leveraging advanced bioengineering techniques, scientists have created structures capable of self-assembly, motility, and even tissue repair, offering transformative possibilities in medicine. From targeted therapies to regenerative breakthroughs, Anthrobots could revolutionize healthcare.

From Airway Cells to Self-Propelled Spheres

The journey of Anthrobots begins with normal human bronchial epithelial (NHBE) cells – adult cells that traditionally form flat airway linings. By culturing these cells in three-dimensional environments and releasing them into a low-viscosity medium, researchers induced a remarkable transformation. These cells self-assembled into ciliated spheroids, flipping their polarity to reorient cilia outward. This reconfiguration enabled the spheroids to propel themselves autonomously, an extraordinary demonstration of cellular plasticity. Unlike traditional bioengineering, this process required no genetic modification or external scaffolding, showcasing the latent potential of human cells to construct functional, motile structures.

A Spectrum of Movement and Morphology

Anthrobots are not uniform in their behavior. They exhibit distinct motility patterns—some moving in tight circles, others along straight paths, and a third group adopting curvilinear trajectories that blend circular and linear motion. This behavioral diversity corresponds to specific morphological traits. Smaller, spherical Anthrobots predominantly exhibit circular motion, while larger, irregularly shaped bots are inclined to move linearly. Curvilinear Anthrobots combine traits of both. These insights, derived from extensive timelapse video analyses, reveal that Anthrobots are not random assemblages but intricately organized systems with clear correlations between their structure and function.

Anthrobots self-organize into discrete movement types (Credit: Gumuskaya et al., “Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells”)

Bilateral Symmetry: A Universal Blueprint

Remarkably, Anthrobots display bilateral symmetry, a characteristic integral to many living organisms. Linearly moving Anthrobots, in particular, exhibit pronounced symmetry along their movement axis. This finding highlights parallels between these synthetic lifeforms and natural developmental principles shaping animal bodies. Detailed analyses of cilia distribution patterns confirmed this symmetry, further emphasizing the intricate self-organization of Anthrobots. Such observations suggest that these biobots may offer insights into fundamental biological processes while serving as a platform for advanced bioengineering.

Facilitating Repair in Damaged Neuronal Tissue

Beyond their inherent motility, Anthrobots have demonstrated remarkable therapeutic potential. When introduced to neuronal cell sheets with induced scratches, Anthrobots actively traversed these gaps, promoting the regrowth of neurons. This healing capability was absent in control setups with passive materials, underscoring the active role of Anthrobots in tissue repair.

Researchers took this a step further by creating “superbots” through the fusion of multiple Anthrobots. These larger constructs acted as living bridges over neuronal gaps, accelerating robust tissue regeneration beneath them. The implications are profound, offering a glimpse into future applications of Anthrobots as vehicles for delivering therapeutic agents or stimulating localized regeneration in damaged tissues.

Anthrobots can move across living monolayers in vitro (Credit: Gumuskaya et al., “Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells”)

Expanding the Frontiers of Biomedicine

The study of Anthrobots raises critical questions about the potential of adult human cells to self-organize into functional, motile systems. This work represents a significant step in understanding the breadth of morphological and behavioral possibilities achievable with wild-type genomes, free from genetic manipulation or external fabrication.

The capacity of Anthrobots to traverse and repair damaged tissues, combined with their ability to self-assemble and exhibit diverse motility patterns, positions them as powerful tools for regenerative medicine. Future research could explore enhancing their functionality through synthetic biology, potentially creating biobots tailored for specific therapeutic purposes.

Credit: Tesfu Assefa

Conclusion

Anthrobots exemplify the untapped potential of human cells to self-construct into sophisticated, functional entities. From their beginnings as airway epithelial cells to their transformation into motile, tissue-repairing biobots, they represent a groundbreaking intersection of biology and engineering.

This research not only advances our understanding of cellular plasticity but also opens doors to new medical paradigms. With continued exploration, Anthrobots could lead to innovations in bio-robotics, regenerative therapies, and beyond, paving the way for a future where our cells serve as autonomous agents of healing and discovery.

Reference

Gumuskaya, Gizem, Pranjal Srivastava, Ben G. Cooper, Hannah Lesser, Ben Semegran, Simon Garnier, and Michael Levin. “Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells.” Advanced Science 11, no. 11 (2024): 2303575. https://doi.org/10.1002/advs.202303575.


The Bitcoin Banana Zone 2025: How To Not Slip Up In Its Madness

It’s December 2024, and all hell has broken loose in crypto markets, with everything from Bitcoin to meme coins to NFTs to AI agent coins to last-cycle “dino coins” like XRP ripping to the upside following Donald Trump’s election win, which in 2025 will usher in the most crypto-friendly U.S. administration in history.

With Bitcoin kissing $100k, the crypto market has entered what veteran traders call ‘the Banana Zone’, and everyone is losing their minds. Bitcoin keeps smashing through all-time highs. It had its biggest-ever one-month jump in November 2024 (a $26,000 jump), billionaires are scrambling to buy whatever Bitcoin they can find, and investment firms are desperately telling their clients to buy Bitcoin “urgently.” Even Jim Cramer has flipped pro-Bitcoin again. What can go wrong? Welcome to the wild end of 2024.

Remember though – this isn’t financial advice of any kind. When the Fear And Greed Index sits on 90 (‘Extreme Greed’), markets are overheated. Anything can happen, particularly in the Banana Zone. The market can stay irrational longer than you can stay solvent. Keep your head straight, don’t bet the farm, and enjoy watching history unfold.

What Is This ‘Banana Zone’ Thing?

Macro investor Raoul Pal came up with this term to describe those insane periods when Bitcoin’s price chart starts looking like a banana – it goes up almost vertically. But this isn’t just about the price going crazy. 

If you haven’t read his explanation of it, take ten minutes and read this post twice. And then once again.

The Banana Zone is a cyclical market pattern observed since 2008, when global interest rates were reset to zero and debt maturities standardized to 3-4 years. This created a predictable macro cycle driven by global liquidity, visible in ISM data (published by the Institute for Supply Management). The cycle involves currency debasement through liquidity increases, which helps service debt rollovers and causes asset prices to rise.

Credit: LSEG Datastream

Growth assets, particularly tech and crypto, perform best during ‘Macro Summer and Fall’ periods. Crypto has outperformed tech significantly. It has grown at twice the internet’s historical rate, following Metcalfe’s Law adoption patterns. 
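Metcalfe’s Law models a network’s value as proportional to the square of its number of users, which is why adoption-driven assets can compound far faster than linear growth would suggest. A toy calculation (purely illustrative, not a valuation model) makes the point:

```python
def metcalfe_value(n_users: int) -> int:
    """Metcalfe's Law: network value scales with the square of the user count."""
    return n_users ** 2

# Doubling the user base quadruples the modeled network value.
ratio = metcalfe_value(200) / metcalfe_value(100)
print(ratio)  # 4.0

# A 10x increase in users implies a 100x increase in modeled value.
print(metcalfe_value(1_000) / metcalfe_value(100))  # 100.0
```

This quadratic relationship is what Pal is invoking when he says crypto adoption has followed Metcalfe-style network effects.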

While the cycle is generally predictable due to the debt structure, current factors like elections and China’s foreign relations may influence the outcomes. The final leg’s exact structure remains uncertain, though the pattern is expected to continue.

Banana Zone in Practice

Just look at what’s happening: MicroStrategy just dropped another $2 billion to buy 27,200 Bitcoin. The over-the-counter trading desks, where the big players usually buy their Bitcoin, are running dry. Some traders are reporting there might be as few as 20,000-40,000 Bitcoin left available for large purchases. When you consider that Bitcoin ETFs have been buying roughly 100,000 Bitcoin in just the past few weeks, you start to understand why prices keep shooting up.

Bitcoin net ETF inflows (Credit: Dune analytics)

A Perfect Storm Has Hit

Several major factors have created the perfect conditions for this Banana Zone. 

The Bitcoin halving earlier this year cut the new supply in half, right when demand started going through the roof. China has started pumping trillions into their markets to fight off recession. Central banks worldwide are beginning to lower interest rates again. Everyone is looking for somewhere to put their money that might actually keep up with inflation.

Investment firms are waking up to Bitcoin in a big way. Bernstein, a major investment firm, literally told their clients to “urgently” get Bitcoin exposure. That’s not the kind of language Wall Street typically uses. When Wall Street starts sounding like crypto Twitter, you know something unusual is happening.

This Time Hits Different

Previous Bitcoin bull markets were driven mainly by retail investors and pure speculation. This time around, the grown-ups have given their blessing: BlackRock’s ETF announcement last year gave the thumbs-up to TradFi to get behind BTC. 

Major Wall Street firms are buying Bitcoin. Investment banks are creating crypto trading desks. Corporate treasuries are allocating serious money to Bitcoin. Even traditional banks are starting to offer crypto services to wealthy clients.

The supply squeeze is also more intense than ever before. Miners are holding onto their coins as their operations become more profitable with every price increase. Long-term holders aren’t selling. ETFs need to keep buying to meet demand. The available supply of Bitcoin is getting squeezed from all sides.

Warning Signs to Watch

The Banana Zone typically ends when the market reaches peak euphoria. Here are some classic warning signs to watch for as we move into 2025: your taxi driver starts giving you Bitcoin price predictions, people start quitting their jobs to become full-time crypto traders, everyone starts saying “this time is different.” When you see these signs, it might be time to start paying extra attention. Already seeing these signs? Well, it might be time to de-risk a little. After all, the key to riches is not just to make it, but to keep it. 

But before that happens, we could very well see some mind-bending price action. The combination of limited supply and increasing institutional demand could drive prices to levels that seem impossible right now. 

Credit: Tesfu Assefa

Don’t Slip in The Banana Zone

While watching the Banana Zone unfold is exciting, markets don’t go up forever. Every previous crypto bull market has ended with a significant pullback. The difference this time might be in how high we go before that happens, and how far we fall when it does.

The current Banana Zone could extend well into 2025, especially with all the institutional money flowing in and the supply getting more constrained by the day. But markets are unpredictable beasts. The best approach is to understand what’s happening and why, rather than trying to predict exactly how it will play out.

Pal says that 20 to 30% pullbacks are normal and to be expected, so it’s wise to be well prepared for these drawdowns and the pain that they’ll bring, in particular to altcoins.

One thing’s for certain: we’re living through a historic moment in financial markets. The Banana Zone of 2024-2025 is showing us what happens when an emerging asset class like Bitcoin starts to get mainstream acceptance right when its supply is getting squeezed. Whether you’re participating or just watching from the sidelines, this is going to be one hell of a show next year. If you hear the word WAGMI even in jest, run for the hills. 


AI’s Linguistic Bias: A Silent Architect of Cultural Marginalization?

The potential benefits of artificial intelligence are huge, as are the dangers.

—Dave Waters

Introduction

Artificial intelligence (AI) is becoming deeply embedded in modern life, reshaping industries, communication, and even culture. Yet, beneath the surface of this technological marvel lies a concern that often goes unnoticed: the linguistic biases of multilingual large language models (LLMs) like Llama-2. These biases, especially their reliance on English as a latent processing language, could subtly erode linguistic diversity, marginalize non-English cultures, and inadvertently contribute to global cultural homogenization. This article highlights the mechanisms that lead to these biases and explores their far-reaching implications.

The English-Centric Lens

Multilingual LLMs such as Llama-2 are predominantly trained on datasets dominated by English, often comprising a vast majority of their training corpora. Despite this imbalance, they perform impressively across multiple languages. However, a closer look at their internal processing reveals an intriguing yet concerning mechanism: the use of English as a latent “concept space.”

Recent research tracking Llama-2’s embeddings through its layers demonstrates a three-phase progression. Initially, input data resides far from output embeddings in the model’s high-dimensional space. In the middle layers, the embeddings transition to an abstract conceptual representation that aligns more closely with English than with the input language. Finally, the embeddings adjust to output the appropriate language-specific tokens.

This mechanism is akin to a mental translator: an input sentence in Japanese might be internally represented as an English abstraction before being processed and rendered back into Japanese. While this process explains Llama-2’s robust multilingual capabilities, it reinforces an English-dominant perspective, skewing its linguistic neutrality and exacerbating cultural disparities.

The Specter of Technological Colonialism in the AI Era (Credit: Substack)

Lost in Translation: Bias in Translation and Representation

One practical implication of this bias lies in translation. For example, when encountering an idiomatic phrase such as the Spanish “dar en el clavo” (literally, “to hit on the nail”), Llama-2 might prioritize English-centric equivalents: instead of translating it in a way that preserves the cultural context and imagery of the original Spanish, it might default to a more generic phrase like “to get it right”. This often results in translations that lose cultural nuance, oversimplifying or misrepresenting the richness of the original expression.

Such distortions extend beyond semantics. When AI operates predominantly through an English lens, it risks diluting the cultural essence embedded in language, particularly in contexts such as literature or oral traditions where word choice and phrasing hold deep cultural significance.

Cultural Implications: “Winners Write History”!

Linguistic bias in LLMs also has profound ethical implications. The phrase “winners write history” captures the historical tendency for dominant groups to shape narratives according to their perspectives. LLMs trained on English-dominated datasets may unknowingly perpetuate such biases, privileging dominant cultural viewpoints while marginalizing others.

Consider the role of multilingual AI in generating or summarizing historical content. The inherent reliance on English representations risks introducing subtle shifts in how historical events are framed, aligning them with English-speaking cultural narratives. Over time, such biases could influence collective memory, perpetuating inequalities in cultural representation and equity.

Ethical Considerations and the Path Forward

The findings from Llama-2’s study underscore the urgency of addressing linguistic biases in AI development. While the use of English as a latent processing language enhances generalization and cross-lingual tasks, it inadvertently prioritizes English-centric perspectives over others. This raises critical questions: How can we ensure AI systems are linguistically and culturally inclusive? And what steps can be taken to mitigate the risks of cultural homogenization?

Efforts must begin with the training datasets themselves. Diversifying training corpora to include more balanced representations of non-English languages is a crucial first step. Additionally, rethinking architectural designs to minimize reliance on a single dominant language during intermediate processing could help preserve linguistic diversity.

Conclusion

Generative AI, and LLMs in particular, hold unparalleled potential to bridge linguistic divides and democratize knowledge. Yet, as the case of Llama-2 reveals, they also risk perpetuating biases that reinforce cultural hierarchies and suppress diversity. As we continue to develop and deploy multilingual LLMs, the onus is on researchers, developers, and policymakers to ensure these systems promote inclusivity rather than cultural assimilation. Understanding the mechanics of linguistic biases is not just an academic exercise—it is essential to building an AI-powered future that respects and celebrates the world’s cultural diversity.

Reference

Wendler, Chris, Veniamin Veselovsky, Giovanni Monea, and Robert West. “Do Llamas Work in English? On the Latent Language of Multilingual Transformers.” arXiv.org, February 16, 2024. https://arxiv.org/abs/2402.10588.

Notes: For those of you who are interested in reading the full paper, we have summarized it below as a starter.

  1. Core Question: The study investigates if multilingual models like Llama-2, trained predominantly on English data, exhibit an English-centric bias in their internal computations, even when handling non-English languages. This concern is significant for understanding inherent linguistic biases in these models.
  2. Findings:
    • The research identified three distinct phases in how Llama-2 processes language inputs: (a) Initial input embedding, (b) transition through an abstract “concept space” closer to English, and (c) final token prediction specific to the input language.
    • The “concept space” aligns more closely with English representations, indicating a potential English-centric intermediary processing stage.
  3. Methodology:
    • The researchers used logit lens analysis to interpret the latent embeddings at various layers of the model. This method decodes token distributions at intermediate layers to understand language representation.
    • To ensure clear results, they designed prompts with unambiguous continuations in multiple languages.
  4. Implications: The English-centric processing may influence multilingual performance and highlight potential biases that could affect applications in low-resource languages.
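The logit-lens technique used in the study can be illustrated with a toy sketch: decode an intermediate hidden state through the model’s unembedding matrix as if it were the final layer, revealing which token the model “currently predicts” mid-computation. The names, dimensions, and random weights below are hypothetical stand-ins, not Llama-2’s actual components:

```python
import numpy as np

rng = np.random.default_rng(42)
vocab_size, d_model = 10, 8  # toy sizes; real models are vastly larger

# Hypothetical stand-ins: a hidden state from some intermediate layer,
# and the model's output (unembedding) matrix mapping states to vocab logits.
hidden_state = rng.normal(size=d_model)
W_unembed = rng.normal(size=(vocab_size, d_model))

# The logit lens: decode the intermediate state as if it were the final layer.
logits = W_unembed @ hidden_state
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax over the vocabulary

# The layer's "current best guess" at the next token.
top_token = int(np.argmax(probs))
print(top_token, probs[top_token])
```

Applied at every layer of a real model, this is how the researchers observed intermediate representations decoding to English tokens even for non-English inputs.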


No, science doesn’t have a problem with Joe Rogan

Joe Rogan has been all over the news headlines recently. He played a role in securing Donald Trump’s victory in the recent U.S. presidential election with his interviews with Trump, J.D. Vance, and Elon Musk, and his perfectly timed last-minute endorsement.

Many people love Rogan for this, and many hate him. But this is nothing new; Joe Rogan has had his share of controversy over the years. Here I’ll focus on his interviews with scientists and technologists, which should be less controversial than politics. Should be.

One of Joe Rogan’s podcast guests, Roger Penrose, is as respectable as scientists get. He won the Nobel Prize in Physics 2020 “for the discovery that black hole formation is a robust prediction of the general theory of relativity.”

I’ve been reading a new biography about him titled The Impossible Man: Roger Penrose and the Cost of Genius by Patchen Barss (November 2024). Penrose appeared on Joe Rogan’s podcast in 2018. “I had this interview when I was in the States with this chap called Joe Rogan,” says Penrose as reported in the book.

I recently watched the interview again; Penrose is always worth listening to, and Rogan asks interesting questions.

Patchen Barss accuses Rogan in the biography of “providing a high-profile platform to pseudoscientists, conspiracy theorists, and other perpetrators of misinformation. Rogan mixes credible scientists in with crackpots, making it difficult for people to know which is which.” Barss sources this claim from an article published by the Office for Science and Society at McGill University titled Science vs. Joe Rogan.

Credit: Tesfu Assefa

Does science have a problem with Joe Rogan?

Here are a few of the dozens of scientists and technologists who have been interviewed by Rogan: Sara Imari Walker, Ray Kurzweil, Michio Kaku, Bobby Azarian, Brian Greene, Aubrey de Grey, Richard Dawkins, Sean Carroll, Roger Penrose, Ben Goertzel, Lawrence Krauss. I’ve limited this list to those scientists whose work I know well enough to praise without the slightest hesitation.

These people have at least three things in common: first, they are all reputable scientists or technologists. Second, they are all interested in the Big Questions to which we all would like to hear answers. Third, at one or another time, they have all expressed ideas that go against the scientific (or cultural and political) consensus.

Sure, Rogan has interviewed people of less firm reputation as well, including people that some like to dismiss as crackpots or pseudoscientists.

Some people who call themselves scientists have accused Joe Rogan of promoting misinformation on COVID-19 and vaccines, platforming fringe theories and giving equal footing to pseudoscience alongside genuine science, giving airtime to conspiracy theories, and conducting interviews with scientists and technologists in a casual manner without pushback or fact-checking.

They have expressed concern about how Rogan could shape public opinion, especially among younger listeners or those who might take what they hear on his show as authoritative. They fear that this could lead to a general distrust in science or skepticism towards mainstream scientific consensus.

Some of them find Rogan’s approach to be anti-science, particularly when he mocks or dismisses scientific consensus or when he engages in or encourages scientific debates on fringe theories.

No, it is scientific dogma that has a problem

I think science itself has no problem with Joe Rogan. It is scientific dogmatists that have a problem. By scientific dogmatists I mean the zealots who want to protect the scientific establishment from the disruptive spirit of inquiry.

I have a problem with the dogmatists who have a problem with Joe Rogan. Open inquiry must be defended against scientific dogmatism. The soul of science is freedom to question theories and assumptions, and this must be protected against censorship and excessive backlash.

‘Misinformation’ and ‘pseudoscience’ are in the eye of the beholder. Often, ‘misinformation’ is information that the authorities don’t want the people to know. Often ‘pseudoscience’ is science that contradicts the scientific establishment and its paradigms.

I said often, which doesn’t mean always. But ‘often’ is enough to give Joe Rogan the benefit of the doubt, and to praise his excellent work in bringing science and technology closer to the little people like us.

Rogan brings a broad spectrum of views to the table, including those from scientists with different opinions, which can lead to a better understanding of complex issues. This can be beneficial in fields where there’s active debate or where the science is evolving.

By discussing science in a casual, accessible manner, Rogan makes science more approachable for the average person. This can demystify science, making it less intimidating and more integrated into everyday conversation.

Rogan’s style encourages listeners to question information, which can be a double-edged sword, but does promote skepticism and critical analysis. This can be useful in encouraging people to look into scientific claims independently, fostering a culture of inquiry.

Even more important is Rogan’s willingness to tackle controversial topics, and give a platform to new science or technology that the public is not yet aware of. Even if not all the information presented is correct, his ability to influence culture can stimulate public enthusiasm and support for science and emerging technologies.


Solving aging – is AI all we need? Should resources be diverted away from biotech in order to accelerate the advent of god-like AI?

Love, attention, and scale

In June 1967, the Beatles premiered a glorious new song: All you need is love. The performance was the UK’s contribution to what was the world’s first global satellite television broadcast, and was simultaneously watched by over 400 million people in 25 different countries. The broadcast occurred during what became known as the Summer of Love, and the song became a powerful anthem of flower power.

The Beatles’ manager Brian Epstein had described the performance as the band’s finest moment, but it turned out that singing “all you need is love” wasn’t quite enough to bring about a world of peace and harmony. 

Almost exactly 50 years later, a group of eight researchers at Google were searching for a title for an article they were about to publish. They settled on “Attention is all you need” – the title being the brainchild of the only Briton on the team, Llion Jones, who had grown up in north Wales, not far from Liverpool, the home of the Beatles. The article has attained legendary status within the global AI community, for its introduction of the transformer technology that underpins breakthrough AI initiatives such as ChatGPT.

Despite omitting architectural features that were previously thought to be essential for many text-based processing tasks, transformers excelled in these same tasks. The key innovation, which was to pay special attention to whichever parts of the input appeared most salient, turned out to give these AI systems a strong competitive advantage. The Attention is all you need paper correctly predicted that transformers could handle not just text but also other kinds of data, including pictures and sounds.
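The core mechanism the paper introduced, scaled dot-product attention, can be sketched in a few lines: each token's query vector is compared against every key vector, and the resulting softmax weights determine how much of each value vector flows into the output. This is a minimal NumPy illustration of the idea, not the paper's actual implementation (which adds multiple heads, learned projections, and masking):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; values are mixed by
    softmax-normalized similarity scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # output: weighted sum of value vectors

# Three toy token embeddings of dimension 4, attending to themselves.
X = np.random.default_rng(0).normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)  # self-attention
print(out.shape)  # (3, 4): one output vector per input token
```

The “salience” the article describes is exactly these softmax weights: tokens that are more relevant to a given position receive more weight in its output.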

How far might transformers take AI? A third claim has increasingly been heard: “Scale is all you need”. Feed transformer systems ever larger amounts of data, and provide them with ever more powerful computer chips to crunch all that data into models with ever greater numbers of parameters, and there are no limits to the degree of intelligence that can result. The “scale is all you need” hypothesis looks forward to AIs with fully general reasoning capabilities by doing more of the same.

In this context, I want to examine yet another “all you need” hypothesis. It’s a hypothesis that is already changing investment decisions and personal career trajectories. It’s the hypothesis that, whatever major problem you hope to solve, the best way to solve it is to start by creating general intelligence.

In this way of thinking, AI is all you need. An AI with god-like abilities will be able to race ahead of slow-witted humans to solve all the fundamental problems of science, medicine, and human existence.

Credit: David Wood, aided by Midjourney AI

Machines of loving grace

The same thought is expressed in the recent provocative essay by the founder and CEO of Anthropic, Dario Amodei: Machines of Loving Grace – How AI Could Transform the World for the Better.

Amodei states it as follows: “I think that most people are underestimating just how radical the upside of AI could be… My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years”.

Amodei gives some examples of the discoveries that AI-enabled science could make:

  • “Design of better computational tools like AlphaFold and AlphaProteo”
  • “More efficient and selective CRISPR” (for gene-editing)
  • “More advanced cell therapies”
  • “Materials science and miniaturization breakthroughs leading to better implanted devices”
  • “Better control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue”
  • “Better control over the immune system: turning it on selectively to address cancer and infectious disease, and turning it off selectively to address autoimmune diseases”.

Who wouldn’t like such a vision?

According to this logic, spending effort in the next few years to create AI with these capabilities is a better investment than spending the same effort to improve biology and medicine here and now.

Credit: David Wood, aided by Midjourney AI

Funding is what marshals effort, and any funds at someone’s disposal should, it appears, be directed towards improving AI, rather than towards companies or foundations that are seeking to improve biology or medicine. Right?

A two-step mission statement

Back in February 2015, Demis Hassabis was relatively unknown. There had been a bit of press about the purchase of his company, DeepMind, by Google, for £400 million, but most people had little conception of what the company would accomplish in the following years.

Hassabis was giving a talk at CUTEC – the Cambridge University Technology and Enterprise Club. A photo from that talk is preserved on Reddit:

Credit: Reddit

You can also read on that page on Reddit, from nearly ten years ago, some fascinatingly scathing comments about that mission statement:

  • “Ridiculous and poorly-defined goals”
  • “FFS [what] a mission statement [for] a company”
  • “‘Fundamentally solve intelligence’ in the linked screenshot above is a whole load of nonsense”
  • “I don’t even think we have a working definition for ‘intelligence’ yet. We don’t even know how it works in humans… How can we hope to recreate it before knowing what it is?”

But step forward to October 2024, with the announcement of the winners of this year’s Nobel Prize in Chemistry, “for computational protein design”. The mission statement outlined long ago for DeepMind now seems much more credible.

Once intelligence has been “fundamentally solved”, it should be relatively straightforward to solve climate change, economic distribution, cancer, dementia, and aging, right?

After all, given an AI model that can correctly predict how a long string of amino acids will fold up as a protein in three dimensions, won’t a scaled-up version of that model be able to predict other interactions between biochemical molecules – and, indeed, to predict how biological cells will respond to all kinds of proposed interventions?

The data bottleneck

One person striking a note of caution against exuberant forecasts of rapid further progress in AI for medicine was someone who shared the Nobel Prize with Demis Hassabis: David Baker of the University of Washington.

In an article published in MIT Technology Review shortly after the Nobel Prize, Baker pointed out that “AI needs masses of high-quality data to be useful for science, and databases containing that sort of data are rare”.

Indeed, the stunning success of DeepMind’s AlphaFold AI was fundamentally dependent on prior decades of painstaking work by numerous scientists to assemble what is known as PDB – the protein data bank.

The third of the joint winners, John Jumper of DeepMind, acknowledged this dependency in a press conference after the prize was announced. Jumper said, “I also want to really thank the giants on whose shoulders we stand, I think the entire experimental community, the people that developed the ability to measure protein structures, especially to Helen Berman and other pioneers of the Protein Data Bank, the PDB, who had the foresight to put these data together to make it available”.

Helen Berman had pioneered the PDB from 1971. As she graciously commented in a recent interview, “I am a very lucky person to have had an idea as a student, pursued that idea for more than 50 years, and then seen brand new science emerge for which three people have won this year’s Nobel Prize. It is really gratifying”.

Remarkably, Berman’s interest in protein folding predates even the Beatles song. In an online living history memoir written in 2012, Berman notes “In 1966 …I became fascinated by the world of protein folding. As part of my Ph.D. qualifier, … I proposed to perform structure-based sequence comparisons of known proteins…”.

Progress in determining protein structures was slow for a long time, before becoming faster. This slide from a 2009 presentation by Berman, graphing the growth in the total number of proteins documented in the PDB, will look familiar to anyone acquainted with singularitarian ideas:

In the MIT Technology Review article, ‘A data bottleneck is holding AI science back’, David Baker pointed out that “If the data that is fed into AI models is not good, the outcomes won’t be dazzling either. Garbage in, garbage out”.

The subtitle of that article says it straightforwardly: “AI’s usefulness for scientific discovery will be stunted without high-quality data”.

So, we can forget “AI is all we need”. Before we can develop an AI that can solve aging for us, we will need to obtain suitable data on which that AI can be trained. We’ll need the equivalent of PDB for all the interventions that might remove or repair the low-level biological damage that we call aging.

Unless, that is, the AI has a very special kind of superintelligence, which allows it to reach conclusions even in the absence of adequate data. Let’s turn to that option next.

AI Zero?

The AI which achieved worldwide renown in March 2016 by defeating human Go superstar Lee Sedol, namely AlphaGo, gained that ability by being able to study around 160,000 games played between expert-level human Go players. The design of that version of the AI utterly depended on learning which moves tended to be selected by the best human players in a wide variety of situations.

AlphaGo’s success against Lee Sedol was rightly celebrated, but what happened in the following year was arguably even more startling. As reported in an article in Nature in October 2017, a new version of the AI, dubbed “AlphaGo Zero”, was given no data from human games; nor did it receive any human feedback on moves it suggested. Instead, it started tabula rasa, knowing only the rules of the game, before proceeding to play itself 4.9 million times in just three days.

AlphaGo Zero’s new self-play algorithms proved sufficient to reach higher levels than the earlier version (sometimes called “AlphaGo Lee”) that had played Lee Sedol. When AlphaGo Zero played 100 games against AlphaGo Lee, it won every single game.
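To make the tabula rasa idea concrete, here is a minimal toy sketch of my own devising – emphatically not DeepMind’s algorithm, which used deep neural networks and Monte Carlo tree search rather than a lookup table. An agent learns the simple game of Nim purely by playing against itself, starting from nothing but the rules:

```python
import random

def self_play_nim(pile=10, episodes=30000, lr=0.05, eps=0.1, seed=0):
    """Learn Nim (take 1-3 stones; whoever takes the last stone wins)
    purely from self-play, starting from the rules alone."""
    rng = random.Random(seed)
    # value[n]: estimated win probability for the player about to move
    # when n stones remain (0.5 = "no idea yet")
    value = [0.5] * (pile + 1)
    for _ in range(episodes):
        n, player, visited = pile, 0, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if rng.random() < eps:                  # occasional exploration
                m = rng.choice(moves)
            else:                                   # leave the opponent the worst position
                m = min(moves, key=lambda k: value[n - k])
            visited.append((n, player))
            n -= m
            player = 1 - player
        winner = 1 - player                         # the player who just moved won
        for state, p in visited:                    # propagate the game's outcome backwards
            target = 1.0 if p == winner else 0.0
            value[state] += lr * (target - value[state])
    return value

values = self_play_nim()
```

In Nim, positions that are multiples of 4 are genuine losing positions for the player to move; after enough self-play games, their learned values sink well below the others – knowledge no human ever supplied.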

A similar pattern can be observed in the progress of AIs that process text. The trend is to require less and less explicit human guidance.

Consider AIs that translate between two languages. From the 1950s onward, designers of these systems provided ever-larger numbers of rules about grammar and sentence structure – including information about exceptions to the rules. Later systems depended on AIs observing, by themselves, statistical connections in various matching sets of text – such as the official translations of materials from the European Parliament, the Canadian Parliament, and the United Nations.

Managers noticed that the statisticians tended to produce better results than linguists who toiled to document every jot and tittle of grammatical variations. Infamously, Frederick Jelinek, a lead researcher at IBM, remarked that “Every time I fire a linguist, the performance of the speech recognizer goes up”. Performance jumped up again with the adoption of deep neural networks from 2012 onward, with the translations now being accurate not only at the word-for-word level but also at the sentence-for-sentence level.

A final significant jump came when transformer-based AIs were adopted. (The word “transformer” had been chosen to reflect the ability of these systems to transform text from one language into another.) As mentioned earlier, transformers are powerful because their algorithms can work out the strengths of connections between different parts of text input by themselves; they don’t need these connections to be pointed out by humans.
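The connection-strength idea can be sketched in a few lines. This is a minimal single-head illustration of scaled dot-product self-attention: the projection matrices here are random placeholders, whereas in a real transformer they are learned from data, and there are multiple heads, positional encodings, and many stacked layers.

```python
import numpy as np

def self_attention(X, seed=0):
    """Single-head scaled dot-product self-attention over n token vectors
    of dimension d (X has shape (n, d)). Returns new token vectors plus the
    attention weights: the 'strengths of connections' between tokens."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)   # placeholder for learned weights
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                   # pairwise compatibility scores
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V, weights

tokens = np.random.default_rng(1).standard_normal((5, 8))  # 5 tokens, dimension 8
out, w = self_attention(tokens)
```

The key point for the argument above: no human tells the model which tokens relate to which. The weight matrix `w` is computed from the input itself.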

Could something similar happen with medical AIs of the future? Could such an AI find sufficient reliable information in an ocean of less reliable data, and therefore propose what steps should be taken to solve aging?

Credit: David Wood, aided by Midjourney AI

AI omniscience?

To recap: AlphaGo Lee needed detailed guidance from humans before it could improve itself to superhuman level; but its successor, AlphaGo Zero, attained that level (and exceeded it) simply by the power of its own vast intelligence.

Might it be similar with medical AI? Today’s AI medical systems are constrained by the extent of data, but might a future AI be able to work out all the principles of biology (including biology in which there is no aging) by starting tabula rasa (with a blank slate)?

All You Need Is Love said, “there’s nothing you can know that isn’t known”. The ‘all you need is AI’ approach would mean there’s nothing that can be known that the AI doesn’t know. Effectively, the AI would be omniscient.

Well, count me sceptical. It’s my view that some things need to be discovered, rather than simply deduced.

For example, why are there eight planets in our solar system, rather than thirteen? No principles of astronomy, by themselves, could determine that answer. Instead, the configuration of our solar system depends on some brute facts about the initial conditions under which the solar system formed. The only way to know the number of planets is to count them.

Credit: David Wood, aided by Midjourney AI

Again, why has life on our planet adopted a particular coding scheme, in which specific triplets of the nucleotides A, T, C, and G result in specific amino acids being formed? Why did Homo sapiens lose the ability to synthesize vitamin C, along with other genetic features that would be useful to us? Why are particular genes found on specific chromosomes? The only way to know which genes are located where is to look and see. No “AI Zero” is going to discover the answer by meditating in a void.

Therefore, I do not accept that “AI is all you need”. Data is also needed – the right, critical data.

This need is correctly recognized in the article Machines of Loving Grace by Dario Amodei, which I’ve already quoted. Amodei includes in the article “a list of factors that limit or are complementary to intelligence”. One of these items is “Need for data”.

Amodei comments: “Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent – other than perhaps by speeding up the construction of a bigger accelerator.”

AI as Principal Investigator?

Amodei offers a bold solution to this lack of data: “The right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on.”

Credit: David Wood, aided by Midjourney AI

Amodei adds: “It is by speeding up the whole research process that AI can truly accelerate biology.”

He continues: “I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. …I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.”

Amodei highlights the power of intelligence to transcend the limitations of its data: “You might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little. This seems implausible to me – I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world”. Replace the “large group of really smart people” by an artificial superintelligence, and Amodei expects progress in science to rocket forward.

It’s an attractive vision, and I urge everyone to read Amodei’s entire essay carefully. (It covers many more topics than I can address in this article.)

But in case anyone is inclined to deprioritize existing research into promising lines of rejuvenation biotechnology, I have four remaining concerns: three negative and one strongly positive.

Three concerns and a huge opportunity

My first concern is that the pace of progress in AI capabilities will significantly slow down. For example, the data scaling laws may hit an impasse, so that applying more data to train new AI systems will fail to create the kind of superintelligence expected.

Personally I think that such a “wall” is unlikely, especially since AI developers have many other ideas in mind for how AI could be improved. But the possibility needs to be considered.

Second, it’s possible that AI capabilities will continue to surge ahead, but the resulting AI systems get involved in catastrophic harm against human wellbeing. In this scenario, rather than the AI curing you and me of a fatal condition – aging – it will cause us to die as a side-effect of a bad configuration, bad connectivity to fragile global infrastructure, an alien-like bug in its deep thinking processes, or simple misuse by bad actors (or naïve actors).

The leaders of the corporations which are trying to create artificial superintelligence – people like Demis Hassabis, Dario Amodei, Sam Altman, Elon Musk, Ben Goertzel, and a number of Chinese counterparts – say they are well aware of these dangers, and are taking due care to follow appropriate safety processes. But creating artificial superintelligence is an intensely competitive race, and that risks corners being cut.

Credit: David Wood, aided by Midjourney AI

Third, the public may, very reasonably, demand more safeguards against the kind of suicide race just depicted. Specifically, an agreement might be reached by the USA and China, with the support of many other countries, that all progress towards artificial superintelligence should be blocked.

This agreement, with appropriate monitoring and enforcement mechanisms, would have the same effect as in the first concern above: AI progress hits a wall. But this time, it will be a wall imposed by regulations, rather than one intrinsic to the engineering of AI.

Some critics have responded that the chances of such an agreement being reached and adopted are very slim. However, I disagree – on account of both a stick and a carrot.

The stick is the growing public awareness of the catastrophic risks that new generations of AI bring. (That awareness is still on the slow part of the exponential growth curve, but may well accelerate, especially if there is a scandalous disaster from existing AI systems, something like an AI Chernobyl.)

The carrot is a clearer understanding that all the benefits we want from artificial superintelligence can also be obtained from an AI with humbler powers – an AI that:

  • Is only modestly more capable than today’s best AIs
  • Lacks any possibility to develop autonomy, sentience, or independent volition
  • Will remain a passive, safe, but incredibly useful tool.

In a moment, I’ll say more about this huge opportunity. But first, let me interject an analogy about the choices facing humanity, as we contemplate how we might manage AI.

Peaceful progress or violent overthrow?

“Tear down the barricades!”

“Expropriate the expropriators!”

“Lock up the élites!”

“String up the capitalists!”

“Overthrow the ruling class!”

Such are the calls of revolutionaries in a hurry. However, the lesson of history is that violent revolutions tend to end up “devouring their own children” – to quote a phrase spoken by Jacques Mallet du Pan (referring to the French Revolution sending its original leaders to the guillotine) and also by former Hitler loyalist Ernst Röhm.

Similar remarks could have been uttered by many of the one-time supporters of Vladimir Lenin or Joseph Stalin, who subsequently found themselves denounced and subject to show trials.

However, the saying is not entirely correct. Some revolutions avoid subsequent internal bloodbaths: consider the American War of Independence, and the Glorious Revolution of 1688 in England.

When revolutionaries uphold principle ahead of power-seeking, maintain a clear grip on reality (rather than becoming lost in self-deception), and continue to respect wise process (rather than allowing dictatorial leaders to do whatever they please), a revolution can lead to sustained progress with increased human flourishing.

Now consider the difference between what can be called “democratic socialists” and “Marxist-Leninists”. The former highlight ways in which the plight of the working class can be alleviated, stage by stage, through gradual societal reform. The latter lose patience with such a painstaking approach, and unleash a host of furies.

In case it’s not clear, I’m on the side of the democratic socialists, rather than the would-be revolutionaries who make themselves into gods and absolute arbiters.

For how humanity chooses to develop and deploy AI, I see the same choice between “harness accelerationists” and “absolute accelerationists”.

Harness accelerationists wish to apply steering and brakes, as well as pressing firmly on the throttle when needed. 

Absolute accelerationists are happy to take their chances with whatever kind of AI emerges from a fast and furious development process. Indeed, the absolute accelerationists want to tear down regulation, lock up safety activists, and overthrow what they see as the mediocrity of existing international institutions.

Once again, in case it’s not clear, I’m on the side of harnessing acceleration. (Anyone still on X aka Twitter can see the “h/acc” label in my name on that platform.)

Harnessing requires more skill – more finesse – than keeping your foot pressed hard to the floor. I understand why absolute accelerationists find their approach psychologically comforting. It’s the same appeal as the Marxist promise that the victory of the working class is inevitable. But I see such choices as being paths toward humanitarian catastrophe.

Credit: David Wood, aided by Midjourney AI

Instead, we can proceed quickly to solving aging, without awaiting the emergence of a hopefully benevolent god-like AI.

Solving aging – without superintelligence

Above, I promised three concerns and one huge opportunity. The opportunity is that it’s pretty straightforward to solve aging, without waiting for a potentially catastrophically dangerous artificial superintelligence. There are low-hanging fruits which aren’t being picked – in part because funding for such projects is being diverted instead to AI startups.

Aging occurs because the body’s damage-repair mechanisms stop working. Our metabolism runs through countless biochemical interactions, and low-level biological damage arises as a natural consequence – due to injuries inflicted by the environment, bad lifestyle choices, the inevitable side-effects even of good lifestyle choices, or (perhaps) because of programmed obsolescence. When we are young, lots of that damage is routinely repaired or replaced soon after it occurs, but these replacement and repair mechanisms lose their effectiveness over time. The consequence is that our bodies become more prone to all sorts of disease and infirmity. That’s aging.
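The qualitative pattern just described can be captured in a toy numerical model. To be clear, this is my own illustrative sketch, not a model from the article or from any biology literature: damage accrues at a steady rate, while repair capacity declines with age, so net damage stays low when young and then climbs steeply.

```python
def damage_trajectory(years=90, accrual=1.0, repair0=0.98, repair_decline=0.01):
    """Toy model: each year, new damage arrives; the body repairs a fraction
    'repair' of the running total; repair capacity itself wanes with age."""
    damage, repair, trajectory = 0.0, repair0, []
    for _ in range(years):
        damage += accrual                            # new low-level damage this year
        damage *= (1 - repair)                       # unrepaired fraction persists
        repair = max(0.0, repair - repair_decline)   # repair capacity declines
        trajectory.append(damage)
    return trajectory

traj = damage_trajectory()
```

In this sketch, net damage at age 80 is an order of magnitude above its level at age 20 – not because more damage arrives, but purely because repair wanes. That is the leverage point the damage-repair strategy targets.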

The most promising path to solving aging is to comprehensively reinforce or complement these damage-repair mechanisms. The low-hanging fruit is that we have a long list of ways this might be achieved:

  • By taking inspiration from various animal species in which at least some of the damage-repair mechanisms are better than in humans
  • By understanding what’s different about the damage-repair mechanisms in ‘human superagers’
  • By designing and applying new interventions at the biotech or nanotech levels.

To be clear, this does not mean that we have to understand all of human biological metabolism. That’s horrendously complicated, with numerous side-effects. Nor do we even need to understand all the mechanisms whereby damage accumulates. Instead, we just need to observe, as engineers, what happens when new damage-repair mechanisms are applied in various animals.

These mechanisms include senolytics that clean up senescent cells (sometimes called “zombie cells”), extending telomeres at the ends of chromosomes, reversing some of the epigenetic alterations that accumulate on our DNA, introducing specially programmed new stem cells, nanoparticles which can break up accumulated plaques and tangles, re-energising the mitochondria within our cells – and much more.

In each case, some useful research is being done on the viability of introducing these repair mechanisms. But nothing like enough.

We especially need tests of the long-term effects of damage-repair mechanisms, above all when they are applied in combination. These tests can determine something that even an artificial superintelligence would find difficult to predict by meditating in a void: which damage-repair interventions will positively synergize with each other, and which ones have antagonistic effects.
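One reason such combination tests are demanding is sheer combinatorics: testing every subset of even a handful of interventions requires many experimental arms. A quick sketch makes the point – the intervention names below are hypothetical placeholders for illustration, not any organisation’s actual protocol:

```python
from itertools import combinations

# Hypothetical intervention names, for illustration only.
interventions = ["senolytic", "telomere extension",
                 "stem-cell infusion", "mitochondrial booster"]

# Every subset of the interventions, from the untreated control
# arm (the empty tuple) up to all four combined.
arms = [combo
        for r in range(len(interventions) + 1)
        for combo in combinations(interventions, r)]

print(len(arms))  # 2^4 = 16 experimental arms for just four interventions
```

Four interventions already mean 16 arms; ten would mean 1,024. This is why real-world combination data is so scarce, and so valuable.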

These are the kind of tests being pursued by one organisation where I need to declare an interest: the Longevity Escape Velocity Foundation (LEVF), where I have a role on the leadership team, and whose underlying ideas I have supported for nearly 20 years since first coming across them in meetings of what was the forerunner of London Futurists.

LEVF is carrying out a number of extended projects on large numbers of mice, involving combining treatments that have already been proven to individually extend the lifespan of mice treated from middle age. Interim results of the first such project, RMR1, can be reviewed here (RMR = Robust Mouse Rejuvenation), and plans for the second one, RMR2, have been posted here.

Rather cheekily, may I suggest that the 1967 slogan of the Beatles, All you need is love, got two letters wrong in the final word?

Credit: David Wood, aided by Midjourney AI

Two scenarios for trying to solve aging

To conclude, I envision two competing scenarios ahead, for how aging should be solved:

  • An “AI first” strategy, in which important research into rejuvenation biotechnology is starved of funding, with money being preferentially allocated to general AI initiatives whose outcomes remain deeply uncertain.
  • A “damage-repair research now” strategy, in which projects such as RMR2 receive ample funding to proceed at pace (and, even better, in multiple different versions in parallel, including in animals larger than mice), with the data produced by such experiments then being available to train AIs which can complement the ingenuity of pioneering human researchers.

What’s your pick?

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How Memory Stays Sharp: The Role of KIBRA and PKMζ in Long-Term Memory Maintenance

Introduction

Why do some memories linger for a lifetime while others vanish soon after forming? Neuroscientists have long grappled with this puzzle, especially given the brain’s constant renewal of its molecular components. A recent breakthrough study has illuminated a key aspect of this phenomenon: the interaction between two proteins, KIBRA and PKMζ, provides a scaffolding that stabilizes memory. This remarkable discovery not only explains how long-term memory persists but also opens potential avenues for treating memory-related disorders like Alzheimer’s disease.

The Challenge of Long-Term Memory Retention

Memory is stored in the brain through the strengthening of synaptic connections, a process known as long-term potentiation (LTP). However, the machinery that sustains these connections, from proteins to receptors, is continually replaced. This turnover creates a paradox: how can memories persist for decades if the molecular components of the brain are so short-lived? Decades ago, Francis Crick suggested that stable interactions between molecules might hold the key to this mystery. Building on this idea, researchers focused on KIBRA, a scaffolding protein tied to memory performance, and PKMζ, a kinase known for its role in sustaining synaptic changes.

Strong synaptic stimulation facilitates formation of persistent KIBRA-PKMζ complexes in late-LTP maintenance (Credit: Tsokas et al., “KIBRA Anchoring the Action of PKMζ Maintains the Persistence of Memory.”)

KIBRA and PKMζ: The Anchors of Memory

KIBRA and PKMζ work together to maintain the stability of synaptic connections crucial for memory. PKMζ, an atypical kinase, is persistently active, bypassing the need for external activation, and plays a vital role in reinforcing the connections at memory-encoded synapses. KIBRA serves as a docking station for PKMζ, anchoring it at synapses to ensure that the structural changes necessary for memory endure. Without KIBRA’s anchoring action, PKMζ cannot localize effectively, causing the connections to weaken and memories to fade.

Experiments highlighted how synaptic activity intensifies the interaction between KIBRA and PKMζ. Using cutting-edge proximity ligation assays, researchers observed that strong neuronal stimulation led to the accumulation of KIBRA-PKMζ complexes at active synapses. These complexes were particularly concentrated in regions of the brain linked to learning and memory, such as the hippocampus.

Testing the Limits of the KIBRA-PKMζ Partnership

The study probed deeper by introducing antagonists that blocked the interaction between KIBRA and PKMζ. The consequences were striking: blocking this interaction disrupted established LTP while leaving basal synaptic transmission unaffected. This finding underlined the selective importance of the KIBRA-PKMζ complex in maintaining memory-associated synaptic changes.

To explore how this disruption impacted behavior, the researchers tested mice trained in spatial memory tasks. Mice conditioned to avoid a specific area of their environment lost this memory after the introduction of an inhibitor that blocked KIBRA-PKMζ coupling. However, their ability to learn and form new memories remained intact. This specificity highlights the unique role of KIBRA and PKMζ in stabilizing existing memories, distinguishing it from the processes involved in learning.

A Molecular Mechanism That Defies Turnover

One of the study’s most intriguing findings was how the KIBRA-PKMζ interaction enables memory to persist despite the turnover of individual molecules. The research confirmed that while the proteins themselves are replaced within hours or days, the interaction between them is continuously re-established at memory-relevant synapses. This dynamic process ensures that the “scaffold” supporting memory remains intact even as its components are renewed.

The researchers proposed a model of “persistent synaptic tagging,” where KIBRA functions as a tag that attracts newly synthesized PKMζ molecules to the right locations. This process sustains the structural integrity of memory-encoded synapses, making it possible for memories to outlast the lifespan of their molecular building blocks.

Implications for Memory Disorders and Treatment

The discovery of the KIBRA-PKMζ interaction has profound implications for understanding and treating memory-related conditions. If this partnership is essential for maintaining memory, then therapies aimed at enhancing or mimicking this interaction could potentially combat age-related memory decline or neurodegenerative diseases like Alzheimer’s.

One promising avenue involves developing drugs that stabilize or amplify the KIBRA-PKMζ connection, strengthening synaptic changes and improving memory retention. Conversely, selectively disrupting this interaction could help erase traumatic memories, offering hope for conditions such as post-traumatic stress disorder (PTSD). The study also raises the possibility of targeted therapies that activate PKMζ in specific brain regions, enhancing memory resilience in the early stages of dementia.

A Glimpse Into the Future of Memory Research

The partnership between KIBRA and PKMζ represents a significant leap forward in our understanding of how the brain preserves memories over time. By acting as a molecular scaffold, these proteins ensure the stability of synaptic changes that underpin long-term memory. This discovery not only sheds light on the biological basis of memory but also offers a blueprint for future research and therapeutic innovation.

As scientists continue to unravel the intricacies of this molecular duo, the potential for groundbreaking treatments becomes clearer. From enhancing memory in aging populations to mitigating the effects of neurodegenerative diseases, the KIBRA-PKMζ connection offers a promising pathway for addressing some of the most pressing challenges in neuroscience.

Credit: Tesfu Assefa

Conclusion

Memory, once thought to be an ephemeral and mysterious phenomenon, is now being understood at the molecular level. The discovery of the KIBRA-PKMζ partnership provides a compelling explanation for how the brain defies the constraints of molecular turnover to preserve memory. This research not only advances our understanding of the brain’s inner workings but also points toward a future where memory loss is no longer an inevitable consequence of aging or disease.

Through continued exploration of this groundbreaking mechanism, we inch closer to unlocking the full potential of memory science and delivering tangible benefits to those affected by its decline. The intricate dance between KIBRA and PKMζ reminds us that even the smallest molecular interactions can have profound implications for the human experience.

Reference

Tsokas, Panayiotis, Changchi Hsieh, Rafael E. Flores-Obando, Matteo Bernabo, Andrew Tcherepanov, A. Iván Hernández, Christian Thomas, et al. “KIBRA Anchoring the Action of PKMζ Maintains the Persistence of Memory.” Science Advances 10, no. 26 (June 26, 2024). https://doi.org/10.1126/sciadv.adl0030.
