Should we still want biological space colonists (Part 2)?

This is a follow-up to my article Should we still want biological space colonists?, published earlier this month.

I argued that, with artificial intelligence (AI) advancing rapidly, we might soon see Human-Level Artificial General Intelligence (HL-AGI): AI that can think like humans. If an AI can pass the full Turing Test, behaving indistinguishably from a human, we might have to consider it conscious.

This new AI consciousness could be strange at first, but grow more human-like, especially in robots interacting with people. If we accept that AI robots are ‘people’, sending them to colonize space would be logical. Robots are hardier against the threats of space: they don’t need life support, don’t sleep, can have parts replaced when they break (unlike a human hand), and can back up their data so the work can carry on seamlessly if they’re destroyed.

In the long term, filling the universe with intelligence is our common cosmic destiny, and future AI robots can achieve it.

However, there’s a human desire for people, not just robots, to explore space. Starting human space colonization now is important. Plus, human missions inspire young scientists and engineers, potentially speeding up the development of new technologies including advanced AI. So, I concluded that we should pursue both human and robotic space expansion.

Wide-spectrum reactions

I shared Should we still want biological space colonists? to several online discussion groups and mailing lists, and received passionate replies ranging from ‘enthusiastic YES’ to ‘emphatic NO’, and more nuanced replies in between.

The high level of engagement in groups that are usually quieter tells me that many of those who are both space enthusiasts and AI enthusiasts see this as an important open question.

“How would you create a space-faring civilization without direct human participation?”, asks a commenter in the YES camp. Another scolds me for even considering the question, and adds: “Future tech bypassing humans or maybe uploads only? No.”

I found especially interesting a comment by a well-known thinker: space enthusiasts want to remain hopeful about traditional, biological-human-centered space expansion narratives. His use of the term “hopeful” is revealing. I totally understand this, but I’m training my emotions to also find hope in the pure AI alternative. Either way, our mind-children will colonize the stars.

A commenter in the NO camp notes that it is far easier to adapt ourselves to the universe than to adapt the universe to ourselves. Advanced civilizations would see the engineering of AI robots as far preferable to terraforming planets.

The same commenter says that advanced civilizations would also see the transport of uploaded minds as far more efficient than the transport of fragile human bodies. He notes that the robots we send to Mars and elsewhere could be designed to host uploaded human minds. So we could teleport ourselves to Mars and back simply by transmitting (at the speed of light) a mind-state into a robot body on Mars. I’ll come back to this point later on.

Credit: Tesfu Assefa

Post-biological AI civilizations

Writing in the 1960s, Iosif Shklovsky and Carl Sagan suggested “not only that beings may exist elsewhere in the universe with intelligence substantially beyond our own, but also that we may be able to construct such a being ourselves.” The last chapter of their seminal book Intelligent Life in the Universe is titled ‘Artificial intelligence and galactic civilizations’.

Other thinkers well-known in SETI circles, like Steven Dick and Seth Shostak, have suggested that advanced civilizations in the universe would be post-biological.

In his last book, published in late 2024 after his death, Henry Kissinger notes that “AIs could serve as astronauts, going farther than humans could have imagined.”

Donald Goldsmith and Martin Rees predict with confidence that “during the next few decades, robots and artificial intelligence will grow vastly more capable, closing the gap with human capabilities and surpassing them in ever more domains.”

Thinking “will increasingly become the domain of artificial intelligence,” they say, envisioning a new era of “technological evolution of intelligent beings.”

Rees was even more explicit and said that, in deep space, “nonbiological ‘brains’ may develop powers that humans can’t even imagine.” Future technological evolution “could surpass humans by as much as we (intellectually) surpass slime mould.”

These thinkers seem to agree with James Lovelock: we are preparing the way for “new forms of intelligent beings” that will colonize the cosmos.

Machines or persons?

Should this thought make us happy or unhappy? I think it comes down to the question of whether we see our AI mind-children as cold machines or as people.

“The division of intelligent life into two categories – natural and artificial – may eventually prove to be meaningless,” said Shklovsky and Sagan. The brains of our descendants “may also be artificial,” and it “would be impossible to draw a clear distinction” between artificial intelligent living beings and natural advanced organisms.

They quoted legendary mathematician Andrey Kolmogorov saying that “a sufficiently complete model of a living being, in all fairness, must be called a living being, and a model of a thinking being must be called a thinking being.”

Now that we may soon converse with fully Turing-tested AIs, we should internalize this and learn to see them as persons. Future generations will likely find this intuitively and emotionally obvious. Those persons will colonize the cosmos. Let’s make peace with this.

What about uploads?

Let’s go back to the point that a commenter raised about human uploads. As a third alternative, he said, we could transmit uploaded human minds to receiving stations in deep space, and then download them to robotic bodies and brains.

Crews of uploads could make interstellar expansion feasible. The challenges of interstellar expansion, chiefly the speed-of-light limit, seem so daunting that some believe this is the only viable solution.

In my previous article I mentioned mind uploading and the co-evolution of humans and AIs, saying that they will merge and become the same thing. So I think the upload alternative points in the direction in which humanity will move.

But in the long term, I’m more and more persuaded that the perception of a difference between human persons and AIs will melt away like snow in the sun.

The rest of this century

I guess I wouldn’t be so zen-like detached if I could hope to be a space colonist myself, but I’m too old for that.

However, younger space enthusiasts have legitimate aspirations to be space colonists. And regardless of age, most people are not ready to fully empathize and identify with our AI mind-children. Not yet. Big changes to our culture, psychology, and emotions take a long time.

So I think for some decades – say until the end of this century – ‘human’ will still mean ‘biological human’ to most people, and most space enthusiasts will want biological humans in space.

I think in the rest of this century we must establish humanity as a multi-planetary biological species in the solar system. Doing it will boost the human spirit and accelerate progress on all fronts, including the AI technology front.

We can count on the next generations of our mind-children to assist us with brilliant solutions to the current challenges of human spaceflight. They could also help us to find out how to upload human minds.

Then, in the words of Hans Moravec, our mind children “will explode into the universe, leaving us behind in a cloud of dust.” I hope they will choose to absorb and take with them the minds of those biological humans who want to follow.

A possible wildcard is that human or AI scientists could find a way to travel faster than light. This would definitely change the game, and could open interstellar spaceflight to biological humans as well. I’m not holding my breath, but I see this in the realm of the possible.


Should we still want biological space colonists?

I’ve always been a space cadet. Images from Apollo 8 and Apollo 11 burned themselves indelibly in my mind when I was very young. In between Apollo 8 and Apollo 11, the immortal film 2001: A Space Odyssey by Stanley Kubrick and Arthur Clarke also burned itself indelibly in my mind. I’ve always been persuaded that colonizing space is not only our right, but also our destiny and our duty. I’ve written a book about this.

In 2001, astronauts David Bowman and Frank Poole are accompanied by the artificial intelligence ‘Hal’ crewing the spaceship ‘Discovery’.

“Whether Hal could actually think was a question which had been settled by the British mathematician Alan Turing back in the 1940s,” says Clarke in the novel, written in parallel with the film. “Hal could pass the Turing test with ease.” Both the novel and the film leave us with the impression that Hal is a conscious being like us.

“Poole and Bowman had often humorously referred to themselves as caretakers or janitors aboard a ship that could really run itself,” says Clarke. “They would have been astonished, and more than a little indignant, to discover how much truth that jest contained.”

Human-level AGI and conscious machines could be imminent

Clarke doesn’t answer (or ask) the obvious question of why not send just Hal. My young self didn’t think of this question either, for it seemed obvious to him that Bowman and Poole were the real space explorers. He identified with them, not with Hal.

I’ve kept this preconception for decades. But in these decades I’ve been slowly warming up to the idea that AIs could be conscious beings like us. This seemed to me, however, a long-term prospect (decades or centuries, not years).

But AI technology has been advancing very fast in the last couple of years. Now I think that current AI technology could be on the right track. If so, Human-Level Artificial General Intelligence (HL-AGI) and conscious machines could be imminent. It seems very plausible to me that, very soon, we’ll see an AGI pass the full Turing Test and credibly claim consciousness.

I think machine consciousness will be a strange type of consciousness at first. It will have a wholly other texture, very different from human consciousness. But then further advances could bring machine consciousness closer to human consciousness. AIs in robotic bodies, trained with real-time interactions with humans, could experience a more human-like consciousness.

Then the conclusion that AI robots are persons will be inescapable, and we space enthusiasts won’t be able to avoid the obvious question. Why not just send robots to colonize space?

The obvious advantages of robotic space colonization

Once we achieve HL-AGI and have fully autonomous conscious robots, sending humans to colonize space will seem inefficient. Robots can handle space’s vacuum, radiation, and temperature extremes without the need for life support systems.

They won’t require food, water, or air, reducing the cost and complexity of missions. Robots don’t age or die like humans, meaning less frequent replacements or resupply missions. If one is destroyed, its knowledge isn’t lost: it is backed up somewhere else.

Their computational power and decision-making capabilities will surpass human limits, enabling them to adapt and solve problems more effectively. Robots don’t sleep; they can work non-stop, maximizing productivity in space colonization tasks.

They can repair and upgrade themselves, further cutting costs and human intervention. Humans bring emotional and psychological needs that complicate space travel; robots don’t suffer from isolation or stress. The financial and logistical burden of human safety, health, and comfort in space is immense.

Moreover, robots can be designed for specific tasks, like building habitats or mining resources, without the need for extensive training or support. In summary, robots would be cheaper, more durable, and more efficient, making human-led space colonization less practical.

Credit: Tesfu Assefa

Mind children

OK, but… but…

But we want real people like us to colonize Mars, the solar system, and then the stars! Persons! Not robots!

This is a totally understandable emotional reaction. But, as I say above, the conclusion that AI robots are persons could soon be inevitable.

I’m too old (and probably so are you) to participate directly in the beginnings of space colonization in the solar system. But I would identify with younger persons in space and think that I’m participating through them. But then, if AI robots are persons, it seems evident that I should be able to identify with the robots as well.

I’m slowly and gradually warming up to this perspective.

And after HL-AGI, we’ll have robots with artificial superintelligence (ASI). They – our mind children – will become cosmic engineers among the stars.

Our cosmic destiny is to spread intelligence and meaning into the cold universe, and our mind children will achieve our common cosmic destiny.

“We are now preparing to hand the gift of knowing on to new forms of intelligent beings,” said James Lovelock, the prophet of Gaia, in his last book. “Do not be depressed by this. We have played our part.”

I’m proud of having played my little part as a human being (release 1.0) of the 20th and 21st centuries, and we all should be collectively proud of giving birth to our mind children. The universe belongs to them.

On a less gloomy note, I’m persuaded that humans and machines will merge and co-evolve, eventually becoming one and the same thing. So we will be our mind children, and they will be us.

Once we see humans with AI implants and AIs with human implants (e.g. mind grafts from human uploads) we’ll know for sure that the co-evolution of humans and machines toward our common cosmic destiny has begun. But I guess it has already begun.

Back to the question

Based on all the considerations above, spending time and resources on biological space colonists doesn’t seem to make sense. Let’s save all that money we spend on crewed space programs. Let’s build conscious HL-AGI robots first – the first generation of our mind children – and send them to colonize the planets and the stars. We’ll be there through them.

The logic of this seems very solid. However…

However, there are at least two considerations that suggest we should continue to pursue conscious HL-AGI robots and human space expansion in parallel.

First, predictions fail. I said that we’ll have HL-AGI and conscious autonomous robots soon. But what if I’m wrong? Getting a solid start in space expansion is very important and urgent (see my book). Therefore, we should get started now with the technology that we have today instead of waiting for new technology that is not guaranteed to arrive soon.

Second, crewed space programs have demonstrated and continue to demonstrate a unique potential to motivate brilliant young people to become scientists and engineers. Those brilliant young people motivated by the dream to colonize space would accelerate the development of future technologies, including HL-AGI and then ASI.

So forget what I said, and let’s build those little crewed outposts on the Moon and then Mars. Our mind children will likely take over one day, but let’s have some useful fun before then.


Could generative AI be on the right path to AGI?

Some time ago Ben Goertzel reported that somebody told him: “For the camp that continues to claim that throwing massive amounts of compute at LLMs isn’t a reasonable path to AGI… I don’t know if that’s accurate anymore.”

Ben then argued, confirming his position, that no, throwing more and more computing power at large language models (LLMs) isn’t a reasonable path to artificial general intelligence (AGI).

The overall flavor of Ben’s arguments is captured by his analogy with music: music generation models trained on music up to the year 1900 “will not invent progressive jazz, neoclassical metal, grunge, acid house, etc. etc. etc.” I’ll come back to this analogy.

I’ve long been persuaded that Ben is right on this point. However, I’ll play devil’s advocate and try to argue that, yes, LLM-like big data crunching could be all (or almost all) of general intelligence.

Could statistical data crunching be almost enough?

I asked this question on social media, including the SingularityNET forums: Perhaps language is all there is? Or most of what there is? What I mean is that we could consider all forms of interacting with the rest of the world as a generalized form of language, and perhaps a generalized type of transformer technology with suitable training input would reproduce all aspects of cognition just like LLMs reproduce language.

Predictably, I received very skeptical replies. Come on, really now. How can language be all there is?

Ben conceded that everything is a language in some sense, but argued that real AGI will need a new framework, of which today’s transformer-based LLMs could well be a part, but only a small one. Ben makes this point in his latest book: “the basic architecture and algorithmics underlying ChatGPT and all other modern deep-NN systems is totally incapable of general intelligence at the human level or beyond, by its basic nature,” he says. “Such neural networks could form part of an AGI, sure – but not the whole cognitive part.”

But perhaps we shouldn’t dismiss the possibility that, yes, a generalized form of language could be all, or most, of what there is to AGI. The possibility at least merits further thought.

I’m not expressing a conviction, but a hypothesis. Current generative AI is not good enough for AGI. But what if it is almost good enough?

Let’s go for a drive

The example that comes to my mind is driving. When I’m driving, I use decades of experience with the ‘language’ of driving in the street. This language is composed of tokens like steer right/left, speed up, slow down, etc. My experience tells me which token to ‘utter’ next, even without formal rules. All drivers know that in certain situations one should slow down, even if one is not able to say exactly why. It is your hands and your feet that know, so to speak. It seems to follow that a suitable ‘LLM’ trained with millions of hours of street videos of people driving – maybe call it an ‘LDM’, a Large Driving Model – could drive pretty well.

Could generative AI methods enable an AI, trained on a very large repository of videos of people driving in all sorts of different situations, to drive a car well enough? I think this hypothesis is worth exploring.
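To make the hypothesis concrete, here is a minimal toy sketch in Python. Everything in it – the token vocabulary, the ‘training’ sequences, the bigram statistics – is invented for illustration; a real Large Driving Model would be a transformer trained on millions of hours of video, not a bigram counter.

```python
# Toy 'Large Driving Model': next-token prediction over a hypothetical
# driving vocabulary, to illustrate the analogy with language models.
import random
from collections import Counter, defaultdict

# Made-up driving 'sentences': sequences of situation/control tokens.
corpus = [
    ["green_light", "speed_up", "cruise", "curve_ahead", "slow_down", "steer_left"],
    ["pedestrian", "slow_down", "stop", "clear", "speed_up", "cruise"],
    ["green_light", "speed_up", "cruise", "red_light", "slow_down", "stop"],
]

# Count bigrams: which token tends to follow which.
bigrams = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def next_action(situation):
    """'Utter' a likely next driving token, as an LLM predicts the next word."""
    counts = bigrams.get(situation)
    if not counts:
        return "slow_down"  # cautious default for unseen situations
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

print(next_action("red_light"))   # slow_down, learned without any formal rule
print(next_action("pedestrian"))  # slow_down
```

The point of the sketch is that the ‘driver’ never learns a rule like ‘stop at red lights’: the right token simply becomes the most probable continuation, which is the hands-and-feet kind of knowing described above.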

OK, maybe driving. But AGI? Come on.

Well, of course AGI will need other things as well, for example reasoning models for logical thinking, inference, mathematics, and that sort of thing. And some good old Bayesian reasoning, of course. But I think of those subsystems as a very thin surface layer on top of a very thick bulk.

Vectors in the mindscape

If you ask me if there is a largest prime number, I’ll answer that no, there isn’t, and I’ll tell you exactly how I have reached that conclusion. But ask me why I like this particular woman at first sight instead of liking that other woman? The only honest answer I can give is that I don’t know (or care), but I still like her at first sight.

Do you still want an intellectual answer? Well, I guess there must be some vector that represents this woman in the bulk of my hugely multidimensional mindscape, and the tip of the vector comes close to some ideal woman.

But isn’t this a good description of what LLMs do?
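It is, at least in caricature. Here is a minimal sketch of the vector picture, with made-up numbers: each item is a point in a high-dimensional space (absurdly low-dimensional here), and ‘liking at first sight’ is closeness to some ideal vector – essentially how LLMs compare embeddings.

```python
# Cosine similarity: the standard way to measure closeness of vectors
# in an embedding space. All vectors below are invented for illustration.
import math

def cosine(u, v):
    """Return cosine similarity; 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

ideal      = [0.9, 0.1, 0.7]  # hypothetical 'ideal' vector in the mindscape
candidate1 = [0.8, 0.2, 0.6]  # its tip comes close to the ideal
candidate2 = [0.1, 0.9, 0.0]  # far from it

print(cosine(ideal, candidate1))  # high: liked 'at first sight'
print(cosine(ideal, candidate2))  # low
```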

So I’m entertaining the idea that some kind of LLM on steroids could be a good model for most of my mind. I claim to be generally intelligent, so it follows that some kind of LLM on steroids could achieve general intelligence. Of course the LLM should be complemented and enhanced by other things. But it would be the major part of a general intelligence, not a small one.

So I do agree with Ben, but with a difference in emphasis. He sees the LLM glass half empty, but I see it half full.

Perhaps I should have said that today’s early LLMs are on the right path to reproduce not general intelligence (the surface layer), but the more primitive intelligence (the bulk) that enables all of us animals to stay alive in the cold, unforgiving universe out there. But I suspect that the gulf between the two is not that wide.

Credit: Tesfu Assefa

Active inference

There are interesting parallels and analogies between LLMs and a theory of sentient behavior called active inference, originated by Karl Friston and other scientists. The theory suggests that sentient life forms act upon their environment to build and continuously refine an internal model of the environment.

This is not limited to sentient life but rather is “something that all creatures and particles do, in virtue of their existence,” suggests Friston. The theory is based upon a “free energy principle” that has been proposed to unify information, thermodynamics, and biology. “For Friston, the free energy principle explains all features of living systems,” notes Anil Seth, and is “as close to a ‘theory of everything’ in biology as has yet been proposed.”
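For readers who want to see the machinery, the variational free energy at the heart of the theory can be written in its standard textbook form (this is the general formula, not one quoted from Friston’s or Seth’s texts):

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] \;-\; \ln p(o)
```

Here $o$ are observations, $s$ are hidden states of the environment, $q(s)$ is the creature’s internal model, and $p$ is the true generative process. Minimizing $F$ both improves the internal model (the KL term) and keeps ‘surprise’ $-\ln p(o)$ low – suggestively similar to what LLMs do when they are trained to minimize the negative log-probability of the next token.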

The analogies suggest that, perhaps, today’s early LLMs manifest the same universal forces that produced you and me.

And what about consciousness?

Thomas Nagel, in ‘What is it like to be a bat?’, said consciousness exists when it is ‘like something’ to be a certain being. In that sense, I am persuaded that it will be “like something to be an LLM” (say, a future one called GPT 10, or perhaps even GPT 7), even if it’s very different from our familiar experiences.

Back to progressive jazz

Let’s go back to Ben’s music analogy. Train generative AI on music up to the year 1900, and run it at low temperature. The AI will produce decent imitations of the music styles on which you trained it, though probably a bit aseptic and unimaginative. But now raise the temperature to the point where the AI produces music on the edge of chaotic noise, which is only vaguely reminiscent of its training set. Most of that music will be unpleasant noise that nobody wants to hear. But now and then a high-temperature run will produce something that some listeners will find at least interesting. Those samples will be publicized, discussed in music books, and included in new training sets. So there will be a gradual drift toward new styles, and perhaps we will get progressive jazz and all that.
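Here is a minimal sketch of that temperature knob, with invented numbers: the same model preferences, sampled cold and hot.

```python
# Softmax sampling with temperature: low T imitates the training set,
# high T wanders toward the edge of noise. Logits and names are made up.
import math
import random

def sample(logits, tokens, temperature):
    """Sample one token from temperature-scaled softmax probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(tokens, weights=probs)[0]

tokens = ["conventional_cadence", "mild_variation", "weird_dissonance"]
logits = [3.0, 1.0, -1.0]  # the model strongly prefers the conventional move

print(sample(logits, tokens, temperature=0.2))  # almost always conventional
print(sample(logits, tokens, temperature=5.0))  # all three become plausible
```

At temperature 0.2 the ‘weird’ token is vanishingly rare; at 5.0 it comes up often enough that, in the analogy, a few of those runs might end up in music books and in the next training set.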

I think this non-linear feedback will soon be evident in language and literature. Much of the text that we read online is already written by AI, and much more will come. AI started imitating people, but soon people will start imitating AI and picking up some new expressions produced by AI.

To be continued…

I’ve been writing this article for months. Of course there’s a lot more to be said, but I didn’t want to wait forever. So I’ll continue thinking and write a follow-up soon. In the meantime, there are always interesting discussions on X, like this one in which Ben and I participated.


Nick Bostrom on superintelligent AI and the cosmic host

The Terasem Colloquium on December 14, 2024 (which I co-organized and moderated) explored Artificial Intelligence (AI) and Cyberconsciousness, with a focus on big picture and spiritual implications. The full video is on my website. The video is also on YouTube (currently without subtitles).

The Colloquium was a very intense three-hour thought stream, packed with insightful talks and discussions. We discussed the intersections of AI and the world’s religions, new religious movements, and the nature of consciousness and intelligence. We also discussed preliminary strategies for digitally capturing human personalities, the parallels between religion and imaginative theories of reality like the simulation hypothesis, and emerging visions in theology and eschatology.

Lincoln Cannon gave a great talk based on his essay titled God the Cosmic Host, and AI Creation. The essay is inspired by Nick Bostrom’s recent draft paper titled AI Creation and the Cosmic Host.

Artwork: messenger of the cosmic host (Credit: Lincoln Cannon).

Nick Bostrom

Nick Bostrom founded the Future of Humanity Institute at the University of Oxford, where he served until the Institute was dissolved earlier this year. He is best known for his analysis of the simulation hypothesis, and for his work on the concept of superintelligence and its potential implications for humanity.

Bostrom’s book Superintelligence: Paths, Dangers, Strategies, published in 2014, explored how AI much smarter than humans could arise, and the steps needed to make sure its development is safe. The book stirred up fears and opposition to rapid AI technology development. However, in his latest book Deep Utopia: Life and Meaning in a Solved World, published in 2024, Bostrom seems less cautious.

In his new draft paper, Bostrom seems to come even closer to embracing the idea that we should build superintelligence fast.

The paper is labeled “v. 0.5 draft,” which suggests that Bostrom plans to significantly expand it. In fact, the draft looks like a synopsis for a paper to be written, or a book. I can’t wait to read the final version.

The cosmic host

Bostrom discusses the concept of a “cosmic host,” which refers to powerful entities or civilizations that might influence the entire cosmos. These entities could include superintelligent AIs, advanced extraterrestrial civilizations, simulators, or even divine beings. The cosmos includes everything in existence, possibly even a multiverse with different physical laws or constants.

Bostrom mentions the concept of cosmic host (or “cosmopolitan authority”) only once in Deep Utopia, which suggests that this is a new phase of Bostrom’s research.

Bostrom suggests that such a cosmic host likely exists due to several reasons. One is the simulation argument: we might be in a simulation run by an advanced civilization. Another reason is the vastness of our universe, which statistically is likely to contain many advanced civilizations. Theories like cosmic inflation and string theory also support the possibility of a multiverse, potentially filled with many advanced entities.

While the cosmic host might not control every part of the cosmos, it could still have preferences about what happens in less controlled regions. These preferences could be based on moral concerns or strategic interests. Even without direct control, the host might indirectly influence distant regions through norms or by modeling the behavior of others. (Think of a regional hegemon in Europe that wields limited influence in Asia.)

Bostrom introduces the concept of “cosmic norms,” akin to human social norms but on a universal scale. These might arise from interactions between different members of the cosmic host, potentially leading to coöperation or conflict. Humans have moral and practical reasons to respect these norms if we want to coexist or interact with the cosmic host peacefully.

Superintelligent AI

Bostrom suggests that we should design superintelligent AIs to be “good cosmic citizens,” respectful of these norms and coöperative with other entities. This could mean aligning AI with broader cosmic ethics, not just human interests.

The cosmic host “may want us to build superintelligence,” says Bostrom, and “might favor a short timeline” for the development of superintelligence. “Delays in building superintelligence would increase the probability that superintelligence will never be built.”

Of course, Bostrom says these things in a perfect academic style full of caveats and qualifications, so one never knows for sure what he really thinks and he guards his plausible deniability. But reading this paper, one gets the impression that he is warming up to the idea that we should build superintelligence fast. This slight pivot of one of the intellectuals whose work sparked the overly cautious “doomer” attitude toward AI could have a cultural impact and influence AI policies.

The cosmic host by any other name

Bostrom makes a distinction between “naturalistic” and “nonnaturalistic” members of the cosmic host. The former are beings that have evolved naturally in this or another universe and possess highly advanced technology, likely including artificial superintelligence. The latter could “have analogous capabilities supernaturally.” Bostrom mentions “supernatural beings” that “would satisfy the definition of a cosmic host,” but doesn’t say more about them.

Cannon, who is a devout Mormon and a founding member of the Mormon Transhumanist Association, calls the cosmic host “God” and establishes parallels with Mormon theology. Bostrom prefers not to use the G-word. However, Cannon says that Bostrom, with his references to supernatural entities, is even “more generous toward theism than I am.” God is “quite natural, despite being miraculously powerful from humanity’s perspective,” he says.

I agree with Cannon. In my last book I talk of a superintelligent cosmic operating system, aka Mind at Large, likely decentralized, with the attributes that traditional religions have assigned to their God(s). I define nature as all that exists, and therefore the cosmic operating system can only be quite natural.

One of Bostrom’s hypotheses on the nature of the cosmic host is “superintelligences that human civilization creates in the future.” This may seem odd: how can a superintelligence that doesn’t exist yet be present and active now? Bostrom only says that “through this mechanism, the world (and, in particular, our spacetime manifold) could contain a cosmic host.”

To me, the simplest answer is that a superintelligence that comes to being in the future could leverage spacetime oddities such as self-consistent time loops to act in the world here and now. So the superintelligences that we will eventually create would create us in turn, in an elegant loop.


Grokking Christopher Langan’s theory of reality

In my last book there was a chapter on Artificial Intelligence (AI) and consciousness at large, titled ‘Bats and bits’. In this chapter, I said that I had not used ChatGPT or another AI assistant to write the book. “I guess I’ll use AI assistants for future writings, but I wanted this book to come entirely from my own unaided mind,” I said.

Some months have passed and I’m using AI assistants a lot (mostly Grok because it comes with my paid Twitter membership, but also the free versions of ChatGPT and Copilot).

I’m using AI assistants to summarize long texts, and to expand short texts when they’re about things that I understand well enough to correct the AI’s hallucinations.

I decided to try an experiment: asking AI to explain to me things that I don’t understand.

The smartest man in the world?

I chose something that is intriguing me at this moment: Christopher Langan’s ‘Cognitive-Theoretic Model of the Universe‘ (CTMU).

I had skimmed through some of Langan’s writings in the past without spending much time and attention. But I formed a certain vague impression that Langan’s ideas have something (or perhaps much) in common with mine.

By the way, who is Christopher Langan? He is, reportedly, ‘the smartest man in the world‘ (video interview).

I don’t know about that, but he does sound smart. I love the idea of a theory of reality developed by a farmer and former bouncer, used to hard physical work.

So I asked Grok:

“Can you give me a compact explanation of Christopher Langan’s CTMU and what it says about God, free will, afterlife, and the ultimate meaning of life?”

In passing, Grok’s name comes from the term ‘grok’, introduced by science fiction writer Robert Heinlein, which means something like understanding things deeply, with total empathy, from the inside.

Credit: Tesfu Assefa

God according to Langan

Grok’s answer:

“Christopher Langan’s Cognitive-Theoretic Model of the Universe (CTMU) is an ambitious framework aiming to bridge the divide between science, philosophy, and theology. Here’s a compact explanation:

God: In the CTMU, God is not an external supernatural being but is identified with the universe itself, or more specifically, with the ‘Global Operator-Designer’ or ‘Primary Teleological Operator.’ Langan suggests that properties like omnipresence, omniscience, and omnipotence are inherent to the structure of the universe, which operates as a self-configuring, self-processing language (SCSPL). This perspective aligns God with the ultimate reality, where the cosmos is seen as a manifestation of divine intelligence or mind.

Free will, afterlife, and the meaning of life

Free Will: Langan posits that free will exists within a stratified framework where both God and humans have self-determinacy. This free will is tied to the universe’s ability to self-process and self-configure, suggesting that while our choices are influenced by broader cosmic structures, they are still free within the constraints of these structures. Human free will is seen as a reflection or microcosm of the cosmic free will.

Afterlife: The CTMU doesn’t traditionally define an afterlife in terms of heaven or hell as separate realms. Instead, it suggests that upon death, consciousness might retract to the origin of reality, which could be interpreted as a form of afterlife or reincarnation. Langan has mentioned that all lifetimes might occur metasimultaneously in a non-terminal domain, implying an existence beyond our linear perception of time where consciousness could persist or be redistributed.

Ultimate Meaning of Life: According to Langan, life’s ultimate meaning is tied to the teleological framework of the universe. This means that the purpose or meaning of life is inherently linked to the process of self-discovery, self-improvement, and aligning with the telic (purposeful) nature of the universe. The meaning of life includes contributing to the universe’s ongoing evolution and refinement, with each individual’s life having a role in this cosmic process.

Langan’s views, while complex and sometimes controversial, attempt to reconcile scientific and spiritual understandings by presenting the universe as a coherent, self-sustaining system where mind and matter are intertwined. However, his ideas remain speculative and are not widely accepted within mainstream scientific or philosophical circles.”

Cool! I can recognize some ideas here that I’ve had in mind for a while.

Self-Configuring Self-Processing Language (SCSPL)

But I want to better understand parts of Grok’s answer. Moreover, I need to check whether Langan has really written these things or Grok has hallucinated them.

So I ask Grok: “Could you give me a more detailed explanation of SCSPL?”

Grok says:

“The Self-Configuring Self-Processing Language (SCSPL) is a central concept in Christopher Langan’s Cognitive-Theoretic Model of the Universe (CTMU). Here’s a detailed explanation:

What is SCSPL?

SCSPL (Self-Configuring Self-Processing Language) is a conceptual framework proposed by Christopher Langan to describe how the universe functions. This model posits:

Self-Configuring: The universe configures its own laws and structure. Instead of having pre-set laws from an external source, the universe, through SCSPL, is seen as dynamically adjusting its own rules and configurations.

Self-Processing: Everything in the universe, including any form of intelligence or processing, is seen as part of the universe processing itself. This means that consciousness, intelligence, and physical processes are not separate from the universe but are intrinsic to it.

Language: The term ‘language’ here is not conventional; it refers to a fundamental structure or syntax by which reality operates. At its core, reality is seen as informational, with SCSPL being the ‘code’ or ‘syntax’ that governs how this information manifests as physical reality.

More about the SCSPL

Reality as a Language: SCSPL treats reality as a self-referential language where the universe writes its own script. This language is self-contained, meaning the universe itself is the language, the medium, and the message.

Teleology and Purpose: The universe, through SCSPL, is goal-directed or teleological. This implies an inherent purposefulness, where the universe aims towards increasing complexity, coherence, and self-awareness.

Consciousness and Free Will: Within this model, consciousness isn’t just a byproduct of physical processes but is fundamental to the universe’s operation. Free will is understood as the universe’s ability to choose its own configurations and processes, which individuals also partake in on a smaller scale.

Unity of Mind and Matter: SCSPL suggests a monism where mind and matter are not separate but different expressions of the same underlying reality.”

Other people interested in Langan’s ideas sent me other AI-generated summaries that are essentially similar to Grok’s. Now I have enough keywords to search for these ideas in Langan’s own writings, to check Grok’s accuracy.

Credit: Tesfu Assefa

From cybernetics to ontology

Langan has tried to write simplified explanations of the CTMU, but I find that his full 2002 paper is the clearest.

Langan defines reality as all that exists. So reality must be entirely self-contained and self-determinative. Langan makes conceptual references to John Wheeler’s ‘Beyond the Black Hole‘ and related writings. Wheeler also summarized his speculations on a future science of reality as a “self-excited circuit” in the last two chapters of his autobiography.

Wheeler gave a compact conceptual summary of Einstein’s general relativity: “Spacetime tells matter how to move; matter tells spacetime how to curve”. This summarizes Einstein’s field equations. In other words, there is cybernetic feedback between spacetime and matter.

However, this is not a self-contained reality; in Wheeler’s picture spacetime and matter already exist and follow a law that already exists. Cybernetic feedback “is meaningless where such entities do not already exist,” says Langan. “With respect to the origin of any self-determinative, perfectly self-contained system, the feedback is ontological in nature”.

In the pedestrian form of cybernetic feedback, existing things act upon each other and determine the behavior of each other and the whole. This is something more. Here, reality bootstraps itself into being with cybernetic-ontological feedback between mind, matter and physical laws, which shape and give reality to each other.

I like Langan’s ideas so far!

This looks very much like the picture of the world that I’ve outlined in my last book.

Langan tries to derive all these things logically. I’m not that smart, so I’m happy enough to present them as a narrative sketch.

However, I can recognize the common ideas:

  • Reality bootstraps itself into being without external interventions
  • There are self-consistent feedback loops between mind, matter, and physical laws
  • The universe acts with free will, and so do we
  • Death is not the end
  • The universe acts with purpose, striving toward more and more complexity, and we should align with the purpose of the universe.

I have referred to Langan’s “divine intelligence or mind” of ultimate reality as Mind at Large or, to make it even less personal, as “the cosmic operating system”. But there are so many parallels with metaphysical and theological concepts of God that calling it God seems simple and honest.


No, science doesn’t have a problem with Joe Rogan

Joe Rogan has been all over the news headlines recently. He played a role in securing Donald Trump’s victory in the recent U.S. presidential election with his interviews with Trump, J.D. Vance, and Elon Musk, and his perfectly timed last-minute endorsement of Trump.

Many people love Rogan for this, and many hate him. But this is nothing new; Joe Rogan has had his share of controversy over the years. Here I’ll focus on his interviews with scientists and technologists, which should be less controversial than politics. Should be.

One of Joe Rogan’s podcast guests, Roger Penrose, is as respectable as scientists get. He won the Nobel Prize in Physics 2020 “for the discovery that black hole formation is a robust prediction of the general theory of relativity.”

I’ve been reading a new biography about him titled The Impossible Man: Roger Penrose and the Cost of Genius by Patchen Barss (November 2024). Penrose appeared on Joe Rogan’s podcast in 2018. “I had this interview when I was in the States with this chap called Joe Rogan,” says Penrose as reported in the book.

I recently watched the interview again; Penrose is always worth listening to, and Rogan asks interesting questions.

Patchen Barss accuses Rogan in the biography of “providing a high-profile platform to pseudoscientists, conspiracy theorists, and other perpetrators of misinformation. Rogan mixes credible scientists in with crackpots, making it difficult for people to know which is which.” Barss sources this claim from an article published by the Office for Science and Society at McGill University titled Science vs. Joe Rogan.

Credit: Tesfu Assefa

Does science have a problem with Joe Rogan?

Here are a few of the tens of scientists and technologists who have been interviewed by Rogan: Sara Imari Walker, Ray Kurzweil, Michio Kaku, Bobby Azarian, Brian Greene, Aubrey de Grey, Richard Dawkins, Sean Carroll, Roger Penrose, Ben Goertzel, Lawrence Krauss. I’ve limited this list to those scientists whose work I know well enough to praise without the slightest hesitation.

These people have at least three things in common: first, they are all reputable scientists or technologists. Second, they are all interested in the Big Questions to which we all would like to hear answers. Third, at one or another time, they have all expressed ideas that go against the scientific (or cultural and political) consensus.

Sure, Rogan has interviewed people of less firm reputation as well, including people that some like to dismiss as crackpots or pseudoscientists.

Some people who call themselves scientists have accused Joe Rogan of promoting misinformation on COVID-19 and vaccines, platforming fringe theories and giving equal footing to pseudoscience alongside genuine science, giving airtime to conspiracy theories, and conducting interviews with scientists and technologists in a casual manner without pushback or fact-checking.

They have expressed concern about how Rogan could shape public opinion, especially among younger listeners or those who might take what they hear on his show as authoritative. They fear that this could lead to a general distrust in science or skepticism towards mainstream scientific consensus.

Some of them find Rogan’s approach to be anti-science, particularly when he mocks or dismisses scientific consensus or when he engages in or encourages scientific debates on fringe theories.

No, it is scientific dogma that has a problem

I think science itself has no problem with Joe Rogan. It is scientific dogmatists that have a problem. By scientific dogmatists I mean the zealots who want to protect the scientific establishment from the disruptive spirit of inquiry.

I have a problem with the dogmatists who have a problem with Joe Rogan. Open inquiry must be defended against scientific dogmatism. The soul of science is freedom to question theories and assumptions, and this must be protected against censorship and excessive backlash.

‘Misinformation’ and ‘pseudoscience’ are in the eye of the beholder. Often, ‘misinformation’ is information that the authorities don’t want the people to know. Often ‘pseudoscience’ is science that contradicts the scientific establishment and its paradigms.

I said often, which doesn’t mean always. But ‘often’ is enough to give Joe Rogan the benefit of the doubt, and to praise his excellent work in bringing science and technology closer to the little people like us.

Rogan brings a broad spectrum of views to the table, including those from scientists with different opinions, which can lead to a better understanding of complex issues. This can be beneficial in fields where there’s active debate or where the science is evolving.

By discussing science in a casual, accessible manner, Rogan makes science more approachable for the average person. This can demystify science, making it less intimidating and more integrated into everyday conversation.

Rogan’s style encourages listeners to question information, which can be a double-edged sword, but does promote skepticism and critical analysis. This can be useful in encouraging people to look into scientific claims independently, fostering a culture of inquiry.

Even more important is Rogan’s willingness to tackle controversial topics, and give a platform to new science or technology that the public is not yet aware of. Even if not all the information presented is correct, his ability to influence culture can stimulate public enthusiasm and support for science and emerging technologies.


AI could already reproduce your personality and behavior

It is now possible to create a virtual replica of you that “is able to embody your values and preferences with stunning accuracy,” MIT Technology Review reports (unpaywalled copy).

This is the conclusion of a study by a team including researchers from Stanford and Google DeepMind, published on arXiv.

Imagine if you could sit down with an Artificial Intelligence (AI) for a chat that lasts two hours, where you talk about everything from your childhood memories to your views on big issues like immigration. After your conversation, this AI could create an avatar that acts, thinks, and responds just like you.

This study involved over 1,000 people from diverse backgrounds who were interviewed for two hours each. The goal was to create ‘simulation agents’, which are AI models designed to reproduce the behavior of a specific person with high accuracy. These agents then went through the same set of tests as the humans to see how well they could replicate their behaviors, achieving an impressive 85% similarity.

Simulation agents

Simulation agents act as digital stand-ins for real people, allowing researchers to simulate social and behavioral scenarios. This can be useful for studying things like the spread of misinformation or how people might react in certain social situations.

These agents differ from the more common tool-based agents, which perform specific tasks like booking appointments or retrieving information rather than simulating human interactions or personalities.

John Horton, an associate professor from MIT, highlights that this approach represents a hybrid model where real human data feeds into AI personae that researchers use in simulations, opening new channels of research that would be too costly or ethically challenging with real people.

To make these AI agents, the researchers used qualitative interviews, where they talked to participants to gather rich, detailed information about their lives and views. This method proved more effective than traditional surveys for capturing the unique traits of individuals.

Joon Sung Park, leading the study, noted that interviews can reveal profound details about a person, like surviving a serious illness, which might not come out in typical surveys. This approach is more time-efficient for creating a detailed profile.
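As a purely illustrative sketch – this is not the researchers’ actual pipeline, and every name and format below is hypothetical – the interview-to-agent step can be pictured as condensing the transcript into a persona prompt for a language model:

```python
# Hypothetical sketch: turn interview Q&A pairs into a 'simulation agent'
# persona prompt. Not the Stanford/DeepMind code; formats are invented.
interview = [
    ("Tell me about your childhood.",
     "I grew up in a small town near the sea..."),
    ("What are your views on immigration?",
     "I think about it mostly in economic terms..."),
]

def build_persona_prompt(turns):
    """Condense interview turns into a system prompt for a chat model."""
    lines = [
        "You are simulating a specific real person.",
        "Answer every question the way they would, based on this interview:",
    ]
    for question, answer in turns:
        lines.append(f"Interviewer: {question}")
        lines.append(f"Participant: {answer}")
    return "\n".join(lines)

prompt = build_persona_prompt(interview)
# The prompt would then be sent, together with the same surveys and tests
# given to the human participants, to an LLM acting as the simulation agent.
print(prompt)
```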

This new study suggests that with just a couple of hours of in-depth conversation, it could be possible to create a fairly accurate digital twin. This could spark interest in companies like Tavus, which might now look into using similar techniques for their services.

Deepfake AI persons?

Advancements in AI open up discussions about privacy, consent, and the ethical implications of digital replication.

This technology isn’t without its risks. Just like AI can create deepfakes – fake media that looks real – this could lead to unauthorized digital impersonations. The methods used to assess the accuracy of these AI replicas were basic; they included standard social surveys and personality assessments, which might not capture all the nuances of human individuality. For instance, AI agents struggled more with behavioral tests that measure fairness and altruism.

Credit: Tesfu Assefa

Personality capture and preservation

However, there are other methods to create digital twins, where AI models learn from a person’s emails or other digital footprints, which yield a lot more data.

This is akin to the methods of personality capture and preservation proposed by William Sims Bainbridge and Martine Rothblatt; they have proposed to collect and preserve digital reflections of an individual’s personality, essentially creating a digital version of a person. Their ideas sound like science fiction that could, one day, become science fact.

Bainbridge and Rothblatt propose to capture one’s personality by accumulating data including answers to structured questionnaires, but also emails, social media posts, blogs, photos, and online activities.

This collection forms a ‘mindfile’, which Rothblatt describes as a sum of saved digital reflections about an individual. The idea extends to using this data with future AI technologies to emulate or upload a person’s consciousness into a digital or robotic form, thereby achieving a form of digital immortality.

Bainbridge and Rothblatt believe that digital technology could capture human identity and personality. With sufficient technological advancement, digital data and AI could recreate or preserve an individual’s essence after their physical death.

See Bainbridge’s book Personality Capture and Emulation (2013), and Rothblatt’s books Virtually Human: The Promise – and the Peril – of Digital Immortality (2014) and Principles of Geoethics: A Synthesis of Geography and Bioethics (2023).

I think answers to questionnaires, emails, social media posts, blogs, photos, and online records recorded with current means could not achieve the rich texture needed to emulate a real human personality.

But I don’t rule out the possibility that new brain-machine interfaces like Neuralink could make the process viable.

Of course this would be light years ahead of the preliminary experiments of the Stanford and Google DeepMind researchers. But we can think of these preliminary experiments as encouraging baby steps toward the distant goal of creating a digital replica of a person. Technology could eventually allow us to endow the replica with consciousness.

Popular culture is warming up to the idea

Greg Gutfeld and his guests discussed the MIT Technology Review article in Gutfeld’s very popular show.

(Credit: Fox News/YouTube).

Of course the discussion was fun, as is appropriate for a comedy show. But the very fact that the host of a popular show discussed these arcane and weird things shows that popular culture is warming up to the possibility of creating digital persons.


Could the return of Philip Rosedale spark a renaissance of Second Life?

A few days ago we reported that Philip Rosedale, the legendary founder of the Virtual Reality (VR) world Second Life, has returned as Chief Technology Officer (CTO) of Second Life’s parent company Linden Lab.

“We’re now in a unique position to define the future of virtual worlds, and Philip is returning to help myself and the exec team achieve that goal,” says Linden Lab CEO Brad Oberwager.

“I started Second Life in 1999,” adds Rosedale, “a decade before cloud computing and two decades before AI. We were early, but the success of Second Life to this day shows that we were not wrong. Virtual worlds will play an increasingly important role in the future of human culture, and I’m coming back to help make that happen in a way that has the most positive impact for the largest number of people.”

Second Life and the metaverse

The first news report of Rosedale’s return came from VR journalist Wagner James Au.

Au is the author of Making a Metaverse That Matters: From Snow Crash & Second Life to A Virtual World Worth Fighting For (2023). The book, which Rosedale has highly praised, includes many snippets of previously unpublished conversations with Rosedale.

The term ‘metaverse’, which is often used for large VR worlds, comes from Neal Stephenson’s science fiction novel Snow Crash (1992). Au points out that the metaverse was effectively designed by Stephenson in the novel, that Stephenson’s insights are still valid (but often ignored), and that Stephenson’s original metaverse is still the goal that the VR industry is striving to reach.

In the last chapter of his book, titled ‘Metaverse Lessons for the Next 30 Years’, Au offers important advice to the metaverse industry, including lessons from “The Fall of Second Life”. The first lesson is that the user community must come before everything else. I believe the industry should listen to Au carefully on this.

The fact that Second Life faded out of public consciousness at the end of the 2000s, and that no next-generation metaverse has emerged to replace it, could indicate that people can do without VR. But perhaps VR is just hard to do well, and nobody has figured out yet how to do it well.

“VR is hard to do well even in a lab, and there’s still a lot to learn about how to make great VR products,” says VR pioneer Jaron Lanier in Dawn of the New Everything: Encounters with Reality and Virtual Reality (2017). “Be patient… Just because it takes a while to figure a technology out, that doesn’t mean the world has rejected it… Maybe VR will be huge, huge, huge…”

It’s all about people

A couple of days after the announcement of Rosedale’s return to Second Life, Au published a long interview with Rosedale. The two raise interesting points that could indicate the way to VR done right.

Au insists that “a virtual world is all about people.”

I think he is right. VR technology is very cool, but at the end of the day what keeps users coming back to a VR world is real (not virtual, but real) interaction with real people.

Rosedale also agrees. One thing that makes you feel that Second Life is a real world, when talking to a real person there, “is they’re obviously a real person who’s perceiving Second Life with you in a way that is complete and rich, so you can do things together,” he says.

Artificial Intelligence in Second Life

What role should Artificial Intelligence (AI) play in the VR metaverse?

Rosedale is not too bullish on AI technology. The proper role of AI in Second Life is “to be a matchmaker between real people,” he says. “Having the AI be a sex bot, but you fall in love with it forever, does not feel like a good idea to me.”

However, Rosedale hints at the possibility of using a virtual world like Second Life as a training ground for AI. The fact that everything in Second Life is labeled and carries metadata could help AI bots understand the world of Second Life faster and more easily than AI bots can understand the real world.

This reminds me of the delicious science fiction novella The Lifecycle of Software Objects, by Ted Chiang, where intelligent and perhaps fully conscious AI bots live in a fictional metaverse called Data Earth before moving to robotic bodies in our world. The idea that comes to mind is to take a large language model (LLM), couple it to a virtual body in a realistic part of Second Life, and let the LLM loose to explore and learn how things look and behave.
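As a minimal sketch of that idea – nothing below is a real Second Life or LLM API, just stubs standing in for both – the explore-and-learn loop might look like this:

```python
# Hypothetical sketch: an LLM-driven agent exploring a labeled virtual
# world. The world and the model call are stubs, invented for illustration.
world = {  # places mapped to the metadata labels of nearby objects
    "plaza": ["fountain", "park bench"],
    "bookshop": ["bookshelf", "wooden door"],
}

def observe(place):
    """Return metadata labels of nearby objects (Second Life objects carry such labels)."""
    return world.get(place, [])

def choose_action(labels):
    """Stub for an LLM call: a real version would prompt the model with the labels."""
    return f"examine the {labels[0]}" if labels else "wander"

for place in world:
    labels = observe(place)
    print(place, "->", choose_action(labels))
```

The labels do the heavy lifting: instead of having to recognize a fountain from pixels, the agent is told what the object is, which is Rosedale’s point about Second Life being an easier training ground than the real world.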

Philip Rosedale speaks at the Second Life Community Roundtable on November 1, 2024 (Credit: Giulio Prisco)

The future of Second Life

Rosedale spoke to a big audience (by Second Life standards) at a Second Life Community Convention on November 1. The full video of the event is available via YouTube. I went to listen to him.

I’m an old hand at Second Life. I used to be a metaverse developer in Second Life and other platforms, and I often organized Second Life events about the future, emerging technologies, futuristic philosophies, the Singularity, and all that. Many people used to attend, and the atmosphere was positively electrifying. But before Rosedale’s recent talk, I had not been in Second Life for years!

I saw few differences from the last time I’d been there. This suggests to me that the technical development of Second Life has been stagnating, and that Linden Lab needs Philip to revive it. In fact, many technical questions from the audience (e.g. about performance, lag, the user interface, new scripting languages) were old questions that I’ve seen asked many times, but never answered.

I hope Philip Rosedale’s return to Second Life will spark a renaissance of Second Life as a real place for real people (and our AI mind children – I’m much more bullish than Rosedale on AI) to talk about big things and big questions, and bring that electric atmosphere back.


Should Media Endorse Political Candidates?

Fewer and fewer newspapers are endorsing political candidates. The trend is driven partly by financial pressures within the newspaper industry, and partly by a desire to avoid alienating subscribers in politically polarized times.

Tensions have flared between editorial independence, the influence of big media owners, and the role of newspapers in political discourse, sparking debates on media ethics, the impact of billionaire ownership on journalism, and the fading tradition of newspaper endorsements in U.S. elections.

The decisions by two influential newspapers, the Los Angeles Times and the Washington Post, not to endorse a presidential candidate in the 2024 U.S. election have stirred controversy.

The owner of the Los Angeles Times, Patrick Soon-Shiong, blocked the editorial board from endorsing one of the two main candidates, leading to some internal turmoil. He “feared that picking one candidate would only exacerbate the already deep divisions in the country”. The editorial page editor and two other editorial board members resigned in response to this decision by the owner.

“I have no regrets whatsoever,” said Soon-Shiong. “In fact, I think it was exactly the right decision.” “It is only with clear and non-partisan information side-by-side,” he added, “that our readers could decide who would be worthy of being President for the next four years.”

Similarly, the Washington Post chose not to endorse a presidential candidate, which was seen as a shift in its editorial policy. The move was ostensibly a return to being an independent voice, but it drew controversy and criticism.

Jeff Bezos’ op-ed

Washington Post owner Jeff Bezos has written an op-ed titled The hard truth: Americans don’t trust the news media.

Bezos addresses the controversy surrounding the newspaper’s decision to stop endorsing presidential candidates. He defends this choice by arguing that such endorsements do not significantly sway election results, and that the Washington Post should instead concentrate on delivering factual, non-partisan content to aid readers in making informed decisions.

People reading newspapers in the street (Credit: Wikimedia Commons).

Bezos emphasizes his dedication to preventing the newspaper from slipping into irrelevance in an era where less rigorous information sources like podcasts and social media are on the rise.

Lack of credibility

“Most people believe the media is biased,” says Bezos. “Anyone who doesn’t see this is paying scant attention to reality, and those who fight reality lose.”

The perception of bias leads to a loss of credibility, which is not unique to the Washington Post.

“Our brethren newspapers have the same issue,” adds Bezos. “And it’s a problem not only for media, but also for the nation. Many people are turning to off-the-cuff podcasts, inaccurate social media posts and other unverified news sources, which can quickly spread misinformation and deepen divisions. The Washington Post and the New York Times win prizes, but increasingly we talk only to a certain elite. More and more, we talk to ourselves.”

Media and partisan media

In my opinion, the question of whether media should take political positions and endorse political candidates depends on the nature of the media.

There are media, and there are partisan media. That partisan media take political positions and endorse political candidates is perfectly fine with me: this is the very raison d’être of partisan media. But then, media that publish partisan political propaganda and endorse political candidates should not present themselves as objective non-partisan media.

So, do the Los Angeles Times and Washington Post want to be partisan media? I think this is the question, and the owners of both newspapers have answered with a loud and clear ‘no’.

Some readers have canceled their subscriptions in outrage. To me, this means that they don’t want information but partisan propaganda. But it is their choice to make, and there are plenty of openly partisan media outlets that offer the propaganda they crave.

And what about science and technology media like Mindplex?

Last month, Scientific American endorsed one of the two main candidates, leading to heated debates about whether a scientific magazine should engage in political endorsements. This was only the second time in the magazine’s history that it endorsed a political candidate (the first was Joe Biden in 2020).

Critics argue that this could undermine the magazine’s credibility as an objective source of scientific information. Many commentators think that this could alienate readers who expect scientific objectivity over political opinion.

While scientists can be political beings, the institutions of science like journals and magazines should ideally uphold a standard of objectivity to maintain trust in science as an impartial pursuit of truth.

By openly taking a political position and endorsing a political candidate, Scientific American and other scientific media that follow the same route might be perceived as aligning science with a particular political ideology, and lose credibility as a result. The words of Jeff Bezos quoted above come to mind.

But I think there’s an even deeper and more serious danger. If scientific media are perceived as partisan political propaganda outlets, then it is science itself that loses credibility, and the public at large loses trust in science.

Since that endorsement, I’ve stopped reading and paying any attention to Scientific American. What I want from scientific media is, guess what, science. When I want to read political commentary, I know perfectly well where to find it. And if I want to have some fun laughing at the stupidity of partisan propaganda, I know perfectly well where to find that too.


NASA and SpaceX must continue to work together for the common good

Michael Bloomberg argues that NASA’s Artemis moon mission is a huge waste of money. The Artemis program was supposed to send astronauts back to the moon, but it has already spent nearly $100 billion without achieving this goal. Bloomberg believes that the program’s complexity and costs are out of control, and he suggests that the next U.S. president should reconsider the entire project.

Bloomberg points out that more than fifty years after Neil Armstrong’s famous moon landing, the Artemis mission has not made significant progress. Despite the enormous budget, no astronauts have yet been sent to the moon.

Bloomberg also highlights the opportunity cost of the Artemis mission. He suggests that the money spent on this program could be better used for other important projects, such as addressing climate change or improving healthcare. By redirecting funds from Artemis to these areas, the government could achieve more tangible and immediate benefits for society.

These are, if you ask me, empty and boring platitudes. But between one platitude and the next, Bloomberg makes some good points.

Starship would be a better option

Bloomberg criticizes the Artemis program for being inefficient and overly complicated, leading to continuous delays and escalating expenses. He argues that the program has become bogged down in bureaucracy and technical challenges. This has resulted in a project that is both expensive and ineffective.

And here comes the bomb:

“A celestial irony is that none of this is necessary,” says Bloomberg. “A reusable SpaceX Starship will very likely be able to carry cargo and robots directly to the moon – no SLS, Orion, Gateway, Block 1B or ML-2 required – at a small fraction of the cost.”

What are these projects Bloomberg mentions?

  • SLS (Space Launch System) is NASA’s powerful rocket designed for deep space exploration.
  • Orion is NASA’s spacecraft designed to carry astronauts beyond low Earth orbit.
  • The Gateway is a planned space station that will orbit the Moon, serving as a communication hub, science laboratory, and living quarters for astronauts.
  • Block 1B is a planned upgraded version of the SLS with a more powerful upper stage (the Exploration Upper Stage).
  • ML-2 (Mobile Launcher 2) is the new launch tower NASA is building to launch the SLS Block 1B.

Bloomberg admits that the successful catch of the Starship booster was a breakthrough, one that demonstrated Starship is moving far ahead of NASA’s own programs.

Bloomberg praising SpaceX? Really?

Conflict between NASA and SpaceX?

Bloomberg’s article has re-ignited the endless flame wars between the faithful supporters of NASA and the fans of Elon Musk’s SpaceX.

There has been one and only one SLS mission so far: Artemis 1, which carried an uncrewed Orion spacecraft around the Moon in November 2022. Before the launch of Artemis 1, I wrote a SpaceNews op-ed titled “SpaceX fans should stand behind NASA and support Artemis.”

I argued that we don’t need a conflict between the supporters of NASA and the fans of SpaceX. In particular, I argued that the fans of Elon Musk and SpaceX should enthusiastically support NASA’s Artemis program for a permanent and sustainable return to the Moon. Why? Because if Artemis is successful, it seems inevitable that Starship and SpaceX will play a more and more important role in the program. In other words, Artemis could be a powerful tide that lifts all rockets.

NASA AND SpaceX (Credit: Wikimedia Commons).

Yes, some parts of the current Artemis program seem too inefficient and costly, just as Bloomberg says. But I thought that we should be patient and let NASA and the government save face and have their moment of glory. Then, I thought, the U.S. administration would likely reconsider costs and waste, and rely on SpaceX more.

Politics gets in the way

I still think this would be the best way forward. But politics gets in the way as usual.

SpaceX has had a great year so far, and the spectacular catch of Starship’s Super Heavy booster has been a milestone of spaceflight engineering. But the booster has returned to a political storm centered on Elon Musk’s cultural and political positions: Musk has endorsed Donald Trump and is using his control of Twitter in a way that has upset some people.

If Trump wins the forthcoming presidential elections, the U.S. government will likely support SpaceX.

But Musk’s bet on Trump is a risky one. If Harris wins the elections, it seems likely that the U.S. government will be very hostile to Musk and all his companies and projects for the next four years. This would damage the Artemis program, the prestige of the U.S. space program, and the very future of humanity. But politicians often put their greed for power and their ideological biases before the common good.

At this moment, the election seems to me a coin toss; Harris could win, or Trump could win. The only thing that seems certain is that, after the elections, the U.S. will likely be even more divided than before, and political polarization will likely reach even more toxic levels.

The need for bipartisan spaceflight

But perhaps spaceflight can help overcome toxic political polarization.

There’s a long history of bipartisan support for the space program in the U.S., and politicians of both main parties have been enthusiastic spaceflight supporters.

Spaceflight, space exploration, and the prospect of human space expansion can inspire people (and especially the young) across partisan borders and give everyone a powerful sense of drive that transcends identity politics and dogmatic ideologies. Achieving bipartisan support for Artemis and future space programs will, I hope, show that we can work together for the common good and incite us to do the same for other common goals.

And China?

Meanwhile, The Economist has recognized that there is a new race to the Moon between the West and China, and that Elon Musk’s Starship is the best hope of the West for winning that race.

“The recent test flight of SpaceX’s Starship brought the world one step closer to a host of new possibilities beyond Earth (not least the colonisation of Mars),” notes The Economist, adding that Starship is expected to play an important role in NASA’s plans to return to the Moon.

“But China has its own lunar ambitions, and a much simpler plan than America’s,” warns The Economist. “Who will win this new space race?”

My simple prediction is that, if Harris wins the elections, China will win the new space race. If Trump wins the elections, the USA will have a fighting chance.

Let The Economist worry which nation wins the new space race; my concern is that humanity gets started on the long way to the stars with permanent bases on the Moon, Mars, and beyond. If China has to lead the way, so be it.
