The elephant at Starbase

2025-11-28
6 min read.
The elephant at Starbase is the strong likelihood that we, flesh-and-blood humans like you and me, won't be the ones to establish an interstellar civilization. AI will.
The elephant at Starbase
(Credit: Tesfu Assefa).

We space enthusiasts are living through a magic moment in space exploration. In only a few months, NASA's Artemis II could take four astronauts around the Moon. In only a few years, Artemis III could take astronauts to the lunar surface. We could soon see people orbiting the Moon and walking on its surface again, like in the magic sixties.

This magic moment makes us dream of even more magic moments to come. The Starbase facility in Texas, home of SpaceX and the Starship rocket that, one day not too far off, could take astronauts to Mars, is a powerful symbol of our aspiration to become a multiplanetary civilization, and eventually to expand into outer space and establish an interstellar civilization.

But many space enthusiasts don't see the elephant in the room. Or rather, in this case, the elephant at Starbase. Or rather, they don't see the elephant at Starbase yet.

Do you see the elephant at Starbase? Should we still want to send human astronauts to colonize space? Or should we want to leave space expansion to AI? (Credit: made with Grok).

The expression "the elephant in the room" refers to a significant, obvious issue or problem that is deliberately ignored or avoided in conversation, despite being apparent to everyone involved. The elephant at Starbase is the strong likelyhood that we, flesh-and-blood humans like you and me, won't be the ones to establish an interstellar civilization.

Artificial intelligence (AI) will.

We can see the writing on the wall, and the writing on the wall says that AI and artificial superintelligence (ASI) will fill the galaxy and the universe with superintelligent consciousness.

I question the validity of the term "artificial" though. AI and eventually ASI are emerging through us, and therefore will be natural just like us. I prefer to sidestep the (in my opinion) pointless distinction between what is natural and what is artificial, and consider the emergence of AI and ASI as the next phase of our very natural evolutionary journey.

I'm a space enthusiast persuaded that filling the universe with superintelligent consciousness is our cosmic destiny and duty. To me, the question "Should we still want to send human astronauts to colonize space? Or should we want to leave space expansion to AI?" is an existential question. I've written about this (1, 2, 3, 4), referring to this question as "The Question" with capital T and Q.

Where is the elephant going? (Credit: made with Grok).

Discussing The Question

In July I co-organized and moderated an online conference via Zoom. The conference was a very intense three-hour thought stream, packed with insightful talks and discussions. Stefano Vaj, Frank White, Moti Mizrahi, Michelle Hanlon, Frank Tipler, and Robert Zubrin discussed space expansion in the age of AI and their personal answers to The Question.

Tipler was definitely in the leave-it-to-AI camp. Vaj argued that philosophical consistency mandates AI-inclusive expansion, broadening “humanity” beyond biology. The other speakers conceded that AI will play an important role in humanity's expansion to the planets and the stars, but insisted that biological humans play, and must continue to play, an essential role.

The discussion will continue at another conference on December 14, themed "Where is AI, and where is it going?" The speakers will be David Orban, David Pearce, Natasha Vita-More, David Brin, and Gregory Stock.

Stock will present his new book "Generation AI: The Transformation of the Human Being," which will be published on December 15. In October he discussed the content of the new book at the Beneficial AGI Summit & Unconference 2025 (BGI 2025), organized by SingularityNET in Istanbul. I'll publish a book review in a couple of weeks or so; stay tuned. Let me just say that the book is very relevant to The Question.

Vita-More also participated in BGI 2025 (see my review of the event) and will say more on December 14. Brin, one of my favorite science fiction and science writers, is also writing a book about AI. I've read the draft, and I love it. He too will say more on December 14.

The other speakers are notable thinkers who have written about AI, and I look forward to listening to their insights. By the way, the conference is open to the public and everyone is invited. The Zoom access coordinates are given here.

I don't know how the development of AI technology will unfold in the next few years. Perhaps the current approach will scale to sentient AGI and then ASI with incremental improvements (important and necessary, of course, but incremental): tens of millions of ultra-efficient GPUs and super-efficient next-generation mathematical algorithms, or continually learning models with the experience of physical embodiment in human-like robots. Or perhaps the current approach won't scale to AGI/ASI at all, and we'll need entirely new approaches. Time will tell.

But I'm persuaded that AGI will be here soon enough, with ASI to follow, and this will profoundly change our idea of space expansion.

AIstronauts bound for the stars

In his new book, Stock argues that AIs, even the limited AIs that will emerge in the near future, will increasingly be seen as "persons." I've suggested that the best (actually the only) way for us space enthusiasts to make peace with the knowledge that our ASI mind children will expand into interstellar space on our behalf and in our stead is to learn to see them as persons.

This may be facilitated by ongoing trends in space exploration missions. Earlier this year, Elon Musk said that “if all goes well, SpaceX will send Starship rockets to Mars with Optimus robots and Grok.” Deploying Grok to Mars alongside Optimus robots would mean that the robots would run on a state-of-the-art AI, more sophisticated than the necessarily limited on-board AI.

I think a scaled-down version of this plan could and should be tested on the Moon. Regardless of the appeal Mars holds for the adventurous young and young-at-heart, what happens on the lunar surface has a more immediate psychological impact, because we can see the Moon with our own eyes. So imagine looking at the lunar south pole and knowing that there are intelligent AI-powered robots there, building infrastructure and doing important scientific research. Wouldn't you think of the AIstronauts as just astronauts? Wouldn't you see them as your representatives and friends in space? I would.

And then the first primitive AIstronauts will be followed by more conscious and superintelligent ones. The solar-powered AI data centers in space that we're planning to build could evolve into nodes of posthuman superintelligence in space, first in the solar system and then among the stars. I'm learning to see these machines of loving grace as part of future humanity, just like our organic grandchildren, and this is kind of liberating.

Perhaps some of our organic grandchildren will be able to upload their consciousness to those posthuman circuits and follow the AIstronauts to the stars.

#AcceleratedEvolution

#ArtificialLife

#ConsciousAI

#InterstellarTravel

#SpaceColonization


