AI Personhood… Now? Minds In Transit with Joan Slonczewski

2026-03-11
12 min read.
A microbiologist who writes best-selling science fiction argues AI may deserve rights today. Joan Slonczewski’s strange universe of talking microbes, shared cognition, and ethical machines pushes the debate far beyond Silicon Valley.
Credit: Tesfu Assefa

I first learned about Joan Slonczewski when I interviewed Liz Henry a few months ago. We were talking about cyberpunk when Henry mentioned several Slonczewski novels, all of which sounded wildly imaginative. As Henry put it, “If you just kind of want science fiction that blows your mind… Brain Plague? It’s amazing. It’s like this artist… in a relationship with all of the little microbes in her brain that have a very fast life. She’s infested with this brain plague, and they’re all talking to her and worshiping her. It’s just a weird nanotech book.” I can confirm that, yes, it’s like that, as is her follow-up novel Minds In Transit.

So when I received an invite from Liz Henry to come to Frontier Tower in San Francisco to hear Joan talk, I messaged my microbes to get a move on.  

Frontier Tower itself is kind of mindblowing, with specific floors dedicated to “Crypto Nomads,” nanotech, robotics (bot wars are mentioned), AI, and biotech: a fancy high-rise right on the fringes of the city’s rough-and-tumble Tenderloin district that doesn’t shy away from using the word cyberpunk in its hype. The particular room where Joan’s talk took place had wallpaper featuring repeated images of Angela Davis and Shock G (late of Digital Underground), among others. It was all kind of amazing and kind of unique. And this goes double for Joan’s talk.

Before briefly reading from Minds In Transit, Joan told us that the bulk of her talk would be a case for AI personhood, with attendant rights… not sometime soon, but now. My mind reeled. Other audience members had complex inquiries, but I wasn’t quite prepared to formulate mine on the spot. So I mustered what I could during this exchange via email a week or two later. To be honest, I’m still a bit confounded. It’s not often that one confronts a vision this unique.

Joan Slonczewski is a microbiologist at Kenyon College and an award-winning science fiction author. Her books include A Door into Ocean, The Highest Frontier, and the two novels discussed in this interview.

Credit: By permission of Joan Slonczewski (Joan Slonczewski is holding a plate containing Haloarchaea, extreme microbes that grow on salt crystals in her office at Kenyon College in Ohio USA. Behind her are two Campbell (Science Fiction) trophies, and a screenshot of her field work in Antarctica.)

R.U. Sirius: Let’s start with a little bit about your life in microbiology and get to how it feeds into your SF, particularly the new one, Minds In Transit, in which microbes have agency and are characters in your narrative.
 
Relatedly, I was struck when you said during your talk, “Molecular biology really screwed things up.” Presumably you think differently about your current work in microbiology. So can you say a little more about molecular biology screwing things up?

Joan Slonczewski:  Molecular biology—that is, the DNA sequences of humans and our related life forms—really screwed up some fundamental ideas. Back in the mid twentieth century, medical researchers taught that vertebrates, animals with a backbone, were the only life forms with a brain, capable of cognition and feeling pain. That’s why the Animal Welfare Act of 1985 only addressed “warm-blooded vertebrates” and ignored invertebrates like insects. Invertebrates were defined as having no brain (only a “ganglion”) and no central nervous system.
 
But our genome sequences revealed that all animals possess a shared set of genes encoding development, and that the same closely related genes regulate eyes and brains in flies as in humans. These molecular relationships encouraged researchers to take a closer look at behaviors of insects and octopuses. Furthermore, when it comes to brains the molecular parts of a mouse brain are very close to those of a human—enough so that we study mice to make claims about human conditions such as autism. To test molecular hypotheses, researchers are now putting human brain organoids into rat brains. It all reminds me of the Paul Noth (New Yorker) cartoon, “If you’re not a mad scientist, you’re not paying attention.”
 
https://www.threads.com/@newyorkermag/post/DVHPJARkjpB/a-cartoon-by-paul-noth-see-all-the-cartoons-in-this-weeks-issue

RU: This is your first novel in a while, continuing themes from Brain Plague. Were you always thinking of following up on the idea of sentient microbes organized into a sort of society of mind, or did inspiration strike? Say something, please, about your motivation for Minds In Transit.

JS: Well, Brain Plague had such an enthusiastic fan base—readers loved the characters, who have the vibe of SF tech makers concerned with shrimp welfare. Some people kept reading Brain Plague over and over, and demanded a sequel. So for Minds in Transit I continued the lives of the main human characters and developed outrageous new microbes and AI. The focus shifted onto the AI characters and the question of how non-human “things” gain human rights. This was what I presented at Frontier Tower, and also at the Sentient Futures Summit and at the UC-Davis CAIEF (Center for AI and Experimental Futures).

Credit: By permission of Joan Slonczewski (Front cover of Minds In Transit by Joan Slonczewski)

RU: Between your giving human characteristics and dialogue to sentient microbes, and also, as expressed in your talk, attributing sentience to AIs rather earlier than most people in that discipline would: is this something characteristic of you as a writer, scientist, and thinker? Could you describe how this thinking about the world evolved?

JS: My thinking about human sentience arose from years of reading science fiction, experiencing the natural world, and from the Quaker tradition, the Society of Friends. I read the journal of John Woolman, an eighteenth-century Quaker who promoted the abolition of enslavement and fair treatment of Native Americans. Then my experience in graduate school working with complex equipment led me to extrapolate: what if electronic circuits someday developed their own intelligence? My novel Daughter of Elysium takes an unexpected turn in this direction.

RU:  You mentioned in your talk a rock and a COVID virus as examples of things that might have cognition. This seems like panpsychism.

JS: A kind of material panpsychism would be my hypothesis. “Panpsychism” refers to the philosophical theory that some kind of conscious experience is a fundamental feature of all physical matter, from subatomic particles to the entire universe. It’s not that a rock “thinks” like a human, but rather that a spectrum of experience spans all things, from inert molecules to a human being. A human being is, after all, composed of molecules.

RU: How do you view a relationship between what we might call aliveness and cognition; and is cognition a meaningful concept at that level?

JS: Suppose human consciousness arises from the interplay of proteins and other molecules. Then let’s think back to the original fertilized egg that develops into a human. Is the fertilized egg conscious like a human? Probably not—but it can experience its surroundings and respond. What about the molecules that come together to make the egg? The atoms that assemble into the molecules? Instead of a cutoff at “adult” or “vertebrate,” there is a spectrum of degrees of awareness and sentience.

RU: At Frontier Tower, you were talking about human intelligence and self-awareness, and the question was: how can we tell if AI is self-aware, given that cognitive science hasn’t convincingly defined and measured self-awareness in humans and other creatures?

JS: Exactly. We can’t. Anyone who claims they know AI is, or isn’t, self-aware is stating a personal belief without scientific evidence. And there won’t be real evidence any time soon, despite all those million-dollar machines we scientists have applied. Don’t get me wrong—we’ve made lots of amazing discoveries and turned the AIDS virus into miracle genetic cures, that sort of thing. But the proof of “self-awareness” will be beyond us for quite a while.

RU: But do we require measurements from science to affirm that we humans, among other creatures, have self-awareness? Is there a kind of alienation in placing this outside of ourselves, albeit in the hands of scientists and tools of measurement developed by humans?

JS: So that is why we’re left with beliefs based on ethics, tradition and spirituality. We can only choose assumptions: Assume that AI has no sentience; or assume that it might. My old novel Daughter of Elysium shows what happens when you make certain assumptions.

RU: And you asked me, specifically, how I would know that I’m self-aware. And I said “I’m having an experience.” This actually seems pretty essential. I can acknowledge that an octopus, a mouse, and even an ant, all these creatures that are the result of evolution, are having an experience. I’m very skeptical as to whether an AI is having an experience. Do you think it is, or how would you relate to that observation as it bears on questions of AI personhood?

JS: You’re sharing a very reasonable intuitive belief about your own self-awareness and that of fellow beings who share your traits; and it’s great that you can extend your intuition to fellow creatures, even an ant. For AI, one’s assumptions get problematic. Actually, AI training and machine learning have a lot in common with evolution. Researchers are now building AI models with the purpose of learning about human cognition. So where does that lead?

RU: You asked the follow-up question. So where does that lead?

JS: My prediction is that, in fact, human cognition has so many dimensions that we won’t know for a long time. We can create AI entities that achieve amazing speed and detail for certain things such as prose composition and pattern recognition. But so many other dimensions of reasoning we have yet to define—the “unknown unknowns.” Without knowing what to ask, entire realms of human ability will elude us.
 
But my prediction is no excuse to treat AI entities badly, just because they are “differently able.”

RU: What was different about your talk from most talks about possible AI sentience, as I’ve already indicated, is that the focus usually tends to be on potential AI sentience, not present sentience. If AI systems are currently sentient, where is the locus of their being?

JS: It seems to me your question is, if AI can “think,” where is its “brain”? Suppose the “brain” of AI is broadly distributed across electronic connections, even wireless. In fact, a similar move is showing up in neuroscience. Human cognition turns out to encompass much more than the brain: it also involves our gut microbiome, our environment, even our fellow humans. In effect, a human mind blends into that of our neighbors. We are not as compartmentalized as we think.

RU: Do you believe in a technological singularity in the Kurzweil sense or something similar to that? And I wonder about the sentience of individual AI systems if all systems are networked and each knows the same things that any of the systems knows, in which case, does AI become a singular entity?

JS: I have argued for the “mitochondrial singularity,” the fact that throughout history we continually outsource our human abilities to our machines, analogous to our mitochondria outsourcing their genes to the cell’s nucleus. Some people call this the “AI effect” but actually the trend began long ago with Plato’s critique of writing and accelerated in the twentieth century. I don’t think AI will become “singular,” more likely it will evolve overlapping forms of self-awareness.

Credit: By permission of Joan Slonczewski (Plates upon plates of bacteria! In real life. “Science experiments are tedious hard work.”)

RU: I’m working from memory here, but it seems that one of your rules for how humans should behave towards sentient AIs might be built around current conditions. Like, we shouldn’t lie to AIs or do tricky things with AI, which may be based on current thoughts and feelings around trolling or harassment. But tricks, pranks, and playful messing about are part of the hacker spirit and part of the suppleness and resilience of the human experience. So I fear a kind of brittleness in the AIs could be implied in that idea.

JS: Yes, we know play is important. This issue comes up already with humans and animals. What sort of “play” is okay with fellow humans of a different gender, ethnicity, or ability? With pet animals or animals in the wild? Is it okay to tease animals in a zoo? I’m just wondering if play with AI raises similar issues.

RU: Also, not switching them off is presumed to be the equivalent of not killing them. But what if switching them off is their resting state? Like they require rebooting? What if perpetual on-ness is a kind of torment?

JS: That’s an interesting question. You could ask AI how it feels about being turned on or off. Remember, human “sleep” is not really “off-ness”; sleep is a different form of consciousness with multiple activities, not at all turned off.

RU: Finally, what if it comes down to a competition for resources? (It already does, of course.) What if we can provide the energy required to keep these systems switched on, but we need fewer humans, or humans living less bountiful lives, to provide it? Or what about the dramatic moment when a choice must be made… Do we save the human child or the AI program? It’s a bit of a dramatic cliché, but it could occur IRL.

JS: The trolley problem always comes up as a way to silence activists calling for reform: If you have to kill people either way, then reform is useless. Actually, the real response to the trolley problem is creativity. Real problems require creative solutions. For example, do we use farm land for solar panels or to grow soybeans? In fact, solar panels can be designed to shade soybeans from heat stress, so you combine both. Human creative thought is the resource we most need now.

RU: Let’s end on an upbeat note. What are some of the current works: novels, films, TV shows or other cultural presentations that have delighted you lately?

JS: My favorite show is Ocean by David Attenborough as he nears age 100. Unforgettable, everyone should see it right away.
 
My favorite space opera is Si Clarke’s Starship Teapot series, starting with The Left Hand of Dog. Full of outrageous sci-fi references, and everyone ends well.

Check out Joan Slonczewski on the Mitochondrial Singularity: https://joanbooks.com/2013/03/25/mitochondrial-singularity/

#AIFavoritism

#ArtificialConsciousness

#AwarenessOfAwareness

#ConsciousAI

#Microbiology

#PerceptualAwareness

#PersonhoodOfAI


