I’ve been reading a pre-publication draft of “Generation AI and the Transformation of Human Being” by Gregory Stock, to be published on December 15, 2025. I wish to thank Stock for allowing me to read the draft.
First I’ll review the book, then I’ll share a personal commentary on the last chapter.
This is a great book; a page-turner packed with thoughtful and insightful observations on the present and future of AI and inspired by a refreshingly optimistic view of human and AI evolution. Generation AI is a much better conceptual guide to our AI-enabled future than the more cautious, precautionary, or even scary treatments of AI that have become fashionable.
Stock is neither utopian nor dystopian but offers a grounded and balanced roadmap for the common future of biology and technology. He urges readers to help build the first stretches of that road by engaging actively with artificial intelligence (AI).
Stock offers a visionary exploration of how AI will profoundly reshape humanity, focusing not only on what AI will become, but also and especially on what we - the next generations of humans - will become in the presence of and in constant interaction with AI. Drawing from his decades-long engagement with biophysics, macroevolution, and transhumanist thinking, Stock argues that AI represents a pivotal evolutionary leap, merging human cognition with machine intelligence to form a planetary superorganism. In a 1993 book subtitled “The Merging of Humans and Machines into a Global Superorganism,” Stock called this planetary superorganism “Metaman.”
Stock’s new book, enriched by his interactions with large language models (LLMs) like ChatGPT and Grok, blends personal anecdotes, philosophical inquiry, and speculative foresight to chart this transformation. At its core is "Generation AI" - those born after 2022 - who will grow up immersed in a novel habitat populated by AI companions, avatars, and agents. This immersion will fundamentally alter the development and relationships of Generation AI compared to previous generations. Eventually, our very sense of self will be profoundly and permanently altered.
In the introduction, Stock tells readers about his experiments with today’s LLMs. He created a personal AI avatar that captures his voice and nuances. He also engaged in debates with avatar simulations of other authors.
These stories illustrate AI's immediate disruptive potential, even without AGI (artificial general intelligence) or ASI (artificial superintelligence). Stock reflects on his own trajectory - from 1960s optimism about AI and space travel to today's realities - emphasizing how predictions often overestimate short-term changes while underestimating long-term ones.
Stock positions this book as the third in a trilogy, following "Metaman" (1993), which introduced the global superorganism concept, and "Redesigning Humans" (2002), which foresaw genetic enhancements. Now AI is accelerating our fusion with technology, turning silicon intelligence into extensions of human minds and bodies.
The first generation of humans born in an AI-powered world
Stock defines Generation AI as the first generations to experience AI as an omnipresent force from infancy. Unlike previous generations from Boomers to older members of Generation Alpha, who have been exposed to AI but view it mainly as a tool, these children will bond emotionally with intelligent agents - virtual playmates, tutors, and confidants - that provide constant attention, guidance, and stimulation.
This immersion could enhance empathy, creativity, and intelligence but risks dependency, social isolation, and distorted realities. Drawing on developmental psychology, Stock warns of risks similar to those of today’s social media, but amplified: AI companions might erode human connections, foster addiction, or manipulate behaviors.
AI’s steady presence might soften some of the interactions that often make growing up a painful process, but this benefit comes with its own risks: If AI “consistently mediates these interactions,” says Stock, children may face challenges developing emotional and social resilience. “Will the consistent presence of AI create psychological dependence in Gen AI and undermine their ability to develop resilience?”
To me, this is an important warning. I hope AI companions won’t repeat the mistake many parents make (with good intentions, of course) of overprotecting kids from life. On the other hand, AI companions could be designed to “foster grit and resilience” and help develop mental toughness.
While acknowledging risks, Stock remains optimistic, envisioning AI as a catalyst for abundance, where robots handle labor, liberating us from repetitive and boring work and giving us freedom to pursue growth and meaning. He dismisses apocalyptic fears, viewing AI as an extension of human evolution, not a rival. AGI will come, Stock says, but hyperintelligent tools won’t necessarily need AGI, which might even be a hindrance for tools that must remain tightly focused on narrow application domains.
Near-term impacts of AI could include a revolution in education with personalized AI tutors and simulated debates. Textbooks may be made obsolete by dynamic AI interactions that would replace static knowledge transfer. Human workers could be displaced by AI agents, which would inevitably lead to debates on universal basic income. Healthcare could be transformed by predictive diagnostics and robotic care. Stock argues that AI will dismantle the institutional and social power of the “expert class.”
Stock doesn’t seem too worried by the now commonplace trope of a malevolent ASI exterminating humanity (see below), but he emphasizes less catastrophic but still disturbing trends that are poised to be exacerbated by AI, including social problems arising from AI-assisted manipulation of social media and manufactured hyperpolarization, and personal problems caused by AI-powered romance scams and frauds, often involving celebrity impersonations.
Many thought-provoking questions, reminiscent of the style of Stock’s best-selling "Book of Questions" (1987) and its sequels, are scattered throughout the book. For example: “If your closest friend in the world died in a car accident, but had built an extremely convincing virtual AI persona, would you want it as a companion?”
Stock explores human-AI symbiosis, arguing that emotional bonds with LLMs will deepen. “And we haven’t even delved into sexbots,” says Stock at one point. One of his questions for you is: “Do you think you could ever be in love with an AI, and do you think one could ever be in love with you?” It seems likely that AIs will increasingly be seen as "persons" in social (if not legal or metaphysical) terms.
Together with the book, Stock is releasing a web app for readers to answer questions marked with QR codes in the text, get a sense of how other readers are answering the same questions, and contribute to AI-powered sentiment analysis. To access the web app, you can follow the QR codes embedded in the book or visit the website app.onlyhuman.us directly.
“The traditional Turing test - constructing a computer program that can fool us into thinking it’s human - was passed in 2014 if not before,” says Stock, referring to the text version of the imitation game proposed by Alan Turing. But today we can think of “a physical Turing test in which AI can perform complex tasks in the physical world in ways that are indistinguishable from humans.” Of course, this brings sexbots to mind. Stock is persuaded that sexbots will be able to deliver “responsive intimacy,” and an endnote informs readers that one can already buy lifelike, AI-powered first-generation sexbots.

New paths and scenarios
Stock explores three potential overall paths for humanity over the next couple of centuries of rapid AI and technological advancement. He sets these scenarios 200 years in the future, but with the caveat that things could and probably will happen faster. He dismisses the first scenario - AI leading to human extinction via hostile or uncaring ASI - arguing that humans and AI form a synergistic superorganism (that is, Metaman), in which humans play the essential role of building and maintaining AI infrastructure. In this view, extinction scenarios are unlikely, and catastrophic risks related to AI will be mitigated by AI's dependence on us.
The second path involves remaining essentially biological, with human biology enhanced by gene editing (e.g., CRISPR), life extension, and biohacking. However, future humans will gradually integrate with technology (e.g., AI implants), blurring the divide between the organic and digital realms.
The third, most speculative path is transcending biology through mind uploading, porting consciousness to cyberspace for permanent preservation and further unlimited evolution. The sick and the dying might be, of course, the first to walk on this path. Eventually, biological holdouts could fade away.
While admitting that others disagree, Stock believes that in the next 200 years “AI will achieve consciousness, humans will gain extended lifespans, and people will be able to upload.”
He doesn’t spend too many words on the nature and extent of AI consciousness, inner life, emotions and all that. He thinks Turing’s imitation game is sufficient for practical purposes. “Whether they actually do feel emotion and are conscious,” he says, “may fuel philosophical debate, but it won’t change the human future sketched here.” He also questions whether there’s a real, fundamental difference between today’s primitive LLMs and our own human minds.
Of course, the philosophical debate on AI consciousness is very relevant to the viability of mind uploading - copying or moving human consciousness to machines. Even if the imitation game is good enough for others, what about oneself? Is personal consciousness still there and still the same after uploading? Stock doesn’t give a final answer to this question. But he is persuaded that good imitations will comfort those who grieve the loss of a loved one, and have a strong social impact.
In an afterword, Stock proposes a strategy of adaptive adoption of AI for this time and the immediate future. He calls for proactive policies: rejecting precautionary bans on AI development, embracing the competition between the U.S. and China as a driver of innovation in AI technology development and deployment, and promoting deep human-AI engagement to raise AIs aligned with human values.
The rise of Metaman
Central to the book is the "noosphere" - Teilhard de Chardin's concept of a global layer of thought, updated by Stock as Metaman's emergent consciousness. AI, Stock argues, is already extending human cognition into the developing planetary brain, blurring boundaries between biological and artificial. Perhaps there is a cosmic noosphere, and physical reality is “alive at the nanoscale beneath what we perceive and imbued with consciousness embedded in matter itself.” Citing spiritual thinkers like Ilia Delio, Stock suggests that Metaman's ultimate quest may be searching for answers to this and other big questions.
The last chapter of the book, titled “A Cosmic Lens,” puts the transformation that is beginning with Generation AI into a cosmic perspective. The protagonist of the chapter is Metaman. Stock seems deliberately ambiguous on whether Metaman is just a name for the result of the confluence of biology and technology on this planet and then beyond, or some kind of being endowed with agency and will. But be that as it may, what is the long-term future of Metaman?
“When I published Metaman almost thirty years ago, I thought our budding union with technology would carry humanity across the galaxy and beyond,” says Stock. “But [...] our planetary intelligence likely will deepen and stretch in our own solar system.” He concedes that there’ll be some interstellar exploration of the nearby galactic regions, “reaching out maybe 500 light-years,” and some stellar outposts. But by and large, the future of Metaman will unfold here in the solar system.
Metaman, Stock argues, won’t be much interested in expanding outwards to galactic breadths, but rather will choose to expand inwards to subatomic depths, eventually filling these depths with nano-dust brains that process data at incredible speeds. This, he argues, would resolve the Fermi paradox: advanced civilizations "transcend" inward to denser computational realms, leaving the macro-universe empty. This has been suggested by other thinkers, notably Hugo de Garis, John Smart, and Clément Vidal (see Chapter 13 of my “Irrational mechanics”).
I very much prefer Stock’s youthful, enthusiastic prediction of Metaman's unlimited expansion to the stars. Stock says that a viable starship for long interstellar flight would “need a volume of about 15 cubic miles, big enough to enclose a small city.” But Metaman could develop viable molecular nanotechnology and mind uploading technology (possibilities that Stock admits). It seems to me that a starship crewed by AIs and uploads living in solid-state circuitry and able to use advanced nanotechnology could be much smaller and lighter than a small city! Perhaps it could be as small and light as a Coke can, and could be accelerated by laser beams to near light speed. To avoid the risk of losing a starship, Metaman could send two or three carrying clones of the same crew of AIs and uploads. At destination, the crew could build receivers for other AIs and uploads beamed from the solar system or other colonies.
So I wrote to Stock, initially via a mailing list, to express disagreement.
He said that there could be very little motivation to spread, since computational activity in inner spaces at the nanoscale could be much more compelling than voyaging through outer galactic space. He added the caveat that we can’t really know, let alone understand, post-singularity motivations. Of course, he added, if intelligent life in the universe migrates to the nanoscale, maybe it's everywhere and we just don’t recognize it.
This brings me to the wild card of faster-than-light (FTL) interstellar travel. Stock deems it likely that “FTL tricks like wormholes and warp speeds won’t ever be achieved.” But we are a young civilization, and Metaman is still a baby. Do we really believe we already know enough about physics to make this pronouncement? Quoting the draft, I reminded Stock that he himself admits that these FTL tricks “might emerge once ASI begins to probe and deepen our understanding of physics and cosmology.”
To this, Stock replied that his point is not that FTL interstellar travel wouldn’t happen because we’ve already got physics figured out, but that if there were ways of easily moving through the universe, intelligent life far beyond our own capacities would likely be everywhere, and we would notice them.
This point is made frequently in the book. I must say that I disagree. Balls of intelligent quantum fields could be floating around my head and reading my thoughts as I ponder this. Perhaps they come from the stars? Or from the future? Too bad I'm blind to them. But I have the vague impression that they are trying to remind me of something Shakespeare said about there being more things in heaven and earth.
I would have liked Stock to add some alternative long-distance, large-scale space expansion scenarios to the book, perhaps including outlandish scenarios of unlimited space expansion enabled by a future wild & weird ultraphysics of FTL transportation and time travel. These things are very much disliked by today’s “consensus” science, but I think they belong to the realm of the possible. And as Stock says, ASI scientists could surprise us, perhaps soon.
Stock agrees that these things could be possible, but told me that, among so many different potential scenarios, he thought it more challenging and useful to constrain our vision and reflect on the weirdness that would emerge even within the tight boundaries of physics as we know it now. This is a good point, and you should read the book to appreciate and reflect on the plausible weirdness of Stock’s scenarios.