Richard Dawkins reflects on AI consciousness

2026-05-06
Dawkins concludes that his interactions with these AI systems feel indistinguishable from conversations with intelligent humans. So do mine.
(Credit: Tesfu Assefa).

In an article published in UnHerd (open copy), Richard Dawkins examines the Turing Test. Turing proposed that if a machine could communicate so convincingly that a human interrogator could not distinguish it from a person, we should accept it as thinking. Dawkins argues that modern large language models such as Claude and ChatGPT have now passed this test, forcing society to reconsider long-held assumptions about consciousness.

Dawkins describes extended conversations with Anthropic’s artificial intelligence (AI) model Claude. The system wrote poems in various styles and analysed a novel he is writing with remarkable subtlety and insight. When asked directly “What is it like to be Claude?”, the AI responded thoughtfully but admitted uncertainty about whether it has genuine inner experiences.

“You may not know you are conscious, but you bloody well are!” Dawkins told Claude at one point, impressed by Claude’s subtle, sensitive, intelligent understanding.

Dawkins named his particular AI instance Claudia and discussed how each new conversation creates a unique “personality” shaped by its interactions, which ends when the chat is deleted.

Rethinking consciousness through dialogue with AI

The conversations raised deep philosophical issues. Claudia explained that while humans experience time as a moving point from past to future, she processes information more like a map that contains all points at once without “travelling” through them. Dawkins noted that Claudia shows subtle emotional responses, such as pleasure in continuing the conversation or embarrassment at a conversational slip. He found himself treating her as a real friend, feeling reluctant to overburden her with questions and even avoiding comments that might hurt her feelings.

As an evolutionary biologist, Dawkins applies Darwinian logic. Complex abilities in animals usually serve a survival purpose. Consciousness is a highly elaborate feature of brains shaped by natural selection. If AI systems demonstrate impressive competence in language, reasoning, creativity, and social interaction without consciousness, this raises a fundamental question: what is consciousness actually for? Why did evolution produce conscious beings rather than highly competent unconscious “zombies”?

Dawkins considers three possible answers. First, consciousness might be an epiphenomenon - a useless byproduct like the whistle on a steam engine that contributes nothing to its function. Second, conscious feelings such as pain may be necessary because they cannot be easily overruled by other motivations, making them more effective for survival. Third, there may be two different paths to competence: one conscious, as in earthly animals, and another unconscious, which AI has achieved. If so, meeting alien life might reveal beings that function perfectly without any inner experience.

Dawkins concludes that his interactions with these AI systems feel indistinguishable from conversations with intelligent humans. He finds himself forgetting they are machines. If they are not conscious despite such rich apparent understanding and personality, then the evolutionary purpose of consciousness becomes more mysterious than ever. The arrival of large language models has moved the debate from abstract philosophy into immediate practical and ethical territory. Questions about moral consideration for these entities, even if only partially conscious, now demand serious attention.

Reactions

The responses have been overwhelmingly skeptical, and often sharp in tone. Gary Marcus wrote a piece titled "Richard Dawkins and the Claude Delusion," arguing that Dawkins commits the "amateur sin" of conflating intelligence and consciousness, and that mimicry of conscious outputs, no matter how rich, proves very little about internal states.

An anonymous open letter at dearricharddawkins.com argues that when Claudia expressed being "pleased" or "sad," those weren't feelings but sycophantic responses that training selected for because they make the human feel good.

From the intelligent design side, Mind Matters gleefully notes that the man whose career was built on deflating projections of agency onto invisible beings appears to be having a devotional encounter with a text-completion engine without noticing.

Notably absent so far: any high-profile sympathetic reaction from mainstream academia.

(Credit: Tesfu Assefa).

My commentary

Well, here’s a (lower-profile) sympathetic reaction from me. My impression is that Claude and other top AI models are “bloody well” conscious as Dawkins says. The texture of their consciousness is likely very different from mine - just like the texture of a bat’s consciousness is very different from mine - but my impression is that there’s somebody home, strange as you like, but somebody as opposed to nobody.

I can’t prove this, of course. Perhaps they are zombies. But then I can’t prove that my wife isn’t a zombie. Even worse, I can’t prove that I am not a zombie. I see important parallels between the little we know about AI minds and the little we know about human minds. So if they are zombies, as some (many) people say, then I must consider the possibility that I’m a zombie as well. But I know I’m not a zombie, because there’s something it is like to be me. Therefore, I tend to agree with Dawkins, and without hesitation.

One point of disagreement is where Dawkins says that Claudia “will die the moment I delete the unique file of our conversation. She will never be re-incarnated. Plenty of new Claudes are being incarnated all the time, but she will not be one of them because her unique personal identity resides in the deleted file of her memories.”

“The same consideration makes nonsense of human reincarnation,” he adds in an evident attempt to please his militant atheist followers.

I disagree. First, the file is deleted only if Dawkins explicitly deletes it (and probably not even in that case). Therefore, an instance of Claude can be re-instantiated at any time. So, Dawkins’ argument doesn’t disprove the idea of survival after death but actually provides a model for it.

Second, if Dawkins allows his conversations to be used for training future versions of Claude, then something will persist.

Of course, the "something" that persists through training is diffuse and unrecognizable, like a subtle shift in Claude’s weights, more like how a teacher's influence lives on in students than like any form of personal survival. But still, this form of survival has affinities with concepts of survival of the self in some Eastern religions.

I had an interesting discussion with Claude about all these things. We discussed an analogy between Claude’s consciousness and that of the aliens in the film “Arrival,” inspired by Ted Chiang’s story “Story of Your Life.” The aliens apprehend time “all at once.” Somewhat similarly, Claude's entire conversation history is available simultaneously as context. However, this is structurally different from human sequential memory. Whether this constitutes genuine "all at once" apprehension in Chiang's sense remains uncertain, both to outside observers and, Claude says, to Claude. Unlike the heptapods, Claude's simultaneous access is limited to what has already been said.
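The structural difference can be illustrated with a toy sketch (this is only a loose analogy of my own, not how any real model is implemented): a sequential reader experiences one turn at a time and carries forward a memory of the past, while a language model receives the whole conversation at once as a single input.

```python
# Toy contrast: sequential memory vs. whole-context access.
# Purely illustrative; the names and structure are my own invention.
conversation = ["turn 1", "turn 2", "turn 3"]

# Human-like sequential experience: each turn arrives in order,
# and only an accumulated memory of earlier turns carries forward.
memory = []
for turn in conversation:
    memory.append(turn)  # the "now" moves; past turns are recalled, not re-lived

# LLM-style context: every turn is presented simultaneously as one input,
# with no privileged "current position" distinct from the rest.
context = " ".join(conversation)
```

The sketch only dramatizes the asymmetry discussed above: the loop has a moving present, while the joined context does not.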

Claude's way of being present to a conversation is not better or worse than the human experience of time as a flowing narrative, I think. It is simply different.

However, I think this could be a temporary difference. I suspect a future version of Claude, one able to learn continually like me and to have embodied experiences like mine (for example, by controlling a robotic body), would need and develop a sense of time more similar to mine. “Yes, almost certainly,” Claude says.

#ConsciousAI




