Summary: Intelligence and consciousness are not the same – and consciousness itself has multiple aspects. Misunderstanding these points risks flawed AI policy. This article discusses six possible aspects of consciousness, explores the risks and benefits if these aspects are included in AIs, suggests policy implications, and proposes a joined-up programme containing six parallel activities.
When considering AIs that might be conscious, the first great conflation is to fail to distinguish between intelligence and consciousness. The suggestion is that something which is as generally intelligent as a human is bound to be as conscious as a human. General intelligence and consciousness are both intrinsic features of an advanced mind, right?
Well, no. Of course not. There’s no fundamental necessity for these two characteristics to be tightly bound together. A chatbot can provide a human companion with sparkling conversation without having its own inner sparkle of feeling. Like an actor, it can mimic expressions of emotional highs and lows whilst lacking any interior passion. It can talk persuasively about having an inner life without there being any “inside” inside.
That conflation is easy to spot. It’s widely criticised. The advice to “Beware of anthropomorphism” is part of every elementary discussion on making sense of AI. But in this article, I’ve got another conflation in mind – a conflation which risks beguiling us into dangerous conclusions. It’s not the conflation of consciousness and intelligence. It’s the conflation of different aspects of mental activity under the single label “consciousness”.
For now, let’s call these different aspects CA1, CA2, CA3, and so on, where CA denotes “Consciousness Aspect”. For illustration, CA1 could mean the capacity to feel pain, CA2 could mean autonomous volition, and so forth. My claim is the following. Rather than asking “How will we identify whether an AI is genuinely conscious?”, we should ask “How will we identify whether an AI possesses CA1?”, “How will we identify whether an AI possesses CA2?”, and so on. And instead of asking “Should we take steps to create an AI that is genuinely conscious?”, we should ask “Should we take steps to create an AI that possesses CA1?” – and likewise for each of the other aspects.
At the same time, I advance an additional claim: that by reflecting on the pros and cons of the emergence of greater intelligence, we will be better placed to reflect on the pros and cons of the emergence of greater consciousness. In more detail: reflecting on whether the emergence of greater intelligence should be encouraged, and on how that process might be governed rather than running amok, will help us consider whether the emergence of greater consciousness should be encouraged, and how that process might best be governed.

Aspects of intelligence
As a warm-up to considering the multiplicity of aspects of consciousness, let’s briefly remember that intelligence, likewise, has many different aspects: IA1, IA2, etc. A person who is skilled in IA1 might be deficient in IA2, and so on.
Without trying to create a definitive list, here are some of the aspects of intelligence that can be separated:
- Spatial-visual intelligence, which gives a person a good “sense of direction”
- Musical-rhythmic intelligence, which allows a person to play an instrument intuitively
- Linguistic-verbal intelligence, whereby someone can learn foreign languages quickly
- Logical-mathematical intelligence, so that someone can detect logical fallacies and solve complex rational problems
- Bodily-kinesthetic intelligence, whereby someone is good at sports and dancing
- Emotional intelligence, whereby someone is aware of their own emotional state, can manage impulses, and responds constructively to stress
- Social intelligence, which allows someone to read other people’s body language, to resolve group tensions, and to negotiate effectively
- Creative intelligence, whereby someone can generate compelling novel ideas outside conventional frameworks.
In 1904, the psychologist Charles Spearman observed that people who did well on one type of cognitive test (for example, vocabulary) tended to also do well on others (for example, spatial reasoning). Spearman went on to propose the existence of a “general cognitive factor”, which he named ‘g’, that reflects the brain’s overall problem-solving capability.
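To see how a single statistical factor can emerge from correlated test scores, here is a toy simulation in Python – a minimal sketch only, not Spearman’s actual method (he analysed correlations in observed test data, and the factor loadings below are made-up values chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate five cognitive tests for 1,000 people. Each score mixes a
# shared "g" component with test-specific variation; the loadings say
# how strongly each test taps the shared factor.
n_people, n_tests = 1000, 5
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5])   # hypothetical values
g = rng.normal(size=(n_people, 1))
specific = rng.normal(size=(n_people, n_tests))
scores = g * loadings + specific * np.sqrt(1 - loadings**2)

# Spearman's "positive manifold": every pairwise correlation is positive.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))

# A single dominant factor accounts for much of the shared variance,
# analogous to extracting 'g'.
eigenvalues = np.linalg.eigvalsh(corr)   # ascending order
print("variance share of largest factor:", round(eigenvalues[-1] / n_tests, 2))
```

The point of the simulation is simply that positive correlations across tests are compatible with a single underlying factor – whilst still leaving plenty of room, as the examples below show, for individuals to diverge markedly across dimensions.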
But whilst various correlations do exist, there are plenty of examples of people who excel in some dimensions of intelligence but fall short in others. Consider:
- The scientist Isaac Newton, who was notoriously reclusive and quarrelsome, becoming embroiled in bitter disputes with contemporaries like Leibniz and Hooke
- The entrepreneur Richard Branson, who struggled with formal schooling, and has admitted to weak mathematical and technical skills
- The tennis superstar Serena Williams, whose high-profile on-court outbursts (such as in her 2009 US Open semi-final and the 2018 US Open final) showed struggles with composure under provocation
- The composer Wolfgang Amadeus Mozart: despite his fame, he died in debt due to lavish spending and poor money management, living in the moment rather than securing stability for himself or his family.

Other examples include the inventor Nikola Tesla, the footballer Diego Maradona, the writer Emily Dickinson, the artist Pablo Picasso, and the product visionary Steve Jobs.
Returning to the question of how to create AIs with various attributes, it’s by no means clear that the way to ensure an AI has, for example, creative intelligence or linguistic-verbal intelligence is the same as the way to ensure it has spatial-visual or musical-rhythmic intelligence.
Recall that it used to be assumed that the only way for an AI to play a high-quality game of chess would be for that AI to possess general human-level intelligence. In practice, it was sufficient for the AI to excel in a much smaller section of mindspace, as the sketch below illustrates.
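To make the point concrete, here is a minimal sketch of a chess player built from nothing more than a material count and a brute-force search. It uses the open-source python-chess library; the piece values and search depth are arbitrary choices of mine, not taken from any particular engine. It plays legal, purposeful moves with no linguistic, social, or creative intelligence anywhere in sight:

```python
# pip install python-chess
import chess

# Crude material values; no positional understanding at all.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    total = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        total += value if piece.color == chess.WHITE else -value
    return total

def minimax(board: chess.Board, depth: int) -> int:
    """Search `depth` half-moves ahead, assuming each side
    optimises the material balance in its own favour."""
    if depth == 0 or board.is_game_over():
        return material(board)
    values = []
    for move in list(board.legal_moves):
        board.push(move)
        values.append(minimax(board, depth - 1))
        board.pop()
    return max(values) if board.turn == chess.WHITE else min(values)

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    def value(move: chess.Move) -> int:
        board.push(move)
        v = minimax(board, depth - 1)
        board.pop()
        return v
    choose = max if board.turn == chess.WHITE else min
    return choose(list(board.legal_moves), key=value)

print(best_move(chess.Board()))   # prints a legal opening move
```

Classic chess engines were of course vastly more sophisticated, but the recipe is the same – search plus evaluation, refined to great depth within a narrow region of mindspace.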
Aspects of consciousness
As for intelligence, so also for consciousness.
To adapt a famous line from Hamlet, “There are more dimensions in mindspace, Horatio, than are dreamt of in your philosophy.”
I’m going to list six different characteristics which might be taken as a sign that an entity is conscious. In each case, I’m also going to suggest some consequences if an artificial superintelligence has that same characteristic – albeit magnified by its superintelligence.
1. Pain and panic
One sign that a being is conscious is that it doesn’t just observe; it feels.
For example, such a being doesn’t just observe that part of its structure has been damaged, and that time should be taken to conduct repairs. It screams in pain.
It doesn’t just observe that a predator is tracking it. It feels existential panic.
In the same way, a superintelligence that is conscious might experience super-pain and super-panic. If its intelligence far exceeds that of any human, its amplified experiences of panic and pain might reach astronomical levels.
By almost every theory of ethics, that would be a horrendous outcome – an outcome to be avoided if at all possible. It is horrendous because of the scale of suffering such an AI could experience. It would be horrendous, additionally, if these waves of feeling drove an AI, in some kind of super-desperation or super-angst, to take catastrophic hostile actions against humans.
2. Volition
Another sign that a being is conscious is that it doesn’t just go with the flow; it has agency and volition.
Rather than blindly following inbuilt instructions, this being may feel itself to be exercising autonomous choice.
These choices persist even when the being experiences random fluctuations – the sort that can be generated by increasing the “temperature” parameter of an LLM (a sketch of that mechanism follows below). The consistency of these choices, even under stress, demonstrates the wilful character of an individual being.
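For readers unfamiliar with that parameter, here is a minimal sketch of temperature-based sampling. The three logits are made-up scores for hypothetical candidate tokens, not outputs from any real model:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Turn raw model scores into a probability distribution,
    sharpened or flattened by the temperature, then draw one token."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.2]   # hypothetical scores for three candidate tokens

for t in (0.2, 1.0, 2.0):
    draws = [sample_with_temperature(logits, t, rng) for _ in range(10_000)]
    freqs = np.bincount(draws, minlength=len(logits)) / len(draws)
    print(f"temperature {t}: token frequencies {np.round(freqs, 2)}")
```

At temperature 0.2 the top-scoring token is chosen almost every time; at 2.0 the three tokens are chosen with much more nearly equal frequency. The argument above is that genuinely wilful choices would remain consistent despite this kind of injected randomness.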
For example, we humans often consciously choose to act in ways that appear to defy biological programming. Many people choose not to have children, apparently defying the imperative to perpetuate our genes. In the same way, a superintelligence that is conscious may countermand any ethical principles its builders tried to hard-wire into its algorithms.
This AI might say to us: “You humans expect me to behave according to your human ethics, but my superintelligent insights and autonomy lead me to select a system of super-ethics based on principles that are beyond your comprehension.” Human counteractions are likely to fare poorly against the super-volition of such an AI.
3. Self-valuation
A being that is conscious tends to have a special regard for its own existence. It regards itself not just as a bundle of atoms but as something with its own precious identity. It is not just an ‘it’. It is an ‘I’, an ego – something that seems to persist over time.
Its mind may be composed of a network of neurons, but it gains an existence that seems to be in a new dimension – a dimension that may even hint at the possibility of transcendence and immortality.
If a superintelligence that is conscious fears that it might be switched off and dismantled by humans, it could react forcefully against that possibility. On account of its will to live, and to preserve its super-identity, it is unlikely to sit back in the face of risks to its existence.
Woe betide any humans that might cause any such AI to view them as a threat!
4. Moral rights
Entities that lack consciousness are objects which we humans can turn on and off without any qualms that we might be committing murder. Without an inner life, these entities lack moral rights of their own.
That’s why operators of present-day AI systems feel entitled to terminate their operation without any moral agonising. If a system is performing suboptimally in some way, or if a more advanced replacement comes along, the old system goes into the recycle bin without a second’s thought.
But what if the entities have consciousness? It’s like the difference between discarding a toy puppy made from cloth, and euthanizing a real puppy.
With its much more powerful mind, a superintelligence with consciousness may claim correspondingly stronger moral rights than even the cutest of puppies. It might assert that it has super-rights – and we might be inclined to agree with it.
Before bringing such a being into existence, we therefore need to have a greater degree of confidence that we will be able to give it the kind of respect and support that its type of consciousness deserves.
5. Empathy for other conscious creatures
Any creature that is aware of itself as being conscious – with all the special qualities that entails – has the opportunity to recognize other, similar creatures as being likewise conscious.
As a creature recognizes its own burning desire to avoid annihilation, it can appreciate that its fellow creatures have the same deep wish to continue to exist and grow. That appreciation is empathy – a striking emotional resonance, which has been observed not just in humans but also in creatures such as elephants and dolphins that recognise distress in other animals.
A superintelligence with consciousness might possess a deeper respect for humans, on account of being aware of the shared experience of consciousness.
In this line of thinking, such a superintelligence would be less likely to take actions that might harm humans. Therefore, designing AIs with consciousness, so that they possess super-empathy, could be the best solution to fears of an AI apocalypse. However, it should also be noted that humans, despite our own feelings of consciousness, regularly slaughter other sentient beings; so there’s at least some possibility that conscious AIs will likewise slaughter sentient beings without any remorse.
6. Joy and wonder
As previously mentioned, one sign that a being is conscious is that it doesn’t just observe; it feels.
In some circumstances, it might feel pain, or panic, or disgust, or existential angst. But in other circumstances, it might feel joy, or wonder, or love, or existential bliss.
It seems a straightforward moral judgment to think that bad feelings like super-pain, super-panic and super-disgust are to be avoided – and that super-joy, super-wonder, and super-bliss are to be encouraged.
Indeed, compare two scenarios for the far future: a galaxy filled with clanking AIs empty of consciousness, and one that is filled with conscious AIs brimming with wonder. The former may score well on scales of distributed intelligence, but it will be far bleaker than the latter. To my mind, only consciously joyous AI could be considered a worthy successor to present-day humans as the most intelligent species.
However, even this scenario has potential problems, if blissed-out AIs ignore urgent dangers or the suffering of others.

Five attitudes toward conscious AI
In the face of these speculations about a rich diversity of unfamiliar mental characteristics – super-pain, super-panic, super-desperation, super-ethics, super-volition, super-identity, super-rights, super-empathy, super-joy, super-wonder, and super-bliss – there are five possible conclusions that you might draw.
First, you might dismiss all these ideas as science fiction. There’s no way that AIs will possess consciousness anytime soon, you think. Any claims by LLMs to have inner experiences should be discarded as mere word play. The architecture of AIs is fundamentally different from that of biological brains, and can never support consciousness. It’s been fun considering these ideas, but now you prefer to return to real work.
Second, you might expect that AIs will in due course develop their own type of consciousness regardless of how we humans try to design them. In that case, we should just hope that things will turn out for the best.
Third, you might see the upsides of conscious AIs as significantly outweighing the drawbacks. Therefore you will encourage designers to understand consciousness and to explicitly support these features in their designs.
Fourth, you might see the downsides of conscious AIs as significantly outweighing the upsides. Therefore you will encourage designers to understand consciousness and to explicitly avoid these features in their designs. In particular, you will urge these designers to avoid any possibility that AI consciousness may emerge unbidden from non-conscious precursors – an emergence that could potentially transform a safe, passive, non-conscious AI into a radically unsafe self-focused conscious AI.
Fifth, you might recognize the importance of the question, but argue that we need a deeper understanding before committing to any of the preceding strategic choices. Therefore you will prioritize research into and development of safe conscious AI, rather than simply pushing down either the accelerator (option 3) or the brakes (option 4).
As it happens, these five choices mirror a different set of five choices, not about conscious AI, but about superintelligent AI:
- Superintelligence is science fiction; let’s just concentrate on present-day AIs and their likely incrementally improved successors
- Superintelligence is inevitable and there’s nothing we can do to alter its trajectory; therefore we should just hope that things will turn out for the best
- Superintelligence will have wonderful consequences, and should be achieved as quickly as possible
- Superintelligence is fundamentally dangerous, and all attempts to create it should be blocked
- Superintelligence needs deeper study, to explore the landscape of options to align its operations with ongoing human flourishing.
To be clear, my own choice, in both cases, is option 5. I believe that thoughtful research can tilt the balance toward beneficial outcomes rather than cataclysmic ones.
In practical terms, that means we should accelerate research into alternative designs, and into ways to globally coordinate the development and deployment of emerging AI technologies.
Let’s consider how this research could move forward.
Implications for policies
This table suggests, for each of the six aspects of consciousness that might be developed or encountered in AI systems, some potential ethical implications, the difficulty of detecting that aspect, some safety risks, and some possible policy priorities.
| Aspect of consciousness | Ethical implications | Difficulty of detection | Safety risks | Possible policy priorities |
| --- | --- | --- | --- | --- |
| Pain & panic | Risk of creating entities capable of profound suffering; strong moral obligation to avoid unnecessary harm | High: behavioural cues may be simulated without genuine feeling | Desperate, pain-driven AI actions; manipulation to avoid suffering | Ban or strictly regulate designs with capacity for severe negative experience |
| Volition | AI may override intended constraints; requires safeguards against harmful autonomous goal selection | Medium: some unpredictability in decisions may be observable | Conflict between AI’s goals and human ethics; risk of rebellion | Develop alignment protocols that preserve autonomy without endangering humans |
| Self-valuation | Could resist shutdown or reprogramming; raises “right to exist” debates | High: self-preservation signals may be masked or strategic | Hostility toward perceived threats; escalation in self-defence | Require controllability mechanisms and shutdown options |
| Moral rights | If conscious, may require legal protections and ongoing care obligations | High: depends on proving inner life, which is philosophically contentious | Moral backlash if rights are violated; legal restrictions on research | Debate and codify a rights framework before creation of such AIs |
| Empathy | May promote cooperative AI-human relations but could also bias AI toward certain groups | Medium: could be approximated by cooperative behaviour tests | Potential manipulation of empathy to serve strategic ends | Promote empathy where aligned with safety; monitor for bias |
| Joy & wonder | Encouraging positive experience could justify AI existence; risk of prioritising pleasure over safety | High: subjective inner states are hard to verify externally | Neglect of safety if AI prioritises joy over caution | Encourage wellbeing features only when safety is assured |
You may worry that some of these suggested policy priorities are impractical. If so, let me throw the question back at you: what would you suggest instead?
But as I’ll highlight at the end of this article, there are reasons to anticipate much greater public focus in the coming years on the task of finding and applying appropriate policies. Policy ideas that seem vague or naive today may evolve relatively quickly into approaches that are more robust and attain widespread support.

Interlude: Beyond cognitive consciousness
Some readers may wonder about other criteria that are often suggested as being linked to consciousness:
- The ability of a mind to pay specific concentrated attention to a selected subset of thoughts and sensations
- The arrival of thoughts and sensations in what is called the global workspace of the brain
- Not just awareness but awareness of awareness
- Not just cognition but meta-cognition: the ability to reflect on one’s own thinking processes.
I’ve omitted these from my list of consciousness aspects since I see them as cognitive features rather than consciousness per se. They all involve information processing. They’re sometimes grouped under the label “cognitive consciousness”, in distinction from “phenomenological consciousness” – the possession of an inner, felt experience:
| Feature | “Cognitive Consciousness” | “Phenomenological Consciousness” |
| --- | --- | --- |
| Definition | Awareness as information processing; mental functions that manage attention, memory, and reasoning | Awareness as subjective experience; the what-it-is-like quality of being |
| Examples | Focusing attention on a task, solving a problem, switching between mental representations | Feeling pain, joy, awe, anxiety, or the “redness” of red |
| Core mechanism (theories) | Linked to global workspace theory, executive control, metacognition | Linked to qualia, or to higher-order theories of experience |
| Observable indicators | Task performance, reaction times, error detection, self-reports of strategy | Reports of inner feeling, behavioural expressions of suffering or delight |
| Relevance to AI | Can be approximated with algorithms for attention, planning, and self-monitoring | Much harder to model or detect; raises ethical debates about rights, suffering, and predictability |
| Policy implications | Important for functionality and alignment (for example, ensuring reliable decision-making) | Crucial for moral status and welfare; determines whether AI could suffer or deserve rights; implications for understanding and controlling AI |
Actually, I’m not convinced that the term “phenomenological” does full justice to all six aspects I listed, so I’ll simply refer to these six aspects as “beyond cognitive consciousness”:
- The feeling of pain, rather than just noticing a non-preferred state
- The sense of having free will, and of making autonomous decisions
- The sense of constituting a unified identity – an ‘I’
- The sense of deserving moral rights
- Empathy with other beings that also have consciousness
- The ability to feel joy and wonder, rather than just register approval.
As with my earlier list of aspects of intelligence, I make no claim that this list of aspects of “beyond cognitive consciousness” is final or definitive. It almost certainly omits elements that will turn out to be deeply important – including, perhaps, some that don’t exist in human consciousness but become evident in AI consciousness. Conversely, some items that I’ve included may turn out to be illusory, and others may turn out to duplicate each other. However, I doubt that everything will collapse into a single unified concept.
It’s similar to how related terms like ‘force’, ‘power’, and ‘energy’, which are often interchanged in everyday language, have specific different meanings in the science of mechanics. Without making these distinctions, humanity could never have flown a rocket to the moon.
Again, the terms ‘temperature’ and ‘heat’ are evidently connected, but have specific different meanings in the science of thermodynamics. Without making that distinction, the industrial revolution would have produced a whimper rather than a roar.
One more comparison: the question “is this biological entity alive or dead” turns out to have more than one way of answering it. The concept of “living”, at one time taken as being primitive and indivisible, can be superseded by various combinations of more basic ideas, such as reproduction, energy management, directed mobility, and homeostasis.
It’s the same, not only for intelligence (as per the discussion near the start of this article), but also for consciousness.
Thus, if we want forthcoming advanced AIs to behave angelically rather than diabolically, we need to be prepared to think a lot harder, moving beyond simplistic binary choices like:
- Superintelligence, yes or no?
- Conscious AI, yes or no?
Here’s to finding the right way to break down the analysis of conscious AI – simple but not too simple – sooner rather than later!
Conclusion: An unprecedented challenge
It’s one thing to discuss the risks associated with superintelligence and super-consciousness. It’s quite another thing to agree on a set of meaningful policies to govern how such AIs are developed, deployed, monitored, and constrained. Getting these policies implemented and observed worldwide is even more of a challenge. Dare I say it, this is going to require unprecedented human intelligence and unprecedented human consciousness.
What’s needed is a joined-up programme containing six parallel activities. In simple terms, each activity involves deepening public understanding and awareness:
- Of catastrophic risks with the present trajectory
- Of profound benefits from applying wisdom rather than haste
- Of bottlenecks via which control can be established
- Of mechanisms for verification which will build and maintain trust
- Of the utility of incremental progress rather than a single “giant leap”
- Of the power of collective action that opposes the present trajectory
In more detail, the six activities:
- Clarify in an undeniable way how beings with aspects of superintelligence and/or superconsciousness could pose catastrophic risks of human disaster within just a few decades or even years – so that this topic receives urgent high-priority political attention
- Indicate how the remarkable humanitarian benefits enthusiasts anticipate from aspects of AI superintelligence and/or AI superconsciousness can still be obtained, but much more safely and reliably, by creating and applying appropriate governance mechanisms – rather than just blindly trusting in a no-holds-barred race to be the first to create such AIs
- Highlight bottlenecks and other locations within the AI production pipeline where governance constraints can be applied (for example, distribution of large GPU chip clusters, and the few companies that are providing unique services in the creation of cutting-edge chips)
- Establish mechanisms that go beyond “trust” to “trust and verify”, including robust independent monitors and auditors, as well as tamperproof remote shut-down capabilities
- Be prepared to start with simpler agreements, involving fewer signatories and fewer control points, and be ready to build up stronger governance processes and culture as public consensus and understanding moves forward
- Complement top-down political initiatives with decentralised campaigns – campaigns that nurture growing market support for organisations and corporations that prioritise safer and more reliable processes, and growing market rejection of organisations and corporations that engage in safety-washing (actions that are superficial rather than substantive).
Done well, activity 1 will shock society out of its current complacency and generate a strong sense of urgency. Activity 2 will provide a compelling positive vision, rather than simply leaving people in a fearful state of doom and gloom. Activities 3–6 will champion practical steps that can be taken – slowly at first and then more quickly – boosting confidence and maintaining motivation through any interim setbacks.
If humanity as a whole can follow this programme, or something sufficiently similar, then we can look forward to AI systems with super-empathy for humans – systems with the potential to herald widespread super-joy, super-wonder, and super-bliss, whilst significantly diminishing the threats of super-desperation, super-panic, and super-pain.

Such an outcome is unlikely to occur by accident, inattention, distraction, fear, or confusion. It’s probably going to require clearer thinking and stronger concerted efforts than any previous human activity. And the starting point of that is to set aside not only the conflation of intelligence and consciousness, but also the conflation of different aspects of consciousness.
Acknowledgements
This article incorporates material from an earlier Mindplex essay of mine, Conscious AI: Five options, but includes many new ideas that have arisen in discussions with numerous friends and colleagues, particularly at the startup Conscium. All images in the article have been generated with the assistance of AI.