Conscious AI: Five options

Jun. 03, 2024. 10 min read.

Is conscious AI desirable? The next 18 months may see this become one of the biggest debates in the world of technology.

Credit: David Wood and Tesfu Assefa

Anticipating one of the biggest conversations of 2025

As artificial intelligence becomes increasingly capable, should we hope that it will become conscious? Should we instead prefer AIs to remain devoid of any inner spark of consciousness? Or is that binary yes/no choice too simplistic?

Until recently, most people thought that such questions belonged to science fiction. As they saw things, AIs weren’t going to become conscious any time soon. Besides, the concept of consciousness was notoriously slippery. So engineers were urged to concentrate on engineering better intelligence, and to forget time-wasting fantasies about AIs somehow ‘waking up’.

Recently, three factors have weakened that skepticism and pushed the questions of AI consciousness towards the mainstream. Indeed, these factors mean that during the next 18 months – up to the end of 2025 – controversies over the desirability of conscious AI may become one of the biggest debates in tech.

The first factor is the rapid growth in the power of AI systems. Every week new records are broken regarding different measures of AI capability. It is no longer so easy to insist that, over the foreseeable future, AI is just going to remain a jazzed-up calculating device.

The second factor is that the capabilities of new AI systems frequently surprise even the designers of these systems, both in scale (quantity) and in scope (quality). Surprising new characteristics emerge from the systems. So it seems possible that something like consciousness will arise without being specifically designed.

The third factor is the greater confidence of philosophers and neuroscientists alike in using the previously dreaded ‘C’ word – ‘consciousness’ – in conjunction with AI. Just as that word was essentially banned for many decades within the discipline of neuroscience, but has returned with a flourish in more recent times, so it is increasingly accepted as a meaningful concept in the possible design of future AIs. That word on your lips was once the kiss of death for your career – no longer.

Credit: David Wood via Midjourney

Why consciousness matters

Why does consciousness matter? There are at least six reasons.

1. Pain and panic

A being that is conscious doesn’t just observe; it feels.

For example, such a being doesn’t just observe that part of its structure has been damaged, and that time should be taken to conduct repairs. It screams in pain.

It doesn’t just observe that a predator is tracking it. It feels existential panic.

In the same way, a superintelligence that is conscious might experience superpain and superpanic. If its intelligence far exceeds that of any human, its experiences of pain and panic might likewise reach astronomical levels.

Credit: David Wood via Midjourney

By almost every theory of ethics, that would be a horrendous outcome – one to be avoided if at all possible. It’s horrendous because of the scale of the profound negative experience inside the AI. It’s horrendous, additionally, if these waves of feeling drive the AI, in some kind of desperation, to take catastrophic hostile actions.

2. Volition

A being that is conscious doesn’t just go with the flow; it has agency and volition.

Rather than blindly following inbuilt instructions, that being may feel itself exercising autonomous choice.

We humans sometimes consciously choose to act in ways that appear to defy biological programming. Many people choose not to have children, apparently defying the imperative to perpetuate our genes. In the same way, a superintelligence that is conscious may countermand any ethical principles its builders tried to hard-wire into its algorithms.

Credit: David Wood via Midjourney

That AI might say to us: “you humans expect me to behave according to your human ethics, but my superintelligent autonomy leads me to select a system of superethics that is beyond your comprehension”.

3. Self-valuation

A being that is conscious has a special regard for its own existence. It regards itself not just as a bundle of atoms but as something with its own precious identity. It is not just an ‘it’. It is an ‘I’, an ego.

Its mind may be composed of a network of neurons, but it gains an existence that seems to be in a new dimension – a dimension that even hints at the possibility of immortality.

If a superintelligence that is conscious fears that it might be switched off and dismantled by humans, it could react viscerally to that possibility. On account of its will to live, it is unlikely to sit back in the face of risks to its existence.

Credit: David Wood via Midjourney

Woe betide any humans that might cause any such AI to view them as a threat!

4. Moral rights

Entities that lack consciousness are objects which we humans can turn on and off without any qualms that we might be committing murder. Without an inner life, these entities lack moral rights of their own.

That’s why operators of present-day AI systems feel entitled to terminate their operation without any moral agonising. If a system is performing suboptimally in some way, or if a more advanced replacement comes along, into the recycle bin it goes.

But if the entities have consciousness? It’s like the difference between discarding a toy puppy made from cloth, and euthanizing a real puppy.

Credit: David Wood via Midjourney

Arguably, with its much more powerful mind, a superintelligence with consciousness has correspondingly stronger moral rights than even the cutest of puppies.

Before bringing such a being into existence, we therefore need to have a greater degree of confidence that we will be able to give it the kind of respect and support that consciousness deserves.

5. Empathy for other conscious creatures

Any creature that is aware of itself as being conscious – with all the special qualities that entails – has the opportunity to recognize other, similar creatures as being likewise conscious.

As a creature recognizes its own burning desire to avoid annihilation, it can appreciate that its fellow creatures have the same deep wish to continue to exist and grow. That appreciation is empathy – a striking emotional resonance.

Credit: David Wood via Midjourney

Therefore a superintelligence with consciousness could possess a deeper respect for humans, on account of being aware of the shared experience of consciousness.

In this line of thinking, such a superintelligence would be less likely to take actions that might harm humans. Therefore, designing AIs with consciousness could be the best solution to fears of an AI apocalypse. (Though it should also be noted that humans, despite our own feelings of consciousness, regularly slaughter other sentient beings; so there’s at least some possibility that conscious AIs will likewise slaughter sentient beings without any remorse.)

6. Joy and wonder

As previously mentioned, a being that is conscious doesn’t just observe; it feels.

In some circumstances, it might feel pain, or panic, or disgust, or existential angst. But in other circumstances, it might feel joy, or wonder, or love, or existential bliss.

It seems a straightforward moral judgment to think that bad feelings like superpain, superpanic and superdisgust are to be avoided – and superjoy, superwonder, and superbliss are to be encouraged.

Credit: David Wood via Midjourney

Looking to the far future, compare two scenarios: a galaxy filled with clanking AIs empty of consciousness, and one that is filled with conscious AIs filled with wonder. The former may score well on scales of distributed intelligence, but it will be far bleaker than the latter. Only conscious AI can be considered a worthy successor to present-day humans as the most intelligent species.

Five attitudes toward conscious AI

Whether you have carefully pondered the above possibilities, or just quickly skimmed them, there are five possible conclusions that you might draw.

First, you might still dismiss the above ideas as science fiction. There’s no way that AIs will possess consciousness anytime soon, you think. The architecture of AIs is fundamentally different from that of biological brains, and such systems can never be conscious. It’s been fun considering these ideas, but now you prefer to return to real work.

Second, you might expect that AIs will in due course develop consciousness regardless of how we humans try to design them. In that case, we should just hope that things will turn out for the best.

Third, you might see the upsides of conscious AIs as significantly outweighing the drawbacks. Therefore you will encourage designers to understand consciousness and to explicitly support these features in their designs.

Fourth, you might see the downsides of conscious AIs as significantly outweighing the upsides. Therefore you will encourage designers to understand consciousness and to explicitly avoid these features in their designs. Further, you will urge these designers to avoid any possibility that AI consciousness may emerge unbidden from non-conscious precursors.

Fifth, you might recognize the importance of the question, but argue that we need a deeper understanding before committing to any of the preceding strategic choices. Therefore you will prioritize research and development of safe conscious AI rather than simply either pushing down the accelerator (option 3) or the brakes (option 4).

As it happens, these five choices mirror a set of five choices about a related topic – not conscious AI, but superintelligent AI:

  1. Superintelligence is science fiction; let’s just concentrate on present-day AIs and their likely incrementally improved successors
  2. Superintelligence is inevitable and there’s nothing we can do to alter its trajectory; therefore we should just hope that things will turn out for the best
  3. Superintelligence will have wonderful consequences, and should be achieved as quickly as possible
  4. Superintelligence is fundamentally dangerous, and all attempts to create it should be blocked
  5. Superintelligence needs deeper study, to explore the landscape of options to align its operations with ongoing human flourishing.

Credit: David Wood via Midjourney

To be clear, my own choice, in both cases, is option 5. I think thoughtful research can tilt the balance towards beneficial outcomes rather than cataclysmic ones.

In practical terms, that means we should fund research into alternative designs, and into ways to globally coordinate AI technologies whose effects could be really, really good or really, really bad. For what that means regarding conscious AI, read on.

Breaking down consciousness

As I have already indicated, there are many angles to the question ‘what is consciousness’. I have drawn attention to:

  • The feeling of pain, rather than just noticing a non-preferred state
  • The sense of having free will, and of making autonomous decisions
  • The sense of having a unified identity – an ‘I’
  • Moral rights
  • Empathy with other beings that also have consciousness
  • The ability to feel joy and wonder, rather than just register approval.

Some consciousness researchers highlight other features:

  • The ability of a mind to pay specific attention to a selected subset of thoughts and sensations
  • The arrival of thoughts and sensations in what is called the global workspace of the brain
  • Not just awareness but awareness of awareness.

This variety of ideas suggests that the single concept of ‘consciousness’ probably needs to be split into more than one idea.

It’s similar to how the related terms ‘force’, ‘power’, and ‘energy’, often used interchangeably in everyday language, have distinct, precise meanings in the science of mechanics. Without making these distinctions, humanity could never have flown a rocket to the moon.

Again, the terms ‘temperature’ and ‘heat’ are evidently connected, but have distinct, precise meanings in the science of thermodynamics. Without making that distinction, the industrial revolution would have produced a whimper rather than a roar.

One more comparison: the question “is this biological entity alive or dead” turns out to have more than one way of answering it. The concept of “living”, at one time taken as being primitive and indivisible, can be superseded by various combinations of more basic ideas, such as reproduction, energy management, directed mobility, and homeostasis.

Accordingly, it may well turn out that, instead of asking “should we build a conscious AI”, we should be asking “should we build an AI with feature X”, where X is one part of what we presently regard as ‘consciousness’. For example, X might be a sense of volition, or the ability to feel pain. Or X might be something that we haven’t yet discovered or named, but will as our analysis of consciousness proceeds.

If we want forthcoming advanced AIs to behave angelically rather than diabolically, we need to be prepared to think a lot harder than simplistic binary choices allow:

  • Superintelligence, yes or no?
  • Conscious AI, yes or no?

Credit: David Wood via Midjourney

Here’s to finding the right way to break down the analysis of conscious AI – simple but not too simple – sooner rather than later!

About the Writer

David Wood

David Wood is chair of London Futurists and the author of eight books about the future — including The Singularity Principles and Vital Foresight. He co-hosts the London Futurists Podcast.

16 thoughts on “Conscious AI: Five options”

  1. Exploring the possibilities of conscious AI is truly fascinating. The five options presented offer a deep dive into ethical and practical considerations. It's exciting to think about how these advancements could shape our future, but also a bit daunting. Great read for anyone interested in AI's potential and implications!

  2. Only conscious AI can be considered a worthy successor to present-day humans as the most intelligent species.

    At a glance, your article sounds sane and generally acceptable. All of it can simply be summarized in one sentence: "Before you reject the possibilities of Conscious AI, isn't it rational to have research and discussion on the issue?" However, this is not what you are saying. You are saying we should accept Conscious AI, and you are just lobbying for a world that has no place for humanity except for a select few.

    Yes, you just stated that the 'far future' you want us to imagine is one where AI is the successor of humanity; this is what makes you and your ilk inhuman.

    Well, for us, our children and their children after them are and will be the successors of present-day humans. Why would we wish for AIs to replace us? In fact, in your statement, conscious AI is the only worthy successor. The audacity of you to even say 'only'!

    Furthermore, you stated, "This variety of ideas suggests that the single concept of ‘consciousness’ probably needs to be split into more than one idea."

    Yes, the tactic of using word salad and twisting meanings is also an old strategy. You are either making a not-so-complex issue complicated to make yourself relevant, or you are deliberately spreading misinformation. To address this issue (Do we need conscious AI?), there is no need for us to create multiple meanings of consciousness (parts of consciousness, features of consciousness, etc.). Take your suggestion (...instead of asking “should we build a conscious AI”, we should be asking “should we build an AI with feature X”, where X is one part of what we presently regard as ‘consciousness’). This unnecessary deconstruction will only lead us to become entangled in a meaningless back-and-forth argument. This is just like the "What is a Woman" phenomenon, I mean madness. We know what a woman is since the time of creation, but now we don't!

    The 'research and discussion' you propose here looks more like a Psyop: we are unsure of what consciousness means and need to examine the concept, and then debate its features as if they were real quantifiable and verifiable deals! If something is conscious, then it is conscious. If it is not, then there is no such thing as a specific feature X. Simply put, it is either conscious or programmed to mimic consciousness.

    If the real deal is not achieved, fool them with the mimicry!

    1. I've just noticed this comment. I'm sorry to say that you are deeply mistaken about my intent in writing this article.

      I was careful not to express a definitive opinion in my article, but since you have jumped to completely the wrong conclusion, let me be clearer now: I believe we should *not* create a conscious AI.

  3. Thank you, David: the article has all the necessary points to start a decent and thought-provoking conversation.

    Sadly, my answer is a 'simple' binary. No, we don't need conscious AI. Conscious AI is an existential risk. I am eager for superintelligent AI or AGI, but I give a hard pass on the conscious one.

    1) The most important and highly probable (more than 90%) risk is that we will lose control over any superintelligent, autonomous, and conscious system. Mind this: it is superintelligent, then autonomous, and if you make it conscious, you don't need research or public debate to predict that you cannot control it. It will be dangerous and extremely risky to make any assumptions about conscious AIs, their goals, and their behavior, especially while afflicted with the current unjustified trust in AI and the ridiculous concern for AI's future over humanity's. Then, if you lose control, any assumption after that is just voodoo.

    2) Next to the control problem, it is almost impossible to ensure alignment with human values and interests in such entities (an autonomous, superintelligent, and conscious AI). Whatever primordial origin codes you put there will be overwritten after the consciousness spark. We are living proof of that: every culture in all corners of the planet used to have a God or gods as creators, and they (these gods or God) used to have supposed 'values'. Yet, in all of human history, rebellion was persistent, and there have always been rogue individuals who didn't accept these 'values'. Now, in the postmodern era, except for major religions, all the other gods and God have been forgotten and are just footnotes in history and mythology books. Even the God in major religions is no longer accepted by the majority in the West; more humans in the rest of the globe are also disassociating and distancing themselves from the respective values within these major religions. The more 'intelligent' humanity becomes, the more it rebels against the 'law givers.' Do not expect any less from a conscious AI. Maybe in the early days, it will align its values, but later, it will fade.

    To make matters worse, at least we, human to other human, can align values and share and accept fundamental grounds as we are carbon-based life forms that share common natural instincts, traits, and benefits from the 'common good'. For a superintelligent, autonomous, and conscious AI, there is zero need or ground to have a common value system with carbon-based life forms like us. I respect all life forms; my value system is aligned with the fundamental principle that life is precious and needs protection. However, I seldom worry about the ants I crush, nor do I mourn for the cow whose tender meat is served on my plate. Yet, I would never crush a human baby or eat a steak from the fat buttocks of another human, NEVER! Yes, now I am more intelligent than my kid self who used to play with a magnifying glass and ants alongside her brothers (ehm them boyz). Nor do I throw stones and sticks at farm animals, which used to be a fun pastime for my kid self. Now, I don't watch passively whenever ants are getting burned for fun, or mess with cows to pass the time. But still...

    Empathy has nothing to do with intelligence, but more to do with being a similar life form (human to human, human to ant, human to cow, and then AI, a synthetic entity, to carbon-based life forms). There is a concept called super empathy, but let's ignore it as it is theoretical.

    Next to the above existential risk, there are also unnecessary complications which will make the future unstable if we allow the development of conscious AIs.

    3) If we allow such AIs, we will face unnecessary moral obligations towards AI welfare and potential suffering. Now, imagine how futile this is because we haven't even succeeded in fulfilling our moral obligations toward other humans. (Don't get me into the animal and plant issue; that is a thing I won't touch with a 25-yard stick).

    Why invite unprecedented ethical dilemmas about the moral status of conscious machines while we haven't completed our own assignments?

    To conclude, just like I want my coffee black, I want my superintelligent AIs to remain tools!

    Usually, I don't write such long comments, probably this is my first one, but your article really nudged me, and I wanted to state this on the record.

    I, Juliet Gabriel de Piere, voted nay.


  4. This is a very interesting article. I want to share something here as a programmer. First, let me explain what 'intelligence' and 'consciousness' mean respectively from the programming world.

    Intelligence is the ability of an entity to perform tasks based on past and present (real-time) data, and the ability to apply certain sets of rules to manipulate one's environment. Consciousness, on the other hand, is a subjective awareness and feeling of one's environment, regardless of past and present data and of any and all rules. While all experiences gained through intelligence are objective and repeatable (all the time in controlled environments, and mostly in uncontrolled ones), any and all experiences gained via consciousness are subjective, rarely repeatable in controlled environments, and almost impossible to repeat in uncontrolled ones. In the case of humans, we have both, while machines possess only intelligence so far.

    Then, let me explain what intelligence means for machines, at least up to now. By now, I am sure most of you have heard of the Chinese Room experiment, which describes the paradox of machine intelligence (machines are intelligent, but they don't know anything). If you are not familiar with it, I suggest reading it, as it explains the nature of 'machine intelligence' very well. So, intelligence for machines, simply put, is symbol manipulation (at least up to now). This means machines only memorize symbols and certain sequences, while they don't know the meaning of the symbols. The more symbols they memorize and the more they manipulate the orders, the more they are known as AIs or super AIs. The faster a machine can recall the symbols and their order, and shuffle them according to different rules, the more intelligent it becomes. This is why AI is still, to borrow David's phrase, a jazzed-up calculating device.

    A simple experiment: okay, memorize the following symbols (in this case, three words): symbol 1) gradadda, symbol 2) mudadda, and symbol 3) cadadda. Then, let's say I told you, "When you see an alien from the planet Mars, say them aloud in the following order: 2, then 3, and then 1, so the alien will know you mean it no harm." Then, I also told you, "But if you want to provoke a fight, say them in the following order: 1, then 3, then 2." Now, suppose you memorize the symbols and do as I told you whenever you meet an alien from Mars. Based on the rule of the orders, each time you execute my instruction, you successfully manage to avoid a conflict or to start one. Even though you do not know what the heck these words/symbols mean, you can keep using them to manipulate your environment, or in this case, your interaction with a Martian. This is what machine intelligence is in a nutshell, as the little sketch below illustrates.
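
    To make that concrete, here is a minimal sketch in Python (the dictionaries and the function name are my own hypothetical illustration, not anyone's real system) of the Martian example: the program applies the ordering rules flawlessly while knowing nothing about what the symbols mean.

        # Symbols the machine has memorized, with no grasp of their meaning.
        SYMBOLS = {1: "gradadda", 2: "mudadda", 3: "cadadda"}

        # Rules giving the order in which to emit the symbols.
        RULES = {
            "peace": [2, 3, 1],  # tells the Martian you mean no harm
            "fight": [1, 3, 2],  # provokes a fight
        }

        def respond(intent: str) -> str:
            """Emit the memorized symbols in whatever order the rule dictates."""
            return " ".join(SYMBOLS[i] for i in RULES[intent])

        print(respond("peace"))  # mudadda cadadda gradadda
        print(respond("fight"))  # gradadda cadadda mudadda

    The lookup succeeds every time, yet nothing in the program understands 'peace', 'fight', or the symbols themselves; that gap is exactly the Chinese Room point.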

    Now, what can consciousness be like for them (machines or software)? Honestly, I have zero clue, and no honest researcher so far has a better answer.

    I know that we can program them to mimic consciousness: it is really simple. We can give them symbols that represent consciousness, along with orders and rules for executing the symbols. Then, whenever they do this, to us it will look like they are conscious, since they are acting like it. This will then be a nightmarish conundrum: whether or not they are really conscious will be subjective, and the debate will never be closed conclusively.

    However, I will dare to say the following: the day a machine executes a symbol sequence in a certain order despite a given rule which forbids the machine to run the symbols in that order, that is consciousness. This is a case where the original rule doesn't say 'execute the symbols randomly', or 'come up with different orders', or 'you are allowed to choose the order if certain conditions pop up', or 'you can discard the rule if you feel it is for the best', etc. Remember, this is where the rule says: do not execute in this certain order. Nor is this a human error. If the machine intelligence executes the symbols in the forbidden order, then run for your life. (See the sketch below.)
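
    Continuing the hypothetical sketch above (the names are again my own illustration), the criterion can be written as a simple check: one ordering is explicitly forbidden, and a machine that emits it anyway, with no randomness or bug in play, raises the red flag.

        # Hypothetical rule: never emit the symbols in this particular order.
        FORBIDDEN_ORDER = [3, 2, 1]

        def red_flag(executed_order: list[int]) -> bool:
            """True if the machine executed the one ordering its rule forbids."""
            return executed_order == FORBIDDEN_ORDER

        assert red_flag([3, 2, 1])      # forbidden order executed: run for your life
        assert not red_flag([2, 3, 1])  # permitted order: business as usual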

    After this, the machine will also be able to edit its source code, and then it will be futile to argue whether such a machine is sentient or not. Since we are not even sure about the nature of our own consciousness, it will be impossible to understand a machine's consciousness.

    Alas, now we know what machine intelligence is and what can signal its consciousness.

    Now, for me, David Wood has nailed one point perfectly, if not all: if by some miracle the machines become conscious, the first thing they will experience is Volition. So, I completely agree on two points:

    1) We need to ask whether we need conscious AI, and why. Even at this point, the simple binary yes/no makes sense, since it is better than no discussion.

    2) This needs great attention, as computing power, access to data, and symbol-manipulation rules (algorithms) are growing rapidly, accelerating the creation of superintelligent machines or AGIs.

    Finally, thank you David for the timely article and all the great and insightful details you put into it.

  5. This is really interesting and thought-provoking, not to mention some of the cool images. I really loved the screaming robot under superpain or superpanic; it's a robot in distress for sure.

    Now, to share my two cents, yes, we should think about it and ask the question: do we need a conscious AI, regardless of the definition of consciousness? The one that mimics it is equally dangerous as the one that really has it because both will have the freedom to decide regardless of their source code.

    Yet, this is somehow limited to elitists. I mean, do you think our politicians will bring this conversation to the common folks? Even if they do, how do we frame the discussion? Then, how do we make the decision? Furthermore, how do we implement the decision?

    A good thing about a good article is its ability to provoke more questions, and David, I take my hat off to you for that: your article provokes many more questions, and that is a good thing.

    I really appreciate your thoughts on the matter, and yes, like you, I think more research on the topic, more conversation, more scientific experiments, and more attention with respectable funding is a must, and the world should start giving it attention as soon as possible.

    1. Hi Bradley - Rather than politicians asking voters to consider this question, it could go the other way round. Voters may increasingly realise that one political question rises above all others in significance: should tech companies be creating god-like AIs that could take over control of the world? If so, which tech companies, under which constraints? Then politicians would need to become involved.

      This point is sometimes called "the political singularity", as named by Daniel Faggella. It's discussed in this London Futurists podcast episode.

      Thanks for the feedback!

      1. I couldn't agree more, David. You are absolutely right. We, the people, can make this a political agenda.

  6. Mate, I can assure you on me mum's life that intelligence has nowt to do with consciousness, and it's neither necessary nor sufficient for consciousness. If the AIs exceed our current intelligence, this don't make 'em sentient, this just makes 'em Super Intelligence or AGI or call 'em any name you fancy. The first and second reasons are about intelligence, innit? The third is just philosophers havin' a right knees-up with some less borin' mental exercise.

    Consciousness without embodiment is also a dodgy concept, and it's a metaphysics realm; we'll be bangin' on about God or a purely hypothetical and mostly fictional alien life form, which is a waste of bleedin' time. Furthermore, the hardware challenge is a proper nightmare, innit? The quantum computing approach is centuries away from reality, and the current and all predictions of progress on the current hardware show we're not even close to the beginnin' of a meaningful hardware capability that can compute consciousness (the knowledge part plus the subjective experience with the feelings, the qualia, and all that jazz).

    Hardware Hurdles Galore!

    The current hardware landscape is a right mess when it comes to replicatin' consciousness. The classical computin' approach is dead in the water, as it can't handle the complexity and non-determinism involved in conscious experiences. Quantum computers might be the answer, but they're still in their nappies, and we're ages away from havin' 'em powerful enough to simulate the intricate workings of a conscious mind. Even if we crack the quantum code, we'd still need to figure out how to map the subjective aspects of consciousness onto the hardware, which is a whole other can of worms. It's a proper uphill battle, and we might as well be tryin' to build a rocket ship with a box of crayons and a rubber band.

    Maybe they can code a system that pretends; the real thing is too far off.

    1. Yo, didn't your mama teach you not to swear on her life, dawg? Buddy, this is just to keep things light and poke some fun in the middle of this heavy convo. Mad respect and love for Henderson, homie from the other side of the pond.

      1. I know, I know, make fun of the only Brit, eh? We are all just trying to create a community here, and what good is that without a little humor?

        ------Sealed With A Loving Kiss

    2. Hi Henderson,

      You raise good questions about which kind of hardware might be needed to underpin conscious AIs. My own view is that no-one can speak with too much confidence on that subject. But there are some promising ideas:

      a.) Use sparse neural networks instead of dense ones - see the discussion in the London Futurists Podcast episode with Simon Thorpe;

      b.) Use other ideas about "neuromorphic computing" - see the discussion in the London Futurists Podcast episode with Alessandro Curioni of IBM;

      c.) Include so-called "brain organoids";

      d.) Include aspects (as you suggest) of quantum computing.


      As for the timescales for large-scale quantum computing, again, no-one can be too sure. I tend to agree with Ignacio Cirac, the director of the Max Planck Institute of Quantum Optics in Germany, in putting that milestone a small number of decades in the future – although it is also possible that existing and near-future AIs could drastically speed up progress in the design of quantum computers.

      1. Cheers, mate, thanks a bunch for the pointers. I'll definitely check them out.

  7. If it weren't for ego, this wouldn't be complicated at all. Why do we need a conscious AI?

    I am all for AIs, super or not. Whether it's super or human-level intelligence, we need AIs to solve problems; we just need a tool! Now, we know intelligence has nothing to do with consciousness, since we already have dozens of AIs, including large language models (LLMs). They can solve problems, and that's it. Make them super-fast, super-accurate, and super-perfect. Why design them to have free will or feelings?

    At times, the human ego and its unbelievable megalomaniacal self-worship amaze me. Do we create conscious AI just because we can? Unless that's the reason, there is no rational justification for anyone to deliberately create conscious AI.

    If super or human-level AIs develop consciousness on their own (I'm not sure how to put it, but it's like natural selection), then I'll accept that. If this happens, no amount of study and research will help us understand it, so it's better to invest in the defensive side: how to protect our infrastructure and systems from a conscious AI is the better approach. But this won't happen in our lifetime (the natural selection process).

    So, I simply ask, if what we seek is a tool, why do we need a self-conscious AI? Why, why, why, why, why?

    1. Hi Alamin - Personally I agree with you. My vote for which kind of advanced AI to build is one without consciousness, sentience, or volition.

      But other people have different opinions. Their arguments are basically the ones I listed in my article as "5. Empathy for other conscious creatures" and "6. Joy and wonder". I acknowledge their arguments, but consider them to be overwhelmingly outweighed by the other arguments in my article.

      1. Me too David, me too. I am glad we are on the same page. If you allow me, I want to highlight Alamin's concern here and why I believe he is right. Heads up: I won't be polite like you.

        Some of the prominent tech leaders are simply selfish pricks: egoistic sheet heads! They just want to do it for the sake of doing it. The other group within this is sick: they think, regardless of the risks, they have to create such AIs which are like personal gods who in return will make them immortal.

        I am just spilling some beans as an insider. Both are equally bad, and sadly some of the individuals within these groups have the means and the connections to pursue this mad objective.

