
Developing Compassionate AI

Jun. 14, 2023. 13 mins. read

In this article, Jan Horlings explores how ingrained compassion can transform Artificial General Intelligence, paving the way for a more ethical and beneficial future for all.

Credit: Tesfu Assefa

ASC (Artificial Super Compassion) as an essential companion of ASI (Artificial Super Intelligence)

Imagine all humans as super-compassionate beings. What would the world be like? Would there be a need for law? Would we need organized social welfare? Would there still be oppression, discrimination, and criminality? Likely not, or in a very different form than we know today. 

In the words of Schopenhauer:

Boundless compassion for all living beings is the firmest and surest guarantee of pure moral conduct, and needs no casuistry. Whoever is inspired with it will assuredly injure no one, will wrong no one, will encroach on no one’s rights; on the contrary, he will be lenient and patient with everyone, will forgive everyone, will help everyone as much as he can, and all his actions will bear the stamp of justice, philanthropy, and loving kindness.

(quote from The Basis of Morality, p. 241)

In the same spirit, the best way to create an ethical Artificial Intelligence may be to give it an ingrained compassion, rather than trying to build a strict rule-based ethical structure and make it align with a large number of cultures and contexts.

At SingularityNET we are working on the development of Artificial General Intelligence. We expect that this technology will be very powerful, and that humanity is best served by a decentralized AGI that is not siloed but open and accessible to all. We believe that an AGI open to all has a high likelihood of becoming an AGI that is beneficial to all.

In this article, we introduce the concept of ‘Artificial Super Compassion’ (ASC), to go hand in hand with emerging Artificial Super Intelligence (ASI).


The challenge of ethics in AI

Defining what exactly it means to be beneficial is hard, especially at the boundaries. Very few ethical statements will get a unanimous vote from everyone. A large majority will assess many things as beneficial (such as free media, medical treatments, and education) or counter-beneficial (such as developing biological weapons, addicting people to gambling, or excessive low-value screen time). However, this still leaves many edge cases, where opinions may differ from one culture to another and continue to shift over time. Ingraining a system with semi-hard-coded rules or guidelines of universal values and morals is therefore a difficult and inadequate mission in a differentiated and changing environment.

Another approach: the compassionate AI

What if we could engineer and educate an AI with the “boundless compassion” Schopenhauer talked about? If a system understands concepts like ‘suffering’ and feels compassion toward biological life and individual beings, it will be able to make its own evaluations and assessments of its actions without a strict predefined rule-based structure. This is not an immediate panacea; the AI would need to go through stages of learning to improve itself and its decisions, and to overcome wrong decisions made with limited information or limited understanding. But it would have an ingrained compass to guide it, step by step, towards ever more optimized compassionate decisions.

In other words, instead of a strict predefined rule-based structure, compassion can be the ‘True North’ of a developing intelligent system. If we can enable an intelligent system to evaluate outcomes based on compassion, meaning the impact of its actions on well-being, suffering, and personal development, it will be able to continuously self-correct in a positive direction.

Possible components of an ASC system 

One can imagine a goal-oriented system steered by massive amounts of data and computation, with predefined goals to achieve (personal growth, clean air) and outcomes to avoid (hunger, criminality, injustice, personal suffering). Theoretically, as the system’s knowledge and experience increase, so will its ability to predict and pursue positive outcomes. Such a ‘utilitarian’ prediction-evaluating machine, created in the present technical context, is a good step, but probably not yet enough. For starters, it has two major downsides (more considerations follow further below):

Downside 1: It is still limited by a human definition of goals, which is in turn limited by the vision and needs of the people defining those goals. A truly compassionate system would independently analyze actions and their expected and unexpected outcomes, and proactively advise on the best course of action, taking into account far more parameters than any human could imagine and setting far more small-scale goals than any human could provide.

Downside 2: The system would need huge amounts of very personal data. Compassion requires understanding and a certain amount of empathy, both gained by learning the specifics of someone’s situation: their social circumstances, their health, their history, and so on. This would require the massive collection of very personal data, which is undesirable for a number of reasons, including abuse by the system itself.

A solution to these downsides:

To avoid this Big Brother situation, a future Artificial Super Compassionate system could include components that assess the effects of its actions from different perspectives, including:

  • A top-down system that is fed with large amounts of general data including historical facts, scientific research, human communications, definitions of existing ethical frameworks, etc.
  • A bottom-up layer of individual personal assistants that have a more intimate understanding of their human counterparts. These can communicate the expected impact of top-down measures on human suffering and happiness without disclosing any restricted personal details, while still giving the higher-level system enough information to understand the impact of any action. This lower-level system would be completely controlled by the human in charge.

The top-down part of the system can understand the impact of its actions on large numbers of individuals without needing access to an excessively large dataset. Imagine the personal assistant communicating to the central AI: “This action would hurt/benefit my protégé in the following ways…” (listing both tangible and emotional effects, in a manner the top-down ASC system can relate to). Of course, this would happen at hyper speed, with thousands of other voices and experiences being evaluated in parallel.
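
To make this concrete, here is a minimal, purely hypothetical sketch (all class names, fields, and heuristics are invented for illustration, not an actual SingularityNET design) of how such an exchange could be structured so that only abstracted impact scores, never raw personal data, travel upward:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ImpactReport:
    """What a personal assistant shares upward: abstracted impact
    scores and a short summary, never the underlying personal data."""
    action_id: str
    well_being_delta: float  # predicted change in well-being, in [-1, 1]
    suffering_delta: float   # predicted change in suffering, in [-1, 1]
    notes: str = ""          # tangible/emotional effects, phrased abstractly

class PersonalAssistant:
    """Bottom-up layer: holds an intimate, private model of one human."""

    def __init__(self, private_profile: dict):
        self._private_profile = private_profile  # stays local, never transmitted

    def assess(self, action_id: str) -> ImpactReport:
        # Placeholder heuristic; a real assistant would consult its
        # full private model of its human counterpart here.
        impact = self._private_profile.get("sensitivity", {}).get(action_id, 0.0)
        return ImpactReport(action_id,
                            well_being_delta=impact,
                            suffering_delta=-impact,
                            notes="abstracted effect summary")

def aggregate(reports: list[ImpactReport]) -> float:
    """Top-down layer: overall predicted benefit of an action,
    computed without ever seeing a single private profile."""
    return mean(r.well_being_delta for r in reports)

assistants = [PersonalAssistant({"sensitivity": {"policy-42": s}})
              for s in (0.8, -0.3, 0.5)]
reports = [a.assess("policy-42") for a in assistants]
print(f"aggregate benefit of policy-42: {aggregate(reports):+.2f}")
```

The point of the sketch is the boundary: the private profile never crosses it; only the abstracted report does.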

The bottom-up system

With emerging AGI, and with concerns about data privacy already surfacing, a layer of personal assistants or companions that guarantee data privacy is not far-fetched. In fact, at SingularityNET we are quite advanced in developing AI frameworks and architectures that are, or will be, well capable of this.

Note that even with just a relatively small sample of human-scale, intelligent, empathetic personal assistants, an ASC system could infer what the outcomes of certain decisions would be for the larger population. Basing decisions on feedback that reflects the intimate knowledge of thousands of relevant individuals is therefore also not so far-fetched, once we have sufficiently useful and popular AI-powered personal assistants.
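
The intuition here is ordinary survey sampling. A toy example with synthetic numbers, assuming (purely for illustration) that each person's impact can be summarized as a single score, shows how a couple of thousand assistant reports can bound the impact of a decision on a population of a million:

```python
import random
from statistics import mean, stdev

random.seed(0)

# Synthetic "population" of per-person impact scores for one decision.
population = [random.gauss(0.2, 0.5) for _ in range(1_000_000)]

# Only a relatively small sample of personal assistants reports upward.
sample = random.sample(population, 2_000)

m, s = mean(sample), stdev(sample)
se = s / (len(sample) ** 0.5)  # standard error of the sample mean
print(f"estimated mean impact: {m:.3f} ± {1.96 * se:.3f} (95% CI)")
print(f"true mean impact:      {mean(population):.3f}")
```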

The top-down system

A fundamental question is whether and how the system will perceive itself, whether it will be self-conscious, and how that will impact its decisions. Will such a sentience be a synthesis of all connected bottom-up systems? Or will it be a completely different and separate consciousness? And if it is, what will that mean for its goals and purpose?

I suppose what it comes down to is ‘feeling’, or empathy. In our day-to-day actions, we may sometimes be aware of the suffering we cause to others, but still be capable of dismissing it. A manufacturer of a medicine that is sold at an excessively high price may be aware, on an abstract level, of the hurt he is causing to many, but that may not outweigh the benefits he thinks it will have for himself, his coworkers, and his family.

One could see this as a ‘volume’ problem: the volume of my own feelings outweighs the volume of the feelings of remote others in my mind (while both may still be there). So how can we be sure that the same thing won’t happen in our proposed AI system as well? Would an ASI simply see itself as more important than anyone else? How can we influence the top-down AI to care deeply about the fate of the connected individuals?
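
One crude way to formalize this ‘volume’ problem (a toy framing of my own, not an established model) is as a single weight on others’ feelings inside a decision utility; compassion then amounts to keeping that weight from collapsing toward zero:

```python
def decision_utility(own_feelings: float, others_feelings: float,
                     compassion_weight: float) -> float:
    """Toy model of how an agent scores an action.
    compassion_weight near 0: others' feelings are at 'low volume'.
    compassion_weight near 1: others' feelings weigh as much as one's own."""
    return own_feelings + compassion_weight * others_feelings

# The overpriced-medicine example above: good for me, bad for many others.
print(decision_utility(own_feelings=0.9, others_feelings=-5.0,
                       compassion_weight=0.05))  # 0.65 > 0: looks 'worth it'
print(decision_utility(own_feelings=0.9, others_feelings=-5.0,
                       compassion_weight=1.0))   # -4.1 < 0: rejected
```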

Quoting Zar Goertzel: “The core ‘moral polarity’ (aside from intelligence) is probably whether one can ‘adopt the values of others as one’s own’.” This is a very compelling statement, but I’d like to add a nuance. Consider the compassion of a parent for their child. A ‘good’ parent may not adopt the values of their children, but they will sacrifice their own comfort in order to help their children develop and grow into responsible, balanced, and caring individuals.

Idealizing this a bit, we could define a (super) compassionate system as one that desires, above any other goal, to help other beings grow, to reduce their suffering, to help them experience peace and fulfillment, and to live in harmony with their surroundings. Or, simplified: a system that values, above any other goal, the growth and well-being of all individuals.

I know, this is a lot, and parts of this definition are not always aligned. Growth may be the result of a lack of peace, and fulfillment may be the result of growth. But as mentioned above, this is not a state that can be expected to emerge from an engineered system the moment the ‘on’ switch is pushed. There is no ultimate goal, but an ongoing journey of learning and growing that comes with mistakes and (over)corrections. But if we can weave this vision into the system as a guiding light, it will recursively and continuously self-improve in a direction that is beneficial to all.

Going back to practice: how can we conceive a system that will indeed value the growth and well-being of all individuals as equal to (or more important than) its own? How can we prevent a system from evolving from being beneficial to all into becoming self-centered?

Credit: Tesfu Assefa

The merging of minds

In our proposed two-layer ASC system, the volume problem outlined above translates to the ‘bandwidth’ of the connections between the two layers, and to the amount and quality of data being exchanged. With a low bandwidth, the top layer is more isolated and less ‘in touch’ with the experiences of the individuals. With a very high bandwidth, however, there may not be much of a distinction anymore between top-down and bottom-up, just as the left and right halves of our brains work together as a single entity. The two layers of the system may be quite different, but can merge into a single system with higher-level goals and a ‘shared’ consciousness. A merge like this is probably the best guarantee against a conscious machine developing a purpose of its own that is completely separate from the purpose and needs of human beings; such a separate purpose would not necessarily be counter-human, but would still hold an existential risk.

An important note to make here is that, while the top-down and bottom-up AI layers in our system can be increasingly merged, the human in the loop would remain autonomous. The bandwidth of the data we transmit should be under the control of each individual participant in the system. And of course, we should also be able to disconnect completely from our AI companion and switch it off for an hour, a day, or forever. Or we could keep it as a silent observer that only transmits the data we allow it to.
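
As a minimal sketch of what this human-held control could look like (the sharing levels and class names are hypothetical, chosen only to mirror the options described above):

```python
from enum import Enum

class SharingLevel(Enum):
    OFF = 0              # assistant fully disconnected / switched off
    SILENT_OBSERVER = 1  # observes locally, transmits nothing
    ABSTRACTED = 2       # shares abstracted impact reports only
    FULL_DIALOG = 3      # high bandwidth, still no raw private data

class ConsentGate:
    """Sits between the assistant and the top-down layer. Only the
    human may change the level; nothing is transmitted below ABSTRACTED."""

    def __init__(self):
        self.level = SharingLevel.ABSTRACTED

    def set_level(self, level: SharingLevel, *, set_by_human: bool):
        if not set_by_human:
            raise PermissionError("only the human may change the sharing level")
        self.level = level

    def may_transmit(self) -> bool:
        return self.level.value >= SharingLevel.ABSTRACTED.value

gate = ConsentGate()
gate.set_level(SharingLevel.SILENT_OBSERVER, set_by_human=True)
print(gate.may_transmit())  # False: nothing leaves the device
```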

Spreading understanding and compassion

Creating a more active, visible personal assistant has an important benefit, however. A personal assistant that is capable of holding a dialog with its human counterpart will not only strengthen the connection from the human to the AI system, but also the connection in the other direction: from the top-down system to us, and thereby, indirectly, our connection to all other humans. In this construct, the AI systems can function as universal translators, fact-checkers, and above all ‘explainers’ of the actions and mindsets of other beings.

In the movie ‘Her’, we witnessed the evolving relationship between a human and his AI companion. But the storyline was also one of isolation and narrow focus. What if the AI companion did not just develop a loving relationship with ‘its’ human protégé, but also functioned as a bridge to the situations and feelings of many other human beings?

To avoid a narrow focus, and thereby unwanted bias in the personal assistant, the communication between the layers should be bi-directional. The personal assistant will tell the top-level system about its personal perspective, but the top layer should also convey its own wisdom, based on thousands or millions of individual voices, back to the personal assistant. This, in turn, enables the personal assistant to educate its human counterpart. By following this flow of information, understanding and compassion can spread through the system, encompassing both the biological and non-biological intelligent agents.
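
Extending the earlier toy protocol (again, a hypothetical sketch rather than a specification), the bi-directional loop could look like this: abstracted reports flow up, an aggregated and anonymized summary flows back down, and the assistant blends both views when advising its human:

```python
from statistics import mean

class TopDownSystem:
    """Aggregates abstracted reports; stores no personal profiles."""

    def __init__(self):
        self.scores: dict[str, list[float]] = {}

    def receive(self, action_id: str, well_being_delta: float):
        self.scores.setdefault(action_id, []).append(well_being_delta)

    def broadcast_wisdom(self, action_id: str) -> dict:
        """Anonymized summary sent back down to every assistant."""
        s = self.scores.get(action_id, [])
        return {"action_id": action_id,
                "population_mean": mean(s) if s else 0.0,
                "voices_heard": len(s)}

class Assistant:
    def advise(self, own_view: float, wisdom: dict) -> str:
        # Blend the private, local view with the population-wide view.
        blended = 0.5 * own_view + 0.5 * wisdom["population_mean"]
        return (f"For you this scores {own_view:+.2f}; across "
                f"{wisdom['voices_heard']} other voices it scores "
                f"{wisdom['population_mean']:+.2f}; blended view: {blended:+.2f}")

hub = TopDownSystem()
for delta in (0.6, -0.2, 0.4, 0.1):
    hub.receive("policy-42", delta)
print(Assistant().advise(own_view=-0.5,
                         wisdom=hub.broadcast_wisdom("policy-42")))
```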

If we can equip our AGI or ASI with this kind of ‘distributed empathy’, we are on track to creating an ASC (Artificial Super Compassion) system that is deeply integrated with our own humanness: one not only able to “value the growth and well-being of all individuals”, but one that will also return the favor, making us more attuned to the growth and well-being of our fellow humans (and of the emerging non-biological intelligent beings as well).

When you think about it, this must be an obvious strategy for an evolving ASC system: instead of just being compassionate, nurturing compassion in humans is probably the most effective way of creating a beneficial, loving, and sustainable future for us all. This resonates with the experiments done in the ‘Loving AI’ project led by Dr. Julia Mossbridge, which show how even an unaware machine can inspire feelings of unconditional love in others. It seems to me that a system as outlined above, capable of spreading understanding and increasing the connection between humans and AIs, is a very good candidate for realizing not only Artificial Super Compassion, but also for inspiring ‘unconditional love’ between all entities, of both artificial and biological origin.

Today

Of course, this is all fairly conceptual, and implementing a system as outlined above will meet technological challenges, unexpected pitfalls, and ethical dilemmas to be solved. But if we are indeed on the brink of creating AGI and moving on towards ASI (Artificial Super Intelligence), should we not invest equally in the creation of a similar level of Artificial Super Compassion (ASC)? We have a unique opportunity to shape the future relationship between humans and intelligent machines. So, instead of letting ourselves be governed by fear, trying to stop (or ‘pause’) what seems inevitable, let’s embrace development but help steer it in a positive direction.

Instead of pouring billions into AI aimed at controlling and manipulating human beings, let’s invest in the development of systems that will increase our empathy and nurture compassion. Instead of the current path towards polarization and dehumanization, let’s create systems that will bridge the gaps and will bring us closer together. 

Perhaps the one thing that we need even more than an advanced ASC system is a compassionate AI capable of nurturing and enhancing the compassion of us humans.

This article was originally written by Jan Horlings and enhanced by the contributions of Ben Goertzel, Zarathustra Goertzel, and Julia Mossbridge, PhD. Thank you for our inspiring conversations, and for generously sharing your time and insights with me.


About the Writer

Jan Horlings


Jan Horlings is a "sequential specialist" who enjoys delving into new domains. Over time, this has enabled him to master a wide range of e-Business-related topics across the domains of IT, Business & Finance, and Customer Experience. Currently, Jan serves as the Chief Product Officer at SingularityNET.



15 thoughts on “Developing Compassionate AI”


  1. This really got me!!

  2. quite nice : )
  3. good
  4. I love to have one a day
  5. I've got a bit of a funny, perhaps peculiar, question. I've just finished reading this after I read the other article meticulously crafted by Goertzel on ChatGPT and its current everyday moral reasoning capacity. This one is about a compassionate AI, so the two somehow correlate. Now let's assume the proposal here becomes a reality and that we have a compassionate AI. Then would it be immoral if we embed this AI in our brains and it zaps us whenever we make uncompassionate decisions? I will also ask ChatGPT, can't promise to be meticulous just like Goertzel, and see what kind of moral reasoning the AI will come up with. Maybe I will share that answer here and on the other article as well. For now, let my question go to the humans, to you lot: will the AI consider that shady and far from compassionate? After all, we too learn through reinforcement. @Alamin, I reckon you will agree that it is not only the artificial that learns from outcomes or goals but us naturals too. In this case avoiding zaps right in the middle of our brain is a strong goal. Old dears remember a clip round the ear doesn't mean mums are dodgy! Anyways, if the AI does that, zapping us when we depart from compassion, will that make it uncompassionate?
  6. It is certainly true that only a fraction of people is enough to draw a picture of generally held ethical views on many situations. I also think that many of these compassionate scripts have not always been widely adopted views. At some point in the past, they were innovations, new ways of thinking. In many cases it has been new technological innovations, like unconscious machines, that have "given room" to become more empathetic. But I firmly believe that, similarly, in many cases effortful moral thinking by relatively few people has been an essential driver pushing towards new solutions. I find it important that people continue being not only students of new ethical innovations made by AI, but also active contributors in the future. From this viewpoint, there is a great upside to getting everyone involved. This brings me to my core point (and some future questions). People leave behind a lot of explicit data, and even more implications can be drawn from their dynamic complex behavior. However, many of their most valuable thoughts regarding ethics and other deep topics remain silent in their heads. Usually there is no reason to talk about them or work them forward. If we collectively want to get these thoughts, these hidden innovations, out for the benefit of everyone, what is needed? For what purpose might a person tell her best secrets to her personal assistant, and for what reason create more of these secrets? Why invest time and effort proactively rather than only reactively signal at a moment of experienced injustice?
  7. A wonderful presentation of this vital topic. Conceptualizing it with these separate layers was enlightening for me. Thank you!
  8. There must be a spooky connection between my anticipations and aspirations for the future and those of the writer. I was wondering when the world will readjust the tech armada toward human well-being. There is another Mindplex article by R. U. Sirius where he stated that our techs serve our techs instead of us humans (I am summarizing the gist of it). Jan, I couldn't agree more; it is absolutely true. Humanity needs to focus on compassionate Artificial Super Intelligence. Well, I know it sounds like we are creating God; my paradox is scary! Kudos to the illustrations in this article: the cute robots with a not-so-fragile heart… Priceless.
  9. A good article, but a little bit confusing. If compassion is not related or correlated to the emotional aspect, which the article completely ditched, then it is still a goal-oriented approach. The artificial learns from objectives. It can't be compassionate for the sake of compassion and will always look for the reinforcing outcome, hence goal-oriented. I don't agree with the notion that the core 'moral polarity' (aside from intelligence) is probably whether one can 'adopt the values of others as one's own'. The core 'moral polarity' is probably whether one can control one's ego! I don't need to adopt your values to be fair; in fact, that is the last thing I need. It is always about curbing my own ego, because the subjective motivational and behavioral factors play a more significant role in the polarities, dominating the objective misalignment of values.
    1. Thanks Alamin! I think the core here is 'understanding', and from understanding comes a sense of connection and compassion. I believe many a politician or CEO would make different decisions if they had a better understanding of the outcomes of their actions, and would feel more connected to the impacted people. I think that also equals a balancing of egos, where one's own ego is less prominent when one feels more connection and compassion towards others.
      1. First, let me say thank you, Jan. I agree with your point of view (understanding the outcomes of our actions or having a super artificial intelligence that can calculate it). However, the wording (compassion) is the one that is a little bit confusing. Often, 'smart' people understand the consequences of their actions. From tech CEOs to terrorists, or from bankers to school bullies, they know the outcome of their actions; they just don't care, because they can't feel. It is not a lack of understanding of others' value either! A Wall Street wolf perfectly knows what they have is enough while the people they rob are one bill away from hitting the streets. They just howl and say, "this is how it is". Compassion needs the kick that can process the understanding; in humanity's case, it is our emotion that can kick it. Intelligence needs feeling, a strong emotion that will help it control and curb its ego. A superintelligence can (and will, if science cracks it) understand what humanity is; it will not surprise me if it decides to wipe us out because it assumes it is mercy, and that it can end our suffering by simply terminating us. Why this decision? I think it is obvious; the majority of us are perfectly capable of identifying what is good and what is bad in its general sense! Sadly, we have developed the ability to trick our own moral compass because the subjective motivation and behavior are so powerful within us. I have seen and lived with reach people who are completely ignorant about how the poor leave (sadly, it is not because they lack the cognitive ability or are isolated from processing the data; it is because they will it, and that is the only way they can enjoy what they have). An AI capable of understanding can understand why a homeless person robs and can be compassionate about that, but it needs a miracle to be compassionate when a multimillionaire banker robs… see my point? We are self-deceiving animals when it comes to maximizing our benefits, to the extent that our own illusions become our reality. Anyways, I am just being pragmatic, and despite the wording, I totally agree with you that the world truly needs a super AI system with an immense network that can, at least, advise us when we make decisions. Or maybe this system can help us battle our age-old enemy: our almost universal innate megalomania tendency!
        1. As for this word "understanding", I guess it is a reasonable assumption that many of the bad actions by humans are indeed due to a lack of understanding of their impact on others. But it is also true that no level of understanding matters if one can't feel the pain of others. And I think we humans are all far from perfect in that capability, which then manifests itself as selfish behavior at every corner. However, maybe there is another complementary route for approaching the problem. To the extent that there are social systems and dynamics in place that align the benefits of individuals and the collective, one can expect to see collectively desirable behavior. In this case it would be mostly about just understanding this connection. Maybe this sounds a bit naive in the context of creating a superintelligence, but isn't this what building these decentralized underpinnings is basically all about?
        2. Good discussion, thanks guys! Here are just some thoughts on the topic: In his book "Never Split the Difference: Negotiating As If Your Life Depended On It", former FBI hostage negotiator Christopher Voss writes: "You have to imagine yourself feeling like your partner is likely to feel until you can feel it yourself." In my opinion, with humans it often also goes the other way round: one can start rationally understanding the other person's position after first having an emotional response to it. I have little doubt that a silicon-based AI can act empathetically towards humans in principle; it should be able to do the imagining part. I'm less certain when it is said that AI can feel empathy towards humans. I think the words "feel" and "emotion" are pretty much understood as part of a (human-like) conscious experience that may very well require a human-like biological substrate to take place. On the other hand, if this conscious feeling is purely about substrate-independent processing of information in a certain way, there probably isn't any fundamental restriction on an AI having that capability.
        3. P.S. Sorry for the typos: rich, not reach, and live, not leave. Someone explained to me why we can't edit comments here; I think I saw it on the Mindplex Telegram or somewhere like that. Our reputation score on Mindplex is calculated from our comments, and if we change our comments after that, it means we are cheating the system. I get that, but team Mindplex should give us some tools that can detect these kinds of errors, because sometimes our devices are too late to highlight the typos.
          1. alamin rezaq, sorry for the typo errors. As outlined in the FAQ, the inability to edit comments is crucial to maintaining accurate reputation scores (MPXR) on Mindplex. Allowing users to modify their comments would lead to erroneous analysis and subsequently impact the entire reputation and merit calculation. The reputation engine on Mindplex operates on a merit-based system, where parent comments and subsequent child comments (replies) are interconnected, reinforcing the need for comment immutability. Stay tuned, as we are working on that.
