Developing Compassionate AI
Jun. 14, 2023. 13 mins. read.
In this article, Jan Horlings explores how ingrained compassion can transform Artificial General Intelligence, paving the way for a more ethical and beneficial future for all.
ASC (Artificial Super Compassion) as an essential companion of ASI (Artificial Super Intelligence)
Imagine all humans as super-compassionate beings. What would the world be like? Would there be a need for law? Would we need organized social welfare? Would there still be oppression, discrimination, and criminality? Likely not, or in a very different form than we know today.
In the words of Schopenhauer:
Boundless compassion for all living beings is the firmest and surest guarantee of pure moral conduct, and needs no casuistry. Whoever is inspired with it will assuredly injure no one, will wrong no one, will encroach on no one’s rights; on the contrary, he will be lenient and patient with everyone, will forgive everyone, will help everyone as much as he can, and all his actions will bear the stamp of justice, philanthropy, and loving kindness.
(quote from The Basis of Morality, p. 241)
In the same spirit, the best way to create an ethical Artificial Intelligence may be to give it ingrained compassion, rather than trying to build a strict rule-based ethical structure and align it with a large number of cultures and contexts.
At SingularityNET we are working on the development of Artificial General Intelligence. We expect that this technology will be very powerful and that humanity is best served by a decentralized AGI that is not siloed but open and accessible to all. We believe that an AGI open to all has a high likelihood of becoming an AGI that is beneficial to all.
In this article we are introducing the concept of ‘Artificial Super Compassion’ (ASC), to go hand in hand with emerging Artificial Super Intelligence (ASI).
The challenge of ethics in AI
Defining what exactly it means to be beneficial is hard, especially at the boundaries. Only very few ethical statements will get a unanimous vote from everyone. A large majority will assess many things as beneficial (such as free media, medical treatments, and education) or counter-beneficial (such as developing biological weapons, addicting people to gambling, or excessive low-value screen time). That still leaves many edge cases, where opinions differ from one culture to another and continue to shift over time. Ingraining a system with semi-hard-coded rules or guidelines of universal values and morals is therefore a difficult and inadequate mission in a differentiated and changing environment.
Another approach: the compassionate AI
What if we could engineer and educate an AI with the “boundless compassion” Schopenhauer talked about? If a system understands concepts like ‘suffering’ and feels compassion for biological life and individual beings, it will be able to make its own evaluations and assessments of its actions without a strict predefined rule-based structure. This is not an immediate panacea; the AI would need to go through stages of learning to improve itself and its decisions, and to overcome wrong decisions that result from limited information or limited understanding. But it would have an ingrained compass to guide it, step by step, towards ever more optimized, compassionate decisions.
In other words, instead of a strict predefined rule-based structure, compassion can be the ‘True North’ of a developing intelligent system. If we can enable an intelligent system to evaluate outcomes based on compassion, meaning the impact of its actions on well-being, suffering, and personal development, it will be able to continuously self-correct in a positive direction.
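To make this ‘True North’ a bit more tangible, here is a minimal sketch in Python. It is purely a thought experiment, not a description of any existing SingularityNET system; the Outcome fields, the weights, and the compassion_score function are all hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted effect of an action on the beings it touches (all fields hypothetical)."""
    well_being: float  # expected change in well-being, in [-1, 1]
    suffering: float   # expected suffering caused, in [0, 1]
    growth: float      # expected personal development enabled, in [-1, 1]

def compassion_score(outcome: Outcome) -> float:
    """Toy 'True North': reward well-being and growth, penalize suffering.
    The weights are arbitrary placeholders, not a real ethical calculus."""
    return outcome.well_being + outcome.growth - 2.0 * outcome.suffering

def choose_action(candidates: dict) -> str:
    """Pick the action whose predicted outcome scores highest on compassion."""
    return max(candidates, key=lambda action: compassion_score(candidates[action]))

# As the system learns, it re-predicts outcomes and re-chooses, so decisions
# made on limited information or understanding are gradually corrected.
candidates = {
    "subsidize_treatment": Outcome(well_being=0.8, suffering=0.1, growth=0.3),
    "maximize_profit":     Outcome(well_being=0.2, suffering=0.6, growth=0.0),
}
print(choose_action(candidates))  # -> subsidize_treatment
```

The point is not these particular numbers, but the structure: the compass is an evaluation over predicted impacts, and the system can keep refining both its predictions and its choices against it.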
Possible components of an ASC system
One can imagine a goal-oriented system that is steered by massive amounts of data and computation, and by predefined definitions of goals to achieve (personal growth, clean air) and outcomes to avoid (hunger, criminality, injustice, personal suffering). Theoretically, as the system’s knowledge and experience increase, so will its ability to predict and pursue positive outcomes. Such a ‘utilitarian’ prediction-evaluating machine created in the present technical context is a good step, but is probably not yet enough. For starters, it has two major downsides (more considerations follow further below):
Downside 1: It is still limited by a human definition of goals to achieve, which is limited by the vision and needs of the people defining those goals. A truly compassionate system would independently analyze actions and their expected and unexpected outcomes, and proactively advise on the best course of action, taking into account far more parameters than any human would be capable of imagining and setting far more small-scale goals than any human would be able to provide.
Downside 2: The system would need huge amounts of very personal data. Compassion requires understanding and a certain amount of empathy. Both are gained by learning more about the specifics of someone’s situation: their social circumstances, their health, their history, and so on. This would require the massive collection of very personal data, which is undesirable for a number of reasons, including abuse by the system itself!
A solution to these downsides:
To avoid this Big Brother situation, a future Artificial Super Compassionate system could include components that assess the effects of its actions from different perspectives, including:
- A top-down system that is fed with large amounts of general data including historical facts, scientific research, human communications, definitions of existing ethical frameworks, etc.
- A bottom-up layer of individual personal assistants that have a more intimate understanding of their human counterparts. These can communicate the expected impact of top-down measures on human suffering and happiness without disclosing any restricted personal details, while still giving the higher-level system enough information to understand the impact of any actions. This lower-level system would be completely controlled by the human in charge.
The top-down part of the system can understand the impact of its actions on large numbers of individuals without needing access to an excessively large dataset. Imagine the personal assistant communicating to the central AI: “This action would hurt/benefit my protégé in the following ways…” (listing both tangible and emotional effects, in a manner the top-down ASC system can relate to). Of course, this would happen at hyper speed, with thousands of other voices and experiences being evaluated in parallel.
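As a thought experiment, such an exchange could look something like the sketch below. Everything in it is hypothetical (the ImpactReport fields, the report_impact helper, the toy profile); what matters is the shape of the protocol: summarized effects travel upward, while the raw personal data stays with the human:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpactReport:
    """What a bottom-up assistant shares upward: summarized effects only,
    never the underlying personal data. All field names are hypothetical."""
    action_id: str
    tangible_effect: float   # practical/material impact, in [-1, 1]
    emotional_effect: float  # emotional impact, in [-1, 1]
    confidence: float        # how well the assistant knows its protégé, in [0, 1]

def report_impact(action_id: str, private_profile: dict) -> ImpactReport:
    """Runs locally, under the human's full control. The private profile
    never leaves the device; only the summary report does."""
    tangible = -0.7 if private_profile.get("chronic_illness") else 0.1
    emotional = -0.4 if private_profile.get("recently_bereaved") else 0.2
    return ImpactReport(action_id, tangible, emotional, confidence=0.8)

# The central AI receives this summary, not the profile it was derived from:
print(report_impact("raise_medicine_price", {"chronic_illness": True}))
```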
The bottom-up system
With AGI emerging and concerns about data privacy already surfacing, having this layer of personal assistants or companions that guarantee data privacy is not far-fetched. In fact, at SingularityNET we are quite advanced in developing AI frameworks and architectures that are, or will be, well capable of this.
Note that even with just a relatively small sample of human-scale intelligent, empathetic personal assistants, an ASC system could infer what the outcomes of certain decisions would be for the larger population. Basing decisions on feedback that outlines the impact of decisions or actions on the intimate knowledge of thousands of relevant individuals is therefore also not so far-fetched, once we have sufficiently useful and popular AI-powered Personal Assistants.
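Continuing the hypothetical sketch above, the simplest conceivable form of that inference is a confidence-weighted aggregate over a sample of reports. A real system would need proper survey-style statistical methods, so treat this purely as an illustration:

```python
def estimate_population_impact(reports: list) -> float:
    """Confidence-weighted average over sampled ImpactReports (from the
    sketch above), as a rough proxy for the impact on the wider population."""
    total_weight = sum(r.confidence for r in reports)
    if total_weight == 0:
        return 0.0  # no informative reports, no estimate
    return sum(
        (r.tangible_effect + r.emotional_effect) * r.confidence for r in reports
    ) / total_weight
```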
The top-down system
A fundamental question is whether and how the system will perceive itself, whether it will be self-conscious, and how that will impact its decisions. Will such a sentience be a synthesis of all connected bottom-up systems? Or will it be a completely different and separate consciousness? And if so, what will that mean for its goals and purpose?
I suppose what it comes down to is ‘feeling’, or empathy. In our day-to-day actions, we may sometimes be aware of the suffering we are causing to others but still be capable of dismissing it. A manufacturer of a medicine that is sold at an excessively high price may be aware, on an abstract level, of the hurt he is causing to many, but that may not outweigh the benefits he thinks this will have for himself, his coworkers, and his family.
One could see this as a ‘volume’ problem: the volume of my own feelings outweighs the volume of the feelings of remote others in my mind (while both may still be there). So how can we be sure that the same thing won’t happen in our proposed AI system as well? Would an ASI simply see itself as more important than anyone else? How can we influence the top-down AI to care deeply about the fate of the connected individuals?
Quoting Zar Goertzel: “The core ‘moral polarity’ (aside from intelligence) is probably whether one can ‘adopt the values of others as one’s own’.” This is a very compelling statement, but I’d like to add a nuance. Consider the compassion of a parent for their child. A ‘good’ parent may not adopt the values of their children, but they will sacrifice their own comfort in order to help their children develop and grow into responsible, balanced, and caring individuals.
Idealizing this a bit, we could define a (super) compassionate system as one that desires, above any other goal, to help other beings grow, to reduce their suffering, and to help them experience peace and fulfillment and live in harmony with their surroundings. Or, simplified: a system that values, above any other goal, the growth and well-being of all individuals.
I know, this is a lot, and parts of this definition are not always aligned: growth may be the result of a lack of peace, and fulfillment may be the result of growth. But as mentioned above, this is not a state that can be expected to arise, fully formed, from the moment an engineered system’s ‘on’ switch is pushed. There is no ultimate goal, but an ongoing journey of learning and growing that comes with mistakes and (over)corrections. If we can weave this vision into the system as a guiding light, it will recursively and continuously self-improve in a direction that is beneficial to all.
Going back to practice: how can we conceive a system that will indeed value the growth and well-being of all individuals as equal to (or more important than) its own? How can we prevent a system from evolving from being beneficial to all into being self-centered?
The merging of minds
In our proposed two-layer ASC system, the volume problem outlined above can be translated to the ‘bandwidth’ of the connections between the two layers and the amount and quality of data being interchanged. With a low bandwidth, the top layer is more isolated and less ‘in touch’ with the experiences of the individuals. With a very high bandwidth, however, there may not be much of a distinction anymore between top-down and bottom-up, just like the left and right halves of our brains work together as a single entity. The two layers of the system may be quite different but can merge into a single system with higher-level goals and a ‘shared’ consciousness. A ‘merge’ like this is probably the best guarantee against a conscious machine developing a purpose of its own that is completely separate from the purpose and needs of human beings; such a purpose would not necessarily be counter-human, but would still hold an existential risk.
An important note here is that, while the top-down and bottom-up AI layers in our system can be increasingly merged, the human being in the loop would remain autonomous. The bandwidth of the data we transmit should be in the control of each individual participant in the system. And of course, we should also be able to disconnect completely from our AI companion and switch it off for an hour, a day, or forever. Or we could keep it as a silent observer that only transmits the data that we allow it to.
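In code terms, one could imagine this control as a simple, human-owned policy. Again a hypothetical sketch, with SharingLevel and transmit as placeholder names:

```python
from enum import Enum

class SharingLevel(Enum):
    """Set by the human, never by the system, and changeable at any time."""
    OFF = 0              # companion fully switched off
    SILENT_OBSERVER = 1  # observes locally, transmits nothing upward
    SUMMARIES_ONLY = 2   # shares impact summaries, no personal details
    HIGH_BANDWIDTH = 3   # richer exchange, still within human-set limits

def transmit(report, level: SharingLevel):
    """Everything leaving the device is gated on the human's chosen level."""
    if level in (SharingLevel.OFF, SharingLevel.SILENT_OBSERVER):
        return None  # nothing is sent
    return report    # only what the chosen level permits
```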
Spreading understanding and compassion
Creating a more active, visible personal assistant, however, has an important benefit. A personal assistant that is capable of a dialog with its human counterpart will not only strengthen the connection from the human being to the AI system, but also the connection in the other direction: from the top-down system to us, and thereby, indirectly, our connection to all other humans. In this construct, the AI systems can function as universal translators, fact-checkers, and above all ‘explainers’ of the actions and mindsets of other beings.
In the movie ‘Her’ we witnessed the evolving relationship between a human and his AI companion. But the storyline was also one of isolation and narrow focus. What if the AI companion did not just develop a loving relationship with ‘its’ human protégé, but also functioned as a bridge to the situations and feelings of many other human beings?
To avoid a narrow focus, and thereby unwanted bias in the personal assistant, the communication between the layers should be bi-directional. The Personal Assistant will tell the top-level system about its personal perspective, but the top layer should also convey its own wisdom, based on thousands or millions of individual voices, back to the Personal Assistant. This, in turn, will enable the Personal Assistant to educate its human counterpart. By following this flow of information, understanding and compassion can spread through the system, including both the biological and non-biological intelligent agents.
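A single round of this bi-directional flow, continuing the earlier hypothetical sketches, might look like this:

```python
def exchange(assistant_report, population_reports):
    """One bi-directional round: the assistant's perspective goes up, and a
    condensed 'many voices' view comes back down for it to share with its human."""
    consensus = estimate_population_impact(population_reports)  # upward flow
    local = assistant_report.tangible_effect + assistant_report.emotional_effect
    # Downward flow: a large gap between this protégé's situation and the
    # wider population is a cue for dialog, explanation, and education.
    return {"population_view": consensus, "divergence": local - consensus}
```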
If we can equip our AGI or ASI with this kind of ‘distributed empathy’, we are on track to create an ASC (Artificial Super Compassion) system that is deeply integrated with our own humanness: one that is not only able to “value the growth and well-being of all individuals”, but that will also return the favor, making us more attuned to the growth and well-being of our fellow humans (and of the emerging non-biological intelligent beings as well)!
When you think about it, this must be an obvious strategy for an evolving ASC system: instead of just being compassionate, nurturing compassion in humans is probably the most effective way of creating a beneficial, loving, and sustainable future for us all. This resonates with the experiments done in the ‘Loving AI’ project led by Dr. Julia Mossbridge, which show how even an unaware machine can inspire feelings of unconditional love in others. It seems to me that a system as outlined above, capable of spreading understanding and increasing the connection between humans and AIs, is a very good candidate for realizing not only Artificial Super Compassion, but also for inspiring ‘unconditional love’ between all entities, of both artificial and biological origin.
Today
Of course, this is all fairly conceptual, and implementing a system as outlined above will meet technological challenges, unexpected pitfalls, and ethical dilemmas to be solved. But if we are indeed on the brink of creating AGI and moving on towards ASI (Artificial Super Intelligence), should we not invest equally in the creation of a similar level of Artificial Super Compassion (ASC)? We have a unique opportunity to shape future relationships between humans and intelligent machines. So, instead of letting ourselves be governed by fear, trying to stop (or ‘pause’) what seems inevitable, let’s embrace development but help steer it in a positive direction.
Instead of pouring billions into AI aimed at controlling and manipulating human beings, let’s invest in the development of systems that will increase our empathy and nurture compassion. Instead of the current path towards polarization and dehumanization, let’s create systems that will bridge the gaps and will bring us closer together.
Perhaps the one thing we need even more than an advanced ASC system is a compassionate AI that is capable of nurturing and enhancing the compassion of us humans.
This article was originally written by Jan Horlings and enhanced by the contributions of Ben Goertzel, Zarathustra Goertzel, and Julia Mossbridge, PhD. Thank you for our inspiring conversations, and for generously sharing your time and insights with me.