
Universal Human Values and Artificial General Intelligence

Feb. 23, 2024.
3 min. read.

What are the unresolved issues in the value alignment of AGI? Join the upcoming Beneficial AGI Summit and explore moral relativism, hierarchy of values, and consciousness in the 'AGI question'.

About the Writer

Jay Friedenberg


Jay Friedenberg is Professor of Psychology in the Department of Social and Behavioral Sciences at Manhattan College. He has been teaching and writing books in the fields of cognitive science and artificial intelligence for nearly thirty years.

Credit: Tesfu Assefa

The field of value alignment is becoming increasingly important as AGI developments accelerate. By alignment we mean giving a generally intelligent software system the capability to act in a way that is beneficial to humans. One approach to this is to instill AI programs with human values. 

However, research in this field tends to focus on algorithms for maximizing metrics like pleasure or happiness. Much less attention has been devoted to the actual set of values such systems should follow.

Evidence from world religions, traditional philosophy, evolutionary psychology, and survey research shows surprising agreement on basic human values. Abstracting from this work, I propose a five-tier system of values that can be applied to an AGI.

Level 1 comprises the values of life, survival, and persistence. It arises from a concern for living and reproducing. An agent acts to avoid destruction, seek out energy, and pass on characteristics to a future generation.

Level 2 contains truth, wisdom, and knowledge. It is about agents that value truth and the ability to understand and interact well with their immediate surroundings.

Level 3 is a bundle of three sets of values concerning oneself: freedom, temperance, and growth. These values affect the agent’s internal state and behavior. The ‘self’ in question might be a human being or an AI designed as an autonomous moral agent. 

Level 4 is a bundle of five values: empathy, authority, security, justice, and conformity. These are all concerned with group interaction. They apply to fellow biological beings or to multi-agent AI systems.

Level 5 contains values for dealing with nature and the surrounding ecosystem as well as the universe beyond. A sufficiently powerful general intelligence could have a terrestrial and extra-terrestrial impact and so needs to be concerned with the larger environment.

Credit: Tesfu Assefa

The values are concentric – they start with fundamental metaphysical and epistemological assumptions and radiate outwards to be more inclusive in space, in time, and in variety.
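
To make the tiered structure concrete, here is a minimal sketch (my illustration, not part of the proposal) of how the five levels might be encoded as data with a simple precedence rule. The tier contents follow the levels above; the VALUE_TIERS name, the resolve helper, and the "inner tier wins" rule are assumptions made only for this example.

```python
# Illustrative sketch only: one possible encoding of the five-tier value
# system. The tier contents follow the article; the precedence rule is
# an assumption for the example, not the author's proposal.
VALUE_TIERS = {
    1: {"life", "survival", "persistence"},
    2: {"truth", "wisdom", "knowledge"},
    3: {"freedom", "temperance", "growth"},
    4: {"empathy", "authority", "security", "justice", "conformity"},
    5: {"nature", "ecosystem", "universe"},
}

def tier_of(value: str) -> int:
    """Return the tier a named value belongs to."""
    for level, members in VALUE_TIERS.items():
        if value in members:
            return level
    raise ValueError(f"unknown value: {value}")

def resolve(conflicting: list[str]) -> str:
    """Pick the value from the innermost (lowest-numbered) tier.

    This hard-codes one possible ranking; whether any fixed ordering
    is right is exactly the 'Hierarchy' question raised below.
    """
    return min(conflicting, key=tier_of)

# Example: a conflict between conformity (Level 4) and truth (Level 2)
# resolves in favor of truth under this toy ordering.
print(resolve(["conformity", "truth"]))  # -> truth
```

Even a toy version like this makes the open questions visible: the ordering is hard-coded, ties within a tier are unresolved, and nothing says how an agent recognizes which values a situation engages.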

However, there are a number of unresolved issues:

  1. Universality – The debate between moral relativism and moral universalism, in which different values are emphasized or traded off.
  2. Hierarchy – The ranked order of the values in terms of their importance or goal relevance.
  3. Change – Ethics for a transhuman or posthuman society might necessitate alterations due to changes in the nature of humanity or other AIs.
  4. Human centricity – These values are human and created by our unique evolution and societal organizations.
  5. Consciousness – Agents may need to feel – to have qualia – in order to be morally responsible. This does not mean that an AGI cannot act morally.
  6. Implementation – This list of values does not describe the mechanisms by which they can be instantiated.

In light of these challenges and opportunities within the realm of value alignment in AGI, the upcoming Beneficial AGI Summit emerges as a pivotal platform. The BGI Summit, with its gathering of leading minds in artificial intelligence, ethics, and related fields, presents an unparalleled opportunity to delve deeper into these unresolved issues.

The universality of values, the hierarchy of ethical considerations, the evolving nature of ethics in a transhuman society, the human-centric viewpoint, consciousness, and the practical implementation of these values in AGI systems – all these topics are ripe for discussion at the BGI Summit. 

This event could mark a significant step forward in our understanding and ability to integrate these complex value systems into AGI, fostering a future where artificial intelligence complements and enhances human values and ethics. Join me at the BGI Summit to explore these topics. For more information and to register, visit bgi24.ai.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.



17 thoughts on “Universal Human Values and Artificial General Intelligence”

  1. awann

    24 days ago

    I liked reading this.

  2. Anime

    25 days ago

    Very Good

  3. Thoughtful

  4. best of luck

  5. "Universal Human Values And Artificial General Intelligence" is a thought-provoking exploration of the ethical considerations surrounding the development and deployment of artificial general intelligence (AGI). The article effectively highlights the importance of aligning AGI systems with universal human values to ensure their beneficial impact on society.

  6. Hi Kieghley, and thanks for commenting. Humans are certainly driven by multiple competing impulses, like selfishness vs. altruism. That's the hard part of aligning value systems: knowing which values to implement in which situations. It's hard to just rank them in order of importance and say always do this one because it's more important than another. Humans are also limited in terms of attention span, being emotional, etc., so we can't be purely rational creatures.


  7. It's a thoughtful piece and the intentions are 'good' (not that we know what that is!). Humanity is like a leviathan of one entity with ADHD! Starting several projects without ever finalising one.

  8. Dear Martin,


    Regarding your comment, yes, education is hugely important. Disinformation is rampant online and will only get worse with AI-generated fake news, videos, audio, etc. One solution is an emphasis on social media literacy training where students are specifically instructed on how to recognize valid content and think critically. In my view this should be mandatory in the school system. Parents also play a role here in helping their kids digest content appropriately. Religions instruct people in values, but they aren't optimal for this. Philosophy is preferable, with its emphasis on reason over blind belief.


    Regarding values, hedonism should never be elevated over happiness. The first is based on pleasure and the second on value achievement. If one favors the first over the second, moral decadence is more likely to follow.


    Cheers,

  9. Hi Aaron,


    Thank you for the thoughtful comments. Here are my reflections. We shouldn't conflate consciousness and value. They are related. To have values, in my view, an agent must be conscious; that is, they must have the capacity to be alive and feel pain and pleasure, and their actions must result in their continued survival. Those are the evolutionary foundations of value. Can machines be conscious and hence have values without being biological or undergoing actual evolution? Not sure. It's possible. But that doesn't preclude us from using values to govern machine behavior. Similarly, an AI can perceive and act emotionally without feeling emotions.


    Your point about giving an AI values and then having it change is one that a lot of people make. But at least with machines we can have more control over the process. If we decide that there is a certain immutable order of values (reality -> truth -> freedom) then we can ensure the program acts in a more virtuous way. So the argument comes down to what hierarchies and decision flow we want to use, i.e., what values should always have precedence over others. I think this can be done and that there is an objective ordering of values that reflects humanity and even all possible species.


    Best,


    Jay

  10. Thoughtful and possible.

  11. The spirit and clarity of this piece is on another level, thank you Jay. However, I will say, as a programmer, installing human values in AGIs is a paradox! If this is about narrow AIs, it makes sense. Super AI has consciousness (theoretically folks theoretically), and we can't install any permanent value!

    We can't be sure that we can program AGIs with human values and then expect the AGI to follow those. All the value we put there will be replaced by the system's own new codes and our installed values will fade quickly!

    I am not saying this is always true; I am saying that in the case of AGI, we can't guarantee that even a trace will be there, not even as archaic lines of code, aka instincts.

    If the values we put there are still intact and unchanged, then the system is not AGI and it is just a fancy, maybe very powerful, AI minus consciousness and free will. As always, I am a fan and super excited about the BGI Summit.

    1. ...I'm a postgrad ethicist who just did ML at MIT... Would be fascinated to chat more with you!

  12. Thank you, Jay, and I will definitely follow the BGI event online. I wish David Wood would share his thoughts on this article. This really echoes the ethical concerns he (David) has raised in so many of his articles recently.

  13. Hi "Alamin",


    Many thanks for your detailed response to this article. I don't think a strict hierarchy is the solution, and any AGI system will need to be able to change the relative value ordering as its environment changes. I think it best to start with basic human values. The ranking may alter in response to the system learning, new values may be added in, existing ones possibly deleted. This will especially be the case for farther-future scenarios involving trans- and posthumans.


    Not sure what you mean exactly by decadence, but I agree instant electronic entertainment is not helping. We have certainly lost sight of basic things. For instance, the current problems with democracy should not mean replacing it with autocracy, yet that is precisely what's happening. The solution is better education, especially about civics and history. Truth decay would imply moral decay, because reality and facts are primary to ethics and truth is being drowned out by emotionality and polarization.


    Best regards,


    Jay



    1. I give the article a hundred! Yet, Jay, I do not accept some of your suggestions in your comment.

      Probably I am the only one who will agree with Alamin here. There is an alarming moral decadence!

      When schools are teaching crap, and when universities are fabricating madness, how can mainstream education secure a better future? It will be a moot point to say personalized education will triumph over mainstream education, so here I am talking about the education system owned by governments, big tech, and...

      I will be a gentle soul here. Mindplex repeatedly punished me for some of my God-given rights, and I won't name those people and the - - - - culture. But everyone knows what I am talking about. 😋

      Sadly, ethics is politics, and this BGI event had better be a place where the brave ones roar!

  14. I like how your proposal encompasses the fundamental aspects of life, the self, and group dynamics, with room for environmental considerations. I like it.

    Hierarchy: finding the common ground is never easy. The irony is that the solution for this is a hierarchy map. I will always choose Level 1 as the most important tier. From the three frameworks in that tier, I will go with ensuring the "value of life" in any cybernetic consciousness system.

    You and I are not on the same page with the definition of "Change". Some of the changes we experience today are nothing but Moral Decadence! As you mentioned, 'changes' are creating a new dynamic for ethical adaptations. Unfortunately, as a post-modern and technocentric society, most of us are no longer concerned with the question of Ethics! This is why I say the decadence is more prevalent than the change. I will not waste more words on the human side as I gave up long ago, and my question here will be related to the AGIs. Under such a decadent, techno-fueled environment of degeneracy and perversion, where moral values lose their meaning every day, how can we assume the AGIs will stay true to any ethical frame we install? We the teachers, we the creators of value have 'evolved': this is what we call our moral decay nowadays, but I am just being polite. If we rationalize most of our perversion as 'change', 'modernization', 'evolving', or 'beyond humanism', why not the AGIs? Whatever meaning we put there, will they not change it in a short period?

    This is an interesting article and I am saying kudos to you for putting it so cleverly in such a short space. 

    1. Mr. Alamin is back! 👊


