
The Scary Future According to AGI Safety Researcher Eliezer Yudkowsky

Feb. 23, 2023 · 3 min. read

"AGI may very likely be just a few years off [but] there are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of" — Dr. Ben Goertzel"

About the Writer

Amara Angelica


Senior Editor Amara Angelica, an electrical engineer and inventor, was previously Editor of Kurzweil AI, working with Ray Kurzweil on The Singularity Is Near and other works.

David Wood, Chair of London Futurists:

There’s been a significant development in the global conversation about finding a pathway to safe AGI.

In this video, released last night, pioneering AGI safety researcher Eliezer Yudkowsky explains why he has taken a sabbatical from MIRI (which he co-founded) and now expects to "go down fighting with dignity" in his quest to prevent a misconfigured AGI from killing everyone.

I’ll warn you in advance that it’s 90 minutes of existential despair. It’s not for the faint-hearted.

[Eliezer starts speaking at ~15 minutes]

Dr. Ben Goertzel responds:

My view on this is summarized in this tweet I just wrote:

“We should build a neural-symbolic-evolutionary AGI with rich self-reflective and compassionate capability, educate it well, work with it on benefit projects, put it under decentralized control, and have some of us fuse with it. Working on it. No guarantees but odds seem good!”

For a longer quote, I would say: “Eliezer is a very serious thinker on these matters and was the core source of most of the ideas in Nick Bostrom's influential book Superintelligence. But ever since I met him, and first debated these issues with him back in 2000, I have felt he had a somewhat narrow view of humanity and the universe in general.

There are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of, and that we can tap into by creating self-reflective, compassionate AGIs and doing good, loving works together with them. There are no guarantees to be found regarding the future of AGI, but taking big risks is how humanity diverged from our apelike ancestors; it's what we do. Despondence is unwarranted and unproductive. We need to focus on optimistically maximizing the odds of a wildly beneficial Singularity together.

A few things I agree with Eli on: AGI may very likely be just a few years off… This fact is not taken nearly seriously enough by society today. And yes, the odds of existential disaster are not zero (but I don't agree with him that they are high).”

_______________________________________________________________________________

Wood continues:

Please do not think Eliezer is merely repeating centuries-old (or decades-old) worries.
Famously, he also started out as an optimist about the prospects for human flourishing in partnership with AI. But he gradually realised the scale of the difficulties we are facing.

To be absolutely clear, the risks from AGI have NOTHING TO DO WITH POSSIBLE AI SENTIENCE.

If you’re not familiar with the arguments about that, kindly do yourself a HUGE favor and take the time to bring yourself up to speed.

Moreover, sometimes FEAR IS APPROPRIATE. Without an element of fear, minds would have failed to concentrate sufficiently regarding the threats posed by Covid-19, and the impact of that pathogen upon human civilization would have been even worse. Yes, fear sometimes plays into the hands of people who would manipulate us. But, equally, complacency and self-assurance and glib pseudo-philosophical platitudes can also play into the hands of people who would manipulate us.

This is not the time for naive optimism. Nor for panicked pessimism. This is the time for clear-eyed realism.

If you don’t have the patience to listen to Eliezer’s explanations (and I concede that his personal style of communication is sometimes awkward), you can find a simpler, more straightforward explanation of the risks in a number of chapters of my book, The Singularity Principles.

— David W. Wood, Chair of London Futurists, Author of The Singularity Principles


Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
