
From AGI to BGI

Mar. 04, 2024.
6 min. read.

What happened at BGI over the last few days and why this will have lasting significance

About the Writer

David Wood


David Wood is Chair of London Futurists, Foresight Advisor at SingularityNET, and Executive Director at the Longevity Escape Velocity Foundation.

About the Co-writer

Amara Angelica


Amara Angelica is Senior Editor, Mindplex.

It’s easy to suggest a new three-letter acronym. It’s much harder to get a new three-letter acronym to stick. Yet that is one likely outcome of a gathering that has just finished in Panama City.

The official title of the gathering was Beneficial AGI Summit & Unconference. But during the planning leading up to the event, the realization spread that the gathering could mark a profound change in the global discussion about AGI (Artificial General Intelligence). Hence the need for a more decisive name.

And that’s how the acronym “BGI” was born, standing for “Beneficial General Intelligence”.

The term “Artificial General Intelligence” had been proposed independently a number of times from 1997 to 2002, but SingularityNET CEO Ben Goertzel made the term his own by using it in 2005 as the title of a book of articles, and through his leading role in organizing an “AGI Conference” every year from 2008 to the present.

That series has featured a wide variety of talks over the years. But the primary focus has been the technical possibility of AGI: which architectures of hardware, software, data, and communications are most likely to result in the creation of AGI?

A focus on beneficial outcomes

In contrast, the primary focus of the BGI gathering was the beneficial possibility of AGI: which architectures are most likely to result in AGI that has beneficial outcomes for humanity and other sentient beings—including beings that may come into existence as AGI evolves?

Critically, for the BGI event, the “architectures” include not just technical relationships and connections but also human relationships and connections.

Setting the scene in his opening keynote, Goertzel referenced a working document “BGI: Principles & Practices,” which starts as follows:

This text enumerates and briefly elaborates a core of basic principles and practices that we see as likely helpful and possibly critical for guiding the development and deployment of AGI technology in a positive direction—i.e., toward Beneficial General Intelligence (BGI).

Defend and decentralize

Re-using the term “d/acc” from a recent blogpost by Ethereum founder Vitalik Buterin, Goertzel summarized his own preferred approach: “Accelerating progress toward BGI in pursuit of the many powerful benefits anticipated,” while at the same time, “Taking decisive and imaginative measures to defend against negative potentials, and decentralize the dynamics underlying AGI, to avoid pathologies of narrow, closed-minded thinking and autocratic or oligopolistic overly centralized control.”

As Goertzel explained, the “d” in “d/acc” refers to both “defend” and “decentralize.”

As the event continued, it became clear that while these two concepts of “defend” and “decentralize” were both popular, there was wide uncertainty about what they implied. This led to a variety of rich conversations.

Authentic conversations

What I heard from participants, time and again, as the four days of the event unfolded, was an appreciation: not just for the ideas that were shared in the various sessions, but also for the caliber and diversity of the attendees. The fact that there was little overt corporate presence—no salespeople looking to sign up new customers for their products—helped everyone relax into what I heard described as “authentic conversations.”

Attendees—some of whom are pictured below—included politicians and their advisors, researchers from fields such as consciousness and ethical frameworks, technologists with a focus on tools to assist decentralization, designers of robots, members of religious organizations, journalists and authors, film crews and documentary makers, representatives from think tanks and other NGOs, AI safety advocates, health technology innovators, VR system developers, leaders from the world of transhumanism, and much more besides.

A few highlights

SingularityNET Chief AI Alchemist (yes, some marvelous job titles were on display) Mihaela Ulieru warned about the dangers of a “Moloch” race toward AI that lacked beneficial characteristics, and emphasized, “It’s all about the incentives, stupid!”

Anders Sandberg from the Future of Humanity Institute at the University of Oxford highlighted the importance of legal and other institutions, for the wellbeing of society as a whole, and considered ways in which these institutions might evolve with the emergence of AGI.

Jerome Glenn from the Millennium Project reported on efforts to inspire representatives at the United Nations to pay more attention, not just to the governance challenges of existing “narrow” AI, but to AGI as it may emerge in the near future. Apparently, this is an uphill struggle.

That message dovetailed with a presentation from SingularityNET Chief Operating Officer Janet Adams, who reviewed legislation around the world countering adverse effects from AIs that are biased, misconfigured, or being used for destructive purposes. The takeaway here was that little valuable progress has been made so far.

Common good governance: An AGI constitution

For a different approach to transforming the governance of AGI, Anneloes Smitsman, CEO of EARTHwise Ventures, reviewed progress with an ongoing collaborative initiative to define what she called a “Constitution for guiding Artificial General Intelligence.” That document starts with this “promise”:

We, as future ancestors and children of the new civilizations, commit to common good governance for collectively stewarding the development and evolution of benevolent, safe, and responsible AGI in service to our long-term collective wellbeing.

Kinds of consciousness

Author Jeffery Martin, a founder of the field of Transformative Technology, spoke about the need to understand different modes of consciousness, such as those described over the centuries as enlightened, non-dual, the peace that passeth understanding, unitive, and persistent non-symbolic experience (PNSE). Any research into possible conscious modes for advanced AIs should beware of being trapped into overly narrow definitions of consciousness, he said.

Daniel Faggella of Emerj shared a provocative 3×3 “Intelligence Trajectory Political Matrix” on which different attitudes can be plotted regarding possible futures of artificial intelligence. One dimension was labeled “What you want” and featured “Preservation [of humanity],” “Progression,” and “Ascension.” The other dimension was labeled “How you get it” and featured “Controlled,” “Collaborative,” and “Open.”

Sociological, economic and morality issues

A group of panelists, including James Barrat, James Hughes, and Calum Chace, gave different assessments of the potential sociological and economic impacts of AGI. As with many other conversations at the event, there was agreement that the impacts would likely be extensive, but less agreement on how these impacts should best be managed.

Finally, in this selection from many other highlights, another group of panelists, including Weaver Weinbaum, Esther Galfalvi, Joscha Bach, and Marta Lenartowicz, debated whether traditional foundations for morality, including utilitarianism, deontology, and virtue ethics, were good starting points for coping with the extraordinarily open-ended transformations that AGI is likely to unleash. Guess what: there were no firm conclusions here either!

Round table groups and the future

In addition to these presentations and panel discussions, a number of impromptu “round table interactive working groups” proved highly popular. Sufficient time was provided for many ideas to be raised, then critiqued, and then recreated in new form. These working groups in turn provided the foundation for “unconference” sessions on the final two days which were organized spontaneously and with little central oversight, but which drew and kept sizable audiences.

At the very end of the event, one request resonated loudly: the conversations about BGI need to continue, quickly and urgently, without everyone having to wait for BGI25 (next year’s version of the BGI event) in around 12 months’ time. Several ideas have been mooted about appropriate tools to host these ongoing conversations.

Personally, I see Mindplex as being in a strong position to assist with these conversations.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.

6 thoughts on “From AGI to BGI”

  1. BGI sounds like a meaningful shift. Prioritizing both technical and human connections could be key to making AGI beneficial for humanity.

  2. Amazing

  3. This is great news

  4. I am following the event online; the direction is right!

  5. The other day, I read Jay’s article on the topic. I glanced at the summit videos and it looks like a good starting point.

  6. The summit is the first step for BGI; spreading the word and building an active BGI community should be the next move.


