Five of the Best: Reflections on Some of the Highlights of BGI 2025

2025-10-28
BGI 2025 involved a number of bold experiments. Some of these experiments paid off handsomely.
Credit: Tesfu Assefa

Perhaps the very best aspect of BGI 2025 – held in Istanbul from 21st to 23rd October – was the opportunity during the networking sessions for truly thought-provoking conversations with the remarkable set of attendees at the event.

These attendees all shared the same overarching aspiration: to enable the emergence of advanced artificial intelligence that will be deeply beneficial for humanity worldwide. But they brought numerous different perspectives into the mix: different backgrounds, different skillsets, different personal philosophies, and different operating modes.

That’s what made many of my discussions at the event both challenging and inspiring. Time and again I found myself reflecting: there’s more for me to learn here. Moreover, there are things that I probably need to unlearn too – ideas that have generally served me well in the past, but which now make it difficult for me to absorb new insights.

Another special feature of BGI 2025 was the willingness of the organisers to take risks. Rather than sticking with tried and tested speakers and topics, the organisers invited a number of speakers and panellists who were less experienced, or less familiar with the ins and outs of advanced AI – but whose diverse approaches had the potential to stretch audience hearts and minds. Accordingly, the event featured what I would describe as a number of bold experiments.

By their nature, some of these experiments fell flat. Several times at the event, I wondered to myself: Why am I wasting my time listening to this? I tried harder to discern value in what was being said on stage, but in some cases, there was very little substance there. However, other experiments proved inspirational, and I was really glad to be exposed to some new approaches regarding BGI.

After that preamble, let me now pick out five speakers whose contributions at BGI 2025 included world-class elements. To be clear, my choices for this “five of the best” list will likely differ from selections that other attendees would make. If you watched the event – either in Istanbul, via the livestream broadcast, or via the recordings subsequently made available (day one, day two, and day three) – please let me know your own assessment of the highlights and lowlights of the event.

Credit: SingularityNet

Institutional Acceleration and The New Technologies of Freedom

“Now we can build liberty.”

That was a clarion call in the talk by Chris Berg, Professor of Economics at RMIT University, from Melbourne, Australia.

Chris said that, for most of history, there have been two main ways to help people gain more liberties:

  1. Change laws – replacing bad laws (that suppressed freedoms) with good laws (that enabled and supported freedoms)
  2. Change public opinion – to provide more pressure in favour of laws being changed.

However, in some cases, a third option is possible: develop and use technology to support greater freedoms. Examples include:

  • Encryption software, to keep communications private
  • Smartphones, which allow people all over the world to keep themselves informed about opportunities and threats
  • Telepresence systems that allow people to work remotely, giving them greater geographical freedom
  • Easily available contraception, giving more freedom regarding family planning.

Chris then made the case that technologies of AI and decentralisation can take this principle to a deeper level. To see this, we need to consider the role of institutions, such as contracts and courts, property rights, a sound money system, insurance markets, and controls over monopolies and cartels. Without good institutions, it’s much harder to sustain an innovative economy. But good institutions allow people to create and refine the technologies that provide valuable goods and services.

What’s new about the technologies of AI and decentralisation is their potential not only to generate goods and services, but also to create new social institutions. Examples here include zero-knowledge proofs, tamperproof records maintained on blockchains, and DAOs (decentralised autonomous organisations).

Accordingly, we can anticipate the possibility of a powerful virtuous cycle:

  • Good institutions enable the creation of good technologies
  • Good technologies enable the creation of better institutions
  • Better institutions enable the creation of better technologies
  • Better technologies enable the creation of even better institutions, and so on.

This struck a chord with me, as the idea resonated with a previous Mindplex article of mine, “The Rise of DeSci: The Promises and Pitfalls of Decentralised Science”.

Credit: David Wood

To move the conversation further forward, I wanted the opportunity to ask Chris something like the following:

“You say that DAOs are an example of a better institution, by virtue of their decentralised nature. But many people who were previously enthusiastic about DAOs have lost some of that enthusiasm from their real-world experiences trying to put these principles into practice. So, what do you consider to be the best examples of DAOs in the real world?”

Unfortunately, the event schedule was too full, and too many talks had overrun their allotted time, for any audience Q&A at the end of talks. Never mind, I thought: this will be a good topic for the unconference taking place on the third day of the event.

But here’s a funny thing about unconferences. In principle, their decentralised nature allows dynamic planning of what will be covered. But without clarity about options for scheduling and locations, it’s hard for conference attendees to reach consensus about which topics would attract constructive attention.

The BGI organisers pointed to an app that allowed people to propose and discuss unconference sessions. In practice, the app was buggy: it made a mess of timezone conversion calculations. As a fallback, a sheet of white paper was stuck on the wall near the reception desk, but this offered limited scope for considering how different proposed unconference sessions might dovetail or overlap.

At the end of the event, a number of attendees were discussing comparisons between this year’s BGI and the previous one, in Panama in 2024. There was wide agreement that the unconference portion of the Panama event was truly memorable, whereas this year’s was (as one person put it, uncharitably) “very poor”.

To be fair to the organisers, the Panama unconference was spread over two full days, whereas the one in Istanbul was squeezed into just half a day – with the first half of that day allocated to standard conference presentations.

But my takeaway here is: decentralisation is hard! And merely saying the words “AI” and “blockchain” doesn’t magically enable constructive decentralised collaboration.

AI: Unexplainable, Unpredictable, Uncontrollable

“You think it will be easy to control superintelligence? Why, it took us more than half an hour to get this Zoom call working properly!”

That was a lament from Roman Yampolskiy, Professor of Computer Science at the University of Louisville, Kentucky, USA.

Roman had good reason to be frustrated with technology. He had intended to fly to Istanbul to give his talk in person. But his flight was cancelled due to (guess what?) technical issues. He reasoned that he could still arrive in time, by boarding a later flight. However, that flight was likewise cancelled.

Being resourceful, Roman offered to give his talk instead via Zoom. That’s where the third set of technical gremlins kicked in. As mentioned, it took 30 minutes before the Zoom connection could be established.

Given bugs in airplane systems and in Zoom communications, did we really think that the first AGIs (artificial general intelligences) would be bug-free, and be fully controllable by their human operators?

Then Roman switched to his pre-prepared remarks, surveying the large academic literature which clearly demonstrated (he said) that any AGIs would be:

  • Unexplainable – taking actions that bewildered human observers
  • Unpredictable – deviating from what human observers had anticipated
  • Uncontrollable – defying attempts by human observers to constrain their behaviours.

Roman also pointed out that various attempted refutations of these conclusions had been raised many times over the years, often with the advocates of these refutations imagining that their observations were novel, whereas in fact these objections had all been comprehensively debunked in earlier literature.

Once again, although I liked a great deal of what Roman said in his talk, I wanted to ask him a number of questions. I particularly wanted to explore what I believe will be an easier AI control problem:

  • Not humanity trying to control an AGI that has already been created
  • But humanity controlling developers before they reach the point when they create an AGI.

I accept that the second of these control problems is hard too, but I wanted to indicate some ways in which humanity could rise to that challenge.

For example: In his talk, Roman had pointed out that humanity has failed to control other sorts of developers, such as the people who create and launch computer viruses and other online malware. Making these technologies illegal had not been sufficient to prevent them from being created, Roman noted. The question I wanted to ask about this was: but haven’t security agencies been half-hearted about shutting down the kinds of zero-day exploits on which such malware depends? Various three-letter agencies have actually been relaxed about the existence of such exploits, thinking they could be utilised for better remote surveillance. So, in fact, humanity hasn’t tried particularly hard to prevent malware.

But yet again, there was no time for that conversation. Instead, the event moved on quickly to another panel. As it happened, the speakers in that follow-on panel often referred back to the kinds of arguments that Roman had made. I regret to say that these panellists demonstrated little grasp of the force of Roman’s arguments.

Thus one panellist said: Surely the outcome depends on the kind of objective we give the AI. So why don’t we give the AI the objective of serving humanity? But this disregards the problem that we don’t actually know how to program such an objective into the kinds of AIs that are increasingly dominant around the world.

Another point raised on that panel: Why don’t AI companies integrate critical thinking skills into their AIs? I wanted to call out from the audience: AI companies are already working hard on doing precisely that:

  • Models with CoT (Chain of Thought) are a significant step in that direction
  • New types of hybrid systems, featuring further elements of rationality, are the subject of considerable research.

These further elements of rationality include the following (a toy sketch follows this list):

  • External memory and adjudication layers
  • Self-reflection loops to ensure self-consistency
  • Explicit epistemic state tracking (“what do I know and how confident am I?”)
  • Adversarial reasoning between competing hypotheses
  • So-called Meta-CoT, in which models “think about their own thinking”.
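
To make the flavour of these ideas concrete, here is a deliberately simple sketch in Python of a self-reflection loop combined with explicit epistemic state tracking. It is my own illustrative toy, not code from any AI company or lab: the functions call_model and self_consistency_score are hypothetical stand-ins for a real model and a real verifier.

```python
# Toy sketch of two "rationality" elements mentioned above: a self-reflection
# loop and explicit epistemic state tracking. call_model and
# self_consistency_score are hypothetical placeholders, not real APIs.

from dataclasses import dataclass, field

@dataclass
class EpistemicState:
    """Tracks what the system currently believes and how confident it is."""
    answer: str = ""
    confidence: float = 0.0          # 0.0 = no confidence, 1.0 = certain
    critiques: list[str] = field(default_factory=list)

def call_model(prompt: str) -> str:
    # Hypothetical placeholder: a real system would query a model here.
    return "draft answer to: " + prompt

def self_consistency_score(answer: str, critique: str) -> float:
    # Hypothetical placeholder: a real system might use a second model,
    # a verifier, or a symbolic checker to score the answer.
    return 0.5 if "unsupported" in critique else 0.9

def reflect_and_answer(question: str, max_rounds: int = 3,
                       threshold: float = 0.8) -> EpistemicState:
    """Self-reflection loop: draft, critique, revise, and track confidence."""
    state = EpistemicState()
    state.answer = call_model(question)
    for _ in range(max_rounds):
        critique = call_model(f"Critique this answer: {state.answer}")
        state.critiques.append(critique)
        state.confidence = self_consistency_score(state.answer, critique)
        if state.confidence >= threshold:
            break                    # confident enough: stop reflecting
        state.answer = call_model(
            f"Revise the answer '{state.answer}' given the critique: {critique}")
    return state

print(reflect_and_answer("What limits the controllability of advanced AI?"))
```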

The panellist suggestion “why don’t you consider rationality when you are designing AIs” was especially unnecessary at a conference featuring multiple SingularityNET presenters – given that the neurosymbolic architecture of SingularityNET’s AI explicitly features rationality alongside other elements (such as a knowledge graph).

I offer two takeaways here:

First, individual panellists can be excused for not being well informed about key arguments raised earlier in the conference. No-one can be an expert in every topic. However, when every member of a panel has the same inadequate understanding, it means their conversation is a big missed opportunity.

Second, what I have called “the second AI control problem” deserves more attention. On the day after the event, I created and published a simple graphic in which I contrasted two approaches to that second problem:

  • One approach, which I have provocatively dubbed “miserabilist”, basically takes it for granted that any worldwide collaboration to establish AI guardrails will fail – and it looks for evidence to back up that pessimism
  • Another approach, which I call “activist”, has a more upbeat assessment of the possibilities of human collaboration.
Credit: David Wood

At the end of this article, I’ll return to some of the ideas in this graphic. But first, let me continue my survey of “five of the best”.

AGI Governance and the United Nations

In terms of the above split – miserabilist versus activist – one person who is strongly in the latter camp is Jerome Glenn, the CEO of the Millennium Project.

Jerome often talks about the widespread despondency that existed during the Cold War regarding the nuclear arms race. Critics often bemoaned that nothing could be done to prevent an eventual catastrophic nuclear exchange. These critics were the miserabilists of their time. Thankfully, enough people worked hard on mechanisms to avoid nuclear war, and to reduce the enormous stockpiles of weaponry possessed by the superpowers.

(Aside: for some fascinating insights regarding what happened behind the scenes in nuclear weapons reduction talks, see the chapter “‘We Will Die, But We Will Sink Them All!’” in the recent book by Tom Ough, “The Anti-Catastrophe League”, for a conversation involving treaty negotiator Rose Gottemoeller.)

Jerome thinks that similar efforts, today, can enable humanity to successfully govern the transition to more advanced types of AI. He further thinks that the United Nations can play an important central role in this governance process. That’s despite plenty of problems at the United Nations.

In his talk at BGI 2025, Jerome described some of the painstaking work that he and various colleagues from the Millennium Project have undertaken in a long series of meetings with various UN personnel and their advisors.

That work paid off in May this year when the UNCPGA – the Council of [Former] Presidents of the General Assembly – formally accepted a report they had commissioned the previous year: “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly”.

Credit: UNCPGA

The report contained the headline “Urgency for UN General Assembly action on AGI governance and likely consequences if no action is taken”, with the following explanatory text:

“Amidst the complex geopolitical environment and in the absence of cohesive and binding international norms, a competitive rush to develop AGI without adequate safety measures is increasing the risk of accidents or misuse, weaponization, and existential failures. Nations and corporations are prioritizing speed over security, undermining national governing frameworks, and making safety protocols secondary to economic or military advantage. Since many forms of AGI from governments and corporations could emerge before the end of this decade, and since establishing national and international governance systems will take years, it is urgent to begin the necessary procedures to prevent the following outcomes…”

Happily, good momentum has continued since then:

  • More behind-the-scenes conversations between representatives of the US and China
  • A new press release from the UNCPGA, to mark the occasion of the 80th anniversary of the founding of the United Nations.

The press release contains the following recommendation:

The proposed Special Session of the UN General Assembly should address the following specific threats to international peace and security:

  1. The crisis of multilateralism…
  2. Crisis of international law…
  3. Increasing danger of great power confrontation including potential nuclear conflict…
  4. Potential threat of Artificial General Intelligence: The rapid acceleration of technological progress particularly in the field of Artificial Intelligence poses unprecedented opportunities, but serious challenges to humanity in the 21st Century. While harnessing the benefits of this rapid technological progress, there is an urgent need for the United Nations to build an effective international framework to address the dangers posed by rapid evolution of Artificial General Intelligence (AGI) which could exceed human capabilities and pose potential threat to humanity itself.

My takeaway here is: collaboration is indeed hard, but with sufficient skills in diplomacy, persistence, mutual understanding, and coalition building, progress can definitely be made. As such, Jerome Glenn stands as a fine example for the rest of us to follow.

ASI Chain: The Decentralized Infrastructure for the AGI Economy

Someone I’ve heard speaking on numerous occasions is Ben Goertzel, CEO of SingularityNET. Ben was on the BGI 2025 stage several times – providing context for what we had just heard (and what we were about to hear), and also setting out some of his own recent thinking.

The talk of his I want to focus on here was a short one on the third morning, covering the DevNet release (for developers) of the ASI:Chain project of the ASI Alliance. During this talk, I grasped an important piece of Ben’s vision for the future which, somehow, hadn’t previously been clear to me.

Credit: ASI Alliance

ASI:Chain is envisioned as a transformational new infrastructure, which supports the swift, reliable, uncensorable, low-cost operation of advanced distributed AI systems.

ASI:Chain is a new implementation of blockchain, with the usual blockchain features of being tamperproof, transparent, and beyond the control of any centralised gatekeeper.

However, the blockchains that are most often used today – Ethereum, Solana, Cardano, and so on – weren’t designed with advanced AI in mind. These existing blockchains each have their own uses, including the management of tokens, but they fall short on requirements such as sub-millisecond latency, large-scale concurrency, and energy efficiency.

ASI:Chain combines several important innovations from within the extended SingularityNET ecosystem:

  • Elements from the high-throughput RChain blockchain developed by Greg Meredith, with its concurrency-first architecture, fast scalable smart contracts, and other advantages
  • A privacy-preserving AI multi-agent system architecture
  • The MeTTa AGI programming language from the SingularityNET Hyperon platform
  • Support for different sub-networks (“shards”) with different consensus mechanisms – improving interoperability with other blockchain networks.

One way to think of this is: just as Ethereum went far beyond the original Bitcoin blockchain by providing smart contracts, ASI:Chain has the potential to leap forward again, since the smart contracts on this chain can include arbitrarily sophisticated AI processes.

This is not just a qualitative improvement – with the ability to do things which were beyond previous smart contract languages. It is also a quantitative improvement, with dramatic speed-ups in processing throughput.
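
To illustrate that conceptual leap in the simplest possible terms, here is a schematic toy in Python – emphatically not ASI:Chain or MeTTa code, and not any real contract API – contrasting a conventional fixed-logic contract with one whose settlement logic delegates part of its decision to an AI process. The names Claim, evaluate_claim and toy_evaluator are hypothetical placeholders of my own invention.

```python
# Schematic contrast (not ASI:Chain code) between a contract whose logic is
# fixed at deployment and an "AI-enabled" contract that delegates part of its
# decision to an AI process supplied as a callable.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Claim:
    claimant: str
    amount: float
    evidence: str   # e.g. a document hash plus supporting text

def fixed_logic_contract(claim: Claim) -> bool:
    # Conventional smart contract: a hard-coded rule, nothing more.
    return claim.amount <= 100.0

def ai_enabled_contract(claim: Claim,
                        evaluate_claim: Callable[[str], float]) -> bool:
    # Part of the decision is delegated to an AI process (here just a callable
    # scoring the evidence between 0 and 1). On a chain of the kind described
    # above, that callable would itself run as an on-chain AI service rather
    # than a local function.
    credibility = evaluate_claim(claim.evidence)
    return claim.amount <= 100.0 or credibility > 0.9

# Toy usage with a stand-in "AI" that just checks for a keyword.
toy_evaluator = lambda evidence: 0.95 if "verified-receipt" in evidence else 0.1
claim = Claim("alice", 250.0, "verified-receipt:abc123")
print("fixed logic:", fixed_logic_contract(claim))                 # False
print("AI-enabled:", ai_enabled_contract(claim, toy_evaluator))    # True
```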

ASI:Chain is not yet ready for general access, but the DevNet release will allow interested application developers, blockchain developers, and node operators to become familiar with some of the aspects of the chain and its smart contract capabilities.

All being well, ASI:Chain will enable a levelling of the AI playing field, reducing the advantages that existing large technology companies have by virtue of their scale. ASI:Chain can place a greater premium on transparency and explainability, in contrast to the opacity and inner obscurity of most existing AI systems. As such, ASI:Chain can elevate questions of safety and security. Large tech companies will no longer be able to say: “there is no alternative: the kind of AI we are creating is the only kind that yields good results”.

The disruptive potential of predictive coding

Another challenge to the “there is no alternative” claims came in the talk by Faezeh Habibi, a PhD student at the NAC (Neural Adaptive Computing) Lab at the Rochester Institute of Technology in New York state.

In the overall spectrum of approaches to building AGI, one difference is in how central a role is given to neural networks, as opposed to symbolic and logical reasoning. Another difference is in the methods used to train the neural networks in the system. Most neural networks are trained using back propagation. However, that’s not the only possible training method. In particular, Faezeh emphasised the positive possibilities of a training method known as predictive coding.

I first came across predictive coding at the AGI 2024 conference in Seattle, where I heard Alex Ororbia speaking about it – Alex is the head of the NAC Lab. I subsequently had the pleasure to invite Alex as a guest on episode 102 of the London Futurists Podcast, “Building brain-like AIs”. The talk by Faezeh at BGI 2025 reminded me of the disruptive potential of predictive coding.

Credit: Faezeh Habibi

Both back propagation and predictive coding involve mathematics, but some of the basic points can be explained in general terms (a toy numerical sketch follows this list):

  • Each training method applies to a multi-layer neural network
  • Back propagation focuses on reducing the errors at the output of the process, namely, how much the output differs from what is actually observed
  • The processing of each layer of intermediate neurons is globally adjusted via a “backward pass”, to minimise the error at the end of the forward pass
  • In contrast, for predictive coding, the mathematics applies locally, with the operation of each layer of neurons being adjusted to reduce the errors in the next layer
  • The local nature of these calculations is believed to be a better representation of what actually happens in biological brains.
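
For readers who want to see the contrast made concrete, here is a minimal toy sketch in Python (NumPy) of a predictive-coding-style training step, loosely following the standard formulation in which each layer keeps its own value node and only local error signals are used. This is my own illustration, not code from the NAC Lab, and the network sizes, learning rates, and activation function are arbitrary choices for the sake of the example.

```python
# Minimal predictive-coding sketch: each layer has a value node, errors are
# computed locally as (value - prediction from the layer below), and both the
# value updates and the weight updates use only locally available quantities.

import numpy as np

rng = np.random.default_rng(0)

def f(x):            # activation
    return np.tanh(x)

def df(x):           # its derivative
    return 1.0 - np.tanh(x) ** 2

# Layer sizes: input -> hidden -> output
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.3, size=(n_hid, n_in))   # predicts hidden from input
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))  # predicts output from hidden

def pc_step(x_in, target, n_infer=20, lr_x=0.1, lr_w=0.01):
    """One predictive-coding training step on a single example."""
    x0 = x_in
    x1 = W1 @ f(x0)          # initialise the hidden node at its prediction
    x2 = target              # clamp the output node to the target

    # Inference phase: relax the hidden value node to reduce LOCAL errors.
    for _ in range(n_infer):
        e1 = x1 - W1 @ f(x0)             # error at the hidden layer
        e2 = x2 - W2 @ f(x1)             # error at the output layer
        # The hidden node feels its own error and the error it causes above.
        x1 += lr_x * (-e1 + df(x1) * (W2.T @ e2))

    # Learning phase: purely local, Hebbian-like weight updates.
    e1 = x1 - W1 @ f(x0)
    e2 = x2 - W2 @ f(x1)
    return lr_w * np.outer(e1, f(x0)), lr_w * np.outer(e2, f(x1))

# Toy usage: learn to map a random input to a fixed target.
x_in = rng.normal(size=n_in)
target = np.array([0.5, -0.5])
for step in range(200):
    dW1, dW2 = pc_step(x_in, target)
    W1 += dW1
    W2 += dW2

print("prediction:", W2 @ f(W1 @ f(x_in)), "target:", target)
```

Notice that every update in this sketch uses only quantities available at the layer concerned – the locality that the bullet points above emphasise, and that is believed to be closer to what happens in biological brains than back propagation’s global backward pass.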

For both approaches, practical applications involve elements of both mathematics and intuition – that is, adjustment of various parameters in the mathematical models. But it seems that, for good choices of the parameters in predictive coding:

  • The neural networks are much sparser than for back propagation: many more of the neurons are quiescent, rather than being actively involved
  • This in turn means the energy requirements are significantly lower – which matches the observation that human brains are orders of magnitude more power-efficient than silicon neural networks
  • It also holds out the possibility that the neural networks are easier to interpret and explain.

That takes me back to one distinction I made when comparing “miserabilist” and “activist” approaches to the possibility of controlling big tech companies in their rush to create AGI. It’s the fourth of the four observations on that image: “Advanced AI has the potential to solve aging, climate change, etc”:

  • Seeing no alternative to the opaque, inexplicable AI systems being produced by big tech companies, miserabilists resign themselves, saying “Any attempt to steer the development of advanced AI risks people dying from aging”
  • However, activists believe that new forms of AI can be prioritised, which lack dangerous features such as autonomy, volition, and inscrutability – new forms of AI that will still be able to deliver, quickly, the kinds of solution (including all-round rejuvenation) that people wish to obtain from AGI.

Examples of these new forms of AI include “Scientist AI” (to use a term favoured by Yoshua Bengio) and “Tool AI” (the term favoured by Anthony Aguirre). Critically, these new forms potentially also include AI delivered on ASI:Chain (see my comments above on the BGI 2025 talk by Ben Goertzel) and AI using neural networks trained by predictive coding (as described by Faezeh Habibi).

It’s when the potential of these new, safer forms of AI becomes clearer that more people can be expected to snap out of their miserabilist opposition to steering and controlling AGI development.

A final audacious analogy

The scenario I’ve just outlined can be compared to a major controversy within the field of preventing runaway climate change. That field, too, has its miserabilists and activists.

The argument runs as follows:

  1. Existing patterns of energy use, which rely heavily on fuels that emit greenhouse gases, risk the climate reaching dangerous tipping points and transitioning beyond a “climate singularity” into an utterly unpredictable, chaotic, cataclysmically dangerous situation
  2. However, most energy users prefer dirty sources to clean (“green”) sources, because the former have lower costs and appear to be more reliable (in the short term at least)
  3. Accordingly, there is almost no possibility of people switching away in sufficient numbers from dirty energy to clean energy
  4. Some observers might therefore be tempted to hope that theories of accelerating climate change are mistaken, and that there is no dangerous “climate singularity” in the near future
  5. In turn, that drives people to look for faults in parts of the climate change argumentation – cherry picking various potential anomalies in order to salve their conscience
  6. BUT this miserable flow of thought can be disrupted once it is seen how clean energy can be lower cost than dirty energy
  7. From this new perspective, there will be no need to plead with energy users to make sacrifices for the greater good; instead, these users will happily transition to abundant cleaner energy sources, for their short-term economic benefit as well as the longer-term environmental benefits.

You can likely see how a similar argument applies for safer development of trustworthy beneficial advanced AI:

  1. Existing AGI development processes, which rely heavily on poorly understood neural networks trained by back propagation, risk AI development reaching dangerous tipping points (when AIs repeatedly self-improve) and transitioning beyond a “technological singularity” into an utterly unpredictable, chaotic, cataclysmically dangerous situation
  2. However, most AI developers prefer opaque AI creation processes to transparent, explainable ones, because the former appear to produce more exciting results (in the short term at least)
  3. Accordingly, there is almost no possibility of developers switching away from their current reckless “suicide race” to build AGI first
  4. Some observers might therefore be tempted to hope that theories of AGI being “Unexplainable, Unpredictable, Uncontrollable” (as advanced for example by Roman Yampolskiy) are mistaken, and that there is no dangerous “technological singularity” in the future
  5. In turn, that drives people to look for faults in the work of Yampolskiy, Bengio, Eliezer Yudkowsky, and others, cherry picking various potential anomalies in order to salve their conscience
  6. BUT this miserable flow of thought can be disrupted once it is seen how alternative forms of advanced AI can deliver the anticipated benefits of AGI without the terrible risks of currently dominant development methods
  7. From this new perspective, there will be no need to plead with AGI developers to pause their research for the greater good; instead, these developers will happily transition to safer forms of AI development.

To be clear, this makes things appear somewhat too simple. In both cases, the complication is that formidable inertial forces will need to be overcome – power structures that, for various pathological reasons, are hell-bent on preserving the status quo. For that reason, the battle for truly beneficial advanced AI is going to require great fortitude as well as great skill – skill not only in technological architectures but also in human social and political dynamics.

#AccelerationOfAGI

#AGIArchitecture

#AGIControl

#AIFavoritism

#Conferences

#ConsciousAI


