The Beneficial AGI Summit & Unconference 2025, BGI 2025 for short (Istanbul, October 21–23, 2025), brought together leading artificial intelligence (AI) researchers, innovators, policymakers, and technologists to steer the development of artificial general intelligence (AGI) toward a positive future. BGI 2025 was organized by SingularityNET. The event was live streamed via the SingularityNET YouTube channel, and all recordings are now available online.
Topics included technical issues, the alignment problem, artificial superintelligence (ASI), approaches to BGI, global and decentralized governance models for AGI, strategies for promoting human and societal flourishing, and related political aspects.
I couldn't be in Istanbul, but I avidly watched the event online. I must say that, just as I do when I attend a conference in person, I listened with full attention only to selected talks and panels on topics that I find really interesting. I also wanted to listen to my friends. So the highlights in this short review do not reflect the objective importance of the topics, only my own subjective interests. I'll watch all the talks and discussions more carefully, and I'm sure I'll find that I missed a lot.
Day 1
In his opening address, Ben Goertzel pointed out that this is an exciting era, marked by remarkable current technologies, innovations on the horizon, and shifting public attitudes toward future technology. He noted that these trends are likely to intensify in the coming years, influencing technological development and societal views.
Current technologies, while impressive, fall short of human-level AGI, defined as a system with broad, adaptable intelligence like ours. Ben highlighted the progress of AI tools in solving complex math problems, driven largely by large language models (LLMs). Beyond math, LLMs have transformed tasks like resume writing and essay composition, though human judgment remains essential. However, LLMs lack the creativity and error-correction skills of humans, which limits their reliability.
Looking ahead, Ben disagreed with those who believe scaling LLMs will lead to AGI, predicting instead that a combination of diverse AI architectures will emerge within a few years. This optimism stems from trends in computing power, data availability, and increased focus from talented individuals. He referenced a recent open-source Hyperon technical meetup in Istanbul, where a new AGI language was showcased. Hyperon integrates neural networks, logical reasoning, and evolutionary algorithms within a decentralized knowledge graph, offering a broader approach than LLMs alone.
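Ben's talk didn't go into implementation details, but to make the hybrid idea concrete, here is a minimal, purely illustrative Python sketch of the kind of loop such an architecture implies: a symbolic inference rule proposes new facts over a shared knowledge graph, a (stubbed) neural model scores them, and an evolutionary selection step keeps the best. All names and structures here are my own hypothetical toy, not the actual Hyperon API.

```python
# Illustrative only: a toy neuro-symbolic-evolutionary loop over a shared
# knowledge graph. All names are hypothetical; this is NOT the Hyperon API.
import random

# Shared knowledge graph: (subject, relation, object) triples.
graph = {
    ("water", "boils_at", "100C"),
    ("ice", "is_a", "water"),
    ("steam", "is_a", "water"),
}

def infer_is_a(kg):
    """Symbolic step: propagate properties along is_a links."""
    new_facts = set()
    for (x, rel1, y) in kg:
        if rel1 == "is_a":
            for (s, rel2, o) in kg:
                if s == y and rel2 != "is_a":
                    new_facts.add((x, rel2, o))
    return new_facts

def neural_score(fact):
    """Neural step (stubbed): stand-in for a learned plausibility score."""
    random.seed(hash(fact) % 2**32)  # deterministic per fact within a run
    return random.random()

def evolve(candidates, keep=2):
    """Evolutionary step: select the highest-scoring candidate facts."""
    return set(sorted(candidates, key=neural_score, reverse=True)[:keep])

# One iteration of the hybrid loop: infer, score, select, merge back.
candidates = infer_is_a(graph)
graph |= evolve(candidates)
for fact in sorted(graph):
    print(fact)
```

The sketch is only meant to convey why combining learning, reasoning, and selection over a shared graph is a structurally different bet than scaling a single model.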
Ben referenced early AGI discussions from the 1970s and influential books like “The Singularity Is Near” by Ray Kurzweil and his own “Artificial General Intelligence.” These works marked growing interest in, and cultural acceptance of, the prospect of AGI, a shift accelerated by technological advances. Today, funding for AGI projects is more accessible, reflecting this change.
Ben outlined economic models for assessing the transition from early AGI (human-level intelligence) to late AGI (superintelligence), and suggested that decentralized AI infrastructure could support a beneficial AGI path. Open, decentralized AGI development could also mitigate global tensions.
He introduced the BGI Nexus initiative, a community effort to guide AGI ethically, reinforcing the need for collective action in this transformative period. In a later session, Esther Galfalvi and Haley Lowy zoomed in on BGI Nexus.
José Luis Cordeiro delivered an enthusiastic talk about AGI capable of thinking and performing tasks across the full range of human abilities. He called AGI the most disruptive technology of our time. He contrasted his lack of fear toward AI with his concern about human stupidity, suggesting AGI could guide humanity toward a peaceful future. José, who is associated with the Millennium Project, shared scenarios he helped develop, including one where a robot becomes Japan’s prime minister in 2035, preventing a nuclear conflict between North Korea and South Korea in 2032.
The talk featured short videos of advanced robots, showcasing their impressive capabilities. As a founding faculty member of Singularity University, José emphasized that AI is a savior, not a threat, and cited rapid technological progress as evidence of exponential growth. He predicted that by 2045, or possibly earlier, humanity could achieve immortality and the technological singularity, where human and machine intelligence merge, citing advancements in genome sequencing and biotech.
José discussed his book "The Death of Death," co-authored with David Wood, which explores curing aging, and noted its popularity, including with Argentina’s president. He highlighted global interest in longevity from leaders like Putin and Xi Jinping. Touching on brain evolution, he mentioned Neuralink’s work and the cultural differences in robot acceptance, particularly in Japan.
An interesting discussion moderated by Mihaela Ulieru contrasted the proactionary approach to AGI, defended by Ben and Max More, with the precautionary approach, defended by David Wood and Gregory Stock (though, Ben noted, neither of the latter two is really a "doomer").

Day 2
Roman Yampolskiy, the author of "AI: Unpredictable, Unexplainable, Uncontrollable," gave a talk via Zoom. He noted that, according to prediction markets and research leaders, current AI with scalable architectures could near AGI by 2027. This progress promises abundance but also raises concerns such as unemployment and loss of control.
He then addressed the downsides, citing a recent statement signed by thousands calling for a ban on superintelligence research due to risks akin to nuclear weapons. Roman’s research explores AI safety challenges in development, training, and deployment, such as self-improvement and malevolent misuse. He argued that explaining the behavior of complex AI models, predicting superintelligent actions, and verifying safety proofs are challenging to the point of impossibility. Even monitoring advanced AI seems to exceed human comprehension.
Roman argued that controlling superintelligence is unfeasible, as no system can guarantee error-free operation against self-modifying, unpredictable agents. Historical AI accidents, he noted, show how systems fail in their intended roles, with errors growing exponentially. Surveys suggest that only a third of experts believe control is possible, and many estimate unacceptably high odds of failure. He concluded that governance cannot solve these technical problems, urging caution over unchecked AGI development.
A panel discussion on the future of biology explored two paths to extending human life and capabilities: biological enhancement and digital transcendence (uploading consciousness into a digital form). Max More, Kennedy Schaal, José Cordeiro, and Chris Kudla debated whether these paths are separate or intertwined, questioning if humanity will enhance biology, digitize consciousness, or combine both approaches.
The panelists challenged the idea of a strict divide between biological and digital approaches. Biology is already computational: DNA acts like self-debugging code, which suggests that AI could work with biological systems to evolve humanity. There are predictions that both biological immortality (Aubrey de Grey) and digital immortality (Ray Kurzweil) might be achieved by 2045. Preliminary practical steps include biological therapies to slow aging, which feel more immediate than digital uploading.
The panel agreed that combining biological and digital enhancements is likely. For example, nanomachines could connect cognitive and emotional brain pathways. Neural implants might link brains to AI, enhancing abilities like memory or communication. An interesting question is whether neural simulation could truly preserve consciousness (emulation) or merely mimic it. The panelists also discussed practical enhancements like better memory, vision, or even wings for flight, reflecting desires to overcome biological limits. They concluded that AI could accelerate understanding of aging and biological systems, paving the way for both biological and digital advancements in the next few decades.
Day 3
Wrapping up, Ben emphasized that things are "getting real": ongoing technical advances mark a shift from theoretical prototypes to practical application. This mirrors the evolution of neural network technology over the last decade, and could enable rapid growth of AGI-capable technologies.
Ben praised SingularityNET's BGI Nexus initiative and the various projects it is funding. AI now empowers small groups worldwide to solve local problems from the bottom up.
He discussed the broader impacts of AI, from financial systems to geopolitics and the development of next-generation decentralized systems. AI developers face the challenge of creating radically new systems while collaborating with big tech. Ben thinks AGI could arrive in 2 to 15 years, with 10–15 years being the pessimistic end of the range, and that this is humanity's most critical era.
Instead of a single exponential singularity, Natasha Vita-More expects "surges" and the dynamic form of evolution found in cybernetic systems. She introduced her Human + AI project, which views humans as part of an interconnected bio-cybernetic hybrid ecosystem. She revisited her 1995 "Primo Posthuman" design, which was criticized as inhuman at the time but now looks timely. An analogy, she said, is the evolution of in-vitro fertilization (IVF) from controversy to accepted practice. Vita-More also elaborated on the concept of the "metabrain" and Eric Drexler's molecular nanotechnology.
Humans aren't, and have never been, "pure," she said. We hacked DNA via epigenetics and evolved with bacterial mitochondria that fueled our evolutionary progression. "DNA breakout" is a metaphor for evolving from biology to technology-mediated consciousness, enabling abundance via molecular manufacturing for basic needs and allowing diverse human values. Natasha emphasized that cognitive, mental, and emotional agility is paramount.
Gregory Stock, whose new book "Generation AI: The Transformation of the Human Being" will be published on December 15 (the Kindle edition can be pre-ordered at half price from Amazon until next month), focused on AI's imminent impact on humans. Greg identified common avoidance tactics: catastrophizing (e.g., Eliezer Yudkowsky's "everyone dies"), denial, and thinking only of utopian futures while bypassing the challenge of the transition to a world with AI.
Greg argued that AI won't destroy us, because of our interdependence. History suggests that the real danger is not control by AI, but control by humans.
For the "Generation AI" born with AI, individuality could be eroded. AI could democratize expertise and universalize abundance. But dependence on AI will deepen. AIs will be coaches, teachers, guardians that protect us from assaults (phishing, scams) and even lovers. Greg envisioned a near future where AI-powered avatars simulate (or emulate?) the deceased vividly. This will change our perception of death.
Later in the day, in conversation with Haley Lowy, Greg's daughter Sadie Stock, who as a twenty-something is not quite a member of Generation AI, elaborated on her generation's attitude toward AI. Of course some use AI to do less, but others use AI to do more. She doesn't think the use of AI is eroding our ability to think independently. Interestingly, most young people of her generation are more interested in the immediate future than in what will happen later on the road to ASI.
My overall impressions
I'm an observer who is very interested in all these things but prefers to remain agnostic on many points. I agree with Ben that current LLM technology is not scalable up to AGI and then ASI, but unlike Ben, who sees the LLM glass as half empty, I tend to see it as half full. Do we need major breakthroughs and entirely new paths to AI, or is the current path almost good enough? The answer I give myself is that time will tell, and likely soon. In any case, I want to study Ben's technical proposals in much more detail than I have done so far.
I was impressed by the enthusiastic drive and sense of purpose that I could feel in the air, even watching online from afar. If circumstances permit, I hope to attend future events in person.
Philosophically, I'm one of those who think that giving birth to our superintelligent mind children is our cosmic destiny and duty. Of course, when it comes to the development and deployment of AI today, I'm very much in the proactionary camp and very much against the negative "Statement on Superintelligence" that was launched a few days ago and mentioned at BGI 2025. I'll confess that I don't pay too much attention to AI alignment and all that, because I feel that developing AGI and then ASI is good and the universe will reward us for doing so. Is this a naive position? Perhaps, but I think strongly optimistic positions are needed to counterbalance strongly pessimistic ones.
