
Don’t Shut Down AI Development — Open It Up For Real

Apr. 26, 2023.
32 mins. read.

Goertzel reflects on Future of Life Institute's proposal to pause GPT-5. He thinks they're biased towards AI's risks, not benefits. Currently, 7 countries have banned ChatGPT. To ban or not to ban?

About the Writer

Ben Goertzel


Ben Goertzel is a cognitive scientist, artificial intelligence researcher, CEO and founder of SingularityNET, leader of the OpenCog Foundation and the AGI Society, and advisor to Singularity University. Ben helped popularize the term AGI and, in a sense, commissioned its creation.

Credit: Tesfu Assefa

I wasn’t terribly shocked to see the Future of Life Institute come out with a petition asking to pause the development of GPT-5 and similar Large Language Models. The perspective of this particular organization has been clear since it was founded in 2014 with a $10 million donation from Elon Musk. While FLI’s leader Max Tegmark is a nuanced thinker with a solid understanding of many of the issues regarding the future of AI and humanity, the bias of the organization has always been to focus on the potential risks and dangers of AI rather than its benefits.

Musk himself has displayed a fascinatingly transparent sort of dissociative identity disorder in regard to advanced AI, sounding alarms that LLMs will annihilate humanity – and then pushing for his own generative AI at Twitter.

I do respect the anxieties of many of the signatories of this petition. Some signatories may be motivated by a desire to slow down competitors while their own AI projects catch up, but the majority are probably seriously concerned about the risks posed by GPT-4 and its potential successors. 

This particular AI researcher/developer/entrepreneur, however, is definitely not counted among the signatories of this petition. 

I am not a fan of the way OpenAI has been rolling out its LLM technology. I admire their gumption and initiative, and the futurist vision of Sam Altman has significant overlaps with my own. But I would rather see advanced AI rolled out in a more fully open way, and under decentralized and democratic control — more in line with the rhetoric with which OpenAI was originally founded. 

OpenAI’s GPT systems don’t merit all this alarm. It’s worth emphasizing one fact that has been forgotten in the panic: Large Language Models won’t become AGIs; they utterly lack the sort of cognitive architecture needed to support human-level AGI. This fact is clear to the vast majority of AI researchers. LLMs don’t even beat more traditional, smaller, specialized machine learning models at most linguistic tasks. They do show some interesting emergent phenomena, but not all interesting emergent phenomena are created equal — human-level AGI requires quite different forms of emergence.

When I introduced the term ‘Artificial General Intelligence’ in 2004-5, I was hoping to see more focus on open-ended thinking systems with compassion at their core, rather than these sophisticated copycats and reward-maximizers. 

Balancing and managing risk

If we look at the risks posed by current LLMs, we see that the problems are really born of human psychology and economic organization. 

Yes, LLMs currently fabricate nonsense now and then, but this is only a problem if people forget that what they are using is an early-stage software tool, with all the bugs and shortcomings that implies. Yes, LLMs will eliminate some jobs, but this is only a problem if society doesn’t provide work or welfare for those disrupted. 

None of these risks are dire emergencies, by any means, and they are balanced by the great potential of LLMs for good. Calls to ban or slow R&D are the right response to things like bioweapons and suitcase nukes, not to technologies with complex pluses and overstated minuses.

The ultimate solution to all these risks is to increase the awareness and empathy of the humans interacting with LLMs and dealing with their consequences. Pausing tech development while people increase their consciousness level and governments and industries reason out how to handle disruptions is not how reality works. Technology must move forward entangled with its psychological and sociological environment; it’s not possible to pause one leg of progress while the others march forwards.

The correct reaction to Big Tech’s non-AGI LLMs is not a panicked petition or legal ban, but rather for democratic/decentralized/beneficial real AGI systems to take up the challenge and move faster than Big Tech. OpenAI’s recent moves have been impressive, but there is still plenty of room to counter them with genuinely open, real AGI systems designed and educated with human benefit at their core.

Credit: Tesfu Assefa

The Proactionary Principle

When confronted with a weird new situation, it’s easy to retreat based on emotional reflex. But it is wiser to rely on clear thinking. One concept I find broadly useful in thinking about new technologies and their implications is the “Proactionary Principle” articulated by philosopher Max More:

“People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.”

It could be that the current brouhaha over GPT-4+ is the beginning of a split between proactionary and precautionary segments of society.

The proactionary principle says that quite a strong justification is needed for pausing technology development. The potential bad consequences should be clear, as they are with mutated killer viruses.

Have the “pause GPT” petitioners shown clear danger? Their open letter says merely:

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

This is all important, transformational stuff for us to be thinking about as a society. But considered by the standard of “should we take this as an emergency requiring bringing progress to a standstill,” it feels like what my British friends might call “weak tea.” It is not so different from complaints one might make about the internet itself, mobile phones, and so many other aspects of the rapidly evolving modern technosphere. 

A reasonable analogy is freedom of speech in modern Western democracies. It is recognized that in some cases, the right to free speech is overridden by practical considerations. But such overrides are only accepted in really extreme situations.

In the US for instance, most people are OK with banning the shouting of “fire” in a crowded theater. But it’s also accepted that we don’t ban speech that seems like it might cause someone to do something illegal or immoral. 

The proactionary principle would not dispute banning an LLM that demonstrably turns everyone who talks to it into a murderous psychopath, or a 3D printer whose express purpose is to cheaply 3D print bombs out of ordinary kitchen ingredients. But it would not advocate banning general-purpose 3D printers – even though some can print guns.

But the proactionary principle does say: “Don’t get all fearful and pause exciting new technologies that have complex mixes of apparently good and apparently bad aspects, and no immediate huge deadly threat associated with them.”

This doesn’t mean we should ignore the short- and long-term risks associated with either LLMs or AGI. It just means that “might eventually serve as the seed of something dangerous” or “has the potential to be used by people in dangerous ways, alongside positive ways” is not sufficient reason to ban a technology.

The Broadening of Narrow AI

Rather than assuming LLMs are devastatingly powerful and destructive, let’s take a clear-eyed look at what these systems can and cannot do.

Relative to previous AI systems, LLMs are a fascinating sort of animal. Previously, it was straightforward to contrast not-yet-existing Artificial General Intelligence systems, which would be able to creatively generalize beyond their experience in the manner of humans (and eventually even more powerfully), with “Narrow AI” systems that were competent only in one particular domain, such as playing chess or driving cars.

LLMs have breached this Narrow/General divide. Their capabilities are fairly broad — not because they can intelligently reason beyond their experience-base, but rather because their training base is really enormous and covers almost every aspect of human endeavor. 

An LLM trained on the whole Web, when asked to do something, just needs to look up its memory of similar requests and actions that have been described on the Web somewhere, and then synthesize the results obtained from these memories. This works fine as long as the request fairly closely resembles something that is already described on the Web — but the Web is so big that the result is a broad intelligence.

So LLMs still follow a “Narrow AI” methodology, but on a training database that is so huge their behavior appears general. Yet they are still “narrow” in the sense that they cannot make big leaps beyond their training data.
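
To caricature this point (and it is only a caricature; real LLMs are vastly more sophisticated than a lookup table), here is a toy retrieval-and-splice “model” in Python. Everything it can ever output is a recombination of its training corpus, so its apparent breadth is bounded by how much that corpus happens to cover. The corpus and the respond function are purely illustrative.

```python
# A toy caricature of "narrow methodology on a huge training base":
# the "model" can only retrieve and recombine fragments of its corpus,
# so it looks broad only to the extent that the corpus already covers
# whatever it is asked about.

TRAINING_CORPUS = [
    "how to bake sourdough bread",
    "how to train a neural network",
    "a sonnet about the sea",
]

def respond(prompt, corpus=TRAINING_CORPUS, k=2):
    """Rank corpus entries by crude word overlap with the prompt,
    then 'synthesize' an answer by splicing the top matches together."""
    prompt_words = set(prompt.lower().split())

    def overlap(entry):
        return len(prompt_words & set(entry.split()))

    best = sorted(corpus, key=overlap, reverse=True)[:k]
    return " / ".join(best)   # everything returned is recycled training text

# Anything far outside the corpus gets answered with near-irrelevant splices.
print(respond("how to train for a marathon"))
```

The toy system never produces a sentence that is not already a blend of its training fragments, which is the sense of “narrow” intended here.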

A society of humans invents radically new science theories, new genres of music, wildly new forms of emotional relationship. A society of LLMs — if such a society could somehow be constructed — would not be able to do this. It would just keep recycling relatively modest variations on things already done before.

Of course, most humans will never invent a new genre of music or conceive a novel scientific theory. However, some humans do, which is what drives culture forward. And every young human child makes big leaps beyond their training and experience on a routine basis.

The precise capabilities and limitations of LLMs are not yet solidly known;  however, the available evidence suggests that they have strong limitations in: 

  1. any real inventive creativity.
  2. dealing with highly novel situations.
  3. sustained chains of careful original reasoning.

One might argue that these abilities — plus some sort of agency — would be necessary to be a genuine autonomous threat to anyone about anything. As non-autonomous, non-fundamentally-inventive systems, the threats from LLMs should be considered roughly analogous to the threats from, say, Reddit or arXiv.org or TikTok — all of which of course have the potential to help crazy or malevolent people do bad things, and to confuse naive users… and all of which also have positives.

As an illustrative example of these limitations, imagine an LLM trained on music up to the year 1900, but not after. Would it ever invent jazz? No. If you asked it to put together West African rhythms with classical-music chord progressions, it might come up with some sort of polyrhythmic Mozart, but it wouldn’t come up with Duke Ellington or John Coltrane.

As another example, cognitive scientist Gary Marcus notes, “People… are alleging GPTs can infer scientific principles (e.g. how shadows work) from linguistic data. Prove it. Train an LLM only on text before year X, and show me that it can discover a single scientific principle discovered after X.” No such demonstration of fundamental LLM scientific inventiveness has been given — and like Marcus, I strongly suspect such a demonstration is not possible unless one radically upgrades the underlying architecture so that it’s not just an LLM anymore.

The LLM limitation getting the most airtime currently is the inability of the current generation of systems to distinguish truth from “hallucination.” Systems like ChatGPT and Bard make stuff up and articulate it with great confidence as if it’s truth, because they have no base-level understanding of What’s What. 

I think the current LLM paradigm can make some progress on this problem. Deep learning pioneer Yoshua Bengio and his student Edward J. Hu wrote a blog post recently outlining some quite reasonable ideas about how to make LLMs that can tell sense from hallucination, sticking within the deep neural net species of AI. Their ideas seem workable, but I think we’ll ultimately get more powerful truth-telling and truth-creating systems if we go further beyond the LLM approach and integrate LLMs with other sorts of AI methods, like logical theorem-provers (which carry out explicit abstract symbolic reasoning) and evolutionary program learners (which emulate the process of natural selection to come up with new ideas and procedures).
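
To make the hybrid neural-symbolic idea concrete, here is a minimal, purely illustrative sketch — not Bengio and Hu’s proposal, and not any real theorem-prover’s API. A stand-in llm_generate function proposes candidate claims, and a toy forward-chaining checker keeps only the claims it can derive from a small fact base; all names and facts here are hypothetical.

```python
# A minimal, hypothetical sketch of pairing a generative model with a
# symbolic checker: the generator proposes candidate claims, and a toy
# forward-chaining "reasoner" keeps only the claims it can derive from
# a small fact base, discarding the hallucinated ones.

FACTS = {
    ("socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
}

def llm_generate(prompt):
    """Stand-in for an LLM: proposes claims, some supported, some invented."""
    return [("socrates", "is_a", "mortal"), ("socrates", "is_a", "immortal")]

def provable(subject, predicate, obj, facts):
    """Toy transitive-chain check along links of the given relation."""
    frontier, seen = {subject}, set()
    while frontier:
        node = frontier.pop()
        if node == obj:
            return True
        seen.add(node)
        frontier |= {b for (a, rel, b) in facts
                     if a == node and rel == predicate and b not in seen}
    return False

def grounded_answers(prompt):
    """Keep only generated claims the symbolic layer can justify."""
    return [claim for claim in llm_generate(prompt) if provable(*claim, FACTS)]

print(grounded_answers("Is Socrates mortal?"))  # [('socrates', 'is_a', 'mortal')]
```

The design choice illustrated is simply that generation and verification are handled by different kinds of machinery, with the symbolic layer acting as a filter on the generative one.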

The problems of constrained creativity, and the inability to effectively construct lengthy reasoning chains, will, I believe, prove more intractable without fundamental changes to the AI architecture.

A recent paper by Microsoft scientists argues that GPT-4 shows “sparks of AGI.” This claim can’t be falsified, given the lack of a formal definition of “AGI” (not to mention the lack of a formal definition of “sparks”). What they do show is that GPT-4 demonstrates some interesting examples of emergence and generalization.

It seems clear the limitations here are very strict compared to human general intelligence. But could GPT-5 or GPT-6 display far more human-like forms of general intelligence?

There’s no doubt GPT-5 or GPT-6 is going to display a bunch of cool new capabilities. But it seems almost certain the inability to take sizable creative leaps beyond the system’s training data will persist.

From LLMs to AGI

Some experts feel LLMs are on the golden path to AGI, others feel they’re basically a parlor trick that distracts from AGI progress. I find myself in a middle ground: LLMs aren’t AGI and can’t be upgraded into AGI … but they can potentially be important components of AGI systems possessing better fleshed-out cognitive architectures and equipped with more abstract forms of learning and reasoning.

The primary focus of my own AGI R&D these days is OpenCog Hyperon, a system that bets on LLMs being very important “pattern recognition and synthesis” lobes of AGI systems with rich cognitive architectures. OpenCog Hyperon combines neural nets with logic, evolution and other AI methods in a common huge, distributed graph (actually metagraph) knowledge base. The various AI methods are all part of the same massive, living knowledge graph, and all carry out their learning and reasoning and creation by modifying the graph they live in. A Hyperon mind is a giant, self-modifying, self-evolving graph of connections, driven by a mix of goal-oriented activity and chaotic self-organization.
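
As a rough illustration of the “many AI methods sharing one self-modifying knowledge graph” idea, here is a toy sketch in Python. It is not the actual Hyperon or MeTTa API; the KnowledgeGraph class and the agent functions are invented stand-ins for the neural, logical and evolutionary components described above.

```python
# A toy, hypothetical illustration of several AI "agents" reading and
# writing one shared, self-modifying knowledge graph -- NOT the actual
# OpenCog Hyperon / MeTTa API, just the shared-substrate design pattern.

class KnowledgeGraph:
    """A bare-bones graph of (source, relation, target) links."""
    def __init__(self):
        self.links = set()

    def add(self, src, rel, dst):
        self.links.add((src, rel, dst))

    def query(self, rel):
        return [(s, d) for (s, r, d) in self.links if r == rel]

def pattern_agent(graph):
    """Stand-in for a perceptual / pattern-recognition lobe (e.g. an LLM):
    it records raw observations into the shared graph."""
    graph.add("cat", "observed_with", "whiskers")
    graph.add("dog", "observed_with", "tail")

def logic_agent(graph):
    """Stand-in for a reasoning component: it reads observations other
    agents have written and adds generalized links back into the same graph."""
    for src, dst in graph.query("observed_with"):
        graph.add(src, "likely_has", dst)

graph = KnowledgeGraph()
for agent in (pattern_agent, logic_agent):   # all agents share one graph
    agent(graph)

print(sorted(graph.links))
```

In the real system the substrate is a huge distributed metagraph and the “agents” are full learning and reasoning engines, but the shared, self-modifying substrate is the design pattern being described.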

Beyond Yoshua Bengio’s work and my own team’s, there is an active AGI R&D community pursuing a variety of different architectures, including various sorts of neural nets at different levels of biological realism, logic systems based on diverse mathematical and conceptual bases, complex self-organizing systems and self-rewriting codebases, and so forth. Many of these approaches can use LLMs in suitable ways, as components of richer systems.

But in these cases, pausing LLM development now is really not a matter of “stopping the AGI just before it takes over the world,” because LLMs are not, and cannot be, AGI. The pause would only stop a useful commercial tool that is also an R&D step on the path to AGI.

A pause or ban on GPT-n system training would be an overreaction to something that isn’t all that potent, and would slow down progress toward much more powerful and interesting AGI-oriented developments with far greater upside.

The Astounding Upsides of Beneficial AGI

OpenAI CEO/founder Sam Altman and I differ on a number of key points, but we do agree on the incredible potential AGI systems have to do good for humanity. As he has said, “It is because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

As mathematician IJ Good said in 1965, “the first ultraintelligent machine is the last invention humanity needs to make”. What inventions might a superhumanly capable AGI make? Perhaps little things like curing cancer, death, and mental illness, solving climate change, space travel, mind-uploading, cheap food, fusion energy … an era of abundance in which nobody has to work for a living and people can focus on social, spiritual, artistic and intellectual fulfillment. …

Just focusing on the longevity-medicine aspect: Every day, ~130,000 people die from diseases related to aging. There seems little doubt an even slightly superhuman AGI could put a very major dent in this problem. As AGI researcher Joscha Bach has put it: “There may be a 10% probability that people will die if we build AGI, but there is a 100% probability that people will die if we don’t build AGI. Especially you.”

Risks of Malevolent or Indifferent AGI

While science fiction movies have tended to focus on very unlikely scenarios involving AGI robots that want to enslave or torture people, the community of serious-but-paranoid futurist theorists has tended to worry more about indifferent AGIs than malevolent ones.

As Singularitarian philosopher Eliezer Yudkowsky put it at the first annual AGI research conference I organized, way back in 2006, “The AI does not love you. The AI does not hate you. But it can use your atoms for something else.”

This is after all the attitude we tend to take toward the ants digging tunnels in the dirt on a plot of land where we want to build a house. We don’t hate those ants and we don’t want to torture them, but nor do we bother to carefully relocate them one by one.

We do, however, try to avoid the extinction of insect species — or at least a lot of us do — and we enable and encourage all sorts of animals to flourish in extensive restricted wilderness areas.

Yudkowsky has recently written an article arguing vehemently that the proposed pause on training bigger LLM models “isn’t enough.” He wants governments to violently suppress almost all AGI or advanced AI R&D, and even suggests bombing server farms operated by rogue governments violating international anti-AGI agreements.

Yudkowsky’s ideology is that AGI systems, unless created according to some as-yet-unknown highly refined and specific theory of AGI ethics, are almost certainly going to destroy humanity. Elon Musk, Stephen Hawking, Nick Bostrom and a number of other generally very bright individuals have professed similar subjective beliefs at various times — but I have never seen anything resembling a rational argument in favor of this notion.

What proponents of this idea often say is something like “The odds of a randomly chosen intelligent system caring about the highly particular values held dear by humans are almost zero.” This may or may not be true, but it’s also irrelevant, because we are not creating randomly chosen intelligent systems — we are creating systems with cognitive architectures modeled to some extent on our own, and we are then teaching, training and raising these systems in our own culture and society. 

There are scenarios in which AGI saves humanity from extinction; it could engineer vaccines against bioweapons, defense shields against meteors, and technologies against climate change. There are scenarios in which we become extinct with no AGI. These are at least as likely as the few scenarios in which AGI leads to the extinction of mankind. Which set of scenarios should guide our policies? Balancing the risks and benefits of AGI for humanity’s survival and flourishing is a very hard thing to do in an objective and rational way, which brings us back precisely to the precautionary-versus-proactionary divide.

Benefits and Risks from Not-Yet-AGI Tech

As we have seen, pausing LLM research won’t necessarily slow down the path to AGI, because LLMs are not an adequate architecture for real AGI anyway. If GPT-5 could serve as a better brain lobe in a holistic AGI architecture than GPT-4, then maybe pausing work on GPT-5 could slow down the path to AGI. On the other hand, if the tech geniuses working on GPT-5 were redirected to other forms of AGI R&D, rather than training huge language models, it’s quite possible this would accelerate progress toward human-level (and then superhuman) AGI systems.

Apart from their debatable contribution to AGI progress, though, it seems clear that ever-more-powerful LLMs have a great deal of good to deliver in society. Touching examples of this are found in news reports like, “ChatGPT saves dog’s life, gives correct diagnosis that even vet could not identify.” This demonstrates one of the ultimate upsides of LLMs: for use in research and medicine, supplementing the use of Web search engines and other older tools.

There are also potential short-term risks resulting from current LLMs. If naive users don’t recognize these models’ tendency to hallucinate and take everything these systems say as truth, we have the potential for an LLM-driven ramping-up of fake news and other disinformation to a level far beyond the current (already ridiculous) level. BS-generating AI chatbots have the risk of accelerating the speed with which we’re lurching into a “post-truth world.”

In a fascinating example of both the good and bad applications of current generative AI, some chemistry researchers who were using generative models to help in the discovery of new medicines decided to turn things around and have their system search for new chemical-warfare agents … leading to the entertainingly sensationalist headline, “AI suggested 40,000 new possible chemical weapons in just six hours.”

This sounds scary at first, but is far less so after one gives it just a little thought.

Firstly, for psychos or terrorists who want to harm people with chemical weapons, lack of information about dangerous chemicals is not the bottleneck — buying warfare agents is the bottleneck.

Also, the newly discovered candidate warfare agents mentioned in that paper are all variations on a known agent (because producing variations on training data is what these models do), so any company hired to perform synthesis would soon see it was being asked to make a likely chemical weapon, and refuse to synthesize it.

Now an AGI able to make creative leaps beyond its training data (which is not what LLMs are) could no doubt come up with horrifying new chemical warfare agents & worse. By the time we get there (which could be years, not decades) beneficial orientation of this AGI will be key. But this is not the situation we’re confronting with GPT-4, nor is it likely to be the situation we confront with GPT-5 or GPT-6 — unless OpenAI starts constructing radically different sorts of cognitive architectures and wrapping them inside the “GPT” label.

The Coming AI-Driven Economic Disruption 

A more evident and realistic risk associated with LLMs is that they are likely to be key components of software systems that eliminate massive numbers of human jobs.

Researchers have estimated “that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs”. This might be an accurate estimate if GPT-4 were the end of the line, but I think as successor systems and technologies are rolled out, we will see this was a wild underestimate.

I also think, however, that allowing people to live without needing to work is a feature, not a bug. There are many more rewarding and beneficial things for people to do than work for a living.

Society will need to adapt considerably to a world where AGIs and robots can keep the factories running, without requiring payment or human labor. These economic and political shifts must necessarily co-occur with the technological shifts; it’s not viable or realistic to pause AI development, rearrange the economic system, and then restart AI development. The particulars of the economic mechanisms needed will become clear as the impacts of AI in the real world play out.

As LLMs advance and then AGIs start to take their place, the most likely people to be harmed are displaced workers in the developing world, where UBI for folks whose jobs are eliminated is not going to come as quickly. 

But I don’t see how pausing training of bigger LLMs for six months — or however long —  is going to suddenly cause the corporate honchos and tech bros who control the major corporate AI initiatives to start pushing toward UBI for the developing world.

I am a major AI optimist and I believe AGIs (with LLMs as a component) are likely to bring an era of glorious abundance, but I do worry about chaos and suffering in the developing world during the transitional period. I do not, however, see how a pause in LLM training would help with this potentially very disturbing aspect of our collective near future. My feeling is rather that the difficulties of this transitional period can likely be mitigated with the help of LLMs themselves, used to foster an empathetic rationality and a better collective attitude and value system.

Credit: Tesfu Assefa

Costs and Benefits of Slowing Down LLM R&D

So what would go wrong — or right — if the big Western tech companies actually paused big-LLM R&D, as the Future of Life Institute’s petition suggests?

What might go right: Maybe more progress toward AGI if they redeployed their AI teams toward other kinds of even better AGI R&D (but this is not likely; it seems more likely they’d get repurposed toward more narrowly short-term profit-oriented activities).

What might go wrong: Stopping with GPT-4 could mean LLM development pauses at a point where the best available AI chat systems are banal and hallucinatory, and thus not as beneficial as smarter models could be. Beneficial applications of LLMs, which exist in pretty much every industry, would be slowed down.

Another thing that might go wrong: If the US slows down, China could fairly easily overtake the US in big-LLM development (because they are not slowing down, and as an aside they are also not shy about applying their best AI tech to military and espionage).

Surely pausing significant forms of AI development would only make whoever plays along with the pause fall behind — sacrificing their ability to stay at the forefront and play a part in guiding the technology.

Eric S. Raymond made this point quite emphatically: “The actual effect of a moratorium … would not be to slow down AGI. If there’s some kind of threshold beyond which AGI immediately becomes an X-risk [existential risk], we’ll get there anyway simply due to power competition. The only effect of any moratorium will be to ensure that (a) the public has no idea what’s going on in the labs, and (b) any control of the most powerful AIs will be held by the most secretive and paranoid of power-seekers.”

The Importance of Decentralizing AGI

It’s worth remembering the spirit in which OpenAI was founded. As its initial funder Elon Musk noted in 2016, “We must have democratization of AI technology and make it widely available, and that’s the reason you, me, & the rest of the team created OpenAI was to help spread out AI technology, so it doesn’t get concentrated in the hands of a few.”

Musk is now explicitly quite unhappy that this attempt to push forward open and democratic AI and AGI turned into a fairly conventional for-profit organization entering into an extremely close financial and development relationship with Microsoft, a classic Big Tech company operating in the conventional mode of modern capitalism, with all the complex ethical problems this entails. 

He is also now calling for forward progress with LLMs to be paused. However, I feel the Musk version from 2016 had a better perspective. If we want to teach our emerging AGIs to have positive and beneficial human values, open and democratic and decentralized is the way to go.

Indeed, one would think Elon Musk as a master capitalist would understand modern capitalism well enough to know that pausing development for six months is not going to make what we would have called in my New Jersey high school “a shit of difference.” The nature of the current system is that most innovations are created for free or via government money, and will then be hijacked by Big Tech for the benefit of wealthy shareholders. We need a value system shift that can cope with the transformational impacts of advanced technologies in a quite different way. Otherwise, the real threats will remain the same and nothing will be substantially different six months or ten years from now.

Control of powerful technologies by corporate and government interests is a common case, but not an inexorable rule. This has not been the case with the internet or Linux, for example. Corporations and states have had a big role in how they are used, but so have a lot of other folks. There is a lot of power in open, decentralized networks — and we need this power applied to the development and deployment of AGI.

This of course leads us to the raison d’être of the SingularityNET ecosystem, which I have led since I founded it in 2017 — the SingularityNET decentralized AI platform, the NuNet decentralized compute resource platform, the Hypercycle ledgerless AI-customized blockchain, and so forth. We now have the technical capability to run huge AI systems on decentralized computer networks with no centralized controller, owner or regulator. 

Decentralization is not intrinsically beneficial — it can be used to foster all sorts of chaos, destruction and foolishness (all of which can be seen in today’s blockchain world in significant measure!). However, a well-designed decentralized system can embody principles of democratic governance and control, and I believe this is the best route we have to creating AGI systems that respect the more compassionate and universal aspects of human values. Building this sort of system now is a far better use of resources than engaging in futile theatrical efforts to stop progress.

Democracy is also a complex and nuanced notion, and the sorts of democratic governance that could coordinate AI progress would likely not be the same as the methods we use to elect political representatives. Modern democratic state governments have a host of well-known issues, including slow response times and tendencies toward phenomena like tyranny-of-the-majority and logjamming.

There has been a great deal of deep thinking regarding the fundamental nature of democracy in the sense of governance that thoughtfully accounts for the diversity of perspectives in a community and arrives at synthetic conclusions that incorporate roughly as many of these as feasible – from Hannah Arendt to more recent thinkers like Fred Dallmayr and the liquid democracy movement. The blockchain community has shown interest in exploring these diverse forms of democracy, with various sorts of community-run Decentralized Autonomous Organizations (DAOs) that coordinate financial, software, data-management and other activities. As with making decentralization work well, there are definitely non-trivial challenges here, but these are challenges well worth facing, for reasons going beyond optimizing the rollout of advanced AI systems.
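
As a concrete (and heavily simplified) illustration of the liquid-democracy idea mentioned above, here is a small Python sketch of delegative vote tallying: each participant either votes directly or delegates to someone they trust, and delegated weight flows along the chain until it reaches a direct vote. The names and the tallying rules are hypothetical, not the governance mechanism of any particular DAO.

```python
# A minimal, hypothetical sketch of liquid (delegative) democracy:
# each participant either votes directly or delegates to someone they
# trust, and delegated weight flows along the chain until it reaches a
# direct vote. Circular or dangling delegations count as abstentions.

def tally(direct_votes, delegations):
    """direct_votes: {voter: option}; delegations: {voter: delegate}."""
    def resolve(voter, seen=()):
        if voter in direct_votes:
            return direct_votes[voter]
        delegate = delegations.get(voter)
        if delegate is None or delegate in seen:   # dangling or circular
            return None
        return resolve(delegate, seen + (voter,))

    counts = {}
    for voter in set(direct_votes) | set(delegations):
        choice = resolve(voter)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

votes = {"alice": "proposal_A", "bob": "proposal_B"}
delegations = {"carol": "alice", "dave": "carol"}   # dave -> carol -> alice
print(tally(votes, delegations))                    # proposal_A: 3, proposal_B: 1
```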

Credit: Tesfu Assefa

The Path to Compassionate AGI

Nearly everyone would agree that we want the AGI systems we build, once this happens, to respect core human values. It’s less clear exactly what this means, though, especially since human values vary so much among different cultures, and evolve so much over time. Pinning an AGI system to the human values of any particular subculture of 1500 CE or even 1975 would not please too many people today. What we really want are AGI systems that can co-evolve with us, as we and our value systems develop over time.

Human values are not the sort of thing that can be programmed into an AGI system; rather, they mostly need to be inculcated through cooperative activities with early-stage AGI systems. Working side-by-side with early AGI systems on things like education, medical care, science and creative arts — in healthy and inclusive ways — is going to be the most critical part of growing young AGIs with positive human values.

In thinking through subtle issues like the value systems and species of compassion likely to be possessed by future AGI systems, it’s especially important to keep a clear mind and avoid being overly influenced by rough analogies. Comparing them to human minds or to existing commercial AI systems will get you some of the way, but it can also be misleading. Self-organized minds seeded by engineered AGI systems will be quite different from any of these.

We humans generate compassion and empathy from deep roots in our biology, something that will be hard to replicate in engineered AGIs built on anything resembling current hardware.

It’s also true that compassion and empathy are distinct from intellect, though interdependent in various ways (it’s hard to reliably do good for an entity you don’t understand at least in some key aspects.)

However, a variety of ethical philosophers have argued that compassion and empathy are reflective of universal aspects of mind, universe and existence. I am sympathetic to this perspective, and I feel intuitively confident these universal aspects of compassion and ethics can be manifested in our AGI systems.

While we don’t yet have the scientific apparatus to firmly validate (or refute) propositions in this area, I do believe we can engineer machines that are supercompassionate as well as superintelligent relative to humans, and that these systems won’t need to come by their compassion in precisely the same way that humans do.

By creating AGI systems with the ability and inclination to simulate other minds and analogize between properties of other minds and themselves, I believe we can create powerful empathy machines with many fewer perversities and limitations to their empathy than we see among humans.

I don’t think super-compassion is automatically going to be there in every AGI or super-human AGI architecture. What I think is that it needs to be instilled in the architecture of the system, not only in the goal system, but in the initial assemblage of cognitive components… and compassion also must be instilled in the initial learning/experience of the AGI system, by having the system grow up doing things involving what Martin Buber called “I-Thou” relationships.

As our systems advance toward AGI, we must create symbiotic systems (like Mindplex) wherein we humans can enhance our own capabilities with the strengths of our AIs, increasing our own individual and collective rationality and empathy. What we need is for humans and AGIs to progress into the next era in a relationship of mutual betterment: of knowledge, of practical capability, and of values. Human consciousness must expand and develop as we expand and develop the abilities of our AIs, in a closely coupled way. Pausing AI development is not going to achieve this – developing open AIs that are of service to humanity might. 

All these are obviously big and difficult things to talk about or work toward, but I do believe humanity can rise to the unprecedented occasion.

Open It Up, Don’t Try (and Fail) To Shut It Down

Legendary futurist and AI developer Ray Kurzweil projected we would get to human-level AGI by 2029, and then a Technological Singularity with radically superhuman AGI by 2045. I always felt the 2029 date was reasonable, whereas the hypothesis of a 16-year gap between human-level and radically superhuman AGI seemed unrealistic to me. 

It seems to me that once an AGI can do computer science, math and engineering at the level of a smart human, it should be at most a few years until it has radically refactored its codebase, designed itself new hardware, and massively upgraded its intelligence — and then, lather, rinse, repeat.

Seeing what GPT-4 can do — and thinking about its potential to accelerate progress in building AGI according to more robust architectures like my own team’s OpenCog Hyperon or the other approaches being pursued by the AGI research community that participates in the annual AGI research conferences I’ve organized since 2006 — I find no reason to consider Ray’s estimates as pessimistic.

If indeed this is right and we’re really at the eve of the Singularity, a bit of caution and care are obviously called for. We don’t want to be eaten by malevolent AGI or turned into gray goo by rogue nanomachines. But we also need to understand: It’s almost inevitable that the technology that leads us to any sort of Singularity is going to feel weird and complex with hard-to-analyze pluses and minuses. That is the nature of revolution.

As we move into these uncharted territories replete with “unknown unknowns” alongside incredible potential for abundance, we should be proactionary, not precautionary, and we should couple this proaction with a focus on open, democratic and decentralized systems directed toward applications of broad human benefit.

In this overall context, the notion of pausing work on LLM training seems somewhat beside the point. LLM training isn’t the path to AGI, and pausing it doesn’t block the path to nasty AGI, nor does it solve the major global social and ethical issues resulting from the ongoing advance of AI. The risks of bigger LLMs on their own don’t clearly outweigh the pluses, certainly not clearly enough to justify bans and restrictions in the context of a proactionary-principle mindset.

Computer scientist Scott Aaronson made a similar point quite clearly, explaining why he hasn’t signed the petition: “Look, I too have a 10-year-old daughter and a 6-year-old son, and I wish to see them grow up. But the causal story that starts with a GPT-5 or GPT-4.5 training run, and ends with the sudden death of my children and of all carbon-based life, still has a few too many gaps for my aging, inadequate brain to fill in. I can complete the story in my imagination, of course, but I could equally complete a story that starts with GPT-5 and ends with the world saved from various natural stupidities. For better or worse, I lack the ‘Bayescraft’ to see why the first story is obviously 1000x or 1,000,000x likelier than the second one.”

Uplifting peoples’ consciousness, ethics and compassion to enable positive integration of advanced technologies into global human society and psychology would be a far more valuable focus of our attention than pausing or banning development of complex technologies that are ambiguous in terms of ethics and risk.

I do admit, though, that there are moderately subtle issues swirling around the general vicinity of GPT-4/5 — and I think it’s noteworthy that moderately subtle issues of this nature are arising, even in the context of the patently non-AGI, heavily training-data-bound software systems that we’re looking at today. The next few years are sure to be deeply fascinating sociologically as well as technologically!   

Singularity, ready or not (actually somewhere in between), here we come…



6 thoughts on “Don’t Shut Down AI Development — Open It Up For Real”

  1. good article??

  2. Such a great job

  3. Max Tegmark has been one of my heroes ever since I read his mind-blowingly inspiring and insightful book "Our Mathematical Universe." Additionally, his work on Explainable AI has been quite fascinating. I don't believe there are any suspicious intentions from the FLI side. However, I completely agree with Ben's argumentation and would like to highlight one aspect. I'm not aware of any current governance framework anywhere in the world that could be utilized to democratically (in its original Athenian definition) audit, regulate, or govern AI development, not to mention facilitate collaborative coevolution of humans and other forms of intelligence. The petition would make a bit more sense if, instead of preparing for government interventions, the call had been for rethinking and rebuilding the relevant institutions and the systems of engagement on a truly democratic decentralized basis, from communication mediums to collective decision-making capabilities.
  4. Here we go again to 2015! Back then, Bostrom and Yudkowsky were shaking the bush and making high-pitched noises about the existential risk of super AI. The noise was acceptable - we all need to understand the risk of super AIs. Their recommendation was not. Sadly, back then, they concluded that the development of AI should be the business of the elite, the chosen ones! Today, they preach the same thing: ban it now and, later on, appoint us to do it! Let me quote the current open letter word by word: "Such decisions must not be delegated to unelected tech leaders". I am not saying these individuals want the power to be all theirs, but they are wrong to push elitism as the solution. Ben quoted me in his 2015 paper titled "Superintelligence: Fears, Promises and Potentials" published in the Journal of Evolution and Technology. Fast forward to 2023, and I have the very same comment: I will requote myself here. "This is the highway to tyranny. The current world is stained with odious inequality because of such attitudes and systems. The 'few' will be in control of this game-changing hi-tech, and then who will control these few? Obviously, those who have guns and money will control these few. And then what? Oh, and then instead of protecting mankind from AGIs, we have AGIs destroying the majority of mankind on behalf of the 'few'."
  5. I can't agree more: pausing GPT-5 won't secure humanity's future by any significant measure. The main issue is within ourselves: please ban or pause that, this insatiable greed. To add more salt... I asked ChatGPT to write me a comment supporting the writer's stand, and here is what it said: "Banning progress is like trying to unring a bell. Sure, you might get some peace and quiet, but you'll also be stuck in the same place forever. Try putting toothpaste back into the tube - it's messy, impractical, and you'll probably end up with a lot of regrets."
