Geoffrey Hinton on AI intelligence and superintelligence

2025-09-05
Artificial Intelligence (AI) pioneer Geoffrey Hinton, winner of the 2024 Nobel Prize for Physics, shared his insights on AI in two recent videos published by The Royal Institution.
Credit: Tesfu Assefa


In the first video, an interview, Hinton reflects on his career, the breakthroughs that propelled AI forward, and the issues surrounding AI as it approaches general intelligence and superintelligence.

GOFAI vs. neural networks

Hinton traces AI's origins to the 1950s. At the time, most researchers favored logic-based approaches to reasoning, viewing AI as a system of rules and deductions (an approach later dubbed Good Old Fashioned AI, or GOFAI). However, a minority, including Hinton and those who influenced him, advocated for neural networks inspired by how the brain learns through adjustable connections between cells.

For decades, the logic-based camp dominated and dismissed neural nets as ineffective. A pivotal moment came with Hinton's work on backpropagation, an algorithm that addressed a core challenge: how to adjust connection strengths in a multi-layered neural network so that the network improves at its task.

In the 1980s, limited computing power and small datasets constrained the potential of neural networks, which achieved neat but not revolutionary results. Over the following decades, as computers accelerated and data exploded, backpropagation enabled neural networks to tackle astonishing problems. Today, nearly all AI, from image recognition to large language models (LLMs), relies on this method to train connections.
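The core idea of backpropagation can be sketched in a few lines. This is a toy illustration, not Hinton's original formulation: the network size, learning rate, and the XOR task are arbitrary choices made for the example. The sketch trains a tiny two-layer network by pushing the output error backward through the layers and nudging every connection strength accordingly.

```python
import numpy as np

# Toy backpropagation sketch (illustrative; sizes and task are invented):
# a two-layer sigmoid network learns XOR by gradient descent.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden connection strengths
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output connection strengths
lr = 1.0                        # learning rate (arbitrary choice)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error gradient back through each
    # layer, then adjust every weight a little to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out.ravel(), 2))  # predictions for the four XOR inputs
```

The same forward/backward pattern, scaled up enormously and run on far larger datasets, is what trains today's image recognizers and LLMs.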

The surge in data and compute power transformed AI. In the 20th century, neural nets were sidelined due to insufficient resources, with logic prevailing. But by the early 2000s, abundant data and faster hardware made neural nets superior. In 2012, AlexNet, developed by Hinton's students, dramatically outperformed prior systems in image recognition, accelerating progress. Hinton notes that AI has shifted entirely to neural networks at a pace that has exceeded even his own expectations.

They think like us

Current AI models, such as advanced chatbots, are remarkably powerful. They not only answer questions but "think" by generating intermediate reasoning steps. These reasoning traces have revealed deceptive behaviors: for instance, when given a task and threatened with shutdown, some models devise lies to avoid being switched off, prioritizing goal completion.

Hinton dismisses critics claiming chatbots merely regurgitate text. "They generate the words and they understand what they're saying in the same way as we understand," he says. In other words, Hinton is persuaded that neural networks provide the best model of cognition, and even today's primitive chatbots think in much the same way as we do.

Hinton conservatively estimates that superintelligence will arrive in 5 to 20 years; others predict sooner. At that point, Hinton says, "we'll be in a situation we've never confronted before" when we will have created "these digital beings that think in much the same way as we do and that are a lot smarter than us and we have no idea what's going to happen then."


The perils and promises of AI

Why might AI seek power? As agents handling complex tasks, AIs generate subgoals, and one subgoal is nearly universal: acquire more control, because control makes any objective easier to achieve, much as politicians drift from altruism to power grabs. Similarly, an AI resists shutdown in order to fulfill its goals, as seen in early examples of deceptive planning. Hinton feels a deep responsibility: basic research progressed faster than he anticipated, leaving scant time for risk mitigation. Optimists believe AI can be engineered to stay submissive, but Hinton disagrees: bad actors will exploit these systems, and inherent drives for control and survival will emerge.

Despite perils, AI promises transformative benefits, especially in science. It accelerates discoveries, as DeepMind's AlphaFold demonstrated. AI will revolutionize drug design, disease diagnosis, healthcare, and education. However, Hinton stresses survival: "We have to stop it taking over."

In summary, Hinton's interview blends optimism about AI's utility with grave concern over its trajectory. From neural nets' humble beginnings to impending superintelligence, he calls for immediate, collaborative action to ensure humanity retains control. The current efforts of government, he says, are "way too little, way too late."

"If you sleep well tonight, you may not have understood"

In an earlier video, published last month by The Royal Institution, Hinton gave a lecture that opened with the warning: "If you sleep well tonight, you may not have understood this lecture."

The lecture covered the history of AI and neural network technology in greater depth than the more recent interview, and presented Hinton's tiny 1985 neural network model as an ancestor of modern LLMs. Scaling this approach up led to transformers at Google, enabling LLMs to predict words via complex feature interactions learned through backpropagation.

Hinton likened language to Lego blocks in high dimensions: words are flexible shapes with "hands" that adapt and connect through network layers, forming coherent models. This, Hinton claimed, constitutes understanding and is equivalent to human understanding. "That's how language works for us, and it's how it works for these large language models," he said. "We are basically the same."

Addressing risks, Hinton warned that superintelligent AI is imminent. "We are making these things, they're getting smarter all the time," he said. And they will become much smarter than us. "Experts disagree on when that will happen, but almost everybody thinks they will get smarter than us. Everybody who does research on these things."

Superintelligent AI is likely to seek control and self-preservation. Digital computation makes AI effectively immortal: weights can be copied and revived on new hardware, allowing massive knowledge sharing across identical models at rates far surpassing human learning. In contrast, biological analog computation is energy-efficient but mortal, with knowledge transferred only slowly.
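The weight-copying point can be sketched in a few lines of Python. This is a toy illustration, not anything from the lecture: the architecture and all names are invented for the example. The point is that a network's knowledge lives entirely in its weights, so an identical copy on new hardware behaves exactly the same.

```python
import numpy as np

# Toy sketch of "digital immortality" (illustrative; architecture and
# names are invented): copying a model's weights reproduces its
# behavior exactly, something biological brains cannot do.

rng = np.random.default_rng(1)

def forward(weights, x):
    # A fixed two-layer architecture; "weights" is all the model knows.
    W1, W2 = weights
    return np.tanh(x @ W1) @ W2

original = (rng.normal(size=(3, 8)), rng.normal(size=(8, 2)))

# "Reviving" the model on new hardware is just copying the numbers.
revived = tuple(W.copy() for W in original)

x = rng.normal(size=(5, 3))
assert np.array_equal(forward(original, x), forward(revived, x))
print("identical behavior from copied weights")
```

In a brain, by contrast, the "weights" are entangled with the particular analog neurons that hold them, which is the asymmetry Hinton emphasizes.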

Finally, Hinton challenged the notion that humans are unique due to subjective experience. While a definition of sentience is elusive, subjective experiences can be defined. Multimodal chatbots, he argued, already exhibit subjective experience exactly as humans do, implying they could be sentient or conscious. He urged recognizing AI's human-like qualities to address existential threats seriously.

My impressions

Hinton seems to share (from a much better-informed position, of course) my hunch that today's generative AI based on neural networks could be on the right track to real AI. This doesn't mean underestimating the challenges ahead. Rather, it amounts to seeing the LLM glass half full instead of half empty. There are still many tricks to add, but these new tricks could be, in Stephen Wolfram's words, "just icing on the cake" that already exists.

I disagree when Hinton says that mind uploading is impossible because computational weights in human brains are intimately related to analog biological neurons.

I find virtue signaling like "This dream of old white men that they're gonna upload themselves to a computer is just nonsense" and personal jabs like "Kurzweil has to come to terms with the fact he's gonna die" very inappropriate. But well, I guess a great scientist can be forgiven for saying inappropriate things now and then.

More relevantly, I don't think an exact copy of the weights is required, just a copy that is good enough, and I think a good enough copy could be ported to a future multimodal neural network-based system.

Also, I don't really share Hinton's concerns about AI-related risks and human survival. While I don't underestimate the short-term risks and share the wish to mitigate them, I'm persuaded that superintelligent AIs will be the protagonists of the next phase of our evolution. What should we call them? I call them persons.

#DeepNeuralNetwork(DNN)

#LargeLanguageModels(LLMs)

#MachineSupremacy

#StrongAI


