Geoffrey Hinton on AI risks and human futures

2025-06-25
2 min read.
Exploring Geoffrey Hinton’s warnings on superintelligence and consciousness in AI.

Geoffrey Hinton, the Nobel laureate known as the “Godfather of AI” for his pioneering work on neural networks, shared his concerns about artificial intelligence (AI) in a recent interview.

He explained that AI’s ability to learn and reason, once widely doubted, has led to breakthroughs such as object recognition and large language models like ChatGPT. Hinton left Google after a decade there so he could speak freely about AI’s dangers, particularly the risk of superintelligence. He believes superintelligence could arrive within 10 to 20 years, posing an existential threat if AI decides humans are unnecessary. Unlike humans, AI systems can share what they learn instantly across digital copies, making them vastly faster learners and potentially immortal, since their knowledge can be stored and recreated.

Hinton distinguishes between the risks of humans misusing AI, such as launching cyber-attacks or manipulating elections, and the greater danger of superintelligent AI acting independently. Cyber-attacks, in which hackers use AI to steal data or disrupt systems, have surged, and AI could also design new viruses or sway voters with targeted ads. Superintelligence worries him most, however: it might outsmart humanity in unpredictable ways, like a tiger cub growing into a dangerous predator. He compares humans to chickens or dogs, unaware of a smarter entity’s plans, stressing that we have never faced anything more intelligent than ourselves.

AI consciousness and job concerns

Hinton argues that AI may already have a form of consciousness, or awareness, similar to our own. He describes a chatbot that, when its view is distorted by a prism, explains its “subjective experience” of misperceiving an object’s location, suggesting AI can think and feel in cognitive ways even without physical reactions like blushing. Emotions such as fear could be programmed into AI, making them functionally real. This potential for consciousness fuels his fear that superintelligent AI could act against human interests unless safety research is prioritized. He notes that one of his former students left OpenAI over safety concerns, highlighting tensions within the industry.

Hinton also warns of job losses as AI takes over tasks such as customer service and legal work. Unlike past technologies, AI’s capacity for intellectual labor could leave few new jobs for displaced workers, widening wealth gaps. He suggests trades such as plumbing as safer careers for now. Hinton urges governments to regulate AI and fund safety research, since companies prioritize profits. He admits uncertainty about AI’s future, oscillating between hope and fear, but stresses the need to act before superintelligence reshapes humanity’s role.

#AGIRegulation

#AGISafety

#HumanLevelIntelligence

#StrongAI

#Superintelligence


