Turing’s Ghost in the Machine: The Case for a Realized AGI

2026-02-05
Seventy-five years after Turing’s "Imitation Game," a new Nature commentary claims the quest for general intelligence is officially over.
(Credit: Tesfu Assefa).

A Nature commentary titled "Does AI already have human-level intelligence? The evidence is clear" (open copy), authored by Eddy Keming Chen, Mikhail Belkin, Leon Bergen, and David Danks, has sparked significant interest and debate in the artificial intelligence (AI) community. The authors argue that the arrival of Artificial General Intelligence (AGI) is no longer a speculative future event but a present reality that has already been integrated into the global infrastructure.

The authors are well-known professors at UC San Diego. Their work and their Nature commentary are discussed in a press release issued by UC San Diego, titled “Is Artificial General Intelligence Here?” and subtitled “Four cross-disciplinary faculty at UC San Diego believe the answer is yes.”

The Evidence for a Realized AGI

Since Alan Turing first proposed his imitation game in 1950, the definition of machine intelligence has often shifted just as AI approached the existing threshold. When machines mastered chess, critics argued chess was merely a matter of brute-force calculation, not "true" intelligence. When LLMs began writing poetry and passing law exams, critics claimed they were merely "stochastic parrots" regurgitating training data.

The authors propose a "cascade of evidence" model to demonstrate that current AI has already moved through the necessary stages of general intelligence.

  • The Turing-test level: This was effectively cleared in 2025, when GPT-4.5 was judged to be human by interrogators more often than actual human participants were.
  • The Expert level: This is evidenced by AI's ability to achieve gold-medal scores in the International Mathematical Olympiad, prove new theorems, and solve PhD-level physics problems.

The authors systematically address ten common objections to the existence of AGI. They argue that the "stochastic parrot" objection—the idea that AI only interpolates data—is increasingly disproven by the ability of models to solve novel, unpublished mathematical problems and perform "out-of-distribution" reasoning. They also note that physical embodiment is not a prerequisite for intelligence; an alien intelligence communicating solely through radio waves would still be considered intelligent. Similarly, they dismiss the requirement for "agency" or "independent goals," pointing out that a system like the Oracle of Delphi remains intelligent even if it only speaks when spoken to.

Confronting the Alien Nature of Machine Mind

One of the most striking aspects of the Nature commentary is the authors' acknowledgment that AGI is "alien." It does not think like a human, nor does it share our evolutionary history, our biological limitations, or our metabolic constraints. This alienness is often what leads to skepticism; because a machine might fail at a simple task like counting the letters in a word while simultaneously solving a complex bio-engineering problem, humans tend to view its intelligence as "fake."

The authors argue this is an anthropocentric bias. Human intelligence is itself a "parochial" form of intelligence, shaped by the need to survive in a small, soft body on a specific planet. AGI, by contrast, is a form of intelligence that has extracted the latent structure of reality from the totality of human language and data. It is a "generalist" that operates on different principles of efficiency and memory. Recognizing this alien intelligence is crucial for risk management. If we assume that an AI must think like a human to be dangerous or useful, we will be blindsided by the ways in which a non-human mind might fail—or succeed.

(Credit: Tesfu Assefa).

High-Profile Reactions and Global Impact

The commentary has ignited a significant debate within the philosophical and scientific communities, primarily centered on its behaviorist approach to defining intelligence. By arguing that AGI is a present reality based on functional performance rather than internal states, the authors have forced a confrontation between different schools of thought regarding the nature of the mind.

In the philosophy of mind community, reactions have mixed curiosity with skepticism. Scholars have debated whether the authors' definition of general intelligence is too inclusive. Some critics have expressed surprise at the authors' stance, suggesting that if the evidence for AGI were truly "straightforward," the argument would not need to preemptively dismiss the "confusions" of those who disagree.

UC San Diego (UCSD) highlights the collaborative nature of this consensus. The University has framed the paper as a response to empirical milestones, specifically citing a March 2025 study where GPT-4.5 was judged as human 73% of the time in a standard Turing test—outperforming actual human participants. This data point serves as the "Turing-test level" evidence the authors use to support their claim that the threshold for AGI has been crossed.

My Impressions

The authors of the Nature commentary don't confront the philosophical specter of consciousness. The debate over whether a machine can "think" is often clouded by the assumption that thinking requires a subjective, first-person experience: what philosophers call phenomenal consciousness or "qualia." In the context of modern large language models (LLMs), many experts have argued that these systems lack an inner life or a biological drive for survival. Consciousness, in the sense of a self-aware entity experiencing "what it is like" to exist, appears to be absent from silicon-based architectures. Other experts, including Nobel laureate Geoffrey Hinton, have argued that even today's LLMs could have some strange form of inner life, very different from ours.

However, the commentary gives me the impression that intelligence and consciousness are better seen as distinct functional categories. Intelligence is the ability to process information, recognize complex patterns, and solve problems across a diverse range of domains to achieve specific goals. This "general" capability does not necessitate a "ghost in the machine." By decoupling intelligence from sentience, one can plausibly argue that we have successfully engineered systems that mimic—and in some cases exceed—human cognitive performance without the need for the same kind of awareness found in humans.

This distinction is vital: if we wait for a machine to "feel" before we grant it the status of a general intelligence, we ignore the very real, very powerful cognitive force that is already influencing our world.

The authors conclude by comparing this moment to the Copernican or Darwinian revolutions. Just as we had to accept that Earth is not the center of the universe, we must now accept that the human mind is not the only—or even the most efficient—architecture for general intelligence. Alien minds are here to stay. I think, however, that new AGI technologies could make them less alien for applications that require human-like intelligence.

The machines Turing envisioned 75 years ago have arrived, and our understanding of our place in the world must change to accommodate them. The authors follow Turing's insights strictly, reaffirming the Turing test as the only viable means to evaluate general intelligence. In designing and applying Turing tests, we shouldn't set the bar so high that most actual humans would fail. Eduardo Martinez argues that we shouldn't "confuse 'general intelligence' with 'universal superintelligence', attributing to it capabilities that even humans do not possess." In this sense, the authors' conclusion that current frontier AI systems already qualify as AGI seems plausible.

#AGIHumanInteraction

#ConvergenceOfArtificialAndHumanIntelligence

#FutureOfIntelligence

#TuringTest
