Could AI machines become conscious?

2023-10-30
How LLMs radically differ from human brains
Unlike large language models, the human cortical system generates complex activity patterns thought to underlie consciousness (credit: Mac Shine, Jaan Aru)

Large language models (LLMs) can produce text that gives the impression you are interacting with a human-like, intelligent, conscious agent.

But in a paper in the journal Trends in Neurosciences, three neuroscientists—Jaan Aru at the University of Tartu in Estonia; Matthew Larkum of the Institute of Biology, Humboldt University; and James M. Shine of The University of Sydney—argue that present-day LLMs are "text-centric, whereas the phenomenological umwelt [perceived world] of living organisms is multifaceted and integrated."

Three ways LLMs differ from human brains

Left: a schematic depicting the basic architecture of a large language model, which can have tens or even more than 100 decoder blocks arranged in a feed-forward fashion. Right: a heuristic map of the thalamocortical system, which generates complex activity patterns thought to underlie consciousness. (credit: Mac Shine, Jaan Aru)

"Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing," say the neuroscientists. "Currently, such processes are not implemented in large language models (LLMs). The organizational complexity of living systems has no parallel in present-day AI tools.

"The inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us.

"The evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness."

Citation: Jaan Aru, Matthew Larkum, and James M. Shine. "The feasibility of artificial consciousness through the lens of neuroscience." Trends in Neurosciences, October 18, 2023. DOI: https://doi.org/10.1016/j.tins.2023.09.009

#CognitiveComputing



