How LLMs radically differ from human brains
Large language models (LLMs) can produce text that gives the impression you are interacting with a human-like, intelligent, conscious agent.
But in a paper in the journal Trends in Neurosciences, three neuroscientists (Jaan Aru of the University of Tartu in Estonia, Matthew Larkum of the Institute of Biology at Humboldt University of Berlin, and James M. Shine of The University of Sydney) argue that present-day LLMs are “text-centric, whereas the phenomenological umwelt [perceived world] of living organisms is multifaceted and integrated.”
Three ways LLMs differ from human brains
“Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing,” say the neuroscientists. “Currently, such processes are not implemented in large language models (LLMs). The organizational complexity of living systems has no parallel in present-day AI tools.
“The inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us.
“The evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.”
Citation: Jaan Aru, Matthew Larkum, and James M. Shine. The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences, October 18, 2023. DOI: https://doi.org/10.1016/j.tins.2023.09.009