Scientists at UCLA Health argue that today’s most advanced artificial intelligence (AI) systems are missing something fundamental that humans have: a body. In a paper published in Neuron, the scientists propose that current AI lacks two key elements that shape human thinking and behavior.
The first is the ability to interact with the physical world through a body. The second is an internal awareness of the body’s own states, such as feeling tired, uncertain, or thirsty. The researchers call this second element “internal embodiment.” They explain that the human body acts as a built-in safety system, constantly monitoring internal conditions and influencing decisions, attention, memory, and social behavior. AI systems, by contrast, process information and produce outputs without any equivalent internal regulator.
The AI body gap
Multimodal LLMs (large language models) can describe objects such as a cup of water or recognize images. However, they have no sense of what thirst feels like or how physical experience shapes understanding. In one test, researchers showed several leading AI models a simple image known as a point-light display, in which a handful of dots suggests a moving human figure. Even newborns can recognize this as a person, yet several models failed to identify it correctly, describing it instead as stars. When the image was rotated slightly, performance dropped further. Humans succeed at this task because their perception is grounded in a lifetime of bodily movement and action. AI systems rely only on patterns from text and images, without that physical anchor.
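To make the stimulus concrete, here is a minimal sketch of a point-light figure and the rotation manipulation described above. The joint coordinates are invented for illustration and are not the study's actual stimuli; the point is that the dots alone carry no labels, so recognizing "a person" requires grounding beyond the raw pixels.

```python
import math

# Illustrative 2D joint positions (x, y) for one frame of a point-light
# "walker": 13 dots, one per major joint. Coordinates are made up for
# illustration, not taken from the paper's stimuli.
WALKER = [
    (0.0, 1.8),                 # head
    (-0.3, 1.4), (0.3, 1.4),    # shoulders
    (-0.45, 1.1), (0.5, 1.1),   # elbows
    (-0.5, 0.8), (0.6, 0.85),   # hands
    (-0.2, 0.9), (0.2, 0.9),    # hips
    (-0.25, 0.45), (0.3, 0.5),  # knees
    (-0.3, 0.0), (0.45, 0.05),  # feet
]

def rotate(points, degrees):
    """Rotate every dot about the origin, as in the rotated-image test."""
    theta = math.radians(degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x * cos_t - y * sin_t, x * sin_t + y * cos_t) for x, y in points]
```

Rotation changes nothing about the figure itself (all pairwise distances between dots are preserved), which is why human observers still see a person while models whose representations lack that bodily anchor degrade further.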
The paper distinguishes between external embodiment, which involves interacting with the outside world, and internal embodiment, which involves monitoring one’s own states. Current AI focuses mainly on external capabilities but has no persistent internal signals to regulate behavior, such as limiting overconfidence or resisting manipulation.
The researchers propose a dual-embodiment framework for future AI development. This would include internal state variables that track uncertainty, processing load, and confidence, helping the system behave more consistently and safely. Giving AI functional analogues of internal embodiment could lead to systems that are not only more capable but also safer and better aligned with human values.
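A rough sketch of what such internal state variables might look like in code. The variable names (uncertainty, load, confidence) come from the article; the update rule, thresholds, and defer logic below are invented for illustration and are not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    """Hypothetical persistent internal signals, per the proposed framework."""
    uncertainty: float = 0.0   # rises with ambiguous inputs
    load: float = 0.0          # rises with processing demand
    confidence: float = 1.0    # falls as uncertainty accumulates

    def update(self, signal_ambiguity: float, demand: float) -> None:
        # Exponential moving averages: the state persists across queries
        # instead of resetting each time, unlike a stateless model call.
        self.uncertainty = 0.8 * self.uncertainty + 0.2 * signal_ambiguity
        self.load = 0.8 * self.load + 0.2 * demand
        self.confidence = max(0.0, 1.0 - self.uncertainty)

    def should_defer(self, threshold: float = 0.5) -> bool:
        # One way to "limit overconfidence": defer or hedge an answer
        # when accumulated internal uncertainty crosses a threshold.
        return self.uncertainty > threshold
```

In this sketch, a run of ambiguous inputs gradually raises the persistent uncertainty signal until the system starts deferring, a functional analogue of a body-like regulator that modulates behavior rather than a per-query output filter.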