• Paper Digest

Embodied AI: Bridging the Gap to Human-Like Cognition

09 August 2023


Our brain evolved through embodiment in a physical system – the human body – that directly senses and acts on the world. In contrast, most current AI systems have no ‘bodies’ and lack a direct connection to the physical world. Connecting AI systems to the physical world through robotics, and designing them based on principles from evolution, is a promising approach to developing AI with more human-like cognition. This is the position taken by Tony Prescott, HBP researcher and Professor of Cognitive Robotics at the University of Sheffield, in a paper recently published in Science Robotics.

To develop human-level cognition, AI systems need to interact directly with the physical and social world in real time. Giving AI systems robotic bodies would allow them to learn how their actions affect the world, and to act in ways that improve their own learning. Just as human bodies and brains have shaped their own learning through evolution, such embodied AI systems may have far more potential for attaining human-level cognition than current disembodied systems.

Prescott, who co-authored the research paper with Stuart Wilson, said: “ChatGPT, and other large neural network models, are exciting developments in AI which show that really hard challenges like learning the structure of human language can be solved. However, these types of AI systems are unlikely to advance to the point where they can fully think like a human brain if they continue to be designed using the same methods.”

“It is much more likely that AI systems will develop human-like cognition if they are built with architectures that learn and improve in similar ways to the human brain, using connections to the real world. Robotics can provide AI systems with these connections – for example, via sensors such as cameras and microphones and actuators such as wheels and grippers. AI systems would then be able to sense the world around them and learn like the human brain.”

The Sheffield researchers acknowledge that recent AI developments have made exciting progress. For instance, by incorporating recurrent neural network models, AI systems have become better at predicting what might happen next as they work on a particular problem. Nevertheless, they argue that the underlying framework of predictive processing should not be seen as the only pathway to designing AI systems. AI development would need to be grounded more broadly in biological principles and in constraints deriving from development, learnability and evolvability in the real world.
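The core idea behind predictive processing – predict the next input, then update on the prediction error – can be illustrated with a toy sketch. This is my own minimal illustration, not a model from the paper; the function name and learning rate are assumptions.

```python
def predictive_update(observations, lr=0.1):
    """Toy error-driven prediction (illustrative only).

    The system holds a running prediction of the incoming signal and,
    after each observation, nudges that prediction in proportion to
    the prediction error - so the error shrinks over time.
    """
    prediction = 0.0
    errors = []
    for obs in observations:
        error = obs - prediction      # prediction error: surprise at the input
        prediction += lr * error      # adjust to reduce future error
        errors.append(abs(error))
    return prediction, errors
```

Run on a steady signal, the prediction converges toward the true value and the recorded errors shrink at each step – the minimisation of prediction error that the predictive-processing framework takes as its central principle.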

Relatedly, an important problem with current disembodied AI, especially ‘deep learning’ systems, is that they represent a problem in increasingly abstract ways, because their learning networks are organised hierarchically. This can make it virtually impossible for humans to work out how they have learned to solve a particular problem and arrived at their solution.

Real brain networks involve both higher, more abstract levels of processing (such as the cerebral cortex) and lower levels more directly connected to the real world (e.g., the midbrain and below). Although the lower levels are less flexible than the higher ones, higher-level processing can still influence them. If the architecture of AI systems were based on such a brain-like layered control system, not only would they be more likely to develop human-like cognition, but their learning and decision-making would also become more transparent and easier for humans to understand and control.
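A layered control system of this kind can be sketched in a few lines. The sketch below is my own illustration of the general idea (the layer names and rules are assumptions, not the architecture proposed in the paper): a fast, fixed low-level layer handles immediate stimuli, while a slower, more flexible higher layer can override it when it has a goal-directed alternative.

```python
def reflex_layer(stimulus):
    """Low level: fast, fixed stimulus-to-action mapping."""
    return "retreat" if stimulus == "obstacle" else "advance"

def deliberative_layer(goal, stimulus):
    """Higher level: slower and more flexible; returns None
    when it has no goal-directed reason to intervene."""
    if goal == "explore" and stimulus == "obstacle":
        return "turn"          # re-plan around the obstacle instead of retreating
    return None

def layered_controller(goal, stimulus):
    """Higher-level output, when present, overrides the reflex -
    otherwise the low-level default acts unimpeded."""
    override = deliberative_layer(goal, stimulus)
    return override if override is not None else reflex_layer(stimulus)
```

Because each action can be traced to the layer that produced it, a controller structured this way is also easier to inspect than a monolithic network – the transparency benefit the researchers highlight.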

The original press release of this research can be found here.

Original Publication:

Understanding brain functional architecture through robotics

Tony J. Prescott and Stuart P. Wilson. Science Robotics 2023 May 31;8(78):eadg6014.

DOI: 10.1126/scirobotics.adg6014