Research News

Human Brain Project researchers work on safer interaction between humans and robots

27 September 2022


As factory floors become increasingly automated, interaction between robots and humans in the same working environment is set to become commonplace. That interaction must be safe for the humans involved, to prevent injuries and life-threatening situations. The Human Brain Project now unveils a new Showcase on Cobotics - exploring the safe collaboration between humans and robots in a shared space. Using the digital EBRAINS research infrastructure, the scientists employ models and insights that draw inspiration from neuroscience.
 
“We have built a virtual space where we can insert and simulate functional neural models developed by groups within the HBP that cover perception, cognition, planning and more,” explains Yannick Morel of the Department of Cognitive Neuroscience at Maastricht University, who is currently overseeing the development of the Cobotics Showcase. “Some of the technology required to support safe human-robot coexistence, such as vision technology, is being investigated by many companies, including Tesla and Google. We cannot compete directly with such tech giants; instead, we try to explore how insights from neuroscience can be used to find new solutions, building upon developments from the HBP both in terms of neural modelling and ICT (information and communications technology) tools.”

Addressing safety for the human collaborator is a well-established aim in robotics. Where is the moving, vulnerable human in a crowded workshop full of tools and supplies? The robot usually cannot tell unless it uses cameras to identify the worker within its visual field. But if its vision is occluded - say, the robot moves its own arm in front of its camera - detection becomes difficult, and even a momentary gap can lead to an incident. “Most artificial vision systems lack robustness; situations where vision is occluded introduce a degree of uncertainty you want to avoid,” says Morel.
 
One possible solution is to emulate how human vision handles occlusion. Building upon HBP work on human vision, the team is exploring how recurrence and dynamics in the network may help. The goal is to give the system a temporal dimension - a memory that keeps track of an object even while it is hidden from view. “We are extending deep-learning models using insights from abstract models of vision developed by HBP researchers,” says Morel. “Typically, you would try to solve this problem using deep learning and big data. The costs involved are starting to catch up with companies, and the deep-learning paradigm is starting to raise questions.” On factory floors, practical solutions resort to stopping or slowing down the robot whenever it cannot see the human, which hurts productivity. “Instead, the ambition is to emulate mechanisms involved in human vision, to achieve robustness to occlusion, without resorting to deep learning and big data.”
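As a rough illustration of the idea - not the HBP models themselves - the sketch below shows how a recurrent state carried across video frames can keep an estimate of a person's position alive during a brief occlusion. The architecture, layer sizes and all parameters are assumptions made for this example.

```python
# Illustrative sketch only (not the HBP vision models): a recurrent state
# carried across frames lets a position estimate persist while the person
# is briefly occluded, instead of vanishing the moment detection fails.
import torch
import torch.nn as nn

class RecurrentTracker(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # Small convolutional encoder applied to each frame
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Recurrent cell: a temporal memory of where the person was last seen
        self.rnn = nn.GRUCell(32, hidden_size)
        # Decode the hidden state into an (x, y) position estimate
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, frames):
        # frames: (time, batch, 3, height, width)
        h = torch.zeros(frames.size(1), self.rnn.hidden_size)
        estimates = []
        for frame in frames:
            # State persists through frames in which the person is occluded
            h = self.rnn(self.encoder(frame), h)
            estimates.append(self.head(h))
        return torch.stack(estimates)

# Example: a 10-frame clip of 64x64 images, batch of 1
clip = torch.randn(10, 1, 3, 64, 64)
positions = RecurrentTracker()(clip)
print(positions.shape)  # torch.Size([10, 1, 2])
```

Because the recurrent state is updated at every frame, the estimate degrades gracefully during an occlusion rather than disappearing outright - the temporal dimension mentioned above.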
 
What looks like a trivial operation for two humans - handing over an object, for example - can prove complicated for a robot. “Imagine a robot wanting to give you a tool,” says Morel. “How does it know where you expect it? When does it tighten its grip, and when does it let go? How do we make sure it doesn’t hurt you? Within our model, we describe not just the space occupied by the human but also their musculoskeletal system and relevant motor circuitry. By modelling the dynamics of the human arm, the robot learns how it can move safely and how to physically interact with you without inadvertently pulling a ligament.”
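A much simpler, hypothetical illustration of the safety idea - it does not reproduce the musculoskeletal model described above - is to treat the robot-human contact during a handover as a spring-damper coupling and cap the commanded force so the human arm is never over-loaded. The gains and limits below are invented example values.

```python
# Minimal impedance-style sketch (assumed values, not the HBP arm model):
# command a spring-damper force toward the expected hand position and
# saturate it at a safety limit before sending it to the robot.
import numpy as np

def handover_force(robot_pos, hand_pos, robot_vel, hand_vel,
                   stiffness=80.0, damping=12.0, force_limit=15.0):
    """Interaction force (N) toward the hand, capped at a safety limit."""
    error = np.asarray(hand_pos) - np.asarray(robot_pos)
    error_rate = np.asarray(hand_vel) - np.asarray(robot_vel)
    force = stiffness * error + damping * error_rate
    norm = np.linalg.norm(force)
    if norm > force_limit:           # never exceed what the arm can safely take
        force *= force_limit / norm
    return force

# Example: the gripper is 5 cm short of the expected hand position
f = handover_force(robot_pos=[0.40, 0.00, 0.90],
                   hand_pos=[0.45, 0.00, 0.90],
                   robot_vel=[0.0, 0.0, 0.0],
                   hand_vel=[0.0, 0.0, 0.0])
print(f)  # force vector toward the hand, never above 15 N
```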

Another way to make robots more human-like in how they learn is to employ brain-inspired hardware for their processing functions, including, for example, real-time visual processing on SpiNNaker. SpiNNaker, a neuromorphic computing system developed within the HBP, supports spike-based neural networks - networks that describe information through bursts of spikes or events, much like our neurons - allowing it to run large models in real time. This can be used to perform visual recognition and automation tasks at the very high refresh rates needed for safety-critical functions. “Efforts have gone into connecting SpiNNaker with the real world through sensors and actuators, essentially providing embodiment to the system,” mentions Morel.
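For a feel of how such spike-based networks are typically described, here is a minimal sketch using PyNN, the common Python interface through which SpiNNaker is programmed (via the sPyNNaker package). It builds a toy Poisson-driven layer of integrate-and-fire neurons and is only a stand-in for the far larger vision models mentioned above; running it requires access to a SpiNNaker system, for instance through EBRAINS.

```python
# Toy spiking network on SpiNNaker via PyNN/sPyNNaker (illustrative only).
import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)  # 1 ms simulation resolution

# Input layer: Poisson spike sources standing in for event-based camera pixels
inputs = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0), label="input")

# Processing layer: leaky integrate-and-fire neurons
neurons = sim.Population(100, sim.IF_curr_exp(), label="layer1")
neurons.record("spikes")

# One-to-one excitatory connections from input to processing layer
sim.Projection(inputs, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=2.0, delay=1.0),
               receptor_type="excitatory")

sim.run(200.0)  # simulate 200 ms; on the hardware this runs in real time

spikes = neurons.get_data("spikes")  # Neo block with the recorded spike trains
sim.end()
```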
 
In this way, the Cobotics Showcase, both the virtual factory floor and the real-world SpiNNaker setup, provides a testing ground not only for robot-human interaction, but also for the performance of HBP models and technologies. “The Showcase is born out of the necessity of integrating a lot of great work made by very smart people,” says Michael Zechmair from Maastricht University, who is responsible for the integration of the architecture, including the functional assembly of the different neural models into a coherent structure. “All individual building blocks - vision, localisation, embodiment, motor control and planning - have enormous merit. Bringing them together is hard work. But we can learn a lot by putting together a modular, flexible system where we can simulate real-life situations, drawing inspiration from multiple contributions to neuroscience and robotics.”


LinkedIn Interview

The Human Brain Project is presenting a series of talks with invited scientists to discuss their latest achievements in understanding the brain. The talks will be live-streamed to the Human Brain Project LinkedIn page.

On 28 September 2022 at 12:00 CET, we will stream a conversation with Jörg Conradt and Yannick Morel, researcher in the Department of Cognitive Neuroscience at Maastricht University, about their work on the safe collaboration between humans and robots in a shared space.



Text by Roberto Inchingolo