Brain-Inspired Cognitive Architectures
Contact us via email at firstname.lastname@example.org
Join the Brain-Inspired Adaptive Cognitive Architecture Space in the EBRAINS Community
From advanced learning to neurorobotics and neuromorphic applications
The central ambition of this focus area is to better understand how brain networks enable visuo-motor and cognitive functions, such as dexterous manipulation, spatial navigation, and relational reasoning.
Our approach is to emulate the architecture and operation of the brain by designing functional cognitive architectures that address challenging visuo-motor and cognitive problems. Numerical simulations and embodiment in virtual or real robots enable us to study closed-loop interactions between the brain, the body, and the physical environment. We target the development of a novel cognitive framework that supports the flexible integration of a range of heterogeneous modules, representing brain areas, into coherent embodied architectures. Furthermore, we develop new learning rules to train the modules in biologically plausible ways, as well as new neuromorphic computation methods for rapid, energy-efficient simulation of large-scale models. Our computational architectures can perform a variety of cognitive functions and lead to new experimentally testable hypotheses about brain function.
Dexterous manipulation - how the brain coordinates hand movements
The ability to dexterously manipulate objects is essential to our everyday life. In Showcase 5, we aim to shed light on the neural computations underlying this complex task using a goal-driven approach. To that end, we trained an artificial agent controlling a simulated anthropomorphic robotic hand with reinforcement learning to perform ecologically valid, visually guided actions to manipulate an object. These actions are computed by a recurrent convolutional neural network modeling the primate brain’s frontoparietal network involved in visually guided hand and arm movements. The neural computations that emerge during training can then be used to build new hypotheses about their biological counterparts.
An anthropomorphic hand learning dexterous manipulation
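The training setup described above can be illustrated with a minimal sketch: a recurrent policy trained by a REINFORCE-style policy gradient on a toy reaching task. The task, network sizes, and hyperparameters here are illustrative assumptions, not the showcase's actual architecture or environment.

```python
# Minimal sketch of training a recurrent policy with a policy gradient,
# standing in for the (much larger) recurrent convolutional network of
# Showcase 5. All sizes and the toy task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

OBS, HID, ACT = 4, 16, 2           # toy observation/hidden/action sizes
W_in  = rng.normal(0, 0.1, (HID, OBS))
W_rec = rng.normal(0, 0.1, (HID, HID))
W_out = rng.normal(0, 0.1, (ACT, HID))

def rollout(T=20):
    """One episode of a toy 'move the hand to the origin' task."""
    h = np.zeros(HID)
    pos = rng.normal(0, 1.0, ACT)           # hand position; goal is the origin
    grads, rewards = [], []
    for _ in range(T):
        obs = np.concatenate([pos, -pos])   # simple proprioceptive observation
        h = np.tanh(W_in @ obs + W_rec @ h) # recurrent dynamics
        mean = W_out @ h                    # Gaussian policy mean
        a = mean + 0.1 * rng.normal(size=ACT)
        pos = pos + 0.1 * a                 # environment step
        rewards.append(-np.sum(pos ** 2))   # reward: negative squared distance
        grads.append(np.outer((a - mean) / 0.01, h))  # d log pi / d W_out
    return grads, rewards

def reinforce_step(lr=1e-3):
    """One REINFORCE update on the readout weights (simplified)."""
    global W_out
    grads, rewards = rollout()
    G = np.cumsum(rewards[::-1])[::-1]      # return-to-go at each step
    for g, ret in zip(grads, G):
        W_out += lr * ret * g               # gradient ascent on expected return

for _ in range(5):
    reinforce_step()
print(W_out.shape)
```

In the actual showcase, the policy is a recurrent convolutional network receiving visual input, and the environment is a physics simulation of an anthropomorphic hand; this sketch only shows the closed training loop the paragraph describes.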
Brain-based technology to support safe human-robot collaboration
The central aim of Showcase 6 is to demonstrate the potential of functional cognitive models in cobotics applications. Among other topics, it addresses research questions regarding the safety of human beings in a shared robotic workspace. The showcase connects multiple models developed by HBP partners into a single simulated physical environment: a visual system that reliably detects humans and robots in the demonstrator scene, a high-level planning module that allows the robot to generate smooth and efficient trajectories, and a musculoskeletal model that accurately represents a human in the scene. By combining the capabilities of the individual models, we can control a robotic arm safely and efficiently, allowing us to perform complex tasks in collaboration with a human. We demonstrate a range of operations, from simple tool handovers to dedicated collaborative part assembly, similar to what one might find in an automotive factory.
We can control a robotic arm in a safe and efficient manner, allowing us to perform complex tasks in collaboration with a human
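The module-composition idea behind the showcase can be sketched as a control loop wiring a (mock) vision module, a planner, and a safety monitor together. The module names and interfaces below are illustrative assumptions; the real showcase connects full neural models in a shared physics simulation.

```python
# Hedged sketch of composing perception, planning, and safety modules
# into one control loop. Interfaces and thresholds are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Percept:
    human_pos: np.ndarray   # detected human position (mock vision output)
    robot_pos: np.ndarray

def vision_module(world):
    """Stand-in for the human/robot detector: reads ground truth here."""
    return Percept(world["human"].copy(), world["robot"].copy())

def planner_module(robot_pos, goal, step=0.2):
    """Stand-in high-level planner: straight-line step toward the goal."""
    d = goal - robot_pos
    n = np.linalg.norm(d)
    return robot_pos + (step / n) * d if n > step else goal.copy()

def safety_module(next_pos, human_pos, margin=0.5):
    """Veto any planned pose that gets too close to the human."""
    return np.linalg.norm(next_pos - human_pos) > margin

world = {"robot": np.array([0.0, 0.0]), "human": np.array([2.0, 2.0])}
goal = np.array([1.0, 0.0])
for _ in range(20):
    p = vision_module(world)
    nxt = planner_module(p.robot_pos, goal)
    if safety_module(nxt, p.human_pos):   # only move when the step is safe
        world["robot"] = nxt
print(world["robot"])
```

The design point is that each module exposes a narrow interface, so any one of them (e.g. the planner) can be swapped for a brain-derived model without touching the rest of the loop.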
Brain-Inspired Artificial Intelligence
Unlike current AI networks, the human brain has rich internal dynamics, is data- and energy-efficient, and can learn a large variety of tasks continuously with limited supervision by interacting with the (physical and social) environment. Furthermore, in contrast to a homogeneous deep neural network, the human brain has a sophisticated network architecture consisting of many brain areas organised into networks and cortical systems that are recruited in context- and task-dependent ways to fulfil a wide range of cognitive challenges. Scientists working in this subtopic therefore build biologically inspired cognitive architectures consisting of many networks or modules that work together to create more complex and flexible behavior than individual deep networks (see Showcases 5 and 6). The modelling approach integrates architectural constraints from the brain but derives network parameters through biological learning. As a result of modularity, a single module can often serve as a building block for many different architectures. The architectures are extended by adding increasingly higher-level cognitive abilities, including neural systems for relational reasoning and hierarchical planning, and by embodying them in robots able to address ecologically valid tasks, such as navigation and manipulation.
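The modularity-as-reuse point can be made concrete with a small sketch: the same (toy) visual module serves as a building block in two different architectures. Module names and sizes are illustrative assumptions, not any specific HBP model.

```python
# Hedged sketch: one shared module reused across two architectures.
# The 'modules' are toy fixed nonlinear maps; real modules are trained
# networks representing brain areas.
import numpy as np

rng = np.random.default_rng(1)

def make_module(n_in, n_out):
    """A module here is just a fixed linear map with a tanh nonlinearity."""
    W = rng.normal(0, 0.3, (n_out, n_in))
    return lambda x: np.tanh(W @ x)

visual_encoder = make_module(8, 4)          # shared building block

# Architecture A: vision feeds a navigation head
nav_head = make_module(4, 2)
navigate = lambda img: nav_head(visual_encoder(img))

# Architecture B: the SAME vision module feeds a manipulation head
grasp_head = make_module(4, 3)
grasp = lambda img: grasp_head(visual_encoder(img))

img = rng.normal(size=8)
print(navigate(img).shape, grasp(img).shape)  # (2,) (3,)
```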
The BrainProp algorithm as outlined in Pozzi et al., 2020. Only one output unit, corresponding to the selected action or class, is trained at a time by means of an attentional feedback signal.
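The core idea of the caption — updating only the selected output unit, gated by a feedback signal — can be sketched in a toy single-layer form. This is an assumption-laden simplification of the deep-network algorithm in Pozzi et al., 2020, not a reimplementation of it.

```python
# Hedged sketch of the BrainProp idea: the network selects one output
# unit per trial, and only that unit's weights receive a learning
# signal. Task, sizes, and learning rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N_IN, N_OUT = 10, 4
W = rng.normal(0, 0.1, (N_OUT, N_IN))

def brainprop_step(x, label, lr=0.02, eps=0.1):
    """One trial: select a class, receive reward, update ONLY that unit."""
    global W
    z = W @ x
    # epsilon-greedy selection of a single output unit (the 'action')
    k = rng.integers(N_OUT) if rng.random() < eps else int(np.argmax(z))
    r = 1.0 if k == label else 0.0          # scalar reward from environment
    delta = r - z[k]                        # error for the selected unit only
    W[k] += lr * delta * x                  # attention-gated, unit-local update
    return r

# toy task: the class is the index of the largest of the first N_OUT inputs
hits = 0.0
for t in range(2000):
    x = rng.normal(size=N_IN)
    label = int(np.argmax(x[:N_OUT]))
    r = brainprop_step(x, label)
    if t >= 1500:
        hits += r
print(hits / 500)   # late-trial accuracy on the toy task
```

The contrast with backpropagation is that no error vector is sent to the non-selected units; learning relies on a scalar reward and on which unit was attended.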
This subtopic is concerned with the development of cellular-resolution computational models of the brain that combine biological realism with learning and task performance. A particular aim is to gain insight into visuomotor processing through models of multiple brain areas, including the cerebellum, basal ganglia, cerebral cortex, hippocampus, and associated nuclei. The aspects taken into account encompass spiking communication, cell-type-specific electrophysiology and connectivity, spike-timing-dependent plasticity, and dopamine-dependent reinforcement learning. These models form a bottom-up complement to the top-down approach taken elsewhere within the focus area and suggest mechanisms for cognitive architectures more closely based on known anatomy and physiology. Besides the neuroscientific knowledge they yield, these models serve as testbeds for various EBRAINS services: simulation, high-performance computing, neuromorphic computing, and neurorobotics.
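One of the plasticity mechanisms listed above, pair-based spike-timing-dependent plasticity (STDP), can be written down compactly. The time constants and amplitudes below are generic textbook values, not those of any specific EBRAINS model.

```python
# Hedged sketch of pair-based STDP: pre-before-post spike pairs
# potentiate a synapse, post-before-pre pairs depress it. Parameter
# values are generic illustrative choices.
import numpy as np

def stdp_dw(pre_times, post_times, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Total weight change from all pre/post spike pairs (times in ms)."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:                         # causal pair: potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                       # anti-causal pair: depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw

# a pre spike 10 ms before a post spike strengthens the synapse...
print(stdp_dw([0.0], [10.0]) > 0)   # True
# ...while the reversed order weakens it
print(stdp_dw([10.0], [0.0]) < 0)   # True
```

In the cellular-resolution models, rules of this kind are applied per synapse during simulation, optionally modulated by a dopamine-like reward signal for reinforcement learning.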