Online Course

Cognitive systems for non‑specialists




 

About

Cognitive systems are devices designed to mimic the cognitive skills of more highly developed biological organisms, at varying levels of complexity and performance. Models of these skills can be either abstract functional descriptions from the vast field of cognitive science or detailed simulations of brain circuits from neuroscience. Novel hardware designs and the steadily increasing availability of cheap computing resources have recently yielded remarkable results, especially with the latter class of models. The goal of this course is to provide a thorough introduction to the theory of cognitive systems. Drawing on advances in brain research, the topic is approached from a computational-neuroscientific perspective rather than an abstract-psychological one, bridging the gap between the physical structure of the brain and the logical organisation of its cognitive capabilities. Special focus is placed on the role of robotics as a means to ground cognitive function in bodies that physically interact within different types of environments.

ECTS credits: 2 (awarded after attending the online course and one full workshop and successfully passing the exam)


 

Lectures



Lecture 1: Internal models and counterfactual cognition – predicting our environment
Lars Muckli | UGLA

Lecture 2: Making our selves: From psychology to robotics
Tony Prescott | USFD

Lecture 3: Can robots ever become conscious? Insights from theory and experimental neurobiology
Cyriel Pennartz | UVA

Lecture 4: Introduction to the Neurorobotics Platform
Marie Claire Capolei | DTU

Lecture 5: Deep reinforcement learning for robotic control
Jonathan Hunt | Google DeepMind

Lecture 6: Bio-inspired control architecture for mobile robotics
Yannick Morel | TUM

Lecture 7: Neuroscience and robotics
Shravan Tata Ramalingasetty | EPFL

Lecture 8: The social robot: Emotions and drives
Vicky Vouloutsi | IBEC


 

Speakers & Abstracts

Marie Claire Capolei is a PhD student in the Electrical Engineering Department at the Technical University of Denmark (DTU) and an R&D engineer in the Human Brain Project. Her doctoral research investigates the design and testing of biologically inspired architectures for robotic motor learning and control, taking a multidisciplinary approach that encompasses artificial intelligence, neuroscience, automation, robotics, and computational neuroscience. Marie Claire received her M.Sc. in Automation and Robot Technology Engineering from the Technical University of Denmark in 2017 and her B.Sc. in Industrial Engineering from Campus Bio-Medico University in Rome in 2015.

 

Lecture 4: Introduction to the Neurorobotics Platform

Neurorobotics is a young discipline at the intersection of computational neuroscience and robotics. This interdisciplinary approach provides a testbed for evaluating brain-inspired algorithms while proposing new computational models to control autonomous robotic systems. The Neurorobotics Platform (NRP), developed in Subproject 10 "Neurorobotics" of the Human Brain Project, serves this purpose and makes neurorobotics tools accessible to a broader public. The platform provides access to state-of-the-art tools such as robot and brain simulators, and designers for creating experiments, environments, and brain and robot models. The user can define and run closed-loop experiments entirely in a web-based application, running centrally on high-performance clusters. This workshop aims to present the topic of (virtual) neurorobotics, the related research in the HBP, and the NRP research infrastructure to a broader audience, and to gather recommendations, critique and general feedback from the participants in a guided hands-on session. This feedback will influence further development, with the ultimate aim of creating a highly valuable research infrastructure for the robotics and neuroscience communities.
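
To make the idea of a closed-loop experiment concrete, the toy Python sketch below couples a miniature "brain" to a miniature "robot" in a sense-think-act cycle. It is only an illustration of the loop the platform manages, not NRP code; the toy neuron model, robot dynamics and weights are all invented for the example.

```python
# Conceptual sketch of a neurorobotic closed loop (not the NRP API).
import numpy as np

def brain_step(sensor_reading, weights):
    """Toy 'brain': two rate neurons driven by the sensor signal."""
    return np.tanh(weights @ np.array([sensor_reading, 1.0]))

def robot_step(position, motor_command, dt=0.05):
    """Toy 'robot': a point that moves according to the motor command."""
    return position + dt * motor_command

# hand-tuned weights: neuron 1 is excited by the sensor, neuron 2 is inhibited
weights = np.array([[2.0, 0.0],
                    [-2.0, 0.5]])
position = 0.0                              # robot starts away from the target at x = 1
for step in range(200):
    sensor = 1.0 - position                 # robot -> brain  (sensor transfer)
    rates = brain_step(sensor, weights)     # one brain-simulation step
    motor = rates[0] - rates[1]             # brain -> robot  (motor transfer)
    position = robot_step(position, motor)  # one robot-simulation step

print(f"final position: {position:.3f}")    # settles near the target
```

In the actual platform, the "brain" would be a spiking network, the "robot" a simulated body in a physics engine, and the two transfer steps user-defined functions; the structure of the loop is the same.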

 

Jonathan J Hunt comes from Australia and originally studied physics and mathematics. Through a series of accidents he ended up in computational neuroscience at the University of Queensland, where he applied machine learning models to understand the development of the mammalian visual system. He then worked for 2.5 years as a researcher at Brain Corporation in San Diego on applications of computational neuroscience to robotics. For the last three years he has been a research scientist at DeepMind, where he has focussed on recurrent memory models and reinforcement learning for robotic control. In his personal reinforcement learning: he can ride a unicycle, but has not yet mastered juggling.

 

Lecture 5: Deep reinforcement learning for robotic control

Combining the representational capacity of deep neural networks with reinforcement learning (deep RL) has led to a number of notable successes. RL in continuous action spaces brings its own unique set of challenges, as it is no longer possible to enumerate all possible actions. I will discuss several recent scalable approaches to RL in continuous action spaces, including Deep Deterministic Policy Gradient, an off-policy method we developed. I will also discuss limitations of these approaches, particularly relating to the lack of transfer learning between tasks. In the second half of the talk I will discuss the successor representation. Successor representations decompose the value function into a reward-independent part and a reward-dependent part, which allows an agent to generalise across tasks and, despite learning with model-free methods, regain some of the benefits of model-based approaches. I will discuss new, scalable innovations using successor features.
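
As a rough illustration of that decomposition, the sketch below learns a tabular successor representation for a fixed random-walk policy and then recombines it with two different reward vectors; the environment, rewards and hyperparameters are invented for the example and are not taken from the talk.

```python
# Tabular successor representation on a ring of states (illustrative only).
import numpy as np

n_states, gamma, alpha = 8, 0.9, 0.1
M = np.zeros((n_states, n_states))       # reward-independent part (successor matrix)
rng = np.random.default_rng(0)

s = 0
for _ in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states         # fixed random-walk policy
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])   # TD update of the SR
    s = s_next

reward_a = np.eye(n_states)[3]           # task A: reward only in state 3
reward_b = np.eye(n_states)[6]           # task B: reward only in state 6
print("values under task A:", np.round(M @ reward_a, 2))
print("values under task B:", np.round(M @ reward_b, 2))  # reuses M, no new learning
```

The value function for any new reward vector is obtained by a single matrix-vector product, which is the sense in which the reward-independent part transfers across tasks.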

 

Yannick Morel received a Master's degree in Electrical Engineering from the Institut Supérieur de l'Electronique et du Numérique (ISEN) and a Master of Science in Ocean Engineering from Florida Atlantic University (FAU) in 2002. He was awarded a Ph.D. in Mechanical Engineering by Virginia Polytechnic Institute and State University (Virginia Tech) in 2009. He has worked in defence technology at the Institut de St-Louis (ISL, 2009-2010) and at Thales Underwater Systems (TUS, 2015-2016), and pursued research in robotics at the Ecole Polytechnique Fédérale de Lausanne (EPFL, 2010-2014) and at the Commissariat à l'Energie Atomique (CEA, 2014-2015). He now serves as Scientific Project Manager for the ECHORD++ project at the Technical University of Munich (TUM). His research interests include stability analysis of nonlinear dynamical systems, adaptive and nonlinear control theory with applications to robotics in general and motion control of unmanned vehicles in particular, probabilistic robotics, and multi-modal perception for unmanned vehicles.

 

Lecture 6: Bio-inspired control architecture for mobile robotics

From a practical control perspective, the division between lower-level (in terms of abstraction) coordination of actuation and higher-level cognitive functions acting on this lower layer (via descending signals) has shown its merit in a number of examples. This architecture has, for instance, proven particularly well suited to motion control of bio-inspired robotic systems, such as legged or swimming robots. Lower-level control, functionally modelled after Central Pattern Generators (CPGs), can be designed to handle over-actuation and ensure the development of effective locomotion gaits or patterns, as is well documented in the literature. How one can effectively generate the descending signals acting on these CPGs, however, remains comparatively unexplored. In this presentation we will discuss how this problem may be approached, including the impact of (locomotion) system complexity and how to circumvent it, problems linked to system uncertainty and learning, and practical control considerations, drawing on a number of concrete examples in robotics.
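
As a minimal sketch of this two-layer idea (a toy example, not a controller from the talk), the snippet below implements a chain of coupled phase oscillators whose frequency and amplitude are set by a single descending "drive" value, the kind of signal a higher cognitive layer would provide; the oscillator count, gains and phase lag are all assumptions for illustration.

```python
# Coupled-oscillator CPG modulated by a descending drive (illustrative only).
import numpy as np

n_osc, dt = 4, 0.01                    # four oscillators, e.g. one per joint
phase = np.zeros(n_osc)
phase_lag = np.pi / 2                  # desired lag between neighbouring oscillators

def cpg_step(phase, drive):
    """One Euler step; the descending drive sets both frequency and amplitude."""
    freq = 0.5 + 1.5 * drive           # Hz, modulated by the higher-level command
    amp = 0.3 * drive                  # output amplitude (e.g. joint angle in rad)
    dphase = 2 * np.pi * freq * np.ones(n_osc)
    for i in range(n_osc):             # nearest-neighbour phase coupling
        for j in (i - 1, i + 1):
            if 0 <= j < n_osc:
                dphase[i] += 2.0 * np.sin(phase[j] - phase[i] - np.sign(j - i) * phase_lag)
    new_phase = phase + dt * dphase
    return new_phase, amp * np.sin(new_phase)   # joint setpoints for low-level control

drive = 0.5                            # descending signal from the "cognitive" layer
for _ in range(2000):
    phase, setpoints = cpg_step(phase, drive)
print(np.round(setpoints, 2))          # a travelling wave of joint setpoints
```

Changing the single scalar `drive` smoothly speeds up or slows down the gait, which is exactly the kind of low-dimensional interface between cognitive layer and actuation layer the abstract refers to.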

 

Lars Muckli is Professor of Visual and Cognitive Neurosciences and Director of fMRI at the Centre for Cognitive Neuroimaging (CCNI) in Glasgow, and Co-chair of the 7T MRI Imaging Centre of Excellence (ICE). He has worked for 20 years in the field of fMRI and multi-modal brain imaging. His work focuses on brain imaging of cortical feedback, investigation of layer-specific fMRI, and multi-level, cross-species computational neuroscience. Before coming to Glasgow in 2007, Lars Muckli worked at the Max Planck Institute for Brain Research with Rainer Goebel (1996-2000) and Wolf Singer (1996-2007). His team was the first to use multivariate analysis for the different cortical depth layers, and applied these methods to retinotopic regions of early visual cortex when these regions were not directly visually stimulated. They found a counter-stream of information passing through primary visual cortex (V1): feedforward information enters V1 in the mid-layers (layer IV), whereas contextual feedback to V1 projects to the superficial layers.

 

Lecture 1: Internal models and counterfactual cognition – predicting our environment

Historically, the cognitive turn was the shift away from behaviourism – the recognition that brains create a rich internal world that performs at will many different specialised tasks and shifts between them. The predictive coding framework departs from the divide between perception, cognition and action, and provides a unifying framework in which the organism minimises surprise. We investigated internal models and gathered evidence for cognitive processes in early sensory systems. Normal brain function involves the interaction of internal processes with incoming sensory stimuli. We have created a series of brain imaging experiments that sample internal models and feedback mechanisms in early visual cortex. Primary visual cortex (V1) is the entry stage for cortical processing of visual information. We can show that there are two information counter-streams, concerned with (1) retinotopic visual input and (2) top-down predictions from internal models generated by the brain. Our results speak to the conceptual framework of predictive coding, whereby internal models amplify or attenuate incoming information. Healthy brain function strikes a balance between the precision of predictions and prediction updates based on prediction error. Our results draw on state-of-the-art, layer-specific ultra-high-field fMRI and other imaging techniques. A future direction will be to investigate internal models that mind-wander away from the here and now and simulate alternative scenarios.
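
For readers who want to see the core mechanism in miniature, the sketch below is a toy predictive-coding loop (an illustration of the general framework, not the models used in this work): a generative matrix W produces top-down predictions, and the prediction error drives updates of the internal estimate; all sizes, the learning rate and the data are invented.

```python
# Toy predictive-coding loop: top-down predictions vs. prediction errors.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 4))                   # generative model: hidden causes -> input
r_true = rng.normal(size=4)
x = W @ r_true + 0.05 * rng.normal(size=16)    # noisy sensory input

r = np.zeros(4)                                # internal estimate of the hidden causes
lr = 0.02
for _ in range(500):
    prediction = W @ r                         # top-down prediction of the input
    error = x - prediction                     # prediction error ("feedforward" signal)
    r += lr * (W.T @ error)                    # update the estimate to reduce the error

print("remaining prediction error:", round(float(np.linalg.norm(x - W @ r)), 3))
```

Once the internal model explains the input well, only a small residual error remains to be carried forward, which is the intuition behind the two counter-streams described in the abstract.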

 

After his studies in Neurobiology at the University of Nijmegen and the University of Amsterdam (UvA), Cyriel Pennartz obtained his PhD in Neuroscience cum laude at the UvA (1992) with Fernando Lopes da Silva and Henk Groenewegen. His PhD research elucidated physiological functions and plasticity of limbic-striatal circuits, and was partly conducted at the University of Tennessee at Memphis with Stephen Kitai. He continued as a postdoctoral fellow in Computational Neuroscience at the department of Physics of Computation of the California Institute of Technology, working with John Hopfield. In 1993 he became Assistant Professor at the Netherlands Institute for Brain Research, where he initiated a research line on the cellular electrophysiology of the brain's circadian clock and extended his research on limbic-striatal circuits to recordings in freely behaving animals. He was the first in the Netherlands to develop in vivo ensemble recording techniques using tetrode arrays, working with Prof. Dr. Carol Barnes and Prof. Dr. Bruce McNaughton at the University of Arizona in Tucson. In 2002, Pennartz became senior group leader at the Netherlands Institute for Brain Research and was appointed Special Professor in Cognitive Neurobiology at the University of Amsterdam. In 2003 he was promoted at the same institution to Full Professor in Cognitive and Systems Neuroscience, leading a group of ~20 people. In this role he has worked to integrate empirical neuroscience, computational theory and theories of perception, memory and consciousness, and published a monograph on this topic with MIT Press in 2015. Cyriel Pennartz is the elected leader of one of the 12 subprojects of the EU FET Flagship Human Brain Project (Systems and Cognitive Neuroscience) and co-leads the Brain & Cognition section of the Dutch National Science Agenda. Chief characteristics of his work are its multidisciplinary and integrative approach to neuroscience, combined with theory- and model-driven experimentation and technological innovation.

 

Lecture 3: Can robots ever become conscious? Insights from theory and experimental neurobiology

The last three decades have witnessed several theories of brain-consciousness relationships, but it is still poorly understood how brain systems may fulfill the various requirements and characteristics we associate with conscious experience. This seminar will first consider the basic requirements for generating experiences set in different modalities, given the rather uniform nature of signal transmission from periphery to brain. We will next examine a few experimental approaches relevant to understanding basic processes underlying consciousness, such as changes in population behaviour during sensory detection, as studied with multi-area ensemble recordings. For visual detection, sensory cortices have been a long-standing object of study, but it is unknown how neuronal populations in these areas process detected and undetected stimuli differently. We will examine whether visual detection correlates more strongly with the overall response strength of a population or with the heterogeneity of its responses. Next we will discuss multimodal integration by asking how "visual" the visual cortex is when tested in a multisensory setting. Finally, we will return to long-standing theoretical issues: (i) the Explanatory Gap, i.e. the gap we face when comparing, and seeking to connect, qualitatively rich conscious experience with the raw physiological activity of neural substrates, and (ii) the requirements for designing empirically testable theories of consciousness-brain relationships. A productive way forward comes from thinking about world representations as perceptual inferences, set across different levels of computation and representation, ranging from cells to ensembles, multi-area networks and yet larger aggregates. I will argue that this way of thinking helps to delineate what requirements robots may be expected to fulfill in order to qualify as conscious beings, and how to test this.
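
To make the contrast between the two population measures concrete, the snippet below computes a per-trial response strength (mean spike count across neurons) and a per-trial heterogeneity (coefficient of variation across neurons) on simulated data; the firing rates and trial counts are invented and carry no empirical claim about the recordings discussed in the talk.

```python
# Population response strength vs. heterogeneity on simulated spike counts.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_neurons = 50, 40
rates_detected = rng.uniform(3, 10, size=n_neurons)      # hypothetical mean rates
rates_undetected = rng.uniform(2, 8, size=n_neurons)
detected = rng.poisson(rates_detected, size=(n_trials, n_neurons))
undetected = rng.poisson(rates_undetected, size=(n_trials, n_neurons))

def population_measures(counts):
    strength = counts.mean(axis=1)                # mean count across neurons, per trial
    heterogeneity = counts.std(axis=1) / strength # coefficient of variation across neurons
    return strength.mean(), heterogeneity.mean()

print("detected   (strength, heterogeneity):", np.round(population_measures(detected), 2))
print("undetected (strength, heterogeneity):", np.round(population_measures(undetected), 2))
```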

 

Tony Prescott is Professor of Cognitive Robotics, a Fellow of the British Psychological Society, and Director of Sheffield Robotics, a research institute spanning both of Sheffield's universities with over one hundred active researchers. From a background that encompasses both psychology and artificial intelligence, his research has focused on developing animal-like robots controlled by programs that simulate the brain, both as a path to better understanding natural intelligence and as a way to create useful new technologies. With his collaborators he has developed the whiskered robots Scratchbot and Shrewbot, and control architectures for active sensing and autobiographical memory for the iCub humanoid robot, which he is currently developing within the EU FET Flagship Human Brain Project. He is also involved in several ongoing projects to develop commercial robots, including the companion robot pet MiRo and an intelligent assistive table. His research has been reported in over 200 research articles and papers, and has been covered by major news media including the BBC, CNN, the Discovery Channel, Science Magazine, Wired and New Scientist.

 

Lecture 2: Making our selves: From psychology to robotics

This talk will explore how the approach of neurorobotics could enhance our understanding of the human condition by creating robotic models of the human sense of self. Beginning in philosophy and psychology, I will argue that a synthetic approach can address unanswered questions about the nature and emergence of the self, in evolution and development, and test hypotheses about the functional brain architecture underlying the human experience of being an embodied self. Following Ulric Neisser, I will argue for a decomposition of the human self into multiple processes relating to its physical, temporal, social, and narrative aspects, and will focus on the role of autobiographical memory in generating the temporal self and the human capacity for mental time travel. I will conclude by reporting on progress in developing a systems-level model of autobiographical memory as part of a broader brain-based architecture for social cognition in the iCub humanoid robot.

 

Shravan Tata Ramalingasetty obtained his Bachelor of Engineering (B.E.) degree in Mechatronics from the Manipal Institute of Technology, Manipal (2013). During his bachelor's he worked on a thesis titled "Human Motion Analysis using Inertial Sensors" in collaboration with the Indian Institute of Science (IISc), Bangalore. He then continued at IISc as a Research Assistant in the Computational Intelligence Laboratory (CInT), working on topics related to human biomechanics. He later joined the Delft University of Technology and graduated with a Master of Science (M.Sc.) degree in Biorobotics (2016) after completing a thesis titled "Cerebellum Inspired Computational Models for Robot Control". He then joined the BioRob laboratory at EPFL in 2016 for his PhD, funded by the Human Brain Project (HBP), to study the underlying mechanisms of locomotion in animals and humans.

 

Lecture 7: Neuroscience and robotics

In this talk I will mainly present research carried out at the BioRob laboratory at EPFL. Topics will range from quadruped animal and robot locomotion, through humanoid robots, swimming, and human locomotion, to evolutionary robotics. The talk will also address work on mouse locomotion carried out as part of the Human Brain Project.

 

 

Lecture 8: The social robot: Emotions and drives

In a world of constant technological development, it is only natural to observe a shift from industrial robots to robots that are social and act in human environments. The challenge robots face in integrating into human society is multidimensional: safety is important, as robots will interact with humans at relatively close range; furthermore, they will need to be understood and accepted by humans as communication partners, and to understand the environment in which humans live. The introduction of robots into our daily lives requires them to adopt social roles in dynamic environments that rarely involve well-defined tasks. Several domains could benefit from social robots, including education, entertainment and healthcare. Indeed, humans could benefit from social robots that act as caretakers for the elderly or, more generally, as companions. This raises a fundamental question: do humans interact with robots in a similar way to how they interact with other humans, especially with their peers?

 

Course Directors

Lars Muckli | The University of Glasgow
Tony Prescott | The University of Sheffield