Paper Digest

New algorithms enable artificial intelligence to learn like the human brain

29 June 2023


Researchers from the Human Brain Project have taken inspiration from the most evolved region of the human brain – the prefrontal cortex – to advance learning in artificial neural networks. Their work has recently been published in PLOS Computational Biology.

Humans have the remarkable ability to learn different tasks in succession. For example, if we first learn to sort apples according to size and then learn to sort the same apples according to colour, it is easy for us to remember later what we have learned in both tasks. For an artificial neural network, this seemingly simple operation is exceedingly difficult.

Learning an individual task per se is not a problem for a neural network; however, when learning a second task, the network completely forgets what it has learned in the first one. In machine learning terms, this failure to remember has been aptly called catastrophic forgetting.

Researchers in computational neuroscience are attempting to teach neural networks to learn continually, without catastrophic forgetting, by introducing new algorithms into their operations. The ultimate goal is to build neural networks that reach human levels of performance.

Now, researchers at the University of Oxford, involved in the Human Brain Project, have successfully built a neural network that models human-like continual learning. In contrast to a standard neural network, their modified network learned the equivalents of the apple-sorting tasks very well when one task was trained after the other – resembling the way humans learn – whereas the standard network performed better when it had to learn both tasks simultaneously.
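To make the two training regimes concrete, here is a minimal Python sketch of the curricula being compared: one task trained after the other versus both tasks mixed together. The task names, batch counts, and data structures are illustrative assumptions, not details from the paper.

```python
import random

# Two toy 'sorting' tasks, standing in for the paper's two judgement
# dimensions (size and colour). Each tuple is one illustrative training batch.
task_a_batches = [("size", i) for i in range(100)]
task_b_batches = [("colour", i) for i in range(100)]

# Blocked curriculum: finish task A before starting task B.
# This is the human-like regime in which the modified network excelled.
blocked = task_a_batches + task_b_batches

# Interleaved curriculum: the same batches mixed together at random.
# This is the regime in which the standard network performed better.
interleaved = task_a_batches + task_b_batches
random.shuffle(interleaved)
```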

The prefrontal cortex, the part of the brain that evolved most recently and is particularly strongly developed in humans, has often been hypothesised to underlie the ability to switch between different tasks. The region is thought to actively maintain relevant task information and suppress irrelevant task information over time, a process called context-dependent gating. The authors hypothesised that storing learned tasks in separate, independent neural representations prevents the kind of forgetting seen in standard neural networks.

To test their hypothesis, the researchers introduced two new algorithms into the network, both inspired by the way the human prefrontal cortex works. The first algorithm made the network's units (its artificial neurons) slower to respond, on a millisecond timescale, resembling the prefrontal cortex's ability to maintain task information over time. Slowing the neurons' responses ensured that their signalling history 'added up': each signal was partly shaped by past signals, which is favourable for continual learning.
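One way to picture this mechanism is as a leaky unit whose activity decays exponentially instead of resetting at every time step. The sketch below is a minimal Python illustration under that assumption; the function name, the decay constant, and the toy cue input are all hypothetical, not the authors' implementation.

```python
import numpy as np

def slow_unit_response(inputs, decay=0.9):
    """Leaky unit activity with exponentially decaying memory: each step's
    response mixes the previous response with the new input, so the unit's
    signalling history 'adds up' over time. The decay value is illustrative.
    """
    response = np.zeros(inputs.shape[1])
    trace = []
    for x in inputs:  # inputs has shape (time steps, units)
        response = decay * response + (1.0 - decay) * x
        trace.append(response.copy())
    return np.array(trace)

# A brief task cue at t=0 keeps influencing activity long after it ends.
cue = np.zeros((50, 4))
cue[0] = [1.0, 0.0, 0.0, 0.0]
trace = slow_unit_response(cue)
print(trace[0], trace[10])  # the cue's effect fades gradually, not instantly
```

Because the update is effectively an exponential moving average, a short task cue keeps shaping the unit's activity well after the cue itself has ended, which is what lets task information persist over time.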

The second algorithm was inspired by the simple biological principle that 'neurons that fire together wire together', known as Hebbian learning. It ensures that connections between units encoding information relevant to the current task are specifically strengthened, while task-irrelevant information is suppressed. Previous solutions to the continual learning problem implemented this context-dependent gating by hand; here, the authors demonstrate that a simple, biologically plausible mechanism, Hebbian learning, allows the model to learn the gating process by itself, from scratch.
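A minimal Python sketch of such a Hebbian gating update is shown below. The weight layout, learning rate, and toy co-activity pattern are illustrative assumptions rather than the authors' model; the point is only that co-activity alone, with no hand-designed gating, lets each task cue recruit its own subset of units.

```python
import numpy as np

def hebbian_update(w, context, hidden, lr=0.01):
    """'Fire together, wire together': strengthen each context-to-hidden
    connection in proportion to the co-activity of the task-cue unit and
    the hidden unit. All names and the learning rate are illustrative.
    """
    return w + lr * np.outer(context, hidden)

rng = np.random.default_rng(0)
w = np.zeros((2, 8))  # 2 task cues projecting onto 8 hidden units

for _ in range(200):
    context = np.array([1.0, 0.0])  # task A's cue is active
    # Suppose only the first four hidden units happen to be co-active
    # with task A (an assumed pattern, for illustration).
    hidden = rng.random(8) * np.array([1, 1, 1, 1, 0, 0, 0, 0])
    w = hebbian_update(w, context, hidden)

print(w.round(2))  # task A's cue now preferentially gates the first four units
```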

By applying insights from brain research to update neural networks, the study provides a step forward in machine learning and towards neuro-inspired technology, such as smarter robots.

Text by Matthijs de Boer

Reference:

Timo Flesch, David G Nagy, Andrew Saxe & Christopher Summerfield. Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals. PLoS Comput Biol. 2023 Jan 19;19(1):e1010808. doi: 10.1371/journal.pcbi.1010808