Can AI tell us how the brain shapes what we see and don’t see?

    08 October 2021


    HBP scientists show how self-supervised deep learning can help to model the brain's visual system

    Neuroscientists at the University of Glasgow have shown how recent advances in Artificial Intelligence can help us to understand the human visual cortex, the part of the brain that processes visual information.

    The study is based on the idea that the brain works as a prediction machine: in addition to receiving sensory information, it actively generates predictions of what we are going to see next. Signals circulate in feedforward and feedback loops throughout the brain, and perception arises as these predictions are compared with the incoming sensory input until the two match.
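
    The following toy Python sketch, an illustrative assumption rather than a model from the study, shows the basic logic of such a prediction loop: an internal estimate is repeatedly updated to reduce the mismatch between its prediction and the incoming sensory signal.

    import numpy as np

    sensory_input = np.array([0.2, 0.9, 0.4])        # what actually arrives from the senses
    estimate = np.zeros(3)                           # the brain's current internal estimate
    learning_rate = 0.3

    for step in range(20):
        prediction = estimate                        # feedback: the prediction sent "down"
        error = sensory_input - prediction           # feedforward: the mismatch sent back "up"
        estimate = estimate + learning_rate * error  # update until prediction and input match

    print(estimate.round(3))                         # converges toward the sensory input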

    “Finding out how predictions contribute to our perception is crucial for the understanding of human vision,” explains Lars Muckli, Professor of Cognitive Neuroscience at the University of Glasgow. Several research teams in the Human Brain Project are working on human vision, with projects underway to apply these insights to robotics and medical devices to help the blind.

    A possible way to address this question is to describe the visual part of the brain using mathematical and engineering models: designing such models and comparing their output with real brain data is the central goal of computational neuroscience. “Computational neuroscience has always used the latest statistical modelling and algorithms to create more sophisticated models of visual processing,” says Dr. Michele Svanera, the team's lead researcher.

    Over the past decade, Deep Learning has become one of the most powerful tools for this. So far, however, most scientists have used supervised learning models, such as Convolutional Neural Networks (CNNs). In this most common form of AI, the “correct” results are predetermined and used as constant feedback to train the model. Muckli and his team have now explored a different approach: they used a type of deep learning that is unsupervised, or more precisely self-supervised, meaning the system has to find ways of making sense of the data in a much freer way.
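
    To make the distinction concrete, here is a minimal self-supervised inpainting sketch in Python using PyTorch. The architecture, sizes and training step are illustrative assumptions, not the Glasgow team's model; the key point is that the target is the image itself, so no human-provided labels are needed.

    import torch
    import torch.nn as nn

    # Hypothetical tiny encoder-decoder, for illustration only.
    class TinyInpaintingNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinyInpaintingNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    image = torch.rand(1, 3, 64, 64)       # stand-in for a natural image
    mask = torch.ones_like(image)
    mask[:, :, 32:, 32:] = 0.0             # occlude the lower-right quadrant

    # Self-supervised objective: the target is the image itself, learned from
    # the statistics of the visible pixels; no human labels are involved.
    occluded = image * mask
    prediction = model(occluded)
    loss = ((prediction - image) ** 2 * (1 - mask)).mean()   # error only on occluded pixels

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()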

    In the study, the Glasgow researchers trained an artificial network to predict the missing, occluded part of an image in a self-supervised way, relying only on image statistics and without any supervision. They then compared the activations that emerged in the different layers of the network with brain activity recorded from a patch of early visual cortex in human participants who were viewing the same partially occluded images.
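
    The comparison between network layers and brain recordings can be illustrated with a simplified representational similarity analysis in Python; the data shapes below are placeholders and the analysis choice is an assumption for illustration, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    n_images = 24   # partially occluded stimuli shown to both the network and the participants
    layer_features = rng.standard_normal((n_images, 512))   # activations from one network layer (placeholder)
    voxel_patterns = rng.standard_normal((n_images, 200))   # responses from an early-visual-cortex patch (placeholder)

    def rdm(responses):
        """Representational dissimilarity matrix: 1 - correlation between stimulus patterns."""
        return 1.0 - np.corrcoef(responses)

    layer_rdm = rdm(layer_features)
    brain_rdm = rdm(voxel_patterns)

    # Compare the two representational geometries via their upper triangles.
    iu = np.triu_indices(n_images, k=1)
    similarity = np.corrcoef(layer_rdm[iu], brain_rdm[iu])[0, 1]
    print(f"Layer-to-brain representational similarity: {similarity:.3f}")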

    The self-supervised network completed the task in a more brain-like manner, showing much greater similarity to the activation patterns observed in the visual system of the human participants. For the researchers, this result points to a new approach in which AI is challenged to replicate brain processing as closely as possible while performing cognitive tasks, such as memory, visual imagery, and auditory processing. The new AI-based modelling approach could help improve our understanding of information processing in the brain, such as neuronal predictive coding, the scientists write.

    All data and the model are openly available on the EBRAINS research infrastructure.

    Other resources

    Full paper: https://doi.org/10.1167/jov.21.7.5
    Project website: https://rocknroll87q.github.io/self-supervised_inpainting/

    Can deep learning networks learn to predict missing parts of a photo as well as a human can? The predictive coding network developed by the Glasgow team solves the task with a brain-like approach. When humans are given the same task, their brain activity patterns resemble the network's activity.