Feature

A modern synthesis for brain modelling

08 August 2022


Virtual brains and detailed microcircuit models are changing neuroscience, yet the elusive multiscale model of the human brain still escapes us. A new approach that goes beyond the standard top-down versus bottom-up dichotomy is not only possible but necessary, argue Egidio D’Angelo and Viktor Jirsa. In a recent review article published in Trends in Neurosciences, the HBP researchers propose building an array of plug-and-play tools to bring together worlds of neuroscience that were previously kept separate.

Imagine you wanted to study a part or a function of the brain by creating a computer model of it. Should you look at the area involved in high detail, mapping individual neurons? But brain areas are highly interconnected - you would be missing the bigger picture. Should you instead try to model the brain as a whole, inevitably missing key details of neuronal activity? The ideal choice would be to do both: a highly detailed model of the area, inserted into a whole-brain network that responds to localized changes.

Until recently, models were built at scales that were rarely compatible - puzzle pieces with mismatched edges. Brain simulations are more and more advanced, but a so-called “multiscale” model has never been a mainstay of modern neuroscience. In a new standpoint paper published in Trends in Neurosciences, Egidio D’Angelo (Director of the Brain Connectivity Centre in Pavia) and Viktor Jirsa (Director of the Institut de Neuroscience des Systèmes in Marseille and Chief Science Officer of EBRAINS) claim not only that this elusive multiscale model is within reach, but also that it should become the new paradigm of neuroscience, the new approach to brain modelling. This approach is already being implemented in the Human Brain Project: the first multiscale model has been published by an HBP research team led by Petra Ritter, and another HBP collaborative project involving research groups from six countries has unveiled new mechanisms of brain plasticity by using multiscale simulations. But why is the multiscale approach so important for understanding the brain?

“This is one of the biggest questions in neuroscience today, and it is a philosophical one - how do you infer from your data what’s actually happening within the brain?” explains Jirsa. “Fundamentally, we have no easy way of knowing the correspondence between brain activity and brain function. We can observe and map activities in ever-increasing detail, but we don’t actually know which function comes from which activity.” This phenomenon has been given the name neurodegeneracy, and it is one of the main threads of investigation of the HBP: many complex factors contribute to the same brain function, and too many brain configurations could explain the same signals.
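To make the idea of degeneracy concrete, here is a minimal sketch in Python. It is not taken from the paper; the toy model and all its parameters are invented for illustration. Two very different excitation-inhibition configurations produce exactly the same recorded signal, so the signal alone cannot identify which configuration generated it:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)            # 1 s of simulated "recording"
drive = np.sin(2 * np.pi * 10 * t)         # shared 10 Hz input drive

def recorded_signal(w_exc, w_inh):
    """Toy 'measurement': only the net balance of excitation and
    inhibition is observable, not the individual weights."""
    return (w_exc - w_inh) * drive

# Two very different underlying configurations...
weak = recorded_signal(w_exc=1.2, w_inh=0.2)
strong = recorded_signal(w_exc=5.0, w_inh=4.0)

# ...yield identical macroscopic signals: the data cannot tell them apart.
print(np.allclose(weak, strong))           # True
```

Inverting such a measurement to recover the underlying parameters is ill-posed, which is precisely why observing activity at one scale alone cannot pin down what is happening at another.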

Faced with incompatible scales, inference in neuroscience has so far been a one-way street - or, more precisely, two parallel streets going in opposite directions. The traditional approaches are called top-down (you look at the whole brain and extrapolate what’s happening at a smaller scale) and bottom-up (you analyze a smaller-scale phenomenon and draw conclusions about what happens in the whole brain). The problem is that these inferences do not travel back: if you infer the general from the particular, you cannot then use the general to recover the particular, and vice versa.

The increasing specialization of the field has also led neuroscientists to focus narrowly on a single type of model. Scientists work with microcircuits, down to the individual neurons; or with whole-brain models, looking at probability clouds; or with task-driven behavioural models, usually involving robots processing sensorimotor data. “In the past, researchers from different fields used to live in separation from each other, taking their own direction, subjected to their own biases and restrictions,” says D’Angelo. “We needed hybridization, a form of dialectical synthesis.” Such a synthesis would not consist of merely converting the scale of a model into your preferred one, but rather of considering all the models at the same time, each at its respective scale.

In a certain sense, a similar issue has been faced by the theory of biological evolution through natural selection. All evolutionary researchers agree that evolution does happen, but at what level does it happen - what is the unit of selection? Is it the individual organism, or its cells? Or its genes, with the organism as a mere carrier? Or is it the familial group of the organism, which benefits from the evolutionary advantages of the individual? Can we extend this group even further, to include populations and symbiotic species? The scale at which evolution happens has been and still is the biggest point of contention among evolutionary theorists, and it has led some to propose a synthesis of apparently incompatible views: multi-level selection, in which evolution acts simultaneously on multiple levels of biological organisation, like nested Matryoshka dolls. While this approach is far from universally accepted, it shows that the scale at which to consider a problem is a problem in itself even in other disciplines, and that complex phenomena exist at different levels that have to be taken into account at the same time in order to be understood.

“It is not a matter of can but a matter of should,” elaborates Viktor Jirsa. “A lot of neuroscience has been and is being produced currently – detailed data and models, soundly executed, that ultimately mean very little without integration into a larger context. Responsible research should ask itself these questions: how much data is enough? Is the model identifiable? How can we provide diagnostics ensuring the quality of workflows? I believe it is imperative to consider these implications; it should be the guiding principle of modern neuroscience.” D’Angelo adds: “It’s an epistemological issue, from which we can derive a concrete research strategy with predefined approaches.”

As an example of what can be discovered when considering multiple scales at the same time, D’Angelo points to a recent series of works carried out within the HBP: a simulation of an area of the cerebellum. “We built neurons of the cerebellum at the microscale, then fired up the simulation. It just wasn’t explaining some fundamental properties of neuronal electrogenesis; what we were seeing wasn’t reflecting the known literature about the cells,” explains D’Angelo. “So, we looked at the empirical data for comparison and were able to postulate the existence of a class of channels and gating properties that was absent from the simulation. We identified these missing mechanisms when recording from the real cells.” The models postulated the existence of new mechanisms that were subsequently demonstrated experimentally. The same models, once properly transformed, have been inserted into robotic controllers and virtual brains, propagating cellular properties across multiple scales. “Operating at the multiscale gave us gaps to fill, a direction to follow,” adds Jirsa.

Possibilities emerge, like inserting spiking neural networks, a typical bottom-up model, into the nodes of virtual brain models, which exemplify the top-down strategy. This is exactly what D’Angelo is doing in collaboration with other HBP researchers. Another possibility is inserting spiking neural networks into functional robotics models and having them generate signals. “We are combining the approaches in a way that resituates models that work,” says D’Angelo. In their standpoint paper, the authors point out that this approach isn’t merely “an interesting thing you can do with your models,” but should rather be considered the path for future neuroscience. This concept is also included in a position paper outlining a vision for the coming decade of digital brain research – a collaborative, living document set up by the HBP and open to comments from the whole neuroscience community.
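As a rough illustration of what such an embedding might look like, here is a minimal, self-contained Python sketch - a toy stand-in, not the HBP or EBRAINS co-simulation code, with all names, equations and parameters invented. One node of a small whole-brain rate network is replaced by a population of leaky integrate-and-fire cells: the network drives the spiking population (top-down), and the population's firing rate feeds back into the network (bottom-up):

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Macroscale: a toy 4-node whole-brain rate network ----------------
n_nodes = 4
W = rng.uniform(0.0, 0.5, (n_nodes, n_nodes))   # toy structural connectivity
np.fill_diagonal(W, 0.0)
rates = np.zeros(n_nodes)        # node activities (arbitrary units)
tau_node = 20e-3                 # node time constant (s)

# --- Microscale: node 0 is a population of leaky integrate-and-fire cells
n_cells = 200
v = np.zeros(n_cells)            # membrane potentials (dimensionless)
v_thresh, v_reset = 1.0, 0.0
tau_m = 10e-3                    # membrane time constant (s)

dt, t_end = 1e-3, 1.0
for _ in range(int(t_end / dt)):
    # top-down: the rest of the network drives the spiking population
    drive = 1.5 + W[0] @ rates + 0.5 * rng.standard_normal(n_cells)
    v += dt / tau_m * (-v + drive)
    spiked = v >= v_thresh
    v[spiked] = v_reset

    # bottom-up: the population firing rate becomes node 0's activity
    rates[0] = spiked.mean() / dt / 100.0    # rescaled to toy units

    # the remaining nodes follow an ordinary coupled rate equation
    inputs = W @ rates
    rates[1:] += dt / tau_node * (-rates[1:] + np.tanh(inputs[1:]))

print("final node activities:", np.round(rates, 3))
```

The point of the sketch is the two conversion steps inside the loop: the macroscale signal is translated into an input current for the microscale model, and the microscale spikes are aggregated back into a rate the macroscale model can use, so neither scale is merely converted into the other.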

So, what needs to be built to make sure that multiscale models can become a mainstay in the field? Workflows, tools and plug-and-play devices that seamlessly integrate scales with each other, like the different-sized gears of a single machine; a readaptation of inference techniques, which have long been standard methods in neuroscience - a toolkit, essentially, to help current and future neuroscientists work at the multiscale. This toolkit is actually already available: the EBRAINS research infrastructure, built by the Human Brain Project, has made these initial experiments possible and could enable even more in the future by linking data, models and methods at a large scale.
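One way to picture such a plug-and-play toolkit is a common stepping interface that models at any scale can implement, so that simulators compose like the gears in the metaphor above. The following Python sketch is purely hypothetical; the protocol, class and function names are invented and are not EBRAINS APIs:

```python
from typing import Protocol

class ScaleModel(Protocol):
    """Hypothetical contract for a plug-and-play model at any scale."""

    def step(self, dt: float, external_input: float) -> float:
        """Advance the model by dt and return its summary output signal."""
        ...

class ToyRateNode:
    """Example implementation: a single relaxation-rate node."""

    def __init__(self, tau: float = 0.02, drive: float = 1.0):
        self.tau, self.drive, self.r = tau, drive, 0.0

    def step(self, dt: float, external_input: float) -> float:
        # leaky integration toward the node's drive plus coupled input
        self.r += dt / self.tau * (-self.r + self.drive + external_input)
        return self.r

def run_coupled(models: list[ScaleModel], coupling: float,
                dt: float = 1e-3, n_steps: int = 1000) -> list[float]:
    """Couple heterogeneous models by feeding each the mean of all outputs."""
    outputs = [0.0] * len(models)
    for _ in range(n_steps):
        mean_out = sum(outputs) / len(outputs)
        outputs = [m.step(dt, coupling * mean_out) for m in models]
    return outputs

# Any object with a compatible step() - a spiking-network wrapper, a
# robotics controller - could be dropped into the same list.
print(run_coupled([ToyRateNode(0.02, 1.0), ToyRateNode(0.05, 0.5)], coupling=0.3))
```

The design choice is the interesting part: once every model, whatever its internal scale, exposes the same narrow interface, workflows can swap components in and out without rewriting the simulation loop.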

“Implementing this scheme in EBRAINS in a structured manner could really let us play. Applying such a toolkit to robotics, in particular, seems very promising,” D’Angelo and Jirsa suggest. “This could lead to the integration of different HBP work packages. When we started, we simply did not know how to integrate connectome simulations with neuro-robotics. In the past, we didn’t even know from which direction to approach it. Now, we have a starting point, and the first tools for the job.”

Text by Roberto Inchingolo

Reference: Egidio D’Angelo and Viktor Jirsa, “The quest for multiscale brain modeling,” Trends in Neurosciences (2022). DOI: https://doi.org/10.1016/j.tins.2022.06.007