Will the HBP compete with on-going neuroscience research?
Not at all. What the project will do is add a completely new, meta-level approach to the strategies neuroscientists already use in their work. We’re not talking about a zero-sum game: the work we’re doing will benefit everyone, accelerating the pace of research throughout the neuroscience community. The HBP’s ICT platforms will make neuroscience data and knowledge easier to access; new informatics tools, provided through the platforms, will help researchers to find general organizing principles hidden in the data; modelling and simulation tools will allow them to situate their data and discoveries in the context of a single integrated system of data and knowledge. Understanding the human brain is one of science’s and society’s greatest challenges. With its platforms, the HBP aims to spark a global collaboration to address that challenge.
One of the main goals of the HBP is to integrate huge volumes of neuroscience data in unifying models. Why do you think you can succeed where decades of effort have failed?
We shouldn’t say those efforts have failed. When we talk about unifying models, we mean models we can use for in silico experiments – models that capture everything experimental neuroscience can tell us about the biology of the brain and everything theory can represent mathematically. But until very recently we simply didn’t have the computing power and informatics tools to build this kind of model. Only now can we begin to build multiscale models we can validate against detailed data from biological experiments. Obviously there are still many gaps in our knowledge and many technical limitations. However, the HBP has a pragmatic, step-by-step strategy that allows us to address and solve these problems one at a time – continuously improving the accuracy of our models as we go along. It is this strategy that makes the project unique and feasible.
Will the HBP make animal experimentation unnecessary?
Unifying brain models are driven by experimental data – so without experiments the project would not be possible. What the HBP will do is make experiments even more valuable than they are today. The HBP will allow new experimental datasets to “live” within unifying brain models and brain simulations, interacting with the other datasets incorporated in the model. This will not just add value to the data – it will enable us to address completely new scientific questions. For instance, we can put different datasets together to predict the behaviour of synaptic pathways – and validate the predictions against experimental results. Once the method has been validated for a reasonable number of pathways, we can use it to predict the behaviour of pathways that have never been measured. Inevitably, this will make some experiments unnecessary. But it is also one of the basic strategies that makes the HBP feasible. In practical terms, it would take centuries to measure every possible synaptic connection, in every different species, at different ages, in every possible state of health and disease. And this is not just true for synapses. We will never be able to completely map the brain with experiments alone. What we need are organizing principles that allow us to make predictions about features of the brain, predictions we can validate against experimental results. When they fit, we can use them to predict features of the brain that are difficult or impossible to map experimentally. But when they don’t, we will need new animal experiments. The models will point to the experiments likely to yield the most valuable information. In brief, the HBP will not eliminate animal experiments. What it will do is help us get more value from our experiments and choose the experiments we really need.
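As a rough illustration of this validate-then-predict workflow, here is a minimal sketch in Python. Everything in it – the pathway names, the connection probabilities and the predictive rule – is hypothetical and intended only to show the shape of the loop; it is not HBP code or data.

    # Toy sketch of the validate-then-predict workflow described above.
    # All names and numbers are invented illustrations, not HBP data.

    # Measured synaptic pathways: (presynaptic type, postsynaptic type) -> connection probability
    measured = {
        ("L5_PC", "L5_PC"): 0.10,
        ("L4_SS", "L5_PC"): 0.08,
        ("L2_PC", "L5_PC"): 0.05,
    }

    def predict_connection_probability(pre: str, post: str) -> float:
        """A made-up 'organizing principle': predict connectivity from anatomy."""
        # In a real model this might derive from axon/dendrite overlap;
        # here it is just a placeholder constant.
        return 0.07

    # 1. Validate the principle against pathways that HAVE been measured.
    errors = [abs(predict_connection_probability(*k) - v) for k, v in measured.items()]
    print("mean absolute error on measured pathways:", sum(errors) / len(errors))

    # 2. If the fit is good enough, apply the principle to pathways
    #    that have NOT been measured.
    print("predicted, unmeasured pathway L6_PC -> L2_PC:",
          predict_connection_probability("L6_PC", "L2_PC"))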
Which brains will you study in HBP?
The HBP will focus primarily on mice and humans; in fact, we want to use mouse data to predict features of the human brain. But to find and test general principles of brain organization, we will have to look at other species as well. For instance, the best way to learn about principles of gene expression, and the impact of mutations on neurons, is to study simple organisms like C. elegans (a nematode worm) and Drosophila (the fruit fly).
Why not begin with simple organisms like C. elegans?
There are two problems here. The first is feasibility; the second is the relevance of our results.
Feasibility. Neuroscientists have mapped all of C. elegans’ 300 or so neurons. However, enormous amounts of the key data we need are still missing. For instance, we do not have enough data on the physiology and pharmacology of C. elegans neurons and synapses. And we still have limited data on the distribution of ion channels, receptors and other proteins on neurons, synapses and glia. Without this data we cannot build unifying models. A second problem is how easy it is to obtain the data. The crucial requirement for unifying models is the ability to access the data we need. Obtaining a deep understanding of the molecular machinery of a single neuron or a single synapse is just as difficult in C. elegans as in human beings. And many datasets – particularly data on cognition – are actually easier to acquire in rodents, or even in humans. So we can’t just say: “let’s do this quickly in worms and do complex brains later”. We have to solve the same basic challenges, whatever brain we model. What we are actually doing is building a generic strategy we can use to reconstruct any brain.
How is the focus on the mouse brain relevant to understanding the human brain?
Of course, the mouse and human brains are different in many respects. For example, the mouse brain is much smaller and has far fewer neurons; some brain regions are less developed than in humans, some are overdeveloped; some regions present in humans are not present in mice at all. However, mice, like humans, are mammals, and as such their brains have many features in common with ours: the main types of cells and their morphologies are the same; proteins are targeted to different parts of a neuron in the same way; and the electrical behaviour of cells and synapses, as well as the basic organization of the cortex and other brain areas, are very similar.
For ethical and technical reasons, mouse brains are more accessible than those of humans or non-human primates. This means that mice can provide us with huge volumes of data it would be difficult to generate in other ways. In mice, we can relate data obtained invasively (e.g. data for gene expression) to data from non-invasive imaging (MRI, fMRI, DTI etc.). The results can improve our ability to interpret non-invasive data from humans. Even though technology is improving continuously, there are many features of the brain that we may never be able to measure directly. For instance, we may never be able to reconstruct the detailed morphology of every neuron in the human brain. But if we can use mice to understand the organizational principles linking gene expression to neural morphology, we may be able to “synthesize” neurons from gene expression alone. It might in fact be possible to synthesize cell types we have never seen before even in mice.
This is just an example. There are many other cases like this, where we can use mice to find fundamental principles of biology that we can then apply to reconstructing the human brain.
Will the Human Brain Project consider learning?
Of course! Many of the partners in the project are world-leading researchers in synaptic plasticity, learning and memory. The HBP Cognitive Neuroscience and Theoretical Neuroscience subprojects have specific activities dedicated to understanding and modelling synaptic plasticity, learning and memory. One of the project’s most important goals is to build brain models and simulations that incorporate lessons from this work and to test them with simulated robots acting in increasingly complex and challenging environments.
What about glia?
Building unifying brain models means taking account of every aspect of biology, and glia are a key component of the brain, supporting neurons and controlling metabolism and blood flow. Incorporating them will be a step-by-step process. First we will build models that include the basic molecular machinery of cells and synapses. Then we will use synchrotron scans to map out the brain’s detailed vasculature. Finally we will be able to model the glia themselves.
Can we understand the brain using only a bottom-up strategy?
Probably not. To understand the brain we have to know what the brain does: its high level emergent behaviours. Otherwise, we wouldn’t be able to do much with our understanding of the detailed mechanics. But a pure top-down strategy is also not enough. Ultimately we want to understand how a genetic mutation or the wrong positioning of a protein in a cell affects behaviour; how a drug acting on a specific molecule can produce changes in cognition. For this we need multiscale models detailed enough to represent mutations and the positioning of molecules. In brief, studying the brain’s low-level mechanisms is not enough on its own to give us a complete understanding of cognition, but unless we study these mechanical details we will never understand how cognition emerges or the way it breaks down in disease. Historically there has been a lot of hostility between reductionist bottom-up modelling and simulation and the top-down modelling typical of cognitive neuroscience. But the time is ripe to bring them together. The Human Brain Project will help drive this convergence – making it finally possible to understand basic principles of cognition together with the underlying mechanics. At the same time, the project’s multiscale modelling approach will help to settle historical arguments about the level of biological detail necessary to explain specific cognitive capabilities.
Don’t you need a theory to understand the brain?
Theory is central to understanding the brain. Without theory, we can’t build any kind of brain model – simple or complex. If we want to build multiscale models we need theory to tell us how to move between different levels of description. If we are to understand the principles underlying the emergent cognitive functions of the brain, we need theory. The HBP recognizes this need and will dedicate some of its funding to the creation of a European Institute for Theoretical Neuroscience tasked with producing the theoretical tools the project needs.
Usually, however, when people talk about the need for theory, what really interests them is a philosophical question: what do we mean when we talk about “understanding the brain”?
Imagine we produce a theory that captures most of the brain’s cognitive capabilities but does not show how the parts of the brain work together to produce cognition. Such a theory would not be false, but there are questions it could not answer. For instance it wouldn’t be able to predict the effects of a drug or a genetic mutation. This points to a more general issue.
There is no single “correct” way of understanding the brain – it all depends on the questions you want to ask. A psychophysicist may want to understand the circuitry responsible for visual illusions; a geneticist will be interested in the way genes control behaviour; molecular and cellular biologists will focus on the way genes control proteins; neurochemists will want to understand the chemical reactions that supply the brain with energy. And so on. In brief, understanding the brain is very different from understanding temperature, the way the planets move around the sun, a chemical reaction or superconductivity. The brain is a system with many different levels of organization, and we can ask valid questions about any of them. So we need not just one theory but multiple theories. The theories we need should help us to build models and simulations that reproduce the capabilities that interest us. As Richard Feynman once put it, “What I cannot create, I do not understand.”
But isn’t the brain just too complicated to build and simulate?
Biologically detailed models can serve as a testing ground for theory, but they can also be useful for many other purposes. For example, they can make an essential contribution to experimental mapping of the brain. Imagine how many experiments we would need to map, describe and understand every part of the brain, in every species, at all possible ages, and in every possible state of health and disease. Just take the neocortical column, the smallest microcircuit in the neocortex. A single column contains about 3,000 different types of synaptic pathway (the connections between one type of neuron and another). Partially mapping the anatomy and physiology of just twelve of these pathways has taken twenty years and thousands of animals – and hasn’t yet provided all the molecular details we would like to see. Even with unlimited funding, it is impossible to conceive of a programme that could measure all the pathways in even a small part of the brain of a single animal, let alone a whole animal brain or the whole human brain.

But with alternative, predictive strategies, we can model and simulate the brain. The HBP strategy is to dig deep enough experimentally to obtain general organizing principles (e.g. principles determining how neurons connect with each other) and then to build models to test those principles. If a model cannot reproduce the known data, it tells us that our principles were wrong and points to the experiments we need to do to produce better principles. If it does reproduce the data, we can use the principles to predict data that is not yet known (e.g. data on synaptic pathways that have not yet been measured). Once these predictions have been validated and shown to be consistently correct, we can use our principles to fill gaps in our knowledge (e.g. knowledge about the anatomy and physiology of all 3,000 synaptic pathways) – a task that could take a century to complete if we had to rely on experiments alone. In other words, detailed biological modelling can turn experimental mapping of the brain into a tractable problem.

The brain has huge numbers of interacting parts: even if we had all the data we would need, simulating the relevant interactions would still be very difficult, especially at the molecular level. However, if we start work on the software now, and steer manufacturers to design machines that meet our requirements, supercomputers will soon give us enough computing power to at least begin molecular-level simulations. In particular, we need to develop software to support multiscale models. This would allow us to adapt our simulations to the computing power available, simulating active parts of the brain in great detail and inactive parts at a lower level of granularity. This will require a multilevel description of the brain, allowing us to develop models at different levels of resolution; multiscale theory, showing us how to switch dynamically between levels; systems software to manage the use of computing resources when switching scales; and, of course, supercomputers that can support dynamic reconfiguration of memory and communications when we change scales.

Another critical issue is interactive supercomputing. The big machines now on the horizon will generate so much data, so fast, that it could cost millions of dollars to move the data off the machine for analysis and visualization. This means we need new solutions where we don’t have to move the data – new middleware that allows us to perform computing, visualization and analysis simultaneously on a single machine.
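To make the multiscale idea concrete, here is a minimal sketch in Python of how a simulator might shift compute towards the most active brain regions. The region names, activity values, cost figures and the rebalance function are all invented for illustration and are not part of any HBP software.

    # Toy sketch of dynamic level-of-detail switching, as described above.
    # All names, thresholds and cost figures are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        activity: float          # some measure of current activity (0..1)
        detail: str = "coarse"   # "detailed" (molecular/cellular) or "coarse" (point model)

    def rebalance(regions, budget: float, detailed_cost: float = 10.0, coarse_cost: float = 1.0):
        """Give the most active regions the detailed treatment, within the compute budget."""
        for r in sorted(regions, key=lambda r: r.activity, reverse=True):
            if budget >= detailed_cost:
                r.detail = "detailed"
                budget -= detailed_cost
            else:
                r.detail = "coarse"
                budget -= coarse_cost
        return regions

    regions = [Region("V1", 0.9), Region("hippocampus", 0.7), Region("cerebellum", 0.1)]
    for r in rebalance(regions, budget=12.0):
        print(r.name, "->", r.detail)   # only the most active region gets full detail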
With so many parameters, you could tune your model to fit any function. What is the point?
If all our parameters were truly free this would be a good objection. But in biological systems just about every parameter depends on most of the others. For instance, the number of genes expressed by a cell has to be consistent with the number of proteins a cell can produce and maintain, which in turn depends on the packing density of proteins on the cell membrane, the way these densities change in plasticity, the number of synapses a neuron can support, neuron morphologies and electrical properties and so on. This huge number of constraints means we can’t tune our model any way we choose. More importantly it is one of the key factors that makes brain modelling tractable.
For example, if we know how many neurons there are in a part of the brain and we know their morphologies, we can predict synapse densities; if we know the fractional expression of a series of proteins and the absolute concentration of just one, we can work out the other concentrations by simple algebra (a toy example is sketched below). In brief, the interdependencies among variables mean it is actually much easier to calculate parameter values for a biologically detailed brain model than for a simpler, more abstract model. The essential task is not to measure every single parameter experimentally – this is not going to happen. What we really have to do is look for organizing principles – interdependencies among parameters (reverse engineering) – build putative principles into models, and see if the models can predict parameter values that are already known. Once we have done this, we can use our models to predict values that are not yet known (forward engineering).
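Here is a toy worked example of that “simple algebra”, in Python, with invented protein names, fractions and concentrations:

    # Worked toy example of the "simple algebra" mentioned above.
    # The fractional expression values and the measured concentration are invented.

    fractions = {"protein_A": 0.50, "protein_B": 0.30, "protein_C": 0.20}  # sum to 1.0

    # Suppose we have measured only protein_A's absolute concentration (arbitrary units).
    measured_name, measured_conc = "protein_A", 200.0

    # Total = measured concentration / its fraction; every other concentration follows.
    total = measured_conc / fractions[measured_name]
    concentrations = {name: frac * total for name, frac in fractions.items()}
    print(concentrations)  # {'protein_A': 200.0, 'protein_B': 120.0, 'protein_C': 80.0}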
It is these same interdependencies that make it so difficult to identify the root causes of disease. A disease starts when a single parameter leaves the “permitted” range of values consistent with good health. This initial change then “pushes” other parameters outside their permitted ranges, creating a cascade that can be extremely hard to understand. The same interdependencies also explain why the brain is so robust: in most cases, biological constraints on parameter values prevent such cascades from ever starting.