How will the HBP be different from classical Artificial Intelligence?

The challenge in Artificial Intelligence (AI) is to design algorithms that can produce intelligent behaviour and to use them to build intelligent machines. It doesn't matter whether the algorithms are biologically realistic; what matters is that they work, and the behaviour they produce. In the HBP, we're doing something completely different. The goal is to build data-driven models that capture what we've learned about the brain experimentally: its deep mechanics (the bottom-up approach) and the basic principles it uses in cognition (the top-down approach). Certainly we will try to translate our results into technology (neuromorphic processors) but, unlike classical AI, we will base the technology on what we actually know about the brain and its circuitry. We will develop brain models with learning rules that are as close as possible to the actual rules used by the brain, and couple our models to virtual robots that interact with virtual environments. In other words, our models will learn the same way the brain learns. Our hope is that they will develop the same kind of intelligent behaviour. We know that the brain's strategy works, so we expect that a model based on the same strategy will be much more powerful than anything AI has produced with "invented" algorithms.
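
To make the idea of biologically grounded learning rules concrete, here is a minimal sketch of a pair-based spike-timing-dependent plasticity (STDP) rule, one well-known example of the kind of rule meant here. The parameter values and the code itself are illustrative assumptions, not the HBP's actual models.

```python
import numpy as np

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: strengthen the synapse when the presynaptic spike
    precedes the postsynaptic spike, weaken it otherwise (times in ms).
    All parameter values are illustrative only."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:        # post before pre -> depression
        return -a_minus * np.exp(dt / tau_minus)

# A pre-spike 5 ms before a post-spike strengthens the connection;
# the reverse ordering weakens it.
print(stdp_weight_change(t_pre=10.0, t_post=15.0))   # positive change
print(stdp_weight_change(t_pre=15.0, t_post=10.0))   # negative change
```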

How much memory will you need to store a full description of the brain?

Potentially we can always build models that are orders of magnitude larger than the models we can actually simulate on a given supercomputer. For instance, in the next few years we will be building very detailed molecular-level models that would require hundreds of exabytes of memory to simulate. But even in 2020, we expect that supercomputers will have no more than 200 petabytes. So what we plan to do is build fast random-access storage systems next to the supercomputer, store the complete detailed model there, and then allow our multi-scale simulation software to call in a mix of detailed and simplified models (models of neurons, synapses, circuits, and brain regions) that matches the needs of the research and the available computing power. This is a pragmatic strategy that allows us to keep building ever more detailed models, while keeping our simulations at the level of detail we can support with our current supercomputers.
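
As a rough illustration of that strategy, the sketch below assigns each brain region either a detailed or a simplified model so that the total footprint fits a fixed memory budget. All region names and memory figures are invented for the example; the real multi-scale software is far more sophisticated.

```python
# Illustrative only: pick a level of detail per brain region so that the
# total memory footprint fits the machine. All figures are made up.
MEMORY_BUDGET_PB = 200  # petabytes available on the hypothetical machine

# (region, memory at full molecular detail, memory simplified), in PB
regions = [
    ("neocortex",   5000.0, 40.0),
    ("cerebellum",  3000.0, 25.0),
    ("hippocampus",  120.0,  5.0),
    ("thalamus",     150.0,  2.0),
]

plan, used = {}, 0.0
for name, detailed_pb, simplified_pb in regions:
    # Prefer the detailed model when it still fits; otherwise simplify.
    if used + detailed_pb <= MEMORY_BUDGET_PB:
        plan[name], used = "molecular", used + detailed_pb
    else:
        plan[name], used = "simplified", used + simplified_pb

print(plan)   # a mix: most regions simplified, some at full detail
print(f"{used:.1f} PB of {MEMORY_BUDGET_PB} PB used")
```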

How much electrical power will the HBP supercomputer use?

With today's technology, an exascale computer capable of simulating a cellular-level model of the whole human brain would probably consume about a gigawatt of power, which means billions of euros worth of electricity every year. Obviously this is not feasible, so manufacturers are developing new processors and new communications technologies that need much less power. They believe they can get consumption for an exascale machine down to about 20 megawatts.
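
A back-of-the-envelope calculation shows where a figure in the billions comes from; the electricity price used here is an assumption for illustration only.

```python
# Rough estimate; the electricity price is an assumed value.
power_gw = 1.0                 # ~1 gigawatt for a cellular-level exascale simulation
hours_per_year = 24 * 365      # 8,760 hours
price_eur_per_kwh = 0.15       # assumed industrial electricity price

energy_kwh = power_gw * 1e6 * hours_per_year       # GW -> kW, then kWh per year
cost_eur = energy_kwh * price_eur_per_kwh
print(f"{energy_kwh:.2e} kWh/year -> ~{cost_eur / 1e9:.1f} billion euros/year")

# For comparison, a 20 MW machine at the same price:
print(f"20 MW machine: ~{cost_eur * 20 / 1000 / 1e6:.0f} million euros/year")
```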

One of the main goals of the HBP is to build neuromorphic computing systems that use the same basic principles of computation and cognitive architectures as the brain. What would be the difference between these new machines and the machines we have today?

Today's digital computers all share a number of basic properties. They all use "stored programmes" – lists of instructions telling them what to do in what order. And in most cases they are "generic machines" – designed to perform any kind of computation a programmer can code. They all rely on highly reliable "compute cores" that perform extremely accurate ("bit-precise") computations and consume a lot of power in the process. And they all use a hierarchy of storage elements containing precise representations of specific chunks of information. HBP Neuromorphic Computing Systems, on the other hand, will operate more like the brain. Although they will be highly configurable, they will not need to be programmed – they will be able to learn. Their architecture will not be generic – it will be based on the actual cognitive architectures we find in the brain, which are finely optimised for specific tasks. Their individual processing elements – "artificial neurons" – will be far simpler and faster than the processors we find in current computers. But like neurons in the brain, they will also be far less accurate and reliable. So the HBP will develop new techniques of stochastic computing that turn this apparent weakness into a strength – making it possible to build very fast computers with very low power consumption, even with components that are individually unreliable and only moderately precise.
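
As an illustration of the stochastic-computing idea, the textbook example below encodes numbers as random bitstreams, so that multiplication reduces to a bitwise AND, and a small fraction of unreliable bits only perturbs the result slightly. This is a generic sketch, not a description of HBP hardware.

```python
import numpy as np

rng = np.random.default_rng(42)

def to_bitstream(p, length):
    """Encode a value p in [0, 1] as a random bitstream whose mean is p."""
    return rng.random(length) < p

# Multiplication in stochastic computing is just a bitwise AND:
# P(a_bit AND b_bit) = a * b when the streams are independent.
a, b, n = 0.8, 0.5, 100_000
product_stream = to_bitstream(a, n) & to_bitstream(b, n)
print(product_stream.mean())    # ~0.40, i.e. approximately a * b

# Flipping ~1% of the bits (unreliable hardware) only nudges the estimate
# instead of corrupting the result outright.
flips = to_bitstream(0.01, n)
noisy = product_stream ^ flips
print(noisy.mean())             # still close to 0.40
```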

Will the exponential growth of digital computer performance make HBP neuromorphic technology obsolete?

We think not – for three reasons:

First, there is the power argument. Power consumption in the latest generation of supercomputers is fourteen orders of magnitude (a hundred million million times) higher than in the human brain. Although neuromorphic computing systems will not be as power-efficient as the brain itself, they will still consume about ten billion times less power than current systems. Over the next ten years, the power efficiency of conventional computers will probably improve by a factor of between a hundred and a thousand. In other words, even after those gains, neuromorphic systems will still retain an advantage of roughly seven to eight orders of magnitude.
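
In round numbers, the argument runs as follows (the thousandfold figure is the optimistic end of the improvement quoted above):

```python
# Orders-of-magnitude sketch of the power argument.
supercomputer_vs_brain = 1e14    # today's supercomputers: ~14 orders of magnitude worse than the brain
neuromorphic_advantage = 1e10    # neuromorphic systems: ~10 billion times better than current machines
conventional_gain_10yr = 1e3     # optimistic 10-year efficiency gain for conventional computers

remaining_advantage = neuromorphic_advantage / conventional_gain_10yr
print(f"Neuromorphic lead after 10 years: ~{remaining_advantage:.0e}x")   # ~1e+07x
```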

The second reason why we expect neuromorphic systems to keep up is speed. Today most gains in computational performance are achieved by increasing parallelism – the use of more and more compute cores. However, the speed of the individual cores is growing slowly, if at all. In other words, we can perform lots of arithmetical operations in parallel, but it is much harder to speed up the individual operations. Neuromorphic computing systems also exploit massive parallelism – much more so, in fact, than conventional supercomputers. But at the same time they use semiconductor substrates in new ways that allow them to perform individual operations orders of magnitude faster than their digital counterparts. In some (not all) applications – for example, simulations of learning and plasticity over long periods of time – this could be an extremely valuable capability, opening the way to hybrid computing systems that integrate neuromorphic with conventional digital technologies.
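
To see why this matters for slow processes such as learning, consider the following illustrative calculation; the acceleration factor is an assumption chosen for the sketch, not a quoted HBP specification.

```python
# Illustrative only: why faster individual operations matter when studying
# slow processes such as learning and development.
biological_time_days = 365    # simulate a year of synaptic plasticity
acceleration = 10_000         # assumed speed-up over biological real time

wall_clock_hours = biological_time_days * 24 / acceleration
print(f"A year of biological time at {acceleration}x: ~{wall_clock_hours:.1f} hours")

# At real time (acceleration = 1) the same experiment would itself take a year.
```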

Finally, there is the question of size. State-of-the-art high performance computers will always be massive objects filling large buildings. Special purpose neuromorphic chips, on the other hand, can be extremely compact, making it possible to exploit them in everyday applications (industrial machinery, consumer appliances, mobile phones etc.).

Will the HBP make neuromorphic computing systems available for daily use?

Yes, we anticipate it will. We will see them as stand-alone neuromorphic computers, as chips integrated into traditional digital computers, as back-end chips for all kinds of devices, even as brains for robots. And very likely we will see them in other kinds of system as well – beyond anything we can even imagine today.

What People are Saying

  • Collaborate, collaborate, collaborate. This is our opportunity.

    Prof. Karlheinz Meier, University of Heidelberg,
    Co-leader of the Neuromorphic Computing Subproject

  • A key goal of the Human Brain Project is to construct realistic simulations of the human brain – this will require molecular and cellular information and from that we will be able to model and understand biological and medical processes. In addition, we will be able to use that information to design and implement new kinds of computers and robotics.

    Prof. Seth Grant, University of Edinburgh,
    Co-leader of the Strategic Mouse Brain data subproject

  • The Human Brain is the most complex system that we know of. We would like to develop some kind of 'Google' brain where we can zoom in and out, see it from different perspectives and understand how brain structure and function are related. The ultimate aim of the Human Brain Project is to understand the human brain. This is only possible when we understand the structural organization of the human brain.

    Prof. Katrin Amunts, Institute of Neuroscience and Medicine,
    Forschungszentrum Jülich

  • The Human Brain Project will become a major driver of ICT in Europe.

    Prof. Thomas Lippert, Institute for Advanced Simulation, Jülich Supercomputing Centre,
    Leader of the High Performance Computing subproject