Nine HBP research and development projects are successfully using Fenix e-infrastructure services at CSCS.
To create the Fenix Infrastructure, the leading European Supercomputing Centres BSC (Spain), CEA (France), CINECA (Italy), ETHZ-CSCS (Switzerland) and JUELICH-JSC (Germany) agreed to establish a set of federated e-infrastructure services to support collaborative research in Europe. The first project realizing Fenix is ICEI (Interactive Computing e-Infrastructure), which is funded by the European Commission within the Human Brain Project (HBP). The distinguishing characteristic of this e-infrastructure is that data repositories and scalable supercomputing systems will be in close proximity and well integrated. Scalable Computing Services, Interactive Computing Services, Virtual Machine Services, Active Data Repositories and Archival Data Repositories are available to research communities. One of the objectives is to support community-specific platforms on top of these services.
The HBP will be the initial prime user of this research infrastructure and will develop the community-specific services that run on top of the Fenix infrastructure services. The HBP is building a research infrastructure to help advance neuroscience, medicine and computing, and is one of the largest scientific projects ever funded by the European Union.
The Piz Daint Supercomputer at the Swiss National Supercomputing Centre (CSCS) ©CSCS
The nine projects are:
1. Virtual Epileptic Patient (principal investigator: V. Jirsa): The goal of the Virtual Epileptic Patient is to construct an epilepsy-specific application of The Virtual Brain large-scale modelling approach suitable for inferring an individual patient’s pathology from their neuroimaging data, to complement standard clinical practice.
2. Full-scale hippocampus model (principal investigator: M. Migliore): The aim is to study the mechanisms that may contribute to the emergence of higher brain functions at the cellular and behavioural level in the hippocampus.
3. Cerebellum modelling (principal investigator: E. D’Angelo): The goal is to refine the synaptic connectivity, location and strength so as to better match the most critical behaviours seen in the experimental traces.
4. Neurorobotics Platform (NRP) development (principal investigator: A. von Arnim): The goal is to enable interactive 3D virtual-robotics simulations coupled to large-scale NEST and Nengo brain simulations.
5. Image segmentation toolkit (ilastik) workflow development (principal investigator: A. Kreshuk): The goal is to enable the running of ilastik, the image segmentation toolkit developed at EMBL Heidelberg. The scientific goal of each analysis is defined by the neuroscientists who use the ilastik image-processing service.
6. NEST network construction and simulation (principal investigator: H. E. Plesser): We test, benchmark and optimise the NEST simulation engine, especially network construction, to provide computational neuroscientists with more efficient tools for large-scale cellular-level simulations.
7. SimLab Neuroscience (principal investigators: A. Morrison, B. Orth): This allocation supports the SimLab Neuroscience at the Jülich Supercomputing Centre (JSC) for development, testing and demonstration purposes. The goal is to enable the SimLab to support HBP users with their applications for resources, as well as with the deployment, integration and optimization of software to best exploit the Fenix infrastructure.
8. Model Validation Service (principal investigator: A. Davison): The goal of this service is to allow automated validation of computational models against experimental data, with the results registered in the HBP’s validation framework. The HPC resources provide a platform on which the simulations necessary for validation can be executed remotely, providing a complete web-oriented service.
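To illustrate the idea behind such a service, the core validation step can be sketched as scoring a model's output against experimental reference data and recording a pass/fail result. This is a minimal, hypothetical sketch in plain Python; the function names, the RMSE score and the tolerance threshold are illustrative assumptions, not the actual API of the HBP validation framework.

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length data series."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
    )

def validate(predicted, observed, tolerance):
    """Score a model prediction against experimental data.

    Returns a record (score plus pass/fail flag) of the kind that a
    validation service could register alongside the model metadata.
    """
    score = rmse(predicted, observed)
    return {"score": score, "passed": score <= tolerance}

# Example: hypothetical firing rates (Hz) from a simulation run,
# compared against experimentally recorded reference values.
model_rates = [4.8, 10.1, 14.7, 20.3]
experiment_rates = [5.0, 10.0, 15.0, 20.0]

result = validate(model_rates, experiment_rates, tolerance=0.5)
```

In a remote, web-oriented setting as described above, the simulation producing `model_rates` would run on the HPC resources, and only the resulting validation record would be sent back for registration.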
9. Neuromorphic Computing front-end services (principal investigator: A. Davison): The HBP Neuromorphic Computing (NMC) Platform currently runs a number of servers on a commercial cloud service (Digital Ocean), using Docker. These servers host the NMC job queue, compute quota, benchmarking, and monitoring web services, and will be migrated to the ICEI resources as part of the proposal.