Supercomputers - powerful computing backends available from the Collaboratory

The HPAC Platform makes supercomputers available for neuroscience research. Supercomputers hosted by European High Performance Computing centres in Germany, Italy, Spain and Switzerland are well integrated into the HBP. Once a user has a computing time allocation (how to get access) for an HPC system, it can be used and accessed in different ways, e.g. in the standard manner through the command line or from the Collaboratory. The user accounts for the Collaboratory and for the HPC system are linked with each other so that jobs can be submitted easily from this web portal.
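
In the HPAC Platform, job submission from the Collaboratory typically goes through UNICORE's REST API, which can also be scripted directly. The snippet below is a minimal sketch assuming the pyunicore client library; the site URL, token and job description are illustrative placeholders, and the exact authentication call may differ between pyunicore versions:

    # A minimal sketch, assuming the pyunicore client library
    # ("pip install pyunicore") and a UNICORE REST endpoint at the
    # target HPC site. URL, token and job details are placeholders.
    import pyunicore.client as unicore_client

    SITE_URL = "https://unicore.example-hpc-site.eu/SITE/rest/core"  # placeholder
    TOKEN = "..."  # access token obtained via the Collaboratory login

    # Authenticate and connect to the site's REST API (newer pyunicore
    # versions wrap the token in a credential object instead).
    transport = unicore_client.Transport(TOKEN)
    client = unicore_client.Client(transport, SITE_URL)

    # Describe a small test job; real jobs would request nodes,
    # walltime etc. in the "Resources" section.
    job_description = {
        "Executable": "/bin/echo",
        "Arguments": ["Hello from the HPC system"],
        "Resources": {"Nodes": "1"},
    }

    job = client.new_job(job_description=job_description)
    job.poll()  # wait until the batch job has finished

    # Read the job's standard output from its working directory
    stdout = job.working_dir.stat("stdout").raw().read()
    print(stdout.decode())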

Supercomputers available upon application

We integrate new supercomputers as soon as they are installed and become available at the HPC sites. Currently available are:

Piz Daint

  • Swiss National Supercomputing Centre, Lugano, Switzerland
  • Cray XC30
  • 28 cabinets with 5,272 nodes and 42,176 cores in total (with 1 NVIDIA K20 GPU per node)
  • 7.787 PFlops

MareNostrum 4

  • Barcelona Supercomputing Centre, Barcelona, Spain
  • Lenovo SD530 compute cluster
  • 3,456 nodes, each with two 24-core Intel Xeon Platinum 8160 processors at 2.1 GHz
  • 11.15 PFlops

MARCONI

  • Cineca, Bologna, Italy
  • Intel Omni-Path Cluster
  • Currently 2 PFlops
  • Will be upgraded twice over the coming years

Pico

  • Cineca, Bologna, Italy
  • Linux InfiniBand Cluster
  • 74 nodes with 1,080 cores in total
  • Compute nodes, visualization nodes and big-memory nodes

JUQUEEN

  • Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
  • IBM BlueGene/Q
  • 28,672 nodes (458,752 cores)
  • 5.9 PFlops

JURECA

  • Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
  • T-Platforms V-Class architecture
  • 1,872 compute nodes and 12 visualization nodes (with 2 NVIDIA K40 GPUs each)
  • 45,216 CPU cores in total

Pilot systems available to HBP users

Two HPC pilot systems have been developed as part of a Pre-Commercial Procurement in the first HBP phase. They are designed around neuroscience requirements, in particular targeting dense memory integration, scalable visualisation and dynamic resource management. Both systems are hosted at the Jülich Supercomputing Centre and are available to HBP scientists without the need to write a full project application (how to get access).

JULIA

This system was developed by Cray.

  • Compute nodes based on Intel Xeon Phi "Knights Landing" (KNL) processors
  • 100 Gbps network technology (Omni-Path)
  • NVRAM technologies
  • Coherent software stack

JURON

This system was developed by IBM and NVIDIA.

  • IBM POWER8 processors and NVIDIA Tesla P100 GPUs interconnected via NVLink
  • 100 Gbps network technology (InfiniBand EDR)