In the HBP’s Ramp-up Phase, the Subproject High Performance Computing (now part of the EBRAINS Computing Services) conducted a Pre-Commercial Procurement to work with companies on pilot High Performance Computing (HPC) systems tailored to the requirements of the neuroscience community, and of the HBP in particular. Important aspects were dense memory integration, scalable visualization, and dynamic resource management. The results of this process were the systems JULIA, developed by Cray, and JURON, developed by IBM and NVIDIA, which became operational in August and October 2016, respectively.
IBM and NVIDIA addressed the above-mentioned topics with NVRAM at the node level and an LSF integration. The base system comprises nodes with POWER8 processors and NVIDIA Tesla P100 GPUs connected via NVLink. The nodes are interconnected with an InfiniBand EDR 100 Gbps network.
Now, in December 2020, it is finally time to say “Goodbye, JURON!”. The system has reached the end of its lifetime and will be shut down soon.
We are looking back on four successful years. In November 2017, JURON was ranked #4 on the IO500, a ranking of HPC systems by their data throughput; that list was the very first edition of the IO500. JURON still held rank #4 in June 2018, and rank #18 in June 2019. This achievement was possible thanks to a successful collaboration with BeeGFS.
About 250 users from inside and outside the HBP have used JURON for their research and development projects. To give some examples, the new Arbor simulator was developed on JURON. It was used to tackle I/O challenges for brain atlasing using deep learning methods, to compare data staging techniques for large-scale brain images, and for cytoarchitectonic segmentation of human brain areas with convolutional neural networks. It was also used by other communities, e.g. for tuning and optimization of code for many-core architectures, for lattice QCD research, and to accelerate the particle physics code JuSPIC with GPUs.
At the time it was installed, JURON made brand-new technology available to researchers from the HBP and from outside the project. Although the system will now retire, the research and development will, of course, continue. The Fenix infrastructure offers a variety of computing and storage resources, with compute time calls for the HBP, for European neuroscientists in general, and for European scientists from other fields.