The HPAC Platform currently provides supercomputers at four HPC centres for the HBP community.

The following supercomputers are integrated into the HPAC Platform: UNICORE is available and configured on each system, and all of them are connected to the platform's Authentication & Authorization Infrastructure.
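Since UNICORE is the common access layer, jobs for these systems are typically expressed in UNICORE's JSON job description format. A minimal sketch of composing such a description follows; the key names ("Executable", "Arguments", "Resources") are taken from the generic UNICORE job format, and the resource fields are assumptions that may vary by site and UNICORE version:

```python
# Sketch: building a UNICORE-style JSON job description for submission to an
# HPAC Platform system. Key names follow the generic UNICORE job format;
# site-specific resource names may differ -- treat this as illustrative.
import json

def make_job(executable, args, nodes=1, runtime="30m"):
    """Return a UNICORE-style job description as a plain dict."""
    return {
        "Executable": executable,
        "Arguments": list(args),
        "Resources": {
            "Nodes": nodes,      # number of compute nodes requested
            "Runtime": runtime,  # wall-clock limit
        },
    }

job = make_job("/usr/bin/hostname", [], nodes=2)
print(json.dumps(job, indent=2))
```

The description would then be handed to a UNICORE client (e.g. the UCC command-line client or the PyUNICORE library) for submission to one of the sites listed below.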


JUWELS

  • Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
  • 122,448 CPU cores
  • 2,271 standard compute nodes, 240 large-memory compute nodes, 48 accelerated compute nodes, 4 visualisation nodes and 12 login nodes
  • 10.4 (CPU) + 1.6 (GPU) PFlops peak performance

JURECA

  • Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
  • T-Platforms V-Class architecture
  • 1,872 compute nodes and 12 visualisation nodes (with 2 NVIDIA K40 GPUs per visualisation node)
  • 45,216 CPU cores in total
Piz Daint

  • Swiss National Supercomputing Centre (CSCS), ETH Zurich, Switzerland
  • Cray XC30
  • 28 cabinets with 5,272 nodes and 42,176 cores in total (1 NVIDIA K20 GPU per node)
  • 7.787 PFlops peak performance
MareNostrum 4

  • Barcelona Supercomputing Centre, Spain
  • Lenovo SD530 compute cluster
  • 3,456 nodes, each with two Intel Xeon Platinum 8160 processors (24 cores at 2.1 GHz)
  • 11.15 PFlops peak performance
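The quoted peak figure can be cross-checked from the node specification. Assuming each Skylake core retires 32 double-precision FLOPs per cycle (two AVX-512 FMA units × 8 doubles × 2 operations per FMA, an assumption about the microarchitecture rather than a figure from this page), the arithmetic works out as follows:

```python
# Cross-check of MareNostrum 4's quoted peak performance.
# Assumption: 32 DP FLOPs/cycle per Skylake core
# (2 AVX-512 FMA units x 8 doubles x 2 ops per FMA).
nodes = 3456
cores_per_node = 2 * 24          # two Xeon Platinum 8160, 24 cores each
clock_hz = 2.1e9                 # 2.1 GHz
flops_per_cycle = 32

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak / 1e15:.2f} PFlops")   # -> 11.15 PFlops
```

The same back-of-the-envelope check applies to the other CPU partitions listed here, given each machine's core count, clock and vector width.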

MARCONI

  • Cineca, Italy
  • Intel OmniPath cluster
  • Currently 2 PFlops peak performance
  • Planned to be upgraded twice over the coming years

PICO

  • Cineca, Italy
  • Linux InfiniBand cluster
  • 74 nodes with 1,080 cores in total
  • Compute nodes, visualisation nodes and large-memory nodes

The two pilot systems developed in a Pre-Commercial Procurement (PCP) during the HBP Ramp-up Phase were also integrated into the HPAC Platform:

JULIA [out of production]

  • Jülich Supercomputing Centre, Forschungszentrum Jülich
  • Compute nodes based on Intel Xeon Phi (Knights Landing, KNL) processors
  • Developed by Cray
JURON [end of production: end of Nov. 2020]

  • Jülich Supercomputing Centre, Forschungszentrum Jülich
  • IBM POWER8′ CPUs with NVIDIA P100 GPUs, interconnected via NVLink
  • Developed by a consortium of IBM and NVIDIA