JURECA

JURECA is located at Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Germany.

How to get access to JURECA

JURECA is a modular supercomputer, the first of its kind (see this press release). The cluster module has been in operation since July 2015 and was complemented by a booster module in November 2017.

The cluster module

Architecture: T-Platforms V-Class
Peak performance: 1.8 (CPU) + 0.44 (GPU) Petaflops
System configuration: 1872 compute nodes and 12 visualisation nodes with 45,216 CPU cores in total
Processors:

1872 compute nodes:

  • Two Intel Xeon E5-2680 v3 Haswell CPUs per node
    • 2 x 12 cores, 2.5 GHz
    • Intel Hyperthreading Technology (Simultaneous Multithreading)
    • AVX 2.0 ISA extension (see the compile sketch after this table)
  • 75 compute nodes equipped with two NVIDIA K80 GPUs (four visible devices per node)
    • 2 x 4992 CUDA cores
    • 2 x 24 GiB GDDR5 memory

12 visualisation nodes:

  • Two Intel Xeon E5-2680 v3 Haswell CPUs per node
  • Two NVIDIA K40 GPUs per node

Memory: DDR4 memory technology (2133 MHz)

Compute nodes:

  • 1605 compute nodes with 128 GiB memory
  • 128 compute nodes with 256 GiB memory
  • 64 compute nodes with 512 GiB memory

Visualisation nodes:

  • NVIDIA K40 GPUs: 2 x 12 GiB GDDR5 memory
  • 10 nodes with 512 GiB memory
  • 2 nodes with 1024 GiB memory

Networks:

  • Mellanox EDR InfiniBand with non-blocking fat tree topology
  • 100 Gbit/s per link

I/O: 100 GB per second storage connection to JUST

Login nodes:
  • Shared login infrastructure with the booster module
  • 256 GiB memory per node
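
A minimal sketch of building for the AVX 2.0 extension of the Haswell CPUs listed above, assuming an Intel toolchain is provided via environment modules (the module, compiler choice, and file names below are assumptions, not the documented JURECA setup):

  module load Intel                     # hypothetical module name
  icc -O2 -xCORE-AVX2 app.c -o app      # Intel compiler, targeting Haswell AVX2
  # GCC equivalent: gcc -O2 -march=haswell app.c -o app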

The booster module

Architecture: Intel Omni-Path
Peak performance: 5 Petaflops
System configuration: 1640 compute nodes with one Intel Xeon Phi 7250-F Knights Landing CPU per node; 111,520 CPU cores in total
Processors: one Intel Xeon Phi 7250-F Knights Landing CPU per node

  • 68 cores, 1.4 GHz
  • Intel Hyperthreading Technology (Simultaneous Multithreading)
  • AVX-512 ISA extension
Memory: 96 GiB memory plus 16 GiB MCDRAM high-bandwidth memory per node (see the numactl sketch after this table)
Networks: Intel Omni-Path Architecture high-speed network with non-blocking fat tree topology
I/O: 100+ GB per second storage connection to JUST
Login nodes:
  • Shared login infrastructure with the cluster module
  • 256 GiB memory per node
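
The MCDRAM can be used as directly allocatable memory when a node runs in flat mode, where it typically appears as a separate CPU-less NUMA node. A minimal sketch, assuming flat mode and that the MCDRAM is NUMA node 1 (both are assumptions; the configured memory mode and node numbering should be checked on the system):

  numactl --hardware              # inspect the NUMA layout; MCDRAM shows as a CPU-less node
  numactl --preferred=1 ./app     # prefer MCDRAM, fall back to DDR4 once the 16 GiB are full
  numactl --membind=1 ./app       # bind allocations strictly to MCDRAM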

Additional information

Operating system: CentOS
Job scheduler:
  • Parastation Cluster Management
  • Slurm batch system with Parastation resource management
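
As an illustration of the batch system, a minimal Slurm job script might look like the sketch below; the partition name, task counts, and application name are assumptions and should be checked against the current JURECA documentation:

  #!/bin/bash
  #SBATCH --job-name=example
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=24   # one task per physical core of a cluster node (assumption)
  #SBATCH --time=00:30:00
  #SBATCH --partition=batch      # partition name is an assumption; check sinfo
  # For the GPU-equipped nodes something like --gres=gpu:4 would be
  # needed in addition (the exact gres name is an assumption).

  srun ./app                     # srun starts the tasks via the Parastation resource manager

The script would be submitted with sbatch, e.g. sbatch job.sh.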
Login nodes: 256 GiB memory per node
More information: http://www.fz-juelich.de/ias/jsc/jureca
Contact: HBP-HPC-Platform@fz-juelich.de