MARCONI

MARCONI is located at CINECA in Italy. It replaces FERMI, which went out of production in July 2017.
See also: How to get access to MARCONI.

Architecture: Intel OmniPath Cluster
Peak performance: 20 PFlop/s
System configuration:

Broadwell nodes:

  • Racks: 21
  • Model: Lenovo NeXtScale
  • Nodes: 1,512
  • Processors: 2 x 18-core Intel Xeon E5-2697 v4 (Broadwell) at 2.30 GHz
  • Cores: 36 cores/node, 54,432 cores in total
  • RAM: 128 GB/node, 3.5 GB/core
  • Peak Performance: 2 PFlop/s

Knights Landing nodes:

  • Model: Lenovo Adam Pass
  • Racks: 50
  • Nodes: 3,600
  • Processors: 1 x 68-core Intel Xeon Phi 7250 CPU (Knights Landing) at 1.40 GHz
  • Cores: 68 cores/node (272 with HyperThreading), 244,800 cores in total
  • RAM: 16 GB/node of MCDRAM and 96 GB/node of DDR4
  • Peak Performance: 11 PFlop/s

Skylake nodes:

  • Model: Lenovo Stark
  • Racks: 21
  • Nodes: 1,512 + 792
  • Processors: 2 x 24-core Intel Xeon 8160 CPU (Skylake) at 2.10 GHz
  • Cores: 48 cores/node, 72,576 + 38,016 cores in total
  • RAM: 192 GB/node of DDR4
  • Peak Performance: 7.00 PFlop/s
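
The per-partition peak figures follow from nodes x cores/node x clock x FLOP/cycle. Below is a minimal sanity-check of that arithmetic in Python, assuming 16 double-precision FLOP/cycle for Broadwell (AVX2, two FMA units) and 32 for Knights Landing and Skylake (AVX-512); these per-cycle values and the use of nominal clocks are assumptions, so the results are approximate.

```python
# Back-of-the-envelope check of the quoted peak figures.
# ASSUMPTION: 16 DP FLOP/cycle on Broadwell (AVX2, 2 FMA units),
# 32 DP FLOP/cycle on KNL and Skylake (AVX-512), nominal clocks.
partitions = {
    #                  nodes,       cores/node, GHz,  FLOP/cycle
    "Broadwell":       (1512,       36,         2.30, 16),
    "Knights Landing": (3600,       68,         1.40, 32),
    "Skylake":         (1512 + 792, 48,         2.10, 32),
}

total = 0.0
for name, (nodes, cores, ghz, flop_cycle) in partitions.items():
    pflops = nodes * cores * ghz * 1e9 * flop_cycle / 1e15
    total += pflops
    print(f"{name:16s} {pflops:5.1f} PFlop/s")
print(f"{'Total':16s} {total:5.1f} PFlop/s")  # roughly 20 PFlop/s
```

This reproduces the 2 and 11 PFlop/s figures and lands slightly above the quoted 7.00 PFlop/s for Skylake, which would be consistent with the official number assuming a lower AVX-512 frequency.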
Disk space: 17 PB of local storage
Networks
Network type: Intel OmniPath, 100 Gb/s. MARCONI is the largest OmniPath cluster in the world.
Network topology: fat-tree with 2:1 oversubscription, tapered at the level of the core switches only.
Core switches: 5 x OPA “Sawtooth Forest” core switches, 768 ports each.
Edge switches: 216 x OPA “Eldorado Forest” edge switches, 48 ports each.
Maximum system configuration: 5 (core switches) x 768 (ports) x 2 (tapering) → 7,680 servers.
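
A short Python sketch of that port arithmetic, on the assumption that the 2:1 tapering lets each core-switch port serve two servers:

```python
# Maximum fabric size implied by the core layer of the fat-tree.
core_switches = 5       # "Sawtooth Forest" core switches
ports_per_switch = 768  # ports per core switch
tapering = 2            # 2:1 oversubscription at the core level

max_servers = core_switches * ports_per_switch * tapering
print(max_servers)      # 7680
```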
Login nodes
8 login nodes (3 available to regular users). Each has 2 x Intel Xeon E5-2697 v4 processors at 2.30 GHz and 128 GB of memory. The login nodes are shared among the three partitions, which are served by three different PBS servers; the server matching the desired partition must be selected when submitting jobs, as sketched below.
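
As an illustration, standard PBS accepts a destination of the form queue@server with qsub -q, which is one way to address a specific server. A minimal sketch follows; the queue and server names are placeholders, not CINECA's actual ones, so consult the CINECA documentation for the real values.

```python
# Hypothetical example: submit a job script to the PBS server that
# manages one partition. QUEUE and SERVER are placeholder names.
import subprocess

QUEUE = "knl_queue"        # placeholder: queue on the KNL partition
SERVER = "pbs_knl_server"  # placeholder: PBS server for that partition

# PBS destination syntax "queue@server" selects both the queue and
# the PBS server that owns it.
subprocess.run(["qsub", "-q", f"{QUEUE}@{SERVER}", "job.sh"], check=True)
```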
More information: http://www.hpc.cineca.it/content/hardware

http://www.hpc.cineca.it/hardware/marconi

Contact: Giuseppe Fiameni (g.fiameni@cineca.it)