HPC storage

Storage available at the HPAC sites

The table below gives an overview of the storage that is available at the HPAC sites for projects that have a computing time allocation at the respective site.

  • /home: repository for source code, binaries, libraries and applications with small size and I/O demands (source code, scientific results, important restart files); with backup
  • /work: project storage (large result files); no backup
  • /scratch: temporary storage location (job input/output, scratch files during computation, checkpoint/restart files); no backup; old files are removed automatically. A typical staging workflow across these tiers is sketched after this list.
  • /archive: long-term storage location; data typically resides on tape and may have to be removed after the project ends
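
As a rough illustration of how these tiers are typically combined, the Python sketch below stages input data from project storage to /scratch, runs the computation there, and copies the results back before the scratch clean-up removes them. All paths and the run_simulation executable are hypothetical placeholders, not site defaults; the real directory layout and policies differ per hosting site.

```python
# Minimal sketch of a stage-in / compute / stage-out workflow across the
# storage tiers described above. All paths and the simulation command are
# hypothetical; consult the hosting site's documentation for real locations.
import shutil
import subprocess
from pathlib import Path

PROJECT = Path("/work/my_project")          # project storage: large results, no backup (assumed path)
SCRATCH = Path("/scratch/my_project/run1")  # temporary storage: fast I/O, auto-cleaned (assumed path)

def main() -> None:
    # Stage in: copy the input data set from project storage to scratch.
    SCRATCH.mkdir(parents=True, exist_ok=True)
    shutil.copy2(PROJECT / "input.dat", SCRATCH / "input.dat")

    # Compute: run the (hypothetical) simulation with its working directory on
    # scratch, so heavy job I/O hits the scratch file system, not /home or /work.
    subprocess.run(["./run_simulation", "input.dat"], cwd=SCRATCH, check=True)

    # Stage out: copy results back to project storage before the scratch
    # clean-up policy removes old files (e.g. after 50 or 90 days, site-dependent).
    shutil.copy2(SCRATCH / "results.dat", PROJECT / "results.dat")

if __name__ == "__main__":
    main()
```

The same pattern applies when the staging is done inside a batch job script; the point is that heavy job I/O goes to /scratch, while /home and /work hold only the data worth keeping.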

The table reflects the status as of September 2018, as published in PRACE Call for Proposals #18.

| Hosting site | Storage | File system / partition | Type | Total capacity | Quota per project [1] | Remarks |
|---|---|---|---|---|---|---|
| BSC | MareNostrum4 Storage | /home | GPFS | 32 TB | 20 GB | With backup |
| BSC | MareNostrum4 Storage | /projects | GPFS | 4.3 PB | 10 TB (100 TB) | With backup |
| BSC | MareNostrum4 Storage | /scratch | GPFS | 8.7 PB | 100 TB (more on demand) | Without backup, clean-up procedure |
| BSC | MareNostrum4 Storage | /archive | N/A | N/A | N/A | |
| CEA | JOLIOT CURIE Storage | /home | NFS | TBA | 15 GB | With backup and snapshots |
| CEA | JOLIOT CURIE Storage | /work | Lustre | TBA | 5 TB / 2.5 million inodes | |
| CEA | JOLIOT CURIE Storage | /scratch | Lustre | 5 PB | 100 TB / 10 million inodes | Without backup, automatic clean-up procedure |
| CEA | JOLIOT CURIE Storage | /store | Lustre + HPSS | unlimited | 0.5 million inodes | File size > 1 GB, HSM functionality |
| CINECA | Marconi Storage | /home | GPFS | 200 TB | 50 GB | With backup |
| CINECA | Marconi Storage | /work | GPFS | 7.1 PB | 20 TB (100 TB)¹ | Without backup |
| CINECA | Marconi Storage | /scratch | GPFS | 2.5 PB | 20 TB (100 TB)¹ | Without backup, clean-up procedure for files older than 50 days |
| CINECA | Marconi Storage | /archive | | on demand | 20 TB (100 TB)⁴ | |
| ETHZ-CSCS | Piz Daint Storage | /users | GPFS | 86 TB | 10 GB per user | With backup and snapshots |
| ETHZ-CSCS | Piz Daint Storage | /project | GPFS | 4.7 PB | 250 TB (500 TB)² | Not readable from compute nodes; data kept only for the duration of the project; with backup and snapshots |
| ETHZ-CSCS | Piz Daint Storage | /scratch | Lustre | 8.8 PB | 8/8 PB | Without backup, clean-up procedure, quota on inodes |
| ETHZ-CSCS | Piz Daint Storage | /store | GPFS | 3.8 PB | ⁵ | With backup and HSM functionality |
| JUELICH-JSC | JUST (storage shared by all HPC systems) | /home | GPFS | 2.3 PB | 6 TB | With backup |
| JUELICH-JSC | JUST (storage shared by all HPC systems) | /work | N/A | N/A | N/A | |
| JUELICH-JSC | JUST (storage shared by all HPC systems) | /scratch | GPFS | 9.1 PB | 20 TB (100 TB) | Without backup, files older than 90 days are removed automatically |
| JUELICH-JSC | JUST (storage shared by all HPC systems) | /archive | | on demand | ³ | Ideal file size: 500-1000 GB |

[1] The quota in parentheses is available if the project PI has contacted the centre for approval.

¹ The default is 1 TB. Please contact CINECA User Support (superc@cineca.it) to increase your quota after the project start.

² Quotas from 250 TB up to a maximum of 500 TB will be granted if the request is fully justified and a plan for moving the data is provided.

³ For PRACE projects, access to the JUWELS archive requires a special agreement with JSC and PRACE.

⁴ Not active by default. Please contact CINECA User Support (superc@cineca.it) after the project start.

⁵ By agreement/contract.
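
The tape-backed tiers above (CEA /store, JSC /archive) favour few large files over many small ones (file size > 1 GB, ideally 500-1000 GB). The Python sketch below illustrates one way to prepare data accordingly by bundling small result files into larger tar files before moving them to such a tier; the paths and the 1 GB threshold are assumptions for illustration, not site defaults.

```python
# Minimal sketch: bundle many small result files into larger tar archives
# before moving them to a tape-backed tier such as /archive or /store.
# Paths and the size threshold are assumptions; adjust to the site's policy.
import tarfile
from pathlib import Path

RESULTS_DIR = Path("/scratch/my_project/results")  # assumed source directory
ARCHIVE_DIR = Path("/archive/my_project")          # assumed archive mount point
TARGET_SIZE = 1 * 1024**3                          # ~1 GB per bundle (example threshold)

def write_bundle(index: int, members: list[Path]) -> None:
    # One uncompressed tar per bundle; tape/HSM systems handle a few large
    # files far better than many small ones.
    with tarfile.open(ARCHIVE_DIR / f"results_{index:04d}.tar", "w") as tar:
        for member in members:
            tar.add(member, arcname=str(member.relative_to(RESULTS_DIR)))

def bundle_results() -> None:
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    bundle_index, bundled_size, members = 0, 0, []

    # Accumulate files until the running total exceeds the target bundle size.
    for path in sorted(RESULTS_DIR.rglob("*")):
        if not path.is_file():
            continue
        members.append(path)
        bundled_size += path.stat().st_size
        if bundled_size >= TARGET_SIZE:
            write_bundle(bundle_index, members)
            bundle_index, bundled_size, members = bundle_index + 1, 0, []

    if members:  # flush the last, possibly smaller, bundle
        write_bundle(bundle_index, members)

if __name__ == "__main__":
    bundle_results()
```

Raising TARGET_SIZE toward the recommended 500-1000 GB only changes the threshold; the structure of the sketch stays the same.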

Data repositories available through the ICEI project as part of the Fenix infrastructure

The following table gives an overview of the active and archival data repositories that are or will be available through the ICEI project as part of the Fenix infrastructure. The Timeline column indicates in which cases hardware still needs to be procured. 15% of the resources are available to the research communities via PRACE, starting with PRACE Call #18. A further 25% of the resources are available to the Human Brain Project (HBP), which has its own mechanism in place for internal resource distribution (for HBP members: Fenix Collab).

Archival data repositories

| Hosting site | Data repository | (Estimated) total capacity when fully operational (100%) | PRACE (15% of capacity) | HBP (25% of capacity) | Timeline |
|---|---|---|---|---|---|
| BSC | TBD | 6000 TB | 900 TB | 1500 TB | Operational in Q1/2020 |
| CEA | Archival | 7000 TB | 1050 TB | 1750 TB | Operational in Q1/2019 |
| CINECA | TBD | 5000 TB | 750 TB | 1250 TB | Operational in Q4/2019 |
| ETHZ-CSCS | Archival data repository | 4000 TB | 600 TB | 1000 TB | Already operational |

Active data repositories

| Hosting site | Data repository | (Estimated) total capacity when fully operational (100%) | PRACE (15% of capacity) | HBP (25% of capacity) | Timeline |
|---|---|---|---|---|---|
| BSC | TBD | 70 TB | 10.5 TB | 17.5 TB | Operational in Q1/2020 |
| CEA | Lustre Flash | 800 TB | 120 TB | 200 TB | Operational in Q1/2020 |
| CINECA | TBD | 350 TB | 50 TB | 87.5 TB | Operational in Q4/2019 |
| ETHZ-CSCS | Low latency, high bandwidth storage tier (Cray DataWarp) | 80 TB | 12 TB | 20 TB | Already operational |
| JUELICH-JSC | High Performance Storage Tier (HPST) | 1000 TB | 150 TB | 250 TB | Operational in Q4/2019 |