JULIA is one of the two pilot systems developed by Cray in the Pre-Commercial Procurement during the HBP Ramp-up Phase. It is located at Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Germany.
The Pre-Commercial Procurement (PCP) focused on the areas of dense memory integration, scalable visualization, and dynamic resource management. Cray addressed these topics as follows:
- Dense memory integration: DataWarp nodes
- Scalable visualization: Tightly integrated visualization nodes
- Dynamic resource management: Integration with SLURM
The key technologies used for the Cray pilot system are:
- KNL-based compute nodes
- 100 Gbps network technology (Omni-Path)
- NVRAM technologies
- Coherent software stack
Both PCP pilot systems are installed at Jülich Supercomputing Centre and integrated into the local data infrastructure: the GPFS cluster JUST is accessible from all nodes, and 10 Gigabit Ethernet connectivity for remote visualization is available.
Tests and performance explorations were performed on the pilot systems JULIA and JURON, both installed at Jülich Supercomputing Centre (JUELICH-JSC). JULIA is a Cray CS-400 system with four integrated DataWarp nodes, each equipped with two Intel P3600 NVMe drives. These and the compute nodes were integrated into an Omni-Path network. Ceph was deployed on the DataWarp nodes. The other system, JURON, is based on IBM Power S822LC HPC (“Minsky”) servers, each comprising an HGST Ultrastar SN100 card. On this system, BeeGFS, DSS, and different key-value stores were deployed.
The pilot systems are integrated into the HBP HPAC Platform AAI (authentication and authorization infrastructure).
JULIA was available from the first week of August 2016 and was taken out of production in December 2018.
For more information and for access to JULIA, please visit https://trac.version.fz-juelich.de/hbp-pcp/wiki/Public
Ceph on JULIA
Ceph is an object storage solution that gives users access to the storage of the DataWarp nodes. With an object store, the user does not need to care about the location of the data, since this is managed by the object storage layer.
Ceph can be accessed from all nodes of JULIA, and there are several ways to use it. The easiest is the Ceph parallel file system, which provides POSIX-like access and is mounted on all nodes. Alternatively, data in Ceph can be written and read directly as objects via the RADOS interface. The advantage of the RADOS interface is especially noticeable with many parallel accesses. Note that (currently) the same data are not accessible through both interfaces.
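As a minimal sketch of the POSIX-like access path: assuming the Ceph parallel file system is mounted at a hypothetical path such as /ceph (the actual mount point and directory names on JULIA may differ), ordinary file operations work unchanged:

```python
from pathlib import Path

# Hypothetical mount point of the Ceph parallel file system;
# the actual path on JULIA may differ.
cephfs_dir = Path("/ceph/myproject")

# Ordinary POSIX file I/O -- the object storage layer manages
# data placement on the DataWarp nodes transparently.
out_file = cephfs_dir / "result.dat"
out_file.write_bytes(b"simulation output")

data = out_file.read_bytes()
print(len(data), "bytes read back")
```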
To get access to either Ceph interface (RADOS or POSIX), support from our administrators is required. If desired, a separate directory can be created on the parallel file system. Access to the RADOS interface requires a special key, which is used for authentication.
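For the RADOS interface, access might look like the following sketch using the librados Python bindings. The configuration file path, client name, keyring location, pool name, and object name are assumptions for illustration; the actual values are supplied by the administrators along with the authentication key:

```python
import rados

# Connect to the Ceph cluster. All paths and names below are
# hypothetical; the admins supply the real keyring and pool.
cluster = rados.Rados(
    conffile="/etc/ceph/ceph.conf",
    name="client.myuser",
    conf=dict(keyring="/path/to/client.myuser.keyring"),
)
cluster.connect()

try:
    # Open an I/O context for the assigned pool and store an object.
    ioctx = cluster.open_ioctx("mypool")
    try:
        ioctx.write_full("my_object", b"payload written via RADOS")
        # read() takes an optional length (default 8192 bytes).
        data = ioctx.read("my_object", length=64)
        print(data)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Because objects are addressed by name within a pool rather than through a shared file namespace, many clients can read and write independently, which is where the RADOS interface benefits parallel workloads.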
To get access to Ceph, please contact hpac-support@humanbrainproject.eu.
1st PCP pilot JULIA for @HumanBrainProj by @cray_inc now available at @fz_juelich. Info: https://t.co/5683d5RRBy pic.twitter.com/I7bjA9Lxsf
— HBP HPAC Platform (@HBPHighPerfComp) 5 August 2016