We provide the base infrastructure and the infrastructure services for the Platforms of the HBP. Compute and storage resources are provided via the ICEI resource allocation mechanisms. A number of projects are running on the infrastructure, primarily using resources at ETHZ-CSCS as well as the pilot system JURON at JUELICH-JSC.
In the last few months, progress was made in a number of areas, including the integration of CEA into the HPAC Platform, the development of Sarus, a container runtime engine tailored to HPC systems, continued support for the Neurorobotics Platform, and closer collaboration with SP5.
Integration and Operation of Low-level Infrastructure
The operation of the low-level infrastructure at the partner sites (BSC, CEA, CINECA, ETHZ-CSCS, JUELICH-JSC, KIT), comprising the compute, data and network infrastructure services, has been coordinated. Day-to-day operational activities, such as configuration and software update management, have been performed to ensure continuously high availability of the low-level services. At ETHZ-CSCS, this also encompassed support for the approved ICEI projects and their associated users. Development work is ongoing to give the Platforms, such as the HPAC Platform, better visibility of scheduled and unscheduled maintenance on the infrastructure.
The integration of the CEA infrastructure has been successfully completed. To this end, a method has been implemented that lets users perform a local authentication step at CEA, which is required to meet legal obligations and is also used to generate and locally store a Kerberos ticket. This ticket is automatically and securely retrieved by the UNICORE daemon, enabling users of the HPAC Platform to use the CEA resources seamlessly. An example workflow is documented in SGA2 Deliverable D7.5.1.
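From a user's perspective, CEA then behaves like any other UNICORE-enabled site. As a rough illustration, the sketch below submits a trivial job via pyunicore, the Python client library for the UNICORE REST API; the site URL and token are placeholders, and the exact calls may differ between pyunicore versions.

```python
# Minimal sketch: submit a job through UNICORE's REST API with pyunicore.
# The endpoint URL and token are placeholders, not real HPAC endpoints.
import pyunicore.client as uc_client
import pyunicore.credentials as uc_credentials

token = "..."  # an OIDC access token, e.g. from the HBP identity provider
credential = uc_credentials.OIDCToken(token)

base_url = "https://unicore.example.org:8080/SITE/rest/core"  # placeholder
client = uc_client.Client(credential, base_url)

# The site-local authentication (e.g. the Kerberos ticket at CEA) is
# handled on the UNICORE server side, so the job description stays generic.
job = client.new_job(job_description={"Executable": "hostname"})
job.poll()                         # wait until the job has finished
print(job.working_dir.listdir())   # inspect the job's working directory
```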
Component | Content | URL |
UNICORE | Technical and user documentation | https://www.unicore.eu/ |
UFTP | Technical documentation | https://www.unicore.eu/docstore/uftp-2.2.0/uftp-manual.html |
UFTP | User documentation | |
Policy Management
A review of security policies is underway to understand the relationship between site-local security policies and the overall security of the federated infrastructure. This review is closely tied to the ICEI project. The output of the Data Governance Working Group (DGWG), namely the Data Management Plan (DMP), is also currently under review.
Public report: https://collab.humanbrainproject.eu/#/collab/264/nav/1975?state=uuid%3D59da41e3-43a0-478c-9873-f095cb8314af
Infrastructure Services
Work with the HBP HLST to support other SPs continued. Weekly HBP Joint Infrastructure Coordination meetings served to identify and resolve issues that concern multiple Platforms. Furthermore, work on the Data Transfer Service was started, resulting in discussions between CINECA, JUELICH-JSC and ETHZ-CSCS. Finally, an investigation into the containerisation of visualisation tools has recently begun.
Technical and user documentation: https://www.unicore.eu/
Platform developer services
The “Lightweight Virtualisation Report” provides an overview of lightweight virtualisation, which demands fewer resources from the underlying host system, offers better responsiveness and can achieve native or close-to-native performance on the host system. In particular, it details the production deployment of Sarus as a replacement for Shifter at ETHZ-CSCS. Sarus leverages the OCI standards to offer modularity and extensibility, and gives system administrators the possibility to configure OCI hooks: standalone programs which can customise a container, thus extending the functionality of an OCI runtime in the manner of plugins. Sarus has been in production at ETHZ-CSCS since November 2019.
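To make the hook mechanism concrete: under the OCI runtime specification, a hook is a standalone executable that the runtime invokes at a defined lifecycle stage, passing the container state as JSON on standard input. The skeleton below is purely illustrative (it is not part of Sarus); it merely logs the container ID and bundle path to a hypothetical location.

```python
#!/usr/bin/env python3
# Illustrative OCI hook skeleton: the runtime passes the container state
# (ociVersion, id, status, pid, bundle) as JSON on stdin.
import json
import sys

state = json.load(sys.stdin)
container_id = state.get("id")
bundle_path = state.get("bundle")  # directory with config.json and rootfs

# A real hook would customise the container here (mounts, devices, ...).
with open("/tmp/oci-hook.log", "a") as log:  # hypothetical log path
    log.write(f"container {container_id}: bundle at {bundle_path}\n")
```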
On the Neurorobotics Platform (NRP) side, ETHZ-CSCS is continuing to support developments in SP10, with a view to integrating Piz Daint into the NRP workflow. Significant progress has been made in this area, and we expect to soon run scientific use cases from the University of Granada and the University of Pavia to test the execution of the NRP together with distributed NEST; a minimal NEST example of the kind involved is sketched below.
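As an illustration of the kind of workload involved, the sketch below shows a minimal NEST point-neuron network in Python. All model names and parameters are arbitrary; the same script can be launched unchanged under MPI (e.g. `mpirun -n 4 python network.py`), since NEST distributes the network across ranks by itself, which is the property that distributed execution relies on.

```python
import nest

nest.ResetKernel()

# A small random network of point neurons driven by Poisson noise.
neurons = nest.Create("iaf_psc_alpha", 100)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
spikes = nest.Create("spike_detector")  # renamed "spike_recorder" in NEST 3

nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 2.0, "delay": 1.5})
nest.Connect(neurons, spikes)

nest.Simulate(1000.0)  # 1 second of biological time
print("rank", nest.Rank(), "recorded",
      nest.GetStatus(spikes, "n_events")[0], "spikes")
```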
Platform integration services
A new major release (UNICORE 8.0) of the UNICORE software suite has been completed. It focuses on simpler deployment, administration and maintenance of the software, and implements a number of features requested by HBP Platform users.
The authentication and authorisation infrastructure that allows users to access HPC resources from the Collaboratory via UNICORE has been updated to support the new HBP identity provider (https://iam.humanbrainproject.eu/auth/realms/hbp/account).
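The identity provider exposes a standard OIDC token endpoint (the URL pattern is that of a Keycloak realm). As a rough sketch, a client could obtain an access token and wrap it as a pyunicore credential as shown below; the grant type, client ID and secret are placeholders whose actual values depend on how the client is registered with the provider.

```python
# Sketch, assuming a standard OIDC/Keycloak token endpoint under the
# realm URL given above. Grant type, client ID and secret are placeholders.
import requests
import pyunicore.credentials as uc_credentials

token_endpoint = ("https://iam.humanbrainproject.eu/auth/realms/hbp"
                  "/protocol/openid-connect/token")

resp = requests.post(token_endpoint, data={
    "grant_type": "client_credentials",  # placeholder flow
    "client_id": "my-client",            # hypothetical client
    "client_secret": "...",              # placeholder secret
})
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The token can then be used as a UNICORE credential (see pyunicore).
credential = uc_credentials.OIDCToken(access_token)
```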
The usage of UNICORE by other groups in the HBP was continuously supported. For example, UNICORE is used heavily by the Brain Simulation Platform to access HPC resources at JUELICH-JSC and ETHZ-CSCS from the HBP Collaboratory.
Technical and user documentation: https://www.unicore.eu/
Performance Optimisation
The work on performance analysis and optimisation in the last year has focused on evaluating the new version of CoreNEURON provided by the developers. The latest version uses NMODL, a new source-to-source compiler, to process modelling files. With this framework, the program generates optimised source files from the original modelling files, which can be expected to improve overall performance. A performance comparison of the new version using NMODL against the old one has been performed and reported to the CoreNEURON developers.
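NMODL also exposes a Python API, which gives a feel for the source-to-source step. The sketch below, following the usage shown in the NMODL documentation, parses a toy channel description into an abstract syntax tree and regenerates NMODL code from it; the generation of optimised source files builds on the same AST.

```python
# Sketch: parse a toy MOD-style channel description with NMODL's Python
# API and print it back. The channel text is an illustrative example.
from nmodl import dsl

channel = """
NEURON {
    SUFFIX toy
    RANGE gbar
}
PARAMETER { gbar = 0.001 }
"""

driver = dsl.NmodlDriver()
modast = driver.parse_string(channel)  # build the abstract syntax tree
print(dsl.to_nmodl(modast))            # regenerate NMODL from the AST
```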
There was also a request from the CoreNEURON developers to study the performance of the new code on an Arm-based cluster. To fulfil this request, CoreNEURON was deployed and evaluated on an Arm-based platform, and the performance achieved was compared with that of an Intel-based platform.
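Such a comparison boils down to running the same benchmark repeatedly on both machines and comparing the resulting statistics. A minimal sketch of such a timing harness is shown below; the benchmark executable is hypothetical, and a real study would additionally pin processes, control frequency scaling and record hardware counters.

```python
# Sketch of a simple timing harness for cross-platform comparisons.
# The benchmark command is a hypothetical placeholder.
import statistics
import subprocess
import time

CMD = ["./coreneuron_benchmark"]  # hypothetical benchmark executable
REPS = 5

timings = []
for _ in range(REPS):
    start = time.perf_counter()
    subprocess.run(CMD, check=True)   # run one benchmark iteration
    timings.append(time.perf_counter() - start)

print(f"mean {statistics.mean(timings):.2f} s, "
      f"stdev {statistics.stdev(timings):.2f} s over {REPS} runs")
```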
Technical and user documentation: https://collab.humanbrainproject.eu/#/collab/264/nav/1975?state=uuid%3Dba213669-48b4-430a-b0f6-5fc44a62d6a8
User support and documentation
The HPAC user support Task focused during the second year of SGA2 on providing first-level support for the HPAC infrastructure and on improving the interaction with the central HBP user support ticket system (https://support.humanbrainproject.eu/). Furthermore, the interaction with the teams providing HLST and second-level support within the HBP has been improved, with a clearer understanding of the different responsibilities and the creation of dedicated HLST queues per site. The current workflow is designed such that incoming tickets can be assigned to the first-level support, second-level support teams or the HLST, depending on the topic, site and level of the request. The documentation of the HPAC Platform is collected in the Guidebook (this website), which has been updated regularly throughout the project.
Technical and user documentation: https://collab.humanbrainproject.eu/#/collab/264/nav/329319
Education and training
The work on education and training focused on organising the 2nd HPAC Platform Training Event (https://plus.humanbrainproject.eu/events/152/). During the planning phase, we realised that other related events were scheduled for a similar period of time (end of 2019), and we therefore decided to join efforts and create a joint event, also including a public EBRAINS event, CodeJam #10 and a workshop of the HBP High-Level Support Team (HLST).
The agenda of the 2nd HPAC Platform Training was based on the feedback, successes and lessons learned from the 1st HPAC Platform Training Event, and was aligned with the programme of the other events taking place in parallel, so that participants of one event could also attend sessions of the other events that were relevant to them.
The joint event took place from 26 to 28 November 2019 in Heidelberg, Germany. In total, 26 registered participants attended the HPAC Platform Training, some of them external to the HBP.
The topics covered during the training included an introduction to the resources, tools and services provided by the HPAC Platform and the Fenix infrastructure, the different options for getting access to HPC and data resources, and how to use Fenix resources and services in workflows. It also included hands-on sessions on how to transfer data between HPC sites and how to access resources from Jupyter notebooks in the HBP Collaboratory. The participants also got an overview of the visualisation tools available in the HBP. The NEST simulator for point-neuron models was introduced, including more in-depth sessions on NEST Desktop and NESTML and an outlook on NEST 3. Another session dealt with the coupling of the NEST and TVB simulators.