This article and all sub-pages summarise the achievements of the second year of the SGA2 project phase (April 2019 – March 2020).
The mission of the High Performance Analytics and Computing Platform is to build, integrate and operate the base infrastructure for the HBP, which comprises the hardware and software required to run large-scale, data-intensive, interactive simulations, to manage large amounts of data, and to implement and manage complex workflows comprising concurrent simulation, data analysis and visualisation workloads.
A significant change in SGA2 was the start of the Interactive Computing e-Infrastructure (ICEI) project, funded through a separate Specific Grant Agreement (SGA) under the umbrella of the HBP Framework Partnership Agreement (FPA), which builds the Fenix Infrastructure. The Fenix Infrastructure and the HPAC Platform are tightly connected, as they are implemented and operated by the same five supercomputing centres. CEA of France, also a partner in the ICEI project, joined the HPAC Platform at the start of SGA2; although the technical integration of its Très Grand Centre de Calcul (TGCC) HPC facility was significantly more complicated than expected, the new infrastructure components are now available. Neuroscience-specific platform services, which run on top of the e-Infrastructure services, have been designed, deployed and are being operated. The details of these services are being worked out taking the results of use case analyses into account. To validate architecture and implementation details, a set of validation tests has been formulated and implemented. To further exploit the generated knowledge, efforts have been made to transfer it to HPC and Cloud solution providers, improving their understanding of the needs of the HBP and the future EBRAINS Research Infrastructure (see Architecture specification for the HPAC Platform and Fenix). Compute and storage resources are provided via the ICEI resource allocation mechanisms. In the last few months, progress has been made on the base infrastructure and infrastructure services for the HBP Platforms in a number of areas, including the development of Sarus, a container runtime engine tailored to HPC systems, continued support for the Neurorobotics Platform, and increased collaboration with SP5 (see High Performance Analytics and Computing Platform v3).
In the past year, an increasing number of HBP research groups have used Fenix IT infrastructure resources, with the required implementation support provided by the HPAC Platform. Scientists outside the HBP can also apply for Fenix resources via PRACE and hence use the same infrastructure as HBP scientists. The HPAC Platform also develops technologies for Fenix (see Data federation and data-intensive computing technology), e.g. the data transfer services, which allow HBP users to move datasets between all five HPC sites.
Significant effort was also invested in improving simulation software and ensuring its readiness for current and future HPC systems (see Exascale-ready simulation technology: NEST, Arbor and TVB). The capabilities of the NEST simulator were considerably advanced with releases 2.18.0 and 2.20.0. The release of NEST 3.0, with a much more expressive and efficient interface for network construction, is forthcoming. Voucher-funded work on the NEST Desktop graphical user interface will make advanced computational modelling accessible to a wider audience and provide a powerful tool for neuroscience education. Leading international groups, e.g. the Allen Brain Institute, the Okinawa Institute of Science and Technology and RIKEN, are actively using NEST today and contributing back to it, and thus to the HBP. Further work focused on reproducibility and standardisation. Considerable effort was invested in the release of a first version of TVB-HPC, which connects neural mass modelling with high performance computing to allow highly efficient parameter fitting. For Arbor, which has been developed entirely within the HBP to provide efficient, state-of-the-art, future-proof simulation technology, a first full-featured and user-accessible version was released within the last year.
Other important work undertaken within the HPAC Platform concerns interactive visual data analytics and in-situ visualisation tools and techniques (see Interactive visual data analytics and in situ visualisation developed and deployed on Fenix and HPAC Platform infrastructure). Various visualisation tools were further developed and made ready to be connected to running simulations through the in-situ pipeline, and the visualisation software catalogue is now available through the Knowledge Graph (KG), broadening its scope and enhancing its usability.
To ensure that users can get the most out of the HPAC Platform, dissemination, education and training events have been organised, often as joint efforts between ICEI and HPAC. The HPAC team has supported scientists in their work through the central support team and, at a more advanced level, through the High-Level Support Team (HLST) and the performance optimisation service.