The mission of the High Performance Analytics and Computing (HPAC) Platform is to build, integrate and operate the base infrastructure for the Human Brain Project, together with Fenix, a closely related project that is about to start. The infrastructure comprises the hardware and software components required to run large-scale, data-intensive, interactive simulations, to manage large amounts of data, and to implement and manage complex workflows comprising concurrent simulation, data analysis and visualisation workloads.

During the last two years, major progress has been made in all areas.

The HPAC Platform has been extended with a set of new features, functionalities and services. The most important achievement is that the Neuroinformatics Platform, the Brain Simulation Platform, the Neurorobotics Platform and the Collaboratory can now run some or all of their services on the HPAC infrastructure. This close infrastructural link allows these Platforms to offer efficient, fast and scalable services to their user communities. Key new elements are the now operational OpenStack service, a flexible tool for creating on-demand virtual computing infrastructure that is integrated with the HPAC authentication and authorisation infrastructure; the deployment of high-performance data transfer services between HPAC sites; the availability of Object Storage; and the ability to mount HPC storage into Jupyter notebooks running in the Collaboratory.

The HPAC Platform experts also develop new technologies that will be integrated into the Platform once they are mature. To ensure that all developments serve the requirements of the neuroscience community, they are co-developed with potential future users, who gain early access to the new services during development.

Data-intensive computing. Data-intensive applications, such as the analysis of data from experimental facilities (e.g. brain image scanners) or from simulations, play an increasingly important role. Ongoing development efforts focus on the utilisation and management of hierarchical storage architectures, as well as on data stores that exploit novel dense memory-based storage devices for such applications. Furthermore, software for coupling simulations with data processing pipelines, e.g. visualisation pipelines, has been developed.
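The coupling of a simulation with a data processing pipeline can be sketched, in a simplified form, as a streaming producer/consumer chain, in which analysis stages consume snapshots as the simulation emits them instead of waiting for the full run to finish. The names and structure below are purely illustrative assumptions, not the actual HPAC coupling software:

```python
# Illustrative sketch: a toy "simulation" streams state snapshots into an
# analysis stage, which computes a running mean on the fly. All names here
# are hypothetical; the real coupling software is not shown in this report.

def simulation(n_steps):
    """Toy simulation producing one state snapshot per time step."""
    state = 0.0
    for step in range(n_steps):
        state += 1.0          # stand-in for the actual numerics
        yield step, state

def running_mean(snapshots):
    """Analysis stage: running mean of the state, computed incrementally."""
    total = 0.0
    for count, (step, state) in enumerate(snapshots, start=1):
        total += state
        yield step, total / count

# Chain the stages: the analysis consumes snapshots as they are produced.
results = list(running_mean(simulation(n_steps=4)))
print(results)  # [(0, 1.0), (1, 1.5), (2, 2.0), (3, 2.5)]
```

The same pattern extends to a visualisation stage consuming the analysis output, so that no stage ever needs the full data set in memory at once.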

Dynamic resource management. Supercomputers are typically operated in batch mode, i.e. users submit their jobs to a job scheduler that tries to maximise overall system usage. To reach a higher level of interactivity, for example to start an ad hoc visualisation during a simulation run in order to inspect the evolution of the simulated network, such a visualisation job needs to be scheduled instantaneously. This calls for more dynamic resource management, which has to be implemented at different levels of the HPC system as well as at the application level. The simulators NEST and Neuron have been prepared to support such resource sharing, and the job schedulers have been enhanced for this new type of resource management.
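The resource sharing described above can be illustrated with a toy model of a malleable job: a running simulation temporarily lends some of its cores to an interactive visualisation job and reclaims them afterwards. This is a minimal sketch under assumed semantics, not the logic of the actual schedulers or of NEST/Neuron:

```python
# Toy sketch of dynamic resource management: a malleable batch job shrinks
# to lend cores to an interactive visualisation job, then grows back.
# All classes and names are hypothetical illustrations.

class MalleableJob:
    def __init__(self, name, cores):
        self.name = name
        self.cores = cores

    def shrink(self, n):
        """Release up to n cores (the application must support resizing)."""
        n = min(n, self.cores - 1)   # always keep at least one core
        self.cores -= n
        return n

    def grow(self, n):
        """Reclaim previously released cores."""
        self.cores += n

def run_interactive(batch_job, viz_cores_needed):
    """Borrow cores for a visualisation job, then return them."""
    borrowed = batch_job.shrink(viz_cores_needed)
    # ... the interactive visualisation would run here on `borrowed` cores ...
    batch_job.grow(borrowed)
    return borrowed

sim = MalleableJob("simulation", cores=64)
granted = run_interactive(sim, viz_cores_needed=8)
print(granted, sim.cores)  # 8 64
```

In a real system the shrink/grow calls correspond to a negotiation between the application and the job scheduler, which is why both the simulators and the schedulers had to be adapted.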

Simulation technology. We also made important progress on application-level software that needs to be closely aligned with the infrastructure evolution. The NEST simulator now supports rate models of neuronal networks, its data structures have been revised to better exploit the architectures of modern and future supercomputers, and critical bottlenecks in the construction of brain-scale networks have been eliminated. NESTML, a domain-specific language for the specification of neuron models, has been advanced further. MUSIC has been used to couple NEST and UG4 in support of multi-scale simulations. The Arbor library has also been enhanced, with a focus on optimising its kernels for target architectures such as Intel KNL or NVIDIA GPUs, which are increasingly available as part of large-scale HPC systems.
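Rate models describe neuronal populations by a continuous firing rate rather than individual spikes, typically with dynamics of the general form tau dr_i/dt = -r_i + sum_j w_ij r_j + I_i. The following plain-Python Euler integration of a small linear rate network illustrates this model class; it is a self-contained sketch, not NEST's internal implementation:

```python
# Euler integration of a linear rate-neuron network,
#   tau * dr_i/dt = -r_i + sum_j w_ij * r_j + I_i,
# as a plain-Python illustration of the rate-model class. Parameter values
# and structure are illustrative only, not taken from NEST.

def simulate_rates(weights, inputs, tau=10.0, dt=0.1, steps=5000):
    n = len(inputs)
    r = [0.0] * n
    for _ in range(steps):
        drive = [sum(weights[i][j] * r[j] for j in range(n)) + inputs[i]
                 for i in range(n)]
        r = [r[i] + dt / tau * (-r[i] + drive[i]) for i in range(n)]
    return r

# Two weakly, mutually coupled units: the rates relax towards the fixed
# point of the linear system r = W r + I.
W = [[0.0, 0.2],
     [0.2, 0.0]]
I = [1.0, 0.5]
rates = simulate_rates(W, I)
print([round(x, 3) for x in rates])
```

For this example the fixed point can be checked by hand: r_0 = 0.2 r_1 + 1.0 and r_1 = 0.2 r_0 + 0.5 give r_0 = 1.1/0.96 ≈ 1.146 and r_1 ≈ 0.729, which the integration approaches.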

Interactive visualisation. Many developments in the HBP yield data sets that are too large to open and analyse with standard viewers or visual analysis tools, be it ultra-high-resolution imaging data or the results of large-scale simulations. Therefore, major effort has been invested in the development of interactive visualisation and visual analysis tools that are well integrated within the HPAC infrastructure, making such large data sets amenable to visual analysis. Some of the tools can also be coupled to give users different views of the data at the same time, easing analysis.
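One common way such tools handle data too large to load at once is out-of-core processing: the data is streamed in blocks and reduced to a compact per-block summary that an interactive overview can display. The sketch below illustrates the idea with synthetic data; the functions are hypothetical stand-ins, not the actual HPAC visualisation tools:

```python
# Sketch of out-of-core reduction for visual analysis: stream a data set
# block by block and keep only a (min, max, mean) summary per block, i.e.
# a coarse level-of-detail view. Names and data are illustrative only.

def stream_blocks(n_blocks, block_size):
    """Stand-in for reading blocks of a huge file or simulation output."""
    value = 0
    for _ in range(n_blocks):
        block = [value + i for i in range(block_size)]
        value += block_size
        yield block

def summarise(blocks):
    """One (min, max, mean) triple per block for the overview display."""
    return [(min(b), max(b), sum(b) / len(b)) for b in blocks]

overview = summarise(stream_blocks(n_blocks=3, block_size=4))
print(overview)  # [(0, 3, 1.5), (4, 7, 5.5), (8, 11, 9.5)]
```

A viewer can then render the overview at interactive rates and fetch full-resolution blocks only for the region a user zooms into.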