
Arbor

Arbor is a software library designed from the ground up for simulating large networks of multi-compartment neurons on hybrid, accelerated and many-core computer architectures.

Performance portability was achieved for the three main target HPC architectures available through the HBP: Intel x86 CPUs (AVX2 and AVX512), Intel KNL (AVX512) and NVIDIA GPUs (CUDA).

Optimized kernels are automatically generated to target each architecture, and the system used in Arbor can be extended to new architectures in the future.

Other enhancements and features implemented in Arbor include:

  • Fully parallelized event generation and queuing from spikes.
  • Efficient sampling of model state on CPU and GPU implementations, e.g. voltage and current.
  • Significant refactoring to prepare the code for general release.
  • A Python interface for users.
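
As an illustration of the event handling listed above, the following plain-Python sketch (not Arbor's actual API; the function name, the connection-table layout and the min-delay handling are illustrative assumptions) turns spikes into per-cell, time-ordered delivery queues:

```python
import heapq
from collections import defaultdict

def deliver_spikes(spikes, connections, min_delay):
    """Turn (source_gid, spike_time) pairs into per-target event queues.

    connections maps source_gid -> list of (target_gid, weight, delay);
    each spike is delivered to its targets after the connection delay
    (at least min_delay), kept time-ordered with a heap.
    """
    queues = defaultdict(list)
    for src, t in spikes:
        for tgt, weight, delay in connections.get(src, []):
            heapq.heappush(queues[tgt], (t + max(delay, min_delay), weight))
    # Drain each heap into a time-sorted event list per target cell.
    return {tgt: [heapq.heappop(q) for _ in range(len(q))]
            for tgt, q in queues.items()}
```

For example, a spike at t = 2.0 on cell 0, connected to cell 1 with delay 1.0, is delivered at t = 3.0. In Arbor itself this work is parallelized, which is what the first bullet above refers to.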

The source code was released publicly on GitHub under an open-source BSD license, along with documentation on Read the Docs, and automatic testing was set up on Travis CI.

Date of release:
Version of software: 0.1.0
Version of documentation: 0.1.0
Software available: https://github.com/eth-cscs/arbor
Documentation: http://arbor.readthedocs.io, https://github.com/eth-cscs/arbor
Responsible: Benjamin Cumming - bcumming@cscs.ch
Requirements & dependencies:
Target system(s):

VTK-m

VTK-m is a scientific visualization and analysis framework that offers a wealth of building blocks for creating visualization and analysis applications. VTK-m facilitates scaling those applications to massively parallel shared-memory systems and, owing to its architecture, is likely to run efficiently on future platforms as well.

HPX is a task-based programming model. It simplifies the formulation of well-scaling, highly-parallel algorithms. Integrating this programming model into VTK-m streamlines the formulation of its parallel building blocks and thus makes their deployment on present and emerging HPC platforms more efficient. Since neuroscientific applications require more and more compute power as well as memory, harnessing the available resources will become a challenge in itself. By combining VTK-m and HPX into task-based analysis and visualization, we expect to provide suitable tools to effectively face this challenge and facilitate building sophisticated interactive visual analysis tools, tailored to the neuroscientists’ needs.

Parallel primitive algorithms required for VTK-m have been added to HPX, along with API support to enable the full range of visualization algorithms developed for VTK-m. A new scheduler has been developed that accepts core/NUMA placement hints from the programmer, so that cache reuse can be maximized and traffic between sockets minimized. High-performance tasks that access data shared by the application and the visualization can use this capability to improve performance. The thread-pool management was improved to allow visualization tasks, communication tasks, and application tasks to execute on different cores if necessary, which reduces latency between components and improves the overall throughput of the distributed application. RDMA primitives have been added to the HPX messaging layer. These improvements make it possible to scale HPX applications to very high node/core counts; tests have been successful on 10,000 nodes using 650,000 cores.

Date of release: November 2017
Version of software: 1.1
Version of documentation: 1.1
Software available: https://gitlab.kitware.com/vtk/vtk-m
Documentation: http://m.vtk.org/images/c/c8/VTKmUsersGuide.pdf
Responsible: John Biddiscombe
Requirements & dependencies:
Target system(s): HPC platforms

PLIViewer

The PLIViewer is visualization software for 3D-Polarized Light Imaging (3D-PLI) that allows interactive exploration of the scalar and vector datasets; it provides additional methods to transform data, revealing new insights that are not available in the raw representations. The high resolution provided by 3D-PLI produces massive, terabyte-scale datasets, which makes visualization challenging.

The PLIViewer tackles this problem by providing functionality to select areas of interest from the dataset, along with options for downscaling. It also makes it possible to interactively compute and visualize Orientation Distribution Functions (ODFs) and polar plots from the vector field, which reveal mesoscopic- and macroscopic-scale information from the microscopic dataset without significant loss of detail. The PLIViewer equips the neuroscientist with specialized visualization tools needed to explore 3D-PLI datasets through direct and interactive visualization of the data.
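
The polar plots mentioned above can be sketched with a minimal orientation histogram in plain Python (purely illustrative; the function and its binning scheme are assumptions, not the PLIViewer's implementation):

```python
import math

def polar_histogram(angles, n_bins=8):
    """Bin in-plane fibre orientations (radians) into a distribution
    over [0, pi), an ODF-like summary of a vector field region."""
    hist = [0] * n_bins
    for a in angles:
        a = a % math.pi  # orientations are axial: a and a + pi coincide
        hist[int(a / math.pi * n_bins) % n_bins] += 1
    return hist
```

Binning modulo π reflects that fibre orientations are axial: a fibre at angle a is indistinguishable from one at a + π.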


The original dataset: Fibre Orientation Maps rendered on top of Retardation map
A full slice from the Fibre Orientation Map of a Vervet Monkey
Orientation Distribution Functions (ODFs) rendered with Streamline Tractography
Close-up view of the ODFs
Date of release: February 2018
Version of software:
Version of documentation: 1.1.0
Software available: https://devhub.vr.rwth-aachen.de/VR-Group/pli_vis
Documentation: https://devhub.vr.rwth-aachen.de/VR-Group/pli_vis
Responsible: Thomas Vierjahn
Requirements & dependencies:
Target system(s):

NEST-simulated spatial-point-neuron data visualisation

Complementary to other viewers and visualization implementations for NEST simulations, this technology offers a rendering of activity and membrane potentials in a neural network simulated with NEST.

A prototypical implementation exists that is based on VTK, the widely used Visualization Toolkit. This implementation can generally be run by computational neuroscientists on their workstations, imposing only moderate hardware requirements. Experiments using rendering on high-performance computing infrastructure were successful.

The results indicate that this component is extensible towards large-scale simulations that require HPC resources and thus produce large output data. The high-fidelity rendering used in this case provides very high-quality images that may be suitable for publications (proof-of-concept image below).

Rendering of color-coded membrane potentials on spatial neurons from a running NEST simulation
Proof of concept of a high-quality rendering of spatially organized point neurons
Date of release:
Version of software: Please contact the developers
Version of documentation:
Software available: Please contact the developers
Documentation: Available on demand
Responsible: Thomas Vierjahn
Requirements & dependencies:
Target system(s): Workstations to HPC

NEST in situ framework

The NEST in situ framework facilitates visualization and analysis of the output data of a NEST simulation while the simulation is still running (line plot figure below). Membrane potentials, spikes and other data are streamed from the simulation. The framework builds on top of Conduit, a well-established in situ library, for compatibility and extensibility reasons.

The framework consists of a well-tested, compact C++ library that is linked into the NEST simulator to provide the streaming capabilities, and into consumer applications for visualisation and analysis. Python bindings for consumer applications are also provided, making the framework more accessible to computational neuroscientists who are familiar with NEST and Python.
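
The data flow can be pictured with a toy producer/consumer pair in plain Python (standard library only; a schematic stand-in, not the framework's Conduit-based API):

```python
import queue
import threading

def simulate(out_q, n_steps=5):
    """Toy 'simulator': stream (time, V_m) samples while running."""
    v = -70.0
    for step in range(n_steps):
        v += 1.0                      # stand-in for membrane dynamics
        out_q.put((step * 0.1, v))
    out_q.put(None)                   # end-of-stream marker

def consume(in_q):
    """Toy 'consumer': collect samples for live plotting or analysis."""
    samples = []
    while (item := in_q.get()) is not None:
        samples.append(item)
    return samples

q = queue.Queue()
producer = threading.Thread(target=simulate, args=(q,))
producer.start()
result = consume(q)
producer.join()
```

The consumer sees samples as they are produced, which is the essential property of in situ analysis: no simulation output needs to touch disk first.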

Implemented data flow in the NEST in situ pipeline
Line plot of membrane potentials from a running NEST simulation
Date of release: 2018
Version of software: 18.02.0
Version of documentation: 18.02.0
Software available: https://devhub.vr.rwth-aachen.de/VR-Group/nest-in-situ-vis
Documentation: https://devhub.vr.rwth-aachen.de/VR-Group/nest-in-situ-vis
Responsible: Thomas Vierjahn
Requirements & dependencies: NEST simulator
Target system(s):

Multi-View Framework

The Multi-View Framework is a software component that offers functionality to combine various visual representations of one or more data sets in a coordinated fashion. Software components offering visualization capabilities can be included in such a network, as well as software components offering other functionality, such as statistical analysis. Multi-display scenarios can be addressed by the framework, as coordination information can be distributed over the network between view instances running on distributed machines.

The framework is composed of three libraries: nett, nett-python and nett-connect. nett implements a lightweight underlying messaging layer enabling communication between views; nett-python implements a Python binding for nett, which enables the integration of Python-based software components into a multi-view setup. nett-connect adds functionality to this basic communication layer that enables non-experts to create multi-view setups according to their specific needs and workflows.
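
A coordinated multi-view messaging layer of this kind can be sketched in a few lines of Python (an in-process toy; nett itself works across processes and machines, and the class and slot names here are invented):

```python
class MessageBus:
    """Minimal in-process stand-in for a nett-style messaging layer:
    views subscribe to named slots and receive every published message."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, slot, callback):
        self.subscribers.setdefault(slot, []).append(callback)

    def publish(self, slot, message):
        for callback in self.subscribers.get(slot, []):
            callback(message)

bus = MessageBus()
selected = []
bus.subscribe("selection", selected.append)        # a second view mirrors selections
bus.publish("selection", {"neuron_ids": [4, 8, 15]})
```

Publishing a selection in one view updates every subscribed view, which is the coordination pattern the framework generalizes to distributed displays.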

Interactive optimization of parameters for structural plasticity in neural network models (top left); comparative analysis of NEST simulations (top right); statistical analysis of NEST simulations (bottom left); multi-device and multi-user scenarios (bottom right)
Date of release: 2017
Version of software: N/A
Version of documentation: N/A
Software available: Please contact the developers
Documentation: https://devhub.vr.rwth-aachen.de/cnowke/nett-connect
Responsible: Benjamin Weyers
Requirements & dependencies:
Target system(s):

NEST: The Neural Simulation Tool

Science has driven the development of the NEST simulator for the past 20 years. Originally created to simulate the propagation of synfire chains using single-processor workstations, NEST's capabilities have been pushed continuously to address new scientific questions and computer architectures. Prominent examples include studies on spike-timing dependent plasticity in large simulations of cortical networks, the verification of mean-field models, and models of Alzheimer's disease, Parkinson's disease and tinnitus. Recent developments include a significant reduction in memory requirements, as demonstrated by a record-breaking simulation of 1.86 billion neurons connected by 11.1 trillion synapses on the Japanese K supercomputer, paving the way for brain-scale simulations.

Running on everything from laptops to the world’s largest supercomputers, NEST is configured and controlled by high-level Python scripts, while harnessing the power of C++ under the hood. An extensive testsuite and systematic quality assurance ensure the reliability of NEST.
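
As a flavour of the kind of model NEST integrates, here is a standalone leaky integrate-and-fire neuron in plain Python (a didactic sketch with invented parameter values, not NEST's API or its solvers):

```python
def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-70.0, v_th=-55.0, v_reset=-70.0):
    """Euler-integrate a leaky integrate-and-fire neuron.

    i_ext is a list of input currents, one per time step; returns the
    spike times (threshold crossings followed by a reset).
    """
    v, spikes = v_rest, []
    for step, i in enumerate(i_ext):
        v += dt * (-(v - v_rest) + i) / tau   # leak towards rest plus input drive
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

NEST provides many such models as optimized C++ implementations and scales the same mathematics to networks of billions of neurons; the Python layer is used to build and steer the simulation rather than to integrate the equations.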

The development of NEST is driven by the demands of neuroscience and carried out in a collaborative fashion at many institutions around the world, coordinated by the non-profit, member-based NEST Initiative. NEST is released under the GNU General Public License, version 2 or later.

How NEST has been improved in HBP

Continuous dynamics

The continuous dynamics code enables simulations of rate-based model neurons in the event-based simulation scheme of the spiking simulator NEST. The technology was included and released with NEST 2.14.0.

Furthermore, additional rate-based models for the Co-Design Project “Visuo-Motor Integration” (CDP4) have been implemented and scheduled for NEST release 2.16.0.

Related publication:
Hahne et al. (2017) Front. Neuroinform. 11:34. doi:10.3389/fninf.2017.00034

NESTML

NESTML is a domain-specific language that supports the specification of neuron models in a precise and concise syntax based on the syntax of Python. Model equations can either be given as a simple string of mathematical notation or as an algorithm written in the built-in procedural language. The equations are analyzed by NESTML to compute an exact solution if possible, or to select an appropriate numeric solver otherwise.
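
The "exact solution if possible" point can be illustrated on the linear ODE dV/dt = -V/tau: a generic Euler update accumulates discretization error, while the exact propagator V(t+dt) = V(t)·exp(-dt/tau) does not (plain-Python sketch; the function names are illustrative, not NESTML-generated code):

```python
import math

def euler_step(v, tau, dt):
    """Generic numeric update for dV/dt = -V/tau."""
    return v + dt * (-v / tau)

def exact_step(v, tau, dt):
    """Exact propagator for the same linear ODE: V * exp(-dt/tau)."""
    return v * math.exp(-dt / tau)
```

Integrating from V(0) = 1 with tau = 10 and dt = 0.1 for 100 steps, the exact propagator lands on e^-1 to machine precision while Euler is visibly off, which is why detecting solvable equations pays.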

Link to this release (2018): https://github.com/nest/nestml

Related Publications:

Plotnikov et al. (2016) NESTML: a modeling language for spiking neurons.

Simulator-simulator interfaces

This technology couples the simulation software NEST and UG4 by means of the MUSIC library. NEST sends spike trains, which UG4 receives in the form of events (timestamps) arriving at synapses. The time course of the extracellular potential in a cube (representing a piece of tissue) is simulated based on the arriving spike data. The evolution of the membrane potential in space and time is described by the Xylouris-Wittum model.
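
The receiving side can be caricatured as follows (a toy decaying-kernel sum over delivered spike timestamps; purely illustrative, not the Xylouris-Wittum model or the MUSIC API):

```python
import math

def potential(spike_times, t, tau=5.0, w=1.0):
    """Toy receiver: evaluate a potential at time t by summing an
    exponentially decaying kernel over delivered spike timestamps."""
    return sum(w * math.exp(-(t - s) / tau) for s in spike_times if s <= t)
```

Each arriving timestamp contributes a kernel that decays with time constant tau; the real coupling solves a spatial PDE driven by these events instead of a scalar sum.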

Link to this release (2017): https://github.com/UG4

Related publications:
Vogel et al. (2014) Comput Vis Sci. 16,4. doi: 10.1007/s00791-014-0232-9
Xylouris, K., Wittum, G. (2015) Front Comput Neurosci. doi: 10.3389/fncom.2015.00094

More information

NEST – A brain simulator (short movie)

NEST::documented (long movie)

NEST brochure:

http://www.nest-simulator.org/wp-content/uploads/2015/04/JARA_NEST_final.pdf

Date of release: October 2017
Version of software: v2.14.0
Version of documentation: v2.14.0
Software available: NEST can be run directly from a Jupyter notebook inside a Collab in the HBP Collaboratory.
Download & information: https://www.nest-simulator.org
Latest code version: https://github.com/nest/nest-simulator
Documentation: http://www.nest-simulator.org/documentation/
Responsible: NEST Initiative (http://www.nest-initiative.org/)
General contact: NEST User Mailing List (http://www.nest-simulator.org/community/)

Contact for HBP partners:
Hans Ekkehard Plesser (NMBU/JUELICH): hans.ekkehard.plesser@nmbu.no
Dennis Terhorst (JUELICH): d.terhorst@fz-juelich.de
Requirements & dependencies: Any Unix-like operating system and basic development tools; OpenMP; MPI; GNU Scientific Library
Target system(s): All Unix-like systems, from laptop to supercomputer; has been ported to Raspberry Pi, too

TOUCH

The development of TOUCH was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Efficient spatial joins are pivotal for many applications and particularly important for geographical information systems and for the simulation sciences, where scientists work with spatial models. Past research has primarily focused on disk-based spatial joins; efficient in-memory approaches, however, are important for two reasons: a) main memory has grown so large that many datasets fit in it, and b) the in-memory join is a very time-consuming part of all disk-based spatial joins. TOUCH is a novel in-memory spatial join algorithm that uses hierarchical data-oriented space partitioning, thereby keeping both its memory footprint and the number of comparisons low. The results show that TOUCH outperforms known in-memory spatial-join algorithms as well as in-memory implementations of disk-based join approaches. In particular, it has a one-order-of-magnitude advantage over the memory-demanding state of the art in terms of the number of comparisons (i.e., pairwise object comparisons) as well as execution time, while it is two orders of magnitude faster than approaches with a similar memory footprint. Furthermore, TOUCH is more scalable than competing approaches as data density grows.
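
For orientation, a minimal in-memory spatial join can be written with a uniform grid (space-oriented partitioning; far simpler than TOUCH's hierarchical data-oriented scheme, and shown only to make the problem concrete):

```python
from collections import defaultdict

def grid_join(boxes_a, boxes_b, cell=1.0):
    """Join two sets of axis-aligned boxes (x1, y1, x2, y2):
    hash boxes into grid cells and compare only co-located pairs."""
    def cells(b):
        x1, y1, x2, y2 = b
        for i in range(int(x1 // cell), int(x2 // cell) + 1):
            for j in range(int(y1 // cell), int(y2 // cell) + 1):
                yield (i, j)

    def overlap(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    index = defaultdict(list)
    for ia, a in enumerate(boxes_a):
        for c in cells(a):
            index[c].append(ia)
    result = set()
    for ib, b in enumerate(boxes_b):
        for c in cells(b):
            for ia in index.get(c, ()):
                if overlap(boxes_a[ia], b):
                    result.add((ia, ib))
    return sorted(result)
```

The grid prunes most pairwise comparisons, but a fixed cell size degrades under skewed densities; TOUCH's data-oriented hierarchy is designed to avoid exactly that.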

Date of release:
Version of software:
Version of documentation:
Software available:
Documentation: More information: http://infoscience.epfl.ch/record/186338?ln=en
Responsible: EPFL-DIAS: Darius Sidlauskas (darius.sidlauskas@epfl.ch) and Thomas Heinis (t.heinis@imperial.ac.uk)
Requirements & dependencies:
Target system(s): Tested on Pico supercomputer (CINECA)

MSPViz

MSPViz is a visualization tool for the Model of Structural Plasticity. It uses a visualisation technique based on representing neuronal information through abstract levels and a set of schematic representations at each level. The multilevel structure and the design of the representations provide organized views that facilitate visual analysis tasks.

Each view has been enhanced with line and bar charts for analysing trends in simulation data. Filtering and sorting capabilities can be applied in each view to ease the analysis. Other views, such as connectivity matrices and force-directed layouts, have been incorporated, enriching the existing views and improving the analysis process. The tool has been optimised to reduce rendering and data loading times, even from remote sources such as WebDAV servers.

Screenshot of MSPViz
View of MSPViz to investigate structural plasticity models on different levels of abstraction: connectivity of a single neuron
View of MSPViz to investigate structural plasticity models on different levels of abstraction: full network connectivity
Date of release: March 2018
Version of software: 0.2.6
Version of documentation: 0.2.6 for users
Software available: http://gmrv.es/mspviz
Documentation: Self-contained in the application
Responsible: UPM: Juan Pedro Brito (juanpedro.brito@upm.es)
Requirements & dependencies: Self-contained code
Target system(s): Platform independent

VIOLA

The development of VIOLA was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


VIOLA (VIsualizer Of Layer Activity) is a tool to visualize activity in multiple 2D layers in an interactive and efficient way. It gives insight into spatially resolved time series such as simulation results of neural networks with 2D geometry. The usage example shows how VIOLA can be used to visualize spike data from a NEST simulation (http://nest-simulator.org/) of an excitatory and an inhibitory neuron population with distance-dependent connectivity.
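
The underlying data reduction, binning spike events into per-timestep 2D activity frames, can be sketched as follows (illustrative only; the function and its unit-square coordinates are assumptions, not VIOLA's input format):

```python
def bin_spikes(spikes, grid=(2, 2), dt=1.0, n_steps=2):
    """Bin (time, x, y) spike events, with x and y in [0, 1),
    into per-timestep 2D count frames, one frame per time bin."""
    nx, ny = grid
    frames = [[[0] * nx for _ in range(ny)] for _ in range(n_steps)]
    for t, x, y in spikes:
        step = int(t / dt)
        if 0 <= step < n_steps:
            frames[step][int(y * ny)][int(x * nx)] += 1
    return frames
```

Each frame is then a small image of layer activity at one time bin, which is what an interactive layer-activity viewer animates.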

Date of release: January 2016
Version of software: not yet specified
Version of documentation: not yet specified
Software available: https://github.com/HBPVIS/VIOLA
Documentation: https://github.com/HBPVIS/VIOLA/wiki
Responsible: RWTH Aachen: Benjamin Weyers (weyers@vr.rwth-aachen.de); Forschungszentrum Jülich: Espen Hagen (e.hagen@fz-juelich.de), Johanna Senk (j.senk@fz-juelich.de)
Requirements & dependencies: Three.js
Target system(s): Web browser (Google Chrome)

TRANSFORMERS

The development of TRANSFORMERS was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Spatial joins are becoming increasingly ubiquitous in many applications, particularly in the scientific domain. While several approaches have been proposed for joining spatial datasets, each of them is strongest for a particular density ratio between the joined datasets. More generally, no single proposed method can efficiently join two spatial datasets in a robust manner with respect to their data distributions. Some approaches do well for datasets with contrasting densities while others do better with similar densities. None of them does well when the datasets have locally divergent data distributions.

Therefore, TRANSFORMERS was developed: an efficient and robust spatial join approach that is indifferent to such variations of distribution among the joined data. TRANSFORMERS achieves this by departing from the state of the art and adapting the join strategy and data layout to local density variations among the joined data. It employs a join method based on data-oriented partitioning when joining areas of substantially different local densities, whereas it uses large partitions (as in space-oriented partitioning) when the densities are similar, seamlessly switching between these two strategies at runtime.
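
The runtime strategy switch can be caricatured in a few lines (a toy dispatcher; the threshold and the density measure are invented, not TRANSFORMERS' actual criteria):

```python
def choose_strategy(density_a, density_b, threshold=4.0):
    """Pick a join strategy for one region from the local density
    ratio of the two datasets being joined."""
    ratio = max(density_a, density_b) / max(min(density_a, density_b), 1e-9)
    return "data-oriented" if ratio >= threshold else "space-oriented"
```

Evaluating this per region, rather than once globally, is what makes the approach robust to locally divergent distributions.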

Date of release:
Version of software:
Version of documentation:
Software available:
Documentation:
Responsible: EPFL-DIAS: Mirjana Pavlovic (mirjana.pavlovic@epfl.ch)
Requirements & dependencies:
Target system(s):

RUBIK

The development of RUBIK was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


An increasing number of applications in finance, meteorology, science and other fields produce time series as output. Analysing these vast amounts of time series is key to understanding the phenomena studied, particularly in the simulation sciences, where analysing the time series resulting from a simulation allows scientists to refine the simulated model.

Existing approaches to querying time series typically keep a compact representation in main memory, use it to answer queries approximately, and then access the exact time series data on disk to validate the result. The more precise the in-memory representation, the fewer disk accesses are needed to validate the result. With the massive sizes of today's datasets, however, current in-memory representations oftentimes no longer fit into main memory. To make them fit, their precision has to be reduced considerably, resulting in substantial disk access that impedes query execution today and limits scalability for even bigger datasets in the future.

RUBIK is a novel approach to compressing and indexing time series. RUBIK exploits the fact that time series in many applications, and particularly in the simulation sciences, are similar to each other. It compresses similar time series, i.e., observation values as well as time information, achieving better space efficiency and improved precision. RUBIK translates threshold queries into two-dimensional spatial queries and efficiently executes them on the compressed time series by exploiting the pruning power of a tree structure to find the result, thereby outperforming the state of the art by a factor of between 6 and 23. As the experiments further indicate, exploiting similarity within and between time series is crucial to make query execution scale and to ultimately decouple query execution time from the growth of the data (size and number of time series).
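
The core idea, storing similar series as small deltas against a reference and querying the reconstruction, can be sketched as follows (illustrative only; RUBIK's actual compression and tree-based query execution are far more elaborate):

```python
def compress(series, reference):
    """Store a series as deltas against a similar reference series;
    similar series yield small, highly compressible deltas."""
    return [v - r for v, r in zip(series, reference)]

def threshold_query(deltas, reference, threshold):
    """Return the indices where the reconstructed series exceeds
    the threshold."""
    return [i for i, (d, r) in enumerate(zip(deltas, reference)) if d + r > threshold]
```

Because the query runs directly on the compressed form, no full decompression pass is needed; RUBIK additionally prunes with a spatial tree over the 2D (time, value) view of such queries.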

Date of release:
Version of software:
Version of documentation:
Software available:
Documentation: More information: http://infoscience.epfl.ch/record/209731?ln=en
Responsible: EPFL-DIAS: Eleni Tziritazacharatou (eleni.tziritazacharatou@epfl.ch)
Requirements & dependencies:
Target system(s): Tested on Pico supercomputer (CINECA)

NeuroScheme

NeuroScheme is a tool that allows users to navigate through circuit data at different levels of abstraction, using schematic representations for a fast and precise interpretation of data. It also allows filtering, sorting and selections at the different levels of abstraction. Finally, it can be coupled with realistic visualization or other applications using the ZeroEQ event library developed in WP 7.3.

The application supports analyses based on side-by-side comparison using its multi-panel views, and it also provides focus-and-context views. Its different layouts make it possible to arrange data in different ways: grid, 3D, camera-based, scatterplot-based or circular. It provides editing capabilities, to create a scene from scratch or to modify an existing one.

ViSimpl, part of the NeuroScheme framework, is a prototype developed to analyse simulation data, using both abstract and schematic visualisations. This analysis can be done visually from temporal, spatial and structural perspectives, with the additional capability of exploring the correlations between input patterns and produced activity.


NeuroScheme screenshot
Overview of various neurons
User interface of ViSimpl visualising activity data emerging from a simulation of a neural network model
Date of release: March 2018
Version of software: 0.2
Version of documentation: 0.2
Software available: https://github.com/gmrvvis/NeuroScheme
Documentation: https://github.com/gmrvvis/NeuroScheme, http://gmrv.es/gmrvvis
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Qt4, nsol. Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software). Supported OS: Windows 7, Windows 8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): Desktop computers, notebooks, tablets

VisNEST

The development of VisNEST was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


VisNEST is a tool for visualizing neural network simulations of the macaque visual cortex. It allows for exploring mean activity rates, connectivity of brain areas and information exchange between pairs of areas. In addition, it allows exploration of individual populations of each brain area and their connectivity used for simulation.

VisNEST screenshot
Date of release: Information available on demand.
Version of software: Information available on demand.
Version of documentation: Information available on demand.
Software available: Not publicly available yet. Please contact the developers in case of interest.
Documentation: Reference paper: Nowke, Christian, Maximilian Schmidt, Sacha J. van Albada, Jochen M. Eppler, Rembrandt Bakker, Markus Diesmann, Bernd Hentschel, and Torsten Kuhlen. "VisNEST—Interactive analysis of neural activity data." In Biological Data Visualization (BioVis), 2013 IEEE Symposium on, pp. 65-72. IEEE, 2013.
Responsible: RWTH Aachen: Benjamin Weyers (weyers@vr.rwth-aachen.de) and Torsten Kuhlen (kuhlen@vr.rwth-aachen.de)
Requirements & dependencies: ViSTA, boost, zmq, hdf5
Target system(s): High-fidelity visualization platforms, immersive visualization hardware, desktop computers

InDiProv

The development of InDiProv was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


This server-side tool is meant for creating provenance tracks in the context of interactive analysis tools and visualization applications. It is capable of tracking multiple views and multiple applications for one user using this ensemble. It is further able to export these tracks from the internal database into an XML-based standard format, such as the W3C PROV model or the OPM format. This enables integration with other tools used for provenance tracking and will finally end up in the UP.

Date of release: August 2015
Version of software: August 2015
Version of documentation: August 2015
Software available: https://github.com/hbpvis
Documentation: https://github.com/hbpvis
Responsible: RWTH Aachen: Benjamin Weyers (weyers@vr.rwth-aachen.de) and Torsten Kuhlen (kuhlen@vr.rwth-aachen.de)
Requirements & dependencies: Written in C++; Linux environment, MySQL server 5.6, JSON library for annotation, CodeSynthesis XSD for XML serialization and parsing, ZeroMQ library, Boost library, xerces-c library and mysqlcppcon library
Target system(s): Server-side systems

SCOUT

The development of SCOUT was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


SCOUT is a structure-aware method for prefetching data along interactive spatial query sequences. Given the user input, which is a spatial range query sequence representing the structure explored (interactively) by the user, and the spatial dataset to be queried, SCOUT reduces the query response time by prefetching the data along the query sequence.

As with FLAT, both the query ranges in the query sequence and the spatial objects should be represented using minimum bounding rectangles.

SCOUT outperforms related prefetching techniques (e.g., straight-line extrapolation or Hilbert prefetching) with high prefetching accuracy, which translates into an order-of-magnitude speedup.
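
To make the notion of prefetching along a query sequence concrete, here is the straight-line-extrapolation baseline mentioned above (plain Python; SCOUT's structure-aware method goes beyond this):

```python
def predict_next(query_sequence):
    """Extrapolate the next range query from the last two MBRs
    (x1, y1, x2, y2) in the sequence, assuming straight-line motion."""
    a, b = query_sequence[-2:]
    return tuple(2 * q1 - q0 for q0, q1 in zip(a, b))
```

Data inside the predicted MBR is fetched before the user issues the query; SCOUT improves the hit rate by following the explored structure (e.g., a branching dendrite) rather than a straight line.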

Date of release: March 2015
Version of software: 1.0
Version of documentation: 1.0
Software available: Collaboratory, integrated in and part of the BBP SDK tool set
Documentation: http://dias.epfl.ch/op/preview/BrainDB
Responsible: EPFL-DIAS: Xuesong Lu (xuesong.lu@epfl.ch), Darius Sidlauskas (darius.sidlauskas@epfl.ch)
Requirements & dependencies: Linux, Boost library, BBP SDK
Target system(s): PICO supercomputer

Score-P: HPC Performance Instrumentation and Measurement Tool

The development of Score-P was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Score-P logo

The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. Score-P is developed under a BSD 3-Clause (Open Source) License and governed by a meritocratic governance model.

Score-P offers the user a maximum of convenience by supporting a number of analysis tools. Currently, it works with Periscope, Scalasca, Vampir, and TAU, and is open to other tools. Score-P comes together with the new Open Trace Format Version 2 (OTF2), the Cube4 profiling format and the OPARI2 instrumenter.

Score-P is part of a larger set of tools for parallel performance analysis and debugging developed by the “Virtual Institute – High Productivity Supercomputing” (VI-HPS) consortium. Further documentation, training and support are available through VI-HPS.

The new version 1.4.2 provides the following new features (externally funded) compared to version 1.4, which was part of the HBP-internal Platform Release in M18:

  • Power8, ARM64, and Intel Xeon Phi support
  • Pthread and OpenMP tasking support
  • Prototype OmpSs support
Date of release: February 2014
Version of software: 1.4.2
Version of documentation: 1.x
Software available: http://www.score-p.org, section “Download”
Documentation: http://www.score-p.org, section “Documentation”
Responsible: Score-P consortium: support@score-p.org
Requirements & dependencies: Supported OS: Linux. Needs the OTF2 1.5.x series, Cube 4.3 series, and OPARI2 1.1.2 software packages (available at the same website)
Target system(s): Supercomputers (Cray, IBM Blue Gene, Fujitsu K/FX10), Linux clusters of all kinds, Linux workstations or laptops (for test/training)

Scalasca: HPC Performance Trace Analyzer

The development of Scalasca was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Scalasca logo

Scalasca is a software tool that supports the performance optimisation of parallel programs by measuring and analysing their runtime behaviour. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes.

Scalasca targets mainly scientific and engineering applications based on the programming interfaces MPI and OpenMP, including hybrid applications based on a combination of the two. The tool has been specifically designed for use on large-scale systems including IBM Blue Gene and Cray XT, but is also well suited for small- and medium-scale HPC platforms. The software is available for free download under the New BSD open-source license.

Scalasca is part of a larger set of tools for parallel performance analysis and debugging developed by the “Virtual Institute – High Productivity Supercomputing” (VI-HPS) consortium. Further documentation, training and support are available through VI-HPS.

The new version 2.2.2 provides the following new features (externally funded) compared to version 2.2, which was part of the HBP-internal Platform Release in M18:

  • Power8, ARM64, and Intel Xeon Phi support
  • Pthread and OpenMP tasking support
  • Improved analysis
  • Prototype OmpSs support
Date of release: January 2015
Version of software: 2.2.2
Version of documentation: 2.x
Software available: http://www.scalasca.org/software/scalasca-2.x/download.html
Documentation: http://www.scalasca.org/software/scalasca-2.x/documentation.html
Responsible: Scalasca team: scalasca@fz-juelich.de
Requirements & dependencies: Supported OS: Linux. Needs Score-P v1.2 or newer and the Cube library v4.3 software packages
Target system(s): Supercomputers (Cray, IBM Blue Gene, Fujitsu K/FX10), Linux clusters of all kinds, Linux workstations or laptops (for test/training)

RTNeuron

RTNeuron is a scalable real-time rendering tool for the visualisation of neuronal simulations based on cable models. Its main utility is twofold: the interactive visual inspection of structural and functional features of the cortical column model, and the generation of high-quality movies and images for presentations and publications. The package provides three main components:

  • A high level C++ library.
  • A Python module that wraps the C++ library and provides additional tools.
  • The Python application script rtneuron-app.py.

A wide variety of scenarios is covered by rtneuron-app.py. If the user needs finer control of the rendering, such as in movie production, or wants to speed up the exploration of different data sets, the Python wrapping is the way to go. It can be used through an IPython shell started directly from rtneuron-app.py, or by importing the module rtneuron into the user's own Python programs. GUI overlays can be created for specific use cases using PyQt and QML.

RTNeuron is available on the pilot system JULIA and on JURECA as an environment module.

RTNeuron in aixCAVE
Neuron rendered by RTNeuron
Visual representation of cell dyes
Simulation playback
Interactive circuit slicing
Connection browsing
Date of release: February 2018
Version of software: 2.13.0
Version of documentation: 2.13.0
Software available: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html; open sourcing scheduled for June 2018
Documentation: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html, https://www.youtube.com/watch?v=wATHwvRFGz0
Responsible: Samuel Lapere
Requirements & dependencies: BBP SDK, Boost, Equalizer, OpenSceneGraph, osgTransparency, Python, Qt, NumPy, OpenMP, VRPN, CUDA, ZeroEQ
Target system(s):

Remote Connection Manager (RCM)

The development of Remote Connection Manager (RCM) was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The Remote Connection Manager (RCM) is an application that allows HPC users to perform remote visualisation on Cineca HPC clusters.

The tool allows users to:

  • Visualise the data produced on Cineca’s HPC systems (scientific visualisation);
  • Analyse and inspect data directly on the systems;
  • Debug and profile parallel codes running on the HPC clusters.

The graphical interface of RCM allows HPC users to easily create remote displays and to manage them (connect, kill, refresh).

Screenshot of Remote Connection Manager (RCM)
Date of release: April 2015
Version of software: 1.2
Version of documentation: 1.2
Software available: http://www.hpc.cineca.it/content/remote-visualization-rcm
Documentation: http://www.hpc.cineca.it/content/remote-visualization-rcm
Responsible: Roberto Mucci (superc@cineca.it)
Requirements & dependencies: Works on Windows, Linux and Mac OS X (OS X Mountain Lion users need to install XQuartz: http://xquartz.macosforge.org/landing/)
Target system(s): Notebooks, office computers

Paraver

Paraver is a very flexible data browser. The metrics used are not hardwired into the tool but can be programmed. To compute them, the tool offers a large set of time functions, a filter module, and a mechanism to combine two timelines. This approach allows displaying a huge number of metrics with the available data. The analysis display can compute statistics over any timeline and selected region, which allows correlating the information of up to three different time functions. To capture the expert’s knowledge, any view or set of views can be saved as a Paraver configuration file; re-computing the same view with new data is then as simple as loading the saved file. The tool has proven very useful for performance analysis studies, giving far more detail about application behaviour than most available performance tools.
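The style of programmable, filtered metric that Paraver computes over a selected region can be sketched as follows. This is an illustration of the concept only, not Paraver's implementation; the record layout and function names are assumptions for the example.

```python
# Illustrative sketch: a "timeline" as (start, end, value) records, a
# programmable filter, and a statistic computed over a selected time
# region -- the kind of analysis Paraver makes configurable rather than
# hardwiring into the tool.

def region_stat(timeline, t0, t1, keep=lambda v: True):
    """Average value of records overlapping [t0, t1] that pass the filter."""
    selected = [v for (s, e, v) in timeline
                if e > t0 and s < t1 and keep(v)]
    return sum(selected) / len(selected) if selected else 0.0

timeline = [(0, 10, 2.0), (10, 20, 8.0), (20, 30, 4.0)]
print(region_stat(timeline, 5, 25))                   # mean of 2.0, 8.0, 4.0
print(region_stat(timeline, 5, 25, lambda v: v > 3))  # filtered: 8.0 and 4.0
```

Saving a Paraver configuration file corresponds to persisting the chosen time function, filter, and region so the same view can be recomputed on new data.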

Screenshot of Paraver

The new version 4.6.0 (3rd February 2016) provides the following new features (externally funded) as compared to version 4.5.6 (February 2015) that was part of the HBP-internal Platform Release in M18:

  • Automatic workspaces on trace loading
  • Scalability improvements for traces with more than 64K rows
  • Support for wxWidgets 3
  • Traces with the same hierarchy can be combined for analysis
  • External tools integration

The new version 4.6.3 (16th November 2016) provides the following new features:

  • Added punctual information view to timelines
  • Added external tool Run->Spectral from timelines
  • Trace load time reduced by 25%
  • New histogram features: show only totals, and short/long column labels
  • Run app dialog usability improvements
Date of release: 16 November 2016
Version of software: 4.6.3
Version of documentation: 3.1 (old, 2001), but tutorials are available for newer versions
Software available: https://tools.bsc.es/downloads
Documentation: Paraver website: https://tools.bsc.es/paraver; manuals: https://tools.bsc.es/tools_manuals
Responsible: BSC Performance Tools Group: tools@bsc.es
Requirements & dependencies: Boost >= 1.36; zlib; wxWidgets >= 2.8.0; wxPropertyGrid >= 1.4.0
Target system(s): Any Unix/Linux system (supercomputers, clusters, servers, workstations, laptops, …)

Livre

The development of Livre was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Livre is an out-of-core rendering engine that has the following features:

  • Distributed rendering using the Equalizer parallel rendering framework.
  • Octree-based out-of-core rendering.
  • Visualisation of pre-processed UVF-format volume data sets.
  • Real-time voxelisation and visualisation of surface meshes using OpenGL 4.2 extensions.
  • Real-time voxelisation and visualisation of Blue Brain Project (BBP) morphologies.
  • Real-time voxelisation and visualisation of local-field potentials in the BBP circuit.
  • Multi-node, multi-GPU rendering.
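The octree structure behind out-of-core rendering can be illustrated with a minimal sketch. This is a conceptual example only, not Livre's implementation: a node's cubic region splits into eight octants, and only the octants relevant to the current view need to be loaded or refined.

```python
# Illustrative octree subdivision for out-of-core volume rendering
# (not Livre code): a cubic region, given as (origin, edge length),
# splits into 8 child octants with half the edge length.

def children(origin, size):
    """Return the 8 octants of a cubic region as (origin, edge) pairs."""
    half = size / 2.0
    ox, oy, oz = origin
    return [((ox + dx * half, oy + dy * half, oz + dz * half), half)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

octants = children((0.0, 0.0, 0.0), 8.0)
print(len(octants))  # 8 children, each with edge length 4.0
```

Recursing into only the visible octants is what keeps the working set small enough to stream data that does not fit in memory.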
Data rendered with Livre
Date of release: April 2015
Version of software: 0.3
Version of documentation: 0.3
Software available: https://github.com/BlueBrain/Livre
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: OpenMP, Tuvok, ZeroEQ, FlatBuffers, Boost, Equalizer, Collage, Lunchbox, dash, OpenGL, PNG, Qt
Target system(s):

FLAT

The development of FLAT was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


FLAT is a spatial indexing tool that enables scalable range queries on (3D) spatial datasets. Given a query range and the dataset to be queried, FLAT returns all objects that intersect the query range.

Both the query ranges and the spatial objects are represented by minimum bounding rectangles (MBRs), the geometric approximation that encloses the underlying spatial object.

FLAT outperforms state-of-the-art spatial indexing techniques (e.g. R-trees, grid files) on extremely dense datasets.
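The semantics of such an MBR range query can be shown with a naive linear scan. This sketch only defines what the query returns; FLAT's contribution is an index structure that answers the same query scalably on extremely dense datasets, which this example does not attempt to reproduce.

```python
# Naive MBR range query (semantics only, not FLAT's index): an MBR is a
# pair of corner points ((xmin, ymin, zmin), (xmax, ymax, zmax)); two
# MBRs intersect iff they overlap on every axis.

def intersects(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def range_query(dataset, query):
    """Return all objects whose MBR intersects the query MBR."""
    return [obj for obj in dataset if intersects(obj, query)]

boxes = [((0, 0, 0), (1, 1, 1)), ((5, 5, 5), (6, 6, 6))]
print(range_query(boxes, ((0.5, 0.5, 0.5), (2, 2, 2))))  # first box only
```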

Date of release: March 2015
Version of software: 1.0
Version of documentation: 1.0
Software available: Collaboratory; integrated and part of the BBP SDK tool set
Documentation: http://dias.epfl.ch/op/preview/BrainDB
Responsible: EPFL-DIAS: Xuesong Lu (xuesong.lu@epfl.ch), Darius Sidlauskas (darius.sidlauskas@epfl.ch)
Requirements & dependencies: Linux, Boost library, BBP SDK
Target system(s): PICO supercomputer

DisplayCluster

The development of DisplayCluster was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


DisplayCluster is a software environment for interactively driving large-scale tiled displays. It provides the following functionality:

  • View media interactively such as high-resolution imagery, PDFs and video.
  • Receive content from remote sources such as laptops, desktops or parallel remote visualization machines using the Deflect library.
DisplayCluster on a mobile tiled display wall
DisplayCluster on a tiled display wall
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster, https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL, Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s): Tiled display walls

Cube: Score-P / Scalasca

The development of CUBE: Score-P / Scalasca was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions

  1. Performance metric,
  2. Call path, and
  3. System resource.

Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. In addition, Cube can display multi-dimensional Cartesian process topologies.
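The collapse/expand behaviour of these tree views follows a simple rule: a collapsed node displays its inclusive value, i.e. its own exclusive value plus the values of all descendants. The sketch below illustrates that rule; it is not Cube's data model, and the tuple layout is an assumption for the example.

```python
# Illustration of inclusive vs. exclusive values in a Cube-style tree
# view (not Cube code): a node is (exclusive_value, [children]), and a
# collapsed node shows the sum over itself and all its descendants.

def inclusive(node):
    value, kids = node
    return value + sum(inclusive(k) for k in kids)

# Call-path tree: main has 2s of its own time; its children 'solve'
# and 'io' have 5s and 1s respectively.
main = (2.0, [(5.0, []), (1.0, [])])
print(inclusive(main))  # 8.0 is shown when 'main' is collapsed
```

Expanding the node then reveals the exclusive 2.0 for main alongside its children, which is what lets the user drill down to the desired level of granularity.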

The Cube 4.x series report explorer and the associated Cube4 data format are provided for Cube files produced with the Score-P performance instrumentation and measurement infrastructure or with the Scalasca version 2.x trace analyzer (and other compatible tools). However, for backwards compatibility, Cube 4.x can also read and display Cube 3.x data.

Cube is part of a larger set of tools for parallel performance analysis and debugging developed by the “Virtual Institute – High Productivity Supercomputing” (VI-HPS) consortium. Further documentation, training and support are available through VI-HPS.

Screenshot of Cube: Score-P/Scalasca

The new version 4.3.3 provides the following new features (externally funded) as compared to version 4.3.1 that was part of the HBP-internal Platform Release in M18:

  • Derived metrics support
  • Visual plugins
  • Improved performance and scalability
Date of release: April 2015
Version of software: 4.3.3
Version of documentation: 4.x
Software available: http://www.scalasca.org/software/cube-4.x/download.html
Documentation: http://www.scalasca.org/software/cube-4.x/documentation.html
Responsible: Scalasca team: scalasca@fz-juelich.de
Requirements & dependencies: Supported OS: Linux, Windows; Qt
Target system(s): Linux workstations or laptops

Extrae

The development of Extrae was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Extrae is an instrumentation and measurement system that gathers time-stamped information about the events of an application. It is the package devoted to generating Paraver trace files for post-mortem analysis of a code run. It uses different interposition mechanisms to inject probes into the target application in order to gather information about its performance.
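The general idea of interposition-based tracing can be sketched as follows. This is a conceptual illustration only, not Extrae's mechanism (Extrae interposes on MPI, CUDA, OpenMP and other runtime calls at the binary or link level): a wrapper emits a time-stamped entry/exit event around every call.

```python
# Conceptual sketch of interposition tracing (not Extrae code): wrap a
# function so each call appends time-stamped enter/exit events to a
# trace buffer, as a trace file records them for post-mortem analysis.
import time

trace = []  # (timestamp, event) records

def traced(fn):
    def wrapper(*args, **kwargs):
        trace.append((time.time(), f"enter {fn.__name__}"))
        try:
            return fn(*args, **kwargs)
        finally:
            trace.append((time.time(), f"exit {fn.__name__}"))
    return wrapper

@traced
def compute(n):
    return sum(range(n))

compute(1000)
print([event for _, event in trace])  # ['enter compute', 'exit compute']
```

The timestamps on the enter/exit pairs are what allow a browser such as Paraver to reconstruct timelines of where the application spent its time.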

The new version 3.2.1 (3rd November 2015) provides the following new features as compared to version 3.1.0 that was part of the HBP-internal Platform Release in M18:

  • Support for MPI3 immediate collectives
  • Use Intel PEBS to sample memory references.

The new version 3.4.1 (23rd September 2016) provides the following new features:

  • Extended Java support through AspectJ and JVMTI
  • Improved CUDA and OpenCL support
  • Improved support for MPI-IO operations
  • Added instrumentation for system I/O and other system calls
  • Added support for OMPT
  • Added support for IBM Platform MPI
  • Added instrumentation for memkind allocations
  • Many other small improvements and bug fixes
Date of release: 23 September 2016
Version of software: 3.4.1
Version of documentation: 3.4.1
Software available: https://tools.bsc.es/downloads
Documentation: https://tools.bsc.es/tools_manuals; Extrae website: https://tools.bsc.es/extrae
Responsible: BSC Performance Tools Group: tools@bsc.es
Requirements & dependencies: libxml2 2.5.0; libunwind for Linux x86/x86-64/IA64/ARM. Optional: PAPI; DynInst; liberty and libbfd; MPI; OpenMP
Target system(s): Any Unix/Linux system (supercomputers, clusters, servers, workstations, laptops, …)