Arbor is a software library designed from the ground up for simulators of large networks of multi-compartment neurons on hybrid/accelerated/many-core computer architectures.

Performance portability was achieved for the three main target HPC architectures available through the HBP: Intel x86 CPUs (AVX2 and AVX512), Intel KNL (AVX512) and NVIDIA GPUs (CUDA).

Optimized kernels are automatically generated to target each architecture, and the system used in Arbor can be extended to new architectures in the future.
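The dispatch idea behind such per-architecture kernels can be sketched as follows. This is a plain-Python illustration of the pattern only; the function and table names are invented and are not Arbor's internal API:

```python
# Illustrative sketch of per-architecture kernel dispatch (hypothetical
# names, not Arbor's internals): one portable fallback plus specialised
# kernels, selected once for the detected target architecture.

def kernel_generic(xs):
    # Portable fallback: plain scalar loop.
    return [x * 2.0 for x in xs]

def kernel_avx512(xs):
    # Stand-in for a wide-SIMD variant; same result, different implementation.
    return [x + x for x in xs]

KERNELS = {
    "generic": kernel_generic,
    "avx2": kernel_generic,    # would be a dedicated AVX2 kernel in practice
    "avx512": kernel_avx512,
    "cuda": kernel_generic,    # would launch a GPU kernel in practice
}

def select_kernel(detected_arch):
    # Unknown architectures fall back to the portable kernel.
    return KERNELS.get(detected_arch, kernel_generic)

doubler = select_kernel("avx512")
print(doubler([1.0, 2.0]))  # [2.0, 4.0]
```

Extending the system to a new architecture then amounts to registering another specialised kernel in the table.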

Other enhancements and features implemented in Arbor include:

  • Fully parallelized event generation and queuing from spikes.
  • Efficient sampling of model state (e.g. voltage and current) in both the CPU and GPU implementations.
  • Significant refactoring to prepare the code for general release.
  • A Python interface for users.

The source code was released publicly on GitHub with an open source BSD license, along with documentation on Read the Docs, and automatic testing was set up on Travis CI.

Date of release:
Version of software: 0.1.0
Version of documentation: 0.1.0
Software available:
Responsible: Benjamin Cumming (ETHZ), Alexander Peyser (JUELICH)
Requirements & dependencies:
Target system(s):


The development of ZeroBuf was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

ZeroBuf implements zero-copy, zero-serialize, zero-hassle protocol buffers. It is a replacement for FlatBuffers, resolving the following shortcomings:

  • Direct get and set functionality on the defined data members
  • A single memory buffer storing all data members, which is directly serializable
  • Usable, random read and write access to the data members
  • Zero copy of the data used by the (C++) implementation from and to the network
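The single-buffer idea can be sketched in a few lines. This is a conceptual Python illustration; `Vec2` and its layout are invented for the example and are not ZeroBuf's actual API:

```python
import struct

# Conceptual sketch of the ZeroBuf idea (invented example type, not the
# real C++ API): all data members live in one contiguous buffer, accessors
# read and write that buffer in place, and the buffer itself is the wire
# format, so sending it needs no serialization step.

class Vec2:
    _FMT = struct.Struct("<dd")          # two little-endian doubles: x, y

    def __init__(self, buf=None):
        self._buf = bytearray(buf) if buf else bytearray(self._FMT.size)

    @property
    def x(self):
        return self._FMT.unpack_from(self._buf, 0)[0]

    @x.setter
    def x(self, value):
        struct.pack_into("<d", self._buf, 0, value)

    @property
    def y(self):
        return self._FMT.unpack_from(self._buf, 0)[1]

    @y.setter
    def y(self, value):
        struct.pack_into("<d", self._buf, 8, value)

    def tobytes(self):
        # Zero-copy, zero-serialize: the buffer is the message.
        return bytes(self._buf)

v = Vec2()
v.x, v.y = 1.5, -2.0
w = Vec2(v.tobytes())       # a "received" object reuses the same layout
```

The setters modify the shared buffer directly, which is what distinguishes this model from FlatBuffers' read-only access.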
Date of release:
Version of software: 1.0
Version of documentation: 1.0
Software available:
Responsible: Stefan Eilemann, EPFL
Requirements & dependencies:
Target system(s):


The development of Monsteer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

Monsteer is a library for Interactive Supercomputing in the neuroscience domain. It facilitates the coupling of running simulations (currently NEST) with interactive visualization and analysis applications. Monsteer supports streaming of simulation data to clients (currently only spikes) as well as control of the simulator from the clients (also known as computational steering). Monsteer's main components are a C++ library, a MUSIC-based application and Python helpers.
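The steering idea, clients sending commands that the simulator applies at safe points between steps while streaming data back, can be sketched conceptually. All names below are invented for illustration and are not Monsteer's API:

```python
# Conceptual sketch of computational steering (invented names, not
# Monsteer's API): the client queues commands, the simulator applies them
# between steps, and each step streams data (here, toy "spikes") back.

class SteeredSimulator:
    def __init__(self):
        self.rate = 1.0
        self.pending = []          # commands received from steering clients
        self.spike_stream = []     # data streamed back to clients

    def steer(self, command, value):
        # Called asynchronously by a client; only queues the request.
        self.pending.append((command, value))

    def step(self, t):
        # Apply steering commands at a safe point between steps.
        for command, value in self.pending:
            if command == "set_rate":
                self.rate = value
        self.pending.clear()
        # Toy output standing in for a real NEST simulation step.
        self.spike_stream.append((t, self.rate))

sim = SteeredSimulator()
sim.step(0)
sim.steer("set_rate", 5.0)     # client request arrives mid-run
sim.step(1)                    # takes effect at the next step boundary
```

Deferring command application to step boundaries is what keeps the simulator's state consistent while clients interact with it.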

Date of release: July 2015
Version of software: 0.2.0
Version of documentation: 0.2.0
Software available:
Responsible: Stefan Eilemann, EPFL
Requirements & dependencies: Minimum configuration to configure (using CMake), compile and run Monsteer: a Linux box, GCC 4.8+, CMake 2.8+, Boost 1.54, MPI (OpenMPI, MVAPICH2, etc.), NEST simulator 2.4.2, MUSIC 1.0.7, Python 2.6
See also:
Target system(s): Linux computer


The development of neuroFiReS was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

neuroFiReS is a library for performing search and filtering operations using both data contents and metadata. These search operations will be tightly coupled with visualization in order to improve the insight gained from complex data. A first prototype (named SpineRet) for searching and filtering over segmented spine data has been developed.

SpineRet screenshot
Date of release: N/A
Version of software: 0.1
Version of documentation: 0.1
Software available: Please contact the developers.
Documentation: Please contact the developers.
Responsible: URJC: Pablo Toharia
Requirements & dependencies: Qt, OpenSceneGraph
Supported OS: Windows 7/8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): Desktop computers, notebooks


NeuroLOTs is a set of tools and libraries for creating neuronal meshes from a minimal skeletal description. It generates soma meshes using FEM deformation and allows the tessellation level to be adapted interactively using different criteria (user-defined, camera distance, etc.).

NeuroTessMesh provides a visual environment for the generation of 3D polygonal meshes that approximate the membrane of neuronal cells, starting from the morphological tracings that describe neuronal morphologies. The 3D models can be tessellated at different levels of detail, providing either a homogeneous or an adaptive resolution of the model. The soma shape is recovered from the incomplete information of the tracings by applying a physical deformation model that can be interactively adjusted. The adaptive refinement process, performed on the GPU, generates meshes with good visual quality at an affordable computational cost, both in terms of memory and rendering time. NeuroTessMesh is the front-end GUI to the NeuroLOTs framework.
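As an illustration of a camera-distance criterion for adaptive tessellation, the subdivision level can be derived from the viewing distance. The thresholds and the linear falloff below are assumptions made for the example, not NeuroTessMesh's actual formula:

```python
# Illustrative camera-distance criterion for adaptive tessellation
# (assumed thresholds and falloff, not NeuroTessMesh's actual formula):
# close geometry gets a finer subdivision level, distant geometry coarser.

def tessellation_level(distance, max_level=6, near=10.0, far=500.0):
    """Map camera distance to an integer subdivision level."""
    if distance <= near:
        return max_level          # finest resolution up close
    if distance >= far:
        return 0                  # coarsest resolution in the distance
    # Linear falloff between the near and far thresholds.
    t = (far - distance) / (far - near)
    return round(max_level * t)

print(tessellation_level(5.0))    # 6 (finest)
print(tessellation_level(1000))   # 0 (coarsest)
```

In a real renderer such a criterion would be evaluated per mesh patch each frame, with the refinement itself executed on the GPU.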

Related Publication:
Garcia-Cantero et al. (2017) Front. Neuroinform. DOI:
NeuroLOTs screenshot

Date of release: NeuroLOTs 0.2.0, March 2018; NeuroTessMesh 0.0.1, March 2018
Version of software: NeuroLOTs 0.2.0, NeuroTessMesh 0.0.1
Version of documentation: NeuroLOTs 0.2.0, NeuroTessMesh 0.0.1
Software available:
Responsible: URJC: Pablo Toharia
Requirements & dependencies: Required: Eigen3, OpenGL (>= 4.0), GLEW, GLUT, nsol; Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software)
Supported OS: Windows 7/8.1, GNU/Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): High-fidelity displays, desktop computers, notebooks

Dynamic Load Balancing


DLB is a library devoted to speeding up hybrid parallel applications while improving the efficient use of the computational resources inside a computing node. DLB improves the load balance of the outer level of parallelism by redistributing the computational resources at the inner level of parallelism. This readjustment of resources is done dynamically at runtime, which allows DLB to react to different sources of imbalance: algorithm, data, hardware architecture and resource availability, among others.
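The resource-redistribution idea can be illustrated with a toy sketch. This is plain Python with invented names, not DLB's API: cores belonging to ranks that have reached a blocking point are lent to ranks that are still computing.

```python
# Toy sketch of dynamic load balancing (invented names, not DLB's API):
# idle ranks lend their cores to the ranks still computing, so the inner
# level of parallelism absorbs the imbalance of the outer level.

def rebalance(cores, busy):
    """Move cores from idle ranks to busy ranks, round-robin."""
    idle = [r for r in cores if r not in busy]
    lent = sum(cores[r] for r in idle)
    new = dict(cores)
    for r in idle:
        new[r] = 0                # an idle rank keeps no cores for now
    targets = sorted(busy)
    for i in range(lent):
        new[targets[i % len(targets)]] += 1
    return new

cores = {0: 4, 1: 4, 2: 4}         # initial even assignment per rank
print(rebalance(cores, busy={2}))  # {0: 0, 1: 0, 2: 12}
```

The real library performs this reassignment transparently at runtime, e.g. while ranks wait inside MPI calls, and returns the cores when the lenders resume.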

The first version integrated into the HPAC Platform was v1.1.

DLB is used on the MareNostrum IV supercomputer for several applications.

Related publication:

Date of release: December 2017
Version of software: 1.2
Version of documentation: December 2017
Software available:
Documentation:
Responsible: BSC Programming Models Group
Requirements & dependencies: Any MPI library, OpenMP or OmpSs compiler and runtime; for tracing: Extrae library
See also:
Target system(s): Any system with multiple CPUs/cores in a node (supercomputers, clusters, workstations, …)


The development of HCFFT was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

HCFFT (Hyperbolic Cross Fast Fourier Transform) is a software package to efficiently treat high-dimensional multivariate functions. The implementation is based on the fast Fourier transform for arbitrary hyperbolic cross / sparse grid spaces.
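The underlying index set can be sketched from its standard definition (illustrative Python, not the HCFFT API): a frequency vector k belongs to the hyperbolic cross of refinement N when the product of max(1, |k_j|) over all dimensions stays at or below N, which keeps the number of retained frequencies far smaller than the full tensor grid as the dimension d grows.

```python
from itertools import product

# Sketch of a hyperbolic cross index set (standard textbook definition,
# not the HCFFT API): keep frequencies k with prod_j max(1, |k_j|) <= N.

def hyperbolic_cross(d, N):
    """All integer frequency vectors k in [-N, N]^d on the hyperbolic cross."""
    def weight(k):
        w = 1
        for kj in k:
            w *= max(1, abs(kj))
        return w
    return {k for k in product(range(-N, N + 1), repeat=d) if weight(k) <= N}

hc = hyperbolic_cross(2, 4)
print((0, 0) in hc, (4, 4) in hc)  # True False
```

For d = 2 and N = 4 the cross keeps 49 of the 81 frequencies of the full grid, and the savings grow dramatically with d.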

Date of release: 2015
Version of software: 1.0
Version of documentation: 1.0
Responsible: Fraunhofer (FG) SCAI; contact the developers
Requirements & dependencies: C/C++
Target system(s): Tested under Linux


ZeroEQ is a cross-platform C++ library to publish and subscribe for events. It provides the following major features:

  • Publish events using zeroeq::Publisher
  • Subscribe to events using zeroeq::Subscriber
  • Asynchronous, reliable transport using ZeroMQ
  • Automatic publisher discovery using Zeroconf
  • Efficient serialization of events using FlatBuffers

The main intention of ZeroEQ is to allow the linking of applications using automatic discovery. Linking can be used to connect multiple visualization applications, or to connect simulators with analysis and visualization codes to implement streaming and steering. One example of the former is the interoperability of NeuroScheme with RTNeuron, and one for the latter is the streaming and steering between NEST and RTNeuron. Both were reported previously, whereas the current extensions focus on the implementation of the request-reply interface.
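The publish/subscribe model can be sketched conceptually as follows. This is plain Python with invented class shapes, not the C++ zeroeq API:

```python
# Conceptual sketch of the ZeroEQ publish/subscribe model (invented class
# shapes, not the C++ zeroeq API): subscribers register callbacks per
# event type, and a publisher fans each event out to matching handlers.

class Subscriber:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

class Publisher:
    def __init__(self):
        self.subscribers = []

    def connect(self, subscriber):
        # Stands in for ZeroEQ's automatic Zeroconf discovery.
        self.subscribers.append(subscriber)

    def publish(self, event_type, payload):
        for sub in self.subscribers:
            for handler in sub.handlers.get(event_type, []):
                handler(payload)

received = []
sub = Subscriber()
sub.subscribe("camera", received.append)   # e.g. a camera-sync event
pub = Publisher()
pub.connect(sub)
pub.publish("camera", {"pos": (0, 0, 10)})
```

In the real library the `connect` step happens automatically via Zeroconf and the payloads travel over ZeroMQ sockets rather than in-process callbacks.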

Date of release: February 2018
Version of software: 0.9.0
Version of documentation: 0.9.0
Software available:
Responsible: EPFL: Samuel Lapere
Requirements & dependencies: ZeroMQ, FlatBuffers, Boost, Lunchbox
Target system(s):

ViSTA Virtual Reality Toolkit

The development of ViSTA Virtual Reality Toolkit was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

The ViSTA Virtual Reality Toolkit allows the integration of virtual reality (VR) technology and interactive, 3D visualisation into technical and scientific applications. The toolkit aims to enhance scientific applications with methods and techniques of VR and immersive visualization, thus enabling researchers from multiple disciplines to interactively analyse and explore their data in virtual environments. ViSTA is designed to work on multiple target platforms and operating systems, across various display devices (desktop workstations, powerwalls, tiled displays, CAVEs, etc.) and with various interaction devices.

The new version 1.15 provides several new features compared to version 1.14, which was part of the HBP-internal Platform Release in M18. It is available on SourceForge.

Date of release: February 20, 2013
Version of software: 1.15
Version of documentation: 1.15
Software available:
Documentation: Included in the library source code
Responsible: RWTH: Torsten Kuhlen, Benjamin Weyers
Requirements & dependencies: Libraries: OpenSG, freeglut, GLEW; Operating systems: Windows/Linux; Compilers: Microsoft Visual Studio 2010 (cl16) or higher, GCC 4.4.7 or higher
Target system(s): High-fidelity visualization platforms, immersive visualization hardware, desktop computers


The development of Equalizer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

Equalizer is a parallel rendering framework to create and deploy parallel, scalable OpenGL applications. It provides the following major features to facilitate their development and deployment:

  • Runtime Configurability: An Equalizer application is configured automatically or manually at runtime and can be deployed on laptops, multi-GPU workstations and large-scale visualization clusters without recompilation.
  • Runtime Scalability: An Equalizer application can benefit from multiple graphics cards, processors and computers to scale rendering performance, visual quality and display size.
  • Distributed Execution: Equalizer applications can be written to support cluster-based execution. Equalizer uses the Collage network library, a cross-platform C++ library for building heterogeneous, distributed applications.

Support for Stereo and Immersive Environments: Equalizer supports stereo rendering, head tracking, head-mounted displays and other advanced features for immersive Virtual Reality installations.

Equalizer in immersive environment
Date of release: 2007
Version of software: 1.8
Version of documentation: 1.8
Software available:
Responsible: EPFL: Stefan Eilemann
Requirements & dependencies: Boost, OpenGL, Collage, hwsd, GLEW, Qt
Target system(s):

Deflect Client Library

The development of Deflect Client Library was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.

Deflect is a C++ library to develop applications that can send and receive pixel streams from other Deflect-based applications, for example DisplayCluster. The following applications are provided which make use of the streaming API:

  • DesktopStreamer: A small utility that allows the user to stream the desktop.
  • SimpleStreamer: A simple example to demonstrate streaming of an OpenGL application.
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available:
Responsible: EPFL: Stefan Eilemann
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFmpeg, MPI, Poppler, TUIO, OpenMP
Target system(s):


RTNeuron is a scalable real-time rendering tool for the visualisation of neuronal simulations based on cable models. Its main utility is twofold: the interactive visual inspection of structural and functional features of the cortical column model and the generation of high quality movies and images for presentations and publications. The package provides three main components:

  • A high level C++ library.
  • A Python module that wraps the C++ library and provides additional tools.
  • The Python application script

A wide variety of scenarios is covered by the application script. In case the user needs finer control of the rendering, such as in movie production or to speed up the exploration of different data sets, the Python wrapping is the way to go. It can be used through an IPython shell started directly from the application script, or by importing the module rtneuron into one's own Python programs. GUI overlays can be created for specific use cases using PyQt and QML.

RTNeuron is available on the pilot system JULIA and on JURECA as an environment module.

RTNeuron in aixCAVE
Neuron rendered by RTNeuron
Visual representation of cell dyes
Simulation playback
Interactive circuit slicing
Connection browsing
Date of release: February 2018
Version of software: 2.13.0
Version of documentation: 2.13.0
Software available: open sourcing scheduled for June 2018
Responsible: Samuel Lapere
Requirements & dependencies: BBP SDK, Boost, Equalizer, OpenSceneGraph, osgTransparency, Python, Qt, NumPy, OpenMP, VRPN, CUDA, ZeroEQ
Target system(s):