
In Situ Pipeline

This is the newer, more general version of the NEST in situ framework.

The in situ pipeline consists of a set of libraries that can be integrated into neuronal network simulators developed by the HBP to enable live visual analysis during simulation runtime. The library ‘nesci’ (neuronal simulator conduit interface) stores the raw simulation data in a common Conduit-based format, and the library ‘contra’ (conduit transport) transports the serialized data from one endpoint to another using a variety of (network) protocols. The pipeline currently works with NEST and Arbor; support for TVB is in development.
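
As a rough illustration of the common data exchange format, the sketch below assembles spike data into a Conduit node in C++. The node layout used here ('spikes/times', 'spikes/neuron_ids') is a made-up example for illustration only, not the schema actually defined by nesci.

  #include <conduit.hpp>   // Conduit core library (conduit::Node)
  #include <vector>

  int main()
  {
      // Hypothetical spike data produced during one simulation interval.
      std::vector<double> spike_times = { 10.1, 10.4, 11.0 };
      std::vector<conduit::int32> neuron_ids = { 3, 17, 42 };

      // Pack the data into a hierarchical Conduit node; nesci stores simulation
      // data in such nodes (the exact layout here is invented for this sketch).
      conduit::Node node;
      node["spikes/times"].set(spike_times);
      node["spikes/neuron_ids"].set(neuron_ids);

      // Print the node to verify its structure; in the real pipeline, contra
      // would serialize and transport it to a visualization endpoint.
      node.print();
      return 0;
  }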

The prototypical integration into the HPAC Platform was finalised in February 2019.

Date of release: First released in July 2018, with continuous updates (see also above)
Version of software: 18.07
Version of documentation: 18.07
Software available: https://devhub.vr.rwth-aachen.de/VR-Group/contra
  https://devhub.vr.rwth-aachen.de/VR-Group/nesci
  https://devhub.vr.rwth-aachen.de/VR-Group/nest-streaming-module
Documentation: See the readme files in the repositories
Responsible: RWTH: Simon Oehrl (oehrl@vr.rwth-aachen.de)
Requirements & dependencies: Required: CMake, C++14, Conduit; Optional: Python, Boost, ZeroMQ
Target system(s): Desktops/HPC systems running Linux, macOS or Windows

VTK-m

The development of VTK-m was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.


VTK-m is a scientific visualization and analysis framework that offers a wealth of building blocks to create visualization and analysis applications. VTK-m facilitates scaling those applications to massively parallel shared memory systems, and it will – due to its architecture – most likely also run efficiently on future platforms.

HPX is a task-based programming model. It simplifies the formulation of well-scaling, highly-parallel algorithms. Integrating this programming model into VTK-m streamlines the formulation of its parallel building blocks and thus makes their deployment on present and emerging HPC platforms more efficient. Since neuroscientific applications require more and more compute power as well as memory, harnessing the available resources will become a challenge in itself. By combining VTK-m and HPX into task-based analysis and visualization, we expect to provide suitable tools to effectively face this challenge and facilitate building sophisticated interactive visual analysis tools, tailored to the neuroscientists’ needs.
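
For orientation, a minimal HPX task sketch is shown below. It is independent of VTK-m and only illustrates the future-based task model described above; header names vary between HPX releases, so treat the includes as an assumption.

  #include <hpx/hpx_main.hpp>       // lets a plain main() run inside the HPX runtime
  #include <hpx/include/async.hpp>  // hpx::async and futures
  #include <iostream>

  int main()
  {
      // Spawn two lightweight HPX tasks; each call returns a future immediately.
      auto a = hpx::async([] { return 20; });
      auto b = hpx::async([] { return 22; });

      // Block until both tasks have completed and combine their results.
      std::cout << "sum = " << a.get() + b.get() << std::endl;
      return 0;
  }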

Parallel primitive algorithms required for VTK-m have been added to HPX along with API support to enable the full range of visualization algorithms developed for VTK-m. A new scheduler has been developed that accepts core/numa placement hints from the programmer such that cache reuse can be maximized and traffic between sockets minimized. High performance tasks that access data shared by application and visualization can use this capability to improve performance. The thread pool management was improved to allow visualization tasks, communication tasks, and application tasks to execute on different cores if necessary, which reduces latency between components and improves the overall throughput of the distributed application. RDMA primitives have been added to the HPX messaging layer. These improvements make it possible to scale HPX applications to very high node/core counts. Respective tests have been successful on 10k nodes using 650k cores.
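
As a small example of the kind of parallel primitive involved, the sketch below reduces an array with VTK-m's device-independent algorithm interface. It assumes a recent VTK-m release (make_ArrayHandle with a CopyFlag and vtkm::cont::Algorithm postdate the 1.1 version listed below) and is not tied to the HPX backend discussed above.

  #include <vtkm/cont/Algorithm.h>
  #include <vtkm/cont/ArrayHandle.h>
  #include <vector>

  int main()
  {
      // Hypothetical per-cell scalar values, e.g. membrane potentials.
      std::vector<vtkm::Float64> values = { -70.0, -65.5, -55.2, -71.3 };

      // Wrap the data in a VTK-m array handle (copied into VTK-m's own storage).
      auto handle = vtkm::cont::make_ArrayHandle(values, vtkm::CopyFlag::On);

      // Parallel reduction; VTK-m dispatches it to whichever device adapter
      // (serial, TBB, CUDA, ...) is available at build/run time.
      const vtkm::Float64 sum = vtkm::cont::Algorithm::Reduce(handle, vtkm::Float64(0));

      (void)sum;
      return 0;
  }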

Date of release: November 2017
Version of software: 1.1
Version of documentation: 1.1
Software available: https://gitlab.kitware.com/vtk/vtk-m
Documentation: http://m.vtk.org/images/c/c8/VTKmUsersGuide.pdf
Responsible: John Biddiscombe (EPFL): john.biddiscombe@epfl.ch
Requirements & dependencies:
Target system(s): HPC platforms

NEST-simulated spatial-point-neuron data visualisation

The development of this technology was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.


Complementary to other viewers and visualization implementations for NEST simulations, this technology offers a rendering of activity and membrane potentials in a neural network simulated with NEST.

A prototypical implementation exists that is based on VTK, the widely used visualisation toolkit. This implementation can generally be run by computational neuroscientists on their workstations, imposing only moderate hardware requirements. Experiments using rendering on high-performance computing infrastructure were successful.

The results indicate that this component is extensible towards large-scale simulations that require HPC resources and thus produce large output data. The high-fidelity rendering used in this case provides very high quality images that may be suitable for publications (proof-of-concept image below).

Rendering of color-coded membrane potentials on spatial neurons from a running NEST simulation
Proof of concept of a high-quality rendering of spatially organized point neurons
Date of release:
Version of software: Please contact the developers
Version of documentation:
Software available: Please contact the developers
Documentation: Available on demand
Responsible: Thomas Vierjahn
Requirements & dependencies:
Target system(s): Workstations to HPC

Multi-View Framework

The Multi-View Framework is a software component that offers functionality to combine various visual representations of one or more data sets in a coordinated fashion. Such a coordinated network can include software components offering visualization capabilities as well as components offering other functionality, such as statistical analysis. The framework also addresses multi-display scenarios, since coordination information can be distributed over the network between view instances running on different machines.

The framework is composed of three libraries: nett, nett-python and nett-connect. nett implements a light-weight underlying messaging layer that enables the communication between views, whereas nett-python implements a Python binding for nett, which enables the integration of Python-based software components into a multi-view setup. nett-connect adds functionality on top of this basic communication layer that enables non-experts to create multi-view setups according to their specific needs and workflows.

Interactive optimization of parameters for structural plasticity in neural network models (top left); comparative analysis of NEST simulations (top right); statistical analysis of NEST simulations (bottom left); multi-device and multi-user scenarios (bottom right)
Date of release: 2017
Version of software: N/A
Version of documentation: N/A
Software available: Please contact the developers
Documentation: https://devhub.vr.rwth-aachen.de/cnowke/nett-connect
Responsible: U Trier: Benjamin Weyers (weyers@uni-trier.de)
Requirements & dependencies:
Target system(s):

ZeroBuf

The development of ZeroBuf was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


ZeroBuf implements zero-copy, zero-serialize, zero-hassle protocol buffers. It is a replacement for FlatBuffers, resolving the following shortcomings:

  • Direct get and set functionality on the defined data members
  • A single memory buffer storing all data members, which is directly serializable
  • Usable, random read and write access to the data members
  • Zero copy of the data used by the (C++) implementation from and to the network
Date of release:
Version of software: 1.0
Version of documentation: 1.0
Software available: https://github.com/HBPVIS/ZeroBuf
Documentation: https://github.com/HBPVIS/ZeroBuf
Responsible: Stefan Eilemann, EPFL (stefan.eilemann@epfl.ch); Support: HBPVis@googlegroups.com
Requirements & dependencies:
Target system(s):

Monsteer

The development of Monsteer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Monsteer is a library for Interactive Supercomputing in the neuroscience domain. Monsteer facilitates the coupling of running simulations (currently NEST) with interactive visualization and analysis applications. Monsteer supports streaming of simulation data to clients (currently only spikes) as well as control of the simulator from the clients (also known as computational steering). Monsteer’s main components are a C++ library, a MUSIC-based application and Python helpers.

Date of release: July 2015
Version of software: 0.2.0
Version of documentation: 0.2.0
Software available: https://github.com/BlueBrain/Monsteer
Documentation: http://bluebrain.github.io/
Responsible: Stefan Eilemann, EPFL (stefan.eilemann@epfl.ch)
Requirements & dependencies: Minimum configuration to configure (using CMake), compile and run Monsteer: a Linux box, GCC compiler 4.8+, CMake 2.8+, Boost 1.54, MPI (OpenMPI, MVAPICH2, etc.), NEST simulator 2.4.2, MUSIC 1.0.7, Python 2.6. See also: http://bluebrain.github.io/Monsteer-0.3/_user__guide.html#Compilation
Target system(s): Linux computer

MSPViz

MSPViz is a visualization tool for the Model of Structural Plasticity. It uses a visualisation technique based on representing the neuronal information at several levels of abstraction, with a set of schematic representations at each level. The multilevel structure and the design of the representations constitute an approach that provides organized views and facilitates visual analysis tasks.

Each view has been enhanced with line and bar charts to analyse trends in simulation data. Filtering and sorting capabilities can be applied in each view to ease the analysis. Other views, such as connectivity matrices and force-directed layouts, have been incorporated, enriching the already existing views and improving the analysis process. The tool has been optimised to lower rendering and data-loading times, even from remote sources such as WebDAV servers.

View of MSPViz to investigate structural plasticity models on different levels of abstraction: connectivity of a single neuron
View of MSPViz to investigate structural plasticity models on different levels of abstraction: full network connectivity
Date of release: March 2018
Version of software: 0.2.6
Version of documentation: 0.2.6 for users
Software available: http://gmrv.es/mspviz
Documentation: Self-contained in the application
Responsible: UPM: Juan Pedro Brito (juanpedro.brito@upm.es)
Requirements & dependencies: Self-contained code
Target system(s): Platform independent

NeuroScheme

NeuroScheme is a tool that allows users to navigate through circuit data at different levels of abstraction, using schematic representations for a fast and precise interpretation of the data. It also allows filtering, sorting and selection at the different levels of abstraction. Finally, it can be coupled with realistic visualization or other applications using the ZeroEQ event library developed in WP 7.3.

This application allows analyses based on side-by-side comparison using its multi-panel views, and it also provides focus-and-context views. Its different layouts enable arranging data in different ways: grid, 3D, camera-based, scatterplot-based or circular. It provides editing capabilities to create a scene from scratch or to modify an existing one.

ViSimpl, part of the NeuroScheme framework, is a prototype developed to analyse simulation data, using both abstract and schematic visualisations. This analysis can be done visually from temporal, spatial and structural perspectives, with the additional capability of exploring the correlations between input patterns and produced activity.

 

Overview of various neurons
User interface of ViSimpl visualising activity data emerging from a simulation of a neural network model
Date of release: March 2018
Version of software: 0.2
Version of documentation: 0.2
Software available: https://github.com/gmrvvis/NeuroScheme
Documentation: https://github.com/gmrvvis/NeuroScheme, http://gmrv.es/gmrvvis
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Qt4, nsol. Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software). Supported OS: Windows 7, Windows 8.1, Linux (tested on Ubuntu 14.04) and Mac OSX
Target system(s): Desktop computers, notebooks, tablets

NeuroLOTs

NeuroLOTs is a set of tools and libraries for creating neuronal meshes from a minimal skeletal description. It generates soma meshes using FEM deformation and allows the tessellation level to be adapted interactively based on different criteria (user-defined, camera distance, etc.).

NeuroTessMesh provides a visual environment for the generation of 3D polygonal meshes that approximate the membrane of neuronal cells, starting from the morphological tracings that describe neuronal morphologies. The 3D models can be tessellated at different levels of detail, providing either a homogeneous or an adaptive resolution of the model. The soma shape is recovered from the incomplete information of the tracings by applying a physical deformation model that can be interactively adjusted. The adaptive refinement process, performed on the GPU, generates meshes that provide good visual quality at an affordable computational cost, both in terms of memory and rendering time. NeuroTessMesh is the front-end GUI to the NeuroLOTs framework.

Related Publication:
Garcia-Cantero et al. (2017) Front. Neuroinform. DOI: https://dx.doi.org/10.3389/fninf.2017.00038

Date of release: NeuroLOTs 0.2.0, March 2018; NeuroTessMesh 0.0.1, March 2018
Version of software: NeuroLOTs 0.2.0, NeuroTessMesh 0.0.1
Version of documentation: NeuroLOTs 0.2.0, NeuroTessMesh 0.0.1
Software available: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh
Documentation: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh, https://gmrvvis.github.io/doc/neurolots/, https://github.com/gmrvvis/neurolots/blob/master/README.md, http://gmrv.es/neurotessmesh/NeuroTessMeshUserManual.pdf, http://gmrv.es/gmrvvis/neurolots/
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Eigen3, OpenGL (>= 4.0), GLEW, GLUT, nsol. Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software). Supported OS: Windows 7/8.1, GNU/Linux (tested on Ubuntu 14.04) and Mac OSX
Target system(s): High fidelity displays, desktop computers, notebooks

InDiProv

The development of InDiProv was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


This server-side tool is meant to be used for the creation of provenance tracks in the context of interactive analysis tools and visualization applications. It is capable of tracking multi-view setups and multiple applications used together by one user. It is further able to export these tracks from the internal database into an XML-based standard format, such as the W3C PROV model or the OPM format. This enables the integration with other tools used for provenance tracking, and the tracks will finally end up in the UP.

Date of release: August 2015
Version of software: August 2015
Version of documentation: August 2015
Software available: https://github.com/hbpvis
Documentation: https://github.com/hbpvis
Responsible: RWTH Aachen: Benjamin Weyers (weyers@vr.rwth-aachen.de) and Torsten Kuhlen (kuhlen@vr.rwth-aachen.de)
Requirements & dependencies: Written in C++; requires a Linux environment, MySQL server 5.6, a JSON library for annotation, CodeSynthesis XSD for XML serialization and parsing, the ZeroMQ library, the Boost library, the xerces-c library and the mysqlcppconn library
Target system(s): Server-side systems

ZeroEQ

The development of ZeroEQ was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.


ZeroEQ is a cross-platform C++ library to publish and subscribe for events. It provides the following major features:

  • Publish events using zeroeq::Publisher
  • Subscribe to events using zeroeq::Subscriber
  • Asynchronous, reliable transport using ZeroMQ
  • Automatic publisher discovery using Zeroconf
  • Efficient serialization of events using FlatBuffers

The main intention of ZeroEQ is to allow the linking of applications using automatic discovery. Linking can be used to connect multiple visualization applications, or to connect simulators with analysis and visualization codes to implement streaming and steering. One example of the former is the interoperability of NeuroScheme with RTNeuron, and one for the latter is the streaming and steering between NEST and RTNeuron. Both were reported previously, whereas the current extensions focus on the implementation of the request-reply interface.
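
A minimal publish/subscribe sketch in C++ is shown below. It follows the feature list above (Publisher, Subscriber, Zeroconf discovery), but the exact signatures, in particular the payload callback and the uint128_t event id, are assumptions that should be checked against the installed ZeroEQ release.

  #include <zeroeq/zeroeq.h>
  #include <iostream>

  int main()
  {
      zeroeq::Publisher publisher;    // announced via Zeroconf for automatic discovery
      zeroeq::Subscriber subscriber;  // discovers publishers in the same session

      const zeroeq::uint128_t eventId(42);  // application-defined event identifier
      subscriber.subscribe(eventId, [](const void*, size_t)
                           { std::cout << "event received" << std::endl; });

      publisher.publish(eventId);  // fire the event (no payload)
      subscriber.receive(1000);    // poll up to 1000 ms for incoming events
      return 0;
  }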

Date of release: February 2018
Version of software: 0.9.0
Version of documentation: 0.9.0
Software available: https://hbpvis.github.io/ZeroEQ-0.9/index.html
Documentation: https://github.com/HBPVis/ZeroEQ, https://hbpvis.github.io/ZeroEQ-0.9/index.html
Responsible: EPFL: Samuel Lapere (samuel.lapere@epfl.ch)
Requirements & dependencies: ZeroMQ, FlatBuffers, Boost, Lunchbox
Target system(s):

ViSTA Virtual Reality Toolkit

The development of ViSTA Virtual Reality Toolkit was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The ViSTA Virtual Reality Toolkit allows the integration of virtual reality (VR) technology and interactive, 3D visualisation into technical and scientific applications. The toolkit aims to enhance scientific applications with methods and techniques of VR and immersive visualization, thus enabling researchers from multiple disciplines to interactively analyse and explore their data in virtual environments. ViSTA is designed to work on multiple target platforms and operating systems, across various display devices (desktop workstations, powerwalls, tiled displays, CAVEs, etc.) and with various interaction devices.

Version 1.15 provides several new features compared to version 1.14, which was part of the HBP-internal Platform Release in M18. It is available on SourceForge: http://sourceforge.net/projects/vistavrtoolkit/

Date of release: February 20, 2013
Version of software: 1.15
Version of documentation: 1.15
Software available: http://sourceforge.net/projects/vistavrtoolkit/, https://github.com/HBPVIS/Vista
Documentation: Included in the library source code
Responsible: RWTH: Torsten Kuhlen (kuhlen@vr.rwth-aachen.de), Benjamin Weyers (weyers@vr.rwth-aachen.de)
Requirements & dependencies: Libraries: OpenSG, freeglut, GLEW. Operating systems: Windows / Linux. Compilers: Microsoft Visual Studio 2010 (cl16) or higher, gcc 4.4.7 or higher
Target system(s): High-fidelity visualization platforms, immersive visualization hardware, desktop computers

Equalizer

The development of Equalizer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Equalizer is a parallel rendering framework for creating and deploying parallel, scalable OpenGL applications. It provides the following major features to facilitate their development and deployment:

  • Runtime Configurability: An Equalizer application is configured automatically or manually at runtime and can be deployed on laptops, multi-GPU workstations and large-scale visualization clusters without recompilation.
  • Runtime Scalability: An Equalizer application can benefit from multiple graphics cards, processors and computers to scale rendering performance, visual quality and display size.
  • Distributed Execution: Equalizer applications can be written to support cluster-based execution. Equalizer uses the Collage network library, a cross-platform C++ library for building heterogeneous, distributed applications.

Support for Stereo and Immersive Environments: Equalizer supports stereo rendering, head tracking, head-mounted displays and other advanced features for immersive Virtual Reality installations.

Equalizer in an immersive environment
Date of release: 2007
Version of software: 1.8
Version of documentation: 1.8
Software available: https://github.com/Eyescale/Equalizer
Documentation: https://eyescale.github.io
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, OpenGL, Collage, hwsd, GLEW, Qt
Target system(s):

Deflect Client Library

The development of Deflect Client Library was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Deflect is a C++ library for developing applications that can send and receive pixel streams from other Deflect-based applications, for example DisplayCluster. The following applications, which make use of the streaming API, are provided (a minimal sending sketch follows the list below):

  • DesktopStreamer: A small utility that allows the user to stream the desktop.
  • SimpleStreamer: A simple example to demonstrate streaming of an OpenGL application.
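
As a rough sketch of how an application might push pixels through the streaming API: the names deflect::Stream, deflect::ImageWrapper, send() and finishFrame() are modelled on the SimpleStreamer example and may differ between Deflect releases, and the stream id and host below are hypothetical.

  #include <deflect/Stream.h>
  #include <vector>

  int main()
  {
      // Connect a named pixel stream to a display wall host (hypothetical values).
      deflect::Stream stream("example-stream", "displaywall.example.org");
      if (!stream.isConnected())
          return 1;

      // A dummy 64x64 all-white RGBA image standing in for real rendering output.
      const unsigned int width = 64, height = 64;
      std::vector<unsigned char> pixels(width * height * 4, 255);

      deflect::ImageWrapper image(pixels.data(), width, height, deflect::RGBA);
      stream.send(image);    // transmit the frame
      stream.finishFrame();  // mark the frame as complete for display
      return 0;
  }
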
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster, https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s):

RTNeuron

The development of RTNeuron in the HPAC Platform was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.


RTNeuron is a scalable real-time rendering tool for the visualisation of neuronal simulations based on cable models. Its main utility is twofold: the interactive visual inspection of structural and functional features of the cortical column model and the generation of high quality movies and images for presentations and publications. The package provides three main components:

  • A high level C++ library.
  • A Python module that wraps the C++ library and provides additional tools.
  • The Python application script rtneuron-app.py

A wide variety of scenarios is covered by rtneuron-app.py. In case the user needs finer control of the rendering, such as in movie production, or wants to speed up the exploration of different data sets, the Python wrapping is the way to go. It can be used through an IPython shell started directly from rtneuron-app.py or by importing the module rtneuron into the user's own Python programs. GUI overlays can be created for specific use cases using PyQt and QML.

RTNeuron is available on the pilot system JULIA and on JURECA as an environment module.

RTNeuron in the aixCAVE
Neuron rendered by RTNeuron
Visual representation of cell dyes
Simulation playback
Interactive circuit slicing
Connection browsing
Date of release: February 2018
Version of software: 2.13.0
Version of documentation: 2.13.0
Software available: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html; open sourcing scheduled for June 2018
Documentation: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html, https://www.youtube.com/watch?v=wATHwvRFGz0
Responsible: Samuel Lapere
Requirements & dependencies: BBP SDK, Boost, Equalizer, OpenSceneGraph, osgTransparency, Python, Qt, NumPy, OpenMP, VRPN, CUDA, ZeroEQ
Target system(s):

Remote Connection Manager (RCM)

The development of Remote Connection Manager (RCM) was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The Remote Connection Manager (RCM) is an application that allows HPC users to perform remote visualisation on Cineca HPC clusters.

The tool allows users to:

  • Visualize the data produced on Cineca’s HPC systems (scientific visualization);
  • Analyse and inspect data directly on the systems;
  • Debug and profile parallel codes running on the HPC clusters.

The graphical interface of RCM allows HPC users to easily create remote displays and to manage them (connect, kill, refresh).

Screenshot of Remote Connection Manager (RCM)
Date of release: April 2015
Version of software: 1.2
Version of documentation: 1.2
Software available: http://www.hpc.cineca.it/content/remote-visualization-rcm
Documentation: http://www.hpc.cineca.it/content/remote-visualization-rcm
Responsible: Roberto Mucci (superc@cineca.it)
Requirements & dependencies: Works on the following operating systems: Windows, Linux, Mac OSX (OSX Mountain Lion users need to install XQuartz: http://xquartz.macosforge.org/landing/)
Target system(s): Notebooks, office computers

Livre

The development of Livre was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Livre is an out-of-core rendering engine that has the following features:

  • Distributed rendering using the Equalizer parallel rendering framework
  • Octree-based out-of-core rendering
  • Visualisation of pre-processed UVF-format volume data sets
  • Real-time voxelisation and visualisation of surface meshes using OpenGL 4.2 extensions
  • Real-time voxelisation and visualisation of Blue Brain Project (BBP) morphologies
  • Real-time voxelisation and visualisation of local-field potentials in BBP circuits
  • Multi-node, multi-GPU rendering
Data rendered with Livre
Date of release: April 2015
Version of software: 0.3
Version of documentation: 0.3
Software available: https://github.com/BlueBrain/Livre
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: OpenMP, Tuvok, ZeroEQ, FlatBuffers, Boost, Equalizer, Collage, Lunchbox, dash, OpenGL, PNG, Qt
Target system(s):

DisplayCluster

The development of DisplayCluster was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


DisplayCluster is a software environment for interactively driving large-scale tiled displays. It provides the following functionality:

  • View media such as high-resolution imagery, PDFs and video interactively.
  • Receive content from remote sources such as laptops, desktops or parallel remote visualization machines using the Deflect library.
DisplayCluster on a mobile tiled display wall
DisplayCluster on a tiled display wall
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster, https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s): Tiled display walls