In Situ Pipeline

This is the newer, more general version of the NEST in situ framework.

The in situ pipeline consists of a set of libraries that can be integrated into neuronal network simulators developed by the HBP to enable live visual analysis during the runtime of a simulation. The library ‘nesci’ (neuronal simulator conduit interface) stores the raw simulation data in a common Conduit format, and the library ‘contra’ (conduit transport) transports the serialized data from one endpoint to another using a variety of (network) protocols. The pipeline currently works with NEST and Arbor; support for TVB is in development.
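
To illustrate the division of labour, the sketch below packs sample spike data into a Conduit node (the role nesci plays) and publishes the serialized result over ZeroMQ, one of the optional transports listed below (the role contra plays). It uses the documented Conduit and pyzmq Python APIs, but the data layout, endpoint and JSON serialization are illustrative choices, not the pipeline's actual schema or interface.

    # Conceptual sketch of the nesci/contra split, not the libraries' actual APIs.
    import conduit   # Conduit's Python bindings
    import numpy as np
    import zmq       # pyzmq; ZeroMQ is an optional contra transport

    # "nesci" role: store raw simulation data in a common Conduit format.
    node = conduit.Node()
    node["spikes/times"] = np.array([10.1, 12.4, 15.0])  # ms, made-up sample data
    node["spikes/neuron_ids"] = np.array([3, 17, 3])

    # "contra" role: transport the serialized data to another endpoint.
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.bind("tcp://*:5555")         # endpoint chosen for illustration
    socket.send_string(node.to_json())  # JSON for readability; a real pipeline
                                        # would favour a binary representation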

A prototypical implementation in the HPAC Platform was finalised in February 2019.

Date of release: First released in July 2018, with continuous updates (see also above)
Version of software: 18.07
Version of documentation: 18.07
Software available: https://devhub.vr.rwth-aachen.de/VR-Group/contra, https://devhub.vr.rwth-aachen.de/VR-Group/nesci, https://devhub.vr.rwth-aachen.de/VR-Group/nest-streaming-module
Documentation: See the README files in the repositories
Responsible: RWTH: Simon Oehrl (oehrl@vr.rwth-aachen.de)
Requirements & dependencies: Required: CMake, C++14, Conduit; Optional: Python, Boost, ZeroMQ
Target system(s): Desktops/HPC systems running Linux, macOS or Windows

Multi-View Framework

The Multi-View Framework is a software component that offers functionality to combine various visual representations of one or more data sets in a coordinated fashion. Software components offering visualization capabilities can be included in such a network, as can software components offering other functionality, such as statistical analysis. The framework also addresses multi-display scenarios, since coordination information can be distributed over the network between view instances running on distributed machines.

The framework is composed of three libraries: nett, nett-python and nett-connect. nett implements a lightweight underlying messaging layer that enables communication between views; nett-python implements a Python binding for nett, enabling the integration of Python-based software components into a multi-view setup; and nett-connect adds functionality on top of this basic communication layer that enables non-experts to create multi-view setups according to their specific needs and workflows.
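
nett's actual API is not shown on this page; the self-contained sketch below only illustrates the kind of coordination such a messaging layer provides, with pyzmq standing in for nett: one view publishes a selection, and any subscribed view (possibly on another machine) applies it.

    # Illustrative only: two "views" coordinating a neuron selection over a
    # lightweight message bus; pyzmq stands in for the nett messaging layer.
    import json
    import time
    import zmq

    context = zmq.Context()

    pub = context.socket(zmq.PUB)        # view A: publishes its selection
    pub.bind("tcp://*:6000")

    sub = context.socket(zmq.SUB)        # view B: mirrors the selection
    sub.connect("tcp://localhost:6000")
    sub.setsockopt_string(zmq.SUBSCRIBE, "selection")
    time.sleep(0.2)                      # give the subscription time to join

    pub.send_string("selection " + json.dumps({"neuron_ids": [4, 8, 15]}))
    topic, payload = sub.recv_string().split(" ", 1)
    print(topic, json.loads(payload))    # every view applies the same selection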

Interactive optimization of parameters for structural plasticity in neural network models (top left); comparative analysis of NEST simulations (top right); statistical analysis of NEST simulations (bottom left); multi-device and multi-user scenarios (bottom right)
Date of release: 2017
Version of software: N/A
Version of documentation: N/A
Software available: Please contact the developers
Documentation: https://devhub.vr.rwth-aachen.de/cnowke/nett-connect
Responsible: U Trier: Benjamin Weyers (weyers@uni-trier.de)
Requirements & dependencies:
Target system(s):

Monsteer

The development of Monsteer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Monsteer is a library for Interactive Supercomputing in the neuroscience domain. It facilitates the coupling of running simulations (currently NEST) with interactive visualization and analysis applications. Monsteer supports streaming of simulation data to clients (currently only spikes) as well as control of the simulator from the clients (also known as computational steering). Monsteer’s main components are a C++ library, a MUSIC-based application and Python helpers.
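
Monsteer's own classes are documented in the repository; the self-contained sketch below merely illustrates the steering pattern it implements, with plain Python queues standing in for the MUSIC/network transport and all names invented for the example.

    # Conceptual sketch of computational steering: a simulator streams spike
    # events to a client while accepting control messages from it. Queues
    # stand in for Monsteer's MUSIC/network transport; names are illustrative.
    import queue
    import threading

    spike_stream = queue.Queue()    # simulator -> client (spike reports)
    steering_cmds = queue.Queue()   # client -> simulator (control messages)

    def simulator():
        rate = 1.0
        for t in range(5):
            try:
                rate = steering_cmds.get_nowait()["stimulus_rate"]  # apply steering
            except queue.Empty:
                pass
            spike_stream.put({"t": t, "gid": 42, "rate": rate})
        spike_stream.put(None)      # end of stream

    def client():
        steering_cmds.put({"stimulus_rate": 2.5})    # steer the running simulation
        while (event := spike_stream.get()) is not None:
            print("spike event:", event)             # a real client would visualize

    sim = threading.Thread(target=simulator)
    cli = threading.Thread(target=client)
    sim.start(); cli.start(); sim.join(); cli.join()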

Date of release: July 2015
Version of software: 0.2.0
Version of documentation: 0.2.0
Software available: https://github.com/BlueBrain/Monsteer
Documentation: http://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Minimum configuration to configure (using CMake), compile and run Monsteer: a Linux box, GCC 4.8+, CMake 2.8+, Boost 1.54, MPI (OpenMPI, MVAPICH2, etc.), NEST simulator 2.4.2, MUSIC 1.0.7, Python 2.6. See also: http://bluebrain.github.io/Monsteer-0.3/_user__guide.html#Compilation
Target system(s): Linux computer

MSPViz

MSPViz is a visualization tool for the Model of Structural Plasticity. It uses a visualisation technique based on representing neuronal information through abstraction levels, with a set of schematic representations at each level. The multilevel structure and the design of the representations constitute an approach that provides organized views and facilitates visual analysis tasks.

Each view has been enhanced with line and bar charts to analyse trends in the simulation data. Filtering and sorting capabilities can be applied in each view to ease the analysis. Other views, such as connectivity matrices and force-directed layouts, have been incorporated, enriching the existing views and improving the analysis process. The tool has been optimised to reduce rendering and data-loading times, even from remote sources such as WebDAV servers.

Screenshots of MSPViz, showing views for investigating structural plasticity models at different levels of abstraction: the connectivity of a single neuron and the full network connectivity
Date of release: March 2018
Version of software: 0.2.6
Version of documentation: 0.2.6 (for users)
Software available: http://gmrv.es/mspviz
Documentation: Self-contained in the application
Responsible: UPM: Juan Pedro Brito (juanpedro.brito@upm.es)
Requirements & dependencies: Self-contained code
Target system(s): Platform independent

VIOLA

The development of VIOLA was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


VIOLA (VIsualizer Of Layer Activity) is a tool to visualize activity in multiple 2D layers in an interactive and efficient way. It gives insight into spatially resolved time series, such as the simulation results of neural networks with 2D geometry. The usage example shows how VIOLA can be used to visualize spike data from a NEST simulation (http://nest-simulator.org/) of an excitatory and an inhibitory neuron population with distance-dependent connectivity.
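
As a point of reference, a minimal NEST 2.x script along the following lines produces the kind of spike data VIOLA displays. The plain-text output layout at the end is an assumption for illustration; consult the VIOLA wiki for the exact input format the tool expects.

    # Minimal NEST 2.x sketch producing spike data of the kind VIOLA visualizes.
    import nest

    nest.ResetKernel()
    neurons = nest.Create("iaf_psc_alpha", 100)
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})
    detector = nest.Create("spike_detector")

    nest.Connect(noise, neurons)
    nest.Connect(neurons, detector)
    nest.Simulate(1000.0)  # ms

    events = nest.GetStatus(detector, "events")[0]
    with open("spikes.dat", "w") as f:     # illustrative layout, not VIOLA's
        for gid, t in zip(events["senders"], events["times"]):
            f.write("{} {}\n".format(gid, t))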

Date of release: January 2016
Version of software: Not yet specified
Version of documentation: Not yet specified
Software available: https://github.com/HBPVIS/VIOLA
Documentation: https://github.com/HBPVIS/VIOLA/wiki
Responsible: U Trier: Benjamin Weyers (weyers@uni-trier.de); Forschungszentrum Jülich: Espen Hagen (e.hagen@fz-juelich.de), Johanna Senk (j.senk@fz-juelich.de)
Requirements & dependencies: Three.js
Target system(s): Web browser (Google Chrome)

NeuroScheme

NeuroScheme is a tool that allows users to navigate through circuit data at different levels of abstraction, using schematic representations for a fast and precise interpretation of the data. It also allows filtering, sorting and selections at the different levels of abstraction. Finally, it can be coupled with realistic visualization or other applications using the ZeroEQ event library developed in WP 7.3.

This application allows analyses based on a side-by-side comparison using its multi-panel views, and it also provides focus-and-context visualization. Its different layouts enable arranging data in different ways: grid, 3D, camera-based, scatterplot-based or circular. It also provides editing capabilities to create a scene from scratch or to modify an existing one.

ViSimpl, part of the NeuroScheme framework, is a prototype developed to analyse simulation data using both abstract and schematic visualisations. The analysis can be carried out visually from temporal, spatial and structural perspectives, with the additional capability of exploring correlations between input patterns and the produced activity.

Screenshots of NeuroScheme: overview of various neurons
User interface of ViSimpl visualising activity data emerging from a simulation of a neural network model
Date of release: March 2018
Version of software: 0.2
Version of documentation: 0.2
Software available: https://github.com/gmrvvis/NeuroScheme
Documentation: https://github.com/gmrvvis/NeuroScheme, http://gmrv.es/gmrvvis
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Qt4, nsol; Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software); Supported OS: Windows 7, Windows 8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): Desktop computers, notebooks, tablets

NeuroLOTs

NeuroLOTs is a set of tools and libraries for creating neuronal meshes from a minimal skeletal description. It generates soma meshes using FEM-based deformation and allows the tessellation level to be adapted interactively according to different criteria (user-defined, camera distance, etc.).

NeuroTessMesh provides a visual environment for the generation of 3D polygonal meshes that approximate the membrane of neuronal cells, starting from the morphological tracings that describe neuronal morphologies. The 3D models can be tessellated at different levels of detail, providing either a homogeneous or an adaptive resolution of the model. The soma shape is recovered from the incomplete information of the tracings by applying a physical deformation model that can be interactively adjusted. The adaptive refinement process, performed on the GPU, generates meshes that offer good visual quality at an affordable computational cost, both in terms of memory and rendering time. NeuroTessMesh is the front-end GUI to the NeuroLOTs framework.
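
The camera-distance criterion can be pictured with the generic sketch below. It is not NeuroLOTs code, just an illustration of dropping the subdivision level as the camera moves away from the mesh; all names and thresholds are invented for the example.

    # Generic distance-based level-of-detail selection, illustrating the kind
    # of criterion NeuroTessMesh exposes; not code from the NeuroLOTs framework.
    def tessellation_level(camera_pos, mesh_center, max_level=6, full_detail_at=10.0):
        """Return a subdivision level that drops by one per doubling of distance."""
        dx, dy, dz = (c - m for c, m in zip(camera_pos, mesh_center))
        distance = (dx * dx + dy * dy + dz * dz) ** 0.5
        level = max_level
        while distance > full_detail_at and level > 0:
            distance /= 2.0
            level -= 1
        return level

    print(tessellation_level((0, 0, 5), (0, 0, 0)))    # close camera -> level 6
    print(tessellation_level((0, 0, 640), (0, 0, 0)))  # distant camera -> level 0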

Related publication:
Garcia-Cantero et al. (2017) Front. Neuroinform. DOI: https://dx.doi.org/10.3389/fninf.2017.00038
Screenshots of NeuroLOTs

Date of release: NeuroLOTs 0.2.0, March 2018; NeuroTessMesh 0.0.1, March 2018
Version of software: NeuroLOTs 0.2.0; NeuroTessMesh 0.0.1
Version of documentation: NeuroLOTs 0.2.0; NeuroTessMesh 0.0.1
Software available: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh
Documentation: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh, https://gmrvvis.github.io/doc/neurolots/, https://github.com/gmrvvis/neurolots/blob/master/README.md, http://gmrv.es/neurotessmesh/NeuroTessMeshUserManual.pdf, http://gmrv.es/gmrvvis/neurolots/
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Eigen3, OpenGL (>= 4.0), GLEW, GLUT, nsol; Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software); Supported OS: Windows 7/8.1, GNU/Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): High-fidelity displays, desktop computers, notebooks

VisNEST

The development of VisNEST was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


VisNEST is a tool for visualizing neural network simulations of the macaque visual cortex. It allows for exploring mean activity rates, the connectivity of brain areas and the information exchange between pairs of areas. In addition, it allows exploration of the individual populations of each brain area and of the connectivity used for the simulation.

Screenshot of VisNEST
Date of release: Information available on demand
Version of software: Information available on demand
Version of documentation: Information available on demand
Software available: Not publicly available yet; please contact the developers in case of interest
Documentation: Reference paper: Nowke, Christian, Maximilian Schmidt, Sacha J. van Albada, Jochen M. Eppler, Rembrandt Bakker, Markus Diesmann, Bernd Hentschel, and Torsten Kuhlen. "VisNEST – Interactive analysis of neural activity data." In Biological Data Visualization (BioVis), 2013 IEEE Symposium on, pp. 65-72. IEEE, 2013.
Responsible: RWTH Aachen: Benjamin Weyers (weyers@vr.rwth-aachen.de) and Torsten Kuhlen (kuhlen@vr.rwth-aachen.de)
Requirements & dependencies: ViSTA, Boost, ZeroMQ, HDF5
Target system(s): High-fidelity visualization platforms, immersive visualization hardware, desktop computers

ViSTA Virtual Reality Toolkit

The development of ViSTA Virtual Reality Toolkit was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The ViSTA Virtual Reality Toolkit allows the integration of virtual reality (VR) technology and interactive, 3D visualisation into technical and scientific applications. The toolkit aims to enhance scientific applications with methods and techniques of VR and immersive visualization, thus enabling researchers from multiple disciplines to interactively analyse and explore their data in virtual environments. ViSTA is designed to work on multiple target platforms and operating systems, across various display devices (desktop workstations, powerwalls, tiled displays, CAVEs, etc.) and with various interaction devices.

Version 1.15 provides a number of new features compared to version 1.14, which was part of the HBP-internal Platform Release in M18. The toolkit is available on SourceForge: http://sourceforge.net/projects/vistavrtoolkit/

Date of release: February 20, 2013
Version of software: 1.15
Version of documentation: 1.15
Software available: http://sourceforge.net/projects/vistavrtoolkit/, https://github.com/HBPVIS/Vista
Documentation: Included in the library source code
Responsible: RWTH: Torsten Kuhlen (kuhlen@vr.rwth-aachen.de), Benjamin Weyers (weyers@vr.rwth-aachen.de)
Requirements & dependencies: Libraries: OpenSG, freeglut, GLEW; Operating systems: Windows/Linux; Compilers: Microsoft Visual Studio 2010 (cl16) or higher, GCC 4.4.7 or higher
Target system(s): High-fidelity visualization platforms, immersive visualization hardware, desktop computers

Equalizer

The development of Equalizer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Equalizer is a parallel rendering framework to create and deploy parallel, scalable OpenGL applications. It provides the following major features to facilitate their development and deployment:

  • Runtime Configurability: An Equalizer application is configured automatically or manually at runtime and can be deployed on laptops, multi-GPU workstations and large-scale visualization clusters without recompilation.
  • Runtime Scalability: An Equalizer application can benefit from multiple graphics cards, processors and computers to scale rendering performance, visual quality and display size.
  • Distributed Execution: Equalizer applications can be written to support cluster-based execution. Equalizer uses the Collage network library, a cross-platform C++ library for building heterogeneous, distributed applications.

  • Support for Stereo and Immersive Environments: Equalizer supports stereo rendering, head tracking, head-mounted displays and other advanced features for immersive Virtual Reality installations.
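
One well-known way Equalizer scales rendering is sort-first (2D) decomposition: each GPU or node renders only a region of the viewport and the partial images are composited. The sketch below illustrates that idea generically; it is not Equalizer's C++ API, and all names are invented for the example.

    # Generic sketch of sort-first (2D) screen decomposition, one of the
    # scalability modes Equalizer provides; not Equalizer's actual C++ API.
    def split_viewport(width, height, n_renderers):
        """Split the window into vertical stripes, one per renderer (GPU/node)."""
        stripe = width // n_renderers
        tiles = []
        for i in range(n_renderers):
            x = i * stripe
            w = stripe if i < n_renderers - 1 else width - x  # last takes remainder
            tiles.append({"renderer": i, "x": x, "y": 0, "w": w, "h": height})
        return tiles

    for tile in split_viewport(1920, 1080, 3):
        print(tile)  # each renderer draws only its stripe; results are composited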

Equalizer in an immersive environment
Date of release: 2007
Version of software: 1.8
Version of documentation: 1.8
Software available: https://github.com/Eyescale/Equalizer
Documentation: https://eyescale.github.io
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, OpenGL, Collage, hwsd, GLEW, Qt
Target system(s):

Deflect Client Library

The development of Deflect Client Library was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Deflect is a C++ library to develop applications that can send and receive pixel streams from other Deflect-based applications, for example DisplayCluster. The following applications, which make use of the streaming API, are provided (a conceptual sketch of the streaming idea follows the list):

  • DesktopStreamer: A small utility that allows the user to stream the desktop.
  • SimpleStreamer: A simple example to demonstrate streaming of an OpenGL application.
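
The streaming pattern is simple in outline: grab a frame, prefix it with its dimensions, and push it to the display host. The self-contained sketch below illustrates only that pattern; the socket use and framing are invented for the example and are not Deflect's wire protocol.

    # Conceptual pixel-stream sender in the spirit of DesktopStreamer; the
    # framing below is invented for illustration, not Deflect's protocol.
    import socket
    import struct

    def send_frame(sock, width, height, rgba_bytes):
        header = struct.pack("!III", width, height, len(rgba_bytes))
        sock.sendall(header + rgba_bytes)      # length-prefixed frame

    receiver, sender = socket.socketpair()     # stands in for a DisplayCluster host
    frame = bytes(64 * 64 * 4)                 # dummy 64x64 RGBA frame
    send_frame(sender, 64, 64, frame)

    w, h, n = struct.unpack("!III", receiver.recv(12))
    print(w, h, n)  # 64 64 16384; a real receiver would loop until fully read
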
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster, https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s):

RTNeuron

RTNeuron is a scalable real-time rendering tool for the visualisation of neuronal simulations based on cable models. Its main utility is twofold: the interactive visual inspection of structural and functional features of the cortical column model, and the generation of high-quality movies and images for presentations and publications. The package provides three main components:

  • A high-level C++ library.
  • A Python module that wraps the C++ library and provides additional tools.
  • The Python application script rtneuron-app.py.

A wide variety of scenarios is covered by rtneuron-app.py. In case the user needs finer control of the rendering, such as in movie production or to speed up the exploration of different data sets, the Python wrapping is the way to go. It can be used through an IPython shell started directly from rtneuron-app.py, or by importing the module rtneuron into one's own Python programs. GUI overlays can be created for specific use cases using PyQt and QML.
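
A minimal interactive session might look like the sketch below. The entry points follow the linked 2.11 documentation but should be treated as assumptions rather than verified signatures; consult that documentation for the actual API.

    # Hedged sketch of driving RTNeuron from Python/IPython; names follow the
    # linked documentation but are assumptions here, not verified signatures.
    import rtneuron

    # Assumed helper: load a circuit and display a cell target interactively.
    rtneuron.display_circuit("BlueConfig", "MiniColumn_0")

    view = rtneuron.engine.views[0]            # assumed handle to the active view
    view.attributes.background = [1, 1, 1, 1]  # e.g. white background for figures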

RTNeuron is available on the pilot system JULIA and on JURECA as an environment module.

Screenshots: RTNeuron in the aixCAVE; a neuron rendered by RTNeuron; visual representation of cell dyes; simulation playback; interactive circuit slicing; connection browsing
Date of release: February 2018
Version of software: 2.13.0
Version of documentation: 2.13.0
Software available: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html; open sourcing scheduled for June 2018
Documentation: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html, https://www.youtube.com/watch?v=wATHwvRFGz0
Responsible: Samuel Lapere
Requirements & dependencies: BBP SDK, Boost, Equalizer, OpenSceneGraph, osgTransparency, Python, Qt, NumPy, OpenMP, VRPN, CUDA, ZeroEQ
Target system(s):

Remote Connection Manager (RCM)

The development of Remote Connection Manager (RCM) was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The Remote Connection Manager (RCM) is an application that allows HPC users to perform remote visualisation on Cineca HPC clusters.

The tool makes it possible to:

  • Visualize the data produced on Cineca’s HPC systems (scientific visualization);
  • Analyse and inspect data directly on the systems;
  • Debug and profile parallel codes running on the HPC clusters.

The graphical interface of RCM allows HPC users to easily create and manage remote displays (connect, kill, refresh).

Screenshots of Remote Connection Manager (RCM)
Date of release: April 2015
Version of software: 1.2
Version of documentation: 1.2
Software available: http://www.hpc.cineca.it/content/remote-visualization-rcm
Documentation: http://www.hpc.cineca.it/content/remote-visualization-rcm
Responsible: Roberto Mucci (superc@cineca.it)
Requirements & dependencies: RCM works on Windows, Linux and Mac OS X (OS X Mountain Lion users need to install XQuartz: http://xquartz.macosforge.org/landing/)
Target system(s): Notebooks, office computers

Livre

The development of Livre was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Livre is an out-of-core rendering engine that has the following features (a conceptual sketch of the octree idea follows the list):

  • Distributed rendering using the Equalizer parallel rendering framework
  • Octree-based out-of-core rendering
  • Visualisation of pre-processed UVF-format volume data sets
  • Real-time voxelisation and visualisation of surface meshes using OpenGL 4.2 extensions
  • Real-time voxelisation and visualisation of Blue Brain Project (BBP) morphologies
  • Real-time voxelisation and visualisation of local field potentials in BBP circuits
  • Multi-node, multi-GPU rendering
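
The octree traversal behind out-of-core rendering can be pictured with this generic sketch: a node is refined, and its children loaded, only while its projected voxel size on screen is still too coarse. It is illustrative only, not Livre's data model or API.

    # Generic sketch of octree-based level selection for out-of-core volume
    # rendering; illustrative only, not Livre's data model or API.
    def select_nodes(node, camera_distance, pixel_threshold=1.0, level=0, max_level=4):
        """Return (level, node) pairs to load, refined by screen-space size."""
        voxel_size = 1.0 / (2 ** level)                 # voxels shrink per level
        projected = voxel_size / max(camera_distance, 1e-6)
        if projected <= pixel_threshold or level == max_level:
            return [(level, node)]                      # fine enough: render it
        selected = []
        for child in range(8):                          # descend into 8 octants
            selected += select_nodes((node, child), camera_distance,
                                     pixel_threshold, level + 1, max_level)
        return selected

    print(len(select_nodes("root", camera_distance=0.2)))  # nearer camera -> more nodes
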
Data rendered with Livre
Date of release: April 2015
Version of software: 0.3
Version of documentation: 0.3
Software available: https://github.com/BlueBrain/Livre
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: OpenMP, Tuvok, ZeroEQ, FlatBuffers, Boost, Equalizer, Collage, Lunchbox, dash, OpenGL, PNG, Qt
Target system(s):

DisplayCluster

The development of DisplayCluster was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


DisplayCluster is a software environment for interactively driving large-scale tiled displays. It provides the following functionality:

  • Interactively view media such as high-resolution imagery, PDFs and video.
  • Receive content from remote sources such as laptops, desktops or parallel remote visualization machines using the Deflect library.
DisplayCluster on a mobile tiled display wall
DisplayCluster on a tiled display wall
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster, https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s): Tiled display walls