In data-intensive neuroscience, interactive data analysis workflows play a central role in the scientific endeavour. Visual analysis (VA) techniques are a key component of many of these workflows. VA applications have to address the specific use cases and requirements of the neuroscientists using them. To this end, user-centric development and close collaboration with users are essential from the earliest stages of development. In addition, basic shared functionality has to be made available in reusable frameworks, both to enable seamless integration into users’ workflows and to couple analysis tools directly to live data sources, such as large-scale simulations. We followed a two-fold approach to meet these challenges: on the one hand, we conceived various applications that target specific analysis scenarios; on the other hand, we implemented a number of software frameworks that integrate interactive and visual analysis applications into the hardware infrastructure. This two-fold approach allowed the creation of increasingly sophisticated interactive visual analysis tools that add real value for end users.

It is important to note that each development effort is driven by a specific neuroscientific use case of the HBP. The focus lies on the development of visualisation applications for investigating computationally intensive neural network simulations on the one hand, and massive image data created from brain scans on the other. Following the architecture presented in the figure on the right, software development focused on the framework and application layers. We developed visualisation applications that target the analysis of neural networks and simulation models, the activity data originating from those neural simulations, and the aforementioned large neuronal imaging data, such as 3D-Polarized Light Imaging (3D-PLI). To handle the large data sets emerging from running simulations, we are currently developing an in situ integration between live simulations and analysis methods (continuing in the next phase of the HBP). An event-driven messaging library is under development for the flexible linking of applications, with automatic discovery as well as streaming and steering capabilities. Furthermore, an ontology-supported integration system allows users to flexibly mix and merge multiple views into “coordinated multiple view systems” that are custom-tailored to their respective analyses. This enables neuroscientists to adapt their tools to the frequently changing requirements of data analysis workflows in their daily research work. To scale common visualisation algorithms to next-generation HPC systems, we investigated the use of the High Performance ParalleX (HPX) runtime as an execution layer for the widely supported VTK-m visualisation library. This holds the potential to scale VTK-m beyond the confines of a single shared-memory system.
Below, we give an overview of the various visualisation applications and libraries that form the major outcome of our work. We focus on components that have been actively developed during the past twelve months, although the development of all tools started in earlier phases of the HBP. Some of the activities are based on libraries developed outside this project, either as in-kind contributions or as open-source third-party projects. We see this presentation as an overview of major software components for the interactive visual analysis of simulation and image data, emerging from the HBP and beyond.
Software developed on the Application Layer
RTNeuron is a tool for interactive visualisation and media production of simulation results from detailed neuronal network simulations. It allows the visualisation of neurons and synapses, the playback of simulation data and the display of arbitrary user-provided geometry, and it offers advanced rendering capabilities such as order-independent transparency and parallel rendering. RTNeuron consists of a C++ library with the rendering back end, a Python wrapper and a Python application called rtneuron-app.py. GUI overlays can be created for specific use cases using PyQt and QML, and several such power applications have already been provided; during this period, an application for the interactive visualisation of hippocampus models was developed. Using Brion as its data access library, RTNeuron can access simulation data from key-value stores. RTNeuron is available on JURECA as an environment module.
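To illustrate how the Python wrapper is typically used, the following is a minimal, hypothetical session; the function and attribute names are assumptions in the style of the rtneuron wrapper and may differ between RTNeuron versions.

```python
# Hypothetical rtneuron session; exact API names are assumptions
# and may differ between RTNeuron versions.
import rtneuron

# Load a circuit configuration and display one cell target.
rtneuron.display_circuit('BlueConfig', 'MiniColumn_0')

# Tweak the view and jump to a point in the simulation playback.
view = rtneuron.engine.views[0]
view.attributes.background = [1.0, 1.0, 1.0]  # white background
rtneuron.engine.player.timestamp = 100.0      # playback time in ms
```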
MSPViz is a web-based visualisation tool for structural plasticity models. It uses a novel visualisation technique based on representing neuronal information at several levels of abstraction, with a set of representations at each level. This hierarchical representation lets users interact with and change the representation, modifying the degree of detail of the information to be analysed in a simple and intuitive way by navigating between views at different levels of abstraction. The representation designed for each view contains only the variables necessary for the tasks at hand, thus avoiding information overload. The multilevel structure and the design of the representations provide organised views, which facilitate visual analysis tasks. Moreover, each view has been enhanced with line and bar charts for analysing trends in simulation data, and filtering and sorting capabilities can be applied in each view to ease the analysis. Additional views, such as connectivity matrices and force-directed layouts, have been incorporated, enriching the existing views and improving the analysis process. Finally, the tool has been optimised to reduce rendering and data loading times, even from remote sources such as WebDAV servers.
NeuroLOTs is a set of libraries and tools for generating 3D meshes that approximate the anatomy of neurons and brain vasculature, and for visualising them at different levels of detail using GPU-based tessellation. As part of NeuroLOTs, NeuroTessMesh provides a visual environment for generating 3D polygonal meshes that approximate the membrane of neuronal cells, starting from the morphological tracings that describe the neuronal morphologies. The 3D models can be tessellated at different levels of detail, providing either a homogeneous or an adaptive resolution of the model. The soma shape is recovered from the incomplete information in the tracings by applying a physical deformation model that can be interactively adjusted. The adaptive refinement process, performed on the GPU, generates meshes of good visual quality at an affordable computational cost, both in terms of memory and rendering time. NeuroTessMesh is the front-end GUI to the NeuroLOTs framework.
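To make the refinement idea concrete, here is a toy CPU sketch of one homogeneous refinement step (midpoint subdivision); it is only an illustration of the principle, as the actual framework refines adaptively on the GPU.

```python
import numpy as np

def subdivide(vertices, triangles):
    """One homogeneous refinement step: split every triangle into four
    by inserting edge midpoints (toy CPU analogue of tessellation)."""
    vertices = [tuple(v) for v in vertices]
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = tuple((np.asarray(vertices[i]) + np.asarray(vertices[j])) / 2.0)
            midpoint_cache[key] = len(vertices)
            vertices.append(m)
        return midpoint_cache[key]

    refined = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.asarray(vertices), refined
```

Applied repeatedly to a coarse base mesh, this yields successively finer approximations; the adaptive GPU variant refines only where the geometry or viewpoint demands it.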
NeuroScheme is a tool for navigating through circuit data at different levels of abstraction, using schematic representations for fast and precise data interpretation. It also allows filtering, sorting and making selections at these different levels of abstraction. Furthermore, it can be coupled to realistic visualisations or other applications via the ZeroEQ event library, and it has been integrated into the Multi-View Framework; both are also developed in WP7.3. The application supports analyses based on side-by-side comparison using its multi-panel views, and it also provides focus-and-context. Its different layouts enable arranging data in various ways: grid, 3D, camera-based, scatterplot-based or circular. In addition, it provides editing capabilities to create a scene from scratch or to modify an existing one. Another part of the NeuroScheme framework is ViSimpl, a prototype developed for analysing simulation data using both abstract and schematic visualisations. This analysis can be carried out visually from temporal, spatial and structural perspectives, with the additional capability of exploring correlations between input patterns and the activity they produce.
NEST-simulated spatial-point-neuron data visualisation: Complementary to other viewer and visualisation implementations for NEST simulations, this component renders the activity and membrane potentials of a neural network simulated with NEST (figure on the left). A prototypical implementation exists that is based on VTK, the widely used Visualization Toolkit. This implementation can generally be run by computational neuroscientists on their workstations, imposing only moderate hardware requirements. Experiments using rendering on high-performance computing infrastructure were successful; the results indicate that this component is extensible towards large-scale simulations that require HPC resources and thus produce large output data. The high-fidelity rendering used in this case provides images of very high quality that may be suitable for publications (figure on the right).
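The following sketch shows the general approach, not the project's implementation: point neurons rendered as spheres coloured by membrane potential, using the VTK Python bindings. Positions and potentials are fabricated here; in practice they would come from the network layout and a NEST multimeter recording.

```python
import numpy as np
import vtk

# Stand-in data: neuron positions and membrane potentials (V_m in mV).
positions = np.random.uniform(-1.0, 1.0, size=(100, 3))
v_m = np.random.uniform(-80.0, -50.0, size=100)

points, scalars = vtk.vtkPoints(), vtk.vtkFloatArray()
for p, v in zip(positions, v_m):
    points.InsertNextPoint(*p)
    scalars.InsertNextValue(v)

polydata = vtk.vtkPolyData()
polydata.SetPoints(points)
polydata.GetPointData().SetScalars(scalars)

# Render each neuron as a sphere glyph; colour encodes V_m.
glyphs = vtk.vtkGlyph3D()
glyphs.SetInputData(polydata)
glyphs.SetSourceConnection(vtk.vtkSphereSource().GetOutputPort())
glyphs.SetScaleModeToDataScalingOff()  # uniform size, colour carries V_m

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(glyphs.GetOutputPort())
mapper.SetScalarRange(v_m.min(), v_m.max())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```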
PLIViewer: The study of the connectome investigates structural and functional connectivity in the brain. Structural connectivity refers to anatomical connections between brain areas, whereas functional connectivity describes the short-term, dynamic correlations between the neural activities of distinct brain structures. 3D-Polarized Light Imaging (3D-PLI) is a recent neuroimaging technique for studying the structural connectivity of the brain at unprecedented resolution, within the micrometre range. The major outputs of 3D-PLI are four scalar fields (transmittance, retardation, inclination and direction maps) and a vector field (fibre orientation maps), which depict the 3D spatial orientation of myelinated nerve fibres. The PLIViewer is visualisation software for interactively exploring these scalar and vector datasets; it provides additional methods to transform the data, revealing insights that are not accessible in the raw representations. The high resolution of 3D-PLI produces massive, terabyte-scale datasets, which makes visualisation challenging. The PLIViewer tackles this problem by providing functionality to select areas of interest from the dataset, along with options for downscaling. In addition, it makes it possible to interactively compute and visualise Orientation Distribution Functions (ODFs) and polar plots from the vector field, which reveal mesoscopic- and macroscopic-scale information from the microscopic dataset without significant loss of detail. Overall, the PLIViewer equips neuroscientists with the specialised visualisation tools they need to explore 3D-PLI datasets through direct and interactive visualisation of the data.
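As a toy illustration of the polar-plot idea, the following sketch bins the in-plane fibre directions of a region of interest into a polar histogram; it assumes a direction map stored as a 2D array of angles in radians, and the file name and slicing are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input: a 2D direction map (angles in radians).
direction_map = np.load('direction_map.npy')
region = direction_map[1000:1256, 2000:2256]  # an area of interest

# Fibre directions are axial, so fold angles into [0, pi).
bins = 72  # 2.5-degree bins over the half circle
hist, edges = np.histogram(region.ravel() % np.pi,
                           bins=bins, range=(0.0, np.pi))

ax = plt.subplot(projection='polar')
ax.bar(edges[:-1], hist, width=np.pi / bins, align='edge')
ax.set_title('Fibre direction distribution (region of interest)')
plt.show()
```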
Software developed on the Framework Layer
ZeroEQ is a cross-platform C++ library for implementing event-driven architectures using modern messaging. It provides publish-subscribe and request-reply messaging based on ZeroMQ, and it integrates REST APIs with JSON payloads into C++ applications via an optional HTTP server. The main intention of ZeroEQ is to allow applications to be linked using automatic discovery: multiple visualisation applications can be linked together, or simulators can be connected to analysis and visualisation codes to implement streaming and steering. An example of the former is the interoperability of NeuroScheme with RTNeuron; an example of the latter is the streaming and steering between NEST and RTNeuron. Both were reported previously, whereas the current extensions focus on the implementation of the request-reply interface.
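The publish-subscribe pattern that ZeroEQ builds on can be sketched in a few lines of plain pyzmq; this illustrates the underlying messaging pattern, not ZeroEQ's C++ API, and the topic name and payload are invented for the example.

```python
import json
import time
import zmq

context = zmq.Context()

# Subscriber side, e.g. a tool reacting to selection events.
sub = context.socket(zmq.SUB)
sub.connect('tcp://localhost:5556')
sub.setsockopt(zmq.SUBSCRIBE, b'selection')

# Publisher side, e.g. a visualisation application announcing a selection.
pub = context.socket(zmq.PUB)
pub.bind('tcp://*:5556')
time.sleep(0.2)  # give the subscription time to propagate
pub.send_multipart([b'selection', json.dumps({'ids': [17, 42]}).encode()])

topic, payload = sub.recv_multipart()
print(topic.decode(), json.loads(payload))
```

ZeroEQ adds automatic peer discovery on top of this pattern, so that applications find each other without hard-coded endpoints.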
VTK-m is a scientific visualisation and analysis framework that offers a wealth of building blocks for creating visualisation and analysis applications. VTK-m facilitates scaling those applications to massively parallel shared-memory systems and, owing to its architecture, will most likely also run efficiently on future platforms. HPX is a task-based programming model; as such, it simplifies the formulation of well-scaling, highly parallel algorithms. Integrating this programming model into VTK-m streamlines the formulation of its parallel building blocks and thus makes their deployment on current and emerging HPC platforms more efficient. Since neuroscientific applications require ever more compute power and memory, harnessing the available resources will become a challenge in itself. By combining VTK-m and HPX into task-based analysis and visualisation, we expect to provide suitable tools to face this challenge effectively and to facilitate building sophisticated interactive visual analysis tools tailored to neuroscientists’ needs.
For this purpose, the parallel primitive algorithms required by VTK-m have been added to HPX, along with API support to enable the full range of visualisation algorithms developed for VTK-m. Furthermore, a new scheduler has been developed that accepts core/NUMA placement hints from the programmer, so that cache reuse can be maximised and traffic between sockets minimised. High-performance tasks that access data shared by the application and the visualisation can use this capability to improve performance. Additionally, the thread pool management was improved to allow visualisation tasks, communication tasks and application tasks to execute on different cores if necessary, which reduces latency between components and improves the overall throughput of the distributed application. Finally, RDMA primitives have been added to the HPX messaging layer. These improvements make it possible to scale HPX applications to very high node/core counts; tests have been run successfully on 10,000 nodes using 650,000 cores.
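To give a flavour of the parallel primitives in question, the toy sketch below expresses stream compaction (keeping only selected elements) as a map, an exclusive scan and a scatter with numpy; VTK-m composes its visualisation algorithms from exactly this kind of primitive, which the HPX work provides as parallel tasks in C++.

```python
import numpy as np

def compact(values, predicate):
    """Stream compaction via map + exclusive scan + scatter
    (serial toy; each step is trivially parallelisable)."""
    flags = predicate(values).astype(np.int64)  # map: 1 where kept
    offsets = np.cumsum(flags) - flags          # exclusive scan -> output slots
    keep = flags.astype(bool)
    out = np.empty(int(flags.sum()), dtype=values.dtype)
    out[offsets[keep]] = values[keep]           # scatter kept values
    return out

print(compact(np.arange(10), lambda v: v % 3 == 0))  # -> [0 3 6 9]
```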
The NEST in situ framework developed by RWTH facilitates the visualisation and analysis of the output data of a NEST simulation while the simulation is still running (figure on the left). For this purpose, membrane potentials, spikes and other data are streamed from the simulation. The framework builds on top of Conduit, a well-established in situ library, for reasons of compatibility and extensibility. It consists of a well-tested, compact C++ library that is linked into the NEST simulator to provide the streaming capabilities; it can additionally be linked into consumer applications for visualisation and analysis. Python bindings for consumer applications are also provided, making the framework more accessible to computational neuroscientists who are familiar with NEST and Python. An as yet small set of demos provides usage examples (figure on the right).
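A minimal sketch of the data exchange, using Conduit's Python bindings: a producer packs one simulation step into a hierarchical Conduit node, and a consumer reads the same tree. The field names and layout here are illustrative assumptions, not the framework's actual schema.

```python
import numpy as np
import conduit

# Producer side: pack one simulation step into a Conduit node
# (hypothetical schema for illustration).
node = conduit.Node()
node['nest/step'] = 42
node['nest/time'] = 4.2  # simulation time in ms
node['nest/neurons/gids'] = np.arange(1, 101, dtype=np.int64)
node['nest/neurons/V_m'] = np.random.uniform(-80.0, -50.0, 100)

# Consumer side (visualisation/analysis): read from the same tree.
print(node['nest/time'], node['nest/neurons/V_m'].mean())
```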
The Multi-View Framework is a software component that offers functionality to combine various visual representations (views) of one or more data sets in a coordinated fashion. Coordination of multiple views here refers to the communication of interaction events between such views. For instance, if the scientist selects a data item in one view, the framework communicates this selection to all connected views so that the item is highlighted there accordingly. As the underlying software layer is generic, such a network can include not only software components offering visualisation capabilities, but also components offering other functionality, such as statistical analysis (e.g. Elephant). Furthermore, the framework can address multi-display scenarios, since coordination information can be distributed over the network between view instances running on different machines.

The framework is composed of three libraries: nett, nett-python and nett-connect. nett implements a light-weight messaging layer enabling the communication between views, whereas nett-python implements a Python binding for nett, which enables the integration of Python-based software components (such as Elephant) into a multi-view setup. nett-connect adds functionality on top of this basic communication layer that enables non-experts to create multi-view setups according to their specific needs and workflows. To this end, it offers a graphical user interface with which the scientist can select a view, start it up and connect it with other, already running views. Behind the scenes, this tool uses an ontology-based description of the various views/services provided, so that the system can validate the connections created by the user and suggest matching visual representations to be coordinated for the analysis process. Once created, setups can be stored, reused and later adapted or extended. The framework has been used in various use cases: we tested it in a scenario using Elephant as the data analysis component, with multiple views for steering a NEST simulation to investigate neural plasticity (publication submitted), for the comparative analysis of NEST simulations, and in multi-device, multi-user scenarios for collaborative work.
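To illustrate the kind of coordination message exchanged between views, here is a hypothetical selection event expressed as a small Python data class; this is an invented schema for illustration, not nett's actual wire format.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class SelectionEvent:
    """Hypothetical coordination message: a selection made in one view,
    to be mirrored by all connected views."""
    source_view: str
    item_ids: list = field(default_factory=list)

event = SelectionEvent(source_view='connectivity-matrix', item_ids=[17, 42])
message = json.dumps(asdict(event))  # serialised, then sent over the messaging layer
print(message)
```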

The impact of our work
This work made two major contributions to the overall project. First, we implemented and provided several interactive visualisation applications (application layer) for a variety of use cases that emerged from close collaboration within the HBP. We initiated new collaborations, intensified existing ones, and supported neuroscientists’ use-case-dependent needs and requirements for visualisation, which also resulted in joint publications. Furthermore, most tools have reached a high level of maturity and are ready for deployment on a larger scale in the project, which is planned for the next project phase.
Second, we continued the development of frameworks for various integration aspects (framework layer), such as the in situ visualisation of large simulation and image data (to be continued over the next years), the steering of neural simulations, and the integration and coordination of multiple views beyond individual analysis applications. We drove the framework development guided by various neuroscientific use cases emerging from the project, which enabled the co-design of analysis workflows and resulted in joint publications.