Programming models, libraries and toolkits.

In Situ Pipeline

This is the newer, more general version of the NEST in situ framework.

The in situ pipeline consists of a set of libraries that can be integrated into neuronal network simulators developed by the HBP to enable live visual analysis while a simulation is running. The library ‘nesci’ (neuronal simulator conduit interface) stores the raw simulation data in a common Conduit format, and the library ‘contra’ (conduit transport) transports the serialized data from one endpoint to another using a variety of (network) protocols. The pipeline currently works with NEST and Arbor; support for TVB is in development.
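
The first pipeline stage can be pictured as follows. This is a minimal sketch using the Conduit C++ API; the node layout shown here is an assumption for illustration, not the actual schema used by nesci.

  #include <conduit/conduit.hpp>
  #include <vector>

  int main()
  {
      // Hypothetical layout of one recording step: spike times plus the
      // ids of the neurons that emitted them.
      std::vector<double> spike_times = {10.1, 10.4, 11.0};
      std::vector<conduit::int64> neuron_ids = {3, 17, 42};

      // nesci packs raw simulation output into a Conduit tree like this;
      // contra then serializes the node and ships it over one of its
      // (network) protocols to the visualization endpoint.
      conduit::Node node;
      node["spikes/times"].set(spike_times);
      node["spikes/neuron_ids"].set(neuron_ids);

      node.print(); // inspect the tree locally
      return 0;
  }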

A prototypical integration into the HPAC Platform was finalised in February 2019.

Date of release: First released in July 2018, with continuous updates (see also above)
Version of software: 18.07
Version of documentation: 18.07
Software available: https://devhub.vr.rwth-aachen.de/VR-Group/contra
https://devhub.vr.rwth-aachen.de/VR-Group/nesci
https://devhub.vr.rwth-aachen.de/VR-Group/nest-streaming-module
Documentation: See the readme files in the repositories
Responsible: RWTH: Simon Oehrl (oehrl@vr.rwth-aachen.de)
Requirements & dependencies: Required: CMake, C++14, Conduit. Optional: Python, Boost, ZeroMQ
Target system(s): Desktops/HPC systems running Linux, macOS or Windows

ZeroBuf

The development of ZeroBuf was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


ZeroBuf implements zero-copy, zero-serialize, zero-hassle protocol buffers. It is a replacement for FlatBuffers, adding the following capabilities that FlatBuffers lacks:

  • Direct get and set functionality on the defined data members
  • A single memory buffer storing all data members, which is directly serializable
  • Usable, random read and write access to the data members
  • Zero copy of the data used by the (C++) implementation from and to the network
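
The zero-copy idea can be sketched as follows. This is not ZeroBuf's actual API; the hand-written class below is a stand-in for what ZeroBuf generates from a schema. All members live in one contiguous buffer, so accessors read and write in place and the whole object is its own wire format.

  #include <array>
  #include <cstdint>
  #include <cstring>

  // Hypothetical stand-in for a schema-generated ZeroBuf class.
  class Vector3Event
  {
  public:
      // Direct set/get on the defined data members: writes land in the
      // buffer itself, so no separate serialization step is needed.
      void setX(float x) { std::memcpy(_buffer.data() + 0, &x, 4); }
      void setY(float y) { std::memcpy(_buffer.data() + 4, &y, 4); }
      void setZ(float z) { std::memcpy(_buffer.data() + 8, &z, 4); }
      float getX() const
      {
          float v;
          std::memcpy(&v, _buffer.data(), 4);
          return v;
      }

      // The single buffer is directly serializable: hand it to the
      // network layer as-is, with zero copies on the way out.
      const std::uint8_t* data() const { return _buffer.data(); }
      std::size_t size() const { return _buffer.size(); }

  private:
      std::array<std::uint8_t, 12> _buffer{}; // x, y, z in host byte order
  };
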
Date of release:
Version of software: 1.0
Version of documentation: 1.0
Software available: https://github.com/HBPVIS/ZeroBuf
Documentation: https://github.com/HBPVIS/ZeroBuf
Responsible: Stefan Eilemann, EPFL (stefan.eilemann@epfl.ch); support: HBPVis@googlegroups.com
Requirements & dependencies:
Target system(s):

UG4

The development of UG4 was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


UG4 is a powerful software framework for the simulation of complex PDE-based systems on massively parallel computer architectures.

Date of release: October 2015
Version of software: 1.0
Version of documentation: 1.0
Software available: http://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4
Documentation: http://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4
Responsible: Konstantinos Xylouris, UFRA (konstantinos.xylouris@gcsc.uni-frankfurt.de)
Requirements & dependencies: Python 2.7 or Python 3, Git; see also https://github.com/UG4/ughub/wiki
Target system(s): Linux, Windows, Mac OS

Monsteer

The development of Monsteer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Monsteer is a library for interactive supercomputing in the neuroscience domain. It facilitates the coupling of running simulations (currently NEST) with interactive visualization and analysis applications. Monsteer supports streaming of simulation data to clients (currently only spikes) as well as control of the simulator from the clients (also known as computational steering). Monsteer’s main components are a C++ library, a MUSIC-based application and Python helpers.

Date of release: July 2015
Version of software: 0.2.0
Version of documentation: 0.2.0
Software available: https://github.com/BlueBrain/Monsteer
Documentation: http://bluebrain.github.io/
Responsible: Stefan Eilemann, EPFL (stefan.eilemann@epfl.ch)
Requirements & dependencies: Minimum configuration to configure (using CMake), compile and run Monsteer: a Linux box, GCC 4.8+, CMake 2.8+, Boost 1.54, MPI (OpenMPI, MVAPICH2, etc.), NEST simulator 2.4.2, MUSIC 1.0.7, Python 2.6; see also http://bluebrain.github.io/Monsteer-0.3/_user__guide.html#Compilation
Target system(s): Linux computers

neuroFiReS

The development of neuroFiReS was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


neuroFiReS is a library for performing search and filtering operations using both data contents and metadata. These search operations will be tightly coupled with visualization in order to improve the insight gained from complex data. A first prototype (named SpineRet) for searching and filtering over segmented spine data has been developed.

SpineRet screenshot
Date of release: N/A
Version of software: 0.1
Version of documentation: 0.1
Software available: Please contact the developers.
Documentation: Please contact the developers.
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Qt, OpenSceneGraph. Supported OS: Windows 7/8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): Desktop computers, notebooks

NeuroLOTs

NeuroLOTs is a set of tools and libraries for creating neuronal meshes from a minimal skeletal description. It generates soma meshes using FEM deformation and allows the tessellation level to be adapted interactively using different criteria (user-defined, camera distance, etc.).

NeuroTessMesh provides a visual environment for the generation of 3D polygonal meshes that approximate the membrane of neuronal cells, starting from the morphological tracings that describe neuronal morphologies. The 3D models can be tessellated at different levels of detail, providing either a homogeneous or an adaptive resolution of the model. The soma shape is recovered from the incomplete information of the tracings, applying a physical deformation model that can be interactively adjusted. The adaptive refinement process, performed on the GPU, generates meshes that provide good visual quality at an affordable computational cost, both in terms of memory and rendering time. NeuroTessMesh is the front-end GUI to the NeuroLOTs framework.

Related publication:
Garcia-Cantero et al. (2017) Front. Neuroinform. DOI: https://dx.doi.org/10.3389/fninf.2017.00038
NeuroLOTs screenshots

Date of release: NeuroLOTs 0.2.0, March 2018; NeuroTessMesh 0.0.1, March 2018
Version of software: NeuroLOTs 0.2.0; NeuroTessMesh 0.0.1
Version of documentation: NeuroLOTs 0.2.0; NeuroTessMesh 0.0.1
Software available: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh
Documentation: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh, https://gmrvvis.github.io/doc/neurolots/, https://github.com/gmrvvis/neurolots/blob/master/README.md, http://gmrv.es/neurotessmesh/NeuroTessMeshUserManual.pdf, http://gmrv.es/gmrvvis/neurolots/
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Eigen3, OpenGL (>= 4.0), GLEW, GLUT, nsol. Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software). Supported OS: Windows 7/8.1, GNU/Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): High-fidelity displays, desktop computers, notebooks

InDiProv

The development of InDiProv was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


This server-side tool is meant to be used for the creation of provenance tracks in the context of interactive analysis tools and visualization applications. It is capable of tracking multiple views and multiple applications used by one user in such an ensemble. It is further able to export these tracks from the internal database into an XML-based standard format, such as the W3C PROV model or the OPM format. This enables integration with other tools used for provenance tracking and, eventually, with the UP.

Date of release: August 2015
Version of software: August 2015
Version of documentation: August 2015
Software available: https://github.com/hbpvis
Documentation: https://github.com/hbpvis
Responsible: RWTH Aachen: Benjamin Weyers (weyers@vr.rwth-aachen.de) and Torsten Kuhlen (kuhlen@vr.rwth-aachen.de)
Requirements & dependencies: Written in C++; Linux environment, MySQL server 5.6, JSON library for annotation, CodeSynthesis XSD for XML serialization and parsing, ZeroMQ, Boost, xerces-c and mysqlcppcon libraries
Target system(s): Server-side systems

Dynamic Load Balancing

The development of DLB was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.



DLB is a library devoted to speeding up hybrid parallel applications while improving the efficient use of the computational resources inside a computing node. DLB improves the load balance of the outer level of parallelism (e.g. MPI) by redistributing the computational resources at the inner level of parallelism (e.g. OpenMP). This readjustment is done dynamically at runtime, which allows DLB to react to different sources of imbalance: algorithm, data, hardware architecture and resource availability, among others.
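
The pattern DLB targets can be sketched with plain MPI + OpenMP (deliberately not DLB's own API: DLB is typically enabled transparently, for example by preloading its runtime library, without changing the application source):

  // Hybrid MPI + OpenMP kernel with an unbalanced outer level: fast ranks
  // idle at the barrier while slow ranks still compute. DLB reacts by
  // lending the idle ranks' cores to the OpenMP level of busy processes
  // on the same node.
  #include <mpi.h>
  #include <cmath>
  #include <cstdio>
  #include <vector>

  int main(int argc, char** argv)
  {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Imbalanced decomposition: higher ranks get more work.
      const long n = 1000000L * (rank + 1);
      std::vector<double> data(n, 1.0);

      double sum = 0.0;
      #pragma omp parallel for reduction(+ : sum)
      for (long i = 0; i < n; ++i)
          sum += std::sqrt(data[i]);

      std::printf("rank %d: sum = %f\n", rank, sum);
      MPI_Barrier(MPI_COMM_WORLD); // without DLB, fast ranks just wait here

      MPI_Finalize();
      return 0;
  }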

The first version that was integrated in the HPAC Platform was v1.1.

DLB is used on the MareNostrum IV supercomputer for some applications.

Related publication:

https://scholar.google.es/scholar?oi=bibs&hl=ca&cites=11364401518038771379&as_sdt=5

Date of release: December 2017
Version of software: 1.2
Version of documentation: December 2017
Software available: https://pm.bsc.es/dlb
https://github.com/bsc-pm/dlb
Documentation: https://pm.bsc.es/dlb-docs/user-guide/
https://pm.bsc.es/sites/default/files/ftp/dlb/doc/Tutorial_DLB.pdf
Responsible: BSC Programming Models Group: pm-tools@bsc.es
Requirements & dependencies: Any MPI library; OpenMP or OmpSs compiler and runtime. For tracing: Extrae library. See also https://pm.bsc.es/dlb
Target system(s): Any system with multiple CPUs/cores in a node (supercomputers, clusters, workstations, …)

HCFFT

The development of HCFFT was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


HCFFT (Hyperbolic Cross Fast Fourier Transform) is a software package to efficiently treat high-dimensional multivariate functions. The implementation is based on the fast Fourier transform for arbitrary hyperbolic cross / sparse grid spaces.
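
For orientation, the hyperbolic cross frequency set has a standard definition (textbook material, not taken from the HCFFT documentation): instead of the full grid of Fourier frequencies, only those whose component product is small are kept, which shrinks the number of coefficients from O(N^d) to O(N log^(d-1) N).

  % Hyperbolic cross index set in dimension d at refinement N, and the
  % corresponding sparse Fourier partial sum:
  \[
    I_N^d = \Bigl\{ \mathbf{k} \in \mathbb{Z}^d :
        \prod_{j=1}^{d} \max(1, |k_j|) \le N \Bigr\},
    \qquad
    f(\mathbf{x}) \approx \sum_{\mathbf{k} \in I_N^d}
        \hat{f}_{\mathbf{k}} \, e^{2\pi \mathrm{i}\, \mathbf{k} \cdot \mathbf{x}}.
  \]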

Date of release: 2015
Version of software: 1.0
Version of documentation: 1.0
Software available: www.hcfft.org
Documentation: www.hcfft.org
Responsible: Fraunhofer (FG) SCAI: contact the developers via http://hcfft.org/get-hcfft.html
Requirements & dependencies: C/C++
Target system(s): Tested under Linux

ZeroEQ

The development of ZeroEQ was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.


ZeroEQ is a cross-platform C++ library to publish and subscribe for events. It provides the following major features:

  • Publish events using zeroeq::Publisher
  • Subscribe to events using zeroeq::Subscriber
  • Asynchronous, reliable transport using ZeroMQ
  • Automatic publisher discovery using Zeroconf
  • Efficient serialization of events using FlatBuffers

The main intention of ZeroEQ is to allow the linking of applications using automatic discovery. Linking can be used to connect multiple visualization applications, or to connect simulators with analysis and visualization codes to implement streaming and steering. One example of the former is the interoperability of NeuroScheme with RTNeuron; one of the latter is the streaming and steering between NEST and RTNeuron. Both were reported previously, whereas the current extensions focus on the implementation of the request-reply interface.
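
A sketch of the two sides (hedged: zeroeq::Publisher and zeroeq::Subscriber are the documented entry points, but the exact method signatures below are assumptions to verify against the 0.9 API reference; EchoEvent is a hypothetical serializable event type, e.g. generated from a FlatBuffers schema):

  #include <zeroeq/zeroeq.h>

  // Sending side: the publisher announces itself via Zeroconf.
  void publisherSide(const EchoEvent& event)
  {
      zeroeq::Publisher publisher;
      publisher.publish(event); // serialize and send to all subscribers
  }

  // Receiving side: the subscriber discovers publishers automatically.
  void subscriberSide(EchoEvent& event)
  {
      zeroeq::Subscriber subscriber;
      subscriber.subscribe(event);  // register interest in this event type
      for (;;)
          subscriber.receive(100);  // poll; updates land in 'event'
  }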

Date of release: February 2018
Version of software: 0.9.0
Version of documentation: 0.9.0
Software available: https://hbpvis.github.io/ZeroEQ-0.9/index.html
Documentation: https://github.com/HBPVis/ZeroEQ, https://hbpvis.github.io/ZeroEQ-0.9/index.html
Responsible: EPFL: Samuel Lapere (samuel.lapere@epfl.ch)
Requirements & dependencies: ZeroMQ, FlatBuffers, Boost, Lunchbox
Target system(s):

ViSTA Virtual Reality Toolkit

The development of ViSTA Virtual Reality Toolkit was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The ViSTA Virtual Reality Toolkit allows the integration of virtual reality (VR) technology and interactive, 3D visualisation into technical and scientific applications. The toolkit aims to enhance scientific applications with methods and techniques of VR and immersive visualization, thus enabling researchers from multiple disciplines to interactively analyse and explore their data in virtual environments. ViSTA is designed to work on multiple target platforms and operating systems, across various display devices (desktop workstations, powerwalls, tiled displays, CAVEs, etc.) and with various interaction devices.

Version 1.15 provides new features compared to version 1.14, which was part of the HBP-internal Platform Release in M18. It is available on SourceForge: http://sourceforge.net/projects/vistavrtoolkit/

Date of release: February 20, 2013
Version of software: 1.15
Version of documentation: 1.15
Software available: http://sourceforge.net/projects/vistavrtoolkit/
https://github.com/HBPVIS/Vista
Documentation: Included in the library source code
Responsible: RWTH: Torsten Kuhlen (kuhlen@vr.rwth-aachen.de), Benjamin Weyers (weyers@vr.rwth-aachen.de)
Requirements & dependencies: Libraries: OpenSG, freeglut, GLEW. Operating systems: Windows / Linux. Compilers: Microsoft Visual Studio 2010 (cl16) or higher, gcc 4.4.7 or higher
Target system(s): High-fidelity visualization platforms, immersive visualization hardware, desktop computers

PyCOMPSs

The development of PyCOMPSs was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated, apart from release notes. 



PyCOMPSs is the Python binding of COMPSs (COMP Superscalar), a coarse-grained programming model oriented to distributed environments, with a powerful runtime that leverages low-level APIs (e.g. Amazon EC2) and manages data dependencies (objects and files). Starting from sequential Python code, it can run applications in parallel and distributed.

COMPSs screenshot

Releases

PyCOMPSs is based on COMPSs. COMPSs version 1.3 was released in November 2015, version 1.4 in May 2016 and version 2.0 in November 2016.

New features in COMPSs v1.3

  • Runtime
    • Persistent workers: workers can be deployed on computing nodes and persist during the whole application lifetime, thus reducing the runtime overhead. The previous implementation of workers based on one process per task is still supported.
    • Enhanced logging system
    • Interoperable communication layer: different inter-node communication protocols are supported by implementing the Adaptor interface (JavaGAT and NIO implementations already included)
    • Simplified cloud connectors interface
    • JClouds connector
  • Python/PyCOMPSs
    • Added constraints support
    • Enhanced methods support
    • Lists accepted as a task parameter type
    • Support for user decorators
  • Tools
    • New monitoring tool: with new views, such as workload, and the possibility of visualizing information about previous runs
    • Enhanced tracing mechanism
  • Simplified execution scripts
  • Simplified installation on supercomputers through better scripts

New features in COMPSs v1.4

  • Runtime
    • Added support for Docker
    • Added support for Chameleon Cloud
    • Object cache for persistent workers
    • Improved error management
    • Added connector for submitting tasks to MN supercomputer from external COMPSs applications
    • Bug-fixes
  • Python/PyCOMPSs
    • General bug-fixes
  • Tools
    • Enhanced Tracing mechanism:
      • Reduced overhead using native Java API
      • Added support for communications instrumentation
      • Added support for PAPI hardware counters
  • Known Limitations
    • When executing Python applications with constraints in the cloud the initial VMs must be set to 0

New features in COMPSs v2.0 (released November 2016)

  • Runtime:
    • Upgrade to Java 8
    • Support for remote input files (input files already at workers)
    • Integration with Persistent Objects
    • Elasticity with Docker and Mesos
    • Multi-processor support (CPUs, GPUs, FPGAs)
    • Dynamic constraints with environment variables
    • Scheduling taking into account the full task graph (not only ready tasks)
    • Support for SLURM clusters
    • Initial COMPSs/OmpSs integration
    • Replicated tasks: Tasks executed in all the workers
    • Explicit Barrier
  • Python:
    • Python user events and HW counters tracing
    • Improved PyCOMPSs serialization. Added support for lambda and generator parameters.
  • C:
    • Constraints support
  • Tools:
    • Improved current graph visualization on COMPSs Monitor
  • Improvements:
    • Simplified Resource and Project files (no backward compatibility)
    • Improved binding workers execution (using pipes instead of Java Process Builders)
    • Simplified cluster job scripts and supercomputers configuration
    • Several bug fixes
  • Known Limitations:
    • When executing python applications with constraints in the cloud the initial VMs must be set to 0

New features in PyCOMPSs/COMPSs v2.1 (released June 2017)

  • New features:
    • Runtime:
      • New annotations to simplify tasks that call external binaries
      • Integration with other programming models (MPI, OmpSs, …)
      • Support for Singularity containers in Clusters
      • Extension of the scheduling to support multi-node tasks (MPI apps as tasks)
      • Support for Grid Engine job scheduler in clusters
      • Language flag automatically inferred in runcompss script
      • New schedulers based on tasks’ generation order
      • Core affinity and over-subscribing thread management in multi-core cluster queue scripts (used with MKL libraries, for example)
    • Python:
      • @local annotation to support simpler data synchronizations in the master (requires installing guppy)
      • Support for args and kwargs parameters as task dependencies
      • Task versioning support in Python (multiple behaviors of the same task)
      • New Python persistent workers that reduce overhead of Python tasks
      • Support for task-thread affinity
      • Tracing extended to support Python user events and HW counters (with known issues)
    • C:
      • Extension of file management API (compss_fopen, compss_ifstream, compss_ofstream, compss_delete_file)
      • Support for task-thread affinity
    • Tools:
      • Visualization of not-running tasks in current graph of the COMPSs Monitor
  • Improvements
    • Improved PyCOMPSs serialization
    • Improvements in cluster job scripts and supercomputers configuration
    • Several bug fixes
  • Known Limitations
    • When executing Python applications with constraints in the cloud the <InitialVMs> property must be set to 0
    • Tasks that invoke NumPy and MKL may experience issues if tasks use a different number of MKL threads. This is due to the fact that MKL reuses threads across calls and does not change the number of threads from one call to another.

New features in PyCOMPSs/COMPSs v2.3 (released June 2018)

  • Runtime
    • Persistent storage API implementation based on Redis (distributed as default implementation with COMPSs)
    • Support for FPGA constraints and reconfiguration scripts
    • Support for PBS Job Scheduler and the Archer Supercomputer
  • Java
    • New API call to delete objects in order to reduce application memory usage
  • Python
    • Support for Python 3
    • Support for Python virtual environments (venv)
    • Support for running PyCOMPSs as a Python module
    • Support for tasks returning multiple elements (returns=#)
    • Automatic import of dummy PyCOMPSs API
  • C
    • Persistent worker with Memory-to-memory transfers
    • Support for arrays (no serialization required)
  • Improvements
    • Distribution with docker images
    • Source Code and example applications distribution on Github
    • Automatic inference of task return
    • Improved obsolete object cleanup
    • Improved tracing support for applications using persistent memory
    • Improved finalization process to reduce zombie processes
    • Several bug fixes
  • Known limitations
    • Tasks that invoke NumPy and MKL may experience issues if different tasks use a different number of MKL threads. This is due to the fact that MKL reuses threads across calls and does not change the number of threads from one call to another.

New features in PyCOMPSs/COMPSs v2.5 (released June 2019)

  • Runtime:
    • New Concurrent direction type for task parameters.
    • Multi-node tasks support for native (Java, Python) tasks. Previously, multi-node tasks were only possible with @mpi or @decaf tasks.
    • @Compss decorator for executing COMPSs applications as tasks.
    • New runtime API to synchronize files without opening them.
    • Customizable task failure management with the “onFailure” task property.
    • Enabled master node to execute tasks.
  • Python:
    • Partial support for Numba in tasks.
    • Support for collections as task parameters.
    • Support for task inheritance.
    • New persistent MPI worker mode (alternative to subprocess).
    • Support for Arm MAP and DDT tools (with MPI worker mode).
  • C:
    • Support for tasks without parameters and applications without a src folder.
  • Improvements:
    • New task property “targetDirection” to indicate the direction of the target object in object methods. Replaces the “isModifier” task property.
    • Warnings for deprecated or incorrect task parameters.
    • Improvements in Jupyter for Supercomputers.
    • Upgrade of runcompss_docker script to docker stack interface.
    • Several bug fixes.
  • Known Limitations:
    • Tasks that invoke NumPy and MKL may experience issues if different tasks use a different number of MKL threads. This is due to the fact that MKL reuses threads across calls and does not change the number of threads from one call to another.
    • C++ objects declared as arguments in coarse-grain tasks must be passed in the task methods as object pointers in order to have proper dependency management.
    • Master as worker is not working for executions with persistent worker in C++.
    • Coherence and concurrent writing in parameters annotated with the “Concurrent” direction must be managed by the underlying distributed storage system.
    • Delete file calls for files used as input can produce a significant synchronization of the main code.

PyCOMPSs/COMPSs PIP installation package

This is a new feature available since January 2017.

Installation:

  • Check the dependencies in the PIP section of the PyCOMPSs installation manual (available at the documentation section of compss.bsc.es). Be sure that the target machine satisfies the mentioned dependencies.
  • The installation can be done in various alternative ways:
    • Use PIP to install the official PyCOMPSs version from the PyPI live repository:
      sudo -E python2.7 -m pip install pycompss -v
    • Use PIP to install PyCOMPSs from a pycompss.tar.gz
      sudo -E python2.7 -m pip install pycompss-version.tar.gz -v
    • Use the setup.py script
      sudo -E python2.7 setup.py install

Internal report

How multi-scale applications can be developed using PyCOMPSs (accessible by HBP members only):

https://collaboration.humanbrainproject.eu/documents/10727/3235212/HBP_Multi-scale_in_PyCOMPSs_M30_SP7_WP7.2_T7.2.2_v1.0.docx/d187d1f5-c27c-42a3-9833-3cee3d62fb46

Date of release: June 2019
Version of software: 2.5
Version of documentation: 2.5
Software available: http://compss.bsc.es
Documentation: https://www.bsc.es/research-and-development/software-and-apps/software-list/comp-superscalar/documentation
Responsible: BSC Workflows and Distributed Computing Group: support-compss@bsc.es
Requirements & dependencies: http://compss.bsc.es/releases/compss/latest/docs/COMPSs_Installation_Manual.pdf?tracked=true
Target system(s): Supercomputers or clusters with different nodes, distributed computers, grid and cloud architectures

OmpSs

The development of OmpSs was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.



OmpSs is a fine-grained programming model oriented to shared memory environments, with a powerful runtime that leverages low-level APIs (e.g. CUDA/OpenCL) and manages data dependencies (memory regions). It exploits task level parallelism and supports asynchronicity, heterogeneity and data movement.
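
A minimal OmpSs example (standard OmpSs task syntax, compiled with the Mercurium compiler, typically mcxx --ompss; the computation itself is only illustrative): tasks declare their data accesses with in/out clauses, and the runtime derives the dependency graph and schedules the tasks asynchronously.

  #include <cstdio>

  int main()
  {
      int x = 0;

      #pragma omp task out(x)  // producer task: writes x
      {
          x = 42;
      }

      #pragma omp task in(x)   // consumer task: runs only after the producer
      {
          std::printf("x = %d\n", x);
      }

      #pragma omp taskwait     // wait for both tasks before exiting
      return 0;
  }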

The new version 15.06 provides the following new features as compared to version 15.04 that was part of the HBP-internal Platform Release in M18:

  • Socket aware (scheduling taking into account processor socket)
  • Reductions (mechanism to accumulate results of tasks more efficiently)
  • Work sharing (persistence of data in the worker) mechanisms

Date of release: June 2016
Version of software: 16.06.3
Version of documentation: December 13, 2016
Software available: https://pm.bsc.es/ompss-downloads
Documentation: OmpSs website: https://pm.bsc.es/ompss-docs/book/index.html
Responsible: BSC Programming Models Group: pm-tools@bsc.es
Requirements & dependencies: http://pm.bsc.es/ompss-docs/user-guide/installation.html#nanos-build-requirements
http://pm.bsc.es/ompss-docs/user-guide/installation.html#mercurium-build-requirements
Target system(s): Any with shared memory (supercomputers, clusters, workstations, …)

Equalizer

The development of Equalizer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Equalizer is a parallel rendering framework to create and deploy parallel, scalable OpenGL applications. It provides the following major features:

  • Runtime Configurability: An Equalizer application is configured automatically or manually at runtime and can be deployed on laptops, multi-GPU workstations and large-scale visualization clusters without recompilation.
  • Runtime Scalability: An Equalizer application can benefit from multiple graphics cards, processors and computers to scale rendering performance, visual quality and display size.
  • Distributed Execution: Equalizer applications can be written to support cluster-based execution. Equalizer uses the Collage network library, a cross-platform C++ library for building heterogeneous, distributed applications.

  • Support for Stereo and Immersive Environments: Equalizer supports stereo rendering, head tracking, head-mounted displays and other advanced features for immersive Virtual Reality installations.

Equalizer in an immersive environment (screenshot)
Date of release: 2007
Version of software: 1.8
Version of documentation: 1.8
Software available: https://github.com/Eyescale/Equalizer
Documentation: https://eyescale.github.io
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, OpenGL, Collage, hwsd, GLEW, Qt
Target system(s):

Deflect Client Library

The development of Deflect Client Library was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Deflect is a C++ library to develop applications that can send and receive pixel streams from other Deflect-based applications, for example DisplayCluster. The following applications, which make use of the streaming API, are provided (a sending-side sketch follows the list):

  • DesktopStreamer: A small utility that allows the user to stream the desktop.
  • SimpleStreamer: A simple example to demonstrate streaming of an OpenGL application.
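
A hedged sketch of the sending side (deflect::Stream and deflect::ImageWrapper follow the Deflect sources as far as we can confirm; treat the exact signatures as assumptions to check against this version's headers):

  #include <deflect/Stream.h>
  #include <cstddef>
  #include <vector>

  int main()
  {
      // Open a named pixel stream to a host running DisplayCluster.
      deflect::Stream stream("example-streamer", "displaywall-host");
      if (!stream.isConnected())
          return 1;

      // Wrap a raw RGBA buffer and send it as one frame.
      const std::size_t width = 640, height = 480;
      std::vector<unsigned char> pixels(width * height * 4, 128); // grey
      deflect::ImageWrapper image(pixels.data(), width, height, deflect::RGBA);
      stream.send(image);   // stream the image
      stream.finishFrame(); // mark the end of this frame

      return 0;
  }
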
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster
https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s):