Biological Scientific User (BSU).

The Virtual Brain (TVB-HPC)

The Virtual Brain (TVB) is a large-scale brain simulator. With a community of thousands of users around the world, TVB has become a validated, popular and standard choice for the simulation of whole-brain activity. TVB users can create simulations using neural mass models that produce outputs for different analyses and modalities. TVB allows scientists to explore and analyze both simulated and experimental data, and it contains analytic tools for evaluating relevant scientific parameters in light of those data. The current implementation of TVB is written in Python, with limited large-scale parallelization over different parameters. The objective of TVB-HPC is to enable large-scale parallelization of TVB simulations by using high performance computing to explore large parameter spaces of the models. With this approach, neuroscientists can define their models in a domain-specific language based on NeuroML and automatically generate code that can run either on GPUs or on CPUs with different architectures and optimizations. The result is a framework that hides the complexity of writing robust parallel code and offers neuroscientists fast and efficient access to high performance computing. TVB-HPC is publicly available on GitHub and, at the end of HBP project phase SGA2, it will be possible to launch large parameter simulations using code automatically generated with this framework via the HBP Collaboratory.

Date of release: 30.04.2019 (continuous minor releases since then)
Version of software: v0.1-alpha
Version of documentation: v0.1-alpha
Software available: https://github.com/the-virtual-brain/tvb-hpc
Documentation: https://github.com/the-virtual-brain/tvb-hpc
Responsible: Sandra Diaz (JUELICH): s.diaz@fz-juelich.de
Requirements & dependencies:
Target system(s): HPC systems

In Situ Pipeline

This is the newer, more general version of the NEST in situ framework.

The in situ pipeline consists of a set of libraries that can be integrated into neuronal network simulators developed by the HBP to enable live visual analysis during the runtime of a simulation. The library ‘nesci’ (neuronal simulator conduit interface) stores the raw simulation data in a common Conduit format, and the library ‘contra’ (Conduit transport) transports the serialized data from one endpoint to another using a variety of (network) protocols. The pipeline currently works with NEST and Arbor; support for TVB is in development.
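The split between serialization (‘nesci’) and transport (‘contra’) can be illustrated with a minimal Python sketch. This is not the nesci/contra API: JSON stands in for the common Conduit format and a local socket pair stands in for the network protocols, purely to show the two-stage design.

```python
import json
import socket

def pack_spikes(times, neuron_ids):
    """nesci-like step: store raw spike data in a common, self-describing format."""
    return json.dumps({"spikes": {"times": times, "ids": neuron_ids}}).encode()

def transport(payload):
    """contra-like step: move the serialized bytes between two endpoints.

    A local socket pair stands in for the real network protocols
    (e.g. ZeroMQ) supported by the pipeline.
    """
    sender, receiver = socket.socketpair()
    sender.sendall(payload)
    sender.close()  # signals end-of-stream to the receiver
    chunks = []
    while True:
        chunk = receiver.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    receiver.close()
    return b"".join(chunks)

def unpack_spikes(payload):
    """Receiving endpoint: decode the common format for the analysis tool."""
    return json.loads(payload.decode())["spikes"]
```

Because the two stages only share the serialized byte format, either side can be swapped out, which is what lets the same pipeline serve several simulators.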

Prototypical implementation into the HPAC Platform finalised in February 2019.

Date of release: First released in July 2018 with continuous updates (see also above)
Version of software: 18.07
Version of documentation: 18.07
Software available: https://devhub.vr.rwth-aachen.de/VR-Group/contra
https://devhub.vr.rwth-aachen.de/VR-Group/nesci
https://devhub.vr.rwth-aachen.de/VR-Group/nest-streaming-module
Documentation: See the readme files in the repositories
Responsible: RWTH: Simon Oehrl (oehrl@vr.rwth-aachen.de)
Requirements & dependencies: Required: CMake, C++14, Conduit
Optional: Python, Boost, ZeroMQ
Target system(s): Desktops/HPC systems running Linux, macOS or Windows

NEST: The Neural Simulation Tool

Science has driven the development of the NEST simulator for the past 20 years. Originally created to simulate the propagation of synfire chains using single-processor workstations, NEST’s capabilities have been pushed continuously to address new scientific questions and computer architectures. Prominent examples include studies on spike-timing dependent plasticity in large simulations of cortical networks, the verification of mean-field models, and models of Alzheimer’s disease, Parkinson’s disease and tinnitus. Recent developments include a significant reduction in memory requirements, as demonstrated by a record-breaking simulation of 1.86 billion neurons connected by 11.1 trillion synapses on the Japanese K supercomputer, paving the way for brain-scale simulations.

Running on everything from laptops to the world’s largest supercomputers, NEST is configured and controlled by high-level Python scripts, while harnessing the power of C++ under the hood. An extensive testsuite and systematic quality assurance ensure the reliability of NEST.

The development of NEST is driven by the demands of neuroscience and carried out in a collaborative fashion at many institutions around the world, coordinated by the non-profit, member-based NEST Initiative. NEST is released under the GNU General Public License version 2 or later.

How NEST has been improved in the HBP

Continuous dynamics

The continuous dynamics code in NEST enables simulations of rate-based model neurons in the event-based simulation scheme of the spiking simulator NEST. The technology was included and released with NEST 2.14.0.

Furthermore, additional rate-based models for the Co-Design Project “Visuo-Motor Integration” (CDP4) have been implemented and scheduled for NEST release 2.16.0.

Related publication:
Hahne et al. (2017) Front. Neuroinform. 11,34. doi:10.3389/fninf.2017.00034

NESTML

NESTML is a domain-specific language that supports the specification of neuron models in a precise and concise syntax, based on the syntax of Python. Model equations can either be given as a simple string of mathematical notation or as an algorithm written in the built-in procedural language. The equations are analyzed by NESTML to compute an exact solution if possible, or use an appropriate numeric solver otherwise.
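The distinction between an exact and a numeric solution can be sketched in plain Python (NESTML itself generates such solvers from the model equations; the model, names and constants below are illustrative, not NESTML output). For a leaky membrane dV/dt = -V/tau, the exact propagator is a single multiplication per step, while a generic explicit-Euler step only approximates it:

```python
import math

TAU_M = 10.0  # membrane time constant in ms (illustrative value)

def step_exact(v, dt, tau=TAU_M):
    # exact propagator for dV/dt = -V / tau over one time step
    return v * math.exp(-dt / tau)

def step_euler(v, dt, tau=TAU_M):
    # generic numeric fallback: explicit Euler
    return v + dt * (-v / tau)

def simulate(stepper, v0=1.0, dt=0.1, steps=100):
    v = v0
    for _ in range(steps):
        v = stepper(v, dt)
    return v
```

After 10 ms the exact stepper reproduces exp(-1) ≈ 0.3679 independently of the step size, whereas the Euler result only converges to it as dt shrinks; this is why an analyzable model description pays off.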

Link to this release (2018): https://github.com/nest/nestml

Related Publications:

Plotnikov et al. (2016) NESTML: a modeling language for spiking neurons.

Simulator-simulator interfaces

This technology couples the simulation software NEST and UG4 by means of the MUSIC library. NEST sends spike trains from the neurons where spiking occurs; UG4 receives these in the form of events arriving at synapses (timestamps). The time course of the extracellular potential in a cube (representing a piece of tissue) is simulated based on the arriving spike data. The evolution of the membrane potential in space and time is described by the Xylouris-Wittum model.

Link to this release (2017): https://github.com/UG4

Related publications:
Vogel et al. (2014) Comput Vis Sci. 16,4. doi: 10.1007/s00791-014-0232-9
Xylouris, K., Wittum, G. (2015) Front Comput Neurosci. doi: 10.3389/fncom.2015.00094

More information

NEST – A brain simulator (short movie)

NEST::documented (long movie)

NEST brochure:

http://www.nest-simulator.org/wp-content/uploads/2015/04/JARA_NEST_final.pdf

Date of release: July 2019
Version of software: v2.18.0
Version of documentation: v2.18.0
Software available: NEST can be run directly from a Jupyter notebook inside a Collab in the HBP Collaboratory.
Download & information: https://www.nest-simulator.org
Latest code version: https://github.com/nest/nest-simulator
Documentation: http://www.nest-simulator.org/documentation/
Responsible: NEST Initiative (http://www.nest-initiative.org/)
General contact: NEST User Mailing List (http://www.nest-simulator.org/community/)

Contact for HBP Partners:
Hans Ekkehard Plesser (NMBU/JUELICH): hans.ekkehard.plesser@nmbu.no
Dennis Terhorst (JUELICH): d.terhorst@fz-juelich.de
Requirements & dependencies: Any Unix-like operating system and basic development tools
OpenMP
MPI
GNU Science Library
Target system(s): All Unix-like systems
Laptop to supercomputer; has been ported to Raspberry Pi, too

Monsteer

The development of Monsteer was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Monsteer is a library for interactive supercomputing in the neuroscience domain. Monsteer facilitates the coupling of running simulations (currently NEST) with interactive visualization and analysis applications. It supports streaming of simulation data to clients (currently only spikes) as well as control of the simulator from the clients (also known as computational steering). Monsteer’s main components are a C++ library, a MUSIC-based application and Python helpers.

Date of release: July 2015
Version of software: 0.2.0
Version of documentation: 0.2.0
Software available: https://github.com/BlueBrain/Monsteer
Documentation: http://bluebrain.github.io/
Responsible: Stefan Eilemann, EPFL (stefan.eilemann@epfl.ch)
Requirements & dependencies: Minimum configuration to configure (using CMake), compile and run Monsteer: a Linux box,
GCC compiler 4.8+,
CMake 2.8+,
Boost 1.54,
MPI (OpenMPI, mvapich2, etc.),
NEST simulator 2.4.2,
MUSIC 1.0.7,
Python 2.6.
See also: http://bluebrain.github.io/Monsteer-0.3/_user__guide.html#Compilation
Target system(s): Linux computer

MSPViz

MSPViz is a visualization tool for the Model of Structural Plasticity. It uses a visualisation technique based on representing neuronal information at several levels of abstraction, each with its own set of schematic representations. This multilevel structure provides organised views that facilitate visual analysis tasks.

Each view has been enhanced with line and bar charts to analyse trends in simulation data. Filtering and sorting capabilities can be applied in each view to ease the analysis. Other views, such as connectivity matrices and force-directed layouts, have been incorporated, enriching the existing views and improving the analysis process. The tool has been optimised to lower rendering and data-loading times, even from remote sources such as WebDAV servers.

View of MSPViz to investigate structural plasticity models on different levels of abstraction: connectivity of a single neuron
View of MSPViz to investigate structural plasticity models on different levels of abstraction: full network connectivity
Date of release: March 2018
Version of software: 0.2.6
Version of documentation: 0.2.6 for users
Software available: http://gmrv.es/mspviz
Documentation: Self-contained in the application
Responsible: UPM: Juan Pedro Brito (juanpedro.brito@upm.es)
Requirements & dependencies: Self-contained code
Target system(s): Platform independent

neuroFiReS

The development of neuroFiReS was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


neuroFiReS is a library for performing search and filtering operations using both data contents and metadata. These search operations will be tightly coupled with visualization in order to improve the insight gained from complex data. A first prototype (named SpineRet) for searching and filtering over segmented spine data has been developed.

SpineRet screenshot
Date of release: N/A
Version of software: 0.1
Version of documentation: 0.1
Software available: Please contact the developers.
Documentation: Please contact the developers.
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Qt, OpenSceneGraph
Supported OS: Windows 7/8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): Desktop computers, notebooks

NeuroScheme

NeuroScheme is a tool that allows users to navigate through circuit data at different levels of abstraction, using schematic representations for a fast and precise interpretation of the data. It also allows filtering, sorting and selection at the different levels of abstraction. Finally, it can be coupled with realistic visualization or other applications using the ZeroEQ event library developed in WP 7.3.

This application allows analyses based on a side-by-side comparison using its multi-panel views, and it also provides focus-and-context. Its different layouts enable arranging data in different ways: grid, 3D, camera-based, scatterplot-based or circular. It provides editing capabilities, to create a scene from scratch or to modify an existing one.

ViSimpl, part of the NeuroScheme framework, is a prototype developed to analyse simulation data, using both abstract and schematic visualisations. This analysis can be done visually from temporal, spatial and structural perspectives, with the additional capability of exploring the correlations between input patterns and produced activity.

 

Overview of various neurons
User interface of ViSimpl visualising activity data emerging from a simulation of a neural network model
Date of release: March 2018
Version of software: 0.2
Version of documentation: 0.2
Software available: https://github.com/gmrvvis/NeuroScheme
Documentation: https://github.com/gmrvvis/NeuroScheme, http://gmrv.es/gmrvvis
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Qt4, nsol
Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software)
Supported OS: Windows 7, Windows 8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): Desktop computers, notebooks, tablets

NeuroLOTs

NeuroLOTs is a set of tools and libraries for creating neuronal meshes from a minimal skeletal description. It generates soma meshes using FEM deformation and allows the tessellation level to be adapted interactively using different criteria (user-defined, camera distance, etc.).

NeuroTessMesh provides a visual environment for the generation of 3D polygonal meshes that approximate the membrane of neuronal cells, starting from the morphological tracings that describe neuronal morphologies. The 3D models can be tessellated at different levels of detail, providing either a homogeneous or an adaptive resolution of the model. The soma shape is recovered from the incomplete information of the tracings by applying a physical deformation model that can be interactively adjusted. The adaptive refinement process, performed on the GPU, generates meshes that provide good visual quality at an affordable computational cost, both in terms of memory and rendering time. NeuroTessMesh is the front-end GUI to the NeuroLOTs framework.
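The camera-distance criterion for adaptive tessellation can be illustrated with a small sketch (this is not the NeuroLOTs API; the function, parameters and thresholds are hypothetical). Detail is highest near the camera and falls off logarithmically up to a far threshold:

```python
import math

def tessellation_level(distance, max_level=6, near=10.0, far=1000.0):
    """Pick a refinement level from camera distance (hypothetical criterion).

    Full detail at or below `near`, coarsest level at or beyond `far`,
    logarithmic falloff in between, mimicking a typical level-of-detail scheme.
    """
    if distance <= near:
        return max_level
    if distance >= far:
        return 0
    frac = (math.log(distance) - math.log(near)) / (math.log(far) - math.log(near))
    return round(max_level * (1.0 - frac))
```

A user-defined criterion would simply replace this function, which is why the refinement level can be driven by several interchangeable policies.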

Related Publication:
Garcia-Cantero et al. (2017) Front Neuroinform. doi: 10.3389/fninf.2017.00038
NeuroLOTs screenshot

Date of release: NeuroLOTs 0.2.0, March 2018; NeuroTessMesh 0.0.1, March 2018
Version of software: NeuroLOTs 0.2.0, NeuroTessMesh 0.0.1
Version of documentation: NeuroLOTs 0.2.0, NeuroTessMesh 0.0.1
Software available: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh
Documentation: https://github.com/gmrvvis/neurolots, https://github.com/gmrvvis/NeuroTessMesh, https://gmrvvis.github.io/doc/neurolots/, https://github.com/gmrvvis/neurolots/blob/master/README.md, http://gmrv.es/neurotessmesh/NeuroTessMeshUserManual.pdf, http://gmrv.es/gmrvvis/neurolots/
Responsible: URJC: Pablo Toharia (pablo.toharia@urjc.es)
Requirements & dependencies: Required: Eigen3, OpenGL (>= 4.0), GLEW, GLUT, nsol
Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software)
Supported OS: Windows 7/8.1, GNU/Linux (tested on Ubuntu 14.04) and Mac OS X
Target system(s): High-fidelity displays, desktop computers, notebooks

Dynamic Load Balancing

The development of DLB was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.



DLB is a library devoted to speeding up hybrid parallel applications while improving the efficient use of the computational resources inside a computing node. The library improves the load balance of the outer level of parallelism by redistributing the computational resources at the inner level of parallelism. This readjustment of resources is done dynamically at runtime, which allows DLB to react to different sources of imbalance: algorithm, data, hardware architecture and resource availability, among others.
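The core idea can be sketched in a few lines of Python (this is not the DLB API; the function and inputs are purely illustrative): measure the load of each outer-level process, then reassign the node's cores to the inner level of parallelism in proportion to it.

```python
def rebalance(loads, total_cores):
    """Redistribute a node's cores among processes in proportion to their load.

    `loads` are measured per-process workloads (e.g. time spent computing);
    every process keeps at least one core. Illustrative sketch only.
    """
    total = sum(loads)
    shares = [max(1, round(total_cores * load / total)) for load in loads]
    # rounding may leave the sum off by a few cores; fix up deterministically
    while sum(shares) > total_cores:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < total_cores:
        shares[shares.index(min(shares))] += 1
    return shares
```

Repeating this at runtime, as loads change, is what lets such a scheme react to imbalance from the algorithm, the data or the hardware.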

The first version that was integrated in the HPAC Platform was v1.1.

DLB is used on the MareNostrum IV supercomputer for several applications.

Related publication:

https://scholar.google.es/scholar?oi=bibs&hl=ca&cites=11364401518038771379&as_sdt=5

Date of release: December 2017
Version of software: 1.2
Version of documentation: December 2017
Software available: https://pm.bsc.es/dlb
https://github.com/bsc-pm/dlb
Documentation: https://pm.bsc.es/dlb-docs/user-guide/
https://pm.bsc.es/sites/default/files/ftp/dlb/doc/Tutorial_DLB.pdf
Responsible: BSC Programming Models Group: pm-tools@bsc.es
Requirements & dependencies: Any MPI library, OpenMP or OmpSs compiler and runtime
For tracing: Extrae library
See also https://pm.bsc.es/dlb
Target system(s): Any system with multiple CPUs/cores in a node (supercomputers, clusters, workstations, …)

MonetDB

The development of MonetDB was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


When a database grows into millions of records spread over many tables and business intelligence or science becomes the prevalent application domain, a column-store database management system (DBMS) is called for. Unlike traditional row-stores, such as MySQL and PostgreSQL, a column-store provides a modern and scalable solution without calling for substantial hardware investments.


MonetDB has pioneered column-store solutions for high-performance data warehouses in business intelligence and eScience since 1993. It achieves this through innovations at all layers of a DBMS: a storage model based on vertical fragmentation, a modern CPU-tuned query execution architecture, automatic and adaptive indices, run-time query optimization, and a modular software architecture. It is based on the SQL:2003 standard with full support for foreign keys, joins, views, triggers and stored procedures. It is fully ACID compliant and supports a rich spectrum of programming interfaces (JDBC, ODBC, PHP, Python, RoR, C/C++, Perl).

The current version provides the following new features as compared to the version that was part of the HBP-internal Platform Release in M18:

  • Python integration
  • Representation of arrays inside MonetDB
  • MonetDB as a standalone library (MonetDBLite)
Date of release: October 2014, updated in July 2015
Version of software:
Version of documentation:
Software available: http://www.monetdb.org
Documentation: http://www.monetdb.org
Responsible: CWI, Martin Kersten (martin.kersten@cwi.nl)
Requirements & dependencies:
Target system(s): Fedora, Ubuntu, Windows, Mac, FreeBSD, CentOS, RHEL, Solaris

PyCOMPSs

The development of PyCOMPSs was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated, apart from release notes. 



PyCOMPSs is the Python binding of COMPSs (COMP Superscalar), a coarse-grained programming model oriented to distributed environments, with a powerful runtime that leverages low-level APIs (e.g. Amazon EC2) and manages data dependencies (objects and files). Starting from sequential Python code, it is able to run applications in a parallel and distributed fashion.
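A minimal PyCOMPSs usage sketch (illustrative; `increment` and the values are made up): tasks are declared with the @task decorator on plain Python functions, the runtime builds the dependency graph and runs independent tasks in parallel, and compss_wait_on synchronizes the results. The fallback to sequential no-op stand-ins below simply lets the snippet run where PyCOMPSs is not installed, in the spirit of the dummy PyCOMPSs API mentioned in the v2.3 release notes.

```python
try:
    # real PyCOMPSs decorators when the runtime is available
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on
except ImportError:
    # sequential stand-ins so the sketch also runs without PyCOMPSs
    def task(**kwargs):
        def decorator(function):
            return function
        return decorator

    def compss_wait_on(obj):
        return obj

@task(returns=1)
def increment(value):
    # each call becomes a task; independent calls may run in parallel
    return value + 1

def main(values):
    futures = [increment(v) for v in values]      # spawns one task per element
    return [compss_wait_on(f) for f in futures]   # synchronizes the results
```

Note that the application logic stays ordinary sequential Python; only the decorator and the synchronization call expose the parallelism to the runtime.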

COMPSs screenshot

Releases

PyCOMPSs is based on COMPSs. COMPSs version 1.3 was released in November 2015, version 1.4 in May 2016 and version 2.0 in November 2016.

New features in COMPSs v1.3

  • Runtime
    • Persistent workers: workers can be deployed on computing nodes and persist during all the application lifetime, thus reducing the runtime overhead. The previous implementation of workers based on a per task process is still supported.
    • Enhanced logging system
    • Interoperable communication layer: different inter-node communication protocols are supported by implementing the Adaptor interface (JavaGAT and NIO implementations already included)
    • Simplified cloud connectors interface
    • JClouds connector
  • Python/PyCOMPSs
    • Added constraints support
    • Enhanced methods support
    • Lists accepted as a task parameter type
    • Support for user decorators
  • Tools
    • New monitoring tool: with new views, such as workload, and the possibility of visualizing information about previous runs
    • Enhanced tracing mechanism
  • Simplified execution scripts
  • Simplified installation on supercomputers through better scripts

New features in COMPSs v1.4

  • Runtime
    • Added support for Docker
    • Added support for Chameleon Cloud
    • Object cache for persistent workers
    • Improved error management
    • Added connector for submitting tasks to MN supercomputer from external COMPSs applications
    • Bug-fixes
  • Python/PyCOMPSs
    • General bug-fixes
  • Tools
    • Enhanced Tracing mechanism:
    • Reduced overhead using native Java API
    • Added support for communications instrumentation
    • Added support for PAPI hardware counters
  • Known Limitations
    • When executing Python applications with constraints in the cloud the initial VMs must be set to 0

New features in COMPSs v2.0 (released November 2016)

  • Runtime:
    • Upgrade to Java 8
    • Support to remote input files (input files already at workers)
    • Integration with Persistent Objects
    • Elasticity with Docker and Mesos
    • Multi-processor support (CPUs, GPUs, FPGAs)
    • Dynamic constraints with environment variables
    • Scheduling taking into account the full tasks graph (not only ready tasks)
    • Support for SLURM clusters
    • Initial COMPSs/OmpSs integration
    • Replicated tasks: Tasks executed in all the workers
    • Explicit Barrier
  •  Python:
    • Python user events and HW counters tracing
    • Improved PyCOMPSs serialization. Added support for lambda and generator parameters.
  •  C:
    • Constraints support
  •  Tools:
    • Improved current graph visualization on COMPSs Monitor
  •  Improvements:
    • Simplified Resource and Project files (no backward compatibility)
    • Improved binding workers execution (use pipes instead of Java Process Builders)
    • Simplified cluster job scripts and supercomputers configuration
    • Several bug fixes
  • Known Limitations:
    • When executing python applications with constraints in the cloud the initial VMs must be set to 0

New features in PyCOMPSs/COMPSs v2.1 (released June 2017)

  • New features:
    • Runtime:
      • New annotations to simplify tasks that call external binaries
      • Integration with other programming models (MPI, OmpSs,..)
      • Support for Singularity containers in Clusters
      • Extension of the scheduling to support multi-node tasks (MPI apps as tasks)
      • Support for Grid Engine job scheduler in clusters
      • Language flag automatically inferred in runcompss script
      • New schedulers based on tasks’ generation order
      • Core affinity and over-subscribing thread management in multi-core cluster queue scripts (used with MKL libraries, for example)
    • Python:
      • @local annotation to support simpler data synchronizations in master (requires installing guppy)
      • Support for args and kwargs parameters as task dependencies
      • Task versioning support in Python (multiple behaviors of the same task)
      • New Python persistent workers that reduce overhead of Python tasks
      • Support for task-thread affinity
      • Tracing extended to support for Python user events and HW counters (with known issues)
    • C:
      • Extension of file management API (compss_fopen, compss_ifstream, compss_ofstream, compss_delete_file)
      • Support for task-thread affinity
    • Tools:
      • Visualization of not-running tasks in current graph of the COMPSs Monitor
  • Improvements
    • Improved PyCOMPSs serialization
    • Improvements in cluster job scripts and supercomputers configuration
    • Several bug fixes
  • Known Limitations
    • When executing Python applications with constraints in the cloud the <InitialVMs> property must be set to 0
    • Tasks that invoke Numpy and MKL may experience issues if tasks use a different number of MKL threads. This is due to  the fact that MKL reuses threads in the different calls and it does not change the number of threads from one call to another.

New features in PyCOMPSs/COMPSs v2.3 (released June 2018)

  • Runtime
    • Persistent storage API implementation based on Redis (distributed as default implementation with COMPSs)
    • Support for FPGA constraints and reconfiguration scripts
    • Support for PBS Job Scheduler and the Archer Supercomputer
  • Java
    • New API call to delete objects in order to reduce application memory usage
  • Python
    • Support for Python 3
    • Support for Python virtual environments (venv)
    • Support for running PyCOMPSs as a Python module
    • Support for tasks returning multiple elements (returns=#)
    • Automatic import of dummy PyCOMPSs API
  • C
    • Persistent worker with Memory-to-memory transfers
    • Support for arrays (no serialization required)
  • Improvements
    • Distribution with docker images
    • Source Code and example applications distribution on Github
    • Automatic inference of task return
    • Improved obsolete object cleanup
    • Improved tracing support for applications using persistent memory
    • Improved finalization process to reduce zombie processes
    • Several bug fixes
  • Known limitations
    • Tasks that invoke Numpy and MKL may experience issues if a different MKL threads count is used in different tasks. This is due to the fact that MKL reuses threads in the different calls and it does not change the number of threads from one call to another.

New features in PyCOMPSs/COMPSs v2.5 (released June 2019)

  • Runtime:
    • New Concurrent direction type for task parameter.
    • Multi-node tasks support for native (Java, Python) tasks. Previously, multi-node tasks were only possible with @mpi or @decaf tasks.
    • @Compss decorator for executing compss applications as tasks.
    • New runtime API to synchronize files without opening them.
    • Customizable task failure management with the “onFailure” task property.
    • Enabled master node to execute tasks.
  • Python:
    • Partial support of numba in tasks.
    • Support for collection as task parameter.
    • Supported task inheritance.
    • New persistent MPI worker mode (alternative to subprocess).
    • Support to ARM MAP and DDT tools (with MPI worker mode).
  • C:
    • Support for task without parameters and applications without src folder.
  • Improvements:
    • New task property “targetDirection” to indicate the direction of the target object in object methods. Replaces the “isModifier” task property.
    • Warnings for deprecated or incorrect task parameters.
    • Improvements in Jupyter for Supercomputers.
    • Upgrade of runcompss_docker script to docker stack interface.
    • Several bug fixes.
  • Known Limitations:
    • Tasks that invoke Numpy and MKL may experience issues if a different MKL threads count is used in different tasks. This is due to the fact that MKL reuses threads in the different calls and it does not change the number of threads from one call to another.
    • C++ objects declared as arguments in coarse-grain tasks must be passed in the task methods as object pointers in order to have proper dependency management.
    • Master as worker is not working for executions with persistent worker in C++.
    • Coherence and concurrent writing in parameters annotated with the “Concurrent” direction must be managed by the underlying distributed storage system.
    • Delete file calls for files used as input can produce a significant synchronization of the main code.

PyCOMPSs/COMPSs PIP installation package

This is a new feature available since January 2017.

Installation:

  • Check the dependencies in the PIP section of the PyCOMPSs installation manual (available at the documentation section of compss.bsc.es). Be sure that the target machine satisfies the mentioned dependencies.
  • The installation can be done in various alternative ways:
    • Use PIP to install the official PyCOMPSs version from the PyPI live repository:
      sudo -E python2.7 -m pip install pycompss -v
    • Use PIP to install PyCOMPSs from a pycompss.tar.gz
      sudo -E python2.7 -m pip install pycompss-version.tar.gz -v
    • Use the setup.py script
      sudo -E python2.7 setup.py install

Internal report

How multi-scale applications can be developed using PyCOMPSs (accessible by HBP members only):

https://collaboration.humanbrainproject.eu/documents/10727/3235212/HBP_Multi-scale_in_PyCOMPSs_M30_SP7_WP7.2_T7.2.2_v1.0.docx/d187d1f5-c27c-42a3-9833-3cee3d62fb46

Date of release: June 2019
Version of software: 2.5
Version of documentation: 2.5
Software available: http://compss.bsc.es
Documentation: https://www.bsc.es/research-and-development/software-and-apps/software-list/comp-superscalar/documentation
Responsible: BSC Workflows and Distributed Computing Group: support-compss@bsc.es
Requirements & dependencies: http://compss.bsc.es/releases/compss/latest/docs/COMPSs_Installation_Manual.pdf?tracked=true
Target system(s): Supercomputers or clusters with different nodes, distributed computers, grid and cloud architectures

OmpSs

The development of OmpSs was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.



OmpSs is a fine-grained programming model oriented to shared memory environments, with a powerful runtime that leverages low-level APIs (e.g. CUDA/OpenCL) and manages data dependencies (memory regions). It exploits task level parallelism and supports asynchronicity, heterogeneity and data movement.

The new version 15.06 provides the following new features as compared to version 15.04 that was part of the HBP-internal Platform Release in M18:

  • Socket aware (scheduling taking into account processor socket)
  • Reductions (mechanism to accumulate results of tasks more efficiently)
  • Work sharing (persistence of data in the worker) mechanisms

Date of release: June 2016
Version of software: 16.06.3
Version of documentation: December 13, 2016
Software available: https://pm.bsc.es/ompss-downloads
Documentation: OmpSs website: https://pm.bsc.es/ompss-docs/book/index.html
Responsible: BSC Programming Models Group: pm-tools@bsc.es
Requirements & dependencies: http://pm.bsc.es/ompss-docs/user-guide/installation.html#nanos-build-requirements
http://pm.bsc.es/ompss-docs/user-guide/installation.html#mercurium-build-requirements
Target system(s): Any with shared memory (supercomputers, clusters, workstations, …)

Deflect Client Library

The development of Deflect Client Library was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Deflect is a C++ library to develop applications that can send and receive pixel streams from other Deflect-based applications, for example DisplayCluster. The following applications are provided which make use of the streaming API:

  • DesktopStreamer: A small utility that allows the user to stream the desktop.
  • SimpleStreamer: A simple example to demonstrate streaming of an OpenGL application.
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster
https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s):

RTNeuron

The development of RTNeuron in the HPAC Platform was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.


RTNeuron is a scalable real-time rendering tool for the visualisation of neuronal simulations based on cable models. Its main utility is twofold: the interactive visual inspection of structural and functional features of the cortical column model and the generation of high quality movies and images for presentations and publications. The package provides three main components:

  • A high level C++ library.
  • A Python module that wraps the C++ library and provides additional tools.
  • The Python application script rtneuron-app.py.

rtneuron-app.py covers a wide variety of scenarios. In case the user needs finer control of the rendering, e.g. for movie production or to speed up the exploration of different data sets, the Python wrapping is the way to go. It can be used through an IPython shell started directly from rtneuron-app.py, or by importing the module rtneuron into one's own Python programs. GUI overlays for specific use cases can be created using PyQt and QML.

RTNeuron is available on the pilot system JULIA and on JURECA as environment module.

RTNeuron in aixCAVE
Neuron rendered by RTNeuron
Visual representation of cell dyes
Simulation playback
Interactive circuit slicing
Connection browsing
Date of release: February 2018
Version of software: 2.13.0
Version of documentation: 2.13.0
Software available: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html; Open sourcing scheduled for June 2018
Documentation: https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html, https://www.youtube.com/watch?v=wATHwvRFGz0
Responsible: Samuel Lapere
Requirements & dependencies: BBP SDK, Boost, Equalizer, OpenSceneGraph, osgTransparency, Python, Qt, NumPy, OpenMP, VRPN, CUDA, ZeroEQ
Target system(s):

Remote Connection Manager (RCM)

The development of Remote Connection Manager (RCM) was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


The Remote Connection Manager (RCM) is an application that allows HPC users to perform remote visualisation on Cineca HPC clusters.

The tool can be used to:

  • Visualize the data produced on Cineca’s HPC systems (scientific visualization);
  • Analyse and inspect data directly on the systems;
  • Debug and profile parallel codes running on the HPC clusters.

The graphical interface of RCM allows HPC users to easily create remote displays and to manage them (connect, kill, refresh).

Screenshot of Remote Connection Manager (RCM)
Date of release: April 2015
Version of software: 1.2
Version of documentation: 1.2
Software available: http://www.hpc.cineca.it/content/remote-visualization-rcm
Documentation: http://www.hpc.cineca.it/content/remote-visualization-rcm
Responsible: Roberto Mucci (superc@cineca.it)
Requirements & dependencies: The “Remote Connection Manager” works on the following operating systems: Windows, Linux, Mac OS X
(OS X Mountain Lion users need to install XQuartz: http://xquartz.macosforge.org/landing/)
Target system(s): Notebooks, office computers

Livre

The development of Livre was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


Livre is an out-of-core rendering engine that has the following features:

  • Distributed rendering using the Equalizer parallel rendering framework.
  • Octree-based out-of-core rendering.
  • Visualisation of pre-processed UVF-format volume data sets.
  • Real-time voxelisation and visualisation of surface meshes using OpenGL 4.2 extensions.
  • Real-time voxelisation and visualisation of Blue Brain Project (BBP) morphologies.
  • Real-time voxelisation and visualisation of local-field potentials in BBP circuits.
  • Multi-node, multi-GPU rendering.
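The core idea behind octree-based out-of-core rendering is that the volume is split into bricks, with 2^L bricks per axis at octree level L, and only the bricks needed for the current view are loaded from disk. The sketch below illustrates this brick addressing in Python; it is a conceptual example only, not Livre's actual data structures:

```python
# Conceptual sketch of octree brick addressing for out-of-core volume
# rendering (illustrative only -- not Livre's actual implementation).
# At octree level L the volume is split into 2**L bricks per axis, and
# only bricks intersecting the view are paged in from disk.

def brick_index(point, level, volume_size=1.0):
    """Return the (i, j, k) brick containing a point at a given level."""
    bricks_per_axis = 2 ** level
    scale = bricks_per_axis / volume_size
    return tuple(min(int(c * scale), bricks_per_axis - 1) for c in point)

def child_containing(point, level, volume_size=1.0):
    """Octree child slot (0-7) holding the point one level deeper."""
    i, j, k = brick_index(point, level + 1, volume_size)
    return (i & 1) | ((j & 1) << 1) | ((k & 1) << 2)

# The point (0.6, 0.1, 0.9) falls in brick (2, 0, 3) at level 2.
print(brick_index((0.6, 0.1, 0.9), level=2))  # (2, 0, 3)
```

Descending the tree this way lets the renderer refine resolution locally: distant or occluded regions stay at coarse levels, while nearby bricks are fetched at deeper levels.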
Data rendered with Livre
Date of release: April 2015
Version of software: 0.3
Version of documentation: 0.3
Software available: https://github.com/BlueBrain/Livre
Documentation: https://bluebrain.github.io/
Responsible: EPFL: Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: OpenMP, Tuvok, ZeroEQ, FlatBuffers, Boost, Equalizer, Collage, Lunchbox, dash, OpenGL, PNG, Qt
Target system(s):

DisplayCluster

The development of DisplayCluster was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.


DisplayCluster is a software environment for interactively driving large-scale tiled displays. It provides the following functionality:

  • View media such as high-resolution imagery, PDFs and video interactively.
  • Receive content from remote sources such as laptops, desktops or parallel remote visualization machines using the Deflect library.
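On a tiled display wall, each tile process renders only the content windows that overlap its portion of the wall. The sketch below shows this mapping in Python; it is a conceptual illustration, not DisplayCluster's actual code:

```python
# Conceptual sketch of mapping a content window onto the tiles of a
# display wall (illustrative only -- not DisplayCluster's actual code).
# Each tile process renders only content rectangles overlapping it.

def overlapping_tiles(window, tile_w, tile_h, cols, rows):
    """Return (col, row) of every tile a window rectangle touches."""
    x, y, w, h = window
    first_col = max(int(x // tile_w), 0)
    first_row = max(int(y // tile_h), 0)
    last_col = min(int((x + w - 1) // tile_w), cols - 1)
    last_row = min(int((y + h - 1) // tile_h), rows - 1)
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 3000x1000 window at (500, 500) on a 3x2 wall of 1920x1080 tiles
# touches the four upper-left tiles.
print(overlapping_tiles((500, 500, 3000, 1000), 1920, 1080, 3, 2))
```

When content moves or resizes, recomputing this overlap set tells the wall which tile processes must redraw, so the rest of the wall stays idle.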
DisplayCluster on a mobile tiled display wall
DisplayCluster on a tiled display wall
Date of release: 2013
Version of software: 0.5
Version of documentation: 0.5
Software available: https://github.com/BlueBrain/DisplayCluster
https://github.com/BlueBrain/Deflect
Documentation: https://bluebrain.github.io/
Responsible: EPFL, Stefan Eilemann (stefan.eilemann@epfl.ch)
Requirements & dependencies: Boost, LibJPEGTurbo, Qt, GLUT, OpenGL, Lunchbox, FCGI, FFMPEG, MPI, Poppler, TUIO, OpenMP
Target system(s): Tiled display walls