The Virtual Brain (TVB) is a large-scale brain simulator. With a community of thousands of users around the world, TVB has become a validated, popular and standard choice for the simulation of whole-brain activity. TVB users can create simulations using neural mass models that can produce outputs for different analyses and modalities. TVB allows scientists to explore and analyze both simulated and experimental data, and it contains analytic tools for evaluating relevant scientific parameters in light of that data. The current implementation of TVB is written in Python, with limited large-scale parallelization over different parameters. The objective of TVB-HPC is to enable large-scale parallelization of TVB simulations by using high performance computing to explore large parameter spaces for the models. With this approach, neuroscientists can define their models in a domain-specific language based on NeuroML and automatically generate code which can run either on GPUs or on CPUs with different architectures and optimizations. The result is a framework that hides the complexity of writing robust parallel code and offers neuroscientists fast and efficient access to high performance computing. TVB-HPC is publicly available on GitHub and, at the end of HBP project phase SGA2, it will be possible to launch large parameter-space simulations using code automatically generated with this framework via the HBP Collaboratory.
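The following NumPy sketch illustrates the kind of computation involved: a brute-force sweep over one model parameter (here, a global coupling strength for a toy rate model on a random connectome). It is a conceptual illustration only, not the TVB-HPC API or one of its generated kernels; all model and parameter names are made up for the example.

```python
# Conceptual sketch (not the TVB-HPC API): a brute-force parameter sweep over the
# global coupling strength of a toy neural mass model on a small random connectome.
# TVB-HPC generates comparable per-parameter kernels automatically and runs them
# in parallel on CPUs or GPUs instead of this sequential Python loop.
import numpy as np

def simulate(coupling, weights, n_steps=1000, dt=0.1, tau=1.0):
    """Euler integration of a toy rate model: dx/dt = (-x + coupling * W @ tanh(x)) / tau."""
    n_nodes = weights.shape[0]
    x = np.random.default_rng(0).uniform(-1.0, 1.0, n_nodes)
    trace = np.empty((n_steps, n_nodes))
    for t in range(n_steps):
        x += dt * (-x + coupling * weights @ np.tanh(x)) / tau
        trace[t] = x
    return trace

rng = np.random.default_rng(42)
weights = rng.random((16, 16))          # toy structural connectivity
couplings = np.linspace(0.0, 0.5, 8)    # the parameter space to explore
results = {c: simulate(c, weights) for c in couplings}
print({c: float(np.var(trace)) for c, trace in results.items()})
```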
Arbor is a software library designed from the ground up for simulators of large networks of multi-compartment neurons on hybrid/accelerated/many-core computer architectures.
Performance portability was completed for the three main target HPC architectures available through the HBP: Intel x86 CPUs (AVX2 and AVX512), Intel KNL (AVX512) and NVIDIA GPUs (CUDA).
Optimized kernels are automatically generated to target each architecture, and the system used in Arbor can be extended to new architectures in the future.
The other enhancements and features implemented in Arbor are:
Fully parallelized event generation and queuing from spikes.
Efficient sampling of model state (e.g. voltage and current) in the CPU and GPU implementations.
Significant refactoring to prepare the code for general release.
A Python interface for users.
The source code was released publicly on GitHub with an open source BSD license, along with documentation on Read the Docs, and automatic testing was set up on Travis CI.
The development of VTK-m was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.
VTK-m is a scientific visualization and analysis framework that offers a wealth of building blocks to create visualization and analysis applications. VTK-m facilitates scaling those applications to massively parallel shared memory systems, and it will – due to its architecture – most likely also run efficiently on future platforms.
HPX is a task-based programming model. It simplifies the formulation of well-scaling, highly-parallel algorithms. Integrating this programming model into VTK-m streamlines the formulation of its parallel building blocks and thus makes their deployment on present and emerging HPC platforms more efficient. Since neuroscientific applications require more and more compute power as well as memory, harnessing the available resources will become a challenge in itself. By combining VTK-m and HPX into task-based analysis and visualization, we expect to provide suitable tools to effectively face this challenge and facilitate building sophisticated interactive visual analysis tools, tailored to the neuroscientists’ needs.
Parallel primitive algorithms required for VTK-m have been added to HPX along with API support to enable the full range of visualization algorithms developed for VTK-m. A new scheduler has been developed that accepts core/numa placement hints from the programmer such that cache reuse can be maximized and traffic between sockets minimized. High performance tasks that access data shared by application and visualization can use this capability to improve performance. The thread pool management was improved to allow visualization tasks, communication tasks, and application tasks to execute on different cores if necessary, which reduces latency between components and improves the overall throughput of the distributed application. RDMA primitives have been added to the HPX messaging layer. These improvements make it possible to scale HPX applications to very high node/core counts. Respective tests have been successful on 10k nodes using 650k cores.
The development of PLIViewer was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.
The PLIViewer is visualization software for 3D-Polarized Light Imaging (3D-PLI), to interactively explore the scalar and vector datasets; it provides additional methods to transform data, thus revealing new insights that are not available in the raw representations. The high resolution provided by 3D-PLI produces massive, terabyte-scale datasets, which makes visualization challenging.
The PLIViewer tackles this problem by providing functionality to select areas of interest from the dataset, and options for downscaling. It makes it possible to interactively compute and visualize Orientation Distribution Functions (ODFs) and polar plots from the vector field, which reveal mesoscopic and macroscopic scale information from the microscopic dataset without significant loss of detail. The PLIViewer equips neuroscientists with the specialized visualization tools needed to explore 3D-PLI datasets through direct and interactive visualization of the data.
Figure captions: The original dataset: Fibre Orientation Maps rendered on top of Retardation map; A full slice from the Fibre Orientation Map of a Vervet Monkey; Orientation Distribution Functions (ODFs) rendered with Streamline Tractography; Close-up view of the ODFs.
The development of this technology was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.
Complementary to other viewers and visualization implementations for NEST simulations, this technology offers a rendering of activity and membrane potentials in a neural network simulated with NEST.
A prototypical implementation exists that is based on VTK, the widely used visualisation toolkit. This implementation can generally be run by computational neuroscientists on their workstations, imposing only moderate hardware requirements. Experiments using rendering on high-performance computing infrastructure were successful.
The results indicate that this component is extensible towards large-scale simulations that require HPC resources and thus produce large output data. The high-fidelity rendering used in this case provides very high quality images that may be suitable for publications (proof-of-concept image below).
Figure captions: Rendering of color-coded membrane potentials on spatial neurons from a running NEST simulation; Proof of concept of a high-quality rendering of spatially organized point neurons.
The Multi-View Framework is a software component that offers functionality to combine various visual representations of one or more data sets in a coordinated fashion. Software components offering visualization capabilities can be included in such a network, as well as software components offering other functionality, such as statistical analysis. Multi-display scenarios can be addressed by the framework, as coordination information can be distributed over the network between view instances running on distributed machines.
The framework is composed of three libraries: nett, nett-python and nett-connect. nett implements a light-weight underlying messaging layer enabling the communication between views, whereas nett-python implements a Python binding for nett, which enables the integration of Python-based software components into a multi-view setup. nett-connect adds functionality to this basic communication layer that enables non-experts to create multi-view setups according to their specific needs and workflows.
Interactive optimization of parameters for structural plasticity in neural network models (top left); comparative analysis of NEST simulations (top right); statistical analysis of NEST simulations (bottom left); multi-device and multi-user scenarios (bottom right)
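nett's own API is not reproduced here; as a rough conceptual analogue, the sketch below uses pyzmq to show how two views might exchange a coordination event (a neuron selection) over a publish/subscribe channel, which is the role the nett messaging layer plays between distributed view instances.

```python
# Conceptual analogue only (not the nett API): coordinated views exchanging a
# selection event over a publish/subscribe channel, here using pyzmq. nett plays
# a comparable messaging role between view instances, including across machines.
import json
import time
import zmq

ctx = zmq.Context()
publisher = ctx.socket(zmq.PUB)            # e.g. a view that owns the selection
publisher.bind("tcp://127.0.0.1:5556")

subscriber = ctx.socket(zmq.SUB)           # e.g. a statistics view reacting to it
subscriber.connect("tcp://127.0.0.1:5556")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "")
time.sleep(0.2)                            # allow the subscription to propagate

publisher.send_string(json.dumps({"event": "select_neurons", "ids": [17, 42]}))
print(json.loads(subscriber.recv_string()))
```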
Science has driven the development of the NEST simulator for the past 20 years. Originally created to simulate the propagation of synfire chains on single-processor workstations, NEST has been pushed continuously to address new scientific questions and computer architectures. Prominent examples include studies on spike-timing-dependent plasticity in large simulations of cortical networks, the verification of mean-field models, and models of Alzheimer's disease, Parkinson's disease and tinnitus. Recent developments include a significant reduction in memory requirements, as demonstrated by a record-breaking simulation of 1.86 billion neurons connected by 11.1 trillion synapses on the Japanese K supercomputer, paving the way for brain-scale simulations.
Running on everything from laptops to the world’s largest supercomputers, NEST is configured and controlled by high-level Python scripts, while harnessing the power of C++ under the hood. An extensive testsuite and systematic quality assurance ensure the reliability of NEST.
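As an illustration of this Python-level control, the following minimal PyNEST script builds, connects and simulates a small network. Model and parameter names follow the NEST 2.x API that was current at the time (e.g. spike_detector, which later versions rename); the concrete values are chosen only for the example.

```python
# Minimal PyNEST script (NEST 2.x API): build, connect and simulate a small
# network of integrate-and-fire neurons entirely from Python.
import nest

nest.ResetKernel()

neurons = nest.Create("iaf_psc_alpha", 100)           # 100 leaky integrate-and-fire neurons
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_detector")              # called "spike_recorder" in NEST 3.x

nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 2.0, "delay": 1.5})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)                                  # simulate one second
print(nest.GetStatus(recorder, "n_events")[0], "spikes recorded")
```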
The development of NEST is driven by the demands of neuroscience and carried out in a collaborative fashion at many institutions around the world, coordinated by the non-profit, member-based NEST Initiative. NEST is released under the GNU General Public License version 2 or later.
How NEST has been improved in HBP
Continuous dynamics
The continuous dynamics code in NEST enables simulations of rate-based model neurons in the event-based simulation scheme of the spiking simulator NEST. The technology was included and released with NEST 2.14.0.
Furthermore, additional rate-based models for the Co-Design Project “Visuo-Motor Integration” (CDP4) have been implemented and are scheduled for the NEST 2.16.0 release.
NESTML is a domain-specific language that supports the specification of neuron models in a precise and concise syntax, based on the syntax of Python. Model equations can either be given as a simple string of mathematical notation or as an algorithm written in the built-in procedural language. The equations are analyzed by NESTML to compute an exact solution if possible, or to select an appropriate numeric solver otherwise.
This technology couples the simulation software NEST and UG4 by means of the MUSIC library. NEST sends only spike trains, i.e. the times at which spiking occurs; UG4 receives them in the form of events (timestamps) arriving at synapses. The time course of the extracellular potential in a cube (representing a piece of tissue) is simulated based on the arriving spike data. The evolution of the membrane potential in space and time is described by the Xylouris-Wittum model.
Link to this release (2017): https://github.com/UG4
The development of TOUCH was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
Efficient spatial joins are pivotal for many applications and particularly important for geographical information systems or for the simulation sciences where scientists work with spatial models. Past research has primarily focused on disk-based spatial joins; efficient in-memory approaches, however, are important for two reasons: a) main memory has grown so large that many datasets fit in it and b) the in-memory join is a very time-consuming part of all disk-based spatial joins. In this paper we develop TOUCH, a novel in-memory spatial join algorithm that uses hierarchical data-oriented space partitioning, thereby keeping both its memory footprint and the number of comparisons low. Our results show that TOUCH outperforms known in-memory spatial-join algorithms as well as in-memory implementations of disk-based join approaches. In particular, it has a one order of magnitude advantage over the memory-demanding state of the art in terms of number of comparisons (i.e., pairwise object comparisons), as well as execution time, while it is two orders of magnitude faster when compared to approaches with a similar memory footprint. Furthermore, TOUCH is more scalable than competing approaches as data density grows.
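For illustration, the sketch below shows the naive in-memory baseline that such algorithms improve on: a nested-loop join over minimum bounding rectangles that performs |A| x |B| pairwise comparisons. It is not TOUCH's hierarchical data-oriented partitioning, merely the quadratic baseline whose comparison count TOUCH is designed to reduce.

```python
# Baseline in-memory spatial join on minimum bounding rectangles (MBRs).
# This naive nested loop performs |A| * |B| comparisons; TOUCH's hierarchical
# data-oriented partitioning exists precisely to avoid this quadratic cost.
def mbr_intersect(a, b):
    """MBRs as (xmin, ymin, xmax, ymax); True if they overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def nested_loop_join(dataset_a, dataset_b):
    return [(a, b) for a in dataset_a for b in dataset_b if mbr_intersect(a, b)]

print(nested_loop_join([(0, 0, 2, 2), (5, 5, 6, 6)], [(1, 1, 3, 3), (7, 7, 8, 8)]))
```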
MSPViz is a visualization tool for the Model of Structural Plasticity. It uses a visualisation technique based on representing the neuronal information through abstract levels, with a set of schematic representations at each level. The multilevel structure and the design of the representations constitute an approach that provides organized views and facilitates visual analysis tasks.
Each view has been enhanced by adding line and bar charts to analyse trends in simulation data. Filtering and sorting capabilities can be applied on each view to ease the analysis. Other views, such as connectivity matrices and force-directed layouts, have been incorporated, enriching the already existing views and improving the analysis process. This tool has been optimised to reduce rendering and data loading times, even from remote sources such as WebDAV servers.
Figure captions: Screenshot of MSPViz; View of MSPViz to investigate structural plasticity models on different levels of abstraction: connectivity of a single neuron; View of MSPViz to investigate structural plasticity models on different levels of abstraction: full network connectivity.
The development of VIOLA was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
VIOLA (VIsualizer Of Layer Activity) is a tool to visualize activity in multiple 2D layers in an interactive and efficient way. It gives an insight into spatially resolved time series such as simulation results of neural networks with 2D geometry. The usage example shows how VIOLA can be used to visualize spike data from a NEST simulation (http://nest-simulator.org/) of an excitatory and an inhibitory neuron population with distance-dependent connectivity.
The development of TRANSFORMERS was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
Spatial joins are becoming increasingly ubiquitous in many applications, particularly in the scientific domain. While several approaches have been proposed for joining spatial datasets, each of them has a strength for a particular type of density ratio among the joined datasets. More generally, no single proposed method can efficiently join two spatial datasets in a robust manner with respect to their data distributions. Some approaches do well for datasets with contrasting densities while others do better with similar densities. None of them does well when the datasets have locally divergent data distributions.
Therefore, we develop TRANSFORMERS, an efficient and robust spatial join approach that is indifferent to such variations of distribution among the joined data. TRANSFORMERS achieves this feat by departing from the state of the art through adapting the join strategy and data layout to local density variations among the joined data. It employs a join method based on data-oriented partitioning when joining areas of substantially different local densities, whereas it uses big partitions (as in space-oriented partitioning) when the densities are similar, seamlessly switching between these two strategies at runtime.
The development of RUBIK was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
An increasing number of applications from finance, meteorology, science and others are producing time series as output. The analysis of the vast amount of time series is key to understand the phenomena studied, particularly in the simulation sciences, where the analysis of time series resulting from simulation allows scientists to refine the model simulated. Existing approaches to query time series typically keep a compact representation in main memory, use it to answer queries approximately and then access the exact time series data on disk to validate the result. The more precise the in-memory representation, the fewer disk accesses are needed to validate the result. With the massive sizes of today’s datasets, however, current in-memory representations oftentimes no longer fit into main memory. To make them fit, their precision has to be reduced considerably resulting in substantial disk access which impedes query execution today and limits scalability for even bigger datasets in the future. In this paper we develop RUBIK, a novel approach to compressing and indexing time series. RUBIK exploits that time series in many applications and particularly in the simulation sciences are similar to each other. It compresses similar time series, i.e., observation values as well as time information, achieving better space efficiency and improved precision. RUBIK translates threshold queries into two dimensional spatial queries and efficiently executes them on the compressed time series by exploiting the pruning power of a tree structure to find the result, thereby outperforming the state-of-the-art by a factor of between 6 and 23. As our experiments further indicate, exploiting similarity within and between time series is crucial to make query execution scale and to ultimately decouple query execution time from the growth of the data (size and number of time series).
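To make the query type concrete, the sketch below answers a threshold query by a full scan over uncompressed series; RUBIK answers the same kind of query on compressed data via a two-dimensional spatial index instead of scanning. The data layout used in the example is an assumption made purely for illustration.

```python
# Naive threshold query over uncompressed time series: report every series that
# exceeds the threshold somewhere inside the time window [t_start, t_end].
# RUBIK answers the same query type on compressed series via a 2D spatial index
# instead of this full scan.
def threshold_query(series, threshold, t_start, t_end):
    """series: dict mapping series id -> list of (time, value) samples."""
    hits = []
    for sid, samples in series.items():
        if any(t_start <= t <= t_end and v > threshold for t, v in samples):
            hits.append(sid)
    return hits

data = {"s1": [(0, 0.1), (1, 0.9)], "s2": [(0, 0.2), (1, 0.3)]}
print(threshold_query(data, 0.5, 0, 1))   # -> ['s1']
```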
NeuroScheme is a tool that allows users to navigate through circuit data at different levels of abstraction, using schematic representations for a fast and precise interpretation of the data. It also allows filtering, sorting and selections at the different levels of abstraction. Finally, it can be coupled with realistic visualization or other applications using the ZeroEQ event library developed in WP 7.3.
This application allows analyses based on a side-by-side comparison using its multi-panel views, and it also provides focus-and-context. Its different layouts enable arranging data in different ways: grid, 3D, camera-based, scatterplot-based or circular. It provides editing capabilities to create a scene from scratch or to modify an existing one.
ViSimpl, part of the NeuroScheme framework, is a prototype developed to analyse simulation data, using both abstract and schematic visualisations. This analysis can be done visually from temporal, spatial and structural perspectives, with the additional capability of exploring the correlations between input patterns and produced activity.
Figure captions: NeuroScheme screenshots; Overview of various neurons; User interface of ViSimpl visualising activity data emerging from a simulation of a neural network model.
Required: Qt4, nsol
Optional: Brion/BBPSDK (to access BBP data), ZeroEQ (to couple with other software)
Supported OS: Windows 7, Windows 8.1, Linux (tested on Ubuntu 14.04) and Mac OS X
The development of VisNEST was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
VisNEST is a tool for visualizing neural network simulations of the macaque visual cortex. It allows for exploring mean activity rates, connectivity of brain areas and information exchange between pairs of areas. In addition, it allows exploration of individual populations of each brain area and their connectivity used for simulation.
VisNEST screenshot
Date of release
Information available on demand.
Version of software
Information available on demand.
Version of documentation
Information available on demand.
Software available
Not publicly available yet. Please contact the developers in case of interest.
Documentation
Reference paper:
Nowke, Christian, Maximilian Schmidt, Sacha J. van Albada, Jochen M. Eppler, Rembrandt Bakker, Markus Diesmann, Bernd Hentschel, and Torsten Kuhlen. "VisNEST—Interactive analysis of neural activity data." In Biological Data Visualization (BioVis), 2013 IEEE Symposium on, pp. 65-72. IEEE, 2013.
The development of InDiProv was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
This server-side tool is meant to be used for the creation of provenance tracks in the context of interactive analysis tools and visualization applications. It is capable of tracking multi-view setups and multiple applications for a single user working with this ensemble. It is furthermore able to extract these tracks from the internal database into an XML-based standard format, such as the W3C PROV model or the OPM format. This enables integration with other tools used for provenance tracking and will ultimately feed into the UP.
Written in C++, Linux environment; MySQL server 5.6, JSON library for annotation, CodeSynthesis XSD for XML serialization and parsing, ZeroMQ library, Boost library, xerces-c library and mysqlcppcon library
The development of SCOUT was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
SCOUT is a structure-aware method for prefetching data along interactive spatial query sequences. Given the user input, which is a spatial range query sequence representing the structure explored (interactively) by the user, and the spatial dataset to be queried, SCOUT reduces the query response time by prefetching the data along the query sequence.
As with FLAT, both the query ranges in the query sequence and the spatial objects should be represented using minimum bounding rectangles.
SCOUT outperforms related prefetching techniques (e.g., Straight Line Extrapolation or Hilbert prefetching) with high prefetching accuracy, which translates into a one-order-of-magnitude speedup.
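As an illustration of the prefetching idea, the sketch below implements the straight-line-extrapolation baseline mentioned above: it predicts the next query range by repeating the displacement between the two most recent queries and prefetches the objects intersecting the predicted rectangle. SCOUT's structure-aware prediction is more sophisticated; this is only the baseline it is compared against.

```python
# Minimal sketch of straight-line extrapolation, a baseline prefetching idea
# SCOUT is compared against: predict the next range query by repeating the
# displacement between the two most recent queries, then prefetch objects that
# intersect the predicted MBR.
def predict_next_mbr(prev, curr):
    """MBRs as (xmin, ymin, xmax, ymax); shift curr by the last displacement."""
    dx = curr[0] - prev[0]
    dy = curr[1] - prev[1]
    return (curr[0] + dx, curr[1] + dy, curr[2] + dx, curr[3] + dy)

def prefetch(dataset, prev, curr):
    nxt = predict_next_mbr(prev, curr)
    return [obj for obj in dataset if not (obj[2] < nxt[0] or nxt[2] < obj[0]
                                           or obj[3] < nxt[1] or nxt[3] < obj[1])]

objects = [(4, 4, 5, 5), (9, 9, 10, 10)]
print(prefetch(objects, (0, 0, 2, 2), (2, 2, 4, 4)))   # predicted next range: (4, 4, 6, 6)
```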
Date of release
March 2015
Version of software
1.0
Version of documentation
1.0
Software available
Collaboratory, integrated in and part of BBP SDK tool set
The development of Score-P was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. Score-P is developed under a BSD 3-Clause (Open Source) License and governed by a meritocratic governance model.
Score-P offers the user a maximum of convenience by supporting a number of analysis tools. Currently, it works with Periscope, Scalasca, Vampir, and Tau and is open for other tools. Score-P comes together with the new Open Trace Format Version 2, the Cube4 profiling format and the Opari2 instrumenter.
Score-P is part of a larger set of tools for parallel performance analysis and debugging developed by the “Virtual Institute – High Productivity Supercomputing” (VI-HPS) consortium. Further documentation, training and support are available through VI-HPS.
The new version 1.4.2 provides the following new features (externally funded) as compared to version 1.4 that was part of the HBP-internal Platform Release in M18:
The development of Scalasca was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
Scalasca is a software tool that supports the performance optimisation of parallel programs by measuring and analysing their runtime behaviour. The analysis identifies potential performance bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes.
Scalasca targets mainly scientific and engineering applications based on the programming interfaces MPI and OpenMP, including hybrid applications based on a combination of the two. The tool has been specifically designed for use on large-scale systems including IBM Blue Gene and Cray XT, but is also well suited for small- and medium-scale HPC platforms. The software is available for free download under the New BSD open-source license.
Scalasca is part of a larger set of tools for parallel performance analysis and debugging developed by the “Virtual Institute – High Productivity Supercomputing” (VI-HPS) consortium. Further documentation, training and support are available through VI-HPS.
The new version 2.2.2 provides the following new features (externally funded) as compared to version 2.2 that was part of the HBP-internal Platform Release in M18:
The development of RTNeuron in the HPAC Platform was co-funded by the HBP during the second project phase (SGA1). This page is kept for reference but will no longer be updated.
RTNeuron is a scalable real-time rendering tool for the visualisation of neuronal simulations based on cable models. Its main utility is twofold: the interactive visual inspection of structural and functional features of the cortical column model and the generation of high quality movies and images for presentations and publications. The package provides three main components:
A high level C++ library.
A Python module that wraps the C++ library and provides additional tools.
The Python application script rtneuron-app.py.
A wide variety of scenarios is covered by rtneuron-app.py. In case the user needs finer control of the rendering, such as for movie production or to speed up the exploration of different data sets, the Python wrapping is the way to go. It can be used through an IPython shell started directly from rtneuron-app.py or by importing the module rtneuron into the user's own Python programs. GUI overlays can be created for specific use cases using PyQt and QML.
RTNeuron is available on the pilot system JULIA and on JURECA as environment module.
Figure captions: RTNeuron in aixCAVE; Neuron rendered by RTNeuron; Visual representation of cell dyes; Simulation playback; Interactive circuit slicing; Connection browsing.
Date of release
February 2018
Version of software
2.13.0
Version of documentation
2.13.0
Software available
https://developer.humanbrainproject.eu/docs/projects/RTNeuron/2.11/index.html; Open sourcing scheduled for June 2018
The development of Remote Connection Manager (RCM) was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
The Remote Connection Manager (RCM) is an application that allows HPC users to perform remote visualisation on Cineca HPC clusters.
The “Remote Connection Manager” works on the following operating systems: Windows, Linux, Mac OSX
(OSX Mountain Lion users need to install XQuartz: http://xquartz.macosforge.org/landing/)
The development of Paraver was co-funded by the HBP Ramp-up Phase. This page is kept for reference but will no longer be updated.
Paraver is a very flexible data browser. The metrics used are not hardwired into the tool but can be programmed. To compute them, the tool offers a large set of time functions, a filter module, and a mechanism to combine two timelines. This approach allows displaying a huge number of metrics with the available data. The analysis display allows computing statistics over any timeline and selected region, which makes it possible to correlate the information of up to three different time functions. To capture the expert's knowledge, any view or set of views can be saved as a Paraver configuration file; re-computing the same view with new data is then as simple as loading the saved file. The tool has proven very useful for performance analysis studies, giving much more detail about an application's behaviour than most performance tools available.
Screenshot of Paraver
The new version 4.6.0 (3rd February 2016) provides the following new features (externally funded) as compared to version 4.5.6 (February 2015) that was part of the HBP-internal Platform Release in M18:
Automatic workspaces on trace loading
Scalability improvements for traces with more than 64K rows
Support for wxWidgets 3
Traces with the same hierarchy can be combined for analysis
External tools integration
The new version 4.6.3 (16th November 2016) provides the following new features:
Added punctual information view to timelines
Added external tool Run->Spectral from timelines
Trace load time reduced by 25%
Histogram new features: show only totals and short/long column labels
Run app dialog usability improvements
Date of release
16th of November 2016
Version of software
4.6.3
Version of documentation
3.1 (old, from 2001), but tutorials are available for newer versions
The development of FLAT was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
FLAT is a spatial indexing tool, which enables scalable range queries on (3D) spatial datasets. Given the user input, which should be a query range, and the dataset to be queried, FLAT returns all the objects that intersect with the query range.
In particular, both the query ranges and the spatial objects should be represented using minimum bounding rectangles, i.e. the geometric approximation bounding the underlying spatial object.
FLAT outperforms state-of-the-art spatial indexing techniques (e.g. R-trees, grid files) on extremely dense datasets.
Date of release
March 2015
Version of software
1.0
Version of documentation
1.0
Software available
Collaboratory, integrated and part of BBP SDK tool set
The development of CUBE: Score-P / Scalasca was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions
Performance metric,
Call path, and
System resource.
Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. In addition, Cube can display multi-dimensional Cartesian process topologies.
The Cube 4.x series report explorer and the associated Cube4 data format are provided for Cube files produced with the Score-P performance instrumentation and measurement infrastructure or with the Scalasca version 2.x trace analyzer (and other compatible tools). However, for backwards compatibility, Cube 4.x can also read and display Cube 3.x data.
Cube is part of a larger set of tools for parallel performance analysis and debugging developed by the “Virtual Institute – High Productivity Supercomputing” (VI-HPS) consortium. Further documentation, training and support are available through VI-HPS:
The new version 4.3.3 provides the following new features (externally funded) as compared to version 4.3.1 that was part of the HBP-internal Platform Release in M18:
The development of Extrae was co-funded by the HBP during the Ramp-up Phase. This page is kept for reference but will no longer be updated.
Extrae is an instrumentation and measurement system that gathers time-stamped information about the events of an application. It is the package devoted to generating Paraver trace files for a post-mortem analysis of a code run. It uses different interposition mechanisms to inject probes into the target application in order to gather information about the application's performance.
The new version 3.2.1 (3rd November 2015) provides the following new features as compared to version 3.1.0 that was part of the HBP-internal Platform Release in M18:
Support for MPI3 immediate collectives
Use Intel PEBS to sample memory references.
The new version 3.4.1 (23rd September 2016) provides the following new features:
Extended Java support through AspectJ and JVMTI
Improved CUDA and OpenCL support
Improved support for MPI-IO operations
Added instrumentation for system I/O and other system calls
Dependencies: libxml2 2.5.0; libunwind for Linux x86/x86-64/IA64/ARM. Optional: PAPI; DynInst; liberty and libbfd; MPI; OpenMP
Target system(s)
Any Unix/Linux system (supercomputers, clusters, servers, workstations, laptops …)
This website describes the results of the “High Performance Analytics and Computing” (HPAC) Platform of the Human Brain Project (HBP) from the first three project phases (Ramp-up Phase 10/2013-03/2016, SGA1 04/2016-03/2018 and SGA2 04/2018-03/2020).
Due to a major project-internal reorganisation, this website will no longer be updated after March 2020.
More recent information can be found on humanbrainproject.eu and ebrains.eu.
Information about the Fenix Research Infrastructure and the ICEI project, including resource access, are available on their website.
Follow EBRAINS Computing Services (@HBPHighPerfComp) and Fenix RI (@Fenix_RI_eu) on Twitter to learn about the most recent developments and about upcoming opportunities for calls and collaborations!