Today, exascale computers are characterized by billion-way parallelism. Computing at such extreme scales requires methods that scale perfectly and have optimal complexity. This project proposal brings together several crucial aspects of extreme-scale solving. First, the solver itself must be of optimal numerical complexity, a requirement that becomes more and more severe with increasing problem size, while at the same time scaling efficiently to extreme degrees of parallelism. Second, simulations on exascale systems will consume a great deal of electric power, requiring algorithms and implementations with low power consumption. To that end, the present project combines domain decomposition, parallel multigrid, and H-matrices. This combination has the potential to achieve top efficiency at extreme scales while still maintaining optimal complexity. To further increase parallelism, the approach is combined with dedicated methods for parallelization in time and with solvers for optimization problems; both offer additional parallelization potential. Algorithms and implementations will be evaluated for the energy efficiency of their problem solving, and criteria and models for the energy efficiency of numerical solvers will be developed in the project. The team has long-standing experience in cooperatively developing algorithms and software for large-scale HPC.
The part of the project carried out at USI is concerned with parallel-in-time methods. Exascale solvers for partial differential equations will have to exploit parallel algorithms beyond pure spatial parallelization. Combining time-parallelism with spatial domain decomposition into a new exascale-capable space-time-parallel approach will make it possible to considerably extend the strong scaling limit of purely spatial parallelization. In the course of the project, we will develop efficient time-parallel approaches for the project's benchmark problem and evaluate possible implementation strategies in order to optimize performance on state-of-the-art and future high-performance computer architectures.
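Time-parallel integration is commonly introduced via the Parareal iteration, which methods like PFASST generalize. As a purely illustrative, self-contained sketch (not the project's production code), the following applies Parareal to the scalar test equation y' = λy; the coarse propagator G and fine propagator F are both taken as explicit-Euler integrators here for brevity, and all parameters are arbitrary choices for the example:

```python
def euler(y, lam, dt, n):
    """n explicit-Euler substeps for y' = lam*y over an interval of length dt."""
    h = dt / n
    for _ in range(n):
        y += h * lam * y
    return y

def parareal(lam, y0, T, n_slices, n_iters):
    """Parareal iteration for y' = lam*y on [0, T], split into n_slices time slices.

    Coarse propagator G: 1 Euler step per slice.
    Fine propagator   F: 50 Euler steps per slice.
    Update: U[n+1] <- G(U_new[n]) + F(U[n]) - G(U[n]).
    """
    dt = T / n_slices
    G = lambda y: euler(y, lam, dt, 1)
    F = lambda y: euler(y, lam, dt, 50)

    # Iteration 0: cheap serial coarse prediction of the slice interface values.
    U = [y0]
    for n in range(n_slices):
        U.append(G(U[n]))

    for _ in range(n_iters):
        # The fine solves are independent per slice; in a real code they run
        # concurrently, one slice per processor. Here they run serially.
        Fk = [F(U[n]) for n in range(n_slices)]
        U_new = [y0]
        for n in range(n_slices):
            U_new.append(G(U_new[n]) + Fk[n] - G(U[n]))
        U = U_new
    return U[-1]
```

After as many iterations as there are slices, the iteration reproduces the serial fine solution exactly; the practical interest lies in reaching acceptable accuracy in far fewer iterations, so that the concurrent fine solves yield a speedup over serial fine time stepping.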
The EXASOLVERS project is a cooperation between our group, Prof. Dr. G. Wittum at the Goethe Center for Scientific Computing, Prof. Dr. W. Hackbusch at the Max Planck Institute for Mathematics in the Sciences in Leipzig, Prof. Dr. V. Schulz at the University of Trier, Prof. Dr. L. Grasedyck at RWTH Aachen, and Prof. Dr. M. Resch at the High Performance Computing Center Stuttgart.
At ICS, Pietro Benedusi is also working on this project.
The ICS contributes to the EXASOLVERS project in particular through the development of parallel-in-time solution methods for large-scale problems. A first step is the development of the "multi-level spectral deferred correction method" (MLSDC) in [Speck, Ruprecht et al., 2013]. MLSDC makes it possible to interpret the "parallel full approximation scheme in space and time" (PFASST), see [here], as a time-parallel version of MLSDC. Both MLSDC and PFASST employ a hierarchy of space-time meshes. We explore a number of strategies for coarsening the representation of the problem on the coarser levels, thereby reducing the overhead of coarse-level "sweeps" in MLSDC and optimizing the speedup provided by PFASST.
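The building block underlying MLSDC and PFASST is the spectral deferred correction (SDC) sweep. As a minimal, self-contained sketch (again not the project's code), the following performs SDC sweeps for the scalar test equation y' = λy on a single time step, using three Gauss-Lobatto nodes and an explicit-Euler corrector; the node locations and quadrature weights are hardcoded for this example:

```python
def sdc_sweeps(lam, y0, dt, n_sweeps):
    """SDC sweeps for y' = lam*y on one time step [0, dt].

    Nodes tau = (0, 1/2, 1) are the 3 Gauss-Lobatto points on the unit
    interval; row m of S holds the integrals of the quadratic Lagrange basis
    polynomials over [tau_m, tau_{m+1}]. Each explicit-Euler sweep gains
    roughly one order of accuracy, up to the order of the underlying
    collocation rule (here 4).
    """
    f = lambda y: lam * y
    tau = (0.0, 0.5, 1.0)
    S = (( 5/24, 1/3, -1/24),   # node-to-node integration weights, [tau0, tau1]
         (-1/24, 1/3,  5/24))   # node-to-node integration weights, [tau1, tau2]

    y = [y0, y0, y0]            # initial guess: spread y0 to all nodes
    for _ in range(n_sweeps):
        y_old = y[:]
        for m in range(2):
            dt_sub = dt * (tau[m + 1] - tau[m])
            # quadrature of the previous iterate's f over [tau_m, tau_{m+1}]
            quad = dt * sum(S[m][j] * f(y_old[j]) for j in range(3))
            # explicit-Euler correction sweep (Gauss-Seidel style update)
            y[m + 1] = y[m] + dt_sub * (f(y[m]) - f(y_old[m])) + quad
    return y[-1]
```

In MLSDC, sweeps like this are performed on a hierarchy of discretizations, with cheap coarse-level sweeps accelerating convergence on the fine level; PFASST then pipelines these sweeps across multiple time steps handled by different processors.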
In order to demonstrate the feasibility of space-time parallelism for very large-scale parallel simulations, the strong scaling of PFASST combined with a parallel multigrid solver in space is currently being explored in runs using up to all 458K cores of the IBM Blue Gene/Q JUQUEEN at Jülich Supercomputing Centre. The combination PFASST+PMG (parallel multigrid) is thus a member of the High-Q Club, which comprises the codes that run and scale on the full machine. Studies of the parallel performance of PFASST+PMG can be found in [Speck et al., 2014] and [Ruprecht et al., 2013].
This project is funded by Swiss National Science Foundation grant 145271 under the lead agency agreement, as part of the DFG project ExaSolvers within the Priority Programme 1648 "Software for Exascale Computing" (SPPEXA). Important precursory work for the results presented here, e.g. [Speck, Ruprecht et al., 2012], was obtained with funding and support from the Swiss High Performance and Productivity Computing initiative HP2C.
Prof. Dr. Rolf Krause; PI; ICS Institute of Computational Science
Pietro Benedusi, PhD; Collaborator; ICS Institute of Computational Science