- Towards multi-fidelity machine learning in scientific computing on GPU clusters
- 06.03.2019
- USI Lugano Campus - Lugano
- ICS Events
Towards multi-fidelity machine learning in scientific computing on GPU clusters
Wednesday - 06.03 at USI Lugano Campus, room A-34, Red building - 10:30-11:30
Talk by: Peter Zaspel, University of Basel, Switzerland
The solution of parametric partial differential equations or other parametric problems is the main component of many applications in scientific computing. Such applications include, but are not limited to, uncertainty quantification, inverse problems and optimization. To avoid the re-implementation of scientific simulation codes, the use of snapshot-based (non-intrusive) techniques for the solution of parametric problems becomes very attractive.
In this presentation, I will report on ongoing work to solve parametric problems with a higher-dimensional parameter space by means of approximation in reproducing kernel Hilbert spaces. In the presence of regularization, approximation in reproducing kernel Hilbert spaces is equivalent to so-called "kernel ridge regression", a classical approach in machine learning. In that sense, results on the use of machine learning for the efficient approximation of parametric problems will be discussed for examples in computational fluid mechanics and quantum chemistry.
One challenge in parametric problems with a high-dimensional parameter space is the large number of simulation snapshots that have to be computed in order to achieve a low approximation error with respect to the parameter space. If a single simulation is computationally expensive, many simulations of this kind become computationally intractable. To overcome this, we have introduced a multi-fidelity kernel ridge regression approach based on the sparse grid combination technique or multi-index approximation. In fact, this approach allows us to significantly reduce the number of expensive calculations by adding successively coarser simulation snapshots.
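The following hypothetical two-level sketch conveys the multi-fidelity idea in its simplest form (it is not the combination technique of the talk): a surrogate is trained on many cheap low-fidelity snapshots, and only the low/high-fidelity discrepancy, assumed cheap to learn, is trained on a few expensive snapshots. The model functions, sample counts, and polynomial surrogates are all illustrative assumptions.

```python
import numpy as np

def f_hi(p):
    # Stand-in for an expensive "fine" simulation
    return np.sin(8.0 * p) + 0.2 * p

def f_lo(p):
    # Stand-in for a cheap "coarse" simulation: captures the trend, misses a term
    return np.sin(8.0 * p)

# Many cheap snapshots, only a handful of expensive ones
p_lo = np.linspace(0.0, 1.0, 50)
p_hi = np.linspace(0.0, 1.0, 8)

# Two-level combination: surrogate for the coarse model plus a surrogate
# for the (smooth, easy-to-learn) discrepancy f_hi - f_lo
c_lo = np.polyfit(p_lo, f_lo(p_lo), deg=9)
c_d = np.polyfit(p_hi, f_hi(p_hi) - f_lo(p_hi), deg=1)

def predict(p):
    return np.polyval(c_lo, p) + np.polyval(c_d, p)

p_test = np.linspace(0.0, 1.0, 200)
mf_err = np.max(np.abs(predict(p_test) - f_hi(p_test)))
```

Here 8 expensive evaluations suffice because the discrepancy is far smoother than the high-fidelity model itself; the combination technique generalizes this to a whole hierarchy of resolutions.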
While this approach softens the computational burden of generating the simulation snapshots, large-scale training in kernel ridge regression with millions of training samples is almost impossible if traditional matrix factorizations are used in the training process. To solve this issue, we have developed a hierarchical matrix approach that solves the related dense linear systems in log-linear time. This hierarchical matrix approach was parallelized on clusters of graphics hardware (GPUs) to obtain the best possible performance.
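A small sketch of the property hierarchical matrices exploit (this is an illustration, not the talk's H-matrix code): off-diagonal blocks of a kernel matrix that couple well-separated point clusters are numerically low-rank, so they can be stored and applied in compressed form. The kernel, cluster geometry, and truncation tolerance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, size=500))  # cluster A
y = np.sort(rng.uniform(3.0, 4.0, size=500))  # cluster B, well separated from A

# Asymptotically smooth kernel 1/|x - y| on the off-diagonal block A x B
K = 1.0 / np.abs(x[:, None] - y[None, :])

# Truncated SVD reveals the low numerical rank of this 500 x 500 block
U, s, Vt = np.linalg.svd(K)
rank = int(np.sum(s > 1e-6 * s[0]))          # numerical rank at tolerance 1e-6
K_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank factorization of the block
rel_err = np.linalg.norm(K - K_lr) / np.linalg.norm(K)
```

Storing and applying such blocks in factored form, recursively over a tree of point clusters, is what yields the log-linear complexity mentioned above; in practice the factors are built directly (e.g. by adaptive cross approximation) rather than via a dense SVD.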
The results presented in my talk are based on joint work with Michael Griebel, Helmut Harbrecht, Bing Huang, Christian Rieger and Anatole von Lilienfeld (in alphabetical order).
Peter Zaspel received his PhD in mathematics from the University of Bonn. After a postdoc at the University of Heidelberg / HITS, he now works as a postdoc at the University of Basel. His research interests are in higher-dimensional approximation (with uncertainty quantification and machine learning), algebraic linear solvers, high performance computing (e.g. GPUs) and applications.
Host: Prof. Michael Multerer
- USI Lugano Campus
- Via G. Buffi 13