Description
This tutorial will address the basic functionalities of the PSBLAS and AMG4PSBLAS libraries for the parallelization of computationally intensive scientific applications. We will discuss the principles behind the parallel implementation of iterative solvers for sparse linear systems in a distributed-memory paradigm, and examine the routines for multiplying sparse matrices by dense matrices, solving block-diagonal systems with triangular diagonal blocks, preprocessing sparse matrices, and performing several additional dense matrix operations. Moreover, we will delve into the ideas underlying the construction of effective parallel preconditioners capable of extracting performance from supercomputers with hundreds of thousands of computational cores.
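To fix ideas, the central kernel behind the iterative solvers mentioned above is the sparse-matrix-times-dense-vector product. The following is a minimal, serial Python sketch of that kernel in CSR (compressed sparse row) storage; it is an illustration only, not the PSBLAS API, and the function name `csr_spmv` and the small example matrix are invented for this sketch. Libraries such as PSBLAS distribute the rows of the matrix across processes and overlay this loop with the communication needed to gather remote entries of `x`.

```python
def csr_spmv(indptr, indices, data, x):
    """Compute y = A @ x for a CSR-stored sparse matrix A.

    indptr[i]:indptr[i+1] delimits the nonzeros of row i;
    indices holds their column positions, data their values.
    """
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        # Accumulate only the nonzero contributions of row i.
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Example (hypothetical data): the upper triangular matrix
# A = [[4, 1, 0],
#      [0, 3, 2],
#      [0, 0, 5]]
indptr = [0, 2, 4, 5]
indices = [0, 1, 1, 2, 2]
data = [4.0, 1.0, 3.0, 2.0, 5.0]
print(csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]))  # [5.0, 5.0, 5.0]
```

In a distributed-memory setting each process would own a contiguous block of rows (and the matching entries of `y`), so this per-row loop runs independently on each process once the halo entries of `x` have been exchanged.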
The tutorial will highlight how these approaches are related to, and motivated by, the needs of EoCoE-II applications.