Communication Overlapping Krylov Subspace Methods for Distributed Memory Systems
Abstract
Many high performance computing applications in computational fluid dynamics, electromagnetics, and other domains need to solve a linear system of equations $Ax=b$. When $A$ is large and sparse, Krylov Subspace Methods (KSMs) are used. In this thesis, we propose communication overlapping KSMs. We start with the Conjugate Gradient (CG) method, which is used when $A$ is sparse and symmetric positive definite. Recent variants of CG include the Pipelined CG (PIPECG) method, which overlaps the allreduce in CG with independent computations, i.e., one Preconditioner application (PC) and one Sparse Matrix-Vector Product (SPMV).
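To make the overlap concrete, the following is a minimal sketch of one pipelined iteration step using a nonblocking MPI allreduce. The helper routines spmv, apply_pc and local_dot are hypothetical stand-ins, and the scalar recurrences that consume the reduced values are omitted; this illustrates the overlapping structure, not the thesis implementation.

```c
#include <mpi.h>

/* Hypothetical local kernels; their implementations are omitted. */
void spmv(const double *x, double *y, int n);      /* y = A x (local rows) */
void apply_pc(const double *w, double *z, int n);  /* z = M^{-1} w         */
double local_dot(const double *a, const double *b, int n);

/* One pipelined iteration step: the nonblocking allreduce on the two
 * scalars proceeds while the PC and the SPMV execute. */
void pipecg_overlap_step(const double *r, const double *w,
                         double *m, double *q, int n, MPI_Comm comm,
                         double *gamma, double *delta)
{
    double loc[2] = { local_dot(r, r, n), local_dot(w, r, n) };
    double glob[2];
    MPI_Request req;

    /* Start reducing gamma = (r,r) and delta = (w,r). */
    MPI_Iallreduce(loc, glob, 2, MPI_DOUBLE, MPI_SUM, comm, &req);

    /* Independent work hides the communication: one PC, one SPMV. */
    apply_pc(w, m, n);   /* m = M^{-1} w */
    spmv(m, q, n);       /* q = A m      */

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    *gamma = glob[0];    /* these feed the PIPECG scalar recurrences */
    *delta = glob[1];
}
```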
As we move towards the exascale era, the time for global synchronization and communication in the allreduce grows with the large number of cores available in exascale systems. The allreduce thus becomes the performance bottleneck and leads to poor scalability of CG. It therefore becomes necessary to reduce the number of allreduces in CG and to overlap the larger allreduce time with more independent computations than PIPECG provides. Towards this goal, we developed PIPECG-OATI (PIPECG-One Allreduce per Two Iterations), which reduces the number of allreduces from three per iteration to one per two iterations and overlaps it with two PCs and two SPMVs. For better scalability through more overlapping, we also developed the Pipelined s-step CG method, which reduces the number of allreduces to one per s iterations and overlaps it with s PCs and s SPMVs. We compared our methods with state-of-the-art CG variants on a variety of platforms and demonstrated that our methods give 2.15x - 3x speedup over the existing methods.
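A minimal sketch of the one-allreduce-per-two-iterations idea follows, again with the hypothetical helpers from the previous sketch: the inner products needed for two consecutive iterations are packed into one buffer and reduced by a single nonblocking allreduce, which is hidden behind two PC applications and two SPMVs. The buffer layout and vector bookkeeping here are illustrative assumptions, not the thesis algorithm.

```c
#include <mpi.h>

/* Hypothetical local kernels, as in the previous sketch. */
void apply_pc(const double *w, const double *z, int n);
void spmv(const double *x, double *y, int n);

/* Sketch: all inner products needed for iterations i and i+1 are
 * packed into loc[0..ndots-1] and reduced by ONE nonblocking
 * allreduce, hidden behind two PCs and two SPMVs. */
void oati_two_iteration_step(double *u, double *m, double *w,
                             double *p, double *s, double *q,
                             double *loc, double *glob, int ndots,
                             int n, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Iallreduce(loc, glob, ndots, MPI_DOUBLE, MPI_SUM, comm, &req);

    /* Two preconditioner applications and two SPMVs overlap the
     * single, larger reduction. */
    apply_pc(u, m, n);  spmv(m, w, n);
    apply_pc(p, s, n);  spmv(s, q, n);

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    /* glob[] now supplies the scalars for both iterations' updates. */
}
```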
We have also generalized our research beyond the parallelization of CG on multi-node CPU systems in two directions. First, we developed communication overlapping variants of KSMs other than CG, namely the Conjugate Residual (CR), Minimum Residual (MINRES) and BiConjugate Gradient Stabilised (BiCGStab) methods, which apply to matrices with different properties. The pipelined variants give up to 1.9x, 2.5x and 2x speedup over the state-of-the-art MINRES, CR and BiCGStab methods, respectively. Second, we developed communication overlapping CG variants for GPU-accelerated nodes, where we proposed and implemented three hybrid CPU-GPU execution strategies for the PIPECG method. The first two strategies achieve task parallelism and the third achieves data parallelism. Our experiments on GPUs showed that our methods give 1.45x - 3x average speedup over existing CPU- and GPU-based implementations, and the third method gives up to 6.8x speedup for problems that cannot fit in GPU memory. We also implemented GPU-related optimizations for the PIPECG-OATI method and showed performance improvements over other GPU implementations of PCG and PIPECG on multiple nodes with multiple GPUs.
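As an illustration of the task-parallel flavor of hybrid CPU-GPU execution (a sketch under assumed names, not the thesis implementation), the code below lets the GPU run a CSR SPMV on its own stream while the CPU drives a nonblocking allreduce, then joins both sides.

```c
#include <mpi.h>
#include <cuda_runtime.h>

/* Illustrative CSR SPMV kernel: one thread per matrix row. */
__global__ void spmv_csr(int n, const int *rowptr, const int *col,
                         const double *val, const double *x, double *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        double sum = 0.0;
        for (int j = rowptr[row]; j < rowptr[row + 1]; j++)
            sum += val[j] * x[col[j]];
        y[row] = sum;
    }
}

/* Task parallelism: the GPU computes the SPMV on its own stream
 * while the CPU progresses the nonblocking allreduce. */
void hybrid_overlap_step(int n, const int *d_rowptr, const int *d_col,
                         const double *d_val, const double *d_x,
                         double *d_y, double *loc, double *glob,
                         int ndots, cudaStream_t stream, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Iallreduce(loc, glob, ndots, MPI_DOUBLE, MPI_SUM, comm, &req);

    int threads = 256, blocks = (n + threads - 1) / threads;
    spmv_csr<<<blocks, threads, 0, stream>>>(n, d_rowptr, d_col,
                                             d_val, d_x, d_y);

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* CPU-side communication done */
    cudaStreamSynchronize(stream);       /* GPU-side computation done   */
}
```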
Related items
- Study of Higher Order Split-Step Methods for Stiff Stochastic Differential Equations
  Singh, Samar B (2018-04-06): Stochastic differential equations (SDEs) play an important role in many branches of engineering and science including economics, finance, chemistry, biology, mechanics etc. SDEs (with m-dimensional Wiener process) arising ...
- Optimal Control Of Numerical Dissipation In Modified KFVS (m-KFVS) Using Discrete Adjoint Method
  Anil, N (2010-06-04): The kinetic schemes, also known as Boltzmann schemes, are based on the moment-method-strategy, where an upwind scheme is first developed at the Boltzmann level and after taking suitable moments we arrive at an upwind scheme ...
- Rotationally Invariant Kinetic Upwind Method (KUMARI)
  Malagi, Keshav Shrinivas (2009-03-11): In the quest for a high fidelity numerical scheme for CFD it is necessary to satisfy demands on accuracy, conservation, positivity and upwinding. Recently the requirement of rotational invariance has been added to this ...