Tiling Stencil Computations To Maximize Parallelism
Abstract
Stencil computations are iterative kernels often used to simulate the change in a discretized spatial domain over time (e.g., computational fluid dynamics) or to solve for unknowns in a discretized space by converging to a steady state (e.g., partial differential equations). They are commonly found in many scientific and engineering applications. Most stencil computations allow tile-wise concurrent start, i.e., there exists a face of the iteration space and a set of tiling hyperplanes such that all tiles along that face can be started concurrently. This provides load balance and maximizes parallelism.
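For concreteness, a minimal example of such a kernel: a 1-D Jacobi-style stencil that repeatedly updates every point from its neighbors' values at the previous time step. The function name and problem sizes below are placeholders of our own choosing, not code from this work.

/* 1-D Jacobi-style stencil: a 2-D (time x space) iteration space.
 * Two buffers alternate as source and destination across time steps. */
#define N 1000
#define T 500

void jacobi_1d(double A[2][N])
{
    for (int t = 0; t < T; t++)             /* time dimension */
        for (int i = 1; i < N - 1; i++)     /* space dimension */
            A[(t + 1) % 2][i] =
                (A[t % 2][i - 1] + A[t % 2][i] + A[t % 2][i + 1]) / 3.0;
}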
Loop tiling is a key transformation used to exploit both data locality and parallelism from stencils simultaneously. Numerous works target improving locality, controlling the frequency of synchronization, and reducing the volume of communication wherever applicable. However, concurrent start-up of tiles, which translates directly into perfect load balance and often a reduction in the frequency of synchronization, has been ignored. Existing automatic tiling frameworks often choose hyperplanes that lead to pipelined start-up and load imbalance. We address this issue with a new tiling technique that ensures concurrent start-up as well as perfect load balance whenever possible. We first provide necessary and sufficient conditions on tiling hyperplanes to enable concurrent start for programs with affine data accesses. We then describe an iterative approach to find such hyperplanes.
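As a sketch of what such a condition looks like in the simplest setting: for a 2-D iteration space, concurrent start along a face with normal f is possible when f lies strictly inside the cone of the two tiling hyperplane normals, i.e., f = l1*h1 + l2*h2 with l1 > 0 and l2 > 0. The helper below (our own naming, restricted to 2-D purely for illustration) checks this via Cramer's rule.

#include <math.h>
#include <stdbool.h>

/* Does face normal f lie strictly inside the cone of hyperplane
 * normals h1 and h2, i.e., f = l1*h1 + l2*h2 with l1, l2 > 0?
 * Illustrative 2-D sketch only; not the general n-D test. */
static bool allows_concurrent_start(const double h1[2], const double h2[2],
                                    const double f[2])
{
    double det = h1[0] * h2[1] - h2[0] * h1[1];
    if (fabs(det) < 1e-12)
        return false;                 /* hyperplanes not independent */
    /* Solve the 2x2 system [h1 h2] * (l1, l2)^T = f by Cramer's rule. */
    double l1 = (f[0] * h2[1] - h2[0] * f[1]) / det;
    double l2 = (h1[0] * f[1] - f[0] * h1[1]) / det;
    return l1 > 0.0 && l2 > 0.0;
}

For the 1-D Jacobi stencil above, with face normal f = (1, 0): the hyperplanes (1, 0) and (1, 1) fail this test (l2 = 0, giving pipelined start), while (1, 1) and (1, -1) pass it, allowing all tiles along the t = 0 face to start concurrently.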
It is not possible to directly apply automatic tiling techniques to periodic stencils because of the wrap-around dependences they contain. To overcome this, we use iteration-space folding techniques as a pre-processing stage, after which our technique can be applied without any further change.
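To illustrate the folding idea on the simplest case, the sketch below (our own construction, not code from this work) folds a periodic 1-D Jacobi domain in half: lo[j] stores A[j] and hi[j] stores A[N-1-j]. The wrap-around dependence between A[0] and A[N-1] then becomes a local dependence at j == 0, every remaining dependence is nearest-neighbor in the folded coordinates, and standard tiling machinery can be applied.

#define N 1024               /* domain size; must be even for this folding */
#define H (N / 2)

/* One time step of a periodic 1-D Jacobi stencil on the folded domain.
 * lo[j] holds A[j], hi[j] holds A[N-1-j]; the two halves meet locally
 * at j == 0 (the wrap-around point) and at j == H-1 (the fold point). */
void folded_step(const double lo[H], const double hi[H],
                 double lo_new[H], double hi_new[H])
{
    for (int j = 0; j < H; j++) {
        double l_left  = (j == 0)     ? hi[0]     : lo[j - 1]; /* A[j-1]   */
        double l_right = (j == H - 1) ? hi[H - 1] : lo[j + 1]; /* A[j+1]   */
        double h_left  = (j == H - 1) ? lo[H - 1] : hi[j + 1]; /* A[N-2-j] */
        double h_right = (j == 0)     ? lo[0]     : hi[j - 1]; /* A[N-j]   */
        lo_new[j] = (l_left + lo[j] + l_right) / 3.0;
        hi_new[j] = (h_left + hi[j] + h_right) / 3.0;
    }
}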
We have implemented our techniques on top of Pluto, a source-level automatic parallelizer. Experimental evaluation on a 12-core Intel Westmere shows that our code is able to outperform a tuned domain-specific stencil code generator by 4% to 2x, and previous compiler techniques by a factor of 1.5x to 15x. For the swim benchmark from SPECFP2000, we achieve an improvement of 5.12x on a 12-core Intel Westmere and 2.5x on a 16-core AMD Magny-Cours machine, over the auto-parallelizer of the Intel C Compiler.
Related items
Showing items related by title, author, creator and subject.
- Architecting Resource Management Services For Computational Grids : Patterns And Performance Models
  Prem, Hema (2011-09-13)
- An Optimizing Code Generator for a Class of Lattice-Boltzmann Computations
  Pananilath, Irshad Muhammed (2018-03-09): Lattice-Boltzmann method (LBM), a promising new particle-based simulation technique for complex and multiscale fluid flows, has seen tremendous adoption in recent years in computational fluid dynamics. Even with a ...
- Studies In Automatic Management Of Storage Systems
  Pipada, Pankaj (2015-11-16): Autonomic management is important in storage systems and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending upon the specific problem. In this ...