Analysis, Design, and Acceleration of DNN Training Preprocessing Pipelines
Abstract
The performance of deep neural network (DNN) training is a function of both the training compute latency and the latency to fetch and preprocess the input data needed to train the model. With advancements in GPUs and ML software platforms, training compute on GPUs has seen substantial gains. These improvements have shifted the bottleneck to the CPU-based input pipeline, which preprocesses and transforms data before it is fed into the GPU accelerator for training. To accelerate the input pipeline, some prior works cache the intermediate data generated by an input processing step and reuse the cached data rather than recompute that step. These works cache the data either in memory or in storage, and require the entire output of a step to be cached. Given that modern training systems are equipped with substantial memory and storage, exploiting and optimizing both capacities is crucial. In this thesis, we propose Hybrid Cache (HyCache), a technique that automatically caches a subset of tensors from multiple intermediate steps across both memory and storage, without programmer involvement. HyCache can partially cache the outputs of preprocessing steps across memory and storage to maximize resource utilization, while recomputing the uncached data. HyCache provides a user-friendly library interface that determines the optimal tradeoff between recomputing a preprocessing step and caching its output in memory or storage. HyCache outperforms state-of-the-art prior approaches, delivering a raw pipeline throughput improvement of 1.11× to 10.1× over the regular preprocessing pipeline and an end-to-end training improvement of up to 1.67×.
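To make the partial-caching idea concrete, the following is a minimal Python sketch, not the thesis implementation: it assumes a hypothetical per-sample placement map (sample id to "memory", "storage", or "recompute") and wraps a single preprocessing step so each sample is served from an in-memory cache, from an on-storage cache, or by recomputation.

    import os
    import pickle

    class HybridCacheStep:
        """Illustrative sketch of hybrid memory/storage caching for one
        preprocessing step. All names and the placement policy here are
        hypothetical; the thesis derives the placement automatically."""

        def __init__(self, step_fn, placement, cache_dir="hycache"):
            self.step_fn = step_fn        # the preprocessing transform being wrapped
            self.placement = placement    # sample_id -> "memory" | "storage" | "recompute"
            self.mem_cache = {}           # subset of step outputs kept in memory
            self.cache_dir = cache_dir    # subset of step outputs kept on storage
            os.makedirs(cache_dir, exist_ok=True)

        def __call__(self, sample_id, raw_input):
            where = self.placement.get(sample_id, "recompute")
            if where == "memory":
                if sample_id not in self.mem_cache:
                    self.mem_cache[sample_id] = self.step_fn(raw_input)
                return self.mem_cache[sample_id]
            if where == "storage":
                path = os.path.join(self.cache_dir, f"{sample_id}.pkl")
                if os.path.exists(path):
                    with open(path, "rb") as f:
                        return pickle.load(f)
                out = self.step_fn(raw_input)
                with open(path, "wb") as f:
                    pickle.dump(out, f)
                return out
            # uncached samples are simply recomputed on every access
            return self.step_fn(raw_input)

A real system would derive the placement map by weighing each step's recompute cost against its output size and the available memory and storage capacity; here the map is supplied by the caller purely for illustration.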