Show simple item record

dc.contributor.advisor: Basu, Arkaprava
dc.contributor.author: Jha, Keshav Vinayak
dc.date.accessioned: 2024-08-27T09:40:24Z
dc.date.available: 2024-08-27T09:40:24Z
dc.date.submitted: 2024
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/6611
dc.description.abstract: The performance of deep neural network (DNN) training is a function of both the training compute latency and the latency to fetch and preprocess the input data needed to train the model. With advancements in GPUs and ML software platforms, training compute performance on GPUs has seen substantial gains. These improvements have shifted the bottleneck to the CPU-based input pipeline, which preprocesses and transforms data before it is fed into the GPU accelerator for training. To accelerate the input pipeline, some prior works cache intermediate data generated by an input processing step and reuse the cached data rather than recompute that step. Prior works cache the data either in memory or in storage, and require the entire output generated by a step to be cached. Given that modern training systems are equipped with substantial memory and storage, exploiting and optimizing both capacities is crucial. In this thesis, we propose Hybrid Cache (HyCache), a technique that automatically caches a subset of tensors from multiple intermediate steps in both memory and storage, without programmer involvement. HyCache can partially cache preprocessing step outputs across both memory and storage capacities to maximize resource utilization, while still allowing recomputation of the uncached data. HyCache provides a user-friendly library interface that determines the optimal tradeoff between recomputing a preprocessing step and caching across memory and storage. HyCache outperforms state-of-the-art prior approaches, delivering a raw pipeline throughput improvement ranging from 1.11× to 10.1× over the regular preprocessing pipeline, and an end-to-end training improvement of up to 1.67×.
dc.language.iso: en_US
dc.relation.ispartofseries: ;ET00621
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject: Caching
dc.subject: Storage
dc.subject: Deep Neural Networks
dc.subject: Preprocessing
dc.subject: Hybrid Cache
dc.subject.classification: Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science
dc.title: Analysis, Design, and Acceleration of DNN Training Preprocessing Pipelines
dc.type: Thesis
dc.degree.name: MTech (Res)
dc.degree.level: Masters
dc.degree.grantor: Indian Institute of Science
dc.degree.discipline: Engineering
