Show simple item record

dc.contributor.advisor: Ganapathy, Vinod
dc.contributor.author: Shanker, Kripa
dc.date.accessioned: 2025-11-06T04:37:20Z
dc.date.available: 2025-11-06T04:37:20Z
dc.date.submitted: 2025
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/7326
dc.description.abstract: Deep learning is being rapidly integrated into applications ranging from medical imaging to financial products, and organisations spend enormous financial resources to train deep learning models. However, many organisations do not have sufficient resources to host, manage, and scale these deep learning workloads in-house, and therefore outsource deep learning inference workloads to public cloud platforms. Outsourcing raises security and privacy risks for the trained models: the service provider controls all the software and hardware on its premises and has full access to the models deployed on its platform. A malicious or compromised cloud provider can steal a trained model or interfere with the inference workload, which may lead to financial losses and legal troubles for the model owner. This dissertation presents solutions to secure deep learning workloads on public cloud platforms using hardware-assisted trusted execution environments (TEEs). Intel introduced Software Guard Extensions (SGX), a hardware-based trusted execution environment, to run private computations on public cloud platforms. However, applications do not run out-of-the-box on the SGX platform because of restrictions that the SGX specifications impose to ensure the confidentiality and integrity of code and data. Applications must therefore be rewritten, or other methods employed, to avoid executing restricted instructions within the SGX enclave that holds the code and private data. To port commodity applications to SGX enclaves, the software community has developed multiple frameworks that adapt existing applications to the SGX specifications. At the beginning of this work, however, it was not clear which framework should be used to port deep learning workloads to SGX enclaves.
In the first part of this dissertation, we therefore study various frameworks that port applications to SGX, in order to find a suitable framework for porting deep learning workloads. The study focuses on the challenges of transitioning commodity applications to SGX enclaves. During the study, we observed that memory-intensive applications, such as deep learning workloads, incur a performance penalty when executing within the trusted execution environment offered by Intel SGX. Furthermore, SGX cannot securely use other untrusted resources, such as untrusted co-processors, that are commonly used to accelerate deep learning workloads. The second part of the dissertation therefore focuses on improving the performance of deep learning workloads on TEEs. It presents MazeNet, a framework that transforms pre-trained models into MazeNet models and deploys them on heterogeneous execution environments built from trusted and untrusted hardware, where the trusted hardware ensures the security of the model while the untrusted hardware accelerates the deep learning workload. MazeNet employs a secure outsourcing scheme that outsources both the linear and non-linear layers of deep learning models to untrusted hardware. Our experimental evaluation demonstrates that MazeNet improves throughput by 30x and reduces latency by 5x. (en_US)
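The abstract does not spell out MazeNet's secure outsourcing scheme, but the general idea of safely offloading a linear layer to untrusted hardware can be sketched with a well-known additive input-masking technique (as used, for example, in Slalom-style outsourcing). All names below are illustrative, and this sketch by itself protects only the input's privacy, not the weights, which a full scheme such as MazeNet's would also need to address.

```python
import random

# Hedged sketch: outsourcing a linear layer y = W x to an untrusted
# accelerator by additively masking the private input. This is a generic
# illustration, not MazeNet's actual protocol.

random.seed(0)

def matvec(W, v):
    # Plain dense matrix-vector product.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]  # layer weights
x = [random.uniform(-1, 1) for _ in range(4)]                      # private input

# Trusted side (enclave): draw a one-time mask r and precompute W r offline.
r = [random.uniform(-1, 1) for _ in range(4)]
unblind = matvec(W, r)

# Untrusted accelerator: sees only the blinded vector x + r and does the
# heavy matrix multiplication.
blinded = [a + b for a, b in zip(x, r)]
outsourced = matvec(W, blinded)

# Trusted side: unmask. By linearity, W(x + r) - W r = W x.
y = [o - u for o, u in zip(outsourced, unblind)]

expected = matvec(W, x)
assert all(abs(a - b) < 1e-9 for a, b in zip(y, expected))
```

Because the masked product is precomputable offline, the online cost inside the enclave is only vector additions and subtractions, which is what makes offloading the matrix multiplication to fast untrusted hardware worthwhile.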
dc.language.iso: en_US (en_US)
dc.relation.ispartofseries: ;ET01130
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. (en_US)
dc.subject: Security (en_US)
dc.subject: Machine Learning (en_US)
dc.subject: Deep Learning (en_US)
dc.subject: Software Engineering (en_US)
dc.subject: Trusted Execution Environments (en_US)
dc.subject: Hardware Security (en_US)
dc.subject: Cloud platform (en_US)
dc.subject: Cloud security (en_US)
dc.subject: Software Guard Extensions (en_US)
dc.subject: SGX (en_US)
dc.subject: MazeNet (en_US)
dc.subject.classification: Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Software engineering (en_US)
dc.title: Protecting Deep Learning Models on Cloud Platforms with Trusted Execution Environments (en_US)
dc.type: Thesis (en_US)
dc.degree.name: PhD (en_US)
dc.degree.level: Doctoral (en_US)
dc.degree.grantor: Indian Institute of Science (en_US)
dc.degree.discipline: Engineering (en_US)

