
dc.contributor.advisor: Gopinath, K
dc.contributor.advisor: Basu, Arkaprava
dc.contributor.author: Panwar, Ashish
dc.date.accessioned: 2022-07-25T05:52:40Z
dc.date.available: 2022-07-25T05:52:40Z
dc.date.submitted: 2022
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/5788
dc.description.abstract: Computers rely on the virtual memory abstraction to simplify programming, portability, and physical memory management, and to ensure isolation among co-running applications. However, virtual memory introduces a layer of indirection in the critical path of execution: the processor must translate each application-generated virtual address into the corresponding physical address before performing the computation. To accelerate virtual-to-physical address translation, processors cache recently used translations in Translation Lookaside Buffers (TLBs). Unfortunately, modern data-centric applications executing on large-memory servers experience frequent TLB misses. The processor services a TLB miss by walking the in-memory page tables, which often involves accessing physical memory. Consequently, for many big-data applications, the processor spends 30-50% of total cycles servicing TLB misses alone. Virtualization and non-uniform memory access (NUMA) architectures in multi-socket servers further exacerbate this overhead: virtualization adds an additional level of address translation, while NUMA can increase the latency of accessing page tables residing on a remote socket. The address translation overhead will grow further with the deeper page tables and multi-tiered memory systems of upcoming platforms. In short, virtual memory is showing its age in the era of data-centric computing.

In this thesis, we propose ways to moderate the overhead of virtual-to-physical address translation. The majority of this thesis focuses on huge pages. Processor designers have invested significant hardware in supporting huge pages to reduce the number and cost of TLB misses; e.g., the x86 architecture supports 2MB and 1GB huge pages. However, we find that operating systems often fail to harness the full potential of huge pages. This thesis highlights the pitfalls of current huge page management strategies and proposes various operating system enhancements to maximize the benefits of huge pages. We also address the effect of non-uniform memory accesses on address translation with NUMA-aware page table management. A key objective of this thesis is to avoid modifying applications or adding new features to the hardware. Therefore, all the solutions discussed in this thesis apply to current hardware and remain transparent to applications. All of our contributions are open-sourced.
dc.language.iso: en_US
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject: Operating Systems
dc.subject: Virtual Memory
dc.subject: Translation Lookaside Buffers
dc.subject: Huge Pages
dc.subject.classification: Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science
dc.title: Operating System Support for Efficient Virtual Memory
dc.type: Thesis
dc.degree.name: PhD
dc.degree.level: Doctoral
dc.degree.grantor: Indian Institute of Science
dc.degree.discipline: Engineering

