dc.contributor.advisor | Gopinath, K | |
dc.contributor.advisor | Basu, Arkaprava | |
dc.contributor.author | Panwar, Ashish | |
dc.date.accessioned | 2022-07-25T05:52:40Z | |
dc.date.available | 2022-07-25T05:52:40Z | |
dc.date.submitted | 2022 | |
dc.identifier.uri | https://etd.iisc.ac.in/handle/2005/5788 | |
dc.description.abstract | Computers rely on the virtual memory abstraction to simplify programming, enable portability, manage physical memory, and ensure isolation among co-running applications. However, it creates a layer of indirection in the critical path of execution: the processor must translate an application-generated virtual address into the corresponding physical address before performing the computation. To accelerate virtual-to-physical address translation, processors cache recently used addresses in Translation Lookaside Buffers (TLBs).
Unfortunately, modern data-centric applications executing on large-memory servers experience frequent TLB misses. The processor services TLB misses by walking the in-memory page tables, which often involves accessing physical memory. Consequently, for many big-data applications, the processor spends 30-50% of total cycles servicing TLB misses alone. Virtualization and non-uniform memory access (NUMA) architectures in multi-socket servers further exacerbate this overhead: virtualization adds an additional level of address translation, while NUMA can increase the latency of accessing page tables residing on a remote socket. The address translation overhead will increase further with the deeper page tables and multi-tiered memory systems of newer and upcoming systems. In short, virtual memory is showing its age in the era of data-centric computing.
In this thesis, we propose ways to moderate the overhead of virtual-to-physical address translation. The majority of this thesis focuses on huge pages. Processor designers have invested significant hardware resources in supporting huge pages to reduce the number and cost of TLB misses; for example, the x86 architecture supports 2MB and 1GB huge pages. However, we find that operating systems often fail to harness the full potential of huge pages. This thesis highlights the pitfalls of current huge page management strategies and proposes various operating system enhancements to maximize the benefits of huge pages. We also address the effect of non-uniform memory accesses on address translation with NUMA-aware page table management.
A key objective of this thesis is to avoid modifying applications or adding new features to the hardware. Therefore, all the solutions discussed in this thesis apply to current hardware and remain transparent to applications. All of our contributions are open-sourced. | en_US |
dc.language.iso | en_US | en_US |
dc.rights | I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. | en_US |
dc.subject | Operating Systems | en_US |
dc.subject | Virtual Memory | en_US |
dc.subject | Translation Lookaside Buffers | en_US |
dc.subject | Huge Pages | en_US |
dc.subject.classification | Research Subject Categories::TECHNOLOGY::Information technology::Computer science::Computer science | en_US |
dc.title | Operating System Support for Efficient Virtual Memory | en_US |
dc.type | Thesis | en_US |
dc.degree.name | PhD | en_US |
dc.degree.level | Doctoral | en_US |
dc.degree.grantor | Indian Institute of Science | en_US |
dc.degree.discipline | Engineering | en_US |