Show simple item record

dc.contributor.advisor: Gopinath, Kanchi
dc.contributor.advisor: Ganapathy, Vinod
dc.contributor.author: Gangar, Parth
dc.date.accessioned: 2025-02-05T09:03:33Z
dc.date.available: 2025-02-05T09:03:33Z
dc.date.submitted: 2024
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/6800
dc.description.abstract: The virtual memory abstraction simplifies programming and enhances portability, but it requires the processor to translate virtual addresses to physical addresses, which can be expensive. To speed up virtual-to-physical address translation, processors cache recently used translations in Translation Lookaside Buffers (TLBs) and further use huge (also known as large) pages to reduce TLB misses. For example, the x86 architecture supports 2MB and 1GB huge pages. However, fully harnessing the performance benefits of huge pages requires robust operating system support. For example, huge pages are notorious for creating memory bloat, a phenomenon wherein an application is allocated more physical memory than it needs. This leads to a tradeoff between performance and memory efficiency, wherein application performance can be improved at the potential expense of allocating extra physical memory. Ideally, a system should manage this tradeoff dynamically depending on the availability of physical memory at runtime. In this thesis, we highlight two major shortcomings of current OS-based solutions in dealing with this tradeoff. First, the majority of existing systems lack support for dynamic memory de-bloating. This leads to a scenario where either performance is compromised or memory capacity is wasted permanently. Second, even when existing systems support dynamic memory de-bloating, their strategies lead to unnecessary performance slowdown and fairness issues when multiple applications run concurrently. In this thesis, we address these issues with EMD (Efficient Memory De-bloating). The key insight in EMD is that different regions of an application's address space exhibit different amounts of memory bloat. Consequently, the tradeoff between memory efficiency and performance varies significantly within a given application: we find that memory bloat is typically concentrated in certain regions of the address space, and de-bloating such regions has minimal performance impact. Building on this insight, EMD employs a prioritization scheme for fine-grained, efficient, and fair reclamation of memory bloat. We show that this improves performance by up to 69% compared to HawkEye, the state-of-the-art OS-based huge page management system, and nearly eliminates the fairness pathologies of current systems. (A minimal sketch of per-region huge-page control on Linux follows this record.)
dc.language.iso: en_US
dc.relation.ispartofseries: ;ET00810
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject: Memory bloat
dc.subject: Linux kernel
dc.subject: Operating system
dc.subject: Memory management
dc.subject: Huge pages
dc.subject: Virtual-to-physical address translation
dc.subject: Memory de-bloating
dc.subject: Efficient Memory De-bloating
dc.subject.classification: Research Subject Categories::TECHNOLOGY::Information technology::Computer science
dc.title: Fair and Efficient Dynamic Memory De-bloating
dc.type: Thesis
dc.degree.name: MTech (Res)
dc.degree.level: Masters
dc.degree.grantor: Indian Institute of Science
dc.degree.discipline: Engineering
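
As context for the abstract above, the following is a minimal C sketch of per-region huge-page control on Linux using madvise(2). It only illustrates the region-level tradeoff between TLB performance and memory bloat that the abstract describes; the region names and sizes are invented for the example, and this is application-level hinting, not EMD's OS-level mechanism.

/*
 * Illustrative sketch: Linux lets a process hint Transparent Huge Page (THP)
 * policy per address range via madvise(2). Backing densely used regions with
 * 2MB huge pages cuts TLB misses, while keeping sparsely used regions on 4KB
 * base pages avoids allocating memory that is never touched (memory bloat).
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (64UL << 20)   /* 64 MB, a multiple of the 2MB huge-page size */

int main(void)
{
    /* Two anonymous mappings; the kernel may back them with huge pages if THP is enabled. */
    void *hot = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    void *sparse = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (hot == MAP_FAILED || sparse == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* Densely used region: request huge pages to reduce TLB misses. */
    if (madvise(hot, REGION_SIZE, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    /* Sparsely used region: huge pages here would mostly be bloat, so opt out. */
    if (madvise(sparse, REGION_SIZE, MADV_NOHUGEPAGE) != 0)
        perror("madvise(MADV_NOHUGEPAGE)");

    /* Touch all of the hot region, but only one base page of the sparse one;
     * with 4KB pages the untouched remainder of the sparse region is never allocated. */
    memset(hot, 1, REGION_SIZE);
    memset(sparse, 1, 4096);

    printf("hot region at %p (huge pages requested), sparse region at %p (base pages)\n",
           hot, sparse);

    munmap(hot, REGION_SIZE);
    munmap(sparse, REGION_SIZE);
    return EXIT_SUCCESS;
}

The sketch captures only the static end of the tradeoff; the thesis is about revisiting such decisions dynamically and fairly at runtime, inside the OS, as memory pressure changes.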

