Show simple item record

dc.contributor.advisor  Nandy, SK
dc.contributor.author  Lakshmi, J
dc.date.accessioned  2025-10-30T10:39:55Z
dc.date.available  2025-10-30T10:39:55Z
dc.date.submitted  2010
dc.identifier.uri  https://etd.iisc.ac.in/handle/2005/7258
dc.description.abstract  The emergence of multi-core servers and the growing need for green computing have led to the resurgence of system virtualization. System virtualization has re-emerged as a solution to many critical challenges faced by enterprise computing infrastructure. Among these, server consolidation is one of the most prominent. Server consolidation addresses the problem of co-hosting multiple, independent application servers on a single physical machine. The availability of various virtualization technologies for commodity systems has accelerated the adoption of virtualization-based solutions, especially in the enterprise segment. However, many emerging virtualization solutions still face significant challenges, particularly in I/O virtualization architectures. Current technologies often fall short in addressing performance and security issues for I/O workloads on virtualized servers. Virtualized servers enable co-hosting of multiple independent servers as virtual machines (VMs) on the same physical machine. These VMs share some or all physical resources, and compared to non-virtualized servers, virtualized servers offer better resource utilization. In virtualization terminology, the number of independent servers that can be hosted as VMs on a virtualized server is referred to as the consolidation ratio. This ratio depends on the machine's capacity and the workload of the individual servers being consolidated. Most capacity planning efforts have focused on the CPU component of workloads, which is effective for compute-intensive tasks. In such cases, the consolidation ratio can be determined by aggregating the CPU workload of the VMs against the system's CPU capacity. However, when workloads include significant compute and I/O components, the consolidation ratio must also consider I/O workload aggregation against the system's available I/O capacity.
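The capacity-planning argument above can be sketched as a small calculation. This is an illustrative toy, not the thesis's methodology; the function name and the workload figures are hypothetical.

```python
def consolidation_ratio(vm_demands, capacity):
    """Greedy estimate: how many VMs (taken in order) fit within capacity.

    vm_demands: per-VM average demand for one resource (e.g. CPU cores,
    or NIC bandwidth in Mbit/s); capacity: the server's total for that
    resource. Hypothetical helper for illustration only.
    """
    used, fitted = 0.0, 0
    for demand in vm_demands:
        if used + demand > capacity:
            break
        used += demand
        fitted += 1
    return fitted

# CPU-only planning: 8 cores, each VM averaging 1.5 cores -> 5 VMs fit.
cpu_fit = consolidation_ratio([1.5] * 8, capacity=8.0)

# The same VMs also push 300 Mbit/s each through a shared 1 Gbit/s NIC:
# the I/O capacity, not the CPU, now bounds the consolidation ratio.
io_fit = consolidation_ratio([300.0] * 8, capacity=1000.0)
print(cpu_fit, io_fit)  # prints: 5 3
```

The gap between the two numbers is exactly why the abstract argues that I/O aggregation must enter the consolidation-ratio estimate alongside CPU aggregation.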
In multi-core servers, the limited number of I/O devices means that the consolidation ratio for I/O workloads is constrained not only by capacity but also by performance isolation due to shared devices among independent VMs. This thesis explores prevalent end-to-end I/O virtualization architectures with the following goals: 1. To understand the impact of virtualization on I/O workload performance. 2. To analyze resource utilization implications for I/O workloads due to virtualization. 3. To evaluate the effectiveness of existing Quality of Service (QoS) controls on resource usage and application performance. Apart from software isolation, a key driver for virtualization adoption in data centers is the performance and security isolation of virtual machines on consolidated servers. This is especially important for enterprise application workloads such as databases, email, and web-based applications, which include both CPU and I/O components. Current commodity multi-core technologies offer system virtualization architectures that provide CPU workload isolation. However, the number of CPU cores in multi-core servers far exceeds the number of I/O interfaces, resulting in shared I/O devices among independent VMs. This changes the dynamics of I/O device sharing compared to dedicated, non-virtualized servers, where all resources (processors, memory, disk, and network interfaces) are managed by a single OS. In non-virtualized systems, optimizing application usage of system resources is sufficient to ensure performance. In contrast, when independent applications are consolidated onto a multi-core server using virtual machines, performance interference caused by shared resources across multiple VMs introduces new challenges. The key challenge is ensuring consistent performance for independent I/O-intensive applications hosted inside individual VMs on a consolidated server, which often share a single I/O device.
Prevalent I/O virtualization architectures suffer from high overheads and performance interference due to device sharing. These issues cause variability in application performance, which depends on the nature of the consolidated workloads and the number of VMs sharing the I/O device. Performance interference also introduces security vulnerabilities, potentially leading to denial-of-service (DoS)-like attacks. One way to control this variability is by imposing Quality of Service (QoS) controls on resource allocation and usage of shared resources. Existing technologies typically extend system software to provide resource-specific QoS controls. However, these result in coarse-grained controls that are often ineffective and inefficient. This behavior necessitates a re-evaluation of I/O virtualization architectures to ensure that system design aligns with virtualization goals. The desirable features for designing system hardware to support virtualization architecture goals are: 1. Efficiency: A virtual machine should be as efficient as a non-virtualized machine, if not more. 2. Isolation: A virtual machine is an isolated duplicate of the real machine. Therefore, resource sharing on a virtualized machine should not cause interference among VMs. 3. Safety: The Virtual Machine Monitor (VMM) should have complete control over resources. This implies that no VM can access any resource not explicitly allocated to it, and the VMM should be able to reclaim control of resources already allocated. To ensure these properties, resource management constructs used for virtualization should be lightweight and closer to real resources. This allows for tighter resource usage controls and provides the foundation for a hierarchical adaptive scheduling framework in virtualized environments. Existing hardware design is architected with the goal of allowing a single OS to manage all resources. 
Most operating systems are process-centric, and system architecture is designed to support this model. General-purpose operating systems abstract physical hardware resources using constructs like processes, pages, files, sockets, and packets. Resource sharing across multiple processes or tasks occurs within the OS context using these abstractions. Most resource control and sharing policies are built over these OS abstractions and are used by OS-level resource schedulers. System virtualization, by definition, requires every resource to support concurrent access contexts, especially I/O devices, which are not traditionally designed for concurrency. This thesis re-examines I/O virtualization architectures using the Network Interface Card (NIC) as a case study and presents an end-to-end NIC virtualization architecture aimed at fulfilling the virtualization goals listed above. The proposed I/O virtualization architecture is implemented using a hardware-defined reconfigurable virtual device interface (HD Reconfig-VDI). NICs are chosen for analysis because they are highly shareable resources, and their efficient utilization benefits many users, particularly in enterprise environments. Network device sharing introduces complex usage patterns, and ensuring performance and security isolation is a major concern. The design properties of HD Reconfig-VDI include: 1. Hardware support for reconfigurable virtual device contexts. 2. Device time-sharing support via the controller with variable time slices for adaptive resource allocation. 3. Manifestation of virtual devices over physical devices through late or dynamic binding. Supporting virtual device contexts on physical devices enables native device access through the virtual interface and allows concurrency at the device level, significantly improving performance. Native access also enhances security in virtualized servers. Time-sharing improves resource utilization and system throughput for all VMs using the device.
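The three design properties listed above can be sketched as a software toy model. The thesis realizes these properties in hardware; the class names, fields, and scheduling pass below are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class VDeviceContext:
    # One reconfigurable virtual device context per VM (hypothetical fields).
    vm_id: int
    time_slice_us: int            # variable slice for adaptive allocation
    tx_queue: deque = field(default_factory=deque)

class VirtualNIC:
    """Toy software model of HD Reconfig-VDI's three properties:
    per-VM device contexts, controller-driven time-sharing with
    variable slices, and late (dynamic) binding of virtual devices.
    Illustrative only; not the thesis's hardware design.
    """
    def __init__(self):
        self.contexts = {}

    def bind(self, vm_id, time_slice_us):
        # Late/dynamic binding: a context materializes only when a VM
        # attaches, so unused device capacity stays reclaimable.
        self.contexts[vm_id] = VDeviceContext(vm_id, time_slice_us)

    def unbind(self, vm_id):
        # Safety property: the controller can reclaim an allocated context.
        self.contexts.pop(vm_id, None)

    def schedule_round(self):
        # One pass of device time-sharing: each bound context receives
        # device time proportional to its configured slice.
        return [(c.vm_id, c.time_slice_us) for c in self.contexts.values()]
```

In the hardware design, the equivalent of `schedule_round` is a context switch performed by the device controller itself, which is what enables native, concurrent access by VMs.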
The biggest challenge in time-sharing is context-switching time. If a device natively supports concurrency, context switching can be efficiently implemented in hardware by passing control to the context that is ready for scheduling. This also improves coordination between VMs and the device, and supports the hierarchical device schedulers being proposed for virtualized servers and virtual machine ecosystems. Furthermore, dynamic or late binding creates a loosely coupled virtual-to-physical resource mapping, which gives the flexibility to exploit unused resources and provides mechanisms for easy relocation and migration of VMs. In this thesis, using the HD Reconfig-VDI, we propose an end-to-end I/O virtualization architecture. The proposed architecture enhances I/O device virtualization to enable separation of device management from device access. This is done by building device protection mechanisms into the physical device, managed by the virtual machine monitor (VMM). As an example, in the case of a NIC, the VMM recognizes the destination VM of an incoming packet from the interrupt raised by the device and forwards it to the appropriate VM. The VM then processes the packet as it would in a non-virtualized environment. Thus, device access and scheduling of device communication are managed by the VM using the device, while the identity for access is managed by the VMM. This eliminates the intermediary VMM/hosting domain on the device access path and reduces I/O service time, which improves application performance on virtualized servers and increases usable device bandwidth, resulting in improved consolidation. To use the HD Reconfig-VDI, the various layers involved in virtualization (the virtualization stack), namely the hardware, the hypervisor, and the guest OS, need to be suitably modified. A standard method is then needed for evaluating the benefits of the proposed end-to-end I/O virtualization architecture.
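The separation of device management (VMM) from device access (VM) described above can be sketched as a tiny interrupt-demultiplexing routine. The vector-to-VM mapping and handler API here are hypothetical; they stand in for the per-context interrupt identity that the thesis builds into the device.

```python
def demux_interrupt(interrupt_vector, vector_to_vm, vm_handlers):
    """Sketch of the management/access split (hypothetical API).

    The device raises a per-context interrupt; the VMM only maps the
    vector to the owning VM and forwards it. It never touches the data
    path, so the VM processes the packet natively, as it would on a
    non-virtualized machine.
    """
    vm_id = vector_to_vm.get(interrupt_vector)
    if vm_id is None:
        # Safety property: no VM receives a resource (here, a packet
        # notification) that was not explicitly allocated to it.
        return None
    return vm_handlers[vm_id](interrupt_vector)
```

The point of the sketch is what is absent: there is no copy or inspection of packet data in the VMM path, which is where the reduced I/O service time comes from.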
The increasing adoption of virtualization technologies highlights the need for standard methods for modeling and evaluating different technologies. Most evaluation efforts are currently directed towards identifying and evaluating suitable virtualization benchmarks, since performance evaluation of diverse technologies is the major activity. Benchmark identification does help the user community evaluate existing solutions. However, for evaluating architectures and changes made to the different layers of the virtualization stack, it is imperative to have a uniform framework for modeling multiple technologies and analyzing their behavior. To compare and identify performance bottlenecks on the virtualization stack, and to understand how they affect different applications, a testbed is needed that helps analyze the various components and their associated behavior. Modeling virtualization environments can be complex, especially for end-to-end architectures. The basic requirement is being able to capture the contention arising from software and device sharing under concurrent use, which is critical for virtualization technologies. We find that layered queuing networks (LQNs) are ideal for modeling such systems without losing essential detail, while still giving reasonably good performance estimates, thus enabling evaluation of end-to-end architectures. LQN models provide a sufficiently high level of abstraction of the system under consideration, which makes them suitable for analyzing complex systems like virtualization stacks. The LQN models used in this thesis were built using the LQN software from Carleton University. Comparing the simulations with experimental observations, we found that they can be used to model and evaluate changes and enhancements to the virtualization stack with sufficient accuracy.
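The kind of contention an LQN captures can be illustrated with a back-of-the-envelope M/M/1 estimate for a single shared layer of the virtualization stack. This is not the LQNS solver or the thesis's model; the arrival and service rates are made-up numbers chosen only to show how aggregate load at a shared layer inflates response time.

```python
def mm1_response_time(arrival_rate, service_rate):
    # Classic M/M/1 mean response time, valid only while utilization < 1.
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable (utilization >= 1)")
    return 1.0 / (service_rate - arrival_rate)

# A shared stack layer (e.g. a driver domain) serving requests from
# N VMs: arrivals aggregate, so its response time grows nonlinearly.
per_vm_rate = 200.0    # requests/s per VM (hypothetical)
service_rate = 1000.0  # layer capacity in requests/s (hypothetical)
for n_vms in (1, 2, 4):
    r = mm1_response_time(n_vms * per_vm_rate, service_rate)
    print(n_vms, "VMs ->", round(r * 1000, 2), "ms")
```

An LQN generalizes this picture to multiple layers calling one another (application, guest OS, hypervisor, device), which is why it suits end-to-end architecture comparison.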
Hence, the proposed I/O virtualization architecture using the HD Reconfig-VDI is built as an LQN model and evaluated. The evaluation is carried out by comparing the simulation results of the LQN model of the proposed architecture with those of the LQN model of the existing Xen I/O virtualization architecture. The evaluation is carried out systematically for three identified metrics, namely throughput, server CPU utilization, and the effectiveness of resource-usage-specific QoS controls. With existing technologies, we observe that application throughput on a VM is reduced and server CPU utilization is increased when an I/O workload is moved from a non-virtualized server to a virtualized server. This is attributed to the high virtualization overheads caused by the software layers used to provide I/O virtualization. Making the hardware device virtualization-aware enables native access to the I/O device by a VM, and hence these overheads are considerably reduced. The throughput improvement is about 60% for the proposed architecture, with an equivalent reduction in the CPU utilization overheads due to virtualization. In addition, enabling independent virtual device contexts on the hardware improves resource-usage-specific QoS controls, which is reflected in the observed application throughput. Furthermore, involving the physical device in access management of the virtual device provides better security management and hence reduces vulnerabilities associated with sharing the I/O device. From our study and evaluation of I/O virtualization architectures, we conclude that designing systems from an end-to-end perspective enables greater flexibility in managing resources for virtualization and delivers the additional benefits of performance and security. We observe that both characteristics, performance and security, can be handled with simple, elegant constructs built on hardware APIs.
dc.language.iso  en_US
dc.relation.ispartofseries  T07311
dc.rights  I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject  Virtual Machine Monitor
dc.subject  HD Reconfigurable Virtual Device Interface (HD Reconfig-VDI)
dc.subject  Layered Queuing Networks
dc.title  System virtualization in the multi-core era - a QoS perspective
dc.degree.name  PhD
dc.degree.level  Doctoral
dc.degree.grantor  Indian Institute of Science
dc.degree.discipline  Engineering

