Slide 1: NoHype: Virtualized Cloud Infrastructure without the Virtualization

Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee
IBM Cloud Computing Student Workshop (ISCA 2010 + ongoing work)
Princeton University
Slide 2: Virtualized Cloud Infrastructure

Run virtual machines on a hosted infrastructure.

Benefits:
- Economies of scale
- Dynamic scaling (pay for what you use)
Slide 3: Without the Virtualization

Virtualization is used to share servers: a software layer runs under each virtual machine.

[Diagram: Guest VM1 and Guest VM2 (each running Apps on an OS) on top of a hypervisor, which runs on the physical hardware of a server.]
Slide 4: Without the Virtualization

Virtualization is used to share servers: a software layer runs under each virtual machine. Malicious software can run on the same server and:
- Attack the hypervisor
- Access or obstruct other VMs

[Diagram: Guest VM1 and Guest VM2 sharing a hypervisor and physical hardware on one server.]
Slide 5: Are These Vulnerabilities Imagined?

No headlines... but that doesn't mean the threat isn't real. Perhaps it is not yet enticing enough to attackers (small market size, lack of confidential data).

Meanwhile, the virtualization layer is huge and growing:
- ~100 thousand lines of code in the hypervisor
- ~1 million lines in the privileged virtual machine
- Derived from existing operating systems, which have security holes
Slide 6: NoHype

NoHype removes the hypervisor, so there is nothing to attack. It is a complete system solution that still meets the needs of a virtualized cloud infrastructure.

[Diagram: Guest VM1 and Guest VM2 (Apps on an OS) running directly on the physical hardware, with no hypervisor.]
Slide 7: Virtualization in the Cloud

Why does a cloud infrastructure use virtualization?
- To support dynamically starting/stopping VMs
- To allow servers to be shared (multi-tenancy)

It does not need the full power of modern hypervisors:
- Emulating diverse (potentially older) hardware
- Maximizing server consolidation
Slide 8: Roles of the Hypervisor

Isolating/emulating resources:
- CPU: scheduling virtual machines
- Memory: managing memory
- I/O: emulating I/O devices
- Networking
- Managing virtual machines

For each role, NoHype either pushes the function into hardware, pre-allocates the resource, removes the function, or pushes it to the side.

NoHype has a double meaning... "no hype".
Slide 9: Scheduling Virtual Machines (Today)

The scheduler is called each time the hypervisor runs (periodically, on I/O events, etc.). It chooses what to run next on a given core and balances load across the cores.

[Timeline: VMs run in slices, with the hypervisor entered on timer and I/O events to switch between them.]
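Today's scheduling path can be sketched as follows. This is an illustrative model, not Xen's actual scheduler: the `Scheduler` class, its methods, and the round-robin policy are assumptions chosen to show the pattern of the hypervisor picking the next VM on every entry.

```python
# Illustrative sketch (NOT a real hypervisor scheduler): the hypervisor
# is entered on each timer tick or I/O event, preempts the current VM on
# that core, and round-robins to the next runnable VM.
from collections import deque

class Scheduler:
    def __init__(self, num_cores):
        self.runqueue = deque()            # VMs waiting to run
        self.running = [None] * num_cores  # VM currently on each core

    def add_vm(self, vm):
        self.runqueue.append(vm)

    def on_hypervisor_entry(self, core):
        """Called on every trap into the hypervisor for this core."""
        prev = self.running[core]
        if prev is not None:
            self.runqueue.append(prev)     # preempt: back of the queue
        nxt = self.runqueue.popleft() if self.runqueue else None
        self.running[core] = nxt
        return nxt

sched = Scheduler(num_cores=2)
for name in ["vm-a", "vm-b", "vm-c"]:
    sched.add_vm(name)
print(sched.on_hypervisor_entry(0))  # vm-a
print(sched.on_hypervisor_entry(1))  # vm-b
print(sched.on_hypervisor_entry(0))  # vm-c (vm-a goes back on the queue)
```

The point of the sketch is the cost NoHype removes: every switch requires running hypervisor code between VM time slices.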
Slide 10: Dedicate a Core to a Single VM (NoHype)

Ride the multi-core trend: one core on a 128-core device is ~0.8% of the processor. Cloud computing is pay-per-use:
- During high demand, spawn more VMs
- During low demand, kill some VMs

Customers maximize each VM's work, which minimizes the opportunity for over-subscription.
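The one-VM-per-core idea can be sketched as a static core allocator. The `CorePool` interface is an assumption for illustration; the essential properties are from the slide: each VM owns a core for its lifetime, so no scheduler runs, and over-subscription is simply impossible.

```python
# Sketch of NoHype-style core dedication (assumed interface, not a real
# API): each spawned VM gets a core of its own until it is killed.
class CorePool:
    def __init__(self, num_cores):
        self.free = set(range(num_cores))
        self.owner = {}                  # core -> VM

    def spawn_vm(self, vm):
        if not self.free:
            raise RuntimeError("no free core: cannot over-subscribe")
        core = self.free.pop()
        self.owner[core] = vm
        return core

    def kill_vm(self, vm):
        for core, owner in list(self.owner.items()):
            if owner == vm:
                del self.owner[core]
                self.free.add(core)

pool = CorePool(num_cores=128)
core = pool.spawn_vm("customer-vm")
print(core in pool.owner)   # True
print(1 / 128)              # one core is ~0.8% of a 128-core processor
```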
Slide 11: Managing Memory (Today)

Goal: system-wide optimal usage, i.e., maximize server consolidation. The hypervisor controls the allocation of physical memory.
Slide 12: Pre-allocate Memory (NoHype)

In cloud computing, customers are charged per unit, e.g., a VM with 2 GB of memory. So:
- Pre-allocate a fixed amount of memory; the amount is fixed and guaranteed
- The guest VM manages its own physical memory (deciding what pages to swap to disk)
- Processor support enforces the allocation and bus utilization
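Pre-allocation makes address translation trivial. A minimal sketch, assuming each VM owns one fixed, contiguous region of host-physical memory (the region layout and `MemoryPartition` class are illustrative, not the real EPT format): every guest-physical address translates by a constant offset, and anything outside the VM's quota is rejected.

```python
# Illustrative sketch of pre-allocated memory: a fixed base + size per
# VM, so translation is an offset and enforcement is a bounds check
# (done by the processor in the real design).
class MemoryPartition:
    def __init__(self, base, size):
        self.base = base    # host-physical start of this VM's region
        self.size = size    # fixed, guaranteed amount (e.g., 2 GB)

    def translate(self, guest_phys):
        if not 0 <= guest_phys < self.size:
            raise MemoryError("access outside pre-allocated region")
        return self.base + guest_phys

GB = 1 << 30
vm1 = MemoryPartition(base=1 * GB, size=2 * GB)
print(hex(vm1.translate(0x1000)))   # 0x40001000
```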
Slide 13: Emulate I/O Devices (Today)

The guest sees virtual devices:
- Access to a device's memory range traps to the hypervisor
- The hypervisor handles interrupts
- A privileged VM emulates the devices and performs the real I/O

[Diagram: guest VM accesses trap to the hypervisor, which makes hypercalls to device-emulation code in the privileged VM; the privileged VM holds the real drivers for the physical hardware.]
Slide 15: Dedicate Devices to a VM (NoHype)

In cloud computing, only networking and storage devices are needed. Use static memory partitioning to enforce access: the processor enforces VM-to-device accesses, and the IOMMU enforces device-to-memory (DMA) accesses.

[Diagram: Guest VM1 and Guest VM2 each access their own devices directly on the physical hardware, with no hypervisor in the path.]
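The IOMMU side of the enforcement can be sketched as a table lookup. The device name, ownership table, and address ranges are assumed for illustration; the rule is the one on the slide: a dedicated device may DMA only into the memory pre-allocated to the VM that owns it.

```python
# Illustrative sketch of IOMMU-style enforcement for a dedicated device:
# DMA from a device is allowed only into its owning VM's fixed region.
GB = 1 << 30
DEVICE_OWNER = {                       # assumed setup: device -> (VM, region)
    "nic-vf1": ("vm1", range(1 * GB, 3 * GB)),
}

def iommu_check(device, dma_addr):
    owner, allowed = DEVICE_OWNER[device]
    return dma_addr in allowed

print(iommu_check("nic-vf1", 1 * GB + 0x1000))  # True: inside vm1's memory
print(iommu_check("nic-vf1", 0x1000))           # False: blocked
```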
Slide 16: Virtualize the Devices

A physical device per VM doesn't scale. Instead, use multiple queues on the device, with multiple memory ranges mapping to the different queues.

[Diagram: a network card with classify and MUX logic in front of the MAC/PHY, connected over the peripheral bus to the processor, chipset, and memory.]
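The card's classify stage can be sketched in a few lines. The MAC addresses and table are assumptions for illustration, in the spirit of an SR-IOV-style multi-queue NIC: incoming frames are demultiplexed by destination MAC onto the queue belonging to the owning VM.

```python
# Illustrative sketch of a multi-queue NIC's classify logic: each frame
# is steered to the queue mapped into the owning VM's memory.
QUEUE_FOR_MAC = {                     # assumed per-VM MAC assignment
    "02:00:00:00:00:01": 0,          # Guest VM1's queue
    "02:00:00:00:00:02": 1,          # Guest VM2's queue
}
queues = [[], []]

def classify(frame):
    q = QUEUE_FOR_MAC.get(frame["dst_mac"])
    if q is not None:
        queues[q].append(frame)      # hardware would DMA into VM buffers
    # else: drop (no VM owns this address)

classify({"dst_mac": "02:00:00:00:00:02", "payload": b"hello"})
print(len(queues[0]), len(queues[1]))  # 0 1
```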
Slide 17: Networking (Today)

Ethernet switches connect servers.

[Diagram: two servers connected by an Ethernet switch.]
Slides 18-20: Networking in a Virtualized Server (Today)

Software Ethernet switches connect the VMs.

[Diagram, built up over three slides: Guest VM1 and Guest VM2 on a hypervisor, connected through a software switch running in the privileged VM.]
Slide 21: Do Networking in the Network (NoHype)

Co-located VMs communicate through software, while VMs that are not co-located pay a performance penalty; the software path is a special case in cloud computing, and an artifact of going through the hypervisor anyway. Instead, utilize the hardware switches already in the network, with a modification to support hairpin turnaround (sending a frame back out the port it arrived on).
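Why hairpin support is needed can be sketched with a toy forwarding table (the table and names are assumed for illustration): a standard switch never forwards a frame back out its ingress port, so two VMs behind the same server port cannot reach each other through it unless the switch supports hairpin turnaround.

```python
# Illustrative sketch of hairpin turnaround: forwarding between two VMs
# that sit behind the same switch port requires sending the frame back
# out the port it arrived on, which standard switches refuse to do.
FDB = {"vm1": "port1", "vm2": "port1", "vm3": "port2"}  # MAC -> port

def forward(src, dst, hairpin=False):
    in_port, out_port = FDB[src], FDB[dst]
    if out_port == in_port and not hairpin:
        return None                    # standard behavior: frame dropped
    return out_port

print(forward("vm1", "vm3"))               # port2 (normal forwarding)
print(forward("vm1", "vm2"))               # None (both behind port1)
print(forward("vm1", "vm2", hairpin=True)) # port1 (hairpin turnaround)
```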
Slide 22: Removing the Hypervisor (Summary)

- Scheduling virtual machines: one VM per core
- Managing memory: pre-allocate memory, with processor support
- Emulating I/O devices: direct access to virtualized devices
- Networking: utilize hardware Ethernet switches
- Managing virtual machines: decouple management from operation
Slide 23: NoHype's Double Meaning

"NoHype" means no hypervisor, but it also means "no hype": the hardware building blocks exist today.
- Multi-core processors
- Extended Page Tables
- SR-IOV and Directed I/O (VT-d)
- Virtual Ethernet Port Aggregator (VEPA)
Slide 24: NoHype's Double Meaning (cont.)

Current work: implement it on today's hardware.
Slide 25: Xen as a Starting Point

Management tools (xm):
- Pre-allocate resources, i.e., configure the virtualized hardware
- Launch the VM

Pre-fill the EPT mapping to partition memory.

[Diagram: Xen running the privileged VM (with xm) and Guest VM1, each on its own core.]
Slide 26: Network Boot

gPXE in hvmloader, with added support for igbvf (Intel 82576). This allows us to remove disks, which are not virtualized yet.

[Diagram: Guest VM1's hvmloader performs a DHCP/gPXE network boot from servers, alongside the privileged VM (xm) on Xen.]
Slide 27: Allow Legacy Bootup Functionality

Boot a known good kernel + initrd (our code):
- PCI reads return "no device", except for the NIC
- HPET reads are allowed, to determine the clock frequency

[Diagram: the kernel in Guest VM1, booted via DHCP/gPXE from servers, alongside the privileged VM (xm) on Xen.]
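The "PCI reads return 'no device'" behavior can be sketched as a config-space filter. The slot and ID values here are assumptions for illustration (0xFFFFFFFF is the conventional all-ones "no device present" read value; the NIC's bus/device/function and its device ID are made up, with Intel's vendor ID 0x8086 in the low half).

```python
# Illustrative sketch of filtered PCI config-space reads during legacy
# boot: every slot reports "no device" except the passed-through NIC.
NO_DEVICE = 0xFFFFFFFF                # standard all-ones "nothing here"
NIC_SLOT = (0, 3, 0)                  # assumed bus/device/function
NIC_ID = 0x10C98086                   # assumed device id, vendor 0x8086

def pci_config_read(bus, dev, fn):
    if (bus, dev, fn) == NIC_SLOT:
        return NIC_ID                 # the one device the guest may see
    return NO_DEVICE                  # everything else: "no device"

print(hex(pci_config_read(0, 0, 0)))    # 0xffffffff
print(hex(pci_config_read(*NIC_SLOT)))  # 0x10c98086
```

The effect is that the known good kernel's device discovery finds only the NIC, so it never touches devices that would require emulation.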
Slide 28: Use Device-Level Virtualization

- Pass through the virtualized NIC
- Pass through the local APIC (for the timer)

[Diagram: the guest kernel in Guest VM1 accesses its devices directly, alongside the privileged VM (xm) on Xen.]
Slide 29: Block All Hypervisor Access

- Mount an iSCSI drive for the user disk
- Before jumping to user code, switch off the hypervisor: any VM exit causes a Kill VM
- The user can load kernel modules and any applications

[Diagram: Guest VM1 boots via DHCP/gPXE from servers and mounts its disk over iSCSI; once customer code runs, any exit to Xen kills the VM.]
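The "any VM exit causes a Kill VM" policy can be sketched as a two-phase exit handler. The class and exit-reason strings are assumptions for illustration; the policy itself is the slide's: exits during setup are serviced normally, but after control is handed to customer code, the only remaining response to an exit is to destroy the guest.

```python
# Illustrative sketch of the post-handoff exit policy: once the
# hypervisor has been "switched off", any VM exit is fatal to the VM.
class Guest:
    def __init__(self):
        self.alive = True
        self.handed_off = False       # set just before jumping to user code

    def vm_exit(self, reason):
        if self.handed_off:
            self.alive = False        # any exit after handoff -> Kill VM
            return "killed: " + reason
        return "handled: " + reason   # setup-phase exits still serviced

g = Guest()
print(g.vm_exit("cpuid"))             # handled: cpuid (still booting)
g.handed_off = True
print(g.vm_exit("vmcall"))            # killed: vmcall
print(g.alive)                        # False
```

Since a well-behaved dedicated guest never needs to exit after handoff, any exit is evidence of misbehavior (or an attack attempt) and safely terminates only that VM.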
Slide 30: Timeline

[Timeline: hvmloader and then kernel setup (device discovery) run first, with activity in both guest VM space and VMX root mode; once customer code starts, execution stays entirely in guest VM space.]
Slide 31: Next Steps

- Assess needs for future processors
- Assess OS modifications to eliminate the need for a golden image (e.g., push configuration instead of discovery)
Slide 32: Conclusions

- There is a trend towards hosted and shared infrastructures
- A significant security issue threatens adoption
- NoHype solves this by removing the hypervisor
- Performance improvement is a side benefit
Slide 33: Questions?

Contact info:
ekeller@princeton.edu, http://www.princeton.edu/~ekeller
szefer@princeton.edu, http://www.princeton.edu/~szefer