Slide 1: Data Center Virtualization: Xen and Xen-Blanket
Hakim Weatherspoon, Assistant Professor, Dept of Computer Science
CS 5413: High Performance Systems and Networking
November 17, 2014
Slides from the ACM European Conference on Computer Systems 2012 presentation of "The Xen-Blanket: Virtualize Once, Run Everywhere" and Dan Williams' dissertation
Slide 2: Where are we in the semester?
- Overview and Basics
- Data Center Networks
  - Basic switching technologies
  - Data Center Network Topologies (today and Monday)
  - Software Routers (e.g., Click, RouteBricks, NetMap, NetSlice)
  - Alternative Switching Technologies
  - Data Center Transport
- Data Center Software Networking
  - Software Defined Networking (overview, control plane, data plane, NetFPGA)
- Data Center Traffic and Measurements
- Virtualizing Networks
- Middleboxes
- Advanced Topics
Slide 3: Goals for Today
- The Xen-Blanket: Virtualize Once, Run Everywhere. D. Williams, H. Jamjoom, and H. Weatherspoon. ACM European Conference on Computer Systems (EuroSys), April 2012, pages 113-126.
Slide 4: Background & Motivation
- Infrastructure as a Service (IaaS) clouds
- Inter-cloud migration?
- Uniform VM image?
- Advanced hypervisor-level management?
Slide 5: Research Challenges
- Lack of interoperability between clouds: how can a cloud user homogenize clouds?
- Lack of control in cloud networks: what cloud network abstraction enables enterprise workloads to run without modification?
- Lack of efficient cloud resource utilization: how can cloud users exploit oversubscription in the cloud while handling overload?
Slide 6: Xen-Blanket
- A second-layer hypervisor
[Diagram: VMs running on the Xen-Blanket, which spans multiple underlying clouds]
- Inter-cloud migration?
- Uniform VM image?
- Advanced hypervisor-level management?
Slide 7: Xen-Blanket (EuroSys '12)
[Diagram: two stacks compared. Left: standard Xen in ring 0 hosting Dom 0 and an HVM guest, each with a kernel (ring 1) and applications (ring 3). Right: the same HVM guest instead runs the Xen-Blanket, which hosts its own Dom 0 and Dom U, each with a kernel and applications.]
Slide 8: Contributions Towards Superclouds
- Cloud interoperability (the Xen-Blanket): enable cloud users to homogenize clouds
[Diagram: enterprise workload VMs running on a supercloud layered over third-party clouds]
Slide 9: Contributions Towards Superclouds
- Cloud interoperability (the Xen-Blanket)
- User control of cloud networks (VirtualWire): enable cloud users to implement network control logic
[Diagram: as before, with VirtualWire added to the supercloud layer]
Slide 10: Contributions Towards Superclouds
- Cloud interoperability (the Xen-Blanket)
- User control of cloud networks (VirtualWire)
- Efficient cloud resource utilization (Overdriver): enable cloud users to oversubscribe resources and handle overload
[Diagram: as before, with Overdriver added to the supercloud layer]
Slide 11: Roadmap: Towards Superclouds
- Cloud interoperability
- User control of cloud networks
- Efficient cloud resource utilization
- Related work
- Future work
- Conclusion
[Diagram: the full supercloud stack — enterprise workload VMs over the Xen-Blanket, VirtualWire, and Overdriver, over third-party clouds]
Slide 12: Clouds Are Not Interoperable
- Image format not yet standard: AMI, Open Virtualization Format (OVF)
- Paravirtualized device interfaces vary: virtio, Xen
- Hypervisor-level services not standard: Autoscale, VM migration, CPU bursting
- Need homogenization: consistent interfaces and services across clouds
Slide 13: Provider-Centric Homogenization
- Relies on support from the provider
- May take years, if ever (e.g., standardization)
- "Least common denominator" functionality
[Diagram: Cloud A and Cloud B each expose the same interface to VMs 1-4 — consistent VM/device/hypervisor interfaces and consistent hypervisor-level services]
Slide 14: User-Centric Homogenization
- No special support from the provider
- Can be done today
- Custom, user-specific functionality
[Diagram: Cloud A and Cloud B expose different interfaces (Interface 1, Interface 2); a user-controlled layer above them gives VMs 1-4 consistent VM/device/hypervisor interfaces and consistent hypervisor-level services]
Slide 15: Nested Virtualization Approaches
- Require support from the bottom-level hypervisor, with no modifications to the top-level hypervisor: the Turtles Project (OSDI '10), provider-centric
- No support from the bottom-level hypervisor, but modify the top-level hypervisor: the Xen-Blanket, user-centric
Slide 16: The Xen-Blanket
- Assumption: existing clouds provide full virtualization (HVM)
- Future work: Xen-Blanket in a paravirtualized guest
[Diagram: hardware runs a provider-controlled VMM (Xen or KVM) with no support for nested virtualization; each user (User 1, User 2) runs a Xen-Blanket as a user-controlled VMM hosting that user's VMs]
Slide 17: Without Hypervisor Support
- No virtualization hardware exposed to the second layer
  - Can use paravirtualization or binary translation; we use paravirtualization (Xen)
- Heterogeneous device interfaces
  - Create a set of Blanket drivers for each interface; we have built drivers for Xen and KVM (virtio)
Slide 18: Xen PV Device I/O
- Paravirtualized device I/O is essential for performance
- Domain 0 hides physical device details from guests
[Diagram: Dom 0 holds the physical device driver and the backend driver; the guest holds the frontend driver; Xen sits between them and the hardware. Xen runs in ring 0, kernels in ring 1, user code in ring 3.]
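The split-driver model on this slide can be sketched as a producer/consumer ring shared between a frontend and a backend. This is a toy Python model, not the real ABI: the names (`Ring`, `req_prod`, `req_cons`) only echo Xen's shared-ring fields, and real Xen adds grant tables, memory barriers, and event-channel notifications that are elided here.

```python
class Ring:
    """Toy model of a Xen-style split-driver I/O ring (illustrative, not the real ABI)."""

    def __init__(self, size=8):
        self.size = size
        self.slots = [None] * size  # a shared memory page in real Xen
        self.req_prod = 0           # frontend's producer index
        self.req_cons = 0           # backend's consumer index
        self.responses = {}         # request id -> response payload

    # --- frontend driver (runs in the guest) ---
    def put_request(self, req_id, payload):
        assert self.req_prod - self.req_cons < self.size, "ring full"
        self.slots[self.req_prod % self.size] = (req_id, payload)
        self.req_prod += 1          # real Xen: write barrier, then event-channel kick

    # --- backend driver (runs in Dom 0) ---
    def handle_requests(self, device):
        while self.req_cons < self.req_prod:
            req_id, payload = self.slots[self.req_cons % self.size]
            self.req_cons += 1
            self.responses[req_id] = device(payload)  # hand off to the physical driver

ring = Ring()
ring.put_request(1, b"read block 7")
ring.handle_requests(lambda p: p.upper())  # stand-in for a real device driver
print(ring.responses[1])  # b'READ BLOCK 7'
```

Even in the toy, the point of the split is visible: the frontend only ever touches the ring, and physical device details live solely behind the backend.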
Slide 19: PV-on-HVM
- An HVM guest still needs PV device I/O
- The Xen Platform PCI Driver makes Xen internals look like a PCI device
- Physical device details are still hidden from guests
[Diagram: as in the PV case, but the HVM guest's frontend driver reaches the backend through the Xen Platform PCI Driver.]
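Because PV-on-HVM frontends attach to the Xen platform PCI device rather than real hardware, an HVM guest can spot it by Xen's PCI vendor ID, 0x5853. The helper below is an illustrative sketch: the function name and the sample `lspci -n` lines are invented for the example; only the vendor ID comes from Xen.

```python
XEN_VENDOR = "5853"  # Xen's PCI vendor ID ("XS" in ASCII)

def find_xen_platform(lspci_output):
    """Return the bus address of the Xen platform PCI device, or None.

    Expects `lspci -n`-style lines: "<slot> <class>: <vendor>:<device>".
    """
    for line in lspci_output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2].startswith(XEN_VENDOR + ":"):
            return parts[0]
    return None

# Made-up sample output: a Xen platform device and an ordinary Intel NIC.
sample = "00:02.0 ff80: 5853:0001\n00:03.0 0200: 8086:100e"
print(find_xen_platform(sample))  # 00:02.0
```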
Slide 20: Blanket Drivers
- Physical device details are hidden from the entire Xen-Blanket instance
- The Blanket frontend driver interfaces with the provider-specific device interface, like PV-on-HVM
- Provider-specific device interface details are hidden from second-layer guests
[Diagram: the underlying Xen's Dom 0 holds the physical device driver and backend driver; inside the Xen-Blanket instance, the Blanket frontend driver reaches that backend via Blanket hypercalls, while the Blanket's own Dom 0 runs a backend driver serving frontend drivers in second-layer guests.]
Slide 21: Technical Details
- Address translation: virtual addresses are two translations away from machine addresses (needed for DMA)
- Hypercall assistance: communication between the Blanket frontend driver and the backend driver; vmcall must be issued from ring 0; most hypercalls are pass-through
- Many more details in the thesis
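The two-translation point can be made concrete with a toy page-table walk. This is a sketch under stated assumptions — 4 KB pages, single-level tables as plain dicts, and made-up frame numbers: a second-layer guest's virtual address maps first through the Blanket Xen's tables, then through the provider hypervisor's tables, before a machine address usable for DMA emerges.

```python
PAGE = 4096  # assume 4 KB pages

def translate(page_table, addr):
    """Walk a one-level toy page table: dict mapping page number -> frame number."""
    page, offset = divmod(addr, PAGE)
    return page_table[page] * PAGE + offset

# second-layer guest virtual page -> frame as seen by the Blanket Xen
blanket_pt = {5: 12}
# Blanket frame -> machine frame as seen by the provider's hypervisor
provider_pt = {12: 99}

vaddr = 5 * PAGE + 42
maddr = translate(provider_pt, translate(blanket_pt, vaddr))
print(maddr == 99 * PAGE + 42)  # True: two translations separate vaddr from the machine address
```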
Slide 22: Overhead Evaluation Setup
- Used up to 2 physical hosts (six-core 2.93 GHz Intel Xeon X5670 processors, 24 GB of memory, four 1 TB disks, and a 1 Gbps link)
Slide 23: lmbench Microbenchmarks

  Benchmark              Native (µs)   HVM (µs)   PV (µs)   Xen-Blanket (µs)
  Null call              0.19          0.21       0.36      0.36
  Fork proc              67            86         220       258
  Ctxt switch (2p/64K)   0.45          0.66       3.18      3.46
  Page fault             0.56          0.99       2.00      2.10

- Compare the Xen-Blanket to PV
Slide 24: Blanket Driver Overhead
- Two VMs on two physical hosts using netperf
- Can receive at line speed on a 1 Gbps link
- CPU utilization within 15% of a single-layer setup
Slide 25: kernbench
- Up to 68% overhead on kernbench
- APIC emulation causes many vmexits
Slide 26: User-Defined Oversubscription

  Type          CPU (ECUs)   Memory (GB)   Disk (GB)   Price ($/hr)
  Small         1            1.7           160         0.085
  Cluster 4XL   33.5         23            1690        1.60
  Factor        33.5x        13.5x         10x         18.8x

- Resources do not all scale the same as price
- Opportunity to exploit CPU scaling
Slide 27: kernbench Revisited
- kernbench kernel compile benchmark
- Rent one Cluster 4XL EC2 instance; use the Xen-Blanket to partition it 40 ways
- All instances (on average) finished in the same time as an EC2 small instance
- 47% price reduction per VM per hour
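A back-of-the-envelope check of the per-VM economics, using the 2012-era EC2 prices quoted in the oversubscription table and the 40-way partitioning from this slide:

```python
small_price = 0.085  # $/hr, EC2 Small (from the oversubscription table)
xl4_price = 1.60     # $/hr, Cluster 4XL (same table)
partitions = 40      # Xen-Blanket VMs carved out of one 4XL instance

per_vm = xl4_price / partitions  # cost of each Xen-Blanket VM per hour
ratio = per_vm / small_price     # fraction of the Small-instance price
print(f"${per_vm:.3f}/VM/hr, {ratio:.0%} of a Small instance")
```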
Slide 28: Cloud Interoperability
- The Xen-Blanket
  - User-centric homogenization
  - Nested virtualization without support from the underlying hypervisor
  - Runs on today's clouds (e.g., Amazon EC2)
  - Download the code: http://code.google.com/p/xen-blanket/
- New opportunities
  - Performance: user-defined oversubscription
Slide 29: Before Next Time
- Project interim report due Monday, November 24
  - Meet with groups, TA, and professor
- Fractus upgrade: should be back online
- Required review and reading for Wednesday, November 19
  - Extending networking into the virtualization layer, B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, S. Shenker. ACM SIGCOMM Workshop on Hot Topics in Networking (HotNets), October 2009. http://conferences.sigcomm.org/hotnets/2009/papers/hotnets2009-final143.pdf
- Check Piazza: http://piazza.com/cornell/fall2014/cs5413
- Check the website for an updated schedule