Disk IO and Network Benchmark on VMs




Presentation Transcript

1. Disk IO and Network Benchmark on VMs
Qiulan Huang, 26/11/2010

2. Benchmarking
CPU benchmarks: HEPSPEC06
I/O benchmarks: IOZONE
Network benchmarks: IPERF

3. Disk I/O Benchmark
Running 8 iozone processes on the hypervisor
Running an iozone process on 8 VMs at the same time
IOZONE options: -Mce -I -+r -r 256k -s 8g -f /usr/vice/cache/iozone_$i.dat$$ -i0 -i1 -i2
Pay close attention to the read and write performance
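The parallel run described above can be sketched as a small shell script. The helper name `build_iozone_cmd` is ours, not from the slides; it only reproduces the IOZONE options listed on the slide. Actually executing the commands requires iozone installed and a writable /usr/vice/cache, so this sketch prints them instead.

```shell
#!/bin/sh
# Build one iozone command line per process/VM index, using the exact
# options from the slide ($$ appends the shell's PID to the file name,
# as written on the slide).
build_iozone_cmd() {
    # $1 = process/VM index (1..8)
    echo "iozone -Mce -I -+r -r 256k -s 8g -f /usr/vice/cache/iozone_$1.dat$$ -i0 -i1 -i2"
}

# Print the 8 commands that would run in parallel; to really run them,
# execute each with a trailing '&' and finish with 'wait'.
i=1
while [ "$i" -le 8 ]; do
    build_iozone_cmd "$i"
    i=$((i + 1))
done
```

On the hypervisor all 8 processes share one disk, while in the VM case each of the 8 guests runs one process, which is what makes the two read/write curves comparable.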

4. Disk I/O Benchmark
Read performance

5. Disk I/O Benchmark
Write performance

6. Summary (1)
For disk IO, the write performance penalty is about 10%, while the read performance loss is about 40%, noticeably higher than for writes.
Oddly, 3 VMs still get about twice the performance of bare metal.

7. Network Benchmark
Tools: IPERF 2.0.4
Options: '-p 11522 -w 458742 -t 60'; the TCP window size is 256 KB and the test duration is 60 secs (the default is 10 secs)
Physical server: lxbsq0910
VM servers: vmbsq091000 ~ vmbsq091007
Client: lxvmpool005

8. Benchmark Design (1)
Set 8 parallel threads running at the same time on the client to test the hypervisor throughput.
Start 8 VMs on the hypervisor and have them act as servers. On the client side, run 8 threads almost at the same time, each connecting to one of the servers.
Server side command: iperf -s -p PortNumber -w 458742 -t 60
Client side command: iperf -c ServerIP -p PortNumber -w 458742 -P 8 -t 60
The port number must be the same as the one used on the server side.
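The 8-VM variant of this design can be sketched as below. The VM hostnames and the base port 11522 come from the slides; the helper name `iperf_client_cmd` and the per-VM port offset (11522 + index) are our assumptions, since the slides only say the ports must match between client and server.

```shell
#!/bin/sh
# Build one iperf client command per VM server; assumes VM n listens on
# port 11522 + n (started there with: iperf -s -p <port> -w 458742).
iperf_client_cmd() {
    # $1 = VM index 0..7, mapped onto hostnames vmbsq091000..vmbsq091007
    echo "iperf -c vmbsq09100$1 -p $((11522 + $1)) -w 458742 -t 60"
}

# Print the 8 client commands; to run the actual test, launch each with
# a trailing '&' so they overlap, then 'wait' for all of them.
for n in 0 1 2 3 4 5 6 7; do
    iperf_client_cmd "$n"
done
```

Starting the clients from one loop keeps the 8 transfers almost simultaneous, which is the point of this design: all streams compete for the hypervisor's NIC at once.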

9. Network Benchmark (1)

10. Benchmark Design (2)
Run a single thread per connection on the client to test the hypervisor throughput.
Do 8 rounds separately: in the first round, start one process to connect to 1 VM; in the second, start 2 processes to connect to 2 VMs respectively; finally, start 8 processes to connect to 8 VMs respectively.
Server side command: iperf -s -p PortNumber -w 458742 -t 60
Client side command: iperf -c ServerIP -p PortNumber -w 458742 -t 60
The port number must be the same as the one used on the server side.
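The round structure above can be sketched as follows. The helper name `run_rounds` is ours, not from the slides; we only print the commands so the sketch runs anywhere, and we reuse port 11522 for every VM as the slide's single-port example suggests.

```shell
#!/bin/sh
# Round r starts r client processes, one per VM, and each round completes
# before the next begins.
run_rounds() {
    round=1
    while [ "$round" -le 8 ]; do
        n=0
        while [ "$n" -lt "$round" ]; do
            echo "round $round: iperf -c vmbsq09100$n -p 11522 -w 458742 -t 60"
            n=$((n + 1))
        done
        # Once the echoes become backgrounded iperf runs, put 'wait'
        # here so rounds do not overlap.
        round=$((round + 1))
    done
}
run_rounds
```

Scaling the client count from 1 to 8 this way shows how aggregate throughput behaves as more VMs share the hypervisor's network path, which is what the Network Benchmark (2) plot reports.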

11. Network Benchmark (2)

12. Summary (2)
The network performance penalty in VMs is about 3%, which is quite encouraging. The second test shows that performance in the VMs is nearly equal to the physical machine; moreover, 4 of the VMs get better values than the bare machine. We should study this further and tune parameters to optimize network performance with a real application.

13. Questions?