Grid Developers’ use of FermiCloud
(to be integrated with master slides)

Grid Developers' Use of Clouds
Storage Investigation
OSG Storage Test Bed
MCAS Production System
Development VM
OSG User Support
FermiCloud Development
MCAS integration system

Storage Investigation: Lustre Test Bed
[Diagram: FCL Lustre test bed. The FCL Lustre server VM (3 OST & 1 MDT) runs on a Dom0 with 8 CPU, 24 GB RAM, and 2 TB over 6 disks; Lustre client VMs (7 x) on FermiCloud and FG ITB clients (7 nodes, 21 VMs) mount the filesystem over Ethernet.]
Test configurations compared:
ITB clients vs. Lustre Virtual Server
FCL clients vs. Lustre Virtual Server
FCL + ITB clients vs. Lustre Virtual Server
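
For context, a minimal sketch of how a client VM in this test bed would mount the Lustre filesystem, using the standard Lustre client mount syntax; the MGS host name, filesystem name, and mount point below are illustrative assumptions, not the actual FermiCloud values:

  # Sketch: mount the Lustre filesystem from a client VM via subprocess.
  # MGS host name, filesystem name, and mount point are illustrative assumptions.
  import subprocess

  MGS_NODE = "fcl-lustre-srv.example.fnal.gov"   # hypothetical MGS/MDS server VM
  FS_NAME = "fcltest"                            # hypothetical Lustre filesystem name
  MOUNT_PATH = "/mnt/lustre"

  # Standard Lustre client mount: mount -t lustre <mgs>@tcp:/<fsname> <dir>
  subprocess.run(
      ["mount", "-t", "lustre", f"{MGS_NODE}@tcp:/{FS_NAME}", MOUNT_PATH],
      check=True,
  )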

ITB clients vs. FCL Virtual Server Lustre
Changing the disk and network drivers on the Lustre server VM: 350 MB/s read, 70 MB/s write (250 MB/s write on bare metal).
[Plots: read I/O rates and write I/O rates for bare metal, Virt I/O for disk and net, Virt I/O for disk and default for net, and default drivers for disk and net.]
Conclusion: use Virt I/O drivers for the network.
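
A minimal sketch of how one can check which disk and network drivers a KVM guest is using, via the libvirt Python bindings; the domain name is a hypothetical placeholder, and the virtio vs. default distinction is what the plots above compare:

  # Sketch: inspect a libvirt guest's disk and NIC drivers.
  # Assumes the libvirt Python bindings are installed; the domain name is hypothetical.
  import libvirt
  import xml.etree.ElementTree as ET

  conn = libvirt.open("qemu:///system")          # connect to the local hypervisor
  dom = conn.lookupByName("lustre-server-vm")    # hypothetical VM name
  root = ET.fromstring(dom.XMLDesc(0))           # parse the domain XML

  for disk in root.findall("./devices/disk/target"):
      print("disk bus:", disk.get("bus"))        # 'virtio' vs. default emulated 'ide'
  for nic in root.findall("./devices/interface/model"):
      print("net model:", nic.get("type"))       # 'virtio' vs. an emulated NIC model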

21 Nova clients vs. bare metal & virtual server
Read – ITB clients vs. virtual server: BW = 12.27 ± 0.08 MB/s (1 ITB client: 15.3 ± 0.1 MB/s)
Read – FCL clients vs. virtual server: BW = 13.02 ± 0.05 MB/s (1 FCL client: 14.4 ± 0.1 MB/s)
Read – ITB clients vs. bare metal: BW = 12.55 ± 0.06 MB/s (1 client vs. bare metal: 15.6 ± 0.2 MB/s)
Virtual clients on-board (on the same machine as the virtual server) are as fast as bare metal for read.
The virtual server is almost as fast as bare metal for read.
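
A minimal sketch of how per-client rates of this kind can be summarized as a mean with a standard error; the sample values are made-up placeholders, not the measured data above:

  # Sketch: summarize per-client read bandwidth as mean +/- standard error.
  # The values below are illustrative placeholders, not the measured data.
  import math

  rates_mb_s = [12.1, 12.4, 12.3, 12.2, 12.35]   # one entry per client VM

  n = len(rates_mb_s)
  mean = sum(rates_mb_s) / n
  var = sum((x - mean) ** 2 for x in rates_mb_s) / (n - 1)
  stderr = math.sqrt(var / n)

  print(f"BW = {mean:.2f} +/- {stderr:.2f} MB/s over {n} clients")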

OSG Storage Test Bed
Official test bed resources:
  5 nodes purchased ~2 years ago
  4 VMs on each node (2 VMs SL5, 2 VMs SL4)
Test systems:
  BeStMan-gateway/xrootd:
    BeStMan-gateway, GridFTP-xrootd, xrootdfs
    Xrootd redirector
    5 data server nodes
  BeStMan-gateway/HDFS:
    BeStMan-gateway/GridFTP-hdfs, HDFS name nodes
    8 data server nodes
Client nodes (4 VMs):
  Client installation tests
  Certification tests (reachability-probe sketch below)
  Apache/Tomcat to monitor/display test results, etc.
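
A minimal sketch of the kind of availability check the client VMs could run against these services, using plain TCP probes; host names are hypothetical, and the ports are common defaults for BeStMan SRM, GridFTP, and xrootd, assumed here rather than taken from the test bed configuration:

  # Sketch: probe the storage test-bed services with plain TCP connections.
  # Host names are hypothetical; ports are the usual defaults (assumptions).
  import socket

  ENDPOINTS = {
      "BeStMan-gateway (SRM)": ("bestman-gw.example.fnal.gov", 8443),
      "GridFTP":               ("gridftp.example.fnal.gov", 2811),
      "xrootd redirector":     ("xrootd-redir.example.fnal.gov", 1094),
  }

  for name, (host, port) in ENDPOINTS.items():
      try:
          with socket.create_connection((host, port), timeout=5):
              print(f"{name}: reachable")
      except OSError as err:
          print(f"{name}: NOT reachable ({err})")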

OSG Storage Test Bed
Additional test bed resources:
  6 VMs on nodes outside of the official test bed
Test systems:
  BeStMan-gateway with disk
  BeStMan-fullmode
  Xrootd (ATLAS Tier-3, WLCG demonstrator project)
  Various test installations
In addition, 6 “old” physical nodes are used as a dCache test bed; these will be migrated to FermiCloud.

MCAS Production System
FermiCloud hosts the production server (mcas.fnal.gov).
VM config: 2 CPUs, 4 GB RAM, 2 GB swap
Disk config:
  10 GB root partition for OS and system files
  250 GB disk image as a data partition for MCAS software and data
  The independent disk image makes it easier to upgrade the VM.
On VM boot-up: the data partition is staged and auto-mounted in the VM (staging sketch below).
On VM shutdown: the data partition is saved.
Work in progress: restart the VM without having to save and stage in the data partition to/from central image storage.
MCAS services hosted on the server: Mule ESB, JBoss, Berkeley DB XML.
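
A minimal sketch of the boot-up/shutdown staging described above, assuming the data partition lives as a disk image on central storage and is loop-mounted inside the VM; all paths and file names are hypothetical placeholders:

  # Sketch: stage the MCAS data-partition image at VM boot and save it at shutdown.
  # Paths and file names are hypothetical placeholders.
  import shutil
  import subprocess

  CENTRAL_IMAGE = "/mnt/central-store/mcas-data.img"   # hypothetical central copy
  LOCAL_IMAGE = "/var/lib/images/mcas-data.img"        # hypothetical local copy
  MOUNT_POINT = "/data"

  def stage_in():
      """Copy the data image from central storage and loop-mount it in the VM."""
      shutil.copy2(CENTRAL_IMAGE, LOCAL_IMAGE)
      subprocess.run(["mount", "-o", "loop", LOCAL_IMAGE, MOUNT_POINT], check=True)

  def stage_out():
      """Unmount the data partition and save the image back to central storage."""
      subprocess.run(["umount", MOUNT_POINT], check=True)
      shutil.copy2(LOCAL_IMAGE, CENTRAL_IMAGE)

  if __name__ == "__main__":
      stage_in()    # on VM boot-up
      # ... MCAS services (Mule ESB, JBoss, Berkeley DB XML) run against /data ...
      stage_out()   # on VM shutdown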

MCAS: Metric Analysis and Correlation Service. CD Seminar.