ATLAS Tier-3 in Geneva
Szymon Gadomski, Uni GE at CSCS, November 2009
S. Gadomski, "ATLAS T3 in Geneva", CSCS meeting, Nov 09

- the Geneva ATLAS Tier-3 cluster
- what it is used for
- recent issues and long-term concerns

ATLAS computing in Geneva
- 268 CPU cores
- 180 TB for data, 70 TB of it in a Storage Element
- special features:
  - direct line to CERN at 10 Gb/s
  - latest software via CERN AFS
- SE in the Tiers of ATLAS since Summer 2009
- FTS channels from CERN and from the NDGF Tier-1

Networks and systems
S. Gadomski, "Status and plans of the T3 in Geneva", Swiss ATLAS Grid Working Group, 7 Jan 2008
Setup and use
Our local cluster:
- log in and have an environment to work with ATLAS software, both offline and trigger
- develop code, compile, interact with the ATLAS software repository at CERN
- work with nightly releases of ATLAS software, normally not distributed off-site but visible on /afs
- disk space, visible as normal Linux file systems
- use of final analysis tools, in particular ROOT
- an easy way to run batch jobs
A grid site:
- tools to transfer data from CERN, as well as from and to other Grid sites worldwide
- a way for ATLAS colleagues, Swiss and other, to submit jobs to us
- ways to submit our jobs to other grid sites
~55 active users, 75 accounts, ~90 including old ones
not only Uni GE; an official Trigger development site

Statistics of batch jobs
- in NorduGrid production since 2005
- ATLAS never sleeps
- local jobs taking over in recent months

Added value by resource sharing
- local jobs come in peaks
- the grid always has jobs
- little idle time, a lot of Monte Carlo done

Some performance numbers
Storage system         direction   max rate [MB/s]
NFS                    write       250
NFS                    read        370
DPM Storage Element    write       4 x 250
DPM Storage Element    read        4 x 270
Internal to the cluster, the data rates are OK.

Transfers to Geneva:

Source/method                         MB/s      GB/day
dq2-get average                       6.6       560
dq2-get max                           58        5000
FTS from CERN (per file server)       10 - 60   840 - 5000
FTS from NDGF-T1 (per file server)    3 - 5     250 - 420

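As a quick sanity check on the transfer rates above, a sustained rate in MB/s converts to GB/day by multiplying by 86,400 seconds; a minimal Python sketch (my own illustration, decimal units assumed, not part of the original talk):

```python
SECONDS_PER_DAY = 86400  # 24 h * 3600 s

def gb_per_day(mb_per_s):
    """Convert a sustained transfer rate in MB/s to GB/day (decimal units)."""
    return mb_per_s * SECONDS_PER_DAY / 1000.0

# dq2-get average, 6.6 MB/s -> ~570 GB/day, consistent with the ~560 GB/day quoted
print(gb_per_day(6.6))
# dq2-get maximum, 58 MB/s -> ~5000 GB/day, as in the table
print(gb_per_day(58))
```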
Test of larger TCP buffers
- transfer from fts001.nsc.liu.se
- network latency 36 ms (CERN at 1.3 ms)
- increased TCP buffer sizes on Fri, Sept 11th: Solaris default 48 kB, then 192 kB, then 1 MB
- data rate per server reached ~25 MB/s
- why? and can we keep the FTS transfer at 25 MB/s per server?

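The buffer sizes above line up with the TCP bandwidth-delay product: a single stream cannot move more than one window per round trip, so at 36 ms latency the 48 kB Solaris default caps a stream near 1.3 MB/s, while a 1 MB buffer allows just under 28 MB/s, in line with the ~25 MB/s observed. A minimal sketch of this bound (my own illustration, not from the talk; binary units assumed):

```python
def buffer_limited_rate(buffer_bytes, rtt_s):
    """Upper bound on single-stream TCP throughput (window / RTT), in MB/s."""
    return buffer_bytes / rtt_s / (1024 * 1024)

RTT_NDGF = 0.036  # 36 ms round trip to the NDGF Tier-1, as quoted on the slide

for label, buf in [("48 kB (Solaris default)", 48 * 1024),
                   ("192 kB", 192 * 1024),
                   ("1 MB", 1024 * 1024)]:
    # 48 kB -> ~1.3 MB/s, 192 kB -> ~5.2 MB/s, 1 MB -> ~27.8 MB/s
    print(f"{label}: {buffer_limited_rate(buf, RTT_NDGF):.1f} MB/s")
```

By the same arithmetic, at the 1.3 ms latency to CERN even the default 48 kB buffer allows roughly 36 MB/s per stream, which would explain why only the long-latency NDGF path needed tuning.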
Issues and concerns
recent issues:
- one crash of a Solaris file server in the DPM SE
- two latest Solaris file servers with slow disk I/O, deteriorating over time, fixed by reboot
- unreliable data transfers
- frequent security updates of SLC4
- migration to SLC5; Athena reading from DPM

long-term concerns:
- level of effort to keep it all up
- support of the Storage Element

Summary and outlook
- a large ATLAS T3 in Geneva
- a special site for Trigger development
- in NorduGrid since 2005
- DPM Storage Element since July 2009
- FTS from CERN and from the NDGF-T1; exercising data transfers

short-term to-do list:
- gradual move to SLC5
- write a note, including performance results

Towards a steady-state operation!