Slide 1: Online Triggers and DAQ
Niko Neufeld, CERN/PH
CERN openlab V major review, Oct. 2014
Slide 2: Data acquisition and online challenges – recap
- Online data filtering and processing: (quasi-)realtime data reduction for high-rate detectors
- High-bandwidth networking for data acquisition: Terabit/s networks
- Data transfer and storage
Slide 3: Evolution of the LHCb trigger-DAQ

                                              LHCb Run 2          LHCb Run 3
Max. inst. luminosity [cm^-2 s^-1]            4 x 10^32           2 x 10^33
Event size (mean, zero-suppressed) [kB]       ~60 (L0 accepted)   ~100
Event-building rate [MHz]                     1                   40
# read-out boards                             ~330                400 - 500
Link speed from detector [Gbit/s]             1.6                 4.5
Output data-rate / read-out board [Gbit/s]    4                   100
# detector links / read-out board             up to 24            up to 48
# farm nodes                                  ~1600               1000 - 4000
# 100 Gbit/s links (from event-builder PCs)   n/a                 400 - 500
Final output rate to tape [kHz]               12                  20 - 100
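A quick cross-check of the Run 3 numbers (not on the original slide): the aggregate event-building bandwidth is event size times event-building rate,

    ~100 kB/event x 40 MHz = 4 x 10^12 B/s ≈ 32 Tbit/s,

which is what the foreseen 400 - 500 links of 100 Gbit/s out of the event-builder PCs (40 - 50 Tbit/s of raw capacity) must carry, leaving headroom for protocol overhead and traffic imbalance.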
Slide 4: Online Architecture

[Architecture diagram. Detector front-end electronics in UX85B send data over 8800 Versatile Links to ~500 PCIe40 read-out boards in the event-builder PCs (which also run the software LLT) on the Point 8 surface. The event-builder network connects, via 6 x 100 Gbit/s links, to the event-filter farm of ~80 subfarms behind subfarm switches, and to online storage. The TFC distributes the clock & fast commands and receives the throttle from the PCIe40 boards; the ECS controls the whole system.]
Slide 5: Future data rates @ LHC (trigger stage 2)

        Event size [kB]   Rate [kHz]   Bandwidth [Gb/s]   Year [CE]
ALICE   20000             50           8000               2019
ATLAS   4000              200          6400               2023
CMS     4000              1000         32000              2023
LHCb    100               40000        32000              2019

40000 kHz == the collision rate: LHCb abandons Level 1 for an all-software trigger.
O(100) Tbit/s networks required.
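As a cross-check (not on the original slide), the bandwidth column is simply event size x rate, i.e. Gb/s = kB x kHz x 8 / 1000: ATLAS gives 4000 kB x 200 kHz = 8 x 10^11 B/s = 6400 Gb/s, while CMS (4000 kB x 1000 kHz) and LHCb (100 kB x 40000 kHz) both land at 4 x 10^12 B/s = 32000 Gb/s.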
Slide 6: Project HTCC (High Throughput Computing Collaboration)
Partners: CERN, Intel
Goal: apply Intel technologies to the Online computing challenges.
Work packages:
- Interconnect for DAQ (Storm Lake)
- Knights Landing for software triggers (HLT and L1.5)
- Reconfigurable logic (Xeon/FPGA)
Use the LHCb use-case as a concrete example problem, but try to keep solutions and reports as widely applicable as possible.
Slide 7: Original "use-cases" in Online (from the openlab V white paper)
- "Level-1" using (more/all) COTS hardware ✔
- "Data Acquisition" ✔
- "High Level Trigger" ✔
- "Controls"
- "Online data processing" ✔
- "Exporting data"
✔ == covered in HTCC
Slide 8: Additional material
Slide 9: Use-case #1: The first-level trigger
Slide 10: Calorimeter data
[Figure only.]
Slide 11: Finding muons (2D view)
[Figure only.]
Slide 12: Level 1 trigger today
- The Level 1 trigger is implemented in hardware (FPGAs and ASICs): difficult / expensive to upgrade or change, maintenance by experts only.
- Decision time: a few microseconds.
- It uses simple, hardware-friendly signatures, and therefore loses interesting collisions.
- Each sub-detector has its own solution; only the uplink is standardized.
Slide 13: Level 1 challenge
- Can we do this in software, using GPGPUs / Xeon Phis?
- We need low and near-deterministic latency (a toy illustration of this constraint follows below).
- We need an efficient interface to the detector hardware: a CPU/FPGA hybrid?
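To make the latency constraint concrete, here is a minimal sketch (all names are hypothetical stand-ins, not LHCb code) of a deadline-bounded software trigger decision: the algorithm must answer within a fixed budget, and if it runs out of time the event is accepted by default rather than lost.

```cpp
// Toy deadline-bounded trigger decision; event type, budget and
// "algorithm" are illustrative placeholders.
#include <chrono>
#include <cstdio>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Event { std::vector<int> hits; };   // stand-in for raw fragments

enum class Decision { Accept, Reject };

constexpr auto kBudget = std::chrono::microseconds(10);

// Checks the deadline between coarse work steps and bails out with a
// default Accept if the budget is exhausted (fail-safe: never drop an
// event just because the software was slow).
Decision decide(const Event& ev, Clock::time_point deadline) {
    long sum = 0;
    for (std::size_t i = 0; i < ev.hits.size(); ++i) {
        sum += ev.hits[i];
        if ((i & 0xFF) == 0 && Clock::now() > deadline)
            return Decision::Accept;       // out of time
    }
    return sum > 100 ? Decision::Accept : Decision::Reject;
}

int main() {
    Event ev{std::vector<int>(1000000, 1)};
    auto t0 = Clock::now();
    Decision d = decide(ev, t0 + kBudget);
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  Clock::now() - t0).count();
    std::printf("decision=%s after %lld us\n",
                d == Decision::Accept ? "Accept" : "Reject",
                static_cast<long long>(us));
}
```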
Slide 14: Use-cases #2 and #6: Data Acquisition and Export – fast data transport
Slide 15: Data Acquisition (generic example)

[Diagram of a generic DAQ chain, crossing ~100 m of rock from the underground detector to the surface:
- Detector: ~10000 custom radiation-hard links from the detector
- ~1000 Readout Units, interconnected over the DAQ network by DAQ ("event-building") links – some LAN (10/40/100 GbE / InfiniBand)
- ~3000 Compute Units, with links into them of typically 1/10 Gbit/s]

- Every Readout Unit has a piece of the collision data.
- All pieces must be brought together into a single Compute Unit (see the sketch below).
- The Compute Unit runs the software filtering (High Level Trigger – HLT).
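A minimal, single-process sketch of the event-building idea on this slide (not any experiment's real DAQ code; all names and sizes are illustrative): an event is complete only when the fragments from all Readout Units have been collected under the same event number.

```cpp
#include <cstdio>
#include <map>
#include <vector>

struct Fragment {
    unsigned long event_id;    // which collision this piece belongs to
    int source_ru;             // which Readout Unit produced it
    std::vector<char> payload; // raw detector data
};

class EventBuilder {
    int n_sources_;            // number of Readout Units (~1000 at scale)
    std::map<unsigned long, std::vector<Fragment>> partial_;
public:
    explicit EventBuilder(int n_sources) : n_sources_(n_sources) {}

    // Add one fragment; return true when its event became complete and
    // can be shipped to a single Compute Unit for the HLT.
    bool add(Fragment f) {
        auto& frags = partial_[f.event_id];
        frags.push_back(std::move(f));
        if (static_cast<int>(frags.size()) < n_sources_) return false;
        unsigned long id = frags.front().event_id;
        std::printf("event %lu complete (%d fragments)\n", id, n_sources_);
        partial_.erase(id);    // in reality: hand off to the HLT, then free
        return true;
    }
};

int main() {
    EventBuilder eb(3);        // toy setup: 3 Readout Units
    for (int ru = 0; ru < 3; ++ru)
        eb.add({42, ru, std::vector<char>(100)});
}
```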
Slide 16: Data acquisition challenge
- Transport large amounts of data (multiple Terabit/s at the LHC) reliably and cost-effectively.
- Integrate the network closely and efficiently with the compute resources (be they classical CPUs or "many-core").
- Multiple network technologies should seamlessly co-exist in the same integrated fabric ("the right link for the right task"): an end-to-end solution from online processing to the scientist's laptop (e.g. for light sources).
Slide 17: Use-cases #3 and #5: Online Data processing
Slide 18: Pattern finding – tracks
[Figure only.]
Slide 19: The same in 2 dimensions
[Figure only.]
It can be much more complicated: lots of tracks / rings, curved / spiral trajectories, spurious measurements and various other imperfections.
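As one classic illustration of this kind of pattern finding (a Hough transform, named here as a stand-in; the experiments' production algorithms are far more elaborate), here is a minimal sketch of straight-line track finding in 2D: each hit (x, y) votes for all lines r = x*cos(theta) + y*sin(theta) it could lie on, and real tracks show up as peaks in the (theta, r) accumulator.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Hit { double x, y; };

int main() {
    const double kPi = 3.14159265358979323846;

    // Toy data: hits along the line y = x plus one noise hit.
    std::vector<Hit> hits{{1, 1}, {2, 2}, {3, 3}, {4, 4}, {1, 3}};

    const int NT = 180, NR = 200;          // theta and r bins
    const double rmax = 10.0;
    std::vector<int> acc(NT * NR, 0);

    for (const Hit& h : hits)
        for (int it = 0; it < NT; ++it) {
            double theta = it * kPi / NT;
            double r = h.x * std::cos(theta) + h.y * std::sin(theta);
            int ir = static_cast<int>((r + rmax) / (2 * rmax) * NR);
            if (ir >= 0 && ir < NR) ++acc[it * NR + ir];
        }

    // The bin with the most votes is the best line candidate.
    int best = 0;
    for (int i = 1; i < NT * NR; ++i)
        if (acc[i] > acc[best]) best = i;
    double theta = (best / NR) * kPi / NT;
    double r = (best % NR) * 2 * rmax / NR - rmax;
    std::printf("best line: theta=%.2f rad, r=%.2f (%d votes)\n",
                theta, r, acc[best]);
}
```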
Slide 20: LHC High Level Trigger: key figures
- Existing code base: 5 MLOC, mostly C++.
- Almost all algorithms are single-threaded (only a few exceptions).
- Current processing time per event on a Xeon X5650: several tens of ms per process (hyper-thread).
- Currently between 100k and 1 million events per second are filtered online in each of the 4 LHC experiments.
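These figures fix the scale of the farm; a back-of-the-envelope check (not on the slide, Little's law): at 1 MHz input and 10 ms per event, 1 MHz x 10 ms = 10000 events are in flight at any moment, i.e. ~10000 concurrent processes. With 24 hyper-threads per dual-socket X5650 node that is already 400+ servers, and several tens of ms per event pushes it to the O(1000)-node farms quoted on Slide 3.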
Slide 21: Online data processing challenge
- Make the code base ready for multi/many-core; this is not Online-specific! (A minimal parallelization sketch follows below.)
- Optimize the online processing compute in terms of cost, power and cooling.
- Find the best architecture integrating "standard servers", many-core systems and a high-bandwidth network.
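A minimal sketch of the "easy" axis of that multi-core migration (illustrative names only, not the experiments' frameworks): since collisions are independent, many events can be filtered concurrently even while each algorithm stays single-threaded, which is essentially what the farms do today with one process per hyper-thread.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct Event { int id; };

// Placeholder for ~tens of ms of reconstruction and selection code.
bool process_event(const Event& ev) { return ev.id % 50 == 0; }

int main() {
    std::vector<Event> events(1000);
    for (int i = 0; i < 1000; ++i) events[i].id = i;

    std::atomic<std::size_t> next{0};   // shared work index
    std::atomic<int> accepted{0};

    // Each worker pulls the next unprocessed event; no event-level
    // parallelism is needed inside process_event() itself.
    auto worker = [&] {
        for (std::size_t i; (i = next.fetch_add(1)) < events.size(); )
            if (process_event(events[i])) ++accepted;
    };

    // One worker per hardware thread.
    std::vector<std::thread> pool;
    unsigned n = std::thread::hardware_concurrency();
    for (unsigned t = 0; t < (n ? n : 4); ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();

    std::printf("accepted %d of %zu events\n",
                accepted.load(), events.size());
}
```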