Buying into “Summit” under the “Condo” model

Pat Burns, VP for IT
Rick Casey, HPC Manager
H.J. Siegel, Chair of the ISTeC MAC
Agenda

1. Welcome, introductions – HJ
2. Review NSF Summit award – Pat
3. Review Summit configuration – Rick
4. Present “Condo” buy-in models – Rick
5. Q&A – HJ
Welcome from ISTeC

- ISTeC: CSU's Information Science & Technology Center
- Current HPC at CSU: ISTeC Cray, from 9/09, $630K NSF grant
- NEW NSF award to ISTeC for greatly enhanced HPC
- Thanks to those who helped us get this award through ISTeC
The Joint NSF MRI Award

- NSF MRI proposal submitted under the RMACC (http://www.rmacc.org)
- Joint award to CSU and CU: 450 TFLOPS (~200 on the Top500)
- CSU: $850K (23%); CU: $2.5M (67%)
- 10% of cycles offered to RMACC participants
- Giant RFP process; award to Dell/DDN: the “Summit” system
- Housed and operated at CU (CSU fiber-connected), no cost to CSU
- Undergoing final acceptance testing now
- Limited opportunity to buy into the system, subsidized for common infrastructure
Summary of Benefits

- Great pricing via large-scale purchase
- Zero operational costs
- Hardware subsidy
- Central user and application support
Summit: Schematic Rack Layout

- Storage rack: 1 PB scratch, GPFS, DDN SFA14K
- Compute racks 1–7:
  - Intel Haswell nodes (376)
  - Nvidia K80 GPU nodes (10)
  - Intel Knights Landing Phi nodes (20)
  - HiMem nodes (5), 2 TB RAM / node
  - OmniPath leaf and core nodes; OPA fabric mgt. nodes
  - Ethernet mgt. nodes
  - Gateway nodes

Note: actual rack layout may differ from this schematic.
CPU Nodes

- 376 CPU nodes, Dell PowerEdge C6320
- 9,024 total Intel Haswell CPU cores
- 4 nodes, 96 cores per chassis
- 200 GB SATA SSD / chassis
- 2X Intel Xeon E5-2680v3, 2.5 GHz, per node
- 24 CPU cores / node
- 128 GB RAM / node
- 5.3 GB RAM / CPU core
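The per-core figures on this slide follow directly from the node counts. A minimal sketch checking the arithmetic (all constants taken from the slide itself):

```python
# Check the CPU-partition figures quoted on the slide.
NODES = 376
CORES_PER_NODE = 24        # 2 sockets x 12-core Xeon E5-2680v3
RAM_PER_NODE_GB = 128

total_cores = NODES * CORES_PER_NODE
ram_per_core = RAM_PER_NODE_GB / CORES_PER_NODE

print(total_cores)             # 9024 total Haswell cores
print(round(ram_per_core, 1))  # 5.3 GB RAM per core
```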
GPU Nodes

- 10 GPU nodes, Dell PowerEdge C4130
- 99,840 GPU cores
- 2X Nvidia K80 GPU cards / node
- 2X Intel Xeon E5-2680v3, 2.5 GHz / node
- 200 GB SATA SSD / node
- 24 CPU cores / node
- 128 GB RAM / node
- 5.3 GB RAM / CPU core
HiMem Nodes

- 5 HiMem nodes, Dell PowerEdge R930
- 4X Intel Xeon E7-4830v3, 2.1 GHz
- 2 TB RAM / node (DDR4)
- 48 CPU cores / node
- 42 GB RAM / CPU core
- 200 GB SAS SSD / node
- 12 TB SAS HDD / node
Interconnect

- Intel OmniPath interconnect
- 100 Gbit/sec. bandwidth
- Fat-tree topology, 2:1 blocking
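A 2:1 blocking ratio means each leaf switch has twice as many node-facing ports as uplinks, so cross-leaf traffic sees at most half the aggregate bandwidth of node-local traffic. A sketch of the arithmetic, assuming 48-port OmniPath edge switches split 32 down / 16 up (the port counts are an assumption for illustration, not stated on the slide):

```python
# Illustrate 2:1 blocking in a fat tree.
# ASSUMPTION: 48-port leaf switches, 32 ports down, 16 up.
LINK_GBPS = 100    # OmniPath link rate from the slide
DOWN_PORTS = 32    # node-facing ports per leaf (assumed)
UP_PORTS = 16      # core-facing ports per leaf (assumed)

blocking_ratio = DOWN_PORTS / UP_PORTS
node_bw_gbps = DOWN_PORTS * LINK_GBPS
uplink_bw_gbps = UP_PORTS * LINK_GBPS

print(blocking_ratio)                 # 2.0 -> "2:1 blocking"
print(node_bw_gbps, uplink_bw_gbps)   # 3200 1600
```

The practical consequence for users is that jobs whose communication stays within a leaf see full bandwidth, while jobs spanning leaves share the halved uplink capacity.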
Storage

- 1 Petabyte (PB) scratch storage
- DDN SFA14K block storage appliance
- GRIDScaler (GPFS integration)
- Direct native connection to OmniPath
Access to Summit (In Process)

- Via CSU's fiber infrastructure
- Identical for CU and CSU users
- Simple account applications required
- Start-up, small allocations “automatic”
- Goal is to support widespread usage
- Larger allocations will be given in accordance with needs
- Limited “whole machine” runs will be available
- Will require eID/Duo for authentication
- Will require Globus for file transfers
- Recommended from CSU's Research DMZ network/storage
“Condo” Model Buy-in

- Buy-in in units of chassis
- A CPU chassis has 4 nodes (band together to buy?)
- Allocations will be given equal to 8,760 hrs./year x purchased size
- Ex.: 1 CPU node purchased, allocation = 8,760 x 24 core-hrs./yr.
- All resources are shared when available: scale up to larger sizes
- All shared, common elements are subsidized, until $$$ run out:
  - Power, cooling, data center space, staff
  - Power distribution units (PDUs)
  - Ethernet switches
  - OmniPath common fabric (you must purchase a card for each node)
  - OmniPath cabling
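The allocation formula above can be worked through for the two natural purchase sizes, a single node and a full 4-node chassis (all constants taken from the slides):

```python
# Worked example of the condo allocation formula:
# allocation = 8,760 hrs/year x purchased size (in cores).
HOURS_PER_YEAR = 8760          # 24 hrs x 365 days
CORES_PER_CPU_NODE = 24
NODES_PER_CHASSIS = 4

one_node = HOURS_PER_YEAR * CORES_PER_CPU_NODE
one_chassis = one_node * NODES_PER_CHASSIS

print(one_node)      # 210240 core-hrs/yr for 1 CPU node
print(one_chassis)   # 840960 core-hrs/yr for a 4-node chassis
```

Because resources are shared when available, an owner can burst above this rate and is simply guaranteed the purchased hardware's full-year capacity in aggregate.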
Deadlines (for both CU and CSU)

- Nov. 10 for commitments
- Get commitments (specs and account numbers) to Richard.Casey@colostate.edu, (970) 980-5975
- PO by 12/1
Excel Spreadsheet Discussed
Q&A Most Welcome
Thank You!