
Slide1

DROPS: Distributed ResOurce ensembles for Petascale Science

Ilia Baldine, Yufeng Xin, Anirban Mandal
RENCI, UNC-CH

Slide2

Outline
ORCA and GENI
DROPS (DOE ASCR DE-SC0005286)
ExoGENI testbed

Slide3

ORCA (Open Resource Control Architecture)
Distributed control software
One of the GENI control frameworks
Originally developed at Duke University by Jeff Chase with NSF funding
Now under active development by Duke and RENCI for GENI
Pluggable, programmable IaaS system for experimenting with dynamic resource provisioning

Slide4

IaaS: Compute and Network Virtualization
(Diagram) Cloud providers expose cloud APIs (Amazon EC2, ...) over virtual compute and storage infrastructure; network transit providers expose network provisioning APIs (NLR Sherpa, DOE OSCARS, Internet2 DRAGON, OGF NSI, ...) over virtual network infrastructure.

Slide5

Open Resource Control Architecture
ORCA is a “wrapper” for off-the-shelf cloud technologies, circuit networks, etc., enabling federated orchestration:
Resource brokering
VM image distribution
Topology embedding
Stitching
Authorization
GENI, NSF SDCI
http://networkedclouds.org
http://geni-orca.renci.org

Slide6

ORCA Foundational Concepts
Leases (a minimal sketch follows below)
Enable brokering, advance scheduling, and distributed allocation of resources
Knowledge representation
Utilizes Semantic Web concepts to represent resource states (NDL -> NDL-OWL)
Slices
Provide collections of interconnected resources with strong performance isolation properties that evolve over time
Isolation means repeatability and performance guarantees
Resources can be modified and delegated
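A minimal sketch of the lease idea as a data structure with a ticket/redeem lifecycle; the class and field names are illustrative and are not ORCA's actual classes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Lease:
    """Hypothetical lease record: a time-bounded right to a resource sliver."""
    sliver_type: str            # e.g. "vm", "vlan", "baremetal"
    units: int                  # number of resource units held
    start: datetime             # lease term start
    end: datetime               # lease term end
    state: str = "ticketed"     # ticketed -> redeemed -> closed
    properties: dict = field(default_factory=dict)

    def redeem(self):
        """Turn a brokered ticket into a redeemed lease at the authority."""
        if self.state != "ticketed":
            raise ValueError(f"cannot redeem lease in state {self.state}")
        self.state = "redeemed"

    def extend(self, delta: timedelta):
        """Advance scheduling: push the lease term forward."""
        self.end += delta

# Example: a two-hour ticket for 4 VMs, redeemed and later extended.
now = datetime.utcnow()
lease = Lease("vm", 4, start=now, end=now + timedelta(hours=2))
lease.redeem()
lease.extend(timedelta(hours=1))
```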

Slide7

Federated Orchestration: ORCA Actors (diagram)

Slide8

ORCA Actors (diagram)
Users and tools interact with a Slice Manager (SM) through a web portal and submit requests over XML-RPC with NDL-OWL payloads. The Broker (clearinghouse, CH) issues tickets against resources delegated by substrate owners; the Authority/AM redeems tickets into leases. The actors are implemented in Java, and operators manage them through web portals. (A client-side sketch follows below.)
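A hedged sketch of what a client-side call over this interface might look like, using Python's standard xmlrpc.client. The endpoint URL and method names here are placeholders chosen for illustration, not ORCA's actual API; consult the ORCA/ExoGENI documentation for the real interface:

```python
import xmlrpc.client

# Placeholder endpoint; a real deployment would point at an ORCA SM/controller.
sm = xmlrpc.client.ServerProxy("https://sm.example.org:11443/orca/xmlrpc")

# NDL-OWL request describing the desired slice topology (contents elided).
ndl_request = "<rdf:RDF> ... </rdf:RDF>"

# Hypothetical method names illustrating the request -> ticket -> redeem -> lease flow.
ticket = sm.createSlice("my-slice", ndl_request)   # SM obtains a ticket from the broker
status = sm.sliceStatus("my-slice")                # AM redeems the ticket into a lease
```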

Slide9

ORCA Capabilities
Co-scheduling/co-provisioning of heterogeneous resources (primarily compute and network)
Automatic binding of resources to available sites
Automatic splitting of resources between sites
Stitching of resources into connected topologies
Deducing and honoring resource dependencies
Label continuity constraints where necessary
Label translation where possible
Semantic resource descriptions used in user-facing APIs (NDL-OWL)
Multi-layered network provisioning on BEN
Fiber, DWDM (Infinera), and L2 (Cisco, Juniper) provisioning
Using a combination of heuristics and ILP (a toy embedding example follows below)
Support for OpenStack, Eucalyptus, OpenFlow, OSCARS, Sherpa
Low-level drivers for Cisco and Juniper switches
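Topology embedding and stitching can be framed as mapping requested virtual links onto substrate paths with sufficient capacity. A toy greedy heuristic (not the ORCA algorithm, which combines heuristics with ILP) using networkx; the substrate graph and bandwidth figures are made up for illustration:

```python
import networkx as nx

# Toy substrate: sites and links annotated with available bandwidth (Gbps).
substrate = nx.Graph()
substrate.add_edge("RENCI", "BEN", bw=10)
substrate.add_edge("BEN", "Duke", bw=10)
substrate.add_edge("BEN", "NERSC", bw=1)

def embed_link(src, dst, request_bw):
    """Greedy heuristic: pick the shortest path whose links all have enough
    spare bandwidth, then reserve that bandwidth along the path."""
    usable = nx.subgraph_view(
        substrate, filter_edge=lambda u, v: substrate[u][v]["bw"] >= request_bw
    )
    path = nx.shortest_path(usable, src, dst)
    for u, v in zip(path, path[1:]):
        substrate[u][v]["bw"] -= request_bw
    return path

print(embed_link("RENCI", "Duke", 1))   # ['RENCI', 'BEN', 'Duke']
```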

Slide10

Slices with and without OpenFlow


Slide11

DROPS Foundational Principles
It is important to provide applications with the ability to provision distributed heterogeneous resources for themselves.
Requests may be fuzzy or specific (see the sketch below).
It is important to provide mechanisms for applications to express their resource needs.
It is important to provide feedback to the application that describes the performance and state of the allocated resources.
It is important to assess application performance metrics and adjust resource allocation.
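One way to picture "fuzzy or specific" requests is a request description where unspecified fields are left open for the control framework to bind. A hypothetical sketch, not a DROPS API; the field names and example values are invented:

```python
# Fields left as None are "fuzzy" -- the control framework is free to bind them;
# concrete values are "specific" -- the application pins the choice.
fuzzy_request = {
    "nodes": 8,
    "node_type": None,          # any VM flavor is acceptable
    "site": None,               # let the broker pick the cloud site
    "interconnect_bw_gbps": 1,
}

specific_request = {
    "nodes": 8,
    "node_type": "m1.large",
    "site": "RENCI",
    "interconnect_bw_gbps": 10,
}

def unbound_fields(request):
    """Fields the provisioning system still has to decide."""
    return [key for key, value in request.items() if value is None]

print(unbound_fields(fuzzy_request))      # ['node_type', 'site']
print(unbound_fields(specific_request))   # []
```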

Slide12

Application-driven resource orchestration (diagram)
The application issues requests through application-facing APIs and an application-specific resource mapping layer, and then operates directly on the resulting application slice. ORCA performs resource co-scheduling and stitching and drives the substrate slivering APIs: network provisioning (NSI, Sherpa, OSCARS), virtualized compute (Eucalyptus/OpenStack, xCAT), and storage.

Slide13

DROPS Goals
Select representative scientific applications: Map/Reduce, EFRC Solar Chemistry Pipeline
Assess their performance
Create an API for applications to create and modify ‘slices’: a simple API with complex semantic resource representations open to reasoning and inference
Extend semantic resource representations to performance measurements and metrics
Create a persistent query mechanism for perfSONAR to support a closed feedback loop for application performance monitoring and resource provisioning (a schematic of the loop follows below)
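A schematic of the intended closed loop: periodically query measurement data and trigger a slice modification when an application-level metric degrades. The polling functions, thresholds, and identifiers below are placeholders, not the perfSONAR or ORCA APIs:

```python
import time

def fetch_throughput_mbps(link_id):
    """Placeholder for a measurement query (e.g. against perfSONAR archives)."""
    raise NotImplementedError

def request_slice_modification(slice_id, link_id, new_bw_mbps):
    """Placeholder for a 'modify slice' call to the control framework."""
    raise NotImplementedError

def feedback_loop(slice_id, link_id, floor_mbps=500, period_s=60):
    """Persistent query: poll a metric and adjust provisioning when it degrades."""
    while True:
        observed = fetch_throughput_mbps(link_id)
        if observed < floor_mbps:
            # Close the loop: ask for more capacity on the under-performing link.
            request_slice_modification(slice_id, link_id, new_bw_mbps=2 * floor_mbps)
        time.sleep(period_s)
```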

Slide14

Year 1
Assess performance of Hadoop in slices with varying topologies and link latencies
Convert the EFRC workflow to Pegasus; evaluate launching the workflow in a slice

Slide15

Solar Fuels: Creation of storable fuels using solar energy and catalysis
The Science
Research in solar fuels and photovoltaics will integrate light absorption and electron-transfer-driven catalysis
Molecular assemblies to create efficient devices for solar energy conversion through artificial photosynthesis
T. Meyer, J. Papanikolas, C. Heyer, Catalysis Letters 141 (2011) 1-7.
A Theoretical Framework
Co-design strategy for creation of new scalable codes
Incorporation of workflow technologies to coordinate, launch, and enhance resilience of the design pipeline
Apply the developed codes to solve complex problems in electronic structure, kinetics, and synthesis
Collaborations: working directly with experimentalists (UNC-CH) and with model and methods developers (Duke, UNC-CH)

Slide16

SciDAC-e Computational Goals
Establish a production computational environment in association with the theory side of the EFRC
Work directly with EFRC experimentalists
A usable coupled multi-scale framework for inverse problems, based on:
Workflow technology
Elastic resource provisioning
Development of new applications

Slide17

SciDAC-e: Enhanced Productivity of Materials Discovery: Computations for Solar Fuels and Next Generation Photovoltaics
DOE Office of Science, Advanced Scientific Computing Research. A supplement to R. J. Fowler’s existing SciDAC PERI project.
RENCI/UNC
Rob Fowler (PI): management, optimization, ...
Jeffrey Tilson (co-PI): management, chemistry liaison, coupling of codes in the optimization pipeline, ...
Paul Ruth: optimization workflow framework, ...
Rice University
John Mellor-Crummey (PI of collaborative proposal)
Nathan Tallent: performance analysis tools
LBNL
David Bailey (PI, collaborative): code tuning
Juan Meza: simulation-driven optimization, inverse problems
Anubhav Jain: high-throughput computing, grids, workflows
ORNL
Jeff Vetter (PI, collaborative): cross-code performance evaluation
Gabriel Marin: performance evaluation and tuning

Slide18

A multistep process for sustainable fuel creation
Dye-Sensitized Photoelectrosynthesis Cell (DSPEC): solar energy -> catalysts + abundant materials -> liquid fuel
Each step is a significant research project
Focus on the oxidation catalyst
Oxidation catalysts (image provided by UNC-CH EFRC: http://www.efrc.unc.edu/)

Slide19

Solar Fuels Workflow

Slide20

EFRC Workflow (diagram)
Workflow stages: mcdrt.x, mofmt.x, mcuft.x, argos.x, mcscf.x, and tran.x as serial steps (Condor/ORCA), and PSOCI.x under MPI (Hopper). (An illustrative driver sketch follows below.)
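A minimal sketch of that stage structure as plain Python; the executable names come from the slide, while the driver loop, the stage ordering, and the aprun MPI launcher are assumptions for illustration (in the project these stages are orchestrated by a workflow system such as Pegasus/Condor, not a hand-written loop):

```python
import subprocess

# Executable names as shown on the slide; ordering as listed there.
SERIAL_STEPS = ["mcdrt.x", "mofmt.x", "mcuft.x", "argos.x", "mcscf.x", "tran.x"]
MPI_STEP = "PSOCI.x"

def run_workflow(mpi_ranks=64):
    for exe in SERIAL_STEPS:
        # Serial stages: one process each, run in sequence (Condor/ORCA slice).
        subprocess.run(["./" + exe], check=True)
    # Final stage: parallel PSOCI.x under MPI on Hopper (launcher assumed here).
    subprocess.run(["aprun", "-n", str(mpi_ranks), "./" + MPI_STEP], check=True)
```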

Slide21

ORCA SC11 Demo
Create a slice spanning resources at different locations:
VMs at cloud sites at RENCI and Duke
A physical machine at NERSC (Hopper)
Linked by a bandwidth-guaranteed Layer 2 circuit across multiple network providers:
BEN, provisioned by ORCA across several layers
NLR FrameNet, dynamically provisioned via the Sherpa tool
StarLight (stitching via ORCA)
ESnet (via the ORCA interface to OSCARS)

Slide22

Demo resource/substrate providers

Slide23


Slide24

BEN Slice Detail

Slide25

Sample Result: Spin states of RuO2+
(Diagram) Molecular orbital energy-level diagrams for the electronic structure of bare RuO2+.

Slide26

Recent publications
Autonomic Cloud Network Orchestration: A GENI Perspective. I. Baldine, J. Chase, Y. Xin, D. Irwin, V. Marupadi, A. Mandal, C. Heermann, A. Yumerefendi. IEEE International Workshop on Management of Emerging Networks and Services (IEEE MENS 2010).
Embedding Virtual Topologies in Networked Clouds. Y. Xin, I. Baldine, A. Mandal, C. Heermann, J. Chase, A. Yumerefendi. In Proceedings of CFI 2011.
Provisioning and Evaluating Multi-domain Networked Clouds for Hadoop-based Applications. A. Mandal, Y. Xin, I. Baldine, P. Ruth, C. Heermann, J. Chase, V. Orlikowski, A. Yumerefendi. In Proceedings of IEEE CloudCom 2011.

Slide27

Future DROPS Directions
Ontology extensions
Native path finding using ontology models
Performance measurement resources
Application performance measurement metrics
Persistent query pub/sub for application performance measurements
Slice elasticity (a sketch of an elasticity rule follows below)
Allocate resources in reaction to workflow behavior
Move data to computation or vice versa
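A hedged sketch of the slice-elasticity idea: grow or shrink the compute in a slice based on observed workflow backlog. The function, thresholds, and worker sizing are illustrative, not a DROPS interface:

```python
def elasticity_decision(queued_tasks, running_workers,
                        tasks_per_worker=4, min_workers=1):
    """Return how many workers to add (positive) or release (negative) so the
    slice tracks the current workflow backlog. Thresholds are illustrative."""
    desired = max(min_workers, -(-queued_tasks // tasks_per_worker))  # ceiling division
    return desired - running_workers

# Examples: a backlog of 30 tasks on 4 workers grows the slice by 4 workers;
# an empty queue on 4 workers shrinks it by 3 (down to the minimum of 1).
print(elasticity_decision(30, 4))   # 4
print(elasticity_decision(0, 4))    # -3
```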

Slide28

Future Related Work
Algorithms to support more sophisticated resource co-scheduling
More substrate interfaces, e.g. NSI
Investigate other applications, e.g. NowCasting: cyberinfrastructure and instruments
OpenFlow on-ramps into virtualized slices
Through NSF TC, add ABAC authorization
Trusted cloud computing
A powerful mechanism and inference logic to process authorization decisions, allowing complex delegation chains with no single trust root
Ability to ask the system to explain an authorization decision

Slide29

ExoGENI
A testbed consisting of over a dozen ‘racks’ with virtualized and bare-metal compute nodes
Approximately 160 cores per rack
6 TB storage
IBM x3560M4 servers
IBM/BNT OpenFlow 10G/40G switch
Extensible to support experimental hardware
PCIe Gen 3 with Sandy Bridge
Placed on campuses across the country (and a few outside)
Dynamic circuit networks (NLR, I2, ESnet, BEN) connecting them on demand in arbitrary topologies
Support for GENI experimentation as well as experimentation with cyberinfrastructure in the computational sciences
Running ORCA software

Slide30

ExoGENI Software Stack


Slide31

Creating virtual topologies in ExoGENI

Slide32

Deployment
The first four racks will be operational by 06/12: RENCI, GPO/BBN, FIU, and UH
Follow-on racks to be deployed mostly by 03/13
How do I get on?
Come to a GEC (at UCLA, at MIT, etc.)
Talk to me
Talk to the GENI Project Office (BBN)
http://www.exogeni.net