Presentation Transcript

Slide 1

SDN(++) and Interesting Use-Cases

Lecture 18, Aditya Akella

Slide 2

Two Parts

SDN for L3-L7 services
What is different?
Why can’t we use SDN/OpenFlow as is?

New use cases
Live Network Migration
What else?

Slide 3

Toward Software-Defined Middlebox Networking

Aaron Gember, Prathmesh Prabhu, Zainab Ghadiyali, Aditya Akella
University of Wisconsin-Madison

Slide 4

Middlebox Deployment Models

Arbitrary middlebox placement

New forms of middlebox deployment (VMs, ETTM [NSDI 2011], CoMB [NSDI 2012])

Slide 5

Move between software-defined data centers

Existing VM and network migration methods
Unsuitable for changing underlying substrate

Live Data Center Migration

[Figure: ensemble moves from Data Center A to Data Center B]

Programmatic control over middlebox state

Slide 6

Add or remove middlebox VMs based on load

Clone VM (logic, policy, and internal state)
Unsuitable for scaling down or some scaling up

Middlebox Scaling

Fine-grained control

Slide 7

Our Contributions

Classify middlebox state, and discuss what should be controlled
Abstractions and interfaces:
Representing state
Manipulating where state resides
Announcing state-related events
Control logic design sketches

Slide 8

SDN-like Middleboxes

[Figure: "Today", middleboxes (e.g., an IPS) are configured individually; under "Software-Defined Middlebox Networking", apps on a controller manage the middleboxes]

Slide 9

Key Issues

How is the logic divided?
Where is state manipulated?
What interfaces are exposed?

[Figure: controller running apps, connected to middleboxes]

Slide 10

Middlebox State

Configuration input (e.g., Balance Method: Round Robin; Cache size: 100)
+ detailed internal records (e.g., Src: HostA, Server: B, Proto: TCP, Port: 22; State: ESTAB, Seq#: 3423; CPU: 50%; Hash: 34225, Content: ABCDE)

Significant state diversity

Slide 11

Classification of State

Action: state that determines how packets are handled (e.g., Src: HostA, Server: B, Proto: TCP, Port: 22)
Supporting: internal and dynamic, in many forms (e.g., State: ESTAB, Seq#: 3423)
Tuning: e.g., CPU: 50%; Hash: 34225, Content: ABCDE
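The three categories above can be sketched as tagged records. This is a minimal illustration, not code from the talk; the names (`Category`, `StateRecord`) are hypothetical, and the "what must migrate" rule follows from the slide's claim that tuning state affects only performance, not correctness.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model of the slide's state categories: action state drives
# packet handling, supporting state is internal bookkeeping, tuning state
# affects performance only.
class Category(Enum):
    ACTION = "action"
    SUPPORTING = "supporting"
    TUNING = "tuning"

@dataclass
class StateRecord:
    category: Category
    description: str

records = [
    StateRecord(Category.ACTION, "Src: HostA -> Server: B (TCP port 22)"),
    StateRecord(Category.SUPPORTING, "State: ESTAB, Seq#: 3423"),
    StateRecord(Category.TUNING, "CPU: 50%"),
]

# Since tuning state affects only performance, a control application could
# safely skip it when moving a flow and let the destination rebuild it.
must_migrate = [r for r in records if r.category != Category.TUNING]
print(len(must_migrate))  # 2
```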

(Tuning state affects only performance, not correctness)

Slide 12

How to Represent State?

Candidate representations: a policy language, or a raw binary blob (1000101 1101010 ...)
Challenges: unknown structure, significant diversity, state may be shared
State can be per-flow or shared
Leverage the commonality among middlebox operations

Slide 13

State Representation

Key: protocol header field/value pairs identify traffic subsets to which state applies (Field1 = Value1, ..., FieldN = ValueN)
Action: transformation function to change parts of packet to new constants (Offset1 -> Const1, ..., OffsetN -> ConstN)
Supporting: binary blob
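The Key/Action/Supporting triple can be sketched as a small data model. A minimal sketch only: `StateEntry`, `matches`, and `apply_action` are hypothetical names, and real keys would support prefixes and wildcards rather than exact match.

```python
from dataclasses import dataclass

@dataclass
class StateEntry:
    key: dict                # header field -> value, e.g. {"DPort": 22}
    action: dict             # byte offset -> new constant byte value
    supporting: bytes = b""  # opaque, vendor-specific blob

def matches(entry: StateEntry, headers: dict) -> bool:
    """Key matches when every field/value pair agrees with the packet headers."""
    return all(headers.get(f) == v for f, v in entry.key.items())

def apply_action(entry: StateEntry, packet: bytearray) -> bytearray:
    """Apply the action: overwrite each offset with its constant."""
    for offset, const in entry.action.items():
        packet[offset] = const
    return packet

entry = StateEntry(key={"DPort": 22}, action={0: 0xFF})
pkt = bytearray(b"\x00\x01\x02")
if matches(entry, {"DPort": 22, "SrcIP": "10.10.54.41"}):
    apply_action(entry, pkt)
print(pkt[0])  # 255
```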

Limitations: only suitable for per-flow state; not fully vendor independent

Slide 14

How to Manipulate State?

Today: only control some state (constrains flexibility and sophistication)
Manipulate all state at the controller? (removes too much functionality from middleboxes)

Slide 15

State Manipulation

Control over state placement
Broad operations interface
Expose state-related events

[Figure: controller determines where state resides; IPS 1 and IPS 2 create and update state]

Slide 16

Operations Interface

get(Filter: SrcIP = 10.10.54.41) returns matching state, e.g.:
Key: SrcIP = 10.10.54.41, DstIP = 10.20.1.23, SPort = 12983, DPort = 22; State = ESTAB; plus supporting state
Key: SrcIP = 10.10.0.0/16, DPort = 22; Action: *

add(Key: DstIP = 10.20.1.0/24, Action: DROP) installs, e.g.:
Source = *, Destination = 10.20.1.0/24, Proto = TCP, Other = *, Action = DROP

remove(Filter) deletes matching state

Potential for invalid manipulations of state
Need atomic blocks of operations
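The get/add/remove interface above can be sketched as an in-process table. This is an illustration under assumptions: `StateTable` and its filter semantics (exact match on every filter field) are hypothetical, and a real deployment would carry these operations over a control protocol to the middlebox.

```python
class StateTable:
    """Hypothetical sketch of a middlebox's controllable state table."""

    def __init__(self):
        self.entries = []  # list of (key, action, supporting) tuples

    def add(self, key: dict, action: str, supporting: bytes = b"") -> None:
        self.entries.append((key, action, supporting))

    def _match(self, key: dict, flt: dict) -> bool:
        # A filter matches an entry whose key agrees on every filter field.
        return all(key.get(f) == v for f, v in flt.items())

    def get(self, flt: dict) -> list:
        return [e for e in self.entries if self._match(e[0], flt)]

    def remove(self, flt: dict) -> int:
        before = len(self.entries)
        self.entries = [e for e in self.entries if not self._match(e[0], flt)]
        return before - len(self.entries)  # number of entries deleted

table = StateTable()
table.add({"DstIP": "10.20.1.0/24", "Proto": "TCP"}, "DROP")
table.add({"SrcIP": "10.10.54.41", "DPort": 22}, "FORWARD")
print(len(table.get({"SrcIP": "10.10.54.41"})))  # 1
print(table.remove({"Proto": "TCP"}))            # 1
```

The slide's point about atomic blocks applies here: a sequence of `add`/`remove` calls would need to be wrapped in a transaction so a crash between calls cannot leave the table in an invalid intermediate state.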

Slide 17

Events Interface

Triggers: created/updated state; state required to complete an operation
Contents: key; copy of packet?; copy of new state?

[Figure: firewall sends events to controller]
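The events interface can be sketched as follows. All names here (`StateEvent`, `Controller.on_event`) are hypothetical; the point is only that each event carries a key plus optional packet and state copies, matching the "Contents" list above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StateEvent:
    trigger: str                        # "created", "updated", or "state_needed"
    key: dict                           # traffic subset the state applies to
    packet: Optional[bytes] = None      # optional copy of the triggering packet
    new_state: Optional[bytes] = None   # optional copy of the new state

class Controller:
    def __init__(self):
        self.log = []

    def on_event(self, ev: StateEvent) -> None:
        # A real controller would run app logic here (e.g., redistribute
        # state across middlebox instances); we just record the event.
        self.log.append(ev)

ctrl = Controller()
# A firewall announces newly created state, sending only the key; omitting
# the packet copy keeps overhead low at the cost of controller visibility.
ctrl.on_event(StateEvent("created", {"SrcIP": "10.10.54.41", "DPort": 22}))
print(len(ctrl.log))  # 1
```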

Balance visibility and overhead

Slide 18

Conclusion

Need fine-grained, centralized control over middlebox state to support rich scenarios
Challenges: state diversity, unknown semantics

get/add/remove over Key (Field1 = Value1, ...), Action (Offset1, Const1, ...), and Supporting (binary blob)

Slide 19

Open Questions

Encoding supporting state/other action state?

Preventing invalid state manipulations?

Exposing events with sufficient detail?

Maintaining operation during state changes?
Designing a variety of control logics?
Providing middlebox fault tolerance?

Slide 20

Related Work

Simple Middlebox Configuration protocol [RFC 4540]
Modeling middleboxes [IEEE Network 2008]
Stratos: middleboxes in clouds [UW-Madison TR]
ETTM: middleboxes in hypervisors [NSDI 2011]
COnsolidated MiddleBoxes [NSDI 2012]
Efficiently migrating virtual middleboxes [SIGCOMM 2012 Poster]
Live Migration of an Entire Network [HotNets 2012]

Slide 21

3) Virtual Server Provisioning

Add VMs with security or performance needs

Modify and/or add to MB policy

Middlebox-specific configuration interfaces

Distributed, manual process => high complexity

Programmatic and centralized control

Slide 22

Live Migration of an Entire Network (and its Hosts)

Eric Keller, Soudeh Ghorbani, Matthew Caesar, Jennifer Rexford
HotNets 2012

Slide 23

Widely supported to help:
Consolidate to save energy
Re-locate to improve performance

Virtual Machine Migration

[Figure: VMs (Apps + OS) moving between hypervisors]

Slide 24

Many VMs working together

But Applications Look Like This

Slide 25

Networks have increasing amounts of state

And Rely on the Network

State types: configuration, learned, software-defined

Slide 26

Joint (virtual) host and (virtual) network migration

Ensemble Migration

No re-learning, no re-configuring, no re-calculating
Capitalize on redundancy

Slide 27

Some Use Cases

Slide 28

Customer driven: for cost, performance, etc.
Provider driven: offload when too full

1. Moving between cloud providers

Slide 29

Reduce energy consumption (turn off servers, reduce cooling)

2. Moving to smaller set of servers

Slide 30

Migrate ensemble to infrastructure dedicated to testing (special equipment)

3. Troubleshooting

Slide 31

Automated migration according to some objective, plus easy manual input

Goal: General Management Tool

[Figure: monitoring and an objective (plus manual input) drive ensemble migration automation]

Slide 32

LIve Migration of Ensembles (LIME)

[Figure: the LIME layer (migration primitives, migration orchestration, network virtualization) exposes an API to the operator/automation and sits between tenant control of a virtual topology and the software-defined network with virtualized servers]

Migration is transparent

Slide 33

Why Transparent?

Slide 34

Separate Out Functionality

[Figure: tenant control over a virtual topology, on top of network virtualization]

Slide 35

Separate Out Functionality

[Figure: migration primitives and migration orchestration added alongside network virtualization, beneath tenant control]

Slide 36

Multi-tenancy

[Figure: multiple tenants, each with tenant control of its own virtual topology; the infrastructure operator runs the migration primitives, migration orchestration, and network virtualization]

Slide 37

Can we base it off of VM migration?
Iteratively copy state
Freeze VM
Copy last delta of state
Un-freeze VM on new server

How to Live Migrate an Ensemble
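The four pre-copy steps above can be sketched in a few lines, with dicts standing in for VM memory. A sketch under stated assumptions: `live_migrate` and the dirty-page bookkeeping are hypothetical simplifications of what a hypervisor actually tracks.

```python
def live_migrate(src: dict, writes: list, max_rounds: int = 5) -> dict:
    """Iteratively copy state; freeze; copy last delta; resume on dst."""
    dst = dict(src)                    # full copy while the VM keeps running
    rounds = 0
    while writes and rounds < max_rounds:
        key, value = writes.pop(0)     # the running VM dirties a page...
        src[key] = value
        dst[key] = value               # ...so an iterative round re-copies it
        rounds += 1
    for key, value in writes:          # writes that outran the round limit
        src[key] = value
    dst.update(src)                    # freeze VM, copy the last delta
    return dst                         # un-freeze VM on the new server

src = {"page0": "a", "page1": "b"}
pending_writes = [("page1", "b2"), ("page2", "c")]
dst = live_migrate(src, pending_writes, max_rounds=1)
print(dst == src)  # True
```

The freeze window (the final `dst.update`) is what causes downtime; the iterative rounds exist to shrink the last delta so that window stays short.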

Slide 38

Applying to Ensemble

Iterative copy

Slide 39

Applying to Ensemble

Freeze and copy

Slide 40

Applying to Ensemble

Resume

Slide 41

Applying to Ensemble

Resume

Complex to implement

Downtime potentially large

Slide 42

Applying to Whole Network

Iterative copy

Slide 43

Applying to Whole Network

Freeze and copy

Slide 44

Applying to Whole Network

Resume

Slide 45

Applying to Whole Network

Resume

Lots of packet loss

Lots of “backhaul” traffic

Slide 46

Applying to Each Switch

Iterative copy

Slide 47

Applying to Each Switch

Freeze and copy

Slide 48

Applying to Each Switch

Resume

Slide 49

Applying to Each Switch

Resume

Bursts of packet loss

Even more “backhaul” traffic

Long total time

Slide 50

Clone the network
Migrate the VMs individually (or in groups)

A Better Approach
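The clone-then-migrate approach can be sketched as a small orchestration loop: copy the switch state to the destination, keep both copies running, then move VMs in groups. All names here are hypothetical; real orchestration would push the cloned rules through an SDN controller.

```python
def clone_and_migrate(net_state: dict, vms: list, group_size: int = 2):
    """Clone network state, then migrate VMs in groups; returns (dst_net, dst_vms)."""
    dst_net = dict(net_state)          # clone the network: copy all rules
    src_vms, dst_vms = list(vms), []
    while src_vms:                     # both network copies stay operational,
        group = src_vms[:group_size]   # so VMs can move a few at a time
        del src_vms[:group_size]
        dst_vms.extend(group)
        # While src_vms is non-empty, traffic between migrated and
        # un-migrated VMs traverses the inter-data-center link.
    return dst_net, dst_vms

net = {"flow1": "fwd:port2"}
dst_net, dst_vms = clone_and_migrate(net, ["vm1", "vm2", "vm3"])
print(dst_net == net, dst_vms)
```

Because the cloned network is live before any VM moves, no packet is ever in flight toward a frozen switch, which is the source of the packet loss in the earlier approaches.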

Slide 51

Clone the Network

Copy

state

Slide 52

Clone the Network

Cloned Operation

Slide 53

Clone the Network

Migrate

VMs

Slide 54

Clone the Network

Migrate

VMs

Slide 55

Minimizes backhaul traffic
No packet loss associated with the network (network is always operational)

Clone the Network

Slide 56

Same guarantees as migration-free operation
Preserve application semantics

Consistent View of a Switch

[Figure: the application sees one Switch_A; the physical reality beneath the migration primitives, migration orchestration, and network virtualization layers is two clones, Switch_A_0 and Switch_A_1]

Slide 57

Sources of Inconsistency

[Figure: a VM (end host) sends Packet 0 and Packet 1 through clones Switch_A_0 and Switch_A_1, each holding rules R1 and R2]

Migration-free baseline: packet 0 and packet 1 traverse the same physical switch

Slide 58

1. Local Changes on Switch

(e.g., delete rule after idle timeout)

[Figure: Switch_A_0 still holds R1 and R2 while Switch_A_1 has dropped a rule, so Packet 0 and Packet 1 from the same VM (end host) see different state]

Slide 59

2. Update from Controller

(e.g., rule installed at different times)

[Figure: Install(R_new) reaches Switch_A_0 before Switch_A_1, so Packet 0 matches R_new while Packet 1 still matches only R1 and R2]

Slide 60

3. Events to Controller

(e.g., forward and send to controller)

[Figure: both clones emit Packet-in events; Packet-in(pkt 1) may be received at the controller before Packet-in(pkt 0), reordering what the apps observe]

Slide 61

Consistency in LIME

[Figure: one logical Switch_A presented above the migration layer; clones Switch_A_0 and Switch_A_1 below]

Restrict use of some features
Use a commit protocol
Emulate HW functions
Combine information
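The commit-protocol bullet can be illustrated with a two-phase rule install across the clones: a rule is matched by every clone or by none, so packets never observe a rule on one clone but not the other. This is a sketch under assumptions; `SwitchClone` and `install` are hypothetical names, not LIME's actual API.

```python
class SwitchClone:
    def __init__(self):
        self.rules, self.staged = set(), set()

    def prepare(self, rule: str) -> bool:
        self.staged.add(rule)   # stage; packets do not match staged rules yet
        return True             # a real switch could refuse (e.g., table full)

    def commit(self) -> None:
        self.rules |= self.staged
        self.staged.clear()

    def abort(self) -> None:
        self.staged.clear()

def install(clones: list, rule: str) -> bool:
    """Two-phase install: every clone prepares, then every clone commits."""
    if all(c.prepare(rule) for c in clones):
        for c in clones:
            c.commit()
        return True
    for c in clones:            # any refusal aborts the whole update
        c.abort()
    return False

clones = [SwitchClone(), SwitchClone()]
ok = install(clones, "R_new")
print(ok, all("R_new" in c.rules for c in clones))  # True True
```

This directly addresses inconsistency source 2 (update from controller); sources 1 and 3 are what the "restrict features" and "combine information" bullets handle.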

Slide 62

Conclusions and Future Work

LIME is a general and efficient migration layer
Hope: future SDNs are made migration friendly
Develop models and prove correctness? ("Observational equivalence" of end hosts and network)
Develop a general migration framework: control over grouping, order, and approach?
