Seeding Cloud-based services: Distributed Rate Limiting (DRL)

Presentation Transcript

Slide1

Seeding Cloud-based services: Distributed Rate Limiting (DRL)

Kevin Webb, Barath Raghavan, Kashi Vishwanath, Sriram Ramabhadran, Kenneth Yocum, and Alex C. Snoeren

Slide2

Seeding the Cloud

Technologies to deliver on the promise of cloud computing

- Previously: process data in the cloud (Mortar)
  - Produced/stored across providers
  - Find Ken Yocum or Dennis Logothetis for more info
- Today: control resource usage (“cloud control”) with DRL
  - Use resources at multiple sites (e.g., a CDN)
  - Complicates resource accounting and control
  - Provide cost control

Slide3

DRL Overview

Example: cost control in a Content Distribution Network

Abstraction: enforce a global rate limit across multiple sites

Simple example: 10 flows, each limited as if there were a single, central limiter (the arithmetic is sketched below)

[Figure: three sites, each with its own limiter between a Src and a Dst. DRL enforces a 100 KB/s global limit over 10 flows: the site with 2 flows gets 20 KB/s and the site with 8 flows gets 80 KB/s.]

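To make the arithmetic concrete, here is a minimal Python sketch of the abstraction; the site names are illustrative, and the numbers match the example above:

```python
# Minimal sketch of the DRL abstraction: split a global limit so that
# each flow gets the share a single, central limiter would give it.
# Site names and flow counts match the slide's example.

GLOBAL_LIMIT_KBPS = 100                      # global rate limit, KB/s
flows_per_site = {"site_a": 2, "site_b": 8}  # 10 flows in total

total_flows = sum(flows_per_site.values())
per_flow = GLOBAL_LIMIT_KBPS / total_flows   # 10 KB/s per flow

for site, n in flows_per_site.items():
    print(site, n * per_flow)                # site_a: 20.0, site_b: 80.0
```
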
Slide4

Goals & Challenges

Up to now:
- Developed the architecture and protocols for distributed rate limiting (SIGCOMM 2007)
- A particular approach (FPS) is practical in the wide area

Current goals:
- Move DRL out of the lab and impact real services
- Validate the SIGCOMM results under real-world conditions
- Provide an Internet testbed with the ability to manage bandwidth in a distributed fashion
- Improve the usability of PlanetLab

Challenges:
- Run-time overheads: CPU, memory, communication
- Environment: link/node failures, software quirks

Slide5

PlanetLab

- World-wide testbed for networking and systems research
- Resources donated by universities, labs, etc.
- Experiments divided into VMs called “slices” (Vservers)

[Figure: a central Controller (PostgreSQL, the PLC API, and a web server on Linux 2.6) manages the nodes over the Internet; each node runs Linux 2.6 with Vservers hosting Slice 1 through Slice N.]

Slide6

PlanetLab Use Cases

PlanetLab needs DRL!
- Donated bandwidth
- Ease of administration

- Machine room: limit local-area nodes to a single rate
- Per slice: limit experiments in the wide area
- Per organization: limit all slices belonging to an organization

Slide7

PlanetLab Use Cases

Machine room: limit local-area nodes with a single rate

[Figure: five machine-room nodes, each sending 1 MBps, with DRL on every node cooperating to enforce a single 5 MBps limit.]

Slide8

DRL Design

Each limiter runs a main event loop (sketched below):
- Estimate: observe and record outgoing demand
- Allocate: determine each node's share of the rate
- Enforce: drop packets

Two allocation approaches:
- GRD: Global Random Drop (packet granularity)
- FPS: Flow Proportional Share (flow count as a proxy for demand)

[Figure: at a regular interval, each limiter runs Estimate → Allocate (FPS) → Enforce over its input and output traffic, exchanging updates with the other limiters.]
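As a rough illustration of that loop, here is a minimal Python sketch; the real limiter is a C plug-in inside ulogd, and the names, interval, and peer interface here are all assumptions:

```python
import time

GLOBAL_LIMIT = 12_500_000   # bytes/s shared by all limiters (assumed value)
INTERVAL = 0.5              # loop interval in seconds (assumed value)

def limiter_loop(local, peers, enforcer):
    """One limiter's Estimate -> Allocate -> Enforce cycle, FPS-style."""
    while True:
        # Estimate: observe local outgoing demand (FPS uses flow count).
        my_flows = local.count_flows()

        # Exchange estimates with the other limiters (mesh or gossip).
        total_flows = my_flows + sum(p.flow_count() for p in peers)

        # Allocate: FPS gives this node a share of the global limit
        # proportional to its flow count.
        my_share = GLOBAL_LIMIT * my_flows / max(total_flows, 1)

        # Enforce: install the new local rate limit (e.g., reconfigure HTB).
        enforcer.set_rate(my_share)

        time.sleep(INTERVAL)
```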

Slide9

Implementation Architecture

Abstractions:
- Limiter: handles communication; manages identities
- Identity: parameters (limit, interval, etc.); machines and subsets

Built upon standard Linux tools:
- Userspace packet logging (ulogd)
- Hierarchical Token Bucket (HTB)
- Mesh & gossip update protocols (illustrated below)
- Integrated with PlanetLab software

[Figure: the Estimate → FPS → Enforce pipeline, driven at a regular interval, with ulogd supplying the input data for estimation and HTB enforcing limits on the output data.]
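The deck doesn't detail the update protocols; as a hypothetical illustration, one gossip round might look like this (the port, message format, and peer list are all assumptions):

```python
import json
import random
import socket

UPDATE_PORT = 9000   # hypothetical UDP port for limiter updates

def gossip_round(my_id, my_flow_count, peer_hosts):
    """Send this limiter's demand estimate to one randomly chosen peer.

    If every limiter does this each interval, estimates spread through
    the mesh quickly without all-to-all communication.
    """
    peer = random.choice(peer_hosts)
    msg = json.dumps({"id": my_id, "flows": my_flow_count}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (peer, UPDATE_PORT))
```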

Slide10

Estimation using ulogd

- Userspace logging daemon, already used by PlanetLab for efficient abuse tracking
- Packets are tagged with a slice ID by iptables
- DRL receives outgoing packet headers via a netlink socket
- DRL is implemented as a ulogd plug-in (toy sketch below):
  - Gives us efficient flow accounting for estimation
  - Executes the Estimate, Allocate, Enforce loop
  - Communicates with the other limiters
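A toy version of the flow accounting the plug-in performs; the real code consumes netlink packet headers inside ulogd, and the field names here are made up:

```python
from collections import defaultdict

# flows[slice_id] holds the active 5-tuples seen for that slice;
# bytes_out[slice_id] accumulates its outgoing bytes. Both feed Estimate.
flows = defaultdict(set)
bytes_out = defaultdict(int)

def account(pkt):
    """Record one outgoing packet header (dict fields are illustrative)."""
    five_tuple = (pkt["src"], pkt["dst"], pkt["sport"],
                  pkt["dport"], pkt["proto"])
    flows[pkt["slice_id"]].add(five_tuple)
    bytes_out[pkt["slice_id"]] += pkt["len"]

def flow_count(slice_id):
    """The flow count FPS uses as its proxy for demand."""
    return len(flows[slice_id])
```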

Slide11

Enforcement with Hierarchical Token Bucket (HTB)

- Part of Linux Advanced Routing & Traffic Control
- A hierarchy of rate limits enforces DRL's rate limit
- Packets are attributed to leaves (slices)
- Packets move up the tree, borrowing tokens from parents (toy sketch below)

[Figure: an HTB tree (Root; interior nodes X, A, Y, Z; leaves B, C, D). A 1500-byte packet charged to a leaf consumes the leaf's tokens and borrows the remainder from its ancestors, draining their token counts.]
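A stripped-down sketch of the borrowing behavior; real HTB lives in the Linux kernel and is configured with tc, so this toy model only shows tokens flowing up the tree:

```python
class Bucket:
    """Toy token-bucket node; rates and ceilings omitted for brevity."""

    def __init__(self, tokens, parent=None):
        self.tokens = tokens   # bytes currently available at this node
        self.parent = parent

    def send(self, size):
        """Spend tokens here, borrowing any shortfall from ancestors."""
        spend = min(self.tokens, size)
        self.tokens -= spend
        shortfall = size - spend
        if shortfall == 0:
            return True
        if self.parent is not None and self.parent.send(shortfall):
            return True
        self.tokens += spend   # no ancestor could cover it; roll back
        return False

# A leaf with 1000 bytes of tokens borrows 500 from its parent
# to send a 1500-byte packet, as in the figure.
root = Bucket(tokens=600)
leaf = Bucket(tokens=1000, parent=root)
print(leaf.send(1500))   # True; root is left with 100 tokens
```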

Slide12

Enforcement with Hierarchical Token Bucket (HTB)

- Uses the same tree structure as PlanetLab
- Allows efficient control of sub-trees
- Updated every loop:
  - The root limits the whole node
  - Each level is replenished

[Figure: the same HTB tree (Root; X, A, Y, Z; leaves B, C, D).]

Slide13

Citadel Site

- The Citadel (2 nodes) wanted a 1 Mbps traffic limit
- Added a (horrible) traffic shaper with poor responsiveness (2–15 seconds)
- Running right now!
  - DRL cycles on and off every four minutes
  - Lets us observe DRL's impact without ground truth

[Figure: the traffic shaper and DRL at the Citadel site.]

Slide14

Citadel Results – Outgoing Traffic

Data logged from the running nodes. Takeaways:
- Without DRL, traffic is way over the limit
- One node sends more than the other

[Plot: outgoing traffic over time against the 1 Mbit/s limit, with DRL's four-minute on/off cycles labeled.]

Slide15

Citadel Results – Flow Counts

[Plot: number of flows over time.]

FPS uses flow count as a proxy for demand.

Slide16

Citadel Results – Limits and Weights

[Plot: rate limit and FPS weight over time.]

Slide17

Lessons Learned

- Flow counting is not always the best proxy for demand
  - FPS state transitions were irregular
  - Added checks and dampening/hysteresis in the problem cases (sketched below)
- Can estimate after enforce
  - ulogd only shows packets after HTB
- FPS is forgiving of software limitations
- HTB is difficult
  - The HYSTERESIS variable
  - TCP segmentation offloading
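The deck doesn't show the dampening code; one plausible shape for it (the smoothing factor and threshold are assumed) is an exponentially weighted moving average plus a minimum-change filter:

```python
ALPHA = 0.2         # EWMA smoothing factor (assumed)
HYSTERESIS = 0.05   # ignore limit changes smaller than 5% (assumed)

def dampened_update(current_limit, new_estimate):
    """Smooth FPS's new rate estimate and suppress small oscillations."""
    smoothed = (1 - ALPHA) * current_limit + ALPHA * new_estimate
    if abs(smoothed - current_limit) < HYSTERESIS * current_limit:
        return current_limit   # change too small; hold the limit steady
    return smoothed
```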

Slide18

Ongoing work

- Other use cases
- Larger-scale tests
- Complete the PlanetLab administrative interface
- Standalone version
- Continue the DRL rollout on PlanetLab (UCSD's PlanetLab nodes soon)

Slide19

Questions?

Code is available from the PlanetLab svn:
http://svn.planet-lab.org/svn/DistributedRateLimiting/

Slide20
Slide21

Citadel Results