Presentation Transcript

Slide 1

An Improved Hop-by-hop Interest Shaper for Congestion Control in Named Data Networking

Yaogong Wang, NCSU
Natalya Rozhnova, UPMC
Ashok Narayanan, Cisco
Dave Oran, Cisco
Injong Rhee, NCSU

Slide 2

Traditional congestion control

Data packets are assumed to consume bandwidth
Too many data packets cause congestion
Congestion is detected by monitoring data packet transfer: queue depth at midpoints, delay and/or loss (modulo ECN) at endpoints
Assumes that non-data traffic (TCP ACKs) does not cause congestion
Reasonable, since an ACK is 40 bytes while a full TCP segment carries 1460 bytes

Slide 3

NDN congestion control

Two important factors to consider:
Receiver-driven: one interest generates one data packet
Symmetric: content retrieved in response to an interest traverses the same path in reverse
Content load forwarded on a link is directly related to interests previously received on that link
Given these properties, shaping interests can serve to control content load and therefore proactively avoid congestion.
There are multiple schemes that rely on slowing down interests to achieve congestion avoidance or resolution
But detecting the congestion in question is not simple, because it appears on the other side of the link where interests can be slowed

Slide 4

Interest shaping

Different schemes have been proposed:
HoBHIS
First successful scheme; demonstrated the feasibility of this method
Slows down interests on the hop after congestion
Relies on backpressure to alleviate congestion
ICP/HR-ICP
Runs a per-flow AIMD scheme to manage outstanding interests
Tracks estimated RTT as a mechanism to rapidly detect congestion and loss
Endpoints control flow requests by shaping the interest issue rate
Main congestion control operates end-to-end, with some hop-by-hop shaping for special cases

Slide 5

Basic interest shaping

Assume a constant ratio r of content size to interest size
Simple unidirectional flow with link rate c
An ingress interest rate of c/r causes an egress content rate of c
If we shape the egress interest rate to c/r, the remote content queue will not be overloaded
There are issues with varying content size, size ratio, link rate, etc.
But the biggest issue is…
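The basic shaping rule can be sketched in a few lines (a minimal illustration; the function name and units are ours, not from the slides):

```python
def basic_interest_shaping_rate(link_rate: float, ratio: float) -> float:
    """Egress interest rate that fills the remote content queue at
    exactly the link rate: interests sent at link_rate / ratio each
    pull back `ratio` bytes of content per interest byte."""
    return link_rate / ratio

# Example: 10 Mbps link, 25 B interests answered by 1 KB data (r = 40),
# so shaping interests to 250 kbps yields 10 Mbps of returning content.
rate = basic_interest_shaping_rate(10e6, 1000 / 25)  # 250000.0
```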

Slide 6

What about interests?

Interests consume bandwidth (specifically, c/r in the reverse direction)
Bidirectional data flow also implies bidirectional interest flow
Therefore, the reverse path is not available to carry c bandwidth of data; it also needs to carry some interests
Similarly, the rate of interests carried in the reverse direction cannot budget the forward path entirely for data; it needs to leave space for forward interests as well
Ordinarily there is no way to predict, and therefore account for, interests coming in the other direction, but…
There is a recursive dependence between the interest shaping rates in the forward and reverse directions.

Slide 7

Problem overview

(figure-only slide; diagram not recoverable from the transcript)

Slide 8

Problem formulation

We can formulate a mutual bidirectional optimization as follows
u(.) is the link utility function
It must be proportionally fair in both directions, to avoid starvation
We propose log(s) as the utility function
i1 = received forward interest load
i2 = received reverse interest load
c1 = forward link bandwidth
c2 = reverse link bandwidth
r1 = ratio of received content size to sent interest size
r2 = ratio of sent content size to received interest size
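The slide's equation image did not survive the transcript; from the variable definitions above, the optimization can plausibly be written as follows, with s1 and s2 denoting the forward and reverse interest shaping rates (our reconstruction, not the original figure):

```latex
\begin{aligned}
\max_{s_1,\, s_2}\quad & u(s_1) + u(s_2), \qquad u(s) = \log(s) \\
\text{s.t.}\quad & s_1 + r_2\, s_2 \le c_1
  && \text{(forward link: forward interests + content answering reverse interests)} \\
& r_1\, s_1 + s_2 \le c_2
  && \text{(reverse link: content answering forward interests + reverse interests)} \\
& 0 < s_1 \le i_1, \quad 0 < s_2 \le i_2
\end{aligned}
```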

Slide 9

Optimal solution

The feasible region is convex
First solve for infinite load in both directions
Optimal solutions are at the Lagrange points marked with X
If the Lagrange points do not lie within the feasible region (the most common case), convert to equality constraints and solve
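With log utility, the candidate points described above can be enumerated directly: the Lagrange point of each single active constraint, plus the intersection of both constraints taken as equalities. A sketch, assuming the two link constraints s1 + r2·s2 ≤ c1 and r1·s1 + s2 ≤ c2 from the formulation (our reconstruction, not the paper's code):

```python
import math

def optimal_shaping_rates(c1, c2, r1, r2):
    """Maximize log(s1) + log(s2) subject to
       s1 + r2*s2 <= c1  and  r1*s1 + s2 <= c2
    (infinite load in both directions). Candidates: the Lagrange point
    of each constraint taken alone, plus the intersection of both
    constraints as equalities (used when neither point is feasible)."""
    candidates = [
        (c1 / 2, c1 / (2 * r2)),   # only the forward-link constraint active
        (c2 / (2 * r1), c2 / 2),   # only the reverse-link constraint active
    ]
    det = 1 - r1 * r2
    if det != 0:
        # Intersection of the two equality constraints.
        candidates.append(((c1 - r2 * c2) / det, (c2 - r1 * c1) / det))

    eps = 1e-6 * max(c1, c2)       # tolerance for floating-point equality

    def feasible(p):
        s1, s2 = p
        return (s1 > 0 and s2 > 0 and
                s1 + r2 * s2 <= c1 + eps and
                r1 * s1 + s2 <= c2 + eps)

    # Raises ValueError if no candidate is feasible.
    return max((p for p in candidates if feasible(p)),
               key=lambda p: math.log(p[0]) + math.log(p[1]))

# Symmetric 10 Mbps link, content 40x the interest size: both constraints
# bind, and each direction gets the same shaping rate, 10e6 / 41.
s1, s2 = optimal_shaping_rates(10e6, 10e6, 40, 40)
```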

Slide 10

Finite load scenarios

The optimal shaping rate assumes unbounded load in both directions
We can't model instantaneously varying load in a closed-form solution
If one direction is underloaded, fewer interests need to travel in the reverse direction to generate the lower load
As a result, the local shaping algorithm need not leave as much space for interests in the reverse direction
Extreme case: unidirectional traffic flow
The actual shaping rate needs to vary between two extremes depending on the actual load on the reverse path
BUT, we don't want to rely on signaling reverse-path load

Slide 11

Practical interest shaping algorithm

We observe that each side can independently compute both expected shaping rates
Our algorithm observes the incoming interest rate, compares it to the expected incoming interest shaping rate, and adjusts the outgoing interest rate between these two extremes
On the router, interests and contents are separated into output queues. Interests are shaped as per the equation above, and contents flow directly to the output queue.
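The slide does not give the exact adjustment rule; one plausible sketch interpolates linearly between the bidirectional optimum and the unidirectional extreme, driven by the observed reverse-path interest load (function and parameter names are ours, and the linear rule is an assumption, not the paper's formula):

```python
def adjusted_shaping_rate(s1_opt: float, s2_opt: float,
                          c2: float, r1: float,
                          observed_reverse_rate: float) -> float:
    """Forward interest shaping rate between two extremes:
      - reverse path idle: the reverse link carries only our content,
        so we may shape up to c2 / r1;
      - reverse path fully loaded: fall back to the bidirectional
        optimum s1_opt.
    observed_reverse_rate is the incoming interest rate we measure,
    compared against s2_opt, the shaping rate we expect the peer to
    use. The linear interpolation is illustrative only."""
    load = min(1.0, observed_reverse_rate / s2_opt)
    s1_max = c2 / r1                  # unidirectional extreme
    return s1_max - (s1_max - s1_opt) * load
```

No signaling is needed: both extremes are computed locally, and the only measured input is the interest rate already arriving on the link.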

Slide 12

Explicit congestion notification

When an interest cannot be enqueued into the interest shaper queue, it is rejected
Instead of dropping it, we return it to the downstream hop in the form of a "Congestion-NACK"
This NACK is forwarded back towards the client in place of the requested content
It consumes the PIT entries on the way
Note that the bandwidth consumed by this NACK has already been accounted for by the interest that caused it to be generated
Therefore, in our scheme Congestion-NACKs cannot exacerbate congestion
Clients or other nodes can react to these signals
In our current simulations, clients implement simple AIMD window control, with the NACK used to trigger the decrease
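The client behavior described above (simple AIMD window, with the Congestion-NACK triggering the decrease) might look like this minimal sketch; class and parameter names, and the specific constants, are ours:

```python
class AimdClient:
    """Minimal AIMD interest window reacting to Congestion-NACKs
    (illustrative sketch of the client behavior on the slide)."""

    def __init__(self, increase: float = 1.0, decrease: float = 0.5,
                 min_window: float = 1.0):
        self.window = min_window
        self.increase = increase
        self.decrease = decrease
        self.min_window = min_window

    def on_data(self) -> None:
        # Additive increase: roughly one extra outstanding interest
        # per window's worth of returned data.
        self.window += self.increase / self.window

    def on_congestion_nack(self) -> None:
        # Multiplicative decrease on the explicit congestion signal.
        self.window = max(self.min_window, self.window * self.decrease)
```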

Slide 13

Simulation results – basic topology

| Scenario | Throughput, Client 1 (Mbps) | Throughput, Client 2 (Mbps) | Data loss, R1 (%) | Data loss, R2 (%) | Interest rejection, Client 1 (%) | Interest rejection, Client 2 (%) |
| Baseline (25B Interest, 1KB Data) | 9.558±0.001 | 9.559±0.002 | 0 | 0 | 0.015±0.0006 | 0.015±0.0011 |
| Varying packet size (Data 600-1400B) | 9.432±0.005 | 9.434±0.008 | 0 | 0 | 0.018±0.0014 | 0.017±0.0015 |
| Asymmetric data size (1000B/500B) | 9.373±0.014 | 9.326±0.001 | 0 | 0 | 0.007±0.0006 | 0.016±0.0006 |
| Asymmetric bandwidth (10 Mbps/1 Mbps) | 9.774±0.001 | 0.719±0.001 | 0 | 0 | 0.012±0.0005 | 0.058±0.0000 |

Slide 14

Simulation results – dumbbell topology

| Scenario | Throughput, Client 1–Server 3 (Mbps) | Throughput, Client 2–Server 4 (Mbps) | Data loss, R1 (%) | Data loss, R2 (%) | Interest rejection, Client 1–Server 3 (%) | Interest rejection, Client 2–Server 4 (%) |
| Homogeneous RTT | 5.142±0.5 | 4.692±0.5 | 0 | 0 | 0.515±0.011 | 0.063±0.013 |
| Heterogeneous RTT (R2–S4 link now 20ms) | 5.209±0.38 | 4.624±0.38 | 0 | 0 | 0.513±0.009 | 0.042±0.007 |
| Flipped data flows (Client 1–Server 3, Client 4–Server 2) | 9.566±0.001 | 9.419±0.007 | 0 | 0 | 0.148±0.0004 | 0.012±0.0005 |

Slide 15

Client window and queue evolution

Queue depth on the bottleneck queues is small
1 packet for the homogeneous RTT case
Varies slightly more in the heterogeneous RTT case, but remains quite low (<17 packets)
Client window evolution is quite fair

Slide 16

Benefits of our scheme

Optimally handles interest shaping for bidirectional traffic
No signaling or message exchange required between routers
Corollary: no trust required between peers
No requirement for flow identification by intermediaries
Fair and effective bandwidth allocation on highly asymmetric links
Congestion-NACKs offer a timely and reliable congestion signal
Congestion is detected downstream of the bottleneck link

Slide 17

Future work

Use congestion detection and/or NACKs to offer dynamic rerouting and multi-path load balancing
Use NACKs as a backpressure mechanism in the network to handle uncooperative clients
Investigate the shaper under different router AQM schemes (e.g. RED, CoDel, PIE) and client implementations (e.g. CUBIC).

Slide 18

Questioners get…