Packet Caches on Routers: The Implications of Universal Redundant Traffic Elimination


Presentation Transcript


Packet Caches on Routers: The Implications of Universal Redundant Traffic Elimination

Ashok Anand, Archit Gupta, Aditya Akella (University of Wisconsin, Madison)
Srinivasan Seshan (Carnegie Mellon University)
Scott Shenker (University of California, Berkeley)


Redundant Traffic in the Internet

Lots of redundant traffic in the Internet. Redundancy is due to:

- Identical objects
- Partial content matches (e.g. page banners)
- Application headers
- …

[Figure: the same content traverses the same set of links at time T and again at time T + 5.]

Redundancy Elimination

- Object-level caching: application-layer approaches such as Web proxy caches store static objects in a local cache [Summary Cache: SIGCOMM '98; Co-operative Caching: SOSP '99]
- Packet-level caching [Spring et al.: SIGCOMM '00]
- WAN optimization products: Riverbed, Peribit, Packeteer, …

[Figure: packet caches at both ends of an enterprise's access link to the Internet.]

Packet-level caching is better than object-level caching.
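The packet-level caching idea can be sketched in code. This is an illustrative toy, not the paper's implementation: it uses a CRC over fixed-size windows where real systems use Rabin fingerprints over sliding windows, and all names (`PacketCache`, `WINDOW`, `SAMPLE_MOD`) are our own.

```python
# Toy packet-level redundancy elimination (in the spirit of
# Spring et al., SIGCOMM 2000). CRC over fixed windows stands in
# for Rabin fingerprints; names and parameters are illustrative.
import zlib

WINDOW = 8       # bytes per fingerprinted region (real systems: ~64)
SAMPLE_MOD = 2   # value-based sampling: keep fp only if fp % SAMPLE_MOD == 0

class PacketCache:
    def __init__(self):
        self.store = []         # cached packet payloads
        self.fingerprints = {}  # fingerprint -> (packet index, offset)

    def _sampled(self, payload):
        for off in range(len(payload) - WINDOW + 1):
            fp = zlib.crc32(payload[off:off + WINDOW])
            if fp % SAMPLE_MOD == 0:  # both ends sample identically
                yield fp, off

    def process(self, payload):
        """Return the number of bytes found redundant against the cache.

        Note: overlapping windows are counted separately here; real
        systems expand each hit to a maximal match region and elide
        those bytes once.
        """
        saved = 0
        for fp, off in self._sampled(payload):
            hit = self.fingerprints.get(fp)
            if hit is not None:
                idx, cached_off = hit
                cached = self.store[idx]
                # confirm the match (CRC can collide)
                if cached[cached_off:cached_off + WINDOW] == payload[off:off + WINDOW]:
                    saved += WINDOW
        idx = len(self.store)
        self.store.append(payload)
        for fp, off in self._sampled(payload):
            self.fingerprints.setdefault(fp, (idx, off))
        return saved

cache = PacketCache()
payload = b"GET /images/banner.png HTTP/1.1 Host: www.example.com"
first = cache.process(payload)   # empty cache: nothing to elide
second = cache.process(payload)  # identical packet: sampled windows hit
```

Value-based sampling (keeping only fingerprints whose hash satisfies a fixed predicate) matters because the upstream and downstream caches must independently pick the same fingerprints for a packet.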

Benefits of Redundancy Elimination

- Reduces bandwidth usage cost
- Reduces network congestion at access links, giving higher throughput
- Reduces transfer completion times

Towards Universal RE

However, existing RE approaches apply only to point deployments, e.g. at stub-network access links or between branch offices, so they benefit only the systems to which they are directly connected. Why not make RE a native network service that everyone can use?

Our Contribution

- Universal redundancy elimination on routers is beneficial
- Redesigning the routing protocol to be redundancy-aware gives further benefits
- Redundancy elimination is practical to implement

Universal Redundancy Elimination at All Routers

[Figure: Internet2 topology with a packet cache at every router; Wisconsin sends overlapping content to CMU and Berkeley. An upstream router removes the redundant bytes, and a downstream router reconstructs the full packet.]

Total packets without RE = 18; total packets with universal RE = 12 (ignoring tiny packets), a 33% reduction.
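The packet counts on this slide follow from simple link-crossing arithmetic. A hedged sketch: the parameters below (two flows sharing 2 links before diverging for 1 more, three fully duplicated packets per flow) are our own choice, picked so the arithmetic lands on 18 vs. 12; the real Internet2 example is more involved.

```python
# Footprint = total number of packet-hops summed over all links.
# Two flows from one source share the first `shared` links, then
# diverge for `tail` links each. `dup` of the 2*n packets duplicate
# content already carried by the other flow.

def footprint(n, dup, shared, tail, re_enabled):
    if not re_enabled:
        return 2 * n * (shared + tail)
    # With hop-by-hop RE, duplicated content crosses each shared link
    # only once (duplicates travel as tiny shim packets, ignored here)
    # and is reconstructed where the paths diverge.
    shared_hops = (2 * n - dup) * shared
    tail_hops = 2 * n * tail
    return shared_hops + tail_hops

base = footprint(n=3, dup=3, shared=2, tail=1, re_enabled=False)
with_re = footprint(n=3, dup=3, shared=2, tail=1, re_enabled=True)
```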

Benefits of Universal Redundancy Elimination

- Subsumes the benefits of point deployments
- Also benefits Internet Service Providers:
  - Reduces the total traffic carried, enabling better traffic engineering
  - Improves responsiveness to sudden overload (e.g. flash crowds)
- Redesigning network protocols with redundancy elimination in mind can further enhance the benefits of universal RE

Redundancy-Aware Routing

[Figure: the same Internet2 example, with Berkeley's traffic routed along the path shared with CMU.]

Total packets with RE = 12; total packets with RE + redundancy-aware routing = 10, a further 20% benefit (45% overall).

The ISP needs information about the traffic similarity between CMU and Berkeley, and it needs to compute redundancy-aware routes.

Redundancy-Aware Routing: Intra-domain Routing for an ISP

Every N minutes:
- Each border router computes a redundancy profile over the first T seconds of the N-minute interval, estimating how its traffic is replicated across the other border routers (a high-speed algorithm computes the profiles)
- Redundancy-aware routes are computed centrally
- Traffic for the next N minutes is routed on the redundancy-aware routes, with redundancy elimination applied hop-by-hop
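A redundancy profile of this kind might be computed as in the sketch below. The data structures and names are our own, not the paper's high-speed algorithm; the toy trace reuses the 30/30/20 KB numbers from the redundancy-profile example that follows.

```python
# Sketch: from observed packets, derive per-egress unique bytes and
# pairwise shared bytes, the inputs to redundancy-aware routing.
from collections import defaultdict
from itertools import combinations

def redundancy_profile(packets):
    """packets: iterable of (content_fingerprint, size, egress) tuples."""
    sizes = {}                 # fp -> size (assumes one size per fingerprint)
    dests = defaultdict(set)   # fp -> set of egress routers seen
    for fp, size, egress in packets:
        sizes[fp] = size
        dests[fp].add(egress)
    unique = defaultdict(int)  # egress -> bytes destined only to it
    shared = defaultdict(int)  # (egress_a, egress_b) -> bytes both receive
    for fp, egresses in dests.items():
        if len(egresses) == 1:
            unique[next(iter(egresses))] += sizes[fp]
        else:
            for a, b in combinations(sorted(egresses), 2):
                shared[(a, b)] += sizes[fp]
    return unique, shared

trace = [
    ("fp1", 20, "CMU"), ("fp1", 20, "Berkeley"),  # content sent to both
    ("fp2", 30, "CMU"), ("fp3", 30, "Berkeley"),  # content unique to each
]
unique, shared = redundancy_profile(trace)
```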

Redundancy Profile Example

[Figure: Internet2 topology; Wisconsin sends traffic to CMU (via Pittsburgh) and to Berkeley.]

- Data(unique, Pittsburgh) = 30 KB
- Data(unique, Berkeley) = 30 KB
- Data(shared) = 20 KB
- Total(CMU) = 50 KB
- Total(Berkeley) = 50 KB

Centralized Route Computation

Formulated as a linear program:
- Objective: minimize the total traffic footprint on ISP links, where a link's footprint is its latency times the total unique content it carries
- Computes narrow, deep trees that aggregate redundant traffic as much as possible
- Imposes flow-conservation and capacity constraints

[Figure: border routers feed profiles to a centralized platform, which performs the route computation.]
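The objective can be written down compactly. This is our own notation for the formulation described above, not necessarily the paper's exact symbols: let $E$ be the set of ISP links, $\mathrm{lat}_e$ the latency of link $e$, $c_e$ its capacity, and $u_e$ the unique (post-RE) bytes carried on $e$.

```latex
% Minimize the latency-weighted unique traffic over all links,
% subject to standard multicommodity-flow constraints.
\begin{aligned}
\min \quad & \sum_{e \in E} \mathrm{lat}_e \, u_e \\
\text{s.t.} \quad & \text{flow conservation at every router}, \\
                  & u_e \le c_e \quad \forall e \in E .
\end{aligned}
```

Weighting unique bytes (rather than raw bytes) by latency is what pushes the solver toward deep shared trees: a duplicated byte that rides a common path contributes to $u_e$ only once.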

Inter-domain Routing

The ISP selects the neighbor AS and the border router for each destination.

Goal: minimize the impact of inter-domain traffic on intra-domain links and peering links.

Challenges:
- Must consider AS relationships, peering locations, and route announcements
- Must compute redundancy profiles across destination ASes

Details are in the paper.

Trace-Based Evaluation

Approaches compared:
- RE + Routing: redundancy-aware routing with redundancy elimination
- RE: shortest-path routing with redundancy elimination
- Baseline: shortest-path routing without redundancy elimination

Packet traces:
- Collected at the University of Wisconsin access link
- Separately captured the outgoing traffic of a group of high-volume Web servers at the University of Wisconsin, representative of a moderate-sized data center

Topologies come from Rocketfuel ISP maps. The results below are for intra-domain routing on the Web server trace.

Benefits in Total Network Footprint

- The average redundancy of this Web server trace is 50% with a 2 GB cache
- AT&T topology, 2 GB cache per router
- CDF of the reduction in network footprint across AT&T's border routers:
  - RE gives a reduction of 10-35%
  - RE + Routing gives a reduction of 20-45%

When is RE + Routing Beneficial?

- Topology effect: e.g., multiple multi-hop paths between pairs of border routers
- Redundancy profile: lots of duplication across border routers

Synthetic Trace-Based Study

Synthetic traces cover a wide range of situations:
- Duplicates striped across the ISP's border routers (inter-flow redundancy)
- Little striping across border routers, but high redundancy within the traffic to a single border router (intra-flow redundancy)
- This lets us isolate the topology effect

Benefits in Total Network Footprint

- Synthetic trace with average redundancy = 50%; AT&T (AS 7018) topology; the trace is assumed to enter at Seattle
- At high intra-flow redundancy, RE + Routing is close to RE, with a 50% benefit
- At zero intra-flow redundancy, RE gives an 8% benefit while RE + Routing gives 26%

Benefits in Max Link Utilization

- Link capacities are either 2.5 or 10 Gbps
- Compared against traditional OSPF-based traffic engineering (SP-MaxLoad), whose max link utilization is 80%
- RE offers 1-25% lower maximum link load; RE + Routing offers 10-37% lower

Evaluation Summary

- RE significantly reduces network footprint
- RE significantly improves traffic-engineering objectives
- RE + Routing further enhances these benefits
- Highly beneficial in flash-crowd situations
- Highly beneficial in inter-domain traffic engineering

Implementing RE on Routers

[Figure: router data path with a fingerprint table and a packet store.]

Main operations:
- Fingerprint computation: easy, can be done with a CRC
- Memory operations: reads and writes to the packet store and fingerprint table

High-Speed Implementation

- Reduced the number of memory operations per packet by using a fixed number of fingerprints (< 10 per packet)
- Used lazy invalidation of fingerprints on packet eviction
- Other optimizations are in the paper
- A Click-based software prototype runs at 2.3 Gbps (approximately OC-48 speed)
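Lazy invalidation can be sketched as follows. Class and variable names are illustrative; the real router manages fingerprint and packet tables in DRAM rather than Python dicts. The key point: evicting a packet touches no fingerprint entries, and staleness is detected at lookup time.

```python
# Sketch of lazy fingerprint invalidation: fingerprint entries store a
# packet id; eviction does NOT clean them up, so a lookup must also
# verify that the referenced packet is still resident.
from collections import OrderedDict

class LazyStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = OrderedDict()  # packet_id -> payload, FIFO order
        self.fingerprints = {}        # fingerprint -> packet_id
        self.next_id = 0

    def insert(self, payload, fps):
        if len(self.packets) >= self.capacity:
            self.packets.popitem(last=False)  # FIFO evict; no fp cleanup
        pid = self.next_id
        self.next_id += 1
        self.packets[pid] = payload
        for fp in fps:
            self.fingerprints[fp] = pid       # newest packet wins

    def lookup(self, fp):
        pid = self.fingerprints.get(fp)
        if pid is None or pid not in self.packets:
            return None  # missing OR stale: both treated as a miss
        return self.packets[pid]

store = LazyStore(capacity=2)
store.insert(b"pkt-A", fps=[101])
store.insert(b"pkt-B", fps=[202])
store.insert(b"pkt-C", fps=[303])  # evicts pkt-A; fingerprint 101 goes stale
```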

Summary

- RE at every router is beneficial (10-50% footprint reduction)
- Further benefits (10-25%) come from redesigning the routing protocol to be redundancy-aware
- OC-48 speed is attainable in software

Thank you


Backup


Flash Crowd Simulation

- Flash crowd: volume increases at one of the border routers; redundancy rises from 20% to 50%, and the inter-flow redundancy fraction rises from 0.5 to 0.75
- Max link utilization without RE is 50%
- Traditional OSPF traffic engineering drives links to 95% utilization at a volume-increase factor above 3.5, whereas SP-RE stays at 85% and RA lower still at 75%

Impact of Stale Redundancy Profiles

- RA relies on redundancy profiles; how stable are they?
- We used the same profile to compute the reduction in network footprint at later times (within an hour)
- RA-stale is quite close to RA

High-Speed Implementation (Details)

- Use specialized hardware for fingerprint computation
- Reduce the number of memory operations per packet: the number of memory operations is a function of the number of fingerprints, so the number of sampled fingerprints is fixed
- Explicitly invalidating fingerprints when evicting a packet requires memory operations, so lazy invalidation is used: a fingerprint pointer is checked for validity as well as existence
- Store the packet table and fingerprint table in DRAM for high speed, using a cuckoo hash table, since a simple hash-based fingerprint table is too large to fit in DRAM

Base Implementation Details (Spring et al.)

- Compute fingerprints over each packet and sample them
- Insert the packet into the packet store
- For match detection, check whether any fingerprint points to a stored packet; encode the matched region in the packet
- Insert each sampled fingerprint into the fingerprint table
- When the store becomes full, evict packets in FIFO order
- When a packet is evicted, invalidate its corresponding fingerprint pointers
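For contrast with the lazy scheme described earlier, the base scheme's eager eviction path might look like this sketch (names are ours, not the paper's): each eviction spends extra memory operations deleting the evicted packet's fingerprint pointers, which is exactly the cost lazy invalidation avoids.

```python
# Sketch of explicit (eager) fingerprint invalidation on FIFO eviction.
from collections import deque

class EagerStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.fifo = deque()      # (packet_id, fingerprints) in arrival order
        self.packets = {}        # packet_id -> payload
        self.fingerprints = {}   # fingerprint -> packet_id
        self.next_id = 0

    def insert(self, payload, fps):
        if len(self.fifo) >= self.capacity:
            old_id, old_fps = self.fifo.popleft()   # FIFO eviction
            del self.packets[old_id]
            for fp in old_fps:
                # explicit invalidation: extra memory writes per eviction
                if self.fingerprints.get(fp) == old_id:
                    del self.fingerprints[fp]
        pid = self.next_id
        self.next_id += 1
        self.packets[pid] = payload
        self.fifo.append((pid, fps))
        for fp in fps:
            self.fingerprints[fp] = pid

store = EagerStore(capacity=2)
store.insert(b"pkt-A", fps=[1])
store.insert(b"pkt-B", fps=[2])
store.insert(b"pkt-C", fps=[3])  # evicts pkt-A and deletes fingerprint 1
```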