A Framework for Evaluating Caching Policies in a Hierarchical Network of Caches
Eman Ramadan, Pariya Babaie, Zhi-Li Zhang
Presented by: Arvind Narayanan
University of Minnesota, USA

Outline
- Introduction
- Motivation & Related Work
- Contribution:
  - Hit Probability Estimation
  - Comparison Framework
- Evaluation
- Conclusion

Caching is Important!
- Caching is an essential factor in the performance of large-scale streaming video delivery, due to the size of video objects and the scale of content delivery systems.
- Several studies have focused on the performance analysis of cache networks under various caching policies.
- ICN architectures have renewed research interest in caching policies and their performance, due to ICN primitives:
  - Storage is an integral part of the network substrate
  - Caching along the path

ICN: Information-centric networking is an approach to shift the Internet infrastructure from a host-centric paradigm toward "named information". Source: https://en.wikipedia.org/wiki/Information-centric_networking

Cache Management Issues
Q: How shall we effectively utilize the cache resources?
- Caching Policy: What content to cache, and when to cache it? e.g., only cache popular content, cache at first request, etc.
- Cache Replacement Strategy: If the cache is full, which cached content is evicted? e.g., LRU, LFU, etc.
- Object Replication: Which caches keep a copy of the requested object on its way to the users? e.g., cache everywhere, cache probabilistically, etc.

[Figure: origin content servers, cache nodes/servers, and user bases exchanging interests or content requests]

Caching Along the Path: Hierarchical Caching Structure & Request Routing

[Figure: users issue requests to edge servers (L1); cache misses are forwarded through intermediate servers (L2 ... ) up to the origin server (LH); a cache hit at any layer returns the object]

- Vijay Adhikari et al. "Vivisecting YouTube: An active measurement study." INFOCOM, 2012
- Vijay Adhikari et al. "Unreeling Netflix: Understanding and improving multi-CDN movie delivery." INFOCOM, 2012
- Erik Nygren et al. "The Akamai network: A platform for high-performance Internet applications." SIGOPS, 2010

Object Replication Strategies: Caching Along the Path
aka Leave Copy Everywhere (LCE): every cache on the path between the user and the origin keeps a copy of the requested object.

[Figure: the requested object is cached at every server along the return path]

Fayazbakhsh, Seyed Kaveh, et al. "Less pain, most of the gain: Incrementally deployable ICN." SIGCOMM CCR, 2013

Object Replication Strategies: Leave Copy Probabilistically (LCP)
Each cache along the path keeps a copy of the requested object with probability q. For example, with q = 0.5:
- rand = 0.6 > q: don't cache
- rand = 0.4 < q: cache

Is caching along the path useful? Which strategy is better? How can we analyze their performance? (A sketch of the LCP decision follows.)
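
A minimal sketch of the per-node LCP decision on a cache miss (the function name is illustrative, not from the paper):

```python
import random

def lcp_should_cache(q: float = 0.5) -> bool:
    """Leave Copy Probabilistically: each cache on the return path stores
    the object with probability q, e.g., rand = 0.4 < q means cache,
    rand = 0.6 > q means don't cache."""
    return random.random() < q
```

With q = 1, LCP degenerates to LCE (leave a copy everywhere).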

Performance Analysis of Cache Networks

[Figure: a four-level cache tree, with L1 edge caches at the leaves feeding L2, L3, and an L4 root]

Requests to the 1st layer are independent.

Performance Analysis of Cache Networks

[Figure: the same four-level cache tree; the miss streams of lower layers form the arrivals at higher layers]

Requests to higher layers are NOT independent, since they are composed of the miss streams of the lower layers. Thus, we need to characterize the request arrival streams at the upstream caches!

Performance Analysis of Cache Networks
- The main issue with current approaches lies in the accuracy of estimating the superposition of the miss streams arriving at the intermediate caches.
- These approaches also become computationally expensive as the cache network grows, so they are not scalable.
- There exists no general methodology for computing, approximating, or even bounding the hit probabilities for a network of caches within a reasonable computation cost.

[1] N. C. Fofack et al. "Performance evaluation of hierarchical TTL-based cache networks." Computer Networks, 2014
[2] D. S. Berger et al. "Exact analysis of TTL cache networks." Performance Evaluation, 2014

Outline
- Introduction
- Motivation & Related Work
- Contribution:
  - Hit Probability Estimation
  - Comparison Framework
- Evaluation
- Conclusion

Hit Probability Estimation for a Cache Network
Given the hit probability of object O_i for a single cache under caching policy P, as a function of the cache size C and the object request rate λ_i:

Single cache: p_i(C, λ_i, P)

we propose a general, simplified approach to calculate the hit probability for a tandem cache network using the "BIG" cache abstraction.

Review: BIG Cache Abstraction

[Figure: the individual caches L1, L2, L3, L4 along a path are aggregated into one virtual "BIG" cache, managed by a single policy such as LRU, K-Hit, etc.]

Hit Probability Estimation
Apply the hit prob. function to the virtual cache composed of the first two layers, with total capacity C[1:2] = C1 + C2. This gives the hit prob. for O_i from one of these two layers (L1 + L2): p_i(C[1:2], λ_i, P). Subtracting the hit prob. of O_i at L1 from this value yields the hit prob. of O_i at L2:

Hit Prob. of O_i at L2 = p_i(C[1:2], λ_i, P) - p_i(C[1:1], λ_i, P)

[Figure: a hierarchy of caches L1, L2, ..., L(H-1) with capacities C1, C2, ..., C(H-1), topped by the origin server LH]

Hit Probability Estimation
Generally, the hit prob. of O_i at Lj can be obtained by subtracting the hit prob. of O_i at the first j-1 layers from the hit prob. of O_i at the first j layers:

Hit Prob. of O_i at Lj = p_i(C[1:j], λ_i, P) - p_i(C[1:j-1], λ_i, P)

Hit Prob. of O_i at the origin = 1 - p_i(C[1:H-1], λ_i, P)

The "BIG" cache abstraction completely avoids the interdependency between cache layers. (A sketch of this telescoping computation follows.)

[Figure: the same hierarchy of caches L1 ... L(H-1) with capacities C1 ... C(H-1) and origin server LH]
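
A minimal sketch of the telescoping computation, assuming a caller-supplied single-cache hit probability function p_i(C, λ_i) for policy P (all names here are illustrative, not from the paper):

```python
from typing import Callable, List

def layer_hit_probs(
    hit_prob: Callable[[float, float], float],  # single-cache p_i(C, lambda_i) under policy P
    capacities: List[float],                    # C1 ... C(H-1), edge layer first
    lam: float,                                 # request rate of object O_i
) -> List[float]:
    """Per-layer hit probabilities under the BIG cache abstraction:
    layer j receives p_i(C[1:j]) - p_i(C[1:j-1]); the origin gets the remainder."""
    probs: List[float] = []
    total, prev = 0.0, 0.0
    for c in capacities:
        total += c                    # C[1:j] = C1 + ... + Cj
        cur = hit_prob(total, lam)    # hit prob. within the first j layers combined
        probs.append(cur - prev)      # hit prob. at layer j alone
        prev = cur
    probs.append(1.0 - prev)          # requests that reach the origin server
    return probs
```

Because each layer's value depends only on cumulative capacities, no per-cache miss-stream model is needed.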

Estimation Approach Validation: LRU-LCE
Hit prob. at a single cache can be calculated by [1] (the Che approximation):

p_i = 1 - e^{-λ_i T_C}

- T_C: characteristic time of the cache
- λ_i: request rate of O_i

Compared approaches:
- LRU-model: estimated hit prob.
- LRU(B): simulating LRU-LCE using the BIG cache abstraction
- LRU(I): simulating LRU-LCE at each cache independently

[1] M. Garetto et al. "A unified approach to the performance analysis of caching systems." INFOCOM, 2014
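
A runnable sketch of this calculation, solving for T_C by bisection and evaluating the formula (my own implementation of the Che approximation, not the authors' code):

```python
import math

def lru_hit_probs(rates, cache_size):
    """Che approximation for LRU: p_i = 1 - exp(-lambda_i * T_C), where the
    characteristic time T_C solves sum_i p_i = cache_size. The sum is
    increasing in T_C, so bisection finds the root."""
    assert cache_size < len(rates), "cache must be smaller than the catalog"

    def occupancy(tc):
        return sum(1.0 - math.exp(-lam * tc) for lam in rates)

    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:  # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occupancy(mid) < cache_size else (lo, mid)
    tc = 0.5 * (lo + hi)
    return [1.0 - math.exp(-lam * tc) for lam in rates]

# Example: Zipf(alpha = 1) popularity over 100 objects, cache of size 10.
probs = lru_hit_probs([1.0 / i for i in range(1, 101)], 10)
print(round(sum(probs), 3))  # ~10.0, i.e., expected occupancy fills the cache
```

Feeding a function like this (with T_C re-solved for each cumulative capacity) into the telescoping sketch above reproduces the LRU(B)-style per-layer estimates.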

Estimation Approach Validation: LRU-LCP (q-LRU)
Hit prob. at a single cache can be calculated by [1]:

p_i = q(1 - e^{-λ_i T_C}) / (e^{-λ_i T_C} + q(1 - e^{-λ_i T_C}))

- q: prob. of caching an object
- T_C: characteristic time of the cache
- λ_i: request rate of O_i

Compared approaches:
- q-LRU-model: estimated hit prob.
- q-LRU(B): simulating LRU-LCP using the BIG cache abstraction
- q-LRU(I): simulating LRU-LCP at each cache independently

Takeaways:
1. The "BIG" cache abstraction leads to better performance.
2. Our estimated hit prob. matches the hit prob. obtained when simulating caching policies using the "BIG" cache abstraction.

[1] M. Garetto et al. "A unified approach to the performance analysis of caching systems." INFOCOM, 2014
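
A sketch of the corresponding q-LRU computation, assuming the formula above as reconstructed from [1] (the name q_lru_hit_probs is illustrative):

```python
import math

def q_lru_hit_probs(rates, cache_size, q):
    """Che approximation for q-LRU: on a miss, the object is inserted only
    with probability q; T_C again solves sum_i p_i = cache_size."""
    assert 0.0 < q <= 1.0 and cache_size < len(rates)

    def p(lam, tc):
        miss = math.exp(-lam * tc)
        return q * (1.0 - miss) / (miss + q * (1.0 - miss))

    def occupancy(tc):
        return sum(p(lam, tc) for lam in rates)

    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:  # p is increasing in T_C, so bracket then bisect
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occupancy(mid) < cache_size else (lo, mid)
    return [p(lam, 0.5 * (lo + hi)) for lam in rates]
```

With q = 1 this reduces to the plain LRU formula, which is a quick sanity check.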

Outline
- Introduction
- Motivation & Related Work
- Contribution:
  - Hit Probability Estimation
  - Comparison Framework
- Evaluation
- Conclusion

Caching Policies Comparison Framework
- We propose a general framework to compare the network performance of two caching policies P and Q.
- P and Q can represent any caching policy, such as LRU, static caching, K-LRU, etc.
- We represent the hit probabilities of each policy as a hit probability matrix, and define a majorization condition on these matrices to compare them.

Hit Probability Matrix
An N × H matrix whose (i, j) entry p_ij is the hit prob. of O_i at layer Lj, with rows ordered by object popularity:
- N: number of objects
- H: number of layers
- p_ij: hit prob. of O_i at layer Lj
- λ_i: request rate of O_i
- λ: aggregate request rate for all objects
- a_i = λ_i / λ: access prob. of O_i

[Figure: cache hierarchy L1 ... L(H-1) with capacities C1 ... C(H-1) and origin server LH]

Majorization of Hit Probability Matrices
Condition: the summation of the hit probabilities in the top-left (k, h) sub-matrix of P is equal to or larger than that of Q, for all values of k and h; i.e., policy P utilizes the first h cache layers to serve the top k objects better than policy Q does. Thus, we say caching policy P dominates Q if and only if P majorizes Q. (A sketch of this check appears below.)
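
A minimal sketch of the majorization check using 2-D prefix sums (numpy-based; the function name is illustrative):

```python
import numpy as np

def majorizes(P: np.ndarray, Q: np.ndarray, tol: float = 1e-12) -> bool:
    """True iff every top-left (k, h) sub-matrix of P sums to at least the
    corresponding sub-matrix sum of Q, for N x H hit probability matrices
    whose rows are ordered by object popularity."""
    def prefix(M):
        return M.cumsum(axis=0).cumsum(axis=1)  # all top-left sub-matrix sums
    return bool(np.all(prefix(P) - prefix(Q) >= -tol))
```

The difference prefix(P) - prefix(Q) is exactly the comparison matrix rendered as a binary heatmap in the evaluation slides.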

Comparing Overall Performance
Expected overall performance. Condition: if policy P majorizes policy Q, then:
- Theorem 1: The overall expected latency under P is less than or equal to that of Q.
- Theorem 2: The origin server load under policy P is less than or equal to that of Q.
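
One plausible way to compute the two quantities in the theorems from a hit probability matrix; the per-layer latencies d_j and the exact weighting below are my assumptions for illustration, not taken from the paper:

```python
import numpy as np

def expected_latency(P: np.ndarray, access: np.ndarray, latency: np.ndarray) -> float:
    """E[latency] = sum_i a_i * sum_j p_ij * d_j, where d_j is the cost of
    serving a request from layer j (the origin counted as the last column)."""
    return float(access @ (P @ latency))

def origin_load(P: np.ndarray, rates: np.ndarray) -> float:
    """Requests per second reaching the origin: sum_i lambda_i * (1 - sum_j p_ij),
    assuming P's columns cover only the cache layers."""
    return float(rates @ (1.0 - P.sum(axis=1)))
```

Since lower layers are closer to the users (smaller d_j), shifting hit mass toward the top-left of P, as majorization demands, tends to reduce both quantities, which is the intuition behind Theorems 1 and 2.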

Outline
- Introduction
- Motivation & Related Work
- Contribution:
  - Hit Probability Estimation
  - Comparison Framework
- Evaluation
- Conclusion

Network Simulation Setup
- A tandem line of three caches: 1 edge server, 2 intermediate servers, and the origin server
- Origin server serves a collection of N = 10K objects (O1: most popular, ON: least popular)
- Object access probabilities follow a Zipf distribution with α = 1 (see the snippet after this list)
- Size of each cache server lies in the range [50, 100, 500, 1000, 2500, 3000]; e.g., the edge server and intermediate servers have cache size 50 each
- Object replication strategies: LCE, LCD, LCP
- Cache replacement policy: LRU
- Other policies: static, LRU(B), q-LRU(B)
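
Generating the Zipf(α = 1) access probabilities used in this setup (a straightforward sketch, not the authors' code):

```python
def zipf_access_probs(n: int = 10_000, alpha: float = 1.0) -> list:
    """a_i proportional to 1 / i^alpha, normalized over the n objects;
    O1 is the most popular, On the least popular."""
    weights = [1.0 / (i ** alpha) for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_access_probs()
print(probs[0] / probs[-1])  # the most popular object is ~10,000x more likely
```

These a_i, scaled by the aggregate rate λ, give the per-object request rates λ_i used throughout.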

Majorization Condition Representation
We calculate new matrices P̂ and Q̂ (the summations of all possible top-left sub-matrices of P and Q), and form the comparison matrix X_{P-Q} = P̂ - Q̂. The majorization condition is then represented as a binary heatmap of two values (0 = white, 1 = black).

Comparing Caching Policies: P = Static, Q = LRU(B)

[Figure: binary heatmaps of the comparison matrix for cache sizes (a) CS = 50, (b) CS = 100, (c) CS = 500, (d) CS = 1000, (e) CS = 2500, (f) CS = 3000]

Comparing Caching Policies: P = LRU-LCD, Q = LRU(B)

[Figure: binary heatmaps of the comparison matrix for cache sizes (a) CS = 50, (b) CS = 100, (c) CS = 500, (d) CS = 1000, (e) CS = 2500, (f) CS = 3000]

Conclusion
- The analysis of caching policies for a cache network is complicated and challenging.
- The BIG cache abstraction avoids the interdependency between cache layers.
- We proposed a general methodology for approximating the hit probabilities for a network of caches.
- We introduced the notion of a hit probability matrix, and employed (a generalized notion of) majorization as the basic tool for evaluating and comparing caching policies.

Thank you!