PPT: A Survey of Web Cache Replacement Strategies

Author: olivia-moreira | Published Date: 2016-06-28

Stefan Podlipnig, László Böszörményi, University of Klagenfurt. ACM Computing Surveys, December 2003. Presenter: Junghwan Song, 2012-04-25. Outline: Introduction, Classification.


A Survey of Web Cache Replacement Strategies: Transcript


Stefan Podlipnig, László Böszörményi, University of Klagenfurt. ACM Computing Surveys, December 2003. Presenter: Junghwan Song, 2012-04-25. Outline: Introduction, Classification, Recency-based replacement (a minimal LRU sketch follows these excerpts).

Replacement Using Re-Reference Interval Prediction (RRIP). Aamer Jaleel, Kevin B. Theobald, Simon C. Steely Jr., Joel Emer, Intel Corporation. The ACM/IEEE International Symposium on Computer Architecture. (An SRRIP sketch follows these excerpts.)

Survey Results. 10 departments (7 responses). Cache products in use: Polymath 6/7, Cachet 4/7, case studies 3/7, energy modules/Guide to Teaching Design 2/7, Etomica and Teaching Resource Center 1/7. List in the survey.

Coordinated In-Memory Caching for Data-Intensive Clusters. Ganesh Ananthanarayanan, Ali Ghodsi, Andrew Wang, Dhruba Borthakur, Srikanth Kandula, Scott Shenker, Ion Stoica.

Jan Reineke, joint work with Andreas Abel, Uppsala University. December 20, 2012. The Timing Analysis Problem: embedded software, timing requirements, and the microarchitecture. What does the execution time of a program depend on?

Xiaodong Zhang, The Ohio State University. Numbers Everyone Should Know (Jeff Dean, Google): L1 cache reference, 0.5 ns; branch mis-predict, 5 ns.

University of Virginia, April 21, 2016. Computer Architecture, CS 6354: Caches. The content and concept of this course are adapted from CMU ECE 740. Agenda: logistics, review from last lecture, more on caching.

Samira Khan, March 23, 2017. Agenda: review from last lecture, data flow model, memory hierarchy, more caches. The Dataflow Model (of a Computer). Von Neumann model: an instruction is fetched and executed in control flow order.

Defending Against Cache-Based Side Channel Attacks. Mengjia Yan, Bhargava Gopireddy, Thomas Shull, Josep Torrellas. University of Illinois at Urbana-Champaign. http://iacoma.cs.uiuc.edu

2015 National Interagency Support Cache Presentation. The National Interagency Support Cache System is made up of 15 caches in strategic locations throughout the United States: Ft. Wainwright, AK; Boise, ID; Missoula, MT; Redmond, OR; Redding, CA; Ontario, CA; Denver, CO; Prescott, AZ; Silver City, NM; London, KY; Grand Rapids, MN; LaGrande, OR; Wenatchee, WA; Billings, MT; and Coeur d'Alene, ID. The caches are operated under the direction of federal and state agencies, including the US Forest Service, the Bureau of Land Management, and various states including Alaska, Minnesota, and Idaho.

March 28, 2017. Agenda: review from last lecture, cache access, associativity, replacement, cache performance. Cache abstraction and metrics: cache hit rate = (# hits) / (# hits + # misses) = (# hits) / (# accesses). (A worked example follows these excerpts.)

Virtual Memory: use main memory as a "cache" for secondary (disk) storage, managed jointly by CPU hardware and the operating system (OS). Programs share main memory; each gets a private virtual address space holding its frequently used code and data.

TLC: A Tag-less Cache for Reducing Dynamic First-Level Cache Energy. Presented by Rohit Reddy Takkala. Introduction: first-level caches are performance critical and are therefore optimized for speed. Modern processors reduce the miss ratio by using set-associative caches and optimize latency by reading all ways in parallel with the TLB (Translation Lookaside Buffer) and tag lookup. (An address-split sketch follows these excerpts.)
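The survey's classification opens with recency-based strategies, of which LRU is the canonical example: evict the object whose most recent request lies furthest in the past. The sketch below is an illustration only, not code from the survey; the class and method names are invented for the example.

    from collections import OrderedDict

    class LRUWebCache:
        # Minimal recency-based (LRU) cache: evict the least recently requested object.
        def __init__(self, capacity):
            self.capacity = capacity        # maximum number of cached objects
            self.store = OrderedDict()      # keys kept in recency order, oldest first

        def get(self, url):
            if url not in self.store:
                return None                 # cache miss
            self.store.move_to_end(url)     # mark as most recently used
            return self.store[url]

        def put(self, url, body):
            if url in self.store:
                self.store.move_to_end(url)
            self.store[url] = body
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the least recently used object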
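RRIP, from the excerpt above, replaces pure recency with a predicted re-reference interval: each block carries a small counter (RRPV), a hit resets it to zero, new blocks are inserted with a "long" predicted interval, and the victim is a block predicted to be re-referenced in the distant future. The sketch below approximates the static variant (SRRIP) based on that description; the 2-bit counter width and all names are assumptions for illustration, not the paper's code.

    class SRRIPSet:
        # One cache set under Static RRIP (SRRIP) with m-bit RRPV counters (sketch).
        def __init__(self, num_ways, m_bits=2):
            self.max_rrpv = (1 << m_bits) - 1   # "distant" re-reference prediction
            self.num_ways = num_ways
            self.blocks = {}                    # tag -> RRPV counter

        def access(self, tag):
            if tag in self.blocks:
                self.blocks[tag] = 0            # hit: predict near-immediate re-reference
                return True
            if len(self.blocks) >= self.num_ways:
                # Evict a block whose RRPV says "distant future", aging all blocks
                # until such a victim exists.
                while True:
                    victim = next((t for t, v in self.blocks.items()
                                   if v == self.max_rrpv), None)
                    if victim is not None:
                        del self.blocks[victim]
                        break
                    for t in self.blocks:
                        self.blocks[t] += 1
            self.blocks[tag] = self.max_rrpv - 1  # insert with a "long" interval
            return False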
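The cache-abstraction excerpt defines the basic metric: hit rate is hits divided by total accesses. A one-line check of the arithmetic (the function name is ours, for illustration):

    def hit_rate(hits, misses):
        # Cache hit rate = hits / (hits + misses) = hits / total accesses.
        return hits / (hits + misses)

    # Example: 900 hits and 100 misses over 1000 accesses give a 0.9 hit rate.
    assert hit_rate(900, 100) == 0.9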
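The TLC excerpt mentions set-associative lookup, in which the address is split into a byte offset within the block, a set index, and a tag that is compared against every way of the selected set (in parallel with the TLB lookup). A sketch of that split, with block size and set count chosen arbitrarily for the example:

    def split_address(addr, block_size=64, num_sets=64):
        # Assumed power-of-two geometry: 64-byte blocks, 64 sets (both illustrative).
        offset_bits = block_size.bit_length() - 1       # log2(block_size)
        index_bits = num_sets.bit_length() - 1          # log2(num_sets)
        offset = addr & (block_size - 1)                # byte offset within the block
        index = (addr >> offset_bits) & (num_sets - 1)  # which set to probe
        tag = addr >> (offset_bits + index_bits)        # compared against all ways in the set
        return tag, index, offset

    # Example: address 0x12345 maps to offset 0x5, set 0xD, tag 0x12.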
2017 NATIONAL LOGISTICS WEBINAR: NATIONAL INTERAGENCY SUPPORT CACHE PRESENTATION. 2017 TOPICS: NATIONAL INTERAGENCY SUPPORT CACHE SYSTEM; REPORTS, CATALOG, HANDBOOKS & FORMS UPDATES; INCIDENT REPLACEMENT REQUISITION (IRR).

