Caching at the Web Scale
Presentation Transcript

Caching at the Web Scale. Victor Zakhary, Divyakant Agrawal, Amr El Abbadi. Full slide deck: https://goo.gl/1VYGW3

The old problem of Caching. [Memory hierarchy figure: L1, L2, RAM, Disk. Moving down the hierarchy, storage gets larger, slower, and cheaper; moving up, it gets smaller, faster, and more expensive.]

The old problem of Caching. Ta = Th + m * Tm, where Ta is the average access time, Th is the access time on a cache hit, m is the miss ratio (1 - hit ratio), and Tm is the access time on a cache miss. Tm >> Th. [Same memory hierarchy figure: L1, L2, RAM, Disk.]

The old problem of Caching. Tm >> Th. When the cache is full, a replacement policy decides what to keep, and the replacement policy is realized by an eviction mechanism. Having the right elements in the cache increases the hit ratio, and a high hit ratio means a lower average access time.

The old problem of Caching. Ta = Th + m * Tm (Ta: average access time, Th: access time on a hit, m: miss ratio = 1 - hit ratio, Tm: access time on a miss, with Tm >> Th). Th and m are always in contention: a good caching strategy lowers m but requires more tracking, which increases Th; less tracking lowers Th but increases m.
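As a rough illustration of this formula, here is a tiny Python sketch; the latency numbers are made-up assumptions, not figures from the talk.

    # Hypothetical latencies: 100 ns for a cache hit, 10 ms for a miss
    # that goes to persistent storage.
    T_h = 100e-9   # access time on a cache hit, in seconds (assumed)
    T_m = 10e-3    # access time on a cache miss, in seconds (assumed)

    def avg_access_time(hit_ratio):
        m = 1.0 - hit_ratio          # miss ratio
        return T_h + m * T_m         # Ta = Th + m * Tm

    for hit_ratio in (0.90, 0.99, 0.999):
        ta_us = avg_access_time(hit_ratio) * 1e6
        print(f"hit ratio {hit_ratio:.3f}: Ta = {ta_us:.1f} microseconds")

Even with these arbitrary numbers, moving the hit ratio from 0.90 to 0.999 cuts the average access time by two orders of magnitude, which is why the miss ratio dominates the formula.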

The old problem of Caching. This is not a tutorial on 70s materials, right? [Same memory hierarchy figure: L1, L2, RAM, Disk; larger/slower/cheaper versus smaller/faster/more expensive.]

Nowadays. Hardware technologies have changed: storage, memory, network, etc. Solutions have to exploit all these changes to serve client requests at a scale of billions of requests per second, with low latency, with high availability, and with data consistency (whose strength varies from application to application). This calls for new designs for caching services: the data is very time sensitive, huge in volume, and dynamically generated. Source: http://www.visualcapitalist.com/what-happens-internet-minute-2016/

Facebook page load. Each page load is translated into hundreds of lookups, and the lookups are done in multiple rounds. (This slide was taken from the Scaling Memcache at Facebook presentation at NSDI'13.)

Nowadays Architecture. [Diagram: millions of end users generate a page-load and page-update stream (millions/sec), which becomes billions of key lookups per second against stateless application servers and persistent storage; the storage is overloaded.]

Nowadays Architecture. [Diagram: a load balancer spreads the page-load and page-update stream (millions/sec) from millions of end users across hundreds of stateless application servers, which issue billions of key lookups per second; the persistent storage is still overloaded.]

Nowadays Architecture. [Diagram: the persistent storage is partitioned and replicated. Remaining concerns: high latency, the set of supported operations, and consistency.]

Facebook page load. Each page load is translated into hundreds of lookups, and the lookups are done in multiple rounds. Reads are 99.8% of operations while writes are only 0.2% [TAO, ATC'13]. Persistent storage cannot handle this request throughput at this scale; caching gives lower latency and alleviates the load on the storage. (This slide was taken from the Scaling Memcache at Facebook presentation at NSDI'13.)

Nowadays Architecture. [Diagram: a caching server is placed between the application servers and the partitioned, replicated persistent storage; hits are served from the cache, misses go to storage. A single caching server is overloaded.]

Nowadays Architecture. [Diagram: tens of caching servers sit between the application servers and the partitioned, replicated persistent storage. Remaining questions: handling failures, load balancing, and look-aside versus knowledge-based caching.]


Access latency. [Table of representative access latencies from Peter Norvig: http://norvig.com/21-days.html#answers]

Goal. Old caching: lower access latency (Ta = Th + m * Tm); challenges: replacement policy, update strategy, update durability, thread contention. Modern caching: lower access latency and load distribution; challenges: scale management, load balancing (utilization), update strategy, update durability, data consistency, request rate.

Replacement Policies

Cache Replacement Policies. [Flowchart: on a cache lookup, a hit returns the page; a miss fetches the page and inserts it into the cache. If the cache is not full, simply insert; if the cache is full, evict a page and then insert.]

Cache Replacement Policies. Cache size is limited, so it cannot fit everything; an eviction mechanism is needed, and there is contention between hit access time and miss ratio. Examples: FIFO, LIFO, LRU (recency of access), Pseudo-LRU, ARC (frequency and recency of access), MRU, and more.

LRU. Hardware-supported implementations: using counters, or using a binary 2D matrix. Software implementation: using a doubly linked list and a hash table.

LRU – Hardware using Counters. Use a large enough counter (64-128 bits) and increment it after each instruction. When accessing a page, tag the page with the current counter value. When a page fault happens, evict the page with the lowest counter tag. Downside: it is very expensive to examine the counter of every page.

LRU – Hardware using a 2D Binary Matrix. [Sequence of 4x4 binary matrices showing the state after each access.] On access to page i, set row i to all ones, then clear column i to zeros. This uses O(N) bits per page, and the row with the smallest binary value is the eviction candidate.
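A minimal Python sketch of the matrix method described above; the slot numbering and cache size are illustrative, not from the slides.

    class MatrixLRU:
        """Matrix-based LRU ordering for a small, fixed set of cache slots."""
        def __init__(self, n_slots):
            self.n = n_slots
            # bits[i][j] == 1 means slot i was used more recently than slot j.
            self.bits = [[0] * n_slots for _ in range(n_slots)]

        def access(self, slot):
            # Set row `slot` to all ones, then clear column `slot`.
            for j in range(self.n):
                self.bits[slot][j] = 1
            for i in range(self.n):
                self.bits[i][slot] = 0

        def victim(self):
            # The slot whose row has the smallest binary value is the LRU one.
            def row_value(i):
                return int("".join(str(b) for b in self.bits[i]), 2)
            return min(range(self.n), key=row_value)

    lru = MatrixLRU(4)
    for slot in (0, 1, 2, 3, 0):
        lru.access(slot)
    print(lru.victim())   # prints 1: slot 1 is now the least recently used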

LRU – Software Implementation. [Animation: a hash table maps keys to nodes of a doubly linked list; on each access, the corresponding node is moved to the head of the list.]
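A minimal Python sketch of this software design. It uses collections.OrderedDict, which internally combines a hash table with a doubly linked list, rather than writing the linked list by hand; this is an illustrative implementation, not code from the talk.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()   # key -> value, ordered by recency

        def get(self, key):
            if key not in self.items:
                return None              # cache miss
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)   # evict the least recently used

    cache = LRUCache(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")          # "a" becomes most recently used
    cache.put("c", 3)       # evicts "b"
    print(cache.get("b"))   # None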

Pseudo-LRU (PLRU) – Bit-PLRU. One bit per page, initially zero. On access, set the page's bit to one. If all bits would become one, reset all bits to zero except the bit of the last accessed page. [Example for the access sequence 1, 2, 3, 4: the bits go 0000 -> 1000 -> 1100 -> 1110 -> 0001.]
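A minimal Python sketch of Bit-PLRU as described above (slot numbering and cache size are illustrative):

    class BitPLRU:
        def __init__(self, n_slots):
            self.bits = [0] * n_slots

        def access(self, slot):
            self.bits[slot] = 1
            if all(self.bits):
                # All bits set: clear everything except the just-accessed slot.
                self.bits = [0] * len(self.bits)
                self.bits[slot] = 1

        def victim(self):
            # Any slot whose bit is zero is an eviction candidate;
            # here we pick the lowest-indexed one.
            return self.bits.index(0)

    plru = BitPLRU(4)
    for slot in (0, 1, 2, 3):   # corresponds to the access sequence 1, 2, 3, 4
        plru.access(slot)
    print(plru.bits)            # [0, 0, 0, 1]
    print(plru.victim())        # slot 0 is an eviction candidate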

Tree Pseudo-LRU. Organizes blocks at the leaves of a binary tree of one-bit flags. The path from the root, read according to the flags, leads to the PLRU leaf; 0 goes left and 1 goes right. On access, flip the values of the flags along the path to the accessed leaf. [Animation: the tree state is shown after each access in the sequence 1, 3, 2, 4, 1, 4, 5.]
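A minimal Python sketch of tree-based PLRU. It follows the common formulation in which each flag on the access path is set to point away from the accessed leaf (the slides describe this as flipping the flags along the path); the cache size is illustrative.

    class TreePLRU:
        """Tree-based pseudo-LRU for n ways, n a power of two."""
        def __init__(self, n_ways):
            self.n = n_ways
            self.bits = [0] * (n_ways - 1)   # internal nodes in heap order

        def access(self, way):
            # Walk from the root to leaf `way`, making each bit on the
            # path point away from that leaf (0 = left, 1 = right).
            node, lo, hi = 0, 0, self.n
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if way < mid:                 # accessed leaf is on the left
                    self.bits[node] = 1       # so point the bit to the right
                    node, hi = 2 * node + 1, mid
                else:                         # accessed leaf is on the right
                    self.bits[node] = 0       # so point the bit to the left
                    node, lo = 2 * node + 2, mid

        def victim(self):
            # Follow the bits from the root: 0 goes left, 1 goes right.
            node, lo, hi = 0, 0, self.n
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if self.bits[node] == 0:
                    node, hi = 2 * node + 1, mid
                else:
                    node, lo = 2 * node + 2, mid
            return lo

    plru = TreePLRU(4)
    for way in (0, 2, 1, 3):
        plru.access(way)
    print(plru.victim())   # 0: the least recently used leaf per the tree bits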

ARC – Adaptive Replacement Cache. Maintains two LRU lists, L1 and L2: L1 for recency and L2 for frequency. It tracks twice as many pages as fit in the cache: |L1| + |L2| = 2c. It dynamically and adaptively balances between recency and frequency, and it is online and self-tuning in response to evolving and possibly changing access patterns. (ARC also stands for Almaden Research Center.) Megiddo, Nimrod, and Dharmendra S. Modha. "ARC: A Self-Tuning, Low Overhead Replacement Cache." FAST 2003.

ARC. [Diagram: L1 (recency) is split into T1 (cached pages) and B1 (ghost pages); L2 (frequency) is split into T2 (cached pages) and B2 (ghost pages); the cache holds c pages in total.] A miss is inserted at the head of L1. If a page in L1 is accessed again, it moves to L2, at the head of T2. A miss that is found in ghost list B1 increases the target size of L1; a miss that is found in ghost list B2 increases the target size of L2.

Goal. Old caching: lower access latency; challenges: replacement policy, update strategy, update durability, thread contention. Modern caching: lower access latency and load distribution; challenges: scale management, load balancing (utilization), update strategy, update durability, data consistency, request rate.

Scale Management

Memcached. A distributed in-memory caching system, free and open source. First written in Perl by Brad Fitzpatrick in 2003, then rewritten in C by Anatoly Vorobey. Caching is client-driven. How does it work? (https://memcached.org/; Fitzpatrick, B. "Distributed caching with memcached." Linux Journal 2004, 124 (Aug. 2004), 5.)

Memcached. [Diagram: the Memcached logic is split between the client side (an application server or dedicated cache client) and the server side (the cache server), with persistent storage behind them. 1: lookup(k) at the cache; 2: response(k); if k is not null, done. Otherwise 3: lookup(k) at the storage; 4: response(k); 5: set(k, v) in the cache.] So what does Memcached provide? (https://memcached.org/; Fitzpatrick, B. "Distributed caching with memcached." Linux Journal 2004, 124 (Aug. 2004), 5.)
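A minimal Python sketch of this look-aside (cache-aside) flow. The cache and db objects are hypothetical stand-ins for a memcached client and a storage client, not code from the talk.

    def lookup(cache, db, key):
        """Look-aside read: try the cache first, fall back to storage."""
        value = cache.get(key)              # steps 1-2: lookup(k) / response(k)
        if value is not None:
            return value                    # cache hit: done
        value = db.read(key)                # steps 3-4: fetch from persistent storage
        if value is not None:
            cache.set(key, value)           # step 5: populate the cache for next time
        return value

The key point is that the cache server never talks to the storage itself; the client drives both the miss handling and the fill.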

Memcached. [Diagram: three caching servers, 1 GB each, accessed by an application server or a dedicated cache client.] (https://memcached.org/; Fitzpatrick, B. "Distributed caching with memcached." Linux Journal 2004, 124 (Aug. 2004), 5.)

Memcached. [Diagram: the three 1 GB caching servers behave as one 3 GB cache from the client's point of view.] Each key is mapped to one caching server, giving better memory utilization through hashing. Clients know all the servers; the servers do not communicate with each other. This shared-nothing architecture is easy to scale. (https://memcached.org/; Fitzpatrick, B. "Distributed caching with memcached." Linux Journal 2004, 124 (Aug. 2004), 5.)

LRU – Software Implementation. [Recap of the earlier figure: a hash table plus a doubly linked list.]

Memcached. [Diagram: to look up key k, the client computes hash(k) % server_count to pick a caching server and sends lookup(k) to it. On that server: is k here? If yes, update the LRU metadata and return the value; if no, return null.] (https://memcached.org/; Fitzpatrick, B. "Distributed caching with memcached." Linux Journal 2004, 124 (Aug. 2004), 5.)
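A minimal sketch of that client-side server selection in Python; the server addresses and the choice of hash function are illustrative assumptions.

    import hashlib

    servers = ["cache-0:11211", "cache-1:11211", "cache-2:11211"]   # assumed addresses

    def server_for(key, servers):
        # Stable hash of the key, then modulo the number of servers.
        digest = hashlib.md5(key.encode()).hexdigest()
        return servers[int(digest, 16) % len(servers)]

    print(server_for("user:42:profile", servers))

The weakness of this scheme is exactly what the next slides discuss: changing len(servers) changes the mapping of almost every key.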

Consistent Hashing. When adding or removing a server, the modulo function causes high key churn (remapping); with consistent hashing, only about K/n keys are remapped. The churn problem: assume keys 1 through 12 distributed over 4 servers numbered 1 to 4. (Karger, David, et al. "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web." STOC 1997. Karger, David, et al. "Web caching with consistent hashing." Computer Networks 31.11 (1999): 1203-1213.)

Consistent Hashing. [Diagram: with modulo placement, server 1 holds keys 1, 5, 9; server 2 holds 2, 6, 10; server 3 holds 3, 7, 11; server 4 holds 4, 8, 12. For example, key 5 maps to 5 % 4 = 1 and key 11 maps to 11 % 4 = 3.] What happens when the number of servers changes? (Karger et al., STOC 1997; Karger et al., Computer Networks 1999.)

Consistent Hashing. [Diagram: if server 3 is removed and the keys are redistributed modulo 3 over servers 1, 2, 4, the first server holds 1, 4, 7, 10; the second holds 2, 5, 8, 11; the third holds 3, 6, 9, 12.] Compared with the original placement, keys 3, 4, 5, 6, 7, 8, 9, 10, 11 are remapped; keys 4, 5, 6, 8, 9, 10 are remapped even though their machines stayed up. (Karger et al., STOC 1997; Karger et al., Computer Networks 1999.)

Consistent Hashing. [Animation over keys 1 through 12: servers and keys are both hashed onto a ring, and each key is assigned to the next server along the ring. When a server is added or removed, only the keys in the neighboring arc move.] (Karger et al., STOC 1997; Karger et al., Computer Networks 1999.)
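A minimal consistent-hash ring in Python, as a sketch of the idea above; it omits virtual nodes, and the hash function and server names are illustrative.

    import bisect
    import hashlib

    def h(value):
        # Map a string onto a large integer ring.
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, servers):
            self.ring = sorted((h(s), s) for s in servers)

        def server_for(self, key):
            # Walk clockwise from the key's position to the next server point.
            points = [p for p, _ in self.ring]
            idx = bisect.bisect(points, h(key)) % len(self.ring)
            return self.ring[idx][1]

        def add(self, server):
            bisect.insort(self.ring, (h(server), server))

        def remove(self, server):
            self.ring.remove((h(server), server))

    ring = HashRing(["cache-0", "cache-1", "cache-2"])
    before = {k: ring.server_for(k) for k in map(str, range(100))}
    ring.add("cache-3")
    after = {k: ring.server_for(k) for k in map(str, range(100))}
    moved = sum(before[k] != after[k] for k in before)
    print(f"{moved} of 100 keys remapped")   # roughly K/n, not nearly all of them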

Web Scale Distributed Caching Examples

Scaling Memcache at Facebook. Globally distributed caching (an in-memory key/value store) serving billions of requests per second over trillions of items. Goals: allow near real-time communication, aggregate content on the fly from multiple sources, access and update very popular shared content, and scale to process millions of user requests per second. (Nishtala, Rajesh, et al. "Scaling Memcache at Facebook." NSDI 2013.)

Scaling Memcache at Facebook. Read-dominated workload; multiple data sources (MySQL, HDFS); one caching layer for the multiple data sources; a demand-filled look-aside cache; high fanout and hierarchical data fetching. (Nishtala, Rajesh, et al. "Scaling Memcache at Facebook." NSDI 2013.)

Lookups and Updates. [Diagram. Lookup: the client (web server) 1: lookup(k) in memcached; on a miss, 2: SELECT from the database, then 3: set(k, v) in memcached. Update: 1: UPDATE the database, then 2: delete k from memcached.] (Nishtala, Rajesh, et al. "Scaling Memcache at Facebook." NSDI 2013.)

Handling updates. Memcache needs to be invalidated after a DB write. Prefer deletes to sets, since deletes are idempotent. It is up to the web application to specify which keys to invalidate after a database update. (Nishtala, Rajesh, et al. "Scaling Memcache at Facebook." NSDI 2013. This slide was taken from the Scaling Memcache at Facebook presentation at NSDI'13.)
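A minimal sketch of this write path, continuing the hypothetical cache/db stand-ins from the look-aside example above: the application writes the database first and then deletes (rather than sets) the affected key, so a retried invalidation is harmless.

    def update(cache, db, key, new_value):
        """Update the database, then invalidate the cached copy."""
        db.write(key, new_value)     # 1: update persistent storage
        cache.delete(key)            # 2: delete, not set -- deletes are idempotent
        # The next lookup(key) misses, reads the fresh value from the
        # database, and repopulates the cache (demand-filled look-aside).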

One Cluster. [Diagram: many clients (web servers) talk to many memcached servers, with keys spread by consistent hashing; the resulting fan-out causes congestion.] Web servers: batch requests, use a sliding window to limit outgoing requests, use UDP for gets, and use TCP for deletes and sets. (Nishtala, Rajesh, et al. "Scaling Memcache at Facebook." NSDI 2013.)

Replication. [Diagram: multiple front-end clusters send lookups and updates to a shared storage cluster.]

[Two figure-only slides taken from the Scaling Memcache at Facebook presentation at NSDI'13.]

[Figure: master-slave replication across data centers, to tolerate datacenter-scale outages and reduce read latency. Taken from the Scaling Memcache at Facebook presentation at NSDI'13.]


Caching with Domain Knowledge

TAO: Facebook's Distributed Data Store for the Social Graph. The Facebook graph consists of nodes and associations. A plain look-aside cache only gets and sets keys: it is agnostic of the data structure, which means more requests and more rounds of lookups. (Bronson, Nathan, et al. "TAO: Facebook's Distributed Data Store for the Social Graph." USENIX ATC 2013.)

The Social Graph. [Figure: a hypothetical encoding of the social graph, with nodes such as USER, POST, COMMENT, PHOTO, LOCATION, EXIF_INFO, and GPS_DATA, connected by associations such as FRIEND, AUTHOR, LIKE, COMMENT, CHECKIN, AT, and EXIF. Taken from Facebook's TAO presentation at ATC'13.]

What does TAO do? It models nodes and associations and uses graph partitioning instead of hash partitioning. Instead of loading a single key (node or association) per request, it enriches the API: load ranges of an association list (e.g., the newest ten comments on a post) and get the count of an association (e.g., how many comments are there on a post?). (Bronson, Nathan, et al. "TAO: Facebook's Distributed Data Store for the Social Graph." USENIX ATC 2013.)

Objects and Associations API. Reads are 99.8% of operations: point queries (obj_get 28.9%, assoc_get 15.7%), range queries (assoc_range 40.9%, assoc_time_range 2.8%), count queries (assoc_count 11.7%). Writes are 0.2% of operations: create, update, delete for objects (obj_add 16.5%, obj_update 20.7%, obj_del 2.0%), set and delete for associations (assoc_add 52.5%, assoc_del 8.3%). (This slide was taken from Facebook's TAO presentation at ATC'13.)

Partitioning. Objects are assigned a shard id and are bound to that shard for their lifetime. Associations are defined as (id1, atype, id2), where id1 and id2 are node ids; an association is stored on the shard of its id1, so an association query can be served from a single server.
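A minimal sketch of that placement rule in Python; the shard count and the hashing are assumptions for illustration, not TAO's actual scheme (in TAO the shard id is carried in the object id itself).

    N_SHARDS = 1024                      # assumed number of logical shards

    def shard_for_object(object_id):
        # Illustrative: derive a shard from the object id.
        return hash(object_id) % N_SHARDS

    def shard_for_assoc(id1, atype, id2):
        # Associations live on the shard of their id1, so a range query
        # like "newest ten comments on post id1" touches a single server.
        return shard_for_object(id1)

    print(shard_for_assoc(1234, "COMMENT", 5678) == shard_for_object(1234))   # True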

Follower and Leader Caches. [Diagram: web servers talk to follower caches, follower caches talk to a leader cache, and the leader cache talks to the database. Taken from Facebook's TAO presentation at ATC'13.]

Write-through Caching – Association Lists. [Diagram: a web server writes an association (X -> Y); the follower cache forwards the write to the leader cache, which writes it through to the database and acknowledges it. The leader then sends refill X messages to the other follower caches so their cached association lists stay up to date; range gets are served from the followers. Taken from Facebook's TAO presentation at ATC'13.]

Asynchronous DB Replication. [Diagram: in the master data center, writes go through the leader cache to the master database; in a replica data center, writes are forwarded to the master, invalidations and refills are embedded in the replicated SQL, and they are delivered after DB replication is done. Taken from Facebook's TAO presentation at ATC'13.]

Improving Availability: Read Failover. [Diagram: web servers, follower and leader caches, and the databases in the master and replica data centers; when a cache tier is unavailable, reads fail over to another tier or data center. Taken from Facebook's TAO presentation at ATC'13.]

Hotspots. Objects are semi-randomly distributed among shards, which leads to load imbalance: a post by Justin Bieber vs a post by Victor Zakhary. Remedies: replicate (more followers handle the same object, at the cost of more invalidation messages), and cache hot objects at the client (an additional level of invalidation, but it alleviates the request load on the caching servers).

Inverse associations. Bidirectional relationships have separate a->b and b->a edges: inv_type(LIKES) = LIKED_BY, inv_type(FRIEND_OF) = FRIEND_OF. Forward and inverse types are linked only during writes; TAO's assoc_add updates both. This is not atomic, but failures are logged and repaired. [Figure: Nathan and Carol are FRIEND_OF each other; a post "On the summit" carries AUTHOR/AUTHORED_BY and LIKES/LIKED_BY edges. Taken from Facebook's TAO presentation at ATC'13.]

Slicer: Auto-Sharding for Datacenter Applications. Provides auto-sharding without tying it to storage. It separates assignment generation (the "control plane") from request forwarding (the "data plane") via a small interface, in a scalable, consistent, fault-tolerant manner, and it reshards for capacity adaptation, failure adaptation, and load balancing. (Adya, Atul, et al. "Slicer: Auto-sharding for datacenter applications." OSDI 2016. This slide was taken from Slicer's presentation at OSDI'16.)

Slicer Sharding Model. [Diagram: keys K1, K2, K3 are hashed into the range 0 to 2^63 - 1, and contiguous ranges ("slices") of that space are assigned to caching servers.] Hash keys into a 63-bit space; assign ranges ("slices") of the space to servers; split, merge, and migrate slices for load balancing; "asymmetric replication" keeps more copies of hot slices. (Adya, Atul, et al. "Slicer: Auto-sharding for datacenter applications." OSDI 2016. This slide was taken from Slicer's presentation at OSDI'16.)
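A minimal sketch of range-based slice lookup in Python; the slice boundaries, server names, and hash function are invented for illustration and are not Slicer's actual assignment.

    import bisect
    import hashlib

    def key_hash(key):
        # Slicer hashes keys into a 63-bit space.
        return int(hashlib.sha256(key.encode()).hexdigest(), 16) % (1 << 63)

    # Each slice is (start_of_range, assigned_server); ranges cover 0 .. 2**63 - 1.
    SLICES = [
        (0,        "cache-a"),
        (1 << 61,  "cache-b"),
        (1 << 62,  "cache-c"),
        (3 << 61,  "cache-b"),   # a split slice reassigned for load balancing
    ]

    def server_for(key):
        starts = [start for start, _ in SLICES]
        idx = bisect.bisect_right(starts, key_hash(key)) - 1
        return SLICES[idx][1]

    print(server_for("user:42"))

Because assignments are ranges rather than a modulo, the control plane can split, merge, or move individual slices without remapping the whole key space.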

Slicer Overview. [Diagram: frontends use a Clerk library and caching servers use a Slicelet library; both hash keys and form the distributed data plane, while the centralized Slicer Service forms the control plane.] (Adya, Atul, et al. "Slicer: Auto-sharding for datacenter applications." OSDI 2016. This slide was taken from Slicer's presentation at OSDI'16.)

Slicer Architecture. [Diagram: frontends (Clerk) and caching servers (Slicelet) form the data plane; the control plane consists of an Assigner, a Distributor with a Backup Distributor, and existing Google infrastructure for capacity monitoring, health monitoring, load monitoring, and lease management. Taken from Slicer's presentation at OSDI'16.]

Summary. Replacement policy; partitioning caches (fragmentation, Cliffhanger, cuckoo hashing); cluster management (Memcached); cross-datacenter caching (Memcache at Facebook); domain-knowledge caching (TAO at Facebook).

Summary (continued). Auto-sharding (Slicer).

Open problems. Load balancing and utilization: distributing keys equally between shards, memory utilization, load per key (the hotspots problem). Re-sharding: how to maintain the mapping? Replication: propagating updates, consistency vs latency. Geo-replication: consistency vs latency.

Open problems. Domain-knowledge caching; caching for distributed real-time analytics; advances in hardware (SDN, RDMA, NVM, SSDs); datacenters; edge and fog computing.

References and Sources. Full slide deck: https://goo.gl/1VYGW3
http://www.visualcapitalist.com/what-happens-internet-minute-2016/
http://norvig.com/21-days.html#answers
Tanenbaum, Andrew S., and Herbert Bos. Modern Operating Systems. Prentice Hall Press, 2014.
Megiddo, Nimrod, and Dharmendra S. Modha. "ARC: A Self-Tuning, Low Overhead Replacement Cache." FAST 2003.
https://memcached.org/; Fitzpatrick, B. "Distributed caching with memcached." Linux Journal 2004, 124 (Aug. 2004), 5.
Karger, David, et al. "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web." Proceedings of the twenty-ninth annual ACM Symposium on Theory of Computing. ACM, 1997.
Karger, David, et al. "Web caching with consistent hashing." Computer Networks 31.11 (1999): 1203-1213.

References and Sources (continued). https://redis.io/
Cidon, Asaf, et al. "Dynacache: Dynamic Cloud Caching." HotStorage 2015.
Cidon, Asaf, et al. "Cliffhanger: Scaling Performance Cliffs in Web Memory Caches." USENIX NSDI 2016.
Pagh, Rasmus, and Flemming Friche Rodler. "Cuckoo hashing." Journal of Algorithms 51.2 (2004): 122-144.
Fan, Bin, David G. Andersen, and Michael Kaminsky. "MemC3: Compact and Concurrent MemCache with Dumber Caching and Smarter Hashing." NSDI 2013.
Li, Xiaozhou, et al. "Algorithmic improvements for fast concurrent cuckoo hashing." Proceedings of the Ninth European Conference on Computer Systems. ACM, 2014.
Nishtala, Rajesh, et al. "Scaling Memcache at Facebook." NSDI 2013.
Bronson, Nathan, et al. "TAO: Facebook's Distributed Data Store for the Social Graph." USENIX Annual Technical Conference. 2013.

References and Sources (continued). Dragojević, Aleksandar, et al. "FaRM: Fast Remote Memory." Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation. 2014.
Lim, Hyeontaek, et al. "MICA: A Holistic Approach to Fast In-Memory Key-Value Storage." NSDI 2014.
Pu, Qifan, et al. "FairRide: Near-Optimal, Fair Cache Sharing." 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16). USENIX Association, 2016.
Adya, Atul, et al. "Slicer: Auto-sharding for Datacenter Applications." USENIX Symposium on Operating Systems Design and Implementation. 2016.

Questions? Thank you.