Ceph: A Scalable, High-Performance Distributed File System
Uploaded by: natalia-silvester | Published: 2016-07-02
Ceph: A Scalable, High-Performance Distributed File System: Transcript
Priya Bhat, Yonggang Liu, Jing Qin

Contents:
1. Ceph Architecture
2. Ceph Components
3. Performance Evaluation
4. Ceph Demo
5. Conclusion

Ceph Architecture: What is Ceph? Ceph is a distributed file system that provides excellent performance, scalability, and reliability.
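The transcript stops at this high-level definition, but the outline promises a "Ceph Demo" section, so a minimal sketch of what such a demo typically looks like follows. It is an illustration, not the demo from the original slides: it assumes the python3-rados bindings are installed, a cluster is reachable via /etc/ceph/ceph.conf, and a pool named "demo-pool" already exists; the pool and object names are hypothetical placeholders.

    # Minimal sketch: store and read back one object via librados.
    # Assumes python3-rados, a reachable cluster, and an existing pool
    # named "demo-pool" (placeholder, not from the original slides).
    import rados

    # Connect using the standard config file and default client keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    print("Cluster ID:", cluster.get_fsid())

    # Open an I/O context on the pool, write one object, read it back.
    ioctx = cluster.open_ioctx('demo-pool')
    try:
        ioctx.write_full('hello-object', b'Hello from RADOS')
        print(ioctx.read('hello-object'))
    finally:
        ioctx.close()
        cluster.shutdown()

The same round trip can be done from the shell with rados put and rados get once ceph -s reports the cluster as healthy.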