Synchronizing Processes

Uploaded by liane-varnes on 2017-05-13

Presentation Transcript

Slide1

Synchronizing Processes

Clocks
- External clock synchronization (Cristian)
- Internal clock synchronization (Gusella & Zatti)
- Network Time Protocol (Mills)
Decisions
- Agreement protocols (Fischer)
Data
- Distributed file systems (Satyanarayanan)
Memory
- Distributed shared memory (Nitzberg & Lo)
Schedules
- Distributed scheduling (Isard et al.)

Slide2

Agreement Problems

Agreement problems require all non-faulty (or correct) processes to come to an agreement.
Three types of problems:
- Consensus: each process P_i proposes a value v_i, and all non-faulty processes agree on a single consensus value c.
- Interactive Consistency: each process P_i proposes a value v_i, and all non-faulty processes agree on a consensus vector c = <v_1, v_2, ..., v_N>.
- Byzantine (Generals, or Reliable Broadcast): one process P_g proposes a value v_g, and all non-faulty processes agree on a consensus value c = v_g.

Slide3

Relations Among the Problems

- The interactive consistency problem can be solved with a Byzantine protocol Bz, and the consensus problem can be solved with an interactive consistency protocol, so the consensus problem can also be solved with a Byzantine protocol Bz.
- N copies of the Bz protocol are run in parallel, where each processor P_i acts as the commander (P_g) for exactly one copy of the protocol.
- The non-faulty processors use the majority vote of the consensus vector as the consensus value.
- Hence, a Byzantine protocol can solve all three problems.
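The reduction above can be sketched in a few lines of Python. This is an illustrative sketch, not a fault-tolerant implementation: `bz` is a stand-in for a real Byzantine protocol and is assumed, for the demo, to deliver the commander's value to every process.

```python
from collections import Counter

def bz(commander, value, processes):
    # Stand-in for a Byzantine (reliable broadcast) protocol: every
    # non-faulty process ends up with the commander's value. A real
    # protocol such as OM(m) would tolerate traitors; here we assume
    # fault-free delivery so the reduction itself stays visible.
    return {p: value for p in processes}

def consensus(proposals):
    # N parallel Bz runs: each process is commander for exactly one run,
    # filling one slot of the consensus vector.
    processes = list(proposals)
    vector = {c: bz(c, proposals[c], processes) for c in processes}
    # Each process takes the majority vote over its copy of the vector.
    return {p: Counter(vector[c][p] for c in processes).most_common(1)[0][0]
            for p in processes}
```

For example, `consensus({"P1": "commit", "P2": "commit", "P3": "abort"})` has every process decide "commit", the majority of the vector.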

Slide4

Consensus & Byz. Agreement

The Byzantine Generals Problem [Lamport, Shostak, & Pease, 1982]
The basic idea is very similar to the consensus problem:
- Each of N generals has a value v(i) (e.g., "attack" or "retreat").
- We want an algorithm that allows all generals to exchange their values such that the following hold:
  - All non-faulty generals must agree on the values of v(1), ..., v(N).
  - If the i-th general is non-faulty, then the value agreed for v(i) must be the i-th general's value.

Slide5

Consensus & Byz. Agreement

Byzantine Generals Problem
The problem described earlier can be solved by restricting attention to one commanding general and considering all others to be lieutenants.
A commanding general must send an order to his N-1 lieutenants such that:
- IC1: All loyal lieutenants obey the same order.
- IC2: If the commander is loyal, then every loyal lieutenant obeys the order he sends.

Slide6

Oral Message Algorithm

Assumptions:
1. Every message that is sent is delivered correctly.
2. The receiver of a message knows who sent it.
3. The absence of a message can be detected.
Assumptions #1 and #2 prevent a traitor from interfering with the communication between two other generals.
Assumption #3 foils a traitor who tries to prevent a decision by simply not sending messages.
The algorithm is denoted OM(m), where m is the maximum number of traitors the system can handle.

Slide7

Impossibility Theorem

If processes can only send unauthenticated messages, more than two thirds of the processes must be non-faulty to derive a solution.
In other words, no solution exists for a system with fewer than 3m + 1 nodes, where m is the number of faulty processes.

Slide8
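The 3m + 1 bound above can be stated as a one-line check; the function name is illustrative.

```python
def max_traitors(n):
    # With unauthenticated (oral) messages, n processes tolerate at most
    # m traitors where n >= 3m + 1, i.e. m = floor((n - 1) / 3).
    return (n - 1) // 3

# Three generals cannot cope with even one traitor; four can.
assert max_traitors(3) == 0
assert max_traitors(4) == 1
assert max_traitors(10) == 3
```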

Consensus & Byz. Agreement

Algorithm with Oral Messages
Algorithm OM(m), defined recursively, tolerates m traitors.

Algorithm OM(0):
1. The commander sends his value to each lieutenant.
2. Each lieutenant uses the value received from the commander (or "retreat" if no message is received).

Algorithm OM(m), m > 0:
1. The commander sends his value to each lieutenant.
2. Each lieutenant uses OM(m-1) to send the value it received (taken to be "retreat" if no message was received) to the other N-2 lieutenants.
3. Each lieutenant uses the majority of the values received from the commander and from the other lieutenants in the previous two steps.

Slide9
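The OM(m) recursion above can be simulated in a few lines of Python. This is a sketch rather than a message-passing implementation: by assumption, a traitorous commander is modeled as sending alternating orders, and `om` returns the decision of every lieutenant.

```python
from collections import Counter

RETREAT = "retreat"

def majority(values):
    # Majority value; ties and empty input default to RETREAT.
    if not values:
        return RETREAT
    top = Counter(values).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return RETREAT
    return top[0][0]

def om(m, commander, lieutenants, value, traitors):
    # Step 1: the commander sends a value to each lieutenant. A traitor
    # is modeled (demo assumption) as sending conflicting orders.
    sent = {}
    for idx, lt in enumerate(lieutenants):
        if commander in traitors:
            sent[lt] = "attack" if idx % 2 == 0 else RETREAT
        else:
            sent[lt] = value

    if m == 0:
        return sent  # OM(0): use the value received from the commander

    # Step 2: each lieutenant relays its value to the others via OM(m-1).
    relayed = {j: om(m - 1, j, [p for p in lieutenants if p != j],
                     sent[j], traitors)
               for j in lieutenants}

    # Step 3: majority over the commander's value and the relayed values.
    return {i: majority([sent[i]] +
                        [relayed[j][i] for j in lieutenants if j != i])
            for i in lieutenants}
```

With four processes and one traitorous lieutenant, OM(1) lets the loyal lieutenants obey the loyal commander's order; with a traitorous commander, the three loyal lieutenants still agree with each other.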

Intuition

If the commander is loyal, then he sends the same command to all lieutenants. In this case, the lieutenants all agree on the correct command by majority, as in the example.
If the commander is a traitor, then he may send different commands to different lieutenants. However, this leaves one fewer traitor among the lieutenants, making it easier to reach agreement among them. (When the commander is a traitor, the lieutenants may agree on any command.)

Slide10

Signed Message Algorithm

Assumptions:
1. Every message that is sent is delivered correctly.
2. The receiver of a message knows who sent it.
3. The absence of a message can be detected.
Signatures:
- A loyal general's signature cannot be forged, and any alteration of the contents of his signed messages can be detected.
- Anyone can verify the authenticity of a general's signature.
Denoted SM(m), the algorithm can cope with m traitors for any number of generals; i.e., it is now possible to tolerate any number of traitors.

Slide11

SM(m) Algorithm

1. Initially V_i = { }.
2. The commander P_0 signs and sends his value to every lieutenant.
3. If lieutenant i receives a message of the form v:0 from the commander, then it adds v to V_i and sends the message v:0:i to every other lieutenant.
4. If lieutenant i receives a message of the form v:0:j_1:...:j_k and v is not in V_i, then it adds v to V_i, and if k < m, it sends the message v:0:j_1:...:j_k:i to every lieutenant other than j_1, ..., j_k.
5. When lieutenant i will receive no more messages, it obeys choice(V_i).
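These steps can be sketched in Python under simplifying assumptions: signatures are modeled only as the chain of signer ids (unforgeability is assumed rather than enforced), and a traitorous commander is modeled by placing conflicting signed orders in `orders`.

```python
from collections import deque

def choice(V, default="retreat"):
    # Obey a single order; fall back to the predetermined default otherwise.
    return next(iter(V)) if len(V) == 1 else default

def sm(m, orders, lieutenants):
    # orders maps each lieutenant to the signed value v:0 it received
    # from the commander (process 0).
    V = {lt: set() for lt in lieutenants}
    # Each queued message is (recipient, value, signature chain).
    queue = deque((lt, v, [0]) for lt, v in orders.items())
    while queue:
        lt, v, chain = queue.popleft()
        if v in V[lt]:
            continue                      # this order is already in V_i
        V[lt].add(v)
        if len(chain) - 1 < m:            # k < m: keep relaying
            for other in lieutenants:
                if other != lt and other not in chain:
                    queue.append((other, v, chain + [lt]))
    # No more messages: every lieutenant obeys choice(V_i).
    return {lt: choice(V[lt]) for lt in lieutenants}
```

A traitorous commander sending opposite orders to two lieutenants leaves both with V = {attack, retreat}, so both obey the default; a consistent commander leaves a single order, which both obey.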

Slide12

Choice Function

The choice function is applied to a set of orders to obtain a single one.
Requirements:
- If the set V consists of a single element v, then choice(V) = v.
- If the set V is empty, then choice(V) = a predetermined value.
Possibilities:
- choice(V) selects the majority of set V, or a predetermined value if there is no majority.
- choice(V) selects the median of set V, if the elements of V can be ordered.
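Both possibilities can be written directly; the function names and default values are illustrative.

```python
from collections import Counter
from statistics import median

def choice_majority(V, default="retreat"):
    # Majority order in V, or the predetermined default when V is empty
    # or no strict majority exists.
    if not V:
        return default
    value, count = Counter(V).most_common(1)[0]
    return value if count > len(V) / 2 else default

def choice_median(V, default=0):
    # Median order, assuming the elements of V can be ordered.
    return median(V) if V else default

assert choice_majority({"attack"}) == "attack"          # single element
assert choice_majority(set()) == "retreat"              # empty: default
assert choice_median([3, 1, 2]) == 2
```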

Slide13

Choice Function

The choice function is applied to a set of orders to obtain a single one.
Requirements:
- If the set V consists of a single element v, then choice(V) = v.
- If the set V is empty, then choice(V) = a predetermined value.
Basic Idea:
- If the commander is loyal, then all messages will be of the form v:0:w* (no forging), so all lieutenants end up with V_i = {v}.
- If the commander is a traitor, then loyal lieutenants can detect it.

Slide14

SM(m) Example

Consider SM(1) with a traitorous commander and two lieutenants. The commander signs and sends Attack:0 to Lieutenant 1 and Retreat:0 to Lieutenant 2; Lieutenant 1 then relays Attack:0:1 to Lieutenant 2, and Lieutenant 2 relays Retreat:0:2 to Lieutenant 1.
After step 3, V_1 = V_2 = {Attack, Retreat}.
Intuitively, both lieutenants can tell the commander is a traitor.
With no majority, choice would default to Retreat.

Slide15

Synchronizing Processes

Clocks
- External clock synchronization (Cristian)
- Internal clock synchronization (Gusella & Zatti)
- Network Time Protocol (Mills)
Decisions
- Agreement protocols (Fischer)
Data
- Distributed file systems (Satyanarayanan)
Memory
- Distributed shared memory (Nitzberg & Lo)
Schedules
- Distributed scheduling (Isard et al.)

Slide16

Synchronizing Processes:
Distributed File Systems

CS/CE/TE 6378 Advanced Operating Systems

Slide17

Distributed File Systems

Distributed File System (DFS): a system that provides access to the same storage for a distributed network of processes.
(Diagram: several processes connected to a common storage.)

Slide18

Benefits of DFSs

- Data sharing is simplified: files appear to be local, and users are not required to specify remote servers to access.
- User mobility is supported: any workstation in the system can access the storage.
- System administration is easier: operations staff can focus on a small number of servers instead of a large number of workstations.
- Better security is possible: servers can be physically secured, and no user programs are executed on them.
- Site autonomy is improved: workstations can be turned off without disrupting the storage.

Slide19

Design Principles for DFSs

- Utilize workstations when possible: opt to perform operations on workstations rather than servers to improve scalability.
- Cache whenever possible: this reduces contention on centralized resources and transparently makes data available wherever it is used.
- Exploit file usage characteristics: knowledge of how files are accessed can be used to make better choices. Example: temporary files are rarely shared, hence can be kept locally.
- Minimize system-wide knowledge and change: scalability is enhanced if global information is rarely monitored or updated.
- Trust the fewest possible entities: security is improved by trusting a smaller number of processes.
- Batch if possible: transferring files in large chunks improves overall throughput.

Slide20

Quiz Question

Which of the following was not a design principle from the Andrew and Coda file systems?
- Cache whenever possible.
- Decentralize operations when possible.
- Minimize system-wide knowledge.
- Trust the most possible entities.

Slide21

Mechanisms for Building DFSs

- Mount points: enable filename spaces to be "glued" together to provide a single, seamless, hierarchical namespace.
- Client caching: contributes the most to better performance in DFSs.
- Hints: pieces of information that can substantially improve performance if correct, with no negative consequence if erroneous. Example: caching mappings of pathname prefixes.
- Bulk data transfer: reduces network communication overhead by transferring in bulk.
- Encryption: used for remote authentication, either with private or public keys.
- Replication: storing the same data on multiple servers increases availability.
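The hint mechanism for pathname prefixes can be sketched as follows; `locate` and `try_server` are hypothetical helpers standing in for the authoritative location service and a probe of the hinted server, not a real DFS API.

```python
def route(path, hints, locate, try_server):
    # Fast path: a cached prefix -> server hint, used only if it works.
    prefix = path.rsplit("/", 1)[0]
    server = hints.get(prefix)
    if server is not None and try_server(server, path):
        return server
    # Missing or erroneous hint: fall back to the authoritative lookup
    # (no harm done) and refresh the hint for next time.
    server = locate(path)
    hints[prefix] = server
    return server
```

A correct hint avoids the lookup entirely; a stale one only costs the probe plus one `locate` call, which is what makes hints safe to cache aggressively.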

Slide22

Quiz Question

Which of the following is not an important mechanism for developing distributed file systems?
- Caching data at clients, either entire files or portions of files.
- Encrypting data transmissions, either with private or public keys.
- Read-only data replication for files that change often.
- Transferring data in bulk to reduce communication overheads.

Slide23

DFS Case Studies

Two case studies:
- Andrew (AFS)
- Coda
Both were:
- Developed at Carnegie Mellon University (CMU)
- Unix-based DFSs
- Focused on scalability, security, and availability

Slide24

Andrew File System (AFS)

Vice is a collection of trusted file servers.
Venus is a service that runs on each workstation to mediate shared file access.
(Diagram: several Venus workstations communicating with the Vice servers.)

Slide25

AFS-1

Used from 1984 through 1985.
- Each server contained a local file system mirroring the structure of the shared file system.
- If a file was not on the server, a search would end in a stub directory that identified the server containing the file.
- Clients cached pathname prefix information to direct file requests to the appropriate servers.
- Venus used a pessimistic approach to maintaining cache coherence: all cached file copies were considered suspect, and Venus would contact Vice to verify that the cache was the latest version before accessing the file.

Slide26

AFS-2

Used from 1985 through 1989.
- Venus now used an optimistic approach to maintaining cache coherence: all cached files were considered valid.
- Callbacks were used: when files are cached on a workstation, the server promises to notify the workstation if the file is to be modified by another machine.
- A remote procedure call (RPC) mechanism was used to optimize bulk file transfers.
- Mount points and volumes were used instead of stub directories to easily move files around among the servers. Each user was normally assigned a volume and a disk quota.
- Read-only replication of volumes increased availability.

Slide27
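The callback promise above can be sketched with two toy classes. These are not the real Vice/Venus interfaces, just an illustration of the optimistic-coherence idea: cached copies stay valid until the server breaks its promise.

```python
class Server:
    def __init__(self):
        self.files = {}
        self.callbacks = {}   # filename -> set of caching workstations

    def fetch(self, client, name):
        # Hand out the file and record a callback promise for it.
        self.callbacks.setdefault(name, set()).add(client)
        return self.files.get(name)

    def store(self, writer, name, data):
        self.files[name] = data
        # Break the callback promise for every other caching client.
        for client in self.callbacks.pop(name, set()):
            if client is not writer:
                client.invalidate(name)

class Workstation:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def open(self, name):
        # Optimistic coherence: a cached copy is valid unless a
        # callback has invalidated it, so no per-open server check.
        if name not in self.cache:
            self.cache[name] = self.server.fetch(self, name)
        return self.cache[name]

    def invalidate(self, name):
        self.cache.pop(name, None)
```

Contrast with AFS-1's pessimistic scheme, where every open contacted Vice to validate the cached copy.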

Quiz Question

What is a callback?
- A notification from a client that a local cache has been modified.
- A notification from a server that a file or directory is to be modified.
- Both of the above.
- None of the above.

Slide28

AFS-3

Used from 1989 through the early 1990s.
- Supports multiple administrative cells, each with its own servers, workstations, system administrators, and users. Each cell is completely autonomous.
- Venus now cached files in large chunks instead of in their entirety.

Slide29

Security in Andrew

- Protection domains: each is composed of users and groups, and each group is associated with a unique owner (user). A protection server is used to immediately reflect changes in domains.
- Authentication: upon login, a user's password is used to obtain tokens from an authentication server. Venus uses these tokens to establish secure RPC connections.
- File system protection: access lists are used to determine access to directories instead of files, including negative rights.
- Resource usage: Andrew's protection and authentication mechanisms protect against denials of service and resources.

Slide30

Coda

- A descendant of AFS-2 that is substantially more resilient to server and network failures.
- By relying entirely on local resources (caches) when the servers are inaccessible, Coda allows a user to continue working regardless of failures elsewhere in the system.

Slide31

Coda Overview

- Clients cache entire files on their local disks.
- Cache coherence is maintained by the use of callbacks.
- Clients dynamically find files on servers and cache location information.
- Token-based authentication and end-to-end encryption are used for security.
- Failure resiliency is provided through two mechanisms:
  - Server replication: storing copies of files on multiple servers.
  - Disconnected operation: a mode of optimistic execution in which the client relies solely on cached data.

Slide32

Server Replication

- Replicated Volume: consists of several physical volumes, or replicas, that are managed as one logical volume by the system.
- Volume Storage Group (VSG): the set of servers maintaining a replicated volume.
- Accessible VSG (AVSG): the set of those servers currently accessible. Venus performs periodic probes to detect AVSGs, and one member is designated as the preferred server.

Slide33

Quiz Question

What is a VSG?
- Venus Service Group
- Vice Server Group
- Volume Storage Group
- None of the above

Slide34

Server Replication

Venus employs a Read-One, Write-All strategy.
For a read request:
- If a local cache copy exists, Venus reads the cache instead of contacting the VSG.
- If no local cache copy exists, Venus contacts the preferred server for its copy and also contacts the other AVSG members for their version numbers.
- If the preferred copy is stale, a new, up-to-date preferred server is selected from the AVSG and the fetch is repeated.

Slide35
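The read path above can be sketched in Python. The helpers `fetch` and `version` are hypothetical stand-ins for the per-server RPCs, not the real Venus interface.

```python
def coda_read(name, cache, avsg, preferred, fetch, version):
    # Read-One: a cached copy satisfies the read without touching the VSG.
    if name in cache:
        return cache[name]
    # Otherwise ask the AVSG for version numbers; if the preferred
    # server's copy is stale, switch to an up-to-date member.
    best = max(avsg, key=lambda s: version(s, name))
    if version(preferred, name) < version(best, name):
        preferred = best
    data = fetch(preferred, name)   # fetch (or re-fetch) the file
    cache[name] = data
    return data
```

A second read of the same file is then served entirely from the local cache.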

Server Replication

Venus employs a Read-One, Write-All strategy.
For a write:
- When a file is closed, it is transferred to all members of the AVSG.
- If the server's copy does not conflict with the client's copy, an update operation handles transferring file contents, making directory entries, and changing access lists.
- A data structure called the update set, which summarizes the client's knowledge of which servers did not have conflicts, is distributed to the servers.

Slide36
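The Write-All step with its update set can be sketched as follows; `store` and `distribute` are hypothetical helpers for the per-server update RPC and the final distribution of the update set.

```python
def coda_write(name, contents, avsg, store, distribute):
    # Write-All: on close, transfer the file to every AVSG member.
    # Servers whose copies conflict are left out of the update set.
    update_set = {s for s in avsg if store(s, name, contents)}
    # Distribute the client's knowledge of conflict-free replicas,
    # so the servers learn which copies are current.
    distribute(update_set)
    return update_set
```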

Disconnected Operation

- Begins at a client when no member of the VSG is accessible; clients are then allowed to rely solely on local caches.
- If a cached copy does not exist, the system call that triggered the file access is aborted.
- Disconnected operation ends when Venus establishes a connection with the VSG; Venus then executes a series of update processes to reintegrate the client with the VSG.

Slide37
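The disconnected-access rule above amounts to a small piece of logic; modeling the aborted system call as an OSError is an assumption of this sketch.

```python
def disconnected_open(name, cache, avsg):
    # Disconnected operation applies only when no VSG member is reachable.
    assert not avsg, "AVSG non-empty: use the normal connected path"
    if name in cache:
        return cache[name]   # rely solely on the local cache
    # Cache miss while disconnected: abort the triggering system call.
    raise OSError(f"{name}: not cached; access aborted while disconnected")
```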

Disconnected Operation

Reintegration updates can fail for two reasons:
- There may be no authentication tokens that Venus can use to communicate securely with AVSG members, due to token expiration.
- Conflicts may be detected.
If reintegration fails, a temporary repository is created on the servers to store the data in question until a user can resolve the problem later. These temporary repositories are called covolumes, and mitigate is the operation that transfers a file or directory from a workstation to a covolume.

Slide38

Conflict Resolution

- When a conflict is detected, Coda first attempts to resolve it automatically. Example: partitioned creation of uniquely named files in the same directory can be handled automatically by selectively replaying the missing file creates.
- If automated resolution is not possible, Coda marks all accessible replicas inconsistent and moves them to their covolumes.
- Coda provides a repair tool to assist users in manually resolving conflicts.

Slide39

Quiz Question

Which of the following is not true about conflict resolution in the Coda DFS?
- Coda attempts to resolve conflicts by recreating any missing files in a directory.
- Coda inspects workstations for the most up-to-date cache of the conflicted file.
- For file-level conflicts, Coda marks all replicas as inconsistent and moves them to a covolume.
- Users manually resolve file-level conflicts using a provided repair tool.