Parallel and Distributed Algorithms

Parallel and Distributed Algorithms - PowerPoint Presentation, uploaded by kittie-lecroy on 2016-06-20.

Presentation Transcript

Slide 1

Parallel and Distributed Algorithms

Slide 2

Overview

Parallel Algorithm

vs

Distributed Algorithm

PRAM

Maximal Independent Set

Sorting using PRAM

Choice coordination problem

Real world applications

Slide 3

Introduction

Slide 4

Need for distributed processing

Massively parallel processing machines

CPUs with 1000s of processors

Moore's law coming to an end

Slide 5

Parallel Algorithm

A parallel algorithm is an algorithm which can be executed a piece at a time on many different processing devices, and then combined together again at the end to get the correct result.*

* Blelloch, Guy E.; Maggs, Bruce M. Parallel Algorithms. USA: School of Computer Science, Carnegie Mellon University.

Slide 6

Distributed Algorithm

A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors.*

* Lynch, Nancy (1996). Distributed Algorithms. San Francisco, CA: Morgan Kaufmann Publishers. ISBN 978-1-55860-348-6.

Slide 7

PRAM

Slide 8

Random Access Machine (RAM)

An abstract machine with an unbounded number of local memory cells and a simple instruction set

Time complexity: number of instructions executed

Space complexity: number of memory cells used

All operations take unit time

Slide 9

PRAM (Parallel Random Access Machine)

PRAM is a parallel version of the RAM, for designing algorithms applicable to parallel computers

Why PRAM?

The number of operations executed per cycle on P processors is at most P

Any processor can read/write any shared memory cell in unit time

It abstracts away communication and synchronization overhead, which makes the complexity of a PRAM algorithm easier to analyze

It serves as a benchmark

Slide 10

Example: shared memory A, processors P1, P2, …, Pn. Each Pi reads A[i−1], computes A[i] = A[i−1] + 1, and writes A[i]:

P1: A[1] = A[0] + 1
P2: A[2] = A[1] + 1
…
Pn: A[n] = A[n−1] + 1

Slide 11

Shared Memory Access Conflicts

Exclusive Read (ER): all processors can simultaneously read from distinct memory locations

Exclusive Write (EW): all processors can simultaneously write to distinct memory locations

Concurrent Read (CR): all processors can simultaneously read from any memory location

Concurrent Write (CW): all processors can simultaneously write to any memory location

Common models: EREW, CREW, CRCW

Slide 12

Complexity

Parallel time complexity: the number of synchronous steps in the algorithm

Space complexity: the number of shared memory cells

Parallelism: the number of processors used

Slide 13

Maximal Independent Set

Lahiru Samarakoon
Sumanaruban Rajadurai

Slide 14

Independent Set (IS): any set of nodes that are pairwise non-adjacent

Slide 15

Maximal Independent Set (MIS): an independent set that is not a subset of any other independent set

Slide 16

Maximal vs. Maximum IS

A maximum independent set is one of largest possible size; a maximal independent set is one that cannot be extended.

Slide 17

A Sequential Greedy Algorithm

Suppose the set S will hold the final MIS; initially S = ∅.

Phase 1: pick a node and add it to S; remove the node and its neighbors.

Phase 2: pick another remaining node and add it to S; again remove the node and its neighbors.

Phases 3, 4, 5, …, x: repeat until all nodes are removed and no nodes remain.

At the end, the set S will be an MIS of G.

Running time of the algorithm: up to n phases in the worst case — e.g., a graph on n nodes with no edges, where each phase removes only a single node.

Slide 28
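The sequential greedy procedure above can be sketched in a few lines of Python (the adjacency-dict representation and the function name are illustrative, not from the slides):

```python
def greedy_mis(adj):
    """Sequential greedy MIS. adj: dict mapping node -> set of neighbors."""
    remaining = set(adj)      # nodes not yet removed
    mis = set()               # the set S that will hold the final MIS
    while remaining:
        v = min(remaining)    # pick any node (smallest id, for reproducibility)
        mis.add(v)
        remaining.discard(v)  # remove the node ...
        remaining -= adj[v]   # ... and its neighbors
    return mis

# Example: the path 0-1-2-3
print(greedy_mis({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))  # -> {0, 2}
```

On the path 0–1–2–3, the first phase picks 0 and removes its neighbor 1; the second phase picks 2 and removes 3.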

Intuition for Parallelization

At each phase we may select any independent set S (instead of a single node), then remove S and the neighbors of S from the graph.

Slide 29

Example: suppose S will hold the final MIS; initially S = ∅.

Phase 1: find any independent set, insert it into S, and remove it and its neighbors.

Phase 2: on the new graph, find any independent set, insert it into S, and remove it and its neighbors.

Phase 3: repeat on each new graph until no nodes are left; S is the final MIS.

Observation: the number of phases depends on the choice of independent set in each phase — the larger the independent set at each phase, the faster the algorithm.

Slide 40

Randomized Maximal Independent Set (MIS)

Let d(v) be the degree of node v.

At each phase, each node v elects itself with probability 1/(2·d(v)), where d(v) is the degree of v in the current graph. Elected nodes are candidates for the independent set.

Slide 42

If two neighbors are elected simultaneously, then the higher-degree node wins (e.g., if d(u) > d(v), u stays elected).

If both have the same degree, ties are broken arbitrarily.

Slide 44

Problematic nodes are pairs of adjacent elected nodes; using the previous rules, problematic nodes are removed, and the remaining elected nodes form an independent set.

Luby's algorithm, summarized:
mark lower-degree vertices with higher probability;
if both endpoints of an edge are marked, unmark the one with the lower degree;
add all marked vertices to the MIS;
remove marked vertices with their neighbors and corresponding edges.

Slide 51
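One possible Python sketch of Luby's algorithm as summarized above — marking with probability 1/(2·d(v)), the degree tie-break, and the removal step. The data layout, seed, and helper names are assumptions for illustration:

```python
import random

def luby_mis(adj, rng=random.Random(0)):
    """Sketch of Luby's randomized MIS. adj: dict node -> set of neighbors."""
    adj = {v: set(ns) for v, ns in adj.items()}   # local copy we can shrink
    mis = set()
    while adj:
        # Mark each vertex with probability 1/(2 d(v)); isolated vertices always join.
        marked = {v for v in adj
                  if len(adj[v]) == 0 or rng.random() < 1.0 / (2 * len(adj[v]))}
        # If both endpoints of an edge are marked, unmark the lower-degree one
        # (ties broken by node id).
        for v in list(marked):
            for w in adj[v]:
                if w in marked and (len(adj[v]), v) < (len(adj[w]), w):
                    marked.discard(v)
                    break
        mis |= marked
        # Remove marked vertices and their neighbors, with all incident edges.
        removed = set(marked)
        for v in marked:
            removed |= adj[v]
        adj = {v: ns - removed for v, ns in adj.items() if v not in removed}
    return mis
```

Each while-iteration corresponds to one parallel phase; the analysis that follows shows the expected number of phases is O(log n).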

Analysis

Slide 52

Goodness property

A vertex v is good if at least ⅓ of its neighbors have lower degree than it, and bad otherwise.

An edge is bad if both its endpoints are bad, and good otherwise.

Slide 53

Lemma 1

Let v ∈ V be a good vertex with degree d(v) > 0. Then the probability that some vertex w in N(v) gets marked is at least 1 − exp(−1/6).

Define L(v) as the set of neighbors of v whose degree is lower than v's degree. By definition, |L(v)| ≥ d(v)/3 if v is a good vertex.

Slide 55

Lemma 2

During any iteration, if a vertex w is marked, then it is selected to be in S with probability at least 1/2.

Slide 58

From Lemmas 1 and 2, the probability that a good vertex belongs to S ∪ N(S) is at least (1 − exp(−1/6))/2.

Good vertices get eliminated with a constant probability. It follows that the expected number of edges eliminated during an iteration is a constant fraction of the current set of edges. This implies that the expected number of iterations of the parallel MIS algorithm is O(log n).

Slide 59

Lemma 3

In a graph G(V, E), the number of good edges is at least |E|/2.

Proof sketch: direct the edges in E from the lower-degree endpoint to the higher-degree endpoint, breaking ties arbitrarily. For all S, T ⊆ V, define E(S, T) as the subset of the (oriented) edges directed from vertices in S to vertices in T. Let VG and VB be the sets of good and bad vertices; counting the edges directed out of each bad vertex bounds the number of bad edges by |E|/2.

Slide 61
be the set of good and bad verticesSlide61

Sorting on PRAM

Jessica

Makucka

Puneet

DewanSlide62

Sorting

Current problem: sort n numbers

The best average case for sequential comparison sorting is O(n log n)

Can we do better with more processors? YES!

Slide 63

Notes about Quicksort

Sort n numbers on a PRAM with n processors

Assume all numbers are distinct

Use a CREW PRAM for this case

Each of the n processors contains an input element

Notation: let Pi denote the i-th processor

Slide 64

Quicksort Algorithm

0. If n = 1, stop.
1. Pick a splitter at random from the n elements.
2. Each processor determines whether its element is bigger or smaller than the splitter.
3. Let j denote the splitter's rank:
If j ∉ [n/4, 3n/4], the split failed; go back to (1).
If j ∈ [n/4, 3n/4], the split succeeded: move the splitter to Pj; every element smaller than the splitter is moved to a distinct processor Pi with i < j, and the larger elements are moved to distinct processors Pk with k > j.
4. Sort the elements recursively in processors P1 through Pj−1, and the elements in processors Pj+1 through Pn.

Slide 65
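A sequential Python simulation of this control flow can help make it concrete. On a PRAM all comparisons in a split happen in parallel; here the list comprehensions stand in for the n processors (names, seed, and the small-n escape are my assumptions):

```python
import random

def pram_quicksort(elems, rng=random.Random(1)):
    """Simulate the randomized PRAM quicksort: retry the splitter until its
    rank j lands in [n/4, 3n/4], then recurse on both sides."""
    n = len(elems)
    if n <= 1:
        return list(elems)
    while True:
        splitter = rng.choice(elems)
        smaller = [x for x in elems if x < splitter]  # each processor compares in parallel
        larger = [x for x in elems if x > splitter]
        j = len(smaller) + 1                          # the splitter's rank
        if n < 4 or n // 4 <= j <= 3 * n // 4:        # successful split (tiny n always passes)
            break
    return pram_quicksort(smaller, rng) + [splitter] + pram_quicksort(larger, rng)

print(pram_quicksort([12, 3, 7, 5, 11, 2, 1, 14]))  # -> [1, 2, 3, 5, 7, 11, 12, 14]
```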

Quicksort Time Analysis

Stage 1 — pick a successful splitter at random from the n elements (assumption): there are O(log n) stages along every sequence of recursive splits.

Stage 2 — each processor determines whether its element is bigger or smaller than the splitter: trivial; it can be implemented in a single CREW PRAM step.

Slide 66

Quicksort Time Analysis

Stage 3 — let j denote the splitter's rank. If j ∉ [n/4, 3n/4], go back to (1); if j ∈ [n/4, 3n/4], move the splitter to Pj, move every smaller element to a distinct processor Pi with i < j, and every larger element to a distinct processor Pk with k > j. O(log n) PRAM steps are needed for a single splitting stage.

Slide 67

Comparison Splitting Stage (3)

Processors P1–P8 hold the elements 12, 3, 7, 5, 11, 2, 1, 14, one of which is the splitter.

Assign a bit depending on whether Pi's element is smaller or bigger than the splitter:
0 if the element is bigger,
1 otherwise.

Slide 68

Comparison Splitting Stage (3), continued

Step 1: each of P1–P5 (holding 12, 3, 7, 5, 11) records its comparison bit.

Step 2: the bits are combined by repeated pairwise additions (prefix sums), so every processor learns how many smaller elements precede it and hence its destination.

Slide 69
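The Step 2 additions are the classic pointer-doubling prefix-sum pattern, which takes O(log n) synchronous PRAM steps. A sequential sketch (the slides do not say which element is the splitter, so the bit vector below assumes 11 is the splitter among 12, 3, 7, 5, 11, 2, 1, 14):

```python
def prefix_sums(bits):
    """Inclusive prefix sums by pointer doubling: each while-iteration is one
    synchronous PRAM step, so there are O(log n) of them."""
    n = len(bits)
    sums = list(bits)
    step = 1
    while step < n:
        nxt = sums[:]                      # all processors update simultaneously
        for i in range(step, n):
            nxt[i] = sums[i] + sums[i - step]
        sums = nxt
        step *= 2
    return sums

bits = [0, 1, 1, 1, 0, 1, 1, 0]            # 1 = element smaller than the splitter
print(prefix_sums(bits))                    # -> [0, 1, 2, 3, 3, 4, 5, 5]
```

The i-th prefix sum tells processor Pi which destination processor should receive its (smaller) element.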

Overall Time Analysis

This algorithm terminates in O(log² n) steps: there are O(log n) recursive splitting stages, and each splitting stage costs O(log n). This follows from solving the corresponding recurrence.

Slide 70

Cons

The algorithm assumes that a split is always successful, breaking the problem from size N to a constant fraction of N, but there is no guaranteed method for a successful split.

Slide 71

Improvement

Idea: reduce the problem to subproblems of size n^(1−e), where e < 1, while keeping the time to split the same.

Slide 72

Benefits

If e = 1/2, the total time for the entire problem is log n + log n^(1/2) + log n^(1/4) + … = log n · (1 + 1/2 + 1/4 + …), so we could hope for an overall running time of O(log n).

Slide 73

Long Story

Suppose that we have n processors and n elements. Suppose that processors P1 through Pr contain r of the elements in sorted order, and that processors Pr+1 through Pn contain the remaining n − r elements.

1. Choose random splitters and sort them. Call the sorted elements in the first r processors the splitters; for 1 ≤ j ≤ r, let sj denote the j-th largest splitter.

2. Insert: insert the n − r unsorted elements among the splitters.

3. Sort the remaining elements among the splitters:
a. Each processor should end up with a distinct input element.
b. Let i(sj) denote the index of the processor containing sj following the insertion operation. Then, for all k < i(sj), processor Pk contains an element smaller than sj; similarly, for all k > i(sj), processor Pk contains an element larger than sj.

Slide 74

Example

Choose random splitters from the elements 5, 9, 8, 10, 7, 6, 12, 11.

Slide 75

Example (contd.)

Sort the random splitters. Sorted list (splitters): 6, 11. Unsorted list: 5, 9, 8, 7, 10, 12.

Slide 76

Example (contd.)

Insert the unsorted elements among the splitters: 5 | 6 | 7, 9, 8, 10 | 11 | 12.

Slide 77

Example (contd.)

Check whether the number of elements between consecutive splitters is at most log n. With S denoting a group's size: S = 4 for {7, 9, 8, 10} (exceeds log n, i.e. 3), S = 1 for {5}, and S = 1 for {12}.

Slide 78

Example (contd.)

Recur on the subproblem whose size exceeds log n: again choose random splitters from {7, 9, 8, 10} and follow the same process.

Slide 79

Partitioning as a Tree

A tree is formed from the first partition: 5 | 6 | 7, 9, 8, 10 | 11 | 12. The size on the right exceeds log n, so we again split it by choosing random splitters, e.g. 9 and 8; the resulting small boxes (7 | 8 | 9 | 10) are sorted because of the partition.

Slide 81

Lemmas to be Used

1. Consider a CREW PRAM having n² processors, and suppose that each of the processors P1 through Pn has an input element to be sorted. Then the PRAM can sort these n elements in O(log n) steps.

2. For n processors and n elements, of which n^(1/2) are splitters, the insertion process can be completed in O(log n) steps.

Slide 82

BoxSort Algorithm

Input: a set of numbers S.
Output: the elements of S sorted in increasing order.

1. Select n^(1/2) elements (e is 1/2) at random from the n input elements. Using all n processors, sort them in O(log n) steps (Lemma 1).
2. Using the sorted elements from Stage 1 as splitters, insert the remaining elements among them in O(log n) steps (Lemma 2).
3. Treating the elements inserted between adjacent splitters as subproblems, recur on each subproblem whose size exceeds log n. For subproblems of size log n or less, invoke LogSort.

Slide 83
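A rough sequential sketch of BoxSort's structure. The splitter count, the leaf threshold, and the use of Python's built-in sort as a stand-in for both the parallel splitter sort and LogSort are assumptions for illustration; the slides assume distinct elements, and so does this sketch:

```python
import math
import random
from bisect import bisect_right

def box_sort(s, rng=random.Random(2)):
    """BoxSort sketch: pick ~sqrt(n) random splitters, insert the rest among
    them, and recur on boxes bigger than log n. Assumes distinct elements."""
    n = len(s)
    if n <= 1 or n <= int(math.log2(n)):
        return sorted(s)                    # stand-in for LogSort on small boxes
    splitters = sorted(rng.sample(s, max(1, math.isqrt(n))))
    chosen = set(splitters)
    boxes = [[] for _ in range(len(splitters) + 1)]
    for x in s:
        if x not in chosen:                 # splitters are already in place
            boxes[bisect_right(splitters, x)].append(x)  # the O(log n) insertion
    out = []
    for i, box in enumerate(boxes):
        out += box_sort(box, rng)           # recur on each box
        if i < len(splitters):
            out.append(splitters[i])
    return out

print(box_sort([5, 9, 8, 10, 7, 6, 12, 11]))  # -> [5, 6, 7, 8, 9, 10, 11, 12]
```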

Sorting Fact

A CREW PRAM with m processors can sort m elements in O(m) steps.

Slide 84

Example

Each processor is assigned an element and compares it with the remaining elements simultaneously in O(m) steps; the rank it computes implies a sorted order.

Elements on P1–P8: 5, 9, 8, 7, 10, 3, 4, 2
Ranks assigned: 4, 7, 6, 5, 8, 2, 3, 1

Slide 85

Things to Remember

The last statement of the BoxSort algorithm, and the idea on the previous slide.

Slide 86

LogSort

With log n processors and log n elements, we can sort in O(log n) steps (the fact above with m = log n).

Slide 87

Analysis

Consider each node of the tree as a box. Choosing random splitters and sorting them takes O(log n) time, and inserting the unsorted elements among the splitters takes O(log n). With high probability (assumption) the subproblems resulting from the splitting operation are very small, so each leaf is a box of size at most log n. The time spent at the leaves is O(log n) using LogSort, so the total time is O(log n).

Slide 88

Distributed Randomized Algorithm

Yogesh S Rawat
R. Ramanathan

Slide 89

Choice Coordination Problem (CCP)

Slide 90

Biological Inspiration

Mites (genus Myrmonyssus) reside as parasites on the ear membrane of moths of the family Phalaenidae. Moths are prey to bats, and the only defense they have is that they can hear the sonar used by an approaching bat. If both ears of the moth are infected by mites, its ability to detect the sonar is considerably diminished, severely decreasing the survival chances of both the moth and its colony of mites.

The mites are therefore faced with a "choice coordination problem": how does any collection of mites infecting a particular ear ensure that every other mite chooses the same ear?

Slide 95

Problem Specification

A set of N processors and M options to choose from; the processors have to reach a consensus on a unique choice.

Slide 98

Model for Communication

A collection of M read-write registers accessible to all the processors

A locking mechanism to handle conflicts

Each processor follows a protocol for making a choice

A special symbol (√) is used to mark the choice

At the end, exactly one register contains the special symbol

Slide 99

Deterministic Solution

Complexity is measured in terms of the number of read and write operations. Any deterministic solution has complexity Ω(n^(1/3)) operations, where n is the number of processors.

For more details: M. O. Rabin, "The choice coordination problem," Acta Informatica, vol. 17, no. 2, pp. 121–134, Jun. 1982.

Slide 100

Randomized Solution

For any c > 0, a randomized protocol solves the problem using c operations with probability of success at least 1 − 2^(−Ω(c)).

For simplicity we consider only the case n = m = 2, although the protocol can be easily generalized.

Slide 101

Analogy from Real Life

Two people meet head-on; each takes a random action — give way or move ahead. If both give way, or both move ahead, Person 1 and Person 2 mirror each other and remain stuck; the first time their random actions differ, the symmetry is broken and they pass.

Slide 104

Synchronous CCP

The two processors are synchronous: they operate in lock-step according to some global clock.

Terminology (i ∈ {0, 1}):
Pi – processor i
Ci – shared register for the choices
Bi – local variable of processor Pi

Slide 105

Synchronous CCP

The processor Pi initially scans the register Ci. Thereafter, the processors exchange registers after every iteration, so at no time will the two processors scan the same register.

Slide 108

Algorithm

Input: registers C0 and C1, initialized to 0.
Output: exactly one of the two registers has the value √.

Step 0 – Pi is initially scanning the register Ci.
Step 1 – Read the current register and obtain a bit Ri.
Step 2 – Select one of three cases:
case 2.1 [Ri = √]: halt.
case 2.2 [Ri = 0, Bi = 1]: write √ into the current register and halt.
case 2.3 [otherwise]: assign an unbiased random bit to Bi and write Bi into the current register.
Step 3 – Pi exchanges its current register with P1−i and returns to Step 1.

Slide 109
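The synchronous protocol above is easy to simulate in Python, with a string standing in for the √ symbol (the seed and variable names are assumptions; note the two simulated processors always scan distinct registers, as the slides require):

```python
import random

def sync_ccp(rng=random.Random(3)):
    """Lock-step simulation of the two-processor synchronous protocol.
    Returns the final registers and the number of iterations."""
    CHECK = "\u221a"                       # the special symbol
    C = [0, 0]                             # shared registers C0, C1
    B = [0, 0]                             # local bits B0, B1
    cur = [0, 1]                           # P_i starts scanning C_i
    halted = [False, False]
    steps = 0
    while not all(halted):
        steps += 1
        R = [C[cur[0]], C[cur[1]]]         # both processors read synchronously
        for i in (0, 1):
            if halted[i]:
                continue
            if R[i] == CHECK:              # case 2.1: the other processor chose
                halted[i] = True
            elif R[i] == 0 and B[i] == 1:  # case 2.2: make the choice
                C[cur[i]] = CHECK
                halted[i] = True
            else:                          # case 2.3: write a fresh random bit
                B[i] = rng.choice((0, 1))
                C[cur[i]] = B[i]
        cur = [1 - cur[0], 1 - cur[1]]     # step 3: exchange registers
    return C, steps

C, steps = sync_ccp()
print(C, steps)                            # exactly one register holds the symbol
```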

The same algorithm, annotated:

Step 1 is the read operation. (Slide 109)
Case 2.1: the choice has already been made by the other processor. (Slide 110)
Case 2.2 is the only condition for making a choice. (Slide 111)
Case 2.3 generates a random value; writing Bi is the write operation. (Slide 112)
Step 3 exchanges the registers. (Slide 113)

Slide 114

Correctness of the Algorithm

We need to prove that only one of the shared registers has √ marked in it. Suppose that both are marked with √. This must have happened in the same iteration; otherwise, step 2.1 would have halted the algorithm.

Slide 115

Correctness of the Algorithm

Assume the error takes place during the t-th iteration. After Step 1, processor Pi holds the values Bi(t) and Ri(t). By case 2.3 (from the previous iteration's writes): R0(t) = B1(t) and R1(t) = B0(t).

Suppose Pi writes √ in the t-th iteration; then Ri = 0 and Bi = 1, so R1−i = 1 and B1−i = 0. Hence P1−i cannot write √ in the t-th iteration — the symmetry is broken.

Slide 116

Worked example (slides 116–124; the register-trace figure is summarized). The two processors alternate synchronous read and write operations; each round appends the freshly generated random bits to the histories of R0, B0, R1, B1 and of the shared registers C0, C1. As long as the random bits B0 and B1 agree, case 2.3 repeats. In the round where the bits first differ, the processor that reads R = 0 while holding B = 1 writes √ into its current register (case 2.2) and HALTs; the other processor reads √ in the next round (case 2.1) and HALTs as well, leaving exactly one register marked.

Slide 125

Complexity

The probability that the two random bits B0 and B1 are the same is 1/2, so the probability that the number of steps exceeds t is (1/2)^t. The algorithm terminates within the next two steps as soon as B0 and B1 differ. Since the computation cost of each iteration is bounded, the protocol does O(t) work with probability at least 1 − 2^(−t).

Slide 126

The Problem

P1 and P2 may approach the registers C1 and C2 in any order and at any relative speed — the processors are not synchronized, so the lock-step protocol above no longer applies. What can we do?

Slide 133

What can we do? Idea: timestamps.

Each shared register Ci now holds a pair <timestamp, value>, and each processor Pi keeps a local bit Bi and its own timestamp. Timestamp of processor: Ti; timestamp of register: ti. A read returns <ti, value>.

Slide 135

Algorithm

Input: registers C1 and C2 initialized to <0,0>.
Output: exactly one of the two registers has √.

Slide 136

Algorithm for a processor Pi:

0) Pi initially scans a randomly chosen register; <Ti, Bi> is initialized to <0,0>.
1) Pi gets a lock on its current register and reads <ti, Ri>.
2) Pi executes one of these cases:
2.1) If Ri = √: HALT.
2.2) If Ti < ti: Ti ← ti and Bi ← Ri.
2.3) If Ti > ti: write √ into the current register and HALT.
2.4) If Ti = ti, Ri = 0, Bi = 1: write √ into the current register and HALT.
2.5) Otherwise: Ti ← Ti + 1 and ti ← ti + 1; Bi ← random (unbiased) bit; write <ti, Bi> into the current register.
3) Pi releases the lock on its current register, moves to the other register, and returns to Step 1.

Slide 137
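The asynchronous protocol can also be simulated in Python. The random scheduler below plays the role of the adversary, and because each simulated step is atomic, the per-register locks are implicit (seed and names are assumptions):

```python
import random

def async_ccp(rng=random.Random(4)):
    """Simulate the timestamped asynchronous protocol for n = m = 2.
    Each register is a pair [timestamp t_i, value]; '\u221a' marks the choice."""
    CHECK = "\u221a"
    reg = [[0, 0], [0, 0]]                      # registers C_1, C_2
    T, B = [0, 0], [0, 0]                       # processor timestamps and bits
    cur = [rng.randrange(2), rng.randrange(2)]  # step 0: random starting register
    halted = [False, False]
    while not all(halted):
        i = rng.randrange(2)                    # scheduler picks who moves next
        if halted[i]:
            continue
        t, R = reg[cur[i]]                      # step 1: read <t_i, R_i> under lock
        if R == CHECK:                          # 2.1: choice already made
            halted[i] = True
        elif T[i] < t:                          # 2.2: catch up with the register
            T[i], B[i] = t, R
        elif T[i] > t:                          # 2.3: register is stale -> choose it
            reg[cur[i]][1] = CHECK
            halted[i] = True
        elif R == 0 and B[i] == 1:              # 2.4: equal timestamps, tie-break
            reg[cur[i]][1] = CHECK
            halted[i] = True
        else:                                   # 2.5: advance both timestamps
            T[i] += 1
            B[i] = rng.choice((0, 1))
            reg[cur[i]] = [T[i], B[i]]          # t_i <- t_i + 1, write <t_i, B_i>
        cur[i] = 1 - cur[i]                     # step 3: move to the other register
    return reg

print(async_ccp())
```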

Initial state

B

2

T

2

B

1

T

1

C

1

t

1

0

0

C

2

t

2

0

0

Processor P1

Register R2

Register R1

Processor P2Slide138

1) P

1

chooses C

1

and reads <0,0>

B

2

T

2

B

1

T

1

0

0

C

1

t

1

0

0

C

2

t

2

0

0

History : P

1

==C

1Slide139

1) P

1

chooses C

1

and reads <0,0>

B

2

T

2

B

1

T

1

0

0

C

1

t

1

0

0

C

2

t

2

0

0

[None of the cases from 2.1 to 2.4 are met.

Case 2.5 is satisfied]

History : P

1

==C

1Slide140

B

1

T

1

0

0

1

C

1

t

1

0

0

1

2.5)

T

1

¬

T

1

 + 1 and t

1

¬

t

1

 + 1

B

2

T

2

C

2

t

2

0

0

History : P

1

==C

1Slide141

2.5)

P

1

writes

<t

1

, B

1

> into

C

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

B

2

T

2

C

2

t

2

0

0

History : P

1

==C

1Slide142

3)

P

1

 releases the lock on

C

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

B

2

T

2

C

2

t

2

0

0

[P

1

moves to

C

2

and returns to step 1]

History : P

1

==C

1Slide143

B

2

T

2

0

0

C

2

t

2

0

0

1) P

2

chooses C

2

and reads <0,0>

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

History : P

1

==C

1

P

2

==C

2Slide144

B

2

T

2

0

0

C

2

t

2

0

0

1) P

2

chooses C

2

and reads <0,0>

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

[None of the cases from 2.1 to 2.4 are met.

Case 2.5 is satisfied]

History : P

1

==C

1

P

2

==C

2Slide145

B

2

T

2

0

0

1

C

2

t

2

0

0

1

2.5)

T

2

¬

T

2

 + 1 and t

2

¬

t

2

 + 1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

History : P

1

==C

1

P

2

==C

2Slide146

2.5)

P

2

writes

<t

2

, B

2

> into

C

2

B

2

T

2

0

0

1

1

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

History : P

1

==C

1

P

2

==C

2Slide147

B

2

T

2

0

0

1

1

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

3)

P

2

 releases the lock on

C

2

[P

2

moves to

C

1

and returns to step 1]

History : P

1

==C

1

P

2

==C

2Slide148

B

2

T

2

0

0

1

1

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

1)

P

2

 locks

C

1

and reads <1,1>

History : P

1

==C

1

P

2

==C

2

P

2

==C

1Slide149

B

2

T

2

0

0

1

1

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

1)

P

2

 locks

C

1

and reads <1,1>

[None of the cases from 2.1 to 2.4 are met.

Case 2.5 is satisfied]

History : P

1

==C

1

P

2

==C

2

P

2

==C

1Slide150

B

2

T

2

0

0

1

1

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

2

2.5)

T

2

¬

T

2

 + 1 and t

1

¬

t

1

 + 1

History : P

1

==C

1

P

2

==C

2

P

2

==C

1Slide151

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

2.5)

P

2

writes

<t

1

, B

2

> into

C

1

History : P

1

==C

1

P

2

==C

2

P

2

==C

1Slide152

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

3)

P

2

 releases the lock on

C

1

[P

2

moves to

C

2

and returns to step 1]

History : P

1

==C

1

P

2

==C

2

P

2

==C

1Slide153

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

1)

P

2

 locks

C

2

and reads <1,1>

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

2

==C

2Slide154

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

1)

P

2

 locks

C

2

and reads <1,1>

[Case 2.3:

T

2

 > t

2

is satisfied]

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

2

==C

2Slide155

B

2

T

2

0

0

1

1

0

2

C

2

T

2

0

0

1

1

Ö

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

2.3)

P

2

writes 

Ö

 

into

C

2

[

P

2

HALTS

]

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

2

==C

2Slide156

We’ll show another case of the algorithm

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

2

==C

2Slide157

Let’s go back 1 iteration

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

2

==C

2Slide158

B

2

T

2

0

0

1

1

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

Let’s go back 1 iterationSlide159

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

1)

P

1

 locks

C

2

and reads <1,1>

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

1

==C

2Slide160

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

B

1

T

1

0

0

1

1

C

1

t

1

0

0

1

1

0

2

1)

P

1

 locks

C

2

and reads <1,1>

[None of the cases from 2.1 to 2.4 are met.

Case 2.5 is satisfied]

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

1

==C

2Slide161

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

2

B

1

T

1

0

0

1

1

2

C

1

t

1

0

0

1

1

0

2

2.5)

T

1

¬

T

1

 + 1 and t

2

¬

t

2

 + 1

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

1

==C

2Slide162

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

1

2

B

1

T

1

0

0

1

1

1

2

C

1

t

1

0

0

1

1

0

2

2.5)

B

1

¬

Random (unbiased) bit

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

1

==C

2Slide163

B

2

T

2

0

0

1

1

0

2

C

2

t

2

0

0

1

1

1

2

B

1

T

1

0

0

1

1

1

2

C

1

t

1

0

0

1

1

0

2

3)

P

1

 releases the lock on

C

2

[P

1

moves to

C

1

and returns to step 1]

History : P

1

==C

1

P

2

==C

2

P

2

==C

1

P

1

==C

2Slide164

B2,T2: <0,0> <1,1> <0,2>
C2,t2: <0,0> <1,1> <1,2>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
1) P2 locks C2 and reads <1,2>
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2Slide165

B2,T2: <0,0> <1,1> <0,2>
C2,t2: <0,0> <1,1> <1,2>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
1) P2 locks C2 and reads <1,2> [None of the cases from 2.1 to 2.4 are met. Case 2.5 is satisfied]
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2Slide166

B2,T2: <0,0> <1,1> <0,2> <?,3>  (bit not yet chosen)
C2,t2: <0,0> <1,1> <1,2> <?,3>  (bit not yet chosen)
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
2.5) T2 ← T2 + 1 and t2 ← t2 + 1
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2Slide167

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
2.5) P2 writes <t2, B2> into C2
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2Slide168

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
3) P2 releases the lock on C2 [P2 moves to C1 and returns to step 1]
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2Slide169

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
1) P1 locks C1 and reads <0,2>
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1Slide170

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2>
1) P1 locks C1 and reads <0,2> [Case 2.4: T1 = t1, R1 = 0, B1 = 1 is satisfied]
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1Slide171

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2> √
2.4) P1 writes √ into C1
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1Slide172

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2> √
2.4) P1 HALTS
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1Slide173

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2> √
1) P2 locks C1 and reads √
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1  P2==C1Slide174

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2> √
1) P2 locks C1 and reads √ [Case 2.1: R1 = √ is satisfied]
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1  P2==C1Slide175

B2,T2: <0,0> <1,1> <0,2> <1,3>
C2,t2: <0,0> <1,1> <1,2> <1,3>
B1,T1: <0,0> <1,1> <1,2>
C1,t1: <0,0> <1,1> <0,2> √
2.1) P2 HALTS
History: P1==C1  P2==C2  P2==C1  P1==C2  P2==C2  P1==C1  P2==C1Slide176
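The trace above ends with both processors halting at C1, with exactly one register holding √. As a sanity check of the full case analysis (2.1-2.5), here is a minimal Python sketch of the two-processor asynchronous algorithm run under a random scheduler. The names (`ccp_async`, `CHECK`, etc.) are ours, and the scheduler model (each locked iteration is atomic, interleaving chosen at random) is an assumption, not something fixed by the slides.

```python
import random

CHECK = "√"  # the special halt symbol written into a register

def ccp_async(rng):
    """One run of the two-processor asynchronous choice-coordination
    algorithm (cases 2.1-2.5 as captioned on the slides). Returns the
    index of the register that ends up holding √."""
    C = [(0, 0), (0, 0)]      # registers C1, C2 as (timestamp t, bit)
    T = [0, 0]                # processor timestamps T1, T2
    B = [0, 0]                # processor bits B1, B2
    cur = [0, 1]              # P1 starts at C1, P2 starts at C2
    choice = [None, None]     # register each processor finally picks

    while None in choice:
        # random scheduler: pick a still-running processor
        i = rng.choice([p for p in (0, 1) if choice[p] is None])
        reg = cur[i]
        r = C[reg]            # step 1: lock and read current register
        if r == CHECK:        # 2.1: register already marked -> halt
            choice[i] = reg
        else:
            t, R = r
            if T[i] < t:      # 2.2: adopt register's timestamp and bit
                T[i], B[i] = t, R
            elif T[i] > t or (R == 0 and B[i] == 1):
                C[reg] = CHECK        # 2.3 / 2.4: write √ and halt
                choice[i] = reg
            else:             # 2.5: bump both timestamps, fresh random bit
                T[i] += 1
                B[i] = rng.randint(0, 1)
                C[reg] = (t + 1, B[i])
        if choice[i] is None:
            cur[i] = 1 - reg  # step 3: release lock, go to other register

    assert choice[0] == choice[1]     # both agree on one register
    assert C[choice[0]] == CHECK      # that register holds √
    assert C[1 - choice[0]] != CHECK  # the other one does not
    return choice[0]

rng = random.Random(0)
for _ in range(10_000):
    ccp_async(rng)
print("all runs agreed on a unique register")
```

Every simulated run satisfies the property argued on the correctness slides that follow: the two processors settle on a single register, and only that register ever receives √.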

Correctness
[Diagram: registers C1 and C2, processors P1 and P2, halt symbol √]Slide177

Correctness
When a processor writes √ into a register, the other processor should NOT write √ into the other registerSlide178

Correctness
Case 2.3) Ti > ti: write √ into the current register and halt.
Case 2.4) Ti = ti, Ri = 0, Bi = 1: write √ into the current register and halt.
[Diagram: C1, C2, √]Slide179

Correctness
Ti*: current timestamp of processor Pi
ti*: current timestamp of register Ci
Whenever Pi finishes an iteration in Ci, Ti = tiSlide180

Correctness
Ti*: current timestamp of processor Pi
ti*: current timestamp of register Ci
When a processor enters a register, it has just left the other registerSlide181

2.3) Ti > ti: write √ into the current register and HALT
Consider: P1 has just entered C1 with t1* < T1*Slide182

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide183

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
In the previous iteration, P1 must have left C2 with the same T1*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide184

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
⇒ T1* = t2*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide185

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
⇒ T1* = t2*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide186

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
P2 must go to C2 only after C1
⇒ T1* = t2*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide187

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
⇒ T1* = t2* and T2* ≤ t1*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide188

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
⇒ T1* = t2* and T2* ≤ t1*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide189

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
⇒ T1* = t2* and T2* ≤ t1*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide190

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
Summing up: T2* ≤ t1* < T1* ≤ t2*
History: P2==C2  P1==C1  P1==C2  P1==C1Slide191

2.3) Ti > ti: write √ into the current register and HALT
C2,t2: <0,0> <1,1> <0,2>  (t2*)
B2,T2: <0,0> <1,1>  (T2*)
B1,T1: <0,0> <1,1> <0,2>  (T1*)
C1,t1: <0,0> <1,1>  (t1*)
Consider: P1 has just entered C1 with t1* < T1*
Summing up: T2* ≤ t1* < T1* ≤ t2*
T2* < t2* ⇒ P2 cannot write √ into C2
History: P2==C2  P1==C1  P1==C2  P1==C1Slide192

2.4) Ti = ti, Ri = 0, Bi = 1: write √ into the register and HALT
Similarly, consider: P1 has entered C1 with t1* = T1*Slide193

2.4) Ti = ti, Ri = 0, Bi = 1: write √ into the register and HALT
C2,t2: <0,0> <1,1>  (t2*)
B2,T2: <0,0> <0,1>  (T2*)
B1,T1: <0,0> <1,1>  (T1*)
C1,t1: <0,0> <0,1>  (t1*)
Similarly, consider: P1 has entered C1 with t1* = T1*
⇒ T1* = t2* and T2* ≤ t1*
Summing up: T2* ≤ t1* = T1* ≤ t2*
History: P1==C2  P2==C1  P1==C1Slide194

2.4) Ti = ti, Ri = 0, Bi = 1: write √ into the register and HALT
C2,t2: <0,0> <1,1>  (t2*)
B2,T2: <0,0> <0,1>  (T2*)
B1,T1: <0,0> <1,1>  (T1*)
C1,t1: <0,0> <0,1>  (t1*)
Similarly, consider: P1 has entered C1 with t1* = T1*
Summing up: T2* ≤ t1* = T1* ≤ t2*
T2* ≤ t2* with R2 = 1, B2 = 0 ⇒ P2 cannot write √ into C2
History: P1==C2  P2==C1  P1==C1Slide195

Complexity
Cost is proportional to the largest timestamp
The timestamp can go up only in case 2.5
A processor's current Bi value is set during a visit to the other register
So, the synchronous case complexity appliesSlide196

Real world applications
Pham Nam KhanhSlide197

Applications of parallel sorting
Sorting is a fundamental algorithm in data processing:
» Parallel database operations: Rank, Join, etc.
» Search (rapid index/lookup after sorting)
Best record in sorting: 102.5 TB in 4,328 seconds using 2,100 nodes, from Yahoo.Slide198

Applications of MIS
Wireless and communication
Scheduling problems
Perfect matching => assignment problem
FinanceSlide199

Applications of Maximal independent set
Market graph: EAFE, EM
Low latency requirement => parallel MISSlide200

Applications of Maximal independent set
Market graph: stocks, commodities, bondsSlide201

Applications of Maximal independent set
Market graph [figure]Slide202

Applications of Maximal independent set
Market graph: an MIS forms a completely diversified portfolio, in which all instruments are negatively correlated with each other => lower riskSlide203
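As a toy illustration of the market-graph idea (made-up instrument names and data; a sequential greedy MIS stands in here for the parallel algorithm from the deck): connect two instruments by an edge when their returns are positively correlated, then take a maximal independent set of that graph as the diversified portfolio.

```python
import itertools
import random

def correlation_edges(series, threshold=0.0):
    """Edges of the market graph: pairs whose sample (Pearson)
    correlation exceeds the threshold, i.e. instruments that move
    together and should not share a portfolio."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)
    return {(u, v) for u, v in itertools.combinations(series, 2)
            if corr(series[u], series[v]) > threshold}

def greedy_mis(nodes, edges):
    """Sequential greedy maximal independent set (the parallel MIS
    algorithm from the deck computes the same kind of object)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    mis, blocked = set(), set()
    for u in nodes:
        if u not in blocked:
            mis.add(u)
            blocked |= adj[u] | {u}
    return mis

# toy return series for five hypothetical instruments
rng = random.Random(42)
base = [rng.gauss(0, 1) for _ in range(50)]
series = {
    "A": base,
    "B": [x + rng.gauss(0, 0.1) for x in base],  # tracks A
    "C": [-x for x in base],                     # anti-correlated with A
    "D": [rng.gauss(0, 1) for _ in range(50)],   # independent noise
    "E": [x + rng.gauss(0, 0.1) for x in base],  # also tracks A
}
portfolio = greedy_mis(sorted(series), correlation_edges(series))
# no two chosen instruments are positively correlated
print(sorted(portfolio))
```

Note this is only a sketch: real market graphs are built from many more instruments, which is exactly why the slides call for the low-latency parallel MIS rather than this sequential greedy pass.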

Applications of Choice coordination algorithm
Given n processes, each of which can choose among m options, they need to agree on a unique choice => CCP belongs to the class of distributed consensus algorithms.
HW and SW tasks involving concurrency
Clock synchronization in wireless sensor networks
Multivehicle cooperative controlSlide204

Multivehicle cooperative control
Coordinate the movement of multiple vehicles in a certain way to accomplish an objective.
Task assignment, cooperative transport, cooperative role assignment, air traffic control, cooperative timing.Slide205

ConclusionSlide206

Conclusion
PRAM model: CREW
Parallel Maximal Independent Set algorithm in O(log n), and its applications
Parallel sorting algorithms: QuickSort in O(log² n), BoxSort in O(log n)
Choice Coordination Problem: distributed algorithms for the synchronous and asynchronous cases, and applications