Designing Efficient Map-Reduce Algorithms
Uploaded by trish-goza on 2017-08-28
Presentation Transcript

Slide 1

Designing Efficient Map-Reduce Algorithms

A Common Mistake
Size/Communication Trade-Off
Specific Trade-Offs

Jeffrey D. Ullman

Stanford University

Slide 2

Research is joint work of Foto Afrati (NTUA), Anish Das Sarma (Google), and Semih Salihoglu (Stanford).

Slide 3

Motivating Example

The Drug-Interaction Problem
A Failed Attempt
Lowering the Communication

Slide 4

The Drug-Interaction Problem
Data consists of records for 3000 drugs: a list of the patients taking each drug, dates, and diagnoses. There is about 1 MB of data per drug.
The problem is to find drug interactions, e.g., two drugs that, when taken together, increase the risk of heart attack.
We must examine each pair of drugs and compare their data.

Slide 5

Initial Map-Reduce Algorithm
The first attempt used the following plan:
Key = the set of two drugs {i, j}.
Value = the record for one of these drugs.
Given drug i and its record R_i, the mapper generates all key-value pairs ({i, j}, R_i), where j is any other drug besides i.
Each reducer receives its key and a list of the two records for that pair: ({i, j}, [R_i, R_j]).

Slide 6
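As a concrete sketch, this first attempt can be simulated in memory (a toy illustration, not the original code; the simulator, record strings, and function names are invented here):

```python
# Toy simulation of the first attempt: the mapper for drug i emits
# ({i, j}, R_i) for every other drug j; each reducer then holds the two
# records for its pair. Short strings stand in for the ~1 MB records.
from collections import defaultdict

def naive_map(i, record, drug_ids):
    for j in drug_ids:
        if j != i:
            yield frozenset({i, j}), record

drugs = {1: "R1", 2: "R2", 3: "R3"}

# Shuffle: group values by key, counting communicated record copies.
by_key = defaultdict(list)
sent = 0
for i, rec in drugs.items():
    for key, value in naive_map(i, rec, drugs):
        by_key[key].append(value)
        sent += 1

# Reduce: each reducer compares the two records for its pair (stubbed).
pairs = sorted(tuple(sorted(k)) for k in by_key)
# With 3 drugs, 3 reducers each hold 2 records, yet 6 record copies were
# communicated: each input is replicated twice.
```

Note that every record is sent once per other drug, which is exactly the replication that the next slides show to be ruinous at scale.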

Example: Three Drugs
[Figure, shown over three animation frames: the mappers for drugs 1, 2, and 3 each emit two key-value pairs, e.g., the mapper for drug 1 sends its data under keys {1, 2} and {1, 3}; after the shuffle, the reducers for {1, 2}, {2, 3}, and {1, 3} each hold the two records for their pair.]

Slide 9

What Went Wrong?
3000 drugs
× 2999 key-value pairs per drug
× 1,000,000 bytes per key-value pair
= 9 terabytes communicated over a 1 Gb Ethernet
= 90,000 seconds of network use.

Slide 10
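The arithmetic on this slide can be checked directly (the 10^8 usable bytes per second for 1 Gb Ethernet is an assumption consistent with the slide's 90,000-second figure):

```python
# Check the communication arithmetic of the failed attempt.
drugs = 3000
pairs_per_drug = drugs - 1          # 2999 key-value pairs per drug
bytes_per_pair = 1_000_000          # ~1 MB record travels with each pair

total_bytes = drugs * pairs_per_drug * bytes_per_pair   # ~9 * 10^12
terabytes = total_bytes / 10**12

# Assuming ~10^8 usable bytes/second on 1 Gb Ethernet:
seconds = total_bytes / 10**8                           # ~90,000 s
```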

The Improved Algorithm
They grouped the drugs into 30 groups of 100 drugs each. Say G1 = drugs 1-100, G2 = drugs 101-200, ..., G30 = drugs 2901-3000.
Let g(i) = the number of the group into which drug i goes.

Slide 11

The Map Function
A key is a set of two group numbers. The mapper for drug i produces 29 key-value pairs.
Each key is the set containing g(i) and one of the other group numbers.
The value is a pair consisting of the drug number i and the megabyte-long record for drug i.

Slide 12

The Reduce Function
The reducer for the pair of groups {m, n} gets that key and a list of 200 drug records: the drugs belonging to groups m and n.
Its job is to compare each record from group m with each record from group n.
Special case: also compare the records in group n with each other, if m = n+1 or if n = 30 and m = 1.
Notice that each pair of records is compared at exactly one reducer, so the total computation is not increased.

Slide 13
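A scaled-down sketch of this grouped algorithm (3 groups of 4 drugs rather than 30 groups of 100; the in-memory simulation and helper names are invented for illustration) verifies that every pair is compared at exactly one reducer:

```python
# Scaled-down simulation of the grouped algorithm: 12 drugs in 3 groups.
from collections import Counter, defaultdict
from itertools import combinations

NUM_GROUPS = 3
DRUGS = list(range(1, 13))                 # drugs 1..12, 4 per group

def g(i):
    """Group number (1-based) of drug i."""
    return (i - 1) // (len(DRUGS) // NUM_GROUPS) + 1

# Map: drug i goes to each reducer {g(i), n} for every other group n.
buckets = defaultdict(list)
for i in DRUGS:
    for n in range(1, NUM_GROUPS + 1):
        if n != g(i):
            buckets[frozenset({g(i), n})].append(i)

# Reduce: compare across groups; the special case assigns group n's
# internal pairs to the reducer whose other group m is n+1 (wrapping).
compared = Counter()
for key, members in buckets.items():
    for i, j in combinations(members, 2):
        if g(i) != g(j):
            compared[frozenset({i, j})] += 1
        else:
            n = g(i)
            m = next(x for x in key if x != n)
            if m == n + 1 or (n == NUM_GROUPS and m == 1):
                compared[frozenset({i, j})] += 1
```

The check below confirms the slide's claim: all C(12, 2) = 66 pairs are covered, each exactly once.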

The New Communication Cost
The big difference is in the communication requirement. Now, each of the 3000 drugs' 1 MB records is replicated 29 times.
Communication cost = 87 GB, vs. 9 TB.

Slide 14

Theory of Map-Reduce Algorithms

Reducer Size
Replication Rate
Mapping Schemas
Lower Bounds

Slide 15

A Model for Map-Reduce Algorithms
A set of inputs. Example: the drug records.
A set of outputs. Example: one output for each pair of drugs.
A many-many relationship between each output and the inputs needed to compute it. Example: the output for the pair of drugs {i, j} is related to inputs i and j.

Slide 16

Example: Drug Inputs/Outputs
[Figure: bipartite input-output graph. Inputs Drug 1 through Drug 4 connect to outputs 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4; each output is connected to the two drugs it depends on.]

Slide 17

Example: Matrix Multiplication
[Figure: a matrix product, highlighting that the output in row i and column j of the result depends on row i of the first matrix and column j of the second.]

Slide 18

Reducer Size
Reducer size, denoted q, is the maximum number of inputs that a given reducer can have, i.e., the length of the value list.
The limit might be based on how many inputs can be handled in main memory.
Or: make q low to force lots of parallelism.

Slide 19

Replication Rate
The average number of key-value pairs created by the mappers per input is the replication rate, denoted r.
It represents the communication cost per input.

Slide 20

Example: Drug Interaction
Suppose we use g groups and d drugs. A reducer needs two groups, so q = 2d/g.
Each of the d inputs is sent to g-1 reducers, so approximately r = g.
Replace g by r in q = 2d/g to get r = 2d/q.
Tradeoff! The bigger the reducers, the less the communication.

Slide 21
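Plugging in the earlier numbers (d = 3000 drugs, g = 30 groups) confirms the trade-off formulas:

```python
# Trade-off check for the drug example: d drugs in g groups.
d, g = 3000, 30
q = 2 * d // g        # a reducer holds two groups of d/g drugs each
r = g                 # each input reaches g - 1 (approximately g) reducers
```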

Upper and Lower Bounds on r
What we did gives an upper bound on r as a function of q.
A solid investigation of map-reduce algorithms for a problem includes lower bounds: proofs that you cannot have a lower r for a given q.

Slide 22

Proofs Need Mapping Schemas
A mapping schema for a problem and a reducer size q is an assignment of inputs to sets of reducers, with two conditions:
No reducer is assigned more than q inputs.
For every output, there is some reducer that receives all of the inputs associated with that output. Say that the reducer covers the output.

Slide 23

Mapping Schemas – (2)
Every map-reduce algorithm has a mapping schema.
The requirement that there be a mapping schema is what distinguishes map-reduce algorithms from general parallel algorithms.

Slide 24

Example: Drug Interactions
d drugs, reducer size q.
Each drug has to meet each of the d-1 other drugs at some reducer.
If a drug is sent to a reducer, then at most q-1 other drugs are there. Thus, each drug is sent to at least (d-1)/(q-1) reducers, and r ≥ (d-1)/(q-1).
That is half the r from the algorithm we described. A better algorithm gives r = d/q + 1, so the lower bound is actually tight.

Slide 25

The Better Algorithm
The problem with the algorithm dividing inputs into g groups is that members of a group appear together at many reducers. Thus, each reducer can productively compare only about half the pairs it receives.
Better: use smaller groups, with each reducer getting many little groups. This eliminates almost all the redundancy.

Slide 26

Optimal Algorithm for All-Pairs
Assume d inputs. Let p be a prime, where p^2 divides d. Divide the inputs into p^2 groups of d/p^2 inputs each.
Name the groups (i, j), where 0 ≤ i, j < p.
Use p(p+1) reducers, organized into p+1 teams of p reducers each.
For 0 ≤ k < p, group (i, j) is sent to reducer i + kj (mod p) in team k. In the last team (team p), group (i, j) is sent to reducer j.

Slide 27
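A sketch of this scheme for p = 5, with a brute-force check that every pair of groups meets at some reducer (the helper names are invented for illustration):

```python
# Optimal all-pairs scheme: p + 1 teams of p reducers each, p prime.
from itertools import combinations

p = 5
groups = [(i, j) for i in range(p) for j in range(p)]

def destinations(i, j):
    """The (team, reducer) pairs that receive group (i, j)."""
    dests = [(k, (i + k * j) % p) for k in range(p)]  # teams 0..p-1
    dests.append((p, j))                              # last team: reducer j
    return dests

# Every pair of distinct groups must share at least one reducer.
all_covered = all(
    set(destinations(*a)) & set(destinations(*b))
    for a, b in combinations(groups, 2)
)

replication = len(destinations(0, 0))   # one reducer per team: r = p + 1
```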

Example: Teams for p = 5
[Figure, repeated over six slides for Teams 0 through 5: the 5-by-5 grid of groups (i, j), with 0 ≤ i, j < 5; each frame highlights how one team's five reducers partition the 25 groups, with Team 5 (the last team) grouping them by column j.]

Slide 33

Why It Works
Let two inputs be in groups (i, j) and (i', j').
If they are in the same group, these inputs obviously share a reducer.
If j = j', then they share a reducer in team p.
If j ≠ j', then they share a reducer in team k provided i + kj = i' + kj' (all arithmetic modulo p); equivalently, (i - i') = k(j - j').
But since j ≠ j', (j - j') has an inverse modulo p. Thus, team k = (i - i')(j - j')^(-1) has a reducer for which i + kj = i' + kj'.

Slide 34

Why It Is Optimal
The replication rate r is p+1, since every input is sent to one reducer in each team.
The reducer size q = p(d/p^2) = d/p, since each reducer gets p groups of size d/p^2.
Thus, r = d/q + 1.
(d/q + 1) - (d-1)/(q-1) < 1 provided q < d. But if q ≥ d, we can do everything in one reducer, and r = 1.
The upper bound r ≤ d/q + 1 and the lower bound r ≥ (d-1)/(q-1) differ by less than 1, and are integers, so they are equal.

Slide 35

The Hamming-Distance = 1 Problem

The Exact Lower Bound
Matching Algorithms

Slide 36

Definition of the HD1 Problem
Given a set of bit strings of length b, find all those that differ in exactly one bit.
Example: for b = 2, the inputs are 00, 01, 10, 11, and the outputs are (00,01), (00,10), (01,11), (10,11).
Theorem: r ≥ b/log2 q. (Part of) the proof later.

Slide 37

Algorithm With q = 2
We can use one reducer for every output. Each input is sent to b reducers (so r = b).
Each reducer outputs its pair if both its inputs are present; otherwise, nothing.
Subtle point: if neither input for a reducer is present, then the reducer doesn't really exist.

Slide 38

Algorithm With q = 2^b
Alternatively, we can send all inputs to one reducer. There is no replication (i.e., r = 1).
The lone reducer looks at all pairs of inputs that it receives.

Slide 39

Splitting Algorithm
Assume b is even. Use two reducers for each string of length b/2; call them the left and right reducers for that string.
String w = xy, where |x| = |y| = b/2, goes to the left reducer for x and the right reducer for y.
If w and z differ in exactly one bit, then they will both be sent to the same left reducer (if they disagree in the right half) or to the same right reducer (if they disagree in the left half).
Thus, r = 2; q = 2^(b/2).

Slide 40
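A sketch of the splitting algorithm for b = 4 (reducer "buckets" simulated in memory; the variable names are invented), checking that every distance-1 pair lands at some shared reducer:

```python
# Splitting algorithm for HD1: each b-bit string w = xy goes to the
# "left" reducer for its first half x and the "right" reducer for y.
from collections import defaultdict
from itertools import combinations

b = 4
inputs = [format(n, f"0{b}b") for n in range(2 ** b)]   # all 4-bit strings

buckets = defaultdict(list)
for w in inputs:
    x, y = w[: b // 2], w[b // 2 :]
    buckets[("left", x)].append(w)    # strings sharing left half x
    buckets[("right", y)].append(w)   # strings sharing right half y

def hd(u, v):
    return sum(a != c for a, c in zip(u, v))

# Pairs at Hamming distance 1 found by some reducer.
found = set()
for strings in buckets.values():
    for u, v in combinations(strings, 2):
        if hd(u, v) == 1:
            found.add(frozenset({u, v}))

# Ground truth: every distance-1 pair among all inputs.
truth = {frozenset({u, v})
         for u, v in combinations(inputs, 2) if hd(u, v) == 1}
```

Each bucket holds exactly 2^(b/2) strings, matching the slide's q, and r = 2 since every string is sent to one left and one right reducer.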

Proof That r ≥ b/log2 q
Lemma: a reducer of size q cannot cover more than (q/2) log2 q outputs. (Induction on b; proof omitted.)
(b/2) 2^b outputs must be covered, so there are at least p = (b/2) 2^b / ((q/2) log2 q) = (b/q) 2^b / log2 q reducers.
The sum of the inputs over all reducers is ≥ pq = b 2^b / log2 q.
Replication rate r ≥ pq / 2^b = b / log2 q.
This omits the possibility that smaller reducers help.

Slide 41

Algorithms Matching Lower Bound
[Figure: plot of replication rate r against reducer size q along the curve r = b/log2 q. Labeled points: q = 2, r = b (one reducer for each output); q = 2^(b/2), r = 2 (splitting); points in between (generalized splitting); and q = 2^b, r = 1 (all inputs to one reducer).]

Slide 42

Matrix Multiplication

One-Job Method
Two-Job Method
Comparison

Slide 43

Matrix Multiplication
Assume n × n matrices: AB = C. A_ij is the element in row i and column j of matrix A; similarly for B and C.
C_ik = Σ_j A_ij × B_jk.
Output C_ik depends on the ith row of A, that is, A_ij for all j, and the kth column of B, that is, B_jk for all j.

Slide 44

Computing One Output Value
[Figure: C = A × B, highlighting row i of A and column k of B, which together determine one output element of C.]

Slide 45

Reducers Cover Rectangles
Important fact: if a reducer covers outputs C_ik and C_fg, then it also covers C_ig and C_fk.
Why? This reducer has all of rows i and f of A as inputs, and also has all of columns k and g of B as inputs. Thus, it has all the inputs it needs to cover C_ig and C_fk.
Generalizing: each reducer covers all the outputs in the "rectangle" defined by a set of rows and a set of columns of matrix C.

Slide 46

The Responsibility of One Reducer
[Figure: the rectangle of outputs in C covered by one reducer's rows of A and columns of B.]

Slide 47

Upper Bound on Output Size
If a reducer gets q inputs, it gets q/n rows and columns in total.
Maximize the number of outputs covered by making the input "square," i.e., #rows = #columns.
q/2n rows and q/2n columns yield q^2/4n^2 outputs covered.

Slide 48

Lower Bound on Replication Rate
Total outputs = n^2. One reducer can cover at most q^2/4n^2 outputs, so at least 4n^4/q^2 reducers are needed.
That makes 4n^4/q total inputs to all the reducers; divided by the 2n^2 total inputs, the replication rate is r = 2n^2/q.
Example: if q = 2n^2, one reducer suffices and the replication rate is r = 1.
Example: if q = 2n (the minimum possible), then r = n.

Slide 49

Matching Algorithm
Divide the rows of the first matrix into g groups of n/g rows each. Also divide the columns of the second matrix into g groups of n/g columns each.
Use g^2 reducers, each with q = 2n^2/g inputs, consisting of one group of rows and one group of columns.
r = g = 2n^2/q.

Slide 50
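A small sketch of this one-job algorithm (n = 4, g = 2; the matrices are invented for illustration) shows each reducer computing its block of C from its rows and columns:

```python
# One-job method: g^2 reducers; reducer (I, K) gets row-group I of A
# and column-group K of B, and computes that n/g-by-n/g block of C.
from itertools import product

n, g = 4, 2
A = [[i * n + j + 1 for j in range(n)] for i in range(n)]
B = [[(i + 2 * j) % 3 for j in range(n)] for i in range(n)]

C = [[0] * n for _ in range(n)]
for I, K in product(range(g), range(g)):
    rows = range(I * (n // g), (I + 1) * (n // g))
    cols = range(K * (n // g), (K + 1) * (n // g))
    # This reducer's q = 2n^2/g inputs: n/g full rows plus n/g full columns.
    for i in rows:
        for k in cols:
            C[i][k] = sum(A[i][j] * B[j][k] for j in range(n))

# Reference product for checking.
expected = [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]
```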

Picture of One Reducer
[Figure: one reducer holds n/g full rows of the first matrix and n/g full columns of the second, covering an n/g-by-n/g square of the product.]

Slide 51

Two-Job Map-Reduce Algorithm
A better way: use two map-reduce jobs.
Job 1: divide both input matrices into rectangles. A reducer takes two rectangles and produces partial sums of certain outputs.
Job 2: sum the partial sums.

Slide 52

Picture of First Job
[Figure: a rectangle of A with row indices I and column indices J, times a rectangle of B with row indices J and column indices K, contributes to the square of C with row indices I and column indices K. For i in I and k in K, the contribution is Σ_{j in J} A_ij × B_jk.]

Slide 53

First Job – Details
Divide the rows of the first matrix A into g groups of n/g rows each, and the columns of A into 2g groups of n/2g columns each.
Divide the rows of the second matrix B into 2g groups of n/2g rows each, and the columns of B into g groups of n/g columns each.
Important point: the groups of columns for A and rows for B must have indices that match.

Slide 54

Reducers for First Job
Reducers correspond to an n/g-by-n/2g rectangle in A (with row indices I and column indices J) and an n/2g-by-n/g rectangle in B (with row indices J and column indices K). Call this reducer (I, J, K).
Important point: there is one set of indices J that plays two roles. This is needed so that only rectangles that need to be multiplied are given a reducer.

Slide 55

The Reducer (I, J, K)
[Figure: the n/g-by-n/2g rectangle (I, J) of A times the n/2g-by-n/g rectangle (J, K) of B contributes to the n/g-by-n/g square (I, K) of C. 2g reducers contribute to this area, one for each J.]

Slide 56

Job 1: Details
Convention: i, j, k are individual row and/or column numbers, which are members of groups I, J, and K, respectively.
Mappers, Job 1:
A_ij -> key = (I, J, K) for every group K; value = (A, i, j, A_ij).
B_jk -> key = (I, J, K) for every group I; value = (B, j, k, B_jk).
Reducers, Job 1: for key (I, J, K), produce x_iJk = Σ_{j in J} A_ij × B_jk, for all i in I and k in K.

Slide 57

Job 2: Details
Mappers, Job 2: x_iJk -> key = (i, k); value = x_iJk.
Reducers, Job 2: for key (i, k), produce output C_ik = Σ_J x_iJk.

Slide 58
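The two jobs can be sketched end to end (a toy in-memory simulation with n = 4, g = 2; the matrices and helper names are invented), checking that the summed partials reproduce the product:

```python
# Two-job method: Job 1's reducer (I, J, K) computes partial sums
# x_iJk = sum over j in J of A_ij * B_jk; Job 2 sums them over J.
from collections import defaultdict
from itertools import product

n, g = 4, 2                      # I, K groups of n/g = 2; J groups of n/2g = 1
A = [[i * n + j + 1 for j in range(n)] for i in range(n)]
B = [[(i * j) % 5 for j in range(n)] for i in range(n)]

def grp(idx, size):
    return idx // size

# Job 1 map/shuffle: A_ij to (I, J, K) for every K; B_jk for every I.
reducers = defaultdict(lambda: {"A": {}, "B": {}})
for i, j in product(range(n), range(n)):
    I, J = grp(i, n // g), grp(j, n // (2 * g))
    for K in range(g):
        reducers[(I, J, K)]["A"][(i, j)] = A[i][j]
for j, k in product(range(n), range(n)):
    J, K = grp(j, n // (2 * g)), grp(k, n // g)
    for I in range(g):
        reducers[(I, J, K)]["B"][(j, k)] = B[j][k]

# Job 1 reduce: emit partial sums keyed by (i, k).
partials = []
for vals in reducers.values():
    rows = {i for i, _ in vals["A"]}
    js = {j for _, j in vals["A"]}
    cols = {k for _, k in vals["B"]}
    for i, k in product(rows, cols):
        partials.append(((i, k),
                         sum(vals["A"][(i, j)] * vals["B"][(j, k)] for j in js)))

# Job 2: sum the partial sums for each output C_ik.
C = defaultdict(int)
for (i, k), x in partials:
    C[(i, k)] += x

expected = {(i, k): sum(A[i][j] * B[j][k] for j in range(n))
            for i in range(n) for k in range(n)}
```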

Comparison: Computation Cost
The two methods (one or two map-reduce jobs) essentially do the same computation: every A_ij is multiplied once with every B_jk, and all the terms in the sum for C_ik are added together somewhere, exactly once.
The two-job method requires some extra overhead of task management.

Slide 59

Comparison: Communication Cost
One-job method: r = 2n^2/q; there are 2n^2 inputs, so total communication = 4n^4/q.
Two-job method with parameter g:
Job 2: communication = (2g)(n^2/g^2)(g^2) = 2n^2 g.
(That is: the number of reducers contributing to each output, times the area of each output square, times the number of output squares.)

Slide 60

Communication Cost – Continued
Job 1 communication: 2n^2 input elements, each generating g key-value pairs, so another 2n^2 g.
Total communication = 4n^2 g.
Reducer size q = (2)(n^2/2g^2) = n^2/g^2, so g = n/√q.
Total communication = 4n^3/√q.
This compares favorably with 4n^4/q for the one-job approach.

Slide 61
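For concreteness, the two totals can be compared for sample values of n and q (chosen here, as an assumption, so that √q divides n):

```python
# Compare total communication of the two methods for sample n and q.
import math

n, q = 1000, 10_000
sqrt_q = math.isqrt(q)           # 100, so g = n / sqrt(q) = 10

one_job = 4 * n**4 // q          # 4n^4 / q
two_job = 4 * n**3 // sqrt_q     # 4n^3 / sqrt(q)
```

The two-job method wins by a factor of n/√q, which is large whenever q is much smaller than n^2.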

Summary
Represent problems by mapping schemas.
Get upper bounds on the number of covered outputs as a function of reducer size.
Turn these into lower bounds on replication rate as a function of reducer size.
For the HD = 1 and all-pairs problems: exact match between upper and lower bounds.
One-job matrix multiplication is analyzed exactly, but two-job MM yields better total communication.

Slide 62

Research Questions
Get matching upper and lower bounds for the Hamming-distance problem for distances greater than 1.
Ugly fact: for HD = 1, you cannot have a large reducer with all pairs at distance 1; for HD = 2, it is possible. (Consider all inputs of weight 1 and length b.)

Slide 63

Research Questions – (2)
Give an algorithm that takes an input-output mapping and a reducer size q, and produces a mapping schema with the smallest replication rate. Is the problem even tractable?
A recent extension by Afrati, Dolev, Korach, Sharma, and U. lets inputs have weights, and the reducer size limits the sum of the weights of the inputs received. What can be extended to this model?