
# Lecture 5: Comparison-based Lower Bounds for Sorting





## 5.1 Overview

In this lecture we discuss the notion of lower bounds, in particular for the problem of sorting. We show that any deterministic comparison-based sorting algorithm must take Ω(n log n) time to sort an array of n elements in the worst case. We then extend this result to average-case performance, and to randomized algorithms. In the process, we introduce the 2-player game view of algorithm design and analysis.

## 5.2 Sorting lower bounds

So far we have been focusing on the question: "given some problem, can we

construct an algorithm that runs in time O(f(n)) on inputs of size n?" This is often called an upper bound problem, because we are determining an upper bound on the inherent difficulty of the problem, and our goal here is to make f(n) as small as possible. In this lecture we examine the "lower bound problem." Here, the goal is to prove that any algorithm must take time Ω(g(n)) to solve the problem, where now our goal is to do this for g(n) as large as possible. Lower bounds help us understand how close we are to the best possible solution to some problem: e.g., if we have an algorithm that

runs in time O(n log² n) and a lower bound of Ω(n log n), then we have a log(n) "gap": the maximum possible savings we could hope to achieve by improving our algorithm. Often, we will prove lower bounds in restricted models of computation that specify what types of operations may be performed on the input and at what cost. So, a lower bound in such a model means that if we want to do better, we would need somehow to do something outside the model. Today we consider the class of comparison-based sorting algorithms. These are sorting algorithms that only operate on the input array by comparing

pairs of elements and moving elements around based on the results of these comparisons. In particular, let us make the following definition.

Definition 5.1 A comparison-based sorting algorithm takes as input an array [a_1, a_2, ..., a_n] of n items, and can only gain information about the items by comparing pairs of them. Each comparison ("is a_i > a_j?") returns YES or NO and counts as 1 time-step. The algorithm may also, for free,
reorder items based on the results of comparisons made. In the end, the algorithm must output a permutation of the

input in which all items are in sorted order.

For instance, Quicksort, Mergesort, and Insertion-sort are all comparison-based sorting algorithms. What we will show is the following theorem.

Theorem 5.1 Any deterministic comparison-based sorting algorithm must perform Ω(n log n) comparisons to sort n elements in the worst case. Specifically, for any deterministic comparison-based sorting algorithm A, for all n there exists an input of size n such that A makes at least log_2(n!) = Ω(n log n) comparisons to sort it.

To prove this theorem, we cannot assume the sorting algorithm is going to

necessarily choose a pivot as in Quicksort, or split the input as in Mergesort; we need to somehow analyze any possible comparison-based algorithm that might exist. The way we will do this is by showing that in order to sort its input, the sorting algorithm is implicitly playing a game of "20 questions" with the input, and an adversary, by responding correctly, can force the algorithm to ask many questions before it can tell what is the correct permutation to output.

Proof: Recall that the sorting algorithm must output a permutation of the input [a_1, a_2, ..., a_n]. The key to the argument is

that (a) there are n! different possible permutations the algorithm might output, and (b) for each of these permutations, there exists an input for which that permutation is the only correct answer. For instance, the permutation [a_3, a_1, a_4, a_2] is the only correct answer for sorting the input [2, 4, 1, 3]. In fact, if you fix a set of n distinct elements, then there will be a 1-1 correspondence between the different orderings the elements might be in and the permutations needed to sort them.

Given (a) and (b) above, this means we can fix some set S of n! inputs (e.g., all orderings of

{1, ..., n}), one for each of the n! output permutations. Let S be the set of these inputs that are consistent with the answers to all comparisons made so far (so, initially, |S| = n!). We can think of a new comparison as splitting S into two groups: those inputs for which the answer would be YES and those for which the answer would be NO. Now, suppose an adversary always gives the answer to each comparison corresponding to the larger group. Then, each comparison will cut down the size of S by at most a factor of 2. Since S initially has size n!, and by construction, the algorithm at the end must have reduced

|S| down to 1 in order to know which output to produce, the algorithm must make at least log_2(n!) comparisons before it can halt. We can then solve:

    log_2(n!) = log_2(n) + log_2(n-1) + ... + log_2(2) = Ω(n log n).

Notice that our proof is like a game of 20 Questions in which the responder (the adversary) doesn't actually decide what he is thinking of until there is only one option left. This is legitimate because we just need to show that there is some input that would cause the algorithm to take a long time. In other words, since the sorting algorithm is deterministic, we can take that final

remaining option and then re-run the algorithm on that specific input, and the algorithm will make the exact same sequence of operations. Let's do an example with n = 3, and S initially consisting of the 6 possible orderings of {1, 2, 3}:

    (123) (132) (213) (231) (312) (321)
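The adversary strategy from the proof can be simulated in code. Below is a minimal sketch (my own illustration, not from the notes; the function name `adversarial_comparisons` is hypothetical): it maintains the set S of orderings consistent with the answers so far, answers each comparison according to the larger group, and counts how many comparisons Python's built-in sort is forced to make.

```python
# Adversary sketch: S = candidate orderings consistent with answers so far.
# Each answer keeps the larger half of S, so |S| shrinks by at most 2x per
# comparison; hence any comparison sort needs >= ceil(log2(n!)) comparisons.
from functools import cmp_to_key
from itertools import permutations
from math import ceil, factorial, log2

def adversarial_comparisons(n, sort_fn):
    """Sort the indices 0..n-1 with sort_fn, answering its comparisons
    adversarially; return the number of comparisons it was forced to make."""
    S = set(permutations(range(n)))  # candidate orderings: p[i] = rank of a_i
    count = 0

    def cmp(i, j):
        nonlocal S, count
        count += 1
        yes = {p for p in S if p[i] > p[j]}  # orderings where a_i > a_j
        no = S - yes
        if len(yes) >= len(no):              # answer with the larger group
            S = yes
            return 1                         # "a_i > a_j"
        S = no
        return -1                            # "a_i < a_j"

    sort_fn(list(range(n)), key=cmp_to_key(cmp))
    return count

n = 6
made = adversarial_comparisons(n, lambda xs, key: xs.sort(key=key))
print(made, ceil(log2(factorial(n))))
assert made >= ceil(log2(factorial(n)))
```

Any other comparison sort can be plugged in as `sort_fn`; the assertion holds for all of them, which is exactly what the theorem asserts.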
Suppose the sorting algorithm initially compares the first two elements, a_1 and a_2. Half of the possibilities have a_1 > a_2 and half have a_2 > a_1. So, the adversary can answer either way; let's say it answers that a_2 > a_1. This narrows down the input to three possibilities:

    (123) (132) (231)

Suppose the next comparison is between a_2 and a_3. In this case, the most popular answer is a_2 > a_3, so the adversary returns that answer, which removes just one ordering, leaving the algorithm with:

    (132) (231)

It now takes one more comparison to finally isolate the input ordering and determine the correct permutation to output.

Alternative view of the proof: Another way of looking at the proof we gave above is as follows. For a deterministic algorithm, the permutation it outputs is solely a function of the series of answers it receives (any two inputs producing the same series

of answers will cause the same permutation to be output). So, if an algorithm always made at most k < log_2(n!) comparisons, then there are at most 2^k < n! different permutations it can possibly output. In other words, there is some permutation it can't output. So, the algorithm will fail on any input for which that permutation is the only correct answer.

Question: Suppose we consider the problem: "order the input array so that the smallest n/2 elements come before the largest n/2 elements." Does our lower bound still hold for that problem, or where does it break down? How fast can you solve that problem?

Answer: No, the proof does not still hold. It breaks down because any given input can have multiple correct answers. E.g., for input [2 1 4 3], we could output any of [a_2, a_1, a_4, a_3], [a_1, a_2, a_4, a_3], [a_2, a_1, a_3, a_4], or [a_1, a_2, a_3, a_4]. In fact, not only does the lower bound break down, but we can actually solve this problem in linear time: just run the linear-time median-finding algorithm and then make a second pass putting elements into the first half or second half based on how they compare to the median.

## 5.3 Average-case lower bounds

In fact, we can generalize the above theorem to show

that any comparison-based sorting algorithm must take Ω(n log n) time on average, not just in the worst case.

Theorem 5.2 For any deterministic comparison-based sorting algorithm A, the average-case number of comparisons (the number of comparisons on average on a randomly chosen permutation of n distinct elements) is at least ⌊log_2(n!)⌋.

Proof: Let S be the set of all n! possible orderings of n distinct elements. As noted in the previous argument, these each require a different permutation to be produced as output. Let's now build out the entire decision tree for algorithm A on S: the tree we

get by looking at all the different question/answer paths we get by running algorithm A on the inputs in S. This tree has n! leaves, where the depth of a leaf is the number of comparisons performed by the sorting algorithm on
that input. Our goal is to show that the average depth of the leaves must be at least ⌊log_2(n!)⌋ (previously, we only cared about the maximum depth). If the tree is completely balanced, then each leaf is at depth ⌊log_2(n!)⌋ or ⌈log_2(n!)⌉ and we are done.

To prove the theorem, we just need to show that out of all binary trees on a given number of leaves, the one that minimizes their average depth is a completely balanced tree. This is not too hard to see: given some unbalanced tree, we take two sibling leaves at largest depth and move them to be children of the leaf of smallest depth. Since the difference between the largest depth and the smallest depth is at least 2 (otherwise the tree would be balanced), this operation reduces the average depth of the leaves. Specifically, if the smaller depth is d_1 and the larger depth is d_2, we have removed two leaves of depth d_2 and one of depth d_1, and we have

added two leaves of depth d_1 + 1 and one of depth d_2 - 1. Since any unbalanced tree can be modified to have a smaller average depth, such a tree cannot be one that minimizes average depth, and therefore the tree of smallest average depth must in fact be balanced. In fact, if we are a bit more clever in the proof, we can get rid of the floor in the bound.

## 5.4 Lower bounds for randomized algorithms

Theorem 5.3 The above bound holds for randomized algorithms too.

Proof: The argument here is a bit subtle. The first step is to argue that, with respect to counting comparisons, we can

think of a randomized algorithm as a probability distribution over deterministic algorithms. In particular, we can think of a randomized algorithm A as a deterministic algorithm with access to a special "random bit tape": every time A wants to flip a coin, it just pulls the next bit off that tape. In that case, for any given run of algorithm A, say reading bit-string s from that tape, there is an equivalent deterministic algorithm A_s with those bits hardwired in. Algorithm A is then a probability distribution over all those deterministic algorithms A_s. This means that the expected number of

comparisons made by randomized algorithm A on some input I is just

    Σ_s Pr(s) · (running time of A_s on I).

If you recall the definition of expectation, the running time of the randomized algorithm is a random variable and the sequences s correspond to the elementary events. So, the expected running time of the randomized algorithm is just an average over deterministic algorithms. Since each deterministic algorithm has average-case running time at least ⌊log_2(n!)⌋, any average over them must too. Formally, the average-case running time of the randomized algorithm is

    avg_I [ Σ_s Pr(s) · (running time of A_s on I) ]
        = Σ_s Pr(s) · avg_I (running time of A_s on I)
        ≥ Σ_s Pr(s) · ⌊log_2(n!)⌋ = ⌊log_2(n!)⌋.

(Footnotes: here we define a tree to be completely balanced if the deepest leaf is at most one level deeper than the shallowest leaf; and everything would be easier if we could somehow assume n! was a power of 2....)

One way to think of the kinds of bounds we have been proving is to think of a matrix with one row for every possible deterministic comparison-based sorting algorithm (there could be a lot of rows!) and one column for every possible permutation of n given

input elements (there are a lot of columns too). Entry (i, j) in this matrix contains the running time of algorithm i on input j. The worst-case deterministic lower bound tells us that for each row there exists a column such that the entry (i, j) is large. The average-case deterministic lower bound tells us that for each row, the average of the elements in the row is large. The randomized lower bound says: "well, since the above statement holds for every row, it must also hold for any weighted average of the rows." In the language of game theory, one could think of this as a two-player game

(much like rock-paper-scissors) between an "algorithm player" who gets to pick a row and an adversarial "input player" who gets to pick a column. Each player makes their choice, and the entry in the matrix is the cost to the algorithm-player, which we can think of as how much money the algorithm-player has to pay the input player. We have shown that there is a randomized strategy for the input player (namely, pick a column at random) that guarantees it an expected gain of Ω(n log n) no matter what strategy the algorithm-player chooses.
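To make the matrix view concrete, here is a small numeric sketch (my own illustration, not from the notes; the helper names are hypothetical). Two deterministic comparison sorts play the role of rows, the 3! input orderings are the columns, and each entry counts comparisons; each row's maximum and average then respect the worst-case and average-case bounds.

```python
# Matrix-view sketch: rows = deterministic comparison sorts, columns = the n!
# input orderings, entry (i, j) = number of comparisons algorithm i makes on
# input j. Check max(row) >= ceil(log2 n!) and avg(row) >= floor(log2 n!).
from itertools import permutations
from math import ceil, floor, log2, factorial

def insertion_sort_comparisons(a):
    a, count = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1                      # one comparison
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return count

def selection_sort_comparisons(a):
    a, count = list(a), 0
    for i in range(len(a) - 1):
        m = i
        for j in range(i + 1, len(a)):
            count += 1                      # one comparison
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return count

n = 3
columns = list(permutations(range(n)))      # the n! inputs
for name, algo in [("insertion", insertion_sort_comparisons),
                   ("selection", selection_sort_comparisons)]:
    row = [algo(p) for p in columns]        # one row of the matrix
    print(name, row, max(row), sum(row) / len(row))
    assert max(row) >= ceil(log2(factorial(n)))              # worst case
    assert sum(row) / len(row) >= floor(log2(factorial(n)))  # average case
```

The randomized bound then follows the text's argument: since every row's average is at least ⌊log_2(n!)⌋, so is any weighted average of rows, i.e., any randomized algorithm.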