COSC 3100: Transform and Conquer
Instructor: Tanvir
What is Transform and Conquer?

The fourth algorithm design technique we are going to study. It comes in three major variations:
- Instance simplification: transform to a simpler or more convenient instance of the same problem
- Representation change: transform to a different representation of the same instance
- Problem reduction: transform to an instance of a different problem for which you know an efficient algorithm

It is a two-step process:
Step 1: Modify the problem instance into something easier to solve
Step 2: Conquer!
What is Transform and Conquer? (contd.)

Problem's instance --> simpler instance, or another representation, or another problem's instance --> solution

Examples we will see:
- Instance simplification: 1. Presorting
- Representation change: 1. Heapsort  2. Horner's Rule  3. Binary Exponentiation
- Problem reduction: 1. Computing the Least Common Multiple  2. Counting Paths in a Graph  3. Reduction to Optimization Problems  4. Reduction to Graph Problems
Trns. & Conq.: Presorting

Many questions about a list are easier to answer if the list is sorted. We shall see three examples.

Example 1: Checking element uniqueness in an array, e.g., 4 1 8 9 3 7 10 2 3 1

What is the brute-force idea? Take the first element and check whether it occurs in the rest of the array, then take the second element and check whether it occurs in the rest of the array, and so on...

T(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2 ∈ Θ(n²)
Trns. & Conq.: Presorting

Let us apply sorting first:

ALGORITHM PresortElementUniqueness(A[0..n-1])
    sort the array A
    for i <- 0 to n-2 do
        if A[i] = A[i+1] return false
    return true

T(n) = Tsort(n) + Tscan(n) ∈ Θ(n lg n) + Θ(n) = Θ(n lg n)

E.g., sorting 4 1 8 9 3 7 10 2 3 1 gives 1 1 2 3 3 4 7 8 9 10, where the duplicate 1s and 3s have become adjacent.
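The sort-then-scan idea above can be sketched in Python (a minimal sketch; the function name is mine, not from the slides):

```python
def presort_element_uniqueness(a):
    # Sort a copy, then scan adjacent pairs: any duplicates in a
    # sorted list must sit next to each other.
    b = sorted(a)                      # Theta(n lg n)
    for i in range(len(b) - 1):        # Theta(n) scan
        if b[i] == b[i + 1]:
            return False
    return True
```

On the slide's example array [4, 1, 8, 9, 3, 7, 10, 2, 3, 1] this returns False, since 1 and 3 each occur twice.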
Trns. & Conq.: Presorting

Example 2: Computing a mode. A mode is a value that occurs most often in a list. E.g., in 5 1 5 7 6 5 7, the mode is 5.

Brute-force: scan the list and compute the frequencies of all distinct values, then find the value with the highest frequency.

How to implement the brute-force? Store the values already encountered, along with their frequencies, in a separate list:

Value:     5  1  7  6
Frequency: 3  1  2  1

What is the worst-case input? An array with no equal elements: then each element must be compared against every value already in the frequency list, so

C(n) = 0 + 1 + ... + (n-1) = n(n-1)/2 ∈ Θ(n²)
Trns. & Conq.: Presorting

Example 2: Computing a mode (contd.). If we sort the array first, all equal values will be adjacent to each other. To compute the mode, all we need to do is find the longest run of adjacent equal values in the sorted array.
Trns. & Conq.: Presorting

Example 2: Computing a mode (contd.)

ALGORITHM PresortMode(A[0..n-1])
    sort the array A
    i <- 0
    modefrequency <- 0
    while i ≤ n-1 do
        runlength <- 1; runvalue <- A[i]
        while i+runlength ≤ n-1 and A[i+runlength] = runvalue
            runlength <- runlength+1
        if runlength > modefrequency
            modefrequency <- runlength; modevalue <- runvalue
        i <- i+runlength
    return modevalue

The while loop takes linear time, so the overall runtime is dominated by the time for sorting, Θ(n lg n). E.g., on the sorted array 1 1 2 3 3 3 7 8 9 10, the longest run is the three 3s.
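PresortMode transcribes almost line for line into Python (a sketch; variable names follow the pseudocode):

```python
def presort_mode(a):
    # Sort, then find the longest run of equal adjacent values.
    # The scan is linear, so sorting dominates: Theta(n lg n).
    b = sorted(a)
    i = 0
    mode_frequency = 0
    mode_value = None
    while i < len(b):
        run_length, run_value = 1, b[i]
        while i + run_length < len(b) and b[i + run_length] == run_value:
            run_length += 1
        if run_length > mode_frequency:
            mode_frequency, mode_value = run_length, run_value
        i += run_length              # jump past the whole run
    return mode_value
```

On the slide's list [5, 1, 5, 7, 6, 5, 7] this returns 5.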
Trns. & Conq.: Heapsort

Heapsort is another O(n lg n) sorting algorithm. It uses a clever data structure called a "heap": it transforms the array into a heap, and then sorting becomes very easy. Heaps have other important uses, such as implementing a "priority queue". Recall, a priority queue is a multiset of items with an orderable characteristic called "priority", supporting the following operations:
- Finding an item with the highest priority
- Deleting an item with the highest priority
- Adding a new item to the multiset

Let us study the heap first...
Trns. & Conq.: Heap

A "heap" can be described as a binary tree, with one key per node, provided the following two conditions are met:
- Shape property: the binary tree is essentially complete; all levels are full except possibly the last level, where only some rightmost leaves may be missing
- Parental dominance (or heap property): the key in each node is greater than or equal to the keys in its children (considered automatically satisfied for the leaves)

This is actually the "max-heap"; there is a corresponding "min-heap".
Trns. & Conq.: Heap

Is it a heap? (Figure: three candidate binary trees over keys such as 10, 7, 6, 5, 2, 1; for each tree, check both the shape property and the heap property.)
Trns. & Conq.: Heap

A heap can be stored as an array by recording its keys top-down, left-to-right. E.g.:

index: 1  2  3  4  5  6  7  8  9  10
value: 10 8  7  5  2  1  6  3  5  1

(positions 1 through 5 hold the parental nodes; positions 6 through 10 hold the leaves)

Note: the sequence of values on a path from the root to a leaf is nonincreasing; there is no left-to-right order in key values, though.

ALGORITHM Parent(i)
    return ⌊i/2⌋

ALGORITHM Left(i)
    return 2i

ALGORITHM Right(i)
    return 2i+1
Trns. & Conq.: Heap

E.g., H: 9 8 7 5 2 1 6 3 5 1 at indices 1..10.

Important properties of heaps:
1. There exists exactly one essentially complete binary tree with n nodes; its height is ⌊lg n⌋
2. The root of a heap contains the largest element
3. A node with all its descendants is also a heap
4. A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion. H[0] is unused or contains a number bigger than all the rest.
   a. Parental node keys will be in the first ⌊n/2⌋ positions; leaves will be in the last ⌈n/2⌉ positions
   b. Children of the key at index i (1 ≤ i ≤ ⌊n/2⌋) will be in positions 2i and 2i+1; the parent of the key at index i (2 ≤ i ≤ n) will be in position ⌊i/2⌋

Which gives: H[i] ≥ max{ H[2i], H[2i+1] } for i = 1, ..., ⌊n/2⌋
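The index arithmetic above is easy to exercise in Python. This sketch (helper names are mine) checks parental dominance for a 1-based array, using the slide's H[0]-unused convention:

```python
# 1-based array representation of a heap: h[0] is unused,
# children of index i sit at 2i and 2i+1, the parent at i // 2.

def parent(i):
    return i // 2          # floor(i/2)

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

def is_max_heap(h, n):
    # Verify H[i] >= max(H[2i], H[2i+1]) for every parental node.
    for i in range(1, n // 2 + 1):
        for c in (left(i), right(i)):
            if c <= n and h[i] < h[c]:
                return False
    return True
```

For the slide's example array 9 8 7 5 2 1 6 3 5 1, `is_max_heap([None, 9, 8, 7, 5, 2, 1, 6, 3, 5, 1], 10)` returns True.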
Trns. & Conq.: Heap

How can we transform an array into a heap? Two ways.

Method 1, "bottom-up heap construction", e.g., on the array 2 9 7 6 5 8:
1. Start at the last parent, at index ⌊n/2⌋
2. Check if parental dominance holds for this node's key
3. If it does not, exchange the node's key K with the larger key of its children
4. Check if parental dominance holds for K in its new position, and so on...
Stop after doing this for the root. This process is called "heapifying".
Trns. & Conq.: Heap

ALGORITHM HeapBottomUp(H[1..n])
//Input: An array H[1..n] of orderable items
//Output: A heap H[1..n]
for i <- ⌊n/2⌋ downto 1 do
    k <- i; v <- H[k]
    heap <- false
    while not heap and 2*k ≤ n do
        j <- 2*k
        if j < n                       // there are two children
            if H[j] < H[j+1] j <- j+1  // j points at the larger child
        // (otherwise the left child H[j] is the only child)
        if v ≥ H[j]
            heap <- true
        else
            H[k] <- H[j]
            k <- j
    H[k] <- v

E.g., heapifying 2 9 7 6 5 8 one parent at a time yields 9 6 8 2 5 7.
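HeapBottomUp carries over to Python directly (a sketch, keeping the 1-based indexing of the pseudocode):

```python
def heap_bottom_up(h, n):
    # h[0] is ignored; keys live in h[1..n].
    # Rearranges h in place into a max-heap.
    for i in range(n // 2, 0, -1):     # from the last parent up to the root
        k, v = i, h[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and h[j] < h[j + 1]:   # pick the larger child
                j += 1
            if v >= h[j]:
                heap = True
            else:
                h[k] = h[j]                 # pull the child up
                k = j
        h[k] = v                            # drop v into its final slot
```

Running it on the slide's array: `h = [None, 2, 9, 7, 6, 5, 8]; heap_bottom_up(h, 6)` leaves h as 9 6 8 2 5 7.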
Trns. & Conq.: Heap

Let us analyze HeapBottomUp(H[1..n])'s worst case. Take a heap whose binary tree is full, with the largest possible number of nodes at each level, so that the number of nodes is n = 2^m - 1 (e.g., n = 2² - 1 = 3, n = 2³ - 1 = 7).

So what is the height? h = ⌊lg n⌋ = ⌈lg(n+1)⌉ - 1 = m - 1.

Why is this the worst case? A key on level i may have to travel all the way down to the leaf level h; level i has h - i lower levels, and going down by one level requires 2 comparisons: one to determine the larger child, the other to determine whether an exchange is needed. Level i contains 2^i keys, so

C_worst(n) = sum over levels i = 0, ..., h-1 of 2^i · 2(h - i) = 2(n - lg(n+1))

So fewer than 2n comparisons are required: bottom-up heap construction is in O(n).
Trns. & Conq.: Heap

Building a heap, Method 2: "top-down heap construction". Idea: successively insert a new key into a previously constructed heap:
1. Attach a new node with key K after the last leaf of the existing heap
2. Sift K up until parental dominance is established

(Figure: inserting the key 10 into a six-node heap with root 9; 10 is attached as the last leaf and sifted up, swapping with its parent at each level, until it becomes the new root.)

An insertion operation cannot take more than twice the height of the heap's tree, so it is in O(lg n).
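The sift-up insertion can be sketched as follows (the heap [9, 7, 8, 2, 6, 5] in the usage note is my example, not the slide's figure):

```python
def heap_insert(h, key):
    # h is 1-based: h[0] is unused.  Attach key as the last leaf,
    # then swap it with its parent while parental dominance fails.
    h.append(key)
    i = len(h) - 1
    while i > 1 and h[i // 2] < h[i]:
        h[i // 2], h[i] = h[i], h[i // 2]
        i //= 2
```

E.g., `h = [None, 9, 7, 8, 2, 6, 5]; heap_insert(h, 10)` sifts 10 past 8 and 9, making it the new root.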
Trns. & Conq.: Heap

Maximum-key deletion from a heap:
1. Exchange the root's key with the last key K in the heap
2. Decrease the heap's size by 1
3. "Heapify" the smaller tree by sifting K down, re-establishing parental dominance as long as it is needed

E.g., to delete the root of the heap 9 8 6 2 5 1: exchange 9 with the last key 1, shrink the heap to 1 8 6 2 5, then sift 1 down (swapping with 8, then with 5), giving 8 5 6 2 1.

Efficiency is O(lg n).
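The three deletion steps can be sketched in Python (1-based array as before; the function name and return convention are mine):

```python
def heap_delete_max(h, n):
    # Returns (maximum key, new heap size); modifies h[1..n] in place.
    maximum = h[1]
    h[1] = h[n]          # step 1: move the last key K to the root
    n -= 1               # step 2: shrink the heap
    k, v = 1, h[1]       # step 3: sift K down
    while 2 * k <= n:
        j = 2 * k
        if j < n and h[j] < h[j + 1]:   # larger child
            j += 1
        if v >= h[j]:
            break
        h[k] = h[j]
        k = j
    h[k] = v
    return maximum, n
```

On the example heap 9 8 6 2 5 1 this returns 9 and leaves the remaining heap as 8 5 6 2 1.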
Trns. & Conq.: Heapsort

Heapsort is an interesting algorithm discovered by J. W. J. Williams in 1964. It has 2 stages:
- Stage 1: Construct a heap from a given array
- Stage 2: Apply the root-deletion operation n-1 times to the remaining heap
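Putting the two stages together gives a complete in-place heapsort (a sketch; the inner sift_down helper is my refactoring of the HeapBottomUp loop body):

```python
def heapsort(a):
    # Stage 1: build a max-heap bottom-up.
    # Stage 2: apply the root-deletion operation n-1 times.
    n = len(a)
    h = [None] + list(a)               # shift to 1-based indexing

    def sift_down(k, size):
        v = h[k]
        while 2 * k <= size:
            j = 2 * k
            if j < size and h[j] < h[j + 1]:
                j += 1                  # larger child
            if v >= h[j]:
                break
            h[k] = h[j]
            k = j
        h[k] = v

    for i in range(n // 2, 0, -1):      # Stage 1, O(n)
        sift_down(i, n)
    for size in range(n, 1, -1):        # Stage 2, O(n lg n)
        h[1], h[size] = h[size], h[1]   # max goes to its final slot
        sift_down(1, size - 1)
    return h[1:]
```

E.g., `heapsort([2, 9, 7, 6, 5, 8])` returns [2, 5, 6, 7, 8, 9], matching the trace on the next slides.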
Trns. & Conq.: Heapsort

Stage 1: Heap construction, e.g., on the array 2 9 7 6 5 8 (indices 1..6):

2 9 7 6 5 8
2 9 8 6 5 7    (7 exchanged with its larger child 8)
9 2 8 6 5 7    (2 exchanged with its larger child 9)
9 6 8 2 5 7    (2 exchanged with its larger child 6)
Trns. & Conq.: Heapsort

Stage 2: Maximum deletions (keys to the right of | are in their final sorted positions):

9 6 8 2 5 7
7 6 8 2 5 | 9    (root 9 exchanged with the last key 7)
8 6 7 2 5 | 9    (7 sifted down)
5 6 7 2 | 8 9
7 6 5 2 | 8 9
2 6 5 | 7 8 9
6 2 5 | 7 8 9
5 2 | 6 7 8 9
2 | 5 6 7 8 9

The array ends up sorted: 2 5 6 7 8 9
Trns. & Conq.: Heapsort

What is heapsort's worst-case complexity?

Stage 1: Heap construction is in O(n).

Stage 2: Maximum deletions — let's see. Let C(n) be the number of comparisons needed in Stage 2. Recall, the height of the heap's tree is ⌊lg n⌋, and each root deletion costs at most twice the height of the current heap's tree:

C(n) ≤ 2⌊lg(n-1)⌋ + 2⌊lg(n-2)⌋ + ... + 2⌊lg 1⌋ ≤ 2(n-1)lg(n-1) ≤ 2n lg n

So C(n) ∈ O(n lg n), and for the two stages, O(n) + O(n lg n) = O(n lg n).
Trns. & Conq.: Heapsort

More careful analysis shows that heapsort's worst and best cases are in Θ(n lg n), like mergesort. Additionally, heapsort is in-place. Timing experiments on random files show that heapsort is slower than quicksort, but can be competitive with mergesort. What sorting algorithm is used by Collections.sort(List<T> list) in Java?
Trns. & Conq.: Problem Reduction

Problem 1 (to be solved) --reduction--> Problem 2 (solvable by Alg. A) --Alg. A--> solution to Problem 2

Compute the least common multiple of two nonnegative integers, lcm(m, n). Say m = 24 and n = 60:
24 = 2*2*2*3 and 60 = 2*2*3*5
Take the product of all common factors of m and n, all factors of m not in n, and all factors of n not in m:
lcm(24, 60) = (2*2*3)*2*5 = 120

Is it a good algorithm? It requires factoring m and n. Notice,
24*60 = (2*2*2*3) * (2*2*3*5) = (2*2*3)² * (2*2*3) * 2*5 / (2*2*3) — that is, the common part is counted twice:
m*n = lcm(m, n) * gcd(m, n)
So,
lcm(m, n) = m*n / gcd(m, n)
and gcd(m, n) can be computed efficiently by Euclid's algorithm!
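The reduction above can be sketched in a few lines of Python, reducing lcm to gcd via Euclid's algorithm:

```python
def gcd(m, n):
    # Euclid's algorithm: the efficient algorithm we reduce to.
    while n:
        m, n = n, m % n
    return m

def lcm(m, n):
    # Problem reduction: lcm(m, n) = m*n / gcd(m, n).
    return m * n // gcd(m, n)
```

E.g., gcd(24, 60) = 12 and lcm(24, 60) = 24*60/12 = 120, matching the factoring computation above.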
THANK YOU…