Divide and Conquer. Arash Rafiey. 27 October, 2016



2 Divide the problem into a number of subproblems.
Conquer the subproblems by solving them recursively; if they are small enough, solve them directly.
Combine the solutions to the subproblems into a solution to the original problem.
Example:
1. Merge-Sort(A, p, r)
2. if p < r
3. q := ⌊(p + r)/2⌋
4. Merge-Sort(A, p, q)
5. Merge-Sort(A, q + 1, r)
6. Merge(A, p, q, r)
The initial call for the input array A = A[1] ... A[n] is Merge-Sort(A, 1, n).

7 Example: Merge-Sort. [Figure: the initial sequence of length l is split into two length-l/2 sequences, these are sorted recursively, and the two sorted subsequences are merged into the sorted sequence.]

8 Merge(A, p, q, r): repeatedly take the smaller of the two front-most elements of the sequences A[p..q] and A[q + 1..r] and put it into a temporary array, until both sequences are empty. Then copy the resulting sequence from the temporary array into A[p..r].
Exercise: write pseudocode for the procedure Merge(A, p, q, r) used in the Merge-Sort algorithm for merging the two sorted arrays A[p..q] and A[q + 1..r] into one.

10 MERGE(A, p, q, r)
1. n1 := q − p + 1; n2 := r − q
2. for i := 1 to n1
3.   L[i] := A[p + i − 1]
4. for i := 1 to n2
5.   R[i] := A[q + i]
6. i := 1; j := 1; k := p
7. while i ≤ n1 and j ≤ n2
8.   if L[i] ≤ R[j]
9.     A[k] := L[i]; i := i + 1
10.  else
11.    A[k] := R[j]; j := j + 1
12.  k := k + 1
13. if i > n1
14.   while j ≤ n2
15.     A[k] := R[j]; j := j + 1; k := k + 1
16. else while i ≤ n1
17.   A[k] := L[i]; i := i + 1; k := k + 1
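A direct Python translation may make the pseudocode concrete (a sketch, using 0-based indexing and Python lists in place of the arrays L and R):

```python
def merge(a, p, q, r):
    """Merge the sorted subarrays a[p..q] and a[q+1..r] in place."""
    left = a[p:q + 1]          # plays the role of L
    right = a[q + 1:r + 1]     # plays the role of R
    i = j = 0
    k = p
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1
        k += 1
    while i < len(left):       # copy whichever side is not yet empty
        a[k] = left[i]; i += 1; k += 1
    while j < len(right):
        a[k] = right[j]; j += 1; k += 1

def merge_sort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = (p + r) // 2
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)

a = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(a)
print(a)  # [1, 2, 2, 3, 4, 5, 6, 7]
```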

11 Analyzing a divide-and-conquer algorithm
Let T(n) be the running time on a problem of size n.
If n is small enough (say, n ≤ c for some constant c), the straightforward solution takes Θ(1).
Suppose dividing the problem yields a subproblems, each of size 1/b of the original (Merge-Sort: a = b = 2), division into subproblems takes D(n), and combining the solutions to the subproblems takes C(n). Then
T(n) = Θ(1) if n ≤ c, and T(n) = aT(n/b) + D(n) + C(n) otherwise.

15 For Merge-Sort:
a = b = 2
D(n) = Θ(1) (just compute the middle of the array)
C(n) = Θ(n) (merging has running time linear in the length of the resulting sequence)
Thus
T_MS(n) = Θ(1) if n ≤ 1, and T_MS(n) = T_MS(⌈n/2⌉) + T_MS(⌊n/2⌋) + Θ(n) otherwise.
That is what is called a recurrence. But we want a closed form, i.e., we want to solve the recurrence.

16 Solving recurrences
There are a few methods for solving recurrences, some easy but not powerful, some complicated but powerful. Methods:
1. guess & verify (also called the "substitution method")
2. master method
3. generating functions
and some others. We are going to see 1 and 2.

17 Substitution method
Basic idea:
1. guess the form of the solution;
2. use mathematical induction to find the constants and show that the solution works.
Usually the more difficult part is part 1.
Back to our example: we had T_MS(n) ≤ 2T_MS(⌈n/2⌉) + Θ(n) for n ≥ 2. If you had not seen something like this before, how would you guess? There is no general way to guess the correct solution; it takes experience.

18 Heuristics that can help to find a good guess
One way is to look at the first few terms. Say we had T(n) = 2T(n/2) + 3n. Then
T(n) = 2T(n/2) + 3n
     = 2(2T(n/4) + 3(n/2)) + 3n
     = 2(2(2T(n/8) + 3(n/4)) + 3(n/2)) + 3n
     = 2^3 T(n/2^3) + 2^2 · 3(n/2^2) + 2^1 · 3(n/2^1) + 2^0 · 3(n/2^0)
We can do this log n times:
T(n) = 2^{log n} T(n/2^{log n}) + Σ_{i=0}^{log(n)−1} 2^i · 3(n/2^i)
     = n · T(1) + Σ_{i=0}^{log(n)−1} 3n
     = n · T(1) + 3n log n
     = Θ(n log n)
After guessing a solution you will have to prove its correctness.
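The expansion can be checked numerically; a small sketch (the base case T(1) = 1 is an arbitrary choice):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2 T(n/2) + 3n for n > 1; base case T(1) = 1 (arbitrary constant)
    if n <= 1:
        return 1
    return 2 * T(n // 2) + 3 * n

# For n = 2^k the unrolling above gives T(n) = n*T(1) + 3*n*log2(n) exactly.
for k in (1, 4, 10):
    n = 2 ** k
    assert T(n) == n * T(1) + 3 * n * k
print("expansion matches the closed form on powers of two")
```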

20 Similar recurrences might have similar solutions.
Consider T(n) = 2T(⌊n/2⌋ + 25) + n. This looks similar to the last example, but is the additional 25 in the argument going to change the solution? Not really, because for large n the difference between T(⌊n/2⌋) and T(⌊n/2⌋ + 25) is not large: both cut n nearly in half. For n = 2,000 we compare T(1,000) and T(1,025); for n = 1,000,000 we compare T(500,000) and T(500,025). Thus a reasonable guess is that T(n) = O(n log n) as well.

21 Stepwise refinement: guess loose lower and upper bounds, and gradually bring them closer to each other.
For T(n) = 2T(⌊n/2⌋) + n we see
T(n) = Ω(n) (because of the n term)
T(n) = O(n²) (easily proven)
From there, we can converge on the asymptotically tight bound Θ(n log n).

24 Proving correctness of a guess
For Merge-Sort we have T(n) ≤ 2T(⌈n/2⌉) + Θ(n), which means there is a constant d > 0 such that T(n) ≤ 2T(⌈n/2⌉) + dn for n ≥ 2.
We guessed that T(n) = O(n log n). We prove T(n) ≤ cn log n for an appropriate choice of constant c > 0.
Induction hypothesis: assume that the bound holds for all n < m, hence also for n = ⌈m/2⌉, i.e.,
T(⌈m/2⌉) ≤ c⌈m/2⌉ log(⌈m/2⌉)

25 and prove it for m ≥ 4 (induction step):
T(m) ≤ 2T(⌈m/2⌉) + dm
     ≤ 2(c⌈m/2⌉ log(⌈m/2⌉)) + dm
     ≤ 2(c(m + 1)/2 · log((m + 1)/2)) + dm
     = c(m + 1) log((m + 1)/2) + dm
     = c(m + 1) log(m + 1) − c(m + 1) log 2 + dm
     ≤ c(m + 1)(log m + 1/3) − c(m + 1) + dm   (since log(m + 1) ≤ log m + 1/3 for m ≥ 4)
     = cm log m + c log m + (c/3)(m + 1) − c(m + 1) + dm
     ≤ cm log m + cm/2 − (2/3)c(m + 1) + dm
     ≤ cm log m + (c/2 − 2c/3 + d)m
     = cm log m + (d − c/6)m
     ≤ cm log m for c ≥ 6d.

35 Base step
It remains to show that the boundary conditions of the recurrence (m < 4) are suitable as base cases for the inductive proof. We have to show that we can choose c large enough such that the bound T(m) ≤ cm log m works for the boundary conditions as well (when m < 4).
Assume that T(1) = b > 0. For m = 1, T(m) ≤ cm log m = c · 1 · 0 = 0 is a bit of a problem, because T(1) is a constant greater than 0.

36 How to resolve this problem?
Recall that we wanted to prove T(n) = O(n log n). Also recall that by the definition of O(), we are free to disregard a constant number of small values of n:
f(n) = O(g(n)) ⟺ there exist constants c, n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀
A way out of our problem is to remove the difficult boundary condition T(1) = b > 0 from consideration in the inductive proof.
Note: for m ≥ 4, T(m) does not depend directly on T(1) (we have T(2) ≤ 2T(1) + 2d = 2b + 2d, T(3) ≤ 2T(2) + 3d ≤ 2(2b + 2d) + 3d = 4b + 7d, and T(4) ≤ 2T(2) + 4d).
This means: we replace T(1) by T(2) and T(3) as the base cases in the inductive proof, letting n₀ = 2.

37 In other words, we are showing that the bound T(n) ≤ cn log n holds for all n ≥ 2.
For m = 2: we need T(2) = 2b + 2d ≤ c · 2 log 2 = 2c.
For m = 3: we need T(3) = 4b + 7d ≤ c · 3 log 3.
Hence, set c to max(6d, b + d, (4b + 7d)/(3 log 3)), and the above boundary conditions as well as the induction step will work.
Important: this is a general technique that can be used very often!
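The choice of constants can be sanity-checked numerically; a small sketch with b = d = 1 (arbitrary values for the base-case cost and the linear term):

```python
import math
from functools import lru_cache

b, d = 1.0, 1.0   # T(1) = b; T(n) <= 2 T(ceil(n/2)) + d*n for n >= 2

@lru_cache(maxsize=None)
def T(n):
    # worst case allowed by the recurrence
    if n == 1:
        return b
    return 2 * T((n + 1) // 2) + d * n   # (n + 1)//2 == ceil(n/2)

c = max(6 * d, b + d, (4 * b + 7 * d) / (3 * math.log2(3)))
assert all(T(n) <= c * n * math.log2(n) for n in range(2, 2000))
print("T(n) <= c n log n holds up to n = 1999 with c =", c)
```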

38 A neat trick called "changing variables"
Suppose we have T(n) = 2T(√n) + log n.
Now rename m = log n, i.e., 2^m = n. We know √n = n^{1/2} = (2^m)^{1/2} = 2^{m/2}, and thus obtain
T(2^m) = 2T(2^{m/2}) + m
Now rename S(m) = T(2^m) and get
S(m) = 2S(m/2) + m
Looks familiar: we know the solution S(m) = Θ(m log m). Going back from S(m) to T(n) we obtain
T(n) = T(2^m) = S(m) = Θ(m log m) = Θ(log n · log log n)
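The change of variables can be checked numerically on inputs of the form n = 2^(2^k), where ⌊√n⌋ is exact; a sketch with the arbitrary base case T(n) = 1 for n ≤ 2:

```python
import math

def T(n):
    # T(n) = 2 T(floor(sqrt(n))) + log2(n), with T(n) = 1 for n <= 2
    if n <= 2:
        return 1
    return 2 * T(math.isqrt(n)) + int(math.log2(n))

# With m = log2(n), S(m) = T(2^m) satisfies S(m) = 2 S(m/2) + m:
assert T(2 ** 4) == 12    # S(4)  = 2*S(2) + 4  = 2*4 + 4
assert T(2 ** 16) == 80   # S(16) = 2*S(8) + 16 = 2*32 + 16
```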

42 Some basic algebra: sums
a + (a + 1) + (a + 2) + ... + (b − 1) + b = Σ_{i=a}^{b} i = (a + b)(b − a + 1)/2
a + ac + ac² + ... + ac^{n−1} + ac^n = Σ_{i=0}^{n} ac^i = a(1 − c^{n+1})/(1 − c)
If 0 < c < 1, then we can estimate the sum as follows:
Σ_{i=0}^{n} ac^i = a(1 − c^{n+1})/(1 − c) < a/(1 − c)

45 Logarithms and powers
log_a n and a^n are inverse functions of each other:
log_a(a^n) = n for all n
a^{log_a n} = n for all n > 0
Properties:
log_a(b · c) = log_a b + log_a c
log_a b = log_c b / log_c a
log_a b = 1 / log_b a
a^{log_c b} = b^{log_c a}
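These identities are easy to spot-check with Python's math module (the sample values a = 2, b = 8, c = 10 are arbitrary):

```python
import math

a, b, c = 2.0, 8.0, 10.0
assert math.isclose(math.log(b * c, a), math.log(b, a) + math.log(c, a))
assert math.isclose(math.log(b, a), math.log(b, c) / math.log(a, c))  # change of base
assert math.isclose(math.log(b, a), 1 / math.log(a, b))
assert math.isclose(a ** math.log(b, c), b ** math.log(a, c))
print("all four identities hold")
```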

52 The Master Method
A recipe for recurrences of the form T(n) = aT(n/b) + f(n), with a ≥ 1 and b > 1 constant, and f(n) an asymptotically positive function (f(n) = 5, f(n) = c log n, f(n) = n, f(n) = n^12 are all fine).
We split the problem into a subproblems, each of size n/b. The subproblems are solved recursively, each in time T(n/b). Dividing the problem and combining the solutions of the subproblems is captured by f(n).
The method deals with many frequently seen recurrences (in particular, our Merge-Sort example with a = b = 2 and f(n) = Θ(n)).

53 Theorem
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence T(n) = aT(n/b) + f(n), where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows.
1. If f(n) = O(n^{(log_b a) − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} log n).
3. If f(n) = Ω(n^{(log_b a) + ε}) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

54 Notes on the Master Theorem
Note 1: Although it looks rather scary, it really is not. For instance, with Merge-Sort's recurrence T(n) = 2T(n/2) + Θ(n) we have n^{log_b a} = n^{log_2 2} = n^1 = n, so f(n) = Θ(n^{log_b a}) and we can apply case 2. The result is therefore Θ(n^{log_b a} log n) = Θ(n log n).
Note 2: In case 1, f(n) = n^{(log_b a) − ε} = n^{log_b a}/n^ε = o(n^{log_b a}), so the ε does matter. This case is basically about small functions f. But it is not enough for f(n) to be just asymptotically smaller than n^{log_b a} (that is, f(n) ∈ o(n^{log_b a})); it must be polynomially smaller!

56 Note 3: Similarly, in case 3, f(n) = n^{(log_b a) + ε} = n^{log_b a} · n^ε = ω(n^{log_b a}), so the ε matters again. This case is basically about large functions f. But again, f(n) ∈ ω(n^{log_b a}) is not enough; it must be polynomially larger. In addition, f(n) has to satisfy the "regularity condition": a·f(n/b) ≤ c·f(n) for some constant c < 1 and all n ≥ n₀ for some n₀.

58 The idea is that we compare n^{log_b a} to f(n); the result is (intuitively) determined by the larger of the two.
In case 1, n^{log_b a} is larger, so the result is Θ(n^{log_b a}).
In case 3, f(n) is larger, so the result is Θ(f(n)).
In case 2, both have the same order; we multiply it by a logarithmic factor, and the result is Θ(n^{log_b a} log n) = Θ(f(n) log n).
Important: the theorem does not cover all possible cases. For instance, there is a gap between cases 1 and 2 whenever f(n) is smaller than n^{log_b a} but not polynomially smaller.

63 Using the master theorem
Simple enough. Some examples:
T(n) = 9T(n/3) + n: we have a = 9, b = 3, f(n) = n. Thus n^{log_b a} = n^{log_3 9} = n². Clearly f(n) = O(n^{(log_3 9) − ε}) for ε = 1, so case 1 gives T(n) = Θ(n²).
T(n) = T(2n/3) + 1: we have a = 1, b = 3/2, and f(n) = 1, so n^{log_b a} = n^{log_{3/2} 1} = n⁰ = 1. Apply case 2 (f(n) = Θ(n^{log_b a}) = Θ(1)); the result is T(n) = Θ(log n).

65 T(n) = 3T(n/4) + n log n: we have a = 3, b = 4, and f(n) = n log n, so n^{log_b a} = n^{log_4 3} = O(n^{0.793}). Clearly f(n) = n log n = Ω(n), and thus also f(n) = Ω(n^{(log_4 3) + ε}) for ε ≈ 0.2. Also, a·f(n/b) = 3(n/4) log(n/4) ≤ (3/4)n log n = c·f(n) for c = 3/4 < 1. Thus we can apply case 3; the result is T(n) = Θ(n log n).
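For driving functions of the form f(n) = n^d, the three cases reduce to comparing d with log_b a; a small helper sketching this (the function name and output format are our own):

```python
import math

def master_theorem(a, b, d, tol=1e-12):
    """Classify T(n) = a*T(n/b) + Theta(n^d) for a >= 1, b > 1, d >= 0."""
    crit = math.log(a, b)                # the critical exponent log_b(a)
    if abs(d - crit) < tol:
        return f"Theta(n^{d:g} log n)"   # case 2: same order
    if d < crit:
        return f"Theta(n^{crit:g})"      # case 1: f polynomially smaller
    return f"Theta(n^{d:g})"             # case 3: f polynomially larger
                                         # (regularity always holds for n^d)

print(master_theorem(9, 3, 1))   # T(n) = 9T(n/3) + n   -> Theta(n^2)
print(master_theorem(2, 2, 1))   # Merge-Sort           -> Theta(n^1 log n)
print(master_theorem(2, 2, 3))   # T(n) = 2T(n/2) + n^3 -> Theta(n^3)
```

Note this sketch only covers polynomial f(n); the n log n example above needs the general statement (case 3 with the regularity check).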

66 Exercises
Problem: give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for n ≤ 2.
(a) T(n) = 2T(n/2) + n³
(b) T(n) = T(n − 2) + n
(c) T(n) = T(√n) + log log n
(d) T(n) = 16T(n/4) + n²
(e) T(n) = 2T(n − 1) + log n

67 Integer Multiplication
We are given two n-bit numbers x, y, and we want to compute x · y. The usual multiplication algorithm takes O(n²) time. Can we do better than O(n²)?
We can write x = x1 · 2^{n/2} + x0 and y = y1 · 2^{n/2} + y0, where x0, x1, y0, y1 are (n/2)-bit numbers.

71 For simplicity we write xy instead of x · y.
xy = (x1 · 2^{n/2} + x0)(y1 · 2^{n/2} + y0) = x1y1 · 2^n + (x1y0 + x0y1) · 2^{n/2} + x0y0
Now if we compute each of x1y1, x0y0, x1y0, x0y1, then we can compute xy. This gives
T(n) ≤ 4T(n/2) + cn,
where O(n) is the time it takes to add two n-bit numbers.
Using the Master Method with a = 4, b = 2 and f(n) = cn we obtain T(n) = O(n^{log_2 4}) = O(n²). Not good!

75 If we compute p = (x1 + x0)(y1 + y0) = x1y1 + x1y0 + x0y1 + x0y0, then instead of x1y0 + x0y1 we can write p − x1y1 − x0y0.
p is the product of two (n/2)-bit numbers, so in total we only need to compute three products of (n/2)-bit numbers.

77 Recursive-Multiply(x, y)
1. Write x = x1 · 2^{n/2} + x0 and y = y1 · 2^{n/2} + y0
2. Compute x1 + x0 and y1 + y0
3. p := Recursive-Multiply(x0 + x1, y0 + y1)
4. x1y1 := Recursive-Multiply(x1, y1)
5. x0y0 := Recursive-Multiply(x0, y0)
6. Return x1y1 · 2^n + (p − x1y1 − x0y0) · 2^{n/2} + x0y0
This gives T(n) ≤ 3T(n/2) + cn, and hence T(n) = O(n^{log_2 3}) = O(n^{1.59}).
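Recursive-Multiply is the classic Karatsuba algorithm. A Python sketch (splitting at half the larger bit-length is our own choice; assumes nonnegative integers):

```python
def recursive_multiply(x, y):
    """Karatsuba-style multiplication of nonnegative integers."""
    if x < 2 or y < 2:                  # base case: a 0- or 1-bit operand
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # x = x1*2^half + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)   # y = y1*2^half + y0
    p = recursive_multiply(x1 + x0, y1 + y0)
    x1y1 = recursive_multiply(x1, y1)
    x0y0 = recursive_multiply(x0, y0)
    return (x1y1 << (2 * half)) + ((p - x1y1 - x0y0) << half) + x0y0

assert recursive_multiply(1234567, 7654321) == 1234567 * 7654321
```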

79 Smallest distance between pairs of points
We are given a set {p1, p2, ..., pn} of n points in the (2D) plane. Find a pair of points with the smallest distance.
It is clear that we can solve the problem in O(n²). Can we do better than O(n²)?
Sort the points by x-coordinate to produce a list P_x (O(n log n)).
Sort the points by y-coordinate to produce a list P_y (O(n log n)).
Let Q be the set of points in the first ⌈n/2⌉ positions of P_x and let R be the rest of the points in P_x. Construct the lists Q_x, Q_y and R_x, R_y.
1. Recursively compute a closest pair of points for Q, say q0, q1, and
2. recursively compute a closest pair of points for R, say r0, r1.

86 Let δ be the smaller of d(q0, q1) and d(r0, r1) (d(x, y) denotes the distance between x and y). Let x* denote the x-coordinate of the rightmost point in Q, and let L be the line x = x*. L separates Q from R.
If there exist q ∈ Q and r ∈ R with d(q, r) < δ, then each of q, r lies within distance δ of L.
So we can narrow down our search: let S be the set of points in P within distance δ of L, and let S_y denote the list of points in S sorted by increasing y-coordinate. (By going through the list P_y, we can compute S_y in O(n).)

90 There exist q ∈ Q and r ∈ R with d(q, r) < δ if and only if there exist s, s′ ∈ S with d(s, s′) < δ.
Now partition S into a grid of δ/2-by-δ/2 cells in such a way that L is one of the lines of the grid. Note that each grid cell contains at most one point: otherwise the distance between two points in the same cell would be at most δ√2/2 < δ, a contradiction to δ = min{d(q0, q1), d(r0, r1)}.
If s, s′ ∈ S have the property that d(s, s′) < δ, then s and s′ are within 15 positions of each other in the sorted list S_y.

92 ClosestPair(P)
1. Construct P_x and P_y (O(n log n) time)
2. (p0, p1) := Closest-Pair(P_x, P_y)
Closest-Pair(P_x, P_y)
1. If |P| ≤ 3, compute the closest pair by trying all pairs and return it.
2. Construct Q_x, Q_y and R_x, R_y (O(n) time)
3. (q0, q1) := Closest-Pair(Q_x, Q_y)
4. (r0, r1) := Closest-Pair(R_x, R_y)
5. δ := min(d(q0, q1), d(r0, r1))
6. x* := maximum x-coordinate of a point in set Q
7. L := {(x, y) : x = x*}
8. S := points in P within distance δ of L
9. Construct S_y

93 10. For each point s ∈ S_y, compute the distance from s to each of the next 15 points in S_y.
11. Let s, s′ be the pair achieving the minimum distance (O(n) time).
12. If d(s, s′) < δ, return (s, s′).
13. Else if d(q0, q1) < d(r0, r1), return (q0, q1).
14. Else return (r0, r1).
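The whole algorithm fits in a short Python sketch (it returns the minimum distance rather than the pair, and assumes at least two distinct points, since sets of tuples are used to split P_y):

```python
import math

def closest_pair(points):
    """O(n log n) divide-and-conquer closest-pair distance."""
    def rec(px, py):
        n = len(px)
        if n <= 3:  # base case: try all pairs
            return min(math.dist(px[i], px[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = (n + 1) // 2
        qx, rx = px[:mid], px[mid:]          # Q and R, sorted by x
        qset = set(qx)
        qy = [p for p in py if p in qset]    # Q sorted by y
        ry = [p for p in py if p not in qset]
        delta = min(rec(qx, qy), rec(rx, ry))
        xstar = qx[-1][0]                    # the separating line L: x = x*
        sy = [p for p in py if abs(p[0] - xstar) < delta]
        best = delta
        for i, s in enumerate(sy):           # 15 positions suffice
            for t in sy[i + 1:i + 16]:
                best = min(best, math.dist(s, t))
        return best

    px = sorted(points)
    py = sorted(points, key=lambda p: p[1])
    return rec(px, py)

pts = [(0, 0), (3, 4), (1, 1), (10, 10)]
print(closest_pair(pts))  # sqrt(2) = 1.414...
```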

94 Matrix Multiplication
Given: two n × n matrices A, B of integers. Goal: compute A · B.
The direct approach has O(n³) time complexity.
Splitting each matrix into four n/2 × n/2 blocks:
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
In this case we have T(n) = 8T(n/2) + O(n²) = Θ(n³). How can we reduce the time complexity?

97 Strassen's Matrix Multiplication
To improve the algorithm we reduce the number of multiplications by introducing new variables as follows:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 − B22)
S = A22 (B21 − B11)
T = (A11 + A12) B22
U = (A21 − A11)(B11 + B12)
V = (A12 − A22)(B21 + B22)
Now we have:
C11 = P + S − T + V
C12 = R + T
C21 = Q + S
C22 = P + R − Q + U
So T(n) can be expressed as T(n) = 7T(n/2) + O(n²) = Θ(n^{log_2 7}).
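A Python sketch of Strassen's scheme on matrices given as lists of lists (assumes n is a power of two; the helper names are our own):

```python
def strassen(A, B):
    """Strassen multiplication of n x n integer matrices, n a power of two."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):   # split M into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    P = strassen(add(A11, A22), add(B11, B22))   # the seven products
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]  # reassemble the blocks
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```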

99 Selection: given an integer array of size n, find the k-th smallest number.
Suppose n = 5(2r + 1); otherwise we append some zeros to the end of the array.
Divide the array into groups of five and find the median of each group.
[Slides 102-104: figures illustrating the median-of-medians construction.]
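The construction above is the start of the classic median-of-medians selection. A Python sketch of the full procedure (the partitioning details and the 1-based k are our own choices):

```python
def select(a, k):
    """k-th smallest element (1-indexed) via median-of-medians."""
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # medians of the groups of five
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, (len(medians) + 1) // 2)   # median of medians
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    if k <= len(lo):
        return select(lo, k)
    if k <= len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))

assert select([9, 1, 8, 2, 7, 3, 6, 4, 5], 3) == 3
```

The median-of-medians pivot guarantees a constant-fraction split, which is what makes the worst-case running time linear.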


More information

Analysis of Algorithms I: Asymptotic Notation, Induction, and MergeSort

Analysis of Algorithms I: Asymptotic Notation, Induction, and MergeSort Analysis of Algorithms I: Asymptotic Notation, Induction, and MergeSort Xi Chen Columbia University We continue with two more asymptotic notation: o( ) and ω( ). Let f (n) and g(n) are functions that map

More information

Methods for solving recurrences

Methods for solving recurrences Methods for solving recurrences Analyzing the complexity of mergesort The merge function Consider the following implementation: 1 int merge ( int v1, int n1, int v, int n ) { 3 int r = malloc ( ( n1+n

More information

Grade 11/12 Math Circles Fall Nov. 5 Recurrences, Part 2

Grade 11/12 Math Circles Fall Nov. 5 Recurrences, Part 2 1 Faculty of Mathematics Waterloo, Ontario Centre for Education in Mathematics and Computing Grade 11/12 Math Circles Fall 2014 - Nov. 5 Recurrences, Part 2 Running time of algorithms In computer science,

More information

CS 577 Introduction to Algorithms: Strassen s Algorithm and the Master Theorem

CS 577 Introduction to Algorithms: Strassen s Algorithm and the Master Theorem CS 577 Introduction to Algorithms: Jin-Yi Cai University of Wisconsin Madison In the last class, we described InsertionSort and showed that its worst-case running time is Θ(n 2 ). Check Figure 2.2 for

More information

CS 470/570 Divide-and-Conquer. Format of Divide-and-Conquer algorithms: Master Recurrence Theorem (simpler version)

CS 470/570 Divide-and-Conquer. Format of Divide-and-Conquer algorithms: Master Recurrence Theorem (simpler version) CS 470/570 Divide-and-Conquer Format of Divide-and-Conquer algorithms: Divide: Split the array or list into smaller pieces Conquer: Solve the same problem recursively on smaller pieces Combine: Build the

More information

COMP Analysis of Algorithms & Data Structures

COMP Analysis of Algorithms & Data Structures COMP 3170 - Analysis of Algorithms & Data Structures Shahin Kamali Lecture 5 - Jan. 12, 2018 CLRS 1.1, 1.2, 2.2, 3.1, 4.3, 4.5 University of Manitoba Picture is from the cover of the textbook CLRS. 1 /

More information

Divide and Conquer. Andreas Klappenecker. [based on slides by Prof. Welch]

Divide and Conquer. Andreas Klappenecker. [based on slides by Prof. Welch] Divide and Conquer Andreas Klappenecker [based on slides by Prof. Welch] Divide and Conquer Paradigm An important general technique for designing algorithms: divide problem into subproblems recursively

More information

Data structures Exercise 1 solution. Question 1. Let s start by writing all the functions in big O notation:

Data structures Exercise 1 solution. Question 1. Let s start by writing all the functions in big O notation: Data structures Exercise 1 solution Question 1 Let s start by writing all the functions in big O notation: f 1 (n) = 2017 = O(1), f 2 (n) = 2 log 2 n = O(n 2 ), f 3 (n) = 2 n = O(2 n ), f 4 (n) = 1 = O

More information

Review Of Topics. Review: Induction

Review Of Topics. Review: Induction Review Of Topics Asymptotic notation Solving recurrences Sorting algorithms Insertion sort Merge sort Heap sort Quick sort Counting sort Radix sort Medians/order statistics Randomized algorithm Worst-case

More information

CS483 Design and Analysis of Algorithms

CS483 Design and Analysis of Algorithms CS483 Design and Analysis of Algorithms Lecture 6-8 Divide and Conquer Algorithms Instructor: Fei Li lifei@cs.gmu.edu with subject: CS483 Office hours: STII, Room 443, Friday 4:00pm - 6:00pm or by appointments

More information

Chapter 4 Divide-and-Conquer

Chapter 4 Divide-and-Conquer Chapter 4 Divide-and-Conquer 1 About this lecture (1) Recall the divide-and-conquer paradigm, which we used for merge sort: Divide the problem into a number of subproblems that are smaller instances of

More information

Algorithms Chapter 4 Recurrences

Algorithms Chapter 4 Recurrences Algorithms Chapter 4 Recurrences Instructor: Ching Chi Lin 林清池助理教授 chingchi.lin@gmail.com Department of Computer Science and Engineering National Taiwan Ocean University Outline The substitution method

More information

data structures and algorithms lecture 2

data structures and algorithms lecture 2 data structures and algorithms 2018 09 06 lecture 2 recall: insertion sort Algorithm insertionsort(a, n): for j := 2 to n do key := A[j] i := j 1 while i 1 and A[i] > key do A[i + 1] := A[i] i := i 1 A[i

More information

Asymptotic Analysis and Recurrences

Asymptotic Analysis and Recurrences Appendix A Asymptotic Analysis and Recurrences A.1 Overview We discuss the notion of asymptotic analysis and introduce O, Ω, Θ, and o notation. We then turn to the topic of recurrences, discussing several

More information

1 Divide and Conquer (September 3)

1 Divide and Conquer (September 3) The control of a large force is the same principle as the control of a few men: it is merely a question of dividing up their numbers. Sun Zi, The Art of War (c. 400 C.E.), translated by Lionel Giles (1910)

More information

Data Structures and Algorithms Chapter 3

Data Structures and Algorithms Chapter 3 Data Structures and Algorithms Chapter 3 1. Divide and conquer 2. Merge sort, repeated substitutions 3. Tiling 4. Recurrences Recurrences Running times of algorithms with recursive calls can be described

More information

Divide and Conquer Algorithms

Divide and Conquer Algorithms Divide and Conquer Algorithms T. M. Murali February 19, 2013 Divide and Conquer Break up a problem into several parts. Solve each part recursively. Solve base cases by brute force. Efficiently combine

More information

Algorithms, Design and Analysis. Order of growth. Table 2.1. Big-oh. Asymptotic growth rate. Types of formulas for basic operation count

Algorithms, Design and Analysis. Order of growth. Table 2.1. Big-oh. Asymptotic growth rate. Types of formulas for basic operation count Types of formulas for basic operation count Exact formula e.g., C(n) = n(n-1)/2 Algorithms, Design and Analysis Big-Oh analysis, Brute Force, Divide and conquer intro Formula indicating order of growth

More information

CSE548, AMS542: Analysis of Algorithms, Fall 2017 Date: October 11. In-Class Midterm. ( 7:05 PM 8:20 PM : 75 Minutes )

CSE548, AMS542: Analysis of Algorithms, Fall 2017 Date: October 11. In-Class Midterm. ( 7:05 PM 8:20 PM : 75 Minutes ) CSE548, AMS542: Analysis of Algorithms, Fall 2017 Date: October 11 In-Class Midterm ( 7:05 PM 8:20 PM : 75 Minutes ) This exam will account for either 15% or 30% of your overall grade depending on your

More information

Divide-and-conquer: Order Statistics. Curs: Fall 2017

Divide-and-conquer: Order Statistics. Curs: Fall 2017 Divide-and-conquer: Order Statistics Curs: Fall 2017 The divide-and-conquer strategy. 1. Break the problem into smaller subproblems, 2. recursively solve each problem, 3. appropriately combine their answers.

More information

Divide and Conquer Algorithms

Divide and Conquer Algorithms Divide and Conquer Algorithms T. M. Murali March 17, 2014 Divide and Conquer Break up a problem into several parts. Solve each part recursively. Solve base cases by brute force. Efficiently combine solutions

More information

Computational Complexity - Pseudocode and Recursions

Computational Complexity - Pseudocode and Recursions Computational Complexity - Pseudocode and Recursions Nicholas Mainardi 1 Dipartimento di Elettronica e Informazione Politecnico di Milano nicholas.mainardi@polimi.it June 6, 2018 1 Partly Based on Alessandro

More information

Divide and Conquer. Andreas Klappenecker

Divide and Conquer. Andreas Klappenecker Divide and Conquer Andreas Klappenecker The Divide and Conquer Paradigm The divide and conquer paradigm is important general technique for designing algorithms. In general, it follows the steps: - divide

More information

Divide-and-Conquer. Reading: CLRS Sections 2.3, 4.1, 4.2, 4.3, 28.2, CSE 6331 Algorithms Steve Lai

Divide-and-Conquer. Reading: CLRS Sections 2.3, 4.1, 4.2, 4.3, 28.2, CSE 6331 Algorithms Steve Lai Divide-and-Conquer Reading: CLRS Sections 2.3, 4.1, 4.2, 4.3, 28.2, 33.4. CSE 6331 Algorithms Steve Lai Divide and Conquer Given an instance x of a prolem, the method works as follows: divide-and-conquer

More information

Lecture 4. Quicksort

Lecture 4. Quicksort Lecture 4. Quicksort T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to Algorithms, 3rd Edition, MIT Press, 2009 Sungkyunkwan University Hyunseung Choo choo@skku.edu Copyright 2000-2018 Networking

More information

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation:

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation: CS 124 Section #1 Big-Oh, the Master Theorem, and MergeSort 1/29/2018 1 Big-Oh Notation 1.1 Definition Big-Oh notation is a way to describe the rate of growth of functions. In CS, we use it to describe

More information

CPS 616 DIVIDE-AND-CONQUER 6-1

CPS 616 DIVIDE-AND-CONQUER 6-1 CPS 616 DIVIDE-AND-CONQUER 6-1 DIVIDE-AND-CONQUER Approach 1. Divide instance of problem into two or more smaller instances 2. Solve smaller instances recursively 3. Obtain solution to original (larger)

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Asymptotic Analysis, recurrences Date: 9/7/17

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Asymptotic Analysis, recurrences Date: 9/7/17 601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Asymptotic Analysis, recurrences Date: 9/7/17 2.1 Notes Homework 1 will be released today, and is due a week from today by the beginning

More information

Growth of Functions (CLRS 2.3,3)

Growth of Functions (CLRS 2.3,3) Growth of Functions (CLRS 2.3,3) 1 Review Last time we discussed running time of algorithms and introduced the RAM model of computation. Best-case running time: the shortest running time for any input

More information

Data Structures and Algorithms Chapter 3

Data Structures and Algorithms Chapter 3 1 Data Structures and Algorithms Chapter 3 Werner Nutt 2 Acknowledgments The course follows the book Introduction to Algorithms, by Cormen, Leiserson, Rivest and Stein, MIT Press [CLRST]. Many examples

More information

In-Class Soln 1. CS 361, Lecture 4. Today s Outline. In-Class Soln 2

In-Class Soln 1. CS 361, Lecture 4. Today s Outline. In-Class Soln 2 In-Class Soln 1 Let f(n) be an always positive function and let g(n) = f(n) log n. Show that f(n) = o(g(n)) CS 361, Lecture 4 Jared Saia University of New Mexico For any positive constant c, we want to

More information

Divide and Conquer algorithms

Divide and Conquer algorithms Divide and Conquer algorithms Another general method for constructing algorithms is given by the Divide and Conquer strategy. We assume that we have a problem with input that can be split into parts in

More information

Divide and conquer. Philip II of Macedon

Divide and conquer. Philip II of Macedon Divide and conquer Philip II of Macedon Divide and conquer 1) Divide your problem into subproblems 2) Solve the subproblems recursively, that is, run the same algorithm on the subproblems (when the subproblems

More information

COMP 9024, Class notes, 11s2, Class 1

COMP 9024, Class notes, 11s2, Class 1 COMP 90, Class notes, 11s, Class 1 John Plaice Sun Jul 31 1::5 EST 011 In this course, you will need to know a bit of mathematics. We cover in today s lecture the basics. Some of this material is covered

More information

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation:

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation: CS 124 Section #1 Big-Oh, the Master Theorem, and MergeSort 1/29/2018 1 Big-Oh Notation 1.1 Definition Big-Oh notation is a way to describe the rate of growth of functions. In CS, we use it to describe

More information

A SUMMARY OF RECURSION SOLVING TECHNIQUES

A SUMMARY OF RECURSION SOLVING TECHNIQUES A SUMMARY OF RECURSION SOLVING TECHNIQUES KIMMO ERIKSSON, KTH These notes are meant to be a complement to the material on recursion solving techniques in the textbook Discrete Mathematics by Biggs. In

More information

MA008/MIIZ01 Design and Analysis of Algorithms Lecture Notes 3

MA008/MIIZ01 Design and Analysis of Algorithms Lecture Notes 3 MA008 p.1/37 MA008/MIIZ01 Design and Analysis of Algorithms Lecture Notes 3 Dr. Markus Hagenbuchner markus@uow.edu.au. MA008 p.2/37 Exercise 1 (from LN 2) Asymptotic Notation When constants appear in exponents

More information

COMP 382: Reasoning about algorithms

COMP 382: Reasoning about algorithms Fall 2014 Unit 4: Basics of complexity analysis Correctness and efficiency So far, we have talked about correctness and termination of algorithms What about efficiency? Running time of an algorithm For

More information

CSCI Honor seminar in algorithms Homework 2 Solution

CSCI Honor seminar in algorithms Homework 2 Solution CSCI 493.55 Honor seminar in algorithms Homework 2 Solution Saad Mneimneh Visiting Professor Hunter College of CUNY Problem 1: Rabin-Karp string matching Consider a binary string s of length n and another

More information

Divide and Conquer. CSE21 Winter 2017, Day 9 (B00), Day 6 (A00) January 30,

Divide and Conquer. CSE21 Winter 2017, Day 9 (B00), Day 6 (A00) January 30, Divide and Conquer CSE21 Winter 2017, Day 9 (B00), Day 6 (A00) January 30, 2017 http://vlsicad.ucsd.edu/courses/cse21-w17 Merging sorted lists: WHAT Given two sorted lists a 1 a 2 a 3 a k b 1 b 2 b 3 b

More information

CS 231: Algorithmic Problem Solving

CS 231: Algorithmic Problem Solving CS 231: Algorithmic Problem Solving Naomi Nishimura Module 4 Date of this version: June 11, 2018 WARNING: Drafts of slides are made available prior to lecture for your convenience. After lecture, slides

More information

Divide and Conquer CPE 349. Theresa Migler-VonDollen

Divide and Conquer CPE 349. Theresa Migler-VonDollen Divide and Conquer CPE 349 Theresa Migler-VonDollen Divide and Conquer Divide and Conquer is a strategy that solves a problem by: 1 Breaking the problem into subproblems that are themselves smaller instances

More information

CS 4104 Data and Algorithm Analysis. Recurrence Relations. Modeling Recursive Function Cost. Solving Recurrences. Clifford A. Shaffer.

CS 4104 Data and Algorithm Analysis. Recurrence Relations. Modeling Recursive Function Cost. Solving Recurrences. Clifford A. Shaffer. Department of Computer Science Virginia Tech Blacksburg, Virginia Copyright c 2010,2017 by Clifford A. Shaffer Data and Algorithm Analysis Title page Data and Algorithm Analysis Clifford A. Shaffer Spring

More information

COMP Analysis of Algorithms & Data Structures

COMP Analysis of Algorithms & Data Structures COMP 3170 - Analysis of Algorithms & Data Structures Shahin Kamali Lecture 4 - Jan. 10, 2018 CLRS 1.1, 1.2, 2.2, 3.1, 4.3, 4.5 University of Manitoba Picture is from the cover of the textbook CLRS. 1 /

More information

Solving Recurrences. 1. Express the running time (or use of some other resource) as a recurrence.

Solving Recurrences. 1. Express the running time (or use of some other resource) as a recurrence. Solving Recurrences Recurrences and Recursive Code Many (perhaps most) recursive algorithms fall into one of two categories: tail recursion and divide-andconquer recursion. We would like to develop some

More information

Recurrence Relations

Recurrence Relations Recurrence Relations Winter 2017 Recurrence Relations Recurrence Relations A recurrence relation for the sequence {a n } is an equation that expresses a n in terms of one or more of the previous terms

More information

Class Note #14. In this class, we studied an algorithm for integer multiplication, which. 2 ) to θ(n

Class Note #14. In this class, we studied an algorithm for integer multiplication, which. 2 ) to θ(n Class Note #14 Date: 03/01/2006 [Overall Information] In this class, we studied an algorithm for integer multiplication, which improved the running time from θ(n 2 ) to θ(n 1.59 ). We then used some of

More information

Inf 2B: Sorting, MergeSort and Divide-and-Conquer

Inf 2B: Sorting, MergeSort and Divide-and-Conquer Inf 2B: Sorting, MergeSort and Divide-and-Conquer Lecture 7 of ADS thread Kyriakos Kalorkoti School of Informatics University of Edinburgh The Sorting Problem Input: Task: Array A of items with comparable

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 101, Winter 2018 Design and Analysis of Algorithms Lecture 5: Divide and Conquer (Part 2) Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ A Lower Bound on Convex Hull Lecture 4 Task: sort the

More information

COMP Analysis of Algorithms & Data Structures

COMP Analysis of Algorithms & Data Structures COMP 3170 - Analysis of Algorithms & Data Structures Shahin Kamali Lecture 4 - Jan. 14, 2019 CLRS 1.1, 1.2, 2.2, 3.1, 4.3, 4.5 University of Manitoba Picture is from the cover of the textbook CLRS. COMP

More information

CS 4407 Algorithms Lecture 2: Iterative and Divide and Conquer Algorithms

CS 4407 Algorithms Lecture 2: Iterative and Divide and Conquer Algorithms CS 4407 Algorithms Lecture 2: Iterative and Divide and Conquer Algorithms Prof. Gregory Provan Department of Computer Science University College Cork 1 Lecture Outline CS 4407, Algorithms Growth Functions

More information

Kartsuba s Algorithm and Linear Time Selection

Kartsuba s Algorithm and Linear Time Selection CS 374: Algorithms & Models of Computation, Fall 2015 Kartsuba s Algorithm and Linear Time Selection Lecture 09 September 22, 2015 Chandra & Manoj (UIUC) CS374 1 Fall 2015 1 / 32 Part I Fast Multiplication

More information

CMPSCI611: Three Divide-and-Conquer Examples Lecture 2

CMPSCI611: Three Divide-and-Conquer Examples Lecture 2 CMPSCI611: Three Divide-and-Conquer Examples Lecture 2 Last lecture we presented and analyzed Mergesort, a simple divide-and-conquer algorithm. We then stated and proved the Master Theorem, which gives

More information

Lecture 1: Asymptotics, Recurrences, Elementary Sorting

Lecture 1: Asymptotics, Recurrences, Elementary Sorting Lecture 1: Asymptotics, Recurrences, Elementary Sorting Instructor: Outline 1 Introduction to Asymptotic Analysis Rate of growth of functions Comparing and bounding functions: O, Θ, Ω Specifying running

More information

Bucket-Sort. Have seen lower bound of Ω(nlog n) for comparisonbased. Some cheating algorithms achieve O(n), given certain assumptions re input

Bucket-Sort. Have seen lower bound of Ω(nlog n) for comparisonbased. Some cheating algorithms achieve O(n), given certain assumptions re input Bucket-Sort Have seen lower bound of Ω(nlog n) for comparisonbased sorting algs Some cheating algorithms achieve O(n), given certain assumptions re input One example: bucket sort Assumption: input numbers

More information

Asymptotic Notation. such that t(n) cf(n) for all n n 0. for some positive real constant c and integer threshold n 0

Asymptotic Notation. such that t(n) cf(n) for all n n 0. for some positive real constant c and integer threshold n 0 Asymptotic Notation Asymptotic notation deals with the behaviour of a function in the limit, that is, for sufficiently large values of its parameter. Often, when analysing the run time of an algorithm,

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 101, Winter 2018 Design and Analysis of Algorithms Lecture 4: Divide and Conquer (I) Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ Divide and Conquer ( DQ ) First paradigm or framework DQ(S)

More information

CMPS 2200 Fall Divide-and-Conquer. Carola Wenk. Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk

CMPS 2200 Fall Divide-and-Conquer. Carola Wenk. Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk CMPS 2200 Fall 2017 Divide-and-Conquer Carola Wenk Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk 1 The divide-and-conquer design paradigm 1. Divide the problem (instance)

More information

CS173 Running Time and Big-O. Tandy Warnow

CS173 Running Time and Big-O. Tandy Warnow CS173 Running Time and Big-O Tandy Warnow CS 173 Running Times and Big-O analysis Tandy Warnow Today s material We will cover: Running time analysis Review of running time analysis of Bubblesort Review

More information

CSE 421 Algorithms: Divide and Conquer

CSE 421 Algorithms: Divide and Conquer CSE 42 Algorithms: Divide and Conquer Larry Ruzzo Thanks to Richard Anderson, Paul Beame, Kevin Wayne for some slides Outline: General Idea algorithm design paradigms: divide and conquer Review of Merge

More information

Chapter 5 Divide and Conquer

Chapter 5 Divide and Conquer CMPT 705: Design and Analysis of Algorithms Spring 008 Chapter 5 Divide and Conquer Lecturer: Binay Bhattacharya Scribe: Chris Nell 5.1 Introduction Given a problem P with input size n, P (n), we define

More information

Big-oh stuff. You should know this definition by heart and be able to give it,

Big-oh stuff. You should know this definition by heart and be able to give it, Big-oh stuff Definition. if asked. You should know this definition by heart and be able to give it, Let f and g both be functions from R + to R +. Then f is O(g) (pronounced big-oh ) if and only if there

More information

CSE 421 Algorithms. T(n) = at(n/b) + n c. Closest Pair Problem. Divide and Conquer Algorithms. What you really need to know about recurrences

CSE 421 Algorithms. T(n) = at(n/b) + n c. Closest Pair Problem. Divide and Conquer Algorithms. What you really need to know about recurrences CSE 421 Algorithms Richard Anderson Lecture 13 Divide and Conquer What you really need to know about recurrences Work per level changes geometrically with the level Geometrically increasing (x > 1) The

More information

CS Data Structures and Algorithm Analysis

CS Data Structures and Algorithm Analysis CS 483 - Data Structures and Algorithm Analysis Lecture II: Chapter 2 R. Paul Wiegand George Mason University, Department of Computer Science February 1, 2006 Outline 1 Analysis Framework 2 Asymptotic

More information

Algorithm Analysis Recurrence Relation. Chung-Ang University, Jaesung Lee

Algorithm Analysis Recurrence Relation. Chung-Ang University, Jaesung Lee Algorithm Analysis Recurrence Relation Chung-Ang University, Jaesung Lee Recursion 2 Recursion 3 Recursion in Real-world Fibonacci sequence = + Initial conditions: = 0 and = 1. = + = + = + 0, 1, 1, 2,

More information

CS 161 Summer 2009 Homework #2 Sample Solutions

CS 161 Summer 2009 Homework #2 Sample Solutions CS 161 Summer 2009 Homework #2 Sample Solutions Regrade Policy: If you believe an error has been made in the grading of your homework, you may resubmit it for a regrade. If the error consists of more than

More information

Fundamental Algorithms

Fundamental Algorithms Chapter 2: Sorting, Winter 2018/19 1 Fundamental Algorithms Chapter 2: Sorting Jan Křetínský Winter 2018/19 Chapter 2: Sorting, Winter 2018/19 2 Part I Simple Sorts Chapter 2: Sorting, Winter 2018/19 3

More information

Fundamental Algorithms

Fundamental Algorithms Fundamental Algorithms Chapter 2: Sorting Harald Räcke Winter 2015/16 Chapter 2: Sorting, Winter 2015/16 1 Part I Simple Sorts Chapter 2: Sorting, Winter 2015/16 2 The Sorting Problem Definition Sorting

More information

Practical Session #3 - Recursions

Practical Session #3 - Recursions Practical Session #3 - Recursions Substitution method Guess the form of the solution and prove it by induction Iteration Method Convert the recurrence into a summation and solve it Tightly bound a recurrence

More information

Mid-term Exam Answers and Final Exam Study Guide CIS 675 Summer 2010

Mid-term Exam Answers and Final Exam Study Guide CIS 675 Summer 2010 Mid-term Exam Answers and Final Exam Study Guide CIS 675 Summer 2010 Midterm Problem 1: Recall that for two functions g : N N + and h : N N +, h = Θ(g) iff for some positive integer N and positive real

More information

Solutions. Problem 1: Suppose a polynomial in n of degree d has the form

Solutions. Problem 1: Suppose a polynomial in n of degree d has the form Assignment 1 1. Problem 3-1 on p. 57 2. Problem 3-2 on p. 58 3. Problem 4-5 on p. 86 4. Problem 6-1 on p. 142 5. Problem 7-4 on p. 162 6. Prove correctness (including halting) of SelectionSort (use loop

More information

Divide-and-conquer. Curs 2015

Divide-and-conquer. Curs 2015 Divide-and-conquer Curs 2015 The divide-and-conquer strategy. 1. Break the problem into smaller subproblems, 2. recursively solve each problem, 3. appropriately combine their answers. Known Examples: Binary

More information

Asymptotic Analysis 1

Asymptotic Analysis 1 Asymptotic Analysis 1 Last week, we discussed how to present algorithms using pseudocode. For example, we looked at an algorithm for singing the annoying song 99 Bottles of Beer on the Wall for arbitrary

More information

Divide and Conquer Algorithms

Divide and Conquer Algorithms Divide and Conquer Algorithms Introduction There exist many problems that can be solved using a divide-and-conquer algorithm. A divide-andconquer algorithm A follows these general guidelines. Divide Algorithm

More information

Divide and Conquer. Recurrence Relations

Divide and Conquer. Recurrence Relations Divide and Conquer Recurrence Relations Divide-and-Conquer Strategy: Break up problem into parts. Solve each part recursively. Combine solutions to sub-problems into overall solution. 2 MergeSort Mergesort.

More information

Solving Recurrences. 1. Express the running time (or use of some other resource) as a recurrence.

Solving Recurrences. 1. Express the running time (or use of some other resource) as a recurrence. 1 Recurrences and Recursive Code Solving Recurrences Many (perhaps most) recursive algorithms fall into one of two categories: tail recursion and divide-andconquer recursion. We would like to develop some

More information

i=1 i B[i] B[i] + A[i, j]; c n for j n downto i + 1 do c n i=1 (n i) C[i] C[i] + A[i, j]; c n

i=1 i B[i] B[i] + A[i, j]; c n for j n downto i + 1 do c n i=1 (n i) C[i] C[i] + A[i, j]; c n Fundamental Algorithms Homework #1 Set on June 25, 2009 Due on July 2, 2009 Problem 1. [15 pts] Analyze the worst-case time complexity of the following algorithms,and give tight bounds using the Theta

More information

Notes for Recitation 14

Notes for Recitation 14 6.04/18.06J Mathematics for Computer Science October 4, 006 Tom Leighton and Marten van Dijk Notes for Recitation 14 1 The Akra-Bazzi Theorem Theorem 1 (Akra-Bazzi, strong form). Suppose that: is defined

More information

CS383, Algorithms Spring 2009 HW1 Solutions

CS383, Algorithms Spring 2009 HW1 Solutions Prof. Sergio A. Alvarez http://www.cs.bc.edu/ alvarez/ 21 Campanella Way, room 569 alvarez@cs.bc.edu Computer Science Department voice: (617) 552-4333 Boston College fax: (617) 552-6790 Chestnut Hill,

More information

CSI2101-W08- Recurrence Relations

CSI2101-W08- Recurrence Relations Motivation CSI2101-W08- Recurrence Relations where do they come from modeling program analysis Solving Recurrence Relations by iteration arithmetic/geometric sequences linear homogenous recurrence relations

More information

Divide and Conquer: Polynomial Multiplication Version of October 1 / 7, 24201

Divide and Conquer: Polynomial Multiplication Version of October 1 / 7, 24201 Divide and Conquer: Polynomial Multiplication Version of October 7, 2014 Divide and Conquer: Polynomial Multiplication Version of October 1 / 7, 24201 Outline Outline: Introduction The polynomial multiplication

More information

Divide & Conquer. Jordi Cortadella and Jordi Petit Department of Computer Science

Divide & Conquer. Jordi Cortadella and Jordi Petit Department of Computer Science Divide & Conquer Jordi Cortadella and Jordi Petit Department of Computer Science Divide-and-conquer algorithms Strategy: Divide the problem into smaller subproblems of the same type of problem Solve the

More information

4.4 Recurrences and Big-O

4.4 Recurrences and Big-O 4.4. RECURRENCES AND BIG-O 91 4.4 Recurrences and Big-O In this section, we will extend our knowledge of recurrences, showing how to solve recurrences with big-o s in them, recurrences in which n b k,

More information

Divide and Conquer. Maximum/minimum. Median finding. CS125 Lecture 4 Fall 2016

Divide and Conquer. Maximum/minimum. Median finding. CS125 Lecture 4 Fall 2016 CS125 Lecture 4 Fall 2016 Divide and Conquer We have seen one general paradigm for finding algorithms: the greedy approach. We now consider another general paradigm, known as divide and conquer. We have

More information

Algorithms. Quicksort. Slide credit: David Luebke (Virginia)

Algorithms. Quicksort. Slide credit: David Luebke (Virginia) 1 Algorithms Quicksort Slide credit: David Luebke (Virginia) Sorting revisited We have seen algorithms for sorting: INSERTION-SORT, MERGESORT More generally: given a sequence of items Each item has a characteristic

More information