Part II. Foundations. Ernst Mayr, Harald Räcke 16


1 Part II Foundations

2 Vocabularies

a · b: "a times b", "a multiplied by b", "a into b"

a / b: "a divided by b", "a by b", "a over b" (a: numerator (Zähler), b: denominator (Nenner))

a^b: "a raised to the b-th power", "a to the b-th", "a raised to the power of b", "a to the power of b", "a raised to b", "a to the b", "a raised by the exponent of b"

3 Vocabularies

n!: "n factorial"

binom(n, k): "n choose k"

x_i: "x subscript i", "x sub i", "x i"

log_b a: "log to the base b of a", "log a to the base b"

f : X → Y, x ↦ x^2: f is a function that maps from domain (Definitionsbereich) X to codomain (Zielmenge) Y.

The set {y ∈ Y | ∃x ∈ X : f(x) = y} is the image or the range of the function (Bildbereich/Wertebereich).

4 3 Goals

- Gain knowledge about efficient algorithms for important problems, i.e., learn how to solve certain types of problems efficiently
- Learn how to analyze and judge the efficiency of algorithms
- Learn how to design efficient algorithms

5 4 Modelling Issues

What do you measure?

- Memory requirement
- Running time
- Number of comparisons
- Number of multiplications
- Number of hard-disc accesses
- Program size
- Power consumption

6 4 Modelling Issues

How do you measure?

- Implementing and testing on representative inputs
  - How do you choose your inputs?
  - May be very time-consuming
  - Very reliable results if done correctly
  - Results only hold for a specific machine and for a specific set of inputs
- Theoretical analysis in a specific model of computation
  - Gives asymptotic bounds like "this algorithm always runs in time O(n^2)"
  - Typically focuses on the worst case
  - Can give lower bounds like "any comparison-based sorting algorithm needs at least Ω(n log n) comparisons in the worst case"

7 4 Modelling Issues

Input length

The theoretical bounds are usually given by a function f : N → N that maps the input length to the running time (or storage space, comparisons, multiplications, program size, etc.).

The input length may e.g. be

- the size of the input (number of bits)
- the number of arguments

Example 1

Suppose n numbers from the interval {1, …, N} have to be sorted. In this case we usually say that the input length is n, instead of e.g. n log N, which would be the number of bits required to encode the input.

8 Model of Computation

How to measure performance?

1. Calculate running time and storage space etc. on a simplified, idealized model of computation, e.g. Random Access Machine (RAM), Turing Machine (TM), …
2. Calculate the number of certain basic operations: comparisons, multiplications, hard-disc accesses, …

Version 2 is often easier, but focusing on one type of operation makes it more difficult to obtain meaningful results.

9 Turing Machine

- Very simple model of computation
- Only the current memory location can be altered
- Very good model for discussing computability, or polynomial vs. exponential time
- Some simple problems, like recognizing whether the input is of the form xx, where x is a string, have a quadratic lower bound
- Not a good model for developing efficient algorithms

[Figure: a tape with a read/write head attached to a control unit; the control unit holds the program, and its state can act as constant-size memory]

10 Random Access Machine (RAM)

- Input tape and output tape (sequences of zeros and ones; unbounded length)
- Memory unit: infinite but countable number of registers R[0], R[1], R[2], …
- Registers hold integers
- Indirect addressing

[Figure: control unit connected to the input tape, the output tape, and the memory R[0], R[1], R[2], …. Note that the tapes are one-directional, and that a READ- or WRITE-operation always advances its tape.]

11 Random Access Machine (RAM)

Operations

- input operations (input tape → R[i]): READ i
- output operations (R[i] → output tape): WRITE i
- register-register transfers: R[j] := R[i], R[j] := 4
- indirect addressing:
  - R[j] := R[R[i]] loads the content of the R[i]-th register into the j-th register
  - R[R[i]] := R[j] stores the content of the j-th register into the R[i]-th register

12 Random Access Machine (RAM)

Operations

- branching (including loops) based on comparisons:
  - jump x: jumps to position x in the program; sets the instruction counter to x; the next operation to perform is the one at program position x
  - jumpz x R[i]: jump to x if R[i] = 0; if not, the instruction counter is increased by 1
  - jumpi i: jump to R[i] (indirect jump)
- arithmetic instructions: +, −, ×, /, e.g. R[i] := R[j] + R[k]; R[i] := −R[k]

The jump-directives are very close to the jump-instructions contained in the assembler language of real machines.
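The operation set above can be made concrete with a small interpreter. The sketch below is illustrative, not part of the lecture's RAM definition: the tuple-based instruction encoding, the `run` helper, and the use of a Python dict for the (conceptually infinite) register file are all my own simplifications.

```python
# Minimal sketch of a RAM interpreter (hypothetical instruction encoding).
def run(program, input_tape):
    R = {}                      # registers R[0], R[1], ... (stored sparsely)
    inp = list(input_tape)      # one-directional input tape
    out = []                    # one-directional output tape
    pc = 0                      # instruction counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "READ":        # input tape -> R[i]; the tape advances
            R[args[0]] = inp.pop(0)
        elif op == "WRITE":     # R[i] -> output tape; the tape advances
            out.append(R[args[0]])
        elif op == "CONST":     # R[j] := c
            R[args[0]] = args[1]
        elif op == "ADD":       # R[i] := R[j] + R[k]
            R[args[0]] = R[args[1]] + R[args[2]]
        elif op == "LOADI":     # R[j] := R[R[i]]  (indirect addressing)
            R[args[0]] = R[R[args[1]]]
        elif op == "JUMPZ":     # jump to x if R[i] = 0
            if R[args[1]] == 0:
                pc = args[0]
                continue
        pc += 1
    return out

# Read two numbers from the input tape and write their sum:
prog = [("READ", 0), ("READ", 1), ("ADD", 2, 0, 1), ("WRITE", 2)]
```

Running `run(prog, [3, 4])` writes `[7]` to the output tape.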

13 Model of Computation

- uniform cost model: Every operation takes time 1.
- logarithmic cost model: The cost depends on the content of memory cells:
  - The time for a step is equal to the length (in bits) of the largest operand involved.
  - The storage space of a register is equal to the length (in bits) of the largest value ever stored in it.

Bounded word RAM model: the cost is uniform, but the largest value stored in a register may not exceed 2^w, where usually w = log_2 n.

The latter model is quite realistic, as the word size of a standard computer that handles a problem of size n must be at least log_2 n, as otherwise the computer could either not store the problem instance or not address all its memory.

14 4 Modelling Issues

Example 2

Algorithm 1 RepeatedSquaring(n)
1: r ← 2
2: for i = 1 → n do
3:     r ← r^2
4: return r

Running time:
- uniform model: n steps
- logarithmic model: 1 + 2 + 4 + ⋯ + 2^n = 2^(n+1) − 1 = Θ(2^n)

Space requirement:
- uniform model: O(1)
- logarithmic model: O(2^n)
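The gap between the two cost models can be seen by instrumenting the loop. The sketch below is illustrative (the function name and the exact cost accounting are mine, not from the slides): it charges 1 per squaring in the uniform model, and the bit length of the operand in the logarithmic model.

```python
# Sketch: charge costs for r <- r^2 under both cost models.
def repeated_squaring_costs(n):
    r = 2
    uniform = logarithmic = 0
    for _ in range(n):
        logarithmic += r.bit_length()   # cost ~ length of the largest operand
        r = r * r
        uniform += 1                    # every operation costs 1
    return r, uniform, logarithmic
```

For n = 5 this returns r = 2^32 with uniform cost 5 but logarithmic cost 36, matching the Θ(2^n) growth: in iteration i the operand 2^(2^i) has 2^i + 1 bits.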

15 There are different types of complexity bounds:

- best-case complexity: C_bc(n) := min{C(x) | |x| = n}
  Usually easy to analyze, but not very meaningful.
- worst-case complexity: C_wc(n) := max{C(x) | |x| = n}
  Usually moderately easy to analyze; sometimes too pessimistic.
- average-case complexity: C_avg(n) := (1/|I_n|) · Σ_{|x|=n} C(x)
  More generally, with a probability measure μ: C_avg(n) := Σ_{x ∈ I_n} μ(x) · C(x)

Here C(x) is the cost of instance x, |x| the input length of instance x, and I_n the set of instances of length n.

16 There are different types of complexity bounds:

- amortized complexity: The average cost of data structure operations over a worst-case sequence of operations.
- randomized complexity: The algorithm may use random bits. Take the expected running time (over all possible choices of random bits) for a fixed input x; then take the worst case over all x with |x| = n.

17 4 Modelling Issues

Bibliography

[MS08] Kurt Mehlhorn, Peter Sanders: Algorithms and Data Structures: The Basic Toolbox, Springer, 2008
[CLRS90] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein: Introduction to Algorithms (3rd ed.), McGraw-Hill, 2009

Chapters 2.1 and 2.2 of [MS08] and Chapter 2 of [CLRS90] are relevant for this section.

18 5 Asymptotic Notation

We are usually not interested in exact running times, but only in an asymptotic classification of the running time that ignores constant factors and constant additive offsets.

- We are usually interested in the running times for large values of n. Then constant additive terms do not play an important role.
- An exact analysis (e.g. exactly counting the number of operations in a RAM) may be hard, but wouldn't lead to more precise results, as the computational model is already quite a distance from reality.
- A linear speed-up (i.e., by a constant factor) is always possible, e.g. by implementing the algorithm on a faster machine.
- Running time should be expressed by simple functions.

19 Asymptotic Notation

Formal Definition

Let f denote a function from N to R⁺.

- O(f) = {g | ∃c > 0 ∃n₀ ∈ N₀ ∀n ≥ n₀ : g(n) ≤ c · f(n)}
  (set of functions that asymptotically grow not faster than f)
- Ω(f) = {g | ∃c > 0 ∃n₀ ∈ N₀ ∀n ≥ n₀ : g(n) ≥ c · f(n)}
  (set of functions that asymptotically grow not slower than f)
- Θ(f) = Ω(f) ∩ O(f)
  (functions that asymptotically have the same growth as f)
- o(f) = {g | ∀c > 0 ∃n₀ ∈ N₀ ∀n ≥ n₀ : g(n) ≤ c · f(n)}
  (set of functions that asymptotically grow slower than f)
- ω(f) = {g | ∀c > 0 ∃n₀ ∈ N₀ ∀n ≥ n₀ : g(n) ≥ c · f(n)}
  (set of functions that asymptotically grow faster than f)
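The quantifier structure of the O-definition can be made tangible with a numeric check. The helper below is my own sketch, not from the lecture: it is a finite sanity check that exhibits witnesses c and n₀ for a claimed bound (it can refute a guess on the tested range, but of course it is not a proof).

```python
# Sketch: check g(n) <= c*f(n) for all tested n >= n0
# (a numeric witness for g in O(f), not a proof).
def witnesses_O(g, f, c, n0, n_max=10**4):
    return all(g(n) <= c * f(n) for n in range(n0, n_max))

# 3n + 7 is in O(n): c = 4 and n0 = 7 work, since 3n + 7 <= 4n for n >= 7.
# n^2 is not in O(n): no constant c works for all large n.
```

For example, `witnesses_O(lambda n: 3*n + 7, lambda n: n, c=4, n0=7)` holds, while `witnesses_O(lambda n: n*n, lambda n: n, c=100, n0=1)` fails.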

20 Asymptotic Notation

There is an equivalent definition using limit notation (assuming that the respective limit exists). Here f and g are functions from N₀ to R₀⁺.

- g ∈ O(f): 0 ≤ lim_{n→∞} g(n)/f(n) < ∞
- g ∈ Ω(f): 0 < lim_{n→∞} g(n)/f(n) ≤ ∞
- g ∈ Θ(f): 0 < lim_{n→∞} g(n)/f(n) < ∞
- g ∈ o(f): lim_{n→∞} g(n)/f(n) = 0
- g ∈ ω(f): lim_{n→∞} g(n)/f(n) = ∞

Note that for the version of the Landau notation defined here, we assume that f and g are positive functions. There also exist versions for arbitrary functions, and for the case that the limit does not exist.

21 Asymptotic Notation

Abuse of notation

1. People write f = O(g) when they mean f ∈ O(g). This is not an equality (how could a function be equal to a set of functions?).
2. People write f(n) = O(g(n)) when they mean f ∈ O(g), with f : N → R⁺, n ↦ f(n), and g : N → R⁺, n ↦ g(n). In this context f(n) does not mean the function f evaluated at n, but instead is a shorthand for the function itself (leaving out domain and codomain and only giving the rule of correspondence of the function).
3. People write e.g. h(n) = f(n) + o(g(n)) when they mean that there exists a function z : N → R⁺, n ↦ z(n), z ∈ o(g), such that h(n) = f(n) + z(n). This is particularly useful if you do not want to ignore constant factors. For example, the median of n elements can be determined using (3/2)n + o(n) comparisons.

22 Asymptotic Notation

Abuse of notation

4. People write O(f(n)) = O(g(n)) when they mean O(f(n)) ⊆ O(g(n)). Again this is not an equality.

23 Asymptotic Notation in Equations

How do we interpret an expression like

2n^2 + 3n + 1 = 2n^2 + Θ(n) ?

Here, Θ(n) stands for an anonymous function in the set Θ(n) that makes the expression true.

Note that Θ(n) is on the right hand side; otherwise this interpretation is wrong.

24 Asymptotic Notation in Equations

How do we interpret an expression like

2n^2 + O(n) = Θ(n^2) ?

Regardless of how we choose the anonymous function f(n) ∈ O(n), there is an anonymous function g(n) ∈ Θ(n^2) that makes the expression true.

25 Asymptotic Notation in Equations

How do we interpret an expression like

Σ_{i=1}^{n} Θ(i) = Θ(n^2) ?

Careful! It is understood that every occurrence of an O-symbol (or Θ, Ω, o, ω) on the left represents one anonymous function. Here, the Θ(i)-symbol on the left represents one anonymous function f : N → R⁺, and then Σᵢ f(i) is computed.

Hence, the left side is not equal to Θ(1) + Θ(2) + ⋯ + Θ(n−1) + Θ(n); the latter does not really have a reasonable interpretation.

26 Asymptotic Notation in Equations

We can view an expression containing asymptotic notation as generating a set:

n^2 · O(n) + O(log n)

represents

{ f : N → R⁺ | f(n) = n^2 · g(n) + h(n) with g(n) ∈ O(n) and h(n) ∈ O(log n) }

Recall that according to the previous slide e.g. the expressions Σ_{i=1}^{n} O(i) and Σ_{i=1}^{n/2} O(i) + Σ_{i=n/2+1}^{n} O(i) generate different sets.

27 Asymptotic Notation in Equations

Then an asymptotic equation can be interpreted as containment between two sets:

n^2 · O(n) + O(log n) = Θ(n^2)

represents

n^2 · O(n) + O(log n) ⊆ Θ(n^2)

Note that this equation does not hold.

28 Asymptotic Notation

Lemma 3

Let f, g be functions with the property that there is an n₀ > 0 such that f(n) > 0 for all n ≥ n₀ (and the same for g). Then

- c · f(n) ∈ Θ(f(n)) for any constant c > 0
- O(f(n)) + O(g(n)) = O(f(n) + g(n))
- O(f(n)) · O(g(n)) = O(f(n) · g(n))
- O(f(n)) + O(g(n)) = O(max{f(n), g(n)})

The same expressions also hold for Ω. Note that the last rule means that f(n) + g(n) ∈ Θ(max{f(n), g(n)}).

29 Asymptotic Notation

Comments

- Do not use asymptotic notation within induction proofs.
- For any constants a, b we have log_a n = Θ(log_b n). Therefore, we will usually ignore the base of a logarithm within asymptotic notation.
- In general, log n = log_2 n, i.e., we use 2 as the default base for the logarithm.

30 Asymptotic Notation

In general, asymptotic classification of running times is a good measure for comparing algorithms: If the running time analysis is tight and actually occurs in practice (i.e., the asymptotic bound is not a purely theoretical worst-case bound), then the algorithm with the better asymptotic running time will always outperform a weaker algorithm for large enough values of n.

However, suppose that I have two algorithms:

- Algorithm A. Running time f(n) = 1000 log n = O(log n)
- Algorithm B. Running time g(n) = log^2 n

Clearly f = o(g). However, as long as log n ≤ 1000, Algorithm B will be more efficient.
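The crossover point can be checked directly, since Python's `math.log2` accepts arbitrarily large integers. This small sketch (my own illustration of the two running-time functions above) confirms that B wins below n = 2^1000 and loses above it:

```python
import math

# f, g are the running-time functions of Algorithms A and B from the slide.
f = lambda n: 1000 * math.log2(n)   # Algorithm A
g = lambda n: math.log2(n) ** 2     # Algorithm B

# Below n = 2^1000 (log n < 1000), B is faster; above it, A is faster:
# n = 2^500:  g = 500^2  = 250000  < 500000  = f
# n = 2^2000: g = 2000^2 = 4000000 > 2000000 = f
```

The break-even at log n = 1000 means B stays faster for every input that fits in any real machine, despite being asymptotically worse.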

31 5 Asymptotic Notation

Bibliography

[MS08] Kurt Mehlhorn, Peter Sanders: Algorithms and Data Structures: The Basic Toolbox, Springer, 2008
[CLRS90] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein: Introduction to Algorithms (3rd ed.), McGraw-Hill, 2009

Mainly Chapter 3 of [CLRS90]. [MS08] covers this topic in Chapter 2.1, but not in much detail.

32 6 Recurrences

Algorithm 2 mergesort(list L)
1: n ← size(L)
2: if n ≤ 1 return L
3: L₁ ← L[1 ⋯ ⌊n/2⌋]
4: L₂ ← L[⌊n/2⌋ + 1 ⋯ n]
5: mergesort(L₁)
6: mergesort(L₂)
7: L ← merge(L₁, L₂)
8: return L

This algorithm requires

T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + O(n) ≤ 2T(⌈n/2⌉) + O(n)

comparisons when n > 1, and 0 comparisons when n ≤ 1.
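The pseudocode above translates directly into Python. The sketch below is my own instrumented version (the `counter` argument is not in the slides): it counts the comparisons performed in `merge`, so one can observe that the count stays within the n log n bound that solving the recurrence will give.

```python
# Sketch: mergesort instrumented to count comparisons; the count obeys
# T(n) = T(ceil(n/2)) + T(floor(n/2)) + (at most n - 1) merge comparisons.
def mergesort(L, counter):
    n = len(L)
    if n <= 1:
        return L
    L1 = mergesort(L[: n // 2], counter)
    L2 = mergesort(L[n // 2 :], counter)
    merged, i, j = [], 0, 0
    while i < len(L1) and j < len(L2):
        counter[0] += 1                 # one comparison per merge step
        if L1[i] <= L2[j]:
            merged.append(L1[i]); i += 1
        else:
            merged.append(L2[j]); j += 1
    return merged + L1[i:] + L2[j:]
```

Sorting 100 reversed elements this way uses at most 100 · ⌈log₂ 100⌉ = 700 comparisons.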

33 Recurrences

How do we bring the expression for the number of comparisons (i.e., the running time) into a closed form? For this we need to solve the recurrence.

34 Methods for Solving Recurrences

1. Guessing+Induction: Guess the right solution and prove that it is correct via induction. It needs experience to make the right guess.
2. Master Theorem: For a lot of recurrences that appear in the analysis of algorithms this theorem can be used to obtain tight asymptotic bounds. It does not provide exact solutions.
3. Characteristic Polynomial: Linear homogeneous recurrences can be solved via this method.

35 Methods for Solving Recurrences

4. Generating Functions: A more general technique that allows one to solve certain types of linear inhomogeneous relations, and also sometimes non-linear recurrence relations.
5. Transformation of the Recurrence: Sometimes one can transform the given recurrence relation so that it e.g. becomes linear and can therefore be solved with one of the other techniques.

36 6.1 Guessing+Induction

First we need to get rid of the O-notation in our recurrence:

T(n) ≤ 2T(⌈n/2⌉) + cn for n ≥ 2, and T(n) ≤ 0 otherwise.

Assume that instead we had

T(n) ≤ 2T(n/2) + cn for n ≥ 2, and T(n) ≤ 0 otherwise.

One way of solving such a recurrence is to guess a solution, and check that it is correct by plugging it in.

37 6.1 Guessing+Induction

Suppose we guess T(n) ≤ dn log n for a constant d. Then, if we choose d ≥ c,

T(n) ≤ 2T(n/2) + cn
     ≤ 2(d · (n/2) · log(n/2)) + cn
     = dn(log n − 1) + cn
     = dn log n + (c − d)n
     ≤ dn log n

Formally one would make an induction proof, where the above is the induction step. The base case is usually trivial.

38 6.1 Guessing+Induction

Guess: T(n) ≤ dn log n.

Proof (by induction), for the recurrence

T(n) ≤ 2T(n/2) + cn for n ≥ 16, and T(n) ≤ b otherwise.

base case (2 ≤ n < 16): true if we choose d ≥ b.

induction step (2, …, n−1 → n): Suppose the statement is true for n' ∈ {2, …, n−1}, and n ≥ 16. We prove it for n:

T(n) ≤ 2T(n/2) + cn
     ≤ 2(d · (n/2) · log(n/2)) + cn
     = dn(log n − 1) + cn
     = dn log n + (c − d)n
     ≤ dn log n

Hence, the statement is true if we choose d ≥ c.

Note that this proves the statement for n ∈ N, n ≥ 2, as the statement is wrong for n = 1. The base case is usually omitted, as it is the same for different recurrences.

39 6.1 Guessing+Induction

Why did we change the recurrence by getting rid of the ceiling? If we do not do this, we instead consider the following recurrence:

T(n) ≤ 2T(⌈n/2⌉) + cn for n ≥ 16, and T(n) ≤ b otherwise.

Note that we can do this, as for constant-sized inputs the running time is always some constant (b in the above case).

40 6.1 Guessing+Induction

We also make a guess of T(n) ≤ dn log n and get

T(n) ≤ 2T(⌈n/2⌉) + cn
     ≤ 2(d⌈n/2⌉ log⌈n/2⌉) + cn
     ≤ 2(d(n/2 + 1) log(n/2 + 1)) + cn            [⌈n/2⌉ ≤ n/2 + 1]
     ≤ dn log((9/16)n) + 2d log n + cn            [n/2 + 1 ≤ (9/16)n for n ≥ 16]
     = dn log n + (log 9 − 4)dn + 2d log n + cn   [log((9/16)n) = log n + (log 9 − 4)]
     ≤ dn log n + (log 9 − 3.5)dn + cn            [log n ≤ n/4 for n ≥ 16]
     ≤ dn log n − 0.33dn + cn
     ≤ dn log n

for a suitable choice of d.

41 6.2 Master Theorem

Lemma 4

Let a ≥ 1, b > 1 and ε > 0 denote constants. Consider the recurrence

T(n) = a · T(n/b) + f(n)

- Case 1. If f(n) = O(n^(log_b(a) − ε)) then T(n) = Θ(n^(log_b a)).
- Case 2. If f(n) = Θ(n^(log_b(a)) log^k n) then T(n) = Θ(n^(log_b a) log^(k+1) n), for k ≥ 0.
- Case 3. If f(n) = Ω(n^(log_b(a) + ε)) and for sufficiently large n, a·f(n/b) ≤ c·f(n) for some constant c < 1, then T(n) = Θ(f(n)).

Note that the cases do not cover all possibilities.
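For the common situation where f(n) = Θ(n^d · log^k n), the three cases reduce to comparing d with log_b a. The classifier below is my own sketch under that assumption (it also assumes the regularity condition of Case 3 holds, which is automatic for such polynomially bounded f):

```python
import math

# Sketch: classify T(n) = a*T(n/b) + f(n), assuming f(n) = Theta(n^d * log^k n)
# and that the Case-3 regularity condition a*f(n/b) <= c*f(n) holds.
def master(a, b, d, k=0):
    crit = math.log(a, b)                 # critical exponent log_b a
    if d < crit:                          # Case 1: f grows polynomially slower
        return f"Theta(n^{crit:.3f})"
    if d == crit:                         # Case 2 (exact float compare:
        return f"Theta(n^{crit:.3f} * log^{k+1} n)"  # a simplification)
    return f"Theta(n^{d} * log^{k} n)"    # Case 3: T(n) = Theta(f(n))
```

For mergesort (a = 2, b = 2, f(n) = Θ(n)) this gives Θ(n log n); for the naive recursive multiplication below (a = 4, b = 2, f(n) = Θ(n)) it gives Θ(n^2).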

42 6.2 Master Theorem

We prove the Master Theorem for the case that n is of the form b^ℓ, and we assume that the non-recursive case occurs for problem size 1 and incurs cost 1.

43 The Recursion Tree

The running time of a recursive algorithm can be visualized by a recursion tree:

[Figure: recursion tree for T(n) = a·T(n/b) + f(n). The root has cost f(n); its a children correspond to subproblems of size n/b, contributing a·f(n/b) on level 1; level i contains a^i subproblems of size n/b^i, contributing a^i·f(n/b^i); the bottom level consists of a^(log_b n) = n^(log_b a) leaves.]

44 6.2 Master Theorem

This gives

T(n) = n^(log_b a) + Σ_{i=0}^{log_b n − 1} a^i · f(n/b^i)

45 Case 1

Now suppose that f(n) ≤ c · n^(log_b a − ε).

T(n) − n^(log_b a) = Σ_{i=0}^{log_b n − 1} a^i f(n/b^i)
 ≤ c Σ_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a − ε)
 = c n^(log_b a − ε) Σ_{i=0}^{log_b n − 1} (b^ε)^i     [a^i b^(−i(log_b a − ε)) = a^i a^(−i) b^(εi) = (b^ε)^i]
 = c n^(log_b a − ε) (b^(ε log_b n) − 1)/(b^ε − 1)     [Σ_{i=0}^{k} q^i = (q^(k+1) − 1)/(q − 1)]
 = c n^(log_b a − ε) (n^ε − 1)/(b^ε − 1)
 = (c/(b^ε − 1)) n^(log_b a) (n^ε − 1)/n^ε

Hence,

T(n) ≤ (c/(b^ε − 1) + 1) n^(log_b a)  ⟹  T(n) = O(n^(log_b a))

46 Case 2

Now suppose that f(n) ≤ c · n^(log_b a).

T(n) − n^(log_b a) = Σ_{i=0}^{log_b n − 1} a^i f(n/b^i)
 ≤ c Σ_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a)
 = c n^(log_b a) Σ_{i=0}^{log_b n − 1} 1
 = c n^(log_b a) log_b n

Hence,

T(n) = O(n^(log_b a) log_b n)  ⟹  T(n) = O(n^(log_b a) log n)

47 Case 2

Now suppose that f(n) ≥ c · n^(log_b a).

T(n) − n^(log_b a) = Σ_{i=0}^{log_b n − 1} a^i f(n/b^i)
 ≥ c Σ_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a)
 = c n^(log_b a) Σ_{i=0}^{log_b n − 1} 1
 = c n^(log_b a) log_b n

Hence,

T(n) = Ω(n^(log_b a) log_b n)  ⟹  T(n) = Ω(n^(log_b a) log n)

48 Case 2

Now suppose that f(n) ≤ c · n^(log_b a) (log_b n)^k. With n = b^ℓ, i.e. ℓ = log_b n:

T(n) − n^(log_b a) = Σ_{i=0}^{ℓ−1} a^i f(n/b^i)
 ≤ c Σ_{i=0}^{ℓ−1} a^i (n/b^i)^(log_b a) (log_b(b^ℓ/b^i))^k
 = c n^(log_b a) Σ_{i=0}^{ℓ−1} (ℓ − i)^k
 = c n^(log_b a) Σ_{i=1}^{ℓ} i^k
 ≈ (c/(k+1)) n^(log_b a) ℓ^(k+1)     [Σ_{i=1}^{ℓ} i^k ≈ (1/(k+1)) ℓ^(k+1)]

Hence,

T(n) = O(n^(log_b a) log^(k+1) n)

49 Case 3

Now suppose that f(n) ≥ d · n^(log_b a + ε), and that for sufficiently large n: a·f(n/b) ≤ c·f(n), for some constant c < 1.

From this we get a^i f(n/b^i) ≤ c^i f(n), where we assume that n/b^(i−1) ≥ n₀ is still sufficiently large.

T(n) − n^(log_b a) = Σ_{i=0}^{log_b n − 1} a^i f(n/b^i)
 ≤ Σ_{i=0}^{log_b n − 1} c^i f(n) + O(n^(log_b a))
 ≤ (1/(1 − c)) f(n) + O(n^(log_b a))     [for q < 1: Σ_{i=0}^{n} q^i = (1 − q^(n+1))/(1 − q) ≤ 1/(1 − q)]

Hence, T(n) ≤ O(f(n)), and therefore T(n) = Θ(f(n)).

Where did we use f(n) ≥ Ω(n^(log_b a + ε))?

50 Example: Multiplying Two Integers

Suppose we want to multiply two n-bit integers, but our registers can only perform operations on integers of constant size. For this we first need to be able to add two integers A and B:

[Figure: bitwise school-method addition of A and B, with a row of carry bits]

This gives that two n-bit integers can be added in time O(n).
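The O(n) addition can be sketched as a ripple-carry loop. The bit-list representation (least significant bit first) and the function name below are my own choices for illustration:

```python
# Sketch of school-method addition: add two n-bit integers given as bit
# lists (least significant bit first) in O(n) time with a ripple carry.
def add_bits(A, B):
    n = max(len(A), len(B))
    A = A + [0] * (n - len(A))   # pad to equal length
    B = B + [0] * (n - len(B))
    out, carry = [], 0
    for i in range(n):
        s = A[i] + B[i] + carry
        out.append(s & 1)        # result bit at position i
        carry = s >> 1           # carry into the next position
    out.append(carry)            # possible final carry bit
    return out
```

For example, 5 + 6: `add_bits([1, 0, 1], [0, 1, 1])` yields the bits of 11.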

51 Example: Multiplying Two Integers

Suppose that we want to multiply an n-bit integer A and an m-bit integer B (m ≤ n). This is also known as the school method for multiplying integers.

Note that the intermediate numbers that are generated can have at most m + n ≤ 2n bits.

Time requirement:
- Computing the intermediate results: O(nm)
- Adding m numbers of length ≤ 2n: O((m + n)m) = O(nm)

52 Example: Multiplying Two Integers

A recursive approach: Suppose that the integers A and B are of length n = 2^k, for some k.

[Figure: A split into the high-order half A₁ = a_{n−1} ⋯ a_{n/2} and the low-order half A₀ = a_{n/2−1} ⋯ a₀; B split analogously into B₁ and B₀]

Then it holds that

A = A₁ · 2^(n/2) + A₀ and B = B₁ · 2^(n/2) + B₀

Hence,

A · B = A₁B₁ · 2^n + (A₁B₀ + A₀B₁) · 2^(n/2) + A₀B₀

53 Example: Multiplying Two Integers

Algorithm 3 mult(A, B)
1: if |A| = |B| = 1 then                      O(1)
2:     return a₀ · b₀                         O(1)
3: split A into A₀ and A₁                     O(n)
4: split B into B₀ and B₁                     O(n)
5: Z₂ ← mult(A₁, B₁)                          T(n/2)
6: Z₁ ← mult(A₁, B₀) + mult(A₀, B₁)           2T(n/2) + O(n)
7: Z₀ ← mult(A₀, B₀)                          T(n/2)
8: return Z₂ · 2^n + Z₁ · 2^(n/2) + Z₀        O(n)

We get the following recurrence:

T(n) = 4T(n/2) + O(n)

54 Example: Multiplying Two Integers

Master Theorem: Recurrence T(n) = aT(n/b) + f(n).
- Case 1: f(n) = O(n^(log_b a − ε)) ⟹ T(n) = Θ(n^(log_b a))
- Case 2: f(n) = Θ(n^(log_b a) log^k n) ⟹ T(n) = Θ(n^(log_b a) log^(k+1) n)
- Case 3: f(n) = Ω(n^(log_b a + ε)) ⟹ T(n) = Θ(f(n))

In our case a = 4, b = 2, and f(n) = Θ(n). Hence, we are in Case 1, since n = O(n^(2 − ε)) = O(n^(log_b a − ε)).

We get a running time of O(n^2) for our algorithm. Not better than the school method.

55 Example: Multiplying Two Integers

We can use the following identity to compute Z₁:

Z₁ = A₁B₀ + A₀B₁ = (A₀ + A₁) · (B₀ + B₁) − A₁B₁ − A₀B₀ = (A₀ + A₁) · (B₀ + B₁) − Z₂ − Z₀

Hence,

Algorithm 4 mult(A, B)
1: if |A| = |B| = 1 then                          O(1)
2:     return a₀ · b₀                             O(1)
3: split A into A₀ and A₁                         O(n)
4: split B into B₀ and B₁                         O(n)
5: Z₂ ← mult(A₁, B₁)                              T(n/2)
6: Z₀ ← mult(A₀, B₀)                              T(n/2)
7: Z₁ ← mult(A₀ + A₁, B₀ + B₁) − Z₂ − Z₀          T(n/2) + O(n)
8: return Z₂ · 2^n + Z₁ · 2^(n/2) + Z₀            O(n)

A more precise (correct) analysis would say that computing Z₁ needs time T(n/2 + 1) + O(n).

56 Example: Multiplying Two Integers

We get the following recurrence:

T(n) = 3T(n/2) + O(n)

Master Theorem: Recurrence T(n) = aT(n/b) + f(n).
- Case 1: f(n) = O(n^(log_b a − ε)) ⟹ T(n) = Θ(n^(log_b a))
- Case 2: f(n) = Θ(n^(log_b a) log^k n) ⟹ T(n) = Θ(n^(log_b a) log^(k+1) n)
- Case 3: f(n) = Ω(n^(log_b a + ε)) ⟹ T(n) = Θ(f(n))

Again we are in Case 1. We get a running time of Θ(n^(log₂ 3)) ≈ Θ(n^1.59).

A huge improvement over the school method.
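Algorithm 4 can be sketched on Python ints, which makes the bit splitting a shift and a mask. This is my own rendering of the three-multiplication scheme, with two simplifications not in the slides: the base case is widened to n ≤ 2 so the (n/2 + 1)-sized subproblem always terminates, and n need only be an upper bound on the operand lengths.

```python
# Sketch of Algorithm 4 (three recursive multiplications) on Python ints;
# A = A1 * 2^h + A0 with h = n // 2.
def mult(A, B, n):
    if n <= 2:                               # widened base case (simplification)
        return A * B
    h = n // 2
    A1, A0 = A >> h, A & ((1 << h) - 1)      # split A into high/low halves
    B1, B0 = B >> h, B & ((1 << h) - 1)
    Z2 = mult(A1, B1, h)
    Z0 = mult(A0, B0, h)
    # Z1 = A1*B0 + A0*B1 via the identity; note the (h + 1)-bit subproblem:
    Z1 = mult(A0 + A1, B0 + B1, h + 1) - Z2 - Z0
    return (Z2 << (2 * h)) + (Z1 << h) + Z0
```

For example, `mult(1234567, 7654321, 23)` equals `1234567 * 7654321`.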

57 6.3 The Characteristic Polynomial

Consider the recurrence relation:

c₀T(n) + c₁T(n−1) + c₂T(n−2) + ⋯ + c_kT(n−k) = f(n)

This is the general form of a linear recurrence relation of order k with constant coefficients (c₀, c_k ≠ 0).

- T(n) only depends on the k preceding values. This means the recurrence relation is of order k.
- The recurrence is linear, as there are no products of T[n]'s.
- If f(n) = 0, then the recurrence relation becomes a linear, homogeneous recurrence relation of order k.

Note that we ignore boundary conditions for the moment.

58 6.3 The Characteristic Polynomial

Observations:

- The solution T[1], T[2], T[3], … is completely determined by a set of boundary conditions that specify values for T[1], …, T[k].
- In fact, any k consecutive values completely determine the solution.
- k non-consecutive values might not be an appropriate set of boundary conditions (depends on the problem).

Approach:

- First determine all solutions that satisfy the recurrence relation.
- Then pick the right one by analyzing the boundary conditions.
- First consider the homogeneous case.

59 The Homogeneous Case

The solution space

S = { T = (T[1], T[2], T[3], …) | T fulfills the recurrence relation }

is a vector space. This means that if T₁, T₂ ∈ S, then also αT₁ + βT₂ ∈ S, for arbitrary constants α, β.

How do we find a non-trivial solution? We guess that the solution is of the form λ^n, λ ≠ 0, and see what happens. In order for this guess to fulfill the recurrence we need

c₀λ^n + c₁λ^(n−1) + c₂λ^(n−2) + ⋯ + c_kλ^(n−k) = 0

for all n ≥ k.

60 The Homogeneous Case

Dividing by λ^(n−k) gives that all these constraints are identical to

c₀λ^k + c₁λ^(k−1) + c₂λ^(k−2) + ⋯ + c_k = 0    (the characteristic polynomial P[λ])

This means that if λᵢ is a root (Nullstelle) of P[λ], then T[n] = λᵢ^n is a solution to the recurrence relation.

Let λ₁, …, λ_k be the k (complex) roots of P[λ]. Then, because of the vector space property,

α₁λ₁^n + α₂λ₂^n + ⋯ + α_kλ_k^n

is a solution for arbitrary values αᵢ.

61 The Homogeneous Case

Lemma 5

Assume that the characteristic polynomial has k distinct roots λ₁, …, λ_k. Then all solutions to the recurrence relation are of the form

α₁λ₁^n + α₂λ₂^n + ⋯ + α_kλ_k^n

Proof. There is one solution for every possible choice of boundary conditions for T[1], …, T[k]. We show that the above set of solutions contains one solution for every choice of boundary conditions.

62 The Homogeneous Case

Proof (cont.). Suppose I am given boundary conditions T[i], and I want to see whether I can choose the αᵢ's such that these conditions are met:

α₁λ₁   + α₂λ₂   + ⋯ + α_kλ_k   = T[1]
α₁λ₁²  + α₂λ₂²  + ⋯ + α_kλ_k²  = T[2]
⋮
α₁λ₁^k + α₂λ₂^k + ⋯ + α_kλ_k^k = T[k]

63 The Homogeneous Case

Proof (cont.). In matrix form, this system reads

[ λ₁    λ₂    ⋯  λ_k   ] [ α₁  ]   [ T[1] ]
[ λ₁²   λ₂²   ⋯  λ_k²  ] [ α₂  ] = [ T[2] ]
[ ⋮                    ] [ ⋮   ]   [ ⋮    ]
[ λ₁^k  λ₂^k  ⋯  λ_k^k ] [ α_k ]   [ T[k] ]

We show that the column vectors are linearly independent. Then the above equation has a solution.

64 Computing the Determinant

Factoring λᵢ out of the i-th column gives

det [λᵢ^j]_{j=1,…,k; i=1,…,k} = (∏_{i=1}^{k} λᵢ) · det [λᵢ^(j−1)]_{j=1,…,k; i=1,…,k}

i.e. what remains is a Vandermonde determinant, whose i-th column is (1, λᵢ, …, λᵢ^(k−1)).

65 Computing the Determinant

To compute the Vandermonde determinant (written with rows (1, λᵢ, …, λᵢ^(k−1))), subtract λ₁ times the (j−1)-st column from the j-th column, for j = k, k−1, …, 2. Row i then becomes

(1, λᵢ − λ₁, (λᵢ − λ₁)λᵢ, …, (λᵢ − λ₁)λᵢ^(k−2))

and in particular the first row becomes (1, 0, …, 0).

66 Computing the Determinant

Expanding along the first row (1, 0, …, 0) leaves the (k−1) × (k−1) determinant with rows

((λᵢ − λ₁) · 1, (λᵢ − λ₁)λᵢ, …, (λᵢ − λ₁)λᵢ^(k−2)),  i = 2, …, k.

67 Computing the Determinant

Factoring (λᵢ − λ₁) out of row i, for i = 2, …, k, gives

∏_{i=2}^{k} (λᵢ − λ₁) · det [λᵢ^(j−1)]_{i=2,…,k; j=1,…,k−1}

i.e. a Vandermonde determinant of the same form, one dimension smaller.

68 Computing the Determinant

Repeating the above steps gives:

det [λᵢ^j]_{j=1,…,k; i=1,…,k} = ∏_{i=1}^{k} λᵢ · ∏_{i>l} (λᵢ − λ_l)

Hence, if all λᵢ's are different, then the determinant is non-zero.

69 The Homogeneous Case

What happens if the roots are not all distinct?

Suppose we have a root λᵢ with multiplicity (Vielfachheit) at least 2. Then not only is λᵢ^n a solution to the recurrence, but also nλᵢ^n.

To see this, consider the polynomial

P[λ] · λ^(n−k) = c₀λ^n + c₁λ^(n−1) + c₂λ^(n−2) + ⋯ + c_kλ^(n−k)

Since λᵢ is a root, we can write this as Q[λ] · (λ − λᵢ)². Calculating the derivative gives a polynomial that still has root λᵢ.

70 This means

c₀nλᵢ^(n−1) + c₁(n−1)λᵢ^(n−2) + ⋯ + c_k(n−k)λᵢ^(n−k−1) = 0

Hence, after multiplying with λᵢ,

c₀ · nλᵢ^n + c₁ · (n−1)λᵢ^(n−1) + ⋯ + c_k · (n−k)λᵢ^(n−k) = 0

where the terms nλᵢ^n, (n−1)λᵢ^(n−1), …, (n−k)λᵢ^(n−k) are T[n], T[n−1], …, T[n−k] for T[n] = nλᵢ^n.

71 The Homogeneous Case

Suppose λᵢ has multiplicity j. We know that

c₀nλᵢ^n + c₁(n−1)λᵢ^(n−1) + ⋯ + c_k(n−k)λᵢ^(n−k) = 0

(after taking the derivative, multiplying with λ, and plugging in λᵢ).

Doing this again gives

c₀n²λᵢ^n + c₁(n−1)²λᵢ^(n−1) + ⋯ + c_k(n−k)²λᵢ^(n−k) = 0

We can continue j − 1 times. Hence, n^ℓ · λᵢ^n is a solution for ℓ ∈ 0, …, j − 1.

72 The Homogeneous Case

Lemma 6

Let P[λ] denote the characteristic polynomial of the recurrence

c₀T[n] + c₁T[n−1] + ⋯ + c_kT[n−k] = 0

Let λᵢ, i = 1, …, m, be the (complex) roots of P[λ] with multiplicities lᵢ. Then the general solution to the recurrence is given by

T[n] = Σ_{i=1}^{m} Σ_{j=0}^{lᵢ−1} α_{ij} · (n^j λᵢ^n)

The full proof is omitted. We have only shown that any choice of α_{ij}'s is a solution to the recurrence.

73 Example: Fibonacci Sequence

T[0] = 0, T[1] = 1, T[n] = T[n−1] + T[n−2] for n ≥ 2

The characteristic polynomial is

λ² − λ − 1

Finding the roots gives

λ_{1/2} = 1/2 ± √(1/4 + 1) = (1 ± √5)/2

74 Example: Fibonacci Sequence

Hence, the solution is of the form

α((1 + √5)/2)^n + β((1 − √5)/2)^n

T[0] = 0 gives α + β = 0.

T[1] = 1 gives

α(1 + √5)/2 + β(1 − √5)/2 = 1  ⟹  α − β = 2/√5

75 Example: Fibonacci Sequence

Hence, the solution is

(1/√5) · [ ((1 + √5)/2)^n − ((1 − √5)/2)^n ]
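This closed form can be checked numerically against the recurrence itself. The sketch below (function names are mine) evaluates the closed form in floating point and rounds, which is exact for small n where the float error stays below 1/2:

```python
import math

# Sketch: check the closed form (1/sqrt(5)) * (phi^n - psi^n) against the
# recurrence T[n] = T[n-1] + T[n-2], T[0] = 0, T[1] = 1.
phi = (1 + math.sqrt(5)) / 2   # root (1 + sqrt(5))/2
psi = (1 - math.sqrt(5)) / 2   # root (1 - sqrt(5))/2

def fib_closed(n):
    return round((phi ** n - psi ** n) / math.sqrt(5))

def fib_rec(n):                # iterative evaluation of the recurrence
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both agree, e.g. `fib_closed(10) == fib_rec(10) == 55`. Since |psi| < 1, the second term vanishes for large n, so T[n] is simply phi^n/√5 rounded to the nearest integer.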

76 The Inhomogeneous Case

Consider the recurrence relation:

c₀T(n) + c₁T(n−1) + c₂T(n−2) + ⋯ + c_kT(n−k) = f(n)

with f(n) ≠ 0.

While we have a fairly general technique for solving homogeneous, linear recurrence relations, the inhomogeneous case is different.

77 The Inhomogeneous Case

The general solution of the recurrence relation is

T(n) = T_h(n) + T_p(n)

where T_h is any solution to the homogeneous equation, and T_p is one particular solution to the inhomogeneous equation.

There is no general method to find a particular solution.

78 The Inhomogeneous Case

Example: T[n] = T[n−1] + 1, T[0] = 1

Then,

T[n−1] = T[n−2] + 1   (n ≥ 2)

Subtracting the first from the second equation gives

T[n] − T[n−1] = T[n−1] − T[n−2]   (n ≥ 2)

or

T[n] = 2T[n−1] − T[n−2]   (n ≥ 2)

I get a completely determined recurrence if I add T[0] = 1 and T[1] = 2.

79 The Inhomogeneous Case

Example (cont.): The characteristic polynomial is

λ² − 2λ + 1 = (λ − 1)² = 0

Then the solution is of the form

T[n] = α1^n + βn1^n = α + βn

T[0] = 1 gives α = 1.

T[1] = 2 gives 1 + β = 2 ⟹ β = 1.

Hence, T[n] = n + 1.

80 The Inhomogeneous Case

If f(n) is a polynomial of degree r, this method can be applied r + 1 times to obtain a homogeneous equation:

T[n] = T[n−1] + n²

Shift: T[n−1] = T[n−2] + (n−1)² = T[n−2] + n² − 2n + 1

Difference: T[n] − T[n−1] = T[n−1] − T[n−2] + 2n − 1

T[n] = 2T[n−1] − T[n−2] + 2n − 1

T[n] = 2T[n−1] − T[n−2] + 2n − 1

Shift:

T[n−1] = 2T[n−2] − T[n−3] + 2(n−1) − 1 = 2T[n−2] − T[n−3] + 2n − 3

Difference:

T[n] − T[n−1] = 2T[n−1] − T[n−2] + 2n − 1 − 2T[n−2] + T[n−3] − 2n + 3

T[n] = 3T[n−1] − 3T[n−2] + T[n−3] + 2

and so on.

6.4 Generating Functions

Definition 7 (Generating Function)

Let (a_n)_{n≥0} be a sequence. The corresponding

generating function (Erzeugendenfunktion) is

F(z) := Σ_{n≥0} a_n zⁿ;

exponential generating function (exponentielle Erzeugendenfunktion) is

F(z) := Σ_{n≥0} (a_n / n!) zⁿ.

Example 8

1. The generating function of the sequence (1, 0, 0, …) is F(z) = 1.

2. The generating function of the sequence (1, 1, 1, …) is F(z) = 1/(1−z).

There are two different views:

A generating function is a formal power series (formale Potenzreihe). Then the generating function is an algebraic object.

Let f = Σ_{n≥0} a_n zⁿ and g = Σ_{n≥0} b_n zⁿ.

Equality: f and g are equal if a_n = b_n for all n.

Addition: f + g := Σ_{n≥0} (a_n + b_n) zⁿ.

Multiplication: f · g := Σ_{n≥0} c_n zⁿ with c_n = Σ_{p=0}^{n} a_p b_{n−p}.

There are no convergence issues here.

The arithmetic view:

We view a power series as a function f : ℂ → ℂ.

Then it is important to think about convergence, the radius of convergence, etc.

What does Σ_{n≥0} zⁿ = 1/(1−z) mean in the algebraic view?

It means that the power series 1 − z and the power series Σ_{n≥0} zⁿ are inverses of each other, i.e.,

(1 − z) · (Σ_{n≥0} zⁿ) = 1.

This is well-defined.
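The multiplication rule makes this statement easy to check on truncated coefficient lists. A small Python sketch (the helper names are mine, not from the lecture):

```python
def ps_mul(f, g, terms):
    # Cauchy product: c_n = sum_{p=0}^{n} a_p * b_{n-p},
    # truncated after `terms` coefficients
    return [sum(f[p] * g[n - p]
                for p in range(n + 1)
                if p < len(f) and n - p < len(g))
            for n in range(terms)]

one_minus_z = [1, -1]        # the power series 1 - z
geometric = [1] * 8          # truncation of sum_{n>=0} z^n
# (1 - z) * (sum z^n) = 1: every coefficient beyond the constant term cancels
assert ps_mul(one_minus_z, geometric, 8) == [1, 0, 0, 0, 0, 0, 0, 0]
```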

Suppose we are given the generating function

Σ_{n≥0} zⁿ = 1/(1−z).

We can compute the derivative:

Σ_{n≥1} n zⁿ⁻¹ = Σ_{n≥0} (n+1) zⁿ = 1/(1−z)².

Formally, the derivative of a formal power series Σ_{n≥0} a_n zⁿ is defined as Σ_{n≥1} n a_n zⁿ⁻¹. The known rules for differentiation work for this definition; in particular, the derivative of 1/(1−z) is 1/(1−z)². Note that this requires a proof if we consider power series as algebraic objects; however, we did not prove this in the lecture.

Hence, the generating function of the sequence a_n = n + 1 is 1/(1−z)².

We can repeat this. Starting from

Σ_{n≥0} (n+1) zⁿ = 1/(1−z)²,

taking the derivative gives

Σ_{n≥1} n(n+1) zⁿ⁻¹ = Σ_{n≥0} (n+1)(n+2) zⁿ = 2/(1−z)³.

Hence, the generating function of the sequence a_n = (n+1)(n+2) is 2/(1−z)³.

Computing the k-th derivative of Σ zⁿ:

Σ_{n≥k} n(n−1)⋯(n−k+1) zⁿ⁻ᵏ = Σ_{n≥0} (n+k)⋯(n+1) zⁿ = k!/(1−z)^{k+1}.

Hence:

Σ_{n≥0} (n+k choose k) zⁿ = 1/(1−z)^{k+1}.

The generating function of the sequence a_n = (n+k choose k) is 1/(1−z)^{k+1}.

Σ_{n≥0} n zⁿ = Σ_{n≥0} (n+1) zⁿ − Σ_{n≥0} zⁿ = 1/(1−z)² − 1/(1−z) = z/(1−z)².

The generating function of the sequence a_n = n is z/(1−z)².
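These identities can be checked mechanically by expanding the rational function as a power series. The sketch below (my own helper, not from the slides) performs polynomial long division in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def series(num, den, terms):
    # Expand num(z)/den(z) as a power series via long division.
    # num and den are coefficient lists, constant term first.
    num = [Fraction(c) for c in num] + [Fraction(0)] * terms
    out = []
    for _ in range(terms):
        c = num[0] / den[0]
        out.append(c)
        for i, d in enumerate(den):
            if i < len(num):
                num[i] -= c * d
        num.pop(0)
    return out

# z/(1-z)^2 = z/(1 - 2z + z^2) expands to 0, 1, 2, 3, ... (coefficient of z^n is n)
assert series([0, 1], [1, -2, 1], 6) == [0, 1, 2, 3, 4, 5]
# 1/(1-z)^(k+1) gives binomial(n+k, k); check k = 2, where (1-z)^3 = 1 - 3z + 3z^2 - z^3
assert series([1], [1, -3, 3, -1], 5) == [comb(n + 2, 2) for n in range(5)]
```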

We know

Σ_{n≥0} yⁿ = 1/(1−y).

Hence,

Σ_{n≥0} aⁿ zⁿ = 1/(1−az).

The generating function of the sequence f_n = aⁿ is 1/(1−az).

Example: a_n = a_{n−1} + 1, a_0 = 1

Suppose we have the recurrence a_n = a_{n−1} + 1 for n ≥ 1 and a_0 = 1.

A(z) = Σ_{n≥0} a_n zⁿ
     = a_0 + Σ_{n≥1} (a_{n−1} + 1) zⁿ
     = 1 + z Σ_{n≥1} a_{n−1} zⁿ⁻¹ + Σ_{n≥1} zⁿ
     = z Σ_{n≥0} a_n zⁿ + Σ_{n≥0} zⁿ
     = zA(z) + 1/(1−z)

Example: a_n = a_{n−1} + 1, a_0 = 1

Solving for A(z) gives

Σ_{n≥0} a_n zⁿ = A(z) = 1/(1−z)² = Σ_{n≥0} (n+1) zⁿ.

Hence, a_n = n + 1.

Some Generating Functions

n-th sequence element   →  generating function
1                       →  1/(1−z)
n + 1                   →  1/(1−z)²
(n+k choose k)          →  1/(1−z)^{k+1}
n                       →  z/(1−z)²
aⁿ                      →  1/(1−az)
n²                      →  z(1+z)/(1−z)³
1/n!                    →  e^z

Some Generating Functions

n-th sequence element          →  generating function
c·f_n                          →  c·F
f_n + g_n                      →  F + G
Σ_{i=0}^{n} f_i g_{n−i}        →  F · G
f_{n−k} (n ≥ k); 0 otherwise   →  z^k · F
Σ_{i=0}^{n} f_i                →  F(z)/(1−z)
n·f_n                          →  z · dF(z)/dz
cⁿ·f_n                         →  F(cz)
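The table rules can be spot-checked on truncated coefficient lists. For example, multiplying by 1/(1−z), i.e. the all-ones series, yields prefix sums, matching the Σ_{i=0}^{n} f_i row. A small Python sketch (helper name is mine):

```python
def conv(f, g):
    # Product rule from the table: coefficient n of F*G is sum_{i=0}^{n} f_i * g_{n-i}
    terms = min(len(f), len(g))
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(terms)]

fib = [0, 1, 1, 2, 3, 5, 8, 13]
ones = [1] * 8                      # coefficients of 1/(1-z)
# F(z)/(1-z) generates the prefix sums of f
assert conv(fib, ones) == [0, 1, 2, 4, 7, 12, 20, 33]
```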

Solving Recursions with Generating Functions

1. Set A(z) = Σ_{n≥0} a_n zⁿ.
2. Transform the right hand side so that the boundary conditions and the recurrence relation can be plugged in.
3. Do further transformations so that the infinite sums on the right hand side can be replaced by A(z).
4. Solving for A(z) gives an equation of the form A(z) = f(z), where hopefully f(z) is a simple function.
5. Write f(z) as a formal power series. Techniques:
   - partial fraction decomposition (Partialbruchzerlegung)
   - lookup in tables
6. The coefficients of the resulting power series are the a_n.

Example: a_n = 2a_{n−1}, a_0 = 1

1. Set up the generating function:

A(z) = Σ_{n≥0} a_n zⁿ

2. Transform the right hand side so that the recurrence can be plugged in:

A(z) = a_0 + Σ_{n≥1} a_n zⁿ

Plug in:

A(z) = 1 + Σ_{n≥1} (2a_{n−1}) zⁿ

Example: a_n = 2a_{n−1}, a_0 = 1

3. Transform the right hand side so that the infinite sums can be replaced by A(z) or by a simple function:

A(z) = 1 + Σ_{n≥1} (2a_{n−1}) zⁿ
     = 1 + 2z Σ_{n≥1} a_{n−1} zⁿ⁻¹
     = 1 + 2z Σ_{n≥0} a_n zⁿ
     = 1 + 2z·A(z)

4. Solve for A(z):

A(z) = 1/(1−2z)

Example: a_n = 2a_{n−1}, a_0 = 1

5. Rewrite f(z) as a power series:

Σ_{n≥0} a_n zⁿ = A(z) = 1/(1−2z) = Σ_{n≥0} 2ⁿ zⁿ

Hence, a_n = 2ⁿ.
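A quick check that the extracted coefficients really satisfy the recurrence (a throwaway sketch, not from the slides):

```python
def a(n):
    # a_n = 2 a_{n-1}, a_0 = 1
    val = 1
    for _ in range(n):
        val *= 2
    return val

# The power-series expansion of 1/(1-2z) says a_n = 2^n
assert all(a(n) == 2 ** n for n in range(16))
```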

Example: a_n = 3a_{n−1} + n, a_0 = 1

1. Set up the generating function:

A(z) = Σ_{n≥0} a_n zⁿ

Example: a_n = 3a_{n−1} + n, a_0 = 1

2./3. Transform the right hand side:

A(z) = Σ_{n≥0} a_n zⁿ
     = a_0 + Σ_{n≥1} a_n zⁿ
     = 1 + Σ_{n≥1} (3a_{n−1} + n) zⁿ
     = 1 + 3z Σ_{n≥1} a_{n−1} zⁿ⁻¹ + Σ_{n≥1} n zⁿ
     = 1 + 3z Σ_{n≥0} a_n zⁿ + Σ_{n≥0} n zⁿ
     = 1 + 3z·A(z) + z/(1−z)²

Example: a_n = 3a_{n−1} + n, a_0 = 1

4. Solve for A(z):

A(z) = 1 + 3z·A(z) + z/(1−z)²

gives

A(z) = ((1−z)² + z) / ((1−3z)(1−z)²) = (z² − z + 1) / ((1−3z)(1−z)²)

Example: a_n = 3a_{n−1} + n, a_0 = 1

5. Write f(z) as a formal power series. We use partial fraction decomposition:

(z² − z + 1) / ((1−3z)(1−z)²) = A/(1−3z) + B/(1−z) + C/(1−z)²

This gives

z² − z + 1 = A(1−z)² + B(1−3z)(1−z) + C(1−3z)
           = A(1 − 2z + z²) + B(1 − 4z + 3z²) + C(1 − 3z)
           = (A + 3B)z² + (−2A − 4B − 3C)z + (A + B + C)

Example: a_n = 3a_{n−1} + n, a_0 = 1

5. Write f(z) as a formal power series. This leads to the following conditions:

A + B + C = 1
2A + 4B + 3C = 1
A + 3B = 1

which gives

A = 7/4, B = −1/4, C = −1/2.

Example: a_n = 3a_{n−1} + n, a_0 = 1

5. Write f(z) as a formal power series:

A(z) = (7/4) · 1/(1−3z) − (1/4) · 1/(1−z) − (1/2) · 1/(1−z)²
     = (7/4) Σ_{n≥0} 3ⁿ zⁿ − (1/4) Σ_{n≥0} zⁿ − (1/2) Σ_{n≥0} (n+1) zⁿ
     = Σ_{n≥0} ( (7/4)·3ⁿ − 1/4 − (1/2)(n+1) ) zⁿ
     = Σ_{n≥0} ( (7/4)·3ⁿ − (1/2)n − 3/4 ) zⁿ

6. This means a_n = (7/4)·3ⁿ − n/2 − 3/4.
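The closed form obtained via partial fractions can be verified exactly with rational arithmetic. A short Python sketch (not part of the original slides):

```python
from fractions import Fraction

def a(n):
    # a_n = 3 a_{n-1} + n, a_0 = 1
    val = 1
    for i in range(1, n + 1):
        val = 3 * val + i
    return val

def closed(n):
    # Closed form from the partial fraction decomposition: (7/4)*3^n - n/2 - 3/4
    return Fraction(7, 4) * 3 ** n - Fraction(n, 2) - Fraction(3, 4)

assert all(a(n) == closed(n) for n in range(20))
```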

6.5 Transformation of the Recurrence

Example 9

f_0 = 1, f_1 = 2, f_n = f_{n−1} · f_{n−2} for n ≥ 2

Define

g_n := log f_n.

Then

g_n = g_{n−1} + g_{n−2} for n ≥ 2,

g_1 = log 2 = 1 (for log = log₂), g_0 = 0.

Hence g_n = F_n (the n-th Fibonacci number), and therefore f_n = 2^{F_n}.
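The substitution can be verified directly: taking log₂ turns the product recurrence into the Fibonacci recurrence. A throwaway Python check (helper names are mine):

```python
def f(n):
    # f_0 = 1, f_1 = 2, f_n = f_{n-1} * f_{n-2} for n >= 2
    a, b = 1, 2
    for _ in range(n):
        a, b = b, a * b
    return a

def fib(n):
    # Fibonacci numbers F_0 = 0, F_1 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# f_n = 2^{F_n}
assert all(f(n) == 2 ** fib(n) for n in range(15))
```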

Transformation of the Recurrence

Example 10

f_1 = 1, f_n = 3f_{n/2} + n for n = 2^k, k ≥ 1.

Define g_k := f_{2^k}. Then:

g_0 = 1, g_k = 3g_{k−1} + 2^k for k ≥ 1.

6 Recurrences

We get

g_k = 3g_{k−1} + 2^k
    = 3 [3g_{k−2} + 2^{k−1}] + 2^k
    = 3² g_{k−2} + 3·2^{k−1} + 2^k
    = 3² [3g_{k−3} + 2^{k−2}] + 3·2^{k−1} + 2^k
    = 3³ g_{k−3} + 3²·2^{k−2} + 3·2^{k−1} + 2^k
    = 2^k Σ_{i=0}^{k} (3/2)^i
    = 2^k · ((3/2)^{k+1} − 1) / (1/2)
    = 3^{k+1} − 2^{k+1}

6 Recurrences

Let n = 2^k. Then g_k = 3^{k+1} − 2^{k+1}, hence (with log = log₂)

f_n = 3·3^k − 2·2^k
    = 3 (2^{log 3})^k − 2·2^k
    = 3 (2^k)^{log 3} − 2·2^k
    = 3 n^{log 3} − 2n.
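On powers of two, n^{log₂ 3} equals 3^k exactly, so the closed form can be checked with integer arithmetic. A short Python sketch (not part of the original slides):

```python
def f(n):
    # f_1 = 1, f_n = 3 f_{n/2} + n for n a power of two
    return 1 if n == 1 else 3 * f(n // 2) + n

for k in range(12):
    n = 2 ** k
    # Closed form: f_n = 3 n^{log2 3} - 2n, i.e. 3 * 3^k - 2 * 2^k for n = 2^k
    assert f(n) == 3 * 3 ** k - 2 * n
```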

6 Recurrences — Bibliography

[MS08] Kurt Mehlhorn, Peter Sanders: Algorithms and Data Structures — The Basic Toolbox, Springer, 2008.

[CLRS90] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein: Introduction to Algorithms (3rd ed.), MIT Press and McGraw-Hill, 2009.

[Liu85] Chung Laung Liu: Elements of Discrete Mathematics, McGraw-Hill, 1985.

The Karatsuba method can be found in [MS08], Chapter 1. Chapter 4.3 of [CLRS90] covers the substitution method, which roughly corresponds to guessing + induction; Chapters 4.4, 4.5, and 4.6 of this book cover the master theorem. Methods using the characteristic polynomial and generating functions can be found in [Liu85].


Introduction to Algorithms 6.046J/18.401J/SMA5503 Introduction to Algorithms 6.046J/8.40J/SMA5503 Lecture 3 Prof. Piotr Indyk The divide-and-conquer design paradigm. Divide the problem (instance) into subproblems. 2. Conquer the subproblems by solving

More information

Algorithm efficiency can be measured in terms of: Time Space Other resources such as processors, network packets, etc.

Algorithm efficiency can be measured in terms of: Time Space Other resources such as processors, network packets, etc. Algorithms Analysis Algorithm efficiency can be measured in terms of: Time Space Other resources such as processors, network packets, etc. Algorithms analysis tends to focus on time: Techniques for measuring

More information

The null-pointers in a binary search tree are replaced by pointers to special null-vertices, that do not carry any object-data

The null-pointers in a binary search tree are replaced by pointers to special null-vertices, that do not carry any object-data Definition 1 A red black tree is a balanced binary search tree in which each internal node has two children. Each internal node has a color, such that 1. The root is black. 2. All leaf nodes are black.

More information

Advanced Counting Techniques

Advanced Counting Techniques . All rights reserved. Authorized only for instructor use in the classroom. No reproduction or further distribution permitted without the prior written consent of McGraw-Hill Education. Advanced Counting

More information

Problem Set 1 Solutions

Problem Set 1 Solutions Introduction to Algorithms September 24, 2004 Massachusetts Institute of Technology 6.046J/18.410J Professors Piotr Indyk and Charles E. Leiserson Handout 7 Problem Set 1 Solutions Exercise 1-1. Do Exercise

More information

An analogy from Calculus: limits

An analogy from Calculus: limits COMP 250 Fall 2018 35 - big O Nov. 30, 2018 We have seen several algorithms in the course, and we have loosely characterized their runtimes in terms of the size n of the input. We say that the algorithm

More information

Mathematical Background. Unsigned binary numbers. Powers of 2. Logs and exponents. Mathematical Background. Today, we will review:

Mathematical Background. Unsigned binary numbers. Powers of 2. Logs and exponents. Mathematical Background. Today, we will review: Mathematical Background Mathematical Background CSE 373 Data Structures Today, we will review: Logs and eponents Series Recursion Motivation for Algorithm Analysis 5 January 007 CSE 373 - Math Background

More information

Big , and Definition Definition

Big , and Definition Definition Big O, Ω, and Θ Big-O gives us only a one-way comparison; if f is O(g) then g eventually is bigger than f from that point on, but in fact f could be very small in comparison. Example; 3n is O(2 2n ). We

More information

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation:

When we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation: CS 124 Section #1 Big-Oh, the Master Theorem, and MergeSort 1/29/2018 1 Big-Oh Notation 1.1 Definition Big-Oh notation is a way to describe the rate of growth of functions. In CS, we use it to describe

More information

Introduction to Algorithms and Asymptotic analysis

Introduction to Algorithms and Asymptotic analysis Indian Institute of Information Technology Design and Manufacturing, Kancheepuram Chennai 600 127, India An Autonomous Institute under MHRD, Govt of India An Institute of National Importance COM 501 Advanced

More information

Input Decidable Language -- Program Halts on all Input Encoding of Input -- Natural Numbers Encoded in Binary or Decimal, Not Unary

Input Decidable Language -- Program Halts on all Input Encoding of Input -- Natural Numbers Encoded in Binary or Decimal, Not Unary Complexity Analysis Complexity Theory Input Decidable Language -- Program Halts on all Input Encoding of Input -- Natural Numbers Encoded in Binary or Decimal, Not Unary Output TRUE or FALSE Time and Space

More information

Asymptotic Analysis 1

Asymptotic Analysis 1 Asymptotic Analysis 1 Last week, we discussed how to present algorithms using pseudocode. For example, we looked at an algorithm for singing the annoying song 99 Bottles of Beer on the Wall for arbitrary

More information

CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Catie Baker Spring 2015

CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Catie Baker Spring 2015 CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis Catie Baker Spring 2015 Today Registration should be done. Homework 1 due 11:59pm next Wednesday, April 8 th. Review math

More information

CSC : Homework #3

CSC : Homework #3 CSC 707-001: Homework #3 William J. Cook wjcook@math.ncsu.edu Monday, March 15, 2004 1 Exercise 4.13 on page 118 (3-Register RAM) Exercise 4.13 Verify that a RAM need not have an unbounded number of registers.

More information

Design and Analysis of Algorithms Recurrence. Prof. Chuhua Xian School of Computer Science and Engineering

Design and Analysis of Algorithms Recurrence. Prof. Chuhua Xian   School of Computer Science and Engineering Design and Analysis of Algorithms Recurrence Prof. Chuhua Xian Email: chhxian@scut.edu.cn School of Computer Science and Engineering Course Information Instructor: Chuhua Xian ( 冼楚华 ) Email: chhxian@scut.edu.cn

More information

Analysis of Algorithm Efficiency. Dr. Yingwu Zhu

Analysis of Algorithm Efficiency. Dr. Yingwu Zhu Analysis of Algorithm Efficiency Dr. Yingwu Zhu Measure Algorithm Efficiency Time efficiency How fast the algorithm runs; amount of time required to accomplish the task Our focus! Space efficiency Amount

More information

Enumerate all possible assignments and take the An algorithm is a well-defined computational

Enumerate all possible assignments and take the An algorithm is a well-defined computational EMIS 8374 [Algorithm Design and Analysis] 1 EMIS 8374 [Algorithm Design and Analysis] 2 Designing and Evaluating Algorithms A (Bad) Algorithm for the Assignment Problem Enumerate all possible assignments

More information

Module 1: Analyzing the Efficiency of Algorithms

Module 1: Analyzing the Efficiency of Algorithms Module 1: Analyzing the Efficiency of Algorithms Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu What is an Algorithm?

More information

P, NP, NP-Complete, and NPhard

P, NP, NP-Complete, and NPhard P, NP, NP-Complete, and NPhard Problems Zhenjiang Li 21/09/2011 Outline Algorithm time complicity P and NP problems NP-Complete and NP-Hard problems Algorithm time complicity Outline What is this course

More information

3.1 Asymptotic notation

3.1 Asymptotic notation 3.1 Asymptotic notation The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2,... Such

More information

Great Theoretical Ideas in Computer Science. Lecture 9: Introduction to Computational Complexity

Great Theoretical Ideas in Computer Science. Lecture 9: Introduction to Computational Complexity 15-251 Great Theoretical Ideas in Computer Science Lecture 9: Introduction to Computational Complexity February 14th, 2017 Poll What is the running time of this algorithm? Choose the tightest bound. def

More information

Data Structures and Algorithms Chapter 3

Data Structures and Algorithms Chapter 3 1 Data Structures and Algorithms Chapter 3 Werner Nutt 2 Acknowledgments The course follows the book Introduction to Algorithms, by Cormen, Leiserson, Rivest and Stein, MIT Press [CLRST]. Many examples

More information

Algorithm Analysis, Asymptotic notations CISC4080 CIS, Fordham Univ. Instructor: X. Zhang

Algorithm Analysis, Asymptotic notations CISC4080 CIS, Fordham Univ. Instructor: X. Zhang Algorithm Analysis, Asymptotic notations CISC4080 CIS, Fordham Univ. Instructor: X. Zhang Last class Introduction to algorithm analysis: fibonacci seq calculation counting number of computer steps recursive

More information

CS 4407 Algorithms Lecture 3: Iterative and Divide and Conquer Algorithms

CS 4407 Algorithms Lecture 3: Iterative and Divide and Conquer Algorithms CS 4407 Algorithms Lecture 3: Iterative and Divide and Conquer Algorithms Prof. Gregory Provan Department of Computer Science University College Cork 1 Lecture Outline CS 4407, Algorithms Growth Functions

More information

CS 161 Summer 2009 Homework #2 Sample Solutions

CS 161 Summer 2009 Homework #2 Sample Solutions CS 161 Summer 2009 Homework #2 Sample Solutions Regrade Policy: If you believe an error has been made in the grading of your homework, you may resubmit it for a regrade. If the error consists of more than

More information

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2019

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2019 CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2019 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis

More information

b + O(n d ) where a 1, b > 1, then O(n d log n) if a = b d d ) if a < b d O(n log b a ) if a > b d

b + O(n d ) where a 1, b > 1, then O(n d log n) if a = b d d ) if a < b d O(n log b a ) if a > b d CS161, Lecture 4 Median, Selection, and the Substitution Method Scribe: Albert Chen and Juliana Cook (2015), Sam Kim (2016), Gregory Valiant (2017) Date: January 23, 2017 1 Introduction Last lecture, we

More information

COMPUTER ALGORITHMS. Athasit Surarerks.

COMPUTER ALGORITHMS. Athasit Surarerks. COMPUTER ALGORITHMS Athasit Surarerks. Introduction EUCLID s GAME Two players move in turn. On each move, a player has to write on the board a positive integer equal to the different from two numbers already

More information