Problems with Solutions in the Analysis of Algorithms. Minko Markov


Problems with Solutions in the Analysis of Algorithms. Minko Markov. Draft date: November 13, 2014

Copyright © 2010–2014 Minko Markov. All rights reserved. Maple is a trademark of Waterloo Maple Inc.

Contents

I Background 1

1 Notations: Θ, O, Ω, o, and ω 2

II Analysis of Algorithms 25

2 Iterative Algorithms 26

3 Recursive Algorithms and Recurrence Relations 44
3.1 Preliminaries 44
3.1.1 Iterators 45
3.1.2 Recursion trees 48
3.2 Problems 52
3.2.1 Induction, unfolding, recursion trees 52
3.2.2 The Master Theorem 93
3.2.3 The Method with the Characteristic Equation 101

4 Proving the correctness of algorithms 108
4.1 Preliminaries 108
4.2 Loop Invariants 109
4.3 Practicing Proofs with Loop Invariants 112
4.3.1 Elementary problems 112
4.3.2 Algorithms that compute the mode of an array 115
4.3.3 Insertion Sort, Selection Sort, and Bubble Sort 121
4.3.4 Inversion Sort 127
4.3.5 Merge Sort and Quick Sort 131
4.3.6 Algorithms on binary heaps 137
4.3.7 Dijkstra's algorithm 144
4.3.8 Counting Sort 148
4.3.9 Miscellaneous algorithms 150
4.4 Proving algorithm correctness by induction 153
4.4.1 Algorithms on binary heaps 153
4.4.2 Miscellaneous algorithms 156

III Design of Algorithms 157

5 Algorithmic Problems 158
5.1 Programming fragments 158
5.2 Arrays and sortings 165
5.3 Graphs 180
5.3.1 Graph traversal related algorithms 181
5.3.2 NP-hard problems on restricted graphs 187
5.3.3 Dynamic Programming 192

IV Computational Complexity 197

6 Lower Bounds for Computational Problems 198
6.1 Comparison-based sorting 198
6.2 The Balance Puzzle and the Twelve-Coin Puzzle 203
6.3 Comparison-based element uniqueness 207

7 Intractability 210
7.1 Several NP-complete decision problems 210
7.2 Polynomial Reductions 213
7.2.1 SAT ∝ 3SAT 213
7.2.2 3SAT ∝ 3DM 215
7.2.3 3SAT ∝ VC 226
7.2.4 VC ∝ HC 228
7.2.5 3DM ∝ Partition 238
7.2.6 VC ∝ DS 241
7.2.7 HC ∝ TSP 242
7.2.8 Partition ∝ Knapsack 242
7.2.9 3SAT ∝ Clique 243
7.2.10 3SAT ∝ EDP 245
7.2.11 VDP ∝ EDP 249
7.2.12 EDP ∝ VDP 250
7.2.13 3SAT ∝ 3-Colorability 251
7.2.14 3-Colorability ∝ k-Colorability 254
7.2.15 3-Colorability ∝ 3-Planar Colorability 254

V Appendices 258

8 Appendix 259

9 Acknowledgements 292

References 292

Part I

Background

Chapter 1

Notations: Θ, O, Ω, o, and ω

The functions we consider are assumed to have positive real domains and real codomains unless specified otherwise. Furthermore, the functions are assumed to be asymptotically positive. The function f(n) is asymptotically positive iff ∃n₀ : ∀n ≥ n₀, f(n) > 0.

Basic definitions:

Θ(g(n)) = { f(n) | ∃c₁, c₂ > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) }   (1.1)
O(g(n)) = { f(n) | ∃c > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ f(n) ≤ c·g(n) }   (1.2)
Ω(g(n)) = { f(n) | ∃c > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ c·g(n) ≤ f(n) }   (1.3)
o(g(n)) = { f(n) | ∀c > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ f(n) < c·g(n) }   (1.4)
ω(g(n)) = { f(n) | ∀c > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ c·g(n) < f(n) }   (1.5)

(1.4) is equivalent to:

lim_{n→∞} f(n)/g(n) = 0   (1.6)

if the limit exists. (1.5) is equivalent to:

lim_{n→∞} g(n)/f(n) = 0   (1.7)

if the limit exists.
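The limit characterizations (1.6) and (1.7) can be watched numerically for concrete functions. The following small script is an illustration only; the sample functions n, n² and n·lg n are arbitrary choices, not part of the text:

```python
import math

# f(n) = n is in o(n^2): the ratio f(n)/g(n) = 1/n shrinks toward 0,
# matching characterization (1.6).
small_o_ratios = [n / (n * n) for n in (10, 10**3, 10**5)]
assert small_o_ratios[0] > small_o_ratios[1] > small_o_ratios[2]

# f(n) = n*lg(n) is in omega(n): the inverted ratio g(n)/f(n) = 1/lg(n)
# shrinks toward 0, matching characterization (1.7).
omega_ratios = [n / (n * math.log2(n)) for n in (10, 10**3, 10**5)]
assert omega_ratios[0] > omega_ratios[1] > omega_ratios[2]
```

Shrinking ratios at a few sample points do not prove the limits, of course; they merely agree with them.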

It is universally accepted to write f(n) = Θ(g(n)) instead of the formally correct f(n) ∈ Θ(g(n)). Let us define the binary relations ≍, ≼, ≺, ≽, and ≻ over functions as follows. For any two functions f(n) and g(n):

f(n) ≍ g(n) ⟺ f(n) = Θ(g(n))   (1.8)
f(n) ≼ g(n) ⟺ f(n) = O(g(n))   (1.9)
f(n) ≺ g(n) ⟺ f(n) = o(g(n))   (1.10)
f(n) ≽ g(n) ⟺ f(n) = Ω(g(n))   (1.11)
f(n) ≻ g(n) ⟺ f(n) = ω(g(n))   (1.12)

When the relations do not hold we write f(n) ≭ g(n), f(n) ⋠ g(n), etc.

Properties of the relations:

1. Reflexivity: f(n) ≍ f(n), f(n) ≼ f(n), f(n) ≽ f(n).

2. Symmetry: f(n) ≍ g(n) ⟺ g(n) ≍ f(n).
Proof: Assume c₁, c₂, n₀ > 0 as necessitated by (1.1), so that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀. Then 0 ≤ (1/c₂)·f(n) ≤ g(n) and g(n) ≤ (1/c₁)·f(n). Overall, 0 ≤ (1/c₂)·f(n) ≤ g(n) ≤ (1/c₁)·f(n). So there exist positive constants k₁ = 1/c₂ and k₂ = 1/c₁, such that 0 ≤ k₁·f(n) ≤ g(n) ≤ k₂·f(n) for all n ≥ n₀.

3. Transitivity:
f(n) ≍ g(n) and g(n) ≍ h(n) ⟹ f(n) ≍ h(n)
f(n) ≼ g(n) and g(n) ≼ h(n) ⟹ f(n) ≼ h(n)
f(n) ≺ g(n) and g(n) ≺ h(n) ⟹ f(n) ≺ h(n)
f(n) ≽ g(n) and g(n) ≽ h(n) ⟹ f(n) ≽ h(n)
f(n) ≻ g(n) and g(n) ≻ h(n) ⟹ f(n) ≻ h(n)

4. Transpose symmetry:
f(n) ≼ g(n) ⟺ g(n) ≽ f(n)
f(n) ≺ g(n) ⟺ g(n) ≻ f(n)

5. f(n) ≍ g(n) ⟹ f(n) ≼ g(n)
f(n) ≍ g(n) ⟹ f(n) ≽ g(n)
f(n) ≺ g(n) ⟹ f(n) ≼ g(n)
f(n) ≻ g(n) ⟹ f(n) ≽ g(n)

6. f(n) ≺ g(n) ⟹ f(n) ≭ g(n)
f(n) ≺ g(n) ⟹ f(n) ⊁ g(n)
f(n) ≺ g(n) ⟹ f(n) ⋡ g(n)
f(n) ≻ g(n) ⟹ f(n) ≭ g(n)
f(n) ≻ g(n) ⟹ f(n) ⊀ g(n)
f(n) ≻ g(n) ⟹ f(n) ⋠ g(n)
f(n) ≍ g(n) ⟹ f(n) ⊀ g(n)
f(n) ≍ g(n) ⟹ f(n) ⊁ g(n)

7. f(n) ≍ g(n) ⟺ f(n) ≼ g(n) and f(n) ≽ g(n)

8. There do not exist functions f(n) and g(n), such that f(n) ≺ g(n) and f(n) ≻ g(n).

9. Let f(n) = f₁(n) ± f₂(n) ± f₃(n) ± … ± f_k(n). Let
f₁(n) ≻ f₂(n)
f₁(n) ≻ f₃(n)
…
f₁(n) ≻ f_k(n)
Then f(n) ≍ f₁(n).

10. Let f(n) = f₁(n)·f₂(n)·…·f_k(n). Let some of the f_i(n) functions be positive constants. Say, f₁(n) = const, f₂(n) = const, …, f_m(n) = const for some m such that 1 ≤ m < k. Then f(n) ≍ f_{m+1}(n)·f_{m+2}(n)·…·f_k(n).

11. The statement "lim_{n→∞} f(n)/g(n) exists and is equal to some L such that 0 < L < ∞" is stronger than f(n) ≍ g(n):

lim_{n→∞} f(n)/g(n) = L ⟹ f(n) ≍ g(n)   (1.13)
f(n) ≍ g(n) ⇏ lim_{n→∞} f(n)/g(n) exists

To see why the second implication does not hold, suppose f(n) = n and g(n) = n·(2 + sin n). Obviously g(n) oscillates between n and 3n and thus f(n) ≍ g(n), but lim_{n→∞} f(n)/g(n) does not exist.

Problem 1 ([CLR00], pp. 24–25). Let f(n) = (1/2)n² − 3n. Prove that f(n) ≍ n².

Solution: For a complete solution we have to show some concrete positive constants c₁ and c₂ and a concrete value n₀ for the variable, such that for all n ≥ n₀, 0 ≤ c₁·n² ≤ (1/2)n² − 3n ≤ c₂·n². Since n > 0 this is equivalent to (divide by n²):

0 ≤ c₁ ≤ 1/2 − 3/n ≤ c₂

What we have here are in fact three inequalities:

0 ≤ c₁   (1.14)
c₁ ≤ 1/2 − 3/n   (1.15)
1/2 − 3/n ≤ c₂   (1.16)

(1.14) is trivial, any c₁ > 0 will do. To satisfy (1.16) we can pick n₀′ = 1 and then any positive c₂ ≥ 1/2 will do; say, c₂ = 1/2. The smallest integer value for n that makes the right-hand side of (1.15) positive is 7; the right-hand side becomes 1/2 − 3/7 = 7/14 − 6/14 = 1/14. So, to satisfy (1.15) we pick c₁ = 1/14 and n₀″ = 7. The overall n₀ is n₀ = max{n₀′, n₀″} = 7. The solution n₀ = 7, c₁ = 1/14, c₂ = 1/2 is obviously not unique.

Problem 2. Is it true that (1/1000)·n³ ≍ 1000·n²?

Solution: No. Assume the opposite. Then ∃c > 0 and n₀, such that for all n ≥ n₀:

(1/1000)·n³ ≤ c·1000·n²

It follows that ∀n ≥ n₀:

n ≤ 1000000·c

That is clearly false.

Problem 3. Is it true that for any two functions, at least one of the five relations ≍, ≼, ≺, ≽, and ≻ holds between them?

Solution: No. Proof by demonstrating a counterexample ([CLR00, pp. 31]): let f(n) = n and g(n) = n^(1+sin n). Since g(n) oscillates between n⁰ = 1 and n², it cannot be the case that f(n) ≍ g(n) or f(n) ≼ g(n) or f(n) ≺ g(n) or f(n) ≽ g(n) or f(n) ≻ g(n). However, this argument from [CLR00] holds only when n ∈ R⁺. If n ∈ N⁺, we cannot use the function g(n) directly, i.e. without proving additional stuff. Note that sin n reaches its extreme values 1 and −1 at 2kπ + π/2 and 2kπ + 3π/2, respectively, for integer k. As these are irrational numbers, the integer n cannot be equal to any of them. So, it is no longer true that g(n) oscillates between n⁰ = 1 and n². If we insist on using g(n) in our counterexample we have to argue, for instance, that:

for infinitely many (positive) values of the integer variable n, for some constant ε > 0, it is the case that g(n) ≥ n^(1+ε);

for infinitely many (positive) values of the integer variable n, for some constant σ > 0, it is the case that g(n) ≤ n^(1−σ).

An alternative is to use the function g′(n) = n^(1+sin(nπ+π/2)) that indeed oscillates between n⁰ = 1 and n² for integer n. Another alternative is to use

g″(n) = n², if n is even,
g″(n) = n^(−1), else.

Problem 4. Let p(n) be any univariate polynomial of degree k, such that the coefficient in the highest degree term is positive. Prove that p(n) ≍ n^k.

Solution: p(n) = a_k·n^k + a_{k−1}·n^(k−1) + … + a₁·n + a₀ with a_k > 0. We have to prove that there exist positive constants c₁ and c₂ and some n₀ such that for all n ≥ n₀, 0 ≤ c₁·n^k ≤ p(n) ≤ c₂·n^k. Since the leftmost inequality is obvious, we have to prove that

c₁·n^k ≤ a_k·n^k + a_{k−1}·n^(k−1) + … + a₁·n + a₀ ≤ c₂·n^k

For positive n we can divide by n^k, obtaining:

c₁ ≤ a_k + T(n) ≤ c₂, where T(n) = a_{k−1}/n + a_{k−2}/n² + … + a₁/n^(k−1) + a₀/n^k

Now it is obvious that any c₁ and c₂ such that 0 < c₁ < a_k and c₂ > a_k are suitable because lim_{n→∞} T(n) = 0.

Problem 5. Let a ∈ R and b ∈ R⁺. Prove that (n + a)^b ≍ n^b.

Solution: Note that this problem does not reduce to Problem 4 except in the special case when b is an integer. We start with the following trivial observations:

n + a ≤ n + |a| ≤ 2n, provided that n ≥ |a|
n + a ≥ n − |a| ≥ n/2, provided that n/2 ≥ |a|, that is, n ≥ 2|a|

It follows that:

(1/2)·n ≤ n + a ≤ 2n, if n ≥ 2|a|

By raising to the b-th power we obtain:

(1/2)^b·n^b ≤ (n + a)^b ≤ 2^b·n^b

So we have a proof with c₁ = (1/2)^b, c₂ = 2^b, and n₀ = 2|a|. Alternatively, solve this problem trivially using Problem 6.

Problem 6. Prove that for any two asymptotically positive functions f(n) and g(n) and any constant k ∈ R⁺,

f(n) ≍ g(n) ⟺ (f(n))^k ≍ (g(n))^k

Solution: In one direction, assume 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for some positive constants c₁ and c₂ and for all n ≥ n₀ for some n₀ > 0. Raise the three inequalities to the k-th power (recall that k is positive) to obtain

0 ≤ c₁^k·(g(n))^k ≤ (f(n))^k ≤ c₂^k·(g(n))^k, for all n ≥ n₀

Conclude that (f(n))^k ≍ (g(n))^k since c₁^k and c₂^k are positive constants. In the other direction the proof is virtually the same, only raise to power 1/k.

Problem 7. Prove that for any two asymptotically positive functions f(n) and g(n), it is the case that max(f(n), g(n)) ≍ f(n) + g(n).

Solution: We are asked to prove there exist positive constants c₁ and c₂ and a certain n₀, such that for all n ≥ n₀:

0 ≤ c₁·(f(n) + g(n)) ≤ max(f(n), g(n)) ≤ c₂·(f(n) + g(n))

As f(n) and g(n) are asymptotically positive,

∃n₀′ : ∀n ≥ n₀′, f(n) > 0
∃n₀″ : ∀n ≥ n₀″, g(n) > 0

Let n₀ = max{n₀′, n₀″}. Obviously, 0 ≤ c₁·(f(n) + g(n)) for n ≥ n₀, if c₁ > 0. It is also obvious that when n ≥ n₀:

(1/2)·f(n) + (1/2)·g(n) ≤ max(f(n), g(n))
f(n) + g(n) ≥ max(f(n), g(n))

which we can write as:

(1/2)·(f(n) + g(n)) ≤ max(f(n), g(n)) ≤ f(n) + g(n)

So we have a proof with n₀ as above, c₁ = 1/2, and c₂ = 1.
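The two-sided bound used in the proof of Problem 7, (1/2)·(f(n)+g(n)) ≤ max(f(n),g(n)) ≤ f(n)+g(n), is a pointwise fact about any two positive numbers, so it can be spot-checked mechanically. A minimal sketch; the random sampling below is an arbitrary testing choice, not something from the text:

```python
import random

random.seed(42)  # reproducible sampling; the seed value is arbitrary
for _ in range(10_000):
    f = random.uniform(0.001, 1000.0)  # stand-in for a value f(n) > 0
    g = random.uniform(0.001, 1000.0)  # stand-in for a value g(n) > 0
    m = max(f, g)
    # the bound from Problem 7, with c1 = 1/2 and c2 = 1
    assert 0.5 * (f + g) <= m <= f + g
```

The constants c₁ = 1/2 and c₂ = 1 are exactly the ones produced by the proof.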

Problem 8. Prove or disprove that for any two asymptotically positive functions f(n) and g(n) such that f(n) − g(n) is asymptotically positive, it is the case that max(f(n), g(n)) ≍ f(n) − g(n).

Solution: The claim is false. As a counterexample consider f(n) = n³ + 2n² and g(n) = n³ + n². In this case, max(f(n), g(n)) = n³ + 2n² = f(n) for all sufficiently large n. Clearly, f(n) − g(n) = n², which is asymptotically positive, but n² ≭ n³ + 2n².

Problem 9. Which of the following are true: 2^(n+1) ≍ 2^n, 2^(2n) ≍ 2^n.

Solution: 2^(n+1) ≍ 2^n is true because 2^(n+1) = 2·2^n, and for any constant c, c·2^n ≍ 2^n. On the other hand, 2^(2n) ≍ 2^n is not true. Assume the opposite. Then, having in mind that 2^(2n) = 2^n·2^n, it is the case that for some constant c and all sufficiently large n:

2^n·2^n ≤ c·2^n ⟹ 2^n ≤ c

That is clearly false.

Problem 10. Which of the following are true:

1/n² ≺ 1/n   (1.17)
2^(1/n²) ≺ 2^(1/n)   (1.18)

Solution: (1.17) is true because 0 ≤ 1/n² < c·(1/n) ⟺ 0 ≤ 1/n < c is true for every positive constant c and sufficiently large n. (1.18), however, is not true. Assume the opposite. Then:

∀c > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ 2^(1/n²) < c·2^(1/n)   (1.19)

But

lim_{n→∞} 2^(1/n²)/2^(1/n) = lim_{n→∞} 2^(1/n² − 1/n) = 1   (1.20)

because

lim_{n→∞} (1/n² − 1/n) = 0   (1.21)

It follows that (1.19) is false.

Problem 11. Which of the following are true:

1/n² ≺ 1 − 1/n   (1.22)
2^(1/n²) ≺ 2^(1 − 1/n)   (1.23)

Solution: (1.22) is true because

lim_{n→∞} (1/n²)/(1 − 1/n) = lim_{n→∞} (1/n²)·(n/(n − 1)) = lim_{n→∞} 1/(n·(n − 1)) = 0   (1.24)

(1.23) is false because

lim_{n→∞} 2^(1/n²)/2^(1 − 1/n) = lim_{n→∞} 2^(1/n² − 1 + 1/n) = 2^(−1) = const   (1.25)

Problem 12. Let a be a constant such that a > 1. Which of the following are true:

f(n) ≍ g(n) ⟹ a^f(n) ≍ a^g(n)   (1.26)
f(n) ≼ g(n) ⟹ a^f(n) ≼ a^g(n)   (1.27)
f(n) ≺ g(n) ⟹ a^f(n) ≺ a^g(n)   (1.28)

for all asymptotically positive functions f(n) and g(n).

Solution: (1.26) is not true; Problem 9 provides a counterexample since 2n ≍ n and 2^(2n) ≭ 2^n. The same counterexample suffices to prove that (1.27) is not true; note that 2n ≼ n but 2^(2n) ⋠ 2^n. Now consider (1.28).

Case 1, g(n) is increasing and unbounded: the statement is true. We have to prove that

∀c > 0, ∃n̂ : ∀n ≥ n̂, 0 ≤ a^f(n) < c·a^g(n)   (1.29)

Since the constant c is positive, we are allowed to consider its logarithm to base a, namely k = log_a c. So, c = a^k. Of course, k can be positive or negative or zero. We can rewrite (1.29) as

∀k, ∃n̂ : ∀n ≥ n̂, 0 ≤ a^f(n) < a^k·a^g(n)   (1.30)

Taking logarithm to base a of the two inequalities, we have

∀k, ∃n̂ : ∀n ≥ n̂, f(n) < k + g(n)   (1.31)

If we prove (1.31), we are done. By definition ((1.4) on page 2), the premise is

∀c > 0, ∃n₀ : ∀n ≥ n₀, 0 ≤ f(n) < c·g(n)

Since that holds for any c > 0, in particular it holds for c = 1/2. So, we have

∃n₀ : ∀n ≥ n₀, 0 ≤ f(n) < g(n)/2   (1.32)

But g(n) is increasing and unbounded. Therefore,

∀k, ∃n₁ : ∀n ≥ n₁, 0 < k + g(n)/2   (1.33)

We can rewrite (1.33) as

∀k, ∃n₁ : ∀n ≥ n₁, g(n)/2 < k + g(n)   (1.34)

From (1.32) and (1.34) we have

∀k, ∃n̂ : ∀n ≥ n̂, f(n) < k + g(n)   (1.35)

Since (1.35) and (1.31) are the same, the proof is completed.

Case 2, g(n) is increasing but bounded: in this case (1.28) is not true. Consider Problem 11. As it is shown there, 1/n² ≺ 1 − 1/n but 2^(1/n²) ⊀ 2^(1 − 1/n).

Case 3, g(n) is not increasing: in this case (1.28) is not true. Consider Problem 10. As it is shown there, 1/n² ≺ 1/n but 2^(1/n²) ⊀ 2^(1/n).

Problem 13. Let a be a constant such that a > 1. Which of the following are true:

a^f(n) ≍ a^g(n) ⟹ f(n) ≍ g(n)   (1.36)
a^f(n) ≼ a^g(n) ⟹ f(n) ≼ g(n)   (1.37)
a^f(n) ≺ a^g(n) ⟹ f(n) ≺ g(n)   (1.38)

for all asymptotically positive functions f(n) and g(n).

Solution: (1.36) is true, if g(n) is increasing and unbounded. Suppose there exist positive constants c₁ and c₂ and some n₀ such that

0 ≤ c₁·a^g(n) ≤ a^f(n) ≤ c₂·a^g(n), ∀n ≥ n₀

Since a > 1 and f(n) and g(n) are asymptotically positive, for all sufficiently large n the exponentials have strictly larger than one values. Therefore, we can take logarithm to base a (ignoring the leftmost inequality) to obtain:

log_a c₁ + g(n) ≤ f(n) ≤ log_a c₂ + g(n)

First note that, provided that g(n) is increasing and unbounded, for any constant k₁ such that 0 < k₁ < 1, k₁·g(n) ≤ log_a c₁ + g(n) for all sufficiently large n, regardless of whether the logarithm is positive or negative or zero. Then note that, provided that g(n) is increasing and unbounded, for any constant k₂ such that k₂ > 1, log_a c₂ + g(n) ≤ k₂·g(n) for all sufficiently large n, regardless of whether the logarithm is positive or negative or zero. Conclude there exists n₁, such that

k₁·g(n) ≤ f(n) ≤ k₂·g(n), ∀n ≥ n₁

However, if g(n) is increasing but bounded, (1.36) is not true. We already showed 2^(1/n²) ≍ 2^(1 − 1/n) (see (1.25)). However, since lim_{n→∞} (1/n²)/(1 − 1/n) = 0 (see (1.24)), it is the case that 1/n² ≺ 1 − 1/n, hence 1/n² ≭ 1 − 1/n, according to (1.6).

Furthermore, if g(n) is not increasing, (1.36) is not true. We already showed (see (1.20)) that lim_{n→∞} 2^(1/n²)/2^(1/n) = 1. According to (1.13), it is the case that 2^(1/n²) ≍ 2^(1/n). However, 1/n² ≭ 1/n (see (1.17)).

Consider (1.37). If g(n) is increasing and unbounded, it is true. The proof can be done easily as in the case with (1.36). Otherwise the statement may fail: as shown in Problem 11, 2^(1 − 1/n) ≍ 2^(1/n²), therefore 2^(1 − 1/n) ≼ 2^(1/n²), but 1 − 1/n ≻ 1/n², therefore 1 − 1/n ⋠ 1/n². Likewise, with g(n) = 1/n², we know that 2^(1/n) ≼ 2^(1/n²) but 1/n ⋠ 1/n².

Now consider (1.38). It is not true. As a counterexample, consider that 2^n ≺ 2^(2n) but n ⊀ 2n.

Problem 14. Let a be a constant such that a > 1. Which of the following are true:

log_a φ(n) ≍ log_a ψ(n) ⟹ φ(n) ≍ ψ(n)   (1.39)
log_a φ(n) ≼ log_a ψ(n) ⟹ φ(n) ≼ ψ(n)   (1.40)
log_a φ(n) ≺ log_a ψ(n) ⟹ φ(n) ≺ ψ(n)   (1.41)
φ(n) ≍ ψ(n) ⟹ log_a φ(n) ≍ log_a ψ(n)   (1.42)
φ(n) ≼ ψ(n) ⟹ log_a φ(n) ≼ log_a ψ(n)   (1.43)
φ(n) ≺ ψ(n) ⟹ log_a φ(n) ≺ log_a ψ(n)   (1.44)

for all asymptotically positive functions φ(n) and ψ(n).

Solution: Let φ(n) = a^f(n) and ψ(n) = a^g(n), which means that log_a φ(n) = f(n) and log_a ψ(n) = g(n). Consider (1.26) and conclude that (1.39) is not true. Consider (1.36) and conclude that (1.42) is true if ψ(n) is increasing and unbounded, and false otherwise. Consider (1.27) and conclude that (1.40) is not true. Consider (1.37) and conclude that (1.43) is true if ψ(n) is increasing and unbounded, and false otherwise. Consider (1.28) and conclude that (1.41) is true if ψ(n) is increasing and unbounded, and false otherwise. Consider (1.38) and conclude that (1.44) is not true.

Problem 15. Prove that for any two asymptotically positive functions f(n) and g(n), f(n) ≍ g(n) iff f(n) ≼ g(n) and f(n) ≽ g(n).

Solution: In one direction, assume that f(n) ≍ g(n). Then there exist positive constants c₁ and c₂ and some n₀, such that:

0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n), ∀n ≥ n₀

It follows that,

0 ≤ c₁·g(n) ≤ f(n), ∀n ≥ n₀   (1.45)
0 ≤ f(n) ≤ c₂·g(n), ∀n ≥ n₀   (1.46)

In the other direction, assume that f(n) ≼ g(n) and f(n) ≽ g(n). Then there exists a positive constant c″ and some n₀″, such that:

0 ≤ f(n) ≤ c″·g(n), ∀n ≥ n₀″

and there exists a positive constant c′ and some n₀′, such that:

0 ≤ c′·g(n) ≤ f(n), ∀n ≥ n₀′

It follows that:

0 ≤ c′·g(n) ≤ f(n) ≤ c″·g(n), ∀n ≥ max{n₀′, n₀″}

Lemma 1 (Stirling's approximation).

n! = √(2πn)·(n/e)^n·(1 + Θ(1/n))   (1.47)

Here, Θ(1/n) means any function that is in the set Θ(1/n). A derivation of that formula without specifying explicitly the √(2π) factor is found in Problem 143 on page 260.

Problem 16. Prove that

lg n! ≍ n·lg n   (1.48)

Solution: Use Stirling's approximation, ignoring the (1 + Θ(1/n)) factor, and take logarithm of both sides to obtain:

lg(n!) = lg(√(2πn)) + n·lg n − n·lg e

By Property 9 of the relations, lg(√(2πn)) + n·lg n − n·lg e ≍ n·lg n.
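The relation (1.48) can also be observed numerically. A sketch using the log-gamma function to evaluate lg(n!) without computing the huge number n! itself; the specific sample points are arbitrary:

```python
import math

def lg_factorial(n):
    # lg(n!) via the log-gamma function: ln(n!) = lgamma(n + 1)
    return math.lgamma(n + 1) / math.log(2)

# lg(n!) / (n lg n) approaches 1, consistent with lg(n!) ≍ n lg n;
# the gap that remains at small n is the lg(sqrt(2*pi*n)) - n*lg(e) term.
ratios = [lg_factorial(n) / (n * math.log2(n)) for n in (10, 10**3, 10**6)]
assert all(0.5 < r < 1.0 for r in ratios)
assert abs(ratios[-1] - 1) < abs(ratios[0] - 1)
```

That the ratio tends to 1 (not merely to a constant) reflects the fact that the dominant Stirling term is exactly n·lg n.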

Problem 17. Prove that for any constant a > 1, a^n ≺ n!.

Solution: Because of the factorial let us restrict n to positive integers.

lim_{n→∞} a^n/n! = lim_{n→∞} (a·a·a·…·a·a)/(n·(n−1)·(n−2)·…·1) = 0 and lim_{n→∞} n!/n^n = lim_{n→∞} (n·(n−1)·(n−2)·…·1)/(n·n·n·…·n) = 0   (1.49)

each product having n factors. The first limit establishes the claim; the second, used later, shows that n! ≺ n^n.

Problem 18 (polylogarithm versus constant power of n). Let a, k and ε be any constants, such that k > 0, a > 1, and ε > 0. Prove that:

(log_a n)^k ≺ n^ε   (1.50)

Solution:

lim_{n→∞} (log_a n)^k/n^ε   [let b = ε/k]
= lim_{n→∞} ((log_a n)/n^b)^k
= (lim_{n→∞} (log_a n)/n^b)^k   [k is positive]
= (lim_{n→∞} (1/(n·ln a))/(b·n^(b−1)))^k   [use l'Hôpital's rule]
= (lim_{n→∞} 1/((ln a)·b·n^b))^k = 0

Problem 19 (constant power of n versus exponent). Let a and ε be any constants, such that a > 1 and ε > 0. Prove that:

n^ε ≺ a^n   (1.51)

Solution: Take log_a of both sides. The left-hand side yields ε·log_a n and the right-hand side yields n. But ε·log_a n ≺ n because of Problem 18. Conclude immediately that the desired relation holds.
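Problems 18 and 19 give the hierarchy (log_a n)^k ≺ n^ε ≺ a^n. Comparing the exponential case in logarithmic scale avoids floating-point overflow for large n; the exponents 3 and 0.5 and the base 1.01 below are arbitrary sample constants, not from the text:

```python
import math

ns = (10**2, 10**4, 10**8)

# (lg n)^3 versus n^0.5: the ratio shrinks toward 0 (Problem 18).
r = [math.log2(n) ** 3 / n ** 0.5 for n in ns]
assert r[0] > r[1] > r[2]

# n^3 versus 1.01^n, compared via lg to avoid overflow:
# lg(n^3) - lg(1.01^n) tends to -infinity (Problem 19).
d = [3 * math.log2(n) - n * math.log2(1.01) for n in ns]
assert d[0] > d[1] > d[2]
```

Note how slowly the polylogarithm loses: at n = 10⁴ the ratio (lg n)³/√n is still above 20, which is why such claims must be argued by limits rather than by plugging in small values.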

Definition 1 (log-star function, [CLR00], pp. 36). Let the function lg^(i) n be defined recursively for nonnegative integers i as follows:

lg^(i) n = n, if i = 0
lg^(i) n = lg(lg^(i−1) n), if i > 0 and lg^(i−1) n > 0
lg^(i) n = undefined, if i > 0 and lg^(i−1) n ≤ 0 or lg^(i−1) n is undefined

Then

lg* n = min { i ≥ 0 | lg^(i) n ≤ 1 }

According to this definition,

lg* 2 = 1, since lg^(0) 2 = 2 and lg^(1) 2 = lg(lg^(0) 2) = lg 2 = 1
lg* 3 = 2, since lg^(0) 3 = 3 and lg(lg^(0) 3) = lg(lg 3) = 0.6644…
lg* 4 = 2
lg* 5 = 3
…
lg* 16 = 3
lg* 17 = 4
…
lg* 65536 = 4
lg* 65537 = 5
…
lg* 2^65536 = 5
lg* (2^65536 + 1) = 6
…

Obviously, every real number t > 1 can be represented by a tower of twos:

t = 2^2^…^2^s

where s is a real number such that 1 < s ≤ 2. The height of the tower is the number of elements in this sequence. For instance,

number | its tower of twos | the height of the tower
2 | 2 | 1
3 | 2^1.5849625007… | 2
4 | 2^2 | 2
5 | 2^2^1.2153… | 3
16 | 2^2^2 | 3
17 | 2^2^2^1.0223… | 4
65536 | 2^2^2^2 | 4
65537 | 2^2^2^2^1.0000005164167… | 5

Having that in mind, it is trivial to see that lg* n is the height of the tower of twos of n.

Problem 20 ([CLR00], problem 2-3, pp. 38–39). Rank the following thirty functions by order of growth. That is, find the equivalence classes of the relation ≍ and show their order by ≻.

lg(lg* n)   2^(lg* n)   (√2)^(lg n)   n²   n!   (lg n)!
(3/2)^n   n³   lg² n   lg(n!)   2^(2^n)   n^(1/lg n)
ln ln n   lg* n   n·2^n   n^(lg lg n)   ln n   1
2^(lg n)   (lg n)^(lg n)   e^n   4^(lg n)   (n+1)!   √(lg n)
lg*(lg n)   2^(√(2·lg n))   n   2^n   n·lg n   2^(2^(n+1))

Solution: 2^(2^(n+1)) ≻ 2^(2^n) because 2^(2^(n+1)) = 2^(2^n·2) = 2^(2^n)·2^(2^n).

2^(2^n) ≻ (n+1)!. To see why, take logarithm to base two of both sides. The left-hand side becomes 2^n, the right-hand side becomes lg((n+1)!). By (1.47), lg((n+1)!) ≍ (n+1)·lg(n+1), and clearly (n+1)·lg(n+1) ≺ 2^n. As 2^n = lg(2^(2^n)), by (1.41) we have 2^(2^n) ≻ (n+1)!.

(n+1)! ≻ n! because (n+1)! = (n+1)·n!. n! ≻ e^n by (1.49).

e^n ≻ n·2^n. To see why, consider:

lim_{n→∞} (n·2^n)/e^n = lim_{n→∞} n/(e/2)^n = 0

n·2^n ≻ (3/2)^n. To see why, consider:

lim_{n→∞} (3/2)^n/(n·2^n) = lim_{n→∞} (1/n)·(3/4)^n = 0

(3/2)^n ≻ n^(lg lg n). To see why, take lg of both sides. The left-hand side becomes n·lg(3/2), the right-hand side becomes (lg n)·(lg lg n). Clearly, lg² n ≻ (lg n)·(lg lg n) and n ≻ lg² n by (1.50). By transitivity, n ≻ (lg n)·(lg lg n), and so n·lg(3/2) ≻ (lg n)·(lg lg n). Apply (1.41) and the desired conclusion follows.

(lg n)^(lg n) = n^(lg lg n), which is obvious if we take lg of both sides. So, (lg n)^(lg n) ≍ n^(lg lg n).

(lg n)^(lg n) ≻ (lg n)!. To see why, substitute lg n with m, obtaining m^m ≻ m!, and apply (1.49).

(lg n)! ≻ n³. Take lg of both sides. The left-hand side becomes lg((lg n)!). Substitute lg n with m, obtaining lg(m!). By (1.48), lg(m!) ≍ m·lg m, therefore lg((lg n)!) ≍ (lg n)·(lg lg n). The right-hand side becomes 3·lg n. Compare (lg n)·(lg lg n) with 3·lg n:

lim_{n→∞} (3·lg n)/((lg n)·(lg lg n)) = lim_{n→∞} 3/(lg lg n) = 0

It follows that (lg n)·(lg lg n) ≻ 3·lg n. Apply (1.41) to draw the desired conclusion.

n³ ≻ n² ≍ 4^(lg n) because 4^(lg n) = 2^(2·lg n) = 2^(lg n²) = n² by the properties of the logarithm.

n² ≻ n·lg n. n·lg n ≍ lg(n!) (see (1.48)). n·lg n ≻ n ≍ 2^(lg n) because n = 2^(lg n) by the properties of the logarithm.

n ≻ (√2)^(lg n) because (√2)^(lg n) = 2^((1/2)·lg n) = 2^(lg √n) = √n and clearly n ≻ √n.

(√2)^(lg n) ≻ 2^(√(2·lg n)). To see why, note that lg n ≻ √(lg n), therefore (1/2)·lg n ≻ √2·√(lg n) = √(2·lg n). Apply (1.28) and conclude that 2^((1/2)·lg n) ≻ 2^(√(2·lg n)), i.e. (√2)^(lg n) ≻ 2^(√(2·lg n)).

2^(√(2·lg n)) ≻ lg² n. To see why, take lg of both sides. The left-hand side becomes √(2·lg n) and the right-hand side becomes lg(lg² n) = 2·lg(lg n). Substitute lg n with m: the left-hand side becomes √(2m) = √2·m^(1/2) and the right-hand side becomes 2·lg m. By (1.50) we know that m^(1/2) ≻ lg m, therefore √2·m^(1/2) ≻ 2·lg m, therefore 2^(√(2·lg n)) ≻ lg² n. Having in mind (1.41) we draw the desired conclusion.

lg² n ≻ ln n. To see this is true, observe that ln n = lg n / lg e.

ln n ≻ √(lg n), because ln n ≍ lg n and lg n ≻ √(lg n) by (1.50).

√(lg n) ≻ ln ln n. The left-hand side is (1/√(lg e))·√(ln n). Substitute ln n with m and the claim becomes √m ≻ ln m, which follows from (1.50).

ln ln n ≻ 2^(lg* n). To see why this is true, note that ln ln n ≍ lg lg n and rewrite the claim as lg lg n ≻ 2^(lg* n). Take lg of both sides. The left-hand side becomes lg lg lg n, i.e. a triple logarithm. The right-hand side becomes lg* n. If we think of n as a tower of twos, it is obvious that the triple logarithm decreases the height of the tower by three, while, as we said before, the log-star measures the height of the tower. Clearly, the latter is much smaller than the former.

2^(lg* n) ≻ lg* n. Clearly, for any increasing unbounded function f(n), 2^f(n) ≻ f(n).

lg* n ≍ lg*(lg n). Think of n as a tower of twos and note that the difference in the height of n and lg n is one. Therefore, lg*(lg n) = (lg* n) − 1.

lg* n ≻ lg(lg* n). Substitute lg* n with f(n) and the claim becomes f(n) ≻ lg f(n), which is clearly true since f(n) is increasing.

lg(lg* n) ≻ 1. 1 ≍ n^(1/lg n). Note that n^(1/lg n) = 2: take lg of both sides, the left-hand side becomes lg(n^(1/lg n)) = (1/lg n)·lg n = 1 and the right-hand side becomes lg 2 = 1.

Problem 21. Give an example of a function f(n), such that for any function g(n) among the thirty functions from Problem 20, f(n) ⋠ g(n) and f(n) ⋡ g(n).

Solution: For instance,

f(n) = 2^(2^(n+2)), if n is even
f(n) = 1/n, if n is odd

Problem 22. Is it true that for any asymptotically positive functions f(n) and g(n), f(n) + g(n) ≍ min(f(n), g(n))?

Solution: No. As a counterexample, consider f(n) = n and g(n) = 1/n. Then min(f(n), g(n)) = 1/n, f(n) + g(n) = n + 1/n, and certainly n + 1/n ≭ 1/n.

Problem 23. Is it true that for any asymptotically positive function f(n), f(n) ≼ (f(n))²?

Solution: If f(n) is increasing, it is trivially true. If it is decreasing, however, it may not be true: consider (1.17).

Problem 24. Is it true that for any asymptotically positive function f(n), f(n) ≍ f(n/2)?

Solution: No. As a counterexample, consider f(n) = 2^n. Then f(n/2) = 2^(n/2). As we already saw, 2^n ≭ 2^(n/2).

Problem 25. Compare the growth of n^(lg n) and (lg n)^n.

Solution: Take logarithm of both sides. The left-hand side becomes (lg n)·(lg n) = lg² n, the right-hand side, n·lg(lg n). As n·lg(lg n) ≻ lg² n, it follows that (lg n)^n ≻ n^(lg n).

Problem 26. Compare the growth of n^(lg lg lg n) and (lg n)!.

Solution: Take lg of both sides. The left-hand side becomes (lg n)·(lg lg lg n), the right-hand side becomes lg((lg n)!). Substitute lg n with m in the latter expression to get lg(m!) ≍ m·lg m, and that is (lg n)·(lg lg n). Since (lg n)·(lg lg n) ≻ (lg n)·(lg lg lg n), it follows that (lg n)! ≻ n^(lg lg lg n).

Problem 27. Let n!! = (n!)!. Compare the growth of n!! and (n−1)!!·((n−1)!)^((n−1)!).

Solution: Let (n−1)! = v. Then n! = n·v. We compare

n!! vs (n−1)!!·((n−1)!)^((n−1)!)
(n·v)! vs v!·v^v

Apply Stirling's approximation to both sides to get:

√(2πnv)·(nv)^(nv)·e^(−nv) vs √(2πv)·v^v·e^(−v)·v^v
√(2πnv)·(nv)^(nv) vs √(2πv)·e^((n−1)v)·v^(2v)

Divide both sides by √(2πv)·v^(2v):

√n·n^(nv)·v^((n−2)v) vs e^((n−1)v)

Ignore the factor √n on the left. If we derive without it that the left side grows faster, surely it grows even faster with it. So, consider:

n^(nv)·v^((n−2)v) vs e^((n−1)v)

Raise both sides to the power 1/v:

n^n·v^(n−2) vs e^(n−1)

Since v = (n−1)! ≥ 1 and clearly n^n ≻ e^(n−1), the left-hand side grows faster. Therefore n!! ≻ (n−1)!!·((n−1)!)^((n−1)!).

Lemma 2. The function series:

S(x) = (ln x)/x + (ln² x)/x² + (ln³ x)/x³ + …

is convergent for x > 1. Furthermore, lim_{x→∞} S(x) = 0.

Proof: It is well known that the series

S′(x) = 1/x + 1/x² + 1/x³ + …

called geometric series, is convergent for x > 1, and S′(x) = 1/(x − 1), so lim_{x→∞} S′(x) = 0. Consider the series

S″(x) = 1/√x + 1/(√x)² + 1/(√x)³ + …   (1.52)

when x > 1. Clearly, it is a geometric series and is convergent for √x > 1, i.e. x > 1, and lim_{x→∞} S″(x) = 0. Let us rewrite S(x) as

S(x) = (1/√x)·(ln x/√x) + (1/(√x)²)·(ln x/√x)² + (1/(√x)³)·(ln x/√x)³ + …   (1.53)

For each term f_k(x) = (1/(√x)^k)·(ln x/√x)^k of S(x), k ≥ 1, for large enough x, it is the case that f_k(x) < g_k(x) where g_k(x) = 1/(√x)^k is the k-th term of S″(x). To see why this is true, consider (1.50): ln x ≺ √x, so ln x/√x < 1 for all sufficiently large x. Then the fact that S″(x) is convergent and lim_{x→∞} S″(x) = 0 implies the desired conclusion.

Raise both sides to 1 v : That is, vs e 1 v vs e 1 ( 1)! Apply Stirlig s aproximatio secod time to get: That is, vs e 1 π( 1) ( 1) 1 e 1 vs π( 1) ( 1) 1 Sice π( 1) ( 1) 1 ( 1) ( 1 ), we have vs ( 1) ( 1 ) Clearly, ( 1) ( 1 ), therefore!! ( 1)!! (( 1)!)!. Lemma. The fuctio series: S(x) = l x x + l x x + l3 x x 3 +... is coverget for x > 1. Furthermore, lim x S(x) = 0. Proof: It is well kow that the series S (x) = 1 x + 1 x + 1 x 3 +... called geometric series is coverget for x > 1 ad S (x) = 1 lim x S (x) = 0. Cosider the series x 1 whe x > 1. Clearly, S (x) = 1 x + 1 ( x) + 1 ( x) 3 +... (1.5) It is a geometric series ad is coverget for x > 1, i.e. x > 1, ad lim x S (x) = 0. Let us rewrite S(x) as S(x) = 1 x. x l x 1 + ( x). For each term f k (x) = 1 ( x l x ( ( x) k. x l x 1 ) + ( x) 3. ( x l x ) 3 +... (1.53) ) k of S(x), k 1, for large eough x, it is the case that f k (x) < g k (x) where g k (x) = 1 ( x) k is the k th term of S (x). To see why this is true, cosider (1.50). The the fact that S (x) is coverget ad lim x S (x) = 0 implies the desired coclusio. 19

Problem 8 ([Ku73], pp. 107). Prove that 1. Solutio: We will show a eve stroger statemet: lim = 1. It is kow that: e x = 1 + x + x! + x3 3! +... Note that = e l = e ( l ). e ( l ) l = 1 + ( l )! ( l ) 3 3! + + +... }{{} T () Lemma implies lim T() = 0. It follows that lim = 1. ( lg We ca also say that = 1 + O otatio stads for ay fuctio of the set. ), ( = 1 + lg + O lg ), etc, where the big-oh Problem 9 ([Ku73], pp. 107). Prove that ( 1 ) l. Solutio: As = 1 + l + it is the case that: 1 = l + Multiply by to get: ( l ) +! ( l ) +! ( 1 ) (l ) = l +! ( l ) 3 +... 3! ( l ) 3 +... 3! + (l )3 3! +... } {{ } T () Note that lim T() = 0 by a obvious geeralisatio of Lemma. The claim follows immediately. Problem 30. Compare the growth of, ( + 1), +1, ad ( + 1) +1. Solutio: ( + 1) because ( + 1) lim = lim ( + 1 ) ( = lim 1 + 1 = e ) Clearly, (+1) =.. Ad (+1) ( + 1) (+1) : ( + 1) +1 ( lim +1 = lim 1 + 1 +1 = lim ) ( 1 + 1 ) lim ( 1 + 1 ) = e.1 = e 0

Problem 31. Let k be a constant such that k > 1. Prove that
1 + k + k² + k³ + ... + k^n = Θ(k^n)
Solution: First assume n is an integer variable. Then
1 + k + k² + k³ + ... + k^n = (k^{n+1} − 1)/(k − 1) = Θ(k^n)
The result can obviously be extended for real n, provided we define the sum appropriately. For instance, if n ∈ R⁺ \ N, let the sum be
S(n) = 1 + k + k² + k³ + ... + k^{⌊n⌋} + k^n
By the above result, S(n) = k^n + Θ(k^{⌊n⌋}) = Θ(k^n).
Problem 32. Let k be a constant such that 0 < k < 1. Prove that
1 + k + k² + k³ + ... + k^n = Θ(1)
Solution:
1 + k + k² + k³ + ... + k^n < Σ_{t=0}^{∞} k^t = 1/(1 − k) = Θ(1)
Corollary 1.
1 + k + k² + k³ + ... + k^n = Θ(1), if 0 < k < 1; Θ(n), if k = 1; Θ(k^n), if k > 1
Problem 33. Let f(x) = 2^{2^{⌊x⌋}} and g(x) = 2^{2^{⌈x⌉}} where x ∈ R⁺. Determine which of the following are true and which are false:
1. f(x) ≺ g(x)
2. f(x) ≻ g(x)
3. f(x) ≍ g(x)
4. f(x) ⪯ g(x)
5. f(x) ⪰ g(x)
21

g(x) = 2^{2^{⌈x⌉}}        f(x) = 2^{2^{⌊x⌋}}
Figure 1.1: f(x) and g(x) from Problem 33.

Solution: Note that ∀x ∈ N⁺, ⌊x⌋ = ⌈x⌉, therefore f(x) = g(x) whenever x ∈ N⁺. On the other hand, ∀x ∈ R⁺ \ N⁺, ⌈x⌉ = ⌊x⌋ + 1, therefore
g(x) = 2^{2^{⌊x⌋+1}} = 2^{2·2^{⌊x⌋}} = (2^{2^{⌊x⌋}})² = (f(x))²
whenever x ∈ R⁺ \ N⁺. Figure 1.1 illustrates the way that f(x) and g(x) grow.
First assume that f(x) ≺ g(x). By definition, for every constant c > 0 there exists x₀, such that ∀x ≥ x₀, f(x) < c·g(x). It follows that for c = 1 there exists a value for the variable, say x̂, such that ∀x ≥ x̂, f(x) < g(x). However,
⌈x̂⌉ ≥ x̂ and ⌈x̂⌉ ∈ N⁺
Therefore,
f(⌈x̂⌉) < g(⌈x̂⌉)
On the other hand,
⌈x̂⌉ ∈ N⁺ implies f(⌈x̂⌉) = g(⌈x̂⌉)
We derived
f(⌈x̂⌉) = g(⌈x̂⌉) and f(⌈x̂⌉) < g(⌈x̂⌉)
We derived a contradiction, therefore f(x) ⊀ g(x). Analogously we prove that f(x) ⊁ g(x).
To see that f(x) ≍ g(x) is false, note that ∀x ∈ R⁺ there exists x′ ≥ x such that g(x′) = (f(x′))². As f(x) is a growing function, its square must have a higher asymptotic growth rate.
Now we prove that f(x) ⪯ g(x). Indeed,
∀x ∈ R⁺, ⌊x⌋ ≤ ⌈x⌉
∀x ∈ R⁺, 2^{⌊x⌋} ≤ 2^{⌈x⌉}
∀x ∈ R⁺, 2^{2^{⌊x⌋}} ≤ 2^{2^{⌈x⌉}}
∃c > 0, c = const, such that ∀x ∈ R⁺, f(x) ≤ c·g(x)
Finally we prove that f(x) ⪰ g(x) is false. Assume the opposite. Since f(x) ⪯ g(x), by property 7 on page 4 we derive f(x) ≍ g(x), and that contradicts our result that f(x) ≍ g(x) is false.
Problem 34. Prove that (n choose n/2) ≍ 2^n/√n. You may assume n is even.
Solution: Let m = (n choose n/2), which is n!/((n/2)!)² if we assume n is even. It is known that n! = √(2πn)·(n/e)^n·(1 + Θ(1/n)) (Stirling's approximation).
23

Apply Stirling's approximation (on page 1), ignoring the (1 + Θ(1/n)) factor, on the three factorials to get:
n!/((n/2)!)² ≈ (√(2πn)·(n/e)^n) / (√(πn)·(n/(2e))^{n/2})²
= (√(2πn)·(n/e)^n) / (πn·(n/(2e))^n)
= √(2πn)·2^n/(πn)
= √(2/π)·2^n·n^{−1/2}
Problem 35. Compare the growth of 2^n + n² and 3^n.
Solution:
lim_{n→∞} (2^n + n²)/3^n = lim_{n→∞} (2/3)^n + lim_{n→∞} n²/3^n = 0
To see why the second limit is 0, take lg of n² and 3^n, namely 2·lg n vs. n·lg 3. We derived 2^n + n² ≺ 3^n.
24
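The conclusion of Problem 35 can be sanity-checked numerically; a short Python sketch (ours, with arbitrary sample points):

```python
# (2^n + n^2) / 3^n decreases toward 0, so 2^n + n^2 is o(3^n).
ratios = [(2 ** n + n * n) / 3 ** n for n in (1, 5, 10, 20, 40)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))   # strictly decreasing
assert ratios[-1] < 1e-6
```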

Part II

Analysis of Algorithms

25

Chapter 2

Iterative Algorithms

In this section we compute the asymptotic running time of algorithms that use the for and while statements but make no calls to other algorithms or themselves. The time complexity is expressed as a function of the size of the input, in case the input is an array or a matrix, or as a function of the upper bound of the loops. Consider the time complexity of the following trivial algorithm.

Add-1(n: nonnegative integer)
1  a ← 0
2  for i ← 1 to n
3      a ← a + i
4  return a

We make the following assumptions: the expression at line 3 is executed in constant time regardless of how large n is, the expression at line 1 is executed in constant time, and the loop control variable check and assignment of the for loop at line 2 are executed in constant time. Since we are interested in the asymptotic running time, not in the precise one, it suffices to find the number of times the expression inside the loop (line 3 in this case) is executed as a function of the upper bound n on the loop control variable. Let that function be f(n). The time complexity of Add-1 will then be Θ(f(n)). We compute f(n) as follows. First we substitute the expression inside the loop with a ← a + 1, where a is the counter variable that is set to zero initially. Then we find the value of a after the loop finishes as a function of n, where n is the upper bound of the loop control variable i. Using that approach, algorithm Add-1 becomes Add-1-modified as follows.

Add-1-modified(n: nonnegative integer)
1  a ← 0
2  for i ← 1 to n
3      a ← a + 1
4  return a
26
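The substitution step can be mirrored directly in executable code. A small Python sketch of both versions (the function names are ours):

```python
def add1(n):
    """Direct transcription of Add-1: returns 1 + 2 + ... + n."""
    a = 0
    for i in range(1, n + 1):
        a = a + i
    return a

def add1_modified(n):
    """Add-1 with the loop body replaced by the counter a <- a + 1;
    the returned value is f(n), the number of times the body runs."""
    a = 0
    for i in range(1, n + 1):
        a = a + 1
    return a

assert add1(1000) == 1000 * 1001 // 2   # the value Add-1 actually computes
assert add1_modified(1000) == 1000      # f(n) = n, hence Theta(n) time
```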

The value that Add-1-modified outputs is Σ_{i=1}^{n} 1 = n, therefore its time complexity is Θ(n). Now consider another algorithm:

Add-2(n: nonnegative integer)
1  return n(n + 1)/2

Clearly, Add-2 is equivalent to Add-1 but the running time of Add-2 is, under the said assumptions, constant. We denote constant running time by Θ(1).† It is not incorrect to say the running time of both algorithms is O(n) but the big-theta notation is superior as it grasps precisely, in the asymptotic sense, the algorithm's running time.
Consider the following iterative algorithm:

Add-3(n: nonnegative integer)
1  a ← 0
2  for i ← 1 to n
3      for j ← 1 to n
4          a ← a + 1
5  return a

The value it outputs is Σ_{i=1}^{n} Σ_{j=1}^{n} 1 = Σ_{i=1}^{n} n = n², therefore its time complexity is Θ(n²). Algorithm Add-3 has two nested cycles. We can generalise that to k nested cycles as follows.

Add-generalised(n: nonnegative integer)
1  for i₁ ← 1 to n
2      for i₂ ← 1 to n
3          ...
4              for i_k ← 1 to n
5                  expression

where expression is computed in Θ(1), has running time Θ(n^k). Let us consider a modification of Add-3:

Add-4(n: nonnegative integer)
1  a ← 0
2  for i ← 1 to n
3      for j ← i to n
4          a ← a + 1
5  return a

† All constants are big-theta of each other so we might as well have used Θ(1000) or Θ(0.0001) but we prefer the simplest form Θ(1).
27
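Both counts can be confirmed by enumeration; a Python sketch (ours), using itertools to stand in for the k nested loops of Add-generalised:

```python
from itertools import product

def add3(n):
    """Direct transcription of Add-3: two nested loops over 1..n."""
    a = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            a = a + 1
    return a

def nested_body_count(n, k):
    """Counts innermost-body executions of k nested 1..n loops by
    enumerating all index tuples (the shape of Add-generalised)."""
    return sum(1 for _ in product(range(n), repeat=k))

assert add3(30) == 30 ** 2                    # Theta(n^2)
assert nested_body_count(10, 3) == 10 ** 3    # Theta(n^k) with k = 3
```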

The running time is determined by the output a and that is:
Σ_{i=1}^{n} Σ_{j=i}^{n} 1 = Σ_{i=1}^{n} (Σ_{j=1}^{n} 1 − Σ_{j=1}^{i−1} 1) = Σ_{i=1}^{n} (n − i + 1) = n(n + 1) − Σ_{i=1}^{n} i = n(n + 1) − n(n + 1)/2 = n²/2 + n/2 = Θ(n²)
(see Problem 4 on page 6.)
It follows that asymptotically Add-4 has the same running time as Add-3. Now consider a modification of Add-4.

Add-5(n: nonnegative integer)
1  a ← 0
2  for i ← 1 to n
3      for j ← i + 1 to n
4          a ← a + 1
5  return a

The running time is determined by the output a and that is:
Σ_{i=1}^{n} Σ_{j=i+1}^{n} 1 = Σ_{i=1}^{n} (n − i) = n² − Σ_{i=1}^{n} i = n² − n(n + 1)/2 = n²/2 − n/2 = Θ(n²)
Consider the following algorithm:

A2(n: positive integer)
1  a ← 0
2  for i ← 1 to n − 1
3      for j ← i + 1 to n
4          for k ← 1 to j
5              a ← a + 1
6  return a

We are asked to determine the a that A2 returns as a function of n. The answer clearly is
28

Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} Σ_{k=1}^{j} 1
But Σ_{k=1}^{j} 1 = j, so we just need to find an equivalent closed form for
Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} j
Since Σ_{j=i+1}^{n} j = Σ_{j=1}^{n} j − Σ_{j=1}^{i} j = n(n+1)/2 − i(i+1)/2, we have
Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} j = Σ_{i=1}^{n−1} (n(n+1)/2 − i(i+1)/2) = (n−1)·n(n+1)/2 − (1/2)·Σ_{i=1}^{n−1} (i² + i)
Further, Σ_{i=1}^{n−1} i² = (n−1)·n·(2n−1)/6 and Σ_{i=1}^{n−1} i = (n−1)·n/2, so we have
(n−1)·n(n+1)/2 − (n−1)·n·(2n−1)/12 − (n−1)·n/4 = (n(n−1)/12)·(6(n+1) − (2n−1) − 3) = (n(n−1)/12)·(4n+4) = n(n−1)(n+1)/3
That implies that the running time of A2 is Θ(n³). Clearly A2 is equivalent to the following algorithm.

A3(n: positive integer)
1  return n(n−1)(n+1)/3

whose running time is Θ(1).

A4(n: positive integer)
1  a ← 0
2  for i ← 1 to n
3      for j ← i + 1 to n
4          for k ← i + j − 1 to n
5              a ← a + 1
6  return a

Problem 36. Find the running time of algorithm A4 by determining the value of a it returns as a function of n, f(n). Find a closed form for f(n).
29
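The closed form behind A3 is easy to confirm by running A2 directly; a brute-force Python check (ours):

```python
def a2(n):
    """Direct transcription of algorithm A2."""
    a = 0
    for i in range(1, n):                # i = 1 .. n-1
        for j in range(i + 1, n + 1):    # j = i+1 .. n
            for k in range(1, j + 1):    # k = 1 .. j
                a = a + 1
    return a

def a3(n):
    """The Theta(1) equivalent A3."""
    return n * (n - 1) * (n + 1) // 3

assert all(a2(n) == a3(n) for n in range(1, 30))
```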

Solution:
f(n) = Σ_{i=1}^{n} Σ_{j=i+1}^{n} Σ_{k=i+j−1}^{n} 1
Let us evaluate the innermost sum Σ_{k=i+j−1}^{n} 1. It is easy to see that the lower boundary i + j − 1 may exceed the higher boundary n. If that is the case, the sum is zero because the index variable takes values from the empty set. More precisely, for any integer t,
Σ_{i=t}^{n} 1 = n − t + 1, if t ≤ n; 0, else
It follows that
Σ_{k=i+j−1}^{n} 1 = n − i − j + 2, if i + j − 1 ≤ n ⇔ j ≤ n − i + 1; 0, else
Then
f(n) = Σ_{i=1}^{n} Σ_{j=i+1}^{n−i+1} (n + 2 − (i + j))
Now the innermost sum is zero when i + 1 > n − i + 1 ⇔ 2i > n ⇔ i > n/2, therefore
30

the maximum i we have to consider is ⌊n/2⌋:
f(n) = Σ_{i=1}^{⌊n/2⌋} Σ_{j=i+1}^{n−i+1} (n + 2 − (i + j))
= (n + 2)·Σ_{i=1}^{⌊n/2⌋} Σ_{j=i+1}^{n−i+1} 1 − Σ_{i=1}^{⌊n/2⌋} i·Σ_{j=i+1}^{n−i+1} 1 − Σ_{i=1}^{⌊n/2⌋} Σ_{j=i+1}^{n−i+1} j
The inner ranges have (n − i + 1) − (i + 1) + 1 = n − 2i + 1 terms each, and
Σ_{j=i+1}^{n−i+1} j = ((i + 1) + (n − i + 1))·(n − 2i + 1)/2 = (n + 2)·(n − 2i + 1)/2
Therefore, writing m for ⌊n/2⌋,
f(n) = (n + 2)·Σ_{i=1}^{m} (n − 2i + 1) − Σ_{i=1}^{m} i·(n − 2i + 1) − ((n + 2)/2)·Σ_{i=1}^{m} (n − 2i + 1)
= ((n + 2)/2)·Σ_{i=1}^{m} (n − 2i + 1) − (n + 1)·Σ_{i=1}^{m} i + 2·Σ_{i=1}^{m} i²
Now Σ_{i=1}^{m} (n − 2i + 1) = m(n + 1) − m(m + 1) = m(n − m), Σ_{i=1}^{m} i = m(m + 1)/2, and Σ_{i=1}^{m} i² = m(m + 1)(2m + 1)/6, so
f(n) = (n + 2)·m·(n − m)/2 − (n + 1)·m(m + 1)/2 + m(m + 1)(2m + 1)/3
31

When n is even, i.e. n = 2k for some k ∈ N⁺, ⌊n/2⌋ = k and so
f(n) = (2k + 2)·k·k/2 − (2k + 1)·k(k + 1)/2 + k(k + 1)(2k + 1)/3
= k²(k + 1) − k(k + 1)(2k + 1)/2 + k(k + 1)(2k + 1)/3
= k(k + 1)·(k − (2k + 1)/6)
= k(k + 1)(4k − 1)/6
When n is odd, i.e. n = 2k + 1 for some k ∈ N, ⌊n/2⌋ = k and so
f(n) = (2k + 3)·k(k + 1)/2 − (2k + 2)·k(k + 1)/2 + k(k + 1)(2k + 1)/3
= k(k + 1)/2 + k(k + 1)(2k + 1)/3
= k(k + 1)·(1/2 + (2k + 1)/3)
= k(k + 1)(4k + 5)/6
Obviously, f(n) = Θ(n³).

A5(n: positive integer)
1  a ← 0
2  for i ← 1 to n
3      for j ← i to n
4          for k ← n + i + j − 3 to n
5              a ← a + 1
6  return a

Problem 37. Find the running time of algorithm A5 by determining the value of a it returns as a function of n, f(n). Find a closed form for f(n).
Solution: We have three nested for cycles and it is certainly true that f(n) = O(n³). However, now f(n) ≠ Θ(n³). It is easy to see that for any large enough n, line 5 is executed for only three values of the ordered triple ⟨i, j, k⟩. Namely,
⟨i, j, k⟩ ∈ { ⟨1, 1, n−1⟩, ⟨1, 1, n⟩, ⟨1, 2, n⟩ }
because line 5 is executed only when n + i + j − 3 ≤ n, i.e., when i + j ≤ 3. So, f(n) = 3, thus f(n) = Θ(1).
Problem 37 raises a question: does it make sense to compute the running time of an iterative algorithm by counting how many times the expression in the innermost loop is executed? At lines 2 and 3 of A5 there are condition evaluations and variable increments; can we assume they take no time at all? Certainly, if that was a segment of a real-world program, the outermost two loops would be executed Θ(n²) times, unless some sort of optimisation
32

was applied by the compiler. Anyway, we postulate that the running time is evaluated by counting how many times the innermost loop is executed. Whether that is a realistic model for real-world computation or not is a side issue.

A6(a₁, a₂, ..., a_n: array of positive distinct integers, n ≥ 3)
1  S: a stack of positive integers
2  ( P(S) is a predicate that is evaluated in Θ(1) time. )
3  ( If there are less than two elements in S then P(S) is false. )
4  push(a₁, S)
5  push(a₂, S)
6  for i ← 3 to n
7      while P(S) do
8          pop(S)
9      push(a_i, S)

Problem 38. Find the asymptotic growth rate of the running time of A6. Assume the predicate P is evaluated in Θ(1) time and the push and pop operations are executed in Θ(1) time.
Solution: Certainly, the running time is O(n²) because the outer loop runs Θ(n) times and the inner loop runs in O(n) time: note that for each concrete i, the inner loop (line 8) cannot be executed more than n times since there are at most n elements in S and each execution of line 8 removes one element from S. However, a more precise analysis is possible. Observe that each element of the array is pushed into S exactly once and may be popped out of S later, but only once. It follows that line 8 cannot be executed more than n times altogether, i.e. over all i, and so the algorithm runs in Θ(n) time.

A7(a₁, a₂, ..., a_n: sorted array of positive distinct integers, x: positive integer)
1  i ← 1
2  j ← n
3  while i ≤ j do
4      k ← ⌊(i + j)/2⌋
5      if x = a_k
6          return k
7      else if x < a_k
8          j ← k − 1
9      else i ← k + 1
10  return −1

Problem 39. Find the asymptotic growth rate of the running time of A7.
Solution: The following is a loop invariant for A7:
33

For every iteration of the while loop of A7 it is the case that:
j − i + 1 ≤ n/2^t   (2.1)
where the iteration number is t, for some t ≥ 0.
We prove it by induction on t. The basis is t = 0, i.e. the first time the execution reaches line 3. Then j is n, i is 1, and indeed n − 1 + 1 ≤ n/2⁰. Assume that at iteration t, t ≥ 0, (2.1) holds, and there is yet another iteration to go through. Ignore the possibility x = a_k (line 5) because, if that is true, then iteration t + 1 never takes place. There are exactly two ways to get from iteration t to iteration t + 1 and we consider them in separate cases.
Case I: the execution reaches line 8. Now j becomes ⌊(i + j)/2⌋ − 1 and i stays the same.
j − i + 1 ≤ n/2^t    divide (2.1) by 2
(j − i + 1)/2 ≤ n/2^{t+1}
(j + i)/2 − i + 1/2 ≤ n/2^{t+1}
⌊(j + i)/2⌋ − i + 1/2 ≤ n/2^{t+1}    since ⌊m⌋ ≤ m, ∀m ∈ R⁺
(⌊(j + i)/2⌋ − 1) − i + 1 ≤ n/2^{t+1}
and ⌊(j + i)/2⌋ − 1 is the new j while i stays the same. And so the induction step follows from the induction hypothesis.
Case II: the execution reaches line 9. Now j stays the same and i becomes ⌊(i + j)/2⌋ + 1.
j − i + 1 ≤ n/2^t    divide (2.1) by 2
(j − i + 1)/2 ≤ n/2^{t+1}
j − (j + i)/2 + 1/2 ≤ n/2^{t+1}
j − (⌊(j + i)/2⌋ + 1) + 1 ≤ n/2^{t+1}    since ⌊(i + j)/2⌋ + 1 ≥ (i + j)/2 + 1/2, ∀i, j ∈ N⁺
34

And so the induction step follows from the induction hypothesis.
Having proved (2.1), we consider the maximum value that t reaches, call it t_max. During that last iteration of the loop, the values of i and j satisfy j ≥ i, because the loop stops executing when j < i. Therefore, j − i ≥ 0 during the execution of iteration t_max, before i gets incremented or j gets decremented. So, substituting t with t_max and j − i with 0 in the invariant, we get 1 ≤ n/2^{t_max}, that is, t_max ≤ lg n. It follows that the running time of A7 is O(lg n).
The following claim is a loop invariant for A7:
For every iteration of the while loop of A7, if the iteration number is t, t ≥ 0, it is the case that:
n/2^{t+1} − 4 < j − i   (2.2)
We prove it by induction on t. The basis is t = 0, i.e. the first time the execution reaches line 3. Then j is n, i is 1, and indeed n/2^{0+1} − 4 < n − 1 for all n ∈ N⁺. Assume that at iteration t, t ≥ 0, (2.2) holds, and there is yet another iteration to go through. Ignore the possibility x = a_k (line 5) because, if that is true, then iteration t + 1 never takes place. There are exactly two ways to get from iteration t to iteration t + 1 and we consider them in separate cases.
Case I: the execution reaches line 8. Now j becomes ⌊(i + j)/2⌋ − 1 and i stays the same.
n/2^{t+1} − 4 < j − i    divide (2.2) by 2
n/2^{t+2} − 2 < (j − i)/2
n/2^{t+2} − 2 < (j + i)/2 − i
n/2^{t+2} − 4 < (j + i)/2 − 1 − i − 1
n/2^{t+2} − 4 < (⌊(j + i)/2⌋ − 1) − i    since ⌊m⌋ > m − 1, ∀m ∈ R⁺
and ⌊(j + i)/2⌋ − 1 is the new j.
35

Case II: the execution reaches line 9. Now j stays the same and i becomes ⌊(i + j)/2⌋ + 1.
n/2^{t+1} − 4 < j − i    divide (2.2) by 2
n/2^{t+2} − 2 < (j − i)/2
n/2^{t+2} − 2 < j − (j + i)/2
n/2^{t+2} − 4 < j − (j + i)/2 − 2
n/2^{t+2} − 4 < j − (⌊(j + i)/2⌋ + 1)    since ⌊m⌋ + 1 ≤ m + 1, ∀m ∈ R⁺
and ⌊(j + i)/2⌋ + 1 is the new i.
Having proved (2.2), it is trivial to prove that in the worst case, e.g. when x is not in the array, the loop is executed Ω(lg n) times.
Problem 40. Determine the asymptotic running time of the following programming segment:
s = 0;
for(i = 1; i * i <= n; i ++)
    for(j = 1; j <= i; j ++)
        s += n + i - j;
return s;
Solution: The segment is equivalent to:
s = 0;
for(i = 1; i <= floor(sqrt(n)); i ++)
    for(j = 1; j <= i; j ++)
        s += n + i - j;
return s;
As we already saw, the running time is Θ((⌊√n⌋)²) and that is Θ(n).
Problem 41. Assume that A, B, and C are n × n matrices of integers. Determine the asymptotic running time of the following programming segment:
for(i = 1; i <= n; i ++)
    for(j = 1; j <= n; j ++) {
        s = 0;
        for(k = 1; k <= n; k ++)
            s += A[i][k] * B[k][j];
        C[i][j] = s;
    }
return s;
36

Solution: Having in mind the analysis of Add-3 on page 27, clearly this is a Θ(n³) algorithm. However, if we consider the order of growth as a function of the length of the input, the order of growth is Θ(m^{3/2}), where m is the length of the input, i.e. m is the order of the number of elements in the matrices and m = Θ(n²).

A8(a₁, a₂, ..., a_n: array of positive integers)
1  s ← 0
2  for i ← 1 to n − 4
3      for j ← i to i + 4
4          for k ← i to j
5              s ← s + a_i

Problem 42. Determine the running time of algorithm A8.
Solution: The outermost loop is executed n − 4 times (assume large enough n). The middle loop is executed 5 times precisely. The innermost loop is executed 1, 2, 3, 4, or 5 times for j equal to i, i + 1, i + 2, i + 3, and i + 4, respectively. Altogether, the running time is Θ(n).

A9(n: positive integer)
1  s ← 0
2  for i ← 1 to n − 4
3      for j ← 1 to i + 4
4          for k ← i to j
5              s ← s + 1
6  return s

Problem 43. Determine the running time of algorithm A9. First determine the value it returns as a function of n.
Solution: We have to evaluate the sum:
Σ_{i=1}^{n−4} Σ_{j=1}^{i+4} Σ_{k=i}^{j} 1
Having in mind that
Σ_{k=i}^{j} 1 = j − i + 1, if j ≥ i; 0, else
37

we rewrite the sum as:
Σ_{i=1}^{n−4} ( Σ_{j=1}^{i−1} Σ_{k=i}^{j} 1 + Σ_{j=i}^{i+4} Σ_{k=i}^{j} 1 )   [the first double sum is 0]
= Σ_{i=1}^{n−4} Σ_{j=i}^{i+4} (j − i + 1)
= Σ_{i=1}^{n−4} ( (i − i + 1) + (i + 1 − i + 1) + (i + 2 − i + 1) + (i + 3 − i + 1) + (i + 4 − i + 1) )
= Σ_{i=1}^{n−4} (1 + 2 + 3 + 4 + 5) = 15(n − 4)
So, algorithm A9 returns 15(n − 4). The time complexity, though, is Ω(n²) because the outer two loops require Ω(n²) work.

A10(n: positive integer)
1  a ← 0
2  for i ← 0 to n − 1
3      j ← 1
4      while j < 2n do
5          for k ← i to j
6              a ← a + 1
7          j ← j + 2
8  return a

Problem 44. Find the running time of algorithm A10 by determining the value of a it returns as a function f(n) of n. Find a closed form for f(n).
Solution:
f(n) = Σ_{i=0}^{n−1} Σ_{j ∈ {1, 3, 5, ..., 2n−1}} Σ_{k=i}^{j} 1
since j takes the odd values 1, 3, 5, ..., 2n − 1. But {1, 3, 5, ..., 2n − 1} = {2·0 + 1, 2·1 + 1, 2·2 + 1, ..., 2(n − 1) + 1}. So we can rewrite the sum as:
f(n) = Σ_{i=0}^{n−1} Σ_{l=0}^{n−1} Σ_{k=i}^{2l+1} 1
We know that
Σ_{k=i}^{2l+1} 1 = 2l + 1 − i + 1, if 2l + 1 ≥ i, i.e., l ≥ ⌈(i − 1)/2⌉; 0, otherwise
38

Let ⌈(i − 1)/2⌉ be called i*. Then it must be the case that
f(n) = Σ_{i=0}^{n−1} ( Σ_{l=0}^{i*−1} 0 + Σ_{l=i*}^{n−1} (2l + 2 − i) )
= Σ_{i=0}^{n−1} ( 2·Σ_{l=i*}^{n−1} l + (2 − i)·(n − i*) )
= Σ_{i=0}^{n−1} ( (i* + n − 1)(n − i*) + (2 − i)(n − i*) )   // since Σ_{k=p}^{q} k = (q + p)(q − p + 1)/2
= Σ_{i=0}^{n−1} (n − i*)(n + 1 + i* − i)   (2.3)
But
i* − i + 1 = ⌈(i − 1)/2⌉ − i + 1 = ⌈(i − 1)/2 − i + 1⌉ = ⌈(1 − i)/2⌉ = −⌊(i − 1)/2⌋
39

since ∀x ∈ R, ⌈−x⌉ = −⌊x⌋. Therefore, (2.3) equals
Σ_{i=0}^{n−1} (n − ⌈(i − 1)/2⌉)·(n − ⌊(i − 1)/2⌋)
= Σ_{i=0}^{n−1} ( n² − n·(⌈(i − 1)/2⌉ + ⌊(i − 1)/2⌋) + ⌈(i − 1)/2⌉·⌊(i − 1)/2⌋ )
= n³ − n·Σ_{i=0}^{n−1} (i − 1) + Σ_{i=0}^{n−1} ⌈(i − 1)/2⌉·⌊(i − 1)/2⌋   since ⌈(i − 1)/2⌉ + ⌊(i − 1)/2⌋ = i − 1
= n³ − n²(n − 3)/2 + Σ_{i=0}^{n−1} ⌈(i − 1)/2⌉·⌊(i − 1)/2⌋   (2.4)
By (8.38) on page 8,
Σ_{i=0}^{n−1} ⌈(i − 1)/2⌉·⌊(i − 1)/2⌋ = n(n − 2)(2n − 5)/24, if n is even; (n − 3)(n − 1)(2n − 1)/24, if n is odd
I. Suppose n is even. Then
f(n) = n³ − n²(n − 3)/2 + n(n − 2)(2n − 5)/24
= (12n³ + 36n² + n(2n² − 9n + 10))/24
= (14n³ + 27n² + 10n)/24
= n(2n + 1)(7n + 10)/24
40

II. Suppose n is odd. Then
f(n) = n³ − n²(n − 3)/2 + (n − 3)(n − 1)(2n − 1)/24
= (12n³ + 36n² + 2n³ − 9n² + 10n − 3)/24
= (n + 1)(14n² + 13n − 3)/24
Obviously, f(n) = Θ(n³) in either case.

Asymptotics of bivariate functions
Our notations from Chapter 1 can be generalised for two variables as follows. A bivariate function f(n, m) is asymptotically positive iff
∃n₀ ∃m₀ : ∀n ≥ n₀ ∀m ≥ m₀, f(n, m) > 0
Definition 2. Let g(n, m) be an asymptotically positive function with real domain and codomain. Then
Θ(g(n, m)) = { f(n, m) | ∃c₁, c₂ > 0, ∃n₀, m₀ > 0 : ∀n ≥ n₀, ∀m ≥ m₀, 0 ≤ c₁·g(n, m) ≤ f(n, m) ≤ c₂·g(n, m) }
Pattern matching is a computational problem in which we are given a text and a pattern, and we compute how many times or, in a more elaborate version, at what shifts the pattern occurs in the text. More formally, we are given two arrays of characters T[1..n] and P[1..m], such that m ≤ n. For any k, 1 ≤ k ≤ n − m + 1, we have a shift at position k iff:
T[k] = P[1]
T[k + 1] = P[2]
...
T[k + m − 1] = P[m]
The problem then is to determine all the valid shifts. Consider the following algorithm for that problem.

Naive-Pattern-Matching(T[1..n]: characters, P[1..m]: characters)
1  ( assume m ≤ n )
2  for i ← 1 to n − m + 1
3      if T[i, i + 1, ..., i + m − 1] = P
4          print "shift at" i

Problem 45. Determine the running time of algorithm Naive-Pattern-Matching.
41

Solution: The algorithm is ostensibly Θ(n) because it has a single loop with the loop control variable running from 1 to n − m + 1. That analysis, however, is wrong because the comparison at line 3 cannot be performed in constant time. Have in mind that m can be as large as n. Therefore, the algorithm is in fact:

Naive-Pattern-Matching-1(T[1..n]: characters, P[1..m]: characters)
1  ( assume m ≤ n )
2  for i ← 1 to n − m + 1
3      Match ← True
4      for j ← 1 to m
5          if T[i + j − 1] ≠ P[j]
6              Match ← False
7      if Match
8          print "shift at" i

For obvious reasons this is a Θ((n − m)·m) algorithm: both the best-case and the worst-case running times are Θ((n − m)·m). Suppose we improve it to:

Naive-Pattern-Matching-2(T[1..n]: characters, P[1..m]: characters)
1  ( assume m ≤ n )
2  for i ← 1 to n − m + 1
3      Match ← True
4      j ← 1
5      while Match And j ≤ m do
6          if T[i + j − 1] = P[j]
7              j ← j + 1
8          else
9              Match ← False
10     if Match
11         print "shift at" i

Naive-Pattern-Matching-2 has the advantage that once a mismatch is found (line 9) the inner loop breaks. Thus the best-case running time is Θ(n). A best case, for instance, is:
T = aa...a (n times) and P = bb...b (m times)
However, the worst-case running time is still Θ((n − m)·m). A worst case is, for instance:
T = aa...a (n times) and P = aa...a (m times)
It is easy to prove that (n − m)·m is maximised, when m varies and n is fixed, for m = n/2, and achieves maximum value Θ(n²). It follows that all the naive string matchings are, at worst, quadratic algorithms. Algorithms that have the same, in asymptotic terms, running time for all inputs of the same length are called oblivious.
42
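A direct Python transcription of the improved naive matcher (ours; 0-based indexing internally, 1-based shifts reported, as in the pseudocode):

```python
def naive_shifts(T, P):
    """Valid shifts (1-based) in the manner of Naive-Pattern-Matching-2:
    the scan of the pattern stops at the first mismatch."""
    n, m = len(T), len(P)
    shifts = []
    for i in range(n - m + 1):
        j = 0
        while j < m and T[i + j] == P[j]:
            j += 1
        if j == m:
            shifts.append(i + 1)
    return shifts

assert naive_shifts("abracadabra", "abra") == [1, 8]
assert naive_shifts("aaaa", "aa") == [1, 2, 3]   # worst-case flavour of input
assert naive_shifts("aaaa", "bb") == []          # best-case flavour of input
```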

It is known that faster algorithms exist for the pattern matching problem. For instance, the Knuth-Morris-Pratt [KMP77] algorithm runs in Θ(n) in the worst case.
Problem 46. For any two strings x and y of the same length, we say that x is a circular shift of y iff y can be broken into two substrings, one of them possibly empty, y₁ and y₂: y = y₁y₂, such that x = y₂y₁. Find a linear time algorithm, i.e. Θ(n) in the worst case, that computes whether x is a circular shift of y or not. Assume that |x| = |y| = n.
Solution: Run the linear time algorithm for string matching of Knuth-Morris-Pratt with input yy (y concatenated with itself) as text and x as pattern. The algorithm will output one or more valid shifts iff x is a circular shift of y, and zero valid shifts otherwise. To see why, consider the concatenation of y with itself when x is a circular shift of y for some y₁ and y₂, such that y = y₁y₂ and x = y₂y₁:
yy = y₁ y₂y₁ y₂   (the middle y₂y₁ is x)
The running time is Θ(2n), i.e. Θ(n), at worst.
43
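The reduction is easy to express in Python (a sketch; the built-in substring search below stands in for Knuth-Morris-Pratt, so it demonstrates the idea rather than the Θ(n) worst-case bound):

```python
def is_circular_shift(x, y):
    """x is a circular shift of y iff |x| = |y| and x occurs in y.y.
    Substituting a true KMP matcher for Python's built-in search
    gives the Theta(n) worst case of the book's solution."""
    return len(x) == len(y) and x in y + y

assert is_circular_shift("cdeab", "abcde")     # y1 = "ab", y2 = "cde"
assert is_circular_shift("abcde", "abcde")     # y1 empty
assert not is_circular_shift("abced", "abcde")
```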

Chapter 3

Recursive Algorithms and Recurrence Relations

3.1 Preliminaries

A recursive algorithm is an algorithm that calls itself, one or more times, on smaller inputs. To prevent an infinite chain of such calls there has to be a value of the input for which the algorithm does not call itself. A recurrence relation in one variable is an equation, i.e. there is a "=" sign in the middle, in which a function of the variable is equated to an expression that includes the same function on a smaller value of the variable. In addition to that, for some basic value of the variable, typically one or zero, an explicit value for the function is defined; that is the initial condition†. The variable is considered by default to take nonnegative integer values, although one can think of perfectly valid recurrence relations in which the variable is real. Typically, in the part of the relation that is not the initial condition, the function of the variable is written on the left-hand side of the "=" sign as, say, T(n), and the expression, on the right-hand side, e.g. T(n) = T(n − 1) + 1. If the initial condition is, say, T(0) = 0, we typically write:
T(n) = T(n − 1) + 1, ∀n ∈ N⁺   (3.1)
T(0) = 0
It is not formally incorrect to write the same thing as:
T(n − 1) = T(n − 2) + 1, ∀n ∈ N⁺, n > 1
T(0) = 0
The equal sign is interpreted as an assignment from right to left, just as the equal sign in the C programming language, so the following unorthodox way of describing the same

† Note there can be more than one initial condition as in the case with the famous Fibonacci numbers:
F(n) = F(n − 1) + F(n − 2), ∀n ∈ N⁺, n > 1
F(1) = 1
F(0) = 0
The number of initial conditions is such that the initial conditions prevent infinite descent.
44

relation is discouraged:
T(n − 1) + 1 = T(n), ∀n ∈ N⁺
0 = T(0)
Each recurrence relation defines an infinite numerical sequence, provided the variable is integer. For example, (3.1) defines the sequence 0, 1, 2, 3, .... Each term of the relation, except for the terms defined by the initial conditions, is defined recursively, i.e. in terms of smaller terms, hence the name. To solve a recurrence relation means to find a non-recursive expression for the same function, one that defines the same sequence. For example, a solution of (3.1) is T(n) = n.
It is natural to describe the running time of a recursive algorithm by some recurrence relation. However, since we are interested in asymptotic running times, we do not need the precise solution of a normal recurrence relation as described above. A normal recurrence relation defines a sequence of numbers. If the time complexity of an algorithm, say a worst-case analysis, were given by a normal recurrence relation, then the number sequence a₁, a₂, a₃, ..., defined by that relation, would describe the running time of the algorithm precisely, i.e. for input of size n, the maximum number of steps the algorithm makes over all inputs of size n is precisely a_n. We do not need such a precise analysis, and often it is impossible to derive one. So, the recurrence relations we use when analysing an algorithm typically have bases Θ(1), for example:
T(n) = T(n − 1) + 1   (3.2)
T(1) = Θ(1)
Infinitely many number sequences are solutions to (3.2). To solve such a recurrence relation means to find the asymptotic growth of any one of those sequences. The best solution we can hope for, asymptotically, is the one given by the Θ notation. If we are unable to pin down the asymptotic growth in that sense, our second best option is to find functions f(n) and g(n), such that f(n) = o(g(n)) and T(n) = Ω(f(n)) and T(n) = O(g(n)). The best solution for the recurrence relation (3.2), in the asymptotic sense, is T(n) = Θ(n). Another solution, not as good as this one, is, for example, T(n) = Ω(√n) and T(n) = O(n²). In the problems that follow, we distinguish the two types of recurrence relations by the initial conditions.
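Both example recurrences can be unrolled mechanically; a Python sketch (ours) checking the stated solutions:

```python
def T(n):
    """Unrolls (3.1): T(n) = T(n-1) + 1 with T(0) = 0, bottom-up."""
    t = 0                 # initial condition T(0) = 0
    for _ in range(n):
        t = t + 1         # one application of the recurrence
    return t

assert [T(n) for n in range(6)] == [0, 1, 2, 3, 4, 5]   # solution T(n) = n

def F(n):
    """Fibonacci: F(n) = F(n-1) + F(n-2) with F(1) = 1, F(0) = 0."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert [F(n) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```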
If the initial condition is given by a precise expression as in (3.1), we have to give a precise answer such as T(n) = n, and if the initial condition is Θ(1) as in (3.2), we want only the growth rate. It is possible to omit the initial condition altogether in the description of the recurrence. If we do so, we assume tacitly the initial condition is T(c) = Θ(1) for some positive constant c. The reason to do that may be that it is pointless to specify the usual T(1); however, it may be the case that the variable never reaches value one. For instance, consider the recurrence relation
T(n) = T(⌊n/2⌋ + 17) + n
which we solve below (Problem 53 on page 59). To specify T(1) = Θ(1) for it is wrong.

3.1.1 Iterators
The recurrence relations can be partitioned into the following two classes, assuming T is the function of the recurrence relations as above.
45