The Growth of Functions and Big-O Notation


Big-O Notation

Big-O notation allows us to describe the asymptotic growth of a function without concern for i) constant multiplicative factors, and ii) lower-order additive terms. For example, using big-O notation, the function f(n) = 3n^2 + 6n + 7 is assumed to have the same kind of (quadratic) growth as g(n) = n^2.

Why do we choose to ignore constant factors and lower-order additive terms? One kind of function that we often consider throughout this course is T(n), which represents the worst-case number of steps required by an algorithm to process an input of size n. The function T(n) will vary depending on the computing paradigm that is used to represent the algorithm. For example, one paradigm might represent the algorithm as a C program, while another might represent it as a sequence of random-access machine instructions. Now if T_1(n) measures the number of algorithmic steps for the first paradigm, and T_2(n) measures the number of steps for the second, then, assuming that neither paradigm includes any unnecessary overhead, these two functions will likely be within constant multiplicative factors of one another. In other words, there will exist two constants C_1 and C_2 for which

  C_1 T_2(n) <= T_1(n) <= C_2 T_2(n).

For this reason, big-O notation allows one to describe the steps of an algorithm in a mostly paradigm-independent manner, yet still give meaningful representations of T(n) by ignoring the paradigm-dependent constant factors.

Let f(n) and g(n) be functions from the set of nonnegative integers to the set of nonnegative real numbers. Then

Big-O: f(n) = O(g(n)) iff there exist constants C > 0 and k such that f(n) <= C g(n) for every n >= k.

Big-Ω: f(n) = Ω(g(n)) iff there exist constants C > 0 and k such that f(n) >= C g(n) for every n >= k.

Big-Θ: f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).

little-o: f(n) = o(g(n)) iff lim_{n→∞} f(n)/g(n) = 0.
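The defining inequality of big-O can be sanity-checked numerically. The sketch below is our own illustration (a finite check can of course never prove an asymptotic claim): it tests whether candidate witnesses C and k satisfy f(n) <= C g(n) up to some bound, using the quadratic f(n) = 3n^2 + 6n + 7 from above.

```python
# Numerically sanity-check a claimed big-O bound f(n) <= C*g(n) for n >= k.
# Illustration only: passing a finite check does not prove the asymptotic
# statement, but failing one disproves the chosen witnesses C and k.

def witnesses_big_oh(f, g, C, k, n_max=10_000):
    """Return True if f(n) <= C*g(n) holds for every k <= n <= n_max."""
    return all(f(n) <= C * g(n) for n in range(k, n_max + 1))

f = lambda n: 3 * n**2 + 6 * n + 7   # the quadratic from the text
g = lambda n: n**2

# With C = 16 and k = 1: 3n^2 + 6n + 7 <= 3n^2 + 6n^2 + 7n^2 = 16n^2.
print(witnesses_big_oh(f, g, C=16, k=1))   # True
```

The bound C = 16, k = 1 comes from replacing each lower-order term by a multiple of n^2, the same trick used in Example 1 below.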

little-ω: f(n) = ω(g(n)) iff lim_{n→∞} f(n)/g(n) = ∞.

Note that a more succinct way of saying "property P(n) is true for all n >= k, for some constant k" is to say "property P(n) holds for sufficiently large n". Although this phrase will be used often throughout the course, when establishing a big-O relationship between two functions the student should nevertheless make the effort to provide the value of k for which the inequality is true.

Given functions f(n) and g(n), to determine the big-O relationship between f and g means establishing which, if any, of the above growth relationships apply to f and g. If more than one of the above relations is true, then we choose the one that gives the most information. For example, if f(n) = o(g(n)) and f(n) = O(g(n)), then we would simply write f(n) = o(g(n)), since it implies the latter relation.

Example 1. Determine the big-O relationship between i) f(n) = 6n^2 + 2n + 5 and g(n) = 50n^2, and ii) the same f(n) and g(n) = n^3.

Example 1 Solution. We have

  6n^2 + 2n + 5 <= 6n^2 + 2n^2 + 5n^2 = 13n^2 <= 50n^2 = g(n),

and so f(n) = O(g(n)) via C = 1 and k = 1. Similarly,

  g(n) = 50n^2 <= (50/6)(6n^2 + 2n + 5) = (25/3) f(n),

which implies g(n) = O(f(n)) via C = 25/3 and k = 1. Therefore, f(n) = Θ(g(n)) is the strongest possible statement.

In the case g(n) = n^3, we have

  lim_{n→∞} f(n)/g(n) = lim_{n→∞} (6n^2 + 2n + 5)/n^3 = lim_{n→∞} (6/n + 2/n^2 + 5/n^3) = 0,

implying that f(n) = o(g(n)). Moreover, it is an exercise to show that, whenever f(n) = o(g(n)), then g(n) ≠ O(f(n)). Therefore, f(n) = o(g(n)) is the strongest possible statement.

Basic Results

In this section we provide some basic results that allow one to determine the big-O growth of a function, or the big-O relationship between two functions, without having to revert to the definitions.

Theorem 1. Let p(n) be a polynomial of degree a and q(n) be a polynomial of degree b. Then

  p(n) = O(q(n)) if and only if a <= b
  p(n) = Ω(q(n)) if and only if a >= b
  p(n) = Θ(q(n)) if and only if a = b
  p(n) = o(q(n)) if and only if a < b
  p(n) = ω(q(n)) if and only if a > b

Thus, Theorem 1 could have been invoked to prove that f(n) = o(g(n)), where f and g are the functions from Example 1.

Theorem 2. Let f(n), g(n), h(n), and k(n) be nonnegative integer functions for sufficiently large n. Then

1. f(n) + g(n) = Θ(max(f(n), g(n)))

2. If f(n) = Θ(h(n)) and g(n) = Θ(k(n)), then (fg)(n) = f(n)g(n) = Θ(h(n)k(n)).

3. Transitivity. Let R ∈ {O, o, Θ, Ω, ω} be one of the five big-O relationships. If f(n) = R(g(n)) and g(n) = R(h(n)), then f(n) = R(h(n)). In other words, all five of the big-O relationships are transitive.
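Property 1 of Theorem 2 rests on the sandwich max(f(n), g(n)) <= f(n) + g(n) <= 2 max(f(n), g(n)), which holds for nonnegative functions. The short sketch below (the two functions are arbitrary choices of our own, purely for illustration) checks the sandwich numerically:

```python
# Theorem 2, property 1: for nonnegative f and g,
#   max(f,g) <= f + g <= 2*max(f,g),
# so f + g = Theta(max(f, g)) with witnesses C1 = 1 and C2 = 2.

f = lambda n: n**2        # illustrative choices, not from the text
g = lambda n: 10 * n

for n in [1, 10, 100, 1000]:
    s, m = f(n) + g(n), max(f(n), g(n))
    assert m <= s <= 2 * m            # the Theta sandwich
    print(n, s / m)                   # ratio always stays in [1, 2]
```

Note the ratio touches 2 exactly where f and g cross (here n = 10) and approaches 1 once one function dominates.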

Example 2. Use the results of Theorems 1 and 2 to give a succinct expression for the big-O growth of (fg)(n) = f(n)g(n), where f(n) = n log(n^4 + 1) + n(log n)^2 and g(n) = n^2 + 2n + 3. Note: by succinct we mean that no constants or lower-order additive terms should appear in the answer.

Example 2 Solution. By property 2 of Theorem 2, it suffices to find succinct big-O expressions for the growth of both f and g, and then multiply those expressions to obtain a succinct expression for (fg)(n). Theorem 1 tells us that g(n) = Θ(n^2). As for the growth of f(n), notice that

  n log(n^4 + 1) = Θ(4n log n) = o(n(log n)^2),

where we are using the logarithm property log_a x^b = b log_a x. Thus, by property 1 of Theorem 2, f(n) = Θ(n log^2 n). (Note: log^2 n means take the logarithm of n and then square the answer, and is a neater way of writing (log n)^2.) Therefore,

  (fg)(n) = Θ((n log^2 n) n^2) = Θ(n^3 log^2 n).

Theorem 3. If lim_{n→∞} f(n)/g(n) = C, for some constant C > 0, then f(n) = Θ(g(n)).

Proof of Theorem 3. Mathematically, lim_{n→∞} f(n)/g(n) = C means that, for every ɛ > 0, there exists k >= 0 such that

  |f(n)/g(n) − C| < ɛ

for all n >= k. In words, f(n)/g(n) can be made arbitrarily close to C with increasing values of n. Removing the absolute value yields

  C − ɛ < f(n)/g(n) < C + ɛ,

which implies

  (C − ɛ)g(n) < f(n) < (C + ɛ)g(n).

Since C > 0 and ɛ > 0 are constants, the latter inequalities imply f(n) = Θ(g(n)) so long as C − ɛ > 0. Therefore, choosing ɛ = C/2, the result is proven.
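As a quick numeric sanity check of Example 2 (not a proof), the ratio (fg)(n)/(n^3 log^2 n) should settle toward a constant as n grows; the sample points below are our own choices:

```python
import math

# f and g from Example 2, using base-2 logarithms.
def fg(n):
    f = n * math.log2(n**4 + 1) + n * math.log2(n)**2
    g = n**2 + 2 * n + 3
    return f * g

# The ratio drifts toward a constant as the n*log^2(n) term comes to
# dominate f, consistent with (fg)(n) = Theta(n^3 log^2 n).
for n in [10, 10**3, 10**6]:
    print(n, fg(n) / (n**3 * math.log2(n)**2))
```

Convergence is slow because the discarded 4n log n term inside f shrinks only logarithmically relative to n log^2 n.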

Example 3. Suppose a > 1 and b < 0 are constants, with |b| < a. Prove that a^n + b^n = Θ(a^n).

Example 3 Solution. We have

  lim_{n→∞} (a^n + b^n)/a^n = lim_{n→∞} (1 + b^n/a^n) = lim_{n→∞} (1 + (b/a)^n) = 1,

since |b| < a. Therefore, by Theorem 3, a^n + b^n = Θ(a^n).

L'Hospital's Rule. Suppose f(n) and g(n) are both differentiable functions with either

1. lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞, or

2. lim_{n→∞} f(n) = lim_{n→∞} g(n) = 0.

Then

  lim_{n→∞} f(n)/g(n) = lim_{n→∞} f'(n)/g'(n).

Example 4. Prove that for every ɛ > 0, log n = o(n^ɛ).

Example 4 Solution. We have

  lim_{n→∞} (log n)/n^ɛ = lim_{n→∞} (log n)'/(n^ɛ)' = lim_{n→∞} (1/((ln 2)n))/(ɛn^{ɛ−1}) = (1/(ɛ ln 2)) lim_{n→∞} 1/n^ɛ = 0,

and the statement is proved.
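Example 4 says the logarithm is beaten by every positive power of n, however small. A numeric illustration (again not a proof; the choice ɛ = 0.1 and the sample points are our own) shows the ratio eventually heading to 0, though the convergence is slow:

```python
import math

eps = 0.1
# (log n)/n^eps rises at first for small eps, then decays toward 0;
# that is exactly why a proof (L'Hospital) is needed rather than a
# glance at small inputs.
for n in [10**2, 10**6, 10**12, 10**24]:
    print(n, math.log2(n) / n**eps)
```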

The following terminology is often used to describe the big-O growth of a function.

  Growth                             Terminology
  Θ(1)                               constant growth
  Θ(log n)                           logarithmic growth
  Θ(log^k n), for some k >= 1        polylogarithmic growth
  Θ(n^k), for some positive k < 1    sublinear growth
  Θ(n)                               linear growth
  Θ(n^2)                             quadratic growth
  Θ(n^3)                             cubic growth
  Θ(n log n)                         log-linear growth
  Θ(n log^k n), for some k >= 1      polylog-linear growth
  O(n^k), for some k >= 1            polynomial growth
  Ω(n^k), for every k >= 1           superpolynomial growth
  Ω(a^n), for some a > 1             exponential growth

Example 5. Use the above terminology to describe the growth of the functions from Examples 1 and 2.

Example 5 Solution. Functions f and g from part i) of Example 1 both have quadratic growth, while g(n) from part ii) has cubic growth. Function f from Example 2 has polylog-linear growth, while g has quadratic growth. Finally, (fg)(n) from Example 2 has polylog-cubic growth.

Theorem 4 (Log Ratio Test). Suppose f and g are continuous functions over the interval [1, ∞), and

  lim_{n→∞} log(f(n)/g(n)) = lim_{n→∞} (log f(n) − log g(n)) = L.

1. If L = ∞ then lim_{n→∞} f(n)/g(n) = ∞.

2. If L = −∞ then lim_{n→∞} f(n)/g(n) = 0.

3. If L ∈ (−∞, ∞) is a constant then lim_{n→∞} f(n)/g(n) = 2^L.

Example 6. Use the log-ratio test to determine the big-O growth of f(n) = n^{log n}.

Example 6 Solution. f(n) has superpolynomial growth, since it is a power-like function whose power (logarithmically) increases as n increases, whereas a polynomial-growth power function has a constant power. Does f have exponential growth? To determine this we consider the ratio n^{log n}/a^n, where a > 1 is arbitrary. Since n^{log n} does not have an explicit derivative, we cannot use L'Hospital's rule. Instead we use the log-ratio test. Indeed,

  lim_{n→∞} log(n^{log n}/a^n) = lim_{n→∞} (log n^{log n} − log a^n) = lim_{n→∞} (log^2 n − n log a) = −∞.

Therefore, by Case 2 of the Log Ratio Test, lim_{n→∞} n^{log n}/a^n = 0, implying that f(n) does not have exponential growth.

Unfortunately, the sum function S(n) for many important series cannot be expressed using a formula. However, the next result shows how we may still determine the big-O growth of S(n), which quite often is our main interest.

Integral Theorem. Let f(x) > 0 be an increasing or decreasing Riemann-integrable function over the interval [1, ∞). Then

  Σ_{i=1}^{n} f(i) = Θ(∫_1^n f(x) dx),

if f is decreasing. Moreover, the same is true if f is increasing, provided f(n) = O(∫_1^n f(x) dx).

Proof of Integral Theorem. We prove the case when f is decreasing; the case when f is increasing is left as an exercise. The quantity ∫_1^n f(x) dx represents the area under the curve of f(x) from 1 to n. Moreover, for i = 1, ..., n − 1, the rectangle R_i whose base runs from x = i to x = i + 1, and whose height is f(i + 1), lies under the graph. Therefore,

  Σ_{i=1}^{n−1} Area(R_i) = Σ_{i=2}^{n} f(i) <= ∫_1^n f(x) dx.

Adding f(1) to both sides of the last inequality gives

  Σ_{i=1}^{n} f(i) <= ∫_1^n f(x) dx + f(1).
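Example 6 also shows why the Log Ratio Test is computationally convenient: n^{log n} and a^n overflow floating point long before the behavior of their ratio is visible, but the log of the ratio, log^2 n − n log a, is easy to evaluate. The base a = 1.01 below is an arbitrary choice of ours:

```python
import math

a = 1.01
for n in [10, 10**3, 10**6]:
    # log2 of the ratio n^(log n) / a^n: positive for small n, but
    # eventually plunges toward -infinity, so the ratio itself tends to 0.
    L = math.log2(n)**2 - n * math.log2(a)
    print(n, L)
```

The sign flip between n = 10^3 and n = 10^6 illustrates why finite samples alone can mislead: the linear term n log a wins only once n is large.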

Now, choosing C > 0 so that f(1) <= C ∫_1^n f(x) dx gives

  Σ_{i=1}^{n} f(i) <= (1 + C) ∫_1^n f(x) dx,

which proves Σ_{i=1}^{n} f(i) = O(∫_1^n f(x) dx).

Next, for i = 1, ..., n − 1, consider the rectangle R'_i whose base runs from x = i to x = i + 1, and whose height is f(i). This rectangle covers all the area under the graph of f from x = i to x = i + 1. Therefore,

  Σ_{i=1}^{n−1} Area(R'_i) = Σ_{i=1}^{n−1} f(i) >= ∫_1^n f(x) dx.

Now adding f(n) to the left side of the last inequality gives

  Σ_{i=1}^{n} f(i) >= ∫_1^n f(x) dx,

which proves Σ_{i=1}^{n} f(i) = Ω(∫_1^n f(x) dx). Therefore,

  Σ_{i=1}^{n} f(i) = Θ(∫_1^n f(x) dx).

Example 7. Determine the big-O growth of the series 1 + 1/2 + ... + 1/n.

Example 7 Solution. We need to estimate the growth of the sum Σ_{i=1}^{n} f(i), where f(i) = 1/i. By the Integral Theorem, this sum grows on the order of

  ∫_1^n (1/x) dx = ln x |_1^n = ln n.

Therefore,

  1 + 1/2 + ... + 1/n = Θ(log n).
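Example 7 can be checked empirically: the harmonic sum H_n stays within an additive constant of ln n (in fact H_n − ln n converges to the Euler-Mascheroni constant, about 0.5772). A small sketch:

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n, summed directly."""
    return sum(1.0 / i for i in range(1, n + 1))

for n in [10, 1000, 100000]:
    # The difference settles near 0.5772..., so H_n = Theta(log n),
    # exactly as the Integral Theorem predicts.
    print(n, harmonic(n) - math.log(n))
```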

Exercises

1. Use the definition of big-Ω to prove that n log n = Ω(n + n log n^2). Provide appropriate C and k constants.

2. Provide the big-O relationship between f(n) = n log n and g(n) = n + n log n^2.

3. Prove that f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).

4. Use the definition of big-Θ to prove that f(n) + g(n) = Θ(max(f(n), g(n))).

5. Prove that (n + a)^b = Θ(n^b), for all real a and b > 0. Explain why Theorem 1 and L'Hospital's rule should be avoided when solving this problem.

6. Prove that if lim_{n→∞} f(n)/g(n) = 0, then f(n) = O(g(n)), but g(n) ≠ O(f(n)).

7. Prove or disprove: 2^{n+1} = O(2^n).

8. Prove or disprove: 2^{2n} = O(2^n).

9. Use any techniques or results from lecture to determine a succinct big-Θ expression for the growth of the function log^50(n) n^2 + log(n^4) n^{2.1} + 1000n^2 + 100000000n.

10. Prove or disprove: if f(n) = O(g(n)), then 2^{f(n)} = O(2^{g(n)}).

11. Prove transitivity of big-O: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).

12. If g(n) = o(f(n)), then prove that f(n) + g(n) = Θ(f(n)).

13. Use L'Hospital's rule to prove Theorem 1. Hint: assume a and b are nonnegative integers and that a >= b.

14. Use L'Hospital's rule to prove that a^n = ω(n^k), for every real a > 1 and integer k >= 1.

15. Prove that log_a n = Θ(log_b n) for all a, b > 1.

16. Determine the big-O growth of the function f(n) = 2^{√(2 log n)}. Explain and show work.

17. Prove that n! = ω(2^n).

18. Determine the big-O relationship between 2^{2n} and n!.

19. Determine the big-O growth of the function f(n) = (log n)^{log n}. Explain and show work.

20. Determine the big-O growth of the function f(n) = (√2)^{log n}. Explain and show work.

21. Determine the big-O growth of the function f(n) = (log n)!.

22. Determine the big-O growth of the function f(n) = n^{1/log n}.

23. Suppose g(n) >= 1 for all n >= 1, and that f(n) <= g(n) + L, for some constant L >= 0 and all n >= 1. Prove that f(n) = O(g(n)).

24. Give an example that shows that the statement of Exercise 23 may not be true if we no longer assume g(n) >= 1.

25. Use the Integral Theorem to establish that 1^k + 2^k + ... + n^k = Θ(n^{k+1}), where k >= 1 is an integer constant.

26. Use the Integral Theorem to prove that log 1 + log 2 + ... + log n = Θ(n log n).

27. Show that log(n!) = Θ(n log n).

28. Determine the big-O growth of n + n/2 + n/3 + ... + 1.

Exercise Hints and Solutions

1. Answers may vary. For n >= 2 we have

  n + n log n^2 <= n log n + 2n log n = 3n log n.

Thus, n log n >= (1/3)(n + n log n^2) for all n >= 2. So C = 1/3 and k = 2.

2. From the previous exercise we have f(n) = Ω(g(n)). But also f(n) = O(g(n)) with C = k = 1. Thus, f(n) = Θ(g(n)).

3. Set up the inequality for big-O and divide both sides by C.

4. Two inequalities must be established, and C_1 = 0.5, k_1 = 1, C_2 = 2, k_2 = 1 are adequate constants (why?).

5. Use Theorem 3.

6. Since lim_{n→∞} f(n)/g(n) = 0, we know that f(n) <= g(n) for sufficiently large n. Thus, f(n) = O(g(n)). Now suppose it were true that g(n) <= C f(n) for some constant C > 0 and n sufficiently large. Then dividing both sides by C g(n) yields f(n)/g(n) >= 1/C for sufficiently large n. But since lim_{n→∞} f(n)/g(n) = 0, we have f(n)/g(n) < 1/C for sufficiently large n, which is a contradiction. Therefore, g(n) ≠ O(f(n)).

7. True, since 2^{n+1} = 2 · 2^n. C = 2, k = 1.

8. False. 2^{2n} = 4^n and

  lim_{n→∞} 2^n/4^n = lim_{n→∞} 1/2^n = 0.

Now use Exercise 6.

9. Θ(n^{2.1} log n).

10. False. Consider f(n) = 2n and g(n) = n.

11. Assume f(n) <= C_1 g(n) for all n >= k_1, and g(n) <= C_2 h(n) for all n >= k_2. Therefore, f(n) <= C_1 g(n) <= C_1 C_2 h(n) for all n >= max(k_1, k_2). So C = C_1 C_2 and k = max(k_1, k_2).

12. Use Theorem 2 and the fact that g(n) < f(n) for n sufficiently large.

13. Let f(n) be a nonnegative-valued polynomial of degree a, and g(n) be a nonnegative-valued polynomial of degree b, with a >= b. Since the k-th derivative (as a function of n) of f(n) is equal to Θ(n^{a−k}) and that of g(n) is equal to Θ(n^{b−k}), it follows that L'Hospital's rule applies to the ratio f^{[k]}(n)/g^{[k]}(n) for all k < b. Moreover, upon applying the rule for the b-th time, the numerator will equal a polynomial of degree a − b, while the denominator will equal a constant. Thus f(n) = Θ(g(n)) if a = b, and f(n) = ω(g(n)) if a > b.

14. Since the derivative (as a function of n) of a^n equals (ln a)a^n, it follows that the k-th derivative of a^n divided by the k-th derivative of n^k equals (ln^k a)a^n/k!, which tends to infinity. Therefore, a^n = ω(n^k).

15. By the Change of Base formula, log_a n = log_b n / log_b a, and so log_a n = C log_b n, where C = 1/log_b a. Therefore, log_a n = Θ(log_b n).

16. Sublinear growth. Use the Log-ratio test with g(n) = n.

17. Use the Log-ratio test, and the fact that Σ_{i=1}^{n} log i = Θ(n log n).

18. Use the Log-ratio test to show that the first function is little-ω of the second.

19. Superpolynomial but not exponential growth. Use the Log-ratio test against an arbitrary polynomial function n^k. Then use it again against an arbitrary exponential function a^n, where a > 1.

20. Sublinear. Use logarithm identities.

21. Superpolynomial. First analyze the growth of log(f(n)). Then consider the growth of 2^{log(f(n))}, which has the same growth as f(n).

22. Constant growth. Take the logarithm of f(n) and analyze its growth.

23. f(n) <= g(n) + L <= g(n) + L g(n) <= (1 + L)g(n) for all n >= 1, where the second inequality is true since g(n) >= 1. So C = 1 + L and k = 1. The result still holds so long as g(n) is bounded away from zero.

24. Consider f(n) = 1/n and g(n) = 1/n^2.

25. By the Integral Theorem,

  Σ_{i=1}^{n} i^k = Θ(∫_1^n x^k dx) = Θ(x^{k+1}/(k+1) |_1^n) = Θ(n^{k+1}).

26. By the Integral Theorem, Σ_{i=1}^{n} ln i = Θ(∫_1^n ln x dx). Moreover,

  ∫_1^n ln x dx = x ln x |_1^n − ∫_1^n dx =

n ln n − n + 1 = Θ(n ln n).

27. Since log(ab) = log a + log b, we have

  log(n!) = log(n(n−1)(n−2) ··· 1) = log n + log(n−1) + log(n−2) + ··· + log 1.

Therefore, from the previous exercise, we have log(n!) = Θ(n log n).

28. We have

  n + n/2 + n/3 + ... + 1 = n(1 + 1/2 + 1/3 + ... + 1/n).

Therefore, by Example 7, n + n/2 + n/3 + ... + 1 = Θ(n log n).