Big O Notation. P. Danziger

1 Comparing Algorithms

We have seen that in many cases we would like to compare two algorithms. Generally, the efficiency of an algorithm can be gauged by how long it takes to run as a function of the size of the input. For example, the efficiency of a graph algorithm might be measured as a function of the number of vertices n or edges m of the input graph. So the Eulerian algorithm finds an Eulerian circuit after roughly m steps, where m is the number of edges in the input graph, and the Hamiltonian algorithm after roughly n^k steps, where k is the maximum degree in the graph.

When considering the efficiency of an algorithm we always consider the worst case. For example, the Hamiltonian circuit problem can be solved in at most roughly n^k steps, by considering all possible circuits. We might be lucky and discover a Hamiltonian circuit on the first try, but we assume the worst case.

We now consider what we mean by "roughly". We say that two functions f and g are of the same order if they behave similarly for large values of n, i.e. f(n) ≈ g(n) for large n. Consider the functions f(n) = n^2 and g(n) = n^2 + 2n + 3:

n            |    1 |   10 |      20 |     50 |    100 |    200 |      300 |         400 |     500 |    1000
n^2          |    1 |  100 |     400 |   2500 |  10000 |  40000 |    90000 |      160000 |  250000 | 1000000
n^2 + 2n + 3 |    6 |  123 |     443 |   2603 |  10203 |  40403 |    90603 |      160803 |  251003 | 1002003
2^n          |    2 | 1024 | 1048576 | ≈10^15 | ≈10^30 | ≈10^60 | ≈2×10^90 | ≈2.5×10^120 | ≈10^150 | ≈10^301

Clearly n^2 and n^2 + 2n + 3 are of the same order, whereas 2^n is not.

2 Big O

Definition 1 Given a function f : R → R, or the corresponding sequence a_n = f(n):

1. Ω(f) = the functions that are of equal or greater order than f: all functions that are larger than some constant multiple of f(x) for sufficiently large values of x.

   Ω(f) = { g | there exist A, a ∈ R⁺ such that A·f(x) ≤ g(x) for all x > a }

2. O(f) = the functions that are of equal or lesser order than f: all functions that are smaller than some constant multiple of f(x) for sufficiently large values of x.

   O(f) = { g | there exist A, a ∈ R⁺ such that g(x) ≤ A·f(x) for all x > a }

3. Θ(f) = the functions that are of the same order as f: all functions that are always between two constant multiples of f(x) for sufficiently large values of x.

   Θ(f) = { g | there exist A, B, a ∈ R⁺ such that A·f(x) ≤ g(x) ≤ B·f(x) for all x > a }

4. o(f) = the functions that are of strictly lesser order than f.

   o(f) = O(f) - Θ(f)
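The membership conditions of Definition 1 can be checked numerically on a finite range. Below is a minimal Python sketch (my own illustration, not part of the original notes; the helper name witnesses_O is mine) exhibiting witness constants A = 3, a = 1 for n^2 + 2n + 3 ∈ O(n^2), and showing that even a generous constant fails for 2^n. A finite check is only evidence, of course; the definition quantifies over all x > a.

    def witnesses_O(g, f, A, a, ns):
        """True if g(n) <= A * f(n) for every tested n > a."""
        return all(g(n) <= A * f(n) for n in ns if n > a)

    f = lambda n: n**2
    g = lambda n: n**2 + 2*n + 3
    h = lambda n: 2**n

    ns = range(1, 1001)
    print(witnesses_O(g, f, A=3, a=1, ns=ns))     # True: A = 3, a = 1 witness g in O(f)
    print(witnesses_O(h, f, A=1000, a=1, ns=ns))  # False: 2^n outgrows 1000 * n^2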

If g ∈ O(f) we say g is of order f; many authors abuse notation by writing g = O(f).

Alternately, we can define order using the notion of the limit of the ratio of the corresponding sequences.

Definition 2 Given a function f : R → R, or the corresponding sequence a_n = f(n):

1. g ∈ Ω(f) ⟺ lim_{n→∞} g(n)/f(n) > 0.
2. g ∈ O(f) ⟺ lim_{n→∞} g(n)/f(n) < ∞.
3. g ∈ Θ(f) ⟺ lim_{n→∞} g(n)/f(n) = L, where 0 < L < ∞.
4. g ∈ o(f) ⟺ lim_{n→∞} g(n)/f(n) = 0.

g ∈ Ω(f) means that g is of equal or greater order than f. g ∈ Θ(f) means that f and g are roughly the same order of magnitude. g ∈ O(f) means that g is of lesser or equal magnitude than f. g ∈ o(f) means that g is of strictly lesser magnitude than f.

Big O is by far the most commonly used, and it is very common to say f ∈ O(g) (f is at most order g) when what is really meant is f ∈ Θ(g) (f and g are of the same order).

Example 3 Show that 2^n ∈ o(3^n).

Consider lim_{n→∞} 2^n / 3^n = lim_{n→∞} (2/3)^n = 0. So 2^n ∈ o(3^n).

(Also lim_{n→∞} 3^n / 2^n = lim_{n→∞} (3/2)^n = ∞, so 3^n ∈ Ω(2^n).)
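Example 3 can also be explored numerically via Definition 2, by tabulating the ratio g(n)/f(n) for growing n. A minimal Python sketch (my own, not from the notes); the printed ratios only suggest the limit, the real proof being the (2/3)^n argument above:

    def ratios(g, f, ns):
        # g(n)/f(n) for increasing n; the trend suggests the order class
        return [g(n) / f(n) for n in ns]

    ns = [10, 20, 40, 80, 160]
    print(ratios(lambda n: 2.0**n, lambda n: 3.0**n, ns))  # tends to 0: 2^n in o(3^n)
    print(ratios(lambda n: 3.0**n, lambda n: 2.0**n, ns))  # grows without bound: 3^n in Omega(2^n)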

Theorem 4 (9.2.1) Let f, g, h and k be real valued functions defined on the same set of nonnegative reals. Then:

1. Ω(f) ∩ O(f) = Θ(f).
2. f ∈ Ω(g) ⟺ g ∈ O(f).
3. (Reflexivity of O) f ∈ O(f), f ∈ Ω(f) and f ∈ Θ(f).
4. (Transitivity of O) If f ∈ O(g) and g ∈ O(h) then f ∈ O(h).
5. If f ∈ O(g) and c ∈ R, c ≠ 0, then c·f(x) ∈ O(g).
6. If f ∈ O(h) and g ∈ O(k) then f(x) + g(x) ∈ O(G(x)), where G(x) = max(|h(x)|, |k(x)|).
7. If f ∈ O(h) and g ∈ O(k) then f(x)·g(x) ∈ O(h(x)·k(x)).

In fact a more useful rule than 6 is:

8. If f ∈ O(h) and g ∈ O(h) then f(x) + g(x) ∈ O(h).
9. If f ∈ o(g) then g ∈ Ω(f) - Θ(f).
10. o(f) ⊂ O(f).

3 Types of Order

The orders form a hierarchy in the sense that if g is in a lower member of the hierarchy then it is in all higher members. This is essentially the statement of transitivity (rule 4). The following is a list of common types of orders and their names:

Notation         Name
O(1)             Constant
O(log(n))        Logarithmic
O(log(log(n)))   Double logarithmic (iterative logarithmic)
o(n)             Sublinear
O(n)             Linear
O(n log(n))      Loglinear, linearithmic, quasilinear or supralinear
O(n^2)           Quadratic
O(n^3)           Cubic
O(n^c)           Polynomial (a different class for each c > 1)
O(c^n)           Exponential (a different class for each c > 1)
O(n!)            Factorial
O(n^n)           - (Yuck!)

Rules 5 and 8 are particularly useful for finding the class to which a function belongs.

Example 5

1. 2^n + n^2 ∈ O(2^n).
2. 2n^3 + n^2 - 2n + 5 is O(n^3) (cubic).
3. (2n^3 + n^2 - 2n + 5)^(1/3) is O(n) (linear).

In general a polynomial is of the order of its highest term.
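The highest-term rule is easy to see numerically: divide the polynomial by its claimed order and the ratio settles at the leading coefficient. A short Python sketch (my own illustration) for items 2 and 3 of Example 5:

    # p(n)/n^3 approaches the leading coefficient 2, so p is Theta(n^3) and
    # hence O(n^3); p(n)**(1/3) / n approaches 2**(1/3), so the cube root
    # of p is Theta(n) and hence O(n).
    for n in [10, 100, 1000, 10_000, 100_000]:
        p = 2*n**3 + n**2 - 2*n + 5
        print(n, p / n**3, p ** (1/3) / n)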

Theorem 6 Given a polynomial f, if the highest power of f is x^r:

If r < s then f ∈ o(x^s) ⊆ O(x^s).
If r = s then f ∈ Θ(x^s) ⊆ O(x^s).
If r > s then f ∈ Ω(x^s).

Proof: Let f be a polynomial with highest power x^r, so f(x) = a_r·x^r + a_{r-1}·x^{r-1} + ... + a_1·x + a_0. Now let s ∈ R and consider

lim_{x→∞} f(x)/x^s = lim_{x→∞} (a_r·x^r + a_{r-1}·x^{r-1} + ... + a_1·x + a_0)/x^s
                   = lim_{x→∞} (a_r·x^{r-s} + a_{r-1}·x^{r-1-s} + ... + a_1·x^{1-s} + a_0·x^{-s})
                   = 0 if r < s, a_r if r = s, and ∞ if r > s.

Example 7

1. 2n^3 + n^2 - 2n + 5 ∈ O(n^3).
2. (10^12)·n^2 - 670n + 5 ∈ o(n^3).
3. 0.000000001·n^3 ∈ Ω(n^2).

4 Algorithms

The primary use of order notation in computer science is to compare the efficiency of algorithms; Big O notation is especially useful here. In this case n is the size of the input and f(n) is the running time of the algorithm relative to input size.

Suppose that we have two algorithms to solve a problem; we may compare them by comparing their orders. In the following table we suppose that a linear algorithm can do a problem of size up to 1000 in 1 second. We then use this to calculate how large a problem this algorithm could handle in 1 minute and in 1 hour. We also calculate the largest tractable problem for other algorithms with the given speeds.

Running time | Maximum problem size
             | 1 sec | 1 min  | 1 hr
n            | 1000  | 6×10^4 | 3.6×10^6
n log(n)     | 140   | 4893   | 2×10^5
n^2          | 31    | 244    | 1897
n^3          | 10    | 39     | 153
2^n          | 9     | 15     | 21
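The table can be reproduced by a short computation. The Python sketch below is my own (the helper name largest_n is mine); it assumes, as the notes do, a machine performing 1000 basic steps per second, giving step budgets of 10^3, 6×10^4 and 3.6×10^6. The n log(n) entries may differ slightly from the table depending on rounding and the base of the logarithm.

    import math

    costs = {
        "n":        lambda n: n,
        "n log(n)": lambda n: n * math.log2(n),
        "n^2":      lambda n: n**2,
        "n^3":      lambda n: n**3,
        "2^n":      lambda n: 2.0**n,
    }

    def largest_n(cost, budget):
        """Largest n with cost(n) <= budget, for an increasing cost function."""
        hi = 1
        while cost(hi) <= budget:  # exponential search for an upper bound
            hi *= 2
        lo = hi // 2               # now cost(lo) <= budget < cost(hi)
        while hi - lo > 1:         # binary search between the two
            mid = (lo + hi) // 2
            lo, hi = (mid, hi) if cost(mid) <= budget else (lo, mid)
        return lo

    for name, cost in costs.items():
        print(f"{name:>8}:", [largest_n(cost, b) for b in (1_000, 60_000, 3_600_000)])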

In the following table we suppose that the next generation of computers is 10 times faster than the current ones. We then calculate how much bigger the largest tractable problem becomes for each algorithm speed (the factors are derived in the short sketch at the end of these notes).

Running time | After 10× speedup
n            | ×10
n log(n)     | ≈×10
n^2          | ×3.16
n^3          | ×2.15
2^n          | +3.3

In general, an algorithm is called efficient if it is polynomial time, i.e. O(n^k) for some constant k. Of course, for small values of n even an exponential algorithm can outperform a polynomial one.

Given two problems, if there is a polynomial time algorithm which reduces an instance of one problem to an instance of the other, we say that the two problems are polynomial time equivalent. Clearly, if two problems are polynomial time equivalent and we have a polynomial time algorithm for one, then we have a polynomial time algorithm for the other.

The set of problems solvable in polynomial time is denoted P. There is another class of problems called NP (nondeterministic polynomial time). It is known that P ⊆ NP, but it is unknown whether P = NP. Within NP there is a set of problems called NP-complete, which are all polynomial time equivalent to one another. NP-complete problems are as hard (computationally) as any problem in NP. If we could find a polynomial time algorithm for any NP-complete problem we would have shown P = NP. Determining if a graph has a Hamiltonian path is an example of an NP-complete problem.
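As promised above, the speedup factors follow from solving cost(n') = 10·cost(n) for n'. A minimal Python sketch (my own illustration):

    import math

    # n:    n' = 10 * n         -> factor 10
    # n^2:  n' = sqrt(10) * n   -> factor about 3.16
    # n^3:  n' = 10**(1/3) * n  -> factor about 2.15
    # 2^n:  n' = n + log2(10)   -> additive gain about 3.3
    # (n log(n) sits just under a factor of 10 for large n)
    print(round(math.sqrt(10), 2), round(10 ** (1/3), 2), round(math.log2(10), 2))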