
Algorithms Analysis. Algorithm efficiency can be measured in terms of: time, space, and other resources such as processors, network packets, etc. Algorithm analysis tends to focus on time: the techniques for measuring all resources are basically the same, and historically time has dominated. 3.1

The efficiency of a program or system is affected by: the programming language, the physical characteristics of the computer, room temperature, the amount of data to process (or the actual parameters), and the algorithms used. In the real world all factors are considered, and any one can dominate in any particular situation. When developing algorithms in the abstract, most such factors are ignored because they are out of our control, as programmers and algorithm developers, and difficult to predict. For that reason we say they are arbitrary. Consequently, we are interested in algorithm efficiency, not program or system efficiency. 3.2

The efficiency of an algorithm is specified as a running time, sometimes also called complexity. In our abstract world, the input length or parameters have the most influence on the running time of an algorithm. The running time of an algorithm measures the total number of operations executed by the algorithm, or the total number of statements executed by the algorithm. Running times can be based on the worst case (most frequently used), the average case (most difficult), or the best case. 3.3
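
As a concrete illustration of counting operations, here is a minimal Python sketch using one assumed counting convention (one loop test plus one addition per element); other conventions are equally valid:

```python
# Instrument a simple summation and count basic operations.
def sum_with_count(values):
    total = 0
    ops = 0
    for v in values:
        total += v
        ops += 2          # one loop test, one addition
    ops += 1              # terminating loop test
    return total, ops

# For an input of length n this performs 2n + 1 counted operations,
# independent of the actual values, so best, worst, and average
# cases coincide for this algorithm.
print(sum_with_count([3, 1, 4, 1, 5]))   # (14, 11)
```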

Big Oh Notation. Note that it is tempting to think that O(f(n)) means "approximately f(n)". Although big Oh is frequently used in this manner, this interpretation is absolutely incorrect. Frequently, O(f(n)) is pronounced "on the order of f(n)" or "order f(n)", but keep in mind that this also does not mean "approximately f(n)". 3.4

Definitions: Let f(n) and g(n) be functions. [Figure: the curves f(n) and g(n) plotted against n.] 3.5

f(n) is said to be less than g(n) if f(n) <= g(n) for all n. [Figure: g(n) lies above f(n) for all n.] For example, n^2 is less than n^4 + 1. 3.6

To within a constant factor, f(n) is said to be less than g(n) if there exists a positive constant c such that f(n) <= cg(n) for all n. [Figure: cg(n) lies above f(n), even though g(n) itself may not.] 3.7

Example: g(n) = 3n^2 + 2, f(n) = 6n^2 + 3. Note: 6n^2 + 3 is not less than 3n^2 + 2. However, to within a constant factor, 6n^2 + 3 is less than 3n^2 + 2. Proof: Let c = 9; then we see that 6n^2 + 3 <= 9(3n^2 + 2) = 27n^2 + 18. 3.8
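
A quick numeric sanity check of this claim (evidence over a finite range, not a proof — the inequality must be argued for all n as above):

```python
# Check 6n^2 + 3 <= 9*(3n^2 + 2) for n = 0, 1, ..., 9999.
assert all(6*n**2 + 3 <= 9*(3*n**2 + 2) for n in range(10_000))
print("holds on the tested range")
```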

By the way, for these two functions it is also the case that, to within a constant factor, 3n^2 + 2 is less than 6n^2 + 3. Proof: Let c = 1; then we see that 3n^2 + 2 <= 1(6n^2 + 3) = 6n^2 + 3. In fact, 3n^2 + 2 is actually less than 6n^2 + 3. In other words, asymptotically there ain't much difference in the growth of the two functions as n goes to infinity. (Always enjoy using that word in class.) 3.9

Question: g(n) = n^2, f(n) = 2^n. Is f(n), to within a constant factor, less than g(n)? In other words, is there a constant c such that 2^n <= cn^2? No! Not even if we let c = 1,000,000, or larger! 2^n is significantly different from, and bigger than, n^2, by more than just a constant factor. 3.10
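
A small sketch of why no constant works here: even with the assumed c = 1,000,000, 2^n crosses above c·n^2 by n = 30 and stays above from then on.

```python
# Find the first n where 2^n exceeds 1_000_000 * n^2.
c = 1_000_000
n = 1
while 2**n <= c * n**2:
    n += 1
print(n)   # 30
```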

Big O Definition. Definition #1: f(n) is said to be O(g(n)) if there exist two positive constants c and n_0 such that f(n) <= cg(n) for all n >= n_0. [Figure: cg(n) lies above f(n) from n_0 onward.] Intuitively, we say that, as n goes to infinity, f(n) is, to within a constant factor, less than g(n). Stated another way, as n goes to infinity, f(n) is, to within a constant factor, bounded from above by g(n). 3.11
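
Below is a minimal sketch of the definition as a finite check: it tests f(n) <= c*g(n) for n_0 <= n <= n_max. Passing is supporting evidence only, since the definition quantifies over all n >= n_0; the helper name and parameters are illustrative, not from the slides.

```python
def looks_big_oh(f, g, c, n0, n_max=10_000):
    """True if f(n) <= c*g(n) for every n0 <= n <= n_max (a spot check, not a proof)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# The upcoming example (slide 3.13): n^2 + 1 is O(n^2) with c = 2, n_0 = 1.
print(looks_big_oh(lambda n: n**2 + 1, lambda n: n**2, c=2, n0=1))   # True
```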

From the definition of big-O, it should now be clear that saying T(n) is O(g(n)) means that g(n) is an upper bound on T(n), and does not mean "approximately". An analogy: consider two integers x and y, where x <= y. Is y approximately equal to x? Of course not! Similarly for the definition of big-O. 3.12

Example: g(n) = n^2, f(n) = n^2 + 1. Claim: n^2 + 1 is O(n^2). Proof: Let c = 2 and n_0 = 1; then we see that n^2 + 1 <= 2n^2 for all n >= n_0 = 1. Note that in this case f(n) is not less than g(n), even to within a constant, but it is as n goes to infinity. 3.13

Given a polynomial, a simple procedure is to drop the lower-order terms and the coefficient of the highest-order term. f(n) = n^2 + 1: f(n) is O(n^2). g(n) = n + 6: g(n) is O(n). h(n) = n^35 + n^16 + n^4 + 3n^2 + 1025: h(n) is O(n^35). r(n) = (1/2)n + 300: r(n) is O(n). s(n) = 4n^2 + 2n + 25: s(n) is O(n^2). 3.14

More generally: If f(n) = a_{m-1}n^{m-1} + a_{m-2}n^{m-2} + ... + a_1·n + a_0, then f(n) is O(n^{m-1}). Explanation: Let c = |a_{m-1}| + |a_{m-2}| + ... + |a_1| + |a_0| and n_0 = 1. Then it will be the case that f(n) <= c·n^{m-1} for all n >= n_0. 3.15

Why is f(n) <= c·n^{m-1} for all n >= n_0 = 1? Compare the two term by term:
c·n^{m-1} = |a_{m-1}|n^{m-1} + |a_{m-2}|n^{m-1} + ... + |a_1|n^{m-1} + |a_0|n^{m-1}
f(n) = a_{m-1}n^{m-1} + a_{m-2}n^{m-2} + ... + a_1·n + a_0
For n >= 1, each term of f(n) is at most the corresponding term of c·n^{m-1}. 3.16

A bit more formally:
f(n) = a_{m-1}n^{m-1} + a_{m-2}n^{m-2} + ... + a_1·n^1 + a_0
     <= |a_{m-1}|n^{m-1} + |a_{m-2}|n^{m-2} + ... + |a_1|n^1 + |a_0|n^0
     <= |a_{m-1}|n^{m-1} + |a_{m-2}|n^{m-1} + ... + |a_1|n^{m-1} + |a_0|n^{m-1}
     = (|a_{m-1}| + |a_{m-2}| + ... + |a_1| + |a_0|)·n^{m-1}
     = c·n^{m-1}   for all n >= 1. 3.17
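
The same argument can be spot-checked numerically. This is a sketch under the assumption c = |a_0| + ... + |a_{m-1}|, using the polynomial from slide 3.18 as the example:

```python
def poly(coeffs, n):
    """Evaluate a_0 + a_1*n + ... + a_{m-1}*n^(m-1) for coeffs = [a_0, ..., a_{m-1}]."""
    return sum(a * n**i for i, a in enumerate(coeffs))

coeffs = [1025, 3, 35, 5]            # f(n) = 5n^3 + 35n^2 + 3n + 1025
c = sum(abs(a) for a in coeffs)      # c = 1068
m_minus_1 = len(coeffs) - 1          # highest exponent, here 3
assert all(poly(coeffs, n) <= c * n**m_minus_1 for n in range(1, 5_000))
print("f(n) <= c*n^(m-1) on the tested range")
```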

Another thing worth noting: if, for example, f(n) = 5n^3 + 35n^2 + 3n + 1025, then f(n) is O(n^3), but also O(n^4), O(n^5), etc. This may seem counter-intuitive, but it is consistent with what big-Oh is, i.e., an upper bound. More technically, O(n^3) is said to be an asymptotically tight upper bound for f(n), while O(n^4), O(n^5), etc., are not. 3.18

Recall the definition of big-O: f(n) is said to be O(g(n)) if there exist two positive constants c and n_0 such that f(n) <= cg(n) for all n >= n_0. The authors give two slightly different definitions: (1) We write f(n) = O(g(n)) if there exist constants c > 0 and n_0 > 0 such that 0 <= f(n) <= cg(n) for all n >= n_0. (2) O(g(n)) = { f(n) : there exist constants c > 0 and n_0 > 0 such that 0 <= f(n) <= cg(n) for all n >= n_0 }. What's the difference? 3.20

What's the significance of the difference? Fundamentally, not much, but there is a notational difference: 4n^2 + 2n + 25 is O(n^2); 4n^2 + 2n + 25 = O(n^2); 4n^2 + 2n + 25 ∈ O(n^2). Unless stated otherwise, you are free to choose, just so long as there is no misunderstanding that big-O is an upper bound. 3.21

All of the notations are routinely abused: 4n^2 + O(n); T(n) = O(n^2); O(n^2) + O(n) = O(n^2); O(1). In most such cases, the big-O term represents an anonymous function. This is true for big-O, big-Ω, and big-Θ. 3.22

Big Ω Definition. Definition: f(n) is said to be Ω(g(n)) if there exist two positive constants c and n_0 such that cg(n) <= f(n) for all n >= n_0. [Figure: f(n) lies above cg(n) from n_0 onward.] As n goes to infinity, f(n) is, to within a constant factor, bounded from below by g(n). 3.23

Example: g(n) = n^2, f(n) = (1/2)n^2 + 1. Claim: (1/2)n^2 + 1 is Ω(n^2). Proof: Let c = 1/2 and n_0 = 1; then we see that (1/2)n^2 <= (1/2)n^2 + 1 for all n >= n_0 = 1. Note that n^2 itself is not less than (1/2)n^2 + 1 (it exceeds it for n >= 2), but to within a constant factor it is. 3.24

Given a polynomial, can we say that a simple procedure is to drop the lower-order terms and the coefficient of the highest-order term? f(n) = n^2 + 1: f(n) is Ω(n^2). g(n) = n + 6: g(n) is Ω(n). h(n) = n^35 + n^16 + n^4 + 3n^2 + 1025: h(n) is Ω(n^35). r(n) = (1/2)n + 300: r(n) is Ω(n). s(n) = 4n^2 + 2n + 25: s(n) is Ω(n^2). What's the proof? 3.25

Another thing worth noting: if, for example, f(n) = 5n^3 + 35n^2 + 3n + 1025, then f(n) is Ω(n^3), but also Ω(n^2), Ω(n), etc. This may seem counter-intuitive, but it is consistent with what big-Ω is, i.e., a lower bound. More technically, Ω(n^3) is said to be an asymptotically tight lower bound for f(n), while Ω(n^2), Ω(n), etc., are not. 3.26

Example: 6n^2 + 3 is Ω(3n^2 + 2). Proof: Let c = 1 and n_0 = 1; then we see that c(3n^2 + 2) <= 6n^2 + 3, i.e., 3n^2 + 2 <= 6n^2 + 3, for all n >= n_0 = 1. 3.27

Similarly: 3n^2 + 2 is Ω(6n^2 + 3). Proof: Let c = 1/3 and n_0 = 1; then we see that (1/3)(6n^2 + 3) <= 3n^2 + 2, i.e., 2n^2 + 1 <= 3n^2 + 2, for all n >= n_0 = 1. 3.28
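
A quick numeric check of both directions (spot checks only); together they say each function is a lower bound for the other to within a constant factor, which is exactly the situation big-Θ captures below:

```python
f = lambda n: 6*n**2 + 3
g = lambda n: 3*n**2 + 2
assert all(1 * g(n) <= f(n) for n in range(1, 10_000))       # 6n^2 + 3 is Omega(3n^2 + 2), c = 1
assert all((1/3) * f(n) <= g(n) for n in range(1, 10_000))   # 3n^2 + 2 is Omega(6n^2 + 3), c = 1/3
```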

Lastly, consider: Is 2^n = Ω(n^2)? Yes: if we let c = 1 and n_0 = 4, then cn^2 <= 2^n for all n >= 4 (note that n^2 <= 2^n fails at n = 3, so a threshold is needed). Is n^2 = Ω(2^n)? No: it is not the case that c·2^n <= n^2 for all n >= n_0, for any constants c and n_0. Not even if we let c = 1/1,000,000, or smaller! 3.29
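
A spot check of these thresholds (again, evidence over a finite range rather than a proof):

```python
# n^2 <= 2^n from n = 4 onward (it fails at n = 3)...
assert all(n**2 <= 2**n for n in range(4, 2_000))
# ...and even a tiny constant does not make c*2^n <= n^2 hold for all large n.
assert not all(1e-6 * 2**n <= n**2 for n in range(1, 100))
print("checks pass")
```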

Just like with big-O, the authors give two slightly different definitions: (1) We write f(n) = Ω(g(n)) if there exist constants c > 0 and n_0 > 0 such that 0 <= cg(n) <= f(n) for all n >= n_0. (2) Ω(g(n)) = { f(n) : there exist constants c > 0 and n_0 > 0 such that 0 <= cg(n) <= f(n) for all n >= n_0 }. But once again, the difference is largely cosmetic. 3.30

Big Θ Definition. Definition: f(n) is said to be Θ(g(n)) if there exist positive constants c_1, c_2 and n_0 such that c_1·g(n) <= f(n) <= c_2·g(n) for all n >= n_0. If f(n) is Θ(g(n)) then it is O(g(n)), and also Ω(g(n)). 3.31

Given a polynomial, a simple procedure is to drop the lower-order terms and the coefficient of the highest-order term. f(n) = n^2 + 1: f(n) is Θ(n^2). g(n) = n + 6: g(n) is Θ(n). h(n) = n^35 + n^16 + n^4 + 3n^2 + 1025: h(n) is Θ(n^35). r(n) = (1/2)n + 300: r(n) is Θ(n). s(n) = 4n^2 + 2n + 25: s(n) is Θ(n^2). 3.32
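
For instance, for the last entry one assumed (and by no means unique) choice of constants is c_1 = 4, c_2 = 31, n_0 = 1: 4n^2 <= 4n^2 + 2n + 25 for all n >= 1, and 4n^2 + 2n + 25 <= 31n^2 for all n >= 1 because 2n + 25 <= 27n^2 whenever n >= 1; hence s(n) is Θ(n^2).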

Big Definition Just like with big-o and big-, the authors give two slightly different definitions: We write f(n) = (g(n)) if there exist positive constants c 1, c 2 and n 0 such that 0 <= c 1 g(n) <= f(n) <= c 2 g(n) for all n >= n 0. (g(n)) = { f(n) : there exist positive constants c 1, c 2 and n 0 such that 0 <= c 1 g(n) <= f(n) <= c 2 g(n) for all n >= n 0.} But once again, the difference is largely cosmetic. 3.33

Big Θ Definition. One advantage of the set-theoretic definition of big-Θ is that it can be defined simply as: Θ(g(n)) = O(g(n)) ∩ Ω(g(n)). 3.34

Little o Definition. Definition: f(n) is said to be o(g(n)) if, for any positive constant c, there exists a positive constant n_0 such that f(n) <= cg(n) for all n >= n_0. Contrast the above with the definition of big-O: f(n) is said to be O(g(n)) if there exist two positive constants c and n_0 such that f(n) <= cg(n) for all n >= n_0. What's the difference? 3.35

Little o Definition. Definition: f(n) is said to be o(g(n)) if, for any positive constant c, there exists a positive constant n_0 such that f(n) <= cg(n) for all n >= n_0. What's the significance of the difference? Suppose you wanted to make cg(n) smaller than f(n). How would you do it? Pick a really small value for c. Even so, there is still a point (n_0) where cg(n) overtakes f(n). 3.36

Little o Definition. Definition: f(n) is said to be o(g(n)) if, for any positive constant c, there exists a positive constant n_0 such that f(n) <= cg(n) for all n >= n_0. This means that no matter what value for c you pick, and no matter how small it is, there is a point where cg(n) becomes greater than f(n). Therefore, g(n) is significantly greater than f(n); in particular, by more than just a constant factor. More technically, g(n) denotes an upper bound on f(n) that is not asymptotically tight. 3.37
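
A minimal sketch of this "for every c there is an n_0" behavior, assuming f(n) = n and g(n) = n^2 (so f(n) is o(g(n))); the helper below just searches for a threshold that works for whatever c is supplied:

```python
def threshold_for(c):
    """Smallest n0 >= 1 with n <= c*n^2 for all n >= n0, for f(n) = n, g(n) = n^2."""
    n = 1
    while n > c * n * n:     # equivalently, while 1 > c*n
        n += 1
    return n

print(threshold_for(0.5))     # 2
print(threshold_for(0.001))   # 1000: even a tiny c eventually wins
```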

Little ω Definition. Definition: f(n) is said to be ω(g(n)) if, for any positive constant c, there exists a positive constant n_0 such that cg(n) <= f(n) for all n >= n_0. Contrast the above with the definition of big-Ω: f(n) is said to be Ω(g(n)) if there exist two positive constants c and n_0 such that cg(n) <= f(n) for all n >= n_0. What's the difference? 3.38

Little ω Definition. Definition: f(n) is said to be ω(g(n)) if, for any positive constant c, there exists a positive constant n_0 such that cg(n) <= f(n) for all n >= n_0. What's the significance of the difference? Suppose you wanted to make cg(n) bigger than f(n). How would you do it? Pick a really big value for c. If f(n) is ω(g(n)), there is still a point (n_0) where f(n) overtakes cg(n). 3.39

Little ω Definition. Definition: f(n) is said to be ω(g(n)) if, for any positive constant c, there exists a positive constant n_0 such that cg(n) <= f(n) for all n >= n_0. This means that no matter what value for c you pick, and no matter how big it is, there is a point where f(n) becomes greater than cg(n). Therefore, f(n) is significantly greater than g(n); in particular, by more than just a constant factor. More technically, g(n) denotes a lower bound on f(n) that is not asymptotically tight. 3.40
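
The mirror-image sketch for little-ω, assuming f(n) = n^2 and g(n) = n (so f(n) is ω(g(n))): however large c is, n^2 >= c·n once n >= c, so a valid threshold is n_0 = ceil(c).

```python
import math

c = 1_000_000
n0 = max(1, math.ceil(c))
assert all(n*n >= c*n for n in range(n0, n0 + 1_000))
print(n0)   # 1000000: from here on f(n) stays above c*g(n)
```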

Fill in the Blanks. Suppose f(n) is O(g(n)). (Yes / No / Maybe)
f(n) is Ω(g(n)): Maybe
f(n) is Θ(g(n)): Maybe
f(n) is o(g(n)): Maybe
f(n) is ω(g(n)): No
g(n) is O(f(n)): Maybe
g(n) is Ω(f(n)): Yes
g(n) is Θ(f(n)): Maybe
g(n) is o(f(n)): No
g(n) is ω(f(n)): Maybe
3.41

Fill in the Blanks! Suppose f(n) is Ω(g(n)). (Yes / No / Maybe)
f(n) is O(g(n))
f(n) is Θ(g(n))
f(n) is o(g(n))
f(n) is ω(g(n))
g(n) is O(f(n))
g(n) is Ω(f(n))
g(n) is Θ(f(n))
g(n) is o(f(n))
g(n) is ω(f(n))
3.42

Fill in the Blanks! Suppose f(n) is Θ(g(n)). (Yes / No / Maybe)
f(n) is O(g(n))
f(n) is Ω(g(n))
f(n) is o(g(n))
f(n) is ω(g(n))
g(n) is O(f(n))
g(n) is Ω(f(n))
g(n) is Θ(f(n))
g(n) is o(f(n))
g(n) is ω(f(n))
3.43

Fill in the Blanks! Suppose f(n) is o(g(n)). (Yes / No / Maybe)
f(n) is O(g(n))
f(n) is Ω(g(n))
f(n) is Θ(g(n))
f(n) is ω(g(n))
g(n) is O(f(n))
g(n) is Ω(f(n))
g(n) is Θ(f(n))
g(n) is o(f(n))
g(n) is ω(f(n))
3.44

Fill in the Blanks! Suppose f(n) is ω(g(n)). (Yes / No / Maybe)
f(n) is O(g(n))
f(n) is Ω(g(n))
f(n) is Θ(g(n))
f(n) is o(g(n))
g(n) is O(f(n))
g(n) is Ω(f(n))
g(n) is Θ(f(n))
g(n) is o(f(n))
g(n) is ω(f(n))
3.45

Warning! The following are all common misunderstandings of the notation: "Big-O is a worst-case analysis"; "Big-Θ is an average-case analysis"; "Big-Ω is a best-case analysis". In fact, worst-case running time can be analyzed using any of the three notations, and so can the average case or the best case. 3.46

Warning! Example: Quicksort. Worst case: O(n^2) and Ω(n^2) (an already-sorted list provides the lower-bound example); therefore, the worst case is Θ(n^2). Average case: O(n log n) and Ω(n log n); therefore, the average case is Θ(n log n). Sometimes there is no distinction between best, worst, and average cases. For example, input a list of n integers and add them up: Θ(n) in the best, worst, and average case. 3.47
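
A rough experimental sketch of the quicksort gap (assuming a simple first-element-pivot quicksort, which is not necessarily the variant any particular text analyzes), counting one comparison per element partitioned against the pivot:

```python
import random

def quicksort_comparisons(a):
    """Comparisons made by a first-element-pivot quicksort (one per partitioned element)."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 300
print(quicksort_comparisons(list(range(n))))              # sorted input: n(n-1)/2 = 44850
print(quicksort_comparisons(random.sample(range(n), n)))  # random input: typically a few thousand
```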

Identities.
Transitivity:
if f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) is O(h(n))
if f(n) is Ω(g(n)) and g(n) is Ω(h(n)) then f(n) is Ω(h(n))
if f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) is Θ(h(n))
if f(n) is o(g(n)) and g(n) is o(h(n)) then f(n) is o(h(n))
if f(n) is ω(g(n)) and g(n) is ω(h(n)) then f(n) is ω(h(n))
Reflexivity:
f(n) is Θ(f(n)); f(n) is O(f(n)); f(n) is Ω(f(n))
Symmetry:
f(n) is Θ(g(n)) if and only if g(n) is Θ(f(n))
Transpose symmetry:
f(n) is O(g(n)) if and only if g(n) is Ω(f(n))
f(n) is o(g(n)) if and only if g(n) is ω(f(n))
3.48
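
A sketch of why the first transitivity identity holds, directly from the definition: if f(n) <= c_1·g(n) for all n >= n_1 and g(n) <= c_2·h(n) for all n >= n_2, then f(n) <= c_1·c_2·h(n) for all n >= max(n_1, n_2), so f(n) is O(h(n)); the Ω, Θ, o, and ω cases are argued the same way.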

All the notations subsume constants: 4·O(n^2), O(6n^2), and O(n^2/5) are all just O(n^2); similarly for Ω(n^2), Θ(n^2), o(n^2) and ω(n^2). Lower-order terms are subsumed by higher-order terms: O(n^2) + O(n) + O(1) is just O(n^2), and o(n^2) + o(n) + o(1) is just o(n^2). Constant time is always expressed as O(1), never O(2), O(57), etc. 3.49