
Running Time Evaluation: Quadratic vs. Linear Time

Lecturer: Georgy Gimel'farb
COMPSCI 220 Algorithms and Data Structures

1 Running time
2 Examples
3 Big-Oh, Big-Omega, and Big-Theta Tools
4 Time complexity

Running Time T(n): Estimation Rules

The running time is proportional to the most significant term in T(n): to n for linear time, T(n) = c_0 + c_1·n, or to n^k for polynomial time, T(n) = c_0 + c_1·n + ... + c_k·n^k.

- Once the problem size n becomes large, the most significant term is the one with the largest power of n: it increases faster than the other terms, which diminish in relative significance.
- Constants of proportionality depend on the compiler, language, computer, programming style, etc., so it is useful to ignore the constants when analysing algorithms.
- Reducing the constants of proportionality, e.g. by using faster hardware or minimising the time spent in the inner loop, does not affect the algorithm's behaviour for a large problem!
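
To see this dominance numerically, here is a minimal sketch (not from the lecture; the constants c0, c1, c2 are arbitrary illustrative values) showing that T(n)/(c2·n^2) approaches 1 as n grows, i.e. only the leading term matters for large n:

    // Sketch (assumed constants): for T(n) = c0 + c1*n + c2*n^2, the ratio
    // T(n) / (c2*n^2) tends to 1 as n grows.
    public class DominantTerm {
        public static void main(String[] args) {
            double c0 = 1000, c1 = 50, c2 = 0.1;   // arbitrary illustrative constants
            for (long n = 10; n <= 1_000_000L; n *= 10) {
                double t = c0 + c1 * n + c2 * n * n;
                System.out.printf("n=%8d  T(n)/(c2*n^2) = %.4f%n",
                                  n, t / (c2 * n * n));
            }
        }
    }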

Elementary Operations and Data Inputs

Basic elementary computing operations:
- Arithmetic operations (+, −, ×, /, %)
- Relational operators (==, !=, >, <, ≤, ≥)
- Boolean operations (AND, OR, XOR, NOT)
- Branch operations
- Return

Input size for problem domains (the meaning of n):
- Sorting: n items
- Graph / path: n vertices / edges
- Image processing: n pixels (2D images) or voxels (3D images)
- Text processing: n characters, i.e. the string length n

Estimating Running Time

Simplifying assumption: all elementary statements / expressions take the same amount of time to execute, e.g. simple arithmetic assignments, return, etc.

- A single loop increases running time linearly, as λ·T_body, where λ is the number of times the loop is executed.
- Nested loops result in polynomial running time T(n) = c·n^k if the number of elementary operations in the innermost loop is constant, where k is the deepest level of nesting and c is some constant.

The first three values of k have special names: linear time for k = 1 (a single loop), quadratic time for k = 2 (two nested loops), and cubic time for k = 3 (three nested loops), as the sketch below illustrates.
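
A minimal sketch (assumed, not from the textbook) counting the elementary operations performed by one, two, and three nested loops, which grow linearly, quadratically, and cubically in n:

    // Sketch (assumed): operation counts for k = 1, 2, 3 levels of nesting.
    public class NestedLoops {
        public static void main(String[] args) {
            int n = 100;
            long linear = 0, quadratic = 0, cubic = 0;
            for (int i = 0; i < n; i++) linear++;                    // k = 1: n ops
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) quadratic++;             // k = 2: n^2 ops
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    for (int l = 0; l < n; l++) cubic++;             // k = 3: n^3 ops
            System.out.printf("n=%d: linear=%d, quadratic=%d, cubic=%d%n",
                              n, linear, quadratic, cubic);
        }
    }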

Estimating Running Time

Conditional / switch statements like

    if {condition} then {const time T_1} else {const time T_2}

are more complicated: one has to account for the branching frequencies f_{condition=true} and f_{condition=false} = 1 − f_{condition=true}:

    T = f_true·T_1 + (1 − f_true)·T_2 ≤ max{T_1, T_2}

Function calls: T_function = Σ T_statement over the statements in the function.

Function composition: T(f(g(n))) = T(g(n)) + T(f(n))

Estimating Running Time

Function calls in more detail: T = Σ_i T_statement_i

    ...
    x.mymethod( 5, ... );
    ...

    public void mymethod( int a, ... ) {
        statements 1, 2, ..., M
    }

Function composition T(f(g(n))) in more detail:
- computing x = g(n) takes T(g(n));
- computing y = f(x) takes T(f(n));
- hence T(f(g(n))) = T(g(n)) + T(f(n)).
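
As a concrete illustration of the composition rule (a sketch with assumed helper functions g and f, not from the lecture), composing two linear-time steps costs about the sum of their individual costs:

    // Sketch (assumed): g and f each perform about n operations, so
    // evaluating f(g(...)) performs about T(g(n)) + T(f(n)) = 2n operations.
    public class Composition {
        static long ops = 0;
        static int[] g(int[] a) {                    // linear-time step
            int[] b = new int[a.length];
            for (int i = 0; i < a.length; i++) { b[i] = 2 * a[i]; ops++; }
            return b;
        }
        static long f(int[] a) {                     // linear-time step
            long s = 0;
            for (int v : a) { s += v; ops++; }
            return s;
        }
        public static void main(String[] args) {
            int n = 1000;
            f(g(new int[n]));
            System.out.println("ops = " + ops + "  (2n = " + 2 * n + ")");
        }
    }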

Example 1.5 (Textbook, p.19): Logarithmic time for a simple loop, due to an exponential change i = 1, k, k^2, k^3, ..., k^m of the control variable in the range 1 ≤ i ≤ n:

    for i ← 1 step i ← i·k until n do
        ... constant number of elementary operations
    end for

There are m iterations, where k^{m−1} < n ≤ k^m, so that T(n) = c·⌈log_k n⌉. The ceiling ⌈z⌉ of a real number z is the least integer not less than z.

Additional conditions that execute inner loops only for special values of the outer variables also decrease running time.
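
A minimal sketch (assumed) that counts the iterations of such a multiplicative loop and compares the count with ⌈log_k n⌉:

    // Sketch (assumed): the control variable grows as 1, k, k^2, ..., so the
    // loop runs about ceil(log_k n) times.
    public class LogLoop {
        public static void main(String[] args) {
            int k = 3;
            for (int n : new int[]{10, 100, 1000, 100000}) {
                long iterations = 0;
                for (long i = 1; i < n; i *= k) iterations++;   // i = 1, k, k^2, ...
                double ceilLog = Math.ceil(Math.log(n) / Math.log(k));
                System.out.printf("n=%6d  iterations=%d  ceil(log_%d n)=%.0f%n",
                                  n, iterations, k, ceilLog);
            }
        }
    }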

Example 1.6 (Textbook, p.19): Linearithmic n log n running time of conditional nested loops:

    m ← 2
    for j ← 1 to n do
        if j == m then
            m ← 2·m
            for i ← 1 to n do
                ... constant number of elementary operations
            end for
        end if
    end for

The inner loop is executed k times, for j = m = 2, 4, ..., 2^k; the condition 2^k ≤ n < 2^{k+1} implies that k ≤ log_2 n < k + 1. In total, T(n) is proportional to kn, that is, T(n) = n·⌊log_2 n⌋. The floor ⌊z⌋ of a real number z is the greatest integer not greater than z.

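A runnable sketch (assumed) of Example 1.6 that counts inner-loop operations and compares them with n·⌊log_2 n⌋:

    // Sketch (assumed): the inner loop fires only when j is a power of two,
    // so the total inner work is about n * floor(log2 n).
    public class Linearithmic {
        public static void main(String[] args) {
            for (int n : new int[]{100, 1000, 100000}) {
                long innerOps = 0;
                long m = 2;
                for (int j = 1; j <= n; j++) {
                    if (j == m) {
                        m = 2 * m;
                        for (int i = 1; i <= n; i++) innerOps++;
                    }
                }
                long predicted = (long) n * (long) (Math.log(n) / Math.log(2));
                System.out.printf("n=%6d  inner ops=%d  n*floor(log2 n)=%d%n",
                                  n, innerOps, predicted);
            }
        }
    }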

Exercise 1.2.1 (Textbook): Is the running time quadratic or linear for the nested loops below?

    m ← 1
    for j ← 1 to n do
        if j == m then
            m ← (n − 1)·m
            for i ← 1 to n do
                ... constant number of operations
            end for
        end if
    end for

The inner loop is executed only twice, for j = 1 and j = n − 1; in total, T(n) = 2n, i.e. linear running time.

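A quick sketch (assumed) that simulates these loops and counts inner-loop operations, confirming the linear bound:

    // Sketch (assumed): the inner loop fires only for j = 1 and j = n - 1,
    // so the total inner work is about 2n, i.e. linear rather than quadratic.
    public class Exercise121 {
        public static void main(String[] args) {
            for (int n : new int[]{10, 100, 1000}) {
                long innerOps = 0;
                long m = 1;
                for (int j = 1; j <= n; j++) {
                    if (j == m) {
                        m = (long) (n - 1) * m;
                        for (int i = 1; i <= n; i++) innerOps++;
                    }
                }
                System.out.printf("n=%4d  inner ops=%d  (2n=%d)%n", n, innerOps, 2L * n);
            }
        }
    }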

Big-Oh, Big-Omega, and Big-Theta Tools

How does the relative running time change if the input size, n, increases from n_1 to n_2, all other things being equal? By a factor of

    T(n_2) / T(n_1) = c·f(n_2) / (c·f(n_1)) = f(n_2) / f(n_1)

Big-Oh, Big-Omega, and Big-Theta help to avoid imprecise statements like "roughly proportional to ...". They can be applied to all non-negative-valued functions, f(n) and g(n), defined on non-negative integers n. Running time is such a function: T(n) of the data size n, for n > 0.

Basic assumption: two algorithms have essentially the same complexity if their running times, as functions of n, differ only by a constant factor.

Definition of Big-Oh: g(n) is O(f(n))

Let f(n) and g(n) be non-negative-valued functions defined on non-negative integers n. Then g(n) is O(f(n)) (read: g(n) is Big-Oh of f(n)) iff there exist a positive real constant c and a positive integer n_0 such that

    g(n) ≤ c·f(n) for all n > n_0

The notation "iff" is an abbreviation of "if and only if".

Meaning: g(n) is a member of the set O(f(n)) of functions that increase at most as fast as f(n) when n → ∞. In other words, g(n) ∈ O(f(n)) if g(n) eventually increases at the same or a lesser rate than f(n), to within a constant factor. g(n) ∈ O(f(n)) specifies a generalised asymptotic upper bound, such that g(n) for large n may approach closer and closer to c·f(n).

Definition of Big-Omega: g(n) is Ω(f(n))

g(n) is Ω(f(n)) (read: g(n) is Big-Omega of f(n)) iff there exist a positive real constant c and a positive integer n_0 such that

    g(n) ≥ c·f(n) for all n > n_0

Meaning: g(n) is a member of the set Ω(f(n)) of functions that increase at least as fast as f(n) when n → ∞. In other words, g(n) ∈ Ω(f(n)) if g(n) eventually increases at the same or a greater rate than f(n), to within a constant factor. Big-Omega is complementary to Big-Oh and generalises the concept of an asymptotic lower bound, just as Big-Oh generalises the asymptotic upper bound. If g(n) is O(f(n)), then f(n) is Ω(g(n)).

Definition of Big-Theta: g(n) is Θ(f(n))

g(n) is Θ(f(n)) (read: g(n) is Big-Theta of f(n)) iff there exist two positive real constants, c_1 and c_2, and a positive integer n_0 such that

    c_1·f(n) ≤ g(n) ≤ c_2·f(n) for all n > n_0

Meaning: g(n) is a member of the set Θ(f(n)) of functions that increase as fast as f(n) when n → ∞. In other words, g(n) ∈ Θ(f(n)) if g(n) eventually increases at the same rate as f(n), to within a constant factor. Big-Theta generalises the concept of an asymptotic tight bound. If g(n) ∈ O(f(n)) and f(n) ∈ O(g(n)), then f(n) ∈ Θ(g(n)) and g(n) ∈ Θ(f(n)), i.e. both algorithms have the same time complexity.

Proving g(n) is O(f(n)), Ω(f(n)), or Θ(f(n))

Proving the Big-X property means finding the constants, (c, n_0) or (c_1, c_2, n_0), specified in the definitions. This may be done by a chain of inequalities starting from f(n); mathematical induction can be used in more intricate cases. Proving that g(n) is not Big-X of f(n) shows the required constants do not exist, i.e. that assuming they exist leads to a contradiction.

Example 1: Prove that g(n) = 5n^2 + 3n is not O(n). If 5n^2 + 3n ≤ c·n for all n > n_0, then c ≥ 5n + 3 for all n > n_0, so c > 5n_0 + 3 grows without bound, i.e. it cannot be a constant. Therefore, g(n) ∉ O(n).

Example 2: Prove that g(n) = 5n^2 + 3n is Ω(n). For 5n^2 + 3n ≥ c·n to hold for all n > n_0, any constant c ≤ 5n_0 + 3 suffices, e.g. c = 8 with n_0 = 1. Therefore, g(n) ∈ Ω(n).
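
As a complementary worked bound (not in the slides, but it follows directly from the definitions), the same g(n) is tightly bounded by n^2:

    \[
    5n^2 \;\le\; 5n^2 + 3n \;\le\; 5n^2 + 3n^2 = 8n^2
    \quad\text{for all } n \ge 1,
    \]

so taking c_1 = 5, c_2 = 8, and n_0 = 1 in the definition of Big-Theta yields 5n^2 + 3n ∈ Θ(n^2).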

Time Complexity of Algorithms

Example: T(n) = 100·log_10 n. Then T(n) < n for all n > 238 and T(n) < 0.3n for all n > 1000, so T(n) ∈ O(n).

In analysing running time, T(n) ∈ O(f(n)), the functions f(n) measure approximate time complexity: log n, n, n^2, etc.

- Polynomial algorithms: T(n) is O(n^k) for a constant k.
- Exponential algorithms: all others.
- Intractable problems: those for which no polynomial algorithm is known.
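
A quick numeric spot-check (a sketch, not from the slides) of those thresholds:

    // Sketch (assumed): T(n) = 100*log10(n) still exceeds n at n = 237, is
    // below n at n = 238, and drops below 0.3n just after n = 1000.
    public class LogBound {
        public static void main(String[] args) {
            for (int n : new int[]{237, 238, 1000, 1001}) {
                double t = 100 * Math.log10(n);
                System.out.printf("n=%4d  T(n)=%6.2f  0.3n=%5.1f%n", n, t, 0.3 * n);
            }
        }
    }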

Time Complexity Growth

f(n)          Approximate number of data items processed per:
              1 minute    1 day       1 year      1 century
n             10          14,400      5.3·10^6    5.3·10^8
n·log_10 n    10          3,997       8.8·10^5    6.7·10^7
n^1.5         10          1,275       65,128      1.4·10^6
n^2           10          379         7,252       72,522
n^3           10          112         807         3,746
2^n           10          20          29          35

Beware exponential complexity! A linear, O(n), algorithm processing 10 items per minute can process 1.4·10^4 items per day, 5.3·10^6 items per year, and 5.3·10^8 items per century. An exponential, O(2^n), algorithm processing 10 items per minute can process only 20 items per day and only 35 items per century...

Big-Oh vs. Actual Running Time

Example 1: Algorithms A and B have running times T_A(n) = 20n time units and T_B(n) = 0.1·n·log_2 n time units, respectively. In the Big-Oh sense, the linear algorithm A is better than the linearithmic algorithm B... But on what data volume does A actually outperform B, i.e. for which n is the running time of A less than that of B?

T_A(n) < T_B(n) if 20n < 0.1·n·log_2 n, i.e. log_2 n > 200, that is, when n > 2^200 ≈ 10^60! Thus in all practical cases algorithm B is better than A...

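A small sketch (assumed) that evaluates both formulas near the crossover n = 2^200, using doubles since such n far exceeds any integer type:

    // Sketch (assumed): compares T_A(n) = 20n with T_B(n) = 0.1*n*log2(n)
    // for n = 2^k; the formulas cross at k = 200, i.e. n = 2^200.
    public class CrossoverExample1 {
        public static void main(String[] args) {
            for (int k : new int[]{100, 199, 200, 201, 300}) {
                double n = Math.pow(2, k);
                double tA = 20 * n;
                double tB = 0.1 * n * k;          // log2(2^k) = k
                System.out.printf("n=2^%d: T_A=%.3e  T_B=%.3e  -> %s%n",
                                  k, tA, tB, tA < tB ? "A faster" : "B faster or tie");
            }
        }
    }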

Big-Oh vs. Actual Running Time

Example 2: Algorithms A and B have running times T_A(n) = 20n time units and T_B(n) = 0.1n^2 time units, respectively. In the Big-Oh sense, the linear algorithm A is better than the quadratic algorithm B... But on what data volume does A actually outperform B?

T_A(n) < T_B(n) if 20n < 0.1n^2, i.e. n > 200. Thus algorithm A is better than B in most practical cases, except for n < 200, when B becomes faster...

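A matching sketch (assumed) for Example 2, scanning n around the crossover at 200:

    // Sketch (assumed): finds the first n where T_A(n) = 20n beats T_B(n) = 0.1*n^2.
    public class CrossoverExample2 {
        public static void main(String[] args) {
            for (int n = 190; n <= 210; n++) {
                double tA = 20.0 * n;
                double tB = 0.1 * n * n;
                if (tA < tB) {
                    System.out.println("A first beats B at n = " + n);   // expect n = 201
                    break;
                }
            }
        }
    }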