CS 4407 Algorithms Lecture 2: Growth Functions

CS 4407 Algorithms Lecture 2: Growth Functions. Prof. Gregory Provan, Department of Computer Science, University College Cork

Lecture Outline
- Growth functions
- Mathematical specification of growth functions
- Function specification
- Examples

Today's Learning Objectives
- Describe the mathematical principles for specifying the growth of an algorithm's running time
- Classify the growth functions Θ, O, Ω, o, ω
- Understand the relationships among the growth functions

Analyzing Algorithms
Assumptions: a generic one-processor, random-access machine; we measure running time (other resources: memory, communication, etc.).
Worst-case running time: the longest time for any input of size n. This is an upper bound on the running time for any input; in some cases, such as searching, the bound is close to the actual running time.
Average-case behavior: the expected performance averaged over all possible inputs. It is generally better than the worst-case behavior, but sometimes it is roughly as bad as the worst case.

A Simple Example
INPUT: a sequence of n numbers T[1..n]
OUTPUT: the smallest number among them

x ← T[1]
for i ← 2 to n do
  if T[i] < x then x ← T[i]

The performance of this algorithm is a function of n. A runnable version follows below.
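As a concrete illustration, here is a direct Python translation of the pseudocode above (a minimal sketch; the name find_min and the use of a Python list for T are illustrative choices, not from the slides):

def find_min(T):
    """Return the smallest number in the non-empty sequence T.

    Mirrors the slide's pseudocode: a single pass with n - 1
    comparisons, so the running time grows linearly with n.
    """
    x = T[0]                      # x <- T[1] (Python lists are 0-indexed)
    for i in range(1, len(T)):    # for i <- 2 to n
        if T[i] < x:              # if T[i] < x then x <- T[i]
            x = T[i]
    return x

print(find_min([5, 3, 8, 1, 9]))  # prints 1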

Runtime Analysis
Elementary operation: an operation whose execution time can be bounded above by a constant that depends only on the implementation used. We assume all elementary operations can be executed at unit cost. This is not strictly true, but their costs are within constant factors of each other, and we are mainly concerned with the order of growth.

Order of Growth
For very large input sizes, it is the asymptotic rate of growth (order of growth) that matters. We can ignore lower-order terms, since they are relatively insignificant for very large n. We can also ignore the leading term's constant coefficient, since it is not as important for the rate of growth for very large n. Functions of n of higher order are normally considered less efficient.

Comparisons of Algorithms
Multiplication: the classical technique is O(nm); the divide-and-conquer technique is O(nm^(lg 1.5)) ≈ O(nm^0.59). For operands of size 1000, they take 40 and 15 seconds respectively on a Cyber 835.
Sorting: insertion sort is Θ(n^2); merge sort is Θ(n lg n). For 10^6 numbers, insertion sort took 5.56 hours on a supercomputer using machine language, while merge sort took 16.67 minutes on a PC using C/C++.

Why Order of Growth Matters Computer speeds double every two years, so why worry about algorithm speed? When speed doubles, what happens to the amount of work you can do?

Effect of Faster Machines

Complexity of algorithm | Items sorted at 1 M* | Items sorted at 2 M* | Gain
O(n^2)                  | 1000                 | 1414                 | 1.414
O(n lg n)               | 62700                | 118600               | 1.9
(* million operations per second)

The gain from faster hardware is higher for the more efficient algorithm. Results are more dramatic for higher speeds: for the n lg n algorithm, doubling the speed almost doubles the number of items that can be sorted.
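The table's entries can be reproduced by finding the largest n whose operation count fits the machine's budget; a minimal sketch (the bisection helper and the one-second budgets are assumptions chosen to match the table):

import math

def max_items(cost, budget):
    """Largest n with cost(n) <= budget, found by doubling then bisection."""
    lo, hi = 1, 1
    while cost(hi) <= budget:      # grow an upper bound
        hi *= 2
    while lo < hi - 1:             # invariant: cost(lo) <= budget < cost(hi)
        mid = (lo + hi) // 2
        if cost(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

for budget in (1_000_000, 2_000_000):                    # 1 M and 2 M operations
    print(budget,
          max_items(lambda n: n * n, budget),            # O(n^2) sort
          max_items(lambda n: n * math.log2(n), budget)) # O(n lg n) sort
# Matches the table: 1000 -> 1414 for n^2, roughly 62700 -> 118600 for n lg n.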

Classifying Functions
- Constant: 5
- Logarithmic: 5 log n
- Poly-logarithmic: (log n)^5
- Polynomial: n^5
- Exponential: 2^(5n)
- Double exponential: 2^(2^(5n))

Ordering Functions
Constant << Logarithmic << Poly-logarithmic << Polynomial << Exponential << Double exponential:
5 << 5 log n << (log n)^5 << n^5 << 2^(5n) << 2^(2^(5n))

Classifying Functions
The classes in general form:
- Constant: θ(1)
- Logarithmic: θ(log n)
- Poly-logarithmic: (log n)^θ(1)
- Polynomial: n^θ(1)
- Exponential: 2^θ(n)
- Double exponential: 2^(2^θ(n))

Asymptotic Notation
Θ, O, Ω, o, ω are used to describe the running times of algorithms: instead of an exact running time we say, e.g., Θ(n^2). They are defined for functions whose domain is the set of natural numbers N. Each determines a set of functions; in practice they are used to compare two functions.

Notation
- Theta: f(n) = Θ(g(n)) means f(n) ≈ c·g(n)
- Big Oh: f(n) = O(g(n)) means f(n) ≤ c·g(n)
- Omega: f(n) = Ω(g(n)) means f(n) ≥ c·g(n)
- Little oh: f(n) = o(g(n)) means f(n) << c·g(n)
- Little omega: f(n) = ω(g(n)) means f(n) >> c·g(n)

Θ-notation
For a given function g(n), we denote by Θ(g(n)) the set of functions
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }.
We say g(n) is an asymptotically tight bound for f(n).

Example: 3n^2 + 7n + 8 = Θ(n^2). What constants for n0, c1, and c2 will work? Make c1 a little smaller than the leading coefficient and c2 a little bigger. To compare orders of growth, look at the leading term.

Definition of Theta
f(n) = Θ(g(n)): there exist c1, c2, n0 such that for all n ≥ n0, c1·g(n) ≤ f(n) ≤ c2·g(n).
That is, f(n) is sandwiched between c1·g(n) and c2·g(n) for some sufficiently small c1 (e.g. 0.0001), some sufficiently large c2 (e.g. 1000), and all sufficiently large n (n ≥ n0, for some definition of "sufficiently large").

Worked example: 3n^2 + 7n + 8 = Θ(n^2)
We need constants with c1·n^2 ≤ 3n^2 + 7n + 8 ≤ c2·n^2. Try c1 = 3 and c2 = 4:
- n = 1: 3·1^2 ≤ 3·1^2 + 7·1 + 8 ≤ 4·1^2, i.e. 3 ≤ 18 ≤ 4. False.
- n = 7: 3·7^2 ≤ 3·7^2 + 7·7 + 8 ≤ 4·7^2, i.e. 147 ≤ 204 ≤ 196. False.
- n = 8: 3·8^2 ≤ 3·8^2 + 7·8 + 8 ≤ 4·8^2, i.e. 192 ≤ 256 ≤ 256. True.
So take n0 = 8: for all n ≥ 8, 3n^2 ≤ 3n^2 + 7n + 8 ≤ 4n^2, as checked in the sketch below.
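The threshold n0 = 8 can be verified mechanically; a minimal sketch (the helper function and test range are illustrative):

def theta_bounds_hold(n, c1=3, c2=4):
    """Check c1*n^2 <= 3n^2 + 7n + 8 <= c2*n^2 for a single n."""
    f = 3 * n * n + 7 * n + 8
    return c1 * n * n <= f <= c2 * n * n

# The bounds fail for n < 8 and hold from n = 8 onward.
print([n for n in range(1, 20) if theta_bounds_hold(n)])  # [8, 9, ..., 19]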

O-notation
For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
We say g(n) is an asymptotic upper bound for f(n).

Ω-notation
For a given function g(n), we denote by Ω(g(n)) the set of functions
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.
We say g(n) is an asymptotic lower bound for f(n).

Relations Between Θ, Ω, O
For any two functions g(n) and f(n), f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)); i.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).

Running Times
- "The running time is O(f(n))" means the worst case is O(f(n)).
- "The running time is Ω(f(n))" means the best case is Ω(f(n)).
- We can still say "the worst-case running time is Ω(f(n))": it means the worst-case running time is given by some unspecified function g(n) ∈ Ω(f(n)).

Example
Insertion sort takes Θ(n^2) time in the worst case, so sorting (as a problem) is O(n^2). Any sorting algorithm must look at each item, so sorting is Ω(n). In fact, using (e.g.) merge sort, sorting is Θ(n lg n) in the worst case.

Asymptotic Notation in Equations
Asymptotic notation is used to replace expressions containing lower-order terms. For example:
4n^3 + 3n^2 + 2n + 1 = 4n^3 + 3n^2 + Θ(n) = 4n^3 + Θ(n^2) = Θ(n^3)
In equations, Θ(f(n)) always stands for an anonymous function g(n) ∈ Θ(f(n)). In the example above, Θ(n^2) stands for 3n^2 + 2n + 1.

o-notation
For a given function g(n), we denote by o(g(n)) the set of functions
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
f(n) becomes insignificant relative to g(n) as n approaches infinity: lim (n→∞) f(n)/g(n) = 0.
We say g(n) is an upper bound for f(n) that is not asymptotically tight.

O(·) versus o(·)
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
Thus f(n) = o(g(n)) is a strictly stronger statement than f(n) = O(g(n)): the bound must hold for every positive c, not just one. For example:
n^2 = O(n^2), but n^2 ≠ o(n^2)
n^2 = O(n^3), and n^2 = o(n^3)

ω-notation
For a given function g(n), we denote by ω(g(n)) the set of functions
ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }.
f(n) becomes arbitrarily large relative to g(n) as n approaches infinity: lim (n→∞) f(n)/g(n) = ∞.
We say g(n) is a lower bound for f(n) that is not asymptotically tight.

Limits
- lim (n→∞) f(n)/g(n) = 0 ⇒ f(n) ∈ o(g(n))
- lim (n→∞) f(n)/g(n) < ∞ ⇒ f(n) ∈ O(g(n))
- 0 < lim (n→∞) f(n)/g(n) < ∞ ⇒ f(n) ∈ Θ(g(n))
- 0 < lim (n→∞) f(n)/g(n) ⇒ f(n) ∈ Ω(g(n))
- lim (n→∞) f(n)/g(n) = ∞ ⇒ f(n) ∈ ω(g(n))
- If the limit is undefined, we can't say.
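The limit test also lends itself to quick numerical experiments: evaluate f(n)/g(n) at a few large n and see where the ratio is heading. A rough sketch, not a proof (the helper and sample points are illustrative):

import math

def ratio_trend(f, g, points=(10**3, 10**4, 10**5, 10**6)):
    """Print f(n)/g(n) at increasing n. A ratio heading to 0 suggests o,
    one heading to infinity suggests omega, and a stable positive ratio
    suggests Theta. This only suggests the relation; it proves nothing."""
    for n in points:
        print(n, f(n) / g(n))

# 3n^2 + 7n + 8 vs n^2: the ratio tends to 3, suggesting Theta(n^2).
ratio_trend(lambda n: 3*n*n + 7*n + 8, lambda n: n*n)

# lg^2 n vs sqrt(n): the ratio tends to 0, suggesting o(sqrt(n)).
ratio_trend(lambda n: math.log2(n)**2, lambda n: math.sqrt(n))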

Lecture Summary
- Defined the mathematical principles for specifying the runtime of an algorithm
- Classified the growth functions Θ, O, Ω, o, ω
- Defined several relationships among the growth functions:
  Theta: f(n) = Θ(g(n)) means f(n) ≈ c·g(n)
  Big Oh: f(n) = O(g(n)) means f(n) ≤ c·g(n)
  Omega: f(n) = Ω(g(n)) means f(n) ≥ c·g(n)
  Little oh: f(n) = o(g(n)) means f(n) << c·g(n)
  Little omega: f(n) = ω(g(n)) means f(n) >> c·g(n)

Additional Useful Relations The following slides provide many useful relations for orders of growth These relations may prove helpful in homeworks and exams

Comparison of Functions
The asymptotic relations between functions f and g behave like order relations between numbers a and b:
- f(n) = O(g(n)) ≈ a ≤ b
- f(n) = Ω(g(n)) ≈ a ≥ b
- f(n) = Θ(g(n)) ≈ a = b
- f(n) = o(g(n)) ≈ a < b
- f(n) = ω(g(n)) ≈ a > b

Properties
Transitivity:
- f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
- f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))
- f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))
- f(n) = o(g(n)) and g(n) = o(h(n)) ⇒ f(n) = o(h(n))
- f(n) = ω(g(n)) and g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))
Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transpose symmetry:
- f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
- f(n) = o(g(n)) if and only if g(n) = ω(f(n))

Useful Facts
For a ≥ 0 and b > 0, lim (n→∞) (lg^a n / n^b) = 0, so lg^a n = o(n^b) and n^b = ω(lg^a n). Prove using L'Hôpital's rule and induction.
lg(n!) = Θ(n lg n)

Examples
- A = 5n^2 + 100n, B = 3n^2 + 2: A ∈ Θ(B), since A ∈ Θ(n^2) and n^2 ∈ Θ(B).
- A = log_3(n^2), B = log_2(n^3): A ∈ Θ(B). Using log_b a = log_c a / log_c b: A = 2 lg n / lg 3 and B = 3 lg n, so A/B = 2/(3 lg 3), a positive constant.
- A = n^(lg 4), B = 3^(lg n): A ∈ ω(B). Using a^(log b) = b^(log a): B = 3^(lg n) = n^(lg 3), so A/B = n^(lg(4/3)) → ∞ as n → ∞.
- A = lg^2 n, B = n^(1/2): A ∈ o(B), since lim (lg^a n / n^b) = 0 (here a = 2 and b = 1/2).
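These four comparisons can be sanity-checked numerically in the spirit of the Limits slide (a quick sketch; the sample points are arbitrary, and note that n^(lg 4) = n^2 exactly):

import math

for n in (10**3, 10**6, 10**9, 10**12):
    lg = math.log2(n)
    print(f"n={n:.0e}",
          (5*n*n + 100*n) / (3*n*n + 2),   # tends to 5/3: Theta
          (2*lg / math.log2(3)) / (3*lg),  # constant 2/(3 lg 3): Theta
          n**2 / n**math.log2(3),          # n^(lg 4)/n^(lg 3) grows: omega
          lg**2 / math.sqrt(n))            # tends to 0: little-o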

Review on Summation
Why do we need summation formulas? We need them for computing the running times of some algorithms (CLRS, Chapter 3).
Example: maximum subvector. Given an array a[1..n] of numeric values (positive, zero, or negative), determine the subvector a[i..j] (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all subvectors; e.g., for a = (1, -2, 2, 2) the maximum subvector is (2, 2), with sum 4.

Review on Summation
MaxSubvector(a, n)
  maxsum ← 0
  for i ← 1 to n do
    for j ← i to n do
      sum ← 0
      for k ← i to j do
        sum ← sum + a[k]
      maxsum ← max(sum, maxsum)
  return maxsum

T(n) = Σ (i=1 to n) Σ (j=i to n) Σ (k=i to j) 1
NOTE: This is not a simplified solution. What is the final answer?
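For reference, here is a direct Python translation with a counter on the innermost statement, so the triple sum T(n) can be checked empirically (a minimal sketch; the ops counter is an addition for illustration, not part of the pseudocode):

def max_subvector(a):
    """Brute-force maximum subvector sum, mirroring the pseudocode above.
    The innermost statement executes T(n) = n(n+1)(n+2)/6 = Theta(n^3) times."""
    n = len(a)
    maxsum, ops = 0, 0
    for i in range(n):
        for j in range(i, n):
            s = 0
            for k in range(i, j + 1):
                s += a[k]
                ops += 1
            maxsum = max(maxsum, s)
    return maxsum, ops

print(max_subvector([1, -2, 2, 2]))  # (4, 20): best subvector is (2, 2); 4*5*6/6 = 20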

Review on Summation
Constant series: for integers a and b with b ≥ a,
Σ (i=a to b) 1 = b − a + 1
Linear series: for n ≥ 0,
Σ (i=1 to n) i = 1 + 2 + ... + n = n(n+1)/2

Review on Summation
Quadratic series: for n ≥ 0,
Σ (i=1 to n) i^2 = 1^2 + 2^2 + ... + n^2 = (2n^3 + 3n^2 + n)/6 = n(n+1)(2n+1)/6
Linear-geometric series: for n ≥ 0 and c ≠ 1,
Σ (i=1 to n) i·c^i = c + 2c^2 + ... + n·c^n = [n·c^(n+2) − (n+1)·c^(n+1) + c] / (c−1)^2
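These closed forms are easy to spot-check by brute force; a small sketch (the test values of n and c are arbitrary):

def check(n, c=2.0):
    """Compare brute-force sums against the closed forms above."""
    quad = sum(i * i for i in range(1, n + 1))
    assert quad == (2 * n**3 + 3 * n**2 + n) // 6

    lin_geo = sum(i * c**i for i in range(1, n + 1))
    closed = (n * c**(n + 2) - (n + 1) * c**(n + 1) + c) / (c - 1)**2
    assert abs(lin_geo - closed) < 1e-6 * abs(closed)

for n in (1, 2, 5, 10, 20):
    check(n)
print("all summation checks passed")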