CS 4407 Algorithms Lecture 2: Iterative and Divide and Conquer Algorithms


CS 4407 Algorithms Lecture 2: Iterative and Divide and Conquer Algorithms
Prof. Gregory Provan, Department of Computer Science, University College Cork

Lecture Outline (CS 4407, Algorithms)
- Growth functions: mathematical specification of growth functions
- Iterative algorithms
- Divide-and-conquer algorithms

Today's Learning Objectives
- Describe mathematical principles for specifying the growth of an algorithm's run time
- Classify the growth functions O, Θ, Ω, o, ω
- Iterative algorithm analysis
- Divide-and-conquer algorithm analysis

Analyzing Algorithms
Assumptions: a generic one-processor, random-access machine; we measure running time (other resources: memory, communication, etc.).
Worst-case running time: the longest time for any input of size n. It is an upper bound on the running time for any input; in some cases, such as searching, it is close to the typical time.
Average-case behavior: the expected performance averaged over all possible inputs. It is generally better than the worst-case behavior, but sometimes it is roughly as bad as the worst case.

Notation: Growth Functions
Theta: f(n) = Θ(g(n)) means f(n) ≈ c·g(n)
Big-Oh: f(n) = O(g(n)) means f(n) ≤ c·g(n)
Omega: f(n) = Ω(g(n)) means f(n) ≥ c·g(n)
Little-oh: f(n) = o(g(n)) means f(n) << c·g(n)
Little-omega: f(n) = ω(g(n)) means f(n) >> c·g(n)

Θ-notation
For a given function g(n), we denote by Θ(g(n)) the set of functions
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}.
We say g(n) is an asymptotically tight bound for f(n).

O-notation
For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}.
We say g(n) is an asymptotic upper bound for f(n).

Ω-notation
For a given function g(n), we denote by Ω(g(n)) the set of functions
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}.
We say g(n) is an asymptotic lower bound for f(n).

Relations Between Θ, Ω, O
For any two functions g(n) and f(n), f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)); i.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).

Complexity of Simple Algorithms
- Simple iteration
- Dealing with loops: the loop-invariant method
- Divide-and-conquer algorithms

Iterative Algorithm Running Time
T(n), the running time of a particular algorithm on input of size n, is taken to be the number of times the instructions in the algorithm are executed. The following pseudocode computes the mean (average) of a set of n numbers:

1. n = read input from user
2. sum = 0
3. i = 0
4. while i < n
5.     number = read input from user
6.     sum = sum + number
7.     i = i + 1
8. mean = sum / n

Statement | Number of times executed
1 | 1
2 | 1
3 | 1
4 | n+1
5 | n
6 | n
7 | n
8 | 1

The computing time for this algorithm in terms of input size n is T(n) = 4n + 5.
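The statement-counting analysis above can be mirrored in Python (an illustrative sketch, not the slides' code; the counter bookkeeping is ours):

```python
def mean_with_counts(numbers):
    """Compute the mean of `numbers`, counting statement executions
    as in the cost table above: T(n) = 4n + 5."""
    count = 0
    n = len(numbers)            # statement 1
    count += 1
    total = 0                   # statement 2
    count += 1
    i = 0                       # statement 3
    count += 1
    while True:
        count += 1              # statement 4: tested n + 1 times
        if not i < n:
            break
        number = numbers[i]     # statement 5
        total = total + number  # statement 6
        i = i + 1               # statement 7
        count += 3
    mean = total / n            # statement 8
    count += 1
    return mean, count
```

For a three-element input the counter comes out to 4·3 + 5 = 17, matching the table.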

Analysis of Simple Programs (no recursion)
- Sum the costs of the lines of the program
- Compute the bounding function, e.g. O(·)

Analysing Loops
- Use loop invariants
- Intuitive notion and example follow

Designing an Algorithm
- Define the problem
- Define the loop invariants
- Define a measure of progress (e.g. "79 km to school")
- Define the step
- Define the exit condition
- Maintain the loop invariant
- Make progress
- Establish the initial conditions
- Ending

Typical Loop Invariant
If the input consists of an array of objects: "I have a solution for the first i objects." The step (i to i+1) extends the solution into a solution for the first i+1 objects. Done when we have a solution for all n.

Typical Loop Invariant
If the output consists of an array of objects: "I have produced the output for the first i objects." The step (i to i+1) produces the (i+1)-st output object. Done when n objects have been output.

Binary Search: Example of the Approach
A standard algorithm. Input: a sorted list and a key. Goal: find the key in the list.

Define Problem: Binary Search
Preconditions: a key (e.g. 25) and a sorted list: 3 5 6 13 18 21 21 25 36 43 49 51 53 60 72 74 83 88 91 95
Postconditions: find the key in the list (if it is there).

Define Loop Invariant
Maintain a sublist such that: if the key is contained in the original list, then the key is contained in the sublist.

Define Step
- Cut the sublist in half.
- Determine which half the key would be in.
- Keep that half.
If key ≤ mid, then the key is in the left half. If key > mid, then the key is in the right half.
Each step makes progress while maintaining the loop invariant.

Define Step
It is faster not to check whether the middle element is the key; simply continue. (Example: key 43.) As before: if key ≤ mid, the key is in the left half; if key > mid, it is in the right half.

Make Progress
At each step the size of the sublist becomes smaller (79 km to go, then 75 km, ...).

Initial Conditions
At the start (n km to go), the sublist is the entire original list, so the invariant holds: if the key is contained in the original list, then the key is contained in the sublist.

Ending Algorithm
Exit when the sublist contains one element (0 km to go). By the loop invariant, if the key is contained in the original list, then the key is at this location.

If the Key Is Not in the Original List
The loop invariant ("if the key is contained in the original list, then the key is contained in the sublist") remains true even if the key is not in the list (e.g. key 24). The final conclusion ("if the key is contained in the original list, then the key is at this location") still solves the problem: simply check this one location for the key.

Running Time
The sublist is of size n, n/2, n/4, n/8, ..., 1. Each step takes Θ(1) time. Total = Θ(log n).
If key ≤ mid, then the key is in the left half; if key > mid, then the key is in the right half.

Code
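A minimal Python sketch (ours, not the slide's own code) of the binary search just described: the halving step maintains the invariant, and equality is tested only once, at the end.

```python
def binary_search(lst, key):
    """Search a sorted list for key, maintaining the invariant:
    if key is in the original list, it is in lst[lo:hi+1].
    The loop never tests equality; it only halves the sublist."""
    lo, hi = 0, len(lst) - 1
    while lo < hi:                 # exit when the sublist has one element
        mid = (lo + hi) // 2
        if key <= lst[mid]:
            hi = mid               # key <= mid: keep the left half
        else:
            lo = mid + 1           # key > mid: keep the right half
    # One final check decides whether the key is actually present.
    return lo if lst and lst[lo] == key else -1
```

On the slides' list, searching for 25 returns index 7, and searching for the absent key 24 returns -1.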

Divide and Conquer Algorithms
A well-known class of algorithms that requires a special approach for complexity analysis.

Divide-and-Conquer Analysis
The analysis of divide-and-conquer algorithms requires us to solve a recurrence. Recurrences are a major tool for the analysis of algorithms.

MergeSort
Divide the array (e.g. A L G O R I T H M S) in half, sort each half recursively, and merge. Dividing and merging cost cn at the top level; each half costs T(n/2), each quarter T(n/4), and so on.

Solve T(n) = T(n/2) + T(n/2) + cn. Recurrence:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + cn if n > 1
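The recurrence comes from an implementation like the following Python sketch (ours, not the slides' code):

```python
def merge_sort(a):
    """Divide and conquer: T(n) = 2T(n/2) + cn, hence Theta(n log n)."""
    if len(a) <= 1:              # base case: Theta(1)
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # T(n/2)
    right = merge_sort(a[mid:])  # T(n/2)
    # Merge the two sorted halves: cn work at this level.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```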

Recursion-Tree Method
A recursion tree models the costs (time) of a recursive execution of an algorithm. The recursion-tree method is good for generating guesses for the substitution method. It can be unreliable, just like any method that uses ellipses (...), but it promotes intuition.

Example of Recursion Tree
Solve T(n) = T(n/4) + T(n/2) + n²:
The root costs n². Its children cost (n/4)² and (n/2)², which sum to (5/16)n². The next level costs (n/16)² + (n/8)² + (n/8)² + (n/4)² = (25/256)n² = (5/16)²n², and so on down to the Θ(1) leaves.

Total = n² (1 + 5/16 + (5/16)² + (5/16)³ + ...) = Θ(n²)   (geometric series)
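A quick numeric check of this bound (our sketch; the base case T(1) = 1 and integer halving are assumptions):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = T(n/4) + T(n/2) + n^2 with T(1) = 1, using integer division."""
    if n <= 1:
        return 1
    return T(n // 4) + T(n // 2) + n * n

# The geometric series bounds T(n)/n^2 by 1/(1 - 5/16) = 16/11, about 1.45.
for n in (2**10, 2**15, 2**20):
    print(n, T(n) / n**2)
```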

Solution: Geometric Series
1 + x + x² + ... + xⁿ = (x^(n+1) − 1)/(x − 1) for x ≠ 1
1 + x + x² + ... = 1/(1 − x) for |x| < 1

The Master Method
The master method applies to recurrences of the form T(n) = a·T(n/b) + f(n), where a ≥ 1, b > 1, and f is asymptotically positive.

Idea of Master Theorem
Recursion tree: the root costs f(n) and has a children, each costing f(n/b); those have a children each, costing f(n/b²); and so on, down to height h = log_b n. The per-level costs are f(n), a·f(n/b), a²·f(n/b²), ..., with Θ(1) at each leaf. The number of leaves is a^h = a^(log_b n) = n^(log_b a), so the leaf level contributes n^(log_b a)·Θ(1).

Three Common Cases
Compare f(n) with n^(log_b a):
1. f(n) = O(n^(log_b a − ε)) for some constant ε > 0: f(n) grows polynomially slower than n^(log_b a) (by an n^ε factor).
Solution: T(n) = Θ(n^(log_b a)), the cost of the leaves in the recursion tree.

Idea of Master Theorem
CASE 1: The weight increases geometrically from the root to the leaves, so the leaves hold a constant fraction of the total weight: T(n) = Θ(n^(log_b a)).

Three Common Cases
2. f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0: f(n) and n^(log_b a) grow at similar rates.
Solution: T(n) = Θ(n^(log_b a) lg^(k+1) n).

Idea of Master Theorem
CASE 2 (k = 0): The weight is approximately the same on each of the log_b n levels: T(n) = Θ(n^(log_b a) lg n).

Three Common Cases (cont.)
3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0: f(n) grows polynomially faster than n^(log_b a) (by an n^ε factor), and f(n) satisfies the regularity condition a·f(n/b) ≤ c·f(n) for some constant c < 1.
Solution: T(n) = Θ(f(n)).

Idea of Master Theorem
CASE 3: The weight decreases geometrically from the root to the leaves, so the root holds a constant fraction of the total weight: T(n) = Θ(f(n)).

Examples
Ex. T(n) = 4T(n/2) + n: a = 4, b = 2, n^(log_b a) = n²; f(n) = n. CASE 1: f(n) = O(n^(2−ε)) for ε = 1, so T(n) = Θ(n²).
Ex. T(n) = 4T(n/2) + n²: a = 4, b = 2, n^(log_b a) = n²; f(n) = n². CASE 2: f(n) = Θ(n² lg⁰ n), that is, k = 0, so T(n) = Θ(n² lg n).

Examples
Ex. T(n) = 4T(n/2) + n³: a = 4, b = 2, n^(log_b a) = n²; f(n) = n³. CASE 3: f(n) = Ω(n^(2+ε)) for ε = 1, and the regularity condition 4·(n/2)³ = n³/2 ≤ c·n³ holds for c = 1/2, so T(n) = Θ(n³).
Ex. T(n) = 4T(n/2) + n²/lg n: a = 4, b = 2, n^(log_b a) = n²; f(n) = n²/lg n. The master method does not apply: in particular, for every constant ε > 0 we have n^ε = ω(lg n), so f(n) is not polynomially smaller than n², yet it is not Θ(n² lg^k n) for any k ≥ 0 either.
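The three cases can be mechanized for purely polynomial driving functions f(n) = n^d; the following helper is our simplification for illustration, not part of the lecture:

```python
import math

def master(a, b, d):
    """Classify T(n) = a*T(n/b) + n**d via the master theorem.
    Only polynomial driving functions f(n) = n**d are handled
    (a simplified form of the three cases; regularity holds for n**d)."""
    crit = math.log(a, b)                 # the exponent log_b a
    if d < crit:                          # Case 1: f polynomially slower
        return f"Theta(n^{crit:g})"
    if d == crit:                         # Case 2 with k = 0
        return f"Theta(n^{d:g} lg n)"
    return f"Theta(n^{d:g})"              # Case 3: f polynomially faster

print(master(4, 2, 1))   # T(n) = 4T(n/2) + n    -> Theta(n^2)
print(master(4, 2, 2))   # T(n) = 4T(n/2) + n^2  -> Theta(n^2 lg n)
print(master(4, 2, 3))   # T(n) = 4T(n/2) + n^3  -> Theta(n^3)
```

Note the floating-point comparison d == crit is adequate only for small integer parameters like these.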

Growth Function Summary
- Defined the mathematical principles for specifying the runtime of an algorithm
- Classified the growth functions O, Θ, Ω, o, ω
- Defined several relationships among the growth functions:
Theta: f(n) = Θ(g(n)) means f(n) ≈ c·g(n)
Big-Oh: f(n) = O(g(n)) means f(n) ≤ c·g(n)
Omega: f(n) = Ω(g(n)) means f(n) ≥ c·g(n)
Little-oh: f(n) = o(g(n)) means f(n) << c·g(n)
Little-omega: f(n) = ω(g(n)) means f(n) >> c·g(n)

Appendix: Review of Growth Functions Detailed review of different growth functions

A Simple Example
INPUT: a sequence of n numbers. OUTPUT: the smallest number among them.
1. x ← T[1]
2. for i ← 2 to n do
3.     if T[i] < x then x ← T[i]
Performance of this algorithm is a function of n.
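In Python, the same algorithm might read (our sketch):

```python
def smallest(t):
    """Scan for the minimum: n - 1 comparisons, so Theta(n) time."""
    x = t[0]             # x <- T[1]
    for v in t[1:]:      # for i <- 2 to n
        if v < x:        # if T[i] < x then x <- T[i]
            x = v
    return x
```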

Runtime Analysis
Elementary operation: an operation whose execution time can be bounded above by a constant depending on the implementation used. We assume all elementary operations can be executed at unit cost. This is not strictly true, but their costs are within some constant factor of each other, and we are mainly concerned with the order of growth.

Order of Growth
For very large input sizes, it is the asymptotic rate of growth (or order of growth) that matters. We can ignore the lower-order terms, since they are relatively insignificant for very large n. We can also ignore the leading term's constant coefficient, since it is not as important for the rate of growth for very large n. Faster-growing functions of n are normally considered less efficient.

Comparisons of Algorithms
Multiplication: the classical technique is O(nm); the divide-and-conquer technique is O(n·m^(lg 1.5)) ≈ O(n·m^0.59). For operands of size 1000, they take 40 and 15 seconds respectively on a Cyber 835.
Sorting: insertion sort is Θ(n²); merge sort is Θ(n lg n). For 10^6 numbers, insertion sort took 5.56 hours on a supercomputer using machine language, while merge sort took 16.67 minutes on a PC using C/C++.

Why Order of Growth Matters
Computer speeds double every two years, so why worry about algorithm speed? When speed doubles, what happens to the amount of work you can do?

Effect of Faster Machines
Number of items sorted in a fixed time, at hardware speeds of 1 M* and 2 M*:

Complexity of Alg. | 1 M* | 2 M* | Gain
O(n²) | 1000 | 1414 | 1.414
O(n lg n) | 62700 | 118600 | 1.9
(* million operations per second)

The gain from faster hardware is higher for the more efficient algorithm. The results are more dramatic at higher speeds: for the n lg n algorithm, doubling the speed almost doubles the number of items that can be sorted.

Classifying Functions
Constant: 5
Logarithmic: 5 log n
Poly-logarithmic: (log n)^5
Polynomial: n^5
Exponential: 2^(5n)
Double exponential: 2^(2^(5n))

Ordering Functions
5 << 5 log n << (log n)^5 << n^5 << 2^(5n) << 2^(2^(5n))

Classifying Functions
Constant: θ(1)
Logarithmic: θ(log n)
Poly-logarithmic: (log n)^θ(1)
Polynomial: n^θ(1)
Exponential: 2^θ(n)
Double exponential: 2^(2^θ(n))

Asymptotic Notation: O, Θ, Ω, o, ω
Used to describe the running times of algorithms: instead of an exact running time, we say, e.g., Θ(n²). These notations are defined for functions whose domain is the set of natural numbers N. They determine sets of functions; in practice they are used to compare two functions.

Example
3n² + 7n + 8 = Θ(n²). What constants for n0, c1, and c2 will work? Make c1 a little smaller than the leading coefficient, and c2 a little bigger. To compare orders of growth, look at the leading term.

Definition of Theta
f(n) = θ(g(n)) ⟺ there exist c1, c2, n0 > 0 such that for all n ≥ n0: c1·g(n) ≤ f(n) ≤ c2·g(n)
f(n) is sandwiched between c1·g(n) and c2·g(n), for some sufficiently small c1 (e.g. 0.0001) and some sufficiently large c2 (e.g. 1000), for all sufficiently large n (n ≥ n0, for some definition of "sufficiently large").

Definition of Theta: Example
3n² + 7n + 8 = θ(n²): find c1, c2, n0 with c1·n² ≤ 3n² + 7n + 8 ≤ c2·n² for all n ≥ n0.
Try c1 = 3 and c2 = 4:
- n = 1: 3·1² ≤ 3 + 7 + 8 ≤ 4·1², i.e. 3 ≤ 18 ≤ 4 — false.
- n = 7: 3·7² ≤ 147 + 49 + 8 ≤ 4·7², i.e. 147 ≤ 204 ≤ 196 — false.
- n = 8: 3·8² ≤ 192 + 56 + 8 ≤ 4·8², i.e. 192 ≤ 256 ≤ 256 — true.
So take n0 = 8: for all n ≥ 8, 3n² ≤ 3n² + 7n + 8 ≤ 4n².
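The sandwich can be checked numerically (our sketch, using the constants c1 = 3, c2 = 4, n0 = 8 found on the slides):

```python
def f(n):
    """The example function from the slides."""
    return 3 * n**2 + 7 * n + 8

# Check c1*n^2 <= f(n) <= c2*n^2 for all n >= n0 (over a finite sample).
c1, c2, n0 = 3, 4, 8
assert all(c1 * n**2 <= f(n) <= c2 * n**2 for n in range(n0, 10**4))
assert not (f(7) <= c2 * 7**2)   # the upper bound fails just below n0
```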

Running Times
"The running time is O(f(n))" means the worst case is O(f(n)). "The running time is Ω(f(n))" means the best case is Ω(f(n)). We can still say "the worst-case running time is Ω(f(n))": it means the worst-case running time is given by some unknown function g(n) ∈ Ω(f(n)).

Example
Insertion sort takes Θ(n²) worst-case time, so sorting (as a problem) is O(n²). Any sorting algorithm must look at each item, so sorting is Ω(n). In fact, using (e.g.) merge sort, sorting is Θ(n lg n) in the worst case.

Asymptotic Notation in Equations
Asymptotic notation is used to replace expressions containing lower-order terms. For example,
4n³ + 3n² + 2n + 1 = 4n³ + 3n² + Θ(n) = 4n³ + Θ(n²) = Θ(n³)
In equations, Θ(f(n)) always stands for an anonymous function g(n) ∈ Θ(f(n)). In the example above, Θ(n²) stands for 3n² + 2n + 1.

o-notation
For a given function g(n), we denote by o(g(n)) the set of functions:
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0}
f(n) becomes insignificant relative to g(n) as n approaches infinity: lim_{n→∞} f(n)/g(n) = 0.
We say g(n) is an upper bound for f(n) that is not asymptotically tight.

O(·) versus o(·)
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}.
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0}.
Thus f(n) = o(g(n)) is a strictly stronger claim than f(n) = O(g(n)). For example: n² = O(n²) but n² ∉ o(n²); n² = O(n³) and n² = o(n³).

ω-notation
For a given function g(n), we denote by ω(g(n)) the set of functions:
ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0}
f(n) becomes arbitrarily large relative to g(n) as n approaches infinity: lim_{n→∞} f(n)/g(n) = ∞.
We say g(n) is a lower bound for f(n) that is not asymptotically tight.

Limits
lim_{n→∞} f(n)/g(n) = 0 ⟹ f(n) ∈ o(g(n))
lim_{n→∞} f(n)/g(n) < ∞ ⟹ f(n) ∈ O(g(n))
0 < lim_{n→∞} f(n)/g(n) < ∞ ⟹ f(n) ∈ Θ(g(n))
0 < lim_{n→∞} f(n)/g(n) ⟹ f(n) ∈ Ω(g(n))
lim_{n→∞} f(n)/g(n) = ∞ ⟹ f(n) ∈ ω(g(n))
lim_{n→∞} f(n)/g(n) undefined ⟹ can't say

Additional Useful Relations
The following slides provide many useful relations for orders of growth. These relations may prove helpful in homework and exams.

Comparison of Functions
Think of the relations between f and g as analogous to those between numbers a and b:
f(n) = O(g(n)) ≈ a ≤ b
f(n) = Ω(g(n)) ≈ a ≥ b
f(n) = Θ(g(n)) ≈ a = b
f(n) = o(g(n)) ≈ a < b
f(n) = ω(g(n)) ≈ a > b

Properties
Transitivity:
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⟹ f(n) = Θ(h(n))
f(n) = O(g(n)) and g(n) = O(h(n)) ⟹ f(n) = O(h(n))
f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⟹ f(n) = Ω(h(n))
f(n) = o(g(n)) and g(n) = o(h(n)) ⟹ f(n) = o(h(n))
f(n) = ω(g(n)) and g(n) = ω(h(n)) ⟹ f(n) = ω(h(n))
Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transpose symmetry:
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
f(n) = o(g(n)) if and only if g(n) = ω(f(n))

Useful Facts
For a ≥ 0, b > 0: lim_{n→∞} (lg^a n / n^b) = 0, so lg^a n = o(n^b) and n^b = ω(lg^a n). (Prove using L'Hôpital's rule and induction.)
lg(n!) = Θ(n lg n)

Examples
A = 5n² + 100n, B = 3n² + 2: A ∈ Θ(B), since A ∈ Θ(n²) and n² ∈ Θ(B).
A = log₃(n²), B = log₂(n³): A ∈ Θ(B), since log_b a = log_c a / log_c b gives A = 2 lg n / lg 3 and B = 3 lg n, so A/B = 2/(3 lg 3), a constant.
A = n^(lg 4), B = 3^(lg n): A ∈ ω(B), since a^(log b) = b^(log a) gives B = 3^(lg n) = n^(lg 3), so A/B = n^(lg(4/3)) → ∞ as n → ∞.
A = lg² n, B = n^(1/2): A ∈ o(B), since lim_{n→∞} (lg^a n / n^b) = 0 (here a = 2 and b = 1/2).
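A numeric spot-check of two of these comparisons (ours; finite samples only illustrate, they do not prove, the limits):

```python
import math

# Ratios for growing n: the Theta pair tends to the constant 5/3,
# the little-o pair tends to 0.
for n in (10**3, 10**6, 10**9):
    theta_ratio = (5 * n**2 + 100 * n) / (3 * n**2 + 2)
    little_o_ratio = math.log2(n) ** 2 / math.sqrt(n)
    print(f"n={n:>10}  (5n^2+100n)/(3n^2+2) = {theta_ratio:.4f}   "
          f"lg^2 n / sqrt(n) = {little_o_ratio:.4f}")
```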

Review of Summation
Why do we need summation formulas? We need them for computing the running times of some algorithms (CLRS/Chapter 3).
Example: Maximum Subvector. Given an array a[1..n] of numeric values (which can be positive, zero, or negative), determine the subvector a[i..j] (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all subvectors. For instance, in the array 1, -2, 2, 2 the maximum subvector is 2, 2, with sum 4.

Review of Summation
MaxSubvector(a, n)
    maxsum ← 0
    for i ← 1 to n
        for j ← i to n
            sum ← 0
            for k ← i to j
                sum += a[k]
            maxsum ← max(sum, maxsum)
    return maxsum

T(n) = Σ_{i=1}^{n} Σ_{j=i}^{n} Σ_{k=i}^{j} 1
NOTE: This is not a simplified solution. What is the final answer?
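A direct Python translation of the pseudocode (our sketch):

```python
def max_subvector(a):
    """Brute-force maximum-subvector sum, mirroring the pseudocode:
    three nested loops, hence Theta(n^3) additions overall."""
    n = len(a)
    maxsum = 0
    for i in range(n):
        for j in range(i, n):
            s = sum(a[i:j + 1])          # the k-loop: j - i + 1 additions
            maxsum = max(maxsum, s)
    return maxsum

print(max_subvector([1, -2, 2, 2]))      # the subvector [2, 2] gives 4
```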

Review of Summation
Constant series: for b ≥ a ≥ 0, Σ_{i=a}^{b} 1 = b − a + 1
Linear series: for n ≥ 0, Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n+1)/2

Review of Summation
Quadratic series: for n ≥ 0, Σ_{i=1}^{n} i² = 1² + 2² + ... + n² = (2n³ + 3n² + n)/6
Linear-geometric series: for n ≥ 0 and c ≠ 1, Σ_{i=1}^{n} i·cⁱ = c + 2c² + ... + n·cⁿ = [n·c^(n+2) − (n+1)·c^(n+1) + c]/(c − 1)²
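These closed forms are easy to spot-check in Python (our sketch):

```python
# Compare the closed forms against direct summation for small n and c.
for n in range(1, 20):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(i * i for i in range(1, n + 1)) == (2 * n**3 + 3 * n**2 + n) // 6
    for c in (2, 3, 5):
        lhs = sum(i * c**i for i in range(1, n + 1))
        rhs = (n * c**(n + 2) - (n + 1) * c**(n + 1) + c) // (c - 1) ** 2
        assert lhs == rhs
print("all summation identities check out")
```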

Divide-and-Conquer Examples

Integer Multiplication
Let X = 2^(n/2)·A + B and Y = 2^(n/2)·C + D, where A, B, C and D are n/2-bit integers (A and C are the high halves, B and D the low halves).
Simple method: XY = (2^(n/2)·A + B)(2^(n/2)·C + D) = 2^n·AC + 2^(n/2)·(AD + BC) + BD
Running time recurrence: T(n) ≤ 4T(n/2) + 100n. How do we solve it?
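The simple four-multiplication method can be sketched in Python (ours; n is assumed to be a power of two bounding the operands' bit-length):

```python
def multiply(x, y, n):
    """Simple divide-and-conquer integer multiplication: four
    half-size subproblems, so T(n) = 4T(n/2) + O(n)."""
    if n <= 1:
        return x * y                           # single-bit base case
    half = n // 2
    a, b = x >> half, x & ((1 << half) - 1)    # X = 2^(n/2)*A + B
    c, d = y >> half, y & ((1 << half) - 1)    # Y = 2^(n/2)*C + D
    ac = multiply(a, c, half)
    ad = multiply(a, d, half)
    bc = multiply(b, c, half)
    bd = multiply(b, d, half)
    return (ac << n) + ((ad + bc) << half) + bd
```

Karatsuba's improvement (not shown on this slide) reduces the four subproblems to three.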

Substitution Method
The most general method:
1. Guess the form of the solution.
2. Verify by induction.
3. Solve for constants.
Example: T(n) = 4T(n/2) + 100n. [Assume that T(1) = Θ(1).] Guess O(n³). (Prove O and Ω separately.) Assume that T(k) ≤ ck³ for k < n; prove T(n) ≤ cn³ by induction.
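The guess can be sanity-checked numerically (our sketch; the base case T(1) = 1 and integer halving are assumptions):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4T(n/2) + 100n with assumed base case T(1) = 1."""
    return 1 if n <= 1 else 4 * T(n // 2) + 100 * n

# On powers of two this recurrence solves exactly to 101*n^2 - 100*n,
# so the guessed bound T(n) <= n^3 holds once n >= 101 (it is loose:
# the true order of growth is Theta(n^2)).
for k in range(7, 16):
    n = 2**k
    assert T(n) == 101 * n**2 - 100 * n
    assert T(n) <= n**3
print("guess verified on powers of two")
```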