
COMP 3170 - Analysis of Algorithms & Data Structures
Shahin Kamali
Lecture 4 - Jan. 10, 2018
CLRS 1.1, 1.2, 2.2, 3.1, 4.3, 4.5
University of Manitoba

Asymptotic Notations in a Nutshell

Definition: f(n) ∈ O(g(n)) iff ∃ M > 0, n₀ > 0 s.t. ∀ n > n₀, f(n) ≤ M·g(n)
Definition: f(n) ∈ o(g(n)) iff ∀ M > 0, ∃ n₀ > 0 s.t. ∀ n > n₀, f(n) < M·g(n)
Definition: f(n) ∈ Ω(g(n)) iff ∃ M > 0, n₀ > 0 s.t. ∀ n > n₀, f(n) ≥ M·g(n)
Definition: f(n) ∈ ω(g(n)) iff ∀ M > 0, ∃ n₀ > 0 s.t. ∀ n > n₀, f(n) > M·g(n)
Definition: f(n) ∈ Θ(g(n)) iff ∃ M₁, M₂ > 0, n₀ > 0 s.t. ∀ n > n₀, M₁·g(n) ≤ f(n) ≤ M₂·g(n)
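The constants M and n₀ act as witnesses for a membership claim. As a quick sanity check (a sketch; the concrete function f(n) = 5n + 30 and the witnesses M = 6, n₀ = 30 are our own illustration, not from the slides):

```python
# Witnesses for the claim f(n) = 5n + 30 ∈ O(n): take M = 6 and n0 = 30.
# Then for every n > n0, 5n + 30 <= 6n, which is equivalent to n >= 30.

def f(n):
    return 5 * n + 30

def g(n):
    return n

M, n0 = 6, 30

# Spot-check the defining inequality on a range of inputs beyond n0.
assert all(f(n) <= M * g(n) for n in range(n0 + 1, 100_000))
```

Any larger M (with a matching n₀) works equally well; the definition only asks for one witness pair to exist.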


Common Growth Rates

Θ(1): constant complexity, e.g., an algorithm that inspects only a constant number of its inputs.
Θ(log n): logarithmic complexity, e.g., binary search.
Θ(n): linear complexity; most practical algorithms :)
Θ(n log n): pseudo-linear complexity, e.g., optimal comparison-based sorting algorithms such as merge-sort.
Θ(n²): quadratic complexity, e.g., naive sorting algorithms (bubble sort, insertion sort).
Θ(n³): cubic complexity, e.g., naive matrix multiplication.
Θ(2ⁿ): exponential complexity; the algorithm terminates, but even for n = 1000 the universe is likely to end much earlier.
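The separation between these classes shows up even at modest input sizes. A small numeric illustration (the choice n = 64 is ours):

```python
import math

n = 64  # a small illustrative input size

# Each common growth rate evaluated at the same n, slowest to fastest.
growth = {
    "log n":   math.log2(n),     # 6.0
    "n":       n,                # 64
    "n log n": n * math.log2(n), # 384
    "n^2":     n ** 2,           # 4096
    "n^3":     n ** 3,           # 262144
    "2^n":     2 ** n,           # ~1.8 * 10^19
}

# Already at n = 64, each class strictly dominates the previous one.
values = list(growth.values())
assert all(a < b for a, b in zip(values, values[1:]))
```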


Techniques for Comparing Growth Rates

Assume the running times of two algorithms are given by functions f(n) and g(n), and let L = lim_{n→∞} f(n)/g(n). Then

f(n) ∈ o(g(n)) if L = 0; f(n) ∈ Θ(g(n)) if 0 < L < ∞; f(n) ∈ ω(g(n)) if L = ∞.

If the limit does not exist, we need another method.
Note that we cannot compare two algorithms using big-O (or Ω) bounds alone. E.g., algorithm A can have complexity O(n²) and algorithm B complexity O(n³); we cannot conclude that A is faster than B (why? Because O gives only upper bounds: B might in fact also run in, say, O(n) time).
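The limit rule can be approximated numerically by evaluating the ratio at a large n. A rough sketch (the helper name limit_ratio and the sample functions are our own):

```python
def limit_ratio(f, g, n=10**6):
    # Approximate L = lim_{n -> inf} f(n)/g(n) by evaluating at one large n.
    # This is only a heuristic: it cannot detect a non-existent limit.
    return f(n) / g(n)

# n^2 vs n^3: the ratio tends to 0, so n^2 ∈ o(n^3).
assert limit_ratio(lambda n: n**2, lambda n: n**3) < 1e-3

# 3n^2 + n vs n^2: the ratio tends to 3 (0 < L < inf), so they are Θ of each other.
r = limit_ratio(lambda n: 3 * n**2 + n, lambda n: n**2)
assert 2.9 < r < 3.1
```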

Fun with Asymptotic Notations

Compare the growth rates of log n and n^r, where r is a real number.
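For every fixed r > 0 the answer is log n ∈ o(n^r): any positive power of n eventually dominates the logarithm. A numeric check (our own sketch; convergence is very slow for small r, so we use r = 0.5):

```python
import math

r = 0.5  # any fixed r > 0 works; the ratio just shrinks more slowly for small r

# The ratio log(n) / n^r tends to 0, witnessing log n ∈ o(n^r).
ratios = [math.log(n) / n**r for n in (10**3, 10**6, 10**9)]

assert ratios[0] > ratios[1] > ratios[2]  # strictly decreasing on this range
assert ratios[-1] < 0.001                 # already tiny at n = 10^9
```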


Fun with Asymptotic Notations

Prove that n(sin(n) + 2) is Θ(n).
Use the definition, since the limit does not exist.
Define n₀, M₁, M₂ so that ∀ n > n₀ we have M₁·n(sin(n) + 2) ≤ n ≤ M₂·n(sin(n) + 2).
M₁ = 1/3, M₂ = 1, n₀ = 1 work!
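The sandwich works because 1 ≤ sin(n) + 2 ≤ 3 for all n. A quick numeric spot-check of the inequality with the slide's witnesses:

```python
import math

M1, M2, n0 = 1 / 3, 1.0, 1

# Since 1 <= sin(n) + 2 <= 3, we get
#   M1 * n * (sin(n) + 2) <= n <= M2 * n * (sin(n) + 2),
# i.e. n ∈ Θ(n(sin n + 2)), equivalently n(sin n + 2) ∈ Θ(n).
for n in range(n0 + 1, 10_000):
    s = n * (math.sin(n) + 2)
    assert M1 * s <= n <= M2 * s
```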


Fun with Asymptotic Notations

The same relationships that hold between relative values of numbers hold asymptotically. E.g., if f(n) ∈ O(g(n)) [f(n) is asymptotically smaller than or equal to g(n)], then g(n) ∈ Ω(f(n)) [g(n) is asymptotically larger than or equal to f(n)].
In order to prove f(n) ∈ Θ(g(n)), we often show that f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)) separately.
Similarly, we have transitivity in asymptotic notations: if f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)).
Max rule: f(n) + g(n) ∈ Θ(max{f(n), g(n)}). E.g., 2n³ + 8n² + 16n log n ∈ Θ(max{2n³, 8n², 16n log n}) = Θ(n³).
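The max rule says lower-order terms vanish asymptotically. For the example above, the ratio against the dominant term approaches the leading coefficient (a sketch; the choice of test point is ours):

```python
import math

def f(n):
    # The polynomial from the max-rule example: 2n^3 + 8n^2 + 16 n log n.
    return 2 * n**3 + 8 * n**2 + 16 * n * math.log2(n)

# By the max rule f(n) ∈ Θ(n^3); the ratio f(n)/n^3 tends to the
# leading coefficient 2 as n grows.
assert abs(f(10**6) / (10**6) ** 3 - 2) < 0.01
```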


Fun with Asymptotic Notations

What is the growth rate of an arithmetic series?
Σ_{i=0}^{n−1} (a + d·i) = n·a + d·n(n−1)/2 ∈ Θ(n²)

What about a geometric series?
Σ_{i=0}^{n−1} a·rⁱ = a·(1−rⁿ)/(1−r) ∈ Θ(1) if 0 < r < 1; = n·a ∈ Θ(n) if r = 1; = a·(rⁿ−1)/(r−1) ∈ Θ(rⁿ) if r > 1

What about the Harmonic series?
H_n = Σ_{i=1}^{n} 1/i = ln(n) + γ + o(1) ∈ Θ(log n) (γ ≈ 0.577 is a constant)
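All three closed forms are easy to confirm numerically (a sketch; the sample parameters a = 3, d = 5, r = 2 are our own):

```python
import math

n, a, d, r = 1000, 3, 5, 2

# Arithmetic series: sum_{i=0}^{n-1} (a + d i) = n a + d n (n-1)/2 ∈ Θ(n^2).
assert sum(a + d * i for i in range(n)) == n * a + d * n * (n - 1) // 2

# Geometric series with r > 1: sum_{i=0}^{n-1} a r^i = a (r^n - 1)/(r - 1) ∈ Θ(r^n).
assert sum(a * r**i for i in range(n)) == a * (r**n - 1) // (r - 1)

# Harmonic numbers: H_n = ln(n) + γ + o(1) ∈ Θ(log n), with γ ≈ 0.5772.
H = sum(1 / i for i in range(1, n + 1))
assert abs(H - (math.log(n) + 0.5772156649)) < 0.001
```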

Loop Analysis

Identify elementary operations that require constant time.
The complexity of a loop is expressed as the sum of the complexities of its iterations.
Analyze independent loops separately, and then add the results (use the max rule and simplify when possible).
If loops are nested, start with the innermost loop and proceed outwards.

Example of Loop Analysis

Algo1(n)
  A ← 0
  for i ← 1 to n do
    for j ← i to n do
      A ← A/(i − j)²
      A ← A · 100
  return A
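For the analysis, only the number of times the constant-time body runs matters, not its arithmetic. A Python tally (a sketch; the helper name is ours):

```python
def algo1_body_count(n):
    # Count how many times the inner-loop body of Algo1 executes.
    count = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            count += 1  # stands in for the two constant-time updates to A
    return count

# The body runs sum_{i=1}^{n} (n - i + 1) = n(n+1)/2 times,
# so Algo1 performs Θ(n^2) elementary operations.
for n in (1, 10, 100):
    assert algo1_body_count(n) == n * (n + 1) // 2
```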

Example of Loop Analysis

Algo2(A, n)
  max ← 0
  for i ← 1 to n do
    for j ← i to n do
      X ← 0
      for k ← i to j do
        X ← X + A[k]
      if X > max then
        max ← X
  return max
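Here the innermost statement dominates. Counting its executions (a sketch; the helper name is ours):

```python
def algo2_innermost_count(n):
    # Count executions of the innermost statement X <- X + A[k] in Algo2.
    count = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            for k in range(i, j + 1):
                count += 1
    return count

# Summing (j - i + 1) over all 1 <= i <= j <= n gives n(n+1)(n+2)/6,
# so Algo2 runs in Θ(n^3) time.
for n in (1, 5, 30):
    assert algo2_innermost_count(n) == n * (n + 1) * (n + 2) // 6
```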

Example of Loop Analysis

Algo3(n)
  X ← 0
  for i ← 1 to n² do
    j ← i
    while j ≥ 1 do
      X ← X + i/j
      j ← j/2
  return X
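The inner while loop halves j, so for each i it runs ⌊log₂ i⌋ + 1 times. Counting the total (a sketch; the helper name is ours, and j ← j/2 is treated as integer halving, which gives the same iteration count):

```python
def algo3_inner_count(n):
    # Count iterations of the while loop across the whole run of Algo3.
    count = 0
    for i in range(1, n * n + 1):
        j = i
        while j >= 1:
            count += 1
            j //= 2
    return count

# For each i the while loop runs floor(log2 i) + 1 times, which in Python
# is exactly i.bit_length(); the total sum is Θ(n^2 log n).
for n in (2, 8, 20):
    assert algo3_inner_count(n) == sum(i.bit_length() for i in range(1, n * n + 1))
```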

MergeSort

Sorting an array A of n numbers:
Step 1: Split A into two subarrays: A_L consists of the first ⌈n/2⌉ elements of A and A_R consists of the last ⌊n/2⌋ elements of A.
Step 2: Recursively run MergeSort on A_L and A_R.
Step 3: After A_L and A_R have been sorted, use a function Merge to merge them into a single sorted array. This can be done in time Θ(n).

MergeSort

MergeSort(A, n)
  if n = 1 then
    S ← A
  else
    n_L ← ⌈n/2⌉
    n_R ← ⌊n/2⌋
    A_L ← [A[1], ..., A[n_L]]
    A_R ← [A[n_L + 1], ..., A[n]]
    S_L ← MergeSort(A_L, n_L)
    S_R ← MergeSort(A_R, n_R)
    S ← Merge(S_L, n_L, S_R, n_R)
  return S
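The pseudocode translates almost line for line into Python (a sketch; note Python lists are 0-indexed, and Merge here takes the two sorted lists directly rather than lengths):

```python
def merge(L, R):
    # Merge two sorted lists into one sorted list in Θ(|L| + |R|) time.
    out, i, j = [], 0, 0
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            out.append(L[i]); i += 1
        else:
            out.append(R[j]); j += 1
    out.extend(L[i:])   # at most one of these
    out.extend(R[j:])   # two extends is non-empty
    return out

def merge_sort(A):
    # Direct transcription of the MergeSort pseudocode.
    n = len(A)
    if n == 1:
        return A[:]
    n_L = (n + 1) // 2          # ceil(n/2) elements go to the left half
    S_L = merge_sort(A[:n_L])
    S_R = merge_sort(A[n_L:])
    return merge(S_L, S_R)

assert merge_sort([5, 2, 4, 7, 1, 3, 2, 6]) == [1, 2, 2, 3, 4, 5, 6, 7]
```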

Analysis of MergeSort

The following is the corresponding sloppy recurrence (it has floors and ceilings removed):
T(n) = 2·T(n/2) + c·n if n > 1; T(n) = d if n = 1.
The exact and sloppy recurrences are identical when n is a power of 2.
The recurrence can easily be solved by various methods when n = 2^j. The solution has growth rate T(n) ∈ Θ(n log n).
It is possible to show that T(n) ∈ Θ(n log n) for all n by analyzing the exact recurrence.
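For n a power of 2 the sloppy recurrence can be evaluated directly, and unfolding it gives the exact closed form T(n) = c·n·log₂(n) + d·n, which is Θ(n log n). A quick check (our own sketch, with c = d = 1):

```python
def T(n, c=1, d=1):
    # The sloppy merge-sort recurrence, valid when n is a power of 2:
    # T(n) = 2 T(n/2) + c n for n > 1, and T(1) = d.
    if n == 1:
        return d
    return 2 * T(n // 2, c, d) + c * n

# Closed form for powers of two: T(2^j) = c * n * j + d * n  (n = 2^j).
for j in range(1, 11):
    n = 2 ** j
    assert T(n) == n * j + n   # with c = d = 1
```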


Analysis of Recursions

Substitution method: guess the growth function and prove it using induction.
For merge-sort, prove T(n) ≤ M·n log n for some constant M ≥ c. This holds for n = 2 and n = 3 (base of induction). Fix a value of n and assume the inequality holds for all smaller values. We have
T(n) = 2T(n/2) + cn ≤ 2M(n/2)·log(n/2) + cn = Mn log n − Mn + cn ≤ Mn log n
(the first inequality comes from the induction hypothesis; the last holds because M ≥ c).

Limited Master theorem: for
T(n) = a·T(n/b) + n^c if n > 1; T(n) = d if n = 1,
if log_b a > c, then T(n) ∈ Θ(n^(log_b a));
if log_b a = c, then T(n) ∈ Θ(n^c · log n);
if log_b a < c, then T(n) ∈ Θ(n^c).
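The three cases just compare log_b(a) against c, so they are easy to mechanize. A small classifier sketch for this limited form (the function name and the string labels are our own):

```python
import math

def master_case(a, b, c):
    # Classify T(n) = a T(n/b) + n^c by comparing log_b(a) with c.
    x = math.log(a, b)
    if x > c:
        return "Theta(n^log_b(a))"
    if x == c:
        return "Theta(n^c log n)"
    return "Theta(n^c)"

assert master_case(2, 2, 1) == "Theta(n^c log n)"   # merge sort: Θ(n log n)
assert master_case(1, 2, 0) == "Theta(n^c log n)"   # binary search: Θ(log n)
assert master_case(7, 2, 2) == "Theta(n^log_b(a))"  # Strassen: Θ(n^log2(7)) ≈ Θ(n^2.81)
assert master_case(4, 2, 1) == "Theta(n^log_b(a))"  # T(n) = 4T(n/2) + n: Θ(n^2)
```

Note the float comparison x == c is fine for these textbook instances but would need care for cases where log_b(a) is irrational and very close to c.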