Algorithm Analysis. Algorithms that are equally correct can vary in their utilization of computational resources.


Algorithm Analysis. Algorithms that are equally correct can vary in their utilization of computational resources: time and memory. A slow program is likely not to be used; a program that demands too much memory may not be executable on the machines that are available. E.G.M. Petrakis, Algorithm Analysis

Memory - Time. It is important to be able to predict how an algorithm behaves under a range of possible conditions, usually different inputs. The emphasis is on the time rather than on the space efficiency: memory is cheap!

Running Time. The time to solve a problem increases with the size of the input; we measure the rate at which the time increases, e.g., linearly, quadratically, exponentially. This measure focuses on intrinsic characteristics of the algorithm rather than on incidental factors, e.g., computer speed, coding tricks, optimizations, quality of compiler.

Additional Considerations. Sometimes, simple but less efficient algorithms are preferred over more efficient ones: the more efficient algorithms are not always easy to implement; the program is to be changed soon; time may not be critical; the program will run only a few times (e.g., once a year); the size of the input is always very small.

Bubble Sort. Count the number of steps executed by the algorithm. A step is a basic operation; ignore operations that are executed a constant number of times.

for (i = 1; i <= n - 1; i++)
    for (j = n - 1; j >= i; j--)
        if (A[j - 1] > A[j])
            swap(A[j], A[j - 1]);   // step

T(n) = c·n^2
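The counting argument on this slide can be checked with a short runnable sketch. The slide's code is pseudocode, so a Python rendering is used here; the function name and the step counter are illustrative additions.

```python
def bubble_sort(A):
    """Sort A in place, counting executions of the basic step (the comparison)."""
    steps = 0
    n = len(A)
    for i in range(n - 1):                 # passes i = 1 .. n-1 of the slide
        for j in range(n - 1, i, -1):      # bubble the smallest remaining item down
            steps += 1
            if A[j - 1] > A[j]:
                A[j - 1], A[j] = A[j], A[j - 1]
    return steps

data = [5, 2, 9, 1, 7]
steps = bubble_sort(data)
# The comparison runs exactly n(n-1)/2 times regardless of input order: T(n) = c * n^2.
```

For n = 5 the step counter always reports 10 = 5·4/2 comparisons, whether the input is sorted or not, which is why the analysis ignores the input and keeps only the quadratic term.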

Asymptotic Algorithm Analysis. Measures the efficiency of a program as the size of the input becomes large; focuses on the growth rate: the rate at which the running time of an algorithm grows as the size of the input becomes big. Complexity => growth rate; focuses on the upper and lower bounds of the growth rates; ignores all constant factors.

Growth Rate. Depending on its order, an algorithm can be of linear time complexity: T(n) = c·n; quadratic complexity: T(n) = c·n^2; exponential complexity: T(n) = c·k^n. The difference between an algorithm whose running time is T(n) = 10n and another whose running time is T(n) = 2n^2 is tremendous: for n > 5 the linear algorithm is much faster despite its larger constant; it is slower only for small n, but for such n both algorithms are very fast.
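The 10n versus 2n^2 comparison above can be verified directly; this minimal Python sketch just evaluates both cost functions and finds the crossover point.

```python
def t_linear(n):
    return 10 * n        # T1(n) = 10n: larger constant, smaller growth rate

def t_quad(n):
    return 2 * n * n     # T2(n) = 2n^2: smaller constant, larger growth rate

# First n at which the linear algorithm becomes strictly faster.
crossover = next(n for n in range(1, 100) if t_linear(n) < t_quad(n))
```

The two functions are equal at n = 5 (both cost 50), and for every n > 5 the linear algorithm wins, exactly as the slide claims.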


Upper and Lower Bounds. Several terms are used to describe the running time of an algorithm. The upper bound on its growth rate is expressed by the Big-Oh notation: f(n) is an upper bound of T(n), i.e., T(n) is in O(f(n)). The lower bound on its growth rate is expressed by the Omega notation: g(n) is a lower bound of T(n), i.e., T(n) is in Ω(g(n)). If the upper and lower bounds are the same, T(n) is in Θ(g(n)).

Big O (upper bound). Definition: T(n) ∈ O(g(n)) if T(n) ≤ c·g(n) for all n ≥ n0, for some constants c ≥ 1 and n0 ≥ 0. Examples: T(n) = 3n ⇒ T(n) ∈ O(n); T(n) = 6n^2 - 2n + 7 ⇒ T(n) ∈ O(n^2); T(n) = n^3 + 10^6·n^2 ⇒ T(n) ∈ O(n^3).
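The definition is a claim about the existence of witnesses c and n0; given concrete candidates, it can be spot-checked over a finite range. A minimal sketch (the helper name and the witness values are illustrative, and a finite check is of course not a proof):

```python
def is_big_o_witness(T, g, c, n0, n_max=10 ** 4):
    """Spot-check T(n) <= c * g(n) for all n0 <= n <= n_max."""
    return all(T(n) <= c * g(n) for n in range(n0, n_max + 1))

# T(n) = 6n^2 + 7 is in O(n^2): c = 7 and n0 = 3 work, since 7 <= n^2 once n >= 3.
ok = is_big_o_witness(lambda n: 6 * n * n + 7, lambda n: n * n, c=7, n0=3)
```

The same helper refutes bad candidate witnesses: no constant c can make n^2 ≤ c·n hold for all large n, and the check fails as soon as n exceeds c.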

Properties of O. O is transitive: if f ∈ O(g) and g ∈ O(h), then f ∈ O(h). If f ∈ O(g), then f + g ∈ O(g). If f ∈ O(f') and g ∈ O(g'), then f·g ∈ O(f'·g'). If f ∈ O(f') and g ∈ O(g'), then f + g ∈ O(f' + g').

More on Upper Bounds. T(n) ∈ O(g(n)): g(n) is an upper bound for T(n). There exist many upper bounds, e.g., T(n) = n^2 is in O(n^2), O(n^3), ..., O(n^100), ... Find the smallest upper bound!! The O notation hides the constants, e.g., c·n^2 ∈ O(n^2) for any constant c.

O in Practice. Among different algorithms solving the same problem, prefer the algorithm with the smaller rate of growth O: it will be faster for large n. O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < ... < O(2^n) < O(n^n). O(n), O(n log n), O(n^2), O(n^3): polynomial. O(log n): logarithmic. O(k^n), O(2^n), O(n!), O(n^n): exponential!!

[Table: running times at 10^-6 sec per basic operation, for complexities such as 1000n, 1000·n·log n, 100n^2, 10n^3, n^(log n), and 2^n, at n = 20, 50, 100, 200, 500, 1000. The polynomial algorithms finish within seconds to minutes even at n = 1000, while 2^n already takes about one second at n = 20, roughly 35 years at n = 50, and the largest entries run to centuries.]
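A table like the one above is easy to regenerate. The exact coefficients of the original are not fully recoverable from the transcript, so the complexity functions below are assumed, typical textbook choices; the point is the conversion from an operation count to wall-clock time at 10^-6 sec per operation.

```python
import math

def seconds(ops):
    """Running time in seconds at 10^-6 sec per basic operation."""
    return ops * 1e-6

# Assumed complexity functions (coefficients are illustrative).
funcs = {
    "1000n":       lambda n: 1000 * n,
    "1000n log n": lambda n: 1000 * n * math.log2(n),
    "100n^2":      lambda n: 100 * n * n,
    "10n^3":       lambda n: 10 * n ** 3,
    "2^n":         lambda n: 2.0 ** n,
}

table = {n: {name: seconds(f(n)) for name, f in funcs.items()}
         for n in (20, 50, 100, 200, 500, 1000)}
```

With these values, 2^n costs about one second at n = 20 but over 35 years at n = 50, while every polynomial row stays within minutes, which is the whole argument of the slide.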

Omega Ω (lower bound). Definition: f(n) ∈ Ω(g(n)) if f(n) ≥ c·g(n) for all n ≥ n0, for some constants c ≥ 1 and n0 ≥ 0. g(n) is a lower bound on the rate of growth of f(n): f(n) increases at least as fast as g(n) as n increases. E.g.: T(n) = 6n^2 - 2n + 7 ⇒ T(n) ∈ Ω(n^2).

More on Lower Bounds. Ω provides a lower bound on the growth rate of T(n); there exist many lower bounds, e.g., T(n) = n^4 is in Ω(n), Ω(n^2), Ω(n^3), Ω(n^4). Find the largest one ⇒ T(n) ∈ Ω(n^4).

Theta Θ. If the lower and upper bounds are the same: f ∈ O(g) and f ∈ Ω(g) ⇒ f ∈ Θ(g). E.g.: T(n) = n^3 + 135n ⇒ T(n) ∈ Θ(n^3).

Theorem: T(n) = Σ_{i=0..m} a_i·n^i ∈ Θ(n^m). Proof (a): T(n) = a_m·n^m + a_{m-1}·n^{m-1} + ... + a_1·n + a_0, so for n ≥ 1:
T(n) ≤ |a_m|·n^m + |a_{m-1}|·n^{m-1} + ... + |a_1|·n + |a_0|
     ≤ n^m·(|a_m| + (1/n)·|a_{m-1}| + ... + (1/n^{m-1})·|a_1| + (1/n^m)·|a_0|)
     ≤ n^m·(|a_m| + |a_{m-1}| + ... + |a_1| + |a_0|)
⇒ T(n) ∈ O(n^m).

Theorem (cont.) (b) Assuming all a_i > 0:
T(n) = a_m·n^m + a_{m-1}·n^{m-1} + ... + a_1·n + a_0 ≥ c·n^m + c·n^{m-1} + ... + c·n + c ≥ c·n^m,
where c = min{a_m, a_{m-1}, ..., a_1, a_0} ⇒ T(n) ∈ Ω(n^m). (a), (b) ⇒ T(n) ∈ Θ(n^m).

Theorem: (a) T(n) = Σ_{i=1..n} i ∈ Θ(n^2):
2T(n) = Σ_{i=1..n} i + Σ_{i=1..n} (n - i + 1) = Σ_{i=1..n} (n + 1) = n(n + 1)
(pair each term i with the term n - i + 1), so T(n) = n(n + 1)/2 ∈ Θ(n^2).
(b) T(n) = Σ_{i=1..n} i^k ∈ Θ(n^{k+1}) for k ≥ 0:
T(n) ≤ Σ_{i=1..n} n^k = n^{k+1} ⇒ T(n) ∈ O(n^{k+1});
T(n) ≥ Σ_{i=n/2..n} i^k ≥ (n/2)·(n/2)^k = n^{k+1}/2^{k+1} ⇒ T(n) ∈ Ω(n^{k+1}).
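Both bounds on the power sums can be checked numerically; this small sketch (helper name illustrative) evaluates the sum and the two bounding expressions from part (b).

```python
def sum_k(n, k):
    """T(n) = sum of i^k for i = 1..n."""
    return sum(i ** k for i in range(1, n + 1))

n, k = 1000, 3
s = sum_k(n, k)
lower = n ** (k + 1) / 2 ** (k + 1)   # keep only the upper half of the terms: (n/2)*(n/2)^k
upper = n ** (k + 1)                  # replace every term by the largest one: n * n^k
```

For k = 1 the sum agrees exactly with the closed form n(n+1)/2 from part (a), and for larger k the value is sandwiched between n^{k+1}/2^{k+1} and n^{k+1}, i.e., it is Θ(n^{k+1}).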

Recursive Relations (1). T(n) = 1 if n = 1; T(n) = T(n - 1) + n if n > 1. Unrolling:
T(n) = T(n - 1) + n = T(n - 2) + (n - 1) + n = ... = T(1) + 2 + ... + (n - 1) + n = Σ_{i=1..n} i = n(n + 1)/2
⇒ T(n) ∈ Θ(n^2).
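The unrolling above can be confirmed by computing the recurrence directly and comparing it with the closed form; a minimal sketch (written iteratively to avoid recursion-depth limits):

```python
def T(n):
    """T(1) = 1; T(n) = T(n-1) + n for n > 1, computed by accumulating the unrolled sum."""
    total = 1                    # T(1)
    for i in range(2, n + 1):    # add the +i contributed at each level of the recursion
        total += i
    return total
```

The recurrence value matches n(n+1)/2 for every n, so T(n) ∈ Θ(n^2) as derived on the slide.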

Fibonacci Relation. F(0) = 0, F(1) = 1, F(n) = F(n - 1) + F(n - 2) if n ≥ 2.
(a) F(n) = F(n - 1) + F(n - 2) ≤ 2F(n - 1) ≤ 4F(n - 2) ≤ ... ≤ 2^{n-1}·F(1) = 2^{n-1} ⇒ F(n) ∈ O(2^n).
(b) F(n) = F(n - 1) + F(n - 2) ≥ 2F(n - 2) ≥ 4F(n - 4) ≥ ... ≥ 2^{(n-1)/2} if n is odd, 2^{(n-2)/2} if n is even ⇒ F(n) ∈ Ω(2^{n/2}) = Ω((√2)^n).

Fibonacci Relation (cont.) From (a), (b): F(n) ∈ Ω((√2)^n) and F(n) ∈ O(2^n). It can be proven that F(n) ∈ Θ(φ^n), where φ = (1 + √5)/2 = 1.6180..., the golden ratio.
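Both the 2^{n/2} ... 2^{n-1} sandwich and the Θ(φ^n) claim are easy to test numerically; a small sketch with an iterative Fibonacci (function name illustrative):

```python
def fib(n):
    """Iterative Fibonacci: F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2   # golden ratio, about 1.6180

# Sandwich from the slide (the weaker even-case lower bound holds for all n >= 2):
bounds_hold = all(2 ** ((n - 2) / 2) <= fib(n) <= 2 ** (n - 1) for n in range(2, 60))

ratio = fib(40) / phi ** 40   # tends to 1/sqrt(5), about 0.447, so F(n) grows like phi^n
```

The ratio F(n)/φ^n settling at the constant 1/√5 is exactly what F(n) ∈ Θ(φ^n) means: the base of the true exponential growth lies strictly between √2 and 2.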

Binary Search.

int BSearch(int T[], int a, int b, int key) {
    if (a > b) return -1;
    int middle = (a + b) / 2;
    if (T[middle] == key) return middle;
    else if (key < T[middle]) return BSearch(T, a, middle - 1, key);
    else return BSearch(T, middle + 1, b, key);
}
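To make the logarithmic behavior analyzed on the next slides concrete, here is an iterative Python rendering that also counts probes (names and the probe counter are illustrative additions); for n = 2^k - 1 an unsuccessful search probes exactly k positions.

```python
def bsearch(T, key):
    """Iterative binary search on a sorted list; returns (index or -1, probes made)."""
    a, b, probes = 0, len(T) - 1, 0
    while a <= b:
        probes += 1
        middle = (a + b) // 2
        if T[middle] == key:
            return middle, probes
        elif key < T[middle]:
            b = middle - 1
        else:
            a = middle + 1
    return -1, probes

table = list(range(1023))                 # n = 2^10 - 1
hit, hit_probes = bsearch(table, 511)     # middle element: found on the first probe
miss, miss_probes = bsearch(table, -5)    # unsuccessful: probes k = 10 positions
```

Each probe shrinks the sub-array from 2^j - 1 elements to 2^{j-1} - 1, so an unsuccessful search on 1023 elements makes exactly 10 probes, matching T(n) = d·log(n + 1) + c.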

Analysis of Binary Search. Let n = b - a + 1 be the size of the array, and let n = 2^k - 1 for some k (i.e., n = 1, 3, 7, 15, ...). The size of each sub-array is 2^{k-1} - 1. Then T(n) = c if k = 0, and T(n) = d + T(2^{k-1} - 1) if k > 0, where c is the execution time of the first command, d is the execution time outside the recursion, and T is the execution time.

Binary Search (cont.)
T(n) = T(2^k - 1) = d + T(2^{k-1} - 1) = 2d + T(2^{k-2} - 1) = ... = kd + T(0) = kd + c = d·log(n + 1) + c
⇒ T(n) ∈ O(log n).

Binary Search for Random n. T(n) = c if n = 0; T(n) = d + max{T(⌊(n - 1)/2⌋), T(⌈(n - 1)/2⌉)} if n > 0. T(n) ≤ T(n + 1) ≤ ... ≤ T(u(n)), where u(n) = 2^k - 1 is the smallest such value with n ≤ u(n); then u(n) < 2n. Thus T(n) ≤ T(u(n)) = d·log(u(n) + 1) + c ≤ d·log(2n) + c ⇒ T(n) ∈ O(log n).

Merge Sort.

list mergesort(list L, int n) {
    if (n == 1) return L;
    L1 = lower half of L;
    L2 = upper half of L;
    return merge(mergesort(L1, n/2), mergesort(L2, n/2));
}

n: size of array L (assume n is some power of 2); merge: merges the sorted L1, L2 into a sorted array.
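The pseudocode above leaves merge unspecified; a minimal runnable Python sketch of the same top-down scheme, with the merge written out, might look like this:

```python
def merge_sort(L):
    """Top-down mergesort; returns a new sorted list."""
    n = len(L)
    if n <= 1:
        return L[:]
    L1 = merge_sort(L[:n // 2])   # lower half
    L2 = merge_sort(L[n // 2:])   # upper half
    # merge: repeatedly take the smaller head of the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(L1) and j < len(L2):
        if L1[i] <= L2[j]:
            merged.append(L1[i]); i += 1
        else:
            merged.append(L2[j]); j += 1
    return merged + L1[i:] + L2[j:]
```

Run on the slide's example input [25, 57, 48, 37, 12, 92, 86, 33], it reproduces the merge levels traced on the next slide.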


Analysis of Merge Sort.
[25] [57] [48] [37] [12] [92] [86] [33]
[25 57] [37 48] [12 92] [33 86]
[25 37 48 57] [12 33 86 92]
[12 25 33 37 48 57 86 92]
log2(n) merge levels, n steps per level ⇒ O(n·log2(n)).

Analysis of Merge Sort. T(n) = c1 if n = 1; T(n) = 2T(n/2) + c2·n if n > 1. Two ways to solve it: recursion analysis (unrolling) and induction.

Induction. Let T(n) = O(n log n); then T(n) can be written as T(n) ≤ a·n·log n + b, for constants a, b > 0. Proof: (1) for n = 1, true if b ≥ c1; (2) assume T(k) ≤ a·k·log k + b for k = 1, 2, ..., n - 1; (3) for k = n prove that T(n) ≤ a·n·log n + b. For k = n/2: T(n/2) ≤ a·(n/2)·log(n/2) + b; therefore T(n) can be written as T(n) ≤ 2·{a·(n/2)·log(n/2) + b} + c2·n = a·n·(log n - 1) + 2b + c2·n = ...

Induction (cont.) T(n) ≤ a·n·log n - a·n + 2b + c2·n. Let b = c1 and a = c1 + c2 ⇒ T(n) ≤ (c1 + c2)·n·log n + c1 ⇒ T(n) ≤ C·n·log n + B.

Recursion Analysis of Mergesort.
T(n) = 2T(n/2) + c2·n ≤ 4T(n/4) + 2·c2·n ≤ 8T(n/8) + 3·c2·n ≤ ... ≤ 2^i·T(n/2^i) + i·c2·n, where n = 2^i
⇒ T(n) ≤ n·T(1) + c2·n·log n = c1·n + c2·n·log n ⇒ T(n) ≤ (c1 + c2)·n·log n ⇒ T(n) ≤ C·n·log n.

Best, Worst, Average Cases. For some algorithms, different inputs require different amounts of time, e.g., sequential search in an array of size n. Worst case: element not in the array ⇒ T(n) = n. Best case: element is first in the array ⇒ T(n) = 1. Average case: element equally likely at each of the positions 1..n ⇒ T(n) = Σ_{i=1..n} p_i·i = (n + 1)/2 comparisons.
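The three cases for sequential search can be measured exactly by counting comparisons over every possible successful key plus one absent key; a minimal sketch (names illustrative):

```python
def seq_search(A, key):
    """Return the number of comparisons made to locate key in A (len(A) if absent)."""
    for i, x in enumerate(A, start=1):
        if x == key:
            return i
    return len(A)

n = 101
A = list(range(n))
best = seq_search(A, 0)                        # key in the first slot: 1 comparison
worst = seq_search(A, -1)                      # key absent: n comparisons
avg = sum(seq_search(A, k) for k in A) / n     # each position equally likely: (n + 1)/2
```

Averaging over all n equally likely positions gives exactly (n + 1)/2, the expectation Σ p_i·i with p_i = 1/n, which is why the slide summarizes the average case as about n/2.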

Comparison. The best case is not representative: it happens only rarely; it is useful when it has high probability. The worst case is very useful: the algorithm performs at least that well; it is very important for real-time applications. The average case represents the typical behavior of the algorithm: it is not always possible to compute; knowledge about the data distribution is necessary.