Asymptotic Analysis. Slides by Carl Kingsford. Jan. 27, 2014. AD Chapter 2.

Independent Set. Definition (Independent Set). Given a graph G = (V, E), an independent set is a set S ⊆ V such that no two nodes in S are joined by an edge. Maximum Independent Set. Given a graph G, find the largest independent set. Apparently a difficult problem: no efficient algorithm is known, and there is good reason to suspect that none exists.

Combinatorial Problems. Problems like Independent Set, Minimum Spanning Tree, etc. can be thought of as families of instances: an instance of Independent Set is specified by a particular graph G; an instance of Minimum Spanning Tree is given by the graph G and edge weights d. Instances often have a natural way of encoding them in the computer: a graph with n nodes might be specified by an n × n adjacency matrix or a list of edges.
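
For concreteness, here is a small Python sketch (mine, not from the slides) of the two encodings for the same 4-node graph; all names are illustrative.

# Two standard encodings of the same 4-node undirected graph (nodes 0..3).
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # edge-list encoding: O(|E|) space
n = 4

# Adjacency-matrix encoding: O(n^2) space, O(1) edge lookups.
adj = [[0] * n for _ in range(n)]
for u, v in edges:
    adj[u][v] = adj[v][u] = 1              # undirected: symmetric matrix

print(adj[0][1])  # 1: edge (0, 1) is present
print(adj[0][2])  # 0: nodes 0 and 2 are not adjacent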

Instance Sizes. The size of an instance is the space used to represent it; we usually use n to denote this size. Generally, larger instances are more difficult than smaller ones. We can often break a problem down into: Search space: the set of feasible solutions (every independent set, or every spanning tree). Objective function: a way of measuring how good a solution is (size of an independent set, stability of a perfect matching).

Central Dogma of Computer Science: Difficulty is not necessarily proportional to the size of the search space.

Efficient Algorithms. Definition (Efficiency). An algorithm is efficient if its worst-case runtime is bounded by a polynomial function of the input size: there is a polynomial p(n) such that for every instance of size n, the algorithm solves the instance in fewer than p(n) steps. Generally, the problems we consider have huge search spaces. How many subsets of nodes are there in a graph with n nodes? 2^n. Any polynomial is much smaller than the search space.

Efficient Algorithms, II. This definition of efficiency is not perfect: 1. There are non-polynomial algorithms that are usually the best choice in practice (e.g. the simplex method for linear programming). 2. There are polynomial-time algorithms that are almost never used in practice (e.g. the ellipsoid algorithm for linear programming). 3. An algorithm that takes n^100 steps would be efficient by this definition, but horrible in practice. 4. An algorithm that takes n^(1 + 0.002 log n) steps is probably useful in practice, but it is not efficient by this definition.

Benefits of the definition. But the definition has a lot of benefits: 1. It's concrete and falsifiable; it avoids vague arguments about which algorithm is better. 2. Average performance is hard to define. 3. The exceptions are apparently fewer than the cases where polynomial time corresponds to useful algorithms in practice. 4. There is normally a huge difference between polynomial time and other natural runtimes. (If you can do 1 million steps per second and n = 1,000,000, then an n^2 algorithm would take about 12 days, but a 1.5^n algorithm would take far more than 10^25 years.)

Asymptotic Upper Bounds. A running time of n^2 + 4n + 2 is usually too detailed. Rather, we're interested in how the runtime grows as the problem size grows. Definition (O). A runtime T(n) is O(f(n)) if there exist constants n_0 ≥ 0 and c > 0 such that: T(n) ≤ c·f(n) for all n ≥ n_0. What does this mean? For all large enough instances, the running time is bounded by a constant multiple of f(n).
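
As an illustration (my own, not from the slides), the following Python sketch checks the definition numerically for T(n) = n^2 + 4n + 2 against f(n) = n^2: since 4n ≤ 4n^2 and 2 ≤ 2n^2 for n ≥ 1, the witnesses c = 7 and n_0 = 1 work.

def T(n):
    return n * n + 4 * n + 2

def f(n):
    return n * n

# Witnesses for T(n) = O(n^2): c = 7, n0 = 1.
# For n >= 1, 4n <= 4n^2 and 2 <= 2n^2, hence T(n) <= 7n^2.
c, n0 = 7, 1
assert all(T(n) <= c * f(n) for n in range(n0, 10_000))
print("T(n) <= 7*f(n) holds for all tested n >= 1")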

Asymptotic Lower Bounds. O(·) talks about the longest possible time an algorithm could take; Ω(·) talks about the shortest possible time. Definition (Ω). T(n) is Ω(f(n)) if there are constants ε > 0 and n_0 ≥ 0 such that: T(n) ≥ ε·f(n) for all n ≥ n_0.

Tight bounds. Definition (Θ). T(n) is Θ(f(n)) if T(n) is O(f(n)) and Ω(f(n)). If we know that T(n) is Θ(f(n)), then f(n) is the right asymptotic running time: the algorithm takes at most a constant multiple of f(n) steps on every instance, and some instances really do take a constant multiple of f(n) steps.

Asymptotic Limit Theorem (Theta). If lim_{n→∞} T(n)/g(n) equals some c > 0, then T(n) = Θ(g(n)).

[Figure: plot of T(n)/g(n) converging to c as n grows, with horizontal reference lines at c/2, c, and 2c.]

Proof: There is an n_0 such that c/2 ≤ T(n)/g(n) ≤ 2c for all n ≥ n_0. Therefore, T(n) ≤ 2c·g(n) for n ≥ n_0, so T(n) = O(g(n)). Also, T(n) ≥ (c/2)·g(n) for n ≥ n_0, so T(n) = Ω(g(n)).
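
A quick numeric illustration (my own example, not from the slides): for T(n) = 3n^2 + 10n and g(n) = n^2, the ratio T(n)/g(n) = 3 + 10/n tends to c = 3 > 0, so the theorem gives T(n) = Θ(n^2).

def T(n):
    return 3 * n * n + 10 * n

def g(n):
    return n * n

# T(n)/g(n) = 3 + 10/n approaches c = 3 as n grows.
for n in [10, 100, 1000, 10_000]:
    print(n, T(n) / g(n))   # 4.0, 3.1, 3.01, 3.001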

Linear Time. Linear time usually means you look at each element a constant number of times. Finding the maximum element in a list:

max = a[1]
for i = 2 to n:
    if a[i] > max then set max = a[i]
Endfor

This does a constant amount of work per element in array a. Another linear-time example: merging two sorted lists.
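
The same loop as runnable Python (a direct translation; note Python lists are zero-indexed, unlike the slide's pseudocode):

def find_max(a):
    """Return the maximum element of a non-empty list in one O(n) scan."""
    best = a[0]            # max = a[1] in the 1-indexed pseudocode
    for x in a[1:]:        # for i = 2 to n
        if x > best:       # if a[i] > max then set max = a[i]
            best = x
    return best

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9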

O(n log n). O(n log n) time is common because of sorting (often the slowest step in an algorithm). Where does the O(n log n) come from?

[Figure: recursion tree with n at the root, two subproblems of size n/2 below it, four of size n/4 below those, and so on.]

T(n) = 2T(n/2) + n
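
For reference, here is a minimal mergesort sketch (mine, not from the slides) whose running time satisfies exactly this recurrence: two half-size recursive calls plus a linear-time merge.

def merge_sort(a):
    """Sort a list; T(n) = 2T(n/2) + O(n), which solves to O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])   # 2T(n/2)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):                  # +n: linear merge
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]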

Quadratic Time O(n^2). A brute-force algorithm for: given a set of points, what is the smallest distance between any pair of them? There are C(n, 2) = n(n−1)/2 pairs to check.
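
A minimal Python sketch of that brute force (an illustration, not the slides' code): the nested loops visit each of the n(n−1)/2 pairs exactly once.

from math import dist, inf

def closest_pair_distance(points):
    """Brute force: check all n(n-1)/2 pairs, O(n^2) time."""
    best = inf
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):        # each unordered pair once
            best = min(best, dist(points[i], points[j]))
    return best

print(closest_pair_distance([(0, 0), (3, 4), (1, 1)]))  # 1.4142...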

O(n^3). Ex. 1. Given n sets S_1, S_2, ..., S_n that are subsets of {1, ..., n}, is there some pair of sets that is disjoint? Assume the sets are represented as bit vectors. Natural algorithm: for every pair of sets i and j, test whether they overlap by going through all the elements in i and j. Ex. 2. Standard rule for matrix multiplication: multiplying an r_1 × c_1 matrix A_1 by an r_2 × c_2 matrix A_2 (legal when c_1 = r_2) produces an r_1 × c_2 matrix and takes (r_1 · c_2) · c_1 multiplications; for two n × n matrices, that is n^3 multiplications.
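
Here is Ex. 1 as a Python sketch (illustrative; each bit vector is a plain integer): two nested loops over the O(n^2) pairs of sets, each tested with a linear scan over the n universe elements, giving O(n^3) overall.

def has_disjoint_pair(sets, n):
    """sets: bit vectors (ints) over universe {0, ..., n-1}.
    O(n^2) pairs, O(n) scan per pair => O(n^3) total."""
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            # Element b is in both sets iff bit b is set in both vectors.
            if all(not ((sets[i] >> b) & (sets[j] >> b) & 1) for b in range(n)):
                return True
    return False

# Universe {0,1,2}: S1 = {0,1}, S2 = {1,2}, S3 = {0}; S2 and S3 are disjoint.
print(has_disjoint_pair([0b011, 0b110, 0b001], 3))  # True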

O(n^k). Larger polynomials arise from exploring smaller search spaces exhaustively. Independent Set of Size k. Given a graph with n nodes, find an independent set of size k, or report that none exists.

For every subset S of k nodes:
    If S is an independent set then
        return S
Endfor
return Failure

How many subsets of k nodes are there?
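
As runnable Python (my translation of the pseudocode, with the graph given as an edge list): itertools.combinations enumerates the C(n, k) = O(n^k) candidate subsets, and each independence check costs O(k^2) edge lookups.

from itertools import combinations

def independent_set_of_size_k(n, edges, k):
    """Exhaustively try all C(n, k) subsets of k nodes."""
    edge_set = {frozenset(e) for e in edges}
    for S in combinations(range(n), k):            # every subset of k nodes
        if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
            return S                               # S is an independent set
    return None                                    # Failure

# Path graph 0-1-2-3: {0, 2} is an independent set of size 2.
print(independent_set_of_size_k(4, [(0, 1), (1, 2), (2, 3)], 2))  # (0, 2)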

Exponential Time. What if we didn't limit ourselves to independent sets of size k, and instead want to find the largest independent set? Brute-force algorithms search through all possibilities. How many subsets of nodes are there in an n-node graph? 2^n. What's the runtime of the brute-force search for a largest independent set? O(n^2 · 2^n).
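
A sketch of that brute force in Python (illustrative): iterate over all 2^n subsets encoded as bitmasks, keeping the largest one that passes the O(n^2) independence check.

from itertools import combinations

def max_independent_set(n, edges):
    """Check all 2^n subsets at O(n^2) each => O(n^2 * 2^n) total."""
    edge_set = {frozenset(e) for e in edges}
    best = ()
    for mask in range(1 << n):                     # all 2^n subsets
        S = [v for v in range(n) if (mask >> v) & 1]
        if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
            if len(S) > len(best):
                best = tuple(S)
    return best

# Path graph 0-1-2-3: a largest independent set has size 2.
print(max_independent_set(4, [(0, 1), (1, 2), (2, 3)]))  # (0, 2)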

Sublinear Time. Sublinear time means we don't even look at the entire input. Since it takes n time just to read the input, we have to work in a model where we count how many queries to the data we make. Sublinear usually means we don't have to look at every element. Example? Binary search: O(log n).
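
A standard binary-search sketch in Python (mine, not the slides'): each probe halves the remaining range, so a sorted array of n elements needs only O(log n) queries to the data.

def binary_search(a, target):
    """Return an index of target in sorted list a, or -1; O(log n) probes."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # probe the middle element
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3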

Properties of O, Ω, Θ

Example (Transitivity of O). Show that if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)). Solution: By the assumptions, there are constants c_f, n_f, c_g, n_g such that f(n) ≤ c_f·g(n) for all n ≥ n_f and g(n) ≤ c_g·h(n) for all n ≥ n_g. Therefore, for all n ≥ max{n_f, n_g}, f(n) ≤ c_f·g(n) ≤ c_f·c_g·h(n), so setting c = c_f·c_g and n_0 = max{n_f, n_g} satisfies the definition.

Example (Transitivity of Ω). Show that if f = Ω(g) and g = Ω(h), then f = Ω(h). Solution: A very similar argument to the previous example: there are c_f, n_f such that f(n) ≥ c_f·g(n) for all n ≥ n_f, and there are c_g, n_g such that g(n) ≥ c_g·h(n) for all n ≥ n_g. Substituting in for g(n), we get f(n) ≥ c_f·c_g·h(n) for all n ≥ max{n_f, n_g}.

Example (Multiplication). For positive functions f(n), f′(n), g(n), g′(n): if f(n) = O(f′(n)) and g(n) = O(g′(n)), then f(n)·g(n) = O(f′(n)·g′(n)). Solution: There are constants c_f, n_f, c_g, n_g as specified in the definition of O-notation: f(n) ≤ c_f·f′(n) for all n > n_f, and g(n) ≤ c_g·g′(n) for all n > n_g. So for n > max{n_f, n_g}, both statements above hold, and we have: f(n)·g(n) ≤ c_f·f′(n)·c_g·g′(n) = c_f·c_g·f′(n)·g′(n). Taking c = c_f·c_g and n_0 = max{n_f, n_g} yields the desired statement.

Example (Max). Show that max{f(n), g(n)} = Θ(f(n) + g(n)) for any f(n), g(n) that eventually become and stay positive. Solution: We have max{f(n), g(n)} = O(f(n) + g(n)) by taking c = 1 and n_0 equal to the point where both functions have become positive. We have max{f(n), g(n)} = Ω(f(n) + g(n)) by taking the same n_0 and c = 1/2, since the average of the positive values f(n) and g(n) is never more than their max: max{f(n), g(n)} ≥ (f(n) + g(n))/2.

Example (Polynomials). Prove that an^2 + bn + d = O(n^2) for all real a, b, d. Solution:
an^2 + bn + d ≤ |a|n^2 + |b|n + |d|        for all n ≥ 0
             ≤ |a|n^2 + |b|n^2 + |d|n^2    for all n ≥ 1
             = (|a| + |b| + |d|)·n^2.
Take c = |a| + |b| + |d| and n_0 = 1.

Example (Polynomials and Θ). Show that any degree-d polynomial p(n) = Σ_{i=0}^{d} α_i·n^i with α_d > 0 is Θ(n^d). Solution: Step 1: A similar argument to the previous example shows that p(n) is O(n^d). For all n ≥ 1:
p(n) = Σ_{i=0}^{d} α_i·n^i ≤ Σ_{i=0}^{d} |α_i|·n^i ≤ Σ_{i=0}^{d} |α_i|·n^d = n^d·Σ_{i=0}^{d} |α_i|.
So take c = Σ_{i=0}^{d} |α_i| and n_0 = 1. Step 2 is on the next slide.

Step 2: We now show that p(n) is Ω(n^d). We try to find a c that works in the definition. For n ≥ 1, we need:
c·n^d ≤ Σ_{i=0}^{d} α_i·n^i
c ≤ Σ_{i=0}^{d} α_i·n^i / n^d                  (divide through by n^d)
  = α_d + Σ_{i=0}^{d-1} α_i·n^{i-d}            (take out the n^d/n^d term)
  = α_d + Σ_{i=0}^{d-1} α_i / n^{d-i}          (since n^{i-d} = 1/n^{d-i}).
So: we need to show that the right-hand side above is > 0 for large enough n. If so, then there is a c that we can choose.

Step 2, continued: Let N = {i : α_i < 0}. Then we have:
α_d + Σ_{i=0}^{d-1} α_i / n^{d-i} ≥ α_d + Σ_{i∈N} α_i / n^{d-i}    (because we threw out the positive terms)
  = α_d − Σ_{i∈N} |α_i| / n^{d-i}
  ≥ α_d − (1/n)·Σ_{i∈N} |α_i|                                      (because 1/n is the largest of the 1/n^{d-i} terms)
Now: α_d − (1/n)·Σ_{i∈N} |α_i| > 0 when n > (Σ_{i∈N} |α_i|) / α_d. At that n and larger, the last line above is strictly greater than 0. So we choose n_0 = (Σ_{i∈N} |α_i|) / α_d + 1 and any c between 0 and α_d − (1/n_0)·Σ_{i∈N} |α_i|.

Example. Show that (n + a)^d = Θ(n^d) when d > 0. Solution: (n + a)^d = Σ_{i=0}^{d} C(d, i)·a^{d-i}·n^i by the binomial theorem. This equals Σ_{i=0}^{d} α_i·n^i with α_i = C(d, i)·a^{d-i}, a degree-d polynomial whose leading coefficient is α_d = 1 > 0, so it is Θ(n^d) by the previous example.