Algorithm Efficiency. Algorithmic Thinking Luay Nakhleh Department of Computer Science Rice University


All Correct Algorithms Are Not Created Equal. When presented with a set of correct algorithms for a certain problem, it is natural to choose the most efficient one(s) out of these algorithms. (In some cases, we may opt for an algorithm X that's a bit slower than algorithm Y, but that is much, much easier to implement than Y.) Efficiency has two dimensions: time (how fast an algorithm runs) and space (how much extra space the algorithm requires). There's often a trade-off between the two.

It's All a Function of the Input's Size. We often investigate an algorithm's complexity (or efficiency) in terms of some parameter n that indicates the size of the algorithm's input. For example, for an algorithm that sorts a list of numbers, the input's size is the number of elements in the list. For some algorithms, more than one parameter may be needed to indicate the size of their inputs. For the algorithm IsBipartite, the input size is given by the number of nodes if the graph is represented as an adjacency matrix, whereas the input size is given by the number of nodes and the number of edges if the graph is represented by an adjacency list.

It's All a Function of the Input's Size. Sometimes, more than one choice of a parameter indicating the input size may be possible. For an algorithm that multiplies two square matrices, one choice is n, the order of the matrix, and another is N, the total number of entries in the matrix. Notice that the relation between n and N is easy to establish, so switching between them is straightforward (yet it results in different qualitative statements about the efficiency of the algorithm). When the matrices being multiplied are not square, N would be the more appropriate choice.
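To see concretely how the two size parameters yield different qualitative statements, consider the standard three-nested-loop multiplication algorithm (used here purely as an illustration):

```latex
% For an n x n matrix, the two size parameters are related by
N = n^2 \quad\Longrightarrow\quad n = \sqrt{N}.
% The standard algorithm performs n^3 scalar multiplications, so its running time is
\Theta(n^3) \;=\; \Theta\!\bigl((\sqrt{N})^{3}\bigr) \;=\; \Theta\!\bigl(N^{3/2}\bigr),
% i.e., cubic in the order n, but only N^{1.5} in the number of entries N.
```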

It's All a Function of the Input's Size. Question: For an algorithm that tests whether a nonnegative integer p is prime, what is the input size?
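The question is posed for discussion, but one way to explore it is to count basic operations in the most naive test, trial division (a sketch added here for illustration; the function name is mine). The loop below executes about sqrt(p) divisibility tests, which looks modest as a function of the value p, but is roughly 2^(b/2) as a function of the number of bits b needed to write p down. The choice of input-size parameter changes the qualitative verdict dramatically.

```python
import math

def is_prime(p):
    """Trial division (a naive sketch, not a recommended primality test).
    The basic operation is the divisibility check p % d == 0, executed
    about sqrt(p) times in the worst case."""
    if p < 2:
        return False
    for d in range(2, math.isqrt(p) + 1):
        if p % d == 0:
            return False
    return True
```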

Units for Measuring Running Time. We'd like to use a measure that does not depend on extraneous factors such as the speed of a particular computer, the quality of the program implementing the algorithm and of the compiler used, or the difficulty of clocking the actual running time of the program. We usually focus on the basic operations that the algorithm executes, and count the number of times the basic operations are executed.
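The idea can be sketched by instrumenting an algorithm to count its basic operation directly (a sketch; the function name is mine). Here the basic operation of a sequential scan is the element comparison, and the count is identical on every machine, compiler, and clock:

```python
def count_comparisons(A, x):
    """Sequential scan for x in A, returning the number of element
    comparisons performed -- a machine-independent measure of work."""
    count = 0
    for a in A:
        count += 1          # one execution of the basic operation
        if a == x:
            break
    return count
```

For example, count_comparisons([5, 3, 7, 1], 7) performs 3 comparisons, while an unsuccessful search performs one comparison per element.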

Orders of Growth. A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones (2 steps vs. 15 steps is not a meaningful difference when it comes to the running times of algorithms!). When analyzing the complexity, or efficiency, of algorithms, we pay special attention to the order of growth of the number of steps of an algorithm on large input sizes.

Orders of Growth. The complexity analysis framework ignores multiplicative constants and concentrates on the order of growth of the number of steps, to within a constant multiple, for large-size inputs.

Worst-, Best-, and Average-Case Complexity. The worst-case complexity of an algorithm is its complexity for the worst-case input of size n, which is an input of size n for which the algorithm runs the longest among all possible inputs of that size. What type of bound does the worst-case analysis provide on the running time? The best-case complexity of an algorithm is its complexity for the best-case input of size n, which is an input of size n for which the algorithm runs the fastest among all possible inputs of that size. Neither of these two types of analyses provides the necessary information about an algorithm's behavior on a typical or random input. That is the information the average-case complexity seeks to provide.

Worst-, Best-, and Average-Case Complexity. Best-case analysis is usually uninformative about the efficiency of an algorithm. Average-case analysis is often very hard to conduct (and sometimes it is not even clear what the average case is). Therefore, most analyses focus on the worst case, which is what we will focus on as well.

Worst-case Complexity: Example 1

Worst-case Complexity: Example 2

Worst-case Complexity: Example 3

Algorithm 1: IsBipartite.
Input: Undirected graph g = (V, E).
Output: True if g is bipartite, and False otherwise.
1 foreach non-empty subset V1 ⊆ V do
2   V2 ← V \ V1;
3   bipartite ← True;
4   foreach edge {u, v} ∈ E do
5     if {u, v} ⊆ V1 or {u, v} ⊆ V2 then
6       bipartite ← False;
7       break;
8   if bipartite = True then
9     return True;
10 return False;
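The exhaustive algorithm above translates almost line by line into Python (a sketch for illustration; the function name is mine). Note that there are 2^|V| − 1 non-empty subsets to try, so the worst case is exponential in the number of nodes:

```python
from itertools import combinations

def is_bipartite_bruteforce(V, E):
    """Exhaustive bipartiteness check: try every non-empty subset V1 of V
    and accept if no edge has both endpoints on the same side of the
    partition (V1, V \\ V1). Worst case: exponential in |V|."""
    V = list(V)
    for k in range(1, len(V) + 1):
        for subset in combinations(V, k):
            V1 = set(subset)
            V2 = set(V) - V1
            # An edge {u, v} violates the partition if both endpoints
            # lie in V1 or both lie in V2.
            if all(not ({u, v} <= V1 or {u, v} <= V2) for u, v in E):
                return True
    return False
```

For example, the path on nodes 1-2-3 is bipartite (take V1 = {2}), while the triangle on nodes 1, 2, 3 is not.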

Asymptotic Notations. As we stated before, in mathematical analysis of efficiency, we ignore multiplicative constants (and lower-order terms). For example:
2n^2 + 3n - 5 and n^2 + n both grow like n^2;
10000n grows like n;
2n^2 grows like n^2.

Asymptotic Notations. Let T(n) and f(n) be two functions. We say that T(n) is O(f(n)) if there exist constants c > 0 and n0 ≥ 0 such that for all n ≥ n0, we have T(n) ≤ c·f(n). [Figure: T(n) stays below c·f(n) for all n ≥ n0.]

Asymptotic Notations. Let T(n) and f(n) be two functions. We say that T(n) is Ω(f(n)) if there exist constants c > 0 and n0 ≥ 0 such that for all n ≥ n0, we have T(n) ≥ c·f(n). [Figure: T(n) stays above c·f(n) for all n ≥ n0.]

Asymptotic Notations. Let T(n) and f(n) be two functions. We say that T(n) is Θ(f(n)) if there exist constants c1 > 0, c2 > 0, and n0 ≥ 0 such that for all n ≥ n0, we have c1·f(n) ≤ T(n) ≤ c2·f(n). [Figure: T(n) is sandwiched between c1·f(n) and c2·f(n) for all n ≥ n0.]
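The definitions can be spot-checked numerically once witness constants are chosen (a sketch; the particular witnesses below are mine, chosen by hand for this example). Here we check that T(n) = 2n^2 + 3n - 5 is Θ(n^2) with c1 = 1, c2 = 3, n0 = 2. A finite check is evidence, not a proof, but it is a useful sanity test:

```python
def T(n):
    return 2 * n**2 + 3 * n - 5

def f(n):
    return n**2

# Hand-chosen candidate witnesses for T(n) = Theta(f(n)).
c1, c2, n0 = 1, 3, 2

# Spot-check c1*f(n) <= T(n) <= c2*f(n) over a finite range of n >= n0.
# (A real proof argues the inequalities hold for ALL n >= n0.)
assert all(c1 * f(n) <= T(n) <= c2 * f(n) for n in range(n0, 10_000))
```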

Asymptotic Notations

Asymptotic Notations. Theorem: If T1(n) = O(f1(n)) and T2(n) = O(f2(n)), then T1(n) + T2(n) = O(max{f1(n), f2(n)}). Examples: n + n^2 = O(n^2); log n + n = O(n); 1 + n + n^2 + 10^6·n^3 + 2·n^4 = O(n^4); ... Proof?
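One way the proof prompted above can go (a sketch; the witness names c_1, c_2, n_1, n_2 are mine): combine the two pairs of witness constants.

```latex
% Suppose T_1(n) \le c_1 f_1(n) for all n \ge n_1, and
%         T_2(n) \le c_2 f_2(n) for all n \ge n_2.
% Then for all n \ge n_0 = \max\{n_1, n_2\}:
T_1(n) + T_2(n) \;\le\; c_1 f_1(n) + c_2 f_2(n)
                \;\le\; (c_1 + c_2)\,\max\{f_1(n), f_2(n)\},
% so the constants c = c_1 + c_2 and n_0 witness
% T_1(n) + T_2(n) = O(\max\{f_1(n), f_2(n)\}).
```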

Asymptotic Analysis of Algorithms

Algorithm 3: LinearSearch.
Input: An array A[0...n-1] of integers, and an integer x.
Output: The index of the first element of A that matches x, or -1 if there are no matching elements.
1 i ← 0;
2 while i < n and A[i] ≠ x do
3   i ← i + 1;
4 if i ≥ n then
5   i ← -1;
6 return i

Algorithm 4: MatrixMultiplication.
Input: Two matrices A[0...n-1, 0...k-1] and B[0...k-1, 0...m-1].
Output: Matrix C = AB.
1 for i ← 0 to n-1 do
2   for j ← 0 to m-1 do
3     C[i, j] ← 0;
4     for l ← 0 to k-1 do
5       C[i, j] ← C[i, j] + A[i, l]·B[l, j];
6 return C

Algorithm 1: IsBipartite.
Input: Undirected graph g = (V, E).
Output: True if g is bipartite, and False otherwise.
1 foreach non-empty subset V1 ⊆ V do
2   V2 ← V \ V1;
3   bipartite ← True;
4   foreach edge {u, v} ∈ E do
5     if {u, v} ⊆ V1 or {u, v} ⊆ V2 then
6       bipartite ← False;
7       break;
8   if bipartite = True then
9     return True;
10 return False;
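The first two algorithms translate directly into Python (sketches; the function names are mine). LinearSearch performs at most n comparisons, so its worst-case running time is Θ(n); the multiply-add on line 5 of MatrixMultiplication runs exactly n·m·k times, giving Θ(n^3) for n × n matrices:

```python
def linear_search(A, x):
    """LinearSearch: index of the first element of A matching x, or -1.
    Worst case (x absent): n comparisons, i.e., Theta(n)."""
    i = 0
    while i < len(A) and A[i] != x:
        i += 1
    return i if i < len(A) else -1

def matrix_multiply(A, B):
    """MatrixMultiplication: C = A B for an (n x k) by (k x m) product.
    The basic operation (the multiply-add) executes exactly n*m*k times."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for l in range(k):
                C[i][j] += A[i][l] * B[l][j]
    return C
```

For example, linear_search([4, 2, 7], 7) returns 2, and multiplying [[1, 2], [3, 4]] by [[5, 6], [7, 8]] yields [[19, 22], [43, 50]].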

Asymptotic Analysis of Algorithms. Analyzing the computational complexity of many algorithms is not an easy task, especially for recursive algorithms. We'll see many examples of complexity analysis throughout the semester, and develop an intuition for doing it through the techniques used in these examples.

Empirical Analysis of Algorithms: A General Plan
1. Decide on the efficiency metric M to be measured and the measurement unit (an operation's count vs. a time unit).
2. Decide on characteristics of the input sample (its range, size, and so on).
3. Prepare a program implementing the algorithm for the experimentation.
4. Generate a sample of inputs.
5. Run the algorithm on the sample inputs and record the data observed.
6. Analyze the data obtained.
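Steps 4-6 of the plan can be sketched as follows (a minimal sketch; the function and parameter names are mine). Each measurement is repeated several times and the best observed time is recorded, anticipating the timer-noise issues discussed next:

```python
import random
import time

def time_on_samples(func, samples, repeats=5):
    """Run func on each sample input, repeating each measurement to
    reduce timer noise, and record the best observed wall-clock time."""
    records = []
    for x in samples:
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            func(x)
            best = min(best, time.perf_counter() - start)
        records.append((x, best))
    return records

# Example experiment: how does sorting time grow with input size n?
data = time_on_samples(lambda n: sorted(random.sample(range(10 * n), n)),
                       [1000, 2000, 4000])
```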

Empirical Analysis of Algorithms: Issues
A system's clock is typically not very accurate, and you may get somewhat different results on repeated runs of the same program on the same inputs. An obvious remedy is to make a large number of such measurements and take their average.
If the computer is very fast, the program is very short, the input is very small, the timer's precision is limited, etc., the running time may be reported as zero. One possible remedy is to run the program a large number of times and measure the total time of all those runs, rather than each single one.
The reported time may include time spent by the CPU on other programs.

Empirical Analysis of Algorithms: Issues
You need to decide on a sample of inputs for the experiment. Often, the goal is to use a sample representing a typical input, so you need to understand what a typical input is.

Empirical Analysis of Algorithms: Issues
The principal strength of mathematical analysis is its independence of specific inputs; its principal weakness is its poor reflection of the exact running time. The principal strength of empirical analysis lies in its applicability to any algorithm, but its results can depend on the particular sample of instances and on the computer used in the experiment (not to mention that it comes "after the fact", once an implementation exists). Remember: empirical analysis tells you about a specific implementation of the abstract algorithm, so you have to be careful about what conclusions you draw about an algorithm's efficiency from the empirical analysis of a specific implementation. (For measuring the time it takes to check membership in a set, did you implement your set as a list, a hash table, ...?)