Complexity Theory

Complexity Analysis

Complexity Theory
Input: Decidable Language -- Program Halts on all Input
Encoding of Input: Natural Numbers Encoded in Binary or Decimal, Not Unary
Output: TRUE or FALSE
Time and Space Problems
How much Time does the Algorithm Take to Execute?
How much Memory is Required to Execute the Algorithm?
(As Functions of the Size of the Input)

Examples
Hamiltonian Circuit -- Decision Problem
Input = Matrix Describing a Graph
Output = YES if Graph has a Hamiltonian Circuit
       = NO if Graph does not have a Hamiltonian Circuit
Traveling Salesman -- Optimization Problem
Input = Matrix Describing the Cost of Each Edge, Max Cost = Maximal Allowable Cost of Circuit
Output = YES if Graph has a Circuit with Cost ≤ Max Cost
       = NO if Graph does not have a Circuit with Cost ≤ Max Cost

Complexity and Turing Machines
One Tape Deterministic Turing Machines
Time Complexity = Number of Commands Executed
Space Complexity = Number of Tape Squares Visited
Multiple Tape Turing Machines and Actual Digital Computers
Complexity Differs from Single Tape Turing Machines by Only a Polynomial Factor
Non-Deterministic Turing Machines
Require Different Complexity Analysis from Deterministic Turing Machines

Complexity of Turing Machines
Setup
M = Turing Machine that Halts on All Input
n = Size of Input
Time and Space Complexity (Worst Case Analysis)
t_M : N → N -- t_M(n) = Maximal Time M Takes to Execute on a Problem of Size n
s_M : N → N -- s_M(n) = Maximal Memory Required by M to Execute a Problem of Size n
For Non-Deterministic Machines, Consider All Possible Paths

Asymptotic Complexity Classes
Big O -- Upper Bound
Big Ω -- Lower Bound
Small o -- Much Smaller Bound
Small ω -- Much Bigger Bound
Big Theta Θ -- Tight Bound

Big O and Big Ω
Big O -- Upper Bound
f = O(g) ⟺ f ∈ O(g) ⟺ f is bounded above by g
O(g) = { h : h(n) ≤ C_h·g(n) for all n ≥ k_h, for some constant C_h }
Big Ω -- Lower Bound
f = Ω(g) ⟺ f ∈ Ω(g) ⟺ f is bounded below by g
Ω(g) = { h : h(n) ≥ C_h·g(n) for all n ≥ k_h, for some constant C_h }
Observations
k and C are not unique
g is not unique

Big O and Big Ω and Limits
lim_{n→∞} f(n)/g(n) < ∞ ⟹ f ∈ O(g)
lim_{n→∞} f(n)/g(n) > 0 ⟹ f ∈ Ω(g)

Small o and Small ω
Definitions
f is o(g) ⟺ lim_{n→∞} f(n)/g(n) = 0 (much smaller)
f is ω(g) ⟺ lim_{n→∞} f(n)/g(n) = ∞ (much bigger)
Observations
f is o(g) ⟹ f is O(g) (much smaller ⟹ smaller)
f is ω(g) ⟹ f is Ω(g) (much bigger ⟹ bigger)

Big Θ -- Tight Bound
Definition
f = Θ(g) ⟺ f ∈ Θ(g) ⟺ f is bounded above and below by a constant times g
Θ(g) = { h : D_h·g(n) ≤ h(n) ≤ C_h·g(n) for all n ≥ k_h, for some constants C_h, D_h }
Meaning
f ∈ Θ(g) ⟺ f ∈ Ω(g) and f ∈ O(g)
Observations
k, C, D are not unique
g is not unique
Limits
0 < lim_{n→∞} f(n)/g(n) < ∞ ⟹ f ∈ Θ(g)

Most Important
Big O is the most important complexity class, since we are most interested in an upper bound on the time of an algorithm.

Big-O Definition
f is O(g) means there are numbers k and C such that f(n) ≤ C·g(n) for all n > k.
Meaning
For all sufficiently large integers, f(n) is less than a constant multiple of g(n). g is only an Upper Bound on the size of f.
Observations
k and C are not unique.
g is not unique.
g is usually chosen to be a well-known function: log(n), n^p, 2^n.
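The definition can be checked numerically for concrete witnesses C and k. A minimal sketch (the function name `bounded_above` and the sample witnesses are illustrative; a finite scan gathers evidence for f = O(g), it is not a proof):

```python
def bounded_above(f, g, C, k, upto=1000):
    # Check f(n) <= C*g(n) for every n with k < n <= upto.
    # Finite evidence only -- the definition quantifies over all n > k.
    return all(f(n) <= C * g(n) for n in range(k + 1, upto + 1))

# 3n^2 + 5n + 7 = O(n^2): the witnesses C = 4, k = 6 work,
# since 5n + 7 <= n^2 for all n >= 7.
print(bounded_above(lambda n: 3*n*n + 5*n + 7, lambda n: n*n, C=4, k=6))
```

As the observations note, the witnesses are not unique: any larger C or k also works.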

Examples
1. c_p·n^p + … + c_1·n + c_0 = O(n^p)
2. (c_p·n^p + … + c_1·n + c_0) / (d_q·n^q + … + d_1·n + d_0) = O(n^{p−q})
3. Σ_{k=1}^{n} k^p = O(n^{p+1})
4. Σ_{p=1}^{n} 2^p = O(2^n)
5. n! = O(n^n)

Most Common Orders
1, log(n), n, n·log(n), n², n^p, 2^n, n!, n^n
Constant, Logarithmic, Linear, Polynomial, Exponential

Properties of Big O
0. f = O(f)
1. f = O(g) and g = O(h) ⟹ f = O(h)
2. f = O(g) ⟹ f + c = O(g)
3. f = O(g) ⟹ c·f = O(g)
4. f_1 = O(g_1) and f_2 = O(g_2) ⟹ f_1 + f_2 = O(max(g_1, g_2))
5. f_1 = O(g_1) and f_2 = O(g_2) ⟹ f_1·f_2 = O(g_1·g_2)

Properties of Big O (continued)
FALSE Property 2. f = O(g) ⟹ f + c = O(g)
Example: 1/n² = O(1/n), but 1/n² + 1 ∉ O(1/n)
True Property 2*. f = O(g) and 1 = O(f) ⟹ f + c = O(g)

Properties of Big O for Special Functions
6. p ≤ q ⟹ n^p = O(n^q)
7. c_p·n^p + … + c_1·n + c_0 = O(n^p)
8. a < b ⟹ a^n = O(b^n)
9. a, b > 1 ⟹ O(log_a(n)) = O(log_b(n)) -- log_a(n) = log_a(b)·log_b(n)
10. O(n^p) ⊂ O(n^p·log_b(n)) ⊂ O(n^{p+1})
11. 2^n = O(n!)
12. n! = O(n^n)

Examples
Find simple O(g) estimates for the following functions:
1. (n² + 8)(n + 1)
2. (n·log(n) + n²)(n³ + 3)
3. (n! + 2^n)(n³ + log(n² + 1))

True or False
Prove or give a counterexample: For all positive functions f and g, either f = O(g) or g = O(f).

True or False
Prove or give a counterexample: For all positive functions f and g, either f = O(g) or g = O(f).
Counterexample
f(n) = 1 + n·cos²(nπ/2)
g(n) = 1 + n·sin²(nπ/2)

Examples of o(f)
lim_{n→∞} ln(n)/n = 0 ⟹ ln(n) is o(n)
lim_{n→∞} (ln(n))^p/n = 0 ⟹ (ln(n))^p is o(n)
lim_{n→∞} n^p·ln(n)/n^{p+1} = 0 ⟹ n^p·ln(n) is o(n^{p+1})
lim_{n→∞} n^p/2^n = 0 ⟹ n^p is o(2^n)
lim_{n→∞} P(n)/2^n = 0 ⟹ P(n) is o(2^n), P(n) a Polynomial

Examples of Θ(f)
1. c_p·n^p + … + c_1·n + c_0 = Θ(n^p)
2. (c_p·n^p + … + c_1·n + c_0) / (d_q·n^q + … + d_1·n + d_0) = Θ(n^{p−q})
3. Σ_{k=1}^{n} k^p = Θ(n^{p+1}) (Hint: Integrate)
4. Σ_{p=1}^{n} 2^p = Θ(2^n)

Recursion vs. Dynamic Programming
Polynomial Interpolation -- Neville's Algorithm (Time and Space): O(n²) vs. O(2^n)
Fibonacci Numbers -- Memoization: O(n) vs. O(2^n)

Neville's Algorithm (Polynomial Interpolation) -- Recursive Implementation
[Recursion tree: I_{1,…,n}(x) splits into I_{1,…,n−1}(x) and I_{2,…,n}(x), each of which splits again, down to the leaves y_1, …, y_n]
n − 1 Levels, 2^n − 2 = O(2^n) multiplications

Neville's Algorithm (Polynomial Interpolation) -- Iterative Implementation
[Table, shown for n = 4: y_1, y_2, y_3, y_4 → I_{1,2}, I_{2,3}, I_{3,4} → I_{1,2,3}, I_{2,3,4} → I_{1,2,3,4}]
n − 1 Levels, Σ_{k=1}^{n−1} 2k = (n−1)n = O(n²) multiplications
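The iterative, table-filling version can be sketched in Python, overwriting one row of the table in place (O(n²) time, O(n) extra space; the function name `neville` is illustrative):

```python
def neville(xs, ys, x):
    """Evaluate at x the interpolating polynomial through (xs[i], ys[i]).

    After processing `level`, p[i] holds I_{i,...,i+level}(x).  Each entry
    costs a constant number of multiplications, giving O(n^2) total.
    """
    n = len(xs)
    p = list(ys)  # level 0: p[i] = I_i(x) = y_i
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i]
                    + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

# Interpolating y = x^2 through 4 points reproduces x^2 exactly:
print(neville([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 2.5))  # 6.25
```

The naive recursive version recomputes the shared subproblems (e.g. I_{2,…,n−1} appears under both children), which is exactly what the table avoids.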

Fibonacci Recurrence
[Recursion tree: f_n splits into f_{n−1} and f_{n−2}, each of which splits again, down to the leaves f_1 and f_2]
f_n = f_{n−1} + f_{n−2} ⟹ T(f_n) = T(f_{n−1}) + T(f_{n−2})

Fibonacci Numbers -- Memoization
[Linear chain: f_1, f_2 → f_3 → f_4 → … → f_{2n−1} → f_{2n}, each value computed once]
T(f_{2n}) = 2n − 2 = O(n) Additions
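A minimal memoized version in Python, using the standard library cache decorator (so each f_k is computed once, collapsing the O(2^n) recursion tree to O(n) additions):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each value f_1, ..., f_n is computed once and cached; subsequent
    # calls are table lookups, so the whole computation is O(n) additions.
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Without the `@lru_cache` line this is exactly the exponential recursion from the previous slide.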

Polynomial Time Algorithms
Polynomial Evaluation -- Horner's Method (Time): O(n) vs. O(n²)
Polynomial Multiplication -- Fast Fourier Transform (FFT): O(n·log(n)) vs. O(n²)
Matrix Multiplication -- Strassen's Algorithm: O(n^2.8) vs. O(n³)

Horner's Method
Example
7x³ − 3x² + 11x − 5 -- 6 multiplies
((7x − 3)x + 11)x − 5 -- 3 multiplies
General Case
a_n·x^n + a_{n−1}·x^{n−1} + … + a_1·x + a_0 -- Σ_{k=0}^{n} k = n(n+1)/2 = O(n²) multiplies
((a_n·x + a_{n−1})x + … + a_1)x + a_0 -- n = O(n) multiplies
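The nested form above translates directly into a single loop; a minimal sketch in Python (the function name `horner` is illustrative):

```python
def horner(coeffs, x):
    # coeffs = [a_n, a_{n-1}, ..., a_1, a_0], highest degree first.
    # One multiply and one add per coefficient: O(n) multiplications
    # instead of the O(n^2) of term-by-term evaluation.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 7x^3 - 3x^2 + 11x - 5 at x = 2:
print(horner([7, -3, 11, -5], 2))  # 61
```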

Searching Algorithms
Linear Search
Worst Case Analysis -- O(n)
Average Case Analysis -- O(n)
Binary Search -- O(log(n))
Root Finding (Mathematica Code)
Ray Tracing
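Binary search achieves the O(log n) bound by halving the search interval at each comparison; a minimal sketch (assuming the input list is already sorted):

```python
def binary_search(a, target):
    # a must be sorted in ascending order.  Each iteration halves the
    # interval [lo, hi], so at most O(log n) comparisons are made.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```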

Example: Integer Division
Input: m, n = integers
Output: q, r = integers (quotient and remainder) -- n = mq + r with 0 ≤ r < m
Algorithm
q = 0
r = n
While r ≥ m:   (Loop Invariant: n = mq + r)
    q = q + 1
    r = r − m
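The repeated-subtraction algorithm is a few lines of Python (assuming m ≥ 1 and n ≥ 0; the function name `divide` is illustrative):

```python
def divide(m, n):
    # Quotient and remainder of n divided by m, by repeated subtraction.
    # Loop invariant: n == m * q + r holds before and after every iteration.
    q, r = 0, n
    while r >= m:
        q += 1
        r -= m
    return q, r  # on exit: n == m*q + r with 0 <= r < m

print(divide(5, 17))  # (3, 2), since 17 == 5*3 + 2
```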

Proofs
Proof of Loop Invariance
By induction on the number of iterations of the loop.
Base Case: q = 0 and r = n ⟹ n = mq + r.
Induction: Suppose that n = m·q_k + r_k after k iterations of the loop. Must show that n = m·q_{k+1} + r_{k+1} after k+1 iterations of the loop. But after k+1 iterations of the loop:
m·q_{k+1} + r_{k+1} = m·(q_k + 1) + (r_k − m) = m·q_k + r_k = n (inductive hypothesis).
Proof of Program Correctness
1. The loop will terminate with r < m.
2. When the loop terminates, n = mq + r. (Loop Invariance)

Complexity of Integer Division
Worst Case: m = 2, n odd -- O(n/2)

Example: GCD -- Greatest Common Divisor
Input: m, n = integers
Output: GCD(m, n)
Euclidean Algorithm
x = m
y = n
While y ≠ 0:   (Loop Invariant: GCD(x, y) = GCD(m, n))
    r = remainder of x divided by y   (use an efficient division algorithm)
    x = y
    y = r
Output: x is GCD(m, n)
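The Euclidean algorithm as stated, using the built-in remainder operator as the "efficient division algorithm" (assuming m, n ≥ 0, not both zero):

```python
def gcd(m, n):
    x, y = m, n
    while y != 0:
        # Invariant: gcd(x, y) == gcd(m, n).
        x, y = y, x % y  # x % y = remainder of x divided by y
    return x

print(gcd(48, 18))  # 6
```

The standard library's `math.gcd` implements the same algorithm.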

Proof of Loop Invariance
By induction on the number of iterations of the loop.
Base Case: x = m and y = n ⟹ GCD(x, y) = GCD(m, n).
Induction: Suppose that GCD(x, y) = GCD(m, n) after k iterations of the loop. Must show that GCD(x, y) = GCD(m, n) after k+1 iterations.
After the first line of the (k+1)st iteration of the loop: x = yq + r ⟹ r = x − yq ⟹ GCD(y, r) = GCD(x, y), and by the inductive hypothesis GCD(x, y) = GCD(m, n), so GCD(y, r) = GCD(m, n).
But after the (k+1)st iteration of the loop: x = y and y = r, so GCD(x, y) = GCD(y, r) = GCD(m, n).

Complexity of the GCD Algorithm
Standard GCD Algorithm
1. Factor m and n into Prime Factors
2. Find Common Prime Factors
No Efficient Algorithm for Step 1
Euclid's GCD Algorithm
m ≥ n/2: (n, m) → (m, r) with r < n/2
m < n/2: (n, m) → (m, r) with m < n/2
-- Second Parameter Decreases by at Least 1/2 in at Most 2 Steps -- O(log₂(n))

Searching Algorithms
Depth First Search
Less Space -- Stores 1 Path at a Time
More Time -- May Explore a Bad Path
Breadth First Search
More Space -- Stores all Paths Simultaneously
Less Time -- May Find a Good Path and Halt
Iterative Deepening
All Paths to Depth 1, All Paths to Depth 2, …
Space Complexity = Space Complexity of Depth First Search
Time Complexity ≈ Time Complexity of Breadth First Search
Compromise Between Depth First and Breadth First Search
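Iterative deepening can be sketched as a depth-limited DFS restarted with growing depth limits. A minimal version for an acyclic graph given as an adjacency dict (names `iddfs`, `dls`, and the sample graph are illustrative; a real implementation would also guard against cycles):

```python
def iddfs(graph, start, goal, max_depth):
    """Iterative deepening: run a depth-limited DFS at depths 0, 1, 2, ...

    Space usage matches depth-first search (one path on the stack at a
    time); total time is comparable to breadth-first search, since the
    deepest level dominates the repeated shallow work.
    """
    def dls(node, depth):
        # Depth-limited search: return a path to goal, or None.
        if node == goal:
            return [node]
        if depth == 0:
            return None
        for nxt in graph.get(node, []):
            path = dls(nxt, depth - 1)
            if path is not None:
                return [node] + path
        return None

    for depth in range(max_depth + 1):
        path = dls(start, depth)
        if path is not None:
            return path  # found at the shallowest possible depth
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'E': ['F']}
print(iddfs(graph, 'A', 'F', max_depth=5))  # ['A', 'C', 'E', 'F']
```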