O Notation (Big Oh)

We want to give an upper bound on the amount of time it takes to solve a problem.

Definition: v(n) = O(f(n)) if there exist constants c and n0 such that v(n) <= c f(n) whenever n > n0.

This is termed complexity, but it has nothing to do with the difficulty of coding or understanding an algorithm; it refers only to the time to execute. It is an important tool for analyzing and describing the behavior of algorithms.

Is an n^2 algorithm always better than an n^3 algorithm? No, it depends on the specific constants, but if n is large enough, an O(n^3) algorithm is always slower.

Complexity classes: O(1), O(log n), O(n), O(n log n), O(n^2), ..., O(2^n). For small n, the constant c may dominate.

Intractable: a problem is called intractable if all known algorithms for it are of exponential complexity.

Measuring Complexity

1. Additive Property: If two statements follow one another, the complexity of the two of them is the larger complexity: O(m) + O(n) = O(max(m, n)).

2. If/Else: For the fragment

       if cond then S1 else S2

   the complexity is the running time of cond plus the larger of the complexities of S1 and S2.

3. Multiplicative Property (For Loops): The complexity of a for loop is at most the complexity of the statements inside the for loop times the number of iterations. However, if each iteration differs in length, it is more accurate to sum the individual iteration costs rather than just multiply.

Nested For Loops: Analyze nested for loops inside out. The total complexity of a statement inside a group of nested for loops is the complexity of the statement multiplied by the product of the sizes of the for loops.

Example 1

    for i = 1, m
        for j = 1, n
            x++;
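As a sanity check, here is a minimal runnable C version of Example 1 (the concrete values of m and n are arbitrary choices for illustration); it counts how many times the innermost statement executes:

    #include <stdio.h>

    int main(void) {
        int m = 4, n = 5;                     /* arbitrary sizes */
        int x = 0;
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                x++;                          /* runs once per (i, j) pair */
        printf("x = %d, m*n = %d\n", x, m * n);   /* both print 20 */
        return 0;
    }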

The body executes once for each of the mn pairs (i, j), so Example 1 is O(mn). Consider a pictorial view of the work: let each row represent the work done in an iteration. The area of the figure will then represent the total work.

Example 2

    for (beg = 1; beg < m; beg++)
        for (j = beg; j < m; j++)
            x++;

In this example, our complexity picture is triangular: the inner loop runs m-1 times on the first outer iteration, m-2 times on the second, and so on, for a total of (m-1) + (m-2) + ... + 1 = m(m-1)/2 iterations, which is O(m^2).

Recursive Algorithms

Finding the complexity of recursive algorithms requires additional techniques. There are three general techniques: the pictorial approach, the formulaic approach, and the mathematical approach. You should be able to use any two of them.

Example 3

    for i = 1, N

If we let T(n) represent the time to solve this problem, the time is represented recursively as T(n) = 2T(n/2) + n. This is called a recurrence relation.
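Example 3 shows only its loop, so the following C sketch is a guess at the full shape, assuming two half-size recursive calls followed by a linear pass, which is exactly the structure the recurrence T(n) = 2T(n/2) + n describes (function and variable names are mine):

    #include <stdio.h>

    long work = 0;                            /* counts linear-pass steps */

    void solve(int lo, int hi) {              /* the problem is the range [lo, hi) */
        int n = hi - lo;
        if (n <= 1) return;                   /* base case */
        solve(lo, lo + n / 2);                /* first half:  T(n/2) */
        solve(lo + n / 2, hi);                /* second half: T(n/2) */
        for (int i = lo; i < hi; i++)         /* linear work: the "+ n" term */
            work++;
    }

    int main(void) {
        solve(0, 32);
        printf("work for n = 32: %ld\n", work);   /* 160 = 32 * log2(32) */
        return 0;
    }

Running it prints 160 = 32 * 5, i.e., n log2 n, which anticipates the answer derived below.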

In our pictorial view, we let each row represent a level of the recursion (the first call is the first row, the two calls at the second level comprise the second row, the four third-level calls comprise the third row, and so on). Each row represents the work of the calls at that level by themselves, ignoring the costs incurred by the recursive calls they make. In other words, to determine the size of the first row, measure how much work is done in that call, not counting the recursive calls it makes.

Example 4

    for i = 1, N

If we let T(n) represent the time to solve this problem, the time is represented recursively as T(n) = T(n/2) + n.
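Example 4's shape, one half-size recursive call after a linear pass, can likewise be sketched in C (again a guess at the missing code; the name halve is mine):

    #include <stdio.h>

    long steps = 0;                           /* counts loop iterations */

    void halve(int n) {
        if (n <= 1) return;                   /* base case */
        for (int i = 0; i < n; i++)           /* linear work: the "+ n" term */
            steps++;
        halve(n / 2);                         /* one half-size call: T(n/2) */
    }

    int main(void) {
        halve(32);
        printf("steps for n = 32: %ld\n", steps);   /* 32+16+8+4+2 = 62 */
        return 0;
    }

The geometric series n + n/2 + n/4 + ... collapses to less than 2n, which is why this recurrence will turn out to be O(n).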

Example 5

If we let T(n) represent the time to solve this problem, the time is represented recursively as T(n) = T(n/2) + 1.

Example 6

If we let T(n) represent the time to solve this problem, the time is represented recursively as T(n) = 2T(n/2) + 1.

A Formula Approach

Mathematicians have developed a formula approach to determining complexity.

Theorem: Suppose T(n) = aT(n/b) + O(n^k). Then:

    if a > b^k, the complexity is O(n^(log_b a));
    if a = b^k, the complexity is O(n^k log n);
    if a < b^k, the complexity is O(n^k).

In this theorem, a represents the number of recursive calls you make, b represents the number of pieces you divide the problem into, and k is the power on n describing the work you do in breaking the problem into pieces or putting them back together again. In other words, n^k represents the work you do independent of the recursive calls you make. Note this corresponds to one row in our pictorial representation.

Let's use the theorem to revisit the same problems we solved pictorially.

Example 3: a = 2, b = 2, k = 1 (as the work is n). Since a = b^k, we are in the "equals" case, so the complexity is O(n^k log n) = O(n log n), which is exactly what our pictures told us.

Example 4: a = 1, b = 2, k = 1. Since a < b^k, we are in the "less than" case, so the complexity is O(n^k) = O(n), which is exactly what our pictures told us.
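The case analysis is mechanical enough to put in code. A small C sketch (the function name classify is my own) that compares a with b^k using exact integer arithmetic and prints the complexity class:

    #include <math.h>
    #include <stdio.h>

    /* Classify T(n) = a*T(n/b) + O(n^k) into the theorem's three cases. */
    void classify(int a, int b, int k) {
        int bk = 1;
        for (int i = 0; i < k; i++) bk *= b;  /* bk = b^k, computed exactly */
        if (a > bk)
            printf("a=%d b=%d k=%d: O(n^%g)\n", a, b, k, log(a) / log(b));
        else if (a == bk)
            printf("a=%d b=%d k=%d: O(n^%d log n)\n", a, b, k, k);
        else
            printf("a=%d b=%d k=%d: O(n^%d)\n", a, b, k, k);
    }

    int main(void) {
        classify(2, 2, 1);   /* Example 3: prints O(n^1 log n) */
        classify(1, 2, 1);   /* Example 4: prints O(n^1)       */
        return 0;            /* try (1,2,0) and (2,2,0) for Examples 5 and 6 */
    }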

Example 5: a = 1, b = 2, k = 0 (as the work is 1 = n^0). Since a = b^k (1 = 2^0), we are in the "equals" case, so the complexity is O(n^k log n) = O(log n), which is exactly what our pictures told us.

Example 6: a = 2, b = 2, k = 0. Since a > b^k, we are in the "greater than" case, so the complexity is O(n^(log_b a)) = O(n^(log_2 2)) = O(n^1) = O(n), which is exactly what our pictures told us.

Recurrence Relations

In analyzing algorithms, we often encounter progressions. Most calculus books list the formulas below, but you should be able to derive them.

Arithmetic Progression

Example: S = 1 + 2 + ... + n. Writing the same sum backwards: S = n + (n-1) + ... + 1. If we add the two S's together term by term, each of the n pairs sums to n + 1, so 2S = n(n + 1) and

    S = n(n + 1) / 2

Geometric Progression

Example: S = 1 + 2 + 4 + ... + 2^k. Multiplying both sides by the base: 2S = 2 + 4 + ... + 2^k + 2^(k+1). Subtracting S from 2S, we get

    S = 2^(k+1) - 1
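Both closed forms are easy to confirm with a short C check (n = 10 and base 2 are arbitrary test choices):

    #include <stdio.h>

    int main(void) {
        int n = 10;
        long arith = 0;
        for (int i = 1; i <= n; i++)
            arith += i;                       /* 1 + 2 + ... + n */
        printf("arithmetic: %ld vs n(n+1)/2 = %d\n", arith, n * (n + 1) / 2);

        long geom = 0, p = 1;                 /* p tracks 2^i */
        for (int i = 0; i <= n; i++) {
            geom += p;                        /* 1 + 2 + 4 + ... + 2^n */
            p *= 2;
        }
        printf("geometric: %ld vs 2^(n+1) - 1 = %ld\n", geom, p - 1);
        return 0;
    }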

Mathematically Solving Recurrence Relations

While the math types may like this approach, it usually confuses the non-math types more than it helps.

Example 7

Suppose our recurrence relation is T(n) = 2T(n/2) + 2 and T(2) = 1. We use the telescoping technique (the plug-and-chug technique). (You may have learned something you like better in your discrete math classes.) We set up a series of equations. Since we would never stop for infinite n, let n = 32 to see the pattern, and generalize for any n:

    T(32) = 2T(16) + 2
    T(16) = 2T(8) + 2
    T(8)  = 2T(4) + 2
    T(4)  = 2T(2) + 2
    T(2)  = 1

Now by substituting each equation into the previous equation we have

    T(32) = 16 T(2) + 2^4 + 2^3 + 2^2 + 2^1 = (32/2) T(2) + sum_{i=1}^{4} 2^i

Notice the upper limit on the summation is related to the size of n. Since 2^4 = n/2 in this case, we conjecture the upper bound (call it b) will always be determined by 2^b = n/2, i.e., b = log2(n) - 1. We would actually have to consider a few more cases to convince ourselves this is the actual relationship. Notice, however, that we expect to perform the recursion a logarithmic number of times, as we halved the problem size each time. In general, then,

    T(n) = (n/2) T(2) + sum_{i=1}^{log2(n) - 1} 2^i

Using our geometric sum technique,

    sum_{i=1}^{log2(n) - 1} 2^i = 2^(log2(n)) - 2 = n - 2

Thus,

    T(n) = n/2 + n - 2 = 3n/2 - 2

It would be nice to be able to check our work. For the case of n = 32, we got T(32) = 16 + 16 + 8 + 4 + 2 = 46. Since 3(32)/2 - 2 = 46, we are reasonably certain we did the computations correctly.
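The closed form can also be confirmed mechanically; this C sketch evaluates the recurrence directly (with the base case T(2) = 1 used above) and compares it against 3n/2 - 2:

    #include <stdio.h>

    long T(long n) {                          /* T(n) = 2T(n/2) + 2, T(2) = 1 */
        return n == 2 ? 1 : 2 * T(n / 2) + 2;
    }

    int main(void) {
        for (long n = 2; n <= 64; n *= 2)     /* powers of two only */
            printf("T(%ld) = %ld, 3n/2 - 2 = %ld\n", n, T(n), 3 * n / 2 - 2);
        return 0;
    }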

Telescoping, again: If that was confusing, you could also have done the same thing by expanding the recurrence directly:

    T(n) = 2T(n/2) + 2
         = 4T(n/4) + 4 + 2
         = 8T(n/8) + 8 + 4 + 2
         = ...
         = 2^j T(n/2^j) + (2^j + ... + 4 + 2)

We stop expanding when n/2^j = 2, that is, when 2^j = n/2. But since n = 32, that happens at j = 4:

    T(32) = 16 T(2) + (16 + 8 + 4 + 2) = 16 + 30 = 46

Thus, in general, T(n) = (n/2)(1) + (n - 2) = 3n/2 - 2, as before.

Example 8

Suppose the relation was T(n) = 2T(n/2) + n. This is a common form: you break a problem of size n into two equal-sized pieces (n/2) and have to solve each piece (as long as n/2 is bigger than our base case). Breaking the problem apart and accumulating the results takes time proportional to n.

This technique won't always work, but for the type of recurrence relations we see in algorithm analysis, it is often useful. Divide each side of the recurrence relation by n (the choice will become clearer as we proceed):

    T(n)/n = T(n/2)/(n/2) + 1

This equation is valid for any power of two, so:

    T(n/2)/(n/2) = T(n/4)/(n/4) + 1
    T(n/4)/(n/4) = T(n/8)/(n/8) + 1
    ...
    T(2)/2 = T(1)/1 + 1

Now add up all of the equations. This means we add up all the terms on the left-hand side and set the result equal to the sum of all the terms on the right-hand side. Observe that virtually all terms exist on both sides of the equation and may be cancelled. After everything is cancelled, we get

    T(n)/n = T(1) + log2(n)

since there is one +1 for each of the log2(n) equations. Thus T(n) = n T(1) + n log2(n) = O(n log n).
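A quick numeric check in C; the base case was left open above, so T(1) = 1 is an assumption here, which makes the closed form exactly n log2(n) + n:

    #include <stdio.h>

    long T(long n) {                          /* T(n) = 2T(n/2) + n; T(1) = 1 assumed */
        return n == 1 ? 1 : 2 * T(n / 2) + n;
    }

    int main(void) {
        long log2n = 0;                       /* log2 of the current power of two */
        for (long n = 1; n <= 64; n *= 2) {
            printf("T(%ld) = %ld, n log2 n + n = %ld\n", n, T(n), n * log2n + n);
            log2n++;
        }
        return 0;
    }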

Example 9

Consider a sorting algorithm in which you repeatedly search the unsorted portion of an array for the smallest element. The smallest element is swapped with the first element in the unsorted portion, and the lower limit of the unsorted portion is incremented. If we have n elements, it takes n - 1 compares to find the smallest, 1 swap, and then we sort the remaining n - 1 elements:

    T(n) = T(n-1) + (n - 1) + 1 = T(n-1) + n

If there are only two elements, there is a compare and at most 1 swap, so T(2) = 2. To see the pattern, let's assume n = 6:

    T(6) = T(5) + 6
    T(5) = T(4) + 5
    T(4) = T(3) + 4
    T(3) = T(2) + 3
    T(2) = 2

Backwards substitution yields

    T(6) = 2 + 3 + 4 + 5 + 6 = 20

In general,

    T(n) = 2 + 3 + ... + n = n(n + 1)/2 - 1 = O(n^2)
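The algorithm described in Example 9 is the classic selection sort. A minimal C implementation for concreteness (the array contents are arbitrary):

    #include <stdio.h>

    /* Selection sort: repeatedly swap the smallest remaining element
       to the front of the unsorted portion, as described above. */
    void selection_sort(int *a, int n) {
        for (int beg = 0; beg < n - 1; beg++) {     /* start of unsorted part */
            int min = beg;
            for (int j = beg + 1; j < n; j++)       /* n - 1 - beg compares */
                if (a[j] < a[min])
                    min = j;
            int tmp = a[beg];                       /* the one swap per pass */
            a[beg] = a[min];
            a[min] = tmp;
        }
    }

    int main(void) {
        int a[] = {5, 2, 4, 6, 1, 3};
        selection_sort(a, 6);
        for (int i = 0; i < 6; i++)
            printf("%d ", a[i]);                    /* prints 1 2 3 4 5 6 */
        printf("\n");
        return 0;
    }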