CSCI 3110 Assignment 6 Solutions


CSCI 3110 Assignment 6 Solutions December 5, 2012

2.4 (4 pts) Suppose you are choosing between the following three algorithms:
1. Algorithm A solves problems by dividing them into five subproblems of half the size, recursively solving each subproblem, and then combining the solutions in linear time.
2. Algorithm B solves problems of size n by recursively solving two subproblems of size n − 1 and then combining the solutions in constant time.
3. Algorithm C solves problems of size n by dividing them into nine subproblems of size n/3, recursively solving each subproblem, and then combining the solutions in O(n^2) time.
What are the running times of each of these algorithms (in big-O notation), and which would you choose?

These are:
1. T(n) = 5T(n/2) + Θ(n). We have a = 5, b = 2, f(n) = Θ(n). Here n^(log_b a) = n^(log_2 5) ≈ n^2.32, so f(n) = Θ(n) = O(n^(log_2 5 − ε)) for, e.g., ε = 1. This is Case 1 of the Master Theorem, thus T(n) = Θ(n^(log_2 5)).
2. T(n) = 2T(n − 1) + O(1). This recurrence does not fit the Master Theorem. However, observe that the number of subproblems doubles at each of the n levels of recursion and each subproblem uses O(1) time. Therefore T(n) = Θ(2^n).
3. T(n) = 9T(n/3) + Θ(n^2). We have a = 9, b = 3, f(n) = Θ(n^2). Here n^(log_b a) = n^(log_3 9) = n^2, so f(n) = Θ(n^(log_b a)). This is Case 2 of the Master Theorem. Thus, T(n) = Θ(n^2 log n).
The algorithm that grows the slowest asymptotically is Algorithm C, with a running time of Θ(n^2 log n), so we would choose it.

2.5 (2 pts) Solve the following recurrence relations and give a Θ bound for each of them.
(f) T(n) = 49T(n/25) + n^(3/2) log n
We have a = 49, b = 25, f(n) = Θ(n^(3/2) log n). Since log_25 49 = log 49 / log 25 ≈ 1.21 < 3/2, we have f(n) = Ω(n^(log_b a + ε)) for, e.g., ε = 1/5. This is Case 3 of the Master Theorem, so we check the regularity condition af(n/b) ≤ cf(n) for some c < 1 and sufficiently large n. That is, 49 (n/25)^(3/2) log(n/25) ≤ c n^(3/2) log n. Since 49 (n/25)^(3/2) log(n/25) = (49/25^(3/2)) n^(3/2) (log n − log 25) ≤ (49/125) n^(3/2) log n, the condition holds for c = 49/125 < 1.
Thus, T(n) = Θ(n^(3/2) log n).

(i) T(n) = T(n − 1) + c^n, where c > 1 is some constant
T(n) = T(n − 1) + c^n = T(n − 2) + c^(n−1) + c^n. We can thus unroll this to T(n) = Θ(1) + c + c^2 + ... + c^n. As c > 1, this geometric series is dominated by its last term, so T(n) = Θ(c^n).

2.12 (2 pts) How many lines, as a function of n (in Θ(·) form), does the following program print? Write a recurrence and solve it. You may assume n is a power of 2.

f(n)
  if n > 1:
    print_line("still going")
    f(n/2)
    f(n/2)

This program prints one line and then splits into two subprograms, each with half of the input size. The recurrence is T(n) = 2T(n/2) + 1. Using the Master Theorem, n^(log_b a) = n^(log_2 2) = n. Since f(n) = Θ(1) = O(n^(1 − ε)), this is Case 1, so the program prints Θ(n) lines.

2.25 (4 pts) In Section 2.1 we described an algorithm that multiplies two n-bit binary integers x and y in time n^a, where a = log_2 3. Call this procedure fastmultiply(x, y).
(a) We want to convert the decimal integer 10^n (a 1 followed by n zeros) into binary. Here is the algorithm (assume n is a power of 2):

pwr2bin(n)
  if n = 1:
    return 1010₂
  else:
    z = ???
    return fastmultiply(z, z)

Fill in the missing details. Then give a recurrence relation for the running time of the algorithm, and solve the recurrence.
The missing detail is z = pwr2bin(n/2): since 10^n = (10^(n/2))^2, squaring the result of the recursive call gives the answer. The recurrence is T(n) = T(n/2) + O(n^a). Unrolling it, we get the sum Σ_{i=0}^{log_2 n} (n/2^i)^a, a geometric series dominated by its first term, which is Θ(n^a).

(b) Next, we want to convert any decimal integer x with n digits (where n is a power of 2) into binary. The algorithm is the following:

dec2bin(x)
  if n = 1:
    return binary[x]
  else:
    split x into two decimal numbers x_L, x_R with n/2 digits each
    return ???

Here binary[·] is a vector that contains the binary representation of all one-digit integers. That is, binary[0] = 0₂, binary[1] = 1₂, up to binary[9] = 1001₂. Assume that a lookup in binary takes O(1) time. Fill in the missing details. Once again, give a recurrence for the running time of the algorithm, and solve it.
The missing detail is return fastmultiply(pwr2bin(n/2), dec2bin(x_L)) + dec2bin(x_R), since x = x_L · 10^(n/2) + x_R. The recurrence is T(n) = 2T(n/2) + O(n^a), as there are two recursive calls with half of the input size, and both fastmultiply and pwr2bin use Θ(n^a) time. Using the Master Theorem (with branching factor 2 and b = 2), n^(log_2 2) = n, and f(n) = Θ(n^a) = Ω(n^(1 + ε)) since a = log_2 3 > 1, so we have Case 3.
We check the regularity condition af(n/b) ≤ cf(n). Here this is 2(n/2)^a ≤ c n^a. As 2(n/2)^a = 2^(1−a) n^a and a > 1, this holds for c = 2^(1−a) < 1. Therefore, by the Master Theorem, the solution to the recurrence is Θ(n^a).

2.29 (4 pts) Suppose we want to evaluate the polynomial p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n at point x.
(a) Show that the following simple routine, known as Horner's rule, does the job and leaves the answer in z.

z = a_n
for i = n − 1 downto 0:
    z = z·x + a_i

We show by induction on the loop iterations that after the iteration with index i, z = a_i + a_{i+1} x + ... + a_n x^(n−i). For n = 0 the loop is not executed and the algorithm correctly returns z = a_0. Now suppose that before the iteration with index i we have z = a_{i+1} + a_{i+2} x + ... + a_n x^(n−i−1). The iteration sets z = zx + a_i = a_i + a_{i+1} x + ... + a_n x^(n−i), as claimed. In particular, after the final iteration (i = 0), z = a_0 + a_1 x + ... + a_n x^n, which is the required value. In other words, Horner's rule implements the factorization p(x) = a_0 + x(a_1 + x(a_2 + x(...))) from the inside out and is therefore correct.

(b) How many additions and multiplications does this routine use, as a function of n? Can you find a polynomial for which an alternative method is substantially better?
This routine uses n additions and n multiplications, as it does one addition and one multiplication in each of its n loop iterations. A polynomial with a single non-zero coefficient, p(x) = a_i x^i, can be evaluated with a logarithmic number of multiplications by computing x^i with repeated squaring.

2.16 (2 pts) You are given an infinite array A[·] in which the first n cells contain integers in sorted order and the rest of the cells are filled with ∞. You are not given the value of n. Describe an algorithm that takes an integer x as input and finds a position in the array containing x, if such a position exists, in O(log n) time. (If you are disturbed by the fact that the array A has infinite length, assume instead that it is of length n, but that you don't know this length, and that the implementation of the array data type in your programming language returns an error message whenever elements A[i] with i > n are accessed.)
Probe the powers of 2, i = 1, 2, 4, ..., until we find the first location A[i] with x ≤ A[i]. Since A[i/2] < x and x is finite, cell i/2 holds a finite value, so i/2 ≤ n and hence i ≤ 2n. We can then do a binary search on A[i/2 + 1 .. i] to find x if it is in the array. Both the doubling phase and the binary search take O(log i) = O(log n) time.
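The two phases of this search (doubling, then binary search) can be sketched as follows. This is our own illustration, not part of the assignment: a Python list stands in for the infinite array, with reads past the end yielding ∞, and the helper name find_in_unbounded and the −1 "not found" convention are our choices.

```python
INF = float("inf")

def find_in_unbounded(A, x):
    """Find x in a sorted 'infinite' array in O(log n) probes, or return -1.
    Reads past the end of the list yield INF, simulating the infinite tail."""
    def get(i):
        return A[i] if i < len(A) else INF
    if get(0) == x:
        return 0
    i = 1
    while get(i) < x:        # doubling phase: stop at the first i with x <= A[i]
        i *= 2
    lo, hi = i // 2 + 1, i   # x, if present, lies in A[i/2 + 1 .. i]
    while lo <= hi:          # ordinary binary search on that range
        mid = (lo + hi) // 2
        if get(mid) == x:
            return mid
        elif get(mid) < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Note that the doubling phase never touches an index beyond 2n, matching the O(log n) bound argued above.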
3 (b) (2 pts) Consider an n × m array A of integers. Each row and column of the array is in ascending order: A[i, j] ≤ A[i, j + 1] for i = 1 ... n, j = 1 ... m − 1, and A[i, j] ≤ A[i + 1, j] for i = 1 ... n − 1, j = 1 ... m. Design an algorithm that determines if a given integer x is in A; if so, return its position, otherwise return −1. Explain your algorithm with words and diagram(s). Show that your algorithm terminates and find its worst-case complexity. Hint: start by comparing x with A[1, m].
A simple observation helps in finding the solution: since each row and column of A is sorted, x is not in column j if x < A[1, j] (the smallest element of that column), and x is not in row i if x > A[i, m] (the largest element of that row). Compare x with A[1, m]. If x < A[1, m], then x cannot be in the last column, so we next compare x with A[1, m − 1]. If x > A[1, m], then x cannot be in the first row, so we next compare x with A[2, m]. We can thus use the following algorithm:

search2darray(A, x)
  row = 1
  col = m
  while row ≤ n and col ≥ 1:
    if x < A[row, col]:
      col = col − 1
    elseif x > A[row, col]:
      row = row + 1
    else:
      return (row, col)
  return −1
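A 0-indexed Python sketch of this "staircase" search, for illustration (returning None instead of −1 for "not found" is our choice):

```python
def search_2d(A, x):
    """Staircase search in a matrix whose rows and columns are sorted
    ascending. Start at the top-right corner; each comparison discards
    either a column (x is smaller) or a row (x is larger).
    Returns (row, col) 0-indexed, or None, in at most rows + cols steps."""
    row, col = 0, len(A[0]) - 1
    while row < len(A) and col >= 0:
        if x < A[row][col]:
            col -= 1
        elif x > A[row][col]:
            row += 1
        else:
            return (row, col)
    return None
```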

Each iteration of the while loop either increases the row number by 1 or decreases the column number by 1, so the loop runs at most n + m − 1 times, each iteration taking Θ(1) time. Thus, the algorithm terminates in O(n + m) time in the worst case.

4. (6 pts) (a) Draw the recursion tree to guess a solution and prove your guess for T(n) = T(n − a) + T(a) + cn for constants a ≥ 1 and c > 0. Assume T(n) = T(1) for n < a.
See Fig. ex_4_a and ex_4_b, included alongside these solutions. The solution to this recurrence is T(n) = Θ(n^2). The recursion tree is essentially a path: each level splits off a leaf of size a, costing T(a) = Θ(1), and passes a problem of size n − a to the next level, so there are about n/a levels with costs cn, c(n − a), c(n − 2a), .... Summing this arithmetic series gives T(n) = Θ(c(n + (n − a) + (n − 2a) + ...)) = Θ(n^2 / a) = Θ(n^2) for constant a.

(b) Draw the recursion tree to guess a solution and prove your guess for T(n) = T(αn) + T((1 − α)n) + cn for constants 0 < α < 1 and c > 0.
The recursion tree has cost cn at each level and O(log n) levels, which suggests T(n) = O(n log n). We can prove this by substitution, claiming that T(n) ≤ dn log n for all n ≥ 2. Assume without loss of generality that α ≥ 1/2. The base cases, T(n) for small constant n, are covered by choosing d large enough. Now assume the claim holds for all smaller arguments. Then

T(n) = T(αn) + T((1 − α)n) + cn
     ≤ dαn log(αn) + d(1 − α)n log((1 − α)n) + cn
     ≤ dαn log(αn) + d(1 − α)n log(αn) + cn        (since (1 − α)n ≤ αn)
     = dn log(αn) + cn
     = dn log n + dn log α + cn
     ≤ dn log n,

where the last step holds for d ≥ c / log(1/α), since log α = −log(1/α) < 0.

(c) Suppose that the splits at each level of quicksort are in the proportion 1 − α to α, where 0 < α ≤ 1/2 is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately −log n / log α and the maximum depth is approximately −log n / log(1 − α).
The minimum depth is attained by repeatedly following children of proportion α (the faster-shrinking side), and the maximum depth by repeatedly following children of proportion 1 − α. After k steps down the tree in the first case, the size of the child sub-array is n·α^k. If this sub-array is a leaf of the recursion tree, then its size is 1 and we get the equation 1 = n·α^k.
From here, k = log_α(1/n) = −log n / log α. The other case is symmetric: after k steps down the tree, the size of the child sub-array is n·(1 − α)^k, and setting this to 1 gives k = log_{1−α}(1/n) = −log n / log(1 − α).

5. (4 pts) (a) Give an algorithm that sorts an array of size n containing only the elements 1, 2 and 3 in linear time. Give a plausible argument that your algorithm sorts the array in linear time.
Since the only possible array elements are 1, 2 and 3: call Partition with the pivot element fixed to the integer 2. Now we have A[1 ... q − 1] ≤ 2 and A[q ... n] > 2; the latter sub-array contains only threes. Call Partition again on the sub-array A[1 ... q − 1], this time with pivot element 1. Now A[1 ... q′ − 1] ≤ 1 and A[q′ ... q − 1] > 1; the former sub-array contains only ones and the latter only twos, so the array is sorted. Since Partition is a Θ(n) algorithm, and there are two calls to Partition with only constant overhead (Θ(1)) for assigning the pivots, the algorithm described above is in Θ(n).
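The two partition passes can be sketched in Python. Here partition_leq is a hypothetical Lomuto-style helper of our own, standing in for the book's Partition with a fixed pivot value:

```python
def partition_leq(A, lo, hi, pivot):
    """Lomuto-style pass: rearrange A[lo:hi] so that elements <= pivot come
    before elements > pivot. Returns the index q of the first element > pivot."""
    q = lo
    for i in range(lo, hi):
        if A[i] <= pivot:
            A[i], A[q] = A[q], A[i]
            q += 1
    return q

def sort123(A):
    """Sort an array containing only 1s, 2s and 3s with two linear passes."""
    q = partition_leq(A, 0, len(A), 2)   # first pass: threes end up in A[q:]
    partition_leq(A, 0, q, 1)            # second pass: ones before twos in A[:q]
    return A
```

Each pass scans its range once, so the whole routine does at most 2n comparisons, matching the Θ(n) argument above.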

(b) Construct an example input for which quicksort will use Ω(n^2) comparisons when the pivot is chosen as the median of the first, last and middle elements of the sequence.
For median-of-three pivoting to cause quicksort to take Ω(n^2) time, the partition must be lop-sided on EVERY call. This happens if, on every call, two of the three elements examined as potential pivots are the two smallest (or the two largest) elements of the sub-array. For example, consider an eight-element array of the form (1, ..., 2), where the remaining numbers are all larger than 2. Then pivot = median(1, 2, middle element) = 2, and 2 is moved to the end to serve as the pivot. PARTITION swaps A[8] with A[2] (the element next to 1); since all other elements are larger than 2, no further swaps occur, and the partition point lands just after 2 (now in A[2]). Quicksort then recursively sorts A[3 .. 8]. To continue the pattern of bad splits, the third smallest element must be in A[3], and since the element originally in A[8] is now in A[2], the element originally in A[2] must be the fourth smallest. Continuing this pattern, the sixth smallest must be in A[4] and the fifth smallest in A[5]. So the original array must be:
[1, 4, 3, 6, 5, 8, 7, 2]
Running quicksort (where ][ denotes the partition location):
pivot = 2, so the array becomes [1, 2 ][ 3, 6, 5, 8, 7, 4]
pivot = 4, so the array becomes [1, 2, 3, 4 ][ 5, 8, 7, 6]
pivot = 6, so the array becomes [1, 2, 3, 4, 5, 6 ][ 7, 8]
and so on: every partition splits off only a constant number of elements, so the recursion depth is Θ(n) and the total work is Ω(n^2).
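The construction can be checked mechanically. The sketch below is one plausible reading of the partition described above (Lomuto-style, with the median of the first, middle and last elements moved to the end as the pivot), instrumented to record the size of each split; on [1, 4, 3, 6, 5, 8, 7, 2] every split leaves at most one element on its smaller side.

```python
def median_of_three_quicksort(A, lo=0, hi=None, splits=None):
    """Quicksort with median-of-first/middle/last pivoting and a Lomuto-style
    partition. Appends (left size, right size) of every split to `splits`."""
    if hi is None:
        hi = len(A) - 1
    if splits is None:
        splits = []
    if lo < hi:
        mid = (lo + hi) // 2
        # pick the median of A[lo], A[mid], A[hi]; move it to the end as pivot
        _, p = sorted([(A[lo], lo), (A[mid], mid), (A[hi], hi)])[1]
        A[p], A[hi] = A[hi], A[p]
        pivot = A[hi]
        q = lo
        for i in range(lo, hi):          # partition around the pivot
            if A[i] < pivot:
                A[i], A[q] = A[q], A[i]
                q += 1
        A[q], A[hi] = A[hi], A[q]        # pivot lands at index q
        splits.append((q - lo, hi - q))
        median_of_three_quicksort(A, lo, q - 1, splits)
        median_of_three_quicksort(A, q + 1, hi, splits)
    return A, splits
```

Running it on the adversarial input above yields the splits (1, 6), (1, 4), (1, 2), ..., i.e. a lop-sided partition at every level, while a typical input produces much more balanced splits.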