
Assignment 1

1. Problem 3-1 on p. 57
2. Problem 3-2 on p. 58
3. Problem 4-5 on p. 86
4. Problem 6-1 on p. 142
5. Problem 7-4 on p. 162
6. Prove correctness (including halting) of SelectionSort (use loop invariants)
7. Provide worst and average case runtime analysis of SelectionSort
8. Provide runspace analysis of SelectionSort

Solutions

Problem 1: Suppose a polynomial in n of degree d has the form

    p(n) = Σ_{i=0}^{d} a_i n^i

with leading coefficient a_d > 0, and let k >= 1 be a constant (not necessarily an integer). Prove the following:

(a) If k >= d then p(n) ∈ O(n^k).
(b) If k <= d then p(n) ∈ Ω(n^k).
(c) If k = d then p(n) = Θ(n^k).

Proof of (c). First we do some algebra:

    p(n) = Σ_{i=0}^{d} a_i n^i = n^d Σ_{i=0}^{d} a_i n^{i-d} = n^d ( Σ_{i=0}^{d-1} a_i n^{i-d} + a_d ) = n^d (a_d + q(n)),

where

    q(n) = Σ_{i=0}^{d-1} a_i n^{i-d}.

Note that q(n) is a sum of terms, each of which is a constant multiplied by a power of n, the power being less than or equal to -1. By calculus, the limit of q(n) is zero as n -> ∞. So there is an integer n_0 such that |q(n)| < 0.5 a_d for all n >= n_0. Using more elementary algebra, and assuming n >= n_0, we can see that

    n^d (a_d - 0.5 a_d) <= n^d (a_d + q(n)) <= n^d (a_d + 0.5 a_d).

Letting c_1 = 0.5 a_d and c_2 = 1.5 a_d, we have

    c_1 n^d <= p(n) <= c_2 n^d for all n >= n_0.

From this last inequality we see that p(n) = Θ(n^d). QED

We have shown part (c) first, which makes (a) and (b) very straightforward:

Proof of (a). Since k - d >= 0, we have n^d <= n^d n^{k-d} = n^k (for n >= 1), so that O(n^d) ⊆ O(n^k). Using part (c), we have p(n) ∈ O(n^d) ⊆ O(n^k), as required. QED

Proof of (b). Since d - k >= 0, we have n^d = n^k n^{d-k} >= n^k (for n >= 1), so that Ω(n^d) ⊆ Ω(n^k). Using part (c), we have p(n) ∈ Ω(n^d) ⊆ Ω(n^k), as required. QED

Problem 2: Indicate in the table below whether A is O, Ω, or Θ of B. Assume that k >= 1, ε > 0, and c > 1 are constants. (No entry = no.)

       A            B            O     Ω     Θ
    a. log^k n      n^ε          yes
    b. n^k          c^n          yes
    c. √n           n^(sin n)
    d. 2^n          2^(n/2)            yes
    e. n^(log c)    c^(log n)    yes   yes   yes
    f. log(n!)      log(n^n)     yes   yes   yes

c: The essential point is that sin n takes on values between -1 and 1, getting arbitrarily close to any particular value in that range as n traverses the positive integers. In particular, sin n gets within ε of 1 and also within ε of -1 for sufficiently large n. (This is because the integers, reduced modulo 2π, come arbitrarily close to every point of [0, 2π).) Therefore n^(sin n) gets near both n = n^1 and 1/n = n^(-1), thus wandering all over the place between roughly 1/n and n; since √n = n^(1/2) lies strictly between these, none of the three relations holds.

e: Note that A and B are actually equal: show that log A = (log c)(log n) = log B, and hence conclude A = B.

f: Note that log n! = Θ(n log n) [formula (6)] and that n log n = log n^n [property of logarithms], whence log n! = Θ(log n^n).

Problem 3: This problem develops properties of the Fibonacci numbers, which are given by the recurrence f_n = f_{n-1} + f_{n-2} with initial conditions f_0 = 0, f_1 = 1. Define the generating function for the Fibonacci sequence as the formal power series

    F(z) = Σ_{i=0}^{∞} f_i z^i.

(a) Show that F(z) = z + zF(z) + z^2 F(z).

Proof: Expand the right-hand side (RHS):

    z + zF(z) + z^2 F(z) = z + z Σ_{i=0}^{∞} f_i z^i + z^2 Σ_{i=0}^{∞} f_i z^i = (1 + f_0) z + Σ_{i=2}^{∞} (f_{i-1} + f_{i-2}) z^i.

Now plug in the initial conditions and the recurrence relation to obtain

    RHS = z + Σ_{i=2}^{∞} f_i z^i = Σ_{i=0}^{∞} f_i z^i,

which is identical to the left-hand side. QED

(b) Show that

    F(z) = z / (1 - z - z^2) = z / ((1 - φz)(1 - φ̂z)) = (1/√5) ( 1/(1 - φz) - 1/(1 - φ̂z) ),

where

    φ = (1 + √5)/2 = +1.61803...   and   φ̂ = (1 - √5)/2 = -0.61803...

Proof: The first equality is proved by solving the equation proved in (a) for F(z). The other two equalities are proved by calculation, for example:

    (1 - φz)(1 - φ̂z) = 1 - (φ + φ̂)z + φφ̂ z^2 = 1 - (1/2 + 1/2)z + ((1 - 5)/4) z^2 = 1 - z - z^2.

(c) Show that

    F(z) = (1/√5) Σ_{i=0}^{∞} (φ^i - φ̂^i) z^i.

Proof: Recall the fact

    1/(1 - az) = Σ_{i=0}^{∞} a^i z^i

(verify by multiplying both sides by 1 - az). Thus, applying part (b), we obtain

    F(z) = (1/√5) ( 1/(1 - φz) - 1/(1 - φ̂z) ) = (1/√5) ( Σ_{i=0}^{∞} φ^i z^i - Σ_{i=0}^{∞} φ̂^i z^i ) = (1/√5) Σ_{i=0}^{∞} (φ^i - φ̂^i) z^i,

which proves the result.

(d) Prove that f_i = φ^i / √5, rounded to the nearest integer.

Proof: Note that the absolute value of the second base is |φ̂| = (√5 - 1)/2 < 1, and hence |φ̂^i| < 1 for all i. Applying part (c), we have f_i = φ^i/√5 - φ̂^i/√5, where the second term is smaller than 1/2 in absolute value. QED

(e) Prove that f_{i+2} >= φ^i for i >= 0.

Proof: First note that φ and φ̂ are the roots of the quadratic P(z) = z^2 - z - 1. In particular, φ^2 = φ + 1, so that

    φ^{i-1} + φ^{i-2} = φ^{i-2}(φ + 1) = φ^{i-2} φ^2 = φ^i,

which shows that the sequence φ^i satisfies the Fibonacci recursion (but not the initial conditions defining f_i). We prove the assertion by induction on i. For the base cases, note that f_2 = 1 = φ^0 and f_3 = 2 = (1 + √9)/2 > (1 + √5)/2 = φ^1. The inductive step (for i >= 2) uses the calculation

    f_{i+2} = f_{i+1} + f_i >= φ^{i-1} + φ^{i-2} = φ^i,

where the inequality follows from the inductive hypothesis and the equality follows from the fact that φ satisfies the recursion. QED

Problem 4: The procedure Build-Max-Heap in section 6.3 can be implemented by repeatedly using Max-Heap-Insert to insert the elements into a heap. Consider the following implementation:

template <class I, class P>
void g_build_max_heap (I beg, I end, const P& LessThan)
// pre:  I is a random access iterator class
//       T is the value_type of I
//       P is a predicate class for type T
// post: the specified range of values is a max-heap using LessThan
{
  if (end - beg <= 1) return;
  size_t size = end - beg;
  for (size_t i = 1; i < size; ++i)
    g_push_heap(beg, beg + (i + 1), LessThan);
}

(a) Do g_build_max_heap and Build-Max-Heap in the text always create the same heap when run on the same input array? (Prove or disprove.)

Answer: No. Running the two algorithms on the array A = [1,2,3,4,5,6] results in [6,4,5,1,3,2] and [6,5,3,4,2,1], respectively.

(b) Show that in the worst case g_build_max_heap requires Θ(n log n) time to build an n-element heap.

The algorithm body consists of a single loop that executes its body n - 1 times. Iteration i of the loop body consists of one call to g_push_heap on a range of length i + 1. We know that g_push_heap has worst-case runtime Θ(log i). Therefore the algorithm runtime is

    Θ( Σ_{i=1}^{n} log i ) = Θ(n log n),

by application of Equation (4) from the Formulas handout.

Problem 5: This problem is about the stack depth (which adds to the runspace requirements of the algorithm) of QuickSort. Here are two versions of QuickSort, for an array A with range [p,r). (Note that the notation here is the standard C interpretation of range, which includes the begin index and excludes the end index. This differs from the text.)

void QuickSort(A,p,r)
{
  if (r - p > 1)
  {
    q = partition(A,p,r);
    QuickSort(A,p,q);
    QuickSort(A,q+1,r);
  }
}

void QuickSort2(A,p,r)
{
  while (r - p > 1)
  {
    q = partition(A,p,r);
    QuickSort2(A,p,q);
    p = q + 1;
  }
}

These each call the same version of Partition (below). (QuickSort2 is obtained from QuickSort by eliminating tail recursion, a process that can be formalized and accomplished by optimizing compilers.)

size_t partition(A,p,r)
{
  i = p;
  for (j = p; j < r-1; ++j)
  {
    if (A[j] <= A[r-1])
    {
      swap(A[i],A[j]);
      ++i;
    }
  }
  swap(A[i],A[r-1]);
  return i;
}

(a) Give an informal argument that QuickSort2 is a sort.

Note that by setting p = q+1 at the end of the loop body of QuickSort2, the effect is that the next execution of the loop body results in the same process as a call to QuickSort(A,q+1,r). Thus the two algorithm bodies perform the same sequence of tasks. Since we have already shown that QuickSort is a sort, so also must QuickSort2 be a sort.

(b) Describe a scenario in which the stack depth of QuickSort2 is Θ(n) on an n-element array (n = r - p).

If the input array is sorted, the partition index will always be the largest index in the range, resulting in nested recursive calls QuickSort2(A,p,i) for i = r-1, r-2, ..., down to p+1, a total of Θ(n) recursive calls and hence stack depth Θ(n).

(c) Modify the code for QuickSort2 so that the worst-case stack depth is Θ(log n), while maintaining the O(n log n) expected runtime of the algorithm.

Using a randomized version of Partition will at least make the expected stack usage O(log n), but we would still have a worst case of Ω(n). To ensure that stack space does not grow faster than log n, we can modify the algorithm so that the recursive call is made on the smaller of the two ranges:

void QuickSort3(A,p,r)
{
  while (r - p > 1)
  {
    q = partition(A,p,r);
    if (q - p < r - q)
    {
      QuickSort3(A,p,q);
      p = q + 1;
    }
    else
    {
      QuickSort3(A,q+1,r);
      r = q;
    }
  }
}

This modification ensures that each recursive call is made on a range no larger than half the size of the current one, and the recursion bottoms out when the range has size at most 1, so there are at most log n nested recursive calls.

Problem 6: Prove correctness (including halting) of SelectionSort (use loop invariants).

Here is selection sort (from the lecture notes, with some added loop invariants):

SelectionSort ( array A[0..n) )
// pre:
// post: A[0..n) is sorted
{
  for (i = 0; i < n; ++i)
  {
    // Loop Invariant 1: A[0..i) is sorted
    k = i;
    for (j = i; j != n; ++j)
      if (A[j] < A[k])
        k = j;
    // Loop Invariant 2: A[k] is a smallest element of A[i..n)
    Swap (A[i], A[k]);
    // Loop Invariant 3: A[i] is a smallest element of A[i..n)
    // Loop Invariant 4: A[i] is a largest element of A[0..i]
  }
  return;
}

Proof of halting: There are two nested loops, each of length bounded by n, so the algorithm halts after at most n^2 iterations of the inner loop body.

Proof of correctness: Let P1(i), P2(i), P3(i), P4(i) denote the four loop invariants shown in the listing. Clearly if we prove P1(n) we have proved that SelectionSort is a sort.

First consider P2(i): suppose that s is the first index of a smallest element of the range A[i..n). The comparison in the inner loop ensures that k is assigned the value s when j reaches s, and k is never reassigned afterward, since no later element is strictly smaller. Therefore P2(i) is true for all i. It is then straightforward to deduce P3(i), just observing the effect of the call to Swap.

Now we can prove P1(i) and P4(i) by double induction.

Base case: P1(0) and P4(0). These are trivially true: an empty range or a range of one element is automatically sorted.

Inductive step, part 1: P1(i) implies P4(i).

By P3(i-1), A[i-1] is a smallest element of A[i-1..n), which implies that A[i-1] <= A[i]. P4(i) follows because A[0..i) is sorted, so A[i-1] is at least as large as every element of A[0..i).

Inductive step, part 2: P1(i) and P4(i) imply P1(i+1).

We are given that A[0..i) is sorted and must prove that A[0..i+1) is sorted after the next iteration of the loop. Invoking P1(i) and P4(i), we see that A[0..i) is sorted and that A[i] is at least as large as any element in A[0..i). Therefore A[0..i] = A[0..i+1) is sorted. QED

Problem 7: Provide worst and average case runtime analysis of SelectionSort.

The algorithm body runs exactly the same independent of the data, because the lengths of the outer and inner loops are not data dependent. The runtime is

    Θ( Σ_{i=0}^{n-1} Σ_{j=i}^{n-1} 1 ) = Θ( Σ_{i=0}^{n-1} (n - i) ) = Θ( Σ_{i=1}^{n} i ) = Θ(n^2),

by Equation (1) of Formulas. Thus the worst case and the average case are the same, Θ(n^2).

Problem 8: Provide runspace analysis of SelectionSort.

There are four local variables used in the algorithm body, independent of n. Therefore the runspace is +Θ(1), that is, Θ(1) space in addition to the input array.