Algorithm Design CS 515 Fall 2015 Sample Final Exam Solutions

Copyright © 2015 Andrew Klapper. All rights reserved.

1. For the functions satisfying the following three recurrences, determine which is the fastest growing and which is the slowest growing.

(a) T(n) = 2T(n−1), T(1) = 5.
(b) S(n) = S(n−1) + log(n), S(1) = 17.
(c) R(n) = 3R(⌈n/2⌉) + 4, R(1) = 11.

We have T(n) = 5 · 2^(n−1). We have S(n) = log(n) + log(n−1) + ⋯ + log(2) + 17 = log(n!) + 17 ∈ Θ(n log(n)). By the master theorem for divide and conquer recurrences, R(n) ∈ Θ(n^(log_2 3)). Thus S(n) ∈ o(R(n)) and R(n) ∈ o(T(n)): S is the slowest growing and T is the fastest growing.

2. Prove that log^2(n) ∈ O(2^n).

It suffices to show that log(n) ∈ O(2^(n/2)), since squaring then gives log^2(n) ∈ O(2^n). By L'Hôpital's rule,

    lim_{n→∞} log(n) / 2^(n/2) = lim_{n→∞} [(1/n) log(e)] / [2^(n/2) · (ln(2)/2)] = 0.

3. Give a proof by invariants that the following algorithm correctly searches a sorted list X[0], …, X[n−1] for element z.

    Search(X,n,z) {
        i = 0
        j = n-1
        while (i < j) {
            k = floor((i+j)/2)
            if (X[k] = z) return(k)
            if (X[k] < z) i = k+1
            else j = k-1
        }
        if (X[i] = z) return(i)
        else return(failure)
    }

Assumption: X[0] ≤ X[1] ≤ ⋯ ≤ X[n−1].

Correctness claim P: If there is a t so that X[t] = z, then the algorithm returns a t with X[t] = z.

Invariant R: If there is a t ∈ {0, 1, …, n−1} so that X[t] = z, then there is a t ∈ {i, …, j} so that X[t] = z.

R is true initially since i = 0 and j = n−1. Assume R at the start of an iteration. If X[k] = z, then the loop never repeats. Suppose there is a t ∈ {0, 1, …, n−1} with X[t] = z; by R we may assume i ≤ t ≤ j. If X[k] < z, then any t with X[t] = z satisfies t > k, so there is a t ∈ {k+1, …, j} with X[t] = z. If X[k] > z, then any t with X[t] = z satisfies t < k, so there is a t ∈ {i, …, k−1} with X[t] = z. Thus R holds at the start of the next iteration.

If the algorithm returns at either return statement, it is returning a correct value. Suppose the loop terminates without having returned, R holds, and there is a t so that X[t] = z. Then there is a t so that X[t] = z and i ≤ t ≤ j ≤ i, so t = i and i will be returned. The loop eventually returns or terminates since j − i + 1 decreases at each iteration.

4. Let H be an array containing a min-heap with n elements (the smallest element is at the top, and the children of H[i] are H[2i] and H[2i+1]). Give pseudocode for an efficient algorithm for deleting the smallest element from H. What is the worst case time complexity of your algorithm?

Move the last element to H[1] and then bubble it down until it is smaller than both its children, always swapping with the smaller child.

    x = H[1]
    H[1] = H[n]
    n = n-1
    k = 1
    while (2k+1 <= n and H[k] > min(H[2k],H[2k+1])) {
        if (H[2k] < H[2k+1]) j = 2k
        else j = 2k+1
        swap(H[k],H[j])
        k = j
    }
    if (2k <= n and H[k] > H[2k]) swap(H[k],H[2k])
    return(x)

(The final if handles the case where position k has only one child.) Time: the number of iterations is at most the height of the heap, which is at most log(n), so the time is O(log(n)).

5. (a) Find a sharp upper bound on the height of a 2-3 tree with n nodes.

Every internal node has at least 2 children, and every leaf is at the same depth, so if the height is h there are at least 1 + 2 + 2^2 + ⋯ + 2^h = 2^(h+1) − 1 nodes. That is, 2^(h+1) − 1 ≤ n, so h ≤ log(n+1) − 1.

(b) Compare the performance of search trees, red-black trees, B-trees, or other such structures under various mixes of the basic operations (search, insert, delete).

There are many possible variants of this question and many acceptable answers. The main idea is to discuss the time complexity of the various operations, how much extra memory is needed, ease of programming, situations where one or another structure is preferred, and so on.

6. Describe an efficient algorithm to count the connected components in an undirected graph. Analyze the complexity of your algorithm. It should run in linear time.

Use a counter c, initialized to 0, to count the components. While possible, pick an unvisited node u and do a DFS from u; after each DFS, increment c. The final value of c is the number of connected components. The time is O(n + e), since this is the total time of the DFS traversals.

7. Let G = (V, E) be a directed graph with edge capacities c(u, v) ≥ 0, a source s, and a sink t. Prove that if f is a flow on G and f′ is a flow on the residual graph G_f, then the function g defined by g(u, v) = f(u, v) + f′(u, v) is a flow on G.

The symmetry and conservation laws are linear equations, so they are preserved by addition of flows. To prove the capacity bound, note that for every edge e we have f′(e) ≤ c_f(e) = c(e) − f(e). Thus g(e) = f(e) + f′(e) ≤ f(e) + c(e) − f(e) = c(e).

8. Dynamic programming

Let S = s_1 s_2 ⋯ s_n be a string over the alphabet {a, b} and consider the two operations Insert (insert a symbol in S) and Delete (delete a symbol from S). An ID-edit sequence is a sequence of Inserts and Deletes. If S = s_1 ⋯ s_n and T = t_1 ⋯ t_k are two strings, then MinID(S, T) is the length of the shortest ID-edit sequence that changes S into T. For 0 ≤ i ≤ n let S(i) = s_1 s_2 ⋯ s_i.

(a) Express MinID(S(i), T(j)) in terms of MinID of shorter strings. (Hint: consider separately the cases s_i = t_j and s_i ≠ t_j.)

If s_i = t_j, then MinID(S(i), T(j)) = MinID(S(i−1), T(j−1)). If s_i ≠ t_j, then either MinID(S(i), T(j)) = 1 + MinID(S(i−1), T(j)) (delete the last symbol of S(i)) or MinID(S(i), T(j)) = 1 + MinID(S(i), T(j−1)) (insert the last symbol of T(j)). That is:

    MinID(S(i), T(j)) = MinID(S(i−1), T(j−1))            if s_i = t_j
    MinID(S(i), T(j)) = 1 + min( MinID(S(i−1), T(j)),    if s_i ≠ t_j.
                                 MinID(S(i), T(j−1)) )

(b) Using dynamic programming, give an efficient algorithm that computes MinID. Give pseudocode and analyze the worst case time complexity.

    MinID(S,n,T,k) {
        for (i=0 to n) M[i,0] = i
        for (j=1 to k) M[0,j] = j
        for (i=1 to n)
            for (j=1 to k)
                if (S[i] = T[j]) M[i,j] = M[i-1,j-1]
                else M[i,j] = 1 + min(M[i,j-1], M[i-1,j])
        return(M[n,k])
    }

The two nested loops take time Θ((n+1)(k+1)) = Θ(nk).

9. Algebra

(a) Describe an efficient algorithm which, given integers a, k, m with 0 ≤ a < m and k ≥ 0, computes a^k (mod m). Analyze the complexity of the algorithm in terms of the time M(t) required to multiply two t-bit integers.

Let k = Σ_{i=0}^{r} k_i 2^i with k_i ∈ {0, 1}. We want to compute z_i = a^(2^i) (mod m) for i ≤ r and then a^k (mod m) = Π_{i : k_i = 1} z_i.

    z = a
    b = 1
    while (k > 0) {
        if (k mod 2 = 1) b = b·z mod m
        z = z·z mod m
        k = floor(k/2)
    }
    return(b)

The loop iterates r+1 = O(log(k)) times, and the body takes O(M(log(m))) time, so the whole algorithm takes O(log(k) · M(log(m))) time.

(b) How can the Extended Euclidean Algorithm be used to find the inverse of an integer a modulo an integer m?

Given a, m ∈ ℤ, the EEA finds s and t so that sa + tm = gcd(a, m). If the gcd is 1, then sa ≡ 1 (mod m), so s is the inverse of a modulo m.

10. Intractability

(a) Prove that if K and L are in NP, then the concatenation KL is in NP.

Suppose we are given NP algorithms A and B that recognize K and L, respectively. Then to recognize KL: on input x = x_1 x_2 ⋯ x_n, guess 0 ≤ i ≤ n; let y = x_1 ⋯ x_i and z = x_{i+1} ⋯ x_n; run A on y and B on z; accept iff both accept.

(b) Let L be NP-complete and let K be in NP. Let J = {(x, 0) : x ∈ L} ∪ {(y, 1) : y ∈ K}. Prove that J is NP-complete.

J is in NP: on input (x, 0), run an NP decider for L on input x; on input (y, 1), run an NP decider for K on input y. J is NP-hard: let M ∈ NP. Then M ≤_p L by some polynomial-time computable function f, so x ∈ M iff f(x) ∈ L. Define g(x) = (f(x), 0). Then x ∈ M iff g(x) ∈ J, and g is polynomial-time computable.
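As a sanity check on the pseudocode above, the dynamic program of problem 8 and the square-and-multiply routine of problem 9(a) can be sketched in Python. This is a minimal sketch, not part of the original exam; the function names min_id and mod_exp are mine, and strings are 0-indexed here rather than 1-indexed as in the solutions.

    # Sketch of the table-filling algorithm from problem 8(b).
    # Strings are 0-indexed, so S[i-1] plays the role of s_i.
    def min_id(S, T):
        n, k = len(S), len(T)
        M = [[0] * (k + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            M[i][0] = i                      # delete all of S(i)
        for j in range(k + 1):
            M[0][j] = j                      # insert all of T(j)
        for i in range(1, n + 1):
            for j in range(1, k + 1):
                if S[i - 1] == T[j - 1]:
                    M[i][j] = M[i - 1][j - 1]
                else:
                    M[i][j] = 1 + min(M[i][j - 1], M[i - 1][j])
        return M[n][k]

    # Sketch of the repeated-squaring loop from problem 9(a):
    # z runs through a^(2^i) mod m, and b accumulates the product
    # over the bits k_i that equal 1.
    def mod_exp(a, k, m):
        z, b = a % m, 1
        while k > 0:
            if k % 2 == 1:
                b = (b * z) % m
            z = (z * z) % m
            k //= 2
        return b

For example, min_id("aab", "abb") is 2 (delete the second a, insert a b), and mod_exp(2, 10, 1000) is 24, matching Python's built-in pow(2, 10, 1000).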