COL351: Analysis and Design of Algorithms (CSE, IITD, Semester-I) Name: Entry number:


There are 5 questions for a total of 75 points.

1. (5 points) You are given n items and a sack that can hold at most W units of weight. The weight of the i-th item is denoted by w(i) and the value of this item is denoted by v(i). The items are indivisible, i.e., you cannot take a fraction of any item. Design an algorithm that determines the items that should be placed in the sack such that the total value of the items in the sack is maximized, subject to the constraint that their combined weight is at most W. You may assume that all quantities in this problem are integers. Give pseudocode, discuss running time, and argue correctness.

Solution: Let V(i, w) denote the maximum profit that can be made using items 1, ..., i when the sack capacity is w. We have V(0, w) = 0 for any w ≥ 0 and V(i, 0) = 0 for any i ≥ 0. For i ≥ 1 and sack capacity w > 0, we have:

    V(i, w) = V(i-1, w)                                      if w(i) > w
    V(i, w) = max{ V(i-1, w), V(i-1, w - w(i)) + v(i) }      otherwise

Given a sack of capacity w, if the i-th item cannot fit into the sack, then the maximum profit is the same as the maximum profit obtained using items 1, ..., i-1. If the i-th item can fit into the sack, then the maximum profit is the larger of the two cases where you pick item i and where you do not. This is the logic behind the above recursive formulation. The following algorithm finds the maximum profit as well as the items that should be picked to achieve it.

1 of 10

0-1-knapsack(v, w, W)
- For i = 0 to n
  - V[i, 0] ← 0
  - P[i, 0] ← 0
- For t = 0 to W
  - V[0, t] ← 0
- For i = 1 to n
  - For t = 1 to W
    - If (w[i] > t)
      - V[i, t] ← V[i-1, t]
      - P[i, t] ← 0
    - Else if (V[i-1, t] > V[i-1, t - w[i]] + v[i])
      - V[i, t] ← V[i-1, t]
      - P[i, t] ← 0
    - Else
      - V[i, t] ← V[i-1, t - w[i]] + v[i]
      - P[i, t] ← 1
- FindItems(w, P)

FindItems(w, P)
- i ← n; t ← W; S ← {}
- While (i ≥ 1)
  - If (P[i, t] = 0)
    - i ← i - 1
  - Else
    - S ← S ∪ {i}
    - t ← t - w[i]
    - i ← i - 1
- return(S)

Here P[i, t] = 1 records that item i is picked in an optimal solution for capacity t, and P[i, t] = 0 that it is skipped.

Running time: The running time of the above algorithm is O(n·W), since the algorithm fills tables of size n × W and filling each table entry takes O(1) time.
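As a concrete sketch, the table-filling and traceback above can be written in Python (0-indexed lists stand in for the 1-indexed arrays V and P; the function name and example data are illustrative, not part of the exam):

```python
# Bottom-up 0-1 knapsack with item recovery, following the pseudocode above.
def knapsack(v, w, W):
    n = len(v)
    # V[i][t]: best value using items 1..i with capacity t (items 1-indexed).
    V = [[0] * (W + 1) for _ in range(n + 1)]
    # P[i][t] = 1 iff item i is picked in the optimum for (i, t).
    P = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = w[i - 1], v[i - 1]
        for t in range(1, W + 1):
            V[i][t] = V[i - 1][t]          # default: skip item i
            if wi <= t and V[i - 1][t - wi] + vi > V[i][t]:
                V[i][t] = V[i - 1][t - wi] + vi
                P[i][t] = 1
    # Trace back the picked items, as in FindItems.
    items, t = [], W
    for i in range(n, 0, -1):
        if P[i][t]:
            items.append(i)
            t -= w[i - 1]
    return V[n][W], sorted(items)

# Example: capacity 5, values [3, 4, 5], weights [2, 3, 4]
# print(knapsack([3, 4, 5], [2, 3, 4], 5))  # best value 7 with items 1 and 2
```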

2. (15 points) You are given n types of coin denominations of values v_1 < v_2 < ... < v_n (all integers). Assume v_1 = 1, so you can always make change for any integer amount of money C. Design an algorithm that makes change for an integer amount of money C with as few coins as possible. Give pseudocode, discuss running time, and argue correctness.

Solution: Let M(i, c) denote the minimum number of coins needed to make change for c when we are allowed only coins of values v_1, v_2, ..., v_i. First we note that M(1, c) = c for all c > 0, and M(i, 0) = 0 for all i: in the first case we are only allowed coins of value 1, and in the second case no change needs to be made. Here is a recursive relation for M(., .):

    M(i, c) = M(i-1, c)                               if v_i > c
    M(i, c) = min{ M(i-1, c), M(i, c - v_i) + 1 }     otherwise

If v_i > c, then no coin of value v_i can be used. Otherwise, we consider two cases: 1. the number of coins used to make change for c using coins of value v_1, ..., v_{i-1}, and 2. the number of coins used to make change for (c - v_i) using coins of value v_1, ..., v_i, plus one (for the coin of value v_i). We pick the case which minimizes the number of coins. Here is the table-filling algorithm for finding the minimum number of coins:

CoinChange(C)
- For i = 1 to n
  - M[i, 0] ← 0
- For c = 1 to C
  - M[1, c] ← c
- For i = 2 to n
  - For c = 1 to C
    - If (v_i > c) then M[i, c] ← M[i-1, c]
    - Else M[i, c] ← min{ M[i-1, c], M[i, c - v_i] + 1 }
- return(M[n, C])

Here is the algorithm that outputs the number of coins of each value:

CoinChangeNum(C)
- For i = 1 to n
  - M[i, 0] ← 0
  - P[i, 0] ← -1
- For c = 1 to C
  - M[1, c] ← c
  - P[1, c] ← c - 1
- For i = 2 to n
  - For c = 1 to C
    - If (v_i > c)
      - M[i, c] ← M[i-1, c]
      - P[i, c] ← -1
    - Else if (M[i-1, c] < M[i, c - v_i] + 1)
      - M[i, c] ← M[i-1, c]
      - P[i, c] ← -1
    - Else
      - M[i, c] ← M[i, c - v_i] + 1
      - P[i, c] ← c - v_i
- Coins ← FindCoins(P, C)
- return(Coins)

FindCoins(P, C)
- For i = 1 to n: A[i] ← 0
- c ← C; i ← n
- While (c > 0)
  - While (P[i, c] ≠ -1)
    - A[i] ← A[i] + 1
    - c ← P[i, c]
  - i ← i - 1
- return(A)

Here P[i, c] stores the remaining amount c - v_i when a coin of value v_i is used, and the marker -1 when coin i is not used at (i, c), so FindCoins can walk back through the table and count the coins of each value.

Running time: The running time of the above algorithm is O(n·C), since the algorithm fills a table of size n × C and filling each table entry takes constant time.
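A Python sketch of the same idea. For brevity it uses a one-dimensional table indexed by amount only (sweeping denominations in the outer loop yields the same values as the two-dimensional M), with P[c] recording the last coin used; names are illustrative:

```python
# Minimum-coin change with coin recovery; the denominations list must
# contain 1, as guaranteed by the problem statement.
def coin_change(coins, C):
    INF = float("inf")
    M = [0] + [INF] * C       # M[c]: fewest coins needed to make c
    P = [-1] * (C + 1)        # P[c]: value of the last coin used for c
    for v in coins:           # same recurrence, coins in the outer loop
        for c in range(v, C + 1):
            if M[c - v] + 1 < M[c]:
                M[c] = M[c - v] + 1
                P[c] = v
    counts, c = {}, C         # walk back to count coins of each value
    while c > 0:
        counts[P[c]] = counts.get(P[c], 0) + 1
        c -= P[c]
    return M[C], counts

# Example: denominations {1, 5, 12}, amount 15
# print(coin_change([1, 5, 12], 15))  # 3 coins: three 5s beat 12+1+1+1
```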

3. (15 points) Recall the problem of finding a minimum vertex cover of a tree. Suppose you are given a rooted tree T with root r. For every node v, let C(v) denote the set of children of the node v in T; so for a leaf node v, C(v) = {}. We will find the minimum vertex cover using Dynamic Programming. For any node v, let M(v, 0) denote the size of the smallest vertex cover S of the subtree rooted at v such that v ∉ S, and let M(v, 1) denote the size of the smallest vertex cover S of the subtree rooted at v such that v ∈ S. Then the size of the minimum vertex cover of the given tree is min{M(r, 0), M(r, 1)}. Give the recursive formulation for M(., .) and use it to give an algorithm for finding the size of a minimum vertex cover. Your algorithm should also output a minimum vertex cover of T. Give pseudocode, discuss running time, and argue correctness.

Solution: For any leaf v in the tree we have M(v, 1) = 1 and M(v, 0) = 0, since the subtree rooted at v contains just v. For any internal vertex u we get the following recursive relation:

    M(u, 1) = 1 + Σ_{v ∈ C(u)} min{ M(v, 0), M(v, 1) }
    M(u, 0) = Σ_{v ∈ C(u)} M(v, 1)

The reason for the above recursion is that the minimum vertex cover of the subtree rooted at u that includes u consists of u itself plus, for each child v, the smaller of the two covers of the subtree rooted at v. The minimum vertex cover that excludes u must include every child v, since the edges from u to its children need to be covered; its size is therefore the sum of M(v, 1) over the children. We can compute these values while doing a post-order traversal of the given tree.
MinVCSize(T, r)
- PostOrder(r)
- return(min{M[r, 0], M[r, 1]})

PostOrder(r)
- If (r is a leaf)
  - M[r, 0] ← 0
  - M[r, 1] ← 1
- Else
  - For all v ∈ C(r)
    - PostOrder(v)
  - M[r, 1] ← 1 + Σ_{v ∈ C(r)} min{M[v, 0], M[v, 1]}
  - M[r, 0] ← Σ_{v ∈ C(r)} M[v, 1]

For finding a minimum vertex cover, we do a pre-order traversal of the tree. The pseudocode is given below. Assume that M and S are global variables.

MinVC(T, r)
- PostOrder(r)
- PreOrder(r, 0)
- return(S)

PreOrder(r, b)
- If (b = 1)                      [r is forced into the cover]
  - S ← S ∪ {r}
  - For all v ∈ C(r)
    - PreOrder(v, 0)
- Else if (M[r, 0] < M[r, 1])     [leaving r out is strictly better]
  - For all v ∈ C(r)
    - PreOrder(v, 1)
- Else                            [taking r is at least as good]
  - S ← S ∪ {r}
  - For all v ∈ C(r)
    - PreOrder(v, 0)

The flag b = 1 means that r's parent was left out of the cover, so r must be included; when r is included, each child is again free to choose its cheaper option.

Running time: The algorithm does two tree traversals, which take O(n) time.
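The two traversals can be sketched in Python as follows. The tree is assumed to be given as a dictionary mapping each node to its list of children — a representation chosen for this sketch, not prescribed by the problem:

```python
# Minimum vertex cover of a rooted tree via the PostOrder/PreOrder scheme.
def min_vertex_cover(children, root):
    M = {}  # M[u] = (size of best cover excluding u, size including u)

    def post(u):
        exc, inc = 0, 1
        for v in children.get(u, []):
            post(v)
            exc += M[v][1]        # u excluded: every child must be covered
            inc += min(M[v])      # u included: each child takes its cheaper option
        M[u] = (exc, inc)

    S = set()

    def pre(u, forced):
        if forced or not M[u][0] < M[u][1]:
            S.add(u)              # u goes into the cover; children are free
            for v in children.get(u, []):
                pre(v, False)
        else:                     # u left out; children are forced in
            for v in children.get(u, []):
                pre(v, True)

    post(root)
    pre(root, False)
    return min(M[root]), S

# Example: path a - b - c rooted at a; the middle vertex b covers both edges.
# print(min_vertex_cover({'a': ['b'], 'b': ['c']}, 'a'))
```

For deep trees, Python's default recursion limit may need raising (`sys.setrecursionlimit`), or the traversals can be rewritten with an explicit stack.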

4. (20 points) You are given a rectangular piece of cloth with dimensions X × Y, where X and Y are positive integers, and a list of n products that can be made using the cloth. For each product i ∈ {1, ..., n} you know that a rectangle of cloth of dimensions a_i × b_i is needed and that the final selling price of the product is c_i. Assume the a_i, b_i, and c_i are all positive integers. You have a machine that can cut any rectangular piece of cloth into two pieces, either horizontally or vertically. Design an algorithm that determines the best return on the X × Y piece of cloth, that is, a strategy for cutting the cloth so that the products made from the resulting pieces give the maximum sum of selling prices. You are free to make as many copies of a given product as you wish, or none if desired. Give pseudocode, discuss running time, and argue correctness.

Solution: The first observation to make for this question is that it only makes sense to cut the cloth at integer coordinates, since all products have integer dimensions. For any size x × y, let p(x, y) denote the maximum selling price over all products of size exactly x × y; if there is no product of size x × y, then p(x, y) = 0. Let P(x, y) denote the maximum profit that we can obtain from a cloth of size x × y. There are two cases: (i) we do not cut this piece and use it for a product of size exactly x × y, or (ii) we cut it — there being (x - 1) + (y - 1) sub-options for where to cut — and then make maximum profit out of the two pieces created.
This gives us the following recursive relationship:

    P(x, y) = max{ p(x, y),  max_{1 ≤ h < x} { P(h, y) + P(x-h, y) },  max_{1 ≤ v < y} { P(x, v) + P(x, y-v) } }

Here is a table-filling algorithm that computes the above values:

ClothCutting(X, Y)
- Compute the values of p[x, y] for all 1 ≤ x ≤ X and 1 ≤ y ≤ Y
- For x = 1 to X
  - For y = 1 to Y
    - P[x, y] ← max{ p[x, y], max_{1 ≤ h < x} {P[h, y] + P[x-h, y]}, max_{1 ≤ v < y} {P[x, v] + P[x, y-v]} }
- return(P[X, Y])

Running time: Computing the values of p(x, y) for all x, y takes time O(X·Y + n). The algorithm fills a table of size X × Y, and filling each table entry takes time O(X + Y). So the total running time of the algorithm is O(X·Y·(X + Y) + n).
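A Python sketch of ClothCutting. Following the solution's definition of p(x, y), products are not rotated, so a product of size a × b only matches a piece of size exactly a × b; names and example data are illustrative:

```python
# Best return from an X-by-Y cloth with guillotine (edge-to-edge) cuts.
# products is a list of (a, b, c) triples: dimensions a x b, selling price c.
def cloth_cutting(X, Y, products):
    # p[x][y]: best price of a single product of size exactly x-by-y.
    p = [[0] * (Y + 1) for _ in range(X + 1)]
    for a, b, c in products:
        if a <= X and b <= Y:
            p[a][b] = max(p[a][b], c)
    # P[x][y]: best total return from an x-by-y piece.
    P = [[0] * (Y + 1) for _ in range(X + 1)]
    for x in range(1, X + 1):
        for y in range(1, Y + 1):
            best = p[x][y]                      # case (i): no cut
            for h in range(1, x):               # case (ii): horizontal cuts
                best = max(best, P[h][y] + P[x - h][y])
            for v in range(1, y):               # case (ii): vertical cuts
                best = max(best, P[x][v] + P[x][y - v])
            P[x][y] = best
    return P[X][Y]

# Example: a 2x2 cloth; four 1x1 products beat one 2x2 product.
# print(cloth_cutting(2, 2, [(1, 1, 1), (2, 2, 3)]))
```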

5. (20 points) Consider the 0-1 Knapsack problem that we discussed in lecture. Given n positive integers w_1, ..., w_n and another positive integer W, the goal is to find a subset S of indices such that Σ_{i∈S} w_i is maximised subject to Σ_{i∈S} w_i ≤ W. We studied a dynamic program for this problem that has a running time O(n·W). We know that this is bad when the integers are very large. The goal in this task is to improve the running time at the cost of optimality of the solution. Let us make this more precise. Let OPT denote the value of the optimal solution. For any given ε ∈ (0, 1], design an algorithm that outputs a solution that has value at least OPT/(1 + ε). The running time of your algorithm should be of the form O(f(n)/ε), where f(n) is some polynomial in n. You need not give the best algorithm that exists for this question; as long as f(n) in the running time is polynomial in n, you will get full credit. (Hint: Try modifying the dynamic programming solution for the knapsack problem.)

Solution: Let us assume that w_i ≤ W for all i, since otherwise we can remove item i and solve the problem with the remaining items. We first check if Σ_i w_i ≤ W. If so, the optimal solution picks all items. So we only need to focus on the case Σ_i w_i > W. Let w_max be the weight of the item with maximum weight. Since Σ_i w_i > W, we have:

    n · w_max > W     (1)

A DP that does not work: Let us first try to use the same Dynamic Programming formulation as in class and scale the weights. We will see that this formulation has an issue; however, it will nicely set us up for the DP that works. Given the problem instance I = (w_1, ..., w_n, W), we create the following new instance of the 0-1 knapsack problem: I′ = (w′_1, ..., w′_n, W′), where w′_i = ⌊w_i/m⌋ for all i, W′ = ⌊W/m⌋, and m = ε·w_max/(2n). We solve the second instance using the dynamic programming algorithm discussed in class. Let O′ denote the optimal solution for (w′_1, ..., w′_n, W′).
That is, O′ is the set of items picked by the dynamic programming algorithm when executed on instance I′. Note that the running time of the algorithm on instance I′ will be O(n·W′) = O(n·W/m) = O((n²/ε)·(W/w_max)) = O(n³/ε), using inequality (1). Let us analyze whether we can return O′ as an approximate solution to instance I. The fact that Σ_{i∈O′} w′_i ≤ W′ does not ensure that Σ_{i∈O′} w_i will be at most W. This is where the problem lies with this formulation.

We will use an alternative DP formulation. Consider w′_1, ..., w′_n as in the above DP formulation, and let

    L(i, w) = min{ Σ_{j∈S} w_j : S ⊆ {1, ..., i} such that Σ_{j∈S} w′_j = w }.

That is, L(i, w) is the smallest true weight of a subset of the first i items whose scaled weight is exactly w. For base cases, we have L(i, 0) = 0 for all i ≥ 0 and L(0, w) = ∞ for all w > 0. Furthermore, we can write:

    L(i, w) = min{ L(i-1, w - w′_i) + w_i, L(i-1, w) }     if w′_i ≤ w
    L(i, w) = L(i-1, w)                                    otherwise

We can compute the values L(i, w) similarly to the way we compute the values M(i, w) in the previous formulation. We can also output the set S ⊆ {1, ..., i} that minimizes Σ_{j∈S} w_j subject to Σ_{j∈S} w′_j = w for a given (i, w). The following program computes these values:

0-1-Knap(w, w′, W′)
- For i = 0 to n
  - L[i, 0] ← 0
  - P[i, 0] ← 0
- For t = 1 to W′
  - L[0, t] ← ∞
- For i = 1 to n
  - For t = 1 to W′
    - If (w′[i] > t)
      - L[i, t] ← L[i-1, t]
      - P[i, t] ← 0
    - Else if (L[i-1, t] ≤ L[i-1, t - w′[i]] + w[i])
      - L[i, t] ← L[i-1, t]
      - P[i, t] ← 0
    - Else
      - L[i, t] ← L[i-1, t - w′[i]] + w[i]
      - P[i, t] ← 1

FindItems(w′, P, t)
- i ← n; S ← {}
- While (i ≥ 1)
  - If (P[i, t] = 0)
    - i ← i - 1
  - Else
    - S ← S ∪ {i}
    - t ← t - w′[i]
    - i ← i - 1
- return(S)

Let t* be the largest index such that L(n, t*) ≤ W. We then run FindItems(w′, P, t*) to find a set S, and return S as the approximate solution to the original problem. First note that Σ_{i∈S} w_i ≤ W; this follows from the definition of L. Let O denote the optimal solution to the given problem. We prove the following claim.

Claim 1: Σ_{i∈O} w′_i ≤ t*.

Proof. Suppose not, and let t′ = Σ_{i∈O} w′_i > t*. Then L(n, t′) ≤ Σ_{i∈O} w_i ≤ W, so t* is not the largest index such that L(n, t*) ≤ W — a contradiction.

Finally, we have Σ_{i∈S} w′_i = t* ≥ Σ_{i∈O} w′_i.

Multiplying both sides by m, we get:

    m · (Σ_{i∈S} w′_i) ≥ m · (Σ_{i∈O} w′_i)

Since w′_i = ⌊w_i/m⌋, we have w_i/m - 1 ≤ w′_i ≤ w_i/m. Using the upper bound on the left-hand side and the lower bound on the right-hand side:

    Σ_{i∈S} w_i ≥ m · (Σ_{i∈S} w′_i) ≥ m · (Σ_{i∈O} w′_i) ≥ Σ_{i∈O} w_i - m·n

Since m·n = (ε/2)·w_max and Σ_{i∈O} w_i = OPT:

    Σ_{i∈S} w_i ≥ OPT - (ε/2)·w_max ≥ OPT - (ε/2)·OPT     (since w_max ≤ OPT)
                = (1 - ε/2)·OPT ≥ OPT/(1 + ε)

The last inequality holds because (1 - ε/2)(1 + ε) = 1 + ε/2 - ε²/2 ≥ 1 for ε ∈ (0, 1].

10 of 10
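The whole scheme can be sketched in Python. This sketch uses a one-dimensional version of the table L (sweeping t downward keeps each item used at most once) and returns only the achieved value Σ_{i∈S} w_i rather than the set S; the function name is illustrative:

```python
# Approximation scheme for the subset-sum knapsack above: scale weights by
# m = eps * w_max / (2n), run the min-true-weight DP L over scaled weights,
# and take the largest scaled total that still fits.  Guarantees a value of
# at least (1 - eps/2) * OPT >= OPT / (1 + eps) for eps in (0, 1].
def approx_knapsack(w, W, eps):
    w = [x for x in w if x <= W]       # items heavier than W can never be used
    if sum(w) <= W:
        return sum(w)                  # everything fits: take all items
    n = len(w)
    m = eps * max(w) / (2 * n)         # scaling factor from the solution
    wp = [int(x // m) for x in w]      # scaled weights w'_i
    Wp = int(W // m)                   # scaled capacity W'
    INF = float("inf")
    # L[t]: minimum true weight of a subset whose scaled weight is exactly t.
    L = [0] + [INF] * Wp
    for i in range(n):
        for t in range(Wp, wp[i] - 1, -1):    # downward: item i used at most once
            L[t] = min(L[t], L[t - wp[i]] + w[i])
    # t*: largest scaled weight whose cheapest realization still fits in W.
    t_star = max(t for t in range(Wp + 1) if L[t] <= W)
    return L[t_star]
```

Because m is a floating-point value, ⌊x // m⌋ can be off by one near exact multiples; for exact arithmetic one could scale with `fractions.Fraction` instead, which does not affect the approximation guarantee.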