
Slides for CIS 675: DPV Chapter 6, Dynamic Programming, Part 2
Jim Royer, EECS, October 28, 2009

The Knapsack Problem (KP)

A knapsack with weight capacity W. Items 1, ..., n, where item i has weight w_i and value v_i.

... with repetition:
    Find a multiset M with elements from {1, ..., n} so that Σ_{i ∈ M} w_i ≤ W and Σ_{i ∈ M} v_i is maximized.

... without repetition:
    Find a set S ⊆ {1, ..., n} so that Σ_{i ∈ S} w_i ≤ W and Σ_{i ∈ S} v_i is maximized.

Image from: http://commons.wikimedia.org/wiki/File:Knapsack.svg

Knapsack with repetition

    K(w) = the max value gained from a knapsack with capacity w
         = max_{i : w_i ≤ w} ( K(w − w_i) + v_i )      (Why?)

    array K[0..W]
    K[0] ← 0
    for w ← 1 to W do
        K[w] ← 0
        for i ← 1 to n do
            if w_i ≤ w then K[w] ← max(K[w], K[w − w_i] + v_i)

This runs in Θ(n·W) time. But we usually measure the size of W as |W| = the number of bits in the binary representation of W. Hence Θ(n·W) = Θ(n·2^|W|), so this is only useful for small values of W.

Knapsack without repetition, 1

Problem: K[w − w_n] is not useful, since it does not tell you whether item n was used in an optimal solution. Therefore, we refine things to:

    K[w, j] = the best value obtainable with capacity w using items from 1, ..., j
            = K[w, j−1],                               if w_j > w;
            = max(K[w, j−1], K[w − w_j, j−1] + v_j),   otherwise.      (Why?)
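Both recurrences above translate directly into table-filling code. Here is a short Python sketch of the two variants (the function names and the list-of-(weight, value)-pairs input format are mine, not the slides'):

```python
def knapsack_with_repetition(items, W):
    # items: list of (weight, value) pairs; W: knapsack capacity.
    # K[w] = max value achievable with capacity w, items reusable.
    K = [0] * (W + 1)
    for w in range(1, W + 1):
        for wi, vi in items:
            if wi <= w:
                K[w] = max(K[w], K[w - wi] + vi)
    return K[W]


def knapsack_without_repetition(items, W):
    # K[w][j] = best value with capacity w using only items 1..j.
    n = len(items)
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for w in range(1, W + 1):
        for j in range(1, n + 1):
            wj, vj = items[j - 1]
            if wj > w:
                K[w][j] = K[w][j - 1]
            else:
                K[w][j] = max(K[w][j - 1], K[w - wj][j - 1] + vj)
    return K[W][n]
```

For example, with W = 10 and items (6, 30), (3, 14), (4, 16), (2, 9): with repetition the optimum is 48 (item 1 plus two copies of item 4); without repetition it is 46 (items 1 and 3).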

Knapsack without repetition, 2

Our recursive relation is:

    K[w, j] = the best value obtainable with capacity w using items from 1, ..., j
            = K[w, j−1],                               if w_j > w;
            = max(K[w, j−1], K[w − w_j, j−1] + v_j),   otherwise.

    array K[0..W, 0..n]                     // This is also Θ(n·W) time.
    for w ← 0 to W do K[w, 0] ← 0          // Hence only useful
    for j ← 0 to n do K[0, j] ← 0          // when W is small.
    for w ← 1 to W do
        for j ← 1 to n do
            if w_j > w then K[w, j] ← K[w, j−1]
            else K[w, j] ← max(K[w, j−1], K[w − w_j, j−1] + v_j)

Chain matrix multiplication, 1

Recall: Multiplying a d_0 × d_1 matrix by a d_1 × d_2 matrix results in a d_0 × d_2 matrix and takes (d_0 · d_1 · d_2)-many scalar multiplies.

The Chain Matrix Multiplication Problem (CMMP)
Given: d_0, ..., d_n ∈ N+ and matrices A_1, ..., A_n where dim(A_i) = d_{i−1} × d_i.
Find: The cheapest way to order the multiplications.

Example: Suppose that A is a 50 × 20 matrix, B is a 20 × 1 matrix, C is a 1 × 10 matrix, and D is a 10 × 100 matrix. Then:

    Parenthesization     Cost computation                      Cost
    A × (B × (C × D))    1·10·100 + 20·1·100 + 50·20·100    103,000
    A × ((B × C) × D)    20·1·10 + 20·10·100 + 50·20·100    120,200
    (A × B) × (C × D)    50·20·1 + 1·10·100 + 50·1·100        7,000
    (A × (B × C)) × D    20·1·10 + 50·20·10 + 50·10·100      60,200
    ((A × B) × C) × D    50·20·1 + 50·1·10 + 50·10·100       51,500

Chain matrix multiplication, 2 & 3

The (i, k) subproblem, 1 ≤ i ≤ k ≤ n
Find: C(i, k) = the min cost of computing A_i × ... × A_k.

    C(i, i) = 0.
    C(i, k) = min_{j = i, ..., k−1} ( C(i, j) + C(j+1, k) + d_{i−1}·d_j·d_k ),  where i < k.

C(1, n) is the minimal cost of the CMMP for A_1, ..., A_n.

Question: Why does optimal substructure hold?
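The C(i, k) recurrence is easy to check against the A, B, C, D example. A minimal Python sketch (the function name and the dimension-list input format are mine):

```python
def matrix_chain_cost(d):
    # d[0..n]: matrix A_i is d[i-1] x d[i]; returns min # of scalar multiplies.
    n = len(d) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]   # diagonal: C[i][i] = 0
    for s in range(1, n):                       # subchain "width" s = k - i
        for i in range(1, n - s + 1):
            k = i + s
            C[i][k] = min(C[i][j] + C[j + 1][k] + d[i - 1] * d[j] * d[k]
                          for j in range(i, k))
    return C[1][n]
```

Here matrix_chain_cost([50, 20, 1, 10, 100]) gives 7,000, the cost of the best parenthesization (A × B) × (C × D) from the table.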

Chain matrix multiplication, 4

The (i, k) subproblem, 1 ≤ i ≤ k ≤ n
Find: C(i, k) = the min cost of computing A_i × ... × A_k.

    C(i, i) = 0.
    C(i, k) = min_{j = i, ..., k−1} ( C(i, j) + C(j+1, k) + d_{i−1}·d_j·d_k ),  where i < k.

C(1, n) is the minimal cost of the CMMP for A_1, ..., A_n.

    for i ← 1 to n do C[i, i] ← 0
    for s ← 1 to n − 1 do
        // Compute C[1, 1+s], C[2, 2+s], ..., C[n−s, n]
        for i ← 1 to n − s do
            k ← i + s;  C[i, k] ← +∞
            for j ← i to k − 1 do
                C[i, k] ← min(C[i, k], C[i, j] + C[j+1, k] + d_{i−1}·d_j·d_k)
    return C[1, n]

Run Time: Θ(n³).
Question: How do you reconstruct the order of multiplication from the C[·, ·] table?

Shortest paths: All-pairs, 1

[Figure: a table of road distances between New Mexico cities (Alamogordo, Albuquerque, Artesia, Carlsbad, Chama, Clayton, Clovis, Deming, Farmington, Gallup, Hobbs, Las Cruces, Las Vegas, Lordsburg, Los Alamos, Portales, Raton, Red River, Reserve, Roswell, Ruidoso, Santa Fe, Santa Rosa, Silver City, Socorro, Taos, Truth or Consequences, Tucumcari, Vaughn), motivating the all-pairs problem.]

Shortest paths: All-pairs, 2

The All-Pairs Shortest Paths Problem (APSP)
Given: G = ({1, ..., n}, E), an undirected graph, and len: E → R+.
Construct: S[1..n, 1..n] so that S[i, j] = the length of a shortest G-path from i to j.

Assumption: G is initially given by a matrix A[1..n, 1..n] so that

    A[i, j] = len(i, j),  if (i, j) ∈ E;
            = +∞,         if (i, j) ∉ E.

APSP, restated: Given A[1..n, 1..n], compute S[1..n, 1..n].
Question: What are good subproblems?

Shortest paths: All-pairs, 3

A is the approximation to S in which paths have no intermediate vertices. Allowing more and more intermediate vertices gives subproblems:

    dist[i, j, 0] = A[i, j]
    dist[i, j, k] = the length of a shortest path from i to j whose intermediate
                    vertices come from {1, ..., k}
    dist[i, j, n] = S[i, j] = the length of a shortest G-path from i to j

Shortest paths: All-pairs, 4

(Text from Dasgupta, Papadimitriou, and Vazirani, p. 187:) Suppose we want to find the shortest path not just between s and t, but between all pairs of vertices. One approach would be to execute our general shortest-path algorithm (since there may be negative edges) |V| times, once for each starting node. The running time would then be O(|V|²·|E|). We'll now see a better alternative: the O(|V|³) dynamic-programming-based Floyd-Warshall algorithm.

Is there a good subproblem for computing distances between all pairs of vertices? Simply solving the problem for more and more pairs or starting points is unhelpful, because it leads right back to the O(|V|²·|E|) algorithm. One idea comes to mind: the shortest path u → w_1 → ... → w_l → v between u and v uses some number of intermediate nodes, possibly none. Suppose we disallow intermediate nodes altogether. Then we can solve all-pairs shortest paths at once: the shortest path from u to v is simply the direct edge (u, v), if it exists. What if we now gradually expand the set of permissible intermediate nodes? We can do this one node at a time, updating the shortest path lengths at each stage. Eventually this set grows to all of V, at which point all vertices are allowed to be on all paths, and we have found the true shortest paths between vertices of G.

More concretely, number the vertices in V as {1, 2, ..., n}, and let dist(i, j, k) denote the length of the shortest path from i to j in which only nodes {1, 2, ..., k} can be used as intermediates. Initially, dist(i, j, 0) is the length of the direct edge between i and j, if it exists, and is ∞ otherwise.

What happens when we expand the set to include an extra node k? We must reexamine all pairs i, j and check whether using k as an intermediate point gives us a shorter path from i to j. But this is easy: a shortest path from i to j that uses k along with possibly other lower-numbered nodes goes through k just once (why? because we assume that there are no negative cycles). And we have already calculated the length of a shortest path from i to k and from k to j using only lower-numbered vertices.

[Figure: path i → k → j with legs dist(i, k, k−1) and dist(k, j, k−1), versus the direct dist(i, j, k−1).]

Thus, using k gives us a shorter path from i to j if and only if

    dist(i, k, k−1) + dist(k, j, k−1) < dist(i, j, k−1),

in which case dist(i, j, k) should be updated accordingly. Note that dist[i, j, k−1] = dist[i, j, k] means: there is a shortest path from i to j using vertices from {1, ..., k} that does not use k.

Shortest paths: All-pairs, 5

Question: How do we go from dist[·, ·, k−1] to dist[·, ·, k]?

    dist[i, j, k] = min(dist[i, j, k−1], dist[i, k, k−1] + dist[k, j, k−1]).   (Why?)

Shortest paths: All-pairs, 6

    for i ← 1 to n do                       // Initialization
        for j ← 1 to n do
            dist[i, j, 0] ← A[i, j]
    for k ← 1 to n do                       // Main iteration
        for i ← 1 to n do
            for j ← 1 to n do
                dist[i, j, k] ← min(dist[i, j, k−1],
                                    dist[i, k, k−1] + dist[k, j, k−1])
    for i ← 1 to n do                       // Output
        for j ← 1 to n do
            S[i, j] ← dist[i, j, n]
    return S

This is the Floyd-Warshall algorithm and, as you can see, it takes O(|V|³) time.
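In Python the whole thing collapses to a triple loop. Updating a single 2-D dist table in place is safe here (round k never changes dist[i][k] or dist[k][j]) and is exactly the Θ(|V|²)-space refinement the slides mention. A sketch; the adjacency-matrix convention (0 on the diagonal, float('inf') for missing edges) is an assumption of mine:

```python
def all_pairs_shortest_paths(A):
    # A[i][j]: length of edge (i, j), float('inf') if absent, 0 on the diagonal.
    n = len(A)
    dist = [row[:] for row in A]        # dist[i][j] = current best i -> j
    for k in range(n):                  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

On the 3-vertex path 0 - 1 - 2 with edge lengths 3 and 1, the returned table has dist[0][2] = 4, as expected.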

Shortest paths: All-pairs, 7

(The same algorithm as on the previous slide.)
Time complexity: Θ(|V|³). (Why?)
Space complexity: Also Θ(|V|³), but this is easy to improve to Θ(|V|²).

The traveling salesman problem

Given: G, a complete graph on vertices 1, ..., n, with d(i, j) = (the distance from i to j) < ∞.
Find: the minimal cost of a complete tour of G.

[Figure: a five-vertex example with edge lengths 1.0, 3.0, 4.0, ...]

Subproblems: For each S ⊆ {1, ..., n} with 1 ∈ S, and each j ∈ S:

    C[S, j] = the minimal cost of a path from 1 to j using just the nodes in S.
    C[S, 1] = +∞, when |S| > 1.
    C[S, j] = min_{i ∈ S − {j}} ( C[S − {j}, i] + d(i, j) ).

There are at most (2^n · n)-many subproblems, and each one takes O(n) time to solve. !!! This takes O(n² · 2^n) time !!!

Note: This is a set up for Chapter 8.

Independent sets in trees, 1

Definition: Suppose G = (V, E) is an undirected graph.
(a) u, v ∈ V are independent when (u, v) ∉ E.
(b) U ⊆ V is an independent set when every pair of elements from U is independent.

The Independent Set Problem (ISP)
Given: G, an undirected graph. Find: a max-sized independent set for G.

The Independent Set Problem for Trees
Given: T, a tree. Find: a max-sized independent set for T.

Example: In the graph below [Figure: a six-vertex graph]:
    {1, 4, 5} is not an independent set.
    {1, 5} and {1, 6} are independent sets.
    {2, 3, 6} and {1, 4, 6} are max-sized independent sets.

Independent sets in trees, 2

Strategy: Pick some vertex of T as the root, r. Now each vertex of T is the root of a subtree. For each v in T, define

    I(v) = the size of a largest independent set in v's subtree.

Then I(v) = 1 when v is a leaf, and I(r) = the size of a largest independent set in T.
So, what is the recursion?
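The C[S, j] subproblems above are the Held-Karp algorithm. A Python sketch using integer bitmasks for S (vertices are renumbered 0..n−1 here, with 0 playing the role of the slides' vertex 1; the function name is mine):

```python
from itertools import combinations

def tsp_min_tour(d):
    # d[i][j]: distance from i to j, vertices 0..n-1; returns min tour cost.
    # C[(S, j)] = min cost of a path from 0 to j visiting exactly the set S,
    # with S encoded as a bitmask that always contains vertex 0.
    n = len(d)
    INF = float("inf")
    C = {(1, 0): 0}                            # S = {0}, path ends at 0
    for size in range(2, n + 1):
        for rest in combinations(range(1, n), size - 1):
            S = 1                              # bit 0: the start vertex
            for v in rest:
                S |= 1 << v
            C[(S, 0)] = INF                    # C[S, start] = +inf when |S| > 1
            for j in rest:
                prev = S & ~(1 << j)
                C[(S, j)] = min(C[(prev, i)] + d[i][j]
                                for i in range(n) if prev & (1 << i))
    full = (1 << n) - 1
    return min(C[(full, j)] + d[j][0] for j in range(1, n))
```

On the asymmetric 4-city instance d = [[0,2,9,10],[1,0,6,4],[15,7,0,8],[6,3,12,0]] this returns 21, the cost of the tour 0 → 2 → 3 → 1 → 0.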

Independent sets in trees, 3

Strategy: Pick some vertex of T as the root, r. For each v in T:
I(v) = the size of a largest independent set in v's subtree; I(v) = 1 when v is a leaf; and I(r) = the size of a largest independent set in T.

What is the recursion for I(u)?

Case: u is in some maximum-sized independent set for u's subtree.
    Then I(u) = 1 + Σ { I(v) : v is a grandchild of u }.
Case: u is in no maximum-sized independent set for u's subtree.
    Then I(u) = Σ { I(v) : v is a child of u }.

Independent sets in trees, 4

Strategy: Pick some vertex of T as the root, r. For each u in T compute:

    I(u) = the size of a largest independent set in u's subtree
         = max( 1 + Σ_{v a grandchild of u} I(v),   Σ_{v a child of u} I(v) )

This can be done in Θ(|V|) time. For the general graph case, O(2^|V|) is the best known time.

Note: This is another set up for Chapter 8.
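The I(u) recursion runs in one bottom-up pass over the rooted tree. A Python sketch (the children-dictionary representation and the names are mine; the traversal is iterative so that deep trees do not hit the recursion limit):

```python
def max_independent_set_tree(children, root):
    # children[v]: list of v's children once the tree is rooted at `root`.
    # I[v] = size of a largest independent set within v's subtree
    #      = max(1 + sum over grandchildren of v, sum over children of v).
    order, stack = [], [root]
    while stack:                       # collect vertices, parents first
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    I = {}
    for v in reversed(order):          # process leaves before their parents
        without_v = sum(I[c] for c in children[v])
        with_v = 1 + sum(I[g] for c in children[v] for g in children[c])
        I[v] = max(with_v, without_v)
    return I[root]
```

For the tree with edges 0-1, 0-2, 1-3, 1-4 rooted at 0, this returns 3 (e.g. the set {0, 3, 4}).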