Dynamic Programming. Cormen et al., IV.15


Dynamic Programming Applications

Areas: bioinformatics, control theory, operations research.

Some famous dynamic programming algorithms: Unix diff for comparing two files; Smith-Waterman for sequence alignment; Bellman-Ford for shortest-path routing in networks.

Fibonacci numbers: F(1) = F(2) = 1; F(n) = F(n-1) + F(n-2) for n > 2.

Fibonacci numbers: F(1) = F(2) = 1; F(n) = F(n-1) + F(n-2) for n > 2.

Simple recursive solution:

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

What is the size of the call tree? [Figure: the call tree for fib(5), which recomputes the same subproblems many times.]

Problem: the call tree is exponential in n. Can we avoid it?

Efficient computation using a memo table:

def fib(n, table):
    # pre: n >= 1; table[i] is either 0 or already contains fib(i)
    if n <= 2:
        return 1
    if table[n] > 0:
        return table[n]
    result = fib(n-1, table) + fib(n-2, table)
    table[n] = result
    return result

We used a memo table, never computing the same value twice. How many calls now? Can we do better?
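A minimal usage sketch for the memoized version, assuming the table is a plain Python list of length n+1 pre-filled with zeros (index 0 is unused):

n = 40
table = [0] * (n + 1)     # table[i] == 0 means fib(i) has not been computed yet
print(fib(n, table))      # 102334155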

Look ma, no table:

def fib(n):
    if n <= 2:
        return 1
    a, b = 1, 1
    c = 1
    for i in range(3, n + 1):
        c = a + b
        a = b
        b = c
    return c

Compute the values bottom up. Avoid the table, only store the previous two values: same O(n) time complexity, constant space.

Optimization Problems

In optimization problems a set of choices must be made to arrive at an optimum, and sub-problems are encountered. This often leads to a recursive definition of a solution. However, the recursive algorithm is often inefficient in that it solves the same sub-problem many times. Dynamic programming avoids this repetition by solving the problem bottom-up and storing sub-solutions.

Dynamic vs Greedy, Dynamic vs Divide-and-Conquer

Compared to greedy algorithms, there is no predetermined local best choice of a sub-solution; instead, the best solution is chosen by computing a set of alternatives and picking the best. Dynamic programming builds on the recursive definition of a divide-and-conquer solution, but avoids re-computation by storing earlier computed values, thereby often saving orders of magnitude of time. Fibonacci: from exponential to linear.

Dynamic Programming

Dynamic programming has the following steps:
- Characterize the structure of the problem, i.e., show how a larger problem can be solved using solutions to sub-problems
- Recursively define the optimum
- Compute the optimum bottom up, storing values of sub-solutions
- Construct the optimum from the stored data

Optimal substructure

Dynamic programming works when a problem has optimal substructure: we can construct the optimum of a larger problem from the optima of a "small set" of smaller problems (small: polynomially many). Not all problems have optimal substructure. The Travelling Salesman Problem (TSP)?

Weighted Interval Scheduling We studied a greedy solution for the interval scheduling problem, where we searched for the maximum number of compatible intervals. If each interval has a weight and we search for the set of compatible intervals with the maximum sum of weights, no greedy solution is known.

Weighted Interval Scheduling

Weighted interval scheduling problem. Job j starts at s_j, finishes at f_j, and has value v_j. Two jobs are compatible if they don't overlap. Goal: find a maximum-value subset of mutually compatible jobs. [Figure: jobs a through h drawn on a time line.]

Weighted Interval Scheduling

Assume jobs are sorted by finish time: f_1 <= f_2 <= ... <= f_n. Define p(j) = the largest index i < j such that job i is compatible with j; in other words, p(j) is j's latest predecessor, and p(j) = 0 if j has no predecessors. Example: p(8) = 5, p(7) = 3, p(2) = 0. Using p(j), can you think of a recursive solution? [Figure: eight jobs on a time line.]
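The slides do not show how to compute p(j); a minimal sketch, assuming jobs are given as (start, finish) pairs sorted by finish time and that a job finishing exactly when another starts counts as compatible, uses binary search over the finish times:

import bisect

def compute_p(jobs):
    # jobs: list of (start, finish) pairs, sorted by finish time; result is 1-indexed
    finishes = [f for (s, f) in jobs]
    p = [0] * (len(jobs) + 1)              # p[0] is unused
    for j, (s, f) in enumerate(jobs, start=1):
        # latest job i < j whose finish time is <= the start of job j (0 if none)
        p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
    return p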

Dynamic Programming: Recursive Solution

Notation. OPT(j): the value of an optimal solution to the problem consisting of job requests 1, 2, ..., j.

Case 1: OPT(j) includes job j.
- add v_j to the total value
- can't use the incompatible jobs { p(j)+1, p(j)+2, ..., j-1 }
- must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., p(j)

Case 2: OPT(j) does not include job j.
- must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., j-1

OPT(j) = 0                                      if j = 0
OPT(j) = max( v_j + OPT(p(j)), OPT(j-1) )       otherwise

Weighted Interval Scheduling: Recursive Solution

Input: s_1,...,s_n, f_1,...,f_n, v_1,...,v_n
Sort jobs by finish times so that f_1 <= f_2 <= ... <= f_n.
Compute p(1), p(2), ..., p(n).

Compute-Opt(j) {
    if (j == 0)
        return 0
    else
        return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))
}

What is the size of the call tree here? How can you make it as big as possible?

Analysis of the recursive solution

Observation. The recursive algorithm considers an exponential number of (redundant) sub-problems. The number of recursive calls for a family of "layered" instances grows like the Fibonacci sequence: with p(1) = 0 and p(j) = j-2, the code on the previous slide becomes Fibonacci, since Compute-Opt(j) calls Compute-Opt(j-1) and Compute-Opt(j-2). [Figure: a layered instance and its exponential call tree.]

Weighted Interval Scheduling: Memoization

Memoization. Store the result of each sub-problem in a cache; look it up as needed.

Input: n, s_1,...,s_n, f_1,...,f_n, v_1,...,v_n
Sort jobs by finish times so that f_1 <= f_2 <= ... <= f_n.
Compute p(1), p(2), ..., p(n).

for j = 1 to n
    M[j] = empty
M[0] = 0                      (global array M)

M-Compute-Opt(j) {
    if (M[j] is empty)
        M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    return M[j]
}
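The same memoized scheme as a runnable Python sketch; it reuses the hypothetical compute_p helper sketched above, and assumes jobs are (start, finish, value) triples sorted by finish time:

def wis_memo(jobs):
    # jobs: list of (start, finish, value) triples, sorted by finish time
    n = len(jobs)
    p = compute_p([(s, f) for (s, f, v) in jobs])
    M = [None] * (n + 1)       # M[j] is None while OPT(j) is still unknown
    M[0] = 0

    def m_compute_opt(j):
        if M[j] is None:
            v_j = jobs[j - 1][2]
            M[j] = max(v_j + m_compute_opt(p[j]), m_compute_opt(j - 1))
        return M[j]

    return m_compute_opt(n)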

Weighted Interval Scheduling: Running Time

Claim. The memoized version, M-Compute-Opt(n), takes O(n log n) time: each entry of M is filled in ONCE, in constant time, and since M has n+1 entries this takes O(n). But we also sorted the jobs, so the overall running time is O(n log n).

Weighted Interval Scheduling: Finding a Solution

Q. The dynamic programming algorithm computes the optimal value. What if we want the solution itself?
A. Do some post-processing: run M-Compute-Opt(n), then run Find-Solution(n).

Find-Solution(j) {
    if (j == 0)
        output nothing
    else if (v_j + M[p(j)] > M[j-1])
        print j
        Find-Solution(p(j))
    else
        Find-Solution(j-1)
}

Weighted Interval Scheduling: Bottom-Up

Bottom-up dynamic programming: unwind the recursion.

Input: n, s_1,...,s_n, f_1,...,f_n, v_1,...,v_n
Sort jobs by finish times so that f_1 <= f_2 <= ... <= f_n.
Compute p(1), p(2), ..., p(n).

Iterative-Compute-Opt {
    M[0] = 0
    for j = 1 to n
        M[j] = max(v_j + M[p(j)], M[j-1])
}

By going in bottom-up order, M[p(j)] and M[j-1] are already present when M[j] is computed. This again takes O(n log n) for sorting and O(n) for the computation, so O(n log n) overall.
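A minimal runnable sketch of the bottom-up table together with the Find-Solution walk-back; it again relies on the hypothetical compute_p helper above, and the small job list at the end is just an illustrative instance:

def wis_bottom_up(jobs):
    # jobs: list of (start, finish, value) triples, sorted by finish time
    n = len(jobs)
    p = compute_p([(s, f) for (s, f, v) in jobs])
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = jobs[j - 1][2]
        M[j] = max(v_j + M[p[j]], M[j - 1])
    # Find-Solution: walk back through M to recover one optimal job set (1-based indices)
    chosen, j = [], n
    while j > 0:
        v_j = jobs[j - 1][2]
        if v_j + M[p[j]] > M[j - 1]:
            chosen.append(j)
            j = p[j]
        else:
            j -= 1
    return M[n], sorted(chosen)

jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]
print(wis_bottom_up(jobs))   # (11, [2, 4])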

Algorithmic Paradigms

Greedy. Build up a solution incrementally, optimizing some local criterion.
Divide-and-conquer. Break up a problem into sub-problems, solve each sub-problem independently, and combine solutions to sub-problems to form a solution to the original problem.
Memoization. Follow divide-and-conquer, but to avoid re-computing values, store computed values in a memo table and first check whether a value was already computed.
Dynamic programming. Break up a problem into a series of sub-problems, build up solutions to larger and larger sub-problems, and store and re-use results.

Memoization

Remember Fibonacci: F(1) = F(2) = 1; F(n) = F(n-1) + F(n-2) for n > 2.

Recursive solution, exponential call tree:

def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

Memo table: follows the recursive solution but stores results to avoid recomputation, with the same call structure as the naive recursive solution:

def fib(n, table):
    # pre: n >= 1; table[i] is either 0 or already contains fib(i)
    if n <= 2:
        return 1
    if table[n] > 0:
        return table[n]
    result = fib(n-1, table) + fib(n-2, table)
    table[n] = result
    return result

Using a memo table, we never compute the same value twice. How many calls now? At most n, so an O(n) time and space bound.

Dynamic Programming
- Characterize the structure of the problem: show how a problem can be solved using solutions to sub-problems
- Recursively define the optimum
- Compute the optimum bottom up, storing solutions of sub-problems, optimizing the size of the stored data structure
- Construct the optimum from the stored data

Discrete Optimization Problems

A discrete optimization problem is a pair (S, f):
- S: space of feasible solutions (satisfying some constraints)
- f : S -> R: cost function associated with feasible solutions

Objective: find an optimal solution x_opt such that f(x_opt) <= f(x) for all x in S (minimization), or f(x_opt) >= f(x) for all x in S (maximization).

Such problems are ubiquitous in many application domains: planning and scheduling, VLSI layout, pattern recognition.

Example: Subset Sums

Given a set of n objects with weights w_i and a capacity W, find a subset S with the largest sum of weights such that the total weight is at most W.

Does greedy work? How would you do it?
- Largest first: No. Weights {3, 2, 2}, W = 4: greedy takes 3 and stops at 3, while 2 + 2 = 4 is optimal.
- Smallest first: No. Weights {1, 2, 2}, W = 4: greedy takes 1, then 2, and stops at 3, while 2 + 2 = 4 is optimal.

Recursive Approach

Two options for object i: either take object i or don't. Assume the current available capacity is W:
- If we take object i, the leftover capacity is W - w_i; this gives rise to one recursive call.
- If we don't, the leftover capacity is still W; this gives rise to another recursive call.
Then, after coming out of the two recursive calls, take the better of the two options.

Recursive solution for the Subset-Sum Problem

Notation: OPT(i, w) = weight of the max-weight subset that uses items 1, ..., i with weight limit w.
Case 1: item i is not included: OPT includes the best of {1, 2, ..., i-1} using weight limit w.
Case 2: item i is included: the new weight limit is w - w_i; OPT includes w_i plus the best of {1, 2, ..., i-1} using the remaining capacity w - w_i.

OPT(i, w) = 0                                                  if i = 0
OPT(i, w) = OPT(i-1, w)                                        if w_i > w
OPT(i, w) = max{ OPT(i-1, w), w_i + OPT(i-1, w - w_i) }        otherwise

Subset Sum Problem: Bottom-Up

Dynamic programming approach: fill an n-by-W array.

Input: n, W, w_1, ..., w_n

for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 0 to W
        if (w_i > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max(M[i-1, w], w_i + M[i-1, w - w_i])
return M[n, W]
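The same table-filling as a runnable Python sketch (row i uses the first i weights); the example call at the end uses the reconstructed instance of the slides that follow and should be read as an assumption:

def subset_sum(weights, W):
    # M[i][w] = largest achievable total weight using the first i items with capacity w
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w_i = weights[i - 1]
        for w in range(W + 1):
            if w_i > w:
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w], w_i + M[i - 1][w - w_i])
    return M

M = subset_sum([2, 6, 4, 3], 11)   # the example instance of the next slides
print(M[4][11])                    # 11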

Subset Sum: Dynamic Programming

Go through the state space bottom-up: first 1 object, then 2 objects, ..., then all objects. What does M[0, *] look like? And M[1, *]? Use solutions of smaller sub-problems to efficiently compute solutions of the larger one:

s0 = M[i-1, c]
if c >= w[i]:
    s1 = M[i-1, c - w[i]] + w[i]
else:
    s1 = 0
M[i, c] = max(s0, s1)

[Figure: the table M with columns 1, 2, 3, ..., i-1, i (objects considered) and rows indexed by capacity c; entry M[i, c] is computed from M[i-1, c] and M[i-1, c - w[i]].]

Example

Capacity: 11. Objects: 1 2 3 4. Weights: 2 6 4 3.
Table: rows are sets of objects (as opposed to columns on the previous slide); the rows are filled in one at a time, bottom-up:

cap:    0  1  2  3  4  5  6  7  8  9  10  11
{}:     0  0  0  0  0  0  0  0  0  0   0   0
{1}:    0  0  2  2  2  2  2  2  2  2   2   2
{1,2}:  0  0  2  2  2  2  6  6  8  8   8   8
{1:3}:  0  0  2  2  4  4  6  6  8  8  10  10
{1:4}:  0  0  2  3  4  5  6  7  8  9  10  11

Which objects? Choice vector?

Example (continued): walk back through the table to find the choice vector.

cap:    0  1  2  3  4  5  6  7  8  9  10  11
{}:     0  0  0  0  0  0  0  0  0  0   0   0
{1}:    0  0  2  2  2  2  2  2  2  2   2   2     pick 1
{1,2}:  0  0  2  2  2  2  6  6  8  8   8   8     pick 2
{1:3}:  0  0  2  2  4  4  6  6  8  8  10  10     not 3
{1:4}:  0  0  2  3  4  5  6  7  8  9  10  11     pick 4

Start at M[4, 11] = 11: object 4 (weight 3) is needed to reach 11, so pick 4 and move to M[3, 8]. M[3, 8] = M[2, 8], so object 3 is not taken. M[2, 8] needs object 2 (weight 6), so pick 2 and move to M[1, 2]. M[1, 2] needs object 1 (weight 2), so pick 1. The chosen set is {1, 2, 4}, with total weight 11.
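The choice vector can also be read off by code; a sketch that walks back through the table M computed by the subset_sum sketch above:

def subset_sum_choice(weights, W, M):
    # recover one optimal subset (1-based item indices) by walking back from M[n][W]
    chosen, w = [], W
    for i in range(len(weights), 0, -1):
        if M[i][w] != M[i - 1][w]:   # item i must have been taken
            chosen.append(i)
            w -= weights[i - 1]
    return sorted(chosen)

print(subset_sum_choice([2, 6, 4, 3], 11, M))   # [1, 2, 4]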

Memo table vs Dynamic Programming

The dynamic programming solution computes ALL values BOTTOM-UP, even ones that are not needed for the final result. The memo table solution computes only the necessary values top-down, but needs to check for each entry whether it has been computed yet. How about the memory requirements of the memo table?

Knapsack Problem

Knapsack problem. Given n objects and a "knapsack" of capacity W, where item i has weight w_i > 0 and value v_i > 0, the goal is to fill the knapsack so as to maximize the total value.

What would be a greedy solution? Repeatedly add the item with the maximum v_i / w_i ratio. Does greedy work?

Capacity M = 7, number of objects n = 3, w = [5, 4, 3], v = [10, 7, 5] (ordered by v_i / w_i ratio).

What is the relation between Subset Sum and Knapsack? Can you turn a Subset Sum problem into a Knapsack problem?
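A quick check of the greedy-by-ratio idea against brute force on this instance (the capacity 7 and the values 10 and 7 are reconstructed from context and should be read as assumptions):

from itertools import combinations

def greedy_by_ratio(w, v, M):
    # take items in decreasing v/w order while they still fit (0/1, no fractions)
    order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
    cap, total = M, 0
    for i in order:
        if w[i] <= cap:
            cap -= w[i]
            total += v[i]
    return total

def brute_force(w, v, M):
    # try every subset; fine for tiny instances
    best = 0
    for k in range(len(w) + 1):
        for S in combinations(range(len(w)), k):
            if sum(w[i] for i in S) <= M:
                best = max(best, sum(v[i] for i in S))
    return best

w, v, M = [5, 4, 3], [10, 7, 5], 7
print(greedy_by_ratio(w, v, M), brute_force(w, v, M))   # 10 12: greedy is not optimal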

Recursion for the Knapsack Problem

Notation: OPT(i, w) = value of the max-value subset that uses items 1, ..., i with weight limit w.
Case 1: item i is not included: OPT includes the best of {1, 2, ..., i-1} using weight limit w.
Case 2: item i is included: the new weight limit is w - w_i; OPT includes v_i plus the best of {1, 2, ..., i-1} using weight limit w - w_i.

OPT(i, w) = 0                                                  if i = 0
OPT(i, w) = OPT(i-1, w)                                        if w_i > w
OPT(i, w) = max{ OPT(i-1, w), v_i + OPT(i-1, w - w_i) }        otherwise

Knapsack Problem: Bottom-Up

Dynamic programming approach: fill an n-by-W array.

Input: n, W, weights w_1, ..., w_n, values v_1, ..., v_n

for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 0 to W
        if (w_i > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max(M[i-1, w], v_i + M[i-1, w - w_i])
return M[n, W]
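The same knapsack table-filling as a runnable Python sketch; the example call uses the instance of the next slide:

def knapsack(weights, values, W):
    # M[i][w] = best achievable value using the first i items with capacity w
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w_i, v_i = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if w_i > w:
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w], v_i + M[i - 1][w - w_i])
    return M

M = knapsack([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 11)
print(M[5][11])   # 40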

Knapsack Algorithm

The filled table (W + 1 columns, n + 1 rows):

cap w:        0  1  2  3  4  5   6   7   8   9  10  11
{}:           0  0  0  0  0  0   0   0   0   0   0   0
{1}:          0  1  1  1  1  1   1   1   1   1   1   1
{1,2}:        0  1  6  7  7  7   7   7   7   7   7   7
{1,2,3}:      0  1  6  7  7  18  19  24  25  25  25  25
{1,2,3,4}:    0  1  6  7  7  18  22  24  28  29  29  40
{1,2,3,4,5}:  0  1  6  7  7  18  22  28  29  34  34  40

OPT: 40

Item  Value  Weight
1     1      1
2     6      2
3     18     5
4     22     6
5     28     7

W = 11

How do we find the objects in the optimum solution? Walk back through the table!

Knapsack Algorithm (walking back through the table)

- n = 5: M[5, 11] = 40 = M[4, 11], so don't take object 5.
- n = 4: M[4, 11] = 40 > M[3, 11] = 25, so take object 4 (value 22, weight 6); the remaining capacity is 11 - 6 = 5.
- n = 3: M[3, 5] = 18 > M[2, 5] = 7, so take object 3 (value 18, weight 5); and now we cannot take any more, so the choice set is {3, 4}.
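And the walk-back as a small sketch, reusing the table M from the knapsack sketch above:

def knapsack_choice(weights, W, M):
    # recover the chosen items (1-based indices) by walking back from M[n][W]
    chosen, w = [], W
    for i in range(len(weights), 0, -1):
        if M[i][w] != M[i - 1][w]:   # taking item i improved the value, so it was taken
            chosen.append(i)
            w -= weights[i - 1]
    return sorted(chosen)

print(knapsack_choice([1, 2, 5, 6, 7], 11, M))   # [3, 4]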

Knapsack Problem: Running Time

Running time: Θ(nW). This is not polynomial in the input size! W can be exponential in n, so the algorithm is only "pseudo-polynomial". The decision version of Knapsack is NP-complete. [Chapter 8]

Knapsack approximation algorithm. There exists a poly-time algorithm that produces a feasible solution whose value is within 0.01% of the optimum.