CSE 421 Dynamic Programming

CSE 421 Dynamic Programming. Yin Tat Lee

Weighted Interval Scheduling

Interval Scheduling. Job j starts at s(j), finishes at f(j), and has weight w_j. Two jobs are compatible if they don't overlap. Goal: find a maximum-weight subset of mutually compatible jobs. [Figure: eight jobs a-h laid out on a timeline.]

Unweighted Interval Scheduling: Review. Recall: the greedy algorithm works if all weights are 1: consider jobs in ascending order of finish time, and add a job to the subset if it is compatible with the previously added jobs. Observation: the greedy algorithm fails spectacularly if arbitrary weights are allowed. [Figure: counterexamples with weights 1 and 999 showing that ordering by finish time and ordering by weight both fail.]
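The unit-weight greedy reviewed above can be sketched in a few lines. The job list below is an illustrative instance, not the one from the slides.

```python
# Unit-weight greedy: sort by finish time, take each job that is
# compatible with the last job taken. Jobs are (start, finish) pairs.

def greedy_interval_scheduling(jobs):
    """Return a maximum-size set of mutually compatible (start, finish) jobs."""
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(jobs, key=lambda j: j[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

jobs = [(0, 6), (1, 4), (3, 5), (3, 8), (4, 7), (5, 9), (6, 10), (8, 11)]
print(greedy_interval_scheduling(jobs))  # [(1, 4), (4, 7), (8, 11)]
```

Sorting dominates, so the whole procedure runs in O(n log n).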

Weighted Job Scheduling by Induction. Suppose 1, ..., n are all the jobs. Let us use induction. IH: suppose we can compute the optimum job scheduling for fewer than n jobs. IS: goal: for any n jobs we can compute OPT. Case 1: job n is not in OPT. Then just return OPT of 1, ..., n-1. Case 2: job n is in OPT. Then delete all jobs not compatible with n and recurse. Take the best of the two. Q: Are we done? A: No. How many subproblems are there? Potentially 2^n, all possible subsets of jobs.

Sorting to Reduce Subproblems. Why can't we order by start time? Sorting idea: label jobs by finishing time, f(1) <= ... <= f(n). IS: for jobs 1, ..., n we want to compute OPT. Case 1: suppose OPT contains job n. Then all jobs i that are not compatible with n are not in OPT. Let p(n) = the largest index i < n such that job i is compatible with n. Then we just need to find OPT of 1, ..., p(n).

Sorting to Reduce Subproblems. Sorting idea: label jobs by finishing time, f(1) <= ... <= f(n). IS: for jobs 1, ..., n we want to compute OPT. Case 1: suppose OPT contains job n. Then all jobs i that are not compatible with n are not in OPT. Let p(n) = the largest index i < n such that job i is compatible with n. Then we just need to find OPT of 1, ..., p(n). Case 2: OPT does not select job n. Then OPT is just the OPT of 1, ..., n-1. Take the best of the two. Q: Have we made any progress (we are still reducing to two subproblems)? A: Yes! This time every subproblem is of the form 1, ..., i for some i, so there are at most n possible subproblems.

Weighted Job Scheduling by Induction. Sorting idea: label jobs by finishing time, f(1) <= ... <= f(n). Def: let OPT(j) denote the weight of the optimal solution for jobs 1, ..., j. (This definition is the most important part of a correct DP; it fixes the IH.) To solve OPT(j): Case 1: OPT(j) contains job j. Then all jobs i that are not compatible with j are not in OPT(j). Let p(j) = the largest index i < j such that job i is compatible with j. So OPT(j) = w_j + OPT(p(j)). Case 2: OPT(j) does not select job j. Then OPT(j) = OPT(j-1). Therefore:

    OPT(j) = 0                                      if j = 0
    OPT(j) = max(w_j + OPT(p(j)), OPT(j-1))         otherwise

Algorithm.

    Input: n, s(1), ..., s(n), f(1), ..., f(n), and w_1, ..., w_n.

    Sort jobs by finish times so that f(1) <= f(2) <= ... <= f(n).
    Compute p(1), p(2), ..., p(n).

    OPT(j) {
        if (j = 0) return 0
        else return max(w_j + OPT(p(j)), OPT(j-1))
    }

Recursive Algorithm Fails. Even though we have only n subproblems, if we do not store the solutions to the subproblems we may re-solve the same subproblem many, many times. Ex: the number of recursive calls for the family of "layered" instances with p(1) = 0 and p(j) = j - 2 grows like the Fibonacci sequence.
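The blow-up can be checked directly by counting calls. The sketch below assumes the "layered" family is the one with p(1) = 0 and p(j) = j - 2; weights are irrelevant to the call count, so they are omitted.

```python
# Count OPT() invocations made by the unmemoized recursion on the
# layered family p(j) = j - 2. The count satisfies a Fibonacci-like
# recurrence calls(j) = 1 + calls(j-2) + calls(j-1), so it grows
# exponentially in n.

def calls(j):
    """Number of OPT() invocations when computing OPT(j) without memoization."""
    if j <= 1:
        return 1
    return 1 + calls(j - 2) + calls(j - 1)  # p(j) = j - 2 in this family

for n in (5, 10, 20):
    print(n, calls(n))  # 15, 177, 21891 -- exponential growth
```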

Algorithm with Memoization. Memoization: compute and store the solution of each subproblem in a cache the first time you encounter it; look it up as needed.

    Input: n, s(1), ..., s(n), f(1), ..., f(n), and w_1, ..., w_n.

    Sort jobs by finish times so that f(1) <= f(2) <= ... <= f(n).
    Compute p(1), p(2), ..., p(n).
    for j = 1 to n
        M[j] = empty
    M[0] = 0

    OPT(j) {
        if (M[j] is empty)
            M[j] = max(w_j + OPT(p(j)), OPT(j-1))
        return M[j]
    }

In practice, the recursion may overflow the stack when n is large (depends on the language).

Bottom-up Dynamic Programming. You can also avoid recursion; recursion may be easier conceptually when you use induction.

    Input: n, s(1), ..., s(n), f(1), ..., f(n), and w_1, ..., w_n.

    Sort jobs by finish times so that f(1) <= f(2) <= ... <= f(n).
    Compute p(1), p(2), ..., p(n).

    M[0] = 0
    for j = 1 to n
        M[j] = max(w_j + M[p(j)], M[j-1])
    Output M[n]

Claim: M[j] is the value of OPT(j). Timing: easy. The main loop is O(n); sorting is O(n log n).
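The bottom-up algorithm above can be made runnable as follows. The job list is an illustrative instance (not the slides'), and p(j) is computed by binary search over the sorted finish times, keeping the whole procedure O(n log n).

```python
import bisect

# Bottom-up weighted interval scheduling. Jobs are (start, finish, weight).

def weighted_interval_scheduling(jobs):
    """Return the maximum total weight of mutually compatible jobs."""
    jobs = sorted(jobs, key=lambda j: j[1])     # label jobs by finish time
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)
    # p[j] = number of jobs (1-indexed) that finish no later than job j starts,
    # i.e. the largest index compatible with job j.
    p = [bisect.bisect_right(finishes, jobs[j][0]) for j in range(n)]
    M = [0] * (n + 1)                           # M[j] = OPT(1..j)
    for j in range(1, n + 1):
        w = jobs[j - 1][2]
        M[j] = max(M[j - 1], w + M[p[j - 1]])
    return M[n]

jobs = [(1, 4, 2), (3, 5, 4), (0, 6, 4), (4, 7, 7), (3, 8, 2), (5, 9, 1)]
print(weighted_interval_scheduling(jobs))  # 9: take (1,4,2) and (4,7,7)
```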

Example.

    OPT(j) = 0                                      if j = 0
    OPT(j) = max(w_j + OPT(p(j)), OPT(j-1))         otherwise

Label jobs by finishing time: f(1) <= ... <= f(n). p(j) = the largest index i < j such that job i is compatible with j. [Figure: a timeline of jobs with a table of j, w_j, p(j), and OPT(j), filled in one entry per step; the numeric values were lost in extraction.]

Dynamic Programming. Give a solution of a problem using smaller (overlapping) subproblems, where the parameters of all subproblems are determined in advance. Useful when the same subproblems show up again and again in the solution.

Knapsack Problem

Knapsack Problem. Given n objects and a "knapsack." Item i weighs w_i > 0 kilograms and has value v_i > 0. The knapsack has a capacity of W kilograms. Goal: fill the knapsack so as to maximize the total value. [Table of item values and weights lost in extraction.] Greedy: repeatedly add the item with maximum ratio v_i/w_i. In the slide's example the greedy solution achieves strictly less value than OPT, so greedy is not optimal.
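Since the slide's concrete numbers did not survive extraction, here is a minimal sketch of the ratio greedy failing on an illustrative instance of my own (not the slides'): items are (value, weight) pairs with capacity W = 50.

```python
# Greedy by value/weight ratio on items (60,10), (100,20), (120,30),
# W = 50. Ratios are 6, 5, 4, so greedy takes the first two items for
# value 160, while OPT takes the last two for value 220.

def greedy_by_ratio(items, W):
    """Greedily pack (value, weight) items by descending v/w ratio."""
    total_value, remaining = 0, W
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= remaining:
            total_value += v
            remaining -= w
    return total_value

items = [(60, 10), (100, 20), (120, 30)]
print(greedy_by_ratio(items, 50))  # 160 -- but OPT is {(100,20),(120,30)} = 220
```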

Dynamic Programming: First Attempt. Let OPT(i) = the maximum value of subsets of items 1, ..., i of total weight at most W. Case 1: OPT(i) does not select item i. In this case OPT(i) = OPT(i-1). Case 2: OPT(i) selects item i. In this case, selecting item i does not immediately imply we have to reject other items. The problem does not reduce to OPT(i-1), because we now want to pack as much value as possible into a knapsack of capacity W - w_i. Conclusion: we need more subproblems; we need to strengthen the IH.

Stronger DP (Strengthening the Hypothesis). What ordering of the items should we pick? Let OPT(i, w) = the maximum value of subsets of items 1, ..., i of total weight at most w. Case 1: OPT(i, w) selects item i. In this case, OPT(i, w) = v_i + OPT(i-1, w - w_i). Case 2: OPT(i, w) does not select item i. In this case, OPT(i, w) = OPT(i-1, w). Take the best of the two. Therefore:

    OPT(i, w) = 0                                               if i = 0
    OPT(i, w) = OPT(i-1, w)                                     if w_i > w
    OPT(i, w) = max(OPT(i-1, w), v_i + OPT(i-1, w - w_i))       otherwise

DP for Knapsack.

Recursive (memoized):

    Comp-OPT(i, w) {
        if (M[i, w] == empty)
            if (i == 0) M[i, w] = 0
            else if (w_i > w) M[i, w] = Comp-OPT(i-1, w)
            else M[i, w] = max(Comp-OPT(i-1, w), v_i + Comp-OPT(i-1, w - w_i))
        return M[i, w]
    }

Non-recursive:

    for w = 0 to W
        M[0, w] = 0
    for i = 1 to n
        for w = 0 to W
            if (w_i > w) M[i, w] = M[i-1, w]
            else M[i, w] = max(M[i-1, w], v_i + M[i-1, w - w_i])
    return M[n, W]
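The non-recursive table can be made runnable as below. The instance is illustrative (the slides' own item values were lost in extraction).

```python
# Bottom-up 0/1 knapsack: M[i][w] = OPT(i, w), the best value achievable
# using items 1..i with capacity w. Runs in Theta(n * W).

def knapsack(values, weights, W):
    """Return the max total value of a subset of items with total weight <= W."""
    n = len(values)
    M = [[0] * (W + 1) for _ in range(n + 1)]   # row 0: no items, value 0
    for i in range(1, n + 1):
        v, wt = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            if wt > w:
                M[i][w] = M[i - 1][w]           # item i cannot fit
            else:
                M[i][w] = max(M[i - 1][w], v + M[i - 1][w - wt])
    return M[n][W]

values = [1, 6, 18, 22, 28]
weights = [1, 2, 5, 6, 7]
print(knapsack(values, weights, 11))  # 40: the items with values 18 and 22
```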

DP for Knapsack. [Figure: the (n+1) x (W+1) table M[i, w], with one row per prefix of items { }, {1}, {1, 2}, ..., {1, ..., 5} and one column per capacity w = 0, ..., W, filled in row by row over several slides; the numeric entries were lost in extraction.] The recurrence applied at each cell: if w_i > w then M[i, w] = M[i-1, w], else M[i, w] = max(M[i-1, w], v_i + M[i-1, w - w_i]).

Knapsack Problem: Running Time. Running time: Theta(nW). This is not polynomial in the input size! It is "pseudo-polynomial." The decision version of Knapsack is NP-complete. Knapsack approximation algorithm: there exists a polynomial-time algorithm that produces a feasible solution with value within any fixed percentage of the optimum, in time Poly(n, log W).

DP Ideas So Far. You may have to define an ordering to decrease the number of subproblems. You may have to strengthen the DP, equivalently the induction; i.e., you may have to carry more information to find the optimum. This means that sometimes we have to use two-dimensional or three-dimensional induction.