Dynamic Programming. Data Structures and Algorithms Andrei Bulatov


Weighted Interval Scheduling

Weighted interval scheduling problem.
Instance: A set of n jobs. Job j starts at s_j, finishes at f_j, and has weight (or value) v_j. Two jobs are compatible if they don't overlap.
Objective: Find a maximum-weight subset of mutually compatible jobs.

Unweighted Interval Scheduling: Review

Recall: The greedy algorithm works if all weights are 1. Consider jobs in ascending order of finish time; add a job to the subset if it is compatible with the previously chosen jobs.

Observation. The greedy algorithm can fail spectacularly if arbitrary weights are allowed.

[Figure: a short job a of weight 1 finishing first overlaps a long job b of weight 999 on a timeline from 0 to 11; greedy by finish time picks a and forfeits b.]

Weighted Interval Scheduling

Notation: Label jobs by finish time: f_1 ≤ f_2 ≤ … ≤ f_n. Let p(j) be the largest index i < j such that job i is compatible with job j.

Example. p(8) = 5, p(7) = 3, p(2) = 0.

[Figure: eight jobs on a timeline from 0 to 11, ordered by finish time.]

Dynamic Programming: Binary Choice

Let OPT(j) denote the value of an optimal solution to the problem consisting of job requests 1, 2, …, j.

Case 1: OPT selects job j. It cannot use the incompatible jobs p(j)+1, p(j)+2, …, j-1, and it must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, …, p(j).

Case 2: OPT does not select job j. It must include an optimal solution to the problem consisting of jobs 1, 2, …, j-1 (optimal substructure).

Hence OPT(j) = 0 if j = 0, and otherwise
OPT(j) = max{ v_j + OPT(p(j)), OPT(j-1) }

Weighted Interval Scheduling: Brute Force

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n
compute p(1), p(2), …, p(n)
return Compute-Opt(n)

Compute-Opt(j)
  if (j = 0) return 0
  else return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))

Weighted Interval Scheduling: Brute Force

Observation. The recursive algorithm fails spectacularly because of redundant sub-problems: it is an exponential algorithm.

Example. For the family of "layered" instances with p(1) = 0 and p(j) = j-2, the number of recursive calls grows like the Fibonacci sequence.

[Figure: recursion tree for Compute-Opt(5), branching into subtrees for 4 and 3, then 3 and 2, and so on.]

Weighted Interval Scheduling: Memoization

Memoization: Store the results of each sub-problem in a cache; look them up as needed.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n
compute p(1), p(2), …, p(n)
set OPT[0]:=0
for j=1 to n do
  set OPT[j]:=max(v_j + OPT[p(j)], OPT[j-1])
endfor
return OPT[n]
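The same bottom-up computation in Python, as a minimal sketch; the job representation and the bisect-based computation of p(j) are my own choices, not from the slides:

from bisect import bisect_right

def weighted_interval_scheduling(jobs):
    # jobs: list of (start, finish, value) tuples; returns (OPT, p, sorted jobs).
    jobs = sorted(jobs, key=lambda j: j[1])        # sort by finish time
    finish = [f for _, f, _ in jobs]
    n = len(jobs)
    p = [0] * (n + 1)                              # jobs indexed 1..n here
    for j in range(1, n + 1):
        s = jobs[j - 1][0]
        # p[j] = number of jobs among 1..j-1 with finish time <= start of j
        p[j] = bisect_right(finish, s, 0, j - 1)
    OPT = [0] * (n + 1)
    for j in range(1, n + 1):
        v = jobs[j - 1][2]
        OPT[j] = max(v + OPT[p[j]], OPT[j - 1])
    return OPT, p, jobs

# Example: the two-job instance from the "greedy fails" slide.
OPT, p, jobs = weighted_interval_scheduling([(0, 4, 1), (1, 11, 999)])
print(OPT[-1])   # 999: the heavy job is chosen, unlike greedy by finish time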

Weighted Interval Scheduling: Running Time

Theorem. The memoized version of the algorithm takes O(n log n) time.
Proof. Sorting by finish time takes O(n log n). Computing p(·) takes O(n) after sorting by finish time. Each iteration of the for loop takes O(1). The overall time is O(n log n). QED

Remark. O(n) if the jobs are pre-sorted by finish times.

Automated Memoization

Automated memoization. Many functional programming languages (e.g., Lisp) have built-in support for memoization.

Q. Why not in imperative languages (e.g., Java)?

(defun F (n)
  (if (<= n 1)
      n
      (+ (F (- n 1)) (F (- n 2)))))
Lisp (efficient)

static int F(int n) {
  if (n <= 1) return n;
  else return F(n-1) + F(n-2);
}
Java (exponential)

[Figure: recursion tree of F(40), showing the same calls F(38), F(37), … recomputed many times.]
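In Python, much the same effect is available through a decorator; a minimal sketch using functools.lru_cache from the standard library:

from functools import lru_cache

@lru_cache(maxsize=None)       # cache every result, keyed by the argument
def F(n):
    return n if n <= 1 else F(n - 1) + F(n - 2)

print(F(40))                   # 102334155, with only O(n) distinct calls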

Finding a Solution

The dynamic programming algorithm computes the optimal value. What if we want the solution itself? Do some post-processing:

Find-Solution(j)
  if j = 0 then
    output nothing
  else if v_j + OPT[p(j)] > OPT[j-1] then
    print j
    Find-Solution(p(j))
  else
    Find-Solution(j-1)
  endif
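Continuing the hypothetical Python sketch above, the same traceback written iteratively; it reuses the OPT, p, and jobs values returned by weighted_interval_scheduling:

def find_solution(OPT, p, jobs, j):
    # Walk back through the table, taking job j exactly when it wins the max.
    chosen = []
    while j > 0:
        v_j = jobs[j - 1][2]
        if v_j + OPT[p[j]] > OPT[j - 1]:
            chosen.append(j)            # job j is in this optimal solution
            j = p[j]
        else:
            j -= 1
    return chosen

print(find_solution(OPT, p, jobs, len(jobs)))   # [2]: the weight-999 job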

Knapsack

The Knapsack Problem
Instance: A set of n objects, each of which has a positive integer value v_i and a positive integer weight w_i, and a weight limit W.
Objective: Select objects so that their total weight does not exceed W and their total value is maximal.

Idea

A simple question: should we include the last object in the selection?

Let OPT(n,W) denote the maximal value of a selection of objects out of {1,…,n} such that the total weight of the selection doesn't exceed W.

More generally, let OPT(i,U) denote the maximal value of a selection of objects out of {1,…,i} such that the total weight of the selection doesn't exceed U.

Then OPT(n,W) = max{ OPT(n-1, W), OPT(n-1, W - w_n) + v_n }

Algorithm (First Try)

Knapsack(n,W)
  if n = 0 then return 0
  set V1:=Knapsack(n-1,W)
  set V2:=Knapsack(n-1,W-w_n)
  output max(V1, V2+v_n)

Is it good enough?

Example. Let the values be 1,3,4,2, the weights 1,1,3,2, and W = 5.

[Figure: recursion tree of the calls Knapsack(i,U), with the same sub-problems appearing repeatedly.]

Another Idea: Memoization

Let us store the values OPT(i,U) as we find them. We need to store (and compute) at most n·W numbers.

We'll do it in a regular way: instead of recursion, we will compute those values starting from the smaller ones, and fill up a table.

Algorithm (Second Try)

Knapsack(n,W)
  array M[0..n,0..W]
  set M[0,w]:=0 for each w=0,1,...,W
  for i=1 to n do
    for w=0 to W do
      if w_i > w then set M[i,w]:=M[i-1,w]
      else set M[i,w]:=max{M[i-1,w], M[i-1,w-w_i]+v_i}
    endfor
  endfor
  return M[n,W]

Example

Example. Let the values be 1,3,4,2, the weights 1,1,3,2, and W = 5.

w \ i   0   1   2   3   4
  0     0   0   0   0   0
  1     0   1   3   3   3
  2     0   1   4   4   4
  3     0   1   4   4   5
  4     0   1   4   7   7
  5     0   1   4   8   8

M[i,w] = max{ M[i-1,w], M[i-1,w-w_i] + v_i }
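The table-filling algorithm in Python, as a minimal sketch (function and variable names are mine); run on the example above, it returns the table's final entry 8:

def knapsack(values, weights, W):
    n = len(values)
    M = [[0] * (W + 1) for _ in range(n + 1)]      # M[0][w] = 0 for all w
    for i in range(1, n + 1):
        v, w_i = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            if w_i > w:
                M[i][w] = M[i - 1][w]              # object i does not fit
            else:
                M[i][w] = max(M[i - 1][w], M[i - 1][w - w_i] + v)
    return M[n][W]

print(knapsack([1, 3, 4, 2], [1, 1, 3, 2], 5))     # 8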

Shortest Path

Suppose that every arc e of a digraph G has a length (or cost, or weight, …) len(e). But now we allow negative lengths (weights). Then we can naturally define the length of a directed path in G, and the distance between any two nodes.

The s-t Shortest Path Problem
Instance: A digraph G with lengths of arcs, and nodes s, t.
Objective: Find a shortest path between s and t.

Shortest Path: Difficulties

Negative cycles: if an s-t path can reach a cycle of negative total length, there is no shortest s-t path, since traversing the cycle again and again makes the path shorter and shorter.

Greediness fails: always taking the cheapest outgoing arc can miss a path whose expensive first arc is followed by a very cheap (negative) one.

Adding a constant weight to all arcs fails: it penalizes paths with more arcs, so it can change which path is shortest.

[Figures: a negative cycle lying on an s-t path; a small example with arc lengths 3, 2, -6 and 1 where greedy picks the wrong branch.]

Shortest Path: Observations

Assumption. There are no negative cycles.

Lemma. If a graph G has no negative cycles, then there is a shortest path from s to t that is simple (i.e., does not repeat nodes), and hence has at most n-1 arcs.

Proof. If a shortest path P from s to t repeats a node v, then it also includes a cycle C starting and ending at v. The weight of the cycle is non-negative, therefore removing the cycle yields a path that is no longer than P. QED

Shortest Path: Dynamic Programming

We will be looking for a shortest path with an increasing number of arcs. Let OPT(i,v) denote the minimum weight of a path from v to t using at most i arcs.

A shortest v-t path can use at most i-1 arcs; then OPT(i,v) = OPT(i-1,v). Or it can use i arcs, the first arc being (v,w); then OPT(i,v) = len(vw) + OPT(i-1,w). Hence

OPT(i,v) = min{ OPT(i-1,v), min_{w∈V} { OPT(i-1,w) + len(vw) } }

Shortest Path: Bellman-Ford Algorithm

Shortest-Path(G,s,t)
  set n:=|V|  /* number of nodes in G */
  array M[0..n-1,V]
  set M[0,t]:=0 and M[0,v]:=∞ for each v ∈ V-{t}
  for i=1 to n-1 do
    for v ∈ V do
      set M[i,v]:=min{M[i-1,v], min_{w∈V} {M[i-1,w]+len(vw)}}
    endfor
  endfor
  return M[n-1,s]
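A minimal Python sketch of this table version; the adjacency encoding and the tiny test graph are my own hypothetical choices, not from the slides:

def shortest_path(nodes, arcs, s, t):
    # arcs: dict mapping each node v to a list of (w, length) out-arcs.
    INF = float('inf')
    n = len(nodes)
    M = {v: (0 if v == t else INF) for v in nodes}
    for i in range(1, n):                      # rounds i = 1 .. n-1
        M_new = {}
        for v in nodes:
            best = M[v]                        # path with at most i-1 arcs, or ...
            for w, length in arcs.get(v, []):
                best = min(best, M[w] + length)   # ... first arc (v,w)
            M_new[v] = best
        M = M_new
    return M[s]

nodes = ['s', 'a', 't']
arcs = {'s': [('a', 4), ('t', 5)], 'a': [('t', -3)]}
print(shortest_path(nodes, arcs, 's', 't'))    # 1, via s -> a -> t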

Example

[Figure: a digraph with nodes a, b, c, d, e and target t, including several negative arcs, together with the table of values M[i,v] for i = 0,…,5; the column i = 0 is 0 for t and ∞ for all other nodes, and the entries stabilize by i = 5.]

M[i,v] = min{ M[i-1,v], min_{w∈V} { M[i-1,w] + len(vw) } }

Shortest Path: Soundness and Running Time

Theorem. The Shortest-Path algorithm correctly computes the minimum cost of an s-t path in any graph that has no negative cycles, and runs in O(n³) time.

Proof. Soundness follows by induction from the recurrence relation for the optimal value. DIY.
Running time: we fill up a table with n² entries, and each of them requires O(n) time. QED

Shortest Path: Soundness and Running Time

Theorem. The Shortest-Path algorithm can be implemented in O(mn) time.

A big improvement for sparse graphs.

Proof. Consider the computation of the array entry M[i,v]:
M[i,v] = min{ M[i-1,v], min_{w∈V} { M[i-1,w] + len(vw) } }
We need only compute the minimum over all nodes w for which v has an edge to w. Let n_v denote the number of such edges.

Shortest Path: Running Time Improvements

It takes O(n_v) time to compute the array entry M[i,v], and it needs to be computed for every node v and for each i, 1 ≤ i ≤ n. Thus the bound for the running time is

O( n · Σ_{v∈V} n_v ) = O(nm).

Indeed, n_v is the outdegree of v, so Σ_{v∈V} n_v = m, and we have the result by the Handshaking Lemma. QED

Shortest Path: Space Improvements

The straightforward implementation requires storing a table with n² entries. It can be reduced to O(n).

Instead of recording M[i,v] for each i, we use and update a single value M[v] for each node v: the length of the shortest path from v to t found so far.

Thus we use the following recurrence relation:
M[v] = min{ M[v], min_{w∈V} { M[w] + len(vw) } }
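The space-saving version in Python, again a sketch using the same hypothetical graph encoding as above:

def shortest_path_compact(nodes, arcs, s, t):
    INF = float('inf')
    M = {v: (0 if v == t else INF) for v in nodes}
    for _ in range(len(nodes) - 1):            # n-1 rounds of in-place updates
        for v in nodes:
            for w, length in arcs.get(v, []):
                if M[w] + length < M[v]:
                    M[v] = M[w] + length       # found a shorter v-t path via w
    return M[s]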

Shortest Path: Space Improvements (cntd)

Lemma. Throughout the algorithm M[v] is the length of some path from v to t, and after i rounds of updates the value M[v] is no larger than the length of the shortest path from v to t using at most i edges.

Shortest Path: Finding the Shortest Path

In the standard version we only need to keep a record of how the optimum is achieved. Consider the space-saving version. For each node v, store the first node on its path to the destination t; denote it by first(v), and update it every time M[v] is updated.

Let P be the pointer graph P = (V, {(v, first(v)) : v ∈ V}).
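A sketch of that bookkeeping in Python, extending the compact version above; once the updates settle, the path is read off by following the first(v) pointers (this assumes no negative cycles and that t is reachable from s):

def shortest_path_with_path(nodes, arcs, s, t):
    INF = float('inf')
    M = {v: (0 if v == t else INF) for v in nodes}
    first = {v: None for v in nodes}
    for _ in range(len(nodes) - 1):
        for v in nodes:
            for w, length in arcs.get(v, []):
                if M[w] + length < M[v]:
                    M[v], first[v] = M[w] + length, w   # record the first arc
    path, v = [s], s
    while v != t:                              # walk the pointer graph P
        v = first[v]
        path.append(v)
    return M[s], path

nodes = ['s', 'a', 't']
arcs = {'s': [('a', 4), ('t', 5)], 'a': [('t', -3)]}
print(shortest_path_with_path(nodes, arcs, 's', 't'))   # (1, ['s', 'a', 't'])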

Shortest Path: Finding the Shortest Path

Lemma. If the pointer graph P contains a cycle C, then this cycle must have negative cost.

Proof. If w = first(v) at any time, then M[v] ≥ M[w] + len(vw).
Let v_1, v_2, …, v_k be the nodes along the cycle C, and let (v_k, v_1) be the last arc to be added. Consider the values right before this arc is added. We have M[v_i] ≥ M[v_{i+1}] + len(v_i v_{i+1}) for i = 1,…,k-1, and M[v_k] > M[v_1] + len(v_k v_1). Adding up all the inequalities, we get

0 > Σ_{i=1}^{k-1} len(v_i v_{i+1}) + len(v_k v_1),

i.e., the total length of the cycle C is negative. QED

Shortest Path: Finding the Shortest Path (cntd)

Lemma. Suppose G has no negative cycles, and let P be the pointer graph after termination of the algorithm. For each node v, the path in P from v to t is a shortest v-t path in G.

Proof. Observe that P is a tree. Since the algorithm has terminated, we have M[v] = M[w] + len(vw), where w = first(v). As M[t] = 0, the length of the path traced out by the pointer graph is exactly M[v], which is the shortest-path distance. QED

Shortest Path: Finding Negative Cycles

Two questions:
- How to decide if there is a negative cycle?
- How to find one?

Lemma. It suffices to find negative cycles C such that t can be reached from C.

[Figure: two negative cycles, each with a path leading to t.]

Shortest Path: Finding Negative Cycles

Proof. Let G be a graph. The augmented graph A(G) is obtained by adding a new node t and connecting every node of G to the new node t (say, by an arc of length 0). As is easily seen, G contains a negative cycle if and only if A(G) contains a negative cycle C such that t is reachable from C. QED

Shortest Path: Finding Negative Cycles (cntd)

Extend OPT(i,v) to i ≥ n. If the graph G does not contain negative cycles, then OPT(i,v) = OPT(n-1,v) for all nodes v and all i ≥ n. Indeed, this follows from the observation that every shortest path contains at most n-1 arcs.

Lemma. There is no negative cycle with a path to t if and only if OPT(n,v) = OPT(n-1,v) for all nodes v.

Proof. If there is no negative cycle, then OPT(n,v) = OPT(n-1,v) for all nodes v by the observation above.

Shortest Path: Finding Negative Cycles (cntd)

Proof (cntd). Conversely, suppose OPT(n,v) = OPT(n-1,v) for all nodes v. Then

OPT(n+1,v) = min{ OPT(n,v), min_{w∈V} { OPT(n,w) + len(vw) } }
           = min{ OPT(n-1,v), min_{w∈V} { OPT(n-1,w) + len(vw) } }
           = OPT(n,v),

and, iterating, OPT(i,v) = OPT(n-1,v) for all i ≥ n. However, if a negative cycle from which t is reachable exists, then lim_{i→∞} OPT(i,v) = -∞, a contradiction. QED

Shortest Path: Finding Negative Cycles (cntd)

Let v be a node such that OPT(n,v) < OPT(n-1,v). A path P from v to t of weight OPT(n,v) must use exactly n arcs. Any simple path can have at most n-1 arcs, therefore P contains a cycle C.

Lemma. If G has n nodes and OPT(n,v) < OPT(n-1,v), then a path P of weight OPT(n,v) contains a cycle C, and C is negative.

Proof. Every path from v to t using fewer than n arcs has greater weight. Let w be a node that occurs in P more than once, and let C be the cycle between the two occurrences of w. Deleting C yields a path with fewer arcs; if C were non-negative, this path would be no heavier than P, contradicting the previous sentence. Thus C is negative. QED
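To make the detection idea concrete, a minimal Python sketch under the same hypothetical graph encoding as the earlier sketches: build A(G) by attaching a length-0 arc from every node to a fresh sink, run the update rounds, and report a negative cycle exactly when the values are still changing in round n.

def has_negative_cycle(nodes, arcs):
    INF = float('inf')
    t = '__t*__'                                   # fresh sink; assumed unused
    n = len(nodes) + 1                             # number of nodes in A(G)
    aug = {v: list(arcs.get(v, [])) + [(t, 0)] for v in nodes}
    M = {v: INF for v in nodes}
    M[t] = 0
    for _ in range(n):                             # one round beyond n-1
        changed = False
        for v in nodes:
            for w, length in aug[v]:
                if M[w] + length < M[v]:
                    M[v] = M[w] + length
                    changed = True
        if not changed:
            return False                           # converged: no negative cycle
    return True                                    # still changing in round n

# Example: the cycle a -> b -> a has total cost 1 + (-2) = -1.
print(has_negative_cycle(['a', 'b'], {'a': [('b', 1)], 'b': [('a', -2)]}))  # True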