Dynamic Programming. Reading: CLRS Chapter 15 & Section 25.2. CSE 6331: Algorithms. Steve Lai


Optimization Problems

Problems that can be solved by dynamic programming are typically optimization problems.

Optimization problems: construct a set or a sequence (y_1, ..., y_k) of elements that satisfies a given constraint and optimizes a given objective function.

The closest pair problem is an optimization problem. The convex hull problem is an optimization problem.

Problems and Subproblems

Consider the closest pair problem: given a set of points A = {p_1, p_2, p_3, ..., p_n}, find a closest pair in A.

Let P(i, j) denote the problem of finding a closest pair in A_{i,j} = {p_i, p_{i+1}, ..., p_j}, where 1 ≤ i ≤ j ≤ n. We have a class of similar problems, indexed by (i, j). The original problem is P(1, n).

Dynamic Programming: Basic Ideas

Problem: construct an optimal solution (x_1, x_2, ..., x_k).

There are several options for x_1, say op_1, op_2, ..., op_d. Each option op_j leads to a subproblem P_j: given x_1 = op_j, find an optimal solution (x_2, ..., x_k).

The best of these optimal solutions, i.e.,

    Best of { (x_1 = op_j, x_2, ..., x_k) : 1 ≤ j ≤ d },

is an optimal solution to the original problem. DP works only if each P_j is a problem similar to the original problem.

Dynamic Programming: Basic Ideas

Apply the same reasoning to each subproblem, sub-subproblem, sub-sub-subproblem, and so on. This yields a tree of the original problem (root) and subproblems.

Dynamic programming works when these subproblems have many duplicates, are of the same type, and can be described using, typically, one or two parameters. The tree of problems/subproblems (which is of exponential size) is then condensed into a smaller, polynomial-size graph. Now solve the subproblems from the "leaves" up.

Design a Dynamic Programming Algorithm

1. View the problem as constructing an optimal sequence (x_1, x_2, ..., x_k).
2. There are several options for x_1, say op_1, op_2, ..., op_d. Each option op_j leads to a subproblem.
3. Denote each problem/subproblem by a small number of parameters, the fewer the better. E.g., P(i, j), 1 ≤ i ≤ j ≤ n.
4. Define the objective function to be optimized using these parameter(s). E.g., f(i, j) = the optimal value of P(i, j).
5. Formulate a recurrence relation.
6. Determine the boundary condition and the goal.
7. Implement the algorithm.

Shortest Path

Problem: Let G = (V, E) be a directed acyclic graph (DAG). Let G be represented by a matrix:

    d(i, j) = length of edge (i, j)   if (i, j) ∈ E
              0                       if i = j
              ∞                       otherwise

Find a shortest path from a given node u to a given node v.

Dynamic Programming Solution

1. View the problem as constructing an optimal sequence. Here we want to find a sequence of nodes (x_1, ..., x_k) such that (u, x_1, ..., x_k, v) is a shortest path from u to v.

2. There are several options for x_1; each option leads to a subproblem. The options for x_1 are the nodes x that have an edge from u. The subproblem corresponding to option x is: find a shortest path from x to v.

3. Denote each problem/subproblem by a small number of parameters, the fewer the better.

4. Define the objective function to be optimized using these parameter(s).

These two steps are usually done simultaneously. Let f(x) denote the shortest distance from x to v.

5. Formulate a recurrence relation:

    f(x) = min { d(x, y) + f(y) : (x, y) ∈ E },  if x ≠ v and out-degree(x) ≠ 0.

6. Determine the boundary condition:

    f(x) = 0   if x = v
           ∞   if x ≠ v and out-degree(x) = 0

7. What's the goal (objective)? Our goal is to compute f(u). Once we know how to compute f(u), it will be easy to construct a shortest path from u to v. I.e., we compute the shortest distance from u to v, and then construct a path having that distance.

8. Implement the algorithm.

Computing f(u) (version 1)

function shortest(x)  // computing f(x) //
global d[1..n, 1..n]
if x = v then return (0)
elseif out-degree(x) = 0 then return (∞)
else return ( min { d(x, y) + shortest(y) : (x, y) ∈ E } )

Initial call: shortest(u)

Question: what's the worst-case running time?

Computing f(u) (version 2)

function shortest(x)  // computing f(x), memoized //
global d[1..n, 1..n], F[1..n], Next[1..n]
if F[x] = -1 then  // f(x) not yet computed //
    if x = v then F[x] ← 0
    elseif out-degree(x) = 0 then F[x] ← ∞
    else F[x] ← min { d(x, y) + shortest(y) : (x, y) ∈ E }
         Next[x] ← the node y that yielded the min
return (F[x])

Main Program

procedure shortest-path(u, v)  // find a shortest path from u to v //
global d[1..n, 1..n], F[1..n], Next[1..n]
initialize Next[v] ← 0
initialize F[1..n] ← -1
SD ← shortest(u)  // shortest distance from u to v //
if SD < ∞ then  // print the shortest path //
    k ← u
    while k ≠ 0 do write(k); k ← Next[k]
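The memoized procedure above can be sketched in Python (an illustration, not part of the original slides; the example DAG, its node names, and its edge lengths are made up):

```python
from functools import lru_cache

# A hypothetical DAG as an adjacency dict: graph[x] maps each successor y
# to the edge length d(x, y).  Node names are arbitrary hashable values.
graph = {
    'u': {'a': 1, 'b': 4},
    'a': {'b': 2, 'v': 6},
    'b': {'v': 1},
    'v': {},
}

def shortest(graph, u, v):
    """Memoized f(x) = shortest distance from x to v, with path recovery."""
    nxt = {}                          # plays the role of the Next[] array

    @lru_cache(maxsize=None)          # the cache plays the role of F[]
    def f(x):
        if x == v:
            return 0
        if not graph[x]:              # out-degree(x) = 0 and x != v
            return float('inf')
        best = min(graph[x], key=lambda y: graph[x][y] + f(y))
        nxt[x] = best                 # the successor that yielded the min
        return graph[x][best] + f(best)

    dist = f(u)
    path = [u]
    while path[-1] != v and path[-1] in nxt:
        path.append(nxt[path[-1]])
    return dist, path

dist, path = shortest(graph, 'u', 'v')
```

Here `lru_cache` gives exactly the version-2 behavior: each node's value is computed once, and subsequent calls cost O(1).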

Time Complexity

Number of calls to shortest: O(|E|). (Is it Ω(|E|) or Θ(|E|)?)

How much time is spent on shortest(x) for any x?
- The first call: O(1) + time to find x's outgoing edges.
- Subsequent calls: O(1) per call.

The overall worst-case running time of the algorithm is O(|E|) + time to find all nodes' outgoing edges:
- If the graph is represented by an adjacency matrix: O(|V|^2).
- If the graph is represented by adjacency lists: O(|V| + |E|).

Forward vs. Backward Approach

Matrix-chain Multiplication

Problem: Given n matrices M_1, M_2, ..., M_n, where M_i is of dimensions d_{i-1} × d_i, we want to compute the product M_1 M_2 ⋯ M_n in a least expensive order, assuming that the cost of multiplying an a × b matrix by a b × c matrix is abc.

Example: we want to compute A B C, where A is 10 × 2, B is 2 × 5, and C is 5 × 10.
- Cost of computing (A B) C is 100 + 500 = 600.
- Cost of computing A (B C) is 100 + 200 = 300.

Dynamic Programming Solution

We want to determine an optimal (x_1, x_2, ..., x_{n-1}), where x_1 means which two matrices to multiply first, x_2 means which two matrices to multiply next, and x_{n-1} means which two matrices to multiply last.

Consider x_{n-1}. (Why not x_1?) There are n - 1 choices for x_{n-1}:

    (M_1 ⋯ M_k) (M_{k+1} ⋯ M_n),  where 1 ≤ k < n.

A general problem/subproblem is to multiply M_i ⋯ M_j, which can be naturally denoted by P(i, j).

Dynamic Programming Solution

Let Cost(i, j) denote the minimum cost for computing M_i ⋯ M_j.

Recurrence relation:

    Cost(i, j) = min { Cost(i, k) + Cost(k+1, j) + d_{i-1} d_k d_j : i ≤ k < j }  for 1 ≤ i < j ≤ n.

Boundary condition: Cost(i, i) = 0 for 1 ≤ i ≤ n.

Goal: Cost(1, n).

Algorithm (recursive version)

function MinCost(i, j)
global d[0..n], Cost[1..n, 1..n], Cut[1..n, 1..n]
// initially, Cost[i, j] ← 0 if i = j, and Cost[i, j] ← -1 if i ≠ j //
if Cost[i, j] < 0 then
    Cost[i, j] ← min { MinCost(i, k) + MinCost(k+1, j) + d[i-1] d[k] d[j] : i ≤ k < j }
    Cut[i, j] ← the index k that gave the minimum in the last statement
return (Cost[i, j])

Algorithm (non-recursive version)

procedure MinCost
global d[0..n], Cost[1..n, 1..n], Cut[1..n, 1..n]
initialize Cost[i, i] ← 0 for 1 ≤ i ≤ n
for i ← n - 1 downto 1 do
    for j ← i + 1 to n do
        Cost[i, j] ← min { Cost(i, k) + Cost(k+1, j) + d[i-1] d[k] d[j] : i ≤ k < j }
        Cut[i, j] ← the index k that gave the minimum in the last statement

Computing M_i ⋯ M_j

function MatrixProduct(i, j)  // return the product M_i ⋯ M_j //
global Cut[1..n, 1..n], M_1, ..., M_n
if i = j then return (M_i)
else k ← Cut[i, j]
     return ( MatrixProduct(i, k) × MatrixProduct(k+1, j) )

Time complexity of MinCost: Θ(n^3).
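A compact Python version of the bottom-up MinCost (a sketch; the dimension list [10, 2, 5, 10] encodes the A, B, C example above):

```python
def matrix_chain(d):
    """Minimum multiplication cost for M_1..M_n, where M_i is d[i-1] x d[i].

    Returns (cost, cut) tables as dicts keyed by (i, j), 1-indexed,
    mirroring the Cost and Cut arrays in the slides.
    """
    n = len(d) - 1
    cost = {(i, i): 0 for i in range(1, n + 1)}   # boundary: Cost(i, i) = 0
    cut = {}
    for i in range(n - 1, 0, -1):                 # i from n-1 down to 1
        for j in range(i + 1, n + 1):
            best = min(range(i, j),
                       key=lambda k: cost[i, k] + cost[k + 1, j]
                                     + d[i - 1] * d[k] * d[j])
            cut[i, j] = best
            cost[i, j] = (cost[i, best] + cost[best + 1, j]
                          + d[i - 1] * d[best] * d[j])
    return cost, cut

# The A, B, C example from the slides: A is 10x2, B is 2x5, C is 5x10.
cost, cut = matrix_chain([10, 2, 5, 10])
```

Here cost[1, 3] recovers the optimal 300, and cut[1, 3] = 1 records that the last multiplication is A × (B C).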

Paragraphing

Problem: typeset a sequence of words w_1, w_2, ..., w_n into a paragraph with minimum cost (penalty).

Notation:
- w_i: length of word i.
- L: length of each line.
- b: ideal width of space between two words.
- ε: minimum required space between words.
- b': actual width of space between words if the line is right justified.

Assume that w_i + ε + w_{i+1} ≤ L for all i.

If words w_i, w_{i+1}, ..., w_j are typeset as a line, where j ≠ n, the value of b' for that line is

    b' = ( L - Σ_{k=i..j} w_k ) / (j - i)

and the penalty is defined as:

    Cost(i, j) = | b' - b |   if b' ≥ ε
                 ∞            if b' < ε

Right justification is not needed for the last line. So the width of space for setting w_i, w_{i+1}, ..., w_j when j = n is min(b', b), and the penalty is

    Cost(i, j) = | b' - b |   if ε ≤ b' < b
                 0            if b' ≥ b
                 ∞            if b' < ε
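The two penalty cases can be combined into one helper; a Python sketch (parameter names mirror the slides; the guard for a one-word line is my addition, since b' is undefined when j = i):

```python
def line_cost(w, i, j, n, L, b, eps):
    """Penalty for typesetting words i..j (1-indexed lengths in w) on one line."""
    if i == j:
        # single word: no inter-word space needed
        return 0 if w[i] <= L else float('inf')
    bp = (L - sum(w[i:j + 1])) / (j - i)      # b' from the slides
    if bp < eps:
        return float('inf')                   # words cannot fit on the line
    if j == n and bp >= b:
        return 0                              # last line is not right justified
    return abs(bp - b)

# example: word lengths w[1..3] = 3, 2, 4 (w[0] unused), line length 12
w = [0, 3, 2, 4]
penalty = line_cost(w, 1, 2, n=3, L=12, b=2, eps=1)
```

With this cost in hand, the paragraphing DP follows the same recipe as the shortest-path example: the subproblem is "typeset words i..n", indexed by the single parameter i.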

Longest Common Subsequence

Problem: Given two sequences A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n), find a longest common subsequence of A and B.

To solve it by dynamic programming, we view the problem as finding an optimal sequence (x_1, x_2, ..., x_k) and ask: what choices are there for x_1? (Or what choices are there for x_k?)

Approach 1 (not efficient)

View (x_1, x_2, ...) as a subsequence of A. So, the choices for x_1 are a_1, a_2, ..., a_n.

Let L(i, j) denote the length of a longest common subsequence of A_i = (a_i, a_{i+1}, ..., a_n) and B_j = (b_j, b_{j+1}, ..., b_n).

Let ϕ(k, j) be the index of the first character in B_j that is equal to a_k, or n+1 if there is no such character.

Recurrence:

    L(i, j) = max { 1 + L(k+1, ϕ(k, j)+1) : i ≤ k ≤ n, ϕ(k, j) ≤ n }
              0   if the set for the max is empty

Boundary condition: L(n+1, j) = L(i, n+1) = 0 for all i, j ≤ n+1.

Running time: O(n^3).

Approach 2 (not efficient)

View (x_1, x_2, ...) as a sequence of 0/1 decisions, where x_i indicates whether or not to include a_i. The choices for each x_i are 0 and 1.

Let L(i, j) denote the length of a longest common subsequence of A_i = (a_i, a_{i+1}, ..., a_n) and B_j = (b_j, b_{j+1}, ..., b_n).

Recurrence:

    L(i, j) = max { 1 + L(i+1, ϕ(i, j)+1), L(i+1, j) }   if ϕ(i, j) ≤ n
              L(i+1, j)                                   otherwise

Running time: Θ(n^2).

Algorithm (non-recursive)

procedure Compute-Array-L
global L[1..n+1, 1..n+1], ϕ[1..n, 1..n]
initialize L[i, n+1] ← 0, L[n+1, j] ← 0 for all i, j ≤ n+1
compute ϕ[1..n, 1..n]
for i ← n downto 1 do
    for j ← n downto 1 do
        if ϕ(i, j) ≤ n then
            L[i, j] ← max { 1 + L[i+1, ϕ(i, j)+1], L[i+1, j] }
        else L[i, j] ← L[i+1, j]

Algorithm (recursive)

procedure Longest(i, j)
// print the longest common subsequence //
// assume L[1..n+1, 1..n+1] has been computed //
global L[1..n+1, 1..n+1]
if L[i, j] = 0 then return  // nothing left to print //
if L[i, j] = L[i+1, j] then
    Longest(i+1, j)
else
    Print(a_i)
    Longest(i+1, ϕ(i, j)+1)

Initial call: Longest(1, 1)

Approach 3

View (x_1, x_2, ...) as a sequence of decisions, where each decision is to:
- include a_i = b_j (if a_i = b_j), or
- exclude a_i or exclude b_j (if a_i ≠ b_j).

Let L(i, j) denote the length of a longest common subsequence of A_i = (a_i, a_{i+1}, ..., a_n) and B_j = (b_j, b_{j+1}, ..., b_n).

Recurrence:

    L(i, j) = 1 + L(i+1, j+1)               if a_i = b_j
              max { L(i+1, j), L(i, j+1) }  if a_i ≠ b_j

Boundary: L(i, j) = 0 if i = n+1 or j = n+1.

Running time: Θ(n^2).
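Approach 3 is the classic textbook recurrence; a Python sketch that fills the table bottom-up and then walks it to recover one LCS (the walk mirrors procedure Longest; the example strings are my own):

```python
def lcs(A, B):
    """Approach 3: L[i][j] = LCS length of suffixes A[i:], B[j:]."""
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # boundary rows/cols are 0
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if A[i] == B[j]:
                L[i][j] = 1 + L[i + 1][j + 1]
            else:
                L[i][j] = max(L[i + 1][j], L[i][j + 1])
    # recover one LCS by walking the table
    out, i, j = [], 0, 0
    while i < n and j < m:
        if A[i] == B[j]:
            out.append(A[i]); i += 1; j += 1
        elif L[i + 1][j] >= L[i][j + 1]:
            i += 1                              # exclude a_i
        else:
            j += 1                              # exclude b_j
    return L[0][0], ''.join(out)

length, seq = lcs("ABCBDAB", "BDCABA")
```

Both the table fill and the walk are Θ(n^2), matching the running time claimed for this approach.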

All-Pairs Shortest Paths

Problem: Let G(V, E) be a weighted directed graph. For every pair of nodes u, v, find a shortest path from u to v.

DP approach: for all u, v ∈ V, we are looking for an optimal sequence (x_1, x_2, ..., x_k). What choices are there for x_1? To answer this, we need to know the meaning of x_1.

Approach 1

x_1: the next node. What choices are there for x_1? How to describe a subproblem?

What about L(i, j) = min { d(i, z) + L(z, j) : (i, z) ∈ E }?

Let L_k(i, j) denote the length of a shortest path from i to j with at most k intermediate nodes. Then

    L_k(i, j) = min { d(i, z) + L_{k-1}(z, j) : (i, z) ∈ E }.

Approach 2

x_1: going through node n or not? What choices are there for x_1? Taking the backward approach, we ask whether to go through node n or not.

Let D_k(i, j) be the length of a shortest path from i to j with intermediate nodes in {1, 2, ..., k}. Then

    D_k(i, j) = min { D_{k-1}(i, j), D_{k-1}(i, k) + D_{k-1}(k, j) }.

    D_0(i, j) = weight of edge (i, j)   if (i, j) ∈ E
                0                       if i = j
                ∞                       otherwise           (1)

Straightforward Implementation

initialize D_0[1..n, 1..n] by Eq. (1)
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            if D_{k-1}[i, k] + D_{k-1}[k, j] < D_{k-1}[i, j] then
                D_k[i, j] ← D_{k-1}[i, k] + D_{k-1}[k, j]
                P_k[i, j] ← k
            else
                D_k[i, j] ← D_{k-1}[i, j]
                P_k[i, j] ← 0

Print Paths

procedure Path(k, i, j)
// shortest path from i to j w/o going thru k+1, ..., n //
global D_0[1..n, 1..n], P_k[1..n, 1..n] for 0 ≤ k ≤ n
if k = 0 then
    if i = j then print i
    elseif D_0(i, j) < ∞ then print i, j
    else print "no path"
elseif P_k[i, j] = k then Path(k-1, i, k); Path(k-1, k, j)
else Path(k-1, i, j)

Print Paths

procedure ShortestPath(i, j)
// shortest path from i to j //
global D_0[1..n, 1..n], P_k[1..n, 1..n] for 0 ≤ k ≤ n
k ← the largest k such that P_k[i, j] = k; let k ← 0 if there is no such k
if k = 0 then
    if i = j then print i
    elseif D_0(i, j) < ∞ then print i, j
    else print "no path"
else ShortestPath(i, k); ShortestPath(k, j)

Eliminating the Superscript k in D_k[1..n, 1..n], P_k[1..n, 1..n]

If i ≠ k and j ≠ k: we need D_{k-1}[i, j] only for computing D_k[i, j]. Once D_k[i, j] is computed, we don't need to keep D_{k-1}[i, j].

If i = k or j = k: D_k[i, j] = D_{k-1}[i, j].

What does P_k[i, j] indicate? We only need to know the largest k such that P_k[i, j] = k.

Floyd's Algorithm

initialize D[1..n, 1..n] by Eq. (1)
initialize P[1..n, 1..n] ← 0
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            if D[i, k] + D[k, j] < D[i, j] then
                D[i, j] ← D[i, k] + D[k, j]
                P[i, j] ← k
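Floyd's algorithm translates directly into Python (a sketch; nodes are 0-indexed here, so P is initialized to -1 rather than 0 to mean "no intermediate node", and the 3-node example graph is made up):

```python
INF = float('inf')

def floyd(D):
    """Floyd's algorithm on an n x n distance matrix (D[i][i] = 0).

    Follows the slides: P[i][j] records the last k that improved D[i][j].
    """
    n = len(D)
    D = [row[:] for row in D]          # work on a copy of the input
    P = [[-1] * n for _ in range(n)]   # -1 = no intermediate node (0-indexed)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k
    return D, P

# small worked example (0-indexed nodes)
G = [[0,   3,   INF],
     [INF, 0,   1],
     [5,   INF, 0]]
D, P = floyd(G)
```

For this graph the path 0 → 1 → 2 of length 4 beats the missing direct edge, and P[0][2] = 1 records node 1 as the intermediate.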

Longest Nondecreasing Subsequence

Problem: Given a sequence of integers A = (a_1, a_2, ..., a_n), find a longest nondecreasing subsequence of A.
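The slides leave this one as an exercise; a standard O(n^2) DP sketch (my implementation, not from the slides), where f[i] is the length of the longest nondecreasing subsequence ending at position i:

```python
def longest_nondecreasing(A):
    """Return one longest nondecreasing subsequence of A (O(n^2) DP)."""
    n = len(A)
    if n == 0:
        return []
    f = [1] * n        # f[i]: best length ending at i
    prev = [-1] * n    # predecessor index for reconstruction
    for i in range(n):
        for j in range(i):
            if A[j] <= A[i] and f[j] + 1 > f[i]:
                f[i] = f[j] + 1
                prev[i] = j
    # backtrack from the position achieving the maximum length
    i = max(range(n), key=lambda i: f[i])
    out = []
    while i != -1:
        out.append(A[i])
        i = prev[i]
    return out[::-1]

seq = longest_nondecreasing([5, 2, 8, 6, 3, 6, 9, 7])
```

An O(n log n) solution also exists (patience sorting with binary search), but the quadratic DP matches the design recipe from these slides most directly.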

Sum of Subset

Given a positive integer M and a multiset of positive integers A = {a_1, a_2, ..., a_n}, determine if there is a subset B ⊆ A such that Sum(B) = M, where Sum(B) denotes the sum of the integers in B.

This problem is NP-hard.
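Although the problem is NP-hard, it has a classic pseudo-polynomial DP; a Python sketch (O(nM) time, which is not polynomial in the input size, since M is encoded in only log M bits):

```python
def subset_sum(A, M):
    """True iff some subset of the multiset A sums to exactly M."""
    reachable = [False] * (M + 1)   # reachable[s]: some subset sums to s
    reachable[0] = True             # the empty subset
    for a in A:
        # descend so each element is used at most once per pass
        for s in range(M, a - 1, -1):
            if reachable[s - a]:
                reachable[s] = True
    return reachable[M]
```

The backward inner loop is the standard 0/1-knapsack trick: iterating s downward prevents the same element a from being counted twice within one pass.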

Job Scheduling on Two Machines

There are n jobs to be processed, and two machines A and B are available. If job i is processed on machine A, then a_i units of time are needed. If it is processed on machine B, then b_i units of processing time are needed. Because of the peculiarities of the jobs and the machines, it is possible that a_i > b_i for some i while a_j < b_j for some other j. Schedule the jobs to minimize the completion time. (If the jobs in J are processed by machine A and the rest by machine B, the completion time is defined to be max( Σ_{i ∈ J} a_i, Σ_{i ∉ J} b_i ).)

Assume a_i, b_i ≥ 1 for all i.
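One standard DP for this problem (a sketch under my own formulation, not taken from the slides): let g[t] be the minimum total B-time over assignments of the jobs seen so far whose A-jobs total exactly t time units; the answer is then the minimum over t of max(t, g[t]).

```python
def schedule(a, b):
    """Minimum completion time max(sum of a over J, sum of b over the rest).

    Pseudo-polynomial DP, O(n * sum(a)) time.
    """
    INF = float('inf')
    T = sum(a)                      # upper bound on total A-time
    g = [INF] * (T + 1)             # g[t]: min B-time with A-time exactly t
    g[0] = 0
    for ai, bi in zip(a, b):
        new = [INF] * (T + 1)
        for t in range(T + 1):
            if g[t] == INF:
                continue
            new[t] = min(new[t], g[t] + bi)           # job goes to machine B
            if t + ai <= T:
                new[t + ai] = min(new[t + ai], g[t])  # job goes to machine A
        g = new
    return min(max(t, g[t]) for t in range(T + 1) if g[t] < INF)
```

For example, with a = [2, 3] and b = [2, 3], splitting the jobs across the two machines gives completion time 3, beating either single-machine assignment's 5.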