Dynamic Programming. Reading: CLRS Chapter 15 & Section 25.2. CSE 6331: Algorithms. Steve Lai

Optimization Problems

Problems that can be solved by dynamic programming are typically optimization problems. Optimization problems: construct a set or a sequence (y_1, y_2, ..., y_k) of elements that satisfies a given constraint and optimizes a given objective function.

Problems and Subproblems

Consider the sorting problem: given a set of n elements, A = {a_1, a_2, a_3, ..., a_n}, sort A. Let P(i, j) denote the problem of sorting A_ij = {a_i, a_{i+1}, ..., a_j}, where 1 ≤ i ≤ j ≤ n. We have a class of similar problems, indexed by (i, j). The original problem is P(1, n).

Dynamic Programming: basic ideas (1)

Problem: construct an optimal solution (x_1, ..., x_k). There are several options for x_1, say op_1, op_2, ..., op_d. Each option op_i leads to a subproblem P_i: given x_1 = op_i, find an optimal solution (x_2, ..., x_k). The best of these optimal solutions, i.e.,

    Best{ (x_1 = op_i, x_2, ..., x_k) : 1 ≤ i ≤ d },

is an optimal solution to the original problem.

Dynamic Programming: basic ideas (2)

Apply the same reasoning to each subproblem, sub-subproblem, sub-sub-subproblem, and so on. This yields a tree consisting of the original problem (the root) and its subproblems. Dynamic programming works when these subproblems have many duplicates, are of the same type, and can be described by, typically, one or two parameters. The tree of problems/subproblems (which is of exponential size) then condenses into a smaller, polynomial-size graph. Now solve the subproblems from the leaves up.
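As a minimal illustration (not from the slides), consider the Fibonacci recurrence: the tree of recursive calls is exponential, but there are only n + 1 distinct subproblems, so memoizing condenses the tree into a linear-size graph of subproblems, each solved once.

```python
from functools import lru_cache

def fib_naive(n):
    # Recursion tree: exponentially many calls for large n.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each of the n + 1 subproblems is solved exactly once.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040
```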

Design a Dynamic Programming Algorithm

1. View the problem as one of constructing an optimal sequence (x_1, ..., x_k).
2. There are several options for x_1, say op_1, op_2, ..., op_d. Each option op_i leads to a subproblem.
3. Denote each problem/subproblem by a small number of parameters, the fewer the better.
4. Define the objective function to be optimized in terms of these parameters.
5. Formulate a recurrence relation.
6. Determine the boundary condition and the goal.
7. Implement the algorithm.

Shortest Path

Problem: Let G = (V, E) be a directed acyclic graph (DAG), represented by a matrix:

    d(i, j) = length of edge (i, j)  if (i, j) ∈ E
              0                      if i = j
              ∞                      otherwise

Find a shortest path from a given node u to a given node v.

Dynamic Programming Solution

1. View the problem as one of constructing an optimal sequence (x_1, ..., x_k). Here we want to find a sequence of nodes (x_1, ..., x_k) such that (u, x_1, ..., x_k, v) is a shortest path from u to v.
2. There are several options for x_1; each option leads to a subproblem. The options for x_1 are the nodes x that have an edge from u. The subproblem corresponding to option x is: find a shortest path from x to v.

3. Denote each problem/subproblem by a small number of parameters, the fewer the better.
4. Define the objective function to be optimized in terms of these parameters. (These two steps are usually done simultaneously.) Let f(x) denote the shortest distance from x to v.
5. Formulate a recurrence relation:

    f(x) = min{ d(x, y) + f(y) : (x, y) ∈ E },  if x ≠ v and out-degree(x) ≠ 0.

6. Determine the boundary condition:

    f(x) = 0  if x = v
    f(x) = ∞  if x ≠ v and out-degree(x) = 0

7. What's the goal? Our goal is to compute f(u). Once we know how to compute f(u), it will be easy to construct a shortest path from u to v: compute the shortest distance from u to v, then construct a path having that distance.
8. Implement the algorithm.

Computing f(u) (version 1)

    function shortest(x)  // computes f(x)
        global d[1..n, 1..n]
        if x = v then return 0
        elseif out-degree(x) = 0 then return ∞
        else return min{ d(x, y) + shortest(y) : (x, y) ∈ E }

    Initial call: shortest(u)

Question: what's the worst-case running time?

Computing f(u) (version 2)

    function shortest(x)  // computes f(x)
        global d[1..n, 1..n], F[1..n], Next[1..n]
        if F[x] = −1 then  // f(x) not yet computed
            if x = v then F[x] ← 0
            elseif out-degree(x) = 0 then F[x] ← ∞
            else F[x] ← min{ d(x, y) + shortest(y) : (x, y) ∈ E }
                 Next[x] ← the node y that yielded the minimum
        return F[x]

Main Program

    procedure shortest-path(u, v)  // find a shortest path from u to v
        global d[1..n, 1..n], F[1..n], Next[1..n]
        initialize Next[v] ← 0
        initialize F[1..n] ← −1
        SD ← shortest(u)  // shortest distance from u to v
        if SD < ∞ then  // print the shortest path
            j ← u
            while j ≠ 0 do { write(j); j ← Next[j] }

Time Complexity

Number of calls to shortest: O(|E|). How much time does shortest(x) need for a particular x? The first call: O(1) + the time to find x's outgoing edges. Subsequent calls: O(1) per call. The overall worst-case running time of the algorithm is therefore O(|E|) · O(1) + the time to find all nodes' outgoing edges:

- If the graph is represented by an adjacency matrix: O(|V|²).
- If the graph is represented by adjacency lists: O(|V| + |E|).
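The memoized version 2 and the main program can be sketched in Python as follows. This is a sketch, not the slides' exact code: it assumes an adjacency-dict representation (the names shortest_path_dag, d, F, Next are illustrative), and it returns the path as a list instead of printing it.

```python
import math

def shortest_path_dag(d, u, v):
    """Memoized shortest path in a DAG (cf. version 2 of the slides).

    d is an adjacency dict: d[x] = {y: length of edge (x, y)}.
    Returns (distance, path); distance is math.inf if v is unreachable.
    """
    F = {}     # F[x] = shortest distance from x to v (memo table)
    Next = {}  # Next[x] = successor of x on a shortest x-to-v path

    def shortest(x):
        if x in F:
            return F[x]          # subsequent calls: O(1)
        if x == v:
            F[x] = 0
        elif not d.get(x):       # dead end: out-degree 0
            F[x] = math.inf
        else:
            best_y, best = None, math.inf
            for y, w in d[x].items():
                cand = w + shortest(y)
                if cand < best:
                    best_y, best = y, cand
            F[x] = best
            Next[x] = best_y
        return F[x]

    dist = shortest(u)
    path = []
    if dist < math.inf:
        x = u
        while x != v:
            path.append(x)
            x = Next[x]
        path.append(v)
    return dist, path

# Hypothetical example DAG: shortest path from 1 to 4 is 1-2-3-4.
d = {1: {2: 1, 3: 5}, 2: {3: 1, 4: 10}, 3: {4: 1}, 4: {}}
print(shortest_path_dag(d, 1, 4))  # (3, [1, 2, 3, 4])
```

With the adjacency-dict representation, this runs in the O(|V| + |E|) time stated above for adjacency lists.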

Matrix-chain Multiplication

Problem: Given n matrices M_1, M_2, ..., M_n, where M_i has dimensions d_{i−1} × d_i, we want to compute the product M_1 M_2 ··· M_n in a least expensive order, assuming that the cost of multiplying an a × b matrix by a b × c matrix is abc.

Example: we want to compute A · B · C, where A is 10 × 2, B is 2 × 5, and C is 5 × 10.

- Cost of computing (A · B) · C: 100 + 500 = 600.
- Cost of computing A · (B · C): 100 + 200 = 300.

Dynamic Programming Solution

We want to determine an optimal sequence (x_1, x_2, ..., x_{n−1}), where x_1 means which two matrices to multiply first, x_2 means which two to multiply next, ..., and x_{n−1} means which two to multiply last.

Consider x_{n−1}. (Why not x_1?) There are n − 1 choices for x_{n−1}:

    (M_1 ··· M_k)(M_{k+1} ··· M_n),  where 1 ≤ k ≤ n − 1.

A general problem/subproblem is to multiply M_i ··· M_j, which can be naturally denoted by (i, j).

Let Cost(i, j) denote the minimum cost of computing M_i ··· M_j.

Recurrence relation:

    Cost(i, j) = min_{i ≤ k < j} { Cost(i, k) + Cost(k+1, j) + d_{i−1} d_k d_j }.

Boundary condition: Cost(i, i) = 0 for 1 ≤ i ≤ n.

Goal: Cost(1, n).

Algorithm (recursive version)

    function MinCost(i, j)
        global d[0..n], Cost[1..n, 1..n], Cut[1..n, 1..n]
        // initially, Cost[i, j] ← 0 if i = j, and Cost[i, j] ← −1 if i ≠ j
        if Cost[i, j] < 0 then
            { Cost[i, j] ← min_{i ≤ k < j} { MinCost(i, k) + MinCost(k+1, j) + d[i−1] · d[k] · d[j] }
              Cut[i, j] ← the index k that gave the minimum in the last statement }
        return Cost[i, j]

Algorithm (non-recursive version)

    procedure MinCost
        global d[0..n], Cost[1..n, 1..n], Cut[1..n, 1..n]
        initialize Cost[i, i] ← 0 for 1 ≤ i ≤ n
        for l ← 2 to n do  // l = length of the chain M_i ··· M_j
            for i ← 1 to n − l + 1 do
                { j ← i + l − 1
                  Cost[i, j] ← min_{i ≤ k < j} { Cost[i, k] + Cost[k+1, j] + d[i−1] · d[k] · d[j] }
                  Cut[i, j] ← the index k that gave the minimum in the last statement }

Computing M_i ··· M_j

    function MatrixProduct(i, j)  // returns the product M_i ··· M_j
        global Cut[1..n, 1..n], M_1, ..., M_n
        if i = j then return M_i
        else { k ← Cut[i, j]
               return MatrixProduct(i, k) · MatrixProduct(k+1, j) }

Main Program

    global d[0..n], Cost[1..n, 1..n], Cut[1..n, 1..n]
    global M_1, ..., M_n
    call MinCost  (or MinCost(1, n), the recursive version)
    call MatrixProduct(1, n)

Time complexity: Θ(n³).
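The non-recursive algorithm translates directly to Python. This is a sketch keeping the slides' 1-based tables; the function names matrix_chain and parenthesize are mine, and parenthesize renders the optimal order recorded in Cut instead of multiplying actual matrices.

```python
import math

def matrix_chain(d):
    """Bottom-up matrix-chain DP. d[0..n] are the dimensions:
    matrix M_i is d[i-1] x d[i]. Returns the tables Cost and Cut."""
    n = len(d) - 1
    Cost = [[0] * (n + 1) for _ in range(n + 1)]  # Cost[i][i] = 0
    Cut = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):           # l = length of the chain
        for i in range(1, n - l + 2):
            j = i + l - 1
            Cost[i][j] = math.inf
            for k in range(i, j):
                c = Cost[i][k] + Cost[k + 1][j] + d[i - 1] * d[k] * d[j]
                if c < Cost[i][j]:
                    Cost[i][j] = c
                    Cut[i][j] = k       # index that gave the minimum
    return Cost, Cut

def parenthesize(Cut, i, j):
    # Render the optimal order recorded in Cut as a parenthesization.
    if i == j:
        return f"M{i}"
    k = Cut[i][j]
    return f"({parenthesize(Cut, i, k)} {parenthesize(Cut, k + 1, j)})"

# The A, B, C example from the slides: 10x2, 2x5, 5x10.
Cost, Cut = matrix_chain([10, 2, 5, 10])
print(Cost[1][3], parenthesize(Cut, 1, 3))  # 300 (M1 (M2 M3))
```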

Longest Common Subsequence

Problem: Given two sequences A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n), find a longest common subsequence of A and B.

To solve it by dynamic programming, we view the problem as finding an optimal sequence (x_1, x_2, ..., x_k) and ask: what choices are there for x_1? (Or: what choices are there for x_2?)

Approach 1 (not efficient)

View (x_1, x_2, ...) as a subsequence of A. So the choices for x_1 are a_1, a_2, ..., a_n.

Let L(i, j) denote the length of a longest common subsequence of A_i = (a_i, a_{i+1}, ..., a_n) and B_j = (b_j, b_{j+1}, ..., b_n).

Recurrence:

    L(i, j) = max_{i ≤ k ≤ n} { L(k+1, φ(k, j) + 1) + 1 },

where φ(k, j) is the index of the first character in B_j equal to a_k, or n + 1 if there is no such character.

Boundary condition: L(i, j) = 0 if i = n + 1 or j = n + 1; L(i, n + 2) = −∞ for 1 ≤ i ≤ n + 1.

Running time: Θ(n³).

Approach 2 (not efficient)

View (x_1, x_2, ...) as a sequence of 0/1, where x_i indicates whether or not to include a_i. The choices for each x_i are 0 and 1.

Let L(i, j) denote the length of a longest common subsequence of (a_i, a_{i+1}, ..., a_n) and (b_j, b_{j+1}, ..., b_n).

Recurrence:

    L(i, j) = max{ L(i+1, φ(i, j) + 1) + 1,  L(i+1, j) },

where φ(i, j) is as defined in Approach 1.

Boundary condition: same as in Approach 1.

Running time: Θ(n²) + the time needed to compute φ(1..n, 1..n).

Approach 3

View (x_1, x_2, ...) as a sequence of decisions, where each decision is whether to include a_i = b_j (if a_i = b_j), or to exclude a_i or exclude b_j (if a_i ≠ b_j).

Let L(i, j) denote the length of a longest common subsequence of (a_i, a_{i+1}, ..., a_n) and (b_j, b_{j+1}, ..., b_n).

Recurrence:

    L(i, j) = L(i+1, j+1) + 1              if a_i = b_j
    L(i, j) = max{ L(i+1, j), L(i, j+1) }  if a_i ≠ b_j

Boundary: L(i, j) = 0 if i = n + 1 or j = n + 1.

Running time: Θ(n²).
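Approach 3 translates to a short bottom-up table. This sketch uses 0-based indices (so the boundary rows are L[n][·] and L[·][m]) and adds a traceback, which is not in the slides, to recover one actual LCS.

```python
def lcs(A, B):
    """Approach 3, bottom-up: L[i][j] = length of an LCS of A[i:] and B[j:]."""
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]  # boundary rows stay 0
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if A[i] == B[j]:
                L[i][j] = L[i + 1][j + 1] + 1
            else:
                L[i][j] = max(L[i + 1][j], L[i][j + 1])
    # Recover one LCS by retracing the decisions from (0, 0).
    out, i, j = [], 0, 0
    while i < n and j < m:
        if A[i] == B[j]:
            out.append(A[i]); i += 1; j += 1
        elif L[i + 1][j] >= L[i][j + 1]:
            i += 1
        else:
            j += 1
    return L[0][0], "".join(out)

print(lcs("ABCBDAB", "BDCABA"))  # (4, 'BDAB')
```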

All-Pairs Shortest Paths

Problem: Let G = (V, E) be a weighted directed graph. For every pair of nodes u, v, find a shortest path from u to v.

DP approach: for each pair u, v ∈ V, we are looking for an optimal sequence (x_1, x_2, ...). What choices are there for x_1? To answer this, we need to know the meaning of x_1.

Approach 1

x_1: the next node. What choices are there for x_1? How would we describe a subproblem?

Approach 2

x_1: go through node n or not? Taking the backward approach, we ask whether to go through node n or not.

Let D_k(i, j) be the length of a shortest path from i to j with intermediate nodes in {1, 2, ..., k}. Then

    D_k(i, j) = min{ D_{k−1}(i, j),  D_{k−1}(i, k) + D_{k−1}(k, j) }.

    D_0(i, j) = weight of edge (i, j)  if (i, j) ∈ E
                0                      if i = j
                ∞                      otherwise        (1)

Straightforward implementation

    initialize D_0[1..n, 1..n] by Eq. (1)
    for k ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                if D_{k−1}[i, k] + D_{k−1}[k, j] < D_{k−1}[i, j]
                then { D_k[i, j] ← D_{k−1}[i, k] + D_{k−1}[k, j]
                       P_k[i, j] ← k }
                else { D_k[i, j] ← D_{k−1}[i, j]
                       P_k[i, j] ← 0 }

Eliminate the k in D_k[1..n, 1..n], P_k[1..n, 1..n]

If i ≠ k and j ≠ k: we need D_{k−1}[i, j] only for computing D_k[i, j]; once D_k[i, j] is computed, we no longer need to keep D_{k−1}[i, j].

If i = k or j = k: D_k[i, j] = D_{k−1}[i, j].

What does P_k[i, j] indicate? We only need to know the largest k such that P_k[i, j] = k.

Floyd's Algorithm

    initialize D[1..n, 1..n] by Eq. (1)
    initialize P[1..n, 1..n] ← 0
    for k ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                if D[i, k] + D[k, j] < D[i, j] then
                    { D[i, j] ← D[i, k] + D[k, j]
                      P[i, j] ← k }
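Floyd's algorithm in Python, as a sketch with 0-based node numbers: P records the intermediate node k of each improvement (None instead of the slides' 0, since node 0 is now a real node), and the weight matrix W below is an invented example, not from the slides.

```python
import math

def floyd(W):
    """Floyd's algorithm. W is an n x n matrix initialized per Eq. (1):
    W[i][j] = edge weight, 0 on the diagonal, inf otherwise.
    Returns (D, P); P[i][j] is the intermediate node recorded by the
    last improvement of D[i][j], or None if no improvement occurred."""
    n = len(W)
    D = [row[:] for row in W]
    P = [[None] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k
    return D, P

def path(P, i, j):
    # Recursively expand around the recorded intermediate node.
    k = P[i][j]
    if k is None:
        return [i, j]
    return path(P, i, k)[:-1] + path(P, k, j)

inf = math.inf
W = [[0, 3, inf, 7],
     [8, 0, 2, inf],
     [5, inf, 0, 1],
     [2, inf, inf, 0]]
D, P = floyd(W)
print(D[0][3], path(P, 0, 3))  # 6 [0, 1, 2, 3]
```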

Sum of Subset

Given a multiset of positive integers A = {a_1, a_2, ..., a_n} and another positive integer M, determine whether there is a subset B ⊆ A such that Sum(B) = M, where Sum(B) denotes the sum of the integers in B. This problem is NP-hard.
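The transcription ends here. Although the problem is NP-hard, it admits a standard pseudo-polynomial dynamic program; the following is a sketch (not from the slides) running in O(nM) time, which is polynomial in the value M but not in the input size, so it does not contradict the NP-hardness claim.

```python
def subset_sum(A, M):
    """reach[s] is True iff some subset of the items processed so far
    sums to exactly s. O(n * M) time and O(M) space."""
    reach = [False] * (M + 1)
    reach[0] = True  # the empty subset
    for a in A:
        # Scan sums downward so each item is used at most once.
        for s in range(M, a - 1, -1):
            if reach[s - a]:
                reach[s] = True
    return reach[M]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (e.g., 4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```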