Chapter 8 Dynamic Programming


Chapter 8 Dynamic Programming. Copyright 2007 Pearson Addison-Wesley. All rights reserved.

Dynamic Programming
Dynamic Programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems. It was invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and was later assimilated by CS. "Programming" here means "planning".
Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve the smaller instances once
- record the solutions in a table
- extract the solution to the initial instance from that table

Example: Fibonacci numbers
Recall the definition of the Fibonacci numbers: F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1.
Computing the n-th Fibonacci number recursively (top-down): F(n) calls F(n-1) and F(n-2), which in turn call F(n-2) and F(n-3), F(n-3) and F(n-4), and so on, so the same subproblems are recomputed over and over again.

Example: Fibonacci numbers (cont.)
Computing the n-th Fibonacci number using bottom-up iteration and recording the results:
F(0) = 0
F(1) = 1
F(2) = 1 + 0 = 1
...
F(n-2)
F(n-1)
F(n) = F(n-1) + F(n-2)
The results are recorded in the table 0, 1, 1, ..., F(n-2), F(n-1), F(n).
Efficiency: Θ(n) time, Θ(n) space (only Θ(1) space if just the last two values are kept).
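As a minimal sketch (not part of the slides), the bottom-up computation can be written in Python as follows; the function name fib_bottom_up is ours:

def fib_bottom_up(n):
    # Fill a table of the first n+1 Fibonacci numbers in Theta(n) time.
    if n == 0:
        return 0
    table = [0] * (n + 1)        # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # recurrence F(i) = F(i-1) + F(i-2)
    return table[n]

print(fib_bottom_up(10))   # 55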

Examples of DP algorithms
- Computing a binomial coefficient
- Warshall's algorithm for transitive closure
- Floyd's algorithm for all-pairs shortest paths
- Constructing an optimal binary search tree
- Some instances of difficult discrete optimization problems: traveling salesman, knapsack

Computing a binomial coefficient by DP
Binomial coefficients are the coefficients of the binomial formula:
(a + b)^n = C(n,0) a^n b^0 + ... + C(n,k) a^(n-k) b^k + ... + C(n,n) a^0 b^n
Recurrence: C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0; C(n,0) = 1 and C(n,n) = 1 for n ≥ 0.
The value of C(n,k) can be computed by filling an (n+1)-by-(k+1) table row by row: the entry in row i and column j is obtained from the two entries C(i-1,j-1) and C(i-1,j) directly above it, and C(n,k) is the last entry filled.

Computing C(n,k): pseudocode and analysis
Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
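A possible Python sketch of the table-filling computation described above (the function name binomial is illustrative):

def binomial(n, k):
    # C[i][j] holds C(i, j); fill the table row by row using Pascal's rule.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                                # C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]    # C(i,j) = C(i-1,j-1) + C(i-1,j)
    return C[n][k]

print(binomial(6, 3))   # 20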

Knapsack Problem by DP
Given n items with integer weights w_1, ..., w_n and values v_1, ..., v_n, and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack.
Consider the instance defined by the first i items and capacity j (j ≤ W). Let V[i,j] be the optimal value of such an instance. Then
V[i,j] = max {V[i-1,j], v_i + V[i-1, j - w_i]}  if j - w_i ≥ 0
V[i,j] = V[i-1,j]                               if j - w_i < 0
Initial conditions: V[0,j] = 0 and V[i,0] = 0.

Knapsack Problem by DP (example)
Example: knapsack of capacity W = 5
item 1: weight 2, value $12
item 2: weight 1, value $10
item 3: weight 3, value $20
item 4: weight 2, value $15
What is V[4,5], the value of an optimal subset for this instance?
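A Python sketch of the table filling defined by the recurrence on the previous slide, applied to this instance (the function name knapsack is ours):

def knapsack(weights, values, W):
    # V[i][j] = best value achievable with the first i items and capacity j.
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:
                V[i][j] = V[i - 1][j]
    return V[n][W]

# The instance above: weights 2, 1, 3, 2; values $12, $10, $20, $15; W = 5
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # 37, achieved by items 1, 2 and 4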

Warshall's Algorithm: Transitive Closure
Computes the transitive closure of a relation. Alternatively: determines the existence of all nontrivial paths in a digraph.
Example of transitive closure for a digraph with vertices a, b, c, d and edges a→b, b→d, d→a, d→c:
adjacency matrix A:
a: 0 1 0 0
b: 0 0 0 1
c: 0 0 0 0
d: 1 0 1 0
transitive closure T:
a: 1 1 1 1
b: 1 1 1 1
c: 0 0 0 0
d: 1 1 1 1

Warshall's Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), ..., R(k), ..., R(n), where R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (the adjacency matrix) and R(n) = T (the transitive closure).
(The full sequence R(0), R(1), R(2), R(3), R(4) for the example digraph is worked out on the example slide below.)

Warshall's Algorithm (recurrence)
On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just the vertices 1, ..., k allowed as intermediate:
R(k)[i,j] = R(k-1)[i,j]  (a path using just 1, ..., k-1)
            or (R(k-1)[i,k] and R(k-1)[k,j])  (a path from i to k and a path from k to j, each using just 1, ..., k-1)

Warshall's Algorithm (matrix generation)
The recurrence relating the elements of R(k) to the elements of R(k-1) is:
R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
It implies the following rules for generating R(k) from R(k-1):
Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
Rule 2: If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).

Warshall's Algorithm (example)
For the digraph with vertices a, b, c, d and edges a→b, b→d, d→a, d→c:
R(0) = A:
a: 0 1 0 0
b: 0 0 0 1
c: 0 0 0 0
d: 1 0 1 0
R(1) (paths through a only: d→b is added, since d→a and a→b):
a: 0 1 0 0
b: 0 0 0 1
c: 0 0 0 0
d: 1 1 1 0
R(2) (paths through a, b: a→d and d→d are added):
a: 0 1 0 1
b: 0 0 0 1
c: 0 0 0 0
d: 1 1 1 1
R(3) = R(2) (c has no outgoing edges, so no new paths appear)
R(4) = T (paths through d complete rows a and b):
a: 1 1 1 1
b: 1 1 1 1
c: 0 0 0 0
d: 1 1 1 1

Warshall's Algorithm (pseudocode and analysis)
Time efficiency: Θ(n^3)
Space efficiency: matrices can be written over their predecessors
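A Python sketch of the algorithm described by the recurrence above, overwriting each matrix with its successor as the slide suggests (the function name warshall is illustrative):

def warshall(adjacency):
    # Return the transitive closure of a digraph given by its 0/1 adjacency matrix.
    n = len(adjacency)
    R = [row[:] for row in adjacency]           # R(0) = A
    for k in range(n):                          # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 1, 0, 0],   # a -> b
     [0, 0, 0, 1],   # b -> d
     [0, 0, 0, 0],   # c has no outgoing edges
     [1, 0, 1, 0]]   # d -> a, d -> c
for row in warshall(A):
    print(row)        # rows a, b, d become all 1s; row c stays all 0s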

Floyd's Algorithm: All pairs shortest paths
Problem: In a weighted (di)graph, find shortest paths between every pair of vertices.
Same idea: construct the solution through a series of matrices D(0), ..., D(n) using increasing subsets of the vertices allowed as intermediate.
Example: a weighted digraph on vertices a, b, c, d with edges b→a of weight 2, a→c of weight 3, c→b of weight 7, c→d of weight 1, and d→a of weight 6; its weight matrix D(0) is shown on the example slide below.

Floyd's Algorithm (matrix generation)
On the k-th iteration, the algorithm determines the shortest paths between every pair of vertices i, j that use only the vertices among 1, ..., k as intermediate:
D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}
(either the previous path from i to j, or a path from i to k followed by a path from k to j)

Floyd's Algorithm (example)
For the example digraph (rows and columns in the order a, b, c, d; ∞ means no edge):
D(0) =
0  ∞  3  ∞
2  0  ∞  ∞
∞  7  0  1
6  ∞  ∞  0
D(1) =
0  ∞  3  ∞
2  0  5  ∞
∞  7  0  1
6  ∞  9  0
D(2) =
0  ∞  3  ∞
2  0  5  ∞
9  7  0  1
6  ∞  9  0
D(3) =
0  10  3  4
2   0  5  6
9   7  0  1
6  16  9  0
D(4) =
0  10  3  4
2   0  5  6
7   7  0  1
6  16  9  0

Floyd's Algorithm (pseudocode and analysis)
Time efficiency: Θ(n^3)
Space efficiency: matrices can be written over their predecessors
Note: the shortest paths themselves (not just their lengths) can be found, too.
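A Python sketch of the algorithm and the example instance (the names floyd and INF are ours):

INF = float('inf')

def floyd(weights):
    # All-pairs shortest path distances; weights[i][j] = INF if there is no edge i -> j.
    n = len(weights)
    D = [row[:] for row in weights]             # D(0) = weight matrix
    for k in range(n):                          # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   INF, 3,   INF],   # a
     [2,   0,   INF, INF],   # b
     [INF, 7,   0,   1],     # c
     [6,   INF, INF, 0]]     # d
for row in floyd(W):
    print(row)                # prints the final distance matrix D(4) from the example above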

Optimal Binary Search Trees
Problem: Given n keys a_1 < ... < a_n and probabilities p_1, ..., p_n of searching for them, find a BST with a minimum average number of comparisons in a successful search.
Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.
Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?

DP for Optimal BST Problem
Let C[i,j] be the minimum average number of comparisons made in T[i,j], an optimal BST for keys a_i < ... < a_j, where 1 ≤ i ≤ j ≤ n. Consider an optimal BST among all BSTs with some a_k (i ≤ k ≤ j) as their root; T[i,j] is the best among them. Its left subtree is an optimal BST for a_i, ..., a_(k-1) and its right subtree is an optimal BST for a_(k+1), ..., a_j. Hence
C[i,j] = min over i ≤ k ≤ j of { p_k + sum over s = i..k-1 of p_s (level of a_s in T[i,k-1] + 1) + sum over s = k+1..j of p_s (level of a_s in T[k+1,j] + 1) }

DP for Optimal BST Problem (cont.)
After simplifications, we obtain the recurrence for C[i,j]:
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + (p_i + ... + p_j)  for 1 ≤ i ≤ j ≤ n
C[i,i] = p_i  for 1 ≤ i ≤ n
The table C is filled diagonal by diagonal: the entries C[i,i-1] = 0 and C[i,i] = p_i occupy the first two diagonals, and the goal entry C[1,n] ends up in the upper right corner.

Example:
key:         A    B    C    D
probability: 0.1  0.2  0.4  0.3
The tables below are filled diagonal by diagonal: the left one is filled using the recurrence
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + (p_i + ... + p_j), C[i,i] = p_i;
the right one, for the trees' roots, records the values of k giving the minima.
Main table C[i,j] (rows i = 1..5, columns j = 0..4):
i=1:  0   0.1  0.4  1.1  1.7
i=2:       0   0.2  0.8  1.4
i=3:            0   0.4  1.0
i=4:                 0   0.3
i=5:                      0
Root table R[i,j] (columns j = 1..4):
i=1:  1   2   3   3
i=2:       2   3   3
i=3:            3   3
i=4:                 4
Since C[1,4] = 1.7 and R[1,4] = 3, the optimal BST has C as its root, with B (whose left child is A) as its left subtree and D as its right subtree; the minimum average number of comparisons is 1.7.

Optimal Binary Search Trees

Analysis of DP for Optimal BST Problem
Time efficiency: Θ(n^3), but it can be reduced to Θ(n^2) by taking advantage of the monotonicity of the entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j].
Space efficiency: Θ(n^2)
The method can be extended to include unsuccessful searches.
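A Python sketch of the diagonal-by-diagonal computation described above, run on the A, B, C, D example (the function name optimal_bst is ours; p[0] is unused padding so that keys are indexed from 1):

def optimal_bst(p):
    # p[1..n] are the search probabilities of the keys in sorted order.
    # Returns the C and R tables; C[1][n] is the minimum average number of comparisons.
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                            # single-key tree
        R[i][i] = i
    for d in range(1, n):                         # fill diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            total = sum(p[i:j + 1])               # p_i + ... + p_j
            best, best_k = float('inf'), i
            for k in range(i, j + 1):             # try each key a_k as the root
                cost = C[i][k - 1] + C[k + 1][j]
                if cost < best:
                    best, best_k = cost, k
            C[i][j] = best + total
            R[i][j] = best_k
    return C, R

C, R = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])       # keys A, B, C, D
print(C[1][4], R[1][4])                           # approximately 1.7, and 3: key C is the root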