Dynamic Programming (CLRS 15)


Today we discuss a technique called dynamic programming. It is neither especially dynamic nor especially programming related. We will discuss dynamic programming by looking at two examples.

Matrix-chain multiplication

Problem: Given a sequence of matrices A_1, A_2, A_3, ..., A_n, find the best way (using the minimal number of multiplications) to compute their product.

Isn't there only one way? ((...((A_1 A_2) A_3)...) A_n)

No, matrix multiplication is associative, e.g. A_1 (A_2 (A_3 (...(A_{n-1} A_n)...))) yields the same matrix.

Different multiplication orders do not cost the same:

- Multiplying a p x q matrix A and a q x r matrix B takes p * q * r multiplications; the result is a p x r matrix.
- Consider multiplying a 10 x 100 matrix A_1 with a 100 x 5 matrix A_2 and a 5 x 50 matrix A_3.
- (A_1 A_2) A_3 takes 10*100*5 + 10*5*50 = 7500 multiplications.
- A_1 (A_2 A_3) takes 100*5*50 + 10*100*50 = 75000 multiplications.

In general, let A_i be a p_{i-1} x p_i matrix, so the chain A_1, A_2, A_3, ..., A_n can be represented by the dimensions p_0, p_1, p_2, p_3, ..., p_n.

Let m(i, j) denote the minimal number of multiplications needed to compute A_i A_{i+1} ... A_j. We want to compute m(1, n).

Divide-and-conquer solution / recursive algorithm: Assume that someone tells us the position of the last product, say k. Then we have to compute recursively the best way to multiply the chain from i to k, and from k+1 to j, and add the cost of the final product. This means that

  m(i, j) = m(i, k) + m(k+1, j) + p_{i-1} * p_k * p_j

If no one tells us k, then we have to try all possible values of k and pick the best solution.
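As a quick sanity check of the cost model, the two orders in the example above can be verified directly. This is a small Python sketch (the helper name multiply_cost is mine, not part of the notes):

def multiply_cost(p, q, r):
    # Number of scalar multiplications for a (p x q) times (q x r) product.
    return p * q * r

# (A1 A2) A3: a 10x100 by 100x5 product, then a 10x5 by 5x50 product.
cost_left = multiply_cost(10, 100, 5) + multiply_cost(10, 5, 50)
# A1 (A2 A3): a 100x5 by 5x50 product, then a 10x100 by 100x50 product.
cost_right = multiply_cost(100, 5, 50) + multiply_cost(10, 100, 50)
print(cost_left, cost_right)  # 7500 75000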

Recursive formulation of m(i, j):

  m(i, j) = 0                                                               if i = j
  m(i, j) = min over i <= k < j of { m(i, k) + m(k+1, j) + p_{i-1} p_k p_j }   if i < j

To go from the recursive formulation above to a program is pretty straightforward:

Matrix-chain(i, j)
  IF i = j THEN return 0
  m = ∞
  FOR k = i TO j-1 DO
    q = Matrix-chain(i, k) + Matrix-chain(k+1, j) + p_{i-1} p_k p_j
    IF q < m THEN m = q
  Return m
END Matrix-chain

Return Matrix-chain(1, n)

Running time: exponential is ... SLOW!

  T(n) = sum_{k=1}^{n-1} ( T(k) + T(n-k) + O(1) )
       = 2 * sum_{k=1}^{n-1} T(k) + O(n)
       >= 2 T(n-1)
       >= 4 T(n-2)
       >= ...
       >= 2^{n-1}

The problem is that we compute the same result over and over again.

Example: Recursion tree for Matrix-chain(1, 4)

[Figure: recursion tree rooted at subproblem (1, 4); the same subproblems, such as (1, 2), (2, 3), and (3, 4), appear in several branches.]
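For concreteness, here is a direct Python transcription of the exponential recursive algorithm above (a sketch; the function name matrix_chain and the 0-indexed dimension list p are my own conventions):

import math

def matrix_chain(p, i, j):
    # Minimal number of scalar multiplications to compute A_i ... A_j,
    # where A_k is a p[k-1] x p[k] matrix.  Exponential-time version.
    if i == j:
        return 0
    best = math.inf
    for k in range(i, j):  # k is the position of the last product
        q = matrix_chain(p, i, k) + matrix_chain(p, k + 1, j) + p[i - 1] * p[k] * p[j]
        if q < best:
            best = q
    return best

p = [10, 100, 5, 50]                   # the 10x100, 100x5, 5x50 example from above
print(matrix_chain(p, 1, len(p) - 1))  # 7500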

In the recursion tree above we compute, for example, Matrix-chain(3, 4) twice. The solution is to remember the values we have already computed in a table. This is called memoization.

We'll have a table T[1..n][1..n] such that T[i][j] stores the solution to problem Matrix-chain(i, j). Initially all entries will be set to ∞:

FOR i = 1 to n DO
  FOR j = i to n DO
    T[i][j] = ∞

The code for Matrix-chain(i, j) stays the same, except that it now uses the table. The first thing Matrix-chain(i, j) does is check the table to see if T[i][j] has already been computed. If so, it returns it; otherwise, it computes it and writes it into the table. Below is the updated code.

Matrix-chain(i, j)
  IF T[i][j] < ∞ THEN return T[i][j]
  IF i = j THEN T[i][j] = 0, return 0
  m = ∞
  FOR k = i to j-1 DO
    q = Matrix-chain(i, k) + Matrix-chain(k+1, j) + p_{i-1} p_k p_j
    IF q < m THEN m = q
  T[i][j] = m
  return m
END Matrix-chain

return Matrix-chain(1, n)

Effectively, the table prevents a subproblem Matrix-chain(i, j) from being computed more than once.

Running time: There are Θ(n^2) different calls to Matrix-chain(i, j). The first time a call is made it takes O(n) time, not counting recursive calls. Once a call has been made, it costs O(1) time to make it again. Total: O(n^3) time.

Another way of thinking about it: there are Θ(n^2) entries to fill, and it takes O(n) to fill one.
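The same memoization idea in Python (a sketch with my own names; the table here is a dictionary keyed by (i, j) instead of a 2D array initialized to ∞):

import math

def matrix_chain_memo(p, i, j, table=None):
    # Same recursion as before, but each subproblem (i, j) is solved only once.
    if table is None:
        table = {}
    if (i, j) in table:
        return table[(i, j)]
    if i == j:
        return 0
    best = math.inf
    for k in range(i, j):
        q = (matrix_chain_memo(p, i, k, table)
             + matrix_chain_memo(p, k + 1, j, table)
             + p[i - 1] * p[k] * p[j])
        if q < best:
            best = q
    table[(i, j)] = best
    return best

p = [10, 100, 5, 50]
print(matrix_chain_memo(p, 1, len(p) - 1))  # 7500, now in O(n^3) time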

Dynamic Programming without recursion

Often dynamic programming is presented as filling up a table from the bottom, in a way that makes recursion unnecessary. Avoiding recursion is perhaps elegant, and it was necessary decades ago when programming languages did not include support for recursion. Personally, I like top-down recursive thinking, and I think compilers are pretty effective at optimizing recursion.

Solution for Matrix-chain without recursion:

- The key is that m(i, j) only depends on m(i, k) and m(k+1, j) where i <= k < j; if we have computed those, we can compute m(i, j).
- We can easily compute m(i, i) for all 1 <= i <= n: m(i, i) = 0.
- Then we can easily compute m(i, i+1) for all 1 <= i <= n-1:
    m(i, i+1) = m(i, i) + m(i+1, i+1) + p_{i-1} p_i p_{i+1}
- Then we can compute m(i, i+2) for all 1 <= i <= n-2:
    m(i, i+2) = min{ m(i, i) + m(i+1, i+2) + p_{i-1} p_i p_{i+2},
                     m(i, i+1) + m(i+2, i+2) + p_{i-1} p_{i+1} p_{i+2} }
- ...
- Until we compute m(1, n).

[Figure: computation order in the table indexed by i and j; entries are filled diagonal by diagonal, in order of increasing chain length j - i.]

Program:

FOR i = 1 to n DO
  T[i][i] = 0
FOR l = 1 to n-1 DO
  FOR i = 1 to n-l DO
    j = i + l
    T[i][j] = ∞
    FOR k = i to j-1 DO
      q = T[i][k] + T[k+1][j] + p_{i-1} p_k p_j
      IF q < T[i][j] THEN T[i][j] = q

Analysis: O(n^2) entries, O(n) time to compute each, so O(n^3) total.

Extensions

Above we only computed the cost of the best way to multiply the chain (the smallest number of operations). The algorithm can be extended to compute the actual order of multiplications corresponding to this optimal cost (we'll do this as homework or an in-class exercise). This is a simplification that is often made: focus on computing the optimal cost, and leave the details of computing the solution corresponding to that optimal cost for later.
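Here is a bottom-up Python sketch of the table-filling loop above (names are mine; as in the notes, it computes only the optimal cost and does not record the actual parenthesization):

import math

def matrix_chain_bottom_up(p):
    # Bottom-up matrix-chain cost; A_i is a p[i-1] x p[i] matrix, i = 1..n.
    n = len(p) - 1
    T = {(i, i): 0 for i in range(1, n + 1)}
    for l in range(1, n):                  # chain length l = j - i
        for i in range(1, n - l + 1):
            j = i + l
            T[(i, j)] = math.inf
            for k in range(i, j):
                q = T[(i, k)] + T[(k + 1, j)] + p[i - 1] * p[k] * p[j]
                if q < T[(i, j)]:
                    T[(i, j)] = q
    return T[(1, n)]

print(matrix_chain_bottom_up([10, 100, 5, 50]))  # 7500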

0-1 Knapsack problem

Problem: Given n items, with item i being worth v_i and having weight w_i pounds, fill a knapsack of capacity W pounds with maximal value.

Perhaps a greedy strategy of picking the item with the biggest value per pound might work? Here is a counter-example showing that this does not work. Take three items, listed in value-per-pound order: item 1 is worth $60 and weighs 10 pounds, item 2 is worth $100 and weighs 20 pounds, and item 3 is worth $120 and weighs 30 pounds. For a knapsack of size 50, the greedy solution takes items 1 and 2 for $160, while the optimal solution takes items 2 and 3 for $220.

Note: In the fractional knapsack problem we can additionally take 2/3 of the $120 item and get a $240 solution.

Now we'll show that the 0-1 knapsack problem can be solved in time O(n * W) using dynamic programming. Often the hardest part is coming up with the recursive formulation.

Let us denote by optknapsack(k, w) the maximal value obtainable when filling a knapsack of capacity w using items among items 1 through k. To solve our problem we need to compute optknapsack(n, W).

The idea is to consider each item, one at a time. Let's take item k: either it's part of the optimal solution, or not. We need to compute both options and choose the better one.

optknapsack(k, w)
  IF k = 0 THEN RETURN 0
  IF weight[k] <= w THEN with = value[k] + optknapsack(k-1, w - weight[k])
  ELSE with = 0
  END IF
  without = optknapsack(k-1, w)
  RETURN max{with, without}
END optknapsack

Analysis: Let T(n, W) be the running time of optknapsack(n, W).

  T(n, W) = T(n-1, W) + T(n-1, W - w[n]) + Θ(1)

We'll look at the worst case where w[i] = 1 for all 1 <= i <= n. If w[i] = 1, then it is clear that T(n, W) >= 2 * T(n-1, W-1). This recurrence, which runs for min(n, W) steps, gives T(n, W) = Ω(2^{min(n, W)}).
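A Python sketch of this recursion (names are mine; the k = 0 base case returns 0 for an empty set of items):

def opt_knapsack(values, weights, k, w):
    # Max value using items 1..k (1-indexed) within capacity w.  Exponential time.
    if k == 0:
        return 0
    if weights[k - 1] <= w:
        with_k = values[k - 1] + opt_knapsack(values, weights, k - 1, w - weights[k - 1])
    else:
        with_k = 0
    without_k = opt_knapsack(values, weights, k - 1, w)
    return max(with_k, without_k)

# The greedy counter-example from above: $60/10 lb, $100/20 lb, $120/30 lb, capacity 50.
values = [60, 100, 120]
weights = [10, 20, 30]
print(opt_knapsack(values, weights, 3, 50))  # 220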

We now show how to improve the exponential running time with dynamic programming. We create a table T of size [1..n][1..W] in which to store the results of prior runs. Entry T[k][w] will store the result of optknapsack(k, w). First we initialize all entries in the table to 0 (in this problem we are looking for max values and all item values are positive, so 0 is safe). We modify the algorithm to check this table before launching into computing the solution.

optknapsack(k, w)
  IF k = 0 THEN RETURN 0
  IF table[k][w] != 0 THEN RETURN table[k][w]
  IF weight[k] <= w THEN with = value[k] + optknapsack(k-1, w - weight[k])
  ELSE with = 0
  END IF
  without = optknapsack(k-1, w)
  table[k][w] = max{with, without}
  RETURN max{with, without}
END optknapsack

Effectively, the table prevents a subproblem optknapsack(k, w) from being computed more than once.

Analysis: This will run in O(n * W) time, as we fill each entry in the table at most once and there are n * W entries in the table.
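A memoized Python version of the knapsack recursion (a sketch; instead of a table initialized to 0 I use a dictionary keyed by (k, w), which also works if some item values are 0):

def opt_knapsack_memo(values, weights, k, w, table=None):
    # Max value using items 1..k within capacity w; each subproblem (k, w) is solved once.
    if table is None:
        table = {}
    if k == 0:
        return 0
    if (k, w) in table:
        return table[(k, w)]
    if weights[k - 1] <= w:
        with_k = values[k - 1] + opt_knapsack_memo(values, weights, k - 1, w - weights[k - 1], table)
    else:
        with_k = 0
    without_k = opt_knapsack_memo(values, weights, k - 1, w, table)
    table[(k, w)] = max(with_k, without_k)
    return table[(k, w)]

values = [60, 100, 120]
weights = [10, 20, 30]
print(opt_knapsack_memo(values, weights, 3, 50))  # 220, in O(nW) time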