Algorithms Design & Analysis. Dynamic Programming


Recap: the divide-and-conquer design paradigm

Today's topics: the dynamic-programming design paradigm, assembly-line scheduling, matrix-chain multiplication, and the elements of dynamic programming.

Decision problem. Definition: a problem whose answer is simply yes or no. Examples: binary search, reachability.

Optimization problem. Definition: each feasible solution has an associated value, and we wish to find a feasible solution with the best value. Examples: shortest paths, the closest-pair problem.

Assembly-line scheduling A manufacturing problem. [Figure: two assembly lines, each with stations S_{i,1}, S_{i,2}, ..., S_{i,n}; a chassis enters, passes through station j of one line or the other for j = 1, ..., n, and the completed auto exits.] Notation: a_{i,j} is the assembly time at station S_{i,j}; t_{i,j} is the transfer time off line i after station j; e_i is the entry time onto line i; x_i is the exit time from line i.

Example. Entry times e_1 = 2, e_2 = 4; exit times x_1 = 3, x_2 = 2. Assembly times on line 1: 7, 9, 3, 4, 8, 4; on line 2: 8, 5, 6, 4, 5, 7. Transfer times off line 1: 2, 3, 1, 3, 4; off line 2: 2, 1, 2, 2, 1. The resulting tables:

  j        1   2   3   4   5   6
  f_1[j]   9  18  20  24  32  35
  f_2[j]  12  16  22  25  30  37
  l_1[j]   -   1   2   1   1   2
  l_2[j]   -   1   2   1   2   2

  f* = 38, l* = 1

Brute-force algorithm IDEA: Given a list saying which stations to use on line 1 and which on line 2, it is easy to compute in Θ(n) time how long it takes a chassis to pass through the factory. Brute-force algorithm: exhaustively check all possible lists. Running time = Ω(2^n). Unlucky! There are 2 choices for each of the n stations!
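As a sanity check, here is a minimal Python sketch of that brute force. The function name and the 0-indexed array layout are mine, not from the slides: a[i][j] is the assembly time at station j of line i (i in {0, 1}), t[i][j] is the transfer time when leaving line i after station j, and e[i], x[i] are the entry and exit times.

    from itertools import product

    def brute_force_fastest(a, t, e, x):
        n = len(a[0])
        best = float("inf")
        for lines in product((0, 1), repeat=n):  # all 2^n station-to-line assignments
            cost = e[lines[0]] + a[lines[0]][0]
            for j in range(1, n):
                if lines[j] != lines[j - 1]:     # switching lines costs a transfer
                    cost += t[lines[j - 1]][j - 1]
                cost += a[lines[j]][j]
            best = min(best, cost + x[lines[-1]])
        return best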

Optimal structure Observation: the fastest way through station S_{1,j} is either 1. the fastest way through S_{1,j-1} and then directly through S_{1,j}, or 2. the fastest way through S_{2,j-1}, a transfer from line 2 to line 1, and then through S_{1,j}. So to find the fastest way through station j of either line, we solve the subproblems of finding the fastest ways through station j-1 on both lines.

Recursive solution For the first station, f_1[1] = e_1 + a_{1,1} and f_2[1] = e_2 + a_{2,1}. The recurrence:

  f_1[j] = e_1 + a_{1,1}                                            if j = 1
  f_1[j] = min( f_1[j-1] + a_{1,j},  f_2[j-1] + t_{2,j-1} + a_{1,j} )   if j ≥ 2

  f_2[j] = e_2 + a_{2,1}                                            if j = 1
  f_2[j] = min( f_2[j-1] + a_{2,j},  f_1[j-1] + t_{1,j-1} + a_{2,j} )   if j ≥ 2

For the final answer, f* = min( f_1[n] + x_1, f_2[n] + x_2 ).

Find the fastest way

  FASTEST-WAY(a, t, e, x, n)
    f_1[1] ← e_1 + a_{1,1};  f_2[1] ← e_2 + a_{2,1}
    for j ← 2 to n
      do if f_1[j-1] + a_{1,j} ≤ f_2[j-1] + t_{2,j-1} + a_{1,j}
           then f_1[j] ← f_1[j-1] + a_{1,j};  l_1[j] ← 1
           else f_1[j] ← f_2[j-1] + t_{2,j-1} + a_{1,j};  l_1[j] ← 2
         if f_2[j-1] + a_{2,j} ≤ f_1[j-1] + t_{1,j-1} + a_{2,j}
           then f_2[j] ← f_2[j-1] + a_{2,j};  l_2[j] ← 2
           else f_2[j] ← f_1[j-1] + t_{1,j-1} + a_{2,j};  l_2[j] ← 1
    if f_1[n] + x_1 ≤ f_2[n] + x_2
      then f* ← f_1[n] + x_1;  l* ← 1
      else f* ← f_2[n] + x_2;  l* ← 2
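For comparison, a runnable Python version of FASTEST-WAY, a sketch under the same 0-indexed conventions as the brute force above (0 stands for line 1 and 1 for line 2), checked against the example instance:

    def fastest_way(a, t, e, x):
        n = len(a[0])
        f = [[0] * n for _ in range(2)]  # f[i][j]: fastest time through station j of line i
        l = [[0] * n for _ in range(2)]  # l[i][j]: line used at station j-1 on that fastest way
        f[0][0] = e[0] + a[0][0]
        f[1][0] = e[1] + a[1][0]
        for j in range(1, n):
            for i in (0, 1):
                stay = f[i][j - 1] + a[i][j]                          # stay on line i
                switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]  # transfer from the other line
                if stay <= switch:
                    f[i][j], l[i][j] = stay, i
                else:
                    f[i][j], l[i][j] = switch, 1 - i
        if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
            return f[0][n - 1] + x[0], 0
        return f[1][n - 1] + x[1], 1

    # The instance from the example above:
    a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
    t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
    e, x = [2, 4], [3, 2]
    print(fastest_way(a, t, e, x))  # (38, 0): f* = 38, exiting from line 1

One pass over the stations suffices, so the running time drops from Ω(2^n) to Θ(n).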

Example The same instance as before: running FASTEST-WAY fills in the f and l tables exactly as shown above, giving f* = 38 and l* = 1.

Design paradigm 1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value of an optimal solution in a bottom-up fashion. 4. Construct an optimal solution from the computed information.

Matrix-chain multiplication Input: a sequence of n matrices A_1, A_2, ..., A_n. Output: the product B = A_1 A_2 ... A_n. Example of the dimension chain: B_{m×s} = A_{m×n} A_{n×o} ... A_{r×s} (adjacent inner dimensions must agree).

Multiplication of two matrices

  MATRIX-MULTIPLY(A, B)
    if columns[A] ≠ rows[B]
      then error "incompatible dimensions"
      else for i ← 1 to rows[A]
             do for j ← 1 to columns[B]
                  do C[i, j] ← 0
                     for k ← 1 to columns[A]
                       do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
           return C

Suppose A is a p × q matrix and B is a q × r matrix. Running time = Θ(p q r).

Naïve algorithm Naïve algorithm: multiply the matrices from left to right. Suppose the n matrices have dimensions p_0 × p_1, p_1 × p_2, ..., p_{n-1} × p_n (a generalization of the two-matrix case). The left-to-right cost is p_0 p_1 p_2 + p_0 p_2 p_3 + ... + p_0 p_{n-1} p_n = O(p_0 p_1 ... p_n). Is there a better way to reduce the number of scalar multiplications?

Matrix-chain: Example Four matrices A_1 A_2 A_3 A_4 with A_1: 15 × 5, A_2: 5 × 10, A_3: 10 × 20, A_4: 20 × 25. By the associative law, (A_1 A_2) A_3 = A_1 (A_2 A_3), so every parenthesization yields the same product, but not the same cost:

  ((A_1 A_2) A_3) A_4 : 15·5·10 + 15·10·20 + 15·20·25 = 11250
  (A_1 A_2)(A_3 A_4)  : 15·5·10 + 10·20·25 + 15·10·25 = 9500
  (A_1 (A_2 A_3)) A_4 : 5·10·20 + 15·5·20 + 15·20·25 = 10000
  A_1 ((A_2 A_3) A_4) : 5·10·20 + 5·20·25 + 15·5·25 = 5375   ← minimal number of multiplications!
  A_1 (A_2 (A_3 A_4)) : 10·20·25 + 5·10·25 + 15·5·25 = 8125
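These five totals are easy to verify mechanically; a small Python check using the dimension vector above (the names p and cost are mine):

    p = [15, 5, 10, 20, 25]  # p[i-1] x p[i] is the shape of A_i

    def cost(i, k, j):
        return p[i] * p[k] * p[j]  # multiply a (p_i x p_k) matrix by a (p_k x p_j) matrix

    print(cost(0, 1, 2) + cost(0, 2, 3) + cost(0, 3, 4))  # ((A1 A2) A3) A4 -> 11250
    print(cost(0, 1, 2) + cost(2, 3, 4) + cost(0, 2, 4))  # (A1 A2) (A3 A4) -> 9500
    print(cost(1, 2, 3) + cost(0, 1, 3) + cost(0, 3, 4))  # (A1 (A2 A3)) A4 -> 10000
    print(cost(1, 2, 3) + cost(1, 3, 4) + cost(0, 1, 4))  # A1 ((A2 A3) A4) -> 5375
    print(cost(2, 3, 4) + cost(1, 2, 4) + cost(0, 1, 4))  # A1 (A2 (A3 A4)) -> 8125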

Brute-force algorithm IDEA: Find a parenthesization that minimizes the number of scalar multiplications for n matrices ⟨A_1, ..., A_n⟩ with dimensions p_{i-1} × p_i, for 1 ≤ i ≤ n. Brute-force algorithm: exhaustively check all possible parenthesizations. What is the running time? It depends on the number of parenthesizations!

Number of parenthesizations The recurrence for the number of parenthesizations: for a single matrix there is only one; for a sequence of n ≥ 2 matrices, split it between the k-th and (k+1)-st matrices and parenthesize the two subsequences recursively:

  P(n) = 1                              if n = 1
  P(n) = Σ_{k=1}^{n-1} P(k) P(n-k)      if n ≥ 2

P(n) = C(n-1), where C(n) = (1/(n+1)) · (2n choose n) = Ω(4^n / n^{3/2}) is the Catalan number. Unlucky! It is exponential in n.
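A quick way to see the growth is to evaluate the recurrence directly; a small Python sketch (memoized only so the check itself runs fast):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def P(n):
        # number of parenthesizations of a chain of n matrices
        if n == 1:
            return 1
        return sum(P(k) * P(n - k) for k in range(1, n))

    print([P(n) for n in range(1, 9)])  # [1, 1, 2, 5, 14, 42, 132, 429]

These are exactly the Catalan numbers C(0), C(1), ..., C(7).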

More clever way Notation: A_{i..j} denotes the product A_i A_{i+1} ... A_j, where i ≤ j. Given an optimal parenthesization of A_{i..j}, there must be a k with i ≤ k < j such that A_{i..j} = (A_{i..k})(A_{k+1..j}). The cost of computing A_{i..j} is then the cost of computing A_{i..k}, plus the cost of computing A_{k+1..j}, plus the cost of multiplying (A_{i..k})(A_{k+1..j}).

More clever way (cont.) Assertion: suppose an optimal parenthesization of A_{i..j} splits it at k. Then the subchain A_{i..k} within this optimal parenthesization of A_{i..j} must itself be an optimal parenthesization of A_{i..k}. Why?

Recursive solution Notation: let m[i, j] be the minimum number of scalar multiplications needed to compute A_{i..j}. If the chain is a single matrix A_i, then m[i, i] = 0. If an optimal parenthesization of A_{i..j} splits at k, then m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j. Finally,

  m[i, j] = 0                                                      if i = j
  m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }   if i < j

Recursive Algorithm

  RECURSIVE-MATRIX-CHAIN(p, i, j)   // p = ⟨p_0, p_1, ..., p_n⟩
    if i = j then return 0
    m[i, j] ← ∞
    for k ← i to j - 1   // compute the minimum cost over all splits
      do q ← RECURSIVE-MATRIX-CHAIN(p, i, k) + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
         if q < m[i, j] then m[i, j] ← q
    return m[i, j]

The great moment The running time of RECURSIVE-MATRIX-CHAIN satisfies the recurrence

  T(1) ≥ 1,
  T(n) ≥ 1 + Σ_{k=1}^{n-1} ( T(k) + T(n-k) + 1 )   for n > 1.

Theorem. The running time satisfies T(n) ≥ 2^{n-1}. Exponential time: it is too bad!

The great moment (cont.) Proof (induction). Base case: T(1) ≥ 1 = 2^0. For n > 1, suppose T(i) ≥ 2^{i-1} for all i < n. Then

  T(n) ≥ 2 Σ_{i=1}^{n-1} T(i) + n
       ≥ 2 Σ_{i=1}^{n-1} 2^{i-1} + n
       = 2 (2^{n-1} - 1) + n
       = 2^n - 2 + n
       ≥ 2^{n-1}.

This is totally frustrating! We need to find a better way!

Bottom-up method

  MATRIX-CHAIN-ORDER(p)
    n ← length[p] - 1
    for i ← 1 to n
      do m[i, i] ← 0   // initialize the diagonal entries
    for l ← 2 to n   // l is the chain length
      do for i ← 1 to n - l + 1
           do j ← i + l - 1   // j is i + chain length - 1
              m[i, j] ← ∞
              for k ← i to j - 1   // based on previously computed results
                do q ← m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
                   if q < m[i, j]   // choose the minimum
                     then m[i, j] ← q;  s[i, j] ← k
    return m and s
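A direct Python transcription of MATRIX-CHAIN-ORDER (a sketch; the tables stay 1-indexed by padding row and column 0), run on the six-matrix instance of the example below:

    import math

    def matrix_chain_order(p):
        n = len(p) - 1                               # number of matrices
        m = [[0] * (n + 1) for _ in range(n + 1)]    # m[i][j]: min scalar mults for A_i..A_j
        s = [[0] * (n + 1) for _ in range(n + 1)]    # s[i][j]: split k achieving m[i][j]
        for l in range(2, n + 1):                    # l is the chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = math.inf
                for k in range(i, j):                # try every split point
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j], s[i][j] = q, k
        return m, s

    m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
    print(m[1][6])  # 15125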

The truly great moment Correctness? Space complexity: Θ(n²). The table m[1..n, 1..n] stores the costs m[i, j], and s[1..n, 1..n] records the index k that achieves the optimal cost in computing m[i, j]. Time complexity: Θ(n³).

Example Given six matrices A_1: 30 × 35, A_2: 35 × 15, A_3: 15 × 5, A_4: 5 × 10, A_5: 10 × 20, A_6: 20 × 25, so p = ⟨30, 35, 15, 5, 10, 20, 25⟩.

Example (cont.) Computation of m[2, 5]:

  m[2, 5] = min(
    m[2, 2] + m[3, 5] + p_1 p_2 p_5 = 0 + 2500 + 35·15·20 = 13000,
    m[2, 3] + m[4, 5] + p_1 p_3 p_5 = 2625 + 1000 + 35·5·20 = 7125,
    m[2, 4] + m[5, 5] + p_1 p_4 p_5 = 4375 + 0 + 35·10·20 = 11375
  ) = 7125

Constructing the optimal solution

  MATRIX-CHAIN-MULTIPLY(A, s, i, j)
    if j > i
      then X ← MATRIX-CHAIN-MULTIPLY(A, s, i, s[i, j])
           Y ← MATRIX-CHAIN-MULTIPLY(A, s, s[i, j] + 1, j)
           return MATRIX-MULTIPLY(X, Y)
      else return A_i

It is a recursive algorithm.
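Instead of multiplying actual matrices, the same recursion can simply print the parenthesization recorded in s; a small sketch (reusing the s table from the Python code above):

    def parenthesize(s, i, j):
        if i == j:
            return f"A{i}"
        k = s[i][j]  # optimal split recorded by MATRIX-CHAIN-ORDER
        return f"({parenthesize(s, i, k)} {parenthesize(s, k + 1, j)})"

    print(parenthesize(s, 1, 6))  # ((A1 (A2 A3)) ((A4 A5) A6))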

Elements of dynamic programming Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems. Overlapping subproblems: solving a subproblem leads to the same subproblems over and over.

Pattern to discover optimal substructure A solution to the problem depends on one or more subproblems to be solved. Assume the optimal solution has been given. Determine which subproblems ensue and characterize the resulting space of subproblems. Show that the solutions to the subproblems used within the optimal solution must themselves be optimal, using a "cut-and-paste" argument.

Overlapping subproblems [Figure: the recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p); the same subproblems appear over and over.]

Memoized recursive algorithm Bottom-up vs. top-down: memoize each intermediate result when it is first encountered. Memoized method:

  MEMOIZED-MATRIX-CHAIN(p)
    n ← length[p] - 1
    for i ← 1 to n
      do for j ← i to n
           do m[i, j] ← ∞
    return LOOKUP-CHAIN(p, 1, n)

Memoized recursive algorithm (cont.)

  LOOKUP-CHAIN(p, i, j)
    if m[i, j] < ∞   // look up before computing
      then return m[i, j]
    if i = j
      then m[i, j] ← 0
      else for k ← i to j - 1
             do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
                if q < m[i, j] then m[i, j] ← q
    return m[i, j]
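The same memoized scheme in runnable Python (a sketch; a nested lookup closure stands in for the global m table of the pseudocode):

    import math

    def memoized_matrix_chain(p):
        n = len(p) - 1
        m = [[math.inf] * (n + 1) for _ in range(n + 1)]  # inf means "not yet computed"

        def lookup(i, j):
            if m[i][j] < math.inf:  # already solved: just look it up
                return m[i][j]
            if i == j:
                m[i][j] = 0
            else:
                for k in range(i, j):
                    q = lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
            return m[i][j]

        return lookup(1, n)

    print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))  # 15125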

Memoized recursive algorithm (cont.) Running time: T(n) = O(n³). Memoization vs. bottom-up: bottom-up wins if every subproblem must be solved at least once; top-down wins if some subproblems need not be solved at all.

Dynamic programming vs. divide-and-conquer Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. Dynamic programming is applicable when the subproblems are not independent. Each subproblem is solved only once and the result is stored in a table, avoiding the work of recomputing it.

Dynamic programming vs. divide-and-conquer A consequence is that there must be relatively few distinct subproblems for the table to be computed efficiently. Under such circumstances, dynamic programming can transform an exponential-time algorithm into a polynomial-time one. The name "dynamic programming" is historical: it refers to computing the table.

Greedy algorithms: next week.