CSE 431/531: Analysis of Algorithms
Dynamic Programming
Lecturer: Shi Li
Department of Computer Science and Engineering, University at Buffalo

Paradigms for Designing Algorithms

Greedy algorithm
- Make a greedy choice
- Prove that the greedy choice is safe
- Reduce the problem to a sub-problem and solve it iteratively

Divide-and-conquer
- Break a problem into many independent sub-problems
- Solve each sub-problem separately
- Combine solutions for sub-problems to form a solution for the original one
- Usually used to design more efficient algorithms

Paradigms for Designing Algorithms

Dynamic Programming
- Break up a problem into many overlapping sub-problems
- Build solutions for larger and larger sub-problems
- Use a table to store solutions for sub-problems for reuse

Recall: Computing the n-th Fibonacci Number
- F_0 = 0, F_1 = 1
- F_n = F_{n-1} + F_{n-2} for n ≥ 2
- Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

Fib(n)
1 F[0] ← 0
2 F[1] ← 1
3 for i ← 2 to n do
4   F[i] ← F[i-1] + F[i-2]
5 return F[n]

Store each F[i] for future use.
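To make the table-filling pattern concrete, here is a direct Python rendering of Fib(n) (a minimal sketch; the function name is ours):

def fib(n):
    # F[i] stores the i-th Fibonacci number once it is computed
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(fib(11))  # 89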

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Recall: Interval Scheduling
Input: n jobs; job i has a start time s_i, a finish time f_i, and a weight (or value) v_i > 0; jobs i and j are compatible if [s_i, f_i) and [s_j, f_j) are disjoint
Output: a maximum-weight subset of mutually compatible jobs

[Figure: 9 weighted jobs on the timeline 0..9 with weights 100, 50, 30, 25, 50, 90, 80, 80, 70; optimum value = 220]

Hard to Design a Greedy Algorithm
Q: Which job is safe to schedule?
- Job with the earliest finish time? No: we are ignoring the weights.
- Job with the largest weight? No: we are ignoring the times.
- Job with the largest weight/length ratio? No: when all weights are equal, this picks the shortest job, which is not always safe.

[Figure: counterexample intervals on the timeline 0..9]

Designing a Dynamic Programming Algorithm
- Sort jobs according to non-decreasing order of finish times
- opt[i]: optimal value for the instance containing only jobs {1, 2, ..., i}

[Figure: the 9 jobs, numbered by finish time, with weights 80, 100, 90, 25, 50, 70, 80, 50, 30]

i:      0  1   2    3    4    5    6    7    8    9
opt[i]: 0  80  100  100  105  150  170  185  220  220

Designing a Dynamic Programming Algorithm

[Figure: the 9 jobs on the timeline, sorted by finish time]

- Focus on the instance {1, 2, 3, ..., i}; opt[i]: optimal value for this instance
- Assume we have computed opt[0], opt[1], ..., opt[i-1]

Q: The value of the optimal solution that does not contain i?
A: opt[i-1]

Q: The value of the optimal solution that contains job i?
A: v_i + opt[p_i], where p_i is the largest j such that f_j ≤ s_i

Designing a Dynamic Programming Algorithm
Q: The value of the optimal solution that does not contain i?
A: opt[i-1]

Q: The value of the optimal solution that contains job i?
A: v_i + opt[p_i], where p_i is the largest j such that f_j ≤ s_i

Recursion for opt[i]: opt[i] = max{opt[i-1], v_i + opt[p_i]}

Designing a Dynamic Programming Algorithm
Recursion for opt[i]: opt[i] = max{opt[i-1], v_i + opt[p_i]}

[Figure: the 9 jobs on the timeline]

opt[0] = 0
opt[1] = max{opt[0], 80 + opt[0]} = 80
opt[2] = max{opt[1], 100 + opt[0]} = 100
opt[3] = max{opt[2], 90 + opt[0]} = 100
opt[4] = max{opt[3], 25 + opt[1]} = 105
opt[5] = max{opt[4], 50 + opt[3]} = 150

Designing a Dynamic Programming Algorithm
Recursion for opt[i]: opt[i] = max{opt[i-1], v_i + opt[p_i]}

[Figure: the 9 jobs on the timeline]

opt[0] = 0, opt[1] = 80, opt[2] = 100
opt[3] = 100, opt[4] = 105, opt[5] = 150
opt[6] = max{opt[5], 70 + opt[3]} = 170
opt[7] = max{opt[6], 80 + opt[4]} = 185
opt[8] = max{opt[7], 50 + opt[6]} = 220
opt[9] = max{opt[8], 30 + opt[7]} = 220

Recursive Algorithm to Compute opt[n]
1 sort jobs by non-decreasing order of finish times
2 compute p_1, p_2, ..., p_n
3 return compute-opt(n)

compute-opt(i)
1 if i = 0 then
2   return 0
3 else
4   return max{compute-opt(i-1), v_i + compute-opt(p_i)}

- Running time can be exponential in n
- Reason: we compute each opt[i] many times
- Solution: store the value of opt[i], so it is computed only once

Memoized Recursive Algorithm
1 sort jobs by non-decreasing order of finish times
2 compute p_1, p_2, ..., p_n
3 opt[0] ← 0 and opt[i] ← ⊥ for every i = 1, 2, 3, ..., n
4 return compute-opt(n)

compute-opt(i)
1 if opt[i] = ⊥ then
2   opt[i] ← max{compute-opt(i-1), v_i + compute-opt(p_i)}
3 return opt[i]

Running time:
- sorting: O(n lg n)
- computing p: O(n lg n) via binary search
- computing opt[n]: O(n)

Dynamic Programming
1 sort jobs by non-decreasing order of finish times
2 compute p_1, p_2, ..., p_n
3 opt[0] ← 0
4 for i ← 1 to n
5   opt[i] ← max{opt[i-1], v_i + opt[p_i]}

Running time:
- sorting: O(n lg n)
- computing p: O(n lg n) via binary search
- computing opt[n]: O(n)
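The whole pipeline fits in a few lines of Python; this sketch follows the pseudocode above (the function name and the (start, finish, value) input format are our own choices), computing each p_i by binary search over the sorted finish times:

import bisect

def weighted_interval_scheduling(jobs):
    # jobs: list of (start, finish, value); job i occupies [start, finish)
    jobs = sorted(jobs, key=lambda job: job[1])   # non-decreasing finish times
    finish = [f for _, f, _ in jobs]
    n = len(jobs)
    # p[i] = largest j with f_j <= s_i (0 if there is no such job)
    p = [0] * (n + 1)
    for i in range(1, n + 1):
        p[i] = bisect.bisect_right(finish, jobs[i - 1][0])
    opt = [0] * (n + 1)
    for i in range(1, n + 1):
        v_i = jobs[i - 1][2]
        opt[i] = max(opt[i - 1], v_i + opt[p[i]])
    return opt[n]

# jobs 1 and 3 are compatible: 80 + 30 = 110 beats the single job of value 100
print(weighted_interval_scheduling([(0, 3, 80), (2, 5, 100), (4, 7, 30)]))  # 110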

How Can We Recover the Optimum Schedule?
1 sort jobs by non-decreasing order of finish times
2 compute p_1, p_2, ..., p_n
3 opt[0] ← 0
4 for i ← 1 to n
5   if opt[i-1] ≥ v_i + opt[p_i]
6     opt[i] ← opt[i-1]
7     b[i] ← "N"
8   else
9     opt[i] ← v_i + opt[p_i]
10    b[i] ← "Y"

1 i ← n, S ← ∅
2 while i ≠ 0
3   if b[i] = "N"
4     i ← i-1
5   else
6     S ← S ∪ {i}
7     i ← p_i
8 return S

Recovering Optimum Schedule: Example

i:      0  1   2    3    4    5    6    7    8    9
opt[i]: 0  80  100  100  105  150  170  185  220  220
b[i]:   -  Y   Y    N    Y    Y    Y    Y    Y    N

[Figure: the traceback through the jobs; starting from i = 9, it recovers the optimum schedule S = {2, 6, 8} with value 100 + 70 + 50 = 220]

Dynamic Programming
- Break up a problem into many overlapping sub-problems
- Build solutions for larger and larger sub-problems
- Use a table to store solutions for sub-problems for reuse

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Subset Sum Problem
Input: an integer bound W > 0; a set of n items, each with an integer weight w_i > 0
Output: a subset S of items that maximizes Σ_{i∈S} w_i subject to Σ_{i∈S} w_i ≤ W

Motivation: you have budget W and want to buy a subset of items, so as to spend as much money as possible.

Example: W = 35, n = 5, w = (14, 9, 17, 10, 13)
Optimum: S = {1, 2, 4} with 14 + 9 + 10 = 33

Greedy Algorithms for Subset Sum
Candidate algorithm:
- Sort according to non-increasing order of weights
- Select items in this order as long as the total weight stays at most W

Q: Does the candidate algorithm always produce optimal solutions?
A: No. W = 100, n = 3, w = (51, 50, 50).

Q: What if we change non-increasing to non-decreasing?
A: No. W = 100, n = 3, w = (1, 50, 50).

Design a Dynamic Programming Algorithm
- Consider the instance: i, W′, (w_1, w_2, ..., w_i); opt[i, W′]: the optimum value of this instance

Q: The value of the optimum solution that does not contain i?
A: opt[i-1, W′]

Q: The value of the optimum solution that contains i?
A: opt[i-1, W′ - w_i] + w_i

Dynamic Programming
- Consider the instance: i, W′, (w_1, w_2, ..., w_i); opt[i, W′]: the optimum value of this instance

opt[i, W′] =
  0                                                 if i = 0
  opt[i-1, W′]                                      if i > 0 and w_i > W′
  max{ opt[i-1, W′], opt[i-1, W′ - w_i] + w_i }     if i > 0 and w_i ≤ W′

Dynamic Programming
1 for W′ ← 0 to W
2   opt[0, W′] ← 0
3 for i ← 1 to n
4   for W′ ← 0 to W
5     opt[i, W′] ← opt[i-1, W′]
6     if w_i ≤ W′ and opt[i-1, W′ - w_i] + w_i ≥ opt[i, W′] then
7       opt[i, W′] ← opt[i-1, W′ - w_i] + w_i
8 return opt[n, W]
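In Python the two-dimensional table can be filled exactly as in the pseudocode (a sketch; the function name is ours):

def subset_sum(W, w):
    # opt[i][b] = maximum total weight of a subset of items 1..i with budget b
    n = len(w)
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(W + 1):
            opt[i][b] = opt[i - 1][b]
            if w[i - 1] <= b:
                opt[i][b] = max(opt[i][b], opt[i - 1][b - w[i - 1]] + w[i - 1])
    return opt[n][W]

print(subset_sum(35, [14, 9, 17, 10, 13]))  # 33, matching the example above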

Recover the Optimum Set
1 for W′ ← 0 to W
2   opt[0, W′] ← 0
3 for i ← 1 to n
4   for W′ ← 0 to W
5     opt[i, W′] ← opt[i-1, W′]
6     b[i, W′] ← "N"
7     if w_i ≤ W′ and opt[i-1, W′ - w_i] + w_i ≥ opt[i, W′] then
8       opt[i, W′] ← opt[i-1, W′ - w_i] + w_i
9       b[i, W′] ← "Y"
10 return opt[n, W]

Recover the Optimum Set
1 i ← n, W′ ← W, S ← ∅
2 while i > 0
3   if b[i, W′] = "Y" then
4     W′ ← W′ - w_i
5     S ← S ∪ {i}
6   i ← i - 1
7 return S

Running Time of Algorithm
1 for W′ ← 0 to W
2   opt[0, W′] ← 0
3 for i ← 1 to n
4   for W′ ← 0 to W
5     opt[i, W′] ← opt[i-1, W′]
6     if w_i ≤ W′ and opt[i-1, W′ - w_i] + w_i ≥ opt[i, W′] then
7       opt[i, W′] ← opt[i-1, W′ - w_i] + w_i
8 return opt[n, W]

- Running time is O(nW)
- The running time is pseudo-polynomial: it depends on the values of the input integers.

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Knapsack Problem
Input: an integer bound W > 0; a set of n items, each with an integer weight w_i > 0 and a value v_i > 0
Output: a subset S of items that maximizes Σ_{i∈S} v_i subject to Σ_{i∈S} w_i ≤ W

Motivation: you have budget W and want to buy a subset of items of maximum total value.

Greedy Algorithm
1 sort items according to non-increasing order of v_i / w_i
2 for each item in the ordering
3   take the item if we have enough budget

Bad example: W = 100, n = 2, w = (1, 100), v = (1.1, 100). The optimum takes item 2, while the greedy algorithm takes item 1.

Fractional Knapsack Problem
Input: an integer bound W > 0; a set of n items, each with an integer weight w_i > 0 and a value v_i > 0
Output: a vector (α_1, α_2, ..., α_n) ∈ [0, 1]^n that maximizes Σ_{i=1}^n α_i v_i subject to Σ_{i=1}^n α_i w_i ≤ W

Greedy Algorithm for Fractional Knapsack
1 sort items according to non-increasing order of v_i / w_i
2 for each item according to the ordering, take as much fraction of the item as possible

Greedy is Optimum for Fractional Knapsack
Greedy Algorithm for Fractional Knapsack
1 sort items according to non-increasing order of v_i / w_i
2 for each item according to the ordering, take as much fraction of the item as possible

On the earlier example W = 100, n = 2, w = (1, 100), v = (1.1, 100):
α_1 = 1, α_2 = 0.99, value = 1.1 + 99 = 100.1

Idea of proof: exchange argument. (Left as a homework exercise.)

DP for the ({0, 1}-)Knapsack Problem
- opt[i, W′]: the optimum value when the budget is W′ and the items are {1, 2, 3, ..., i}
- If i = 0, then opt[i, W′] = 0 for every W′ = 0, 1, 2, ..., W

opt[i, W′] =
  0                                                 if i = 0
  opt[i-1, W′]                                      if i > 0 and w_i > W′
  max{ opt[i-1, W′], opt[i-1, W′ - w_i] + v_i }     if i > 0 and w_i ≤ W′

Avoiding Unnecessary Computation and Memory Using a Memoized Algorithm and a Hash Map
compute-opt(i, W′)
1 if opt[i, W′] ≠ ⊥ then return opt[i, W′]
2 if i = 0 then r ← 0
3 else
4   r ← compute-opt(i-1, W′)
5   if w_i ≤ W′ then
6     r′ ← compute-opt(i-1, W′ - w_i) + v_i
7     if r′ > r then r ← r′
8 opt[i, W′] ← r
9 return r

Use a hash map for opt.
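A Python version of the memoized algorithm, with a dictionary playing the role of the hash map (a sketch; names are ours). Only the (i, budget) pairs that are actually reached get computed and stored:

def knapsack(W, w, v):
    memo = {}  # (i, budget) -> optimum value, filled on demand

    def compute_opt(i, budget):
        if (i, budget) in memo:
            return memo[(i, budget)]
        if i == 0:
            r = 0
        else:
            r = compute_opt(i - 1, budget)
            if w[i - 1] <= budget:
                r = max(r, compute_opt(i - 1, budget - w[i - 1]) + v[i - 1])
        memo[(i, budget)] = r
        return r

    return compute_opt(len(w), W)

print(knapsack(100, [1, 100], [1.1, 100]))  # 100: greedy would have taken item 1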

Exercise: Items with 3 Parameters
Input: integer bounds W > 0 and Z > 0; a set of n items, each with an integer weight w_i > 0, a size z_i > 0, and a value v_i > 0
Output: a subset S of items that maximizes Σ_{i∈S} v_i subject to Σ_{i∈S} w_i ≤ W and Σ_{i∈S} z_i ≤ Z

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Subsequence
A = "bacdca", C = "adca": C is a subsequence of A

Def. Given two sequences A[1..n] and C[1..t] of letters, C is called a subsequence of A if there exist integers 1 ≤ i_1 < i_2 < i_3 < ... < i_t ≤ n such that A[i_j] = C[j] for every j = 1, 2, 3, ..., t.

Exercise: how to check whether a sequence C is a subsequence of A?

Longest Common Subsequence
Input: A[1..n] and B[1..m]
Output: the longest common subsequence of A and B

Example: A = "bacdca", B = "adbcda", LCS(A, B) = "adca"

Applications: edit distance (diff), similarity of DNAs

Matching View of LCS

[Figure: the letters of A = "bacdca" on top and B = "adbcda" at the bottom, with matched letters connected by non-crossing lines]

Goal of LCS: find a maximum-size non-crossing matching between letters in A and letters in B.

Reduce to Subproblems
A = "bacdca", B = "adbcda"
- either the last letter of A is not matched: need to compute LCS("bacdc", "adbcda")
- or the last letter of B is not matched: need to compute LCS("bacdca", "adbcd")

Dynamic Programming for LCS
- opt[i, j], 0 ≤ i ≤ n, 0 ≤ j ≤ m: length of the longest common subsequence of A[1..i] and B[1..j]
- if i = 0 or j = 0, then opt[i, j] = 0
- if i > 0 and j > 0, then

opt[i, j] =
  opt[i-1, j-1] + 1                  if A[i] = B[j]
  max{ opt[i-1, j], opt[i, j-1] }    if A[i] ≠ B[j]

Dynamic Programming for LCS
1 for j ← 0 to m do
2   opt[0, j] ← 0
3 for i ← 1 to n
4   opt[i, 0] ← 0
5   for j ← 1 to m
6     if A[i] = B[j] then
7       opt[i, j] ← opt[i-1, j-1] + 1, π[i, j] ← "↖"
8     elseif opt[i, j-1] ≥ opt[i-1, j] then
9       opt[i, j] ← opt[i, j-1], π[i, j] ← "←"
10    else
11      opt[i, j] ← opt[i-1, j], π[i, j] ← "↑"

Example

A = b a c d c a (i = 1..6), B = a d b c d a (j = 1..6)

         j=0  1  2  3  4  5  6
              a  d  b  c  d  a
i=0       0   0  0  0  0  0  0
i=1  b    0   0  0  1  1  1  1
i=2  a    0   1  1  1  1  1  2
i=3  c    0   1  1  1  2  2  2
i=4  d    0   1  2  2  2  3  3
i=5  c    0   1  2  2  3  3  3
i=6  a    0   1  2  2  3  3  4

Example: Find Common Subsequence

A = b a c d c a (i = 1..6), B = a d b c d a (j = 1..6)

         j=0  1  2  3  4  5  6
              a  d  b  c  d  a
i=0       0   0  0  0  0  0  0
i=1  b    0   0  0  1  1  1  1
i=2  a    0   1  1  1  1  1  2
i=3  c    0   1  1  1  2  2  2
i=4  d    0   1  2  2  2  3  3
i=5  c    0   1  2  2  3  3  3
i=6  a    0   1  2  2  3  3  4

[Figure: the same table with the traceback path from cell (6, 6) highlighted, spelling out the common subsequence "adca"]

Find Common Subsequence
1 i ← n, j ← m, S ← ε
2 while i > 0 and j > 0
3   if π[i, j] = "↖" then
4     S ← A[i] ∘ S, i ← i-1, j ← j-1
5   elseif π[i, j] = "↑" then
6     i ← i-1
7   else
8     j ← j-1
9 return S
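Here are the table and the traceback together in Python (a sketch following the pseudocode; the arrows ↖, ←, ↑ become the tags "diag", "left", "up"):

def lcs(A, B):
    n, m = len(A), len(B)
    opt = [[0] * (m + 1) for _ in range(n + 1)]
    pi = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                opt[i][j] = opt[i - 1][j - 1] + 1
                pi[i][j] = "diag"
            elif opt[i][j - 1] >= opt[i - 1][j]:
                opt[i][j] = opt[i][j - 1]
                pi[i][j] = "left"
            else:
                opt[i][j] = opt[i - 1][j]
                pi[i][j] = "up"
    # walk the arrows back from (n, m) to spell out one LCS
    S, i, j = [], n, m
    while i > 0 and j > 0:
        if pi[i][j] == "diag":
            S.append(A[i - 1])
            i, j = i - 1, j - 1
        elif pi[i][j] == "up":
            i -= 1
        else:
            j -= 1
    return "".join(reversed(S))

print(lcs("bacdca", "adbcda"))  # adca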

Variants of Problem
Edit Distance with Insertions and Deletions
Input: strings A and B; each time we can delete a letter from A or insert a letter into A
Output: the minimum number of operations (insertions or deletions) needed to change A to B

Example: A = "ocurrance", B = "occurrence"
3 operations: insert "c", remove "a", insert "e"

Obs. #OPs = length(A) + length(B) - 2 · length(LCS(A, B))

Variants of Problem
Edit Distance with Insertions, Deletions and Replacing
Input: strings A and B; each time we can delete a letter from A, insert a letter into A, or change a letter
Output: how many operations do we need to change A to B?

Example: A = "ocurrance", B = "occurrence"
2 operations: insert "c", change "a" to "e"

Not related to LCS any more.

Edit Distance (with Replacing)
- opt[i, j], 0 ≤ i ≤ n, 0 ≤ j ≤ m: edit distance between A[1..i] and B[1..j]
- if i = 0, then opt[i, j] = j; if j = 0, then opt[i, j] = i
- if i > 0 and j > 0, then opt[i, j] is the minimum of:

  opt[i-1, j] + 1
  opt[i, j-1] + 1
  opt[i-1, j-1]        if A[i] = B[j]
  opt[i-1, j-1] + 1    if A[i] ≠ B[j]
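The recurrence with replacements translates directly into Python (a sketch; names are ours):

def edit_distance(A, B):
    n, m = len(A), len(B)
    # opt[i][j] = edit distance between A[1..i] and B[1..j]
    opt = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        opt[i][0] = i
    for j in range(m + 1):
        opt[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = 0 if A[i - 1] == B[j - 1] else 1
            opt[i][j] = min(opt[i - 1][j] + 1,        # delete A[i]
                            opt[i][j - 1] + 1,        # insert B[j]
                            opt[i - 1][j - 1] + diff) # keep or replace
    return opt[n][m]

print(edit_distance("ocurrance", "occurrence"))  # 2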

Exercise: Longest Palindrome
Def. A palindrome is a string which reads the same backward or forward.
Examples: "racecar", "wasitacaroracatisaw", "putitup"

Longest Palindrome Subsequence
Input: a sequence A
Output: the longest subsequence C of A that is a palindrome

Example: Input: "acbcedeacab"; Output: "acedeca"

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Computing the Length of LCS
1 for j ← 0 to m do
2   opt[0, j] ← 0
3 for i ← 1 to n
4   opt[i, 0] ← 0
5   for j ← 1 to m
6     if A[i] = B[j]
7       opt[i, j] ← opt[i-1, j-1] + 1
8     elseif opt[i, j-1] ≥ opt[i-1, j]
9       opt[i, j] ← opt[i, j-1]
10    else
11      opt[i, j] ← opt[i-1, j]

Obs. The i-th row of the table only depends on the (i-1)-th row.

Reducing Space to O(n + m)
Obs. The i-th row of the table only depends on the (i-1)-th row.

Q: How can we use this observation to reduce space?
A: We only keep two rows: the (i-1)-th row and the i-th row.

Linear Space Algorithm to Compute the Length of LCS
1 for j ← 0 to m do
2   opt[0, j] ← 0
3 for i ← 1 to n
4   opt[i mod 2, 0] ← 0
5   for j ← 1 to m
6     if A[i] = B[j]
7       opt[i mod 2, j] ← opt[(i-1) mod 2, j-1] + 1
8     elseif opt[i mod 2, j-1] ≥ opt[(i-1) mod 2, j]
9       opt[i mod 2, j] ← opt[i mod 2, j-1]
10    else
11      opt[i mod 2, j] ← opt[(i-1) mod 2, j]
12 return opt[n mod 2, m]
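In Python one can keep the previous row explicitly instead of indexing mod 2; the effect is the same O(n + m) space (a sketch; the function name is ours):

def lcs_length(A, B):
    m = len(B)
    prev = [0] * (m + 1)        # row i-1 of the table
    for a in A:                 # process row i
        cur = [0] * (m + 1)
        for j in range(1, m + 1):
            if a == B[j - 1]:
                cur[j] = prev[j - 1] + 1
            else:
                cur[j] = max(cur[j - 1], prev[j])
        prev = cur
    return prev[m]

print(lcs_length("bacdca", "adbcda"))  # 4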

How to Recover LCS Using Linear Space?
- Keeping only the last two rows, we only know how to match A[n]
- We can recover the LCS using n rounds: time = O(n²m)
- Using divide and conquer + dynamic programming:
  Space: O(m + n)
  Time: O(nm)

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Recall: Single Source Shortest Path Problem
Single Source Shortest Paths
Input: directed graph G = (V, E), s ∈ V, w : E → ℝ_{≥0}
Output: shortest paths from s to all other vertices v ∈ V

Algorithm for the problem: Dijkstra's algorithm

[Figure: an edge-weighted example graph on vertices s, a, b, c, d, e, f, t, u, labeled with the shortest-path distances from the source s]

Dijkstra's Algorithm Using a Priority Queue
Dijkstra(G, w, s)
1 S ← ∅, d(s) ← 0 and d(v) ← ∞ for every v ∈ V \ {s}
2 Q ← empty queue; for each v ∈ V: Q.insert(v, d(v))
3 while S ≠ V do
4   u ← Q.extract_min()
5   S ← S ∪ {u}
6   for each v ∈ V \ S such that (u, v) ∈ E
7     if d(u) + w(u, v) < d(v) then
8       d(v) ← d(u) + w(u, v), Q.decrease_key(v, d(v))
9       π(v) ← u
10 return (π, d)

Running time = O(m + n lg n) (with a Fibonacci-heap priority queue).

Single Source Shortest Paths, Weights May Be Negative
Input: directed graph G = (V, E), s ∈ V; assume all vertices are reachable from s; w : E → ℝ
Output: shortest paths from s to all other vertices v ∈ V

- In transition graphs, negative weights make sense
- If we sell an item, the transition "having the item" → "not having the item" has negative weight (we gain money)
- Dijkstra's algorithm does not work any more!

Dijkstra's Algorithm Fails if We Have Negative Weights

[Figure: small example graph with a negative-weight edge, on which Dijkstra's algorithm computes a wrong distance]

[Figure: example graph containing a negative cycle on the way from s to d]

Q: What is the length of the shortest path from s to d?
A: -∞: a walk can traverse the negative cycle arbitrarily many times.

Def. A negative cycle is a cycle in which the total weight of edges is negative.

Dealing with Negative Cycles
- assume the input graph does not contain negative cycles, or
- allow the algorithm to report that a negative cycle exists

[Figure: the same graph with the negative cycle]

Q: What is the length of the shortest simple path from s to d?
A: 1

Unfortunately, computing the shortest simple path between two vertices is an NP-hard problem.

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Directed Acyclic Graphs
Def. A directed acyclic graph (DAG) is a directed graph without (directed) cycles.

[Figure: two example graphs, one containing a cycle (not a DAG) and one whose vertices can be numbered in topological order (a DAG)]

Lemma. A directed graph is a DAG if and only if its vertices can be topologically sorted.

Shortest Paths in DAG
Input: directed acyclic graph G = (V, E) and w : E → ℝ. Assume V = {1, 2, 3, ..., n} is topologically sorted: if (i, j) ∈ E, then i < j
Output: the shortest path from 1 to i, for every i ∈ V

[Figure: example DAG on vertices 1..8 with edge weights]

Shortest Paths in DAG
- f[i]: length of the shortest path from 1 to i

f[i] =
  0                                    if i = 1
  min_{j:(j,i)∈E} { f[j] + w(j, i) }   if i = 2, 3, ..., n

Shortest Paths in DAG
Use an adjacency list for the incoming edges of each vertex i.

Shortest Paths in DAG
1 f[1] ← 0
2 for i ← 2 to n do
3   f[i] ← ∞
4   for each incoming edge (j, i) ∈ E of i
5     if f[j] + w(j, i) < f[i]
6       f[i] ← f[j] + w(j, i)
7       π(i) ← j

print-path(t)
1 if t = 1 then
2   print(1)
3   return
4 print-path(π(t))
5 print(",", t)
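A Python sketch of the DAG algorithm; the graph is given as incoming-edge lists over vertices 1..n in topological order (the input format is our choice, not from the slides):

import math

def dag_shortest_paths(n, incoming, w):
    # incoming[i]: list of j with (j, i) in E; w[(j, i)]: weight of edge (j, i)
    f = [math.inf] * (n + 1)
    pi = [None] * (n + 1)
    f[1] = 0
    for i in range(2, n + 1):
        for j in incoming.get(i, []):
            if f[j] + w[(j, i)] < f[i]:
                f[i] = f[j] + w[(j, i)]
                pi[i] = j
    return f, pi

# edges 1->2 (weight 1), 1->3 (weight 4), 2->3 (weight 2)
f, pi = dag_shortest_paths(3, {2: [1], 3: [1, 2]}, {(1, 2): 1, (1, 3): 4, (2, 3): 2})
print(f[3])  # 3, along the path 1, 2, 3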

Example

[Figure: the example DAG with the computed shortest-path labels f[i] at every vertex]

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Defining the Cells of the Table
Single Source Shortest Paths, Weights May Be Negative
Input: directed graph G = (V, E), s ∈ V; assume all vertices are reachable from s; w : E → ℝ
Output: shortest paths from s to all other vertices v ∈ V

- First try — f[v]: length of the shortest path from s to v
- Issue: we do not know in which order to compute the values f[v]
- Instead — f_l[v], l ∈ {0, 1, 2, 3, ..., n-1}, v ∈ V: length of the shortest path from s to v that uses at most l edges

[Figure: example graph on vertices s, a, b, c, d in which f_2[a] = 6 but f_3[a] = 2]

f_l[v], l ∈ {0, 1, 2, 3, ..., n-1}, v ∈ V: length of the shortest path from s to v that uses at most l edges

f_l[v] =
  0                                                             if l = 0, v = s
  ∞                                                             if l = 0, v ≠ s
  min{ f_{l-1}[v], min_{u:(u,v)∈E} ( f_{l-1}[u] + w(u, v) ) }   if l > 0

dynamic-programming(g, w, s) 1 f 0 [s] 0 and f 0 [v] for any v V \ {s} 2 for l 1 to n 1 do 3 copy f l 1 f l 4 for each (u, v) E 5 if f l 1 [u] + w(u, v) < f l [v] 6 f l [v] f l 1 [u] + w(u, v) 7 return (f n 1 [v]) v V Obs. Assuming there are no negative cycles, then a shortest path contains at most n 1 edges 72/99

Dynamic Programming: Example

[Figure: example graph on vertices s, a, b, c, d with edge weights 7, 8, -4, 7, 6, -3, -2]

      s  a  b  c  d
f_0   0  ∞  ∞  ∞  ∞
f_1   0  6  7  ∞  ∞
f_2   0  6  7  2  4
f_3   0  2  7  2  4
f_4   0  2  7  -2 4

dynamic-programming(g, w, s) 1 f 0 [s] 0 and f 0 [v] for any v V \ {s} 2 for l 1 to n 1 do 3 copy f l 1 f l 4 for each (u, v) E 5 if f l 1 [u] + w(u, v) < f l [v] 6 f l [v] f l 1 [u] + w(u, v) 7 return (f n 1 [v]) v V Obs. Assuming there are no negative cycles, then a shortest path contains at most n 1 edges Q: What if there are negative cycles? 74/99

Dynamic Programming With Negative Cycle Detection
dynamic-programming(G, w, s)
1 f_0[s] ← 0 and f_0[v] ← ∞ for any v ∈ V \ {s}
2 for l ← 1 to n-1 do
3   copy f_{l-1} → f_l
4   for each (u, v) ∈ E
5     if f_{l-1}[u] + w(u, v) < f_l[v]
6       f_l[v] ← f_{l-1}[u] + w(u, v)
7 for each (u, v) ∈ E
8   if f_{n-1}[u] + w(u, v) < f_{n-1}[v]
9     report "negative cycle exists" and exit
10 return (f_{n-1}[v])_{v∈V}

Bellman-Ford Algorithm
Bellman-Ford(G, w, s)
1 f[s] ← 0 and f[v] ← ∞ for any v ∈ V \ {s}
2 for l ← 1 to n-1 do
3   for each (u, v) ∈ E
4     if f[u] + w(u, v) < f[v]
5       f[v] ← f[u] + w(u, v)
6 return f

- Issue: when we compute f[u] + w(u, v), f[u] may have changed since the end of the last iteration
- This is OK: it can only accelerate the process!
- After iteration l, f[v] is at most the length of the shortest path from s to v that uses at most l edges
- f[v] is always the length of some path from s to v

Bellman-Ford Algorithm
Bellman-Ford(G, w, s)
1 f[s] ← 0 and f[v] ← ∞ for any v ∈ V \ {s}
2 for l ← 1 to n-1 do
3   for each (u, v) ∈ E
4     if f[u] + w(u, v) < f[v]
5       f[v] ← f[u] + w(u, v)
6 return f

- After iteration l, f[v] is at most the length of the shortest path from s to v that uses at most l edges
- f[v] is always the length of some path from s to v
- Assuming there are no negative cycles, after iteration n-1, f[v] = length of the shortest path from s to v

Bellman-Ford Algorithm
Bellman-Ford(G, w, s)
1 f[s] ← 0 and f[v] ← ∞ for any v ∈ V \ {s}
2 for l ← 1 to n do
3   updated ← false
4   for each (u, v) ∈ E
5     if f[u] + w(u, v) < f[v]
6       f[v] ← f[u] + w(u, v), π[v] ← u
7       updated ← true
8   if not updated, then return f
9 output "negative cycle exists"

- π[v]: the parent of v in the shortest path tree
- Running time = O(nm)
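The final version of Bellman-Ford in Python, with early termination and negative cycle reporting (a sketch; vertices numbered 0..n-1 and the edge-list input are our own conventions):

import math

def bellman_ford(n, edges, s):
    # edges: list of (u, v, weight) triples
    f = [math.inf] * n
    pi = [None] * n
    f[s] = 0
    for _ in range(n):              # n rounds; a round with no update ends early
        updated = False
        for u, v, wt in edges:
            if f[u] + wt < f[v]:
                f[v] = f[u] + wt
                pi[v] = u
                updated = True
        if not updated:
            return f, pi            # distances are final
    raise ValueError("negative cycle exists")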

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

All-Pair Shortest Paths
All Pair Shortest Paths
Input: directed graph G = (V, E), w : E → ℝ (weights can be negative)
Output: the shortest path from u to v, for every pair u, v ∈ V

1 for every starting point s ∈ V do
2   run Bellman-Ford(G, w, s)

Running time = O(n²m)

Design a Dynamic Programming Algorithm
- It is convenient to assume V = {1, 2, 3, ..., n}
- For simplicity, extend the w values to non-edges:

w(i, j) =
  0                        if i = j
  weight of edge (i, j)    if i ≠ j and (i, j) ∈ E
  ∞                        if i ≠ j and (i, j) ∉ E

- For now, assume there are no negative cycles

Cells for the Floyd-Warshall Algorithm
- First try — f[i, j]: length of the shortest path from i to j
- Issue: we do not know in which order to compute the values f[i, j]
- Instead — f_k[i, j]: length of the shortest path from i to j that only uses vertices in {1, 2, 3, ..., k} as intermediate vertices

w(i, j) =
  0                        if i = j
  weight of edge (i, j)    if i ≠ j and (i, j) ∈ E
  ∞                        if i ≠ j and (i, j) ∉ E

f_k[i, j]: length of the shortest path from i to j that only uses vertices in {1, 2, 3, ..., k} as intermediate vertices

f_k[i, j] =
  w(i, j)                                               if k = 0
  min{ f_{k-1}[i, j], f_{k-1}[i, k] + f_{k-1}[k, j] }   if k = 1, 2, ..., n

Floyd-Warshall(G, w)
1 f_0 ← w
2 for k ← 1 to n do
3   copy f_{k-1} → f_k
4   for i ← 1 to n do
5     for j ← 1 to n do
6       if f_{k-1}[i, k] + f_{k-1}[k, j] < f_k[i, j] then
7         f_k[i, j] ← f_{k-1}[i, k] + f_{k-1}[k, j]

Floyd-Warshall(G, w)
1 f_old ← w
2 for k ← 1 to n do
3   copy f_old → f_new
4   for i ← 1 to n do
5     for j ← 1 to n do
6       if f_old[i, k] + f_old[k, j] < f_new[i, j] then
7         f_new[i, j] ← f_old[i, k] + f_old[k, j]

Lemma. Assume there are no negative cycles in G. After iteration k, for all i, j ∈ V, f[i, j] is exactly the length of the shortest path from i to j that only uses vertices in {1, 2, 3, ..., k} as intermediate vertices.

Running time = O(n³).
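In Python the two arrays are usually collapsed into a single matrix updated in place, as the next slide also does (a sketch; vertices 0..n-1 and the dict of edge weights are our own conventions):

import math

def floyd_warshall(n, w):
    # f[i][j] starts as the extended w(i, j) and ends as the shortest-path length
    f = [[0 if i == j else w.get((i, j), math.inf) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if f[i][k] + f[k][j] < f[i][j]:
                    f[i][j] = f[i][k] + f[k][j]
    return f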

Recovering Shortest Paths
Floyd-Warshall(G, w)
1 f ← w, π[i, j] ← ⊥ for every i, j ∈ V
2 for k ← 1 to n do
3   for i ← 1 to n do
4     for j ← 1 to n do
5       if f[i, k] + f[k, j] < f[i, j] then
6         f[i, j] ← f[i, k] + f[k, j], π[i, j] ← k

print-path(i, j)
1 if π[i, j] = ⊥ then
2   if i ≠ j then print(i, ",")
3 else
4   print-path(i, π[i, j]), print-path(π[i, j], j)

Detecting Negative Cycles
Floyd-Warshall(G, w)
1 f ← w, π[i, j] ← ⊥ for every i, j ∈ V
2 for k ← 1 to n do
3   for i ← 1 to n do
4     for j ← 1 to n do
5       if f[i, k] + f[k, j] < f[i, j] then
6         f[i, j] ← f[i, k] + f[k, j], π[i, j] ← k
7 for k ← 1 to n do
8   for i ← 1 to n do
9     for j ← 1 to n do
10      if f[i, k] + f[k, j] < f[i, j] then
11        report "negative cycle exists" and exit

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Matrix Chain Multiplication
Matrix Chain Multiplication
Input: n matrices A_1, A_2, ..., A_n of sizes r_1 × c_1, r_2 × c_2, ..., r_n × c_n, such that c_i = r_{i+1} for every i = 1, 2, ..., n-1
Output: the order of computing A_1 A_2 ⋯ A_n with the minimum number of multiplications

Fact. Multiplying two matrices of sizes r × k and k × c takes r · k · c multiplications.

Example: A_1: 10 × 100, A_2: 100 × 5, A_3: 5 × 50

(A_1 A_2) A_3:
- A_1 A_2 costs 10 · 100 · 5 = 5000 and yields a 10 × 5 matrix
- multiplying by A_3 costs 10 · 5 · 50 = 2500
- total cost = 5000 + 2500 = 7500

A_1 (A_2 A_3):
- A_2 A_3 costs 100 · 5 · 50 = 25000 and yields a 100 × 50 matrix
- multiplying by A_1 costs 10 · 100 · 50 = 50000
- total cost = 25000 + 50000 = 75000

Matrix Chain Multiplication: Design DP
- Assume the last step is (A_1 A_2 ⋯ A_i)(A_{i+1} A_{i+2} ⋯ A_n)
- Cost of the last step: r_1 · c_i · c_n
- Optimality for sub-instances: we need to compute A_1 A_2 ⋯ A_i and A_{i+1} A_{i+2} ⋯ A_n optimally
- opt[i, j]: the minimum cost of computing A_i A_{i+1} ⋯ A_j

opt[i, j] =
  0                                                              if i = j
  min_{k: i ≤ k < j} ( opt[i, k] + opt[k+1, j] + r_i c_k c_j )   if i < j

matrix-chain-multiplication(n, r[1..n], c[1..n])
1 let opt[i, i] ← 0 for every i = 1, 2, ..., n
2 for l ← 2 to n
3   for i ← 1 to n - l + 1
4     j ← i + l - 1
5     opt[i, j] ← ∞
6     for k ← i to j - 1
7       if opt[i, k] + opt[k+1, j] + r_i c_k c_j < opt[i, j]
8         opt[i, j] ← opt[i, k] + opt[k+1, j] + r_i c_k c_j
9 return opt[1, n]
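Since c_i = r_{i+1}, the sizes can be passed as a single list d with A_i of size d[i-1] × d[i]; this Python sketch uses that packed convention (our choice, not from the slides):

import math

def matrix_chain(d):
    # matrix A_i has size d[i-1] x d[i], for i = 1..n
    n = len(d) - 1
    opt = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):               # length of the interval [i, j]
        for i in range(1, n - l + 2):
            j = i + l - 1
            opt[i][j] = math.inf
            for k in range(i, j):           # last step: (A_i..A_k)(A_{k+1}..A_j)
                cost = opt[i][k] + opt[k + 1][j] + d[i - 1] * d[k] * d[j]
                opt[i][j] = min(opt[i][j], cost)
    return opt[1][n]

print(matrix_chain([10, 100, 5, 50]))  # 7500, matching the example above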

Outline
1 Weighted Interval Scheduling
2 Subset Sum Problem
3 Knapsack Problem
4 Longest Common Subsequence
  Longest Common Subsequence in Linear Space
5 Shortest Paths in Graphs with Negative Weights
  Shortest Paths in Directed Acyclic Graphs
  Bellman-Ford Algorithm
6 All-Pair Shortest Paths and Floyd-Warshall
7 Matrix Chain Multiplication
8 Summary

Dynamic Programming
- Break up a problem into many overlapping sub-problems
- Build solutions for larger and larger sub-problems
- Use a table to store solutions for sub-problems for reuse

Definition of Cells for the Problems We Learnt
- Weighted interval scheduling: opt[i] = value of the instance defined by jobs {1, 2, ..., i}
- Subset sum, knapsack: opt[i, W′] = value of the instance with items {1, 2, ..., i} and budget W′
- Longest common subsequence: opt[i, j] = value of the instance defined by A[1..i] and B[1..j]
- Matrix chain multiplication: opt[i, j] = value of the instance defined by matrices i to j
- Shortest paths in DAG: f[v] = length of the shortest path from s to v
- Bellman-Ford: f_l[v] = length of the shortest path from s to v that uses at most l edges
- Floyd-Warshall: f_k[i, j] = length of the shortest path from i to j that only uses {1, 2, ..., k} as intermediate vertices

Exercise: Counting the Number of Domino Coverings
Input: n
Output: the number of ways to cover an n × 2 grid using domino tiles

Example: 5 different ways if n = 4.

How about the number of ways to cover an n × 3 grid?

Exercise: Maximum-Weight Subset with Gaps
Input: n; integers w_1, w_2, ..., w_n ≥ 0
Output: a set S ⊆ {1, 2, 3, ..., n} that maximizes Σ_{i∈S} w_i subject to: for all i, j ∈ S with i ≠ j, we have |i - j| ≥ 2

Example: n = 7, w = (10, 80, 100, 90, 30, 50, 70)
Choose items 2, 4, 7: value = 80 + 90 + 70 = 240

Def. Given a sequence A = (a_1, a_2, ..., a_n) of n numbers, an increasing subsequence of A is a subsequence (a_{i_1}, a_{i_2}, a_{i_3}, ..., a_{i_t}) such that 1 ≤ i_1 < i_2 < i_3 < ... < i_t ≤ n and a_{i_1} < a_{i_2} < a_{i_3} < ... < a_{i_t}.

Exercise: Longest Increasing Subsequence
Input: A = (a_1, a_2, ..., a_n) of n numbers
Output: the length of the longest increasing subsequence of A

Example: Input: (10, 3, 9, 8, 2, 5, 7, 1, 12); Output: 4

Def. A sequence X[1..m] of numbers is oscillating if X[i] < X[i+1] for all even i ≤ m-1, and X[i] > X[i+1] for all odd i ≤ m-1.

Example: 5, 3, 9, 7, 8, 6, 12, 11 is an oscillating sequence:
5 > 3 < 9 > 7 < 8 > 6 < 12 > 11

Exercise: Longest Oscillating Subsequence
Input: a sequence A of n numbers
Output: the length of the longest oscillating subsequence of A

Recall: an independent set of G = (V, E) is a set U ⊆ V such that there are no edges between vertices in U.

Maximum Weighted Independent Set in a Tree
Input: a tree with node weights
Output: the independent set of the tree with the maximum total weight

[Figure: example tree with node weights 15, 3, 5, 8, 16, 18, 5, 7, 2, 9, 4; the maximum-weight independent set has weight 47]