CSE 3500 Algorithms and Complexity, Fall 2016

Lecture 18: October 27, 2016

Dynamic Programming

The steps involved in a typical dynamic programming algorithm are:

1. Identify a function such that the solution we are looking for is the value of this function at a specific point;

2. Write a recurrence relation for this function; and

3. Start from the base case values of the function. Use the recurrence relation to evaluate the function at as many points as needed until the point of interest is reached.

0/1 Knapsack Problem

The input for this problem is a set of n objects. Object i has a profit of $p_i$ and a weight of $w_i$, for $1 \le i \le n$. We are also given a knapsack of capacity m. The problem is to compute $x_1, x_2, \ldots, x_n$ such that each $x_i$ is either zero or one (for $1 \le i \le n$), $\sum_{i=1}^{n} w_i x_i \le m$, and $\sum_{i=1}^{n} p_i x_i$ is maximum.

Define Knap(i, j, y) to be the following subproblem of the 0/1 knapsack problem: given objects $i, i+1, \ldots, j$ and a knapsack of capacity y, compute the maximum profit obtainable. We note that the 0/1 knapsack problem is nothing but Knap(1, n, m). Here is a dynamic programming solution:

1. Define $f_i(y)$ to be the value of an optimal solution to Knap(1, i, y).

2. In an optimal solution to Knap(1, i, y) there are two possibilities: object i is in the knapsack, or it is not. If object i is not in the knapsack, then the optimal profit obtainable is $f_i(y) = f_{i-1}(y)$. If object i is in the knapsack, then the maximum profit achievable is $f_i(y) = f_{i-1}(y - w_i) + p_i$. Put together, we get:

   $$f_i(y) = \max\{f_{i-1}(y),\ f_{i-1}(y - w_i) + p_i\}. \qquad (1)$$

3. Now consider the case where the object weights are integers. We can use Equation (1) to compute $f_n(m)$ as follows. The base cases are $f_0(y) = 0$ for all $y \ge 0$, and $f_i(y) = -\infty$ for all $y < 0$ and all i. Let M be an $(n+1) \times (m+1)$ matrix such that $M_{i,j} = f_i(j)$ for $0 \le i \le n$ and $0 \le j \le m$. Compute this matrix in row-major order. Row 0 and column 0 of this matrix are all zeros.

Note that $M_{i,j}$ depends on two entries in row $i-1$. If we proceed in row-major order, these two entries will already be available when we are ready to compute $M_{i,j}$, and hence each entry $M_{i,j}$ can be computed in O(1) time. Thus the entire algorithm takes a total of O(mn) time.
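As a concrete illustration, here is a minimal Python sketch of this table-filling step; the function name knapsack and its argument conventions are ours, not from the notes, and the $y < w_i$ case is handled with a bounds check rather than storing $-\infty$ entries.

    def knapsack(profits, weights, m):
        # M[i][y] = f_i(y); row 0 is all zeros since f_0(y) = 0 for y >= 0.
        n = len(profits)
        M = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):          # row-major: row i reads only row i - 1
            p, w = profits[i - 1], weights[i - 1]
            for y in range(m + 1):
                best = M[i - 1][y]         # object i is not in the knapsack
                if y >= w:                 # otherwise f_{i-1}(y - w_i) = -infinity
                    best = max(best, M[i - 1][y - w] + p)
                M[i][y] = best
        return M[n][m]

For the instance worked out below, knapsack([4, 5, 12], [2, 3, 4], 5) returns 12.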

An Example

Consider the following instance of the 0/1 knapsack problem: there are three objects whose profits are 4, 5, and 12, and whose weights are 2, 3, and 4, respectively. The knapsack capacity is 5. We start with i = 1 in Equation (1):

$f_1(1) = \max\{f_0(1), f_0(1 - 2) + 4\} = \max\{0, -\infty\} = 0.$
$f_1(2) = \max\{f_0(2), f_0(2 - 2) + 4\} = \max\{0, 4\} = 4.$
Likewise, $f_1(3) = f_1(4) = f_1(5) = 4$.

$f_2(0) = 0.$
$f_2(1) = \max\{f_1(1), f_1(1 - 3) + 5\} = \max\{0, -\infty\} = 0.$
$f_2(2) = \max\{f_1(2), f_1(2 - 3) + 5\} = \max\{4, -\infty\} = 4.$
$f_2(3) = \max\{f_1(3), f_1(3 - 3) + 5\} = \max\{4, 5\} = 5.$
$f_2(4) = \max\{f_1(4), f_1(4 - 3) + 5\} = \max\{4, 5\} = 5.$
$f_2(5) = \max\{f_1(5), f_1(5 - 3) + 5\} = \max\{4, 4 + 5\} = 9.$

$f_3(0) = 0.$
$f_3(1) = \max\{f_2(1), f_2(1 - 4) + 12\} = \max\{0, -\infty\} = 0.$
$f_3(2) = \max\{f_2(2), f_2(2 - 4) + 12\} = \max\{4, -\infty\} = 4.$
$f_3(3) = \max\{f_2(3), f_2(3 - 4) + 12\} = \max\{5, -\infty\} = 5.$
$f_3(4) = \max\{f_2(4), f_2(4 - 4) + 12\} = \max\{5, 12\} = 12.$
Likewise, $f_3(5) = \max\{f_2(5), f_2(5 - 4) + 12\} = \max\{9, 12\} = 12$, and the final answer is 12.

All Pairs Shortest Paths (APSP) Problem

The input for the APSP problem is a weighted directed graph $G(V, E)$. The problem is to find a shortest path between every pair of nodes in the graph. In the context of path problems in graphs, we typically assume that there are no negative cycles in the graph.

If there are no negative edges in G, we can use Dijkstra's algorithm to solve the APSP problem. The idea is to invoke Dijkstra's algorithm multiple times, once with each node of the graph as the source. In this case, the run time will be $O(|V|(|V| + |E|) \log |V|)$.

We can employ dynamic programming to solve the APSP problem (even when the graph has negative edges) as follows. Let $V = \{1, 2, 3, \ldots, n\}$.

1. For any two nodes i and j in V, define $A^k(i, j)$ to be the weight of a shortest i-to-j path from among all the i-to-j paths whose intermediate nodes come from $\{1, 2, 3, \ldots, k\}$. The input for the APSP problem is $A^0(i, j)$ for $i, j \in V$. We want to compute $A^n(i, j)$ for $i, j \in V$.

2. To derive a recurrence relation for $A^k(i, j)$, note that a shortest i-to-j path whose intermediate nodes come from $\{1, 2, 3, \ldots, k\}$ either does not have k as an intermediate node or it does. If k is not an intermediate node, then $A^k(i, j) = A^{k-1}(i, j)$. If k is an intermediate node, then $A^k(i, j) = A^{k-1}(i, k) + A^{k-1}(k, j)$. Put together, we get:

   $$A^k(i, j) = \min\{A^{k-1}(i, j),\ A^{k-1}(i, k) + A^{k-1}(k, j)\}. \qquad (2)$$

3. We can use Equation (2) to compute $A^n(i, j)$ starting from $A^0(i, j)$. Pseudocode follows (a runnable version appears at the end of this section):

    for i = 1 to n do
        for j = 1 to n do
            A[i, j] = cost(i, j);
    for k = 1 to n do
        for i = 1 to n do
            for j = 1 to n do
                A[i, j] = min{A[i, j], A[i, k] + A[k, j]};

Clearly, the run time of the above algorithm is $O(n^3)$. If there are no negative edges in the graph, Dijkstra's algorithm can be used to solve the APSP problem in $O(n(m + n) \log n)$ time, where $m = |E|$. Dijkstra's algorithm will be faster than the dynamic programming algorithm when $m < n^2 / \log n$; otherwise, the dynamic programming algorithm will be faster.

In 2014, R. Williams showed that the APSP problem can be solved in $O\!\left(n^3 / 2^{\Omega(\sqrt{\log n})}\right)$ time.
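Here is a direct Python transcription of the pseudocode above; the function name floyd_warshall and the adjacency-matrix input convention (math.inf for a missing edge, 0 on the diagonal) are our assumptions, not from the notes.

    import math

    def floyd_warshall(cost):
        # cost[i][j] is the weight of edge (i, j); math.inf if the edge is absent.
        n = len(cost)
        A = [row[:] for row in cost]       # A starts out as A^0(i, j) = cost(i, j)
        for k in range(n):                 # now allow k as an intermediate node
            for i in range(n):
                for j in range(n):
                    # Equation (2): route through node k if that is cheaper
                    if A[i][k] + A[k][j] < A[i][j]:
                        A[i][j] = A[i][k] + A[k][j]
        return A

After the loop on k finishes, A[i][j] holds the weight of a shortest i-to-j path (assuming no negative cycles). The update is done in place, which is safe because row k and column k do not change during iteration k.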

2. To derive a recurrence relation for A k (i, j) note that the shortest i to j path whose intermediate nodes come from {1, 2, 3,..., k} either does not have k as an intermediate node or it has k as an intermediate node. If k is not an intermediate node, then, A k (i, j) = A k 1 (i, j). If k is an intermediate node, then A k (i, j) = A k 1 (i, k) + A k 1 (k, j). Put together, we get: A k (i, j) = min{a k 1 (i, j), A k 1 (i, k) + A k 1 (k, j)}. (2) 3. We can use Equation 2 to compute A n (i, j) starting from A 0 (i, j). A pseudocode follows. for i = 1 to n do for j 1 to n do A[i, j] = cost(i, j); for k = 1 to n do for i = 1 to n do for j = 1 to n do A[i, j] = min{a[i, j], A[i, k] + A[k, j]}; Clearly, the run time of the above algorithm is O(n 3 ). If there are no negative edges in the graph, Dijkstra s algorithm can be used to solve the APSP problem in O(n(m + n) log n) time where m = E. Dijkstra s algorithm will be faster than the dynamic programming algorithm when m < n2. Otherwise, the dynamic log n programming algorithm will be faster. ( ) n In 2014, V. Williams has shown that the APSP problem can be solved in O 3 2 Ω( log n) time. 3