Partha Sarathi Mandal


MA 252: Data Structures and Algorithms, Lecture 32
http://www.iitg.ernet.in/psm/indexing_ma252/y12/index.html
Partha Sarathi Mandal, Dept. of Mathematics, IIT Guwahati

The All-Pairs Shortest Paths Problem. Given a weighted digraph G = (V, E) with weight function w : E → R (R is the set of real numbers), determine the length of the shortest path, i.e., the distance, between every pair of vertices in G. Here we assume that G contains no negative-weight cycles.

SSSP. Dijkstra's SSSP algorithm requires all edge weights to be nonnegative, which is even more restrictive than outlawing negative-weight cycles. Runtime: O(E log V) if the priority queue is implemented with a binary heap; O(V log V + E) if it is implemented with a Fibonacci heap. The Bellman-Ford SSSP algorithm can handle negative edge weights, and even "handles" negative-weight cycles by reporting that they exist. Runtime: O(V E).
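For reference, the binary-heap variant of Dijkstra's algorithm can be sketched with Python's standard-library heapq; the adjacency-list format and function name here are our own conventions, not from the slides:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; all edge weights must be nonnegative.
    graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]                       # binary heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):    # stale heap entry; skip it
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w              # relax edge (u, v)
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}
print(dijkstra(g, 's'))   # {'s': 0, 'a': 2, 'b': 3}
```

Because stale entries are skipped rather than decreased in place, this matches the O(E log V) binary-heap bound quoted above.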

All-pairs shortest paths. Simple approach: call Dijkstra's algorithm V times, O(V E log V) with a binary heap (or O(V^2 log V + V E) with a Fibonacci heap); or call Bellman-Ford V times, O(V^2 E). A dynamic programming solution only assumes no negative-weight cycles: the first version is Θ(V^4); repeated squaring reduces this to Θ(V^3 log V); Floyd-Warshall achieves Θ(V^3); Johnson's algorithm runs in O(V^2 log V + V E).
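As a concrete instance of the Θ(V^3) bound, Floyd-Warshall can be sketched as follows; the adjacency-matrix convention (0 on the diagonal, infinity for missing edges) and the function name are assumptions for illustration:

```python
def floyd_warshall(W):
    """All-pairs shortest-path distances in Theta(V^3).
    W: n x n matrix; W[i][j] is the weight of edge (i, j),
    0 on the diagonal, float('inf') if there is no edge."""
    n = len(W)
    D = [row[:] for row in W]        # distance matrix, improved in place
    for k in range(n):               # allow intermediate vertices 0..k
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float('inf')
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(W))   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

The outer loop index k is the DP stage: after iteration k, D[i][j] is the shortest i-to-j distance using only intermediate vertices from {0, ..., k}.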

Dynamic programming

Computing Fibonacci Numbers. Fibonacci numbers: F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2} for n > 1. The sequence is 0, 1, 1, 2, 3, 5, 8, 13, ...

Computing Fibonacci Numbers. Obvious recursive algorithm:

Fib(n):
  if n = 0 or 1 then return n
  else return Fib(n-1) + Fib(n-2)

Recursion Tree for Fib(5):

Fib(5)
├─ Fib(4)
│  ├─ Fib(3)
│  │  ├─ Fib(2)
│  │  │  ├─ Fib(1)
│  │  │  └─ Fib(0)
│  │  └─ Fib(1)
│  └─ Fib(2)
│     ├─ Fib(1)
│     └─ Fib(0)
└─ Fib(3)
   ├─ Fib(2)
   │  ├─ Fib(1)
   │  └─ Fib(0)
   └─ Fib(1)

How Many Recursive Calls? If all leaves had the same depth, there would be about 2^n recursive calls. But this is over-counting. With more careful counting it can be shown that the number of calls is Θ((1.6)^n). Exponential!
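The exponential blow-up can be checked empirically by instrumenting the naive recursion with a call counter (the counter and function name are added for illustration):

```python
def fib_calls(n):
    """Return (Fib(n), total number of calls the naive recursion makes)."""
    calls = 0
    def fib(n):
        nonlocal calls
        calls += 1                    # count every invocation
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)
    return fib(n), calls

for n in (5, 10, 20):
    value, calls = fib_calls(n)
    print(n, value, calls)   # the call count grows roughly by a factor of 1.6 per step
```

For Fib(5) this reproduces the 15 nodes of the recursion tree shown above.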

Save Work. The wasteful approach repeats work unnecessarily: Fib(2) is computed three times. Instead, compute Fib(2) once, store the result in a table, and look it up when needed.

More Efficient Recursive Algorithm. Initialize F[0] := 0, F[1] := 1, and F[i] := NIL for i = 2 to n; then compute F[n] := Fib(n):

Fib(n):
  if n = 0 or 1 then return F[n]
  if F[n-1] = NIL then F[n-1] := Fib(n-1)
  if F[n-2] = NIL then F[n-2] := Fib(n-2)
  return F[n-1] + F[n-2]

This computes each F[i] only once.

Example of Memoized Fib(5):

  i:    0  1    2      3      4      5
  F[i]: 0  1  NIL→1  NIL→2  NIL→3  NIL→5

Fib(2) returns 0 + 1 = 1; Fib(3) fills in F[2] with 1 and returns 1 + 1 = 2; Fib(4) fills in F[3] with 2 and returns 2 + 1 = 3; Fib(5) fills in F[4] with 3 and returns 3 + 2 = 5.
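The memoized scheme can be sketched in Python; the function name `fib_memo` and the use of None for NIL are our conventions:

```python
def fib_memo(n):
    """Top-down memoized Fibonacci: each F[i] is computed only once."""
    F = [None] * (n + 1)          # None plays the role of NIL
    F[0] = 0
    if n >= 1:
        F[1] = 1
    def fib(n):
        if n <= 1:
            return F[n]
        if F[n - 1] is None:
            F[n - 1] = fib(n - 1)
        if F[n - 2] is None:
            F[n - 2] = fib(n - 2)
        return F[n - 1] + F[n - 2]
    return fib(n)

print(fib_memo(5))   # 5
```

This turns the exponential recursion into O(n) work, at the cost of the table and the recursion stack.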

Get Rid of the Recursion. Recursion adds overhead: extra time for function calls, and extra space on the runtime stack for each currently active call. Avoid this overhead by filling in the table entries bottom-up instead of top-down.

Subproblem Dependencies. Figure out which subproblems rely on which other subproblems. Example: each F_i depends on F_{i-1} and F_{i-2}, giving the chain F_0, F_1, F_2, F_3, ..., F_{n-2}, F_{n-1}, F_n.

Order for Computing Subproblems. Then figure out an order for computing the subproblems that respects the dependencies: when you are solving a subproblem, you have already solved all the subproblems on which it depends. Example: just solve them in the order F_0, F_1, F_2, F_3, ...

DP Solution for Fibonacci:

Fib(n):
  F[0] := 0; F[1] := 1
  for i := 2 to n do
    F[i] := F[i-1] + F[i-2]
  return F[n]

We can perform application-specific optimizations, e.g., save space by only keeping the last two numbers computed.
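The bottom-up pseudocode translates directly, and the space optimization the slide mentions (keep only the last two values) can be folded in; the function name is our own:

```python
def fib_bottom_up(n):
    """Iterative DP: solve subproblems in order F[0], F[1], ..., F[n]."""
    if n <= 1:
        return n
    prev, curr = 0, 1               # only the last two values: O(1) space
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib_bottom_up(50))   # 12586269025
```

No recursion and no table: O(n) time, O(1) extra space.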

Dynamic Programming (DP) Paradigm. DP is typically applied to optimization problems. DP can be applied when a problem exhibits: Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems. Overlapping subproblems: a recursive algorithm revisits the same subproblems over and over again.

Dynamic Programming (DP) Paradigm. DP can be applied when the solution of a problem includes solutions to subproblems. We need to find a recursive formula for the solution. We can then solve the subproblems, starting from the trivial cases, and save their solutions in memory. In the end we'll have the solution to the whole problem.

Dynamic programming. One of the most important algorithm tools! A very common interview question. A method for solving problems where optimal solutions can be defined in terms of optimal solutions to subproblems, AND the subproblems are overlapping.

Identifying a dynamic programming problem. The solution can be defined with respect to solutions to subproblems. The subproblems created are overlapping, that is, we see the same subproblems repeated.

Two main ideas for dynamic programming. 1. Identify a solution to the problem with respect to smaller subproblems: F(n) = F(n-1) + F(n-2). 2. Bottom up: start with solutions to the smallest problems and build solutions to the larger problems:

Fib(n):
  F[0] := 0; F[1] := 1
  for i := 2 to n do
    F[i] := F[i-1] + F[i-2]
  return F[n]

Use an array to store solutions to subproblems.

Longest common subsequence (LCS). For a sequence X = x_1, x_2, ..., x_n, a subsequence is a subset of the sequence defined by a set of increasing indices (i_1, i_2, ..., i_k) where 1 ≤ i_1 < i_2 < ... < i_k ≤ n.

X = A B A C D A B A B

Is ABA a subsequence? Yes. Is ACA? Yes. Is DCA? No: there is no C after the D. Is AADAA? Yes.
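The subsequence test can be mechanized with a single left-to-right scan; the helper name `is_subsequence` is ours:

```python
def is_subsequence(S, X):
    """True if S can be obtained from X by picking symbols at increasing indices."""
    it = iter(X)
    # Each 'c in it' consumes the iterator up to (and including) a match,
    # so matches are forced to occur at strictly increasing positions.
    return all(c in it for c in S)

X = "ABACDABAB"
for s in ("ABA", "ACA", "DCA", "AADAA"):
    print(s, is_subsequence(s, X))
```

This runs in O(|X|) time per query, since the scan over X never restarts.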

LCS problem. Given two sequences X and Y, a common subsequence is a subsequence that occurs in both X and Y. Given X = x_1, x_2, ..., x_m and Y = y_1, y_2, ..., y_n, what is the longest common subsequence?

X = A B C B D A B
Y = B D C A B A

The sequence B, D, A, B of length 4 is an LCS of X and Y, since there is no common subsequence of length 5 or greater.

LCS problem. Application: comparison of two DNA strings. A brute-force algorithm would compare each subsequence of X with the symbols in Y.

LCS Algorithm. Brute-force algorithm: for every subsequence of X, check whether it is a subsequence of Y. How many subsequences of X are there? What will be the running time of the brute-force algorithm?

LCS Algorithm. If |X| = m and |Y| = n, then there are 2^m subsequences of X; we must compare each with Y (n comparisons each). So the running time of the brute-force algorithm is O(n 2^m). Notice that the LCS problem has optimal substructure: solutions of subproblems are parts of the final solution. Subproblems: find the LCS of pairs of prefixes of X and Y.
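To make the O(n 2^m) bound concrete, the brute-force search can be sketched as below; it is exponential and only usable on tiny inputs, and the helper names are ours:

```python
from itertools import combinations

def lcs_brute_force(X, Y):
    """Try every subsequence of X (all 2^m of them), longest first,
    and return the first one that is also a subsequence of Y."""
    def is_subseq(S, T):
        it = iter(T)
        return all(c in it for c in S)   # O(n) scan of T
    for k in range(len(X), -1, -1):
        for idx in combinations(range(len(X)), k):   # increasing index sets
            cand = ''.join(X[i] for i in idx)
            if is_subseq(cand, Y):
                return cand
    return ''

print(lcs_brute_force("ABCBDAB", "BDCABA"))   # a length-4 LCS
```

Even here the DP structure is visible: every candidate is a choice of increasing indices, and the check against Y costs O(n), matching the O(n 2^m) total.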

LCS Algorithm. First we'll find the length of an LCS; later we'll modify the algorithm to find the LCS itself. Define X_i and Y_j to be the prefixes of X and Y of length i and j, respectively. Define c[i,j] to be the length of an LCS of X_i and Y_j. Then the length of an LCS of X and Y is c[m,n]; c[m,n] is the final solution.

Step 1: Define the problem with respect to subproblems X = A B C B D A B Y = B D C A B A

Step 1: Define the problem with respect to subproblems X = A B C B D A? Y = B D C A B? Is the last character part of the LCS?

Step 1: Define the problem with respect to subproblems. X = A B C B D A?, Y = B D C A B?. Two cases: either the last characters are the same or they're different.

Step 1: Define the problem with respect to subproblems. X = A B C B D A A, Y = B D C A B A. If the last characters are the same, they are part of the LCS: LCS(X, Y) = LCS(X_{m-1}, Y_{n-1}) + 1.

Step 1: Define the problem with respect to subproblems. X = A B C B D A B, Y = B D C A B A. If they're different, one option is to drop the last character of X: LCS(X, Y) = LCS(X_{m-1}, Y).

Step 1: Define the problem with respect to subproblems. X = A B C B D A B, Y = B D C A B A. If they're different, the other option is to drop the last character of Y: LCS(X, Y) = LCS(X, Y_{n-1}).

Step 1: Define the problem with respect to subproblems. X = A B C B D A B, Y = B D C A B A. If they're different, we don't know which character to drop, so we try both and keep the better answer.

Step 1: Define the problem with respect to subproblems. X = A B C B D A B, Y = B D C A B A.

LCS(X, Y) = 1 + LCS(X_{m-1}, Y_{n-1})                     if x_m = y_n
LCS(X, Y) = max( LCS(X_{m-1}, Y), LCS(X, Y_{n-1}) )       otherwise
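The recurrence translates directly into a bottom-up table over prefixes, exactly as in the Fibonacci example; the function name `lcs_length` is ours:

```python
def lcs_length(X, Y):
    """Bottom-up DP: c[i][j] = length of an LCS of prefixes X_i and Y_j."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # c[0][*] = c[*][0] = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:             # x_i = y_j: extend the LCS
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                # drop a character from X or Y
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4
```

Each of the (m+1)(n+1) table entries takes O(1) work, so this runs in Θ(mn) time, versus O(n 2^m) for brute force.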