Dynamic Programming ACM Seminar in Algorithmics

Marko Genyk-Berezovskyj (berezovs@fel.cvut.cz), Tomáš Tunys (tunystom@fel.cvut.cz)
CVUT FEL, K13133, February 27, 2013

Problem: Longest Increasing Subsequence

Input: A list of numbers a_1, a_2, ..., a_n.
Output: The largest k for which there are indices i_1 < i_2 < ... < i_k with a_{i_1} < a_{i_2} < ... < a_{i_k}.

Exercise: What is the length of the longest increasing subsequence in the following list of numbers? 5, 1, 6, 4, 8, 3, 2, 1, 5, 6, 8

Problem: Longest Increasing Subsequence

At first sight it may seem difficult to come up with a solution, but as we will see there is an underlying structure in the problem that allows us to solve it fast. If you are faced with the list 5 1 6 4 8 3 2 1 5 6 8, can you tell what the underlying structure is and what its properties are?

Problem: Longest Increasing Subsequence

Let G = (V, E) be a directed graph with vertices v_1, v_2, ..., v_n labelled with the numbers a_1, a_2, ..., a_n from the list. There is an edge (v_i, v_j) ∈ E iff i < j and a_i < a_j.

5 1 6 4 8 3 2 1 5 6 8

Two important things to notice:
The graph G is a directed acyclic graph, a so-called DAG.
There is a one-to-one correspondence between paths in the graph G and increasing subsequences of the list a_1, a_2, ..., a_n.

Problem: Longest Increasing Subsequence

Solution: Let us denote by L(i) the length of a longest path ending at vertex v_i. Then the following algorithm computes L(j) for every j ∈ {1, ..., n} (the maximum over an empty set is taken to be 0):

for j ← 1 to n do
    L(j) ← 1 + max{L(i) : (i, j) ∈ E(G)}
return max_{i=1..n} L(i)

Example (for the list 5 1 6 4 8): L(3) = 1 + max{L(1), L(2)}
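The longest-path formulation above can be sketched in Python as follows (a minimal O(n^2) version; the inner loop enumerates the implicit DAG edges (i, j) with i < j and a_i < a_j):

```python
def longest_increasing_subsequence(a):
    """Length of the longest increasing subsequence of the list a.

    L[j] = 1 + max{L[i] : i < j and a[i] < a[j]}, or 1 when no such i exists.
    """
    n = len(a)
    if n == 0:
        return 0
    L = [1] * n
    for j in range(n):
        for i in range(j):          # every edge (i, j) of the implicit DAG
            if a[i] < a[j]:
                L[j] = max(L[j], L[i] + 1)
    return max(L)
```

On the exercise list from the earlier slide this returns 5 (one witness is 1, 2, 5, 6, 8).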

Dynamic Programming

Definition: Dynamic programming is a problem-solving technique based upon the following two principles:
1. Identification of subproblems.
2. Using the answers to smaller subproblems to solve the bigger ones.

In the case of the Longest Increasing Subsequence we have, according to these principles:
1. L(i) - the length of the longest increasing subsequence ending with a_i.
2. L(i) ← 1 + max_{j < i} {L(j) : a_j < a_i}.

When Does Dynamic Programming Work?

A problem is solvable with dynamic programming if it has:
1. Optimal substructure: an optimal solution of the whole problem can be built out of optimal solutions to its subproblems.
2. Overlapping subproblems.

Remember, when you are faced with a DP problem the DAG is implicit - you are the one to define the vertices (subproblems) and the edges (relations) between them.

How do we find out whether a given problem is solvable using DP? Through experience!

Problem: Maximum Subarray Problem

Input: An array of numbers a_1, a_2, ..., a_n.
Output: A contiguous subarray of the given array with the largest sum.

Solution:
1. Subproblems: S[i] - the largest sum of a subarray ending at the i-th element (inclusive).
2. Induction: S[i] ← a_i if S[i-1] ≤ 0, else a_i + S[i-1].
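The recurrence for S[i] collapses into a one-pass sketch (this is Kadane's algorithm; `cur` plays the role of S[i] and `best` tracks the answer over all end positions):

```python
def max_subarray_sum(a):
    """Largest sum of a contiguous subarray of the non-empty array a.

    S[i] = a[i] if S[i-1] <= 0 else a[i] + S[i-1]; answer is max over i.
    """
    best = cur = a[0]
    for x in a[1:]:
        cur = x if cur <= 0 else cur + x   # extend or restart the subarray
        best = max(best, cur)
    return best
```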

Problem: Longest Common Subsequence

Input: Two strings a_1 a_2 ... a_n and b_1 b_2 ... b_m.
Output: The largest k for which there are indices 1 ≤ i_1 < i_2 < ... < i_k ≤ n and 1 ≤ j_1 < j_2 < ... < j_k ≤ m with a_{i_1} a_{i_2} ... a_{i_k} = b_{j_1} b_{j_2} ... b_{j_k}.

Solution:
1. Subproblems: S[i][j] - the length of the longest common subsequence of the prefixes a_1 a_2 ... a_i and b_1 b_2 ... b_j.
2. Induction (diff(a, b) is 1 if a = b, else 0):
S[i][j] ← max{S[i-1][j], S[i][j-1], S[i-1][j-1] + diff(a_i, b_j)}.
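A direct sketch of the S[i][j] table (string indices are shifted by one so that row and column 0 represent the empty prefix):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]   # S[0][*] = S[*][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = 1 if a[i - 1] == b[j - 1] else 0
            S[i][j] = max(S[i - 1][j], S[i][j - 1], S[i - 1][j - 1] + diff)
    return S[n][m]
```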

Problem: Knapsack Problem

Input: A list of items with weights w_1, w_2, ..., w_n and costs c_1, c_2, ..., c_n, and the total weight W the knapsack can hold (its capacity).
Output: A subset of items whose total cost is maximum among those the knapsack can hold, i.e. I ⊆ {1, 2, ..., n} such that Σ_{i∈I} c_i is maximum and Σ_{i∈I} w_i ≤ W.

Solution:
1. Subproblems: C[ŵ][i] - the maximum achievable value of the items from the set {1, 2, ..., i} with knapsack capacity ŵ.
2. Induction: C[ŵ][i] ← max{C[ŵ][i-1], C[ŵ - w_i][i-1] + c_i} if ŵ ≥ w_i, else C[ŵ][i-1].

Top-Down vs. Bottom-Up Approach

Consider the knapsack problem again. The following solution is referred to as the bottom-up approach:

for i ← 0 to n do C[0][i] ← 0
for ŵ ← 0 to W do C[ŵ][0] ← 0
for i ← 1 to n do
    for ŵ ← 1 to W do
        if ŵ < w_i then
            C[ŵ][i] ← C[ŵ][i-1]
        else
            C[ŵ][i] ← max{C[ŵ][i-1], C[ŵ - w_i][i-1] + c_i}
return C[W][n]
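The bottom-up pseudocode translates to Python roughly as follows (items are 0-indexed in the lists, but `C[w][i]` keeps the same meaning as on the slide):

```python
def knapsack(weights, costs, W):
    """0/1 knapsack, bottom-up: max total cost with total weight <= W."""
    n = len(weights)
    # C[w][i]: best value using items 1..i with capacity w (row/col 0 = 0)
    C = [[0] * (n + 1) for _ in range(W + 1)]
    for i in range(1, n + 1):
        w_i, c_i = weights[i - 1], costs[i - 1]
        for w in range(1, W + 1):
            if w < w_i:
                C[w][i] = C[w][i - 1]                       # item i does not fit
            else:
                C[w][i] = max(C[w][i - 1],                  # skip item i
                              C[w - w_i][i - 1] + c_i)      # take item i
    return C[W][n]
```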

Top-Down vs. Bottom-Up Approach

Now consider the following, which is referred to as the top-down approach (entries of C are initialized to -1, meaning "not yet computed"):

function solve(ŵ, i)
    if i = 0 then return 0
    if C[ŵ][i] = -1 then
        C[ŵ][i] ← solve(ŵ, i-1)
        if ŵ ≥ w_i then
            C[ŵ][i] ← max{C[ŵ][i], solve(ŵ - w_i, i-1) + c_i}
    return C[ŵ][i]

for ŵ ← 0 to W do
    for i ← 0 to n do C[ŵ][i] ← -1
return solve(W, n)
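The top-down variant can be sketched with memoization; here `functools.lru_cache` plays the role of the C table initialized to -1:

```python
import functools

def knapsack_topdown(weights, costs, W):
    """0/1 knapsack, top-down: only reachable subproblems are computed."""
    @functools.lru_cache(maxsize=None)      # memoizes solve(w, i)
    def solve(w, i):
        if i == 0:                          # no items left
            return 0
        best = solve(w, i - 1)              # skip item i
        if w >= weights[i - 1]:             # take item i if it fits
            best = max(best, solve(w - weights[i - 1], i - 1) + costs[i - 1])
        return best
    return solve(W, len(weights))
```

Both variants fill in the same recurrence; top-down simply skips capacities ŵ that can never arise.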

Problem: Chain Matrix Multiplication

Input: An expression A_1 A_2 ... A_n, where the A_i's are matrices with dimensions m_0 × m_1, m_1 × m_2, ..., m_{n-1} × m_n.
Output: A parenthesization of the expression that minimizes the number of scalar multiplications needed to evaluate it.

Hint: Different parenthesizations, e.g. A_1 ((A_2 A_3) A_4), can have very different evaluation costs.

Problem: Chain Matrix Multiplication

Solution:
1. Subproblems: C[i][j] - the minimum number of multiplications to evaluate A_i A_{i+1} ... A_j.
2. Induction: C[i][j] ← min_{i ≤ k < j} {C[i][k] + C[k+1][j] + m_{i-1} m_k m_j}.
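The recurrence for C[i][j] can be sketched as follows (the argument `m` is the dimension list m_0, ..., m_n, so A_i has shape m[i-1] × m[i]; subchains are filled in order of increasing length):

```python
def matrix_chain(m):
    """Minimum scalar multiplications to evaluate A_1 ... A_n,
    where A_i is an m[i-1] x m[i] matrix."""
    n = len(m) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]   # C[i][i] = 0
    for length in range(2, n + 1):              # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = min(C[i][k] + C[k + 1][j] + m[i - 1] * m[k] * m[j]
                          for k in range(i, j))
    return C[1][n]
```

For example, with dimensions 10×30, 30×5, 5×60 the optimal order is (A_1 A_2) A_3 at cost 10·30·5 + 10·5·60 = 4500, versus 27000 for A_1 (A_2 A_3).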

DP Problem Complexities

The problems presented are typical representatives of the following complexity classes:
O(n): Maximum Subarray Sum
O(mn): Longest Common Subsequence
O(n^3): Chain Matrix Multiplication