
Aside: Golden Ratio. Golden Ratio: A universal law. Golden ratio φ = lim_{n→∞} (a_n + b_n)/a_n = (1 + √5)/2, where a_{n+1} = a_n + b_n and b_n = a_{n-1}.

CS 473: Algorithms, Spring 2018. Dynamic Programming I. Lecture 3, Jan 23, 2018. Most slides are courtesy Prof. Chekuri.

Recursion. Reduction: reduce one problem to another. Recursion is a special case of reduction: reduce a problem to a smaller instance of itself (self-reduction).
1. A problem instance of size n is reduced to one or more instances of size n-1 or less.
2. For termination, problem instances of small size are solved by some other method as base cases.

Recursion in Algorithm Design.
1. Tail Recursion: the problem is reduced to a single recursive call after some work. Easy to convert the algorithm into an iterative or greedy algorithm.
2. Divide and Conquer: the problem is reduced to multiple independent sub-problems that are solved separately. The conquer step puts together the solutions for the bigger problem. Examples: Merge/Quick Sort, FFT.
3. Dynamic Programming: the problem is reduced to multiple (typically) dependent or overlapping sub-problems. Use memoization to avoid recomputation of common solutions.

Part I: Recursion and Memoization

Recursion, Recursion Tree and Dependency Graph.
foo(instance X):
  If X is a base case then solve it and return solution
  Else:
    do some computation
    foo(x1)
    do some computation
    foo(x2)
    foo(x3)
    more computation
    Output solution for X
Two objects of interest when analyzing foo(X): the recursion tree of the recursive implementation, and a DAG representing the dependency graph of the distinct subproblems.

Fibonacci Numbers. Fibonacci (1200 AD), Pingala (200 BCE). Numbers defined by the recurrence F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1. These numbers have many interesting and amazing properties; there is even a journal, The Fibonacci Quarterly!
1. F(n) = (φ^n - (1-φ)^n)/√5, where φ is the golden ratio (1 + √5)/2 ≈ 1.618.
2. lim_{n→∞} F(n+1)/F(n) = φ.

Recursive Algorithm for Fibonacci Numbers. Question: Given n, compute F(n).
Fib(n):
  if (n = 0) return 0
  else if (n = 1) return 1
  else return Fib(n-1) + Fib(n-2)
Running time? Let T(n) be the number of additions in Fib(n). Then T(n) = T(n-1) + T(n-2) + 1 with T(0) = T(1) = 0. This is roughly the same recurrence as F(n), so T(n) = Θ(φ^n): the number of additions is exponential in n.
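
To see the blow-up concretely, here is a minimal Python sketch (not from the slides; the function names are illustrative) of the naive recursion together with a counter realizing T(n) = T(n-1) + T(n-2) + 1:

def fib_naive(n):
    # Naive recursion: recomputes the same subproblems over and over.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def additions(n):
    # T(n) = T(n-1) + T(n-2) + 1, T(0) = T(1) = 0; equals F(n+1) - 1.
    if n <= 1:
        return 0
    return additions(n - 1) + additions(n - 2) + 1

print(fib_naive(10), additions(10))  # 55 88 -- T(n) grows like phi^n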

Recursion tree vs dependency graph. [Figure: recursion tree and dependency graph for Fib(5).]

An Iterative Algorithm for Fibonacci Numbers.
FibIter(n):
  if (n = 0) then return 0
  if (n = 1) then return 1
  F[0] = 0
  F[1] = 1
  for i = 2 to n do
    F[i] = F[i-1] + F[i-2]
  return F[n]
Running time: O(n) additions.

What is the difference?
1. The recursive algorithm recomputes the same subproblems many times.
2. The iterative algorithm computes the value of each subproblem only once, by storing them: memoization.
Dynamic Programming: finding a recursion that can be effectively/efficiently memoized. It leads to a polynomial-time algorithm if the number of sub-problems is polynomial in the input size. Every recursive algorithm can be memoized by working with the dependency graph.

Automatic Memoization. Can we convert a recursive algorithm into an efficient algorithm without explicitly writing an iterative algorithm?
Fib(n):
  if (n = 0) return 0
  if (n = 1) return 1
  if (Fib(n) was previously computed)
    return stored value of Fib(n)
  else
    return Fib(n-1) + Fib(n-2)
How do we keep track of previously computed values? Two methods: explicitly, and implicitly (via a data structure).

Automatic Explicit Memoization. Initialize a table/array M of size n such that M[i] = -1 for i = 0, ..., n.
Fib(n):
  if (n = 0) return 0
  if (n = 1) return 1
  if (M[n] > -1) (* M[n] has stored value of Fib(n) *)
    return M[n]
  M[n] ← Fib(n-1) + Fib(n-2)
  return M[n]
To allocate memory we need to know upfront the number of subproblems for a given input size n.
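
A runnable Python rendering of this explicit scheme (a sketch under the slide's assumption that n is known upfront, so the table can be preallocated; the names are illustrative):

def fib_memo_explicit(n):
    M = [-1] * (n + 1)  # -1 marks "not yet computed"
    def fib(k):
        if k <= 1:
            return k
        if M[k] > -1:   # stored value of Fib(k)
            return M[k]
        M[k] = fib(k - 1) + fib(k - 2)
        return M[k]
    return fib(n)

print(fib_memo_explicit(50))  # 12586269025, with O(n) additions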

Automatic Implicit Memoization. Initialize a (dynamic) dictionary data structure D to empty.
Fib(n):
  if (n = 0) return 0
  if (n = 1) return 1
  if (n is already in D)
    return value stored with n in D
  val ← Fib(n-1) + Fib(n-2)
  Store (n, val) in D
  return val
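
In Python the dictionary D can be managed by hand exactly as above, or the memoization can be made fully automatic with the standard-library decorator functools.lru_cache (a sketch, not from the slides):

import functools

@functools.lru_cache(maxsize=None)  # dictionary-backed automatic memoization
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(200))  # O(n) additions instead of Theta(phi^n)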

Explicit vs Implicit Memoization.
1. Explicit memoization (or an iterative algorithm) is preferred if one can analyze the problem ahead of time. It allows for efficient memory allocation and access.
2. Implicit and automatic memoization is used when the problem structure or algorithm is either not well understood or in fact unknown to the underlying system. One must pay the overhead of the data structure. Functional languages such as LISP automatically do memoization, usually via hashing-based dictionaries.

Back to Fibonacci Numbers: saving space. Do we need an array of n numbers? Not really.
FibIter(n):
  if (n = 0) then return 0
  if (n = 1) then return 1
  prev2 = 0
  prev1 = 1
  for i = 2 to n do
    temp = prev1 + prev2
    prev2 = prev1
    prev1 = temp
  return prev1

What is Dynamic Programming? Every recursion can be memoized, but automatic memoization does not help us understand whether the resulting algorithm is efficient or not.
Dynamic Programming: a recursion that, when memoized, leads to an efficient algorithm.
Key Questions:
1. Given a recursive algorithm, how do we analyze its complexity when it is memoized?
2. How do we recognize whether a problem admits an efficient dynamic-programming-based algorithm?
3. How do we further optimize the time and space of a dynamic-programming-based algorithm?

Part II: Edit Distance

Edit Distance. Definition: the edit distance between two words X and Y is the number of letter insertions, letter deletions and letter substitutions required to obtain Y from X.
Example: the edit distance between FOOD and MONEY is at most 4: FOOD → MOOD → MONOD → MONED → MONEY.

Edit Distance: Alternate View. Alignment: place the words one on top of the other, with gaps in the first word indicating insertions, and gaps in the second word indicating deletions.
F O O _ D
M O N E Y
Formally, an alignment is a sequence M of pairs (i, j) such that each index appears exactly once and there is no crossing: if (i, j) and (i', j') are in M with i < i', then j < j'. One of i or j could be empty (a gap), in which case there is no comparison. In the example above, M = {(1, 1), (2, 2), (3, 3), (-, 4), (4, 5)}.
Cost of an alignment: the number of mismatched columns.

Edit Distance Problem. Problem: given two words, find the edit distance between them, i.e., an alignment of smallest cost.

Edit Distance: Basic Observation. Let X = αx and Y = βy, where α, β are strings and x, y are single characters. In the last column of an alignment of X and Y, either x is aligned with y, or x is aligned with a gap, or a gap is aligned with y.
Observation: prefixes must have an optimal alignment!
EDIST(X, Y) = min( EDIST(α, β) + [x ≠ y], 1 + EDIST(α, Y), 1 + EDIST(X, β) )

Recursive Algorithm. Assume X is stored in array A[1..m] and Y is stored in B[1..n].
EDIST(A[1..m], B[1..n]):
  If (m = 0) return n
  If (n = 0) return m
  m1 = 1 + EDIST(A[1..(m-1)], B[1..n])
  m2 = 1 + EDIST(A[1..m], B[1..(n-1)])
  If (A[m] = B[n]) then m3 = EDIST(A[1..(m-1)], B[1..(n-1)])
  Else m3 = 1 + EDIST(A[1..(m-1)], B[1..(n-1)])
  return min(m1, m2, m3)
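
A direct Python transcription of this recursion (a sketch; still exponential, since nothing is memoized yet):

def edist(A, B):
    # Edit distance between strings A and B by plain recursion.
    if len(A) == 0:
        return len(B)
    if len(B) == 0:
        return len(A)
    m1 = 1 + edist(A[:-1], B)         # last char of A aligned with a gap
    m2 = 1 + edist(A, B[:-1])         # last char of B aligned with a gap
    m3 = (A[-1] != B[-1]) + edist(A[:-1], B[:-1])  # match or substitute
    return min(m1, m2, m3)

print(edist("FOOD", "MONEY"))  # 4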

Example: DEED and DREAD.

Subproblems and Recurrence. Each subproblem corresponds to a prefix of X and a prefix of Y.
Optimal Costs: let Opt(i, j) be the optimal cost of aligning x_1 ... x_i and y_1 ... y_j. Then
Opt(i, j) = min( [x_i ≠ y_j] + Opt(i-1, j-1), 1 + Opt(i-1, j), 1 + Opt(i, j-1) )
Base cases: Opt(i, 0) = i and Opt(0, j) = j.

Memoizing the Recursive Algorithm.
int M[0..m][0..n]
Initialize all entries of M[i][j] to ∞
return EDIST(A[1..m], B[1..n])

EDIST(A[1..m], B[1..n]):
  If (M[m][n] < ∞) return M[m][n] (* return stored value *)
  If (m = 0) M[m][n] = n
  ElseIf (n = 0) M[m][n] = m
  Else
    m1 = 1 + EDIST(A[1..(m-1)], B[1..n])
    m2 = 1 + EDIST(A[1..m], B[1..(n-1)])
    If (A[m] = B[n]) m3 = EDIST(A[1..(m-1)], B[1..(n-1)])
    Else m3 = 1 + EDIST(A[1..(m-1)], B[1..(n-1)])
    M[m][n] = min(m1, m2, m3)
  return M[m][n]

Matrix and DAG of Computation. [Figure: the matrix M of subproblems from (0, 0) to (m, n); entry (i, j) depends on entries (i-1, j-1), (i-1, j) and (i, j-1). Caption: dependency of matrix entries in the recursive algorithm of the previous slide.]

Removing Recursion to Obtain an Iterative Algorithm.
EDIST(A[1..m], B[1..n]):
  int M[0..m][0..n]
  M[0, 0] = 0
  for i = 1 to m do M[i, 0] = i
  for j = 1 to n do M[0, j] = j
  for i = 1 to m do
    for j = 1 to n do
      M[i][j] = min( [x_i ≠ y_j] + M[i-1][j-1], 1 + M[i-1][j], 1 + M[i][j-1] )
Analysis:
1. Running time is O(mn).
2. Space used is O(mn).
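
The same table computation as runnable Python (a sketch mirroring the pseudocode above; edist_iter is an illustrative name):

def edist_iter(A, B):
    # Iterative edit-distance DP: M[i][j] = Opt(i, j); O(mn) time and space.
    m, n = len(A), len(B)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        M[i][0] = i  # delete all of A[1..i]
    for j in range(n + 1):
        M[0][j] = j  # insert all of B[1..j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = min((A[i-1] != B[j-1]) + M[i-1][j-1],
                          1 + M[i-1][j],
                          1 + M[i][j-1])
    return M[m][n]

print(edist_iter("DEED", "DREAD"))  # 2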

Matrix and DAG of Computation (revisited). [Figure: the same matrix M; the iterative algorithm in the previous slide computes the values in row order.]

Finding an Optimum Solution. The DP algorithm finds the minimum edit distance in O(mn) space and time.
Question: can we find a specific alignment which achieves the minimum?
Exercise: show that one can find an optimum solution after computing the optimum value. The key idea is to store back pointers when computing Opt(i, j), recording how it was calculated. See the notes for more details.
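
One way to carry out the exercise (a sketch, not the notes' solution: rather than storing explicit back pointers, it re-checks which case of the recurrence achieved the minimum while walking back from (m, n)):

def align(A, B):
    # Recover an optimal alignment by tracing back through the DP table.
    m, n = len(A), len(B)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): M[i][0] = i
    for j in range(n + 1): M[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = min((A[i-1] != B[j-1]) + M[i-1][j-1],
                          1 + M[i-1][j], 1 + M[i][j-1])
    pairs, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and M[i][j] == (A[i-1] != B[j-1]) + M[i-1][j-1]:
            pairs.append((i, j)); i -= 1; j -= 1      # match or substitute
        elif i > 0 and M[i][j] == 1 + M[i-1][j]:
            pairs.append((i, '-')); i -= 1            # A[i] aligned with a gap
        else:
            pairs.append(('-', j)); j -= 1            # gap aligned with B[j]
    return M[m][n], pairs[::-1]

print(align("FOOD", "MONEY"))
# (4, [(1, 1), (2, 2), ('-', 3), (3, 4), (4, 5)]) -- one optimal alignment;
# ties may be broken differently than in the slides' example.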

Dynamic Programming Template.
1. Come up with a recursive algorithm to solve the problem.
2. Understand the structure/number of the subproblems generated by the recursion.
3. Memoize the recursion (DP): set up compact notation for the subproblems, and set up a data structure for storing them.
4. Iterative algorithm: understand the dependency graph on the subproblems, and pick an evaluation order (any topological sort of the dependency DAG).
5. Analyze time and space.
6. Optimize.

Part III: Knapsack

Knapsack Problem.
Input: a knapsack of capacity W lbs. and n objects, with the i-th object having weight w_i and value v_i; assume W, w_i, v_i are all positive integers.
Goal: fill the knapsack without exceeding the weight limit while maximizing the value.
A basic problem that arises in many applications as a sub-problem.

Knapsack Example.
Item:    I1  I2  I3  I4  I5
Value:    1   6  18  22  28
Weight:   1   2   5   6   7
If W = 11, the best is {I3, I4}, giving value 40.
Special Case: when v_i = w_i, the Knapsack problem is called the Subset Sum Problem.

Knapsack. For the following instance of Knapsack:
Item:    I1  I2  I3  I4  I5
Value:    1   6  16  22  28
Weight:   1   2   5   6   7
with weight limit W = 15, the best solution has value: (A) 22 (B) 28 (C) 38 (D) 50 (E) 56

Greedy Approach.
1. Pick objects with greatest value. Let W = 2, w_1 = w_2 = 1, w_3 = 2, v_1 = v_2 = 2 and v_3 = 3; the greedy strategy will pick {3}, but the optimum is {1, 2}.
2. Pick objects with smallest weight. Let W = 2, w_1 = 1, w_2 = 2, v_1 = 1 and v_2 = 3; the greedy strategy will pick {1}, but the optimum is {2}.
3. Pick objects with largest v_i/w_i ratio. Let W = 4, w_1 = w_2 = 2, w_3 = 3, v_1 = v_2 = 3 and v_3 = 5; the greedy strategy will pick {3}, but the optimum is {1, 2}.
Aside: one can show that a slight modification of the ratio rule always gives at least half the optimum profit: pick the better of the output of this algorithm and the largest-value item (see the sketch below). The algorithm also gives better approximations when all item weights are small compared to W.
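
A sketch of that modified greedy (illustrative names; it packs by value/weight ratio, then compares against the best single item that fits, which is what yields the half-the-optimum guarantee in the aside):

def greedy_half(items, W):
    # items: list of (value, weight) pairs. Returns a feasible value >= OPT/2.
    total_v = total_w = 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if total_w + w <= W:   # take everything that still fits, by ratio
            total_v += v
            total_w += w
    best_single = max((v for v, w in items if w <= W), default=0)
    return max(total_v, best_single)

# Third counterexample above: ratio rule alone packs {3} for value 5; OPT = 6.
print(greedy_half([(3, 2), (3, 2), (5, 3)], W=4))  # 5, and 5 >= 6/2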

Towards a Recursive Algorithm. First guess: Opt(i) is the optimum solution value for items 1, ..., i.
Observation: consider an optimal solution O for items 1, ..., i.
Case item i ∉ O: then O is an optimal solution for items 1 to i-1.
Case item i ∈ O: then O \ {i} is an optimum solution for items 1 to i-1 in a knapsack of capacity W - w_i.
Subproblems depend also on the remaining capacity, so we cannot write a subproblem only in terms of Opt(1), ..., Opt(i-1). Instead, let Opt(i, w) be the optimum profit for items 1 to i in a knapsack of size w. Goal: compute Opt(n, W).

Dynamic Programming Solution. Definition: let Opt(i, w) be the value of an optimal way of picking items from 1 to i with total weight not exceeding w. Then
Opt(i, w) = 0 if i = 0;
Opt(i, w) = Opt(i-1, w) if w_i > w;
Opt(i, w) = max( Opt(i-1, w), Opt(i-1, w - w_i) + v_i ) otherwise.
The number of subproblems generated by Opt(n, W) is O(nW).
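
The recurrence translated into memoized Python (a sketch; the lists v and w are 0-indexed, so item i of the slides corresponds to v[i-1] and w[i-1]):

import functools

def knapsack(v, w, W):
    @functools.lru_cache(maxsize=None)
    def opt(i, cap):
        # Best value using items 1..i within capacity cap.
        if i == 0:
            return 0
        if w[i - 1] > cap:                 # item i does not fit
            return opt(i - 1, cap)
        return max(opt(i - 1, cap),        # skip item i
                   opt(i - 1, cap - w[i - 1]) + v[i - 1])  # take item i
    return opt(len(v), W)

print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # 40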

An Iterative Algorithm.
for w = 0 to W do M[0, w] = 0
for i = 1 to n do
  M[i, 0] = 0
  for w = 1 to W do
    if (w_i > w) then M[i, w] = M[i-1, w]
    else M[i, w] = max(M[i-1, w], M[i-1, w - w_i] + v_i)
Running Time:
1. Time taken is O(nW).
2. The input has size O(n + log W + Σ_{i=1}^{n} (log v_i + log w_i)), so the running time is not polynomial but pseudo-polynomial!
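
The same table-filling as runnable Python (a sketch mirroring the pseudocode above):

def knapsack_iter(v, w, W):
    # Iterative knapsack DP: M[i][cap] = Opt(i, cap); O(nW) time and space.
    n = len(v)
    M = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for cap in range(1, W + 1):
            if w[i - 1] > cap:
                M[i][cap] = M[i - 1][cap]
            else:
                M[i][cap] = max(M[i - 1][cap],
                                M[i - 1][cap - w[i - 1]] + v[i - 1])
    return M[n][W]

print(knapsack_iter([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # 40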

Introducing a Variable. For the Knapsack problem, obtaining a recursive algorithm required introducing a new variable, namely the size of the knapsack. This is a key idea that recurs in many dynamic programming problems. How do we figure out when this is possible? A heuristic answer that works for many problems: try divide and conquer or the obvious recursion; if the problem is not decomposable, then introduce the information required to decompose it as new variable(s). We will see several examples to make this idea concrete.

Knapsack Algorithm and Polynomial Time.
1. Input size for Knapsack: O(n) + log W + Σ_{i=1}^{n} (log w_i + log v_i).
2. Running time of the dynamic programming algorithm: O(nW).
3. This is not a polynomial-time algorithm.
4. Example: W = 2^n and w_i, v_i ∈ [1..2^n]. The input size is O(n^2), but the running time is O(n · 2^n) arithmetic operations/comparisons.
5. The algorithm is called a pseudo-polynomial time algorithm, because the running time is polynomial whenever the numbers in the input are of size polynomial in the combinatorial size of the problem.
6. Knapsack is NP-Hard if the numbers are not polynomial in n.