Dynamic programming. Curs 2015
1 Dynamic programming. Curs 2015
2 Fibonacci Recurrence. n-th Fibonacci Term. INPUT: n ∈ N. QUESTION: Compute F(n) = F(n-1) + F(n-2).
Recursive Fibonacci(n): if n = 0 then return 0; else if n = 1 then return 1; else return Fibonacci(n-1) + Fibonacci(n-2); end if
3 Computing F(7). As F(n+1)/F(n) tends to the golden ratio (1 + √5)/2 ≈ 1.618, F(n) grows exponentially, and the naive recursion makes Ω(1.6^n) recursive calls to compute F(n). [Figure: recursion tree for F(7), showing the same subproblems, e.g. F(4) and F(3), computed repeatedly.]
4 A DP algorithm. To avoid repeating the computation of the same subproblems, carry out the computation bottom-up and store the partial results in a table.
DP-Fibonacci(n): {Construct table} F[0] := 0; F[1] := 1; for i = 2 to n do F[i] := F[i-1] + F[i-2] end for
To get F(n) we need O(n) time and O(n) space. [Table: F[0] F[1] F[2] F[3] F[4] F[5] F[6] F[7]]
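The bottom-up table above translates directly into code. A minimal Python sketch (the function name `fib` is my own):

```python
def fib(n):
    # Bottom-up Fibonacci: fill the table F[0..n] left to right,
    # so every subproblem is computed exactly once.
    if n == 0:
        return 0
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

The O(n) space can even be reduced to O(1) by keeping only the last two table entries.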
5 [Figure: recursion tree for F(7).] The recursive (top-down) approach is very slow: too many identical subproblems and lots of repeated work. Therefore, go bottom-up and store the results in a table. This lets us look up the solutions of subproblems instead of recomputing them, which is more efficient.
6 Dynamic Programming. Richard Bellman: An introduction to the theory of dynamic programming, RAND, 1953. Today it would rather be called dynamic planning. Dynamic programming is a powerful technique of divide-and-conquer type for efficiently computing recurrences, by storing partial results and re-using them when needed. It explores the space of all possible solutions by decomposing the problem into subproblems, and then building up correct solutions to larger and larger problems. The number of recursive calls may be exponential, but if the number of distinct subproblems is polynomial, we can get a polynomial-time solution by never recomputing the same subproblem.
7 Properties of Dynamic Programming. Dynamic programming works when:
Optimal substructure: an optimal solution to a problem contains optimal solutions to its subproblems.
Overlapping subproblems: a recursive solution contains a small number of distinct subproblems, each repeated many times.
Difference with greedy: greedy problems have the greedy choice property: locally optimal choices lead to a globally optimal solution. For DP problems no such greedy choice exists: a globally optimal solution requires weighing (and backtracking through) many choices.
8 Guideline to implement Dynamic Programming. 1. Characterize the structure of an optimal solution: make sure the space of subproblems is not exponential; define the variables. 2. Define recursively the value of an optimal solution: find the correct recurrence formula, expressing the solution of a larger problem as a function of solutions of subproblems. 3. Compute, bottom-up, the cost of a solution: using the recursive formula, tabulate solutions to smaller problems, until arriving at the value for the whole problem. 4. Construct an optimal solution: trace back from the optimal value.
9 Implementation of Dynamic Programming. Memoization: a technique consisting of storing the results of subproblems and returning the stored result when the same subproblem occurs again; it is used to speed up computer programs. Implementing the DP recurrence with plain recursion can be very inefficient, because it solves the same subproblems many times. But if we solve and store the solution of each subproblem without repeating the computation, recursion + memoization is a clever alternative. To implement memoization use any dictionary data structure, usually tables or hashing.
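As a quick illustration of recursion + memoization with a dictionary (a sketch; the function name and the use of a plain dict as the memo are my own choices):

```python
def fib_memo(n, memo=None):
    # Top-down Fibonacci: recursion plus a dictionary of solved subproblems,
    # so each value F(k) is computed only once.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    result = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result
    return result
```

Without the memo this call tree has exponentially many nodes; with it, only n + 1 distinct calls do real work.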
10 Implementation of Dynamic Programming. The other way to implement DP is with iterative algorithms. DP is a trade-off between time and storage space. In general, although a recursive algorithm has the same asymptotic running time as the iterative version, the constant factor hidden in the O is quite a bit larger because of the overhead of recursion. On the other hand, the memoized version is usually easier to program, more concise and more elegant. Top-down: recursive; bottom-up: iterative.
11 Weighted Activity Selection Problem. INPUT: a set S = {1, 2, ..., n} of activities to be processed by a single resource. Each activity i has a start time s_i and a finish time f_i, with f_i > s_i, and a weight w_i. QUESTION: find a set A of mutually compatible activities that maximizes Σ_{i∈A} w_i. Recall: the greedy strategy does not always solve this problem.
12 Notation for the weighted activity selection problem. We have activities {1, 2, ..., n} with f_1 ≤ f_2 ≤ ... ≤ f_n and weights {w_i}. Define p(j) to be the largest integer i < j such that activities i and j are disjoint (p(j) = 0 if no such i < j exists). Let Opt(j) be the value of the optimal solution to the problem restricted to activities 1 to j, and let O_j be the set of activities in an optimal solution for {1, ..., j}. [Example figure: p(1)=0, p(2)=0, p(3)=1, p(4)=0, p(5)=3, p(6)=…]
13 Recurrence. Consider the subproblem on {1, ..., j}. We have two cases:
1. j ∈ O_j: w_j is part of the solution, no activity in {p(j)+1, ..., j-1} is in O_j, and if O_{p(j)} is the optimal solution for {1, ..., p(j)}, then O_j = O_{p(j)} ∪ {j} (optimal substructure).
2. j ∉ O_j: then O_j = O_{j-1}.
Therefore: Opt(j) = 0 if j = 0; Opt(j) = max{Opt(p(j)) + w_j, Opt(j-1)} if j ≥ 1.
14 Recursive algorithm. Given the set of activities A, we start with a pre-processing phase: sort the activities by increasing {f_i} and compute and tabulate P[j]. The cost of the pre-processing phase is O(n lg n + n). From now on we assume A is sorted and all p(j) are computed and tabulated in P[1..n].
R-Opt(j): if j = 0 then return 0; else return max(w_j + R-Opt(P[j]), R-Opt(j-1)); end if
15 Recursive algorithm. [Figure: tree of recursive calls Opt(6), Opt(5), ..., with many repeated subproblems.] What is the worst-case running time of this algorithm? O(2^n).
16 Iterative algorithm. Assume the input is the set A of n activities sorted by increasing f, each i with s_i and w_i, and the values P[j] tabulated.
I-Opt(n): define a table M[0..n]; M[0] := 0; for j = 1 to n do M[j] := max(M[P[j]] + w_j, M[j-1]) end for; return M[n]
Notice: this algorithm returns only the maximum total weight. Time complexity: O(n) (not counting the O(n lg n) pre-processing).
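The whole pipeline (sort by finish time, tabulate p, fill M) can be sketched in Python; the function name and the `(start, finish, weight)` tuple convention are my own:

```python
import bisect

def weighted_interval(activities):
    # activities: list of (start, finish, weight) tuples.
    # Returns the maximum total weight of mutually compatible activities.
    acts = sorted(activities, key=lambda a: a[1])   # sort by finish time
    finishes = [f for _, f, _ in acts]
    n = len(acts)
    # p[j] = number of the last activity (1-based) with finish <= start of j,
    # or 0 if none; found by binary search among the first j-1 finish times.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect.bisect_right(finishes, acts[j - 1][0], 0, j - 1)
    # Fill M bottom-up, exactly as in I-Opt.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        w = acts[j - 1][2]
        M[j] = max(M[p[j]] + w, M[j - 1])
    return M[n]
```

The binary search makes the p table cost O(n lg n), matching the pre-processing bound above.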
17 Iterative algorithm: Example. [Table lost in transcription: one row per activity with columns i, s_i, f_i, w_i, P[i], M[i].]
18 Iterative algorithm: returning the selected activities. Once the table M[0..n] is filled, trace back from j = n:
List-Sol(j): if j = 0 then return ∅; else if M[P[j]] + w_j > M[j-1] then return {j} ∪ List-Sol(P[j]); else return List-Sol(j-1); end if
In the previous example we get the activities 1, 3, 6 and value = 5.
19 The DP recursive algorithm: memoization. Notice we only have to solve n different subproblems: Opt(1) to Opt(n). A memoized recursive algorithm stores the solution of each subproblem in a dictionary data structure, so it could use tables, hashing, ... In the algorithm below we use a table M[]:
Mem-Opt(j): if j = 0 then return 0; else if M[j] is defined then return M[j]; else M[j] := max(Mem-Opt(P[j]) + w_j, Mem-Opt(j-1)); return M[j]; end if
Time complexity: O(n).
20 The DP recursive algorithm: memoization. To also get the list of selected activities:
Mem-Find-Sol(j): if j = 0 then return ∅; else if M[P[j]] + w_j > M[j-1] then return {j} ∪ Mem-Find-Sol(P[j]); else return Mem-Find-Sol(j-1); end if
Time complexity: O(n).
21 0-1 Knapsack. INPUT: a set I = {1, ..., n} of items that can NOT be fractioned, each item i with weight w_i and value v_i, and a maximum permissible weight W. QUESTION: select a subset of items with total weight at most W that maximizes the total value. Recall that we can NOT take fractions of items.
22 Characterize the structure of an optimal solution and define the recurrence. We need a new variable w. Let v[i, w] be the maximum value (optimum) we can get from objects {1, 2, ..., i} using a maximum weight of w (the remaining free weight). We wish to compute v[n, W]. To compute v[i, w] we have two possibilities: either the i-th element is not part of the solution, and we take v[i-1, w]; or the i-th element is part of the solution, and we add v_i to v[i-1, w - w_i]. This gives the recurrence:
v[i, w] = 0 if i = 0 or w = 0; v[i, w] = v[i-1, w] if w_i > w; v[i, w] = max{v[i-1, w], v[i-1, w - w_i] + v_i} otherwise.
23 Iterative algorithm. Define a table v[0..n, 0..W].
Knapsack(I, W): for i = 0 to n do v[i, 0] := 0 end for; for w = 0 to W do v[0, w] := 0 end for; for i = 1 to n do; for w = 1 to W do; if w_i > w then v[i, w] := v[i-1, w] else v[i, w] := max{v[i-1, w], v[i-1, w - w_i] + v_i} end if; end for; end for; return v[n, W]
The number of steps is O(nW).
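The table-filling recurrence can be sketched in Python (function name mine; the table is 0-indexed over items and weights):

```python
def knapsack(weights, values, W):
    # 0-1 knapsack via the v-table recurrence: O(n*W) time and space.
    n = len(weights)
    v = [[0] * (W + 1) for _ in range(n + 1)]   # v[0][*] = v[*][0] = 0
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] > w:              # item i does not fit
                v[i][w] = v[i - 1][w]
            else:                               # skip it, or take it
                v[i][w] = max(v[i - 1][w],
                              v[i - 1][w - weights[i - 1]] + values[i - 1])
    return v[n][W]
```

Keeping the full table (rather than one row) is what later allows the solution itself to be recovered.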
24 Example. [Example lost in transcription: a table of items i with weights w_i and values v_i, and the table v[i, w] for W = 11, illustrating one step of the recurrence such as v[5, 11] = max{v[4, 11], v[4, 11 - w_5] + v_5}.]
25 Recovering the solution. Keep, for every cell [i, w], a list L[i, w] of the items chosen.
Sol-Knaps(I, W): initialize the table v; define a list L[i, w] for every cell [i, w]; for i = 1 to n do; for w = 0 to W do; v[i, w] := max{v[i-1, w], v[i-1, w - w_i] + v_i}; if v[i-1, w] < v[i-1, w - w_i] + v_i then L[i, w] := L[i-1, w - w_i] ∪ {i} else L[i, w] := L[i-1, w] end if; end for; end for; return v[n, W] and the list L[n, W]
Complexity: O(nW) plus the cost of maintaining the lists L[i, w]. Question: how would you implement the L[i, w]'s?
26 Solution for the previous example. [Table of lists lost in transcription, ending with (1), (1,2), ..., (1,2,5).] Question: could you come up with a memoized algorithm? (Hint: use a hash data structure to store the solutions of subproblems.)
27 Complexity. The 0-1 Knapsack problem is NP-complete. Does this mean P = NP? An algorithm runs in pseudo-polynomial time if its running time is polynomial in the numerical value of the input, but exponential in the length of the input. Recall that for n ∈ Z the value is n, but the length of its representation is lg n bits. The DP algorithm for 0-1 Knapsack has complexity O(nW), while the input length is O(lg n + lg W). If W = 2^n the algorithm is exponential in the length of the input. However, the DP algorithm works fine when W = Θ(n). Consider the unary knapsack problem, where all integers are coded in unary (7 = 1111111). In this case the complexity of the DP algorithm is polynomial in the input size, i.e. unary knapsack ∈ P.
28 Multiplying a Sequence of Matrices. Multiplication of n matrices. INPUT: a sequence of n matrices (A_1 A_2 ... A_n). QUESTION: minimize the number of operations in the computation A_1 × A_2 × ... × A_n. Recall that given matrices A_1, A_2 with dim(A_1) = p_0 × p_1 and dim(A_2) = p_1 × p_2, the basic algorithm to compute A_1 × A_2 takes time p_0 p_1 p_2.
29 Recall that matrix multiplication is NOT commutative, so we cannot permute the order of the matrices without changing the result; but it is associative, so we can put parentheses as we wish. In fact, the problem of, given A_1, ..., A_n with dim(A_i) = p_{i-1} × p_i, multiplying them with the minimum number of operations is equivalent to the problem of how to parenthesize the sequence A_1, ..., A_n. Example: consider A_1 A_2 A_3 (the dimensions were lost in transcription); one parenthesization gives ((A_1 A_2) A_3) = 7500 operations, while the other gives (A_1 (A_2 A_3)) = 75000 operations. The order makes a big difference in real computation time.
30 How many ways to parenthesize A_1, ..., A_n? For A_1 A_2 A_3 A_4: (A_1 (A_2 (A_3 A_4))), ((A_1 A_2)(A_3 A_4)), ((A_1 (A_2 A_3)) A_4), (A_1 ((A_2 A_3) A_4)), (((A_1 A_2) A_3) A_4). Let P(n) be the number of ways to parenthesize A_1, ..., A_n. Then P(1) = 1 and P(n) = Σ_{k=1}^{n-1} P(k) P(n-k) for n ≥ 2, with solution P(n) = C_{n-1}, where C_n = (1/(n+1)) C(2n, n) = Ω(4^n / n^{3/2}) are the Catalan numbers. Brute force would take too long!
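The recurrence for P(n) is easy to check numerically; a small Python sketch (the function name is mine):

```python
def num_parenthesizations(n):
    # P(1) = 1; P(n) = sum over the outermost split k of P(k) * P(n - k).
    P = [0] * (n + 1)
    P[1] = 1
    for m in range(2, n + 1):
        P[m] = sum(P[k] * P[m - k] for k in range(1, m))
    return P[n]
```

P(4) = 5, matching the five parenthesizations of A_1 A_2 A_3 A_4 listed above.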
31 1.- Structure of an optimal solution and recursive solution. Let A_{i..j} = (A_i A_{i+1} ... A_j). Within an optimal parenthesization of A_1 ... A_n that splits at position k, the parenthesization of the subchain A_1 ... A_k must itself be an optimal parenthesization of A_1 ... A_k, and similarly for A_{k+1} ... A_n. Notice that for the chosen k, 1 ≤ k < n: cost(A_{1..n}) = cost(A_{1..k}) + cost(A_{k+1..n}) + p_0 p_k p_n.
32 Let m[i, j] be the minimum cost of computing A_i ... A_j. Then m[i, j] is given by choosing the k, i ≤ k < j, that minimizes m[i, k] + m[k+1, j] plus the cost of the final product. That is: m[i, j] = 0 if i = j; m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j } otherwise.
33 2.- Computing the optimal costs. Straightforward implementation of the previous recurrence: as dim(A_i) = p_{i-1} × p_i, the input is given by P = <p_0, p_1, ..., p_n>.
MCR(P, i, j): if i = j then return 0 end if; m[i, j] := ∞; for k = i to j-1 do; q := MCR(P, i, k) + MCR(P, k+1, j) + p_{i-1} p_k p_j; if q < m[i, j] then m[i, j] := q end if; end for; return m[i, j]
The running time of this recursion satisfies T(n) ≥ 2 Σ_{i=1}^{n-1} T(i) + n, so T(n) = Ω(2^n).
34 Dynamic programming approach. Use two auxiliary tables: m[1..n, 1..n] and s[1..n, 1..n].
MCP(P): for i = 1 to n do m[i, i] := 0 end for; for l = 2 to n do; for i = 1 to n - l + 1 do; j := i + l - 1; m[i, j] := ∞; for k = i to j-1 do; q := m[i, k] + m[k+1, j] + p_{i-1} p_k p_j; if q < m[i, j] then m[i, j] := q, s[i, j] := k end if; end for; end for; end for; return m, s
T(n) = Θ(n^3), and space = Θ(n^2).
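A Python sketch of MCP (naming mine; the tables are 1-indexed as in the slides, so row and column 0 are unused):

```python
def matrix_chain(p):
    # p[i-1] x p[i] is the dimension of A_i; returns the tables (m, s):
    # m[i][j] = min cost of A_i..A_j, s[i][j] = optimal split point k.
    n = len(p) - 1
    INF = float('inf')
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = INF
            for k in range(i, j):          # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

On the example below, with P = <3, 5, 3, 2, 4>, this yields m[1][4] = 84 with split s[1][4] = 3.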
35 Example. We wish to compute A_1 A_2 A_3 A_4 with P = <3, 5, 3, 2, 4>. Initially m[1, 1] = m[2, 2] = m[3, 3] = m[4, 4] = 0. [Table m[i, j] lost in transcription.]
36 l = 2: i = 1, j = 2, k = 1: m[1, 2] = m[1, 1] + m[2, 2] + 3·5·3 = 45 (A_1 A_2), s[1, 2] = 1. i = 2, j = 3, k = 2: m[2, 3] = m[2, 2] + m[3, 3] + 5·3·2 = 30 (A_2 A_3), s[2, 3] = 2. i = 3, j = 4, k = 3: m[3, 4] = m[3, 3] + m[4, 4] + 3·2·4 = 24 (A_3 A_4), s[3, 4] = 3.
37 l = 3, i = 1, j = 3: m[1, 3] = min{ (k = 1) m[1, 1] + m[2, 3] + 3·5·2 = 60, i.e. A_1 (A_2 A_3); (k = 2) m[1, 2] + m[3, 3] + 3·3·2 = 63, i.e. (A_1 A_2) A_3 } = 60, so s[1, 3] = 1. l = 3, i = 2, j = 4: m[2, 4] = min{ (k = 2) m[2, 2] + m[3, 4] + 5·3·4 = 84, i.e. A_2 (A_3 A_4); (k = 3) m[2, 3] + m[4, 4] + 5·2·4 = 70, i.e. (A_2 A_3) A_4 } = 70, so s[2, 4] = 3.
38 l = 4, i = 1, j = 4: m[1, 4] = min{ (k = 1) m[1, 1] + m[2, 4] + 3·5·4 = 130, i.e. A_1 (A_2 A_3 A_4); (k = 2) m[1, 2] + m[3, 4] + 3·3·4 = 105, i.e. (A_1 A_2)(A_3 A_4); (k = 3) m[1, 3] + m[4, 4] + 3·2·4 = 84, i.e. (A_1 A_2 A_3) A_4 } = 84, so s[1, 4] = 3.
39 3.- Constructing an optimal solution. We construct an optimal solution from the information in s[1..n, 1..n]: s[i, j] contains the k such that the optimal way to multiply splits A_i ... A_j = (A_i ... A_k)(A_{k+1} ... A_j). Moreover, s[i, s[i, j]] determines the split of A_{i..s[i,j]} and s[s[i, j]+1, j] determines the split of A_{s[i,j]+1..j}. Therefore A_{1..n} = A_{1..s[1,n]} A_{s[1,n]+1..n}.
Multiplication(A, s, i, j): if j > i then; X := Multiplication(A, s, i, s[i, j]); Y := Multiplication(A, s, s[i, j]+1, j); return X × Y; else return A_i; end if
For the example this gives (A_1 (A_2 A_3)) A_4.
40 Evolution. [Figure: evolution of a DNA sequence by edit operations: T A C A G T A C G → (mutation) T A C A C T A C G; → (deletion) T C A G A C G; → (insertion) A T C A G A C G.]
41 Sequence alignment problem. How do T A C A G T A C G and A T C A G A C G align?
42 Formalizing the problem. Longest common substring (substring = chain of characters without gaps): e.g. T C A T G T A G A C T A and T C A G A. Longest common subsequence (subsequence = ordered chain of characters, gaps allowed): same strings, different matching. Edit distance: convert one string into the other using a given set of operations: T A C A G T A C G and A T C A G A C G. [The highlighting of the matched characters was lost in transcription.]
43 String similarity problem: the Longest Common Subsequence. LCS. INPUT: sequences X = <x_1 ... x_m> and Y = <y_1 ... y_n>. QUESTION: compute the longest common subsequence. A sequence Z = <z_1 ... z_k> is a subsequence of X if there is a strictly increasing sequence of indices 1 ≤ i_1 < i_2 < ... < i_k ≤ m such that z_j = x_{i_j}. If Z is a subsequence of both X and Y, then Z is a common subsequence of X and Y. Given X = ATATAT, TTT is a subsequence of X.
44 Greedy approach. LCS: given sequences X = <x_1 ... x_m> and Y = <y_1 ... y_n>, compute the longest common subsequence. A greedy attempt, matching each symbol against its earliest remaining occurrence:
Greedy(X, Y): S := ∅; for i = 1 to m do; for j = i to n do; if x_i = y_j then S := S ∪ {x_i} end if; let y_l be such that l = min{a > j : x_i = y_a}; let x_k be such that k = min{a > i : x_a = y_j}; if such l, k exist and l < k then S := S ∪ {x_i}, i := i + 1, j := l + 1, else S := S ∪ {x_k}, i := k + 1, j := j + 1; if no such y_l, x_k exist then i := i + 1 and j := j + 1; end for; end for
45 Greedy approach (continued). The greedy approach does not work: for X = ATCAC and Y = CGCACACACT the greedy result is AT, but the optimal solution is ACAC.
46 Dynamic Programming approach. Characterization of the optimal solution and recurrence. Let X = <x_1 ... x_n> and Y = <y_1 ... y_m>. Let X[i] = <x_1 ... x_i> and Y[j] = <y_1 ... y_j>. Define c[i, j] = length of the LCS of X[i] and Y[j]. We want c[n, m], i.e. the length of the LCS of X and Y. What is a subproblem?
47 Characterization of the optimal solution and recurrence. A subproblem = something that goes part of the way in converting one string into the other. Given X = CGATC and Y = ATAC: c[5, 4] = c[4, 3] + 1 (the last characters match). For X = CGAT and Y = ATA, to find c[4, 3] take either the LCS of CGAT and AT or the LCS of CGA and ATA: c[4, 3] = max(c[3, 3], c[4, 2]). In general, given X and Y: c[i, j] = c[i-1, j-1] + 1 if x_i = y_j; c[i, j] = max(c[i, j-1], c[i-1, j]) otherwise.
48 c[i, j] = c[i-1, j-1] + 1 if x_i = y_j; c[i, j] = max(c[i, j-1], c[i-1, j]) otherwise. [Figure: tree of recursive calls from c[3, 2] down to c[0, 0], with repeated subproblems.]
49 The direct top-down implementation of the recurrence:
LCS(X, Y): if m = 0 or n = 0 then return 0; else if x_m = y_n then return 1 + LCS(x_1 ... x_{m-1}, y_1 ... y_{n-1}); else return max{LCS(x_1 ... x_{m-1}, y_1 ... y_n), LCS(x_1 ... x_m, y_1 ... y_{n-1})}; end if
The algorithm explores a tree of depth Θ(n + m), so the worst-case running time is exponential in n + m.
50 Bottom-up solution. Avoid the exponential running time by tabulating the subproblems and not repeating their computation. To memoize the values c[i, j] we use a table c[0..n, 0..m]. Starting from c[0, j] = 0 for 0 ≤ j ≤ m and c[i, 0] = 0 for 0 ≤ i ≤ n, fill in all c[i, j] row by row, left to right. [Figure: cell c[i, j] is computed from c[i-1, j-1], c[i-1, j] and c[i, j-1].] Use an extra field b[i, j] in each cell to record which neighbour the solution came from.
51 Bottom-up solution.
LCS(X, Y): for i = 1 to n do c[i, 0] := 0 end for; for j = 1 to m do c[0, j] := 0 end for; for i = 1 to n do; for j = 1 to m do; if x_i = y_j then c[i, j] := c[i-1, j-1] + 1, b[i, j] := "↖"; else if c[i-1, j] ≥ c[i, j-1] then c[i, j] := c[i-1, j], b[i, j] := "↑"; else c[i, j] := c[i, j-1], b[i, j] := "←"; end if; end for; end for
Time and space complexity T = O(nm).
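A Python sketch of the bottom-up LCS (naming mine). Instead of storing a separate b table, the trace-back re-derives each direction from the c table, which is equivalent:

```python
def lcs(X, Y):
    # Bottom-up LCS table, then trace back one longest common subsequence.
    n, m = len(X), len(Y)
    c = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Trace back from c[n][m], following the same tie-breaking as the table.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[n][m], ''.join(reversed(out))
```

On the example of the next slide, X = ATCTGAT and Y = TGCATA, the LCS has length 4.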
52 Example. X = ATCTGAT, Y = TGCATA; therefore n = 7, m = 6. [Table c lost in transcription.]
53 Construct the solution. Uses as input the tables c and b. The first call to the algorithm is con-LCS(c, n, m).
con-LCS(c, i, j): if i = 0 or j = 0 then STOP; else if b[i, j] = "↖" then con-LCS(c, i-1, j-1) and output x_i; else if b[i, j] = "↑" then con-LCS(c, i-1, j); else con-LCS(c, i, j-1); end if
The algorithm has time complexity O(n + m).
54 Edit Distance: a generalization of LCS. The edit distance between strings X = x_1 ... x_n and Y = y_1 ... y_m is defined as the minimum number of edit operations needed to transform X into Y, where the set of edit operations is: insert(X, i, a) = x_1 ... x_i a x_{i+1} ... x_n; delete(X, i) = x_1 ... x_{i-1} x_{i+1} ... x_n; modify(X, i, a) = x_1 ... x_{i-1} a x_{i+1} ... x_n. Example: X = aabab and Y = babb: aabab → insert(X, 0, b) → baabab → delete(X, 2) → babab → delete(X, 4) → babb (three operations).
55 A shorter distance. X = aabab, Y = babb: aabab → modify(X, 1, b) → babab → delete(X, 4) → babb (two operations). Use dynamic programming.
56 1.- Characterize the structure of an optimal solution and set the recurrence. Assume we want to find the edit distance from X = TATGCAAGTA to Y = CAGTAGTC. Consider the prefixes TATGCA and CAGT, and let E[6, 4] = edit distance between TATGCA and CAGT. Then: E[6, 4] ≤ E[5, 4] + 1 (delete the final A of TATGCA); E[6, 4] ≤ E[6, 5] + 1, where E[6, 5] compares TATGCA with CAGTA (add an A to CAGT); and E[6, 5] = E[5, 4] (both end in A: keep the last A and compare TATGC with CAGT).
57 To compute the edit distance from X = x_1 ... x_n to Y = y_1 ... y_m: let X[i] = x_1 ... x_i and Y[j] = y_1 ... y_j, and let E[i, j] = edit distance from X[i] to Y[j]. If x_i ≠ y_j, the last step transforming X[i] into Y[j] must be one of:
1. Insert y_j at the end of x: x → x_1 ... x_i y_j, then transform x_1 ... x_i into y_1 ... y_{j-1}: E[i, j] = E[i, j-1] + 1.
2. Delete x_i: x → x_1 ... x_{i-1}, then transform x_1 ... x_{i-1} into y_1 ... y_j: E[i, j] = E[i-1, j] + 1.
3. Change x_i into y_j: x → x_1 ... x_{i-1} y_j, then transform x_1 ... x_{i-1} into y_1 ... y_{j-1}: E[i, j] = E[i-1, j-1] + 1.
58 Moreover, E[0, j] = j (converting λ into Y[j]) and E[i, 0] = i (converting X[i] into λ). Therefore we have the recurrence E[i, j] = min{E[i-1, j] + 1, E[i, j-1] + 1, E[i-1, j-1] + d(x_i, y_j)}, where d(x_i, y_j) = 0 if x_i = y_j and 1 otherwise.
59 2.- Computing the optimal costs.
Edit(X = {x_1, ..., x_n}, Y = {y_1, ..., y_m}): for i = 0 to n do E[i, 0] := i end for; for j = 0 to m do E[0, j] := j end for; for i = 1 to n do; for j = 1 to m do; if x_i = y_j then d(x_i, y_j) := 0 else d(x_i, y_j) := 1 end if; E[i, j] := E[i, j-1] + 1; if E[i-1, j-1] + d(x_i, y_j) < E[i, j] then E[i, j] := E[i-1, j-1] + d(x_i, y_j), b[i, j] := "↖"; else if E[i-1, j] + 1 < E[i, j] then E[i, j] := E[i-1, j] + 1, b[i, j] := "↑"; else b[i, j] := "←"; end if; end for; end for
Time and space complexity T = O(nm). Example: X = aabab, Y = babb; therefore n = 5, m = 4. [Table E lost in transcription.]
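A Python sketch of the E-table computation (naming mine; it returns only the distance, not the edit script):

```python
def edit_distance(X, Y):
    # Edit (Levenshtein) distance via the E-table recurrence: O(n*m).
    n, m = len(X), len(Y)
    E = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        E[i][0] = i                          # X[i] -> empty string
    for j in range(m + 1):
        E[0][j] = j                          # empty string -> Y[j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 0 if X[i - 1] == Y[j - 1] else 1
            E[i][j] = min(E[i - 1][j] + 1,       # delete x_i
                          E[i][j - 1] + 1,       # insert y_j
                          E[i - 1][j - 1] + d)   # keep or modify x_i
    return E[n][m]
```

For the slide's example, edit_distance("aabab", "babb") = 2, matching the modify + delete transformation above.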
60 3.- Construct the solution. Uses as input the tables E and b. The first call to the algorithm is con-Edit(E, n, m).
con-Edit(E, i, j): if i = 0 or j = 0 then STOP; else if b[i, j] = "↖" then (if x_i ≠ y_j then modify(X, i, y_j)), con-Edit(E, i-1, j-1); else if b[i, j] = "↑" then delete(X, i), con-Edit(E, i-1, j); else insert(X, i, y_j), con-Edit(E, i, j-1); end if
This trace-back has time complexity O(n + m).
61 Sequence Alignment. Finding similarities between sequences is important in Bioinformatics. For example: locate similar subsequences in DNA; locate DNA sequences which may overlap. Similar sequences may have evolved from a common ancestor; evolution modified the sequences by mutations: replacements, deletions and insertions.
62 Sequence Alignment. Given two sequences over the same alphabet, G C G C A T G G A T T G A G C G A and T G C G C C A T T G A T G A C C A, an alignment writes them one above the other, possibly inserting gaps (-): -GCGC-ATGGATTGAGCGA over TGCGCCATTGAT-GACC-A. Alignments consist of: perfect matches, mismatches, and insertions/deletions. There are many alignments of two sequences. Which is better?
63 To compare alignments define a scoring function: s(i) = +1 if the i-th pair matches, -1 if the i-th pair mismatches, -2 if the i-th pair is an indel. Score of an alignment = Σ_i s(i). Better = maximum score over all alignments of the two sequences. Very similar to LCS, but with weights.
64 Typesetting justification. We have a text T = w_1, w_2, ..., w_n. Given a line width M, the goal is to place the line breaks so as to cause the minimum stretching (and present an aesthetic look).
65 Typesetting justification. Typesetting. INPUT: a text w_1, w_2, ..., w_n and a width M. QUESTION: compute the line breaks j_1, j_2, ..., j_l that minimize the cost function of stretching. The DP we explain here is very similar to the one the TeX/LaTeX software uses for typesetting justification.
66 Greedy approach. Fill each line as much as possible before going to the next line. [Example lost in transcription: two layouts of the words "abcd, abcd, abcd, abcd, abcd, i, ab, abracadabraarbadacabra", one broken greedily and one broken better.]
67 Greedy approach (continued). DP solution: if the average number of words per line is 10 and there are n words, the number of possible solutions is C(n, n/10) = 2^{Ω(n)}. Key: each line break splits the paragraph into two parts that can be optimized independently.
68 Characterize the optimal solution. We have l + 1 lines in a page; let j_1, j_2, ..., j_l be the line breaks, so the i-th line holds w_{j_{i-1}+1}, ..., w_{j_i}. Let E[i, j] be the extra space left when putting w_i, ..., w_j on one line: E[i, j] = M - (j - i) - Σ_{k=i}^{j} w_k. Let c[i, j] be the cost of putting w_i, ..., w_j on one line: c[i, j] = ∞ if E[i, j] < 0; c[i, j] = 0 if j = n and E[i, j] ≥ 0; c[i, j] = E[i, j]^3 otherwise. We want to minimize the sum of c over all lines. Consider an optimal arrangement of words 1, ..., j, where 1 ≤ j ≤ n, and suppose its last line goes from word i to word j. Then the preceding words 1 to i - 1 must be broken into lines in an optimal way.
69 Recursive definition. Let OC[j] be the cost of the optimal arrangement of words 1, ..., j. If we know that the last line of those words contains w_i, ..., w_j, we get the recurrence OC[j] = OC[i-1] + c[i, j]. So: OC[j] = 0 if j = 0; OC[j] = min_{1 ≤ i ≤ j} (OC[i-1] + c[i, j]) if j > 0. We apply DP to compute OC[n].
70 Algorithm to compute the solution. Define an n × n table for the empty space per line, E[i, j]; an n × n table for the cost of putting w_i to w_j on one line, c[i, j]; an array OC[0..n] of optimal costs; and an array p[1..n] of pointers indicating the optimal breaking of lines.
Justify({w_i}_{i=1}^{n}, n, M): for i = 1 to n do; E[i, i] := M - w_i; for j = i + 1 to n do E[i, j] := E[i, j-1] - w_j - 1 end for; end for; for i = 1 to n do; for j = i to n do; if E[i, j] < 0 then c[i, j] := ∞; else if j = n and E[i, j] ≥ 0 then c[i, j] := 0; else c[i, j] := E[i, j]^3; end if; end for; end for; OC[0] := 0; for j = 1 to n do; OC[j] := ∞; for i = 1 to j do; if OC[i-1] + c[i, j] < OC[j] then OC[j] := OC[i-1] + c[i, j], p[j] := i end if; end for; end for; return OC, p
Complexity O(n^2).
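A Python sketch of Justify (naming mine; it takes the word lengths directly and returns the optimal cost together with the 1-based index of the first word of each line). For brevity it recomputes the line sums inside `cost`, which makes it O(n^3); precomputing the E table as in the pseudocode above brings it back to O(n^2):

```python
def justify(word_lens, M):
    # Break words (given by their lengths) into lines of width M,
    # minimizing the sum of (extra space)^3 over all lines but the last.
    n = len(word_lens)
    INF = float('inf')

    def cost(i, j):
        # Cost of putting words i..j (1-based) on one line.
        extra = M - (j - i) - sum(word_lens[i - 1:j])
        if extra < 0:
            return INF                 # the words do not fit
        return 0 if j == n else extra ** 3

    OC = [0] + [INF] * n               # OC[j]: optimal cost for words 1..j
    p = [0] * (n + 1)                  # p[j]: first word of the last line
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            c = OC[i - 1] + cost(i, j)
            if c < OC[j]:
                OC[j], p[j] = c, i
    # Recover the first word of each line by following p backwards.
    starts, j = [], n
    while j > 0:
        starts.append(p[j])
        j = p[j] - 1
    return OC[n], starts[::-1]
```

For instance, three words of length 3 with M = 7 fit as one full line plus a free last line, at cost 0.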
71 Example. Consider w_1 = 2, w_2 = 4, w_3 = 5, w_4 = 1, w_5 = 4, w_6 = 1, w_7 = 2, w_8 = 4, w_9 = 3, w_10 = 5, w_11 = 2, w_12 = 4 with M = 20. [Table of extra space E[i, j] = M - (j - i) - Σ_{k=i}^{j} w_k lost in transcription.]
72 Example. [Table of costs c[i, j] lost in transcription.]
73 Optimal cost array, array of pointers, and typesetting. [The optimal cost array OC[j] and the pointer array p[j] were lost in transcription.] Following the pointers: p[12] = 10, p[p[12]] = 5, p[p[p[12]]] = 1, giving the lines w_1 ... w_4 / w_5 ... w_9 / w_10 ... w_12.
74 Dynamic Programming in Trees. Trees are nice graphs on which to bound the number of subproblems. Given a rooted tree T = (V, A) with |V| = n, recall that T has n subtrees, one rooted at each vertex. Therefore, for problems defined on trees it is easy to bound the number of subproblems. This allows us to use dynamic programming to give polynomial solutions to difficult graph problems when the input is a tree.
75 The Maximum Weight Independent Set (MWIS). INPUT: G = (V, E), together with a weight function w : V → R. QUESTION: find the maximum-weight S ⊆ V such that no two vertices in S are connected in G. For general G the problem is difficult: the case with all weights = 1 is already NP-complete. Here the instance for MWIS is a rooted tree T = (V, A); then every v ∈ V defines a subtree T_v. [Figure: example rooted tree with weighted vertices a, ..., o lost in transcription.]
76 Characterization of the optimal solution. Key observation: an independent set S in T cannot contain vertices which are parent and child. For v ∈ V, let T_v be the subtree rooted at v. Define M(v) = max{w(S_v) : S_v independent in T_v, v ∈ S_v} and M'(v) = max{w(S_v) : S_v independent in T_v, v ∉ S_v}: the maximum weights of independent sets in T_v that contain v and that do not contain v. Recursive definition of the optimal solution: for v ∈ V, with u ranging over the sons of v in T_v, M(v) = w(v) + Σ_u M'(u) and M'(v) = Σ_u max{M(u), M'(u)}. In the case v is a leaf, M(v) = w(v) and M'(v) = 0.
77 Algorithm to find MWIS on a rooted tree T of size n. Define two tables of size n to store the values of M and M'. The optimal solution is given by the following procedure, which traverses T first bottom-up and then top-down:
1. For every v ∈ V compute M(v) and M'(v): 1.1 if v is a leaf, M(v) := w(v) and M'(v) := 0; 1.2 otherwise, with u ranging over the sons of v, M(v) := w(v) + Σ_u M'(u) and M'(v) := Σ_u max{M(u), M'(u)}.
2. In a top-down manner: 2.1 S := ∅; 2.2 for the root r, if M(r) > M'(r) then S := S ∪ {r}; 2.3 for every u ≠ r with father v, if v ∉ S and M(u) > M'(u) then S := S ∪ {u}.
3. Return S.
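A Python sketch of the two passes (bottom-up values, top-down recovery); the encoding of the tree as a children dictionary and the function names are my own choices:

```python
def mwis_tree(children, w, root):
    # children: dict vertex -> list of children; w: dict vertex -> weight.
    # Returns (max weight, chosen set) of an independent set in the tree.
    M, M_out = {}, {}          # M: v in the set; M_out: v not in the set

    def compute(v):            # bottom-up pass (post-order)
        for u in children.get(v, []):
            compute(u)
        M[v] = w[v] + sum(M_out[u] for u in children.get(v, []))
        M_out[v] = sum(max(M[u], M_out[u]) for u in children.get(v, []))

    compute(root)

    S = set()
    def pick(v, parent_in):    # top-down recovery
        take = (not parent_in) and M[v] > M_out[v]
        if take:
            S.add(v)
        for u in children.get(v, []):
            pick(u, take)

    pick(root, False)
    return max(M[root], M_out[root]), S
```

On a path a-b-c with weights 1, 10, 1 the answer is {b} with weight 10, as expected.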
78 Bottom-up. [Computed values for the example tree, partially lost in transcription:] M(l) = 8, M(n) = 4, M(m) = 6, M(e) = 5, M(f) = 6, M(o) = 2, M(k) = 7; M(b) = 4, M'(b) = 11; M(h) = 8, M'(h) = 15; M(j) = 9, M'(j) = 2; M(d) = 10, M'(d) = 16; M(c) = 23, M'(c) = 20; M(a) = 6 + M'(b) + M'(c) + M'(d) = 53; M'(a) = M'(b) + M(c) + M'(d) = 50.
79 Top-down. Using the values computed bottom-up, the Maximum Weight Independent Set is S = {a, e, f, g, i, j, k, l, m, n}, with w(S) = M(a) = 53. [Figure lost in transcription.]
80 Complexity. Space complexity: 2n values (M and M' for each vertex). Time complexity: Step 1: to update M and M' we only need to look at the sons of the vertex under consideration; as we cover all n vertices, O(n). Step 2: from the root down to the leaves, for each vertex v compare M(v) with M'(v); if M(v) > M'(v) (and the father is not in S) include v in S, otherwise do not: Θ(n). As we have to consider every vertex at least once, we have a lower bound of Ω(n) on the time complexity; therefore the total number of steps is Θ(n).
More informationDynamic Programming. Data Structures and Algorithms Andrei Bulatov
Dynamic Programming Data Structures and Algorithms Andrei Bulatov Algorithms Dynamic Programming 18-2 Weighted Interval Scheduling Weighted interval scheduling problem. Instance A set of n jobs. Job j
More informationCopyright 2000, Kevin Wayne 1
/9/ lgorithmic Paradigms hapter Dynamic Programming reed. Build up a solution incrementally, myopically optimizing some local criterion. Divide-and-conquer. Break up a problem into two sub-problems, solve
More informationDAA Unit- II Greedy and Dynamic Programming. By Mrs. B.A. Khivsara Asst. Professor Department of Computer Engineering SNJB s KBJ COE, Chandwad
DAA Unit- II Greedy and Dynamic Programming By Mrs. B.A. Khivsara Asst. Professor Department of Computer Engineering SNJB s KBJ COE, Chandwad 1 Greedy Method 2 Greedy Method Greedy Principal: are typically
More information6. DYNAMIC PROGRAMMING I
6. DYNAMIC PRORAMMIN I weighted interval scheduling segmented least squares knapsack problem RNA secondary structure Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley http://www.cs.princeton.edu/~wayne/kleinberg-tardos
More informationCS60007 Algorithm Design and Analysis 2018 Assignment 1
CS60007 Algorithm Design and Analysis 2018 Assignment 1 Palash Dey and Swagato Sanyal Indian Institute of Technology, Kharagpur Please submit the solutions of the problems 6, 11, 12 and 13 (written in
More informationGraduate Algorithms CS F-09 Dynamic Programming
Graduate Algorithms CS673-216F-9 Dynamic Programming David Galles Department of Computer Science University of San Francisco 9-: Recursive Solutions Divide a problem into smaller subproblems Recursively
More informationChapter 6. Dynamic Programming. CS 350: Winter 2018
Chapter 6 Dynamic Programming CS 350: Winter 2018 1 Algorithmic Paradigms Greedy. Build up a solution incrementally, myopically optimizing some local criterion. Divide-and-conquer. Break up a problem into
More informationAlgorithm Design and Analysis
Algorithm Design and Analysis LECTURE 18 Dynamic Programming (Segmented LS recap) Longest Common Subsequence Adam Smith Segmented Least Squares Least squares. Foundational problem in statistic and numerical
More informationCS Lunch. Dynamic Programming Recipe. 2 Midterm 2! Slides17 - Segmented Least Squares.key - November 16, 2016
CS Lunch 1 Michelle Oraa Ali Tien Dao Vladislava Paskova Nan Zhuang Surabhi Sharma Wednesday, 12:15 PM Kendade 307 2 Midterm 2! Monday, November 21 In class Covers Greedy Algorithms Closed book Dynamic
More informationChapter 6. Weighted Interval Scheduling. Dynamic Programming. Algorithmic Paradigms. Dynamic Programming Applications
lgorithmic Paradigms hapter Dynamic Programming reedy. Build up a solution incrementally, myopically optimizing some local criterion. Divide-and-conquer. Break up a problem into sub-problems, solve each
More informationAlgorithms Design & Analysis. Dynamic Programming
Algorithms Design & Analysis Dynamic Programming Recap Divide-and-conquer design paradigm Today s topics Dynamic programming Design paradigm Assembly-line scheduling Matrix-chain multiplication Elements
More information6. DYNAMIC PROGRAMMING I
lgorithmic paradigms 6. DYNMI PRORMMIN I weighted interval scheduling segmented least squares knapsack problem RN secondary structure reedy. Build up a solution incrementally, myopically optimizing some
More informationAreas. ! Bioinformatics. ! Control theory. ! Information theory. ! Operations research. ! Computer science: theory, graphics, AI, systems,.
lgorithmic Paradigms hapter Dynamic Programming reed Build up a solution incrementally, myopically optimizing some local criterion Divide-and-conquer Break up a problem into two sub-problems, solve each
More informationLecture 13. More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees.
Lecture 13 More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees. Announcements HW5 due Friday! HW6 released Friday! Last time Not coding in an action
More informationWhat is Dynamic Programming
What is Dynamic Programming Like DaC, Greedy algorithms, Dynamic Programming is another useful method for designing efficient algorithms. Why the name? Eye of the Hurricane: An Autobiography - A quote
More informationLecture 2: Pairwise Alignment. CG Ron Shamir
Lecture 2: Pairwise Alignment 1 Main source 2 Why compare sequences? Human hexosaminidase A vs Mouse hexosaminidase A 3 www.mathworks.com/.../jan04/bio_genome.html Sequence Alignment עימוד רצפים The problem:
More informationCS Analysis of Recursive Algorithms and Brute Force
CS483-05 Analysis of Recursive Algorithms and Brute Force Instructor: Fei Li Room 443 ST II Office hours: Tue. & Thur. 4:30pm - 5:30pm or by appointments lifei@cs.gmu.edu with subject: CS483 http://www.cs.gmu.edu/
More informationCMPSCI 311: Introduction to Algorithms Second Midterm Exam
CMPSCI 311: Introduction to Algorithms Second Midterm Exam April 11, 2018. Name: ID: Instructions: Answer the questions directly on the exam pages. Show all your work for each question. Providing more
More informationLecture 13. More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees.
Lecture 13 More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees. Announcements HW5 due Friday! HW6 released Friday! Last time Not coding in an action
More information1 Assembly Line Scheduling in Manufacturing Sector
Indian Institute of Information Technology Design and Manufacturing, Kancheepuram, Chennai 600 27, India An Autonomous Institute under MHRD, Govt of India http://www.iiitdm.ac.in COM 209T Design and Analysis
More informationDynamic Programming (CLRS )
Dynamic Programming (CLRS.-.) Today we discuss a technique called dynamic programming. It is neither especially dynamic nor especially programming related. We will discuss dynamic programming by looking
More informationLecture 9. Greedy Algorithm
Lecture 9. Greedy Algorithm T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to Algorithms, 3rd Edition, MIT Press, 2009 Sungkyunkwan University Hyunseung Choo choo@skku.edu Copyright 2000-2018
More informationIntroduction. I Dynamic programming is a technique for solving optimization problems. I Key element: Decompose a problem into subproblems, solve them
ntroduction Computer Science & Engineering 423/823 Design and Analysis of Algorithms Lecture 03 Dynamic Programming (Chapter 15) Stephen Scott and Vinodchandran N. Variyam Dynamic programming is a technique
More informationGeneral Methods for Algorithm Design
General Methods for Algorithm Design 1. Dynamic Programming Multiplication of matrices Elements of the dynamic programming Optimal triangulation of polygons Longest common subsequence 2. Greedy Methods
More informationOutline DP paradigm Discrete optimisation Viterbi algorithm DP: 0 1 Knapsack. Dynamic Programming. Georgy Gimel farb
Outline DP paradigm Discrete optimisation Viterbi algorithm DP: Knapsack Dynamic Programming Georgy Gimel farb (with basic contributions by Michael J. Dinneen) COMPSCI 69 Computational Science / Outline
More informationLecture 7: Dynamic Programming I: Optimal BSTs
5-750: Graduate Algorithms February, 06 Lecture 7: Dynamic Programming I: Optimal BSTs Lecturer: David Witmer Scribes: Ellango Jothimurugesan, Ziqiang Feng Overview The basic idea of dynamic programming
More informationMore Dynamic Programming
CS 374: Algorithms & Models of Computation, Spring 2017 More Dynamic Programming Lecture 14 March 9, 2017 Chandra Chekuri (UIUC) CS374 1 Spring 2017 1 / 42 What is the running time of the following? Consider
More informationDynamic Programming: Shortest Paths and DFA to Reg Exps
CS 374: Algorithms & Models of Computation, Spring 207 Dynamic Programming: Shortest Paths and DFA to Reg Exps Lecture 8 March 28, 207 Chandra Chekuri (UIUC) CS374 Spring 207 / 56 Part I Shortest Paths
More informationDynamic Programming. Prof. S.J. Soni
Dynamic Programming Prof. S.J. Soni Idea is Very Simple.. Introduction void calculating the same thing twice, usually by keeping a table of known results that fills up as subinstances are solved. Dynamic
More informationMore Dynamic Programming
Algorithms & Models of Computation CS/ECE 374, Fall 2017 More Dynamic Programming Lecture 14 Tuesday, October 17, 2017 Sariel Har-Peled (UIUC) CS374 1 Fall 2017 1 / 48 What is the running time of the following?
More informationCS 580: Algorithm Design and Analysis
CS 58: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 28 Announcement: Homework 3 due February 5 th at :59PM Midterm Exam: Wed, Feb 2 (8PM-PM) @ MTHW 2 Recap: Dynamic Programming
More informationGreedy Algorithms My T. UF
Introduction to Algorithms Greedy Algorithms @ UF Overview A greedy algorithm always makes the choice that looks best at the moment Make a locally optimal choice in hope of getting a globally optimal solution
More informationDynamic Programming ACM Seminar in Algorithmics
Dynamic Programming ACM Seminar in Algorithmics Marko Genyk-Berezovskyj berezovs@fel.cvut.cz Tomáš Tunys tunystom@fel.cvut.cz CVUT FEL, K13133 February 27, 2013 Problem: Longest Increasing Subsequence
More informationDynamic Programming: Shortest Paths and DFA to Reg Exps
CS 374: Algorithms & Models of Computation, Fall 205 Dynamic Programming: Shortest Paths and DFA to Reg Exps Lecture 7 October 22, 205 Chandra & Manoj (UIUC) CS374 Fall 205 / 54 Part I Shortest Paths with
More informationToday s Outline. CS 362, Lecture 13. Matrix Chain Multiplication. Paranthesizing Matrices. Matrix Multiplication. Jared Saia University of New Mexico
Today s Outline CS 362, Lecture 13 Jared Saia University of New Mexico Matrix Multiplication 1 Matrix Chain Multiplication Paranthesizing Matrices Problem: We are given a sequence of n matrices, A 1, A
More informationQuiz 1 Solutions. Problem 2. Asymptotics & Recurrences [20 points] (3 parts)
Introduction to Algorithms October 13, 2010 Massachusetts Institute of Technology 6.006 Fall 2010 Professors Konstantinos Daskalakis and Patrick Jaillet Quiz 1 Solutions Quiz 1 Solutions Problem 1. We
More informationAlgorithms and Theory of Computation. Lecture 9: Dynamic Programming
Algorithms and Theory of Computation Lecture 9: Dynamic Programming Xiaohui Bei MAS 714 September 10, 2018 Nanyang Technological University MAS 714 September 10, 2018 1 / 21 Recursion in Algorithm Design
More informationDynamic Programming. Reading: CLRS Chapter 15 & Section CSE 6331: Algorithms Steve Lai
Dynamic Programming Reading: CLRS Chapter 5 & Section 25.2 CSE 633: Algorithms Steve Lai Optimization Problems Problems that can be solved by dynamic programming are typically optimization problems. Optimization
More informationCS Data Structures and Algorithm Analysis
CS 483 - Data Structures and Algorithm Analysis Lecture VII: Chapter 6, part 2 R. Paul Wiegand George Mason University, Department of Computer Science March 22, 2006 Outline 1 Balanced Trees 2 Heaps &
More informationCSE 202 Homework 4 Matthias Springer, A
CSE 202 Homework 4 Matthias Springer, A99500782 1 Problem 2 Basic Idea PERFECT ASSEMBLY N P: a permutation P of s i S is a certificate that can be checked in polynomial time by ensuring that P = S, and
More informationCS583 Lecture 11. Many slides here are based on D. Luebke slides. Review: Dynamic Programming
// CS8 Lecture Jana Kosecka Dynamic Programming Greedy Algorithms Many slides here are based on D. Luebke slides Review: Dynamic Programming A meta-technique, not an algorithm (like divide & conquer) Applicable
More informationPartha Sarathi Mandal
MA 252: Data Structures and Algorithms Lecture 32 http://www.iitg.ernet.in/psm/indexing_ma252/y12/index.html Partha Sarathi Mandal Dept. of Mathematics, IIT Guwahati The All-Pairs Shortest Paths Problem
More informationActivity selection. Goal: Select the largest possible set of nonoverlapping (mutually compatible) activities.
Greedy Algorithm 1 Introduction Similar to dynamic programming. Used for optimization problems. Not always yield an optimal solution. Make choice for the one looks best right now. Make a locally optimal
More informationINF4130: Dynamic Programming September 2, 2014 DRAFT version
INF4130: Dynamic Programming September 2, 2014 DRAFT version In the textbook: Ch. 9, and Section 20.5 Chapter 9 can also be found at the home page for INF4130 These slides were originally made by Petter
More informationModule 1: Analyzing the Efficiency of Algorithms
Module 1: Analyzing the Efficiency of Algorithms Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu What is an Algorithm?
More informationDynamic Programming. p. 1/43
Dynamic Programming Formalized by Richard Bellman Programming relates to planning/use of tables, rather than computer programming. Solve smaller problems first, record solutions in a table; use solutions
More informationACO Comprehensive Exam October 18 and 19, Analysis of Algorithms
Consider the following two graph problems: 1. Analysis of Algorithms Graph coloring: Given a graph G = (V,E) and an integer c 0, a c-coloring is a function f : V {1,,...,c} such that f(u) f(v) for all
More informationLimitations of Algorithm Power
Limitations of Algorithm Power Objectives We now move into the third and final major theme for this course. 1. Tools for analyzing algorithms. 2. Design strategies for designing algorithms. 3. Identifying
More informationQuestion Paper Code :
www.vidyarthiplus.com Reg. No. : B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2011. Time : Three hours Fourth Semester Computer Science and Engineering CS 2251 DESIGN AND ANALYSIS OF ALGORITHMS (Regulation
More informationGreedy Algorithms and Data Compression. Curs 2017
Greedy Algorithms and Data Compression. Curs 2017 Greedy Algorithms An optimization problem: Given of (S, f ), where S is a set of feasible elements and f : S R is the objective function, find a u S, which
More informationCMPSCI 611 Advanced Algorithms Midterm Exam Fall 2015
NAME: CMPSCI 611 Advanced Algorithms Midterm Exam Fall 015 A. McGregor 1 October 015 DIRECTIONS: Do not turn over the page until you are told to do so. This is a closed book exam. No communicating with
More informationAlgorithm Design CS 515 Fall 2015 Sample Final Exam Solutions
Algorithm Design CS 515 Fall 2015 Sample Final Exam Solutions Copyright c 2015 Andrew Klapper. All rights reserved. 1. For the functions satisfying the following three recurrences, determine which is the
More informationIntroduction to Bioinformatics
Introduction to Bioinformatics Lecture : p he biological problem p lobal alignment p Local alignment p Multiple alignment 6 Background: comparative genomics p Basic question in biology: what properties
More informationLecture 18 April 26, 2012
6.851: Advanced Data Structures Spring 2012 Prof. Erik Demaine Lecture 18 April 26, 2012 1 Overview In the last lecture we introduced the concept of implicit, succinct, and compact data structures, and
More informationAdvanced Analysis of Algorithms - Midterm (Solutions)
Advanced Analysis of Algorithms - Midterm (Solutions) K. Subramani LCSEE, West Virginia University, Morgantown, WV {ksmani@csee.wvu.edu} 1 Problems 1. Solve the following recurrence using substitution:
More informationCPSC 320 (Intermediate Algorithm Design and Analysis). Summer Instructor: Dr. Lior Malka Final Examination, July 24th, 2009
CPSC 320 (Intermediate Algorithm Design and Analysis). Summer 2009. Instructor: Dr. Lior Malka Final Examination, July 24th, 2009 Student ID: INSTRUCTIONS: There are 6 questions printed on pages 1 7. Exam
More informationFoundations of Natural Language Processing Lecture 6 Spelling correction, edit distance, and EM
Foundations of Natural Language Processing Lecture 6 Spelling correction, edit distance, and EM Alex Lascarides (Slides from Alex Lascarides and Sharon Goldwater) 2 February 2019 Alex Lascarides FNLP Lecture
More informationCS 6901 (Applied Algorithms) Lecture 3
CS 6901 (Applied Algorithms) Lecture 3 Antonina Kolokolova September 16, 2014 1 Representative problems: brief overview In this lecture we will look at several problems which, although look somewhat similar
More informationCS325: Analysis of Algorithms, Fall Midterm
CS325: Analysis of Algorithms, Fall 2017 Midterm I don t know policy: you may write I don t know and nothing else to answer a question and receive 25 percent of the total points for that problem whereas
More informationMA008/MIIZ01 Design and Analysis of Algorithms Lecture Notes 3
MA008 p.1/37 MA008/MIIZ01 Design and Analysis of Algorithms Lecture Notes 3 Dr. Markus Hagenbuchner markus@uow.edu.au. MA008 p.2/37 Exercise 1 (from LN 2) Asymptotic Notation When constants appear in exponents
More informationQuiz 1 Solutions. (a) f 1 (n) = 8 n, f 2 (n) = , f 3 (n) = ( 3) lg n. f 2 (n), f 1 (n), f 3 (n) Solution: (b)
Introduction to Algorithms October 14, 2009 Massachusetts Institute of Technology 6.006 Spring 2009 Professors Srini Devadas and Constantinos (Costis) Daskalakis Quiz 1 Solutions Quiz 1 Solutions Problem
More informationDynamic Programming: Matrix chain multiplication (CLRS 15.2)
Dynamic Programming: Matrix chain multiplication (CLRS.) The problem Given a sequence of matrices A, A, A,..., A n, find the best way (using the minimal number of multiplications) to compute their product.
More informationCS 470/570 Dynamic Programming. Format of Dynamic Programming algorithms:
CS 470/570 Dynamic Programming Format of Dynamic Programming algorithms: Use a recursive formula But recursion can be inefficient because it might repeat the same computations multiple times So use an
More informationIntroduction to Divide and Conquer
Introduction to Divide and Conquer Sorting with O(n log n) comparisons and integer multiplication faster than O(n 2 ) Periklis A. Papakonstantinou York University Consider a problem that admits a straightforward
More informationDynamic Programming. Reading: CLRS Chapter 15 & Section CSE 2331 Algorithms Steve Lai
Dynamic Programming Reading: CLRS Chapter 5 & Section 25.2 CSE 233 Algorithms Steve Lai Optimization Problems Problems that can be solved by dynamic programming are typically optimization problems. Optimization
More informationMaximum sum contiguous subsequence Longest common subsequence Matrix chain multiplication All pair shortest path Kna. Dynamic Programming
Dynamic Programming Arijit Bishnu arijit@isical.ac.in Indian Statistical Institute, India. August 31, 2015 Outline 1 Maximum sum contiguous subsequence 2 Longest common subsequence 3 Matrix chain multiplication
More informationCPSC 320 Sample Final Examination December 2013
CPSC 320 Sample Final Examination December 2013 [10] 1. Answer each of the following questions with true or false. Give a short justification for each of your answers. [5] a. 6 n O(5 n ) lim n + This is
More informationUniversity of New Mexico Department of Computer Science. Final Examination. CS 561 Data Structures and Algorithms Fall, 2013
University of New Mexico Department of Computer Science Final Examination CS 561 Data Structures and Algorithms Fall, 2013 Name: Email: This exam lasts 2 hours. It is closed book and closed notes wing
More informationThis means that we can assume each list ) is
This means that we can assume each list ) is of the form ),, ( )with < and Since the sizes of the items are integers, there are at most +1pairs in each list Furthermore, if we let = be the maximum possible
More informationGreedy vs Dynamic Programming Approach
Greedy vs Dynamic Programming Approach Outline Compare the methods Knapsack problem Greedy algorithms for 0/1 knapsack An approximation algorithm for 0/1 knapsack Optimal greedy algorithm for knapsack
More informationChapter 6. Dynamic Programming. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.
Chapter 6 Dynamic Programming Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. Algorithmic Paradigms Greed. Build up a solution incrementally, myopically optimizing some
More information1 Maintaining a Dictionary
15-451/651: Design & Analysis of Algorithms February 1, 2016 Lecture #7: Hashing last changed: January 29, 2016 Hashing is a great practical tool, with an interesting and subtle theory too. In addition
More informationCSCE 750 Final Exam Answer Key Wednesday December 7, 2005
CSCE 750 Final Exam Answer Key Wednesday December 7, 2005 Do all problems. Put your answers on blank paper or in a test booklet. There are 00 points total in the exam. You have 80 minutes. Please note
More informationAlgorithmic Approach to Counting of Certain Types m-ary Partitions
Algorithmic Approach to Counting of Certain Types m-ary Partitions Valentin P. Bakoev Abstract Partitions of integers of the type m n as a sum of powers of m (the so called m-ary partitions) and their
More informationDivide-and-conquer. Curs 2015
Divide-and-conquer Curs 2015 The divide-and-conquer strategy. 1. Break the problem into smaller subproblems, 2. recursively solve each problem, 3. appropriately combine their answers. Known Examples: Binary
More informationKnapsack and Scheduling Problems. The Greedy Method
The Greedy Method: Knapsack and Scheduling Problems The Greedy Method 1 Outline and Reading Task Scheduling Fractional Knapsack Problem The Greedy Method 2 Elements of Greedy Strategy An greedy algorithm
More informationChapter 8 Dynamic Programming
Chapter 8 Dynamic Programming Copyright 2007 Pearson Addison-Wesley. All rights reserved. Dynamic Programming Dynamic Programming is a general algorithm design technique for solving problems defined by
More informationA Simple Linear Space Algorithm for Computing a Longest Common Increasing Subsequence
A Simple Linear Space Algorithm for Computing a Longest Common Increasing Subsequence Danlin Cai, Daxin Zhu, Lei Wang, and Xiaodong Wang Abstract This paper presents a linear space algorithm for finding
More informationDetermining an Optimal Parenthesization of a Matrix Chain Product using Dynamic Programming
Determining an Optimal Parenthesization of a Matrix Chain Product using Dynamic Programming Vivian Brian Lobo 1, Flevina D souza 1, Pooja Gharat 1, Edwina Jacob 1, and Jeba Sangeetha Augestin 1 1 Department
More informationComputational Complexity and Intractability: An Introduction to the Theory of NP. Chapter 9
1 Computational Complexity and Intractability: An Introduction to the Theory of NP Chapter 9 2 Objectives Classify problems as tractable or intractable Define decision problems Define the class P Define
More information