CS 470/570 Dynamic Programming. Format of Dynamic Programming algorithms:


1 CS 470/570 Dynamic Programming

Format of Dynamic Programming algorithms:
- Use a recursive formula.
- But recursion can be inefficient because it might repeat the same computations multiple times.
- So use an array to store and look up the subproblem results as needed.

2 Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

F(0) = 1
F(1) = 1
F(n) = F(n-2) + F(n-1), when n >= 2

int F (int n) {
    if (n==0 || n==1) return 1;
    else return F(n-2) + F(n-1);
}

The call tree for F(10) branches into F(8) and F(9); these branch into F(6), F(7) and F(7), F(8); the next level contains F(4), F(5), F(5), F(6), F(5), F(6), F(6), F(7); and so on. Recursion yields repeated subproblems, so it is not efficient (running time is exponential).

3 Dynamic programming: use a table (array) to remember the values that have previously been computed.

First method: Bottom-up (use a table instead of recursion)

allocate array A[0..n];
for (k=0; k<=n; k++)
    if (k==0 || k==1) A[k] = 1;
    else A[k] = A[k-2] + A[k-1];
return A[n];

Second method: Top-down (use both recursion and a table)

allocate array A[0..n];
for (k=0; k<=n; k++) A[k] = null;
return F(n);

int F (int n) {
    if (A[n] != null) return A[n];
    if (n==0 || n==1) A[n] = 1;
    else A[n] = F(n-2) + F(n-1);
    return A[n];
}

Running time = θ(n) for each method
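Both methods can be sketched in Python (function names are my own; the course's convention F(0) = F(1) = 1 is kept):

```python
def fib_bottom_up(n):
    # Bottom-up: fill table A[0..n] in index order, no recursion.
    A = [0] * (n + 1)
    for k in range(n + 1):
        A[k] = 1 if k <= 1 else A[k - 2] + A[k - 1]
    return A[n]

def fib_top_down(n, A=None):
    # Top-down: recurse, but store each result so it is computed only once.
    if A is None:
        A = {}
    if n not in A:
        A[n] = 1 if n <= 1 else fib_top_down(n - 2, A) + fib_top_down(n - 1, A)
    return A[n]
```

Either way, each of the n+1 subproblems is computed once, giving the θ(n) bound.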

4 Combinations: C(n, k) = n! / (k! (n-k)!)

Pascal's recursive formula for combinations:
C(n, 0) = 1
C(n, n) = 1
C(n, k) = C(n-1, k-1) + C(n-1, k), when 0 < k < n

int C (int n, int k) {
    if (k==0 || k==n) return 1;
    else return C(n-1, k-1) + C(n-1, k);
}

The call tree for C(8,4) branches into C(7,3) and C(7,4); these branch into C(6,2), C(6,3) and C(6,3), C(6,4), so C(6,3) already appears twice. Again, recursion yields repeated subproblems, so it is not efficient (running time is exponential).

5 Dynamic programming: Pascal's triangle. The subproblems below C(8,4) form a triangle:

C(8,4)
C(7,3) C(7,4)
C(6,2) C(6,3) C(6,4)
C(5,1) C(5,2) C(5,3) C(5,4)
C(4,0) C(4,1) C(4,2) C(4,3) C(4,4)
C(3,0) C(3,1) C(3,2) C(3,3)
C(2,0) C(2,1) C(2,2)
C(1,0) C(1,1)
C(0,0)

allocate array C[0..n][0..k];
for (m=0; m<=n; m++)
    for (j=0; j<=k; j++)
        if (j==0 || j==m) C[m][j] = 1;
        else C[m][j] = C[m-1][j-1] + C[m-1][j];
return C[n][k];

Running time = θ(nk) = O(n²) because k ≤ n
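A Python sketch of the same table (the name binomial is my own; cells with j > m are simply left 0, mirroring the empty cells of the triangle):

```python
def binomial(n, k):
    # C[m][j] = C(m, j), filled row by row as in Pascal's triangle.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for m in range(n + 1):
        for j in range(min(m, k) + 1):
            C[m][j] = 1 if (j == 0 or j == m) else C[m - 1][j - 1] + C[m - 1][j]
    return C[n][k]
```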

6 0-1 Knapsack problem:
Objects {1..n}
Profits P[1..n]
Weights W[1..n]
Maximum weight capacity M
0-1 constraint: Must take none (0) or all (1) of each object.

Goal: Determine the amounts X[1..n] for each object so that Σ_{1≤k≤n} X[k]*W[k] ≤ M, and Σ_{1≤k≤n} X[k]*P[k] is as large as possible. Each X[j] must be either 0 or 1.

7 Next we develop a dynamic programming algorithm for the 0-1 Knapsack problem.

Recursive function:
T(n, M) = max possible total profit such that we choose any subset of objects {1..n} and the maximum weight capacity is M
T(j, k) = max possible total profit such that we can only choose from objects {1..j} and the maximum weight capacity is k   (0 ≤ j ≤ n, 0 ≤ k ≤ M)

T(j, k) = 0                                          if j==0
T(j, k) = T(j-1, k)                                  if j>0 and k < W[j]
T(j, k) = max {T(j-1, k), T(j-1, k-W[j]) + P[j]}     if j>0 and k ≥ W[j]

int T (int j, int k) {
    if (j==0) return 0;
    if (k < W[j]) return T(j-1, k);
    return max {T(j-1, k), T(j-1, k-W[j]) + P[j]};
}

8 Dynamic programming algorithm:

allocate array T[0..n][0..M];
for j = 0 to n
    for k = 0 to M
        if (j==0) T[j][k] = 0;
        else if (k < W[j]) T[j][k] = T[j-1][k];
        else T[j][k] = max {T[j-1][k], T[j-1][k-W[j]] + P[j]};

Running time = θ(nM)

Example: n=4, M=8; the slide fills the table T[0..4][0..8] row by row for a given P[1..4] and W[1..4] (the numeric profits, weights, and table entries were lost in transcription; the resulting maximum profit T[4][8] is 31).
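A Python sketch of the table-filling step (the function name and the 1-indexing convention with a padding 0 entry are my own):

```python
def knapsack_table(P, W, M):
    # T[j][k] = max profit choosing from objects 1..j with capacity k.
    # P and W are 1-indexed: index 0 is an unused placeholder.
    n = len(P) - 1
    T = [[0] * (M + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for k in range(M + 1):
            if k < W[j]:
                T[j][k] = T[j - 1][k]
            else:
                T[j][k] = max(T[j - 1][k], T[j - 1][k - W[j]] + P[j])
    return T
```

The test data below is made up for illustration; it is not the slide's (lost) example.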

9 So the max total profit is 31, but which objects should we choose? We still need to assign each X[j] = either 0 or 1.

k = M;
for (j=n; j>0; j--)
    if (T[j-1][k] == T[j][k]) X[j] = 0;
    else { X[j] = 1; k -= W[j]; }

Takes θ(n) additional time.

(The slide traces this back through the worked table, with columns k=0..8 and rows j=0..4; the numeric entries were lost in transcription.) The optimal solution places objects 2 and 3 in the knapsack.
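The table build and the traceback together, as a Python sketch (names and the small test instance are my own, not the slide's lost example):

```python
def knapsack_solve(P, W, M):
    # Build the table, then trace back which objects were taken.
    n = len(P) - 1
    T = [[0] * (M + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for k in range(M + 1):
            if k < W[j]:
                T[j][k] = T[j - 1][k]
            else:
                T[j][k] = max(T[j - 1][k], T[j - 1][k - W[j]] + P[j])
    X = [0] * (n + 1)
    k = M
    for j in range(n, 0, -1):
        if T[j - 1][k] == T[j][k]:
            X[j] = 0          # object j was not needed for this profit
        else:
            X[j] = 1          # object j is in the knapsack
            k -= W[j]
    return T[n][M], X
```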

10 Longest Common Subsequence problem (LCS): Given two strings X[1..m] and Y[1..n].

Example:
X = bdacbea, m = |X| = 7
Y = dcbaecba, n = |Y| = 8

Goal: Find a string that is a subsequence of both X and Y, and that has the largest possible length.

Example: LCS(X, Y) = dacba (length = 5)

11 Recursive function for the length of the LCS of X and Y:
L(m, n) = length of the LCS of X[1..m] and Y[1..n]
L(j, k) = length of the LCS of X[1..j] and Y[1..k]   (0 ≤ j ≤ m, 0 ≤ k ≤ n)

L(j, k) = 0                              if j==0 or k==0
L(j, k) = L(j-1, k-1) + 1                if j>0, k>0, and X[j]==Y[k]
L(j, k) = max {L(j-1, k), L(j, k-1)}     if j>0, k>0, and X[j]≠Y[k]

int L (int j, int k) {
    if (j==0 || k==0) return 0;
    if (X[j]==Y[k]) return L(j-1, k-1) + 1;
    return max {L(j-1, k), L(j, k-1)};
}

12 If we implement this algorithm recursively, it will take exponential time due to repeated subproblems: L(m,n) branches into L(m-1,n) and L(m,n-1); these branch into L(m-2,n), L(m-1,n-1) and L(m-1,n-1), L(m,n-2), so L(m-1,n-1) already appears twice.

Dynamic programming algorithm:

allocate array L[0..m][0..n];
for j = 0 to m
    for k = 0 to n
        if (j==0 || k==0) L[j][k] = 0;
        else if (X[j]==Y[k]) L[j][k] = L[j-1][k-1] + 1;
        else L[j][k] = max {L[j-1][k], L[j][k-1]};

Running time = θ(mn)
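A Python sketch of the table computation (the function name is my own; Python strings are 0-indexed, so X[j] on the slide is X[j-1] here):

```python
def lcs_table(X, Y):
    # L[j][k] = length of the LCS of X[1..j] and Y[1..k].
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(1, m + 1):
        for k in range(1, n + 1):
            if X[j - 1] == Y[k - 1]:
                L[j][k] = L[j - 1][k - 1] + 1
            else:
                L[j][k] = max(L[j - 1][k], L[j][k - 1])
    return L
```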

13 Example:
X = bdacbea, m = |X| = 7
Y = dcbaecba, n = |Y| = 8

The table L[0..7][0..8] (rows labeled by the characters of X, columns by the characters of Y):

L        d  c  b  a  e  c  b  a
      0  0  0  0  0  0  0  0  0
  b   0  0  0  1  1  1  1  1  1
  d   0  1  1  1  1  1  1  1  1
  a   0  1  1  1  2  2  2  2  2
  c   0  1  2  2  2  2  3  3  3
  b   0  1  2  3  3  3  3  4  4
  e   0  1  2  3  3  4  4  4  4
  a   0  1  2  3  4  4  4  4  5

So the length of the LCS is 5. But how can we find an LCS that has length 5?

14 Call buildlcs(m, n):

string buildlcs (j, k) {
    if (L[j][k] == 0) return "";                                // empty string
    else if (X[j] == Y[k]) return buildlcs(j-1, k-1) + X[j];    // append char
    else if (L[j][k-1] == L[j][k]) return buildlcs(j, k-1);
    else /* L[j-1][k] == L[j][k] */ return buildlcs(j-1, k);
}

Takes θ(m+n) additional time. This assumes we implement "append char" in θ(1) time, which is possible here because we know the final string length L(m, n) in advance: we can allocate an array of L(m, n) chars in advance, or replace the recursion by iteration.
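A self-contained Python sketch of buildlcs (names are my own; the table build is repeated here so the example runs on its own):

```python
def lcs_table(X, Y):
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(1, m + 1):
        for k in range(1, n + 1):
            if X[j - 1] == Y[k - 1]:
                L[j][k] = L[j - 1][k - 1] + 1
            else:
                L[j][k] = max(L[j - 1][k], L[j][k - 1])
    return L

def build_lcs(L, X, Y, j, k):
    # Mirrors buildlcs: walk back from L[j][k] toward the all-zero border.
    if L[j][k] == 0:
        return ""                                             # empty string
    if X[j - 1] == Y[k - 1]:
        return build_lcs(L, X, Y, j - 1, k - 1) + X[j - 1]    # append char
    if L[j][k - 1] == L[j][k]:
        return build_lcs(L, X, Y, j, k - 1)
    return build_lcs(L, X, Y, j - 1, k)
```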

15 Tracing buildlcs(7, 8) through the L table from slide 13 (the slide shows the table twice with the traced paths highlighted; those copies were lost in transcription) gives:

LCS(X, Y) = dcbea

We can find another solution by exchanging the last two cases in buildlcs:

LCS(X, Y) = bacba

16 Matrix Chain Product (MCP): Multiply a chain of compatible matrices M_1 M_2 M_3 ... M_n. Each matrix M_j has D[j-1] rows and D[j] columns.

Example: M_1 M_2 M_3 where D[0..3] = {10, 20, 5, 30}, so M_1 is 10-by-20, M_2 is 20-by-5, and M_3 is 5-by-30.
(M_1 M_2) M_3 takes 10*20*5 + 10*5*30 = 2500 scalar products
M_1 (M_2 M_3) takes 20*5*30 + 10*20*30 = 9000 scalar products

Goal: compute M_1 M_2 M_3 ... M_n using the fewest scalar products.

Let Cost[i][j] = fewest scalar products to compute M_i ... M_j. Find the best parenthesization (M_i ... M_k)(M_{k+1} ... M_j):
Cost[i][j] = 0 when i==j
Cost[i][j] = min {Cost[i][k] + Cost[k+1][j] + D[i-1]*D[k]*D[j] : i ≤ k ≤ j-1} when i<j

17 Dynamic programming algorithm:

allocate arrays Cost[1..n][1..n] and Bestk[1..n][1..n];
for i = 1 to n
    Cost[i][i] = 0;
for L = 2 to n
    for i = 1 to n-L+1 {
        j = i+L-1;
        Cost[i][j] = ∞;
        for k = i to j-1 {
            t = Cost[i][k] + Cost[k+1][j] + D[i-1]*D[k]*D[j];
            if (t < Cost[i][j]) { Cost[i][j] = t; Bestk[i][j] = k; }
        }
    }

Running time = θ(n³)

Example: n=4 (the slide's D array and the worked Cost and Bestk tables, with rows i=1..4 and columns j=1..4, were lost in transcription).
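A Python sketch of the same algorithm (the function name is my own); the test uses the D = {10, 20, 5, 30} example from slide 16, where the best cost is 2500 via the split after M_2:

```python
import math

def matrix_chain(D):
    # D[0..n]: matrix M_i is D[i-1]-by-D[i].  Returns (Cost, Bestk).
    n = len(D) - 1
    Cost = [[0] * (n + 1) for _ in range(n + 1)]
    Bestk = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):                  # L = chain length
        for i in range(1, n - L + 2):
            j = i + L - 1
            Cost[i][j] = math.inf
            for k in range(i, j):
                t = Cost[i][k] + Cost[k + 1][j] + D[i - 1] * D[k] * D[j]
                if t < Cost[i][j]:
                    Cost[i][j] = t
                    Bestk[i][j] = k
    return Cost, Bestk
```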

18 Next, determine how to multiply M_1 M_2 M_3 ... M_n using only Cost[1][n] scalar products:

MatrixMult (A, B) {     // naïve algorithm
    allocate C[1..A.rows][1..B.columns];
    for i = 1 to A.rows
        for j = 1 to B.columns {
            C[i][j] = 0;
            for k = 1 to A.columns     // same as B.rows
                C[i][j] += A[i][k]*B[k][j];
        }
    return C;
}

ChainProduct (i, j) {
    if (i==j) return M_i;
    k = Bestk[i][j];
    A = ChainProduct(i, k);
    B = ChainProduct(k+1, j);
    return MatrixMult(A, B);
}

ChainProduct(1, n);
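A Python sketch of both routines (names, the tiny matrices, and the hand-filled Bestk table are my own illustrations; in practice Bestk comes from the previous slide's algorithm):

```python
def matrix_mult(A, B):
    # Naive multiply on lists of lists.
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]
    return C

def chain_product(mats, Bestk, i, j):
    # mats is 1-indexed (mats[0] unused); split at the recorded best k.
    if i == j:
        return mats[i]
    k = Bestk[i][j]
    return matrix_mult(chain_product(mats, Bestk, i, k),
                       chain_product(mats, Bestk, k + 1, j))

# Made-up 3-matrix chain with Bestk filled by hand for illustration.
Bestk = [[0] * 4 for _ in range(4)]
Bestk[1][2], Bestk[2][3], Bestk[1][3] = 1, 2, 2
mats = [None,
        [[1, 2], [3, 4]],    # M1 (2x2)
        [[1, 0], [0, 1]],    # M2 (2x2 identity)
        [[2], [1]]]          # M3 (2x1)
result = chain_product(mats, Bestk, 1, 3)    # M1 * M2 * M3
```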

19 Optimal Binary Search Tree (BST): Construct a BST with Key_1 < Key_2 < Key_3 < ... < Key_n, where the frequency of searching for each Key_i is F[i].

Example: Key[1..3] = {a, b, c} and F[1..3] = {0.3, 0.5, 0.2}. There are 5 possible BSTs. Expected cost of searching, by tree shape:

root a, right child b, b's right child c:   1*0.3 + 2*0.5 + 3*0.2 = 1.9
root a, right child c, c's left child b:    1*0.3 + 3*0.5 + 2*0.2 = 2.2
root b, left child a, right child c:        2*0.3 + 1*0.5 + 2*0.2 = 1.5
root c, left child a, a's right child b:    2*0.3 + 3*0.5 + 1*0.2 = 2.3

20 root c, left child b, b's left child a:   3*0.3 + 2*0.5 + 1*0.2 = 2.1

Goal: construct the BST with minimum expected search cost.

Let Cost[i][j] = min expected search cost for a BST holding Key[i..j]. Find the best choice Key[r] for the root node; the left subtree then holds Key[i..r-1] and the right subtree holds Key[r+1..j].

Cost[i][j] = 0 when i>j
Cost[i][j] = F[i] when i==j
Cost[i][j] = min {Cost[i][r-1] + Cost[r+1][j] + Σ_{i≤k≤j} F[k] : i ≤ r ≤ j} when i<j

For efficiency, define W[i][j] = Σ_{i≤k≤j} F[k]. Then:
Cost[i][j] = 0 when i>j
Cost[i][j] = min {Cost[i][r-1] + Cost[r+1][j] + W[i][j] : i ≤ r ≤ j} when i≤j

21 Dynamic programming algorithm:

allocate arrays Cost[1..n+1][0..n], W[1..n+1][0..n], and Root[1..n+1][0..n];
for i = 0 to n {
    Cost[i+1][i] = 0;
    W[i+1][i] = 0;
}
for L = 1 to n
    for i = 1 to n-L+1 {
        j = i+L-1;
        Cost[i][j] = ∞;
        W[i][j] = W[i][j-1] + F[j];
        for r = i to j {
            t = Cost[i][r-1] + Cost[r+1][j] + W[i][j];
            if (t < Cost[i][j]) { Cost[i][j] = t; Root[i][j] = r; }
        }
    }

Running time = θ(n³)

Example: n=3, F[1..3] = {0.3, 0.5, 0.2}:

Cost[1][1..3] = 0.3, 1.1, 1.5      Root[1][1..3] = 1, 2, 2
Cost[2][2..3] = 0.5, 0.9           Root[2][2..3] = 2, 2
Cost[3][3]    = 0.2                Root[3][3]    = 3
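A Python sketch of the same algorithm (the function name is my own; the arrays get one extra row so the empty-range boundaries Cost[i][i-1] = 0 and Cost[j+1][j] = 0 exist):

```python
import math

def optimal_bst(F):
    # F[1..n] frequencies (F[0] unused).  Returns (Cost, Root).
    n = len(F) - 1
    Cost = [[0.0] * (n + 1) for _ in range(n + 2)]
    W = [[0.0] * (n + 1) for _ in range(n + 2)]
    Root = [[0] * (n + 1) for _ in range(n + 2)]
    for L in range(1, n + 1):
        for i in range(1, n - L + 2):
            j = i + L - 1
            Cost[i][j] = math.inf
            W[i][j] = W[i][j - 1] + F[j]
            for r in range(i, j + 1):
                t = Cost[i][r - 1] + Cost[r + 1][j] + W[i][j]
                if t < Cost[i][j]:
                    Cost[i][j] = t
                    Root[i][j] = r
    return Cost, Root
```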

22 Next, build the BST that achieves the min expected search cost:

BuildBST (i, j) {
    if (i>j) return null;     // empty tree
    r = Root[i][j];
    left = BuildBST(i, r-1);
    right = BuildBST(r+1, j);
    return new Node(Key[r], left, right);
}

tree = BuildBST(1, n);
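A Python sketch (the Node class and the hand-filled Root table for the n=3 example, where b is the best root, are my own illustrations; in practice Root comes from the previous slide's algorithm):

```python
class Node:
    def __init__(self, key, left, right):
        self.key, self.left, self.right = key, left, right

def build_bst(Key, Root, i, j):
    if i > j:
        return None                    # empty tree
    r = Root[i][j]
    return Node(Key[r],
                build_bst(Key, Root, i, r - 1),
                build_bst(Key, Root, r + 1, j))

# Root table filled by hand for Key = {a, b, c}, F = {0.3, 0.5, 0.2}.
Root = [[0] * 4 for _ in range(5)]
Root[1][1], Root[2][2], Root[3][3] = 1, 2, 3
Root[1][2], Root[2][3], Root[1][3] = 2, 2, 2
Key = [None, 'a', 'b', 'c']
tree = build_bst(Key, Root, 1, 3)
```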

23 All-Pairs Shortest Paths: Given a weighted (undirected or directed) graph, find a minimum-distance path from each start vertex to each destination vertex. [Restriction: we allow edges with negative weights, but not any cycle with negative total weight.]

Example of why a negative-weight cycle is forbidden. What is the shortest distance from vertex 1 to vertex 5, when the direct route costs 3 + 4 and a cycle of weight 8 - 2 - 9 = -3 can be inserted in the middle?

3 + 4 = 7
3 + (8 - 2 - 9) + 4 = 4
3 + (8 - 2 - 9) + (8 - 2 - 9) + 4 = 1
3 + (8 - 2 - 9) + (8 - 2 - 9) + (8 - 2 - 9) + 4 = -2
3 + (8 - 2 - 9) + (8 - 2 - 9) + (8 - 2 - 9) + (8 - 2 - 9) + 4 = -5

It is always better to repeat the negative cycle again, so no shortest distance exists.

24 Example with no negative-weight cycle. Input: weighted adjacency matrix. Output: distance matrix. (The slide's numeric matrices were lost in transcription.)

25 Intuition for designing an algorithm to solve the APSP problem:

D[X,Y] = distance from X to Y = length of the shortest path from X to Y.
If going through some intermediate vertex Z is shorter, relax:

if (D[X,Z] + D[Z,Y] < D[X,Y]) D[X,Y] = D[X,Z] + D[Z,Y];

Assume the vertices are numbered 1..n. Define
D_Z[X,Y] = length of the shortest path from X to Y such that this path is allowed to pass through any or all of the intermediate vertices 1..Z, but it is not allowed to pass through any of Z+1..n.

Then such a path either skips Z (cost D_{Z-1}[X,Y]) or passes through Z once (cost D_{Z-1}[X,Z] + D_{Z-1}[Z,Y]):

D_Z[X,Y] = min (D_{Z-1}[X,Y], D_{Z-1}[X,Z] + D_{Z-1}[Z,Y])

26 Floyd's algorithm for the APSP problem:

// initialize D_0 = weighted adjacency matrix
for X = 1 to n
    for Y = 1 to n
        if (X==Y) D_0[X,Y] = 0;
        else if (edge (X,Y) exists) D_0[X,Y] = weight(X,Y);
        else D_0[X,Y] = ∞;

// compute each D_Z matrix from the values in the D_{Z-1} matrix
for Z = 1 to n
    for X = 1 to n
        for Y = 1 to n
            if (D_{Z-1}[X,Z] + D_{Z-1}[Z,Y] < D_{Z-1}[X,Y])
                D_Z[X,Y] = D_{Z-1}[X,Z] + D_{Z-1}[Z,Y];
            else D_Z[X,Y] = D_{Z-1}[X,Y];

Running time of Floyd's algorithm = θ(n³)
Memory usage of Floyd's algorithm = θ(n³): D_Z[X,Y] can be stored as a 3-dimensional array with elements D[X,Y,Z] for 1 ≤ X ≤ n, 1 ≤ Y ≤ n, 0 ≤ Z ≤ n.

27 Improve Floyd's algorithm to use θ(n²) space: discard the Z subscript and let D be a 2-dimensional array.

for X = 1 to n
    for Y = 1 to n
        if (X==Y) D[X,Y] = 0;
        else if (edge (X,Y) exists) D[X,Y] = weight(X,Y);
        else D[X,Y] = ∞;

for Z = 1 to n
    for X = 1 to n
        for Y = 1 to n
            if (D[X,Z] + D[Z,Y] < D[X,Y])
                D[X,Y] = D[X,Z] + D[Z,Y];
            // else D[X,Y] = D[X,Y];   // now we can omit this line

To determine whether the graph contains a negative cycle:

boolean hasNegativeCycle( ) {
    for X = 1 to n
        if (D[X,X] < 0) return true;
    return false;
}
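A Python sketch of the θ(n²)-space version (names and the edge-dictionary input format are my own):

```python
import math

def floyd(n, weight):
    # weight: dict mapping directed edge (X, Y) -> weight; vertices 1..n.
    # Single 2-D matrix updated in place: Theta(n^2) space, Theta(n^3) time.
    D = [[0 if X == Y else weight.get((X, Y), math.inf)
          for Y in range(n + 1)] for X in range(n + 1)]
    for Z in range(1, n + 1):
        for X in range(1, n + 1):
            for Y in range(1, n + 1):
                if D[X][Z] + D[Z][Y] < D[X][Y]:
                    D[X][Y] = D[X][Z] + D[Z][Y]
    return D

def has_negative_cycle(D, n):
    # A negative diagonal entry exposes a negative-weight cycle.
    return any(D[X][X] < 0 for X in range(1, n + 1))
```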

28 World Series problem: Teams A and B compete in a series of games. The winner is the first team to win n games. The series ends as soon as the winner is decided, so at most 2n-1 games are played.

For 1 ≤ k ≤ 2n-1, let p[k] = probability that team A wins the k-th game. (These are the input values.)

For 0 ≤ i ≤ n and 0 ≤ j ≤ n, let X(i, j) = probability that the series reaches a situation where team A has won exactly i games and team B has won exactly j games. (These are the output values.)

First we develop a recursive formula:
X(0, 0) = 1
X(n, n) = 0
X(i, 0) = X(i-1, 0)*p[i], for i>0
X(0, j) = X(0, j-1)*(1-p[j]), for j>0
X(n, j) = X(n-1, j)*p[n+j], for j<n
X(i, n) = X(i, n-1)*(1-p[i+n]), for i<n
X(i, j) = X(i-1, j)*p[i+j] + X(i, j-1)*(1-p[i+j]), for 0<i<n and 0<j<n

Next we can solve it using dynamic programming.

29 Dynamic programming algorithm:

allocate array X[0..n][0..n];
for (i=0; i<=n; i++)
    for (j=0; j<=n; j++)
        if (i==0 && j==0) X[i][j] = 1;
        else if (i==n && j==n) X[i][j] = 0;
        else if ((i>0 && j==0) || (i==n && j<n)) X[i][j] = X[i-1][j]*p[i+j];
        else if ((i==0 && j>0) || (i<n && j==n)) X[i][j] = X[i][j-1]*(1-p[i+j]);
        else X[i][j] = X[i-1][j]*p[i+j] + X[i][j-1]*(1-p[i+j]);

Running time = θ(n²)
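A Python sketch of the table (the function name is my own); the test uses a made-up fair-game instance with every p[k] = 0.5, where team A should win the series with probability exactly 1/2:

```python
def series_table(n, p):
    # p[1..2n-1] = probability that team A wins game k (p[0] unused).
    X = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i == 0 and j == 0:
                X[i][j] = 1.0
            elif i == n and j == n:
                X[i][j] = 0.0               # the series never reaches n-n
            elif (i > 0 and j == 0) or (i == n and j < n):
                X[i][j] = X[i - 1][j] * p[i + j]
            elif (i == 0 and j > 0) or (i < n and j == n):
                X[i][j] = X[i][j - 1] * (1 - p[i + j])
            else:
                X[i][j] = X[i - 1][j] * p[i + j] + X[i][j - 1] * (1 - p[i + j])
    return X
```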

30 Example: n=4, so 2n-1 = 7 games. (The slide's p array and the resulting X table, with rows i=0..4 and columns j=0..4, were lost in transcription.) To illustrate one entry, with p[5] = 0.7, X[2][2] = 0.38, and X[3][1] = 0.25:

X[3][2] = X[3-1][2]*p[3+2] + X[3][2-1]*(1-p[3+2])
        = X[2][2]*p[5] + X[3][1]*(1-p[5])
        = (.38)*(.7) + (.25)*(.3) = .341


More information

Even More on Dynamic Programming

Even More on Dynamic Programming Algorithms & Models of Computation CS/ECE 374, Fall 2017 Even More on Dynamic Programming Lecture 15 Thursday, October 19, 2017 Sariel Har-Peled (UIUC) CS374 1 Fall 2017 1 / 26 Part I Longest Common Subsequence

More information

Lecture 18: P & NP. Revised, May 1, CLRS, pp

Lecture 18: P & NP. Revised, May 1, CLRS, pp Lecture 18: P & NP Revised, May 1, 2003 CLRS, pp.966-982 The course so far: techniques for designing efficient algorithms, e.g., divide-and-conquer, dynamic-programming, greedy-algorithms. What happens

More information

New Bounds and Extended Relations Between Prefix Arrays, Border Arrays, Undirected Graphs, and Indeterminate Strings

New Bounds and Extended Relations Between Prefix Arrays, Border Arrays, Undirected Graphs, and Indeterminate Strings New Bounds and Extended Relations Between Prefix Arrays, Border Arrays, Undirected Graphs, and Indeterminate Strings F. Blanchet-Sadri Michelle Bodnar Benjamin De Winkle 3 November 6, 4 Abstract We extend

More information

With Question/Answer Animations

With Question/Answer Animations Chapter 5 With Question/Answer Animations Copyright McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior written consent of McGraw-Hill Education. Chapter Summary

More information

Searching. Constant time access. Hash function. Use an array? Better hash function? Hash function 4/18/2013. Chapter 9

Searching. Constant time access. Hash function. Use an array? Better hash function? Hash function 4/18/2013. Chapter 9 Constant time access Searching Chapter 9 Linear search Θ(n) OK Binary search Θ(log n) Better Can we achieve Θ(1) search time? CPTR 318 1 2 Use an array? Use random access on a key such as a string? Hash

More information

A design paradigm. Divide and conquer: (When) does decomposing a problem into smaller parts help? 09/09/ EECS 3101

A design paradigm. Divide and conquer: (When) does decomposing a problem into smaller parts help? 09/09/ EECS 3101 A design paradigm Divide and conquer: (When) does decomposing a problem into smaller parts help? 09/09/17 112 Multiplying complex numbers (from Jeff Edmonds slides) INPUT: Two pairs of integers, (a,b),

More information

Quiz 1 Solutions. Problem 2. Asymptotics & Recurrences [20 points] (3 parts)

Quiz 1 Solutions. Problem 2. Asymptotics & Recurrences [20 points] (3 parts) Introduction to Algorithms October 13, 2010 Massachusetts Institute of Technology 6.006 Fall 2010 Professors Konstantinos Daskalakis and Patrick Jaillet Quiz 1 Solutions Quiz 1 Solutions Problem 1. We

More information

Divide-and-Conquer Algorithms Part Two

Divide-and-Conquer Algorithms Part Two Divide-and-Conquer Algorithms Part Two Recap from Last Time Divide-and-Conquer Algorithms A divide-and-conquer algorithm is one that works as follows: (Divide) Split the input apart into multiple smaller

More information

Mathematical Background

Mathematical Background Chapter 1 Mathematical Background When we analyze various algorithms in terms of the time and the space it takes them to run, we often need to work with math. That is why we ask you to take MA 2250 Math

More information

Dijkstra s Algorithm. Goal: Find the shortest path from s to t

Dijkstra s Algorithm. Goal: Find the shortest path from s to t Dynamic Programming Dijkstra s Algorithm Goal: Find the shortest path from s to t s 4 18 0 0 11 1 4 1 7 44 t S = { } PQ = { s,,, 4,,, 7, t } 0 s 0 0 4 18 11 1 4 1 7 44 t S = { s } PQ = {,, 4,,, 7, t }

More information

Lecture 11. Single-Source Shortest Paths All-Pairs Shortest Paths

Lecture 11. Single-Source Shortest Paths All-Pairs Shortest Paths Lecture. Single-Source Shortest Paths All-Pairs Shortest Paths T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to, rd Edition, MIT Press, 009 Sungkyunkwan University Hyunseung Choo choo@skku.edu

More information

Aside: Golden Ratio. Golden Ratio: A universal law. Golden ratio φ = lim n = 1+ b n = a n 1. a n+1 = a n + b n, a n+b n a n

Aside: Golden Ratio. Golden Ratio: A universal law. Golden ratio φ = lim n = 1+ b n = a n 1. a n+1 = a n + b n, a n+b n a n Aside: Golden Ratio Golden Ratio: A universal law. Golden ratio φ = lim n a n+b n a n = 1+ 5 2 a n+1 = a n + b n, b n = a n 1 Ruta (UIUC) CS473 1 Spring 2018 1 / 41 CS 473: Algorithms, Spring 2018 Dynamic

More information

HW8. Due: November 1, 2018

HW8. Due: November 1, 2018 CSCI 1010 Theory of Computation HW8 Due: November 1, 2018 Attach a fully filled-in cover sheet to the front of your printed homework Your name should not appear anywhere; the cover sheet and each individual

More information

Lecture 4: An FPTAS for Knapsack, and K-Center

Lecture 4: An FPTAS for Knapsack, and K-Center Comp 260: Advanced Algorithms Tufts University, Spring 2016 Prof. Lenore Cowen Scribe: Eric Bailey Lecture 4: An FPTAS for Knapsack, and K-Center 1 Introduction Definition 1.0.1. The Knapsack problem (restated)

More information

All-Pairs Shortest Paths

All-Pairs Shortest Paths All-Pairs Shortest Paths Version of October 28, 2016 Version of October 28, 2016 All-Pairs Shortest Paths 1 / 26 Outline Another example of dynamic programming Will see two different dynamic programming

More information

CSE 421 Dynamic Programming

CSE 421 Dynamic Programming CSE Dynamic Programming Yin Tat Lee Weighted Interval Scheduling Interval Scheduling Job j starts at s(j) and finishes at f j and has weight w j Two jobs compatible if they don t overlap. Goal: find maximum

More information

CS 350 Algorithms and Complexity

CS 350 Algorithms and Complexity 1 CS 350 Algorithms and Complexity Fall 2015 Lecture 15: Limitations of Algorithmic Power Introduction to complexity theory Andrew P. Black Department of Computer Science Portland State University Lower

More information

Lecture 8: Column Generation

Lecture 8: Column Generation Lecture 8: Column Generation (3 units) Outline Cutting stock problem Classical IP formulation Set covering formulation Column generation A dual perspective Vehicle routing problem 1 / 33 Cutting stock

More information

CMPSCI 311: Introduction to Algorithms Second Midterm Exam

CMPSCI 311: Introduction to Algorithms Second Midterm Exam CMPSCI 311: Introduction to Algorithms Second Midterm Exam April 11, 2018. Name: ID: Instructions: Answer the questions directly on the exam pages. Show all your work for each question. Providing more

More information

NP-Completeness. Algorithmique Fall semester 2011/12

NP-Completeness. Algorithmique Fall semester 2011/12 NP-Completeness Algorithmique Fall semester 2011/12 1 What is an Algorithm? We haven t really answered this question in this course. Informally, an algorithm is a step-by-step procedure for computing a

More information

1 Review of Vertex Cover

1 Review of Vertex Cover CS266: Parameterized Algorithms and Complexity Stanford University Lecture 3 Tuesday, April 9 Scribe: Huacheng Yu Spring 2013 1 Review of Vertex Cover In the last lecture, we discussed FPT algorithms for

More information

Lecture 9. Greedy Algorithm

Lecture 9. Greedy Algorithm Lecture 9. Greedy Algorithm T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to Algorithms, 3rd Edition, MIT Press, 2009 Sungkyunkwan University Hyunseung Choo choo@skku.edu Copyright 2000-2018

More information

Dynamic Programming. Data Structures and Algorithms Andrei Bulatov

Dynamic Programming. Data Structures and Algorithms Andrei Bulatov Dynamic Programming Data Structures and Algorithms Andrei Bulatov Algorithms Dynamic Programming 18-2 Weighted Interval Scheduling Weighted interval scheduling problem. Instance A set of n jobs. Job j

More information

Easy Shortcut Definitions

Easy Shortcut Definitions This version Mon Dec 12 2016 Easy Shortcut Definitions If you read and understand only this section, you ll understand P and NP. A language L is in the class P if there is some constant k and some machine

More information

CSE 202 Homework 4 Matthias Springer, A

CSE 202 Homework 4 Matthias Springer, A CSE 202 Homework 4 Matthias Springer, A99500782 1 Problem 2 Basic Idea PERFECT ASSEMBLY N P: a permutation P of s i S is a certificate that can be checked in polynomial time by ensuring that P = S, and

More information

Advanced topic: Space complexity

Advanced topic: Space complexity Advanced topic: Space complexity CSCI 3130 Formal Languages and Automata Theory Siu On CHAN Chinese University of Hong Kong Fall 2016 1/28 Review: time complexity We have looked at how long it takes to

More information

University of New Mexico Department of Computer Science. Final Examination. CS 561 Data Structures and Algorithms Fall, 2006

University of New Mexico Department of Computer Science. Final Examination. CS 561 Data Structures and Algorithms Fall, 2006 University of New Mexico Department of Computer Science Final Examination CS 561 Data Structures and Algorithms Fall, 2006 Name: Email: Print your name and email, neatly in the space provided above; print

More information

Algorithmic Approach to Counting of Certain Types m-ary Partitions

Algorithmic Approach to Counting of Certain Types m-ary Partitions Algorithmic Approach to Counting of Certain Types m-ary Partitions Valentin P. Bakoev Abstract Partitions of integers of the type m n as a sum of powers of m (the so called m-ary partitions) and their

More information

Column Generation. i = 1,, 255;

Column Generation. i = 1,, 255; Column Generation The idea of the column generation can be motivated by the trim-loss problem: We receive an order to cut 50 pieces of.5-meter (pipe) segments, 250 pieces of 2-meter segments, and 200 pieces

More information

CS/COE

CS/COE CS/COE 1501 www.cs.pitt.edu/~nlf4/cs1501/ P vs NP But first, something completely different... Some computational problems are unsolvable No algorithm can be written that will always produce the correct

More information

Combinatorial optimization problems

Combinatorial optimization problems Combinatorial optimization problems Heuristic Algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Optimization In general an optimization problem can be formulated as:

More information

CSE 421 Weighted Interval Scheduling, Knapsack, RNA Secondary Structure

CSE 421 Weighted Interval Scheduling, Knapsack, RNA Secondary Structure CSE Weighted Interval Scheduling, Knapsack, RNA Secondary Structure Shayan Oveis haran Weighted Interval Scheduling Interval Scheduling Job j starts at s(j) and finishes at f j and has weight w j Two jobs

More information

Algorithm Efficiency. Algorithmic Thinking Luay Nakhleh Department of Computer Science Rice University

Algorithm Efficiency. Algorithmic Thinking Luay Nakhleh Department of Computer Science Rice University Algorithm Efficiency Algorithmic Thinking Luay Nakhleh Department of Computer Science Rice University 1 All Correct Algorithms Are Not Created Equal When presented with a set of correct algorithms for

More information

CS 470/570 Divide-and-Conquer. Format of Divide-and-Conquer algorithms: Master Recurrence Theorem (simpler version)

CS 470/570 Divide-and-Conquer. Format of Divide-and-Conquer algorithms: Master Recurrence Theorem (simpler version) CS 470/570 Divide-and-Conquer Format of Divide-and-Conquer algorithms: Divide: Split the array or list into smaller pieces Conquer: Solve the same problem recursively on smaller pieces Combine: Build the

More information

Lecture 13. More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees.

Lecture 13. More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees. Lecture 13 More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees. Announcements HW5 due Friday! HW6 released Friday! Last time Not coding in an action

More information

Unit 1A: Computational Complexity

Unit 1A: Computational Complexity Unit 1A: Computational Complexity Course contents: Computational complexity NP-completeness Algorithmic Paradigms Readings Chapters 3, 4, and 5 Unit 1A 1 O: Upper Bounding Function Def: f(n)= O(g(n)) if

More information

Algorithm Design CS 515 Fall 2015 Sample Final Exam Solutions

Algorithm Design CS 515 Fall 2015 Sample Final Exam Solutions Algorithm Design CS 515 Fall 2015 Sample Final Exam Solutions Copyright c 2015 Andrew Klapper. All rights reserved. 1. For the functions satisfying the following three recurrences, determine which is the

More information

SOLUTION: SOLUTION: SOLUTION:

SOLUTION: SOLUTION: SOLUTION: Convert R and S into nondeterministic finite automata N1 and N2. Given a string s, if we know the states N1 and N2 may reach when s[1...i] has been read, we are able to derive the states N1 and N2 may

More information