Dynamic Programming (Example: Fibonacci Sequence)


Preview: Fibonacci Sequence, Longest Common Subsequence.

Dynamic programming is a method for solving complex problems by breaking them down into simpler sub-problems. It is applicable to problems exhibiting the property of overlapping sub-problems that are only slightly smaller.

Dynamic programming: this method is applicable when the subproblems are not independent (i.e., when subproblems share sub-subproblems). A dynamic-programming method solves every sub-subproblem just once and then saves its answer in a table. The dynamic programming method is often applied to optimization problems. When developing a dynamic programming algorithm, we typically follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Example: Fibonacci Sequence

    int fib(int n) {
        if (n == 0 || n == 1) return 1;
        else return fib(n-1) + fib(n-2);
    }

Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times. In particular, fib(2) is calculated three times from scratch. In larger examples, even more values of fib, i.e., subproblems, are recalculated, resulting in an exponential-time algorithm.
1. fib(5)
2. fib(4) + fib(3)
3. (fib(3) + fib(2)) + (fib(2) + fib(1))
4. ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
5. (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

Example: Fibonacci Sequence (continued)

We can improve the recursive version of the Fibonacci sequence by using the dynamic programming idea. Suppose we have a simple map object, M, which maps each value of fib that has already been calculated to its result. The resulting function requires only O(n) time instead of exponential time (but requires O(n) space):

    map M (key, value)
    M(0) = 1; M(1) = 1
    int fib(int n) {
        if map M does not contain key n
            M(n) = fib(n-1) + fib(n-2);
        return M(n);
    }
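As a concrete, compilable illustration of the memoized version above, here is a minimal C sketch. Using a fixed-size array (with 0 as a "not yet computed" marker) in place of the map object M, and the MAXN bound, are assumptions made for this example, not part of the slide.

    #include <stdio.h>

    #define MAXN 90

    long long M[MAXN + 1];               /* M[n] == 0 means "not computed yet" */

    long long fib(int n) {
        if (n == 0 || n == 1) return 1;  /* same base case as the slide's version */
        if (M[n] == 0)                   /* each value is computed only once ...  */
            M[n] = fib(n - 1) + fib(n - 2);
        return M[n];                     /* ... and read from the table afterwards */
    }

    int main(void) {
        printf("fib(50) = %lld\n", fib(50));   /* O(n) calls instead of exponential */
        return 0;
    }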

Longest Common Subsequence

Given a sequence X = (x_1, x_2, ..., x_m), another sequence Z = (z_1, z_2, ..., z_k) is a subsequence of X if there exists a strictly increasing sequence (i_1, i_2, ..., i_k) of indices of X such that for all j = 1, 2, ..., k, we have x_{i_j} = z_j.
Ex) Z = (B, C, D, B) is a subsequence of X = (A, B, C, B, D, A, B) with corresponding index sequence (2, 3, 5, 7).

Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.
Ex) With X = (A, B, C, B, D, A, B) and Y = (B, D, C, A, B, A), common subsequences of X and Y include (A, B, A), (B, C, B), (B, C, A), (B, D, A, B), ...

Longest-common-subsequence problem (LCS)
Input: two sequences X = (x_1, x_2, ..., x_m) and Y = (y_1, y_2, ..., y_n).
Output: a maximum-length common subsequence of X and Y.

Definition) Given a sequence X = (x_1, x_2, ..., x_m), we define the i-th prefix of X, for i = 0, 1, ..., m, as X_i = (x_1, x_2, ..., x_i).
Ex) If X = (A, B, C, B, D, A, B), then X_4 = (A, B, C, B). X_0 is the empty sequence.

Theorem: optimal substructure of an LCS
Let X = (x_1, x_2, ..., x_m) and Y = (y_1, y_2, ..., y_n) be sequences, and let Z = (z_1, z_2, ..., z_k) be any LCS of X and Y.
1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If x_m ≠ y_n, then z_k ≠ x_m implies that Z is an LCS of X_{m-1} and Y.
3. If x_m ≠ y_n, then z_k ≠ y_n implies that Z is an LCS of X and Y_{n-1}.

From the theorem, we can divide the LCS calculation as follows:
1. When x_m = y_n, we must find an LCS of X_{m-1} and Y_{n-1}.
2. When x_m ≠ y_n, we need to solve two subproblems in order to find out which one is maximum:
   1. the LCS of X_{m-1} and Y,
   2. the LCS of X and Y_{n-1}.
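To see how the theorem drives a recursive computation, here is a small illustrative C sketch (not from the slides; names are illustrative) that returns the LCS length of the prefixes X_i and Y_j by following the three cases directly. Without a table it recomputes the same subproblems over and over and takes exponential time, which is exactly what the dynamic programming solution that follows avoids.

    #include <stdio.h>
    #include <string.h>

    /* LCS length of the prefixes X_i and Y_j, straight from the theorem's cases. */
    int lcs_len(const char *x, const char *y, int i, int j) {
        if (i == 0 || j == 0) return 0;            /* an empty prefix has LCS length 0 */
        if (x[i - 1] == y[j - 1])                  /* case 1: x_i == y_j               */
            return lcs_len(x, y, i - 1, j - 1) + 1;
        int drop_x = lcs_len(x, y, i - 1, j);      /* case 2: LCS of X_{i-1} and Y     */
        int drop_y = lcs_len(x, y, i, j - 1);      /* case 3: LCS of X and Y_{j-1}     */
        return drop_x > drop_y ? drop_x : drop_y;
    }

    int main(void) {
        const char *X = "ABCBDAB", *Y = "BDCABA";
        printf("%d\n", lcs_len(X, Y, (int)strlen(X), (int)strlen(Y)));   /* prints 4 */
        return 0;
    }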

A recursive solution to the LCS problem: let c[i, j] be the length of an LCS of the sequences X_i and Y_j. If either i = 0 or j = 0, the LCS has length 0. In general:

    c[i, j] = 0                              if i = 0 or j = 0
    c[i, j] = c[i-1, j-1] + 1                if i, j > 0 and x_i = y_j
    c[i, j] = max(c[i-1, j], c[i, j-1])      if i, j > 0 and x_i ≠ y_j

The following dynamic programming solution computes the tables bottom up and returns c and b:

    LCS_Length(X, Y) {
        m = length of X
        n = length of Y
        for i = 0 to m
            c[i, 0] = 0
        for j = 0 to n
            c[0, j] = 0
        for i = 1 to m {
            for j = 1 to n {
                if x_i == y_j {
                    c[i, j] = c[i-1, j-1] + 1
                    b[i, j] = "↖"                /* up and left */
                } else if c[i-1, j] >= c[i, j-1] {
                    c[i, j] = c[i-1, j]
                    b[i, j] = "↑"                /* up */
                } else {
                    c[i, j] = c[i, j-1]
                    b[i, j] = "←"                /* left */
                }
            }   /* end of inner for loop */
        }       /* end of outer for loop */
        return c and b
    }
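A hedged C rendering of LCS_Length follows. The character codes used for the b table ('d' for up-and-left, 'u' for up, 'l' for left), the fixed table size, and the example strings are assumptions of this sketch, not part of the pseudocode above.

    #include <stdio.h>
    #include <string.h>

    #define MAXN 100

    int  c[MAXN + 1][MAXN + 1];     /* c[i][j]: LCS length of X_i and Y_j        */
    char b[MAXN + 1][MAXN + 1];     /* b[i][j]: direction used to build c[i][j]  */

    void lcs_length(const char *X, const char *Y) {
        int m = (int)strlen(X), n = (int)strlen(Y);
        for (int i = 0; i <= m; i++) c[i][0] = 0;
        for (int j = 0; j <= n; j++) c[0][j] = 0;
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (X[i - 1] == Y[j - 1]) {
                    c[i][j] = c[i - 1][j - 1] + 1;  b[i][j] = 'd';   /* up and left */
                } else if (c[i - 1][j] >= c[i][j - 1]) {
                    c[i][j] = c[i - 1][j];          b[i][j] = 'u';   /* up */
                } else {
                    c[i][j] = c[i][j - 1];          b[i][j] = 'l';   /* left */
                }
            }
        }
    }

    int main(void) {
        const char *X = "ABCBDAB", *Y = "BDCABA";
        lcs_length(X, Y);
        printf("LCS length = %d\n", c[strlen(X)][strlen(Y)]);   /* prints 4 */
        return 0;
    }

Tracing b backwards from b[m][n], and recording a character whenever the entry says "up and left", recovers the subsequence itself.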

[Slides 4-7: the c and b tables for X = (A, B, C, B, D, A, B) and Y = (B, D, C, A, B, A) are filled in row by row; only the numeric entries survived transcription. The completed table has c[7, 6] = 4, so the LCS length is 4.]

The longest common subsequence (LCS) found by tracing back through table b is BCBA (length 4).

Matrix-Chain Multiplication

Given a sequence (chain) of n matrices <A1 A2 ... An> to be multiplied, we wish to compute the product A1 A2 A3 ... An. We can multiply two matrices A and B only if the number of columns of A is equal to the number of rows of B. For example, if A is an n × m matrix and B is an m × p matrix, then the resulting matrix C is an n × p matrix. The time to compute C is dominated by the number of scalar multiplications, which is n · m · p. We want to minimize the number of scalar multiplications!

Consider a matrix chain <A1 A2 A3> where A1: 10 × 100, A2: 100 × 5, A3: 5 × 50. We can multiply these matrices two different ways:
    ((A1 A2) A3): (10 · 100 · 5) + (10 · 5 · 50) = 7500
    (A1 (A2 A3)): (100 · 5 · 50) + (10 · 100 · 50) = 75000
Yikes!
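A quick way to sanity-check the two costs above is to evaluate them directly. This tiny C snippet is only a worked-arithmetic illustration, not part of the slides.

    #include <stdio.h>

    int main(void) {
        /* A1 is 10 x 100, A2 is 100 x 5, A3 is 5 x 50 */
        long cost1 = 10L * 100 * 5  + 10L * 5 * 50;     /* ((A1 A2) A3) =  7500 */
        long cost2 = 100L * 5 * 50  + 10L * 100 * 50;   /* (A1 (A2 A3)) = 75000 */
        printf("((A1 A2) A3): %ld\n(A1 (A2 A3)): %ld\n", cost1, cost2);
        return 0;
    }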

The matrix-chain multiplication problem: given a matrix chain <A1 A2 ... An> of n matrices, where for i = 1, 2, ..., n, matrix Ai has dimensions p_{i-1} × p_i, fully parenthesize the product A1 A2 A3 ... An in a way that minimizes the number of scalar multiplications.

Characterize the problem. Define m(i, j) = cost of computing Ai Ai+1 ... Aj. Since Ai = p_{i-1} × p_i and Ai+1 = p_i × p_{i+1}, we have m(i, i+1) = p_{i-1} · p_i · p_{i+1}. More generally, k is a divide point for optimization: splitting the chain as (Ai ... Ak)(Ak+1 ... Aj) costs m(i, k) + m(k+1, j) plus the cost of multiplying the two partial products, where Ai Ai+1 ... Ak is a p_{i-1} × p_k matrix and Ak+1 ... Aj is a p_k × p_j matrix.

We can define a recursive equation:

    m(i, j) = 0                                                                   if i = j
    m(i, j) = min over i <= k < j of { m(i, k) + m(k+1, j) + p_{i-1} p_k p_j }    if i < j

We need to investigate all k between i and j-1 to find the minimum.

Example: a chain of four matrices with A1: 3 × 8, A2: 8 × 2, A3: 2 × 5, A4: 5 × 4, i.e., p = (3, 8, 2, 5, 4).

Size 1: m(i, i) = 0 for all i.

Size 2:
    m(1, 2) = 3 · 8 · 2 = 48
    m(2, 3) = 8 · 2 · 5 = 80
    m(3, 4) = 2 · 5 · 4 = 40

Size 3:
    m(1, 3) = min over k = 1, 2 of { m(1, 1) + m(2, 3) + 3 · 8 · 5,  m(1, 2) + m(3, 3) + 3 · 2 · 5 } = min{200, 78}  = 78
    m(2, 4) = min over k = 2, 3 of { m(2, 2) + m(3, 4) + 8 · 2 · 4,  m(2, 3) + m(4, 4) + 8 · 5 · 4 } = min{104, 240} = 104

Size 4:
    m(1, 4) = min over k = 1, 2, 3 of { m(1, 1) + m(2, 4) + 3 · 8 · 4,  m(1, 2) + m(3, 4) + 3 · 2 · 4,  m(1, 3) + m(4, 4) + 3 · 5 · 4 } = min{200, 112, 138} = 112
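The recurrence can be evaluated directly to reproduce the table values above. The following C sketch (an illustration, not the slides' algorithm) recomputes m(i, j) top-down for p = (3, 8, 2, 5, 4) and prints the same numbers.

    #include <stdio.h>
    #include <limits.h>

    static const int p[] = {3, 8, 2, 5, 4};   /* A1: 3x8, A2: 8x2, A3: 2x5, A4: 5x4 */

    /* m(i, j): minimum number of scalar multiplications for Ai ... Aj (1-based). */
    int m(int i, int j) {
        if (i == j) return 0;
        int best = INT_MAX;
        for (int k = i; k < j; k++) {
            int q = m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j];
            if (q < best) best = q;
        }
        return best;
    }

    int main(void) {
        printf("m(1,2)=%d m(2,3)=%d m(3,4)=%d\n", m(1, 2), m(2, 3), m(3, 4));  /* 48 80 40 */
        printf("m(1,3)=%d m(2,4)=%d m(1,4)=%d\n", m(1, 3), m(2, 4), m(1, 4));  /* 78 104 112 */
        return 0;
    }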

    procedure MinMult(p)
    begin
        for i := 1 to (length of chain) do
            m(i, i) := 0;
        for d := 1 to (length of chain - 1) do
            for i := 1 to (length of chain - d) do
                j := i + d;
                m(i, j) := min over i <= k < j of { m(i, k) + m(k+1, j) + p_{i-1} p_k p_j }
    end of MinMult

Complexity: the entries are computed by diagonal (size):
    size 1: j = i           (n entries)
    size 2: j = i + 1       (n - 1 entries)
    size 3: j = i + 2       (n - 2 entries)
    ...
    size n: j = i + (n - 1) (1 entry)
The number of m(i, j)'s is O(n^2), and computing each m(i, j) takes O(n) time, so the total complexity is O(n^3).

A slightly modified version of the algorithm:

    MATRIX-CHAIN-ORDER(p)
        n = p.length - 1
        let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
        for i = 1 to n
            m[i, i] = 0
        for l = 2 to n                     // l is the chain length
            for i = 1 to n - l + 1
                j = i + l - 1
                m[i, j] = ∞                // infinity, or a really big number
                for k = i to j - 1
                    q = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
                    if q < m[i, j]
                        m[i, j] = q
                        s[i, j] = k
        return m and s

Although the solution (the minimum cost) has been determined, it is not really in an easy-to-interpret form. Table s can be used to accomplish this:

    PRINT-OPTIMAL-PARENS(s, i, j)
        if i == j
            print "A_i"
        else
            print "("
            PRINT-OPTIMAL-PARENS(s, i, s[i, j])
            PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
            print ")"
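For completeness, here is a hedged C translation of MATRIX-CHAIN-ORDER and PRINT-OPTIMAL-PARENS; the fixed table size NMAX and the reuse of the earlier example's dimension array p = (3, 8, 2, 5, 4) in main are assumptions of this sketch.

    #include <stdio.h>
    #include <limits.h>

    #define NMAX 10

    int m[NMAX + 1][NMAX + 1];   /* m[i][j]: minimum cost of computing Ai ... Aj */
    int s[NMAX + 1][NMAX + 1];   /* s[i][j]: split point k that achieves m[i][j] */

    void matrix_chain_order(const int *p, int n) {
        for (int i = 1; i <= n; i++) m[i][i] = 0;
        for (int l = 2; l <= n; l++) {                 /* l is the chain length */
            for (int i = 1; i <= n - l + 1; i++) {
                int j = i + l - 1;
                m[i][j] = INT_MAX;                     /* "a really big number" */
                for (int k = i; k < j; k++) {
                    int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                    if (q < m[i][j]) { m[i][j] = q; s[i][j] = k; }
                }
            }
        }
    }

    void print_optimal_parens(int i, int j) {
        if (i == j) {
            printf("A%d", i);
        } else {
            printf("(");
            print_optimal_parens(i, s[i][j]);
            print_optimal_parens(s[i][j] + 1, j);
            printf(")");
        }
    }

    int main(void) {
        int p[] = {3, 8, 2, 5, 4};                 /* the 4-matrix example above */
        int n = (int)(sizeof(p) / sizeof(p[0])) - 1;
        matrix_chain_order(p, n);
        printf("minimum cost: %d\n", m[1][n]);     /* 112 */
        print_optimal_parens(1, n);                /* ((A1A2)(A3A4)) */
        printf("\n");
        return 0;
    }

For this chain the optimal split is k = 2, so the parenthesization printed is ((A1 A2)(A3 A4)) with cost 112, matching the hand computation above.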