Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading
Dynamic Programming: The General Technique (§5.3.2), the 0-1 Knapsack Problem (§5.3.3), Matrix Chain-Products (§5.3.1).

Dynamic Programming revealed
Break the problem into subproblems that are shared and have subproblem optimality (an optimal subproblem solution helps solve the overall problem). Subproblem optimality means we can write a recursive relationship between subproblems. Defining the subproblems is the hardest part! Compute solutions to small subproblems, store the solutions in an array A, and combine already computed solutions into solutions for larger subproblems. The array A is filled iteratively. (Optional: reduce the space needed by reusing the array.)

Computing Fibonacci
Dynamic Programming is a general algorithm design paradigm: it iteratively solves small subproblems which are combined to solve the overall problem. The Fibonacci numbers are defined by F_0 = 0, F_1 = 1, and F_n = F_{n-1} + F_{n-2} for n > 1.

Recursive solution:
  int fib(int x)
    if (x = 0) return 0;
    if (x = 1) return 1;
    return fib(x-1) + fib(x-2);

Dynamic programming solution:
  f[0] ← 0; f[1] ← 1;
  for i ← 2 to x do
    f[i] ← f[i-1] + f[i-2];
  return f[x];

Reducing Space for Computing Fibonacci
Store only the previous two values to compute the next value:
  int fib(x)
    if (x = 0) return 0;
    if (x = 1) return 1;
    int last ← 1; nextlast ← 0;
    for i ← 2 to x do
      temp ← last + nextlast;
      nextlast ← last;
      last ← temp;
    return temp;

The General Dynamic Programming Technique
Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
- Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on.
- Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems.
- Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, should be constructed bottom-up).
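The pseudocode above translates directly into running code. The following is a minimal Python sketch of the three Fibonacci versions (plain recursion, the bottom-up table, and the constant-space variant); the function names fib_recursive, fib_dp, and fib_dp_small_space are made up for this illustration.

# Sketch of the three Fibonacci versions from the slides.

def fib_recursive(x):
    # Direct recursion: recomputes shared subproblems, exponential time.
    if x == 0:
        return 0
    if x == 1:
        return 1
    return fib_recursive(x - 1) + fib_recursive(x - 2)

def fib_dp(x):
    # Bottom-up DP: fill the array f[0..x] once, O(x) time and space.
    if x < 2:
        return x
    f = [0] * (x + 1)
    f[1] = 1
    for i in range(2, x + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[x]

def fib_dp_small_space(x):
    # Keep only the previous two values, O(1) extra space.
    if x < 2:
        return x
    nextlast, last = 0, 1
    for _ in range(2, x + 1):
        nextlast, last = last, last + nextlast
    return last

print(fib_dp(10), fib_dp_small_space(10))   # both print 55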

The 0/1 Knapsack Problem
Given: a set S of n items, with each item i having b_i, a positive benefit, and w_i, a positive weight. Goal: choose items with maximum total benefit but with total weight at most W. If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem. In this case, we let T denote the set of items we take.
  Objective: maximize Σ_{i∈T} b_i
  Constraint: Σ_{i∈T} w_i ≤ W

Example
  Item:     1     2     3     4     5
  Weight:   4 in  2 in  2 in  6 in  2 in
  Benefit:  $20   $3    $6    $25   $80
Knapsack weight limit: 9 in.
Solution: item 5 (2 in), item 3 (2 in), and item 1 (4 in), for a total weight of 8 in and a total benefit of $106.

A 0/1 Knapsack, First Attempt
S_k: set of items numbered 1 to k. Define B[k] = best selection from S_k. Problem: this does not have subproblem optimality. Consider S = {(3,2), (5,4), (8,5), (4,3), (10,9)} (benefit-weight pairs): as the slide's figure shows, the best selection from S_5 is not simply the best selection from S_4 with item 5 added or left out, so knowing B[4] alone does not determine B[5].

A 0/1 Knapsack, Second Attempt
S_k: set of items numbered 1 to k. Define B[k,w] = best selection from S_k with weight exactly equal to w. Good news: this does have subproblem optimality:
  B[k,w] = B[k-1,w]                                  if w_k > w
  B[k,w] = max{ B[k-1,w], B[k-1,w-w_k] + b_k }       otherwise
I.e., the best subset of S_k with weight exactly w is either the best subset of S_{k-1} with weight w, or the best subset of S_{k-1} with weight w - w_k plus the benefit of item k.

Towards the 0/1 Knapsack
S_k: set of items numbered 1 to k = {(b_1,w_1), (b_2,w_2), ..., (b_k,w_k)}. Define B[k,w] = maximum benefit of an optimal subset of S_k with total weight at most w. Recursive definition of B[k,w]:
  B[k,w] = 0                                         if k = 0
  B[k,w] = max{ B[k-1,w], B[k-1,w-w_k] + b_k }       otherwise
(taking only the first term when item k does not fit, i.e., when w_k > w).

Towards the 0/1 Knapsack
Recursive version of the algorithm, based on the recursive subproblem relationship. Not a dynamic programming version.
  rec01Knap(S_k, W):
    Input: set S_k of k items with benefits b_1, ..., b_k, weights w_1, ..., w_k, and maximum weight W
    Output: benefit of the best subset with weight at most W
    if k = 0 then return 0                             {S_k is the empty set}
    remove item k (benefit-weight pair (b_k, w_k)) from S_k
    if w_k > W then return rec01Knap(S_{k-1}, W)       {item k does not fit}
    return max(rec01Knap(S_{k-1}, W), rec01Knap(S_{k-1}, W - w_k) + b_k)
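To make the recursive formulation concrete, here is a rough Python rendering of rec01Knap, run on the $20/$3/$6/$25/$80 example above; the function name rec01knap and the list-of-pairs item representation are illustrative choices, not part of the slides.

# Plain recursive 0/1 knapsack: benefit of the best subset of the first k
# items with total weight at most W.  Items are (benefit, weight) pairs.

def rec01knap(items, k, W):
    if k == 0:
        return 0
    b_k, w_k = items[k - 1]
    if w_k > W:                                    # item k does not fit
        return rec01knap(items, k - 1, W)
    return max(rec01knap(items, k - 1, W),         # skip item k
               rec01knap(items, k - 1, W - w_k) + b_k)   # take item k

# Example from the slides: benefits $20, $3, $6, $25, $80; weights 4, 2, 2, 6, 2 in;
# knapsack weight limit 9 in.
items = [(20, 4), (3, 2), (6, 2), (25, 6), (80, 2)]
print(rec01knap(items, len(items), 9))   # 106: items 1, 3 and 5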

Towards the 0/1 Knapsack
Modified recursive version that stores subproblem solutions. First allocate a global array B of size (n+1) by (W+1), then initialize all entries B[k,w] to -1. B stores the results of recursive calls; entries of B are computed only when necessary. This is considered a dynamic programming version.
  rec01Knap(S_k, W):
    Input: set S_k of items with benefits b_1, ..., b_k, weights w_1, ..., w_k, and maximum weight W
    Output: benefit of the best subset with weight at most W
    if k = 0 then return 0
    remove item k (benefit-weight pair (b_k, w_k)) from S_k
    if B[k-1, W] = -1 then B[k-1, W] ← rec01Knap(S_{k-1}, W)
    if w_k > W then return B[k-1, W]
    if B[k-1, W - w_k] = -1 then B[k-1, W - w_k] ← rec01Knap(S_{k-1}, W - w_k)
    return max(B[k-1, W], B[k-1, W - w_k] + b_k)

The 0/1 Knapsack - Iterative
Recursive computation is not necessary: compute B iteratively, bottom-up. All of B[k-1, *] must be computed before B[k, *] because of the subproblem dependencies. This is also dynamic programming.
  01Knapsack(S, W):
    Input: set S of n items with benefit b_k and weight w_k; maximum weight W
    Output: benefit of the best subset with weight at most W
    for w ← 0 to W do
      B[0, w] ← 0                       {base case}
    for k ← 1 to n do
      for w ← 0 to W do
        if w_k ≤ w then
          B[k, w] ← max(B[k-1, w], B[k-1, w - w_k] + b_k)
        else
          B[k, w] ← B[k-1, w]

The 0/1 Knapsack - Iterative (reducing space)
It is not necessary to use all that space: keep track of one row at a time, overwriting results from the previous row as new values are computed. Each row must be computed right to left (w from W downto 1) so that every entry still reads results from the previous row (B[k-1, *]) rather than values already overwritten in the current row. Simplifying this gives the version in the book.
  01Knapsack(S, W):
    for w ← 0 to W do
      B[w] ← 0                          {base case}
    for k ← 1 to n do
      for w ← W downto 1 do
        if w_k ≤ w then
          B[w] ← max(B[w], B[w - w_k] + b_k)

The 0/1 Knapsack
Characterizing equation:
  B[k, w] = B[k-1, w]                                  if w_k > w
  B[k, w] = max{ B[k-1, w], B[k-1, w - w_k] + b_k }    otherwise
The book version: when a value does not change from one row to the next, there is no need to assign the same value again.
  01Knapsack(S, W):
    Input: set S of n items with benefit b_k and weight w_k; maximum weight W
    Output: benefit of the best subset with weight at most W
    for w ← 0 to W do
      B[w] ← 0
    for k ← 1 to n do
      for w ← W downto w_k do
        if B[w - w_k] + b_k > B[w] then
          B[w] ← B[w - w_k] + b_k
Running time: O(nW). This is not a polynomial-time algorithm if W is large; it is a pseudo-polynomial time algorithm.
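The following is a Python sketch of the two iterative versions, the full (n+1)-by-(W+1) table and the single-row "book" variant, run on the same (benefit, weight) example as before; the names knapsack_table and knapsack_one_row are invented for this sketch.

# Bottom-up 0/1 knapsack: full table and one-row versions.

def knapsack_table(items, W):
    # B[k][w] = best benefit using the first k items with weight limit w.
    n = len(items)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        b_k, w_k = items[k - 1]
        for w in range(W + 1):
            if w_k > w:
                B[k][w] = B[k - 1][w]
            else:
                B[k][w] = max(B[k - 1][w], B[k - 1][w - w_k] + b_k)
    return B[n][W]

def knapsack_one_row(items, W):
    # Same DP with a single row; w runs right to left so that B[w - w_k]
    # still holds the previous item's value when it is read.
    B = [0] * (W + 1)
    for b_k, w_k in items:
        for w in range(W, w_k - 1, -1):
            if B[w - w_k] + b_k > B[w]:
                B[w] = B[w - w_k] + b_k
    return B[W]

items = [(20, 4), (3, 2), (6, 2), (25, 6), (80, 2)]
print(knapsack_table(items, 9), knapsack_one_row(items, 9))   # 106 106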

The line-breaking problem
Given a sequence of words from one paragraph, return where the line breaks should occur so as to minimize the empty space on each line (except for the last line of the paragraph).

A simple version: letters and spaces have equal width. The input is a set of n word lengths w_1, w_2, ..., w_n (each length w_i includes one space), plus a line width limit L. Placing words i up to j on one line requires
  Σ_{k=i..j} w_k ≤ L,
and the penalty for the extra space X = L - Σ_{k=i..j} w_k on that line is X^3. Minimize the sum of the penalties over all lines (with no penalty for the last line).

Example problem
The paragraph is: "Those who cannot remember the past are condemned to repeat it." The word lengths are 6, 4, 7, 9, 4, 5, 4, 10, 3, 7, 4. Suppose the line width is L = 17. Find an optimal way of separating the words into lines that minimizes the total penalty.

linebreak DP
Let line[i] be the minimum total penalty for laying out words w[i], ..., w[n-1].
  for i ← n-1 downto 0 do
    if (w[i] + w[i+1] + ... + w[n-1] ≤ L)
      line[i] ← 0                       {remaining words form the last line: no penalty}
    else
      mincost ← Infinity
      j ← 1
      while (j words starting from w[i] fit on a line)      {meaning w[i] + w[i+1] + ... + w[i+j-1] ≤ L}
        linecost ← penalty from placing words w[i] to w[i+j-1] on one line
        totalcost ← linecost + line[i+j]
        mincost ← min(totalcost, mincost)                   {track the minimum so far}
        j ← j + 1
      line[i] ← mincost
The answer is line[0]. Running time: O(nL), where L is the maximum width; this is linear if L is considered constant. Space: O(n).
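Here is a rough Python version of the linebreak DP above, assuming the cubed leftover-space penalty X^3 described earlier; the function name linebreak_dp is illustrative, and it is run on the example paragraph's word lengths with L = 17.

# linebreak DP sketch: line[i] is the minimum total penalty for laying out
# words i..n-1; a line holding words i..i+j-1 with total length s <= L costs
# (L - s)**3, and the last line costs nothing.

def linebreak_dp(w, L):
    n = len(w)
    line = [0] * (n + 1)                  # line[n] = 0: nothing left to place
    for i in range(n - 1, -1, -1):
        if sum(w[i:]) <= L:
            line[i] = 0                   # remaining words fit on the last line
            continue
        mincost = float('inf')
        j = 1
        while i + j <= n and sum(w[i:i + j]) <= L:
            linecost = (L - sum(w[i:i + j])) ** 3
            mincost = min(mincost, linecost + line[i + j])
            j += 1
        line[i] = mincost
    return line[0]

# Word lengths (each including one trailing space) for "Those who cannot
# remember the past are condemned to repeat it." with line width L = 17.
lengths = [6, 4, 7, 9, 4, 5, 4, 10, 3, 7, 4]
print(linebreak_dp(lengths, 17))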

Matrix Chain-Products
Review of matrix multiplication: C = A*B, where A is d × e and B is e × f. Then
  C[i,j] = Σ_{k=0..e-1} A[i,k] * B[k,j],
which takes O(d·e·f) time (d·e·f multiplications), and C is d × f.

Matrix Chain-Product: compute A = A_0 * A_1 * ... * A_{n-1}, where A_i is d_i × d_{i+1}. Problem: how should we parenthesize the product to minimize the number of operations? Example: B is 3×100, C is 100×5, D is 5×5. (B*C)*D takes 1500 + 75 = 1575 ops, while B*(C*D) takes 1500 + 2500 = 4000 ops.

An Enumeration Approach
Matrix Chain-Product algorithm: try all possible ways to parenthesize A = A_0 * A_1 * ... * A_{n-1}, calculate the number of ops for each one, and pick the best. Running time: the number of parenthesizations equals the number of binary trees with n nodes. This is exponential! It is called the Catalan number, and it is almost 4^n. This is a terrible algorithm!

A Greedy Approach
Idea #1: repeatedly select the product that uses (up) the most operations. Counter-example: A is 10×5, B is 5×10, C is 10×5, D is 5×10. Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 1000 + 500 = 2000 ops, while A*((B*C)*D) takes 500 + 250 + 250 = 1000 ops.

Another Greedy Approach
Idea #2: repeatedly select the product that uses the fewest operations. Counter-example: A is 101×11, B is 11×9, C is 9×100, D is 100×99. Greedy idea #2 gives A*((B*C)*D), which takes 109989 + 9900 + 108900 = 228789 ops, while (A*B)*(C*D) takes 9999 + 89991 + 89100 = 189090 ops. The greedy approach does not give us the optimal value.

A Recursive Approach
Define subproblems: find the best parenthesization of A_i * A_{i+1} * ... * A_j, and let N_{i,j} denote the number of operations done by this subproblem. The optimal solution for the whole problem is N_{0,n-1}. Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems. There has to be a final multiplication (the root of the expression tree) for the optimal solution; say the final multiply is at index i: (A_0 * ... * A_i) * (A_{i+1} * ... * A_{n-1}). Then the optimal solution N_{0,n-1} is the sum of two optimal subproblems, N_{0,i} and N_{i+1,n-1}, plus the time for the last multiply. If the subproblems were not optimal, neither is the global solution.

A Characterizing Equation
Define the global optimum in terms of optimal subproblems by checking all possible locations for the final multiply. Recall that A_i is a d_i × d_{i+1} matrix. So a characterizing equation for N_{i,j} is:
  N_{i,j} = min_{i ≤ k < j} { N_{i,k} + N_{k+1,j} + d_i * d_{k+1} * d_{j+1} }
Note that the subproblems are not independent -- the subproblems overlap (are shared).

A Dynamic Programming Algorithm
Construct optimal subproblems bottom-up. The N_{i,i}'s are easy, so start with them; then do the length-2, length-3, ... subproblems, and so on. The array N stores the solutions. Running time: O(n^3).
  matrixChain(S):
    Input: sequence S of n matrices to be multiplied
    Output: number of operations in an optimal parenthesization of S
    for i ← 0 to n-1 do
      N_{i,i} ← 0
    for b ← 1 to n-1 do
      for i ← 0 to n-b-1 do
        j ← i + b
        N_{i,j} ← +infinity
        for k ← i to j-1 do
          N_{i,j} ← min{ N_{i,j}, N_{i,k} + N_{k+1,j} + d_i * d_{k+1} * d_{j+1} }
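A small Python sketch of matrixChain, written against the dimension-list convention above (matrix A_i is d[i] × d[i+1], so n matrices give a list of n+1 numbers); the name matrix_chain is invented for this illustration.

# Bottom-up matrix chain-product DP, O(n^3) time.

def matrix_chain(d):
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]            # N[i][i] = 0: a single matrix
    for b in range(1, n):                      # b = j - i, the subchain span
        for i in range(0, n - b):
            j = i + b
            N[i][j] = float('inf')
            for k in range(i, j):              # try every final multiply
                cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                N[i][j] = min(N[i][j], cost)
    return N[0][n - 1]

# The example from the slides: B is 3x100, C is 100x5, D is 5x5.
print(matrix_chain([3, 100, 5, 5]))   # 1575, i.e. (B*C)*D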

A Dynamic Programming Visualization
The bottom-up construction fills in the N array by diagonals. N_{i,j} gets its value from previous entries in the i-th row and the j-th column:
  N_{i,j} = min_{i ≤ k < j} { N_{i,k} + N_{k+1,j} + d_i * d_{k+1} * d_{j+1} }
Filling in each entry of the N table takes O(n) time, so the total running time is O(n^3). The answer ends up in the corner entry N_{0,n-1}. The actual parenthesization can be recovered by remembering, for each entry of N, the index k that achieved the minimum.
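As a follow-up to the remark about recovering the parenthesization, the sketch below stores the minimizing index k in a second table and rebuilds the expression from it; the names matrix_chain_with_splits and parenthesize are invented for this illustration.

# Matrix chain DP that also records the split index achieving each minimum,
# then reconstructs the optimal parenthesization from those splits.

def matrix_chain_with_splits(d):
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for b in range(1, n):
        for i in range(0, n - b):
            j = i + b
            N[i][j] = float('inf')
            for k in range(i, j):
                cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                if cost < N[i][j]:
                    N[i][j], split[i][j] = cost, k
    return N, split

def parenthesize(split, i, j):
    # Rebuild the expression for A_i * ... * A_j from the recorded splits.
    if i == j:
        return "A%d" % i
    k = split[i][j]
    return "(%s*%s)" % (parenthesize(split, i, k), parenthesize(split, k + 1, j))

N, split = matrix_chain_with_splits([3, 100, 5, 5])
print(N[0][2], parenthesize(split, 0, 2))   # 1575 ((A0*A1)*A2)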