Dynamic Programming Lecture 13 (5/31/2017)
- A Forest Thinning Example -

Projected yield (m3/ha) at age 20 as a function of the action taken at age 10:

Age 10             Volume    Residual   Ten-year   Volume
beginning volume   thinned   volume     growth     at age 20
260                0         260        390        650
260                50        210        325        535
260                100       160        250        410

Projected yield (m3/ha) at age 30 as a function of the beginning volume at age 20 and the action taken at age 20:

Age 20             Volume    Residual   Ten-year   Volume
beginning volume   thinned   volume     growth     at age 30
650                0         650        200        850
650                150       500        250        750
650                200       450        200        650
535                0         535        215        750
535                100       435        215        650
535                175       360        140        500
410                0         410        190        600
410                75        335        165        500
410                150       260        140        400

[Network diagram: Stage 1 links node 1 (260 m3/ha at age 10) to nodes 2-4 (650, 535 and 410 m3/ha at age 20) by thinnings of 0, 50 and 100; Stage 2 links nodes 2-4 to nodes 5-10 (850, 750, 650, 600, 500 and 400 m3/ha at age 30) by the thinnings tabulated above; Stage 3 is the final harvest, ending at node 11 (260 m3/ha at age 10).]

Source: Dykstra's Mathematical Programming for Natural Resource Management (1984)
Dynamic Programming Cont.

Stages, states and actions (decisions); backward and forward recursion.

x_i = decision variable: the immediate destination node at stage i;
f(s, x_i) = the maximum total volume harvested during all remaining stages, given that the stand has reached the state corresponding to node s at stage i and the immediate destination node is x_i;
x_i* = the value of x_i that maximizes f(s, x_i); and
f*(s) = the maximum value of f(s, x_i), i.e., f*(s) = f(s, x_i*).
Solving Dynamic Programs

Recursive relation at stage i (Bellman equation):

  f_i*(s) = max over x_i of [ H_{s,x_i} + f_{i+1}*(x_i) ],

where H_{s,x_i} is the volume harvested by thinning at the beginning of stage i in order to move the stand from state s at the beginning of stage i to state x_i at the end of stage i.
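Applied to the thinning example above, the backward recursion can be sketched in Python as follows. This is a sketch: the stage data are transcribed from the yield tables, and the stand at age 30 is assumed to be clear-cut, so the boundary condition is f_3*(s) = s.

```python
# Backward recursion for the thinning example.
# transitions[stage][state] -> list of (volume_thinned, next_state),
# transcribed from the yield tables above.
transitions = {
    1: {260: [(0, 650), (50, 535), (100, 410)]},
    2: {650: [(0, 850), (150, 750), (200, 650)],
        535: [(0, 750), (100, 650), (175, 500)],
        410: [(0, 600), (75, 500), (150, 400)]},
}

def solve():
    # f[state] = max volume harvested from this stage onward;
    # at age 30 the stand is clear-cut, so f_3*(s) = s.
    f = {s: s for opts in transitions[2].values() for _, s in opts}
    best = {}
    for stage in (2, 1):
        g = {}
        for s, options in transitions[stage].items():
            # Bellman step: thin volume now + best value of the successor
            g[s], best[(stage, s)] = max(
                (thin + f[nxt], (thin, nxt)) for thin, nxt in options)
        f = g
    return f[260], best

total, best = solve()
print(total)           # maximum total harvest (m3/ha)
print(best[(1, 260)])  # optimal age-10 decision: (volume thinned, next state)
```

The recursion confirms the classic result for this example: do not thin at age 10, thin 150 m3/ha at age 20, and clear-cut 750 m3/ha at age 30, for a total of 900 m3/ha.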
Dynamic Programming

Structural requirements of DP:
- The problem can be divided into stages.
- Each stage has a finite set of associated states (discrete-state DP).
- The impact of a policy decision that transforms a state in a given stage into another state in the subsequent stage is deterministic.
- Principle of optimality: given the current state of the system, the optimal policy for the remaining stages is independent of any prior policy adopted.
Examples of DP

The Floyd-Warshall Algorithm (used in the Bucket formulation of ARM):

Let c(i, j) denote the length (or weight) of the arc between nodes i and j; and
let s(i, j, k) denote the length (or total weight) of the shortest path between nodes i and j going through intermediate nodes 1, 2, ..., k only.

Then the following recursion gives the lengths of the shortest paths between all pairs of nodes in a graph:

  s(i, j, k) = c(i, j)                                            if k = 0
  s(i, j, k) = min{ s(i, j, k-1), s(i, k, k-1) + s(k, j, k-1) }   otherwise
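A minimal Python sketch of the recursion, run on a small hypothetical 4-node graph (the weights below are illustrative, not from the lecture). Updating dist in place over k realizes s(i, j, k) from s(i, j, k-1):

```python
# Floyd-Warshall all-pairs shortest paths on a hypothetical graph.
INF = float("inf")
# dist[i][j] starts as c(i, j): direct arc weights (INF = no arc).
dist = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
n = len(dist)

# s(i, j, k) = min( s(i, j, k-1), s(i, k, k-1) + s(k, j, k-1) )
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist)  # shortest-path lengths between all pairs of nodes
```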
Examples of DP cont.

Minimizing the risk of losing an endangered species (non-linear DP):

Probability of project failure by funding level (state):

Project (stage)   $0     $1M    $2M
1                 50%    30%    20%
2                 70%    50%    30%
3                 80%    50%    40%

[Network diagram: Stages 1-3, with states representing the budget remaining at each stage and arcs representing $1M funding increments allocated to each project.]

Source: Buongiorno and Gilless (2003) Decision Methods for Forest Resource Management
Minimizing the risk of losing an endangered species (DP example)

Stages (t = 1, 2, 3) represent the $ allocation decisions for Projects 1, 2 and 3;
States (i = 0, 1, 2) represent the budgets (in $M) available at each stage t;
Decisions (j = 0, 1, 2) represent the budgets available at stage t+1;
p_t(i, j) denotes the probability of failure of Project t if decision j is made (i.e., i - j is spent on Project t); and
V_t*(i) is the smallest probability of total failure from stage t onward, starting in state i and making the best decision j*.

Then the recursive relation can be stated as:

  V_t*(i) = min_j [ p_t(i, j) * V_{t+1}*(j) ]
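This recursion can be sketched in Python as follows, assuming (as the product form of the recursion implies) that the species is lost only if all three projects fail, and taking V_4*(j) = 1 as the boundary condition:

```python
# Backward recursion for the funding-allocation example.
# fail[t][dollars_spent] = P(Project t fails), from the table above.
fail = {1: [0.5, 0.3, 0.2], 2: [0.7, 0.5, 0.3], 3: [0.8, 0.5, 0.4]}

V = {j: 1.0 for j in range(3)}   # boundary condition: V_4*(j) = 1
policy = {}
for t in (3, 2, 1):
    Vnew = {}
    for i in range(3):           # i = budget (in $M) still available
        # decision j = budget left for stage t+1, so i - j is spent now
        Vnew[i], policy[(t, i)] = min(
            (fail[t][i - j] * V[j], j) for j in range(i + 1))
    V = Vnew

print(round(V[2], 3))            # minimum total risk with a $2M budget
```

With the full $2M budget, the optimal plan spends $1M on Project 1 and $1M on Project 2 (nothing on Project 3 is not optimal here; rather, $1M goes to Project 3 via j = 1 at stage 2), giving a minimum risk of 0.3 x 0.7 x 0.5 = 0.105.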
Markov Chains (based on Buongiorno and Gilless 2003)

Volume by stand state:

State   Volume (m3/ha)
L       <400
M       400-700
H       >700

20-yr transition probabilities w/o management:

Start          End state j
state i        L      M      H
L              40%    60%    0%
M              0%     30%    70%
H              5%     5%     90%

With transition probability matrix P and p_t = the probability distribution of stand states in period t:

  p_t = p_{t-1} P

The vector p_t converges to a vector of steady-state probabilities p*, and p* is independent of p_0!
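The convergence can be checked numerically; the sketch below iterates p_t = p_{t-1} P for the no-management matrix, starting from a stand that is surely in state L (plain Python lists, no libraries assumed):

```python
# Power iteration p_t = p_{t-1} P for the no-management chain.
# Rows are start states L, M, H; columns are end states L, M, H.
P = [[0.40, 0.60, 0.00],
     [0.00, 0.30, 0.70],
     [0.05, 0.05, 0.90]]

p = [1.0, 0.0, 0.0]               # start everything in state L
for _ in range(200):              # iterate until convergence
    p = [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]

print([round(x, 3) for x in p])   # steady-state probabilities p*
```

Starting from state M or H instead gives the same limit, illustrating that p* is independent of p_0.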
Markov Chains cont.

Berger-Parker Landscape Index:

  BP_t = (p_Lt + p_Mt + p_Ht) / max(p_Lt, p_Mt, p_Ht),

where p_it is the probability that the stand is in state i = L, M or H in period t.

Mean residence times:

  m_i = D / (1 - p_ii),

where D is the length of each period, and p_ii is the probability that a stand in state i at the beginning of a period stays there till the end of that period.

Mean recurrence times:

  m_i = D / p_i*,

where p_i* is the steady-state probability of state i.
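For the no-management matrix above with D = 20 years, these formulas give the following; the steady state p* = (7/103, 12/103, 84/103) is obtained by solving p = pP, not taken from the slide:

```python
# Mean residence and recurrence times for the no-management chain.
D = 20.0                                       # period length, years
p_stay = {"L": 0.40, "M": 0.30, "H": 0.90}     # diagonal of P (stay put)
p_star = {"L": 7 / 103, "M": 12 / 103, "H": 84 / 103}  # steady state

residence = {s: D / (1 - p_stay[s]) for s in p_stay}   # m_i = D/(1-p_ii)
recurrence = {s: D / p_star[s] for s in p_star}        # m_i = D/p_i*

print({s: round(residence[s], 1) for s in residence})
print({s: round(recurrence[s], 1) for s in recurrence})
```

A stand entering the high-volume state H stays there for 200 years on average, while a low-volume stand L takes almost 300 years, on average, to be revisited.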
Markov Chains cont.

Forest dynamics (expected revenues or biodiversity with vs. w/o management):

20-yr transition probabilities w/o management:

Start          End state j
state i        L      M      H
L              40%    60%    0%
M              0%     30%    70%
H              5%     5%     90%

20-yr transition probabilities w/ management:

Start          End state j
state i        L      M      H
L              40%    60%    0%
M              0%     30%    70%
H              40%    60%    0%

Expected long-term biodiversity:     B = p_L* B_L + p_M* B_M + p_H* B_H
Expected long-term periodic income:  R = p_L* R_L + p_M* R_M + p_H* R_H
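A sketch of the long-term income calculation for the managed chain. The transition matrix is from the table above, but the per-state incomes R_L, R_M, R_H are hypothetical placeholders, since the slide gives only the formula:

```python
# Expected long-term periodic income under the managed chain.
P = [[0.40, 0.60, 0.00],
     [0.00, 0.30, 0.70],
     [0.40, 0.60, 0.00]]
R = {"L": 0.0, "M": 300.0, "H": 1500.0}   # $/ha per period (assumed)

# Find the steady state by power iteration p_t = p_{t-1} P.
p = [1 / 3] * 3
for _ in range(200):
    p = [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]

# R = p_L* R_L + p_M* R_M + p_H* R_H
expected_income = sum(pi * r for pi, r in zip(p, R.values()))
print([round(x, 3) for x in p])
print(round(expected_income, 1))
```

Compared with the no-management steady state, management keeps far more of the landscape in states L and M, which is what changes the expected biodiversity and income.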
Markov Chains cont.

Present value of expected returns:

  V_{i,t+1} = R_i + (1 / (1 + r)^20) * (p_iL V_{L,t} + p_iM V_{M,t} + p_iH V_{H,t}),

where V_{i,t} is the present value of the expected return from a stand in state i = L, M or H managed with a specific harvest policy with t periods to go before the end of the planning horizon; R_i is the immediate return from managing a stand in state i with the given harvest policy; and p_iL, p_iM and p_iH are the probabilities that a stand in state i moves to state L, M or H, respectively.
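The recursion can be sketched as below; the with-management transition matrix is from the previous slide, while the immediate returns R_i and the discount rate r are hypothetical placeholders (the slide gives neither):

```python
# Backward recursion V_{i,t+1} = R_i + (1+r)^(-20) * sum_j p_ij V_{j,t}
# for a fixed harvest policy (here: the with-management chain).
P = {"L": {"L": 0.40, "M": 0.60, "H": 0.00},
     "M": {"L": 0.00, "M": 0.30, "H": 0.70},
     "H": {"L": 0.40, "M": 0.60, "H": 0.00}}
R = {"L": 0.0, "M": 0.0, "H": 1500.0}   # $/ha, immediate returns (assumed)
r = 0.03                                 # annual discount rate (assumed)
d = 1 / (1 + r) ** 20                    # 20-year discount factor

V = {s: 0.0 for s in P}                  # V_{i,0} = 0 at the horizon
for _ in range(50):                      # 50 periods to go
    V = {i: R[i] + d * sum(P[i][j] * V[j] for j in P) for i in P}

print({s: round(V[s]) for s in V})       # approximate infinite-horizon PVs
```

As expected, the present value is highest for a stand already in state H (it pays immediately) and lowest for a stand in state L.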
Markov Decision Processes

20-yr transition probabilities depending on the harvest decision:

No Cut:
Start          End state j
state i        L      M      H
L              40%    60%    0%
M              0%     30%    70%
H              5%     5%     90%

Cut:
Start          End state j
state i        L      M      H
L              40%    60%    0%
M              40%    60%    0%
H              40%    60%    0%

  V*_{i,t+1} = max_j [ R_ij + (1 / (1 + r)^20) * (p_iLj V*_{L,t} + p_iMj V*_{M,t} + p_iHj V*_{H,t}) ],

where V*_{i,t} is the highest present value of the expected return from a stand in state i = L, M or H with t periods to go before the end of the planning horizon; R_ij is the immediate return from managing a stand in state i with harvest policy j; and p_iLj, p_iMj and p_iHj are the probabilities that a stand in state i moves to state L, M or H, respectively, if the harvest policy is j.
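Value iteration for this MDP can be sketched as follows. The transition rows are from the tables above, but the immediate returns R_ij are hypothetical (cutting is assumed to pay the stand's harvest value, not cutting pays nothing), so the resulting policy is only illustrative:

```python
# Value iteration: V*_{i,t+1} = max_j [ R_ij + d * sum_k p_ik(j) V*_{k,t} ]
P = {"nocut": {"L": [0.40, 0.60, 0.00],
               "M": [0.00, 0.30, 0.70],
               "H": [0.05, 0.05, 0.90]},
     "cut":   {"L": [0.40, 0.60, 0.00],
               "M": [0.40, 0.60, 0.00],
               "H": [0.40, 0.60, 0.00]}}
R = {"nocut": {"L": 0.0, "M": 0.0, "H": 0.0},
     "cut":   {"L": 100.0, "M": 300.0, "H": 1800.0}}  # $/ha (assumed)
d = 1 / 1.03 ** 20            # 20-year discount factor at r = 3% (assumed)
states = ["L", "M", "H"]

V = {s: 0.0 for s in states}
for _ in range(100):          # iterate the Bellman operator to convergence
    V = {s: max(R[j][s] + d * sum(p * V[k] for p, k in zip(P[j][s], states))
                for j in P)
         for s in states}

# Recover the optimal decision in each state from the converged values.
policy = {s: max(P, key=lambda j: R[j][s] +
                 d * sum(p * V[k] for p, k in zip(P[j][s], states)))
          for s in states}
print(policy)
```

With these assumed returns, the optimal policy cuts low- and high-volume stands but lets medium-volume stands grow into H before harvesting; different R_ij would, of course, give a different policy.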