LONG-TERM OPTIMAL REAL INVESTMENT STRATEGIES IN THE PRESENCE OF ADJUSTMENT COSTS

ARNE LØKKA AND MIHAIL ZERVOS

Abstract. We consider the problem of determining in a dynamical way the optimal capacity level of an investment project that operates within a random economic environment. In particular, we consider an investment project that yields a payoff at a rate that depends on its installed capacity level and on a random economic indicator such as, for instance, the price of the project's output commodity. We model the economic indicator by a one-dimensional ergodic Itô diffusion. At any time, the capacity level can be increased or decreased at given proportional costs. The aim is to maximise the long-term average profit, which takes the form of an ergodic optimisation criterion. The associated Hamilton-Jacobi-Bellman equation is a two-dimensional differential equation that we solve explicitly. We completely characterise an optimal strategy.

1. Introduction

We consider an investment project that operates within a random environment and yields a payoff rate that depends on the installed capacity and a stochastic economic indicator such as, for instance, the price of the project's output commodity. The capacity can be increased or decreased dynamically over time at proportional costs. We model the economic indicator as a one-dimensional ergodic Itô diffusion, and address the problem of determining the strategy that maximises the long-term average payoff of the project. In mathematical terms, this formulation takes the form of an optimal control problem with an expected ergodic criterion. We solve the corresponding Hamilton-Jacobi-Bellman equation, and completely characterise the corresponding optimal strategy.

Irreversible capacity models have attracted considerable interest in the literature.
See, for instance, Davis, Dempster, Sethi and Vermes [9], Davis [8], Kobila [17], Øksendal [23], Wang [27], Chiarolla and Haussmann [7], Bank [2] and the references therein. Abel and Eberly [1], and Merhi and Zervos [22], consider a model involving both expansion and reduction of a project's capacity level. Guo and Pham [12] consider a partially reversible investment model with entry and exit decisions and a general running payoff function. However, as far as we know, all of the aforementioned references aim at maximising the discounted running payoff of the project. We address the problem of maximising the long-term average payoff rate of the project. This yields a singular stochastic control problem with an ergodic criterion. Jack and Zervos [13], [14] consider an impulse and an absolutely continuous control problem with an ergodic criterion. Bronstein and Zervos [6] consider a sequential entry and exit decision problem with an ergodic criterion.

Research supported by EPSRC grant no. GR/S22998/.

For the more general

theory of optimal ergodic control, the reader is referred to Kushner [20], Karatzas [15], Gatarek and Stettner [11], Borkar and Ghosh [5], Bensoussan and Frehse [3], Menaldi, Robin and Taksar [21], Duncan, Maslowski and Pasik-Duncan [10], Kurtz and Stockbridge [19], Borkar [4], Kruk [18], Sadowy and Stettner [26], and the references therein.

The paper is organised as follows. In Section 2 we formulate the problem and the various technical assumptions. In Section 3 we derive the solution to the Hamilton-Jacobi-Bellman equation. In Section 4 we specify the strategy corresponding to the solution of the Hamilton-Jacobi-Bellman equation, and prove that this provides an optimal strategy. Further, we show that the value function corresponding to the problem is constant, and analyse the non-uniqueness of optimal strategies.

2. Problem formulation

We consider an investment project whose payoff rate depends on the installed capacity and an economic indicator. We model the economic indicator by a one-dimensional Itô diffusion, i.e., the state X_t of the indicator at time t is given by

(2.1)  dX_t = b(X_t) dt + σ(X_t) dW_t,  X_0 = x > 0,

where b, σ : (0, ∞) → R are given functions, and W is a one-dimensional Brownian motion. If we think of the investment project as a producer of a single commodity, the state process X can be used to model an economic indicator such as the commodity's demand or the commodity's price. We make the following assumptions regarding the coefficients b and σ.

Assumption 2.1. The deterministic functions b, σ : (0, ∞) → R satisfy the following conditions:

(2.2)  σ²(x) > 0, for all x ∈ (0, ∞),

(2.3)  for all x ∈ (0, ∞), there exists ε > 0 such that ∫_{x−ε}^{x+ε} |b(s)| / σ²(s) ds < ∞.

The scale function p defined by (2.6) below satisfies

(2.4)  lim_{x→0} p(x) = −∞ and lim_{x→∞} p(x) = ∞.

Further,

(2.5)  lim_{x→∞} x b(x) / σ²(x) = −c, for some c ∈ (1, ∞].

With reference to the general theory of one-dimensional diffusions (e.g., see Section 5.5 in Karatzas and Shreve [16]), the assumptions (2.2) and (2.3) are sufficient for (2.1)
to define a diffusion that is unique in the sense of probability law. The conditions (2.2) and (2.3) also ensure that the scale function p and the speed measure m, given by

(2.6)  p(1) = 0 and p′(x) = exp( −2 ∫_1^x b(s)/σ²(s) ds ), for x ∈ (0, ∞),

and

(2.7)  m(dx) = ( 2 / ( σ²(x) p′(x) ) ) dx,

respectively, are well defined. By the theory in Karatzas and Shreve [16], assumption (2.4) is sufficient to ensure that the solution to (2.1) is non-explosive and recurrent. Assumption (2.5) is related to conditions formulated by Peskir [24] and bounds on the maximum of Itô diffusions.

We assume that the project's running payoff rate is of the form h(X_t, Y_t), for some function h, where X denotes the state of the economic indicator and Y denotes the level of installed capacity. We make the following assumptions on h.

Assumption 2.2. There exist a measurable function k : (0, ∞) → [0, ∞) and finite constants C₁, C₂ > 0 such that

(2.8)  ∫_0^∞ k(x) m(dx) < ∞,

and

(2.9)  −C₁ (1 + y) ≤ h(x, y) ≤ k(x) − C₂ (1 + y), for all (x, y) ∈ (0, ∞)².

Further, h(x, y) is concave in y and non-decreasing in x, and there is a unique, strictly increasing function y∘ : (0, ∞) → (0, ∞) such that

(2.10)  H( x, y∘(x) ) = 0, for all x ∈ (0, ∞),

where

(2.11)  H(x, y) = h_y(x, y).

We assume that there exist finite constants C₃, C₄ > 0, such that

(2.12)  y∘(x) ≤ C₃ + C₄ x, for all x ≥ 0.

Since y∘ is strictly increasing, it has a well-defined inverse function, denoted by x∘. We proceed to define what we mean by an investment/capacity strategy, and the corresponding set of all such strategies.

Definition 2.1. Given an initial condition (x, y), let (Ω, F, F_t, X, P_x) be a weak solution to (2.1). An investment strategy Y_{x,y} is an F_t-adapted càglàd process Y of finite variation with Y_0 = y. Moreover, let Y⁺ and Y⁻ denote the increasing processes corresponding to the increasing and decreasing parts of Y, respectively, that is, Y = y + Y⁺ − Y⁻. Let 𝒴_{x,y} denote the set of all investment strategies with initial condition (x, y).

We are now ready to formulate the optimisation criterion. This takes the form of an expected ergodic optimisation criterion, which corresponds to the long-term average payoff. Define the performance index

(2.13)  J(Y_{x,y}) = lim sup_{T→∞} (1/T) E_x[ ∫_0^T h(X_t, Y_t) dt − K⁺ Y⁺_T − K⁻ Y⁻_T ],

where K⁺ is the cost of increasing the capacity by one unit and K⁻ is the cost/return of reducing the capacity by one unit.

Assumption 2.3. We assume that K⁺ + K⁻ > 0.

The aim is to determine the strategy that maximises the performance index J given by (2.13). The value function associated with such an optimal control problem is given by

(2.14)  V(x, y) = sup_{Y_{x,y} ∈ 𝒴_{x,y}} J(Y_{x,y}).

It turns out that the value function V is constant, i.e., V does not depend on the initial condition (x, y) (see the end of Section 4). According to the next remark, the value function takes values in R.

Remark 2.1. Let (x, y) ∈ (0, ∞)² be any initial condition, and denote by Y^c_{x,y} the strategy for which Y_t = y, for all t. From assumption (2.9) it follows that

V(x, y) ≥ J(Y^c_{x,y}) ≥ −C₁ (1 + y) > −∞.

On the other hand, it follows from assumptions (2.8)-(2.9) and Rogers and Williams [25, p. 3] that

J(Y_{x,y}) ≤ lim sup_{T→∞} (1/T) E_x[ ∫_0^T k(X_t) dt ] = ( m((0, ∞)) )⁻¹ ∫_0^∞ k(x) m(dx) < ∞,

for every strategy Y_{x,y} ∈ 𝒴_{x,y}. Hence, V(x, y) ≤ ( m((0, ∞)) )⁻¹ ∫_0^∞ k(x) m(dx) < ∞. We conclude that −∞ < V(x, y) < ∞, for all initial conditions (x, y).

3. The Hamilton-Jacobi-Bellman equation

In the case of the usual expected discounted optimisation criterion, the standard approach of applying the principle of dynamic programming provides a Hamilton-Jacobi-Bellman equation, which determines the value function and the corresponding optimal strategy. Since in our case the value function is constant, this approach is not as straightforward. Regardless of this, it turns out that we can derive a Hamilton-Jacobi-Bellman equation corresponding to the value function, which determines an optimal strategy. The Hamilton-Jacobi-Bellman equation takes the following form:

(3.1)  max{ (1/2) σ²(x) w_xx + b(x) w_x + h, w_y − K⁺, −w_y − K⁻ } = 0.

From the nature of the problem it is natural to conjecture that we are looking for continuous and increasing functions F : [y_F, ∞) → [0, ∞) and G : [0, y_G) → [x_G, ∞) that divide [0, ∞)² into three connected sets. Here, y_F, y_G and x_G are positive real numbers.
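The ergodic criterion and the finiteness argument of Remark 2.1 can be made concrete numerically. The following sketch is our own illustration and is not part of the paper: we take X_t = exp(Z_t) with Z an Ornstein-Uhlenbeck process, so that X is an ergodic Itô diffusion on (0, ∞), choose the hypothetical payoff h(x, y) = x√y − cy (concave in y, non-decreasing in x), and estimate the long-term average payoff of the constant-capacity strategy Y^c, a lower bound for the value function. All parameter values are assumptions made for the example.

```python
import math, random

# Illustrative sketch only: the paper keeps b, sigma and h abstract.  Here
# X_t = exp(Z_t) with Z an Ornstein-Uhlenbeck process, so X is ergodic on
# (0, inf), and h(x, y) = x*sqrt(y) - c*y is a hypothetical running payoff.

def simulate_constant_capacity(y=1.0, x0=1.0, kappa=1.0, mu=0.0, vol=0.5,
                               c=0.8, T=2000.0, dt=0.01, seed=1):
    """Time-average payoff of the do-nothing strategy Y_t = y (Remark 2.1)."""
    rng = random.Random(seed)
    z = math.log(x0)
    total = 0.0
    for _ in range(int(T / dt)):
        x = math.exp(z)
        total += (x * math.sqrt(y) - c * y) * dt          # running payoff h
        # Euler step for the OU process driving the indicator
        z += kappa * (mu - z) * dt + vol * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return total / T

est = simulate_constant_capacity()
print(est)
```

Because X is ergodic, the time average stabilises as T grows, which is exactly the quantity the performance index (2.13) measures for this strategy.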
The interpretation is that if the economic indicator X_t at time t is less than F(Y_t), it is optimal to decrease the capacity to the level F⁻¹(X_t). If X_t is greater than G(Y_t), then it is optimal

to increase the capacity to the level G⁻¹(X_t). While the indicator X is greater than F(Y_t) and less than G(Y_t), it is optimal to take no action until X reaches the boundaries defined by F and G. So, we look for a solution to (3.1) of the form:

(3.2)  −w_y(x, y) − K⁻ = 0, for (x, y) ∈ D,
(3.3)  (1/2) σ²(x) w_xx(x, y) + b(x) w_x(x, y) + h(x, y) = 0, for (x, y) ∈ C,
(3.4)  w_y(x, y) − K⁺ = 0, for (x, y) ∈ I,

where D, C and I are given by

(3.5)  D = { (x, y) ∈ [0, ∞)² : y ≥ F⁻¹(x) },
(3.6)  C = { (x, y) ∈ [0, ∞)² : G⁻¹(x) ≤ y ≤ F⁻¹(x) },
(3.7)  I = { (x, y) ∈ [0, ∞)² : y ≤ G⁻¹(x) }.

Our derivation of a solution to equation (3.1) starts with the observation that the ordinary differential equation

(3.8)  (1/2) σ²(x) v_xx(x, y) + b(x) v_x(x, y) + h(x, y) = 0

has general solutions of the form

(3.9)  v(x, y) = B(y) + A(y) p(x) − ∫_1^x p′(s) ∫_1^s h(u, y) m(du) ds,

for some functions A and B. To see this, first observe that

v_x(x, y) = A(y) p′(x) − p′(x) ∫_1^x h(u, y) m(du),
v_xx(x, y) = A(y) p″(x) − p″(x) ∫_1^x h(u, y) m(du) − ( 2 / σ²(x) ) h(x, y),

and that p″(x) = −( 2 b(x) / σ²(x) ) p′(x). Hence, we calculate that

(1/2) σ²(x) v_xx(x, y) + b(x) v_x(x, y) + h(x, y)
  = (1/2) σ²(x) ( A(y) p″(x) − p″(x) ∫_1^x h(u, y) m(du) ) − h(x, y) + b(x) A(y) p′(x) − b(x) p′(x) ∫_1^x h(u, y) m(du) + h(x, y)

  = −A(y) b(x) p′(x) + b(x) p′(x) ∫_1^x h(u, y) m(du) + b(x) A(y) p′(x) − b(x) p′(x) ∫_1^x h(u, y) m(du) = 0,

which proves that functions v of the form (3.9) satisfy equation (3.8). Having established the general solution to equation (3.8), our next task is to determine the unknown functions A and B, as well as the functions F and G. We start by defining candidates for y_F and y_G.

Definition 3.1. Let β : [0, ∞) → [0, ∞] be given by

β(y) = inf{ b ∈ [0, ∞) : ∫_0^b H(u, y) m(du) > 0 }, if ∫_0^∞ H(u, y) m(du) > 0,

and β(y) = ∞, if ∫_0^∞ H(u, y) m(du) ≤ 0.

Observe that x∘(y) < β(y). Moreover, β satisfies

∫_0^{β(y)} H(u, y) m(du) = 0, if β(y) < ∞.

From Assumption 2.2, it follows that

(3.10)  β′(y) = −( σ²(β(y)) p′(β(y)) / ( 2 H(β(y), y) ) ) ∫_0^{β(y)} H_y(u, y) m(du) > 0, if β(y) < ∞.

Now, let ȳ be given by

ȳ = inf{ y ≥ 0 : β(y) = ∞ }.

Then β is strictly increasing for y < ȳ.

Definition 3.2. Let y_F be given by

(3.11)  y_F = inf{ y ≥ 0 : ∫_0^{β(y)} p(u) H(u, y) m(du) = K⁺ + K⁻ }.

In view of Assumption 2.2 and the equation for β′(y) given by (3.10), we calculate that

(d/dy) ∫_0^{β(y)} p(u) H(u, y) m(du) = ∫_0^{β(y)} ( p(u) − p(β(y)) ) H_y(u, y) m(du) > 0,

for every y < ȳ.

Definition 3.3. Let α : [0, ∞) → [0, ∞) be given by

α(y) = sup{ a ∈ (0, ∞) : ∫_a^∞ H(u, y) m(du) < 0 }, if ∫_0^∞ H(u, y) m(du) < 0,

and α(y) = 0, if ∫_0^∞ H(u, y) m(du) ≥ 0.

Note that α(y) < x∘(y) < β(y), and that α(y) = 0, for y ≤ ȳ. Moreover, α(y) satisfies

∫_{α(y)}^∞ H(u, y) m(du) = 0, if α(y) > 0.

Assumption 2.2 and the inequality H(α(y), y) < 0 imply that

(3.12)  α′(y) = ( σ²(α(y)) p′(α(y)) / ( 2 H(α(y), y) ) ) ∫_{α(y)}^∞ H_y(u, y) m(du) > 0, for y > ȳ.

We conclude that α is strictly increasing for y > ȳ.

Definition 3.4. Let y_G be given by

(3.13)  y_G = inf{ y ≥ y_F : ∫_{α(y)}^∞ p(u) H(u, y) m(du) = K⁺ + K⁻ },

if the set is non-empty, and y_G = ∞, if the set over which the infimum is taken is empty.

Assumption 2.2 and the expression for α′(y) given by (3.12) imply that

(3.14)  (d/dy) ∫_{α(y)}^∞ p(u) H(u, y) m(du) = ∫_{α(y)}^∞ ( p(u) − p(α(y)) ) H_y(u, y) m(du) < 0,

for y > ȳ. We conclude that y_G < ∞ if and only if

lim sup_{y→∞} ∫_{α(y)}^∞ p(u) H(u, y) m(du) < K⁺ + K⁻.

Our next step is to construct the solution to equation (3.1), for y ≤ y_F.

The case y ≤ y_F. According to the ansatz that the solution to (3.1) takes the form (3.2)-(3.4), w satisfies equation (3.8) for (x, y) in C. Hence, for (x, y) ∈ C, the function w is given by

(3.15)  w(x, y) = B(y) + A(y) p(x) − ∫_1^x p′(s) ∫_1^s h(u, y) m(du) ds,

for some functions A and B. We require that lim_{x→0} w(x, y) exists for every y ∈ [0, y_F], which implies that

(3.16)  A(y) = −∫_0^1 h(u, y) m(du), for y ∈ [0, y_F].

In addition, we look for a solution w to (3.1) that is continuous. This implies that

(3.17)  w(x, y) = w(x, G⁻¹(x)) − K⁺ ( G⁻¹(x) − y ), for (x, y) ∈ I.

With reference to the principle of smooth fit, we further require w to satisfy

(3.18)  w_y(G(y)−, y) = w_y(G(y)+, y),
(3.19)  w_xy(G(y)−, y) = w_xy(G(y)+, y).

Equation (3.18) implies that

(3.20)  B′(y) = K⁺ + p(G(y)) ∫_0^1 H(u, y) m(du) + ∫_1^{G(y)} p′(s) ∫_1^s H(u, y) m(du) ds,

and equation (3.19) implies that

p′(G(y)) ∫_0^{G(y)} H(u, y) m(du) = 0, for y ∈ [0, y_F].

From the latter equation it follows that G should satisfy

(3.21)  ∫_0^{G(y)} H(u, y) m(du) = 0, for y ∈ [0, y_F].

We remark that Assumption 2.2 ensures that equation (3.21) uniquely determines G. Moreover, we observe that G(y) = β(y), from which it follows that G is differentiable and strictly increasing, for every y ∈ [0, y_F]. Equation (3.20) determines B up to a constant. We may choose this constant such that

(3.22)  B(y) = K⁺ y + ∫_0^y [ p(G(x)) ∫_0^1 H(u, x) m(du) + ∫_1^{G(x)} p′(s) ∫_1^s H(u, s) m(du) ds ] dx,

for y ≤ y_F.

The case y_F ≤ y ≤ y_G. We require w to be continuous at the points (F(y), y) and (G(y), y), which implies that w must have the form

(3.23)  w(x, y) = w(x, G⁻¹(x)) − K⁺ ( G⁻¹(x) − y ), for x ≥ G(y),
(3.24)  w(x, y) = w(x, F⁻¹(x)) − K⁻ ( y − F⁻¹(x) ), for x ≤ F(y).

Appealing to the so-called principle of smooth fit, we further require w to satisfy

(3.25)  w_y(F(y)+, y) = w_y(F(y)−, y),
(3.26)  w_y(G(y)−, y) = w_y(G(y)+, y),
(3.27)  w_xy(F(y)+, y) = w_xy(F(y)−, y),
(3.28)  w_xy(G(y)−, y) = w_xy(G(y)+, y).

Conditions (3.27) and (3.28) imply that

(3.29)  A′(y) p′(F(y)) − p′(F(y)) ∫_1^{F(y)} H(u, y) m(du) = 0,
(3.30)  A′(y) p′(G(y)) − p′(G(y)) ∫_1^{G(y)} H(u, y) m(du) = 0,

from which it follows that

(3.31)  A′(y) = ∫_1^{F(y)} H(u, y) m(du),

and

(3.32)  ∫_{F(y)}^{G(y)} H(u, y) m(du) = 0.

Further, the smooth fit conditions (3.25) and (3.26) imply that

(3.33)  B′(y) + A′(y) p(F(y)) − ∫_1^{F(y)} p′(s) ∫_1^s H(u, y) m(du) ds = −K⁻,
(3.34)  B′(y) + A′(y) p(G(y)) − ∫_1^{G(y)} p′(s) ∫_1^s H(u, y) m(du) ds = K⁺.

In view of equations (3.31), (3.33) and (3.34), we calculate that

0 = −( K⁺ + K⁻ ) + A′(y) [ p(G(y)) − p(F(y)) ] − ∫_{F(y)}^{G(y)} p′(s) ∫_1^s H(u, y) m(du) ds
  = −( K⁺ + K⁻ ) + [ p(G(y)) − p(F(y)) ] ∫_1^{F(y)} H(u, y) m(du) − [ p(G(y)) − p(F(y)) ] ∫_1^{F(y)} H(u, y) m(du) − ∫_{F(y)}^{G(y)} p′(s) ∫_{F(y)}^s H(u, y) m(du) ds,

from which it follows that

(3.35)  ∫_{F(y)}^{G(y)} p′(s) ∫_{F(y)}^s H(u, y) m(du) ds = −( K⁺ + K⁻ ).

By (3.32) and Fubini's theorem, we calculate that the left-hand side of the latter equation is given by

∫_{F(y)}^{G(y)} p′(s) ∫_{F(y)}^s H(u, y) m(du) ds
  = ∫_{F(y)}^{G(y)} ∫_{F(y)}^{G(y)} p′(s) H(u, y) 1_{u≤s} m(du) ds
  = ∫_{F(y)}^{G(y)} ( ∫_{F(y)}^{G(y)} p′(s) 1_{u≤s} ds ) H(u, y) m(du)
  = p(G(y)) ∫_{F(y)}^{G(y)} H(u, y) m(du) − ∫_{F(y)}^{G(y)} p(u) H(u, y) m(du)

(3.36)  = −∫_{F(y)}^{G(y)} p(u) H(u, y) m(du).

From equations (3.35) and (3.36), we conclude that, in addition to (3.32), the functions F and G must satisfy

(3.37)  ∫_{F(y)}^{G(y)} p(u) H(u, y) m(du) = K⁺ + K⁻.

The next result states that (3.32) and (3.37) have a unique solution.

Lemma 3.1. The system of equations (3.32) and (3.37) has a unique solution (F(y), G(y)), for every y_F ≤ y ≤ y_G. Moreover, the functions F and G are both continuously differentiable and strictly increasing.

Proof. Define the function L_y : (α(y), x∘(y)] → [x∘(y), β(y)) by

(3.38)  ∫_a^{L_y(a)} H(u, y) m(du) = 0.

We remark that Assumption 2.2 ensures that equation (3.38) has a unique solution L_y(a), for every y_F ≤ y ≤ y_G and a ∈ (α(y), x∘(y)]. Moreover, L_y satisfies lim_{a↓α(y)} L_y(a) = β(y) and lim_{a↑x∘(y)} L_y(a) = x∘(y). By differentiating ∫_a^{L_y(a)} H(u, y) m(du) with respect to a, we obtain

(3.39)  L_y′(a) = H(a, y) σ²(L_y(a)) p′(L_y(a)) / ( H(L_y(a), y) σ²(a) p′(a) ).

From the expression for L_y′(a) obtained in (3.39), we calculate that

(d/da) ∫_a^{L_y(a)} p(u) H(u, y) m(du) = ( 2 H(a, y) / ( σ²(a) p′(a) ) ) ( p(L_y(a)) − p(a) ) < 0,

since p is strictly increasing, a < L_y(a) and H(a, y) < 0. It follows from the definition of y_G that (3.32) and (3.35) have a unique solution for every y_F ≤ y ≤ y_G. Differentiating equation (3.38) with respect to y shows that

(3.40)  (∂/∂y) L_y(a) = −( σ²(L_y(a)) p′(L_y(a)) / ( 2 H(L_y(a), y) ) ) ∫_a^{L_y(a)} H_y(u, y) m(du),

which is strictly positive since H_y(u, y) is strictly negative for all u ∈ (0, ∞). Denote by ã(y) the unique function of y which satisfies

(3.41)  ∫_{ã(y)}^{L_y(ã(y))} p(u) H(u, y) m(du) = K⁺ + K⁻.

By construction of ã and L, we have in particular that

∫_{ã(y)}^{L_y(ã(y))} H(u, y) m(du) = 0.

Differentiating this identity with respect to y, and inserting the expressions for L_y′ and (∂/∂y)L_y given by (3.39) and (3.40), we obtain

(3.42)  ã′(y) = σ²(ã(y)) p′(ã(y)) ∫_{ã(y)}^{L_y(ã(y))} [ p(L_y(ã(y))) − p(u) ] H_y(u, y) m(du) / ( 2 H(ã(y), y) [ p(L_y(ã(y))) − p(ã(y)) ] ),

which is strictly positive since h is concave in y, H(ã(y), y) is negative and p is strictly increasing. Since ã′(y) > 0 and (∂/∂y)L_y(a) > 0, it follows that the functions F and G are both continuously differentiable and strictly increasing. □

Our next task is to verify that the solutions for y ≤ y_F and y_F ≤ y ≤ y_G are consistent with each other, and to piece them together in a smooth way. Since F and G satisfy equations (3.32) and (3.37), for y ≥ y_F, it follows that F(y_F+) = 0 and G(y_F+) = β(y_F) = G(y_F−). Based on the expressions (3.39)-(3.42), we calculate that

G′(y) = −( σ²(G(y)) p′(G(y)) / ( 2 H(G(y), y) ) ) ∫_{F(y)}^{G(y)} ( ( p(u) − p(F(y)) ) / ( p(G(y)) − p(F(y)) ) ) H_y(u, y) m(du),

from which it follows that

G′(y_F+) = −( σ²(G(y_F)) p′(G(y_F)) / ( 2 H(G(y_F), y_F) ) ) ∫_0^{G(y_F)} H_y(u, y_F) m(du) = G′(y_F−).

We conclude that G is continuously differentiable for every y ∈ [0, y_G]. Further, equations (3.20) and (3.34) imply that B′(y_F−) = B′(y_F+), and equations (3.16) and (3.31) imply that A′(y_F−) = A′(y_F+). Hence, by choosing

(3.43)  A(y) = −∫_0^1 h(u, y_F) m(du) + ∫_{y_F}^y ∫_1^{F(s)} H(u, s) m(du) ds

and

(3.44)  B(y) = B(y_F) + K⁺ ( y − y_F ) + ∫_{y_F}^y [ −p(G(x)) ∫_1^{F(x)} H(u, x) m(du) + ∫_1^{G(x)} p′(s) ∫_1^s H(u, s) m(du) ds ] dx,

for y > y_F, we have that A(y_F−) = A(y_F+) and B(y_F−) = B(y_F+).

The case y ≥ y_G. We start by remarking that if y_G = ∞, then there is nothing to prove. Therefore, assume that y_G < ∞. For x ≥ F(y), the solution w to equation (3.1) is of the form (3.9), for some functions A and B. We require w to be continuous at (F(y), y), which implies that

(3.45)  w(x, y) = w(x, F⁻¹(x)) − K⁻ ( y − F⁻¹(x) ), for x ≤ F(y).

We postulate that F should solve

(3.46)  ∫_{F(y)}^∞ H(u, y) m(du) = 0, for y ≥ y_G.

Assumption 2.2 ensures that equation (3.46) has a unique solution F. Further, the smooth fit conditions w_y(F(y)+, y) = w_y(F(y)−, y) and w_xy(F(y)+, y) = w_xy(F(y)−, y) imply that

(3.47)  A′(y) = ∫_1^{F(y)} H(u, y) m(du),

and

(3.48)  B′(y) + A′(y) p(F(y)) − ∫_1^{F(y)} p′(s) ∫_1^s H(u, y) m(du) ds = −K⁻.

From the definition of y_G and equations (3.32) and (3.37), it follows that G(y_G) = ∞ and that F(y_G) satisfies

∫_{F(y_G)}^∞ H(u, y_G) m(du) = 0.

We conclude that F(y_G−) = F(y_G+). Comparing (3.31) and (3.33) with (3.47) and (3.48) verifies that A′(y_G−) = A′(y_G+) and B′(y_G−) = B′(y_G+). Moreover, from equation (3.42) it follows that

(3.49)  F′(y_G−) = ( σ²(F(y_G)) p′(F(y_G)) / ( 2 H(F(y_G), y_G) ) ) ∫_{F(y_G)}^∞ H_y(u, y_G) m(du).

Differentiating equation (3.46) with respect to y shows that F′(y_G+) coincides with the right-hand side of (3.49), which verifies that F is continuously differentiable. Finally, by choosing

(3.50)  A(y) = A(y_G) + ∫_{y_G}^y ∫_1^{F(x)} H(u, x) m(du) dx

and

(3.51)  B(y) = B(y_G) − K⁻ ( y − y_G ) + ∫_{y_G}^y [ −p(F(x)) ∫_1^{F(x)} H(u, x) m(du) + ∫_1^{F(x)} p′(s) ∫_1^s H(u, x) m(du) ds ] dx,

we see that A and B are continuous at y_G. We have the following characterisation of the solution to the differential equation (3.1), which we later show plays a similar role as a Hamilton-Jacobi-Bellman equation.

Proposition 3.2. Let y_F and y_G be given by (3.11) and (3.13), respectively. Let F and G be the unique solution to (3.21), for y < y_F, the unique solution to (3.32) and (3.37), for y_F ≤ y < y_G, and the unique solution to (3.46), for y ≥ y_G. Further, let v be given by (3.9), where A and B are given by (3.16) and (3.22), for y < y_F, by (3.43)

and (3.44), for y_F ≤ y < y_G, and by (3.50) and (3.51), for y ≥ y_G. Then F and G are strictly increasing and continuously differentiable, and

(3.52)  w(x, y) = v(x, G⁻¹(x)) − K⁺ ( G⁻¹(x) − y ), for x ≥ G(y),
        w(x, y) = v(x, y), for F(y) ≤ x ≤ G(y),
        w(x, y) = v(x, F⁻¹(x)) − K⁻ ( y − F⁻¹(x) ), for x ≤ F(y),

is, up to a constant, the unique solution to (3.1) of class C^{2,1}([0, ∞)²).

Proof. First note that the existence and the claimed properties of F and G follow from the previous analysis. This also proves the existence and uniqueness of A and the existence of B, and that B is unique up to a constant. Further, by construction, the function v given by (3.9) satisfies equation (3.8). Assume that (x, y) ∈ I. Using the fact that w_y(G(y)−, y) = w_y(G(y)+, y) = K⁺, we calculate that

(3.53)  w_x(x, y) = v_x(x, G⁻¹(x)) + v_y(x, G⁻¹(x)) (d/dx) G⁻¹(x) − K⁺ (d/dx) G⁻¹(x)
        = v_x(x, G⁻¹(x)) + ( v_y(x, G⁻¹(x)) − K⁺ ) (d/dx) G⁻¹(x)
        = v_x(x, G⁻¹(x)) = w_x(x, G⁻¹(x)).

Further, since w_yx(G(y), y) = w_xy(G(y), y) = 0, we obtain that

(3.54)  w_xx(x, y) = v_xx(x, G⁻¹(x)) + v_yx(x, G⁻¹(x)) (d/dx) G⁻¹(x) = w_xx(x, G⁻¹(x)).

A similar calculation shows that w_x(x, y) = w_x(x, F⁻¹(x)) and w_xx(x, y) = w_xx(x, F⁻¹(x)), for (x, y) in D. We conclude that w belongs to C^{2,1}([0, ∞)²). Moreover, up to a constant, w is the unique solution to the smooth pasting conditions, hence, up to a constant, the unique solution to (3.1) of class C^{2,1}([0, ∞)²).

In view of equations (3.53) and (3.54), we calculate that

(1/2) σ²(x) w_xx(x, y) + b(x) w_x(x, y) + h(x, y)
  = (1/2) σ²(x) w_xx(x, G⁻¹(x)) + b(x) w_x(x, G⁻¹(x)) + h(x, y)
  = −∫_y^{G⁻¹(x)} H(x, s) ds ≤ 0, for all (x, y) ∈ I,

since H(x, y) ≥ 0, for (x, y) ∈ I. A similar argument shows that

(1/2) σ²(x) w_xx(x, y) + b(x) w_x(x, y) + h(x, y) ≤ 0, for all (x, y) ∈ D.

Assume that (x, y) ∈ C. With reference to (2.10), (3.22), (3.43) and (3.44), we calculate

w_y(x, y) − K⁺ = ∫_x^{G(y)} p′(s) ∫_1^s H(u, y) m(du) ds − ( p(G(y)) − p(x) ) ∫_1^{F(y)} H(u, y) m(du)

(3.55)  = ∫_{F(y)}^x p(u) H(u, y) m(du) − p(x) ∫_{F(y)}^x H(u, y) m(du) − ∫_{F(y)}^{G(y)} p(u) H(u, y) m(du).

Define a function R_y^I by

R_y^I(x) = ∫_{F(y)}^x p(u) H(u, y) m(du) − p(x) ∫_{F(y)}^x H(u, y) m(du),

and observe that

(R_y^I)′(x) = −p′(x) ∫_{F(y)}^x H(u, y) m(du).

From the definition of the functions F and G, and Assumption 2.2, it follows that (R_y^I)′(x) ≥ 0, for every x ≤ G(y), while R_y^I(G(y)) = ∫_{F(y)}^{G(y)} p(u) H(u, y) m(du). This observation and equation (3.55) imply that w_y(x, y) − K⁺ ≤ 0, for every (x, y) ∈ C. For completeness, we remark that for (x, y) ∈ D, we have that w_y(x, y) = −K⁻ ≤ K⁺. Furthermore, for (x, y) ∈ I, we have w_y(x, y) = K⁺ ≥ −K⁻, since K⁺ + K⁻ > 0.

Next, observe that the construction of y_F, y_G, F and G implies that

∫_{F(y)}^{G(y)} p(u) H(u, y) m(du) ≤ K⁺ + K⁻ and ∫_{F(y)}^{G(y)} H(u, y) m(du) = 0.

Consequently,

w_y(x, y) + K⁻ ≥ ∫_{F(y)}^x p(u) H(u, y) m(du) − p(x) ∫_{F(y)}^x H(u, y) m(du).

Define a function R_y^D by

R_y^D(x) = ∫_{F(y)}^x p(u) H(u, y) m(du) − p(x) ∫_{F(y)}^x H(u, y) m(du).

We calculate that

(R_y^D)′(x) = −p′(x) ∫_{F(y)}^x H(u, y) m(du) ≥ 0, for x ≤ G(y).

This implies that 0 = R_y^D(F(y)) ≤ R_y^D(x). We conclude that w_y(x, y) + K⁻ ≥ 0. □

4. The optimal investment strategy

The aim of this section is to formulate the strategy corresponding to the solution to the Hamilton-Jacobi-Bellman equation, and prove that this strategy is optimal. However, we will start by establishing a couple of technical results.
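The reflection-type strategies analysed above — increase capacity when X rises to the boundary G(Y), decrease it when X falls to F(Y), do nothing in between — can be sketched numerically. In the sketch below, the diffusion, the payoff and, most importantly, the band boundaries G⁻¹ and F⁻¹ are hypothetical placeholders of our own (in the paper, the boundaries solve the system (3.32) and (3.37)); the code only illustrates the mechanics of a band strategy and the resulting long-run average net payoff.

```python
import math, random

# Hypothetical sketch of a band (reflection) strategy.  The boundaries below
# are NOT the paper's F and G; they are simple increasing placeholders that
# keep the state in a region of the form G^{-1}(x) <= y <= F^{-1}(x).

def simulate_reflected(x0=1.0, y0=1.0, kappa=1.0, mu=0.0, vol=0.5,
                       c=0.8, K_up=0.3, K_down=0.1, T=2000.0, dt=0.01, seed=2):
    G_inv = lambda x: max(0.0, x - 1.5)           # assumed increase boundary
    F_inv = lambda x: max(0.0, 2.0 * (x - 0.2))   # assumed decrease boundary
    rng = random.Random(seed)
    z, y = math.log(x0), y0
    payoff = up = down = 0.0
    for _ in range(int(T / dt)):
        x = math.exp(z)
        lo, hi = G_inv(x), max(G_inv(x), F_inv(x))
        new_y = min(max(y, lo), hi)               # minimal action: clip Y into the band
        up += max(0.0, new_y - y)                 # cumulative increases Y^+
        down += max(0.0, y - new_y)               # cumulative decreases Y^-
        y = new_y
        payoff += (x * math.sqrt(y) - c * y) * dt
        z += kappa * (mu - z) * dt + vol * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return (payoff - K_up * up - K_down * down) / T

j = simulate_reflected()
print(j)
```

The clipping step is the discrete-time analogue of taking "minimal action so as to reflect (X, Y) on the boundary" of the no-action region: capacity only moves when the state would otherwise leave the band.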

Definition 4.1. Let Φ : (0, ∞) → (0, ∞) be given by

Φ(x) = ∫_0^x m((0, u]) p′(u) du,

and let Φ⁻¹ denote its inverse, i.e., Φ(Φ⁻¹(y)) = y and Φ⁻¹(Φ(x)) = x. The following result provides a relationship between conditions formulated by Peskir [24] and condition (2.5).

Lemma 4.1. Assumption (2.5) implies that

(4.1)  sup_{y>0} { ( Φ(y)/y ) ∫_y^∞ du / Φ(u) } < ∞,

and

(4.2)  lim_{x→∞} Φ⁻¹(x) / x = 0.

Proof. Assume that (2.5) holds. Then there exist ε > 0 and a constant C such that b(x)/σ²(x) ≤ −(1 + ε)/x, for x ≥ C. This implies that p′(x) ≥ C₂ x^{2(1+ε)}, for some positive constant C₂, and that p(x) ≥ C₁ + C₂ x^{2(1+ε)}, for some positive constants C₁ and C₂. Observe that there exist constants C₁, C₂ > 0 such that m((0, x]) ≥ C₁, for x ≥ C₂. By L'Hôpital's rule and the previous estimate for the scale function p,

(4.3)  lim_{x→∞} Φ(x) / x² = ∞.

Next, we claim that

(4.4)  lim_{x→∞} x Φ′(x) / Φ(x) = ∞.

In order to verify this, assume that (4.4) does not hold. Then for every ε > 0 there exists a C > 0 such that x Φ′(x) ≤ ε Φ(x), for x ≥ C. This implies that Φ(x) ≤ C₂ x^ε, for some constant C₂ > 0, which contradicts (4.3). Further, it follows from (4.3) that

∫_x^∞ dy / Φ(y) ≤ C ∫_x^∞ dy / y² < ∞, for all x ≥ C₁,

where C > 0 is a constant. This shows that ∫_x^∞ dy / Φ(y) is well defined for all x > 0. By L'Hôpital's rule and (4.4),

(4.5)  lim_{x→∞} ( Φ(x)/x ) ∫_x^∞ dy / Φ(y) = lim_{x→∞} ( ∫_x^∞ dy / Φ(y) ) / ( x / Φ(x) ) = lim_{x→∞} 1 / ( x Φ′(x)/Φ(x) − 1 ) = 0.

Together with the observation that

(4.6)  the function y ↦ ( Φ(y)/y ) ∫_y^∞ du / Φ(u) is continuous on (0, ∞) and has a finite limit as y → 0,

this implies that (4.1) holds.

Since Φ(x) ≥ 0, for all x, it follows that Φ⁻¹(x) ≥ 0, for all x. Therefore, if lim_{x→∞} Φ⁻¹(x) < ∞, then we can conclude that (4.2) holds. Now, assume that lim_{x→∞} Φ⁻¹(x) = ∞. Then, by L'Hôpital's rule, we calculate that

lim_{x→∞} Φ⁻¹(x) / x = lim_{x→∞} ( Φ⁻¹ )′(x) = lim_{x→∞} 1 / Φ′(Φ⁻¹(x)) ≤ lim_{x→∞} C x^{−2ε} = 0,

for some constant C > 0 and some ε > 0. This verifies that (4.2) holds. □

The next result shows that strategies with a certain limit behaviour as time tends to infinity can be ruled out as optimal strategies.

Lemma 4.2. Let (x, y) be any initial condition and let Y_{x,y} ∈ 𝒴_{x,y} be any strategy for which either of the following holds:

(1)  lim inf_{T→∞} (1/T) E_x[Y_T] > 0;
(2)  lim inf_{T→∞} (1/T) E_x[Y⁻_T] = ∞.

Then V(x, y) > J(Y_{x,y}) = −∞.

Proof. First, observe that assumption (2.9) implies that

(4.7)  J(Y_{x,y}) = lim sup_{T→∞} (1/T) E_x[ ∫_0^T h(X_t, Y_t) dt − K⁺ Y⁺_T − K⁻ Y⁻_T ]
  ≤ lim sup_{T→∞} (1/T) E_x[ ∫_0^T k(X_t) dt ] − C₂ lim inf_{T→∞} (1/T) E_x[ ∫_0^T (1 + Y_t) dt ] − ( K⁺ + K⁻ ) lim inf_{T→∞} (1/T) E_x[Y⁻_T].

By Rogers and Williams [25, p. 3] and assumption (2.8), it follows that

(4.8)  lim sup_{T→∞} (1/T) E_x[ ∫_0^T k(X_t) dt ] = ( m((0, ∞)) )⁻¹ ∫_0^∞ k(x) m(dx) < ∞.

Now, assume that Y_{x,y} satisfies (1). Then there exist a κ > 0 and a finite positive constant C such that E_x[Y_t] > κ t, for every t greater than C. Hence,

(4.9)  lim inf_{T→∞} (1/T) E_x[ ∫_0^T Y_t dt ] ≥ κ lim inf_{T→∞} (1/T) ∫_C^T t dt = ∞.

It follows from (4.7)-(4.9) that J(Y_{x,y}) = −∞, from which the claimed result, when Y_{x,y} satisfies (1), follows from Remark 2.1. Assume that Y_{x,y} satisfies (2). Then it follows from (4.7)-(4.8) that J(Y_{x,y}) = −∞, and the result follows from Remark 2.1. □

We are now ready to formulate the strategy corresponding to the solution to the Hamilton-Jacobi-Bellman equation that we derived in the previous section.

Definition 4.2. Let C be given by (3.6), and let F, G and w be as in Proposition 3.2. Denote by Y*_{x,y} the strategy consisting of immediately raising or decreasing the capacity to the closest boundary point of C if (x, y) ∉ C, and then taking minimal action so as to reflect (X, Y*) on the boundary of C. That is, at time t = 0, Y* has a positive jump of size G⁻¹(x) − y, if G⁻¹(x) > y, and Y* has a negative jump of size y − F⁻¹(x), if y > F⁻¹(x). For t > 0, the strategy Y* is a continuous process given by

(4.10)  dY*_t = 1_{Y*_t = G⁻¹(X_t)} dY*⁺_t − 1_{Y*_t = F⁻¹(X_t)} dY*⁻_t.

According to the next result, the strategy Y* provides an optimal strategy corresponding to the optimisation problem with value function V given by (2.14).

Theorem 4.3. Fix any initial condition (x, y). The strategy Y*_{x,y} given by Definition 4.2 provides an optimal strategy, that is, V(x, y) = J(Y*_{x,y}).

Proof. We need to prove that if Y_{x,y} is any strategy in 𝒴_{x,y}, then J(Y_{x,y}) ≤ J(Y*_{x,y}). Observe that, since Y_t ≥ 0, for all t, and Y⁻ is increasing, it follows from Lemma 4.2 that it is sufficient to prove the inequality for strategies Y_{x,y} which satisfy

(4.11)  lim_{T→∞} (1/T) E[Y_T] = 0 and lim sup_{T→∞} (1/T) E[Y⁻_T] < ∞.

So, let Y_{x,y} be any strategy which satisfies the conditions in (4.11). We claim that

(4.12)  ∫_0^T h(X_t, Y_t) dt − K⁺ Y⁺_T − K⁻ Y⁻_T ≤ w(x, y) − w(X_T, Y_T) + ∫_0^T σ(X_t) w_x(X_t, Y_t) dW_t,

and

(4.13)  ∫_0^T h(X_t, Y*_t) dt − K⁺ Y*⁺_T − K⁻ Y*⁻_T = w(x, y) − w(X_T, Y*_T) + ∫_0^T σ(X_t) w_x(X_t, Y*_t) dW_t,

for all T < ∞. To prove this, observe that, by Itô's formula,

w(X_T, Y_T) = w(x, y) + ∫_0^T [ (1/2) σ²(X_t) w_xx(X_t, Y_t) + b(X_t) w_x(X_t, Y_t) ] dt + ∫_0^T σ(X_t) w_x(X_t, Y_t) dW_t + ∫_0^T w_y(X_t, Y_t) (dY⁺_t)^c − ∫_0^T w_y(X_t, Y_t) (dY⁻_t)^c

(4.14)  + Σ_{0≤t≤T} [ w(X_t, Y_t + ΔY_t) − w(X_t, Y_t) ].

By rearranging equation (4.14), we get

(4.15)  ∫_0^T h(X_t, Y_t) dt − K⁺ Y⁺_T − K⁻ Y⁻_T
  = w(x, y) − w(X_T, Y_T) + ∫_0^T [ (1/2) σ²(X_t) w_xx(X_t, Y_t) + b(X_t) w_x(X_t, Y_t) + h(X_t, Y_t) ] dt
    + ∫_0^T σ(X_t) w_x(X_t, Y_t) dW_t + ∫_0^T [ w_y(X_t, Y_t) − K⁺ ] (dY⁺_t)^c − ∫_0^T [ w_y(X_t, Y_t) + K⁻ ] (dY⁻_t)^c
    + Σ_{0≤t≤T} [ w(X_t, Y_t + ΔY_t) − w(X_t, Y_t) − K⁺ ΔY⁺_t − K⁻ ΔY⁻_t ]
  ≤ w(x, y) − w(X_T, Y_T) + ∫_0^T σ(X_t) w_x(X_t, Y_t) dW_t + Σ_{0≤t≤T} [ w(X_t, Y_t + ΔY_t) − w(X_t, Y_t) − K⁺ ΔY⁺_t − K⁻ ΔY⁻_t ],

since w is a solution to (3.1). The same calculations hold for Y*_{x,y}, except that in this case the inequality in (4.15) holds with equality. Further, we can make the following calculations regarding the last term in (4.15):

(4.16)  Σ_{0≤t≤T} [ w(X_t, Y_t + ΔY_t) − w(X_t, Y_t) − K⁺ ΔY⁺_t − K⁻ ΔY⁻_t ]
  = Σ_{0≤t≤T} ∫_0^{ΔY⁺_t} [ w_y(X_t, Y_t + u) − K⁺ ] du − Σ_{0≤t≤T} ∫_0^{ΔY⁻_t} [ w_y(X_t, Y_t − u) + K⁻ ] du
  ≤ 0,

since w is a solution to (3.1). The same calculations hold for the case Y*_{x,y}, with the exception that the inequality in (4.16) holds with equality. The claimed inequalities (4.12) and (4.13) then follow from (4.15), (4.16) and the comments regarding the case Y*_{x,y} preceding these inequalities.

It follows from assumptions (2.9) and (4.11) that

( ∫_0^T h(X_t, Y_t) dt − K⁺ Y⁺_T − K⁻ Y⁻_T )⁻ ≤ C₁ ∫_0^T (1 + Y_t) dt + max{ K⁺, K⁺ + K⁻ } ( Y⁺_T + Y⁻_T ),

which belongs to L¹(P_x), for all T < ∞. Here, ( · )⁻ denotes the negative part. Moreover, from assumptions (2.8)-(2.9) it follows that

( ∫_0^T h(X_t, Y*_t) dt − K⁺ Y*⁺_T − K⁻ Y*⁻_T )⁺ ≤ ∫_0^T k(X_t) dt,

which belongs to L¹(P_x), for all T < ∞. Assumption (2.12) implies that G⁻¹(x) ≤ y∘(x) ≤ C₁ + C₂ x, for some finite constants C₁ and C₂. Therefore,

Y*_T ≤ y ∨ sup_{t≤T} G⁻¹(X_t) ≤ C₁ + C₂ ( y ∨ sup_{t≤T} X_t ),

for some finite constants C₁ and C₂. By (4.2), Lemma 4.1 and [24, Theorem 2.5],

(4.17)  lim_{T→∞} (1/T) E[Y*_T] ≤ lim_{T→∞} (1/T) E[ C₁ + C₂ sup_{t≤T} X_t ] ≤ C₂ lim_{T→∞} Φ⁻¹(T) / T = 0.

Since w satisfies the equation (3.1), we calculate that

| w(X_T, Y_T) − w(X_T, Y*_T) | = | ∫_{Y*_T}^{Y_T} w_y(X_T, u) du | ≤ max{ K⁺, K⁻ } ( Y_T + Y*_T ).

By condition (4.11) and equation (4.17), it follows that w(X_T, Y_T) − w(X_T, Y*_T) belongs to L¹(P_x), for all T < ∞, and that

(4.18)  lim_{T→∞} (1/T) E_x[ w(X_T, Y_T) − w(X_T, Y*_T) ] = 0.

According to the previous results and the inequalities (4.12) and (4.13), the negative part of

∫_0^T σ(X_t) { w_x(X_t, Y_t) − w_x(X_t, Y*_t) } dW_t

belongs to L¹(P_x), for all T < ∞. Hence, by Fatou's lemma and (4.18),

(4.19)  lim sup_{T→∞} (1/T) E_x[ ∫_0^T ( h(X_t, Y_t) − h(X_t, Y*_t) ) dt − K⁺ ( Y⁺_T − Y*⁺_T ) − K⁻ ( Y⁻_T − Y*⁻_T ) ]
  ≤ lim sup_{T→∞} (1/T) E_x[ w(X_T, Y*_T) − w(X_T, Y_T) ] + lim sup_{T→∞} (1/T) E_x[ ∫_0^T σ(X_t) { w_x(X_t, Y_t) − w_x(X_t, Y*_t) } dW_t ]

  ≤ lim sup_{T→∞} (1/T) ( lim inf_{n→∞} E_x[ ∫_0^{T∧τ_n} σ(X_t) { w_x(X_t, Y_t) − w_x(X_t, Y*_t) } dW_t ] ),

where the sequence {τ_n}, given by τ_n = inf{ t ≥ 0 : (X_t, Y_t) ∉ [1/n, n)² }, consists of stopping times converging to infinity, P_x-a.s., since (X_t, Y_t) ∈ (0, ∞)², P_x-a.s., for all t < ∞. The calculation

E_x[ ∫_0^{T∧τ_n} ( σ(X_t) { w_x(X_t, Y_t) − w_x(X_t, Y*_t) } )² dt ] ≤ T max_{(x,y) ∈ [1/n, n]², z ∈ [0, y∘(n)]} σ²(x) ( w_x(x, y) − w_x(x, z) )² < ∞

implies that

(4.20)  E_x[ ∫_0^{T∧τ_n} σ(X_t) { w_x(X_t, Y_t) − w_x(X_t, Y*_t) } dW_t ] = 0,

for all n and T < ∞. From (4.19) and (4.20) we conclude that

(4.21)  lim sup_{T→∞} (1/T) E_x[ ∫_0^T ( h(X_t, Y_t) − h(X_t, Y*_t) ) dt − K⁺ ( Y⁺_T − Y*⁺_T ) − K⁻ ( Y⁻_T − Y*⁻_T ) ] ≤ 0.

Finally, in view of (4.21), we calculate

J(Y_{x,y}) = lim sup_{T→∞} (1/T) E_x[ ∫_0^T h(X_t, Y_t) dt − K⁺ Y⁺_T − K⁻ Y⁻_T ]
  ≤ lim inf_{T→∞} (1/T) E_x[ ∫_0^T h(X_t, Y*_t) dt − K⁺ Y*⁺_T − K⁻ Y*⁻_T ]
  ≤ lim sup_{T→∞} (1/T) E_x[ ∫_0^T h(X_t, Y*_t) dt − K⁺ Y*⁺_T − K⁻ Y*⁻_T ]
  = J(Y*_{x,y}),

which completes the proof. □

The next result shows that the strategy Y* is not the only optimal strategy. Essentially, this is due to the fact that certain actions taken before a finite stopping time do not affect the performance index.

Lemma 4.4. Let τ be a stopping time satisfying τ < ∞ a.s., and let Ȳ_{x,y} be given by Ȳ_t = y 1_{t≤τ} + Y*_t 1_{t>τ}, i.e., the strategy that consists of taking no action up to time τ and then proceeding according to the optimal strategy described in Theorem 4.3. Then J(Ȳ_{x,y}) = J(Y*_{x,y}) = V(x, y), for every (x, y) ∈ (0, ∞)².

Proof. We calculate that

(4.22)  J(Ȳ_{x,y}) − J(Y*_{x,y})
  ≥ lim inf_{T→∞} (1/T) E_x[ ∫_0^{T∧τ} ( h(X_t, y) − h(X_t, Y*_t) ) dt + K⁺ Y*⁺_{T∧τ} + K⁻ Y*⁻_{T∧τ} ].

From assumptions (2.8)-(2.9), it follows that the absolute value of the negative part of

∫_0^{T∧τ} ( h(X_t, y) − h(X_t, Y*_t) ) dt + K⁺ Y*⁺_{T∧τ} + K⁻ Y*⁻_{T∧τ}

is less than or equal to C₁ (1 + y)(T ∧ τ) + ∫_0^{T∧τ} k(X_t) dt, which belongs to L¹(P_x), for all deterministic finite T. Hence, by Fatou's lemma,

(4.23)  lim inf_{T→∞} (1/T) E_x[ ∫_0^{T∧τ} ( h(X_t, y) − h(X_t, Y*_t) ) dt + K⁺ Y*⁺_{T∧τ} + K⁻ Y*⁻_{T∧τ} ]
  ≥ E_x[ lim inf_{T→∞} (1/T) ( ∫_0^{T∧τ} ( h(X_t, y) − h(X_t, Y*_t) ) dt + K⁺ Y*⁺_{T∧τ} + K⁻ Y*⁻_{T∧τ} ) ] = 0.

From (4.22) and (4.23) we conclude that J(Ȳ_{x,y}) ≥ J(Y*_{x,y}). The result then follows from Theorem 4.3. □

Our assumptions on b and σ ensure that X is recurrent. Hence, for any x̄ ∈ (0, ∞) and any initial value x ∈ (0, ∞), the hitting time of x̄, given by τ_{x̄} = inf{ t ≥ 0 : X_t = x̄ }, is finite almost surely (see, e.g., Karatzas and Shreve [16, Section 5.5]). By Lemma 4.4, it follows that

(4.24)  V(x, y) = V(x̄, y), for all x, x̄, y ∈ (0, ∞).

Moreover, given any initial condition (x, y) and any other capacity level ȳ ∈ (0, ∞), an immediate adjustment of the capacity from y to ȳ incurs only a finite cost, which does not affect the long-term average criterion, so that

(4.25)  V(x, y) ≥ V(x, ȳ) and V(x, ȳ) ≥ V(x, y),

where we have also used that the value function does not depend on the initial level of the economic indicator. From (4.24) and (4.25) we conclude that the value function V given by (2.14) is a constant that does not depend on the initial condition (x, y).
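The conclusion that V does not depend on the initial condition can be illustrated numerically. The sketch below (our own illustration, with hypothetical dynamics, payoff and band boundaries as placeholders for the paper's abstract objects) runs a band strategy from two very different initial conditions under common driving noise; the initial transient and adjustment costs are amortised away by the long horizon, so the two long-run averages essentially coincide.

```python
import math, random

# Hypothetical illustration of the constancy of the value function: the same
# band strategy, started from different (x, y), yields nearly identical
# long-run average net payoffs.  Dynamics, payoff and boundaries are assumed.

def run_band_strategy(x0, y0, T=2000.0, dt=0.01, seed=7,
                      kappa=1.0, mu=0.0, vol=0.5, c=0.8, K_up=0.3, K_down=0.1):
    G_inv = lambda x: max(0.0, x - 1.5)           # assumed increase boundary
    F_inv = lambda x: max(0.0, 2.0 * (x - 0.2))   # assumed decrease boundary
    rng = random.Random(seed)                     # common noise across runs
    z, y = math.log(x0), y0
    payoff = up = down = 0.0
    for _ in range(int(T / dt)):
        x = math.exp(z)
        new_y = min(max(y, G_inv(x)), max(G_inv(x), F_inv(x)))
        up += max(0.0, new_y - y)
        down += max(0.0, y - new_y)
        y = new_y
        payoff += (x * math.sqrt(y) - c * y) * dt
        z += kappa * (mu - z) * dt + vol * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return (payoff - K_up * up - K_down * down) / T

j1 = run_band_strategy(1.0, 1.0)
j2 = run_band_strategy(3.0, 6.0)
print(j1, j2)
```

With the same seed, the mean-reverting indicator paths couple quickly, so any gap between the two estimates comes only from the O(1) initial transient divided by T.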

References

[1] A. B. Abel and J. C. Eberly (1996), Optimal investment with costly reversibility, Review of Economic Studies, vol. 63.
[2] P. Bank (2005), Optimal control under a dynamic fuel constraint, SIAM Journal on Control and Optimization, to appear.
[3] A. Bensoussan and J. Frehse (1992), On Bellman equations of ergodic control in R^n, Journal für die Reine und Angewandte Mathematik, vol. 429.
[4] V. S. Borkar (1999), The value function in ergodic control of diffusion processes with partial observations, Stochastics and Stochastics Reports, vol. 67.
[5] V. S. Borkar and M. K. Ghosh (1988), Ergodic control of multidimensional diffusions I: The existence results, SIAM Journal on Control and Optimization, vol. 26.
[6] A. L. Bronstein and M. Zervos (2006), Sequential entry and exit decisions with an ergodic performance criterion, Stochastics, to appear.
[7] M. B. Chiarolla and U. G. Haussmann (2005), Explicit solution of a stochastic irreversible investment problem and its moving threshold, Mathematics of Operations Research, vol. 30.
[8] M. H. A. Davis (1993), Markov Models and Optimization, Chapman & Hall.
[9] M. H. A. Davis, M. A. H. Dempster, S. P. Sethi and D. Vermes (1987), Optimal capacity expansion under uncertainty, Advances in Applied Probability, vol. 19.
[10] T. E. Duncan, B. Maslowski and B. Pasik-Duncan (1998), Ergodic boundary/point control of stochastic semilinear systems, SIAM Journal on Control and Optimization, vol. 36.
[11] D. Gatarek and L. Stettner (1990), On the compactness method in general ergodic impulse control of Markov processes, Stochastics and Stochastics Reports, vol. 31.
[12] X. Guo and H. Pham (2005), Optimal partially reversible investments with entry decisions and general production function, Stochastic Processes and their Applications, vol. 115.
[13] A. Jack and M. Zervos (2006), Impulse control of one-dimensional Itô diffusions with an expected and a pathwise ergodic criterion, Applied Mathematics and Optimization, to appear.
[14] A. Jack and M. Zervos (2006), Impulse and absolutely continuous ergodic control of one-dimensional Itô diffusions, in From Stochastic Analysis to Mathematical Finance: Festschrift for Albert Shiryaev (Y. Kabanov, R. Liptser and J. Stoyanov, eds.), Springer.
[15] I. Karatzas (1983), A class of singular stochastic control problems, Advances in Applied Probability, vol. 15.
[16] I. Karatzas and S. E. Shreve (1988), Brownian Motion and Stochastic Calculus, Springer-Verlag.
[17] T. Ø. Kobila (1993), A class of solvable stochastic investment problems involving singular controls, Stochastics and Stochastics Reports, vol. 43.
[18] L. Kruk (2000), Optimal policies for n-dimensional singular stochastic control problems. II. The radially symmetric case. Ergodic control, SIAM Journal on Control and Optimization, vol. 39.
[19] T. G. Kurtz and R. H. Stockbridge (1998), Existence of Markov controls and characterization of optimal Markov controls, SIAM Journal on Control and Optimization, vol. 36.
[20] H. J. Kushner (1978), Optimality conditions for the average cost per unit time problem with a diffusion model, SIAM Journal on Control and Optimization, vol. 16.
[21] J. L. Menaldi, M. Robin and M. I. Taksar (1992), Singular ergodic control for multidimensional Gaussian processes, Mathematics of Control, Signals and Systems, vol. 5.
[22] A. Merhi and M. Zervos (2005), A model for reversible investment capacity expansions, submitted.
[23] A. Øksendal (2000), Irreversible investment problems, Finance and Stochastics, vol. 4.
[24] G. Peskir (2001), Bounding the maximal height of a diffusion by the time elapsed, Journal of Theoretical Probability, vol. 14, no. 3.

[25] L. C. G. Rogers and D. Williams (2000), Diffusions, Markov Processes and Martingales, Volume 2, Cambridge University Press.
[26] R. Sadowy and L. Stettner (2002), On risk-sensitive ergodic impulsive control of Markov processes, Applied Mathematics and Optimization, vol. 45.
[27] H. Wang (2003), Capacity expansion with exponential jump diffusion processes, Stochastics and Stochastics Reports, vol. 75.

(Arne Løkka) Department of Mathematics, King's College London, Strand, London WC2R 2LS, United Kingdom
E-mail address: arne.lokka@kcl.ac.uk

(Mihail Zervos) Department of Mathematics, King's College London, Strand, London WC2R 2LS, United Kingdom
E-mail address: mihail.zervos@kcl.ac.uk


Homogeneous Stochastic Differential Equations

Homogeneous Stochastic Differential Equations WDS'1 Proceedings of Contributed Papers, Part I, 195 2, 21. ISBN 978-8-7378-139-2 MATFYZPRESS Homogeneous Stochastic Differential Equations J. Bártek Charles University, Faculty of Mathematics and Physics,

More information

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:

More information

Goodness of fit test for ergodic diffusion processes

Goodness of fit test for ergodic diffusion processes Ann Inst Stat Math (29) 6:99 928 DOI.7/s463-7-62- Goodness of fit test for ergodic diffusion processes Ilia Negri Yoichi Nishiyama Received: 22 December 26 / Revised: July 27 / Published online: 2 January

More information

A Short Introduction to Diffusion Processes and Ito Calculus

A Short Introduction to Diffusion Processes and Ito Calculus A Short Introduction to Diffusion Processes and Ito Calculus Cédric Archambeau University College, London Center for Computational Statistics and Machine Learning c.archambeau@cs.ucl.ac.uk January 24,

More information

Optimal Control of Partially Observable Piecewise Deterministic Markov Processes

Optimal Control of Partially Observable Piecewise Deterministic Markov Processes Optimal Control of Partially Observable Piecewise Deterministic Markov Processes Nicole Bäuerle based on a joint work with D. Lange Wien, April 2018 Outline Motivation PDMP with Partial Observation Controlled

More information

Hamilton-Jacobi-Bellman Equation of an Optimal Consumption Problem

Hamilton-Jacobi-Bellman Equation of an Optimal Consumption Problem Hamilton-Jacobi-Bellman Equation of an Optimal Consumption Problem Shuenn-Jyi Sheu Institute of Mathematics, Academia Sinica WSAF, CityU HK June 29-July 3, 2009 1. Introduction X c,π t is the wealth with

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

The Mabinogion Sheep Problem

The Mabinogion Sheep Problem The Mabinogion Sheep Problem Kun Dong Cornell University April 22, 2015 K. Dong (Cornell University) The Mabinogion Sheep Problem April 22, 2015 1 / 18 Introduction (Williams 1991) we are given a herd

More information

Constrained Optimal Stopping Problems

Constrained Optimal Stopping Problems University of Bath SAMBa EPSRC CDT Thesis Formulation Report For the Degree of MRes in Statistical Applied Mathematics Author: Benjamin A. Robinson Supervisor: Alexander M. G. Cox September 9, 016 Abstract

More information

lim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f),

lim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f), 1 Part I Exercise 1.1. Let C n denote the number of self-avoiding random walks starting at the origin in Z of length n. 1. Show that (Hint: Use C n+m C n C m.) lim n C1/n n = inf n C1/n n := ρ.. Show that

More information

Partial Differential Equations and Diffusion Processes

Partial Differential Equations and Diffusion Processes Partial Differential Equations and Diffusion Processes James Nolen 1 Department of Mathematics, Stanford University 1 Email: nolen@math.stanford.edu. Reproduction or distribution of these notes without

More information

STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES

STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES ERHAN BAYRAKTAR AND MIHAI SÎRBU Abstract. We adapt the Stochastic Perron s

More information

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.

Stochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier. Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of

More information

Question 1. The correct answers are: (a) (2) (b) (1) (c) (2) (d) (3) (e) (2) (f) (1) (g) (2) (h) (1)

Question 1. The correct answers are: (a) (2) (b) (1) (c) (2) (d) (3) (e) (2) (f) (1) (g) (2) (h) (1) Question 1 The correct answers are: a 2 b 1 c 2 d 3 e 2 f 1 g 2 h 1 Question 2 a Any probability measure Q equivalent to P on F 2 can be described by Q[{x 1, x 2 }] := q x1 q x1,x 2, 1 where q x1, q x1,x

More information

Implicit Functions, Curves and Surfaces

Implicit Functions, Curves and Surfaces Chapter 11 Implicit Functions, Curves and Surfaces 11.1 Implicit Function Theorem Motivation. In many problems, objects or quantities of interest can only be described indirectly or implicitly. It is then

More information

Conjugate duality in stochastic optimization

Conjugate duality in stochastic optimization Ari-Pekka Perkkiö, Institute of Mathematics, Aalto University Ph.D. instructor/joint work with Teemu Pennanen, Institute of Mathematics, Aalto University March 15th 2010 1 / 13 We study convex problems.

More information

On semilinear elliptic equations with measure data

On semilinear elliptic equations with measure data On semilinear elliptic equations with measure data Andrzej Rozkosz (joint work with T. Klimsiak) Nicolaus Copernicus University (Toruń, Poland) Controlled Deterministic and Stochastic Systems Iasi, July

More information