Optimal cash management using impulse control
Peter Lakner and Josh Reed
New York University
Abstract

We consider the impulse control of Levy processes under the infinite horizon, discounted cost criterion. Our motivating example is the cash management problem in which a controller is charged a fixed plus proportional cost for adding to or withdrawing from her reserve, plus an opportunity cost for keeping any cash on hand. Our main result is to provide a verification theorem for the optimality of control band policies in this scenario. We also analyze the transient and steady-state behavior of the controlled process under control band policies and explicitly solve for the optimal policy in the case in which the Levy process to be controlled is the sum of a Brownian motion with drift and a compound Poisson process with exponentially distributed jump sizes.

1 Introduction

Impulse control problems have a long history of application to the cash management problem. The technique of impulse control was originally developed by Bensoussan and Lions [2, 3]. In Constantinides and Richard [7] it is shown that for the particular case of impulse control of Brownian motion, the optimal solution to the cash management problem is a control band policy. Harrison, Sellke and Taylor [8] also consider the impulse control of Brownian motion, similar to [7], but explicitly calculate the critical parameters of the optimal policy. Recently, Ormeci, Dai and Vande Vate [12] have considered impulse control of Brownian motion under the average cost criterion and again show that a control band policy is optimal. Cadenillas, Zapatero, and Sarkar [6] and Cadenillas, Lakner, and Pinedo [5] solved the Brownian case with a mean-reverting drift. In the present paper, we consider the impulse control of Levy processes. Our motivating application is the cash management problem in which there exists a system manager who must control the amount of cash she has on hand.
We assume that the manager's cash on hand fluctuates due to randomly occurring withdrawals from and deposits to her account,
but that the manager is charged a fixed plus proportional cost for any specific, intentional addition to or withdrawal from her reserves, and that there exists an opportunity cost for keeping cash on hand. The manager's objective is to minimize her long run opportunity cost of keeping cash on hand plus any cost incurred from depositing to or withdrawing from the reserve. An alternative motivating application which is also sometimes considered in the literature is a manager who wishes to control her inventory level. The manager's inventory level fluctuates randomly and she may increase or decrease it at will by expediting or salvaging parts, paying a fixed plus proportional cost to do so. The manager's objective is to minimize her long run inventory holding costs plus the costs of expediting and salvaging. Our main result is to provide a verification theorem for the optimality of control band policies for the impulse control of Levy processes. In the specific case in which the Levy process to be controlled is spectrally positive, we also explicitly calculate its Laplace transform with respect to time and its steady-state distribution under any control band policy. In related work, Bar-Ilan, Perry and Stadje [4] have also considered the problem of impulse control of Levy processes for the specific case in which the Levy process is the sum of a Brownian motion and a compound Poisson process. Assuming that a control band policy is optimal, their main results evaluate the cost functionals of the resulting policy through a fundamental identity derived from the martingale originally introduced by Kella and Whitt [10]. The remainder of the paper is organized as follows. In Section 2, we present the model which we analyze throughout the paper. Section 3 contains our main results, included in which is the verification theorem, Theorem 3. In Section 4 we analyze control band policies under the assumption that the Levy process to be controlled is spectrally positive.
Our main results in this section provide the Laplace transform with respect to time and the steady-state distribution of the controlled Levy process in terms of its scale functions and potential measures. In Section 5 we provide an example in which the Levy process to be controlled is the sum of a Brownian motion with constant drift and a compound Poisson process with jump sizes which are exponentially distributed. In this case, we are able to explicitly solve for the value function and parameters of the optimal control band policy in
addition to identifying the steady-state distribution of the controlled process. The Appendix contains proofs of some technical results which are required in the paper.

2 The Model

In this Section we provide the specifics of the model described in the Introduction. All forthcoming processes are assumed to live on a probability triplet equipped with a filtration F = {F_t, 0 ≤ t < ∞}. We begin by assuming that Y_t is a Levy process with Levy measure ν such that

    ∫_{|y| ≥ 1} |y| ν(dy) < ∞.   (1)

The process Y_t will be used to represent the cash on hand process assuming that the manager exerts no control, making no deposits to or withdrawals from her fund. Let J(ω, dt, dy) = J(dt, dy) be the jump measure of Y. Then Y_t has the Ito-Levy decomposition

    Y_t = µt + σw_t + A_t + M_t,

where M_t is the martingale

    M_t = ∫_{|y| < 1} y {J((0, t], dy) − t ν(dy)}

and A is the sum of the large jumps,

    A_t = Σ_{0 < s ≤ t} ΔY_s 1{|ΔY_s| ≥ 1}.

The process w is assumed to be a standard Wiener process and µ is a constant. We do not make any assumption regarding σ; it may be zero or non-zero. Also, we allow ν(R) = 0, in which case Y is continuous. The case when both σ = 0 and ν(R) = 0 is also included, although this case is trivial (Y is deterministic in this case). In general, we will use P_x to denote the probability measure under which Y_t is started from x and E_x its associated expectation operator. We let

    (T, Ξ) = (τ_1, τ_2, ..., τ_n, ..., ξ_1, ξ_2, ..., ξ_n, ...)
denote the impulse control policy used by the manager, where τ_1 < τ_2 < τ_3 < ... are stopping times and ξ_n ≠ 0 (a.s.) is an F_{τ_n}-measurable random variable for each n ≥ 1. Positive values of ξ_n represent deposits by the manager into her fund and negative values represent withdrawals. In order to simplify future developments we define τ_0 = 0. In principle we also allow only finitely many interventions with positive probability. This means that it is possible that for some ω ∈ Ω only finitely many, say m(ω), interventions happen. For those ω's we leave (τ_i, ξ_i) undefined whenever i > m(ω). As described in the Introduction, the controlled cash on hand process X_t follows the dynamics

    X_t = Y_t + Σ_{i=1}^∞ 1{τ_i ≤ t} ξ_i

and has RCLL paths. Assuming a quadratic opportunity cost function (x − ρ)² for being x − ρ units away from the level ρ, the manager's total cost is the sum of her expected discounted penalty costs as well as impulse control costs and is given by

    I(x, T, Ξ) = E_x [ ∫_0^∞ e^{−λt} (X_t − ρ)² dt + Σ_{n=1}^∞ e^{−λτ_n} g(ξ_n) ],   (2)

where λ > 0 is a discount factor and the manager's impulse control costs are given by

    g(ξ) = C + cξ, if ξ > 0;   g(ξ) = D − dξ, if ξ < 0.

We leave g(0) undefined. We also assume that the fixed costs C and D are positive and the variable costs c and d are non-negative constants. If for some ω ∈ Ω there are only finitely many, say m(ω), interventions, then in (2) the infinity in the upper limit of the sum must be replaced by m(ω). One of our primary objectives in this paper is to identify the optimal impulse control (T, Ξ) that minimizes the above penalty, together with its associated value function

    V(x) = inf {I(x, T, Ξ) : (T, Ξ) is an impulse control}.

In the section that follows we show that the optimal impulse control takes the form of a control band policy, which arises frequently as the solution to impulse control problems. Moreover,
in several specific instances we are able to explicitly identify the parameters corresponding to this policy.

3 Main Results

In this Section, we provide the main result of the paper, Theorem 3, providing an ordinary differential equation which the value function V must satisfy on the continuation region. Also included in the statement of Theorem 3 is the optimal impulse control policy, which turns out to be a double bandwidth control policy. We begin with some preliminary results before providing the statement of Theorem 3. For a function f : R → R we define the operator

    Mf(x) = inf {f(x + η) + g(η) : η ∈ R \ {0}}.

We shall also use the linear operator A associated with the uncontrolled process Y, that is, for f ∈ C²(R),

    Af(x) = (σ²/2) f''(x) + µ f'(x) + ∫_R [ f(x + y) − f(x) − f'(x) y 1{|y| < 1} ] ν(dy).

Our assumption (1) and Taylor's theorem imply that this value is finite whenever f' and f'' are bounded. In order to prove our results we shall actually need to extend the domain of A to the larger class of functions D defined below.

Definition 1 Let D be the class of functions f : R → R for which there exist an integer n ≥ 0 and a set of real numbers S = {x_1, x_2, ..., x_n} (if n = 0 then S is the empty set) such that the following conditions hold:

(i) f ∈ C¹(R) ∩ C²(R \ S);

(ii) The derivative f' is bounded on R and the second derivative f'' is bounded on R \ S.

For f ∈ D it may be shown that Ito's rule holds in its usual form (see the Appendix).
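For concreteness, the operator A can be evaluated numerically when Y is a Brownian motion with drift plus compound Poisson jumps with an exponential jump density. The following sketch uses finite differences for f' and f'' and truncated quadrature for the jump integral; the parameter values are purely illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative parameters for Y_t = mu*t + sigma*w_t + compound Poisson jumps
# with intensity jump_rate and jump density theta*exp(-theta*y) on (0, inf).
mu, sigma, jump_rate, theta = 0.1, 1.0, 1.0, 2.0

def A(f, x, h=1e-5, y_max=50.0, n_grid=200_000):
    """Approximate Af(x) = (sigma^2/2) f''(x) + mu f'(x)
    + int [f(x+y) - f(x) - f'(x) y 1{|y|<1}] nu(dy)."""
    f1 = (f(x + h) - f(x - h)) / (2 * h)               # central difference for f'(x)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2     # central difference for f''(x)
    y = np.linspace(1e-8, y_max, n_grid)               # truncate the jump integral
    integrand = (f(x + y) - f(x) - f1 * y * (y < 1)) * jump_rate * theta * np.exp(-theta * y)
    dy = y[1] - y[0]
    jump_part = np.sum((integrand[1:] + integrand[:-1]) / 2) * dy   # trapezoid rule
    return 0.5 * sigma ** 2 * f2 + mu * f1 + jump_part
```

For a linear test function f(x) = x one obtains Af(x) = µ plus the large-jump compensation term, and for f(x) = x² at x = 0 one obtains σ² plus the second moment of the jump measure, which makes the sketch easy to spot-check against closed forms.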
We now conjecture that the optimal impulse control policy takes a double bandwidth control policy form. In particular, we assume that there exist constants a < α ≤ β < b such that

    τ_n = inf { t ≥ τ_{n−1} : X_{t−} + ΔY_t ∈ R \ (a, b) }   (3)

(recall that τ_0 = 0) and that for n ≥ 1 the jump size is given by

    ξ_n = β − (X(τ_n−) + ΔY(τ_n)), if X(τ_n−) + ΔY(τ_n) ≥ b;
    ξ_n = α − (X(τ_n−) + ΔY(τ_n)), if X(τ_n−) + ΔY(τ_n) ≤ a.   (4)

Proposition 2 Suppose that for some a < α ≤ β < b the sequence of stopping times (τ*_n)_{n ≥ 1} is given in (3). If Y is not constant then τ*_n < ∞ and τ*_n < τ*_{n+1} for n ≥ 1 almost surely, and lim_{n→∞} τ*_n = ∞.

Proof: It is obvious that τ*_n < ∞ since otherwise the Levy process Y would be bounded, that is, a constant. Next we show τ*_n < τ*_{n+1}. Let Y^α be the process Y started at Y_0 = α and Y^β be the process Y started at Y_0 = β. Let τ^α = inf {s : Y^α_s ∉ (a, b)} and τ^β = inf {s : Y^β_s ∉ (a, b)}. By the right-continuity of Y we have τ^α > 0 and τ^β > 0 a.s. But conditionally on X(τ*_n) = α the inter-arrival time τ*_{n+1} − τ*_n has the same distribution as τ^α, and conditionally on X(τ*_n) = β the inter-arrival time τ*_{n+1} − τ*_n has the same distribution as τ^β. Hence τ*_n < τ*_{n+1}. Finally we show that lim_{n→∞} τ*_n = ∞. One can be convinced easily that either P[X(τ*_n) = α i.o.] = 1 or P[X(τ*_n) = β i.o.] = 1, or possibly both (we omit the details). Suppose that the first relation is true. Then let S_1 = 0 and S_{n+1} = min{τ*_m > S_n : X(τ*_m) = α}. Then S_2 − S_1, S_3 − S_2, ... is an i.i.d. sequence of random variables. Since S_{n+1} − S_n > 0 a.s., we have E[S_{n+1} − S_n] > 0, which implies that Σ_n (S_{n+1} − S_n) diverges, i.e. S_n → ∞. But this implies τ*_n → ∞.

The following is now the main result of this Section.

Theorem 3 Suppose that there exist constants a < α ≤ β < b and a function f : R → (0, ∞) such that f ∈ C¹(R) ∩ C²(R \ {a, b}), f'' is bounded on R \ {a, b}, and the following conditions are satisfied:

(i) Af(x) − λf(x) + (x − ρ)² = 0 for x ∈ (a, b);
(ii) f(x) ≤ Mf(x) for x ∈ (a, b);

(iii) Af(x) − λf(x) + (x − ρ)² ≥ 0 for x ∈ R \ [a, b];

(iv) f(a) = Mf(a) = f(α) + C + c(α − a), f(b) = Mf(b) = f(β) + D + d(b − β);

(v) f is linear on (−∞, a] with slope −c, and also linear on [b, ∞) with slope d.

Then f(x) = V(x). Furthermore, the control (T*, Ξ*) given in (3) - (4) with these values of a, α, β, b is optimal.

Remark: Note that τ*_1 = 0 if and only if x ∈ R \ (a, b). For n > 1 it is possible that ΔY(τ*_n) = 0, in which case X(τ*_n−) is either equal to a or b, ξ*_n is α − a or β − b, and X(τ*_n) equals α or β, respectively. However, it is also possible that ΔY(τ*_n) ≠ 0, in which case X(τ*_n−) + ΔY(τ*_n) may be either smaller than a or larger than b, but we still have X(τ*_n) equal to α or β, respectively. Notice also that τ*_n < τ*_{n+1} almost surely for all n ≥ 1. In order to provide the proof of Theorem 3, we first need the following.

Lemma 4 Let (T, Ξ) be a control such that I(x, T, Ξ) < ∞ and let F be the event on which there are infinitely many interventions. Then

    lim_{t→∞} E_x [ e^{−λt} |X_t| ] = 0   (5)

and

    lim_{n→∞} τ_n(ω) = ∞, for almost every ω ∈ F.   (6)

Proof: First we prove (5). We have

    E_x [ ∫_0^∞ e^{−λt} |X_t| dt ] ≤ E_x [ ∫_0^∞ e^{−λt} |X_t − ρ| dt ] + ∫_0^∞ e^{−λt} |ρ| dt
    = E_x [ ∫_0^∞ e^{−λt} |X_t − ρ| 1{|X_t − ρ| ≤ 1} dt ] + E_x [ ∫_0^∞ e^{−λt} |X_t − ρ| 1{|X_t − ρ| > 1} dt ] + ∫_0^∞ e^{−λt} |ρ| dt
    ≤ ∫_0^∞ e^{−λt} dt + E_x [ ∫_0^∞ e^{−λt} (X_t − ρ)² dt ] + ∫_0^∞ e^{−λt} |ρ| dt < ∞.
Next we prove (6). Suppose the opposite, i.e., that P(G) > 0, where G = {ω ∈ F : τ_n ↑ τ_∞ < ∞}. Then

    E_x [ Σ_{n=1}^∞ e^{−λτ_n} g(ξ_n) 1_F ] ≥ E_x [ Σ_{n=1}^∞ e^{−λτ_n} 1_G ] min {C, D} ≥ E_x [ Σ_{n=1}^∞ e^{−λτ_∞} 1_G ] min {C, D} = ∞,

which contradicts I(x, T, Ξ) < ∞. By Lemma 4, without any loss of generality we can and shall consider only those policies for which (5) and (6) hold.

Proof of Theorem 3: Using condition (v) we can extend (iv) to (−∞, a] and [b, ∞):

    f(x) = Mf(x) = f(α) + C + c(α − x), x ≤ a   (7)
    f(x) = Mf(x) = f(β) + D + d(x − β), x ≥ b.   (8)

Indeed, suppose that x < a. Then for η ∈ [0, a − x] the quantity f(x + η) + C + cη = f(x) + C does not depend on η. Hence by (iv) and (v),

    Mf(x) = inf{f(x + η) + C + cη : η ≥ a − x} = inf{f(a + γ) + C + c(a + γ − x) : γ ≥ 0}
    = Mf(a) + c(a − x) = f(a) + c(a − x) = f(x).

Also by (iv),

    Mf(a) + c(a − x) = f(α) + C + c(α − a) + c(a − x) = f(α) + C + c(α − x).

We deal with the x > b case similarly. Let {τ_1, τ_2, ..., ξ_1, ξ_2, ...} be an arbitrary impulse control for which (5) and (6) hold. Let (S_n)_{n ≥ 1} be a sequence of strictly increasing stopping times such that S_n → ∞ almost surely. We shall specify this sequence later in the proof. Now we define the sequence of stopping times T_n which will be the mixture of (τ_n)_{n ≥ 1} and (S_n)_{n ≥ 1}. More precisely, T_0 = 0 and

    T_n = inf{τ_k : τ_k > T_{n−1}} ∧ inf{S_l : S_l > T_{n−1}}.

In order to make this definition meaningful even on the event F^c (when there are only finitely many interventions) we define the infimum of the empty set to be infinity. In either case (both on F and on F^c) we have T_n → ∞ because S_n → ∞. Finally, let T_n be truncated at t, i.e., replace T_n by T_n ∧ t.
We have the following decomposition:

    e^{−λT_n} f(X_{T_n}) − f(x) = Σ_{i=1}^n { e^{−λT_i} f(X_{T_i−} + ΔY_{T_i}) − e^{−λT_{i−1}} f(X_{T_{i−1}}) } + Σ_{i=1}^n e^{−λT_i} [ f(X_{T_i}) − f(X_{T_i−} + ΔY_{T_i}) ].   (9)

From Ito's rule it follows that

    e^{−λT_i} f(X_{T_i−} + ΔY_{T_i}) − e^{−λT_{i−1}} f(X_{T_{i−1}})
    = ∫_{(T_{i−1}, T_i]} e^{−λs} f'(X_{s−}) {dA_s + dM_s + σ dw_s} + ∫_{T_{i−1}}^{T_i} e^{−λs} { (σ²/2) f''(X_s) + µ f'(X_s) − λ f(X_s) } ds
    + Σ_{T_{i−1} < s ≤ T_i} e^{−λs} { f(X_s) − f(X_{s−}) − f'(X_{s−}) ΔX_s }.

The right-hand side can be written as

    ∫_{(T_{i−1}, T_i]} e^{−λs} f'(X_{s−}) {dM_s + σ dw_s} + ∫_{T_{i−1}}^{T_i} e^{−λs} { (σ²/2) f''(X_s) + µ f'(X_s) − λ f(X_s) } ds
    + Σ_{T_{i−1} < s ≤ T_i} e^{−λs} { f(X_{s−} + ΔX_s) − f(X_{s−}) − f'(X_{s−}) ΔX_s 1{|ΔX_s| < 1} },

which is equal to

    ∫_{(T_{i−1}, T_i]} e^{−λs} f'(X_{s−}) {dM_s + σ dw_s} + ∫_{T_{i−1}}^{T_i} e^{−λs} { (σ²/2) f''(X_s) + µ f'(X_s) − λ f(X_s) } ds
    + ∫_{(T_{i−1}, T_i]} ∫_R e^{−λs} { f(X_{s−} + y) − f(X_{s−}) − f'(X_{s−}) y 1{|y| < 1} } J(ds, dy).

By condition (1) and the boundedness of f' and f'' we have

    ∫_R | f(X_{s−} + y) − f(X_{s−}) − f'(X_{s−}) y 1{|y| < 1} | ν(dy) < ∞,

hence the above expression can be written as

    ∫_{(T_{i−1}, T_i]} e^{−λs} f'(X_{s−}) {dM_s + σ dw_s} + ∫_{T_{i−1}}^{T_i} e^{−λs} {Af(X_s) − λ f(X_s)} ds
    + ∫_{(T_{i−1}, T_i]} ∫_R e^{−λs} { f(X_{s−} + y) − f(X_{s−}) − f'(X_{s−}) y 1{|y| < 1} } (J(ds, dy) − ds ν(dy)).

By conditions (i) and (iii) we end up with the inequality

    e^{−λT_i} f(X_{T_i−} + ΔY_{T_i}) − e^{−λT_{i−1}} f(X_{T_{i−1}})
    ≥ ∫_{(T_{i−1}, T_i]} e^{−λs} f'(X_{s−}) {dM_s + σ dw_s}
    + ∫_{(T_{i−1}, T_i]} ∫_R e^{−λs} { f(X_{s−} + y) − f(X_{s−}) − f'(X_{s−}) y 1{|y| < 1} } (J(ds, dy) − ds ν(dy))
    − ∫_{T_{i−1}}^{T_i} e^{−λs} (X_s − ρ)² ds.   (10)

There is a minor problem here since (i) and (iii) imply the inequality Af(x) − λf(x) + (x − ρ)² ≥ 0 only for x ∈ R \ {a, b}. But either σ = 0, in which case the inequality holds for all x ∈ R, or σ ≠ 0, in which case (10) still holds by Lemma 12 in the Appendix. For our candidate optimal policy (T*, Ξ*) equality actually holds by condition (i).
By (7), (8), and (ii), whenever T_i = τ_j for some j, then

    f(X_{T_i}) − f(X_{T_i−} + ΔY_{T_i}) ≥ −g(X_{T_i} − (X_{T_i−} + ΔY_{T_i})) = −g(ξ_j),

and the left-hand side is zero if T_i is not equal to any of the τ_j's. By (7) and (8) the above inequality is actually an equality for (T*, Ξ*). Adding up all these inequalities and considering (9), we get

    e^{−λT_n} f(X_{T_n}) − f(x) ≥ ∫_{(0, T_n]} e^{−λs} f'(X_{s−}) {dM_s + σ dw_s}
    + ∫_{(0, T_n]} ∫_R e^{−λs} { f(X_{s−} + y) − f(X_{s−}) − f'(X_{s−}) y 1{|y| < 1} } (J(ds, dy) − ds ν(dy))
    − ∫_0^{T_n} e^{−λs} (X_s − ρ)² ds − Σ_{j : τ_j ≤ T_n} e^{−λτ_j} g(ξ_j).

Now we specify (S_n)_{n ≥ 1}: it is a reducing sequence for the local martingale

    U_t = ∫_{(0, t]} ∫_R e^{−λs} { f(X_{s−} + y) − f(X_{s−}) − f'(X_{s−}) y 1{|y| < 1} } (J(ds, dy) − ds ν(dy)),

so that {U_{t ∧ S_n}, t ∈ [0, ∞)} is a martingale for every n. Hence by taking expectations all the martingale terms disappear (recall that f' is bounded), so we end up with

    f(x) ≤ E [ e^{−λT_n} f(X_{T_n}) ] + E [ ∫_0^{T_n} e^{−λs} (X_s − ρ)² ds + Σ_{j : τ_j ≤ T_n} e^{−λτ_j} g(ξ_j) ].

Now if n → ∞, then the Monotone Convergence Theorem implies

    f(x) ≤ E [ e^{−λt} f(X_t) ] + E [ ∫_0^t e^{−λs} (X_s − ρ)² ds + Σ_{j : τ_j ≤ t} e^{−λτ_j} g(ξ_j) ].

By condition (v), f is Lipschitz continuous, so by (5) we have E[e^{−λt} f(X_t)] → 0 as t → ∞. So by letting t → ∞ a second application of the Monotone Convergence Theorem gives

    f(x) ≤ E [ ∫_0^∞ e^{−λs} (X_s − ρ)² ds + Σ_{j=1}^∞ e^{−λτ_j} g(ξ_j) ].

Equality holds for (T*, Ξ*), hence the proof is complete.
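The double bandwidth policy (3)-(4) and the cost criterion (2) can be illustrated with a small Monte Carlo sketch. The Euler grid, band parameters, cost constants, and Levy dynamics below (drift plus Brownian noise plus unit-rate exponential jumps) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def run_policy(a, alpha, beta, b, x0, C=2.0, c=0.5, D=2.0, d=0.5,
               lam=0.5, rho=0.0, mu=0.1, sigma=1.0, jump_rate=1.0,
               jump_mean=0.5, T=10.0, dt=1e-3, seed=0):
    """Simulate the band policy: intervene to alpha (resp. beta) whenever the
    state leaves (a, b) from below (resp. above); accumulate discounted cost (2)."""
    rng = np.random.default_rng(seed)
    x, cost = x0, 0.0
    n = int(T / dt)
    for k in range(n):
        t = k * dt
        cost += np.exp(-lam * t) * (x - rho) ** 2 * dt        # holding cost
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < jump_rate * dt:                      # positive jump
            x += rng.exponential(jump_mean)
        if x <= a or x >= b:                                   # intervention
            target = alpha if x <= a else beta
            xi = target - x                                    # jump size xi_n
            cost += np.exp(-lam * t) * ((C + c * xi) if xi > 0 else (D - d * xi))
            x = target
    return cost

# Crude estimate of I(x, T*, Xi*) for one (assumed) band (a, alpha, beta, b).
costs = [run_policy(-1.0, -0.2, 0.2, 1.0, 0.0, seed=s) for s in range(20)]
est = float(np.mean(costs))
```

In principle one could wrap this estimator in a numerical optimizer over (a, α, β, b) to approximate the optimal band; here it only serves to make the cost functional concrete.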
4 Analysis of the Optimal Control (T*, Ξ*)

We now set out to determine the transient and steady-state behavior of the controlled cash on hand process X_t under the optimal control (T*, Ξ*), assuming that Y_t is a spectrally positive Levy process; in other words, ν((−∞, 0)) = 0 and Y_t is not a subordinator. The case of a spectrally negative Levy process may be treated similarly. Our main result will be to determine the Laplace transform with respect to time of the transition probabilities of X_t and also to determine the limiting distribution of X_t. Let A ∈ B(R) be a Borel set of R and let e_q be an exponential random variable with rate q independent of X. Now consider P_x[X_{e_q} ∈ A] for x ∈ (a, b). It follows, conditioning on the value of e_q relative to the stopping time τ*_1, that

    P_x[X_{e_q} ∈ A] = E_x[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] + E_x[1{X_{e_q} ∈ A} 1{e_q ≥ τ*_1}].   (11)

However, since X_t = Y_t for t < τ*_1, it follows by the memoryless property of the exponential distribution and the strong Markov property that E_x[1{X_{e_q} ∈ A} 1{e_q ≥ τ*_1}] is equal to

    E_α[1{X_{e_q} ∈ A}] P_x[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}] + E_β[1{X_{e_q} ∈ A}] P_x[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}].

Substituting the above into (11), we have

    E_x[1{X_{e_q} ∈ A}] = E_x[1{X_{e_q} ∈ A} 1{e_q < τ*_1}]   (12)
    + E_α[1{X_{e_q} ∈ A}] P_x[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}]
    + E_β[1{X_{e_q} ∈ A}] P_x[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}].

Setting x = α in (12), we obtain

    E_α[1{X_{e_q} ∈ A}] (1 − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}])   (13)
    − E_β[1{X_{e_q} ∈ A}] P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}] = E_α[1{X_{e_q} ∈ A} 1{e_q < τ*_1}],
and similarly, setting x = β, we have

    E_β[1{X_{e_q} ∈ A}] (1 − P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}])   (14)
    − E_α[1{X_{e_q} ∈ A}] P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}] = E_β[1{X_{e_q} ∈ A} 1{e_q < τ*_1}].

Now note that (13) and (14) constitute a set of linear equations for E_α[1{X_{e_q} ∈ A}] and E_β[1{X_{e_q} ∈ A}]. Moreover, so long as q > 0, we have that

    (1 − P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}])(1 − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}]) > P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}] P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}],

and so the determinant associated with (13) and (14) is non-zero and hence a solution exists. Solving for E_α[1{X_{e_q} ∈ A}] and E_β[1{X_{e_q} ∈ A}] then yields that E_α[1{X_{e_q} ∈ A}] is given by

    (1/C_{a,α,β,b}) ( E_α[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] (1 − P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}])   (15)
    + E_β[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}] )

and E_β[1{X_{e_q} ∈ A}] is given by

    (1/C_{a,α,β,b}) ( E_α[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}]   (16)
    + E_β[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] (1 − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}]) ),

where

    C_{a,α,β,b} = (1 − P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}])(1 − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}]) − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}] P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}].

We now proceed to compute the terms appearing on the right-hand sides of (15) and (16). First note that τ*_1 is equal to the first time the Levy process Y_t exits the open interval (a, b). We then have that

    E_x[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] = E_x[1{Y_{e_q} ∈ A} 1{e_q < τ*_1}] = q ∫_0^∞ e^{−qt} E_x[1{Y_t ∈ A} 1{τ*_1 > t}] dt
    = q ∫_0^∞ ∫_A e^{−qt} P_x[Y_t ∈ dy, τ*_1 > t] dt = q ∫_A U^{(q)}(x, dy),

where U^{(q)} is the q-potential measure of Y_t killed on exiting (a, b). By Theorem 8.7 of [11], if Y_t is spectrally positive then its q-potential measure U^{(q)}(x, dy) has a density u^{(q)}(x, y) given by

    u^{(q)}(x, y) = (W^{(q)}(b − x) / W^{(q)}(b − a)) W^{(q)}(y − a) − W^{(q)}(y − x),   (17)

where

    ∫_0^∞ e^{−sy} W^{(q)}(y) dy = 1 / (ψ(−s) − q)   (18)

whenever s is large enough so that ψ(−s) > q, and ψ(s) = log E[e^{sY_1}] is the Laplace exponent of Y_t. Note that ψ(s) < ∞ for all s ≤ 0 by the spectral positivity of Y. Also note that

    P_x[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}] = ∫_0^∞ q e^{−qt} P_x[{t ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}] dt
    = E_x [ ∫_{τ*_1}^∞ q e^{−qt} dt 1{Y(τ*_1) ≤ a} ] = E_x [ e^{−qτ*_1} 1{Y(τ*_1) ≤ a} ]
    = Z^{(q)}(b − x) − Z^{(q)}(b − a) (W^{(q)}(b − x) / W^{(q)}(b − a)),

where the final equality follows from Theorem 8.1 of [11] and we have the relationship

    Z^{(q)}(x) = 1 + q ∫_0^x W^{(q)}(y) dy.

In a similar fashion, using Theorem 8.1 of [11], one may compute

    P_x[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}] = W^{(q)}(b − x) / W^{(q)}(b − a).

Substituting into the above, one obtains an expression for the Laplace transform of the transition probabilities of X_t.
We now proceed towards obtaining an expression for the limiting distribution of X_t as t → ∞. Note first that by the strong Markov property, X_t is a regenerative process with possible regeneration points either α or β. Let us consider the point α and define n_α = inf{n ≥ 1 : X(τ*_n) = α}. By the standard theory of regenerative processes, see for instance Theorem 1.2 of Chapter VI of [1], if we may show that E_α[τ*_{n_α}] < ∞ and that τ*_{n_α} is non-lattice, then lim_{t→∞} P_x(X_t ∈ A) = π(A) exists for all A ∈ B(R) and x ∈ R and is given by

    π(A) = E_α [ ∫_0^{τ*_{n_α}} 1{X_s ∈ A} ds ] / E_α[τ*_{n_α}].

The following proposition now shows that E_α[τ*_{n_α}] < ∞.

Proposition 5 If the Levy process Y_t is spectrally positive, then E_α[τ*_{n_α}] < ∞.

Proof: Note first that

    τ*_{n_α} = τ*_1 1{Y(τ*_1) ≤ a} + τ*_{n_α} 1{Y(τ*_1) ≥ b} = τ*_1 + (τ*_{n_α} − τ*_1) 1{Y(τ*_1) ≥ b}.

Hence, by the strong Markov property,

    E_α[τ*_{n_α}] = E_α[τ*_1] + E_α[(τ*_{n_α} − τ*_1) 1{Y(τ*_1) ≥ b}] = E_α[τ*_1] + E_β[τ*_{n_α}] P_α[Y(τ*_1) ≥ b].

Similarly, we may show

    E_β[τ*_{n_α}] = E_β[τ*_1] + E_β[τ*_{n_α}] P_β[Y(τ*_1) ≥ b],

from which we obtain

    E_β[τ*_{n_α}] = E_β[τ*_1] / (1 − P_β[Y(τ*_1) ≥ b]).

Now note that since Y_t is spectrally positive, we have by Theorem 8.1 of [11] that

    P_β[Y(τ*_1) ≥ b] = W(b − β) / W(b − a) < 1
and so it suffices from the above to show E_α[τ*_1], E_β[τ*_1] < ∞. We now show that in general, for x ∈ (a, b), E_x[τ*_1] < ∞. Recall by [11] that the potential measure of Y_t upon exiting [a, b] is given by

    U(x, dy) = ∫_0^∞ P_x[Y_t ∈ dy, τ*_1 > t] dt.

Integrating over [a, b], we obtain that

    ∫_{[a,b]} U(x, dy) = ∫_{[a,b]} ∫_0^∞ P_x[Y_t ∈ dy, τ*_1 > t] dt = ∫_0^∞ P_x[τ*_1 > t] dt = E_x[τ*_1].

However, by Theorem 8.7 of [11], since Y_t is spectrally positive, U(x, dy) has a density given by

    u(x, y) = (W(b − x) / W(b − a)) W(y − a) − W(y − x).

Integrating over [a, b], we therefore find that

    ∫_{[a,b]} U(x, dy) = ∫_{[a,b]} ( (W(b − x) / W(b − a)) W(y − a) − W(y − x) ) dy < ∞,

where the inequality follows since W is bounded on compact sets. By the above, this completes the proof.

The following proposition now allows us to take the limit as q → 0 in (15) and (16) in order to obtain the limiting distribution π of X_t.

Proposition 6 For each x ∈ R,

    lim_{q→0} P_x[X(e_q) ∈ A] = π(A).   (19)
Proof: Fix ɛ > 0 and select T > 0 large enough so that |P_x[X_t ∈ A] − π(A)| < ɛ for all t ≥ T. Then

    |P_x[X(e_q) ∈ A] − π(A)| = | ∫_0^∞ {P_x[X_t ∈ A] − π(A)} q e^{−qt} dt |
    ≤ ∫_0^T |P_x[X_t ∈ A] − π(A)| q e^{−qt} dt + ∫_T^∞ |P_x[X_t ∈ A] − π(A)| q e^{−qt} dt.

The second term in the above expression is bounded by ɛ and the first term converges to zero as q → 0 by the Dominated Convergence Theorem, which completes the proof.

Using Proposition 6, we now wish to take limits as q → 0 in (15) in order to determine the limiting distribution π. However, both the numerator and denominator in (15) converge to 0 as q → 0 and so we must apply L'Hopital's rule. Before doing so, however, we first must verify that both the numerator and denominator in (15) are differentiable. By Lemma 8.3 and Corollary 8.5 in [11] we have that for each x > 0, both W^{(q)}(x) and Z^{(q)}(x) are differentiable in q. Moreover, since for a ≤ x < b,

    W^{(q)}(b − x) / W^{(q)}(b − a) = E_x [ e^{−qτ*_1} 1{Y(τ*_1) ≥ b} ],

it follows that

    d/dq [ W^{(q)}(b − x) / W^{(q)}(b − a) ] = −E_x [ τ*_1 e^{−qτ*_1} 1{Y(τ*_1) ≥ b} ],

which is finite, where the finiteness follows as in the proof of Proposition 5. Finally, since for each a ≤ x ≤ b, U^{(q)}(x, ·) has a density u^{(q)}(x, y) given by (17), it follows that for each A ∈ B(R),

    d/dq ∫_A U^{(q)}(x, dy) = ∫_A (d/dq) u^{(q)}(x, y) dy.

Thus, noting that

    E_α[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] (1 − P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}])   (20)
    + E_β[1{X_{e_q} ∈ A} 1{e_q < τ*_1}] P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}]
    = q ∫_A U^{(q)}(α, dy) (1 − W^{(q)}(b − β)/W^{(q)}(b − a)) + q ∫_A U^{(q)}(β, dy) (W^{(q)}(b − α)/W^{(q)}(b − a))
and

    (1 − P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}])(1 − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}])   (21)
    − P_α[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≥ b}] P_β[{e_q ≥ τ*_1} ∩ {Y(τ*_1) ≤ a}]
    = (1 − W^{(q)}(b − β)/W^{(q)}(b − a)) (1 − Z^{(q)}(b − α) + Z^{(q)}(b − a) W^{(q)}(b − α)/W^{(q)}(b − a))
    − (W^{(q)}(b − α)/W^{(q)}(b − a)) (Z^{(q)}(b − β) − Z^{(q)}(b − a) W^{(q)}(b − β)/W^{(q)}(b − a)),

we see that both the numerator and denominator in (15) are differentiable. Let us now take derivatives on the right-hand sides of (20) and (21). Taking the derivative of the right-hand side of (20) and evaluating at q = 0, we obtain

    ∫_A U^{(0)}(α, dy) (1 − W^{(0)}(b − β)/W^{(0)}(b − a)) + ∫_A U^{(0)}(β, dy) (W^{(0)}(b − α)/W^{(0)}(b − a)).

Next, recalling that

    Z^{(q)}(x) = 1 + q ∫_0^x W^{(q)}(y) dy,

it follows upon taking the derivative of the right-hand side of (21) and evaluating at q = 0 that we obtain

    (W^{(0)}(b − β)/W^{(0)}(b − a)) ∫_0^{b−α} W(x) dx + (W^{(0)}(b − α)/W^{(0)}(b − a)) ∫_{b−β}^{b−a} W(x) dx − ∫_0^{b−α} W(x) dx = K_{a,α,β,b}.   (22)

Thus, by (15) and Proposition 6 we have now obtained the following result.
Proposition 7 If Y_t is spectrally positive, then under a double bandwidth control policy (a, α, β, b), the limiting distribution of X_t is given by

    π(A) = (1/K_{a,α,β,b}) ( (1 − W^{(0)}(b − β)/W^{(0)}(b − a)) ∫_A U^{(0)}(α, dy) + (W^{(0)}(b − α)/W^{(0)}(b − a)) ∫_A U^{(0)}(β, dy) )

for each A ∈ B(R), where K_{a,α,β,b} is as given in (22). Note that π has a density which is a linear combination of u^{(0)}(α, y) and u^{(0)}(β, y).

5 An Example

We now provide an explicit example in which the value function and corresponding optimal impulse control may be explicitly found. Moreover, we will also be able to identify the steady-state distribution π. We will consider a case in which, in addition to (1), we also have that

    ∫_{|y| < 1} |y| ν(dy) < ∞.   (23)

Conditions (1) and (23) together are equivalent to the fact that Y − σw is a finite variation process. In this case, with ϑ = µ − ∫_{(0,1)} y ν(dy), we can write the linear operator A in the form

    Af(x) = (σ²/2) f''(x) + ϑ f'(x) + ∫_R [f(x + y) − f(x)] ν(dy).   (24)

Moreover, in this case the Ito-Levy representation of Y simplifies to

    Y_t = x + σw_t + ϑt + Σ_{0 < s ≤ t} ΔY_s.

We suppose in this section that

    Y_t = x + ϑt + σw_t + N_t,

where N is a compound Poisson process independent of w such that the rate of jump arrivals is equal to 1 and the Levy measure ν of N is

    ν(dy) = θ e^{−θy} dy
for y > 0, for some θ > 0, and ν((−∞, 0]) = 0. Suppose now that x ∈ (a, b). Using (24), the equation in (i) in Theorem 3 may be written as

    ϑf'(x) + (σ²/2) f''(x) + (x − ρ)² − λf(x) + ∫_0^∞ [f(x + y) − f(x)] θe^{−θy} dy = 0.

This becomes

    ϑf'(x) + (σ²/2) f''(x) + (x − ρ)² − (1 + λ)f(x) + θe^{θx} ∫_x^b f(z) e^{−θz} dz + θe^{θx} ∫_b^∞ [f(b) + d(z − b)] e^{−θz} dz = 0   (25)

and also

    ϑf'(x) + (σ²/2) f''(x) + (x − ρ)² − (1 + λ)f(x) + θe^{θx} ∫_x^b f(z) e^{−θz} dz + ζe^{θx} = 0,   (26)

where

    ζ = θ ∫_b^∞ [f(b) + d(z − b)] e^{−θz} dz.

Let us now introduce g(x) = e^{−θx} f(x). We then obtain the following equation from (26):

    (ϑθ + (σ²/2)θ² − λ − 1) g(x) + (ϑ + σ²θ) g'(x) + (σ²/2) g''(x) + e^{−θx}(x − ρ)² + θ ∫_x^b g(z) dz + ζe^{−θx} = 0.   (27)

Differentiating the above with respect to x, we get the following inhomogeneous linear ordinary differential equation of the third order:

    (σ²/2) g''' + (ϑ + σ²θ) g'' + ((σ²/2)θ² + ϑθ − λ − 1) g' − θg + 2e^{−θx}(x − ρ) − θe^{−θx}(x − ρ)² − ζθe^{−θx} = 0.   (28)

A particular solution of the inhomogeneous equation, denoted by g_p, is given by

    g_p(x) = e^{−θx} [ K_1 (x − ρ)² + K_2 (x − ρ) + K_3 + K_4 ],

where

    K_1 = 1/λ,
    K_2 = 2(θϑ + 1)/(θλ²),
    K_3 = (1/(λ³θ²)) [ 2ϑ²θ² + 4θϑ + λσ²θ² + 2λ + 2 ],
    K_4 = ζ/λ.

The general solution of the homogeneous equation is given by g_h, that is,

    g_h(x) = L_1 e^{c_1 x} + L_2 e^{c_2 x} + L_3 e^{c_3 x},

where c_1, c_2, c_3 are the roots of the equation

    P(x) = (σ²/2) x³ + (ϑ + σ²θ) x² + ((σ²/2)θ² + ϑθ − λ − 1) x − θ = 0

and L_1, L_2, L_3 are free parameters. Notice that P(0) = −θ < 0 and P(−θ) = θλ > 0; thus P(x) has three real roots, say c_1 < −θ, −θ < c_2 < 0 and c_3 > 0. We have now arrived at the following family of candidate solutions:

    g(x; L_1, L_2, L_3, b) = e^{−θx} [ K_1 (x − ρ)² + K_2 (x − ρ) + K_3 + K_4 ] + L_1 e^{c_1 x} + L_2 e^{c_2 x} + L_3 e^{c_3 x}.

This gives

    f(x; L_1, L_2, L_3, b) = K_1 (x − ρ)² + K_2 (x − ρ) + K_3 + K_4 + L_1 e^{(θ + c_1)x} + L_2 e^{(θ + c_2)x} + L_3 e^{(θ + c_3)x}.   (29)

For simplicity we shall use the notation f(x; L_1, L_2, L_3, b) = f(x). We now have 7 unknown parameters a, α, β, b, L_1, L_2, L_3. From the conditions of Theorem 3, we may derive the following 6 equations for these constants:

    f'(a) = −c   (30)
    f'(α) = −c   (31)
    f'(b) = d   (32)
    f'(β) = d   (33)
    f(a) = f(α) + C + c(α − a)   (34)
    f(b) = f(β) + D + d(b − β).   (35)
In addition, if we trace back our derivation in the above, then we see that we must have (25) hold for at least one particular x, since in going from (27) to (28) we took a derivative. Select x = b. This then gives us our 7th equation:

    ϑf'(b) + (σ²/2) f''(b) + (b − ρ)² − (1 + λ) f(b) + ζ = 0.   (36)

We now have the following.

Theorem 8 Suppose that there exist seven constants L_1, L_2, L_3, a < α ≤ β < b satisfying the seven equations (30)-(36). We define h by

    h(x) = f(a) − c(x − a), if x ≤ a;   h(x) = f(x), if a ≤ x ≤ b;   h(x) = f(b) + d(x − b), if x ≥ b,

and assume also that

    (σ²/2) h'''(a+) + ϑ h''(a+) ≥ 0,   θ ∫_0^∞ [h'(a + z) + c] ν(dz) + 2 ≤ 0.   (37)

Then h(x) = V(x), i.e., h(x) is the value function of the optimization problem. Furthermore, the policy (T*, Ξ*) described in (3) and (4) with this choice of a, α, β, b is optimal.

In order to prove this theorem, we need the following lemma.

Lemma 9 Assume the conditions of Theorem 8. Then there exists a constant ξ ∈ (α, β) such that h is convex on [a, ξ] and concave on [ξ, b]. Furthermore, h'(x) ≤ −c if x ∈ [a, α], h'(x) ≥ d if x ∈ [β, b], and −c ≤ h'(x) ≤ d if x ∈ [α, β].

Proof: From the conditions on L_1, L_2, L_3 it follows that f'''(x) is decreasing on (a, b). Therefore f''(x) has at most two zero points, which implies that f'(x) has at most two local extreme values in (a, b). The lemma now follows from (30)-(33).

Proof of Theorem 8: We need to prove that the conditions of Theorem 3 are satisfied. Condition (i) and the required smoothness of h follow from our construction. Next we prove
(ii) and (iv). From conditions (31), (33) and Lemma 9 it follows that

    Mh(x) = h(α) + C + c(α − x), if a ≤ x ≤ α;   Mh(x) = h(x) + min{C, D}, if α < x < β;   Mh(x) = h(β) + D + d(x − β), if β ≤ x ≤ b.

Conditions (ii) and (iv) follow from Lemma 9 and conditions (34) and (35). Next we show condition (iii). First we look at the case of x > b. Let

    K(x) = Ah(x) − λh(x) + (x − ρ)²,   x ∈ R \ {a, b}.

A simple calculation shows that

    0 = K(b−) = (σ²/2) h''(b−) + K(b+),

and (σ²/2) h''(b−) ≤ 0 implies K(b+) ≥ 0. On the other hand, for x > b we have K'(x) = −λd + 2(x − ρ). A simple calculation then also shows that

    0 = K'(b−) = (σ²/2) h'''(b−) + ϑ h''(b−) − λd + 2(b − ρ).

Since h''(b−) ≤ 0 and h'''(b−) ≤ 0, we then have that K'(x) ≥ 0 whenever x ∈ (b, ∞). This in turn implies K(x) ≥ 0 for x ∈ (b, ∞). Next we show that K(x) ≥ 0 for x < a. A simple calculation shows that

    0 = K(a+) = K(a−) + (σ²/2) h''(a+),

which implies that K(a−) ≥ 0. Hence all we need to show is that K'(x) ≤ 0 for x ≤ a. With a change of variable in the integral one can see that

    K'(x) = e^{θ(x−a)} ∫_0^∞ [h'(a + z) + c] ν(dz) + λc + 2(x − ρ),   x < a,

and

    K''(x) = θ e^{θ(x−a)} ∫_0^∞ [h'(a + z) + c] ν(dz) + 2,   x < a.

Thus K' is either increasing or decreasing on (−∞, a) depending on the sign of ∫_0^∞ [h'(a + z) + c] ν(dz), which makes K either convex or concave on (−∞, a). However, the fact that lim_{x→−∞} K'(x) = −∞ implies that K must be concave and K' decreasing on (−∞, a). Therefore, in order to show that K'(x) ≤ 0 it is sufficient to show that K'(a−) ≤ 0 and K''(a−) ≤ 0. The latter is exactly the second inequality in (37). For the former we note that

    0 = K'(a+) = (σ²/2) h'''(a+) + ϑ h''(a+) + K'(a−)
and so K′(a−) ≤ 0 follows from the first inequality in (37).

Having found the form of the optimal control X*_t in Theorem 8, we now proceed towards calculating its limiting distribution δ. Note that, by the discussion in Section 4, it suffices to determine the function W^{(0)} = lim_{q→0} W^{(q)}. By (8) we have that the Laplace transform of W^{(0)} is given by 1/ψ(−s), where ψ(s) is the Lévy exponent of Y_t. Let us assume now that σ = 0, so that Y_t = −νt + N_t, where N_t is a compound Poisson process which has jumps at rate one and jump sizes which are exponentially distributed with rate θ. By (8.1) in [10] it then follows that

ψ(s) = −νs − ∫₀^∞ (1 − e^{xs}) θe^{−θx} dx, for s < θ,

which reduces to ψ(s) = −νs + s/(θ − s). One may now proceed to verify that

1/ψ(−s) = (θ + s) / (s(νs + νθ − 1)).

In the case in which θν ≠ 1, inverting each of the terms in the above, one obtains that the function W^{(0)} is given by

W^{(0)}(x) = θ/(θν − 1) + (1/ν − θ/(θν − 1)) exp(−((θν − 1)/ν) x).

For the case in which θν = 1, one has that W^{(0)}(x) = (θx + 1)/ν. Substituting into the discussion preceding Proposition 5, one may now obtain the density of π.

6 Acknowledgements

The authors would like to thank Bert Zwart for his help with Section 4.
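The scale-function computation above can be checked numerically. The sketch below assumes the forms derived in this section, namely ψ(−s) = νs − s/(θ + s) and the two cases of W^{(0)} (exponential for θν ≠ 1, linear for θν = 1); the parameter values and helper names are illustrative, not from the paper:

```python
import math

# Check of the limiting-distribution computation for Y_t = -nu*t + N_t,
# where N_t is compound Poisson with rate one and Exp(theta) jump sizes.

def psi_minus(s, nu, theta):
    """psi(-s) = nu*s - s/(theta + s): the Levy exponent evaluated at -s."""
    return nu * s - s / (theta + s)

def W0(x, nu, theta):
    """W^{(0)}(x): exponential form for theta*nu != 1, linear form at theta*nu == 1."""
    if abs(theta * nu - 1.0) < 1e-12:
        return (theta * x + 1.0) / nu
    A = theta / (theta * nu - 1.0)
    B = 1.0 / nu - A
    return A + B * math.exp(-((theta * nu - 1.0) / nu) * x)

def laplace_W0(s, nu, theta):
    """Term-by-term Laplace transform of W0 (a constant plus an exponential)."""
    if abs(theta * nu - 1.0) < 1e-12:
        return theta / (nu * s * s) + 1.0 / (nu * s)
    A = theta / (theta * nu - 1.0)
    B = 1.0 / nu - A
    return A / s + B / (s + (theta * nu - 1.0) / nu)

# The scale-function identity: the Laplace transform of W^{(0)} equals 1/psi(-s),
# and W^{(0)}(0) = 1/nu in both the theta*nu != 1 and the theta*nu = 1 cases.
for nu, theta in [(1.0, 2.0), (0.5, 3.0), (1.0, 1.0)]:
    for s in (0.5, 1.0, 2.5):
        assert abs(laplace_W0(s, nu, theta) - 1.0 / psi_minus(s, nu, theta)) < 1e-9
    assert abs(W0(0.0, nu, theta) - 1.0 / nu) < 1e-12
```

The identity holds algebraically in s, so a handful of sample points suffices; the second assertion checks the boundary value W^{(0)}(0) = 1/ν, the usual value of the scale function of a bounded-variation process with drift ν.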
A Appendix

In the Appendix, we provide a proof of the fact that Itô's rule applies to test functions f ∈ D. For f ∈ D the second derivative f″(x) may not exist at the points of S = {x₁, …, x_n}. We shall call S the set of exceptional points. We extend f″ to all of ℝ by assuming an arbitrary value for f″(x_i). This convention will be used in the rest of this section. The following is then the main result of the Appendix.

Proposition 10. If f ∈ D and X is a controlled cash on hand process with an arbitrary impulse control (T, Ξ) = (τ₁, τ₂, …, τ_n, …, ξ₁, ξ₂, …, ξ_n, …), then Itô's rule holds in its usual form:

f(X_t) − f(X_0) = ∫_(0,t] f′(X_{s−}) dX_s + (σ²/2) ∫_(0,t] f″(X_{s−}) ds + Σ_{0<s≤t} {f(X_s) − f(X_{s−}) − f′(X_{s−}) ΔX_s}. (38)

In order to prove Proposition 10, we need the following two lemmas.

Lemma 11. Let f ∈ D with set of exceptional points S. Then there exists a sequence (f_n)_{n≥1} ⊂ C²(ℝ) such that the following hold:
(i) f_n(x) → f(x) and f_n′(x) → f′(x) for every x ∈ ℝ as n → ∞;
(ii) f_n″(x) → f″(x) for every x ∈ ℝ ∖ S as n → ∞;
(iii) f_n′ and f_n″ are bounded uniformly in n, i.e., |f_n′(x)| ≤ C and |f_n″(x)| ≤ C for some constant C and all n and x ∈ ℝ.

The proof of this lemma can be based on the proof of a similar lemma in Øksendal [13], Appendix D, with some obvious modifications.

Lemma 12. If σ ≠ 0 then for every x ∈ ℝ,

∫₀^∞ 1_{{x}}(X_s) ds = 0 almost surely.
In other words, the Lebesgue measure of the time the controlled cash on hand process spends at level x is zero.

Proof: We have

E[∫₀^∞ 1_{{x}}(X_s) ds] = ∫₀^∞ P[X_s = x] ds ≤ ∫₀^∞ P[τ_i = s for some i] ds + ∫₀^∞ P[X_s = x, τ_i ≠ s for all i] ds.

We deal with these last two integrals separately. First,

∫₀^∞ P[τ_i = s for some i] ds ≤ ∫₀^∞ Σ_{i=1}^∞ P[τ_i = s] ds = Σ_{i=1}^∞ ∫₀^∞ P[τ_i = s] ds,

and this last expression is zero because the set {s : P[τ_i = s] > 0} is either countable or finite. For the second integral we have

∫₀^∞ P[X_s = x, τ_i ≠ s for all i] ds = ∫₀^∞ Σ_{i=0}^∞ P[X_s = x, τ_i < s < τ_{i+1}] ds.

Now it suffices to show that the probability on the right-hand side is zero. Indeed,

P[X_s = x, τ_i < s < τ_{i+1}] = ∫_{[0,s]×ℝ} P[X_s = x, s < τ_{i+1} | τ_i = u, X_u = y] P[τ_i ∈ du, X_u ∈ dy]
= ∫_{[0,s]×ℝ} P[X_s − X_u = x − y, s < τ_{i+1} | τ_i = u, X_u = y] P[τ_i ∈ du, X_u ∈ dy]
= ∫_{[0,s]×ℝ} P[Y_s − Y_u = x − y, s < τ_{i+1} | τ_i = u, X_u = y] P[τ_i ∈ du, X_u ∈ dy]
≤ ∫_{[0,s]×ℝ} P[Y_s − Y_u = x − y | τ_i = u, X_u = y] P[τ_i ∈ du, X_u ∈ dy]
= ∫_{[0,s]×ℝ} P[Y_s − Y_u = x − y] P[τ_i ∈ du, X_u ∈ dy] = ∫_{[0,s]×ℝ} P[Y_s = x | Y_u = y] P[τ_i ∈ du, X_u ∈ dy],

and P[Y_s = x | Y_u = y] = 0 follows from our assumption σ ≠ 0 and Sato [14], Theorem 27.4.
We now provide the proof of Proposition 10.

Proof of Proposition 10: Let (f_n)_{n≥1} be the sequence approximating f in the sense of Lemma 11. Itô's rule holds for each f_n, i.e.,

f_n(X_t) − f_n(X_0) = ∫_(0,t] f_n′(X_{s−}) dX_s + (σ²/2) ∫_(0,t] f_n″(X_{s−}) ds + Σ_{0<s≤t} {f_n(X_s) − f_n(X_{s−}) − f_n′(X_{s−}) ΔX_s}. (39)

All we need to show is that all three terms on the right-hand side of (39) converge to the corresponding terms on the right-hand side of (38) as n → ∞. We can write X_t = X_0 + M(t) + A(t), where M is a local martingale with bounded jumps (thus also locally square-integrable) and A is a finite variation process (Jacod & Shiryaev [9], Proposition 4.7). We then have

∫_(0,t] f_n′(X_{s−}) dM(s) → ∫_(0,t] f′(X_{s−}) dM(s) in probability as n → ∞

by Theorem 4.4 iii in Jacod & Shiryaev [9]. Also,

∫_(0,t] f_n′(X_{s−}) dA(s) → ∫_(0,t] f′(X_{s−}) dA(s) a.s., as n → ∞,

by the Dominated Convergence Theorem. Therefore, the first integral on the right-hand side of (39) indeed converges to the corresponding integral in (38). The convergence

Σ_{0<s≤t} {f_n(X_s) − f_n(X_{s−}) − f_n′(X_{s−}) ΔX_s} → Σ_{0<s≤t} {f(X_s) − f(X_{s−}) − f′(X_{s−}) ΔX_s}

follows from the discrete-time version of the Dominated Convergence Theorem, since f_n(X_s) − f_n(X_{s−}) − f_n′(X_{s−}) ΔX_s is bounded by (C/2)(ΔX_s)² and Σ_{0<s≤t} (ΔX_s)² < ∞. Finally, we need to show that

(σ²/2) ∫_(0,t] f_n″(X_{s−}) ds → (σ²/2) ∫_(0,t] f″(X_{s−}) ds

as n → ∞. If σ = 0 then there is nothing to prove, and if σ ≠ 0 then this follows from Lemma 12 and the Dominated Convergence Theorem.
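Proposition 10 can be illustrated along a discretized path. The sketch below (our own; the jump-diffusion parameters and Euler discretization are illustrative) tracks both sides of (38) for the smooth test function f(x) = x², for which f′(x) = 2x, f″(x) = 2, and the jump correction f(X_s) − f(X_{s−}) − f′(X_{s−})ΔX_s equals (ΔX_s)²:

```python
import math
import random

# Discrete-time sanity check of Ito's rule (38) for f(x) = x^2 applied to
# X = sigma*W + compound Poisson (all parameter choices are illustrative).
random.seed(7)
sigma, T, n = 1.0, 1.0, 100_000
dt = T / n
jump_rate, jump_mean = 2.0, 0.5     # Poisson rate and Exp jump-size mean

x = 0.0
stoch_int = 0.0                     # running sum of f'(X_{s-}) dX_s
jump_corr = 0.0                     # running sum of the jump corrections
for _ in range(n):
    dw = sigma * random.gauss(0.0, math.sqrt(dt))
    stoch_int += 2.0 * x * dw       # f'(X_{s-}) times the diffusion increment
    x += dw
    if random.random() < jump_rate * dt:        # a jump occurs in this step
        j = random.expovariate(1.0 / jump_mean)
        stoch_int += 2.0 * x * j    # f'(X_{s-}) times the jump size
        jump_corr += j * j          # f(X_s) - f(X_{s-}) - f'(X_{s-})*j = j^2
        x += j

lhs = x * x                         # f(X_T) - f(X_0)
rhs = stoch_int + sigma**2 * T + jump_corr      # (sigma^2/2) * f'' = sigma^2
assert abs(lhs - rhs) < 0.2
```

By the telescoping identity Δ(X²) = 2X_{s−}ΔX + (ΔX)², the only discrepancy between the two sides is the difference between the summed squared Brownian increments and σ²T, which is small for a fine grid; the tolerance above is deliberately loose.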
References

[1] Asmussen, S. (2003). Applied Probability and Queues. Springer-Verlag, New York.
[2] Bensoussan, A., Lions, J.L. (1973). Nouvelle formulation de problèmes de contrôle impulsionnel et applications. C. R. Acad. Sci. Paris 276.
[3] Bensoussan, A., Lions, J.L. (1975). Nouvelles méthodes en contrôle impulsionnel. Appl. Math. Optimization 1.
[4] Bar-Ilan, A., Perry, D., Stadje, W. (2004). A Generalized Impulse Control Model of Cash Management. Journal of Economic Dynamics and Control 28.
[5] Cadenillas, A., Lakner, P., Pinedo, M. (2010). Optimal Control of a Mean-Reverting Inventory. Operations Research, to appear.
[6] Cadenillas, A., Zapatero, F., Sarkar, S. (2007). Optimal Dividend Policy with Mean-Reverting Cash Reservoir. Mathematical Finance 17, 81–109.
[7] Constantinides, G.M., Richard, S.F. (1978). Existence of Optimal Simple Policies for Discounted-Cost Inventory and Cash Management in Continuous Time. Operations Research 26.
[8] Harrison, J.M., Sellke, T.M., Taylor, A.J. (1983). Impulse Control of Brownian Motion. Mathematics of Operations Research 8.
[9] Jacod, J., Shiryaev, A.N. (2003). Limit Theorems for Stochastic Processes. Springer-Verlag, Berlin Heidelberg.
[10] Kyprianou, A.E. (2006). Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer-Verlag, Berlin.
[11] Kella, O., Whitt, W. (1992). Useful Martingales for Stochastic Storage Processes with Lévy Input. Journal of Applied Probability 29.
[12] Ormeci, M., Dai, J.G., Vande Vate, J. (2008). Impulse Control of Brownian Motion: The Constrained Average Cost Case. Operations Research 56.
[13] Øksendal, B. (2003). Stochastic Differential Equations: An Introduction with Applications. Springer-Verlag, Berlin Heidelberg.
[14] Sato, K. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.
More informationA Note on the Central Limit Theorem for a Class of Linear Systems 1
A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.
More informationUNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE
Surveys in Mathematics and its Applications ISSN 1842-6298 (electronic), 1843-7265 (print) Volume 5 (2010), 275 284 UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Iuliana Carmen Bărbăcioru Abstract.
More information