National Science Council (Executive Yuan, Taiwan) Research Project Final Report


Final Report: Jump Processes and Their Applications (abridged version)

Project type: individual
Project number: NSC M
Execution period: August 2010 to July 2011
Host institution: Department of Applied Mathematics, National Chiao Tung University
Principal investigator: Yuan-Chung Sheu (許元春)
Project participants: master's students (part-time assistants) 吳國禎 and 郭閔豪; doctoral students (part-time assistants) 陳育慈, Ming-Chi Chang (張明淇) and 蔡明耀
Availability: this report is publicly accessible

September 2011 (ROC year 100)

Free Boundary Problems and Perpetual American Strangles

Ming-Chi Chang, Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan
Yuan-Chung Sheu*, Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan

Abstract

We consider perpetual American strangles in geometric jump-diffusion models, assuming further that the jump distribution is a mixture of exponential distributions. To solve the corresponding optimal stopping problem for this option, using the approach in [5] we derive a system of equations that is equivalent to the associated free boundary problem with the smooth pasting condition. We verify the existence of solutions to these equations. Then, in terms of the solutions together with a verification theorem, we solve the optimal stopping problem and hence find the optimal exercise boundaries and the rational price of the perpetual American strangle. In addition, we work out an algorithm for computing the optimal exercise boundaries and the rational price of this option.

Keywords: jump-diffusion, mixture of exponential distributions, perpetual American strangle, free boundary problem, smooth pasting condition
JEL Classification: D8, C6, G2
Mathematics Subject Classification (2000): 60J75, 60G5, 60G99
Running Title: Free Boundary Problems and Perpetual American Strangles

1 Introduction

An American option is an option that can be exercised at any time prior to its expiration time. For an American call option with a finite expiration time, Merton [11] observed that the price of the American option (written on an underlying stock without dividends) coincides with the price of the corresponding European option. However, the American put option (even without dividends) presents a difficult problem: there are no explicit pricing formulas, and the optimal exercise boundaries are not known. One exception is the perpetual American put option, i.e., an American put with infinite expiration time. Within the Black-Scholes model, the
perpetual American put problem was solved by McKean [10]. In Lévy-based models, using the theory of pseudo-differential operators, Boyarchenko and Levendorskii [4] derived a closed formula for the prices of perpetual American put and call options. By probabilistic techniques, Mordecki and Salminen [12] obtained explicit formulas under the assumption of mixed-exponentially distributed positive jumps and arbitrary negative jumps for call options, and mixed-exponentially distributed negative jumps and arbitrary positive jumps for put options. (For related works, see Asmussen et al. [1] and the references therein.)

In this paper we consider the pricing problem for the perpetual American strangle, which is a combination of a put and a call written on the same security. Mathematically, the pricing problem for perpetual American contracts in a Lévy-based model is equivalent to an optimal stopping problem of the form

    V(x) = sup_{τ ∈ T} E_x(e^{-rτ} g(X_τ)),    (1)

where X = {X_t : t ≥ 0} under the chosen risk-neutral probability measure P_x is a Lévy process started from X_0 = x. Further, g is the nonnegative continuous reward function corresponding to the

* Corresponding author. Tel.: ext. 56428. E-mail address: sheu@math.nctu.edu.tw

contract, r ≥ 0, and T is the family of stopping times with respect to the natural filtration F = {F_t : t ≥ 0} generated by X. (Here we define e^{-rτ} g(X_τ) = 0 on {τ = ∞}.) In the literature there are many approaches for finding the value function V(x) and an optimal stopping time τ* such that V(x) = E_x(e^{-rτ*} g(X_{τ*})). The free boundary approach is based on the observation that, under suitable conditions, the value function V(x) of the optimal stopping problem (1) solves the free boundary (or Stefan) problem

    (L_X - r)V(x) = 0  in C,    (2)
    V(x) = g(x)  on D,    (3)

where C = {x ∈ R : V(x) > g(x)} (the continuation region), D = {x ∈ R : V(x) = g(x)} (the stopping region) and L_X is the infinitesimal generator of X. (For details, see Shiryayev [17], Theorem 5, p. 57.) Many authors have also observed that the boundary of the stopping region D is determined by imposing the smooth pasting condition on the value function. Then, to solve the optimal stopping problem (1), it suffices to solve the above free boundary problem with suitable pasting conditions and to prove a verification theorem. (A verification theorem implies that solving the free boundary problem with the smooth pasting condition, or related conditions, allows one to establish explicit solutions of the optimal stopping problem.) By this approach we find the value functions and the optimal stopping times for the optimal stopping problem (1) and, by the risk-neutral pricing formula, we obtain the optimal exercise times and the rational prices of the perpetual American contracts. For recent works and other approaches, see Kyprianou and Surya [9], Mordecki and Salminen [12], Mordecki [13], Novikov and Shiryaev [14], Surya [18] and the monograph of Peskir and Shiryaev [15]. It is worth noting that the reward functions considered above are of American put-type or American call-type. For our financial applications, we need to consider reward functions g of the two-sided form in (2.4). (In the literature there are not many works
on two-sided reward functions; see, for example, Beibel and Lerche [2], Gapeev and Lerche [7] and Boyarchenko [3].)

We then study perpetual American strangles in geometric jump-diffusion models, assuming further that the jump distribution is a two-sided mixture of exponential distributions. To solve the corresponding optimal stopping problem for this option, using the approach in [5] we first derive a system of equations (see (3.3)-(3.8) below) that is equivalent to the associated free boundary problem with the smooth pasting condition. In terms of the solutions to this system, together with a verification theorem (Theorem 2.1), we find the optimal stopping time and the value function of the optimal stopping problem (Theorem 3.1). Hence, by the risk-neutral pricing formula, we obtain the optimal exercise boundaries and the price of the perpetual American strangle. In addition, in the proof of the existence of solutions to the equations (3.3)-(3.8), we also work out an algorithm for computing the optimal exercise boundaries and the rational price of the option (see Theorem 4.1).

The paper is organized as follows. In Section 2 we introduce our jump-diffusion setting and provide a verification theorem for the optimal stopping problem (1) with a general two-sided reward function. In Section 3 we consider perpetual American strangles under the geometric jump-diffusion setting: we derive a system of equations for solving the corresponding free boundary problem with the smooth pasting condition and, in terms of solutions to this system, we solve the optimal stopping problem corresponding to the perpetual American strangle. In Section 4 we prove the existence of solutions to the system of equations (Theorem 4.1). Some numerical results based on our algorithm are presented in Section 5. Section 6 concludes the paper. Long and difficult proofs are relegated to the Appendix.

2 Optimal Stopping and Jump-Diffusion Processes

Throughout this paper, on a probability space (Ω, F, P), we consider a jump-diffusion process X of the form

    X_t = ct + σB_t + Σ_{n=1}^{N_t} Y_n,    (2.1)

where c ∈ R, σ > 0, B = (B_t, t ≥ 0) is a standard Brownian motion and (N_t; t ≥ 0) is a Poisson process with rate λ > 0. Also, Y = (Y_n, n ≥ 1) is a sequence of independent random variables with identical

piecewise continuous density function f. Assume further that B, N and Y are independent. A jump-diffusion process starting from x is simply defined as x + X_t for t ≥ 0, and we denote its law by P_x. For convenience we write P in place of P_0; E_x denotes expectation with respect to P_x. Under these model assumptions we have E(e^{zX_t}) = e^{tψ(z)}, z ∈ iR, where ψ, the characteristic exponent of X, is given by

    ψ(z) = (σ²/2) z² + cz + λ ∫ e^{zy} f(y) dy - λ.    (2.2)

Also, the infinitesimal generator L_X of X has a domain containing C_0²(R), and for h ∈ C_0²(R),

    L_X h(x) = (σ²/2) h''(x) + c h'(x) + λ ∫ h(x+y) f(y) dy - λ h(x).    (2.3)

We define L_X h(x) by the expression (2.3) for all functions h on R such that h', h'' and the integral in (2.3) exist at x.

Given a jump-diffusion process X as in (2.1), we consider in this section the optimal stopping problem (1) with the continuous reward function g given by

    g(x) = g_1(x) 1_{x ≤ l_1} + g_2(x) 1_{x ≥ l_2}    (2.4)

for some -∞ < l_1 ≤ l_2 < ∞. Here g_1 is a strictly positive C¹-function on (-∞, l_1) and g_2 is a strictly positive C¹-function on (l_2, ∞). We assume further that E_x[sup_{t≥0} e^{-rt} g(X_t)] < ∞ for all x ∈ R. For any set I in R, we write τ_I = inf{t ≥ 0 : X_t ∈ I} and set

    V_I(x) = E_x[e^{-rτ_I} g(X_{τ_I})], x ∈ R.    (2.5)

Theorem 2.1 Let I = (h_1, h_2)^c, where -∞ < h_1 < l_1 ≤ l_2 < h_2 < ∞. Assume that the function V_I(x) in (2.5) satisfies the following conditions:
(a) V_I(x) is the difference of two convex functions.
(b) V_I(x) is twice continuously differentiable except possibly at h_1 and h_2.
(c) The limits V_I''(h_i ±) = lim_{h → h_i ±} V_I''(h), i = 1, 2, exist and are finite.
(d) (L_X - r)V_I(x) ≤ 0 for all x except possibly at h_1 and h_2.
(e) V_I(x) ≥ g(x) for all x ∈ (h_1, h_2).
Then V_I(x) is the value function of the optimal stopping problem (1) with the reward function g given in (2.4).

Proof: Let V be the value function of the optimal stopping problem (1). Clearly V_I(x) ≤ V(x); it remains to show that V(x) ≤ V_I(x). By the
Meyer-Itô formula (see, for example, Corollary 1 in Protter [16], Ch. IV, pp. 28-29), we have

    e^{-rt} V_I(X_t) - V_I(x) = -∫_0^t r e^{-rs} V_I(X_{s-}) ds + ∫_0^t e^{-rs} V_I'(X_{s-}) dX_s
        + (1/2) ∫_0^t e^{-rs} V_I''(X_{s-}) d[X, X]_s^c
        + Σ_{0<s≤t} e^{-rs} (V_I(X_s) - V_I(X_{s-}) - V_I'(X_{s-}) ΔX_s),

where V_I' is the left derivative and V_I'' is the second derivative in the generalized-function sense. By arguments similar to those in Mordecki [13], Sec. 3, we have

    e^{-rt} V_I(X_t) - V_I(x) = ∫_0^t e^{-rs} (L_X - r) V_I(X_{s-}) ds + M_t,    (2.6)

where {M_t} is a local martingale with M_0 = 0. Let T_n be a sequence of stopping times such that for each n, {M_{T_n ∧ t}} is a martingale, and let τ be a stopping time. By the optional stopping theorem, E_x[M_{T_n ∧ t ∧ τ}] = E_x[M_0] = 0. In addition, by (d), ∫_0^{T_n ∧ t ∧ τ} e^{-rs} (L_X - r) V_I(X_{s-}) ds ≤ 0. By (2.6), we obtain E_x[e^{-r(T_n ∧ t ∧ τ)} V_I(X_{T_n ∧ t ∧ τ})] ≤ V_I(x). Since g(x) ≥ 0 and E_x[sup_{t≥0} e^{-rt} g(X_t)] < ∞, by the Dominated Convergence Theorem and (e) we have

    E_x[e^{-rτ} g(X_τ)] = E_x[lim_t lim_n e^{-r(τ∧t∧T_n)} g(X_{τ∧t∧T_n})]
        = lim_t lim_n E_x[e^{-r(τ∧t∧T_n)} g(X_{τ∧t∧T_n})]
        ≤ lim_t lim_n E_x[e^{-r(τ∧t∧T_n)} V_I(X_{τ∧t∧T_n})] ≤ V_I(x).

Because τ is arbitrary, V(x) = sup_τ E_x[e^{-rτ} g(X_τ)] ≤ V_I(x). The proof is complete.

We also have uniqueness of solutions for the boundary value problem in (2) and (3).

Proposition 2.1 Assume that g_1 is bounded on (-∞, l_1) and that the function ∫_0^∞ g_2(x+y) f(y) dy, x ≥ l_2, is locally bounded. Let I = (h_1, h_2)^c for some -∞ < h_1 < l_1 ≤ l_2 < h_2 < ∞. If Ṽ is a solution of the boundary value problem

    (L_X - r)Ṽ(x) = 0, x ∈ (h_1, h_2),    (2.7)
    Ṽ(x) = g(x), x ∈ I,

and Ṽ ∈ C²(h_1, h_2) ∩ C[h_1, h_2], then Ṽ(x) = V_I(x) for all x ∈ R.

Proof: See the Appendix.

Remark The conclusion of Proposition 2.1 still holds if the functions g_1 and g_2 are C¹ (not necessarily strictly positive) and satisfy the conditions of Proposition 2.1.

Proposition 2.2 Assume that g_1 and g_1' are bounded on (-∞, l_1) and that the functions ∫_0^∞ g_2(x+y) f(y) dy and ∫_0^∞ g_2'(x+y) f(y) dy, x ≥ l_2, are locally bounded. Assume further that g_1(x) - g_1'(x) is positive and increasing on (-∞, l_1), that g_2(x) - g_2'(x) is negative and decreasing on (l_2, ∞), and that E_x[sup_{t≥0} e^{-rt} g'(X_t)] < ∞ for all x. Let I = (h_1, h_2)^c for some -∞ < h_1 < l_1 ≤ l_2 < h_2 < ∞ and consider a non-negative function Ṽ(x) on R that is C² on (h_1, h_2) and satisfies the following conditions:
(a) (L_X - r)Ṽ(x) = 0, x ∈ (h_1, h_2);
(b) Ṽ(x) = g(x), x ∈ I;
(c) (d/dx) ∫ Ṽ(x+y) f(y) dy = ∫ Ṽ'(x+y) f(y) dy, x ∈ (h_1, h_2);
(d) Ṽ is continuous at h_1 and h_2, and the one-sided derivatives Ṽ'(h_i), i = 1, 2, exist and are continuous there.
Then Ṽ(x) ≥ g(x) for all x ∈ (h_1, h_2).

Proof: By Proposition 2.1, Ṽ(x) = V_I(x) for all x ∈ R. Note that Ṽ is C³ on (h_1, h_2) (for a proof, see Chen et al. [6]) and, for x ∈ (h_1, h_2),

    0 = (d/dx)(L_X - r)Ṽ(x) = (σ²/2) Ṽ'''(x) + c Ṽ''(x) - (λ + r) Ṽ'(x) + λ ∫ Ṽ'(x+y) f(y) dy,

which implies that
(L X r)ṽ (x) = 0 for x (h, h 2 ) By condition (d), Ṽ C[h, h 2 and hence by the remark after Proposition 2, Ṽ (x) = E x [e rτ I g (X τi ) This implies that Ṽ (x) satisfies the ODE: Ṽ (x) Ṽ (x) = F (x), where F (x) = E x[e rτ I (g (X τi ) g(x τi )) First consider the( case that h x l By the ODE theory and the boundary conditions, we have Ṽ (x) = ) e x x h e t F (t)dt + g (h )e h Set H(x) e x (Ṽ (x) g(x)) Then H(x) = x h e t F (t)dt + 4

6 g (h )e h g (x)e x and H (x) = e x F (x) + g (x)e x g (x)e x = e x {E x [e rτ I (g (X τi ) g(x τi )) + g (x) g (x)} = e x {E x [e rτ + I (g 2 (X τi ) g 2 (X τi )); {τ I = τ + I } +E x [e rτ I (g (X τi ) g (X τi )); {τ I = τ I } + g (x) g (x)} e x E x [e rτ + I (g 2 (X τi ) g 2 (X τi )); {τ I = τ + I } +e x (g (x) g (x))( E x [e rτ I ; {τi = τ I } where τ + I = inf{t 0 X t h 2 } and τ I = inf{t 0 X t h } For the last inequality, we use the facts that g (x)g (x) is increasing and hence g (X τ )g (X I τ ) g (h )g (h ) g (x)g (x) I Since g 2 (x) g 2(x) is negative and g (x) g (x) is positive, we obtain H (x) 0 which implies that H(x) is increasing Therefore H(x) H(h ) = 0 and hence Ṽ (x) g(x) By a similar argument, we get Ṽ (x) g(x) for l 2 x h 2 Since Ṽ (x) = V I(x) 0 = g(x) for l x l 2, we complete the proof 3 Perpetual American Strangles and Straddles A strangle is a financial instrument whose payoff function is a combination of a put with the strike price K and a call with the strike price written on the same security, where K In particular, if K =, the strangle becomes a straddle We model the price of the underlying security under the chosen risk-neutral measure by a geometric jump-diffusion: S t = exp{x t } Here X is a jump-diffusion process of the form in (2) We assume further that the jump density function f is given by the mixture of exponential distributions f(x) = p i η + i eη+ i x {x>0} + q j (η j )eη j x {x<0} (3) N + i= where η < < η N < 0 < η + < < η+ N, p + i s and q j s are positive with i= p i + j= q j = (In a Lévy model, there are infinitely many equivalent risk-neutral measures and, for pricing purpose, we usually choose one of them by using the so-called Cramér-Esscher transform Note that this transform preserves the jump-diffusion structure as above For details, see in particular Appendix A of Asmussen et al [) The characteristic exponent for this jump-diffusion process X is given by the formula ψ(z) = N + 2 σ2 z 2 + cz + λ( i= N j= p i η + N i 
η + i z + j= q j η j η j z ) λ The rational price for the perpetual American strangle is the value function for the optimal stopping problem () with the reward function g given by the formula g(x) = (K e x ) + + (e x ) + = g (x) x l + g 2 (x) x l2 (32) where l = ln K, l 2 = ln, g (x) = K e x and g 2 (x) = e x To find the value function, we consider the interval (h, h 2 ) with h < l l 2 < h 2 First we find the function V (x) that solves the boundary value problem (27) As in Chen et al [5, we first transform the integro-differential equation in (27) into the ODE N + λ i= (η + i D) (η j D)( 2 σ2 D 2 + cd (λ + r))v (x) + N + i= (η + i D) N j= N j= + N (η j D)( i= p i η + N i η + i D + j= q j η j ))V (x) = 0 D η j 5

7 where D = d dx By the general theory of ODE theory, the function V in (h, h 2 ) must be of the form V (x) = N +N + +2 n= C n e βnx, where β n are the roots to the characteristic polynomial ϕ(x) of the above ODE, that is, N + N ϕ(x) = (η + i x) (η j x) N + 2 σ2 x 2 p i η + N i + cx (λ + r) + λ( η + i= j= i= i x + q j η j η j= j x) (Note that in fact we have β < η < β 2 < η 2 < < β N < η N < β N + < 0 < β N +2 < η + < < β N +N + + < η + N + < β N +N + +2) For x / (h, h 2 ), we set V (x) = g(x) To determine the coefficients C n s, plugging the function V into the integro-differential equation in (27), we obtain the system of equations N + +N +2 n= N + +N +2 n= C n e βnh2 β n η + k C n e β nh β n η k = η + k = η k e h 2 + η +, k =, 2,, N + (33) k e h K η, k =, 2,, N (34) k (For details, see ([5,[6)) Also, imposing the condition (d) of Proposition 22 for the function V (x) (ie, assuming that V satisfies the continuity and smooth pasting conditions at the boundaries) gives the equations N + +N +2 n= N + +N +2 n= N + +N +2 n= N + +N +2 n= C n e βnh2 = e h2 (35) C n e β nh = K e h (36) C n β n e β nh 2 = e h 2 (37) C n β n e β nh = e h (38) Now, with the set {C,, C N +N + +2, h, h 2 } that satisfies the equations (33)-(38), we will show later that the function V is the value function for the optimal stopping problem () To do this, we need some further properties for the coefficients C n s We consider the following conditions on the model : η + i > for i =, 2,, N + (39) and N + 2 σ2 + c (λ + r) + λ p i η + N i η + i= i + q j η j η j= j < 0 (30) (Note that (39) implies that E[e X < and (30) guarantees E[e X < e r ( hence the underlying asset pays dividends continuously) If E[e X < e r and 0 g(x) A + Be x for some constants A and B, then E[sup t 0 e rt g(x t ) < For details, see Lemma 4 of Mordecki and Salminen [2) Lemma 3 Under the conditions (39) and (30), we have β N +2 > Proof : See the Appendix 6
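The root structure of ϕ described above is easy to inspect numerically: clearing the denominators in ψ(x) - r = 0 gives a polynomial whose roots are the exponents β_n. The sketch below is a minimal check with hypothetical parameters (N^+ = N^- = 1, chosen so that (3.9) and (3.10) hold; these values are not taken from the paper's examples):

```python
import numpy as np

# Hypothetical parameters (not from the paper): one positive and one
# negative exponential jump component, i.e. N+ = N- = 1, chosen so that
# eta+_1 > 1 and psi(1) < r (conditions (3.9)-(3.10)).
sigma, c, lam, r = 0.3, -0.3, 1.0, 0.06
eta_p, eta_m = 3.0, -2.0      # eta+_1 > 0 > eta-_1
p, q = 0.5, 0.5               # p_1 + q_1 = 1

def psi(z):
    # Characteristic exponent of the mixed-exponential jump-diffusion.
    return (0.5 * sigma ** 2 * z ** 2 + c * z
            + lam * (p * eta_p / (eta_p - z) + q * eta_m / (eta_m - z)) - lam)

# Clearing denominators in psi(x) - r = 0 gives the polynomial
#   phi(x) = (eta+ - x)(eta- - x)(sigma^2 x^2 / 2 + c x - (lam + r))
#            + lam * (p eta+ (eta- - x) + q eta- (eta+ - x)),
# whose N + 2 = 4 real roots are the exponents beta_n of the value function.
d_p = np.poly1d([-1.0, eta_p])              # (eta+ - x)
d_m = np.poly1d([-1.0, eta_m])              # (eta- - x)
quad = np.poly1d([0.5 * sigma ** 2, c, -(lam + r)])
phi = d_p * d_m * quad + lam * (p * eta_p * d_m + q * eta_m * d_p)

beta = np.sort(phi.r.real)                  # roots, sorted ascending
print(beta)  # interlaces: beta_1 < eta- < beta_2 < 0 < beta_3 < eta+ < beta_4
```

With these parameters, ψ(1) < r holds, so the interlacing property and Lemma 3.1 (β_{N^-+2} > 1) can both be verified on the computed roots.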

Example 1 Consider the case X_t = ct + σB_t with σ²/2 + c < r. Since the process X has no jump part, the system of equations (3.3)-(3.8) reduces to

    C_1 e^{β_1 h_2} + C_2 e^{β_2 h_2} = e^{h_2} - K_2,    (3.11)
    C_1 e^{β_1 h_1} + C_2 e^{β_2 h_1} = K_1 - e^{h_1},    (3.12)
    C_1 β_1 e^{β_1 h_2} + C_2 β_2 e^{β_2 h_2} = e^{h_2},    (3.13)
    C_1 β_1 e^{β_1 h_1} + C_2 β_2 e^{β_2 h_1} = -e^{h_1},    (3.14)

where β_1 and β_2 are the solutions of (σ²/2)x² + cx - r = 0, that is,

    β_1 = (-c - sqrt(c² + 2rσ²))/σ²,  β_2 = (-c + sqrt(c² + 2rσ²))/σ².

By (3.11) and (3.12), we obtain

    C_1 = [e^{β_2 h_1}(e^{h_2} - K_2) - e^{β_2 h_2}(K_1 - e^{h_1})]/det A,
    C_2 = [e^{β_1 h_2}(K_1 - e^{h_1}) - e^{β_1 h_1}(e^{h_2} - K_2)]/det A,    (3.15)

where A = [[e^{β_1 h_2}, e^{β_2 h_2}], [e^{β_1 h_1}, e^{β_2 h_1}]]. Hence, in terms of h_1 and h_2, we have explicit formulas for C_1 and C_2. To determine h_1 and h_2, plugging the expressions (3.15) into (3.13) and (3.14), we see that (3.13) and (3.14) are equivalent to the equations

    [(1 - β_2)e^{h_2} + β_2 K_2] e^{-β_1 h_2} = [(β_2 - 1)e^{h_1} - β_2 K_1] e^{-β_1 h_1},    (3.16)
    [(1 - β_1)e^{h_2} + β_1 K_2] e^{-β_2 h_2} = [(β_1 - 1)e^{h_1} - β_1 K_1] e^{-β_2 h_1}.    (3.17)

Gapeev and Lerche [7] showed that there is a unique solution (h_1*, h_2*) of the equations (3.16) and (3.17). Then, by a verification lemma, they proved that (h_1*, h_2*) is the continuation region for the corresponding optimal stopping problem and that the value function on (h_1*, h_2*) is given by V(x) = C_1* e^{β_1 x} + C_2* e^{β_2 x}, where C_1* and C_2* are computed by the formulas in (3.15) with h_1, h_2 replaced by h_1*, h_2*. For a martingale approach to this optimal stopping problem, see Beibel and Lerche [2].

In the following we solve the optimal stopping problem (1) using the results of Section 2. Our approach also gives an algorithm for finding solutions of the system (3.11)-(3.14). (In fact our method will be applied later to processes with jumps.) Assume that {C_1, C_2, h_1, h_2} is a solution of (3.11)-(3.14). From these equations we have ADC = K, where D = diag(1 - β_1, 1 - β_2), C = [C_1, C_2]^T and K = [-K_2, K_1]^T. From this,

    C_1 = (K_1 e^{β_2 h_2} + K_2 e^{β_2 h_1}) / ((β_1 - 1) det A),
    C_2 = -(K_1 e^{β_1 h_2} + K_2 e^{β_1 h_1}) / ((β_2 - 1) det A),    (3.18)

uniquely determined by h_1 and h_2. (By Lemma 3.1, β_2 > 1; hence C_1 and C_2 have the same sign.) On the other hand, the matrix form of (3.13) and (3.14) is

    [[β_1 e^{β_1 h_2}, β_2 e^{β_2 h_2}], [β_1 e^{β_1 h_1}, β_2 e^{β_2 h_1}]] [C_1, C_2]^T = [e^{h_2}, -e^{h_1}]^T.    (3.19)

Multiplying (3.13) by e^{-h_2}, (3.14) by e^{-h_1} and adding, we get

    C_1 β_1 (e^{(β_1-1)h_1} + e^{(β_1-1)h_2}) + C_2 β_2 (e^{(β_2-1)h_1} + e^{(β_2-1)h_2}) = 0.

Combining this with the expressions for C_1 and C_2 in (3.18) gives

    det [[β_1(e^{(β_1-1)h_1} + e^{(β_1-1)h_2}), β_2(e^{(β_2-1)h_1} + e^{(β_2-1)h_2})],
         [(β_1-1)(K_1 e^{β_1 h_2} + K_2 e^{β_1 h_1}), (β_2-1)(K_1 e^{β_2 h_2} + K_2 e^{β_2 h_1})]] = 0.    (3.20)

Factoring e^{β_1 h_1} out of the first column, e^{β_2 h_1} out of the second column, and e^{-h_1} out of the first row of the matrix in (3.20), we obtain

    det [[β_1(1 + e^{(β_1-1)Δh}), β_2(1 + e^{(β_2-1)Δh})],
         [(β_1-1)(K_2 + K_1 e^{β_1 Δh}), (β_2-1)(K_2 + K_1 e^{β_2 Δh})]] = 0,    (3.21)

where Δh = h_2 - h_1. In addition, from (3.14) and (3.18), we have

    e^{h_1} = [β_2(K_2 + K_1 e^{β_1 Δh})/(β_2 - 1) - β_1(K_2 + K_1 e^{β_2 Δh})/(β_1 - 1)] / (e^{β_1 Δh} - e^{β_2 Δh}),

which gives

    h_1 = log { [β_2(K_2 + K_1 e^{β_1 Δh})/(β_2 - 1) - β_1(K_2 + K_1 e^{β_2 Δh})/(β_1 - 1)] / (e^{β_1 Δh} - e^{β_2 Δh}) },    (3.22)

and hence also h_2 = h_1 + Δh. With these h_1 and h_2, we compute C_1, C_2 by (3.18).

To prove the existence of solutions of (3.11)-(3.14), we show that (3.21) has a solution Δh in (0, ∞). As Δh → 0, the left-hand side of (3.21) tends to

    2(K_1 + K_2)(β_2 - β_1) > 0,    (3.23)

while as Δh → ∞ the dominant term is β_1(β_2 - 1) K_1 e^{β_2 Δh}, which tends to -∞ since β_1 < 0. Therefore (3.21) has a solution Δh in (0, ∞) by the intermediate value theorem. With this Δh, we compute {C_1, C_2, h_1, h_2} by the formulas (3.22), (3.18) and h_2 = h_1 + Δh. (Later in this paper we show that {C_1, C_2, h_1, h_2} is a solution of the system (3.11)-(3.14).) Define the function V(x) by

    V(x) = C_1 e^{β_1 x} + C_2 e^{β_2 x} if x ∈ (h_1, h_2);  V(x) = g(x) if x ∈ (h_1, h_2)^c,

where g is the function in (3.2). Then V is a solution of the boundary value problem in Proposition 2.1. Hence V(x) = E_x[e^{-rτ_{(h_1,h_2)^c}} g(X_{τ_{(h_1,h_2)^c}})] for all x ∈ R. Also, by Proposition 2.2, V(x) ≥ g(x) for all x. To prove that V is indeed the value function it remains, by Theorem 2.1, to verify that (L_X - r)V(x) ≤ 0 for all x ∉ (h_1, h_2). For x > h_2 we have

    (L_X - r)V(x) = (σ²/2)g_2''(x) + cg_2'(x) - rg_2(x) = (σ²/2 + c - r)e^x + rK_2.

Since σ²/2 + c < r, we observe (d/dx)(L_X - r)V(x) = (σ²/2 + c - r)e^x < 0, so (L_X - r)V is decreasing on (h_2, ∞). In addition, (L_X - r)V(x) = 0 for x ∈ (h_1, h_2), and hence

    (L_X - r)V(h_2+) = (L_X - r)V(h_2+) - (L_X - r)V(h_2-)
        = [(σ²/2)V''(h_2+) + cV'(h_2+) - rV(h_2+)] - [(σ²/2)V''(h_2-) + cV'(h_2-) - rV(h_2-)]
        = (σ²/2)[V''(h_2+) - V''(h_2-)] = (σ²/2)(e^{h_2} - Σ_{n=1}^2 C_n β_n² e^{β_n h_2}).

(The third equality holds because V satisfies both the continuous fit and the smooth pasting conditions at h_2.) Since V(x) ≥ 0 and C_1, C_2 have the same sign, we observe C_i ≥ 0 for i = 1, 2. Also β_1 < 0 < 1 < β_2. Therefore

    (L_X - r)V(h_2+) = (σ²/2)(e^{h_2} - Σ_{n=1}^2 C_n β_n² e^{β_n h_2}) ≤ (σ²/2)(e^{h_2} - Σ_{n=1}^2 C_n β_n e^{β_n h_2}) = 0.

This implies (L_X - r)V(x) ≤ (L_X - r)V(h_2+) ≤ 0 for all x > h_2. By a similar argument, (L_X - r)V(x) ≤ 0 for x < h_1. The proof is complete.

Figure 1: Relationship of the η_i^-, 1 ≤ i ≤ N^-, the η_j^+, 1 ≤ j ≤ N^+, and the β_n, 1 ≤ n ≤ N + 2.

Now we go back to the equations (3.3)-(3.8). From this point on, we set N = N^- + N^+ and assume that the conditions (3.9) and (3.10) hold. Subtracting (3.7) from (3.5) and (3.8) from (3.6), we have

    Σ_{n=1}^{N+2} C_n (1 - β_n) e^{β_n h_2} = -K_2,    (3.24)
    Σ_{n=1}^{N+2} C_n (1 - β_n) e^{β_n h_1} = K_1.    (3.25)

Using (3.25), (3.8) and (3.4), we have

    Σ_{n=1}^{N+2} C_n β_n (1 - β_n) e^{β_n h_1} / (β_n - η_k^-) = 0,  k = 1, 2, ..., N^-.    (3.26)

Similarly, by (3.24), (3.7) and (3.3), we have

    Σ_{n=1}^{N+2} C_n β_n (1 - β_n) e^{β_n h_2} / (β_n - η_k^+) = 0,  k = 1, 2, ..., N^+.    (3.27)

From equations (3.24) and (3.25), we also have

    Σ_{n=1}^{N+2} C_n (1 - β_n)(K_1 e^{β_n h_2} + K_2 e^{β_n h_1}) = 0.    (3.28)

In addition, by (3.7) and (3.8), we have

    Σ_{n=1}^{N+2} C_n β_n (e^{(β_n-1)h_1} + e^{(β_n-1)h_2}) = 0.    (3.29)

Lemma 3.2 Assume that {C_1, ..., C_{N+2}, h_1, h_2} is a solution of the equations (3.3)-(3.8). Then C_j ≠ 0 for all except at most one j.

Proof: See the Appendix.

Lemma 3.3 Assume that {C_1, ..., C_{N+2}, h_1, h_2} is a solution of the equations (3.3)-(3.8). Then C_n ≥ 0 for all n.

Proof: See the Appendix.

Lemma 3.4 Assume that Σ_{n=1}^{N+2} C_n β_n e^{β_n x_0} = e^{x_0} for some x_0 ∈ R. Then there exists ϵ > 0 such that Σ_{n=1}^{N+2} C_n β_n e^{β_n x} < e^x for all x ∈ (x_0 - ϵ, x_0). Also, Σ_{n=1}^{N+2} C_n β_n e^{β_n x} ≥ e^x for all x ≥ x_0.

Proof: See the Appendix.

Theorem 3.1 Let {C_1, ..., C_{N+2}, h_1, h_2} be a solution of the equations (3.3)-(3.8). Define the function V(x) by

    V(x) = Σ_{n=1}^{N+2} C_n e^{β_n x} if x ∈ (h_1, h_2);  V(x) = g(x) if x ∈ (h_1, h_2)^c,

where g is the function in (3.2). Then V is the value function of the optimal stopping problem (1). Also, V(x) = E_x[e^{-rτ_{(h_1,h_2)^c}} g(X_{τ_{(h_1,h_2)^c}})] for all x ∈ R, and hence τ_{(h_1,h_2)^c} is an optimal stopping time for the optimal stopping problem (1).

Proof: Clearly the function V satisfies conditions (a)-(c) of Theorem 2.1. Direct computation shows that V is a solution of the boundary value problem (2.7). Because the C_n are nonnegative by Lemma 3.3, we have h_1 < l_1 = ln K_1 ≤ ln K_2 = l_2 < h_2 by (3.5) and (3.6). Also, the functions g_1 and g_2 satisfy the conditions of Proposition 2.1; therefore V(x) = E_x[e^{-rτ_{(h_1,h_2)^c}} g(X_{τ_{(h_1,h_2)^c}})] for all x ∈ R. The functions g_1 and g_2 also satisfy the conditions of Proposition 2.2, and V satisfies conditions (c) and (d) there; hence, by Proposition 2.2, Σ_{n=1}^{N+2} C_n e^{β_n x} ≥ g(x) for x ∈ (h_1, h_2). By Theorem 2.1, it remains to show that (L_X - r)V(x) ≤ 0 for x ∈ [h_1, h_2]^c.

For x > h_2 > ln K_2, a direct calculation using (3.5) gives

    (L_X - r)V(x) = (ψ(1) - r)e^x + rK_2 + λ Σ_{j=1}^{N^-} q_j Ψ_j(h_2) e^{η_j^- x},

where, for 1 ≤ j ≤ N^- and x ∈ R,

    Ψ_j(x) = e^{(1-η_j^-)x}/(1 - η_j^-) - Σ_{n=1}^{N+2} C_n β_n e^{(β_n-η_j^-)x}/(β_n - η_j^-).

First we show that Ψ_j(h_2) > 0. By (3.4) and (3.6), we have Ψ_j(h_1) = 2e^{(1-η_j^-)h_1}/(1 - η_j^-) > 0. Also,

    Ψ_j'(x) = e^{-η_j^- x} (e^x - Σ_{n=1}^{N+2} C_n β_n e^{β_n x}).

We need the fact that Σ_{n=1}^{N+2} C_n β_n e^{β_n x} - e^x ≤ 0 for all x ∈ [h_1, h_2). (Indeed, if Σ_n C_n β_n e^{β_n h'} - e^{h'} = 0 for some h' ∈ (h_1, h_2), then by Lemma 3.4, Σ_n C_n β_n e^{β_n x} ≥ e^x for all x ≥ h'. But by (3.7), Σ_n C_n β_n e^{β_n h_2} - e^{h_2} = 0, so by Lemma 3.4 there exists ϵ > 0 such that Σ_n C_n β_n e^{β_n x} < e^x for all x ∈ (h_2 - ϵ, h_2), a contradiction.) Combining this with the fact that Σ_n C_n β_n e^{β_n h_1} - e^{h_1} = -2e^{h_1} < 0 (by (3.8)), we obtain Σ_n C_n β_n e^{β_n x} ≤ e^x for all x ∈ [h_1, h_2], and hence Ψ_j'(x) ≥ 0 on [h_1, h_2]. This implies that Ψ_j is increasing there, so Ψ_j(h_2) ≥ Ψ_j(h_1) > 0. Therefore, for x > h_2,

    (d/dx)(L_X - r)V(x) = (ψ(1) - r)e^x + λ Σ_{j=1}^{N^-} q_j Ψ_j(h_2) η_j^- e^{η_j^- x} ≤ 0,

which implies that (L_X - r)V is decreasing on (h_2, ∞) with maximum value (L_X - r)V(h_2+). Because V satisfies the smooth pasting condition at h_2 and (L_X - r)V(h_2-) = 0, we get

    (L_X - r)V(h_2+) = (L_X - r)V(h_2+) - (L_X - r)V(h_2-) = (σ²/2)(V''(h_2+) - V''(h_2-))
        = (σ²/2)(e^{h_2} - Σ_{n=1}^{N+2} C_n β_n² e^{β_n h_2}) < (σ²/2)(e^{h_2} - Σ_{n=1}^{N+2} C_n β_n e^{β_n h_2}) = 0.

Therefore (L_X - r)V(x) ≤ (L_X - r)V(h_2+) < 0 for all x > h_2. By the same procedure, we verify that (L_X - r)V is increasing for x ≤ h_1 and that (L_X - r)V(h_1-) ≤ 0, which implies (L_X - r)V(x) ≤ 0 for all x ≤ h_1. The proof is complete.
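In the diffusion case (N^+ = N^- = 0), the system reduces to the four equations of Example 1, which makes the construction above easy to test numerically. The sketch below uses hypothetical parameters (not those of Section 5): for a given pair (h_1, h_2) it solves the two value-matching equations (3.11)-(3.12) linearly for (C_1, C_2), and then drives the two smooth-pasting residuals (3.13)-(3.14) to zero with a damped Newton iteration.

```python
import numpy as np

# Hypothetical diffusion parameters with sigma^2/2 + c < r (Example 1 setting).
sigma, c, r = 0.25, 0.01, 0.06
K1, K2 = 50.0, 100.0                      # put and call strikes, K1 <= K2
disc = np.sqrt(c ** 2 + 2.0 * r * sigma ** 2)
b1 = (-c - disc) / sigma ** 2             # beta_1 < 0
b2 = (-c + disc) / sigma ** 2             # beta_2 > 1 since sigma^2/2 + c < r

def coeffs(h1, h2):
    # Solve the value-matching equations (3.11)-(3.12) for (C1, C2).
    A = np.array([[np.exp(b1 * h2), np.exp(b2 * h2)],
                  [np.exp(b1 * h1), np.exp(b2 * h1)]])
    rhs = np.array([np.exp(h2) - K2, K1 - np.exp(h1)])
    return np.linalg.solve(A, rhs)

def residual(hh):
    # Smooth-pasting residuals (3.13)-(3.14) as a function of (h1, h2).
    h1, h2 = hh
    C1, C2 = coeffs(h1, h2)
    return np.array([C1 * b1 * np.exp(b1 * h2) + C2 * b2 * np.exp(b2 * h2) - np.exp(h2),
                     C1 * b1 * np.exp(b1 * h1) + C2 * b2 * np.exp(b2 * h1) + np.exp(h1)])

hh = np.array([np.log(K1) - 0.5, np.log(K2) + 0.5])   # initial guess
for _ in range(200):
    F = residual(hh)
    if np.max(np.abs(F)) < 1e-12:
        break
    J = np.empty((2, 2))                  # finite-difference Jacobian
    for j in range(2):
        dh = np.zeros(2)
        dh[j] = 1e-7
        J[:, j] = (residual(hh + dh) - F) / 1e-7
    step = np.linalg.solve(J, F)
    t = 1.0                               # damping: halve until residual drops
    while t > 1e-6 and np.linalg.norm(residual(hh - t * step)) >= np.linalg.norm(F):
        t *= 0.5
    hh = hh - t * step

h1, h2 = hh
C1, C2 = coeffs(h1, h2)
```

As predicted by Lemma 3.3 and Theorem 3.1, the computed coefficients come out nonnegative and the boundaries satisfy h_1 < ln K_1 and h_2 > ln K_2.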

4 Existence of Solutions to the Equations (3.3)-(3.8)

In this section we prove the existence of solutions to the system of equations (3.3)-(3.8). According to (3.25)-(3.28), we have ÃDC = K̃, where D is the (N+2)×(N+2) diagonal matrix with entries d_nn = β_n(1 - β_n), K̃ = [0, 0, ..., 0, K_1]^T is an (N+2) column vector and Ã is the (N+2)×(N+2) matrix whose n-th column is

    [ e^{β_n h_1}/(β_n - η_1^-), ..., e^{β_n h_1}/(β_n - η_{N^-}^-),
      e^{β_n h_2}/(β_n - η_1^+), ..., e^{β_n h_2}/(β_n - η_{N^+}^+),
      (K_1 e^{β_n h_2} + K_2 e^{β_n h_1})/β_n,  e^{β_n h_1}/β_n ]^T.

Then the coefficient vector C equals

    C = (K_1 / det(ÃD)) Y,    (4.1)

where Y is the last column of the cofactor matrix of ÃD. Thus, once the boundaries h_1, h_2 of the continuation region are found, the coefficient vector C can be computed by (4.1).

Proposition 4.1 Let {C_1, ..., C_{N+2}, h_1, h_2} be a solution of the equations (3.3)-(3.8). Then Δh = h_2 - h_1 is a solution of the equation det B(Δh) = 0, where for every h ∈ R, B(h) is the (N+2)×(N+2) matrix whose n-th column is

    [ 1/(β_n - η_1^-), ..., 1/(β_n - η_{N^-}^-),
      e^{β_n h}/(β_n - η_1^+), ..., e^{β_n h}/(β_n - η_{N^+}^+),
      (K_2 + K_1 e^{β_n h})/β_n,  (1 + e^{(β_n-1)h})/(β_n - 1) ]^T.    (4.2)

Moreover,

    h_1 = log det A_1 - log det A_2,    (4.3)

where A_1 and A_2 are the (N+2)×(N+2) matrices obtained from B(Δh) by suitable modifications of its last two rows; like the entries of B(Δh), their entries involve only the β_n, the η_k^±, K_1, K_2 and Δh.

Proof: See the Appendix.

Proposition 4.2 Given any h ∈ R, define the matrix B(h) as in (4.2). Then there exists a positive solution Δh of the equation det B(h) = 0.

Proof: See the Appendix.

Theorem 4.1 Let Δh be a positive solution of the equation det B(h) = 0 and define h_1 by (4.3). Set h_2 = h_1 + Δh and compute {C_1, ..., C_{N+2}} by the formula (4.1). Then {C_1, ..., C_{N+2}, h_1, h_2} is a solution of the equations (3.3)-(3.8).

Proof: The system of equations (3.3)-(3.8) is equivalent to ÃDC = K̃ together with the smooth pasting conditions (3.7) and (3.8). From the proof of Proposition 4.1, we know that {C_1, ..., C_{N+2}, h_1, h_2} satisfies ÃDC = K̃ and (3.8). It remains to check that (3.7) is satisfied. By (4.1), the left-hand side of (3.7) is Σ_{n=1}^{N+2} K_1 y_n e^{β_n h_2}/(det(ÃD)(1 - β_n)), which can be written as a ratio of two determinants, the numerator being obtained from det Ã by replacing its last row by the row with entries e^{β_n h_2}/(1 - β_n). Factoring e^{β_n h_1} out of the n-th column, for each n, and suitable powers of e^{h_1} and e^{h_2} out of the rows reduces this ratio to (det A_2 / det A_1) e^{h_1}.

Since Δh satisfies det B(Δh) = 0, further column operations (multiplying the n-th column by e^{β_n Δh} and extracting the factor e^{-Δh} from the last row) yield det A_1 = e^{-Δh} det A_2. Therefore the left-hand side of (3.7) equals (det A_2 / det A_1) e^{h_1} = e^{h_1 + Δh} = e^{h_2}. The proof is complete.

5 Numerical Results

In this section we solve the system of equations (3.3)-(3.8) numerically, in three steps. First, we find the length Δh of the optimal interval by solving the equation det B(h) = 0, where B(h) is the square matrix in (4.2). (Note that this equation depends only on h, so a simple and fast approach such as Newton's method can be used to solve it.) Second, we compute h_1 by (4.3) and set h_2 = h_1 + Δh. Finally, we obtain the coefficient vector C by (4.1) and evaluate the value function V(x) = Σ_{n=1}^{N+2} C_n e^{β_n x} for x ∈ (h_1, h_2).

Example 3 Consider the case N^+ = N^- = 1. In addition, as in Boyarchenko [3], take c = 0.05, σ = 0.25, r = 0.06, η_1^+ = 4, η_1^- = -0.7, λ = 3.5, p_1 = q_1 = 0.5 and the strike prices K_1 = 50 and K_2 = 100. Then the value function is given by V(x) = Σ_{n=1}^4 C_n e^{β_n x} on (h_1*, h_2*), where

    (h_1*, h_2*) = (2.992, 6.953),
    {β_1, β_2, β_3, β_4} = {-3.482, -0.2322, 1.995, 6.953},
    {C_1, C_2, C_3, C_4} = {259533, 6224, 0283, ...}.
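The scalar root search in Step 1 can be organized as a Newton iteration with a bisection safeguard, so that the iterate can never leave a sign-changing bracket. The sketch below is generic: for checkability it is applied to a stand-in test function with a known root, not to det B(h) itself.

```python
# Safeguarded Newton iteration: Newton steps with a bisection fallback,
# suitable for solving a scalar equation such as det B(h) = 0 once a
# sign-changing bracket [a, b] is known. Generic sketch; f below is a
# stand-in test function, not the actual determinant.

def solve_scalar(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # maintain the sign-changing bracket
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        # Newton step from a finite-difference slope
        h = 1e-7 * (1.0 + abs(x))
        slope = (f(x + h) - fx) / h
        x_new = x - fx / slope if slope != 0 else x
        # fall back to bisection if Newton leaves the bracket
        if not (a < x_new < b):
            x_new = 0.5 * (a + b)
        x = x_new
    return x

# check on a function with a known root (cube root of 2)
root = solve_scalar(lambda x: x ** 3 - 2.0, 1.0, 2.0)
```

The same routine could be pointed at h ↦ det B(h) once the matrix in (4.2) is assembled; the bracket is available because, as Figure 3 suggests, the determinant changes sign exactly once.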

Figure 2: The solid line is the value function V(x) for the jump-diffusion model with N^+ = N^- = 1 and the dashed line is the one for the diffusion model, that is, N^+ = N^- = 0. The optimal boundaries are marked by circles for the jump-diffusion model and by triangles for the diffusion model.

Besides, if we take N^+ = N^- = 0, which is the diffusion case of Example 1, we obtain V(x) = Σ_{n=1}^2 C_n e^{β_n x} on (h_1*, h_2*), where (h_1*, h_2*) = (3.45, 4.859) and {β_1, β_2} = {5607, 49207}, with coefficients C_1, C_2 computed by (3.15). It is interesting to note that in the jump-diffusion model the optimal interval (h_1*, h_2*) is much wider than in the diffusion case. This makes sense: jumps create additional opportunities for large gains, so investors in the jump-diffusion environment can be expected not to exercise the options earlier than in the diffusion one. Figure 3 shows the graph of the determinant det B(h) as a function of h. The zero of the determinant (this is Δh) is unique, and the graph descends sharply near the zero, so the numerical computation of Δh is fast and accurate.

Example 4 Consider the jump-diffusion model with N^- = N^+ = 2, and let c = 0.05, σ = 0.25, r = 0.06, η_1^+ = 2.5, η_2^+ = 5, η_1^- = -7.5, η_2^- = -2.4, λ = 3.5, p_1 = p_2 = q_1 = q_2 = 0.25 and the strike prices K_1 = 50 and K_2 = 100. In this model the expected value E[e^{X_1}] is the same as in the model with N^- = N^+ = 1 of Example 3. The value function is V(x) = Σ_{n=1}^6 C_n e^{β_n x} on (h_1*, h_2*), where

    (h_1*, h_2*) = (2.53, 6.380),
    {β_1, ..., β_6} = {-7.997, -2.9409, -0.055, 1.642, 3.242, 7.093},

with coefficients C_1, ..., C_6 computed by (4.1).

As noted before, the models in Example 3 and Example 4 have the same expected value E[e^{X_1}]. However, the optimal interval in Example 4 (N^- = N^+ = 2) is wider than that for the case N^- = N^+ = 1.

6 Concluding Remarks
American option contracts are more complicated to analyze than their European counterparts, because an American option can be exercised at any time prior to its expiration. Mathematically, this means that we have to solve an optimal stopping problem of the form in (1.1). Instead of the corresponding PDEs for the European counterparts, a problem of this kind always leads to so

Figure 3: The graph of the determinant of B(h) for finding the length h of the optimal interval. It shows that there is only one zero of the determinant.

Figure 4: The solid line is the value function V(x) for the jump-diffusion model with N⁺ = N⁻ = 2 and the dashed line is the one for the model with N⁺ = N⁻ = 1. The optimal boundaries for the case N⁺ = N⁻ = 2 are marked by circles and by triangles for the case N⁻ = N⁺ = 1.
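The uniqueness of the zero observed in Figures 3 and 5 can be supported numerically by scanning the determinant on a grid and counting sign changes. This is a minimal sketch; `det_B` is again a hypothetical stand-in for h ↦ det B(h), not the actual determinant from (4.2).

```python
import math

def count_sign_changes(f, lo, hi, steps=2000):
    """Scan f on a uniform grid and count sign changes between neighbors.
    A single change supports (but does not prove) uniqueness of the zero."""
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    vals = [f(x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# Hypothetical stand-in for h -> det B(h): strictly increasing, one root at ln 2.
det_B = lambda h: math.expm1(h) - 1.0   # equals e^h - 2
```

A scan returning exactly one sign change on the search range mirrors what Figure 3 shows graphically; the steep descent near the zero also explains why Newton-type iterations converge in a few steps.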

called free boundary problems, which are not easy to solve. Usually we have no explicit pricing formulas for the value functions, and the optimal exercise boundaries are not known. We refer to the monograph of Peskir and Shiryaev [15] for more details and related topics on optimal stopping and free boundary problems.

Figure 5: The graph of the determinant for finding the length of the optimal interval for the case N⁻ = N⁺ = 2. The figure has similar properties as for the case N⁻ = N⁺ = 1; in particular, there is only one zero of the determinant.

The American call and put options are the simplest American contracts. The pricing problem for these options has been widely studied and generalized since McKean [10] and Merton [11]. For recent works in the Lévy-model setting, we refer to Mordecki and Salminen [12], Boyarchenko and Levendorskii [4] and Asmussen et al. [1] and the references therein. In this paper we consider the perpetual American strangle and straddle options; a strangle is a combination of a put and a call written on the same security. As in Asmussen et al. (2004) and many others, we consider the pricing problems of these options in jump-diffusion models. By the free boundary problem approach, we solve the corresponding optimal stopping problems and hence find the optimal exercise boundaries and the rational prices of the perpetual American strangle and straddle options. More precisely, following the approach in Chen et al. [5], we derive an equivalent system of equations for the free boundary problem with the smooth pasting condition. By solving this system of equations, we obtain an algorithm for computing the rational prices and the optimal exercise boundaries of these options. (Boyarchenko [3] studied the same pricing problems by a different approach, assuming the smooth pasting principle for the value functions. In fact, Boyarchenko posed the verification of the smooth pasting principle for the value functions as an open problem in [3], and we resolve this open
problem in Theorem 3.1.) The present method, together with the general results in Section 2, could possibly give an alternative approach to computing prices of other exotic options in jump-diffusion models.

7 Appendix

Proof of Proposition 2.1. We follow a similar argument as that in Chen et al. [5]. Fix x ∈ (h₁, h₂). Pick a sequence of functions {Ṽ_n} ⊂ C₀²(R) such that Ṽ_n = Ṽ on [h₁, h₂] and Ṽ_n → Ṽ on R. Since g is bounded, we can choose {Ṽ_n} such that the Ṽ_n are uniformly bounded on (−∞, c] for any c ∈ R, and |Ṽ_n(x)| ≤ 2g²(x) for all n and all x ≥ M. Here M > h₂ is a strictly positive constant (independent

of n). By Dynkin's formula, we have

E_x[e^{−r(t∧τ_I)} Ṽ_n(X_{t∧τ_I})] = E_x[ ∫₀^{t∧τ_I} e^{−ru} (L_X − r)Ṽ_n(X_u) du ] + Ṽ(x).    (7.1)

For every u < τ_I ∧ t, we have X_u ∈ (h₁, h₂) and hence Ṽ_n(X_u) = Ṽ(X_u). This gives

(L_X − r)Ṽ(X_u) − (L_X − r)Ṽ_n(X_u) = ∫ [Ṽ(X_u + y) − Ṽ_n(X_u + y)] f(y) dy    (7.2)

and hence

|(L_X − r)[Ṽ(X_u) − Ṽ_n(X_u)]| ≤ sup_{z ≤ M+1} (|Ṽ(z)| + |Ṽ_n(z)|) + sup_{h₁ ≤ x ≤ h₂} ∫_{M−h₂}^{∞} 3g²(x + y) f(y) dy < ∞.    (7.3)

By the Dominated Convergence Theorem and (7.2), for all u < t∧τ_I, (L_X − r)Ṽ_n(X_u) → (L_X − r)Ṽ(X_u) as n → ∞. By (7.3) and the Dominated Convergence Theorem, we have

lim_{n→∞} E_x[ ∫₀^{t∧τ_I} e^{−ru} (L_X − r)Ṽ_n(X_u) du ] = E_x[ ∫₀^{t∧τ_I} e^{−ru} (L_X − r)Ṽ(X_u) du ].

Note that |Ṽ_n(x)| ≤ sup_n sup_{x ≤ M} |Ṽ_n(x)| + 2g²(x) and E_x[sup_{t≥0} e^{−rt} g(X_t)] < ∞. Letting n → ∞ on both sides of (7.1), together with the dominated convergence theorem, gives

E_x[e^{−r(t∧τ_I)} Ṽ(X_{t∧τ_I})] = E_x[ ∫₀^{t∧τ_I} e^{−ru} (L_X − r)Ṽ(X_u) du ] + Ṽ(x) = Ṽ(x).    (7.4)

Note that the last equality follows from the assumption that (L_X − r)Ṽ = 0 in (h₁, h₂). Since

E_x[ sup_{t≥0} e^{−rt} |Ṽ(X_t)| ] ≤ sup_{y ≤ h₂} |Ṽ(y)| + E_x[ sup_{t≥0} e^{−rt} g(X_t) ] < ∞,

our result follows by letting t → ∞ on both sides of the equality in (7.4) and the Dominated Convergence Theorem. This completes the proof.

Proof of Lemma 3.1. First consider the case N⁺ = 0. Then β_{N⁻+2} is the unique solution to the equation ϕ(x) = 0 in (0, ∞). Observe that ϕ(1) < 0 and lim_{x→∞} ϕ(x) = ∞. Our result follows by the intermediate value theorem. Next assume that N⁺ ≥ 1. Then β_{N⁻+2} is the unique solution in (0, η₁⁺) of the equation

ϕ(x) = ∏_{i=1}^{N⁺} (η_i⁺ − x) ∏_{j=1}^{N⁻} (η_j⁻ + x) [ (1/2)σ²x² + cx − (λ + r) + λ( Σ_{i=1}^{N⁺} p_i η_i⁺/(η_i⁺ − x) + Σ_{j=1}^{N⁻} q_j η_j⁻/(η_j⁻ + x) ) ] = 0.

Also we have

ϕ(1) = ∏_{i=1}^{N⁺} (η_i⁺ − 1) ∏_{j=1}^{N⁻} (η_j⁻ + 1) [ (1/2)σ² + c − (λ + r) + λ( Σ_{i=1}^{N⁺} p_i η_i⁺/(η_i⁺ − 1) + Σ_{j=1}^{N⁻} q_j η_j⁻/(η_j⁻ + 1) ) ]

and ϕ(η₁⁺) = λ p₁ η₁⁺ ∏_{i=2}^{N⁺} (η_i⁺ − η₁⁺) ∏_{j=1}^{N⁻} (η_j⁻ + η₁⁺). By (3.9) and (3.10), we obtain ϕ(1)ϕ(η₁⁺) < 0, which implies β_{N⁻+2} > 1.

Proof of Lemma 3.2. Set h = h₂ − h₁ and put Ĉ_n = e^{β_n h₁} (1 − β_n) β_n C_n for 1 ≤ n ≤ N+2. Then, by (3.24)-(3.27), we have AĈ = K, where Ĉ = (Ĉ₁, …, Ĉ_{N+2})ᵀ, and A and K are, respectively, the square matrix of order N+2 (with entries built from the β_n's, the rates η_k^± and the exponentials e^{β_n h}) and the column vector displayed in (3.24)-(3.27).

Let F₁(x) = S₁(x)/∏_{i=1}^{N+2}(β_i − x) and F₂(x) = S₂(x)/∏_{i=1}^{N+2}(β_i − x), where F₁(x) = Σ_{n=1}^{N+2} Ĉ_n/(β_n − x) and F₂(x) = Σ_{n=1}^{N+2} e^{β_n h} Ĉ_n/(β_n − x). Clearly,

S₁(x) = Σ_{n=1}^{N+2} Ĉ_n ∏_{i=1, i≠n}^{N+2} (β_i − x) and S₂(x) = Σ_{n=1}^{N+2} e^{β_n h} Ĉ_n ∏_{i=1, i≠n}^{N+2} (β_i − x).    (7.5)

Then S₁(x) and S₂(x) are polynomials of degree at most N+1. Also, by the fact that AĈ = K, we have S₁(0) = K₁ ∏_{i=1}^{N+2} β_i, S₂(0) = −K₂ ∏_{i=1}^{N+2} β_i, S₁(η_k⁻) = 0 for 1 ≤ k ≤ N⁻ and S₂(η_k⁺) = 0 for 1 ≤ k ≤ N⁺. By (7.5), we have

Ĉ_n = S₁(β_n)/∏_{i=1, i≠n}^{N+2}(β_i − β_n) = e^{−β_n h} S₂(β_n)/∏_{i=1, i≠n}^{N+2}(β_i − β_n) for 1 ≤ n ≤ N+2.

From this, we have S₂(β_n) = S₁(β_n) = 0 if and only if S₂(β_n)S₁(β_n) = 0; in addition, Ĉ_n = 0 if and only if S₂(β_n)S₁(β_n) = 0. Also, if S₁(β_k) and S₂(β_k) are nonzero for some 1 ≤ k ≤ N+2, then

S₂(β_k)/S₁(β_k) = e^{β_k h}.    (7.6)

It remains to show that |Θ| ≤ 1, where Θ = {β_n : S₁(β_n) = S₂(β_n) = 0, 1 ≤ n ≤ N+2} and |Θ| is the cardinality of Θ. To do this, we need the following facts:

(1) If S₂(x) ≠ 0 on (η_k⁻, β_{k+1}] for some k, 1 ≤ k ≤ N⁻, then S₂(x) − S₁(x) = 0 has a solution in (η_k⁻, β_{k+1}).
(2) If S₂(x) ≠ 0 on [β_k, η_k⁻) for some k, 1 ≤ k ≤ N⁻, then S₂(x) − S₁(x) = 0 has a solution in (β_k, η_k⁻).
(3) If S₁(x) ≠ 0 on (η_k⁺, β_{N⁻+2+k}] for some k, 1 ≤ k ≤ N⁺, then S₂(x) − S₁(x) = 0 has a solution in (η_k⁺, β_{N⁻+2+k}).
(4) If S₁(x) ≠ 0 on [β_{N⁻+1+k}, η_k⁺) for some k, 1 ≤ k ≤ N⁺, then S₂(x) − S₁(x) = 0 has a solution in (β_{N⁻+1+k}, η_k⁺).
(5) If S₂(x) ≠ 0 on [β_{N⁻+1}, 0), then S₁(x) has a zero in (β_{N⁻+1}, 0).
(6) If S₁(x) ≠ 0 on (0, β_{N⁻+2}], then S₂(x) has a zero in (0, β_{N⁻+2}).

To prove (1), we assume that S₂(x) ≠ 0 for all x ∈ (η_k⁻, β_{k+1}]. Let x* = sup{x ∈ [η_k⁻, β_{k+1}] : S₁(x) = 0}. Note that x* exists because S₁(η_k⁻) = 0, and that x* < β_{k+1}. Because S₂(x)/S₁(x) is continuous on (x*, β_{k+1}], 0 < S₂(β_{k+1})/S₁(β_{k+1}) = e^{β_{k+1} h} < 1 and lim_{x→x*+} |S₂(x)/S₁(x)| = ∞, by the intermediate value theorem there exists x₀ ∈ (x*, β_{k+1}) such that S₂(x₀)/S₁(x₀) = 1. This completes the proof of fact (1). Facts (2)-(4) are verified by similar arguments. Next, we verify fact (5) and assume that S₂(x) ≠ 0 for all x ∈ [β_{N⁻+1}, 0). Then

sgn( S₂(β_{N⁻+1}) S₁(β_{N⁻+1}) ) = sgn( e^{β_{N⁻+1} h} Ĉ_{N⁻+1}² ∏_{i=1, i≠N⁻+1}^{N+2} (β_i − β_{N⁻+1})² ) > 0

and sgn( S₂(0) S₁(0) ) = sgn( −K₁K₂ ∏_{i=1}^{N+2} β_i² ) < 0, which imply that S₁(x) has a zero in (β_{N⁻+1}, 0). The proof of fact (6) is similar.

Let S(x) = S₂(x) − S₁(x). Then S(x) is a polynomial of degree at most N+1 and S(β_k) = 0 whenever β_k ∈ Θ. Let

Π = {[β_{N⁻+1}, 0) : β_{N⁻+1} ∉ Θ} ∪ {(0, β_{N⁻+2}] : β_{N⁻+2} ∉ Θ} ∪ {[β_k, η_k⁻) : β_k ∉ Θ, 1 ≤ k ≤ N⁻} ∪ {(η_k⁻, β_{k+1}] : β_{k+1} ∉ Θ, 1 ≤ k ≤ N⁻} ∪ {[β_{N⁻+1+k}, η_k⁺) : β_{N⁻+1+k} ∉ Θ, 1 ≤ k ≤ N⁺} ∪ {(η_k⁺, β_{N⁻+2+k}] : β_{N⁻+2+k} ∉ Θ, 1 ≤ k ≤ N⁺}.

Note that Π is a collection of intervals and |Π|, the number of intervals in Π, is at least 2(N+1) − 2|Θ|. Let Π′ = {I ∈ Π : S(x) = 0 has no solution in I}. Since |{x : S(x) = 0, x ∉ Θ}| ≤ (N+1) − |Θ|, we get |Π′| ≥ 2(N+1) − 2|Θ| − ((N+1) − |Θ|) = N+1 − |Θ|. For any I ∈ Π′, by facts (1)-(4), we obtain

(a) if sup_{x∈I} x ≤ β_{N⁻+1}, then the equation S₂(x) = 0 has a solution in I;
(b) if inf_{x∈I} x ≥ β_{N⁻+2}, then the equation S₁(x) = 0 has a solution in I.

Also, by fact (5), S₁(x)S₂(x) = 0 for some x ∈ [β_{N⁻+1}, 0); similarly, by fact (6), S₁(x)S₂(x) = 0 for some x ∈ (0, β_{N⁻+2}]. From these observations, combined with the fact that for I₁, I₂ ∈ Π′ we have I₁ ∩ I₂ = ∅ or I₁ ∩ I₂ ⊂ Θᶜ, we obtain

|{x : S₂(x) = 0, x < β_{N⁻+2}, x ∉ Θ}| + |{x : S₁(x) = 0, x > β_{N⁻+1}, x ∉ Θ}| ≥ |Π′| ≥ N+1 − |Θ|.    (7.7)

Recall that S₁(η_k⁻) = 0 for 1 ≤ k ≤ N⁻ and S₂(η_k⁺) = 0 for 1 ≤ k ≤ N⁺. Therefore,

2(N+1) ≥ |{x : S₁(x) = 0}| + |{x : S₂(x) = 0}|
= |{x : S₁(x) = 0, x > β_{N⁻+1}, x ∉ Θ}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}, x ∉ Θ}| + |{x : S₂(x) = 0, x < β_{N⁻+2}, x ∉ Θ}| + |{x : S₂(x) = 0, x ≥ β_{N⁻+2}, x ∉ Θ}| + 2|Θ|
≥ N+1 − |Θ| + N⁻ + N⁺ + 2|Θ| = 2N+1 + |Θ|.    (7.8)

This implies |Θ| ≤ 1. The proof is complete.

Proof of Lemma 3.3. We define S₁, S₂, Θ, Π, Π′ and the Ĉ_n's as in the proof of Lemma 3.2. Since Ĉ_n = e^{β_n h₁}(1 − β_n)β_n C_n and, by Lemma 3.1, β_n ≠ 1 for every n, we observe that C_n ≠ 0 if and only if Ĉ_n ≠ 0. Besides, by Proposition 2.1, we obtain Σ_{n=1}^{N+2} C_n e^{β_n x} = E_x[e^{−r τ_{(h₁,h₂)ᶜ}} g(X_{τ_{(h₁,h₂)ᶜ}})], which is nonnegative for all x ∈ (h₁, h₂). To prove C_n ≥ 0 for all n, it suffices to show that the Ĉ_n's have the same sign. By Lemma 3.2, |Θ| = 0 or 1.

First, we consider the case |Θ| = 1, that is, S₁(β_{k₀}) = S₂(β_{k₀}) = 0 for some 1 ≤ k₀ ≤ N+2. Then |Π| ≥ 2N and, by (7.7),

|{x : S₂(x) = 0, x < β_{N⁻+2}, x ≠ β_{k₀}}| + |{x : S₁(x) = 0, x > β_{N⁻+1}, x ≠ β_{k₀}}| ≥ |Π′| ≥ N.

By (7.8), we obtain |{x : S₂(x) = 0}| + |{x : S₁(x) = 0}| = 2N+2. Hence S₁(x) and S₂(x) are polynomials of degree N+1 and all roots of S₁(x) and of S₂(x) are simple. In addition,

2(N+1) ≥ |{x : S₁(x) = 0}| + |{x : S₂(x) = 0}|
≥ |{x : S₂(x) = 0, x < β_{N⁻+2}, x ≠ β_{k₀}}| + |{x : S₁(x) = 0, x > β_{N⁻+1}, x ≠ β_{k₀}}| + |{x : S₂(x) = 0, x ≥ β_{N⁻+2}, x ≠ β_{k₀}}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}, x ≠ β_{k₀}}| + 2
≥ N + |{x : S₂(x) = 0, x ≥ β_{N⁻+2}, x ≠ β_{k₀}}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}, x ≠ β_{k₀}}| + 2,

and hence N ≥ |{x : S₂(x) = 0, x ≥ β_{N⁻+2}, x ≠ β_{k₀}}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}, x ≠ β_{k₀}}|. Since S₂(η_k⁺) = 0 for 1 ≤ k ≤ N⁺ and S₁(η_k⁻) = 0 for 1 ≤ k ≤ N⁻, we obtain {x : S₁(x) = 0, x ≤ β_{N⁻+1}, x ≠ β_{k₀}} = {η_k⁻ : 1 ≤ k ≤ N⁻} and {x : S₂(x) = 0, x ≥ β_{N⁻+2}, x ≠ β_{k₀}} = {η_k⁺ : 1 ≤ k ≤ N⁺}.

Now we consider the case k₀ = 1, that is, S₁(β₁) = S₂(β₁) = 0. Because η_i⁻ is the unique root of S₁(x) in [β_i, β_{i+1}] for 2 ≤ i ≤ N⁻, we obtain S₁(β_i)S₁(β_{i+1}) < 0. By similar arguments, we also have S₂(β_j)S₂(β_{j+1}) < 0 for N⁻+2 ≤ j ≤ N+1. By (7.6), we have

Ĉ_n Ĉ_{n−1} = e^{−(β_n+β_{n−1})h} S₂(β_n)S₂(β_{n−1}) / [ (β_n − β_{n−1})(β_{n−1} − β_n) ∏_{i=1}^{n−2}(β_i − β_n)(β_i − β_{n−1}) ∏_{j=n+1}^{N+2}(β_j − β_n)(β_j − β_{n−1}) ]
= S₁(β_n)S₁(β_{n−1}) / [ (β_n − β_{n−1})(β_{n−1} − β_n) ∏_{i=1}^{n−2}(β_i − β_n)(β_i − β_{n−1}) ∏_{j=n+1}^{N+2}(β_j − β_n)(β_j − β_{n−1}) ].

Therefore, the elements in C⁻ ≡ {Ĉ_n : 2 ≤ n ≤ N⁻+1} have the same sign, and this is also true for the elements in C⁺ ≡ {Ĉ_n : N⁻+2 ≤ n ≤ N+2}. Because AĈ = K, if the elements in C⁻ are positive and the ones in C⁺ are negative, then we get the contradiction that K₁ = Σ_{n=1}^{N+2} Ĉ_n/β_n < 0; if the elements in C⁻ are negative and the ones in C⁺ are positive, then we get another contradiction, i.e., −K₂

= Σ_{n=1}^{N+2} e^{β_n h} Ĉ_n/β_n > 0. Therefore, the Ĉ_n's must have the same sign. For the case k₀ = N⁻+1, the proof is the same. For the case 1 < k₀ < N⁻+1, by a similar argument as above, we obtain that the elements in C₁⁻ = {Ĉ_n : 1 ≤ n ≤ k₀−1}, C₂⁻ = {Ĉ_n : k₀+1 ≤ n ≤ N⁻+1} and C⁺ = {Ĉ_n : N⁻+2 ≤ n ≤ N+2} have the same sign, respectively. There are eight situations for the signs of C₁⁻, C₂⁻ and C⁺:
(1) C₁⁻ < 0, C₂⁻ < 0 and C⁺ < 0; (2) C₁⁻ > 0, C₂⁻ > 0 and C⁺ > 0; (3) C₁⁻ < 0, C₂⁻ < 0 and C⁺ > 0; (4) C₁⁻ > 0, C₂⁻ > 0 and C⁺ < 0; (5) C₁⁻ < 0, C₂⁻ > 0 and C⁺ > 0; (6) C₁⁻ > 0, C₂⁻ < 0 and C⁺ < 0; (7) C₁⁻ < 0, C₂⁻ > 0 and C⁺ < 0; (8) C₁⁻ > 0, C₂⁻ < 0 and C⁺ > 0.
(We write C_i^± > 0 (respectively < 0) if all elements in C_i^± are greater (smaller) than zero.) We show that cases (3)-(8) are impossible. The arguments for disproving cases (3) and (4) are the same as for the case k₀ = 1. Note that

β₁ < η₁⁻ < β₂ < η₂⁻ < ⋯ < β_{k₀−1} < η_{k₀−1}⁻ < β_{k₀} < ⋯ < β_{N⁻} < η_{N⁻}⁻ < β_{N⁻+1} < 0 < 1 < β_{N⁻+2} < η₁⁺ < ⋯ < β_{N+1} < η_{N⁺}⁺ < β_{N+2}.

Because AĈ = K, comparing the k₀-th entries of AĈ and K, we obtain Σ_{n=1}^{N+2} Ĉ_n/(β_n − η_{k₀}⁻) = 0. Therefore, cases (5) and (6) are impossible. Note that the entries of A satisfy the following:
(a) A_{i,j} < 0 for {(i,j) : j ≤ i ≤ N⁻+1} ∪ {(i,j) : N⁻+2 ≤ i ≤ N+2, j < i}, and A_{i,j} > 0 otherwise;
(b) if A_{i,j} and A_{i+1,j} are negative, then A_{i,j} < A_{i+1,j};
(c) if A_{i,j} and A_{i+1,j} are positive, then A_{i,j} < A_{i+1,j}.
For case (7), we get the contradiction K₁ = (A_{N⁻+1} − A_{k₀})Ĉ < 0, and for case (8) we get the contradiction −K₂ = (A_{N⁻+2} − A_{k₀})Ĉ > 0, where A_i is the i-th row of A. Therefore, we complete the proof for the case |Θ| = 1 and 1 < k₀ ≤ N⁻+1. The proof for the case |Θ| = 1 and N⁻+2 ≤ k₀ ≤ N+2 is similar.

Consider the case |Θ| = 0, which implies that the Ĉ_n's are nonzero. Then |Π| = 2N+2 and, by (7.7),

|{x : S₂(x) = 0, x < β_{N⁻+2}}| + |{x : S₁(x) = 0, x > β_{N⁻+1}}| ≥ |Π′| ≥ N+1.

Therefore

2(N+1) ≥ |{x : S₁(x) = 0}| + |{x : S₂(x) = 0}|
≥ |{x : S₂(x) = 0, x < β_{N⁻+2}}| + |{x : S₁(x) = 0, x > β_{N⁻+1}}| + |{x : S₂(x) = 0, x ≥ β_{N⁻+2}}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}}|
≥ N+1 + |{x : S₂(x) = 0, x ≥ β_{N⁻+2}}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}}|,

which implies N+1 ≥ |{x : S₂(x) = 0, x ≥ β_{N⁻+2}}| + |{x : S₁(x) = 0, x ≤ β_{N⁻+1}}|. Because this sum is at least N, we have |{x > β_{N⁻+2} : S₂(x) = 0}| = N⁺ or |{x < β_{N⁻+1} : S₁(x) = 0}| = N⁻.

First, we consider the case |{x > β_{N⁻+2} : S₂(x) = 0}| = N⁺, or equivalently, {x ≥ β_{N⁻+2} : S₂(x) = 0} = {η₁⁺, …, η_{N⁺}⁺}. If |{x < β_{N⁻+1} : S₁(x) = 0}| = N⁻, then we have {x ≤ β_{N⁻+1} : S₁(x) = 0} = {η₁⁻, …, η_{N⁻}⁻}. Similar arguments as for the case |Θ| = 1 imply that the elements in C⁻ = {Ĉ_n : 1 ≤ n ≤ N⁻+1} and in C⁺ = {Ĉ_n : N⁻+2 ≤ n ≤ N+2} have the same sign, respectively, and hence the signs of the Ĉ_n's are the same. If |{x < β_{N⁻+1} : S₁(x) = 0}| = N⁻+1, then either S₁(x) has a root in (−∞, β₁) or S₁(x) has two roots in (β_{k₀}, β_{k₀+1}) for some 1 ≤ k₀ ≤ N⁻. In the former case, we can get as above that the elements in C⁻ and in C⁺ have the same sign, respectively. If S₁(x) has two roots in (β_{k₀}, β_{k₀+1}) for some 1 ≤ k₀ ≤ N⁻, we also observe that the elements in C₁⁻ = {Ĉ_n : 1 ≤ n ≤ k₀}, C₂⁻ = {Ĉ_n : k₀+1 ≤ n ≤ N⁻+1} and C⁺ = {Ĉ_n : N⁻+2 ≤ n ≤ N+2} have the same sign, respectively. By the same argument as for the case |Θ| = 1, we know that the coefficients have the same sign. The proof for the case |{x < β_{N⁻+1} : S₁(x) = 0}| = N⁻ is similar and hence we omit it.

Proof of Lemma 3.4. Let F(x) = Σ_{n=1}^{N+2} C_n β_n e^{β_n x} − e^x. Then F′(x) = Σ_{n=1}^{N+2} C_n β_n² e^{β_n x} − e^x. Because β₁ < β₂ < ⋯ < β_{N⁻+1} < 0 < 1 < β_{N⁻+2} < β_{N⁻+3} < ⋯, and by Lemma 3.2 and Lemma 3.3,

F′(x₀) = Σ_{n=1}^{N+2} C_n β_n² e^{β_n x₀} − e^{x₀} > Σ_{n=1}^{N+2} C_n β_n e^{β_n x₀} − e^{x₀} = 0,    (7.9)

which implies that F(x) is strictly increasing in some neighborhood U_{x₀} of x₀, and hence we complete the proof of the first part of the lemma. Assume that there exists x₁ > x₀ such that F(x₁) < 0. Let x̄ = sup{x : x₀ ≤ x < x₁, F(x) = 0}. Then x̄ < x₁, F(x̄) = 0 and, as shown for (7.9), F′(x̄) > 0. Therefore, there exists a neighborhood U_{x̄} of x̄ such that F(x) > F(x̄) = 0 for all x ∈ U_{x̄} with x > x̄. This is a contradiction because F(x) < 0 for all x ∈ (x̄, x₁), and hence we complete the proof of the lemma.

Proof of Proposition 4.1. Substituting (4.1) into (3.29), we have

Σ_{n=1}^{N+2} [K₁ y_n / ((1 − β_n) det A)] (e^{(β_n−1)h₁} + e^{(β_n−1)h₂}) = 0,    (7.10)

where y_n is the n-th entry of the column vector Y. Equation (7.10) is equivalent to the vanishing of the determinant obtained from A by replacing its last row with the row whose n-th entry is e^{(β_n−1)h₁} + e^{(β_n−1)h₂}. Multiplying the i-th column by e^{β_i h} for each i and then the last row by e^{h}, we observe that h = h₂ − h₁ is a solution of the equation det B(h) = 0. Substituting (4.1) into (3.8), we have

(K₁/det A) [β₁ e^{β₁ h₁}, …, β_{N+2} e^{β_{N+2} h₁}] D Y = e^{h₁},

where D is the diagonal matrix with n-th diagonal entry β_n(1 − β_n). Writing [β₁ e^{β₁ h₁}, …, β_{N+2} e^{β_{N+2} h₁}] D Y as the determinant obtained from A by replacing its last row with this row vector, and performing the same column and row manipulations as above, yields the formula (4.3) for h₁.
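The intermediate-value argument in the proof of Lemma 3.1 above, where ϕ(1)ϕ(η₁⁺) < 0 forces a root of ϕ in (1, η₁⁺), is easy to check numerically for N⁺ = N⁻ = 1. The parameter values below are hypothetical illustrations chosen so that the sign condition holds; they are not the paper's values, and `phi` is ϕ with the pole factors (η₁⁺ − x)(η₁⁻ + x) multiplied through, as in the proof.

```python
# Hypothetical illustrative parameters (not the paper's values).
sigma, c, lam, r = 0.25, -0.1, 0.6, 0.06
p, q, eta_p, eta_m = 0.5, 0.5, 4.0, 3.0

def phi(x):
    """phi(x) for N+ = N- = 1 with the denominators cleared:
    (eta+ - x)(eta- + x)[sigma^2 x^2 / 2 + c x - (lam + r)]
      + lam [p eta+ (eta- + x) + q eta- (eta+ - x)]."""
    return ((eta_p - x) * (eta_m + x) * (0.5 * sigma ** 2 * x ** 2 + c * x - (lam + r))
            + lam * (p * eta_p * (eta_m + x) + q * eta_m * (eta_p - x)))

# Sign change on (1, eta+), so the intermediate value theorem applies.
lo, hi = 1.0, eta_p
for _ in range(200):                   # bisection for the root beta in (1, eta+)
    mid = 0.5 * (lo + hi)
    if phi(lo) * phi(mid) <= 0:
        hi = mid
    else:
        lo = mid
beta = 0.5 * (lo + hi)
```

With these numbers ϕ(1) < 0 < ϕ(η₁⁺), and bisection locates a root β strictly between 1 and η₁⁺, matching the conclusion β_{N⁻+2} > 1.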


More information

VISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5.

VISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5. VISCOSITY SOLUTIONS PETER HINTZ We follow Han and Lin, Elliptic Partial Differential Equations, 5. 1. Motivation Throughout, we will assume that Ω R n is a bounded and connected domain and that a ij C(Ω)

More information

Measure and integration

Measure and integration Chapter 5 Measure and integration In calculus you have learned how to calculate the size of different kinds of sets: the length of a curve, the area of a region or a surface, the volume or mass of a solid.

More information

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1. STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Yipeng Yang * Under the supervision of Dr. Michael Taksar Department of Mathematics University of Missouri-Columbia Oct

More information

Stochastic Areas and Applications in Risk Theory

Stochastic Areas and Applications in Risk Theory Stochastic Areas and Applications in Risk Theory July 16th, 214 Zhenyu Cui Department of Mathematics Brooklyn College, City University of New York Zhenyu Cui 49th Actuarial Research Conference 1 Outline

More information

行政院國家科學委員會補助專題研究計畫 * 成果報告 期中進度報告 壓電傳感元件之波傳耦合層設計與元件整體之 力電作用表現研究

行政院國家科學委員會補助專題研究計畫 * 成果報告 期中進度報告 壓電傳感元件之波傳耦合層設計與元件整體之 力電作用表現研究 行 力 精 類 行 年 年 行 立 料 參 連 論 理 年 行政院國家科學委員會補助專題研究計畫 * 成果報告 期中進度報告 壓電傳感元件之波傳耦合層設計與元件整體之 力電作用表現研究 計畫類別 : * 個別型計畫 整合型計畫 計畫編號 : NSC 95 2221 E 002 116 執行期間 : 2006 年 8 月 1 日至 2007 年 7 月 31 日 計畫主持人 : 謝宗霖共同主持人 :

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University The Residual and Error of Finite Element Solutions Mixed BVP of Poisson Equation

More information

Maturity randomization for stochastic control problems

Maturity randomization for stochastic control problems Maturity randomization for stochastic control problems Bruno Bouchard LPMA, Université Paris VI and CREST Paris, France Nicole El Karoui CMAP, Ecole Polytechnique Paris, France elkaroui@cmapx.polytechnique.fr

More information

Impulse control and expected suprema

Impulse control and expected suprema Impulse control and expected suprema Sören Christensen, Paavo Salminen arxiv:1503.01253v2 [math.pr] 4 Nov 2015 April 6, 2018 Abstract We consider a class of impulse control problems for general underlying

More information

STOPPING AT THE MAXIMUM OF GEOMETRIC BROWNIAN MOTION WHEN SIGNALS ARE RECEIVED

STOPPING AT THE MAXIMUM OF GEOMETRIC BROWNIAN MOTION WHEN SIGNALS ARE RECEIVED J. Appl. Prob. 42, 826 838 (25) Printed in Israel Applied Probability Trust 25 STOPPING AT THE MAXIMUM OF GEOMETRIC BROWNIAN MOTION WHEN SIGNALS ARE RECEIVED X. GUO, Cornell University J. LIU, Yale University

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

Introductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19

Introductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19 Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

MS 3011 Exercises. December 11, 2013

MS 3011 Exercises. December 11, 2013 MS 3011 Exercises December 11, 2013 The exercises are divided into (A) easy (B) medium and (C) hard. If you are particularly interested I also have some projects at the end which will deepen your understanding

More information

Static Problem Set 2 Solutions

Static Problem Set 2 Solutions Static Problem Set Solutions Jonathan Kreamer July, 0 Question (i) Let g, h be two concave functions. Is f = g + h a concave function? Prove it. Yes. Proof: Consider any two points x, x and α [0, ]. Let

More information

Convergence of price and sensitivities in Carr s randomization approximation globally and near barrier

Convergence of price and sensitivities in Carr s randomization approximation globally and near barrier Convergence of price and sensitivities in Carr s randomization approximation globally and near barrier Sergei Levendorskĭi University of Leicester Toronto, June 23, 2010 Levendorskĭi () Convergence of

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

奈米微污染控制工作小組 協辦單位 台灣賽默飛世爾科技股份有限公司 報名方式 本參訪活動由郭啟文先生負責 報名信箱

奈米微污染控制工作小組 協辦單位 台灣賽默飛世爾科技股份有限公司 報名方式 本參訪活動由郭啟文先生負責 報名信箱 SEMI AMC TF2 微污染防治專案為提升國內微污染量測之技術水平, 特舉辦離子層析儀之原理教學與儀器參訪活動, 邀請 AMC TF2 成員參加, 期盼讓 SEMI 會員對於離子層析技術於微汙染防治之儀器功能及未來之應用有進一步的認識 以實作參訪及教學模式, 使 SEMI 會員深刻瞭解離子層析技術之概念與原理, 並於活動中結合產業專家, 進行研討, 提出未來應用與發展方向之建議 參訪日期 中華民國

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

MRDFG 的周期界的計算的提升計畫編號 :NSC E 執行期限 : 94 年 8 月 1 日至 94 年 7 月 31 日主持人 : 趙玉政治大學資管系計畫參與人員 :

MRDFG 的周期界的計算的提升計畫編號 :NSC E 執行期限 : 94 年 8 月 1 日至 94 年 7 月 31 日主持人 : 趙玉政治大學資管系計畫參與人員 : MRDFG 的周期界的計算的提升計畫編號 :NSC 94-2213-E-004-005 執行期限 : 94 年 8 月 1 日至 94 年 7 月 31 日主持人 : 趙玉政治大學資管系計畫參與人員 : 一 中文摘要 Schoenen [4] 證實了我們的理論 即如果 MRDFG 的標記使它像 SRDFG 一樣表現, 不需要變換為 SRDFG 他們表明當標記高于相符標記, 在回界相對於標記的圖中,

More information

h(x) lim H(x) = lim Since h is nondecreasing then h(x) 0 for all x, and if h is discontinuous at a point x then H(x) > 0. Denote

h(x) lim H(x) = lim Since h is nondecreasing then h(x) 0 for all x, and if h is discontinuous at a point x then H(x) > 0. Denote Real Variables, Fall 4 Problem set 4 Solution suggestions Exercise. Let f be of bounded variation on [a, b]. Show that for each c (a, b), lim x c f(x) and lim x c f(x) exist. Prove that a monotone function

More information

On Reflecting Brownian Motion with Drift

On Reflecting Brownian Motion with Drift Proc. Symp. Stoch. Syst. Osaka, 25), ISCIE Kyoto, 26, 1-5) On Reflecting Brownian Motion with Drift Goran Peskir This version: 12 June 26 First version: 1 September 25 Research Report No. 3, 25, Probability

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Liquidation in Limit Order Books. LOBs with Controlled Intensity

Liquidation in Limit Order Books. LOBs with Controlled Intensity Limit Order Book Model Power-Law Intensity Law Exponential Decay Order Books Extensions Liquidation in Limit Order Books with Controlled Intensity Erhan and Mike Ludkovski University of Michigan and UCSB

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

SEQUENTIAL TESTING OF SIMPLE HYPOTHESES ABOUT COMPOUND POISSON PROCESSES. 1. Introduction (1.2)

SEQUENTIAL TESTING OF SIMPLE HYPOTHESES ABOUT COMPOUND POISSON PROCESSES. 1. Introduction (1.2) SEQUENTIAL TESTING OF SIMPLE HYPOTHESES ABOUT COMPOUND POISSON PROCESSES SAVAS DAYANIK AND SEMIH O. SEZER Abstract. One of two simple hypotheses is correct about the unknown arrival rate and jump distribution

More information

Dynamic Pricing for Non-Perishable Products with Demand Learning

Dynamic Pricing for Non-Perishable Products with Demand Learning Dynamic Pricing for Non-Perishable Products with Demand Learning Victor F. Araman Stern School of Business New York University René A. Caldentey DIMACS Workshop on Yield Management and Dynamic Pricing

More information

Green s Functions and Distributions

Green s Functions and Distributions CHAPTER 9 Green s Functions and Distributions 9.1. Boundary Value Problems We would like to study, and solve if possible, boundary value problems such as the following: (1.1) u = f in U u = g on U, where

More information

Jukka Lempa The Optimal Stopping Problem of Dupuis and Wang: A Generalization. Aboa Centre for Economics

Jukka Lempa The Optimal Stopping Problem of Dupuis and Wang: A Generalization. Aboa Centre for Economics Jukka Lempa The Optimal Stopping Problem of Dupuis and Wang: A Generalization Aboa Centre for Economics Discussion Paper No. 36 Turku 28 Copyright Author(s) ISSN 1796-3133 Turun kauppakorkeakoulun monistamo

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility

Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility Short-time expansions for close-to-the-money options under a Lévy jump model with stochastic volatility José Enrique Figueroa-López 1 1 Department of Statistics Purdue University Statistics, Jump Processes,

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Quitting games - An Example

Quitting games - An Example Quitting games - An Example E. Solan 1 and N. Vieille 2 January 22, 2001 Abstract Quitting games are n-player sequential games in which, at any stage, each player has the choice between continuing and

More information

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:

More information

FIRST YEAR CALCULUS W W L CHEN

FIRST YEAR CALCULUS W W L CHEN FIRST YER CLCULUS W W L CHEN c W W L Chen, 994, 28. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

Optimal Stopping and Applications

Optimal Stopping and Applications Optimal Stopping and Applications Alex Cox March 16, 2009 Abstract These notes are intended to accompany a Graduate course on Optimal stopping, and in places are a bit brief. They follow the book Optimal

More information

Solving the Poisson Disorder Problem

Solving the Poisson Disorder Problem Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem

More information

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1.

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1. Chapter 1 Metric spaces 1.1 Metric and convergence We will begin with some basic concepts. Definition 1.1. (Metric space) Metric space is a set X, with a metric satisfying: 1. d(x, y) 0, d(x, y) = 0 x

More information