National Science Council (Executive Yuan) Project Research Report


Jump Processes and Their Applications: Final Report (Condensed Version)

Project type: individual project. Project number: NSC 99-25-M-009-00-
Project period: August 1, 2010 to July 31, 2011 (ROC years 99-100)
Host institution: Department of Applied Mathematics, National Chiao Tung University
Principal investigator: Yuan-Chung Sheu (許元春)
Project participants: MS-student part-time assistants 吳國禎 and 郭閔豪; PhD-student part-time assistants 陳育慈, 張明淇, and 蔡明耀
Availability: this report is publicly accessible
Date: September 2011 (ROC year 100)

Free Boundary Problems and Perpetual American Strangles

Ming-Chi Chang, Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan
Yuan-Chung Sheu, Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan

Abstract. We consider perpetual American strangles in geometric jump-diffusion models. We assume further that the jump distribution is a mixture of exponential distributions. To solve the corresponding optimal stopping problem for this option, using the approach in [5], we derive a system of equations that is equivalent to the associated free boundary problem with smooth pasting condition. We verify the existence of solutions to these equations. Then, in terms of the solutions together with a verification theorem, we solve the optimal stopping problem and hence find the optimal exercise boundaries and the rational price of the perpetual American strangle. In addition, we work out an algorithm for computing the optimal exercise boundaries and the rational price of this option.

Keywords: jump-diffusion, mixture of exponential distributions, perpetual American strangle, free boundary problem, smooth pasting condition
JEL Classification: D81, C61, G12
Mathematics Subject Classification (2000): 60J75, 60G51, 60G99
Running Title: Free Boundary Problems and Perpetual American Strangles

1 Introduction

An American option is an option that can be exercised at any time prior to its expiration time. For an American call option with a finite expiration time, Merton [11] observed that the price of the American option (written on an underlying stock without dividends) coincides with the price of the corresponding European option. However, the American put option (even without dividends) presents a difficult problem: there are no explicit pricing formulas, and the optimal exercise boundaries are not known. One exception is the perpetual American put option, i.e., an American put with infinite expiration time. Within the Black-Scholes model, the
perpetual American put problem was solved by McKean [10]. In Lévy-based models, using the theory of pseudo-differential operators, Boyarchenko and Levendorskii [4] derived closed formulas for the prices of perpetual American put and call options. By probabilistic techniques, Mordecki and Salminen [12] obtained explicit formulas under the assumption of mixed-exponentially distributed positive jumps and arbitrary negative jumps for call options, and mixed-exponentially distributed negative jumps and arbitrary positive jumps for put options. (For related works, see Asmussen et al. [1] and the references therein.)

In this paper we consider the pricing problem for perpetual American strangles; a strangle is a combination of a put and a call written on the same security. Mathematically, the pricing problem for perpetual American contracts in a Lévy-based model is equivalent to an optimal stopping problem of the form

$$V(x) = \sup_{\tau \in \mathcal{T}} E_x\big(e^{-r\tau} g(X_\tau)\big) \tag{1.1}$$

where $X = \{X_t : t \ge 0\}$ under the chosen risk-neutral probability measure $P_x$ is a Lévy process started from $X_0 = x$. Further, $g$ is the nonnegative continuous reward function corresponding to the

(Corresponding author. Tel.: +886-3-5722x56428; fax: +886-3-5724679. E-mail address: sheu@math.nctu.edu.tw.)

contract, $r \ge 0$, and $\mathcal{T}$ is the family of stopping times with respect to the natural filtration generated by $X$, $\mathbb{F} = \{\mathcal{F}_t : t \ge 0\}$. (Here we define $e^{-r\tau} g(X_\tau) = 0$ on $\{\tau = \infty\}$.) In the literature there are many approaches for finding the value function $V(x)$ and an optimal stopping time $\tau^*$ such that $V(x) = E_x(e^{-r\tau^*} g(X_{\tau^*}))$. The free boundary approach is based on the observation that, under suitable conditions, the value function $V(x)$ of the optimal stopping problem (1.1) solves the free boundary (or Stefan) problem

$$(\mathcal{L}_X - r)V(x) = 0 \quad \text{in } C, \tag{1.2}$$
$$V(x) = g(x) \quad \text{on } D, \tag{1.3}$$

where $C = \{x \in \mathbb{R} : V(x) > g(x)\}$ (the continuation region), $D = \{x \in \mathbb{R} : V(x) = g(x)\}$ (the stopping region), and $\mathcal{L}_X$ is the infinitesimal generator of $X$. (For details, see Shiryayev [17, Theorem 5, p. 57].) Many authors also observed that the boundary of the stopping region $D$ is determined by imposing the smooth pasting condition on the value function. Then, to solve the optimal stopping problem (1.1), it suffices to solve the above free boundary problem with suitable pasting conditions and to prove a verification theorem (a verification theorem ensures that solving the free boundary problem with smooth pasting, or related, conditions yields an explicit solution of the optimal stopping problem). By this approach we find the value functions and the optimal stopping times for the optimal stopping problem (1.1) and, by the risk-neutral pricing formula, we obtain the optimal exercise times and the rational prices of the perpetual American contracts. For recent works and other approaches, see Kyprianou and Surya [9], Mordecki and Salminen [12], Mordecki [13], Novikov and Shiryaev [14], Surya [18], and the monograph of Peskir and Shiryaev [15].

It is worth noting that the reward functions considered above are of American put type or American call type. For our financial applications, we need to consider reward functions $g$ of the two-sided form (2.4). (In the literature there are not many works for
two-sided reward functions; see, for example, Beibel and Lerche [2], Gapeev and Lerche [7], and Boyarchenko [3].) We then study the perpetual American strangles in geometric jump-diffusion models, assuming further that the jump distribution is a two-sided mixture of exponential distributions. To solve the corresponding optimal stopping problem, using the approach in [5], we first derive a system of equations (see (3.3)-(3.8) below) that is equivalent to the associated free boundary problem with smooth pasting condition. In terms of the solutions to this system, together with a verification theorem (Theorem 2.1), we find the optimal stopping time and the value function for the optimal stopping problem (Theorem 3.1). Hence, by the risk-neutral pricing formula, we obtain the optimal exercise boundaries and the price of the perpetual American strangle. In addition, in the proof of the existence of solutions to the equations (3.3)-(3.8), we also work out an algorithm for computing the optimal exercise boundaries and the rational price of the option (see Theorem 4.1).

The paper is organized as follows. In Section 2 we introduce our jump-diffusion setting and provide a verification theorem for the optimal stopping problem (1.1) with a general two-sided reward function. In Section 3 we consider the perpetual American strangle under the geometric jump-diffusion setting; we derive a system of equations for solving the corresponding free boundary problem with smooth pasting condition and, in terms of solutions to this system, we solve the optimal stopping problem corresponding to the perpetual American strangle. In Section 4 we prove the existence of solutions to the system of equations (Theorem 4.1). Some numerical results based on our algorithm are presented in Section 5. Section 6 concludes the paper. Long and difficult proofs are relegated to the Appendix.

2 Optimal Stopping and Jump-Diffusion Processes

Throughout this paper, on a probability space $(\Omega, \mathcal{F}, P)$,
we consider a jump-diffusion process $X$ of the form

$$X_t = ct + \sigma B_t + \sum_{n=1}^{N_t} Y_n \tag{2.1}$$

where $c \in \mathbb{R}$, $\sigma > 0$, $B = (B_t,\, t \ge 0)$ is a standard Brownian motion, $(N_t;\, t \ge 0)$ is a Poisson process with rate $\lambda > 0$, and $Y = (Y_n,\, n \ge 1)$ is a sequence of independent random variables with identical

piecewise continuous density function $f$. Assume further that $B$, $N$ and $Y$ are independent. A jump-diffusion process starting from $x$ is simply defined as $x + X_t$ for $t \ge 0$, and we denote its law by $P_x$. For convenience we write $P$ in place of $P_0$; $E_x$ denotes expectation with respect to $P_x$.

Under these model assumptions, we have $E(e^{zX_t}) = e^{t\psi(z)}$ for $z \in i\mathbb{R}$, where $\psi$, the characteristic exponent of $X$, is given by

$$\psi(z) = \frac{\sigma^2}{2} z^2 + cz + \lambda \int e^{zy} f(y)\,dy - \lambda. \tag{2.2}$$

Also, the infinitesimal generator $\mathcal{L}_X$ of $X$ has a domain containing $C_0^2(\mathbb{R})$, and for $h \in C_0^2(\mathbb{R})$,

$$\mathcal{L}_X h(x) = \frac{1}{2}\sigma^2 h''(x) + c h'(x) + \lambda \int h(x+y) f(y)\,dy - \lambda h(x). \tag{2.3}$$

We define $\mathcal{L}_X h(x)$ by the expression (2.3) for all functions $h$ on $\mathbb{R}$ such that $h'$, $h''$ and the integral in (2.3) exist at $x$.

Given a jump-diffusion process $X$ as in (2.1), we consider in this section the optimal stopping problem (1.1) with the continuous reward function $g$ given by

$$g(x) = g_1(x)\, 1_{\{x \le l_1\}} + g_2(x)\, 1_{\{x \ge l_2\}} \tag{2.4}$$

for some $-\infty < l_1 \le l_2 < \infty$. Here $g_1$ is a strictly positive $C^1$-function on $(-\infty, l_1)$ and $g_2$ is a strictly positive $C^1$-function on $(l_2, \infty)$. We assume further that $E_x[\sup_{t \ge 0} e^{-rt} g(X_t)] < \infty$ for all $x \in \mathbb{R}$. For any set $I$ in $\mathbb{R}$, we write $\tau_I = \inf\{t \ge 0 : X_t \in I\}$ and set

$$V_I(x) = E_x[e^{-r\tau_I} g(X_{\tau_I})], \quad x \in \mathbb{R}. \tag{2.5}$$

Theorem 2.1 Let $I = (h_1, h_2)^c$, where $-\infty < h_1 < l_1 \le l_2 < h_2 < \infty$. Assume that the function $V_I(x)$ in (2.5) satisfies the following conditions:
(a) $V_I(x)$ is the difference of two convex functions.
(b) $V_I(x)$ is twice continuously differentiable except possibly at $h_1$ and $h_2$.
(c) The limits $V_I''(h_i\pm) = \lim_{h \to h_i\pm} V_I''(h)$, $i = 1, 2$, exist and are finite.
(d) $(\mathcal{L}_X - r)V_I(x) \le 0$ for all $x$ except possibly at $h_1$ and $h_2$.
(e) $V_I(x) \ge g(x)$ for all $x \in (h_1, h_2)$.
Then $V_I(x)$ is the value function of the optimal stopping problem (1.1) with the reward function $g$ given in (2.4).

Proof: Let $V$ be the value function of the optimal stopping problem (1.1). Clearly $V_I(x) \le V(x)$. It remains to show that $V(x) \le V_I(x)$. By the
Meyer-Itô formula (see, for example, Corollary 1 in Protter [16, Ch. IV, pp. 28-29]), we have

$$e^{-rt} V_I(X_t) - V_I(x) = -\int_0^t r e^{-rs} V_I(X_{s-})\,ds + \int_0^t e^{-rs} V_I'(X_{s-})\,dX_s + \sum_{0 < s \le t} e^{-rs}\big(V_I(X_s) - V_I(X_{s-}) - V_I'(X_{s-})\,\Delta X_s\big) + \frac{1}{2}\int_0^t e^{-rs} V_I''(X_{s-})\,d[X, X]^c_s,$$

where $V_I'(x)$ is the left derivative and $V_I''(x)$ is the second derivative in the sense of generalized functions. By arguments similar to those in Mordecki [13, Sec. 3], we have

$$e^{-rt} V_I(X_t) - V_I(x) = \int_0^t e^{-rs} (\mathcal{L}_X - r) V_I(X_{s-})\,ds + M_t \tag{2.6}$$

where $\{M_t\}$ is a local martingale with $M_0 = 0$. Let $T_n$ be a sequence of stopping times such that, for each $n$, $\{M_{T_n \wedge t}\}$ is a martingale. Let $\tau$ be a stopping time. By the optional stopping theorem, $E_x[M_{T_n \wedge t \wedge \tau}] = E_x[M_0] = 0$. In addition, by (d), we have $\int_0^{T_n \wedge t \wedge \tau} e^{-rs}(\mathcal{L}_X$

r)v I (X s )ds 0 By (26), we observe E x [e r(tn t τ) V I (X Tn t τ ) V I (x) Since g(x) 0 and E x [ supt 0 e rt g(x t ) <, by Dominated Convergence Theorem and (e), we have E x [e rτ g(x τ ) = E x [ lim t lim n er(τ t T n) g(x (τ t Tn)) = lim t lim n E x[e r(τ t T n) g(x (τ t Tn)) lim t lim n E x[e r(τ t Tn) V I (X (τ t Tn )) V I (x) Because τ is arbitrary, we observe V (x) = sup τ E x [e rτ g(x τ ) V I (x) The proof is complete We have the uniqueness of solutions for the boundary value problem in (2) and (3) Proposition 2 Assume that g is bounded on (, l ) and the function g 0 2 (x + y)f(y)dy, x l 2, is locally bounded Let I = (h, h 2 ) c for some < h < l l 2 < h 2 < If Ṽ is a solution of the boundary value problem: { (L X r)ṽ (x) = 0, x (h, h 2 ) (27) Ṽ (x) = g(x), x I and Ṽ is in C2 (h, h 2 ) C[h, h 2, then Ṽ (x) = V I(x) for all x R Proof: See the Appendix Remark The conclusion of Proposition 2 still holds if the functions g and g 2 are C (not necessary strictly positive) and satisfy the conditions in Proposition 2 Proposition 22 Assume that g and g are bounded on (, l ) and the functions g 0 2 (x + y)f(y)dy and g 0 2(x + y)f(y)dy, x l 2, are locally bounded We assume further that g (x) g (x) is positive and increasing on (, l ), g 2 (x) g 2(x) is negative and decreasing on (l 2, ) and E x [sup t 0 e rt g (X t ) < for all x Let I = (h, h 2 ) c for some < h < l l 2 < h 2 < and consider a non-negative function Ṽ (x) on R that is C2 on (h, h 2 ) and satisfies the following conditions: (a) (L X r)ṽ (x) = 0, x (h, h 2 ), (b) (c) Ṽ (x) = g(x), x I Ṽ (x + y)f(y)dy = Ṽ (x + y)f(y)dy, x (h, h 2 ) d dx (d) Ṽ is continuous at h and h 2 and Ṽ (h i ), i =, 2, exist and are continuous there Then Ṽ (x) g(x) for all x (h, h 2 ) Proof: By Proposition 2, we have Ṽ (x) = V I(x) for all x R Note that Ṽ is C on (h, h 2 ) (for a proof, see Chen et al [6) and, for x (h, h 2 ), we have 0 = d dx (L X r)ṽ (x) = 2 σ2 Ṽ (x) + cṽ (x) (λ + r)ṽ (x) + λ Ṽ (x + y)f(y)dy, which implies that (L 
X r)ṽ (x) = 0 for x (h, h 2 ) By condition (d), Ṽ C[h, h 2 and hence by the remark after Proposition 2, Ṽ (x) = E x [e rτ I g (X τi ) This implies that Ṽ (x) satisfies the ODE: Ṽ (x) Ṽ (x) = F (x), where F (x) = E x[e rτ I (g (X τi ) g(x τi )) First consider the( case that h x l By the ODE theory and the boundary conditions, we have Ṽ (x) = ) e x x h e t F (t)dt + g (h )e h Set H(x) e x (Ṽ (x) g(x)) Then H(x) = x h e t F (t)dt + 4

$g_1(h_1)e^{-h_1} - g_1(x)e^{-x}$, and

$$H'(x) = e^{-x}F(x) + g_1(x)e^{-x} - g_1'(x)e^{-x} = e^{-x}\big\{E_x[e^{-r\tau_I}(g'(X_{\tau_I}) - g(X_{\tau_I}))] + g_1(x) - g_1'(x)\big\} = e^{-x}\big\{E_x[e^{-r\tau_I^+}(g_2'(X_{\tau_I}) - g_2(X_{\tau_I}));\, \tau_I = \tau_I^+] + E_x[e^{-r\tau_I^-}(g_1'(X_{\tau_I}) - g_1(X_{\tau_I}));\, \tau_I = \tau_I^-] + g_1(x) - g_1'(x)\big\} \ge e^{-x}E_x[e^{-r\tau_I^+}(g_2'(X_{\tau_I}) - g_2(X_{\tau_I}));\, \tau_I = \tau_I^+] + e^{-x}(g_1(x) - g_1'(x))\big(1 - E_x[e^{-r\tau_I^-};\, \tau_I = \tau_I^-]\big),$$

where $\tau_I^+ = \inf\{t \ge 0 : X_t \ge h_2\}$ and $\tau_I^- = \inf\{t \ge 0 : X_t \le h_1\}$. For the last inequality we use the fact that $g_1(x) - g_1'(x)$ is increasing, and hence $g_1(X_{\tau_I^-}) - g_1'(X_{\tau_I^-}) \le g_1(h_1) - g_1'(h_1) \le g_1(x) - g_1'(x)$. Since $g_2(x) - g_2'(x)$ is negative and $g_1(x) - g_1'(x)$ is positive, we obtain $H'(x) \ge 0$, which implies that $H(x)$ is increasing. Therefore $H(x) \ge H(h_1) = 0$, and hence $\tilde V(x) \ge g(x)$. By a similar argument, $\tilde V(x) \ge g(x)$ for $l_2 \le x \le h_2$. Since $\tilde V(x) = V_I(x) \ge 0 = g(x)$ for $l_1 \le x \le l_2$, the proof is complete.

3 Perpetual American Strangles and Straddles

A strangle is a financial instrument whose payoff is a combination of a put with strike price $K_1$ and a call with strike price $K_2$ written on the same security, where $K_1 \le K_2$. In particular, if $K_1 = K_2$, the strangle becomes a straddle. We model the price of the underlying security under the chosen risk-neutral measure by a geometric jump-diffusion $S_t = \exp\{X_t\}$, where $X$ is a jump-diffusion process of the form (2.1). We assume further that the jump density function $f$ is given by the mixture of exponential distributions

$$f(x) = \sum_{i=1}^{N^+} p_i\, \eta_i^+ e^{-\eta_i^+ x}\, 1_{\{x>0\}} + \sum_{j=1}^{N^-} q_j\, (-\eta_j^-) e^{-\eta_j^- x}\, 1_{\{x<0\}}, \tag{3.1}$$

where $\eta_1^- < \cdots < \eta_{N^-}^- < 0 < \eta_1^+ < \cdots < \eta_{N^+}^+$, and the $p_i$'s and $q_j$'s are positive with $\sum_{i=1}^{N^+} p_i + \sum_{j=1}^{N^-} q_j = 1$. (In a Lévy model there are infinitely many equivalent risk-neutral measures and, for pricing purposes, we usually choose one of them by using the so-called Cramér-Esscher transform. Note that this transform preserves the jump-diffusion structure above. For details, see in particular Appendix A of Asmussen et al. [1].) The characteristic exponent of this jump-diffusion process $X$ is given by the formula
$$\psi(z) = \frac{1}{2}\sigma^2 z^2 + cz + \lambda\Big(\sum_{i=1}^{N^+}\frac{p_i \eta_i^+}{\eta_i^+ - z} + \sum_{j=1}^{N^-}\frac{q_j \eta_j^-}{\eta_j^- - z}\Big) - \lambda.$$

The rational price of the perpetual American strangle is the value function of the optimal stopping problem (1.1) with the reward function $g$ given by

$$g(x) = (K_1 - e^x)^+ + (e^x - K_2)^+ = g_1(x)\,1_{\{x \le l_1\}} + g_2(x)\,1_{\{x \ge l_2\}} \tag{3.2}$$

where $l_1 = \ln K_1$, $l_2 = \ln K_2$, $g_1(x) = K_1 - e^x$ and $g_2(x) = e^x - K_2$. To find the value function, we consider an interval $(h_1, h_2)$ with $h_1 < l_1 \le l_2 < h_2$. First we find the function $V(x)$ that solves the boundary value problem (2.7). As in Chen et al. [5], we first transform the integro-differential equation in (2.7) into the ODE

$$\prod_{i=1}^{N^+}(\eta_i^+ - D)\prod_{j=1}^{N^-}(\eta_j^- - D)\Big(\frac{1}{2}\sigma^2 D^2 + cD - (\lambda + r)\Big)V(x) + \lambda \prod_{i=1}^{N^+}(\eta_i^+ - D)\prod_{j=1}^{N^-}(\eta_j^- - D)\Big(\sum_{i=1}^{N^+}\frac{p_i \eta_i^+}{\eta_i^+ - D} + \sum_{j=1}^{N^-}\frac{q_j \eta_j^-}{\eta_j^- - D}\Big)V(x) = 0,$$

where $D = \frac{d}{dx}$. By the general theory of ODEs, the function $V$ on $(h_1, h_2)$ must be of the form $V(x) = \sum_{n=1}^{N^-+N^++2} C_n e^{\beta_n x}$, where the $\beta_n$ are the roots of the characteristic polynomial $\phi$ of the above ODE, that is,

$$\phi(x) = \prod_{i=1}^{N^+}(\eta_i^+ - x)\prod_{j=1}^{N^-}(\eta_j^- - x)\Big(\frac{1}{2}\sigma^2 x^2 + cx - (\lambda+r) + \lambda\Big(\sum_{i=1}^{N^+}\frac{p_i \eta_i^+}{\eta_i^+ - x} + \sum_{j=1}^{N^-}\frac{q_j \eta_j^-}{\eta_j^- - x}\Big)\Big).$$

(Note that in fact $\beta_1 < \eta_1^- < \beta_2 < \eta_2^- < \cdots < \beta_{N^-} < \eta_{N^-}^- < \beta_{N^-+1} < 0 < \beta_{N^-+2} < \eta_1^+ < \cdots < \beta_{N^-+N^++1} < \eta_{N^+}^+ < \beta_{N^-+N^++2}$.) For $x \notin (h_1, h_2)$, we set $V(x) = g(x)$. To determine the coefficients $C_n$, plugging the function $V$ into the integro-differential equation in (2.7), we obtain the system of equations

$$\sum_{n=1}^{N^-+N^++2} \frac{C_n e^{\beta_n h_2}}{\beta_n - \eta_k^+} = \frac{K_2}{\eta_k^+} - \frac{e^{h_2}}{\eta_k^+ - 1}, \quad k = 1, 2, \dots, N^+, \tag{3.3}$$

$$\sum_{n=1}^{N^-+N^++2} \frac{C_n e^{\beta_n h_1}}{\beta_n - \eta_k^-} = \frac{e^{h_1}}{\eta_k^- - 1} - \frac{K_1}{\eta_k^-}, \quad k = 1, 2, \dots, N^-. \tag{3.4}$$

(For details, see [5], [6].) Also, imposing condition (d) of Proposition 2.2 on $V$ (i.e., assuming that $V$ satisfies the continuity and smooth pasting conditions at the boundaries) gives the equations

$$\sum_{n=1}^{N^-+N^++2} C_n e^{\beta_n h_2} = e^{h_2} - K_2, \tag{3.5}$$
$$\sum_{n=1}^{N^-+N^++2} C_n e^{\beta_n h_1} = K_1 - e^{h_1}, \tag{3.6}$$
$$\sum_{n=1}^{N^-+N^++2} C_n \beta_n e^{\beta_n h_2} = e^{h_2}, \tag{3.7}$$
$$\sum_{n=1}^{N^-+N^++2} C_n \beta_n e^{\beta_n h_1} = -e^{h_1}. \tag{3.8}$$

Now, with a set $\{C_1, \dots, C_{N^-+N^++2}, h_1, h_2\}$ satisfying the equations (3.3)-(3.8), we will show later that the function $V$ is the value function of the optimal stopping problem (1.1). To do this, we need some further properties of the coefficients $C_n$. We consider the following conditions on the model:

$$\eta_i^+ > 1 \quad \text{for } i = 1, 2, \dots, N^+, \tag{3.9}$$

$$\frac{1}{2}\sigma^2 + c - (\lambda + r) + \lambda\Big(\sum_{i=1}^{N^+}\frac{p_i \eta_i^+}{\eta_i^+ - 1} + \sum_{j=1}^{N^-}\frac{q_j \eta_j^-}{\eta_j^- - 1}\Big) < 0. \tag{3.10}$$

(Note that (3.9) implies $E[e^{X_1}] < \infty$, and (3.10) guarantees $E[e^{X_1}] < e^r$; hence the underlying asset pays dividends continuously. If $E[e^{X_1}] < e^r$ and $0 \le g(x) \le A + Be^x$ for some constants $A$ and $B$, then $E[\sup_{t\ge0} e^{-rt} g(X_t)] < \infty$. For details, see Lemma 4 of Mordecki and Salminen [12].)

Lemma 3.1 Under the conditions (3.9) and (3.10), we have $\beta_{N^-+2} > 1$.

Proof: See the Appendix.
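As a quick numerical illustration of the interlacing of the roots $\beta_n$ with the rates $\eta_j^-$, $\eta_i^+$ and of Lemma 3.1, one can compute the roots of $\phi$ for the smallest nontrivial case $N^+ = N^- = 1$. The sketch below is not from the paper's numerics; all parameter values are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Illustrative parameters (assumed): one exponential family on each side
sigma, c, lam, r = 0.3, -0.2, 1.0, 0.05
eta_p, eta_m = 3.0, -2.0          # eta_1^+ > 1 (condition (3.9)), eta_1^- < 0
p, q = 0.6, 0.4                   # p + q = 1

# Condition (3.10), equivalent to E[e^{X_1}] < e^r
cond = 0.5*sigma**2 + c - (lam + r) + lam*(p*eta_p/(eta_p - 1) + q*eta_m/(eta_m - 1))
assert cond < 0

# Characteristic polynomial phi(x), with the denominators cleared
x = Polynomial([0.0, 1.0])
phi = ((eta_p - x)*(eta_m - x)*(0.5*sigma**2*x**2 + c*x - (lam + r))
       + lam*(p*eta_p*(eta_m - x) + q*eta_m*(eta_p - x)))

beta = np.sort(phi.roots().real)  # four real roots beta_1 < beta_2 < beta_3 < beta_4
print(beta)
```

For these values the roots satisfy $\beta_1 < \eta_1^- < \beta_2 < 0 < \beta_3 < \eta_1^+ < \beta_4$ with $\beta_3 > 1$, consistent with the interlacing chain above and with Lemma 3.1.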

Example. Consider the case $X_t = ct + \sigma B_t$ with $\frac{1}{2}\sigma^2 + c < r$. Since the process has no jump part, the system of equations (3.3)-(3.8) reduces to

$$C_1 e^{\beta_1 h_2} + C_2 e^{\beta_2 h_2} = e^{h_2} - K_2, \tag{3.11}$$
$$C_1 e^{\beta_1 h_1} + C_2 e^{\beta_2 h_1} = K_1 - e^{h_1}, \tag{3.12}$$
$$C_1 \beta_1 e^{\beta_1 h_2} + C_2 \beta_2 e^{\beta_2 h_2} = e^{h_2}, \tag{3.13}$$
$$C_1 \beta_1 e^{\beta_1 h_1} + C_2 \beta_2 e^{\beta_2 h_1} = -e^{h_1}, \tag{3.14}$$

where $\beta_1$ and $\beta_2$ are the solutions of $\frac{1}{2}\sigma^2 x^2 + cx - r = 0$, that is, $\beta_1 = \frac{-c - \sqrt{c^2 + 2r\sigma^2}}{\sigma^2}$ and $\beta_2 = \frac{-c + \sqrt{c^2 + 2r\sigma^2}}{\sigma^2}$. By (3.11) and (3.12), we obtain

$$C_1 = \frac{e^{\beta_2 h_1}(e^{h_2} - K_2) - e^{\beta_2 h_2}(K_1 - e^{h_1})}{\det A}, \qquad C_2 = \frac{e^{\beta_1 h_2}(K_1 - e^{h_1}) - e^{\beta_1 h_1}(e^{h_2} - K_2)}{\det A}, \tag{3.15}$$

where $A = \begin{pmatrix} e^{\beta_1 h_2} & e^{\beta_2 h_2} \\ e^{\beta_1 h_1} & e^{\beta_2 h_1} \end{pmatrix}$. Hence, in terms of $h_1$ and $h_2$, we have explicit formulas for $C_1$ and $C_2$. To determine $h_1$ and $h_2$, plugging the expressions (3.15) into (3.13) and (3.14), we observe that equations (3.13) and (3.14) are equivalent to the equations

$$\frac{(K_1 - e^{h_1})\beta_2 e^{\beta_2 h_1} + e^{h_1} e^{\beta_2 h_1}}{\beta_2 e^{\beta_2 h_1} e^{\beta_1 h_1} - \beta_1 e^{\beta_1 h_1} e^{\beta_2 h_1}} = \frac{(e^{h_2} - K_2)\beta_2 e^{\beta_2 h_2} - e^{h_2} e^{\beta_2 h_2}}{\beta_2 e^{\beta_2 h_2} e^{\beta_1 h_2} - \beta_1 e^{\beta_1 h_2} e^{\beta_2 h_2}}, \tag{3.16}$$

$$\frac{(K_1 - e^{h_1})\beta_1 e^{\beta_1 h_1} + e^{h_1} e^{\beta_1 h_1}}{\beta_2 e^{\beta_2 h_1} e^{\beta_1 h_1} - \beta_1 e^{\beta_1 h_1} e^{\beta_2 h_1}} = \frac{(e^{h_2} - K_2)\beta_1 e^{\beta_1 h_2} - e^{h_2} e^{\beta_1 h_2}}{\beta_2 e^{\beta_2 h_2} e^{\beta_1 h_2} - \beta_1 e^{\beta_1 h_2} e^{\beta_2 h_2}}. \tag{3.17}$$

Gapeev and Lerche [7] showed that there is a unique solution $(h_1^*, h_2^*)$ of the equations (3.16) and (3.17). Then, by a verification lemma, they proved that $(h_1^*, h_2^*)$ is the continuation region for the corresponding optimal stopping problem and that the value function on $(h_1^*, h_2^*)$ is given by $V(x) = C_1^* e^{\beta_1 x} + C_2^* e^{\beta_2 x}$, where $C_1^*$ and $C_2^*$ are computed by the formulas (3.15) with $h_1, h_2$ replaced by $h_1^*, h_2^*$. For a martingale approach to this optimal stopping problem, see Beibel and Lerche [2].

In the following, we solve the optimal stopping problem (1.1) by using the results of Section 2. Our approach also gives an algorithm for finding the solutions of the system (3.11)-(3.14). (In fact, our method will be applied later to processes with jumps.) Assume that $\{C_1, C_2, h_1, h_2\}$ is a solution of the equations (3.11)-(3.14). From these equations we have $AD\mathbf{C} = \mathbf{K}$, where $D = \operatorname{diag}(1-\beta_1,$
0 β 2 [ C C 2, C = [ C C 2, and K = = [ β (eβ 2h 2 K + e β 2h ) deta β 2 (eβ h 2 K + e β h ) [ K2 K (38) uniquely determined by h and h 2 (By Lemma 3, β 2 > Hence C and C 2 have the same sign) On the other hand, the matrix form of (33) and (34) is given by [ [ [ β e βh2 β 2 e β2h2 C e h 2 β e β h β 2 e β 2h = e h (39) By multiplying e h2 to both sides of (33), e h to both sides of (34) and adding them together, we have C β (e (β )h + e (β )h 2 ) + C 2 β 2 (e (β 2)h + e (β 2)h 2 ) = 0 Combining this with the expressions for C and C 2 in (38) gives [ β (e (β )h + e (β )h 2 ) β 2 (e (β 2)h + e (β 2)h 2 ) det β 2 (eβ h 2 K + e β h ) β (eβ 2h 2 K + e β 2h = 0 (320) ) By multiplying e β h to the first column of the matrix in (320), e β 2h to the second column and then multiplying eh β β 2 to the first row and to the second row, we obtain [ β det 2 ( + e (β) h ) β ( + e (β2) h ) β 2 ( + K e β h ) β ( + K e β2 h = 0 (32) ) 7 C 2

where h = h 2 h In addition, from (34) and (38), we have e h = [ β deta det β eβ h β 2 β 2 eβ 2h K e β h + e β h 2 K e β 2h 2 + e β 2h [ β det β eβ h β 2 [ β 2 eβ 2h β β 2 det β β 2 K e βh + e βh2 K e β2h2 + e β2h K + e β h K e β2 h + = [ e β h 2 e det β 2h 2 = [ e β h e det β 2 h e βh e β2h which implies [ β det β h = log det β 2 β 2 K + e β h K e β2 h + [ e β h e β 2 h (322) and hence, we also obtain h 2 = h + h With this h and h 2, we compute C, C 2 by (38) To prove the existence of solutions to the equations (3)-(34), we show that there is a solution h to (32) in (0, ) As h 0, the left term in (32) tends to ) Since ( 2 + K ) ( det β 2 (β ) β (β 2 ) +e (β ) h β +e (β 2 ) h β 2 + K K e β h 2 K + K e β 2 h 2 β β 2 = = e β2 h det ( + K ) 2(β 2 β ) β β 2 (β )(β 2 ) > 0 +e (β ) h e β 2 h +e h β β 2 + K K e β h e β2 h K + 2 β β 2, (323) we have, as h, det +e (β ) h e β 2 h +e h β β 2 + K K e β h e β2 h K + 2 β β 2 det [ β 0 K β β 2 = K (β )β 2 < 0 Therefore, (32) has a solution h in (0, ) by the intermediate value theorem With this h, we compute {C, C 2, h, h 2 } by the formulas (322), (38) and h 2 = h + h (Later in this paper, we show that {C, C 2, h, h 2 } is a solution to the system of equations (3)-(34)) Define the function V (x) by the formula { C e V (x) = βx + C 2 e β2x if x (h, h 2 ) g(x) if x (h, h 2 ) c where g is the function in (32) Then V is a solution of the boundary value problem in Proposition 2 Hence we have V (x) = E x [e rτ (h,h 2 ) c g(x τ(h,h 2 ) for all x R Also, by Proposition 22 ) c, we observe V (x) g(x) for all x To prove that V is indeed the value function, by Theorem 2, it remains to verify that (L X r)v (x) 0 for all x / (h, h 2 ) For x > h 2, we have (L X r)v (x) = σ2 2 g (x) + cg (x) rg(x) = ( 2 σ2 + c r)e x + r Since 2 σ2 + c < r, we observe d dx (L X r)v (x) = ( 2 σ2 + c r)e x < 0 which implies that (L X r)v (x) is a decreasing function on (h 2, ) In addition, we have (L X r)v (x) = 0 for x (h, h 2 ) and, hence, (L X r)v (h + 2 ) = (L 
X r)v (h + 2 ) (L X r)v (h 2 ) = [ 2 σ2 V (h + 2 ) + cv (h + 2 ) rv (h+ 2 ) [ 2 σ2 V (h 2 ) + cv (h 2 ) rv (h 2 ) = 2 σ2 [V (h + 2 ) V (h 2 ) = 2 σ2 (e h 2 2 C n βne 2 β nh 2 ) (The third equality holds since the function V (x) satisfies both the continuous fit and the smooth pasting conditions at h 2 ) Since V (x) 0 and C and C 2 have the same sign, we observe C i 0 n= 8

β η β 2 η 2 β 3 β N η N β N + 0 * β N +2η + β N +3 η 2 + β N+ η + N β + N+2 Figure : Relationship of η i, i N, η + j, j N + and β n, n N + 2 for i =, 2 Also we have β < 0 < < β 2 Therefore we observe (L X r)v (h + 2 ) = 2 σ2 (e h 2 2 n= C nβ 2 ne βnh2 ) 2 σ2 (e h2 2 n= C nβ n e βnh2 ) = 0 This implies that (L X r)v (x) L X r)v (h + 2 ) 0 for all x > h 2 By a similar argument, (L X r)v (x) 0 for x < h The proof is complete Now we go back to the equations (33)-(38) From this point on, we set N = N + N + and assume that the conditions (39) and (30) hold Subtract (35) from (37) and (36) from (38), we have N+2 n= N+2 n= Using (325), (38) and (34), we have N+2 n= for k =, 2,, N Similarly, by (324), (37) and (33), we have N+2 n= for k =, 2,, N + From equations (324) and (325), we also have N+2 i= In addition, by (37) and (38), we have C n ( β n )e βnh2 = (324) C n ( β n )e βnh = K (325) β n ( β n ) C n β n η e β nh = 0, (326) k β n ( β n ) C n β n η + e β nh 2 = 0 (327) k C n ( β n )( K e βnh + e βnh2 ) = 0 (328) N+2 i= C n β n (e (β n)h + e (β n)h 2 ) = 0 (329) Lemma 32 Assume that {C,, C N+2, h, h 2 } is a solution of the equations (33)-(38) Then C j 0 except for at most one j Proof : See the Appendix Lemma 33 Assume that {C,, C N+2, h, h 2 } is a solution of the equations (33)-(38) Then C n 0 for all n Proof : See the Appexdix Lemma 34 Assume that N+2 n= C nβ n e βnx0 = e x0 for some x 0 R Then there exists ϵ > 0 such that N+2 n= C nβ n e βnx < e x for all x (x 0 ϵ, x 0 ) Also we have N+2 n= C nβ n e βnx e x for all x x 0 9

Proof: See the Appendix Theorem 3 Let {C,, C N, h, h 2 } be a solution of the equations (33)-(38) Define the function V (x) by the formula { N+2 V (x) = n= C ne β nx if x (h, h 2 ) g(x) if x (h, h 2 ) c where g is the function in (32) Then V is the value function of the optimal stopping problem () Also, we have V (x) = E x [e rτ (h,h 2 ) c g(x τ(h,h 2 ) for all x R and hence τ ) c (h,h 2 ) c is the optimal stopping time for the optimal stopping problem () Proof : Clearly the function V (x) satisfies conditions (a)-(c) of Theorem 2 Direct computation shows that the function V is a solution of the boundary value problem (27) Because C n s are nonnegative according to Lemma 33, thus, h < l = ln K ln = l 2 < h 2 by (35) and (36) Also functions g and g 2 satisfy the conditions in Proposition 2 Therefore we have V (x) = E x [e rτ (h,h 2 ) c g(x τ(h,h 2 ) for all x R Note that functions g ) c and g 2 also satisfy the conditions in Proposition 22 and V also satisfies conditions (c) and (d) of Proposition 22 Hence by Proposition 22, we obtain N+2 n= C ne βnx g(x) for x (h, h 2 ) By Theorem 2, it remains to show that (L X r)v (x) 0 for x [h, h 2 c Note that, on x > h 2 > ln, direct calculation gives (L X r)v (x) = e x ( 2 σ2 + c + ( N N+2 +λ q j e η j x j= η n= j = e x ( 2 σ2 + c + j= N + i= ( N N+2 +λ q j e η j x η n= j N + i= N λp i η + i + j= C n η j β e (β nη j )h 2 n N λp i η + i + j= C n β n β e (β nη j )h 2 n λq j η j ) r(ex ) η j η j e(η λq j η j ) r(ex ) (The last equality holds because of (35)) Let Ψ j (x) = N+2 n= η j j )h 2 e(η j )h 2 + e η j h 2 C n β n e (β nη η j )x j βn η j )x j e(η j N and x R First we show that Ψ j (h 2 ) 0 By (34) and (36), we have Ψ j (h ) = 2 η j ) ) for e (η j )h > 0 Also, we observe Ψ j (x) = N+2 n= C nβ n e (βnη j )x +e (η j )x = e η j x ( N+2 n= C nβ n e βnx e x ) We need the fact that N+2 n= C nβ n e βnx e x 0 for all x (h, h 2 ) (Indeed, if N+2 n= C nβ n e βnh e h = 0 for some h (h, h 2 ), by Lemma 34, N+2 n= C nβ n e βnx e x 0 for 
all x [h, h 2 Note that by (37), we have N+2 n= C nβ n e βnh2 e h2 = 0 and by Lemma 34, there exists ϵ > 0 such that N+2 n= C nβ n e βnx e x < 0 for all x (h 2 ϵ, h 2 which is a contradiction) Combining this with the fact that N+2 n= C nβ n e βnh e h = 2e h < 0, we obtain N+2 n= C nβ n e βnx e x 0 for all x [h, h 2 and hence, Ψ j (x) 0 on [h, h 2 This implies that Ψ j (x) is an increasing function and hence Ψ j (h 2 ) Ψ j (h ) > 0 Therefore, on x > h 2 > ln, we observe d dx (L X r)v (x) = ( 2 σ2 + c + N + i= λp i + N η + i j= λq j η j r)ex + λ N j= q jψ j (h 2 )η j eη j x 0, which implies that (L X r)v (x) is a decreasing function and its maximum value is (L X r)v (h 2 +) Because V (x) satisfies the smooth pasting condition at h 2 and (L X r)v (h 2 ) = 0, we get (L X r)v (h 2 +) = (L X r)v (h 2 +) (L X r)v (h 2 ) = 2 σ2 (V (h 2 +) V (h 2 )) = N+2 2 σ2 (e h 2 C n βne 2 β nh 2 ) < N+2 2 σ2 (e h 2 C n β n e β nh 2 ) = 0 n= Therefore (L X r)v (x) (L X r)v (h + 2 ) < 0 for all x > h 2 By the same procedure, we verify (L X r)v (x) is an increasing function for x h and (L X r)v (h ) 0, which implies (L X r)v (x) 0 for all x h The proof is complete n= 0
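For numerical work, the piecewise definition of V in Theorem 3.1 is straightforward to evaluate once {C_n, β_n, h_1, h_2} are known. A minimal sketch, assuming the strangle payoff g(x) = (K_1 - e^x)^+ + (e^x - K_2)^+ as the form of (3.2); the coefficients used in the check are illustrative placeholders, not a solved example:

```python
import math

def payoff(x, K1, K2):
    # Strangle payoff g(x) = (K1 - e^x)^+ + (e^x - K2)^+ (assumed form of (3.2)).
    return max(K1 - math.exp(x), 0.0) + max(math.exp(x) - K2, 0.0)

def value_function(x, C, beta, h1, h2, K1, K2):
    # V(x) = sum_n C_n e^{beta_n x} on the continuation region (h1, h2),
    # and V(x) = g(x) on the stopping region (h1, h2)^c.
    if h1 < x < h2:
        return sum(c * math.exp(b * x) for c, b in zip(C, beta))
    return payoff(x, K1, K2)
```

With a genuine solution of (3.3)-(3.8), V is continuously differentiable across h_1 and h_2 by smooth pasting, which gives a direct numerical check of a computed solution.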

4 Existence of Solutions to Equations (3.3)-(3.8)

In this section we prove the existence of solutions to the system of equations (3.3)-(3.8). According to (3.25)-(3.28), we have ÃDC = K̄, where D is the (N+2)×(N+2) diagonal matrix with entries d_ii = β_i(1 - β_i), K̄ = [0, 0, ..., 0, K]^T is an (N+2)-dimensional column vector, and Ã is the (N+2)×(N+2) coefficient matrix of that system: its first N^- rows have entries e^{β_n h_1}/(β_n + η_j^-) (1 ≤ j ≤ N^-), its next N^+ rows have entries e^{β_n h_2}/(β_n - η_i^+) (1 ≤ i ≤ N^+), and its last two rows come from the value-matching conditions at h_1 and h_2 and involve the combinations K_1 e^{β_n h_1} + e^{β_n h_2} and e^{β_n h_1} + e^{β_n h_2} (n = 1, ..., N+2). Then the coefficient vector C is equal to

C = (K / det Ã) D^{-1} Y,  (4.1)

where Y is the last column of the cofactor matrix of Ã. Thus, if we find the boundaries of the continuation region (h_1, h_2), then we can compute the coefficient vector C by (4.1).

Proposition 4.1. Let {C_1, ..., C_{N+2}, h_1, h_2} be a solution of the equations (3.3)-(3.8). Then h̄ = h_2 - h_1 is a solution of the equation det B(h) = 0, where, for every h ∈ R, B(h) is the (N+2)×(N+2) matrix whose n-th column (n = 1, ..., N+2) is

(1/(β_n + η_1^-), ..., 1/(β_n + η_{N^-}^-), e^{β_n h}/(β_n - η_1^+), ..., e^{β_n h}/(β_n - η_{N^+}^+), (1 + K_1 e^{β_n h})/β_n, 1 + e^{(β_n - 1)h})^T.  (4.2)

Moreover, we have

h_1 = log det A_2 - log det A_1,  (4.3)

where A_1 and A_2 are the (N+2)×(N+2) matrices that agree with B(h̄) in their first N+1 rows and differ from it only in the last row: the last row of A_1 is that of Ã after rescaling its n-th column by e^{-β_n h_1}, while the last row of A_2 comes from the smooth pasting condition (3.8).

Proof: See the Appendix.
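Numerically, the cofactor formula (4.1) is best implemented as a direct linear solve of ÃDC = K̄, which is algebraically equivalent. A minimal sketch; the 4×4 matrix used in the check below is a random stand-in for Ã, not the actual matrix:

```python
import numpy as np

def coefficients(A_tilde, d, K):
    # Solve (A_tilde D) C = [0, ..., 0, K]^T for the coefficient vector C,
    # where D = diag(d) with d_i = beta_i (1 - beta_i).
    M = A_tilde @ np.diag(d)
    rhs = np.zeros(M.shape[0])
    rhs[-1] = K
    return np.linalg.solve(M, rhs)
```

The solve is preferable to forming the cofactor column explicitly: it costs O(n³) once and avoids the cancellation that plagues determinant ratios for larger N.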

Proposition 4.2. Given any h ∈ R, define the matrix B(h) as in (4.2). There exists a positive solution h̄ of the equation det B(h) = 0.

Proof: See the Appendix.

Theorem 4.1. Let h̄ be a positive solution of the equation det B(h) = 0 and define h_1 by (4.3). Set h_2 = h_1 + h̄ and compute {C_1, ..., C_{N+2}} by the formula (4.1). Then {C_1, ..., C_{N+2}, h_1, h_2} is a solution of the equations (3.3)-(3.8).

Proof: The system of equations (3.3)-(3.8) is equivalent to ÃDC = K̄ together with the smooth pasting conditions (3.7) and (3.8). From the proof of Proposition 4.1, we know that {C_1, ..., C_{N+2}, h_1, h_2} satisfies ÃDC = K̄ and (3.8). It remains to check that (3.7) is satisfied. By (4.1), the left-hand side

of (3.7) is

Σ_{n=1}^{N+2} (K y_n / ((1 - β_n) det Ã)) e^{β_n h_2},

where y_n is the n-th entry of Y. By the cofactor representation of Y, this sum equals K det(Ã')/det(Ã), where Ã' is the matrix obtained from Ã by replacing the row corresponding to Y with the vector (e^{β_n h_2}/((1 - β_n)K))_{n=1}^{N+2}. Multiplying the n-th column of both determinants by e^{-β_n h_1} for each n reduces each of them to a determinant that depends only on h̄ = h_2 - h_1; in the notation of Proposition 4.1, the numerator becomes e^{h̄} det A_2, while the denominator becomes a determinant with the same first N+1 rows as A_1.

Since h̄ satisfies det B(h̄) = 0, the determinant obtained from det Ã by these column operations equals det A_1. Therefore, the left-hand side of (3.7) is equal to (det A_2 / det A_1) e^{h̄} = e^{h_1 + h̄} = e^{h_2}. The proof is complete.

5 Numerical Results

In this section, we solve the system of equations (3.3)-(3.8) numerically. To do so, we first find the length h̄ of the optimal interval by solving the equation det B(h) = 0, where B(h) is the square matrix in (4.2). (Note that this equation depends only on h, and hence we can use a simple and fast approach like Newton's method to solve it.) Second, we compute h_1 by (4.3) and set h_2 = h_1 + h̄. Finally, we obtain the coefficient vector C according to (4.1) and evaluate the value function V(x) by the formula V(x) = Σ_{n=1}^{N+2} C_n e^{β_n x} for x ∈ (h_1, h_2).

Example 3. Consider the case N^+ = N^- = 1. In addition, as in Boyarchenko [3], we take c = 0.05, σ = 0.25, r = 0.06, η_1^+ = 10.4, η_1^- = 0.7, λ = 3.5, p_1 = q_1 = 0.5 and the strike prices K_1 = 50 and K_2 = 100. Then the value function is given by V(x) = Σ_{n=1}^4 C_n e^{β_n x} on (h_1*, h_2*), where

(h_1*, h_2*) = (2.992, 6.953),
{β_1, β_2, β_3, β_4} = {-3.482, -0.2322, 1.995, 16.953},
{C_1, C_2, C_3, C_4} = {2.59533, 6.224, 0.283, 4.624 × 10^{-8}}.
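The three-step procedure described above can be sketched as follows. Here `build_B` stands for a user-supplied routine assembling the matrix B(h) of (4.2); the 2×2 matrix in the check below is a toy stand-in whose determinant vanishes at h = 1, not the actual B(h), and bisection is used in place of Newton's method for robustness. Equation (4.3) is read as h_1 = log det A_2 - log det A_1:

```python
import numpy as np

def interval_length(build_B, a, b, tol=1e-12):
    # Step 1: the length h of the optimal interval solves det B(h) = 0;
    # bisect on [a, b], assuming det B changes sign there.
    f = lambda h: np.linalg.det(build_B(h))
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "det B must change sign on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def boundaries(h, det_A1, det_A2):
    # Step 2: h1 from (4.3), then h2 = h1 + h.
    h1 = np.log(det_A2) - np.log(det_A1)
    return h1, h1 + h
```

Step 3 is then the linear solve of (4.1) at the computed (h_1, h_2).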

Figure 2: The solid line is the value function V(x) for the jump-diffusion model with N^+ = N^- = 1 and the dashed line is the one for the diffusion model, that is, N^+ = N^- = 0. The optimal boundaries are marked by circles for the jump-diffusion model, and by triangles for the diffusion model.

Besides, if we take N^+ = N^- = 0, which is the diffusion case in Example 1, then we observe V(x) = Σ_{n=1}^2 C_n e^{β_n x} on (h_1*, h_2*), where

(h_1*, h_2*) = (3.45, 4.859),
{β_1, β_2} = {-5.607, 4.9207},
{C_1, C_2} = {4.0378534, 1.088 × 10^{-9}}.

It is interesting to note that in the jump-diffusion model the optimal interval (h_1*, h_2*) is much wider than that for the diffusion case. This indeed makes sense, because the jumps provide more opportunities to earn large gains, and hence it can be expected that investors will not exercise the options in the jump-diffusion environment earlier than in the diffusion one. Figure 3 shows the graph of the determinant of B(h) as a function of h. It shows that the zero of the determinant (this is h̄) is unique. Besides, the graph descends sharply near the zero of the determinant, which implies that we can compute h̄ quickly and accurately.

Example 4. Consider the jump-diffusion model with N^- = N^+ = 2 and let c = 0.05, σ = 0.25, r = 0.06, η_1^+ = 10.5, η_2^+ = 10.25, η_1^- = 2.4, η_2^- = 7.5, λ = 3.5, p_1 = p_2 = q_1 = q_2 = 0.25 and the strike prices K_1 = 50 and K_2 = 100. In this model, the expected value E[e^{X_1}] is the same as the one with N^- = N^+ = 1 in Example 3. The value function is V(x) = Σ_{n=1}^6 C_n e^{β_n x} on (h_1*, h_2*), where

(h_1*, h_2*) = (2.153, 6.380),
{β_1, ..., β_6} = {-17.997, -9.409, -0.55, 1.642, 3.242, 7.093},
{C_1, ..., C_6} = {7.35200029, 2.406048, 4.4297, 0.2679, 8.843 × 10^{-9}, 2.467 × 10^{-9}}.

As noted before, the models in Example 3 and Example 4 have the same expected value E[e^{X_1}]. However, the optimal interval in Example 4 (N^- = N^+ = 2) is wider than that for the case N^- = N^+ = 1.

Figure 3: The graph of the determinant of B(h) for finding the length h̄ of the optimal interval. It shows that there is only one zero of the determinant.

Figure 4: The solid line is the value function V(x) for the jump-diffusion model with N^+ = N^- = 2 and the dashed line is the one for the model with N^+ = N^- = 1. The optimal boundaries for the case N^+ = N^- = 2 are marked by circles, and by triangles for the case N^- = N^+ = 1.

Figure 5: The graph of the determinant for finding the length of the optimal interval for the case N^- = N^+ = 2. The figure has properties similar to the case N^- = N^+ = 1; in particular, there is only one zero of the determinant.

6 Concluding Remarks

American option contracts are more complicated to analyze than their European counterparts, because an American option can be exercised at any time prior to its expiration. Mathematically, this means that we have to solve an optimal stopping problem of the form (1.1). Instead of the corresponding PDEs for the European counterparts, a problem of this kind always leads to a so-called free boundary problem, which is not easy to solve. Usually we have no explicit pricing formulas for the value functions, and the optimal exercise boundaries are not known. We refer to the monograph of Peskir and Shiryaev [15] for more details and related topics on optimal stopping and free boundary problems.

The American call and put options are the simplest American contracts. The pricing problem for these options has been widely studied and generalized since McKean [10] and Merton [11]. For recent works in the Lévy-model setting, we refer to Mordecki and Salminen [12], Boyarchenko and Levendorskii [4], and Asmussen et al. [1] and the references therein. In this paper we consider the perpetual American strangle and straddle options; a strangle is a combination of a put and a call written on the same security. As in Asmussen et al. (2004) and many others, we consider the pricing problems of these options in jump-diffusion models. By the free boundary problem approach, we solve the corresponding optimal stopping problems and hence find the optimal exercise boundaries and the rational prices of the perpetual American strangle and straddle options. More precisely, following the approach in Chen et al. [5], we derive an equivalent system of equations for the free boundary problem with smooth pasting condition. By solving the system of equations, we obtain an algorithm for computing the rational prices and the optimal exercise boundaries of these options. (Boyarchenko [3] studied the same pricing problems by a different approach, assuming the smooth pasting principle for the value functions. In fact, Boyarchenko posed the verification of the smooth pasting principle for the value functions as an open problem in [3], and we resolve this open problem in Theorem 3.1.) The present method, together with the general results in Section 2, could possibly give an alternative approach to computing prices of other exotic options in jump-diffusion models.

7 Appendix

Proof of Proposition 2.1. We follow a similar argument as that in Chen et al. [5]. Fix x ∈ (h_1, h_2). Pick a sequence of functions {Ṽ_n} ⊂ C_0^2(R) such that Ṽ_n = Ṽ on [h_1, h_2] and Ṽ_n → Ṽ on R. Since g is bounded, we can choose {Ṽ_n} such that the Ṽ_n are uniformly bounded on (-∞, c] for any c ∈ R, and Ṽ_n(x) ≤ 2g_2(x) for all n and all x ≥ M. Here M > h_2 is a strictly positive constant (independent

of n). By Dynkin's formula, we have

E_x[e^{-r(t ∧ τ_I)} Ṽ_n(X_{t ∧ τ_I})] = E_x[∫_0^{t ∧ τ_I} e^{-ru} (L_X - r)Ṽ_n(X_u) du] + Ṽ_n(x).  (7.1)

For every u < t ∧ τ_I, we have X_u ∈ (h_1, h_2) and hence Ṽ_n(X_u) = Ṽ(X_u). This gives

(L_X - r)Ṽ(X_u) - (L_X - r)Ṽ_n(X_u) = ∫ [Ṽ(X_u + y) - Ṽ_n(X_u + y)] f(y) dy  (7.2)

and hence

|(L_X - r)[Ṽ(X_u) - Ṽ_n(X_u)]| ≤ sup_{z ≤ M + |h_1| + |h_2|} (|Ṽ(z)| + |Ṽ_n(z)|) + sup_{h_1 ≤ x ≤ h_2} ∫_{y ≥ M + |h_1|} 3g_2(x + y) f(y) dy < ∞.  (7.3)

By the Dominated Convergence Theorem and (7.2), for all u < t ∧ τ_I, (L_X - r)Ṽ_n(X_u) → (L_X - r)Ṽ(X_u) as n → ∞. By (7.3) and the Dominated Convergence Theorem, we have

lim_{n→∞} E_x[∫_0^{t ∧ τ_I} e^{-ru} (L_X - r)Ṽ_n(X_u) du] = E_x[∫_0^{t ∧ τ_I} e^{-ru} (L_X - r)Ṽ(X_u) du].

Note that Ṽ_n(x) ≤ sup_n sup_{x ≤ M} |Ṽ_n(x)| + 2g(x) and E_x[sup_{t ≥ 0} e^{-rt} g(X_t)] < ∞. Letting n → ∞ on both sides of (7.1), together with the Dominated Convergence Theorem, gives

E_x[e^{-r(t ∧ τ_I)} Ṽ(X_{t ∧ τ_I})] = E_x[∫_0^{t ∧ τ_I} e^{-ru} (L_X - r)Ṽ(X_u) du] + Ṽ(x) = Ṽ(x).  (7.4)

Note that the last equality follows from the assumption that (L_X - r)Ṽ = 0 in (h_1, h_2). Since

E_x[sup_{t ≥ 0} e^{-rt} |Ṽ(X_t)|] ≤ sup_{y ≤ h_2} |Ṽ(y)| + E_x[sup_{t ≥ 0} e^{-rt} g(X_t)] < ∞,

our result follows by letting t → ∞ on both sides of the equality in (7.4) and the Dominated Convergence Theorem. This completes the proof.

Proof of Lemma 3.1. First consider the case N^+ = 0. Then β_{N^-+2} is the unique solution of the equation ϕ(x) = 0 in (0, ∞). Observe that lim_{x→∞} ϕ(x) = ∞ and that ϕ(1) < 0 by (3.9) and (3.10). Our result follows by the intermediate value theorem. Next assume that N^+ ≥ 1. Then β_{N^-+2} is the unique solution in (0, η_1^+) of the equation

ϕ(x) = Π_{i=1}^{N^+}(η_i^+ - x) Π_{j=1}^{N^-}(η_j^- + x) [½σ²x² + cx - (λ + r) + λ(Σ_{i=1}^{N^+} p_i η_i^+/(η_i^+ - x) + Σ_{j=1}^{N^-} q_j η_j^-/(η_j^- + x))] = 0.

Also we have

ϕ(1) = Π_{i=1}^{N^+}(η_i^+ - 1) Π_{j=1}^{N^-}(η_j^- + 1) [½σ² + c - (λ + r) + λ(Σ_{i=1}^{N^+} p_i η_i^+/(η_i^+ - 1) + Σ_{j=1}^{N^-} q_j η_j^-/(η_j^- + 1))]

and ϕ(η_1^+) = λ p_1 η_1^+ Π_{i=2}^{N^+}(η_i^+ - η_1^+) Π_{j=1}^{N^-}(η_j^- + η_1^+). By (3.9) and (3.10), we obtain ϕ(1)ϕ(η_1^+) < 0, which implies β_{N^-+2} > 1.

Proof of Lemma 3.2. Set h̄ = h_2 - h_1 and put Ĉ_n = e^{β_n h_1}(1 - β_n)β_n C_n for 1 ≤ n ≤ N+2. Then, by (3.24)-(3.27), we have AĈ = K̄, where Ĉ = (Ĉ_1, ..., Ĉ_{N+2})^T, K̄ = (0, ..., 0, K_1, -1)^T, and A is the (N+2)×(N+2) matrix with entries A_{j,n} = 1/(β_n + η_j^-) for 1 ≤ j ≤ N^-, A_{N^-+i,n} = e^{β_n h̄}/(β_n - η_i^+) for 1 ≤ i ≤ N^+, A_{N+1,n} = 1/β_n and A_{N+2,n} = e^{β_n h̄}/β_n.

Let F_1(x) = Σ_{i=1}^{N+2} Ĉ_i/(β_i - x) and F_2(x) = Σ_{i=1}^{N+2} e^{β_i h̄} Ĉ_i/(β_i - x), and set S_1(x) = F_1(x) Π_{i=1}^{N+2}(β_i - x) and S_2(x) = F_2(x) Π_{i=1}^{N+2}(β_i - x). Clearly,

S_1(x) = Σ_{n=1}^{N+2} Ĉ_n Π_{i≠n}(β_i - x) and S_2(x) = Σ_{n=1}^{N+2} e^{β_n h̄} Ĉ_n Π_{i≠n}(β_i - x).  (7.5)

Then S_1(x) and S_2(x) are polynomials of degree at most N+1. Also, by the fact that AĈ = K̄, we have S_1(0) = K_1 Π_{i=1}^{N+2} β_i, S_2(0) = -Π_{i=1}^{N+2} β_i, S_1(-η_k^-) = 0 for 1 ≤ k ≤ N^-, and S_2(η_k^+) = 0 for 1 ≤ k ≤ N^+. By (7.5), we have

Ĉ_n = S_1(β_n)/Π_{i≠n}(β_i - β_n) = e^{-β_n h̄} S_2(β_n)/Π_{i≠n}(β_i - β_n) for 1 ≤ n ≤ N+2.  (7.6)

From this, we have S_2(β_n) = S_1(β_n) = 0 if and only if S_2(β_n)S_1(β_n) = 0. In addition, Ĉ_n = 0 if and only if S_2(β_n)S_1(β_n) = 0. Also, if S_1(β_k) and S_2(β_k) are nonzero for some 1 ≤ k ≤ N+2, then S_2(β_k)/S_1(β_k) = e^{β_k h̄}. It remains to show that |Θ| ≤ 1, where Θ = {β_n : S_1(β_n) = S_2(β_n) = 0, 1 ≤ n ≤ N+2} and |Θ| is the cardinality of Θ. To do this, we need the following facts:

(1) If S_2(x) ≠ 0 on (-η_k^-, β_{k+1}] for some k, 1 ≤ k ≤ N^-, then S_2(x) - S_1(x) = 0 has a solution in (-η_k^-, β_{k+1}).
(2) If S_2(x) ≠ 0 on [β_k, -η_k^-) for some k, 1 ≤ k ≤ N^-, then S_2(x) - S_1(x) = 0 has a solution in (β_k, -η_k^-).
(3) If S_1(x) ≠ 0 on (η_k^+, β_{N^-+2+k}] for some k, 1 ≤ k ≤ N^+, then S_2(x) - S_1(x) = 0 has a solution in (η_k^+, β_{N^-+2+k}).
(4) If S_1(x) ≠ 0 on [β_{N^-+1+k}, η_k^+) for some k, 1 ≤ k ≤ N^+, then S_2(x) - S_1(x) = 0 has a solution in (β_{N^-+1+k}, η_k^+).
(5) If S_2(x) ≠ 0 on [β_{N^-+1}, 0), then S_1(x) has a zero in (β_{N^-+1}, 0).
(6) If S_1(x) ≠ 0 on (0, β_{N^-+2}], then S_2(x) has a zero in (0, β_{N^-+2}).

To prove (1), we assume that S_2(x) ≠ 0 for all x ∈ (-η_k^-, β_{k+1}]. Let x̄ = sup{x ∈ [-η_k^-, β_{k+1}] : S_1(x) = 0}. Note that x̄ exists because S_1(-η_k^-) = 0, and that x̄ < β_{k+1}. Because S_2(x)/S_1(x) is continuous on (x̄, β_{k+1}], 0 < S_2(β_{k+1})/S_1(β_{k+1}) = e^{β_{k+1} h̄} < 1, and lim_{x→x̄+} S_2(x)/S_1(x) = ∞, by the intermediate value theorem there exists x_0 ∈ (x̄, β_{k+1}) such that S_2(x_0)/S_1(x_0) = 1. This completes the proof of fact (1). Facts (2)-(4) are verified by similar arguments. Next, we verify fact (5), and assume that S_2(x) ≠ 0 for all x ∈ [β_{N^-+1}, 0). Then

sgn(S_2(β_{N^-+1}) S_1(β_{N^-+1})) = sgn(e^{β_{N^-+1} h̄} Ĉ_{N^-+1}² Π_{i≠N^-+1}(β_i - β_{N^-+1})²) > 0

and sgn(S_2(0) S_1(0)) = sgn(-K_1 Π_{i=1}^{N+2} β_i²) < 0, which imply that S_1(x) has a zero in (β_{N^-+1}, 0). The proof of fact (6) is similar.

Let S(x) = S_2(x) - S_1(x). Then S(x) is a polynomial of degree at most N+1 and S(β_k) = 0 whenever β_k ∈ Θ. Let

Π = {[β_{N^-+1}, 0) : β_{N^-+1} ∉ Θ} ∪ {(0, β_{N^-+2}] : β_{N^-+2} ∉ Θ} ∪ {[β_k, -η_k^-) : β_k ∉ Θ, 1 ≤ k ≤ N^-} ∪ {(-η_k^-, β_{k+1}] : β_{k+1} ∉ Θ, 1 ≤ k ≤ N^-} ∪ {[β_{N^-+1+k}, η_k^+) : β_{N^-+1+k} ∉ Θ, 1 ≤ k ≤ N^+} ∪ {(η_k^+, β_{N^-+2+k}] : β_{N^-+2+k} ∉ Θ, 1 ≤ k ≤ N^+}.

Note that Π is a collection of intervals and that |Π|, the number of intervals in Π, is at least 2(N+1) - 2|Θ|. Let Π_1 = {I ∈ Π : S(x) = 0 has no solution in I}. Since |{x : S(x) = 0, x ∉ Θ}| ≤ N+1 - |Θ|, we have |Π_1| ≥ 2(N+1) - 2|Θ| - (N+1 - |Θ|) = N+1 - |Θ|. For any I ∈ Π_1, by facts (1)-(4), we obtain

(a) if sup_{x ∈ I} x ≤ β_{N^-+1}, then the equation S_2(x) = 0 has a solution in I;
(b) if inf_{x ∈ I} x ≥ β_{N^-+2}, then the equation S_1(x) = 0 has a solution in I.

Also, by fact (5), S_1(x)S_2(x) = 0 for some x ∈ [β_{N^-+1}, 0). Similarly, by fact (6), S_1(x)S_2(x) = 0 for some x ∈ (0, β_{N^-+2}]. From these observations, combined with the fact that for I_1, I_2 ∈ Π_1 we have I_1 ∩ I_2 = ∅ or I_1 ∩ I_2 ⊂ Θ^c, we get

|{x : S_2(x) = 0, x < β_{N^-+2}, x ∉ Θ}| + |{x : S_1(x) = 0, x > β_{N^-+1}, x ∉ Θ}| ≥ |Π_1| ≥ N+1 - |Θ|.  (7.7)

Recall that S_1(-η_k^-) = 0 for 1 ≤ k ≤ N^- and S_2(η_k^+) = 0 for 1 ≤ k ≤ N^+. Therefore,

2(N+1) ≥ |{x : S_1(x) = 0}| + |{x : S_2(x) = 0}| ≥ |{x : S_1(x) = 0, x > β_{N^-+1}, x ∉ Θ}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}, x ∉ Θ}| + |{x : S_2(x) = 0, x < β_{N^-+2}, x ∉ Θ}| + |{x : S_2(x) = 0, x ≥ β_{N^-+2}, x ∉ Θ}| + 2|Θ| ≥ N+1 - |Θ| + N^- + N^+ + 2|Θ| = 2N+1 + |Θ|.  (7.8)

This implies that |Θ| ≤ 1. The proof is complete.

Proof of Lemma 3.3. We define S_1, S_2, Θ, Π, Π_1, and the Ĉ_n's as in the proof of Lemma 3.2. Since Ĉ_n = e^{β_n h_1}(1 - β_n)β_n C_n, and β_n(1 - β_n) < 0 for every n by Lemma 3.1, we observe that C_n ≥ 0 if and only if Ĉ_n ≤ 0. Besides, by Proposition 2.1, we obtain Σ_{n=1}^{N+2} C_n e^{β_n x} = E_x[e^{-r τ_{(h_1,h_2)^c}} g(X_{τ_{(h_1,h_2)^c}})], which is nonnegative for all x ∈ (h_1, h_2). To prove C_n ≥ 0 for all n, it therefore suffices to show that the Ĉ_n's have the same sign.

By Lemma 3.2, |Θ| = 0 or 1. First, we consider the case |Θ| = 1, that is, S_1(β_{k_0}) = S_2(β_{k_0}) = 0 for some 1 ≤ k_0 ≤ N+2. Then |Π| ≥ 2N and, by (7.7),

|{x : S_2(x) = 0, x < β_{N^-+2}, x ≠ β_{k_0}}| + |{x : S_1(x) = 0, x > β_{N^-+1}, x ≠ β_{k_0}}| ≥ |Π_1| ≥ N.

By (7.8), we obtain |{x : S_2(x) = 0}| + |{x : S_1(x) = 0}| = 2N+2. Hence S_1(x) and S_2(x) are polynomials of degree N+1 and all roots of S_1(x) and of S_2(x) are simple. In addition,

2(N+1) ≥ |{x : S_1(x) = 0}| + |{x : S_2(x) = 0}| ≥ |{x : S_2(x) = 0, x < β_{N^-+2}, x ≠ β_{k_0}}| + |{x : S_1(x) = 0, x > β_{N^-+1}, x ≠ β_{k_0}}| + |{x : S_2(x) = 0, x ≥ β_{N^-+2}, x ≠ β_{k_0}}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}, x ≠ β_{k_0}}| + 2 ≥ N + |{x : S_2(x) = 0, x ≥ β_{N^-+2}, x ≠ β_{k_0}}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}, x ≠ β_{k_0}}| + 2,

and hence N ≥ |{x : S_2(x) = 0, x ≥ β_{N^-+2}, x ≠ β_{k_0}}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}, x ≠ β_{k_0}}|. Since S_2(η_k^+) = 0 for 1 ≤ k ≤ N^+ and S_1(-η_k^-) = 0 for 1 ≤ k ≤ N^-, we obtain

{x : S_1(x) = 0, x ≤ β_{N^-+1}, x ≠ β_{k_0}} = {-η_k^- : 1 ≤ k ≤ N^-} and {x : S_2(x) = 0, x ≥ β_{N^-+2}, x ≠ β_{k_0}} = {η_k^+ : 1 ≤ k ≤ N^+}.

Now we consider the case k_0 = 1, that is, S_1(β_1) = S_2(β_1) = 0. Because -η_i^- is the unique root of S_1(x) in [β_i, β_{i+1}], 2 ≤ i ≤ N^-, we obtain S_1(β_i)S_1(β_{i+1}) < 0. By similar arguments, we also have S_2(β_j)S_2(β_{j+1}) < 0 for N^-+2 ≤ j ≤ N+1. By (7.6), we have

Ĉ_n Ĉ_{n-1} = S_1(β_n)S_1(β_{n-1}) / [(β_{n-1} - β_n)(β_n - β_{n-1}) Π_{i ≤ n-2}(β_i - β_n)(β_i - β_{n-1}) Π_{j ≥ n+1}(β_j - β_n)(β_j - β_{n-1})],

where (β_{n-1} - β_n)(β_n - β_{n-1}) < 0 while each paired factor (β_i - β_n)(β_i - β_{n-1}) is positive. Therefore, the elements of C^- = {Ĉ_n : 2 ≤ n ≤ N^-+1} have the same sign, and this is also true for the elements of C^+ = {Ĉ_n : N^-+2 ≤ n ≤ N+2}. Because AĈ = K̄, if the elements of C^- are positive and those of C^+ are negative, then we get the contradiction K_1 = Σ_{n=1}^{N+2} Ĉ_n/β_n < 0; if the elements of C^- are negative and those of C^+ are positive, then we get another contradiction, i.e.,

-1 = Σ_{n=1}^{N+2} e^{β_n h̄} Ĉ_n/β_n > 0. Therefore, the Ĉ_n's must have the same sign. For the case k_0 = N+2, the proof is the same. For the case 1 < k_0 < N+2, by a similar argument as above, we obtain that the elements of C_1^- = {Ĉ_n : 1 ≤ n ≤ k_0 - 1}, C_2^- = {Ĉ_n : k_0 + 1 ≤ n ≤ N^-+1}, and C^+ = {Ĉ_n : N^-+2 ≤ n ≤ N+2} have the same sign, respectively. There are eight possibilities for the signs of C_1^-, C_2^-, and C^+:

(1) C_1^- < 0, C_2^- < 0, C^+ < 0; (2) C_1^- > 0, C_2^- > 0, C^+ > 0; (3) C_1^- < 0, C_2^- < 0, C^+ > 0; (4) C_1^- > 0, C_2^- > 0, C^+ < 0; (5) C_1^- < 0, C_2^- > 0, C^+ > 0; (6) C_1^- > 0, C_2^- < 0, C^+ < 0; (7) C_1^- < 0, C_2^- > 0, C^+ < 0; (8) C_1^- > 0, C_2^- < 0, C^+ > 0.

(We write C_i^± > 0 (resp. < 0) if all elements of C_i^± are greater (resp. smaller) than zero.) We show that cases (3)-(8) are impossible. The arguments disproving cases (3) and (4) are the same as for the case k_0 = 1. Note that

β_1 < -η_1^- < β_2 < -η_2^- < ... < β_{k_0-1} < -η_{k_0-1}^- < β_{k_0} < ... < β_{N^-} < -η_{N^-}^- < β_{N^-+1} < 0 < 1 < β_{N^-+2} < η_1^+ < ... < β_{N+1} < η_{N^+}^+ < β_{N+2}.

Because AĈ = K̄, comparing the (k_0 - 1)-th entries of AĈ and K̄, we obtain Σ_{n=1}^{N+2} Ĉ_n/(β_n + η_{k_0-1}^-) = 0. Therefore, cases (5) and (6) are impossible. Note that the entries of A satisfy the following:

(a) A_{i,j} < 0 for {(i, j) : j ≤ i ≤ N^-+1} ∪ {(i, j) : N^-+2 ≤ i ≤ N+2, j < i}, and A_{i,j} > 0 otherwise.
(b) If A_{i,j} and A_{i+1,j} are negative, then A_{i,j} < A_{i+1,j}.
(c) If A_{i,j} and A_{i+1,j} are positive, then A_{i,j} < A_{i+1,j}.

For case (7), we get the contradiction K_1 = (A_{N+1} - A_{k_0-1})Ĉ < 0, and for case (8), we get the contradiction -1 = (A_{N+2} - A_{k_0-1})Ĉ > 0, where A_i denotes the i-th row of A. Therefore, we complete the proof for the case |Θ| = 1 and 1 < k_0 ≤ N^-+1. The proof for the case |Θ| = 1 and N^-+2 ≤ k_0 ≤ N+2 is similar.

Consider now the case |Θ| = 0, which implies that all the Ĉ_n's are nonzero. Then |Π| = 2N+2 and, by (7.7), |{x : S_2(x) = 0, x < β_{N^-+2}}| + |{x : S_1(x) = 0, x > β_{N^-+1}}| ≥ |Π_1| ≥ N+1. Therefore

2(N+1) ≥ |{x : S_1(x) = 0}| + |{x : S_2(x) = 0}| ≥ |{x : S_2(x) = 0, x < β_{N^-+2}}| + |{x : S_1(x) = 0, x > β_{N^-+1}}| + |{x : S_2(x) = 0, x ≥ β_{N^-+2}}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}}| ≥ N+1 + |{x : S_2(x) = 0, x ≥ β_{N^-+2}}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}}|,

which implies N+1 ≥ |{x : S_2(x) = 0, x ≥ β_{N^-+2}}| + |{x : S_1(x) = 0, x ≤ β_{N^-+1}}|. Because |{x : S_2(x) = 0, x ≥ β_{N^-+2}}| ≥ N^+ and |{x : S_1(x) = 0, x ≤ β_{N^-+1}}| ≥ N^-, we have |{x : x > β_{N^-+2}, S_2(x) = 0}| = N^+ or |{x : x < β_{N^-+1}, S_1(x) = 0}| = N^-.

First, we consider the case |{x : x > β_{N^-+2}, S_2(x) = 0}| = N^+, or equivalently, {x : x ≥ β_{N^-+2}, S_2(x) = 0} = {η_1^+, ..., η_{N^+}^+}. If |{x : x < β_{N^-+1}, S_1(x) = 0}| = N^-, then we have {x : x ≤ β_{N^-+1}, S_1(x) = 0} = {-η_1^-, ..., -η_{N^-}^-}. Similar arguments as for the case |Θ| = 1 imply that the elements of C^- = {Ĉ_n : 1 ≤ n ≤ N^-+1} and of C^+ = {Ĉ_n : N^-+2 ≤ n ≤ N+2} have the same sign, respectively, and hence the signs of the Ĉ_n's are the same. If |{x : x < β_{N^-+1}, S_1(x) = 0}| = N^-+1, then either S_1(x) has a root in (-∞, β_1), or S_1(x) has two roots in (β_{k_0}, β_{k_0+1}) for some 1 ≤ k_0 ≤ N^-. For the case of a root in (-∞, β_1), we can get, as above, that the elements of C^- = {Ĉ_n : 1 ≤ n ≤ N^-+1} and of C^+ = {Ĉ_n : N^-+2 ≤ n ≤ N+2} have the same sign, respectively. If S_1(x) has two roots in (β_{k_0}, β_{k_0+1}) for some 1 ≤ k_0 ≤ N^-, we also observe that the elements of C_1^- = {Ĉ_n : 1 ≤ n ≤ k_0}, C_2^- = {Ĉ_n : k_0+1 ≤ n ≤ N^-+1}, and C^+ = {Ĉ_n : N^-+2 ≤ n ≤ N+2} have the same sign, respectively. By the same argument as for the case |Θ| = 1, the coefficients have the same sign. The proof for the case |{x : x < β_{N^-+1}, S_1(x) = 0}| = N^- is similar and hence we omit it.

Proof of Lemma 3.4. Let F(x) = Σ_{n=1}^{N+2} C_n β_n e^{β_n x} - e^x. Then F'(x) = Σ_{n=1}^{N+2} C_n β_n² e^{β_n x} - e^x. Because β_1 < β_2 < ... < β_{N^-+1} < 0 < 1 < β_{N^-+2} < ... < β_{N+2}, and because C_n ≥ 0 by Lemma 3.2 and Lemma 3.3, if F(x_0) = 0 then

F'(x_0) = Σ_{n=1}^{N+2} C_n β_n² e^{β_n x_0} - e^{x_0} > Σ_{n=1}^{N+2} C_n β_n e^{β_n x_0} - e^{x_0} = 0,  (7.9)

which implies that F(x) is strictly increasing in some neighborhood U_{x_0} of x_0; hence we complete the proof of the first part of the lemma. Assume that there exists x_1 > x_0 such that F(x_1) < 0. Let x̄ = sup{x : x_0 ≤ x < x_1, F(x) = 0}. Then x̄ < x_1, F(x̄) = 0 and, as shown for (7.9), we have F'(x̄) > 0. Therefore, there exists a neighborhood U_{x̄} of x̄ such that F(x) > F(x̄) = 0 for all x ∈ U_{x̄} with x > x̄. This is a contradiction, because F(x) < 0 for all x ∈ (x̄, x_1); hence we complete the proof of the lemma.

Proof of Proposition 4.1. Substituting (4.1) into (3.29), we have

Σ_{n=1}^{N+2} (K y_n / ((1 - β_n) det Ã)) (e^{(β_n - 1)h_1} + e^{(β_n - 1)h_2}) = 0,  (7.10)

where y_n is the n-th entry of the column vector Y. By the cofactor representation of Y, (7.10) is equivalent to the vanishing of the determinant obtained from det Ã by replacing the row corresponding to Y with the vector ((e^{(β_n - 1)h_1} + e^{(β_n - 1)h_2})/(1 - β_n))_{n=1}^{N+2}. Multiplying the i-th column by e^{-β_i h_1} for each i, and then the last row by e^{h_1}, we observe that the resulting determinant depends only on h_2 - h_1; that is, h̄ = h_2 - h_1 is a solution of the equation det B(h) = 0. Substituting (4.1) into (3.8), we have

(K / det Ã) [β_1 e^{β_1 h_1}, ..., β_{N+2} e^{β_{N+2} h_1}] D^{-1} Y = -e^{h_1}.

Note that [β_1 e^{β_1 h_1}, ..., β_{N+2} e^{β_{N+2} h_1}] D^{-1} Y = Σ_{n=1}^{N+2} (y_n / (1 - β_n)) e^{β_n h_1}, which is the determinant obtained from det Ã by replacing the row corresponding to Y with (e^{β_n h_1}/(1 - β_n))_{n=1}^{N+2}. Applying the same column and row operations as above to both determinants and solving for e^{h_1} yields e^{h_1} = det A_2 / det A_1, which is (4.3).
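As a computational footnote to the lemmas above: the interlacing of the roots β_n with the poles -η_j^- and η_i^+ means each root can be bracketed between consecutive poles (splitting the interval containing 0, where ϕ(0) = -r < 0 when Σ p_i + Σ q_j = 1) and found by bisection. A minimal sketch for the hyper-exponential model; the parameters used in the check are illustrative only:

```python
import math

def phi(x, sigma, c, lam, r, p, eta_p, q, eta_m):
    # phi(x) = (1/2) sigma^2 x^2 + c x - (lam + r)
    #        + lam * ( sum_i p_i eta_i^+/(eta_i^+ - x)
    #                + sum_j q_j eta_j^-/(eta_j^- + x) )
    s = 0.5 * sigma * sigma * x * x + c * x - (lam + r)
    s += lam * sum(pi * ep / (ep - x) for pi, ep in zip(p, eta_p))
    s += lam * sum(qj * em / (em + x) for qj, em in zip(q, eta_m))
    return s

def _bisect(f, a, b, it=200):
    # Plain bisection; assumes f(a) and f(b) have opposite signs.
    fa = f(a)
    for _ in range(it):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def roots_beta(sigma, c, lam, r, p, eta_p, q, eta_m, eps=1e-9, big=1e6):
    # One root of phi in each of the N+2 intervals cut out of the real
    # line by the poles -eta_j^- and eta_i^+ together with the point 0.
    f = lambda x: phi(x, sigma, c, lam, r, p, eta_p, q, eta_m)
    pts = sorted([-em for em in eta_m] + [0.0] + list(eta_p))
    cuts = [pts[0] - big] + pts + [pts[-1] + big]
    return [_bisect(f, lo + eps, hi - eps) for lo, hi in zip(cuts[:-1], cuts[1:])]
```

With N^+ = N^- = 1 this returns four roots ordered as β_1 < -η_1^- < β_2 < 0 < β_3 < η_1^+ < β_4, matching the interlacing used in the proofs.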