Lecture 21: Numerical Methods for Pricing American Type Derivatives
Xiaoguang Wang
STAT 598W
April 10th, 2014
(STAT 598W) Lecture 21 1 / 26
Outline
1 Finite Difference Method
    Explicit Method
    Penalty Method
American derivatives

Definition. An American type instrument with maturity T and payoff function f is a contingent claim that can be exercised at any moment up to T. Its payoff at time t equals f(S_t, t).

Theorem. The price of an American claim at time t can be written as V(S_t, t) for some function V : (0, ∞) × [0, T] → R.
Free-boundary problem

Recall that the calculation of the price of an American type derivative can be summarized as a free-boundary problem:

    V(s, t) ≥ f(s, t)
    (∂/∂t + A) V(s, t) ≤ 0
    V(s, t) = f(s, t)  or  (∂/∂t + A) V(s, t) = 0,

plus boundary conditions, where A is the Ito operator we defined before (in Lecture 3). The free boundary is the set of points where V(s, t) = f(s, t). At these points the system is not governed by the partial differential equation.
Boundary conditions for the American Put

Boundary conditions for the American put option:

    Terminal condition:        V(s, T) = (K − s)^+
    Left-boundary condition:   lim_{s→0} V(s, t) = K
    Right-boundary condition:  lim_{s→∞} V(s, t) = 0
Complete free-boundary problem

For s > 0, t ∈ [0, T]:

    V(s, t) ≥ f(s, t)
    (∂/∂t + A) V(s, t) ≤ 0
    V(s, t) = f(s, t)  or  (∂/∂t + A) V(s, t) = 0
    V(s, T) = (K − s)^+
    lim_{s→0} V(s, t) = K
    lim_{s→∞} V(s, t) = 0
Outline
1 Finite Difference Method
    Explicit Method
    Penalty Method
Transformation

We compute the price of an American put option by the explicit method (why?). First we make the following change of variables (the same as for the Black-Scholes PDE):

    x := log s + (r − σ²/2)(T − t)
    τ := σ²(T − t)/2
    y(x, τ) := e^{r(T − 2τ/σ²)} V( e^{x − (2r/σ² − 1)τ}, T − 2τ/σ² )

where x ∈ R and τ ∈ [0, σ²T/2].
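As a small sketch of the change of variables above (function names are illustrative, not from the slides), the map and its inverse can be written as:

```python
import math

def to_heat_coords(s, t, r, sigma, T):
    """Map (s, t) to the heat-equation coordinates (x, tau):
    x = log s + (r - sigma^2/2)(T - t),  tau = sigma^2 (T - t) / 2."""
    x = math.log(s) + (r - 0.5 * sigma**2) * (T - t)
    tau = 0.5 * sigma**2 * (T - t)
    return x, tau

def from_heat_coords(x, tau, r, sigma, T):
    """Invert the map: since (r - sigma^2/2)(T - t) = (2r/sigma^2 - 1) tau,
    s = exp(x - (2r/sigma^2 - 1) tau) and t = T - 2 tau / sigma^2."""
    t = T - 2.0 * tau / sigma**2
    s = math.exp(x - (2.0 * r / sigma**2 - 1.0) * tau)
    return s, t
```

Note that τ runs backwards in time: τ = 0 corresponds to t = T (expiry), and τ = σ²T/2 to t = 0.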
Transformation of the problem

    y(x, τ) ≥ e^{r(T − 2τ/σ²)} ( K − e^{x − (2r/σ² − 1)τ} )^+
    ∂y/∂τ − ∂²y/∂x² ≥ 0
    y(x, τ) = e^{r(T − 2τ/σ²)} ( K − e^{x − (2r/σ² − 1)τ} )^+  or  ∂y/∂τ − ∂²y/∂x² = 0
    y(x, 0) = e^{rT} (K − e^x)^+
    lim_{x→−∞} y(x, τ) = K e^{r(T − 2τ/σ²)}
    lim_{x→+∞} y(x, τ) = 0
Discretization

Grid discretization of time τ:

    δτ = σ²T / (2M),   τ_j = j δτ  for j = 0, 1, …, M

Discretization of space x on [x_{−N}, x_N]:

    δx = (x_N − x_{−N}) / (2N),   x_i = x_{−N} + (i + N) δx  for i = −N, …, N

w_{i,j} denotes the approximation of y(x_i, τ_j).
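A minimal grid-construction sketch (the function name and signature are my own):

```python
def make_grid(x_left, x_right, N, sigma, T, M):
    """Uniform grid: tau_j = j*dtau for j = 0..M with dtau = sigma^2 T / (2M),
    and 2N+1 space nodes from x_left to x_right with spacing dx."""
    dtau = sigma**2 * T / (2 * M)
    dx = (x_right - x_left) / (2 * N)
    tau = [j * dtau for j in range(M + 1)]
    x = [x_left + k * dx for k in range(2 * N + 1)]
    return x, tau, dx, dtau
```

For the explicit scheme that follows, the standard heat-equation stability condition α = δτ/δx² ≤ 1/2 constrains the choice of M and N.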
One step

In the node (x_i, τ_{j+1}), the expression

    u₁ = α w_{i−1,j} + (1 − 2α) w_{i,j} + α w_{i+1,j},   where α = δτ/δx²,

approximates the value of y(x_i, τ_{j+1}) in the case of no exercise. If this value is smaller than the payoff from early exercise,

    u₂ = e^{r(T − 2τ_{j+1}/σ²)} ( K − e^{x_i − (2r/σ² − 1)τ_{j+1}} )^+,

then it is optimal to exercise immediately and w_{i,j+1} = u₂. This is written concisely as

    w_{i,j+1} = max( α w_{i−1,j} + (1 − 2α) w_{i,j} + α w_{i+1,j},
                     e^{r(T − 2τ_{j+1}/σ²)} ( K − e^{x_i − (2r/σ² − 1)τ_{j+1}} )^+ )      (1)
Algorithm for an American Put option

Input: x_{−N}, x_N, M, N, K, T and the parameters of the model.

    δτ = σ²T/(2M),  δx = (x_N − x_{−N})/(2N)
    Calculate τ_j for j = 0, 1, …, M and x_i for i = −N, …, N.
    for i = −N, …, N do
        w_{i,0} = e^{rT} (K − e^{x_i})^+
    end for
    for j = 0, 1, …, M − 1 do
        w_{−N,j+1} = K e^{r(T − 2τ_{j+1}/σ²)}
        w_{N,j+1} = 0
        for i = −N + 1, …, N − 1 do
            u₁ = α w_{i−1,j} + (1 − 2α) w_{i,j} + α w_{i+1,j}
            u₂ = e^{r(T − 2τ_{j+1}/σ²)} ( K − e^{x_i − (2r/σ² − 1)τ_{j+1}} )^+
            w_{i,j+1} = max{u₁, u₂}
        end for
    end for

Output: w_{i,j} for i = −N, …, N and j = 0, 1, …, M.
General American instrument

For a general American instrument we return to the original variables, only making the change of time t → τ = T − t. A general American instrument is characterized by a payoff function g(s, τ). The free-boundary problem for this instrument is given by

    (V(s, τ) − g(s, τ)) · (∂/∂τ − A) V(s, τ) = 0
    (∂/∂τ − A) V(s, τ) ≥ 0
    V(s, τ) − g(s, τ) ≥ 0
    V(s, 0) = g(s, 0)
    lim_{s→0} V(s, τ) = lim_{s→0} g(s, τ)
    lim_{s→∞} V(s, τ) = lim_{s→∞} g(s, τ)

where A V = rs ∂V/∂s + σ²s²/2 ∂²V/∂s² − rV is the spatial part of the Black-Scholes operator (the sign on the time derivative flips under t → τ). This problem is also called the linear complementarity problem for the American instrument defined by the payoff function g(s, τ).
Outline
1 Finite Difference Method
    Explicit Method
    Penalty Method
Penalty method

There are many numerical methods which solve the linear complementarity problem (LCP). We present here the penalty method. This method is simple and efficient (this is particularly visible for more complicated instruments like barrier options). On the other hand, the method is only first order (slow convergence).

The basic idea of the penalty method is simple: we replace the linear complementarity problem by the nonlinear PDE

    ∂V/∂τ = rs ∂V/∂s + σ²s²/2 ∂²V/∂s² − rV + ρ (g(s, τ) − V(s, τ))^+

where, in the limit as the positive penalty parameter ρ → ∞, the solution satisfies V − g ≥ 0.
Finite Difference approximation

We use the same grid as for the Black-Scholes equation, with V_i^n denoting an approximation to V(s_i, τ_n) and g_i^n an approximation to g(s_i, τ_n). In the discrete version, the nonlinear PDE of the penalty method becomes

    V_i^{n+1} − V_i^n = (1 − θ) ( Σ_{j=i±1} δτ (γ_{ij} + β_{ij}) (V_j^{n+1} − V_i^{n+1}) − r δτ V_i^{n+1} )
                       + θ ( Σ_{j=i±1} δτ (γ_{ij} + β_{ij}) (V_j^n − V_i^n) − r δτ V_i^n )
                       + P_i^{n+1} (g_i^{n+1} − V_i^{n+1})

where the choice of θ gives the implicit (θ = 0) or the Crank-Nicolson (θ = 1/2) scheme.
Finite Difference approximation - cont.

The coefficients from the previous slide are as follows:

    P_i^{n+1} = ρ  for V_i^{n+1} < g_i^{n+1}
              = 0  otherwise                                          (2)

    γ_{ij} = σ² s_i² / ( |s_i − s_j| (s_{i+1} − s_{i−1}) )

    β_{ij} = r s_i (j − i) / (s_{i+1} − s_{i−1})        if σ² s_i + r (j − i)(s_i − s_j) > 0
           = ( 2 r s_i (j − i) / (s_{i+1} − s_{i−1}) )^+  otherwise   (3)

where j = i ± 1 and ρ is a penalty factor (a large positive number). The first case of (3) is central differencing of the drift term; the second (forward, or upwind, differencing) is used where central differencing would make γ_{ij} + β_{ij} negative.
Finite Difference approximation - cont.

The numerical algorithm can be written in the concise form

    (I + (1 − θ) δτ M + P(V^{n+1})) V^{n+1} = (I − θ δτ M) V^n + P(V^{n+1}) g^{n+1}

where V^n is a vector with entries V_i^n, g^n a vector with entries g_i^n,

    [M V^n]_i = − Σ_{j=i±1} (γ_{ij} + β_{ij}) (V_j^n − V_i^n) + r V_i^n,

and P(V^n) is a diagonal matrix with entries

    [P(V^n)]_{ii} = ρ  for V_i^n < g_i^n
                  = 0  otherwise                                      (4)
Finite Difference Approximation - cont.

The matrix M has the property of strict diagonal dominance: it has a positive diagonal and non-positive off-diagonals, with each diagonal entry strictly dominating the sum of absolute values of the off-diagonal entries in its row. This property of M is essential for the convergence of the method and is visible from the structure of the upper left corner of the matrix:

    M = [ r + γ₁₂ + β₁₂         −γ₁₂ − β₁₂                      0         …
          −γ₂₁ − β₂₁     r + γ₂₁ + β₂₁ + γ₂₃ + β₂₃      −γ₂₃ − β₂₃        …
          …                                                               ]

Note that in the vector (I − θ δτ M) V^n on the right-hand side, the first and last elements have to be modified to take the boundary conditions into account.
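As an illustrative sketch (my own helper function, using the positive-coefficient rule described above: central differencing where γ + β stays nonnegative, forward differencing otherwise), the tridiagonal rows of M can be assembled and the diagonal dominance checked directly:

```python
def build_coefficients(s, r, sigma):
    """Build the tridiagonal matrix M on a (possibly non-uniform) grid s[0..n-1],
    where [M v]_i = -sum_{j=i+-1}(gamma_ij + beta_ij)(v_j - v_i) + r v_i.
    Returns (lower, diag, upper); boundary rows (i = 0, n-1) are left zero."""
    n = len(s)
    lower, diag, upper = [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        h = s[i + 1] - s[i - 1]
        g_dn = sigma**2 * s[i]**2 / ((s[i] - s[i - 1]) * h)
        g_up = sigma**2 * s[i]**2 / ((s[i + 1] - s[i]) * h)
        b_dn = -r * s[i] / h              # central differencing of the drift
        b_up = r * s[i] / h
        if g_dn + b_dn < 0.0:             # fall back to forward (upwind) differencing
            b_dn = 0.0
            b_up = r * s[i] / (s[i + 1] - s[i])
        lower[i] = -(g_dn + b_dn)         # non-positive off-diagonals
        upper[i] = -(g_up + b_up)
        diag[i] = r + (g_dn + b_dn) + (g_up + b_up)
    return lower, diag, upper
```

By construction each diagonal entry equals r plus the absolute row sum of the off-diagonals, so strict dominance holds whenever r > 0.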
Convergence

Theorem. Let us assume that

    γ_{ij} + β_{ij} ≥ 0,
    1 − θ δτ ( Σ_{j=i±1} (γ_{ij} + β_{ij}) + r ) ≥ 0,
    δτ / δs ≤ const  as  δτ, δs → 0,

where δs = min_i (s_{i+1} − s_i).
Convergence - cont.

Theorem (cont.). Then the numerical scheme for the LCP from the previous slides solves

    ∂V/∂τ − rs ∂V/∂s − σ²s²/2 ∂²V/∂s² + rV ≥ 0
    V_i^{n+1} − g_i^{n+1} ≥ −C/ρ,   C > 0,

and at each node either

    ∂V/∂τ − rs ∂V/∂s − σ²s²/2 ∂²V/∂s² + rV = 0   or   |V_i^{n+1} − g_i^{n+1}| ≤ C/ρ,

where C is independent of ρ, δτ, δs.
Iterative solution

Since we get a nonlinear equation for V^{n+1}, it has to be solved by iteration. We shall use here the simple iteration method. Let (V^{n+1})^{(k)} be the k-th estimate of V^{n+1}. For notational convenience we write

    V^{(k)} = (V^{n+1})^{(k)}   and   P^{(k)} = P((V^{n+1})^{(k)}).

Starting from V^{(0)} = V^n, we have the following algorithm of Penalty American Constraint Iteration.
Algorithm

Input: V^n, tolerance tol.

    V^{(0)} = V^n
    for k = 0, 1, …, until convergence:
        Solve (I + (1 − θ) δτ M + P^{(k)}) V^{(k+1)} = (I − θ δτ M) V^n + P^{(k)} g^{n+1}
        if max_i |V_i^{(k+1)} − V_i^{(k)}| / max(1, |V_i^{(k+1)}|) < tol  or  P^{(k+1)} = P^{(k)}:
            quit
    end for
    V^{n+1} = V^{(k+1)}

Output: V^{n+1}.
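One time step of this iteration can be sketched as follows (pure Python; the tridiagonal M is supplied as three arrays, and the convergence test uses only the relative-change criterion). One simplifying assumption: boundary rows are treated as identity rows, whereas the slides instead modify the first and last right-hand-side entries to impose the boundary conditions.

```python
def penalty_step(lower, diag, upper, v_n, g_next, dtau, theta, rho,
                 tol=1e-8, max_iter=50):
    """Penalty American Constraint Iteration for one time step:
    (I + (1-theta) dtau M + P^{(k)}) V^{(k+1)} = (I - theta dtau M) V^n + P^{(k)} g^{n+1},
    with P diagonal, P_ii = rho where V_i^{(k)} < g_i^{n+1}."""
    n = len(v_n)

    def mat_vec(v):                          # y = M v (boundary rows give 0)
        y = [0.0] * n
        for i in range(1, n - 1):
            y[i] = lower[i] * v[i - 1] + diag[i] * v[i] + upper[i] * v[i + 1]
        return y

    def thomas(a, b, c, d):                  # tridiagonal solve (Thomas algorithm)
        b, d = b[:], d[:]
        for i in range(1, n):
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = [0.0] * n
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x

    mv = mat_vec(v_n)
    rhs0 = [v_n[i] - theta * dtau * mv[i] for i in range(n)]   # (I - theta dtau M) V^n
    v = v_n[:]
    for _ in range(max_iter):
        p = [rho if v[i] < g_next[i] else 0.0 for i in range(n)]
        a = [(1 - theta) * dtau * lower[i] for i in range(n)]
        b = [1.0 + (1 - theta) * dtau * diag[i] + p[i] for i in range(n)]
        c = [(1 - theta) * dtau * upper[i] for i in range(n)]
        d = [rhs0[i] + p[i] * g_next[i] for i in range(n)]
        v_new = thomas(a, b, c, d)
        converged = max(abs(v_new[i] - v[i]) / max(1.0, abs(v_new[i]))
                        for i in range(n)) < tol
        v = v_new
        if converged:
            break
    return v
```

When the penalty is active at a node, the solve pushes V_i up to within O(1/ρ) of g_i, which is exactly the obstacle condition of the LCP.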
Convergence of iterations

Theorem. Let γ_{ij} + β_{ij} ≥ 0. Then:

    - the nonlinear iteration converges to the unique solution of the numerical algorithm for the penalized problem, for any initial iterate V^{(0)};
    - the iterates converge monotonically, i.e., V^{(k+1)} ≥ V^{(k)} for k ≥ 1;
    - the iteration has finite termination, i.e., for an iterate sufficiently close to the solution of the penalized problem, convergence is obtained in one step.
Size of ρ

In theory, if we are taking the limit as δs, δτ → 0, then we should have

    ρ = O( 1 / min((δs)², (δτ)²) ).

This means that the error in the penalized formulation tends to zero at the same rate as the discretization error. In practice, however, it seems easier to specify the value of ρ in terms of the required accuracy; then we should take

    ρ ≈ 1/tol.
Speeding up convergence of the iterates

Although the simple iterates converge to the solution of the nonlinear problem, their speed of convergence is rather slow. To make the convergence more rapid we can use Newton iterates. This requires writing the nonlinear equation in the form F(x) = 0 and applying the iterative procedure

    x_{k+1} = x_k − F'(x_k)^{−1} F(x_k),

where F'(x) is the Jacobian of F. In the penalty method algorithm the only nonlinear term which requires differentiation in order to obtain F' is the penalty term. Unfortunately, this term is discontinuous. Good convergence can be obtained when we define the derivative of the penalty term as

    ∂/∂V_i^{n+1} [ P_i^{n+1} (g_i^{n+1} − V_i^{n+1}) ] = −ρ  for V_i^{n+1} < g_i^{n+1}
                                                        = 0   otherwise