Lecture 21: Numerical methods for pricing American type derivatives

Xiaoguang Wang
STAT 598W
April 10th, 2014

Outline

1 Finite Difference Method
  Explicit Method
  Penalty Method

American derivatives

Definition. An American type instrument with maturity $T$ and payoff function $f$ is a contingent claim that can be exercised at any moment up to $T$. Its payoff at time $t$ equals $f(S_t, t)$.

Theorem. The price of an American claim at time $t$ can be written as $V(S_t, t)$ for some function $V : (0, \infty) \times [0, T] \to \mathbb{R}$.

Free-boundary problem

Recall that the calculation of the price of an American type derivative can be summarized as a free-boundary problem:

$$V(s,t) \ge f(s,t)$$
$$\left(\frac{\partial}{\partial t} + \mathcal{A}\right) V(s,t) \le 0$$
$$V(s,t) = f(s,t) \quad \text{or} \quad \left(\frac{\partial}{\partial t} + \mathcal{A}\right) V(s,t) = 0$$

together with boundary conditions, where $\mathcal{A}$ is the Ito operator we defined before (in Lecture 3). The free boundary is the set of points where $V(s,t) = f(s,t)$. At these points the system is not governed by the equation with partial derivatives.

Boundary conditions for American Put

Boundary conditions for the American put option:

Terminal condition: $V(s, T) = (K - s)^+$

Left-boundary condition: $\lim_{s \to 0} V(s, t) = K$

Right-boundary condition: $\lim_{s \to \infty} V(s, t) = 0$

Complete free-boundary problem

For $s > 0$, $t \in [0, T]$:

$$V(s,t) \ge f(s,t)$$
$$\left(\frac{\partial}{\partial t} + \mathcal{A}\right) V(s,t) \le 0$$
$$V(s,t) = f(s,t) \quad \text{or} \quad \left(\frac{\partial}{\partial t} + \mathcal{A}\right) V(s,t) = 0$$
$$V(s,T) = (K - s)^+$$
$$\lim_{s \to 0} V(s,t) = K, \qquad \lim_{s \to \infty} V(s,t) = 0$$

Finite Difference Method: Explicit Method

Transformation

We compute the price of an American put option by the explicit method (why?). First we make the following change of variables (the same as for the Black-Scholes PDE):

$$x := \log s + \left(r - \tfrac{1}{2}\sigma^2\right)(T - t), \qquad \tau := \frac{\sigma^2 (T - t)}{2}$$
$$y(x, \tau) := e^{-r(T - 2\tau/\sigma^2)}\, V\!\left(e^{x - (2r/\sigma^2 - 1)\tau},\; T - \tfrac{2\tau}{\sigma^2}\right)$$

where $x \in \mathbb{R}$ and $\tau \in [0, \sigma^2 T / 2]$.
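
A brief check, not on the original slides: writing $t = T - 2\tau/\sigma^2$ and $V(s,t) = e^{rt}\, y(x,\tau)$ and substituting into the Black-Scholes operator gives

$$\frac{\partial V}{\partial t} + rs\,\frac{\partial V}{\partial s} + \tfrac{1}{2}\sigma^2 s^2 \frac{\partial^2 V}{\partial s^2} - rV = \frac{\sigma^2}{2}\, e^{rt}\left(\frac{\partial^2 y}{\partial x^2} - \frac{\partial y}{\partial \tau}\right),$$

so the inequality $(\partial_t + \mathcal{A})V \le 0$ turns into $\partial y/\partial\tau - \partial^2 y/\partial x^2 \ge 0$, which is the form used on the next slide.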

Transformation of the problem

$$y(x,\tau) \ge e^{-r(T - 2\tau/\sigma^2)}\left(K - e^{x - (2r/\sigma^2 - 1)\tau}\right)^+$$
$$\frac{\partial y}{\partial \tau}(x,\tau) - \frac{\partial^2 y}{\partial x^2}(x,\tau) \ge 0$$
$$y(x,\tau) = e^{-r(T - 2\tau/\sigma^2)}\left(K - e^{x - (2r/\sigma^2 - 1)\tau}\right)^+ \quad \text{or} \quad \frac{\partial y}{\partial \tau}(x,\tau) - \frac{\partial^2 y}{\partial x^2}(x,\tau) = 0$$
$$y(x, 0) = e^{-rT}\left(K - e^{x}\right)^+$$
$$\lim_{x \to -\infty} y(x,\tau) = K e^{-r(T - 2\tau/\sigma^2)}, \qquad \lim_{x \to +\infty} y(x,\tau) = 0$$

Discretization

Grid:

discretization of time $\tau$: $\delta\tau = \frac{\sigma^2 T}{2M}$, $\tau_j = j\,\delta\tau$ for $j = 0, 1, \dots, M$

discretization of space $x \in [x_{-N}, x_N]$: $\delta x = \frac{x_N - x_{-N}}{2N}$, $x_i = x_{-N} + (N + i)\,\delta x$ for $i = -N, \dots, N$

$w_{i,j}$ denotes the approximation of $y(x_i, \tau_j)$.

One step

In the node $(x_i, \tau_{j+1})$, the expression

$$u_1 = \alpha w_{i-1,j} + (1 - 2\alpha) w_{i,j} + \alpha w_{i+1,j}, \qquad \alpha = \frac{\delta\tau}{\delta x^2},$$

approximates the value of $y(x_i, \tau_{j+1})$ in the case of no exercise. If this value is smaller than the payoff from the early exercise

$$u_2 = e^{-r(T - 2\tau_{j+1}/\sigma^2)}\left(K - e^{x_i - (2r/\sigma^2 - 1)\tau_{j+1}}\right)^+$$

then it is optimal to exercise immediately and $w_{i,j+1} = u_2$. This is written concisely as

$$w_{i,j+1} = \max\left\{\alpha w_{i-1,j} + (1 - 2\alpha) w_{i,j} + \alpha w_{i+1,j},\; e^{-r(T - 2\tau_{j+1}/\sigma^2)}\left(K - e^{x_i - (2r/\sigma^2 - 1)\tau_{j+1}}\right)^+\right\} \quad (1)$$

Algorithm for an American Put option

Input: $x_{-N}$, $x_N$, $M$, $N$, $K$, $T$ and the parameters of the model
$\delta\tau = \frac{\sigma^2 T}{2M}$, $\delta x = \frac{x_N - x_{-N}}{2N}$
Calculate $\tau_j$ for $j = 0, 1, \dots, M$ and $x_i$ for $i = -N, \dots, N$.
for $i = -N, \dots, N$ do
    $w_{i,0} = e^{-rT}(K - e^{x_i})^+$
end for
for $j = 0, 1, \dots, M-1$ do
    $w_{-N,j+1} = K e^{-r(T - 2\tau_{j+1}/\sigma^2)}$
    $w_{N,j+1} = 0$
    for $i = -N+1, \dots, N-1$ do
        $u_1 = \alpha w_{i-1,j} + (1 - 2\alpha) w_{i,j} + \alpha w_{i+1,j}$
        $u_2 = e^{-r(T - 2\tau_{j+1}/\sigma^2)}\left(K - e^{x_i - (2r/\sigma^2 - 1)\tau_{j+1}}\right)^+$
        $w_{i,j+1} = \max\{u_1, u_2\}$
    end for
end for
Output: $w_{i,j}$ for $i = -N, \dots, N$ and $j = 0, 1, \dots, M$
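
A minimal Python sketch of this algorithm (not from the slides; the function and variable names are ours, and x_min, x_max stand for $x_{-N}$, $x_N$):

```python
import numpy as np

def american_put_explicit(K, T, r, sigma, x_min, x_max, N, M):
    """Explicit finite-difference sketch for the transformed American put
    problem above.  The grid has 2N+1 space points and M+1 time levels."""
    dtau = sigma**2 * T / (2 * M)
    dx = (x_max - x_min) / (2 * N)
    alpha = dtau / dx**2                      # explicit scheme needs alpha <= 1/2
    x = x_min + dx * np.arange(2 * N + 1)     # x_{-N}, ..., x_N
    tau = dtau * np.arange(M + 1)

    def exercise_value(tau_j):
        # transformed payoff e^{-r(T - 2 tau/sigma^2)} (K - e^{x - (2r/sigma^2 - 1) tau})^+
        disc = np.exp(-r * (T - 2 * tau_j / sigma**2))
        return disc * np.maximum(K - np.exp(x - (2 * r / sigma**2 - 1) * tau_j), 0.0)

    w = exercise_value(0.0)                   # w_{i,0} = e^{-rT}(K - e^{x_i})^+
    for j in range(M):
        u1 = w.copy()
        u1[1:-1] = alpha * w[:-2] + (1 - 2 * alpha) * w[1:-1] + alpha * w[2:]
        w = np.maximum(u1, exercise_value(tau[j + 1]))            # early-exercise check
        w[0] = K * np.exp(-r * (T - 2 * tau[j + 1] / sigma**2))   # left boundary
        w[-1] = 0.0                                               # right boundary
    # undo the change of variables at tau = sigma^2 T / 2, i.e. t = 0, where V = y
    s = np.exp(x - (2 * r / sigma**2 - 1) * tau[-1])
    return s, w
```

For instance, with $K = 100$, $r = 0.05$, $\sigma = 0.2$, $T = 1$, x_min = log(K) - 2, x_max = log(K) + 2, N = 200 and M = 4000 one gets $\alpha \approx 0.05 \le 1/2$, the standard stability condition for the explicit heat-equation step.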

General American instrument

For a general American instrument we return to the original variables, only making the change of time $t \mapsto \tau = T - t$. A general American instrument is characterized by a pay-off function $g(s, \tau)$. The free-boundary problem for this instrument is given by

$$(V(s,\tau) - g(s,\tau))\left(\frac{\partial}{\partial \tau} - \mathcal{A}\right) V(s,\tau) = 0$$
$$\left(\frac{\partial}{\partial \tau} - \mathcal{A}\right) V(s,\tau) \ge 0$$
$$V(s,\tau) - g(s,\tau) \ge 0$$
$$V(s, 0) = g(s, 0)$$
$$\lim_{s \to 0} V(s,\tau) = \lim_{s \to 0} g(s,\tau), \qquad \lim_{s \to \infty} V(s,\tau) = \lim_{s \to \infty} g(s,\tau)$$

This problem is also called the linear complementarity problem for the American instrument defined by a pay-off function $g(s,\tau)$.

Finite Difference Method: Penalty Method

Penalty method

There are many numerical methods which solve the linear complementarity problem (LCP). We present here the method called the penalty method. This method is simple and efficient (this is particularly visible for more complicated instruments like barrier options). On the other hand, the method is only first order (slow convergence). The basic idea of the penalty method is simple. We replace the linear complementarity problem by the nonlinear PDE

$$\frac{\partial V(s,\tau)}{\partial \tau} = rs\,\frac{\partial V(s,\tau)}{\partial s} + \frac{1}{2}\sigma^2 s^2 \frac{\partial^2 V(s,\tau)}{\partial s^2} - rV(s,\tau) + \rho\,(g(s,\tau) - V(s,\tau))^+$$

where, in the limit as the positive penalty parameter $\rho \to \infty$, the solution satisfies $V - g \ge 0$.

Finite Difference approximation

We use the same grid as for the Black-Scholes equation, with $V_i^n$ denoting an approximation to $V(s_i, \tau^n)$ and $g_i^n$ an approximation to $g(s_i, \tau^n)$. The nonlinear PDE for the penalty method becomes in the discrete version

$$V_i^{n+1} - V_i^n = (1-\theta)\left(\Delta\tau \sum_{j = i \pm 1} (\gamma_{ij} + \beta_{ij})(V_j^{n+1} - V_i^{n+1}) - r\,\Delta\tau\, V_i^{n+1}\right) + \theta\left(\Delta\tau \sum_{j = i \pm 1} (\gamma_{ij} + \beta_{ij})(V_j^{n} - V_i^{n}) - r\,\Delta\tau\, V_i^{n}\right) + P_i^{n+1}\,(g_i^{n+1} - V_i^{n+1})$$

where the choice of $\theta$ gives the implicit ($\theta = 0$) and the Crank-Nicolson ($\theta = 1/2$) scheme.

Finite Difference approximation - cont.

Coefficients from the previous slide are as follows:

$$P_i^{n+1} = \begin{cases} \rho & \text{for } V_i^{n+1} < g_i^{n+1} \\ 0 & \text{otherwise} \end{cases} \quad (2)$$

$$\gamma_{ij} = \frac{\sigma^2 s_i^2}{|s_i - s_j|\,(s_{i+1} - s_{i-1})}$$

$$\beta_{ij} = \begin{cases} \dfrac{r s_i (j - i)}{s_{i+1} - s_{i-1}} & \text{for } \dfrac{\sigma^2 s_i}{|s_j - s_i|} + r(j - i) > 0 \\[2mm] \left(\dfrac{2 r s_i (j - i)}{s_{i+1} - s_{i-1}}\right)^+ & \text{otherwise} \end{cases} \quad (3)$$

where $j = i \pm 1$ and $\rho$ is a penalty factor (a large positive number).

Finite Difference approximation - cont.

The numerical algorithm can be written in the concise form

$$\left(I + (1-\theta)\,\Delta\tau\, M + P(V^{n+1})\right) V^{n+1} = \left(I - \theta\,\Delta\tau\, M\right) V^{n} + P(V^{n+1})\, g^{n+1}$$

where $V^n$ is a vector with entries $V_i^n$ and $g^n$ a vector with entries $g_i^n$,

$$[M V^n]_i = -\sum_{j = i \pm 1} (\gamma_{ij} + \beta_{ij})(V_j^n - V_i^n) + r V_i^n$$

and $P(V^n)$ is a diagonal matrix with entries

$$[P(V^n)]_{ii} = \begin{cases} \rho & \text{for } V_i^{n} < g_i^{n} \\ 0 & \text{otherwise} \end{cases} \quad (4)$$

Finite Difference Approximation - cont.

Matrix $M$ has the property of strict diagonal dominance. It has a positive diagonal and non-positive off-diagonals, with the diagonal entries strictly dominating the sum of absolute values of the off-diagonal entries. This property of $M$ is essential for the convergence of the method and is visible from the structure of the upper left corner of the matrix:

$$M = \begin{pmatrix} r + \gamma_{12} + \beta_{12} & -\gamma_{12} - \beta_{12} & 0 & \cdots \\ -\gamma_{21} - \beta_{21} & r + \gamma_{21} + \beta_{21} + \gamma_{23} + \beta_{23} & -\gamma_{23} - \beta_{23} & \cdots \\ \vdots & & \ddots & \end{pmatrix}$$

Note that in the vector on the right hand side, $(I - \theta\,\Delta\tau\, M)V^n$, the first and the last elements have to be modified to take into account the boundary conditions.
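
A sketch of how $M$ and the diagonal penalty matrix could be assembled in Python, following the coefficient choice (2)-(3) as reconstructed above; build_M, penalty_matrix and their arguments are our own names, and the first and last rows are left with only $r$ on the diagonal for the caller to overwrite with the boundary conditions:

```python
import numpy as np
import scipy.sparse as sp

def build_M(s, r, sigma):
    """Tridiagonal matrix M on a (possibly non-uniform) grid s, using the
    central/one-sided choice for beta sketched above."""
    n = len(s)
    lower = np.zeros(n)   # coefficient of V_{i-1} in row i: -(gamma + beta)
    upper = np.zeros(n)   # coefficient of V_{i+1} in row i: -(gamma + beta)
    diag = np.full(n, float(r))
    for i in range(1, n - 1):
        h = s[i + 1] - s[i - 1]
        for j in (i - 1, i + 1):
            gamma = sigma**2 * s[i]**2 / (abs(s[i] - s[j]) * h)
            if sigma**2 * s[i] / abs(s[j] - s[i]) + r * (j - i) > 0:
                beta = r * s[i] * (j - i) / h                 # central differencing
            else:
                beta = max(2 * r * s[i] * (j - i) / h, 0.0)   # one-sided fallback
            diag[i] += gamma + beta
            if j < i:
                lower[i] = -(gamma + beta)
            else:
                upper[i] = -(gamma + beta)
    return sp.diags([lower[1:], diag, upper[:-1]], offsets=[-1, 0, 1], format="csr")

def penalty_matrix(V, g, rho):
    """Diagonal matrix P(V) from (4): rho where V_i < g_i, zero otherwise."""
    return sp.diags(np.where(V < g, rho, 0.0))
```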

Convergence

Theorem. Let us assume that

$$\gamma_{ij} + \beta_{ij} \ge 0$$
$$2 - \theta\left(\Delta\tau \sum_{j = i \pm 1}(\gamma_{ij} + \beta_{ij}) + r\,\Delta\tau\right) \ge 0$$
$$\frac{\Delta\tau}{\Delta s} < \text{const} \quad \text{as } \Delta\tau, \Delta s \to 0$$

where $\Delta s = \min_i (s_{i+1} - s_i)$.

Convergence - cont.

Theorem. Then the numerical scheme for the LCP from the previous slides solves

$$\frac{\partial V(s,\tau)}{\partial \tau} - rs\,\frac{\partial V(s,\tau)}{\partial s} - \frac{1}{2}\sigma^2 s^2 \frac{\partial^2 V(s,\tau)}{\partial s^2} + rV(s,\tau) \ge 0$$
$$V^{n+1} - g^{n+1} \ge -\frac{C}{\rho}, \qquad C > 0$$
$$\left(\frac{\partial V(s,\tau)}{\partial \tau} - rs\,\frac{\partial V(s,\tau)}{\partial s} - \frac{1}{2}\sigma^2 s^2 \frac{\partial^2 V(s,\tau)}{\partial s^2} + rV(s,\tau)\right)\left(V^{n+1} - g^{n+1} + \frac{C}{\rho}\right) = 0$$

where $C$ is independent of $\rho$, $\Delta\tau$, $\Delta s$.

Iterative solution

Since we get a nonlinear equation for $V^{n+1}$, it has to be solved by iterations. We shall use here the simple iteration method. Let $(V^{n+1})^{(k)}$ be the $k$-th estimate for $V^{n+1}$. For notational convenience, we write

$$V^{(k)} = (V^{n+1})^{(k)} \qquad \text{and} \qquad P^{(k)} = P\!\left((V^{n+1})^{(k)}\right).$$

If $V^{(0)} = V^n$, then we have the following algorithm of Penalty American Constraint Iteration.

Algorithm

Input: $V^n$, tolerance tol.
$V^{(0)} = V^n$
for $k = 0, 1, \dots$ until convergence do
    Solve $(I + (1-\theta)\,\Delta\tau\, M + P^{(k)})\, V^{(k+1)} = (I - \theta\,\Delta\tau\, M)\, V^{n} + P^{(k)} g^{n+1}$
    If $\max_i \dfrac{|V_i^{(k+1)} - V_i^{(k)}|}{\max(1, |V_i^{(k+1)}|)} < \text{tol}$ or $P^{(k+1)} = P^{(k)}$, quit
end for
$V^{n+1} = V^{(k+1)}$
Output: $V^{n+1}$
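
A minimal Python sketch of this constraint iteration for one time step (not from the slides; penalty_step is our own name, M is the tridiagonal matrix of the scheme, e.g. assembled as in the earlier sketch, with boundary rows already adjusted, and g_next holds the entries of $g^{n+1}$):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def penalty_step(M, V_n, g_next, dtau, theta, rho, tol=1e-6, max_iter=100):
    """Advance one time level by the simple (constraint) iteration: solve
    (I + (1-theta) dtau M + P^{(k)}) V^{(k+1)} = (I - theta dtau M) V^n + P^{(k)} g^{n+1}
    until the relative update is below tol or the penalty pattern stops changing."""
    n = len(V_n)
    I = sp.identity(n, format="csr")
    rhs_base = (I - theta * dtau * M) @ V_n
    V = V_n.copy()                                  # V^{(0)} = V^n
    P = np.where(V < g_next, rho, 0.0)              # diagonal of P^{(0)}
    for _ in range(max_iter):
        A = I + (1 - theta) * dtau * M + sp.diags(P)
        V_new = spla.spsolve(sp.csr_matrix(A), rhs_base + P * g_next)
        P_new = np.where(V_new < g_next, rho, 0.0)
        converged = (np.max(np.abs(V_new - V) / np.maximum(1.0, np.abs(V_new))) < tol
                     or np.array_equal(P_new, P))
        V, P = V_new, P_new
        if converged:
            break
    return V
```

Starting from $V^0 = g(\cdot, 0)$ and calling penalty_step repeatedly reproduces the time loop; $\rho$ can be chosen as discussed below (roughly $1/\text{tol}$).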

Convergence of iterations

Theorem. Let $\gamma_{ij} + \beta_{ij} \ge 0$. Then:

the nonlinear iteration converges to the unique solution to the numerical algorithm of the penalized problem for any initial iterate $V^{(0)}$;

the iterates converge monotonically, i.e., $V^{(k+1)} \ge V^{(k)}$ for $k \ge 1$;

the iteration has finite termination; i.e., for an iterate sufficiently close to the solution of the penalized problem, convergence is obtained in one step.

Size of ρ

In theory, if we are taking the limit as $\Delta s, \Delta\tau \to 0$, then we should have

$$\rho = O\!\left(\frac{1}{\min\left((\Delta s)^2, (\Delta\tau)^2\right)}\right)$$

This means that any error in the penalized formulation tends to zero at the same rate as the discretization error. However, in practice it seems easier to specify the value of $\rho$ in terms of the required accuracy. Then we should take

$$\rho \approx \frac{1}{\text{tol}}$$

Speed up iterates convergence

Although the simple iterates converge to the solution of the nonlinear problem, their speed of convergence is rather slow. To make the convergence more rapid we can use the Newton iterates. This requires writing the nonlinear equation in the form $F(x) = 0$ and solving by the iterative procedure

$$x^{k+1} = x^k - F'(x^k)^{-1} F(x^k)$$

where $F'(x)$ is the Jacobian of $F$. In the penalty method algorithm the only nonlinear term which requires differentiation in order to obtain $F'$ is $P_i^{n+1}$. Unfortunately, this term is discontinuous. A good convergence can be obtained when we define the derivative of the penalty term as

$$\frac{\partial}{\partial V_i^{n+1}}\left[P_i^{n+1}\,(g_i^{n+1} - V_i^{n+1})\right] = \begin{cases} -\rho, & \text{for } V_i^{n+1} < g_i^{n+1} \\ 0, & \text{otherwise} \end{cases}$$
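
With that convention the Newton update for the discrete penalty equation takes a concrete form (a sketch, not stated on the slides): writing

$$F(V^{n+1}) = \left(I + (1-\theta)\Delta\tau M\right)V^{n+1} - \left(I - \theta\Delta\tau M\right)V^{n} - P(V^{n+1})\left(g^{n+1} - V^{n+1}\right),$$

the defined derivative of the penalty term gives the Jacobian $F'(V) = I + (1-\theta)\Delta\tau M + P(V)$, so each Newton step solves

$$\left(I + (1-\theta)\Delta\tau M + P(V^{(k)})\right)\left(V^{(k+1)} - V^{(k)}\right) = -F(V^{(k)}).$$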