Stein's method and zero bias transformation: Application to CDO pricing


Stein's method and zero bias transformation: Application to CDO pricing
ESILV and Ecole Polytechnique
Joint work with N. El Karoui

Introduction
CDO: a portfolio credit derivative containing about 100 underlying names subject to default risk.
Key quantity to calculate: the cumulative loss L_T = Σ_{i=1}^n N_i (1 − R_i) 1_{τ_i ≤ T}, where N_i is the nominal of each name and R_i its recovery rate.
Pricing of one CDO tranche reduces to the call function E[(L_T − K)+] with attachment or detachment point K.
Motivation: a fast and precise numerical method for CDOs in the market-adopted framework.
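A minimal Monte Carlo sketch of the quantity being priced, with purely hypothetical inputs (equal nominals, flat 40% recovery, homogeneous 2% default probability, independent defaults for simplicity; the factor model introduced next supplies the dependence structure):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100              # number of names (illustrative)
nominal = 1.0 / n    # nominal N_i of each name (assumed equal)
recovery = 0.4       # recovery rate R_i (assumed flat)
p_default = 0.02     # P(tau_i <= T) (assumed homogeneous)
K = 0.03             # attachment point

# Simulate the default indicators 1{tau_i <= T} and the cumulative loss L_T
defaults = rng.random((200_000, n)) < p_default
L = (nominal * (1.0 - recovery) * defaults).sum(axis=1)

tranche_call = np.maximum(L - K, 0.0).mean()   # E[(L_T - K)+]
print(tranche_call)
```

The strike K = 3% sits well above the expected loss (2% x 60% = 1.2%), so the call value is a small out-of-the-money quantity, which is exactly the regime where crude normal approximations struggle.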

Factor models
Standard model in the market: factor model with conditionally independent defaults.
Normal factor model (Gaussian copula approach): {τ_i ≤ t} = {ρ_i Y + √(1 − ρ_i²) Ȳ_i ≤ N^{−1}(α_i(t))}, where Ȳ_i, Y ~ N(0, 1) are independent and α_i(t) is the average probability of default before t.
Given Y (not necessarily normal), L_T is a sum of independent random variables; L_T follows a binomial law for homogeneous portfolios.
Central limit theorems: Gaussian and Poisson approximations.
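The conditional default probability that makes L_T a sum of independent variables given Y can be sketched as follows; the 2% unconditional probability and 30% correlation are illustrative values, not from the talk:

```python
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf
Phi_inv = NormalDist().inv_cdf

def conditional_default_prob(alpha, rho, y):
    """P(tau_i <= t | Y = y) in the one-factor Gaussian copula model:
    Phi((N^{-1}(alpha) - rho*y) / sqrt(1 - rho^2))."""
    return Phi((Phi_inv(alpha) - rho * y) / sqrt(1.0 - rho * rho))

# Hypothetical inputs: alpha_i(t) = 2%, rho_i = 30%
p_bad = conditional_default_prob(0.02, 0.3, -2.0)   # stressed common factor
p_good = conditional_default_prob(0.02, 0.3, 2.0)   # benign common factor
print(p_bad, p_good)
```

Conditioning on a low factor value pushes the default probability well above 2%, which is how the one-factor model generates correlated losses from conditionally independent names.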

Some approximation methods
Gaussian approximation in finance: total loss of a homogeneous portfolio (Vasicek (1991)), normal approximation of the binomial distribution; option pricing in the symmetric case (Diener-Diener (2004)).
Difficulty: n ≈ 100 and small default probabilities.
Saddle-point method applied to CDOs (Martin-Thompson-Browne, Antonov-Mechkov-Misirpashaev): heavy calculation time for non-homogeneous portfolios.
Our solution: combine Gaussian and Poisson approximations and propose a corrector term in each case. Mathematical tools: Stein's method and the zero bias transformation.

Stein's method and zero bias transformation

Zero bias transformation (Goldstein-Reinert, 1997)
- X a r.v. of mean zero and variance σ².
- X* has the zero-bias distribution of X if, for all regular enough f,
  E[X f(X)] = σ² E[f′(X*)].   (1)
- The distribution of X* is unique, with density p_{X*}(x) = E[X 1_{X > x}]/σ².
Observation of Stein (1972): if Z ~ N(0, σ²), then E[Z f(Z)] = σ² E[f′(Z)], i.e. Z* = Z in distribution.

Stein's method and normal approximation
Stein's equation (1972): for the normal approximation of E[h(X)], Stein's idea is to use f_h, defined as the solution of
  h(x) − Φ_σ(h) = x f(x) − σ² f′(x),   (2)
where Φ_σ(h) = E[h(Z)] with Z ~ N(0, σ²).
Proposition (Stein): if h is absolutely continuous, then
  E[h(X)] − Φ_σ(h) = σ² E[f_h′(X*) − f_h′(X)], so that |E[h(X)] − Φ_σ(h)| ≤ σ² ‖f_h″‖ E[|X* − X|].
To estimate the approximation error, it is important to measure the distance between X* and X and the sup-norms of f_h and its derivatives.

Example and estimations
Example (asymmetric Bernoulli distribution): P(X = q) = p and P(X = −p) = q with q = 1 − p, so E[X] = 0 and Var(X) = pq. Furthermore, X* ~ U[−p, q].
If X and X* are independent, then, for any even function g,
  E[g(X* − X)] = (1/(2σ²)) E[X^s G(X^s)],
where G(x) = ∫₀^x g(t) dt, X^s = X̂ − X and X̂ is an independent copy of X. In particular,
  E[|X* − X|] = (1/(4σ²)) E[|X^s|³]   (O(1/√n)),
  E[|X* − X|^k] = (1/(2(k+1)σ²)) E[|X^s|^{k+2}]   (O(1/n^{k/2})).
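The stated Bernoulli zero-bias law X* ~ U[−p, q] can be checked exactly against the defining identity (1), here in rational arithmetic with the test functions f(x) = x² and f(x) = x³:

```python
from fractions import Fraction

# Exact check of E[X f(X)] = sigma^2 E[f'(X*)] for the asymmetric Bernoulli
# P(X = q) = p, P(X = -p) = q, whose zero-bias law is X* ~ U[-p, q].
p = Fraction(1, 10)
q = 1 - p
sigma2 = p * q                    # Var(X) = pq

def moment_X(m):
    """E[X^m] for P(X = q) = p, P(X = -p) = q."""
    return p * q**m + q * (-p)**m

def moment_star(m):
    """E[(X*)^m] for X* ~ U[-p, q] (interval length p + q = 1)."""
    return (q**(m + 1) - (-p)**(m + 1)) / (m + 1)

# f(x) = x^2: E[X f(X)] = E[X^3] and f'(x) = 2x
lhs_sq, rhs_sq = moment_X(3), sigma2 * 2 * moment_star(1)
# f(x) = x^3: E[X f(X)] = E[X^4] and f'(x) = 3x^2
lhs_cu, rhs_cu = moment_X(4), sigma2 * 3 * moment_star(2)
print(lhs_sq == rhs_sq, lhs_cu == rhs_cu)
```

Both sides reduce to pq(q − p) and pq(q³ + p³) respectively, so the identity holds exactly, not just numerically.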

Sum of independent random variables (Goldstein-Reinert, 1997)
Let W = X_1 + ... + X_n where the X_i are independent r.v. of mean zero and Var(W) = σ_W² < ∞. Then
  W* = W^(I) + X_I*, where P(I = i) = σ_i²/σ_W² and W^(i) = W − X_i.
Furthermore, W^(i), X_i, X_i* and I are mutually independent. Here W and W* are not independent.
  E[|W* − W|^k] = (1/(2(k+1)σ_W²)) Σ_{i=1}^n E[|X_i^s|^{k+2}]   (O(1/n^{k/2})).
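A Monte Carlo sketch of the Goldstein-Reinert construction for i.i.d. centered Bernoulli summands (the parameters n = 10, p = 0.1 are illustrative), checking the characterizing identity E[W g(W)] = σ_W² E[g′(W*)] for g(x) = x²:

```python
import numpy as np

rng = np.random.default_rng(1)

# W = X_1 + ... + X_n, X_i iid centered Bernoulli (q w.p. p, -p w.p. q).
# Equal variances sigma_i^2 = pq, so the index I is uniform on {1, ..., n}.
n, p, m = 10, 0.1, 400_000
q = 1.0 - p
sigma2_W = n * p * q

X = np.where(rng.random((m, n)) < p, q, -p)     # centered Bernoulli draws
W = X.sum(axis=1)

I = rng.integers(0, n, size=m)                  # uniform index I
W_minus_I = W - X[np.arange(m), I]              # W^(I) = W - X_I
X_star = rng.uniform(-p, q, size=m)             # X_I* ~ U[-p, q], independent
W_star = W_minus_I + X_star                     # zero-bias W*

lhs = np.mean(W * W**2)                         # E[W g(W)] with g(x) = x^2
rhs = sigma2_W * np.mean(2.0 * W_star)          # sigma_W^2 E[g'(W*)]
print(lhs, rhs)
```

Both sides estimate n·pq(q − p) = 0.72 here; note that W* reuses all of W except the I-th summand, which is why W and W* are close but not independent.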

Conditional expectation technique
To gain an additional order in the estimations: for example, E[X_I | X_1, ..., X_n] = Σ_{i=1}^n (σ_i²/σ_W²) X_i, so
  E[ E[X_I | X_1, ..., X_n]² ] = Σ_{i=1}^n σ_i⁶/σ_W⁴,
which is of order O(1/n²), whereas E[X_I²] is of order O(1/n). This technique is crucial for estimating E[g(X_I, X_I*)] and E[f′(W)g(X_I, X_I*)].

First order Gaussian correction
Theorem: the normal approximation Φ_{σ_W}(h) of E[h(W)] has corrector
  C_h = (1/(2σ_W⁴)) Σ_{i=1}^n E[X_i³] · Φ_{σ_W}( (x²/(3σ_W²) − 1) x h(x) ),   (3)
where h has a bounded second order derivative. The corrected approximation error is bounded:
  |E[h(W)] − Φ_{σ_W}(h) − C_h| ≤ α(h, X_1, ..., X_n),
where α(h, X_1, ..., X_n) depends on h and on the moments of the X_i up to fourth order.
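A quick exact sanity check of the corrector (3): for the odd test function h(x) = x³ the normal term Φ_{σ_W}(h) vanishes, and the corrected value should recover E[W³] = Σ E[X_i³] exactly (n = 50, p = 1/20 are arbitrary choices):

```python
from fractions import Fraction

# W = sum of n iid centered Bernoulli (q w.p. p, -p w.p. q); h(x) = x^3.
n, p = 50, Fraction(1, 20)
q = 1 - p
EX3 = p * q**3 + q * (-p)**3      # E[X_i^3] = pq(q - p)
sigma2 = n * p * q                # sigma_W^2

# Phi_{sigma_W}((x^2/(3 sigma_W^2) - 1) x h(x)) with h(x) = x^3, using the
# Gaussian moments E[Z^4] = 3 sigma^4 and E[Z^6] = 15 sigma^6.
phi_term = 15 * sigma2**3 / (3 * sigma2) - 3 * sigma2**2   # = 2 sigma_W^4

C_h = n * EX3 / (2 * sigma2**2) * phi_term   # corrector (3), homogeneous case
corrected = 0 + C_h                          # Phi_{sigma_W}(x^3) = 0
exact = n * EX3                              # third moment of the sum
print(corrected == exact)
```

The Gaussian approximation alone gives 0 for this skewness-driven quantity; the corrector restores the exact third moment, which is precisely its role.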

Remarks
For i.i.d. asymmetric Bernoulli: corrector of order O(1/√n); corrected error of order O(1/n). In the symmetric case, E[X_I*] = 0, so C_h = 0 for any h: C_h can be viewed as an asymmetry corrector. Skewness plays an important role.

Ingredients of proof
Development around the total loss W:
  E[h(W)] − Φ_{σ_W}(h) = σ_W² E[f_h″(W)(X_I* − X_I)] + σ_W² E[f_h^{(3)}(ξW* + (1 − ξ)W) ξ (W* − W)²].
By independence,
  E[f_h″(W) X_I*] = E[X_I*] E[f_h″(W)] = E[X_I*] Φ_{σ_W}(f_h″) + E[X_I*] (E[f_h″(W)] − Φ_{σ_W}(f_h″)).
Use the conditional expectation technique:
  E[f_h″(W) X_I] = Cov( f_h″(W), E[X_I | X_1, ..., X_n] ).
Write Φ_{σ_W}(f_h″) in terms of h and obtain
  C_h = σ_W² E[X_I*] Φ_{σ_W}(f_h″) = (1/σ_W²) E[X_I*] Φ_{σ_W}( (x²/(3σ_W²) − 1) x h(x) ).

Call function
The call function C_k(x) = (x − k)+ is absolutely continuous with C_k′(x) = 1_{x > k}, but has no second order derivative: the hypothesis of the theorem is not satisfied. The same corrector nevertheless applies:
  E[(W − k)+] − Φ_{σ_W}((x − k)+) − (1/3) E[X_I*] k φ_{σ_W}(k) = O(1/n).
Error bounds depend on f_{C_k}″ and x f_{C_k}″.
Ingredients of proof: write f_{C_k} as a more regular function via Stein's equation; concentration inequality (Chen-Shao, 2001).
The corrector vanishes for k = 0 and attains its extremal values at k = ±σ_W.
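A numerical illustration of the corrected call approximation for a homogeneous Bernoulli portfolio (n = 100, p = 0.1, strike one standard deviation out; all values illustrative). For i.i.d. centered Bernoulli summands, E[X_I*] = Σ E[X_i³]/(2σ_W²) = (1 − 2p)/2:

```python
from math import comb, sqrt
from statistics import NormalDist

N01 = NormalDist()

def call_exact(n, p, k):
    """E[(W - k)+] for W = Binomial(n, p) - n*p, by direct summation."""
    return sum(comb(n, b) * p**b * (1 - p)**(n - b) * max(b - n * p - k, 0.0)
               for b in range(n + 1))

def call_normal(sigma, k):
    """E[(Z - k)+] for Z ~ N(0, sigma^2), closed form."""
    return sigma * N01.pdf(k / sigma) - k * (1.0 - N01.cdf(k / sigma))

n, p = 100, 0.1
sigma = sqrt(n * p * (1 - p))
k = sigma                            # strike at one standard deviation

exact = call_exact(n, p, k)
plain = call_normal(sigma, k)
# Corrector (1/3) E[X_I*] k phi_{sigma_W}(k) with E[X_I*] = (1 - 2p)/2
phi_sigma_k = N01.pdf(k / sigma) / sigma
corrected = plain + (1.0 / 3.0) * ((1.0 - 2.0 * p) / 2.0) * k * phi_sigma_k
print(exact, plain, corrected)
```

At this strike the corrector is maximal; the corrected value lands much closer to the exact binomial expectation than the plain normal approximation does.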

Normal or Poisson?
The methodology works for other distributions. Stein's method in the Poisson case (Chen, 1975): a r.v. Λ taking nonnegative integer values follows the Poisson distribution P(λ) iff
  E[Λ g(Λ)] = λ E[g(Λ + 1)] = E[A_P g(Λ)], where A_P g(x) := λ g(x + 1).
Poisson zero bias transformation X* of X with E[X] = λ:
  E[X g(X)] = E[A_P g(X*)].
Stein's equation:
  h(x) − P_λ(h) = x g(x) − A_P g(x),   (4)
where P_λ(h) = E[h(Λ)] with Λ ~ P(λ).

Poisson correction
The Poisson corrector of P_{λ_W}(h) for E[h(W)]:
  C_h^P = (λ_W/2) P_{λ_W}(Δ²h) E[X_I* − X_I],
where Δh(x) = h(x + 1) − h(x) and P(I = i) = λ_i/λ_W.
Proof: combinatorial techniques for integer-valued r.v.
For h = (x − k)+, Δ²h(x) = 1_{x = k−1}, so
  C_h^P = ((σ_W² − λ_W)/(2(k − 1)!)) e^{−λ_W} λ_W^{k−1}.

Numerical tests: E[(W_n − k)+]
E[(W_n − k)+] as a function of n, with σ_W = 1 and k = 1 (maximal Gaussian correction), for a homogeneous portfolio. Two graphs: p = 0.1 and p = 0.01 respectively. Oscillation of the binomial curve and comparison of the different approximations.

Numerical tests: E[(W_n − k)+]
Legend: binomial (black), normal (magenta), corrected normal (green), Poisson (blue) and corrected Poisson (red). p = 0.1, n = 100.
[Figure: the five curves plotted over the same range; the binomial curve oscillates around the corrected approximations.]

Numerical tests: E[(W_n − k)+]
Legend: binomial (black), normal (magenta), corrected normal (green), Poisson (blue) and corrected Poisson (red). p = 0.01, n = 100.
[Figure: the five curves plotted over the same range; the corrected Poisson curve tracks the binomial most closely.]

Higher order corrections
The regularity of h plays an important role. The second order approximation is given by
  C(2, h) = Φ_{σ_W}(h) + C_h
            + Φ_{σ_W}( (x⁶/(18σ_W⁸) − 5x⁴/(6σ_W⁶) + 5x²/(2σ_W⁴)) (h(x) − Φ_{σ_W}(h)) ) E[X_I*]²
            + (1/2) Φ_{σ_W}( (x⁴/(4σ_W⁶) − 3x²/(2σ_W⁴)) h(x) ) (E[(X_I*)²] − E[X_I²]).   (5)
Since the call function is not regular enough, it is difficult to control the errors.

Applications to CDOs
In collaboration with David KURTZ (Calyon, London).

Normalization for the loss
Percentage loss: l_T = (1/n) Σ_{i=1}^n (1 − R_i) 1_{τ_i ≤ T}; K = 3%, 6%, 9%, 12%.
Normalization for ξ_i = 1_{τ_i ≤ T}: let X_i = (ξ_i − p_i)/√(n p_i(1 − p_i)), where p_i = P(ξ_i = 1). Then E[X_i] = 0 and Var[X_i] = 1/n.
Suppose R_i = 0 and p_i = p for simplicity, and let p(Y) = p_i(Y) = P(τ_i ≤ T | Y). Then
  E[(l_T − K)+ | Y] = √(p(Y)(1 − p(Y))/n) · E[(W − k(n, p(Y)))+ | Y],
where W = Σ_{i=1}^n X_i and k(n, p) = (K − p)√(n/(p(1 − p))).
Constraint in the Poisson case: R_i deterministic and proportional.
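The normalization above is a pure change of variables; it can be checked numerically against the binomial law for a homogeneous portfolio (illustrative n = 100, p = 2%, K = 3%):

```python
from math import comb, sqrt

# Check E[(l - K)+] = sqrt(p(1-p)/n) * E[(W - k)+] with l = B/n,
# W = (B - n p)/sqrt(n p (1-p)) and k = (K - p) sqrt(n/(p(1-p))).
n, p, K = 100, 0.02, 0.03
q = 1.0 - p

def pmf(b):
    """Binomial(n, p) probability mass at b."""
    return comb(n, b) * p**b * q**(n - b)

lhs = sum(pmf(b) * max(b / n - K, 0.0) for b in range(n + 1))

k = (K - p) * sqrt(n / (p * q))
scale = sqrt(p * q / n)
rhs = scale * sum(pmf(b) * max((b - n * p) / sqrt(n * p * q) - k, 0.0)
                  for b in range(n + 1))
print(lhs, rhs)
```

The two sides agree to machine precision, confirming that the tranche price is just a rescaled call on the standardized loss W, which is where the Gaussian and Poisson correctors are applied.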

Numerical tests: domain of validity
Conditional error of E[(l_T − K)+], as a function of K/p, with n = 100. Inhomogeneous portfolio with log-normal p_i of mean p.

Numerical tests: domain of validity
Legend: corrected Gaussian error (blue), corrected Poisson error (red); np = 5 and np = 20.
[Figure: two panels plotting the errors against K/p from 0% to 300%.]
Domain of validity: np ≈ 15. Maximal error when k is near the average loss; overlapping area of the two approximations.

Comparison with the saddle-point method
Legend: Gaussian error (red), saddle-point first order error (blue dotted), saddle-point second order error (blue); np = 5 and np = 20.
[Figure: two panels plotting the errors against K/p from 0% to 300%.]

Numerical tests: stochastic R_i
Conditional error of E[(l_T − K)+], as a function of K/p, for the Gaussian approximation with stochastic R_i.
R_i: Beta distribution with expectation 50% and standard deviation 26%, independent (Moody's). The corrector depends on the third order moment of R_i.
95% confidence interval from 10⁶ Monte Carlo simulations.

Numerical tests: stochastic R_i
The Gaussian approximation is better when p is large, as in the standard case; np = 5 and np = 20.
[Figure: two panels plotting the Gauss error against K/p together with the lower and upper 95% Monte Carlo bounds.]

Real-life CDO pricing
Parametric correlation model ρ(Y) to match the smile.
The method adapts to all conditionally independent models, not only the normal factor model.
Numerical results compared to the recursive method (Hull-White).
Calculation time largely reduced: about 1 : 200 for one price.

Real-life CDO pricing
Error of prices in bp (10⁻⁴) for the corrected Gauss-Poisson approximation with realistic local correlation. Expected loss 4%-5%.
[Figure: break-even error per tranche (0-3, 3-6, 6-9, 9-12, 12-15, 15-22 and 0-100), values roughly between −0.20 and 1.40 bp.]

Concluding remarks and perspectives
Tests for VaR and sensitivities remain satisfactory.
Perspective: remove the deterministic recovery rate condition in the Poisson case.