An Introduction to Correlation Stress Testing


Defeng Sun
Department of Mathematics and Risk Management Institute, National University of Singapore

Based on joint work with GAO Yan at NUS.

March 05, 2010

The model

These problems can be modeled in the following way:

$$
\begin{array}{rll}
\min & \|H \circ (X - G)\|_F & \\
\text{s.t.} & X_{ii} = 1, & i = 1, \dots, n, \\
& X_{ij} = e_{ij}, & (i,j) \in B_e, \\
& X_{ij} \ge l_{ij}, & (i,j) \in B_l, \\
& X_{ij} \le u_{ij}, & (i,j) \in B_u, \\
& X \in S^n_+,
\end{array} \tag{1}
$$

where $B_e$, $B_l$, and $B_u$ are three index subsets of $\{(i,j) \mid 1 \le i < j \le n\}$ satisfying $B_e \cap B_l = \emptyset$, $B_e \cap B_u = \emptyset$, and $l_{ij} < u_{ij}$ for any $(i,j) \in B_l \cap B_u$.

continued

Here $S^n$ and $S^n_+$ are, respectively, the space of $n \times n$ symmetric matrices and the cone of positive semidefinite matrices in $S^n$; $\|\cdot\|_F$ is the Frobenius norm on $S^n$; and "$\circ$" denotes the Hadamard (entrywise) product.

$H \ge 0$ is a weight matrix: $H_{ij}$ is larger if $G_{ij}$ is better estimated, and $H_{ij} = 0$ if $G_{ij}$ is missing.

A matrix $X \in S^n$ is called a correlation matrix if $X \succeq 0$ (i.e., $X \in S^n_+$) and $X_{ii} = 1$, $i = 1, \dots, n$.

A simple correlation matrix model

$$
\begin{array}{rl}
\min & \|H \circ (X - G)\|_F \\
\text{s.t.} & X_{ii} = 1, \; i = 1, \dots, n, \\
& X \succeq 0.
\end{array} \tag{2}
$$

The simplest correlation matrix model

$$
\begin{array}{rl}
\min & \|X - G\|_F \\
\text{s.t.} & X_{ii} = 1, \; i = 1, \dots, n, \\
& X \succeq 0.
\end{array} \tag{3}
$$

In finance and statistics, estimated correlation matrices are in many situations found to be inconsistent, i.e., $X \not\succeq 0$. These situations include, but are not limited to:

- structured statistical estimation, where data come from different time frequencies;
- stress testing regulated by Basel II;
- expert opinions in reinsurance; and so on.

One correlation matrix

Partial market data¹:

$$
G = \begin{bmatrix}
1.0000 & 0.9872 & 0.9485 & 0.9216 & 0.0485 & 0.0424 \\
0.9872 & 1.0000 & 0.9551 & 0.9272 & 0.0754 & 0.0612 \\
0.9485 & 0.9551 & 1.0000 & 0.9583 & 0.0688 & 0.0536 \\
0.9216 & 0.9272 & 0.9583 & 1.0000 & 0.1354 & 0.1229 \\
0.0485 & 0.0754 & 0.0688 & 0.1354 & 1.0000 & 0.9869 \\
0.0424 & 0.0612 & 0.0536 & 0.1229 & 0.9869 & 1.0000
\end{bmatrix}
$$

The eigenvalues of $G$ are: 0.0087, 0.0162, 0.0347, 0.1000, 1.9669, and 3.8736, so $G$ is a genuine correlation matrix.

¹ RiskMetrics (www.riskmetrics.com/stddownload edu.html)

Stress tested

Let us change $G$: set $G(1,6) = G(6,1)$ from 0.0424 to 0.1000:

$$
G = \begin{bmatrix}
1.0000 & 0.9872 & 0.9485 & 0.9216 & 0.0485 & 0.1000 \\
0.9872 & 1.0000 & 0.9551 & 0.9272 & 0.0754 & 0.0612 \\
0.9485 & 0.9551 & 1.0000 & 0.9583 & 0.0688 & 0.0536 \\
0.9216 & 0.9272 & 0.9583 & 1.0000 & 0.1354 & 0.1229 \\
0.0485 & 0.0754 & 0.0688 & 0.1354 & 1.0000 & 0.9869 \\
0.1000 & 0.0612 & 0.0536 & 0.1229 & 0.9869 & 1.0000
\end{bmatrix}
$$

The eigenvalues of the stressed $G$ are: $-0.0216$, 0.0305, 0.0441, 0.1078, 1.9609, and 3.8783 (the trace must still equal 6, which pins down the sign of the smallest eigenvalue). The stressed $G$ is no longer positive semidefinite, hence no longer a correlation matrix.
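A check of this kind is easy to script. Below is a minimal NumPy sketch (the function name `check_correlation` is ours, not from the talk); applied to the stressed matrix above, it should report a negative smallest eigenvalue.

```python
import numpy as np

def check_correlation(A, name="A"):
    """Check the three defining properties of a correlation matrix:
    symmetry, unit diagonal, and positive semidefiniteness."""
    sym = np.allclose(A, A.T)
    unit_diag = np.allclose(np.diag(A), 1.0)
    lam_min = np.linalg.eigvalsh(A).min()   # smallest eigenvalue
    print(f"{name}: symmetric={sym}, unit diagonal={unit_diag}, "
          f"min eigenvalue={lam_min:.4f}")
    return sym and unit_diag and lam_min >= -1e-12
```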

Missing data

On the other hand, some correlations may not be reliable, or may even be missing:

$$
G = \begin{bmatrix}
1.0000 & 0.9872 & 0.9485 & 0.9216 & 0.0485 & * \\
0.9872 & 1.0000 & 0.9551 & 0.9272 & 0.0754 & 0.0612 \\
0.9485 & 0.9551 & 1.0000 & 0.9583 & 0.0688 & 0.0536 \\
0.9216 & 0.9272 & 0.9583 & 1.0000 & 0.1354 & 0.1229 \\
0.0485 & 0.0754 & 0.0688 & 0.1354 & 1.0000 & 0.9869 \\
* & 0.0612 & 0.0536 & 0.1229 & 0.9869 & 1.0000
\end{bmatrix},
\qquad
H = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 1 & 1 & 1
\end{bmatrix},
$$

where $*$ marks the missing entries $G_{16}$ and $G_{61}$, whose weights in $H$ are set to zero.

Let us rewrite the problem:

$$
\begin{array}{rl}
\min & \frac{1}{2}\|H \circ (X - G)\|_F^2 \\
\text{s.t.} & X_{ii} = 1, \; i = 1, \dots, n, \\
& X \succeq 0.
\end{array} \tag{4}
$$

When $H = E$, the matrix of all ones, we get

$$
\begin{array}{rl}
\min & \frac{1}{2}\|X - G\|_F^2 \\
\text{s.t.} & X_{ii} = 1, \; i = 1, \dots, n, \\
& X \succeq 0,
\end{array} \tag{5}
$$

which is known as the nearest correlation matrix (NCM) problem, a term coined by Nick Higham (2002).

The story starts

The NCM problem is a special case of the best approximation problem

$$
\begin{array}{rl}
\min & \frac{1}{2}\|x - c\|^2 \\
\text{s.t.} & Ax \in b + Q, \\
& x \in K,
\end{array}
$$

where

- $X$ is a real Euclidean space equipped with a scalar product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|$;
- $A : X \to \mathbb{R}^m$ is a bounded linear operator;
- $Q = \{0\}^p \times \mathbb{R}^q_+$ is a polyhedral convex cone, $1 \le p \le m$, $q = m - p$; and
- $K$ is a closed convex cone in $X$.

The KKT conditions

The Karush-Kuhn-Tucker conditions are

$$
\begin{array}{l}
(x + z) - c - A^* y = 0, \\
Q^* \ni y \perp Ax - b \in Q, \\
K^\circ \ni z \perp x \in K,
\end{array}
$$

where $\perp$ means orthogonality, $Q^* = \mathbb{R}^p \times \mathbb{R}^q_+$ is the dual cone of $Q$, and $K^\circ$ is the polar cone of $K$².

² $K^\circ := \{d \in X \mid \langle d, x \rangle \le 0 \ \ \forall x \in K\}$.

Equivalently,

$$
\begin{array}{l}
(x + z) - c - A^* y = 0, \\
Q^* \ni y \perp Ax - b \in Q, \\
x - \Pi_K(x + z) = 0,
\end{array}
$$

where $\Pi_K(x)$ is the unique optimal solution to

$$
\min \ \tfrac{1}{2}\|u - x\|^2 \quad \text{s.t.} \ u \in K.
$$

Consequently, by first eliminating $(x + z)$ and then $x$, we get

$$
Q^* \ni y \perp A\Pi_K(c + A^* y) - b \in Q,
$$

which is equivalent to

$$
F(y) := y - \Pi_{Q^*}\big[\,y - \big(A\Pi_K(c + A^* y) - b\big)\big] = 0, \quad y \in \mathbb{R}^m.
$$

The dual formulation

The above is nothing but the first-order optimality condition of the convex dual problem

$$
\begin{array}{rl}
\max & \theta(y) := -\dfrac{1}{2}\|\Pi_K(c + A^* y)\|^2 + \langle b, y \rangle + \dfrac{1}{2}\|c\|^2 \\[2mm]
\text{s.t.} & y \in Q^*.
\end{array}
$$

Then $F$ can be written as

$$
F(y) = y - \Pi_{Q^*}\big(y + \nabla\theta(y)\big).
$$

Now, we only need to solve

$$
F(y) = 0, \quad y \in \mathbb{R}^m.
$$

However, the difficulties are:

- $F$ is not differentiable at $y$;
- $F$ involves two metric projection operators;
- even if $F$ is differentiable at $y$, it is too costly to compute $F'(y)$.

The NCM problem

For the nearest correlation matrix problem:

- $A(X) = \mathrm{diag}(X)$, the vector consisting of the diagonal entries of $X$;
- $A^*(y) = \mathrm{Diag}(y)$, the diagonal matrix with $y$ on its diagonal;
- $b = e$, the vector of all ones in $\mathbb{R}^n$, and $K = S^n_+$.

Since there are only equality constraints here, $Q = \{0\}^n$, so $Q^* = \mathbb{R}^n$ and the outer projection disappears. Consequently, $F$ reduces to

$$
F(y) = A\Pi_{S^n_+}(G + A^* y) - b.
$$

The projector

For $n = 1$, we have

$$
x_+ := \Pi_{S^1_+}(x) = \max(0, x).
$$

Note that $x_+$ is only piecewise linear, not smooth. The function $(x_+)^2$ is continuously differentiable with

$$
\nabla\left(\tfrac{1}{2}(x_+)^2\right) = x_+,
$$

but it is not twice continuously differentiable.

The one dimensional case

[Figure: graph of the scalar projector $x_+ = \max(0, x)$.]

The multi-dimensional case

The projector for $K = S^n_+$:

[Figure: the metric projection $\Pi_K(\eta)$ of a point $\eta$ onto a convex cone $K$, drawn in coordinates $x_1, x_2, x_3$.]

Let $X \in S^n$ have the spectral decomposition

$$
X = P \Lambda P^T,
$$

where $\Lambda$ is the diagonal matrix of eigenvalues of $X$ and $P$ is a corresponding orthogonal matrix of orthonormal eigenvectors. Then

$$
X_+ := \Pi_{S^n_+}(X) = P \Lambda_+ P^T,
$$

where $\Lambda_+$ is $\Lambda$ with its negative entries replaced by zero.
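In code, this projection is a one-liner on top of an eigendecomposition. A minimal NumPy sketch (the name `proj_psd` is ours):

```python
import numpy as np

def proj_psd(X):
    """Pi_{S^n_+}(X) = P Lambda_+ P^T: eigendecompose and
    zero out the negative eigenvalues."""
    lam, P = np.linalg.eigh(X)                 # X = P diag(lam) P^T
    return (P * np.maximum(lam, 0.0)) @ P.T    # P Lambda_+ P^T
```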

We have that $\|X_+\|^2$ is continuously differentiable with

$$
\nabla\left(\tfrac{1}{2}\|X_+\|^2\right) = X_+,
$$

but it is not twice continuously differentiable. Moreover, $X_+$ is not piecewise smooth, but it is strongly semismooth³.

³ D.F. Sun and J. Sun. Semismooth matrix valued functions. Mathematics of Operations Research 27 (2002) 150-169.

A quadratically convergent Newton's method was then designed by Qi and Sun⁴. The resulting code is called CorNewton.m.

"This piece of research work is simply great and practical. I enjoyed reading your paper." (March 20, 2007, a home loan financial institution based in McLean, VA)

"It's very impressive work and I've also run the Matlab code found in Defeng's home page. It works very well." (August 31, 2007, a major investment bank based in New York City)

⁴ H.D. Qi and D.F. Sun. A quadratically convergent Newton method for computing the nearest correlation matrix. SIAM Journal on Matrix Analysis and Applications 28 (2006) 360-385.
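CorNewton.m itself is not reproduced here. For intuition only, here is a much simpler (and much slower) first-order stand-in: the fixed-point iteration $y \leftarrow y - F(y)$, i.e., unit-step gradient ascent on the dual, which is safe here because $A = \mathrm{diag}(\cdot)$ has $\|A\| = 1$, so $\nabla\theta$ is 1-Lipschitz. A sketch assuming NumPy; all names are ours:

```python
import numpy as np

def proj_psd(X):
    """Projection onto S^n_+ via eigendecomposition (as above)."""
    lam, P = np.linalg.eigh(X)
    return (P * np.maximum(lam, 0.0)) @ P.T

def ncm_dual_gradient(G, tol=1e-7, max_iter=10000):
    """Nearest correlation matrix by unit-step gradient ascent on the
    dual: a slow, illustrative stand-in for the Qi-Sun Newton method."""
    n = G.shape[0]
    y = np.zeros(n)
    for _ in range(max_iter):
        X = proj_psd(G + np.diag(y))     # Pi_K(c + A* y)
        Fy = np.diag(X) - np.ones(n)     # F(y) = A Pi_K(G + A* y) - b
        if np.linalg.norm(Fy) <= tol:
            break
        y -= Fy                          # unit step, since ||A||^2 = 1
    return proj_psd(G + np.diag(y))
```

This iteration converges only linearly; the point of the Qi-Sun method is to replace it with a semismooth Newton step on $F(y) = 0$, yielding quadratic convergence.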

Inequality constraints

If we also have lower and upper bounds on $X$, $F$ takes the form

$$
F(y) = y - \Pi_{Q^*}\big[\,y - \big(A\Pi_{S^n_+}(G + A^* y) - b\big)\big],
$$

which involves two layers of projections onto convex cones. A quadratically convergent smoothing Newton method was designed by Gao and Sun⁵. Again, it is highly efficient.

⁵ Y. Gao and D.F. Sun. Calibrating least squares covariance matrix problems with equality and inequality constraints. SIAM Journal on Matrix Analysis and Applications 31 (2009) 1432-1457.

Back to the original problem

$$
\begin{array}{rl}
\min & \frac{1}{2}\|H \circ (X - G)\|_F^2 \\
\text{s.t.} & A(X) \in b + Q, \\
& X \in S^n_+.
\end{array}
$$

The Majorization Method

Let $d \in \mathbb{R}^n$ be a positive vector such that $H \circ H \le d d^T$ entrywise; for example, $d = \max_{i,j}(H_{ij})\, e$. Let $D^{1/2} = \mathrm{diag}(d_1^{1/2}, \dots, d_n^{1/2})$ and

$$
f(X) := \tfrac{1}{2}\|H \circ (X - G)\|_F^2.
$$

Then $f$ is majorized by

$$
f_k(X) := f(X^k) + \langle H \circ H \circ (X^k - G),\, X - X^k \rangle + \tfrac{1}{2}\|D^{1/2}(X - X^k)D^{1/2}\|_F^2,
$$

i.e., $f(X^k) = f_k(X^k)$ and $f(X) \le f_k(X)$ for all $X$.
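The majorization is easy to confirm numerically: since $f$ is quadratic, its Taylor expansion at $X^k$ is exact, and $H_{ij}^2 \le d_i d_j$ makes the diagonal quadratic term dominate $\frac{1}{2}\|H \circ (X - X^k)\|_F^2$. A small NumPy check on random symmetric data (all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
sym = lambda M: (M + M.T) / 2

H  = sym(rng.uniform(0.1, 2.0, (n, n)))   # positive weight matrix
G  = sym(rng.standard_normal((n, n)))
Xk = sym(rng.standard_normal((n, n)))     # current iterate X^k
X  = sym(rng.standard_normal((n, n)))     # arbitrary test point

d = H.max() * np.ones(n)                  # d = max(H_ij) e, so (H∘H)_ij <= d_i d_j
W = np.outer(np.sqrt(d), np.sqrt(d))      # entrywise weights of D^{1/2} (.) D^{1/2}

def f(Y):
    return 0.5 * np.linalg.norm(H * (Y - G), "fro") ** 2

def f_k(Y):
    return (f(Xk)
            + np.sum(H * H * (Xk - G) * (Y - Xk))              # <grad f(X^k), Y - X^k>
            + 0.5 * np.linalg.norm(W * (Y - Xk), "fro") ** 2)  # quadratic majorant

assert np.isclose(f_k(Xk), f(Xk))   # the majorant touches f at X^k
assert f(X) <= f_k(X) + 1e-9        # and dominates it everywhere
```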

The idea of majorization

The idea of the majorization method is to solve, for a given $X^k$, the subproblem

$$
\begin{array}{rl}
\min & f_k(X) \\
\text{s.t.} & A(X) \in b + Q, \; X \in S^n_+,
\end{array}
$$

which, after completing the square, is a diagonally weighted least squares correlation matrix problem

$$
\begin{array}{rl}
\min & \frac{1}{2}\|D^{1/2}(X - G^k)D^{1/2}\|_F^2 \\
\text{s.t.} & A(X) \in b + Q, \; X \in S^n_+,
\end{array}
$$

with the shifted center $G^k := X^k - D^{-1}\big(H \circ H \circ (X^k - G)\big)D^{-1}$, where $D = \mathrm{diag}(d)$.

Now, we can use the two Newton methods introduced earlier to solve the majorized subproblems! The objective decreases monotonically:

$$
f(X^{k+1}) < f(X^k) < \cdots < f(X^1).
$$

A small example: n = 4

$$
G = \begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 0.5 \\
1 & 1 & 0.5 & 1
\end{bmatrix}
$$

Suppose that $G(1,2)$ and $G(2,1)$ are missing:

$$
G = \begin{bmatrix}
1 & * & 1 & 1 \\
* & 1 & 1 & 1 \\
1 & 1 & 1 & 0.5 \\
1 & 1 & 0.5 & 1
\end{bmatrix}
$$

A small example: n = 4 (continued)

We take the weight matrix

$$
H = \begin{bmatrix}
1 & 0 & 1 & 1 \\
0 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1
\end{bmatrix}.
$$

After 4 iterations, we get

$$
X = \begin{bmatrix}
1.000 & 1.000 & 0.6894 & 0.6894 \\
1.000 & 1.000 & 0.6894 & 0.6894 \\
0.6894 & 0.6894 & 1.000 & 0.0495 \\
0.6894 & 0.6894 & 0.0495 & 1.000
\end{bmatrix}.
$$

This is the same solution as in the case with no missing data.
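One can at least verify that the reported $X$ is a genuine correlation matrix; a quick NumPy check, typing in the rounded entries from the slide:

```python
import numpy as np

X = np.array([[1.0,    1.0,    0.6894, 0.6894],
              [1.0,    1.0,    0.6894, 0.6894],
              [0.6894, 0.6894, 1.0,    0.0495],
              [0.6894, 0.6894, 0.0495, 1.0   ]])

print(np.allclose(X, X.T), np.allclose(np.diag(X), 1.0))
print(np.linalg.eigvalsh(X))   # all eigenvalues >= 0 (up to rounding)
```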

Large examples

Table 1: Numerical results for Example 1.

          CorrMajor            AugLag
  n       time    residue      time     residue
  100     0.9     2.9006e1     1.1      2.9006e1
  200     1.8     6.6451e1     3.2      6.6451e1
  500     9.7     1.8815e2     23.5     1.8815e2
  1000    51.3    4.0108e2     223.4    4.0108e2

Final remarks

- A code named CorrMajor.m can efficiently solve correlation matrix problems with all sorts of bound constraints.
- The techniques may be used to solve many other problems, e.g., low-rank matrix problems with sparsity.
- The limitation is that, due to memory constraints, it cannot solve problems for matrices larger than 5,000 by 5,000 on a PC.

End of talk

Thank you! :)