Numerical Methods II

Prof. Mike Giles (mike.giles@maths.ox.ac.uk)
Oxford University Mathematical Institute
MC Lecture 13

Quasi-Monte Carlo

As in lecture 6, quasi-Monte Carlo methods offer much greater accuracy for the same computational cost.

Same ingredients:
- Sobol or lattice rule quasi-uniform generators
- PCA to make best use of the QMC inputs in multi-dimensional applications
- randomised QMC to regain a confidence interval

New ingredient:
- how best to use the QMC inputs to generate Brownian increments

Quasi-Monte Carlo

In lecture 12, we expressed the expectation as a multi-dimensional integral with respect to unit Normal inputs:
$$V = E[f(\hat{S})] = \int f(\hat{S})\, \phi(Z)\, dZ$$
where $\phi(Z)$ is the multi-dimensional unit Normal p.d.f.

Putting $Z_n = \Phi^{-1}(U_n)$ turns this into an integral over the $M$-dimensional hypercube:
$$V = E[f(\hat{S})] = \int f(\hat{S})\, dU$$

Quasi-Monte Carlo

This is then approximated as
$$N^{-1} \sum_n f(\hat{S}^{(n)})$$
and each path calculation involves the computations
$$U \longrightarrow Z \longrightarrow W \longrightarrow \hat{S} \longrightarrow f$$

The key step here is the second: how best to convert the vector $Z$ into the vector $W$. With standard Monte Carlo, as long as $W$ has the correct distribution, how it is generated is irrelevant, but with QMC it does matter.
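As an illustration of this pipeline, here is a minimal sketch (not from the lecture) of the first step, $U \to Z$, using a scrambled Sobol generator and the inverse Normal CDF; the dimension M, point count N and the SciPy routines are my choices, not the lecture's.

```python
# Sketch of the U -> Z step: map Sobol' points in [0,1)^M to unit Normal
# inputs via Z_n = Phi^{-1}(U_n).  (Illustrative only; parameters assumed.)
import numpy as np
from scipy.stats import norm, qmc

M = 8          # number of timesteps = QMC dimension (assumed value)
N = 1024       # number of QMC points, a power of 2 for Sobol'

sobol = qmc.Sobol(d=M, scramble=True, seed=0)
U = sobol.random(N)        # N x M points in the unit hypercube
Z = norm.ppf(U)            # N x M unit Normal inputs, one row per path

# Each row Z[n, :] is then mapped to Brownian values W (see the following
# slides), the path S-hat is simulated, and the payoff f is averaged over n.
```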

Quasi-Monte Carlo

For a scalar Brownian motion $W(t)$ with $W(0) = 0$, defining $W_n = W(nh)$, each $W_n$ is Normally distributed, and for $j \geq k$
$$E[W_j W_k] = E[W_k^2] + E[(W_j - W_k)\, W_k] = t_k$$
since $W_j - W_k$ is independent of $W_k$. Hence, the covariance matrix for $W$ is $\Omega$ with elements
$$\Omega_{j,k} = \min(t_j, t_k)$$

Quasi-Monte Carlo

The task now is to find a matrix $L$ such that
$$L L^T = \Omega = h \begin{pmatrix}
1 & 1 & \dots & 1 & 1 \\
1 & 2 & \dots & 2 & 2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 2 & \dots & M\!-\!1 & M\!-\!1 \\
1 & 2 & \dots & M\!-\!1 & M
\end{pmatrix}$$

We will consider 3 possibilities:
- Cholesky factorisation
- PCA
- Brownian Bridge treatment

Cholesky factorisation

The Cholesky factorisation gives
$$L = \sqrt{h} \begin{pmatrix}
1 & 0 & \dots & 0 & 0 \\
1 & 1 & \dots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 1 & \dots & 1 & 0 \\
1 & 1 & \dots & 1 & 1
\end{pmatrix}$$
and hence
$$W_n = \sqrt{h} \sum_{m=1}^{n} Z_m \quad\Longrightarrow\quad \Delta W_n = W_n - W_{n-1} = \sqrt{h}\, Z_n$$
i.e. standard MC approach.
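A minimal sketch (mine, not the lecture's code) that builds $\Omega$ with $\Omega_{j,k} = \min(t_j, t_k)$, checks that its Cholesky factor is $\sqrt{h}$ times a lower-triangular matrix of ones, and confirms that $LZ$ is just a cumulative sum of $\sqrt{h}\,Z_m$; the parameter values are assumed for illustration.

```python
# Sketch: Cholesky route.  Omega[j,k] = min(t_j, t_k); its Cholesky factor
# is sqrt(h) * (lower-triangular ones), so L @ Z is a cumulative sum.
import numpy as np

M, T = 8, 1.0                                # assumed values
h = T / M
t = h * np.arange(1, M + 1)                  # t_1, ..., t_M
Omega = np.minimum(t[:, None], t[None, :])   # covariance matrix of W

L = np.linalg.cholesky(Omega)
assert np.allclose(L, np.sqrt(h) * np.tril(np.ones((M, M))))

Z = np.random.default_rng(0).standard_normal(M)
W = L @ Z                                    # Brownian values W_1, ..., W_M
assert np.allclose(W, np.cumsum(np.sqrt(h) * Z))   # standard MC construction
```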

PCA construction

The PCA construction uses
$$L = U \Lambda^{1/2} = \begin{pmatrix} U_1 & U_2 & \dots \end{pmatrix}
\begin{pmatrix} \lambda_1^{1/2} & & \\ & \lambda_2^{1/2} & \\ & & \ddots \end{pmatrix}$$
with the eigenvalues $\lambda_n$ and eigenvectors $U_n$ arranged in descending order, from largest to smallest.

Numerical computation of the eigenvalues and eigenvectors is costly for large numbers of timesteps, so instead use theory due to Åkesson and Lehoczky (1998).

PCA construction

It is easily verified that
$$\Omega^{-1} = h^{-1} \begin{pmatrix}
2 & -1 & & & \\
-1 & 2 & -1 & & \\
& \ddots & \ddots & \ddots & \\
& & -1 & 2 & -1 \\
& & & -1 & 1
\end{pmatrix}$$
This looks like the finite difference operator approximating a second derivative, and so the eigenvectors are Fourier modes.

PCA construction

The eigenvectors of both $\Omega^{-1}$ and $\Omega$ are
$$(U_m)_n = \sqrt{\frac{2}{2M+1}}\, \sin\!\left( \frac{(2m-1)\, n\, \pi}{2M+1} \right)$$
and the eigenvalues of $\Omega$ are the reciprocals of those of $\Omega^{-1}$,
$$\lambda_m = \frac{h}{4} \left( \sin\!\left( \frac{(2m-1)\,\pi}{2(2M+1)} \right) \right)^{-2}$$

Because the eigenvectors are Fourier modes, an efficient FFT can be used (Scheicher, 2006) to compute
$$L Z = U \Lambda^{1/2} Z = \sum_m \left( \sqrt{\lambda_m}\, Z_m \right) U_m$$
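For comparison, here is a minimal sketch of the PCA construction (my own; it uses a direct eigendecomposition rather than the FFT approach mentioned above, which only pays off for large M). The eigen-pairs are reordered so the largest eigenvalue comes first, which is what lets the best QMC dimensions drive the dominant modes.

```python
# Sketch: PCA construction L = U * Lambda^{1/2}, eigenvalues in descending
# order.  (Direct eigendecomposition; the lecture suggests an FFT for speed.)
import numpy as np

M, T = 8, 1.0                           # assumed values
h = T / M
t = h * np.arange(1, M + 1)
Omega = np.minimum(t[:, None], t[None, :])

lam, U = np.linalg.eigh(Omega)          # eigh returns ascending eigenvalues
order = np.argsort(lam)[::-1]           # reorder: largest eigenvalue first
lam, U = lam[order], U[:, order]

L = U * np.sqrt(lam)                    # columns of U scaled by sqrt(lambda)
assert np.allclose(L @ L.T, Omega)      # L L^T = Omega as required

Z = np.random.default_rng(0).standard_normal(M)
W = L @ Z                               # PCA-constructed Brownian values
```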

Brownian Bridge construction

The Brownian Bridge construction uses the theory from lecture 10. The final Brownian value is constructed using $Z_1$:
$$W_M = \sqrt{T}\, Z_1$$
Conditional on this, the midpoint value $W_{M/2}$ is Normally distributed with mean $\tfrac{1}{2} W_M$ and variance $T/4$, and so can be constructed as
$$W_{M/2} = \tfrac{1}{2} W_M + \sqrt{T/4}\, Z_2$$

Brownian Bridge construction

The quarter and three-quarter points can then be constructed as
$$W_{M/4} = \tfrac{1}{2} W_{M/2} + \sqrt{T/8}\, Z_3$$
$$W_{3M/4} = \tfrac{1}{2} \left( W_{M/2} + W_M \right) + \sqrt{T/8}\, Z_4$$
and the procedure continued recursively until all Brownian values are defined. (This assumes M is a power of 2; if not, the implementation is slightly more complex.)

I have a slight preference for this method because it is particularly effective for European options, for which $S(T)$ is very strongly dependent on $W(T)$.
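A minimal sketch of this recursive construction (my own, assuming M is a power of 2; the function name and interface are made up for illustration). Z[0] fixes W(T), Z[1] the midpoint, and later Z components fill in progressively finer detail.

```python
# Sketch: Brownian Bridge construction, consuming Z in order of importance.
import numpy as np

def brownian_bridge(Z, T=1.0):
    """Return W_1, ..., W_M at times h, 2h, ..., T from unit Normals Z."""
    M = len(Z)                          # assumed to be a power of 2
    W = np.zeros(M + 1)                 # W[0] = W(0) = 0
    W[M] = np.sqrt(T) * Z[0]            # final value from Z_1
    k, step = 1, M
    while step > 1:
        half = step // 2
        for left in range(0, M, step):  # fill the midpoint of each interval
            right, mid = left + step, left + half
            var = (T / M) * step / 4    # conditional variance of the midpoint
            W[mid] = 0.5 * (W[left] + W[right]) + np.sqrt(var) * Z[k]
            k += 1
        step = half
    return W[1:]

W = brownian_bridge(np.random.default_rng(0).standard_normal(8))
```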

Multi-dimensional Brownian motion

The preceding discussion concerns the construction of a single, scalar Brownian motion. Suppose now that we have to generate a $P$-dimensional Brownian motion with correlation matrix $\Sigma$ between the different components. What do we do?

Multi-dimensional Brownian motion

First, use either PCA or BB to construct $P$ uncorrelated Brownian paths, using
- $Z_1, Z_{1+P}, Z_{1+2P}, Z_{1+3P}, \dots$ for the first path
- $Z_2, Z_{2+P}, Z_{2+2P}, Z_{2+3P}, \dots$ for the second path
- $Z_3, Z_{3+P}, Z_{3+2P}, Z_{3+3P}, \dots$ for the third path
- etc.

This uses the best dimensions of $Z$ for the overall behaviour of all of the paths.

Multi-dimensional Brownian motion

Second, define
$$W^{\mathrm{corr}}_n = L_\Sigma\, W^{\mathrm{uncorr}}_n$$
where $W^{\mathrm{uncorr}}_n$ is the uncorrelated sequence, $W^{\mathrm{corr}}_n$ is the correlated sequence, and
$$L_\Sigma L_\Sigma^T = \Sigma$$
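A minimal sketch (mine) of this second step: $L_\Sigma$ is taken as the Cholesky factor of $\Sigma$, and each timestep's vector of uncorrelated values is multiplied by it. The example $\Sigma$, the sizes, and the use of standard increments for the uncorrelated paths are all assumptions made for brevity.

```python
# Sketch: correlate P uncorrelated Brownian paths with L_Sigma, where
# L_Sigma @ L_Sigma.T == Sigma.  (Example correlation matrix is assumed.)
import numpy as np

Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])              # P = 2 correlation matrix
L_Sigma = np.linalg.cholesky(Sigma)

M, P = 8, 2
h = 1.0 / M
# columns = P uncorrelated Brownian paths; built here with standard
# increments for brevity (PCA or BB would be used with QMC in practice)
dW = np.sqrt(h) * np.random.default_rng(0).standard_normal((M, P))
W_uncorr = np.cumsum(dW, axis=0)
W_corr = W_uncorr @ L_Sigma.T               # row n is L_Sigma @ W_uncorr[n]
```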

Numerical results

Usual European call test case based on geometric Brownian motion:
- 128 timesteps, so the weak error is negligible
- comparison between QMC using Brownian Bridge, QMC without Brownian Bridge, and standard MC
- QMC calculations use the Sobol generator
- all calculations use 64 sets of points; for the QMC calcs, each set has a different random offset
- plots show the error and a 3 s.d. error bound
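A minimal, self-contained sketch of this kind of test (my own, not the lecture's code and not what produced the plots below): 64 independently scrambled Sobol' point sets stand in for the random offsets, the standard increment construction is used for brevity on the line marked (*), and all parameter values are assumed.

```python
# Sketch: randomised QMC for a European call under GBM, with a 3 s.d.
# error estimate from 64 independent point sets.  (All parameters assumed.)
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
M, N = 128, 1024
h = T / M

estimates = []
for seed in range(64):                           # 64 randomised point sets
    U = qmc.Sobol(d=M, scramble=True, seed=seed).random(N)
    Z = norm.ppf(U)                              # N x M unit Normals
    dW = np.sqrt(h) * Z                          # (*) standard construction
    logST = np.log(S0) + np.sum((r - 0.5 * sigma**2) * h + sigma * dW, axis=1)
    payoff = np.exp(-r * T) * np.maximum(np.exp(logST) - K, 0.0)
    estimates.append(payoff.mean())

V = np.mean(estimates)
err3 = 3.0 * np.std(estimates) / np.sqrt(len(estimates))  # 3 s.d. error bound
```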

QMC with Brownian Bridge
[Figure: log-log plot of the error and the MC error bound against the number of points N, compared to the exact solution]

QMC without Brownian Bridge
[Figure: log-log plot of the error and the MC error bound against the number of points N, compared to the exact solution]

Standard Monte Carlo
[Figure: log-log plot of the error and the MC error bound against the number of points N, compared to the exact solution]

Final words

- QMC offers large computational savings over the standard Monte Carlo approach
- again, it is advisable to use randomised QMC to regain confidence intervals, at the cost of slightly poorer accuracy
- it is very important to use the PCA or Brownian Bridge construction to create the discrete Brownian increments; both are much better than the standard approach, which is equivalent to a Cholesky factorisation of the covariance matrix