Efficient Monte Carlo for Gaussian Fields and Processes


Efficient Monte Carlo for Gaussian Fields and Processes. Jose Blanchet (with R. Adler, J. C. Liu, and C. Li). Columbia University, Nov 2010.

Outline
1. Introduction
2. Importance Sampling and Efficiency
3. Example 1: Maximum of a Gaussian Process
4. Example 2: Gaussian Random Fields

A Motivating Application. Contamination level in a geographic area (yellow area = Mexico City).

A Motivating Application. When the ground-level ozone reaches 270 parts per billion, Mexico City bans the use of a significant portion of cars, orders industry to reduce emissions by 50%, and closes gas stations. What is the chance that high levels are reached in a given area? What happens to other areas given that a high level has been reached somewhere? Let X(t, s) = logarithm of the ozone concentration at position s (in the yellow area) at time t. Adler, Blanchet and Liu (2010): an algorithm for conditional sampling with ε relative precision in time Poly{ε⁻¹ log[1/P(high excursion)]}.

A Motivating Application. Another motivating application: queues with Gaussian input, Mandjes (2007):
u(b) := P(max_{k≥0} X(k) > b),
where X(·) is a suitably defined Gaussian process. Blanchet and Li (2010): an algorithm for conditional sampling with ε relative precision in time Poly{ε⁻¹ log[1/u(b)]}.


Importance Sampling. Goal: estimate u := P(A) > 0. Choose a proposal p̃ and simulate ω from it. The I.S. estimator per trial is Z = L(ω) I(ω ∈ A), where L(ω) is the likelihood ratio, L(ω) = P(ω)/p̃(ω). p̃ is called a change of measure.
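As an illustration of this estimator, here is a minimal sketch (a toy example, not from the talk): estimating u = P(X > b) for X ~ N(0, 1) with a mean-shifted proposal N(b, 1), for which the likelihood ratio is L(x) = φ(x)/φ(x − b) = exp(b²/2 − bx).

```python
import math
import random

def is_estimate(b=4.0, n=100_000, seed=1):
    """Importance sampling for u = P(X > b), X ~ N(0,1),
    using the mean-shifted proposal X ~ N(b, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(b, 1.0)                     # simulate omega from p-tilde
        if x > b:                                 # I(omega in A)
            total += math.exp(b * b / 2 - b * x)  # L(omega) = P(omega)/p-tilde(omega)
    return total / n

u_hat = is_estimate()   # close to P(N(0,1) > 4) ~ 3.17e-5
```

A naive estimator would need on the order of 1/u ≈ 30,000 samples just to see one success; the tilted sampler hits the event about half the time.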

Importance Sampling. Must apply I.S. with care: it can increase the variance. Glasserman and Kou '95; Asmussen, Binswanger and Højgaard '00.

Importance Sampling. Goal: choose p̃ simulatable and efficient as P(A) ↘ 0.
Strong efficiency: Ẽ[Z²]/P(A)² = O(1).
Weak efficiency: Ẽ[Z²]/P(A)² = O(1/P(A)^ε) for every ε > 0.
Polynomial efficiency: Ẽ[Z²]/P(A)² = O(log(1/P(A))^q) for some q > 0.
Strong ⇒ Weak ⇒ Polynomial.
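For contrast, naive Monte Carlo has Z = I(ω ∈ A) and E[Z²]/P(A)² = 1/P(A), which blows up, so it satisfies none of these criteria. A small empirical check (my own toy example, reusing a mean-shifted normal proposal for u(b) = P(N(0,1) > b)) shows the tilted sampler's second-moment ratio staying modest:

```python
import math
import random

def second_moment_ratio(b, n=200_000, seed=2):
    """Empirical E~[Z^2]/u(b)^2 for a mean-shifted sampler of
    u(b) = P(N(0,1) > b); naive Monte Carlo has ratio 1/u(b)."""
    rng = random.Random(seed)
    m1 = 0.0
    m2 = 0.0
    for _ in range(n):
        x = rng.gauss(b, 1.0)
        z = math.exp(b * b / 2 - b * x) if x > b else 0.0
        m1 += z
        m2 += z * z
    m1 /= n
    m2 /= n
    return m2 / (m1 * m1)

# at b = 3 the ratio is a single-digit constant,
# while naive MC's ratio 1/u(3) is about 740
```

The ratio grows only slowly in b here (weak efficiency), whereas 1/u(b) grows like exp(b²/2).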

Importance Sampling. Suppose we choose p̃(·) = P(· | A). Then
L(ω) = P(ω) I(ω ∈ A) / [P(ω) I(ω ∈ A) / P(A)] = P(A).
The estimator has zero variance, but requires knowledge of P(A). Lesson: try choosing p̃(·) close to P(· | A)!

Outline
1. Introduction
2. Importance Sampling and Efficiency
3. Example 1: Maximum of a Gaussian Process (Target Bridge Sampling; Brownian Motion; Efficiency Analysis; Numerical Example)
4. Example 2: Gaussian Random Fields

Example 1: Maximum of a Gaussian Process. Let X = (X_k : k ≥ 0) be a Gaussian process with Var(X_k) = σ_k² and E X_k = µ_k < 0. We want to efficiently estimate (as b ↗ ∞)
u(b) = P[max_{k≥1} X_k > b].
Assume σ_k ∼ c_σ k^{H_σ} and |µ_k| ∼ c_µ k^{H_µ} with 0 < H_σ < H_µ < ∞, so that u(b) → 0 as b → ∞. No assumption on the correlation structure. This asymptotic regime is known as "large buffer scaling".

Related Asymptotics. Rich literature on asymptotics for u(b): Pickands (1969), Berman (1990), Duffield and O'Connell (1995), Piterbarg (1996), Hüsler and Piterbarg (1999), Dieker (2006), Hüsler (2006), Likhanov and Mazumdar (1999), Dębicki and Mandjes (2003)...

Related Simulation Algorithms. The common basic idea is to sample the Gaussian process by mean-tracking the most likely path given the overflow, time slot by time slot. (Figure: a sample path of X_t crossing level b at time T(b).)

Target Bridge Sampling.
1. Identifying the target set T;
2. Targeting;
3. Bridging;
4. T(b) = inf{k : X_k > b};
5. Deleting.
(Figure: a sample path of X_t crossing level b at time T(b).)

Likelihood of a Path.
p̃(X_1, ..., X_{T(b)}) = Σ_{k=T(b)}^∞ p̃(τ = k) ∫_b^∞ P(X_1, ..., X_{T(b)} | X_k) p̃(dx_k).
Select p̃(τ = k) ∝ P(X_k > b), mimicking P(τ = k | T(b) < ∞). Given τ = k, sample p̃(X_k ∈ ·) = P(X_k ∈ · | X_k > b).

Likelihood Ratio.
p̃(X_1, ..., X_{T(b)})
= Σ_{k=T(b)}^∞ p̃(τ = k) ∫_b^∞ P(X_1, ..., X_{T(b)} | X_k) p̃(dx_k)
= Σ_{k=T(b)}^∞ [P(X_k > b) / Σ_{j=1}^∞ P(X_j > b)] ∫_b^∞ P(X_1, ..., X_{T(b)} | X_k) P(dx_k) / P(X_k > b)
= Σ_{k=T(b)}^∞ P(X_1, ..., X_{T(b)}, X_k > b) / Σ_{j=1}^∞ P(X_j > b)
= [P(X_1, ..., X_{T(b)}) / Σ_{j=1}^∞ P(X_j > b)] · Σ_{k=T(b)}^∞ P(X_k > b | X_1, ..., X_{T(b)}).

Consequently, the importance sampling estimator for u(b) generated by p̃ is simply
L = dP/dp̃ (X_1, ..., X_{T(b)}) = Σ_{j=1}^∞ P(X_j > b) / Σ_{j=T(b)}^∞ P(X_j > b | X_1, ..., X_{T(b)}).

Special Case: Brownian Motion. Consider the case X_t = B(t) − t, where B is a standard Brownian motion, so the drift is negative and linear: µ_t = −t. The likelihood ratio
L = ∫_0^∞ P(B_t − t > b) dt / ∫_{T(b)}^∞ P(B_t − t > b | B_{T(b)} − T(b) = b) dt = ∫_0^∞ P(B_t − t > b) dt / ∫_0^∞ P(B_t − t > 0) dt
is a constant! Zero-variance importance sampler:
Ẽ[L] = L = P(max_{t≥0} (B_t − t) > b) = exp(−2b).
In this case TBS outputs a Brownian motion with drift +1.
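The zero-variance property can be checked with a short discretized simulation (my own sketch; the Euler step dt and the parameter values are assumptions). Under the proposal the drift is +1, so level b is hit with probability one, and the product of per-increment likelihood ratios telescopes to exp(−2 X_{T(b)}) ≈ exp(−2b):

```python
import math
import random

def tbs_bm_estimate(b=2.0, dt=1e-3, n=1_000, seed=3):
    """Estimate u(b) = P(max_{t>=0} (B_t - t) > b) = exp(-2b) by Euler
    simulation under the drift-reversed proposal (drift +1). The ratio of
    N(-dt, dt) to N(+dt, dt) densities at an increment x is exp(-2x), so
    the accumulated likelihood ratio at hitting is exp(-2 * X_{T(b)})."""
    rng = random.Random(seed)
    sd = math.sqrt(dt)
    total = 0.0
    for _ in range(n):
        x = 0.0
        while x <= b:                       # hitting b is certain under drift +1
            x += dt + sd * rng.gauss(0.0, 1.0)
        total += math.exp(-2.0 * x)         # product of per-increment ratios
    return total / n

# exact value: exp(-2 * 2.0) ~ 0.0183; the only noise is the overshoot at hitting
```

As dt shrinks the overshoot vanishes and every replication returns the same constant, mirroring the zero-variance statement above.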

Efficiency Analysis. The efficiency analysis involves bounding the second moment: since L ≤ Σ_{k=1}^∞ P(X_k > b),
Ẽ(L²) ≤ [Σ_{k=1}^∞ P(X_k > b)] · P(max_{k≥1} X_k > b),
so that
Ẽ(L²) / P(max_{k≥1} X_k > b)² ≤ Σ_{k=1}^∞ P(X_k > b) / P(max_{k≥1} X_k > b).
The bound does not involve the correlation structure...

Efficiency Analysis. Theorem. If (σ_k : k ≥ 1) and (µ_k : k ≥ 1) have power-law type tails with power indices 0 < H_σ < H_µ < ∞ respectively, then:
1. L is an unbiased estimator of u(b).
2. Let h(b) = λ(H_µ, H_σ) b^{2(H_µ − H_σ)/H_µ}; then u(b) = O(h(b)^{1/(H_µ − H_σ)} exp(−h(b))).
3. Ẽ[L²]/u(b)² = O(b^{2/H_µ}): polynomially efficient, and strongly efficient in the many-sources scaling.

Summary of Performance Analysis (Updated)

Method           | Many Sources | Large Buffer | Cost of each replication
-----------------|--------------|--------------|-------------------------
Single twist     | x            | x            | O(b^3)
Cut-and-twist    | weakly       | x            | O(b^4)
Random twist     | weakly       | x            | O(b^3)
Sequential twist | weakly       | x            | O(nb^3)
Mean shift       | x            | x            | O(b^3)
BMC              | x            | x            | O(b^3)
TBS              | strongly     | polynomial   | O(b^3)

Numerical Example: Fractional Brownian Noise. We test the performance of our Target Bridge Sampler against other existing methods in the many-sources setting. Suppose that {X_k} is driven by fractional Brownian noise, that is,
Cov(X_k, X_j) = (k^{2H_σ} + j^{2H_σ} − |k − j|^{2H_σ})/2 and µ_k = −k.
The numerical results are compared against those reported in Dieker and Mandjes (2006).
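This covariance can be plugged into a plain (naive) Monte Carlo sampler via a Cholesky factorization. The following sketch is my own illustration (numpy assumed; the horizon n, level b, and replication count are chosen only for demonstration) and estimates P(max_{1≤k≤n} X_k > b) for X_k = B_H(k) − k:

```python
import numpy as np

def fbn_max_prob(n=50, H=0.8, b=3.0, reps=20_000, seed=4):
    """Naive Monte Carlo for P(max_{1<=k<=n} X_k > b), where
    X_k = B_H(k) - k and the fractional Brownian motion covariance is
    Cov(B_H(k), B_H(j)) = (k^{2H} + j^{2H} - |k-j|^{2H}) / 2."""
    k = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (k[:, None] ** (2 * H) + k[None, :] ** (2 * H)
                 - np.abs(k[:, None] - k[None, :]) ** (2 * H))
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((reps, n))
    paths = z @ chol.T - k            # fractional Brownian motion minus drift k
    return float((paths.max(axis=1) > b).mean())
```

Naive sampling like this needs on the order of 1/u(b) replications for a fixed relative error, which is exactly the cost that TBS is designed to avoid.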

Numerical Example.

Estimator        | Cost of each replication | Estimate    | Runs   | Time
-----------------|--------------------------|-------------|--------|------
Naive            | O(b^3)                   | 6.12 x 10⁻⁴ | 833562 | 232s
Single twist     | O(b^3)                   | 4.84 x 10⁻⁴ | 4038   | ~60s
Cut-and-twist    | O(b^4)                   | 5.95 x 10⁻⁴ | 703    | ~80s
Random twist     | O(b^3)                   | 5.50 x 10⁻⁴ | 3269   | ~50s
Sequential twist | O(nb^3)                  | 6.39 x 10⁻⁴ | 692    | ~100s
TBS              | O(b^3)                   | 5.84 x 10⁻⁴ | 26     | 1s
Benchmark        | -                        | 5.75 x 10⁻⁴ | -      | -

Table: Simulation results of Example 2 with n = 300, b = 3, H_σ = 0.8.

Outline
1. Introduction
2. Importance Sampling and Efficiency
3. Example 1: Maximum of a Gaussian Process
4. Example 2: Gaussian Random Fields (Discrete Version)

Discrete Version: Monte Carlo Strategy. Discrete version: (X_1, ..., X_d) multivariate Gaussian. Consider
P(max_{i≤d} X_i > b) and P((X_1, ..., X_d) ∈ · | max_{i≤d} X_i > b).
Monte Carlo strategy:
1. Pick the i-th coordinate with probability proportional to P(X_i > b);
2. Sample X_i given X_i > b;
3. Sample the remaining values given X_i.
This gives
p̃((X_1, ..., X_d) ∈ ·) = Σ_{i=1}^d P(X_i > b) P((X_1, ..., X_d) ∈ · | X_i > b) / Σ_{j=1}^d P(X_j > b).
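The three steps above can be sketched directly in code. This is my implementation sketch (numpy assumed; the mean-zero covariance and the parameters in the usage example are illustrative): it draws from p̃ and averages the likelihood ratio Σ_j P(X_j > b) / Σ_j I(X_j > b).

```python
import numpy as np
from statistics import NormalDist

N01 = NormalDist()

def sample_tilted(cov, b, rng):
    """One draw from p-tilde for a mean-zero Gaussian vector:
    pick coordinate i w.p. proportional to P(X_i > b), sample
    X_i given X_i > b, then the remaining coordinates given X_i."""
    d = cov.shape[0]
    sd = np.sqrt(np.diag(cov))
    tails = np.array([N01.cdf(-b / s) for s in sd])          # P(X_i > b)
    i = rng.choice(d, p=tails / tails.sum())
    u = rng.random()
    xi = sd[i] * N01.inv_cdf(1.0 - u * N01.cdf(-b / sd[i]))  # X_i | X_i > b
    rest = np.delete(np.arange(d), i)
    beta = cov[rest, i] / cov[i, i]                  # regression coefficients
    cond_mean = beta * xi
    cond_cov = cov[np.ix_(rest, rest)] - np.outer(beta, cov[i, rest])
    x = np.empty(d)
    x[i] = xi
    x[rest] = rng.multivariate_normal(cond_mean, cond_cov)
    return x, tails.sum()

def estimate_max_prob(cov, b, reps=5_000, seed=5):
    """Importance sampling estimate of P(max_i X_i > b)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(reps):
        x, tail_sum = sample_tilted(cov, b, rng)
        total += tail_sum / np.count_nonzero(x > b)  # likelihood ratio
    return total / reps
```

For instance, with a 3-dimensional equicorrelated covariance (unit variances, correlation 0.5) and b = 2, the estimate lands a bit below the union bound Σ_j P(X_j > b) ≈ 0.068, since the ratio deducts the multiplicity of coordinates above b.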

Discrete Version: Importance Sampling Distribution. The probability measure
p̃((X_1, ..., X_d) ∈ ·) = Σ_{i=1}^d P(X_i > b) P((X_1, ..., X_d) ∈ · | X_i > b) / Σ_{j=1}^d P(X_j > b)
= Σ_{i=1}^d P((X_1, ..., X_d) ∈ · ; X_i > b) / Σ_{j=1}^d P(X_j > b).

Discrete Version: Likelihood Ratio.
I(max_{i≤d} X_i > b) dP/dp̃ (X_1, ..., X_d) = I(max_{i≤d} X_i > b) Σ_{j=1}^d P(X_j > b) / Σ_{j=1}^d I(X_j > b) ≤ Σ_{j=1}^d P(X_j > b).

Total Variation Approximation. Theorem (Adler, Blanchet and Liu (2008)). If Corr(X_i, X_j) < 1 for i ≠ j, then as b ↗ ∞,
P(max_{i≤d} X_i > b) = Σ_{j=1}^d P(X_j > b) (1 + o(1)),
and therefore
sup_A |P((X_1, ..., X_d) ∈ A | max_{i≤d} X_i > b) − p̃((X_1, ..., X_d) ∈ A)| → 0
as b ↗ ∞.

Efficient Monte Carlo for High Excursions of Gaussian Random Fields. Jose Blanchet, Columbia University. Joint work with Robert Adler (Technion, Israel Institute of Technology) and Jingchen Liu (Columbia University).

Continuous Gaussian Random Fields on Compacts. A Gaussian random field f on T, where T ⊂ R^d is a compact set.

High Excursion Probability. The probability of interest, as b → ∞, is that of a high excursion (e.g. P(max_{t∈T} f(t) > b)); more generally, one considers quantities involving a suitable functional Γ(·).

Asymptotic Results. Rough asymptotics hold under very mild conditions; sharp asymptotics are available for mean-zero, constant-variance random fields (under conditions), where the polynomial order k depends on the dimension of T and the continuity of f(t). Pickands (1969), Piterbarg (1995), Sun (1993), Adler (1981), Azaïs and Wschebor (2005), Taylor, Takemura and Adler (2005).

The Change of Measure. Here mes(·) denotes the Lebesgue measure and A_γ = {t ∈ T : f(t) > γ} is the excursion set; the algorithm is based on a change of measure on C(T).

Simulation from this change of measure.

The Estimator. The estimator and its variance; the second moment.

The Estimator and Its Variance.

The Choice of γ. Taking γ = b results in infinite variance; instead take γ = b − a/b.

The area of the excursion set, mes(A_{b − 1/b}), given a high excursion.

Efficiency results: the general case.

Efficiency results: the homogeneous case.

Implementation: Discretize the Continuous Field. Discretize the space T into {t_1, ..., t_n}. It is sufficient to choose n = (b/ε)^α such that ... Adler, Blanchet and Liu (2010).

Summary. A non-exponential change of measure for Gaussian processes and fields (efficiency properties and conditional sampling). Polynomially efficient for general Hölder continuous fields. Strongly efficient for homogeneous fields.