Gaussian Random Fields: Excursion Probabilities


Gaussian Random Fields: Excursion Probabilities Yimin Xiao Michigan State University

Lecture 5: Excursion Probabilities

1. Some classical results on excursion probabilities
   - A large deviation result
   - Upper bounds via the entropy method
   - Asymptotic results (the double sum method)
2. Smooth Gaussian fields: excursion probability
   - The expected Euler characteristic approximation
3. Vector-valued Gaussian fields
   - Smooth case
   - Non-smooth case

Let $X = \{X(t), t \in T\}$ be a real-valued Gaussian random field, where $T$ is the index set. The excursion probability
$$P\Big\{\sup_{t\in T} X(t) \ge u\Big\}, \qquad u > 0,$$
is important in probability, statistics and their applications. When $T \subseteq \mathbb{R}^N$ and $N = 1$, exact formulae for the excursion probability are available only in very few special cases. When $N > 1$, no exact formula is known.

5.1 Some classical results

Theorem 5.1 (Landau and Shepp, 1970; Marcus and Shepp, 1972). If $\{X(t), t\in T\}$ is a centered GRF and $\sup_{t\in T} X(t) < \infty$ a.s., then
$$\lim_{u\to\infty} \frac{1}{u^2}\log P\Big\{\sup_{t\in T} X(t) \ge u\Big\} = -\frac{1}{2\sigma_T^2},$$
where $\sigma_T^2 = \sup_{t\in T} E(X(t)^2)$.
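As a numerical sanity check (not part of the slides), the limit in Theorem 5.1 can be verified for Brownian motion on $[0,1]$, where the reflection principle gives the exact tail $P\{\sup_{t\le 1} B(t) \ge u\} = 2\,P\{N(0,1)\ge u\}$ and $\sigma_T^2 = 1$:

```python
import numpy as np
from scipy.stats import norm

# Brownian motion on [0,1]: by the reflection principle,
# P{ sup_{t<=1} B(t) >= u } = 2 * P{ N(0,1) >= u }, and sigma_T^2 = 1.
def log_rate(u):
    # (1/u^2) * log P{ sup B >= u }, using logsf for numerical stability
    return (np.log(2.0) + norm.logsf(u)) / u**2

for u in [5.0, 10.0, 20.0]:
    print(u, log_rate(u))  # approaches -1/(2*sigma_T^2) = -0.5
```

The convergence is slow (the correction is of order $(\log u)/u^2$), which is visible in the printed values.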

The Borell–TIS inequality

Theorem 5.2 (Borell, 1975; Tsirelson, Ibragimov and Sudakov, 1976). Let $X = \{X(t), t\in T\}$ be a centered Gaussian process with a.s. bounded sample paths, and let $\|X\| = \sup_{t\in T} X(t)$. Then $E(\|X\|) < \infty$ and, for all $\lambda > 0$,
$$P\big( \big|\|X\| - E(\|X\|)\big| > \lambda \big) \le 2\exp\Big(-\frac{\lambda^2}{2\sigma_T^2}\Big).$$
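For Brownian motion on $[0,1]$ every quantity in Theorem 5.2 is explicit ($\sup B \stackrel{d}{=} |N(0,1)|$, so $E(\|X\|) = \sqrt{2/\pi}$ and $\sigma_T^2 = 1$), which makes the inequality easy to check numerically; an illustration, not part of the slides:

```python
import numpy as np
from scipy.stats import norm

# Brownian motion on [0,1]: sup B =(d) |N(0,1)| by the reflection principle,
# so E(sup B) = sqrt(2/pi) and sigma_T^2 = sup_t Var(B(t)) = 1.
e_sup = np.sqrt(2.0 / np.pi)

def one_sided_tail(lam):
    # P{ sup B - E(sup B) > lam } = P{ |N(0,1)| > lam + sqrt(2/pi) }
    return 2.0 * norm.sf(lam + e_sup)

def borell_tis_bound(lam, sigma2=1.0):
    return 2.0 * np.exp(-lam**2 / (2.0 * sigma2))

for lam in [0.5, 1.0, 2.0, 4.0]:
    print(lam, one_sided_tail(lam), borell_tis_bound(lam))
```

The one-sided tail is of course dominated by the two-sided bound at every $\lambda$.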

Proof of Theorem 5.1. The Borell–TIS inequality immediately implies the upper bound in Theorem 5.1:
$$\limsup_{u\to\infty} \frac{1}{u^2}\log P\Big\{\sup_{t\in T} X(t) \ge u\Big\} \le -\frac{1}{2\sigma_T^2}.$$
The lower bound in Theorem 5.1 is easy.

Remark. The Borell–TIS inequality, combined with a partitioning argument, can lead to improved non-asymptotic upper bounds, as shown by the following result.

Upper bounds via the entropy method

For $\delta > 0$, set $T_\delta = \{t\in T : E(X(t)^2) \ge \sigma_T^2 - \delta\}$.

Theorem 5.3 (Samorodnitsky, 1991; Talagrand, 1994). If there exist $v \ge w \ge 1$ such that $N(T_\delta, d_X, \varepsilon) \le K\,\delta^{w}\,\varepsilon^{-v}$, where $d_X$ is the canonical metric of $X$, then for $u \ge 2\sigma_T\sqrt{w}$,
$$P\Big\{\sup_{t\in T} X(t) \ge u\Big\} \le K\Big(\frac{u}{\sigma_T^2}\Big)^{v-w}\,\Psi\Big(\frac{u}{\sigma_T}\Big).$$

Pickands' asymptotic theorem

Theorem 5.4 (Pickands, 1969; Qualls and Watanabe, 1973). Let $\{X(t), t\in[0,L]^N\}$ be a centered stationary Gaussian field with
$$E(X(s)X(t)) = 1 - \|t-s\|^\alpha + o(\|t-s\|^\alpha)$$
for a constant $\alpha\in(0,2]$. Then
$$\lim_{u\to\infty} \frac{P\big\{\sup_{t\in[0,L]^N} X(t) \ge u\big\}}{\psi(u)\, u^{2N/\alpha}} = H_\alpha\, L^N, \tag{1}$$
where $\psi(u) = (2\pi)^{-1/2}u^{-1}\exp(-u^2/2)$ and $H_\alpha$ is Pickands' constant.
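A convenient test case (an illustration, not from the slides): for a covariance of the form $1 - c\|t-s\|^\alpha + o(\cdot)$, rescaling $t$ introduces an extra factor $c^{N/\alpha}$ in (1). The cosine process $X(t) = \xi_1\cos t + \xi_2\sin t$ has $r(t) = \cos t = 1 - t^2/2 + o(t^2)$ (so $\alpha = 2$, $c = 1/2$), and its supremum over one period $[0,2\pi]$ equals $\sqrt{\xi_1^2+\xi_2^2}$, giving the exact Rayleigh tail $e^{-u^2/2}$:

```python
import numpy as np

# Cosine process X(t) = xi1*cos(t) + xi2*sin(t), xi_i iid N(0,1):
# r(t) = cos(t) = 1 - t^2/2 + o(t^2), so alpha = 2 and local constant c = 1/2.
# Exact tail over one period [0, 2*pi]: P{ sup X >= u } = exp(-u^2/2).
H2 = 1.0 / np.sqrt(np.pi)   # Pickands constant H_2
L = 2.0 * np.pi
c = 0.5

def psi(u):
    return np.exp(-u**2 / 2.0) / (u * np.sqrt(2.0 * np.pi))

def pickands_approx(u, alpha=2.0, N=1):
    # rescaled version of (1): c^(N/alpha) * H_alpha * L^N * u^(2N/alpha) * psi(u)
    return c**(N / alpha) * H2 * L**N * u**(2.0 * N / alpha) * psi(u)

for u in [3.0, 5.0]:
    print(u, pickands_approx(u) / np.exp(-u**2 / 2.0))  # identically 1 here
```

For this particular process the Pickands approximation coincides exactly with the true tail, which makes it a clean consistency check on the constants in (1).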

Recall that Pickands' constant is defined as
$$H_\alpha = \lim_{A\to\infty} \frac{1}{A^N} \int_0^\infty e^{s}\, P\Big( \sup_{t\in[0,A]^N} \big(\chi(t) - \|t\|^\alpha\big) > s \Big)\, ds,$$
where $\chi$ is a centered Gaussian field with covariance function
$$E[\chi(t)\chi(s)] = \|t\|^\alpha + \|s\|^\alpha - \|t-s\|^\alpha$$
(a rescaled fractional Brownian motion). The only known values of $H_\alpha$ are $H_1 = 1$ and $H_2 = 1/\sqrt{\pi}$.

Ideas for the proof of Theorem 5.4

Divide $[0,L]^N$ into $N_u$ small cubes $C_j$ of side length $u^{-2/\alpha}$, so that $N_u \approx L^N u^{2N/\alpha}$. Observe that
$$P\Big\{\sup_{t\in[0,L]^N} X(t) \ge u\Big\} = P\Big\{\bigcup_{j=1}^{N_u}\Big\{\sup_{t\in C_j} X(t) \ge u\Big\}\Big\} \le \sum_{j=1}^{N_u} P\Big\{\sup_{t\in C_j} X(t) \ge u\Big\}$$

and
$$P\Big\{\sup_{t\in[0,L]^N} X(t) \ge u\Big\} \ge \sum_{j=1}^{N_u} P\Big\{\sup_{t\in C_j} X(t) \ge u\Big\} - \sum_{i\ne j} P\Big\{\sup_{t\in C_i} X(t) \ge u,\ \sup_{t\in C_j} X(t) \ge u\Big\}.$$

Ideas for the proof

Prove that $\sum_{j=1}^{N_u} P\{\sup_{t\in C_j} X(t) \ge u\}$ is the main term and the double sum is negligible. We recall one important step in the proof. Write
$$P\Big\{\sup_{t\in C_j} X(t) \ge u\Big\} = P\{X(0)\ge u\} + \int_{-\infty}^{u} P\Big\{\max_{t\in C_j} X(t) \ge u \,\Big|\, X(0) = x\Big\}\,\varphi(x)\,dx,$$
where $\varphi$ is the density of the standard normal $N(0,1)$.

For any $a > 0$ and integer vector $n$, let
$$I_u = I_u\big[an/u^{2/\alpha}\big] = \big\{ a k\, u^{-2/\alpha} : 0 \le k \le n \big\}.$$
One can show that
$$\lim_{u\to\infty} \frac{P\big\{\max_{t\in I_u} X(t) \ge u\big\}}{\psi(u)} = 1 + \int_0^\infty e^{y}\, P\Big\{\max_{0\le k\le n} \chi(ak) > y\Big\}\, dy,$$
where $\psi(u) = P\{N(0,1) > u\}$.

Nonstationary case: a result for fBm

Theorem 5.5 (Talagrand, 1988). Let $B_H = \{B_H(t), t\in\mathbb{R}^N\}$ be a fractional Brownian motion with index $H\in(0,1)$. If $H > 1/2$, then for any $L > 0$,
$$\lim_{u\to\infty} \frac{P\big\{\sup_{t\in[0,L]^N} B_H(t) \ge u\big\}}{P\big\{B_H(\mathbf{L}) \ge u\big\}} = \lim_{u\to\infty} \frac{P\big\{\sup_{t\in[0,L]^N} B_H(t) \ge u\big\}}{\psi\big(u/(L\sqrt{N})^H\big)} = 1,$$
where $\mathbf{L} = (L,\dots,L)$. This is clearly different from (1). The reason is that $E(B_H(t)^2)$ has a unique maximum on $[0,L]^N$, at $t = \mathbf{L}$.

5.2 Asymptotic expansion for smooth Gaussian fields

Two main approaches:
- The Rice method, initiated by Rice (1944) and developed by many others; see Adler (1981), Azaïs and Wschebor (2009).
- The Euler characteristic method, by Worsley (1995), Taylor, Takemura and Adler (2005), Taylor and Adler (2007).

The Euler characteristic method

Let $A_u = \{t\in T : X(t) \ge u\}$ be the excursion set. A general conjecture is that the mean Euler characteristic of $A_u$ gives the behavior of $P\{\sup_{t\in T} X(t) \ge u\}$. This conjecture is referred to as the Expected Euler Characteristic Heuristic, and it has been proven true in a number of cases. Before giving any details, let us recall the notion of the Euler characteristic of a set.

The Euler characteristic method

Let $A\subset\mathbb{R}^N$ be a finite union of basic sets. The Euler characteristic $\varphi(A)$ can be defined as the unique function satisfying:
- $\varphi(A) = 0$ if $A = \emptyset$, and $\varphi(A) = 1$ if $A$ is basic (ball-like);
- $\varphi(A\cup B) = \varphi(A) + \varphi(B) - \varphi(A\cap B)$.

If $N = 1$, then $\varphi(A)$ is the number of disjoint intervals in $A$. If $N = 2$, then $\varphi(A)$ is the number of connected components of $A$ minus the number of holes.
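For a discretized excursion set (a binary image whose on-pixels are treated as closed unit squares), the Euler characteristic can be computed from the cell counts as $V - E + F$; a small illustrative sketch, not from the slides:

```python
import numpy as np

def euler_char_2d(mask):
    """Euler characteristic of a binary image, treating each on-pixel as a
    closed unit square: chi = #vertices - #edges + #faces."""
    vertices, edges = set(), set()
    faces = 0
    rows, cols = np.nonzero(np.asarray(mask, dtype=bool))
    for i, j in zip(rows, cols):
        faces += 1
        # the four corners of the unit square for pixel (i, j)
        vertices.update({(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)})
        # its four edges, each stored as an unordered pair of corners
        edges.update({
            frozenset({(i, j), (i + 1, j)}),
            frozenset({(i, j), (i, j + 1)}),
            frozenset({(i + 1, j), (i + 1, j + 1)}),
            frozenset({(i, j + 1), (i + 1, j + 1)}),
        })
    return len(vertices) - len(edges) + faces

# a 3x3 ring of pixels: one component, one hole => chi = 0
ring = np.ones((3, 3), dtype=int)
ring[1, 1] = 0
print(euler_char_2d(ring))  # 0
```

This matches the $N = 2$ description above: components count $+1$ each, holes count $-1$ each.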

The Euler characteristic method

When $T = [0,L]$, $\varphi(A_u)$ is essentially the number of upcrossings of the level $u$ by the process $X(t)$, and $E\{\varphi(A_u)\}$ is similar to the Rice formula, which has long been used to approximate the excursion probability. If $T = [0,L]^N$ and $N \ge 2$, it is difficult to define upcrossings of the level $u$, and the Euler characteristic becomes a natural choice. One can also use other quantities, such as the expected number of local maxima, to approximate the excursion probability.

Euler characteristic method

Theorem 5.6 (Taylor, Takemura and Adler, 2005). Let $X = \{X(t) : t\in T\}$ be a unit-variance smooth Gaussian field parameterized on a manifold $T$. Under certain conditions on the regularity of $X$ and the topology of $T$, there exists $\alpha_0 > 0$ such that, as $u\to\infty$,
$$P\Big\{\sup_{t\in T} X(t) \ge u\Big\} = E\{\varphi(A_u(X,T))\}\,\big(1 + o\big(e^{-\alpha_0 u^2}\big)\big),$$
where $\varphi(A_u(X,T))$ is the Euler characteristic of the excursion set $A_u(X,T) = \{t\in T : X(t) \ge u\}$.

$E\{\varphi(A_u(X,T))\}$ can be computed via the Kac–Rice formula [cf. Adler and Taylor (2007)]:
$$E\{\varphi(A_u(X,T))\} = C_0\,\Psi(u) + \sum_{j=1}^{\dim(T)} C_j\, u^{j-1} e^{-u^2/2},$$
where the $C_j$ are constants depending on $X$ and $T$. Compared with Pickands' approximation, this expansion is much more accurate, since the error decays exponentially fast. In fact, Pickands' approximation contains only one of the terms of $E\{\varphi(A_u(X,T))\}$, the one involving $u^{N-1}e^{-u^2/2}$.

Example 5.1. Let $X$ be a smooth isotropic Gaussian field with unit variance and $T = [0,L]^N$. Then
$$E\{\varphi(A_u(X,T))\} = \Psi(u) + \sum_{j=1}^{N} \binom{N}{j} \frac{L^j\,\lambda^{j/2}}{(2\pi)^{(j+1)/2}}\, H_{j-1}(u)\, e^{-u^2/2},$$
where $\lambda = \mathrm{Var}(X_i(t))$ and the $H_{j-1}(u)$ are Hermite polynomials.
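The expansion in Example 5.1 is straightforward to evaluate; a sketch, assuming the $H_j$ are the probabilists' Hermite polynomials ($H_1(u) = u$, $H_2(u) = u^2 - 1$), as in Adler and Taylor (2007):

```python
import numpy as np
from math import comb
from numpy.polynomial import hermite_e   # probabilists' Hermite polynomials He_n
from scipy.stats import norm

def expected_euler_char(u, N, L, lam):
    """Expected EC of the excursion set A_u for a smooth isotropic
    unit-variance field on [0, L]^N, with lam = Var(X_i(t))."""
    val = norm.sf(u)                                           # Psi(u)
    for j in range(1, N + 1):
        He = hermite_e.hermeval(u, [0.0] * (j - 1) + [1.0])    # He_{j-1}(u)
        val += (comb(N, j) * L**j * lam**(j / 2.0)
                / (2.0 * np.pi)**((j + 1) / 2.0) * He * np.exp(-u**2 / 2.0))
    return val

# For N = 1 the formula reduces to Psi(u) + L*sqrt(lam)/(2*pi)*exp(-u^2/2).
print(expected_euler_char(3.0, N=1, L=1.0, lam=1.0))
```

For large $u$ the $j = N$ term dominates, which is exactly the term retained by Pickands' approximation.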

The constant-variance and isotropy conditions are too restrictive for many applications. Adler (2000, Section 7.3) listed non-stationary random fields as one of the main future research directions. We study the following questions:
- For Gaussian fields with stationary increments, how can one compute the mean Euler characteristic of their excursion sets?
- Can it still be used to approximate the excursion probabilities?
- What about excursion probabilities of vector-valued Gaussian random fields?

We have obtained some results on these questions; they are presented in the following papers:
- D. Cheng and Y. Xiao. Mean Euler characteristic approximation to excursion probability of Gaussian random fields. Ann. Appl. Probab. 26 (2016), 722–759.
- D. Cheng and Y. Xiao. Excursion probability of smooth vector-valued Gaussian random fields. Preprint, 2016.
- Y. Zhou and Y. Xiao. Tail asymptotics of extremes for bivariate Gaussian random fields. Bernoulli, to appear.

In the following, we present some results from the first two papers.

5.3 Smooth Gaussian fields with stationary increments

Let $X = \{X(t), t\in\mathbb{R}^N\}$ be a centered Gaussian field with stationary increments and $X(0) = 0$. It can be represented as
$$X(t) = \int_{\mathbb{R}^N} \big(e^{i\langle t,\lambda\rangle} - 1\big)\, W(d\lambda),$$
where $W$ is a complex-valued Gaussian random measure whose control measure $F$ (the spectral measure) satisfies
$$\int_{\mathbb{R}^N} \frac{\|\lambda\|^2}{1+\|\lambda\|^2}\, F(d\lambda) < \infty.$$
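A crude way to see this representation at work is a discretized spectral simulation in dimension $N = 1$; a sketch under an arbitrarily chosen (hypothetical) spectral density, not from the slides. For a density $f$, the representation gives $X(0) = 0$ and $\mathrm{Var}\,X(t) = 2\int_{\mathbb{R}}(1-\cos t\lambda)f(\lambda)\,d\lambda$, which the simulation should reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical spectral density, chosen only for illustration (N = 1)
def f(lam):
    return (1.0 + lam**2) ** (-1.5)

# Discretize X(t) = int (e^{i t lam} - 1) W(d lam) over lam in (0, 50].
# Using the symmetry of f, a real-valued version is
#   X(t) = sum_k sqrt(2 f(lam_k) dlam) * [(cos(lam_k t) - 1) xi_k + sin(lam_k t) eta_k]
# with iid N(0,1) coefficients, so that
#   Var X(t) = 4 * sum_k (1 - cos(t lam_k)) f(lam_k) dlam.
K, lam_max = 1000, 50.0
dlam = lam_max / K
lam = (np.arange(K) + 0.5) * dlam
amp = np.sqrt(2.0 * f(lam) * dlam)

def sample_X(t, n_paths):
    xi = rng.standard_normal((n_paths, K))
    eta = rng.standard_normal((n_paths, K))
    return xi @ ((np.cos(lam * t) - 1.0) * amp) + eta @ (np.sin(lam * t) * amp)

x = sample_X(1.0, 4000)
theory = np.sum(4.0 * (1.0 - np.cos(lam * 1.0)) * f(lam) * dlam)
print(x.var(), theory)   # agree up to Monte Carlo error
```

The empirical variance matches the quadrature value because the simulation and the variance formula discretize the same spectral integral.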

Sufficient conditions for sample-path differentiability, stated in terms of the spectral measure of $X$, are known. For example, if the spectral density $f(\lambda)$ satisfies
$$f(\lambda) = O\big(\|\lambda\|^{-(2H+N+2k)}\big) \quad \text{as } \|\lambda\|\to\infty,$$
where $k \ge 1$ is an integer and $H\in(0,1)$, then $X$ has a version $\widetilde{X}$ such that $\widetilde{X}(\cdot)\in C^k(\mathbb{R}^N)$ almost surely.

We will consider the case $k = 2$ and use the following notation:
$$X_i(t) = \frac{\partial X(t)}{\partial t_i}, \quad \nabla X(t) = (X_1(t),\dots,X_N(t)), \quad X_{ij}(t) = \frac{\partial^2 X(t)}{\partial t_i\,\partial t_j}, \quad \nabla^2 X(t) = \big(X_{ij}(t)\big)_{1\le i,j\le N}.$$
For Gaussian fields with stationary increments, we have $E\{X_i(t)X_{jk}(t)\} = 0$ for all $t\in\mathbb{R}^N$; i.e., $X_i(t)$ and $X_{jk}(t)$ are independent.

Let $T = \prod_{i=1}^N [a_i, b_i]$ be an $N$-dimensional rectangle. A face $J$ of dimension $k$ is defined by fixing a subset $\sigma(J)\subseteq\{1,\dots,N\}$ of size $k$ and a subset $\varepsilon(J) = \{\varepsilon_j,\ j\notin\sigma(J)\}\subseteq\{0,1\}^{N-k}$ of size $N-k$, so that
$$J = \big\{ t\in T : a_j < t_j < b_j \text{ if } j\in\sigma(J),\ t_j = (1-\varepsilon_j)a_j + \varepsilon_j b_j \text{ if } j\notin\sigma(J) \big\}.$$
If $k = 0$ (so $\sigma(J) = \emptyset$), the faces are the vertices; if $k = N$, there is only one face, which is $T$. Let $\partial_k T$ be the collection of faces of dimension $k$ in $T$. Then
$$T = \bigcup_{k=0}^{N}\bigcup_{J\in\partial_k T} J \quad\text{and}\quad \partial T = \bigcup_{k=0}^{N-1}\bigcup_{J\in\partial_k T} J.$$

5.3.1 Mean Euler characteristic

Morse's theorem (cf. Adler and Taylor, 2007) gives a formula for the Euler characteristic of the excursion set of $X$.

Theorem 5.7 (Morse's theorem). Let $X(t)$ be a Morse function a.s. Then
$$\varphi(A_u(X,T)) = \sum_{k=0}^{N} (-1)^k \sum_{J\in\partial_k T} \sum_{i=0}^{k} (-1)^i\, \mu_i(J) \quad \text{a.s.},$$
where
$$\mu_i(J) = \#\big\{ t\in J : X(t)\ge u,\ \nabla X_{|J}(t) = 0,\ \mathrm{index}\big(\nabla^2 X_{|J}(t)\big) = i,\ \varepsilon_j^* X_j(t) \ge 0 \text{ for all } j\notin\sigma(J) \big\}$$
and $\varepsilon_j^* = 2\varepsilon_j - 1$.

Example 5.3. Let $T = [0,1] = \{0\}\cup\{1\}\cup(0,1)$ and let $X(t)$ be a smooth function. Then
$$\begin{aligned} \varphi(A_u(X,T)) = {}& \mathbf{1}_{\{X(0)\ge u,\, X'(0)\le 0\}} + \mathbf{1}_{\{X(1)\ge u,\, X'(1)\ge 0\}} \\ & + \#\{t\in(0,1) : X(t)\ge u,\ X'(t) = 0,\ X''(t) < 0\} \\ & - \#\{t\in(0,1) : X(t)\ge u,\ X'(t) = 0,\ X''(t) > 0\}. \end{aligned}$$
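The identity in Example 5.3 is easy to check numerically; a discretized sketch for the hypothetical smooth function $X(t) = \cos(6\pi t + 0.7)$ at level $u = 0.5$, comparing the Morse-type count with the number of components of the excursion set:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10001)
x = np.cos(6.0 * np.pi * t + 0.7)   # hypothetical smooth sample function
u = 0.5

def ec_components(x, u):
    # Euler characteristic in 1D = number of disjoint intervals of {X >= u}
    ind = x >= u
    return int(ind[0]) + int(np.sum(ind[1:] & ~ind[:-1]))

def ec_morse(x, u):
    # discrete version of the Morse-type count in Example 5.3
    d = np.diff(x)
    above = x >= u
    maxima = np.sum(above[1:-1] & (d[:-1] > 0) & (d[1:] < 0))
    minima = np.sum(above[1:-1] & (d[:-1] < 0) & (d[1:] > 0))
    left = int(above[0] and d[0] <= 0)
    right = int(above[-1] and d[-1] >= 0)
    return left + right + int(maxima) - int(minima)

print(ec_components(x, u), ec_morse(x, u))  # both equal 4
```

Here the left endpoint contributes 1 (the function starts above the level and decreases), the three interior maxima above the level contribute 3, and no interior minima lie above the level.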

Mean Euler characteristic

Let $X = \{X(t), t\in\mathbb{R}^N\}$ be a centered Gaussian field with stationary increments and spectral density $f(\lambda)$. Assume:

(H1): $f(\lambda) = O\big(\|\lambda\|^{-(2H+N+4)}\big)$ for some $H\in(0,1)$.

(H2): For every $t\in T$, $(X(t), \nabla X(t), \nabla^2 X(t))$ has a nondegenerate distribution.

Notation:
$$\big(E\{X_i(t)X_j(t)\}\big)_{i,j=1,\dots,N} = (\lambda_{ij})_{i,j=1,\dots,N} = \Lambda,$$
$$\big(E\{X(t)X_{ij}(t)\}\big)_{i,j=1,\dots,N} = \big(\lambda_{ij}(t) - \lambda_{ij}\big)_{i,j=1,\dots,N} = \Lambda(t) - \Lambda,$$
where
$$\lambda_{ij} = \int_{\mathbb{R}^N} \lambda_i\lambda_j\, f(\lambda)\,d\lambda, \qquad \lambda_{ij}(t) = \int_{\mathbb{R}^N} \lambda_i\lambda_j \cos\langle t,\lambda\rangle\, f(\lambda)\,d\lambda.$$

Define $\Lambda_J = (\lambda_{ij})_{i,j\in\sigma(J)}$, $\Lambda_J(t) = (\lambda_{ij}(t))_{i,j\in\sigma(J)}$ and
$$\gamma_t^2 = \mathrm{Var}\big(X(t)\mid\nabla X(t)\big) = \frac{\det\mathrm{Cov}\big(X(t), \nabla X(t)\big)}{\det\mathrm{Cov}\big(\nabla X(t)\big)}.$$
For $J\in\partial_k T$, we denote $\{1,\dots,N\}\setminus\sigma(J) = \{J_1,\dots,J_{N-k}\}$ and let
$$E(J) = \big\{ (t_{J_1},\dots,t_{J_{N-k}})\in\mathbb{R}^{N-k} : \varepsilon_j^* t_j > 0,\ j = J_1,\dots,J_{N-k} \big\}.$$
Let $C_j(t)$ be the $(1, j+1)$ entry of $\big(\mathrm{Cov}(X(t), \nabla X(t))\big)^{-1}$.

Theorem 5.8 (Cheng and X., 2016).
$$\begin{aligned} E\{\varphi(A_u)\} = {}& \sum_{t\in\partial_0 T} P\big(X(t)\ge u,\ \nabla X(t)\in E(\{t\})\big) \\ & + \sum_{k=1}^{N}\sum_{J\in\partial_k T} \int_J \frac{\det\big(\Lambda_J - \Lambda_J(t)\big)}{(2\pi)^{k/2}\,|\Lambda_J|^{1/2}\,\gamma_t^{k}} \int_{E(J)}\int_u^{\infty} H_k\Big(\frac{x + \gamma_t C_{J_1}(t)y_{J_1} + \cdots + \gamma_t C_{J_{N-k}}(t)y_{J_{N-k}}}{\gamma_t}\Big) \\ & \qquad\qquad \times\, p_t\big(x, y_{J_1},\dots,y_{J_{N-k}} \mid 0,\dots,0\big)\, dx\, dy_{J_1}\cdots dy_{J_{N-k}}\, dt, \end{aligned}$$
where $p_t(\,\cdot\mid 0,\dots,0)$ is the conditional density of $\big(X(t), X_{J_1}(t),\dots,X_{J_{N-k}}(t)\big)$ given $\nabla X_{|J}(t) = 0$.

Remarks. The proof relies strongly on two properties of Gaussian fields with stationary increments:
(i) $X_i(t)$ and $X_{jk}(t)$ are independent;
(ii) $\big(E\{X(t)X_{ij}(t)\}\big)_{i,j} = \Lambda(t) - \Lambda$ is negative semidefinite.

In many cases, the formula can be simplified at the cost of only a super-exponentially small error.

5.3.2 Approximation of the excursion probability

Define the number of extended outward maxima above level $u$ by
$$M_u^E(J) = \#\big\{ t\in J : X(t)\ge u,\ \nabla X_{|J}(t) = 0,\ \mathrm{index}\big(\nabla^2 X_{|J}(t)\big) = k,\ \varepsilon_j^* X_j(t) > 0 \text{ for all } j\notin\sigma(J) \big\}.$$
Recalling $T = \bigcup_{k=0}^N \partial_k T = \bigcup_{k=0}^N \bigcup_{J\in\partial_k T} J$, it can be shown that
$$P\Big\{\sup_{t\in T} X(t)\ge u\Big\} = P\Big\{ \bigcup_{k=0}^N \bigcup_{J\in\partial_k T} \{M_u^E(J)\ge 1\} \Big\};$$
see Azaïs and Delmas (2002).

By the Bonferroni inequality and Piterbarg (1996),
$$P\Big\{\sup_{t\in T} X(t)\ge u\Big\} \le \sum_{k=0}^N \sum_{J\in\partial_k T} E\{M_u^E(J)\}$$
and
$$P\Big\{\sup_{t\in T} X(t)\ge u\Big\} \ge \sum_{k=0}^N \sum_{J\in\partial_k T} \Big( E\{M_u^E(J)\} - E\{M_u^E(J)\,(M_u^E(J)-1)\} \Big) - \sum_{J\ne J'} E\{M_u^E(J)\, M_u^E(J')\}.$$

Extending the method in Azaïs and Delmas (2002), we prove:

Lemma 5.1. Under the conditions of Theorem 5.8, there exists $\alpha > 0$ such that
$$\sum_{k=0}^N \sum_{J\in\partial_k T} E\{M_u^E(J)\} = E\{\varphi(A_u)\} + o\big(e^{-\alpha u^2 - u^2/(2\sigma_T^2)}\big),$$
where $\sigma_T^2 = \sup_{t\in T} \mathrm{Var}(X(t))$.

The following theorem shows that the Expected Euler Characteristic Heuristic holds more generally.

Theorem 5.9 (Cheng and X., 2016). Let $X = \{X(t) : t\in\mathbb{R}^N\}$ be a centered Gaussian random field with stationary increments satisfying (H1), (H2) and

(H3): for all $t\ne s\in\mathbb{R}^N$,
$$\big( X(t), \nabla X(t), X_{ij}(t), X(s), \nabla X(s), \nabla^2 X(s),\ 1\le i\le j\le N \big)$$
has a nondegenerate distribution.

Then there exists $\alpha > 0$ such that
$$P\Big\{\sup_{t\in T} X(t)\ge u\Big\} = E\{\varphi(A_u)\} + o\big(e^{-\alpha u^2 - u^2/(2\sigma_T^2)}\big).$$

Corollary 5.1. Under the conditions of Theorem 5.9 and one extra condition, $P\{\sup_{t\in T} X(t)\ge u\}$ equals
$$\sum_{t\in\partial_0 T} \Psi\Big(\frac{u}{\sigma_t}\Big) + \sum_{k=1}^N \sum_{J\in\partial_k T} \frac{1}{(2\pi)^{(k+1)/2}\,|\Lambda_J|^{1/2}} \int_J \frac{\det\big(\Lambda_J - \Lambda_J(t)\big)}{\theta_t^{k}}\, H_{k-1}\Big(\frac{u}{\theta_t}\Big)\, e^{-u^2/(2\theta_t^2)}\, dt + o\big(e^{-\alpha u^2 - u^2/(2\sigma_T^2)}\big),$$
where, for $t\in J$,
$$\theta_t^2 = \mathrm{Var}\big(X(t)\mid\nabla X_{|J}(t)\big) = \frac{\det\mathrm{Cov}\big(X(t), \nabla X_{|J}(t)\big)}{\det\mathrm{Cov}\big(\nabla X_{|J}(t)\big)}.$$

5.4 Vector-valued Gaussian fields

Consider a multivariate random field $X = \{X(t), t\in\mathbb{R}^N\}$ taking values in $\mathbb{R}^p$, defined by
$$X(t) = (X_1(t),\dots,X_p(t)), \qquad t\in\mathbb{R}^N. \tag{2}$$
Its key features are:
- the components $X_1,\dots,X_p$ are dependent;
- $X_1,\dots,X_p$ may have different smoothness properties.

Given subsets $T_1,\dots,T_p$ of $\mathbb{R}^N$, it is of interest to estimate the excursion probability
$$P\Big\{ \max_{t\in T_1} X_1(t) \ge u_1,\ \dots,\ \max_{t\in T_p} X_p(t) \ge u_p \Big\} \tag{3}$$
for given threshold values $u_1,\dots,u_p$. For $T\subseteq\mathbb{R}^N$, another type of excursion probability for $X$ is
$$P\big\{ \exists\, t\in T \text{ such that } X_i(t) \ge u_i,\ 1\le i\le p \big\}. \tag{4}$$
We focus on the excursion probabilities in (3) for $p = 2$.

Let $\{(X(t), Y(s)) : t\in T, s\in S\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian random field, where $T$ and $S$ are rectangles in $\mathbb{R}^N$. We are interested in the joint excursion probability
$$P\Big\{ \sup_{t\in T} X(t) \ge u,\ \sup_{s\in S} Y(s) \ge u \Big\}.$$
Only a few results are known; see Piterbarg (2000), Piterbarg and Stamatović (2005) and Dębicki et al. (2010).


5.4.1 The expected Euler characteristic method

We decompose $T$ and $S$ into faces of lower dimensions:
$$T = \bigcup_{k=0}^N \bigcup_{J\in\partial_k T} J, \qquad S = \bigcup_{l=0}^N \bigcup_{L\in\partial_l S} L.$$
Similarly to the real-valued case,
$$P\Big\{\sup_{t\in T} X(t)\ge u,\ \sup_{s\in S} Y(s)\ge u\Big\} = P\Big\{ \bigcup_{k,l=0}^N \bigcup_{J\in\partial_k T,\, L\in\partial_l S} \big\{M_u^E(X,J)\ge 1,\ M_u^E(Y,L)\ge 1\big\} \Big\}.$$

Upper bound:
$$P\Big\{\sup_{t\in T} X(t)\ge u,\ \sup_{s\in S} Y(s)\ge u\Big\} \le \sum_{k,l=0}^N \sum_{J\in\partial_k T,\, L\in\partial_l S} P\big\{M_u^E(X,J)\ge 1,\ M_u^E(Y,L)\ge 1\big\} \le \sum_{k,l=0}^N \sum_{J\in\partial_k T,\, L\in\partial_l S} E\big\{M_u^E(X,J)\, M_u^E(Y,L)\big\}.$$

Lower bound:
$$\begin{aligned} P\Big\{\sup_{t\in T} X(t)\ge u,\ \sup_{s\in S} Y(s)\ge u\Big\} \ge \sum_{k,l=0}^N \sum_{J\in\partial_k T,\, L\in\partial_l S} \Big( & E\big\{M_u^E(X,J)\, M_u^E(Y,L)\big\} \\ & - E\big\{M_u^E(X,J)\,[M_u^E(X,J)-1]\, M_u^E(Y,L)\big\} \\ & - E\big\{M_u^E(Y,L)\,[M_u^E(Y,L)-1]\, M_u^E(X,J)\big\} \Big) - \text{crossing terms}. \end{aligned}$$

Smoothness and regularity conditions

(H1'). $X, Y\in C^2$ a.s., and their second derivatives satisfy the uniform mean-square Hölder condition.

(H2'). For every $(t, t', s)\in T^2\times S$ with $t\ne t'$,
$$\big( X(t), \nabla X(t), \nabla^2 X(t), X(t'), \nabla X(t'), \nabla^2 X(t'), Y(s), \nabla Y(s), \nabla^2 Y(s) \big)$$
is nondegenerate; and for every $(s, s', t)\in S^2\times T$ with $s\ne s'$,
$$\big( Y(s), \nabla Y(s), \nabla^2 Y(s), Y(s'), \nabla Y(s'), \nabla^2 Y(s'), X(t), \nabla X(t), \nabla^2 X(t) \big)$$
is nondegenerate.

Smoothness and regularity conditions

Let $\rho(t,s) = E\{X(t)Y(s)\}$ and $\rho(T,S) = \sup_{t\in T,\, s\in S} \rho(t,s)$.

(H3'). For every $(t,s)\in T\times S$ such that $\rho(t,s) = \rho(T,S)$, the matrices
$$\big(E\{X_{ij}(t)Y(s)\}\big)_{i,j\in\zeta(t,s)}, \qquad \big(E\{X(t)Y_{i'j'}(s)\}\big)_{i',j'\in\zeta'(t,s)}$$
are both negative semidefinite, where
$$\zeta(t,s) = \{ n : E\{X_n(t)Y(s)\} = 0,\ 1\le n\le N \}, \qquad \zeta'(t,s) = \{ n : E\{X(t)Y_n(s)\} = 0,\ 1\le n\le N \}.$$

Theorem 5.10 (Cheng and X., 2016+). Under (H1')–(H3'), there exists $\alpha_0 > 0$ such that, as $u\to\infty$,
$$P\Big\{\sup_{t\in T} X(t)\ge u,\ \sup_{s\in S} Y(s)\ge u\Big\} = E\big\{\varphi\big(A_u(X,T)\times A_u(Y,S)\big)\big\} + o\Big( \exp\Big( -\frac{u^2}{1+\rho(T,S)} - \alpha_0 u^2 \Big) \Big),$$
where
$$A_u(X,T)\times A_u(Y,S) = \{(t,s)\in T\times S : X(t)\ge u,\ Y(s)\ge u\}.$$

5.5 The double sum method

Consider a non-smooth bivariate locally stationary Gaussian field $X(t) = (X_1(t), X_2(t))$. Define
$$r_{ij}(s,t) := E[X_i(s)\,X_j(t)], \qquad i, j\in\{1,2\}. \tag{5}$$
Let $\|t\| = \big(\sum_{j=1}^N t_j^2\big)^{1/2}$ be the $\ell^2$-norm of a vector $t\in\mathbb{R}^N$.

Assumptions:
(i) $r_{ii}(s,t) = 1 - c_i\|t-s\|^{\alpha_i} + o(\|t-s\|^{\alpha_i})$, where $\alpha_i\in(0,2)$ and $c_i > 0$, for $i = 1, 2$.
(ii) $r_{ii}(s,t) < 1$ for all $\|t-s\| > 0$, $i = 1, 2$.
(iii) $r_{12}(s,t) = r_{21}(s,t) =: r(\|t-s\|)$, i.e., the cross correlation is isotropic.
(iv) $r(\cdot) : [0,\infty)\to\mathbb{R}$ attains its maximum only at zero, with $r(0) = \rho\in(0,1)$; that is, $r(t) < \rho$ for all $t > 0$. Moreover, we assume $r'(0) = 0$, $r''(0) < 0$, and that there exists $\eta > 0$ such that $r''(s)$ exists and is continuous for all $s\in[0,\eta]$.

Let $S, T\subset\mathbb{R}^N$ be bounded Jordan-measurable sets (that is, the boundaries of $S$ and $T$ have Lebesgue measure 0).

Theorem 5.11 (Zhou and X., 2015). If $\mathrm{mes}_N(S\cap T)\ne 0$, then as $u\to\infty$,
$$P\Big\{ \max_{s\in S} X_1(s) > u,\ \max_{t\in T} X_2(t) > u \Big\} = (2\pi)^{-N/2}\,\big(-r''(0)\big)^{-N/2}\, c_1^{N/\alpha_1}\, c_2^{N/\alpha_2}\, (1+\rho)^{-N\left(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1\right)}\,\mathrm{mes}_N(S\cap T)\, H_{\alpha_1} H_{\alpha_2}\, u^{N\left(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1\right)}\,\Psi(u,\rho)\,(1+o(1)),$$
where $H_\alpha$ denotes Pickands' constant and
$$\Psi(u,\rho) := \frac{(1+\rho)^2}{2\pi u^2\sqrt{1-\rho^2}} \exp\Big( -\frac{u^2}{1+\rho} \Big).$$

Two remarks on Theorem 5.11:
- The rate of exponential decay is $e^{-u^2/(1+\rho)}$, where $\rho$ is the maximum cross correlation over $S\times T$.
- The extreme tail probability is proportional to $\mathrm{mes}_N(S\cap T)$, the volume of the set $\{(s,s) : s\in S\cap T\}$ on which $(X_1(\cdot), X_2(\cdot))$ attains the maximum cross correlation.

If $\mathrm{mes}_N(S\cap T) = 0$, the above theorem fails, and the result depends on the dimension of $S\cap T$. Let
$$S = S_{1,M}\times\prod_{j=M+1}^N [a_j, b_j], \qquad T = T_{2,M}\times\prod_{j=M+1}^N [h_j, k_j],$$
where $0\le M\le N-1$, $S_{1,M}$ and $T_{2,M}$ are $M$-dimensional Jordan sets with $\mathrm{mes}_M(S_{1,M}\cap T_{2,M})\ne 0$, and $a_j \le b_j < h_j$ for $j = M+1,\dots,N$.

Theorem 5.12 (Zhou and X., 2015). Under the above conditions, we have, as $u\to\infty$,
$$P\Big\{ \max_{s\in S} X_1(s) > u,\ \max_{t\in T} X_2(t) > u \Big\} = (2\pi)^{-M/2}\,\big(-r''(0)\big)^{-\frac{2N-M}{2}}\, c_1^{N/\alpha_1}\, c_2^{N/\alpha_2}\,\mathrm{mes}_M(S_{1,M}\cap T_{2,M})\, H_{\alpha_1} H_{\alpha_2}\, (1+\rho)^{\,2N-M-\frac{2N}{\alpha_1}-\frac{2N}{\alpha_2}}\, u^{\,M+N\left(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-2\right)}\,\Psi(u,\rho)\,(1+o(1)).$$

Example: the bivariate Matérn field

Multivariate stationary Matérn models $\{X(t), t\in\mathbb{R}^N\}$ as in (2), with marginal and cross-covariance functions of the form
$$M(h\mid\nu, a) := \frac{2^{1-\nu}}{\Gamma(\nu)}\,(a\|h\|)^{\nu}\, K_\nu(a\|h\|)$$
(with parameters $a, \nu > 0$), have been introduced and studied by Gneiting, Kleiber and Schlather (2010), Apanasovich, Genton and Sun (2012), and Kleiber and Nychka (2013). It is often more convenient to work with the spectral density:
$$f(\omega\mid\nu, a) = \frac{\Gamma(\nu + \frac{N}{2})\, a^{2\nu}}{\Gamma(\nu)\,\pi^{N/2}}\cdot\frac{1}{(a^2 + \|\omega\|^2)^{\nu + N/2}}.$$
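A direct implementation of $M(h\mid\nu,a)$ is a one-liner with the modified Bessel function $K_\nu$; as a standard check, $\nu = 1/2$ recovers the exponential covariance $e^{-a h}$ (a sketch, not from the slides):

```python
import numpy as np
from scipy.special import kv, gamma

def matern(h, nu, a):
    """Matern correlation M(h | nu, a) = 2^(1-nu)/Gamma(nu) * (a h)^nu * K_nu(a h)."""
    h = np.asarray(h, dtype=float)
    out = np.ones_like(h)                      # M(0) = 1 by continuity
    pos = h > 0
    x = a * h[pos]
    out[pos] = 2.0**(1.0 - nu) / gamma(nu) * x**nu * kv(nu, x)
    return out

# nu = 1/2 reduces to the exponential covariance exp(-a*h)
h = np.linspace(0.0, 3.0, 7)
print(matern(h, 0.5, 2.0))
print(np.exp(-2.0 * h))
```

Larger $\nu$ gives a covariance that is flatter at the origin, which is exactly the smoothness mechanism discussed below.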

The bivariate Matérn field

Let $X(t) = (X_1(t), X_2(t))^T$ be an $\mathbb{R}^2$-valued Gaussian field whose covariance matrix is determined by
$$C(h) = \begin{pmatrix} c_{11}(h) & c_{12}(h) \\ c_{21}(h) & c_{22}(h) \end{pmatrix}, \tag{6}$$
where $c_{ij}(h) := E[X_i(s+h)X_j(s)]$ are specified by
$$c_{11}(h) = \sigma_1^2\, M(h\mid\nu_1, a_1), \qquad c_{22}(h) = \sigma_2^2\, M(h\mid\nu_2, a_2), \qquad c_{12}(h) = c_{21}(h) = \rho\,\sigma_1\sigma_2\, M(h\mid\nu_{12}, a_{12}), \tag{7}$$
with $a_1, a_2, a_{12}, \sigma_1, \sigma_2 > 0$ and $\rho\in(-1,1)$.

Gneiting et al. (2010) gave necessary and sufficient conditions for (6) to define a valid covariance matrix function. In particular, if $\rho\ne 0$, one must have $\nu_{12} \ge \frac{\nu_1+\nu_2}{2}$. The parameters $\nu_1$ and $\nu_2$ control the smoothness of the sample function $t\mapsto X(t)$:
- If $\min\{\nu_1,\nu_2\} > 1$, then a.s. the sample function $t\mapsto (X_1(t), X_2(t))$ is continuously differentiable.
- If $0 < \nu_1 \le \nu_2 \le 1$, then a.s. the sample functions $t\mapsto (X_1(t), X_2(t))$ are non-smooth.

Suppose $0 < \nu_1 \le \nu_2 \le 1$. Then, by Xiao (1995), we have
$$\dim_H \mathrm{Gr}X([0,1]^N) = \begin{cases} N + 2 - (\nu_1+\nu_2), & \text{if } \nu_1+\nu_2 < N, \\[4pt] \dfrac{N + \nu_2 - \nu_1}{\nu_2}, & \text{if } \nu_1 < N \le \nu_1+\nu_2, \end{cases}$$
where $\mathrm{Gr}X([0,1]^N) = \{(t, X_1(t), X_2(t))^T : t\in[0,1]^N\}$ is the graph set of $X$. Many other random sets generated by $X$ are also fractals.

Theorem 5.10 can be applied when $\min\{\nu_1,\nu_2\} > 2$, while Theorems 5.11 and 5.12 can be applied when $\max\{\nu_1,\nu_2\} < 1$.

Thank you