THE EXCURSION PROBABILITY OF GAUSSIAN AND ASYMPTOTICALLY GAUSSIAN RANDOM FIELDS. Dan Cheng

Size: px
Start display at page:

Download "THE EXCURSION PROBABILITY OF GAUSSIAN AND ASYMPTOTICALLY GAUSSIAN RANDOM FIELDS. Dan Cheng"

Transcription

1 THE EXCURSION PROBABILITY OF GAUSSIAN AND ASYMPTOTICALLY GAUSSIAN RANDOM FIELDS By Dan Cheng A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Statistics Doctor of Philosophy 2013

2 ABSTRACT THE EXCURSION PROBABILITY OF GAUSSIAN AND ASYMPTOTICALLY GAUSSIAN RANDOM FIELDS By Dan Cheng The purpose of this thesis is to develop the asymptotic approximation to excursion probability of Gaussian and asymptotically Gaussian random fields. It is composed of two parts. The first part is to study smooth Gaussian random fields. We extend the expected Euler characteristic approximation to a wide class of smooth Gaussian random fields with non-constant variances. Applying similar techniques, we also find that the joint excursion probability of vector-valued smooth Gaussian random fields can be approximated via the expected Euler characteristic of related excursion sets. As useful applications, the excursion probabilities over random intervals and infinite intervals are also investigated. The second part focuses on non-smooth Gaussian and asymptotically Gaussian random fields. We study the excursion probability of Gaussian random fields on the sphere and obtain an asymptotics based on the Pickands constant. Using double sum method, we also derive the approximation, which involves the generalized Pickands constant, to excursion probability of anisotropic Gaussian and asymptotically Gaussian random fields.

3 Copyright by DAN CHENG 2013

4 ACKNOWLEDGMENTS I would like to express my sincere gratitude to my advisor Professor Yimin Xiao for his excellent guidance and continuous support during my Ph.D. study and research. He guided me not only to write this thesis but to set a career path to the future. He is extraordinarily kind to students. Besides academic communications, we chatted and shared life experience like friends. His enthusiasm on mathematics encourages me to keep working in the research. I also wish to thank Professor V. S. Mandrekar, Professor Lifeng Wang and Professor Xiaodong Wang for serving on my dissertation committee. I am grateful to the Department of Statistics and Probability and the Graduate School who provided me the assistantships, Dissertation Continuation Fellowship and Dissertation Completion Fellowship for working on the dissertation. This dissertation is also supported in part by the NSF Grant DMS Finally, I would like to thank my family for their love and support that enable me to pursue my career goal. iv

5 TABLE OF CONTENTS Chapter 1 Introduction and Review of Existing Literature Gaussian Random Fields Excursion Probability Chapter 2 Smooth Gaussian Random Fields with Stationary Increments Gaussian Fields with Stationary Increments The Mean Euler Characteristic Excursion Probability Further Remarks and Examples Some Auxiliary Facts Chapter 3 Smooth Gaussian Random Fields with Non-constant Variances Gaussian Fields on Rectangles Applications for Gaussian Fields with a Unique Maximum Point of the Variance Gaussian Fields on Manifolds without Boundary Gaussian Fields on Convex Sets with Smooth Boundary Gaussian Fields on Convex Sets with Piecewise Smooth Boundary Chapter 4 The Expected Euler Characteristic of Non-centered Stationary Gaussian Fields Preliminary Gaussian Computations Stationary Gaussian Fields on Rectangles Isotropic Gaussian Random Fields on Sphere Chapter 5 Excursion Probability of Smooth Gaussian Processes over Random Intervals Stationary Gaussian Processes Gaussian Processes with Increasing Variance Chapter 6 Ruin Probability of a Certain Class of Smooth Gaussian Processes Self-similar Processes Integrated Fractional Brownian Motion More General Gaussian Processes Chapter 7 Excursion Probability of Gaussian Random Fields on Sphere Notations Non-smooth Gaussian Fields on Sphere Locally Isotropic Gaussian Fields on Sphere v

6 7.2.2 Standardized Spherical Fractional Brownian Motion Smooth Isotropic Gaussian Fields on Sphere Preliminaries Excursion Probability Chapter 8 Excursion Probability of Anisotropic Gaussian and Asymptotically Gaussian Random Fields Preliminaries Asymptotically Gaussian Random Fields Proof of Theorem Example: Standardized Fractional Brownian Sheet Example: Standardized Random String Processes Chapter 9 Vector-valued Smooth Gaussian Random Fields Joint Excursion Probability Vector-valued Gaussian Processes BIBLIOGRAPHY vi

7 Chapter 1 Introduction and Review of Existing Literature 1.1 Gaussian Random Fields A real-valued random field is simply a stochastic process defined over a parameter space T, which could be a subset of R N or even a manifold, etc. The following is the rigorous definition [cf. Adler and Taylor (2007)]. Definition Let (Ω, F, P) be a complete probability space and T a topological space. Then a measurable mapping X : Ω R T (the space of all real-valued functions on T ) is called a real-valued random field. Measurable mappings from Ω to (R T ) d, d 1, are called vector-valued random fields. Thus, X is a real-valued function X(ω, t), where ω Ω and t T. For convenience, usually, we abbreviate X(ω, t) as X(t) or X. We define a real-valued Gaussian (random) field to be a real-valued random field X on a parameter space T for which the finite dimensional distributions of (X(t 1 ),..., X(t n )) are multivariate Gaussian ( i.e., multivariate Normal) for each 1 n < and each (t 1,..., t n ) T n. The functions m(t) = EX(t) and C(t, s) = E(X(t) m(t))(x(s) m(s)) are called respectively the mean and covariance functions of X. If m(t) 0, we call X a centered 1

8 Gaussian field. A vector-valued Gaussian field X taking values in R d is the random field for which ξ, X(t) is a real-valued Gaussian field for every ξ R d. The following result is Theorem in Adler and Taylor (2007), which gives a sufficient condition such that a Gaussian field X is continuous and bounded. Theorem Let X(t) : t T be a centered Gaussian field, where T is a compact set of R N. If there exist positive constants K, α and η such that E X(t) X(s) 2 K log t s 1 α, t s η, then X is continuous and bounded on T with probability one. Note that the sufficient condition in the above theorem only depends on the covariance function of X. This is a huge advantage for studying centered Gaussian random fields: all of their properties only depend on the covariance structure. Similar sufficient conditions for the differentiability of Gaussian fields can also be obtained, see Chapter 1 in Adler and Taylor (2007) for more details. 1.2 Excursion Probability The excursion probability above level u > 0 is defined as Psup t T X(t) u. Due to the wide applications in statistics and many other related areas, computing the excursion probability becomes a classical and very important problem in probability theory. However, usually, the exact probability is unable to obtain, instead, we try to find the asymptotic approximation as u tends to infinity. There is a classical result of Landau and Shepp (1970) and Marcus and Shepp (1972) that 2

9 gives a logarithmic asymptotics for the excursion probability of a general centered Gaussian process. If we assume that X(t) is a.s. bounded, then they showed that lim u u 2 log P sup t T X(t) u = 1 2σT 2, (1.2.1) where σ 2 T = sup t T Var(X(t)). We present here a non-asymptotic result due to Borell (1975) and Tsirelson, Ibragimov and Sudakov (TIS) (1976). Theorem (Borell-TIS inequality). Let X(t) : t T be a centered Gaussian field, a.s. bounded, where T is a compact subset of R N. Then Esup t T X(t) < and for all u > 0, P sup t T X(t) E sup t T X(t) u e u2 /(2σ T 2 ). It is evident to check that the Borell-TIS inequality implies (1.2.1). There are also several non-asymptotic bounds for the excursion probability of general (only assume continuity and boundedness a.s.) Gaussian fields, see Chapter 4 in Adler and Taylor (2007) for more details. If assume X to be stationary or locally stationary, then there is a famous approximation obtained by the double sum method. This technique was developed by Pickands (1969a, 1969b) for Gaussian processes, extended to Gaussian fields by Qualls and Watanabe (1973), and surveyed and developed in a monograph of Piterbarg (1996a). Theorem Let T be a bounded Jordan measurable set in R N such that dim(t ) = N, and let X(t) : t T be a centered Gaussian field with covariance function C(, ) satisfying C(t, s) = 1 t s α (1 + o(1)) as t s 0. 3

10 Then as u, P sup t T X(t) u = H α Vol(T )u 2N/α Ψ(u)(1 + o(1)), (1.2.2) where H α is the Pickankds constant and Ψ(u) = (2π) 1/2 u e x2 /2 dx. This result was developed further by Chan and Lai (2006) for Gaussian fields with a wider class of covariance structures. The coefficient H α Vol(T ) above was generalized as T H α(t)dt, where H α ( ) is a function on T. Moreover, the result in Chan and Lai (2006) is applicable to certain asymptotically Gaussian random fields. In Chapter 7, we investigate Gaussian random fields on the sphere and obtain Theorem 7.2.4, which is similar to Theorem In Chapter 8, we extend the result in Chan and Lai (2006) to anisotropic and asymptotically anisotropic Gaussian random fields, see Theorem and Theorem Can we get more accurate approximation to the excursion probability of nicer Gaussian random fields? The answer is yes. Sun (1993) used the tube method to find the approximation for Gaussian fields with finite Karhunen-Loève expansion. Also, many authors applied the Rice method to get accurate approximations for smooth Gaussian fields, see Piterbarg (1996a), Adler (2000) and Azaïs and Wschebor (2005, 2008, 2009), etc. Later on, these approximations were conjectured by statisticians that they should have close connection to the geometry of the excursion set A u = t T : X(t) u. Taylor, Takemura and Adler (2005) showed the rigorous proof that the expected Euler characteristic of the excursion set, denoted by Eϕ(A u ), can approximate the excursion probability very accurately. Their result is stated as follows. Theorem Let X = X(t) : t T be a unit-variance smooth Gaussian random field 4

11 parameterized on a manifold T. Under certain conditions on the regularity of X and topology of T, the following approximation holds: P sup t T X(t) u = Eϕ(A u )(1 + o ( e αu2 )), as u, (1.2.3) where α is some positive constant. Moreover, Eϕ(A u ) can be computed by the Kac-Rice formula, see Adler and Taylor (2007), Eϕ(A u ) = C 0 Ψ(u) + dim(t ) j=1 C j u j 1 e u2 /2, (1.2.4) where C j, j = 0, 1,..., dim(t ), are constants depending on X and T. Here is a simple example. Let X be a smooth isotropic Gaussian field with unit variance and T = [0, L] N, then ( N Nj ) L j λ j/2 Eϕ(A u ) = Ψ(u) + (2π) j=1 (j+1)/2 H j 1(u)e u2 /2, where λ = Var( X t i (t)) and H j 1 (u) are Hermite polynomials of order j 1. It is worth mentioning here that if X is not centered or not stationary, then Eϕ(A u ) becomes complicated to compute. In the recent monograph Adler and Taylor (2007), the authors only considered centered Gaussian random fields with constant variance. In Chapter 4 here, we study non-centered stationary Gaussian fields and derive exact formulae for computing Eϕ(A u ). Comparing (1.2.3) and (1.2.4) with (1.2.2), we see that the approximation in (1.2.2) only uses one of the terms, which involves u N 1 e u2 /2, in Eϕ(A u ). Also, we note that the error term in (1.2.2) is only o(1), and the expected Euler characteristic approximation in (1.2.3) is much more accurate since the error is exponentially smaller than the major term 5

12 Eϕ(A u ). The requirement of constant variance on the Gaussian random fields in Theorem is too restrictive for many applications. However, the original proof in Taylor, Takemura and Adler (2005) relies on this requirement heavily. If the constant variance condition is not satisfied, little had been known on whether the approximation (1.2.3) still holds. In a recent paper Azaïs and Wschebor (2008, Theorem 5), the authors proved (1.2.3) for a special case when the variance of the Gaussian field attains its maximum only in the interior of T. But this special case excludes many important Gaussian fields in which we are interested. As a major contribution in this thesis, we shall use the Rice method to show (1.2.3) for more general smooth Gaussian fields without constant-variance. In Chapter 2, we study smooth Gaussian random fields with stationary increments and obtain the desired results in Theorem and Theorem Meanwhile, we provide a specific formula for computing Eϕ(A u ) in Theorem To develop the theory further, we show in Chapter 3 that the expected Euler characteristic approximation also holds for a large class of smooth Gaussian random fields with non-constant variances. When computing Eϕ(A u ), we also find that it can be simplified in certain sense depending on the variance function of X. As useful applications, we study the excursion probabilities of Gaussian processes over random intervals and infinite intervals in Chapter 5 and Chapter 6. The approximations we derived are also more accurate than the existing ones, since the errors are super-exponentially small. Lastly, Chapter 9 is on a new topic: the excursion probability for vector-valued Gaussian random fields. There has been little research on this. The only exceptions are Piterbarg and Stamatovic (2005) and Debicki et al. (2010) who obtained some logarithmic asymptotics, and Ladneva and Piterbarg (2000) and Anshin (2006) who obtained certain asymptotics for 6

13 non-smooth vector-valued Gaussian random fields with special covariance functions. Let (X(t), Y (s)) : t T, s S be an R 2 -valued, centered, unit-variance Gaussian random field, where T and S are rectangles in R N. Define the excursion set A u (X, T ) A u (Y, S) = (t, s) T S : X(t) u, Y (s) u. We show in Theorem that under certain smoothness and regularity conditions, as u, P sup t T X(t) u, sup Y (s) u s S ( = Eϕ(A u (X, T ) A u (Y, S)) + o exp u ρ(t, S) αu2). where ρ(t, S) = sup t T,s S EX(t)Y (s). Let (X(t), Y (t)) : t T be an R 2 -valued, centered, unit-variance Gaussian process, where T = [a, b] is a finite interval in R. Define the excursion set A u (T, X Y ) = t T : (X Y )(t) u. We show in Theorem that under certain smoothness and regularity conditions, as u, P t T such that X(t) u, Y (t) u = P ( = Eϕ(A u (T, X Y )) + o exp sup t T (X Y )(t) u u ρ(t ) αu2), where ρ(t ) = sup t T EX(t)Y (t). 7

14 Chapter 2 Smooth Gaussian Random Fields with Stationary Increments 2.1 Gaussian Fields with Stationary Increments Let X = X(t) : t R N be a real-valued centered Gaussian random field with stationary increments. We assume that X has continuous covariance function C(t, s) = EX(t)X(s) and X(0) = 0. Then it is known [cf. Yaglom (1957)] that C(t, s) = R N (ei t,λ 1)(e i s,λ 1) F (dλ) + t, Θs (2.1.1) where x, y is the ordinary inner product in R N, Θ is an N N non-negative definite (or positive semidefinite) matrix and F is a non-negative symmetric measure on R N \0 which satisfies RN λ 2 F (dλ) <. (2.1.2) 1 + λ 2 Similarly to stationary random fields, the measure F and its density (if it exists) f(λ) are called the spectral measure and spectral density of X, respectively. 8

15 By (2.1.1) we see that X has the following stochastic integral representation X(t) = R N (ei t,λ 1)W (dλ) + Y, t, (2.1.3) where Y is an N-dimensional Gaussian random vector and W is a complex-valued Gaussian random measure (independent of Y) with F as its control measure. It is known that many probabilistic, analytic and geometric properties of a Gaussian field with stationary increments can be described in terms of its spectral measure F and, on the other hand, various interesting Gaussian random fields can be constructed by choosing their spectral measures appropriately. See Xiao (2009), Xue and Xiao (2011) and the references therein for more information. For simplicity we assume that Y = 0. It follows from (2.1.1) that the variogram ν of X is given by ν(h) := E(X(t + h) X(t)) 2 = 2 (1 cos h, λ )F (dλ). RN (2.1.4) Mean-square directional derivatives and sample path differentiability of Gaussian random fields have been well studied. See, for example, Adler (1981), Adler and Taylor (2007), Potthoff (2010), Xue and Xiao (2011). In particular, general sufficient conditions for a Gaussian random field to have a modification whose sample functions are in C k are given by Adler and Taylor (2007). For a Gaussian random field X = X(t) : t R N with stationary increments, Xue and Xiao (2011) provided conditions for its sample path differentiability in terms of the spectral density function f(λ). Similar arguments can be applied to give the spectral condition for the sample functions of X to be in C k (R N ). Definition [Adler and Taylor (2007, p.22)]. Let t, v 1,..., v k R N ; v = (v 1,..., v k ) k R N. 9

16 We say X has a kth-order L 2 partial derivative at t, in the direction v, which we denote by D v L2X(t), if the limit D v L2X(t) := lim h 1,...,h k 0 1 ki=1 h i G X (t, k ) h i v i i=1 exists in L 2, where G X (t, k i=1 h i v i ) is the symmetrized difference G X (t, k ) h i v i = i=1 s 0,1 k ( 1) k k i=1 s ix ( t + k s i h i v i ). (2.1.5) i=1 Remark Recall the fact that a sequence of random variables ξ n converges in L 2 if and only if Eξ n ξ m converges to a constant as n, m. It follows immediately that D v L 2X(t) exists in L 2 if and only if ( 1 lim ki=1 E G X t, h 1,...,h k,ĥ1,...,ĥk 0 h i ĥ i k i=1 h i v i ) G X (t, k ) ĥ i v i i=1 (2.1.6) exists. Let e 1, e 2,..., e N be the standard orthonormal basis of R N. If the direction v consists of k i many e i, 1 i N, and k = N i=1 k i, then we write D v L2X(t) simply as k X(t) t k. 1 1 tk N N Lemma Let X = X(t) : t R N be a real-valued centered Gaussian random field with stationary increments and let k = N i=1 k i. Then if 2k ν(0) t 2k 1 1 t 2k N N exists. k X(t) t k 1 1 tk N N exists in L 2 if and only Proof To simplify the notations, we only show the proof for k = 2 and the proof for general 10

17 k will be similar. By the definition of the symmetric difference G X in (2.1.5), 1 h 1 h 2 ĥ 1 ĥ 2 EG X (t, h 1 e i + h 2 e j )G X (t, ĥ1e i + ĥ2e j ) = 1 h 1 h 2 ĥ 1 ĥ 2 E[X(t + h 1 e i + h 2 e j ) X(t + h 1 e i ) X(t + h 2 e j ) + X(t)] (2.1.7) [X(t + ĥ1e i + ĥ2e j ) X(t + ĥ1e i ) X(t + ĥ2e j ) + X(t)]. Expanding the product above and applying the variogram ν defined in (2.1.4), we obtain that (2.1.7) becomes 1 2h 1 h 2 ĥ 1 ĥ 2 ν(h 1 e i + h 2 e j ĥ1e i ĥ2e j ) ν(h 1 e i + h 2 e j ĥ1e i ) ν(h 1 e i + h 2 e j ĥ2e j ) + ν(h 1 e i + h 2 e j ) ν(h 1 e i ĥ1e i ĥ2e j ) + ν(h 1 e i ĥ1e i ) + ν(h 1 e i ĥ2e j ) ν(h 1 e i ) ν(h 2 e j ĥ1e i ĥ2e j ) + ν(h 2 e j ĥ1e i ) + ν(h 2 e j ĥ2e j ) ν(h 2 e j ) + ν( ĥ1e i ĥ2e j ) (2.1.8) = ν( ĥ1e i ) ν( ĥ2e j ) + ν(0) 1 2h 1 h 2 ( ĥ1)( ĥ2) G ν(0, h 1 e i + h 2 e j + ( ĥ1)e i + ( ĥ2)e j ). Note that as h 1, h 2, ĥ1, ĥ2 0, the limit (if it exists) of the last term in (2.1.8) is just 4 ν(0) t 2 i t2, together with Remark 2.1.2, we obtain the desired result. j Proposition Let X = X(t) : t R N be a real-valued centered Gaussian random field with stationary increments and let k i (1 i N) be non-negative integers. If there is a constant ε > 0 such that N λ i 2k i +ε F (dλ) <, (2.1.9) λ >1 i=1 11

18 then X has a modification X such that the partial derivative k X(t) t k is continuous on 1 1 tk N N R N almost surely, where k = N i=1 k i. Moreover, T > 0 and η (0, ε 1), there exists a constant κ such that ( k X(t) E t k 1 1 tk N N k X(s) s k 1 1 sk N N ) 2 κ t s η, t, s [ T, T ] N. Proof Applying the dominated convergence theorem, 2k ν(0) t 2k 1 1 t 2k N N = R N λ2k 1 1 λ 2k N N F (dλ) = λ 2k 1 1 λ 2k N N F (dλ) + λ 2k 1 1 λ 2k N N F (dλ) λ 1 λ >1 λ 2 F (dλ) + λ 2k 1 1 λ 2k N N F (dλ) <, λ 1 λ >1 (2.1.10) where the last inequality is due to the requirement (2.1.2) and condition (2.1.9). By Lemma 2.1.3, the partial derivative k X(t) t k exists in L tk N N Next, we show that for any η (0, ε 1), there exists a constant κ such that Recall that ( k X(t) E t k 1 1 tk N N k X(s) s k 1 1 sk N N ) 2 κ t s η, t, s [ T, T ] N. (2.1.11) ( C(t, s) = e i t,λ R N 1 )( e i s,λ 1 ) F (dλ) = (cos t s, λ cos t, λ cos s, λ + 1)F (dλ), RN (2.1.12) 12

19 taking the derivative gives 2k C(t, s) t k 1 1 tk N N s k 1 1 sk N = R N λ2k 1 1 λ 2k N N cos t s, λ F (dλ). It follows that ( k X(t) E t k 1 1 tk N N ( k X(t) = E k X(s) ) 2 s k 1 1 sk N N ) 2 ( k X(s) + E s k 1 1 sk N N 1 λ 2k ( ) N N 1 cos t s, λ F (dλ). t k 1 1 tk N N = 2 R N λ2k 1 ) 2 ( k X(t) 2E t k 1 1 tk N N k X(s) s k 1 1 sk N N Let ŝ 0 = t, ŝ 1 = (s 1, t 2,..., t N ), ŝ 2 = (s 1, s 2, t 3..., t N ),..., ŝ N 1 = (s 1,..., s N 1, t N ) and ŝ N = s. Let h = s t := (h 1,..., h N ). Then, by Jensen s inequality, ) ( k X(t) E t k 1 1 tk N N N ( k X(s) s k 1 1 sk N N ) 2 k X(ŝ j ) N E j=1 s k 1 1 sk j j tk j+1 j+1 tk N N N ( = 2N 1 cos(hj j=1 R N λ j ) ) N λ i 2k if (dλ) i=1 N ( 2N 1 cos(hj λ j ) ) N λ i 2k if (dλ) j=1 λ 1 i=1 N ( + 2N 1 cos(hj λ j ) ) N λ i 2k if (dλ) j=1 λ >1 i=1 k X(ŝ j 1 ) s k 1 1 sk j 1 j 1 tk j j t k N N ) 2 (2.1.13) := I 1 + I 2. 13

20 Combining the result in (2.1.10) with the elementary inequality 1 cos x x 2 yields ( N ) I 1 2N h j 2 λ 2 F (dλ) c 1 t s 2 (2.1.14) j=1 λ 1 for some positive constant c 1. To bound the jth integral in I 2, we note that, when λ > 1, either λ j > 1/ N or there is j 0 j such that λ j0 > 1/ N. We break the integral according to these two possibilities. ( 1 cos(hj λ j ) ) N λ i 2k if (dλ) λ >1 i=1 ( λ j >1/ 1 cos(hj λ j ) ) N λ i 2k if (dλ) N i=1 + ( j 0 j λ j 1, λ j0 >1/ 1 cos(hj λ j ) ) N λ i 2k if (dλ) N i=1 (2.1.15) := I 3 + I 4. Combining condition (2.1.9) with the elementary inequality 1 cos x x 2 yields ( 1 cos(h j λ j ) N I 3 1/ N< λ j 1/ h j λ j ε λ j ε ) λ i 2k i F (dλ) i=1 ( 1 N + λ j >1/ h j λ j ε λ j ε ) λ i 2k i F (dλ) i=1 c 2 h j ε (2.1.16) for some positive constant c 2. Similarly, it is evident to check that I 4 c 3 h j 2 for some positive constant c 3. Therefore, the Höder condition for L 2 partial derivative in (2.1.11) holds, and then the desired result follows from Kolmogorov s continuity theorem. 14

21 For simplicity we will not distinguish X from its modification X. As a consequence of Proposition 2.1.4, we see that, if X = X(t) : t R N has a spectral density f(λ) which satisfies ( ) 1 f(λ) = O λ N+2k+H as λ, (2.1.17) for some integer k 1 and H (0, 1), then the sample functions of X are in C k (R N ) a.s. When X( ) C 2 (R N ) almost surely, we write X(t) t i = X i (t) and 2 X(t) t i t j = X ij (t). Denote by X(t) and 2 X(t) the column vector (X 1 (t),..., X N (t)) T and the N N matrix (X ij (t)) i,j=1,...,n, respectively. It follows from (2.1.1) that for every t R N, λ ij := R N λ iλ j F (dλ) = 2 C(t, s) s=t = EX t i s i (t)x j (t). (2.1.18) j Define the N N matrix Λ = (λ ij ) i,j=1,...,n, then (2.1.18) shows that Λ = Cov( X(t)) for all t. In particular, the distribution of X(t) is independent of t. Let λ ij (t) := R N λ iλ j cos t, λ F (dλ), Λ(t) := (λ ij (t)) i,j=1,...,n. Then we have λ ij (t) λ ij = R N λ iλ j (cos t, λ 1) F (λ) = 2 C(t, s) s=t = EX(t)X t i t ij (t), j or equivalently, Λ(t) Λ = EX(t) 2 X(t). Let T = N i=1 [a i, b i ] be a closed rectangle on R N, where a i < b i for all 1 i N and 0 / T (the case of 0 T will be discussed in Remark 2.4.1). In addition to the stationary increments, we will make use of the following conditions on X: 15

22 (H1). X( ) C 2 (T ) almost surely and its second derivatives satisfy the uniform mean-square Hölder condition: there exist constants L, η > 0 such that E(X ij (t) X ij (s)) 2 L t s 2η, t, s T, i, j = 1,..., N. (2.1.19) (H2). For every t T, the matrix Λ Λ(t) is non-degenerate. (H3). For every pair (t, s) T 2 with t s, the Gaussian random vector (X(t), X(t), X ij (t), X(s), X(s), X ij (s), 1 i j N) is non-degenerate. (H3 ). For every t T, (X(t), X(t), X ij (t), 1 i j N) is non-degenerate. Clearly, by Proposition 2.1.4, condition (H1) is satisfied if (2.1.17) holds for k = 2. Also note that (H3) implies (H3 ). We shall use conditions (H1), (H2) and (H3) to prove Theorems and Condition (H3 ) will be used for computing Eϕ(A u ) in Theorem The following lemma shows that for Gaussian fields with stationary increments, (H2) is equivalent to Λ Λ(t) being positive definite. Lemma For every t 0, Λ Λ(t) is non-negative definite. Hence, under (H2), Λ Λ(t) is positive definite. Proof Let t 0 be fixed. For any (a 1,..., a N ) R N \0, N a i a j (λ ij λ ij (t)) = i,j=1 R N ( N ) 2 a i λ i (1 cos t, λ ) F (λ). (2.1.20) i=1 16

23 Since ( N i=1 a i λ i ) 2 (1 cos t, λ ) 0 for all λ R N, (2.1.20) is always non-negative, which implies Λ Λ(t) is non-negative definite. If (H2) is satisfied, then all the eigenvalues of Λ Λ(t) are positive. This completes the proof. It follows from (2.1.20) that, if the spectral measure F is carried by a set of positive Lebesgue measure (i.e., there is a set B R N with positive Lebesgue measure such that F (B) > 0), then (H2) holds. Hence, (H2) is in fact a very mild condition for smooth Gaussian fields with stationary increments. Lemma and the following two lemmas indicate some significant properties of Gaussian fields with stationary increments. They will play important roles in later sections. Lemma For each t, X i (t) and X jk (t) are independent for all i, j, k; and EX ij (t)x kl (t) is symmetric in i, j, k, l. Proof By (2.1.1), one can verify that for t, s R N, EX i (t)x jk (s) = 3 C(t, s) = t i s j s k R N λ iλ j λ k sin t s, λ F (dλ), EX ij (t)x kl (s) = 4 C(t, s) t i t j s k s l = Letting s = t we obtain the desired results. R N λ iλ j λ k λ l cos t s, λ F (dλ). (2.1.21) It follows immediately from Lemma that the following result holds. Lemma Let A = (a ij ) 1 i,j N be a symmetric matrix, then S t (i, j, k, l) = E(A 2 X(t)A) ij (A 2 X(t)A) kl is a symmetric function of i, j, k, l. 17

24 2.2 The Mean Euler Characteristic The rectangle T = N i=1 [a i, b i ] can be decomposed into several faces of lower dimensions. We use the same notations as in Adler and Taylor (2007, p.134). A face J of dimension k, is defined by fixing a subset σ(j) 1,..., N of size k and a subset ε(j) = ε j, j / σ(j) 0, 1 N k of size N k, so that J = t = (t 1,..., t N ) T : a j < t j < b j if j σ(j), t j = (1 ε j )a j + ε j b j if j / σ(j). Denote by k T the collection of all k-dimensional faces in T, then the interior of T is given by T = N T and the boundary of T is given by T = N 1 k=0 J k T J. For J k T, denote by X J (t) and 2 X J (t) the column vector (X i1 (t),..., X ik (t)) T i 1,...,i k σ(j) and the k k matrix (X mn (t)) m,n σ(j), respectively. If X( ) C 2 (R N ) and it is a Morse function a.s. [cf. Definition in Adler and Taylor (2007)], then according to Corollary or page in Adler and Taylor (2007), the Euler characteristic of the excursion set A u = t T : X(t) u is given by N k ϕ(a u ) = ( 1) k ( 1) i µ i (J) (2.2.1) k=0 J k T i=0 with µ i (J) := #t J : X(t) u, X J (t) = 0, index( 2 X J (t)) = i, ε j X j(t) 0 for all j / σ(j), (2.2.2) where ε j = 2ε j 1 and the index of a matrix is defined as the number of its negative 18

25 eigenvalues. We also define µ i (J) := #t J : X(t) u, X J (t) = 0, index( 2 X J (t)) = i. (2.2.3) Let σ 2 t = Var(X(t)) and let σ2 T = sup t T σ 2 t be the maximum variance. For Gaussian fields with stationary increments, it follows from (2.1.4) that ν(t) = σ 2 t. For t J kt, where k 1, let Λ J = (λ ij ) i,j σ(j), Λ J (t) = (λ ij (t)) i,j σ(j), θ 2 t = Var(X(t) X J (t)), J 1,..., J N k = 1,..., N\σ(J), γ2 t = Var(X(t) X(t)), (2.2.4) E(J) = (t J1,..., t JN k ) R N k : t j ε j 0, j = J 1,..., J N k. Then for all t J, Λ J = Cov( X J (t)), Λ J (t) Λ J = EX(t) 2 X J (t). (2.2.5) Note that θ 2 t γ2 t for all t T and θ2 t = γ2 t if t N T. For t 0 T, then X J (t) is not defined, in this case we set θ 2 t as σ2 t by convention. Let C j(t) be the (1, j + 1) entry of (Cov(X(t), X(t))) 1, i.e. C j (t) = M 1,j+1 /detcov(x(t), X(t)), where M 1,j+1 is the cofactor of the (1, j + 1) entry, EX(t)X j (t), in the covariance matrix Cov(X(t), X(t)). Denote by H k (x) the Hermite polynomial of order k, i.e., H k (x) = ( 1) k e x2 /2 d k dx k (e x2 /2 ). Then the following identity holds [cf. Adler and Taylor (2007, p.289)]: H k (x)e x2 /2 dx = H k 1 (u)e u2 /2, (2.2.6) u 19

26 where u > 0 and k 1. For a matrix A, A denotes its determinant. Let R + = [0, ), R = (, 0] and Ψ(u) = (2π) 1/2 u e x2 /2 dx. The following lemma is an analogue of Lemma in Adler and Taylor (2007). It provides a key step for computing the mean Euler characteristic in Theorem 2.2.2, meanwhile, it has close connection with Theorem Lemma Let X = X(t) : t R N be a centered Gaussian random field with stationary increments satisfying (H1), (H2) and (H3 ). Then for each J k T with k 1, k E ( 1) i µ i (J) i=0 = ( 1) k (2π) (k+1)/2 Λ J 1/2 J Λ J Λ J (t) ( u θt k H k 1 )e u2 /(2θ t 2) dt. (2.2.7) θ t Proof Let D i be the collection of all k k matrices with index i. Recall the definition of µ i (J) in (2.2.3), thanks to (H1) and (H3 ), we can apply the Kac-Rice metatheorem [cf. Theorem or Corollary in Adler and Taylor (2007)] to get that the left hand side of (2.2.7) becomes k p X J (t) (0)dt ( 1) i E det 2 X J (t) 1 2 X J (t) D J i 1 X(t) u X J (t) = 0. i=0 (2.2.8) Note that on the event D i, the matrix 2 X J (t) has i negative eigenvalues, which implies ( 1) i det 2 X J (t) = det 2 X J (t). Also, k i=0 2 X J (t) D i = Ω a.s., hence (2.2.8) 20

27 equals p X J (t) (0)dt Edet 2 X J (t)1 X(t) u X J (t) = 0 J e x2 /(2θ t 2) = J (2π) (k+1)/2 Λ J 1/2 dt dx Edet 2 X J (t) X(t) = x, X J (t) = 0. θ t u (2.2.9) Now we turn to computing Edet 2 X J (t) X(t) = x, X J (t) = 0. By Lemma 2.1.5, under (H2), Λ Λ(t) and hence Λ J Λ J (t) are positive definite for every t J. Thus there exists a k k positive definite matrix Q t such that Q t (Λ J Λ J (t))q t = I k, (2.2.10) where I k is the k k identity matrix. By (2.2.5), EX(t)(Q t 2 X J (t)q t ) ij = (Q t (Λ J Λ J (t))q t ) ij = δ ij, where δ ij is the Kronecker delta function. One can write Edet(Q t 2 X J (t)q t ) X(t) = x, X J (t) = 0 = Edet (t, x), (2.2.11) where (t, x) = ( ij (t, x)) i,j σ(j) with all elements ij (t, x) being Gaussian variables. To study (t, x), we only need to find its mean and covariance. Note that X(t) and 2 X(t) 21

28 are independent by Lemma 2.1.6, then we apply Lemma to obtain E ij (t, x) = E(Q t 2 X J (t)q t ) ij X(t) = x, X J (t) = 0 = (EX(t)(Q t 2 X J (t)q t ) ij, 0,..., 0)(Cov(X(t), X J (t))) 1 (x, 0,..., 0) T (2.2.12) = ( δ ij, 0,..., 0)(Cov(X(t), X J (t))) 1 (x, 0,..., 0) T = x θt 2 δ ij, where the last equality comes from the fact that the (1, 1) entry of (Cov(X(t), X J (t))) 1 is detcov( X J (t))/detcov(x(t), X J (t)) = 1/θt 2. For the covariance, applying Lemma again gives E( ij (t, x) E ij (t, x))( kl (t, x) E kl (t, x)) = E(Q t 2 X J (t)q t ) ij (Q t 2 X J (t)q t ) kl (EX(t)(Q t 2 X J (t)q t ) ij, 0,..., 0) (Cov(X(t), X J (t))) 1 (EX(t)(Q t 2 X J (t)q t ) kl, 0,..., 0) T = S t (i, j, k, l) ( δ ij, 0,..., 0)(Cov(X(t), X J (t))) 1 ( δ kl, 0,..., 0) T = S t (i, j, k, l) δ ijδ kl θt 2, where S t is a symmetric function of i, j, k, l by applying Lemma with A replaced by Q t. Therefore (2.2.11) becomes 1 E θt k det(θ t Q t ( 2 X J (t))q t ) X(t) = x, X J (t) = 0 = 1 ( ) θt k E det (t) xθt I k, where (t) = ( ij (t)) i,j σ(j) and all ij (t) are Gaussian variables satisfying E ij (t) = 0, E ij (t) kl (t) = θ 2 t S t(i, j, k, l) δ ij δ kl. 22

29 By Corollary in Adler and Taylor (2007), (2.2.11) is equal to ( 1) k θt k H k (x/θ t ), hence Edet 2 X J (t) X(t) = x, X J (t) = 0 = Edet(Q 1 t Q t 2 X J (t)q t Q 1 t ) X(t) = x, X J (t) = 0 = Λ J Λ J (t) Edet(Q t 2 X J (t)q t ) X(t) = x, X J (t) = 0 = ( 1)k ( x θt k Λ J Λ J (t) H k ). θ t Plugging this into (2.2.9) and applying (2.2.6), we obtain the desired result. Theorem Let X = X(t) : t R N be a centered Gaussian random field with stationary increments such that (H1), (H2) and (H3 ) are fulfilled. Then Eϕ(A u ) = N 1 P(X(t) u, X(t) E(t)) + (2π) t 0 T k=1 J k T k/2 Λ J 1/2 Λ dt dx dy J1 dy J Λ J (t) JN k J u E(J) γt k ( x ) H k + γ t C γ J1 (t)y J1 + + γ t C JN k (t)y JN k t (2.2.13) p X(t),XJ1 (t),...,x JN k (t) (x, y J 1,..., y JN k X J (t) = 0). Proof According to Corollary in Adler and Taylor (2007), (H1) and (H3 ) imply that X is a Morse function a.s. It follows from (2.2.1) that Eϕ(A u ) = N k=0 J k T ( 1) k k E ( 1) i µ i (J). (2.2.14) i=0 If J 0 T, say J = t, it turns out Eµ 0 (J) = P(X(t) u, X(t) E(t)). If J k T with k 1, we apply the Kac-Rice metatheorem to obtain that the expectation on the right 23

30 hand side of (2.2.14) becomes k p X J (t) (0)dt ( 1) i E det 2 X J (t) 1 2 X J (t) D J i 1 (X J1 (t),...,x JN k (t)) E(J) i=0 = 1 (2π) k/2 Λ J 1/2 J 1 X(t) u X J (t) = 0 dt dx dy J1 dy JN k u E(J) Edet 2 X J (t) X(t) = x, X J1 (t) = y J1,..., X JN k (t) = y JN k, X J (t) = 0 p X(t),XJ1 (t),...,x JN k (t) (x, y J 1,..., y JN k X J (t) = 0). (2.2.15) For fixed t, let Q t be the positive definite matrix in (2.2.10). Then, similarly to the proof in Lemma 2.2.1, we can write Edet(Q t 2 X J (t)q t ) X(t) = x, X J1 (t) = y J1,..., X JN k = y JN k, X J (t) = 0 as Edet (t, x), where (t, x) is a matrix consisting of Gaussian entries ij (t, x) with mean E(Q t 2 X J (t)q t ) ij X(t) = x, X J1 (t) = y J1,..., X JN k = y JN k, X J (t) = 0 = ( δ ij, 0,..., 0)(Cov(X(t), X J1 (t),..., X JN k (t), X J (t))) 1 (x, y J1,..., y JN k, 0,..., 0) T = δ ij γt 2 (x + γt 2 C J 1 (t)y J1 + + γt 2 C J (t)y N k JN k ), (2.2.16) 24

31 and covariance E( ij (t, x) E ij (t, x))( kl (t, x) E kl (t, x)) = S t (i, j, k, l) δ ijδ kl γt 2. Following the same procedure in the proof of Lemma 2.2.1, we obtain that the last conditional expectation in (2.2.15) is equal to ( 1) k Λ J Λ J (t) ( x γt k H k + γ t C γ J1 (t)y J1 + + γ t C JN k (t)y JN k ). (2.2.17) t Plug this into (2.2.15) and (2.2.14), yielding the desired result. Remark Usually, for nonstationary (including constant-variance) Gaussian field X on R N, its mean Euler characteristic involves at least the third-order derivatives of the covariance function. For Gaussian random fields with stationary increments, as shown in Lemma 2.1.6, EX ij (t)x k (t) = 0 and EX ij (t)x kl (t) is symmetric in i, j, k, l, so the mean Euler characteristic becomes relatively simpler, contains only up to the second-order derivatives of the covariance function. In various practical applications, (2.2.13) could be simplified with only an exponentially smaller difference, see the discussions in Section

32 2.3 Excursion Probability As in Section 3.1, we decompose T into several faces as T = N k=0 k T = N k=0 J k T J. For each J k T, define the number of extended outward maxima above level u as M E u (J) := #t J : X(t) u, X J (t) = 0, index( 2 X J (t)) = k, ε j X j(t) 0 for all j / σ(j). In fact, M E u (J) is the same as µ k (J) defined in (2.2.2) with i = k. We will make use of the following lemma. Lemma Let X = X(t) : t R N be a Gaussian random field satisfying (H1) and (H3 ), then for any u > 0, sup t T X(t) u = N k=0 J k T M E u (J) 1 a.s. Proof By the definition of M E u (J), it is clear that sup t T X(t) u N k=0 J k T M E u (J) 1 a.s. Suppose sup t T X(t) u, since X(t) C 2 (R N ) a.s., there exists t 0 T such that X(t 0 ) = sup t T X(t). Without loss of generality, assume t 0 J k T. Note that t 0 is a local maximum restricted on J, thus X J (t 0 ) = 0 and 2 X J (t 0 ) is non-positive definite. Due to (H1) and (H3 ), we apply Lemma in Adler and Taylor (2007) to obtain that almost surely, det( 2 X J (t 0 )) 0 and hence index( 2 X J (t 0 )) = k. If ε j X j(t 0 ) < 0 for some j / σ(j), then we can find t 1 T such that X(t 1 ) > X(t 0 ), which contradicts 26

33 X(t 0 ) = sup t T X(t). Hence ε j X j(t 0 ) 0 for all j / σ(j). These indicate M E u (J) 1, therefore sup t T X(t) u N k=0 J k T M E u (J) 1 a.s., completing the proof. It follows from Lemma that P sup t T X(t) u N k=0 J k T PM E u (J) 1 N k=0 J k T EM E u (J). (2.3.1) On the other hand, by the Bonferroni inequality, P sup t T X(t) u N k=0 J k T PM E u (J) 1 J J PM E u (J) 1, M E u (J ) 1. Let p i = PM E u (J) = i, then PM E u (J) 1 = i=1 p i and EM E u (J) = i=1 ip i, it follows that EMu E (J) PMu E (J) 1 = (i 1)p i i=2 i(i 1) p 2 i = 1 2 EM u E (J)(Mu E (J) 1). i=2 27

34 Together with the obvious bound PM E u (J) 1, M E u (J ) 1 EM E u (J)M E u (J ), we obtain the following lower bound for the excursion probability, P sup t T X(t) u N k=0 J k T ( EMu E (J) 1 ) 2 EM u E (J)(Mu E (J) 1) J J EM E u (J)M E u (J ). (2.3.2) Define the number of local maxima above level u as M u (J) := #t J : X(t) u, X J (t) = 0, index( 2 X J (t)) = k, then obviously M u (J) M E u (J), and M u (J) is the same as µ k (J) defined in (2.2.3) with i = k. It follows similarly that N k=0 J k T N k=0 J k T EM u (J) P sup t T X(t) u ( EM u (J) 1 ) 2 EM u(j)(m u (J) 1) EM u (J)M u (J ). J J (2.3.3) We will use (2.3.1) and (2.3.2) to estimate the excursion probability for the general case, see Theorem Inequalities in (2.3.3) provide another method to approximate the excursion probability in some special cases, see Theorem The advantage of (2.3.3) is that the principal term induced by N k=0 J k T EM u(j) is much easier to compute compared with the one induced by N k=0 J k T EM u E (J). The following two lemmas provide the estimations for the principal terms in approximating the excursion probability. 28

35 Lemma Let X be a Gaussian field as in Theorem Then for each J k T with k 1, there exists some constant α > 0 such that EM u (J) = 1 (2π) (k+1)/2 Λ J 1/2 J Λ J Λ J (t) ( u θt k H k 1 )e u2 /(2θ t 2) dt(1 + o(e αu2 )). θ t (2.3.4) Proof Following the notations in the proof of Lemma 2.2.1, we obtain similarly that EM u (J) = J = dt J u p X J (t) (0)dt E det 2 X J (t) 1 2 X J (t) D k 1 X(t) u X J (t) = 0 dx ( 1)k e x2 /(2θ 2 t ) (2π) (k+1)/2 Λ J 1/2 θ t Edet 2 X J (t)1 2 X J (t) D k X(t) = x, X J (t) = 0. (2.3.5) Recall 2 X J (t) = Q 1 t Q t 2 X J (t)q t Q 1 t and we can write (2.2.12) as EQ t 2 X J (t)q t X(t) = x, X J (t) = 0 = x θt 2 I k. Make change of variables V (t) = Q t 2 X J (t)q t + x θt 2 I k, where V (t) = (V ij (t)) 1 i,j k. Then (V (t) X(t) = x, X J (t) = 0) is a Gaussian matrix whose mean is 0 and covariance is the same as that of (Q t 2 X J (t)q t X(t) = x, X J (t) = 0). Denote the density of Gaussian vectors ((V ij (t)) 1 i j k X(t) = x, X J (t) = 0) by 29

36 h t (v), v = (v ij ) 1 i j k R k(k+1)/2, then Edet(Q t 2 X J (t)q t )1 2 X J (t) D k X(t) = x, X J (t) = 0 = Edet(Q t 2 X J (t)q t )1 Qt 2 X J (t)q t D k X(t) = x, X J (t) = 0 ( = v:(v ij ) x det (v ij ) x θ t 2 I k D k θt 2 I k )h t (v) dv, (2.3.6) where (v ij ) is the abbreviation of matrix (v ij ) 1 i,j k. Since θ 2 t : t T is bounded, there exists a constant c > 0 such that (v ij ) x ( k ) 1/2 θt 2 I k D k, (v ij ) := vij 2 < x c. i,j=1 Thus we can write (2.3.6) as ( det (v Rk(k+1)/2 ij ) x ( θt 2 I k )h t (v)dv v:(v ij ) x det (v ij ) x θ t 2 I k / D k θt 2 I k )h t (v) dv = Edet(Q t 2 X J (t)q t ) X(t) = x, X J (t) = 0 + Z(t, x), (2.3.7) where Z(t, x) is the second integral in the first line of (2.3.7) and it satisfies Z(t, x) (v ij ) x c ( det (v ij ) x ) θt 2 I k h t (v)dv. Denote by G(t) the covariance matrix of ((V ij (t)) 1 i j k X(t) = x, X J (t) = 0), then by Lemma in the Appendix, the eigenvalues of G(t) and hence those of (G(t)) 1 are bounded for all t T. It follows that there exists some constant α > 0 such that 30

37 h t (v) = o(e α (v ij ) 2 ) and hence Z(t, x) = o(e αx2 ) for some constant α > 0 uniformly for all t T. Combine this with (2.3.5), (2.3.6), (2.3.7) and the proof of Lemma 2.2.1, yielding the result. Lemma Let X be a Gaussian field as in Theorem Then for each J k T with k 1, there exists some constant α > 0 such that EM E u (J) = 1 (2π) k/2 Λ J 1/2 dt J u dx dy J1 dy JN k E(J) Λ ( ) J Λ J (t) x γt k H k + γ t C γ J1 (t)y J1 + + γ t C JN k (t)y JN k t p X(t),XJ1 (t),...,x JN k (t) (x, y J 1,..., y JN k X J (t) = 0)(1 + o(e αu2 )). (2.3.8) Proof Under the notations in the proof of Theorem 2.2.2, applying the Kac-Rice formula, we see that EM E u (J) equals p X J (t) (0)dt E det 2 X J (t) 1 2 X J (t) D J k 1 X(t) u = 1 (XJ1 (t),,x JN k (t)) E(J) X J (t) = 0 ( 1) k (2π) k/2 Λ J 1/2 J dt u dx dy J1 dy JN k E(J) Edet 2 X J (t)1 2 X J (t) D k X(t) = x, X J 1 (t) = y J1,, X JN k (t) = y JN k, X J (t) = 0p X(t),XJ1 (t),,x JN k (t) (x, y J 1,, y JN k X J (t) = 0). 31

38 Recall 2 X J (t) = Q 1 t Q t 2 X J (t)q t Q 1 t and we can write (2.2.16) as EQ t 2 X J (t)q t X(t) = x, X J1 (t) = y J1,, X JN k (t) = y JN k, X J (t) = 0 ( x = γt 2 + C J1 (t)y J1 + + C JN k (t)y JN k )I k. Make change of variables W (t) = Q t 2 X J (t)q t + x γt 2 I k, where W (t) = (W ij (t)) 1 i,j k. Denote the density of ((W ij (t)) 1 i j k X(t) = x, X J1 (t) = y J1,, X JN k (t) = y JN k, X J (t) = 0) by f t,yj1,,y JN k (w), w = (w ij ) 1 i j k R k(k+1)/2. Similarly to the proof in Lemma 2.3.2, to estimate E det 2 X(t)=x, X X J (t)1 J (t)=0, 2 X J (t) D k X J1 (t)=y J1,,X JN k (t)=y, JN k 32

39 we will get an expression similar to (2.3.7) with Z(t, x) replaced by Z(t, x, y J1,, y JN k ). Then, similarly, we have I(t, x) := dy J1 dy JN k p X(t),XJ1 (t),,x E(J) JN k (t) (x, y J 1,, y JN k X J (t) = 0) Z(t, x, y J1,, y JN k ) dy J1 dy JN k p X(t),XJ1 (t),,x E(J) JN k (t) (x, y J 1,, y JN k ( X J (t) = 0) (w ij ) x det (w ij ) x ) γ c t 2 I k f t,yj1,,y JN k (w)dw ( p X(t) (x X J (t) = 0) (w ij ) x det (w ij ) x ) γ c t 2 I k f t (w)dw, where the last inequality comes from replacing the integral region E(J) by R N k, and f t (w) is the density of ((W ij (t)) 1 i j k X(t) = x, X J (t) = 0). Hence by the same discussions in the proof of Lemma 2.3.2, I(t, x) = o(e αu2 u 2 /(2σ T 2 ) ) uniformly for all t T and some constant α > 0. Combining the proofs of Lemma and Theorem 2.2.2, we obtain the result. We call a function h(u) super-exponentially small (when compared with P(sup t T X(t) u)), if there exists a constant α > 0 such that h(u) = o(e αu2 u 2 /(2σ 2 T ) ) as u. The following lemma is Lemma 4 in Piterbarg (1996b). It shows that the factorial moments are usually super-exponentially small. Lemma Let X(t) : t R N be a centered Gaussian field satisfying (H1) and (H3). Then for any ε > 0, there exists ε 1 > 0 such that for any J k T and u large enouth, EM u (J)(M u (J) 1) e u2 /(2β 2 J +ε) + e u2 /(2σ 2 J ε 1 ), 33

40 where β 2 J = sup t J sup e S k 1 Var(X(t) X J (t), 2 X J (t)e) and σ 2 J = sup t J Var(X(t)). Here S k 1 is the (k 1)-dimensional unit sphere. Corollary Let X = X(t) : t R N be a centered Gaussian random field with stationary increments satisfying (H1), (H2) and (H3). Then for all J k T, EM u (J)(M u (J) 1) and EM E u (J)(M E u (J) 1) are super-exponentially small. Proof Since M E u (J) M u (J), we only need to show that EM u (J)(M u (J) 1) is superexponentially small. If k = 0, then M u (J) is either 0 or 1 and hence EM u (J)(M u (J) 1) = 0. If k 1, then, thanks to Lemma 2.3.4, it suffices to show that β 2 J is strictly less than σ2 T. Clearly, Var(X(t) X J (t), 2 X J (t)e) σt 2. Applying Lemma yields that Var(X(t) X J (t), 2 X J (t)e) = σ 2 T EX(t)( 2 X J (t)e) = 0. Note that the right hand side above is equivalent to (Λ J (t) Λ J )e = 0. By (H2), Λ J (t) Λ J is negative definite, which implies (Λ J (t) Λ J )e 0 for all e S k 1, so that sup Var(X(t) J X(t), 2 e S k 1 J X(t)e) < σ2 T. Therefore β 2 J < σ2 T by continuity. The following lemma shows that the cross terms in (2.3.2) and (2.3.3) are super-exponentially small if the two faces are not adjacent. For the case when the faces are adjacent, the proof is more technical, see the proofs in Theorems and Lemma Let X = X(t) : t R N be a centered Gaussian random field with stationary increments satisfying (H1) and (H3). Let J and J be two faces of T such that their 34

41 distance is positive, i.e., inf t J,s J s t > δ 0 for some δ 0 > 0, then EM u (J)M u (J ) is super-exponentially small. Proof We first consider the case when dim(j) = k 1 and dim(j ) = k 1. By the Kac-Rice metatheorem for higher moments (the proof is the same as that of Theorem in Adler and Taylor (2007)), EM u (J)M u (J ) = J dt J ds E det 2 X J (t) det 2 X J (s) 1 X(t) u,x(s) u 1 2 X J (t) D k, 2 X J (s) D k X(t) = x, X(s) = y, X J (t) = 0, X J (s) = 0p X(t),X(s), X J (t), X (x, y, 0, 0) J (s) dt J J ds dx dy E det 2 X J (t) det 2 X J (s) u u X(t) = x, X(s) = y, X J (t) = 0, X J (s) = 0p X(t),X(s) (x, y) p X J (t), X (0, 0 X(t) = x, X(s) = y). J (s) (2.3.9) Note that the following two inequalities hold: for constants a i and b j, k a i i=1 k j=1 b j 1 ( k k + k a i k+k + i=1 k b j k+k ) ; j=1 and for any Gaussian variable ξ and positive integer l, E ξ l E( Eξ + ξ Eξ ) l 2 l ( Eξ l + E ξ Eξ l ) 2 l ( Eξ l + C l (Var(ξ)) l/2 ), 35

42 where the constant C l depends only on l. Combining these two inequalities with Lemma 2.5.1, we get that there exist some positive constants C 1 and N 1 such that for large x and y, sup t J,s J E det 2 X J (t) det 2 X J (s) X(t) = x, X(s) = y, X J (t) = 0, X J (s) = 0 C 1 x N 1y N 1. (2.3.10) Also, there exists a positive constant C 2 such that sup p t J,s J X J (t), X (0, 0 X(t) = x, X(s) = y) J (s) sup t J,s J (2π) (k+k )/2 [detcov( X J (t), X J (s) X(t) = x, X(s) = y)] 1/2 C 2. (2.3.11) Let ρ(δ 0 ) = sup s t >δ0 EX(t)X(s) σ t σs which is strictly less than 1 due to (H3), then ε > 0, there exists a positive constant C 3 such that for all t J, s J and u large enough, x N 1y N 1p X(t),X(s) (x, y)dxdy = E[X(t)X(s)] N 11 X(t) u,x(s) u u u ( E[X(t) + X(s)] 2N 11 X(t)+X(s) 2u C 3 exp εu 2 u 2 ) (1 + ρ(δ 0 ))σt 2. (2.3.12) Combine (2.3.9) with (2.3.10), (2.3.11) and (2.3.12), yielding that EM E u (J)M E u (J ) is super-exponentially small. 36

43 When only one of the faces, say J, is a singleton, then let J = t 0 and we have EM u (J)M u (J ) J ds dx u u dy p X(t0 ),X(s), X (x, y, 0) J (s) E det 2 X J (s) X(t 0 ) = x, X(s) = y, X J (s) = 0. (2.3.13) Following the previous discussions yields that EM u (J)M u (J ) is super-exponentially small. Finally, if both J and J are singletons, then EM u (J)M u (J ) becomes the joint probability of two Gaussian variables exceeding level u and hence is trivial. Theorem Let X = X(t) : t R N be a centered Gaussian random field with stationary increments such that (H1), (H2) and (H3) are fulfilled. Suppose that for any face J, t J : ν(t) = σt 2, ν j(t) = 0 for some j / σ(j) =. (2.3.14) Then there exists some constant α > 0 such that P sup t T X(t) u = N EM u (J) + o(e αu2 u 2 /(2σ 2 T ) ) k=0 J k T = ( u ) N 1 Ψ + σ t (2π) t 0 T k=1 J k T (k+1)/2 Λ J 1/2 Λ J Λ J (t) ( u J θt k H k 1 )e u2 /(2θ t 2) dt + o(e αu2 u 2 /(2σ T 2 ) ). θ t (2.3.15) Proof Since the second equality in (2.3.15) follows from Lemma directly, we only need to prove the first one. By (2.3.3) and Corollary 2.3.5, it suffices to show that the last term in (2.3.3) is super-exponentially small. Thanks to Lemma 2.3.6, we only need to consider 37

44 the case when the distance of J and J is 0, or I := J J. Without loss of generality, assume σ(j) = 1,..., m, m + 1,..., k, σ(j ) = 1,..., m, k + 1,..., k + k m, (2.3.16) where 0 m k k N and k 1. If k = 0, we consider σ(j) = by convention. Under such assumption, J k T, J k T and dim(i) = m. Case 1: k = 0, i.e. J is a singleton, say J = t 0. If ν(t 0 ) < σt 2, then by (2.3.13), it is trivial to show that EM u (J)M u (J ) is super-exponentially small. Now we consider the case ν(t 0 ) = σ 2 T. Due to (2.3.14), EX(t 0)X 1 (t 0 ) 0 and hence by continuity, there exists δ > 0 such that EX(s)X 1 (s) = 0 for all s t 0 δ. It follows from (2.3.13) that EM u (J)M u (J ) is bounded from above by s J ds dx dy E det 2 X J (s) X(t 0 ) = x, X(s) = y, X J (s) = 0 : s t 0 >δ u u + s J ds : s t 0 δ u := I 1 + I 2. p X(t0 ),X(s), X (x, y, 0) J (s) dy E det 2 X J (s) X(s) = y, X J (s) = 0p X(s), X J (y, 0) (s) Following the proof of Lemma yields that I 1 is super-exponentially small. We apply Lemma to obtain that there exists ε 0 > 0 such that sup Var(X(s) X J (s)) sup Var(X(s) X 1 (s)) σ s J : s t 0 δ s J T 2 ε 0. : s t 0 δ Then I 2 and hence EM u (J)M u (J ) are super-exponentially small. 38

45 Case 2: k 1. For all t I with ν(t) = σ 2 T, by assumption (2.3.14), EX(t)X i(t) 0, i = m + 1,..., k + k m. Note that I is a compact set, by Lemma and the uniform continuity of conditional variance, there exist ε 1, δ 1 > 0 such that sup Var(X(t) X m+1(t),..., X k (t), X k+1 (s),..., X k+k t B,s B m (s)) σt 2 ε 1, (2.3.17) where B = t J : dist(t, I) δ 1 and B = s J : dist(s, I) δ 1. It follows from (2.3.9) that EM u (J)M u (J ) is bounded by (J J )\(B B dtds dx ) u u dy p X(t),X(s), X J (t), X (x, y, 0, 0) J (s) E det 2 X J (t) det 2 X J (s) X(t) = x, X(s) = y, X J (t) = 0, X J (s) = 0 + B B dtds dx p X(t) (x X J (t) = 0, X J (s) = 0)p X J (t), X (0, 0) u J (s) E det 2 X J (t) det 2 X J (s) X(t) = x, X J (t) = 0, X J (s) = 0 := I 3 + I 4. Note that (J J )\(B B ) = ((J\B) B ) ( ) ( ) B (J\B) (J\B) (J\B). (2.3.18) Since each product set on the right hand side of (2.3.18) consists of two sets with positive distance, following the proof of Lemma yields that I 3 is super-exponentially small. For I 4, taking into account (2.3.17), one has sup t B,s B Var( X(t) X J (t), X J (s) ) σ 2 T ε 1. (2.3.19) 39

Gaussian Random Fields: Excursion Probabilities

Gaussian Random Fields: Excursion Probabilities Gaussian Random Fields: Excursion Probabilities Yimin Xiao Michigan State University Lecture 5 Excursion Probabilities 1 Some classical results on excursion probabilities A large deviation result Upper

More information

Gaussian Random Fields: Geometric Properties and Extremes

Gaussian Random Fields: Geometric Properties and Extremes Gaussian Random Fields: Geometric Properties and Extremes Yimin Xiao Michigan State University Outline Lecture 1: Gaussian random fields and their regularity Lecture 2: Hausdorff dimension results and

More information

Rice method for the maximum of Gaussian fields

Rice method for the maximum of Gaussian fields Rice method for the maximum of Gaussian fields IWAP, July 8, 2010 Jean-Marc AZAÏS Institut de Mathématiques, Université de Toulouse Jean-Marc AZAÏS ( Institut de Mathématiques, Université de Toulouse )

More information

Evgeny Spodarev WIAS, Berlin. Limit theorems for excursion sets of stationary random fields

Evgeny Spodarev WIAS, Berlin. Limit theorems for excursion sets of stationary random fields Evgeny Spodarev 23.01.2013 WIAS, Berlin Limit theorems for excursion sets of stationary random fields page 2 LT for excursion sets of stationary random fields Overview 23.01.2013 Overview Motivation Excursion

More information

Random Fields and Random Geometry. I: Gaussian fields and Kac-Rice formulae

Random Fields and Random Geometry. I: Gaussian fields and Kac-Rice formulae Random Fields and Random Geometry. I: Gaussian fields and Kac-Rice formulae Robert Adler Electrical Engineering Technion Israel Institute of Technology. and many, many others October 25, 2011 I do not

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3
