Gaussian Random Fields: Excursion Probabilities


Yimin Xiao, Michigan State University

Lecture 5: Excursion Probabilities

1. Some classical results on excursion probabilities
   - A large deviation result
   - Upper bounds via the entropy method
   - Asymptotic results (the double sum method)
2. Smooth Gaussian fields: excursion probability
   - The expected Euler characteristic approximation
3. Vector-valued Gaussian fields
   - Smooth case
   - Non-smooth case

Let $X = \{X(t), t \in T\}$ be a real-valued Gaussian random field, where $T$ is the index set. The excursion probability
\[
P\Big\{ \sup_{t \in T} X(t) \ge u \Big\}, \qquad u > 0,
\]
is important in probability, statistics and their applications. When $T \subseteq \mathbb{R}^N$ and $N = 1$, exact formulae for the excursion probability are available only in very few special cases. When $N > 1$, no exact formula is known.

5.1 Some classical results

Theorem 5.1 (Landau and Shepp, 1970; Marcus and Shepp, 1972). If $\{X(t), t \in T\}$ is a centered Gaussian random field with $\sup_{t \in T} X(t) < \infty$ a.s., then
\[
\lim_{u \to \infty} \frac{1}{u^2} \log P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} = -\frac{1}{2\sigma_T^2},
\]
where $\sigma_T^2 = \sup_{t \in T} E\big(X(t)^2\big)$.

The Borell-TIS inequality

Theorem 5.2 (Borell, 1975; Tsirelson, Ibragimov and Sudakov, 1976). Let $X = \{X(t), t \in T\}$ be a centered Gaussian process with a.s. bounded sample paths, and let $\|X\| = \sup_{t \in T} X(t)$. Then $E(\|X\|) < \infty$ and, for all $\lambda > 0$,
\[
P\big( \big| \|X\| - E(\|X\|) \big| > \lambda \big) \le 2 \exp\Big( -\frac{\lambda^2}{2\sigma_T^2} \Big).
\]
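
The following is a minimal Monte Carlo sketch (added here, not on the original slide) that checks the Borell-TIS bound for standard Brownian motion on $[0,1]$, for which $\sigma_T^2 = 1$ and $E(\|X\|) = \sqrt{2/\pi}$; the grid resolution and sample size are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of the Borell-TIS inequality for Brownian motion on
# [0, 1] (sigma_T^2 = sup_t Var(B(t)) = 1).  Grid resolution and sample
# size are illustrative choices.
rng = np.random.default_rng(0)
n_grid, n_samples = 1000, 20_000
dt = 1.0 / n_grid

increments = rng.normal(scale=np.sqrt(dt), size=(n_samples, n_grid))
sups = np.cumsum(increments, axis=1).max(axis=1)  # sup of each discretized path

mean_sup = sups.mean()            # approximates E(sup B) = sqrt(2/pi) ~ 0.80
for lam in (0.5, 1.0, 1.5, 2.0):
    empirical = np.mean(np.abs(sups - mean_sup) > lam)
    bound = 2.0 * np.exp(-lam**2 / 2.0)   # Borell-TIS bound with sigma_T = 1
    print(f"lambda={lam:3.1f}  empirical={empirical:.4f}  bound={bound:.4f}")
```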

Proof of Theorem 5.1

The Borell-TIS inequality immediately implies the upper bound in Theorem 5.1:
\[
\limsup_{u \to \infty} \frac{1}{u^2} \log P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} \le -\frac{1}{2\sigma_T^2}.
\]
The lower bound in Theorem 5.1 is easy.

Remark. The Borell-TIS inequality, combined with a partitioning argument, can lead to improved non-asymptotic upper bounds, as shown by the following result.
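
To spell out the step (a short derivation added here, not on the original slide): for $u > E(\|X\|)$, apply Theorem 5.2 with $\lambda = u - E(\|X\|)$,
\[
P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} \le P\big( \|X\| - E(\|X\|) \ge u - E(\|X\|) \big) \le 2 \exp\Big( -\frac{\big(u - E(\|X\|)\big)^2}{2\sigma_T^2} \Big),
\]
so that
\[
\frac{1}{u^2} \log P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} \le \frac{\log 2}{u^2} - \frac{\big(u - E(\|X\|)\big)^2}{2\sigma_T^2\, u^2} \longrightarrow -\frac{1}{2\sigma_T^2} \qquad (u \to \infty).
\]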

Upper bounds using the entropy method

For $\delta > 0$, set $T_\delta = \{ t \in T : E(X(t)^2) \ge \sigma_T^2 - \delta \}$.

Theorem 5.3 (Samorodnitsky, 1991; Talagrand, 1994). If there are constants $v \ge w \ge 1$ such that $N(T_\delta, d_X, \varepsilon) \le K \delta^{w} \varepsilon^{-v}$, where $d_X$ is the canonical metric of $X$, then for $u \ge 2\sigma_T \sqrt{w}$,
\[
P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} \le K \Big( \frac{u}{\sigma_T^2} \Big)^{v-w} \Psi\Big( \frac{u}{\sigma_T} \Big).
\]

Pickands' asymptotic theorem

Theorem 5.4 (Pickands, 1969; Qualls and Watanabe, 1973). Let $\{X(t), t \in [0,L]^N\}$ be a centered stationary Gaussian field with
\[
E\big( X(s) X(t) \big) = 1 - \|s-t\|^{\alpha} + o\big( \|s-t\|^{\alpha} \big)
\]
for a constant $\alpha \in (0,2]$. Then
\[
\lim_{u \to \infty} \frac{P\big\{ \sup_{t \in [0,L]^N} X(t) \ge u \big\}}{\psi(u)\, u^{2N/\alpha}} = H_{\alpha} L^N, \tag{1}
\]
where $\psi(u) = (2\pi)^{-1/2} u^{-1} \exp(-u^2/2)$ and $H_{\alpha}$ is Pickands' constant.

Recall that Pickands' constant is defined as
\[
H_{\alpha} = \lim_{A \to \infty} \frac{1}{A^N} \int_0^{\infty} e^{s}\, P\Big( \sup_{t \in [0,A]^N} \big( \chi(t) - \|t\|^{\alpha} \big) > s \Big)\, ds,
\]
where $\chi$ is a centered Gaussian field with covariance function
\[
E[\chi(t)\chi(s)] = \|t\|^{\alpha} + \|s\|^{\alpha} - \|t-s\|^{\alpha}
\]
(of fractional Brownian motion type). The only known values of $H_\alpha$ are $H_1 = 1$ and $H_2 = 1/\sqrt{\pi}$.
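
As an illustration (added here, not on the original slides), the sketch below compares a Monte Carlo estimate of the excursion probability for a stationary process on $[0,1]$ with covariance $r(t) = \exp(-t^2) = 1 - |t|^2 + o(|t|^2)$ (so $\alpha = 2$, $N = 1$, $L = 1$, $H_2 = 1/\sqrt{\pi}$) against the one-term Pickands approximation $H_2 L\, u^{2N/\alpha}\psi(u)$. At moderate levels the Monte Carlo estimate sits noticeably above this one-term asymptotic, which is consistent with the remark on a later slide that the expected Euler characteristic expansion is more accurate. The grid size, level and sample count are arbitrary choices.

```python
import numpy as np

# Monte Carlo vs. the one-term Pickands approximation (1) for a stationary
# Gaussian process on [0, 1] with covariance exp(-t^2): alpha = 2, H_2 = 1/sqrt(pi).
rng = np.random.default_rng(1)
n_grid, u = 300, 3.0
t = np.linspace(0.0, 1.0, n_grid)
cov = np.exp(-(t[:, None] - t[None, :]) ** 2)
chol = np.linalg.cholesky(cov + 1e-8 * np.eye(n_grid))  # jitter for stability

hits, n_batches, batch = 0, 10, 50_000
for _ in range(n_batches):
    paths = rng.standard_normal((batch, n_grid)) @ chol.T
    hits += np.count_nonzero(paths.max(axis=1) >= u)
p_mc = hits / (n_batches * batch)

psi = np.exp(-u**2 / 2) / (np.sqrt(2 * np.pi) * u)          # psi(u) from Theorem 5.4
p_pickands = (1 / np.sqrt(np.pi)) * u ** (2 * 1 / 2) * psi  # H_2 * L * u^{2N/alpha} * psi(u)
print(f"Monte Carlo: {p_mc:.5f}   one-term Pickands approximation: {p_pickands:.5f}")
```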

Ideas for the proof of Theorem 5.4

Divide $[0,L]^N$ into $N_u$ small cubes $C_j$ of side length $u^{-2/\alpha}$, so that $N_u = L^N u^{2N/\alpha}$. Observe that
\[
P\Big\{ \sup_{t \in [0,L]^N} X(t) \ge u \Big\}
 = P\Big\{ \bigcup_{j=1}^{N_u} \Big\{ \sup_{t \in C_j} X(t) \ge u \Big\} \Big\}
 \le \sum_{j=1}^{N_u} P\Big\{ \sup_{t \in C_j} X(t) \ge u \Big\}
\]

and
\[
P\Big\{ \sup_{t \in [0,L]^N} X(t) \ge u \Big\}
 \ge \sum_{j=1}^{N_u} P\Big\{ \sup_{t \in C_j} X(t) \ge u \Big\}
 - \sum_{1 \le i < j \le N_u} P\Big\{ \sup_{t \in C_i} X(t) \ge u,\ \sup_{t \in C_j} X(t) \ge u \Big\}.
\]

Ideas for the proof

Prove that $\sum_{j=1}^{N_u} P\{ \sup_{t \in C_j} X(t) \ge u \}$ is the main term and that the double sum is negligible.

We recall one important step in the proof. Write
\[
P\Big\{ \sup_{t \in C_j} X(t) \ge u \Big\}
 = P\big\{ X(0) \ge u \big\}
 + \int_{-\infty}^{u} P\Big\{ \max_{t \in C_j} X(t) \ge u \,\Big|\, X(0) = x \Big\}\, \varphi(x)\, dx,
\]
where $\varphi$ is the density of the standard normal $N(0,1)$.

For any $a > 0$ and integer vector $n$, let
\[
I_u\big[ a n / u^{2/\alpha} \big] = \big\{ a k\, u^{-2/\alpha} : 0 \le k \le n \big\} := I_u.
\]
One can show that
\[
\lim_{u \to \infty} \frac{P\big\{ \max_{t \in I_u} X(t) \ge u \big\}}{\psi(u)}
 = 1 + \int_0^{\infty} e^{y}\, P\Big\{ \max_{0 \le k \le n} \big( \chi(a k) - \|a k\|^{\alpha} \big) > y \Big\}\, dy,
\]
where $\psi(u) = P\{ N(0,1) > u \}$.

Nonstationary case: a result for fBm

Theorem 5.5 (Talagrand, 1988). Let $B_H = \{B_H(t), t \in \mathbb{R}^N\}$ be a fractional Brownian motion with index $H \in (0,1)$. If $H > 1/2$, then for any $L > 0$,
\[
\lim_{u \to \infty} \frac{P\big\{ \sup_{t \in [0,L]^N} B_H(t) \ge u \big\}}{P\big\{ B_H(\mathbf{L}) \ge u \big\}}
 = \lim_{u \to \infty} \frac{P\big\{ \sup_{t \in [0,L]^N} B_H(t) \ge u \big\}}{\psi\big( u / (L\sqrt{N})^{H} \big)} = 1,
\]
where $\mathbf{L} = (L,\dots,L)$. This is clearly different from (1). The reason is that $E(B_H(t)^2)$ has a unique maximum on $[0,L]^N$, attained at $t = \mathbf{L}$.

5.2 Asymptotic expansion for smooth Gaussian fields

- The Rice method, initiated by Rice (1944) and developed by many others: see Adler (1981), Azaïs and Wschebor (2009).
- The Euler characteristic method, by Worsley (1995), Taylor, Takemura and Adler (2005), Taylor and Adler (2007).

The Euler characteristic method

Let $A_u = \{ t \in T : X(t) \ge u \}$ be the excursion set. A general conjecture is that the mean Euler characteristic of $A_u$ captures the behavior of $P\{ \sup_{t \in T} X(t) \ge u \}$. This conjecture is referred to as the Expected Euler Characteristic Heuristic, and it has been proven true in some cases. Before giving details, let us recall the notion of the Euler characteristic of a set.

The Euler characteristic method

Let $A \subset \mathbb{R}^N$ be a finite union of basic sets. The Euler characteristic (EC) $\varphi(A)$ can be defined as the unique function satisfying the following properties:
\[
\varphi(A) = \begin{cases} 0 & \text{if } A = \emptyset, \\ 1 & \text{if } A \text{ is basic (ball-like)}, \end{cases}
\qquad
\varphi(A \cup B) = \varphi(A) + \varphi(B) - \varphi(A \cap B).
\]
If $N = 1$, then the Euler characteristic of $A$ is $\varphi(A) = $ number of disjoint intervals in $A$. If $N = 2$, then $\varphi(A) = $ (number of connected components) $-$ (number of holes).

The Euler characteristic method

When $T = [0,L]$, $\varphi(A_u)$ behaves like the number of upcrossings of the level $u$ by the process $X(t)$, and $E\{\varphi(A_u)\}$ is similar to the Rice formula, which has long been used to approximate the excursion probability. If $T = [0,L]^N$ with $N \ge 2$, it is difficult to define upcrossings of the level $u$, and the Euler characteristic becomes a natural choice. One can also use other quantities, such as the expected number of local maxima, to approximate the excursion probability.
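
For intuition, here is a small added sketch (not from the slides) computing $\varphi(A_u)$ for $N = 1$ on a discretized sample path: the Euler characteristic is simply the number of disjoint grid intervals on which the path stays at or above the level.

```python
import numpy as np

def euler_char_1d(x: np.ndarray, u: float) -> int:
    """Euler characteristic of the excursion set {t : x(t) >= u} for a
    function sampled on a 1-D grid: the number of disjoint runs of grid
    points on which x >= u."""
    above = x >= u
    # A new interval starts wherever `above` switches from False to True,
    # plus one if the path already starts at or above the level.
    return np.count_nonzero(above[1:] & ~above[:-1]) + int(above[0])

# Example on a rough toy path (a discretized random walk).
rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(scale=0.05, size=2000))
print(euler_char_1d(x, u=0.5))
```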

Euler characteristic method

Theorem 5.6 (Taylor, Takemura and Adler, 2005). Let $X = \{X(t) : t \in T\}$ be a unit-variance smooth Gaussian field parameterized on a manifold $T$. Under certain conditions on the regularity of $X$ and the topology of $T$, there exists $\alpha_0 > 0$ such that, as $u \to \infty$,
\[
P\Big\{ \sup_{t \in T} X(t) \ge u \Big\}
 = E\{\varphi(A_u(X,T))\} \big( 1 + o\big( e^{-\alpha_0 u^2} \big) \big),
\]
where $\varphi(A_u(X,T))$ is the Euler characteristic of the excursion set $A_u(X,T) = \{ t \in T : X(t) \ge u \}$.

$E\{\varphi(A_u(X,T))\}$ can be computed via the Kac-Rice formula [cf. Adler and Taylor (2007)]:
\[
E\{\varphi(A_u(X,T))\} = C_0 \Psi(u) + \sum_{j=1}^{\dim(T)} C_j\, u^{j-1} e^{-u^2/2},
\]
where the $C_j$ are constants depending on $X$ and $T$. Compared with Pickands' approximation, this expansion is much more accurate, since the error decays exponentially fast. In fact, Pickands' approximation contains only one of the terms, the one involving $u^{N-1} e^{-u^2/2}$, in $E\{\varphi(A_u(X,T))\}$.

Example 5.1. Let $X$ be a smooth isotropic Gaussian field with unit variance and $T = [0,L]^N$. Then
\[
E\{\varphi(A_u(X,T))\}
 = \Psi(u) + \sum_{j=1}^{N} \binom{N}{j} \frac{L^j \lambda^{j/2}}{(2\pi)^{(j+1)/2}}\, H_{j-1}(u)\, e^{-u^2/2},
\]
where $\lambda = \operatorname{Var}(X_i(t))$ and the $H_{j-1}(u)$ are Hermite polynomials.
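
As a numerical illustration (added, not from the slides), the formula of Example 5.1 can be evaluated directly; the $H_{j-1}$ are assumed here to be the probabilists' Hermite polynomials ($H_1(u) = u$, $H_2(u) = u^2 - 1$, ...), and the parameter values are arbitrary.

```python
import numpy as np
from math import comb
from scipy.special import eval_hermitenorm   # probabilists' Hermite He_n
from scipy.stats import norm

def expected_ec(u, N, L, lam):
    """Evaluate the Example 5.1 expansion of E{phi(A_u)} for a smooth
    isotropic unit-variance field on [0, L]^N with lam = Var(X_i(t)).
    Assumes the H_{j-1} are probabilists' Hermite polynomials."""
    total = norm.sf(u)                        # Psi(u) = P{N(0,1) > u}
    for j in range(1, N + 1):
        coef = comb(N, j) * L**j * lam ** (j / 2) / (2 * np.pi) ** ((j + 1) / 2)
        total += coef * eval_hermitenorm(j - 1, u) * np.exp(-(u**2) / 2)
    return total

# Illustrative parameters (not from the slides).
for u in (2.0, 3.0, 4.0):
    print(u, expected_ec(u, N=2, L=1.0, lam=1.0))
```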

The constant-variance and isotropy conditions are too restrictive for many applications. Adler (2000, Section 7.3) listed non-stationary random fields as one of the main future research directions. We study the following questions:
- For Gaussian fields with stationary increments, how can one compute the mean Euler characteristic of their excursion sets?
- Can it still be used to approximate the excursion probability?
- What about excursion probabilities of vector-valued Gaussian random fields?

We have obtained some results on these questions; they are presented in the following papers:
- D. Cheng and Y. Xiao. Mean Euler characteristic approximation to excursion probability of Gaussian random fields. Ann. Appl. Probab. 26 (2016).
- D. Cheng and Y. Xiao. Excursion probability of smooth vector-valued Gaussian random fields. Preprint.
- Y. Zhou and Y. Xiao. Tail asymptotics of extremes for bivariate Gaussian random fields. Bernoulli, to appear.

In the following, we present some results from the first two papers.

5.3 Smooth Gaussian fields with stationary increments

Let $X = \{X(t), t \in \mathbb{R}^N\}$ be a centered Gaussian field with stationary increments and $X(0) = 0$. It can be represented as
\[
X(t) = \int_{\mathbb{R}^N} \big( e^{i\langle t, \lambda\rangle} - 1 \big)\, W(d\lambda),
\]
where $W$ is a complex-valued Gaussian random measure with control measure $F$ (the spectral measure of $X$), which satisfies
\[
\int_{\mathbb{R}^N} \big( 1 \wedge \|\lambda\|^2 \big)\, F(d\lambda) < \infty.
\]

Sufficient conditions for sample path differentiability in terms of the spectral measure of $X$ are known. For example, if the spectral density $f(\lambda)$ satisfies
\[
f(\lambda) = O\big( \|\lambda\|^{-(2H+N+2k)} \big) \qquad \text{as } \lambda \to \infty,
\]
where $k \ge 1$ and $H \in (0,1)$, then $X$ has a version $\tilde X$ such that $\tilde X(\cdot) \in C^k(\mathbb{R}^N)$ almost surely.

We will consider the case $k = 2$ and use the following notation:
\[
X_i(t) = \frac{\partial X(t)}{\partial t_i}, \qquad \nabla X(t) = (X_1(t),\dots,X_N(t)),
\]
\[
X_{ij}(t) = \frac{\partial^2 X(t)}{\partial t_i \partial t_j}, \qquad \nabla^2 X(t) = (X_{ij}(t))_{1 \le i,j \le N}.
\]
For Gaussian fields with stationary increments, we have $E\{X_i(t) X_{jk}(t)\} = 0$ for all $t \in \mathbb{R}^N$, i.e., $X_i(t)$ and $X_{jk}(t)$ are independent.

Let $T = \prod_{i=1}^{N} [a_i, b_i]$ be an $N$-dimensional rectangle. A face $J$ of dimension $k$ is defined by fixing a subset $\sigma(J) \subset \{1,\dots,N\}$ of size $k$ and a subset $\varepsilon(J) = \{\varepsilon_j, j \notin \sigma(J)\} \subset \{0,1\}^{N-k}$ of size $N-k$, so that
\[
J = \big\{ t \in T : a_j < t_j < b_j \text{ if } j \in \sigma(J),\ t_j = (1-\varepsilon_j) a_j + \varepsilon_j b_j \text{ if } j \notin \sigma(J) \big\}.
\]
If $k = 0$ then $\sigma(J) = \emptyset$ and the faces are the vertices; if $k = N$ there is only one face, the interior $\mathring{T}$. Let $\partial_k T$ be the collection of faces of dimension $k$ in $T$; then $\mathring{T}$ is the unique face in $\partial_N T$ and
\[
\partial T = \bigcup_{k=0}^{N-1} \bigcup_{J \in \partial_k T} J.
\]
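
To make the decomposition concrete, here is a small added sketch (not from the slides) that enumerates the faces of a rectangle by choosing $\sigma(J)$ and $\varepsilon(J)$; there are $\binom{N}{k} 2^{N-k}$ faces of dimension $k$.

```python
from itertools import combinations, product

def faces(a, b):
    """Enumerate the faces of prod_i [a_i, b_i] following the slide's
    parameterization: a k-dimensional face is given by a set sigma of
    'free' coordinates and a choice eps in {0, 1} for each fixed one."""
    N = len(a)
    out = []
    for k in range(N + 1):
        for sigma in combinations(range(N), k):
            fixed = [j for j in range(N) if j not in sigma]
            for eps in product((0, 1), repeat=len(fixed)):
                coords = {j: (a[j] if e == 0 else b[j]) for j, e in zip(fixed, eps)}
                out.append((k, set(sigma), coords))
    return out

# For the unit square [0, 1]^2: 4 vertices, 4 edges, 1 two-dimensional face.
fs = faces([0, 0], [1, 1])
for k in range(3):
    print(k, sum(1 for f in fs if f[0] == k))
```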

5.3.1 Mean Euler Characteristic

Morse's theorem (cf. Adler and Taylor, 2007) gives a formula for the Euler characteristic of the excursion set of $X$.

Theorem 5.7 (Morse's theorem). Let $X(t)$ be a Morse function a.s. Then
\[
\varphi(A_u(X,T)) = \sum_{k=0}^{N} (-1)^k \sum_{J \in \partial_k T} \sum_{i=0}^{k} (-1)^i \mu_i(J) \quad \text{a.s.},
\]
where
\[
\mu_i(J) = \#\big\{ t \in J : X(t) \ge u,\ \nabla X_{|J}(t) = 0,\ \operatorname{index}\big( \nabla^2 X_{|J}(t) \big) = i,\ \varepsilon_j^{*} X_j(t) \ge 0 \text{ for all } j \notin \sigma(J) \big\},
\]
with $\varepsilon_j^{*} = 2\varepsilon_j - 1$.

Example 5.3. Let $T = [0,1] = \{0\} \cup \{1\} \cup (0,1)$ and let $X(t)$ be a smooth function. Then
\[
\varphi(A_u(X,T)) = \mathbf{1}_{\{X(0) \ge u,\, X'(0) \le 0\}} + \mathbf{1}_{\{X(1) \ge u,\, X'(1) \ge 0\}}
\]
\[
\quad + \#\{ t \in (0,1) : X(t) \ge u,\ X'(t) = 0,\ X''(t) < 0 \}
 - \#\{ t \in (0,1) : X(t) \ge u,\ X'(t) = 0,\ X''(t) > 0 \}.
\]
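
As a quick check (an added sketch, not from the slides), this formula can be evaluated on a discretized smooth path by locating interior critical points through sign changes of the increments; on a fine grid it agrees with the interval count from the earlier sketch.

```python
import numpy as np

def euler_char_morse_1d(x: np.ndarray, u: float) -> int:
    """Evaluate the Example 5.3 formula on a fine 1-D grid: boundary terms
    plus (local maxima above u) minus (local minima above u), with critical
    points detected from sign changes of the discrete derivative."""
    dx = np.diff(x)
    s = np.sign(dx)
    maxima = (s[:-1] > 0) & (s[1:] < 0)     # derivative goes + -> -
    minima = (s[:-1] < 0) & (s[1:] > 0)     # derivative goes - -> +
    above = x[1:-1] >= u
    interior = np.count_nonzero(maxima & above) - np.count_nonzero(minima & above)
    boundary = int(x[0] >= u and dx[0] <= 0) + int(x[-1] >= u and dx[-1] >= 0)
    return boundary + interior

# A smooth toy path.
t = np.linspace(0.0, 1.0, 5000)
x = np.sin(9 * np.pi * t) + 0.4 * np.cos(3 * np.pi * t)
print(euler_char_morse_1d(x, u=0.8))
```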

Mean Euler Characteristic

Let $X = \{X(t), t \in \mathbb{R}^N\}$ be a centered Gaussian field with stationary increments and spectral density $f(\lambda)$. Assume:

(H1): $f(\lambda) = O\big( \|\lambda\|^{-(2H+N+4)} \big)$ for some $H \in (0,1)$.
(H2): For every $t \in T$, $\big( X(t), \nabla X(t), \nabla^2 X(t) \big)$ has a nondegenerate distribution.

Notation:
\[
\big( E\{X_i(t) X_j(t)\} \big)_{i,j=1,\dots,N} = (\lambda_{ij})_{i,j=1,\dots,N} = \Lambda,
\qquad
\big( E\{X(t) X_{ij}(t)\} \big)_{i,j=1,\dots,N} = \big( \lambda_{ij}(t) - \lambda_{ij} \big)_{i,j=1,\dots,N} = \Lambda(t) - \Lambda,
\]
where
\[
\lambda_{ij} = \int_{\mathbb{R}^N} \lambda_i \lambda_j\, f(\lambda)\, d\lambda, \qquad
\lambda_{ij}(t) = \int_{\mathbb{R}^N} \lambda_i \lambda_j \cos\langle t, \lambda\rangle\, f(\lambda)\, d\lambda.
\]

Define $\Lambda_J = (\lambda_{ij})_{i,j \in \sigma(J)}$, $\Lambda_J(t) = (\lambda_{ij}(t))_{i,j \in \sigma(J)}$ and
\[
\gamma_t^2 = \operatorname{Var}\big( X(t) \mid \nabla X(t) \big)
 = \frac{\det \operatorname{Cov}\big( X(t), \nabla X(t) \big)}{\det \operatorname{Cov}\big( \nabla X(t) \big)}.
\]
For $J \in \partial_k T$, we write $\{1,\dots,N\} \setminus \sigma(J) = \{J_1,\dots,J_{N-k}\}$ and let
\[
E(J) = \big\{ (t_{J_1},\dots,t_{J_{N-k}}) \in \mathbb{R}^{N-k} : \varepsilon_j^{*}\, t_j > 0,\ j = J_1,\dots,J_{N-k} \big\}.
\]
Let $C_j(t)$ be the $(1, j+1)$ entry of $\big( \operatorname{Cov}( X(t), \nabla X(t) ) \big)^{-1}$.

Theorem 5.8 (Cheng and X., 2016).
\[
E\{\varphi(A_u)\} = \sum_{t \in \partial_0 T} P\big( X(t) \ge u,\ \nabla X(t) \in E(\{t\}) \big)
 + \sum_{k=1}^{N} \sum_{J \in \partial_k T} \int_J \int_u^{\infty} \int_{E(J)}
 \frac{\det\big( \Lambda_J - \Lambda_J(t) \big)}{(2\pi)^{k/2}\, |\Lambda_J|^{1/2}\, \gamma_t^{k}}\,
 H_k\Big( \frac{x}{\gamma_t} + \gamma_t C_{J_1}(t)\, y_{J_1} + \cdots + \gamma_t C_{J_{N-k}}(t)\, y_{J_{N-k}} \Big)
\]
\[
\qquad\qquad \times\, p_t\big( x, y_{J_1},\dots,y_{J_{N-k}} \,\big|\, 0,\dots,0 \big)\, dy_{J_1} \cdots dy_{J_{N-k}}\, dx\, dt,
\]
where $p_t(\,\cdot \mid 0,\dots,0)$ is the conditional density of $\big( X(t), X_{J_1}(t),\dots,X_{J_{N-k}}(t) \big)$ given $\nabla X_{|J}(t) = 0$.

Remarks

The proof relies strongly on two properties of Gaussian fields with stationary increments:
(i) $X_i(t)$ and $X_{jk}(t)$ are independent;
(ii) $\big( E\{X(t) X_{ij}(t)\} \big)_{i,j} = \Lambda(t) - \Lambda$ is negative definite.

In many cases, the formula can be simplified at the cost of only a super-exponentially small difference.

5.3.2 Approximation to the excursion probability

Define the number of extended outward maxima above level $u$ by
\[
M_u^E(J) := \#\big\{ t \in J : X(t) \ge u,\ \nabla X_{|J}(t) = 0,\ \operatorname{index}\big( \nabla^2 X_{|J}(t) \big) = k,\ \varepsilon_j^{*} X_j(t) > 0 \text{ for all } j \notin \sigma(J) \big\}.
\]
Recall $T = \bigcup_{k=0}^{N} \partial_k T = \bigcup_{k=0}^{N} \bigcup_{J \in \partial_k T} J$. It can be shown that
\[
P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} = P\Big\{ \bigcup_{k=0}^{N} \bigcup_{J \in \partial_k T} \big\{ M_u^E(J) \ge 1 \big\} \Big\};
\]
see Azaïs and Delmas (2002).

By the Bonferroni inequality and Piterbarg (1996),
\[
\sum_{k=0}^{N} \sum_{J \in \partial_k T} E\{M_u^E(J)\}
 \;\ge\; P\Big\{ \sup_{t \in T} X(t) \ge u \Big\}
 \;\ge\; \sum_{k=0}^{N} \sum_{J \in \partial_k T} \Big( E\{M_u^E(J)\} - E\big\{ M_u^E(J)\big( M_u^E(J) - 1 \big) \big\} \Big)
 \;-\; \sum_{J \ne J'} E\big\{ M_u^E(J)\, M_u^E(J') \big\}.
\]

Extending the method in Azaïs and Delmas (2002), we prove:

Lemma 5.1. Under the conditions of Theorem 5.8, there exists some $\alpha > 0$ such that
\[
\sum_{k=0}^{N} \sum_{J \in \partial_k T} E\{M_u^E(J)\} = E\{\varphi(A_u)\} + o\big( e^{-\alpha u^2 - u^2/(2\sigma_T^2)} \big),
\]
where $\sigma_T^2 := \sup_{t \in T} \operatorname{Var}\big( X(t) \big)$.

The following theorem shows that the Expected Euler Characteristic Heuristic holds more generally.

Theorem 5.9 (Cheng and X., 2016). Let $X = \{X(t) : t \in \mathbb{R}^N\}$ be a centered Gaussian random field with stationary increments satisfying (H1), (H2) and

(H3): For all $t \ne s \in \mathbb{R}^N$, $\big( X(t), \nabla X(t), X_{ij}(t), X(s), \nabla X(s), \nabla^2 X(s),\ 1 \le i \le j \le N \big)$ has a nondegenerate distribution.

Then there exists $\alpha > 0$ such that
\[
P\Big\{ \sup_{t \in T} X(t) \ge u \Big\} = E\{\varphi(A_u)\} + o\big( e^{-\alpha u^2 - u^2/(2\sigma_T^2)} \big).
\]

Corollary 5.1. Under the conditions of Theorem 5.9 and an extra condition, $P\{\sup_{t \in T} X(t) \ge u\}$ equals
\[
\sum_{t \in \partial_0 T} \Psi\Big( \frac{u}{\sigma_t} \Big)
 + \sum_{k=1}^{N} \sum_{J \in \partial_k T} \int_J
 \frac{\det\big( \Lambda_J - \Lambda_J(t) \big)}{(2\pi)^{(k+1)/2}\, |\Lambda_J|^{1/2}\, \theta_t^{k}}\,
 H_{k-1}\Big( \frac{u}{\theta_t} \Big)\, e^{-u^2/(2\theta_t^2)}\, dt
 + o\big( e^{-\alpha u^2 - u^2/(2\sigma_T^2)} \big),
\]
where, for $t \in J$,
\[
\theta_t^2 = \operatorname{Var}\big( X(t) \mid \nabla X_{|J}(t) \big)
 = \frac{\det \operatorname{Cov}\big( X(t), \nabla X_{|J}(t) \big)}{\det \operatorname{Cov}\big( \nabla X_{|J}(t) \big)}.
\]

5.4 Vector-valued Gaussian fields

Consider a multivariate random field $X = \{X(t), t \in \mathbb{R}^N\}$ taking values in $\mathbb{R}^p$, defined by
\[
X(t) = \big( X_1(t), \dots, X_p(t) \big), \qquad t \in \mathbb{R}^N. \tag{2}
\]
Its key features are:
- the components $X_1,\dots,X_p$ are dependent;
- $X_1,\dots,X_p$ may have different smoothness properties.

Given subsets $T_1,\dots,T_p$ of $\mathbb{R}^N$, it is of interest to estimate the excursion probability
\[
P\Big\{ \max_{t \in T_1} X_1(t) \ge u_1, \dots, \max_{t \in T_p} X_p(t) \ge u_p \Big\} \tag{3}
\]
for certain threshold values $u_1,\dots,u_p$. For $T \subseteq \mathbb{R}^N$, another type of excursion probability for $X$ is
\[
P\big\{ \exists\, t \in T \text{ such that } X_i(t) \ge u_i,\ 1 \le i \le p \big\}. \tag{4}
\]
We focus on the excursion probabilities in (3) with $p = 2$.

Let $\{(X(t), Y(s)) : t \in T, s \in S\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian random field, where $T$ and $S$ are rectangles in $\mathbb{R}^N$. We are interested in the joint excursion probability
\[
P\Big\{ \sup_{t \in T} X(t) \ge u,\ \sup_{s \in S} Y(s) \ge u \Big\}.
\]
Only a few results are known; see Piterbarg (2000), Piterbarg and Stamatovic (2005) and Debicki et al. (2010).


5.4.1 The expected Euler characteristic method

We decompose $T$ and $S$ into several faces of lower dimensions:
\[
T = \bigcup_{k=0}^{N} \bigcup_{J \in \partial_k T} J, \qquad S = \bigcup_{l=0}^{N} \bigcup_{L \in \partial_l S} L.
\]
Similarly to the real-valued case,
\[
P\Big\{ \sup_{t \in T} X(t) \ge u,\ \sup_{s \in S} Y(s) \ge u \Big\}
 = P\Big\{ \bigcup_{k,l=0}^{N} \bigcup_{J \in \partial_k T,\, L \in \partial_l S} \big\{ M_u^E(X,J) \ge 1,\ M_u^E(Y,L) \ge 1 \big\} \Big\}.
\]

Upper Bound
\[
P\Big\{ \sup_{t \in T} X(t) \ge u,\ \sup_{s \in S} Y(s) \ge u \Big\}
 \le \sum_{k,l=0}^{N} \sum_{J \in \partial_k T,\, L \in \partial_l S} P\big\{ M_u^E(X,J) \ge 1,\ M_u^E(Y,L) \ge 1 \big\}
 \le \sum_{k,l=0}^{N} \sum_{J \in \partial_k T,\, L \in \partial_l S} E\big\{ M_u^E(X,J)\, M_u^E(Y,L) \big\}.
\]

Lower Bound
\[
P\Big\{ \sup_{t \in T} X(t) \ge u,\ \sup_{s \in S} Y(s) \ge u \Big\}
 \ge \sum_{k,l=0}^{N} \sum_{J \in \partial_k T,\, L \in \partial_l S} \Big( E\big\{ M_u^E(X,J)\, M_u^E(Y,L) \big\}
 - E\big\{ M_u^E(X,J) \big[ M_u^E(X,J) - 1 \big] M_u^E(Y,L) \big\}
 - E\big\{ M_u^E(Y,L) \big[ M_u^E(Y,L) - 1 \big] M_u^E(X,J) \big\} \Big)
 - \text{crossing terms}.
\]

Smoothness and regularity conditions

(H1'): $X, Y \in C^2$ a.s. and their second derivatives satisfy a uniform mean-square Hölder condition.

(H2'): For every $(t, t', s) \in T^2 \times S$ with $t \ne t'$, the vector $\big( X(t), \nabla X(t), \nabla^2 X(t), X(t'), \nabla X(t'), \nabla^2 X(t'), Y(s), \nabla Y(s), \nabla^2 Y(s) \big)$ is non-degenerate; and for every $(s, s', t) \in S^2 \times T$ with $s \ne s'$, the vector $\big( Y(s), \nabla Y(s), \nabla^2 Y(s), Y(s'), \nabla Y(s'), \nabla^2 Y(s'), X(t), \nabla X(t), \nabla^2 X(t) \big)$ is non-degenerate. (Here $\nabla^2$ stands for the vector of second-order derivatives $X_{ij}$, $1 \le i \le j \le N$.)

Smoothness and regularity conditions

Let $\rho(t,s) = E\{X(t)Y(s)\}$ and $\rho(T,S) = \sup_{t \in T, s \in S} \rho(t,s)$.

(H3'): For every $(t,s) \in T \times S$ such that $\rho(t,s) = \rho(T,S)$, the matrices
\[
\big( E\{X_{ij}(t) Y(s)\} \big)_{i,j \in \zeta(t,s)}, \qquad \big( E\{X(t) Y_{ij}(s)\} \big)_{i,j \in \zeta'(t,s)}
\]
are both negative semi-definite, where
\[
\zeta(t,s) = \{ n : E\{X_n(t) Y(s)\} = 0,\ 1 \le n \le N \}, \qquad
\zeta'(t,s) = \{ n : E\{X(t) Y_n(s)\} = 0,\ 1 \le n \le N \}.
\]

Theorem 5.10 (Cheng and X., 2016+). Under (H1')-(H3'), there exists $\alpha_0 > 0$ such that, as $u \to \infty$,
\[
P\Big\{ \sup_{t \in T} X(t) \ge u,\ \sup_{s \in S} Y(s) \ge u \Big\}
 = E\big\{ \varphi\big( A_u(X,T) \times A_u(Y,S) \big) \big\}
 + o\Big( \exp\Big( -\frac{u^2}{1 + \rho(T,S)} - \alpha_0 u^2 \Big) \Big),
\]
where
\[
A_u(X,T) \times A_u(Y,S) = \{ (t,s) \in T \times S : X(t) \ge u,\ Y(s) \ge u \}.
\]

5.5 The double sum method

Consider a non-smooth bivariate locally stationary Gaussian field $X(t) = (X_1(t), X_2(t))$. Define
\[
r_{ij}(s,t) := E\big[ X_i(s)\, X_j(t) \big], \qquad i, j = 1 \text{ or } 2. \tag{5}
\]
Let $\|t\| := \big( \sum_{j=1}^{N} t_j^2 \big)^{1/2}$ be the $\ell^2$-norm of a vector $t \in \mathbb{R}^N$.

Assumptions:

(i) $r_{ii}(s,t) = 1 - c_i \|t-s\|^{\alpha_i} + o(\|t-s\|^{\alpha_i})$, where $\alpha_i \in (0,2)$ and $c_i > 0$, for $i = 1, 2$.
(ii) $r_{ii}(s,t) < 1$ for all $\|t-s\| > 0$, $i = 1, 2$.
(iii) $r_{12}(s,t) = r_{21}(s,t) =: r(\|t-s\|)$, which means the cross-correlation is isotropic.
(iv) $r(\cdot) : [0,\infty) \to \mathbb{R}$ attains its maximum only at zero, with $r(0) = \rho \in (0,1)$, i.e., $r(t) < \rho$ for all $t > 0$. Moreover, we assume $r'(0) = 0$, $r''(0) < 0$, and that there exists $\eta > 0$ such that $r''(s)$ exists and is continuous for all $s \in [0,\eta]$.

Let $S, T \subset \mathbb{R}^N$ be bounded Jordan measurable sets (that is, the boundaries of $S$ and $T$ have Lebesgue measure 0).

Theorem 5.11 (Zhou and X., 2015). If $\operatorname{mes}_N(S \cap T) \ne 0$, then as $u \to \infty$,
\[
P\Big\{ \max_{s \in S} X_1(s) > u,\ \max_{t \in T} X_2(t) > u \Big\}
 = (2\pi)^{N/2} \big( -r''(0) \big)^{-N/2}\, c_1^{N/\alpha_1} c_2^{N/\alpha_2}\,
 (1+\rho)^{-N\left(\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 1\right)}
 \operatorname{mes}_N(S \cap T)\, H_{\alpha_1} H_{\alpha_2}\,
 u^{N\left(\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 1\right)}\, \Psi(u,\rho)\, (1 + o(1)),
\]
where $H_\alpha$ denotes Pickands' constant and
\[
\Psi(u,\rho) := \frac{(1+\rho)^2}{2\pi u^2 \sqrt{1-\rho^2}} \exp\Big( -\frac{u^2}{1+\rho} \Big).
\]

Two remarks about Theorem 5.11

- The rate of exponential decay is $\frac{u^2}{1+\rho}$, where $\rho$ is the maximum cross-correlation over $S \times T$.
- The extreme tail probability is proportional to the volume of the set $\{(s,s) : s \in S \cap T\}$, on which $(X_1(\cdot), X_2(\cdot))$ attains the maximum cross-correlation.
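
For intuition about the factor $\Psi(u,\rho)$ (an added sketch, not from the slides): up to a factor $1 + o(1)$, $\Psi(u,\rho)$ is the joint tail probability $P(\xi_1 > u,\, \xi_2 > u)$ of a single standard bivariate normal pair with correlation $\rho$, which a short Monte Carlo run can confirm; the level, correlation and sample size below are arbitrary choices.

```python
import numpy as np

def psi(u, rho):
    """The factor Psi(u, rho) from Theorem 5.11."""
    return (1 + rho) ** 2 / (2 * np.pi * u**2 * np.sqrt(1 - rho**2)) \
        * np.exp(-u**2 / (1 + rho))

# Monte Carlo estimate of P(xi_1 > u, xi_2 > u) for a standard bivariate
# normal pair with correlation rho, compared with Psi(u, rho).
rng = np.random.default_rng(3)
u, rho, n = 2.5, 0.5, 2_000_000
z = rng.standard_normal((n, 2))
x1 = z[:, 0]
x2 = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]
p_mc = np.mean((x1 > u) & (x2 > u))
print(f"Monte Carlo: {p_mc:.2e}   Psi(u, rho): {psi(u, rho):.2e}")
```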

If $\operatorname{mes}_N(S \cap T) = 0$, the above theorem fails, and the result depends on the dimension of $S \cap T$. Let
\[
S = S_{1,M} \times \prod_{j=M+1}^{N} [a_j, b_j], \qquad T = T_{2,M} \times \prod_{j=M+1}^{N} [h_j, k_j],
\]
where $0 \le M \le N-1$, $S_{1,M}$ and $T_{2,M}$ are $M$-dimensional Jordan sets with $\operatorname{mes}_M(S_{1,M} \cap T_{2,M}) \ne 0$, and $a_j \le b_j = h_j < k_j$ for $j = M+1,\dots,N$, so that $S$ and $T$ touch along an $M$-dimensional set.

Theorem 5.12 (Zhou and X., 2015). Under the above conditions, we have, as $u \to \infty$,
\[
P\Big\{ \max_{s \in S} X_1(s) > u,\ \max_{t \in T} X_2(t) > u \Big\}
 = (2\pi)^{M/2} \big( -r''(0) \big)^{-\frac{2N-M}{2}}\, c_1^{N/\alpha_1} c_2^{N/\alpha_2}\,
 \operatorname{mes}_M\big( S_{1,M} \cap T_{2,M} \big)\, H_{\alpha_1} H_{\alpha_2}
\]
\[
 \qquad\qquad \times\, (1+\rho)^{\,2N - M - \frac{2N}{\alpha_1} - \frac{2N}{\alpha_2}}\,
 u^{\,M + N\left(\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 2\right)}\, \Psi(u,\rho)\, (1 + o(1)).
\]

Example: The bivariate Matérn field

Multivariate stationary Matérn models $\{X(t), t \in \mathbb{R}^N\}$ as in (2), with marginal and cross-covariance functions of the form
\[
M(h \mid \nu, a) := \frac{2^{1-\nu}}{\Gamma(\nu)} \big( a \|h\| \big)^{\nu} K_{\nu}\big( a \|h\| \big)
\]
(with parameters $a, \nu$), have been introduced and studied by Gneiting, Kleiber and Schlather (2010), Apanasovich, Genton and Sun (2012), and Kleiber and Nychka (2013). Sometimes it is more convenient to work with the spectral density:
\[
f(\omega \mid \nu, a) = \frac{\Gamma\big( \nu + \frac{N}{2} \big)\, a^{2\nu}}{\Gamma(\nu)\, \pi^{N/2}} \cdot \frac{1}{\big( a^2 + \|\omega\|^2 \big)^{\nu + N/2}}.
\]

The bivariate Matérn field

Let $X(t) = (X_1(t), X_2(t))^T$ be an $\mathbb{R}^2$-valued Gaussian field whose covariance matrix is determined by
\[
C(h) = \begin{pmatrix} c_{11}(h) & c_{12}(h) \\ c_{21}(h) & c_{22}(h) \end{pmatrix}, \tag{6}
\]
where $c_{ij}(h) := E[X_i(s+h) X_j(s)]$ are specified by
\[
c_{11}(h) = \sigma_1^2 M(h \mid \nu_1, a_1), \quad c_{22}(h) = \sigma_2^2 M(h \mid \nu_2, a_2), \quad
c_{12}(h) = c_{21}(h) = \rho\, \sigma_1 \sigma_2 M(h \mid \nu_{12}, a_{12}) \tag{7}
\]
with $a_1, a_2, a_{12}, \sigma_1, \sigma_2 > 0$ and $\rho \in (-1,1)$.
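
For concreteness, here is a small added sketch (not from the slides) evaluating the Matérn correlation $M(h \mid \nu, a)$ and the entries of $C(h)$ in (6)-(7) with scipy; the parameter values are arbitrary and must respect the validity conditions of Gneiting et al. (2010) mentioned on the next slide.

```python
import numpy as np
from scipy.special import gamma, kv  # kv = modified Bessel function K_nu

def matern(h, nu, a):
    """Matern correlation M(h | nu, a) = 2^{1-nu}/Gamma(nu) (a|h|)^nu K_nu(a|h|),
    with the convention M(0 | nu, a) = 1."""
    h = np.atleast_1d(np.abs(np.asarray(h, dtype=float)))
    out = np.ones_like(h)
    pos = h > 0
    x = a * h[pos]
    out[pos] = 2 ** (1 - nu) / gamma(nu) * x**nu * kv(nu, x)
    return out

def bivariate_matern_cov(h, sig1, sig2, nu1, nu2, nu12, a1, a2, a12, rho):
    """Entries of the 2x2 covariance matrix C(h) in (6)-(7); illustrative only."""
    c11 = sig1**2 * matern(h, nu1, a1)
    c22 = sig2**2 * matern(h, nu2, a2)
    c12 = rho * sig1 * sig2 * matern(h, nu12, a12)
    return c11, c12, c22

# Example values (arbitrary choices, subject to the Gneiting et al. (2010)
# validity conditions, e.g. nu12 >= (nu1 + nu2)/2 when rho != 0).
h = np.linspace(0.0, 2.0, 5)
print(bivariate_matern_cov(h, 1.0, 1.0, 0.5, 1.5, 1.0, 1.0, 1.0, 1.0, 0.3))
```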

Gneiting et al. (2010) gave necessary and sufficient conditions for (6) to be a valid covariance matrix. In particular, if $\rho \ne 0$, one must have $\frac{\nu_1 + \nu_2}{2} \le \nu_{12}$. The parameters $\nu_1$ and $\nu_2$ control the smoothness of the sample function $t \mapsto X(t)$:
- If $\min\{\nu_1, \nu_2\} > 1$, then a.s. the sample function $t \mapsto (X_1(t), X_2(t))$ is continuously differentiable.
- If $0 < \nu_1 \le \nu_2 \le 1$, then a.s. the sample function $t \mapsto (X_1(t), X_2(t))$ is non-smooth.

Suppose $0 < \nu_1 \le \nu_2 \le 1$. Then, by Xiao (1995),
\[
\dim_H \operatorname{Gr} X([0,1]^N) =
\begin{cases}
N + 2 - (\nu_1 + \nu_2), & \text{if } \nu_1 + \nu_2 < N, \\[4pt]
\dfrac{N + \nu_2 - \nu_1}{\nu_2}, & \text{if } \nu_1 < N \le \nu_1 + \nu_2,
\end{cases}
\]
where $\operatorname{Gr} X([0,1]^N) = \{ (t, X_1(t), X_2(t))^T : t \in [0,1]^N \}$ is the graph set of $X$. Many other random sets generated by $X$ are also fractals.

Theorem 5.10 applies when $\min\{\nu_1, \nu_2\} > 2$, while Theorems 5.11 and 5.12 apply when $\max\{\nu_1, \nu_2\} < 1$.

Thank you
