Random Fields and Random Geometry. I: Gaussian fields and Kac-Rice formulae
Random Fields and Random Geometry. I: Gaussian fields and Kac-Rice formulae. Robert Adler, Electrical Engineering, Technion - Israel Institute of Technology (and many, many others). October 25, 2011.
I do not intend to cover all these slides in 75 minutes! (Some of the material is for your later reference, and some for the afternoon tutorial.)
Our heroes: Marc Kac and Stephen O. Rice.
Real roots of algebraic equations (Kac, 1943)

f(t) = ξ_0 + ξ_1 t + ξ_2 t^2 + ... + ξ_{n-1} t^{n-1}

P{ξ = (ξ_0, ..., ξ_{n-1}) ∈ A} = ∫_A e^{-|x|^2/2} / (2π)^{n/2} dx    (i.e. ξ ~ N(0, I_{n×n}))

Theorem: If N_n is the number of real zeroes of f, then

E{N_n} = (4/π) ∫_0^1 [1 - n^2 x^{2n-2}(1 - x^2)^2/(1 - x^{2n})^2]^{1/2} / (1 - x^2) dx.

Approximation: for large n,  E{N_n} ≈ (2/π) log n.

Bound: for large n,  E{N_n} ≤ (2/π) log n + 14/π.
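As a numerical sanity check, Kac's integral can be compared with a Monte Carlo count of real roots (a sketch using numpy and scipy; the degree n = 20 and the sample sizes are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

n = 20                       # number of iid N(0,1) coefficients; degree n - 1
rng = np.random.default_rng(0)

def kac_density(x):
    # integrand of Kac's formula; the clip guards against tiny negative
    # values produced by roundoff near x = 1
    inner = 1.0 - (n * x**(n - 1) * (1 - x**2) / (1 - x**(2 * n)))**2
    return np.sqrt(max(inner, 0.0)) / (1 - x**2)

# stop just short of x = 1 to avoid 0/0; the omitted tail is O(1e-4)
exact = (4 / np.pi) * quad(kac_density, 0.0, 1.0 - 1e-4, limit=200)[0]

def n_real_roots(coeffs):
    r = np.roots(coeffs[::-1])               # np.roots wants highest degree first
    return int(np.sum(np.abs(r.imag) < 1e-7))

mc = np.mean([n_real_roots(rng.standard_normal(n)) for _ in range(4000)])
print(exact, mc)   # both near (2/pi) log n + O(1), about 2.3 for n = 20
```

For n = 2 the integral reduces to ∫_0^1 dx/(1+x^2) = π/4, giving E{N_2} = 1, as it must, since a Gaussian linear polynomial has exactly one real root.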
Zeroes of complex polynomials

f(z) = ξ_0 + a_1 ξ_1 z + a_2 ξ_2 z^2 + ... + a_{n-1} ξ_{n-1} z^{n-1},   z ∈ C.

(Pictures of the zero sets: Shiffman)
Thinking more generally

1: f : R^N → R^N random:  E{#{t ∈ R^N : f(t) = u}} = ?

2: f : M → N random, dim(M) = dim(N):  E{#{p ∈ M : f(p) = q}} = ?

3: In another notation:  E{#f^{-1}(q)} = ?

4: More generally: f : M → N, dim(M) ≥ dim(N), D ⊂ N. In this case, typically, dim(f^{-1}(D)) = dim(M) - dim(N) + dim(D), and it is not clear what the corresponding question is.
The original (non-specific) Rice formula

Upcrossings of the level u:

U_u ≡ U_u(f, T) = #{t ∈ T : f(t) = u, ḟ(t) > 0}

Basic conditions on f: continuity and differentiability, plus (boring) side conditions to be met later.

Kac-Rice formula:

E{U_u} = ∫_T ∫_0^∞ y p_t(u, y) dy dt
       = ∫_T p_t(u) ∫_0^∞ y p_t(y | u) dy dt
       = ∫_T p_t(u) E{ḟ(t) 1_(0,∞)(ḟ(t)) | f(t) = u} dt,

where p_t(x, y) is the joint density of (f(t), ḟ(t)), p_t(u) is the probability density of f(t), etc. Here T = [0, T] is an interval; M will denote a general parameter set (e.g. a manifold).
The original (non-specific) Rice formula: the proof

Take a (positive) approximate delta function δ_ε, supported on [-ε, +ε], with ∫_R δ_ε(x) dx = 1. If (t_l^1, t_u^1) is a small interval around a single upcrossing of u, then for ε small enough

1 = ∫_R δ_ε(x - u) dx = ∫_{t_l^1}^{t_u^1} ḟ(t) δ_ε(f(t) - u) 1_(0,∞)(ḟ(t)) dt,

by the change of variables x = f(t). Summing over the upcrossings in T,

U_u(f, T) = lim_{ε→0} ∫_T ḟ(t) δ_ε(f(t) - u) 1_(0,∞)(ḟ(t)) dt.
The original (non-specific) Rice formula: the proof

So far, everything is deterministic:

U_u(f, T) = lim_{ε→0} ∫_T ḟ(t) δ_ε(f(t) - u) 1_(0,∞)(ḟ(t)) dt.

Now take expectations, with some sleight of hand in exchanging limits, expectations, and integrals:

E{U_u(f, T)} = E{lim_{ε→0} ∫_T ḟ(t) δ_ε(f(t) - u) 1_(0,∞)(ḟ(t)) dt}
            = lim_{ε→0} ∫_T E{ḟ(t) δ_ε(f(t) - u) 1_(0,∞)(ḟ(t))} dt
            = lim_{ε→0} ∫_T ∫_{x=-∞}^∞ ∫_{y=0}^∞ y δ_ε(x - u) p_t(x, y) dx dy dt
            = ∫_T ∫_0^∞ y p_t(u, y) dy dt.
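The deterministic limit can be watched numerically for a fixed function (a sketch; the choice f(t) = sin t, the level u = 0.3, the triangular kernel, and the grid are all arbitrary):

```python
import numpy as np

# f(t) = sin(t) on [0, 4*pi] has exactly 2 upcrossings of the level u = 0.3
u = 0.3
t = np.linspace(0.0, 4 * np.pi, 400_001)
f, fdot = np.sin(t), np.cos(t)

def delta_eps(x, eps):
    # triangular approximate delta function supported on [-eps, +eps]
    return np.maximum(eps - np.abs(x), 0.0) / eps**2

for eps in [0.2, 0.05, 0.01]:
    approx = np.sum(fdot * (fdot > 0) * delta_eps(f - u, eps)) * (t[1] - t[0])
    print(eps, approx)     # converges to 2 as eps -> 0
```

The count stabilises at the true number of upcrossings once ε is small relative to the slope of f at its crossings.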
Constructing Gaussian processes

A (separable) parameter space M.

A finite or infinite set of functions φ_1, φ_2, ..., φ_j : M → R satisfying Σ_j φ_j^2(t) < ∞ for all t ∈ M.

A sequence ξ_1, ξ_2, ... of independent N(0, 1) random variables.

Define the random field f : M → R by f(t) = Σ_j ξ_j φ_j(t).

Mean, covariance, and variance functions:

µ(t) = E{f(t)} = 0
C(s, t) = E{[f(s) - µ(s)][f(t) - µ(t)]} = Σ_j φ_j(s) φ_j(t)
σ^2(t) = C(t, t) = Σ_j φ_j^2(t)
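The recipe translates directly into code (a sketch; the small cosine basis and the evaluation points are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# an arbitrary small basis phi_0..phi_5 on M = [0, 1]
def phi(t):
    return np.array([np.cos(j * np.pi * t) / (1 + j) for j in range(6)])

ts = np.array([0.1, 0.4, 0.8])
Phi = np.array([phi(t) for t in ts])          # shape (3, 6)
C_theory = Phi @ Phi.T                        # C(s,t) = sum_j phi_j(s) phi_j(t)

# simulate f(t) = sum_j xi_j phi_j(t) and estimate the covariance empirically
xi = rng.standard_normal((200_000, Phi.shape[1]))
samples = xi @ Phi.T                          # rows are (f(t_1), f(t_2), f(t_3))
C_emp = samples.T @ samples / len(samples)

print(np.abs(C_emp - C_theory).max())         # small; shrinks like 1/sqrt(#samples)
```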
Existence of Gaussian processes

Theorem. Let M be a topological space, and let C : M × M → R be positive semi-definite. Then there exists a Gaussian process f : M → R with mean zero and covariance function C. Furthermore, f has a representation of the form f(t) = Σ_j ξ_j φ_j(t), and if f is a.s. continuous then the sum converges uniformly on compacts, a.s.

Corollary. If there is justice in the world (smoothness and summability),

ḟ(t) = ∂f(t)/∂t = Σ_j ξ_j φ̇_j(t),

and so if f is Gaussian, so is ḟ. (Here ḟ is any first-order derivative on a nice M.) Furthermore,

E{ḟ(s) ḟ(t)} = E{Σ_j ξ_j φ̇_j(s) Σ_k ξ_k φ̇_k(t)} = Σ_j φ̇_j(s) φ̇_j(t)
E{f(s) ḟ(t)} = E{Σ_j ξ_j φ_j(s) Σ_k ξ_k φ̇_k(t)} = Σ_j φ_j(s) φ̇_j(t)
Example: The Brownian sheet on R^N

E{W(t)} = 0,   E{W(s) W(t)} = (s_1 ∧ t_1) ··· (s_N ∧ t_N),   where t = (t_1, ..., t_N).

Replace the index j by j = (j_1, ..., j_N):

φ_j(t) = 2^{N/2} ∏_{i=1}^N [2/((2j_i + 1)π)] sin((2j_i + 1)πt_i/2)

W(t) = Σ_{j_1} ··· Σ_{j_N} ξ_{j_1,...,j_N} φ_{j_1,...,j_N}(t)

For N = 1, W is standard Brownian motion. The corresponding expansion is due to Lévy, and the corresponding RKHS is known as Cameron-Martin space.
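For N = 1 the expansion can be checked directly: the truncated sum Σ_j φ_j(s)φ_j(t) should approach min(s, t), the Brownian motion covariance (a sketch; the truncation level is arbitrary):

```python
import numpy as np

def phi(j, t):
    # Levy / Karhunen-Loeve basis for Brownian motion on [0, 1]
    return (2 * np.sqrt(2) / ((2 * j + 1) * np.pi)) * np.sin((2 * j + 1) * np.pi * t / 2)

s, t = 0.3, 0.7
j = np.arange(5000)                      # truncate the expansion at 5000 terms
cov = np.sum(phi(j, s) * phi(j, t))

print(cov)   # close to min(s, t) = 0.3; truncation error is O(1/J)
```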
Constant variance Gaussian processes

We know that f(t) = Σ ξ_j φ_j(t) and ḟ(t) = Σ ξ_j φ̇_j(t), with

E{ḟ(s) ḟ(t)} = Σ φ̇_j(s) φ̇_j(t),   E{f(s) ḟ(t)} = Σ φ_j(s) φ̇_j(t).

Thus, if σ^2(t) = E{f^2(t)} = Σ φ_j^2(t) is constant, say 1, so that ||(φ_j(t))||_{l_2} = 1, then

E{f(t) ḟ(t)} = Σ φ_j(t) φ̇_j(t) = (1/2) Σ ∂(φ_j^2(t))/∂t = (1/2) ∂(Σ φ_j^2(t))/∂t = 0.

So f(t) and its derivative ḟ(t) at the same point are uncorrelated and, being jointly Gaussian, INDEPENDENT.
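A quick empirical check on the simplest constant-variance example, f(t) = ξ_1 cos t + ξ_2 sin t (a sketch; the fixed point t = 0.9 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.standard_normal((100_000, 2))

t = 0.9                                  # any fixed point
f    =  xi[:, 0] * np.cos(t) + xi[:, 1] * np.sin(t)
fdot = -xi[:, 0] * np.sin(t) + xi[:, 1] * np.cos(t)

print(np.var(f), np.var(fdot))           # both near 1 (constant variance)
print(np.corrcoef(f, fdot)[0, 1])        # near 0: f(t) and fdot(t) independent
```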
Constant variance Gaussian processes and Kac-Rice

Generic Kac-Rice formula:

E{U_u} = ∫_T p_t(u) E{ḟ(t) 1_(0,∞)(ḟ(t)) | f(t) = u} dt.

But f(t) and ḟ(t) are independent, so the conditioning drops out!

Notation: 1 = σ^2(t),  λ(t) = E{[ḟ(t)]^2}.

Rice formula (+ε):

E{U_u} = (e^{-u^2/2} / 2π) ∫_T [λ(t)]^{1/2} dt.

If λ(t) ≡ λ, we have the Rice formula

E{U_u} = (T λ^{1/2} / 2π) e^{-u^2/2}.
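The Rice formula is easy to test by simulation for a random trigonometric polynomial with σ^2 = 1 (a sketch; the frequencies, level, and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
ks = np.array([1, 2, 3])
a2 = np.full(3, 1 / 3)               # per-frequency variances; sigma^2 = 1
lam = np.sum(a2 * ks**2)             # lambda = E{fdot^2} = 14/3

u, T = 1.0, 2 * np.pi
t = np.linspace(0.0, T, 4001)

def upcrossings():
    xi = rng.standard_normal(3) * np.sqrt(a2)
    et = rng.standard_normal(3) * np.sqrt(a2)
    f = np.sum(xi[:, None] * np.cos(np.outer(ks, t)) +
               et[:, None] * np.sin(np.outer(ks, t)), axis=0)
    return np.sum((f[:-1] < u) & (f[1:] >= u))

mc = np.mean([upcrossings() for _ in range(4000)])
rice = T * np.sqrt(lam) / (2 * np.pi) * np.exp(-u**2 / 2)
print(mc, rice)                      # both about 1.31
```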
Gaussian Kac-Rice with no simplification

E{U_u(T)} = ∫_T λ^{1/2}(t) σ^{-1}(t) [1 - µ^2(t)]^{1/2} ϕ((u - m(t))/σ(t)) [ϕ(η(t)) + η(t) Φ(η(t))] dt

with

m(t) = E{f(t)},   σ^2(t) = Var{f(t)},   λ(t) = Var{ḟ(t)},
µ(t) = E{[f(t) - m(t)][ḟ(t) - ṁ(t)]} / (λ^{1/2}(t) σ(t)),
η(t) = [ṁ(t) + λ^{1/2}(t) µ(t) (u - m(t))/σ(t)] / (λ^{1/2}(t) [1 - µ^2(t)]^{1/2}).

(Setting m ≡ 0, σ ≡ 1 recovers the previous slide, since then µ ≡ 0 and η ≡ 0.)

Very important fact: long range covariances do not appear in any of these formulae.
Some pertinent thoughts

Real roots of Gaussian polynomials: the original Kac result now makes sense. For f(t) = ξ_0 + ξ_1 t + ... + ξ_{n-1} t^{n-1},

E{N_n} = (4/π) ∫_0^1 [1 - n^2 x^{2n-2}(1 - x^2)^2/(1 - x^{2n})^2]^{1/2} / (1 - x^2) dx.

Downcrossings, crossings, critical points: critical points of different kinds are just zeroes of ḟ with different side conditions. But now second derivatives appear and the calculations become harder. (f(t) and f̈(t) are not independent.)

Weakening the conditions: in the Gaussian case, only absolute continuity and the finiteness of λ are needed. (Itô, Ylvisaker)

Higher moments?

Fixed points of vector valued processes?
The Kac-Rice Metatheorem

The setup:

f = (f_1, ..., f_N) : M ⊂ R^N → R^N
g = (g_1, ..., g_K) : M ⊂ R^N → R^K

Number of points:

N_u ≡ N_u(M) ≡ N_u(f, g : M, B) = #{t ∈ M : f(t) = u, g(t) ∈ B}.

The metatheorem, or generalised Kac-Rice:

E{N_u} = ∫_M ∫_{R^D} |det ∇y| 1_B(v) p_t(u, ∇y, v) d(∇y) dv dt
       = ∫_M E{|det ∇f(t)| 1_B(g(t)) | f(t) = u} p_t(u) dt,

where p_t(x, ∇y, v) is the joint density of (f(t), ∇f(t), g(t)), D is the dimension of the (∇y, v) variables, and

∇f(t) ≡ (∇f)(t) ≡ (f_j^i(t))_{i,j=1,...,N} ≡ (∂f_i(t)/∂t_j)_{i,j=1,...,N}.
The Kac-Rice Conditions (the fine print)

Let f, g, M and B be as above, with the additional assumption that the boundaries of M and B have finite N-1 and K-1 dimensional measures, respectively. Furthermore, assume that the following conditions are satisfied for some u ∈ R^N:

(a) All components of f, ∇f, and g are a.s. continuous and have finite variances (over M).
(b) For all t ∈ M, the marginal densities p_t(x) of f(t) (implicitly assumed to exist) are continuous at x = u.
(c) The conditional densities p_t(x | ∇f(t), g(t)) of f(t) given ∇f(t) and g(t) (implicitly assumed to exist) are bounded above and continuous at x = u, uniformly in t ∈ M.
(d) The conditional densities p_t(z | f(t) = x) of det ∇f(t) given f(t) = x are continuous for z and x in neighbourhoods of 0 and u, respectively, uniformly in t ∈ M.
(e) The conditional densities p_t(z | f(t) = x) of g(t) given f(t) = x are continuous for all z and for x in a neighbourhood of u, uniformly in t ∈ M.
(f) The following moment condition holds:  sup_{t∈M} max_{1≤i,j≤N} E{|f_j^i(t)|^N} < ∞.
(g) The moduli of continuity of each of the components of f, ∇f, and g satisfy P{ω(η) > ε} = o(η^N) as η ↓ 0, for any ε > 0.
Higher (factorial) moments

Factorial notation: (x)_k = x(x - 1) ··· (x - k + 1).

Kac-Rice (again?):

E{(N_u)_k} = ∫_{M^k} E{∏_{j=1}^k |det ∇f(t_j)| 1_B(g(t_j)) | f(t̃) = ũ} p_{t̃}(ũ) dt̃
           = ∫_{M^k} ∫_{R^{kD}} ∏_{j=1}^k |det D_j| 1_B(v_j) p_{t̃}(ũ, D̃, ṽ) dD̃ dṽ dt̃,

where

M^k = {t̃ = (t_1, ..., t_k) : t_j ∈ M, 1 ≤ j ≤ k}
f(t̃) = (f(t_1), ..., f(t_k)) : M^k → R^{Nk}
g(t̃) = (g(t_1), ..., g(t_k)) : M^k → R^{Kk}
D = N(N + 1)/2 + K
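A degenerate but instructive check: for the one-frequency process f(t) = ξ_1 cos t + ξ_2 sin t there is at most one upcrossing of u per period, so the second factorial moment E{(U_u)_2} vanishes, while E{U_u} = e^{-u^2/2}, matching Rice with λ = 1 (a sketch; the level and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
u = 1.5
t = np.linspace(0.0, 2 * np.pi, 8001)[:-1]   # one full period, half-open

counts = []
for _ in range(20_000):
    xi1, xi2 = rng.standard_normal(2)
    f = xi1 * np.cos(t) + xi2 * np.sin(t)    # = R cos(t - theta), R Rayleigh
    fp = np.roll(f, -1)                      # wrap around the period
    counts.append(np.sum((f < u) & (fp >= u)))
counts = np.asarray(counts)

print(counts.mean())                         # approx exp(-u^2/2) = 0.325
print(np.mean(counts * (counts - 1)))        # 0: at most one upcrossing per period
```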
The Gaussian case: What can/can't be explicitly computed

- General mean and covariance functions
- Isotropic fields (N = 2, 3)
- Zero mean and constant variance (via the "induced metric")
- Gaussian related processes
- Perturbed Gaussian processes ("approximately")
- E{No. of critical points of index k above the level u}
- E{Σ_{k=1}^N (-1)^k (No. of critical points of index k above u)} (useful for Morse theory)
The Gaussian-related case

f(t) = (f_1(t), ..., f_k(t)) : T → R^k,   F : R^k → R^d

g(t) = F(f(t)) = F(f_1(t), ..., f_k(t)),

F(x) = Σ_{i=1}^k x_i^2,    x_1 (k - 1)^{1/2} / (Σ_{i=2}^k x_i^2)^{1/2},    (m/n) Σ_{i=1}^n x_i^2 / Σ_{i=n+1}^{n+m} x_i^2,

i.e. χ² fields with k degrees of freedom, T fields with k - 1 degrees of freedom, and F fields with n and m degrees of freedom.
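For instance, squaring and summing k iid unit-variance Gaussian fields gives a field with χ²_k marginals; the first two moments are easy to check at a fixed point (a sketch; k = 4 and the cosine-process components are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
k = 4
t = 0.3

# k iid copies of the constant-variance process f_i(t) = xi cos t + eta sin t
xi  = rng.standard_normal((100_000, k))
eta = rng.standard_normal((100_000, k))
f   = xi * np.cos(t) + eta * np.sin(t)   # each column ~ N(0, 1)

g = np.sum(f**2, axis=1)                 # chi^2 field with k dof, at the point t

print(g.mean(), g.var())                 # approx k = 4 and 2k = 8
```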
The Gaussian Kinematic Formula (GKF)

See Jonathan's lecture.
The perturbed-Gaussian case

A physics approach:

ϕ(x) = ϕ_G(x) [1 + Σ_{n=3}^∞ Tr(E_G{h_n(X)} h_n(x))],   ϕ_G iid Gaussian,

where

h_n(x) = ((-1)^n / ϕ_G(x)) ∂^n ϕ_G(x)/∂x^n

are Hermite tensors of rank n, with coefficients constructed from the moments E_G{h_n(X)}.

A statistical (Gaussian related) approach:

f(t) = f_G(t) + Σ_{j=1}^J p_j ε_j f_j^{GR}(t)
Applications I: Exceedence probabilities via level crossings

P{sup_{0≤t≤T} f(t) ≥ u} = P{f(0) ≥ u} + P{f(0) < u, N_u ≥ 1}
 ≤ P{f(0) ≥ u} + E{N_u}
 = E{# of connected components of A_u(T)},

where N_u is the number of upcrossings of u and A_u(T) = {t ∈ [0, T] : f(t) ≥ u} is the excursion set.

Note: nothing is Gaussian here!

The inequality is usually a good approximation, for large u.
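For the one-frequency cosine process on [0, 2π] the sup has a Rayleigh amplitude, so P{sup f ≥ u} = e^{-u^2/2} exactly, and both sides of the bound are available in closed form (a sketch, using scipy for the Gaussian tail; the level u = 2 is an arbitrary choice):

```python
import numpy as np
from scipy.stats import norm

u = 2.0                                # level; T = 2*pi, one full period

# exact: sup f = sqrt(xi1^2 + xi2^2), a Rayleigh variable
exact = np.exp(-u**2 / 2)

# bound: P{f(0) >= u} + E{N_u}, with E{N_u} = exp(-u^2/2) from Rice (lambda = 1)
bound = norm.sf(u) + np.exp(-u**2 / 2)

print(exact, bound)    # 0.135 vs 0.158; the gap P{f(0) >= u} fades as u grows
```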
Applications II: Local maxima on the line

Number of local maxima above the level u:

M_u(T) = #{t ∈ [0, T] : ḟ(t) = 0, f̈(t) < 0, f(t) ≥ u}

For f stationary with mean 0, variance 1, and λ_4 = E{[f̈(t)]^2}:

E{M_{-∞}(T)} = T λ_4^{1/2} / (2π λ_2^{1/2})

Similarly, with Δ = λ_4 - λ_2^2 and λ_2 ≡ λ:

E{M_u(T)} = (T λ_4^{1/2} / (2π λ_2^{1/2})) Ψ(λ_4^{1/2} u / Δ^{1/2}) + (T λ_2^{1/2} / (2π)^{1/2}) ϕ(u) Φ(λ_2 u / Δ^{1/2}),

where Ψ = 1 - Φ is the Gaussian tail. An easy computation then gives

lim_{u→∞} E{M_u(T)} / E{N_u(T)} = 1,

which holds in very wide generality.
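The formula for the total number of local maxima can be tested by simulation for a unit-variance random trigonometric polynomial; over one period [0, 2π] the predicted mean count is √(λ_4/λ_2) = √7 ≈ 2.65 for the frequencies chosen below (a sketch; frequencies and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
ks = np.array([1, 2, 3])
a2 = np.full(3, 1 / 3)                 # per-frequency variances; sigma^2 = 1
lam2 = np.sum(a2 * ks**2)              # E{fdot^2}  = 14/3
lam4 = np.sum(a2 * ks**4)              # E{fddot^2} = 98/3

t = np.linspace(0.0, 2 * np.pi, 8001)[:-1]

def n_local_maxima():
    xi = rng.standard_normal(3) * np.sqrt(a2)
    et = rng.standard_normal(3) * np.sqrt(a2)
    f = np.sum(xi[:, None] * np.cos(np.outer(ks, t)) +
               et[:, None] * np.sin(np.outer(ks, t)), axis=0)
    d = np.diff(np.append(f, f[0]))    # forward differences on the circle
    return np.sum((np.roll(d, 1) > 0) & (d <= 0))   # rising-then-falling points

mc = np.mean([n_local_maxima() for _ in range(3000)])
print(mc, np.sqrt(lam4 / lam2))        # both about sqrt(7) = 2.65
```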
Applications III: Local maxima on M ⊂ R^N

M_u(M) = number of local maxima on M above the level u.

N = 2, f isotropic on the unit square:

E{M_{-∞}} = (1/(6π√3)) λ_4/λ_2

N = 2, f stationary on the unit square:

E{M_{-∞}} = (1/(2π^2)) d_1 |Λ|^{-1/2} G(d_1/d_2),

where Λ is the covariance matrix of the first order derivatives of f, d_1 and d_2 are the eigenvalues of the covariance matrix of the second order derivatives, and G involves first and second order elliptic integrals.

N = 2, f isotropic on the unit square:  E{M_u} = ???

Is there a replacement for lim_{u→∞} E{M_u(T)}/E{N_u(T)} = 1?
Applications IV: Longuet-Higgins and oceanography

There are precise computations for the expected numbers of specular points (points on a random sea surface that reflect a light source to an observer), mainly by M.S. Longuet-Higgins (b. 1925). [Photos: M.S. Longuet-Higgins; Mark Dennis]
Applications V: Higher moments and complex polynomials

f(z) = ξ_0 + a_1 ξ_1 z + a_2 ξ_2 z^2 + ... + a_{n-1} ξ_{n-1} z^{n-1},   z ∈ C.

Means tell us where we expect the roots to be, but variances are needed to give concentration information. [Photo: Balint Virag]
Applications VI: Poisson limits and Slepian models

Theorem.
1: Sequences of increasingly rare events (such as the existence of high level local maxima in N dimensions, or level crossings in 1 dimension), looked at over long time periods or large regions so that a few of them still occur, have an asymptotic Poisson distribution, as long as dependence in time or space is not too strong.
2: The normalisations and the parameters of the limiting Poisson depend only on the expected number of events in a given region or time interval.

Theorem. If f is stationary and ergodic (although less will do), then

P{f(τ) ∈ A | τ is a local maximum of f}
 = E{#{t ∈ B : t is a local maximum of f and f(t) ∈ A}} / E{#{t ∈ B : t is a local maximum of f}},

where B is any ball.
Applications VII: Eigenvalues of random matrices

Let A be an n × n matrix, and define f_A(t) = ⟨At, t⟩ for t ∈ S^{n-1}. If A is random, then f_A is a random field. If A is Gaussian (i.e. has Gaussian components), then f_A is Gaussian.

Algebraic fact: if A (symmetric, say, since ⟨At, t⟩ only sees the symmetric part of A) has no repeated eigenvalues, there are exactly 2n critical points of f_A, which occur at ± the unit eigenvectors of A. The values of f_A at the critical points are the eigenvalues of A.

Finding the maximum eigenvalue:

λ_max(A) = sup_{t ∈ S^{n-1}} f_A(t)

(Some) random matrix problems are equivalent to random field problems, and vice versa.
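The algebraic fact is easy to verify numerically (a sketch; the symmetric Gaussian matrix and its size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                        # symmetric Gaussian matrix

evals, evecs = np.linalg.eigh(A)         # ascending eigenvalues, unit eigenvectors

# values of f_A(t) = <At, t> at the critical points are the eigenvalues
crit_vals = np.array([evecs[:, i] @ A @ evecs[:, i] for i in range(n)])
print(np.allclose(crit_vals, evals))     # True

# the spherical gradient 2(At - <At,t> t) vanishes at an eigenvector
v = evecs[:, -1]
grad = 2 * (A @ v - (v @ A @ v) * v)
print(np.linalg.norm(grad))              # ~ 0

# lambda_max(A) = sup of f_A over the sphere: random points never beat it
U = rng.standard_normal((20_000, n))
U /= np.linalg.norm(U, axis=1, keepdims=True)
sup_mc = np.max(np.einsum('ij,jk,ik->i', U, A, U))
print(sup_mc <= evals[-1])               # True
```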
Appendix I: The canonical Gaussian process

Consider f(t) = Σ_{j=1}^l ξ_j φ_j(t) with σ^2(t) ≡ 1, so that

φ(t) = (φ_1(t), ..., φ_l(t)) ∈ S^{l-1}.

A crucial mapping: define a new Gaussian process f̃ on φ(M) by

f̃(x) = f(φ^{-1}(x)),

so that

E{f̃(x) f̃(y)} = E{f(φ^{-1}(x)) f(φ^{-1}(y))} = Σ_j φ_j(φ^{-1}(x)) φ_j(φ^{-1}(y)) = Σ_j x_j y_j = ⟨x, y⟩.
The canonical Gaussian process on S^{l-1}

1: Has mean zero and covariance E{f(s) f(t)} = ⟨s, t⟩ for s, t ∈ S^{l-1}.
2: It can be realised as f(t) = Σ_{j=1}^l t_j ξ_j.
3: It is stationary and isotropic, since the covariance is a function of only the (geodesic) distance between s and t.
Exceedence probabilities for the canonical process: M ⊂ S^{l-1}

P{sup_{t∈M} f_t ≥ u} = ∫_0^∞ P{sup_{t∈M} f_t ≥ u | |ξ| = r} P_{|ξ|}(dr)
 = ∫_0^∞ P{sup_{t∈M} ⟨ξ, t⟩ ≥ u | |ξ| = r} P_{|ξ|}(dr)
 = ∫_0^∞ P{sup_{t∈M} ⟨ξ/r, t⟩ ≥ u/r | |ξ| = r} P_{|ξ|}(dr)
 = ∫_u^∞ P{sup_{t∈M} ⟨U, t⟩ ≥ u/r} P_{|ξ|}(dr),

where U is uniform on S^{l-1}. (The lower limit becomes u because |⟨U, t⟩| ≤ 1, so the probability vanishes for r < u.)
We need P{sup_{t∈M} ⟨U, t⟩ ≥ u/r}.

Working with tubes

The tube of radius ρ around a closed set M ⊂ S^{l-1} is

Tube(M, ρ) = {t ∈ S^{l-1} : τ(t, M) ≤ ρ}
           = {t ∈ S^{l-1} : ∃ s ∈ M such that ⟨s, t⟩ ≥ cos(ρ)}
           = {t ∈ S^{l-1} : sup_{s∈M} ⟨s, t⟩ ≥ cos(ρ)}.

And so

P{sup_{t∈M} f_t ≥ u} = ∫_u^∞ η_l(Tube(M, cos^{-1}(u/r))) P_{|ξ|}(dr),

and geometry has entered the picture, in a serious fashion!
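In the simplest case M = {s} is a single point: Tube(M, ρ) is a spherical cap, and P{sup_{t∈M} ⟨U, t⟩ ≥ cos ρ} is its normalised surface area, 2π(1 - cos ρ)/(4π) on S². A Monte Carlo check (a sketch; ρ and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
rho = 0.7
s = np.array([0.0, 0.0, 1.0])          # M = {s}, a single point on S^2

U = rng.standard_normal((200_000, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # uniform on S^2

mc = np.mean(U @ s >= np.cos(rho))     # P{ <U, s> >= cos(rho) }
cap = 2 * np.pi * (1 - np.cos(rho)) / (4 * np.pi)   # normalised cap area

print(mc, cap)                         # agree up to Monte Carlo error
```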
Appendix II: Stationary and isotropic fields

Definition: M has a group structure, μ(t) = const and C(s, t) = C(s − t).

Gaussian case: (Weak) stationarity also implies strong stationarity.

M = R^N: C : R^N → R is non-negative definite ⟺ there exists a finite measure ν such that

C(t) = ∫_{R^N} e^{i⟨t,λ⟩} ν(dλ).

ν is called the spectral measure and, since C is real, must be symmetric, i.e. ν(A) = ν(−A) for all A ∈ B^N.

Spectral moments:

λ_{i_1 ⋯ i_N} = ∫_{R^N} λ_1^{i_1} ⋯ λ_N^{i_N} ν(dλ).

ν symmetric ⟹ odd-ordered spectral moments are zero.
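As a concrete illustration (an assumed example, not from the slides): taking ν to be the standard Gaussian measure on R, the spectral formula gives C(t) = e^{−t²/2}, and the odd-ordered moments vanish by symmetry. A quadrature sketch:

```python
import numpy as np

lam = np.linspace(-10.0, 10.0, 200_001)
dlam = lam[1] - lam[0]

# Spectral measure nu: standard Gaussian density on R (an illustrative choice).
nu = np.exp(-lam**2 / 2) / np.sqrt(2 * np.pi)

def C(t):
    # C(t) = int e^{i t lam} nu(dlam); the imaginary part vanishes by symmetry.
    return float(np.sum(np.cos(t * lam) * nu) * dlam)

print(C(1.3), np.exp(-1.3**2 / 2))   # characteristic function of N(0,1)

# Odd-ordered spectral moments of a symmetric nu vanish:
m3 = float(np.sum(lam**3 * nu) * dlam)
print(m3)   # ~ 0
```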
Elementary considerations give

E{ ∂^k f(s)/∂s_{i_1}⋯∂s_{i_k} · ∂^k f(t)/∂t_{i_1}⋯∂t_{i_k} } = ∂^{2k} C(s, t)/∂s_{i_1}∂t_{i_1}⋯∂s_{i_k}∂t_{i_k}.

When f is stationary, and α, β, γ, δ ∈ {0, 1, 2, ...}, then

E{ ∂^{α+β} f(t)/∂t_i^α ∂t_j^β · ∂^{γ+δ} f(t)/∂t_k^γ ∂t_l^δ }
  = (−1)^{α+β} ∂^{α+β+γ+δ} C(t)/∂t_i^α ∂t_j^β ∂t_k^γ ∂t_l^δ |_{t=0}
  = (−1)^{α+β} i^{α+β+γ+δ} ∫_{R^N} λ_i^α λ_j^β λ_k^γ λ_l^δ ν(dλ).

Write f_j = ∂f/∂t_j, f_ij = ∂²f/∂t_i∂t_j. Then f(t) and f_j(t) are uncorrelated, and f_i(t) and f_jk(t) are uncorrelated.

Isotropy (C(t) = C(|t|)) ⟺ ν is spherically symmetric ⟹

E{ f_i(t) f_j(t) } = −E{ f(t) f_ij(t) } = λ δ_ij,

where λ is the second spectral moment.
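In one dimension these relations read E{f f′} = −C′(0), E{f′f′} = −C″(0) and E{f f″} = C″(0). A finite-difference sketch (my addition) for the illustrative covariance C(t) = e^{−t²/2}, which has second spectral moment λ = 1:

```python
import numpy as np

# Illustrative stationary covariance with unit second spectral moment:
C = lambda t: np.exp(-t**2 / 2)
h = 1e-3

Cp0 = (C(h) - C(-h)) / (2 * h)             # C'(0)  -> 0
Cpp0 = (C(h) - 2 * C(0.0) + C(-h)) / h**2  # C''(0) -> -1 = -lambda

# E{f f'} = -C'(0) = 0,  E{f' f'} = -C''(0) = lambda,  E{f f''} = C''(0).
print(-Cp0, -Cpp0, Cpp0)
```

So the field is uncorrelated with its gradient, while the variance of the gradient and the field/second-derivative covariance differ only in sign.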
Appendix III: Regularity of Gaussian processes

The canonical metric d:

d(s, t) = [ E{ (f(s) − f(t))² } ]^{1/2}.

A ball of radius ε centered at t ∈ M is denoted by

B_d(t, ε) = { s ∈ M : d(s, t) ≤ ε }.

Compactness assumption:

diam(M) = sup_{s,t∈M} d(s, t) < ∞.

Entropy: Fix ε > 0 and let N(M, d, ε) ≡ N(ε) denote the smallest number of d-balls of radius ε whose union covers M. Set H(M, d, ε) ≡ H(ε) = ln(N(ε)). Then N and H are called the (metric) entropy and log-entropy functions for M (or f).
Dudley's theorem: Let f be a centered Gaussian field on a d-compact M. Then there exists a universal K such that

E{ sup_{t∈M} f_t } ≤ K ∫_0^{diam(M)} H^{1/2}(ε) dε

and

E{ ω_{f,d}(δ) } ≤ K ∫_0^δ H^{1/2}(ε) dε,

where

ω_{f,d}(δ) = sup_{d(s,t)≤δ} |f(t) − f(s)|, δ > 0,

is the modulus of continuity. Furthermore, there exists a random η ∈ (0, ∞) and a universal K such that

ω_{f,d}(δ) ≤ K ∫_0^δ H^{1/2}(ε) dε

for all δ < η.
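For intuition (an illustrative example of mine, not from the slides): Brownian motion on [0, 1] has canonical metric d(s, t) = |s − t|^{1/2}, so a d-ball of radius ε is an interval of length 2ε² and H(ε) ≈ ln(1/(2ε²)). The entropy integral is finite, consistent with E{sup B_t} = √(2/π) < ∞ (by the reflection principle):

```python
import numpy as np

# Brownian motion on [0,1]: d(s,t) = sqrt(|s-t|), so a d-ball of radius eps
# is an interval of length 2*eps^2 and H(eps) ~ log(1/(2*eps^2)).
eps = np.linspace(1e-6, 1.0, 200_001)
H = np.log(np.maximum(1.0, 1.0 / (2 * eps**2)))
g = np.sqrt(H)
entropy_integral = float(np.sum((g[1:] + g[:-1]) * np.diff(eps)) / 2)

# Monte Carlo E{ sup_[0,1] B_t }; by reflection this equals sqrt(2/pi).
rng = np.random.default_rng(2)
n, steps = 4_000, 1_000
paths = np.cumsum(rng.standard_normal((n, steps)) / np.sqrt(steps), axis=1)
e_sup = float(paths.max(axis=1).mean())

print(entropy_integral, e_sup, np.sqrt(2 / np.pi))
```

The entropy integral evaluates to √π/2 ≈ 0.886, a finite bound of the right order for E{sup B_t} ≈ 0.798 (the Monte Carlo value sits slightly lower because of time discretisation).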
Special cases of the entropy result

If f is also stationary:

f is a.s. continuous on M ⟺ f is a.s. bounded on M ⟺ ∫_0^δ H^{1/2}(ε) dε < ∞ for some (all) δ > 0.

If M ⊂ R^N, and

p²(u) = sup_{|s−t|≤u} E{ |f_s − f_t|² },

then continuity & boundedness follow if, for some δ > 0, either

∫_0^δ (−ln u)^{1/2} dp(u) < ∞   or   ∫_δ^∞ p(e^{−u²}) du < ∞.

A sufficient condition: For some 0 < K < ∞ and α, η > 0,

E{ |f_s − f_t|² } ≤ K / | ln |s − t| |^{1+α}

for all s, t with |s − t| < η.
Appendix IV: Borell-Tsirelson inequality

Finiteness theorem: With ||f|| = sup_{t∈M} f_t,

P{ ||f|| < ∞ } = 1 ⟺ E{ ||f|| } < ∞.

THE inequality: For all u > 0, with σ²_M = sup_{t∈M} E{f_t²},

P{ ||f|| − E{||f||} > u } ≤ e^{−u²/2σ²_M}.

This implies

P{ ||f|| ≥ u } ≤ e^{μ_u − u²/2σ²_M},   μ_u = ( 2u E{||f||} − [E{||f||}]² ) / 2σ²_M.

Asymptotics: For high levels u, the dominant behaviour of all Gaussian exceedence probabilities is determined by e^{−u²/2σ²_M}.
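A quick empirical sketch of THE inequality (my addition, not from the slides), for a toy "field" of k i.i.d. standard normals, so that σ²_M = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 500_000, 10

# Toy "field": k i.i.d. standard normals, so sigma_M^2 = sup_t E{f_t^2} = 1.
sup_f = rng.standard_normal((n, k)).max(axis=1)
e_sup = float(sup_f.mean())

for u in (0.5, 1.0, 1.5, 2.0):
    emp = float(np.mean(sup_f - e_sup > u))
    bound = float(np.exp(-u**2 / 2))
    print(u, emp, bound)       # empirical tail sits below the bound
    assert emp <= bound
```

The empirical tail of ||f|| − E{||f||} lies well below e^{−u²/2} at every level, as the inequality guarantees.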
Places to start reading and to find other references

1. R.J. Adler and J.E. Taylor, Random Fields and Geometry, Springer, 2007.
2. R.J. Adler and J.E. Taylor, Topological Complexity of Random Functions, Springer Lecture Notes in Mathematics, Vol. 2019, Springer, 2011.
3. D. Aldous, Probability Approximations via the Poisson Clumping Heuristic, Applied Mathematical Sciences, Vol. 77, Springer-Verlag, 1989.
4. J-M. Azaïs and M. Wschebor, Level Sets and Extrema of Random Processes and Fields, Wiley, 2009.
5. H. Cramér and M.R. Leadbetter, Stationary and Related Stochastic Processes, Wiley, 1967.
6. M.R. Leadbetter, G. Lindgren, and H. Rootzén, Extremes and Related Properties of Random Sequences and Processes, Springer-Verlag, 1983.
More informationBrownian Motion. 1 Definition Brownian Motion Wiener measure... 3
Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................
More informationFast-slow systems with chaotic noise
Fast-slow systems with chaotic noise David Kelly Ian Melbourne Courant Institute New York University New York NY www.dtbkelly.com May 12, 215 Averaging and homogenization workshop, Luminy. Fast-slow systems
More information