Gaussian Random Field: simulation and quantification of the error


Gaussian Random Field: simulation and quantification of the error
EMSE Workshop, 6 November 2017

Outline
1. Continuity: separability; proving continuity without separability
2. The stationary case
3. Medical framework: example of a non-stationary Gaussian random field

Definition (separability)
A real-valued random field φ on (Ω, A, P) indexed by a metric space (M, d) is called separable if there exists an at most countable subset S of M, dense in (M, d), such that for all closed intervals C in R and all open subsets O of M,
{φ(x) ∈ C, x ∈ O} = {φ(x) ∈ C, x ∈ O ∩ S}.
The set S is then called a separating set for φ.

Alternative definition
Lemma. A real-valued random field φ on (Ω, A, P) indexed by (M, d) is separable with separating set S if and only if one of the following equivalent statements holds:
(S₁) For every open subset O of M,
  inf_{y ∈ O ∩ S} φ(y) = inf_{x ∈ O} φ(x)  and  sup_{y ∈ O ∩ S} φ(y) = sup_{x ∈ O} φ(x);
(S₁′) For every x ∈ M,
  lim inf_{y → x, y ∈ S} φ(y) = lim inf_{y → x} φ(y)  and  lim sup_{y → x, y ∈ S} φ(y) = lim sup_{y → x} φ(y).

Definitions
Let M be a compact subset of R^d (for instance M = [−N, N]^d, N ∈ N). Define
  D_n := { k/2^n, k ∈ Z^d } ∩ M,
and equip M with the distance
  d(x, y) := sup_{i ∈ {1, …, d}} |x_i − y_i|.
We will denote δ_n := 1/2^n.

Properties
We have D_n ⊂ D_{n+1} and
  |D_n| ≤ (2^n diam(M) + 1)^d.
For x ∈ D_n we define (recall δ_n = 1/2^n)
  C_n(x) := { y ∈ D_n, d(x, y) ≤ δ_n }.
We have the upper bound |C_n(x)| ≤ 3^d.

We also define
  π_n := { {x, y} : x, y ∈ D_n, d(x, y) ≤ δ_n }.
Then
  |π_n| ≤ |D_n| · sup_{x ∈ D_n} |C_n(x)| ≤ (2^n diam(M) + 1)^d · 3^d.
(D) For all n ≥ 1 and x, y ∈ D_n, there exist x′, y′ ∈ D_{n+1} such that
  d(x, x′) ≤ δ_{n+1} and d(y, y′) ≤ δ_{n+1};
  d(x′, y′) ≤ d(x, y).

Conditions for the random field
We finally set D := ⋃_{n ∈ N} D_n.
(C) There exist two nondecreasing functions q and r such that
  Σ_{n=1}^∞ |π_n| q(δ_n) < +∞,  (C₁)
  Σ_{n=1}^∞ r(δ_n) < +∞,  (C₂)
  P( |φ(x) − φ(y)| ≥ r(d(x, y)) ) ≤ q(d(x, y))  (C₃)
for all x, y ∈ M with d(x, y) < ρ.

Choice of r
We want r to be "as large as possible" while still satisfying condition (C₂): we choose (with ρ < 1 and α > 1)
  r(h) := (1 / log₂(1/h))^α = (−1 / log₂ h)^α,  0 < h < 1,  (1)
since then
  r(δ_n) = r(1/2^n) = (1 / log₂(2^n))^α = 1/n^α,
which is summable because α > 1.

Choice of q
With q(h) := h^{(1+β)d} we have q(δ_n) = 1/2^{(1+β)dn} and
  |π_n| ≤ (2^n diam(M) + 1)^d · 3^d ≤ (6 diam(M))^d · 2^{nd},
so
  |π_n| q(δ_n) ≤ (6 diam(M))^d · 2^{nd} / 2^{(1+β)dn} = C_d (2^{−βd})^n.
Since βd > 0, we have 2^{βd} > 1, which yields a convergent (geometric) series.
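A small numeric sketch of why this choice of q makes (C₁) summable; the concrete values diam(M) = 1, d = 2, β = 1 are illustrative assumptions, not from the talk:

```python
# Check that the bound  |pi_n| q(delta_n) <= (6 diam M)^d * 2^{nd} * 2^{-(1+beta)dn}
# decays geometrically, so the series in condition (C_1) converges.
# (diam_M, d, beta are assumed illustrative values.)
diam_M, d, beta = 1.0, 2, 1.0

def bound_term(n):
    # (6 diam M)^d * 2^{nd} * 2^{-(1+beta) d n} = C_d * (2^{-beta d})^n
    return (6 * diam_M) ** d * 2 ** (n * d) * 2 ** (-(1 + beta) * d * n)

terms = [bound_term(n) for n in range(1, 11)]
ratios = [terms[i + 1] / terms[i] for i in range(len(terms) - 1)]
print(ratios[0])    # common ratio 2^{-beta d} = 0.25 < 1
print(sum(terms))   # partial sums converge (geometric series)
```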

Fundamental Lemma
Lemma. Under conditions (D) and (C), there exists a set Ω′ ⊂ Ω with P(Ω′) = 1 such that for all ω ∈ Ω′ there exists n(ω) ∈ N such that:
1. for all n ≥ n(ω),
  max_{(x,y) ∈ π_n} |φ(x, ω) − φ(y, ω)| ≤ r(δ_n);  (2)
2. for all m ≥ n ≥ n(ω), and every x, y ∈ D_m with d(x, y) ≤ δ_n, we have
  |φ(x, ω) − φ(y, ω)| ≤ 2 Σ_{k=n}^{m} r(δ_k).  (3)

We can now state continuity with respect to the set D.
Proposition. Under conditions (D) and (C) there exists a set Ω′ ⊂ Ω, P(Ω′) = 1, such that for all ω ∈ Ω′ the restriction of x ↦ φ(x, ω) to D is (uniformly) continuous.

Theorem. Let φ be continuous in probability and satisfy conditions (D) and (C). Then there exists Ω′ ⊂ Ω, P(Ω′) = 1, and a uniformly sample-continuous modification φ̃ of φ such that φ̃ = φ on D × Ω′.
Lemma. Suppose the random field φ satisfies condition (C₃) with
  lim_{x→0} q(x) = lim_{x→0} r(x) = 0
(which holds whenever the three conditions (C) are satisfied). Then φ is continuous in probability.

The naive simulation algorithm
Let S_n be a finite index set (of size n × n) associated with a Gaussian random vector of size n², denoted by X. The covariance C is then an n² × n² matrix. We know that if Z is a Gaussian vector of size n² whose components are i.i.d. standard normal (a "white noise") and A is such that A² = C, then X has the same law as AZ.
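An illustrative Python sketch of this naive method (the talk's simgauss is MATLAB; the grid size, kernel parameters and diagonal nugget below are assumptions): build the covariance matrix C, factor it as C = A Aᵀ by Cholesky, and set X = AZ with Z a standard Gaussian white-noise vector.

```python
import math, random

# Naive simulation sketch: X = A Z with A a Cholesky factor of C.
n = 8                                    # grid side (tiny, for illustration)
pts = [(i / n, j / n) for i in range(n) for j in range(n)]
var = 0.01                               # kernel variance, as in the figures

def cov(p, q):
    # Gaussian (squared-exponential) covariance kernel
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.exp(-d2 / (2 * var))

# a small diagonal "nugget" keeps the factorization numerically stable
C = [[cov(p, q) + (1e-6 if p == q else 0.0) for q in pts] for p in pts]

def cholesky(M):
    # plain Cholesky factorization M = L L^T for symmetric positive definite M
    m = len(M)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(M[i][i] - s) if i == j else (M[i][j] - s) / L[j][j]
    return L

A = cholesky(C)
Z = [random.gauss(0.0, 1.0) for _ in range(len(pts))]          # white noise
X = [sum(A[i][k] * Z[k] for k in range(len(pts))) for i in range(len(pts))]
```

The cost is the weak point: for an n × n grid the factorization is O(n⁶) flops and the matrix takes O(n⁴) memory, which is what the timings and the out-of-memory error that follow illustrate.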

Examples with Gaussian covariance
The MATLAB function simgauss(n) simulates a square grid of size (2^n + 1) × (2^n + 1). We obtain the following running times:
» simgauss(2); Elapsed time is 0.066492 seconds.
» simgauss(3); Elapsed time is 0.244321 seconds.
» simgauss(4); Elapsed time is 2.839962 seconds.
» simgauss(5); Elapsed time is 44.630163 seconds.

Examples with Gaussian covariance
» simgauss(6); Elapsed time is 771.891478 seconds.
FIGURE: Gaussian kernel, variance 0.01

Examples with Gaussian covariance
» simgauss(7); Elapsed time is 8551.463134 seconds.
FIGURE: Gaussian kernel, variance 0.01

Examples with Gaussian covariance
FIGURE: Gaussian kernel, variance 0.05
» simgauss(7); Elapsed time is 8475.642908 seconds.

Examples with Gaussian covariance
Time is not the only issue with the naive algorithm:
» simgauss(8);
Requested 66049x66049 (32.5GB) array exceeds maximum array size preference. Creation of arrays greater than this limit may take a long time and cause MATLAB to become unresponsive. See array size limit or preference panel for more information.
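A back-of-the-envelope check of that error message: simgauss(n) uses a (2^n + 1) × (2^n + 1) grid, so the dense covariance matrix has (2^n + 1)² rows and columns, stored in double precision (8 bytes per entry).

```python
# Memory footprint of the dense covariance matrix in the naive algorithm.
def cov_matrix_gib(n):
    side = (2 ** n + 1) ** 2            # number of grid points
    return side ** 2 * 8 / 2 ** 30      # GiB for the dense covariance matrix

print(round(cov_matrix_gib(8), 1))   # 32.5 -- matches MATLAB's "32.5GB"
print(round(cov_matrix_gib(7), 2))   # about 2 GiB: n = 7 still fits in RAM
```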

Generalized Random Fields
We consider the complexification of the Schwartz space S(R^d):
  S_C(R^d) = S(R^d) ⊕ i S(R^d).
Let (Ω, F, P) be a probability space, and denote by L⁰_K(P) the space of real- or complex-valued random variables, with K = R or C.
Definition. A generalized random field on R^d is a K-linear mapping
  ϕ : S_K(R^d) → L⁰_K(P),
where K = R or C.

Theorem (Minlos)
Let Ξ be a characteristic functional on S(R^d), i.e.
1. Ξ is continuous on S(R^d),
2. Ξ is positive definite,
3. Ξ(0) = 1.
Then there exists a unique probability measure µ on (S′(R^d), σ_w) such that for all f ∈ S(R^d)
  ∫_{S′(R^d)} e^{i⟨ω, f⟩} dµ(ω) = Ξ(f),
i.e. Ξ is the Fourier transform of a countably additive, positive, normalized measure.

Generalized white noise
Definition. A generalized random field W on R^d is called white noise if its characteristic functional is given by
  Ξ_WN(f) = e^{−½ ⟨f, f⟩_{L²(R^d)}}.
Remark: a real-valued white noise on R^d is a linear mapping W : S(R^d) → L⁰(Ω, F, P), where (Ω, F, P) is some probability space, such that the family {W(f), f ∈ S(R^d)} is a centered Gaussian family with
  Cov(W(f), W(g)) = E[W(f) W(g)] = ⟨f, g⟩_{L²(R^d)}.
Proposition. W extends to an isometric embedding of L²(R^d) into L²(P).

Let ϕ be a generalized random field. We define operations on ϕ via dual pairing:
Definition.
1. The Fourier transform F and its inverse F⁻¹ are defined on ϕ via
  Fϕ(f) := ϕ(F⁻¹f),  F⁻¹ϕ(f) := ϕ(Ff).
2. If g ∈ C^∞(R^d) has at most polynomial growth, then
  gϕ(f) := ϕ(gf).

Construction of stationary Gaussian random fields
Let ϕ be a stationary, real-valued, centered Gaussian random field with covariance
  Cov(ϕ(f), ϕ(g)) = ⟨f, Cg⟩_{L²(R^d)},  f, g ∈ S(R^d),
with C : S(R^d) → L²(R^d). Assume that C is given by an integral kernel, which by stationarity can be written as
  Cf(x) = ∫_{R^d} K(x − y) f(y) dy,
where K is even (because of the symmetry of C) and positive definite.

Hence, if K is continuous, Bochner's theorem gives
  K(x) = ∫ e^{2πi x·p} dΓ(p)
for a finite positive measure Γ. We assume that Γ has a density γ which is strictly positive and smooth; then γ^{1/2} is a strictly positive smooth square root of γ. We set
  ϕ(f) := (F⁻¹ γ^{1/2} F W)(f),  f ∈ S(R^d).

Discrete Fourier transform
Let N₁, …, N_d be even positive integers. For all u = (u₁, …, u_d) ∈ ∏_{i=1}^d ⟦0, N_i − 1⟧:
  F_{N₁,…,N_d}(f)(u) := (1/(N₁⋯N_d)) Σ_{0 ≤ k₁ ≤ N₁−1, …, 0 ≤ k_d ≤ N_d−1} e^{−2iπ(u₁k₁/N₁ + ⋯ + u_d k_d/N_d)} f(k₁, …, k_d).  (4)
We will denote by F^S_{(N_i)} the "symmetric" discrete Fourier transform
  F^S_{N₁,…,N_d}(g)(v) := (1/(N₁⋯N_d)) Σ_{−N_i/2 ≤ k_i ≤ N_i/2 − 1} e^{−2iπ(v₁k₁/N₁ + ⋯ + v_d k_d/N_d)} g(k₁, …, k_d).  (5)

We define the index re-centering maps
  sym_{(N_i)} : ∏_{i=1}^d ⟦0, N_i − 1⟧ → ∏_{i=1}^d ⟦−N_i/2, N_i/2 − 1⟧,  (x_i) ↦ (y_i) with
    y_i = x_i if x_i ∈ ⟦0, N_i/2 − 1⟧,
    y_i = x_i − N_i if x_i ∈ ⟦N_i/2, N_i − 1⟧,  (6)
and, conversely,
  sym⁻¹_{(N_i)} : ∏_{i=1}^d ⟦−N_i/2, N_i/2 − 1⟧ → ∏_{i=1}^d ⟦0, N_i − 1⟧,  (y_i) ↦ (x_i) with
    x_i = y_i if y_i ∈ ⟦0, N_i/2 − 1⟧,
    x_i = y_i + N_i if y_i ∈ ⟦−N_i/2, −1⟧.  (7)
So, when working with a function c : ∏_{i=1}^d ⟦−N_i/2, N_i/2 − 1⟧ → R, we will use the function
  c̃(x) := c(sym_{(N_i)}(x)),  x ∈ ∏_{i=1}^d ⟦0, N_i − 1⟧.

Lemma.
  F^S_{(N_i)}(f)(x) = F_{(N_i)}(f ∘ sym_{(N_i)})(sym⁻¹_{(N_i)}(x)).
We can now state the lemma on which the algorithm is built:
Lemma. Let E := ∏_{i=1}^d ⟦0, N_i − 1⟧, h : E → C, and let (A_p)_{p∈E}, (B_q)_{q∈E} be two independent white noises. The centered Gaussian random field indexed by ∏_{i=1}^d ⟦0, N_i/2⟧ defined by
  G(x) := Re( F⁻¹_{(N_i)}((A + iB)h)(x) )  (8)
has the following covariance function:
  E[G(x)G(y)] = Re( F⁻¹_{(N_i)}(h²)(sym⁻¹_{(N_i)}(y − x)) ).  (9)


Heuristic of the algorithm
Let c : (R^n)² → R be a (continuous) covariance function (i.e. symmetric and positive definite), assumed to be stationary (i.e. invariant under translation). Notation: c(h) := c(0, h), h ∈ R^n.
1. By a consequence of Bochner's theorem, this function has a real, positive-valued Fourier transform.
2. On the other hand, if c has compact support, or if its tails decrease to 0 sufficiently fast, the fast Fourier transform can be seen as an approximation of its Fourier transform.

Using these two facts, we may consider the values of the discrete Fourier transform of c to be real and (close to) positive, and, with h(x) := √(F_{(N_i)}(c)(x)) ∈ R₊, Lemma 5 provides a Gaussian random field with covariance c:
  E[G(x)G(y)] = Re( F⁻¹_{(N_i)}(h²)(y − x) )
   = Re( F⁻¹_{(N_i)}(F_{(N_i)}(c))(y − x) )
   = Re( c(x − y) )
   = c(x − y) = c(x, y).
Remark. In the general case (i.e. F_{(N_i)}(c)(x) ∈ R, by the symmetry of c, but not necessarily positive) we only have
  E[G(x)G(y)] = Re( F⁻¹_{(N_i)}(|F_{(N_i)}(c)|)(y − x) ).
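A 1-D numeric check of the identity behind the algorithm. The DFT conventions here are assumptions of this sketch (forward transform carries the 1/N factor, inverse carries none), and c is an illustrative periodically wrapped Gaussian covariance; with h := √(F_N(c)), the covariance Re(F_N⁻¹(h²)) must give back c on the grid.

```python
import cmath, math

N, sigma = 64, 0.05   # illustrative grid size and correlation length

def c(k):
    # stationary Gaussian covariance, wrapped periodically onto the grid
    x = min(k % N, N - k % N) / N
    return math.exp(-x * x / (2 * sigma ** 2))

def dft(f):   # forward transform, normalized by 1/N
    return [sum(f[k] * cmath.exp(-2j * math.pi * p * k / N) for k in range(N)) / N
            for p in range(N)]

def idft(g):  # inverse transform, no normalization
    return [sum(g[p] * cmath.exp(2j * math.pi * p * k / N) for p in range(N))
            for k in range(N)]

chat = dft([c(k) for k in range(N)])   # numerically real, >= 0 up to round-off
h = [cmath.sqrt(v) for v in chat]
cov = [z.real for z in idft([v * v for v in h])]
err = max(abs(cov[k] - c(k)) for k in range(N))
print(err)   # round-off size: the field's covariance reproduces c on the grid
```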

Unidimensional case
The idea is to approximate N values of the Fourier transform of a function f (assuming the function of interest has negligible values outside [−1/2, 1/2]), first by
  ∫_{−1/2}^{1/2} e^{−2iπ x y} f(y) dy,  x ∈ ⟦−N/2, N/2 − 1⟧,  (10)
and then by
  (1/N) F^s_N(f_N)(x) := (1/N) Σ_{k=−N/2}^{N/2−1} e^{−2iπ x k/N} f_N(k)  (11)
with f_N(k) := f(k/N).
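A numeric illustration of this two-step approximation; the Gaussian f, its closed-form full-line Fourier transform, and the parameter values are assumptions of this sketch, chosen so that the mass of f outside [−1/2, 1/2] is negligible.

```python
import cmath, math

# Compare the Riemann sum (1/N) F^s_N(f_N)(x) with the exact Fourier
# transform of a Gaussian f(y) = exp(-y^2 / (2 sigma^2)).
N, sigma, x = 256, 0.1, 3

f = lambda y: math.exp(-y * y / (2 * sigma ** 2))

# (1/N) F^s_N(f_N)(x) with f_N(k) = f(k/N), k = -N/2 .. N/2 - 1
S = sum(cmath.exp(-2j * math.pi * x * k / N) * f(k / N)
        for k in range(-N // 2, N // 2)) / N

# full-line Fourier transform of the Gaussian, in closed form
exact = sigma * math.sqrt(2 * math.pi) * math.exp(-2 * math.pi ** 2 * sigma ** 2 * x ** 2)
print(abs(S - exact))   # small; dominated by the mass of f outside [-1/2, 1/2]
```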

Error quantification
Proposition. Let N ∈ 2N, c : E → C, and let (A_p)_{p∈E}, (B_q)_{q∈E} be two independent white noises. The centered Gaussian random field indexed by ⟦0, N/2⟧ defined by
  G(x) := Re( F⁻¹_N((A + iB) √(|F_N(c)|))(x) )  (12)
has a covariance function c_G such that, for all x, y ∈ ⟦0, N/2⟧,
  |c(x, y) − c_G(x, y)| ≤ 2 n_N (ε_c + ε_N),  (13)
where n_N is the number of x such that F_N(c)(x) < 0, and ε_c is such that
  sup_{x ∈ [−1/2, 1/2]} | ∫_{R ∖ [−1/2, 1/2]} e^{−i2πxy} c(y) dy | ≤ ε_c.

with
  ε_N ≤ c(1/2)/N + Σ_{k=1}^{p} (b_{2k} / ((2k)! N^{2k})) c̄_{2k+1}(1/2) + R_{N,p} / (N^{2p+1}(2p+1)),  (14)
where
  c̄_{2k+1}(x) := Σ_{l=1, l odd}^{2k+1} (2π)^{2k+1−l} |c^{(l)}(x)|
and
  R_{N,p} ≤ (4π)^{2p+1} sup_{t ∈ [−1/2, 1/2], l ∈ ⟦0, 2p+1⟧} |c^{(l)}(t)| ∫₀¹ |B_{2p+1}(u)| du,  (15)
where B_i (resp. b_i := B_i(0)) are the Bernoulli polynomials (resp. Bernoulli numbers).

Example: the exponential covariance case
c(x) := e^{−x²/(2σ²)}. With p = 0 we obtain
  |c(x, y) − c_G(x, y)| ≤ 2 n_N [ e^{−1/(8σ²)} (4 + 1/N) + (1/(2N)) (1/σ² + 2π) ] =: 2 n_N e_N.
For N = 1000 we have:
  σ        0.09    0.1     0.11    0.12    0.13    0.14    0.15
  e_1000   0.0649  0.0532  0.0446  0.0385  0.0352  0.0355  0.0408
For N = 10000:
  σ        0.09    0.1     0.11    0.12    0.13    0.14    0.15
  e_10000  0.0065  0.0053  0.0046  0.0045  0.0057  0.0097  0.0180
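The tabulated values can be recomputed directly from the displayed p = 0 bound; this sketch does so for a few entries of the tables.

```python
import math

# Error factor e_N for c(x) = exp(-x^2 / (2 sigma^2)) with p = 0:
#   e_N = exp(-1/(8 sigma^2)) * (4 + 1/N) + (1/(2N)) * (1/sigma^2 + 2 pi)
def e(sigma, N):
    return math.exp(-1 / (8 * sigma ** 2)) * (4 + 1 / N) \
         + (1 / (2 * N)) * (1 / sigma ** 2 + 2 * math.pi)

print(round(e(0.09, 1000), 4))   # 0.0649 -- matches the N = 1000 table
print(round(e(0.10, 1000), 4))   # 0.0532
print(round(e(0.15, 1000), 4))   # 0.0408
print(round(e(0.10, 10000), 4))  # 0.0053 -- matches the N = 10000 table
```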

Example, the exponential covariance case
FIGURE: error e_N versus σ (p = 0, N = 1000)

Example, the exponential covariance case
FIGURE: error e_N versus σ (p = 0, N = 10000)

Example, the exponential covariance case
For p = 1 it follows that
  ε_N ≤ c(1/2)/N + (b₂/(2N²)) ( c̄₃(1/2) + c̄₃(−1/2) ) + R_{N,1}/(3N³)
   ≤ e^{−1/(8σ²)}/N + (2π/(3N²)) (1/σ² + 3/σ⁴ + 1/(4σ⁶)) + (1/N³) (1/σ² + 3/(2σ⁴) + 1/(8σ⁶)).
For N = 1000 we have:
  σ        0.04    0.05     0.06        0.07        0.08        0.09
  e_1000   0.0104  0.00275  9.32·10⁻⁴   3.751·10⁻⁴  1.713·10⁻⁴  8.71·10⁻⁵
and for σ = 0.095, e_1000 = 6.7484·10⁻⁵.
For N = 10000:
  σ        0.05       0.06       0.07        0.08        0.09
  e_10000  2.75·10⁻⁶  9.32·10⁻⁷  3.751·10⁻⁷  1.845·10⁻⁷  8.83·10⁻⁷

Example, the exponential covariance case
FIGURE: error e_N versus σ (p = 1, N = 1000), vertical scale ×10⁻³

Example, the exponential covariance case
FIGURE: error e_N versus σ (p = 1, N = 10000), vertical scale ×10⁻⁶

The compact support case
When the covariance function has compact support included in [−1/2, 1/2], the convergence speed can be improved (especially when the derivatives are uniformly bounded): for some p ∈ N, if c is of class C^p, then
  | ∫_{−1/2}^{1/2} e^{−2iπ x y} f(y) dy − (1/N) F^s_N(f_N)(x) | = |R_{N,p}| / (N^{2p+1}(2p+1)),
with |R_{N,p}| ≤ M_{2p+1} ∫₀¹ |B_{2p+1}(t)| dt.

The multidimensional case
We consider a continuous covariance function c : R² → R belonging to L¹(R²). Let ε_c be such that
  ∫_{R² ∖ [−1/2, 1/2]²} |c(u, v)| du dv ≤ ε_c.

| ∫_{[−1/2,1/2]²} g(u, v) du dv − (1/(MN)) Σ_{k=0}^{N−1} Σ_{l=0}^{M−1} g(u_k, v_l) |
  ≤ (1/(2N)) sup_{v ∈ [−1/2,1/2]} | g(−1/2, v) − g(1/2, v) |
  + (1/(2M)) sup_{u ∈ [−1/2,1/2]} | g(u, −1/2) − g(u, 1/2) |
  + Σ_{k=1}^{p} (b_{2k}/(2k)!) [ (1/N^{2k}) sup_{v ∈ [−1/2,1/2]} | ∂_u^{2k−1}g(−1/2, v) − ∂_u^{2k−1}g(1/2, v) |
    + (1/M^{2k}) sup_{u ∈ [−1/2,1/2]} | ∂_v^{2k−1}g(u, −1/2) − ∂_v^{2k−1}g(u, 1/2) | ]
  + (1/(N^{2p+1}(2p+1))) sup_{v ∈ [−1/2,1/2]} |R^v_{N,p}| + (1/(M^{2p+1}(2p+1))) sup_{u ∈ [−1/2,1/2]} |R^u_{M,p}|
  =: ε_{N,M}.

Example: the exponential covariance case
  c(u, v) = e^{−(u²+v²)/(2σ²)},  g(u, v) = c(u, v) e^{−2iπ(ux/N + vy/M)}.
Then
  ∂g/∂u (u, v) = −(u/σ²) e^{−(u²+v²)/(2σ²)} e^{−2iπ(ux/N + vy/M)} − 2iπ(x/N) g(u, v)
   = −g(u, v) (u/σ² + 2iπ x/N),
  ∂²g/∂u² (u, v) = g(u, v) ( (u/σ² + 2iπ x/N)² − 1/σ² ).

And
  ∂³g/∂u³ (u, v) = −g(u, v) ( (u/σ² + 2iπ x/N)³ − (3/σ²) (u/σ² + 2iπ x/N) ).
This leads to the following upper bound:
  |∂³g/∂u³ (u, v)| ≤ c(u, v) ( 4|u|³/σ⁶ + 4(2π)³ + 3|u|/σ⁴ + 6π/σ² ).
Examples: for N = 1024 we have the following results:
  σ         0.1      0.11     0.12     0.13      0.14      0.15
  e_{1024}  0.00097  0.00055  0.00033  0.000212  0.000183  0.000336
  σ         0.16      0.17    0.18    0.19   0.2
  e_{1024}  0.000995  0.0029  0.0073  0.019  0.031

The human eye
FIGURE: The eye in cross-section

Corneal endothelium
Definition (corneal endothelium). It separates the cornea from the aqueous humor. It is a cellular monolayer comprising about 500,000 cells, whose quality and quantity vary with age.

Morphology
The endothelium is composed of relatively regular, (almost) hexagonal cells. However, there are some more pronounced irregularities:
- on the edges (larger cells);
- in case of pathologies (e.g. cornea guttata);
- in case of a transplant, …

Voronoï partition
Question: how will we simulate the cells? If we already have a set of points, we can build a Voronoï partition: for each point p of the surface S, the Voronoï cell V(p) of p is the set of points closer to p than to any other point of S. The Voronoï partition V(S) is the partition formed by the Voronoï cells.

Poisson point process and Voronoï partition
Random points in space are most commonly generated using a Poisson point process… which is rather irregular…
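A sketch of this point-and-partition pipeline (the intensity, grid size and random seed are illustrative assumptions): draw a homogeneous Poisson point process on the unit square, then label each pixel of a grid by its nearest seed, i.e. by the Voronoï cell containing it.

```python
import math, random

random.seed(0)
intensity = 50                       # expected number of cell centers (assumed)

def poisson(lam):
    # Knuth's multiplication method; fine for moderate lam
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

# Poisson point process: Poisson number of points, uniform locations
n_pts = poisson(intensity)
seeds = [(random.random(), random.random()) for _ in range(n_pts)]

grid = 64
def label(i, j):
    # index of the nearest seed = Voronoi cell of pixel (i, j)
    x, y = (i + 0.5) / grid, (j + 0.5) / grid
    return min(range(n_pts),
               key=lambda s: (seeds[s][0] - x) ** 2 + (seeds[s][1] - y) ** 2)

labels = [[label(i, j) for j in range(grid)] for i in range(grid)]
```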

Superposition with compactly supported functions
We use the combination G₁(x) + f(x) G₂(x), where f is a function with compact support:
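A concrete (assumed) choice of such an f is the standard smooth bump function; outside its support the combination G₁(x) + f(x)G₂(x) reduces to the stationary field G₁, so the field is altered only inside a chosen region. The radius and center below are illustrative parameters, not from the talk.

```python
import math

# Smooth bump with compact support: f = exp(-1 / (1 - |x - c|^2 / r^2))
# inside the disc of radius r around the center, 0 outside.
def bump(x, y, r=0.3, cx=0.5, cy=0.5):
    d2 = ((x - cx) ** 2 + (y - cy) ** 2) / r ** 2
    return math.exp(-1.0 / (1.0 - d2)) if d2 < 1.0 else 0.0

print(bump(0.5, 0.5))   # exp(-1) at the center: maximal influence of G2
print(bump(0.9, 0.9))   # 0.0 -- outside the disc, G1 + f*G2 == G1
```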

Non-stationary Gaussian random field: "large center"
FIGURE: simulated field on a 1000 × 1000 grid

Voronoï partition with high density in the center

Weak local density
We use a combination G₁(x) + f(x) G₂(x), where f is one minus a compactly supported function.

Voronoï partition with weak local density, I

Voronoï partition with weak local density, II

Random field for simulating cornea guttata

Conclusion and perspectives
Quantification of the error, together with combinations of stationary Gaussian random fields (with suitably chosen functions from the algebra generated by smooth functions with compact support), provides an easy-to-use and interesting modeling tool, at least in the medical setting in which we had to work. We are now working with combinations of stationary random fields whose multiplicative functions are taken from a wavelet basis.

Thank you for the invitation!