Mean field simulation for Monte Carlo integration. Part I: Intro. to nonlinear Markov models (+ links with integro-differential equations). P. Del Moral


1 Mean field simulation for Monte Carlo integration. Part I: Intro. to nonlinear Markov models (+ links with integro-differential equations). P. Del Moral, INRIA Bordeaux & Inst. Maths. Bordeaux & CMAP Polytechnique. Lectures, INLN CNRS & Nice Sophia Antipolis Univ.
Some hyper-refs:
- Mean field simulation for Monte Carlo integration. Chapman & Hall, Maths & Stats [600p.] (May 2013).
- Feynman-Kac formulae. Genealogical & Interacting Particle Systems with applications. Springer [573p.] (2004).
- Concentration Inequalities for Mean Field Particle Models. Ann. Appl. Probab. (2011) (with Rio).
- Coalescent tree based functional representations for some Feynman-Kac particle models. Ann. Appl. Probab. (2009) (with Patras, Rubenthaler).
- Particle approximations of a class of branching distribution flows arising in multi-target tracking. SIAM J. Control & Optim. (2011) (with Caron, Doucet, Pace).
- On the stability & the approximation of branching distribution flows, with applications to nonlinear multiple target filtering. Stoch. Analysis and Appl. (2011) (with Caron, Pace, Vo).
More references on the website http:// delmoral/index.html [+ Links]

2 Contents
- Some basic notation
- Markov chains
  - Linear evolution equations
  - Finite state space models
  - Monte Carlo methods
  - Links with linear integro-diff. eq.
- Nonlinear Markov models
  - Nonlinear evolution equations
  - Mean field particle models
  - Graphical illustration
  - Links with nonlinear integro-diff. eq.
- How & Why it works?
  - Some interpretations
  - A stochastic perturbation analysis
  - Uniform concentration inequalities

3 Some basic notation
Markov chains
Nonlinear Markov models
How & Why it works?

4-5 Lebesgue integral on a measurable state space E. (µ, f) = (measure, function) ∈ (M(E) × B_b(E)):
µ(f) = ∫ µ(dx) f(x)
Dirac measure at a ∈ E: µ = δ_a ⇒ δ_a(f) = ∫ f(x) δ_a(dx) = f(a)
Normalization of a positive measure µ ∈ M_+(E) (when µ(1) > 0):
µ̄(dx) := µ(dx)/µ(1) = probability ∈ P(E)
Example: µ = 10 × Law(X) ⇒ µ(1) = 10 & µ̄ = Law(X) & µ̄(f) = E(f(X))

6-7 Examples. E = {1,...,d} ⇒ matrix-vector notation:
µ := [µ(1),...,µ(d)] (row vector) and f := (f(1),...,f(d))' (column vector), so that
µ(f) := Σ_{i=1}^d µ(i) f(i)
X = Σ_{1≤i≤N} δ_{X^i} spatial Poisson point process with intensity µ:
N = Poisson random variable with E(N) = µ(1) and X^i iid ~ µ̄
⇒ E(X(f)) = E(E(X(f) | N)) = E(N µ̄(f)) = µ(1) µ̄(f) = µ(f)
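A quick numerical check of the identity E(X(f)) = µ(f) above, as a minimal Python/NumPy sketch; the intensity µ, the test function f and the sample sizes are illustrative choices, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5
mu = np.array([2.0, 0.5, 1.0, 3.0, 1.5])   # positive measure on E = {1,...,d}
f = np.array([1.0, -2.0, 0.3, 4.0, 2.0])   # test function f(1),...,f(d)

mass = mu.sum()                 # mu(1)
mu_bar = mu / mass              # normalized probability mu_bar

def sample_X_f(rng):
    """Sample X = sum_i delta_{X^i} with N ~ Poisson(mu(1)), X^i iid ~ mu_bar,
    and return X(f) = sum_i f(X^i)."""
    N = rng.poisson(mass)
    if N == 0:
        return 0.0
    points = rng.choice(d, size=N, p=mu_bar)
    return f[points].sum()

samples = np.array([sample_X_f(rng) for _ in range(100_000)])
print("Monte Carlo E(X(f)):", samples.mean())
print("mu(f)              :", mu @ f)
```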

8-9 Q(x, dy) integral operator from E into E'. Two operator actions: f ∈ B_b(E') ↦ Q(f) ∈ B_b(E) and µ ∈ M(E) ↦ µQ ∈ M(E'), with
Q(f)(x) = ∫ Q(x, dx') f(x')   and   [µQ](dx') = ∫ µ(dx) Q(x, dx')   ( ⇒ [µQ](f) := µ[Q(f)] )
and the composition
(Q_1 Q_2)(x, dx'') = ∫ Q_1(x, dx') Q_2(x', dx'')
Notation (used in variance descriptions):
Q([f − Q(f)]²)(x) := ∫ Q(x, dy) [f(y) − Q(f)(x)]²
Q Markov operator ⇔ x ↦ Q(x, dy) ∈ P(E'); notation: M or K (Markov transition kernels / stochastic matrices)

10 Finite state spaces E = {1,...,d} and E' = {1,...,d'}. Action on the right:
f = (f(1),...,f(d'))' ∈ B_b(E') ↦ Q(f) = (Q(f)(1),...,Q(f)(d))' ∈ B_b(E)
Matrix operation: Q(f) is the d×d' matrix (Q(i,j)) applied to the column vector f, i.e. Q(f)(i) = Σ_{j=1}^{d'} Q(i,j) f(j).

11 Finite state spaces E = {1,...,d} and E' = {1,...,d'}. Action on the left:
µ = [µ(1),...,µ(d)] ∈ M(E) ↦ µQ = [(µQ)(1),...,(µQ)(d')] ∈ M(E')
Matrix operation: µQ is the row vector µ multiplied by the matrix (Q(i,j)), i.e. (µQ)(j) = Σ_{i=1}^d µ(i) Q(i,j).
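In the finite-state case the two operator actions are plain matrix products. The short NumPy sketch below, with an arbitrary 3×2 Markov kernel Q chosen only for illustration, checks the duality [µQ](f) = µ[Q(f)].

```python
import numpy as np

# Q(i, j): integral operator from E = {1,2,3} into E' = {1,2}
Q = np.array([[0.2, 0.8],
              [0.5, 0.5],
              [1.0, 0.0]])          # each row sums to 1: Q is Markov

mu = np.array([0.1, 0.3, 0.6])      # measure on E (row vector)
f = np.array([2.0, -1.0])           # function on E' (column vector)

Qf = Q @ f          # action on the right: Q(f) in B_b(E)
muQ = mu @ Q        # action on the left : mu Q in M(E')

print("mu[Q(f)]  =", mu @ Qf)
print("[mu Q](f) =", muQ @ f)       # same number: [mu Q](f) = mu[Q(f)]
```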

12-14 Boltzmann-Gibbs transformation: G ≥ 0 s.t. µ(G) > 0:
Ψ_G(µ)(dx) = (1/µ(G)) G(x) µ(dx)
Bayes' rule (with fixed observation y): µ(dx) = p(x) dx and G(x) = p(y|x)
⇒ Ψ_G(µ)(dx) = (1/p(y)) p(y|x) p(x) dx = p(x|y) dx
Restriction: µ(dx) = P(X ∈ dx) = p(x) dx and G(x) = 1_A(x)
⇒ Ψ_G(µ)(dx) = (1/P(X ∈ A)) 1_A(x) p(x) dx = P(X ∈ dx | X ∈ A)

15-16 Boltzmann-Gibbs transformation: G ≥ 0 s.t. µ(G) > 0:
Ψ_G(µ)(dx) = (1/µ(G)) G(x) µ(dx)
Markov transport equation:
Ψ_G(µ)(dy) = ∫ µ(dx) S_µ(x, dy)   i.e.   Ψ_G(µ) = µ S_µ
Example 1 (G ≤ 1): accept/reject/recycling/interacting jumps
S_µ(x, dy) = G(x) δ_x(dy) + (1 − G(x)) Ψ_G(µ)(dy)
Note: for the empirical measure (1/N) Σ_{1≤j≤N} δ_{X^j},
S_{(1/N) Σ_j δ_{X^j}}(X^i, dy) = G(X^i) δ_{X^i}(dy) + (1 − G(X^i)) Σ_{1≤j≤N} [ G(X^j) / Σ_{1≤k≤N} G(X^k) ] δ_{X^j}(dy)
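For the empirical measure of a particle system, the transport step S_µ of Example 1 is exactly the accept/reject-with-recycling move written in the note above. A minimal sketch, assuming an ad hoc potential G(x) = exp(−x²/2) on E = R and Gaussian initial particles (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def G(x):
    """Potential G with values in (0, 1] (illustrative choice)."""
    return np.exp(-0.5 * x**2)

def accept_reject_recycle(X, rng):
    """One transport step S_mu applied to each particle of mu = (1/N) sum_j delta_{X^j}:
      - accept (keep X^i) with probability G(X^i),
      - otherwise jump onto a particle X^j chosen with probability prop. to G(X^j)."""
    N = len(X)
    g = G(X)
    accept = rng.uniform(size=N) < g
    recycled = rng.choice(N, size=N, p=g / g.sum())   # interacting jump targets
    return np.where(accept, X, X[recycled])

X = rng.normal(loc=2.0, scale=1.0, size=5000)         # initial particles ~ mu
X_new = accept_reject_recycle(X, rng)
# After the step, the cloud approximates Psi_G(mu) (mean pulled from 2 toward 1 here).
print(X.mean(), X_new.mean())
```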

17 Other examples of transport equations Ψ_G(µ) = µ S_µ:
Example 2: ε_µ s.t. ε_µ G ≤ 1 µ-a.e.:
S_µ(x, dy) = ε_µ G(x) δ_x(dy) + (1 − ε_µ G(x)) Ψ_G(µ)(dy)
Example 3: a s.t. G > a:
S_µ(x, dy) = (a/µ(G)) δ_x(dy) + (1 − a/µ(G)) Ψ_{G−a}(µ)(dy)
Example 4:
S_µ(x, dy) = α(x) δ_x(dy) + (1 − α(x)) Ψ_{[G−G(x)]_+}(µ)(dy)
with the acceptance rate α(x) = µ[G ∧ G(x)]/µ(G)

18 Some basic notation
Markov chains
  - Linear evolution equations
  - Finite state space models
  - Monte Carlo methods
  - Links with linear integro-diff. eq.
Nonlinear Markov models
How & Why it works?

19-20 Example: Markov chain X_{n+1} = F_n(X_n, W_n):
η_0 = Law(X_0) and M_n(x_{n−1}, dx_n) = P(X_n ∈ dx_n | X_{n−1} = x_{n−1})
η_0 M_1 = Law(X_1) and M_n(f)(x) = E(f(X_n) | X_{n−1} = x)
as well as
(M_1 ... M_n)(x_0, dx_n) = P(X_n ∈ dx_n | X_0 = x_0) and η_0 M_1 ... M_n = Law(X_n)
E = {1,...,d} ⇒ matrix-vector notation

21-22 Markov chain models X_{n+1} = F_n(X_n, W_n) ⇒ linear evolution equations:
Law(X_n) := η_n ⇒ η_{n+1} = η_n M_{n+1}
E = {1,...,d} ⇒ simple matrix computations:
η_{n+1} = [η_n(1),...,η_n(d)] × (M_{n+1}(i,j))_{1≤i,j≤d}
Note: the same equations hold for X_n ∈ E_n = {1,...,d_n} with M_{n+1} ∈ R_+^{d_n × d_{n+1}}

23 (Crude) Monte Carlo methods: Markov chain X_{n+1} = F_n(X_n, W_n) ∈ E_n ⇒ linear evolution equations Law(X_n) := η_n ⇒ η_{n+1} = η_n M_{n+1}.
N iid copies/samples X^i_{n+1} = F_n(X^i_n, W^i_n), 1 ≤ i ≤ N:
η^N_n := (1/N) Σ_{1≤i≤N} δ_{X^i_n} →_{N↑∞} η_n
η^N_n(f) = (1/N) Σ_{1≤i≤N} f(X^i_n) →_{N↑∞} η_n(f) = E(f(X_n))
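A minimal sketch of the crude Monte Carlo scheme above for an illustrative linear Gaussian chain X_{n+1} = a X_n + W_n (the chain, the value of a and the test function f(x) = x² are assumptions made only for this example); η^N_n(f) is compared with the exact value of E(f(X_n)).

```python
import numpy as np

rng = np.random.default_rng(2)

a = 0.7
N = 100_000          # number of iid copies
n_steps = 20

def F(x, w):
    """Markov transition X_{n+1} = F_n(X_n, W_n) (illustrative AR(1) chain)."""
    return a * x + w

X = rng.normal(size=N)                 # X_0^i iid ~ eta_0 = N(0,1)
for n in range(n_steps):
    W = rng.normal(size=N)
    X = F(X, W)                        # independent evolution of the N copies

f = lambda x: x**2
print("eta_n^N(f) =", np.mean(f(X)))
# exact value: E[X_n^2] = a^(2n) + (1 - a^(2n)) / (1 - a^2)
exact = a**(2 * n_steps) + (1 - a**(2 * n_steps)) / (1 - a**2)
print("eta_n(f)   =", exact)
```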

24 (Crude) Monte Carlo methods: Markov chain X_n ∈ E_n with transitions M_n. Path space models:
P^N_n := (1/N) Σ_{1≤i≤N} δ_{(X^i_0,...,X^i_n)} →_{N↑∞} P_n = Law(X_0,...,X_n)
Historical process X_n := (X_0,...,X_n) ∈ E_0 × ... × E_n, itself a Markov chain (on path space) with transitions M_n.
Same model as before: P_{n+1} = η_{n+1} = η_n M_{n+1}
and, with N iid copies X^i_n = (X^i_0,...,X^i_n) of the historical process,
η^N_n := (1/N) Σ_{1≤i≤N} δ_{X^i_n} = P^N_n →_{N↑∞} P_n = η_n

25-27 Diffusion type & discrete generation models (d=1). Markov chain with Gaussian perturbations:
X_{n+1} = F_n(X_n, W_n) = X_n + a_n(X_n) + b_n(X_n) W_n
with the Gaussian random variable P(W_n ∈ dw) = (1/√(2π)) exp(−w²/2) dw.
Diffusion limits when (t_{n+1} − t_n) → 0 & (a_n, b_n, X_n) ≃ (a'_{t_n}, b'_{t_n}, X'_{t_n}) (with the drift and diffusion terms rescaled by Δt and √Δt respectively):
dX'_t = a'_t(X'_t) dt + b'_t(X'_t) dW_t
Taylor expansion (Itô's formula):
f(X'_t + dX'_t) − f(X'_t) = f'(X'_t) dX'_t + (1/2) f''(X'_t) dX'_t · dX'_t + ...
 = L_t(f)(X'_t) dt + f'(X'_t) b'_t(X'_t) dW_t   [martingale term, E(·) = 0]
using dW_t · dW_t = dt, with the infinitesimal generator
L_t(f) = a'_t f' + (1/2) (b'_t)² f''
Weak parabolic PDE: η_t = Law(X'_t) ⇒ (d/dt) η_t(f) = η_t(L_t(f))
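To connect the discrete chain with its diffusion limit and the weak PDE (d/dt) η_t(f) = η_t(L_t(f)), here is an illustrative sketch with Ornstein-Uhlenbeck coefficients a'(x) = −x, b'(x) = 1 and f(x) = x² (all chosen for the example): the generator gives the closed ODE (d/dt) E[X²_t] = 1 − 2 E[X²_t], which is compared with the Euler-type chain X_{n+1} = X_n + a'(X_n) Δt + b'(X_n) √Δt W_n.

```python
import numpy as np

rng = np.random.default_rng(3)

a = lambda x: -x          # drift a'_t(x)
b = lambda x: 1.0         # diffusion coefficient b'_t(x)

T, dt, N = 1.0, 1e-2, 100_000
n_steps = int(T / dt)

X = np.ones(N)            # X_0 = 1
for _ in range(n_steps):
    W = rng.normal(size=N)
    X = X + a(X) * dt + b(X) * np.sqrt(dt) * W

# Weak PDE for f(x) = x^2:  d/dt E[X_t^2] = eta_t(L_t f) = 1 - 2 E[X_t^2],
# whose solution with E[X_0^2] = 1 is m(t) = 1/2 + (1/2) exp(-2 t).
print("Monte Carlo E[X_T^2]:", np.mean(X**2))
print("ODE solution m(T)   :", 0.5 + 0.5 * np.exp(-2 * T))
```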

28-29 Two-steps jump-diffusion type model (d=1):
X_n → X_{n+1/2} = F_n(X_n, W_n) → X_{n+1} = ε_{n+1} X_{n+1/2} + (1 − ε_{n+1}) Y_{n+1}
with
ε_{n+1} ~ e^{−V_n(X_{n+1/2})} δ_1 + (1 − e^{−V_n(X_{n+1/2})}) δ_0   and   Y_{n+1} ~ P_n(X_{n+1/2}, dy)
Jump-diffusion limits: (a_n, b_n, V_n, P_n, X_n) ≃ (a'_{t_n}, b'_{t_n}, V_{t_n}, P_{t_n}, X'_{t_n});
between jumps = diffusion as before, and jump times
T_{n+1} = inf{ t ≥ T_n : ∫_{T_n}^t V_s(X'_s) ds ≥ e_n },   e_n iid ~ Exp(λ = 1)

30-33 Two-steps jump-diffusion type model (d=1), with the diffusion part M_{n+1} and the jump part J_{n+1}:
X_n →[M_{n+1}] X_{n+1/2} →[J_{n+1}] X_{n+1}
M_{n+1}(f)(x) ≃ f(x) + L^diff_t(f)(x)
J_{n+1}(f)(x) = e^{−V_{t_n}(x)} f(x) + (1 − e^{−V_{t_n}(x)}) P_{t_n}(f)(x)
 ≃ f(x) + V_{t_n}(x) [ P_{t_n}(f)(x) − f(x) ] =: f(x) + L^jump_t(f)(x)
⇒ MJ = (I + L^diff)(I + L^jump) ≃ I + (L^diff + L^jump)
⇒ Weak integro-differential equation: η_t = Law(X'_t) ⇒ (d/dt) η_t(f) = η_t(L_t(f)) with the infinitesimal generator
L_t(f)(x) = a'_t(x) f'(x) + (1/2) (b'_t(x))² f''(x) + V_t(x) ∫ (f(y) − f(x)) P_t(x, dy)
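A minimal simulation sketch of the two-step jump-diffusion mechanism above, with the time step made explicit (so the jump probability over one generation is 1 − e^{−V(x) Δt}); the drift, diffusion coefficient, jump intensity V and jump kernel P below are illustrative choices, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(4)

a = lambda x: -x                 # drift (illustrative)
b = lambda x: 1.0                # diffusion coefficient (illustrative)
V = lambda x: x**2               # jump intensity (illustrative)
P_jump = lambda x, rng: 0.5 * x + 0.1 * rng.normal()   # jump kernel P_t(x, dy)

def two_step(x, dt, rng):
    """One generation X_n -> X_{n+1/2} (diffusion part) -> X_{n+1} (jump part)."""
    w = rng.normal()
    x_half = x + a(x) * dt + b(x) * np.sqrt(dt) * w          # M_{n+1}
    if rng.uniform() < 1.0 - np.exp(-V(x_half) * dt):        # J_{n+1}
        return P_jump(x_half, rng)
    return x_half

T, dt = 2.0, 1e-3
x = 1.0
for _ in range(int(T / dt)):
    x = two_step(x, dt, rng)
print("one trajectory value at time T:", x)
```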

34 Some basic notation
Markov chains
Nonlinear Markov models
  - Nonlinear evolution equations
  - Mean field particle models
  - Graphical illustration
  - Links with nonlinear integro-diff. eq.
How & Why it works?

35-36 Mean field models: Markov chain X_{n+1} = F_n(X_n, η_n, W_n) ⇒ nonlinear evolution equations:
Law(X_n) := η_n ⇒ η_{n+1} = Φ_{n+1}(η_n) := η_n K_{n+1,η_n}
with Markov transitions K_{n,η} indexed by the measure η (an element of the simplex S^(d−1) ⊂ R^d_+ when E = {1,...,d}):
K_{n+1,η_n}(x_n, dx_{n+1}) = P(X_{n+1} ∈ dx_{n+1} | X_n = x_n)
When E = {1,...,d} ⇒ simple matrix computations:
η_{n+1} = [η_n(1),...,η_n(d)] × (K_{n+1,η_n}(i,j))_{1≤i,j≤d}

37-38 Example 1: Markov chain X_{n+1} = F_n(X_n, η_n, W_n) ∈ E := {−1, +1}. McKean's two-velocities model of gases:
F_n(X_n, η_n, W_n) = X_n 1_{W_n ≤ η_n(1)} + (−X_n) 1_{W_n > η_n(1)}   (W_n uniform on [0,1])
i.e. X_{n+1} = ε_n X_n with the Bernoulli random variable P(ε_n = +1) = 1 − P(ε_n = −1) = η_n(1).
Nonlinear equation on the simplex S^(0) ≃ [0, 1]:
η_{n+1}(1) = η_n(1)² + η_n(−1)² = η_n(1)² + (1 − η_n(1))²  →_{n↑∞}  1/2   if η_0(1) ∉ {0, 1}
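A minimal particle sketch of McKean's two-velocities model: the N interacting particles use the empirical proportion η^N_n(1) in place of η_n(1), and the proportion of +1 velocities is driven toward the fixed point 1/2 of the quadratic map above (the initial proportion 0.9 and the parameters are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(5)

N, n_steps = 100_000, 25
X = np.where(rng.uniform(size=N) < 0.9, 1, -1)   # eta_0^N(1) ~ 0.9

eta1 = 0.9                                       # exact flow eta_n(1)
for n in range(n_steps):
    p = np.mean(X == 1)                          # empirical eta_n^N(1)
    eps = np.where(rng.uniform(size=N) < p, 1, -1)
    X = eps * X                                  # X_{n+1}^i = eps_n^i X_n^i
    eta1 = eta1**2 + (1 - eta1)**2               # exact nonlinear recursion

print("particle proportion of +1 :", np.mean(X == 1))
print("exact eta_n(1)            :", eta1)       # both close to 1/2
```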

39-42 Example 2: G_n(i) (e.g. = e^{−V_n(i)}) ∈ ]0,1] & (M_{n+1}(i,j))_{i,j=1,...,d} Markov transition:
K_{n+1,η_n}(i,j) = G_n(i) M_{n+1}(i,j) + (1 − G_n(i)) Σ_k [ η_n(k) G_n(k) / Σ_l η_n(l) G_n(l) ] M_{n+1}(k,j)
⇒ Σ_k η_n(k) K_{n+1,η_n}(k,j)
 = Σ_k η_n(k) G_n(k) M_{n+1}(k,j) + [ 1 − Σ_l η_n(l) G_n(l) ] Σ_k [ η_n(k) G_n(k) / Σ_l η_n(l) G_n(l) ] M_{n+1}(k,j)
 = Σ_k [ η_n(k) G_n(k) / Σ_l η_n(l) G_n(l) ] M_{n+1}(k,j)
Schematically: η_{n+1} is obtained from η_n by a multiplication by Diag(G_n(1),...,G_n(d)) (weighting) followed by a Markov transport by the matrix (M_{n+1}(i,j)), up to normalization.

43-44 Example 2: G_n(i) (e.g. = e^{−V_n(i)}) ∈ ]0,1] & (M_{n+1}(i,j))_{i,j=1,...,d} Markov transition:
η_{n+1} = [Ψ_{G_n}(η_n)(1),...,Ψ_{G_n}(η_n)(d)] × (M_{n+1}(i,j))_{1≤i,j≤d}
with the Boltzmann-Gibbs transformation Ψ_{G_n} : η ∈ M(E) ↦ Ψ_{G_n}(η) ∈ M(E) defined by
Ψ_{G_n}(η)(i) = (1/η(G_n)) G_n(i) η(i)
⇒ Nonlinear evolution equation on the (d−1)-simplex η_n ∈ S^(d−1) ⊂ R^d_+:
η_{n+1} = Φ_{n+1}(η_n) := Ψ_{G_n}(η_n) M_{n+1}
= Feynman-Kac models
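On a finite state space the Feynman-Kac flow η_{n+1} = Ψ_{G_n}(η_n) M_{n+1} can be iterated exactly in a few lines of NumPy; the sketch below uses an arbitrary time-homogeneous potential G and Markov matrix M (illustrative choices) and serves later as a reference for the particle approximation.

```python
import numpy as np

d = 3
M = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])        # Markov transition M_{n+1}(i, j)
G = np.array([0.9, 0.5, 0.1])          # potential G_n(i) in (0, 1]

def feynman_kac_step(eta, G, M):
    """eta_{n+1} = Psi_{G_n}(eta_n) M_{n+1} on the (d-1)-simplex."""
    bg = G * eta
    bg /= bg.sum()                     # Boltzmann-Gibbs transformation
    return bg @ M                      # Markov transport

eta = np.full(d, 1.0 / d)              # eta_0 = uniform distribution
for n in range(50):
    eta = feynman_kac_step(eta, G, M)
print("eta_n =", eta, " (sums to", eta.sum(), ")")
```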

45-46 Mean field models: Markov X_{n+1} = F_n(X_n, η_n, W_n) ∈ E = R^d ⇒ nonlinear evolution equations:
Law(X_n) := η_n ⇒ η_{n+1} = Φ_{n+1}(η_n) := η_n K_{n+1,η_n}
Mean field particle-simulation models: N almost iid copies/samples
X^i_{n+1} = F_n(X^i_n, η^N_n, W^i_n), 1 ≤ i ≤ N
with the occupation measure of the system at time n:
η^N_n := (1/N) Σ_{1≤i≤N} δ_{X^i_n} ≃_{N↑∞} η_n
[By induction on n]: η^N_{n+1} = (1/N) Σ_{1≤i≤N} δ_{X^i_{n+1}} with X^i_{n+1} ≃ F_n(X^i_n, η_n, W^i_n) ⇒ η^N_{n+1} ≃_{N↑∞} η_{n+1}

47 An abstract mathematical model. Nonlinear evolution equation:
η_{n+1} = Φ_{n+1}(η_n) := η_n K_{n+1,η_n}
Nonlinear Markov model interpretation (not unique):
K_{n+1,η_n}(x_n, dx_{n+1}) = P(X_{n+1} ∈ dx_{n+1} | X_n = x_n) with Law(X_n) := η_n
Mean field particle-simulation model:
X^i_n → X^i_{n+1} = r.v. with law K_{n+1,η^N_n}(X^i_n, dx), where η^N_n = (1/N) Σ_{i=1}^N δ_{X^i_n}
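The abstract mean field particle scheme is a generic loop: given a routine that samples from K_{n+1,η}(x, dx) for any probability measure η, every particle is moved with the empirical measure η^N_n plugged in. A schematic Python sketch; the name sample_from_K, the particular kernel used here and the representation of η^N_n by the raw particle array are assumptions of this example.

```python
import numpy as np

def mean_field_step(particles, sample_from_K, rng):
    """One step of the mean field particle model:
    X_{n+1}^i ~ K_{n+1, eta_n^N}(X_n^i, dx), with eta_n^N represented by
    the array of current particle positions."""
    eta_N = particles      # empirical measure (1/N) sum_i delta_{X_n^i}
    return np.array([sample_from_K(x, eta_N, rng) for x in particles])

# Illustrative kernel: K_{n+1,eta}(x, dx) = law of 0.5*x + 0.5*mean(eta) + small noise
def sample_from_K(x, eta_N, rng):
    return 0.5 * x + 0.5 * eta_N.mean() + 0.1 * rng.normal()

rng = np.random.default_rng(6)
particles = rng.normal(size=1000)
for n in range(20):
    particles = mean_field_step(particles, sample_from_K, rng)
print("eta_n^N(f) for f(x) = x :", particles.mean())
```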

48 Local sampling errors. Nonlinear evolution equation η_{n+1} = Φ_{n+1}(η_n); mean field particle model:
V^N_{n+1} := √N ( η^N_{n+1} − Φ_{n+1}(η^N_n) )   ⟺   η^N_{n+1} = Φ_{n+1}(η^N_n) + (1/√N) V^N_{n+1}
Note: for f with osc(f) := sup_{x,y} (f(x) − f(y)) ≤ 1,
E( η^N_{n+1}(f) | η^N_n ) = η^N_n( K_{n+1,η^N_n}(f) ) = Φ_{n+1}(η^N_n)(f)
N E( [ η^N_{n+1}(f) − Φ_{n+1}(η^N_n)(f) ]² | η^N_n ) = η^N_n[ K_{n+1,η^N_n}( [f − K_{n+1,η^N_n}(f)]² ) ] ≤ 1

49-51 McKean measures: Markov chain X_n ∈ E_n with transitions K_{n,η_{n−1}}:
P^N_n := (1/N) Σ_{1≤i≤N} δ_{(X^i_0,...,X^i_n)} →_{N↑∞} P_n = Law(X_0,...,X_n)
with the McKean measures
P_n(d(x_0,...,x_n)) = η_0(dx_0) ∏_{1≤p≤n} K_{p,η_{p−1}}(x_{p−1}, dx_p)
Historical process X_n := (X_0,...,X_n) ∈ E_0 × ... × E_n, a Markov chain (on path space) with transitions K_{n,η_{n−1}}.
Same mathematical model as before: P_{n+1} = η_{n+1} = η_n K_{n+1,η_n}
and, with N almost iid copies X^i_n = (X^i_0,...,X^i_n) of the historical process,
η^N_n := (1/N) Σ_{1≤i≤N} δ_{X^i_n} = P^N_n →_{N↑∞} P_n = η_n

52-53 Example 1: Markov chain X_{n+1} = F_n(X_n, η_n, W_n) ∈ E = R^{d=1}. McKean-Vlasov diffusion type & discrete generation models:
X_{n+1} = F_n(X_n, η_n, W_n) = a_n(X_n, η_n) + b_n(X_n, η_n) W_n
with the Gaussian random variable P(W_n ∈ dw) = (1/√(2π)) exp(−w²/2) dw.
Two-steps jump type model:
X_n → X_{n+1/2} = F_n(X_n, η_n, W_n) → X_{n+1} = ε_{n+1} X_{n+1/2} + (1 − ε_{n+1}) Y_{n+1}
with
ε_{n+1} ~ e^{−V_{η_{n+1/2}}(X_{n+1/2})} δ_1 + (1 − e^{−V_{η_{n+1/2}}(X_{n+1/2})}) δ_0   and   Y_{n+1} ~ P_{η_{n+1/2}}(X_{n+1/2}, dy)

54-56 Example 1 (continued): McKean-Vlasov diffusion type & discrete generation models with
a_n(X_n, η_n) = ∫ η_n(dy) a'_n(X_n, y)   &   b_n(X_n, η_n) = ∫ η_n(dy) b'_n(X_n, y)
Mean field particle model:
X^i_{n+1} = F_n(X^i_n, η^N_n, W^i_n) = (1/N) Σ_{j=1}^N a'_n(X^i_n, X^j_n) + [ (1/N) Σ_{j=1}^N b'_n(X^i_n, X^j_n) ] W^i_n
In the same vein: two-steps mean field jump type model
X^i_n → X^i_{n+1/2} = F_n(X^i_n, η^N_n, W^i_n) → X^i_{n+1} = ε^i_{n+1} X^i_{n+1/2} + (1 − ε^i_{n+1}) Y^i_{n+1}
with
ε^i_{n+1} ~ e^{−V_{η^N_{n+1/2}}(X^i_{n+1/2})} δ_1 + (1 − e^{−V_{η^N_{n+1/2}}(X^i_{n+1/2})}) δ_0   and   Y^i_{n+1} ~ P_{η^N_{n+1/2}}(X^i_{n+1/2}, dy)
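The McKean-Vlasov particle formula above is a direct O(N²)-per-step computation. A minimal NumPy sketch with illustrative interaction kernels a'_n(x, y) = (x + y)/2 and b'_n ≡ 0.3 (both assumptions made only for this example):

```python
import numpy as np

rng = np.random.default_rng(7)

a_kernel = lambda x, y: 0.5 * (x + y)   # a'_n(x, y), illustrative choice
sigma = 0.3                              # b'_n(x, y) taken constant for simplicity

N, n_steps = 2000, 30
X = rng.normal(loc=5.0, scale=2.0, size=N)

for n in range(n_steps):
    W = rng.normal(size=N)
    # a_n(X^i, eta_n^N) = (1/N) sum_j a'_n(X^i, X^j)  (O(N^2) pairwise average)
    drift = a_kernel(X[:, None], X[None, :]).mean(axis=1)
    X = drift + sigma * W                # X_{n+1}^i = a_n(X^i, eta^N) + b_n W^i

print("empirical mean and std of eta_n^N:", X.mean(), X.std())
```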

57-59 Example 2: back to the multiplication-transport model η_{n+1} = Ψ_{G_n}(η_n) M_{n+1}.
Nonlinear transport equation: Ψ_{G_n}(µ) = µ S_{n,µ} with
S_{n,µ}(x, ·) = G_n(x) δ_x + (1 − G_n(x)) Ψ_{G_n}(µ)
Nonlinear Markov chain model:
η_{n+1} = Φ_{n+1}(η_n) := η_n K_{n+1,η_n}   with   K_{n+1,η_n} := S_{n,η_n} M_{n+1}
Mean field particle model = genetic type interacting jump model:
X^i_n →[S_{n,η^N_n}] X̂^i_n →[M_{n+1}] X^i_{n+1}   (composition = K_{n+1,η^N_n})
with
S_{n,η^N_n}(X^i_n, dy) = G_n(X^i_n) δ_{X^i_n}(dy) + (1 − G_n(X^i_n)) Σ_{1≤j≤N} [ G_n(X^j_n) / Σ_{1≤k≤N} G_n(X^k_n) ] δ_{X^j_n}(dy)
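Combining the selection transition S_{n,η^N_n} with a mutation step M_{n+1} gives the genetic-type particle algorithm. The sketch below approximates the finite-state Feynman-Kac flow with the same illustrative (G, M) as in the exact-recursion sketch above, so the two outputs can be compared.

```python
import numpy as np

rng = np.random.default_rng(8)

M = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])            # mutation kernel M_{n+1}(i, j)
G = np.array([0.9, 0.5, 0.1])              # selection potential G_n(i)

N, n_steps, d = 2000, 50, 3
X = rng.integers(0, d, size=N)             # X_0^i iid uniform on {1,...,d}

for n in range(n_steps):
    # Selection S_{n, eta_n^N}: keep X^i with prob. G(X^i),
    # otherwise jump onto X^j chosen with prob. proportional to G(X^j)
    g = G[X]
    accept = rng.uniform(size=N) < g
    recycled = rng.choice(N, size=N, p=g / g.sum())
    X_hat = np.where(accept, X, X[recycled])
    # Mutation M_{n+1}: move each selected particle with the Markov matrix
    X = np.array([rng.choice(d, p=M[x]) for x in X_hat])

eta_N = np.bincount(X, minlength=d) / N
print("particle estimate eta_n^N =", eta_N)
```

Running the exact recursion feynman_kac_step from the earlier sketch for the same number of steps gives a probability vector close to this estimate, up to the O(1/√N) particle fluctuation.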

60-78 Graphical illustration: η_n ≃ η^N_n := (1/N) Σ_{1≤i≤N} δ_{X^i_n}  [sequence of figure slides; the pictures are not reproduced in this transcription]

79-83 Some questions:
- Law(X^i_n | acceptance) = η_n?
- Convergence of the occupation measures of the genealogical trees?
- Use of the occupation measures of the complete ancestral tree?
- Convergence of the product of success proportions?
⇒ Feynman-Kac integration models on path spaces ⇒ Part II & Part III (application areas) & Part V (theoretical results)
- Nonlinear evolution equations on M(E_n): γ_{n+1} = γ_n Q_{n+1,γ_n} ⇒ Part IV (multiple object filtering)
- Link with nonlinear integro-differential equations?

84-86 Nonlinear diffusion type models (d=1):
X_{n+1} = F_n(X_n, η_n, W_n) = X_n + a_n(X_n, η_n) + b_n(X_n, η_n) W_n
with the Gaussian random variable P(W_n ∈ dw) = (1/√(2π)) exp(−w²/2) dw.
Diffusion limits when (t_{n+1} − t_n) → 0 & (a_n, b_n, X_n, η_n) ≃ (a'_{t_n}, b'_{t_n}, X'_{t_n}, η_{t_n}):
dX'_t = a'_t(X'_t, η_t) dt + b'_t(X'_t, η_t) dW_t
Weak parabolic nonlinear PDE: η_t = Law(X'_t) ⇒ (d/dt) η_t(f) = η_t(L_{t,η_t}(f)) with the infinitesimal generator
L_{t,η_t}(f)(x) = a'_t(x, η_t) f'(x) + (1/2) (b'_t(x, η_t))² f''(x)

87-89 Two-steps jump-diffusion type model (d=1):
X_n → X_{n+1/2} = F_n(X_n, η_n, W_n) → X_{n+1} = ε_{n+1} X_{n+1/2} + (1 − ε_{n+1}) Y_{n+1}
with ε_{n+1} ~ e^{−V_{η_{n+1/2}}(X_{n+1/2})} δ_1 + (1 − e^{−V_{η_{n+1/2}}(X_{n+1/2})}) δ_0 and Y_{n+1} ~ P_{η_{n+1/2}}(X_{n+1/2}, dy).
Jump-diffusion limits: (a_n, b_n, V_n, P_n, X_n, η_n) ≃ (a'_{t_n}, b'_{t_n}, V_{η_{t_n}}, P_{η_{t_n}}, X'_{t_n}, η_{t_n});
between jumps = nonlinear diffusion as before, and jump times
T_{n+1} = inf{ t ≥ T_n : ∫_{T_n}^t V_{η_s}(X'_s) ds ≥ e_n },   e_n iid ~ Exp(λ = 1)
Weak integro-differential equation:
(d/dt) η_t(f) = η_t(L_{t,η_t}(f))
L_{t,η_t}(f)(x) = a'_t(x, η_t) f'(x) + (1/2) (b'_t(x, η_t))² f''(x) + V_{η_t}(x) ∫ (f(y) − f(x)) P_{η_t}(x, dy)

90 An abstract mathematical model. Nonlinear evolution equation in P(E):
(d/dt) η_t(f) = η_t(L_{t,η_t}(f))
Mean field particle-simulation model = Markov process on E^N with generator
L_t(φ)(x^1,...,x^N) = Σ_{1≤i≤N} L^{(i)}_{t, (1/N) Σ_{1≤j≤N} δ_{x^j}}(φ)(x^1,...,x^i,...,x^N)
where L^{(i)}_{t,η} acts on the i-th coordinate.

91-92 Local sampling errors. Nonlinear evolution equation:
dη_t(f) = η_t(L_{t,η_t}(f)) dt
Mean field particle model (useless in practice):
dη^N_t(f) = η^N_t(L_{t,η^N_t}(f)) dt + (1/√N) dM^N_t(f)
with a sequence of martingales M^N_t(f) with predictable increasing process
⟨M^N(f)⟩_t = ∫_0^t η^N_s( Γ_{L_{s,η^N_s}}(f, f) ) ds
where Γ_L stands for the carré du champ operator
Γ_L(f, f)(x) := L([f − f(x)]²)(x) = L(f²)(x) − 2 f(x) L(f)(x)

93 Some basic notation
Markov chains
Nonlinear Markov models
How & Why it works?
  - Some interpretations
  - A stochastic perturbation analysis
  - Uniform concentration inequalities

94 How & Why it works?
- (Biology) Individual based models (IBM).
- (Physics) Mean field approximation schemes.
- (Computer Sciences) Stochastic adaptive grid approximation.
- (Stats) Universal acceptance-rejection-recycling sampling schemes.
- (Probab.) Stochastic linearization/perturbation technique:
η_n = Φ_n(η_{n−1})   ⟷   η^N_n = Φ_n(η^N_{n−1}) + (1/√N) V^N_n
Theorem: (V^N_n)_{n≥0} →_N (V_n)_{n≥0}, independent centered Gaussian fields.
Φ_{p,n}(η_p) = η_n stable semigroup ⇒ no propagation of local errors ⇒ uniform control w.r.t. the time horizon ⇒ new concentration inequalities for (general) interacting processes.

95-97 Stochastic perturbation analysis = Taylor expansion. Gâteaux derivative of Φ_n : P(E_{n−1}) → P(E_n):
(∂/∂ε) Φ_n(η + εµ)(f) |_{ε=0} = µ[ d_η Φ_n(f) ]
with the first order linear/integral operator d_η Φ_n : f ∈ B_b(E_n) ↦ d_η Φ_n(f) ∈ B_b(E_{n−1})
⇒ Φ_n(µ) − Φ_n(η) ≃ (µ − η) d_η Φ_n
Differential rule: d_η(Φ_{n+1} ∘ Φ_n) = d_η Φ_n · d_{Φ_n(η)} Φ_{n+1}
⇒ Semigroup derivatives: Φ_{p,n}(µ) − Φ_{p,n}(η) ≃ (µ − η) d_η Φ_{p,n}

98 Key idea = first order expansions. Key telescoping decomposition:
η^N_n − η_n = Σ_{p=0}^n [ Φ_{p,n}(η^N_p) − Φ_{p,n}( Φ_p(η^N_{p−1}) ) ]
First order expansion:
√N [ Φ_{p,n}(η^N_p) − Φ_{p,n}( Φ_p(η^N_{p−1}) ) ]
 = √N [ Φ_{p,n}( Φ_p(η^N_{p−1}) + (1/√N) V^N_p ) − Φ_{p,n}( Φ_p(η^N_{p−1}) ) ]
 = V^N_p D_{p,n} + (1/√N) R^N_{p,n}
with a predictable first order operator D_{p,n} (fluctuation term) and a second-order measure R^N_{p,n} (bias term).

99 Stochastic perturbation model:
W^N_n := √N [ η^N_n − η_n ] = Σ_{0≤p≤n} V^N_p D_{p,n} + (1/√N) R^N_n
Under some mixing condition on the limiting semigroups Φ_{p,n}:
osc( D_{p,n}(f) ) ≤ Cte · e^{−α(n−p)}   and   E( |R^N_n(f)|^m ) ≤ Cte^m · (2m)! / (2^m m!)
⇒ Uniform concentration estimates w.r.t. the time parameter.

100 Uniform concentration inequalities. Constants (c_1, c_2) related to (bias, variance), c a finite constant; test functions/observables with ‖f_n‖ ≤ 1; (x ≥ 0, n ≥ 0, N ≥ 1). When E_n = R^d:
F_n(y) := η_n( 1_{(−∞,y]} ) and F^N_n(y) := η^N_n( 1_{(−∞,y]} ), y ∈ R^d.
The probability of each of the following events is greater than 1 − e^{−x}:
[η^N_n − η_n](f_n) ≤ (c_1/N) (1 + x + √x) + (c_2/√N) √x
sup_{0≤p≤n} | [η^N_p − η_p](f_p) | ≤ c √( x log(n + e) / N )
‖F^N_n − F_n‖ ≤ c √( d (x + 1) / N )

101-102 Coalescent tree based expansions. Weak propagation of chaos Taylor type expansions:
P^{(N,q)}_n = Law of the first q ancestral lines (q ≤ N)
 = Q^{⊗q}_n + Σ_{1≤l≤m} (1/N^l) d^l P^{(q)}_n + O(1/N^{m+1})
with signed measures d^l P^{(q)}_n, 1 ≤ l ≤ m, expressed in terms of coalescent trees.
Romberg-Richardson interpolation: for any order l ≥ 1,
Σ_{1≤m≤l} [ (−1)^{l−m} m^l / (m! (l−m)!) ] P^{(mN,q)}_n = Q^{⊗q}_n + O(1/N^l)


More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

MAT 135B Midterm 1 Solutions

MAT 135B Midterm 1 Solutions MAT 35B Midterm Solutions Last Name (PRINT): First Name (PRINT): Student ID #: Section: Instructions:. Do not open your test until you are told to begin. 2. Use a pen to print your name in the spaces above.

More information

Discretization of Stochastic Differential Systems With Singular Coefficients Part II

Discretization of Stochastic Differential Systems With Singular Coefficients Part II Discretization of Stochastic Differential Systems With Singular Coefficients Part II Denis Talay, INRIA Sophia Antipolis joint works with Mireille Bossy, Nicolas Champagnat, Sylvain Maire, Miguel Martinez,

More information

Consistency of the maximum likelihood estimator for general hidden Markov models

Consistency of the maximum likelihood estimator for general hidden Markov models Consistency of the maximum likelihood estimator for general hidden Markov models Jimmy Olsson Centre for Mathematical Sciences Lund University Nordstat 2012 Umeå, Sweden Collaborators Hidden Markov models

More information

Math Spring Practice for the final Exam.

Math Spring Practice for the final Exam. Math 4 - Spring 8 - Practice for the final Exam.. Let X, Y, Z be three independnet random variables uniformly distributed on [, ]. Let W := X + Y. Compute P(W t) for t. Honors: Compute the CDF function

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

Lecture Notes 3 Convergence (Chapter 5)

Lecture Notes 3 Convergence (Chapter 5) Lecture Notes 3 Convergence (Chapter 5) 1 Convergence of Random Variables Let X 1, X 2,... be a sequence of random variables and let X be another random variable. Let F n denote the cdf of X n and let

More information

STABILITY AND UNIFORM APPROXIMATION OF NONLINEAR FILTERS USING THE HILBERT METRIC, AND APPLICATION TO PARTICLE FILTERS 1

STABILITY AND UNIFORM APPROXIMATION OF NONLINEAR FILTERS USING THE HILBERT METRIC, AND APPLICATION TO PARTICLE FILTERS 1 The Annals of Applied Probability 0000, Vol. 00, No. 00, 000 000 STABILITY AND UNIFORM APPROXIMATION OF NONLINAR FILTRS USING TH HILBRT MTRIC, AND APPLICATION TO PARTICL FILTRS 1 By François LeGland and

More information

Malliavin calculus and central limit theorems

Malliavin calculus and central limit theorems Malliavin calculus and central limit theorems David Nualart Department of Mathematics Kansas University Seminar on Stochastic Processes 2017 University of Virginia March 8-11 2017 David Nualart (Kansas

More information

Math 3215 Intro. Probability & Statistics Summer 14. Homework 5: Due 7/3/14

Math 3215 Intro. Probability & Statistics Summer 14. Homework 5: Due 7/3/14 Math 325 Intro. Probability & Statistics Summer Homework 5: Due 7/3/. Let X and Y be continuous random variables with joint/marginal p.d.f. s f(x, y) 2, x y, f (x) 2( x), x, f 2 (y) 2y, y. Find the conditional

More information

Learning Session on Genealogies of Interacting Particle Systems

Learning Session on Genealogies of Interacting Particle Systems Learning Session on Genealogies of Interacting Particle Systems A.Depperschmidt, A.Greven University Erlangen-Nuremberg Singapore, 31 July - 4 August 2017 Tree-valued Markov processes Contents 1 Introduction

More information

Sequential Monte Carlo Samplers for Applications in High Dimensions

Sequential Monte Carlo Samplers for Applications in High Dimensions Sequential Monte Carlo Samplers for Applications in High Dimensions Alexandros Beskos National University of Singapore KAUST, 26th February 2014 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Alex

More information

On the concentration properties of Interacting particle processes

On the concentration properties of Interacting particle processes ISTITUT ATIOAL DE RECHERCHE E IFORMATIQUE ET E AUTOMATIQUE arxiv:1107.1948v2 [math.a] 12 Jul 2011 On the concentration properties of Interacting particle processes Pierre Del Moral, Peng Hu, Liming Wu

More information