Mean field simulation for Monte Carlo integration. Part II: Feynman-Kac models. P. Del Moral


1. Mean field simulation for Monte Carlo integration. Part II: Feynman-Kac models.
P. Del Moral, INRIA Bordeaux & Inst. Maths. Bordeaux & CMAP Polytechnique.
Lectures, INLN CNRS & Nice Sophia Antipolis Univ.

Some hyper-refs:
- Mean field simulation for Monte Carlo integration. Chapman & Hall, Maths & Stats [600p.] (May 2013).
- Feynman-Kac formulae. Genealogical & Interacting Particle Systems with applications. Springer [573p.] (2004).
- Particle approximations of Lyapunov exponents connected to Schrödinger operators and Feynman-Kac semigroups. ESAIM P&S (2003) (joint work with L. Miclo).
- Coalescent tree based functional representations for some Feynman-Kac particle models. Annals of Applied Probability (2009) (joint work with F. Patras, S. Rubenthaler).
- On the Concentration Properties of Interacting Particle Processes. Foundations & Trends in Machine Learning [170p.] (2012) (joint work with P. Hu & L. Wu).

More references on the website http:// delmoral/index.html [+ links].

2. Contents

- Introduction: Mean field simulation; Interacting jump models
- Feynman-Kac models: Path integration models; The 3 key formulae; Stability properties
- Some illustrations (→ Part III): Markov restrictions; Multilevel splitting; Absorbing Markov chains; Quasi-invariant measures; Gradient of Markov semigroups
- Some bad tempting ideas
- Interacting particle interpretations: Nonlinear evolution equation; Mean field particle models; Graphical illustration; Island particle models
- Concentration inequalities: Current population models; Particle free energy/genealogical tree models; Backward particle models

3. Introduction: Mean field simulation; Interacting jump models. Feynman-Kac models. Some illustrations (→ Part III). Some bad tempting ideas. Interacting particle interpretations. Concentration inequalities.

4. Introduction

Mean field simulation = a universal adaptive & interacting sampling technique.

Part I: 2 types of stochastic interacting particle models:
- Diffusive particle models with mean field drifts [McKean-Vlasov style]
- Interacting jump particle models [Boltzmann & Feynman-Kac style]

5. Part II: Interacting jump models

Interacting jumps = recycling transitions = discrete generation models (geometric jump times).

6-7. Equivalent particle algorithms

- Sequential Monte Carlo: Sampling / Resampling
- Particle Filters: Prediction / Updating
- Genetic Algorithms: Mutation / Selection
- Evolutionary Population: Exploration / Branching-selection
- Diffusion Monte Carlo: Free evolutions / Absorption
- Quantum Monte Carlo: Walkers motions / Reconfiguration
- Sampling Algorithms: Transition proposals / Accept-reject-recycle

More lively buzzwords: bootstrapping, spawning, cloning, pruning, replenishing, splitting, condensation, resampled Monte Carlo, enrichment, go with the winner, subset simulation, rejection and weighting, look-ahead sampling, pilot exploration, ...

8. A single stochastic model: particle interpretation of Feynman-Kac path integrals.

9. Introduction. Feynman-Kac models: Path integration models; The 3 key formulae; Stability properties. Some illustrations (→ Part III). Some bad tempting ideas. Interacting particle interpretations. Concentration inequalities.

10-11. Feynman-Kac models

FK models = a Markov chain $X_n \in E_n$ and potential functions $G_n : E_n \to [0,\infty[$:
$$d\mathbb{Q}_n := \frac{1}{Z_n}\Big\{\prod_{0\le p<n} G_p(X_p)\Big\}\,d\mathbb{P}_n \qquad \text{with } \mathbb{P}_n = \mathrm{Law}(X_0,\ldots,X_n).$$

Flow of $n$-th time marginals: $\eta_n(f) = \gamma_n(f)/\gamma_n(1)$ with
$$\gamma_n(f) := \mathbb{E}\Big(f(X_n)\prod_{0\le p<n} G_p(X_p)\Big).$$

Evolution equations: with $M_n$ the Markov transitions of $X_n$ and $Q_{n+1}(x,dy) = G_n(x)\,M_{n+1}(x,dy)$,
$$\gamma_{n+1} = \gamma_n Q_{n+1} \qquad\text{and}\qquad \eta_{n+1} = \Psi_{G_n}(\eta_n)\,M_{n+1}.$$

12. The 3 key formulae

Time marginal measures = path space measures:
$$\boldsymbol{\gamma}_n(\mathbf{f}_n) = \mathbb{E}\Big(\mathbf{f}_n(\mathbf{X}_n)\prod_{0\le p<n}\mathbf{G}_p(\mathbf{X}_p)\Big)
\quad\text{with } \mathbf{X}_n := (X_0,\ldots,X_n)\ \&\ \mathbf{G}_n(\mathbf{X}_n) = G_n(X_n)
\quad\Longrightarrow\quad \boldsymbol{\eta}_n = \mathbb{Q}_n.$$

Normalizing constants (= free energy models):
$$Z_n = \mathbb{E}\Big(\prod_{0\le p<n} G_p(X_p)\Big) = \prod_{0\le p<n}\eta_p(G_p).$$
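The evolution equations and the product formula above can be checked exactly on a finite state space. The following minimal Python sketch (illustrative only; the 3-state chain, the potential and the horizon are my own arbitrary choices, not from the slides) propagates both the unnormalized flow $\gamma_{n+1}=\gamma_n Q_{n+1}$ and the normalized flow $\eta_{n+1}=\Psi_{G_n}(\eta_n)M_{n+1}$, and verifies $Z_n = \gamma_n(1) = \prod_{p<n}\eta_p(G_p)$.

```python
import numpy as np

# Hypothetical 3-state chain M and potential G (time homogeneous, for simplicity).
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])              # M[x, y] = M(x, dy)
G = np.array([1.0, 0.5, 0.2])                # G(x) in [0, infinity)
eta0 = np.ones(3) / 3                        # initial law eta_0
n = 10

# Unnormalized flow: gamma_{n+1} = gamma_n Q_{n+1}, with Q(x, dy) = G(x) M(x, dy)
Q = G[:, None] * M
gamma = eta0.copy()
for _ in range(n):
    gamma = gamma @ Q

# Normalized flow eta_{n+1} = Psi_G(eta_n) M and product formula Z_n = prod_p eta_p(G)
eta, Z_prod = eta0.copy(), 1.0
for _ in range(n):
    Z_prod *= eta @ G                        # eta_p(G_p)
    eta = ((eta * G) / (eta @ G)) @ M        # Boltzmann-Gibbs transform, then the M-step

print(gamma.sum(), Z_prod)                   # Z_n = gamma_n(1) = prod_{p<n} eta_p(G_p)
print(gamma / gamma.sum(), eta)              # eta_n = gamma_n / gamma_n(1)
```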

13-14. The last key: backward Markov models

$$\mathbb{Q}_n(d(x_0,\ldots,x_n)) \propto \eta_0(dx_0)\,Q_1(x_0,dx_1)\cdots Q_n(x_{n-1},dx_n),
\qquad Q_n(x_{n-1},dx_n) := G_{n-1}(x_{n-1})\,M_n(x_{n-1},dx_n).$$

If we assume (hyp) $Q_n(x_{n-1},dx_n) = H_n(x_{n-1},x_n)\,\nu_n(dx_n)$ and set
$$\eta_{n+1}(dx) = \frac{1}{\eta_n(G_n)}\,\eta_n\big(H_{n+1}(\cdot,x)\big)\,\nu_{n+1}(dx),
\qquad
\mathbb{M}_{n+1,\eta_n}(x_{n+1},dx_n) = \frac{\eta_n(dx_n)\,H_{n+1}(x_n,x_{n+1})}{\eta_n\big(H_{n+1}(\cdot,x_{n+1})\big)},$$
then we find the backward equation
$$\eta_{n+1}(dx_{n+1})\,\mathbb{M}_{n+1,\eta_n}(x_{n+1},dx_n) = \frac{1}{\eta_n(G_n)}\,\eta_n(dx_n)\,Q_{n+1}(x_n,dx_{n+1}).$$

Backward Markov chain model:
$$\mathbb{Q}_n(d(x_0,\ldots,x_n)) = \eta_n(dx_n)\,\mathbb{M}_{n,\eta_{n-1}}(x_n,dx_{n-1})\cdots \mathbb{M}_{1,\eta_0}(x_1,dx_0),$$
with the dual/backward Markov transitions $\mathbb{M}_{n+1,\eta_n}(x_{n+1},dx_n) \propto \eta_n(dx_n)\,H_{n+1}(x_n,x_{n+1})$.

15. Stability properties

Transition / excursion / path spaces:
$$\mathcal{X}_n = (X_n, X_{n+1}), \qquad \mathcal{X}_n = X_{[T_n,T_{n+1}]}, \qquad \mathcal{X}_n = (X_0,\ldots,X_n).$$

Continuous time models (Langevin diffusions):
$$\mathcal{X}_n = X_{[t_n,t_{n+1}]} \quad\&\quad G_n(\mathcal{X}_n) = \exp\Big(\int_{t_n}^{t_{n+1}} V_t(X_t)\,dt\Big),$$
or Euler schemes (Langevin diffusion → Metropolis-Hastings moves), or fully continuous time particle models → Schrödinger operators.

Important observation:
$$\gamma_t(1) = \mathbb{E}\Big(\exp\int_0^t V_s(X_s)\,ds\Big), \qquad
\frac{d}{dt}\gamma_t(f) = \gamma_t\big(L^V_t(f)\big) \ \text{ with } L^V_t = L_t + V_t$$
$$\Longrightarrow\quad \gamma_t(1) = \exp\Big(\int_0^t \eta_s(V_s)\,ds\Big) \quad\text{with } \eta_t = \gamma_t/\gamma_t(1).$$

16-17. Stability properties: change of probability measures, importance sampling (IS), sequential Monte Carlo (SMC)

For any target probability measures of the form
$$\mathbb{Q}_{n+1}(d(x_0,\ldots,x_{n+1})) \propto \mathbb{Q}_n(d(x_0,\ldots,x_n))\,Q_{n+1}(x_n,dx_{n+1})
\propto \eta_0(dx_0)\,Q_1(x_0,dx_1)\cdots Q_{n+1}(x_n,dx_{n+1}),$$
and any Markov transition $M_{n+1}$ s.t. $Q_{n+1}(x_n,\cdot) \ll M_{n+1}(x_n,\cdot)$:
$$G_n(x_n,x_{n+1}) = \frac{\text{Target at time }(n+1)}{\text{Target at time }(n)\times\text{Twisted transition}}
= \frac{dQ_{n+1}(x_n,\cdot)}{dM_{n+1}(x_n,\cdot)}(x_{n+1}).$$

⟹ Feynman-Kac model with $\mathcal{X}_n = (X_n,X_{n+1})$:
$$d\mathbb{Q}_n = \frac{1}{Z_n}\Big\{\prod_{0\le p<n} G_p(\mathcal{X}_p)\Big\}\,d\mathbb{P}_n
\qquad\text{with } \mathbb{P}_n = \mathrm{Law}(\mathcal{X}_0,\ldots,\mathcal{X}_n).$$

18. Introduction. Feynman-Kac models. Some illustrations (→ Part III): Markov restrictions; Multilevel splitting; Absorbing Markov chains; Quasi-invariant measures; Gradient of Markov semigroups. Some bad tempting ideas. Interacting particle interpretations. Concentration inequalities.

19. Markov restrictions

Confinements: $X_n$ a random walk on $\mathbb{Z}^d$, $A \subset \mathbb{Z}^d$ & $G_n := 1_A$:
$$\mathbb{Q}_n = \mathrm{Law}\big((X_0,\ldots,X_n)\ \big|\ X_p\in A,\ 0\le p<n\big),
\qquad Z_n = \mathrm{Proba}\big(X_p\in A,\ 0\le p<n\big).$$

SAW (self-avoiding walks): $\mathcal{X}_n = (X_p)_{0\le p\le n}$ & $G_n(\mathcal{X}_n) = 1_{X_n\notin\{X_0,\ldots,X_{n-1}\}}$:
$$\mathbb{Q}_n = \mathrm{Law}\big((X_0,\ldots,X_n)\ \big|\ X_p\ne X_q,\ 0\le p<q<n\big),
\qquad Z_n = \mathrm{Proba}\big(X_p\ne X_q,\ 0\le p<q<n\big).$$
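For the confinement example, the non-absorption probability $Z_n = \mathrm{Proba}(X_p\in A,\ 0\le p<n)$ can be computed exactly by iterating $\gamma_n = \gamma_{n-1}Q$ with $G = 1_A$. A small sketch (the box half-width and the horizon are arbitrary illustrative choices):

```python
import numpy as np

# Confinement of a simple random walk on Z in A = {-10, ..., 10}:  G = 1_A.
# States are truncated to {-11, ..., 11}; walkers outside A get weight 0,
# so the behaviour at the artificial truncation boundary is irrelevant for Z_n.
L = 10
states = np.arange(-L - 1, L + 2)
M = np.zeros((len(states), len(states)))
for i in range(len(states)):
    M[i, max(i - 1, 0)] += 0.5                   # step -1 (clipped at the truncation)
    M[i, min(i + 1, len(states) - 1)] += 0.5     # step +1
G = (np.abs(states) <= L).astype(float)          # indicator of A

gamma = (states == 0).astype(float)              # start at the origin: gamma_0 = delta_0
for n in range(1, 201):
    gamma = (gamma * G) @ M                      # gamma_n = gamma_{n-1} Q,  Q(x, dy) = 1_A(x) M(x, dy)
    if n % 50 == 0:
        print(n, gamma.sum())                    # Z_n = Proba(X_p in A, 0 <= p < n)
```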

20. Multilevel splitting

Decreasing level sets $A_n$, with $B$ a non-critical recurrent subset.
$$T_n := \inf\{t\ge T_{n-1}\,:\, X_t\in (A_n\cup B)\}.$$
Excursion-valued Feynman-Kac model:
$$\mathcal{X}_n = (X_t)_{t\in[T_n,T_{n+1}]} \quad\&\quad G_n(\mathcal{X}_n) = 1_{A_{n+1}}(X_{T_{n+1}})$$
$$\Longrightarrow\quad \mathbb{Q}_n = \mathrm{Law}\big(X_{[T_0,T_n]}\ \big|\ X \text{ hits } A_{n-1} \text{ before } B\big),
\qquad Z_n = \mathbb{P}\big(X \text{ hits } A_{n-1} \text{ before } B\big).$$

21. Absorbing Markov chains

$$X^c_n \in E^c_n \ \xrightarrow{\ \text{absorption } \sim(1-G_n)\ }\ \widehat{X}^c_n \ \xrightarrow{\ \text{exploration } \sim M_{n+1}\ }\ X^c_{n+1}$$
$$\mathbb{Q}_n = \mathrm{Law}\big((X^c_0,\ldots,X^c_n)\ \big|\ T^{\mathrm{abs}}\ge n\big)
\quad\&\quad Z_n = \mathrm{Proba}\big(T^{\mathrm{abs}}\ge n\big).$$

Quasi-invariant measures: $(G_n,M_n)=(G,M)$ with $M$ $\mu$-reversible:
$$\frac{1}{n}\log \mathbb{P}\big(T^{\mathrm{abs}}\ge n\big) \xrightarrow[n\to\infty]{} \lambda = \text{top spectral value of } Q(x,dy)=G(x)\,M(x,dy) \quad\text{[Frobenius theorem]}$$
$$Q(h)=\lambda h \ \Longrightarrow\ \lambda\text{-eigenfunction } h \ (\text{ground state}),
\qquad \mathbb{P}\big(X^c_n\in dx\ \big|\ T^{\mathrm{abs}}>n\big) \xrightarrow[n\to\infty]{} \frac{1}{\mu(h)}\,h(x)\,\mu(dx).$$

22. Doob h-processes

Doob $h$-process $X^h$ with
$$\mathbb{Q}_n(d(x_0,\ldots,x_n)) \propto \mathbb{P}\big((X^h_0,\ldots,X^h_n)\in d(x_0,\ldots,x_n)\big)\,h^{-1}(x_n),
\qquad M^h(x,dy) = \frac{1}{\lambda}\,h^{-1}(x)\,Q(x,dy)\,h(y) = \frac{M(x,dy)\,h(y)}{M(h)(x)}.$$

Invariant measure $\mu^h = \mu^h M^h$. Additive functionals:
$$F_n(x_0,\ldots,x_n) = \frac{1}{n}\sum_{p\le n} f(x_p) \ \Longrightarrow\ \mathbb{Q}_n(F_n)\xrightarrow[n\to\infty]{}\mu^h(f).$$

If $G=G_\theta$ depends on some $\theta\in\mathbb{R}$ and $f := \partial_\theta \log G_\theta$:
$$\partial_\theta \log \lambda_\theta \ \simeq\ \frac{1}{n+1}\,\partial_\theta \log Z^\theta_{n+1} \ \simeq\ \mathbb{Q}_n(F_n) \qquad\text{for large } n.$$

23. Gradient of Markov semigroups

$$X_{n+1}(x) = F_n\big(X_n(x),W_n\big) \quad\big(X_0(x)=x\in\mathbb{R}^d\big),
\qquad P_n(f)(x) := \mathbb{E}\big(f(X_n(x))\big).$$

First variational equation:
$$\frac{\partial X_{n+1}}{\partial x}(x) = A_n\big(X_n(x),W_n\big)\,\frac{\partial X_n}{\partial x}(x)
\qquad\text{with } A^{(i,j)}_n(x,w) = \frac{\partial F^i_n(\cdot,w)}{\partial x^j}(x).$$

Random process on the sphere: $U_0 = u_0 \in \mathbb{S}^{d-1}$,
$$U_{n+1} = \frac{A_n(X_n,W_n)\,U_n}{\|A_n(X_n,W_n)\,U_n\|}
= \frac{\frac{\partial X_{n+1}}{\partial x}(x)\,u_0}{\big\|\frac{\partial X_{n+1}}{\partial x}(x)\,u_0\big\|}.$$

Feynman-Kac model: $\mathcal{X}_n = (X_n,U_n,W_n)$ & $G_n(x,u,w) = \|A_n(x,w)\,u\|$:
$$\nabla P_{n+1}(f)(x)\cdot u_0 = \mathbb{E}\Big(F(\mathcal{X}_{n+1})\prod_{0\le p\le n} G_p(\mathcal{X}_p)\Big),
\quad F(\mathcal{X}_{n+1}) = \nabla f(X_{n+1})\cdot U_{n+1},
\quad \prod_{0\le p\le n} G_p(\mathcal{X}_p) = \Big\|\frac{\partial X_{n+1}}{\partial x}(x)\,u_0\Big\|.$$
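A simulation sketch of this gradient formula on a toy random dynamical system of my own choosing, $F(x,w) = \tanh(Bx)+w$ in $\mathbb{R}^2$ (not from the slides): the Feynman-Kac representation with $\mathcal{X}_n = (X_n,U_n,W_n)$ and $G_n = \|A_n u\|$ is compared against a finite-difference estimate of $\nabla P_{n+1}(f)(x)\cdot u_0$ computed with the same noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random dynamical system X_{n+1} = F(X_n, W_n) = tanh(B X_n) + W_n in R^2 (illustrative).
B = np.array([[0.9, 0.3], [-0.2, 0.8]])
sigma = 0.3
f = lambda x: np.sin(x[0]) + x[1]
grad_f = lambda x: np.array([np.cos(x[0]), 1.0])
A = lambda x: (1.0 - np.tanh(B @ x) ** 2)[:, None] * B          # Jacobian of x -> tanh(B x) + w

x0 = np.array([0.5, -0.2])
u0 = np.array([1.0, 0.0])                                       # direction u_0 on the sphere
n, N, eps = 5, 5000, 1e-4

fk_est, fd_est = 0.0, 0.0
for _ in range(N):
    W = sigma * rng.standard_normal((n + 1, 2))
    # Feynman-Kac representation: X'_p = (X_p, U_p, W_p),  G_p = ||A_p U_p||
    x, u, prod = x0.copy(), u0.copy(), 1.0
    for p in range(n + 1):
        Au = A(x) @ u
        prod *= np.linalg.norm(Au)                              # G_p(X'_p)
        u = Au / np.linalg.norm(Au)                             # U_{p+1}
        x = np.tanh(B @ x) + W[p]                               # X_{p+1}
    fk_est += (grad_f(x) @ u) * prod                            # grad f(X_{n+1}) . U_{n+1} times prod_p G_p
    # finite differences with the same noise, as a cross-check
    xp, xm = x0 + eps * u0, x0 - eps * u0
    for p in range(n + 1):
        xp, xm = np.tanh(B @ xp) + W[p], np.tanh(B @ xm) + W[p]
    fd_est += (f(xp) - f(xm)) / (2 * eps)

print(fk_est / N, fd_est / N)    # two estimates of  grad P_{n+1}(f)(x0) . u_0
```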

24. Introduction. Feynman-Kac models. Some illustrations (→ Part III). Some bad tempting ideas. Interacting particle interpretations. Concentration inequalities.

25-27. Bad tempting ideas

I.i.d. weighted samples $X^i_n$:
$$Z_n := \mathbb{E}\Big(\prod_{0\le p<n} G_p(X_p)\Big) \ \simeq\ Z^N_n := \frac{1}{N}\sum_{i=1}^N \prod_{0\le p<n} G_p(X^i_p),$$
or, in terms of killing-absorption models,
$$Z_n = \mathbb{P}(T\ge n) \ \simeq\ Z^N_n := \frac{1}{N}\sum_{1\le i\le N} 1_{T^i\ge n}.$$

Example: $X_n$ a simple random walk on $\mathbb{Z}^d$, $G_n = 1_{[-10,10]}$ (killed at the boundary):
$$\exists\, n = n(\omega)\ :\ Z^N_n = 0,$$
and
$$N\,\mathbb{E}\Big(\Big[\frac{Z^N_n}{Z_n}-1\Big]^2\Big) = \frac{1}{Z_n}-1
= \frac{1}{\mathrm{Proba}(X_p\in A,\ 0\le p<n)}-1 = \frac{1}{\mathbb{P}(T\ge n)}-1 \ \uparrow\ \infty.$$
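A quick numerical illustration of why this naive estimator fails at moderately large horizons (box and horizon are arbitrary illustrative choices): the crude occupation estimate is typically exactly 0, while the exact value obtained from the $\gamma$-recursion is tiny and the relative variance $1/Z_n-1$ is huge.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n, N = 10, 2000, 10_000

# Naive i.i.d. estimate of Z_n = P(|X_p| <= L, 0 <= p < n) for a simple random walk started at 0
steps = rng.choice([-1, 1], size=(N, n - 1))
paths = np.concatenate([np.zeros((N, 1), dtype=int), np.cumsum(steps, axis=1)], axis=1)
Z_naive = np.all(np.abs(paths) <= L, axis=1).mean()    # fraction of walks never killed before time n
print("naive Z_n^N =", Z_naive)                        # typically exactly 0.0 at this horizon

# Exact Z_n by the gamma-recursion (same construction as in the confinement sketch above)
states = np.arange(-L - 1, L + 2)
M = np.zeros((len(states), len(states)))
for i in range(len(states)):
    M[i, max(i - 1, 0)] += 0.5
    M[i, min(i + 1, len(states) - 1)] += 0.5
G = (np.abs(states) <= L).astype(float)
gamma = (states == 0).astype(float)
for _ in range(n):
    gamma = (gamma * G) @ M
Z = gamma.sum()
print("exact Z_n =", Z, "   relative variance 1/Z_n - 1 =", 1 / Z - 1)
```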

28. Our objective

Find an unbiased estimate $Z^N_n$ s.t.
$$N\,\mathbb{E}\Big(\Big[\frac{Z^N_n}{Z_n}-1\Big]^2\Big) \le c\,n,$$
using the multiplicative formula
$$\mathbb{E}\Big(\prod_{0\le p<n} G_p(X_p)\Big) = \prod_{0\le p<n}\eta_p(G_p)$$
and estimating/learning each of the (larger) terms in the product:
$$\eta_p(G_p) \simeq \eta^N_p(G_p) \qquad\text{with } \eta^N_p = \frac{1}{N}\sum_{1\le i\le N}\delta_{\xi^i_p}.$$

29. Introduction. Feynman-Kac models. Some illustrations (→ Part III). Some bad tempting ideas. Interacting particle interpretations: Nonlinear evolution equation; Mean field particle models; Graphical illustration; Island particle models. Concentration inequalities.

30. Flow of n-marginals

[$X_n$ Markov with transitions $M_n$]
$$\eta_n(f) = \gamma_n(f)/\gamma_n(1) \qquad\text{with}\qquad \gamma_n(f) := \mathbb{E}\Big(f(X_n)\prod_{0\le p<n} G_p(X_p)\Big) \qquad (\gamma_n(1)=Z_n).$$

Nonlinear evolution equation:
$$\eta_{n+1} = \Psi_{G_n}(\eta_n)\,M_{n+1}, \qquad Z_{n+1} = \eta_n(G_n)\times Z_n.$$

Nonlinear measure-valued process = law of a McKean Markov chain $\overline{X}_n$ (perfect sampler):
$$\eta_{n+1} = \Phi_{n+1}(\eta_n) = \eta_n\big(S_{n,\eta_n}M_{n+1}\big) = \eta_n K_{n+1,\eta_n} = \mathrm{Law}(\overline{X}_{n+1}).$$

31. Examples related to product models

$$\eta_n(dx) := \frac{1}{Z_n}\Big\{\prod_{p=0}^{n} h_p(x)\Big\}\,\lambda(dx) \qquad\text{with } h_p\ge 0.$$

2 illustrations:
- $h_p(x) = e^{-(\beta_{p+1}-\beta_p)V(x)}$ with $\beta_p\uparrow$ $\ \Longrightarrow\ \eta_n(dx) = \frac{1}{Z_n}\,e^{-\beta_n V(x)}\,\lambda(dx)$
- $h_p(x) = 1_{A_{p+1}}(x)$ with $A_p\downarrow$ $\ \Longrightarrow\ \eta_n(dx) = \frac{1}{Z_n}\,1_{A_n}(x)\,\lambda(dx)$

For any MCMC transitions $M_n$ with target $\eta_n$, we have
$$\eta_{n+1} = \eta_{n+1}M_{n+1} = \Psi_{h_{n+1}}(\eta_n)\,M_{n+1} \qquad\Longrightarrow\qquad \text{Feynman-Kac model.}$$
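On a finite space the identity $\eta_{n+1}=\Psi_{h_{n+1}}(\eta_n)M_{n+1}$ can be checked exactly for a tempering sequence and Metropolis kernels targeting each $\eta_n$. A sketch (the potential $V$, the 5-point space and the $\beta$-schedule are hypothetical):

```python
import numpy as np

# Hypothetical 5-point space: check that eta_{n+1} = Psi_{h_{n+1}}(eta_n) M_{n+1}
# when M_{n+1} is a Metropolis kernel leaving the tempered target eta_{n+1} invariant.
V = np.array([0.0, 1.0, 2.0, 0.5, 3.0])
lam = np.ones(5) / 5                                 # reference measure lambda
betas = np.linspace(0.0, 2.0, 11)                    # increasing schedule beta_p

def target(beta):
    w = lam * np.exp(-beta * V)
    return w / w.sum()                               # eta propto exp(-beta V) lambda

def metropolis(pi):
    # Metropolis kernel with uniform proposals over the other states, pi-reversible
    d = len(pi)
    K = np.zeros((d, d))
    for x in range(d):
        for y in range(d):
            if x != y:
                K[x, y] = min(1.0, pi[y] / pi[x]) / (d - 1)
        K[x, x] = 1.0 - K[x].sum()
    return K

eta = target(betas[0])
for p in range(len(betas) - 1):
    h = np.exp(-(betas[p + 1] - betas[p]) * V)       # incremental weight h_{p+1}
    eta = (eta * h) / (eta @ h)                      # Boltzmann-Gibbs transform Psi_h(eta)
    eta = eta @ metropolis(target(betas[p + 1]))     # MCMC step with target eta_{p+1}
print(np.max(np.abs(eta - target(betas[-1]))))       # agreement up to floating point error
```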

32. Mean field particle model

McKean Markov chain model: $\eta_{n+1} = \eta_n K_{n+1,\eta_n} = \mathrm{Law}(\overline{X}_{n+1})$.

Markov chain $\xi_n = (\xi^i_n)_{1\le i\le N} \in E^N_n$:
$$\xi^i_n \ \rightsquigarrow\ \xi^i_{n+1} \sim K_{n+1,\eta^N_n}(\xi^i_n, dx)
\qquad\text{with}\qquad \eta^N_n = \frac{1}{N}\sum_{1\le i\le N}\delta_{\xi^i_n},$$
and the (unbiased) particle normalizing constants
$$Z^N_{n+1} = \eta^N_n(G_n)\times Z^N_n = \prod_{0\le p\le n}\eta^N_p(G_p).$$
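A generic sketch of this N-particle scheme (my own implementation, not the authors' code): multinomial selection proportional to $G_n$ followed by an $M_{n+1}$-mutation, with the unbiased product estimate $Z^N_n = \prod_p \eta^N_p(G_p)$ accumulated in log scale for long horizons. It is run here on the confined random walk from the earlier slides.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_fk(G, M_step, xi0, n_steps, rng):
    """Mean field particle approximation of a Feynman-Kac flow (sketch).
    G(xi): potential values of a particle cloud; M_step(xi, rng): one mutation step."""
    xi, logZ = xi0.copy(), 0.0
    for _ in range(n_steps):
        w = G(xi)
        if w.sum() == 0:
            raise RuntimeError("all particles killed")
        logZ += np.log(w.mean())                     # log eta_p^N(G_p)
        # selection S_{n, eta_n^N}: multinomial resampling proportional to G_n
        xi = xi[rng.choice(len(xi), size=len(xi), p=w / w.sum())]
        # mutation M_{n+1}
        xi = M_step(xi, rng)
    return xi, logZ

# Confined random walk on Z, killed outside [-10, 10]
L, n, N = 10, 2000, 10_000
G = lambda xi: (np.abs(xi) <= L).astype(float)
M_step = lambda xi, rng: xi + rng.choice([-1, 1], size=len(xi))

xi, logZ = particle_fk(G, M_step, np.zeros(N, dtype=int), n, rng)
print("particle estimate of log Z_n:", logZ)         # Z_n^N = prod_p eta_p^N(G_p) is unbiased for Z_n
```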

33-35. Mean field particle model

Mean field FK simulation: $\xi^i_n \rightsquigarrow \xi^i_{n+1} \sim K_{n+1,\eta^N_n} = S_{n,\eta^N_n}\,M_{n+1}$.

- Sequential particle simulation technique (SMC): $G_n$-acceptance-rejection with recycling + $M_{n+1}$-propositions.
- Genetic type branching particle algorithm (GA): $\xi_n \xrightarrow{\ G_n\text{-selection}\ } \widehat{\xi}_n \xrightarrow{\ M_{n+1}\text{-mutation}\ } \xi_{n+1}$.
- Reconfiguration Monte Carlo (particles = walkers) (QMC): (Selection, Mutation) = (Reconfiguration, Exploration).

36-37. Continuous time Feynman-Kac particle models

Master equation:
$$\eta_t(\cdot) = \frac{\gamma_t(\cdot)}{\gamma_t(1)} = \mathrm{Law}(\overline{X}_t)
\qquad\Longleftrightarrow\qquad \frac{d}{dt}\eta_t(f) = \eta_t\big(L_{t,\eta_t}(f)\big).$$

(Ex.: $V_t = -U_t \le 0$.)
$$L_{t,\eta_t}(f)(x) = \underbrace{L_t(f)(x)}_{\text{free exploration}} + \underbrace{U_t(x)}_{\text{acceptance rate}}\int (f(y)-f(x))\,\underbrace{\eta_t(dy)}_{\text{interacting jump law}}$$
$$\Longleftrightarrow\qquad L_{t,\eta_t}(f) = \underbrace{L_t(f) - U_t\,f}_{\text{Schrödinger's op.}} + \underbrace{U_t\,\eta_t(f)}_{\text{normalizing-stabilizing term}}.$$

Particle model: survival-acceptance rates + recycling jumps.
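An Euler-type discretization sketch of this continuous-time particle model (the time step, the killing rate $U(x)=x^2/2$ and the Ornstein-Uhlenbeck exploration are my illustrative choices): particles explore freely, recycle onto a uniformly chosen particle at rate $U$, and the particle free energy rate $\frac1t\log\gamma_t(1)$ is accumulated from $-\eta^N_t(U)$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler-discretized interacting jump particle model (illustrative parameters):
# free exploration dX = -X dt + sqrt(2) dW, killing rate U(x) = x^2 / 2 (V = -U).
N, dt, T = 5000, 0.01, 20.0
U = lambda x: 0.5 * x ** 2

x = rng.standard_normal(N)                      # initial particle cloud
log_gamma = 0.0
for _ in range(int(T / dt)):
    # free exploration step
    x = x - x * dt + np.sqrt(2 * dt) * rng.standard_normal(N)
    # interacting jumps: with probability U(x_i) dt, recycle onto a uniformly chosen particle
    rates = U(x)
    jump = rng.random(N) < rates * dt
    x[jump] = x[rng.integers(0, N, size=int(jump.sum()))]
    # free energy: d/dt log gamma_t(1) = eta_t(V_t) = -eta_t(U_t)
    log_gamma -= rates.mean() * dt

print("(1/T) log gamma_T(1) ~", log_gamma / T)  # approximates the top eigenvalue of L + V
```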

38-56. Graphical illustration: $\eta_n \simeq \eta^N_n := \frac{1}{N}\sum_{1\le i\le N}\delta_{\xi^i_n}$ (this caption is repeated over slides 38-56 as successive animation frames; the figures are not reproduced here).

57-59. How to use the full ancestral tree model?

Under the hypothesis $G_{n-1}(x_{n-1})\,M_n(x_{n-1},dx_n) = H_n(x_{n-1},x_n)\,\nu_n(dx_n)$:
$$\mathbb{Q}_n(d(x_0,\ldots,x_n)) = \eta_n(dx_n)\,\mathbb{M}_{n,\eta_{n-1}}(x_n,dx_{n-1})\cdots\mathbb{M}_{1,\eta_0}(x_1,dx_0),
\qquad \mathbb{M}_{n,\eta_{n-1}}(x_n,dx_{n-1}) \propto \eta_{n-1}(dx_{n-1})\,H_n(x_{n-1},x_n).$$

Particle approximation = random stochastic matrices:
$$\mathbb{Q}^N_n(d(x_0,\ldots,x_n)) = \eta^N_n(dx_n)\,\mathbb{M}_{n,\eta^N_{n-1}}(x_n,dx_{n-1})\cdots\mathbb{M}_{1,\eta^N_0}(x_1,dx_0).$$

Ex.: additive functionals $\mathbf{f}_n(x_0,\ldots,x_n) = \frac{1}{n+1}\sum_{0\le p\le n} f_p(x_p)$:
$$\mathbb{Q}^N_n(\mathbf{f}_n) := \frac{1}{n+1}\sum_{0\le p\le n}\ \eta^N_n\,\mathbb{M}_{n,\eta^N_{n-1}}\cdots\mathbb{M}_{p+1,\eta^N_p}(f_p)
\qquad\text{[matrix operations]}.$$
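A sketch of this backward estimate for a normalized additive functional on the confined random walk (all choices illustrative): the forward pass stores the particle clouds, the backward pass applies the random stochastic matrices $\mathbb{M}_{k,\eta^N_{k-1}}$ built from $H_k(x,y) = G_{k-1}(x)\,m(x,y)$. With $f_p(x)=x$ and a symmetric box started at 0, the exact value is 0 by symmetry.

```python
import numpy as np

rng = np.random.default_rng(5)

L, n, N = 10, 50, 1000
G = lambda xi: (np.abs(xi) <= L).astype(float)
m = lambda a, b: 0.5 * (np.abs(a[:, None] - b[None, :]) == 1)   # RW density m(a_p, b_q) at index [p, q]
f = lambda xi: xi.astype(float)                                  # f_p(x) = x for every p

# Forward pass: store the particle clouds xi_0, ..., xi_n
xi = [np.zeros(N, dtype=int)]
for k in range(n):
    w = G(xi[k])
    parents = rng.choice(N, size=N, p=w / w.sum())
    xi.append(xi[k][parents] + rng.choice([-1, 1], size=N))

# Backward pass: S_k(i) = f_k(xi_k^i) + sum_j B_k[i, j] S_{k-1}(j),
# with B_k[i, j] propto H_k(xi_{k-1}^j, xi_k^i) = G_{k-1}(xi_{k-1}^j) m(xi_{k-1}^j, xi_k^i)
S = f(xi[0])
for k in range(1, n + 1):
    H = G(xi[k - 1])[None, :] * m(xi[k - 1], xi[k]).T            # H[i, j]
    B = H / H.sum(axis=1, keepdims=True)                         # random stochastic matrix M_{k, eta^N_{k-1}}
    S = f(xi[k]) + B @ S

print("backward estimate of Q_n(mean_p f_p):", S.mean() / (n + 1))   # exact value 0 by symmetry
```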

60. 4 particle estimates

- Individuals $\xi^i_n$, almost i.i.d. with law $\eta_n$: $\ \eta^N_n = \frac{1}{N}\sum_{1\le i\le N}\delta_{\xi^i_n}$.
- Path space models: ancestral lines, almost i.i.d. with law $\mathbb{Q}_n$: $\ \boldsymbol{\eta}^N_n := \frac{1}{N}\sum_{1\le i\le N}\delta_{\text{Ancestral line}_n(i)}$.
- Backward particle model: $\ \mathbb{Q}^N_n(d(x_0,\ldots,x_n)) = \eta^N_n(dx_n)\,\mathbb{M}_{n,\eta^N_{n-1}}(x_n,dx_{n-1})\cdots\mathbb{M}_{1,\eta^N_0}(x_1,dx_0)$.
- Normalizing constants (unbiased): $\ Z_{n+1} = \prod_{0\le p\le n}\eta_p(G_p) \ \simeq_N\ Z^N_{n+1} = \prod_{0\le p\le n}\eta^N_p(G_p)$.

61. Island models

Reminder: the unbiased property
$$\mathbb{E}\Big(f_n(X_n)\prod_{0\le p<n}G_p(X_p)\Big)
= \mathbb{E}\Big(\eta^N_n(f_n)\prod_{0\le p<n}\eta^N_p(G_p)\Big)
= \mathbb{E}\Big(F_n(\mathcal{X}_n)\prod_{0\le p<n}\mathcal{G}_p(\mathcal{X}_p)\Big),$$
with the island evolution Markov chain model
$$\mathcal{X}_n := \eta^N_n \qquad\text{and}\qquad \mathcal{G}_n(\mathcal{X}_n) = \eta^N_n(G_n) = \mathcal{X}_n(G_n).$$
⟹ Particle model with $(\mathcal{X}_n,\mathcal{G}_n(\mathcal{X}_n))$ = interacting island particle model.
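The unbiasedness reminder above is what makes the island construction consistent; it can be checked numerically by averaging $\eta^N_n(f_n)\prod_p\eta^N_p(G_p)$ over many independent islands and comparing with $\gamma_n(f_n)$ computed by the exact matrix recursion (a sketch with small, arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(6)

# Check of the unbiasedness  E[ eta_n^N(f) * prod_p eta_p^N(G_p) ] = gamma_n(f)
# on a confined random walk (small box, short horizon), averaged over independent islands.
L, n, N, n_islands = 3, 10, 20, 4000
G = lambda xi: (np.abs(xi) <= L).astype(float)
f = lambda xi: (xi == 0).astype(float)

est = 0.0
for _ in range(n_islands):
    xi, prod = np.zeros(N, dtype=int), 1.0
    for _ in range(n):
        w = G(xi)
        prod *= w.mean()                                        # eta_p^N(G_p)
        xi = xi[rng.choice(N, size=N, p=w / w.sum())] + rng.choice([-1, 1], size=N)
    est += f(xi).mean() * prod                                  # eta_n^N(f) * prod_p eta_p^N(G_p)
est /= n_islands

# Exact gamma_n(f) by the matrix recursion on the truncated state space
states = np.arange(-L - 1, L + 2)
M = np.zeros((len(states), len(states)))
for i in range(len(states)):
    M[i, max(i - 1, 0)] += 0.5
    M[i, min(i + 1, len(states) - 1)] += 0.5
gamma = (states == 0).astype(float)
for _ in range(n):
    gamma = (gamma * G(states)) @ M
print("island average:", est, "  exact gamma_n(f):", gamma @ f(states))
```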

62. Some key advantages

Mean field models = stochastic linearization/perturbation technique:
$$\eta^N_n = \eta^N_{n-1}K_{n,\eta^N_{n-1}} + \frac{1}{\sqrt{N}}\,V^N_n,$$
with $V^N_n \Rightarrow V_n$ independent centered Gaussian fields.

- $\eta_n = \eta_{n-1}K_{n,\eta_{n-1}}$ stable ⟹ non-propagation of local sampling errors ⟹ uniform control w.r.t. the time horizon.
- No burn-in, no need to study the stability of MCMC models.
- Stochastic adaptive grid approximation.
- Nonlinear system ⟹ positive, beneficial interactions.
- Simple and natural sampling algorithm.
- Local conditional i.i.d. samples + stability of nonlinear semigroups ⟹ new concentration and empirical process theory.

63. Introduction. Feynman-Kac models. Some illustrations (→ Part III). Some bad tempting ideas. Interacting particle interpretations. Concentration inequalities: Current population models; Particle free energy/genealogical tree models; Backward particle models.

64. Current population models

Constants $(c_1,c_2)$ related to (bias, variance); $c$ a finite constant; test functions/observables $\|f_n\|\le 1$; $(x\ge 0,\ n\ge 0,\ N\ge 1)$. When $E_n=\mathbb{R}^d$: $F_n(y) := \eta_n\big(1_{(-\infty,y]}\big)$ and $F^N_n(y) := \eta^N_n\big(1_{(-\infty,y]}\big)$ with $y\in\mathbb{R}^d$.

The probability of any of the following events is greater than $1-e^{-x}$:
$$\big|[\eta^N_n-\eta_n](f_n)\big| \le \frac{c_1}{N}\,(1+x+\sqrt{x}) + \frac{c_2}{\sqrt{N}}\,\sqrt{x},$$
$$\sup_{0\le p\le n}\big|[\eta^N_p-\eta_p](f_p)\big| \le c\,\sqrt{x\,\log(n+e)/N},$$
$$\big\|F^N_n-F_n\big\| \le c\,\sqrt{d\,(x+1)/N}.$$

65. Particle free energy / genealogical tree models

Constants $(c_1,c_2)$ related to (bias, variance); $c$ a finite constant; $(x\ge 0,\ n\ge 0,\ N\ge 1)$. The probability of any of the following events is greater than $1-e^{-x}$:
$$\Big|\frac{1}{n}\log Z^N_n - \frac{1}{n}\log Z_n\Big| \le \frac{c_1}{N}\,(1+x+\sqrt{x}) + \frac{c_2}{\sqrt{N}}\,\sqrt{x},$$
$$\big|[\boldsymbol{\eta}^N_n - \mathbb{Q}_n](\mathbf{f}_n)\big| \le \frac{c_1\,(n+1)}{N}\,(1+x+\sqrt{x}) + c_2\,\sqrt{\frac{n+1}{N}}\,\sqrt{x},$$
with $\boldsymbol{\eta}^N_n$ = the genealogical tree occupation measure ($:=\eta^N_n$ in path space).

66. Backward particle models

Constants $(c_1,c_2)$ related to (bias, variance); $c$ a finite constant. For any normalized additive functional $\mathbf{f}_n$ with $\|f_p\|\le 1$, $(x\ge 0,\ n\ge 0,\ N\ge 1)$, the probability of the following event is greater than $1-e^{-x}$:
$$\big|[\mathbb{Q}^N_n-\mathbb{Q}_n](\mathbf{f}_n)\big| \le \frac{c_1}{N}\,\big(1+(x+\sqrt{x})\big) + c_2\,\sqrt{\frac{x}{N(n+1)}}.$$
