Concentration inequalities for Feynman-Kac particle models. P. Del Moral. INRIA Bordeaux & IMB & CMAP X. Journées MAS 2012, SMAI Clermont-Ferrand


1 Concentration inequalities for Feynman-Kac particle models
P. Del Moral, INRIA Bordeaux & IMB & CMAP X
Journées MAS 2012, SMAI Clermont-Ferrand

Some hyper-refs:
- Feynman-Kac formulae. Genealogical & Interacting Particle Systems with appl., Springer (2004).
- Sequential Monte Carlo Samplers. JRSS B (2006) (joint work with A. Doucet & A. Jasra).
- A Backward Particle Interpretation of Feynman-Kac Formulae. M2AN (2010) (joint work with A. Doucet & S.S. Singh).
- Concentration Inequalities for Mean Field Particle Models. Annals of Applied Probability (2011) (joint work with Emmanuel Rio).
- On the concentration of interacting processes. Foundations & Trends in Machine Learning [170p.] (2012) (joint work with Peng Hu & Liming Wu) [+ Refs].
More references on the website delmoral/index.html [+ Links]

2 Feynman-Kac models
Some basic notation
Description of the models
Some examples
Interacting particle interpretations
Concentration inequalities

3 Basic notation
$\mathcal{P}(E)$: probability measures on $E$; $\mathcal{B}(E)$: bounded measurable functions on $E$.
For $(\mu, f) \in \mathcal{P}(E) \times \mathcal{B}(E)$:
$$\mu(f) = \int \mu(dx)\, f(x)$$
Integral operators $Q(x_1, dx_2)$ from $E_1$ into $E_2$:
$$Q(f)(x_1) = \int Q(x_1, dx_2)\, f(x_2) \qquad [\mu Q](dx_2) = \int \mu(dx_1)\, Q(x_1, dx_2) \qquad \big(\Rightarrow [\mu Q](f) = \mu[Q(f)]\big)$$
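To make the notation concrete on a finite state space, here is a minimal numpy sketch (the state space, measure, function and kernel are illustrative choices, not objects from the talk): measures are row vectors, functions are column vectors, kernels are stochastic matrices, and the duality $[\mu Q](f) = \mu[Q(f)]$ is checked numerically.

```python
import numpy as np

# Finite state space E = {0, 1, 2}: a probability measure is a row vector,
# a bounded function a column vector, and an integral operator Q a stochastic matrix.
mu = np.array([0.2, 0.5, 0.3])              # mu in P(E)
f = np.array([1.0, -2.0, 0.5])              # f in B(E)
Q = np.array([[0.9, 0.1, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.2, 0.8]])             # Q(x, dy): rows sum to 1

mu_f = mu @ f                               # mu(f)   = sum_x mu(x) f(x)
Qf = Q @ f                                  # Q(f)(x) = sum_y Q(x, y) f(y)
muQ = mu @ Q                                # [mu Q](dy) = sum_x mu(x) Q(x, y)

# Duality check: [mu Q](f) = mu[Q(f)]
assert np.isclose(muQ @ f, mu @ Qf)
```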

4 Basic notation
Boltzmann-Gibbs transformation, for a potential $G \ge 0$:
$$\mu(dx) \;\mapsto\; \Psi_G(\mu)(dx) = \frac{1}{\mu(G)}\, G(x)\, \mu(dx)$$
When $G \le 1$ we have the (non unique) Markov transport equation $\Psi_G(\mu) = \mu S_{\mu,G}$ with
$$S_{\mu,G}(x, dy) = G(x)\, \delta_x(dy) + (1 - G(x))\, \Psi_G(\mu)(dy)$$
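A similarly minimal sketch of the Boltzmann-Gibbs transformation and of one admissible transport kernel $S_{\mu,G}$, with illustrative numerical values; it checks numerically that $\Psi_G(\mu) = \mu S_{\mu,G}$ when $0 \le G \le 1$.

```python
import numpy as np

mu = np.array([0.2, 0.5, 0.3])     # probability measure on E = {0, 1, 2}
G = np.array([0.8, 0.3, 1.0])      # potential with values in [0, 1]

# Boltzmann-Gibbs transformation: Psi_G(mu)(x) = G(x) mu(x) / mu(G)
psi = G * mu / (mu @ G)

# Transport kernel S_{mu,G}(x, dy) = G(x) delta_x(dy) + (1 - G(x)) Psi_G(mu)(dy)
S = np.diag(G) + np.outer(1.0 - G, psi)

# Markov transport equation: Psi_G(mu) = mu S_{mu,G}
assert np.allclose(mu @ S, psi)
assert np.allclose(S.sum(axis=1), 1.0)      # S is indeed a Markov kernel
```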

5 Feynman-Kac models
Markov chain $X_n \in E_n$ and a potential function $G_n : E_n \to [0, \infty[$:
$$dQ_n := \frac{1}{Z_n} \Big\{ \prod_{0 \le p < n} G_p(X_p) \Big\}\, dP_n \qquad \text{with } P_n = \mathrm{Law}(X_0, \dots, X_n)$$
Transition/excursion/path space models: $X_n = (X'_n, X'_{n+1})$, $X_n = X'_{[T_n, T_{n+1}]}$, $X_n = (X'_0, \dots, X'_n)$.
Continuous time models: $X_n = X'_{[t_n, t_{n+1}]}$ and $G_n(X_n) = \exp\Big(-\int_{t_n}^{t_{n+1}} V_t(X'_t)\, dt\Big)$.

6 Some examples
Confinements: $X_n$ a Markov chain on $\mathbb{Z}^d \supset A$ and $G_n := 1_A$, so that
$$Q_n = \mathrm{Law}\big((X_0, \dots, X_n) \mid X_p \in A,\ 0 \le p < n\big) \qquad Z_n = \mathbb{P}\big(X_p \in A,\ 0 \le p < n\big)$$
Self-avoiding walks (SAW): $X_n = (X'_p)_{0 \le p \le n}$ and $G_n(X_n) = 1_{X'_n \notin \{X'_0, \dots, X'_{n-1}\}}$, so that
$$Q_n = \mathrm{Law}\big((X'_0, \dots, X'_n) \mid X'_p \ne X'_q,\ 0 \le p < q < n\big) \qquad Z_n = \mathbb{P}\big(X'_p \ne X'_q,\ 0 \le p < q < n\big)$$
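For the confinement example, $Z_n$ can in principle be estimated by brute-force Monte Carlo over independent paths. The sketch below (the walk on $\mathbb{Z}$, the window $A$ and the horizons are illustrative choices, not from the talk) shows how quickly such a naive estimator degenerates as $n$ grows, which is what the interacting particle interpretation described later addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_confinement_Z(n, n_samples=50_000, half_width=5):
    """Crude Monte Carlo estimate of Z_n = P(X_p in A, 0 <= p < n) for a simple
    random walk on Z started at 0, with A = {-half_width, ..., half_width}."""
    steps = rng.choice((-1, 1), size=(n_samples, n - 1))
    # Paths X_0, ..., X_{n-1}, each starting at 0
    paths = np.concatenate([np.zeros((n_samples, 1), dtype=int),
                            np.cumsum(steps, axis=1)], axis=1)
    confined = (np.abs(paths) <= half_width).all(axis=1)
    return confined.mean()

for n in (10, 50, 100):
    # The probability (and the relative accuracy of this naive estimator)
    # decays geometrically with the horizon n.
    print(n, naive_confinement_Z(n))
```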

7 Some examples
Filtering: $Y_n = H_n(X_n, V_n)$ and $G_n(x_n) := p_{Y_n \mid X_n}(y_n \mid x_n)$, so that
$$Q_n = \mathrm{Law}\big((X_0, \dots, X_n) \mid Y_p = y_p,\ 0 \le p < n\big) \qquad Z_n = p_{Y_0, \dots, Y_{n-1}}(y_0, \dots, y_{n-1})$$
Multilevel splitting: decreasing level sets $A_n$, with $B$ a non-critical recurrent subset.
$$T_n := \inf\{t \ge T_{n-1} : X'_t \in (A_n \cup B)\} \qquad X_n = (X'_t)_{t \in [T_n, T_{n+1}]} \quad \text{and} \quad G_n(X_n) = 1_{A_{n+1}}(X'_{T_{n+1}})$$
so that
$$Q_n = \mathrm{Law}\big(X'_{[T_0, T_n]} \mid X' \text{ hits } A_{n-1} \text{ before } B\big) \qquad Z_n = \mathbb{P}\big(X' \text{ hits } A_{n-1} \text{ before } B\big)$$

8 Some examples
Absorption models:
$$X_n^c \;\xrightarrow{\ \text{absorption} \sim (1 - G_n)\ }\; \widehat{X}_n^c \;\xrightarrow{\ \text{exploration} \sim M_{n+1}\ }\; X_{n+1}^c \qquad (X_n^c \in E_n^c)$$
$$Q_n = \mathrm{Law}\big((X_0^c, \dots, X_n^c) \mid T^{\mathrm{abs}} \ge n\big) \qquad Z_n = \mathbb{P}\big(T^{\mathrm{abs}} \ge n\big)$$
Quasi-invariant measures: $(G_n, M_n) = (G, M)$ with $M$ $\mu$-reversible. Then
$$\frac{1}{n}\, \log \mathbb{P}\big(T^{\mathrm{abs}} \ge n\big) \;\xrightarrow[n \to \infty]{}\; \log \lambda, \qquad \lambda = \text{top of the spectrum of } Q(x, dy) = G(x)\, M(x, dy) \quad \text{[Frobenius theo.]}$$
$$Q(h) = \lambda h \;\Rightarrow\; h = \lambda\text{-eigenfunction (ground state)} \qquad \mathbb{P}\big(X_n^c \in dx \mid T^{\mathrm{abs}} > n\big) \;\xrightarrow[n \to \infty]{}\; \frac{1}{\mu(h)}\, h(x)\, \mu(dx)$$

9 Some examples
Doob $h$-processes $X^h$:
$$M_h(x, dy) = \frac{1}{\lambda}\, h^{-1}(x)\, Q(x, dy)\, h(y) = \frac{Q(x, dy)\, h(y)}{Q(h)(x)} = \frac{M(x, dy)\, h(y)}{M(h)(x)}$$
$$Q_n(d(x_0, \dots, x_n)) \;\propto\; \mathbb{P}\big((X_0^h, \dots, X_n^h) \in d(x_0, \dots, x_n)\big)\; h^{-1}(x_n)$$

10 Feynman-Kac models
Interacting particle interpretations
Nonlinear evolution equation
Mean field particle models
Graphical illustration
First order expansions
Uniform concentration w.r.t. time
Particle free energy
Concentration inequalities

11 Flow of n-marginals [$X_n$ Markov with transitions $M_n$]
$$\eta_n(f) = \gamma_n(f)/\gamma_n(1) \qquad \text{with} \qquad \gamma_n(f) := \mathbb{E}\Big( f(X_n) \prod_{0 \le p < n} G_p(X_p) \Big) \qquad (\gamma_n(1) = Z_n)$$
Nonlinear evolution equation:
$$\eta_{n+1} = \Psi_{G_n}(\eta_n)\, M_{n+1} \qquad Z_{n+1} = \eta_n(G_n)\, Z_n$$
Nonlinear measure-valued process = law of a nonlinear Markov chain $\overline{X}_n$ (perfect sampler):
$$\eta_{n+1} = \Phi_{n+1}(\eta_n) = \eta_n\big(S_{n,\eta_n} M_{n+1}\big) = \eta_n K_{n+1,\eta_n} = \mathrm{Law}(\overline{X}_{n+1})$$

12 Examples related to product models
$$\eta_n(dx) := \frac{1}{Z_n} \Big\{ \prod_{p=0}^{n} h_p(x) \Big\}\, \lambda(dx) \qquad \text{with } h_p \ge 0$$
Three illustrations:
$$h_p(x) = e^{-(\beta_{p+1} - \beta_p) V(x)},\ \beta_p \uparrow \;\Rightarrow\; \eta_n(dx) = \frac{1}{Z_n}\, e^{-\beta_n V(x)}\, \lambda(dx)$$
$$h_p(x) = 1_{A_{p+1}}(x),\ A_p \downarrow \;\Rightarrow\; \eta_n(dx) = \frac{1}{Z_n}\, 1_{A_n}(x)\, \lambda(dx)$$
$$h_p(\theta) = p(y_p \mid \theta, y_0, \dots, y_{p-1}) \;\Rightarrow\; \eta_n(d\theta) = p(\theta \mid y_0, \dots, y_n)$$
For any MCMC transitions $M_n$ with target $\eta_n$, we have
$$\eta_{n+1} = \eta_{n+1} M_{n+1} = \Psi_{h_{n+1}}(\eta_n)\, M_{n+1} \;\Longrightarrow\; \text{Feynman-Kac model}$$
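A quick numerical check of the first illustration, on a toy finite state space with illustrative values (not from the talk): applying the Boltzmann-Gibbs steps with the incremental weights $h_p = e^{-(\beta_{p+1}-\beta_p)V}$, and omitting the MCMC moves $M_n$ (which leave each $\eta_n$ invariant and so do not change the flow of marginals), turns the reference measure $\lambda$ into $\frac{1}{Z_n} e^{-\beta_n V} \lambda$ when $\beta_0 = 0$.

```python
import numpy as np

# Toy finite state space; lambda = uniform reference measure, V an arbitrary potential.
lam = np.full(4, 0.25)
V = np.array([0.0, 1.0, 2.0, 3.0])
betas = np.linspace(0.0, 2.0, 6)                 # increasing inverse temperatures, beta_0 = 0

eta = lam.copy()
for p in range(len(betas) - 1):
    h = np.exp(-(betas[p + 1] - betas[p]) * V)   # incremental weight h_p(x)
    eta = h * eta / (h @ eta)                    # Boltzmann-Gibbs step Psi_{h_p}(eta)

# The product of the incremental weights telescopes to e^{-beta_n V}
target = np.exp(-betas[-1] * V) * lam
assert np.allclose(eta, target / target.sum())
```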

13 Mean field particle model = Markov chain $\xi_n = (\xi_n^i)_{1 \le i \le N} \in E_n^N$ with
$$\mathbb{P}\big(\xi_{n+1} \in dx \mid \xi_n\big) = \prod_{1 \le i \le N} K_{n+1, \eta_n^N}(\xi_n^i, dx^i) \qquad \text{with} \qquad \eta_n^N = \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i}$$
and the (unbiased) particle normalizing constants
$$Z_{n+1}^N = \eta_n^N(G_n)\, Z_n^N = \prod_{0 \le p \le n} \eta_p^N(G_p)$$

14 Mean field particle model (same model, with $K_{n+1,\eta_n} = S_{n,\eta_n} M_{n+1}$)
Sequential particle simulation technique: $G_n$-acceptance-rejection with recycling, then $M_{n+1}$-propositions.
Genetic type branching particle model:
$$\xi_n = (\xi_n^i)_{1 \le i \le N} \;\xrightarrow{\ G_n\text{-selection}\ }\; \widehat{\xi}_n = (\widehat{\xi}_n^i)_{1 \le i \le N} \;\xrightarrow{\ M_{n+1}\text{-mutation}\ }\; \xi_{n+1} = (\xi_{n+1}^i)_{1 \le i \le N}$$
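As an illustration, here is a minimal sketch of such a genetic selection/mutation scheme applied to the confinement example ($G_n = 1_A$, random-walk mutations on $\mathbb{Z}$). The plain multinomial resampling used here is a simplification of the $G_n$-acceptance-rejection-with-recycling step described above, and all numerical choices are illustrative; the returned quantity is the particle normalizing constant $Z_n^N = \prod_{0 \le p < n} \eta_p^N(G_p)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_confinement(n, N=2000, half_width=5):
    """Genetic particle approximation of Z_n = P(X_p in A, 0 <= p < n) for the
    confinement example: G_p = 1_A with A = {-half_width, ..., half_width},
    M_p = one simple random walk step on Z.  Returns (Z_n^N, final cloud)."""
    xi = np.zeros(N, dtype=int)                            # xi_0^i = 0 for all particles
    logZ = 0.0
    for p in range(n):
        G = (np.abs(xi) <= half_width).astype(float)       # potentials G_p(xi_p^i)
        eta_G = G.mean()                                    # eta_p^N(G_p)
        if eta_G == 0.0:
            return 0.0, xi                                  # whole cloud absorbed
        logZ += np.log(eta_G)
        # G_p-selection: multinomial resampling proportional to the potentials
        idx = rng.choice(N, size=N, p=G / G.sum())
        xi = xi[idx]
        # M_{p+1}-mutation: one random walk step per particle
        xi = xi + rng.choice((-1, 1), size=N)
    return np.exp(logZ), xi

Z_N, cloud = particle_confinement(n=100)
print("Z_100^N ~", Z_N)      # unbiased particle estimate of the confinement probability
```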

15-33 Graphical illustration of the particle approximation $\eta_n \approx \eta_n^N := \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i}$ (a sequence of animation frames; the figures are not reproduced in this transcription).

34 How to use the full ancestral tree model?
Assume $G_{n-1}(x_{n-1})\, M_n(x_{n-1}, dx_n) = H_n(x_{n-1}, x_n)\, \nu_n(dx_n)$ (hyp.). Backward decomposition:
$$Q_n(d(x_0, \dots, x_n)) = \eta_n(dx_n)\, M_{n,\eta_{n-1}}(x_n, dx_{n-1}) \cdots M_{1,\eta_0}(x_1, dx_0)$$
with backward transitions $M_{n+1,\eta_n}(x_{n+1}, dx_n) \propto \eta_n(dx_n)\, H_{n+1}(x_n, x_{n+1})$.

35 Particle approximation = random stochastic matrices:
$$Q_n^N(d(x_0, \dots, x_n)) = \eta_n^N(dx_n)\, M_{n,\eta_{n-1}^N}(x_n, dx_{n-1}) \cdots M_{1,\eta_0^N}(x_1, dx_0)$$

36 Example: additive functionals $f_n(x_0, \dots, x_n) = \frac{1}{n+1} \sum_{0 \le p \le n} f_p(x_p)$:
$$Q_n^N(f_n) := \frac{1}{n+1} \sum_{0 \le p \le n} \underbrace{\eta_n^N\, M_{n,\eta_{n-1}^N} \cdots M_{p+1,\eta_p^N}}_{\text{matrix operations}}(f_p)$$
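A minimal sketch of this backward particle estimate of a normalized additive functional, on a toy linear-Gaussian filtering model (the model, the multinomial resampling and the functional $f_p(x) = x$ are illustrative assumptions, not prescriptions from the talk). The inner $N \times N$ backward matrices are exactly the random stochastic matrices above; the cost is $O(N^2)$ per time step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-Gaussian state-space model (illustrative choice):
#   X_p = a X_{p-1} + sigma_x W_p,    Y_p = X_p + sigma_y V_p
a, sigma_x, sigma_y = 0.9, 1.0, 0.5

def transition_logpdf(x_prev, x_new):
    """log m_p(x_prev, x_new), up to an additive constant that cancels
    after row normalization of the backward matrices."""
    return -0.5 * ((x_new - a * x_prev) / sigma_x) ** 2

def backward_additive_smoother(y, N=500):
    """Backward particle estimate Q_n^N(f_n) of the normalized additive
    functional f_n(x_0..x_n) = (n+1)^{-1} sum_p f_p(x_p), here with f_p(x) = x."""
    n = len(y)
    xi = sigma_x * rng.standard_normal(N)        # sample eta_0
    S = xi.copy()                                # S_0(i) = f_0(xi_0^i)
    for p in range(1, n + 1):
        G = np.exp(-0.5 * ((y[p - 1] - xi) / sigma_y) ** 2)     # G_{p-1}(xi_{p-1}^i)
        idx = rng.choice(N, size=N, p=G / G.sum())              # selection
        xi_new = a * xi[idx] + sigma_x * rng.standard_normal(N)  # mutation
        # Backward matrix B_p(i, j) prop. to G_{p-1}(xi_{p-1}^j) m_p(xi_{p-1}^j, xi_p^i)
        logH = np.log(G)[None, :] + transition_logpdf(xi[None, :], xi_new[:, None])
        B = np.exp(logH - logH.max(axis=1, keepdims=True))
        B /= B.sum(axis=1, keepdims=True)
        # S_p(i) = f_p(xi_p^i) + sum_j B_p(i, j) S_{p-1}(j)
        S = xi_new + B @ S
        xi = xi_new
    return S.mean() / (n + 1)                    # Q_n^N(f_n)

y = rng.standard_normal(30)                      # synthetic observation record
print(backward_additive_smoother(y))
```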

37 The example of the h-process and the absorption model
$$Q_n(d(x_0, \dots, x_n)) \;\propto\; \mathbb{P}\big((X_0^h, \dots, X_n^h) \in d(x_0, \dots, x_n)\big)\; h^{-1}(x_n)$$
Invariant measure $\mu_h = \mu_h M_h$ and normalized additive functionals $f_n(x_0, \dots, x_n) = \frac{1}{n+1} \sum_{0 \le p \le n} f(x_p)$:
$$Q_n(f_n) \;\xrightarrow[n \to \infty]{}\; \mu_h(f)$$
If $G = G^\theta$ depends on some $\theta \in \mathbb{R}$, take $f := \partial_\theta \log G^\theta$; then
$$\partial_\theta \log \lambda^\theta \;\simeq_{n \uparrow \infty}\; \frac{1}{n+1}\, \partial_\theta \log Z_{n+1}^\theta = Q_n(f_n)$$
NB: similar expression when $M^\theta$ depends on some $\theta \in \mathbb{R}$.

38 Four particle estimates
Current population: individuals $\xi_n^i$ almost iid with law $\eta_n$:
$$\eta_n^N = \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i}$$
Path space models: ancestral lines almost iid with law $Q_n$ (in path space):
$$\eta_n^N := \frac{1}{N} \sum_{1 \le i \le N} \delta_{\text{ancestral line}_n(i)}$$
Backward particle model:
$$Q_n^N(d(x_0, \dots, x_n)) = \eta_n^N(dx_n)\, M_{n,\eta_{n-1}^N}(x_n, dx_{n-1}) \cdots M_{1,\eta_0^N}(x_1, dx_0)$$
Normalizing constants (unbiased):
$$Z_{n+1} = \prod_{0 \le p \le n} \eta_p(G_p) \;\simeq^N\; Z_{n+1}^N = \prod_{0 \le p \le n} \eta_p^N(G_p)$$

39 How & why it works
(Computer Sci.) Stochastic adaptive grid approximation.
(Stats) Universal acceptance-rejection-recycling sampling schemes.
(Probab.) Stochastic linearization/perturbation technique:
$$\eta_n = \Phi_n(\eta_{n-1}) \qquad \longrightarrow \qquad \eta_n^N = \Phi_n(\eta_{n-1}^N) + \frac{1}{\sqrt{N}}\, V_n^N$$
Theorem: $(V_n^N)_{n \ge 0} \xrightarrow[N \to \infty]{} (V_n)_{n \ge 0}$, independent centered Gaussian fields.
$\Phi_{p,n}(\eta_p) = \eta_n$ stable semigroup $\Rightarrow$ no propagation of local errors $\Rightarrow$ uniform control w.r.t. the time horizon $\Rightarrow$ new concentration inequalities for (general) interacting processes.

40 Key idea = first order expansions
Key telescoping decomposition:
$$\eta_n^N - \eta_n = \sum_{p=0}^{n} \Big[ \Phi_{p,n}(\eta_p^N) - \Phi_{p,n}\big(\Phi_p(\eta_{p-1}^N)\big) \Big]$$
First order expansion:
$$\sqrt{N}\, \Big[ \Phi_{p,n}(\eta_p^N) - \Phi_{p,n}\big(\Phi_p(\eta_{p-1}^N)\big) \Big] = \sqrt{N}\, \Big[ \Phi_{p,n}\Big(\Phi_p(\eta_{p-1}^N) + \tfrac{1}{\sqrt{N}}\, V_p^N\Big) - \Phi_{p,n}\big(\Phi_p(\eta_{p-1}^N)\big) \Big] = \underbrace{V_p^N D_{p,n}}_{\text{fluctuation term}} + \frac{1}{\sqrt{N}}\, \underbrace{R_{p,n}^N}_{\text{bias term}}$$
with a predictable first order operator $D_{p,n}$ and a second-order remainder measure $R_{p,n}^N$.

41 Stochastic perturbation model
$$W_n^{\eta,N} := \sqrt{N}\, \big[\eta_n^N - \eta_n\big] = \sum_{0 \le p \le n} V_p^N D_{p,n} + \frac{1}{\sqrt{N}}\, R_n^N$$
Under some mixing condition on the limiting FK semigroups $\Phi_{p,n}$:
$$\mathrm{osc}\big(D_{p,n}(f)\big) \le \mathrm{Cte}\; e^{-(n-p)\alpha} \qquad \text{and} \qquad \mathbb{E}\big(|R_n^N(f)|^m\big) \le \mathrm{Cte}^m\; 2^{-m}\, (2m)!/m!$$
$\Rightarrow$ uniform concentration estimates w.r.t. the time parameter.

42 Particle free energy
Multiplicative formulae:
$$Z_n^N = \prod_{0 \le p < n} \eta_p^N(G_p) = \gamma_n^N(1) \;\xrightarrow[N \to \infty]{}\; \gamma_n(1) = \prod_{0 \le p < n} \eta_p(G_p)$$
Taylor first order expansion, using $\log y - \log x = \int_0^1 \frac{y - x}{x + t(y - x)}\, dt$ for $x, y > 0$:
$$\log\big(\gamma_n^N(1)/\gamma_n(1)\big) = \sum_{0 \le p < n} \Big( \log \eta_p^N(G_p) - \log \eta_p(G_p) \Big) = \sum_{0 \le p < n} \Big( \log\big( \eta_p(G_p) + \tfrac{1}{\sqrt{N}}\, W_p^{\eta,N}(G_p) \big) - \log \eta_p(G_p) \Big)$$
$$= \frac{1}{\sqrt{N}} \sum_{0 \le p < n} \int_0^1 \frac{W_p^{\eta,N}(G_p)}{\eta_p(G_p) + \frac{t}{\sqrt{N}}\, W_p^{\eta,N}(G_p)}\, dt \;\Longrightarrow\; \text{first order expansion [exercise]}$$

43 Feynman-Kac models
Interacting particle interpretations
Concentration inequalities
Current population models
Particle free energy
Genealogical tree models
Backward particle models

44 Current population models
Constants $(c_1, c_2)$ related to (bias, variance), $c$ a universal constant.
Test functions with $\|f\| \le 1$, for any $(x \ge 0,\ n \ge 0,\ N \ge 1)$: the probability of the event
$$\big[\eta_n^N - \eta_n\big](f) \le \frac{c_1}{N}\, \big(1 + x + \sqrt{x}\big) + \frac{c_2}{\sqrt{N}}\, \sqrt{x}$$
is greater than $1 - e^{-x}$.
Cells $(-\infty, x] = \prod_{i=1}^{d} (-\infty, x_i]$, $x = (x_i)_{1 \le i \le d}$, in $E_n = \mathbb{R}^d$, with $F_n(x) = \eta_n\big(1_{(-\infty, x]}\big)$ and $F_n^N(x) = \eta_n^N\big(1_{(-\infty, x]}\big)$: the probability of the event
$$\sqrt{N}\, \big\| F_n^N - F_n \big\| \le c\, \sqrt{d\, (x + 1)}$$
is greater than $1 - e^{-x}$.

45 Particle free energy models
Constants $(c_1, c_2)$ related to (bias, variance), for any $(x \ge 0,\ n \ge 0,\ N \ge 1)$.
Unbiasedness property:
$$\mathbb{E}\Big( \eta_n^N(f_n) \prod_{0 \le p < n} \eta_p^N(G_p) \Big) = \mathbb{E}\Big( f_n(X_n) \prod_{0 \le p < n} G_p(X_p) \Big)$$
For any $\epsilon \in \{+1, -1\}$, the probability of the event
$$\frac{\epsilon}{n}\, \log \frac{Z_n^N}{Z_n} \le \frac{c_1}{N}\, \big(1 + x + \sqrt{x}\big) + \frac{c_2}{\sqrt{N}}\, \sqrt{x}$$
is greater than $1 - e^{-x}$.
Note: for $0 \le \epsilon \le 1$ we have $(1 - e^{-\epsilon}) \vee (e^{\epsilon} - 1) \le 2\epsilon$, so that
$$e^{-\epsilon} \le \frac{Z_n^N}{Z_n} \le e^{\epsilon} \;\Longrightarrow\; \Big| \frac{Z_n^N}{Z_n} - 1 \Big| \le 2\epsilon$$

46 Genealogical tree models ($\eta_n^N$ in path space)
Constants $(c_1, c_2)$ related to (bias, variance), $c$ a universal constant.
Test functions $f_n$ with $\|f_n\| \le 1$, for any $(x \ge 0,\ n \ge 0,\ N \ge 1)$: the probability of the event
$$\big[\eta_n^N - Q_n\big](f_n) \le c_1\, \frac{n + 1}{N}\, \big(1 + x + \sqrt{x}\big) + c_2\, \sqrt{\frac{(n + 1)}{N}\, x}$$
is greater than $1 - e^{-x}$.
$\mathcal{F}_n$ = indicator functions $f_n$ of cells in $E_n = \mathbb{R}^{d_0} \times \cdots \times \mathbb{R}^{d_n}$: the probability of the event
$$\sup_{f_n \in \mathcal{F}_n} \big| \eta_n^N(f_n) - Q_n(f_n) \big| \le c\, (n + 1)\, \sqrt{\frac{\sum_{0 \le p \le n} d_p}{N}\, (x + 1)}$$
is greater than $1 - e^{-x}$.

47 Backward particle models
Constants $(c_1, c_2)$ related to (bias, variance), $c$ a universal constant.
$f_n$ normalized additive functional with $\|f_p\| \le 1$, for any $(x \ge 0,\ n \ge 0,\ N \ge 1)$: the probability of the event
$$\big[ Q_n^N - Q_n \big](f_n) \le \frac{c_1}{N}\, \big(1 + x + \sqrt{x}\big) + c_2\, \sqrt{\frac{x}{N (n + 1)}}$$
is greater than $1 - e^{-x}$.
$f_{a,n}$ normalized additive functional w.r.t. $f_p = 1_{(-\infty, a]}$, $a \in \mathbb{R}^d = E_n$: the probability of the event
$$\sup_{a \in \mathbb{R}^d} \big| Q_n^N(f_{a,n}) - Q_n(f_{a,n}) \big| \le c\, \sqrt{\frac{d\, (x + 1)}{N}}$$
is greater than $1 - e^{-x}$.
