Advanced Monte Carlo integration methods — P. Del Moral (INRIA team ALEA), INRIA & Bordeaux Mathematical Institute & CMAP, École Polytechnique


1 Advanced Monte Carlo integration methods
P. Del Moral (INRIA team ALEA), INRIA & Bordeaux Mathematical Institute & CMAP, École Polytechnique
MCQMC 2012, Sydney — Sunday tutorial, the 12th, 2012

Some hyper-refs:
- Feynman-Kac Formulae. Genealogical & Interacting Particle Systems with Applications, Springer (2004).
- Sequential Monte Carlo Samplers, JRSS B (2006) (joint work with Doucet & Jasra).
- On the concentration properties of interacting particle processes, Foundations & Trends in Machine Learning (2012) (joint work with Hu & Wu). [+ Refs]
- More references on the website [delmoral/index.html] [+ Links]

2 Outline
- Some basic notation
- Monte Carlo methods: a brief introduction; the importance sampling trick; the Metropolis-Hastings model; the Gibbs sampler
- Measure valued equations: sequence of target probabilities; probability mass transport models (nonlinear Markov transport models, Boltzmann-Gibbs measures)
- Feynman-Kac models: updating-prediction models; path space models; application domains & extensions; a little stochastic analysis
- Interacting Monte Carlo models: interacting/adaptive MCMC methods; mean field interacting particle models; the 4 particle estimates

3 Outline (sections)
- Some basic notation
- Monte Carlo methods
- Measure valued equations
- Feynman-Kac models
- Interacting Monte Carlo models

4 Basic notation
$\mathcal{P}(E)$ = probability measures, $\mathcal{B}(E)$ = bounded functions on $E$. For $(\mu, f) \in \mathcal{P}(E) \times \mathcal{B}(E)$:
$$\mu(f) = \int \mu(dx)\, f(x)$$
Integral operators $Q(x_1, dx_2)$ from $x_1 \in E_1$ to $x_2 \in E_2$:
$$Q(f)(x_1) = \int Q(x_1, dx_2)\, f(x_2), \qquad [\mu Q](dx_2) = \int \mu(dx_1)\, Q(x_1, dx_2)$$
(so that $[\mu Q](f) = \mu[Q(f)]$).
Boltzmann-Gibbs transformation (Bayes-type updating rule), for a positive and bounded potential function $G$:
$$\mu(dx) \mapsto \Psi_G(\mu)(dx) = \frac{1}{\mu(G)}\, G(x)\, \mu(dx)$$

5 Basic notation
For $E = \{1, \ldots, d\}$, integral operations = matrix operations. Indeed, with the row vector $\mu = [\mu(1), \ldots, \mu(d)]$, the matrix $Q = (Q(i,j))_{1 \le i,j \le d}$ and the column vector $f = [f(1), \ldots, f(d)]^{\top}$:
$$(\mu Q)(j) = \sum_i \mu(i)\, Q(i,j) \qquad \& \qquad Q(f)(i) = \sum_j Q(i,j)\, f(j)$$
... and of course the duality formula $\mu(f) = \sum_i \mu(i)\, f(i)$.
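
A minimal numpy check of this finite-state duality $[\mu Q](f) = \mu[Q(f)]$; all names (mu, Q, f) are illustrative, not from the slides:

import numpy as np

rng = np.random.default_rng(0)
d = 4
mu = rng.random(d); mu /= mu.sum()                          # probability row vector on E = {1,...,d}
Q = rng.random((d, d)); Q /= Q.sum(axis=1, keepdims=True)   # Markov transition matrix
f = rng.random(d)                                           # a bounded test function

lhs = (mu @ Q) @ f          # [mu Q](f): transport the measure, then integrate
rhs = mu @ (Q @ f)          # mu[Q(f)]: integrate the function, then average
assert np.isclose(lhs, rhs)
print(lhs, rhs)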

6 Outline
- Some basic notation
- Monte Carlo methods: a brief introduction; the importance sampling trick; the Metropolis-Hastings model; the Gibbs sampler
- Measure valued equations
- Feynman-Kac models
- Interacting Monte Carlo models

7 Introduction
Objective: given a target probability $\eta(dx)$, compute the map $\eta : f \mapsto \eta(f) = \int f(x)\, \eta(dx)$.
Monte Carlo methods:
$$\eta^N := \frac{1}{N} \sum_{1 \le i \le N} \delta_{X^i} \;\xrightarrow[N \to \infty]{}\; \eta$$
with two traditional types of samples:
- $(X^i)_{i \ge 1}$ i.i.d. with common law $\eta$
- $(X^i)_{i \ge 1}$ a Markov chain with invariant measure $\eta$
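
A minimal sketch of the plain i.i.d. Monte Carlo estimate $\eta^N(f)$; the target $\eta = \mathcal{N}(0,1)$ and $f(x) = x^2$ are illustrative choices, not from the slides:

import numpy as np

rng = np.random.default_rng(1)
N = 100_000
X = rng.standard_normal(N)   # X^i i.i.d. with common law eta
print(np.mean(X**2))         # eta^N(f) -> eta(f) = 1 as N grows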

8 The importance sampling trick
Key formula ($f > 0$):
$$\eta(f) = \int f(x)\, \frac{d\eta}{d\pi}(x)\, \pi(dx) = \pi(f\, g) \quad \text{with } g = \frac{d\eta}{d\pi}$$
Sample $X^i$ i.i.d. with common law $\pi$ and set
$$\eta^N := \frac{1}{N} \sum_{1 \le i \le N} g(X^i)\, \delta_{X^i} \;\xrightarrow[N \to \infty]{}\; \eta$$
Note that
$$N\, \mathrm{Var}\big( \eta^N(f) \big) = \eta(g\, f^2) - \eta(f)^2 = 0$$
for the updated/twisted measure
$$\pi(dx) = \Psi_f(\eta)(dx) := \frac{1}{\eta(f)}\, f(x)\, \eta(dx) \qquad (\Rightarrow\; g = \eta(f)/f)$$
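
A hedged sketch of the trick: sample from $\pi$ and reweight by $g = d\eta/d\pi$. The target $\eta = \mathcal{N}(0,1)$, proposal $\pi = \mathcal{N}(0, 2^2)$ and $f(x) = x^2$ are illustrative:

import numpy as np

rng = np.random.default_rng(2)

def pdf_normal(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

N = 100_000
X = rng.normal(0.0, 2.0, size=N)             # X^i i.i.d. with common law pi
g = pdf_normal(X, 1.0) / pdf_normal(X, 2.0)  # Radon-Nikodym weights g(X^i)
print(np.mean(g * X**2))                     # pi(f g) estimates eta(f) = 1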

9 The Metropolis-Hastings model
From $x$ we propose $x'$ with law $P(x, dx')$ and accept the move with probability
$$a(x, x') := 1 \wedge \frac{\eta(dx')\, P(x', dx)}{\eta(dx)\, P(x, dx')}$$
The resulting Markov transition $M(x, dx')$ is $\eta$-reversible: for $x \ne x'$,
$$\eta(dx)\, M(x, dx') = \eta(dx)\, P(x, dx')\, a(x, x') = [\eta(dx)\, P(x, dx')] \wedge [\eta(dx')\, P(x', dx)] = \eta(dx')\, M(x', dx)$$
Fixed point equation:
$$\int \eta(dx)\, M(x, dx') = (\eta M)(dx') = \eta(dx') \quad \Longleftrightarrow \quad \eta M = \eta$$

10 The Metropolis-Hastings model
Markov chain samples
$$X_1 \xrightarrow{M} X_2 \xrightarrow{M} X_3 \xrightarrow{M} \cdots \xrightarrow{M} X_{n-1} \xrightarrow{M} X_n \xrightarrow{M} \cdots$$
with the Markov transport equation
$$\underbrace{P(X_n \in dx_n)}_{=\,\eta_n(dx_n)} = \int \underbrace{P(X_{n-1} \in dx_{n-1})}_{=\,\eta_{n-1}(dx_{n-1})}\; \underbrace{P(X_n \in dx_n \mid X_{n-1} = x_{n-1})}_{=\,M(x_{n-1},\, dx_n)}$$
Linear measure valued equation: $\eta_n = \eta_{n-1} M \xrightarrow[n \to \infty]{} \eta = \eta M$.
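
A minimal random-walk Metropolis-Hastings sketch; the target density (a double well, known up to a constant) and the step size are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(3)

def target(x):                       # unnormalized eta(dx)/dx
    return np.exp(-(x**2 - 1.0)**2)

n_steps, x, chain = 50_000, 0.0, []
for _ in range(n_steps):
    y = x + rng.normal(0.0, 0.5)     # symmetric proposal P(x, dy) => a(x,y) = 1 ^ target(y)/target(x)
    if rng.random() < min(1.0, target(y) / target(x)):
        x = y                        # accept the move, otherwise stay at x
    chain.append(x)
print(np.mean(chain), np.var(chain))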

11 Example 1
Boltzmann-Gibbs target measure:
$$\eta(dx) = \eta_n(dx) = \frac{1}{Z_n}\, e^{-\beta_n V(x)}\, \lambda(dx) \quad \text{with } \beta_n = 1/T_n \uparrow$$
The M-H transition $M_n$ s.t. $\eta_n = \eta_n M_n$:
- proposed moves $P(x, dx')$,
- acceptance rate
$$a_n(x, x') = 1 \wedge \left( e^{-\beta_n [V(x') - V(x)]}\; \frac{\lambda(dx')\, P(x', dx)}{\lambda(dx)\, P(x, dx')} \right)$$
If $P$ is $\lambda$-reversible then we have (with $a_+ = \max(a, 0)$)
$$a_n(x, x') = e^{-\beta_n [V(x') - V(x)]_+}$$
⇒ a stochastic-style steepest descent.
Some mixing problems: $\beta_n$ large ⇒ high rejection / absorption in local minima.

12 Example 2
Restriction probability:
$$\eta(dx) = \eta_n(dx) = \frac{1}{Z_n}\, 1_{A_n}(x)\, \lambda(dx) \quad \text{with } A_n \downarrow A$$
The M-H transition $M_n$ s.t. $\eta_n = \eta_n M_n$ = "shaker" of the set $A_n$:
- $\lambda$-reversible proposals $P(x, dx')$,
- acceptance iff $x' \in A_n$.
Example of $\lambda$-reversible moves: $\lambda = \mathcal{N}(0,1) = \mathrm{Law}(W)$,
$$x' = a\, x + \sqrt{1 - a^2}\; W, \qquad a \in [0, 1]$$
Some mixing problems: $a$ or $A_n$ too small ⇒ high rejection.
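
A sketch of the "shaker" move for this restriction target: $\lambda = \mathcal{N}(0,1)$, $\lambda$-reversible proposal $x' = a\,x + \sqrt{1-a^2}\,W$, accept iff $x'$ stays in the set. The set $A = [1.5, \infty)$ and $a = 0.9$ are illustrative choices:

import numpy as np

rng = np.random.default_rng(4)
a, n_steps = 0.9, 100_000
x = 2.0                                  # a starting point inside A
samples = []
for _ in range(n_steps):
    y = a * x + np.sqrt(1 - a**2) * rng.standard_normal()
    if y >= 1.5:                         # acceptance iff the proposal stays in A
        x = y
    samples.append(x)
print(np.mean(samples))                  # approximates the mean of N(0,1) restricted to A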

13 Gibbs samplers ⊂ Metropolis-Hastings models
On product state spaces: $\eta(dx) := \eta(d(x^1, x^2))$ on $E = E_1 \times E_2$.
Disintegration formulae:
$$\eta(d(x^1, x^2)) = \eta_1(dx^1)\, P_2(x^1, dx^2) = \eta_2(dx^2)\, P_1(x^2, dx^1)$$
Unit acceptance rates: for the move $x = (x^1, x^2) \xrightarrow{P_1} x' = (x'^1, x^2)$,
$$a_1(x, x') := 1 \wedge \frac{[\eta_2(dx^2)\, P_1(x^2, dx'^1)]\; P_1(x^2, dx^1)}{[\eta_2(dx^2)\, P_1(x^2, dx^1)]\; P_1(x^2, dx'^1)} = 1 = a_2(x, x')$$
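
A minimal two-component Gibbs sampler sketch for a bivariate Gaussian with correlation rho (an illustrative target): each conditional $P_i$ is sampled exactly, so every move is accepted with probability one:

import numpy as np

rng = np.random.default_rng(5)
rho, n_steps = 0.8, 50_000
s = np.sqrt(1 - rho**2)
x1 = x2 = 0.0
out = np.empty((n_steps, 2))
for t in range(n_steps):
    x1 = rho * x2 + s * rng.standard_normal()   # draw x1 ~ P_1(x2, .)
    x2 = rho * x1 + s * rng.standard_normal()   # draw x2 ~ P_2(x1, .)
    out[t] = (x1, x2)
print(np.corrcoef(out.T)[0, 1])                 # should be close to rho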

14 Outline
- Some basic notation
- Monte Carlo methods
- Measure valued equations: sequence of target probabilities; probability mass transport models (nonlinear Markov transport models, Boltzmann-Gibbs measures)
- Feynman-Kac models
- Interacting Monte Carlo models

15 Measure valued equations
Measure valued equations = sequence of (target) probabilities (with increasing complexity):
$$\eta_0 \to \eta_1 \to \cdots \to \eta_{n-1} \to \eta_n \to \eta_{n+1} \to \cdots$$
Examples:
- Boltzmann-Gibbs w.r.t. $\beta_n \uparrow$: $\eta_n(dx) = \frac{1}{Z_n}\, e^{-\beta_n V(x)}\, \lambda(dx)$
- Restriction models w.r.t. $A_n \downarrow$: $\eta_n(dx) = \frac{1}{Z_n}\, 1_{A_n}(x)\, \lambda(dx)$

16 Probability mass transport models
Reminder: two types of transformations, given a potential function $G(x) \ge 0$ and a Markov transition $M(x, dy)$.
- Boltzmann-Gibbs transformation (= Bayes-type updating rule):
$$\Psi_G : \eta(dx) \mapsto \Psi_G(\eta)(dx) = \frac{1}{\eta(G)}\, G(x)\, \eta(dx)$$
- Markov transport equation:
$$M : \eta(dx) \mapsto (\eta M)(dx') = \int \eta(dx)\, M(x, dx')$$

17 Key observation
Boltzmann-Gibbs transform = nonlinear Markov transport:
$$\Psi_G(\eta) = \eta\, S_{G,\eta}$$
with the Markov transition
$$S_{G,\eta}(x, dx') = \epsilon\, G(x)\, \delta_x(dx') + \big( 1 - \epsilon\, G(x) \big)\, \Psi_G(\eta)(dx')$$
for any $\epsilon \in [0,1]$ s.t. $\epsilon\, G \le 1$.
Proof:
$$S_{G,\eta}(f)(x) = \epsilon\, G(x)\, f(x) + \big( 1 - \epsilon\, G(x) \big)\, \Psi_G(\eta)(f)$$
$$\Rightarrow\quad \eta\big( S_{G,\eta}(f) \big) = \epsilon\, \eta(G f) + \big( 1 - \epsilon\, \eta(G) \big)\, \frac{\eta(G f)}{\eta(G)} = \Psi_G(\eta)(f)$$
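
A sketch of one draw from the selection kernel $S_{G,\eta}(x, \cdot)$ when $\eta$ is an empirical measure of particles: stay at $x$ with probability $\epsilon\, G(x)$, otherwise jump to a particle chosen with Boltzmann-Gibbs weights. The potential $G$ (with values in $[0,1]$, so $\epsilon = 1$ is allowed) and the particle cloud are illustrative:

import numpy as np

rng = np.random.default_rng(6)

def select(x, particles, G, eps):
    if rng.random() < eps * G(x):
        return x                                 # keep the current state
    w = G(particles); w /= w.sum()               # Psi_G(eta^N) weights
    return particles[rng.choice(len(particles), p=w)]

particles = rng.standard_normal(1000)
G = lambda z: np.exp(-z**2)                      # potential with values in [0,1]
print(select(0.3, particles, G, eps=1.0))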

18 Boltzmann-Gibbs measures
$$\eta_n(dx) := \frac{1}{Z_n}\, e^{-\beta_n V(x)}\, \lambda(dx) \quad \text{with } \beta_n \uparrow$$
For any MCMC transition $M_n$ with target $\eta_n$: $\eta_n = \eta_n M_n$.
Updating of the temperature parameter:
$$\eta_{n+1} = \Psi_{G_n}(\eta_n) \quad \text{with } G_n = e^{-(\beta_{n+1} - \beta_n) V}$$
Proof: $e^{-\beta_{n+1} V} = e^{-(\beta_{n+1} - \beta_n) V}\; e^{-\beta_n V}$.
Consequence:
$$\eta_{n+1} = \eta_{n+1} M_{n+1} = \Psi_{G_n}(\eta_n)\, M_{n+1}$$
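
A sketch of one temperature update on a particle cloud: reweight by $G_n = e^{-(\beta_{n+1} - \beta_n)V}$, resample, then apply one MCMC move targeting $\eta_{n+1}$ (a single random-walk Metropolis step as on slide 11). The potential $V$, the betas, and the standard normal initial cloud are all illustrative assumptions:

import numpy as np

rng = np.random.default_rng(10)
V = lambda x: (x**2 - 1.0)**2                     # illustrative potential
beta0, beta1, N = 1.0, 2.0, 10_000
xi = rng.standard_normal(N)                       # assumed (approximate) sample of eta_{beta0}
G = np.exp(-(beta1 - beta0) * V(xi))              # G_n = e^{-(beta_{n+1}-beta_n) V}
xi = xi[rng.choice(N, size=N, p=G / G.sum())]     # selection step: Psi_{G_n}(eta_n^N)
y = xi + rng.normal(0.0, 0.3, size=N)             # one M-H move targeting eta_{beta1}
accept = rng.random(N) < np.exp(-beta1 * (V(y) - V(xi)))
xi = np.where(accept, y, xi)
print(np.mean(V(xi)))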

19 Restriction models
$$\eta_n(dx) := \frac{1}{Z_n}\, 1_{A_n}(x)\, \lambda(dx) \quad \text{with } A_n \downarrow$$
For any MCMC transition $M_n$ with target $\eta_n$: $\eta_n = \eta_n M_n$.
Updating of the subset:
$$\eta_{n+1} = \Psi_{G_n}(\eta_n) \quad \text{with } G_n = 1_{A_{n+1}}$$
Proof: $1_{A_{n+1}} = 1_{A_{n+1}}\, 1_{A_n}$ (since $A_{n+1} \subset A_n$).
Consequence:
$$\eta_{n+1} = \eta_{n+1} M_{n+1} = \Psi_{G_n}(\eta_n)\, M_{n+1}$$

20 Product models
$$\eta_n(dx) := \frac{1}{Z_n}\, \Big\{ \prod_{p=0}^{n} h_p(x) \Big\}\, \lambda(dx) \quad \text{with } h_p \ge 0$$
For any MCMC transition $M_n$ with target $\eta_n$: $\eta_n = \eta_n M_n$.
Updating of the product:
$$\eta_{n+1} = \Psi_{G_n}(\eta_n) \quad \text{with } G_n = h_{n+1}$$
Proof: $\prod_{p=0}^{n+1} h_p = h_{n+1}\, \prod_{p=0}^{n} h_p$.
Consequence:
$$\eta_{n+1} = \eta_{n+1} M_{n+1} = \Psi_{G_n}(\eta_n)\, M_{n+1}$$

21 Outline
- Some basic notation
- Monte Carlo methods
- Measure valued equations
- Feynman-Kac models: updating-prediction models; path space models; application domains & extensions; a little stochastic analysis
- Interacting Monte Carlo models

22 Feynman-Kac models
The solution of any measure valued equation of the form
$$\eta_n = \Psi_{G_{n-1}}(\eta_{n-1})\, M_n$$
is given by a normalized Feynman-Kac model
$$\eta_n(f) = \gamma_n(f) / \gamma_n(1)$$
with the positive measure $\gamma_n$ defined by
$$\gamma_n(f) = \mathbb{E}\Big( f(X_n)\, \prod_{p=0}^{n-1} G_p(X_p) \Big)$$
where $X_n$ stands for the Markov chain with transitions
$$P(X_n \in dx_n \mid X_{n-1} = x_{n-1}) = M_n(x_{n-1}, dx_n)$$
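
A naive i.i.d. sketch of this formula: estimate $\gamma_n(f)$ and $\eta_n(f) = \gamma_n(f)/\gamma_n(1)$ by sampling free-chain paths and weighting by $\prod_p G_p(X_p)$. The chain and potential (a $\pm 1$ random walk confined to $[-L, L]$, as on slide 24) and $f(x) = x^2$ are the illustrative choices:

import numpy as np

rng = np.random.default_rng(7)
N, n, L = 100_000, 10, 2
X = np.zeros(N)
W = np.ones(N)                                    # running products of potentials
for p in range(n):
    W *= (np.abs(X) <= L)                         # G_p(X_p) = 1_{[-L,L]}(X_p)
    X += rng.choice([-1, 1], size=N)              # free random-walk step M_{p+1}
gamma_f = np.mean(W * X**2)                       # gamma_n(f) with f(x) = x^2
Z_n = np.mean(W)                                  # gamma_n(1) = Proba(confined up to n)
print(Z_n, gamma_f / Z_n)                         # eta_n(f) = gamma_n(f) / gamma_n(1)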

23 Path space models
If we take the historical process $\mathbf{X}_n = (X_0, \ldots, X_n) \in \mathbf{E}_n = E^{n+1}$ and $\mathbf{G}_n(\mathbf{X}_n) = G_n(X_n)$, then we have
$$\boldsymbol{\gamma}_n(f_n) = \mathbb{E}\Big( f_n(\mathbf{X}_n)\, \prod_{p=0}^{n-1} G_p(X_p) \Big) \quad \Rightarrow \quad \boldsymbol{\eta}_n = \mathbb{Q}_n$$
with the Feynman-Kac model on path space
$$d\mathbb{Q}_n := \frac{1}{Z_n}\, \Big\{ \prod_{0 \le p < n} G_p(X_p) \Big\}\; d\underbrace{\mathbb{P}_n}_{=\,\mathrm{law}(X_0, \ldots, X_n)}$$
Evolution equation: $\mathbb{Q}_n = \Psi_{\mathbf{G}_{n-1}}(\mathbb{Q}_{n-1})\, \mathbf{M}_n$, with the Markov transition $\mathbf{M}_n$ of the historical process $\mathbf{X}_n$.

24 Application domains & extensions
Confinements: random walk $X_n \in \mathbb{Z}^d$, $X_0 = 0$, & $G_n := 1_{[-L,L]}$, $L > 0$:
$$\mathbb{Q}_n = \mathrm{Law}\big( (X_0, \ldots, X_n) \,\big|\, X_p \in [-L, L],\; 0 \le p < n \big)$$
and
$$Z_n = \gamma_n(1) = \mathrm{Proba}\big( X_p \in [-L, L],\; 0 \le p < n \big)$$
Self avoiding walks: $\mathbf{X}_n = (X_0, \ldots, X_n)$ & $\mathbf{G}_n(\mathbf{X}_n) = 1_{X_n \notin \{X_0, \ldots, X_{n-1}\}}$:
$$\mathbb{Q}_n = \mathrm{Law}\big( (X_0, \ldots, X_n) \,\big|\, X_p \ne X_q,\; 0 \le p < q < n \big)$$
and
$$Z_n = \gamma_n(1) = \mathrm{Proba}\big( X_p \ne X_q,\; 0 \le p < q < n \big)$$

25 Filtering
$$M_n(x_{n-1}, dx_n) = p(x_n \mid x_{n-1})\, dx_n \qquad \& \qquad G_n(x_n) = p(y_n \mid x_n)$$
and
$$\mathbb{Q}_{n+1} = \mathrm{Law}\big( (X_0, \ldots, X_{n+1}) \,\big|\, Y_0 = y_0, \ldots, Y_n = y_n \big), \qquad Z_{n+1} = \gamma_{n+1}(1) = p(y_0, \ldots, y_n)$$
Hidden Markov chain models = product models:
$$\underbrace{d\mathbb{P}\big( \theta \mid (y_0, \ldots, y_n) \big)}_{\eta_n(d\theta)} \;\propto\; \prod_{p=0}^{n} \underbrace{p\big( y_p \mid \theta, (y_0, \ldots, y_{p-1}) \big)}_{h_p(\theta)}\; \underbrace{d\mathbb{P}(\theta)}_{\lambda(d\theta)}$$

26 Continuous time models
$$X_n := X_{[t_n, t_{n+1}[} \qquad \& \qquad G_n(X_n) = \exp\Big( \int_{t_n}^{t_{n+1}} V_s(X_s)\, ds \Big)$$
$$\Rightarrow\quad \prod_{0 \le p < n} G_p(X_p) = \exp\Big( \int_{t_0}^{t_n} V_s(X_s)\, ds \Big)$$
or, using a simple Euler scheme $X_p = X^{\Delta t}_{t_p}$,
$$\exp\Big( \int_0^{t_n} \big[ V_s(X_s)\, ds + W_s(X_s)\, dB_s \big] \Big) \;\simeq\; \prod_{0 \le p < n} e^{\, V_{t_p}(X_p)\, \Delta t \, + \, W_{t_p}(X_p)\, \sqrt{\Delta t}\; N_p(0,1)}$$

27 A little analysis — three key formulae
1. Time marginal measures = path space measures:
$$\big[ \mathbf{X}_n := (X_0, \ldots, X_n) \;\&\; \mathbf{G}_n(\mathbf{X}_n) = G_n(X_n) \big] \;\Rightarrow\; \boldsymbol{\eta}_n = \mathbb{Q}_n$$
2. Normalizing constants (= free energy models):
$$Z_n = \mathbb{E}\Big( \prod_{0 \le p < n} G_p(X_p) \Big) = \prod_{0 \le p < n} \eta_p(G_p)$$
Proof ($Z_n = \gamma_n(1)$):
$$\gamma_{n+1}(1) = \mathbb{E}\Big( G_n(X_n)\, \prod_{0 \le p \le n-1} G_p(X_p) \Big) = \gamma_n(G_n) = \eta_n(G_n)\, \gamma_n(1)$$

28 The last key — backward Markov models
With
$$\mathbb{Q}_n(d(x_0, \ldots, x_n)) \;\propto\; \eta_0(dx_0)\, Q_1(x_0, dx_1) \cdots Q_n(x_{n-1}, dx_n), \qquad Q_n(x_{n-1}, dx_n) := G_{n-1}(x_{n-1})\, M_n(x_{n-1}, dx_n)$$
If we assume (hyp) $Q_{n+1}(x_n, dx_{n+1}) = H_{n+1}(x_n, x_{n+1})\, \nu_{n+1}(dx_{n+1})$ and set
$$\eta_{n+1}(dx) = \frac{1}{\eta_n(G_n)}\, \eta_n\big( H_{n+1}(\cdot, x) \big)\, \nu_{n+1}(dx), \qquad M_{n+1, \eta_n}(x_{n+1}, dx_n) = \frac{\eta_n(dx_n)\, H_{n+1}(x_n, x_{n+1})}{\eta_n\big( H_{n+1}(\cdot, x_{n+1}) \big)}$$
then we find the backward equation
$$\eta_{n+1}(dx_{n+1})\, M_{n+1, \eta_n}(x_{n+1}, dx_n) = \frac{1}{\eta_n(G_n)}\, \eta_n(dx_n)\, Q_{n+1}(x_n, dx_{n+1})$$

29 The last key (continued)
$$\mathbb{Q}_n(d(x_0, \ldots, x_n)) \;\propto\; \eta_0(dx_0)\, Q_1(x_0, dx_1) \cdots Q_n(x_{n-1}, dx_n)$$
$$\eta_{n+1}(dx_{n+1})\, M_{n+1, \eta_n}(x_{n+1}, dx_n) \;\propto\; \eta_n(dx_n)\, Q_{n+1}(x_n, dx_{n+1})$$
⇒ Backward Markov chain model:
$$\mathbb{Q}_n(d(x_0, \ldots, x_n)) = \eta_n(dx_n)\, M_{n, \eta_{n-1}}(x_n, dx_{n-1}) \cdots M_{1, \eta_0}(x_1, dx_0)$$
with the dual/backward Markov transitions
$$M_{n+1, \eta_n}(x_{n+1}, dx_n) \;\propto\; \eta_n(dx_n)\, H_{n+1}(x_n, x_{n+1})$$

30 Outline
- Some basic notation
- Monte Carlo methods
- Measure valued equations
- Feynman-Kac models
- Interacting Monte Carlo models: interacting/adaptive MCMC methods; mean field interacting particle models; the 4 particle estimates

31 Interacting Monte Carlo models
Objective: solve a nonlinear measure valued equation
$$\eta_{n+1} = \Phi_{n+1}(\eta_n)$$
Two classes of interacting Monte Carlo models:
- interacting/adaptive MCMC methods,
- mean field / interacting particle systems.

32 Interacting/Adaptive MCMC methods
∀n, run an interacting sequence of MCMC algorithms:
$$X_n^1 \to X_n^2 \to \cdots \to X_n^j \to \cdots \qquad [\text{target } \eta_n]$$
$$X_{n+1}^1 \to X_{n+1}^2 \to \cdots \to X_{n+1}^j \to X_{n+1}^{j+1} \to \cdots \qquad [\text{target } \eta_{n+1}]$$
s.t. the move $X_{n+1}^j \xrightarrow{M^{[n+1]}_\eta} X_{n+1}^{j+1}$ depends on the current occupation measure
$$\eta = \eta_n^j := \frac{1}{j} \sum_{1 \le i \le j} \delta_{X_n^i} \;\xrightarrow[j \to \infty]{}\; \eta_n$$
A single fixed point compatibility condition:
$$\forall \eta, \qquad \Phi_{n+1}(\eta)\, M^{[n+1]}_\eta = \Phi_{n+1}(\eta)$$
References:
- A functional central limit theorem for a class of interacting MCMC models, EJP (2009) (joint work with Bercu & Doucet).
- Sequentially interacting Markov chain Monte Carlo, AoS (2010) (joint work with Brockwell & Doucet).
- Interacting MCMC methods for solving nonlinear measure-valued equations, AAP (2010) (joint work with Doucet).
- Fluctuations of interacting MCMC models, SPA (2012) (joint work with Bercu & Doucet).

33 Mean field interacting particle models
Key idea: the solution of any measure valued process $\eta_n = \Phi_n(\eta_{n-1})$ can be seen as the law $\eta_n = \mathrm{Law}(\overline{X}_n)$ of a Markov chain $\overline{X}_n$, for some transitions
$$P\big( \overline{X}_n \in dx_n \mid \overline{X}_{n-1} = x_{n-1} \big) = K_{n, \eta_{n-1}}(x_{n-1}, dx_n)$$
Notice that $\overline{X}_n$ = perfect sampler.
Example: the Feynman-Kac updating-prediction mapping
$$\Phi_n(\eta) = \Psi_{G_{n-1}}(\eta)\, M_n = \eta\, \underbrace{S_{G_{n-1}, \eta}\, M_n}_{K_{n, \eta}} = \eta\, K_{n, \eta}$$

34 Mean field particle interpretation
We approximate the exact/perfect transitions
$$\overline{X}_n \to \overline{X}_{n+1} \sim K_{n+1, \eta_n}(\overline{X}_n, dx_{n+1})$$
by running a Markov chain $\xi_n = (\xi_n^1, \ldots, \xi_n^N) \in E_n^N$ such that
$$\frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i} := \eta_n^N \;\xrightarrow[N \to \infty]{}\; \eta_n$$
Particle transitions ($\forall\, 1 \le i \le N$):
$$\xi_n^i \to \xi_{n+1}^i \sim K_{n+1, \eta_n^N}(\xi_n^i, dx_{n+1})$$

35 Discrete generation mean field particle model
Schematic picture ($\xi_n \in E_n^N \to \xi_{n+1} \in E_{n+1}^N$):
$$(\xi_n^1, \ldots, \xi_n^i, \ldots, \xi_n^N) \;\xrightarrow{K_{n+1, \eta_n^N}}\; (\xi_{n+1}^1, \ldots, \xi_{n+1}^i, \ldots, \xi_{n+1}^N)$$
Rationale:
$$\eta_n^N \underset{N \uparrow \infty}{\simeq} \eta_n \;\Rightarrow\; K_{n+1, \eta_n^N} \underset{N \uparrow \infty}{\simeq} K_{n+1, \eta_n} \;\Rightarrow\; \xi_{n+1}^i \text{ almost i.i.d. copies of } \overline{X}_{n+1} \;\Rightarrow\; \eta_{n+1}^N \underset{N \uparrow \infty}{\simeq} \eta_{n+1}$$

36 Updating-prediction models ⇒ genetic particle model
Selection/mutation scheme:
$$(\xi_n^1, \ldots, \xi_n^N) \;\xrightarrow{S_{G_n, \eta_n^N}}\; (\widehat{\xi}_n^1, \ldots, \widehat{\xi}_n^N) \;\xrightarrow{M_{n+1}}\; (\xi_{n+1}^1, \ldots, \xi_{n+1}^N)$$
Accept/reject/recycling/selection transition:
$$S_{G_n, \eta_n^N}(\xi_n^i, dx) := \epsilon_n\, G_n(\xi_n^i)\, \delta_{\xi_n^i}(dx) + \big( 1 - \epsilon_n\, G_n(\xi_n^i) \big) \sum_{j=1}^{N} \frac{G_n(\xi_n^j)}{\sum_{k=1}^{N} G_n(\xi_n^k)}\, \delta_{\xi_n^j}(dx)$$
Ex.: $G_n = 1_A$, $\epsilon_n = 1$ ⇒ $G_n(\xi_n^i) = 1_A(\xi_n^i)$.
FK mean field particle models = sequential Monte Carlo, population Monte Carlo, genetic algorithms, particle filters, pruning, spawning, reconfiguration, quantum Monte Carlo, go with the winner, ...
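
A compact sketch of the genetic (selection/mutation) particle model for the confinement example of slide 24, using plain multinomial selection (i.e. the $\epsilon_n = 0$ variant of the transition above); it also returns the particle free energy product $Z_n^N = \prod_p \eta_p^N(G_p)$ discussed on slide 58:

import numpy as np

rng = np.random.default_rng(8)

def genetic_particle_model(N, n, L):
    xi = np.zeros(N)                          # xi_0^i: all walks start at 0
    logZ = 0.0
    for p in range(n):
        G = (np.abs(xi) <= L).astype(float)   # G_p(xi_p^i) = 1_{[-L,L]}
        logZ += np.log(G.mean())              # accumulate eta_p^N(G_p)
        w = G / G.sum()
        xi = xi[rng.choice(N, size=N, p=w)]   # selection: resample prop. to G_p
        xi = xi + rng.choice([-1, 1], size=N) # mutation: free random-walk move M_{p+1}
    return xi, np.exp(logZ)

xi, Z = genetic_particle_model(N=10_000, n=10, L=2)
print(Z, np.mean(xi**2))                      # Z_n^N and eta_n^N(f) with f(x) = x^2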

37–56 Graphical illustration (animated sequence of slides): evolution of the particle approximation $\eta_n^N := \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i} \simeq \eta_n$.

57 The 4 particle estimates
Genealogical tree evolution, $(N, n) = (3, 3)$ (tree diagram).
Individuals in the current population = almost i.i.d. samples w.r.t. the FK marginal measures $\eta_n$:
$$\eta_n^N := \frac{1}{N} \sum_{1 \le i \le N} \delta_{\xi_n^i} \;\xrightarrow[N \to \infty]{}\; \eta_n = \text{solution of a nonlinear measure valued process}$$

58 Two more particle estimates
Ancestral lines = almost i.i.d. sampled paths w.r.t. $\mathbb{Q}_n$:
$$(\xi_{0,n}^i, \xi_{1,n}^i, \ldots, \xi_{n,n}^i) := i\text{-th ancestral line}, \qquad i\text{-th current individual} = \xi_n^i$$
$$\frac{1}{N} \sum_{1 \le i \le N} \delta_{(\xi_{0,n}^i,\, \xi_{1,n}^i,\, \ldots,\, \xi_{n,n}^i)} \;\xrightarrow[N \to \infty]{}\; \mathbb{Q}_n$$
Unbiased particle free energy functions:
$$Z_n^N = \prod_{0 \le p < n} \eta_p^N(G_p) \;\xrightarrow[N \to \infty]{}\; Z_n = \prod_{0 \le p < n} \eta_p(G_p)$$
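
A sketch of the genealogical-tree estimate: resampling whole ancestral lines instead of current states gives almost i.i.d. paths under $\mathbb{Q}_n$. It reuses the confinement chain of the previous sketches (illustrative), and evaluates one path functional under the empirical path measure:

import numpy as np

rng = np.random.default_rng(9)
N, n, L = 10_000, 10, 2
paths = np.zeros((N, 1))                              # each row = one ancestral line
for p in range(n):
    G = (np.abs(paths[:, -1]) <= L).astype(float)     # G_p evaluated at the line's endpoint
    paths = paths[rng.choice(N, size=N, p=G / G.sum())]   # selection acts on whole lines
    step = rng.choice([-1, 1], size=(N, 1))
    paths = np.hstack([paths, paths[:, -1:] + step])  # mutation extends every line
occupation = np.mean(np.abs(paths) <= L)              # a path functional under Q_n
print(occupation)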

59 ... and the last particle measure
$$\mathbb{Q}_n^N(d(x_0, \ldots, x_n)) := \eta_n^N(dx_n)\, M_{n, \eta_{n-1}^N}(x_n, dx_{n-1}) \cdots M_{1, \eta_0^N}(x_1, dx_0)$$
with the random particle matrices
$$M_{n+1, \eta_n^N}(x_{n+1}, dx_n) \;\propto\; \eta_n^N(dx_n)\, H_{n+1}(x_n, x_{n+1})$$
Example: normalized additive functionals
$$f_n(x_0, \ldots, x_n) = \frac{1}{n+1} \sum_{0 \le p \le n} f_p(x_p) \;\Rightarrow\; \mathbb{Q}_n^N(f_n) = \frac{1}{n+1} \sum_{0 \le p \le n} \underbrace{\eta_n^N\, M_{n, \eta_{n-1}^N} \cdots M_{p+1, \eta_p^N}(f_p)}_{\text{matrix operations}}$$
