An introduction to Sequential Monte Carlo


An introduction to Sequential Monte Carlo. Thang Bui and Jes Frellsen, Department of Engineering, University of Cambridge. Research and Communication Club, 6 February.

Sequential Monte Carlo (SMC) methods. SMC methods were initially designed for online inference in dynamical systems: observations arrive sequentially, and one needs to update the posterior distribution of the hidden variables as they do. Analytically tractable solutions are available for linear Gaussian models, but not for more complex models. Examples: target tracking, time series analysis, computer vision. SMC is increasingly used to perform inference for a wide range of applications beyond dynamical systems, e.g. graphical models, population genetics, and more. SMC methods are scalable, easy to implement and flexible!

Outline: motivation and references; introduction; MCMC and importance sampling; sequential importance sampling and resampling; example: a dynamical system; proposal distributions; smoothing; MAP estimation; parameter estimation; a generic SMC algorithm; particle MCMC; particle learning for GP regression; summary.

Bibliography I. Andrieu, C., Doucet, A., & Holenstein, R. (2010). Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3). Cappé, O., Godsill, S. J., & Moulines, E. (2007). An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5). Doucet, A., De Freitas, N., & Gordon, N. (2001). An introduction to sequential Monte Carlo methods. Pages 3-14 of: Sequential Monte Carlo Methods in Practice. Springer. Gramacy, R. B., & Polson, N. G. (2011). Particle learning of Gaussian process models for sequential design and optimization. Journal of Computational and Graphical Statistics, 20(1). Holenstein, R. (2009). Particle Markov Chain Monte Carlo. Ph.D. thesis, The University of British Columbia.

Bibliography II. Holenstein, R., & Doucet, A. (2007). Particle Markov Chain Monte Carlo. Workshop: New Directions in Monte Carlo Methods, Fleurance, France. Wilkinson, R. (2014). GPs huh? What are they good for? Gaussian Process Winter School, Sheffield.

State Space Models (Doucet et al., 2001; Cappé et al., 2007). The Markovian, nonlinear, non-Gaussian state space model has an unobserved signal, or states, $\{x_t : t \in \mathbb{N}\}$, and observations, or outputs, $\{y_t : t \in \mathbb{N}^+\}$ or $\{y_t : t \in \mathbb{N}\}$. The model is specified by a prior $P(x_0)$, by $P(x_t \mid x_{t-1})$ for $t \geq 1$ (transition probability), and by $P(y_t \mid x_t)$ for $t \geq 0$ (emission/observation probability). [Figure: graphical model with hidden chain $x_{t-1} \to x_t \to x_{t+1}$ and observations $y_{t-1}, y_t, y_{t+1}$.]

Inference for State Space Models (Doucet et al., 2001; Cappé et al., 2007). We are interested in the posterior distributions of the unobserved signal: $P(x_{0:t} \mid y_{0:t})$, the fixed-interval smoothing distribution; $P(x_{t-L} \mid y_{0:t})$, the fixed-lag smoothing distribution; and $P(x_t \mid y_{0:t})$, the filtering distribution; as well as in expectations under these posteriors, e.g.
$\mathbb{E}_{P(x_{0:t} \mid y_{0:t})}[h_t] = \int h_t(x_{0:t})\, P(x_{0:t} \mid y_{0:t})\, dx_{0:t}$
for some function $h_t : \mathcal{X}^{t+1} \to \mathbb{R}^{n_{h_t}}$.

Couldn't we use MCMC? (Doucet et al., 2001; Holenstein, 2009). Sure: generate $N$ samples from $P(x_{0:t} \mid y_{0:t})$ using Metropolis-Hastings (MH). Sample a candidate $x'_{0:t}$ from a proposal distribution, $x'_{0:t} \sim q(x'_{0:t} \mid x_{0:t})$, and accept the candidate $x'_{0:t}$ with probability
$\alpha(x_{0:t}, x'_{0:t}) = \min\left[1, \frac{P(x'_{0:t} \mid y_{0:t})\, q(x_{0:t} \mid x'_{0:t})}{P(x_{0:t} \mid y_{0:t})\, q(x'_{0:t} \mid x_{0:t})}\right]$.
This yields a set of samples $\{x_{0:t}^{(i)}\}_{i=1}^N$, from which we calculate empirical estimates of the posterior and of expectations:
$\hat P(x_{0:t} \mid y_{0:t}) = \frac{1}{N} \sum_i \delta_{x_{0:t}^{(i)}}(x_{0:t})$, $\quad \hat{\mathbb{E}}_{\hat P(x_{0:t} \mid y_{0:t})}[h_t] = \int h_t(x_{0:t})\, \hat P(x_{0:t} \mid y_{0:t})\, dx_{0:t} = \frac{1}{N} \sum_{i=1}^N h_t(x_{0:t}^{(i)})$.
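As a concrete illustration of the MH recipe above, here is a minimal random-walk Metropolis sketch in Python; the Gaussian target, the step size, and all names are invented for illustration, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_target, x0, n_samples, step=1.0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    probability min(1, target(x') / target(x)) -- the symmetric-proposal
    special case of the acceptance ratio on the slide."""
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        xp = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(xp) - log_target(x):
            x = xp
        samples[i] = x
    return samples

# Toy target: N(3, 1); after burn-in the empirical mean approaches 3.
samples = metropolis_hastings(lambda x: -0.5 * (x - 3.0) ** 2, 0.0, 20000)
print(samples[5000:].mean())
```

With a symmetric proposal the $q$ terms cancel, which is why only the target ratio appears in the accept/reject test.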

Couldn't we use MCMC? (Doucet et al., 2001; Holenstein, 2009). Unbiased estimates, and in most cases nice convergence: $\hat{\mathbb{E}}_{\hat P(x_{0:t} \mid y_{0:t})}[h_t] \xrightarrow{a.s.} \mathbb{E}_{P(x_{0:t} \mid y_{0:t})}[h_t]$ as $N \to \infty$. Problem solved!? Not quite: it can be hard to design a good proposal $q$, and single-site updates $q(x'_j \mid x_{0:t})$ can lead to slow mixing. What happens if we get a new data point $y_{t+1}$? We cannot (directly) reuse the samples $\{x_{0:t}^{(i)}\}$; we have to run a new MCMC simulation targeting $P(x_{0:t+1} \mid y_{0:t+1})$. MCMC is not well suited to recursive estimation problems.

What about importance sampling? (Doucet et al., 2001). Generate $N$ i.i.d. samples $\{x_{0:t}^{(i)}\}_{i=1}^N$ from an arbitrary importance sampling distribution $\pi(x_{0:t} \mid y_{0:t})$. The empirical estimates are
$\hat P(x_{0:t} \mid y_{0:t}) = \sum_{i=1}^N \tilde w_t^{(i)}\, \delta_{x_{0:t}^{(i)}}(x_{0:t})$, $\quad \hat{\mathbb{E}}_{\hat P(x_{0:t} \mid y_{0:t})}[h_t] = \sum_{i=1}^N \tilde w_t^{(i)}\, h_t(x_{0:t}^{(i)})$,
where the importance weights are $w(x_{0:t}) = \frac{P(x_{0:t} \mid y_{0:t})}{\pi(x_{0:t} \mid y_{0:t})}$ and $\tilde w_t^{(i)} = \frac{w(x_{0:t}^{(i)})}{\sum_j w(x_{0:t}^{(j)})}$.
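A minimal sketch of self-normalised importance sampling, with the estimator above applied to $h(x) = x$; the Gaussian target and the wider Gaussian importance distribution are toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: N(1, 0.5^2); importance distribution: N(0, 2^2).
N = 100_000
x = rng.normal(0.0, 2.0, size=N)

# Log importance weights log P(x) - log pi(x); the 2*pi terms cancel.
log_w = (-0.5 * ((x - 1.0) / 0.5) ** 2 - np.log(0.5)) \
      - (-0.5 * (x / 2.0) ** 2 - np.log(2.0))

w = np.exp(log_w - log_w.max())
w /= w.sum()                  # normalised weights tilde-w^{(i)}
est_mean = np.sum(w * x)      # E_P[x] ~= sum_i tilde-w^{(i)} x^{(i)}
print(est_mean)
```

Subtracting `log_w.max()` before exponentiating is a standard trick to avoid overflow; it cancels in the self-normalisation.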

What about importance sampling? (Doucet et al., 2001). $\hat{\mathbb{E}}_{\hat P(x_{0:t} \mid y_{0:t})}[h_t]$ is biased, but converges to $\mathbb{E}_{P(x_{0:t} \mid y_{0:t})}[h_t]$. Problem solved!? Designing a good importance distribution can be hard, and this is still not adequate for recursive estimation: when seeing new data $y_{t+1}$, we cannot reuse the samples and weights $\{x_{0:t}^{(i)}, \tilde w_t^{(i)}\}_{i=1}^N$ from time $t$ to sample from $P(x_{0:t+1} \mid y_{0:t+1})$.

Sequential importance sampling (Doucet et al., 2001; Cappé et al., 2007). Assume that the importance distribution can be factorised as
$\pi(x_{0:t} \mid y_{0:t}) = \underbrace{\pi(x_{0:t-1} \mid y_{0:t-1})}_{\text{importance distribution at time } t-1}\; \underbrace{\pi(x_t \mid x_{0:t-1}, y_{0:t})}_{\text{extension to time } t} = \pi(x_0 \mid y_0) \prod_{k=1}^t \pi(x_k \mid x_{0:k-1}, y_{0:k})$.
The importance weights can then be evaluated recursively:
$w_t^{(i)} \propto w_{t-1}^{(i)} \frac{P(y_t \mid x_t^{(i)})\, P(x_t^{(i)} \mid x_{t-1}^{(i)})}{\pi(x_t^{(i)} \mid x_{0:t-1}^{(i)}, y_{0:t})}$.  (1)
Given past i.i.d. trajectories $\{x_{0:t-1}^{(i)} : i = 1, \ldots, N\}$ we can 1. simulate $x_t^{(i)} \sim \pi(x_t \mid x_{0:t-1}^{(i)}, y_{0:t})$, and 2. update the weight $w_t^{(i)}$ for $x_{0:t}^{(i)}$ based on $w_{t-1}^{(i)}$ using eq. (1). Note that the extended trajectories $\{x_{0:t}^{(i)}\}$ remain i.i.d.

Sequential importance sampling. [Figure, adapted from (Doucet et al., 2001): the weighted particle set $\{x_{t-1}^{(i)}, w_{t-1}^{(i)}\}$ is propagated to $\{x_t^{(i)}, w_t^{(i)}\}$.] Problem solved!?

Sequential importance sampling. [Figure, adapted from (Doucet et al., 2001): propagating $\{x_{t-1}^{(i)}, w_{t-1}^{(i)}\}$ to $\{x_t^{(i)}, w_t^{(i)}\}$ and on to $\{x_{t+1}^{(i)}, w_{t+1}^{(i)}\}$.] The weights become highly degenerate after a few steps.

Sequential importance resampling (Doucet et al., 2001; Cappé et al., 2007). The key idea for eliminating weight degeneracy: 1. eliminate particles with low importance weights, and 2. multiply particles with high importance weights. We therefore introduce a resampling step at each time step (or occasionally): draw $N$ new trajectories $\{x_{0:t}^{(i)} : i = 1, \ldots, N\}$ from
$\hat P(x_{0:t} \mid y_{0:t}) = \sum_{i=1}^N w_t^{(i)}\, \delta_{x_{0:t}^{(i)}}(x_{0:t})$,
and give the new samples uniform weights $\tilde w_t^{(i)} = 1/N$. The new empirical (unweighted) distribution at time step $t$ is
$\tilde P(x_{0:t} \mid y_{0:t}) = \frac{1}{N} \sum_{i=1}^N N_t^{(i)}\, \delta_{x_{0:t}^{(i)}}(x_{0:t})$,
where $N_t^{(i)}$, the number of copies of $x_{0:t}^{(i)}$, is sampled from a multinomial with parameters $w_t^{(i)}$.
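The resampling step on its own can be sketched as follows; this is a hypothetical minimal implementation of multinomial resampling, with names chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def multinomial_resample(particles, weights):
    """Draw N indices, picking index i with probability w^{(i)}. The
    number of copies N^{(i)} of each particle is then multinomial, and
    the resampled set gets uniform weights 1/N."""
    N = len(particles)
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

particles = np.array([-1.0, 0.0, 2.0, 5.0])
weights = np.array([0.70, 0.20, 0.05, 0.05])
new_particles, new_weights = multinomial_resample(particles, weights)
# High-weight particles tend to be multiplied; low-weight ones vanish.
```

Lower-variance schemes (systematic or residual resampling) are common in practice; multinomial resampling is shown here because it matches the slide's description.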

Sequential importance resampling (Doucet et al., 2001; Cappé et al., 2007).
1: for i = 1, ..., N do
2:   Sample $x_0^{(i)} \sim \pi(x_0 \mid y_0)$
3:   $w_0^{(i)} \propto \frac{P(y_0 \mid x_0^{(i)})\, P(x_0^{(i)})}{\pi(x_0^{(i)} \mid y_0)}$
4: for t = 1, ..., T do
   (Importance sampling step)
5:   for i = 1, ..., N do
6:     Sample $x_t^{(i)} \sim \pi(x_t \mid x_{0:t-1}^{(i)}, y_{0:t})$
7:     $x_{0:t}^{(i)} \leftarrow (x_{0:t-1}^{(i)}, x_t^{(i)})$
8:     $w_t^{(i)} \propto w_{t-1}^{(i)} \frac{P(y_t \mid x_t^{(i)})\, P(x_t^{(i)} \mid x_{t-1}^{(i)})}{\pi(x_t^{(i)} \mid x_{0:t-1}^{(i)}, y_{0:t})}$
   (Resampling/selection step)
9:   Sample N particles $\{x_{0:t}^{(i)}\}$ from $\{\tilde x_{0:t}^{(i)}\}$ according to $\{w_t^{(i)}\}$, for i = 1, ..., N
10:  $w_t^{(i)} \leftarrow 1/N$
11: return $\{x_{0:t}^{(i)}\}_{i=1}^N$

Sequential importance resampling. [Figure, modified from (Doucet et al., 2001), with $i = 1, \ldots, N$ and $N = 10$: resampling maps the weighted set $\{\tilde x_{t-1}^{(i)}, w_{t-1}^{(i)}\}$ to the unweighted set $\{x_{t-1}^{(i)}, 1/N\}$, and the importance sampling step then produces the next weighted set $\{\tilde x_t^{(i)}, w_t^{(i)}\}$.]

Example: a dynamical system.
$x_t = \frac{1}{2} x_{t-1} + 25 \frac{x_{t-1}}{1 + x_{t-1}^2} + 8 \cos(1.2 t) + u_t$, $\quad y_t = \frac{x_t^2}{20} + v_t$,
where $x_0 \sim \mathcal{N}(0, \sigma_0^2)$, $u_t \sim \mathcal{N}(0, \sigma_u^2)$, $v_t \sim \mathcal{N}(0, \sigma_v^2)$, with $\sigma_0^2 = \sigma_u^2 = 10$ and $\sigma_v^2 = 1$. [Figure: simulated state sequence $x_t$ and observation sequence $y_t$ against time $t$.]
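Putting the SIR algorithm and this model together, a bootstrap particle filter for the example system might look like the following sketch. The code and variable names are illustrative; the observation noise s.d. is taken to be 1, and since $y_t$ depends on $x_t^2$, the filtering posterior can be bimodal in the sign of $x_t$, so the weighted mean should be read with care.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate the benchmark model (sigma_0^2 = sigma_u^2 = 10, sigma_v = 1).
T, N = 50, 1000
sig_u, sig_v = np.sqrt(10.0), 1.0
x = np.empty(T)
x[0] = rng.normal(0.0, np.sqrt(10.0))
for t in range(1, T):
    x[t] = (0.5 * x[t - 1] + 25.0 * x[t - 1] / (1.0 + x[t - 1] ** 2)
            + 8.0 * np.cos(1.2 * t) + rng.normal(0.0, sig_u))
y = x ** 2 / 20.0 + rng.normal(0.0, sig_v, size=T)

# Bootstrap filter: propose from P(x_t | x_{t-1}), weight by
# P(y_t | x_t), and resample (multinomially) at every step.
particles = rng.normal(0.0, np.sqrt(10.0), size=N)
filt_mean = np.empty(T)
for t in range(T):
    if t > 0:
        particles = (0.5 * particles
                     + 25.0 * particles / (1.0 + particles ** 2)
                     + 8.0 * np.cos(1.2 * t)
                     + rng.normal(0.0, sig_u, size=N))
    log_w = -0.5 * ((y[t] - particles ** 2 / 20.0) / sig_v) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    filt_mean[t] = np.sum(w * particles)   # weighted filtering mean
    particles = particles[rng.choice(N, size=N, p=w)]
```

Note that the transition density never has to be evaluated here: proposing from it makes the weight update collapse to the likelihood alone, as the next slide explains.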

Posterior distribution of states, $p(x_t \mid y_{1:t})$. [Three figures: the filtering posterior $p(x_t \mid y_{1:t})$ plotted against $t$ and $x_t$ as data accumulates.]

Proposal. The bootstrap filter uses $\pi_t(x_t \mid x_{t-1}, y_t) = P(x_t \mid x_{t-1})$, which leads to a simple form for the importance weight update: $w_t^{(i)} \propto w_{t-1}^{(i)}\, P(y_t \mid x_t^{(i)})$. The weight update depends on the newly proposed state and the observation; because the proposal ignores the observation, this can lead to poor performance. The optimal proposal is $\pi_t(x_t \mid x_{t-1}, y_t) = P(x_t \mid x_{t-1}, y_t)$, and therefore
$w_t^{(i)} \propto w_{t-1}^{(i)}\, P(y_t \mid x_{t-1}) = w_{t-1}^{(i)} \int P(y_t \mid x_t)\, P(x_t \mid x_{t-1})\, dx_t$.
Here the weight update depends on the previous state and the observation. The integral is analytically intractable in general, so one needs to resort to approximation techniques.

Smoothing. For a complex SSM, the posterior distribution of the state variables can be smoothed by including future observations. The joint smoothing distribution can be factorised as
$P(x_{0:T} \mid y_{0:T}) = P(x_T \mid y_{0:T}) \prod_{t=0}^{T-1} P(x_t \mid x_{t+1}, y_{0:t})$, with $P(x_t \mid x_{t+1}, y_{0:t}) \propto \underbrace{P(x_t \mid y_{0:t})}_{\text{filtering distribution}}\; \underbrace{P(x_{t+1} \mid x_t)}_{\text{likelihood of future state}}$.
Hence the weight update: $\hat w_t^{(i)}(x_{t+1}) \propto w_t^{(i)}\, P(x_{t+1} \mid x_t^{(i)})$.

Particle smoother. Algorithm: run a forward simulation to obtain particle paths $\{x_t^{(i)}, w_t^{(i)}\}_{i=1,\ldots,N;\; t=1,\ldots,T}$, and draw $\tilde x_T$ from $\hat P(x_T \mid y_{0:T})$. Then repeat, backwards in time: adjust and normalise the filtering weights, $\hat w_t^{(i)} \propto w_t^{(i)}\, P(\tilde x_{t+1} \mid x_t^{(i)})$, and draw a random sample $\tilde x_t$ from the resulting distribution. The sequence $(\tilde x_0, \tilde x_1, \ldots, \tilde x_T)$ is a random draw from the approximate distribution $\hat P(x_{0:T} \mid y_{0:T})$. Cost: $O(NT)$.

MAP estimation. The maximum a posteriori (MAP) estimate is
$\arg\max_{x_{0:T}} P(x_{0:T} \mid y_{0:T}) = \arg\max_{x_{0:T}} P(x_0) \prod_{t=1}^T P(x_t \mid x_{t-1}) \prod_{t=0}^T P(y_t \mid x_t)$.
Question: can we just choose the particle trajectory with the largest weight? Answer: NO! Instead, restrict each $x_t$ to the discrete particle grid $\{x_t^{(i)}\}_{i=1}^N$; the approximation can then be interpreted as a hidden Markov model with $N$ states, and the MAP estimate can be found using the Viterbi algorithm: keep track of the probability of the most likely path so far, and of the last state index of the most likely path so far.
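The Viterbi recursion over a particle grid can be sketched as below. The toy random-walk model, the crude particle grid, and the helper names (`log_trans`, `log_emit`, assumed to be elementwise log-densities) are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def particle_viterbi(X, y, log_trans, log_emit):
    """MAP path over a particle grid X (T x N): treat the particles at
    each time as the states of an N-state HMM and run Viterbi."""
    T, N = X.shape
    logp = log_emit(y[0], X[0])               # best-path log-prob so far
    back = np.zeros((T, N), dtype=int)        # back-pointers
    for t in range(1, T):
        # scores[j, i]: best path ending in particle j at t-1, extended
        # to particle i at t via the transition density.
        scores = logp[:, None] + log_trans(X[t - 1][:, None], X[t][None, :])
        back[t] = np.argmax(scores, axis=0)
        logp = scores[back[t], np.arange(N)] + log_emit(y[t], X[t])
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(logp))
    for t in range(T - 1, 0, -1):             # trace back-pointers
        path[t - 1] = back[t, path[t]]
    return np.array([X[t, path[t]] for t in range(T)])

# Toy model: x_t = x_{t-1} + N(0,1), y_t = x_t + N(0, 0.5^2).
T, N = 20, 100
xs = np.cumsum(rng.normal(size=T))
ys = xs + 0.5 * rng.normal(size=T)
grid = ys[:, None] + rng.normal(size=(T, N))  # crude particle grid
map_path = particle_viterbi(
    grid, ys,
    lambda a, b: -0.5 * (b - a) ** 2,
    lambda yv, xv: -0.5 * ((yv - xv) / 0.5) ** 2,
)
```

The $N \times N$ score matrix per step gives the $O(TN^2)$ cost typical of Viterbi on a particle grid.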

Viterbi algorithm for MAP estimation. [Figure: observations and particle grid over time.] Path probability update: $\alpha_t^{(i)} = \max_j\, \alpha_{t-1}^{(j)}\, P(x_t^{(i)} \mid x_{t-1}^{(j)})\, P(y_t \mid x_t^{(i)})$.

Parameter estimation. Consider SSMs with $P(x_t \mid x_{t-1}, \theta)$ and $P(y_t \mid x_t, \theta)$, where $\theta$ is a static parameter vector that one wishes to estimate. The marginal likelihood is $l(y_{0:T} \mid \theta) = \int p(y_{0:T}, x_{0:T} \mid \theta)\, dx_{0:T}$. Optimise $l(y_{0:T} \mid \theta)$ using the EM algorithm. E-step:
$\hat\tau(\theta, \theta_k) = \sum_{i=1}^N w_T^{(i, \theta_k)} \sum_{t=0}^{T-1} s_{t,\theta}(x_t^{(i,\theta_k)}, x_{t+1}^{(i,\theta_k)})$,
where $s_{t,\theta}(x_t, x_{t+1}) = \log P(x_{t+1} \mid x_t, \theta) + \log P(y_{t+1} \mid x_{t+1}, \theta)$. M-step: optimise $\hat\tau(\theta, \theta_k)$ over $\theta$ to update $\theta_k$.

A generic SMC algorithm (Holenstein, 2009). We want to sample from a target distribution $\pi(x)$, $x \in \mathcal{X}^p$. Assume we have: a sequence of bridging distributions of increasing dimension, $\{\pi_n(x_n)\}_{n=1}^p = \{\pi_1(x_1), \pi_2(x_1, x_2), \ldots, \pi_p(x_1, \ldots, x_p)\}$, where $\pi_n(x_n) = Z_n^{-1} \gamma_n(x_n)$ and $\pi_p(x_p) = \pi(x)$; a sequence of importance densities on $\mathcal{X}$, $M_1(x_1)$ (for the initial sample) and $\{M_n(x_n \mid x_{n-1})\}_{n=2}^p$ (for extending $x_{n-1} \in \mathcal{X}^{n-1}$ by sampling $x_n \in \mathcal{X}$); and a resampling distribution $r(A_n \mid w_n)$, with $A_n \in \{1, \ldots, N\}^N$ and $w_n \in [0, 1]^N$, where $A_{n-1}^i$ is the parent at time $n-1$ of particle $X_n^i$.

A generic SMC algorithm. [Two slides showing the algorithm listing, from (Holenstein, 2009).]

A generic SMC algorithm (Holenstein, 2009). Again we can calculate empirical estimates for the target and for the normalisation constant ($\pi(x) = Z^{-1}\gamma(x)$):
$\hat\pi^N(x) = \sum_{i=1}^N w_p^{(i)}\, \delta_{X_p^{(i)}}(x)$, $\quad \hat Z^N = \prod_{n=1}^p \left( \frac{1}{N} \sum_{i=1}^N w_n(X_n^{(i)}) \right)$.
Convergence can be shown under weak assumptions: $\hat\pi^N(x) \xrightarrow{a.s.} \pi(x)$ and $\hat Z^N \xrightarrow{a.s.} Z$ as $N \to \infty$. The SIR algorithm for state space models is a special case of this generic SMC algorithm.

Motivation for Particle MCMC (Holenstein & Doucet, 2007; Holenstein, 2009). Let's return to the problem of sampling from a target $\pi(x)$, where $x = (x_1, x_2, \ldots, x_n) \in \mathcal{X}^n$, using MCMC. A single-site proposal $q(x'_j \mid x)$ is easy to design but often leads to slow mixing. It would be more efficient if we could update larger blocks, but such proposals are harder to construct. Idea: we could use SMC as a proposal distribution!

Particle Metropolis-Hastings sampler. [Algorithm listing, from (Holenstein, 2009).]

Particle Metropolis-Hastings (PMH) sampler (Holenstein, 2009). This is a standard independent MH algorithm ($q(x' \mid x) = q(x')$) with target $\pi^N$ and proposal $q^N$ defined on an extended space, such that $\frac{\pi^N(\cdot)}{q^N(\cdot)} = \frac{\hat Z^N}{Z}$, which leads to the acceptance ratio
$\alpha = \min\left[1, \frac{\hat Z^N_\star}{\hat Z^N(i-1)}\right]$,
the ratio of the proposed estimate $\hat Z^N_\star$ to the estimate $\hat Z^N(i-1)$ retained from the previous iteration. Note that $\alpha \to 1$ as $N \to \infty$, since $\hat Z^N \to Z$ as $N \to \infty$.
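A compact sketch of this idea: a bootstrap filter supplies $\hat Z^N$, which is plugged into an MH accept/reject step for a static parameter. The toy AR(1) model, the flat prior on $\log\sigma$, and all tuning constants are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_Z_hat(y, sigma, N=200):
    """Bootstrap-filter estimate of log p(y | sigma) for the toy model
    x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + N(0, sigma^2)."""
    x = rng.normal(size=N)
    lz = 0.0
    for yt in y:
        x = 0.9 * x + rng.normal(size=N)
        logw = -0.5 * ((yt - x) / sigma) ** 2 - np.log(sigma)
        m = logw.max()
        w = np.exp(logw - m)
        lz += m + np.log(w.mean())            # log of mean raw weight
        x = x[rng.choice(N, size=N, p=w / w.sum())]
    return lz

# Simulate data with true sigma = 1.
T = 60
xs = np.empty(T)
x_prev = rng.normal()
for t in range(T):
    x_prev = 0.9 * x_prev + rng.normal()
    xs[t] = x_prev
ys = xs + rng.normal(size=T)

# PMH: random walk on log(sigma); accept with the usual MH ratio, with
# Z-hat in place of the intractable likelihood (flat prior on log sigma,
# so the symmetric log-scale proposal cancels).
sigma, lz = 1.5, log_Z_hat(ys, 1.5)
chain = []
for _ in range(200):
    prop = sigma * np.exp(0.2 * rng.normal())
    lz_prop = log_Z_hat(ys, prop)
    if np.log(rng.uniform()) < lz_prop - lz:
        sigma, lz = prop, lz_prop
    chain.append(sigma)
```

Keeping the current $\hat Z^N$ fixed between iterations (rather than re-estimating it) is what makes this a valid pseudo-marginal scheme.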

Particle Gibbs (PG) sampler (Holenstein, 2009). Assume that we are interested in sampling from $\pi(\theta, x) = \frac{\gamma(\theta, x)}{Z}$, and that sampling from $\pi(\theta \mid x)$ is easy while sampling from $\pi(x \mid \theta)$ is hard. The PG sampler uses SMC to sample from $\pi(x \mid \theta)$: 1. sample $\theta(i) \sim \pi(\theta \mid x(i-1))$; 2. sample $x(i) \sim \hat\pi^N(x \mid \theta(i))$. If sampling from $\pi(\theta \mid x)$ is not easy, we can use an MH update for $\theta$.

Parameter estimation in a state space model using PG (Andrieu et al., 2010). (Re)consider the nonlinear state space model
$x_t = \frac{1}{2} x_{t-1} + 25 \frac{x_{t-1}}{1 + x_{t-1}^2} + 8 \cos(1.2 t) + V_t$, $\quad y_t = \frac{x_t^2}{20} + W_t$,
where $x_0 \sim \mathcal{N}(0, \sigma_0^2)$, $V_t \sim \mathcal{N}(0, \sigma_V^2)$ and $W_t \sim \mathcal{N}(0, \sigma_W^2)$. Assume that $\theta = (\sigma_V^2, \sigma_W^2)$ is unknown. Simulate $y_{1:T}$ for $T = 500$, $\sigma_0^2 = 5$, $\sigma_V^2 = 10$ and $\sigma_W^2 = 1$, then sample from $P(\theta, x_{1:T} \mid y_{1:T})$ using (a) the particle Gibbs sampler, with importance distribution $f_\theta(x_n \mid x_{n-1})$, and (b) a one-at-a-time MH sampler, with proposal $f_\theta(x_n \mid x_{n-1})$. The algorithms were run for 50,000 iterations (burn-in of 10,000), with vague inverse-gamma priors on $\theta = (\sigma_V^2, \sigma_W^2)$.

Parameter estimation in a state space model using PG (Andrieu et al., 2010). [Figure: posterior results for the one-at-a-time Metropolis-Hastings sampler versus the Particle Gibbs sampler.]

Particle learning for GP regression: motivation. Train a GP on data $\mathcal{D}_{1:n} = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ and make predictions either by plugging in a point estimate of the hyperparameters,
$P(y^\star \mid \hat\theta, \mathcal{D}, x^\star)$,  (2)
or by integrating them out,
$P(y^\star \mid \mathcal{D}, x^\star) = \int P(y^\star \mid \theta, \mathcal{D}, x^\star)\, P(\theta \mid \mathcal{D})\, d\theta$.  (3)
Estimate the model hyperparameters $\theta_n$ by maximum likelihood for (2), or use sampling to find the posterior distribution for (3). Either way, we must find the inverse of the covariance matrix, $K_n^{-1}$, at a computational cost of $O(n^3)$.

Sequential update. Given a new observation pair $(x_{n+1}, y_{n+1})$ that we want to add to the training set, we need to find $K_{n+1}^{-1}$ and re-estimate the hyperparameters $\theta_{n+1}$. A naive implementation costs $O(n^3)$; we need an efficient approach that exploits the sequential nature of the data.

Particle learning for GP regression (Gramacy & Polson, 2011; Wilkinson, 2014). The sufficient information for each particle is $S_n^{(i)} = \{K_n^{(i)}, \theta_n^{(i)}\}$. A two-step update is based on
$P(S_{n+1} \mid \mathcal{D}_{1:n+1}) = \int P(S_{n+1} \mid S_n, \mathcal{D}_{n+1})\, P(S_n \mid \mathcal{D}_{1:n+1})\, dS_n \propto \int P(S_{n+1} \mid S_n, \mathcal{D}_{n+1})\, P(\mathcal{D}_{n+1} \mid S_n)\, P(S_n \mid \mathcal{D}_{1:n})\, dS_n$.
1. Resample the indices $\{i\}_{i=1}^N$ with replacement to obtain new indices $\{\zeta(i)\}_{i=1}^N$, according to weights $w_i \propto P(\mathcal{D}_{n+1} \mid S_n^{(i)}) = P(y_{n+1} \mid x_{n+1}, \mathcal{D}_n, \theta_n^{(i)})$.
2. Propagate the sufficient information from $S_n$ to $S_{n+1}$: $S_{n+1}^{(i)} \sim P(S_{n+1} \mid S_n^{\zeta(i)}, \mathcal{D}_{1:n+1})$.

Propagation. The parameters $\theta_n$ are static and can be deterministically copied from $S_n^{\zeta(i)}$ to $S_{n+1}^{(i)}$. For the covariance matrix, a rank-one update builds $K_{n+1}^{-1}$ from $K_n^{-1}$. With
$K_{n+1} = \begin{bmatrix} K_n & k(x_{n+1}) \\ k^\top(x_{n+1}) & k(x_{n+1}, x_{n+1}) \end{bmatrix}$, then $K_{n+1}^{-1} = \begin{bmatrix} K_n^{-1} + g_n(x_{n+1})\, g_n^\top(x_{n+1}) / \mu_n(x_{n+1}) & g_n(x_{n+1}) \\ g_n^\top(x_{n+1}) & \mu_n(x_{n+1}) \end{bmatrix}$,
where $g_n(x) = -\mu_n(x)\, K_n^{-1} k(x)$ and $\mu_n(x) = [k(x, x) - k^\top(x) K_n^{-1} k(x)]^{-1}$. Use a Cholesky update for stability. Cost: $O(n^2)$.
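The block-inverse update above can be checked numerically, as in the sketch below; the squared-exponential kernel, the test inputs, and the small jitter are illustrative assumptions.

```python
import numpy as np

def cov(a, b):
    """A hypothetical squared-exponential kernel, for illustration."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

X = np.linspace(-2.0, 2.0, 5)
x_new = np.array([0.3])

K_n = cov(X, X) + 1e-6 * np.eye(5)        # jitter for conditioning
K_inv = np.linalg.inv(K_n)
k = cov(X, x_new)[:, 0]                   # k(x_{n+1})
kappa = cov(x_new, x_new)[0, 0] + 1e-6    # k(x_{n+1}, x_{n+1})

# Rank-one (block-inverse) update: O(n^2) instead of O(n^3).
mu = 1.0 / (kappa - k @ K_inv @ k)        # mu_n = [k(x,x) - k^T K^-1 k]^-1
g = -mu * (K_inv @ k)                     # g_n = -mu_n K^-1 k
K_inv_new = np.block([
    [K_inv + np.outer(g, g) / mu, g[:, None]],
    [g[None, :], np.array([[mu]])],
])

# Agrees with inverting the bordered matrix directly.
K_big = np.block([[K_n, k[:, None]], [k[None, :], np.array([[kappa]])]])
print(np.allclose(K_inv_new, np.linalg.inv(K_big)))  # True
```

In a production implementation one would update a Cholesky factor rather than the explicit inverse, as the slide notes, for numerical stability.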

Illustrative result 1: prediction. [Figure, from (Gramacy & Polson, 2011).]

Illustrative result 2: particle locations. [Figure, from (Gramacy & Polson, 2011).]

SMC for learning GP models. Advantages: fast for sequential learning problems. Disadvantages: particle degeneracy/depletion; one remedy is to use an MCMC sampler to rejuvenate the particles after every $m$ sequential updates. The predictive distribution given the model hyperparameters needs to be analytically tractable (see the resample step). A similar treatment for classification can be found in (Gramacy & Polson, 2011).

Summary. SMC is a powerful method for sampling from distributions with a sequential nature: online learning in state space models, sampling from high-dimensional distributions, and serving as a proposal distribution within MCMC. We presented two concrete examples of using SMC: particle Gibbs for sampling from the posterior distribution of the parameters of a nonlinear state space model, and particle learning of the hyperparameters in a GP model. Thank you for your attention!


More information

The Kalman Filter ImPr Talk

The Kalman Filter ImPr Talk The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman

More information

Sequential Monte Carlo methods for system identification

Sequential Monte Carlo methods for system identification Technical report arxiv:1503.06058v3 [stat.co] 10 Mar 2016 Sequential Monte Carlo methods for system identification Thomas B. Schön, Fredrik Lindsten, Johan Dahlin, Johan Wågberg, Christian A. Naesseth,

More information

Likelihood NIPS July 30, Gaussian Process Regression with Student-t. Likelihood. Jarno Vanhatalo, Pasi Jylanki and Aki Vehtari NIPS-2009

Likelihood NIPS July 30, Gaussian Process Regression with Student-t. Likelihood. Jarno Vanhatalo, Pasi Jylanki and Aki Vehtari NIPS-2009 with with July 30, 2010 with 1 2 3 Representation Representation for Distribution Inference for the Augmented Model 4 Approximate Laplacian Approximation Introduction to Laplacian Approximation Laplacian

More information

28 : Approximate Inference - Distributed MCMC

28 : Approximate Inference - Distributed MCMC 10-708: Probabilistic Graphical Models, Spring 2015 28 : Approximate Inference - Distributed MCMC Lecturer: Avinava Dubey Scribes: Hakim Sidahmed, Aman Gupta 1 Introduction For many interesting problems,

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

19 : Slice Sampling and HMC

19 : Slice Sampling and HMC 10-708: Probabilistic Graphical Models 10-708, Spring 2018 19 : Slice Sampling and HMC Lecturer: Kayhan Batmanghelich Scribes: Boxiang Lyu 1 MCMC (Auxiliary Variables Methods) In inference, we are often

More information

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17 MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making

More information

Monte Carlo methods for sampling-based Stochastic Optimization

Monte Carlo methods for sampling-based Stochastic Optimization Monte Carlo methods for sampling-based Stochastic Optimization Gersende FORT LTCI CNRS & Telecom ParisTech Paris, France Joint works with B. Jourdain, T. Lelièvre, G. Stoltz from ENPC and E. Kuhn from

More information

Monte Carlo Methods. Geoff Gordon February 9, 2006

Monte Carlo Methods. Geoff Gordon February 9, 2006 Monte Carlo Methods Geoff Gordon ggordon@cs.cmu.edu February 9, 2006 Numerical integration problem 5 4 3 f(x,y) 2 1 1 0 0.5 0 X 0.5 1 1 0.8 0.6 0.4 Y 0.2 0 0.2 0.4 0.6 0.8 1 x X f(x)dx Used for: function

More information

Part 1: Expectation Propagation

Part 1: Expectation Propagation Chalmers Machine Learning Summer School Approximate message passing and biomedicine Part 1: Expectation Propagation Tom Heskes Machine Learning Group, Institute for Computing and Information Sciences Radboud

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

An intro to particle methods for parameter inference in state-space models

An intro to particle methods for parameter inference in state-space models An intro to particle methods for parameter inference in state-space models PhD course FMS020F NAMS002 Statistical inference for partially observed stochastic processes, Lund University http://goo.gl/sx8vu9

More information

Sensor Fusion: Particle Filter

Sensor Fusion: Particle Filter Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,

More information

Inferring biological dynamics Iterated filtering (IF)

Inferring biological dynamics Iterated filtering (IF) Inferring biological dynamics 101 3. Iterated filtering (IF) IF originated in 2006 [6]. For plug-and-play likelihood-based inference on POMP models, there are not many alternatives. Directly estimating

More information

Package RcppSMC. March 18, 2018

Package RcppSMC. March 18, 2018 Type Package Title Rcpp Bindings for Sequential Monte Carlo Version 0.2.1 Date 2018-03-18 Package RcppSMC March 18, 2018 Author Dirk Eddelbuettel, Adam M. Johansen and Leah F. South Maintainer Dirk Eddelbuettel

More information

The Hierarchical Particle Filter

The Hierarchical Particle Filter and Arnaud Doucet http://go.warwick.ac.uk/amjohansen/talks MCMSki V Lenzerheide 7th January 2016 Context & Outline Filtering in State-Space Models: SIR Particle Filters [GSS93] Block-Sampling Particle

More information

DAG models and Markov Chain Monte Carlo methods a short overview

DAG models and Markov Chain Monte Carlo methods a short overview DAG models and Markov Chain Monte Carlo methods a short overview Søren Højsgaard Institute of Genetics and Biotechnology University of Aarhus August 18, 2008 Printed: August 18, 2008 File: DAGMC-Lecture.tex

More information

STA 414/2104: Machine Learning

STA 414/2104: Machine Learning STA 414/2104: Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistics! rsalakhu@cs.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 9 Sequential Data So far

More information

Machine Learning for Data Science (CS4786) Lecture 24

Machine Learning for Data Science (CS4786) Lecture 24 Machine Learning for Data Science (CS4786) Lecture 24 Graphical Models: Approximate Inference Course Webpage : http://www.cs.cornell.edu/courses/cs4786/2016sp/ BELIEF PROPAGATION OR MESSAGE PASSING Each

More information

PARTICLE FILTERS WITH INDEPENDENT RESAMPLING

PARTICLE FILTERS WITH INDEPENDENT RESAMPLING PARTICLE FILTERS WITH INDEPENDENT RESAMPLING Roland Lamberti 1, Yohan Petetin 1, François Septier, François Desbouvries 1 (1) Samovar, Telecom Sudparis, CNRS, Université Paris-Saclay, 9 rue Charles Fourier,

More information

Robert Collins CSE586, PSU Intro to Sampling Methods

Robert Collins CSE586, PSU Intro to Sampling Methods Robert Collins Intro to Sampling Methods CSE586 Computer Vision II Penn State Univ Robert Collins A Brief Overview of Sampling Monte Carlo Integration Sampling and Expected Values Inverse Transform Sampling

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

Lecture 6: Graphical Models: Learning

Lecture 6: Graphical Models: Learning Lecture 6: Graphical Models: Learning 4F13: Machine Learning Zoubin Ghahramani and Carl Edward Rasmussen Department of Engineering, University of Cambridge February 3rd, 2010 Ghahramani & Rasmussen (CUED)

More information

Adversarial Sequential Monte Carlo

Adversarial Sequential Monte Carlo Adversarial Sequential Monte Carlo Kira Kempinska Department of Security and Crime Science University College London London, WC1E 6BT kira.kowalska.13@ucl.ac.uk John Shawe-Taylor Department of Computer

More information

Sequential Monte Carlo Methods (for DSGE Models)

Sequential Monte Carlo Methods (for DSGE Models) Sequential Monte Carlo Methods (for DSGE Models) Frank Schorfheide University of Pennsylvania, PIER, CEPR, and NBER October 23, 2017 Some References These lectures use material from our joint work: Tempered

More information

Linear Dynamical Systems

Linear Dynamical Systems Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations

More information

Bayesian Inference and MCMC

Bayesian Inference and MCMC Bayesian Inference and MCMC Aryan Arbabi Partly based on MCMC slides from CSC412 Fall 2018 1 / 18 Bayesian Inference - Motivation Consider we have a data set D = {x 1,..., x n }. E.g each x i can be the

More information

Sampling Methods (11/30/04)

Sampling Methods (11/30/04) CS281A/Stat241A: Statistical Learning Theory Sampling Methods (11/30/04) Lecturer: Michael I. Jordan Scribe: Jaspal S. Sandhu 1 Gibbs Sampling Figure 1: Undirected and directed graphs, respectively, with

More information

Machine Learning Summer School

Machine Learning Summer School Machine Learning Summer School Lecture 3: Learning parameters and structure Zoubin Ghahramani zoubin@eng.cam.ac.uk http://learning.eng.cam.ac.uk/zoubin/ Department of Engineering University of Cambridge,

More information

Monte Carlo in Bayesian Statistics

Monte Carlo in Bayesian Statistics Monte Carlo in Bayesian Statistics Matthew Thomas SAMBa - University of Bath m.l.thomas@bath.ac.uk December 4, 2014 Matthew Thomas (SAMBa) Monte Carlo in Bayesian Statistics December 4, 2014 1 / 16 Overview

More information

Contrastive Divergence

Contrastive Divergence Contrastive Divergence Training Products of Experts by Minimizing CD Hinton, 2002 Helmut Puhr Institute for Theoretical Computer Science TU Graz June 9, 2010 Contents 1 Theory 2 Argument 3 Contrastive

More information

Sequential Bayesian Inference for Dynamic State Space. Model Parameters

Sequential Bayesian Inference for Dynamic State Space. Model Parameters Sequential Bayesian Inference for Dynamic State Space Model Parameters Arnab Bhattacharya and Simon Wilson 1. INTRODUCTION Dynamic state-space models [24], consisting of a latent Markov process X 0,X 1,...

More information

LTCC: Advanced Computational Methods in Statistics

LTCC: Advanced Computational Methods in Statistics LTCC: Advanced Computational Methods in Statistics Advanced Particle Methods & Parameter estimation for HMMs N. Kantas Notes at http://wwwf.imperial.ac.uk/~nkantas/notes4ltcc.pdf Slides at http://wwwf.imperial.ac.uk/~nkantas/slides4.pdf

More information

Integrated Non-Factorized Variational Inference

Integrated Non-Factorized Variational Inference Integrated Non-Factorized Variational Inference Shaobo Han, Xuejun Liao and Lawrence Carin Duke University February 27, 2014 S. Han et al. Integrated Non-Factorized Variational Inference February 27, 2014

More information

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling 10-708: Probabilistic Graphical Models 10-708, Spring 2014 27 : Distributed Monte Carlo Markov Chain Lecturer: Eric P. Xing Scribes: Pengtao Xie, Khoa Luu In this scribe, we are going to review the Parallel

More information

Particle Filters. Outline

Particle Filters. Outline Particle Filters M. Sami Fadali Professor of EE University of Nevada Outline Monte Carlo integration. Particle filter. Importance sampling. Degeneracy Resampling Example. 1 2 Monte Carlo Integration Numerical

More information

Graphical Models and Kernel Methods

Graphical Models and Kernel Methods Graphical Models and Kernel Methods Jerry Zhu Department of Computer Sciences University of Wisconsin Madison, USA MLSS June 17, 2014 1 / 123 Outline Graphical Models Probabilistic Inference Directed vs.

More information

MCMC algorithms for fitting Bayesian models

MCMC algorithms for fitting Bayesian models MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models

More information

Lecture: Gaussian Process Regression. STAT 6474 Instructor: Hongxiao Zhu

Lecture: Gaussian Process Regression. STAT 6474 Instructor: Hongxiao Zhu Lecture: Gaussian Process Regression STAT 6474 Instructor: Hongxiao Zhu Motivation Reference: Marc Deisenroth s tutorial on Robot Learning. 2 Fast Learning for Autonomous Robots with Gaussian Processes

More information

Adaptive Monte Carlo methods

Adaptive Monte Carlo methods Adaptive Monte Carlo methods Jean-Michel Marin Projet Select, INRIA Futurs, Université Paris-Sud joint with Randal Douc (École Polytechnique), Arnaud Guillin (Université de Marseille) and Christian Robert

More information

AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET. Machine Learning T. Schön. (Chapter 11) AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET

AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET. Machine Learning T. Schön. (Chapter 11) AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET About the Eam I/II) ), Lecture 7 MCMC and Sampling Methods Thomas Schön Division of Automatic Control Linköping University Linköping, Sweden. Email: schon@isy.liu.se, Phone: 3-373, www.control.isy.liu.se/~schon/

More information

17 : Markov Chain Monte Carlo

17 : Markov Chain Monte Carlo 10-708: Probabilistic Graphical Models, Spring 2015 17 : Markov Chain Monte Carlo Lecturer: Eric P. Xing Scribes: Heran Lin, Bin Deng, Yun Huang 1 Review of Monte Carlo Methods 1.1 Overview Monte Carlo

More information

Approximate Bayesian Computation: a simulation based approach to inference

Approximate Bayesian Computation: a simulation based approach to inference Approximate Bayesian Computation: a simulation based approach to inference Richard Wilkinson Simon Tavaré 2 Department of Probability and Statistics University of Sheffield 2 Department of Applied Mathematics

More information

Linear and nonlinear filtering for state space models. Master course notes. UdelaR 2010

Linear and nonlinear filtering for state space models. Master course notes. UdelaR 2010 Linear and nonlinear filtering for state space models Master course notes UdelaR 2010 Thierry CHONAVEL thierry.chonavel@telecom-bretagne.eu December 2010 Contents 1 Introduction 3 2 Kalman filter and non

More information

Lecture 6: Bayesian Inference in SDE Models

Lecture 6: Bayesian Inference in SDE Models Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs

More information

Variational Scoring of Graphical Model Structures

Variational Scoring of Graphical Model Structures Variational Scoring of Graphical Model Structures Matthew J. Beal Work with Zoubin Ghahramani & Carl Rasmussen, Toronto. 15th September 2003 Overview Bayesian model selection Approximations using Variational

More information

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis Summarizing a posterior Given the data and prior the posterior is determined Summarizing the posterior gives parameter estimates, intervals, and hypothesis tests Most of these computations are integrals

More information