Slice Sampling Approaches for Stochastic Optimization


1 Slice Sampling Approaches for Stochastic Optimization
John Birge, Nick Polson
Chicago Booth
INFORMS Philadelphia, October 2015
John Birge, Nick Polson (Chicago Booth) Split Sampling 05/01 1 / 28

2 Overview
Expectations, Normalization, and Optimization
Nested Sampling, Serial Tempering, Bridge Sampling, Wang-Landau, Cross-Entropy, Conditional Probability Estimator, ...
Stochastic Optimization Models:
Single-stage (decision and outcome)
Two-stage (decision, outcome observation, recourse action)
Multi-stage (repeated decision-observation sequences)

3 Broad View: Importance Sampling
Importance sampling is a variance reduction technique. Given an importance blanket g(x):
E(L(x)) ≈ (1/N) Σ_{i=1}^N L(x^(i)) π(x^(i)) / g(x^(i)), where x^(i) ~ g(x).
Posterior: π_L(x) = L(x)π(x)/Z, where Z = ∫_X L(x)π(x) dx.
This reduces the problem to finding a posterior normalization constant.
Harmonic mean: Ẑ^{-1} = (1/N) Σ_{i=1}^N L^{-1}(x^(i)).
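
The importance-sampling identity above can be sketched in a few lines (a toy choice of L, prior π, and blanket g, not from the talk); here Z = ∫ L(x)π(x) dx = 1/√2 exactly, so the estimate can be checked. (The harmonic mean estimator also applies here but is notoriously unstable, so the sketch uses only the blanket-based estimator.)

```python
import math
import random

random.seed(1)

def L(x):
    # unnormalized likelihood, toy choice: exp(-x^2 / 2)
    return math.exp(-x * x / 2.0)

def normal_pdf(x, s):
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Importance blanket g = N(0, 1.2^2); prior pi = N(0, 1).
# Then Z = integral of L * pi = 1/sqrt(2) ~ 0.7071 in closed form.
N, s_g = 200_000, 1.2
draws = [random.gauss(0.0, s_g) for _ in range(N)]
Z_hat = sum(L(x) * normal_pdf(x, 1.0) / normal_pdf(x, s_g) for x in draws) / N
print(round(Z_hat, 4))
```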

4 One-Stage Problems
The set-up for the 1-stage problem is as follows:
Find x* such that E_ω[G(ω, x*)] = max_x E_ω[G(ω, x)],  (1)
where ω ~ p(ω) for G(·,·) given. Let Ḡ(x) = E_ω[G(ω, x)].
Assume G > 0 and ∫ E_ω[G(ω, x)] dμ(x) < ∞ for some measure μ(dx) to avoid singularities in x.
Let π_J(ω^J, x) ∝ Π_{j=1}^J G(ω_j, x) p(ω_j).
The marginal π_J(x) ∝ E_ω[G(ω, x)]^J has its mode at x* and collapses on x* as J → ∞, as required (see Pincus (1968)).
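
The Pincus concentration argument can be checked numerically on a toy Ḡ (assumed here purely for illustration): raising Ḡ(x) to the power J and renormalizing drives essentially all probability mass to the maximizer.

```python
import math

# Toy objective (not from the slides): G_bar(x) = 1 + exp(-(x-1)^2) > 0,
# maximized at x* = 1. The Pincus marginal pi_J(x) is proportional to
# G_bar(x)^J and should concentrate around x* as J grows.
xs = [i / 100.0 for i in range(-300, 301)]          # grid on [-3, 3]
G = [1.0 + math.exp(-(x - 1.0) ** 2) for x in xs]

def pi_J(J):
    w = [g ** J for g in G]
    s = sum(w)
    return [v / s for v in w]

mode, mass = {}, {}
for J in (1, 200):
    p = pi_J(J)
    mode[J] = xs[max(range(len(p)), key=p.__getitem__)]
    # probability mass within 0.25 of the maximizer x* = 1
    mass[J] = sum(pj for xj, pj in zip(xs, p) if abs(xj - 1.0) < 0.25)
    print(J, mode[J], round(mass[J], 3))
```

For J = 1 only a small fraction of the mass sits near x* = 1; for J = 200 nearly all of it does.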

5 Finding Conditionals
The MCMC conditionals can then be found as
π_J(ω_j | x) ∝ G(ω_j, x) p(ω_j)  and  π_J(x | ω^J) ∝ Π_{j=1}^J G(ω_j, x) p(ω_j),
which can be sampled through Gibbs sampling or via Metropolis-Hastings. The result is then a Markov chain with samples {ω^{J,(h)}, x^{(h)}}_{h=1}^H.
Key properties:
Can simulate the ω_j's from a density that depends on the current state in the chain, x^{(h)}
Convergence is achieved with {ω^{J,(H)}, x^{(H)}} → π_J(ω^J, x) and x^{(H)} → x* in mode (and in expectation as J → ∞)
No optimization steps or step-lengths
No dependence on functional properties (e.g., convexity)

6 Example: Portfolio Optimization
Setup: G(ω, x) = K − e^{−γ(r(ω)^T x + r_f)}, where r(ω) is an n-vector of the excess returns of the risky assets relative to a risk-free return r_f, and K is chosen sufficiently large that the returns can be restricted so that G(ω, x) ≥ 0.
Analytical solution: for r normally distributed with mean μ and covariance Σ, x* = Σ^{-1} μ / γ.
MCMC implementation: introduce an auxiliary variable w so that
G(ω, x) = ∫_{−log(K)/γ}^{r(ω)^T x + r_f} γ e^{−γw} dw,
i.e., so that w is conditionally a truncated exponential random variable. We then have
π_J(ω^J, x, w) ∝ Π_{j=1}^J γ e^{−γ w_j} 1{−log(K)/γ ≤ w_j ≤ r_j^T x + r_f} e^{−(r_j − μ)^T Σ^{-1} (r_j − μ)/2}.

7 MCMC Iterations for Portfolio Example
The MCMC iterations then iteratively draw w_j, returns r_j, and solutions x^{(g)} as follows:
w_j | x, r_j ~ truncated exponential with rate γ on [−log(K)/γ, r_j^T x + r_f],
r_j | x, w_j ~ N(μ, Σ) truncated to r_j^T x + r_f ≥ w_j,
x_k | x_{−k}, r, w ~ U([max_{j: r_jk > 0} (w_j − r_f − r_{j,−k}^T x_{−k})/r_jk, min_{j: r_jk < 0} (w_j − r_f − r_{j,−k}^T x_{−k})/r_jk]),
where k = 1, ..., n and subscript −k denotes the vector of components other than k.
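
The first conditional is a routine truncated-exponential draw. A minimal inverse-CDF sketch (generic rate and bounds, not the portfolio values):

```python
import math
import random

random.seed(7)

def trunc_exp(gamma, a, b):
    """Draw w with density proportional to gamma * exp(-gamma * w)
    restricted to [a, b], by inverting the truncated CDF."""
    u = random.random()
    # CDF on [a, b]: (e^{-g a} - e^{-g w}) / (e^{-g a} - e^{-g b})
    return -math.log(
        math.exp(-gamma * a) - u * (math.exp(-gamma * a) - math.exp(-gamma * b))
    ) / gamma

# Sanity check with gamma = 2 on [0.5, 1.5]; the exact mean is
# a + 1/gamma - (b - a) / (exp(gamma * (b - a)) - 1) ~ 0.8435.
draws = [trunc_exp(2.0, 0.5, 1.5) for _ in range(50_000)]
mean_w = sum(draws) / len(draws)
print(round(mean_w, 4))
```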

8 Portfolio Results
Histogram of x_1 values for n = 2, μ = 0.05, r_f = 0, Σ = ( ), K = 2, and J = 100 for the last 1000 iterations in a 5000-iteration sequence. Note: x* = (0.833, 0.833)^T. The modal intervals for x_1 and x_2 both include the corresponding x* value.
[Figure: histogram of x_1 draws]

9 Two-Stage Stochastic Programming Problem
Setup: find x to solve
min_{x ∈ S} c^T x + E_ω[Q(x, ω)],
where S = {x : Ax = b, x ≥ 0} and
Q(x, ω) = min_{y(ω) ≥ 0} {q^T(ω) y : W y = h(ω) − T(ω) x};
y(ω) is known as the recourse decision given the realization of ω.
Conversion to a one-stage problem: note that by duality, Q(x, ω) = G(h(ω) − T(ω)x), where
G(h − Tx) = max_ξ {(h − Tx)^T ξ : W^T ξ ≤ q}.
The two-stage problem is then equivalent to
max_{x ∈ S} E_ω[ max_{ξ: W^T ξ ≤ q(ω)} {(h(ω) − T(ω)x)^T ξ} + c^T x ].

10 MCMC Setup for Two-Stage Model
Consider the LP problem G(z) = max_π {π^T z : W^T π ≤ q}.
Following Pincus, we consider the annealed distribution
p_κ(π | z, q, W) = exp(κ π^T z) I(W^T π ≤ q) / Z_κ,
where Z_κ = ∫_{W^T π ≤ q} exp(κ π^T z) dπ is an appropriate normalization constant.
As κ → ∞, this distribution tends to a Dirac measure on the solution π* of the LP. Simulation from p_κ(π), however, requires a method for dealing with truncated multivariate exponential distributions.
Simulate a Markov chain to obtain draws π^{(h)} for h = 1, ..., H and estimate the value function as a Monte Carlo Rao-Blackwellised average
Ĝ(z) = (1/H) Σ_{h=1}^H π^{(h)T} z,
an average of piecewise linear functions.
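
A one-dimensional toy version (an assumed LP, max{π z : 0 ≤ π ≤ 1} with z = 1, not from the slides) shows the Rao-Blackwellised average approaching the LP value G(z) = 1 as κ grows:

```python
import math
import random

random.seed(3)

# Toy LP: G(z) = max {pi * z : 0 <= pi <= 1} with z = 1, so G(z) = 1 at pi* = 1.
# Annealed density p_kappa(pi) proportional to exp(kappa * pi * z) on [0, 1].
z, c = 1.0, 1.0

def draw(kappa, n):
    # inverse-CDF draws from the (increasing) truncated exponential on [0, c]
    lam = kappa * z
    M = math.exp(lam * c) - 1.0
    return [math.log(1.0 + random.random() * M) / lam for _ in range(n)]

G_hat = {}
for kappa in (1, 5, 20):
    pis = draw(kappa, 20_000)
    G_hat[kappa] = sum(p * z for p in pis) / len(pis)  # Rao-Blackwellised average
    print(kappa, round(G_hat[kappa], 3))
```

The closed-form mean under p_κ is c − 1/(κz) + c/(e^{κzc} − 1), so κ = 20 gives 0.95, already close to the LP value.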

11 Simulating Truncated Multivariate Exponentials
The easiest way to draw from a truncated multivariate exponential is to use one-at-a-time Gibbs sampling. This can be slow to converge when there is an "icicle" in the constraint region.
First write x = (x^+, x^−) and then transform the system so that all coordinates are positive, x > 0. Also assume that the constraint is of the form Ax ≤ b, where x is K × 1 and A is n × K with typically n ≥ K.
The constraint Ax ≤ b can be written as a series of conditional constraints (Gelfand, Smith and Lee, 1993):
Σ_{j=1}^K a_ij x_j ≤ b_i implies a_ik x_k ≤ b_i − Σ_{j≠k} a_ij x_j;
therefore, x_k ≤ min_i (b_i − Σ_{j≠k} a_ij x_j)/a_ik over the i with a_ik > 0.
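
A minimal sketch of the one-at-a-time Gibbs scheme for a bivariate truncated exponential (toy rates q1, q2 and the single constraint x1 + x2 ≤ 1, chosen only for illustration):

```python
import math
import random

random.seed(11)

def trunc_exp_dec(rate, lo, hi):
    # inverse-CDF draw from density proportional to exp(-rate * x) on [lo, hi]
    u = random.random()
    return -math.log(
        math.exp(-rate * lo) - u * (math.exp(-rate * lo) - math.exp(-rate * hi))
    ) / rate

q1, q2 = 1.0, 3.0          # hypothetical rates
x1, x2 = 0.2, 0.2          # feasible start: x >= 0, x1 + x2 <= 1
samples = []
for it in range(20_000):
    # each conditional bound comes from rewriting x1 + x2 <= 1
    x1 = trunc_exp_dec(q1, 0.0, 1.0 - x2)
    x2 = trunc_exp_dec(q2, 0.0, 1.0 - x1)
    if it >= 2_000:        # burn-in
        samples.append((x1, x2))

feasible = all(a >= 0 and b >= 0 and a + b <= 1 + 1e-12 for a, b in samples)
m1 = sum(a for a, _ in samples) / len(samples)
m2 = sum(b for _, b in samples) / len(samples)
print(feasible, round(m1, 3), round(m2, 3))
```

Since q2 > q1, the x2 coordinate is pushed harder toward zero, so its posterior mean is smaller.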

12 Factorization
In some cases, we can use the factorisation p(x_1, x_2) = p(x_1 | x_2) p(x_2). For example, consider the joint distribution
exp(−q_1 x_1 − q_2 x_2) I(a_1 x_1 + a_2 x_2 ≤ b).
We can use the transformation z = a_1 x_1 and y − z = a_2 x_2; then the conditional distribution is p(y | z) ~ TExp_{(z,b)}(q_2 a_2^{−1}). The marginal distribution p(z) is then determined by
p(z) ∝ e^{−q_1 a_1^{−1} z} (1 − e^{−q_2 a_2^{−1}(b − z)}), where 0 < z < b.
This is a density tilted by a cdf, which can be sampled using the following result.
Theorem (Vaduva). Generate X ~ f and Y with cdf F until Y ≤ X. This gives samples from p(x) = c f(x) F(x), where c governs the efficiency of the algorithm.
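
Vaduva's acceptance scheme is easy to sketch. Assuming toy choices f = Exp(1) and F the cdf of Exp(2) (not the densities from the slide), the accepted draws have density 1.5(e^{−x} − e^{−3x}), whose mean 4/3 can be checked:

```python
import random

random.seed(5)

# Vaduva-style acceptance: X ~ f, Y with cdf F, keep X whenever Y <= X.
# Accepted draws have density c * f(x) * F(x). With f = Exp(1) and
# F(x) = 1 - exp(-2x), the target is p(x) = 1.5 * (exp(-x) - exp(-3x)),
# whose mean is 4/3 and whose acceptance probability is 1/c = 2/3.
accepted = []
n_prop = 80_000
for _ in range(n_prop):
    x = random.expovariate(1.0)
    y = random.expovariate(2.0)
    if y <= x:
        accepted.append(x)

rate = len(accepted) / n_prop
mean_x = sum(accepted) / len(accepted)
print(round(rate, 3), round(mean_x, 3))
```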

13 Conditioning the Multivariate Exponential to the Simplex
The truncated multivariate exponential on the simplex S^{k−1} is defined by
f(x) = d(λ)^{−1} exp(−Σ_{j=1}^{k−1} λ_j x_j), x = (x_1, ..., x_{k−1}) ∈ R^{k−1}; x_j ≥ 0, Σ_{j=1}^{k−1} x_j ≤ 1.
Kent et al. (2004) calculate the normalisation constant for unequal and equal λ's. They write
p_K(λ) = d(λ) Π_{j=1}^{k−1} λ_j,
where p_K(λ) = P(Σ_{i=1}^{k−1} X_i < 1) is the probability of the independent exponentials lying in the simplex. If the λ's are all equal we get
p_K(λ) = 1 − e^{−λ} Σ_{j=0}^{k−2} λ^j / j!.

14 Equal λ Setup
If the λ_j ≡ λ, then this distribution can be easily sampled. Consider the transformation x_j = y r_j, where y = Σ_{j=1}^{k−1} x_j is the size of x. After a change of variables,
f(y, r) ∝ y^{k−2} exp(−λy), r ∈ S^{k−2}, 0 ≤ y ≤ 1.
We need to sample from the simplex and from a truncated gamma, as follows.
It is straightforward to sample from the simplex: take uniforms U_1, ..., U_{k−2} on [0, 1] and let s_1, ..., s_{k−1} be the successive gaps in the order statistics, s_1 = U_(1), s_j = U_(j) − U_(j−1) for j = 2, ..., k−2, and s_{k−1} = 1 − U_(k−2).
Sampling from the truncated gamma distribution can be done with the ratio-of-uniforms method. Simulating Y ~ Γ(k−1, λ) on [0, 1] is equivalent to X = λY ~ Γ(k−1, 1) on [0, λ]. Let W = X − min(k−1, λ) be a location shift, and let f(w) be its pdf. Then we can define u^+, v^−, v^+, generate (U, V) uniformly in the rectangle D = [0, u^+] × [v^−, v^+], and accept W = V/U if U < f^{1/2}(V/U).
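
The order-statistics-gaps construction for the simplex can be sketched directly (the number of components, m = 4 here, is an arbitrary choice for illustration):

```python
import random

random.seed(13)

def simplex_gaps(m):
    """Uniform draw on {s >= 0, sum(s) = 1}: sort m-1 uniforms and take
    the successive gaps between 0, the order statistics, and 1."""
    u = sorted(random.random() for _ in range(m - 1))
    pts = [0.0] + u + [1.0]
    return [b - a for a, b in zip(pts, pts[1:])]

draws = [simplex_gaps(4) for _ in range(20_000)]
on_simplex = all(min(s) >= 0 and abs(sum(s) - 1.0) < 1e-12 for s in draws)
means = [sum(s[i] for s in draws) / len(draws) for i in range(4)]
print(on_simplex, [round(m, 3) for m in means])
```

The gaps are exchangeable, so each coordinate has mean 1/m, which the Monte Carlo averages confirm.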

15 Distribution of Y = Σ_{j=1}^{k−1} X_j
The Laplace transform of Y = Σ_{j=1}^{k−1} X_j, where X_j | λ_j ~ Exp(λ_j), is given by
M(φ) = E(exp(−φY)) = Π_{j=1}^{k−1} 1/(1 + φ/λ_j).
If the λ_j are all unequal, this can be given by a partial fraction expansion. Let
a_j = Π_{i=1, i≠j}^{k−1} λ_i/(λ_i − λ_j);
then
M(φ) = Σ_{j=1}^{k−1} [λ_j/(λ_j + φ)] Π_{i=1, i≠j}^{k−1} λ_i/(λ_i − λ_j),
where the λ's have been ordered. This can be inverted to give the cdf of Y as
F(y) = Σ_{j=1}^{k−1} a_j (1 − e^{−λ_j y}).
Setting y = 1 gives the probability of lying in the simplex,
p_T(λ) = Σ_{j=1}^{k−1} (1 − e^{−λ_j}) Π_{i=1, i≠j}^{k−1} λ_i/(λ_i − λ_j).
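
The partial-fraction cdf is easy to verify against Monte Carlo (toy rates λ = (1, 2, 4), chosen distinct so the expansion applies):

```python
import math
import random

random.seed(17)

lam = [1.0, 2.0, 4.0]   # distinct rates, illustrative choice

# partial-fraction coefficients a_j = prod_{i != j} lam_i / (lam_i - lam_j)
a = [math.prod(li / (li - lj) for li in lam if li != lj) for lj in lam]

def F(y):
    # cdf of Y = sum of independent Exp(lam_j) variables
    return sum(aj * (1.0 - math.exp(-lj * y)) for aj, lj in zip(a, lam))

p_T = F(1.0)            # probability of the exponentials lying in the simplex
n = 100_000
hits = sum(sum(random.expovariate(l) for l in lam) < 1.0 for _ in range(n))
print(round(p_T, 4), round(hits / n, 4))
```

The coefficients a_j must sum to 1 (since F(∞) = 1), which serves as an internal check.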

16 Exponential Slice
Suppose that we wish to find max_{(x,y) ∈ X} f(x, y). Then we can define the annealed distribution
π_κ(x, y) = exp(κ f(x, y)) / Z_κ, where (x, y) ∈ X.
If we introduce an auxiliary exponential slice variable u, the joint distribution is
π(u, x, y) ∝ κ exp(κu) I(−∞ < u < f(x, y)) I((x, y) ∈ X).
Then we have a truncated exponential for the slice variable,
π(u | x, y) = κ exp(κu) / exp(κ f(x, y)) on −∞ < u < f(x, y),
and a uniform for (x, y), truncated via the bound u < f(x, y). Marginalising out u also gives us the appropriate marginal
π_κ(x, y) = exp(κ f(x, y)) I((x, y) ∈ X) / Z_κ,
assuming this is a well-defined density.
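
A one-dimensional sketch of the exponential slice sampler (toy objective f(x) = −(x − 2)² on X = [0, 4], chosen so the slice {x : f(x) > u} is an explicit interval):

```python
import math
import random

random.seed(19)

f = lambda x: -(x - 2.0) ** 2    # toy objective on X = [0, 4], max at x* = 2
kappa = 50.0
x = 0.5
xs = []
for it in range(20_000):
    # u | x: exponential slice, density proportional to exp(kappa*u) on u < f(x);
    # inverse CDF gives u = f(x) + log(U)/kappa
    u = f(x) + math.log(1.0 - random.random()) / kappa
    # x | u: uniform on {x in X : f(x) > u} = (2 - sqrt(-u), 2 + sqrt(-u)) cap [0, 4]
    r = math.sqrt(-u)
    x = random.uniform(max(0.0, 2.0 - r), min(4.0, 2.0 + r))
    if it >= 1_000:              # burn-in
        xs.append(x)

mean_x = sum(xs) / len(xs)
print(round(mean_x, 3))
```

The stationary marginal is π_κ(x) ∝ e^{−κ(x−2)²}, roughly N(2, 1/(2κ)), so the draws pile up at the maximizer.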

17 MCMC for Two-Stage SP
For the 2-stage stochastic program, the LP recourse problem can be replaced by an expectation under the annealed distribution
p_κ(π | ω, x, q, W) = exp(κ π^T (h(ω) − T(ω)x)) I(W^T π ≤ q) / Z_κ(ω, x),
where Z_κ is an appropriate normalization constant.
We can then sample from this multivariate distribution in a number of ways. For an over-determined W, using one-at-a-time Gibbs leads to min/max constraints.

18 MCMC Sampling for Two-Stage Model
Overall, the MCMC method leads to the limit
E_{π | ω, x, q, W}(π^T (h(ω) − T(ω)x)) → π*^T (h(ω) − T(ω)x) as κ → ∞.
Therefore, we can instead use MCMC methods to solve
max_x E_ω( max_{π: W^T π ≤ q} π^T (h(ω) − T(ω)x) ) = max_x E_ω( E_{π | ω, x, q, W}( π^T (h(ω) − T(ω)x) ) ).

19 MCMC Sampling Continued
Define a joint distribution p(x, π, ω) ∝ (π^T q) p_κ(π | ω, x) p(ω), where p_κ is the annealed distribution. This has the appropriate marginal p(x) ∝ E_ω{E_{π | ω, x}(π^T q)}.
We can also introduce a further slice variable to deal with the objective function. Let that slice variable satisfy 0 < u < π^T q, and consider the augmented joint distribution
p(x, u, π, ω) ∝ I(0 < u < π^T q) p_κ(π | ω, x) p(ω),
with a uniform measure on x. The conditional for the π generation essentially adds another constraint, I(0 < u < π^T q), to I(W^T π ≤ q). We use one-at-a-time Gibbs to draw from the annealed conditional p(π | ω, x) as a truncated multivariate exponential on this set of inequality constraints.
The uncertainty variable samples from an exponentially tilted distribution rather than the prior p(ω) used in typical stochastic methods; the conditional posterior is
p(ω | π, x) ∝ p(ω) exp(κ π^T (h(ω) − T(ω)x)).

20 Conditional Sampling
Gelfand, Smith and Lee (1992) propose using Gibbs sampling to do this. Hence you only need the 1-dimensional conditionals p_φ(ξ_j | ξ_(−j), a, W, γ), where ξ_(−j) denotes the other variables, leaving ξ_j out. If the inequality constraints W^T ξ ≤ γ can be written ξ_j ≤ b_(−j)(W, ξ_(−j), γ), then the 1-dimensional conditionals are simply truncated exponentials,
ξ_j | ξ_(−j), a, W, γ with density φ a_j e^{φ a_j (ξ_j − b_(−j))} on ξ_j ≤ b_(−j)(W, ξ_(−j), γ),
whose normalization constant 1/(φ a_j) is known in closed form.
In the 2-stage case, a = a(ω, x) and γ = γ(ω, x), and you need to deal with
p_φ(ξ | ω, x) ∝ e^{φ a^T(ω, x) ξ} 1{W^T ξ ≤ γ(ω, x)},
now making explicit the conditioning on (ω, x). The optimum is now a decision function ξ*(ω, x). This can be determined by simulating from the joint
p_φ(ξ, ω, x) = [e^{φ a^T(ω, x) ξ} / C(ω, x)] 1{W^T ξ ≤ γ(ω, x)}.

21 Completing the Iteration
Here C(ω, x) = ∫_{W^T ξ ≤ γ(ω, x)} e^{φ a^T(ω, x) ξ} dξ, which is typically not available in closed form. Notice that MCMC uses the following conditionals:
p_φ(ξ_j | ξ_(−j), ω, x) and p_φ(ω, x | ξ).
The former does not require the normalisation constant and is just a truncated exponential with parameters depending on (ω, x). The latter,
p_φ(ω, x | ξ) ∝ [e^{φ a^T(ω, x) ξ} / C(ω, x)] χ{W^T ξ ≤ γ(ω, x)} μ(ω, x),
however, depends on C(ω, x). We can still avoid the direct evaluation of this by using the Metropolis algorithm: by Clifford-Hammersley we can compute the ratio for any two candidate draws (ω, x)^{(g)} and (ω, x)^{(g+1)} as
p_φ(ω^{(g)}, x^{(g)} | ξ) / p_φ(ω^{(g+1)}, x^{(g+1)} | ξ) = Π_{j=1}^k [ p_φ(ξ_j | ξ_(−j), ω^{(g)}, x^{(g)}) / p_φ(ξ_j | ξ_(−j), ω^{(g+1)}, x^{(g+1)}) ]
in terms of the normalization constants of the 1-dimensional conditionals, which are known in closed form.

22 Example: Farmer Problem from BL 2011
We consider a model based on the crop allocation problem in Birge and Louveaux (2011) for the farmer's decision of an initial storage decision x and then subsequent allocation decisions y_1 and y_2 that depend on the realization of the random outcome.
The overall problem is to maximize profit, −kx + E[143 y_1 + 60 y_2], subject to the storage constraint y_1 + y_2 ≤ x and investment constraints 110 y_1 + 30 y_2 ≤ ω_1 and 120 y_1 + 210 y_2 ≤ ω_2, with y_1, y_2 ≥ 0.
This yields conditionals:
p(y_1 | y_2) ∝ exp(143 κ y_1) I(0 ≤ y_1 ≤ min((ω_1 − 30 y_2)/110, (ω_2 − 210 y_2)/120, x − y_2)),
p(y_2 | y_1) ∝ exp(60 κ y_2) I(0 ≤ y_2 ≤ min((ω_1 − 110 y_1)/30, (ω_2 − 120 y_1)/210, x − y_1)).
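
These two conditionals can be iterated directly. A sketch with hypothetical scenario values (ω_1 = 3200, ω_2 = 15000, x = 30, κ = 1, chosen only for illustration) drifts toward the optimal basis of the recourse LP, whose value at these numbers is 143(28.75) + 60(1.25) = 4186.25:

```python
import math
import random

random.seed(23)

om1, om2, x, kap = 3200.0, 15000.0, 30.0, 1.0   # hypothetical scenario

def trunc_exp_inc(lam, b):
    """Draw y with density proportional to exp(lam * y) on [0, b],
    using an overflow-safe inverse CDF."""
    u = random.random()
    return b + math.log(u + (1.0 - u) * math.exp(-lam * b)) / lam

y1, y2 = 0.0, 0.0
trace = []
for it in range(20_000):
    # conditional upper bounds come from the three linear constraints
    b1 = min((om1 - 30 * y2) / 110, (om2 - 210 * y2) / 120, x - y2)
    y1 = trunc_exp_inc(143 * kap, b1)
    b2 = min((om1 - 110 * y1) / 30, (om2 - 120 * y1) / 210, x - y1)
    y2 = trunc_exp_inc(60 * kap, b2)
    if it >= 5_000:                              # burn-in
        trace.append((y1, y2))

m1 = sum(a for a, _ in trace) / len(trace)
m2 = sum(b for _, b in trace) / len(trace)
obj = 143 * m1 + 60 * m2
print(round(m1, 2), round(m2, 2), round(obj, 1))
```

Even at moderate κ the averaged objective sits close to the LP optimum, illustrating the "convergence in mode" behavior discussed in the conclusions.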

23 Normalizing Z's
We need to calculate the normalizing constant Z(z, ω). The constraint region (y_1, y_2) ∈ X(ω, x) is given by
y_1, y_2 > 0, y_1 + y_2 ≤ x, 110 y_1 + 30 y_2 ≤ ω_1, 120 y_1 + 210 y_2 ≤ ω_2.
We will also assume that 0 < x < 100, 3000 < ω_1 < 3500, and 10,000 < ω_2 < 20,000.
Given x and (ω_1, ω_2), we can find the optimal y_1 as we know
0 < y_1 < min(x − y_2, (ω_1 − 30 y_2)/110, (ω_2 − 210 y_2)/120).
For the y_2 variable, we have marginally 0 < y_2 < min(x, ω_1/30, ω_2/210). Given that x < 100 and ω_1 > 3000, this reduces to 0 < y_2 < min(x, ω_2/210).
Hence, we can substitute out y_1 and consider the nonlinear criterion function
E_ω max_{y_2} (60 y_2 + 143 min(x − y_2, (ω_1 − 30 y_2)/110, (ω_2 − 210 y_2)/120)).
This leads to a marginal annealed distribution, where we use the Pincus result for nonlinear functionals.

24 Solving the 2-Stage Problem
Steps:
J = 20 copies of the recourse variables
MCMC iterations over x, y_2^j, and auxiliary variables u^j
For different values of k, with ω_1 ~ U(3000, 3500) and ω_2 ~ U(10000, 20000)

25 Solution Values: Effect of κ
[Figure: sampled solution values y for κ = 0.01 (G = 10000), 0.1, 1, and 10]

26 Objective Values
[Figure: objective value traces f for k = 5, 15, 25, and other k values]

27 Histogram of Solutions
[Figure: histograms of x draws over the last half of iterations at κ = 0.001, for k = 5, 10, 20, and a fourth k value]

28 Conclusions
The Boltzmann distribution can yield a useful form for optimization with MCMC
Importance sampling leads to sequential draws of outcomes and actions
Adding a latent variable and using slice sampling can make draws efficient
Two (and multiple) stages can be incorporated
Fast convergence in mode to optima (while mean convergence is slower)


More information

AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET. Machine Learning T. Schön. (Chapter 11) AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET

AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET. Machine Learning T. Schön. (Chapter 11) AUTOMATIC CONTROL REGLERTEKNIK LINKÖPINGS UNIVERSITET About the Eam I/II) ), Lecture 7 MCMC and Sampling Methods Thomas Schön Division of Automatic Control Linköping University Linköping, Sweden. Email: schon@isy.liu.se, Phone: 3-373, www.control.isy.liu.se/~schon/

More information

Bayesian non-parametric model to longitudinally predict churn

Bayesian non-parametric model to longitudinally predict churn Bayesian non-parametric model to longitudinally predict churn Bruno Scarpa Università di Padova Conference of European Statistics Stakeholders Methodologists, Producers and Users of European Statistics

More information

DAG models and Markov Chain Monte Carlo methods a short overview

DAG models and Markov Chain Monte Carlo methods a short overview DAG models and Markov Chain Monte Carlo methods a short overview Søren Højsgaard Institute of Genetics and Biotechnology University of Aarhus August 18, 2008 Printed: August 18, 2008 File: DAGMC-Lecture.tex

More information

Review. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Review. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with

More information

Markov Chain Monte Carlo, Numerical Integration

Markov Chain Monte Carlo, Numerical Integration Markov Chain Monte Carlo, Numerical Integration (See Statistics) Trevor Gallen Fall 2015 1 / 1 Agenda Numerical Integration: MCMC methods Estimating Markov Chains Estimating latent variables 2 / 1 Numerical

More information

Integrated Non-Factorized Variational Inference

Integrated Non-Factorized Variational Inference Integrated Non-Factorized Variational Inference Shaobo Han, Xuejun Liao and Lawrence Carin Duke University February 27, 2014 S. Han et al. Integrated Non-Factorized Variational Inference February 27, 2014

More information

The Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision

The Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision The Particle Filter Non-parametric implementation of Bayes filter Represents the belief (posterior) random state samples. by a set of This representation is approximate. Can represent distributions that

More information

BAYESIAN MODEL CRITICISM

BAYESIAN MODEL CRITICISM Monte via Chib s BAYESIAN MODEL CRITICM Hedibert Freitas Lopes The University of Chicago Booth School of Business 5807 South Woodlawn Avenue, Chicago, IL 60637 http://faculty.chicagobooth.edu/hedibert.lopes

More information

On Markov chain Monte Carlo methods for tall data

On Markov chain Monte Carlo methods for tall data On Markov chain Monte Carlo methods for tall data Remi Bardenet, Arnaud Doucet, Chris Holmes Paper review by: David Carlson October 29, 2016 Introduction Many data sets in machine learning and computational

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo Markov Chain Monte Carlo Recall: To compute the expectation E ( h(y ) ) we use the approximation E(h(Y )) 1 n n h(y ) t=1 with Y (1),..., Y (n) h(y). Thus our aim is to sample Y (1),..., Y (n) from f(y).

More information

Approximate inference, Sampling & Variational inference Fall Cours 9 November 25

Approximate inference, Sampling & Variational inference Fall Cours 9 November 25 Approimate inference, Sampling & Variational inference Fall 2015 Cours 9 November 25 Enseignant: Guillaume Obozinski Scribe: Basile Clément, Nathan de Lara 9.1 Approimate inference with MCMC 9.1.1 Gibbs

More information

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis Summarizing a posterior Given the data and prior the posterior is determined Summarizing the posterior gives parameter estimates, intervals, and hypothesis tests Most of these computations are integrals

More information

Exploring Monte Carlo Methods

Exploring Monte Carlo Methods Exploring Monte Carlo Methods William L Dunn J. Kenneth Shultis AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO ELSEVIER Academic Press Is an imprint

More information

Monte Carlo Methods. Leon Gu CSD, CMU

Monte Carlo Methods. Leon Gu CSD, CMU Monte Carlo Methods Leon Gu CSD, CMU Approximate Inference EM: y-observed variables; x-hidden variables; θ-parameters; E-step: q(x) = p(x y, θ t 1 ) M-step: θ t = arg max E q(x) [log p(y, x θ)] θ Monte

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 7 Approximate

More information

Discrete solid-on-solid models

Discrete solid-on-solid models Discrete solid-on-solid models University of Alberta 2018 COSy, University of Manitoba - June 7 Discrete processes, stochastic PDEs, deterministic PDEs Table: Deterministic PDEs Heat-diffusion equation

More information

General Construction of Irreversible Kernel in Markov Chain Monte Carlo

General Construction of Irreversible Kernel in Markov Chain Monte Carlo General Construction of Irreversible Kernel in Markov Chain Monte Carlo Metropolis heat bath Suwa Todo Department of Applied Physics, The University of Tokyo Department of Physics, Boston University (from

More information

Doing Bayesian Integrals

Doing Bayesian Integrals ASTR509-13 Doing Bayesian Integrals The Reverend Thomas Bayes (c.1702 1761) Philosopher, theologian, mathematician Presbyterian (non-conformist) minister Tunbridge Wells, UK Elected FRS, perhaps due to

More information

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17 MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning MCMC and Non-Parametric Bayes Mark Schmidt University of British Columbia Winter 2016 Admin I went through project proposals: Some of you got a message on Piazza. No news is

More information

Graphical Models and Kernel Methods

Graphical Models and Kernel Methods Graphical Models and Kernel Methods Jerry Zhu Department of Computer Sciences University of Wisconsin Madison, USA MLSS June 17, 2014 1 / 123 Outline Graphical Models Probabilistic Inference Directed vs.

More information

17 : Markov Chain Monte Carlo

17 : Markov Chain Monte Carlo 10-708: Probabilistic Graphical Models, Spring 2015 17 : Markov Chain Monte Carlo Lecturer: Eric P. Xing Scribes: Heran Lin, Bin Deng, Yun Huang 1 Review of Monte Carlo Methods 1.1 Overview Monte Carlo

More information

Riemann Manifold Methods in Bayesian Statistics

Riemann Manifold Methods in Bayesian Statistics Ricardo Ehlers ehlers@icmc.usp.br Applied Maths and Stats University of São Paulo, Brazil Working Group in Statistical Learning University College Dublin September 2015 Bayesian inference is based on Bayes

More information

Statistical Machine Learning Lecture 8: Markov Chain Monte Carlo Sampling

Statistical Machine Learning Lecture 8: Markov Chain Monte Carlo Sampling 1 / 27 Statistical Machine Learning Lecture 8: Markov Chain Monte Carlo Sampling Melih Kandemir Özyeğin University, İstanbul, Turkey 2 / 27 Monte Carlo Integration The big question : Evaluate E p(z) [f(z)]

More information

Bayesian Modeling of Conditional Distributions

Bayesian Modeling of Conditional Distributions Bayesian Modeling of Conditional Distributions John Geweke University of Iowa Indiana University Department of Economics February 27, 2007 Outline Motivation Model description Methods of inference Earnings

More information

Kazuhiko Kakamu Department of Economics Finance, Institute for Advanced Studies. Abstract

Kazuhiko Kakamu Department of Economics Finance, Institute for Advanced Studies. Abstract Bayesian Estimation of A Distance Functional Weight Matrix Model Kazuhiko Kakamu Department of Economics Finance, Institute for Advanced Studies Abstract This paper considers the distance functional weight

More information

Bayesian Estimation of Input Output Tables for Russia

Bayesian Estimation of Input Output Tables for Russia Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian

More information

Multivariate Normal & Wishart

Multivariate Normal & Wishart Multivariate Normal & Wishart Hoff Chapter 7 October 21, 2010 Reading Comprehesion Example Twenty-two children are given a reading comprehsion test before and after receiving a particular instruction method.

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos & Aarti Singh Contents Markov Chain Monte Carlo Methods Goal & Motivation Sampling Rejection Importance Markov

More information

Monte Carlo Integration I

Monte Carlo Integration I Monte Carlo Integration I Digital Image Synthesis Yung-Yu Chuang with slides by Pat Hanrahan and Torsten Moller Introduction L o p,ωo L e p,ωo s f p,ωo,ω i L p,ω i cosθ i i d The integral equations generally

More information

Markov Random Fields

Markov Random Fields Markov Random Fields 1. Markov property The Markov property of a stochastic sequence {X n } n 0 implies that for all n 1, X n is independent of (X k : k / {n 1, n, n + 1}), given (X n 1, X n+1 ). Another

More information

Strong Lens Modeling (II): Statistical Methods

Strong Lens Modeling (II): Statistical Methods Strong Lens Modeling (II): Statistical Methods Chuck Keeton Rutgers, the State University of New Jersey Probability theory multiple random variables, a and b joint distribution p(a, b) conditional distribution

More information

Kernel adaptive Sequential Monte Carlo

Kernel adaptive Sequential Monte Carlo Kernel adaptive Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) December 7, 2015 1 / 36 Section 1 Outline

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Preliminaries. Probabilities. Maximum Likelihood. Bayesian

More information

State Space and Hidden Markov Models

State Space and Hidden Markov Models State Space and Hidden Markov Models Kunsch H.R. State Space and Hidden Markov Models. ETH- Zurich Zurich; Aliaksandr Hubin Oslo 2014 Contents 1. Introduction 2. Markov Chains 3. Hidden Markov and State

More information

19 : Slice Sampling and HMC

19 : Slice Sampling and HMC 10-708: Probabilistic Graphical Models 10-708, Spring 2018 19 : Slice Sampling and HMC Lecturer: Kayhan Batmanghelich Scribes: Boxiang Lyu 1 MCMC (Auxiliary Variables Methods) In inference, we are often

More information

Bayesian Classification and Regression Trees

Bayesian Classification and Regression Trees Bayesian Classification and Regression Trees James Cussens York Centre for Complex Systems Analysis & Dept of Computer Science University of York, UK 1 Outline Problems for Lessons from Bayesian phylogeny

More information

STA 294: Stochastic Processes & Bayesian Nonparametrics

STA 294: Stochastic Processes & Bayesian Nonparametrics MARKOV CHAINS AND CONVERGENCE CONCEPTS Markov chains are among the simplest stochastic processes, just one step beyond iid sequences of random variables. Traditionally they ve been used in modelling a

More information

Bayes: All uncertainty is described using probability.

Bayes: All uncertainty is described using probability. Bayes: All uncertainty is described using probability. Let w be the data and θ be any unknown quantities. Likelihood. The probability model π(w θ) has θ fixed and w varying. The likelihood L(θ; w) is π(w

More information

Hmms with variable dimension structures and extensions

Hmms with variable dimension structures and extensions Hmm days/enst/january 21, 2002 1 Hmms with variable dimension structures and extensions Christian P. Robert Université Paris Dauphine www.ceremade.dauphine.fr/ xian Hmm days/enst/january 21, 2002 2 1 Estimating

More information

Data Analysis I. Dr Martin Hendry, Dept of Physics and Astronomy University of Glasgow, UK. 10 lectures, beginning October 2006

Data Analysis I. Dr Martin Hendry, Dept of Physics and Astronomy University of Glasgow, UK. 10 lectures, beginning October 2006 Astronomical p( y x, I) p( x, I) p ( x y, I) = p( y, I) Data Analysis I Dr Martin Hendry, Dept of Physics and Astronomy University of Glasgow, UK 10 lectures, beginning October 2006 4. Monte Carlo Methods

More information

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )

SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, ) Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND

More information

Multilevel Statistical Models: 3 rd edition, 2003 Contents

Multilevel Statistical Models: 3 rd edition, 2003 Contents Multilevel Statistical Models: 3 rd edition, 2003 Contents Preface Acknowledgements Notation Two and three level models. A general classification notation and diagram Glossary Chapter 1 An introduction

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As

More information

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence Bayesian Inference in GLMs Frequentists typically base inferences on MLEs, asymptotic confidence limits, and log-likelihood ratio tests Bayesians base inferences on the posterior distribution of the unknowns

More information

Particle Methods as Message Passing

Particle Methods as Message Passing Particle Methods as Message Passing Justin Dauwels RIKEN Brain Science Institute Hirosawa,2-1,Wako-shi,Saitama,Japan Email: justin@dauwels.com Sascha Korl Phonak AG CH-8712 Staefa, Switzerland Email: sascha.korl@phonak.ch

More information