Surveying the Characteristics of Population Monte Carlo
International Research Journal of Applied and Basic Sciences, 2013, Vol. 7 (9). Science Explorer Publications.

Surveying the Characteristics of Population Monte Carlo

Ehsan Fayyazi 1*, Gholamhossein Gholami 2
1. Department of Statistics, Science and Research Branch, Islamic Azad University, Fars, Iran.
2. Department of Mathematics, Faculty of Sciences, Urmia University, Urmia, Iran.
*Corresponding author: e.fayyazi@fsriau.ac.ir

ABSTRACT: The importance sampling method, like Markov chain Monte Carlo (MCMC) algorithms, is iterative, but unlike MCMC it does not depend on a starting point. The Population Monte Carlo (PMC) method consists of repeated rounds of importance sampling in which the importance functions used depend on the previously produced importance samples. Its advantage over MCMC algorithms is that the estimator is unbiased at each iteration, so the algorithm can be stopped at any time; the iterations serve to improve the importance function (i.e. the proposal distribution) and hence the quality of the importance sampling. In this study, we survey this method through several examples.

Keywords: Population Monte Carlo, importance sampling, Markov chain Monte Carlo (MCMC), mixture models, Metropolis-Hastings algorithms.

INTRODUCTION
This study presents a method named Population Monte Carlo (PMC), which combines Markov chain Monte Carlo, importance sampling, and importance resampling, taking advantage of each of these methods. In what follows, we first describe an extension of importance sampling and then introduce Population Monte Carlo.
Population Monte Carlo
The Population Monte Carlo (PMC) algorithm is an iterative importance sampling method that produces, at each iteration, an approximate sample simulated from the target distribution; it is an adaptive algorithm that tunes the proposal distribution toward the target distribution across iterations. The theoretical basis of the method thus lies in importance sampling rather than in MCMC and, despite its iterative character, the estimator of the target distribution is valid (unbiased, at least to order O(1/n)) at every iteration, requiring neither convergence times nor stopping rules.

Simulating the sample
In MCMC, the stationary distribution \pi is obtained as the limiting distribution of a Markov sequence \{X_t\}, the empirical consequence being that X_t is approximately distributed from \pi once t is large enough. A rather simple extension of this perspective is, instead of simulating a single point from \pi, to simulate a sample of n values distributed from \pi; in other words, to simulate (x_1, \dots, x_n) from

\tilde\pi(x_1, \dots, x_n) = \prod_{i=1}^{n} \pi(x_i).

Extensions of this kind, together with the programming developed for running n MCMC chains in parallel, are discussed in [4] and [5]. In fact, one can use the complete sample at iteration t to devise a proposal distribution at iteration t + 1.

General importance sampling
The PMC algorithm can be placed in a more general framework: one may use a different proposal distribution at each iteration and for each particle. In other words, if i is the sample index and t the iteration index, then X_i^{(t)} can be simulated from a distribution q_{it} that may depend on the previously generated samples (by conditioning on them) while being independent of the other particles of the current iteration:

X_i^{(t)} \sim q_{it}(x).
An importance weight is then attached to each simulated point,

\varrho_i^{(t)} = \frac{\pi(x_i^{(t)})}{q_{it}(x_i^{(t)})},

so that the estimators

I_t = \frac{1}{n} \sum_{i=1}^{n} \varrho_i^{(t)} h(x_i^{(t)})

are unbiased estimators of E_\pi[h(X)] [1]. Crucially, the importance distributions q_{it} may depend on all previous samples without compromising this unbiasedness. In fact, we have

E[\varrho_i^{(t)} h(x_i^{(t)})] = \iint \frac{\pi(x)}{q_{it}(x)}\, h(x)\, q_{it}(x)\,dx\; g(\varsigma)\,d\varsigma = \int h(x)\,\pi(x)\,dx \int g(\varsigma)\,d\varsigma = E_\pi[h(X)]   (1)

in which \varsigma is the vector of past random variates appearing in q_{it} and g(\varsigma) is its arbitrary distribution (the identity holds whatever g(\varsigma) is) [1]. Moreover, dependence on the past does not introduce correlation, since for i \neq j

E[\varrho_i^{(t)} h(x_i^{(t)})\, \varrho_j^{(t)} h(x_j^{(t)})] = \iiint \frac{\pi(x)}{q_{it}(x)}\, h(x)\, q_{it}(x)\,dx\; \frac{\pi(y)}{q_{jt}(y)}\, h(y)\, q_{jt}(y)\,dy\; g(\varsigma)\,d\varsigma = \Big(\int h(x)\,\pi(x)\,dx\Big)^2 = \big(E_\pi[h(X)]\big)^2   (2)

where \varsigma now denotes the past random variates appearing in q_{it} or q_{jt} and g is their density. Assuming further that the variances

\mathrm{Var}\big(\varrho_i^{(t)} h(x_i^{(t)})\big)

exist for every 1 \le i \le n — which requires the proposal distributions q_{it} to have heavier tails than \pi — we have

\mathrm{Var}(I_t) = \frac{1}{n^2} \sum_{i=1}^{n} \mathrm{Var}\big(\varrho_i^{(t)} h(x_i^{(t)})\big)   (3)

because the correlations between the weighted terms vanish. In fact, even if the x_i^{(t)}'s are correlated, the weighted terms are uncorrelated, according to the following theorem [1].

Theorem 1.1. Suppose \{X_t\} is a Markov chain with transition kernel q; then

\mathrm{Var}\Big(\sum_t h(X_t)\,\frac{\pi(X_t)}{q(X_t \mid X_{t-1})}\Big) = \sum_t \mathrm{Var}\Big(h(X_t)\,\frac{\pi(X_t)}{q(X_t \mid X_{t-1})}\Big).

Proof. Without loss of generality, suppose E_\pi[h(X)] = 0. If we define

\omega_t = \frac{\pi(X_t)}{q(X_t \mid X_{t-1})},

then the covariance between \omega_t h(X_t) and \omega_{t+1} h(X_{t+1}) equals

E[\omega_t h(X_t)\, \omega_{t+1} h(X_{t+1})] = E\big[\, \omega_t h(X_t)\; E(\omega_{t+1} h(X_{t+1}) \mid X_t) \,\big],

in which we iterated the expectation and used the Markov conditional-independence property. The inner conditional expectation is

E(\omega_{t+1} h(X_{t+1}) \mid X_t) = \int \frac{\pi(z)}{q(z \mid X_t)}\, h(z)\, q(z \mid X_t)\,dz = E_\pi[h(X)] = 0,

which shows that the covariance equals zero [3]. Hence, for importance sampling estimators, the variance of the sum equals the sum of the variances of the individual terms.
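To make the weighting concrete, here is a minimal importance sampling sketch in Python. The standard-normal target, the quadratic integrand h, and the wider normal proposal are illustrative choices of ours (not the paper's), picked so that the proposal has heavier tails than the target and the true value E_\pi[X^2] = 1 is known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: standard normal pi(x); integrand h(x) = x^2, E_pi[h] = 1.
def pi_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def h(x):
    return x**2

n = 200_000
scale = 2.0                            # proposal std > target std => heavier tails
x = rng.normal(0.0, scale, size=n)     # x_i ~ q
q_pdf = np.exp(-0.5 * (x / scale)**2) / (scale * np.sqrt(2.0 * np.pi))
w = pi_pdf(x) / q_pdf                  # importance weights rho_i = pi(x_i)/q(x_i)
I_hat = np.mean(w * h(x))              # unbiased estimator of E_pi[h(X)]
print(I_hat)                           # close to 1
```

Because the weighted terms are bounded here, the variance of the estimator is small and the average settles near the true value 1.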
Note that resampling may be included at some iterations, or even at every iteration of the algorithm; in that case, however, the weights do not accumulate across iterations as they do in particle systems. As in most applications, the target distribution \pi typically lacks a normalizing constant, and we instead use the normalized weights

\bar\varrho_i^{(t)} = \frac{\pi(x_i^{(t)}) / q_{it}(x_i^{(t)})}{\sum_{j=1}^{n} \pi(x_j^{(t)}) / q_{jt}(x_j^{(t)})}, \qquad i = 1, \dots, n,   (4)

which sum to 1. In this case both the unbiasedness and the variance analysis above break down, although they continue to hold approximately [3]. In fact, the estimator of the normalizing constant of \pi improves with the iteration t, since the overall average

\bar\omega_t = \frac{1}{tn} \sum_{s=1}^{t} \sum_{i=1}^{n} \frac{\pi(z_i^{(s)})}{q_{is}(z_i^{(s)})}

is a convergent estimator of the inverse of the normalizing constant [3]. So, as t increases, \bar\omega_t contributes less and less to the bias and to the variability of I_t; this property holds for t large enough. Moreover, if \bar\omega_t is used in place of the denominator of (4), i.e. if

\bar\varrho_i^{(t)} = \frac{\pi(z_i^{(t)})}{\bar\omega_t\, q_{it}(z_i^{(t)})},

then the variance decomposition (3) is recovered under the same conditions [3].

Population Monte Carlo
Suppose we are interested in estimating the integral

I = \int h(x)\, \pi(x)\, dx,

which requires sampling from the target distribution \pi(x). The Population Monte Carlo algorithm generates, at each iteration, a population of samples from the target distribution. The name "Population Monte Carlo" reflects the idea of simulating a complete sample at every iteration, rather than iteratively simulating individual approximate sample points. Since the previous section establishes that an iterated importance sampling scheme whose proposal distributions depend on past samples is fundamentally a particular importance sampling scheme, one can state Algorithm 1, whose validity follows from importance sampling principles.

Algorithm 1: Population Monte Carlo. For t = 1, \dots, T:
1. For i = 1, \dots, n, simulate x_i^{(t)} \sim q_{it}(x).
2. For i = 1, \dots, n, compute the normalized weights \bar\varrho_i^{(t)} as in (4).
3. Resample n values from (x_1^{(t)}, \dots, x_n^{(t)}) with replacement, according to the weights \bar\varrho_i^{(t)}.

In this algorithm the emphasis is on the particle index i, since particle-specific proposals are the main feature of the PMC algorithm: the proposal distributions can be chosen separately for each particle without compromising the validity of the method. In particular, the proposals q_{it} can be chosen on the basis of the previous proposals q_{i(t-1)}, can depend on the previous sample (x_1^{(t-1)}, \dots, x_n^{(t-1)}), or even on all of the samples simulated so far.
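A minimal sketch of the PMC scheme (Algorithm 1) in Python. The two-component normal-mixture target and the fixed-scale Gaussian per-particle proposals centred at the resampled particles are illustrative assumptions of ours, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target (unnormalized): mixture of N(-2, 1) and N(2, 1).
def pi_unnorm(x):
    return 0.5 * np.exp(-0.5 * (x + 2.0)**2) + 0.5 * np.exp(-0.5 * (x - 2.0)**2)

n, T, tau = 2_000, 20, 1.0
centres = rng.normal(0.0, 5.0, size=n)          # initial particle population
for t in range(T):
    x = rng.normal(centres, tau)                # step 1: x_i^(t) ~ N(centre_i, tau^2)
    q = np.exp(-0.5 * ((x - centres) / tau)**2) # proposal density (constant cancels)
    w = pi_unnorm(x) / q                        # step 2: importance weights
    w = w / w.sum()                             # normalize as in (4)
    idx = rng.choice(n, size=n, p=w)            # step 3: multinomial resampling
    centres = x[idx]                            # adapted proposals for iteration t+1

# The resampled population approximates pi: mean near 0, both modes populated.
print(centres.mean(), centres.std())
```

The resampled particles serve both as the output sample and as the centres of the next iteration's proposals, which is how the proposal distribution adapts toward the target.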
At each iteration t, the PMC estimator is

I_t = \frac{1}{n} \sum_{i=1}^{n} \varrho_i^{(t)} h(x_i^{(t)}),

whose unbiasedness was shown in (1). In many cases, however, the distribution \pi(x) is identifiable only up to a normalizing constant. As in importance sampling, one can then use the alternative self-normalized estimator

\tilde I_t = \frac{\sum_{i=1}^{n} \varrho_i^{(t)} h(x_i^{(t)})}{\sum_{i=1}^{n} \varrho_i^{(t)}}.

In this case the unbiasedness of the estimator is lost; however, it remains a consistent estimator (Cappé et al., 2004).
In practice, one computes a mean over all iterations in order to improve the estimation. A cumulative PMC estimator over all iterations is defined as

\hat I = \sum_{t=1}^{T} \alpha_t I_t,   (5)

where the \alpha_t are weights mixing the estimators of the different iterations. The efficient choice of \alpha_t, minimizing the variance of the estimator (5), is (Douc et al., 2005)

\alpha_t = \frac{1/\sigma_t^2}{\sum_{s=1}^{T} 1/\sigma_s^2},

where \sigma_t^2 is the variance of the estimator I_t at iteration t.

Note that the importance weights may become very unbalanced; this imbalance can lead to degeneracy and puts the importance sampling estimate at risk, so degeneracy is the main drawback of this method. In fact, iterated importance sampling suffers from more degeneracy than ordinary importance sampling, because the resampling step is repeated many times: the proportion of distinct particles surviving between two iterations of the resampling algorithm can be low, and the probability of this clearly increases with the number of iterations. The consequence of such population degeneracy is that the number of surviving ancestral sample branches decreases rapidly. In such cases, if the proposal distributions are based on the generated values, the final output may be degenerate, or at least the variance of the estimators may increase. Also, as in ordinary importance sampling, there is a risk that the weights \varrho_i^{(t)} lead to infinite variances (Celeux et al., 2006).

There are further similarities between PMC and the proposal distributions in the particle systems of Gilks and Berzuini (2001), since these authors also iterate the samples through a resampling step based on the importance weights. The big difference, however (besides the dynamic setting of their moving target distributions), is that they remain within the MCMC domain, because they carry out the resampling step before applying the proposal distributions.
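The variance-minimizing combination of per-iteration estimates described above can be sketched as follows; the per-iteration estimates and variance values below are made-up illustrative numbers:

```python
import numpy as np

# Combining per-iteration PMC estimates I_t with the variance-minimizing
# mixture weights alpha_t proportional to 1/sigma_t^2 (Douc et al., 2005).
I_t      = np.array([1.04, 0.98, 1.01, 1.00])   # illustrative estimate per iteration
sigma2_t = np.array([0.20, 0.05, 0.02, 0.01])   # illustrative variance of each I_t

alpha = (1.0 / sigma2_t) / np.sum(1.0 / sigma2_t)   # alpha_t = (1/sigma_t^2) / sum
I_cum = np.sum(alpha * I_t)                         # cumulative PMC estimator (5)
print(I_cum)
```

Since the proposal improves over the iterations, later iterations typically have smaller \sigma_t^2 and therefore receive larger weights \alpha_t.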
These authors therefore had to use Markov transition kernels with the given stationary distributions. There is also a similarity with Chopin (2002), who used iterated importance sampling with adapted proposal distributions; this is a special case of PMC in the Bayesian framework, in which the proposal distributions q_{it} are the posterior distributions associated with a portion of the observed sample. As mentioned above, one of the prominent features of the PMC method is the freedom in choosing the proposal distributions q_{it}, precisely because the MCMC framework is abandoned. In fact, running a Metropolis-Hastings step for each point of the posterior sample generates n parallel MCMC samplers that simply converge to the target distribution \pi without the importance resampling correction; similarly, a Metropolis-Hastings acceptance step for the whole vector (x_1^{(t)}, \dots, x_n^{(t)}) converges to \tilde\pi. The advantage of generating an asymptotically i.i.d. sample is, however, offset by an acceptance probability that decreases approximately as a power of n. Hence, at each iteration of PMC, points are instead selected according to their importance weights \varrho_i^{(t)}. It sometimes happens that the Metropolis-Hastings method does not work properly with a given proposal distribution, while a PMC algorithm based on the same proposal produces correct and valid answers (Cappé et al., 2004). The PMC framework thus provides a simpler structure for adaptive methods than MCMC does. In fact, as long as importance sampling strategies are embedded within MCMC, the MCMC context is less appropriate for adaptive algorithms, since adaptivity breaks the Markov structure of the chain; as a consequence, further convergence study is needed to establish the ergodic property. For more on this, see for example Andrieu and Robert (2001). In PMC methods, ergodicity is not an issue, since the validity of the algorithm is obtained directly by the importance sampling argument.
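For contrast with the discussion above, a bare-bones random-walk Metropolis-Hastings sampler; the standard-normal target, the unit-scale proposal, and the deliberately poor starting point are our illustrative choices, showing the burn-in/convergence issue that PMC's per-iteration validity avoids:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative target (unnormalized): standard normal.
def pi_unnorm(x):
    return np.exp(-0.5 * x * x)

T, x = 50_000, 10.0            # deliberately poor starting point
chain = np.empty(T)
for t in range(T):
    prop = x + rng.normal(0.0, 1.0)                  # symmetric random-walk proposal
    if rng.random() < pi_unnorm(prop) / pi_unnorm(x):
        x = prop                                     # accept; otherwise keep x
    chain[t] = x

burned = chain[5_000:]         # discard burn-in before estimating
print(burned.mean(), burned.std())
```

Unlike PMC, the early iterates here are biased by the starting point, which is why a burn-in period and a convergence assessment are needed before the chain's output can be used.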
The samples generated by PMC can exploit the importance sampling output at each iteration, and in this way do not require stopping rules, unlike MCMC samplers (Robert and Casella, 2004). Moreover, as described for constant estimation in Robert and Casella (2004), one can exploit the entire sequence of samples, both for the proposal distributions and for the estimation. To use the whole sequence of samples, one does not need storage space for all of the generated samples, because estimators such as the cumulative estimator (5) can be updated dynamically. In addition, the possibility of exploiting the simulations in this way implies that the sample size n need not be large, because the effective simulation size is tn. A final point is that the number of points in the sample need not be constant across iterations: as in Chopin (2002), one can increase the number of points in the sample when the algorithm settles into a stable state.

To examine the method, we analyze a simple example whose answer is known.

Example 1.2. Suppose the goal is to evaluate an integral of the form

I = \int h(x)\, \pi(x)\, dx

with \pi(x) = 1 and a fixed integrand h(\cdot), and assume that a random sample is available from the corresponding distribution. The PMC estimate of this integral is the cumulative estimator (5), and the resulting estimates are shown in Figure 1. As can be seen, the estimate converges to a value consistent with the exact integral and with the result obtained by plain Monte Carlo.
Figure 1. PMC estimator with 1000 iterations.

Example 1.3. Suppose X \sim \tau(\nu, \theta, \sigma^2), with density

\pi(x) = \frac{\Gamma\big((\nu+1)/2\big)}{\sigma\sqrt{\nu\pi}\,\Gamma(\nu/2)} \left(1 + \frac{(x-\theta)^2}{\nu\sigma^2}\right)^{-(\nu+1)/2},

and that the aim is to calculate the integral

I = E_\pi[h(X)] = \int \frac{x}{1 + (x-3)^2}\, \pi(x)\, dx.

Suppose \theta = 0, \sigma = 1 and \nu = 12, without loss of generality. We use the Population Monte Carlo method to solve this integral, taking the Exp(1) density as the importance function (Robert and Casella, 2004). Written in terms of this proposal distribution, the integral becomes

I = \int \frac{x\, e^{x}}{1 + (x-3)^2}\, \pi(x)\, e^{-x}\, dx.

Following the importance sampling method, the estimator is

\tilde I_t = \frac{\sum_{i} \varrho_i^{(t)} h(x_i^{(t)})}{\sum_{i} \varrho_i^{(t)}},

where \pi(\cdot) is the Student-t density, q(\cdot) is the exponential density with parameter 1, and h(x) = x / (1 + (x-3)^2). As can be seen, the variability of the estimator decreases and the estimate converges to a constant value as the number of iterations increases. The resulting estimate for t = 2000 iterations is shown in Figure 2.
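A sketch of the self-normalized estimator of Example 1.3 in Python, with the t(12) density and Exp(1) importance function described in the text. Two caveats worth making explicit: Exp(1) is supported on (0, \infty), so this targets the integral restricted to the positive half-line; and the weights \pi(x)e^{x} grow in the tail, illustrating the infinite-variance risk mentioned earlier:

```python
import numpy as np
from math import gamma, sqrt, pi as PI

rng = np.random.default_rng(2)

# Student-t density with nu = 12, theta = 0, sigma = 1, as in Example 1.3.
nu = 12
C = gamma((nu + 1) / 2) / (sqrt(nu * PI) * gamma(nu / 2))

def t_pdf(x):
    return C * (1.0 + x**2 / nu) ** (-(nu + 1) / 2)

def h(x):
    return x / (1.0 + (x - 3.0)**2)

n = 100_000
x = rng.exponential(1.0, size=n)       # x_i ~ Exp(1), the importance function
w = t_pdf(x) / np.exp(-x)              # rho_i = pi(x_i) / q(x_i); grows like e^x in the tail
I_hat = np.sum(w * h(x)) / np.sum(w)   # self-normalized estimator
print(I_hat)
```

In practice one would monitor the largest weights (or an effective sample size) here, since a single extreme draw in the tail can dominate the weighted sum.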
Figure 2. PMC estimator with 2000 iterations.

REFERENCES
Andrieu C, Robert CP. 2001. Controlled Markov Chain Monte Carlo Methods for Optimal Sampling. Technical report, Université Paris Dauphine.
Cappé O, Guillin A, Marin JM, Robert CP. 2004. Population Monte Carlo. Journal of Computational and Graphical Statistics, 13(4).
Celeux G, Marin JM, Robert CP. 2006. Iterated Importance Sampling in Missing Data Problems. Computational Statistics and Data Analysis, 50(12).
Chopin N. 2002. A Sequential Particle Filter Method for Static Models. Biometrika, 89.
Douc R, Guillin A, Marin JM, Robert CP. 2005. Minimum Variance Importance Sampling via Population Monte Carlo. Technical report, Université Paris Dauphine.
Gilks W, Berzuini C. 2001. Following a moving target: Monte Carlo inference for dynamic Bayesian models. Journal of the Royal Statistical Society, Series B, 63(1).
Mengersen K, Robert C. Iid sampling with self-avoiding particle filters: the pinball sampler. In Bernardo J, Bayarri M, Berger J, Dawid A, Heckerman D, Smith A, West M, editors, Bayesian Statistics 7. Oxford University Press, Oxford.
Robert CP, Casella G. 2004. Monte Carlo Statistical Methods, 2nd edition. Springer.
Warnes G. The Normal Kernel Coupler: An adaptive Markov Chain Monte Carlo method for efficiently sampling from multi-modal distributions. Technical Report 395, University of Washington.
More informationMetropolis-Hastings Algorithm
Strength of the Gibbs sampler Metropolis-Hastings Algorithm Easy algorithm to think about. Exploits the factorization properties of the joint probability distribution. No difficult choices to be made to
More informationMarkov Chain Monte Carlo (MCMC)
Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can
More informationLikelihood Inference for Lattice Spatial Processes
Likelihood Inference for Lattice Spatial Processes Donghoh Kim November 30, 2004 Donghoh Kim 1/24 Go to 1234567891011121314151617 FULL Lattice Processes Model : The Ising Model (1925), The Potts Model
More informationNew Insights into History Matching via Sequential Monte Carlo
New Insights into History Matching via Sequential Monte Carlo Associate Professor Chris Drovandi School of Mathematical Sciences ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS)
More informationIntroduction to Bayesian methods in inverse problems
Introduction to Bayesian methods in inverse problems Ville Kolehmainen 1 1 Department of Applied Physics, University of Eastern Finland, Kuopio, Finland March 4 2013 Manchester, UK. Contents Introduction
More informationLecture 8: Bayesian Estimation of Parameters in State Space Models
in State Space Models March 30, 2016 Contents 1 Bayesian estimation of parameters in state space models 2 Computational methods for parameter estimation 3 Practical parameter estimation in state space
More informationThe Jackknife-Like Method for Assessing Uncertainty of Point Estimates for Bayesian Estimation in a Finite Gaussian Mixture Model
Thai Journal of Mathematics : 45 58 Special Issue: Annual Meeting in Mathematics 207 http://thaijmath.in.cmu.ac.th ISSN 686-0209 The Jackknife-Like Method for Assessing Uncertainty of Point Estimates for
More informationMonte Carlo in Bayesian Statistics
Monte Carlo in Bayesian Statistics Matthew Thomas SAMBa - University of Bath m.l.thomas@bath.ac.uk December 4, 2014 Matthew Thomas (SAMBa) Monte Carlo in Bayesian Statistics December 4, 2014 1 / 16 Overview
More informationMarkov Chain Monte Carlo Methods for Stochastic
Markov Chain Monte Carlo Methods for Stochastic Optimization i John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U Florida, Nov 2013
More informationBayesian inference & process convolution models Dave Higdon, Statistical Sciences Group, LANL
1 Bayesian inference & process convolution models Dave Higdon, Statistical Sciences Group, LANL 2 MOVING AVERAGE SPATIAL MODELS Kernel basis representation for spatial processes z(s) Define m basis functions
More informationComputer Vision Group Prof. Daniel Cremers. 11. Sampling Methods
Prof. Daniel Cremers 11. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationDynamic System Identification using HDMR-Bayesian Technique
Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in
More informationMarkov chain Monte Carlo
1 / 26 Markov chain Monte Carlo Timothy Hanson 1 and Alejandro Jara 2 1 Division of Biostatistics, University of Minnesota, USA 2 Department of Statistics, Universidad de Concepción, Chile IAP-Workshop
More informationABC methods for phase-type distributions with applications in insurance risk problems
ABC methods for phase-type with applications problems Concepcion Ausin, Department of Statistics, Universidad Carlos III de Madrid Joint work with: Pedro Galeano, Universidad Carlos III de Madrid Simon
More informationParticle Learning for Sequential Bayesian Computation Rejoinder
BAYESIAN STATISTICS 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.) c Oxford University Press, 20 Particle Learning for Sequential Bayesian
More informationSequential Monte Carlo Methods in High Dimensions
Sequential Monte Carlo Methods in High Dimensions Alexandros Beskos Statistical Science, UCL Oxford, 24th September 2012 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Andrew Stuart Imperial College,
More informationMarkov chain Monte Carlo methods in atmospheric remote sensing
1 / 45 Markov chain Monte Carlo methods in atmospheric remote sensing Johanna Tamminen johanna.tamminen@fmi.fi ESA Summer School on Earth System Monitoring and Modeling July 3 Aug 11, 212, Frascati July,
More informationBayesian Methods with Monte Carlo Markov Chains II
Bayesian Methods with Monte Carlo Markov Chains II Henry Horng-Shing Lu Institute of Statistics National Chiao Tung University hslu@stat.nctu.edu.tw http://tigpbp.iis.sinica.edu.tw/courses.htm 1 Part 3
More informationPerfect simulation algorithm of a trajectory under a Feynman-Kac law
Perfect simulation algorithm of a trajectory under a Feynman-Kac law Data Assimilation, 24th-28th September 212, Oxford-Man Institute C. Andrieu, N. Chopin, A. Doucet, S. Rubenthaler University of Bristol,
More informationL09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms
L09. PARTICLE FILTERING NA568 Mobile Robotics: Methods & Algorithms Particle Filters Different approach to state estimation Instead of parametric description of state (and uncertainty), use a set of state
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Markov Chain Monte Carlo Methods Barnabás Póczos Contents Markov Chain Monte Carlo Methods Sampling Rejection Importance Hastings-Metropolis Gibbs Markov Chains
More informationStat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet.
Stat 535 C - Statistical Computing & Monte Carlo Methods Arnaud Doucet Email: arnaud@cs.ubc.ca 1 1.1 Outline Introduction to Markov chain Monte Carlo The Gibbs Sampler Examples Overview of the Lecture
More informationMonte Carlo Methods in Statistics
Monte Carlo Methods in Statistics Christian Robert To cite this version: Christian Robert. Monte Carlo Methods in Statistics. Entry for the International Handbook of Statistical Sciences. 2009.
More informationStatistics & Data Sciences: First Year Prelim Exam May 2018
Statistics & Data Sciences: First Year Prelim Exam May 2018 Instructions: 1. Do not turn this page until instructed to do so. 2. Start each new question on a new sheet of paper. 3. This is a closed book
More informationPattern Recognition and Machine Learning. Bishop Chapter 11: Sampling Methods
Pattern Recognition and Machine Learning Chapter 11: Sampling Methods Elise Arnaud Jakob Verbeek May 22, 2008 Outline of the chapter 11.1 Basic Sampling Algorithms 11.2 Markov Chain Monte Carlo 11.3 Gibbs
More informationI. Bayesian econometrics
I. Bayesian econometrics A. Introduction B. Bayesian inference in the univariate regression model C. Statistical decision theory D. Large sample results E. Diffuse priors F. Numerical Bayesian methods
More informationA quick introduction to Markov chains and Markov chain Monte Carlo (revised version)
A quick introduction to Markov chains and Markov chain Monte Carlo (revised version) Rasmus Waagepetersen Institute of Mathematical Sciences Aalborg University 1 Introduction These notes are intended to
More informationParticle Filtering for Data-Driven Simulation and Optimization
Particle Filtering for Data-Driven Simulation and Optimization John R. Birge The University of Chicago Booth School of Business Includes joint work with Nicholas Polson. JRBirge INFORMS Phoenix, October
More informationPrinciples of Bayesian Inference
Principles of Bayesian Inference Sudipto Banerjee University of Minnesota July 20th, 2008 1 Bayesian Principles Classical statistics: model parameters are fixed and unknown. A Bayesian thinks of parameters
More informationSampling from complex probability distributions
Sampling from complex probability distributions Louis J. M. Aslett (louis.aslett@durham.ac.uk) Department of Mathematical Sciences Durham University UTOPIAE Training School II 4 July 2017 1/37 Motivation
More informationMCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17
MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making
More informationSupplement to A Hierarchical Approach for Fitting Curves to Response Time Measurements
Supplement to A Hierarchical Approach for Fitting Curves to Response Time Measurements Jeffrey N. Rouder Francis Tuerlinckx Paul L. Speckman Jun Lu & Pablo Gomez May 4 008 1 The Weibull regression model
More informationMarkov Chain Monte Carlo methods
Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As
More informationSession 3A: Markov chain Monte Carlo (MCMC)
Session 3A: Markov chain Monte Carlo (MCMC) John Geweke Bayesian Econometrics and its Applications August 15, 2012 ohn Geweke Bayesian Econometrics and its Session Applications 3A: Markov () chain Monte
More informationBagging During Markov Chain Monte Carlo for Smoother Predictions
Bagging During Markov Chain Monte Carlo for Smoother Predictions Herbert K. H. Lee University of California, Santa Cruz Abstract: Making good predictions from noisy data is a challenging problem. Methods
More information