Asymptotics and Simulation of Heavy-Tailed Processes
1 Asymptotics and Simulation of Heavy-Tailed Processes. Department of Mathematics, Stockholm, Sweden. Workshop on Heavy-tailed Distributions and Extreme Value Theory, ISI Kolkata, January 14-17, 2013.
2 Outline
1. Introduction to Rare-Event Simulation: The Problem with Monte Carlo; Importance Sampling.
2. Importance Sampling in a Heavy-Tailed Setting: The Conditional Mixture Algorithm for Random Walks; Performance of the Conditional Mixture Algorithm.
3. Markov Chain Monte Carlo in Rare-Event Simulation: Efficient Sampling for a Heavy-Tailed Random Walk; Efficient Sampling of Heavy-Tailed Random Sums.
3 Simulation of Heavy-Tailed Processes
Goal: improve on the computational efficiency of standard Monte Carlo.
Design: large deviations analysis leads to heuristics for designing efficient algorithms.
Efficiency analysis: large deviations analysis can also be used to quantify the computational performance of the algorithms theoretically.
5 The Problem with Monte Carlo
The Performance of Monte Carlo
Let X_1, ..., X_N be independent copies of X. The Monte Carlo estimator of p = P\{X > b\} is
\hat p = \frac{1}{N} \sum_{k=1}^{N} I\{X_k > b\}.
Its standard deviation is
\mathrm{std}(\hat p) = \sqrt{\frac{p(1-p)}{N}}.
To obtain a relative error (std/p) of 1% you need to take N such that
\frac{1}{p}\sqrt{\frac{p(1-p)}{N}} \le 0.01, \quad \text{i.e.} \quad N \ge 10^4\,\frac{1-p}{p} \approx \frac{10^4}{p}.
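A minimal sketch of the point above. The Pareto model and all parameter values here are illustrative assumptions, not from the slides; with p = 0.01, the crude estimator's relative error is still about 3% even at N = 10^5:

```python
import random

random.seed(1)

# Crude Monte Carlo for p = P{Z > b} with Z ~ Pareto(alpha): P{Z > z} = z**-alpha.
# (The Pareto model and the parameter values are illustrative assumptions.)
alpha, b, N = 2.0, 10.0, 100_000
p_true = b ** (-alpha)                             # 0.01

hits = sum(1 for _ in range(N) if random.paretovariate(alpha) > b)
p_hat = hits / N

# Relative error of the estimator: std(p_hat)/p = sqrt((1 - p)/(N p)).
rel_err = ((1.0 - p_true) / (N * p_true)) ** 0.5   # about 3.1% here
print(p_hat, rel_err)
```

Pushing the relative error down to 1% forces N of order 10^4/p, i.e. N ≈ 10^6 in this toy case, which is the blow-up the slide quantifies.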
7 Importance Sampling
Introduction to Importance Sampling¹
The problem with Monte Carlo is that few samples hit the rare event. This can be fixed by sampling from a distribution that puts more mass on the rare event. One must then compensate for not sampling from the original distribution. Key issue: how to select an appropriate sampling distribution?
¹ See e.g. [1] for an introduction.
8 Importance Sampling
Monte Carlo and Importance Sampling: The Basics for Computing F(A) = P\{X \in A\}
Monte Carlo: sample X_1, ..., X_N from F. Empirical measure
F_N(\cdot) = \frac{1}{N}\sum_{k=1}^{N} \delta_{X_k}(\cdot).
Plug-in estimator \hat p = F_N(A).
Importance sampling: sample \tilde X_1, ..., \tilde X_N from \tilde F. Weighted empirical measure
F^w_N(\cdot) = \frac{1}{N}\sum_{k=1}^{N} \frac{dF}{d\tilde F}(\tilde X_k)\,\delta_{\tilde X_k}(\cdot).
Plug-in estimator
\hat p = F^w_N(A) = \frac{1}{N}\sum_{k=1}^{N} \frac{dF}{d\tilde F}(\tilde X_k)\, I\{\tilde X_k \in A\}.
10 Importance Sampling
Importance Sampling
The importance sampling estimator is unbiased:
\tilde E\Big[\frac{dF}{d\tilde F}(\tilde X)\, I\{\tilde X \in A\}\Big] = \int_A \frac{dF}{d\tilde F}\, d\tilde F = \int_A dF = F(A).
Its variance is
\mathrm{Var}\Big(\frac{dF}{d\tilde F}(\tilde X)\, I\{\tilde X \in A\}\Big) = \tilde E\Big[\Big(\frac{dF}{d\tilde F}\Big)^2 I\{\tilde X \in A\}\Big] - F(A)^2
= \int_A \Big(\frac{dF}{d\tilde F}\Big)^2 d\tilde F - F(A)^2
= \int_A \frac{dF}{d\tilde F}\, dF - F(A)^2
= E\Big[\frac{dF}{d\tilde F}(X)\, I\{X \in A\}\Big] - F(A)^2.
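A sketch of the weighted estimator for a concrete tail probability. The target (Pareto), the heavier-tailed Pareto proposal, and all parameters are illustrative assumptions, not from the slides:

```python
import random

random.seed(2)

# Importance sampling for p = P{X > b} with X ~ Pareto(alpha), using a
# heavier-tailed Pareto(alpha_t) proposal so the rare event is hit often.
# Target, proposal, and parameter values are illustrative assumptions.
alpha, alpha_t, b, N = 2.0, 0.5, 100.0, 100_000
p_true = b ** (-alpha)                             # 1e-4

def w(x):
    """Likelihood ratio dF/dF~(x): ratio of the two Pareto densities."""
    return (alpha * x ** (-alpha - 1)) / (alpha_t * x ** (-alpha_t - 1))

samples = [random.paretovariate(alpha_t) for _ in range(N)]
p_hat = sum(w(x) for x in samples if x > b) / N    # weighted plug-in estimator
print(p_hat, p_true)
```

The proposal hits {x > b} with high probability, and the likelihood-ratio weights w(x) compensate exactly, so the estimator is unbiased with far smaller variance than crude Monte Carlo at the same N.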
12 Importance Sampling
Illustration: sampling the tail (Monte Carlo vs. importance sampling). [Figure]
13 Importance Sampling
Quantifying Efficiency: Variance-Based Efficiency Criteria
Basic idea: the variance should be roughly the size of p^2. Embed the problem in a sequence of problems such that p_n = P\{X_n \in A\} \to 0.
Logarithmic efficiency: for some \epsilon > 0,
\limsup_{n} \frac{\mathrm{Var}(\hat p_n)}{p_n^{2-\epsilon}} < \infty.
Strong efficiency / bounded relative error:
\limsup_{n} \frac{\mathrm{Var}(\hat p_n)}{p_n^2} < \infty.
Vanishing relative error:
\lim_{n} \frac{\mathrm{Var}(\hat p_n)}{p_n^2} = 0.
14 Importance Sampling
The Zero-Variance Change of Measure
There is a best choice of sampling distribution, the zero-variance change of measure, given by
F_A(\cdot) = P\{X_n \in \cdot \mid X_n \in A\}.
For this choice
\frac{dF}{dF_A}(x) = P\{X_n \in A\}\, I\{x \in A\} = p_n\, I\{x \in A\},
and hence
\mathrm{Var}(\hat p_n) = E\big[p_n I\{X_n \in A\}\big] - F(A)^2 = p_n^2 - p_n^2 = 0.
15 Example Problem
A Random Walk with Heavy Tails
Let \{Z_k\} be iid, non-negative, and regularly varying (index \alpha) with density f. Put S_m = Z_1 + \cdots + Z_m. Compute P\{S_m > n\}.
The zero-variance sampling distribution is
P\{(Z_1, ..., Z_m) \in \cdot \mid S_m > n\}.
By regular variation we know that
P\{(Z_1, ..., Z_m) \in \cdot \mid S_m > n\} \to \mu(\cdot),
where \mu is concentrated on the coordinate axes. But \mu and F are singular!
18 The Conditional Mixture Algorithm for Random Walks
The Conditional Mixture Algorithm²
Suppose S_{i-1} = s. Sample Z_i as follows:
1. If s > n, sample Z_i from the original density f.
2. If s \le n, sample Z_i from the mixture
p_i f(z) + (1 - p_i)\,\tilde f_i(z \mid s), \quad 1 \le i \le m-1,
\tilde f_m(z \mid s), \quad i = m.
Idea: take \tilde f_i to produce large values of Z_i.
² This algorithm is due to [4].
19 The Conditional Mixture Algorithm for Random Walks
The Conditional Mixture Algorithm
We can take the \tilde f_i's to be conditional distributions of the form
\tilde f_i(z \mid s) = \frac{f(z)\, I\{z > a(n-s)\}}{P\{Z > a(n-s)\}}, \quad i \le m-1,
\tilde f_m(z \mid s) = \frac{f(z)\, I\{z > n-s\}}{P\{Z > n-s\}},
where a \in (0, 1).
Note: \tilde f_i is the conditional density of Z given that Z is large.
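A sketch of the conditional mixture sampler for Pareto(α) steps. The step distribution, the parameters, and the constant mixture weight p_mix are illustrative assumptions for the example (the slides leave the p_i general):

```python
import random

random.seed(3)

# Conditional mixture importance sampling for p = P{S_m > n} with
# Pareto(alpha) steps.  alpha, m, n, a and the constant mixture weight
# p_mix are illustrative assumptions, not values from the slides.
alpha, m, n = 2.0, 5, 100.0
a, p_mix, N = 0.5, 0.5, 20_000

def f(z):
    """Pareto(alpha) density: alpha * z**(-alpha - 1) for z >= 1."""
    return alpha * z ** (-alpha - 1) if z >= 1 else 0.0

def tail(c):
    """P{Z > c}."""
    return c ** (-alpha) if c >= 1 else 1.0

def cond_sample(c):
    """Sample Z conditioned on Z > c (unconditional if c <= 1)."""
    return max(c, 1.0) * random.paretovariate(alpha)

def one_estimate():
    s, weight = 0.0, 1.0
    for i in range(1, m + 1):
        if s > n:                                  # already over the level: use f
            z = random.paretovariate(alpha)
            g = f(z)
        else:
            c = (n - s) if i == m else a * (n - s)
            if i == m or random.random() >= p_mix:     # conditional component
                z = cond_sample(c)
            else:                                      # original density f
                z = random.paretovariate(alpha)
            cond_dens = f(z) / tail(c) if z > c else 0.0
            g = cond_dens if i == m else p_mix * f(z) + (1 - p_mix) * cond_dens
        weight *= f(z) / g                         # accumulate likelihood ratio
        s += z
    return weight if s > n else 0.0

p_hat = sum(one_estimate() for _ in range(N)) / N
print(p_hat)
```

Each sample path accumulates the likelihood ratio dF/dF̃ step by step; the mixture keeps the weights bounded, which is what makes the efficiency analysis on the following slides go through.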
21 Performance of the Conditional Mixture Algorithm
The Performance of the Conditional Mixture Algorithm
The normalized second moment is (using p_n \sim m\, P\{Z_1 > n\})
\frac{E[\hat p_n^2]}{p_n^2} = \frac{1}{p_n^2}\int \frac{dF}{d\tilde F}(z_1, ..., z_m)\, I\{s_m > n\}\, F(dz_1, ..., dz_m)
\sim \frac{1}{m^2 P\{Z_1 > n\}^2}\int \frac{dF}{d\tilde F}(z_1, ..., z_m)\, I\{s_m > n\}\, F(dz_1, ..., dz_m)
= \frac{1}{m^2}\int \frac{1}{P\{Z_1 > n\}} \frac{dF}{d\tilde F}(nz_1, ..., nz_m)\, I\{s_m > 1\}\, \frac{F(n\,dz_1, ..., n\,dz_m)}{P\{Z_1 > n\}}.
22 Performance of the Conditional Mixture Algorithm
The Performance of the Conditional Mixture Algorithm
Weak convergence:
\frac{F(n\,\cdot)}{P\{Z_1 > n\}} \to \sum_{k=1}^{m} \int I\{z e_k \in \cdot\}\, \mu_\alpha(dz).
The normalized likelihood ratio is bounded:
\sup_n \frac{1}{P\{Z_1 > n\}} \frac{dF}{d\tilde F}(nz_1, ..., nz_m)\, I\{s_m > 1\} < \infty.
23 Performance of the Conditional Mixture Algorithm
The Performance of the Conditional Mixture Algorithm
The above enables us to show that the normalized second moment converges:
\lim_{n} \frac{E[\hat p_n^2]}{p_n^2} = \frac{1}{m^2}\Big(\sum_{i=1}^{m-1} \frac{a^{-\alpha}}{1 - p_i}\prod_{j=1}^{i-1}\frac{1}{p_j} + \prod_{j=1}^{m-1}\frac{1}{p_j}\Big).
It is minimized at
p_i = \frac{(m-i-1)\,a^{-\alpha/2} + 1}{(m-i)\,a^{-\alpha/2} + 1},
with minimum
\frac{1}{m^2}\big((m-1)\,a^{-\alpha/2} + 1\big)^2.
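A quick numerical sanity check that the closed-form weights p_i minimize the limiting normalized second moment, with the stated minimum. The parameter values are illustrative:

```python
# Illustrative check: evaluate the limiting normalized second moment at the
# closed-form optimal mixture weights and compare with the stated minimum.
def second_moment_limit(p, a, alpha, m):
    """(1/m^2) ( sum_{i<m} a^(-alpha)/(1-p_i) prod_{j<i} 1/p_j + prod_{j<m} 1/p_j )."""
    total, prod = 0.0, 1.0
    for i in range(m - 1):
        total += a ** (-alpha) / (1.0 - p[i]) * prod
        prod /= p[i]
    return (total + prod) / m ** 2

def optimal_p(a, alpha, m):
    c = a ** (-alpha / 2)
    # p_i = ((m-i-1)c + 1) / ((m-i)c + 1) for i = 1, ..., m-1 (0-based below)
    return [((m - i - 2) * c + 1) / ((m - i - 1) * c + 1) for i in range(m - 1)]

a, alpha, m = 0.5, 2.0, 5
c = a ** (-alpha / 2)
p_star = optimal_p(a, alpha, m)
val = second_moment_limit(p_star, a, alpha, m)
closed_form = ((m - 1) * c + 1) ** 2 / m ** 2
print(val, closed_form)        # both equal 3.24 for these parameters
```

Perturbing any p_i away from the closed form strictly increases the value, consistent with it being the unique minimizer.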
24 Performance of the Conditional Mixture Algorithm
The Performance of the Conditional Mixture Algorithm
The conditional mixture algorithm has (almost) vanishing relative error. Heavy-tailed heuristics indicate how to design the algorithm; heavy-tailed asymptotics are needed to prove its efficiency.
25 MCMC for Rare Events³
Let X be a random variable with density f, and compute p = P\{X \in A\}, where A is a rare event. The zero-variance sampling distribution is
F_A(\cdot) = P\{X \in \cdot \mid X \in A\}, \quad \frac{dF_A}{dx}(x) = \frac{f(x)\, I\{x \in A\}}{p}.
It is possible to sample from F_A by constructing a Markov chain with stationary distribution F_A (e.g. a Gibbs sampler or Metropolis-Hastings). Idea: sample from F_A and extract the normalizing constant p.
³ This part is based on [5].
26 MCMC for Rare Events
Let X_0, ..., X_{T-1} be a sample of a Markov chain with stationary distribution F_A. Consider a non-negative function v with \int_A v(x)\,dx = 1. The sample mean
\hat q_T = \frac{1}{T}\sum_{t=0}^{T-1} \frac{v(X_t)\, I\{X_t \in A\}}{f(X_t)}
is an estimator of
E_{F_A}\Big[\frac{v(X)\, I\{X \in A\}}{f(X)}\Big] = \int_A \frac{v(x)}{f(x)}\,\frac{f(x)}{p}\,dx = \frac{1}{p}\int_A v(x)\,dx = \frac{1}{p}.
Take \hat q_T as the estimator of 1/p.
27 MCMC for Rare Events
The rare-event properties of \hat q_T are determined by the choice of v. The large-sample properties of \hat q_T are determined by the ergodic properties of the Markov chain.
28 The Normalized Variance
The rare-event efficiency (p small) is determined by the normalized variance:
p^2\, \mathrm{Var}_{F_A}\Big(\frac{v(X)}{f(X)}\, I\{X \in A\}\Big) = p^2\Big(E_{F_A}\Big[\Big(\frac{v(X)}{f(X)}\, I\{X \in A\}\Big)^2\Big] - \frac{1}{p^2}\Big)
= p^2\Big(\int_A \frac{v^2(x)}{f^2(x)}\,\frac{f(x)}{p}\,dx - \frac{1}{p^2}\Big)
= p\int_A \frac{v^2(x)}{f(x)}\,dx - 1.
29 The Optimal Choice of v
It is optimal to take v(x) = f(x)/p on A. Indeed, then
p^2\, \mathrm{Var}_{F_A}\Big(\frac{v(X)}{f(X)}\, I\{X \in A\}\Big) = p\int_A \frac{v^2(x)}{f(x)}\,dx - 1 = p\int_A \frac{f(x)}{p^2}\,dx - 1 = 0.
Insight: take v as an approximation of the conditional density given the rare event.
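A toy sketch of the recipe: run a chain with stationary distribution F_A, average v/f along it, and invert. The Pareto target, the Metropolis chain in log-scale, the deliberately suboptimal v, and all parameters are illustrative assumptions, not from the slides:

```python
import math, random

random.seed(4)

# MCMC estimate of 1/p for p = P{X > b} with X ~ Pareto(2), so p = b**-2.
# The chain is a random-walk Metropolis sampler in y = log(x) whose
# stationary law is F_A, the law of X given {X > b}.  v is taken as the
# Pareto(3) density conditioned on (b, inf): a deliberately suboptimal
# finite-variance choice (the optimal v = f/p would give zero variance).
# All model and parameter choices are illustrative assumptions.
b, T, step = 10.0, 200_000, 1.0
log_b = math.log(b)

def log_target(y):
    """Log-density (up to a constant) of log X given X > b: -2y on (log b, inf)."""
    return -2.0 * y if y > log_b else -math.inf

y = log_b + 1.0                        # start inside the event
acc = 0.0
for _ in range(T):
    y_prop = y + random.gauss(0.0, step)
    if random.random() < math.exp(min(0.0, log_target(y_prop) - log_target(y))):
        y = y_prop
    x = math.exp(y)
    acc += 3.0 * b**3 / (2.0 * x)      # v(x)/f(x); the indicator is 1 on the chain

inv_p_hat = acc / T                    # estimates 1/p = b**2
p_hat = 1.0 / inv_p_hat
print(p_hat)                           # close to b**-2 = 0.01
```

Even with a suboptimal v the estimator is consistent; the choice of v only controls how much the normalized variance exceeds zero.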
31 Efficient Sampling for a Heavy-Tailed Random Walk
Efficient Sampling for a Heavy-Tailed Random Walk
Let \{Z_k\} be iid with density f. Put S_n = Z_1 + \cdots + Z_n and M_n = \max(Z_1, ..., Z_n). Compute P\{S_n > a_n\}.
Heavy-tailed assumption:
\lim_{n} \frac{P\{S_n > a_n\}}{P\{M_n > a_n\}} = 1
(includes subexponential distributions; see [3]).
32 Efficient Sampling for a Heavy-Tailed Random Walk
Gibbs Sampling of a Heavy-Tailed Random Walk
1. Start from a state with Z_1 > a_n.
2. Update the steps in a random order j_1, ..., j_n (uniformly, without replacement).
3. Update Z_{j_k} by sampling from P\{Z \in \cdot \mid Z + \sum_{i \ne j_k} Z_i > a_n\}.
The vector (Z_{t,1}, ..., Z_{t,n}) forms a uniformly ergodic Markov chain with stationary distribution
F_A(\cdot) = P\{(Z_1, ..., Z_n) \in \cdot \mid Z_1 + \cdots + Z_n > a_n\}.
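A sketch of this Gibbs sampler for Pareto(α) steps, with p recovered through the conditional-on-maximum choice of v, so that the estimator reduces to the sample frequency of {M_n > a_n} along the chain. The step distribution and all parameters are illustrative assumptions:

```python
import random

random.seed(5)

# Gibbs sampler targeting F_A, the law of (Z_1, ..., Z_n) given S_n > a_n,
# for Pareto(alpha) steps (an illustrative model; alpha, n, a_n, T are
# assumed parameters).  p = P{S_n > a_n} is recovered from the sample
# frequency of {M_n > a_n} via the conditional-on-maximum choice of v.
alpha, n, a_n, T = 2.0, 5, 100.0, 50_000

def cond_pareto(c):
    """Sample Z ~ Pareto(alpha) conditioned on Z > c (unconditional if c <= 1)."""
    return max(c, 1.0) * random.paretovariate(alpha)

z = [1.0] * n
z[0] = a_n + 1.0                       # start inside the event {S_n > a_n}
hits = 0
for _ in range(T):
    for j in random.sample(range(n), n):        # random scan, without replacement
        rest = sum(z) - z[j]
        z[j] = cond_pareto(a_n - rest)          # full conditional keeps S_n > a_n
    hits += max(z) > a_n

p_max = 1.0 - (1.0 - a_n ** (-alpha)) ** n      # P{M_n > a_n}, exact for iid Pareto
p_hat = p_max * T / hits                        # p_hat = 1 / q_hat
print(p_hat)
```

The full conditional of one step given the others is just the step distribution conditioned on exceeding a_n minus the sum of the others, which is exactly sampleable for Pareto tails.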
33 Efficient Sampling for a Heavy-Tailed Random Walk
Rare-Event Efficiency: The Choice of v
Take v to be the density of P\{(Z_1, ..., Z_n) \in \cdot \mid M_n > a_n\}. Then
\frac{v(z_1, ..., z_n)}{f(z_1, ..., z_n)} = \frac{I\{\max_{i \le n} z_i > a_n\}}{P\{M_n > a_n\}}.
Rare-event efficiency:
p_n^2\, \mathrm{Var}_{F_A}\Big(\frac{v(Z_1, ..., Z_n)}{f(Z_1, ..., Z_n)}\, I\{Z_1 + \cdots + Z_n > a_n\}\Big)
= \frac{P\{S_n > a_n\}^2}{P\{M_n > a_n\}^2}\, P\{M_n > a_n \mid S_n > a_n\}\, P\{M_n \le a_n \mid S_n > a_n\}
= \frac{P\{S_n > a_n\}}{P\{M_n > a_n\}}\Big(1 - \frac{P\{M_n > a_n\}}{P\{S_n > a_n\}}\Big) \to 0.
35 Efficient Sampling for a Heavy-Tailed Random Walk
Numerical illustration: n = 5, α = 2, a_n = an.

b = 25, T = 10^5, α = 2, n = 5, a = 5, p_max = 0.737e-2
                          MCMC     IS       MC
  Avg. est.               …e-2     …e-2     …e-2
  Std. dev.               3e-5     9e-5     27e-5
  Avg. time per batch (s) …        …        …

b = 25, T = 10^5, α = 2, n = 5, a = 20, p_max = 4.901e-4
                          MCMC     IS       MC
  Avg. est.               …e-4     …e-4     …e-4
  Std. dev.               6e-7     13e-7    770e-7
  Avg. time per batch (s) …        …        …

b = 20, T = 10^5, α = 2, n = 5, a = 10^3, p_max = …e-7
                          MCMC     IS
  Avg. est.               …e-7     …e-7
  Std. dev.               3e-11    20e-11
  Avg. time per batch (s) …        …

b = 20, T = 10^5, α = 2, n = 5, a = 10^4, p_max = …e-9
                          MCMC     IS
  Avg. est.               …e-9     …e-9
  Std. dev.               7e-…     …e-14
  Avg. time per batch (s) …        …
36 Efficient Sampling for a Heavy-Tailed Random Walk
Numerical illustration: n = 20, α = 2, a_n = an.

b = 25, T = 10^5, α = 2, n = 20, a = 20, p_max = …e-4
                          MCMC     IS       MC
  Avg. est.               …e-4     …e-4     …e-4
  Std. dev.               2e-7     3e-7     492e-7
  Avg. time per batch (s) …        …        …

b = 25, T = 10^5, α = 2, n = 20, a = 200, p_max = …e-6
                          MCMC     IS       MC
  Avg. est.               …e-6     …e-6     …e-6
  Std. dev.               4e-10    12e-10   33,166e-10
  Avg. time per batch (s) …        …        …

b = 20, T = 10^5, α = 2, n = 20, a = 10^3, p_max = …e-8
                          MCMC     IS
  Avg. est.               …e-8     …e-8
  Std. dev.               7e-12    66e-12
  Avg. time per batch (s) …        …

b = 20, T = 10^5, α = 2, n = 20, a = 10^4, p_max = …e-10
                          MCMC     IS
  Avg. est.               …e-10    …e-10
  Std. dev.               2e-14    71e-14
  Avg. time per batch (s) …        …
38 Efficient Sampling of Heavy-Tailed Random Sums
Efficient Sampling for a Heavy-Tailed Random Sum
Let \{Z_k\} be iid with density f. Put S_n = Z_1 + \cdots + Z_n, and let \{N_n\} be integer-valued random variables independent of \{Z_k\}. Compute P\{S_{N_n} > a_n\}.
Heavy-tailed assumption:⁴
\lim_{n} \frac{P\{S_{N_n} > a_n\}}{P\{M_{N_n} > a_n\}} = 1.
⁴ See e.g. [6] for examples.
39 Efficient Sampling of Heavy-Tailed Random Sums
Constructing the Gibbs Sampler
Suppose we are in state (N_t, Z_{t,1}, ..., Z_{t,N_t}) = (k_t, z_{t,1}, ..., z_{t,k_t}). Let k_t^* = \min\{j : z_{t,1} + \cdots + z_{t,j} > a_n\}.
1. Update the number of steps N_{t+1} from the distribution
p(k_{t+1} \mid k_t^*) = P\{N = k_{t+1} \mid N \ge k_t^*\}.
2. If k_{t+1} > k_t, sample Z_{t+1,k_t+1}, ..., Z_{t+1,k_{t+1}} independently from F_Z.
3. Proceed by updating all steps as before.
4. Permute the steps.
40 Efficient Sampling of Heavy-Tailed Random Sums
Properties of the Algorithm
The Markov chain \{(N_t, Z_{t,1}, ..., Z_{t,N_t})\} is uniformly ergodic with stationary distribution
F_A(\cdot) = P\{(N, Z_1, ..., Z_N) \in \cdot \mid Z_1 + \cdots + Z_N > a_n\}.
With v as the density of P\{(N, Z_1, ..., Z_N) \in \cdot \mid M_N > a_n\} we have
\frac{v(k, z_1, ..., z_k)}{f(k, z_1, ..., z_k)} = \frac{I\{\max(z_1, ..., z_k) > a_n\}}{P\{M_N > a_n\}} = \frac{I\{\max(z_1, ..., z_k) > a_n\}}{1 - g_N(F_Z(a_n))},
where g_N is the probability generating function of N. The algorithm has vanishing normalized variance.
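A sketch of the random-sum sampler with a Geometric(ρ) number of steps and Pareto steps (model and parameters are illustrative assumptions; for geometric N, the update N | N ≥ k* is again geometric by memorylessness, and shrinking to k_new ≥ k* keeps the sum above a_n):

```python
import math, random

random.seed(6)

# Sketch of the random-sum Gibbs sampler: N ~ Geometric(rho) on {1, 2, ...}
# and Z_i ~ Pareto(alpha); p = P{S_N > a_n}.  The model and all parameter
# values are illustrative assumptions, not values from the slides.
alpha, rho, a_n, T = 1.5, 0.2, 200.0, 50_000

def cond_pareto(c):
    """Sample Z ~ Pareto(alpha) conditioned on Z > c (unconditional if c <= 1)."""
    return max(c, 1.0) * random.paretovariate(alpha)

def geometric(rho):
    """Geometric on {1, 2, ...} with success probability rho."""
    return 1 + int(math.log(1.0 - random.random()) / math.log(1.0 - rho))

z = [a_n + 1.0]                        # start: N = 1, Z_1 > a_n
hits = 0
for _ in range(T):
    s, k_star = 0.0, len(z)            # k* = first index whose prefix sum > a_n
    for j, zj in enumerate(z, start=1):
        s += zj
        if s > a_n:
            k_star = j
            break
    k_new = k_star - 1 + geometric(rho)             # N | N >= k*, memorylessness
    if k_new > len(z):                              # grow: fresh steps from F_Z
        z.extend(random.paretovariate(alpha) for _ in range(k_new - len(z)))
    else:                                           # shrink: keep the first k_new
        z = z[:k_new]
    for j in random.sample(range(len(z)), len(z)):  # update every step
        rest = sum(z) - z[j]
        z[j] = cond_pareto(a_n - rest)
    random.shuffle(z)                               # permute the steps
    hits += max(z) > a_n

F_Za = 1.0 - a_n ** (-alpha)                        # F_Z(a_n)
p_max = 1.0 - rho * F_Za / (1.0 - (1.0 - rho) * F_Za)   # 1 - g_N(F_Z(a_n))
p_hat = p_max * T / hits
print(p_hat)
```

Here P{M_N ≤ a_n} = g_N(F_Z(a_n)) with g_N(s) = ρs/(1-(1-ρ)s), so the normalizing constant in v/f is available in closed form.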
41 Efficient Sampling of Heavy-Tailed Random Sums
Numerical illustration: Geometric(ρ) number of steps, ρ = 0.2, α = 1, a_n = a/ρ.

b = 25, T = 10^5, α = 1, ρ = 0.2, a = 10^2, p_max = 0.990e-2
                          MCMC     IS       MC
  Avg. est.               …e-2     …e-2     …e-2
  Std. dev.               4e-5     6e-5     35e-5
  Avg. time per batch (s) …        …        …

b = 25, T = 10^5, α = 1, ρ = 0.2, a = 10^3, p_max = 0.999e-3
                          MCMC     IS       MC
  Avg. est.               …e-3     …e-3     …e-3
  Std. dev.               1e-6     3e-6     76e-6
  Avg. time per batch (s) …        …        …

b = 20, T = 10^6, α = 1, ρ = 0.2, a = …, p_max = …e-8
                          MCMC     IS
  Avg. est.               …e-8     …e-8
  Std. dev.               6e-…     …e-14
  Avg. time per batch (s) …        …

b = 20, T = 10^6, α = 1, ρ = 0.2, a = …, p_max = …e-10
                          MCMC     IS
  Avg. est.               …e-10    …e-10
  Std. dev.               0        13e-14
  Avg. time per batch (s) …        …
42 Efficient Sampling of Heavy-Tailed Random Sums
Numerical illustration: Geometric(ρ) number of steps, ρ = 0.05, α = 1, a_n = a/ρ.

b = 25, T = 10^5, α = 1, ρ = 0.05, a = 10^3, p_max = 0.999e-3
                          MCMC     IS       MC
  Avg. est.               …e-3     …e-3     …e-3
  Std. dev.               1e-6     4e-6     105e-6
  Avg. time per batch (s) …        …        …

b = 25, T = 10^5, α = 1, ρ = 0.05, a = …, p_max = …e-6
                          MCMC     IS       MC
  Avg. est.               …e-6     …e-6     …e-6
  Std. dev.               1e-10    53e-10   55,678e-10
  Avg. time per batch (s) …        …        …
43 Appendix
For Further Reading
[1] S. Asmussen and P.W. Glynn. Stochastic Simulation: Algorithms and Analysis. Springer-Verlag, New York, 2007.
[2] S. Resnick. Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer, New York, 2007.
[3] D. Denisov, A.B. Dieker, and V. Shneer. Large deviations for random walks under subexponentiality: the big-jump domain. Ann. Probab. 36(5), 2008.
[4] P. Dupuis, K. Leder, and H. Wang. Importance sampling for sums of random variables with regularly varying tails. ACM Trans. Model. Comput. Simul. 17(3), 2007.
[5] T. Gudmundsson and H. Hult. Markov chain Monte Carlo for computing rare-event probabilities for a heavy-tailed random walk.
[6] C. Klüppelberg and T. Mikosch. Large deviations for heavy-tailed random sums with applications to insurance and finance. J. Appl. Probab. 37(2), 1997.
MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models
More informationSums of dependent lognormal random variables: asymptotics and simulation
Sums of dependent lognormal random variables: asymptotics and simulation Søren Asmussen Leonardo Rojas-Nandayapa Department of Theoretical Statistics, Department of Mathematical Sciences Aarhus University,
More informationABSTRACT 1 INTRODUCTION. Jose Blanchet. Henry Lam. Boston University 111 Cummington Street Boston, MA 02215, USA
Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. RARE EVENT SIMULATION TECHNIQUES Jose Blanchet Columbia University 500 W 120th
More informationGeometric ρ-mixing property of the interarrival times of a stationary Markovian Arrival Process
Author manuscript, published in "Journal of Applied Probability 50, 2 (2013) 598-601" Geometric ρ-mixing property of the interarrival times of a stationary Markovian Arrival Process L. Hervé and J. Ledoux
More informationRare Events in Random Walks and Queueing Networks in the Presence of Heavy-Tailed Distributions
Rare Events in Random Walks and Queueing Networks in the Presence of Heavy-Tailed Distributions Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk The University of
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationMarkov Chains and MCMC
Markov Chains and MCMC CompSci 590.02 Instructor: AshwinMachanavajjhala Lecture 4 : 590.02 Spring 13 1 Recap: Monte Carlo Method If U is a universe of items, and G is a subset satisfying some property,
More informationStat 451 Lecture Notes Markov Chain Monte Carlo. Ryan Martin UIC
Stat 451 Lecture Notes 07 12 Markov Chain Monte Carlo Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapters 8 9 in Givens & Hoeting, Chapters 25 27 in Lange 2 Updated: April 4, 2016 1 / 42 Outline
More informationSampling Algorithms for Probabilistic Graphical models
Sampling Algorithms for Probabilistic Graphical models Vibhav Gogate University of Washington References: Chapter 12 of Probabilistic Graphical models: Principles and Techniques by Daphne Koller and Nir
More informationRobert Collins CSE586, PSU Intro to Sampling Methods
Robert Collins Intro to Sampling Methods CSE586 Computer Vision II Penn State Univ Robert Collins A Brief Overview of Sampling Monte Carlo Integration Sampling and Expected Values Inverse Transform Sampling
More informationE cient Simulation and Conditional Functional Limit Theorems for Ruinous Heavy-tailed Random Walks
E cient Simulation and Conditional Functional Limit Theorems for Ruinous Heavy-tailed Random Walks Jose Blanchet and Jingchen Liu y June 14, 21 Abstract The contribution of this paper is to introduce change
More informationMonte Carlo Methods. Geoff Gordon February 9, 2006
Monte Carlo Methods Geoff Gordon ggordon@cs.cmu.edu February 9, 2006 Numerical integration problem 5 4 3 f(x,y) 2 1 1 0 0.5 0 X 0.5 1 1 0.8 0.6 0.4 Y 0.2 0 0.2 0.4 0.6 0.8 1 x X f(x)dx Used for: function
More informationUniversity of Toronto Department of Statistics
Norm Comparisons for Data Augmentation by James P. Hobert Department of Statistics University of Florida and Jeffrey S. Rosenthal Department of Statistics University of Toronto Technical Report No. 0704
More informationLecture 8: The Metropolis-Hastings Algorithm
30.10.2008 What we have seen last time: Gibbs sampler Key idea: Generate a Markov chain by updating the component of (X 1,..., X p ) in turn by drawing from the full conditionals: X (t) j Two drawbacks:
More informationIntroduction to Computational Biology Lecture # 14: MCMC - Markov Chain Monte Carlo
Introduction to Computational Biology Lecture # 14: MCMC - Markov Chain Monte Carlo Assaf Weiner Tuesday, March 13, 2007 1 Introduction Today we will return to the motif finding problem, in lecture 10
More informationBayesian Estimation with Sparse Grids
Bayesian Estimation with Sparse Grids Kenneth L. Judd and Thomas M. Mertens Institute on Computational Economics August 7, 27 / 48 Outline Introduction 2 Sparse grids Construction Integration with sparse
More informationStatistical inference on Lévy processes
Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline
More informationExtreme Value Analysis and Spatial Extremes
Extreme Value Analysis and Department of Statistics Purdue University 11/07/2013 Outline Motivation 1 Motivation 2 Extreme Value Theorem and 3 Bayesian Hierarchical Models Copula Models Max-stable Models
More informationMonte Carlo Methods. Handbook of. University ofqueensland. Thomas Taimre. Zdravko I. Botev. Dirk P. Kroese. Universite de Montreal
Handbook of Monte Carlo Methods Dirk P. Kroese University ofqueensland Thomas Taimre University ofqueensland Zdravko I. Botev Universite de Montreal A JOHN WILEY & SONS, INC., PUBLICATION Preface Acknowledgments
More informationComputer Intensive Methods in Mathematical Statistics
Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 5 Sequential Monte Carlo methods I 31 March 2017 Computer Intensive Methods (1) Plan of today s lecture
More informationCompeting sources of variance reduction in parallel replica Monte Carlo, and optimization in the low temperature limit
Competing sources of variance reduction in parallel replica Monte Carlo, and optimization in the low temperature limit Paul Dupuis Division of Applied Mathematics Brown University IPAM (J. Doll, M. Snarski,
More informationSampling from complex probability distributions
Sampling from complex probability distributions Louis J. M. Aslett (louis.aslett@durham.ac.uk) Department of Mathematical Sciences Durham University UTOPIAE Training School II 4 July 2017 1/37 Motivation
More informationMCMC and Gibbs Sampling. Sargur Srihari
MCMC and Gibbs Sampling Sargur srihari@cedar.buffalo.edu 1 Topics 1. Markov Chain Monte Carlo 2. Markov Chains 3. Gibbs Sampling 4. Basic Metropolis Algorithm 5. Metropolis-Hastings Algorithm 6. Slice
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation
More informationSome Results on the Ergodicity of Adaptive MCMC Algorithms
Some Results on the Ergodicity of Adaptive MCMC Algorithms Omar Khalil Supervisor: Jeffrey Rosenthal September 2, 2011 1 Contents 1 Andrieu-Moulines 4 2 Roberts-Rosenthal 7 3 Atchadé and Fort 8 4 Relationship
More informationLect4: Exact Sampling Techniques and MCMC Convergence Analysis
Lect4: Exact Sampling Techniques and MCMC Convergence Analysis. Exact sampling. Convergence analysis of MCMC. First-hit time analysis for MCMC--ways to analyze the proposals. Outline of the Module Definitions
More informationAsymptotics of random sums of heavy-tailed negatively dependent random variables with applications
Asymptotics of random sums of heavy-tailed negatively dependent random variables with applications Remigijus Leipus (with Yang Yang, Yuebao Wang, Jonas Šiaulys) CIRM, Luminy, April 26-30, 2010 1. Preliminaries
More informationCheng Soon Ong & Christian Walder. Canberra February June 2018
Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 Outlines Overview Introduction Linear Algebra Probability Linear Regression
More information19 : Slice Sampling and HMC
10-708: Probabilistic Graphical Models 10-708, Spring 2018 19 : Slice Sampling and HMC Lecturer: Kayhan Batmanghelich Scribes: Boxiang Lyu 1 MCMC (Auxiliary Variables Methods) In inference, we are often
More informationBayesian Estimation of Input Output Tables for Russia
Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian
More informationPractical unbiased Monte Carlo for Uncertainty Quantification
Practical unbiased Monte Carlo for Uncertainty Quantification Sergios Agapiou Department of Statistics, University of Warwick MiR@W day: Uncertainty in Complex Computer Models, 2nd February 2015, University
More informationAdaptive Monte Carlo methods
Adaptive Monte Carlo methods Jean-Michel Marin Projet Select, INRIA Futurs, Université Paris-Sud joint with Randal Douc (École Polytechnique), Arnaud Guillin (Université de Marseille) and Christian Robert
More informationAsymptotic behavior for sums of non-identically distributed random variables
Appl. Math. J. Chinese Univ. 2019, 34(1: 45-54 Asymptotic behavior for sums of non-identically distributed random variables YU Chang-jun 1 CHENG Dong-ya 2,3 Abstract. For any given positive integer m,
More informationAdvanced Monte Carlo integration methods. P. Del Moral (INRIA team ALEA) INRIA & Bordeaux Mathematical Institute & X CMAP
Advanced Monte Carlo integration methods P. Del Moral (INRIA team ALEA) INRIA & Bordeaux Mathematical Institute & X CMAP MCQMC 2012, Sydney, Sunday Tutorial 12-th 2012 Some hyper-refs Feynman-Kac formulae,
More informationA quick introduction to Markov chains and Markov chain Monte Carlo (revised version)
A quick introduction to Markov chains and Markov chain Monte Carlo (revised version) Rasmus Waagepetersen Institute of Mathematical Sciences Aalborg University 1 Introduction These notes are intended to
More informationExample: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected
4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X
More informationPractical conditions on Markov chains for weak convergence of tail empirical processes
Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,
More informationProbabilistic Graphical Models Lecture 17: Markov chain Monte Carlo
Probabilistic Graphical Models Lecture 17: Markov chain Monte Carlo Andrew Gordon Wilson www.cs.cmu.edu/~andrewgw Carnegie Mellon University March 18, 2015 1 / 45 Resources and Attribution Image credits,
More informationStatistical Machine Learning Lecture 8: Markov Chain Monte Carlo Sampling
1 / 27 Statistical Machine Learning Lecture 8: Markov Chain Monte Carlo Sampling Melih Kandemir Özyeğin University, İstanbul, Turkey 2 / 27 Monte Carlo Integration The big question : Evaluate E p(z) [f(z)]
More information