Monte Carlo Methods. Jesús Fernández-Villaverde University of Pennsylvania


1 Monte Carlo Methods
Jesús Fernández-Villaverde, University of Pennsylvania

2 Why Monte Carlo?
From the previous chapter, we want to compute:
1. The posterior distribution:
   π(θ | Y^T, i) = f(Y^T | θ, i) π(θ | i) / ∫_{Θ_i} f(Y^T | θ, i) π(θ | i) dθ
2. The marginal likelihood:
   P(Y^T | i) = ∫_{Θ_i} f(Y^T | θ, i) π(θ | i) dθ
Both are difficult to assess analytically or even to approximate (Phillips, 1996). We resort to simulation.

3 A Bit of Historical Background and Intuition
Metropolis and Ulam (1949) and von Neumann (1951).
Why the name Monte Carlo?
Two silly examples:
1. The probability of getting a total of six points when rolling two (fair) dice.
2. Throwing darts at a graph.

4 Classical Monte Carlo Integration
Assume we know how to generate draws from π(θ | Y^T, i).
What does it mean to draw from π(θ | Y^T, i)?
Two basic questions:
1. Why do we want to do it?
2. How do we do it?

5 Why Do We Do It?
Basic intuition: the Glivenko-Cantelli Theorem.
Let X_1, ..., X_n be iid as X with distribution function F. Let ω be the outcome and F_n(x, ω) be the empirical distribution function based on the observations X_1(ω), ..., X_n(ω). Then, as n → ∞,
   sup_{−∞ < x < ∞} |F_n(x, ω) − F(x)| → 0 a.s.
It can be generalized to include dependence: A.W. van der Vaart and J.A. Wellner, Weak Convergence and Empirical Processes, Springer-Verlag, 1996.
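
The theorem is easy to check numerically. Below is a sketch in Python (the course's own scripts are in MATLAB and not included here); the U[0, 1] distribution is just an illustrative choice, for which F(x) = x and the sup-distance has a closed form in terms of the order statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_distance(n):
    """sup_x |F_n(x) - F(x)| for n draws from U[0,1], where F(x) = x."""
    x = np.sort(rng.uniform(size=n))
    # The empirical CDF jumps at the order statistics, so the sup is attained
    # just before or at one of them.
    i = np.arange(1, n + 1)
    return np.max(np.maximum(i / n - x, x - (i - 1) / n))

d_small = sup_distance(100)
d_large = sup_distance(100_000)
print(d_small, d_large)   # the sup-distance shrinks as n grows
```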

6 Basic Result
Let h(θ) be a function of interest: an indicator, a moment, etc. By the Law of Large Numbers:
   E_{π(·|Y^T,i)}[h(θ)] = ∫_{Θ_i} h(θ) π(θ | Y^T, i) dθ ≃ h_m = (1/m) Σ_{j=1}^m h(θ_j)
If Var_{π(·|Y^T,i)}[h(θ)] < ∞, by the Central Limit Theorem:
   Var_{π(·|Y^T,i)}[h_m] ≃ (1/m²) Σ_{j=1}^m (h(θ_j) − h_m)²
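
In code, with a hypothetical stand-in for the posterior (here N(0, 1)) and h(θ) = θ², so that E[h(θ)] = 1 is known exactly (a Python sketch; in practice the draws would come from the samplers developed below):

```python
import numpy as np

rng = np.random.default_rng(0)

m = 200_000
theta = rng.standard_normal(m)   # stand-in draws from pi(theta | Y^T, i)
h = theta**2                     # function of interest, E[theta^2] = 1 here

h_m = h.mean()                             # Monte Carlo estimate of E[h(theta)]
var_h_m = ((h - h_m) ** 2).sum() / m**2    # estimated variance of h_m itself

print(h_m, var_h_m)
```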

7 How Do We Do It?
Large literature. Two good surveys:
1. Luc Devroye, Non-Uniform Random Variate Generation, Springer-Verlag, 1986. Available for free on the author's website.
2. Christian Robert and George Casella, Monte Carlo Statistical Methods, 2nd ed., Springer-Verlag, 2004.

8 Random Draws?
Natural sources of randomness. Difficult to use.
A computer... but a computer is a deterministic machine!
Von Neumann (1951): "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."

9 Was Von Neumann Right?
Let us do a simple experiment.
Let us start MATLAB, type format long, and then type rand.
Did we all get the same number? This does not look terribly random.
Why is this number appearing?

10 Basic Building Block
MATLAB uses highly non-linear iterative algorithms that look random. That is why we sometimes talk of pseudo-random number generators.
We will concentrate on draws from a uniform distribution. Other (standard and nonstandard) distributions come from manipulations of the uniform.

11 Goal
Derive algorithms that, starting from some initial value and applying iterative methods, produce a sequence that (Lehmer, 1951):
1. Is unpredictable for the uninitiated (relation with chaotic dynamical systems).
2. Passes a battery of standard statistical tests of randomness (like the Kolmogorov-Smirnov test, ARMA(p,q) fits, etc.).

12 Basic Component
Multiplicative Congruential Generator:
   x_i = (a x_{i−1} + b) mod (M + 1)
x_i takes values on {0, 1, ..., M}. Transformation into a generator on [0, 1] with:
   ũ_i = x_i / (M + 1)
x_0 is called the seed.

13 Choices of Parameters
Period and performance depend crucially on a, b, and M.
Pick a = 13, b = 0, M = 31, and x_0 = 1. Let us run badrandom.m.
Historical bad examples: IBM's RND from the 1960s.
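
The script badrandom.m is not included here, but the point it makes is easy to reproduce. A minimal Python sketch of the congruential generator with the bad parameters above (a = 13, b = 0, M = 31, x_0 = 1) shows that the sequence cycles long before exhausting the 32 possible states:

```python
def lcg(a, b, modulus, seed, n):
    """Congruential generator x_i = (a*x_{i-1} + b) mod modulus,
    mapped into [0, 1) by dividing by the modulus."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + b) % modulus
        out.append(x / modulus)
    return out

# Bad parameters: modulus = M + 1 = 32, a = 13, b = 0, seed = 1.
draws = lcg(a=13, b=0, modulus=32, seed=1, n=64)
period = draws[1:].index(draws[0]) + 1
print(period)   # 8: the cycle length is far below the modulus 32
```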

14 A Good Choice
A traditional choice: a = 7^5 = 16807, b = 0, M = 2^31 − 1.
Period bounded by M. 32-bit versus 64-bit hardware.
You may want to be aware that there is something called the IEEE standard for floating-point arithmetic.
Problems and alternatives.

15 Real Life
You do not want to code your own random number generator.
MATLAB implements the state of the art: KISS by Marsaglia and Zaman (1991).
What about a compiler in Fortran or C++?

16 Nonuniform Distributions
In general, we need something different from the uniform distribution.
How do we move from a uniform draw to other distributions?
Simple transformations for standard distributions. Foundations of commercial software.

17 Two Basic Approaches
1. Use transformations.
2. Use the inverse method.
Why are these approaches attractive?

18 An Example of Transformations I: Normal Distributions
Box and Muller (1958). Let U_1 and U_2 be two independent uniform variables; then:
   x = cos(2πU_1) (−2 log U_2)^{0.5}
   y = sin(2πU_1) (−2 log U_2)^{0.5}
are independent standard normal variables.
Problem: x and y fall in a spiral.
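
A minimal sketch of the Box-Muller transform (in Python; here the uniforms come from a modern generator, so the spiral pathology of old congruential generators does not appear):

```python
import numpy as np

rng = np.random.default_rng(0)

def box_muller(n):
    """Transform n pairs of uniforms into n pairs of independent N(0,1) draws."""
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    r = np.sqrt(-2.0 * np.log(u2))
    return r * np.cos(2 * np.pi * u1), r * np.sin(2 * np.pi * u1)

x, y = box_muller(200_000)
# x and y should each look standard normal and be uncorrelated.
print(x.mean(), x.std(), np.corrcoef(x, y)[0, 1])
```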

19 An Example of Transformations II: Multivariate Normal Distributions
If x ∼ N(0, I), then
   y = µ + Σx
is distributed as N(µ, ΣΣ′).
Σ can be the Cholesky decomposition of the matrix of variances-covariances.
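
A Python sketch of this transformation, with a hypothetical mean vector and variance-covariance matrix chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])           # hypothetical mean
Sigma = np.array([[2.0, 0.6],        # hypothetical variance-covariance matrix
                  [0.6, 1.0]])

L = np.linalg.cholesky(Sigma)        # lower-triangular factor, L @ L.T = Sigma
x = rng.standard_normal((200_000, 2))
y = mu + x @ L.T                     # each row is a draw from N(mu, Sigma)

print(y.mean(axis=0))
print(np.cov(y.T))                   # should be close to Sigma
```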

20 An Example of Transformations III: Discrete Uniform
We want to draw from x ∼ U{1, ..., N}. Find 1/N.
Draw u from U[0, 1]. If u ∈ [0, 1/N), x = 1; if u ∈ [1/N, 2/N), x = 2; and so on.
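
A sketch in Python; flooring u·N is equivalent to checking which interval [(k−1)/N, k/N) the uniform draw falls into:

```python
import numpy as np

rng = np.random.default_rng(0)

def discrete_uniform(n_draws, N):
    """Draw from U{1,...,N}: u in [(k-1)/N, k/N) maps to k."""
    u = rng.uniform(size=n_draws)
    return np.floor(u * N).astype(int) + 1

draws = discrete_uniform(60_000, 6)          # e.g., rolling a fair die
counts = np.bincount(draws, minlength=7)[1:]
print(counts / len(draws))                   # each frequency near 1/6
```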

21 Inverse Method
A conceptually general procedure for random variables on ℝ.
For a non-decreasing function F on ℝ, the generalized inverse of F, F⁻, is the function
   F⁻(u) = inf {x : F(x) ≥ u}
Lemma: If U ∼ U[0, 1], then the random variable F⁻(U) has the distribution F.

22 Proof:
For u ∈ [0, 1] and x ∈ F⁻([0, 1]), we satisfy:
   F(F⁻(u)) ≥ u and F⁻(F(x)) ≤ x
Therefore
   {(u, x) : F⁻(u) ≤ x} = {(u, x) : F(x) ≥ u}
and
   P(F⁻(U) ≤ x) = P(U ≤ F(x)) = F(x)

23 An Example of the Inverse Method
Exponential distribution: x ∼ Exp(1). F(x) = 1 − e^{−x}, so x = −log(1 − u).
Thus, X = −log U is exponential if U is uniform.
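
The exponential example in two lines of Python (using −log U directly, since 1 − U is itself uniform):

```python
import numpy as np

rng = np.random.default_rng(0)

u = rng.uniform(size=200_000)
x = -np.log(u)            # inverse method: F^-(u) = -log(1 - u), and 1-U ~ U[0,1] too

print(x.mean(), x.var())  # Exp(1) has mean 1 and variance 1
```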

24 Problems
Algebraic tricks and the inverse method are difficult to generalize. Why?
Complicated, multivariate distributions.
Often, we only have a numerical procedure to evaluate the distribution we want to sample from.
We need more general methods.

25 Acceptance Sampling
θ ∼ π(θ | Y^T, i) with support C. π(θ | Y^T, i) is called the target density.
z ∼ g(z) with support C′ ⊇ C. g is called the source density.
We require:
1. We know how to draw from g.
2. Condition:
   sup_{θ ∈ C} π(θ | Y^T, i) / g(θ) = a < ∞

26 Acceptance Sampling Algorithm
Steps:
1. Draw u ∼ U(0, 1).
2. Draw θ* ∼ g.
3. If u > π(θ* | Y^T, i) / (a g(θ*)), return to step 1.
4. Otherwise, set θ_m = θ*.
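
The four steps above can be sketched as follows (in Python; the target here is a hypothetical Beta(2,2) density with a uniform source, chosen only so the bound a = sup π/g = 1.5 is available in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

def target(theta):
    """Hypothetical target density: Beta(2,2) on [0,1], i.e. 6*theta*(1-theta)."""
    return 6.0 * theta * (1.0 - theta)

a = 1.5   # sup target/g = target(0.5)/1 = 1.5 for the source g = U(0,1)

def acceptance_sample(m):
    draws = []
    while len(draws) < m:
        u = rng.uniform()                       # step 1
        theta_star = rng.uniform()              # step 2: draw from the source g
        if u > target(theta_star) / (a * 1.0):  # step 3: reject, back to step 1
            continue
        draws.append(theta_star)                # step 4: accept
    return np.array(draws)

draws = acceptance_sample(100_000)
print(draws.mean())   # Beta(2,2) has mean 0.5
```

Note that roughly a fraction 1/a of the proposals is accepted, which is why a small a is desirable.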

27 Why Does Acceptance Sampling Work?
Unconditional probability of moving from step 3 to step 4:
   ∫_{C′} [π(θ | Y^T, i) / (a g(θ))] g(θ) dθ = (1/a) ∫_{C′} π(θ | Y^T, i) dθ = 1/a
Unconditional probability of moving from step 3 to step 4 when θ ∈ A:
   ∫_A [π(θ | Y^T, i) / (a g(θ))] g(θ) dθ = (1/a) ∫_A π(θ | Y^T, i) dθ
Dividing both expressions:
   [(1/a) ∫_A π(θ | Y^T, i) dθ] / (1/a) = ∫_A π(θ | Y^T, i) dθ
so accepted draws are distributed according to the target.

28 An Example
Target density:
   π(θ | Y^T, i) ∝ exp(−θ²/2) [sin(6θ)² + 3 cos(θ)² sin(4θ)² + 1]
Source density:
   g(θ) = (2π)^{−0.5} exp(−θ²/2)
Let's take a look: acceptance.m.

29 Problems of Acceptance Sampling
Three issues:
1. We disregard a lot of draws. We want to minimize a. How?
2. We need to check that π/g is bounded. Necessary condition: g has thicker tails than the target.
3. We need to evaluate the bound a. Difficult to do.
Can we do better? Yes, through importance sampling.

30 Importance Sampling I
Similar framework to acceptance sampling:
1. θ ∼ π(θ | Y^T, i) with support C. π(θ | Y^T, i) is called the target density.
2. z ∼ g(z) with support C′ ⊇ C. g is called the source density.
Note that we can write:
   E_{π(·|Y^T,i)}[h(θ)] = ∫_{Θ_i} h(θ) π(θ | Y^T, i) dθ = ∫_{Θ_i} h(θ) [π(θ | Y^T, i) / g(θ)] g(θ) dθ

31 Importance Sampling II
If E_{π(·|Y^T,i)}[h(θ)] exists, a Law of Large Numbers holds:
   ∫_{Θ_i} h(θ) [π(θ | Y^T, i) / g(θ)] g(θ) dθ ≃ h^I_m = (1/m) Σ_{j=1}^m h(θ_j) π(θ_j | Y^T, i) / g(θ_j)
and E_{π(·|Y^T,i)}[h(θ)] ≃ h^I_m, where {θ_j}_{j=1}^m are draws from g(θ) and the ratios π(θ_j | Y^T, i)/g(θ_j) are the importance sampling weights.
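
A minimal Python sketch of this estimator, with a hypothetical N(0, 1) target, a wider N(0, 2) source, and h(θ) = θ², so the true value E[h(θ)] = 1 is known:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

m = 200_000
theta = 2.0 * rng.standard_normal(m)              # draws from the source g = N(0,2)
w = norm_pdf(theta, 1.0) / norm_pdf(theta, 2.0)   # importance weights pi/g

h_I_m = (theta**2 * w).mean()   # estimates E_pi[theta^2] = 1
print(h_I_m)
```

The over-dispersed source keeps the weights bounded, exactly the thick-tails condition discussed for acceptance sampling.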

32 Importance Sampling III
If E_{π(θ|Y^T,i)}[π(θ | Y^T, i) / g(θ)] exists, a Central Limit Theorem applies (see Geweke, 1989):
   m^{1/2} (h^I_m − E_{π(·|Y^T,i)}[h(θ)]) → N(0, σ²)
   σ² ≃ (1/m) Σ_{j=1}^m (h(θ_j) − h^I_m)² [π(θ_j | Y^T, i) / g(θ_j)]²
where, again, {θ_j}_{j=1}^m are draws from g(θ).

33 Importance Sampling IV
Notice that:
   σ² ≃ (1/m) Σ_{j=1}^m (h(θ_j) − h^I_m)² [π(θ_j | Y^T, i) / g(θ_j)]²
Therefore, we want π(θ | Y^T, i) / g(θ) to be almost flat.

34 Importance Sampling V
Intuition: σ² is minimized when π(θ | Y^T, i) = g(θ), i.e., when we are drawing from π(θ | Y^T, i) itself.
Hint: we can use as g(θ) the first terms of a Taylor approximation to π(θ | Y^T, i).
How do we compute the Taylor approximation?

35 Conditions for the Existence of E_{π(θ|Y^T,i)}[π(θ | Y^T, i) / g(θ)]
This has to be checked analytically.
A simple condition: π(θ | Y^T, i) / g(θ) has to be bounded.
Sometimes, we label ω(θ | Y^T, i) = π(θ | Y^T, i) / g(θ).

36 Normalizing Factor I
Assume we do not know the normalizing constants for π(θ | Y^T, i) and g(θ).
Let us call the unnormalized densities π̃(θ | Y^T, i) and g̃(θ). Then:
   E_{π(·|Y^T,i)}[h(θ)] = ∫_{Θ_i} h(θ) π̃(θ | Y^T, i) dθ / ∫_{Θ_i} π̃(θ | Y^T, i) dθ
   = [∫_{Θ_i} h(θ) (π̃(θ | Y^T, i)/g̃(θ)) g̃(θ) dθ] / [∫_{Θ_i} (π̃(θ | Y^T, i)/g̃(θ)) g̃(θ) dθ]

37 Normalizing Factor II
Consequently:
   h^I_m = [(1/m) Σ_{j=1}^m h(θ_j) π(θ_j | Y^T, i)/g(θ_j)] / [(1/m) Σ_{j=1}^m π(θ_j | Y^T, i)/g(θ_j)]
   = Σ_{j=1}^m h(θ_j) ω(θ_j | Y^T, i) / Σ_{j=1}^m ω(θ_j | Y^T, i)
and:
   σ² ≃ m Σ_{j=1}^m (h(θ_j) − h^I_m)² ω(θ_j | Y^T, i)² / (Σ_{j=1}^m ω(θ_j | Y^T, i))²
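
A Python sketch of this self-normalized estimator. The constants of both densities are deliberately dropped (an N(0, 1) kernel as a hypothetical target, an N(0, 2) kernel as source, h(θ) = θ²), so only unnormalized kernels enter the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

m = 200_000
theta = 2.0 * rng.standard_normal(m)                   # draws from g = N(0,2)

# Unnormalized kernels: target ~ exp(-theta^2/2), source ~ exp(-theta^2/8).
omega = np.exp(-theta**2 / 2) / np.exp(-theta**2 / 8)  # weights from the kernels

h = theta**2
h_I_m = (h * omega).sum() / omega.sum()                # self-normalized estimate of 1
sigma2 = m * ((h - h_I_m) ** 2 * omega**2).sum() / omega.sum() ** 2

print(h_I_m, sigma2)
```

Dividing by the sum of the weights replaces the two unknown normalizing constants, at the cost of a slight bias in finite samples.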

38 The Importance of the Behavior of ω(θ_j | Y^T, i): Example I
Assume that we know π(θ_j | Y^T, i) = t_ν.
But we do not know how to draw from it. Instead, we draw from N(0, 1). Why?
Let's run normalt.m.

39 Evaluate the mean of t_ν.
Draw {θ_j}_{j=1}^m from N(0, 1). Let t_ν(θ_j)/φ(θ_j) = ω(θ_j). Evaluate:
   mean = Σ_{j=1}^m θ_j ω(θ_j) / m

40 Evaluate the variance of the estimated mean of t_ν.
Compute:
   var_est_mean = Σ_{j=1}^m (θ_j − mean)² ω(θ_j)² / m
Note the difference between:
1. The variance of a function of interest.
2. The variance of the computed mean of the function of interest.
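
The scripts normalt.m and importancenormal.m are not included, but the computation can be sketched in Python. Since the t_ν target has thicker tails than the N(0, 1) source, the weights ω(θ_j) = t_ν(θ_j)/φ(θ_j) are unbounded, which is exactly the pathology this example is meant to expose:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

nu = 3.0

def t_pdf(x):
    """Student-t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + x**2 / nu) ** (-(nu + 1) / 2)

def norm_pdf(x):
    return np.exp(-0.5 * x**2) / math.sqrt(2 * math.pi)

m = 100_000
theta = rng.standard_normal(m)          # draws from the source N(0,1)
omega = t_pdf(theta) / norm_pdf(theta)  # weights blow up in the tails

mean_est = (theta * omega).sum() / m                       # slide 39 formula
var_est_mean = ((theta - mean_est) ** 2 * omega**2).sum() / m   # slide 40 formula

print(mean_est, omega.max())   # t_nu has mean 0, but a few weights are huge
```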

41 Estimation of the Mean of t_ν: importancenormal.m
(Table: ν, estimated mean, estimated variance of the estimated mean.)

42 The Importance of the Behavior of ω(θ_j | Y^T, i): Example II
The opposite case from before.
Assume that we know π(θ_j | Y^T, i) = N(0, 1).
But we do not know how to draw from it. Instead, we draw from t_ν.

43 Estimation of the Mean of N(0, 1): importancet.m
(Table: t_ν, estimated mean, estimated variance of the estimated mean.)

44 A Procedure to Check How Good the Importance Sampling Function Is
This procedure is due to Geweke. It is called Relative Numerical Efficiency (RNE).
First notice that if g(θ) = π(θ | Y^T, i), we have:
   σ² ≃ (1/m) Σ_{j=1}^m (h(θ_j) − h^I_m)² [π(θ_j | Y^T, i)/g(θ_j)]²
   = (1/m) Σ_{j=1}^m (h(θ_j) − h^I_m)² ≃ Var_{π(·|Y^T,i)}[h(θ)]

45 A Procedure to Check How Good the Importance Sampling Function Is II
Therefore, for a given g(θ), the RNE is:
   RNE = Var_{π(·|Y^T,i)}[h(θ)] / σ²
If the RNE is close to 1, the importance sampling procedure is working properly.
If the RNE is very low, close to 0, the procedure is not working properly.
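
A Python sketch of the RNE for a hypothetical N(0, 1) target with h(θ) = θ, so Var_π[h(θ)] = 1 is known analytically; we compare a perfect source (g = π) against an over-dispersed N(0, 3) source:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

m = 200_000
var_h = 1.0   # Var_pi[h(theta)] for the N(0,1) target and h(theta) = theta

def rne(source_sd):
    theta = source_sd * rng.standard_normal(m)          # draws from g = N(0, sd)
    w = norm_pdf(theta, 1.0) / norm_pdf(theta, source_sd)
    h_I_m = (theta * w).mean()
    sigma2 = ((theta - h_I_m) ** 2 * w**2).mean()       # slide 32 formula
    return var_h / sigma2

rne_good = rne(1.0)   # source equals target: RNE near 1
rne_poor = rne(3.0)   # mismatched source: RNE below 1
print(rne_good, rne_poor)
```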

46 Estimation of the Mean of t_ν (table: t_ν, RNE).
Estimation of the Mean of N(0, 1) (table: t_ν, RNE).

47 Importance Sampling and Robustness of Priors
Priors are researcher-specific.
Imagine researchers 1 and 2 are working with the same model, i.e., with the same likelihood function, f(Y^T | θ, 1) = f(Y^T | θ, 2). (Now 1 and 2 do not index different models but different researchers.)
But they have different priors, π(θ | 1) ≠ π(θ | 2).
Imagine that researcher 1 has draws {θ_j}_{j=1}^N from her posterior distribution π(θ | Y^T, 1).

48 A Simple Manipulation
If researcher 2 wants to compute ∫_{Θ_i} h(θ) π(θ | Y^T, 2) dθ for any h(θ), he does not need to recompute everything. Note that:
   ∫_{Θ_i} h(θ) π(θ | Y^T, 2) dθ
   = [∫_{Θ_i} h(θ) (f(Y^T | θ, 2) π(θ | 2))/(f(Y^T | θ, 1) π(θ | 1)) π(θ | Y^T, 1) dθ] / [∫_{Θ_i} (f(Y^T | θ, 2) π(θ | 2))/(f(Y^T | θ, 1) π(θ | 1)) π(θ | Y^T, 1) dθ]
   = [∫_{Θ_i} h(θ) (π(θ | 2)/π(θ | 1)) π(θ | Y^T, 1) dθ] / [∫_{Θ_i} (π(θ | 2)/π(θ | 1)) π(θ | Y^T, 1) dθ]
where the last step uses the equality of the likelihoods.

49 Importance Sampling
Then:
   [(1/m) Σ_{j=1}^m h(θ_j) π(θ_j | 2)/π(θ_j | 1)] / [(1/m) Σ_{j=1}^m π(θ_j | 2)/π(θ_j | 1)]
   = Σ_{j=1}^m h(θ_j) π(θ_j | 2)/π(θ_j | 1) / Σ_{j=1}^m π(θ_j | 2)/π(θ_j | 1)
   → ∫_{Θ_i} h(θ) π(θ | Y^T, 2) dθ
Simple computation. Increased variance.
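
A Python sketch of this prior reweighting, with an entirely hypothetical setup: researcher 1's posterior is taken to be N(0, 1) (so we can draw from it directly), both priors are wide normals differing only in their means, and h(θ) = θ. With these choices the reweighted posterior is N(0.01, 1), which gives an analytic check:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

theta = rng.standard_normal(200_000)     # draws from pi(theta | Y^T, 1) = N(0,1)

prior_1 = norm_pdf(theta, 0.0, 10.0)     # researcher 1's prior (hypothetical)
prior_2 = norm_pdf(theta, 1.0, 10.0)     # researcher 2's prior, shifted mean

w = prior_2 / prior_1                    # reweighting ratio pi(theta|2)/pi(theta|1)
post_mean_2 = (theta * w).sum() / w.sum()  # researcher 2's posterior mean of theta

print(post_mean_2)   # analytically 0.01: a slight pull toward prior 2's mean
```

Because the priors are diffuse, the weights are nearly flat and the variance penalty is small; sharply different priors would make the weights erratic, which is the "increased variance" warning above.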


More information

Random Number Generators - a brief assessment of those available

Random Number Generators - a brief assessment of those available Random Number Generators - a brief assessment of those available Anna Mills March 30, 2003 1 Introduction Nothing in nature is random...a thing appears random only through the incompleteness of our knowledge.

More information

Session 3A: Markov chain Monte Carlo (MCMC)

Session 3A: Markov chain Monte Carlo (MCMC) Session 3A: Markov chain Monte Carlo (MCMC) John Geweke Bayesian Econometrics and its Applications August 15, 2012 ohn Geweke Bayesian Econometrics and its Session Applications 3A: Markov () chain Monte

More information

How does the computer generate observations from various distributions specified after input analysis?

How does the computer generate observations from various distributions specified after input analysis? 1 How does the computer generate observations from various distributions specified after input analysis? There are two main components to the generation of observations from probability distributions.

More information

Monte Carlo Integration

Monte Carlo Integration Monte Carlo Integration SCX5005 Simulação de Sistemas Complexos II Marcelo S. Lauretto www.each.usp.br/lauretto Reference: Robert CP, Casella G. Introducing Monte Carlo Methods with R. Springer, 2010.

More information

Lecture 2 : CS6205 Advanced Modeling and Simulation

Lecture 2 : CS6205 Advanced Modeling and Simulation Lecture 2 : CS6205 Advanced Modeling and Simulation Lee Hwee Kuan 21 Aug. 2013 For the purpose of learning stochastic simulations for the first time. We shall only consider probabilities on finite discrete

More information

Multivariate Simulations

Multivariate Simulations Multivariate Simulations Katarína Starinská starinskak@gmail.com Charles University Faculty of Mathematics and Physics Prague, Czech Republic 21.11.2011 Katarína Starinská Multivariate Simulations 1 /

More information

Sampling exactly from the normal distribution

Sampling exactly from the normal distribution 1 Sampling exactly from the normal distribution Charles Karney charles.karney@sri.com SRI International AofA 2017, Princeton, June 20, 2017 Background: In my day job, I ve needed normal (Gaussian) deviates

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Numerical methods for lattice field theory

Numerical methods for lattice field theory Numerical methods for lattice field theory Mike Peardon Trinity College Dublin August 9, 2007 Mike Peardon (Trinity College Dublin) Numerical methods for lattice field theory August 9, 2007 1 / 37 Numerical

More information

BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA

BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA Intro: Course Outline and Brief Intro to Marina Vannucci Rice University, USA PASI-CIMAT 04/28-30/2010 Marina Vannucci

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee 1 and Andrew O. Finley 2 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department

More information

Monte Carlo Methods. Part I: Introduction

Monte Carlo Methods. Part I: Introduction Monte Carlo Methods Part I: Introduction Spring Semester 2013/14 Department of Applied Mathematics, University of Crete Instructor: Harmandaris Vagelis, email: vagelis@tem.uoc.gr Course web page: http://www.tem.uoc.gr/~vagelis/courses/em385/

More information

Physics 403 Monte Carlo Techniques

Physics 403 Monte Carlo Techniques Physics 403 Monte Carlo Techniques Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Simulation and Random Number Generation Simulation of Physical Systems Creating

More information

Monte Carlo Techniques

Monte Carlo Techniques Physics 75.502 Part III: Monte Carlo Methods 40 Monte Carlo Techniques Monte Carlo refers to any procedure that makes use of random numbers. Monte Carlo methods are used in: Simulation of natural phenomena

More information

Probability Rules. MATH 130, Elements of Statistics I. J. Robert Buchanan. Fall Department of Mathematics

Probability Rules. MATH 130, Elements of Statistics I. J. Robert Buchanan. Fall Department of Mathematics Probability Rules MATH 130, Elements of Statistics I J. Robert Buchanan Department of Mathematics Fall 2018 Introduction Probability is a measure of the likelihood of the occurrence of a certain behavior

More information

Econ 2148, spring 2019 Statistical decision theory

Econ 2148, spring 2019 Statistical decision theory Econ 2148, spring 2019 Statistical decision theory Maximilian Kasy Department of Economics, Harvard University 1 / 53 Takeaways for this part of class 1. A general framework to think about what makes a

More information

MATH20602 Numerical Analysis 1

MATH20602 Numerical Analysis 1 M\cr NA Manchester Numerical Analysis MATH20602 Numerical Analysis 1 Martin Lotz School of Mathematics The University of Manchester Manchester, January 27, 2014 Outline General Course Information Introduction

More information

Today: Fundamentals of Monte Carlo

Today: Fundamentals of Monte Carlo Today: Fundamentals of Monte Carlo What is Monte Carlo? Named at Los Alamos in 1940 s after the casino. Any method which uses (pseudo)random numbers as an essential part of the algorithm. Stochastic -

More information

Simulation of truncated normal variables. Christian P. Robert LSTA, Université Pierre et Marie Curie, Paris

Simulation of truncated normal variables. Christian P. Robert LSTA, Université Pierre et Marie Curie, Paris Simulation of truncated normal variables Christian P. Robert LSTA, Université Pierre et Marie Curie, Paris Abstract arxiv:0907.4010v1 [stat.co] 23 Jul 2009 We provide in this paper simulation algorithms

More information

Iterative Markov Chain Monte Carlo Computation of Reference Priors and Minimax Risk

Iterative Markov Chain Monte Carlo Computation of Reference Priors and Minimax Risk Iterative Markov Chain Monte Carlo Computation of Reference Priors and Minimax Risk John Lafferty School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 lafferty@cs.cmu.edu Abstract

More information

(I AL BL 2 )z t = (I CL)ζ t, where

(I AL BL 2 )z t = (I CL)ζ t, where ECO 513 Fall 2011 MIDTERM EXAM The exam lasts 90 minutes. Answer all three questions. (1 Consider this model: x t = 1.2x t 1.62x t 2 +.2y t 1.23y t 2 + ε t.7ε t 1.9ν t 1 (1 [ εt y t = 1.4y t 1.62y t 2

More information

Basic Sampling Methods

Basic Sampling Methods Basic Sampling Methods Sargur Srihari srihari@cedar.buffalo.edu 1 1. Motivation Topics Intractability in ML How sampling can help 2. Ancestral Sampling Using BNs 3. Transforming a Uniform Distribution

More information

ICES REPORT Model Misspecification and Plausibility

ICES REPORT Model Misspecification and Plausibility ICES REPORT 14-21 August 2014 Model Misspecification and Plausibility by Kathryn Farrell and J. Tinsley Odena The Institute for Computational Engineering and Sciences The University of Texas at Austin

More information

Gibbs Sampling in Linear Models #2

Gibbs Sampling in Linear Models #2 Gibbs Sampling in Linear Models #2 Econ 690 Purdue University Outline 1 Linear Regression Model with a Changepoint Example with Temperature Data 2 The Seemingly Unrelated Regressions Model 3 Gibbs sampling

More information

Random numbers and generators

Random numbers and generators Chapter 2 Random numbers and generators Random numbers can be generated experimentally, like throwing dice or from radioactive decay measurements. In numerical calculations one needs, however, huge set

More information

Metropolis Hastings. Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601. Module 9

Metropolis Hastings. Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601. Module 9 Metropolis Hastings Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601 Module 9 1 The Metropolis-Hastings algorithm is a general term for a family of Markov chain simulation methods

More information

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline. Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,

More information

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008 Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

More information

Generating Random Variables

Generating Random Variables Generating Random Variables These slides are created by Dr. Yih Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make a single machine-readable copy and print a single

More information

Model comparison. Christopher A. Sims Princeton University October 18, 2016

Model comparison. Christopher A. Sims Princeton University October 18, 2016 ECO 513 Fall 2008 Model comparison Christopher A. Sims Princeton University sims@princeton.edu October 18, 2016 c 2016 by Christopher A. Sims. This document may be reproduced for educational and research

More information

Other Noninformative Priors

Other Noninformative Priors Other Noninformative Priors Other methods for noninformative priors include Bernardo s reference prior, which seeks a prior that will maximize the discrepancy between the prior and the posterior and minimize

More information

Bayesian parameter estimation in predictive engineering

Bayesian parameter estimation in predictive engineering Bayesian parameter estimation in predictive engineering Damon McDougall Institute for Computational Engineering and Sciences, UT Austin 14th August 2014 1/27 Motivation Understand physical phenomena Observations

More information

Mean Intensity. Same units as I ν : J/m 2 /s/hz/sr (ergs/cm 2 /s/hz/sr) Function of position (and time), but not direction

Mean Intensity. Same units as I ν : J/m 2 /s/hz/sr (ergs/cm 2 /s/hz/sr) Function of position (and time), but not direction MCRT: L5 MCRT estimators for mean intensity, absorbed radiation, radiation pressure, etc Path length sampling: mean intensity, fluence rate, number of absorptions Random number generators J Mean Intensity

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information