Hochdimensionale Integration


1 Hochdimensionale Integration
Oliver Ernst, Institut für Numerische Mathematik und Optimierung
Biweekly lecture in the winter semester 2010/11, given as part of the module Ausgewählte Kapitel der Numerik (Selected Topics in Numerics)

2 Contents
1. Introduction
  1.1 An Example
  1.2 A Selection of Strategies
2. Monte Carlo Integration
  2.1 Convergence and Accuracy
  2.2 Sampling Methods
  2.3 Variance Reduction Methods
3. Sparse Grids
4. Quasi-Monte Carlo Integration
5. Extensions
  5.1 ANOVA Decomposition
  5.2 Concentration of Measure
Oliver Ernst (TU Freiberg), Hochdimensionale Integration, Wintersemester 2010/11


5 The Integral as an Expected Value

Let $x$ be a random variable which is uniformly distributed on $[0,1]$. Here $(\Omega, \mathcal{A}, P)$ is a probability space and $x : \Omega \to \mathbb{R}$ a measurable function with (cumulative) distribution function

$$F(\xi) := P(\{x(\omega) \le \xi\}) = \begin{cases} 0, & \xi < 0, \\ \xi, & 0 \le \xi \le 1, \\ 1, & \xi > 1, \end{cases}$$

and (probability) density (function) $p(\xi) = I_{[0,1]}(\xi)$.

6 The Integral as an Expected Value

For any $f \in L^1(0,1)$ the expectation (expected value) of the random variable $f(x(\omega))$ is given by

$$E[f(x)] = \int_\Omega f(x(\omega)) \, dP(\omega) = \int_0^1 f(x) \, dx.$$

In the same way, for a $d$-dimensional random vector $x : \Omega \to [0,1]^d$, uniformly distributed on $[0,1]^d$, and a function $f \in L^1([0,1]^d)$, we have

$$E[f(x)] = \int_{[0,1]^d} f(x) \, dx.$$

For an ensemble $\{x_j = x(\omega_j)\}_{j=1}^N$ sampled from the uniform distribution on $[0,1]^d$, an empirical estimate of the expectation is

$$E[f(x)] \approx Q_N(f) := \frac{1}{N} \sum_{j=1}^N f(x_j).$$
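The estimator $Q_N(f)$ can be sketched in a few lines. This is an illustrative Python translation (the lecture itself uses MATLAB); function names and the test integrand are my own choices, not from the slides.

```python
import random

def mc_integrate(f, d, N, rng=None):
    """Monte Carlo estimate Q_N(f) = (1/N) * sum_j f(x_j) over [0,1]^d."""
    rng = rng if rng is not None else random.Random(0)
    total = 0.0
    for _ in range(N):
        x = [rng.random() for _ in range(d)]  # one uniform draw from [0,1]^d
        total += f(x)
    return total / N

# f(x) = x_1 * x_2 * x_3 on [0,1]^3 has exact integral I(f) = (1/2)^3 = 1/8.
q = mc_integrate(lambda x: x[0] * x[1] * x[2], d=3, N=100_000)
```

Note that the cost of one sample grows only linearly in $d$, which is the reason Monte Carlo remains usable in high dimensions.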

7 Convergence of Monte Carlo Integration

Theorem 1 (Strong Law of Large Numbers)
Let $\{X_n\}_{n \in \mathbb{N}}$ be a sequence of mutually independent, identically distributed random variables with finite expected value $\mu := E[X_n]$. Then the arithmetic averages of the partial sums $S_n := X_1 + \dots + X_n$ converge to $\mu$ almost surely, i.e.,

$$P\left(\left\{ \frac{S_n}{n} \to \mu \right\}\right) = 1.$$

Consequence: $Q_N(f) \to I(f)$ almost surely as $N \to \infty$.

Moreover, $E[Q_N(f)] = I(f)$ for all $N$, i.e., $Q_N(f)$ is an unbiased estimator of $I(f)$.

8 Convergence Rate of Monte Carlo Integration

Theorem 2 (Central Limit Theorem)
Let $\{X_n\}_{n \in \mathbb{N}}$ be a sequence of mutually independent, identically distributed random variables with finite expected value $\mu := E[X_n]$ and variance $\sigma^2 := \mathrm{Var}[X_n]$. Then the sequence of random variables

$$\frac{S_n - n\mu}{\sigma \sqrt{n}}, \qquad n \in \mathbb{N}, \qquad S_n := X_1 + \dots + X_n,$$

converges in distribution to the standard normal distribution, i.e., for all $x \in \mathbb{R}$ there holds

$$\lim_{n \to \infty} P\left(\left\{ \frac{S_n - n\mu}{\sigma \sqrt{n}} < x \right\}\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt =: \Phi(x).$$

9 Convergence Rate of Monte Carlo Integration

Consequence: denoting the MC error by

$$\epsilon_N = \epsilon_N(f) := I(f) - Q_N(f),$$

we see that for $N$ sufficiently large

$$\epsilon_N \approx \frac{\sigma}{\sqrt{N}} \, \xi, \qquad \sigma = \sigma(f), \quad \xi \sim N(0,1).$$

More precisely, for $a, b \in \mathbb{R}$, $a < b$, we have

$$\lim_{N \to \infty} P\left( a < \frac{\sqrt{N}}{\sigma} \epsilon_N < b \right) = P(a < \xi < b) = \Phi(b) - \Phi(a).$$

Note: this is a probabilistic result; there are no deterministic bounds on the MC error, only the statement that it is of a certain size with some probability.

10 Convergence Rate of Monte Carlo Integration

error: $\epsilon_N(f) := I(f) - Q_N(f)$
bias: $E[\epsilon_N(f)]$
root mean-square error (RMSE): $\sqrt{E[\epsilon_N(f)^2]}$

For random variables $\{x_j\}$ independent and uniformly distributed in $[0,1]^d$:

$$\sqrt{E[\epsilon_N(f)^2]} = \sigma N^{-1/2}.$$

11 Convergence Rate of Monte Carlo Integration
Sample Size for Given Accuracy

Probabilistic error bound: $\epsilon_N \approx \frac{\sigma}{\sqrt{N}} \xi$, $\xi \sim N(0,1)$.

For a standard normal RV $\xi$ and $\epsilon > 0$,

$$P(|\xi| < \epsilon) = \Phi(\epsilon) - \Phi(-\epsilon) = 2\Phi(\epsilon) - 1 = \mathrm{erf}\left(\frac{\epsilon}{\sqrt{2}}\right),$$

where the error function erf is defined by

$$\mathrm{erf}(x) := \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \, dt.$$

For what value of the sample size $N$ is

$$P\left( \left| \frac{\sigma}{\sqrt{N}} \, \xi \right| < \epsilon \right) \ge c$$

for a given confidence level $c \in [0,1]$?

12 Convergence Rate of Monte Carlo Integration
Confidence Levels

Answer:

$$N \ge \frac{\sigma^2}{\epsilon^2} \, s(c)^2,$$

where $s(c)$ is defined by

$$c = 2\Phi(s) - 1 = \mathrm{erf}\left(\frac{s}{\sqrt{2}}\right).$$

[Figure: the map $x \mapsto 2\Phi(x) - 1$ with the level $c$ and the abscissa $s(c)$ marked, alongside the standard normal density.]
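The sample-size rule above is easy to evaluate: since $c = 2\Phi(s) - 1$, we have $s(c) = \Phi^{-1}((1+c)/2)$. A small Python sketch (my own helper, not from the slides), using the standard-library normal quantile function:

```python
from math import ceil
from statistics import NormalDist

def sample_size(sigma, eps, c):
    """Smallest N with N >= (sigma/eps)^2 * s(c)^2, where s(c) solves
    c = 2*Phi(s) - 1, i.e. s(c) = Phi^{-1}((1 + c)/2)."""
    s = NormalDist().inv_cdf((1.0 + c) / 2.0)
    return ceil((sigma / eps) ** 2 * s ** 2)

# c = 0.95 gives s(c) ~ 1.96; halving eps roughly quadruples the required N.
N95 = sample_size(sigma=1.0, eps=0.01, c=0.95)
```

The quadratic growth of $N$ in $\sigma/\epsilon$ is exactly the $O(N^{-1/2})$ convergence rate read backwards: one more decimal digit of accuracy costs a factor of 100 in samples.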


14 Sampling Methods

"A random sequence is a vague notion ... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians ..."
D. H. Lehmer, Berkeley, 1951

15 Sampling Methods
Random Number Generators

Random number generators (RNGs) are computer programs which generate a sequence of random numbers that can be taken to be uniformly distributed on $[0,1]$. These sequences are not truly random but deterministic (pseudo-random); they do, however, pass certain statistical tests which verify properties of random number sequences.

References:
Donald Knuth. The Art of Computer Programming. Vol. II, Chapter 3.
Cleve Moler. Numerical Computing with MATLAB. Chapter 9.

We will view the RNG as a black box. Fixing the internal state of the RNG will reproduce the same sequence of numbers. MATLAB: rand. Note: MATLAB now has three different RNG algorithms to choose from.

16 Sampling Methods
Inversion Method

If $X$ is a random variable with cdf $F$, i.e., $P(X \le x) = F(x)$, then $F$ is nondecreasing and we may define its (generalized) inverse function for all $u \in (0,1)$ by

$$F^{-1}(u) := \inf\{x \in \mathbb{R} : F(x) \ge u\}.$$

If $U \sim U[0,1]$, then $X := F^{-1}(U)$ has cdf $F$.

Consequence: if the evaluation of $F^{-1}$ is computationally feasible, then samples $\{x_n\}$ of the random variable $X$ may be generated from samples $\{u_n\}$ of a uniformly distributed random variable $U \sim U[0,1]$ as $x_n = F^{-1}(u_n)$.
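As an illustration of the inversion method (the distribution and code are my own example, not from the slides): for the exponential distribution $\mathrm{Exp}(\lambda)$ we have $F(x) = 1 - e^{-\lambda x}$, so $F^{-1}(u) = -\log(1-u)/\lambda$ in closed form.

```python
import math
import random

def sample_exponential(lam, N, rng=None):
    """Inversion method for Exp(lam): F(x) = 1 - exp(-lam*x) for x >= 0,
    hence x_n = F^{-1}(u_n) = -log(1 - u_n)/lam for uniform u_n."""
    rng = rng if rng is not None else random.Random(0)
    # rng.random() lies in [0, 1), so 1 - u > 0 and the log is well defined
    return [-math.log(1.0 - rng.random()) / lam for _ in range(N)]

xs = sample_exponential(lam=2.0, N=200_000)
mean = sum(xs) / len(xs)  # should be close to 1/lam = 0.5
```

When $F^{-1}$ has no closed form (as for the Gaussian below), the inversion method requires a numerical approximation of the quantile function.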

17 Sampling Methods
Example: Univariate Gaussians

If $X \sim N(0,1)$, then

$$F(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt = \frac{1}{2}\left( 1 + \mathrm{erf}\left( \frac{x}{\sqrt{2}} \right) \right).$$

Samples of $X$ may thus be obtained as

$$X = \sqrt{2} \, \mathrm{erf}^{-1}(2U - 1), \qquad U \sim U[0,1].$$

18 Sampling Methods
Example: Box-Muller Transform

Given two independent uniformly distributed RVs $U_1, U_2 \sim U[0,1]$, define

$$X_1 := \sqrt{-2 \log U_1} \, \cos(2\pi U_2), \qquad (1a)$$
$$X_2 := \sqrt{-2 \log U_1} \, \sin(2\pi U_2). \qquad (1b)$$

Claim: $X_1, X_2 \sim N(0,1)$, independent.

Polar coordinates: $(x_1, x_2) = (r \cos\vartheta, r \sin\vartheta)$. Transformed density:

$$\frac{1}{2\pi} e^{-(x_1^2 + x_2^2)/2} \, dx_1 \, dx_2 = \frac{1}{2\pi} e^{-r^2/2} \, r \, dr \, d\vartheta.$$

19 Sampling Methods
Example: Box-Muller Transform

Angular variable: $U_2 := \vartheta/(2\pi) \sim U[0,1]$.

Radial variable: $r$ has density $r e^{-r^2/2}$ with cdf

$$F(r) = \int_0^r \rho e^{-\rho^2/2} \, d\rho = 1 - e^{-r^2/2}$$

and inverse function

$$r = F^{-1}(u) = \sqrt{-2 \log(1 - u)}.$$

Noting that $U_1 \sim U[0,1]$ implies $1 - U_1 \sim U[0,1]$, we obtain (1).
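The two formulas (1a), (1b) translate directly into code. A minimal Python sketch (illustrative; the lecture does not prescribe an implementation), which uses $1-U_1$ in place of $U_1$ as justified above to keep the logarithm finite:

```python
import math
import random

def box_muller(N, rng=None):
    """Generate 2N independent N(0,1) samples from N pairs (U1, U2) via
    X1 = sqrt(-2 log U1) cos(2 pi U2), X2 = sqrt(-2 log U1) sin(2 pi U2)."""
    rng = rng if rng is not None else random.Random(0)
    out = []
    for _ in range(N):
        u1, u2 = rng.random(), rng.random()
        # radial part r = F^{-1}(u1); using 1 - u1 avoids log(0) since u1 < 1
        r = math.sqrt(-2.0 * math.log(1.0 - u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out

zs = box_muller(100_000)
m = sum(zs) / len(zs)                  # sample mean, near 0
v = sum(z * z for z in zs) / len(zs)   # sample second moment, near 1
```

Each uniform pair yields two independent Gaussians, so the method wastes no random numbers.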

20 Sampling Methods
Acceptance-Rejection Method
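A minimal sketch of the standard acceptance-rejection scheme: to sample from a density $p$ dominated by $M q$ for a proposal density $q$ one can sample from, draw $Y \sim q$ and $U \sim U[0,1]$ and accept $Y$ when $U \le p(Y)/(M q(Y))$. The target (Beta(2,2) with uniform proposal and bound $M = 1.5$) is my own illustrative choice, not from the lecture.

```python
import random

def accept_reject(p, q_sample, q_pdf, M, N, rng=None):
    """Acceptance-rejection: assuming p(x) <= M * q(x) for all x,
    draw Y ~ q and U ~ U[0,1]; accept Y when U <= p(Y) / (M * q(Y))."""
    rng = rng if rng is not None else random.Random(0)
    out = []
    while len(out) < N:
        y = q_sample(rng)
        if rng.random() <= p(y) / (M * q_pdf(y)):
            out.append(y)
    return out

# Target: Beta(2,2) density p(x) = 6x(1-x) on [0,1]; proposal q = U[0,1].
# Since max p = 1.5, the bound M = 1.5 works (acceptance rate 1/M = 2/3).
xs = accept_reject(lambda x: 6.0 * x * (1.0 - x),
                   lambda rng: rng.random(), lambda x: 1.0,
                   M=1.5, N=100_000)
mean = sum(xs) / len(xs)  # Beta(2,2) has mean 1/2
```

The expected number of proposals per accepted sample is $M$, so a tight bound $M$ matters for efficiency.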


22 Variance Reduction

Monte Carlo integration: error $\epsilon$ and sample size $N$ are related by

$$\epsilon = O\left( \frac{\sigma}{\sqrt{N}} \right), \qquad \text{computing time} \sim N \approx \left( \frac{\sigma}{\epsilon} \right)^2.$$

Options for acceleration:
(1) reduce $\sigma$ (variance reduction);
(2) modify the statistics, i.e., replace the random sample sequence by an alternative sequence which improves the exponent $\tfrac{1}{2}$ (QMC).

23 Variance Reduction
Antithetic Variables

Idea: for each sample point $x$, also use $-x$. The resulting quadrature rule is

$$Q_N(f) = \frac{1}{2N} \sum_{n=1}^N \left[ f(x_n) + f(-x_n) \right].$$

Advantage: existing symmetries are preserved.

Example: computing $E[f(x)]$, $x \sim N(0, \sigma)$, $\sigma$ small. Setting $x = \sigma \hat{x}$, the Taylor expansion of $f(\sigma \hat{x})$ about $0$ is

$$f(x) = f(0) + f'(0) \, \sigma \hat{x} + O(\sigma^2).$$

Since the distribution of $\hat{x}$ is symmetric about zero, $E[\hat{x}] = 0$. For standard MC the first-order terms do not cancel, i.e., the MC error is $\sim \sigma$; here they cancel in each antithetic pair, and the MC error is $\sim \sigma^2$.
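The antithetic rule above can be sketched as follows (illustrative Python; the integrand $f = \exp$ and $\sigma = 0.1$ are my own example). For $x \sim N(0, \sigma)$ one has $E[e^x] = e^{\sigma^2/2}$, and each pair $f(x) + f(-x)$ cancels the odd, order-$\sigma$ term exactly.

```python
import math
import random

def antithetic_mc(f, sigma, N, rng=None):
    """Antithetic estimator (1/(2N)) * sum [f(x_n) + f(-x_n)]
    for E[f(x)] with x ~ N(0, sigma)."""
    rng = rng if rng is not None else random.Random(0)
    total = 0.0
    for _ in range(N):
        x = rng.gauss(0.0, sigma)   # each sample is paired with its mirror -x
        total += f(x) + f(-x)
    return total / (2 * N)

# Exact value: E[exp(x)] = exp(sigma^2 / 2).
q = antithetic_mc(math.exp, sigma=0.1, N=50_000)
```

Note that the pair average $(f(x) + f(-x))/2 = \cosh(x)$ already fluctuates only at order $\sigma^2$, which is the claimed gain.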

24 Variance Reduction
Control Variates

Idea: use an auxiliary integrand $g \approx f$ for which $I(g)$ is known, write

$$I(f) = I(f - g) + I(g),$$

and use MC to approximate $I(f - g)$. Resulting quadrature formula:

$$I(f) \approx Q_N(f) := \frac{1}{N} \sum_{n=1}^N \left[ f(x_n) - g(x_n) \right] + I(g).$$

Error:

$$\epsilon_N(f) = I(f) - Q_N(f) \approx \frac{\sigma_{f-g}}{\sqrt{N}},$$

where

$$\sigma_{f-g}^2 = I\left( \left[ (f - g) - I(f - g) \right]^2 \right) = I\left( (\tilde{f} - \tilde{g})^2 \right), \qquad \tilde{f}(x) := f(x) - I(f), \quad \tilde{g}(x) := g(x) - I(g).$$

Thus: improvement whenever $\sigma_{f-g} \ll \sigma_f$.

25 Variance Reduction
Control Variates

Optimal use employs a multiplier $\lambda$ such that

$$I(f) = I(f - \lambda g) + \lambda I(g).$$

The error of MC applied to the first integral is proportional to

$$\sigma_{f - \lambda g}^2 = I\left( \left[ \tilde{f}(x) - \lambda \tilde{g}(x) \right]^2 \right).$$

The optimal value of $\lambda$ is found by minimizing $\sigma_{f - \lambda g}$, which is obtained for

$$\lambda = \frac{I(\tilde{f} \tilde{g})}{I(\tilde{g}^2)}.$$
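A control-variate sketch in Python (illustrative choices, not from the slides): for $f(x) = e^x$ on $[0,1]$ with $I(f) = e - 1$, the truncated Taylor series $g(x) = 1 + x$ with known $I(g) = 3/2$ serves as the control; here the plain $\lambda = 1$ split is used.

```python
import math
import random

def control_variate_mc(f, g, I_g, N, lam=1.0, rng=None):
    """MC via I(f) = I(f - lam*g) + lam*I(g): only f - lam*g is sampled,
    the control term lam*I(g) is added exactly."""
    rng = rng if rng is not None else random.Random(0)
    total = 0.0
    for _ in range(N):
        x = rng.random()
        total += f(x) - lam * g(x)
    return total / N + lam * I_g

# f(x) = exp(x), I(f) = e - 1; control g(x) = 1 + x, I(g) = 3/2.
q = control_variate_mc(math.exp, lambda x: 1.0 + x, I_g=1.5, N=100_000)
```

Since $e^x - (1+x)$ varies much less over $[0,1]$ than $e^x$ itself, $\sigma_{f-g} < \sigma_f$ and the estimator is noticeably more accurate at the same $N$; estimating the optimal $\lambda$ from a pilot sample would reduce the variance further.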

26 Variance Reduction
Moment Matching

The MC error is due to statistical sampling error, i.e., the difference between the desired density $p$ and the empirical density associated with the sample points $\{x_n\}$. This is manifested, e.g., in deviating moments: in general,

$$m_1 := E[x] = \int x \, p(x) \, dx \ne \mu_1 := \frac{1}{N} \sum_{n=1}^N x_n,$$

$$m_2 := E[x^2] = \int x^2 \, p(x) \, dx \ne \mu_2 := \frac{1}{N} \sum_{n=1}^N x_n^2.$$

Partial correction: modify $\{x_n\}$ so that the empirical moments match. The first moment is matched by

$$x_n \mapsto \tilde{x}_n := (x_n - \mu_1) + m_1.$$

27 Variance Reduction
Moment Matching

The first two moments are matched by the modified sequence

$$x_n \mapsto \tilde{x}_n := \frac{x_n - \mu_1}{c} + m_1, \qquad c := \sqrt{\frac{\mu_2 - \mu_1^2}{m_2 - m_1^2}}.$$

Caution: the samples are no longer independent, which makes error estimates more difficult. The CLT is no longer applicable, and the method is possibly biased.
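The two-moment correction can be sketched directly (illustrative Python; the Gaussian test sample is my own choice). After the transformation the empirical first and second moments equal $m_1$ and $m_2$ exactly, up to floating-point rounding.

```python
import math
import random

def moment_match(xs, m1, m2):
    """Rescale samples so the first two empirical moments become m1 and m2:
    x_n -> (x_n - mu1)/c + m1 with c = sqrt((mu2 - mu1^2)/(m2 - m1^2))."""
    N = len(xs)
    mu1 = sum(xs) / N
    mu2 = sum(x * x for x in xs) / N
    c = math.sqrt((mu2 - mu1 ** 2) / (m2 - m1 ** 2))
    return [(x - mu1) / c + m1 for x in xs]

rng = random.Random(0)
xs = moment_match([rng.gauss(0.0, 1.0) for _ in range(1000)], m1=0.0, m2=1.0)
mean = sum(xs) / len(xs)                   # matches m1 = 0 exactly
second = sum(x * x for x in xs) / len(xs)  # matches m2 = 1 exactly
```

The caveat from the slide is visible here: every $\tilde{x}_n$ depends on all of $\mu_1, \mu_2$, i.e., on the whole sample, so independence is lost.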

28 Variance Reduction
Stratified Sampling

Idea: split the integration region $D = [0,1]$ into $M$ subregions

$$D_k = \left[ \frac{k-1}{M}, \frac{k}{M} \right], \qquad |D_k| = \frac{1}{M}, \qquad k = 1, \dots, M.$$

Local averages:

$$\bar{f}(x) := \bar{f}_k := \frac{1}{|D_k|} \int_{D_k} f(x) \, dx, \qquad x \in D_k.$$

For each $k$, sample $N_k := N/M$ points $\{x_i^{(k)}\}$ distributed uniformly in $D_k$. Stratified quadrature formula:

$$Q_N(f) := \frac{1}{N} \sum_{k=1}^M \sum_{i=1}^{N/M} f\left( x_i^{(k)} \right).$$

Error:

$$\epsilon \approx \frac{\sigma_s}{\sqrt{N}}, \qquad \sigma_s^2 = I\left( (f - \bar{f})^2 \right) = \sum_{k=1}^M \int_{D_k} \left( f(x) - \bar{f}_k \right)^2 dx.$$

29 Variance Reduction
Stratified Sampling

Claim: $\sigma_s \le \sigma$.

More generally: split $D$ into $M$ subregions $D_k$ such that $D = \bigcup_{k=1}^M D_k$. Take $N_k$ random samples on $D_k$ such that $\sum_{k=1}^M N_k = N$. In each $D_k$, choose $x_n^{(k)}$ distributed with density $p^{(k)}$, where

$$p^{(k)}(x) := \frac{p(x)}{p_k}, \qquad p_k := \int_{D_k} p(x) \, dx.$$

Stratified quadrature formula:

$$Q_N(f) := \sum_{k=1}^M \frac{p_k}{N_k} \sum_{n=1}^{N_k} f\left( x_n^{(k)} \right).$$

Error:

$$\epsilon_N(f) = I(f) - Q_N(f) = \sum_{k=1}^M \epsilon_{N_k}^{(k)}(f).$$

30 Variance Reduction
Stratified Sampling

Components:

$$\epsilon_{N_k}^{(k)}(f) \approx \sigma^{(k)} \sqrt{\frac{p_k}{N_k}},$$

with variances

$$\sigma^{(k)} = p_k^{1/2} \left( \int_{D_k} \left( f(x) - \bar{f}_k \right)^2 p^{(k)}(x) \, dx \right)^{1/2} = \left( \int_{D_k} \left( f(x) - \bar{f}_k \right)^2 p(x) \, dx \right)^{1/2}$$

and averages

$$\bar{f}_k := \frac{1}{p_k} \int_{D_k} f(x) \, p(x) \, dx.$$

One can show: stratification lowers the integration error if the balance condition

$$\frac{p_k}{N_k} = \frac{1}{N}, \qquad k = 1, \dots, M,$$

is satisfied.

31 Variance Reduction
Stratified Sampling

Resulting error:

$$\epsilon_N \approx \frac{\sigma_s}{\sqrt{N}}, \qquad \sigma_s^2 = \sum_{k=1}^M \left[ \sigma^{(k)} \right]^2.$$

Since the variance over a subdomain is never larger than the variance over the entire domain, we have $\sigma_s \le \sigma$.
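For the simple equal-width stratification of $[0,1]$ with proportional allocation $N_k = N/M$, the scheme can be sketched as follows (illustrative Python; the smooth integrand $f = \exp$ is my own example):

```python
import math
import random

def stratified_mc(f, M, N_per, rng=None):
    """Stratified MC on [0,1]: N_per uniform samples in each stratum
    D_k = [(k-1)/M, k/M]; Q_N(f) = (1/N) sum_k sum_i f(x_i^(k)), N = M*N_per."""
    rng = rng if rng is not None else random.Random(0)
    total = 0.0
    for k in range(M):
        for _ in range(N_per):
            total += f((k + rng.random()) / M)  # uniform draw in the k-th stratum
    return total / (M * N_per)

# f(x) = exp(x) with I(f) = e - 1; within each narrow stratum f is nearly
# constant, so sigma_s (and hence the error) is far below the plain-MC sigma.
q = stratified_mc(math.exp, M=100, N_per=100)
```

For smooth $f$ the within-stratum variation scales like $1/M$, so $\sigma_s$ shrinks as the strata are refined, in line with $\sigma_s \le \sigma$.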

32 Variance Reduction
Importance Sampling

Idea: to approximate $I(f)$, introduce a density $p$ and write

$$I(f) = \int f(x) \, dx = \int \frac{f(x)}{p(x)} \, p(x) \, dx,$$

sample points $\{x_n\}$ from the density $p$ and form the MC approximation

$$Q_N(f) = \frac{1}{N} \sum_{n=1}^N \frac{f(x_n)}{p(x_n)}.$$

Error:

$$\epsilon_N(f) = I(f) - Q_N(f) \approx \frac{\sigma_p}{\sqrt{N}},$$

where

$$\sigma_p^2 = \int \left( \frac{f(x)}{p(x)} - I(f) \right)^2 p(x) \, dx.$$

33 Variance Reduction
Importance Sampling

Remarks:
Effective when $f/p$ is nearly constant.
The most widely used MC variance reduction method.
One must be able to sample from the distribution with density $p$ (e.g., by acceptance-rejection).
Can be used for rare but important events, i.e., small regions of space where the integrand $f$ is large.
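An importance-sampling sketch (the integrand and density are my own illustrative choices, not from the slides): for $I(f) = \int_0^1 e^x \, dx = e - 1$, the density $p(x) = (1+x)/1.5$ mimics $e^x \approx 1 + x$, so $f/p$ is nearly constant; $p$ is sampled by inversion, since $F(x) = (x + x^2/2)/1.5$ gives $F^{-1}(u) = \sqrt{1 + 3u} - 1$.

```python
import math
import random

def importance_mc(f, p_pdf, p_sample, N, rng=None):
    """Importance sampling: Q_N(f) = (1/N) * sum f(x_n)/p(x_n),
    with the x_n drawn from the density p."""
    rng = rng if rng is not None else random.Random(0)
    return sum(f(x) / p_pdf(x) for x in (p_sample(rng) for _ in range(N))) / N

# I(f) = int_0^1 exp(x) dx = e - 1; p(x) = (1+x)/1.5 keeps f/p nearly
# constant, and p is sampled via F^{-1}(u) = sqrt(1 + 3u) - 1.
q = importance_mc(math.exp,
                  lambda x: (1.0 + x) / 1.5,
                  lambda rng: math.sqrt(1.0 + 3.0 * rng.random()) - 1.0,
                  N=100_000)
```

The ratio $f/p = 1.5 \, e^x/(1+x)$ varies only mildly over $[0,1]$, so $\sigma_p$ is well below the plain-MC $\sigma$, illustrating the first remark above.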


More information

The multigroup Monte Carlo method part 1

The multigroup Monte Carlo method part 1 The multigroup Monte Carlo method part 1 Alain Hébert alain.hebert@polymtl.ca Institut de génie nucléaire École Polytechnique de Montréal ENE6103: Week 11 The multigroup Monte Carlo method part 1 1/23

More information

EE4601 Communication Systems

EE4601 Communication Systems EE4601 Communication Systems Week 2 Review of Probability, Important Distributions 0 c 2011, Georgia Institute of Technology (lect2 1) Conditional Probability Consider a sample space that consists of two

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

Moments. Raw moment: February 25, 2014 Normalized / Standardized moment:

Moments. Raw moment: February 25, 2014 Normalized / Standardized moment: Moments Lecture 10: Central Limit Theorem and CDFs Sta230 / Mth 230 Colin Rundel Raw moment: Central moment: µ n = EX n ) µ n = E[X µ) 2 ] February 25, 2014 Normalized / Standardized moment: µ n σ n Sta230

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Intelligent Systems I

Intelligent Systems I 1, Intelligent Systems I 12 SAMPLING METHODS THE LAST THING YOU SHOULD EVER TRY Philipp Hennig & Stefan Harmeling Max Planck Institute for Intelligent Systems 23. January 2014 Dptmt. of Empirical Inference

More information

Chapter 3: Maximum-Likelihood & Bayesian Parameter Estimation (part 1)

Chapter 3: Maximum-Likelihood & Bayesian Parameter Estimation (part 1) HW 1 due today Parameter Estimation Biometrics CSE 190 Lecture 7 Today s lecture was on the blackboard. These slides are an alternative presentation of the material. CSE190, Winter10 CSE190, Winter10 Chapter

More information

Monte Carlo Simulations

Monte Carlo Simulations Monte Carlo Simulations What are Monte Carlo Simulations and why ones them? Pseudo Random Number generators Creating a realization of a general PDF The Bootstrap approach A real life example: LOFAR simulations

More information

6 The normal distribution, the central limit theorem and random samples

6 The normal distribution, the central limit theorem and random samples 6 The normal distribution, the central limit theorem and random samples 6.1 The normal distribution We mentioned the normal (or Gaussian) distribution in Chapter 4. It has density f X (x) = 1 σ 1 2π e

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Random Signal Transformations and Quantization

Random Signal Transformations and Quantization York University Department of Electrical Engineering and Computer Science EECS 4214 Lab #3 Random Signal Transformations and Quantization 1 Purpose In this lab, you will be introduced to transformations

More information

STAT 830 Non-parametric Inference Basics

STAT 830 Non-parametric Inference Basics STAT 830 Non-parametric Inference Basics Richard Lockhart Simon Fraser University STAT 801=830 Fall 2012 Richard Lockhart (Simon Fraser University)STAT 830 Non-parametric Inference Basics STAT 801=830

More information

1 Solution to Problem 2.1

1 Solution to Problem 2.1 Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto

More information

Expectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Expectation. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,

More information

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows. Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage

More information

Brief Review of Probability

Brief Review of Probability Maura Department of Economics and Finance Università Tor Vergata Outline 1 Distribution Functions Quantiles and Modes of a Distribution 2 Example 3 Example 4 Distributions Outline Distribution Functions

More information

CSCI-6971 Lecture Notes: Monte Carlo integration

CSCI-6971 Lecture Notes: Monte Carlo integration CSCI-6971 Lecture otes: Monte Carlo integration Kristopher R. Beevers Department of Computer Science Rensselaer Polytechnic Institute beevek@cs.rpi.edu February 21, 2006 1 Overview Consider the following

More information

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part : Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section

More information

Monte Carlo and cold gases. Lode Pollet.

Monte Carlo and cold gases. Lode Pollet. Monte Carlo and cold gases Lode Pollet lpollet@physics.harvard.edu 1 Outline Classical Monte Carlo The Monte Carlo trick Markov chains Metropolis algorithm Ising model critical slowing down Quantum Monte

More information

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 2. First Results and Algorithms Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische

More information

MTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6

MTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6 MTH739U/P: Topics in Scientific Computing Autumn 16 Week 6 4.5 Generic algorithms for non-uniform variates We have seen that sampling from a uniform distribution in [, 1] is a relatively straightforward

More information

BTRY 4090: Spring 2009 Theory of Statistics

BTRY 4090: Spring 2009 Theory of Statistics BTRY 4090: Spring 2009 Theory of Statistics Guozhang Wang September 25, 2010 1 Review of Probability We begin with a real example of using probability to solve computationally intensive (or infeasible)

More information

Space Telescope Science Institute statistics mini-course. October Inference I: Estimation, Confidence Intervals, and Tests of Hypotheses

Space Telescope Science Institute statistics mini-course. October Inference I: Estimation, Confidence Intervals, and Tests of Hypotheses Space Telescope Science Institute statistics mini-course October 2011 Inference I: Estimation, Confidence Intervals, and Tests of Hypotheses James L Rosenberger Acknowledgements: Donald Richards, William

More information

Lecture 7: Chapter 7. Sums of Random Variables and Long-Term Averages

Lecture 7: Chapter 7. Sums of Random Variables and Long-Term Averages Lecture 7: Chapter 7. Sums of Random Variables and Long-Term Averages ELEC206 Probability and Random Processes, Fall 2014 Gil-Jin Jang gjang@knu.ac.kr School of EE, KNU page 1 / 15 Chapter 7. Sums of Random

More information

Basic Sampling Methods

Basic Sampling Methods Basic Sampling Methods Sargur Srihari srihari@cedar.buffalo.edu 1 1. Motivation Topics Intractability in ML How sampling can help 2. Ancestral Sampling Using BNs 3. Transforming a Uniform Distribution

More information

10 BIVARIATE DISTRIBUTIONS

10 BIVARIATE DISTRIBUTIONS BIVARIATE DISTRIBUTIONS After some discussion of the Normal distribution, consideration is given to handling two continuous random variables. The Normal Distribution The probability density function f(x)

More information

Semester , Example Exam 1

Semester , Example Exam 1 Semester 1 2017, Example Exam 1 1 of 10 Instructions The exam consists of 4 questions, 1-4. Each question has four items, a-d. Within each question: Item (a) carries a weight of 8 marks. Item (b) carries

More information

Calculus II Practice Test Problems for Chapter 7 Page 1 of 6

Calculus II Practice Test Problems for Chapter 7 Page 1 of 6 Calculus II Practice Test Problems for Chapter 7 Page of 6 This is a set of practice test problems for Chapter 7. This is in no way an inclusive set of problems there can be other types of problems on

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

COMPSCI 240: Reasoning Under Uncertainty

COMPSCI 240: Reasoning Under Uncertainty COMPSCI 240: Reasoning Under Uncertainty Andrew Lan and Nic Herndon University of Massachusetts at Amherst Spring 2019 Lecture 20: Central limit theorem & The strong law of large numbers Markov and Chebyshev

More information

STA 2201/442 Assignment 2

STA 2201/442 Assignment 2 STA 2201/442 Assignment 2 1. This is about how to simulate from a continuous univariate distribution. Let the random variable X have a continuous distribution with density f X (x) and cumulative distribution

More information

Chap 2.1 : Random Variables

Chap 2.1 : Random Variables Chap 2.1 : Random Variables Let Ω be sample space of a probability model, and X a function that maps every ξ Ω, toa unique point x R, the set of real numbers. Since the outcome ξ is not certain, so is

More information

Time Series Solutions HT 2009

Time Series Solutions HT 2009 Time Series Solutions HT 2009 1. Let {X t } be the ARMA(1, 1) process, X t φx t 1 = ɛ t + θɛ t 1, {ɛ t } WN(0, σ 2 ), where φ < 1 and θ < 1. Show that the autocorrelation function of {X t } is given by

More information

Introduction to Bayesian Computation

Introduction to Bayesian Computation Introduction to Bayesian Computation Dr. Jarad Niemi STAT 544 - Iowa State University March 20, 2018 Jarad Niemi (STAT544@ISU) Introduction to Bayesian Computation March 20, 2018 1 / 30 Bayesian computation

More information

MA 575 Linear Models: Cedric E. Ginestet, Boston University Bootstrap for Regression Week 9, Lecture 1

MA 575 Linear Models: Cedric E. Ginestet, Boston University Bootstrap for Regression Week 9, Lecture 1 MA 575 Linear Models: Cedric E. Ginestet, Boston University Bootstrap for Regression Week 9, Lecture 1 1 The General Bootstrap This is a computer-intensive resampling algorithm for estimating the empirical

More information