Introduction to Monte Carlo Method

Introduction to Monte Carlo Method
Kadi Bouatouch, IRISA
Email: kadi@irisa.fr

Why Monte Carlo Integration?
To generate realistic looking images, we need to solve integrals of 2 or higher dimension. Pixel filtering and lens simulation both involve solving a 2-dimensional integral, and combining pixel filtering and lens simulation requires solving a 4-dimensional integral. Normal quadrature algorithms don't extend well beyond 1 dimension.

Continuous Probability
A continuous random variable x is a variable that randomly takes on a value from its domain. The behavior of x is completely described by the distribution of values it takes.

Continuous Probability
The distribution of values that x takes on is described by a probability distribution function p. We say that x is distributed according to p, or x ~ p, with \int p(x)\,dx = 1. The probability that x takes on a value in the interval [a, b] is:

P(x \in [a, b]) = \int_a^b p(x)\,dx
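As a quick numeric illustration (a sketch not in the original slides), these two conditions can be checked directly for the pdf p(x) = 3x²/2 on [-1, 1] that appears in the sampling example later on:

```python
# Sketch (not from the slides): check that a pdf integrates to 1 and compute
# P(x in [a, b]) by a simple Riemann sum, using p(x) = 3x^2/2 on [-1, 1].
import numpy as np

def p(x):
    return 1.5 * x**2                               # pdf on [-1, 1]

xs = np.linspace(-1.0, 1.0, 200_001)
dx = xs[1] - xs[0]
print(np.sum(p(xs)) * dx)                           # ≈ 1.0 (normalization)

a, b = 0.0, 0.5
xs_ab = np.linspace(a, b, 50_001)
print(np.sum(p(xs_ab)) * (xs_ab[1] - xs_ab[0]))     # P(x in [0, 0.5]) ≈ 0.0625
```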

Expected Value
The expected value of x ~ p(x) is defined as

E(x) = \int x\, p(x)\, dx

As a function of a random variable is itself a random variable, the expected value of f(x) is

E(f(x)) = \int f(x)\, p(x)\, dx

The expected value of a sum of random variables is the sum of the expected values: E(x + y) = E(x) + E(y).

Multi-Dimensional Random Variables
For some space S, we can define a pdf p: S -> R. If x is a random variable with x ~ p, the probability that x takes on a value in a subset S' ⊂ S is:

P(x \in S') = \int_{S'} p(x)\, d\mu

The expected value of a real-valued function f: S -> R extends naturally to the multidimensional case:

E(f(x)) = \int_S f(x)\, p(x)\, d\mu

Monte Carlo Integration
Suppose we have a function f(x) defined over the domain x ∈ [a, b]. We would like to evaluate the integral

I = \int_a^b f(x)\, dx

The Monte Carlo approach is to consider N samples x_i, selected randomly with pdf p(x), to estimate the integral. We get the following estimator:

I_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}

Monte Carlo Integration
Finding the expected value of the estimator we get:

E[I_N] = E\left[ \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} \right]
       = \frac{1}{N} \sum_{i=1}^{N} E\left[ \frac{f(x_i)}{p(x_i)} \right]
       = \frac{1}{N} \sum_{i=1}^{N} \int \frac{f(x)}{p(x)}\, p(x)\, dx
       = \int f(x)\, dx = I

In other words:

\lim_{N \to \infty} I_N = I
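A minimal sketch of this estimator in Python (the integrand f(x) = x² on [0, 2] and the choice of a uniform pdf are illustrative assumptions, not from the slides):

```python
# Minimal sketch of the basic Monte Carlo estimator
#   I_N = (1/N) * sum_i f(x_i) / p(x_i)
# for the illustrative integrand f(x) = x^2 on [a, b] = [0, 2],
# using the uniform pdf p(x) = 1/(b - a).  Exact value: 8/3.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x**2

a, b = 0.0, 2.0
N = 100_000
x = rng.uniform(a, b, N)        # samples x_i ~ p(x) = 1/(b - a)
p = 1.0 / (b - a)               # uniform pdf value
I_N = np.mean(f(x) / p)         # Monte Carlo estimate of the integral
print(I_N)                      # close to 8/3 ≈ 2.6667
```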

Variance
The variance of the estimator is

\sigma^2 = \frac{1}{N} \int \left( \frac{f(x)}{p(x)} - I \right)^2 p(x)\, dx

As the error in the estimator is proportional to σ, the error is proportional to 1/\sqrt{N}. So, to halve the error we need to use four times as many samples.

Variance Reduction
- Increase the number of samples.
- Choose p such that f/p has low variance (f and p should have similar shape). This is called importance sampling, because if p is large when f is large and small when f is small, there will be more samples in the important regions.
- Partition the domain of the integral into several smaller regions and evaluate the integral as the sum of the integrals over the smaller regions. This is called stratified sampling.

Variance Reduction: Stratified sampling
The region D is divided into disjoint sub-regions D_i, with D_i = [x_i, x_{i+1}] the i-th sub-region and

P_i = \int_{D_i} p(x)\, dx

so that

\int_{D_i} \frac{f(x)}{p(x)}\, p(x)\, dx = P_i \int_{D_i} \frac{f(x)}{p(x)} \frac{p(x)}{P_i}\, dx

Variance Reduction: Stratified sampling
With N_i samples per sub-region, n sub-regions, and p(x)/P_i the pdf within a sub-region:

I = \int_D f(x)\, dx = \int_{D_1} \frac{f(x)}{p(x)}\, p(x)\, dx + \dots + \int_{D_n} \frac{f(x)}{p(x)}\, p(x)\, dx

\int_{D_i} \frac{f(x)}{p(x)}\, p(x)\, dx = P_i \int_{D_i} \frac{f(x)}{p(x)} \frac{p(x)}{P_i}\, dx \approx \frac{P_i}{N_i} \sum_{k=1}^{N_i} \frac{f(X_{i,k})}{p(X_{i,k})}

so the stratified estimator is

I_N = \sum_{i=1}^{n} \frac{P_i}{N_i} \sum_{k=1}^{N_i} \frac{f(X_{i,k})}{p(X_{i,k})}
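The following hedged sketch compares the empirical spread of the three approaches on the illustrative integral ∫₀¹ x² dx = 1/3 (the integrand, the importance pdf p(x) = 2x and the 10 equal strata are assumptions made for the example, not choices from the slides):

```python
# Hedged sketch comparing estimator spread for I = ∫_0^1 x^2 dx = 1/3:
#   - plain MC with the uniform pdf p(x) = 1
#   - importance sampling with p(x) = 2x (roughly matches f's shape),
#     sampled by the inverse CDF: x = sqrt(xi)
#   - stratified sampling: [0, 1] split into equal strata
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x**2
N, runs = 1_000, 200

def plain(n):
    x = rng.uniform(0.0, 1.0, n)
    return np.mean(f(x))                       # p(x) = 1

def importance(n):
    x = np.sqrt(1.0 - rng.random(n))           # x ~ p(x) = 2x, x in (0, 1]
    return np.mean(f(x) / (2.0 * x))

def stratified(n, strata=10):
    per = n // strata
    edges = np.linspace(0.0, 1.0, strata + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = rng.uniform(lo, hi, per)
        total += (hi - lo) * np.mean(f(x))     # P_i * mean over stratum i
    return total

for est in (plain, importance, stratified):
    vals = [est(N) for _ in range(runs)]
    print(est.__name__, np.mean(vals), np.std(vals))
```

Both variance-reduced estimators should show a visibly smaller standard deviation than the plain one for the same number of samples.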

Variance Reduction: Stratified sampling
2D example (figure of stratified samples).

Sampling Random Variables
Given a pdf p(x) defined over the interval [x_min, x_max], we can sample a random variable x ~ p from a set of uniform random numbers ξ ∈ [0, 1). To do this, we need the cumulative probability distribution function:

prob(x < \alpha) = P(\alpha) = \int_{x_{min}}^{\alpha} p(\mu)\, d\mu

To get x we transform ξ: x = P^{-1}(ξ). P^{-1} is guaranteed to exist for all valid pdfs.

Example
Sample the pdf p(x) = 3x²/2, x ∈ [-1, 1]. First we need to find P(x):

P(x) = \int \frac{3x^2}{2}\, dx = \frac{x^3}{2} + C

Choosing C such that P(-1) = 0 and P(1) = 1, we get P(x) = (x³ + 1)/2 and

P^{-1}(\xi) = \sqrt[3]{2\xi - 1}

Thus, given a random number ξ ∈ [0, 1), we can warp it according to P^{-1} to get x: x = \sqrt[3]{2\xi - 1}.

Sampling 2D Random Variables
If we have a 2D random variable α = (α_x, α_y) with pdf p(α_x, α_y), we need the two-dimensional cumulative pdf:

prob(\alpha_x < x \;\&\; \alpha_y < y) = P(x, y) = \int_{y_{min}}^{y} \int_{x_{min}}^{x} p(\mu_x, \mu_y)\, d\mu_x\, d\mu_y

We can choose x using the marginal distribution p_G(x) and then choose y according to p(y|x), where

p_G(x) = \int_{y_{min}}^{y_{max}} p(x, y)\, dy \quad \text{and} \quad p(y|x) = \frac{p(x, y)}{p_G(x)}

If p is separable, that is p(x, y) = q(x) r(y), the one-dimensional technique can be used on each dimension instead.
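A short sketch of this example (only the warp x = ∛(2ξ - 1) comes from the slides; the moment check is added for illustration):

```python
# Sketch of the inverse-CDF example from the slides: p(x) = 3x^2/2 on [-1, 1],
# P(x) = (x^3 + 1)/2, so x = P^{-1}(xi) = cbrt(2*xi - 1).
import numpy as np

rng = np.random.default_rng(2)
xi = rng.random(200_000)                  # uniform in [0, 1)
x = np.cbrt(2.0 * xi - 1.0)               # warped samples, x ~ 3x^2/2

# Quick checks against the exact moments of p:
#   E[x] = ∫ x * 3x^2/2 dx over [-1, 1] = 0,   E[x^2] = ∫ x^2 * 3x^2/2 dx = 3/5
print(np.mean(x), np.mean(x**2))          # ≈ 0.0 and ≈ 0.6
```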

2D example: Uniform Sampling of Triangles
To uniformly sample a triangle, we use barycentric coordinates in a parametric space. Let A, B, C be the 3 vertices of a triangle. Then a point P on the triangle is expressed as:

P = \alpha A + \beta B + \gamma C, \qquad \alpha + \beta + \gamma = 1

so α = 1 - β - γ and

P = (1 - \beta - \gamma) A + \beta B + \gamma C

(Figure: triangle with vertices A, B, C and sample point P.)

2D example: Uniform Sampling of Triangles
In the parametric space, the sample domain is the triangle with corners (0, 0), (1, 0) and (0, 1) in the (β, γ) plane, bounded by β + γ = 1 (figure).

2D example: Uniform Sampling of Triangles
Integrating a constant c across the triangle gives

\int_{\gamma=0}^{1} \int_{\beta=0}^{1-\gamma} c\, d\beta\, d\gamma = 0.5\, c = 1

Thus our pdf is p(β, γ) = 2. Since β depends on γ (or γ depends on β), we use the marginal density for γ, p_G(γ):

p_G(\gamma) = \int_{0}^{1-\gamma} 2\, d\beta = 2(1 - \gamma)

2D example: Uniform Sampling of Triangles
From p_G(γ), we find

p(\beta|\gamma) = \frac{p(\beta, \gamma)}{p_G(\gamma)} = \frac{2}{2(1 - \gamma)} = \frac{1}{1 - \gamma}

To find γ we look at the cumulative pdf for γ:

\xi_1 = P_G(\gamma) = \int_0^{\gamma} p_G(\gamma')\, d\gamma' = \int_0^{\gamma} 2(1 - \gamma')\, d\gamma' = 2\gamma - \gamma^2

Solving for γ we get γ = 1 - \sqrt{1 - \xi_1}.
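A quick numerical check of the γ step (not in the slides; it only verifies that the proposed inverse reproduces the CDF derived above):

```python
# Sketch checking the gamma step of the triangle derivation: the CDF of the
# marginal p_G(gamma) = 2(1 - gamma) is P_G(gamma) = 2*gamma - gamma^2, and
# gamma = 1 - sqrt(1 - xi) inverts it.
import numpy as np

xi = np.linspace(0.0, 1.0, 11)
gamma = 1.0 - np.sqrt(1.0 - xi)                   # proposed inverse
print(np.allclose(2.0 * gamma - gamma**2, xi))    # True: P_G(gamma) == xi
```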

2D example: Uniform Sampling of Triangles
We then turn to β:

\xi_2 = P(\beta|\gamma) = \int_0^{\beta} p(\beta'|\gamma)\, d\beta' = \int_0^{\beta} \frac{1}{1 - \gamma}\, d\beta' = \frac{\beta}{1 - \gamma}

Solving for β we get:

\beta = \xi_2 (1 - \gamma) = \xi_2 \left(1 - (1 - \sqrt{1 - \xi_1})\right) = \xi_2 \sqrt{1 - \xi_1}

Thus, given a set of random numbers ξ₁ and ξ₂, we warp these to a set of barycentric coordinates sampling a triangle:

(\beta, \gamma) = \left( \xi_2 \sqrt{1 - \xi_1},\; 1 - \sqrt{1 - \xi_1} \right)

2D example: Uniform Sampling of Discs
Generate a random point on a disc of radius R with probability density p = 1/(πR²), with φ ∈ [0, 2π], r ∈ [0, R] and dμ = ds = r dr dφ. The 2D CDF is:

F(r, \varphi) = \int_0^{\varphi} \int_0^{r} \frac{r'}{\pi R^2}\, dr'\, d\varphi' = \frac{r^2}{R^2} \cdot \frac{\varphi}{2\pi}

Compute the marginal pdf and associated CDF for each variable. If ζ₁ and ζ₂ ∈ [0, 1] are uniform random numbers, then

\varphi = 2\pi \zeta_1 \quad \text{and} \quad r = R \sqrt{\zeta_2}

2D example: Sampling of Spheres
Sphere: sample a direction Θ = (θ, φ) on the hemisphere, with θ ∈ [0, π/2], φ ∈ [0, 2π] and dμ = dω = sinθ dθ dφ (figure: hemisphere with direction Θ and solid angle dω).
- For the cosine density p(Θ) = cosθ/π: θ = arccos(√ξ₁) and φ = 2πξ₂.
- For the cosine-lobe density p(Θ) = ((n+1)/(2π)) cosⁿθ: θ = arccos(ξ₁^(1/(n+1))) and φ = 2πξ₂.
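A sketch implementing the three warps above in Python (the vertex coordinates, disc radius and exponent n = 1 used in the demo calls are illustrative assumptions):

```python
# Sketch implementing the three warps derived in the slides:
#   triangle: (beta, gamma) = (xi2*sqrt(1 - xi1), 1 - sqrt(1 - xi1))
#   disc:     phi = 2*pi*zeta1,  r = R*sqrt(zeta2)
#   cos^n lobe on the hemisphere: theta = arccos(xi1^(1/(n+1))), phi = 2*pi*xi2
import numpy as np

rng = np.random.default_rng(3)

def sample_triangle(A, B, C, xi1, xi2):
    s = np.sqrt(1.0 - xi1)
    beta, gamma = xi2 * s, 1.0 - s
    return (1.0 - beta - gamma) * A + beta * B + gamma * C

def sample_disc(R, zeta1, zeta2):
    phi, r = 2.0 * np.pi * zeta1, R * np.sqrt(zeta2)
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def sample_cos_n_hemisphere(n, xi1, xi2):
    theta = np.arccos(xi1 ** (1.0 / (n + 1)))
    phi = 2.0 * np.pi * xi2
    st = np.sin(theta)
    return np.array([st * np.cos(phi), st * np.sin(phi), np.cos(theta)])

A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(sample_triangle(A, B, C, rng.random(), rng.random()))
print(sample_disc(1.0, rng.random(), rng.random()))
print(sample_cos_n_hemisphere(1, rng.random(), rng.random()))   # n = 1: cosine
```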

Summary
Given a function f(μ) defined over an n-dimensional domain S, we can estimate the integral of f over S by a sum:

\int_S f(\mu)\, d\mu \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}

where x ~ p is a random variable and the x_i are samples of x selected according to p. To reduce the variance and get faster convergence we:
- Use importance sampling: p should have a similar shape to f.
- Use stratified sampling: subdivide S into smaller regions, evaluate the integral for each region and sum these together.
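As a closing sketch tying the summary to the earlier warps, the integral of x² + y² over the unit disc (exact value π/2, an illustrative choice not from the slides) can be estimated with uniformly warped disc samples and the general estimator:

```python
# Sketch: estimating a 2D integral over the unit disc,
# ∫∫ (x^2 + y^2) dA = pi/2, using the uniform-disc warp from the slides
# and the estimator (1/N) * sum f(x_i)/p(x_i) with p = 1/(pi*R^2).
import numpy as np

rng = np.random.default_rng(4)
N, R = 200_000, 1.0

zeta1, zeta2 = rng.random(N), rng.random(N)
phi, r = 2.0 * np.pi * zeta1, R * np.sqrt(zeta2)
x, y = r * np.cos(phi), r * np.sin(phi)      # uniform samples on the disc

p = 1.0 / (np.pi * R**2)                     # uniform pdf over the disc
I_N = np.mean((x**2 + y**2) / p)
print(I_N)                                   # ≈ pi/2 ≈ 1.5708
```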