Monte Carlo Integration I
1 Monte Carlo Integration I Digital Image Synthesis Yung-Yu Chuang, with slides by Pat Hanrahan and Torsten Möller
2 Introduction The rendering equation: $L_o(p,\omega_o) = L_e(p,\omega_o) + \int_{S^2} f(p,\omega_o,\omega_i)\, L_i(p,\omega_i)\, |\cos\theta_i|\, d\omega_i$. Such integral equations generally don't have analytic solutions, so we must turn to numerical methods. Standard methods like trapezoidal integration or Gaussian quadrature are not effective for high-dimensional and discontinuous integrals.
3 Numerical quadrature Suppose we want to calculate $I = \int_a^b f(x)\,dx$, but can't solve it analytically. Approximations through quadrature rules have the form $\hat{I} = \sum_{i=1}^n w_i\, f(x_i)$, which is essentially a weighted sum of samples of the function at various points.
4 Midpoint rule Approximate f by a piecewise-constant function evaluated at interval midpoints; the composite rule converges at $O(n^{-2})$ for smooth integrands.
5 Trapezoid rule Approximate f by piecewise-linear segments; the composite rule also converges at $O(n^{-2})$ for smooth integrands.
6 Simpson's rule Similar to trapezoid but using a quadratic polynomial approximation; converges at $O(n^{-4})$, assuming f has a continuous fourth derivative.
7 Curse of dimensionality and discontinuity For an s-dimensional function f: if the 1-d rule has a convergence rate of $O(n^{-r})$, the s-d tensor-product rule requires a much larger number $n^s$ of samples to work as well as the 1-d one. Thus, the convergence rate is only $O(n^{-r/s})$. If f is discontinuous, convergence is $O(n^{-1/s})$ for the s-d rule.
8 Randomized algorithms Las Vegas vs. Monte Carlo. Las Vegas: always gives the right answer by using randomness. Monte Carlo: gives the right answer on average. Results depend on the random numbers used, but are statistically likely to be close to the right answer.
9 Monte Carlo integration Monte Carlo integration uses sampling to estimate the values of integrals. It only requires the ability to evaluate the integrand at arbitrary points, making it easy to implement and applicable to many problems. If n samples are used, it converges at the rate of $O(n^{-1/2})$. That is, to cut the error in half, it is necessary to evaluate four times as many samples. Images made by Monte Carlo methods are often noisy; most current methods try to reduce noise.
10 Monte Carlo methods Advantages: easy to implement; easy to think about (but be careful of statistical bias); robust when used with complex integrands and domains (shapes, lights, ...); efficient for high-dimensional integrals. Disadvantages: noisy; slow (many samples needed for convergence).
11 Basic concepts X is a random variable. Applying a function to a random variable gives another random variable, Y = f(X). CDF (cumulative distribution function): $P(x) = \Pr\{X \le x\}$. PDF (probability density function): the nonnegative derivative $p(x) = dP(x)/dx$, which integrates to one. The canonical uniform random variable ξ is provided by the standard library and is easy to transform to other distributions.
12 Discrete probability distributions Discrete events $X_i$ with probability $p_i$: $p_i \ge 0$, $\sum_{i=1}^n p_i = 1$. Cumulative distribution: $P_j = \sum_{i=1}^j p_i$. Construction of samples: to randomly select an event with a uniform random variable U, select $X_i$ if $P_{i-1} < U \le P_i$.
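The selection rule on this slide can be sketched in Python (a hedged illustration; the probability values below are made up for the example):

```python
import bisect
from itertools import accumulate

def sample_discrete(probs, u):
    """Select index i such that P_{i-1} < u <= P_i, using the cumulative distribution."""
    cdf = list(accumulate(probs))   # P_j = sum_{i<=j} p_i
    return bisect.bisect_left(cdf, u)

# Example probabilities (illustrative): p = [0.1, 0.4, 0.5]
probs = [0.1, 0.4, 0.5]
assert sample_discrete(probs, 0.05) == 0
assert sample_discrete(probs, 0.30) == 1
assert sample_discrete(probs, 0.95) == 2
```

With many events, the binary search over the CDF makes selection O(log n) per sample.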
13 Continuous probability distributions PDF p(x). For the uniform distribution on [0,1]: p(x) = 1 for $0 \le x \le 1$, 0 otherwise, with CDF P(x) = x. In general, $P(x) = \Pr(X \le x) = \int_0^x p(x')\,dx'$ and $\Pr(a \le X \le b) = \int_a^b p(x)\,dx = P(b) - P(a)$.
14 Expected values The average value of a function f(x) over some distribution of values p(x) over its domain D: $E_p[f(x)] = \int_D f(x)\,p(x)\,dx$. Example: the cos function over [0, π] with p uniform: $E_p[\cos x] = \int_0^\pi \cos x \cdot \frac{1}{\pi}\,dx = 0$.
15 Variance The expected deviation from the expected value; the fundamental concept for quantifying the error in Monte Carlo methods: $V[f(x)] = E[(f(x) - E[f(x)])^2]$.
16 Properties $E[a f(X)] = a\,E[f(X)]$; $E[\sum_i f(X_i)] = \sum_i E[f(X_i)]$; $V[a f(X)] = a^2\,V[f(X)]$; $V[f(X)] = E[f(X)^2] - E[f(X)]^2$.
17 Monte Carlo estimator Assume that we want to evaluate the integral of f(x) over [a,b]. Given uniform random variables $X_i$ over [a,b], the Monte Carlo estimator $F_N = \frac{b-a}{N}\sum_{i=1}^N f(X_i)$ has expected value $E[F_N]$ equal to the integral.
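The basic estimator can be written directly in Python (a sketch, not from the slides; the integrand x² is an illustrative choice):

```python
import random

def mc_estimate(f, a, b, n, rng):
    """Basic Monte Carlo estimator: F_N = (b-a)/N * sum f(X_i), X_i uniform on [a,b]."""
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) / n * total

rng = random.Random(0)
# Estimate int_0^1 x^2 dx = 1/3
est = mc_estimate(lambda x: x * x, 0.0, 1.0, 100_000, rng)
assert abs(est - 1/3) < 0.01
```

Doubling the accuracy requires four times the samples, consistent with the $O(n^{-1/2})$ rate above.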
18 General Monte Carlo estimator Given random variables $X_i$ drawn from an arbitrary PDF p(x), the estimator is $F_N = \frac{1}{N}\sum_{i=1}^N \frac{f(X_i)}{p(X_i)}$. Although the convergence rate of the MC estimator, $O(N^{-1/2})$, is slower than that of other integration methods, it is independent of the dimension, making Monte Carlo the only practical method for high-dimensional integrals.
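A sketch of the general f/p estimator (an illustration, not from the slides; here p(x) = 2x is sampled by inversion as X = √U):

```python
import math
import random

def mc_importance(f, p, sample_p, n, rng):
    """General estimator F_N = (1/N) * sum f(X_i)/p(X_i), X_i drawn from PDF p."""
    return sum(f(x) / p(x) for x in (sample_p(rng) for _ in range(n))) / n

rng = random.Random(1)
# f(x) = x^2 on [0,1]; draw from p(x) = 2x via inversion: X = sqrt(U)
est = mc_importance(lambda x: x * x, lambda x: 2 * x,
                    lambda r: math.sqrt(r.random()), 50_000, rng)
assert abs(est - 1/3) < 0.01
```

Because f/p = x/2 is nearly constant here, this estimator has much lower variance than uniform sampling of the same integrand.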
19 Convergence of Monte Carlo Chebyshev's inequality: let X be a random variable with expected value μ and variance σ². For any real number k > 0, $\Pr\{|X - \mu| \ge k\sigma\} \le \frac{1}{k^2}$. For example, for $k = \sqrt{2}$, it shows that at least half of the values lie in the interval $(\mu - \sqrt{2}\sigma,\ \mu + \sqrt{2}\sigma)$. Let $Y_i = f(X_i)/p(X_i)$; the MC estimate $F_N$ becomes $F_N = \frac{1}{N}\sum_{i=1}^N Y_i$.
20 Convergence of Monte Carlo According to Chebyshev's inequality, $\Pr\{|F_N - E[F_N]| \ge \varepsilon\} \le \frac{V[F_N]}{\varepsilon^2}$. Since the $Y_i$ are independent and identically distributed, $V[F_N] = V\!\left[\frac{1}{N}\sum_{i=1}^N Y_i\right] = \frac{1}{N^2}\sum_{i=1}^N V[Y_i] = \frac{V[Y]}{N}$. Plugging into Chebyshev's inequality, $\Pr\{|F_N - I| \ge \varepsilon\} \le \frac{V[Y]}{N\varepsilon^2}$. So, for a fixed threshold, the error decreases at the rate $N^{-1/2}$.
21 Properties of estimators An estimator $F_N$ is called unbiased if for all N, $E[F_N] = Q$. That is, the expected value is independent of N. Otherwise, the bias of the estimator is defined as $\beta = E[F_N] - Q$. If the bias goes to zero as N increases, the estimator is called consistent: $\lim_{N\to\infty} \beta = 0$, i.e. $\lim_{N\to\infty} E[F_N] = Q$.
22 Example of a biased consistent estimator Suppose we are doing antialiasing on a 1-d pixel; to determine the pixel value, we need to evaluate $I = \int_0^1 w(x) f(x)\,dx$, where w(x) is the filter function with $\int_0^1 w(x)\,dx = 1$. A common way to evaluate this is $F_N = \frac{\sum_{i=1}^N w(X_i) f(X_i)}{\sum_{i=1}^N w(X_i)}$. When N = 1, we have $E[F_1] = E\!\left[\frac{w(X_1) f(X_1)}{w(X_1)}\right] = E[f(X_1)] = \int_0^1 f(x)\,dx \ne I$ in general.
23 Example of a biased consistent estimator When N = 2, we have $E[F_2] = \int_0^1\!\!\int_0^1 \frac{w(x_1) f(x_1) + w(x_2) f(x_2)}{w(x_1) + w(x_2)}\,dx_1\,dx_2 \ne I$ in general. However, when N is very large, the bias approaches zero: $\lim_{N\to\infty} E[F_N] = \lim_{N\to\infty} \frac{\frac{1}{N}\sum_{i=1}^N w(X_i) f(X_i)}{\frac{1}{N}\sum_{i=1}^N w(X_i)} = \frac{\int_0^1 w(x) f(x)\,dx}{\int_0^1 w(x)\,dx} = I$.
24 Choosing samples $F_N = \frac{1}{N}\sum_{i=1}^N \frac{f(X_i)}{p(X_i)}$. Carefully choosing the PDF from which samples are drawn is an important technique for reducing variance: we want f/p to have a low variance. Hence, it is necessary to be able to draw samples from the chosen PDF. How do we sample an arbitrary distribution from a uniformly distributed variable? Inversion, rejection, transformation.
25 Inversion method Cumulative probability distribution function: $P(x) = \Pr(X \le x)$. Construction of samples: solve for $X = P^{-1}(U)$. Must know: 1. the integral of p(x); 2. the inverse function $P^{-1}(x)$.
26 Proof for the inversion method Let U be a uniform random variable with CDF $P_u(x) = x$. We will show that $Y = P^{-1}(U)$ has the CDF P(x).
27 Proof for the inversion method Let U be a uniform random variable with CDF $P_u(x) = x$. We will show that $Y = P^{-1}(U)$ has the CDF P(x): $\Pr\{Y \le x\} = \Pr\{P^{-1}(U) \le x\} = \Pr\{U \le P(x)\} = P_u(P(x)) = P(x)$, where the second step holds because P is monotonic. Thus, Y's CDF is exactly P(x).
28 Inversion method 1. Compute the CDF P(x). 2. Compute $P^{-1}(x)$. 3. Obtain ξ. 4. Compute $X_i = P^{-1}(ξ)$.
29 Example: power function It is used in sampling Blinn's microfacet model: $p(x) \propto x^n$.
30 Example: power function It is used in sampling Blinn's microfacet model. Assume $p(x) = (n+1)x^n$; then $P(x) = \int_0^x (n+1)t^n\,dt = x^{n+1}$, so $X = P^{-1}(U) = U^{1/(n+1)}$. Trick (it only works for sampling the power distribution): let $Y = \max(U_1, U_2, \ldots, U_{n+1})$; then $\Pr(Y \le x) = \prod_{i=1}^{n+1} \Pr(U_i \le x) = x^{n+1}$.
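Both routes to the power distribution can be sketched and cross-checked (an illustration under the slide's assumption p(x) = (n+1)xⁿ; n = 3 is an arbitrary choice):

```python
import random

def sample_power_inversion(n, u):
    """Sample p(x) = (n+1) x^n on [0,1] via inversion: X = U^(1/(n+1))."""
    return u ** (1.0 / (n + 1))

def sample_power_max(n, rng):
    """Trick: the max of n+1 uniforms has CDF x^(n+1), i.e. PDF (n+1) x^n."""
    return max(rng.random() for _ in range(n + 1))

rng = random.Random(2)
n = 3
xs = [sample_power_max(n, rng) for _ in range(50_000)]
# Empirical CDF at x = 0.5 should be close to 0.5^(n+1) = 0.0625
frac = sum(1 for x in xs if x <= 0.5) / len(xs)
assert abs(frac - 0.5 ** (n + 1)) < 0.01
# Inversion agrees: P^{-1}(0.0625) = 0.0625^(1/4) = 0.5
assert abs(sample_power_inversion(n, 0.0625) - 0.5) < 1e-9
```

The max-of-uniforms trick costs n+1 random numbers per sample but avoids the fractional power.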
31 Example: exponential distribution $p(x) = c e^{-ax}$ is useful for rendering participating media. 1. Compute the CDF P(x). 2. Compute $P^{-1}(x)$. 3. Obtain ξ. 4. Compute $X_i = P^{-1}(ξ)$.
32 Example: exponential distribution $p(x) = c e^{-ax}$ is useful for rendering participating media. Normalization: $\int_0^\infty c e^{-ax}\,dx = \frac{c}{a} = 1$, so c = a. $P(x) = \int_0^x a e^{-as}\,ds = 1 - e^{-ax}$; $P^{-1}(x) = -\frac{\ln(1-x)}{a}$; $X = -\frac{\ln(1-ξ)}{a}$, which simplifies to $-\frac{\ln ξ}{a}$ since 1-ξ is also uniformly distributed.
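A minimal sketch of this inversion, checked against the known mean 1/a of the exponential distribution (the rate a = 2 is an arbitrary illustrative choice):

```python
import math
import random

def sample_exponential(a, u):
    """Inversion for p(x) = a e^{-ax}: X = -ln(1-u)/a (equivalently -ln(u)/a)."""
    return -math.log(1.0 - u) / a

rng = random.Random(3)
a = 2.0
xs = [sample_exponential(a, rng.random()) for _ in range(100_000)]
mean = sum(xs) / len(xs)
assert abs(mean - 1.0 / a) < 0.02   # exponential mean is 1/a = 0.5
```

In a medium with extinction coefficient a, this gives free-flight distances distributed by transmittance.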
33 Rejection method Sometimes we can't integrate p(x) into a CDF or invert the CDF. Write $I = \int_0^1 f(x)\,dx = \int_0^1\!\!\int_0^{f(x)} dy\,dx$. Algorithm: pick $U_1$ and $U_2$; accept $U_1$ if $U_2 < f(U_1)$. Wasteful? Efficiency = area under f / area of the rectangle.
34 Rejection method The rejection method is a dart-throwing method that avoids integration and inversion. 1. Find q(x) so that $p(x) < M q(x)$. 2. Dart throwing: a. choose a pair (X, ξ), where X is sampled from q(x); b. if $ξ < p(X)/(M q(X))$, return X. Equivalently, we pick a point $(X,\ ξ\,M q(X))$; if it lies beneath p(X), then we are fine.
35 Why it works For each iteration, we generate $X_i$ from q. The sample is returned if $ξ < p(X)/(M q(X))$, which happens with probability $p(X)/(M q(X))$. So the probability density of returning x is $q(x)\,\frac{p(x)}{M q(x)} = \frac{p(x)}{M}$, whose integral is 1/M. Thus, since a sample is returned with probability 1/M per iteration, the returned $X_i$ is distributed according to p(x).
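The dart-throwing loop can be sketched generically (an illustration; the target p(x) = 2x with a uniform proposal and M = 2 are my own choices, not from the slides):

```python
import random

def rejection_sample(p, q_sample, q_pdf, M, rng):
    """Rejection: draw X ~ q, accept with probability p(X)/(M q(X))."""
    while True:
        x = q_sample(rng)
        if rng.random() < p(x) / (M * q_pdf(x)):
            return x

rng = random.Random(4)
# Target p(x) = 2x on [0,1]; proposal q uniform, p(x) <= M q(x) with M = 2
xs = [rejection_sample(lambda x: 2 * x, lambda r: r.random(),
                       lambda x: 1.0, 2.0, rng) for _ in range(50_000)]
mean = sum(xs) / len(xs)
assert abs(mean - 2/3) < 0.01   # E[X] for p(x) = 2x is 2/3
```

As the slide notes, on average M iterations (here 2) are needed per accepted sample, so a tight bound M matters.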
36 Example: sampling a unit disk
void RejectionSampleDisk(float *x, float *y) {
    float sx, sy;
    do {
        sx = 1.f - 2.f * RandomFloat();
        sy = 1.f - 2.f * RandomFloat();
    } while (sx*sx + sy*sy > 1.f);
    *x = sx;
    *y = sy;
}
π/4 ≈ 78.5% good samples; the efficiency gets worse in higher dimensions, for example, for a sphere, π/6 ≈ 52.4%.
37 Transformation of variables Given a random variable X with distribution $p_x(x)$ and a random variable Y = y(X), where y is one-to-one (i.e., monotonic), we want to derive the distribution of Y, $p_y(y)$. CDF: $P_y(y(x)) = \Pr\{Y \le y(x)\} = \Pr\{X \le x\} = P_x(x)$. PDF: differentiating gives $p_y(y)\,\frac{dy}{dx} = p_x(x)$, so $p_y(y) = p_x(x)\left(\frac{dy}{dx}\right)^{-1}$.
38 Example $p_x(x) = 2x$, $Y = \sin X$: $p_y(y) = \frac{p_x(x)}{|\cos x|} = \frac{2x}{\cos x} = \frac{2\arcsin y}{\sqrt{1 - y^2}}$.
39 Transformation method A problem with applying the above method is that we usually have some PDF we want to sample from, not a given transformation. Given a source random variable X with $p_x(x)$ and a target distribution $p_y(y)$, try to transform X into another random variable Y so that Y has the distribution $p_y(y)$. We first have to find a transformation y(x) so that $P_x(x) = P_y(y(x))$. Thus, $y(x) = P_y^{-1}(P_x(x))$.
40 Transformation method Let's prove that the above transform works. We first prove that the random variable $Z = P_x(X)$ has a uniform distribution. If so, then $P_y^{-1}(Z)$ has the distribution $P_y$ by the inversion method. $\Pr\{Z \le x\} = \Pr\{P_x(X) \le x\} = \Pr\{X \le P_x^{-1}(x)\} = P_x(P_x^{-1}(x)) = x$. Thus, Z is uniform and the transformation works. It is an obvious generalization of the inversion method, in which X is uniform and $P_x(x) = x$.
41 Example $p_x(x) = 2x$, target $p_y(y) = e^{-y}$.
42 Example $p_x(x) = 2x$, target $p_y(y) = e^{-y}$. $P_x(x) = x^2$; $P_y(y) = 1 - e^{-y}$, so $P_y^{-1}(y) = -\ln(1-y)$ and $y(x) = P_y^{-1}(P_x(x)) = -\ln(1 - x^2)$. Thus, if X has the distribution $p_x(x) = 2x$, then the random variable $Y = -\ln(1 - X^2)$ has the distribution $p_y(y) = e^{-y}$.
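This example can be checked numerically: draw X ~ 2x by inversion, map it through y(x), and verify Y looks exponential (a sketch following the slide's derivation; the mean test is my own check):

```python
import math
import random

def transform_sample(rng):
    """X with p_x(x) = 2x (X = sqrt(U) by inversion), mapped through
    y(x) = P_y^{-1}(P_x(x)) = -ln(1 - x^2) to get p_y(y) = e^{-y}."""
    x = math.sqrt(rng.random())
    return -math.log(1.0 - x * x)

rng = random.Random(5)
ys = [transform_sample(rng) for _ in range(100_000)]
mean = sum(ys) / len(ys)
assert abs(mean - 1.0) < 0.02   # exponential with rate 1 has mean 1
```

Note that since X² = U is uniform here, the composite map reduces to the familiar -ln(1-U).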
43 Multiple dimensions In multiple dimensions, for $Y = T(X)$ the densities are related through the determinant of T's Jacobian: $p_y(y) = \frac{p_x(x)}{|J_T(x)|}$. We often need the other way around: $p_x(x) = |J_T(x)|\,p_y(y)$.
44 Spherical coordinates The spherical coordinate representation of directions is $x = r\sin\theta\cos\phi$, $y = r\sin\theta\sin\phi$, $z = r\cos\theta$, with $|J_T| = r^2\sin\theta$, so $p(r,\theta,\phi) = r^2\sin\theta\; p(x,y,z)$.
45 Spherical coordinates Now look at the relation between spherical directions and solid angles: $d\omega = \sin\theta\,d\theta\,d\phi$, and $p(\theta,\phi)\,d\theta\,d\phi = p(\omega)\,d\omega$. Hence, the density in terms of (θ, φ): $p(\theta,\phi) = \sin\theta\; p(\omega)$.
46 Multidimensional sampling Separable case: independently sample X from $p_x$ and Y from $p_y$; $p(x,y) = p_x(x)\,p_y(y)$. Often, this is not possible. Compute the marginal density function p(x) first: $p(x) = \int p(x,y)\,dy$. Then compute the conditional density function: $p(y|x) = \frac{p(x,y)}{p(x)}$. Use 1D sampling with p(x) and p(y|x).
47 Sampling a hemisphere Sample a hemisphere uniformly, i.e. $p(\omega) = c$; normalizing over the hemisphere's solid angle 2π gives $c = \frac{1}{2\pi}$.
48 Sampling a hemisphere Sample a hemisphere uniformly: $p(\omega) = \frac{1}{2\pi}$, so $p(\theta,\phi) = \frac{\sin\theta}{2\pi}$. Sample θ first: $p(\theta) = \int_0^{2\pi} p(\theta,\phi)\,d\phi = \sin\theta$. Now, sampling φ: $p(\phi\,|\,\theta) = \frac{p(\theta,\phi)}{p(\theta)} = \frac{1}{2\pi}$.
49 Sampling a hemisphere Now we use the inversion technique to sample these PDFs: $P(\theta) = \int_0^\theta \sin\theta'\,d\theta' = 1 - \cos\theta$; $P(\phi\,|\,\theta) = \int_0^\phi \frac{1}{2\pi}\,d\phi' = \frac{\phi}{2\pi}$. Inverting these: $\theta = \cos^{-1}(1 - \xi_1)$ (equivalently $\cos^{-1}\xi_1$, since $1 - \xi_1$ is also uniform); $\phi = 2\pi\xi_2$.
50 Sampling a hemisphere Convert these to Cartesian coordinates: $x = \sin\theta\cos\phi = \sqrt{1-\xi_1^2}\,\cos(2\pi\xi_2)$, $y = \sin\theta\sin\phi = \sqrt{1-\xi_1^2}\,\sin(2\pi\xi_2)$, $z = \cos\theta = \xi_1$. A similar derivation applies for a full sphere.
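The final mapping is a few lines of code; uniformity can be checked via the fact that z = cos θ is itself uniform on [0,1] for a uniform hemisphere (a sketch of the slide's formulas; the statistical check is my own):

```python
import math
import random

def uniform_hemisphere(xi1, xi2):
    """Uniform hemisphere sample: cos(theta) = xi1, phi = 2*pi*xi2."""
    z = xi1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * xi2
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(6)
pts = [uniform_hemisphere(rng.random(), rng.random()) for _ in range(50_000)]
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in pts)
# For a uniform hemisphere, z = cos(theta) is uniform on [0,1]: E[z] = 1/2
mean_z = sum(z for _, _, z in pts) / len(pts)
assert abs(mean_z - 0.5) < 0.01
```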
51 Sampling a disk WRONG (not equi-areal): $r = \xi_1$, $\theta = 2\pi\xi_2$. RIGHT (equi-areal): $r = \sqrt{\xi_1}$, $\theta = 2\pi\xi_2$.
52 Sampling a disk WRONG (not equi-areal): $r = \xi_1$, $\theta = 2\pi\xi_2$, which clumps samples near the center. RIGHT (equi-areal): $r = \sqrt{\xi_1}$, $\theta = 2\pi\xi_2$.
53 Sampling a disk Uniform: $p(x,y) = \frac{1}{\pi}$, so $p(r,\theta) = r\,p(x,y) = \frac{r}{\pi}$. Sample r first, then θ: $p(r) = \int_0^{2\pi} p(r,\theta)\,d\theta = 2r$; $p(\theta\,|\,r) = \frac{p(r,\theta)}{p(r)} = \frac{1}{2\pi}$. Invert the CDFs: $P(r) = r^2$ and $P(\theta|r) = \frac{\theta}{2\pi}$ give $r = \sqrt{\xi_1}$, $\theta = 2\pi\xi_2$.
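A sketch of the correct mapping, with a check that the samples really are equi-areal (for a uniform disk, the fraction inside radius 1/2 must equal the area ratio 1/4):

```python
import math
import random

def uniform_disk(xi1, xi2):
    """Uniform unit-disk sample: r = sqrt(xi1), theta = 2*pi*xi2."""
    r = math.sqrt(xi1)
    theta = 2.0 * math.pi * xi2
    return (r * math.cos(theta), r * math.sin(theta))

rng = random.Random(7)
pts = [uniform_disk(rng.random(), rng.random()) for _ in range(50_000)]
# Uniform area => fraction with r <= 1/2 is (1/2)^2 = 1/4
frac = sum(1 for x, y in pts if x*x + y*y <= 0.25) / len(pts)
assert abs(frac - 0.25) < 0.01
```

Dropping the square root (the WRONG mapping above) would make this fraction 1/2 instead of 1/4.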
54 Shirley's mapping A concentric mapping of the square onto the disk that preserves stratification better; for the first wedge, $r = \xi_1$, $\theta = \frac{\pi}{4}\,\frac{\xi_2}{\xi_1}$.
55 Sampling a triangle The domain is $u \ge 0$, $v \ge 0$, $u + v \le 1$. Its area is $A = \int_0^1\!\!\int_0^{1-u} dv\,du = \frac{1}{2}$, so the uniform density is $p(u,v) = 2$.
56 Sampling a triangle Here u and v are not independent! Conditional probability: $p(u) = \int_0^{1-u} p(u,v)\,dv = 2(1-u)$; $p(v\,|\,u) = \frac{p(u,v)}{p(u)} = \frac{1}{1-u}$. CDFs: $P(u) = \int_0^u 2(1-u')\,du' = 2u - u^2$; $P(v\,|\,u) = \int_0^v \frac{dv'}{1-u} = \frac{v}{1-u}$. Inverting: $u = 1 - \sqrt{1-\xi_1}$ (equivalently $1 - \sqrt{\xi_1}$), $v = \xi_2(1-u) = \xi_2\sqrt{\xi_1}$.
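The inverted formulas can be sketched and sanity-checked: every sample must land in the triangle, and for a uniform distribution the mean of u equals the centroid coordinate 1/3 (the centroid check is my own, not from the slides):

```python
import math
import random

def uniform_triangle(xi1, xi2):
    """Uniform sample of the triangle u, v >= 0, u + v <= 1:
    u = 1 - sqrt(xi1), v = xi2 * sqrt(xi1)."""
    su = math.sqrt(xi1)
    return (1.0 - su, xi2 * su)

rng = random.Random(8)
pts = [uniform_triangle(rng.random(), rng.random()) for _ in range(50_000)]
assert all(u >= 0 and v >= 0 and u + v <= 1.0 + 1e-12 for u, v in pts)
# Uniform over the triangle => E[u] = 1/3 (the centroid)
mean_u = sum(u for u, _ in pts) / len(pts)
assert abs(mean_u - 1/3) < 0.01
```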
57 Cosine-weighted hemisphere $p(\omega) \propto \cos\theta$. Normalization: $1 = \int_{H^2} c\cos\theta\,d\omega = c\int_0^{2\pi}\!\!\int_0^{\pi/2} \cos\theta\sin\theta\,d\theta\,d\phi = c\,\pi$, so $c = \frac{1}{\pi}$ and $p(\theta,\phi) = \frac{1}{\pi}\cos\theta\sin\theta$.
58 Cosine-weighted hemisphere $p(\theta,\phi) = \frac{1}{\pi}\cos\theta\sin\theta$. Marginal: $p(\theta) = \int_0^{2\pi} \frac{1}{\pi}\cos\theta\sin\theta\,d\phi = 2\cos\theta\sin\theta = \sin 2\theta$; conditional: $p(\phi\,|\,\theta) = \frac{p(\theta,\phi)}{p(\theta)} = \frac{1}{2\pi}$. CDFs: $P(\theta) = \sin^2\theta$ and $P(\phi|\theta) = \frac{\phi}{2\pi}$; inverting, $\cos\theta = \sqrt{1-\xi_1}$ (equivalently $\sqrt{\xi_1}$) and $\phi = 2\pi\xi_2$.
59 Cosine-weighted hemisphere Malley's method: uniformly generate points on the unit disk, then generate directions by projecting them up to the hemisphere above it.
Vector CosineSampleHemisphere(float u1, float u2) {
    Vector ret;
    ConcentricSampleDisk(u1, u2, &ret.x, &ret.y);
    ret.z = sqrtf(max(0.f, 1.f - ret.x*ret.x - ret.y*ret.y));
    return ret;
}
60 Cosine-weighted hemisphere Why does Malley's method work? Unit disk sampling gives $p(r,\phi) = \frac{r}{\pi}$. Map to the hemisphere: $(\theta,\phi) = T(r,\phi)$ with $r = \sin\theta$.
61 Cosine-weighted hemisphere The Jacobian of this map has determinant $\cos\theta$, so by the transformation rule $p(\theta,\phi) = |J_T|\,p(r,\phi) = \cos\theta\,\frac{\sin\theta}{\pi} = \frac{1}{\pi}\cos\theta\sin\theta$, exactly the cosine-weighted density derived above.
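Malley's method can be sketched in a few lines (using the simple r = √ξ disk mapping rather than the concentric one, for brevity); for the cosine-weighted density, E[cos θ] = 2/3, which the check below verifies:

```python
import math
import random

def cosine_sample_hemisphere(xi1, xi2):
    """Malley's method: uniform disk point projected up to the hemisphere."""
    r = math.sqrt(xi1)
    phi = 2.0 * math.pi * xi2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

rng = random.Random(9)
zs = [cosine_sample_hemisphere(rng.random(), rng.random())[2]
      for _ in range(50_000)]
# p(theta) = 2 cos(theta) sin(theta) => E[cos(theta)] = 2/3
mean_z = sum(zs) / len(zs)
assert abs(mean_z - 2/3) < 0.01
```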
62 Sampling Phong lobe $p(\omega) \propto \cos^n\theta$. Normalization: $1 = \int_{H^2} c\cos^n\theta\,d\omega = c\int_0^{2\pi}\!\!\int_0^{\pi/2} \cos^n\theta\sin\theta\,d\theta\,d\phi = \frac{2\pi c}{n+1}$, so $c = \frac{n+1}{2\pi}$ and $p(\theta,\phi) = \frac{n+1}{2\pi}\cos^n\theta\sin\theta$.
63 Sampling Phong lobe Marginal: $p(\theta) = \int_0^{2\pi} p(\theta,\phi)\,d\phi = (n+1)\cos^n\theta\sin\theta$. CDF: $P(\theta) = \int_0^\theta (n+1)\cos^n\theta'\sin\theta'\,d\theta' = 1 - \cos^{n+1}\theta$. Inverting, $\cos\theta = (1-\xi)^{1/(n+1)}$, equivalently $\cos\theta = \xi^{1/(n+1)}$.
64 Sampling Phong lobe Conditional: $p(\phi\,|\,\theta) = \frac{p(\theta,\phi)}{p(\theta)} = \frac{1}{2\pi}$; $P(\phi\,|\,\theta) = \int_0^\phi \frac{1}{2\pi}\,d\phi' = \frac{\phi}{2\pi}$, so $\phi = 2\pi\xi_2$.
65 Sampling Phong lobe When n = 1, it is equivalent to the cosine-weighted hemisphere: $P(\theta) = 1 - \cos^2\theta = \sin^2\theta$, giving $\cos\theta = \sqrt{\xi_1}$ and $\phi = 2\pi\xi_2$.
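A sketch of the Phong-lobe inversion, using the n = 1 special case noted above as the test (for n = 1, cos θ = √ξ₁, whose mean is E[√U] = 2/3):

```python
import math
import random

def sample_phong_lobe(n, xi1, xi2):
    """Sample p(theta,phi) ∝ cos^n(theta): cos(theta) = xi1^(1/(n+1)), phi = 2*pi*xi2."""
    cos_theta = xi1 ** (1.0 / (n + 1))
    phi = 2.0 * math.pi * xi2
    return cos_theta, phi

rng = random.Random(10)
n = 1   # n = 1 reduces to the cosine-weighted hemisphere: cos(theta) = sqrt(xi1)
zs = [sample_phong_lobe(n, rng.random(), rng.random())[0]
      for _ in range(50_000)]
mean = sum(zs) / len(zs)
assert abs(mean - 2/3) < 0.01   # E[sqrt(U)] = 2/3
```

Larger n concentrates cos θ toward 1, i.e. a tighter specular lobe.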
66 Piecewise-constant 2d distributions Sample from discrete 2D distributions; useful for texture maps and environment lights. Consider f(u, v) defined by a set of $n_u \times n_v$ values $f[u_i, v_j]$. Given a continuous (u, v), we will use $(\tilde{u}, \tilde{v})$ to denote the corresponding discrete $(u_i, v_j)$ indices.
67 Piecewise-constant 2d distributions Integral: $I_f = \int\!\!\int f(u,v)\,du\,dv = \frac{1}{n_u n_v}\sum_i\sum_j f[u_i, v_j]$. PDF: $p(u,v) = \frac{f(u,v)}{I_f}$. Marginal density: $p(v) = \int p(u,v)\,du = \frac{1}{n_u}\sum_i \frac{f[u_i, \tilde{v}]}{I_f}$. Conditional probability: $p(u\,|\,v) = \frac{p(u,v)}{p(v)} = \frac{f[\tilde{u}, \tilde{v}]/I_f}{p[\tilde{v}]}$.
68 Piecewise-constant 2d distributions f(u, v) → Distribution2D: a low-resolution marginal density p(v) and a low-resolution conditional probability p(u|v).
69 Metropolis sampling Metropolis sampling can efficiently generate a set of samples from any non-negative function f, requiring only the ability to evaluate f. Disadvantages: successive samples in the sequence are often correlated, and it is not possible to ensure that a small number of samples generated by Metropolis is well distributed over the domain. There is no technique like stratified sampling for Metropolis.
70 Metropolis sampling Problem: given an arbitrary function f(x), assuming $f(x) \ge 0$, generate a set of samples distributed proportionally to f's value.
71 Metropolis sampling Steps: generate an initial sample $x_0$; mutate the current sample $x_i$ to propose $x'$; if it is accepted, $x_{i+1} = x'$, otherwise $x_{i+1} = x_i$. The acceptance probability guarantees that the stationary distribution is proportional to f.
72 Metropolis sampling Mutations propose $x'$ given $x_i$. $T(x \to x')$ is the tentative transition probability density of proposing $x'$ from $x$; being able to calculate it is the only restriction on the choice of mutations. $a(x \to x')$ is the acceptance probability of the transition. By defining a carefully, we ensure that the samples converge to f's distribution.
73 Metropolis sampling Detailed balance, $f(x)\,T(x \to x')\,a(x \to x') = f(x')\,T(x' \to x)\,a(x' \to x)$, guarantees that the stationary distribution is proportional to f; it is satisfied by choosing $a(x \to x') = \min\!\left(1,\ \frac{f(x')\,T(x' \to x)}{f(x)\,T(x \to x')}\right)$.
74 Pseudo code
75 Pseudo code expected value
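The pseudocode on these slides was lost in transcription; a minimal 1D Metropolis sketch of the steps described above is given below (hedged: the target f(x) = x on [0,1] and the symmetric step-size mutation are my own illustrative choices, not the slides'):

```python
import random

def metropolis_1d(f, x0, n_samples, step, rng):
    """Metropolis sampling with a symmetric mutation, so T cancels in the
    acceptance ratio: a = min(1, f(x')/f(x)). Proposals outside [0,1]
    are treated as f = 0 and rejected."""
    x, samples = x0, []
    for _ in range(n_samples):
        xp = x + step * (2.0 * rng.random() - 1.0)   # symmetric proposal
        if 0.0 <= xp <= 1.0 and rng.random() < min(1.0, f(xp) / f(x)):
            x = xp
        samples.append(x)   # record current state even on rejection
    return samples

rng = random.Random(11)
# Target f(x) = x (unnormalized); the normalized PDF is 2x, so E[X] = 2/3
xs = metropolis_1d(lambda x: x, 0.5, 200_000, 0.3, rng)
mean = sum(xs) / len(xs)
assert abs(mean - 2/3) < 0.02
```

Note the estimate includes some start-up bias from x₀ = 0.5, which the slides address next.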
76 Binary example I
77 Binary example II
78 Acceptance probability It does not affect unbiasedness, just variance. We want transitions to happen, because transitions head toward where f is large. Maximize the acceptance probability to explore the state space better and reduce correlation.
79 Mutation strategy Very free and flexible; the only requirement is to be able to calculate the transition probability. Choose based on the application and experience. The more mutation types, the better; their relative frequency is not so important.
80 Start-up bias Using an initial sample not drawn from f's distribution leads to a problem called start-up bias. Solution #1: run Metropolis sampling for a while and use the current sample as the initial sample to re-start the process (expensive start-up cost; need to guess when to re-start). Solution #2: use another available sampling method to start.
81 1D example
82 1D example mutation 1
83 1D example mutation 1 vs. mutation 2, 10,000 iterations
84 1D example mutation 1 vs. mutation 2, 300,000 iterations
85 1D example mutation 1 vs. 90% mutation 1 + 10% mutation 2. Periodically using uniform mutations increases ergodicity.
86 2D example image copy
87 2D example image copy
88 2D example image copy 1 sample per pixel, 8 samples per pixel, 256 samples per pixel
More informationChapter 2. Random Variable. Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance.
Chapter 2 Random Variable CLO2 Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance. 1 1. Introduction In Chapter 1, we introduced the concept
More informationMachine Learning 4771
Machine Learning 4771 Instructor: Tony Jebara Topic 7 Unsupervised Learning Statistical Perspective Probability Models Discrete & Continuous: Gaussian, Bernoulli, Multinomial Maimum Likelihood Logistic
More informationMonte Carlo Inference Methods
Monte Carlo Inference Methods Iain Murray University of Edinburgh http://iainmurray.net Monte Carlo and Insomnia Enrico Fermi (1901 1954) took great delight in astonishing his colleagues with his remarkably
More informationa b = a a a and that has been used here. ( )
Review Eercise ( i j+ k) ( i+ j k) i j k = = i j+ k (( ) ( ) ) (( ) ( ) ) (( ) ( ) ) = i j+ k = ( ) i ( ( )) j+ ( ) k = j k Hence ( ) ( i j+ k) ( i+ j k) = ( ) + ( ) = 8 = Formulae for finding the vector
More informationCS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation
Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80
More informationHochdimensionale Integration
Oliver Ernst Institut für Numerische Mathematik und Optimierung Hochdimensionale Integration 14-tägige Vorlesung im Wintersemester 2010/11 im Rahmen des Moduls Ausgewählte Kapitel der Numerik Contents
More informationProbabilistic Graphical Models Lecture 17: Markov chain Monte Carlo
Probabilistic Graphical Models Lecture 17: Markov chain Monte Carlo Andrew Gordon Wilson www.cs.cmu.edu/~andrewgw Carnegie Mellon University March 18, 2015 1 / 45 Resources and Attribution Image credits,
More informationMACHINE LEARNING ADVANCED MACHINE LEARNING
MACHINE LEARNING ADVANCED MACHINE LEARNING Recap of Important Notions on Estimation of Probability Density Functions 22 MACHINE LEARNING Discrete Probabilities Consider two variables and y taking discrete
More informationRandom processes and probability distributions. Phys 420/580 Lecture 20
Random processes and probability distributions Phys 420/580 Lecture 20 Random processes Many physical processes are random in character: e.g., nuclear decay (Poisson distributed event count) P (k, τ) =
More informationUnit #11 - Integration by Parts, Average of a Function
Unit # - Integration by Parts, Average of a Function Some problems and solutions selected or adapted from Hughes-Hallett Calculus. Integration by Parts. For each of the following integrals, indicate whether
More information(x 3)(x + 5) = (x 3)(x 1) = x + 5
RMT 3 Calculus Test olutions February, 3. Answer: olution: Note that + 5 + 3. Answer: 3 3) + 5) = 3) ) = + 5. + 5 3 = 3 + 5 3 =. olution: We have that f) = b and f ) = ) + b = b + 8. etting these equal
More informationECE295, Data Assimila0on and Inverse Problems, Spring 2015
ECE295, Data Assimila0on and Inverse Problems, Spring 2015 1 April, Intro; Linear discrete Inverse problems (Aster Ch 1 and 2) Slides 8 April, SVD (Aster ch 2 and 3) Slides 15 April, RegularizaFon (ch
More information7 Gaussian Discriminant Analysis (including QDA and LDA)
36 Jonathan Richard Shewchuk 7 Gaussian Discriminant Analysis (including QDA and LDA) GAUSSIAN DISCRIMINANT ANALYSIS Fundamental assumption: each class comes from normal distribution (Gaussian). X N(µ,
More informationWhat is a random variable
OKAN UNIVERSITY FACULTY OF ENGINEERING AND ARCHITECTURE MATH 256 Probability and Random Processes 04 Random Variables Fall 20 Yrd. Doç. Dr. Didem Kivanc Tureli didemk@ieee.org didem.kivanc@okan.edu.tr
More informationThe answers below are not comprehensive and are meant to indicate the correct way to solve the problem. sin
Math : Practice Final Answer Key Name: The answers below are not comprehensive and are meant to indicate the correct way to solve the problem. Problem : Consider the definite integral I = 5 sin ( ) d.
More informationRomberg Integration and Gaussian Quadrature
Romberg Integration and Gaussian Quadrature P. Sam Johnson October 17, 014 P. Sam Johnson (NITK) Romberg Integration and Gaussian Quadrature October 17, 014 1 / 19 Overview We discuss two methods for integration.
More informationRandom Walks A&T and F&S 3.1.2
Random Walks A&T 110-123 and F&S 3.1.2 As we explained last time, it is very difficult to sample directly a general probability distribution. - If we sample from another distribution, the overlap will
More informationNumerical Methods I Monte Carlo Methods
Numerical Methods I Monte Carlo Methods Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 Dec. 9th, 2010 A. Donev (Courant Institute) Lecture
More informationn px p x (1 p) n x. p x n(n 1)... (n x + 1) x!
Lectures 3-4 jacques@ucsd.edu 7. Classical discrete distributions D. The Poisson Distribution. If a coin with heads probability p is flipped independently n times, then the number of heads is Bin(n, p)
More information1 Introduction. Systems 2: Simulating Errors. Mobile Robot Systems. System Under. Environment
Systems 2: Simulating Errors Introduction Simulating errors is a great way to test you calibration algorithms, your real-time identification algorithms, and your estimation algorithms. Conceptually, the
More informationMonte Carlo and cold gases. Lode Pollet.
Monte Carlo and cold gases Lode Pollet lpollet@physics.harvard.edu 1 Outline Classical Monte Carlo The Monte Carlo trick Markov chains Metropolis algorithm Ising model critical slowing down Quantum Monte
More informationNUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS
NUMERICAL COMPUTATION OF THE CAPACITY OF CONTINUOUS MEMORYLESS CHANNELS Justin Dauwels Dept. of Information Technology and Electrical Engineering ETH, CH-8092 Zürich, Switzerland dauwels@isi.ee.ethz.ch
More informationCSC 446 Notes: Lecture 13
CSC 446 Notes: Lecture 3 The Problem We have already studied how to calculate the probability of a variable or variables using the message passing method. However, there are some times when the structure
More informationMTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6
MTH739U/P: Topics in Scientific Computing Autumn 16 Week 6 4.5 Generic algorithms for non-uniform variates We have seen that sampling from a uniform distribution in [, 1] is a relatively straightforward
More informationStochastic Processes. Review of Elementary Probability Lecture I. Hamid R. Rabiee Ali Jalali
Stochastic Processes Review o Elementary Probability bili Lecture I Hamid R. Rabiee Ali Jalali Outline History/Philosophy Random Variables Density/Distribution Functions Joint/Conditional Distributions
More informationMonte Carlo Radiation Transfer I
Monte Carlo Radiation Transfer I Monte Carlo Photons and interactions Sampling from probability distributions Optical depths, isotropic emission, scattering Monte Carlo Basics Emit energy packet, hereafter
More informationMarkov Chain Monte Carlo (MCMC)
School of Computer Science 10-708 Probabilistic Graphical Models Markov Chain Monte Carlo (MCMC) Readings: MacKay Ch. 29 Jordan Ch. 21 Matt Gormley Lecture 16 March 14, 2016 1 Homework 2 Housekeeping Due
More informationTHS Step By Step Calculus Chapter 1
Name: Class Period: Throughout this packet there will be blanks you are epected to fill in prior to coming to class. This packet follows your Larson Tetbook. Do NOT throw away! Keep in 3 ring binder until
More informationAP Calculus AB/BC ilearnmath.net
CALCULUS AB AP CHAPTER 1 TEST Don t write on the test materials. Put all answers on a separate sheet of paper. Numbers 1-8: Calculator, 5 minutes. Choose the letter that best completes the statement or
More informationMonte Carlo Methods for Statistical Inference: Variance Reduction Techniques
Monte Carlo Methods for Statistical Inference: Variance Reduction Techniques Hung Chen hchen@math.ntu.edu.tw Department of Mathematics National Taiwan University 3rd March 2004 Meet at NS 104 On Wednesday
More informationMATH TOURNAMENT 2012 PROBLEMS SOLUTIONS
MATH TOURNAMENT 0 PROBLEMS SOLUTIONS. Consider the eperiment of throwing two 6 sided fair dice, where, the faces are numbered from to 6. What is the probability of the event that the sum of the values
More informationMAT 271E Probability and Statistics
MAT 7E Probability and Statistics Spring 6 Instructor : Class Meets : Office Hours : Textbook : İlker Bayram EEB 3 ibayram@itu.edu.tr 3.3 6.3, Wednesday EEB 6.., Monday D. B. Bertsekas, J. N. Tsitsiklis,
More informationIntelligent Systems I
1, Intelligent Systems I 12 SAMPLING METHODS THE LAST THING YOU SHOULD EVER TRY Philipp Hennig & Stefan Harmeling Max Planck Institute for Intelligent Systems 23. January 2014 Dptmt. of Empirical Inference
More informationMonte Carlo Simulations
Monte Carlo Simulations What are Monte Carlo Simulations and why ones them? Pseudo Random Number generators Creating a realization of a general PDF The Bootstrap approach A real life example: LOFAR simulations
More informationStatistics 100A Homework 5 Solutions
Chapter 5 Statistics 1A Homework 5 Solutions Ryan Rosario 1. Let X be a random variable with probability density function a What is the value of c? fx { c1 x 1 < x < 1 otherwise We know that for fx to
More information