S6880 #13. Variance Reduction Methods
- Gerald Wilkinson
Outline
1. Variance Reduction Methods
2. Importance Sampling
3. Control Variates; Cauchy Example Revisited
4. Antithetic Variates
5. Conditioning
(WMU) S6880, Class Notes #13
General Techniques
Reduction of simulation cost = variance reduction. General techniques:
1. Importance sampling
2. Control and antithetic variates
3. Conditioning
4. Use of special features of a problem
A Note about Hit-or-Miss Monte Carlo
Hit-or-miss Monte Carlo integration is less efficient than the original Monte Carlo method. Suppose $\theta = \int_a^b \varphi(x)\,dx$ with $0 \le \varphi(x) \le c$ over $[a,b]$. Draw a uniform sample from $U([a,b] \times [0,c])$ and set
$\hat\theta = c(b-a) \times \{\text{proportion of sample points falling in the shaded area under } \varphi\}.$
Then $\mathrm{Var}(\text{hit-or-miss Monte Carlo}) \ge \mathrm{Var}(\text{original Monte Carlo})$.
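As a quick numerical check of this comparison, here is a small Python sketch; the integrand $e^{-x}$ on $[0,1]$, the sample size, and the seed are illustrative choices, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a, b, c = 0.0, 1.0, 1.0            # phi(x) = exp(-x) satisfies 0 <= phi(x) <= c on [a, b]
phi = lambda x: np.exp(-x)
theta_true = 1 - np.exp(-1.0)      # exact value of the integral

# Original (crude) Monte Carlo: average (b - a) * phi over uniform draws on [a, b]
x = rng.uniform(a, b, n)
crude = (b - a) * phi(x)
theta_crude = crude.mean()

# Hit-or-miss: draw (X, Y) uniform on [a, b] x [0, c], count points under the curve
y = rng.uniform(0, c, n)
hits = c * (b - a) * (y < phi(x))
theta_hm = hits.mean()

# Per-draw variances: hit-or-miss is never better
var_crude, var_hm = crude.var(), hits.var()
```

Both estimators are unbiased, but the indicator in the hit-or-miss version discards the actual value of $\varphi$, which is exactly why its variance is larger.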
What is Importance Sampling
Want: estimate $\theta = \int \varphi(x) f(x)\,dx = E_f\,\varphi(X)$, where $X$ has p.d.f. $f$, estimated by $\hat\theta_f = \frac{1}{n}\sum_{i=1}^n \varphi(X_i)$ with $X_1,\dots,X_n \overset{\text{iid}}{\sim} f$.
Method: instead of sampling from $f$, sample from another p.d.f. $g$, i.e. $X_1,\dots,X_n \overset{\text{iid}}{\sim} g$, and use $\hat\theta_g = \frac{1}{n}\sum_{i=1}^n \psi(X_i)$, where $\psi = \varphi f / g$. Note that
$\theta = \int \varphi(x) f(x)\,dx = \int \varphi(x)\,\frac{f(x)}{g(x)}\,g(x)\,dx = \int \psi(x) g(x)\,dx = E\,\psi(X), \quad X \sim g.$
Unbiasedness of the Importance Sampling Estimator
Note that:
1. $\hat\theta_g$ is unbiased, since $E\hat\theta_g = E\psi(X) = \theta$.
2. $\mathrm{Var}(\hat\theta_g) = \mathrm{Var}_g(\psi(X))/n = \frac{1}{n}\int (\psi(x) - \theta)^2 g(x)\,dx = \frac{1}{n}\int \Big(\varphi(x)\frac{f(x)}{g(x)} - \theta\Big)^2 g(x)\,dx,$
which can be made small if we choose $g$ such that $\psi = \varphi f/g$ is close to a constant. Hence choose $g$ such that $g \approx c\,\varphi f$ and $g$ is easier to sample from than $f$.
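To make the choice of $g$ concrete outside the Cauchy setting, here is a hedged Python sketch for the tail probability $P(N(0,1) > 3)$, using a shifted exponential as the proposal $g$; the target, the proposal, the sample size, and the seed are all illustrative choices:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 100_000
theta_true = 0.5 * (1 - erf(3 / sqrt(2)))    # P(N(0,1) > 3), about 1.35e-3

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) density

# Naive: phi(X) = I(X > 3), X ~ f; almost every draw contributes 0
x = rng.standard_normal(n)
naive = (x > 3).astype(float)

# Importance sampling: g(x) = exp(-(x - 3)) on (3, inf), concentrated where phi = 1
y = 3 + rng.exponential(1.0, n)
g = np.exp(-(y - 3))
psi = f(y) / g               # psi = phi * f / g, with phi(y) = 1 on the support of g
theta_is = psi.mean()
```

Since $\psi$ is nearly constant over the region where $g$ puts its mass, $\mathrm{Var}_g(\psi)$ is orders of magnitude below the Bernoulli variance of the naive indicator.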
Cauchy Example, continued
$\theta = P(\text{Cauchy} > 2)$, $f(x) = \frac{1}{\pi(1+x^2)}$ (Cauchy p.d.f.), $\varphi(x) = I(x > 2)$. So we want $g$ roughly proportional to $\frac{1}{\pi(1+x^2)} I(x>2)$. Try $g(x) = \frac{2}{x^2}\,I(x > 2)$, the p.d.f. of $2/U$ with $U \sim U(0,1)$, equivalently of $1/U$ with $U \sim U(0,1/2)$. Then
$\psi(x) = \varphi(x)\,\frac{f(x)}{g(x)} = \frac{1}{2\pi}\cdot\frac{x^2}{1+x^2}\,I(x>2),$
a monotone increasing function over $(2,\infty)$ with $x^2/(1+x^2)$ bounded between $4/5$ and $1$, so $\psi \approx$ constant. One can show that $\mathrm{Var}_g(\psi(X))$, and hence $\mathrm{Var}(\hat\theta_g)$, is small.
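The slide's choice $g(x) = (2/x^2)\,I(x>2)$ can be simulated directly as $X = 2/U$; a minimal sketch, with sample size and seed as arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
theta_true = 0.5 - np.arctan(2) / np.pi      # P(Cauchy > 2), about 0.1476

u = rng.uniform(0, 1, n)
x = 2 / u                                    # X = 2/U has density g(x) = 2/x^2 on (2, inf)
psi = x**2 / (2 * np.pi * (1 + x**2))        # psi = phi * f / g, values in (2/(5*pi), 1/(2*pi))
theta_g = psi.mean()

# Naive comparison: sample from the Cauchy itself and count exceedances
naive = (rng.standard_cauchy(n) > 2).astype(float)
```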
Note about the Cauchy Example
$\theta = \int \psi(x) g(x)\,dx = \frac{1}{2\pi}\int_2^\infty \frac{x^2}{1+x^2}\,g(x)\,dx.$
Substituting $y = x^{-1}$ (so $dx = -y^{-2}\,dy$ and $g(y^{-1}) = 2y^2$),
$\theta = \int_0^{1/2} \frac{1}{\pi(1+y^2)}\,dy,$
i.e. the Monte Carlo integration $\int_0^{1/2} f(y)\,dy$ of Class Notes #12.
What are Control Variates
Goal: estimate $\theta = E\varphi(X)$.
Method: find $\psi(\cdot)$ such that $\mu_\psi = E\psi(X) = \int_{\mathbb{R}} \psi(x) f(x)\,dx$ is known and easy to calculate. The idea is to choose $\psi(\cdot)$ so that $\psi \approx \varphi$; $\psi(X)$ is a control variate. Then
$\theta = E\varphi(X) = E[\varphi(X) - \psi(X)] + E\psi(X) = E[\varphi(X) - \psi(X)] + \mu_\psi,$
and we estimate $\theta$ by $\hat\theta = \mu_\psi + \frac{1}{n}\sum_{i=1}^n [\varphi(X_i) - \psi(X_i)]$, where $X_1,\dots,X_n \overset{\text{iid}}{\sim} f$. Then
$\mathrm{Var}(\hat\theta) = \frac{1}{n}\,\mathrm{Var}[\varphi(X) - \psi(X)] \ll \frac{1}{n}\,\mathrm{Var}\,\varphi(X) \quad \text{if } \psi \approx \varphi.$
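A minimal sketch of this recipe for the toy target $\theta = E\,e^U$, $U \sim U(0,1)$, with the first-order Taylor control $\psi(u) = 1 + u$; all of these choices are illustrative, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
u = rng.uniform(0, 1, n)
theta_true = np.e - 1             # E[e^U] = e - 1

phi = np.exp(u)
psi = 1 + u                       # control variate with known mean
mu_psi = 1.5                      # E[1 + U] = 3/2

theta_plain = phi.mean()
theta_cv = mu_psi + (phi - psi).mean()

# The residual phi - psi varies much less than phi itself
var_plain, var_cv = phi.var(), (phi - psi).var()
```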
How to Find Control Variates
Example: $X_1,\dots,X_n \overset{\text{iid}}{\sim} F$ with known mean $\mu$. Want to estimate $\theta = E\varphi(X_1,\dots,X_n) = E(\text{median}) = E X_{(m+1)}$ if $n = 2m+1$. Choose $\psi(X) = \bar X$, the sample mean, so that $E\psi(X) = \mu\,(= \mu_\psi)$. Then estimate $\theta$ by
$\hat\theta = \mu + \frac{1}{N}\sum_{i=1}^N \big[\varphi(X^{(i)}) - \psi(X^{(i)})\big],$
where $X^{(i)} = (X_1^{(i)},\dots,X_n^{(i)}) \overset{\text{iid}}{\sim} F$ is the $i$th sample of size $n = 2m+1$, for $i = 1,\dots,N$. $\hat\theta$ often has smaller variance than $\frac{1}{N}\sum_{i=1}^N \varphi(X^{(i)})$.
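A sketch of this median example; the notes leave $F$ generic, so $F = N(\mu, 1)$ with $\mu = 5$ and the replication counts are assumed choices here:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 20_000, 11                # N replications, each a sample of size n = 2m + 1
mu = 5.0                         # known population mean

x = rng.normal(mu, 1.0, size=(N, n))
phi = np.median(x, axis=1)       # phi(X) = sample median; target theta = E[median]
psi = x.mean(axis=1)             # control: sample mean, with E[psi(X)] = mu known

theta_naive = phi.mean()
theta_cv = mu + (phi - psi).mean()

var_naive, var_cv = phi.var(), (phi - psi).var()
```

For a symmetric $F$ the median and the mean are strongly positively correlated, so subtracting the mean removes much of the median's sampling variability.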
General Result
In general, $\psi(x)$ can be taken as an approximation of $\varphi(x)$ based on some basis $\{b_k(x)\}$: $\psi(x) = \sum_{k=1}^K a_k b_k(x)$, where each $E b_k(X) = \mu_k$ is known. If the $a_k$'s are unknown, estimate them by least squares on an initial sample to obtain $\hat a_k$'s. Use $\hat\psi(x) = \sum_{k=1}^K \hat a_k b_k(x)$ and estimate $\theta$ by
$\hat\theta = \sum_{k=1}^K \hat a_k \mu_k + \frac{1}{n}\sum_{i=1}^n \big[\varphi(X^{(i)}) - \hat\psi(X^{(i)})\big].$
Cauchy Example, revisited 1
The first few terms of the Taylor expansion of $\frac{1}{1+x^2}$ suggest considering $x^2$ and $x^4$. Write
$\theta = P(X > 2) = \frac{1}{2} - P(0 \le X \le 2) = \frac{1}{2} - \int_0^2 \frac{1}{\pi(1+x^2)}\,dx = \frac{1}{2} - \frac{2}{\pi}\,E\varphi(X),$
where $f(x) = \frac{1}{\pi(1+x^2)}$ is the Cauchy p.d.f., $g(x) = \frac{1}{2} I_{[0,2]}(x)$ is the p.d.f. of $U(0,2)$, and $\varphi(x) = \frac{1}{1+x^2}$ with $X \sim U(0,2)$. Approximate $\varphi(x)$ by $\psi(x) = a_0 + a_2 x^2 + a_4 x^4$: an initial sample $x_1,\dots,x_5 \sim U(0,2)$ yielded:
Cauchy Example, revisited 1, continued
(table of the five observations $x$ and $\varphi(x)$)
Regressing $y = \varphi(x)$ on $x^2, x^4$ gave $\hat a_0 = .943$, $\hat a_2 = -.513$, and a fitted $\hat a_4$; that is, $\hat\psi(x) = \hat a_0 + \hat a_2 x^2 + \hat a_4 x^4$. Then
$\hat\theta = \frac{1}{2} - \frac{2}{\pi}\Big[\big(\hat a_0 + \tfrac{4}{3}\hat a_2 + \tfrac{16}{5}\hat a_4\big) + \frac{1}{n}\sum_{i=1}^n \big[\varphi(x_i) - \hat\psi(x_i)\big]\Big].$
Here $E X^k = \int_0^2 x^k\,\tfrac{1}{2}\,dx = \frac{2^k}{k+1}$, so $E X^2 = \frac{4}{3}$ and $E X^4 = \frac{16}{5}$.
Cauchy Example, revisited 1, continued
$\mathrm{Var}(\hat\theta) = \frac{4}{\pi^2 n}\,\mathrm{Var}\big(\varphi(X) - \hat\psi(X)\big),$ a small multiple of $1/n$. Plotting $\varphi(x)$ and $\hat\psi(x)$ suggests that we would do better by also including terms in $x$ and $x^3$. Regressing $y$ on $x, x^2, x^3, x^4$ gives $\hat a_0 = 1.012$, $\hat a_1 = -.069$, $\hat a_2 = -1.076$, $\hat a_3 = .813$, $\hat a_4 = -.181$, with a still smaller variance over $n$.
(figure: $\varphi(x)$ versus $\hat\psi(x)$)
Cauchy Example, revisited 1, continued
Note: one can increase the precision by increasing the initial sample size. For instance, an initial sample of size 10 yielded 0.036, 1.642, 1.829, 1.420, 0.532, 1.974, 1.245, 1.615, 0.303, ... Fitting $\varphi(x)$ on a polynomial of degree 4 in $x$ yielded $\hat\psi(x) = \hat a_0 + \hat a_1 x + \hat a_2 x^2 + \hat a_3 x^3 + \hat a_4 x^4$ (with $\hat a_2 = -.962$ and $\hat a_4 = -.141$) and an even smaller $\mathrm{Var}(\hat\theta)$, again of order $1/n$.
Cauchy Example, revisited 2
$\theta = P(X > 2) = \frac{1}{2}\,E f(Y)$, $Y \sim U(0,\tfrac12)$, i.e.
$\theta = \frac{1}{2\pi}\,E\varphi(Y), \quad \text{where } \varphi(y) = \frac{1}{1+y^2},\ Y \sim U(0,\tfrac12).$
Approximate $\varphi(y)$ by $\psi(y) = a_0 + a_1 y + a_2 y^2 + a_3 y^3 + a_4 y^4$: with observations $y$ sampled from $U(0,\tfrac12)$ and $z = \varphi(y)$, regressing $z$ on $y, y^2, y^3, y^4$ gives $\hat a_0 = .997$, $\hat a_1 = .056$, $\hat a_2 = -1.376$, $\hat a_3 = 1.078$, $\hat a_4 = -.246$. That is,
$\hat\psi(y) = .997 + .056y - 1.376y^2 + 1.078y^3 - .246y^4.$
Cauchy Example, revisited 2, continued
$E Y^k = \int_0^{1/2} y^k \cdot 2\,dy = \frac{1}{2^k(k+1)}$, so
$\hat\theta = \frac{1}{2\pi}\Big[\sum_k \hat a_k\,E Y^k + \frac{1}{n}\sum_{i=1}^n \big(\varphi(y_i) - \hat\psi(y_i)\big)\Big].$
$\mathrm{Var}(\hat\theta) = \frac{1}{4\pi^2 n}\,\mathrm{Var}\big(\varphi(Y) - \hat\psi(Y)\big)$, where $Y \sim U(0,\tfrac12)$, a tiny constant over $n$. Taking $n = 1$ (a sample of size 1!) already gives an accurate enough approximation to $\theta$.
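The whole regression-based construction for revisited 2 fits in a few lines; a sketch in which the initial-sample size, the main sample size, and the seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = 0.5 - np.arctan(2) / np.pi

phi = lambda y: 1 / (1 + y**2)     # theta = E[phi(Y)] / (2*pi), Y ~ U(0, 1/2)

# Least-squares fit of psi-hat(y) = sum_k a_k y^k, k = 0..4, on an initial sample
y0 = rng.uniform(0, 0.5, 10)
A = np.vander(y0, 5, increasing=True)          # columns 1, y, y^2, y^3, y^4
a_hat, *_ = np.linalg.lstsq(A, phi(y0), rcond=None)

# Known basis means: E[Y^k] = 1 / (2^k (k+1)) for Y ~ U(0, 1/2)
EYk = np.array([1 / (2**k * (k + 1)) for k in range(5)])

n = 100
y = rng.uniform(0, 0.5, n)
psi_hat = np.vander(y, 5, increasing=True) @ a_hat
theta_hat = (a_hat @ EYk + (phi(y) - psi_hat).mean()) / (2 * np.pi)
```

Even a modest $n$ is enough because the residual $\varphi - \hat\psi$ is tiny on $(0, \tfrac12)$, which is the slide's point about $n = 1$.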
Antithetic Variates Method
Goal: estimate $\theta = E\varphi(X)$.
Method: find $\psi$ such that $\psi(X)$ has the same distribution as $\varphi(X)$ and $\mathrm{cor}(\varphi(X), \psi(X)) < 0$. Then $\theta = \frac{1}{2}E[\varphi(X) + \psi(X)]$, and we estimate $\theta$ by $\hat\theta = \frac{1}{2n}\sum_{i=1}^n [\varphi(X_i) + \psi(X_i)]$.
$\mathrm{Var}(\hat\theta) = \frac{1}{4n}\,\mathrm{Var}[\varphi(X) + \psi(X)] = \frac{1}{4n}\big[\mathrm{Var}\,\varphi(X) + \mathrm{Var}\,\psi(X) + 2\,\mathrm{cov}(\varphi(X), \psi(X))\big] = \frac{1}{2n}\big[\mathrm{Var}\,\varphi(X) + \mathrm{cov}(\varphi(X), \psi(X))\big] = \frac{\mathrm{Var}\,\varphi(X)}{2n}\big[1 + \mathrm{cor}(\varphi(X), \psi(X))\big].$
Consequently, there are gains (a reduction in variance), since $\mathrm{cor}(\varphi(X), \psi(X)) < 0$.
How to Choose Antithetic Variates
Theorem (Ripley). If $U \sim U(0,1)$ then $E\varphi(U) = E\varphi(1-U)$ and $\mathrm{Var}\,\varphi(U) = \mathrm{Var}\,\varphi(1-U)$. If, in addition, $\varphi$ is monotone, then $\mathrm{cor}(\varphi(U), \varphi(1-U)) < 0$.
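The theorem's conclusion is easy to check by simulation; a sketch with the monotone choice $\varphi(t) = e^t$ (the function, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(0, 1, 100_000)
phi = lambda t: np.exp(t)             # any monotone phi works

a, b = phi(u), phi(1 - u)
corr = np.corrcoef(a, b)[0, 1]        # negative, as the theorem asserts
mean_gap = abs(a.mean() - b.mean())   # ~0: same distribution, hence same mean
```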
Remarks on the Theorem
1. Reason for monotonicity: $\theta = E\varphi(X) = EZ$, where $Z = \varphi(X) \sim F_Z$; since $F_Z$ is monotone, so is $F_Z^{-1}$, and $\theta = EZ = E\,F_Z^{-1}(U)$. So we can choose $\psi(X) = F_Z^{-1}(1-U) = F_Z^{-1}(1 - F_Z(Z))$.
2. Theorem (Ripley; a generalization): if $X$ is symmetric about $c$, then one can choose $\psi(X) = \varphi(2c - X)$.
3. Use another method if $F_Z^{-1}$ is hard to calculate or $\varphi$ is not monotone.
Example
If $\varphi$ is symmetric about $\frac12$, then $\mathrm{cor}(\varphi(U), \varphi(1-U)) = 1$, so antithetic variates give no gain. Better to use
$g(x) = \begin{cases} x + \tfrac12, & x < \tfrac12 \\ x - \tfrac12, & x > \tfrac12 \end{cases}$
and $\psi(X) = \varphi(g(X))$. Then $E\psi(X) = E\varphi(X)$ for $X \sim U(0,1)$, since $g(X) \sim U(0,1)$ as well.
Cauchy Example, revisited 2
$\theta = \int_2^\infty \frac{1}{\pi(1+x^2)}\,dx = \frac{1}{2} - E h(X), \quad h(x) = \frac{2}{\pi(1+x^2)},\ X \sim U(0,2),$
$= \frac{1}{2} - E h(2U) = \frac{1}{2} - E\varphi(U),$
by letting $X = 2U$, $U \sim U(0,1)$, $\varphi(u) = h(2u) = \frac{2}{\pi(1+4u^2)}$. So, since $\varphi$ is monotone, we can estimate $\theta$ by
$\hat\theta = \frac{1}{2} - \frac{1}{2n}\sum_{i=1}^n \big[\varphi(U_i) + \varphi(1-U_i)\big],$
with $\mathrm{Var}(\hat\theta) = \frac{1}{4n}\,\mathrm{Var}\big[\varphi(U) + \varphi(1-U)\big]$, a tiny constant over $n$.
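A sketch of this antithetic estimator, with $\varphi(u) = 2/(\pi(1+4u^2))$ as on the slide above; the sample size and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
theta_true = 0.5 - np.arctan(2) / np.pi

phi = lambda u: 2 / (np.pi * (1 + 4 * u**2))   # monotone decreasing on (0, 1)
u = rng.uniform(0, 1, n)

pairs = 0.5 * (phi(u) + phi(1 - u))            # antithetic pair averages
theta_anti = 0.5 - pairs.mean()

# Plain version on 2n independent draws, for a cost-matched comparison
u2 = rng.uniform(0, 1, 2 * n)
theta_plain = 0.5 - phi(u2).mean()
```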
About Conditioning
Recall:
1. $E[X \mid Y = y] = h(y)$, a function of $y$.
2. $EX = E\big[E[X \mid Y]\big]$.
3. $\mathrm{Var}(X) = \mathrm{Var}\big(E[X \mid Y]\big) + E\big[\mathrm{Var}(X \mid Y)\big]$, so that $\mathrm{Var}\big(E[X \mid Y]\big) \le \mathrm{Var}(X)$, with equality iff $X = g(Y)$ (in which case $\mathrm{Var}(X \mid Y = y) = \mathrm{Var}(g(y) \mid y) = 0$!).
Goal: estimate $\theta = E\varphi(X)$.
Method: if there exists $Y$ such that $\psi(y) = E[\varphi(X) \mid Y = y]$ can be easily calculated, then estimate $\theta$ by $\hat\theta = \frac{1}{n}\sum_{i=1}^n \psi(Y_i)$, with
$\mathrm{Var}(\hat\theta) = \frac{1}{n}\,\mathrm{Var}(\psi(Y)) = \frac{1}{n}\,\mathrm{Var}\big(E[\varphi(X) \mid Y]\big) \le \frac{1}{n}\,\mathrm{Var}(\varphi(X)).$
Example
Suppose $X = Z/W$, $Z \sim N(0,1)$, $W > 0$, with $W$ and $Z$ independent (recall the "swindle"). The goal is to estimate $\theta = P(\bar X_n > c)$, where $\bar X_n$ is the sample mean of $X_1,\dots,X_n$.
No conditioning (naive!): $\hat\theta = \frac{1}{N}\,\#\{\bar X_n^{(j)} > c,\ 1 \le j \le N\}$, where $\bar X_n^{(j)}$ is the sample mean of the $j$th sample $\{X_1^{(j)},\dots,X_n^{(j)}\}$, $1 \le j \le N$.
Example, continued
Conditioning: write $W = (W_1,\dots,W_n)^t$ and $w = (w_1,\dots,w_n)$. Given $W = w$,
$\bar X_n = \frac{1}{n}\sum_{i=1}^n \frac{Z_i}{w_i} \sim N\big(0, \sigma^2(w)\big), \quad \sigma^2(w) = \frac{1}{n^2}\sum_{i=1}^n \frac{1}{w_i^2}.$
So $P(\bar X_n > c \mid W = w) = 1 - \Phi\big(\frac{c}{\sigma(w)}\big) \equiv \psi(w)$, where $\Phi(\cdot)$ is the standard normal c.d.f. Therefore, a better estimate is $\hat\theta = \frac{1}{N}\sum_{j=1}^N \psi(W^{(j)})$, where $W^{(j)}$ denotes the $j$th sample $\{W_1^{(j)},\dots,W_n^{(j)}\}$ from $W$, $1 \le j \le N$.
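A sketch of both estimators; the notes leave the law of $W$ unspecified, so $W_i = \sqrt{\chi^2_k / k}$ (which makes each $X_i$ a $t_k$ variate) is an assumed choice here, as are $n$, $N$, $c$, and the seed:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(8)
Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))   # standard normal c.d.f.

n, N, c = 5, 20_000, 0.5
k = 5
w = np.sqrt(rng.chisquare(k, size=(N, n)) / k) # assumed law of W; any W > 0 works
z = rng.standard_normal((N, n))
xbar = (z / w).mean(axis=1)                    # sample means of X_i = Z_i / W_i

# Naive: indicator of the event, one 0/1 value per replication
naive = (xbar > c).astype(float)
theta_naive = naive.mean()

# Conditioning on W: Xbar | W = w ~ N(0, sigma2(w)), sigma2(w) = sum(1/w_i^2) / n^2
sigma = np.sqrt((1 / w**2).sum(axis=1)) / n
psi = np.array([1 - Phi(c / s) for s in sigma])
theta_cond = psi.mean()

var_naive, var_cond = naive.var(), psi.var()
```

Replacing the 0/1 indicator by its conditional expectation given $W$ integrates out the $Z$-noise analytically, which is exactly the variance split $\mathrm{Var}(X) = \mathrm{Var}(E[X \mid Y]) + E[\mathrm{Var}(X \mid Y)]$ at work.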
Lecture 3: Expected Value 1.) Definitions. If X 0 is a random variable on (Ω, F, P), then we define its expected value to be EX = XdP. Notice that this quantity may be. For general X, we say that EX exists
More informationMonte Carlo Solution of Integral Equations
1 Monte Carlo Solution of Integral Equations of the second kind Adam M. Johansen a.m.johansen@warwick.ac.uk www2.warwick.ac.uk/fac/sci/statistics/staff/academic/johansen/talks/ March 7th, 2011 MIRaW Monte
More informationDIFFERENT KINDS OF ESTIMATORS OF THE MEAN DENSITY OF RANDOM CLOSED SETS: THEORETICAL RESULTS AND NUMERICAL EXPERIMENTS.
DIFFERENT KINDS OF ESTIMATORS OF THE MEAN DENSITY OF RANDOM CLOSED SETS: THEORETICAL RESULTS AND NUMERICAL EXPERIMENTS Elena Villa Dept. of Mathematics Università degli Studi di Milano Toronto May 22,
More informationProbability and Statistics Notes
Probability and Statistics Notes Chapter Five Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Five Notes Spring 2011 1 / 37 Outline 1
More informationRandom Variables. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay
1 / 13 Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay August 8, 2013 2 / 13 Random Variable Definition A real-valued
More informationSDS 321: Practice questions
SDS 2: Practice questions Discrete. My partner and I are one of married couples at a dinner party. The 2 people are given random seats around a round table. (a) What is the probability that I am seated
More information3 Conditional Expectation
3 Conditional Expectation 3.1 The Discrete case Recall that for any two events E and F, the conditional probability of E given F is defined, whenever P (F ) > 0, by P (E F ) P (E)P (F ). P (F ) Example.
More informationPCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities
PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets
More informationMonte Carlo Methods for Statistical Inference: Variance Reduction Techniques
Monte Carlo Methods for Statistical Inference: Variance Reduction Techniques Hung Chen hchen@math.ntu.edu.tw Department of Mathematics National Taiwan University 3rd March 2004 Meet at NS 104 On Wednesday
More informationChapter 4. Continuous Random Variables 4.1 PDF
Chapter 4 Continuous Random Variables In this chapter we study continuous random variables. The linkage between continuous and discrete random variables is the cumulative distribution (CDF) which we will
More informationRandom Variables and Their Distributions
Chapter 3 Random Variables and Their Distributions A random variable (r.v.) is a function that assigns one and only one numerical value to each simple event in an experiment. We will denote r.vs by capital
More informationEconomics 583: Econometric Theory I A Primer on Asymptotics
Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d
More informationThe distribution inherited by Y is called the Cauchy distribution. Using that. d dy ln(1 + y2 ) = 1 arctan(y)
Stochastic Processes - MM3 - Solutions MM3 - Review Exercise Let X N (0, ), i.e. X is a standard Gaussian/normal random variable, and denote by f X the pdf of X. Consider also a continuous random variable
More informationAnalysis of Engineering and Scientific Data. Semester
Analysis of Engineering and Scientific Data Semester 1 2019 Sabrina Streipert s.streipert@uq.edu.au Example: Draw a random number from the interval of real numbers [1, 3]. Let X represent the number. Each
More information
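To make the importance-sampling recipe from these notes concrete, here is a minimal sketch in Python. It estimates θ = E_f[φ(X)] by drawing X_i from a proposal g and averaging ψ(X_i) = φ(X_i) f(X_i)/g(X_i). The toy target (a standard-normal tail probability P(Z > 3)) and the shifted-normal proposal are illustrative choices, not from the notes; the proposal N(3, 1) is picked so that g is roughly proportional to φf, as the slides recommend.

```python
import math
import random


def importance_sampling(phi, f, g, sample_g, n=100_000, seed=1):
    """Estimate theta = E_f[phi(X)] by sampling X_i ~ g and
    averaging psi(X_i) = phi(X_i) * f(X_i) / g(X_i)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_g(rng)
        total += phi(x) * f(x) / g(x)
    return total / n


def std_normal_pdf(x):
    # Target density f: standard normal N(0, 1).
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)


def shifted_pdf(x):
    # Proposal density g: N(3, 1), which concentrates mass in the
    # region {x > 3} where the integrand phi * f actually lives.
    return std_normal_pdf(x - 3.0)


# Toy target: theta = P(Z > 3) for Z ~ N(0, 1), about 1.35e-3.
# Naive sampling from f wastes nearly every draw; sampling from g
# makes almost every draw contribute, shrinking the variance.
est = importance_sampling(
    phi=lambda x: 1.0 if x > 3.0 else 0.0,
    f=std_normal_pdf,
    g=shifted_pdf,
    sample_g=lambda rng: rng.gauss(3.0, 1.0),
)
print(est)  # close to 1 - Phi(3) ≈ 0.00135
```

Note the weight f(x)/g(x) = exp(-3x + 4.5) is bounded by exp(-4.5) on {x > 3}, so the estimator's variance is far smaller than that of naive Monte Carlo for this tail probability, illustrating the slide's advice to choose g ∝ φf where possible.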