MTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6
Generic algorithms for non-uniform variates

We have seen that sampling from a uniform distribution in [0, 1] is a relatively straightforward process, and all the programming languages, libraries, and environments for scientific computing normally provide either a function to sample uniformly in [0, 1] (that function is called rand() in Octave/Matlab) or a function to sample a value uniformly at random among a set of integers (as, for instance, the function randi() in Octave/Matlab). However, in many practical situations one needs to draw random samples from distributions other than the uniform one. In this section we review a few algorithms to sample from non-uniform distributions. In particular, we first focus on two algorithms that can be used (at least in principle) to sample from any generic distribution, namely the Inverse Function method and the Acceptance-Rejection method. Then we present two algorithms which allow us to sample efficiently from a Gaussian distribution, namely the Box-Muller algorithm and the Marsaglia Polar Method. Incidentally, all these algorithms are based on the possibility of drawing samples from a uniform distribution, which are then appropriately transformed in order to guarantee that the resulting values are distributed according to the desired probability density function (or probability distribution function, in the case of discrete event spaces).

4.6 Inverse Function method

Let us consider a continuous random variable X with probability density function f(x) on a sample space \Omega. The cumulative distribution function (CDF) is defined as the integral of the probability density function f(x):

    F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt.

Note that, since f(x) is a probability density function, F(x) takes values between 0 and 1, and is monotonically increasing (see Fig. 1).
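All of the algorithms below build on uniform sampling. For readers working outside Octave/Matlab, the two uniform primitives mentioned at the start of this section (rand and randi) have direct counterparts in Python's standard library; a minimal sketch:

```python
import random

# Continuous uniform sample in [0, 1), the counterpart of Octave's rand():
u = random.random()

# Uniform random integer in {1, ..., 10}, the counterpart of randi(10):
k = random.randint(1, 10)
```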
Similarly, if the random variable X is discrete, the cumulative distribution function is:

    F(x) = P(X \le x) = \sum_{x_i \le x} P(X = x_i).

We now consider F as a function of the random variable X:

    Y = F(X)

which gives us a new random variable Y. Note that for any random variable X with probability density function f(x), the random variable Y spans the interval [0, 1] uniformly at random. Therefore, if we find the inverse of F, and apply that to a uniform
Figure 1: Probability density function (PDF) and cumulative distribution function (CDF) of the Normal distribution.

distribution, we can generate a random variable with density f(x). In particular, we mean that if we consider

    X = F^{-1}(Y)    with Y \sim U(0, 1),

then the value X that we obtain has probability density function f(x). Notice that F^{-1} always exists, by definition, since F is a monotonically increasing function. For the sake of simplicity, and without loss of generality, we will focus in the following on the case of scalar real-valued probability density functions, i.e. f : R \to R. The Inverse Function method to generate a random variable with probability density function f(x) consists of the following steps:

1. Calculate the CDF associated to f(x) as F(x) = \int_{-\infty}^{x} f(t)\,dt.
2. Find the inverse function X = F^{-1}(Y).
3. Sample the random variable u from U(0, 1), i.e. from the uniform distribution in [0, 1].
4. Return the sample Z = F^{-1}(u).
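The four steps above can be sketched in Python. The target density used here is a hypothetical example, not from the notes: f(x) = 2x on [0, 1] has CDF F(x) = x^2, hence F^{-1}(u) = sqrt(u).

```python
import math
import random

def inverse_sample(F_inv, n):
    """Steps 3-4 of the Inverse Function method: draw n uniform
    variates in [0, 1) and push each through the inverse CDF F_inv."""
    return [F_inv(random.random()) for _ in range(n)]

# Hypothetical target: f(x) = 2x on [0, 1], so F(x) = x^2 and F^{-1}(u) = sqrt(u).
random.seed(0)
samples = inverse_sample(math.sqrt, 200000)

# The mean of f(x) = 2x on [0, 1] is 2/3; the sample mean should be close.
mean = sum(samples) / len(samples)
```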
4.6.1 Example 1: exponential distribution

As an example, let us consider the exponential distribution:

    f(x) = \lambda e^{-\lambda x},  x \ge 0.

It is easy to verify that f(x) satisfies all the properties of a probability density function, namely f(x) \ge 0 for all x \ge 0, and:

    \int_0^{\infty} \lambda e^{-\lambda x}\,dx = \left[ -e^{-\lambda x} \right]_0^{\infty} = 1.

So, following the procedure sketched above:

1. We compute the cumulative distribution function (CDF) associated to f(x):

    F(x) = \int_0^{x} f(t)\,dt = \lambda \int_0^{x} e^{-\lambda t}\,dt = \left[ -e^{-\lambda t} \right]_0^{x} = 1 - e^{-\lambda x}.

2. We find the inverse of the CDF, x = F^{-1}(u):

    u = F(x) = 1 - e^{-\lambda x}
    e^{-\lambda x} = 1 - u
    x = -\frac{1}{\lambda} \log(1 - u)

so that F^{-1}(u) = -\frac{1}{\lambda} \log(1 - u).

3. We sample a variable u \sim U(0, 1), and compute Z = F^{-1}(u).

4. The random variable Z has probability density function f(x)!

A possible Octave/Matlab implementation of the algorithm for generating a random variable with an exponential distribution will look like:

function z = exp_sample(lambda)
  u = rand;
  z = -1/lambda * log(1-u);
end
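The same sampler can be sanity-checked numerically; the Python sketch below mirrors the Octave function above (the rate lam = 2 and the tolerance are arbitrary choices for the check):

```python
import math
import random

def exp_sample(lam):
    """Inverse-function sampling from the exponential distribution
    with rate lam: Z = -log(1 - U)/lam with U ~ U(0, 1)."""
    u = random.random()
    return -math.log(1.0 - u) / lam

# The exponential distribution with rate lam has mean 1/lam,
# so the empirical mean of many samples should approach it.
random.seed(1)
lam = 2.0
samples = [exp_sample(lam) for _ in range(100000)]
mean = sum(samples) / len(samples)
```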
4.6.2 Example 2: Rayleigh distribution

Another example in which the Inverse Function method works seamlessly is that of the Rayleigh distribution, whose probability density function is:

    f(x) = x e^{-x^2/2},  x \ge 0.

It is easy to verify that f(x) has all the properties of a probability density function, namely f(x) \ge 0 for all x \ge 0, and:

    \int_0^{+\infty} x e^{-x^2/2}\,dx = \left[ -e^{-x^2/2} \right]_0^{+\infty} = 1.

We apply the same procedure to sample from the Rayleigh distribution using the Inverse Function method, namely:

i) We first compute the CDF of the Rayleigh distribution:

    F(x) = \int_0^{x} f(t)\,dt = \int_0^{x} t e^{-t^2/2}\,dt = 1 - e^{-x^2/2}.

ii) Compute the inverse of the CDF:

    u = F(x) = 1 - e^{-x^2/2}
    1 - u = e^{-x^2/2}
    x = \sqrt{-2 \ln(1 - u)}

iii) Then we sample u from the uniform distribution U(0, 1).

iv) The random variable Z = F^{-1}(u) is Rayleigh distributed.

A possible Octave/Matlab implementation of a function to sample from the Rayleigh distribution would look like this:

function z = sample_rayleigh()
  u = rand;
  z = sqrt(-2 * log(1-u));
end

4.7 Acceptance-Rejection method

The Inverse Function method looks indeed pretty general, and in principle it could be used to sample from any probability density function, since the existence (and invertibility) of the CDF associated to a certain PDF is guaranteed by the definition of CDF. The only caveat is that, although the CDF can always be computed, it might not be possible
Figure 2: Graphical interpretation of the Acceptance-Rejection method to sample from continuous probability density functions.

to express the CDF in a closed form. This happens, for instance, for all the probability density functions which do not have an elementary primitive function. A typical example is that of the Gaussian distribution:

    f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

which does not have a simple primitive function. In fact, the cumulative distribution function is the integral

    F(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,dt

which does not have an explicit inverse. Even if the cumulative distribution function cannot be expressed in terms of elementary functions, it is still possible to estimate the inverse of the CDF by numerically solving the corresponding integral, as happens for instance for the Gaussian distribution, but this would in general require additional heavy computations. The Acceptance-Rejection method is a concrete alternative to the Inverse Function method whenever computing the inverse of the CDF is difficult. Assuming that we want to draw random samples from the probability density function f(x), x \in \Omega, the Acceptance-Rejection method consists of the following steps:

1. Find a function g(x) such that g(x) \ge f(x) for all x \in \Omega, and define A_g = \int_{\Omega} g(x)\,dx.
2. Sample a random variable \bar{x} from the probability density function g(x)/A_g, e.g. using the Inverse Function method.
3. Sample a random variable \bar{y} uniformly in [0, g(\bar{x})].
4. If \bar{y} \le f(\bar{x}), then accept the sample and return \bar{x}; otherwise, reject the sample and start again from step (2) above.
5. The accepted samples will be distributed according to f(x).

Despite looking quite obscure at first glance, this procedure has a very intuitive geometrical interpretation, as sketched in Fig. 2. Let us call F and G, respectively, the regions between the x-axis and the functions f(x) and g(x). Recall that the normalisation condition for the probability density function f(x),

    \int_{\Omega} f(x)\,dx = 1,

implies that the area of the region F is A_f = 1 (light-grey area in Fig. 2). In general, if we sample a point (\bar{x}, \bar{y}) uniformly at random within the region G, then the abscissa \bar{x} will be distributed according to g(x)/A_g. In fact, by definition, the probability that the abscissa lies within the interval [\bar{x} - \delta x, \bar{x} + \delta x] is equal to the ratio between the area of the shaded dark-grey region in Fig. 2, which is equal to 2\,\delta x\, g(\bar{x}), and the area of the whole region G under the function g(x), which is equal to A_g. Hence we have P(\bar{x} - \delta x < x < \bar{x} + \delta x) = 2\,\delta x\, g(\bar{x})/A_g. In particular, the probability that the point (\bar{x}, \bar{y}) sampled uniformly at random in G lies between the x-axis and the curve f(x) is just equal to the ratio f(\bar{x})/g(\bar{x}). Hence, if we sample a point (\bar{x}, \bar{y}) uniformly at random in G, and \bar{y} < f(\bar{x}), then the abscissa \bar{x} will be distributed with probability density function f(x). Notice that, in order to sample a point uniformly at random in the region G, we just need to sample its first coordinate \bar{x} from g(x)/A_g, and its second coordinate \bar{y} uniformly at random between 0 and g(\bar{x}). Then, the Acceptance-Rejection method accepts the sample (i.e., the abscissa \bar{x}) only if \bar{y} < f(\bar{x}). The fundamental hypothesis of the Acceptance-Rejection method is that we are able to draw random samples from g(x)/A_g, for instance by using the Inverse Function method.
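The five steps can be sketched in Python on a hypothetical target not taken from the notes: f(x) = 6x(1 - x) on [0, 1], whose maximum is 1.5, with the constant envelope g(x) = 1.5, so that A_g = 1.5 and g(x)/A_g is simply U(0, 1):

```python
import random

def accept_reject():
    """Acceptance-Rejection sampling from f(x) = 6x(1-x) on [0, 1],
    using the constant envelope g(x) = 1.5 >= f(x), with A_g = 1.5."""
    while True:
        x_bar = random.random()            # step 2: x from g/A_g = U(0, 1)
        y_bar = random.uniform(0.0, 1.5)   # step 3: y uniform in [0, g(x)]
        if y_bar <= 6.0 * x_bar * (1.0 - x_bar):
            return x_bar                   # step 4: accept
        # otherwise reject and loop back to step 2

# f is symmetric about 1/2, so the empirical mean should be close to 0.5.
random.seed(2)
samples = [accept_reject() for _ in range(100000)]
mean = sum(samples) / len(samples)
```

Here the acceptance rate is 1/A_g = 2/3, illustrating why one wants g(x) as close to f(x) as possible.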
This means that, in principle, we could set g(x) equal to any multiple of a distribution function having the same event space \Omega as f(x), such that g(x) \ge f(x) for all x \in \Omega. However, it is evident that the only points which will lead to an accepted (successful) sample are those which fall in the area F below the function f(x) (indicated in light-grey in Fig. 2). This means that the acceptance rate, i.e. the fraction of points which yield an accepted sample, is equal to 1/A_g. Consequently, in order to maximise the acceptance rate and to avoid performing too many useless computations, we would have to minimise the area A_g, i.e. we would like g(x) to be as close as possible to f(x). In principle, it would be possible to use the simple Acceptance-Rejection method to sample from a Gaussian distribution, but that would not be very efficient. In the following sections we will review two methods specifically tailored to sample random variables from a Gaussian distribution.

4.8 A note on the Gaussian distribution

Recall the Gaussian PDF with mean \mu and variance \sigma^2:

    f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
Figure 3: Gaussian probability density with mean \mu and standard deviation \sigma.

Let us now define a new variable z = (x - \mu)/\sigma. Using the fact that probability is conserved under a transformation of variables, we can show that

    \int f(x)\,dx = \int \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\,dx
                  = \int \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} z^2}\,dz
                  = \int f(z)\,dz,

noting that dx = \sigma\,dz. So, f(z) is a Gaussian with \mu = 0 and \sigma = 1. In other words, the random variable Z \sim N(0, 1). Therefore, the random variable X = \sigma Z + \mu follows a Gaussian PDF with mean \mu and variance \sigma^2. This means that if we can generate a random sample Z from the distribution N(0, 1) (i.e., from a Gaussian with \mu = 0 and \sigma = 1), then we can generate a sample X from a Gaussian distribution with expected value \mu and variance \sigma^2 by computing:

    X = \sigma Z + \mu

So the problem of generating random samples from a generic Gaussian distribution is reduced to the problem of sampling from N(0, 1). But how can we generate Z \sim N(0, 1)? Using the inversion method does not work, as the Gaussian CDF is not analytically tractable and can be inverted only numerically. Several methods have been proposed to generate Gaussian random variables in an efficient way, and here we will consider two of them.

4.9 Box-Muller method

Let us consider two random variables independently drawn from the Gaussian distribution, X \sim N(0, 1) and Y \sim N(0, 1). If we consider X and Y as coordinates in two dimensions,
we can define a further two random variables R and \theta, corresponding to the polar representation of the coordinates X and Y:

    R = \sqrt{X^2 + Y^2},    \theta = \arctan(Y/X)

and, conversely,

    X = R \cos\theta,    Y = R \sin\theta.

It can be shown that R is drawn from a Rayleigh distribution, with PDF

    f(r) = r e^{-r^2/2},

and that \theta follows a uniform distribution U(0, 2\pi).

Figure 4: Rayleigh distribution.

To prove the above statements, let us consider the joint PDF f(x, y). The integral

    \int_a^b \int_c^d f(x, y)\,dx\,dy

is the probability that X lies between a and b and Y lies between c and d. We want to transform the variables X, Y to polar coordinates R, \theta. Since the two variables X and Y are drawn independently, the joint probability density function f(x, y) factorises into the product of the two corresponding marginal densities. In other words, f(x, y) = f(x) f(y), hence:

    f(x, y)\,dx\,dy = \frac{1}{2\pi}\, e^{-\frac{1}{2}(x^2 + y^2)}\,dx\,dy.
Then we change variables, i.e. we write:

    f(x, y)\,dx\,dy = \frac{1}{2\pi}\, e^{-\frac{r^2}{2}}\, J\!\left(\frac{x, y}{r, \theta}\right) dr\,d\theta,

where the determinant of the Jacobian is

    J\!\left(\frac{x, y}{r, \theta}\right) = \det \begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \\ \frac{\partial x}{\partial \theta} & \frac{\partial y}{\partial \theta} \end{pmatrix} = \det \begin{pmatrix} \cos\theta & \sin\theta \\ -r\sin\theta & r\cos\theta \end{pmatrix},

noting that x = r\cos\theta and y = r\sin\theta. So, the determinant is

    J\!\left(\frac{x, y}{r, \theta}\right) = r\cos^2\theta + r\sin^2\theta = r.

Therefore,

    f(x, y)\,dx\,dy = f(r, \theta)\,dr\,d\theta    with    f(r, \theta) = \frac{r}{2\pi}\, e^{-\frac{r^2}{2}}.

The marginal PDFs are:

    f(r) = \int_0^{2\pi} f(r, \theta)\,d\theta = r e^{-\frac{r^2}{2}}
    f(\theta) = \int_0^{\infty} f(r, \theta)\,dr = \frac{1}{2\pi}.

The Box-Muller method uses these results, and works as follows:

1. Sample R from the Rayleigh distribution, e.g. by using the Inverse Function method described in the previous section.
2. Sample \theta from the uniform distribution U(0, 2\pi) in [0, 2\pi]. This is simply done by multiplying a uniformly distributed variable by 2\pi: \theta = 2\pi\, U(0, 1) \sim U(0, 2\pi).
3. Compute X = R\cos\theta and Y = R\sin\theta.

This gives two independent random variables X and Y that both follow the Gaussian distribution N(0, 1).
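The three steps can be checked numerically. The Python sketch below implements them directly, and also applies the reduction X = \sigma Z + \mu from the previous section (the values \mu = 5 and \sigma = 2 are arbitrary choices for the check):

```python
import math
import random

def box_muller():
    """Box-Muller: sample R from the Rayleigh distribution by inversion,
    theta uniformly in [0, 2*pi), and return (R cos theta, R sin theta)."""
    u1 = random.random()
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))  # Rayleigh by inversion
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(5)
z = [v for _ in range(50000) for v in box_muller()]

# z should be N(0, 1); the affine map X = sigma*Z + mu should then
# produce samples with mean mu and variance sigma^2.
mu, sigma = 5.0, 2.0
x = [sigma * zi + mu for zi in z]
mean_x = sum(x) / len(x)
var_x = sum((xi - mean_x) ** 2 for xi in x) / len(x)
```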
Algorithm. A possible Octave/Matlab implementation of the algorithm for generating two Gaussian distributed random variables using the Box-Muller method is as follows:

function [x,y] = box_muller()
  u1 = rand;
  u2 = rand;
  r = sqrt(-2 * log(u1));
  v = 2 * pi * u2;
  x = r * cos(v);
  y = r * sin(v);
end

4.10 Marsaglia polar method

The Box-Muller method is not very efficient in practice, since it relies on the computation of trigonometric functions (which are normally relatively slow). To get around this, Marsaglia proposed a polar method for computing Gaussian random variables in the following way. Let us consider two uniformly distributed variables between -1 and 1:

    X \sim U(-1, 1),    Y \sim U(-1, 1).

Let us draw pairs of these, and accept both only if X^2 + Y^2 \le 1, rejecting both otherwise. We will then have random points that are uniformly distributed on a disc with radius 1 (see figure below).

Figure 5: Marsaglia's polar method: unit disc.

Expressing these in polar coordinates, we get two random variables:

    \omega = X^2 + Y^2 \sim U(0, 1),    \theta = \arctan(Y/X) \sim U(0, 2\pi).
Since \omega is uniformly distributed in [0, 1], R = \sqrt{-2\ln\omega} is Rayleigh distributed, and we can compute two variables that are Gaussian distributed:

    z_1 = R\cos\theta = \sqrt{-2\ln\omega}\,\frac{X}{\sqrt{\omega}}
    z_2 = R\sin\theta = \sqrt{-2\ln\omega}\,\frac{Y}{\sqrt{\omega}},

where X = \sqrt{\omega}\cos\theta and Y = \sqrt{\omega}\sin\theta. A possible Octave/Matlab implementation of the algorithm to generate Gaussian random variables following Marsaglia's polar method is the following:

function [z1,z2] = marsaglia()
  w = 2;
  while w > 1
    x = 2 * rand - 1;  %% U(-1,1)
    y = 2 * rand - 1;  %% U(-1,1)
    w = x*x + y*y;
  end
  z1 = x * sqrt(-2 * log(w) / w);
  z2 = y * sqrt(-2 * log(w) / w);
end

Notice that the Marsaglia polar method is a specific instance of an acceptance-rejection procedure. In fact, the method requires sampling in the square [-1, 1] x [-1, 1], but accepts only the points within the unit disc. It is easy to realise that the acceptance ratio of the Marsaglia polar method is equal to the ratio between the area of the unit disc and the area of the circumscribed square, i.e. to \pi/4 \approx 0.785. This means that the Marsaglia polar method rejects about 21.5% of the samples.
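A Python version of the same algorithm, mirroring the Octave code above (with a guard against the rare w = 0 case added for safety, since log(0) is undefined), can be used to check that the output is indeed standard normal:

```python
import math
import random

def marsaglia():
    """Marsaglia polar method: returns a pair of independent N(0, 1)
    samples. The w == 0 guard avoids log(0) and is an added safety."""
    w = 2.0
    while w > 1.0 or w == 0.0:
        x = 2.0 * random.random() - 1.0   # U(-1, 1)
        y = 2.0 * random.random() - 1.0   # U(-1, 1)
        w = x * x + y * y
    factor = math.sqrt(-2.0 * math.log(w) / w)
    return x * factor, y * factor

# Empirical mean and variance of the output should be close to 0 and 1.
random.seed(4)
samples = [z for _ in range(50000) for z in marsaglia()]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```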
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 8 10/1/2008 CONTINUOUS RANDOM VARIABLES Contents 1. Continuous random variables 2. Examples 3. Expected values 4. Joint distributions
More informationMonte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan
Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis
More informationChing-Han Hsu, BMES, National Tsing Hua University c 2015 by Ching-Han Hsu, Ph.D., BMIR Lab. = a + b 2. b a. x a b a = 12
Lecture 5 Continuous Random Variables BMIR Lecture Series in Probability and Statistics Ching-Han Hsu, BMES, National Tsing Hua University c 215 by Ching-Han Hsu, Ph.D., BMIR Lab 5.1 1 Uniform Distribution
More informationChapter 2 Continuous Distributions
Chapter Continuous Distributions Continuous random variables For a continuous random variable X the probability distribution is described by the probability density function f(x), which has the following
More informationReview of Statistical Terminology
Review of Statistical Terminology An experiment is a process whose outcome is not known with certainty. The experiment s sample space S is the set of all possible outcomes. A random variable is a function
More informationDO NOT BEGIN THIS TEST UNTIL INSTRUCTED TO START
Math 265 Student name: KEY Final Exam Fall 23 Instructor & Section: This test is closed book and closed notes. A (graphing) calculator is allowed for this test but cannot also be a communication device
More informationProbability Models. 4. What is the definition of the expectation of a discrete random variable?
1 Probability Models The list of questions below is provided in order to help you to prepare for the test and exam. It reflects only the theoretical part of the course. You should expect the questions
More informationChapter 5. Random Variables (Continuous Case) 5.1 Basic definitions
Chapter 5 andom Variables (Continuous Case) So far, we have purposely limited our consideration to random variables whose ranges are countable, or discrete. The reason for that is that distributions on
More informationMATH 3150: PDE FOR ENGINEERS FINAL EXAM (VERSION D) 1. Consider the heat equation in a wire whose diffusivity varies over time: u k(t) 2 x 2
MATH 35: PDE FOR ENGINEERS FINAL EXAM (VERSION D). Consider the heat equation in a wire whose diffusivity varies over time: u t = u k(t) x where k(t) is some positive function of time. Assume the wire
More informationCalculus II Study Guide Fall 2015 Instructor: Barry McQuarrie Page 1 of 8
Calculus II Study Guide Fall 205 Instructor: Barry McQuarrie Page of 8 You should be expanding this study guide as you see fit with details and worked examples. With this extra layer of detail you will
More informationReview for the First Midterm Exam
Review for the First Midterm Exam Thomas Morrell 5 pm, Sunday, 4 April 9 B9 Van Vleck Hall For the purpose of creating questions for this review session, I did not make an effort to make any of the numbers
More informationESTIMATION THEORY. Chapter Estimation of Random Variables
Chapter ESTIMATION THEORY. Estimation of Random Variables Suppose X,Y,Y 2,...,Y n are random variables defined on the same probability space (Ω, S,P). We consider Y,...,Y n to be the observed random variables
More informationLecture 13 - Wednesday April 29th
Lecture 13 - Wednesday April 29th jacques@ucsdedu Key words: Systems of equations, Implicit differentiation Know how to do implicit differentiation, how to use implicit and inverse function theorems 131
More informationQuestion: My computer only knows how to generate a uniform random variable. How do I generate others? f X (x)dx. f X (s)ds.
Simulation Question: My computer only knows how to generate a uniform random variable. How do I generate others?. Continuous Random Variables Recall that a random variable X is continuous if it has a probability
More informationProbability Density Under Transformation
Probability Density Under Transformation Pramook Khungurn September 5, 5 Introduction In creating an algorithm that samples points from some domain, a problem that always comes up is the following: Let
More informationJoint Probability Distributions and Random Samples (Devore Chapter Five)
Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete
More informationMAS223 Statistical Inference and Modelling Exercises
MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,
More informationMath 265H: Calculus III Practice Midterm II: Fall 2014
Name: Section #: Math 65H: alculus III Practice Midterm II: Fall 14 Instructions: This exam has 7 problems. The number of points awarded for each question is indicated in the problem. Answer each question
More informationMultivariate distributions
CHAPTER Multivariate distributions.. Introduction We want to discuss collections of random variables (X, X,..., X n ), which are known as random vectors. In the discrete case, we can define the density
More informationStatistical Theory MT 2007 Problems 4: Solution sketches
Statistical Theory MT 007 Problems 4: Solution sketches 1. Consider a 1-parameter exponential family model with density f(x θ) = f(x)g(θ)exp{cφ(θ)h(x)}, x X. Suppose that the prior distribution has the
More informationDistributions of Functions of Random Variables
STAT/MATH 395 A - PROBABILITY II UW Winter Quarter 217 Néhémy Lim Distributions of Functions of Random Variables 1 Functions of One Random Variable In some situations, you are given the pdf f X of some
More information1 Acceptance-Rejection Method
Copyright c 2016 by Karl Sigman 1 Acceptance-Rejection Method As we already know, finding an explicit formula for F 1 (y), y [0, 1], for the cdf of a rv X we wish to generate, F (x) = P (X x), x R, is
More informationAPPM/MATH 4/5520 Solutions to Problem Set Two. = 2 y = y 2. e 1 2 x2 1 = 1. (g 1
APPM/MATH 4/552 Solutions to Problem Set Two. Let Y X /X 2 and let Y 2 X 2. (We can select Y 2 to be anything but when dealing with a fraction for Y, it is usually convenient to set Y 2 to be the denominator.)
More informationMultivariate Statistics
Multivariate Statistics Chapter 2: Multivariate distributions and inference Pedro Galeano Departamento de Estadística Universidad Carlos III de Madrid pedro.galeano@uc3m.es Course 2016/2017 Master in Mathematical
More information2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y.
CS450 Final Review Problems Fall 08 Solutions or worked answers provided Problems -6 are based on the midterm review Identical problems are marked recap] Please consult previous recitations and textbook
More informationMath 147 Exam II Practice Problems
Math 147 Exam II Practice Problems This review should not be used as your sole source for preparation for the exam. You should also re-work all examples given in lecture, all homework problems, all lab
More information1. For each function, find all of its critical points and then classify each point as a local extremum or saddle point.
Solutions Review for Exam # Math 6. For each function, find all of its critical points and then classify each point as a local extremum or saddle point. a fx, y x + 6xy + y Solution.The gradient of f is
More informationContinuous Random Variables
Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables
More informationChapter 2. Random Variable. Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance.
Chapter 2 Random Variable CLO2 Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance. 1 1. Introduction In Chapter 1, we introduced the concept
More informationChapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University
Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real
More information3. Minimization with constraints Problem III. Minimize f(x) in R n given that x satisfies the equality constraints. g j (x) = c j, j = 1,...
3. Minimization with constraints Problem III. Minimize f(x) in R n given that x satisfies the equality constraints g j (x) = c j, j = 1,..., m < n, where c 1,..., c m are given numbers. Theorem 3.1. Let
More informationDiscrete Mathematics and Probability Theory Spring 2014 Anant Sahai Note 18
EECS 7 Discrete Mathematics and Probability Theory Spring 214 Anant Sahai Note 18 A Brief Introduction to Continuous Probability Up to now we have focused exclusively on discrete probability spaces Ω,
More informationMore on Bayes and conjugate forms
More on Bayes and conjugate forms Saad Mneimneh A cool function, Γ(x) (Gamma) The Gamma function is defined as follows: Γ(x) = t x e t dt For x >, if we integrate by parts ( udv = uv vdu), we have: Γ(x)
More informationE X A M. Probability Theory and Stochastic Processes Date: December 13, 2016 Duration: 4 hours. Number of pages incl.
E X A M Course code: Course name: Number of pages incl. front page: 6 MA430-G Probability Theory and Stochastic Processes Date: December 13, 2016 Duration: 4 hours Resources allowed: Notes: Pocket calculator,
More information