2 Functions of random variables
A basic statistical model for sample data is a collection of random variables $X_1, \dots, X_n$. The data are summarised in terms of certain sample statistics, calculated as functions of the random variables, one important instance being the sample mean $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$. In this chapter we discuss the density transformation techniques useful for calculating distributions of sample statistics.

2.1 Transformation of densities

Example. Let random variable $X$ be the temperature in Celsius degrees observed in an experiment, with some known pdf $f_X$. In Fahrenheit degrees the temperature will be $Y = 1.8X + 32$. What is the pdf of $Y$? Let us argue first in terms of the cdf's. We have
$$F_Y(y) = P(Y \le y) = P(1.8X + 32 \le y) = P(X \le (y-32)/1.8) = F_X((y-32)/1.8).$$
Differentiating this equality yields
$$f_Y(y) = f_X\Big(\frac{y-32}{1.8}\Big)\,\frac{1}{1.8}.$$

Proposition 2.1. Suppose rv $X$ has pdf $f_X$ and $Y = aX + b$ where $a \ne 0$ and $b \in \mathbb{R}$. Then the pdf of $Y$ is
$$f_Y(y) = f_{aX+b}(y) = f_X\Big(\frac{y-b}{a}\Big)\,\frac{1}{|a|}. \qquad (1)$$

This is the linear change of variables formula for densities. Note it is $|a|$ and not $a$ that appears in the formula: we do not exclude negative $a$, but the density must be a nonnegative function! Denoting $I$ the support of $f_X$, the support $J$ of $f_Y$ is the range of the function $x \mapsto ax+b$ with domain $I$, that is $J = \{y \in \mathbb{R} : (y-b)/a \in I\}$.
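As a quick sanity check of Proposition 2.1 (a simulation sketch, not part of the notes), one can simulate the Celsius–Fahrenheit example with a hypothetical choice of input distribution. Here we assume $X \sim \mathrm{Uniform}[0,40]$, so that $F_X(x) = x/40$ is explicit, and compare the empirical cdf of $Y = 1.8X + 32$ with the predicted $F_Y(y) = F_X((y-32)/1.8)$:

```python
import random

random.seed(0)
a, b = 1.8, 32.0
# X ~ Uniform[0, 40] (a toy temperature model, assumed for illustration)
samples = [a * random.uniform(0.0, 40.0) + b for _ in range(100_000)]

def F_Y_predicted(y):
    x = (y - b) / a                       # solve y = a*x + b for x
    return min(max(x / 40.0, 0.0), 1.0)   # F_X(x) = x/40 on [0, 40]

y0 = 68.0                                 # 20 Celsius in Fahrenheit
empirical = sum(s <= y0 for s in samples) / len(samples)
predicted = F_Y_predicted(y0)             # = 20/40 = 0.5
```

With $y_0 = 68$ (that is, $x = 20$ Celsius) both numbers should be close to $1/2$.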
The proof of (1) for $a > 0$ is exactly the same as in the Celsius–Fahrenheit example. For $a < 0$ we get
$$F_Y(y) = P(Y \le y) = P(aX + b \le y) = P(X \ge (y-b)/a) = 1 - P(X < (y-b)/a) = 1 - F_X((y-b)/a).$$
Differentiation leads to
$$f_Y(y) = -f_X\Big(\frac{y-b}{a}\Big)\frac{1}{a} = f_X\Big(\frac{y-b}{a}\Big)\frac{1}{|a|}.$$

Example. For $X \sim \mathrm{Exp}(\lambda)$ and $a > 0$ we can show that $aX \sim \mathrm{Exp}(\lambda/a)$. Indeed, we have $f_X(x) = \lambda e^{-\lambda x}$ (for $x \ge 0$), thus by (1)
$$f_{aX}(y) = f_X(y/a)/a = (\lambda/a)\,e^{-(\lambda/a)y}.$$
The supports are $I = J = (0, \infty)$.

Example. If $Z \sim \mathcal{N}(0,1)$ (standard normal rv) then
$$f_Z(x) = \frac{1}{\sqrt{2\pi}}\exp(-x^2/2),$$
and for $a \ne 0$ the rv $aZ + b$ has
$$f_{aZ+b}(y) = f_Z\Big(\frac{y-b}{a}\Big)\frac{1}{|a|} = \frac{1}{\sqrt{2\pi}\,|a|}\exp\Big(-\frac{(y-b)^2}{2a^2}\Big),$$
which is the normal $\mathcal{N}(b, a^2)$ density with mean $b$ and variance $a^2$. We see that a general normal random variable can be obtained as a linear function of the standard normal rv:
$$Z \sim \mathcal{N}(0,1) \implies aZ + b \sim \mathcal{N}(b, a^2).$$
The supports in this case are $I = J = \mathbb{R}$. More generally, for $a \ne 0$,
$$Z \sim \mathcal{N}(\mu, \sigma^2) \implies aZ + b \sim \mathcal{N}(a\mu + b, a^2\sigma^2).$$
In particular, $(Z - \mu)/\sigma \sim \mathcal{N}(0,1)$ (this transformation is sometimes called standardisation).

Often we need to consider rv's obtained as nonlinear functions of a given rv, for instance $e^X$, $\log X$ or $X^2$.
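The scaling rule $aX \sim \mathrm{Exp}(\lambda/a)$ can be checked by simulation (a sketch; the values $\lambda = 2$, $a = 3$ are arbitrary, and exponential samples are drawn by inversion, $X = -\log(1-U)/\lambda$, a method justified later in the chapter):

```python
import math
import random

random.seed(1)
lam, a = 2.0, 3.0
# X ~ Exp(2) by inversion; 1 - random() lies in (0, 1], so the log is safe
xs = [-math.log(1.0 - random.random()) / lam for _ in range(200_000)]
ys = [a * x for x in xs]          # should behave like Exp(lam/a) = Exp(2/3)

mean_y = sum(ys) / len(ys)        # Exp(lam/a) has mean a/lam = 1.5
# P(Y <= 1.5) should be 1 - exp(-(lam/a)*1.5) = 1 - e^{-1}
tail = sum(y <= 1.5 for y in ys) / len(ys)
```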
Example. Let $X \sim \mathrm{Exp}(\lambda)$ and $Y = X^2$. Both variables have support $(0,\infty)$. We get for $y > 0$
$$F_Y(y) = P(Y \le y) = P(X^2 \le y) = P(X \le \sqrt{y}) = F_X(\sqrt{y}).$$
Differentiating,
$$f_Y(y) = f_X(\sqrt{y})\,\frac{1}{2\sqrt{y}}.$$
Recalling the formula for $f_X$ we get the pdf for $Y$:
$$f_Y(y) = \lambda e^{-\lambda\sqrt{y}}\,\frac{1}{2\sqrt{y}}, \quad y > 0,$$
and for $y < 0$ we have $f_Y(y) = 0$.

To discuss a general density transformation formula, assume that $X$ is a random variable with given pdf $f_X$ supported by an interval $I$. Here, we understand interval in the extended sense: the interval $I$ can be finite, a halfline (e.g. $(0,\infty)$) or the whole real line $\mathbb{R}$. The interval may or may not include the endpoints. Let $g$ be a function on $I$ whose derivative either satisfies $g'(x) > 0$ for all $x \in I$ or satisfies $g'(x) < 0$ for all $x \in I$. In the first case $g$ is an increasing and in the second a decreasing function on $I$. Define another random variable $Y = g(X)$, which is continuous with some pdf $f_Y$ and support $J := \{g(x) : x \in I\}$. We denote $g^{-1}(y)$ the function inverse to $g(x)$ (not to be confused with $1/g(y)$).

Theorem 2.1. Let $g : I \to J$ be a monotone function with $g'(x) \ne 0$ on interval $I$. Then $f_Y$ relates to $f_X$ by the density transformation formula
$$f_Y(y) = f_X(g^{-1}(y))\,\big|(g^{-1})'(y)\big|. \qquad (2)$$

Proof. Suppose $g$ is decreasing, then
$$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \ge g^{-1}(y)) = 1 - F_X(g^{-1}(y)).$$
Differentiating yields
$$f_Y(y) = -f_X(g^{-1}(y))\,(g^{-1})'(y) = f_X(g^{-1}(y))\,\big|(g^{-1})'(y)\big|.$$
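The cdf identity $F_Y(y) = F_X(\sqrt{y})$ for $Y = X^2$ with $X \sim \mathrm{Exp}(\lambda)$ can be verified empirically (a sketch with the arbitrary choice $\lambda = 1$; exponential samples again by inversion):

```python
import math
import random

random.seed(2)
lam = 1.0
xs = [-math.log(1.0 - random.random()) / lam for _ in range(200_000)]
ys = [x * x for x in xs]          # Y = X^2

y0 = 1.0
empirical = sum(y <= y0 for y in ys) / len(ys)
# predicted F_Y(y0) = F_X(sqrt(y0)) = 1 - exp(-lam*sqrt(y0))
predicted = 1.0 - math.exp(-lam * math.sqrt(y0))
```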
If $g$ is increasing we have
$$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y)),$$
and differentiating yields (2) again.

Let us connect the formula (2) with the change of variable of integration from your Calculus II course. The expectation of $Y = g(X)$ is given by
$$E(Y) = \int_I g(x)\,f_X(x)\,dx,$$
where the pdf of $X$ is used. Using the change of variable in the integral, $y = g(x)$, $x = g^{-1}(y)$, $dx = (g^{-1})'(y)\,dy$, this becomes
$$E(Y) = \int_J y\,f_X(g^{-1}(y))\,\big|(g^{-1})'(y)\big|\,dy.$$
Comparing with (2) we arrive at
$$E(Y) = E(g(X)) = \int_J y\,f_Y(y)\,dy,$$
which is the formula for the expectation $E(Y)$ in terms of the pdf $f_Y$.

The transformation formula is easy to remember in simpler notation. Let us just write $y(x)$, $x(y)$ (in place of $g(x)$, $g^{-1}(y)$) for the functional relation between the variables $x$ and $y$. Then
$$f_Y(y) = f_X(x(y))\,\Big|\frac{dx}{dy}\Big|. \qquad (3)$$
We can re-write the formula using the fact that $\frac{dx}{dy} = 1\big/\frac{dy}{dx}$. The transformation formula becomes
$$f_Y(y) = f_X(x(y))\,\Big/\,\Big|\frac{dy}{dx}\Big|. \qquad (4)$$
The equation $y = g(x)$ must be solved for $x$ in terms of $y$, and this value of $x$ substituted into $f_X(x)$ and $dy/dx$, thus leaving the formula for $f_Y$ entirely in terms of $y$.
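The equality $E(g(X)) = \int_J y\,f_Y(y)\,dy$ can be illustrated numerically (a sketch, with the assumed example $X \sim \mathrm{Uniform}[0,1]$, $g(x) = x^2$, for which the transformed density $f_Y(y) = 1/(2\sqrt{y})$ is computed in the next example; both integrals should equal $1/3$):

```python
import math

n = 100_000
h = 1.0 / n
# midpoint rule on [0, 1] for both sides of the identity
# left side: E(g(X)) = integral of x^2 * f_X(x) dx with f_X = 1
lhs = sum(((i + 0.5) * h) ** 2 * h for i in range(n))
# right side: integral of y * f_Y(y) dy with f_Y(y) = 1/(2*sqrt(y))
rhs = sum(((i + 0.5) * h) / (2.0 * math.sqrt((i + 0.5) * h)) * h
          for i in range(n))
```

The midpoint rule avoids evaluating $1/(2\sqrt y)$ at the integrable singularity $y = 0$.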
Example. Let $X \sim \mathrm{Uniform}[0,1]$, that is $f_X(x) = 1$ for $x \in [0,1]$. Consider $Y = X^2$. The supports of $X$ and $Y$ are $I = J = [0,1]$. Solving the relation $y = x^2$ for $x$:
$$x = \sqrt{y}, \qquad \frac{dx}{dy} = \frac{1}{2\sqrt{y}}.$$
Substituting in (3) yields
$$f_Y(y) = \tfrac{1}{2}\,y^{-1/2}, \quad y \in [0,1];$$
this pdf is denoted $\mathrm{Beta}(1/2,1)$ and is a special case of the so-called beta density. We could also use (4); this would involve computing $dy/dx = 2x$, then replacing $x$ by $\sqrt{y}$.

The density transformation formula (2) works both ways. If $Y = g(X)$ then $X = g^{-1}(Y)$, thus using (2) yields
$$f_X(x) = f_Y(g(x))\,|g'(x)|. \qquad (5)$$
For two density functions $f_X$ and $f_Y$ the transformation formulas are equivalent:
$$f_Y(y) = f_X(g^{-1}(y))\,\big|(g^{-1})'(y)\big| \iff f_X(x) = f_Y(g(x))\,|g'(x)|. \qquad (6)$$

Example. Let $U \sim \mathrm{Uniform}[0,1]$ and let $V = -\log U$ (we denote $\log x$ the natural logarithm base $e$, same as $\ln x$). Then $I = [0,1]$ and $J = [0,\infty)$, because $-\log x$ is a decreasing function with $-\log x \to \infty$ as $x \to 0$ and $-\log 1 = 0$. With the substitution
$$v = -\log u, \qquad u = e^{-v}, \qquad \frac{du}{dv} = -e^{-v}$$
we see that
$$f_V(v) = f_U(u(v))\,\Big|\frac{du}{dv}\Big| = 1 \cdot e^{-v} = e^{-v}, \quad v > 0,$$
so $V \sim \mathrm{Exp}(1)$. The function $u = e^{-v}$ is inverse to $v = -\log u$. Therefore using the equivalence (6) we conclude (without calculation) that for $V \sim \mathrm{Exp}(1)$ the random variable $e^{-V}$ is uniformly distributed on $[0,1]$.
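A simulation sketch of the $\mathrm{Beta}(1/2,1)$ example (not from the notes): for $Y = X^2$ with $X$ uniform, the pdf $f_Y(y) = \tfrac12 y^{-1/2}$ gives cdf $F_Y(y) = \sqrt y$ and mean $\int_0^1 y/(2\sqrt y)\,dy = 1/3$, both of which we can check:

```python
import math
import random

random.seed(3)
ys = [random.random() ** 2 for _ in range(200_000)]   # Y = X^2, X ~ U[0,1]

y0 = 0.25
empirical = sum(y <= y0 for y in ys) / len(ys)
predicted = math.sqrt(y0)              # F_Y(y) = sqrt(y), so F_Y(0.25) = 0.5
mean_y = sum(ys) / len(ys)             # E(Y) = 1/3
```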
We summarise this as the following uniform–exponential transformation rules:
$$U \sim \mathrm{Uniform}[0,1] \implies -\log U \sim \mathrm{Exp}(1), \qquad V \sim \mathrm{Exp}(1) \implies e^{-V} \sim \mathrm{Uniform}[0,1]. \qquad (7)$$

Many-to-one functions

We turn to a more complex case, where the function $y = g(x)$ is not necessarily monotone on $I$, so $g : I \to J$ can be a many-to-one function.

Example. Consider $Y = X^2$. This corresponds to the square function $y = x^2$, which is nonnegative and has zero derivative at $x = 0$. For the cdf of $Y$ we have for $y > 0$
$$F_Y(y) = P(Y \le y) = P(X^2 \le y) = P(|X| \le \sqrt{y}) = P(-\sqrt{y} \le X \le \sqrt{y}) = F_X(\sqrt{y}) - F_X(-\sqrt{y}).$$
Differentiating this identity yields
$$f_Y(y) = \frac{1}{2\sqrt{y}}\,f_X(\sqrt{y}) + \frac{1}{2\sqrt{y}}\,f_X(-\sqrt{y}).$$
Two terms appear here because the quadratic parabola has two branches, that is, for each $y > 0$ there are two values $x = \pm\sqrt{y}$ contributing to $f_Y$.

To formulate the density transformation result within a reasonable degree of generality, we will assume that $g : I \to J$ has the property that the set $\{x : g(x) = y\}$ is finite for every $y \in J$. The latter holds, in particular, when $g$ is continuously differentiable on $I$ and there are finitely many values $x \in I$ such that $g'(x) = 0$. Let $Y = g(X)$. Under our assumptions the density transformation formula becomes
$$f_Y(y) = \sum_{\{x : g(x) = y\}} f_X(x)\,\Big/\,\Big|\frac{dy}{dx}\Big|. \qquad (8)$$
The equation $y = g(x)$ must be solved for $x$ in terms of $y$, and every solution $x$ substituted into $f_X(x)$ and $dy/dx$, thus leaving the formula for $f_Y$ entirely in terms of $y$.

Let us revise the example with $Y = X^2$. We have $x = \pm\sqrt{y}$ and $dy/dx = 2x$, therefore
$$dy/dx = 2\sqrt{y} \quad \text{for } x = \sqrt{y},$$
$$dy/dx = -2\sqrt{y} \quad \text{for } x = -\sqrt{y},$$
so $|dy/dx| = 2\sqrt{y}$ in both cases. The density transformation formula becomes
$$f_Y(y) = \sum_{\{x : x^2 = y\}} f_X(x)\,\frac{1}{|2x|} = \frac{1}{2\sqrt{y}}\,f_X(\sqrt{y}) + \frac{1}{2\sqrt{y}}\,f_X(-\sqrt{y}),$$
which is the same result as obtained above by using the method of cumulative distribution functions.

Next is an important application to the density of a squared standard normal rv.

Example. Let $X \sim \mathcal{N}(0,1)$, $Y = X^2$. So the function is $y = x^2$, the supports are $I = (-\infty,\infty)$, $J = [0,\infty)$ and the starting density is $f_X(x) = (\sqrt{2\pi})^{-1}e^{-x^2/2}$. We compute the transformed density as
$$f_Y(y) = f_X(\sqrt{y})\,\frac{1}{2\sqrt{y}} + f_X(-\sqrt{y})\,\frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi}}\,e^{-y/2}\,\frac{1}{2\sqrt{y}} + \frac{1}{\sqrt{2\pi}}\,e^{-y/2}\,\frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}\,e^{-y/2},$$
where we recognise the $\mathrm{Gamma}(1/2,1/2)$ density, also denoted $\chi^2_1$ and called the chi-square density with one degree of freedom. We summarise this finding as
$$Z \sim \mathcal{N}(0,1) \implies Z^2 \sim \chi^2_1.$$

The density transformation formula extends literally to functions which have no derivative at some points. An example of the latter kind appears in the next exercise.

Exercise. For $X \sim \mathcal{N}(0,1)$ find the pdf of $|X|$.

2.2 The probability integral transform

We review the exponential–uniform example, to observe a special feature.
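The conclusion $Z^2 \sim \chi^2_1$ can be checked by simulation (a sketch, not part of the notes): the $\chi^2_1$ distribution, being $\mathrm{Gamma}(1/2, 1/2)$, has mean 1 and variance 2.

```python
import random

random.seed(4)
zs = [random.gauss(0.0, 1.0) for _ in range(200_000)]   # Z ~ N(0, 1)
ys = [z * z for z in zs]                                # Y = Z^2

mean_y = sum(ys) / len(ys)                              # chi^2_1 mean = 1
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)    # chi^2_1 variance = 2
```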
Example. We have seen in (7) that for $V \sim \mathrm{Exp}(1)$ the random variable $Y = e^{-V}$ is uniformly distributed on $[0,1]$, that is $Y \sim \mathrm{Uniform}[0,1]$. We leave as an exercise checking that for $U = 1 - Y$ the distribution is again uniform, that is $U \sim \mathrm{Uniform}[0,1]$. This gives us the relation
$$V \sim \mathrm{Exp}(1) \implies U = 1 - e^{-V} \sim \mathrm{Uniform}[0,1].$$
But $F_V(v) = 1 - e^{-v}$ is the cdf of $V$, therefore $U = 1 - e^{-V}$ can be written as $U = F_V(V)$. We see that substituting an exponential rv in its own cdf yields a uniformly distributed rv. This is a special case of a very general fact.

Definition. Let $F_X$ be the cdf of rv $X$. The random variable $Y = F_X(X)$ is called the probability integral transform of $X$. The name derives from the representation of the cdf as the probability integral $F_X(x) = \int_{-\infty}^x f_X(z)\,dz$.

The idea is that $F_X(X)$ always has the uniform distribution when $X$ is a continuous rv. To avoid small complications, we assume that the support of $X$ is some interval $I$ (bounded or infinite), thus $f_X(x) > 0$ for $x \in I$. Note that the range of $F_X$ is $J = [0,1]$, because $F_X(x)$ is some probability, so taking values between 0 and 1. The function $F_X$ has a positive derivative on $I$, hence is strictly increasing. Therefore there exists an inverse function $F_X^{-1}$.

Theorem 2.2. Let $X$ be a continuous rv with cdf $F_X$. Then $F_X(X) \sim \mathrm{Uniform}[0,1]$.

Proof. The cdf of $U \sim \mathrm{Uniform}[0,1]$ is the linear function $F_U(u) = u$ for $u \in [0,1]$. Since $F_X(F_X^{-1}(u)) = u$ we have
$$P(F_X(X) \le u) = P(X \le F_X^{-1}(u)) = F_X(F_X^{-1}(u)) = u.$$

The result has a converse: if $U \sim \mathrm{Uniform}[0,1]$ then $F^{-1}(U)$ is a random variable with cdf $F$. This gives a practical way to simulate a random variable with a given distribution. A standard random number
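A simulation sketch of Theorem 2.2 for the exponential case (not from the notes): applying $F_V(v) = 1 - e^{-v}$ to samples $V \sim \mathrm{Exp}(1)$ should produce values that look uniform on $[0,1]$, e.g. with mean $1/2$ and $P(U \le u) = u$.

```python
import math
import random

random.seed(5)
# V ~ Exp(1) by inversion; 1 - random() lies in (0, 1]
vs = [-math.log(1.0 - random.random()) for _ in range(200_000)]
us = [1.0 - math.exp(-v) for v in vs]          # U = F_V(V)

mean_u = sum(us) / len(us)                     # Uniform[0,1] has mean 1/2
below_q = sum(u <= 0.3 for u in us) / len(us)  # and P(U <= 0.3) = 0.3
```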
generator outputs a sample value $u$ from the uniform distribution. Calculating $F^{-1}(u)$ we obtain a sample value from the cdf $F$.

Definition. The inverse cdf $F^{-1}$ is called the quantile function associated with cdf $F$. For every $p \in (0,1)$ the function outputs the value $F^{-1}(p) = z_p$, called the $p$-quantile (or $100p\%$-percentile) of the distribution.

Hence the quantile satisfies the equation $F(z_p) = p$. For instance, $F^{-1}(0.5) = z_{0.5}$ is the median of the distribution. The quantile just introduced is the lower quantile, because the probability below $z_p$ is $p$. The lower $p$-quantile is also the upper $(1-p)$-quantile (also called the $(1-p)100\%$ upper percentile), because the probability above $z_p$ is $1-p$.

2.3 Multivariate density transformation

We are looking for the generalisation of the density transformation theorem for the joint density of two or more random variables. Let $X_1, X_2$ be rv's with joint pdf $f_{X_1,X_2}(x_1,x_2)$ having support $I = \{(x_1,x_2) \in \mathbb{R}^2 : f_{X_1,X_2}(x_1,x_2) > 0\}$. Suppose the correspondence given by the two functions
$$y_1 = g_1(x_1,x_2), \qquad y_2 = g_2(x_1,x_2) \qquad (9)$$
is such that these functions have continuous partial derivatives and
$$\frac{\partial(y_1,y_2)}{\partial(x_1,x_2)} \ne 0, \quad \text{where} \quad \frac{\partial(y_1,y_2)}{\partial(x_1,x_2)} = \det\begin{pmatrix} \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} \\[1ex] \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} \end{pmatrix}$$
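As a sketch of inverse-cdf sampling (not part of the notes), take $F(x) = 1 - e^{-x}$ for $\mathrm{Exp}(1)$, whose quantile function is $F^{-1}(p) = -\log(1-p)$; the median is $F^{-1}(0.5) = \log 2$, and by construction half of the simulated values should fall below it:

```python
import math
import random

random.seed(6)

def exp1_quantile(p):
    # quantile function of Exp(1): F(x) = 1 - exp(-x)  =>  F^{-1}(p) = -log(1-p)
    return -math.log(1.0 - p)

median = exp1_quantile(0.5)                     # = log 2
xs = [exp1_quantile(random.random()) for _ in range(200_000)]
frac_below_median = sum(x <= median for x in xs) / len(xs)
```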
is the Jacobian (also called the Jacobian determinant). Then (9) is a bijection onto some domain $J \subset \mathbb{R}^2$ and there exists an inverse correspondence
$$x_1 = g_1^{-1}(y_1,y_2), \qquad x_2 = g_2^{-1}(y_1,y_2). \qquad (10)$$
(The notation does not mean that $g_1^{-1}$ is the inverse function to $g_1$: the latter, as a one-dimensional function of two variables, cannot be inverted.) The Jacobian of the inverse correspondence is
$$\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)} = \det\begin{pmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[1ex] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{pmatrix} = 1\Big/\frac{\partial(y_1,y_2)}{\partial(x_1,x_2)}.$$

Theorem 2.3. The random variables $Y_1 = g_1(X_1,X_2)$, $Y_2 = g_2(X_1,X_2)$ have the joint pdf
$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}\big(g_1^{-1}(y_1,y_2),\,g_2^{-1}(y_1,y_2)\big)\,\Big|\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)}\Big|. \qquad (11)$$
In the formula $|\cdot|$ denotes the absolute value.

It is sometimes more convenient to compute first $1\big/\frac{\partial(y_1,y_2)}{\partial(x_1,x_2)}$ as a function of the variables $x_1, x_2$, but then (10) should be used to obtain the density formula entirely in terms of the variables $y_1, y_2$.

Example. In the important special case of a linear transformation we have
$$y_1 = a_{11}x_1 + a_{12}x_2 + b_1, \qquad y_2 = a_{21}x_1 + a_{22}x_2 + b_2,$$
so that $\partial y_i/\partial x_j = a_{ij}$ and
$$\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)} = 1\Big/\det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$
The method of density transformation gives the joint pdf for $Y_1 = g_1(X_1,X_2)$, $Y_2 = g_2(X_1,X_2)$. From the joint density we can find the marginal densities $f_{Y_1}$ and $f_{Y_2}$ by integration. The next example illustrates the method.

Example. Suppose $X_1$ and $X_2$ have joint pdf
$$f_{X_1,X_2}(x_1,x_2) = \exp(-(x_1+x_2)), \quad x_1 \ge 0,\ x_2 \ge 0,$$
which means that $X_1$ and $X_2$ are independent with the same $\mathrm{Exp}(1)$ density. We are interested in the distribution of the sum $X_1 + X_2$.

We set $Y_1 = X_1$, $Y_2 = X_1 + X_2$. Consider the transformation $y_1 = x_1$ and $y_2 = x_1 + x_2$ with inverse $x_1 = y_1$, $x_2 = y_2 - y_1$. This maps the positive quadrant $I = (0,\infty) \times (0,\infty)$ to $J = \{(y_1,y_2) \in \mathbb{R}^2 : 0 < y_1 < y_2\}$. The Jacobian is
$$\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)} = \det\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} = 1.$$
So the joint pdf of $Y_1$ and $Y_2$ is given by
$$f_{Y_1,Y_2}(y_1,y_2) = e^{-y_1-(y_2-y_1)} = e^{-y_2}, \quad 0 \le y_1 \le y_2 < \infty.$$
The pdf of $Y_2 = X_1 + X_2$ is calculated from $f_{Y_1,Y_2}(y_1,y_2)$ as the marginal pdf of $Y_2$ by integrating out the variable $y_1$:
$$f_{Y_2}(y_2) = \int f_{Y_1,Y_2}(y_1,y_2)\,dy_1 = \int_0^{y_2} e^{-y_2}\,dy_1 = y_2\,e^{-y_2}, \quad 0 < y_2 < \infty,$$
which is the $\mathrm{Gamma}(2,1)$ density. We conclude that
$$X_1, X_2 \ \text{iid } \mathrm{Exp}(1) \implies X_1 + X_2 \sim \mathrm{Gamma}(2,1).$$
In general, if $X$ and $Y$ are independent continuous rv's, the pdf of their sum is
$$f_{X+Y}(z) = \int f_X(z-y)\,f_Y(y)\,dy = \int f_X(x)\,f_Y(z-x)\,dx.$$
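A simulation sketch of the conclusion (not from the notes): $\mathrm{Gamma}(2,1)$ has mean 2 and variance 2, which the sum of two independent $\mathrm{Exp}(1)$ samples should reproduce.

```python
import math
import random

random.seed(7)
n = 200_000
# each summand is Exp(1) by inversion; 1 - random() lies in (0, 1]
sums = [-math.log(1.0 - random.random()) - math.log(1.0 - random.random())
        for _ in range(n)]

mean_s = sum(sums) / n                               # Gamma(2,1) mean = 2
var_s = sum((s - mean_s) ** 2 for s in sums) / n     # Gamma(2,1) variance = 2
```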
The density $f_{X+Y}$ is called the convolution of the densities $f_X$ and $f_Y$.

Example. We consider $X_1$ and $X_2$ with the joint pdf
$$f_{X_1,X_2}(x_1,x_2) = 8x_1x_2, \quad 0 < x_1 < x_2 < 1.$$
Suppose we want to find the pdf of $Y_1 = X_1/X_2$. It is convenient to introduce a complementary variable $Y_2 = X_2$, since we can then easily find the inverse correspondence. Thus we deal with the functions
$$y_1 = g_1(x_1,x_2) = x_1/x_2, \qquad y_2 = g_2(x_1,x_2) = x_2.$$
The inverse correspondence between $(x_1,x_2)$ and $(y_1,y_2)$ becomes
$$x_1 = y_1y_2, \qquad x_2 = y_2.$$
We have $I = \{(x_1,x_2) : 0 < x_1 < x_2 < 1\}$, which implies that $J = \{(y_1,y_2) : 0 < y_1 < 1,\ 0 < y_2 < 1\}$. To see the latter, note that $y_2$, like $x_2$, varies in the limits from 0 to 1, and when $y_2$ is fixed the ratio $y_1 = x_1/x_2$ is again between 0 and 1. The Jacobian is
$$\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)} = \det\begin{pmatrix} y_2 & y_1 \\ 0 & 1 \end{pmatrix} = y_2.$$
So
$$f_{Y_1,Y_2}(y_1,y_2) = 8(y_1y_2)y_2 \cdot y_2 = 8y_1y_2^3, \quad (y_1,y_2) \in J.$$
Thus the marginal pdf of $Y_1$ is
$$f_{Y_1}(y_1) = \int_0^1 8y_1y_2^3\,dy_2 = 8y_1\Big[\frac{y_2^4}{4}\Big]_0^1 = 2y_1, \quad 0 < y_1 < 1.$$
This is the pdf for the distribution denoted $\mathrm{Beta}(2,1)$.

If the correspondence which maps $x_1, x_2$ to $y_1 = g_1(x_1,x_2)$, $y_2 = g_2(x_1,x_2)$ is many-to-one, the joint density of $Y_1, Y_2$ is
$$f_{Y_1,Y_2}(y_1,y_2) = \sum_{\{(x_1,x_2)\,:\,g_1(x_1,x_2)=y_1,\ g_2(x_1,x_2)=y_2\}} f_{X_1,X_2}(x_1,x_2)\,\Big|\frac{\partial(y_1,y_2)}{\partial(x_1,x_2)}\Big|^{-1}.$$

It will be easy now to extend the density transformation method to $n$ jointly distributed continuous random variables. We say that random variables $X_1, \dots, X_n$ are jointly distributed if their values are observed in the
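The ratio example can be checked by simulation (a sketch, not from the notes; the sampling scheme is an assumption derived from the joint pdf: the marginal of $X_2$ is $4x_2^3$, so $X_2 = U^{1/4}$, and the conditional density of $X_1$ given $x_2$ is $2x_1/x_2^2$ on $(0,x_2)$, so $X_1 = x_2\sqrt{U'}$). The target $\mathrm{Beta}(2,1)$ has cdf $y^2$ and mean $2/3$:

```python
import random

random.seed(8)
n = 200_000
ratios = []
for _ in range(n):
    x2 = random.random() ** 0.25         # marginal f(x2) = 4*x2^3 on (0, 1)
    x1 = x2 * random.random() ** 0.5     # conditional f(x1|x2) = 2*x1/x2^2
    ratios.append(x1 / x2)

mean_r = sum(ratios) / n                         # Beta(2,1) mean = 2/3
p_below = sum(r <= 0.5 for r in ratios) / n      # F(0.5) = 0.5^2 = 0.25
```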
same random experiment. The joint distribution is described by the joint pdf $f_{X_1,\dots,X_n}(x_1,\dots,x_n)$, which is a function depending on $(x_1,\dots,x_n) \in \mathbb{R}^n$ that allows us to calculate probabilities by integration in $n$ dimensions:
$$P((X_1,\dots,X_n) \in A) = \int\cdots\int_A f_{X_1,\dots,X_n}(x_1,\dots,x_n)\,dx_1\cdots dx_n, \quad A \subset \mathbb{R}^n.$$

Next is the most important special case.

The iid sequence. Random variables $X_1,\dots,X_n$ are iid (independent, identically distributed) with (one-dimensional) pdf $f$ if their joint density has the product form
$$f_{X_1,\dots,X_n}(x_1,\dots,x_n) = f(x_1)f(x_2)\cdots f(x_n). \qquad (12)$$
Note that $f = f_{X_i}$ is the marginal density for every $X_i$, $i = 1,\dots,n$. In Statistics, it is common to call the observed values of $X_1,\dots,X_n$ a sample from the population described by the pdf $f$.

Vector notation and density transformation in higher dimensions

To generalise the transformation formula to higher dimensions it is very convenient to use vector notation.

Definition. An $n$-dimensional random variable is a random vector
$$\mathbf{X} = \begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix}$$
whose components $X_1,\dots,X_n$ are one-dimensional random variables.

Using the vector notation we write $f_{\mathbf{X}}(\mathbf{x})$ for the joint pdf of $X_1,\dots,X_n$. Let $I := \{\mathbf{x} \in \mathbb{R}^n : f_{\mathbf{X}}(\mathbf{x}) > 0\}$ be the support of $f_{\mathbf{X}}$ and suppose that $g : I \to J$ is a bijection from $I$ to some other domain $J \subset \mathbb{R}^n$. Suppose $g$ has continuous partial derivatives. Consider the $n$-dimensional random variable $\mathbf{Y} = g(\mathbf{X})$, which means that we consider $n$
one-dimensional random variables
$$Y_1 = g_1(X_1,\dots,X_n), \quad Y_2 = g_2(X_1,\dots,X_n), \quad \dots, \quad Y_n = g_n(X_1,\dots,X_n).$$
Introduce the Jacobians
$$\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})} = \det\begin{pmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_n}{\partial x_1} & \cdots & \dfrac{\partial y_n}{\partial x_n} \end{pmatrix}, \qquad \frac{\partial(\mathbf{x})}{\partial(\mathbf{y})} = \det\begin{pmatrix} \dfrac{\partial x_1}{\partial y_1} & \cdots & \dfrac{\partial x_1}{\partial y_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial x_n}{\partial y_1} & \cdots & \dfrac{\partial x_n}{\partial y_n} \end{pmatrix} = 1\Big/\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})}.$$
The vector notation allows us to formulate the density transformation theorem in the form closest to the one-dimensional case.

Theorem 2.4. Let $\mathbf{Y} = g(\mathbf{X})$ where $g$ is as above, and suppose that $\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})} \ne 0$ for $\mathbf{x} \in I$. Then the pdf of $\mathbf{Y}$ is
$$f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}(g^{-1}(\mathbf{y}))\,\Big|\frac{\partial(\mathbf{x})}{\partial(\mathbf{y})}\Big| = f_{\mathbf{X}}(g^{-1}(\mathbf{y}))\,\Big|\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})}\Big|^{-1}$$
for $\mathbf{y} \in J$ (and $f_{\mathbf{Y}}(\mathbf{y}) = 0$ for $\mathbf{y} \notin J$). If the formula with $\big|\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})}\big|^{-1}$ is used, the variable $\mathbf{x}$ in the computed Jacobian should be expressed in terms of $\mathbf{y}$ by solving the system of equations $\mathbf{y} = g(\mathbf{x})$ for $\mathbf{x}$.

Example. Let $\mathbf{Y} = A\mathbf{X} + \mathbf{b}$ be a random vector obtained as a multidimensional linear transformation of $\mathbf{X}$, where $A$ is an $n \times n$ square matrix with $\det A \ne 0$ and $\mathbf{b} \in \mathbb{R}^n$ is a fixed vector. In this notation $A\mathbf{X}$ stands for the product of a matrix and a column vector, thus in coordinate-wise writing
$$Y_i = \sum_{j=1}^n a_{ij}X_j + b_i, \quad i = 1,\dots,n.$$
The Jacobian of the vector function $\mathbf{y} = A\mathbf{x} + \mathbf{b}$ is
$$\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})} = \det A$$
because $\partial y_i/\partial x_j = a_{ij}$. Solving $\mathbf{y} = A\mathbf{x} + \mathbf{b}$ for $\mathbf{x}$ yields $\mathbf{x} = A^{-1}(\mathbf{y} - \mathbf{b})$, where $A^{-1}$ is the matrix inverse to $A$, thus
$$f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}\big(A^{-1}(\mathbf{y} - \mathbf{b})\big)\,|\det A|^{-1}.$$
Compare with the analogous formula in the one-dimensional case.

Many-to-one vector functions

More generally, let $g : I \to J$ be a many-to-one function such that the set $\{\mathbf{x} : g(\mathbf{x}) = \mathbf{y}\}$ is finite for each $\mathbf{y}$. Then for $\mathbf{Y} = g(\mathbf{X})$ the density transformation theorem generalises as
$$f_{\mathbf{Y}}(\mathbf{y}) = \sum_{\{\mathbf{x}\,:\,g(\mathbf{x})=\mathbf{y}\}} f_{\mathbf{X}}(\mathbf{x})\,\Big|\frac{\partial(\mathbf{y})}{\partial(\mathbf{x})}\Big|^{-1}, \quad \mathbf{y} \in J.$$
To apply this formula, one should find all solutions $\mathbf{x}$ to the vector equation $g(\mathbf{x}) = \mathbf{y}$ which, written in coordinates, is a system of $n$ equations in the unknowns $x_1,\dots,x_n$. For each solution the Jacobian must be calculated and the result left entirely in terms of the variables $y_1,\dots,y_n$.
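A simulation sketch of the linear transformation formula (not from the notes; the matrix is a hypothetical choice). Take $X_1, X_2$ iid $\mathrm{Exp}(1)$ and $\mathbf{Y} = A\mathbf{X}$ with $A = \begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}$, so $\det A = 1$ and $A^{-1}\mathbf{y} = (y_1 - y_2,\, 2y_2 - y_1)$. The predicted density is $f_{\mathbf{Y}}(\mathbf{y}) = e^{-(y_1-y_2)-(2y_2-y_1)} = e^{-y_2}$ wherever both inverse coordinates are positive; we compare the probability of a rectangle under this formula with a Monte Carlo estimate:

```python
import math
import random

random.seed(9)
n = 400_000
hits = 0
for _ in range(n):
    x1 = -math.log(1.0 - random.random())   # X1 ~ Exp(1)
    x2 = -math.log(1.0 - random.random())   # X2 ~ Exp(1)
    y1, y2 = 2.0 * x1 + x2, x1 + x2         # Y = A X
    if 2.2 <= y1 <= 2.8 and 1.5 <= y2 <= 2.0:
        hits += 1
empirical = hits / n

# On this rectangle both inverse coordinates y1 - y2 and 2*y2 - y1 are
# positive, so f_Y(y) = exp(-y2) and the rectangle probability integrates to
# (2.8 - 2.2) * (exp(-1.5) - exp(-2.0)).
predicted = 0.6 * (math.exp(-1.5) - math.exp(-2.0))
```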
More informationStatistics 3657 : Moment Approximations
Statistics 3657 : Moment Approximations Preliminaries Suppose that we have a r.v. and that we wish to calculate the expectation of g) for some function g. Of course we could calculate it as Eg)) by the
More informationSDS 321: Introduction to Probability and Statistics
SDS 321: Introduction to Probability and Statistics Lecture 14: Continuous random variables Purnamrita Sarkar Department of Statistics and Data Science The University of Texas at Austin www.cs.cmu.edu/
More informationSTAT 515 MIDTERM 2 EXAM November 14, 2018
STAT 55 MIDTERM 2 EXAM November 4, 28 NAME: Section Number: Instructor: In problems that require reasoning, algebraic calculation, or the use of your graphing calculator, it is not sufficient just to write
More informationChapter 2. Some Basic Probability Concepts. 2.1 Experiments, Outcomes and Random Variables
Chapter 2 Some Basic Probability Concepts 2.1 Experiments, Outcomes and Random Variables A random variable is a variable whose value is unknown until it is observed. The value of a random variable results
More informationStatistics 100A Homework 5 Solutions
Chapter 5 Statistics 1A Homework 5 Solutions Ryan Rosario 1. Let X be a random variable with probability density function a What is the value of c? fx { c1 x 1 < x < 1 otherwise We know that for fx to
More information1 Review of di erential calculus
Review of di erential calculus This chapter presents the main elements of di erential calculus needed in probability theory. Often, students taking a course on probability theory have problems with concepts
More informationChange Of Variable Theorem: Multiple Dimensions
Change Of Variable Theorem: Multiple Dimensions Moulinath Banerjee University of Michigan August 30, 01 Let (X, Y ) be a two-dimensional continuous random vector. Thus P (X = x, Y = y) = 0 for all (x,
More informationSTAT 430/510 Probability
STAT 430/510 Probability Hui Nie Lecture 16 June 24th, 2009 Review Sum of Independent Normal Random Variables Sum of Independent Poisson Random Variables Sum of Independent Binomial Random Variables Conditional
More informationExpectation, Variance and Standard Deviation for Continuous Random Variables Class 6, Jeremy Orloff and Jonathan Bloom
Expectation, Variance and Standard Deviation for Continuous Random Variables Class 6, 8.5 Jeremy Orloff and Jonathan Bloom Learning Goals. Be able to compute and interpret expectation, variance, and standard
More informationSampling Distributions
In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of random sample. For example,
More information1.12 Multivariate Random Variables
112 MULTIVARIATE RANDOM VARIABLES 59 112 Multivariate Random Variables We will be using matrix notation to denote multivariate rvs and their distributions Denote by X (X 1,,X n ) T an n-dimensional random
More information1 Random Variable: Topics
Note: Handouts DO NOT replace the book. In most cases, they only provide a guideline on topics and an intuitive feel. 1 Random Variable: Topics Chap 2, 2.1-2.4 and Chap 3, 3.1-3.3 What is a random variable?
More informationIntroduction to Normal Distribution
Introduction to Normal Distribution Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 17-Jan-2017 Nathaniel E. Helwig (U of Minnesota) Introduction
More informationBASICS OF PROBABILITY
October 10, 2018 BASICS OF PROBABILITY Randomness, sample space and probability Probability is concerned with random experiments. That is, an experiment, the outcome of which cannot be predicted with certainty,
More informationMath 3215 Intro. Probability & Statistics Summer 14. Homework 5: Due 7/3/14
Math 325 Intro. Probability & Statistics Summer Homework 5: Due 7/3/. Let X and Y be continuous random variables with joint/marginal p.d.f. s f(x, y) 2, x y, f (x) 2( x), x, f 2 (y) 2y, y. Find the conditional
More informationECE 302 Division 2 Exam 2 Solutions, 11/4/2009.
NAME: ECE 32 Division 2 Exam 2 Solutions, /4/29. You will be required to show your student ID during the exam. This is a closed-book exam. A formula sheet is provided. No calculators are allowed. Total
More informationBrief Review of Probability
Maura Department of Economics and Finance Università Tor Vergata Outline 1 Distribution Functions Quantiles and Modes of a Distribution 2 Example 3 Example 4 Distributions Outline Distribution Functions
More informationSTAT 430/510: Lecture 16
STAT 430/510: Lecture 16 James Piette June 24, 2010 Updates HW4 is up on my website. It is due next Mon. (June 28th). Starting today back at section 6.7 and will begin Ch. 7. Joint Distribution of Functions
More informationBivariate Normal Distribution
.0. TWO-DIMENSIONAL RANDOM VARIABLES 47.0.7 Bivariate Normal Distribution Figure.: Bivariate Normal pdf Here we use matrix notation. A bivariate rv is treated as a random vector X X =. The expectation
More informationChapter 4. Chapter 4 sections
Chapter 4 sections 4.1 Expectation 4.2 Properties of Expectations 4.3 Variance 4.4 Moments 4.5 The Mean and the Median 4.6 Covariance and Correlation 4.7 Conditional Expectation SKIP: 4.8 Utility Expectation
More information1 Solution to Problem 2.1
Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto
More informationMoments. Raw moment: February 25, 2014 Normalized / Standardized moment:
Moments Lecture 10: Central Limit Theorem and CDFs Sta230 / Mth 230 Colin Rundel Raw moment: Central moment: µ n = EX n ) µ n = E[X µ) 2 ] February 25, 2014 Normalized / Standardized moment: µ n σ n Sta230
More informationBMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs
Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem
More informationASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA Errata
ASM Study Manual for Exam P, First Edition By Dr. Krzysztof M. Ostaszewski, FSA, CFA, MAAA (krzysio@krzysio.net) Errata Effective July 5, 3, only the latest edition of this manual will have its errata
More informationMTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6
MTH739U/P: Topics in Scientific Computing Autumn 16 Week 6 4.5 Generic algorithms for non-uniform variates We have seen that sampling from a uniform distribution in [, 1] is a relatively straightforward
More informationWe introduce methods that are useful in:
Instructor: Shengyu Zhang Content Derived Distributions Covariance and Correlation Conditional Expectation and Variance Revisited Transforms Sum of a Random Number of Independent Random Variables more
More information18.440: Lecture 28 Lectures Review
18.440: Lecture 28 Lectures 17-27 Review Scott Sheffield MIT 1 Outline Continuous random variables Problems motivated by coin tossing Random variable properties 2 Outline Continuous random variables Problems
More informationPerhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.
Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage
More informationSTA 256: Statistics and Probability I
Al Nosedal. University of Toronto. Fall 2017 My momma always said: Life was like a box of chocolates. You never know what you re gonna get. Forrest Gump. Exercise 4.1 Let X be a random variable with p(x)
More information2. Conditional Expectation (9/15/12; cf. Ross)
2. Conditional Expectation (9/15/12; cf. Ross) Intro / Definition Examples Conditional Expectation Computing Probabilities by Conditioning 1 Intro / Definition Recall conditional probability: Pr(A B) Pr(A
More informationUCSD ECE 153 Handout #20 Prof. Young-Han Kim Thursday, April 24, Solutions to Homework Set #3 (Prepared by TA Fatemeh Arbabjolfaei)
UCSD ECE 53 Handout #0 Prof. Young-Han Kim Thursday, April 4, 04 Solutions to Homework Set #3 (Prepared by TA Fatemeh Arbabjolfaei). Time until the n-th arrival. Let the random variable N(t) be the number
More informationUCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, Practice Final Examination (Winter 2017)
UCSD ECE250 Handout #27 Prof. Young-Han Kim Friday, June 8, 208 Practice Final Examination (Winter 207) There are 6 problems, each problem with multiple parts. Your answer should be as clear and readable
More informationChapter 2: Random Variables
ECE54: Stochastic Signals and Systems Fall 28 Lecture 2 - September 3, 28 Dr. Salim El Rouayheb Scribe: Peiwen Tian, Lu Liu, Ghadir Ayache Chapter 2: Random Variables Example. Tossing a fair coin twice:
More informationThe Multivariate Normal Distribution 1
The Multivariate Normal Distribution 1 STA 302 Fall 2017 1 See last slide for copyright information. 1 / 40 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2
More informationWhen Are Two Random Variables Independent?
When Are Two Random Variables Independent? 1 Introduction. Almost all of the mathematics of inferential statistics and sampling theory is based on the behavior of mutually independent random variables,
More informationChapter 5 continued. Chapter 5 sections
Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationContinuous Distributions
A normal distribution and other density functions involving exponential forms play the most important role in probability and statistics. They are related in a certain way, as summarized in a diagram later
More informationThis exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text.
TEST #3 STA 5326 December 4, 214 Name: Please read the following directions. DO NOT TURN THE PAGE UNTIL INSTRUCTED TO DO SO Directions This exam is closed book and closed notes. (You will have access to
More informationDistributions of Functions of Random Variables
STAT/MATH 395 A - PROBABILITY II UW Winter Quarter 217 Néhémy Lim Distributions of Functions of Random Variables 1 Functions of One Random Variable In some situations, you are given the pdf f X of some
More informationChapter 5,6 Multiple RandomVariables
Chapter 5,6 Multiple RandomVariables ENCS66 - Probabilityand Stochastic Processes Concordia University Vector RandomVariables A vector r.v. is a function where is the sample space of a random experiment.
More informationLecture 11. Probability Theory: an Overveiw
Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the
More informationStat 366 A1 (Fall 2006) Midterm Solutions (October 23) page 1
Stat 366 A1 Fall 6) Midterm Solutions October 3) page 1 1. The opening prices per share Y 1 and Y measured in dollars) of two similar stocks are independent random variables, each with a density function
More informationIntroduction to Probability Theory
Introduction to Probability Theory Ping Yu Department of Economics University of Hong Kong Ping Yu (HKU) Probability 1 / 39 Foundations 1 Foundations 2 Random Variables 3 Expectation 4 Multivariate Random
More informationContinuous r.v practice problems
Continuous r.v practice problems SDS 321 Intro to Probability and Statistics 1. (2+2+1+1 6 pts) The annual rainfall (in inches) in a certain region is normally distributed with mean 4 and standard deviation
More informationIEOR 4703: Homework 2 Solutions
IEOR 4703: Homework 2 Solutions Exercises for which no programming is required Let U be uniformly distributed on the interval (0, 1); P (U x) = x, x (0, 1). We assume that your computer can sequentially
More informationContinuous distributions
CHAPTER 7 Continuous distributions 7.. Introduction A r.v. X is said to have a continuous distribution if there exists a nonnegative function f such that P(a X b) = ˆ b a f(x)dx for every a and b. distribution.)
More information