PROBABILITY DISTRIBUTIONS: (continued)

The change of variables technique. Let $x \sim f(x)$ and let $y = y(x)$ be a monotonic transformation of $x$ such that an inverse function $x = x(y)$ exists. Let $A$ be an event defined in terms of $x$, and let $B$ be the equivalent event defined in terms of $y$, such that if $x \in A$, then $y = y(x) \in B$ and vice versa. Then, $P(A) = P(B)$, and we can find the p.d.f. of $y$, denoted by $g(y)$.

The continuous case. If $x$, $y$ are continuous random variables, then

$$\int_{y \in B} g(y)\,dy = \int_{x \in A} f(x)\,dx.$$

If we write $x = x(y)$ in the second integral, then the change of variable technique gives

$$\int_{y \in B} g(y)\,dy = \int_{y \in B} f\{x(y)\}\,\frac{dx}{dy}\,dy.$$

If $y = y(x)$ is a monotonically decreasing transformation, then $dx/dy < 0$ and $f\{x(y)\} > 0$, and so $f\{x(y)\}\,dx/dy < 0$ cannot represent a p.d.f., since $g(y) \geq 0$ is necessary. The recourse is to reverse the direction of integration on the $y$-axis: when $dx/dy < 0$, it is replaced by its modulus $|dx/dy| > 0$. In general,

$$g(y) = f\{x(y)\}\left|\frac{dx}{dy}\right|.$$
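To make the decreasing case concrete, here is a minimal Monte Carlo sketch in Python. The choice of density $f(x) = e^{-x}$ and of the transformation $y = e^{-x}$ is an illustrative assumption, not taken from the notes: with $x(y) = -\ln y$ and $|dx/dy| = 1/y$, the formula gives $g(y) = e^{\ln y} \cdot (1/y) = 1$ on $(0, 1)$, so the simulated values of $y$ should look uniform.

```python
import numpy as np

# Monte Carlo check of g(y) = f(x(y)) |dx/dy| for a *decreasing*
# transformation (illustrative example, not from the text):
# x ~ f(x) = e^{-x} on (0, inf) and y = e^{-x}, so x(y) = -ln(y),
# dx/dy = -1/y, and g(y) = e^{ln y} * |-1/y| = 1 on (0, 1).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)
y = np.exp(-x)

# The histogram of y should be flat, matching g(y) = 1 on (0, 1).
hist, edges = np.histogram(y, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))   # each entry should be close to 1.0
```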
The Standard Normal Distribution. Consider the function $g(z) = e^{-z^2/2}$, where $-\infty < z < \infty$. There is $g(z) > 0$ for all $z$, and also

$$\int_{-\infty}^{\infty} e^{-z^2/2}\,dz = \sqrt{2\pi}.$$

It follows that

$$f(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$

constitutes a p.d.f., described as the standard normal and denoted by $N(z; 0, 1)$. In general, the normal distribution is denoted by $N(x; \mu, \sigma^2)$; so, in this case, there are $\mu = 0$ and $\sigma^2 = 1$.

The General Normal Distribution. This can be derived via the change of variables technique. Let

$$z \sim N(0, 1) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2} = f(z),$$

and let $y = z\sigma + \mu$, so that $z = (y - \mu)/\sigma$ is the inverse function and $dz/dy = \sigma^{-1}$ is its derivative. Then,

$$g(y) = f\{z(y)\}\left|\frac{dz}{dy}\right| = \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{y - \mu}{\sigma}\right)^2\right\}\frac{1}{\sigma} = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{1}{2}\frac{(y - \mu)^2}{\sigma^2}\right\}.$$
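As a quick numerical sanity check of the derived density (the values $\mu = 2$ and $\sigma = 3$ are arbitrary choices for illustration), one can verify on a fine grid that $g(y)$ integrates to unity:

```python
import numpy as np

# Check that g(y) = (1/sqrt(2 pi sigma^2)) exp(-(y - mu)^2 / (2 sigma^2))
# integrates to one, using the illustrative values mu = 2, sigma = 3.
mu, sigma = 2.0, 3.0
y = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
g = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma ** 2)
print(np.trapz(g, y))   # should print a value very close to 1.0
```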
The discrete case. Matters are simpler when $f(x)$ is discrete and $y = y(x)$ is a monotonic transformation. Then, if $y \sim g(y)$, it must be the case that

$$\sum_{y \in B} g(y) = \sum_{x \in A} f(x),$$

whence $g(y) = f\{x(y)\}$.

Example. Let

$$x \sim b(n = 3, p = 2/3) = \frac{3!}{x!(3 - x)!}\left(\frac{2}{3}\right)^x\left(\frac{1}{3}\right)^{3 - x}$$

with $x = 0, 1, 2, 3$, and let $y = x^2$, so that $x(y) = \sqrt{y}$. Then

$$g(y) = \frac{3!}{(\sqrt{y})!\,(3 - \sqrt{y})!}\left(\frac{2}{3}\right)^{\sqrt{y}}\left(\frac{1}{3}\right)^{3 - \sqrt{y}},$$

where $y = 0, 1, 4, 9$.

Example. Let $f(x) = 2x$; $0 \leq x \leq 1$. Let $y = 8x^3$, which implies that $x = (y/8)^{1/3} = y^{1/3}/2$ and $dx/dy = y^{-2/3}/6$. Then,

$$g(y) = f\{x(y)\}\left|\frac{dx}{dy}\right| = 2\left(\frac{y^{1/3}}{2}\right)\frac{y^{-2/3}}{6} = \frac{y^{-1/3}}{6},$$

where $0 \leq y \leq 8$.
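The discrete example is easy to verify numerically. The following sketch tabulates the binomial masses $f(x)$, relocates them to $y = x^2$, and confirms that the relocated masses still sum to one (a sketch only; the variable names are arbitrary):

```python
import numpy as np
from math import comb

# Discrete change of variables for the example above:
# x ~ b(n = 3, p = 2/3) and y = x^2, so g(y) = f(sqrt(y)) for y in {0, 1, 4, 9}.
n, p = 3, 2 / 3
f = {x: comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)}
g = {x ** 2: f[x] for x in range(n + 1)}
print(g)                 # masses at y = 0, 1, 4, 9
print(sum(g.values()))   # should be 1.0 (up to rounding)
```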
Expectations of a random variable. If $x \sim f(x)$, then the expected value of $x$ is

$$E(x) = \int x f(x)\,dx \quad\text{if $x$ is continuous, and}\quad E(x) = \sum_x x f(x) \quad\text{if $x$ is discrete.}$$

The expectations operator. We can economise on notation by defining the expectations operator $E$, which is subject to a number of simple rules. They are as follows:

(a) If $x \geq 0$, then $E(x) \geq 0$.
(b) If $a$ is a constant, then $E(a) = a$.
(c) If $a$ is a constant and $x$ is a random variable, then $E(ax) = aE(x)$.
(d) If $x$, $y$ are random variables, then $E(x + y) = E(x) + E(y)$; that is, the expectation of a sum is the sum of the expectations.

By combining (c) and (d), we get $E(ax + by) = aE(x) + bE(y)$. Thus, the expectations operator is a linear operator, and $E(\sum_i a_i x_i) = \sum_i a_i E(x_i)$; a simulation sketch of this linearity is given below.
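The linearity of the operator is easily illustrated by simulation. The sketch below uses arbitrary illustrative distributions for $x$ and $y$; any pair with known means would do:

```python
import numpy as np

# Simulation sketch of linearity: E(a x + b y) = a E(x) + b E(y),
# with arbitrary illustrative distributions for x and y.
rng = np.random.default_rng(1)
x = rng.exponential(2.0, size=1_000_000)    # E(x) = 2
y = rng.uniform(0.0, 1.0, size=1_000_000)   # E(y) = 1/2
a, b = 3.0, -4.0
print(np.mean(a * x + b * y))               # sample estimate of E(ax + by)
print(a * np.mean(x) + b * np.mean(y))      # matches, up to sampling error
```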
Example. The expected value of the binomial distribution $b(x; n, p)$ is

$$E(x) = \sum_x x\,b(x; n, p) = \sum_{x=0}^{n} x\,\frac{n!}{(n - x)!\,x!}\,p^x(1 - p)^{n - x}.$$

We factorise $np$ from the expression under the summation, and we begin the summation at $x = 1$. On defining $y = x - 1$, which means setting $x = y + 1$ in the expression above, we get

$$E(x) = \sum_{x=0}^{n} x\,b(x; n, p) = np\sum_{y=0}^{n-1}\frac{(n - 1)!}{([n - 1] - y)!\,y!}\,p^y(1 - p)^{[n-1]-y} = np\sum_{y=0}^{n-1} b(y; n - 1, p) = np,$$

where the final equality follows from the fact that we are summing the values of the binomial distribution $b(y; n - 1, p)$ over its entire domain to obtain a total of unity.
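The result $E(x) = np$ can be confirmed by computing the sum directly; the values $n = 10$ and $p = 0.3$ below are arbitrary illustrative choices:

```python
from math import comb

# Direct summation check that the binomial mean computed from the
# definition equals n * p, with illustrative values n = 10, p = 0.3.
n, p = 10, 0.3
mean = sum(x * comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1))
print(mean, n * p)   # both should be 3.0 (up to floating-point error)
```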
Example. Let $f(x) = e^{-x}$; $0 \leq x < \infty$. Then

$$E(x) = \int_0^\infty x e^{-x}\,dx.$$

This must be evaluated by integrating by parts:

$$\int u\,\frac{dv}{dx}\,dx = uv - \int v\,\frac{du}{dx}\,dx.$$

With $u = x$ and $dv/dx = e^{-x}$, this formula gives

$$\int_0^\infty x e^{-x}\,dx = \left[-x e^{-x}\right]_0^\infty + \int_0^\infty e^{-x}\,dx = \left[-x e^{-x}\right]_0^\infty - \left[e^{-x}\right]_0^\infty = 0 + 1 = 1.$$

Observe that, since this integral is unity and since $x e^{-x} \geq 0$ over the domain of $x$, it follows that $f(x) = x e^{-x}$ is also a valid p.d.f.
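A numerical check of this result (a sketch only; truncating the upper limit at 50 is an arbitrary choice that leaves a negligible tail):

```python
import numpy as np

# Numerical check of the integration-by-parts result E(x) = 1
# for f(x) = e^{-x}, with the upper limit truncated at 50.
x = np.linspace(0.0, 50.0, 500_001)
print(np.trapz(x * np.exp(-x), x))   # should print a value close to 1.0
```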
Expectation of a function of a random variable. Let $y = y(x)$ be a function of $x \sim f(x)$. The value of $E(y)$ can be found without first determining the p.d.f. $g(y)$ of $y$. Quite simply, there is

$$E(y) = \int_x y(x) f(x)\,dx.$$

If $y = y(x)$ is a monotonic transformation of $x$, then it follows that

$$E(y) = \int_y y\,g(y)\,dy = \int_y y\,f\{x(y)\}\left|\frac{dx}{dy}\right| dy = \int_x y(x) f(x)\,dx,$$

which establishes a special case of the result. However, the result is not confined to monotonic transformations. It is equally valid for functions that are piecewise monotonic, i.e. ones that have monotonic segments, which includes virtually all of the functions that we might consider.
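The following sketch evaluates $E(y)$ by both routes for a hypothetical monotonic example, $y = x^2$ with $x \sim e^{-x}$ on $(0, \infty)$ (an assumption made for illustration); both integrals should return $E(x^2) = 2$:

```python
import numpy as np

# E(y) computed two ways for the monotonic transform y = x^2 with
# x ~ f(x) = e^{-x} on (0, inf) (illustrative example, not from the text).
x = np.linspace(1e-9, 60.0, 600_001)
lhs = np.trapz(x ** 2 * np.exp(-x), x)          # E(y) = int y(x) f(x) dx

y = np.linspace(1e-9, 3600.0, 600_001)
g = np.exp(-np.sqrt(y)) / (2.0 * np.sqrt(y))    # g(y) = f(x(y)) |dx/dy|
rhs = np.trapz(y * g, y)                        # E(y) = int y g(y) dy
print(lhs, rhs)   # both should be close to 2.0
```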
The moments of a distribution. The $r$th raw moment of $x$ relative to the datum $a$ is defined by the expectation

$$E\{(x - a)^r\} = \int (x - a)^r f(x)\,dx \quad\text{if $x$ is continuous, and}\quad E\{(x - a)^r\} = \sum_x (x - a)^r f(x) \quad\text{if $x$ is discrete.}$$

The variance, which is a measure of the dispersion of $x$, is the second moment of $x$ about the mean. This measure is minimised by the choice of $a = E(x)$:

$$V(x) = E[\{x - E(x)\}^2] = E[x^2 - 2xE(x) + \{E(x)\}^2] = E(x^2) - 2\{E(x)\}^2 + \{E(x)\}^2 = E(x^2) - \{E(x)\}^2.$$

We can also define the variance operator $V$.

(a) If $x$ is a random variable, then $V(x) > 0$.
(b) If $a$ is a constant, then $V(a) = 0$.
(c) If $a$ is a constant and $x$ is a random variable, then $V(ax) = a^2 V(x)$.

To confirm the latter, we may consider

$$V(ax) = E\{[ax - E(ax)]^2\} = a^2 E\{[x - E(x)]^2\} = a^2 V(x).$$

If $x$, $y$ are independently distributed random variables, then $V(x + y) = V(x) + V(y)$; but this is not true in general, as the sketch below illustrates.
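Rule (c) and the failure of additivity under dependence can both be seen in a short simulation (the normal distribution and the constants below are arbitrary illustrative choices; taking $y = x$ gives an extreme case of dependence):

```python
import numpy as np

# Simulation sketch of V(ax) = a^2 V(x), and of the failure of
# V(x + y) = V(x) + V(y) when x and y are dependent (here y = x).
rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, size=1_000_000)   # V(x) = 4
a = 3.0
print(np.var(a * x), a ** 2 * np.var(x))   # both close to 36
print(np.var(x + x), 2 * np.var(x))        # 16 vs 8: additivity fails
```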
The variance of the binomial distribution. Consider a sequence of Bernoulli trials with $x_i \in \{0, 1\}$ for all $i$. The p.d.f. of the generic trial is

$$f(x_i) = p^{x_i}(1 - p)^{1 - x_i}.$$

Then

$$E(x_i) = \sum_{x_i} x_i f(x_i) = 0 \cdot (1 - p) + 1 \cdot p = p.$$

It follows that, in $n$ trials, the expected value of the total score $x = \sum_i x_i$ is

$$E(x) = \sum_i E(x_i) = np.$$

This is the expected value of the binomial distribution. To find the variance of the Bernoulli trial, we use the formula $V(x) = E(x^2) - \{E(x)\}^2$. For a single trial, there is

$$E(x_i^2) = \sum_{x_i = 0, 1} x_i^2 f(x_i) = p, \qquad V(x_i) = p - p^2 = p(1 - p) = pq,$$

where $q = 1 - p$. The outcome of the binomial random variable is the sum of a set of $n$ independent and identical Bernoulli trials. Thus, the variance of the sum is the sum of the variances, and we have

$$V(x) = \sum_{i=1}^{n} V(x_i) = npq.$$
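As with the mean, the formula $V(x) = npq$ can be confirmed by direct summation over the p.d.f.; the values $n = 10$ and $p = 0.3$ are again arbitrary illustrative choices:

```python
from math import comb

# Direct check that the binomial variance computed from the definition
# equals n * p * q with q = 1 - p, for illustrative values n = 10, p = 0.3.
n, p = 10, 0.3
f = [comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)]
mean = sum(x * fx for x, fx in enumerate(f))
var = sum((x - mean) ** 2 * fx for x, fx in enumerate(f))
print(var, n * p * (1 - p))   # both should be 2.1
```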