
October 10, 2018

BASICS OF PROBABILITY

Randomness, sample space and probability

Probability is concerned with random experiments. That is, an experiment whose outcome cannot be predicted with certainty, even if the experiment is repeated under the same conditions. Such an experiment may arise because of lack of information, or because an extremely large number of factors determines the outcome. It is assumed that the collection of possible outcomes can be described prior to the experiment's performance. The set of all possible outcomes is called a sample space, denoted by $\Omega$. A simple example is tossing a coin. There are two outcomes, heads and tails, so we can write $\Omega = \{H, T\}$. Another simple example is rolling a die: $\Omega = \{1, 2, 3, 4, 5, 6\}$. A sample space may contain a finite or infinite number of outcomes.

A collection (subset)¹ of elements of $\Omega$ is called an event. In the die-rolling example, the event $A = \{2, 4, 6\}$ occurs if the outcome of the experiment is an even number.

The crucial concept is that of probability, or a probability function. A probability function assigns probabilities (numbers between 0 and 1) to events. There exist a number of interpretations of probability. According to the frequentist (or objective) approach, the probability of an event is the relative frequency of occurrence of the event when the experiment is repeated a large number of times. The problem with this approach is that often the experiments to which we want to ascribe probabilities cannot be repeated, for various reasons. Alternatively, the subjective approach interprets the probability of an event as being ascribed by one's knowledge or beliefs, general agreement, etc.

A probability function has to satisfy the following axioms of probability:
1. $\Pr(\Omega) = 1$.
2. For any event $A$, $\Pr(A) \geq 0$.
3. If $A_1, A_2, \ldots$ is a countable sequence of mutually exclusive² events, then $\Pr(A_1 \cup A_2 \cup \ldots) = \Pr(A_1) + \Pr(A_2) + \ldots$.

Some important properties of probability include:
- If $A \subset B$ then $\Pr(A) \leq \Pr(B)$.
- $\Pr(A) \leq 1$.
- $\Pr(A) = 1 - \Pr(A^c)$.
- $\Pr(\emptyset) = 0$.
- $\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$.

A sample space, its collection of events and a probability function together define a probability space (a formal definition of the probability space is omitted, since it requires some concepts beyond the scope of this course).

¹ $A$ is a subset of $B$ ($A \subset B$) if $\omega \in A$ implies that $\omega \in B$ as well.
² Events $A$ and $B$ are mutually exclusive if $A \cap B = \emptyset$.
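As a quick illustration of the frequentist interpretation and the addition rule (this sketch is not part of the original notes; the events A and B below are chosen arbitrarily for the die example):

    import numpy as np

    rng = np.random.default_rng(0)
    rolls = rng.integers(1, 7, size=100_000)     # repeat the die experiment many times

    A = (rolls % 2 == 0)        # event A = {2, 4, 6}: outcome is even
    B = (rolls <= 4)            # event B = {1, 2, 3, 4}

    # Relative frequencies approximate the probabilities (frequentist interpretation)
    print(A.mean())                              # ~ 0.5 = Pr(A)
    print((A | B).mean())                        # ~ Pr(A u B)
    print(A.mean() + B.mean() - (A & B).mean())  # addition rule gives the same value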

Theorem 1 (Continuity of probability).
(a) Let $\{A_i : i = 1, 2, \ldots\}$ be a monotone increasing sequence of events that increases to $A$: $A_1 \subset A_2 \subset \ldots \subset A$, where $A = \lim_{i \to \infty} A_i = \bigcup_{i=1}^{\infty} A_i$. Then $\lim_{n \to \infty} \Pr(A_n) = \Pr(A)$.
(b) Let $\{A_i : i = 1, 2, \ldots\}$ be a monotone decreasing sequence of events that decreases to $A$: $A_1 \supset A_2 \supset \ldots \supset A$, where $A = \lim_{i \to \infty} A_i = \bigcap_{i=1}^{\infty} A_i$. Then $\lim_{n \to \infty} \Pr(A_n) = \Pr(A)$.

Conditional probability and independence

If $\Pr(B) > 0$, the conditional probability of an event $A$, conditional on $B$, is defined as follows:
\[ \Pr(A \mid B) = \frac{\Pr(A \cap B)}{\Pr(B)}. \]
Conditional probability gives the probability of $A$ knowing that $B$ has occurred. For a given $B$, the conditional probability function $\Pr(\cdot \mid B)$ is a proper probability function: it satisfies the axioms of probability and the same properties as the individual (or marginal) probability function. While marginal probabilities are ascribed based on the whole sample space $\Omega$, conditioning can be seen as updating the sample space based on new information.

The probability of events $A$ and $B$ occurring jointly is given by the probability of their intersection, $\Pr(A \cap B)$. The events $A$ and $B$ are called independent if the probability of their occurring together is equal to the product of their individual probabilities: $\Pr(A \cap B) = \Pr(A)\Pr(B)$. If $A$ and $B$ are independent, then the fact that $B$ has occurred provides us with no information regarding the occurrence of $A$: $\Pr(A \mid B) = \Pr(A)$.

If $A$ and $B$ are independent, then so are $A^c$ and $B$, $A$ and $B^c$, $A^c$ and $B^c$. Intuitively this means that, if $B$ cannot provide information about the occurrence of $A$, then it also cannot tell us whether $A$ did not occur ($A^c$).

Random variables

Random experiments generally require a verbal description; thus it is more convenient to work with random variables - numerical representations of random experiments. A random variable is a function from a sample space to the real line. For every $\omega \in \Omega$, a random variable $X(\omega)$ assigns a number $x \in \mathbb{R}$. For example, in the coin-tossing experiment, we can define a random variable that takes on the value 0 if the outcome of the experiment is heads, and 1 if the outcome is tails: $X(H) = 0$, $X(T) = 1$. Naturally, one can define many different random variables on the same sample space.

For notational simplicity, $X(\omega)$ is usually replaced simply by $X$; however, it is important to distinguish between random variables (functions) and realized values. A common convention is to denote random variables by capital letters, and to denote realized values by small letters.

One can speak about the probability of a random variable taking on a particular value, $\Pr(X = x)$, where $x \in \mathbb{R}$, or more generally, the probability of $X$ taking a value in some subset of the real line, $\Pr(X \in S)$, where $S \subset \mathbb{R}$, for example $S = (-\infty, 2]$. The probability of such an event is defined by the probability of the corresponding subset of the original sample space $\Omega$: $\Pr(X \in S) = \Pr(\{\omega \in \Omega : X(\omega) \in S\})$. For example, suppose that in the coin-flipping example $X$ is defined as above. Then $\Pr(X < 2)$ is given by the probability of the event $\{H, T\}$, $\Pr(X \in [0.3, 5]) = \Pr(\{T\})$, and $\Pr(X > 1.2) = \Pr(\emptyset) = 0$.

For a random variable $X$, its cumulative distribution function (CDF) is defined as $F_X(x) = \Pr(X \leq x)$. Sometimes, the subscript $X$ can be omitted if there is no ambiguity concerning the random variable being described. A CDF must be defined for all $u \in \mathbb{R}$, and satisfy the following conditions:
1. $\lim_{u \to -\infty} F(u) = 0$, $\lim_{u \to +\infty} F(u) = 1$.
2. $F(x) \leq F(y)$ if $x \leq y$ (nondecreasing).
3. $\lim_{u \downarrow x} F(u) = F(x)$ (right-continuous).
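As a small numerical illustration (not part of the original notes), the empirical CDF of the coin-toss random variable $X$ defined above can be compared with its exact step-function CDF; the helper name ecdf is introduced only for this sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    x_draws = rng.integers(0, 2, size=50_000)   # X(H) = 0, X(T) = 1 with a fair coin

    def ecdf(sample, u):
        """Empirical CDF: fraction of draws <= u, an estimate of F_X(u) = Pr(X <= u)."""
        return np.mean(sample <= u)

    for u in [-0.5, 0.0, 0.7, 1.0, 2.0]:
        exact = 0.0 if u < 0 else (0.5 if u < 1 else 1.0)   # exact step-function CDF
        print(u, round(ecdf(x_draws, u), 3), exact)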

A CDF gives a complete description of the distribution of a random variable: for any subset $S$ of the real line for which $\Pr(X \in S)$ is defined, $\Pr(X \in S)$ can be computed using the knowledge of the CDF. Two random variables are equal in distribution, denoted by $X =_d Y$, if they have the same CDF, i.e. $F_X(u) = F_Y(u)$ for all $u \in \mathbb{R}$. Note that equality in distribution does not imply equality in the usual sense. It is possible that $X =_d Y$, but $\Pr(X = Y) = 0$. Furthermore, the CDFs may be equal even if $X$ and $Y$ are defined on different probability spaces; in this case, the statement $\Pr(X = Y)$ is meaningless.

A random variable is called discrete if its CDF is a step function. In this case, there exists a countable set of real numbers $\mathcal{X} = \{x_1, x_2, \ldots\}$ such that $\Pr(X = x_i) = p_X(x_i) > 0$ for all $x_i \in \mathcal{X}$ and $\sum_i p_X(x_i) = 1$. This set is called the support of the distribution; it contains all the values that $X$ can take with probability different from zero. The values $p_X(x_i)$ give a probability mass function (PMF). In this case,
\[ F_X(u) = \sum_{x_i \leq u} p_X(x_i). \]

A random variable is continuous if its CDF is a continuous function. In this case, $\Pr(X = x) = 0$ for all $x \in \mathbb{R}$, so it is impossible to describe the distribution of $X$ by specifying probabilities at various points on the real line. Instead, the distribution of a continuous random variable can be described by a probability density function (PDF), which is defined as
\[ f_X(x) = \left. \frac{dF_X(u)}{du} \right|_{u=x}. \]
Thus, $F_X(x) = \int_{-\infty}^{x} f_X(u)\,du$, and $\Pr(X \in [a, b]) = \int_a^b f_X(u)\,du$. Since the CDF is nondecreasing, $f_X(x) \geq 0$ for all $x \in \mathbb{R}$. Further, $\int_{-\infty}^{\infty} f_X(u)\,du = 1$.

Random vectors, multivariate and conditional distributions

In economics we are usually concerned with relationships between a number of variables. Thus, we need to consider the joint behavior of several random variables defined on the same probability space. A random vector is a function from the sample space $\Omega$ to $\mathbb{R}^n$. For example, select an individual at random and measure his height $H$, weight $W$ and shoe size $S$:
\[ X = \begin{pmatrix} H \\ W \\ S \end{pmatrix}. \]
Another example is tossing a coin $n$ times. In this experiment, the sample space consists of all possible sequences of $n$ $H$'s and $T$'s. Let $X_j$ be a random variable equal to 1 if the $j$-th toss is $H$ and zero otherwise. Then the random vector $X$ is given by
\[ X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}. \]
By convention, a random vector is usually a column vector. Let $x \in \mathbb{R}^n$, i.e. $x = (x_1, x_2, \ldots, x_n)'$. The CDF of a vector (or the joint CDF of its elements) is defined as follows:
\[ F(x_1, x_2, \ldots, x_n) = \Pr(X_1 \leq x_1, X_2 \leq x_2, \ldots, X_n \leq x_n) \quad \text{for all } x \in \mathbb{R}^n. \]
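A short sketch (illustrative only, using the standard normal distribution as an example) of the relationship between the PDF and the CDF in the continuous case:

    import numpy as np
    from scipy import stats, integrate

    # Continuous case: the CDF is the integral of the PDF, F_X(x) = int_{-inf}^{x} f_X(u) du.
    x = 1.3
    F_from_pdf, _ = integrate.quad(stats.norm.pdf, -np.inf, x)
    print(F_from_pdf, stats.norm.cdf(x))          # both ~ 0.9032

    # Pr(X in [a, b]) = F_X(b) - F_X(a) = int_a^b f_X(u) du
    a, b = -1.0, 2.0
    p1 = stats.norm.cdf(b) - stats.norm.cdf(a)
    p2, _ = integrate.quad(stats.norm.pdf, a, b)
    print(p1, p2)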

If the joint CDF is a continuous function, then the corresponding joint PDF is given by
\[ f(x_1, x_2, \ldots, x_n) = \left. \frac{\partial^n F(u_1, u_2, \ldots, u_n)}{\partial u_1\, \partial u_2 \cdots \partial u_n} \right|_{u_1 = x_1, u_2 = x_2, \ldots, u_n = x_n}, \]
and thus,
\[ F(x_1, x_2, \ldots, x_n) = \int_{-\infty}^{x_1} \int_{-\infty}^{x_2} \cdots \int_{-\infty}^{x_n} f(u_1, u_2, \ldots, u_n)\, du_n \cdots du_2\, du_1. \]

Since the joint distribution describes the behavior of all random variables jointly, it is possible to obtain from the joint distribution the individual distribution of a single element of the random vector (the marginal distribution), or the joint distribution of a number of its elements. One can obtain the marginal distribution of, for example, $X_1$ by integrating out variables $x_2$ through $x_n$.

Consider a bivariate case. Let $X$ and $Y$ be two random variables with the CDF and PDF given by $F_{X,Y}$ and $f_{X,Y}$ respectively. The marginal CDF of $X$ is
\[ F_X(x) = \Pr(X \leq x) = \Pr(X \leq x,\ -\infty < Y < \infty) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{X,Y}(u, v)\, dv\, du, \]
where the second equality holds because $Y$ can take any value. Now, the marginal PDF of $X$ is
\[ f_X(x) = \frac{dF_X(x)}{dx} = \frac{d}{dx} \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{X,Y}(u, v)\, dv\, du = \int_{-\infty}^{\infty} f_{X,Y}(x, v)\, dv. \]
In the discrete case, one can obtain a marginal PMF from the joint PMF in a similar way, by using sums instead of integrals:
\[ p_Y(y_j) = \sum_{i=1}^{n} p_{X,Y}(x_i, y_j). \]
Note that a joint distribution provides more information than the marginal distributions of the elements of a random vector taken together. Two different joint distributions may have the same set of marginal distribution functions. In general, it is impossible to obtain a joint distribution from the marginal distributions.

A conditional distribution describes the distribution of one random variable (vector) conditional on another random variable (vector). In the continuous case, the conditional PDF and CDF of $X$ given $Y$ are defined as
\[ f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}, \qquad F_{X|Y}(x \mid y) = \int_{-\infty}^{x} f_{X|Y}(u \mid y)\, du, \]
respectively, for $f_Y(y) > 0$. In the discrete case, suppose that with a probability greater than zero $X$ takes values in $\{x_1, x_2, \ldots, x_n\}$, and $Y$ takes values in $\{y_1, y_2, \ldots, y_k\}$. Let $p_{X,Y}(x_i, y_j)$ be the joint PMF. Then the conditional PMF of $X$ conditional on $Y$ is given by
\[ p_{X|Y}(x \mid y_j) = \frac{p_{X,Y}(x, y_j)}{p_Y(y_j)} \quad \text{for } j = 1, 2, \ldots, k. \]
It is important to distinguish between $f_{X|Y}(x \mid y)$ and $f_{X|Y}(x \mid Y)$. The first means that $Y$ is fixed at some realized value $y$, and $f_{X|Y}(x \mid y)$ is not a random function. On the other hand, the notation $f_{X|Y}(x \mid Y)$ means that uncertainty about $Y$ remains, and, consequently, $f_{X|Y}(x \mid Y)$ is a random function.
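A small numerical sketch (the joint PMF below is made up purely for illustration) of how marginal and conditional PMFs are obtained from a joint PMF by summing and dividing:

    import numpy as np

    # Hypothetical joint PMF p_{X,Y}(x_i, y_j): rows index values of X, columns values of Y.
    p_XY = np.array([[0.10, 0.20],
                     [0.30, 0.15],
                     [0.05, 0.20]])
    assert np.isclose(p_XY.sum(), 1.0)

    p_X = p_XY.sum(axis=1)        # marginal PMF of X: sum over the values of Y
    p_Y = p_XY.sum(axis=0)        # marginal PMF of Y: sum over the values of X

    # Conditional PMF of X given Y = y_j: p_{X|Y}(x_i | y_j) = p_{X,Y}(x_i, y_j) / p_Y(y_j)
    p_X_given_Y = p_XY / p_Y      # broadcasting divides each column j by p_Y(y_j)
    print(p_X, p_Y)
    print(p_X_given_Y.sum(axis=0))   # each column sums to 1, as a proper PMF should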

Conditional CDFs and PDFs satisfy all the properties of the unconditional CDF and PDF respectively.

The concept of independent random variables is related to that of independent events. Suppose that for all pairs of subsets of the real line, $S_1$ and $S_2$, we have that the events $\{X \in S_1\}$ and $\{Y \in S_2\}$ are independent, i.e.
\[ \Pr(X \in S_1, Y \in S_2) = \Pr(X \in S_1)\Pr(Y \in S_2). \tag{1} \]
In this case, we say that $X$ and $Y$ are independent. In particular, the above condition implies that³
\[ F_{X,Y}(x, y) = F_X(x)F_Y(y) \quad \text{for all } x \in \mathbb{R},\ y \in \mathbb{R}. \tag{2} \]
In the continuous case, random variables are independent if and only if their joint PDF can be expressed as a product of their marginal PDFs:
\[ f_{X,Y}(x, y) = f_X(x)f_Y(y) \quad \text{for all } x \in \mathbb{R},\ y \in \mathbb{R}. \]
Consequently, independence implies that for all $x \in \mathbb{R}$, $y \in \mathbb{R}$ such that $f_Y(y) > 0$, we have that $f_{X|Y}(x \mid y) = f_X(x)$.

³ Equation (2) can be used as a definition of independence as well, since the CDF provides a complete description of the distribution: (2) actually implies (1) as well.

Proposition 1. For any functions $g$ and $h$, if $X$ and $Y$ are independent, then so are $g(X)$ and $h(Y)$.

Expectation and moments

Given a random variable $X$, its mean, or expectation, or expected value is defined as
\[ E(X) = \sum_i x_i p_X(x_i) \quad \text{in the discrete case}, \qquad E(X) = \int_{-\infty}^{\infty} x f_X(x)\, dx \quad \text{in the continuous case}. \]
Note that $\int_0^{\infty} x f_X(x)\, dx$ or $\int_{-\infty}^{0} x f_X(x)\, dx$ can be infinite. In such cases, we say that the expectation does not exist, and assign $E(X) = +\infty$ if $\int_0^{\infty} x f_X(x)\, dx = \infty$ and $\int_{-\infty}^{0} |x| f_X(x)\, dx < \infty$, and $E(X) = -\infty$ if $\int_0^{\infty} x f_X(x)\, dx < \infty$ and $\int_{-\infty}^{0} |x| f_X(x)\, dx = \infty$. When both are infinite, the expectation is not defined. The necessary and sufficient condition for $E(X)$ to be defined and finite is that $E|X| < \infty$ (try to prove it), in which case we say that $X$ is integrable. Similarly to the continuous case, $E(X)$ can be infinite or undefined in the discrete case if $X$ can take on a countably infinite number of values.

The mean is a number (not random), the weighted average of all values that $X$ can take. It is a characteristic of the distribution and not of a random variable, i.e. if $X =_d Y$ then $E(X) = E(Y)$. However, equality of means does not imply equality of distributions.

Let $g$ be a function. The expected value of $g(X)$ is defined as
\[ E(g(X)) = \int_{-\infty}^{\infty} g(x) f_X(x)\, dx. \]
The $k$-th moment of a random variable $X$ is defined as $E(X^k)$. The first moment is simply the mean.
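As an illustrative sketch (not part of the original notes; the function $g$ and the standard normal distribution are chosen arbitrarily), $E(g(X))$ computed by numerical integration can be compared with a Monte Carlo average:

    import numpy as np
    from scipy import integrate, stats

    # E[g(X)] = int g(x) f_X(x) dx; compare numerical integration with a Monte Carlo average.
    g = lambda x: x**2 + np.sin(x)          # an arbitrary function chosen for the example

    exact, _ = integrate.quad(lambda x: g(x) * stats.norm.pdf(x), -np.inf, np.inf)

    rng = np.random.default_rng(2)
    draws = rng.standard_normal(200_000)    # X ~ N(0, 1)
    mc = g(draws).mean()                    # sample average approximates E[g(X)]

    print(exact, mc)                        # both ~ 1 (E[X^2] = 1, E[sin X] = 0)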

The $k$-th central moment of $X$ is $E(X - EX)^k$. The second central moment is called the variance:
\[ \mathrm{Var}(X) = E(X - EX)^2 = \int_{-\infty}^{\infty} (x - EX)^2 f_X(x)\, dx. \]
While the mean measures the center of the distribution, the variance is a measure of the spread of the distribution.

If $E|X|^n = \infty$, we say that the $n$-th moment does not exist. If the $n$-th moment exists, then all lower-order moments exist as well, as can be seen from the following result.

Proposition 2. Let $X$ be a random variable, and let $n > 0$ be an integer. If $E|X|^n < \infty$ and $m$ is an integer such that $m \leq n$, then $E|X|^m < \infty$.

Proof. We consider the continuous case. In the discrete case, the proof is similar.
\begin{align*}
E|X|^m &= \int_{|x| \leq 1} |x|^m f_X(x)\, dx + \int_{|x| > 1} |x|^m f_X(x)\, dx \\
&\leq \int_{|x| \leq 1} f_X(x)\, dx + \int_{|x| > 1} |x|^m f_X(x)\, dx && \text{because } |x|^m \leq 1 \text{ under the first integral} \\
&\leq \int_{|x| \leq 1} f_X(x)\, dx + \int_{|x| > 1} |x|^n f_X(x)\, dx && \text{because } |x|^m \leq |x|^n \text{ under the second integral} \\
&\leq 1 + E|X|^n < \infty.
\end{align*}

For a function of two random variables, $h(X, Y)$, its expectation is defined as
\[ E(h(X, Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) f_{X,Y}(x, y)\, dx\, dy, \]
with a similar definition in the discrete case, with the integral and joint PDF replaced by the sum and joint PMF. The covariance of two random variables $X$ and $Y$ is defined as
\[ \mathrm{Cov}(X, Y) = E\big((X - EX)(Y - EY)\big) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - EX)(y - EY) f_{X,Y}(x, y)\, dx\, dy. \]
Let $a$, $b$ and $c$ be some constants. Some useful properties include:
- Linearity of expectation: $E(aX + bY + c) = aE(X) + bE(Y) + c$.
- $\mathrm{Var}(aX + bY + c) = a^2 \mathrm{Var}(X) + b^2 \mathrm{Var}(Y) + 2ab\,\mathrm{Cov}(X, Y)$.
- $\mathrm{Cov}(aX + bY, cZ) = ac\,\mathrm{Cov}(X, Z) + bc\,\mathrm{Cov}(Y, Z)$.
- $\mathrm{Cov}(X, Y) = \mathrm{Cov}(Y, X)$.
- $\mathrm{Cov}(X, a) = 0$.
- $\mathrm{Cov}(X, X) = \mathrm{Var}(X)$.
- $E(X - EX) = 0$.
- $\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y)$.
- $\mathrm{Var}(X) = E(X^2) - (EX)^2$.
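A quick simulation sketch (the constants and the way $X$ and $Y$ are made correlated are chosen arbitrarily) checking the variance rule for a linear combination:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000
    X = rng.standard_normal(n)
    Y = 0.6 * X + 0.8 * rng.standard_normal(n)   # correlated with X by construction
    a, b, c = 2.0, -1.5, 4.0

    lhs = np.var(a * X + b * Y + c)
    rhs = a**2 * np.var(X) + b**2 * np.var(Y) + 2 * a * b * np.cov(X, Y)[0, 1]
    print(lhs, rhs)    # approximately equal; the constant c does not affect the variance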

The correlation coefficient of $X$ and $Y$ is defined as
\[ \rho_{X,Y} = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}}. \]
The correlation coefficient is bounded between -1 and 1. It is equal to -1 or 1 if and only if, with probability equal to 1, one random variable is a linear function of the other: $Y = a + bX$.

If $X$ and $Y$ are independent, then $E(XY) = E(X)E(Y)$ and $\mathrm{Cov}(X, Y) = 0$. However, zero correlation (uncorrelatedness) does not imply independence.

For a random vector (matrix), the expectation is defined as a vector (matrix) composed of the expected values of its corresponding elements:
\[ E(X) = E\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix} = \begin{pmatrix} E(X_1) \\ E(X_2) \\ \vdots \\ E(X_n) \end{pmatrix}. \]
The variance-covariance matrix of a random $n$-vector is an $n \times n$ matrix defined as
\begin{align*}
\mathrm{Var}(X) &= E\big((X - EX)(X - EX)'\big) \\
&= E\begin{pmatrix} X_1 - EX_1 \\ X_2 - EX_2 \\ \vdots \\ X_n - EX_n \end{pmatrix}\big(X_1 - EX_1,\ X_2 - EX_2,\ \ldots,\ X_n - EX_n\big) \\
&= \begin{pmatrix}
E(X_1 - EX_1)(X_1 - EX_1) & E(X_1 - EX_1)(X_2 - EX_2) & \ldots & E(X_1 - EX_1)(X_n - EX_n) \\
E(X_2 - EX_2)(X_1 - EX_1) & E(X_2 - EX_2)(X_2 - EX_2) & \ldots & E(X_2 - EX_2)(X_n - EX_n) \\
\vdots & \vdots & \ddots & \vdots \\
E(X_n - EX_n)(X_1 - EX_1) & E(X_n - EX_n)(X_2 - EX_2) & \ldots & E(X_n - EX_n)(X_n - EX_n)
\end{pmatrix} \\
&= \begin{pmatrix}
\mathrm{Var}(X_1) & \mathrm{Cov}(X_1, X_2) & \ldots & \mathrm{Cov}(X_1, X_n) \\
\mathrm{Cov}(X_2, X_1) & \mathrm{Var}(X_2) & \ldots & \mathrm{Cov}(X_2, X_n) \\
\vdots & \vdots & \ddots & \vdots \\
\mathrm{Cov}(X_n, X_1) & \mathrm{Cov}(X_n, X_2) & \ldots & \mathrm{Var}(X_n)
\end{pmatrix}.
\end{align*}
It is a symmetric, positive semi-definite matrix, with variances on the main diagonal and covariances off the main diagonal. The variance-covariance matrix is positive semi-definite (denoted by $\mathrm{Var}(X) \geq 0$), since for any $n$-vector of constants $a$, we have that $a'\mathrm{Var}(X)a \geq 0$:
\[ a'\mathrm{Var}(X)a = a'E\big((X - EX)(X - EX)'\big)a = E\big(a'(X - EX)(X - EX)'a\big) = E\big((X - EX)'a\big)^2 \geq 0. \]
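A brief numerical illustration (the mixing matrix below is arbitrary) that a sample variance-covariance matrix is symmetric and positive semi-definite:

    import numpy as np

    rng = np.random.default_rng(4)
    # 3-dimensional random vector with correlated components (construction is arbitrary)
    Z = rng.standard_normal((100_000, 3))
    X = Z @ np.array([[1.0, 0.5, 0.0],
                      [0.0, 1.0, 0.3],
                      [0.0, 0.0, 1.0]])

    V = np.cov(X, rowvar=False)          # sample variance-covariance matrix (3 x 3)
    print(np.allclose(V, V.T))           # symmetric
    print(np.linalg.eigvalsh(V) >= 0)    # eigenvalues nonnegative: positive semi-definite

    a = np.array([2.0, -1.0, 0.5])       # any vector of constants
    print(a @ V @ a >= 0)                # quadratic form a' Var(X) a is nonnegative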

Let $X \in \mathbb{R}^n$ and $Y \in \mathbb{R}^k$ be two random vectors. The covariance of $X$ with $Y$ is an $n \times k$ matrix defined as
\[ \mathrm{Cov}(X, Y) = E\big((X - EX)(Y - EY)'\big) = \begin{pmatrix}
\mathrm{Cov}(X_1, Y_1) & \mathrm{Cov}(X_1, Y_2) & \ldots & \mathrm{Cov}(X_1, Y_k) \\
\mathrm{Cov}(X_2, Y_1) & \mathrm{Cov}(X_2, Y_2) & \ldots & \mathrm{Cov}(X_2, Y_k) \\
\vdots & \vdots & \ddots & \vdots \\
\mathrm{Cov}(X_n, Y_1) & \mathrm{Cov}(X_n, Y_2) & \ldots & \mathrm{Cov}(X_n, Y_k)
\end{pmatrix}. \]
Some useful properties:
- $\mathrm{Var}(X) = E(XX') - E(X)E(X)'$.
- $\mathrm{Cov}(X, Y) = \mathrm{Cov}(Y, X)'$.
- $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + \mathrm{Cov}(X, Y) + \mathrm{Cov}(Y, X)$.
- If $Y = \alpha + \Gamma X$, where $\alpha \in \mathbb{R}^k$ is a fixed (non-random) vector and $\Gamma$ is a $k \times n$ fixed matrix, then $\mathrm{Var}(Y) = \Gamma\,\mathrm{Var}(X)\,\Gamma'$.
- For random vectors $X$, $Y$, $Z$ and non-random matrices $A$, $B$, $C$: $\mathrm{Cov}(AX + BY, CZ) = A\,\mathrm{Cov}(X, Z)C' + B\,\mathrm{Cov}(Y, Z)C'$.

Conditional expectation

Conditional expectation is defined similarly to unconditional expectation, but with respect to a conditional distribution. For example, in the continuous case,
\[ E(X \mid Y) = \int_{-\infty}^{\infty} x f_{X|Y}(x \mid Y)\, dx. \]
In the discrete case, the conditional expectation is defined similarly, by using summation and the conditional PMF. Contrary to the unconditional expectation, the conditional expectation is a random variable, since $f_{X|Y}(x \mid Y)$ is a random function. The conditional expectation of $X$ conditional on $Y$ gives the average value of $X$ conditional on $Y$, and can be seen as a function of $Y$: $E(X \mid Y) = g(Y)$ for some function $g$, which is determined by the joint distribution of $X$ and $Y$.

Conditional expectation satisfies all the properties of unconditional expectation. Other properties include:
- Law of Iterated Expectations (LIE): $E\big(E(X \mid Y)\big) = E(X)$. Proof for the bivariate continuous case:
\begin{align*}
E\big(E(X \mid Y)\big) &= \int_{-\infty}^{\infty} E(X \mid Y = y) f_Y(y)\, dy \\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} x f_{X|Y}(x \mid y)\, dx \right) f_Y(y)\, dy \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f_{X|Y}(x \mid y) f_Y(y)\, dy\, dx \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f_{X,Y}(x, y)\, dy\, dx \\
&= \int_{-\infty}^{\infty} x \left( \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy \right) dx \\
&= \int_{-\infty}^{\infty} x f_X(x)\, dx = E(X).
\end{align*}
- For any functions $g$ and $h$, $E\big(g(X)h(Y) \mid Y\big) = h(Y)\,E\big(g(X) \mid Y\big)$.
- If $X$ and $Y$ are independent, then $E(X \mid Y) = E(X)$, a constant.
- Suppose that $E(X \mid Y) = E(X)$; then $\mathrm{Cov}(X, Y) = 0$.
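A quick simulation sketch of the Law of Iterated Expectations (the joint distribution below, with a known linear conditional mean, is chosen only for illustration):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000_000
    Y = rng.standard_normal(n)
    X = 2.0 + 3.0 * Y + rng.standard_normal(n)   # so E(X | Y) = 2 + 3Y and E(X) = 2

    cond_mean = 2.0 + 3.0 * Y                    # the known conditional expectation g(Y)
    print(cond_mean.mean())                      # E[E(X | Y)] ~ 2
    print(X.mean())                              # E(X)        ~ 2, as the LIE asserts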

The second property is, basically, linearity of expectation, since, once we condition on $Y$, $h(Y)$ can be seen as a constant. Also, note that $E(X \mid Y) = E(X)$ does not imply that $X$ and $Y$ are independent, because it simply says that the first conditional moment is not a function of $Y$. It is still possible that higher conditional moments of $X$, and its conditional distribution, depend on $Y$. The property $E(X \mid Y) = E(X)$ is called mean independence.

Moment generating function

The material discussed here is adapted from Hogg, McKean, and Craig (2005): Introduction to Mathematical Statistics. Let $X$ be a random variable such that $E(e^{tX}) < \infty$ for $-h < t < h$ for some $h > 0$. The moment generating function (MGF) of $X$ is defined as $M(t) = E(e^{tX})$ for $-h < t < h$.

Let $M^{(s)}(t)$ denote the $s$-th order derivative of $M(t)$ at $t$. Suppose that $X$ is continuously distributed with density $f(x)$. We have:
\[ M^{(1)}(t) = \frac{dM(t)}{dt} = \frac{d}{dt} \int_{-\infty}^{\infty} e^{tx} f(x)\, dx = \int_{-\infty}^{\infty} \frac{d}{dt} e^{tx} f(x)\, dx = \int_{-\infty}^{\infty} x e^{tx} f(x)\, dx. \]
Since $e^{0 \cdot x} = 1$, for the first derivative of the MGF evaluated at $t = 0$, we have:
\[ M^{(1)}(0) = \int_{-\infty}^{\infty} x f(x)\, dx = E(X). \]
Next, for the second derivative of the MGF we have:
\[ M^{(2)}(t) = \frac{d^2 M(t)}{dt^2} = \frac{d}{dt} \int_{-\infty}^{\infty} x e^{tx} f(x)\, dx = \int_{-\infty}^{\infty} x^2 e^{tx} f(x)\, dx, \]
and therefore $M^{(2)}(0) = E(X^2)$. More generally,
\[ M^{(s)}(t) = \int_{-\infty}^{\infty} x^s e^{tx} f(x)\, dx, \]
and, when it exists, $M^{(s)}(0) = E(X^s)$.
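A small sketch (illustrative only, using the exponential distribution with rate 1, whose MGF is $M(t) = 1/(1-t)$ for $t < 1$) of recovering moments from an MGF by numerical differentiation:

    import numpy as np

    # MGF of an Exponential(1) random variable: M(t) = 1 / (1 - t) for t < 1.
    M = lambda t: 1.0 / (1.0 - t)

    h = 1e-4
    m1 = (M(h) - M(-h)) / (2 * h)                  # central difference ~ M'(0)  = E(X)
    m2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2        # second difference  ~ M''(0) = E(X^2)
    print(m1, m2)                                  # ~ 1 and ~ 2

    # Monte Carlo check of the same moments
    rng = np.random.default_rng(6)
    x = rng.exponential(1.0, size=500_000)
    print(x.mean(), (x**2).mean())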

Thus, using the MGF one can recover all existing moments of the distribution of $X$. In fact, a stronger result holds, and one can show that if two random variables have the same MGF then they have the same distribution.

Proposition 3. Let $M_X(t)$ and $M_Y(t)$ be the two MGFs corresponding to the CDFs $F_X(u)$ and $F_Y(u)$ respectively, and assume that $M_X(t)$ and $M_Y(t)$ exist on $-h < t < h$ for some $h > 0$. Then $F_X(u) = F_Y(u)$ for all $u \in \mathbb{R}$ if and only if $M_X(t) = M_Y(t)$ for all $t$ in $-h < t < h$.

Normal distribution

For $x \in \mathbb{R}$, the density function (PDF) of a normal distribution is given by
\[ f(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), \]
where $\mu$ and $\sigma^2$ are the two parameters determining the distribution. The common notation for a normally distributed random variable is $X \sim N(\mu, \sigma^2)$. The normal distribution with $\mu = 0$ and $\sigma = 1$ is called the standard normal distribution.

Next, we derive the MGF of a normal distribution. Let $Z \sim N(0, 1)$. The MGF is given by
\begin{align*}
M_Z(t) = E(e^{tZ}) &= \int_{-\infty}^{\infty} e^{tz} \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\, dz \\
&= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-z^2/2 + tz - t^2/2} e^{t^2/2}\, dz \\
&= e^{t^2/2} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-(z - t)^2/2}\, dz \\
&= e^{t^2/2},
\end{align*}
where the last equality follows because $(2\pi)^{-1/2} e^{-(z - t)^2/2}$ is a normal PDF, and therefore it integrates to 1.

Further, for $\sigma > 0$, define $X = \mu + \sigma Z$; then the MGF of $X$ is given by
\[ M_X(t) = E(e^{tX}) = E(e^{t\mu + t\sigma Z}) = e^{t\mu} E(e^{t\sigma Z}) = e^{t\mu} M_Z(t\sigma) = e^{t\mu + t^2\sigma^2/2}. \]
Note also that $X \sim N(\mu, \sigma^2)$. To show this, write $z = (x - \mu)/\sigma$, and plug this into $f(z; 0, 1)\, dz$ to obtain:
\[ \frac{1}{\sqrt{2\pi}} e^{-(x - \mu)^2/(2\sigma^2)}\, d\!\left( \frac{x - \mu}{\sigma} \right) = \frac{1}{\sqrt{2\pi}} e^{-(x - \mu)^2/(2\sigma^2)} \frac{1}{\sigma}\, dx = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-(x - \mu)^2/(2\sigma^2)}\, dx = f(x; \mu, \sigma^2)\, dx. \]
If $\sigma < 0$, replace it with $-\sigma$. It follows now that the MGF of the $N(\mu, \sigma^2)$ distribution is given by
\[ M_X(t) = e^{t\mu + t^2\sigma^2/2}. \]
We can use $M_X(t)$ to compute the moments of a normal distribution.
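A brief Monte Carlo check (parameter values chosen arbitrarily) that the simulated MGFs of $Z$ and of $X = \mu + \sigma Z$ match the formulas just derived:

    import numpy as np

    rng = np.random.default_rng(7)
    Z = rng.standard_normal(1_000_000)
    mu, sigma = 1.5, 2.0
    X = mu + sigma * Z

    for t in [-0.5, 0.25, 0.5]:
        print(np.mean(np.exp(t * Z)), np.exp(t**2 / 2))                      # M_Z(t) = e^{t^2/2}
        print(np.mean(np.exp(t * X)), np.exp(t * mu + t**2 * sigma**2 / 2))  # M_X(t)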

We have
\[ \frac{d}{dt} M_X(t) = (\mu + t\sigma^2) e^{t\mu + t^2\sigma^2/2}, \qquad \frac{d^2}{dt^2} M_X(t) = (\mu + t\sigma^2)^2 e^{t\mu + t^2\sigma^2/2} + \sigma^2 e^{t\mu + t^2\sigma^2/2}, \]
and therefore
\[ E(X) = \frac{d}{dt} M_X(0) = \mu, \qquad E(X^2) = \frac{d^2}{dt^2} M_X(0) = \mu^2 + \sigma^2, \]
or $\mathrm{Var}(X) = E(X^2) - (EX)^2 = \sigma^2$.

Let $Z_1, \ldots, Z_n$ be independent $N(0, 1)$ random variables. We say that $Z = (Z_1, \ldots, Z_n)'$ is a standard normal random vector. For $t \in \mathbb{R}^n$, the MGF of $Z$ is
\begin{align*}
M_Z(t) = E(e^{t'Z}) &= E(e^{t_1 Z_1 + \ldots + t_n Z_n}) \\
&= \prod_{i=1}^{n} E(e^{t_i Z_i}) \\
&= \prod_{i=1}^{n} e^{t_i^2/2} \\
&= e^{t't/2}, \tag{3}
\end{align*}
where the product of expectations in the second line follows by independence.

Let $\Sigma$ be an $n \times n$ symmetric positive semi-definite matrix. Since $\Sigma$ is symmetric, it admits an eigenvalue decomposition: $\Sigma = C\Lambda C'$, where $\Lambda$ is a diagonal matrix with the eigenvalues of $\Sigma$ on the main diagonal,
\[ \Lambda = \begin{pmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix}, \]
and $C$ is a matrix of eigenvectors such that $C'C = CC' = I_n$. Since $\Sigma$ is positive semi-definite, $\lambda_i \geq 0$ for all $i = 1, \ldots, n$, and we can define a square-root matrix of $\Lambda$ as
\[ \Lambda^{1/2} = \begin{pmatrix} \lambda_1^{1/2} & & & 0 \\ & \lambda_2^{1/2} & & \\ & & \ddots & \\ 0 & & & \lambda_n^{1/2} \end{pmatrix}. \]
The symmetric square-root matrix of $\Sigma$ is defined as
\[ \Sigma^{1/2} = C\Lambda^{1/2}C'. \]
Note that
\[ \Sigma^{1/2}\Sigma^{1/2} = C\Lambda^{1/2}C'C\Lambda^{1/2}C' = C\Lambda^{1/2}\Lambda^{1/2}C' = C\Lambda C' = \Sigma. \]
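A short sketch (the matrix below is an arbitrary symmetric positive semi-definite example) computing the symmetric square root via the eigenvalue decomposition and checking that it squares back to $\Sigma$:

    import numpy as np

    # A symmetric positive semi-definite matrix chosen for the example.
    Sigma = np.array([[4.0, 1.0, 0.5],
                      [1.0, 3.0, 0.2],
                      [0.5, 0.2, 2.0]])

    lam, C = np.linalg.eigh(Sigma)                 # eigenvalues (Lambda) and orthonormal eigenvectors (C)
    Sigma_half = C @ np.diag(np.sqrt(lam)) @ C.T   # symmetric square root C Lambda^{1/2} C'

    print(np.allclose(Sigma_half @ Sigma_half, Sigma))   # True: Sigma^{1/2} Sigma^{1/2} = Sigma
    print(np.allclose(Sigma_half, Sigma_half.T))         # True: the square root is symmetric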

Now let $\mu \in \mathbb{R}^n$, and define
\[ X = \mu + \Sigma^{1/2}Z, \tag{4} \]
where $Z$ is a standard normal random $n$-vector. The MGF of $X$ is
\begin{align*}
M_X(t) = E(e^{t'X}) &= E(e^{t'(\mu + \Sigma^{1/2}Z)}) \\
&= e^{t'\mu} E(e^{t'\Sigma^{1/2}Z}) \\
&= e^{t'\mu} E(e^{(\Sigma^{1/2}t)'Z}) \\
&= e^{t'\mu} e^{(\Sigma^{1/2}t)'(\Sigma^{1/2}t)/2} && \text{by (3)} \\
&= e^{t'\mu + t'\Sigma t/2}.
\end{align*}
We say that $X$ is a normal random vector if its MGF is given by $\exp(t'\mu + t'\Sigma t/2)$. We denote this as $X \sim N(\mu, \Sigma)$. One can show that the joint PDF of $X$ is given by
\[ f(x; \mu, \Sigma) = (2\pi)^{-n/2} (\det \Sigma)^{-1/2} \exp\big( -(x - \mu)'\Sigma^{-1}(x - \mu)/2 \big), \quad \text{for } x \in \mathbb{R}^n. \]
One can also show that for $X \sim N(\mu, \Sigma)$, $E(X) = \mu$ and $\mathrm{Var}(X) = \Sigma$. The last two results can be easily shown using the definition of $X$ in (4). They can also be derived using the MGF of $X$. Since $\partial t'\mu / \partial t = \mu$, $\partial t'\Sigma t / \partial t = (\Sigma + \Sigma')t = 2\Sigma t$, and $\partial \Sigma t / \partial t' = \Sigma$, we obtain:
\begin{align*}
\frac{\partial}{\partial t} M_X(t) &= (\mu + \Sigma t) e^{t'\mu + t'\Sigma t/2}, & E(X) &= \frac{\partial}{\partial t} M_X(0) = \mu; \\
\frac{\partial^2}{\partial t\, \partial t'} M_X(t) &= \Sigma e^{t'\mu + t'\Sigma t/2} + (\mu + \Sigma t)(\mu + \Sigma t)' e^{t'\mu + t'\Sigma t/2}, & E(XX') &= \frac{\partial^2}{\partial t\, \partial t'} M_X(0) = \Sigma + \mu\mu', \\
& & \mathrm{Var}(X) &= E(XX') - E(X)E(X)' = \Sigma.
\end{align*}

Proposition 4. Let $X = (X_1, \ldots, X_n)' \sim N(\mu, \Sigma)$, and suppose that $\mathrm{Cov}(X_i, X_j) = 0$ for $i \neq j$, i.e. $\Sigma$ is a diagonal matrix:
\[ \Sigma = \begin{pmatrix} \sigma_1^2 & 0 & \ldots & 0 \\ 0 & \sigma_2^2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \sigma_n^2 \end{pmatrix}. \]
Then $X_1, \ldots, X_n$ are independent.

Proof. First, for $t \in \mathbb{R}^n$, $t'\Sigma t = t_1^2\sigma_1^2 + \ldots + t_n^2\sigma_n^2$. Hence, the MGF of $X$ becomes
\begin{align*}
M_X(t) &= \exp(t'\mu + t'\Sigma t/2) \\
&= \exp\big(t_1\mu_1 + \ldots + t_n\mu_n + (t_1^2\sigma_1^2 + \ldots + t_n^2\sigma_n^2)/2\big) \\
&= \prod_{i=1}^{n} \exp(t_i\mu_i + t_i^2\sigma_i^2/2),
\end{align*}
which is the MGF of $n$ independent normal random variables with mean $\mu_i$ and variance $\sigma_i^2$, $i = 1, \ldots, n$.

Note that for a standard normal random vector, $\mu = 0$ and $\Sigma = I_n$.
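A simulation sketch (with $\mu$ and $\Sigma$ chosen arbitrarily) of the construction in (4): draws of $X = \mu + \Sigma^{1/2}Z$ have sample mean close to $\mu$ and sample covariance close to $\Sigma$.

    import numpy as np

    rng = np.random.default_rng(8)
    mu = np.array([1.0, -2.0, 0.5])
    Sigma = np.array([[4.0, 1.0, 0.5],
                      [1.0, 3.0, 0.2],
                      [0.5, 0.2, 2.0]])

    lam, C = np.linalg.eigh(Sigma)
    Sigma_half = C @ np.diag(np.sqrt(lam)) @ C.T

    Z = rng.standard_normal((200_000, 3))        # rows are independent standard normal vectors
    X = mu + Z @ Sigma_half                      # X = mu + Sigma^{1/2} Z (Sigma_half is symmetric)

    print(X.mean(axis=0))                        # ~ mu
    print(np.cov(X, rowvar=False).round(2))      # ~ Sigma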

Proposition 5. Let $X \sim N(\mu, \Sigma)$, and define $Y = \alpha + \Gamma X$. Then $Y \sim N(\alpha + \Gamma\mu,\ \Gamma\Sigma\Gamma')$.

Proof. It suffices to show that the MGF of $Y$ is $\exp\big(t'(\alpha + \Gamma\mu) + t'\Gamma\Sigma\Gamma't/2\big)$. We have:
\begin{align*}
E\big(\exp(t'Y)\big) &= E\big(\exp(t'(\alpha + \Gamma X))\big) \\
&= \exp(t'\alpha)\, E\big(\exp(t'\Gamma X)\big) \\
&= \exp(t'\alpha)\, E\big(\exp((\Gamma't)'X)\big) \\
&= \exp(t'\alpha)\, \exp\big((\Gamma't)'\mu + (\Gamma't)'\Sigma(\Gamma't)/2\big) \\
&= \exp\big(t'(\alpha + \Gamma\mu) + t'\Gamma\Sigma\Gamma't/2\big).
\end{align*}

The next result shows that if $X$ is a normal random vector, then the marginal distributions of its elements are also normal.

Corollary 1. Let $X \sim N(\mu, \Sigma)$, partition it as $X = (X_1', X_2')'$, where $X_1 \in \mathbb{R}^{n_1}$ and $X_2 \in \mathbb{R}^{n_2}$ (i.e. $X_1$ and $X_2$ are jointly normal), and partition $\mu$ and $\Sigma$ accordingly:
\[ X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim N\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right). \]
Then $X_1 \sim N(\mu_1, \Sigma_{11})$.

Proof. Take $A = \begin{pmatrix} I_{n_1} & 0_{n_1 \times n_2} \end{pmatrix}$, so that $X_1 = AX$. By Proposition 5, $X_1 \sim N(A\mu, A\Sigma A')$; however,
\[ A\mu = \begin{pmatrix} I_{n_1} & 0_{n_1 \times n_2} \end{pmatrix} \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} = \mu_1, \qquad
A\Sigma A' = \begin{pmatrix} I_{n_1} & 0_{n_1 \times n_2} \end{pmatrix} \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \begin{pmatrix} I_{n_1} \\ 0_{n_2 \times n_1} \end{pmatrix} = \Sigma_{11}. \]
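A numerical sketch of Proposition 5 (the values of $\mu$, $\Sigma$, $\alpha$ and $\Gamma$ are arbitrary): the sample mean and covariance of $Y = \alpha + \Gamma X$ approximate $\alpha + \Gamma\mu$ and $\Gamma\Sigma\Gamma'$.

    import numpy as np

    rng = np.random.default_rng(9)
    mu = np.array([1.0, -1.0, 2.0])
    Sigma = np.array([[2.0, 0.5, 0.3],
                      [0.5, 1.5, 0.1],
                      [0.3, 0.1, 1.0]])
    alpha = np.array([0.5, -2.0])
    Gamma = np.array([[1.0, 2.0, 0.0],
                      [0.0, 1.0, -1.0]])

    X = rng.multivariate_normal(mu, Sigma, size=300_000)   # draws of X ~ N(mu, Sigma)
    Y = alpha + X @ Gamma.T                                 # Y = alpha + Gamma X, row by row

    print(Y.mean(axis=0), alpha + Gamma @ mu)               # sample mean ~ alpha + Gamma mu
    print(np.cov(Y, rowvar=False).round(2))
    print((Gamma @ Sigma @ Gamma.T).round(2))               # ~ Gamma Sigma Gamma'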

We will show next that if $X$ and $Y$ are jointly normal, then the conditional distribution of $Y$ given $X$ is also normal.

Proposition 6. Let
\[ \begin{pmatrix} X \\ Y \end{pmatrix} \sim N\left( \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix} \right), \]
where $\Sigma_{XX}$ is positive definite. Then $Y \mid X \sim N\big(\mu_{Y|X}(X),\ \Sigma_{Y|X}\big)$, where
\[ \mu_{Y|X}(X) = \mu_Y + \Sigma_{YX}\Sigma_{XX}^{-1}(X - \mu_X) \quad \text{(a vector-valued function of } X\text{)}, \]
\[ \Sigma_{Y|X} = \Sigma_{YY} - \Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY} \quad \text{(a fixed matrix)}. \]

Proof. Define
\[ V = Y - \Sigma_{YX}\Sigma_{XX}^{-1}X = \begin{pmatrix} -\Sigma_{YX}\Sigma_{XX}^{-1} & I \end{pmatrix} \begin{pmatrix} X \\ Y \end{pmatrix}. \]
We have
\begin{align*}
\mathrm{Cov}(V, X) &= \mathrm{Cov}(Y - \Sigma_{YX}\Sigma_{XX}^{-1}X,\ X) \\
&= \mathrm{Cov}(Y, X) - \Sigma_{YX}\Sigma_{XX}^{-1}\mathrm{Cov}(X, X) \\
&= \Sigma_{YX} - \Sigma_{YX} = 0.
\end{align*}
Hence, $V$ and $X$ are uncorrelated. Next, since we can write
\[ \begin{pmatrix} V \\ X \end{pmatrix} = \begin{pmatrix} -\Sigma_{YX}\Sigma_{XX}^{-1} & I \\ I & 0 \end{pmatrix} \begin{pmatrix} X \\ Y \end{pmatrix}, \]
it follows that $V$ and $X$ are jointly normal. Hence $V$ and $X$ are independent, and $V =_d V \mid X \sim N(E(V), \mathrm{Var}(V))$. Next,
\begin{align*}
E(V) &= \mu_Y - \Sigma_{YX}\Sigma_{XX}^{-1}\mu_X, \\
\mathrm{Var}(V) &= \mathrm{Var}(Y) + \Sigma_{YX}\Sigma_{XX}^{-1}\mathrm{Var}(X)\Sigma_{XX}^{-1}\Sigma_{XY} - \Sigma_{YX}\Sigma_{XX}^{-1}\mathrm{Cov}(X, Y) - \mathrm{Cov}(Y, X)\Sigma_{XX}^{-1}\Sigma_{XY} \\
&= \Sigma_{YY} - \Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY} = \Sigma_{Y|X}.
\end{align*}
Thus, $V \mid X \sim N\big(\mu_Y - \Sigma_{YX}\Sigma_{XX}^{-1}\mu_X,\ \Sigma_{Y|X}\big)$. This in turn implies that
\[ V + \Sigma_{YX}\Sigma_{XX}^{-1}X \,\big|\, X \sim N\big(\mu_Y + \Sigma_{YX}\Sigma_{XX}^{-1}(X - \mu_X),\ \Sigma_{Y|X}\big) = N\big(\mu_{Y|X}(X),\ \Sigma_{Y|X}\big). \]
The desired result follows because, by construction, $Y = V + \Sigma_{YX}\Sigma_{XX}^{-1}X$.

Note that, while the conditional mean of $Y$ is a function of the conditioning variable $X$, the conditional variance does not depend on $X$. Furthermore, in the multivariate normal case, the conditional expectation is a linear function of $X$, i.e. we can write $E(Y \mid X) = \alpha + BX$, where $\alpha = \mu_Y - \Sigma_{YX}\Sigma_{XX}^{-1}\mu_X$ and $B = \Sigma_{YX}\Sigma_{XX}^{-1}$. In particular, when $Y$ is a random variable (scalar), we can write
\[ E(Y \mid X) = \alpha + X'\beta, \]
where $\beta = \Sigma_{XX}^{-1}\Sigma_{XY} = \big(\mathrm{Var}(X)\big)^{-1}\mathrm{Cov}(X, Y)$.
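As an illustrative check (all numerical values below are arbitrary), the conditional-mean coefficients of Proposition 6 coincide with the population regression coefficients, which can be approximated by least squares on simulated jointly normal data:

    import numpy as np

    rng = np.random.default_rng(10)
    mu = np.array([1.0, 2.0, -1.0])          # joint mean of (X1, X2, Y)
    Sigma = np.array([[2.0, 0.4, 0.8],
                      [0.4, 1.0, 0.3],
                      [0.8, 0.3, 1.5]])
    draws = rng.multivariate_normal(mu, Sigma, size=400_000)
    X, Y = draws[:, :2], draws[:, 2]

    # Population quantities from Proposition 6 (Y scalar, X = (X1, X2)')
    S_XX, S_XY = Sigma[:2, :2], Sigma[:2, 2]
    beta = np.linalg.solve(S_XX, S_XY)                    # Sigma_XX^{-1} Sigma_XY
    alpha = mu[2] - beta @ mu[:2]
    cond_var = Sigma[2, 2] - S_XY @ beta                  # Sigma_YY - Sigma_YX Sigma_XX^{-1} Sigma_XY

    # Least-squares fit of Y on (1, X) recovers (alpha, beta); residual variance ~ cond_var
    design = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    resid = Y - design @ coef
    print(alpha, beta, cond_var)
    print(coef, resid.var())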

Other useful statistical distributions

The following distributions are related to the normal and used extensively in statistical inference.

Chi-square distribution. Suppose that $Z \sim N(0, I_n)$, so the elements of $Z$, namely $Z_1, Z_2, \ldots, Z_n$, are independent identically distributed (iid) standard normal random variables. Then $X = Z'Z = \sum_{i=1}^{n} Z_i^2$ has a chi-square distribution with $n$ degrees of freedom. It is conventional to write $X \sim \chi^2_n$. The mean of the $\chi^2_n$ distribution is $n$ and the variance is $2n$. If $X_1 \sim \chi^2_{n_1}$, $X_2 \sim \chi^2_{n_2}$ and they are independent, then $X_1 + X_2 \sim \chi^2_{n_1 + n_2}$.

t distribution. Let $Z \sim N(0, 1)$ and $X \sim \chi^2_n$ be independent; then $Y = Z/\sqrt{X/n}$ has a t distribution with $n$ degrees of freedom, $Y \sim t_n$. For large $n$, the density of $t_n$ approaches that of $N(0, 1)$. The mean of $t_n$ does not exist for $n = 1$, and is zero for $n > 1$. The variance of the $t_n$ distribution is $n/(n - 2)$ for $n > 2$.

F distribution. Let $X_1 \sim \chi^2_{n_1}$ and $X_2 \sim \chi^2_{n_2}$ be independent; then $Y = \frac{X_1/n_1}{X_2/n_2}$ has an F distribution with $(n_1, n_2)$ degrees of freedom, $Y \sim F_{n_1, n_2}$. Note that $F_{1,n} = t_n^2$.
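A closing simulation sketch (with $n = 5$ degrees of freedom, chosen arbitrarily) of the constructions above and of the stated moments:

    import numpy as np

    rng = np.random.default_rng(11)
    n = 5
    Z = rng.standard_normal((500_000, n))
    chi2 = (Z**2).sum(axis=1)                 # chi-square with n degrees of freedom
    print(chi2.mean(), chi2.var())            # ~ n and ~ 2n

    t = rng.standard_normal(500_000) / np.sqrt(chi2 / n)    # t distribution with n d.o.f.
    print(t.mean(), t.var(), n / (n - 2))                   # mean ~ 0, variance ~ n/(n-2)

    F = (rng.standard_normal(500_000)**2 / 1) / (chi2 / n)  # F_{1,n} as (chi2_1/1)/(chi2_n/n)
    print(F.mean(), (t**2).mean())                          # F_{1,n} =_d t_n^2 (similar means)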