STAT/MTHE 353: 5 Moment Generating Functions and Multivariate Normal Distribution


T. Linder, Queen's University, Winter 2017

Moment Generating Function

Definition 1. Let X = (X_1, ..., X_n)^T be a random vector and t = (t_1, ..., t_n)^T ∈ R^n. The moment generating function (MGF) of X is defined by

    M_X(t) = E[e^{t^T X}]

for all t for which the expectation exists (i.e., is finite).

Remarks:

- M_X(t) = E[e^{∑_{i=1}^n t_i X_i}].
- For 0 = (0, ..., 0)^T we have M_X(0) = 1.
- If X is a discrete random vector with finitely many values, then M_X(t) = E[e^{t^T X}] is finite for all t ∈ R^n.
- We will always assume that the distribution of X is such that M_X(t) is finite for all t ∈ (−t_0, t_0)^n for some t_0 > 0.

The single most important property of the MGF is that it uniquely determines the distribution of a random vector:

Theorem 1. Assume M_X(t) and M_Y(t) are the MGFs of the random vectors X and Y and are such that M_X(t) = M_Y(t) for all t ∈ (−t_0, t_0)^n. Then

    F_X(z) = F_Y(z) for all z ∈ R^n,

where F_X and F_Y are the joint cdfs of X and Y.

Remarks:

- F_X(z) = F_Y(z) for all z ∈ R^n clearly implies M_X(t) = M_Y(t). Thus M_X(t) = M_Y(t) ⟺ F_X(z) = F_Y(z).
- Most often we will use the theorem for random variables instead of random vectors. In this case, M_X(t) = M_Y(t) for all t ∈ (−t_0, t_0) implies F_X(z) = F_Y(z) for all z ∈ R.

Connection with moments

Let k_1, ..., k_n be nonnegative integers and k = k_1 + ... + k_n. Then

    ∂^k M_X(t) / (∂t_1^{k_1} ... ∂t_n^{k_n}) = E[X_1^{k_1} ... X_n^{k_n} e^{t_1 X_1 + ... + t_n X_n}]

Setting t = 0 = (0, ..., 0)^T, we get

    ∂^k M_X(t) / (∂t_1^{k_1} ... ∂t_n^{k_n}) |_{t=0} = E[X_1^{k_1} ... X_n^{k_n}]

For a (scalar) random variable X we obtain the kth moment of X:

    d^k/dt^k M_X(t) |_{t=0} = E[X^k]
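The moment formula above can be checked numerically. A minimal sketch, assuming the standard normal MGF M(t) = e^{t²/2} (derived later in these notes): central finite differences of M at t = 0 should reproduce E[X] = 0 and E[X²] = 1.

```python
import math

# MGF of a standard normal (closed form derived later in these notes).
def M(t):
    return math.exp(t * t / 2.0)

h = 1e-4  # step size for central finite differences
first = (M(h) - M(-h)) / (2 * h)               # approximates M'(0)  = E[X]   = 0
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2    # approximates M''(0) = E[X^2] = 1
print(first, second)
```

The step size is a judgment call: too large biases the difference quotient, too small amplifies floating-point cancellation.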

Theorem 2. Assume X_1, ..., X_m are independent random vectors in R^n and let X = X_1 + ... + X_m. Then

    M_X(t) = ∏_{i=1}^m M_{X_i}(t)

Proof:

    M_X(t) = E[e^{t^T X}] = E[e^{t^T (X_1 + ... + X_m)}] = E[e^{t^T X_1} ... e^{t^T X_m}]
           = E[e^{t^T X_1}] ... E[e^{t^T X_m}] = M_{X_1}(t) ... M_{X_m}(t)

Example: Find the MGF for X ~ Gamma(r, λ) and for X_1 + ... + X_m, where the X_i are independent and X_i ~ Gamma(r_i, λ).

Example: Find the MGF for X ~ Poisson(λ) and for X_1 + ... + X_m, where the X_i are independent and X_i ~ Poisson(λ_i). Also, use the MGF to find E(X), E(X^2), and Var(X).

Note: This theorem gives us a powerful tool for determining the distribution of the sum of independent random variables.

Theorem 3. Assume X is a random vector in R^n, A is an m × n real matrix, and b ∈ R^m. Then the MGF of Y = AX + b is given at t ∈ R^m by

    M_Y(t) = e^{t^T b} M_X(A^T t)

Proof:

    M_Y(t) = E[e^{t^T Y}] = E[e^{t^T (AX + b)}] = e^{t^T b} E[e^{t^T AX}] = e^{t^T b} E[e^{(A^T t)^T X}] = e^{t^T b} M_X(A^T t)

Note: In the scalar case Y = aX + b we obtain M_Y(t) = e^{tb} M_X(at).

Applications to the Normal Distribution

Let X ~ N(0, 1). Then

    M_X(t) = E(e^{tX}) = ∫ e^{tx} (1/√(2π)) e^{−x²/2} dx
           = (1/√(2π)) ∫ e^{−(x² − 2tx)/2} dx
           = e^{t²/2} ∫ (1/√(2π)) e^{−(x−t)²/2} dx        (the integrand is the N(t, 1) pdf)
           = e^{t²/2}

We obtain that for all t ∈ R,

    M_X(t) = e^{t²/2}
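Theorem 2 can be sanity-checked by Monte Carlo. A sketch with arbitrary example distributions (the parameters below are illustrative, not from the notes): for independent X ~ N(1, 1) and Y ~ N(−2, 4), a sample average of e^{t(X+Y)} should match the product of the two closed-form normal MGFs.

```python
import math
import random

random.seed(0)

# X ~ N(1, 1), Y ~ N(-2, 4) independent; S = X + Y.  Theorem 2 says
# M_S(t) = M_X(t) * M_Y(t); compare both sides at t = 0.3.
t = 0.3
n = 200_000
mc = sum(math.exp(t * (random.gauss(1, 1) + random.gauss(-2, 2)))
         for _ in range(n)) / n                        # Monte Carlo E[e^{tS}]
exact = math.exp(t * 1 + t * t * 1 / 2) * math.exp(t * (-2) + t * t * 4 / 2)
print(mc, exact)
```

Note that `random.gauss` takes a standard deviation, so N(−2, 4) is `gauss(-2, 2)`.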

Moments of a standard normal random variable

Recall the power series expansion e^z = ∑_{k≥0} z^k / k!, valid for all z ∈ R. Applying this to z = tX,

    M_X(t) = E(e^{tX}) = E[∑_{k≥0} (tX)^k / k!] = ∑_{k≥0} E[(tX)^k] / k! = ∑_{k≥0} E[X^k] t^k / k!

and to z = t²/2,

    M_X(t) = e^{t²/2} = ∑_{i≥0} (t²/2)^i / i!

Matching the coefficients of t^k, for k = 1, 2, ... we obtain

    E(X^k) = k! / (2^{k/2} (k/2)!)  if k is even,    E(X^k) = 0  if k is odd.

Sum of independent normal random variables

Recall that if X ~ N(0, 1) and Y = σX + µ, where σ > 0, then Y ~ N(µ, σ²). Thus the MGF of a N(µ, σ²) random variable is

    M_Y(t) = e^{tµ} M_X(σt) = e^{tµ} e^{(σt)²/2} = e^{tµ + t²σ²/2}

Let X_1, ..., X_m be independent r.v.'s with X_i ~ N(µ_i, σ_i²) and set X = X_1 + ... + X_m. Then

    M_X(t) = ∏_{i=1}^m M_{X_i}(t) = ∏_{i=1}^m e^{tµ_i + t²σ_i²/2} = e^{t ∑_{i=1}^m µ_i + t² (∑_{i=1}^m σ_i²)/2}

This implies

    X ~ N( ∑_{i=1}^m µ_i, ∑_{i=1}^m σ_i² )

i.e., X_1 + ... + X_m is normal with mean µ_X = ∑_{i=1}^m µ_i and variance σ_X² = ∑_{i=1}^m σ_i².

Multivariate Normal Distributions

Linear Algebra Review

Recall that an n × n real matrix C is called

- nonnegative definite if it is symmetric and x^T C x ≥ 0 for all x ∈ R^n, and
- positive definite if it is symmetric and x^T C x > 0 for all x ∈ R^n such that x ≠ 0.

Let A be an arbitrary n × n real matrix. Then C = A^T A is nonnegative definite. If A is nonsingular (invertible), then C is positive definite.

Proof: A^T A is symmetric since (A^T A)^T = A^T (A^T)^T = A^T A. It is nonnegative definite since

    x^T (A^T A) x = (Ax)^T (Ax) = ‖Ax‖² ≥ 0

If A is nonsingular and x ≠ 0, then Ax ≠ 0 and the inequality is strict.

For any nonnegative definite n × n matrix C the following hold:

(1) C has n nonnegative eigenvalues λ_1, ..., λ_n (counting multiplicities) and corresponding orthogonal unit-length eigenvectors b_1, ..., b_n:

    C b_i = λ_i b_i, i = 1, ..., n, where b_i^T b_i = 1, i = 1, ..., n, and b_i^T b_j = 0 if i ≠ j.

(2) (Spectral decomposition) C can be written as

    C = B D B^T

where D = diag(λ_1, ..., λ_n) is the diagonal matrix of the eigenvalues of C, and B is the orthogonal matrix whose ith column is b_i, i.e., B = [b_1 ... b_n].

(3) C is positive definite ⟺ C is nonsingular ⟺ all the eigenvalues λ_i are positive.
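The spectral decomposition can be made concrete for a 2 × 2 symmetric matrix, where the eigenvalues have a closed form. A sketch (the matrix entries are arbitrary, and the eigenvector formula used assumes the off-diagonal entry b is nonzero): build B and D, reassemble B D B^T, and compare with C.

```python
import math

# Symmetric 2x2 matrix C = [[a, b], [b, c]] with a = c = 2, b = 1.
a, b, c = 2.0, 1.0, 2.0
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc      # eigenvalues (here 3 and 1)

# Unit eigenvector for lam1: (C - lam1 I) v = 0 gives v = (b, lam1 - a).
v = (b, lam1 - a)
norm = math.hypot(*v)
b1 = (v[0] / norm, v[1] / norm)
b2 = (-b1[1], b1[0])                           # orthogonal unit vector

# Reassemble B D B^T entry by entry and compare with C.
B = [[b1[0], b2[0]], [b1[1], b2[1]]]
D = [[lam1, 0.0], [0.0, lam2]]
BD = [[B[i][0] * D[0][j] + B[i][1] * D[1][j] for j in range(2)] for i in range(2)]
C_rec = [[BD[i][0] * B[j][0] + BD[i][1] * B[j][1] for j in range(2)] for i in range(2)]
print(C_rec)  # recovers [[2, 1], [1, 2]] up to rounding
```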

(4) C has a unique nonnegative definite square root C^{1/2}, i.e., there exists a unique nonnegative definite A such that C = AA.

Proof: We only prove the existence of A. Let D^{1/2} = diag(λ_1^{1/2}, ..., λ_n^{1/2}) and note that D^{1/2} D^{1/2} = D. Let A = B D^{1/2} B^T. Then A is nonnegative definite and

    AA = (B D^{1/2} B^T)(B D^{1/2} B^T) = B D^{1/2} (B^T B) D^{1/2} B^T = B D^{1/2} D^{1/2} B^T = B D B^T = C

Remarks:

- If C is positive definite, then so is A = C^{1/2}.
- If we don't require that A be nonnegative definite, then in general there are infinitely many solutions A for AA^T = C.

Lemma 4. If Σ is the covariance matrix of some random vector X = (X_1, ..., X_n)^T, then it is nonnegative definite.

Proof: We know that Σ = Cov(X) is symmetric. Let b ∈ R^n be arbitrary. Then

    b^T Σ b = b^T Cov(X) b = Cov(b^T X) = Var(b^T X) ≥ 0

so Σ is nonnegative definite.

Remark: It can be shown that an n × n matrix Σ is nonnegative definite if and only if there exists a random vector X = (X_1, ..., X_n)^T such that Σ = Cov(X).

Defining the Multivariate Normal Distribution

Let Z_1, ..., Z_n be independent r.v.'s with Z_i ~ N(0, 1). The multivariate MGF of Z = (Z_1, ..., Z_n)^T is

    M_Z(t) = E[e^{t^T Z}] = E[e^{∑_{i=1}^n t_i Z_i}] = ∏_{i=1}^n E[e^{t_i Z_i}] = ∏_{i=1}^n e^{t_i²/2} = e^{(1/2) t^T t}

Now let µ ∈ R^n and let A be an n × n real matrix. Then the MGF of X = AZ + µ is

    M_X(t) = e^{t^T µ} M_Z(A^T t) = e^{t^T µ} e^{(1/2)(A^T t)^T (A^T t)} = e^{t^T µ} e^{(1/2) t^T A A^T t} = e^{t^T µ + (1/2) t^T Σ t}

where Σ = AA^T. Note that Σ is nonnegative definite.

Definition. Let µ ∈ R^n and let Σ be an n × n nonnegative definite matrix. A random vector X = (X_1, ..., X_n)^T is said to have a multivariate normal distribution with parameters µ and Σ if its multivariate MGF is

    M_X(t) = e^{t^T µ + (1/2) t^T Σ t}

Notation: X ~ N(µ, Σ).

Remarks:

- If Z = (Z_1, ..., Z_n)^T with Z_i ~ N(0, 1), i = 1, ..., n, then Z ~ N(0, I), where I is the n × n identity matrix.
- We saw that if Z ~ N(0, I), then X = AZ + µ ~ N(µ, Σ), where Σ = AA^T.
- One can show the following: X ~ N(µ, Σ) if and only if X = AZ + µ for a random n-vector Z ~ N(0, I) and some n × n matrix A with AA^T = Σ.
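The representation X = AZ + µ with AA^T = Σ is also how multivariate normal samples are generated in practice. A sketch for n = 2 with an arbitrary example Σ, using the lower-triangular (Cholesky) choice of A, which is one of the many matrices satisfying AA^T = Σ:

```python
import random

random.seed(1)

# Target: X ~ N(mu, Sigma) with Sigma = [[4, 2], [2, 3]], mu = (1, -1).
s11, s12, s22 = 4.0, 2.0, 3.0
a11 = s11 ** 0.5               # 2x2 Cholesky factor of Sigma:
a21 = s12 / a11                # A = [[a11, 0], [a21, a22]], A A^T = Sigma
a22 = (s22 - a21 * a21) ** 0.5
mu = (1.0, -1.0)

n = 100_000
xs = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)   # Z ~ N(0, I)
    xs.append((mu[0] + a11 * z1, mu[1] + a21 * z1 + a22 * z2))

# Empirical mean and covariance entry should be close to mu and Sigma.
m1 = sum(x for x, _ in xs) / n
m2 = sum(y for _, y in xs) / n
c12 = sum((x - m1) * (y - m2) for x, y in xs) / n
print(m1, c12)  # near 1 and near 2
```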

Mean and covariance for the multivariate normal distribution

Consider first Z ~ N(0, I), i.e., Z = (Z_1, ..., Z_n)^T, where the Z_i are independent N(0, 1) random variables. Then

    E(Z) = (E(Z_1), ..., E(Z_n))^T = (0, ..., 0)^T

and

    E[(Z_i − E(Z_i))(Z_j − E(Z_j))] = E(Z_i Z_j) = 1 if i = j,  0 if i ≠ j

Thus E(Z) = 0 and Cov(Z) = I.

If X ~ N(µ, Σ), then X = AZ + µ for a random n-vector Z ~ N(0, I) and some n × n matrix A with AA^T = Σ. We have

    E(AZ + µ) = A E(Z) + µ = µ

Also,

    Cov(AZ + µ) = Cov(AZ) = A Cov(Z) A^T = AA^T = Σ

Thus E(X) = µ and Cov(X) = Σ.

Joint pdf for the multivariate normal distribution

Lemma 5. If a random vector X = (X_1, ..., X_n)^T has a covariance matrix Σ that is not of full rank (i.e., Σ is singular), then X does not have a joint pdf.

Proof sketch: If Σ is singular, then there exists b ∈ R^n such that b ≠ 0 and Σb = 0. Consider the random variable b^T X = ∑_{i=1}^n b_i X_i:

    Var(b^T X) = Cov(b^T X) = b^T Cov(X) b = b^T Σ b = 0

Therefore P(b^T X = c) = 1 for some constant c. If X had a joint pdf f(x), then for B = {x : b^T x = c} we should have

    1 = P(b^T X = c) = P(X ∈ B) = ∫_B f(x_1, ..., x_n) dx_1 ... dx_n

But this is impossible since B is an (n−1)-dimensional hyperplane whose n-dimensional volume is zero, so the integral must be zero.

Theorem 6. If X = (X_1, ..., X_n)^T ~ N(µ, Σ), where Σ is nonsingular, then X has a joint pdf given by

    f_X(x) = 1 / ( (2π)^{n/2} (det Σ)^{1/2} ) e^{−(1/2)(x−µ)^T Σ^{-1} (x−µ)},   x ∈ R^n

Proof: We know that X = AZ + µ, where Z = (Z_1, ..., Z_n)^T ~ N(0, I) and A is an n × n matrix such that AA^T = Σ. Since Σ is nonsingular, A must be nonsingular with inverse A^{-1}. Thus the mapping h(z) = Az + µ is invertible with inverse g(x) = A^{-1}(x − µ), whose Jacobian is

    J_g(x) = det A^{-1}

By the multivariate transformation theorem,

    f_X(x) = f_Z(g(x)) |J_g(x)| = f_Z(A^{-1}(x − µ)) |det A^{-1}|
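Lemma 5 can be illustrated directly. A sketch with a concrete degenerate vector (my own example, not from the notes): X = (Z, 2Z) has the singular covariance matrix [[1, 2], [2, 4]], and b = (2, −1) satisfies Σb = 0, so b^T X is identically zero — all the probability mass lies on a line in R².

```python
import random

random.seed(2)

# X = (Z, 2Z) for Z ~ N(0, 1): Cov(X) = [[1, 2], [2, 4]] is singular.
# b = (2, -1) satisfies Cov(X) b = 0, and b^T X = 2*X1 - X2 = 0 always.
vals = []
for _ in range(5):
    z = random.gauss(0, 1)
    x1, x2 = z, 2 * z
    vals.append(2 * x1 - x2)
print(vals)  # [0.0, 0.0, 0.0, 0.0, 0.0]
```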

Proof cont'd: Since Z = (Z_1, ..., Z_n)^T, where the Z_i are independent N(0, 1) random variables, we have

    f_Z(z) = ∏_{i=1}^n (1/√(2π)) e^{−z_i²/2} = (2π)^{−n/2} e^{−(1/2) ∑_{i=1}^n z_i²} = (2π)^{−n/2} e^{−(1/2) z^T z}

and so we get

    f_X(x) = f_Z(A^{-1}(x−µ)) |det A^{-1}|
           = (2π)^{−n/2} e^{−(1/2)(A^{-1}(x−µ))^T (A^{-1}(x−µ))} / |det A|
           = (2π)^{−n/2} e^{−(1/2)(x−µ)^T (A^{-1})^T A^{-1} (x−µ)} / |det A|
           = 1 / ( (2π)^{n/2} (det Σ)^{1/2} ) e^{−(1/2)(x−µ)^T Σ^{-1}(x−µ)}

since |det A| = (det Σ)^{1/2} and (A^{-1})^T A^{-1} = Σ^{-1} (exercise!).

Special case: bivariate normal

For n = 2 we have

    µ = (µ_1, µ_2)^T,    Σ = [ σ_1²      ρσ_1σ_2 ]
                             [ ρσ_1σ_2   σ_2²    ]

where µ_i = E(X_i), σ_i² = Var(X_i), i = 1, 2, and

    ρ = Corr(X_1, X_2) = Cov(X_1, X_2) / (σ_1 σ_2)

Thus the bivariate normal distribution is determined by five scalar parameters: µ_1, µ_2, σ_1², σ_2², and ρ.

Σ is positive definite ⟺ Σ is invertible ⟺ det Σ > 0:

    det Σ = σ_1² σ_2² (1 − ρ²) > 0 ⟺ ρ² < 1 and σ_1², σ_2² > 0

so a bivariate normal random vector (X_1, X_2) has a pdf if and only if the components X_1 and X_2 have positive variances and ρ² < 1.

We have

    Σ^{-1} = 1 / (σ_1² σ_2² (1 − ρ²)) [ σ_2²       −ρσ_1σ_2 ]
                                      [ −ρσ_1σ_2   σ_1²     ]

and expanding the quadratic form,

    (x − µ)^T Σ^{-1} (x − µ) = 1/(1 − ρ²) [ (x_1−µ_1)²/σ_1² + (x_2−µ_2)²/σ_2² − 2ρ(x_1−µ_1)(x_2−µ_2)/(σ_1σ_2) ]

Thus the joint pdf of (X_1, X_2)^T ~ N(µ, Σ) is

    f(x_1, x_2) = 1/(2π σ_1 σ_2 √(1−ρ²)) exp{ −1/(2(1−ρ²)) [ (x_1−µ_1)²/σ_1² + (x_2−µ_2)²/σ_2² − 2ρ(x_1−µ_1)(x_2−µ_2)/(σ_1σ_2) ] }

Remark: If ρ = 0, then

    f(x_1, x_2) = (1/(√(2π) σ_1)) e^{−(x_1−µ_1)²/(2σ_1²)} · (1/(√(2π) σ_2)) e^{−(x_2−µ_2)²/(2σ_2²)} = f_{X_1}(x_1) f_{X_2}(x_2)

Therefore X_1 and X_2 are independent. It is also easy to see that f(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) for all x_1 and x_2 implies ρ = 0. Thus we obtain:

Two jointly normal random variables X_1 and X_2 are independent if and only if they are uncorrelated.
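The ρ = 0 factorization is easy to verify numerically. A sketch that codes the bivariate pdf formula above and checks it against the product of the two marginal densities at an arbitrary point (the evaluation point and parameters are my own choices):

```python
import math

# Bivariate normal pdf, transcribed from the formula above.
def bvn_pdf(x1, x2, m1, m2, s1, s2, rho):
    q = ((x1 - m1) ** 2 / s1 ** 2 + (x2 - m2) ** 2 / s2 ** 2
         - 2 * rho * (x1 - m1) * (x2 - m2) / (s1 * s2))
    return math.exp(-q / (2 * (1 - rho ** 2))) / (
        2 * math.pi * s1 * s2 * math.sqrt(1 - rho ** 2))

# Univariate N(m, s^2) pdf.
def norm_pdf(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)

joint = bvn_pdf(0.5, -0.3, 0.0, 0.0, 1.0, 2.0, 0.0)   # rho = 0
prod = norm_pdf(0.5, 0.0, 1.0) * norm_pdf(-0.3, 0.0, 2.0)
print(joint, prod)  # the two values agree
```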

In general, the following important facts can be proved using the multivariate MGF:

(i) If X = (X_1, ..., X_n)^T ~ N(µ, Σ), then X_1, X_2, ..., X_n are independent if and only if they are uncorrelated, i.e., Cov(X_i, X_j) = 0 if i ≠ j, i.e., Σ is a diagonal matrix.

(ii) Assume X = (X_1, ..., X_n)^T ~ N(µ, Σ) and let

    X^(1) = (X_1, ..., X_k)^T,    X^(2) = (X_{k+1}, ..., X_n)^T

Then X^(1) and X^(2) are independent if and only if Cov(X^(1), X^(2)) = 0_{k×(n−k)}, the k × (n−k) matrix of zeros, i.e., Σ can be partitioned as

    Σ = [ Σ_1           0_{k×(n−k)} ]
        [ 0_{(n−k)×k}   Σ_2         ]

where Σ_1 = Cov(X^(1)) and Σ_2 = Cov(X^(2)).

Marginals of multivariate normal distributions

Let X = (X_1, ..., X_n)^T ~ N(µ, Σ). If A is an m × n matrix and b ∈ R^m, then Y = AX + b is a random m-vector. Its MGF at t ∈ R^m is

    M_Y(t) = e^{t^T b} M_X(A^T t)

Since M_X(s) = e^{s^T µ + (1/2) s^T Σ s} for all s ∈ R^n, we obtain

    M_Y(t) = e^{t^T b} e^{(A^T t)^T µ + (1/2)(A^T t)^T Σ (A^T t)} = e^{t^T (b + Aµ) + (1/2) t^T AΣA^T t}

This means that Y ~ N(b + Aµ, AΣA^T), i.e., Y is multivariate normal with mean b + Aµ and covariance AΣA^T.

Example: Let a_1, ..., a_n ∈ R and determine the distribution of Y = a_1 X_1 + ... + a_n X_n.

For some 1 ≤ m < n let {i_1, ..., i_m} ⊂ {1, ..., n} be such that i_1 < i_2 < ... < i_m. Let e_j = (0, ..., 0, 1, 0, ..., 0)^T be the jth unit vector in R^n and define the m × n matrix A by

    A = [ e_{i_1}^T ; e_{i_2}^T ; ... ; e_{i_m}^T ]

(the jth row of A is e_{i_j}^T). Then

    AX = (X_{i_1}, ..., X_{i_m})^T

Thus (X_{i_1}, ..., X_{i_m})^T ~ N(Aµ, AΣA^T). Note the following:

    Aµ = (µ_{i_1}, ..., µ_{i_m})^T

and the (j, k)th entry of AΣA^T is

    (AΣA^T)_{jk} = (A (i_k th column of Σ))_j = Σ_{i_j i_k} = Cov(X_{i_j}, X_{i_k})

Thus if X = (X_1, ..., X_n)^T ~ N(µ, Σ), then (X_{i_1}, ..., X_{i_m})^T is multivariate normal whose mean and covariance are obtained by picking out the corresponding elements of µ and Σ.

Special case: For m = 1 we obtain that X_i ~ N(µ_i, σ_i²), where µ_i = E(X_i) and σ_i² = Var(X_i), for all i = 1, ..., n.
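For the example Y = a_1 X_1 + ... + a_n X_n, the result above says Y ~ N(a^T µ, a^T Σ a). A sketch computing those two parameters for n = 2 with arbitrary illustrative numbers:

```python
# Y = a^T X for X ~ N(mu, Sigma) is N(a^T mu, a^T Sigma a).
# Example values (arbitrary, for illustration), n = 2:
mu = [1.0, 2.0]
Sigma = [[2.0, 0.5], [0.5, 1.0]]
a = [3.0, -1.0]

mean_Y = a[0] * mu[0] + a[1] * mu[1]                     # a^T mu
var_Y = sum(a[i] * Sigma[i][j] * a[j]                    # a^T Sigma a
            for i in range(2) for j in range(2))
print(mean_Y, var_Y)  # 1.0 16.0
```

So here Y ~ N(1, 16); note the cross terms 2 a_1 a_2 Σ_{12} contribute to the variance even though Y is one-dimensional.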

Conditional distributions

Let X = (X_1, ..., X_n)^T ~ N(µ, Σ) and for 1 ≤ m < n define

    X^(1) = (X_1, ..., X_m)^T,    X^(2) = (X_{m+1}, ..., X_n)^T

We know that X^(1) ~ N(µ^(1), Σ_11) and X^(2) ~ N(µ^(2), Σ_22), where µ^(i) = E(X^(i)) and Σ_ii = Cov(X^(i)), i = 1, 2. Then µ and Σ can be partitioned as

    µ = [ µ^(1) ]     Σ = [ Σ_11   Σ_12 ]
        [ µ^(2) ],        [ Σ_21   Σ_22 ]

where Σ_ij = Cov(X^(i), X^(j)), i, j = 1, 2. Note that Σ_11 is m × m, Σ_22 is (n−m) × (n−m), Σ_12 is m × (n−m), and Σ_21 is (n−m) × m. Also, Σ_12 = Σ_21^T.

We assume that Σ is nonsingular and we want to determine the conditional distribution of X^(2) given X^(1) = x^(1).

Recall that X = AZ + µ for some Z = (Z_1, ..., Z_n)^T, where the Z_i are independent N(0, 1) random variables and A is such that AA^T = Σ. Let Z^(1) = (Z_1, ..., Z_m)^T and Z^(2) = (Z_{m+1}, ..., Z_n)^T. We want to determine such an A in a partitioned form with dimensions corresponding to the partitioning of Σ:

    A = [ B   0_{m×(n−m)} ]
        [ C   D           ]

We can write AA^T = Σ as

    [ B   0 ] [ B^T   C^T ]  =  [ BB^T    BC^T        ]  =  [ Σ_11   Σ_12 ]
    [ C   D ] [ 0     D^T ]     [ CB^T    CC^T + DD^T ]     [ Σ_21   Σ_22 ]

We want to solve for B, C, and D.

- First consider BB^T = Σ_11. We choose B to be the unique positive definite square root of Σ_11: B = Σ_11^{1/2}. Recall that B is symmetric, and it is invertible since Σ_11 is.
- Then CB^T = Σ_21 implies C = Σ_21 (B^T)^{-1} = Σ_21 B^{-1}.
- Then CC^T + DD^T = Σ_22 gives

    DD^T = Σ_22 − CC^T = Σ_22 − Σ_21 B^{-1} (Σ_21 B^{-1})^T = Σ_22 − Σ_21 (BB)^{-1} Σ_12 = Σ_22 − Σ_21 Σ_11^{-1} Σ_12

Now note that X = AZ + µ gives

    X^(1) = B Z^(1) + µ^(1),    X^(2) = C Z^(1) + D Z^(2) + µ^(2)

Since B is invertible, given X^(1) = x^(1) we have Z^(1) = B^{-1}(x^(1) − µ^(1)). So given X^(1) = x^(1), the conditional distribution of X^(2) and the conditional distribution of

    C B^{-1}(x^(1) − µ^(1)) + D Z^(2) + µ^(2)

are the same. But Z^(2) is independent of X^(1), so given X^(1) = x^(1), the conditional distribution of C B^{-1}(x^(1) − µ^(1)) + D Z^(2) + µ^(2) is the same as its unconditional distribution. We conclude that the conditional distribution of X^(2) given X^(1) = x^(1) is multivariate normal with mean

    E(X^(2) | X^(1) = x^(1)) = µ^(2) + C B^{-1}(x^(1) − µ^(1)) = µ^(2) + Σ_21 (BB)^{-1}(x^(1) − µ^(1)) = µ^(2) + Σ_21 Σ_11^{-1}(x^(1) − µ^(1))

and covariance matrix

    DD^T = Σ_22 − Σ_21 Σ_11^{-1} Σ_12
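In the bivariate case (m = 1) every block in the conditional covariance Σ_22 − Σ_21 Σ_11^{-1} Σ_12 is a scalar, so it can be evaluated by hand. A sketch with arbitrary parameter values, checking that it reduces to σ_2²(1 − ρ²):

```python
# Bivariate case: Sigma_11 = s1^2, Sigma_21 = Sigma_12 = rho*s1*s2,
# Sigma_22 = s2^2.  Parameter values are arbitrary illustrations.
s1, s2, rho = 2.0, 3.0, 0.5
S11 = s1 * s1
S12 = S21 = rho * s1 * s2
S22 = s2 * s2

cond_var = S22 - S21 * (1.0 / S11) * S12   # Sigma_22 - Sigma_21 Sigma_11^{-1} Sigma_12
print(cond_var, s2 * s2 * (1 - rho * rho))  # 6.75 6.75
```

Note the conditional variance does not depend on the observed value x^(1), a special feature of the normal family.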

Special case: bivariate normal

Suppose X = (X_1, X_2)^T ~ N(µ, Σ) with

    µ = (µ_1, µ_2)^T,    Σ = [ σ_1²      ρσ_1σ_2 ]
                             [ ρσ_1σ_2   σ_2²    ]

Here Σ_11 = σ_1², Σ_21 = Σ_12 = ρσ_1σ_2, and Σ_22 = σ_2². We have

    µ_2 + Σ_21 Σ_11^{-1}(x_1 − µ_1) = µ_2 + ρ(σ_2/σ_1)(x_1 − µ_1)

and

    Σ_22 − Σ_21 Σ_11^{-1} Σ_12 = σ_2² − ρ²σ_1²σ_2²/σ_1² = σ_2²(1 − ρ²)

Thus the conditional distribution of X_2 given X_1 = x_1 is normal with (conditional) mean and variance

    E(X_2 | X_1 = x_1) = µ_2 + ρ(σ_2/σ_1)(x_1 − µ_1),    Var(X_2 | X_1 = x_1) = σ_2²(1 − ρ²)

Equivalently, the conditional distribution of X_2 given X_1 = x_1 is

    N( µ_2 + ρ(σ_2/σ_1)(x_1 − µ_1),  σ_2²(1 − ρ²) )

If ρ² < 1, then the conditional pdf exists and is given by

    f_{X_2|X_1}(x_2 | x_1) = 1/(√(2π) σ_2 √(1−ρ²)) exp{ −( x_2 − µ_2 − ρ(σ_2/σ_1)(x_1 − µ_1) )² / (2σ_2²(1−ρ²)) }

Remark: Note that E(X_2 | X_1 = x_1) = µ_2 + ρ(σ_2/σ_1)(x_1 − µ_1) is a linear (affine) function of x_1.

Example: Recall the MMSE estimation problem for X ~ N(0, σ_X²) from the observation Y = X + Z, where Z ~ N(0, σ_Z²) and X and Z are independent. Use the above to find g*(y) = E[X | Y = y] and compute the minimum mean square error E[(X − g*(Y))²].
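The MMSE example can be checked by simulation. A sketch with arbitrary variances (the closed forms used below follow from the conditional-mean formula above, since (X, Y) is jointly normal with Cov(X, Y) = σ_X² and Var(Y) = σ_X² + σ_Z², giving g*(y) = σ_X²/(σ_X² + σ_Z²) · y and MMSE = σ_X² σ_Z²/(σ_X² + σ_Z²)):

```python
import random

random.seed(3)

# X ~ N(0, sx2), Y = X + Z with Z ~ N(0, sz2) independent of X.
sx2, sz2 = 4.0, 1.0
c = sx2 / (sx2 + sz2)          # coefficient of the linear MMSE estimator g*(y) = c*y

n = 200_000
err = 0.0
for _ in range(n):
    x = random.gauss(0, sx2 ** 0.5)
    y = x + random.gauss(0, sz2 ** 0.5)
    err += (x - c * y) ** 2    # squared estimation error

print(err / n)  # close to sx2*sz2/(sx2 + sz2) = 0.8
```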


YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

An Inverse Problem for Two Spectra of Complex Finite Jacobi Matrices

An Inverse Problem for Two Spectra of Complex Finite Jacobi Matrices Coyright 202 Tech Science Press CMES, vol.86, no.4,.30-39, 202 An Inverse Problem for Two Sectra of Comlex Finite Jacobi Matrices Gusein Sh. Guseinov Abstract: This aer deals with the inverse sectral roblem

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Random vectors X 1 X 2. Recall that a random vector X = is made up of, say, k. X k. random variables.

Random vectors X 1 X 2. Recall that a random vector X = is made up of, say, k. X k. random variables. Random vectors Recall that a random vector X = X X 2 is made up of, say, k random variables X k A random vector has a joint distribution, eg a density f(x), that gives probabilities P(X A) = f(x)dx Just

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

MIT Spring 2015

MIT Spring 2015 Regression Analysis MIT 18.472 Dr. Kempthorne Spring 2015 1 Outline Regression Analysis 1 Regression Analysis 2 Multiple Linear Regression: Setup Data Set n cases i = 1, 2,..., n 1 Response (dependent)

More information

1 Solution to Problem 2.1

1 Solution to Problem 2.1 Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto

More information

E[X n ]= dn dt n M X(t). ). What is the mgf? Solution. Found this the other day in the Kernel matching exercise: 1 M X (t) =

E[X n ]= dn dt n M X(t). ). What is the mgf? Solution. Found this the other day in the Kernel matching exercise: 1 M X (t) = Chapter 7 Generating functions Definition 7.. Let X be a random variable. The moment generating function is given by M X (t) =E[e tx ], provided that the expectation exists for t in some neighborhood of

More information

BMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs

BMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem

More information

Chater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de

Chater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A Dahleh George Verghese Deartment of Electrical Engineering and Comuter Science Massachuasetts Institute of Technology c Chater Matrix Norms

More information

2 b 3 b 4. c c 2 c 3 c 4

2 b 3 b 4. c c 2 c 3 c 4 OHSx XM511 Linear Algebra: Multiple Choice Questions for Chapter 4 a a 2 a 3 a 4 b b 1. What is the determinant of 2 b 3 b 4 c c 2 c 3 c 4? d d 2 d 3 d 4 (a) abcd (b) abcd(a b)(b c)(c d)(d a) (c) abcd(a

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

ACM 116: Lectures 3 4

ACM 116: Lectures 3 4 1 ACM 116: Lectures 3 4 Joint distributions The multivariate normal distribution Conditional distributions Independent random variables Conditional distributions and Monte Carlo: Rejection sampling Variance

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

p. 6-1 Continuous Random Variables p. 6-2

p. 6-1 Continuous Random Variables p. 6-2 Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability (>). Often, there is interest in random variables

More information

Economics 620, Lecture 5: exp

Economics 620, Lecture 5: exp 1 Economics 620, Lecture 5: The K-Variable Linear Model II Third assumption (Normality): y; q(x; 2 I N ) 1 ) p(y) = (2 2 ) exp (N=2) 1 2 2(y X)0 (y X) where N is the sample size. The log likelihood function

More information

Stat 5101 Notes: Algorithms (thru 2nd midterm)

Stat 5101 Notes: Algorithms (thru 2nd midterm) Stat 5101 Notes: Algorithms (thru 2nd midterm) Charles J. Geyer October 18, 2012 Contents 1 Calculating an Expectation or a Probability 2 1.1 From a PMF........................... 2 1.2 From a PDF...........................

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

STAT 100C: Linear models

STAT 100C: Linear models STAT 100C: Linear models Arash A. Amini April 27, 2018 1 / 1 Table of Contents 2 / 1 Linear Algebra Review Read 3.1 and 3.2 from text. 1. Fundamental subspace (rank-nullity, etc.) Im(X ) = ker(x T ) R

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

From Lay, 5.4. If we always treat a matrix as defining a linear transformation, what role does diagonalisation play?

From Lay, 5.4. If we always treat a matrix as defining a linear transformation, what role does diagonalisation play? Overview Last week introduced the important Diagonalisation Theorem: An n n matrix A is diagonalisable if and only if there is a basis for R n consisting of eigenvectors of A. This week we ll continue

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

2 Functions of random variables

2 Functions of random variables 2 Functions of random variables A basic statistical model for sample data is a collection of random variables X 1,..., X n. The data are summarised in terms of certain sample statistics, calculated as

More information

Improving AOR Method for a Class of Two-by-Two Linear Systems

Improving AOR Method for a Class of Two-by-Two Linear Systems Alied Mathematics 2 2 236-24 doi:4236/am22226 Published Online February 2 (htt://scirporg/journal/am) Imroving AOR Method for a Class of To-by-To Linear Systems Abstract Cuixia Li Shiliang Wu 2 College

More information

Exam 2. Jeremy Morris. March 23, 2006

Exam 2. Jeremy Morris. March 23, 2006 Exam Jeremy Morris March 3, 006 4. Consider a bivariate normal population with µ 0, µ, σ, σ and ρ.5. a Write out the bivariate normal density. The multivariate normal density is defined by the following

More information

LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity.

LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. Important points of Lecture 1: A time series {X t } is a series of observations taken sequentially over time: x t is an observation

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

Introduction to Probability and Stocastic Processes - Part I

Introduction to Probability and Stocastic Processes - Part I Introduction to Probability and Stocastic Processes - Part I Lecture 2 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark

More information

Probability Lecture III (August, 2006)

Probability Lecture III (August, 2006) robability Lecture III (August, 2006) 1 Some roperties of Random Vectors and Matrices We generalize univariate notions in this section. Definition 1 Let U = U ij k l, a matrix of random variables. Suppose

More information

Econ 508B: Lecture 5

Econ 508B: Lecture 5 Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline

More information

The Properties of Pure Diagonal Bilinear Models

The Properties of Pure Diagonal Bilinear Models American Journal of Mathematics and Statistics 016, 6(4): 139-144 DOI: 10.593/j.ajms.0160604.01 The roerties of ure Diagonal Bilinear Models C. O. Omekara Deartment of Statistics, Michael Okara University

More information

The purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j.

The purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j. Chapter 9 Pearson s chi-square test 9. Null hypothesis asymptotics Let X, X 2, be independent from a multinomial(, p) distribution, where p is a k-vector with nonnegative entries that sum to one. That

More information

Random Vectors 1. STA442/2101 Fall See last slide for copyright information. 1 / 30

Random Vectors 1. STA442/2101 Fall See last slide for copyright information. 1 / 30 Random Vectors 1 STA442/2101 Fall 2017 1 See last slide for copyright information. 1 / 30 Background Reading: Renscher and Schaalje s Linear models in statistics Chapter 3 on Random Vectors and Matrices

More information

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality.

5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality. 88 Chapter 5 Distribution Theory In this chapter, we summarize the distributions related to the normal distribution that occur in linear models. Before turning to this general problem that assumes normal

More information

Probability and Distributions

Probability and Distributions Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated

More information

5 Operations on Multiple Random Variables

5 Operations on Multiple Random Variables EE360 Random Signal analysis Chapter 5: Operations on Multiple Random Variables 5 Operations on Multiple Random Variables Expected value of a function of r.v. s Two r.v. s: ḡ = E[g(X, Y )] = g(x, y)f X,Y

More information

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding

More information

2.3. VECTOR SPACES 25

2.3. VECTOR SPACES 25 2.3. VECTOR SPACES 25 2.3 Vector Spaces MATH 294 FALL 982 PRELIM # 3a 2.3. Let C[, ] denote the space of continuous functions defined on the interval [,] (i.e. f(x) is a member of C[, ] if f(x) is continuous

More information

11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.

11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly. C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a

More information

The Hasse Minkowski Theorem Lee Dicker University of Minnesota, REU Summer 2001

The Hasse Minkowski Theorem Lee Dicker University of Minnesota, REU Summer 2001 The Hasse Minkowski Theorem Lee Dicker University of Minnesota, REU Summer 2001 The Hasse-Minkowski Theorem rovides a characterization of the rational quadratic forms. What follows is a roof of the Hasse-Minkowski

More information

Towards understanding the Lorenz curve using the Uniform distribution. Chris J. Stephens. Newcastle City Council, Newcastle upon Tyne, UK

Towards understanding the Lorenz curve using the Uniform distribution. Chris J. Stephens. Newcastle City Council, Newcastle upon Tyne, UK Towards understanding the Lorenz curve using the Uniform distribution Chris J. Stehens Newcastle City Council, Newcastle uon Tyne, UK (For the Gini-Lorenz Conference, University of Siena, Italy, May 2005)

More information

DISCRIMINANTS IN TOWERS

DISCRIMINANTS IN TOWERS DISCRIMINANTS IN TOWERS JOSEPH RABINOFF Let A be a Dedekind domain with fraction field F, let K/F be a finite searable extension field, and let B be the integral closure of A in K. In this note, we will

More information

ECE Lecture #10 Overview

ECE Lecture #10 Overview ECE 450 - Lecture #0 Overview Introduction to Random Vectors CDF, PDF Mean Vector, Covariance Matrix Jointly Gaussian RV s: vector form of pdf Introduction to Random (or Stochastic) Processes Definitions

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information