III - MULTIVARIATE RANDOM VARIABLES


Computational Methods and Advanced Statistics Tools

A random vector, or multivariate random variable, is a vector of n scalar random variables. The random vector is written

    X = (X_1, X_2, ..., X_n)^T                                           (1)

A random vector is also called an n-dimensional random variable.

1  Joint and marginal densities

Every random variable has a distribution function.

DEFINITION III.1A (DISTRIBUTION) - The n-dimensional random variable X has the distribution function

    F(x_1,...,x_n) = P{X_1 ≤ x_1, ..., X_n ≤ x_n}.                       (2)

A random vector X is called absolutely continuous if there exists a joint probability density f such that

    F(x_1,...,x_n) = ∫_{-∞}^{x_1} ... ∫_{-∞}^{x_n} f(t_1,...,t_n) dt_1 ... dt_n.    (3)

A random vector X is called discrete if the set of its values is discrete, that is, either finite or countable. This means that X : Ω → R^n, where Ω is discrete. In this case the joint probability function (or, less properly, the joint discrete density) is defined as

    f(x_1,...,x_n) = P{X_1 = x_1, ..., X_n = x_n}.                       (4)

The joint distribution and the joint probability function are related by

    F(x_1,...,x_n) = Σ_{t_1 ≤ x_1} ... Σ_{t_n ≤ x_n} f(t_1,...,t_n).     (5)

Ex. III.1B (n = 2)

The joint distribution function of the random vector (X,Y) is defined by the relation

    F(x,y) = P(X ≤ x, Y ≤ y).

The function F(x,y) is defined on R^2 and, as a function of each variable, is monotone non-decreasing and such that

    lim_{x,y→+∞} F(x,y) = 1,    lim_{x→-∞} F(x,y) = lim_{y→-∞} F(x,y) = 0.

Knowing F(x,y) allows us to compute the probability that (X,Y) falls in any given rectangle (a,b] × (c,d]:

    P[(X,Y) ∈ (a,b] × (c,d]] = F(b,d) - F(a,d) - F(b,c) + F(a,c).

Since rectangles approximate any measurable set of R^2, if we know F then we know P[(X,Y) ∈ A] for any measurable set A ⊆ R^2; that is, the distribution function F(x,y) thoroughly describes the distribution measure of the couple (X,Y).

If the random vector (X,Y) is absolutely continuous, i.e. if there is an integrable function f(x,y) such that

    f(x,y) ≥ 0,    ∫_{R^2} f(x,y) dx dy = 1,    F(x,y) = ∫_{-∞}^{x} ∫_{-∞}^{y} f(ξ,η) dξ dη,

then

    ∫_A f(x,y) dx dy = P[(X,Y) ∈ A]  for every measurable set A ⊆ R^2,

which illustrates the meaning of the joint probability density f(x,y).

An example of an absolutely continuous random vector: (X,Y) with probability density

    f(x,y) = (1/2π) exp[-(x^2 + y^2)/2].

For example, (X,Y) falls in the unit disk D = {(x,y) : x^2 + y^2 ≤ 1} with probability

    ∫_D f(x,y) dx dy = (1/2π) ∫_0^{2π} ∫_0^1 e^{-r^2/2} r dr dθ = (2π/2π) [-e^{-r^2/2}]_0^1 = 1 - e^{-1/2}.
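As a quick numerical check of the last computation, the following sketch estimates P[(X,Y) ∈ D] by Monte Carlo for the standard bivariate normal density above and compares it with the exact value 1 - e^{-1/2}; the sample size N is an arbitrary choice.

# Monte Carlo check (a sketch): P[(X,Y) in the unit disk] for independent N(0,1) coordinates
set.seed(1)
N <- 1e6                       # number of simulated points (arbitrary)
x <- rnorm(N); y <- rnorm(N)   # (X,Y) with density (1/(2*pi)) exp(-(x^2+y^2)/2)
mean(x^2 + y^2 <= 1)           # empirical probability of the unit disk
1 - exp(-1/2)                  # exact value, about 0.3935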

An example of a discrete random vector: (X,Y) with X = score obtained by tossing a die and Y = score obtained by tossing another die. Then the joint probability function has the value 1/36 for any (x_i, y_j) ∈ {1,...,6}^2 and zero elsewhere. The distribution function

    F(x,y) = Σ_{x_i ≤ x} Σ_{y_j ≤ y} P(X = x_i, Y = y_j)

is computed by counting and dividing by 36. For example F(2,4) = 8/36, F(3,3) = 9/36, ..., F(6,6) = 1.

For a sub-vector S = (X_1,...,X_k)^T (k < n) of the random vector X, the marginal density function is

    f_S(x_1,...,x_k) = ∫_{-∞}^{+∞} ... ∫_{-∞}^{+∞} f(x_1,...,x_n) dx_{k+1} ... dx_n    (6)

in the continuous case, and

    f_S(x_1,...,x_k) = Σ_{x_{k+1}} ... Σ_{x_n} f(x_1,...,x_n)    (7)

if X is discrete. In both cases the marginal distribution function is

    F_S(x_1,...,x_k) = F(x_1,...,x_k, ∞, ..., ∞).    (8)

For the sake of notation, we will write f_X(x) and F_X(x) instead of f(x) and F(x), respectively, whenever it is necessary to emphasize to which random variable the function belongs.

EX. III.1C

    f_{X,Y}(x,y) = 1/8  if 0 ≤ y ≤ x ≤ 4,  and 0 otherwise.

Then the marginal density function of X is

    f_X(x) = ∫_R f_{X,Y}(x,y) dy = ∫_0^x (1/8) dy = x/8  for x ∈ [0,4],  and 0 for x ∉ [0,4].

DEF. III.1D (INDEPENDENCE)

The random vectors X and Y are independent [1] if

    f_{X,Y}(x,y) = f_X(x) f_Y(y)  for all x, y,    (9)

which corresponds to

    F_{X,Y}(x,y) = F_X(x) F_Y(y)  for all x, y.    (10)

[1] In most textbooks the n-dimensional and m-dimensional r.v.'s X, Y are said to be independent if P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B) for all measurable A ⊆ R^n, B ⊆ R^m. A suitable choice of A and B leads directly to the equivalent definition (10).
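As a small numerical illustration of EX. III.1C and of definition (9), the sketch below integrates the joint density numerically to recover the marginal x/8, and checks that f_X(x) f_Y(y) differs from f_{X,Y}(x,y), so X and Y are not independent; the evaluation points are arbitrary choices.

# Sketch: marginals of f(x,y) = 1/8 on the triangle 0 <= y <= x <= 4
f <- function(x, y) ifelse(y >= 0 & y <= x & x <= 4, 1/8, 0)

fX <- function(x) integrate(function(y) f(x, y), 0, 4)$value  # should equal x/8
fY <- function(y) integrate(function(x) f(x, y), 0, 4)$value  # should equal (4 - y)/8

fX(2)            # about 0.25  = 2/8
fY(1)            # about 0.375 = (4 - 1)/8
fX(2) * fY(1)    # about 0.094, while f(2,1) = 1/8: X and Y are not independent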

2  Moments of scalar random variables

    Evolution presupposes the existence of something that can develop. The theory says nothing about where this something came from. Furthermore, questions about the being, essence, dignity, mission, meaning and wherefore of the world and man cannot be answered in biological terms. (YouCat 42)

An average of the possible values of X, each value being weighted by its probability,

    E[X] = Σ_x x P{X = x},

is ad litteram the notion of expectation for a discrete variable X (provided that the series is absolutely convergent). For continuous variables, the sums are replaced by integrals.

DEFINITION III.2A (EXPECTATION)

The expectation (or mean value) of an absolutely continuous variable X with density function f_X is

    E[X] = ∫_R x f_X(x) dx    (11)

provided that the integral is absolutely convergent, i.e. E[|X|] < ∞.

If X is a random variable and g is a function, then Y = g(X) is also a random variable. To calculate the expectation of Y, we could find f_Y and use (11). However, the process of finding f_Y can be complicated; instead, we can simply use

    E[Y] = E[g(X)] = ∫_R g(x) f_X(x) dx.    (12)

This provides a method for calculating the moments of a distribution.
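To make (12) concrete, the sketch below computes E[X] and E[X^2] for a standard normal X both by numerical integration of (11)-(12) and by simulation; the choice of the normal density is purely for illustration.

# Sketch: E[g(X)] via formula (12) for X ~ N(0,1)
fX <- dnorm                                                   # density of X

EX  <- integrate(function(x) x   * fX(x), -Inf, Inf)$value    # about 0
EX2 <- integrate(function(x) x^2 * fX(x), -Inf, Inf)$value    # about 1
c(EX, EX2)

# Monte Carlo counterpart
x <- rnorm(1e5)
c(mean(x), mean(x^2))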

DEF. III.2B (MOMENTS)

The n-th moment of X is

    E[X^n] = ∫_R x^n f_X(x) dx    (13)

and the n-th central moment is

    E[(X - E[X])^n] = ∫_R (x - E[X])^n f_X(x) dx.    (14)

The second central moment is also called the variance, and the variance of X is given by

    Var[X] = E[(X - E[X])^2] = E[X^2] - (E[X])^2.    (15)

Now let X be an n-dimensional random variable and put Y = g(X), where g is a function. As in (12) we have

    E[Y] = E[g(X)] = ∫_{R^n} g(x_1,...,x_n) f_X(x_1,...,x_n) dx_1 ... dx_n.

This gives sense to the mean of a linear combination of variables, and to any type of moments. Since

    E[aX_1 + bX_2] = a E[X_1] + b E[X_2],

the expectation operator is a linear operator. Of special interest is the covariance of X_i and X_j:

DEF. III.2C (COVARIANCE)

The covariance of two scalar r.v.'s X_1, X_2 is

    Cov[X_1, X_2] = E[(X_1 - E[X_1])(X_2 - E[X_2])].    (16)

The covariance gives information about the simultaneous variation of two variables and is useful to find out whether X_1 and X_2 are dependent on each other. Since the expectation is linear, the covariance is a bilinear operator, i.e. linear in each of the two arguments, whence its calculation rule:

    Cov[aX_1 + bX_2, cX_3 + dX_4]
        = ac Cov[X_1,X_3] + ad Cov[X_1,X_4] + bc Cov[X_2,X_3] + bd Cov[X_2,X_4]    (17)

for constants a, b, c, d and r.v.'s X_1,...,X_4.
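A quick simulation illustrates the bilinearity rule (17): the sketch below draws four dependent variables (the particular mixing used to create them is an arbitrary choice) and compares the two sides of (17) computed from sample covariances, which obey the same rule exactly.

# Sketch: checking the bilinearity rule (17) on simulated data
set.seed(2)
n <- 1e5
Z <- matrix(rnorm(4 * n), ncol = 4)
M <- matrix(c(1, .5, 0, 0,
              0,  1, .3, 0,
              0,  0,  1, .2,
              0,  0,  0,  1), 4, 4)   # arbitrary mixing matrix
X <- Z %*% M                          # four dependent variables
a <- 2; b <- -1; c <- 0.5; d <- 3

lhs <- cov(a * X[,1] + b * X[,2], c * X[,3] + d * X[,4])
rhs <- a*c*cov(X[,1], X[,3]) + a*d*cov(X[,1], X[,4]) +
       b*c*cov(X[,2], X[,3]) + b*d*cov(X[,2], X[,4])
c(lhs, rhs)   # equal up to floating-point error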

PROP. III.2D - For two scalar random variables X, Y we have the formulae:

    Cov(X,Y) = E(XY) - µ_x µ_y    (18)

    Var(X + Y) = Var(X) + 2 Cov(X,Y) + Var(Y)    (19)

Proof.

    E[(X - µ_x)(Y - µ_y)] = E[XY] - µ_x E[Y] - µ_y E[X] + µ_x µ_y
                          = E[XY] - 2µ_x µ_y + µ_x µ_y = E[XY] - µ_x µ_y.

In particular, as we already know, Var(X) = Cov(X,X) = E[X^2] - µ_x^2. Besides,

    E[{X + Y - (µ_x + µ_y)}^2] = E[(X - µ_x)^2 + 2(X - µ_x)(Y - µ_y) + (Y - µ_y)^2]
                               = Var(X) + 2 Cov(X,Y) + Var(Y).

DEF. III.2D (CORRELATION COEFFICIENT)

The correlation coefficient, or correlation, of two scalar r.v.'s X_1, X_2 is

    ρ = ρ(X_1,X_2) := E[ ((X_1 - µ_1)/σ_1) ((X_2 - µ_2)/σ_2) ].

THM. III.2D - The covariance and the correlation of two scalar r.v.'s X, Y satisfy

    Cov(X,Y)^2 ≤ Var(X) Var(Y)    (Schwarz's inequality),

    -1 ≤ ρ(X,Y) ≤ 1,

respectively.

Proof. Since the expectation of a nonnegative variable is always nonnegative,

    0 ≤ E[(θ|X| + |Y|)^2] = θ^2 E[X^2] + 2θ E[|XY|] + E[Y^2]  for all θ.

Thus a polynomial of degree 2 in θ does not take negative values, so its discriminant must be ≤ 0:

    E[|XY|]^2 - E[X^2] E[Y^2] ≤ 0,    or    E[|XY|]^2 ≤ E[X^2] E[Y^2].

Replacing X and Y by X - µ_x and Y - µ_y, and using |∫ f(t)g(t) dt| ≤ ∫ |f(t)g(t)| dt, this implies

    E[(X - µ_x)(Y - µ_y)]^2 ≤ E[|(X - µ_x)(Y - µ_y)|]^2 ≤ E[(X - µ_x)^2] E[(Y - µ_y)^2] = Var(X) Var(Y).

Since ρ = Cov(X,Y)/√(Var(X) Var(Y)), this also means [ρ(X,Y)]^2 ≤ 1, i.e. -1 ≤ ρ ≤ +1.

DEFINITION III.2E (UNCORRELATION)

Two scalar random variables X, Y are said to be uncorrelated, or orthogonal, when Cov(X,Y) = 0.

Since Cov(X,Y) = E(XY) - µ_x µ_y, and for independent variables E(XY) = E(X)E(Y), independence implies uncorrelation:

    X, Y independent  ⇒  X, Y uncorrelated.

However, X, Y uncorrelated does not imply X, Y independent, as we see in the following example.

EX. III.2F - Let U be uniformly distributed in [0,1] and consider the r.v.'s X = sin 2πU, Y = cos 2πU. X and Y are not independent, since Y has only two possible values if X is known. However, they are orthogonal, or uncorrelated:

    E(X) = ∫_0^1 sin 2πt dt = 0,    E(Y) = ∫_0^1 cos 2πt dt = 0,

so that

    Cov(X,Y) = E(XY) = ∫_0^1 sin 2πt cos 2πt dt = (1/2) ∫_0^1 sin 4πt dt = 0.

In this case the covariance is zero, but the variables are not independent.
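A simulation of EX. III.2F may help fix the idea: the sketch below draws U uniformly, builds X and Y, and shows that the sample covariance is essentially zero while, for instance, X^2 and Y^2 are perfectly (negatively) correlated, so the variables cannot be independent.

# Sketch: uncorrelated but dependent variables (EX. III.2F)
set.seed(3)
u <- runif(1e5)
x <- sin(2 * pi * u)
y <- cos(2 * pi * u)

cov(x, y)        # about 0: X and Y are uncorrelated
cor(x^2, y^2)    # equals -1, since x^2 + y^2 = 1: X and Y are not independent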

3  Moments of multivariate random variables

We are interested in defining a (vector) first moment and a (matrix) second moment of a multivariate random variable.

DEF. III.3A (EXPECTATION OF A RANDOM VECTOR)

The expectation (or mean value) of the random vector X is

    µ = E[X] = (E[X_1], E[X_2], ..., E[X_n])^T.    (20)

DEFINITION III.3B (COVARIANCE MATRIX)

The covariance matrix of the random vector X is

    Σ_X = Var[X] = E[(X - µ)(X - µ)^T] =

          | Var[X_1]       Cov[X_1,X_2]  ...  Cov[X_1,X_n] |
          | Cov[X_2,X_1]   Var[X_2]      ...  Cov[X_2,X_n] |    (21)
          | ...                                            |
          | Cov[X_n,X_1]   Cov[X_n,X_2]  ...  Var[X_n]     |

Sometimes we shall use the notation

          | σ_1^2   σ_12    ...  σ_1n  |
          | σ_21    σ_2^2   ...  σ_2n  |
          | ...                        |
          | σ_n1    σ_n2    ...  σ_n^2 |

Both σ_ii and σ_i^2 are possible notations for the variance of X_i.

By the calculation rule (17), the variance of a linear combination of X_1, X_2 is known from the covariance matrix of (X_1, X_2):

    Var[z_1 X_1 + z_2 X_2] = z_1^2 Var(X_1) + 2 z_1 z_2 Cov(X_1,X_2) + z_2^2 Var(X_2),

and in general, for any n ≥ 2, it is the quadratic form of Σ evaluated at the coefficients z_1,...,z_n:

    Var[z_1 X_1 + ... + z_n X_n] = z^T Σ z.
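The identity Var[z^T X] = z^T Σ z is easy to check numerically; in the sketch below the dependence structure and the coefficient vector z are arbitrary choices, and the sample variance of z^T X is compared with the quadratic form built from the sample covariance matrix.

# Sketch: variance of a linear combination as a quadratic form, Var[z'X] = z' Sigma z
set.seed(4)
n <- 1e5
X <- cbind(rnorm(n), rnorm(n), rnorm(n))
X[, 2] <- X[, 1] + 0.5 * X[, 2]          # introduce some correlation (arbitrary)
z <- c(1, -2, 0.5)                       # arbitrary coefficients

var(drop(X %*% z))                       # sample variance of z'X
t(z) %*% cov(X) %*% z                    # quadratic form with the sample covariance matrix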

According to DEF. III.2D, the correlation of two random variables X_i, X_j is

    ρ_ij = Cov[X_i,X_j] / √(Var[X_i] Var[X_j]) = σ_ij / (σ_i σ_j).    (22)

Equations (21) and (22) lead to the definition

DEFINITION III.3C

The correlation matrix for X is

          | 1      ρ_12   ...  ρ_1n |
          | ρ_21   1      ...  ρ_2n |    (23)
          | ...                     |
          | ρ_n1   ρ_n2   ...  1    |

THEOREM III.3D

The covariance matrix Σ and the correlation matrix R are (a) symmetric and (b) nonnegative definite.

Proof. (a) The symmetry is obvious. (b) By definition of variance, Var(z_1 X_1 + ... + z_n X_n) ≥ 0 for any z ∈ R^n. Hence the quadratic form z^T Σ z = Var[z^T X] ≥ 0 for all z, i.e. the matrix Σ is nonnegative (positive semidefinite).

REMARK III.3E - Very often Σ is positive definite, i.e.

    z^T Σ z ≥ 0    and    z^T Σ z = 0  ⟺  z = (0,...,0)^T.

When Σ is positive definite, we write Σ > 0. For our purposes, we always assume Σ > 0.

COROLLARY III.3F (Linear transformations of random vectors)

Let X = (X_1,...,X_n)^T have mean µ and covariance Σ_X. Now we introduce the new random variable Y = (Y_1,...,Y_n)^T by the transformation

    Y = a + BX,

where a is an (n×1) vector and B is an (n×n) matrix. Then the expectation and covariance matrix of Y are:

    E[Y] = E[a + BX] = a + B E[X]    (24)

    Σ_Y = Var[a + BX] = Var[BX] = B Var[X] B^T    (25)

Proof. The first formula is obvious since the expectation operator is linear. As for the variance of BX we have:

    (Var[a + BX])_{i,j} = Cov[(BX)_i, (BX)_j]
                        = Cov[ Σ_l b_{i,l} X_l , Σ_m b_{j,m} X_m ]
                        = Σ_{l,m} b_{i,l} b_{j,m} Cov(X_l, X_m) = (B Σ_X B^T)_{i,j}.

4  The multivariate normal distribution

We assume that X_1,...,X_n are independent random variables with means µ_1,...,µ_n and variances σ_1^2,...,σ_n^2. We write X_i ~ N(µ_i, σ_i^2). Now define the random vector X = (X_1,...,X_n)^T. Because the random variables are independent, the joint density is the product of the marginal densities:

    f_X(x_1,...,x_n) = f_{X_1}(x_1) ··· f_{X_n}(x_n)
                     = Π_{i=1}^n (1/(σ_i √(2π))) exp[ -(x_i - µ_i)^2 / (2σ_i^2) ]
                     = (1/((Π_i σ_i)(2π)^{n/2})) exp[ -(1/2) Σ_{i=1}^n ((x_i - µ_i)/σ_i)^2 ].

By introducing the mean µ = (µ_1,...,µ_n)^T and the covariance Σ_X = Var[X] = diag(σ_1^2,...,σ_n^2), this is written

    f_X(x) = (1/((2π)^{n/2} √(det Σ_X))) exp[ -(1/2)(x - µ)^T Σ_X^{-1} (x - µ) ].    (26)

A generalization to the case where the covariance matrix is a full matrix leads to the following

DEF. III.4A (The multivariate normal distribution)

The joint density function for the n-dimensional random variable X with mean µ and covariance matrix Σ is

    f_X(x) = (1/((2π)^{n/2} √(det Σ))) exp[ -(1/2)(x - µ)^T Σ^{-1} (x - µ) ],    (27)

where Σ > 0. We write X ~ N(µ, Σ). If X ~ N(0, I) we say that X is standardized normally distributed.

THM. III.4B - Within the class of normal variables, X_1,...,X_n are uncorrelated if and only if they are independent.

Proof. Cov(X_i,X_j) = 0 for i ≠ j implies that the covariance matrix is diagonal. Thus the quadratic form in the exponent of f is a diagonal form, so the density is the product of the marginal densities; hence the X_i's are independent. The converse implication (independence implies uncorrelation) always holds.

THM. III.4C (Principal components of N(µ, Σ))

Let X ~ N(µ, Σ), with values in R^n. Each level surface (or "contour") of the joint density f(x_1,...,x_n) is an ellipsoid

    {x : (x - µ)^T Σ^{-1} (x - µ) = c^2}    (28)

in R^n, c being a constant. The axes of the ellipsoid are ±c √λ_i e_i, where (λ_i, e_i) are the eigenvalue-eigenvector pairs of the covariance matrix Σ (i = 1,...,n). If the matrix G is defined with the normalized eigenvectors e_i as columns, then the equation

    Y = G^T (X - µ)    (29)

determines independent and normally distributed variables Y_i with mean zero and variance λ_i. Such Y_i's are called principal components of the normal vector X.

Proof. The covariance matrix Σ is symmetric and positive definite. Thus, by the spectral decomposition theorem,

    G^T Σ G = Λ    (or, equivalently, G^T Σ^{-1} G = Λ^{-1}),

where Λ = diag(λ_1,...,λ_n) is the diagonal matrix of (positive) eigenvalues of Σ, while G is an orthogonal matrix whose columns are the corresponding normalized eigenvectors. By the principal component transformation

    y = G^T (x - µ),    i.e.    x - µ = Gy,

the ellipsoid becomes

    (Gy)^T Σ^{-1} (Gy) = c^2,  i.e.  y^T G^T Σ^{-1} G y = c^2,  i.e.  y^T Λ^{-1} y = c^2,  i.e.  Σ_{i=1}^n y_i^2/λ_i = c^2.

This is the equation of an ellipsoid with semiaxes of length c √λ_i; y_1,...,y_n represent the proper axes of the ellipsoid. The directions of the axes are given by the eigenvectors of Σ, since y = G^T (x - µ) is a rotation and G is orthogonal with the eigenvectors of Σ as columns.

EX. III.4D (Principal components and contours of a two-dimensional normal vector)

To obtain the contours of a bivariate normal density we have to find the eigenvalues and eigenvectors of Σ. Suppose

    µ_1 = 5,   µ_2 = 8,   Σ = | 12   9 |
                              |  9  25 |

Here the eigenvalue equation becomes

    0 = det(Σ - λI) = det | 12-λ    9   | = (12-λ)(25-λ) - 81 = λ^2 - 37λ + 219.
                          |   9   25-λ |

The equation gives λ_1 ≈ 29.60, λ_2 ≈ 7.40. The corresponding normalized eigenvectors are

    e_1 ≈ (0.455, 0.890)^T,    e_2 ≈ (-0.890, 0.455)^T.

The ellipse of constant probability density (28) has center at µ = (5, 8)^T and axes ±5.44 c e_1 and ±2.72 c e_2 (since √λ_1 ≈ 5.44 and √λ_2 ≈ 2.72). The eigenvector e_1 lies along the line which makes an angle of cos^{-1}(0.455) ≈ 63° with the X_1 axis and passes through the point (5, 8) in the (X_1, X_2) plane. By the principal component transformation y = G^T(x - µ), where G = (e_1, e_2), the ellipse (28) has the equation

    {y : Σ_{i=1}^2 y_i^2/λ_i = c^2}.

The ellipse has center at (y_1 = 0, y_2 = 0) and axes ±c √λ_i f_i, where f_i, i = 1, 2, are the eigenvectors of Λ = diag(λ_1, λ_2). In terms of the new coordinate axes Y_1, Y_2, the eigenvectors are f_1 = (1,0)^T and f_2 = (0,1)^T, and the lengths of the axes are c √λ_i, i = 1, 2. The axes Y_1, Y_2 are the principal components of Σ.

> Sigma <- matrix(c(12,9,9,25), nrow=2, ncol=2)
> eigen(Sigma)$values
[1] 29.601802  7.398198
> eigen(Sigma)$vectors
          [,1]       [,2]
[1,] 0.4552526 -0.8903625
[2,] 0.8903625  0.4552526
>

(The eigenvector columns returned by eigen() are defined only up to sign, so the signs printed may differ between platforms.)

THM. III.4E (n-dimensional standardization)

Let X ~ N(µ, Σ) with dimension n. Then there exists a square matrix C such that

    U := C^{-1}(X - µ) ~ N(0, I),

i.e. the components (U_1,...,U_n) are independent N(0,1) variables.

Proof. Due to the symmetry of the positive definite matrix Σ, there always exists a nonsingular real matrix C such that Σ = C C^T (a matrix square root of Σ). Setting U = C^{-1}(X - µ), by means of Corollary III.3F we find:

    E[U] = C^{-1} E[X - µ] = 0,

    Var[U] = C^{-1} Var[X] (C^{-1})^T = C^{-1} C C^T (C^T)^{-1} = I.
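The matrix square root C in THM. III.4E is not unique; a convenient choice is the Cholesky factor. The sketch below (the specific µ, Σ and sample size are arbitrary choices) simulates X ~ N(µ, Σ) by the transformation X = µ + CU with U ~ N(0, I), and then standardizes back with C^{-1}, recovering an approximately identity covariance.

# Sketch of THM. III.4E using the Cholesky factor as matrix square root of Sigma
set.seed(5)
mu    <- c(5, 8)
Sigma <- matrix(c(12, 9, 9, 25), 2, 2)
C     <- t(chol(Sigma))                # lower triangular, Sigma = C %*% t(C)

N <- 1e5
U <- matrix(rnorm(2 * N), nrow = 2)    # columns of U are N(0, I) vectors
X <- mu + C %*% U                      # X ~ N(mu, Sigma), a linear transformation of a normal vector

cov(t(X))                              # approximately Sigma
U.back <- solve(C) %*% (X - mu)        # standardization U = C^{-1}(X - mu)
cov(t(U.back))                         # approximately the identity matrix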

5  Distributions derived from the normal distribution

Most of the test quantities used in Statistics are based on distributions derived from the normal one.

Any linear combination of normally distributed random variables is normal. If, for instance, X ~ N(µ, Σ), then the linear transformation Y = a + BX defines a normally distributed random variable

    Y ~ N(a + Bµ, B Σ B^T).    (30)

Compare with Corollary III.3F, where the transformation rule Σ → B Σ B^T of the covariance of X, for any affine map X → a + BX, is established.

Let U = (U_1,...,U_n)^T be a vector of independent N(0,1) random variables. The (central) χ^2 distribution with n degrees of freedom is obtained as the squared sum of n independent N(0,1) random variables, i.e.

    X^2 = Σ_{i=1}^n U_i^2 = U^T U ~ χ^2(n).    (31)

From this it is clear that if Y_1,...,Y_n are independent N(µ_i, σ_i^2) r.v.'s, then

    X^2 = Σ_{i=1}^n [ (Y_i - µ_i)/σ_i ]^2 ~ χ^2(n),

since U_i = (Y_i - µ_i)/σ_i is N(0,1) distributed.

THM. III.5A

If X_i, for i = 1,...,n, are independent normal variables from a population N(µ, σ^2), then the sample mean and the (rescaled) sample variance follow a normal and a chi-squared distribution, respectively:

    X̄ = (1/n) Σ_{i=1}^n X_i ~ N(µ, σ^2/n),

    S^2 = (1/(n-1)) Σ_{i=1}^n (X_i - X̄)^2,    (n-1) S^2/σ^2 ~ χ^2(n-1).

Proof. X̄ is normal as a linear transformation of normal variables. By linearity, E[X̄] = (1/n) Σ_i E[X_i] = (1/n) nµ = µ, and by independence

    Var[(1/n) Σ_i X_i] = (1/n^2) Σ_i Var[X_i] = nσ^2/n^2 = σ^2/n,

whence X̄ ~ N(µ, σ^2/n). Now, using this fact and the definition of χ^2,

    Σ_i ( (X_i - µ)/σ )^2 ~ χ^2(n),    ( (X̄ - µ)/(σ/√n) )^2 = (X̄ - µ)^2/(σ^2/n) ~ χ^2(1).

Now, writing X_i - µ = (X_i - X̄) + (X̄ - µ), the following decomposition can be verified:

    Σ_{i=1}^n ( (X_i - µ)/σ )^2 = Σ_i (X_i - X̄)^2/σ^2 + n (X̄ - µ)^2/σ^2
                                = (n-1) S^2/σ^2 + n (X̄ - µ)^2/σ^2.

Therefore the random variable (n-1) S^2/σ^2 is the difference between a χ^2(n) r.v. and a χ^2(1) r.v., that is, a r.v. with distribution χ^2(n-1).

DEF. III.5B - The Student's t distribution with n degrees of freedom is obtained as

    T = Z / (K/n)^{1/2} ~ t(n),    (32)

where Z ~ N(0,1), K ~ χ^2(n), and Z and K are independent.

The Fisher-Snedecor F distribution with (n_1, n_2) degrees of freedom appears as the ratio

    F = (K_1/n_1) / (K_2/n_2) ~ F(n_1, n_2),    (33)

where K_1 ~ χ^2(n_1), K_2 ~ χ^2(n_2), and K_1, K_2 are independent.

It is clearly seen from (32) that T^2 ~ F(1, n).

PROP. III.5C - For i = 1,...,n, let the X_i be i.i.d. variables from a normal population N(µ, σ^2). If S^2 = (1/(n-1)) Σ_i (X_i - X̄)^2, then the estimated standardization of the mean

    T = (X̄ - µ) / (S/√n)

obeys a Student t distribution with n-1 degrees of freedom.

[Figure 1: Chi-squared density with 5 degrees of freedom]

[Figure 2: Student's t density with 8 degrees of freedom]

[Figure 3: Fisher's F density with 5 and 7 degrees of freedom]

If two independent samples {X_i}_i and {Y_j}_j (i = 1,...,n_1; j = 1,...,n_2) are taken from normal populations with the same variance, the ratio F = S_1^2/S_2^2 obeys a Fisher F distribution with (n_1 - 1, n_2 - 1) degrees of freedom.

Proof. Since X̄ ~ N(µ, σ^2/n), we have:

    T = (X̄ - µ)/(S/√n) = [ (X̄ - µ)/(σ/√n) ] · (σ/S) = U / (S/σ) = U / √(K/(n-1)),

where

    U = (X̄ - µ)/(σ/√n),    K = S^2 (n-1)/σ^2

are independent, with U ~ N(0,1) and K ~ χ^2(n-1) (for normal samples the sample mean X̄ and the sample variance S^2 are independent).

Finally, under the assumption of equal variance σ^2,

    S_1^2/S_2^2 = [ S_1^2 (n_1-1)/σ^2 ] / (n_1-1)  ÷  [ S_2^2 (n_2-1)/σ^2 ] / (n_2-1) = (K_1/(n_1-1)) / (K_2/(n_2-1)),

where K_1 and K_2 are independent, K_1 ~ χ^2(n_1-1), K_2 ~ χ^2(n_2-1).
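A simulation makes PROP. III.5C and the F ratio above tangible; the sketch below (sample sizes and parameters are arbitrary choices) compares the empirical quantiles of the simulated T statistic and of S_1^2/S_2^2 with R's quantile functions qt() and qf(), described below.

# Sketch: checking PROP. III.5C and the F ratio by simulation
set.seed(6)
mu <- 2; sigma <- 3
n1 <- 10; n2 <- 15; R <- 1e4

Tstat <- replicate(R, { x <- rnorm(n1, mu, sigma); (mean(x) - mu) / (sd(x) / sqrt(n1)) })
Fstat <- replicate(R, { x <- rnorm(n1, mu, sigma); y <- rnorm(n2, mu, sigma); var(x) / var(y) })

rbind(empirical   = quantile(Tstat, c(.05, .5, .95)),
      theoretical = qt(c(.05, .5, .95), df = n1 - 1))
rbind(empirical   = quantile(Fstat, c(.05, .5, .95)),
      theoretical = qf(c(.05, .5, .95), df1 = n1 - 1, df2 = n2 - 1))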

In the language and environment R, each distribution described above has four associated functions, listed below (cited from the R reference card).

Distributions
    rnorm(n, mean=0, sd=1)      Gaussian (normal)
    rt(n, df)                   Student (t)
    rf(n, df1, df2)             Fisher-Snedecor (F)
    rchisq(n, df)               Pearson (chi-squared)
    rbinom(n, size, prob)       binomial
    runif(n, min=0, max=1)      uniform

All these functions can be used by replacing the letter r with d, p or q to get, respectively, the probability density (dfunc(x,...)), the cumulative distribution function (pfunc(x,...)), and the quantile (qfunc(p,...), with 0 < p < 1). Some examples:

> pnorm(5.8, mean=2, sd=3)    # distribution function of N(2;9) at 5.8
[1] 0.8973627
> dnorm(2.7, mean=0, sd=5)    # density function of N(0;25) at 2.7
[1] 0.0689636
> qt(0.95, df=13)             # 0.95 quantile of Student's t with 13 degrees of freedom
[1] 1.770933
> qchisq(0.99, df=10)         # 0.99 quantile of chi-squared with 10 degrees of freedom
[1] 23.20925
> z <- seq(0, 40, length=1000)
> plot(z, dchisq(z, df=10), ty="l")   # density of chi-squared with 10 degrees of freedom
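As a closing illustration, THM. III.5A can be checked with these functions: the sketch below simulates many samples (the sample size and parameters are arbitrary choices), computes (n-1)S^2/σ^2 for each, and overlays the χ^2(n-1) density obtained with dchisq().

# Sketch: empirical check of THM. III.5A, (n-1) S^2 / sigma^2 ~ chi-squared(n-1)
set.seed(7)
n <- 8; mu <- 2; sigma <- 3; R <- 1e4

stat <- replicate(R, (n - 1) * var(rnorm(n, mu, sigma)) / sigma^2)

hist(stat, breaks = 50, freq = FALSE,
     main = "(n-1) S^2 / sigma^2 versus chi-squared(n-1)")
z <- seq(0, max(stat), length = 500)
lines(z, dchisq(z, df = n - 1))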
