Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013
Timo Koski
Department of Mathematics
KTH Royal Institute of Technology, Stockholm
Chapter 1: Gaussian Vectors

1.1 Multivariate Gaussian Distribution

Let us recall the following: $X$ is a normal random variable if

$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2},$$

where $\mu$ is real and $\sigma > 0$. Notation: $X \in N(\mu, \sigma^2)$.

Properties:

1. If $X \in N(\mu, \sigma^2)$, then $E[X] = \mu$ and $\mathrm{Var}(X) = \sigma^2$.

2. If $X \in N(\mu, \sigma^2)$, then the moment generating function is

$$\psi_X(t) = E\left[e^{tX}\right] = e^{t\mu + \frac{1}{2}t^2\sigma^2}, \tag{1.1}$$

and the characteristic function is

$$\varphi_X(t) = E\left[e^{itX}\right] = e^{it\mu - \frac{1}{2}t^2\sigma^2}. \tag{1.2}$$

3. If $X \in N(\mu, \sigma^2)$, then $Y = aX + b \in N(a\mu + b, a^2\sigma^2)$.

4. If $X \in N(\mu, \sigma^2)$, then $Z = (X - \mu)/\sigma \in N(0, 1)$.

1.1.1 Notation for Vectors, Mean Vector, Covariance Matrix & Characteristic Functions

An $n \times 1$ random vector, or a multivariate random variable, is denoted by

$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix} = (X_1, X_2, \ldots, X_n)',$$
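Property 3 above (the affine rule) can be checked with a quick Monte Carlo sketch. This assumes numpy is available; the parameter values $\mu = 1$, $\sigma = 2$, $a = 3$, $b = -1$ and the sample size are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of property 3: if X ~ N(mu, sigma^2), then
# Y = a*X + b ~ N(a*mu + b, a^2 * sigma^2). All numbers are arbitrary.
rng = np.random.default_rng(0)
mu, sigma, a, b = 1.0, 2.0, 3.0, -1.0

x = rng.normal(mu, sigma, size=1_000_000)
y = a * x + b

# Sample mean and variance should be close to a*mu + b = 2.0
# and a^2 * sigma^2 = 36.0, respectively.
mean_y, var_y = y.mean(), y.var()
```

The same experiment with other choices of $a$ and $b$ illustrates why only the mean and variance, never the shape, of the distribution change under an affine map.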
where $'$ denotes the vector transpose. A vector in $\mathbb{R}^n$ is designated by

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1, x_2, \ldots, x_n)'.$$

We denote by $F_X(x)$ the joint distribution function of $X$, which means that

$$F_X(x) = P(X \le x) = P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n).$$

The following definitions are natural. We have the mean vector

$$\mu_X = E[X] = \begin{pmatrix} E[X_1] \\ E[X_2] \\ \vdots \\ E[X_n] \end{pmatrix},$$

which is an $n \times 1$ column vector of the means (= expected values) of the components of $X$. The covariance matrix is the square $n \times n$ matrix

$$C_X := E\left[(X - \mu_X)(X - \mu_X)'\right],$$

where the entry at position $(i, j)$ is

$$c_{i,j} \stackrel{\text{def}}{=} C_X(i, j) = E\left[(X_i - \mu_i)(X_j - \mu_j)\right],$$

that is, the covariance of $X_i$ and $X_j$. Every covariance matrix, now designated by $C$, is by construction symmetric and nonnegative definite, i.e.,

$$C = C' \tag{1.3}$$

and, for all $x \in \mathbb{R}^n$,

$$x'Cx \ge 0. \tag{1.4}$$

It is shown in linear algebra that nonnegative definiteness implies $\det C \ge 0$. In terms of the entries $c_{i,j}$ of a covariance matrix $C = (c_{i,j})_{i,j=1}^{n,n}$, there are the following necessary properties:

1. $c_{i,j} = c_{j,i}$ (symmetry).

2. $c_{i,i} = \mathrm{Var}(X_i) = \sigma_i^2 \ge 0$ (the elements on the main diagonal are the variances, and thus all elements on the main diagonal are nonnegative).

3. $c_{i,j}^2 \le c_{i,i}\, c_{j,j}$.
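The three properties above can be verified numerically on any sample covariance matrix. A minimal sketch, assuming numpy; the data set below is arbitrary.

```python
import numpy as np

# Sketch: a sample covariance matrix is symmetric (1.3), nonnegative
# definite (1.4), and its entries satisfy c_ij^2 <= c_ii * c_jj.
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 3))
C = np.cov(data, rowvar=False)

sym_ok = np.allclose(C, C.T)                       # symmetry
nnd_ok = np.all(np.linalg.eigvalsh(C) >= -1e-10)   # nonnegative definite
cs_ok = all(C[i, j] ** 2 <= C[i, i] * C[j, j] + 1e-12
            for i in range(3) for j in range(3))   # Cauchy-Schwarz bound
```

Property 3 is a Cauchy-Schwarz inequality; it fails for no covariance matrix, which the loop above spot-checks entry by entry.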
Example 1.1.1. The covariance matrix of a bivariate random variable $X = (X_1, X_2)'$ is often written in the following form:

$$C = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}, \tag{1.5}$$

where $\sigma_1^2 = \mathrm{Var}(X_1)$, $\sigma_2^2 = \mathrm{Var}(X_2)$ and $\rho = \mathrm{Cov}(X_1, X_2)/(\sigma_1\sigma_2)$ is the coefficient of correlation of $X_1$ and $X_2$. $C$ is invertible (positive definite) if and only if $\rho^2 < 1$.

The rules for finding the mean vector and the covariance matrix of a transformed vector are simple.

Proposition 1.1.2. $X$ is a random vector with mean vector $\mu_X$ and covariance matrix $C_X$, and $B$ is an $m \times n$ matrix. If $Y = BX + b$, then

$$E[Y] = B\mu_X + b \tag{1.6}$$

and

$$C_Y = B C_X B'. \tag{1.7}$$

Proof: For simplicity of writing, take $b = 0$ and $\mu_X = 0$. Then

$$C_Y = E[YY'] = E[(BX)(BX)'] = E[BXX'B'] = B\, E[XX']\, B' = B C_X B'.$$

We have

Definition 1.1.1.

$$\varphi_X(s) \stackrel{\text{def}}{=} E\left[e^{is'X}\right] = \int_{\mathbb{R}^n} e^{is'x}\, dF_X(x) \tag{1.8}$$

is the characteristic function of the random vector $X$. In (1.8), $s'x$ is a scalar product in $\mathbb{R}^n$,

$$s'x = \sum_{i=1}^n s_i x_i.$$

As $F_X$ is a joint distribution function on $\mathbb{R}^n$ and $\int_{\mathbb{R}^n}$ is a notation for a multiple integral over $\mathbb{R}^n$, we know that $\int_{\mathbb{R}^n} dF_X(x) = 1$, which means that $\varphi_X(0) = 1$, where $0$ is an $n \times 1$ vector of zeros.
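Proposition 1.1.2 lends itself to a Monte Carlo sketch, assuming numpy; the particular $\mu_X$, $C_X$, $B$ and $b$ below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of Proposition 1.1.2: for Y = B X + b,
# E[Y] = B mu_X + b and C_Y = B C_X B'. All numbers are arbitrary.
rng = np.random.default_rng(2)
mu_X = np.array([1.0, -1.0])
C_X = np.array([[2.0, 0.5],
                [0.5, 1.0]])
B = np.array([[1.0, 2.0],
              [0.0, 3.0],
              [1.0, -1.0]])          # a 3 x 2 matrix
b = np.array([0.5, 0.0, -0.5])

X = rng.multivariate_normal(mu_X, C_X, size=200_000)
Y = X @ B.T + b                      # Y = B X + b, row by row

emp_mean = Y.mean(axis=0)
emp_cov = np.cov(Y, rowvar=False)
th_mean = B @ mu_X + b               # (1.6)
th_cov = B @ C_X @ B.T               # (1.7)
```

Note that $B$ need not be square: here a 2-dimensional vector is mapped to a 3-dimensional one, and (1.6)-(1.7) still apply.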
1.1.2 Multivariate Normal Distribution

Definition 1.1.2. $X$ has a multivariate normal distribution with mean vector $\mu$ and covariance matrix $C$, written as $X \in N(\mu, C)$, if and only if the characteristic function is given by

$$\varphi_X(s) = e^{is'\mu - \frac{1}{2}s'Cs}. \tag{1.9}$$

Theorem 1.1.3. $X$ has a multivariate normal distribution $N(\mu, C)$ if and only if

$$a'X = \sum_{i=1}^n a_i X_i \tag{1.10}$$

has a normal distribution for all vectors $a = (a_1, a_2, \ldots, a_n)'$.

Additional properties are:

1. Theorem 1.1.4. If $Y = BX + b$ and $X \in N(\mu, C)$, then $Y \in N(B\mu + b, BCB')$.

2. Theorem 1.1.5. A Gaussian multivariate random variable has independent components if and only if the covariance matrix is diagonal.

3. Theorem 1.1.6. If $C$ is positive definite ($\det C > 0$), then it can be shown that there is a simultaneous density of the form

$$f_X(x) = \frac{1}{(2\pi)^{n/2}\sqrt{\det C}}\, e^{-\frac{1}{2}(x - \mu_X)'C^{-1}(x - \mu_X)}. \tag{1.11}$$

4. Theorem 1.1.7. $(X_1, X_2)$ is a bivariate Gaussian random variable. The conditional distribution of $X_2$ given $X_1 = x_1$ is

$$N\left(\mu_2 + \rho\frac{\sigma_2}{\sigma_1}(x_1 - \mu_1),\; \sigma_2^2(1 - \rho^2)\right), \tag{1.12}$$

where $\mu_2 = E[X_2]$, $\mu_1 = E[X_1]$, $\sigma_2^2 = \mathrm{Var}(X_2)$, $\sigma_1^2 = \mathrm{Var}(X_1)$ and $\rho = \mathrm{Cov}(X_1, X_2)/(\sigma_1\sigma_2)$. The proof is done by an explicit evaluation of (1.11) followed by an explicit evaluation of the pertinent conditional density.

Definition 1.1.3. $Z \in N(0, I)$ is a standard Gaussian vector, where $I$ is the $n \times n$ identity matrix.
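A small consistency sketch of the density (1.11), assuming numpy and arbitrary illustrative numbers: when $C$ is diagonal, (1.11) factors into a product of univariate normal densities, in line with Theorem 1.1.5 on independent components.

```python
import numpy as np

# Sketch: for diagonal C, the joint density (1.11) equals the product
# of the univariate N(mu_i, c_ii) densities. Numbers are arbitrary.
mu = np.array([0.5, -1.0])
C = np.diag([2.0, 0.5])
x = np.array([1.0, 0.0])

d = x - mu
joint = np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / (
    (2 * np.pi) ** (len(mu) / 2) * np.sqrt(np.linalg.det(C))
)

def phi(z, m, s2):
    # univariate N(m, s2) density
    return np.exp(-(z - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

product = phi(x[0], mu[0], C[0, 0]) * phi(x[1], mu[1], C[1, 1])
```

For a diagonal $C$ the quadratic form in the exponent splits into a sum over the components, which is exactly why the exponential factors.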
Let $X \in N(\mu_X, C)$. Then, if $C$ is positive definite, we can factorize $C$ as

$$C = AA',$$

for an $n \times n$ matrix $A$, where $A$ is lower triangular. Actually we can always decompose $C = LDL'$, where $L$ is a unique $n \times n$ lower triangular matrix and $D$ is diagonal with positive elements on the main diagonal, and we write $A = L\sqrt{D}$. Then $A$ is lower triangular. Then

$$Z = A^{-1}(X - \mu_X)$$

is a standard Gaussian vector. In some applications, e.g., in time series analysis and signal processing, one refers to $A^{-1}$ as a whitening matrix. It can be shown that $A^{-1}$ is lower triangular, thus we have obtained $Z$ by a causal operation, in the sense that $Z_i$ is a function of $X_1, \ldots, X_i$. $Z$ is known as the innovations of $X$. Conversely, one goes from the innovations to $X$ through another causal operation by

$$X = AZ + b,$$

and then $X \in N(b, AA')$.

Example 1.1.8 (Factorization of a $2 \times 2$ Covariance Matrix). Let

$$\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \in N(\mu, C).$$

Let $Z_1$ and $Z_2$ be independent $N(0, 1)$. We consider the lower triangular matrix

$$B = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sigma_2\sqrt{1 - \rho^2} \end{pmatrix}, \tag{1.13}$$

which clearly has an inverse as soon as $\rho \ne \pm 1$. Moreover, one verifies that $C = BB'$, when we write $C$ as in (1.5). Then we get

$$\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \mu + B\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}, \tag{1.14}$$

where, of course,

$$\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} \in N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right).$$
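The whitening construction above can be sketched with numpy's Cholesky factorization, which produces exactly a lower triangular $A$ with $C = AA'$. The covariance matrix and sample size below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of whitening: with C = A A' (A lower triangular, via Cholesky),
# Z = A^{-1}(X - mu_X) has (approximately) identity sample covariance.
rng = np.random.default_rng(3)
mu_X = np.array([1.0, 2.0, 3.0])
C = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])      # an arbitrary positive definite matrix

A = np.linalg.cholesky(C)            # lower triangular, C = A A'
factor_ok = np.allclose(A @ A.T, C)

X = rng.multivariate_normal(mu_X, C, size=200_000)
Z = np.linalg.solve(A, (X - mu_X).T).T   # Z = A^{-1}(X - mu_X)
cov_Z = np.cov(Z, rowvar=False)
```

Because $A$ is lower triangular, the solve above amounts to forward substitution, so $Z_i$ indeed depends only on $X_1, \ldots, X_i$: the causal, "innovations" structure described in the text.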
1.2 Partitioned Covariance Matrices

Assume that $X$, $n \times 1$, is partitioned as $X = (X_1', X_2')'$, where $X_1$ is $p \times 1$ and $X_2$ is $q \times 1$, $n = p + q$. Let the covariance matrix $C$ be partitioned in the sense that

$$C = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}, \tag{1.15}$$

where $\Sigma_{11}$ is $p \times p$, $\Sigma_{22}$ is $q \times q$, etc. The mean is partitioned correspondingly as

$$\mu := \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}. \tag{1.16}$$

Let $X \in N_n(\mu, C)$, where $N_n$ refers to a normal distribution in $n$ variables, and $C$ and $\mu$ are partitioned as in (1.15)-(1.16). Then the marginal distribution of $X_2$ is $X_2 \in N_q(\mu_2, \Sigma_{22})$.

Let $X \in N_n(\mu, C)$, where $C$ and $\mu$ are partitioned as in (1.15)-(1.16). Assume that the inverse $\Sigma_{22}^{-1}$ exists. Then the conditional distribution of $X_1$ given $X_2 = x_2$ is normal, or,

$$X_1 \mid X_2 = x_2 \in N_p(\mu_{1|2}, \Sigma_{1|2}), \tag{1.17}$$

where

$$\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2) \tag{1.18}$$

and

$$\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}.$$

By virtue of (1.17) and (1.18), the best estimator in the mean square sense and the best linear estimator in the mean square sense are one and the same random variable.

1.3 Gaussian Time Series

$\{X_t \mid t \in T\}$ is a Gaussian time series if all joint distributions are multivariate Gaussian. In other words, for any $t_1, \ldots, t_n$ and integer $n$, the vector

$$(X_{t_1}, X_{t_2}, \ldots, X_{t_n})' \in N(\mu_t, \Sigma_t).$$

Here

$$\mu_t = (E[X_{t_1}], E[X_{t_2}], \ldots, E[X_{t_n}])'$$

is the mean vector with components obtained from the mean function of the process $\{X_t \mid t \in T\}$. The matrix $\Sigma_t = \{\gamma_X(t_i, t_j)\}_{i,j=1}^{n,n}$ has as its entries the values of the ACVF of $\{X_t \mid t \in T\}$.
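The conditioning formulas (1.17)-(1.18) translate directly into a few lines of linear algebra. A minimal sketch, assuming numpy; the partitioned blocks below are arbitrary illustrative choices with $p = 1$ and $q = 2$.

```python
import numpy as np

# Sketch of (1.17)-(1.18): conditional mean and covariance of X1 given
# X2 = x2 for a partitioned Gaussian. All matrices are arbitrary.
mu1 = np.array([0.0])
mu2 = np.array([1.0, -1.0])
S11 = np.array([[2.0]])
S12 = np.array([[0.5, 0.3]])
S22 = np.array([[1.0, 0.2],
                [0.2, 1.5]])
x2 = np.array([2.0, 0.0])

# Solving with S22 instead of forming its inverse is the standard
# numerically preferable route.
mu_cond = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
S_cond = S11 - S12 @ np.linalg.solve(S22, S12.T)
```

Note that $\Sigma_{1|2}$ does not depend on the observed value $x_2$: conditioning on a jointly Gaussian block shifts the mean but fixes the covariance once and for all, and the conditional variance never exceeds the unconditional one.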
1.4 Appendix: Symmetric Matrices & Orthogonal Diagonalization & Gaussian Vectors

We quote some results from [1] or any textbook in linear algebra. An $n \times n$ matrix $A$ is orthogonally diagonalizable if there is an orthogonal matrix $P$ (i.e., $P'P = PP' = I$) such that

$$P'AP = \Lambda,$$

where $\Lambda$ is a diagonal matrix. Then we have

Theorem 1.4.1. If $A$ is an $n \times n$ matrix, then the following are equivalent:

(i) $A$ is orthogonally diagonalizable.

(ii) $A$ has an orthonormal set of $n$ eigenvectors.

(iii) $A$ is symmetric.

Since covariance matrices are symmetric, we have by the theorem above that all covariance matrices are orthogonally diagonalizable.

Theorem 1.4.2. If $A$ is a symmetric matrix, then

(i) the eigenvalues of $A$ are all real numbers;

(ii) eigenvectors from different eigenspaces are orthogonal.

That is, all eigenvalues of a covariance matrix are real. Hence we have for any covariance matrix the spectral decomposition

$$C = \sum_{i=1}^n \lambda_i e_i e_i', \tag{1.19}$$

where $Ce_i = \lambda_i e_i$. Since $C$ is nonnegative definite and its eigenvectors are orthonormal,

$$0 \le e_i' C e_i = \lambda_i e_i' e_i = \lambda_i,$$

and thus the eigenvalues of a covariance matrix are nonnegative.

Let now $P$ be an orthogonal matrix such that $P' C_X P = \Lambda$, and $X \in N(0, C_X)$, i.e., $C_X$ is a covariance matrix and $\Lambda$ is diagonal with the eigenvalues of $C_X$ on the main diagonal. Then if $Y = P'X$, we have by Theorem 1.1.4 that $Y \in N(0, \Lambda)$.
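The spectral decomposition (1.19) can be sketched with numpy's `eigh`, which returns the real eigenvalues and orthonormal eigenvectors of a symmetric matrix. The $2 \times 2$ covariance matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Sketch of (1.19): C = sum_i lambda_i e_i e_i' with Ce_i = lambda_i e_i.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])           # an arbitrary covariance matrix
lam, P = np.linalg.eigh(C)           # columns of P are the e_i

eig_nonneg = np.all(lam >= 0)                      # nonnegative eigenvalues
P_orthogonal = np.allclose(P.T @ P, np.eye(2))     # P'P = I
spectral = sum(l * np.outer(e, e) for l, e in zip(lam, P.T))
decomposition_ok = np.allclose(spectral, C)        # (1.19)
diagonalized_ok = np.allclose(P.T @ C @ P, np.diag(lam))  # P'CP = Lambda
```

The last check is exactly the diagonalization used in the text: with $Y = P'X$, the covariance of $Y$ is $P' C_X P = \Lambda$, so the components of $Y$ are uncorrelated, hence, being Gaussian, independent.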
In other words, $Y$ is a Gaussian vector and has by Theorem 1.1.5 independent components. This method of producing independent Gaussians has several important applications, one of these being principal component analysis. In addition, the operation is invertible, as

$$X = PY$$

recreates $X \in N(0, C_X)$ from $Y$.

1.5 Appendix: Proof of (1.12)

Let $X = (X_1, X_2)' \in N(\mu_X, C)$, with

$$\mu_X = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}$$

and $C$ as in (1.5) with $\rho^2 \ne 1$. The inverse of $C$ in (1.5) is

$$C^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1 - \rho^2)} \begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}.$$

Then we get by straightforward evaluation in (1.11)

$$f_X(x) = \frac{1}{2\pi\sqrt{\det C}}\, e^{-\frac{1}{2}(x - \mu_X)'C^{-1}(x - \mu_X)} = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1 - \rho^2}}\, e^{-\frac{1}{2}Q(x_1, x_2)}, \tag{1.20}$$

where

$$Q(x_1, x_2) = \frac{1}{1 - \rho^2}\left[\left(\frac{x_1 - \mu_1}{\sigma_1}\right)^2 - 2\rho\,\frac{(x_1 - \mu_1)(x_2 - \mu_2)}{\sigma_1\sigma_2} + \left(\frac{x_2 - \mu_2}{\sigma_2}\right)^2\right].$$

Now we claim that

$$f_{X_2 \mid X_1 = x_1}(x_2) = \frac{1}{\tilde{\sigma}_2\sqrt{2\pi}}\, e^{-\frac{1}{2\tilde{\sigma}_2^2}\left(x_2 - \tilde{\mu}_2(x_1)\right)^2},$$

a density of a Gaussian random variable $X_2 \mid X_1 = x_1$ with the conditional expectation $\tilde{\mu}_2(x_1)$ and the conditional variance $\tilde{\sigma}_2^2$ given by

$$\tilde{\mu}_2(x_1) = \mu_2 + \rho\,\frac{\sigma_2}{\sigma_1}(x_1 - \mu_1), \qquad \tilde{\sigma}_2^2 = \sigma_2^2(1 - \rho^2).$$

To prove these assertions about $f_{X_2 \mid X_1 = x_1}(x_2)$ we set

$$f_{X_1}(x_1) = \frac{1}{\sigma_1\sqrt{2\pi}}\, e^{-\frac{1}{2\sigma_1^2}(x_1 - \mu_1)^2}, \tag{1.21}$$
and compute the ratio $f_{X_1, X_2}(x_1, x_2)/f_{X_1}(x_1)$. We get from the above, by (1.20) and (1.21), that

$$\frac{f_{X_1, X_2}(x_1, x_2)}{f_{X_1}(x_1)} = \frac{\sigma_1\sqrt{2\pi}}{2\pi\sigma_1\sigma_2\sqrt{1 - \rho^2}}\, e^{-\frac{1}{2}Q(x_1, x_2) + \frac{1}{2\sigma_1^2}(x_1 - \mu_1)^2},$$

which we organize, for clarity, by introducing the auxiliary function $H(x_1, x_2)$ by

$$H(x_1, x_2) \stackrel{\text{def}}{=} -\frac{1}{2}Q(x_1, x_2) + \frac{1}{2\sigma_1^2}(x_1 - \mu_1)^2.$$

Here we have

$$H(x_1, x_2) = -\frac{1}{2(1 - \rho^2)}\left[\left(\frac{x_1 - \mu_1}{\sigma_1}\right)^2 - 2\rho\,\frac{(x_1 - \mu_1)(x_2 - \mu_2)}{\sigma_1\sigma_2} + \left(\frac{x_2 - \mu_2}{\sigma_2}\right)^2 - (1 - \rho^2)\left(\frac{x_1 - \mu_1}{\sigma_1}\right)^2\right]$$

$$= -\frac{1}{2(1 - \rho^2)}\left[\rho^2\left(\frac{x_1 - \mu_1}{\sigma_1}\right)^2 - 2\rho\,\frac{(x_1 - \mu_1)(x_2 - \mu_2)}{\sigma_1\sigma_2} + \left(\frac{x_2 - \mu_2}{\sigma_2}\right)^2\right].$$

Evidently we have now shown

$$H(x_1, x_2) = -\frac{\left(x_2 - \mu_2 - \rho\frac{\sigma_2}{\sigma_1}(x_1 - \mu_1)\right)^2}{2\sigma_2^2(1 - \rho^2)}.$$

Hence we have found that

$$\frac{f_{X_1, X_2}(x_1, x_2)}{f_{X_1}(x_1)} = \frac{1}{\sigma_2\sqrt{1 - \rho^2}\,\sqrt{2\pi}}\, e^{-\frac{\left(x_2 - \mu_2 - \rho\frac{\sigma_2}{\sigma_1}(x_1 - \mu_1)\right)^2}{2\sigma_2^2(1 - \rho^2)}}.$$

This establishes the properties of bivariate normal random variables claimed in (1.12) above.
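The completion-of-squares step above can be sanity-checked numerically for any particular inputs, assuming numpy; all the parameter values below are arbitrary illustrative choices.

```python
import numpy as np

# Numeric check of the completion of squares: for arbitrary inputs,
# H = -Q/2 + (x1 - mu1)^2 / (2 sigma1^2) should equal
# -(x2 - mu2 - rho*(s2/s1)*(x1 - mu1))^2 / (2 s2^2 (1 - rho^2)).
mu1, mu2, s1, s2, rho = 0.3, -0.7, 1.2, 2.5, 0.4
x1, x2 = 0.9, 1.1

z1 = (x1 - mu1) / s1
z2 = (x2 - mu2) / s2
Q = (z1**2 - 2 * rho * z1 * z2 + z2**2) / (1 - rho**2)
H = -Q / 2 + (x1 - mu1) ** 2 / (2 * s1**2)
rhs = -(x2 - mu2 - rho * (s2 / s1) * (x1 - mu1)) ** 2 / (
    2 * s2**2 * (1 - rho**2)
)
```

Agreement for arbitrary inputs is of course no proof, but it is a cheap way to catch a sign or factor slip when reproducing the algebra.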
Bibliography

[1] H. Anton & C. Rorres: Elementary Linear Algebra with Supplemental Applications. John Wiley & Sons Asia Pte Ltd, 2011.
More informationMath 313 Chapter 1 Review
Math 313 Chapter 1 Review Howard Anton, 9th Edition May 2010 Do NOT write on me! Contents 1 1.1 Introduction to Systems of Linear Equations 2 2 1.2 Gaussian Elimination 3 3 1.3 Matrices and Matrix Operations
More informationEE4601 Communication Systems
EE4601 Communication Systems Week 2 Review of Probability, Important Distributions 0 c 2011, Georgia Institute of Technology (lect2 1) Conditional Probability Consider a sample space that consists of two
More informationMATH 369 Linear Algebra
Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationIntroduction to Probability and Stocastic Processes - Part I
Introduction to Probability and Stocastic Processes - Part I Lecture 2 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark
More informationHomework For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable.
Math 5327 Fall 2018 Homework 7 1. For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable. 3 1 0 (a) A = 1 2 0 1 1 0 x 3 1 0 Solution: 1 x 2 0
More informationGaussian vectors and central limit theorem
Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables
More information1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More informationLecture 10 - Eigenvalues problem
Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems
More informationOnline Exercises for Linear Algebra XM511
This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More information1 Principal component analysis and dimensional reduction
Linear Algebra Working Group :: Day 3 Note: All vector spaces will be finite-dimensional vector spaces over the field R. 1 Principal component analysis and dimensional reduction Definition 1.1. Given an
More informationLecture 5: Moment Generating Functions
Lecture 5: Moment Generating Functions IB Paper 7: Probability and Statistics Carl Edward Rasmussen Department of Engineering, University of Cambridge February 28th, 2018 Rasmussen (CUED) Lecture 5: Moment
More informationBivariate distributions
Bivariate distributions 3 th October 017 lecture based on Hogg Tanis Zimmerman: Probability and Statistical Inference (9th ed.) Bivariate Distributions of the Discrete Type The Correlation Coefficient
More information