
Marginal and Conditional Distributions

By W. A. Harris, Jr.1) and T. N. Helvig2)

This note derives marginal and conditional means and covariances when the joint distribution may be singular and discusses the resulting invariants.

1. Introduction. For convenience our discussion will be in terms of the linear operator expected value, written $E(\cdot)$, and multivariate normal distributions. However, our proofs are algebraic and apply to any multivariate distribution for which zero correlation and independence are equivalent. A normal distribution is characterized by its mean $\mu$ and its covariance matrix $\Sigma$, defined by

$$\mu = E(X), \qquad \Sigma = E\big([X-\mu][X-\mu]'\big),$$

and we write: $X$ is distributed according to $N(\mu, \Sigma)$. If $X$ is distributed according to $N(\mu, \Sigma)$ and $D$ is a real matrix, then $Z = DX$ is distributed according to $N(D\mu, D\Sigma D')$. Our first result is

Theorem 1. Let $X$ be distributed according to $N(\mu, \Sigma)$ (possibly singular) and let $X$, $\mu$, $\Sigma$ have the compatible partitionings

$$X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}, \qquad \mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.$$

Then the equation $\Sigma_{11}M + \Sigma_{12} = 0$ has at least one solution and the

Received November 17, 1965.
1) School of Mathematics, University of Minnesota, U. S. A., and Research Institute for Mathematical Sciences, Kyoto University, Japan.
2) Ordnance Division, Honeywell, Inc., U. S. A.

random vectors $X_1$ and $X_2 + M'X_1$ are independent. Hence

(i) $X_1$ has a marginal distribution which is distributed according to $N(\mu_1, \Sigma_{11})$, and

(ii) the conditional distribution of $X_2$ given that $X_1 = \xi$ is normal with mean

$$E(X_2 \mid X_1 = \xi) = \mu_2 - M'(\xi - \mu_1)$$

and covariance matrix $\Sigma_{22} + M'\Sigma_{12}$.

G. Marsaglia [2] has shown (with $A^{+}$ denoting the pseudoinverse of $A$ in the sense of Penrose [3]) that $\Sigma_{11}(-\Sigma_{11}^{+}\Sigma_{12}) + \Sigma_{12} = 0$ (i.e. that one choice for $M$ is $-\Sigma_{11}^{+}\Sigma_{12}$), hence that $X_1$ and $X_2 - \Sigma_{21}\Sigma_{11}^{+}X_1$ are independent, and concluded from this (corresponding to (ii) above) that

$$E(X_2 \mid X_1 = \xi) = \mu_2 + \Sigma_{21}\Sigma_{11}^{+}(\xi - \mu_1), \qquad \operatorname{cov}(X_2 \mid X_1 = \xi) = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{+}\Sigma_{12}.$$

If $\Sigma_{11}$ is nonsingular, then $M = -\Sigma_{11}^{-1}\Sigma_{12}$ is unique, and these are the familiar formulas. However, if $\Sigma_{11}$ is singular and the equation $\Sigma_{11}M + \Sigma_{12} = 0$ has one solution, it has many solutions, and it appears worthwhile to determine the invariants of the conditional means and covariances resulting from different choices of the solution $M$. To this end we prove the following theorems.

Theorem 2. Let the hypotheses of Theorem 1 hold. If $M$ is a solution of the equation $\Sigma_{11}M + \Sigma_{12} = 0$, then $\Sigma_{22} + M'\Sigma_{12}$ is an invariant and hence

$$\Sigma_{22} + M'\Sigma_{12} = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{+}\Sigma_{12}.$$

Theorem 3. Let the hypotheses of Theorem 1 hold. If $\xi$ is compatible3) with $\Sigma_{11}$ and $M$ is a solution of $\Sigma_{11}M + \Sigma_{12} = 0$, then $M'(\xi - \mu_1)$ is an invariant and hence

$$\mu_2 - M'(\xi - \mu_1) = \mu_2 + \Sigma_{21}\Sigma_{11}^{+}(\xi - \mu_1).$$

3) The definition of compatible is a natural one and occurs in Section 5.
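These formulas, and the invariance asserted in Theorem 2, can be illustrated numerically. The sketch below is not part of the original paper: it uses `numpy.linalg.pinv` as the Penrose pseudoinverse, an arbitrary rank-2 factor to build a singular covariance matrix, and the general solution of $\Sigma_{11}M + \Sigma_{12} = 0$ in pseudoinverse form; dimensions and random seed are assumptions made for illustration.

```python
# Numeric illustration (not from the paper): for a singular covariance,
# every solution M of Sigma11 M + Sigma12 = 0 yields the same conditional
# covariance Sigma22 + M' Sigma12 (Theorem 2), equal to Marsaglia's
# expression Sigma22 - Sigma21 Sigma11^+ Sigma12.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 5))              # rank-2 factor
Sigma = B.T @ B                              # 5x5 psd, rank 2 (singular)
S11, S12 = Sigma[:3, :3], Sigma[:3, 3:]      # Sigma11 is 3x3 of rank 2
S21, S22 = Sigma[3:, :3], Sigma[3:, 3:]
S11p = np.linalg.pinv(S11)                   # Penrose pseudoinverse

# Marsaglia's choice M = -Sigma11^+ Sigma12 solves the equation
assert np.allclose(S11 @ (-S11p @ S12) + S12, 0)

invariant = S22 - S21 @ S11p @ S12           # familiar conditional covariance
for _ in range(5):                           # several other solutions M
    Y = rng.standard_normal((3, 2))
    M = -S11p @ S12 + (np.eye(3) - S11p @ S11) @ Y   # general solution
    assert np.allclose(S11 @ M + S12, 0)             # M solves the equation
    assert np.allclose(S22 + M.T @ S12, invariant)   # Theorem 2's invariant
```

The factorization $\Sigma = B'B$ guarantees positive semi-definiteness, so the equation is solvable by Theorem 1 even though $\Sigma_{11}$ is singular.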

2. Preliminaries. Solutions of linear equations of the form $AX = C$ can be completely characterized in terms of the pseudoinverse $A^{+}$ of the matrix $A$, which is defined as the unique solution of the equations

$$AA^{+}A = A, \qquad A^{+}AA^{+} = A^{+}, \qquad AA^{+} = (AA^{+})^{*}, \qquad A^{+}A = (A^{+}A)^{*}.$$

R. Penrose [3] has shown that $AX = C$ has a solution if and only if $AA^{+}C = C$. Further, if a solution exists, the general solution is given by

(2.1) $X = A^{+}C + (I - A^{+}A)Y$,

where $Y$ is arbitrary. If $A$ is real, then $A^{+}$ is real, and the following properties are easily verified from the definition of $A^{+}$:

(2.2) $(A'A)^{+} = A^{+}(A^{+})'$, $\quad A'AA^{+} = A'$, $\quad A^{+}(A^{+})'A' = A^{+}$,

and $A = A'$ implies $A^{+} = (A^{+})'$ and $A^{+}A = AA^{+}$. Further, if $T$ is a real $r \times m$ matrix of rank $r$, then $TT'$ is nonsingular, and it is easily verified that if $A = T'T$, then $A^{+} = T'(TT')^{-2}T$.

3. Proof of Theorem 1. Our proof of this theorem is not substantially different from that of Marsaglia [2] but is given for completeness. In fact, it is the classical proof [1] for nonsingular distributions, except that we must show that the equation $\Sigma_{11}M + \Sigma_{12} = 0$ has a solution even if $\Sigma_{11}$ is singular. Now $\Sigma$ is a positive semi-definite matrix, and hence can be written in the form, with real $S$ and $U$,

(3.1) $\Sigma = \begin{pmatrix} S'S & S'U \\ U'S & U'U \end{pmatrix}$.

Thus the solvability of $\Sigma_{11}M + \Sigma_{12} = 0$ becomes $S'S(S'S)^{+}S'U = S'U$, which will be satisfied for all $U$ only if $S'S(S'S)^{+}S' = S'$. But, using

(2.2) we have

$$S'S(S'S)^{+}S' = S'SS^{+}(S^{+})'S' = S'(S^{+})'S' = S'.$$

Hence $\Sigma_{11}M + \Sigma_{12} = 0$ has a solution.

Let $M$ be a solution of $\Sigma_{11}M + \Sigma_{12} = 0$ and consider the random vector

$$Y = \begin{pmatrix} I & 0 \\ M' & I \end{pmatrix} X = DX;$$

then $Y$ is distributed $N(D\mu, D\Sigma D')$. A simple computation yields

$$D\Sigma D' = \begin{pmatrix} \Sigma_{11} & 0 \\ 0 & \Sigma_{22} + M'\Sigma_{12} \end{pmatrix}.$$

Hence $Y_1$ and $Y_2$ are independent and Theorem 1 is proved.

4. Proof of Theorem 2. Consider the equation $\Sigma_{11}M = -\Sigma_{12}$. By Theorem 1 this equation has a solution and hence by (2.1) the general solution is

(4.1) $M = -\Sigma_{11}^{+}\Sigma_{12} + (I - \Sigma_{11}^{+}\Sigma_{11})Y$, or $M' = -\Sigma_{21}\Sigma_{11}^{+} + Y'(I - \Sigma_{11}\Sigma_{11}^{+})$.

Thus

$$M'\Sigma_{12} = -\Sigma_{21}\Sigma_{11}^{+}\Sigma_{12} + Y'(I - \Sigma_{11}\Sigma_{11}^{+})\Sigma_{12} = -\Sigma_{21}\Sigma_{11}^{+}\Sigma_{12},$$

since $Y'(I - \Sigma_{11}\Sigma_{11}^{+})\Sigma_{12} = Y'(\Sigma_{12} - \Sigma_{11}\Sigma_{11}^{+}\Sigma_{12}) = 0$ by the solvability condition. Hence $M'\Sigma_{12}$ is an invariant and the theorem is proved.

5. Proof of Theorem 3. If $\Sigma_{11}$ is singular, then $X_1$ is a distribution in a $p$-dimensional space which is concentrated in a lower dimensional subspace, and if $\xi$ is compatible with this lower dimensional subspace we shall prove that $M'(\xi - \mu_1)$ is an invariant. Assume, without loss of generality, that $\Sigma_{11}$ and $\mu_1$ have the partitionings

$$\Sigma_{11} = \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix}, \qquad \mu_1 = \begin{pmatrix} m_1 \\ m_2 \end{pmatrix},$$

where rank $\Sigma_{11}$ = rank $\sigma_{11}$ (a simple reordering of the components of $X_1$ will achieve this). Make the transformation $Y_1 = CX_1$, where

$$C = \begin{pmatrix} I & 0 \\ -\sigma_{21}\sigma_{11}^{-1} & I \end{pmatrix}$$

and the partitioning is induced by the partitioning of $\Sigma_{11}$ and $\mu_1$. Hence $Y_1$ is distributed according to $N(C\mu_1, C\Sigma_{11}C')$. But

$$C\Sigma_{11}C' = \begin{pmatrix} \sigma_{11} & 0 \\ 0 & 0 \end{pmatrix};$$

hence, with probability one, $Y_1^{(2)} = E(Y_1^{(2)})$, or

$$X_1^{(2)} - \sigma_{21}\sigma_{11}^{-1}X_1^{(1)} = m_2 - \sigma_{21}\sigma_{11}^{-1}m_1.$$

Definition. The vector $\xi$ is said to be compatible with $\Sigma_{11}$ if it has the form $\xi = (\xi_1', \xi_2')'$ with $\xi_2 = m_2 + \sigma_{21}\sigma_{11}^{-1}(\xi_1 - m_1)$.

Thus the proof of Theorem 3 reduces to showing that

(5.1) $(I - \Sigma_{11}\Sigma_{11}^{+})(\xi - \mu_1) = 0.$

Since $\Sigma_{11}$ is a positive semi-definite matrix, it may be written in the form $\Sigma_{11} = T'T$, where $T = (T_1, T_2)$ is a rectangular matrix, partitioned conformally with $\Sigma_{11}$, with the same number of rows as the rank of $\Sigma_{11}$. Thus by Section 2 we have

$$\Sigma_{11}^{+} = T'(TT')^{-2}T \quad \text{and} \quad \sigma_{21}\sigma_{11}^{-1} = T_2'T_1(T_1'T_1)^{-1}.$$

Using this formulation, a simple straightforward calculation shows that (5.1) is valid and the theorem is proved.
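The compatibility condition and the invariance in Theorem 3 can likewise be checked numerically. The following sketch is not from the paper: compatible points $\xi$ are generated as $\mu_1$ plus vectors in the column space of $\Sigma_{11}$ (equivalently, points in the support of $X_1$), `numpy.linalg.pinv` plays the role of $\Sigma_{11}^{+}$, and all dimensions and seeds are arbitrary choices.

```python
# Sketch (not from the paper): for a compatible xi, condition (5.1),
# (I - Sigma11 Sigma11^+)(xi - mu1) = 0, holds, and the conditional mean
# mu2 - M'(xi - mu1) does not depend on which solution M is used.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((2, 5))
Sigma = B.T @ B                              # singular 5x5 covariance
mu = rng.standard_normal(5)
mu1, mu2 = mu[:3], mu[3:]
S11, S12 = Sigma[:3, :3], Sigma[:3, 3:]
S11p = np.linalg.pinv(S11)

xi = mu1 + S11 @ rng.standard_normal(3)      # a compatible point
assert np.allclose((np.eye(3) - S11 @ S11p) @ (xi - mu1), 0)   # (5.1)

ref_mean = mu2 + S12.T @ S11p @ (xi - mu1)   # Marsaglia's conditional mean
for _ in range(5):
    Y = rng.standard_normal((3, 2))
    M = -S11p @ S12 + (np.eye(3) - S11p @ S11) @ Y   # any solution M
    assert np.allclose(mu2 - M.T @ (xi - mu1), ref_mean)
```

For an incompatible $\xi$ (one with $\xi - \mu_1$ outside the column space of $\Sigma_{11}$) the term $Y'(I - \Sigma_{11}\Sigma_{11}^{+})(\xi - \mu_1)$ does not vanish, and the computed conditional mean would vary with the choice of $M$.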

REFERENCES

[1] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley, New York (1958).
[2] G. Marsaglia, Conditional means and covariances of normal variables with singular covariance matrix, J. Amer. Statist. Assoc. (1964), 1203-1204.
[3] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51 (1955), 406-413.