CANONICAL CORRELATION ANALYSIS AND REDUCED RANK REGRESSION IN AUTOREGRESSIVE MODELS. BY T. W. ANDERSON Stanford University

Size: px
Start display at page:

Download "CANONICAL CORRELATION ANALYSIS AND REDUCED RANK REGRESSION IN AUTOREGRESSIVE MODELS. BY T. W. ANDERSON Stanford University"

Transcription

1 he Annals of Statistics 2002, Vol. 30, No. 4, CANONICAL CORRELAION ANALYSIS AND REDUCED RANK REGRESSION IN AUOREGRESSIVE MODELS BY. W. ANDERSON Stanford University When the rank of the autoregression matrix is unrestricted, the maximum likelihood estimator under normality is the least squares estimator. When the rank is restricted, the maximum likelihood estimator is composed of the eigenvectors of the effect covariance matrix in the metric of the error covariance matrix corresponding to the largest eigenvalues [Anderson,. W. (95). Ann. Math. Statist ]. he asymptotic distribution of these two covariance matrices under normality is obtained and is used to derive the asymptotic distributions of the eigenvectors and eigenvalues under normality. hese asymptotic distributions differ from the asymptotic distributions when the regressors are independent variables. he asymptotic distribution of the reduced rank regression is the asymptotic distribution of the least squares estimator with some restrictions; hence the covariance of the reduced rank regression is smaller than that of the least squares estimator. his result does not depend on normality.. Introduction. he objective of canonical correlation analysis is to discover and use linear combinations of the variables in one set that are highly correlated with linear combinations of variables in another set. he linear combinations define canonical variables, and the correlations between corresponding canonical variables are the canonical correlations. See Anderson (984), Chapter 2, for an exposition and Anderson (999a) for asymptotic theory in case of regression on independent variables. In time series analysis the variables in one set are measurements made in the present, and the variables in the other set are measurements made in the past. he linear combinations of the variables of the past may be employed for predicting variables of the present and future. A framework within which to develop this analysis may be provided by autoregressive moving average models (ARMA). Box and iao (977) and iao and say (989) have explored this area. See Reinsel (997) and Reinsel and Velu (998) for more details. In this paper the asymptotic distribution of the squares of the canonical correlations between the present and past is derived for (stationary) autoregressive processes. hat distribution is contrasted with the asymptotic distribution of the canonical correlations for two sets of variables with a joint normal distribution, a common regression model. he comparison mirrors the comparison between Received August 999; revised September 200. AMS 2000 subject classifications. 62H0, 62M0. Key words and phrases. Canonical correlations and vectors, eigenvalues and eigenvectors, asymptotic distributions. 34

2 CANONICAL CORRELAION ANALYSIS 35 scalar versions of the models; for a scalar autoregression with process correlation ρ, the asymptotic variance of the sample correlation coefficient is ρ 2, but the asymptotic variance of the sample (Pearson) correlation in a bivariate normal distribution with correlation ρ is ( ρ 2 ) 2. Note the variance in the bivariate model is ρ 2 times the variance in the autoregressive model. Moreover, the sample canonical correlations in autoregression are asymptotically correlated while in regression they are asymptotically uncorrelated when the population canonical correlations are distinct. hus there are differences in inference in the two types of models; these differences affect model-selection procedures. he asymptotic distribution of the coefficients of the canonical variables is obtained for the first-order autoregressive process. he result is of interest because it shows that the asymptotic distribution of the coefficients of the canonical variables as well as the asymptotic distribution of the canonical correlations is different from those distributions for the classical regression model. Velu, Reinsel and Wichern (986) have also derived the asymptotic distribution of these canonical correlations, but their results differ from the results in this paper. More detail is given in Section 7. he reduced rank regression estimator of a regression matrix is composed of a small number of canonical variables [Anderson (95)]. It is the maximum likelihood estimator in the autoregression model as well as the regression model when the unobserved random disturbances (or errors) are normally distributed [Anderson (95), page 345]. In Section 5 its asymptotic distribution is found under the assumption that the disturbance in the autoregressive model is independent of the lagged variables. his asymptotic distribution is the same as the asymptotic distribution of the reduced rank regressor estimator of the coefficient matrix in the classical regression model as obtained by Ryan, Hubert, Carter, Sprague and Parrott (992), Schmidli (995), Stoica and Viberg (996) and Reinsel and Velu (998), under the assumption that the disturbances are normally distributed and by Anderson (999b) under general conditions. Here it is also shown that the asymptotic distribution is valid regardless of the assumption on the disturbances (as long as their variances exist). he algebra here differs from the algebra in Anderson (999b) because the transformation to canonical form differs in the two models. 2. he model and canonical analysis. 2.. he model. In this paper the model is the autoregressive process (AR) (2.) m Y t = B i Y t i + Z t, t =...,, 0,,..., i=

3 36. W. ANDERSON where Y t and Z t are p-component vectors with Z t unobserved and independent of Y t, Y t 2,...and EZ t = 0, EZ t Z t =. We assume that the roots of m (2.2) λm I λ m i B i = 0 i= are less than in absolute value. hen (2.) defines the stationary process {Y t } with EY t = 0 and autocovariances (2.3) Ɣ h = EY t Y t h = EY t+hy t = Ɣ h [e.g., Anderson (97), Chapter 7]. If Y t i is replaced by Y t i µ, i = 0,,...,m, in (2.), then EY t = µ.ifm =, Y t = s=0 B s Z t s and Ɣ h = B h Ɣ Population. In the AR(m) model the present is represented by Y t and the (relevant) past by Ỹt = (Y t,...,y t m ).Let (2.4) EY tỹ t =[Ɣ,...,Ɣ m ]=Ɣ, EỸt Ỹ t = Ɣ. hen the population canonical correlations between Y t and Ỹt are the roots of ρɣ 0 Ɣ (2.5) Ɣ ρ Ɣ = 0, and the canonical vectors are the corresponding solutions of [ ][ ] ρɣ0 Ɣ α (2.6) Ɣ = 0, α Ɣ 0 α =, ω Ɣω =. ρ Ɣ ω he largest root of (2.5), say ρ, is the first canonical correlation and the corresponding solution to (2.6) defines the first pair of canonical vectors, which are the coefficients of the first pair of canonical variables. he first canonical correlation is the maximum correlation between linear combinations of Y t and Ỹt. From (2.6) we obtain (2.7) Ɣ Ɣ Ɣ α = ρ 2 Ɣ 0 α. here are p nonnegative solutions to Ɣ Ɣ Ɣ ρ 2 (2.8) Ɣ 0 = 0 and p corresponding linearly independent vectors α satisfying (2.6) and α ii > 0. If the root ρ 2 of (2.8) is unique, the solution α of (2.6) (and α ii > 0) is uniquely determined. [Since the matrix A = (α,...,α p ) is nonsingular, the components of Y t can be numbered in such a way that the ith component of α i is nonzero.] he number of positive solutions to (2.8) is equal to the rank of Ɣ and indicates the degree of dependence between Y t and Ỹt.

4 CANONICAL CORRELAION ANALYSIS Estimation. Given a series of observations Y m+,...,y 0, Y,...,Y we form sample covariance matrices S YY = Y t Y t, S = (2.9) Ỹ t Ỹ t, t= t= S Y = (2.0) Y tỹ t = S Y, B = S Y S. t= he estimator B is the least squares estimator of B and is maximum likelihood if the Z t s are normally distributed conditional on Y m+,...,y 0 given. he sample canonical correlations (r >r 2 > >r p ) and vectors are defined by rs YY S Y (2.) S Y r S = 0, [ ][ ] rs YY S Y a (2.2) = 0, a S YY a =, w S w =, S Y r S w and a ii > 0. hese are the maximum likelihood estimators if the Z t s are normally distributed. From (2.2) we obtain S Y S S Y a = r 2 S YY a, which is the sample analog of (2.7). his equation can be rearranged to give (2.3) S Y S S Y a = t(s YY S Y S S Y )a, where t = r 2 /( r 2 ). Note that S YY S Y S S Y = S ZZ S Z S S Z and SZ S p S Z 0 as. Hence, SYY S Y S S Y is asymptotically equivalent to S ZZ. In what follows we shall not distinguish between S ZZ = t= Z t Z t and S YY S Y S S Y = t= ẐtẐ t,whereẑt = Y t BỸt is the residual. When m =, Ỹt = Y t, Ɣ h = Ɣ h, etc. Let Ɣ 0 = Ɣ. Wedefine S = S = Y t Y t, S Z = (2.4) Y t Z t. t= t= Section 6 studies these problems for m>. 3. Asymptotic distribution of sample matrices. o find the asymptotic distribution of the canonical correlations r,...,r p and canonical variates a,...,a p and w,...,w p we need the asymptotic distribution of S YY, S Y and S or of S Z, S and S ZZ. If Z t has finite fourth moments, then (S YY Ɣ 0 ) has a limiting normal distribution [Anderson (97)]. Further, if Z t is normally distributed, the fourthorder moments of Y t and the second-order moments of S YY are quadratic functions

5 38. W. ANDERSON of Ɣ. In this section we find the asymptotic covariances of sample matrices when m =. he asymptotic covariances when m> can be obtained from the application of Ỹt. o express the covariances of the sample matrices we use the vec notation. For A = (a,...,a n ) we define veca = (a,...,a n ). he Kronecker product of two matrices A = (a ij ) and B is A B = (a ij B). A basic relation is vecabc = (C A) vecb, which implies vecxy = vec xy = (y x) vec = y x. Define the commutator matrix K as the (square) permutation matrix such that vec A = K vec A for every square matrix of the same order as K. Note that K(A B) = (B A)K. HEOREM. If the Z t s are independently normally distributed, the limiting distribution of vec S ZZ = vec(s ZZ ), vecs Z = vec S Z, and vec S = vec(s Ɣ) is normal with means 0, 0 and 0 and covariances (3.) E vec S ZZ (vec S ZZ ) = (I + K)( ), (3.2) E vec S Z (vec S Z ) = Ɣ, (3.3) E vec S Z (vec S ZZ ) = 0, (3.4) (3.5) E vec S (vec S ZZ ) (I + K)[I (B B)] ( ), E vec S (vec S Z ) (I + K)[I (B B)] ( BƔ), (3.6) E vec S (vec S ) (I + K)[I (B B)] [(Ɣ ) + ( Ɣ) ( )] [I (B B )] = (I + K) { [I (B B)] (Ɣ Ɣ) + (Ɣ Ɣ)[I (B B )] (Ɣ Ɣ) }. First we prove the following lemma. LEMMA. If {Z t } consists of uncorrelated random vectors with EZ t = 0 and EZ t Z t =, then ( ) (3.7) S BS B = BS Z + S Z B + S ZZ + O p as. (3.8) PROOF. he model Y t = BY t + Z t implies Y t Y t = BS B + BS Z + S Z B + S ZZ. t=

6 CANONICAL CORRELAION ANALYSIS 39 Since t= Y t Y t = S + Y Y Y 0Y 0, Lemma follows. PROOF OF HEOREM. he sample covariances of a stationary autoregressive process are asymptotically normally distributed; see Anderson (97), Chapter 5, for example. It remains to prove (3.) to 3.6). If X N(0, ), EX i X j X k X l = σ ij σ kl + σ ik σ jl + σ il σ jk ; in vec notation E vec XX (vec XX ) = E(X X)(X X ) = vec (vec ) + ( ) + K( ). IfX and Y are independent, E vec XX (vec YY ) = EXX EYY, E vec XY (vec XY ) = E(Y X)(Y X ) (3.9) = E(YY XX ) = EYY EXX, (3.0) E vec YX (vec XY ) = KE vec XY (vec XY ) = K(EYY EXX ). he asymptotic variance of vec S ZZ = vec(/ ) t= (Z t Z t ) is (3.). o show (3.2) we write (3.) E vec S Z (vec S Z ) = E t,s= (Z t Y t )(Z s Y s ) = E (Z t Z s Y t Y s ). t,s= Next, (3.3) follows from vec S Z = t= (Z t Y t ) and vec S ZZ = (/ ) s= [(Z s Z s ) vec ]. hen (3.4), (3.5) and (3.6) are consequences of (3.7), (3.), (3.2) and (3.3) and vec S =[I (B B)] { (I B) vecs Z + (B I) vecs Z + vec } S ZZ ( ) + O (3.2) =[I (B B)] { (I + K)(I B) vec S Z + vec } S ZZ ( ) + O. he second form of (3.6) follows from the substitution of = Ɣ BƔB in the first form. If B = 0,thenY t = Z t, Ɣ = and (3.6) reduces to (3.). he effect of B 0 is to tend to inflate the asymptotic covariance matrix of vec S. he characteristic roots of B and hence of B B (products of the roots of B) are less than in absolute value; hence I (B B) 0. If p =, B B = b 2 and I (B B) = b2. In the classical regression model Y t = BX t + Z t with EX t = 0, EZ t = 0, EX t X t = XX, EZ t Z t =, and EX t Z t = 0. heney t = 0 and EY t Y t = B XX B + = YY.IftheX t s and Z t s are independent, (3.3) Cov[vec S YY, vec S YY ]=(I + K)( YY YY ).

7 40. W. ANDERSON In the autoregressive model EY t Y t = Ɣ = YY, but (3.6) is very different from (3.3). In the scalar case (3.6) is 2[( + β 2 )/( β 2 )]σ 2 instead of 2σ 2,whereσ 2 = Ey 2 t ; the factor ( + β2 )/( β 2 ) inflates the variance of t= y 2 t /.helargerβ 2 is, the larger the inflation factor. Note that the covariances depend on Z t being normal. 4. Asymptotic distributions of canonical correlations and vectors. Let the solutions to (4.) form the matrices (4.2) BƔB φ = θ φ, φ φ = = (φ,...,φ p ), = diag(θ,...,θ p ) with θ θ 2 θ p 0andφ ii > 0, i =,...,p. Let the solutions to (4.3) form the matrices (4.4) BS B f = ts ZZ f, f S ZZ f = F = (f,...,f p ), = diag(t,...,t p ) with t >t 2 > >t p and f ii > 0, i =,...,p. We shall find the asymptotic distribution of F and when the Z t s are normally distributed. Since Ɣ = BƔB +, (4.) is equivalent to (4.5) Ɣφ = δ φ, φ φ =, where δ = ( ρ 2 ) = θ +. Similarly, (4.3) is asymptotically equivalent to (4.6) where d = ( r 2 ) = t +, since (4.7) S f = ds ZZ f, f S ZZ f =, (SYY S ) = ( Y t Y t t= = (Y Y Y 0Y 0 ) p 0. ) Y t Y t t= Let Y t = X t, Z t = W t and B( ) =.henx t satisfies (4.8) (4.9) (4.0) X t = X t + W t, EX t X t = Ɣ = + I = = diag(δ,...,δ p ), EW t W t = = I

8 and EX t W t = 0.Let CANONICAL CORRELAION ANALYSIS 4 (4.) = X t X t, t= (4.2) WW = W t W t. t= he asymptotic covariances of and WW are given in heorem with S and S ZZ replaced by and WW, by I, Ɣ by and B by. HEOREM 2. If the Z t s are independently normally distributed, the limiting distribution of = ( ) and WW = ( WW I) is normal with means 0 and 0 and covariances E vec (vec ) (I + K)[I ( )] (4.3) [( I) + (I ) (I I)][I ( )] = (I + K) { [I ( )] ( ) + ( )[I ( )] ( ) }, (4.4) (4.5) E vec WW (vec WW ) = (I + K)(I I), E vec (vec WW ) (I + K)[I ( )] (I I). Note that ( I), (I ) and (I I) are diagonal. Let h = f, H = F, D = diag(d,...,d p ).henh and D satisfy (4.6) H = WW HD, H WW H = I. p p Since and WW I, the probability limit of (4.6) is H = H, H H = I, which implies plim H is diagonal with plim h ii =±if the diagonal elements of are different. Since φ ii > 0 and plim f ii > 0, then plim h ii > 0. Hence H p I and D p. Let H = (H I), andd = (D ) = ( ), where = diag(t,...,t p ). From (4.6) we obtain (4.7) (4.8) H H + D = WW + o p(), H + H = WW + o p(). In components (4.7) and (4.8) are h ij (δ j δ i ) = tij t WW d ii = t ii tii WW δ i + o p (), 2h ii = t WW ii + o p (). ij δ j +o p (), i j,

9 42. W. ANDERSON Let (4.9) p E = (ε i ε i )(ε i ε i ), i= where ε i has in the ith position and 0 s elsewhere; E is a diagonal p 2 p 2 matrix with in the [(i )p + i, (i )p + i]th position, i =,...,p,and0 s elsewhere. hen vecd = E vec( WW ).NoteE2 = E; hence the Moore Penrose generalized inverse of E is E + = E. LetH = H d + H n,whereh d = diag(h,...,h pp ).henvech d = E vec H. o write (4.7) more suitably we note that vech = vec IH = ( I) vec H and vec H = vec H I = (I ) vech.define (4.20) N = ( I) (I ) = diag(δ I,δ 2 I,...,δ p I ) = diag(0,δ δ 2,...,δ δ p,δ 2 δ, 0,...,δ p δ p, 0). he Moore Penrose inverse of N is ( N + = diag 0, (4.2), 0, δ 2 δ δ δ 2,..., hen NN + = I p 2 E and NE = N + E = 0. We can write (4.7) as δ δ p, δ 2 δ 3,..., ), 0. δ p δ p (4.22) N vec H + vec D = vec( WW ) + o p() = vec ( I) vec WW + o p(). From (4.22) we obtain (4.23) (4.24) vec H n = N+[ vec ( I) WW] vec + op (), vec D = E [ vec ( I) vec WW] + op (). In vec notation, (4.8) is (I + K) vec H = vec WW + o p(), from which we obtain (4.25) E(I + K) vec H = 2E vech = 2vecH d = E vec WW + o p(). DEFINIION. (4.26) If (X, Y ) d (X, Y) with EX X < and EY Y <,then A Cov(X, Y ) = E(X EX)(Y EY).

10 CANONICAL CORRELAION ANALYSIS 43 he asymptotic covariance matrix of vec ( I) vec WW is A Cov [ vec ( I) vec WW, vec ( I) ] vec WW = (I + K)[I ( )] ( ) (4.27) + ( )[I ( )] (I + K) + ( 2 I) ( ) = (I + K) ( ) + ( ) (I + K) + N + ( I), where (4.28) =[I ( )]. he element of in the jth row of the ith block of rows and in the lth column of the kth block of columns is denoted as λ ij,kl.note = I. he asymptotic covariance matrix of vec WW is (4.4), and the asymptotic covariance matrix between vec ( I) vec WW and vec WW is A Cov [ vec ( I) vec WW, vec ] WW (4.29) = (I + K)[I ( )] (I I) ( I)(I + K)(I I) = (I + K) ( I)(I + K). he asymptotic covariance matrices of vech n, vec H d and vecd are found from (4.29), (4.4) and (4.27). From (4.24), (4.27), EK = E, E( 2 I) = E( ), E( I) = E(I ) and (A B)K = K(B A), we obtain the following theorem. HEOREM 3. If the Z t s are independently normally distributed and if the roots of Ɣ δ =0 are distinct, the nonzero elements of D have a limiting normal distribution with means 0; the covariance matrix of this limiting distribution is given by (4.30) A Cov(vec D, vec D ) = 2E { [I ( )] ( ) he asymptotic covariance of d i and d j is + ( )[I ( )] } E = 2E[ ( ) + ( ) ]E. A Cov(di,d j ) = 2(λ ii,jjδ j θ j + δ i θ i λ jj,ii ) ( ρj 2 = 2 λ ii,,jj ( ρj 2 + λ ρ 2 ) i )2 jj,ii (4.3) ( ρi 2)2 = 2(φ i φ i ){ [I (B B)] [Ɣ (Ɣ )] +[Ɣ (Ɣ )][I (B B )] } (φ j φ j ).

11 44. W. ANDERSON he vector φ i is estimated consistently by f i ; the matrices B, Ɣ, and by B, S and S ZZ, respectively, and δ i by d i. he variance (4.3) can, therefore, be estimated consistently. Note that di = ti = (t i θ i ) and rj 2 = (rj 2 ρ2 j ) = ( ρ2 j )2 (t j θ j ) + o p (). hus, r 2,...,r2 p have a limiting normal distribution with means 0 and covariances (4.32) A Cov(ri 2,rj 2 ) = 2[ λ ii,jj ( ρi 2 )2 ρj 2 + λ jj,iiρi 2 ( ρ2 j )2]. Unless is a diagonal matrix, the roots r,...,r p are asymptotically correlated; that is, dependent. In the special case that is diagonal (ψ ij = ψ ii δ ij = ρ i δ ij, where δ ii = andδ ij = 0, i j), the roots are asymptotically independent, and the asymptotic variance of di = ti is 4ρi 2/( ρ2 i )3 and the ith component of X t = X t + W t is X it = ρ i X i,t + W it. Contrast this result with the canonical form of Y t = BX t +W t, namely U it = ρ i V it +W it with EUit 2 = EV it 2 = ( ρ2 i ) and EWit 2 =. In this case the asymptotic variance of ti is 4ρi 2/( ρ2 i )2 [Anderson (999a)]. he ratio of the variance in the autoregressive model to that in the regression model is ( ρi 2), which is greater than. (Note that this result agrees with the asymptotic distribution of the serial correlation coefficient in a scalar first-order autoregressive process [heorem of Anderson (97)]. he variance of the limiting distribution of ri = (r i ρ i ) when is diagonal is ρi 2 as compared to ( ρi 2)2 in the regression model.) In the classical case, the eigenvalues are asymptotically independent because Y t and X t are each transformed, and hence B can be transformed into a diagonal matrix, but in the autoregression case Y t and Y t are transformed by the same linear transformation. Now consider vec H = vec H n + vec H d (4.33) = N +[ vec ( I) WW] vec 2 E vec WW. hen A Cov(vec H, vec H ) = N +[ (I + K) ( ) + ( ) (I + K) + ( 2 I) ( ) ] N + 2 N+ [(I + K) ( I)(I + K)]E (4.34) 2 E[ (I + K) (I + K)( I)]N + + 4E(I + K)E = N +[ (I + K) ( ) + ( ) (I + K) ] N + + N + ( I) 2 N+ (I + K) E 2 E (I + K)N E because N + ( I)E = 0. he asymptotic covariance matrix of vecf = (I ) vec H is (4.34) multiplied on the left-hand side by (I ) andonthe right-hand side by (I ).

12 CANONICAL CORRELAION ANALYSIS 45 In the classical regression model Y t = BX t + Z t the eigenvalues of YX XX XY in the metric of ZZ are asymptotically independent when the population eigenvalues are different. In the AR() case the eigenvalues are asymptotically dependent. (See the comments in Section 7..) 5. Estimation of reduced rank regression. If the rank of B in Y t = BY t + Z t is specified to be k( p), the maximum likelihood estimator of B under normality with Y 0 given [Anderson (95)] is (5.) B k = S ZZ F F B = S Y, where F and are the first k columns of F and = (w,...,w p ), respectively. he matrix B k is the reduced rank regression coefficient matrix. In this section the asymptotic distribution of B k will be derived without assuming normality. In canonical terms the reduced rank regression estimator is (5.2) k = WW H H, where H consists of the first k columns of H, and the least squares estimator is (5.3) = X = + W. hen k = ( k ) (5.4) = ( WW I (k)i (k) + H I (k) + I (k)h ) + I(k) I (k) W + o p (), where I (k) = (I k, 0). he submatrix consisting of the last p k rows and columns of = I is ( ) ( ) = 0, which implies 2 = 0, 22 = 0 and [ ] 2 (5.5) =. 0 0 Expansion of (5.4) in terms of WW, W and H gives {[ ][ ] [ ] [ k = WW 2 WW I 0 H H H ]} 2 WW 22 WW 0 0 H [ ] [ ][ ][ ] (5.6) I 0 W + 2 W W 2 + o p () 22 W 0 I (WW + H + H ) ( WW + H + H ) 2 = + W + 2 W + o p(). (WW 2 + H 2 ) ( 2 WW + H 2 ) 2

13 46. W. ANDERSON he first k columns of (4.7) and the upper left-hand side submatrix of (4.8) are [ H H + D ] [ H 2 H = WW ] (5.7) WW + o p (), (5.8) H + H = WW + o p(). Use of (5.7) and (5.8) in (5.6) yields k = W 2 W ( 2 2 WW ) ( 2 2 WW ) (5.9) + o p(). ( I) ( I) 2 From XX = + W + W + WW, XX = + o p (/ ) and (5.5) we obtain ( (5.0) 2 2 WW = 2 W + 22 W 2 + o p ). hen from (5.9) we derive W 2 W k = (W W 2 ) ( 2 W + 22 W 2 ) + o p() ( I) ( I) 2 [ ] [ ] [ ] I = W (5.) W 0 I ( I) (, 2 ) 2 + o p () [ ] [ ] I = W W 0 I M + o p(), where (5.2) [ ] M = (, 2 ) =, 2 = (, 2 ) and = I. Note that M 2 = M by virtue of the upper left-hand corner of =. AlsoM M = M = M. hen vec [ [ ( )] ( 0 0 (5.3) k ) ] = (I M ) vec W 0 I + o p(). Notice that the development to this point does not involve the second-order moments of, W and WW and hence does not require normality of W t. he covariance of vec W is (5.4) E vec W (vec W ) = I regardless of the nature of the distribution of W t.

14 CANONICAL CORRELAION ANALYSIS 47 he asymptotic covariance of vec k and vec( k ) is (5.5) A Cov [ vec k, vec( k {[ ] ) ] } {[ = E vec 2 M vec W 22 W 2 W [ ] 0 0 = M (I M) 0 I = 0; 22 W ] (I M) } that is, vec k and vec( k ) are asymptotically independent. Hence, the covariance of the limiting distribution of vec k is the covariance of vec minus the covariance of vec( k ). he asymptotic covariance of vec( k ) is (5.6) A Cov [ vec( k )] [ ] [ ] = (I M ) (I M) = (I M ) 0 I 0 I [ ( )] 0 0 = ( I) (I M ). 0 I Each factor in the brackets is idempotent of rank p k; that is, each can be diagonalized to diag(i p k, 0). In a sense the advantage of k over is a reduction of the variability by a factor of [(p k)/p] 2. he covariance of the limiting distribution of vec B k is that of vec B minus that of vec( B B k ) =[ ( ) ] vec( k );thatis,(ɣ I) minus (5.7) A Cov [ vec( B B k )] = [ ( ) ] A Cov [ vec( k )] ( ) [ ( )] I 0 = ( ) ( ) I. 0 0 Since B has rank k, it can be written as B = B =,where consists of the first k columns of, = (p p) and = B = (p p). Note that = Ɣ since Ɣ =.Further (5.8) =, = BƔB = Ɣ, I =, ( ) = and (5.9) = = I k.

15 48. W. ANDERSON hen (5.7) is A Cov [ vec( B B k (5.20) )] = [ Ɣ ( Ɣ ) ] [ ( ) ]. Note that the second term in each factor in (5.20) is invariant with respect to the transformation (, ) ( G, G ) for nonsingular G. When = and = B (G = I), the effective normalization is = I and Ɣ =. HEOREM 4. Let B =, where and are p k matrices of rank k. Suppose that {Z t } is a sequence of independent identically distributed random vectors with mean 0 and covariance, and let {Y t } be defined by Y t = BY t + Z t. hen vec( B k B) has a limiting normal distribution with mean 0 and covariance matrix (5.2) Ɣ [Ɣ ( Ɣ ) ] [ ( ) ]. his covariance matrix can be written as (Ɣ ) (Ɣ ) (5.22) {[ I Ɣ ( Ɣ ) ] [ I ( ) ]}. he first term in (5.22) is the covariance matrix of vec( B B), the least squares estimator of B. he second term is the covariance matrix of the limiting distribution of vec( B B k ). Each bracketed matrix in the pair of braces is idempotent of rank p k. A measure of the reduction in the variance of the estimator of B is tr [ (A Cov B ) A Cov( B B k )]/[ tr(a Cov B ) A Cov B ] (5.23) = tr [ I Ɣ ( Ɣ ) ] tr [ I ( ) ]/ p 2 = (p k) 2 /p 2. We have used tr(a B) = tr A tr B, trɣ ( Ɣ ) = tr Ɣ ( Ɣ ) = tr, tr ( ) = tr ( ) = tr I k I k = k and (5.9). he limiting distribution of ( B k B) holds under exactly the same conditions as for the limiting distribution of ( B B); in particular, only the second-order moment of Z t is assumed. Hence confidence regions for B established on the basis of normality of Z t hold generally. Note that the asymptotic distribution of the sample canonical variate coefficients depends on the fourthorder moments of Z t ; nevertheless, the asymptotic distribution of the reduced rank regression estimator, which is a function of those coefficients, does not depend on fourth-order moments. he asymptotic covariance matrix of B k in this AR() model is the same as the asymptotic covariance matrix of B k in the model Y t = BX t + Z t,wherex t s

16 CANONICAL CORRELAION ANALYSIS 49 are independently identically distributed with EX t = 0 and EX t X t = XX (in place of Ɣ) orwherethex t s are nonstochastic and S XX XX [Anderson (999b)]. In the present AR() model the transformation to canonical form Y t Y t, Y t Y t is different from the transformation for independent X t s (Y t A Y t, X t X t ). he likelihood ratio criterion for testing the null hypothesis that the rank of B is k against alternatives that the rank is greater than k when the Z t s are normally distributed is p p 2logλ = log( ri 2 ) (5.24) i=k+ ri 2 i=k+ [Anderson (95)]. When the null hypothesis is true, this criterion has a limiting χ 2 -distribution with (p k) 2 degrees of freedom. his number is the product of the ranks of the two idempotent factors in (5.6) and (5.20). 6. Autoregressive processes of order greater than one. 6.. Canonical correlations and vectors. We consider the process (2.) written as (6.) Y t = BỸt + Z t, where B = (B,...,B m ).SinceƔ = B ƔB +, (4.), which defines φ and θ, is equivalent to (4.5); since S = B S B + SẐẐ + o p(/ ) and SẐẐ = S ZZ + o p (/ ), (4.3), defining f and t, is asymptotically equivalent to (4.6). he transformation X t = Y t leads to (4.6), which in turn leads to (4.7) and (4.8) for H and D. o carry out the analysis, we need the asymptotic distribution of S and S ZZ and of and WW for m>. he process {Ỹt} can be defined by (6.2) Ỹ t = BỸt + Z t, where Ỹt = (Y t, Y t,...,y t m+ ), Z t = (Z t, 0,...,0),and (6.3) B B 2... B m B m I B = 0 I I 0 [Anderson (97), Section 5.3]. Note that the roots of (2.2) are the roots of λi B =0 (assumed to be less than in absolute value). We have Y t = JỸt,

17 50. W. ANDERSON where J = (I p, 0,...,0), Z t = J Z t, S YY = J S YY J and S ZZ = J S ZZ J. he transformation X t = Y t, W t = Z t carries (6.) to (6.4) X t = X t + W t, where = B(I p ). he process (6.2) is carried to X t = X t + W t, where = (I m ) B[I m ( ) ] has the same form as B. We have X t = J X t, W t = J W t, XX = J XX J, WW = J WW J.Let = E X X t t = (I p ) Ɣ(I p ). he asymptotic covariances of = ( ) and WW = ( WW I) are found from heorem. HEOREM 5. If the W t are independently normally distributed with EW t = 0 and EW t W t = I, then and WW have a limiting normal distribution with means 0 and 0 and covariances E vec (vec ) (I + K) [ I ( ) ] (6.5) [ ( I) + (I ) (I I) ][ I ( ) ] = (I + K) {[ I ( ) ] ( ) + ( ) [ I ( ) ] ( ) }, (6.6) (6.7) E vec WW (vec WW ) = (I + K)(J J J J) = (J J )(I + K)(J J), E vec (vec WW ) (I + K) [ I ( ) ] (J J J J) = [ I ( ) ] (J J )(I + K)(J J). Here K refers to the commutation matrix of dimension (pm) 2 (pm) 2.Note W t = J W t and K(J J ) = (J J )K. Since vec = (J J) vec and vec WW = (J J) vec WW, the asymptotic covariances of vec are the asymptotic covariances given in heorem 5 multiplied on the left by (J J) and the right by (J J ).Define = [ I ( ) ] (6.8). hen E vec (vec ) (J J)(I + K) { ( ) + ( ) } (J J ) (I + K)( ) (6.9) = (I + K)(J J) ( )(J J ) + (J J)( ) (J J )(I + K) (I + K)( ), (6.0) E vec WW (vec WW ) = (I + K)(I I),

18 CANONICAL CORRELAION ANALYSIS 5 (6.) E vec (vec WW ) (I + K)(J J) (J J ). he covariance of the limiting distribution of vec ( I) vec WW is E [ vec ( I) vec WW][ vec ( I) vec ] WW (6.2) = (I + K)(J J) [ ( ) + ( ) ] (J J ) (I + K)(J J) (J J )( I) ( I)(J J) (J J )(I + K) (I + K)( ) + ( I)(I + K)( I). he last line of (6.2) is ( 2 I) ( ) = ( I)N. Note that (J J) (J J ) is the upper left-hand p p submatrix of =[I ( )] = s=0 ( s s ) and (J J) ( )(J J ) is the upper left-hand submatrix of (6.3) [ I ( ) ] ( ) = ( s s )( ) s=0 = (E X X t t s E X X t t s ), s=0 which is s=0 (M s M s ),wherem s = EX t X t s.letj s J = s. hen (6.2) can be written A Cov [ vec ( I) vec WW, vec ( I) vec ] WW = (I + K) (M s M s ) (I + K) ( s s ) (6.4) s= s=0 ( s s )(I + K) + ( 2 I) + K( ). s=0 HEOREM 6. If the Z t s are independently normally distributed and if the roots of Ɣ δ are distinct, the nonzero elements of D have a limiting normal distribution with means 0; the covariance of this limiting normal distribution is A Cov(vec D, vecd ) = 2E(J J) { ( ) + ( ) } (6.5) (J J )E 2E { (J J) (J J ) + ( J J) (J J ) } E. In the calculations of (6.5) can be found from = +I. he operation (ε i ε i )(J J) { }(J J )(ε j ε j ) selects the element in the i, ith row and j,jth column of { },and(ε i ε i )(ε i ε i )(J J){ }(J J )(ε j ε j )(ε j ε j ) places it in the i, ith row and j,jth column of the product. Since is a diagonal matrix, (ε i ε i)(j J) (J J )(ε j ε j ) selects the i, ith row and the j,jth column and multiplies it by δ j.

19 52. W. ANDERSON 6.2. Reduced rank regression. he reduced rank regression estimator of B is (5.) with B defined in (2.0) and F the first k columns of F. he transformation X t = Y t, W t = Z t carries (6.) to (6.4). he reduced rank regression estimator of is (5.2), where (6.6) = X = + W is the least squares estimator of.from I = we obtain 0 = 2 2, which implies 2 = 0; here has been partitioned into k and p k columns, = (, 2 ). he analog of (5.6) is [ ( k = WW + H + H ) ] (6.7) ( 2 WW + H 2 ) Here W denotes the first k rows of W. From (5.7) we have + [ ] W + o p (). 0 (6.8) H 2 = ( 2 XX 2 WW )( I) + o p (). From (6.4) and 2 = 0 we obtain (6.9) 2 XX = W + 2 WW + o p(), (6.20) hen (6.2) where H 2 = W ( I) 2 WW + o p(). [ ] k = 0 W ( I) + {[ ] [ ] W 0 = + M 0 2 W [ ] W + o p () 0 } + o p (), (6.22) M =. Note that M 2 = M. For (5.3) we obtain (6.23) vec [ ( k ) ] = [ (I M ) I ] [ 0 vec 2 W ] + o p (), and vec( k ) and vec k are asymptotically independent. he covariance of the limiting distribution of vec( k ) is (5.6) with replaced by and M replaced by = M. By algebra similar to that of Section 5 the covariance of the limiting distribution of vec( B r B) is (5.20) with Ɣ and replaced by Ɣ and, respectively, where is defined by B =.

20 CANONICAL CORRELAION ANALYSIS Discussion. 7.. Reinsel. Velu, Reinsel and Wichern (986), Ahn and Reinsel (988), Reinsel (997) and Reinsel and Velu (998) treat the reduced rank regression estimator for autoregressive processes. In the first paper the authors suggest the model in which B (p pm) is factored into the product of a p r matrix and an r pm matrix and proceed to find the asymptotic distribution of the two factors when is assumed known. However, this distribution is incorrect because the variability due to S (my notation) is ignored. Further, the assertion that this asymptotic distribution holds when is replaced by S ZZ is incorrect. o explain this matter in more detail, suppose = I and m =. hen the first factor in B = is = = (φ,...,φ k ) as defined in (4.) with = I, and its estimator is F = (f,...,f k ) as defined in (4.3) with S ZZ replaced by I. he left-hand side equation in (4.3) leads to (7.) (BS B + S Z B + BS Z ) = + F BƔB F + o p(), where F = (F ) and = ( ). After the transformation to X t = Y t, W t = Z t (7.) is (7.2) ( + W + W )I (k) = I (k) + H H + o p(). In solving for H the term was discarded by the authors. Ahn and Reinsel (988) generalize the model to let B = B L B R,..., B m = B ml B mr such that range B il range B i+,l and find a canonical form for the matrices. hey set up the likelihood equations and indicate that the covariance matrix of the asymptotic normal distribution of the estimators can be obtained from the Fisher information matrix. When their results are specialized to the firstorder case (m = ), their results agree with heorem 4, but their expression for the asymptotic covariance matrix is not explicit (personal communication by one of the authors). See also Reinsel (997), Section 6., for further details on the nested model. In Reinsel and Velu (998) the heorem 2 of Velu, Reinsel and Wichern (986) about the asymptotic distribution of the factors and its proof are repeated as heorem 2.4 and its proof as pertaining to the model Y t = BX t +Z t with EZ t Z t = ZZ known and X t being exogenous or independent regressors. In this case the variability due to S XX XX is ignored. In the section on the autoregressive model they approach the asymptotic distribution of the reduced rank regression estimator by assuming that the Z t s are normally distributed and use the Fisher information matrix. Although they do not give the asymptotic covariance matrix explicitly, the details can be filled in, as shown by one of the authors in a personal communication; the asymptotic covariance matrix is shown to be (5.2).

21 54. W. ANDERSON 7.2. Lütkepohl. Lütkepohl (993) purports to obtain the asymptotic distribution of B k = by finding the joint asymptotic distribution of and and from that the distribution of +. he asymptotic distribution of and is incorrect, however; see Anderson (999b). he asserted asymptotic covariance of B k is not given very explicitly. Acknowledgment. suggestions. he author thanks two anonymous referees for valuable REFERENCES AHN, S. K. andreinsel, G. C. (988). Nested reduced-rank autoregressive models for multiple time series. J. Amer. Statist. Assoc ANDERSON,. W. (95). Estimating linear restrictions on regression coefficients for multivariate normal distributions. Ann. Math. Statist [Correction (980) Ann. Statist ] ANDERSON,. W. (97). he Statistical Analysis of ime Series. Wiley, New York. ANDERSON,. W. (984). An Introduction to Multivariate Statistical Analysis, 2nd ed. Wiley, New York. ANDERSON,. W. (999a). Asymptotic theory for canonical correlation analysis. J. Multivariate Anal ANDERSON,. W. (999b). Asymptotic distribution of the reduced rank regression estimator under general conditions. Ann. Statist BOX, G.E.P.andIAO, G. C. (977). A canonical analysis of multiple time series. Biometrika LÜKEPOHL, H. (993). Introduction to Multiple ime Series Analysis, 2nd ed. Springer, New York. REINSEL, G. C. (997). Elements of Multivariate ime Series Analysis, 2nd ed. Springer, New York. REINSEL, G.C.andVELU, R. P. (998). Multivariate Reduced Rank Regression. Springer, New York. RYAN, D.A.J.,HUBER, J.J.,CARER, E.M.,SPRAGUE, J.B.andPARRO, J. (992). A reduced-rank multivariate regression approach to aquatic joint toxicity experiments. Biometrics SCHMIDLI, H. (995). Reduced-Rank Regression: With Applications to Quantitative Structure- Activity Relationships. Springer, Berlin. SOICA, P. andviberg, M. (996). Maximum likelihood parameter and rank estimation in reduced-rank multivariate linear regressions. IEEE rans. Signal Process IAO, G. C. and SAY, R. S. (989). Model specification in multivariate time series (with discussion). J. Roy. Statist. Soc. Ser. B VELU, R. P., REINSEL, G. C. andwichern, D. W. (986). Reduced rank models for multiple times series. Biometrika DEPARMEN OF SAISICS SANFORD UNIVERSIY SANFORD,CALIFORNIA twa@stat.stanford.edu

Reduced rank regression in cointegrated models

Reduced rank regression in cointegrated models Journal of Econometrics 06 (2002) 203 26 www.elsevier.com/locate/econbase Reduced rank regression in cointegrated models.w. Anderson Department of Statistics, Stanford University, Stanford, CA 94305-4065,

More information

Estimation and Testing for Common Cycles

Estimation and Testing for Common Cycles Estimation and esting for Common Cycles Anders Warne February 27, 2008 Abstract: his note discusses estimation and testing for the presence of common cycles in cointegrated vector autoregressions A simple

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

On Expected Gaussian Random Determinants

On Expected Gaussian Random Determinants On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose

More information

Linear Algebra Review

Linear Algebra Review Linear Algebra Review Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Linear Algebra Review 1 / 45 Definition of Matrix Rectangular array of elements arranged in rows and

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Canonical Correlation Analysis of Longitudinal Data

Canonical Correlation Analysis of Longitudinal Data Biometrics Section JSM 2008 Canonical Correlation Analysis of Longitudinal Data Jayesh Srivastava Dayanand N Naik Abstract Studying the relationship between two sets of variables is an important multivariate

More information

Instrumental Variables

Instrumental Variables Università di Pavia 2010 Instrumental Variables Eduardo Rossi Exogeneity Exogeneity Assumption: the explanatory variables which form the columns of X are exogenous. It implies that any randomness in the

More information

Online Appendix. j=1. φ T (ω j ) vec (EI T (ω j ) f θ0 (ω j )). vec (EI T (ω) f θ0 (ω)) = O T β+1/2) = o(1), M 1. M T (s) exp ( isω)

Online Appendix. j=1. φ T (ω j ) vec (EI T (ω j ) f θ0 (ω j )). vec (EI T (ω) f θ0 (ω)) = O T β+1/2) = o(1), M 1. M T (s) exp ( isω) Online Appendix Proof of Lemma A.. he proof uses similar arguments as in Dunsmuir 979), but allowing for weak identification and selecting a subset of frequencies using W ω). It consists of two steps.

More information

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone

More information

The purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j.

The purpose of this section is to derive the asymptotic distribution of the Pearson chi-square statistic. k (n j np j ) 2. np j. Chapter 9 Pearson s chi-square test 9. Null hypothesis asymptotics Let X, X 2, be independent from a multinomial(, p) distribution, where p is a k-vector with nonnegative entries that sum to one. That

More information

1 Determinants. 1.1 Determinant

1 Determinants. 1.1 Determinant 1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

More information

COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS

COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS Communications in Statistics - Simulation and Computation 33 (2004) 431-446 COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS K. Krishnamoorthy and Yong Lu Department

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

TAMS39 Lecture 2 Multivariate normal distribution

TAMS39 Lecture 2 Multivariate normal distribution TAMS39 Lecture 2 Multivariate normal distribution Martin Singull Department of Mathematics Mathematical Statistics Linköping University, Sweden Content Lecture Random vectors Multivariate normal distribution

More information

Multivariate Time Series

Multivariate Time Series Multivariate Time Series Notation: I do not use boldface (or anything else) to distinguish vectors from scalars. Tsay (and many other writers) do. I denote a multivariate stochastic process in the form

More information

Linear Models and Estimation by Least Squares

Linear Models and Estimation by Least Squares Linear Models and Estimation by Least Squares Jin-Lung Lin 1 Introduction Causal relation investigation lies in the heart of economics. Effect (Dependent variable) cause (Independent variable) Example:

More information

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley Review of Classical Least Squares James L. Powell Department of Economics University of California, Berkeley The Classical Linear Model The object of least squares regression methods is to model and estimate

More information

THE UNIVERSITY OF CHICAGO Graduate School of Business Business 41912, Spring Quarter 2008, Mr. Ruey S. Tsay. Solutions to Final Exam

THE UNIVERSITY OF CHICAGO Graduate School of Business Business 41912, Spring Quarter 2008, Mr. Ruey S. Tsay. Solutions to Final Exam THE UNIVERSITY OF CHICAGO Graduate School of Business Business 41912, Spring Quarter 2008, Mr. Ruey S. Tsay Solutions to Final Exam 1. (13 pts) Consider the monthly log returns, in percentages, of five

More information

On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly Unrelated Regression Model and a Heteroscedastic Model

On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly Unrelated Regression Model and a Heteroscedastic Model Journal of Multivariate Analysis 70, 8694 (1999) Article ID jmva.1999.1817, available online at http:www.idealibrary.com on On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly

More information

MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete Observations

MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete Observations Sankhyā : The Indian Journal of Statistics 2006, Volume 68, Part 3, pp. 409-435 c 2006, Indian Statistical Institute MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete

More information

High-dimensional asymptotic expansions for the distributions of canonical correlations

High-dimensional asymptotic expansions for the distributions of canonical correlations Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic

More information

Mathematical Foundations of Applied Statistics: Matrix Algebra

Mathematical Foundations of Applied Statistics: Matrix Algebra Mathematical Foundations of Applied Statistics: Matrix Algebra Steffen Unkel Department of Medical Statistics University Medical Center Göttingen, Germany Winter term 2018/19 1/105 Literature Seber, G.

More information

Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved.

Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. 1 Converting Matrices Into (Long) Vectors Convention:

More information

Do not copy, quote, or cite without permission LECTURE 4: THE GENERAL LISREL MODEL

Do not copy, quote, or cite without permission LECTURE 4: THE GENERAL LISREL MODEL LECTURE 4: THE GENERAL LISREL MODEL I. QUICK REVIEW OF A LITTLE MATRIX ALGEBRA. II. A SIMPLE RECURSIVE MODEL IN LATENT VARIABLES. III. THE GENERAL LISREL MODEL IN MATRIX FORM. A. SPECIFYING STRUCTURAL

More information

Latent variable interactions

Latent variable interactions Latent variable interactions Bengt Muthén & Tihomir Asparouhov Mplus www.statmodel.com November 2, 2015 1 1 Latent variable interactions Structural equation modeling with latent variable interactions has

More information

Unit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2

Unit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2 Unit roots in vector time series A. Vector autoregressions with unit roots Scalar autoregression True model: y t y t y t p y tp t Estimated model: y t c y t y t y t p y tp t Results: T j j is asymptotically

More information

Chapter 11 GMM: General Formulas and Application

Chapter 11 GMM: General Formulas and Application Chapter 11 GMM: General Formulas and Application Main Content General GMM Formulas esting Moments Standard Errors of Anything by Delta Method Using GMM for Regressions Prespecified weighting Matrices and

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

An Introduction to Multivariate Statistical Analysis

An Introduction to Multivariate Statistical Analysis An Introduction to Multivariate Statistical Analysis Third Edition T. W. ANDERSON Stanford University Department of Statistics Stanford, CA WILEY- INTERSCIENCE A JOHN WILEY & SONS, INC., PUBLICATION Contents

More information

Chapter 5 Matrix Approach to Simple Linear Regression

Chapter 5 Matrix Approach to Simple Linear Regression STAT 525 SPRING 2018 Chapter 5 Matrix Approach to Simple Linear Regression Professor Min Zhang Matrix Collection of elements arranged in rows and columns Elements will be numbers or symbols For example:

More information

Pattern correlation matrices and their properties

Pattern correlation matrices and their properties Linear Algebra and its Applications 327 (2001) 105 114 www.elsevier.com/locate/laa Pattern correlation matrices and their properties Andrew L. Rukhin Department of Mathematics and Statistics, University

More information

Peter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8

Peter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8 Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall

More information

a Λ q 1. Introduction

a Λ q 1. Introduction International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

1. The Multivariate Classical Linear Regression Model

1. The Multivariate Classical Linear Regression Model Business School, Brunel University MSc. EC550/5509 Modelling Financial Decisions and Markets/Introduction to Quantitative Methods Prof. Menelaos Karanasos (Room SS69, Tel. 08956584) Lecture Notes 5. The

More information

Next is material on matrix rank. Please see the handout

Next is material on matrix rank. Please see the handout B90.330 / C.005 NOTES for Wednesday 0.APR.7 Suppose that the model is β + ε, but ε does not have the desired variance matrix. Say that ε is normal, but Var(ε) σ W. The form of W is W w 0 0 0 0 0 0 w 0

More information

THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay

THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay Lecture 5: Multivariate Multiple Linear Regression The model is Y n m = Z n (r+1) β (r+1) m + ɛ

More information

Vectors and Matrices Statistics with Vectors and Matrices

Vectors and Matrices Statistics with Vectors and Matrices Vectors and Matrices Statistics with Vectors and Matrices Lecture 3 September 7, 005 Analysis Lecture #3-9/7/005 Slide 1 of 55 Today s Lecture Vectors and Matrices (Supplement A - augmented with SAS proc

More information

Testing a Normal Covariance Matrix for Small Samples with Monotone Missing Data

Testing a Normal Covariance Matrix for Small Samples with Monotone Missing Data Applied Mathematical Sciences, Vol 3, 009, no 54, 695-70 Testing a Normal Covariance Matrix for Small Samples with Monotone Missing Data Evelina Veleva Rousse University A Kanchev Department of Numerical

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

STAT 100C: Linear models

STAT 100C: Linear models STAT 100C: Linear models Arash A. Amini June 9, 2018 1 / 56 Table of Contents Multiple linear regression Linear model setup Estimation of β Geometric interpretation Estimation of σ 2 Hat matrix Gram matrix

More information

A note on the equality of the BLUPs for new observations under two linear models

A note on the equality of the BLUPs for new observations under two linear models ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 14, 2010 A note on the equality of the BLUPs for new observations under two linear models Stephen J Haslett and Simo Puntanen Abstract

More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

Matrix Algebra Review

Matrix Algebra Review APPENDIX A Matrix Algebra Review This appendix presents some of the basic definitions and properties of matrices. Many of the matrices in the appendix are named the same as the matrices that appear in

More information

Topic 7 - Matrix Approach to Simple Linear Regression. Outline. Matrix. Matrix. Review of Matrices. Regression model in matrix form

Topic 7 - Matrix Approach to Simple Linear Regression. Outline. Matrix. Matrix. Review of Matrices. Regression model in matrix form Topic 7 - Matrix Approach to Simple Linear Regression Review of Matrices Outline Regression model in matrix form - Fall 03 Calculations using matrices Topic 7 Matrix Collection of elements arranged in

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information
