Variable selection in multiple regression with random design
arXiv v1 [math.ST] 26 Jun 2015

Alban Mbina Mbina (1), Guy Martial Nkiet (2), Assi Nguessan (3)
(1) URMI, Université des Sciences et Techniques de Masuku, Franceville, Gabon. albanmbinambina@yahoo.fr
(2) URMI, Université des Sciences et Techniques de Masuku, Franceville, Gabon. gnkiet@hotmail.com
(3) Université des Sciences et Technologies de Lille, Lille, France. assi.nguessan@polytech-lille.fr

Abstract. We propose a method for variable selection in multiple regression with random predictors. The method is based on a criterion that reduces the variable selection problem to the problem of estimating a suitable permutation and a dimensionality. Estimators for these two parameters are proposed, and the resulting variable selection method is shown to be consistent. A simulation study illustrates the performance of the proposed approach and compares it with an existing method.

Keywords: Variable selection; Multiple linear regression; Random design; Selection criterion; Consistency

1. Introduction

The selection of variables and models is an old and important problem in statistics, and several approaches have been proposed to deal with it for various methods of multivariate statistical analysis. For linear regression, many model selection criteria have been proposed in the literature. Surveys of earlier work in this field may be found in [6, 13, 14], and monographs on the topic are available, e.g., [7, 8]. Most of the methods proposed for variable selection in linear regression deal with the case where the covariates are assumed to be nonrandom; for this case, many selection criteria have been introduced in the literature. These include the FPE criterion [13, 14, 12, 15], cross-validation [16, 11], AIC and C_p type criteria (e.g., [4]), the prediction error criterion [5], and so on.
There are only a few works dealing with the case where the covariates are random, although its importance was recognized in [2], where it was argued that this case typically gives higher prediction errors than its fixed design counterpart, so that more is gained by variable selection. Linear regression with random design was considered in [17, 9] for variable selection, but these works only deal with univariate models, that is, models in which the response is a real random variable. A recent work that considered the multiple regression model is [1], in which a method based on
applying an adaptive LASSO type penalty and a novel BIC-type selection criterion was proposed in order to select both predictors and responses. In this paper, we extend the approach introduced in [9] to the case of multiple regression. In Section 2, the multiple regression model that is used is presented, together with a statement of the variable selection problem. Then the criterion we use is introduced, and we give a characterization result that reduces the variable selection problem to an estimation problem for two parameters. In Section 3, we propose our method for selecting variables by estimating these two parameters, and we prove its consistency. Section 4 is devoted to a simulation study that evaluates the finite-sample performance of the proposal and compares it with the method given in [1]. Proofs of lemmas and theorems are given in Section 5.

2. Model and criterion for selection

In this section, the multiple regression model in which we are interested is introduced and a statement of the corresponding variable selection problem is given. It is described as a problem of estimating a suitable set. A criterion characterizing this set is proposed, as well as an estimator of this criterion. Finally, we give a result from which asymptotic properties of this estimator can be obtained.

2.1. Model and statement of the problem

We consider the multiple regression model given by

    Y = BX + ε,    (1)

where X and Y are random vectors valued in R^p and R^q respectively, with p ≥ 2 and q ≥ 2, B is a q × p matrix of real coefficients, and ε is a random vector valued in R^q which is independent of X. Writing

    X = (X_1, ..., X_p)^T,  Y = (Y_1, ..., Y_q)^T,  ε = (ε_1, ..., ε_q)^T  and  B = (b_ij), i = 1, ..., q, j = 1, ..., p,

it is easily seen that Model (1) is equivalent to the set of q univariate regression models given by

    Y_i = Σ_{j=1}^p b_ij X_j + ε_i,  i = 1, ..., q,    (2)

and can also be written as

    Y = Σ_{j=1}^p X_j b_j + ε,    (3)

where b_j = (b_1j, b_2j, ..., b_qj)^T. We are interested in the variable selection problem, that is, identifying the X_j's that are not relevant in the previous set of models, on the basis of an i.i.d. sample (X_k, Y_k), 1 ≤ k ≤ n, of (X, Y). We say that a
variable X_j is not relevant if the corresponding coefficient vector b_j is null. So, putting I = {1, ..., p}, we consider the subset

    I_0 = { j ∈ I : ||b_j||_{R^q} = 0 },

which is assumed to be non-empty, and we tackle the variable selection problem in Model (1) as the problem of estimating the set I_0 or, equivalently, the set I_1 = I - I_0. In order to simplify the estimation of I_1, we first characterize it by means of a criterion introduced below.

2.2. Characterization of I_1

Without loss of generality, we assume that X and Y are centered; thus that is also the case for ε. Furthermore, denoting by || · ||_{R^k} the usual Euclidean norm of R^k, we assume that E(||X||^4_{R^p}) < +∞ and E(||Y||^4_{R^q}) < +∞. Then it is possible to define the covariance operators

    V_1 = E(X ⊗ X)  and  V_12 = E(Y ⊗ X),    (4)

where ⊗ denotes the tensor product of vectors, defined as follows: when E and F are Euclidean spaces and (u, v) is a pair belonging to E × F, the tensor product u ⊗ v is the linear map from E to F such that

    for all h ∈ E,  (u ⊗ v) h = < u, h >_E v,

where < ·, · >_E denotes the inner product of E.

Remark 1. Throughout the paper we essentially use covariance operators, but the translation into matrix terms is obvious, and more details can be found in [3]. In particular, when u and v are vectors in R^p and R^q respectively, the matrix of the operator u ⊗ v relative to the canonical bases is v u^T, where u^T is the transpose of u. So, if matrix expressions are preferred to operators, one can identify the operators given in (4) with the matrices V_1 = E(X X^T) and V_12 = E(X Y^T). Throughout the paper, the operator V_1 is assumed to be invertible.

For any subset K of I, let A_K be the projector

    A_K : x = (x_i)_{i ∈ I} ∈ R^p ↦ x_K = (x_i)_{i ∈ K} ∈ R^{card(K)},

and put Π_K := A_K^* (A_K V_1 A_K^*)^{-1} A_K, where A^* denotes the adjoint operator of A. Then we introduce the criterion

    ξ_K = || V_12 - V_1 Π_K V_12 ||,    (5)

where || · || denotes the norm given by ||A|| = sqrt(tr(A^* A)). This criterion permits a more explicit expression of I_1, as stated in the following lemma.

Lemma 1.
We have I_1 ⊂ K if, and only if, ξ_K = 0.

This lemma permits us to characterize the fact that an integer i belongs to I_0. Indeed, since having i ∈ I_0 is equivalent to having I_1 ⊂ I - {i}, we deduce from this lemma that i ∈ I_0 if, and only if, ξ_{K_i} = 0, where K_i = I - {i}. Then I_1 consists of the elements i of I for which ξ_{K_i} does not vanish. Now, let us consider the unique permutation σ of I satisfying:

(i) ξ_{K_σ(1)} ≥ ξ_{K_σ(2)} ≥ ... ≥ ξ_{K_σ(p)};
(ii) ξ_{K_σ(i)} = ξ_{K_σ(j)} and i < j imply σ(i) < σ(j).

Since I_0 is non-empty, there exists an integer s ∈ I, which we call the dimensionality, satisfying

    ξ_{K_σ(1)} ≥ ξ_{K_σ(2)} ≥ ... ≥ ξ_{K_σ(s)} > 0 = ξ_{K_σ(s+1)} = ... = ξ_{K_σ(p)}.

Therefore, we obviously have the following characterization of I_1:

Lemma 2. I_1 = { σ(k) : 1 ≤ k ≤ s }.
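As a concrete illustration (not part of the paper), the criterion (5) can be computed directly in its matrix form, with V_1 = E(X X^T) and V_12 = E(X Y^T); the toy model below, with p = 3, q = 2 and a null third column of B, is hypothetical.

```python
import numpy as np

def xi(K, V1, V12):
    """Criterion (5): xi_K = ||V12 - V1 Pi_K V12||, where ||A|| = sqrt(tr(A^T A))
    (the Frobenius norm) and Pi_K = A_K^T (A_K V1 A_K^T)^{-1} A_K."""
    p = V1.shape[0]
    A_K = np.eye(p)[sorted(K), :]          # coordinate-selection map A_K
    Pi_K = A_K.T @ np.linalg.inv(A_K @ V1 @ A_K.T) @ A_K
    return np.linalg.norm(V12 - V1 @ Pi_K @ V12)

# Hypothetical example: X standard normal, so V1 = I_3 and V12 = E[X Y^T] = B^T.
B = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 0.0]])           # b_3 = 0, so I_1 = {1, 2}
V1 = np.eye(3)
V12 = B.T
print(xi({0, 1}, V1, V12))   # ~0: I_1 is contained in K (Lemma 1)
print(xi({0, 2}, V1, V12))   # > 0: a relevant variable is missing from K
```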
Lemma 2 shows that the estimation of I_1 reduces to that of the two parameters σ and s. So our method for selecting variables will be based on estimating these parameters; in the next subsection, an estimator of the criterion is introduced, which will be the basis of the proposed variable selection procedure.

2.3. Estimation of the criterion

Recalling that we have an i.i.d. sample (X_k, Y_k), 1 ≤ k ≤ n, of (X, Y), we consider the sample means

    X̄_n = n^{-1} Σ_{k=1}^n X_k  and  Ȳ_n = n^{-1} Σ_{k=1}^n Y_k,

and the empirical covariance operators

    V̂_1 = n^{-1} Σ_{k=1}^n (X_k - X̄_n) ⊗ (X_k - X̄_n)  and  V̂_12 = n^{-1} Σ_{k=1}^n (Y_k - Ȳ_n) ⊗ (X_k - X̄_n).

Then, for any K ⊂ I, an estimator of ξ_K is given by

    ξ̂_K = || V̂_12 - V̂_1 Π̂_K V̂_12 ||,  where  Π̂_K = A_K^* (A_K V̂_1 A_K^*)^{-1} A_K.

The result given below permits us to obtain asymptotic properties of this estimator. As usual, when E and F are Euclidean vector spaces, we denote by L(E, F) the vector space of operators from E to F; when E = F, we simply write L(E) instead of L(E, E). Each element A of L(R^{p+q}) can be written as

    A = ( A_11  A_12 )
        ( A_21  A_22 ),

where A_11 ∈ L(R^p), A_12 ∈ L(R^q, R^p), A_21 ∈ L(R^p, R^q) and A_22 ∈ L(R^q). Then we consider the projectors

    P_1 : A ∈ L(R^{p+q}) ↦ A_11 ∈ L(R^p)  and  P_2 : A ∈ L(R^{p+q}) ↦ A_12 ∈ L(R^q, R^p),

and we have:

Proposition 1. We have

    √n ξ̂_K = || Ψ̂_K(Ĥ) + √n δ_K ||,

where δ_K = V_12 - V_1 Π_K V_12, (Ψ̂_K)_{n ∈ N} is a sequence of random operators which converges almost surely, as n → +∞, to the operator Ψ_K of L(L(R^{p+q}), L(R^q, R^p)) given by

    Ψ_K(A) = P_2(A) - P_1(A) Π_K V_12 + V_1 Π_K P_1(A) Π_K V_12 - V_1 Π_K P_2(A),

and (Ĥ)_{n ∈ N} is a sequence of random variables valued in L(R^{p+q}) which converges in distribution to a random variable H having a normal distribution with mean 0 and covariance operator given by

    Γ = E( ((Z ⊗ Z) - V) ⊗̃ ((Z ⊗ Z) - V) ),

Z being the R^{p+q}-valued random variable Z = (X^T, Y^T)^T, V = E(Z ⊗ Z), and ⊗̃ the tensor product between elements of L(R^{p+q}) associated with the inner product < A, B > = tr(A^* B).
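A plug-in computation of ξ̂_K can be sketched as follows (illustrative code, not the authors' implementation; the simulated model used to check it is a hypothetical assumption).

```python
import numpy as np

def xi_hat(K, X, Y):
    """Empirical criterion: replace V_1 and V_12 in (5) by the empirical
    covariances built from the centered sample. X is (n, p), Y is (n, q)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                    # X_k - bar X_n
    Yc = Y - Y.mean(axis=0)                    # Y_k - bar Y_n
    V1_hat = Xc.T @ Xc / n                     # hat V_1  (p x p)
    V12_hat = Xc.T @ Yc / n                    # hat V_12 (p x q)
    A_K = np.eye(p)[sorted(K), :]
    Pi_hat = A_K.T @ np.linalg.inv(A_K @ V1_hat @ A_K.T) @ A_K
    return np.linalg.norm(V12_hat - V1_hat @ Pi_hat @ V12_hat)

# Hypothetical consistency check: Y = BX + eps with the last predictor
# irrelevant, so xi_hat over a K containing the relevant variables is small.
rng = np.random.default_rng(0)
n = 5000
B = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 0.0]])
X = rng.standard_normal((n, 3))
Y = X @ B.T + 0.5 * rng.standard_normal((n, 2))
print(xi_hat({0, 1}, X, Y))   # small, of order n^{-1/2}
print(xi_hat({0, 2}, X, Y))   # close to the population value sqrt(5)
```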
3. Selection of variables

Lemma 2 shows that the estimation of I_1 reduces to that of σ and s. In this section, estimators for these two parameters are proposed and consistency properties are established for them.

3.1. Estimation of σ and s

Let us consider a sequence (f_n)_{n ∈ N} of functions from I to R_+ such that there exist a real α in ]0, 1/2[ and a strictly decreasing function f : I → R_+ satisfying

    for all i ∈ I,  lim_{n → +∞} n^α f_n(i) = f(i).

Then, recalling that K_i = I - {i}, we put

    φ_i = ξ̂_{K_i} + f_n(i),  i ∈ I,

and we take as estimator of σ the random permutation σ̂ of I such that

    φ_{σ̂(1)} ≥ φ_{σ̂(2)} ≥ ... ≥ φ_{σ̂(p)},  and φ_{σ̂(i)} = φ_{σ̂(j)} with i < j implies σ̂(i) < σ̂(j).

Furthermore, we consider the random set Ĵ_i = { σ̂(j) : 1 ≤ j ≤ i } and the random variable

    ψ_i = ξ̂_{Ĵ_i} + g_n(i),  i ∈ I,

where (g_n)_{n ∈ N} is a sequence of functions from I to R_+ such that there exist a real β in ]0, 1[ and a strictly increasing function g : I → R_+ satisfying

    for all i ∈ I,  lim_{n → +∞} n^β g_n(i) = g(i).

Then we take as estimator of s the random variable

    ŝ = min { i ∈ I : ψ_i = min_{j ∈ I} ψ_j }.

The variable selection is achieved by taking the random set Î_1 = { σ̂(i) : 1 ≤ i ≤ ŝ } as estimator of I_1.

3.2. Consistency

The following theorem establishes consistency of the preceding estimators:

Theorem 2. We have:
(i) lim_{n → +∞} P(σ̂ = σ) = 1;
(ii) ŝ converges in probability to s, as n → +∞.

As a consequence of this theorem, we easily obtain lim_{n → +∞} P(Î_1 = I_1) = 1. This shows the consistency of our method for selecting variables in the model.

4. Simulations
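Before turning to the reported experiments, the two-step procedure of Section 3 can be sketched in code (a sketch under assumptions: the penalty choices below satisfy the conditions on f_n and g_n but are otherwise arbitrary, and the data-generating model is hypothetical).

```python
import numpy as np

def xi_hat(K, X, Y):
    """Empirical criterion xi_hat_K of Section 2.3."""
    n, p = X.shape
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    V1, V12 = Xc.T @ Xc / n, Xc.T @ Yc / n
    A = np.eye(p)[sorted(K), :]
    Pi = A.T @ np.linalg.inv(A @ V1 @ A.T) @ A
    return np.linalg.norm(V12 - V1 @ Pi @ V12)

def select(X, Y):
    n, p = X.shape
    I = list(range(p))
    # Step 1: rank variables. phi_i = xi_hat_{I - {i}} + f_n(i); removing a
    # relevant variable leaves a large criterion value, so it is ranked first.
    f = lambda i: n ** (-0.25) / (i + 1)           # assumed f_n, alpha = 1/4
    phi = [xi_hat(set(I) - {i}, X, Y) + f(i) for i in I]
    sigma_hat = list(np.argsort(phi)[::-1])        # decreasing phi
    # Step 2: choose the dimensionality. psi_i = xi_hat_{J_i} + g_n(i), with
    # J_i the first i ranked variables; g_n penalizes larger subsets.
    g = lambda i: n ** (-0.75) * (i + 1)           # assumed g_n, beta = 3/4
    psi = [xi_hat(set(sigma_hat[:i + 1]), X, Y) + g(i) for i in I]
    s_hat = int(np.argmin(psi)) + 1                # smallest minimizer
    return sorted(int(j) for j in sigma_hat[:s_hat])

# Hypothetical data: only the first two of five predictors are relevant.
rng = np.random.default_rng(1)
n = 2000
B = np.zeros((2, 5)); B[:, 0] = [1.0, 3.0]; B[:, 1] = [2.0, -1.0]
X = rng.standard_normal((n, 5))
Y = X @ B.T + 0.5 * rng.standard_normal((n, 2))
print(select(X, Y))   # contains the relevant indices 0 and 1
```

With these penalty choices the selected set may still be somewhat larger than I_1 at moderate n; the theorem only guarantees exact recovery asymptotically.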
In this section, we report the results of a simulation study made in order to check the efficacy of the proposed approach and to compare it with an existing method: the ASCCA method introduced by An et al. [1]. This latter method is based on re-casting the multivariate regression problem as a classical CCA problem, for which a least squares type formulation is constructed, and on applying an adaptive LASSO type penalty together with a BIC-type selection criterion (see [1] for more details). Our simulated data are based on two independent data sets, training data and test data, each with sample size n = 50, 100, 500, 800, 1000, 2000. The training data are used for selecting variables with both our method, with penalty terms f_n(i) = n^{-1/4} i^{-1} and g_n(i) = n^{-3/4} i, and the ASCCA method. The test data are used for computing the prediction error

    e = (1/n) Σ_{k=1}^n || Y_k - Ŷ_k ||²_{R^q},

where Y_k is an observed response and Ŷ_k is the usual linear predictor of Y_k computed by using the variables selected in the previous step, that is, obtained by least squares from the matrix X with n rows whose columns contain the observations of the selected X_j's. Each data set was generated as follows: X_k is generated from a multivariate normal distribution in R^7 with mean 0 and covariance cov((X_k)_i, (X_k)_j) = 0.5^{|i-j|} for any 1 ≤ i, j ≤ 7, and the corresponding response Y_k is generated according to (1) with a fixed 5 × 7 coefficient matrix B (its entries, like the numerical entries of Table 1, do not survive this transcription), the related error term ε_k having a multivariate normal distribution in R^5 with mean 0 and covariance matrix 0.5 I_5, where I_5 denotes the 5-dimensional identity matrix. The outputs of the numerical experiment are the averages of the aforementioned prediction errors over 2000 independent replications; they are reported in Table 1. Our method gives the better results for n ≥ 100 but was outperformed by the ASCCA method for n = 50.

Table 1: Average of prediction errors over 2000 replications (numerical entries lost in transcription).
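The simulation design itself can be reproduced along the following lines (a sketch under assumptions: the coefficient matrix B used here is a placeholder, since the paper's matrix does not survive this transcription, and the selected variable set is taken as given rather than estimated).

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, n = 7, 5, 1000

# Predictors: N(0, Sigma) with Sigma_ij = 0.5^{|i-j|}, as in the paper.
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Placeholder coefficient matrix (q x p) with some null columns (assumption).
B = np.zeros((q, p)); B[:, 0] = 1.0; B[:, 2] = -1.0

# Errors: N(0, 0.5 I_5); responses from model (1).
eps = rng.multivariate_normal(np.zeros(q), 0.5 * np.eye(q), size=n)
Y = X @ B.T + eps

# Prediction error e = (1/n) sum ||Y_k - hat Y_k||^2, with the least squares
# predictor built on a set of selected columns (here: the truly relevant ones).
sel = [0, 2]
Xs = X[:, sel]
B_hat = np.linalg.lstsq(Xs, Y, rcond=None)[0]   # (|sel| x q) coefficients
e = np.mean(np.sum((Y - Xs @ B_hat) ** 2, axis=1))
print(e)   # close to E||eps||^2 = q * 0.5 = 2.5 when selection is correct
```

This gives a baseline for reading a prediction-error table: with the correct subset selected, e concentrates around the irreducible noise level tr(0.5 I_5) = 2.5.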
5. Proofs

5.1. Proof of Lemma 1

Denoting by (Ω, A, P) the underlying probability space, we consider the operators

    L_1 : x = (x_1, ..., x_p)^T ∈ R^p ↦ Σ_{j=1}^p x_j X_j ∈ L²(Ω, A, P)

and

    L_2 : y = (y_1, ..., y_q)^T ∈ R^q ↦ Σ_{i=1}^q y_i Y_i ∈ L²(Ω, A, P),

whose adjoints are respectively given by

    L_1^* : Z ∈ L²(Ω, A, P) ↦ E(ZX) ∈ R^p  and  L_2^* : Z ∈ L²(Ω, A, P) ↦ E(ZY) ∈ R^q.

It is easy to verify that L_1^* L_1 = V_1 and L_1^* L_2 = V_12. Denoting by R(A) the range of an operator A, and using the fact that the orthogonal projector Π_{R(A)} onto R(A) is given by Π_{R(A)} = A (A^* A)^{-1} A^*, we clearly have

    ξ_K = || L_1^* L_2 - L_1^* L_1 A_K^* (A_K L_1^* L_1 A_K^*)^{-1} A_K L_1^* L_2 ||
        = || L_1^* L_2 - L_1^* Π_{R(L_1 A_K^*)} L_2 ||
        = || L_1^* Π_{R(L_1 A_K^*)^⊥} L_2 ||,    (6)

where E^⊥ denotes the orthogonal complement of a subspace E. For any vector α = (α_1, ..., α_q)^T in R^q, one has

    L_2 α = Σ_{i=1}^q α_i Y_i = Σ_{i=1}^q α_i ( Σ_{j=1}^p b_ij X_j + ε_i ) = Σ_{i=1}^q α_i Σ_{j=1}^p b_ij X_j + Σ_{i=1}^q α_i ε_i.

Since for any u = (u_1, ..., u_p)^T ∈ R^p we have, by the independence of ε and X,

    < L_1 u, Σ_{i=1}^q α_i ε_i > = Σ_{j=1}^p Σ_{i=1}^q u_j α_i E(X_j ε_i) = Σ_{j=1}^p Σ_{i=1}^q u_j α_i E(X_j) E(ε_i) = 0,

it follows that Σ_{i=1}^q α_i ε_i ∈ R(L_1)^⊥ and, from R(L_1 A_K^*) ⊂ R(L_1), we obtain

    L_1^* Π_{R(L_1 A_K^*)^⊥} ( Σ_{i=1}^q α_i ε_i ) = L_1^* ( Σ_{i=1}^q α_i ε_i ) = E( ( Σ_{i=1}^q α_i ε_i ) X ) = ( Σ_{i=1}^q α_i E(ε_i) ) E(X) = 0.

Thus

    L_1^* Π_{R(L_1 A_K^*)^⊥} L_2 α = L_1^* Π_{R(L_1 A_K^*)^⊥} ( Σ_{i=1}^q α_i Σ_{j=1}^p b_ij X_j ) = Σ_{i=1}^q α_i L_1^* Π_{R(L_1 A_K^*)^⊥} L_1 b_i,    (7)

where b_i = (b_i1, b_i2, ..., b_ip)^T.

If ξ_K = 0 then, considering for each i = 1, ..., q the vector α = (0, ..., 0, 1, 0, ..., 0)^T of R^q whose coordinates are all null except the i-th one, which equals 1, we deduce from (6) and (7) that L_1^* Π_{R(L_1 A_K^*)^⊥} L_1 b_i = 0. Since ker(A^* A) = ker(A) for any operator A, it follows that Π_{R(L_1 A_K^*)^⊥} L_1 b_i = 0, that is,

    L_1 b_i ∈ R(L_1 A_K^*).    (8)

Denoting by |K| the cardinality of K and putting K = {k_1, k_2, ..., k_|K|}, we deduce from (8) that there exists a vector β = (β_1, ..., β_|K|)^T ∈ R^{|K|} such that L_1 b_i = L_1 A_K^* β, that is,

    Σ_{j=1}^p b_ij X_j = Σ_{l=1}^{|K|} β_l X_{k_l},
and, equivalently,

    Σ_{l=1}^{|K|} ( b_{i k_l} - β_l ) X_{k_l} + Σ_{j ∈ I-K} b_ij X_j = 0.    (9)

Since V_1 is invertible, we have ker(L_1) = ker(L_1^* L_1) = ker(V_1) = {0}. Then X_1, ..., X_p are linearly independent and, therefore, (9) implies that b_ij = 0 for all j ∈ I - K. This property holds for any i ∈ {1, ..., q}, so we deduce that I - K ⊂ I_0 and, equivalently, that I_1 ⊂ K.

Conversely, we first have

    L_1^* Π_{R(L_1 A_K^*)^⊥} L_1 b_i = L_1^* Π_{R(L_1 A_K^*)^⊥} ( Σ_{j ∈ K} b_ij X_j + Σ_{j ∈ I-K} b_ij X_j )
        = L_1^* Π_{R(L_1 A_K^*)^⊥} ( Σ_{l=1}^{|K|} b_{i k_l} X_{k_l} + Σ_{j ∈ I-K} b_ij X_j ).

If I_1 ⊂ K, then I - K ⊂ I_0 and, consequently, b_ij = 0 for all j ∈ I - K. Thus

    L_1^* Π_{R(L_1 A_K^*)^⊥} L_1 b_i = L_1^* Π_{R(L_1 A_K^*)^⊥} ( Σ_{l=1}^{|K|} b_{i k_l} X_{k_l} ) = L_1^* Π_{R(L_1 A_K^*)^⊥} L_1 A_K^* ( A_K b_i ) = 0,

because L_1 A_K^* (A_K b_i) ∈ R(L_1 A_K^*). Then, from (7) and (6), we deduce that ξ_K = 0.

5.2. Proof of Proposition 1

We have

    √n ξ̂_K = || √n (V̂_12 - V_12) - √n (V̂_1 - V_1) Π̂_K V̂_12 - V_1 √n (Π̂_K - Π_K) V̂_12 - V_1 Π_K √n (V̂_12 - V_12) + √n δ_K ||,    (10)

and since

    Π̂_K - Π_K = A_K^* (A_K V̂_1 A_K^*)^{-1} A_K - A_K^* (A_K V_1 A_K^*)^{-1} A_K
               = A_K^* (A_K V̂_1 A_K^*)^{-1} A_K (V_1 - V̂_1) A_K^* (A_K V_1 A_K^*)^{-1} A_K
               = Π̂_K (V_1 - V̂_1) Π_K,

it follows that

    √n ξ̂_K = || √n (V̂_12 - V_12) - √n (V̂_1 - V_1) Π̂_K V̂_12 + V_1 Π̂_K √n (V̂_1 - V_1) Π_K V̂_12 - V_1 Π_K √n (V̂_12 - V_12) + √n δ_K ||.    (11)

Let us consider the R^{p+q}-valued random vectors

    Z = (X^T, Y^T)^T  and  Z_k = (X_k^T, Y_k^T)^T,  k = 1, ..., n;
the covariance operator of Z is given by V = E(Z ⊗ Z) and can be written as

    V = ( V_1   V_12 )
        ( V_21  V_2  ),

where V_2 = E(Y ⊗ Y) and V_21 = V_12^*. Further, putting

    Z̄_n = n^{-1} Σ_{k=1}^n Z_k  and  V̂ = n^{-1} Σ_{k=1}^n (Z_k - Z̄_n) ⊗ (Z_k - Z̄_n),

we can write

    V̂ = ( V̂_1   V̂_12 )
        ( V̂_21  V̂_2  ),

where V̂_2 = n^{-1} Σ_{k=1}^n (Y_k - Ȳ_n) ⊗ (Y_k - Ȳ_n) and V̂_21 = V̂_12^*. We then deduce from (10) and (11) that √n ξ̂_K = || Ψ̂_K(Ĥ) + √n δ_K ||, where Ĥ = √n (V̂ - V) and Ψ̂_K is the random operator from L(R^{p+q}) to L(R^q, R^p) defined by

    for all A ∈ L(R^{p+q}),  Ψ̂_K(A) = P_2(A) - P_1(A) Π̂_K V̂_12 + V_1 Π̂_K P_1(A) Π_K V̂_12 - V_1 Π_K P_2(A).    (12)

Considering the usual operator norm defined in L(E, F) by ||A||_∞ = sup_{x ∈ E, x ≠ 0} ||Ax||_F / ||x||_E, and recalling that ||AB|| ≤ ||A|| ||B|| for two operators A and B, we obtain, for any A ∈ L(R^{p+q}),

    || Ψ̂_K(A) - Ψ_K(A) ||
      ≤ || P_1(A) || || Π̂_K V̂_12 - Π_K V_12 || + || V_1 Π̂_K P_1(A) Π_K V̂_12 - V_1 Π_K P_1(A) Π_K V_12 ||
      ≤ || P_1(A) || [ || Π̂_K - Π_K || || V̂_12 || + || Π_K || || V̂_12 - V_12 || + || V_1 || || Π̂_K - Π_K || || Π_K || || V̂_12 || + || V_1 Π_K || || Π_K || || V̂_12 - V_12 || ],

hence

    || Ψ̂_K - Ψ_K ||_∞ ≤ [ (1 + || V_1 || || Π_K ||) || V̂_12 || || Π̂_K - Π_K || + (1 + || V_1 Π_K ||) || Π_K || || V̂_12 - V_12 || ] || P_1 ||_∞.    (13)

From the strong law of large numbers, V̂_1 (resp. V̂_12) converges almost surely, as n → +∞, to V_1 (resp. V_12). Therefore Π̂_K converges almost surely, as n → +∞, to Π_K and, from (13), Ψ̂_K converges almost surely, as n → +∞, to Ψ_K. It remains to obtain the asymptotic distribution of Ĥ. We have Ĥ = Ĥ_1 - Ĥ_2, where

    Ĥ_1 = n^{-1/2} Σ_{k=1}^n ( Z_k ⊗ Z_k - V )  and  Ĥ_2 = √n ( Z̄_n ⊗ Z̄_n ).
The central limit theorem ensures that Ĥ_1 (resp. √n Z̄_n) converges in distribution, as n → +∞, to a random variable H (resp. U) having a centered normal distribution with covariance operator Γ (resp. Γ̃) given by

    Γ = E( ((Z ⊗ Z) - V) ⊗̃ ((Z ⊗ Z) - V) )  (resp. Γ̃ = E(Z ⊗ Z)).

Hence Ĥ_2 converges in probability, as n → +∞, to 0, and Slutsky's theorem permits us to conclude that Ĥ converges in distribution, as n → +∞, to H.

5.3. Proof of Theorem 2

We just need to prove the lemma given below; the proof of Theorem 2 is then similar to that of Theorem 3.1 in [10]. Let r ∈ N^* and (m_1, ..., m_r) ∈ (N^*)^r with Σ_{l=1}^r m_l = p be such that

    ξ_{K_σ(1)} = ... = ξ_{K_σ(m_1)} > ξ_{K_σ(m_1+1)} = ... = ξ_{K_σ(m_1+m_2)} > ... > ξ_{K_σ(m_1+...+m_{r-1}+1)} = ... = ξ_{K_σ(m_1+...+m_r)}.

Then, putting E = { l ∈ N^* : 1 ≤ l ≤ r, m_l ≥ 2 } and F_l := { Σ_{k=0}^{l-1} m_k + 1, ..., Σ_{k=0}^{l} m_k - 1 } with m_0 = 0, we have:

Lemma 3. If E ≠ ∅, then for all l ∈ E and all i ∈ F_l, the sequence n^α ( ξ̂_{K_σ(i)} - ξ̂_{K_σ(i+1)} ) converges in probability to 0 as n → +∞.

Proof. Let us put γ_l = ξ_{K_σ(i)} = ξ_{K_σ(i+1)}. If γ_l = 0, then δ_{K_σ(i)} = δ_{K_σ(i+1)} = 0 and

    n^α | ξ̂_{K_σ(i)} - ξ̂_{K_σ(i+1)} | = n^{α-1/2} | || Ψ̂_{K_σ(i)}(Ĥ) || - || Ψ̂_{K_σ(i+1)}(Ĥ) || |
        ≤ n^{α-1/2} || Ψ̂_{K_σ(i)}(Ĥ) - Ψ̂_{K_σ(i+1)}(Ĥ) ||
        ≤ n^{α-1/2} || Ψ̂_{K_σ(i)} - Ψ̂_{K_σ(i+1)} ||_∞ || Ĥ ||.

Since Ψ̂_{K_σ(i)} and Ψ̂_{K_σ(i+1)} converge almost surely, as n → +∞, to Ψ_{K_σ(i)} and Ψ_{K_σ(i+1)} respectively, and since Ĥ converges in distribution, as n → +∞, to H, it follows from the preceding inequality and from α < 1/2 that n^α ( ξ̂_{K_σ(i)} - ξ̂_{K_σ(i+1)} ) converges in probability to 0 as n → +∞. If γ_l ≠ 0, we have

    n^α ( ξ̂_{K_σ(i)} - ξ̂_{K_σ(i+1)} )
      = n^{α-1/2} ( || Ψ̂_{K_σ(i)}(Ĥ) + √n δ_{K_σ(i)} || - || Ψ̂_{K_σ(i+1)}(Ĥ) + √n δ_{K_σ(i+1)} || )
      = n^{α-1} ( || Ψ̂_{K_σ(i)}(Ĥ) ||² - || Ψ̂_{K_σ(i+1)}(Ĥ) ||² ) / ( || n^{-1/2} Ψ̂_{K_σ(i)}(Ĥ) + δ_{K_σ(i)} || + || n^{-1/2} Ψ̂_{K_σ(i+1)}(Ĥ) + δ_{K_σ(i+1)} || )
      + 2 n^{α-1/2} ( < δ_{K_σ(i)}, Ψ̂_{K_σ(i)}(Ĥ) > - < δ_{K_σ(i+1)}, Ψ̂_{K_σ(i+1)}(Ĥ) > ) / ( || n^{-1/2} Ψ̂_{K_σ(i)}(Ĥ) + δ_{K_σ(i)} || + || n^{-1/2} Ψ̂_{K_σ(i+1)}(Ĥ) + δ_{K_σ(i+1)} || ),
where < ·, · > is the inner product defined by < A, B > = tr(A^* B); the terms n || δ_{K_σ(i)} ||² and n || δ_{K_σ(i+1)} ||² cancel in the expansion of the squared norms, since both norms equal γ_l. First,

    n^{α-1} | || Ψ̂_{K_σ(i)}(Ĥ) ||² - || Ψ̂_{K_σ(i+1)}(Ĥ) ||² | ≤ n^{α-1} ( || Ψ̂_{K_σ(i)}(Ĥ) ||² + || Ψ̂_{K_σ(i+1)}(Ĥ) ||² ) ≤ n^{α-1} ( || Ψ̂_{K_σ(i)} ||²_∞ + || Ψ̂_{K_σ(i+1)} ||²_∞ ) || Ĥ ||²    (14)

and, further,

    2 n^{α-1/2} | < δ_{K_σ(i)}, Ψ̂_{K_σ(i)}(Ĥ) > - < δ_{K_σ(i+1)}, Ψ̂_{K_σ(i+1)}(Ĥ) > |
      ≤ 2 n^{α-1/2} ( || δ_{K_σ(i)} || || Ψ̂_{K_σ(i)}(Ĥ) || + || δ_{K_σ(i+1)} || || Ψ̂_{K_σ(i+1)}(Ĥ) || )
      ≤ 2 n^{α-1/2} γ_l ( || Ψ̂_{K_σ(i)} ||_∞ + || Ψ̂_{K_σ(i+1)} ||_∞ ) || Ĥ ||.    (15)

Bounds (14) and (15), together with the convergence properties recalled above, permit us to conclude that the sequence n^α ( ξ̂_{K_σ(i)} - ξ̂_{K_σ(i+1)} ) converges in probability to 0 as n → +∞.

References

[1] B. An, J. Guo, H. Wang. Multivariate regression shrinkage and selection by canonical correlation analysis. Comput. Statist. Data Anal., 62:93-107.
[2] L. Breiman, P. Spector. Submodel selection and evaluation in regression. The X-random case. Internat. Statist. Rev., 60.
[3] J. Dauxois, Y. Romain, S. Viguier. Tensor products and statistics. Linear Algebra Appl., 210:59-88.
[4] Y. Fujikoshi, K. Satoh. Modified AIC and Cp in multivariate linear regression. Biometrika, 84.
[5] Y. Fujikoshi, T. Kan, S. Takahashi, T. Sakurai. Prediction error criterion for selecting variables in a linear regression model. Ann. Inst. Statist. Math., 63.
[6] R. R. Hocking. The analysis and selection of variables in linear regression. Biometrics, 32:1-49.
[7] H. Linhart, W. Zucchini. Model Selection. Wiley, New York.
[8] A. J. Miller. Subset Selection in Regression. Chapman and Hall, London.
[9] G. M. Nkiet. Sélection des variables dans un modèle structurel de régression linéaire. C. R. Acad. Sci. Paris Sér. I, 333.
[10] G. M. Nkiet. Direct variable selection for discrimination among several groups. J. Multivariate Anal., 105.
[11] J. Shao. Linear model selection by cross-validation. J. Amer. Statist. Assoc., 88.
[12] R. Shibata. Approximate efficiency of a selection procedure for the number of regression variables. Biometrika, 71:43-49.
[13] M. L. Thomson. Selection of variables in multiple regression. Part I. A review and evaluation. Internat. Statist. Rev., 46:1-19.
[14] M. L. Thomson. Selection of variables in multiple regression. Part II. Chosen procedures, computations and examples. Internat. Statist. Rev., 46.
[15] P. Zhang. On the distributional properties of model selection criteria. J. Amer. Statist. Assoc., 87.
[16] P. Zhang. Model selection via multifold cross validation. Ann. Statist., 21.
[17] X. Zheng, W. Y. Loh. A consistent variable selection criterion for linear models with high-dimensional covariates. Statistica Sinica, 7.
Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.
More informationGQE ALGEBRA PROBLEMS
GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout
More informationOn corrections of classical multivariate tests for high-dimensional data. Jian-feng. Yao Université de Rennes 1, IRMAR
Introduction a two sample problem Marčenko-Pastur distributions and one-sample problems Random Fisher matrices and two-sample problems Testing cova On corrections of classical multivariate tests for high-dimensional
More information11 Hypothesis Testing
28 11 Hypothesis Testing 111 Introduction Suppose we want to test the hypothesis: H : A q p β p 1 q 1 In terms of the rows of A this can be written as a 1 a q β, ie a i β for each row of A (here a i denotes
More informationarxiv: v1 [math.st] 2 Apr 2016
NON-ASYMPTOTIC RESULTS FOR CORNISH-FISHER EXPANSIONS V.V. ULYANOV, M. AOSHIMA, AND Y. FUJIKOSHI arxiv:1604.00539v1 [math.st] 2 Apr 2016 Abstract. We get the computable error bounds for generalized Cornish-Fisher
More informationLINEAR MAPS ON M n (C) PRESERVING INNER LOCAL SPECTRAL RADIUS ZERO
ITALIAN JOURNAL OF PURE AND APPLIED MATHEMATICS N. 39 2018 (757 763) 757 LINEAR MAPS ON M n (C) PRESERVING INNER LOCAL SPECTRAL RADIUS ZERO Hassane Benbouziane Mustapha Ech-Chérif Elkettani Ahmedou Mohamed
More informationModel Selection and Geometry
Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2013 PROBLEM SET 2
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2013 PROBLEM SET 2 1. You are not allowed to use the svd for this problem, i.e. no arguments should depend on the svd of A or A. Let W be a subspace of C n. The
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationThe best generalised inverse of the linear operator in normed linear space
Linear Algebra and its Applications 420 (2007) 9 19 www.elsevier.com/locate/laa The best generalised inverse of the linear operator in normed linear space Ping Liu, Yu-wen Wang School of Mathematics and
More informationBootstrap Simulation Procedure Applied to the Selection of the Multiple Linear Regressions
JKAU: Sci., Vol. 21 No. 2, pp: 197-212 (2009 A.D. / 1430 A.H.); DOI: 10.4197 / Sci. 21-2.2 Bootstrap Simulation Procedure Applied to the Selection of the Multiple Linear Regressions Ali Hussein Al-Marshadi
More informationThe Rademacher Cotype of Operators from l N
The Rademacher Cotype of Operators from l N SJ Montgomery-Smith Department of Mathematics, University of Missouri, Columbia, MO 65 M Talagrand Department of Mathematics, The Ohio State University, 3 W
More informationChapter 5 Matrix Approach to Simple Linear Regression
STAT 525 SPRING 2018 Chapter 5 Matrix Approach to Simple Linear Regression Professor Min Zhang Matrix Collection of elements arranged in rows and columns Elements will be numbers or symbols For example:
More informationMOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES
J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary
More informationLinear algebra and applications to graphs Part 1
Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces
More informationMi-Hwa Ko. t=1 Z t is true. j=0
Commun. Korean Math. Soc. 21 (2006), No. 4, pp. 779 786 FUNCTIONAL CENTRAL LIMIT THEOREMS FOR MULTIVARIATE LINEAR PROCESSES GENERATED BY DEPENDENT RANDOM VECTORS Mi-Hwa Ko Abstract. Let X t be an m-dimensional
More informationMTH 2032 SemesterII
MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents
More informationOn High-Dimensional Cross-Validation
On High-Dimensional Cross-Validation BY WEI-CHENG HSIAO Institute of Statistical Science, Academia Sinica, 128 Academia Road, Section 2, Nankang, Taipei 11529, Taiwan hsiaowc@stat.sinica.edu.tw 5 WEI-YING
More informationMidterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015
Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic
More informationDuality of finite-dimensional vector spaces
CHAPTER I Duality of finite-dimensional vector spaces 1 Dual space Let E be a finite-dimensional vector space over a field K The vector space of linear maps E K is denoted by E, so E = L(E, K) This vector
More informationMATH 644: Regression Analysis Methods
MATH 644: Regression Analysis Methods FINAL EXAM Fall, 2012 INSTRUCTIONS TO STUDENTS: 1. This test contains SIX questions. It comprises ELEVEN printed pages. 2. Answer ALL questions for a total of 100
More informationAssignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work.
Assignment 1 Math 5341 Linear Algebra Review Give complete answers to each of the following questions Show all of your work Note: You might struggle with some of these questions, either because it has
More informationON D-OPTIMAL DESIGNS FOR ESTIMATING SLOPE
Sankhyā : The Indian Journal of Statistics 999, Volume 6, Series B, Pt. 3, pp. 488 495 ON D-OPTIMAL DESIGNS FOR ESTIMATING SLOPE By S. HUDA and A.A. AL-SHIHA King Saud University, Riyadh, Saudi Arabia
More informationJackknife Empirical Likelihood Test for Equality of Two High Dimensional Means
Jackknife Empirical Likelihood est for Equality of wo High Dimensional Means Ruodu Wang, Liang Peng and Yongcheng Qi 2 Abstract It has been a long history to test the equality of two multivariate means.
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationIntertibility and spectrum of the multiplication operator on the space of square-summable sequences
Intertibility and spectrum of the multiplication operator on the space of square-summable sequences Objectives Establish an invertibility criterion and calculate the spectrum of the multiplication operator
More informationOn corrections of classical multivariate tests for high-dimensional data
On corrections of classical multivariate tests for high-dimensional data Jian-feng Yao with Zhidong Bai, Dandan Jiang, Shurong Zheng Overview Introduction High-dimensional data and new challenge in statistics
More informationMODULE 8 Topics: Null space, range, column space, row space and rank of a matrix
MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x
More information1. Addition: To every pair of vectors x, y X corresponds an element x + y X such that the commutative and associative properties hold
Appendix B Y Mathematical Refresher This appendix presents mathematical concepts we use in developing our main arguments in the text of this book. This appendix can be read in the order in which it appears,
More informationON ASSOUAD S EMBEDDING TECHNIQUE
ON ASSOUAD S EMBEDDING TECHNIQUE MICHAL KRAUS Abstract. We survey the standard proof of a theorem of Assouad stating that every snowflaked version of a doubling metric space admits a bi-lipschitz embedding
More informationJournal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error
Journal of Multivariate Analysis 00 (009) 305 3 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Sphericity test in a GMANOVA MANOVA
More informationA note on the equality of the BLUPs for new observations under two linear models
ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 14, 2010 A note on the equality of the BLUPs for new observations under two linear models Stephen J Haslett and Simo Puntanen Abstract
More informationTesting Some Covariance Structures under a Growth Curve Model in High Dimension
Department of Mathematics Testing Some Covariance Structures under a Growth Curve Model in High Dimension Muni S. Srivastava and Martin Singull LiTH-MAT-R--2015/03--SE Department of Mathematics Linköping
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More informationMultiplicative Perturbation Bounds of the Group Inverse and Oblique Projection
Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group
More informationMATH FINAL EXAM REVIEW HINTS
MATH 109 - FINAL EXAM REVIEW HINTS Answer: Answer: 1. Cardinality (1) Let a < b be two real numbers and define f : (0, 1) (a, b) by f(t) = (1 t)a + tb. (a) Prove that f is a bijection. (b) Prove that any
More informationMATH 22A: LINEAR ALGEBRA Chapter 4
MATH 22A: LINEAR ALGEBRA Chapter 4 Jesús De Loera, UC Davis November 30, 2012 Orthogonality and Least Squares Approximation QUESTION: Suppose Ax = b has no solution!! Then what to do? Can we find an Approximate
More informationarxiv: v1 [math.fa] 19 Jul 2009
Spectral radius of Hadamard product versus conventional product for non-negative matrices arxiv:0907.3312v1 [math.fa] 19 Jul 2009 Abstract Koenraad M.R. Audenaert Dept. of Mathematics, Royal Holloway,
More informationVectors and Matrices Statistics with Vectors and Matrices
Vectors and Matrices Statistics with Vectors and Matrices Lecture 3 September 7, 005 Analysis Lecture #3-9/7/005 Slide 1 of 55 Today s Lecture Vectors and Matrices (Supplement A - augmented with SAS proc
More informationOPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION
OPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION J.A. Cuesta-Albertos 1, C. Matrán 2 and A. Tuero-Díaz 1 1 Departamento de Matemáticas, Estadística y Computación. Universidad de Cantabria.
More informationCS 143 Linear Algebra Review
CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see
More informationOn Riesz-Fischer sequences and lower frame bounds
On Riesz-Fischer sequences and lower frame bounds P. Casazza, O. Christensen, S. Li, A. Lindner Abstract We investigate the consequences of the lower frame condition and the lower Riesz basis condition
More informationSection 3.9. Matrix Norm
3.9. Matrix Norm 1 Section 3.9. Matrix Norm Note. We define several matrix norms, some similar to vector norms and some reflecting how multiplication by a matrix affects the norm of a vector. We use matrix
More informationPrincipal Components Theory Notes
Principal Components Theory Notes Charles J. Geyer August 29, 2007 1 Introduction These are class notes for Stat 5601 (nonparametrics) taught at the University of Minnesota, Spring 2006. This not a theory
More informationMath Linear Algebra II. 1. Inner Products and Norms
Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,
More informationOn Expected Gaussian Random Determinants
On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose
More informationSome recent results on controllability of coupled parabolic systems: Towards a Kalman condition
Some recent results on controllability of coupled parabolic systems: Towards a Kalman condition F. Ammar Khodja Clermont-Ferrand, June 2011 GOAL: 1 Show the important differences between scalar and non
More informationVariable Selection for Highly Correlated Predictors
Variable Selection for Highly Correlated Predictors Fei Xue and Annie Qu arxiv:1709.04840v1 [stat.me] 14 Sep 2017 Abstract Penalty-based variable selection methods are powerful in selecting relevant covariates
More informationBahadur representations for bootstrap quantiles 1
Bahadur representations for bootstrap quantiles 1 Yijun Zuo Department of Statistics and Probability, Michigan State University East Lansing, MI 48824, USA zuo@msu.edu 1 Research partially supported by
More informationQUASI-DIAGONALIZABLE AND CONGRUENCE-NORMAL MATRICES. 1. Introduction. A matrix A C n n is normal if AA = A A. A is said to be conjugate-normal if
QUASI-DIAGONALIZABLE AND CONGRUENCE-NORMAL MATRICES H. FAßBENDER AND KH. D. IKRAMOV Abstract. A matrix A C n n is unitarily quasi-diagonalizable if A can be brought by a unitary similarity transformation
More informationQuantum Statistics -First Steps
Quantum Statistics -First Steps Michael Nussbaum 1 November 30, 2007 Abstract We will try an elementary introduction to quantum probability and statistics, bypassing the physics in a rapid first glance.
More informationA revisit to a reverse-order law for generalized inverses of a matrix product and its variations
A revisit to a reverse-order law for generalized inverses of a matrix product and its variations Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081, China Abstract. For a pair
More informationk times l times n times
1. Tensors on vector spaces Let V be a finite dimensional vector space over R. Definition 1.1. A tensor of type (k, l) on V is a map T : V V R k times l times which is linear in every variable. Example
More information