Supplementary Materials for Tensor Envelope Partial Least Squares Regression
Xin Zhang and Lexin Li

Florida State University and University of California, Berkeley

1 Proofs and Technical Details

Proof of Lemma 1

Proof. From the vectorized linear model (3), we can see that $\mathbf{B}_{(m+1)}$ is the regression coefficient matrix for the multivariate linear regression of $\mathbf{Y}$ on $\mathrm{vec}(\mathbf{X})$. Therefore,
$$\mathbf{B}_{(m+1)}^{\top} = \mathrm{cov}^{-1}\{\mathrm{vec}(\mathbf{X})\}\,\mathrm{cov}\{\mathrm{vec}(\mathbf{X}), \mathbf{Y}\} = (\boldsymbol{\Sigma}_m^{-1} \otimes \cdots \otimes \boldsymbol{\Sigma}_1^{-1})\,\mathbf{C}_{(m+1)}^{\top}.$$
It follows from the basic property of the Tucker operator that $\mathbf{B} = [\![\mathbf{C}; \boldsymbol{\Sigma}_1^{-1}, \ldots, \boldsymbol{\Sigma}_m^{-1}, \mathbf{I}_r]\!]$.

Proof of Lemma 2

Proof. Recall that in the tensor PLS regression, $\mathrm{E}(\mathbf{Y} \mid \mathbf{X}) = \mathbf{B}_{(m+1)}\mathrm{vec}(\mathbf{X}) = \mathrm{E}(\mathbf{Y} \mid \mathbf{T}) = \boldsymbol{\Psi}_{(m+1)}\mathrm{vec}(\mathbf{T})$, where $\mathbf{T} = [\![\mathbf{X}; \mathbf{W}_1^{\top}, \ldots, \mathbf{W}_m^{\top}]\!]$. Therefore,
$$\mathbf{B}_{(m+1)}\mathrm{vec}(\mathbf{X}) = \boldsymbol{\Psi}_{(m+1)}\mathrm{vec}(\mathbf{T}) = \boldsymbol{\Psi}_{(m+1)}\mathrm{vec}([\![\mathbf{X}; \mathbf{W}_1^{\top}, \ldots, \mathbf{W}_m^{\top}]\!]) = \boldsymbol{\Psi}_{(m+1)}(\mathbf{W}_m^{\top} \otimes \cdots \otimes \mathbf{W}_1^{\top})\,\mathrm{vec}(\mathbf{X}) = ([\![\boldsymbol{\Psi}; \mathbf{W}_1, \ldots, \mathbf{W}_m, \mathbf{I}_r]\!])_{(m+1)}\mathrm{vec}(\mathbf{X}).$$
This implies that $\mathbf{B} = [\![\boldsymbol{\Psi}; \mathbf{W}_1, \ldots, \mathbf{W}_m, \mathbf{I}_r]\!]$ under the PLS regression assumption, and hence $\widehat{\mathbf{B}}_{\mathrm{PLS}} = [\![\widehat{\boldsymbol{\Psi}}; \widehat{\mathbf{W}}_1, \ldots, \widehat{\mathbf{W}}_m, \mathbf{I}_r]\!]$. The rest of the proof follows from the fact that $\mathrm{cov}\{\mathrm{vec}(\mathbf{T})\} = \mathbf{W}_m^{\top}\boldsymbol{\Sigma}_m\mathbf{W}_m \otimes \cdots \otimes \mathbf{W}_1^{\top}\boldsymbol{\Sigma}_1\mathbf{W}_1$ and then applying Lemma 1 to the regression of $\mathbf{Y}$ on $\mathbf{T}$.
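The key step in the proof of Lemma 2 is the vectorization identity $\mathrm{vec}([\![\mathbf{X}; \mathbf{W}_1^{\top}, \ldots, \mathbf{W}_m^{\top}]\!]) = (\mathbf{W}_m^{\top} \otimes \cdots \otimes \mathbf{W}_1^{\top})\mathrm{vec}(\mathbf{X})$, which can be checked numerically. Below is a minimal NumPy sketch, not part of the original supplement; the helper mode_k_product and the chosen dimensions are illustrative, and $\mathrm{vec}(\cdot)$ is taken column-major so the Kronecker ordering matches.

```python
import numpy as np

def mode_k_product(A, M, k):
    """Mode-k product A x_k M: multiply the mode-k fibers of tensor A by M."""
    A_moved = np.moveaxis(A, k, 0)               # bring mode k to the front
    out = np.tensordot(M, A_moved, axes=(1, 0))  # equals M @ A_(k), mode-first
    return np.moveaxis(out, 0, k)                # restore the mode ordering

rng = np.random.default_rng(0)
p, u = (4, 3, 5), (2, 2, 3)
X = rng.standard_normal(p)
W = [rng.standard_normal((p[k], u[k])) for k in range(3)]

# T = [[X; W1', W2', W3']]: reduce each mode of X by the corresponding W_k
T = X.copy()
for k in range(3):
    T = mode_k_product(T, W[k].T, k)

# Column-major vec(.) makes vec(T) = (W3' kron W2' kron W1') vec(X)
vec = lambda A: A.reshape(-1, order="F")
K = np.kron(W[2].T, np.kron(W[1].T, W[0].T))
print(np.allclose(vec(T), K @ vec(X)))  # True
```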
Proof of Proposition 1

Proof. From Lemma 2, we have
$$\widehat{\mathbf{B}}_{\mathrm{PLS}} = [\![\widehat{\boldsymbol{\Psi}}; \widehat{\mathbf{W}}_1, \ldots, \widehat{\mathbf{W}}_m, \mathbf{I}_r]\!] = [\![\,[\![\widehat{\mathbf{C}}_{\mathbf{T}}; (\widehat{\mathbf{W}}_1^{\top}\widehat{\boldsymbol{\Sigma}}_1\widehat{\mathbf{W}}_1)^{-1}, \ldots, (\widehat{\mathbf{W}}_m^{\top}\widehat{\boldsymbol{\Sigma}}_m\widehat{\mathbf{W}}_m)^{-1}, \mathbf{I}_r]\!]; \widehat{\mathbf{W}}_1, \ldots, \widehat{\mathbf{W}}_m, \mathbf{I}_r\,]\!]$$
$$= [\![\widehat{\mathbf{C}}_{\mathbf{T}}; \widehat{\mathbf{W}}_1(\widehat{\mathbf{W}}_1^{\top}\widehat{\boldsymbol{\Sigma}}_1\widehat{\mathbf{W}}_1)^{-1}, \ldots, \widehat{\mathbf{W}}_m(\widehat{\mathbf{W}}_m^{\top}\widehat{\boldsymbol{\Sigma}}_m\widehat{\mathbf{W}}_m)^{-1}, \mathbf{I}_r]\!]$$
$$= [\![\widehat{\mathbf{C}}; \widehat{\mathbf{W}}_1(\widehat{\mathbf{W}}_1^{\top}\widehat{\boldsymbol{\Sigma}}_1\widehat{\mathbf{W}}_1)^{-1}\widehat{\mathbf{W}}_1^{\top}, \ldots, \widehat{\mathbf{W}}_m(\widehat{\mathbf{W}}_m^{\top}\widehat{\boldsymbol{\Sigma}}_m\widehat{\mathbf{W}}_m)^{-1}\widehat{\mathbf{W}}_m^{\top}, \mathbf{I}_r]\!],$$
where the last equality follows from $\widehat{\mathbf{C}}_{\mathbf{T}} = [\![\widehat{\mathbf{C}}; \widehat{\mathbf{W}}_1^{\top}, \ldots, \widehat{\mathbf{W}}_m^{\top}, \mathbf{I}_r]\!]$. The conclusion then follows from $\widehat{\mathbf{W}}_k(\widehat{\mathbf{W}}_k^{\top}\widehat{\boldsymbol{\Sigma}}_k\widehat{\mathbf{W}}_k)^{-1}\widehat{\mathbf{W}}_k^{\top} = \mathbf{P}_{\widehat{\mathbf{W}}_k(\widehat{\boldsymbol{\Sigma}}_k)}\widehat{\boldsymbol{\Sigma}}_k^{-1}$, $k = 1, \ldots, m$, and $\widehat{\mathbf{B}}_{\mathrm{OLS}} = [\![\widehat{\mathbf{C}}; \widehat{\boldsymbol{\Sigma}}_1^{-1}, \ldots, \widehat{\boldsymbol{\Sigma}}_m^{-1}, \mathbf{I}_r]\!]$.

Proof of Proposition 2

Proof. First, $\mathbf{X}_{\mathbf{Q}} \perp \mathbf{X}_{\mathbf{P}}$ implies
$$\mathbf{0} = \mathrm{cov}\{\mathrm{vec}(\mathbf{X}_{\mathbf{Q}}), \mathrm{vec}(\mathbf{X}_{\mathbf{P}})\} = \boldsymbol{\Sigma}_m \otimes \cdots \otimes \mathbf{Q}_k\boldsymbol{\Sigma}_k\mathbf{P}_k \otimes \cdots \otimes \boldsymbol{\Sigma}_1,$$
which holds if and only if $\mathbf{Q}_k\boldsymbol{\Sigma}_k\mathbf{P}_k = \mathbf{0}$. By the definition of a reducing subspace (Cook et al., 2010), $\mathbf{Q}_k\boldsymbol{\Sigma}_k\mathbf{P}_k = \mathbf{0}$ if and only if $\mathrm{span}(\mathbf{P}_k) = \mathrm{span}(\boldsymbol{\Gamma}_k)$ is a reducing subspace of $\boldsymbol{\Sigma}_k$, i.e., $\boldsymbol{\Sigma}_k = \boldsymbol{\Gamma}_k\boldsymbol{\Omega}_k\boldsymbol{\Gamma}_k^{\top} + \boldsymbol{\Gamma}_{0k}\boldsymbol{\Omega}_{0k}\boldsymbol{\Gamma}_{0k}^{\top}$. Second, we substitute $\mathbf{X} = \mathbf{X}_{\mathbf{P}} + \mathbf{X}_{\mathbf{Q}}$ into (3) and get
$$\mathbf{Y} = \mathbf{B}_{(m+1)}\mathrm{vec}(\mathbf{X}_{\mathbf{P}}) + \mathbf{B}_{(m+1)}\mathrm{vec}(\mathbf{X}_{\mathbf{Q}}) + \boldsymbol{\varepsilon} = \mathbf{B}_{\mathbf{P}(m+1)}\mathrm{vec}(\mathbf{X}_{\mathbf{P}}) + \mathbf{B}_{\mathbf{Q}(m+1)}\mathrm{vec}(\mathbf{X}_{\mathbf{Q}}) + \boldsymbol{\varepsilon}.$$
Therefore, $\mathbf{Y} \perp \mathbf{X}_{\mathbf{Q}} \mid \mathbf{X}_{\mathbf{P}}$ implies $\mathbf{B}_{\mathbf{Q}} = \mathbf{0}$, which is equivalent to $\mathbf{B} = \mathbf{B}_{\mathbf{P}} = \mathbf{B} \times_k \boldsymbol{\Gamma}_k\boldsymbol{\Gamma}_k^{\top}$ for each $k$, which gives the parametrization $\mathbf{B} = [\![\boldsymbol{\Theta}; \boldsymbol{\Gamma}_1, \ldots, \boldsymbol{\Gamma}_m, \mathbf{I}_r]\!]$.
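The first part of the proof of Proposition 2 rests on the equivalence between $\mathbf{Q}_k\boldsymbol{\Sigma}_k\mathbf{P}_k = \mathbf{0}$ and the envelope decomposition of $\boldsymbol{\Sigma}_k$. The following is a minimal NumPy sketch of one direction (the decomposition implies $\mathbf{Q}_k\boldsymbol{\Sigma}_k\mathbf{P}_k = \mathbf{0}$), not from the original supplement; the dimensions and the helper spd are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, u = 6, 2

# Semi-orthogonal basis Gamma (p x u) and its orthogonal completion Gamma0
G, _ = np.linalg.qr(rng.standard_normal((p, p)))
Gamma, Gamma0 = G[:, :u], G[:, u:]

def spd(d):
    """A random symmetric positive-definite matrix (illustrative helper)."""
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

# Sigma = Gamma Omega Gamma' + Gamma0 Omega0 Gamma0', so span(Gamma) reduces Sigma
Sigma = Gamma @ spd(u) @ Gamma.T + Gamma0 @ spd(p - u) @ Gamma0.T

P = Gamma @ Gamma.T        # projection onto span(Gamma)
Q = np.eye(p) - P
print(np.allclose(Q @ Sigma @ P, 0))  # True: Q Sigma P = 0
```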
Proof of Proposition 3

Proof. We first show $\mathcal{T}_{\boldsymbol{\Sigma}_{\mathbf{X}}}(\mathbf{B}) \subseteq \mathcal{E}_{\boldsymbol{\Sigma}_m}(\mathcal{B}_m) \otimes \cdots \otimes \mathcal{E}_{\boldsymbol{\Sigma}_1}(\mathcal{B}_1)$. From Definition 2, this means we need to show that (a) $\mathcal{E}_{\boldsymbol{\Sigma}_m}(\mathcal{B}_m) \otimes \cdots \otimes \mathcal{E}_{\boldsymbol{\Sigma}_1}(\mathcal{B}_1)$ is a reducing subspace of $\boldsymbol{\Sigma}_{\mathbf{X}}$, and (b) it contains $\mathrm{span}(\mathbf{B}_{(m+1)}^{\top})$. Part (a) is straightforward to see from $\boldsymbol{\Sigma}_{\mathbf{X}} = \boldsymbol{\Sigma}_m \otimes \cdots \otimes \boldsymbol{\Sigma}_1$ and the fact that $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$ is a reducing subspace of $\boldsymbol{\Sigma}_k$ for each $k$. To show (b), we can write $\mathbf{B}_{(k)} = \mathbf{P}_k\mathbf{B}_{(k)}$, where $\mathbf{P}_k$ is the projection onto $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$, because $\mathrm{span}(\mathbf{B}_{(k)}) \subseteq \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$ by definition. This further implies that $\mathbf{B} = \mathbf{B} \times_k \mathbf{P}_k$, $k = 1, \ldots, m$, and hence $\mathbf{B} = \mathbf{B} \times_1 \mathbf{P}_1 \times_2 \cdots \times_m \mathbf{P}_m$. Taking the mode-$(m+1)$ matricization on both sides of the last equation, we obtain (b).

We next show $\mathcal{T}_{\boldsymbol{\Sigma}_{\mathbf{X}}}(\mathbf{B}) \supseteq \mathcal{E}_{\boldsymbol{\Sigma}_m}(\mathcal{B}_m) \otimes \cdots \otimes \mathcal{E}_{\boldsymbol{\Sigma}_1}(\mathcal{B}_1)$. By definition, we can write $\mathcal{T}_{\boldsymbol{\Sigma}_{\mathbf{X}}}(\mathbf{B}) = \mathcal{E}_m \otimes \cdots \otimes \mathcal{E}_1$ for some $\mathcal{E}_k \subseteq \mathbb{R}^{p_k}$, $k = 1, \ldots, m$. It remains to show that $\mathcal{E}_k \supseteq \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$. We achieve this by showing that (c) $\mathcal{E}_k$ is a reducing subspace of $\boldsymbol{\Sigma}_k$ and (d) $\mathcal{E}_k$ contains $\mathrm{span}(\mathbf{B}_{(k)})$, and then noticing that $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$ is the smallest subspace satisfying (c) and (d). Note that (d) can be obtained directly from Proposition 2. To get (c), we recall that $\mathcal{T}_{\boldsymbol{\Sigma}_{\mathbf{X}}}(\mathbf{B})$ is a reducing subspace of $\boldsymbol{\Sigma}_{\mathbf{X}} = \boldsymbol{\Sigma}_m \otimes \cdots \otimes \boldsymbol{\Sigma}_1$, and thus
$$\mathbf{P}_{\mathcal{E}_m \otimes \cdots \otimes \mathcal{E}_1}\,(\boldsymbol{\Sigma}_m \otimes \cdots \otimes \boldsymbol{\Sigma}_1)\,\mathbf{Q}_{\mathcal{E}_m \otimes \cdots \otimes \mathcal{E}_1} = \mathbf{0},$$
where $\mathbf{P}_{\mathcal{E}_m \otimes \cdots \otimes \mathcal{E}_1} = \mathbf{P}_{\mathcal{E}_m} \otimes \cdots \otimes \mathbf{P}_{\mathcal{E}_1}$ and $\mathbf{Q}_{\mathcal{E}_m \otimes \cdots \otimes \mathcal{E}_1} = \mathbf{I} - \mathbf{P}_{\mathcal{E}_m \otimes \cdots \otimes \mathcal{E}_1}$. Expanding the above equation, we get
$$\mathbf{P}_{\mathcal{E}_m}\boldsymbol{\Sigma}_m \otimes \cdots \otimes \mathbf{P}_{\mathcal{E}_1}\boldsymbol{\Sigma}_1 - \mathbf{P}_{\mathcal{E}_m}\boldsymbol{\Sigma}_m\mathbf{P}_{\mathcal{E}_m} \otimes \cdots \otimes \mathbf{P}_{\mathcal{E}_1}\boldsymbol{\Sigma}_1\mathbf{P}_{\mathcal{E}_1} = \mathbf{0},$$
which implies the following equality by right-multiplying by $\mathbf{P}_{\mathcal{E}_m} \otimes \cdots \otimes \mathbf{P}_{\mathcal{E}_2} \otimes \mathbf{I}_{p_1}$:
$$\mathbf{P}_{\mathcal{E}_m}\boldsymbol{\Sigma}_m\mathbf{P}_{\mathcal{E}_m} \otimes \cdots \otimes \mathbf{P}_{\mathcal{E}_2}\boldsymbol{\Sigma}_2\mathbf{P}_{\mathcal{E}_2} \otimes \mathbf{P}_{\mathcal{E}_1}\boldsymbol{\Sigma}_1\mathbf{Q}_{\mathcal{E}_1} = \mathbf{0},$$
which implies that $\mathcal{E}_1$ reduces $\boldsymbol{\Sigma}_1$. Similarly, we get that $\mathcal{E}_k$ reduces $\boldsymbol{\Sigma}_k$ for all $k = 1, \ldots, m$. This completes the proof.

Proof of Lemma 3

Proof. From Algorithm 4, we know that $\mathbf{w}_{k,s}$ is the dominant eigenvector of $\mathbf{C}_{k,s-1}\mathbf{C}_{k,s-1}^{\top}$, which equals $\mathbf{Q}_{k,s-1}\mathbf{C}_{k,0}\mathbf{C}_{k,0}^{\top}\mathbf{Q}_{k,s-1}$. The conclusion follows from noticing that $\mathbf{C}_k = \mathbf{C}_{k,0}\mathbf{C}_{k,0}^{\top}$.
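Lemma 3 and Theorem 1 together characterize the directions produced by Algorithm 4: within each mode $k$, $\mathbf{w}_{k,1}$ is the dominant eigenvector of $\mathbf{C}_k$, and each later direction is the dominant eigenvector of $\mathbf{C}_k$ deflated by the projection $\mathbf{Q}_{k,s-1}$. The following is a minimal NumPy sketch of this deflation, not from the original supplement; the function name pls_directions and the random inputs standing in for $\mathbf{C}_k$ and $\boldsymbol{\Sigma}_k$ are illustrative assumptions.

```python
import numpy as np

def pls_directions(C, Sigma, d):
    """Mode-k deflation of Lemma 3 / Theorem 1: w_1 is the dominant
    eigenvector of C; for s > 1, w_s is the dominant eigenvector of
    Q_{s-1} C Q_{s-1}, where Q_{s-1} projects onto the orthogonal
    complement of span(Sigma w_1, ..., Sigma w_{s-1})."""
    p = C.shape[0]
    W = np.zeros((p, 0))
    for s in range(d):
        if s == 0:
            Q = np.eye(p)
        else:
            basis, _ = np.linalg.qr(Sigma @ W)  # orthonormal basis of span(Sigma W)
            Q = np.eye(p) - basis @ basis.T     # projection onto its complement
        vals, vecs = np.linalg.eigh(Q @ C @ Q)
        W = np.column_stack([W, vecs[:, -1]])   # eigh sorts eigenvalues ascending
    return W

rng = np.random.default_rng(2)
p, d = 8, 3
A = rng.standard_normal((p, p)); C = A @ A.T                   # stand-in for C_k
B = rng.standard_normal((p, p)); Sigma = B @ B.T + p * np.eye(p)
W = pls_directions(C, Sigma, d)                                 # p x d directions
```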
Proof of Theorem 1

Proof. From Lemma 3, we see that $\mathbf{w}_{k,1}$ is the first eigenvector of $\mathbf{C}_k$. Then for $s > 1$, $\mathbf{w}_{k,s}$ is the first eigenvector of $\mathbf{Q}_{k,s-1}\mathbf{C}_k\mathbf{Q}_{k,s-1}$, where $\mathbf{Q}_{k,s-1}$ is the projection onto the orthogonal complement of $\mathrm{span}(\boldsymbol{\Sigma}_k\mathbf{w}_{k,1}, \ldots, \boldsymbol{\Sigma}_k\mathbf{w}_{k,s-1})$. Following the proof of Proposition 4.1 in Cook et al. (2013), we have
$$\mathcal{W}_{k,0} \subseteq \cdots \subseteq \mathcal{W}_{k,u_k} = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{C}_k) = \mathcal{W}_{k,u_k+1} = \cdots = \mathcal{W}_{k,p_k},$$
where $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{C}_k)$ is the $\boldsymbol{\Sigma}_k$-envelope of $\mathrm{span}(\mathbf{C}_k)$. We next need to show $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k) = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{C}_k)$. From Lemma 3, we see that $\mathrm{span}(\mathbf{C}_k) = \mathrm{span}(\mathbf{C}_{(k)})$. From Lemma 1, we see that $\mathrm{span}(\mathbf{B}_{(k)}) = \mathrm{span}(\boldsymbol{\Sigma}_k^{-1}\mathbf{C}_{(k)})$. Finally, from Proposition 2.4 of Cook et al. (2010), we have $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k) = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\boldsymbol{\Sigma}_k^{-1}\mathcal{B}_k)$, and thus $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k) = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\boldsymbol{\Sigma}_k^{-1}\mathcal{C}_k) = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{C}_k)$.

Proof of Theorem 2

Proof. From Theorem 1, if $d_k$ is chosen as $u_k$, the population value satisfies $\mathrm{span}(\mathbf{W}_k) = \mathcal{W}_{k,u_k} = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$. Since the sample version of Algorithm 4 is based on eigen-decomposition and the $\sqrt{n}$-consistent sample covariance matrices $\widehat{\mathbf{C}}$, $\widehat{\boldsymbol{\Sigma}}_{\mathbf{Y}}$, and $\widehat{\boldsymbol{\Sigma}}_k$, $k = 1, \ldots, m$, it is clear that $\widehat{\mathbf{W}}_k$ is $\sqrt{n}$-consistent for $\mathbf{W}_k$. Hence $\widehat{\mathbf{W}}_k$ is $\sqrt{n}$-consistent for the envelope $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$. For $d_k \geq u_k$, we have $\mathcal{W}_{k,d_k} = \mathcal{W}_{k,u_k} = \mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$; therefore the projection $\mathbf{P}_{\widehat{\mathbf{W}}_k(\widehat{\boldsymbol{\Sigma}}_k)}$ is $\sqrt{n}$-consistent for $\mathbf{P}_{\mathbf{W}_k(\boldsymbol{\Sigma}_k)}$. Since $\mathbf{W}_k$ is a semi-orthogonal basis for the envelope $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$, which is a reducing subspace of $\boldsymbol{\Sigma}_k$, we have $\mathbf{P}_{\mathbf{W}_k(\boldsymbol{\Sigma}_k)} = \mathbf{P}_{\mathbf{W}_k}$ by definition. Therefore, $\mathbf{P}_{\widehat{\mathbf{W}}_k(\widehat{\boldsymbol{\Sigma}}_k)}$ is a $\sqrt{n}$-consistent estimator of the projection onto the envelope $\mathcal{E}_{\boldsymbol{\Sigma}_k}(\mathcal{B}_k)$. Then from Proposition 1, we recall that $\widehat{\mathbf{B}}_{\mathrm{PLS}} = \widehat{\mathbf{B}}_{\mathrm{OLS}} \times_1 \mathbf{P}_{\widehat{\mathbf{W}}_1(\widehat{\boldsymbol{\Sigma}}_1)} \times_2 \cdots \times_m \mathbf{P}_{\widehat{\mathbf{W}}_m(\widehat{\boldsymbol{\Sigma}}_m)}$. The $\sqrt{n}$-consistency of $\widehat{\mathbf{B}}_{\mathrm{PLS}}$ then follows from the $\sqrt{n}$-consistency of $\widehat{\mathbf{B}}_{\mathrm{OLS}}$ and of $\mathbf{P}_{\widehat{\mathbf{W}}_k(\widehat{\boldsymbol{\Sigma}}_k)}$.
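Proposition 1 and Theorem 2 express the PLS estimator as the OLS estimator projected mode by mode, $\widehat{\mathbf{B}}_{\mathrm{PLS}} = \widehat{\mathbf{B}}_{\mathrm{OLS}} \times_1 \mathbf{P}_{\widehat{\mathbf{W}}_1(\widehat{\boldsymbol{\Sigma}}_1)} \times_2 \cdots \times_m \mathbf{P}_{\widehat{\mathbf{W}}_m(\widehat{\boldsymbol{\Sigma}}_m)}$. A minimal NumPy sketch of this composition follows; it is not from the original supplement, and the helper names proj_sigma and tensor_pls_from_ols are illustrative. The response mode is left untouched, matching the identity factor $\mathbf{I}_r$.

```python
import numpy as np

def mode_k_product(A, M, k):
    A_moved = np.moveaxis(A, k, 0)
    out = np.tensordot(M, A_moved, axes=(1, 0))
    return np.moveaxis(out, 0, k)

def proj_sigma(W, Sigma):
    """P_{W(Sigma)} = W (W' Sigma W)^{-1} W' Sigma, the projection onto
    span(W) with respect to the Sigma inner product."""
    return W @ np.linalg.solve(W.T @ Sigma @ W, W.T @ Sigma)

def tensor_pls_from_ols(B_ols, Ws, Sigmas):
    """B_PLS = B_OLS x_1 P_{W1(S1)} x_2 ... x_m P_{Wm(Sm)}; the response
    mode (m+1) of B_ols is left untouched."""
    B = B_ols.copy()
    for k, (W, S) in enumerate(zip(Ws, Sigmas)):
        B = mode_k_product(B, proj_sigma(W, S), k)
    return B

rng = np.random.default_rng(4)
p, u, r = (5, 4), (2, 2), 3
B_ols = rng.standard_normal(p + (r,))
Ws = [np.linalg.qr(rng.standard_normal((p[k], u[k])))[0] for k in range(2)]
Sigmas = [np.eye(p[k]) + np.ones((p[k], p[k])) for k in range(2)]
B_pls = tensor_pls_from_ols(B_ols, Ws, Sigmas)  # same shape as B_ols
```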
2 Additional Simulations

We consider an additional simulation example, with a univariate response and a 3-way tensor predictor of higher dimension and rank. Specifically, the simulation setup is similar to that in Section 5.2, with two main changes. The first is that the response is now a scalar, which allows a more direct comparison with the CP method of Zhou et al. (2013) that was designed for a univariate response. The second is that we now consider both the original setup of a predictor tensor $\mathbf{X} \in \mathbb{R}^{20 \times 20 \times 20}$ with a core tensor $\boldsymbol{\Theta} \in \mathbb{R}^{2 \times 2 \times 2}$ ($p_k = 20$, $u_k = 2$), and a new setup of a predictor tensor $\mathbf{X} \in \mathbb{R}^{40 \times 40 \times 40}$ with a core tensor $\boldsymbol{\Theta} \in \mathbb{R}^{5 \times 5 \times 5}$ ($p_k = 40$, $u_k = 5$). The latter has a higher predictor dimension and rank, and is comparable to the dimension of the ADHD real data example in Section 6.2. The sample size for the training and testing data is still fixed at $n = 200$.

Table S1 summarizes the prediction and estimation results based on 100 data replications. It is again clearly seen that the proposed tensor envelope PLS method is more competitive than the alternative solutions in terms of both prediction and estimation accuracy across all model scenarios.

[Table S1 (numeric entries not recoverable from the source): Univariate response and 3-way predictor. Performance of the OLS, CP, and tensor envelope PLS estimators with true (TEPLS) and estimated (TEPLS-CV) envelope dimensions, under model scenarios I, II, and III and settings $(p_k, u_k) = (20, 2)$ and $(40, 5)$. Reported are the average and standard error (in parentheses) of the prediction mean squared error evaluated on an independent testing data set, and of the estimation error, all based on 100 data replications.]
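For readers who wish to reproduce a setup of this shape, the following is a rough NumPy sketch of the data-generating structure described above ($p_k = 20$, $u_k = 2$, $n = 200$, scalar response, coefficient tensor $\mathbf{B} = [\![\boldsymbol{\Theta}; \boldsymbol{\Gamma}_1, \boldsymbol{\Gamma}_2, \boldsymbol{\Gamma}_3]\!]$). The precise generative distributions follow Section 5.2 of the main paper and are not given in this supplement, so standard-normal predictors and errors are used here purely as placeholders.

```python
import numpy as np

def mode_k_product(A, M, k):
    A_moved = np.moveaxis(A, k, 0)
    out = np.tensordot(M, A_moved, axes=(1, 0))
    return np.moveaxis(out, 0, k)

rng = np.random.default_rng(5)
p, u, n = 20, 2, 200   # per-mode dimension, envelope dimension, sample size

# Coefficient tensor B = [[Theta; Gamma_1, Gamma_2, Gamma_3]] with a u x u x u core
Theta = rng.standard_normal((u, u, u))
Gammas = [np.linalg.qr(rng.standard_normal((p, u)))[0] for _ in range(3)]
B = Theta.copy()
for k in range(3):
    B = mode_k_product(B, Gammas[k], k)

# Placeholder generative choices: standard-normal predictors and errors
X = rng.standard_normal((n, p, p, p))
Y = np.tensordot(X, B, axes=3) + rng.standard_normal(n)  # scalar response
```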
References

Cook, R. D., Helland, I. S., and Su, Z. (2013). Envelopes and partial least squares regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(5), 851-877.

Cook, R. D., Li, B., and Chiaromonte, F. (2010). Envelope models for parsimonious and efficient multivariate linear regression (with discussion). Statistica Sinica, 20(3), 927-1010.

Zhou, H., Li, L., and Zhu, H. (2013). Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502), 540-552.
More information