Supplementary Materials for Fast envelope algorithms
R. Dennis Cook and Xin Zhang

Proof for Proposition

We first prove the following:

$$F(A_0) = \log|A_0^T M A_0| + \log|A_0^T (M + U)^{-1} A_0| \qquad \text{(A1)}$$
$$\leq \log|A_0^T M A_0| + \log|A_0^T M^{-1} A_0| \qquad \text{(A2)}$$
$$= 0, \qquad \text{(A3)}$$

where the inequality (A2) holds because $M > 0$ and $U \geq 0$, and hence $(M + U)^{-1} \leq M^{-1}$ and $|A_0^T (M + U)^{-1} A_0| \leq |A_0^T M^{-1} A_0|$. To show the equality (A3), we apply the lemma in the Appendix of Cook et al. (2013): $|A^T M A| = |M|\,|A_0^T M^{-1} A_0|$ for any $M > 0$ and any orthogonal basis $(A, A_0) \in \mathbb{R}^{p \times p}$. Therefore, in (A3),
$$\log|A_0^T M A_0| + \log|A_0^T M^{-1} A_0| = \log|A_0^T M A_0| + \log|A^T M A| - \log|M|,$$
which equals zero because $\operatorname{span}(A)$ is a reducing subspace of $M$.

Turning to the second part of the proposition, we decompose $U = u u^T$, where $u$ has full column rank, and decompose $(I + u^T M^{-1} u)^{-1} = b b^T$. Let $C = (A_0^T M^{-1} A_0)^{-1}$. Then, using the Woodbury identity for matrix inverses, i.e. $(D + X E X^T)^{-1} = D^{-1} - D^{-1} X (E^{-1} + X^T D^{-1} X)^{-1} X^T D^{-1}$, and a common determinant identity, i.e. $|D + X E X^T| = |D|\,|E|\,|E^{-1} + X^T D^{-1} X|$, both for square and invertible matrices $D$ and $E$, we have
$$\log|A_0^T (M + U)^{-1} A_0| = \log|A_0^T (M + u u^T)^{-1} A_0| = \log|A_0^T M^{-1} A_0 - A_0^T M^{-1} u\, b b^T u^T M^{-1} A_0| = \log|A_0^T M^{-1} A_0| + \log|I - b^T u^T M^{-1} A_0\, C\, A_0^T M^{-1} u b|.$$
Since $\operatorname{span}(A_0)$ is a reducing subspace of $M$, $A_0^T M^{-1} A_0 = (A_0^T M A_0)^{-1}$, and thus
$$\log|A_0^T (M + U)^{-1} A_0| = -\log|A_0^T M A_0| + \log|I - b^T u^T M^{-1} A_0\, C\, A_0^T M^{-1} u b|.$$
It follows that $F(A_0) = 0$ if and only if $b^T u^T M^{-1} A_0 = 0$. Since $b$ has full column rank, this holds if and only if $\operatorname{span}(M^{-1} u) \subseteq \operatorname{span}(A)$. Since $\operatorname{span}(A)$ reduces $M$, this holds if and only if $\operatorname{span}(u) \subseteq \operatorname{span}(A)$.

To prove $\mathcal{E}_M(U) = A\,\mathcal{E}_{A^T M A}(A^T U A)$, we first need to establish that $\operatorname{span}(A^T U A) \subseteq \operatorname{span}(A^T M A)$ (cf. the definition of an envelope). Since $\operatorname{span}(U) \subseteq \operatorname{span}(M)$, there is a matrix $B$ such that $U = M B$. Thus $\operatorname{span}(A^T U A) = \operatorname{span}(A^T U) = \operatorname{span}(A^T M B) \subseteq \operatorname{span}(A^T M) = \operatorname{span}(A^T M A)$. We next let $\mathcal{E}_1 = \mathcal{E}_{A^T M A}(A^T U A)$ when used as a subscript. The conclusion can be deduced from the following decomposition:
$$M = P_A M P_A + Q_A M Q_A = A (A^T M A) A^T + Q_A M Q_A = A (P_{\mathcal{E}_1} A^T M A\, P_{\mathcal{E}_1} + Q_{\mathcal{E}_1} A^T M A\, Q_{\mathcal{E}_1}) A^T + Q_A M Q_A = A P_{\mathcal{E}_1} A^T M A\, P_{\mathcal{E}_1} A^T + (A Q_{\mathcal{E}_1} A^T + Q_A)\, M\, (A Q_{\mathcal{E}_1} A^T + Q_A),$$
where the final equality holds because $A Q_{\mathcal{E}_1} A^T M Q_A = A Q_{\mathcal{E}_1} (A^T M A_0) A_0^T = 0$ and because $A P_{\mathcal{E}_1} A^T$ and $A Q_{\mathcal{E}_1} A^T + Q_A$ are orthogonal projections. It follows that $\operatorname{span}(A P_{\mathcal{E}_1} A^T) = A\,\mathcal{E}_{A^T M A}(A^T U A)$ is a reducing subspace of $M$ that contains $\operatorname{span}(U)$. The envelope equality $\mathcal{E}_M(U) = A\,\mathcal{E}_{A^T M A}(A^T U A)$ then follows from the minimality of $\mathcal{E}_{A^T M A}(A^T U A)$.

Footnote: R. Dennis Cook is Professor, School of Statistics, University of Minnesota, Minneapolis, MN (dennis@stat.umn.edu). Xin Zhang is Assistant Professor, Department of Statistics, Florida State University, Tallahassee, FL (henry@stat.fsu.edu).
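As a quick numerical sanity check on the identities used above, the determinant lemma $|A^T M A| = |M|\,|A_0^T M^{-1} A_0|$ and the conclusion that $F(A_0) = 0$ when $\operatorname{span}(A)$ reduces $M$ and $\operatorname{span}(U) \subseteq \operatorname{span}(A)$ can be verified on a toy construction. The sketch below is illustrative only (not part of the paper), assuming numpy; the dimensions $p, d$ and the random matrices are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 6, 2  # arbitrary illustrative sizes

# Orthogonal basis (A, A0) of R^p; A will span a d-dimensional reducing subspace of M.
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
A, A0 = Q[:, :d], Q[:, d:]

def spd(k):
    """A random symmetric positive definite k x k matrix."""
    X = rng.standard_normal((k, k))
    return X @ X.T + k * np.eye(k)

# M > 0 is block diagonal in the (A, A0) basis, so span(A) reduces M.
M = A @ spd(d) @ A.T + A0 @ spd(p - d) @ A0.T

# U = u u^T >= 0 with span(u) contained in span(A).
u = A @ rng.standard_normal((d, 1))
U = u @ u.T

logdet = lambda X: np.linalg.slogdet(X)[1]

# Determinant lemma (Cook et al., 2013): |A^T M A| = |M| * |A0^T M^{-1} A0|.
lhs = logdet(A.T @ M @ A)
rhs = logdet(M) + logdet(A0.T @ np.linalg.inv(M) @ A0)
print(np.isclose(lhs, rhs))  # True

# F(A0) = log|A0^T M A0| + log|A0^T (M + U)^{-1} A0| is exactly 0 here,
# because span(A) reduces M and span(U) lies inside span(A).
F = logdet(A0.T @ M @ A0) + logdet(A0.T @ np.linalg.inv(M + U) @ A0)
print(np.isclose(F, 0.0))  # True
```

Replacing $u$ above by a vector with a component outside $\operatorname{span}(A)$ makes $F(A_0)$ strictly negative, matching the "if and only if" condition in the proof.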
Proofs for the next two Propositions

We first establish the results about $\tilde u$. Recall that $\tilde u$ is the number of eigenvectors from the decomposition $M = \sum_{i=1}^p \lambda_i v_i v_i^T$ used in Step 1 of the Algorithm that are not orthogonal to $\operatorname{span}(U)$, and that, from Proposition 1, $\mathcal{E}_M(U) = \sum_{j=1}^q P_j U$ for $q$ projections $P_j$, $j = 1, \dots, q$, onto the distinct (and unique) eigenspaces of $M$. For these eigenspaces, if $\operatorname{span}(P_j) \subseteq \mathcal{E}_M(U)$ for some $j = 1, \dots, q$, then the associated eigenvectors are guaranteed to intersect with $\operatorname{span}(U)$ because of the minimality of the envelope. If $\operatorname{span}(P_j) \subseteq \mathcal{E}_M^\perp(U)$ for some $j = 1, \dots, q$, then the associated eigenvectors are orthogonal to $\operatorname{span}(U)$. Thus, for the first part of the Proposition: if every eigenspace is contained in either $\mathcal{E}_M(U)$ or $\mathcal{E}_M^\perp(U)$, then $\tilde u = u$ and equals the sum of the dimensions of the eigenspaces contained in $\mathcal{E}_M(U)$. However, if some eigenspace $\operatorname{span}(P_j)$ intersects both $\mathcal{E}_M(U)$ and $\mathcal{E}_M^\perp(U)$, then from $\mathcal{E}_M(U) = \sum_{j=1}^q P_j U$ we have $P_j U \subseteq \mathcal{E}_M(U)$ and $P_j U \perp \mathcal{E}_M^\perp(U)$. Since any vector in the eigenspace $\operatorname{span}(P_j)$ is an eigenvector of $M$, different eigen-decompositions lead to different values of $\tilde u$. Depending on the particular eigen-decomposition, $\tilde u$ can be any integer from $u$ to $u + K$, where $K$ is the sum of the dimensions of all eigenspaces that intersect both the envelope and the orthogonal complement of the envelope.

To prove the second Proposition, let $I$ denote the index set of the $\tilde u$ eigenvectors that are not orthogonal to $\operatorname{span}(U)$, and let $I_0$ denote the remaining indices in $\{1, \dots, p\}$. Then $v_i \in \mathcal{E}_M(U)$ and $v_i^T U v_i > 0$ for $i \in I$, and $v_i \in \mathcal{E}_M^\perp(U)$ for $i \in I_0$. We now turn to the function $F(v_i)$. From the preceding Proposition, we know that $F(v_i) = 0$ for $i \in I_0$. For $i \in I$, let $P_{\mathcal{E}}$ and $Q_{\mathcal{E}}$ denote the projections onto $\mathcal{E}_M(U)$ and $\mathcal{E}_M^\perp(U)$, respectively. Then it is straightforward to see that
$$(M + U)^{-1} = \{P_{\mathcal{E}}(M + U)P_{\mathcal{E}} + Q_{\mathcal{E}} M Q_{\mathcal{E}}\}^{-1} = P_{\mathcal{E}}(M + U)^{-1}P_{\mathcal{E}} + Q_{\mathcal{E}} M^{-1} Q_{\mathcal{E}}. \qquad \text{(A4)}$$
Because $v_i^T U v_i > 0$ for $i \in I$, we have $v_i^T (M + U)^{-1} v_i < v_i^T M^{-1} v_i$, and thus $f_i < 0$ for $i \in I$. Ordering $f_1, \dots, f_p$ monotonically, we have $f_{(p)} \leq \dots \leq f_{(p - \tilde u + 1)} < f_{(p - \tilde u)} = \dots = f_{(1)} = 0$. For $d \geq \tilde u$, $\operatorname{span}(A_0)$ is a subset of $\mathcal{E}_M^\perp(U)$ and thus $F(A_0) = 0$ from equation (A4). By construction, both $\operatorname{span}(A)$ and $\operatorname{span}(A_0)$ are always reducing subspaces of $M$. Thus, applying the first Proposition, we have $A\,\mathcal{E}_{A^T M A}(A^T U A) = \mathcal{E}_M(U)$ for $d \geq u$.

Proof for Proposition

Because the objective function $F(A_0)$ is a smooth and differentiable function of $M$ and $U$, it follows that $F_n(\hat A_0) = F(\hat A_0) + O_p(n^{-1/2})$. To see this, treat $F_n(\hat A_0) = \log|\hat A_0^T \hat M \hat A_0| + \log|\hat A_0^T (\hat M + \hat U)^{-1} \hat A_0| = f(\hat M, \hat U, \hat A_0)$ as a function of $\hat M$, $\hat U$ and $\hat A_0$. Then we have $F(\hat A_0) = f(M, U, \hat A_0)$. The partial derivatives of $f(M, U, \hat A_0)$ with respect to $M$ and $U$ can be computed (not shown here) and are bounded, because $\partial \log|X| / \partial X = X^{-1}$ for a symmetric positive definite matrix $X$, and the components $(\hat A_0^T \hat M \hat A_0)^{-1}$, $(\hat A_0^T M \hat A_0)^{-1}$, $(\hat A_0^T (\hat M + \hat U)^{-1} \hat A_0)^{-1}$ and $(\hat A_0^T (M + U)^{-1} \hat A_0)^{-1}$ are bounded with probability 1. Since $f(M, U, \hat A_0)$ is smooth and differentiable in its first two arguments, $\hat M - M = O_p(n^{-1/2})$ and $\hat U - U = O_p(n^{-1/2})$, it follows by a Taylor expansion that $f(\hat M, \hat U, \hat A_0) - f(M, U, \hat A_0) = O_p(n^{-1/2})$. From the $\sqrt{n}$-consistency of the eigenvectors, we have $\hat A_0^T M \hat A_0 = A_0^T M A_0 + O_p(n^{-1/2})$ and $\hat A_0^T (M + U)^{-1} \hat A_0 = A_0^T (M + U)^{-1} A_0 + O_p(n^{-1/2})$. Therefore, $F(\hat A_0) = F(A_0) + O_p(n^{-1/2})$.

Additional numerical results

In Section . of the main paper, we analyzed the meat protein data set following the previous studies in Cook et al. (2013) and Cook and Zhang (2016). Recall that we used the protein percentage of n = 10 meat samples as the univariate response variable $Y_i \in \mathbb{R}^1$, $i = 1, \dots, n$, and the corresponding p = 0 spectral measurements from near-infrared transmittance at every fourth wavelength between 80nm and 100nm as the predictor $X_i \in \mathbb{R}^p$. We then randomly split the data into a testing sample and a training sample in a 1: ratio, recorded the prediction mean squared errors (PMSE), and repeated this procedure 100 times. Figure .1 summarized the averaged PMSE for the four estimators (ECD, 1D, ECS-ECD, and ECS-1D).
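The sign pattern of the quantities $f_i = F(v_i)$ used in the screening argument above can also be illustrated numerically. The sketch below is a toy construction (assuming numpy; it is unrelated to the meat data): $M$ has distinct eigenvalues and $U$ is rank one, supported on two of $M$'s eigenvectors, so exactly those two eigenvectors should give $f_i < 0$ while the rest give $f_i = 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 6  # arbitrary illustrative size

# M with distinct eigenvalues lam and eigenvectors given by the columns of V.
lam = np.arange(1.0, p + 1.0)
V, _ = np.linalg.qr(rng.standard_normal((p, p)))
M = V @ np.diag(lam) @ V.T

# Rank-one U = u u^T supported on eigenvectors 0 and 3, so the envelope is
# E_M(U) = span(V[:, 0], V[:, 3]) and u_tilde = 2.
u = V[:, [0, 3]] @ rng.standard_normal((2, 1))
U = u @ u.T

MUinv = np.linalg.inv(M + U)
# f_i = log(v_i^T M v_i) + log(v_i^T (M + U)^{-1} v_i) for each eigenvector v_i of M.
f = np.array([np.log(v @ M @ v) + np.log(v @ MUinv @ v) for v in V.T])

print(np.isclose(f[[1, 2, 4, 5]], 0.0))  # eigenvectors orthogonal to span(U): f_i = 0
print(f[[0, 3]] < 0)                     # eigenvectors not orthogonal to span(U): f_i < 0
```

Sorting $f_1, \dots, f_p$ then isolates the $\tilde u = 2$ eigenvectors that are not orthogonal to $\operatorname{span}(U)$, matching the ordering of the $f_{(i)}$ derived in the proof.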
[Figure A1 about here: side-by-side boxplots. Both panels are titled "Prediction MSE for two dimensional envelope", with PMSE on the vertical axis and the estimators ECD, 1D, ECS-ECD and ECS-1D on the horizontal axis.]

Figure A1: Meat Protein Data: prediction mean squared error comparison of the ECD and 1D algorithms when u = 2. The left panel summarizes all 100 PMSE values for each of the four estimators; the right panel is a zoomed-in view of the left panel, obtained after deleting the outliers of the 1D algorithm's PMSE.

The ECD algorithm was proven again to be the most reliable one, while the performances of both the ECS-1D and the ECS-ECD estimators are very similar to that of the ECD algorithm. For u = 2, we observed a big difference between the 1D and the ECD algorithms. Since both algorithms fall under the same sequential 1D framework of Cook and Zhang (2016) and are trying to optimize the same objective function, we further examined their differences more carefully. In Figure A1, we show the side-by-side boxplots of the 100 PMSE values for all four estimators. The means of the PMSE over the 100 data sets for each estimator are: .1 (ECD); .79 (1D); .16 (ECS-ECD); .19 (ECS-1D), while the medians are: .1 (ECD), . (1D), .1 (ECS-ECD), .17 (ECS-1D).

References

Cook, R. D., Helland, I. S., and Su, Z. (2013). Envelopes and partial least squares regression. J. R. Stat. Soc. Ser. B. Stat. Methodol., 75(5):851-877.

Cook, R. D. and Zhang, X. (2016). Algorithms for envelope estimation. Journal of Computational and Graphical Statistics, 25(1):284-300.
Statistica Sinica Preprint No: SS-2016-0037R2
Title: Fast envelope algorithms
DOI: 10.5705/ss.202016.0037
URL: http://www.stat.sinica.edu.tw/statistica/
More informationSingular Value Decomposition (SVD) and Polar Form
Chapter 2 Singular Value Decomposition (SVD) and Polar Form 2.1 Polar Form In this chapter, we assume that we are dealing with a real Euclidean space E. Let f: E E be any linear map. In general, it may
More informationLinear Algebra Lecture Notes-II
Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered
More informationMath113: Linear Algebra. Beifang Chen
Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary
More informationChapter 7: Symmetric Matrices and Quadratic Forms
Chapter 7: Symmetric Matrices and Quadratic Forms (Last Updated: December, 06) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved
More informationProblem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show
MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,
More informationDECOMPOSING A SYMMETRIC MATRIX BY WAY OF ITS EXPONENTIAL FUNCTION
DECOMPOSING A SYMMETRIC MATRIX BY WAY OF ITS EXPONENTIAL FUNCTION MALIHA BAHRI, WILLIAM COLE, BRAD JOHNSTON, AND MADELINE MAYES Abstract. It is well known that one can develop the spectral theorem to decompose
More informationPRACTICE PROBLEMS FOR THE FINAL
PRACTICE PROBLEMS FOR THE FINAL Here are a slew of practice problems for the final culled from old exams:. Let P be the vector space of polynomials of degree at most. Let B = {, (t ), t + t }. (a) Show
More informationExercise Sheet 1.
Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?
More information