Properties of Matrices and Operations on Matrices


A common data structure for statistical analysis is a rectangular array, or matrix. Rows represent individual observational units, or simply observations, and columns represent the variables or features that are observed for each unit. If the elements of a matrix X represent numeric observations on variables arranged in such a rectangular array, the mathematical properties of X carry useful information about the observations and about the variables themselves. In addition, mathematical operations on the matrix may be useful in discovering structure in the data. These operations include various transformations and factorizations.
Symmetric Matrices

A matrix A with elements a_ij is said to be symmetric if each element a_ji has the same value as a_ij. Symmetric matrices have useful properties that we will mention from time to time.

Symmetric matrices provide a generalization of the inner product. If A is symmetric and x and y are conformable vectors, then the bilinear form x^T A y has the property that x^T A y = y^T A x, and hence this operation on x and y is commutative, which is one of the properties of an inner product. More generally, a bilinear form is a kernel function of the two vectors, and a symmetric matrix corresponds to a symmetric kernel. An important type of bilinear form is x^T A x, which is called a quadratic form.
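As a quick numerical illustration (a NumPy sketch with an example matrix of our own choosing, not part of the original text), we can check the commutativity x^T A y = y^T A x for a symmetric A:

```python
import numpy as np

# A small symmetric matrix and two conformable vectors
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = np.array([1.0, -2.0, 0.5])
y = np.array([0.0, 1.0, 4.0])

bilinear_xy = x @ A @ y   # x^T A y
bilinear_yx = y @ A @ x   # y^T A x
quadratic = x @ A @ x     # the quadratic form x^T A x

# For symmetric A the bilinear form is commutative
assert np.isclose(bilinear_xy, bilinear_yx)
```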
Nonnegative Definite and Positive Definite Matrices

A real symmetric matrix A such that for any real conformable vector x the quadratic form x^T A x is nonnegative, that is, such that x^T A x ≥ 0, is called a nonnegative definite matrix. We denote the fact that A is nonnegative definite by A ⪰ 0. (Note that we consider the zero matrix, 0_{n×n}, to be nonnegative definite.) If the quadratic form is strictly positive for all x ≠ 0, A is called a positive definite matrix and we write A ≻ 0.
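A practical way to test these definitions numerically (a sketch, with example matrices of our own choosing) is via the eigenvalues of the symmetric matrix: all nonnegative means nonnegative definite, all strictly positive means positive definite:

```python
import numpy as np

def is_nonnegative_definite(A, tol=1e-10):
    """Check A >= 0 in the definiteness sense, for symmetric A."""
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

def is_positive_definite(A, tol=1e-10):
    """Check A > 0 in the definiteness sense, for symmetric A."""
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

A_pd  = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
A_nnd = np.array([[1.0, 1.0], [1.0, 1.0]])   # eigenvalues 0 and 2
```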
Systems of Linear Equations

One of the most common uses of matrices is to represent a system of linear equations Ax = b. Whether or not the system has a solution (that is, whether or not for a given A and b there is an x such that Ax = b) depends on the number of linearly independent rows of A (that is, considering each row of A as a vector). The number of linearly independent rows of a matrix, which is also the number of linearly independent columns of the matrix, is called the rank of the matrix. A matrix is said to be of full rank if its rank is equal to either its number of rows or its number of columns.
A square full rank matrix is called a nonsingular matrix. We call a matrix that is square but not of full rank singular. The system Ax = b has a solution if and only if rank([A b]) = rank(A), where [A b] is the matrix formed from A by adjoining b as an additional column. If a solution exists, the system is said to be consistent. The common regression equations generally do not satisfy this condition: the observed vector y does not usually lie in the column space of X.
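The rank condition for consistency can be checked directly (a NumPy sketch; the example system is our own):

```python
import numpy as np

def is_consistent(A, b):
    """Ax = b has a solution iff rank([A b]) == rank(A)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1
b_ok  = np.array([3.0, 6.0])             # lies in the column space of A
b_bad = np.array([3.0, 7.0])             # does not lie in the column space
```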
Matrix Inverses

If the system Ax = b is consistent, then x = A^- b is a solution, where A^- is any matrix such that A A^- A = A. To see this, note that consistency means b = Ax0 for some x0, so A(A^- b) = A A^- A x0 = A x0 = b. Given a matrix A, a matrix A^- such that A A^- A = A is called a generalized inverse of A, and we denote it as indicated. If A is square and of full rank, the generalized inverse, which is unique, is called the inverse and is denoted by A^-1. It has a stronger property: A A^-1 = A^-1 A = I, where I is the identity matrix.
To the general requirement A A^- A = A, we successively add three requirements that define special generalized inverses, sometimes called respectively g2, g3, and g4 inverses. The general generalized inverse is sometimes called a g1 inverse. The g4 inverse is called the Moore-Penrose inverse. For a matrix A, a Moore-Penrose inverse, denoted by A^+, is a matrix that has the following four properties.
1. A A^+ A = A. Any matrix that satisfies this condition is called a generalized inverse and, as we have seen above, is denoted by A^-. For many applications, this is the only condition necessary. Such a matrix is also called a g1 inverse, an inner pseudoinverse, or a conditional inverse.
2. A^+ A A^+ = A^+. A matrix that satisfies this condition is called an outer pseudoinverse. A g1 inverse that also satisfies this condition is called a g2 inverse or reflexive generalized inverse.
3. A^+ A is symmetric.
4. A A^+ is symmetric.
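The four Moore-Penrose conditions can be verified numerically, even for a rank-deficient matrix (a NumPy sketch; the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
Ap = np.linalg.pinv(A)   # the Moore-Penrose inverse A^+

cond1 = np.allclose(A @ Ap @ A, A)        # 1. A A^+ A = A
cond2 = np.allclose(Ap @ A @ Ap, Ap)      # 2. A^+ A A^+ = A^+
cond3 = np.allclose(Ap @ A, (Ap @ A).T)   # 3. A^+ A is symmetric
cond4 = np.allclose(A @ Ap, (A @ Ap).T)   # 4. A A^+ is symmetric
```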
The Matrix X^T X

When numerical data are stored in the usual way in a matrix X, the matrix X^T X often plays an important role in statistical analysis. A matrix of this form is called a Gramian matrix, and it has some interesting properties. First of all, we note that X^T X is symmetric; that is, the (i,j)th element, Σ_k x_{k,i} x_{k,j}, is the same as the (j,i)th element. Secondly, because (Xy)^T Xy ≥ 0 for any y, X^T X is nonnegative definite. Next we note that X^T X = 0 if and only if X = 0.
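These properties of the Gramian are easy to confirm numerically (a NumPy sketch with simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))   # 10 observations on 3 variables
G = X.T @ X                        # the Gramian matrix X^T X

symmetric = np.allclose(G, G.T)

# Nonnegative definiteness: y^T G y = (Xy)^T (Xy) = ||Xy||^2 >= 0 for any y
y = rng.standard_normal(3)
quad = y @ G @ y
```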
The generalized inverses of X^T X have useful properties. First, we see from the definition that, for any generalized inverse (X^T X)^-, its transpose ((X^T X)^-)^T is also a generalized inverse of X^T X. (Note that (X^T X)^- is not necessarily symmetric.) Also, we have X (X^T X)^- X^T X = X. This means that (X^T X)^- X^T is a generalized inverse of X. The Moore-Penrose inverse of X has an interesting relationship with a generalized inverse of X^T X: X X^+ = X (X^T X)^- X^T.
An important property of X (X^T X)^- X^T is its invariance to the choice of the generalized inverse of X^T X. The matrix X (X^T X)^- X^T has a number of other interesting properties in addition to those mentioned above. For example,

(X (X^T X)^- X^T)(X (X^T X)^- X^T) = X (X^T X)^- (X^T X)(X^T X)^- X^T = X (X^T X)^- X^T;

that is, X (X^T X)^- X^T is idempotent. It is clear that the only idempotent matrix that is of full rank is the identity I.
Any real symmetric idempotent matrix is a projection matrix. The most familiar application of the matrix X (X^T X)^- X^T is in the analysis of the linear regression model y = Xβ + ε. This matrix projects the observed vector y onto a lower-dimensional subspace that represents the fitted model:

ŷ = X (X^T X)^- X^T y.

Projection matrices, as the name implies, generally transform or project a vector onto a lower-dimensional subspace.
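A minimal sketch of this projection (the "hat" matrix) in a least-squares fit, with simulated data and pinv standing in for (X^T X)^-:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# Projection matrix H = X (X^T X)^- X^T
H = X @ np.linalg.pinv(X.T @ X) @ X.T
y_hat = H @ y   # fitted values: the projection of y onto the column space of X

idempotent = np.allclose(H @ H, H)   # H is idempotent
symmetric = np.allclose(H, H.T)      # and symmetric, hence a projection matrix
```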
Eigenvalues and Eigenvectors

Multiplication of a given vector by a square matrix may result in a scalar multiple of the vector. If A is an n×n matrix, v is a vector not equal to 0, and c is a scalar such that Av = cv, we say v is an eigenvector of A and c is an eigenvalue of A. We should note how remarkable the relationship Av = cv is: the effect of multiplying an eigenvector by the matrix is the same as multiplying it by a scalar. The eigenvector is an invariant of the transformation in the sense that its direction does not change under the matrix multiplication.
We immediately see that if an eigenvalue of a matrix A is 0, then A must be singular. We also note that if v is an eigenvector of A and t is any nonzero scalar, then tv is also an eigenvector of A. Hence, we can normalize eigenvectors, and we often do. If A is symmetric, there are several useful facts about its eigenvalues and eigenvectors. The eigenvalues and eigenvectors of a (real) symmetric matrix are all real.
The eigenvectors of a symmetric matrix are (or can be chosen to be) mutually orthogonal. We can therefore represent a symmetric matrix A as

A = V C V^T,

where V is an orthogonal matrix whose columns are the eigenvectors of A and C is a diagonal matrix whose (i,i)th element is the eigenvalue corresponding to the eigenvector in the ith column of V. This is called the diagonal factorization of A.
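The diagonal factorization can be computed with np.linalg.eigh, which returns the eigenvalues and an orthogonal matrix of eigenvectors for a symmetric matrix (a sketch with an example matrix of our own choosing):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, V = np.linalg.eigh(A)   # columns of V are orthonormal eigenvectors
C = np.diag(eigvals)

reconstructed_ok = np.allclose(V @ C @ V.T, A)       # A = V C V^T
orthogonal_ok = np.allclose(V.T @ V, np.eye(3))      # V is orthogonal
```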
If A is a nonnegative (positive) definite matrix and c is an eigenvalue with corresponding eigenvector v, then multiplying both sides of the equation Av = cv by v^T gives v^T A v = c v^T v ≥ 0 (> 0), and since v^T v > 0, we have c ≥ 0 (> 0).

The maximum modulus of any eigenvalue of a given matrix is of interest. This value is called the spectral radius and, for the matrix A, is denoted by ρ(A):

ρ(A) = max_i |c_i|,

where the c_i's are the eigenvalues of A. The spectral radius is very important in many applications, from both computational and statistical standpoints. The convergence of some iterative algorithms, for example, depends on bounds on the spectral radius.
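The spectral radius is one line in NumPy (a sketch; the example matrix is our own and has purely imaginary eigenvalues, so the modulus matters):

```python
import numpy as np

def spectral_radius(A):
    """rho(A) = max |c_i| over the eigenvalues c_i of A."""
    return np.max(np.abs(np.linalg.eigvals(A)))

A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])   # eigenvalues are +/- 2i, so rho(A) = 2
rho = spectral_radius(A)
```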
Matrix Decomposition

Computations with matrices are often facilitated by first decomposing the matrix into multiplicative factors that are easier to work with computationally or that reveal some important characteristics of the matrix. Some decompositions exist only for special types of matrices, such as symmetric matrices or positive definite matrices.
The Singular Value Decomposition

One of the most useful decompositions, and one that applies to all types of matrices, is the singular value decomposition. An n×m matrix A can be factored as

A = U D V^T,

where U is an n×n orthogonal matrix, V is an m×m orthogonal matrix, and D is an n×m diagonal matrix with nonnegative entries. The number of positive entries in D is the same as the rank of A. This factorization is called the singular value decomposition (SVD) or the canonical singular value factorization of A.
The elements on the diagonal of D, the d_i, are called the singular values of A. We can rearrange the entries in D so that d_1 ≥ d_2 ≥ ..., and by rearranging the columns of U and V correspondingly, nothing is changed. If the rank of the matrix is r, we have d_1 ≥ ... ≥ d_r > 0, and if r < min(n, m), then d_{r+1} = ... = d_{min(n,m)} = 0. In this case

D = [ D_r  0
       0   0 ],

where D_r = diag(d_1, ..., d_r). From the factorization defining the singular values, we see that the singular values of A^T are the same as those of A.
For a matrix with more rows than columns, in an alternate definition of the singular value decomposition, the matrix U is n×m with orthonormal columns and D is an m×m diagonal matrix with nonnegative entries. Likewise, for a matrix with more columns than rows, the singular value decomposition can be defined as above but with the matrix V being m×n with orthonormal columns and D being n×n and diagonal with nonnegative entries. If A is symmetric, its singular values are the absolute values of its eigenvalues.
SVD and the Moore-Penrose Inverse

The Moore-Penrose inverse of a matrix has a simple relationship to its SVD. If the SVD of A is given by U D V^T, then its Moore-Penrose inverse is

A^+ = V D^+ U^T,

as is easy to verify. The Moore-Penrose inverse of D is just the matrix D^+ formed by taking the reciprocal of each positive entry of D and leaving the zero entries unchanged (and transposing, so that D^+ is m×n).
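Constructing A^+ from the SVD directly (a sketch; np.linalg.svd returns U, the singular values, and V^T, and the example matrix is deliberately rank-deficient so that the zero-handling matters):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])   # rank 1

U, d, Vt = np.linalg.svd(A, full_matrices=False)

# D^+ : reciprocals of the positive singular values, zeros left unchanged
d_plus = np.array([1.0 / s if s > 1e-12 else 0.0 for s in d])
A_plus = Vt.T @ np.diag(d_plus) @ U.T   # A^+ = V D^+ U^T

matches_pinv = np.allclose(A_plus, np.linalg.pinv(A))
```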
Square Root Factorization of a Nonnegative Definite Matrix

If A is a nonnegative definite matrix (which, for me, means that it is symmetric), its eigenvalues are nonnegative, so we can write S = C^(1/2), where S is a diagonal matrix whose elements are the square roots of the corresponding elements of the matrix C in the diagonal factorization A = V C V^T. Now we observe that

(V S V^T)^2 = V S V^T V S V^T = V S^2 V^T = V C V^T = A;

hence, we write A^(1/2) = V S V^T, and we have (A^(1/2))^2 = A.
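A sketch of this square root factorization via the diagonal factorization (assumes A is symmetric nonnegative definite; the example matrix is our own):

```python
import numpy as np

def sqrtm_psd(A):
    """Square root of a symmetric nonnegative definite matrix:
    A^(1/2) = V S V^T with S = C^(1/2)."""
    eigvals, V = np.linalg.eigh(A)
    # clip guards against tiny negative eigenvalues from rounding
    S = np.diag(np.sqrt(np.clip(eigvals, 0.0, None)))
    return V @ S @ V.T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
A_half = sqrtm_psd(A)
squares_back = np.allclose(A_half @ A_half, A)   # (A^(1/2))^2 = A
```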
Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,
More information2. Review of Linear Algebra
2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear
More informationReview of linear algebra
Review of linear algebra 1 Vectors and matrices We will just touch very briefly on certain aspects of linear algebra, most of which should be familiar. Recall that we deal with vectors, i.e. elements of
More informationLinear Systems. Class 27. c 2008 Ron Buckmire. TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5.4
Linear Systems Math Spring 8 c 8 Ron Buckmire Fowler 9 MWF 9: am  :5 am http://faculty.oxy.edu/ron/math//8/ Class 7 TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5. Summary
More information2. Matrix Algebra and Random Vectors
2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns
More informationMathematical foundations  linear algebra
Mathematical foundations  linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationProblem # Max points possible Actual score Total 120
FINAL EXAMINATION  MATH 2121, FALL 2017. Name: ID#: Email: Lecture & Tutorial: Problem # Max points possible Actual score 1 15 2 15 3 10 4 15 5 15 6 15 7 10 8 10 9 15 Total 120 You have 180 minutes to
More informationLinear Models Review
Linear Models Review Vectors in IR n will be written as ordered ntuples which are understood to be column vectors, or n 1 matrices. A vector variable will be indicted with bold face, and the prime sign
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the ndimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the ndimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finitedimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationNOTES ON BILINEAR FORMS
NOTES ON BILINEAR FORMS PARAMESWARAN SANKARAN These notes are intended as a supplement to the talk given by the author at the IMSc Outreach Programme Enriching Collegiate Education2015. Symmetric bilinear
More information