Stat 206: Linear algebra
James Johndrow (adapted from Iain Johnstone's notes)

Vectors

We have already been working with vectors, but let's review a few more concepts. The inner product of two vectors $x, y \in \mathbb{R}^p$ is
$$x'y = \sum_{j=1}^p x_j y_j,$$
which we will sometimes express as $\langle x, y \rangle$, and the angle $\theta$ formed by two vectors can be expressed in terms of inner products:
$$\cos(\theta) = \frac{x'y}{\sqrt{x'x}\,\sqrt{y'y}}.$$
Here's how to take the inner product in R.

set.seed(17)
p <- 5
x <- matrix(rnorm(p), p, 1)  # p-vector with iid normal(0,1) entries
y <- matrix(rnorm(p), p, 1)
xy <- t(x) %*% y
# compute angle
thet <- acos(xy / (sqrt(t(x) %*% x) * sqrt(t(y) %*% y)))

So for this example the angle between the vectors comes out to about $\theta = 1.65$ radians. The usual interpretation in $\mathbb{R}^2$ carries over to $\mathbb{R}^p$, i.e. if $\theta = 0$ or $\pi$ then $x \parallel y$, and if $\theta = \pi/2$ then $x \perp y$.[1]

The projection of a vector $x$ onto a vector $y$ is given by[2]
$$\mathrm{proj}(x, y) = \frac{yy'}{y'y}\, x = P_y x = \frac{x'y}{y'y}\, y,$$
where the last equality holds because $x'y$ is a scalar. Every vector can be expressed as a linear combination of its projection onto $y$ and its projection onto the orthogonal complement of $y$, which is defined by
$$\left(I - \frac{yy'}{y'y}\right) x = (I - P_y)\, x.$$
It is clear that

[1] The notation $x \parallel y$ means $x$ and $y$ are parallel, and $x \perp y$ means $x$ and $y$ are perpendicular.
[2] The book uses the third expression above, but I find the first more intuitive. In particular, if you've done linear models, you'll recognize $(yy')/(y'y)$ as a special case of $X(X'X)^{-1}X'$, the perpendicular projection operator onto the column space of $X$, when $X = y$ is a vector instead of a matrix.
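As a quick sanity check on the formulas above, the inner product computed by matrix multiplication should agree with an elementwise sum, and the angle should always land in $[0, \pi]$. A minimal sketch (variable names here are mine, not from the notes):

```r
# Check: t(x) %*% y (a 1x1 matrix) equals the elementwise sum x_j * y_j,
# and acos() returns an angle in [0, pi].
set.seed(1)
p <- 5
x <- matrix(rnorm(p), p, 1)
y <- matrix(rnorm(p), p, 1)
ip_matrix <- c(t(x) %*% y)   # drop to a scalar
ip_sum <- sum(x * y)
stopifnot(all.equal(ip_matrix, ip_sum))
thet <- acos(ip_matrix / (sqrt(sum(x^2)) * sqrt(sum(y^2))))
stopifnot(thet >= 0, thet <= pi)
```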
$$x = \frac{yy'}{y'y}\, x + \left(I - \frac{yy'}{y'y}\right) x,$$
so that every $x$ can be written as a linear combination of its projection parallel to and perpendicular to $y$. Let's compute an example in R.

x <- matrix(rnorm(p), p, 1)
y <- .5 * x + .5 * matrix(rnorm(p), p, 1)
Py <- (y %*% t(y)) / c(t(y) %*% y)
proj <- Py %*% x
orth <- (diag(p) - Py) %*% x
yproj <- t(y) %*% proj
yorth <- t(y) %*% orth

You'll note that $\langle P_y x, y \rangle \approx 3.93$ but that $\langle (I - P_y)x, y \rangle$ is on the order of $10^{-16}$, so $P_y x$ and $(I - P_y)x$ really are the portions of $x$ parallel to and perpendicular to $y$.

(Notice I am being pretty careful about using $\approx$ signs or saying "is approximately." Everything you do on the computer is in some sense approximate, since the computer only assigns limited memory to storing any number in decimal expansion. As a result, it cannot tell the difference between 0 and a sufficiently tiny number, or between 8 and 8 plus a sufficiently tiny number. That's why I didn't round $\langle (I - P_y)x, y \rangle$, so you can see this in action. That number is actually zero, but the computer has finite precision, so some error is introduced when doing the calculations.)

Two vectors are said to be linearly dependent if one can be written as a scalar multiple of the other, i.e. $x$ and $y$ satisfy $x = cy$ for some $c \in \mathbb{R}$. If two vectors $x$ and $y$ are linearly dependent, then the projection of $x$ onto $y$ is just $x$, and the projection of $x$ onto the orthogonal complement of $y$ is the zero vector.[3] Thus, we can think of linearly dependent vectors as parallel. A collection of $n$ vectors $x^{(1)}, \ldots, x^{(n)}$ is linearly dependent if there exist an $i$ and constants $c_1, \ldots, c_n$ such that
$$c_i x^{(i)} = c_1 x^{(1)} + \cdots + c_{i-1} x^{(i-1)} + c_{i+1} x^{(i+1)} + \cdots + c_n x^{(n)},$$
that is, if at least one of them can be written as a linear combination of the others. A collection is said to be linearly independent if no vector in the collection can be written as a linear combination of the others.
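The claim about linearly dependent vectors is easy to verify numerically: when $y$ is a scalar multiple of $x$, the projection of $x$ onto $y$ recovers $x$ and the orthogonal part vanishes. A small sketch:

```r
# When y = 2x (linearly dependent), P_y x = x and (I - P_y) x = 0,
# up to floating-point error.
set.seed(2)
p <- 5
x <- matrix(rnorm(p), p, 1)
y <- 2 * x
Py <- (y %*% t(y)) / c(t(y) %*% y)
proj <- Py %*% x
orth <- (diag(p) - Py) %*% x
stopifnot(max(abs(proj - x)) < 1e-12)  # projection is x itself
stopifnot(max(abs(orth)) < 1e-12)      # orthogonal part is ~0
```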
[3] This is why, in the example above where I computed the projection of $x$ onto $y$ in R, I generated $y$ as a weighted average of $x$ and some random noise; otherwise the projection would have been close to zero. This hints at relationships between linear dependence and correlation, which we will get to soon.

Matrices

A matrix $X$ is a rectangular array of numbers. If a matrix has $n$ rows and $p$ columns, we say the matrix is $n \times p$. A matrix is square if $n = p$, and the diagonal of a square matrix consists of the elements $X_{ii}$ with the same row and column index. The transpose, $X'$, of an $n \times p$ matrix $X$ is the $p \times n$ matrix whose rows are formed by the columns of $X$. That is, the first row of $X'$ is the first column of $X$, the second row of $X'$ is the second column of $X$, and so on. Two matrices of the same dimension can be added by simply adding corresponding entries: if $X$ and $A$ are both $n \times p$ matrices, then the matrix $X + A$ has entries $(X + A)_{ij} = X_{ij} + A_{ij}$. Here's how to do some of these things in R.
n <- 10
p <- 5
X <- matrix(rnorm(n * p), n, p)
Xt <- t(X)  # transpose of X
A <- matrix(rnorm(n * p), n, p)
XA <- X + A  # add X and A
D <- matrix(rnorm(n * n), n, n)
Ddiag <- diag(D)  # get the diagonal of D
D2 <- diag(rnorm(n))  # an n by n diagonal matrix

The notion of rank is an important one.

Definition 1 (rank of a matrix). Let $A$ be a real-valued matrix. The row rank $\mathrm{rank}_r(A)$ of $A$ is the number of linearly independent rows of the matrix. The column rank $\mathrm{rank}_c(A)$ is the number of linearly independent columns.

The row rank and column rank are always equal (and therefore one just refers to the rank of a matrix). An $n \times p$ matrix $A$ is full rank if $\mathrm{rank}(A) = \min(n, p)$.

Matrix multiplication is in some ways analogous to multiplication of real numbers, but does not obey all of the same rules. First, only matrices of conformable dimension may be multiplied. An $n \times p$ matrix $X$ and a $p \times m$ matrix $A$ may be multiplied in the order $XA$ because the column dimension of $X$ matches the row dimension of $A$. The result is an $n \times m$ matrix with entries
$$(XA)_{ij} = \sum_{k=1}^p X_{ik} A_{kj} = \langle X_{[i,]}, A_{[,j]} \rangle,$$
so that the $i,j$ element of the product is formed by taking the inner product of the vectors $X_{[i,]}$ and $A_{[,j]}$ formed by the $i$th row of $X$ and the $j$th column of $A$. N.B.: $p$-vectors are $p \times 1$ matrices, and their transposes are $1 \times p$ matrices. So if $X$ is $n \times p$ and $y$ is $p \times 1$, then the product $Xy$ is an $n \times 1$ matrix (an $n$-vector).

Matrix multiplication does not commute; that is, in general, $XA \ne AX$. For rectangular matrices, often only one direction makes sense, since we can multiply an $n \times p$ matrix $X$ by a $p \times m$ matrix $A$ but not a $p \times m$ matrix $A$ by an $n \times p$ matrix $X$, since the column dimension of $A$ does not match the row dimension of $X$. Of course, if $A$ and $X$ are both square $p \times p$ matrices, then we can multiply in either direction. Even then, it is still not the case in general that $AX = XA$. Two matrices are said to commute if and only if $AX = XA$.
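The entrywise formula for the matrix product can be checked directly in R, along with the rank of a random matrix (here I use `qr(X)$rank` as one way to get the rank; the specific indices checked are arbitrary):

```r
# (XA)[i, j] is the inner product of row i of X with column j of A,
# and a random n x p Gaussian matrix is full rank with probability 1.
set.seed(3)
n <- 10; p <- 5; m <- 4
X <- matrix(rnorm(n * p), n, p)
A <- matrix(rnorm(p * m), p, m)
XA <- X %*% A
stopifnot(all.equal(XA[2, 3], sum(X[2, ] * A[, 3])))
stopifnot(qr(X)$rank == min(n, p))
```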
When matrices commute, many calculations simplify, but the matrices we will be working with usually will not commute.
The $p$-dimensional identity matrix $I_p$ is a $p \times p$ matrix with all of its diagonal entries equal to one and all of its off-diagonal entries equal to zero, i.e.
$$I_p = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
We will often drop the subscript $p$ when the dimension of $I$ is clear. The matrix $I$ is a multiplicative identity. So if $X$ is $n \times p$, then $XI = X$ and $IX' = X'$ for the $p \times p$ identity $I$. This holds in general for any matrix/vector for which the dimensions allow multiplication to happen.

Two other definitional/notational things. A (square) matrix $A$ is said to be symmetric if $A = A'$, and $A$ is said to be orthogonal if $AA' = I$.

Multiplicative inverses also exist, though again there are important differences from the one-dimensional case. First, the inverse is only defined for square matrices. A square matrix $A$ has an inverse $B$ if and only if there exists a square matrix $B$ for which
$$AB = BA = I,$$
from which we may deduce that matrices and their inverses commute, and that the inverse of an orthogonal matrix is its transpose. In this case we write $B = A^{-1}$, so the usual notation for the inverse of $A$ will be $A^{-1}$. Not all square matrices have an inverse, but when they do, the inverse is unique. A square matrix $A$ has an inverse if and only if its columns are linearly independent. Another useful property is the following.

Remark 1 (inverse of product). Suppose $A$ and $B$ are both invertible $p \times p$ matrices. Then $(AB)^{-1} = B^{-1}A^{-1}$.

Proof. Since
$$AB\, B^{-1}A^{-1} = A I A^{-1} = AA^{-1} = I, \qquad B^{-1}A^{-1}\, AB = B^{-1} I B = B^{-1}B = I,$$
and inverses are unique, it follows that $(AB)^{-1} = B^{-1}A^{-1}$.
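Remark 1 and the transpose-inverse of an orthogonal matrix are both easy to confirm numerically. A sketch (the matrices and the use of `qr.Q` to manufacture an orthogonal matrix are my own choices):

```r
# (AB)^{-1} = B^{-1} A^{-1}, and for orthogonal Q, Q^{-1} = Q'.
set.seed(4)
p <- 4
A <- crossprod(matrix(rnorm(p * p), p, p)) + diag(p)  # invertible
B <- crossprod(matrix(rnorm(p * p), p, p)) + diag(p)  # invertible
lhs <- solve(A %*% B)
rhs <- solve(B) %*% solve(A)
stopifnot(max(abs(lhs - rhs)) < 1e-8)
Q <- qr.Q(qr(matrix(rnorm(p * p), p, p)))  # orthogonal matrix
stopifnot(max(abs(solve(Q) - t(Q))) < 1e-8)
```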
We usually compute matrix inverses in R, since the number of operations necessary to compute an inverse by hand is large. Here, I generate a random matrix and compute its inverse.

X <- matrix(rnorm(n * p), n, p)
Sn <- n^(-1) * t(X) %*% X
Sninv <- solve(Sn)
tst <- Sn %*% Sninv
maxdiff <- max(abs(tst - diag(p)))

We can check this worked by checking how different $S^{(n)} (S^{(n)})^{-1}$ is from $I_p$. In this case, the maximum entrywise difference (in absolute value) between $S^{(n)} (S^{(n)})^{-1}$ and $I_p$ is on the order of $10^{-16}$, i.e. machine precision. Matrix inversion of a $p \times p$ matrix is in general an $O(p^3)$ operation, so methods that require computing the inverse will scale poorly in $p$. There are various strategies for improving scalability, mainly by using methods or approximations that don't require explicitly forming the inverse of a general $p \times p$ matrix. For example, it is easy to invert a diagonal matrix: the inverse is simply a diagonal matrix with entries given by the reciprocals of the entries of the original matrix.[4]

[4] Try this for a 3-by-3 example.

Perhaps the most useful matrix results in multivariate statistics have to do with eigenvalues and eigenvectors. The eigenvalues of a $p \times p$ square matrix $A$ are the solutions $\lambda$ to the equation
$$Ax = \lambda x,$$
where $x \in \mathbb{R}^p$ is a $p$-vector and $\lambda$ is a scalar. The vectors $x$ for which there exists $\lambda$ satisfying the eigenvalue equation are the eigenvectors. Clearly, if $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then so is $cx$ for any nonzero real number $c$, since $A(cx) = cAx = c\lambda x = \lambda(cx)$. Since eigenvectors are only unique up to a multiplicative constant, it is typical to normalize them to have length 1, i.e. so that $x'x = 1$, and denote these normalized eigenvectors by $e$. Every square, symmetric $p \times p$ matrix has $p$ pairs of eigenvalues and eigenvectors $(e^{(1)}, \lambda_1), \ldots, (e^{(p)}, \lambda_p)$. The eigenvectors can be chosen to be mutually orthogonal (so $(e^{(j)})' e^{(k)} = 0$ for every pair $j \ne k$).
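Both facts just stated can be checked in a few lines: the inverse of a diagonal matrix has reciprocal entries, and `eigen()` returns normalized pairs satisfying $Ae = \lambda e$. A small sketch:

```r
# Diagonal inverse is elementwise reciprocal; eigen() returns
# unit-length eigenvectors satisfying the eigenvalue equation.
set.seed(5)
p <- 3
d <- runif(p, 1, 2)
stopifnot(max(abs(solve(diag(d)) - diag(1 / d))) < 1e-12)
A <- crossprod(matrix(rnorm(p * p), p, p))  # symmetric matrix
eig <- eigen(A)
e1 <- eig$vectors[, 1]
stopifnot(max(abs(A %*% e1 - eig$values[1] * e1)) < 1e-8)
stopifnot(abs(sum(e1^2) - 1) < 1e-12)  # normalized to length 1
```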
The eigenvectors are unique (up to sign) unless two or more of the eigenvalues are equal. In statistics, we are often concerned with positive definite matrices.

Definition 2 (positive definite matrix). A square $p \times p$ matrix $A$ is positive definite if and only if the quadratic form
$$x'Ax > 0$$
for every non-zero $p$-vector $x$, where a non-zero vector is any vector for which at least one entry is not zero. $A$ is positive semi-definite if and only if
$$x'Ax \ge 0$$
for every non-zero $p$-vector $x$.

One reason to care about positive definite matrices is the following, given without proof.[5]

Remark 2 (sample covariance). Suppose $x^{(1)}, \ldots, x^{(n)}$ are a random sample with common mean $\mu$ and positive-definite covariance $\Sigma$. Then if $n > p$, $S_n$ is positive definite.

Another important fact is that a real symmetric positive semi-definite matrix is invertible if and only if it is positive definite.[6] If a matrix $A$ is symmetric and positive definite, there exists a decomposition of $A$ into a matrix $U$ with columns consisting of the eigenvectors and a diagonal matrix $\Lambda$ with diagonal entries given by the eigenvalues.

[5] Technical note: the phrase "almost surely" should be added to the end of this remark. This is entirely irrelevant, but for the reader who has some familiarity with measure theory I wanted to be complete.
[6] Positive semi-definite matrices have pseudoinverses (if interested, there is a decent page on Wikipedia).

Theorem 1 (spectral decomposition). Suppose $A$ is symmetric and positive semi-definite. Then
$$A = U \Lambda U',$$
where $U$ is a $p \times p$ orthogonal matrix whose $j$th column is the eigenvector $e^{(j)}$, and $\Lambda_{jj} = \lambda_j$, the $j$th eigenvalue of $A$.

Since $U$ is orthogonal, we can also write this as $A = U \Lambda U^{-1}$. Spectral decompositions are very useful. For example, using Remark 1 and orthogonality of $U$, we have that if $A = U \Lambda U'$ is the spectral decomposition of a positive definite matrix $A$, then
$$A^{-1} = U \Lambda^{-1} U'.$$
Since $\Lambda$ is a diagonal matrix, its inverse is just the diagonal matrix with entries $(\Lambda^{-1})_{jj} = \lambda_j^{-1}$, which is easy to compute. Moreover, this implies that the eigenvalues of the inverse $A^{-1}$ are the reciprocals of the eigenvalues of $A$, so that the largest eigenvalue of $A^{-1}$ is the reciprocal of the smallest eigenvalue of $A$, and the smallest eigenvalue of $A^{-1}$ is the reciprocal of the largest eigenvalue of $A$.
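Theorem 1 and the spectral form of the inverse can be verified numerically for a random positive-definite matrix (built here as $X'X$ with $n > p$, per Remark 2):

```r
# Verify A = U Lambda U' and A^{-1} = U Lambda^{-1} U' for a random
# symmetric positive-definite matrix.
set.seed(6)
p <- 4
A <- crossprod(matrix(rnorm((p + 2) * p), p + 2, p))  # p.d. w.p. 1
eig <- eigen(A)
U <- eig$vectors
Lam <- diag(eig$values)
stopifnot(max(abs(U %*% Lam %*% t(U) - A)) < 1e-8)
Ainv <- U %*% diag(1 / eig$values) %*% t(U)
stopifnot(max(abs(Ainv - solve(A))) < 1e-8)
```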
Positive definite matrices also have square roots. A square root of a symmetric matrix $A$ is any matrix $B$ for which $BB = A$; for positive-definite matrices, there is a unique square root $A^{1/2}$ that is also positive definite (although there are multiple square roots in general). One way to obtain the square root of a positive definite matrix $A$ is from its spectral decomposition:
Remark 3. Suppose $A = U \Lambda U'$ is the spectral decomposition of $A$. Then $U \Lambda^{1/2} U'$ is a square root of $A$, where $\Lambda^{1/2}$ is the diagonal matrix with entries $(\Lambda^{1/2})_{jj} = \lambda_j^{1/2}$.

Proof.
$$U \Lambda^{1/2} U'\, U \Lambda^{1/2} U' = U \Lambda^{1/2} I \Lambda^{1/2} U' = U \Lambda^{1/2} \Lambda^{1/2} U' = U \Lambda U' = A.$$

From this it is clear that if $A$ has spectral decomposition $U \Lambda U'$, then $A^{1/2}$ has spectral decomposition $A^{1/2} = U \Lambda^{1/2} U'$.

Eigenvalues also give us useful bounds on the size of quadratic forms.

Remark 4 (quadratic form eigenvalue inequalities). Let $\lambda_1, \ldots, \lambda_p$ be the eigenvalues of $A$ in decreasing order. Then for every $x \in \mathbb{R}^p$,
$$\lambda_p \|x\|_2^2 \le x'Ax \le \lambda_1 \|x\|_2^2.$$

Since $\|A^{1/2} x\|_2^2 = x'Ax$, the eigenvalues of $A$ give us bounds on the amount by which $A$ elongates any vector.

Two important matrix quantities are the trace and determinant. Determinants and traces appear in the densities of multivariate distributions. The trace of a square matrix $A$ is the sum of its diagonal elements:
$$\mathrm{tr}(A) = \sum_{j=1}^p A_{jj}.$$
Some key properties of the trace are
$$\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B), \qquad \mathrm{tr}(cA) = c\, \mathrm{tr}(A),$$
$$\mathrm{tr}(ABC) = \mathrm{tr}(CAB) = \mathrm{tr}(BCA) \quad \text{(the cyclic property)},$$
$$\mathrm{tr}(A) = \sum_j \lambda_j,$$
so the trace is the sum of the eigenvalues. The determinant of a square $p \times p$ matrix is defined recursively by
$$|A| = \sum_{j=1}^p a_{1j} |A_{[-1,-j]}| (-1)^{1+j},$$
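Remark 3 and the trace identities above can be checked directly. A sketch building the symmetric square root from `eigen()`:

```r
# Build A^{1/2} = U Lambda^{1/2} U' and confirm A^{1/2} A^{1/2} = A;
# also check tr(A) = sum of eigenvalues and the cyclic trace property.
set.seed(7)
p <- 4
A <- crossprod(matrix(rnorm(2 * p * p), 2 * p, p))  # symmetric p.d.
eig <- eigen(A)
Ahalf <- eig$vectors %*% diag(sqrt(eig$values)) %*% t(eig$vectors)
stopifnot(max(abs(Ahalf %*% Ahalf - A)) < 1e-8)
stopifnot(abs(sum(diag(A)) - sum(eig$values)) < 1e-8)
B <- matrix(rnorm(p * p), p, p)
C <- matrix(rnorm(p * p), p, p)
stopifnot(abs(sum(diag(A %*% B %*% C)) - sum(diag(C %*% A %*% B))) < 1e-8)
```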
where $A_{[-1,-j]}$ is the submatrix obtained by deleting the first row and $j$th column of $A$, and $|A| = a_{11}$ if $A$ is scalar. There is a connection with eigenvalues via the characteristic polynomial
$$g_A(t) = |tI - A|,$$
a polynomial in $t$ whose roots are equal to the eigenvalues of $A$. Although not the formal definition, for our purposes it is often useful to think of the determinant of a square matrix $A$ as the product of its eigenvalues:
$$|A| = \prod_j \lambda_j.$$
A couple of useful properties of the determinant are
$$|A^{-1}| = |A|^{-1}, \qquad \log|A| = \sum_{j=1}^p \log(\lambda_j).$$

There are matrix decompositions in addition to the spectral decomposition that often prove useful in applied statistics. Suppose $X$ is an $n \times p$ real matrix, and let $m = \min(n, p)$. The singular value decomposition of $X$ is given by
$$X = UDV,$$
where $U$ is an $n \times m$ orthogonal matrix, $D$ is an $m \times m$ diagonal matrix, and $V$ is an $m \times p$ orthogonal matrix. The diagonal elements of $D$ are referred to as the singular values of $X$. The singular value decomposition and spectral decomposition are related, since
$$X'X = (UDV)' UDV = V'DU'UDV = V'DIDV = V'D^2V,$$
where $D^2$ is the product $DD$. Since $V$ is orthogonal, it follows that $D^2 = \Lambda$ in the spectral decomposition of $X'X$ when $X'X$ is positive definite (equivalently, when $n > p$). You can see this for yourself in R.[7]

library(ggplot2)
X <- matrix(rnorm(n * p), n, p)
XX <- t(X) %*% X
s <- svd(X)
u <- eigen(XX)
df <- data.frame(lam = u$values, d2 = s$d^2)
ggplot(df, aes(x = lam, y = d2)) + geom_point()

[7] What happens when $n < p$? In fact, a lot of this still makes sense, except that there are only $n$ distinct eigenvalues, which are the squares of the diagonal elements of $D$. We can still write a spectral decomposition of $X'X$, too, but in this case $X'X$ is positive semi-definite, not positive-definite.
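Instead of a plot, the determinant identities and the SVD/spectral link can also be checked with assertions. A sketch (`determinant()` with its default `logarithm = TRUE` gives $\log|A|$ directly):

```r
# det(X'X) = product of eigenvalues, log det = sum of log eigenvalues,
# and squared singular values of X = eigenvalues of X'X (n > p here).
set.seed(8)
n <- 10; p <- 5
X <- matrix(rnorm(n * p), n, p)
XX <- t(X) %*% X
lam <- eigen(XX)$values
stopifnot(abs(det(XX) - prod(lam)) < 1e-6 * abs(det(XX)))
logdet <- as.numeric(determinant(XX)$modulus)
stopifnot(abs(logdet - sum(log(lam))) < 1e-8)
stopifnot(max(abs(sort(svd(X)$d^2) - sort(lam))) < 1e-8)
```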
[Figure 1: eigenvalues of $X'X$ plotted against the squared singular values.]

Note that the number of nonzero singular values is equal to the rank of $X$. Another useful decomposition is the Cholesky decomposition.

Theorem 2. Suppose $\Sigma$ is a symmetric and positive-definite matrix. Then there exists a unique invertible, lower triangular matrix $L$, referred to as the Cholesky factor, such that $\Sigma = LL'$.

Vector and matrix calculus can be very useful in multivariate statistics, and we'll briefly review some key results here. Suppose $x$ is a vector and $A$ a matrix. Then
$$\frac{\partial}{\partial x} Ax = A', \qquad \frac{\partial}{\partial x} x'A = A, \qquad \frac{\partial}{\partial x} x'x = 2x, \qquad \frac{\partial}{\partial x} x'Ax = Ax + A'x.$$

The Jacobian matrix associated with a transformation $f : \mathbb{R}^p \to \mathbb{R}^q$ is the $q \times p$ matrix with entries
$$J_f(x) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_p} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_p} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_q}{\partial x_1} & \frac{\partial f_q}{\partial x_2} & \cdots & \frac{\partial f_q}{\partial x_p} \end{pmatrix},$$
where $f(x) = (f_1(x), \ldots, f_q(x))'$ is the (vector) output of the function $f$. The Jacobian is the matrix form of the total derivative of the function $f$, familiar from multivariate calculus. We can express the chain rule for vector functions as
$$J_{f \circ g}(x) = J_f(g(x)) J_g(x),$$
that is, the total derivative of the composition of functions $(f \circ g)(x) = f(g(x))$ is the product of the total derivative of $f$ evaluated at $g(x)$ and the total derivative of $g$. Here's a simple example:
$$\frac{\partial}{\partial x} \log(x'Ax) = \frac{2Ax}{x'Ax},$$
assuming $A$ is symmetric and positive definite.
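Theorem 2 and the gradient example above can both be verified numerically. Note that R's `chol()` returns the upper-triangular factor, so the lower-triangular $L$ of the theorem is its transpose; the finite-difference gradient check is a sketch of my own:

```r
# Cholesky: L L' = Sigma with L lower triangular; then a central
# finite-difference check of d/dx log(x' Sigma x) = 2 Sigma x / (x' Sigma x).
set.seed(9)
p <- 4
Sigma <- crossprod(matrix(rnorm(2 * p * p), 2 * p, p))  # symmetric p.d.
L <- t(chol(Sigma))  # chol() gives upper triangular, so transpose
stopifnot(max(abs(L %*% t(L) - Sigma)) < 1e-8)
x <- rnorm(p)
f <- function(x) log(c(t(x) %*% Sigma %*% x))
h <- 1e-6
num_grad <- sapply(1:p, function(j) {
  e <- rep(0, p); e[j] <- h
  (f(x + e) - f(x - e)) / (2 * h)  # central difference
})
analytic <- c(2 * Sigma %*% x) / c(t(x) %*% Sigma %*% x)
stopifnot(max(abs(num_grad - analytic)) < 1e-5)
```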
Finally, a couple of results on derivatives of the trace and determinant. Suppose $A$ is a $p \times p$ symmetric and positive-definite matrix, and $B$ is a $p \times p$ matrix. Then
$$\frac{\partial\, \mathrm{tr}(AB)}{\partial A} = B', \qquad \frac{\partial |A|}{\partial A} = |A|\, A^{-1}, \qquad \frac{\partial\, \mathrm{tr}(A^{-1}B)}{\partial A} = -A^{-1} B' A^{-1}.$$
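The first of these derivatives is easy to check entrywise: since $\mathrm{tr}(AB)$ is linear in $A$, perturbing a single entry $A_{ij}$ changes the trace by exactly $B_{ji}$ times the perturbation. A sketch with an unconstrained (not necessarily symmetric) $A$ and arbitrary indices:

```r
# Finite-difference check that the (i, j) partial of tr(AB) with
# respect to A equals t(B)[i, j]; exact up to floating-point error
# because tr(AB) is linear in A.
set.seed(10)
p <- 4
A <- matrix(rnorm(p * p), p, p)
B <- matrix(rnorm(p * p), p, p)
h <- 1e-6
i <- 2; j <- 3
Ah <- A
Ah[i, j] <- Ah[i, j] + h
num <- (sum(diag(Ah %*% B)) - sum(diag(A %*% B))) / h
stopifnot(abs(num - t(B)[i, j]) < 1e-6)
```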
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More information1 Matrices and matrix algebra
1 Matrices and matrix algebra 1.1 Examples of matrices A matrix is a rectangular array of numbers and/or variables. For instance 4 2 0 3 1 A = 5 1.2 0.7 x 3 π 3 4 6 27 is a matrix with 3 rows and 5 columns
More informationIntroduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.
Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More information18.06 Quiz 2 April 7, 2010 Professor Strang
18.06 Quiz 2 April 7, 2010 Professor Strang Your PRINTED name is: 1. Your recitation number or instructor is 2. 3. 1. (33 points) (a) Find the matrix P that projects every vector b in R 3 onto the line
More informationChapter 2. Matrix Arithmetic. Chapter 2
Matrix Arithmetic Matrix Addition and Subtraction Addition and subtraction act element-wise on matrices. In order for the addition/subtraction (A B) to be possible, the two matrices A and B must have the
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationSingular Value Decomposition. 1 Singular Value Decomposition and the Four Fundamental Subspaces
Singular Value Decomposition This handout is a review of some basic concepts in linear algebra For a detailed introduction, consult a linear algebra text Linear lgebra and its pplications by Gilbert Strang
More informationLinear Algebra: Characteristic Value Problem
Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number
More information18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in
806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state
More informationA matrix over a field F is a rectangular array of elements from F. The symbol
Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted
More informationReview of Linear Algebra
Review of Linear Algebra Dr Gerhard Roth COMP 40A Winter 05 Version Linear algebra Is an important area of mathematics It is the basis of computer vision Is very widely taught, and there are many resources
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationLinear Models Review
Linear Models Review Vectors in IR n will be written as ordered n-tuples which are understood to be column vectors, or n 1 matrices. A vector variable will be indicted with bold face, and the prime sign
More information3 (Maths) Linear Algebra
3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra
More informationchapter 5 INTRODUCTION TO MATRIX ALGEBRA GOALS 5.1 Basic Definitions
chapter 5 INTRODUCTION TO MATRIX ALGEBRA GOALS The purpose of this chapter is to introduce you to matrix algebra, which has many applications. You are already familiar with several algebras: elementary
More information1 Linearity and Linear Systems
Mathematical Tools for Neuroscience (NEU 34) Princeton University, Spring 26 Jonathan Pillow Lecture 7-8 notes: Linear systems & SVD Linearity and Linear Systems Linear system is a kind of mapping f( x)
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationLinear algebra for computational statistics
University of Seoul May 3, 2018 Vector and Matrix Notation Denote 2-dimensional data array (n p matrix) by X. Denote the element in the ith row and the jth column of X by x ij or (X) ij. Denote by X j
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationIntroduction to Matrices
214 Analysis and Design of Feedback Control Systems Introduction to Matrices Derek Rowell October 2002 Modern system dynamics is based upon a matrix representation of the dynamic equations governing the
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationMath 291-2: Lecture Notes Northwestern University, Winter 2016
Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationFoundations of Computer Vision
Foundations of Computer Vision Wesley. E. Snyder North Carolina State University Hairong Qi University of Tennessee, Knoxville Last Edited February 8, 2017 1 3.2. A BRIEF REVIEW OF LINEAR ALGEBRA Apply
More informationSTAT200C: Review of Linear Algebra
Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose
More informationMAT 2037 LINEAR ALGEBRA I web:
MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear
More informationMatrix Algebra: Summary
May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................
More informationLinear Algebra Highlights
Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationA TOUR OF LINEAR ALGEBRA FOR JDEP 384H
A TOUR OF LINEAR ALGEBRA FOR JDEP 384H Contents Solving Systems 1 Matrix Arithmetic 3 The Basic Rules of Matrix Arithmetic 4 Norms and Dot Products 5 Norms 5 Dot Products 6 Linear Programming 7 Eigenvectors
More informationMatrices. 1 a a2 1 b b 2 1 c c π
Matrices 2-3-207 A matrix is a rectangular array of numbers: 2 π 4 37 42 0 3 a a2 b b 2 c c 2 Actually, the entries can be more general than numbers, but you can think of the entries as numbers to start
More informationLinear Algebra Basics
Linear Algebra Basics For the next chapter, understanding matrices and how to do computations with them will be crucial. So, a good first place to start is perhaps What is a matrix? A matrix A is an array
More informationMATRICES ARE SIMILAR TO TRIANGULAR MATRICES
MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,
More informationMath 308 Midterm Answers and Comments July 18, Part A. Short answer questions
Math 308 Midterm Answers and Comments July 18, 2011 Part A. Short answer questions (1) Compute the determinant of the matrix a 3 3 1 1 2. 1 a 3 The determinant is 2a 2 12. Comments: Everyone seemed to
More informationLinear Algebra V = T = ( 4 3 ).
Linear Algebra Vectors A column vector is a list of numbers stored vertically The dimension of a column vector is the number of values in the vector W is a -dimensional column vector and V is a 5-dimensional
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More information18.06 Professor Strang Quiz 3 May 2, 2008
18.06 Professor Strang Quiz 3 May 2, 2008 Your PRINTED name is: Grading 1 2 3 Please circle your recitation: 1) M 2 2-131 A. Ritter 2-085 2-1192 afr 2) M 2 4-149 A. Tievsky 2-492 3-4093 tievsky 3) M 3
More information