APPENDIX A

Background Mathematics

A.1 Linear Algebra

A.1.1 Vector algebra

Let x denote the n-dimensional column vector with components

x = (x_1, x_2, \ldots, x_n)^T

Definition 16 (scalar product). The scalar product w \cdot x is defined as

w \cdot x = \sum_{i=1}^{n} w_i x_i = w^T x    (A.1.1)

and has the natural geometric interpretation

w \cdot x = |w| |x| \cos(\theta)    (A.1.2)

where \theta is the angle between the two vectors. Thus if the lengths of two vectors are fixed, their inner product is largest when \theta = 0, whereupon one vector is a constant multiple of the other. If the scalar product x^T y = 0, then x and y are orthogonal (they are at right angles to each other). The length of a vector is denoted |x|, and the squared length is given by

|x|^2 = x^T x = x_1^2 + x_2^2 + \cdots + x_n^2    (A.1.3)

A unit vector x has x^T x = 1.

Definition 17 (Linear dependence). A set of vectors x^1, \ldots, x^n is linearly dependent if there exists a vector x^j that can be expressed as a linear combination of the other vectors. Conversely, if the only solution to

\sum_{i=1}^{n} \alpha_i x^i = 0    (A.1.4)

is \alpha_i = 0 for all i = 1, \ldots, n, the vectors x^1, \ldots, x^n are linearly independent.
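The scalar product and the angle formula above can be sketched in a few lines of plain Python; the function names here are illustrative, not from the text:

```python
import math

def dot(w, x):
    # scalar product: sum_i w_i x_i
    return sum(wi * xi for wi, xi in zip(w, x))

def length(x):
    # |x| = sqrt(x . x)
    return math.sqrt(dot(x, x))

def angle(w, x):
    # recover theta from w . x = |w| |x| cos(theta)
    return math.acos(dot(w, x) / (length(w) * length(x)))

w, x = [1.0, 0.0], [1.0, 1.0]
print(dot(w, x))                          # 1.0
print(round(math.degrees(angle(w, x))))   # 45
```

Setting one vector along the x-axis makes the geometric reading immediate: the inner product of a unit vector with (1, 1) is largest along (1, 1) itself, and the recovered angle here is 45 degrees.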
Figure A.1: Resolving a vector a into components along the orthogonal directions e and e*. The projections of a onto these two directions are the lengths \alpha and \beta along the directions e and e*.

A.1.2 The scalar product as a projection

Suppose that we wish to resolve the vector a into its components along the orthogonal directions specified by the unit vectors e and e*. That is, |e| = |e*| = 1 and e \cdot e* = 0. This is depicted in fig(A.1). We are required to find the scalar values \alpha and \beta such that

a = \alpha e + \beta e*    (A.1.5)

From this we obtain

a \cdot e = \alpha e \cdot e + \beta e* \cdot e,    a \cdot e* = \alpha e \cdot e* + \beta e* \cdot e*    (A.1.6)

From the orthogonality and unit lengths of the vectors e and e*, this becomes simply

a \cdot e = \alpha,    a \cdot e* = \beta    (A.1.7)

A set of vectors is orthonormal if they are mutually orthogonal and have unit length. This means that we can write the vector a in terms of the orthonormal components e and e* as

a = (a \cdot e) e + (a \cdot e*) e*    (A.1.8)

One can see therefore that the scalar product between a and e projects the vector a onto the (unit) direction e. The projection of a vector a onto a direction specified by a general vector f is therefore

\frac{a \cdot f}{|f|^2} f    (A.1.9)

A.1.3 Lines in space

A line in 2 (or more) dimensions can be specified as follows. The vector of any point along the line is given, for some s, by the equation

p = a + s u,    s \in \mathbb{R}    (A.1.10)

where u is parallel to the line, and the line passes through the point a, see fig(A.2). This is called the parametric representation of the line. An alternative specification can be given by realising that all vectors along the line are orthogonal to the normal of the line, n (u and n are orthogonal). That is

(p - a) \cdot n = 0  \Leftrightarrow  p \cdot n = a \cdot n    (A.1.11)

If the vector n is of unit length, the right-hand side of the above represents the shortest distance from the origin to the line, drawn by the dashed line in fig(A.2) (since this is the projection of a onto the normal direction).
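The projection formula above translates directly into code. A minimal sketch in plain Python (the function names are illustrative):

```python
def dot(a, b):
    # scalar product of two vectors
    return sum(x * y for x, y in zip(a, b))

def project(a, f):
    # projection of a onto the direction of f: (a . f / |f|^2) f
    c = dot(a, f) / dot(f, f)
    return [c * fi for fi in f]

a = [3.0, 4.0]
print(project(a, [1.0, 0.0]))   # component of a along the x-axis: [3.0, 0.0]
print(project(a, [0.0, 2.0]))   # f need not be unit length: [0.0, 4.0]
```

The second call shows why the |f|^2 denominator matters: scaling f leaves the projected vector unchanged.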
Figure A.2: A line can be specified by some position vector on the line, a, and a unit vector along the direction of the line, u. In 2 dimensions, there is a unique direction, n, perpendicular to the line. In three dimensions, the vectors perpendicular to the direction of the line lie in a plane, whose normal vector is in the direction of the line, u.

DRAFT March 9, 2010
Figure A.3: A plane can be specified by a point in the plane, a, and two non-parallel directions in the plane, u and v. The normal to the plane is unique, and in the same direction as the directed line from the origin to the nearest point on the plane.

A.1.4 Planes and hyperplanes

A line is a one-dimensional hyperplane. To define a two-dimensional plane (in an arbitrary dimensional space), one may specify two vectors u and v that lie in the plane (they need not be mutually orthogonal) and a position vector a in the plane, see fig(A.3). Any vector p in the plane can then be written as

p = a + s u + t v,    (s, t) \in \mathbb{R}^2    (A.1.12)

An alternative definition is given by considering that any vector within the plane must be orthogonal to the normal of the plane, n:

(p - a) \cdot n = 0  \Leftrightarrow  p \cdot n = a \cdot n    (A.1.13)

The right-hand side of the above represents the shortest distance from the origin to the plane, drawn by the dashed line in fig(A.3). The advantage of this representation is that it has the same form as a line. Indeed, this representation of (hyper)planes is independent of the dimension of the space. In addition, only two vectors need to be defined: a point in the plane, a, and the normal to the plane, n.

A.1.5 Matrices

An m \times n matrix A is a collection of mn scalar values arranged in a rectangle of m rows and n columns. A vector can be considered an n \times 1 matrix. If the element at the ith row and jth column is A_{ij}, then A^T denotes the matrix that has A_{ji} there instead: the transpose of A. For example, a 2 \times 2 matrix and its transpose are

A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},    A^T = \begin{pmatrix} A_{11} & A_{21} \\ A_{12} & A_{22} \end{pmatrix}    (A.1.14)

The i, j element of matrix A can be written A_{ij} or, in cases where more clarity is required, [A]_{ij}.

Definition 18 (transpose). The transpose B^T of the n \times m matrix B is the m \times n matrix with components

[B^T]_{kj} = B_{jk};    k = 1, \ldots, m,  j = 1, \ldots, n.    (A.1.15)

(B^T)^T = B and (AB)^T = B^T A^T.
If the shapes of the matrices A, B and C are such that it makes sense to calculate the product ABC, then

(ABC)^T = C^T B^T A^T    (A.1.16)

A square matrix A is symmetric if A^T = A. A square matrix is called Hermitian if

A = A^{*T}    (A.1.17)

where * denotes the complex conjugate operator. For Hermitian matrices, the eigenvectors form an orthogonal set, with real eigenvalues.
Definition 19 (Matrix addition). For two matrices A and B of the same size,

[A + B]_{ij} = [A]_{ij} + [B]_{ij}    (A.1.18)

Definition 20 (Matrix multiplication). For an l \times n matrix A and an n \times m matrix B, the product AB is the l \times m matrix with elements

[AB]_{ik} = \sum_{j=1}^{n} [A]_{ij} [B]_{jk};    i = 1, \ldots, l,  k = 1, \ldots, m.    (A.1.19)

For example

\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a_{11} x_1 + a_{12} x_2 \\ a_{21} x_1 + a_{22} x_2 \end{pmatrix}    (A.1.20)

Note that even if BA is defined as well, that is if l = m, generally BA is not equal to AB (when they are equal we say the matrices commute). The matrix I is the identity matrix, necessarily square, with 1s on the diagonal and 0s everywhere else. For clarity we may also write I_m for the square m \times m identity matrix. Then for an m \times n matrix A,

I_m A = A I_n = A    (A.1.21)

The identity matrix has elements [I]_{ij} = \delta_{ij} given by the Kronecker delta:

\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}    (A.1.22)

Definition 21 (Trace).

trace(A) = \sum_i A_{ii} = \sum_i \lambda_i    (A.1.23)

where \lambda_i are the eigenvalues of A.

A.1.6 Linear transformations

Rotations. If we assume that rotation of a two-dimensional vector x = (x, y)^T can be accomplished by matrix multiplication Rx then, since matrix multiplication is distributive, we only need to work out how the axis unit vectors i = (1, 0)^T and j = (0, 1)^T transform, since

Rx = x Ri + y Rj    (A.1.24)

Under rotation by \theta degrees, the unit vectors i and j transform to

Ri = \begin{pmatrix} r_{11} \\ r_{21} \end{pmatrix} = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix},    Rj = \begin{pmatrix} r_{12} \\ r_{22} \end{pmatrix} = \begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix}    (A.1.25)
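The product definition above, and the warning that AB and BA generally differ, can be checked with a short plain-Python sketch (lists of rows; illustrative code, not from the text):

```python
def matmul(A, B):
    # [AB]_ik = sum_j A_ij B_jk
    l, n, m = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(m)]
            for i in range(l)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # swaps the two columns / rows
print(matmul(A, B))    # [[2, 1], [4, 3]]
print(matmul(B, A))    # [[3, 4], [1, 2]] -- AB != BA in general
```

Multiplying by B on the right swaps the columns of A, while multiplying on the left swaps its rows, which makes the non-commutativity visible at a glance.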
From this, one can simply read off the values for the elements of R:

R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}    (A.1.26)

A.1.7 Determinants

Definition 22 (Determinant). For a square matrix A, the determinant is the volume of the transformation of the matrix A (up to a sign change). That is, we take a hypercube of unit volume and map each vertex under the transformation; the volume of the resulting object is defined as the determinant. Writing [A]_{ij} = a_{ij},

det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11} a_{22} - a_{21} a_{12}    (A.1.27)

In the 3 \times 3 case,

det \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = a_{11} (a_{22} a_{33} - a_{23} a_{32}) - a_{12} (a_{21} a_{33} - a_{31} a_{23}) + a_{13} (a_{21} a_{32} - a_{31} a_{22})    (A.1.28)

The determinant in the 3 \times 3 case therefore has the form

a_{11} det \begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix} - a_{12} det \begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix} + a_{13} det \begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}    (A.1.29)

That is, the determinant of the 3 \times 3 matrix A is given by the sum of terms (-1)^{i+1} a_{1i} det(A_i), where A_i is the 2 \times 2 matrix formed from A by removing the first row and the ith column. This form of the determinant generalises to any dimension. That is, we can define the determinant recursively as an expansion along the top row of determinants of reduced matrices. The absolute value of the determinant is the volume of the transformation.

det(A^T) = det(A)    (A.1.30)

For square matrices A and B of equal dimensions,

det(AB) = det(A) det(B),    det(I) = 1  \Rightarrow  det(A^{-1}) = 1/det(A)    (A.1.31)

For any matrix A which collapses dimensions, the volume of the transformation is zero, and so is the determinant. If the determinant is zero, the matrix cannot be invertible since, given a projection y = Ax, we cannot uniquely compute which vector x was projected to y: there will in general be an infinite number of solutions.

Definition 23 (Orthogonal matrix). A square matrix A is orthogonal if AA^T = I = A^T A. From the properties of the determinant, we see therefore that an orthogonal matrix has determinant \pm 1 and hence corresponds to a volume-preserving transformation, i.e. a rotation.

Definition 24 (Matrix rank).
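The recursive top-row expansion described above can be written directly as code. A minimal sketch in plain Python (illustrative, and far slower than elimination-based methods for large matrices):

```python
def det(A):
    # recursive expansion along the top row:
    # sum over i of (-1)^(1+i) * a_1i * det(minor_i)
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for i in range(n):
        # minor: remove the first row and the i-th column
        minor = [row[:i] + row[i + 1:] for row in A[1:]]
        total += (-1) ** i * A[0][i] * det(minor)
    return total

R = [[0.0, -1.0], [1.0, 0.0]]                   # 90-degree rotation
print(det(R))                                   # 1.0 (volume preserving)
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # 0.0 (collapses dimensions)
```

The two examples echo the text: a rotation has determinant 1, while a matrix whose rows are linearly dependent collapses volume and has determinant zero.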
For an m \times n matrix X with n columns, each written as an m-vector:

X = (x^1, \ldots, x^n)    (A.1.32)

the rank of X is the maximum number of linearly independent columns (or, equivalently, rows). An n \times n square matrix is full rank if the rank is n, and the matrix is then non-singular. Otherwise the matrix is reduced rank and is singular.
A.1.8 Matrix inversion

Definition 25 (Matrix inversion). For a square matrix A, its inverse satisfies

A^{-1} A = I = A A^{-1}    (A.1.33)

It is not always possible to find a matrix A^{-1} such that A^{-1} A = I. In that case, we call the matrix A singular. Geometrically, singular matrices correspond to projections: if we were to take the transform Av of each of the vertices v of a binary hypercube, the volume of the transformed hypercube would be zero. If you are given a vector y and a singular transformation A, one cannot uniquely identify a vector x for which y = Ax; typically there will be a whole space of possibilities. Provided the inverse matrices exist,

(AB)^{-1} = B^{-1} A^{-1}    (A.1.34)

For a non-square matrix A such that AA^T is invertible, the pseudo-inverse, defined as

A^{\dagger} = A^T (A A^T)^{-1}    (A.1.35)

satisfies A A^{\dagger} = I.

A.1.9 Computing the matrix inverse

For a 2 \times 2 matrix, it is straightforward to work out the explicit form of the inverse for a general matrix. If the matrix whose inverse we wish to find is

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}

then the condition for the inverse is

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}    (A.1.36)

Multiplying out the left-hand side, we obtain the four conditions

ae + bg = 1,  af + bh = 0,  ce + dg = 0,  cf + dh = 1    (A.1.37)

It is readily verified that the solution to this set of four linear equations is given by

\begin{pmatrix} e & f \\ g & h \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = A^{-1}    (A.1.38)

The quantity ad - bc is the determinant of A. There are many ways to compute the inverse of a general matrix, and we refer the reader to more specialised texts. Note that if one wants only to solve a linear system then, although the solution can be obtained through matrix inversion, this should not be used. Often one needs to solve huge-dimensional linear systems of equations, and speed becomes an issue. Such equations can be solved much more accurately and quickly using elimination techniques such as Gaussian elimination.

A.1.10 Eigenvalues and eigenvectors

The eigenvectors of a matrix correspond to the natural coordinate system in which the geometric transformation represented by A can be most easily understood.
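The explicit 2 x 2 inverse worked out above is short enough to implement directly; a minimal sketch in plain Python (illustrative names, not from the text):

```python
def inv2(A):
    # 2x2 inverse: (1/(ad - bc)) * [[d, -b], [-c, a]]
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]
print(inv2(A))   # [[-2.0, 1.0], [1.5, -0.5]]
```

Here ad - bc = -2, and multiplying A by the result (in either order) recovers the identity, as the defining condition requires.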
Definition 26 (Eigenvalues and eigenvectors). For a square matrix A, e is an eigenvector of A with eigenvalue \lambda if

A e = \lambda e    (A.1.39)

The determinant is the product of the eigenvalues,

det(A) = \prod_{i=1}^{n} \lambda_i    (A.1.40)

Hence a matrix is singular if it has a zero eigenvalue. The trace of a matrix can be expressed as

trace(A) = \sum_i \lambda_i    (A.1.41)

For an n \times n dimensional matrix, there are (including repetitions) n eigenvalues, each with a corresponding eigenvector. We can rewrite equation (A.1.39) as

(A - \lambda I) e = 0    (A.1.42)

This is a linear equation, for which the eigenvector e and eigenvalue \lambda are a solution. We can write equation (A.1.42) as Be = 0, where B \equiv A - \lambda I. If B has an inverse, then the only solution is e = B^{-1} 0 = 0, which trivially satisfies the eigen-equation. For any non-trivial solution to the problem Be = 0, we therefore need B to be non-invertible. This is equivalent to the condition that B has zero determinant. Hence \lambda is an eigenvalue of A if

det(A - \lambda I) = 0    (A.1.43)

This is known as the characteristic equation. This determinant equation will be a polynomial of degree n, and the resulting equation is known as the characteristic polynomial. Once we have found an eigenvalue, the corresponding eigenvector can be found by substituting this value for \lambda in equation (A.1.39) and solving the linear equations for e. It may be that for an eigenvalue the eigenvector is not unique, and there is a space of corresponding vectors.

Geometrically, the eigenvectors are special directions such that the effect of the transformation A along a direction e is simply to scale the vector e. For a rotation matrix R, in general there will be no direction preserved under the rotation, so that the eigenvalues and eigenvectors are complex valued (which is why the Fourier representation, which corresponds to a representation in a rotated basis, is necessarily complex).

Remark (Orthogonality of eigenvectors of symmetric matrices). For a real symmetric matrix A = A^T, two of its eigenvectors e^i and e^j are orthogonal, (e^i)^T e^j = 0, if the eigenvalues \lambda_i and \lambda_j are different.
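For a 2 x 2 matrix the characteristic polynomial is a quadratic, lam^2 - trace(A) lam + det(A) = 0, so the eigenvalues can be found in closed form. A plain-Python sketch (illustrative; it assumes real eigenvalues, which is guaranteed when A is symmetric):

```python
import math

def eig2(A):
    # eigenvalues of a 2x2 matrix from det(A - lam I) = 0,
    # i.e. lam^2 - trace(A) lam + det(A) = 0
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # real for symmetric A
    return (tr + disc) / 2, (tr - disc) / 2

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric
lam1, lam2 = eig2(A)
print(lam1, lam2)        # 3.0 1.0
print(lam1 * lam2)       # 3.0 -- the determinant of A
print(lam1 + lam2)       # 4.0 -- the trace of A
```

The last two lines check the product-of-eigenvalues and sum-of-eigenvalues identities stated above.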
The above can be shown by considering

A e^i = \lambda_i e^i  \Rightarrow  (e^j)^T A e^i = \lambda_i (e^j)^T e^i    (A.1.44)

Since A is symmetric, the left-hand side is equivalent to

((e^j)^T A) e^i = (A e^j)^T e^i = \lambda_j (e^j)^T e^i  \Rightarrow  \lambda_i (e^j)^T e^i = \lambda_j (e^j)^T e^i    (A.1.45)

If \lambda_i \neq \lambda_j, this condition can be satisfied only if (e^j)^T e^i = 0, namely that the eigenvectors are orthogonal.
A.1.11 Matrix decompositions

The observation that the eigenvectors of a symmetric matrix are orthogonal leads directly to the spectral decomposition formula below.

Definition 27 (Spectral decomposition). A symmetric matrix A has an eigen-decomposition

A = \sum_{i=1}^{n} \lambda_i e^i (e^i)^T    (A.1.46)

where \lambda_i is the eigenvalue of eigenvector e^i and the eigenvectors form an orthonormal set,

(e^i)^T e^j = \delta_{ij}    (A.1.47)

In matrix notation,

A = E \Lambda E^T    (A.1.48)

where E is the matrix of eigenvectors and \Lambda the corresponding diagonal eigenvalue matrix. More generally, for a square non-symmetric non-singular A we can write

A = E \Lambda E^{-1}    (A.1.49)

Definition 28 (Singular Value Decomposition). The SVD of an n \times p matrix X is

X = U S V^T    (A.1.50)

where U is n \times n with U^T U = I_n, and V is p \times p with V^T V = I_p. The matrix S is n \times p with zeros everywhere except on the diagonal entries. The singular values are the diagonal entries [S]_{ii} and are positive. The singular values are ordered so that the upper-left diagonal element of S contains the largest singular value.

Quadratic forms

Definition 29 (Quadratic form).

x^T A x + x^T b    (A.1.51)

Definition 30 (Positive definite matrix). A symmetric matrix A with the property that x^T A x \geq 0 for any vector x is called non-negative definite. A symmetric matrix A with the property that x^T A x > 0 for any vector x \neq 0 is called positive definite. A positive definite matrix has full rank and is thus invertible. Using the eigen-decomposition of A,

x^T A x = \sum_i \lambda_i x^T e^i (e^i)^T x = \sum_i \lambda_i (x^T e^i)^2    (A.1.52)

which is greater than zero if and only if all the eigenvalues are positive. Hence A is positive definite if and only if all its eigenvalues are positive.
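The eigenvalue criterion for positive definiteness can be checked directly in the 2 x 2 case, reusing the closed-form eigenvalues of the characteristic quadratic (an illustrative sketch, not from the text):

```python
import math

def is_positive_definite_2x2(A):
    # a symmetric 2x2 matrix is positive definite iff both
    # eigenvalues of the characteristic quadratic are positive
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # real, since A is symmetric
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return lam1 > 0 and lam2 > 0

print(is_positive_definite_2x2([[2.0, 1.0], [1.0, 2.0]]))   # True  (eigenvalues 3, 1)
print(is_positive_definite_2x2([[1.0, 2.0], [2.0, 1.0]]))   # False (eigenvalues 3, -1)
```

The second matrix is symmetric but not positive definite: for x = (1, -1)^T the quadratic form x^T A x = -2 is negative, matching its negative eigenvalue.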
A.2 Matrix Identities

Definition 31 (Trace-Log formula). For a positive definite matrix A,

trace(\log A) \equiv \log det(A)    (A.2.1)

Note that the logarithm of a matrix above is not the elementwise logarithm; in MATLAB the required function is logm. In general, for an analytic function f(x), f(M) is defined via the power-series expansion of the function. On the right, since det(A) is a scalar, the logarithm is the standard logarithm of a scalar.

Definition 32 (Matrix Inversion Lemma (Woodbury formula)). Provided the appropriate inverses exist,

(A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}    (A.2.2)

Eigenfunctions

\int_x K(x', x) \phi_a(x) = \lambda_a \phi_a(x')    (A.2.3)

By an argument analogous to the one that proves the corresponding theorem of linear algebra above, the eigenfunctions of a real symmetric kernel, K(x', x) = K(x, x'), are orthogonal:

\int_x \phi_a(x) \phi_b^*(x) = \delta_{ab}    (A.2.4)

where \phi^*(x) is the complex conjugate of \phi(x). This definition of the inner product is useful, and particularly natural in the context of translation invariant kernels. We are free to define the inner product, but this conjugate form is often the most useful. From the previous results, we know that a real symmetric matrix K must have a decomposition in terms of eigenvectors with positive, real eigenvalues. Since this is to be true for any dimension of matrix, it suggests that we need the (real symmetric) kernel function itself to have a decomposition (provided the eigenvalues are countable)

K(x^i, x^j) = \sum_{\mu} \lambda_{\mu} \phi_{\mu}(x^i) \phi_{\mu}^*(x^j)    (A.2.5)

since then

\sum_{i,j} y_i K(x^i, x^j) y_j = \sum_{\mu} \lambda_{\mu} \Big( \sum_i y_i \phi_{\mu}(x^i) \Big) \Big( \sum_j y_j \phi_{\mu}^*(x^j) \Big) = \sum_{\mu} \lambda_{\mu} z_{\mu} z_{\mu}^*    (A.2.6)

which is greater than zero if the eigenvalues are all positive (since for complex z, z z^* \geq 0). If the eigenvalues are uncountable (which happens when the domain of the kernel is unbounded), the appropriate decomposition is

K(x^i, x^j) = \int \lambda(s) \phi(x^i, s) \phi^*(x^j, s) ds    (A.2.7)

A.3 Multivariate Calculus
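The Woodbury formula is easy to sanity-check numerically. In the scalar case (A, U, V all 1 x 1) every inverse is just a reciprocal, so a plain-Python check needs no matrix code at all (illustrative numbers chosen arbitrarily):

```python
a, u, v = 2.0, 3.0, 4.0

# left-hand side: (a + u v)^{-1}
lhs = 1.0 / (a + u * v)

# right-hand side: a^{-1} - a^{-1} u (1 + v a^{-1} u)^{-1} v a^{-1}
ai = 1.0 / a
rhs = ai - ai * u * (1.0 / (1.0 + v * ai * u)) * v * ai

print(abs(lhs - rhs) < 1e-12)   # True: both sides equal 1/14
```

The value of the identity in the matrix case is that when A^{-1} is already known and U, V are thin, the right-hand side only requires inverting the small matrix I + V^T A^{-1} U.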
Figure A.4: Interpreting the gradient. The ellipses are contours of constant function value, f = const. At any point x, the gradient vector \nabla f(x) points along the direction of maximal increase of the function.

Definition 33 (Partial derivative). Consider a function of n variables, f(x_1, x_2, \ldots, x_n), or f(x). The partial derivative of f with respect to x_1 at x is defined as the following limit (when it exists):

\frac{\partial f}{\partial x_1} \Big|_{x} = \lim_{h \to 0} \frac{f(x_1 + h, x_2, \ldots, x_n) - f(x_1, x_2, \ldots, x_n)}{h}    (A.3.1)

The gradient vector of f will be denoted by \nabla f or g:

\nabla f(x) \equiv g(x) \equiv \Big( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \Big)^T    (A.3.2)

A.3.1 Interpreting the gradient vector

Consider a function f(x) that depends on a vector x. We are interested in how the function changes when the vector x changes by a small amount: x \to x + \delta, where \delta is a vector whose length is very small. According to a Taylor expansion, the function will change to

f(x + \delta) = f(x) + \sum_i \delta_i \frac{\partial f}{\partial x_i} + O(|\delta|^2)    (A.3.3)

We can interpret the summation above as the scalar product between the vector \nabla f, with components [\nabla f]_i = \partial f/\partial x_i, and \delta:

f(x + \delta) = f(x) + (\nabla f)^T \delta + O(|\delta|^2)    (A.3.4)

The gradient points along the direction in which the function increases most rapidly. Why? Consider a direction \hat{p} (a unit-length vector). Then a displacement of \delta units along this direction changes the function value to

f(x + \delta \hat{p}) \approx f(x) + \delta \nabla f(x) \cdot \hat{p}    (A.3.5)

The direction \hat{p} for which the function has the largest change is that which maximises the overlap

\nabla f(x) \cdot \hat{p} = |\nabla f(x)| |\hat{p}| \cos\theta = |\nabla f(x)| \cos\theta    (A.3.6)

where \theta is the angle between \hat{p} and \nabla f(x). The overlap is maximised when \theta = 0, giving \hat{p} = \nabla f(x)/|\nabla f(x)|. Hence, the direction along which the function changes most rapidly is along \nabla f(x).

A.3.2 Higher derivatives

The first derivative of a function of n variables is an n-vector; the second derivative of an n-variable function is defined by the n^2 partial derivatives of the n first partial derivatives with respect to the n variables,

\frac{\partial}{\partial x_j} \Big( \frac{\partial f}{\partial x_i} \Big),    i = 1, \ldots, n;  j = 1, \ldots, n    (A.3.7)
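The limit defining the partial derivative suggests a simple numerical approximation; a plain-Python sketch of a finite-difference gradient (illustrative names, central differences rather than the one-sided limit, for accuracy):

```python
def grad(f, x, h=1e-6):
    # finite-difference approximation to the gradient vector:
    # perturb each coordinate in turn and take a central difference
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
print([round(gi, 4) for gi in grad(f, [1.0, 1.0])])   # [2.0, 6.0]
```

For this f the exact gradient at (1, 1) is (2x_1, 6x_2) = (2, 6), so the approximation agrees to the printed precision. Such a check is a common way to validate hand-derived gradients.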
which is usually written

\frac{\partial^2 f}{\partial x_i \partial x_j},  i \neq j,    \frac{\partial^2 f}{\partial x_i^2},  i = j    (A.3.8)

If the partial derivatives \partial f/\partial x_i, \partial f/\partial x_j and \partial^2 f/(\partial x_i \partial x_j) are continuous, then \partial^2 f/(\partial x_i \partial x_j) exists and

\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}    (A.3.9)

These n^2 second partial derivatives are represented by a square, symmetric matrix called the Hessian matrix of f(x):

H_f(x) = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix}    (A.3.10)

A.3.3 Chain rule

Consider f(x_1, \ldots, x_n). Now let each x_j be parameterised by u_1, \ldots, u_m, i.e. x_j = x_j(u_1, \ldots, u_m). What is \partial f/\partial u_\alpha? We have

\delta f = f(x_1 + \delta x_1, \ldots, x_n + \delta x_n) - f(x_1, \ldots, x_n) = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j} \delta x_j + higher-order terms

and \delta x_j = \sum_{\alpha=1}^{m} \frac{\partial x_j}{\partial u_\alpha} \delta u_\alpha + higher-order terms. Therefore

\frac{\partial f}{\partial u_\alpha} = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j} \frac{\partial x_j}{\partial u_\alpha}    (A.3.11)

Definition 34 (Chain rule).

\frac{\partial f}{\partial u_\alpha} = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j} \frac{\partial x_j}{\partial u_\alpha}    (A.3.12)

or, in vector notation,

\frac{\partial}{\partial u_\alpha} f(x(u)) = \nabla f^T \frac{\partial x}{\partial u_\alpha}    (A.3.13)

Definition 35 (Directional derivative). Assume f is differentiable. We define the scalar directional derivative (D_v f)(x^*) of f in a direction v at a point x^*. Let x = x^* + h v. Then

(D_v f)(x^*) = \frac{d}{dh} f(x^* + h v) \Big|_{h=0} = \sum_j v_j \frac{\partial f}{\partial x_j} \Big|_{x = x^*} = \nabla f^T v    (A.3.14)

A.3.4 Matrix calculus
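The directional derivative is defined as an ordinary one-dimensional derivative in h, so it can be approximated with a single central difference along the direction v; a plain-Python sketch (illustrative):

```python
def directional(f, x, v, h=1e-6):
    # (D_v f)(x) = d/dh f(x + h v) at h = 0, via a central difference
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return (f(xp) - f(xm)) / (2 * h)

f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
# the gradient of f at (1, 1) is (2, 6), so D_v f picks out one component
print(round(directional(f, [1.0, 1.0], [1.0, 0.0]), 4))   # 2.0
print(round(directional(f, [1.0, 1.0], [0.0, 1.0]), 4))   # 6.0
```

This matches the identity above: the directional derivative along a coordinate direction is the corresponding component of the gradient, and along a general v it is the scalar product of the gradient with v.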
Definition 36 (Derivative of a matrix trace). For matrices A and B,

\frac{\partial}{\partial A} trace(AB) = B^T    (A.3.15)

Definition 37 (Derivative of log det(A)). Using the trace-log formula,

\partial \log det(A) = \partial trace(\log A) = trace(A^{-1} \partial A)    (A.3.16)

so that

\frac{\partial}{\partial A} \log det(A) = (A^{-1})^T    (A.3.17)

Definition 38 (Derivative of a matrix inverse). For an invertible matrix A,

\partial A^{-1} = -A^{-1} (\partial A) A^{-1}    (A.3.18)

A.4 Inequalities

A.4.1 Convexity

Definition 39 (Convex function). A function f(x) is defined as convex if, for any x, y and 0 \leq \lambda \leq 1,

f(\lambda x + (1 - \lambda) y) \leq \lambda f(x) + (1 - \lambda) f(y)    (A.4.1)

If f(x) is convex, -f(x) is called concave. An intuitive picture of a convex function is to consider first the quantity \lambda x + (1 - \lambda) y. As we vary \lambda from 0 to 1, this traces points between y (\lambda = 0) and x (\lambda = 1). Hence for \lambda = 0 we start at the point y, f(y) and, as \lambda increases, trace a straight line towards the point x, f(x) at \lambda = 1. Convexity states that the function f always lies below this straight line. Geometrically, this means that the gradient of f(x) is non-decreasing. Hence if d^2 f(x)/dx^2 > 0 the function is convex. As an example, the function \log x is concave since its second derivative is negative:

\frac{d}{dx} \log x = \frac{1}{x},    \frac{d^2}{dx^2} \log x = -\frac{1}{x^2}    (A.4.2)

A.4.2 Jensen's inequality

For a convex function f(x), it follows directly from the definition of convexity that

f(\langle x \rangle_{p(x)}) \leq \langle f(x) \rangle_{p(x)}    (A.4.3)

for any distribution p(x).
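Jensen's inequality can be demonstrated numerically for a convex function and a discrete distribution; a plain-Python sketch with f(x) = x^2 and an arbitrarily chosen toy distribution p(x):

```python
# Jensen's inequality for the convex f(x) = x^2:
# f(<x>) <= <f(x)> for any distribution p(x)
xs = [1.0, 2.0, 3.0]
ps = [0.2, 0.5, 0.3]          # illustrative probabilities, summing to 1

mean = sum(p * x for p, x in zip(ps, xs))          # <x>
mean_f = sum(p * x ** 2 for p, x in zip(ps, xs))   # <f(x)>

print(mean ** 2 <= mean_f)    # True: roughly 4.41 <= 4.9
```

The gap <f(x)> - f(<x>) is exactly the variance of x here, which is non-negative for any distribution, so the inequality can never fail for f(x) = x^2.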
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More informationElementary Linear Algebra
Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We
More informationLINEAR ALGEBRA QUESTION BANK
LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,
More informationA VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 2010
A VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 00 Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics
More informationMath 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam
Math 8, Linear Algebra, Lecture C, Spring 7 Review and Practice Problems for Final Exam. The augmentedmatrix of a linear system has been transformed by row operations into 5 4 8. Determine if the system
More informationMath Linear Algebra Final Exam Review Sheet
Math 151 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a componentwise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationPart IA. Vectors and Matrices. Year
Part IA Vectors and Matrices Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2018 Paper 1, Section I 1C Vectors and Matrices For z, w C define the principal value of z w. State de Moivre s
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More information33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM
33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM (UPDATED MARCH 17, 2018) The final exam will be cumulative, with a bit more weight on more recent material. This outline covers the what we ve done since the
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationLinGloss. A glossary of linear algebra
LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasitriangular A matrix A is quasitriangular iff it is a triangular matrix except its diagonal
More informationImage Registration Lecture 2: Vectors and Matrices
Image Registration Lecture 2: Vectors and Matrices Prof. Charlene Tsai Lecture Overview Vectors Matrices Basics Orthogonal matrices Singular Value Decomposition (SVD) 2 1 Preliminary Comments Some of this
More informationSTEP Support Programme. STEP 2 Matrices Topic Notes
STEP Support Programme STEP 2 Matrices Topic Notes Definitions............................................. 2 Manipulating Matrices...................................... 3 Transformations.........................................
More informationLinear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.
Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the
More informationMathematical foundations  linear algebra
Mathematical foundations  linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationA matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and
Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.
More information61 Study Guide and Intervention Multivariable Linear Systems and Row Operations
61 Study Guide and Intervention Multivariable Linear Systems and Row Operations Gaussian Elimination You can solve a system of linear equations using matrices. Solving a system by transforming it into
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linearalgebrasummarysheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationA matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and
Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.
More informationGRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.
GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,
More information11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.
C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a
More informationMultivariable Calculus
2 Multivariable Calculus 2.1 Limits and Continuity Problem 2.1.1 (Fa94) Let the function f : R n R n satisfy the following two conditions: (i) f (K ) is compact whenever K is a compact subset of R n. (ii)
More informationLinear Algebra. Min Yan
Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................
More informationRepeated Eigenvalues and Symmetric Matrices
Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one
More informationA matrix over a field F is a rectangular array of elements from F. The symbol
Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted
More informationMath Linear Algebra II. 1. Inner Products and Norms
Math 342  Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,
More informationNotes on Eigenvalues, Singular Values and QR
Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square
More informationMATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS
MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a twodimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a twodimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent
More informationIMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET
IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each
More information2. Matrix Algebra and Random Vectors
2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationPHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review
1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body  a system of point particles fixed in space i r ij j subject to a holonomic constraint:
More informationReview of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem
Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as
More information1. Select the unique answer (choice) for each problem. Write only the answer.
MATH 5 Practice Problem Set Spring 7. Select the unique answer (choice) for each problem. Write only the answer. () Determine all the values of a for which the system has infinitely many solutions: x +
More informationLinear Algebra V = T = ( 4 3 ).
Linear Algebra Vectors A column vector is a list of numbers stored vertically The dimension of a column vector is the number of values in the vector W is a dimensional column vector and V is a 5dimensional
More informationJim Lambers MAT 610 Summer Session Lecture 1 Notes
Jim Lambers MAT 60 Summer Session 20090 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra
More information1 9/5 Matrices, vectors, and their applications
1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More information1 Inner Product and Orthogonality
CSCI 4/Fall 6/Vora/GWU/Orthogonality and Norms Inner Product and Orthogonality Definition : The inner product of two vectors x and y, x x x =.., y =. x n y y... y n is denoted x, y : Note that n x, y =
More informationlinearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice
3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationProblems in Linear Algebra and Representation Theory
Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific
More informationLinear Algebra Highlights
Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to
More informationMATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS
MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS There will be eight problems on the final. The following are sample problems. Problem 1. Let F be the vector space of all real valued functions on
More informationOctober 25, 2013 INNER PRODUCT SPACES
October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal
More informationGeometric Modeling Summer Semester 2010 Mathematical Tools (1)
Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric
More information