# Mathematics Review: Linear Algebra


## A.1.1 Matrices and Vectors

**Definition of Matrix.** An M×N matrix **A** is a two-dimensional array of numbers

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1N} \\ a_{21} & a_{22} & \dots & a_{2N} \\ \vdots & \vdots & & \vdots \\ a_{M1} & a_{M2} & \dots & a_{MN} \end{bmatrix}.$$

A matrix can also be written as $\mathbf{A} = (a_{mn})$, where $m = 1, 2, \dots, M$ and $n = 1, 2, \dots, N$. Matrices are usually written as boldface, upper-case letters.

**Definition of Matrix Transpose.** The transpose of **A**, denoted by $\mathbf{A}^t$, is the N×M matrix whose $(n, m)$ entry is $a_{mn}$:

$$\mathbf{A}^t = \begin{bmatrix} a_{11} & a_{21} & \dots & a_{M1} \\ a_{12} & a_{22} & \dots & a_{M2} \\ \vdots & \vdots & & \vdots \\ a_{1N} & a_{2N} & \dots & a_{MN} \end{bmatrix}.$$

**Example.** An example of a matrix **A** and its transpose $\mathbf{A}^t$ is:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad\text{and}\quad \mathbf{A}^t = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}.$$

**Definition of Vectors.** A column vector is a B×1 matrix and a row vector is a 1×B matrix, where B ≥ 1. Column and row vectors are usually referred to simply as vectors; they are assumed to be column vectors unless they are explicitly identified as row vectors or it is clear from the context. Vectors are denoted by boldface, lower-case letters:

$$\mathbf{x} = [x_1, x_2, \dots, x_B]^t = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_B \end{bmatrix}.$$

**Definition of Dimension.** The integer B is called the dimension of the vector **x** in the definition above. The phrase "**x** is B-dimensional" is equivalent to stating that the dimension of **x** is B.

**Definition of 0 and 1 Vectors.** The vectors

$$\mathbf{0} = (0, 0, \dots, 0)^t \quad\text{and}\quad \mathbf{1} = (1, 1, \dots, 1)^t$$

are called the Origin (or Zero Vector) and the One Vector, respectively.

**Definition of Vector Addition and Scalar Multiplication.** The sum of two vectors $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^B$ is defined as

$$\mathbf{x}_3 = \mathbf{x}_1 + \mathbf{x}_2 = [x_{11} + x_{21}, x_{12} + x_{22}, \dots, x_{1B} + x_{2B}]^t.$$

The word *scalar* is another word for number. Scalar multiplication is the product of a number α and a vector **x**, defined by $\alpha\mathbf{x} = [\alpha x_1, \alpha x_2, \dots, \alpha x_B]^t$.

**Example.** If $\mathbf{x}_1 = [1, 2]^t$ and $\mathbf{x}_2 = [3, 4]^t$, then $\mathbf{x}_3 = \mathbf{x}_1 + \mathbf{x}_2 = [1+3, 2+4]^t = [4, 6]^t$. If α = 3, then $\alpha\mathbf{x}_1 = [3, 6]^t$.

**Definition of Matrix Multiplication.** If **A** and **B** are M×N and N×P matrices, then the matrix product $\mathbf{C} = \mathbf{AB}$ is the M×P matrix defined by

$$c_{mp} = \sum_{n=1}^{N} a_{mn} b_{np}.$$

**Example.** If **A** is a 3×2 matrix whose second row is $(3, 4)$ and **B** is a 2×2 matrix whose first column is $(11, 13)^t$, then $\mathbf{C} = \mathbf{AB}$ is a 3×2 matrix with, for example, $c_{21} = (3)(11) + (4)(13) = 33 + 52 = 85$.

**Definition of Dot, or Inner, Product.** If **x** and **y** are B-dimensional vectors, then the matrix product $\mathbf{x}^t\mathbf{y}$ is referred to as the Dot, or Inner, Product of **x** and **y**.

**Example.** If $\mathbf{x} = [-2, 0, 2]^t$ and $\mathbf{y} = [4, 1, 2]^t$, then $\mathbf{x}^t\mathbf{y} = (-2)(4) + (0)(1) + (2)(2) = -4$.

Note that a linear system of equations can be represented as a matrix multiplication. For example, the system

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 &= b_1 \\ a_{21}x_1 + a_{22}x_2 &= b_2 \end{aligned}$$

can be written in matrix-vector form as $\mathbf{Ax} = \mathbf{b}$.

**Definition of Diagonal Matrix.** A diagonal matrix $\mathbf{D} = (d_{mn})$ is an N×N matrix with the property that $d_{mn} = 0$ if $m \neq n$.

**Definition of Identity Matrix.** The N×N Identity Matrix, denoted by $\mathbf{I}_N$ (or just **I** if N is known), is the N×N diagonal matrix with $d_{nn} = 1$ for $n = 1, 2, \dots, N$. Notice that if **A** is any N×N matrix, then $\mathbf{AI}_N = \mathbf{I}_N\mathbf{A} = \mathbf{A}$.

**Definition of Inverse Matrix.** Let **A** be an N×N square matrix. If there is a matrix **B** with the property that $\mathbf{AB} = \mathbf{BA} = \mathbf{I}_N$, then **B** is called the inverse of **A**, or "**A** inverse".
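The definitions above can be checked numerically. The following NumPy sketch reuses the worked examples from the text where their values survive (the transpose of a 2×2 matrix, the vector sums, the entry $c_{21} = 85$, and the inner product $-4$); the remaining entries of the 3×2 and 2×2 factors are illustrative, since the text does not fix them.

```python
import numpy as np

# Transpose of a 2x2 matrix, as in the example above.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.array_equal(A.T, np.array([[1.0, 3.0],
                                     [2.0, 4.0]]))

# Vector addition and scalar multiplication, as in the example above.
x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, 4.0])
assert np.array_equal(x1 + x2, np.array([4.0, 6.0]))
assert np.array_equal(3 * x1, np.array([3.0, 6.0]))

# Matrix multiplication: c_mp = sum_n a_mn * b_np.  Only row (3, 4) of A3 and
# column (11, 13) of B come from the text; the other entries are illustrative.
A3 = np.array([[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]])
B = np.array([[11.0, 12.0],
              [13.0, 14.0]])
C = A3 @ B
assert C.shape == (3, 2)
assert C[1, 0] == 3 * 11 + 4 * 13        # the worked entry c_21 = 85

# Inner product x^t y, as in the example above.
x = np.array([-2.0, 0.0, 2.0])
y = np.array([4.0, 1.0, 2.0])
assert x @ y == -4.0

# Identity and inverse: A I = I A = A and A A^{-1} = I.
I2 = np.eye(2)
assert np.allclose(A @ I2, A)
assert np.allclose(A @ np.linalg.inv(A), I2)
```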

The inverse is denoted by $\mathbf{A}^{-1}$. The inverse of a square matrix **A** does not always exist; if it does exist, then **A** is said to be invertible.

## A.1.2 Vector Spaces

**Definition of Vector, or Linear, Space.** Let B denote a positive integer. A finite-dimensional Vector (or Linear) Space with dimension B is a collection V of B-dimensional vectors that satisfies the commutative, associative, and distributive laws and has the properties that:

1. For every $\mathbf{x} \in V$ and $\mathbf{y} \in V$, the sum $(\mathbf{x} + \mathbf{y}) \in V$.
2. For every real number s and $\mathbf{x} \in V$, the product $s\mathbf{x} \in V$.
3. For every $\mathbf{x} \in V$, the additive inverse $-\mathbf{x} \in V$, which implies that $\mathbf{0} \in V$.

**Notation.** A B-dimensional vector space is often denoted by $\mathbb{R}^B$.

**Definition of Subspace.** A Subspace is a subset of a vector space that is also a vector space. Notice that subspaces of vector spaces always include the origin. Although it is not common, we use the notation $S \leq V$ to state that S is a subspace of a vector space V.

*Figure A.1: Subspace and non-subspace.*

Subspaces are often used in algorithms for analyzing imaging spectrometer data. The motivation is that spectra with certain characteristics might all be contained in a subspace.

**Definition of Linear Combination.** If $\mathbf{y}, \{\mathbf{x}_d\}_{d=1}^D \subset \mathbb{R}^B$ and

$$\mathbf{y} = a_1\mathbf{x}_1 + a_2\mathbf{x}_2 + \dots + a_D\mathbf{x}_D,$$

then **y** is a linear combination of $\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_D\}$. The numbers $a_1, a_2, \dots, a_D$ are called the coefficients of the linear combination. Notice that the linear combination can be written in matrix-vector form as $\mathbf{y} = \mathbf{Xa}$, where **X** is the B×D matrix with $\mathbf{x}_d$ as the d-th column and **a** is the vector of coefficients.

**Definition of Linear Transformation.** Let V and W denote two vector spaces. A function $f: V \to W$ is called a Linear Transformation if, for all $a, b \in \mathbb{R}$ and $\mathbf{x}, \mathbf{y} \in V$,

$$f(a\mathbf{x} + b\mathbf{y}) = af(\mathbf{x}) + bf(\mathbf{y}).$$

If **A** is an m×n matrix and $\mathbf{x} \in \mathbb{R}^n$, then $f(\mathbf{x}) = \mathbf{Ax}$ is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$.

**Definition of Linear Independence.** If $\mathcal{B} = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_D\}$ is a set of vectors with the property that

$$a_1\mathbf{x}_1 + a_2\mathbf{x}_2 + \dots + a_D\mathbf{x}_D = \mathbf{0} \implies a_1 = a_2 = \dots = a_D = 0,$$

then the vectors in the set $\mathcal{B}$ are called Linearly Independent. Informally, no vector in a set of linearly independent vectors can be written as a linear combination of the other vectors in the set.

**Fact.** There can be no more than B linearly independent vectors in a B-dimensional vector space. If $\mathcal{B} = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_B\}$ is a set of B linearly independent vectors in a B-dimensional space V, then every $\mathbf{x} \in V$ can be written uniquely as a linear combination of the vectors in $\mathcal{B}$.

**Definition of Basis.** A Basis of a subspace S of B-dimensional space is a set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_D\}$ with the property that every vector $\mathbf{x} \in S$ can be written in one and exactly one way as a linear combination of the elements of the basis. In other words, if $\mathbf{x} \in S$ then there is a unique set of coefficients $a_1, a_2, \dots, a_D$ such that $\mathbf{x} = a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \dots + a_D\mathbf{v}_D$. It must be the case that $D \leq B$.

**Fact.** There are infinitely many bases for any subspace, but they all have the same number of elements. The number of elements is called the dimension of the subspace.

**Definition of Standard Basis.** The standard basis of a D-dimensional space is the set of vectors $\{\mathbf{s}_1, \mathbf{s}_2, \dots, \mathbf{s}_D\} \subset \mathbb{R}^D$ where the d-th element of the d-th vector $\mathbf{s}_d$ is equal to one and the rest of the elements are equal to zero. That is, $s_{dj} = 1$ if $d = j$ and $s_{dj} = 0$ if $d \neq j$.

**Example.** The standard basis for $\mathbb{R}^3$ is

$$\mathbf{s}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf{s}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{s}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$$

Any 3-dimensional vector can be represented as a linear combination of the standard basis vectors:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1\mathbf{s}_1 + x_2\mathbf{s}_2 + x_3\mathbf{s}_3 = x_1\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + x_2\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + x_3\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$$

**Fact.** If the columns of an N×N matrix **A** form a basis for $\mathbb{R}^N$, then **A** is invertible.

Bases can be thought of as coordinate systems, and it can be useful to represent vectors in terms of different bases. Therefore, a method for changing the coordinates used to represent a vector from one basis to another is described first, followed by a method for determining a linear transformation that maps one basis to another. The two methods are referred to as Change of Coordinates or Change of Basis.

**Definition of Change of Basis.** The process of changing the representation of a vector **x** from a representation in terms of one basis to a representation in terms of a different basis is called a Change of Basis transformation. Calculating a change of basis transformation may require solving a linear system. To see this, let $U = \{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_B\}$ and $V = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_B\}$ be two bases for $\mathbb{R}^B$. Suppose the representation of a vector **x** in terms of the basis U is known to be

$$\mathbf{x} = a_1\mathbf{u}_1 + a_2\mathbf{u}_2 + \dots + a_B\mathbf{u}_B = \mathbf{Ua},$$

and that the representation of **x** in terms of V is unknown:

$$\mathbf{x} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots + c_B\mathbf{v}_B = \mathbf{Vc}.$$

In other words, $\mathbf{a} = (a_1, a_2, \dots, a_B)^t$ is known and $\mathbf{c} = (c_1, c_2, \dots, c_B)^t$ is unknown. Then $\mathbf{c} = \mathbf{V}^{-1}\mathbf{Ua}$. In particular, if U is the standard basis, then $\mathbf{U} = \mathbf{I}$ and $\mathbf{x} = [a_1, a_2, \dots, a_B]^t$, so $\mathbf{c} = \mathbf{V}^{-1}\mathbf{x}$. The matrix $\mathbf{W} = \mathbf{V}^{-1}\mathbf{U}$ is called the change of basis matrix.

**Definition of Pseudo-inverse.** The definition of a pseudo-inverse is motivated by linear systems of equations $\mathbf{Ax} = \mathbf{b}$. If **A** is not square, then $\mathbf{A}^{-1}$ is not defined, so one cannot write $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$. However, one can write

$$(\mathbf{A}^t\mathbf{A})\,\mathbf{x} = \mathbf{A}^t\mathbf{b}.$$

Note that this is not equivalent to the original linear system of equations. The matrix $\mathbf{A}^t\mathbf{A}$ is N×N and, assuming that N < M and **A** has full column rank, it is invertible. Therefore,

$$\mathbf{x} = (\mathbf{A}^t\mathbf{A})^{-1}\mathbf{A}^t\mathbf{b}$$

is a solution of $(\mathbf{A}^t\mathbf{A})\,\mathbf{x} = \mathbf{A}^t\mathbf{b}$. The matrix $(\mathbf{A}^t\mathbf{A})^{-1}\mathbf{A}^t$ is called the pseudo-inverse of **A** and is denoted by $\mathbf{A}^+$.
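The linear combination $\mathbf{y} = \mathbf{Xa}$, the change of basis $\mathbf{c} = \mathbf{V}^{-1}\mathbf{Ua}$, and the pseudo-inverse formula can all be verified numerically. The sketch below uses illustrative vectors and a random basis (not examples from the text); note that it solves $\mathbf{Vc} = \mathbf{x}$ rather than explicitly inverting **V**.

```python
import numpy as np

rng = np.random.default_rng(0)
B = 3

# A linear combination y = a1 x1 + a2 x2 equals the matrix-vector product X a.
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 2.0, 1.0])
X = np.column_stack([x1, x2])            # B x D, with x_d as the d-th column
a = np.array([3.0, -1.0])
assert np.allclose(X @ a, 3.0 * x1 - 1.0 * x2)

# Change of basis: if x = U a = V c, then c = V^{-1} U a.  Here U is the
# standard basis and V is a random (almost surely invertible) basis.
U = np.eye(B)
V = rng.standard_normal((B, B))
coeffs = np.array([1.0, 2.0, 3.0])
x = U @ coeffs
c = np.linalg.solve(V, x)                # solve V c = x instead of forming V^{-1}
assert np.allclose(V @ c, x)

# Pseudo-inverse of a tall M x N matrix with full column rank:
# A+ = (A^t A)^{-1} A^t; NumPy's np.linalg.pinv computes the same matrix
# stably via the SVD.
A = rng.standard_normal((5, 3))
A_plus = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(A_plus, np.linalg.pinv(A))
assert np.allclose(A_plus @ A, np.eye(3))   # A+ is a left inverse of A
```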

## A.1.3 Norms, Metrics, and Dissimilarities

**Definition of Norm.** A norm produces a quantity, computed from a vector, that measures the size of the vector. More formally, a norm is a function $N: \mathbb{R}^B \to \mathbb{R}^+$ with the properties that, for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^B$ and $a \in \mathbb{R}$:

1. $N(\mathbf{x}) \geq 0$, and $N(\mathbf{x}) = 0$ if and only if $\mathbf{x} = \mathbf{0}$.
2. $N(a\mathbf{x}) = |a|\,N(\mathbf{x})$.
3. $N(\mathbf{x} + \mathbf{y}) \leq N(\mathbf{x}) + N(\mathbf{y})$.

**Notation.** The norm is usually written with the vertical bar notation: $N(\mathbf{x}) = \|\mathbf{x}\|$.

**Definition of $L_p$ Norm, or just p-norm.** Assume $p \geq 1$. The p-norm is defined by

$$\|\mathbf{x}\|_p = \left( \sum_{b=1}^{B} |x_b|^p \right)^{1/p}.$$

**Definition of Euclidean Norm.** The Euclidean Norm is the p-norm with p = 2, that is, the $L_2$ norm. If p is not specified, then it is assumed that $\|\mathbf{x}\|$ denotes the Euclidean norm. Note that the Euclidean norm can be written as $\|\mathbf{x}\| = \sqrt{\mathbf{x}^t\mathbf{x}}$.

**Definition of $L_\infty$ Norm, or just ∞-norm.** The ∞-norm is $\|\mathbf{x}\|_\infty = \max_{b=1,\dots,B} |x_b|$.

**Definition of Unit Norm Vector.** A vector **x** is said to have unit norm if $\|\mathbf{x}\| = 1$. Note that if **x** is any nonzero vector, then $\mathbf{x}/\|\mathbf{x}\|$ has unit norm.

**Example.** The set of all vectors $\mathbf{x} \in \mathbb{R}^2$ with unit norm is a circle, called the unit circle, since the set of all points with $x_1^2 + x_2^2 = 1$ is a circle of radius 1.

**Definition of Generalized Unit Circle.** Let N be a norm. The set $U = \{\mathbf{x} \mid N(\mathbf{x}) = 1\}$ is called a Generalized Unit Circle. Often, for simplicity, the set U is called a Unit Circle even though it is a geometric circle only for the Euclidean norm. It is sometimes useful to visualize the unit circle for other norms; some examples are shown in Figure A.2.

**Fact.** If **x** and **y** are vectors, then $\mathbf{x}^t\mathbf{y} = \|\mathbf{x}\|\|\mathbf{y}\|\cos\theta$, where θ is the angle between **x** and **y**. If **x** and **y** are normalized, then the inner product of the vectors is equal to the cosine of the angle between the vectors:

$$\left(\frac{\mathbf{x}}{\|\mathbf{x}\|}\right)^t \left(\frac{\mathbf{y}}{\|\mathbf{y}\|}\right) = \frac{\mathbf{x}^t\mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|} = \cos\theta.$$

**Definition of Distance Function, or Metric.** A distance, or metric, is a function defined on pairs of vectors, $d: \mathbb{R}^B \times \mathbb{R}^B \to \mathbb{R}^+$, with the following properties:

1. $d(\mathbf{x}, \mathbf{y}) \geq 0$, and $d(\mathbf{x}, \mathbf{y}) = 0$ if and only if $\mathbf{x} = \mathbf{y}$.
2. $d(\mathbf{x}, \mathbf{y}) = d(\mathbf{y}, \mathbf{x})$.
3. For any $\mathbf{z}$, $d(\mathbf{x}, \mathbf{y}) \leq d(\mathbf{x}, \mathbf{z}) + d(\mathbf{z}, \mathbf{y})$.
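The p-norms defined above are available directly in NumPy. A short sketch, with an illustrative vector not taken from the text:

```python
import numpy as np

x = np.array([3.0, -4.0])

# p-norms via np.linalg.norm; the Euclidean norm (p = 2) is the default.
assert np.linalg.norm(x, 1) == 7.0          # |3| + |-4|
assert np.linalg.norm(x) == 5.0             # sqrt(9 + 16)
assert np.linalg.norm(x, np.inf) == 4.0     # max_b |x_b|

# The Euclidean norm equals sqrt(x^t x).
assert np.isclose(np.linalg.norm(x), np.sqrt(x @ x))

# Normalizing any nonzero vector gives a unit-norm vector.
u = x / np.linalg.norm(x)
assert np.isclose(np.linalg.norm(u), 1.0)
```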

*Figure A.2: Some unit circles. Note that as p increases, the $L_p$ norm approaches the $L_\infty$ norm.*

**Definition of $L_p$ Distance.** $d_p(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_p$. Note that the squared Euclidean distance can be written as

$$d_2(\mathbf{x}, \mathbf{y})^2 = \|\mathbf{x} - \mathbf{y}\|^2 = (\mathbf{x} - \mathbf{y})^t(\mathbf{x} - \mathbf{y}) = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2 - 2\mathbf{x}^t\mathbf{y}.$$

Therefore, if **x** and **y** are unit vectors, then measuring the Euclidean distance between the vectors is equivalent to measuring the cosine of the angle between them:

$$d_2(\mathbf{x}, \mathbf{y})^2 = \|\mathbf{x} - \mathbf{y}\|^2 = 2 - 2\mathbf{x}^t\mathbf{y} = 2 - 2\cos\theta,$$

which implies that

$$\cos\theta = 1 - \frac{\|\mathbf{x} - \mathbf{y}\|^2}{2}.$$

**Definition of Orthogonality and Orthonormal Bases.** Two non-zero vectors **x** and **y** are called orthogonal if $\mathbf{x}^t\mathbf{y} = 0$. Note that orthogonal is another word for perpendicular, since (for non-zero vectors) $\mathbf{x}^t\mathbf{y} = \|\mathbf{x}\|\|\mathbf{y}\|\cos\theta = 0$ only happens if $\cos\theta = 0$. An orthonormal basis is a basis $U = \{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_B\}$ with the property that $\mathbf{u}_i^t\mathbf{u}_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function.
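The identity relating distance and angle for unit vectors can be checked directly. The sketch below builds a unit vector at an assumed angle θ = 0.3 radians from $\mathbf{e}_1$ (an illustrative value, not from the text) and verifies $\|\mathbf{x}-\mathbf{y}\|^2 = 2 - 2\cos\theta$:

```python
import numpy as np

# For unit vectors x and y: ||x - y||^2 = 2 - 2 x^t y = 2 - 2 cos(theta).
theta = 0.3
x = np.array([1.0, 0.0])
y = np.array([np.cos(theta), np.sin(theta)])   # unit vector at angle theta to x

d2 = np.sum((x - y) ** 2)                       # squared Euclidean distance
assert np.isclose(d2, 2.0 - 2.0 * np.cos(theta))
assert np.isclose(1.0 - d2 / 2.0, x @ y)        # recovers cos(theta)

# Orthogonality: x^t y = 0 for perpendicular vectors.
assert np.array([1.0, 0.0]) @ np.array([0.0, 5.0]) == 0.0
```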

**Example: Computing Coefficients of Orthonormal Bases.** Suppose that $U = \{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_B\}$ is an orthonormal basis of $\mathbb{R}^B$ and $\mathbf{x} \in \mathbb{R}^B$. Then there exist coefficients $c_1, c_2, \dots, c_B$ such that

$$\mathbf{x} = c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \dots + c_B\mathbf{u}_B.$$

In general, one must solve a linear system of equations to find the coefficients. With an orthonormal basis, however, it is much easier. For example, to find $c_1$, transpose **x**,

$$\mathbf{x}^t = (c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \dots + c_B\mathbf{u}_B)^t = c_1\mathbf{u}_1^t + c_2\mathbf{u}_2^t + \dots + c_B\mathbf{u}_B^t,$$

and then multiply on the right by $\mathbf{u}_1$:

$$\mathbf{x}^t\mathbf{u}_1 = c_1\mathbf{u}_1^t\mathbf{u}_1 + c_2\mathbf{u}_2^t\mathbf{u}_1 + \dots + c_B\mathbf{u}_B^t\mathbf{u}_1 = c_1.$$

In fact, for every $b = 1, 2, \dots, B$, $c_b = \mathbf{x}^t\mathbf{u}_b$.

**Definition of Projection.** The projection of a vector **x** onto a vector **y** is given by

$$\mathrm{Proj}(\mathbf{x}, \mathbf{y}) = \frac{\mathbf{x}^t\mathbf{y}}{\mathbf{y}^t\mathbf{y}}\,\mathbf{y}.$$

*Figure A.3: Projection of a vector **x** onto a vector **y**.*
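The coefficient formula $c_b = \mathbf{x}^t\mathbf{u}_b$ and the projection formula can be demonstrated with a rotated orthonormal basis of $\mathbb{R}^2$ (the angle and vectors below are illustrative assumptions, not examples from the text):

```python
import numpy as np

# An orthonormal basis of R^2: the standard basis rotated by 30 degrees.
theta = np.pi / 6
u1 = np.array([np.cos(theta), np.sin(theta)])
u2 = np.array([-np.sin(theta), np.cos(theta)])

x = np.array([2.0, -1.0])

# With an orthonormal basis the coefficients are just inner products c_b = x^t u_b.
c1, c2 = x @ u1, x @ u2
assert np.allclose(c1 * u1 + c2 * u2, x)

# Projection of x onto y: Proj(x, y) = (x^t y / y^t y) y.
y = np.array([3.0, 0.0])
proj = (x @ y) / (y @ y) * y
assert np.allclose(proj, [2.0, 0.0])
```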

## A.1.4 Eigenanalysis and Symmetric Matrices

Eigenvalues, eigenvectors, and symmetric matrices play a fundamental role in describing properties of spectral data obtained from imaging spectrometers. The definitions given here are not the most general, but they are sufficient for understanding and devising algorithms for imaging spectroscopy.

**Definition of Eigenvalues and Eigenvectors.** Let **A** be an n×n matrix. A number $\lambda \in \mathbb{R}$ and corresponding vector $\mathbf{v}_\lambda \in \mathbb{R}^n$ are called an eigenvalue and eigenvector, respectively, of **A** if

$$\mathbf{Av}_\lambda = \lambda\mathbf{v}_\lambda.$$

It is common to write **v** in place of $\mathbf{v}_\lambda$ if there is no ambiguity.

**Definition of Symmetric Matrix.** A square matrix **A** is called symmetric if $\mathbf{A} = \mathbf{A}^t$.

**Definition of Positive Definite and Semi-definite Matrices.** A square matrix **A** is called positive definite if $\mathbf{x}^t\mathbf{Ax} > 0$ for all $\mathbf{x} \neq \mathbf{0}$, and positive semi-definite if $\mathbf{x}^t\mathbf{Ax} \geq 0$ for all **x**. In the positive definite case, if λ and **v** are an eigenvalue and corresponding eigenvector pair with $\mathbf{v} \neq \mathbf{0}$, then

$$\mathbf{v}^t\mathbf{Av} = \mathbf{v}^t\lambda\mathbf{v} = \lambda\|\mathbf{v}\|^2 > 0.$$

Since $\|\mathbf{v}\|^2 > 0$, it must be true that λ > 0. Thus, eigenvalues corresponding to nonzero eigenvectors of a positive definite matrix are positive. The same argument can be used for positive semi-definite matrices, in which case the eigenvalues are non-negative.

### Orthonormal Basis Theorem

If **A** is a B×B positive-definite, symmetric matrix, then the eigenvectors of **A** form an orthonormal basis for $\mathbb{R}^B$. This theorem is presented without proof; the interested reader can find it in many linear algebra texts.

Suppose $\mathcal{V} = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_B\}$ is an orthonormal basis of eigenvectors of **A**. Let $\mathbf{V} = [\mathbf{v}_1\ \mathbf{v}_2\ \dots\ \mathbf{v}_B]$ be the matrix with columns equal to the elements of the basis $\mathcal{V}$. Then

$$\mathbf{AV} = [\lambda_1\mathbf{v}_1\ \lambda_2\mathbf{v}_2\ \dots\ \lambda_B\mathbf{v}_B] = \mathbf{V\Lambda},$$

where $\mathbf{\Lambda} = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_B)$ is the diagonal B×B matrix with the eigenvalues along the diagonal and zeros elsewhere. Thus $\mathbf{A} = \mathbf{V\Lambda V}^t$. This relationship is called diagonalization. Often **V** is instead taken to be the matrix with rows equal to the transposes of the elements of $\mathcal{V}$; in that case, diagonalization is written as $\mathbf{A} = \mathbf{V}^t\mathbf{\Lambda V}$. Thus, a consequence of the Orthonormal Basis Theorem is that any positive-definite, symmetric matrix **A** can be diagonalized by the matrix **V**, whose columns are the orthonormal basis of eigenvectors of **A**, and a diagonal matrix **Λ** with diagonal elements equal to the eigenvalues of **A**.

**Example.** If **A** is a 3×3 positive-definite, symmetric matrix with eigenvalues $\lambda_1, \lambda_2, \lambda_3$ and corresponding orthonormal eigenvectors forming the columns of a matrix **V**, then the reader can verify that $\mathbf{A} = \mathbf{V}\,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\,\mathbf{V}^t$.
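The diagonalization $\mathbf{A} = \mathbf{V\Lambda V}^t$ can be checked numerically. The sketch below uses an illustrative 2×2 symmetric positive-definite matrix (not the example from the text) and `np.linalg.eigh`, which returns eigenvalues in ascending order and orthonormal eigenvectors as columns:

```python
import numpy as np

# A symmetric positive-definite matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh: eigenvalues (ascending) and orthonormal eigenvectors as columns of V.
lam, V = np.linalg.eigh(A)
assert np.allclose(lam, [1.0, 3.0])             # both positive, as the theory predicts
assert np.allclose(V.T @ V, np.eye(2))          # columns form an orthonormal basis
assert np.allclose(V @ np.diag(lam) @ V.T, A)   # A = V Lambda V^t
```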

### Singular Value Decomposition

The Singular Value Decomposition (SVD) extends the concept of diagonalization to non-square matrices and to non-invertible square matrices. There are very stable numerical algorithms for computing the SVD, and it is therefore often at the core of computational algorithms for tasks such as solving linear systems and computing inverses and pseudo-inverses of matrices.

**Definition of Singular Value Decomposition (SVD).** Let **A** be an M×N matrix with M > N. The SVD of **A** is given by $\mathbf{A} = \mathbf{V\Sigma U}^t$, where **V** is an M×M orthogonal matrix whose columns are eigenvectors of $\mathbf{AA}^t$, **U** is an N×N orthogonal matrix whose columns are eigenvectors of $\mathbf{A}^t\mathbf{A}$, and **Σ** is an M×N matrix that is "as diagonal as possible," that is, of the form

$$\mathbf{\Sigma} = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_N \\ 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{bmatrix}. \tag{A.1}$$

The first N rows and N columns of **Σ** consist of a diagonal matrix, and the remaining M − N rows are all zeros.

The pseudo-inverse of an M×N matrix with M > N provides motivation for the definition, although the derivation is omitted here. Recall that the pseudo-inverse is given by $\mathbf{A}^+ = (\mathbf{A}^t\mathbf{A})^{-1}\mathbf{A}^t$. It can be shown that $\mathbf{A}^+ = \mathbf{U\Sigma}^+\mathbf{V}^t$, where

$$\mathbf{\Sigma}^+ = \begin{bmatrix} \frac{1}{\sigma_1} & & & 0 & \cdots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & \frac{1}{\sigma_N} & 0 & \cdots & 0 \end{bmatrix}. \tag{A.2}$$

Note that **Σ** is M×N and $\mathbf{\Sigma}^+$ is N×M. Therefore, $\mathbf{\Sigma\Sigma}^+$ is M×M and $\mathbf{\Sigma}^+\mathbf{\Sigma}$ is N×N. Furthermore, $\mathbf{\Sigma}^+\mathbf{\Sigma} = \mathbf{I}_N$.
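The SVD and its connection to the pseudo-inverse can be verified numerically. In the sketch below, a random tall matrix stands in for **A** (an illustrative assumption; the text's numerical example is not reproduced). Note the notational mapping: NumPy's `svd` returns `(u, s, vh)` with `A = u @ S @ vh`, so `u` plays the role of **V** and `vh.T` the role of **U** in the text's $\mathbf{A} = \mathbf{V\Sigma U}^t$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))       # M x N with M > N; full rank almost surely

# Full SVD: A = u S vh, i.e. A = V Sigma U^t in the text's notation.
u, s, vh = np.linalg.svd(A, full_matrices=True)
S = np.zeros((5, 3))                  # Sigma: diagonal block over zero rows (A.1)
S[:3, :3] = np.diag(s)
assert np.allclose(u @ S @ vh, A)

# Sigma+ is the N x M transpose-shaped matrix with 1/sigma on the diagonal (A.2).
S_plus = np.zeros((3, 5))
S_plus[:3, :3] = np.diag(1.0 / s)
assert np.allclose(S_plus @ S, np.eye(3))          # Sigma+ Sigma = I_N

# The SVD pseudo-inverse agrees with the normal-equations formula.
A_plus = vh.T @ S_plus @ u.T                       # U Sigma+ V^t
assert np.allclose(A_plus, np.linalg.inv(A.T @ A) @ A.T)
```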

If M < N and $\mathrm{rank}(\mathbf{A}) = M$, then

$$\mathbf{\Sigma} = \begin{bmatrix} \sigma_1 & & & 0 & \cdots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & \sigma_M & 0 & \cdots & 0 \end{bmatrix}. \tag{A.3}$$

Calculating $(\mathbf{A}^t\mathbf{A})^{-1}$ directly is often numerically unstable; the SVD is numerically stable.

**Example.** For a 3×2 matrix **A** with full column rank, one can compute **V**, **Σ**, and **U** as above and verify that $\mathbf{A}^+ := (\mathbf{A}^t\mathbf{A})^{-1}\mathbf{A}^t = \mathbf{U\Sigma}^+\mathbf{V}^t$.

### Simultaneous Diagonalization

Let **C** and $\mathbf{C}_n$ denote symmetric matrices, with **C** positive definite and $\mathbf{C}_n$ positive semi-definite. Then there exists a matrix **W** that simultaneously diagonalizes **C** and $\mathbf{C}_n$. The matrix **W** is constructed here. First, note that there exist an orthogonal matrix **U** and a diagonal matrix **Λ** such that $\mathbf{C} = \mathbf{U}^t\mathbf{\Lambda U}$, with

$$\mathbf{\Lambda} = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_B \end{bmatrix},$$

where $\lambda_b > 0$. Hence $\mathbf{\Lambda} = \mathbf{\Lambda}^{1/2}\mathbf{\Lambda}^{1/2}$, and therefore

$$\mathbf{\Lambda}^{-1/2}\mathbf{UCU}^t\mathbf{\Lambda}^{-1/2} = \mathbf{I}.$$

Let

$$\tilde{\mathbf{C}}_n = \mathbf{\Lambda}^{-1/2}\mathbf{UC}_n\mathbf{U}^t\mathbf{\Lambda}^{-1/2}.$$

Then $\tilde{\mathbf{C}}_n^t = \tilde{\mathbf{C}}_n$ and $\tilde{\mathbf{C}}_n$ is positive semi-definite. Hence, there exists **V** such that $\mathbf{V}^t\mathbf{V} = \mathbf{VV}^t = \mathbf{I}$ and $\tilde{\mathbf{C}}_n = \mathbf{V}^t\mathbf{D}_n\mathbf{V}$, or $\mathbf{V}\tilde{\mathbf{C}}_n\mathbf{V}^t = \mathbf{D}_n$, where $\mathbf{D}_n$ is a diagonal matrix with non-negative entries. Let $\mathbf{W} = \mathbf{V\Lambda}^{-1/2}\mathbf{U}$. Then

$$\mathbf{WCW}^t = \mathbf{V\Lambda}^{-1/2}\mathbf{UCU}^t\mathbf{\Lambda}^{-1/2}\mathbf{V}^t = \mathbf{VIV}^t = \mathbf{VV}^t = \mathbf{I}$$

and

$$\mathbf{WC}_n\mathbf{W}^t = \mathbf{V\Lambda}^{-1/2}\mathbf{UC}_n\mathbf{U}^t\mathbf{\Lambda}^{-1/2}\mathbf{V}^t = \mathbf{V}\tilde{\mathbf{C}}_n\mathbf{V}^t = \mathbf{D}_n.$$

Therefore, **W** simultaneously diagonalizes **C** and $\mathbf{C}_n$.
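The construction of **W** translates directly into code. The following sketch builds two random symmetric positive-definite matrices (illustrative assumptions) and follows the steps above; note the row-vs-column convention: `eigh` returns eigenvectors as columns, so its transpose plays the role of the **U** with $\mathbf{C} = \mathbf{U}^t\mathbf{\Lambda U}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two symmetric positive-definite matrices, random for illustration.
G = rng.standard_normal((3, 3))
C = G @ G.T + 3 * np.eye(3)
H = rng.standard_normal((3, 3))
Cn = H @ H.T + 3 * np.eye(3)

# Step 1: eigendecompose C = U^t Lam U (rows of U are eigenvectors of C).
lam, Uc = np.linalg.eigh(C)          # C = Uc diag(lam) Uc^t, columns of Uc
U = Uc.T                             # so that C = U^t Lam U as in the text
Lam_mhalf = np.diag(lam ** -0.5)     # Lam^{-1/2}; valid since lam > 0

# Step 2: whiten C and diagonalize the transformed Cn.
Cn_tilde = Lam_mhalf @ U @ Cn @ U.T @ Lam_mhalf
d, P = np.linalg.eigh(Cn_tilde)      # Cn_tilde = P diag(d) P^t
V = P.T                              # so that V Cn_tilde V^t = diag(d)

# Step 3: W = V Lam^{-1/2} U simultaneously diagonalizes C and Cn.
W = V @ Lam_mhalf @ U
assert np.allclose(W @ C @ W.T, np.eye(3))     # W C W^t = I
assert np.allclose(W @ Cn @ W.T, np.diag(d))   # W Cn W^t = D_n (diagonal)
```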

## A.1.5 Determinants, Ranks, and Numerical Issues

In this section, some quantities related to solutions of linear systems are defined and used to get a glimpse of numerical issues that occur all too frequently in intelligent systems. A more thorough study requires a course in numerical linear algebra. It is assumed throughout that solutions are estimated numerically, so there is error due to finite-precision arithmetic.

The determinant is a number associated with a square matrix. The precise definition is recursive and requires quite a few subscripts, so it is first motivated using a 2×2 matrix **A**. Consider the problem of solving a linear system of two equations in two unknowns, $\mathbf{Ax} = \mathbf{b}$, using standard row reduction:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$

Multiplying the first row by $a_{21}/a_{11}$ and subtracting it from the second row gives

$$\begin{bmatrix} a_{11} & a_{12} \\ 0 & \dfrac{a_{11}a_{22} - a_{21}a_{12}}{a_{11}} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ \dfrac{a_{11}b_2 - a_{21}b_1}{a_{11}} \end{bmatrix},$$

so, after a little more algebra, the solutions for $x_1$ and $x_2$ are found to be

$$x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{11}a_{22} - a_{21}a_{12}} \quad\text{and}\quad x_1 = \frac{a_{22}b_1 - a_{12}b_2}{a_{22}a_{11} - a_{12}a_{21}}.$$

The common denominator of the solutions is called the determinant of **A**, often denoted by $\det(\mathbf{A})$ or $|\mathbf{A}|$. Notice that the numerators of the solutions for $x_1$ and $x_2$ are the determinants of the matrices

$$\mathbf{A}_1 = \begin{bmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{bmatrix} \quad\text{and}\quad \mathbf{A}_2 = \begin{bmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{bmatrix}.$$

**Definition of Rank of a Matrix.** Let **A** denote an M×N matrix. It is a fact that the number of linearly independent rows of **A** is equal to the number of linearly independent columns of **A**. The Rank of **A**, often denoted by $\mathrm{rank}(\mathbf{A})$, is the number of linearly independent rows (or columns). Note that if M > N, then $\mathrm{rank}(\mathbf{A}) \leq N$. It is almost always true that $\mathrm{rank}(\mathbf{A}) = N$, but one must take care to make sure it is true. Similarly, if **A** is a square matrix of size N×N, then it is almost always true that $\mathrm{rank}(\mathbf{A}) = N$. In the latter case, $\mathbf{A}^{-1}$ exists and the matrix **A** is referred to as Invertible or Nonsingular. If $\mathrm{rank}(\mathbf{A}) < N$, then **A** is non-invertible, also referred to as Singular.

**Definition of Null Space.** Assume $\mathbf{x}_1, \mathbf{x}_2 \neq \mathbf{0}$. Note that if $\mathbf{Ax}_1 = \mathbf{Ax}_2 = \mathbf{0}$, then for all $\alpha, \beta \in \mathbb{R}$, $\mathbf{A}(\alpha\mathbf{x}_1 + \beta\mathbf{x}_2) = \alpha\mathbf{Ax}_1 + \beta\mathbf{Ax}_2 = \mathbf{0}$. Therefore, the set $\mathcal{N}_A = \{\mathbf{n} \mid \mathbf{An} = \mathbf{0}\}$ is a subspace of $\mathbb{R}^N$, called the Null Space of **A**. Of course, $\mathbf{0} \in \mathcal{N}_A$. In fact, **0** may be the only vector in $\mathcal{N}_A$, particularly if M > N. However, if M < N, which is often the case for deep learning networks, then it is certain that there are nonzero vectors $\mathbf{x} \in \mathcal{N}_A \subseteq \mathbb{R}^N$.

**Example: Null Spaces and the Singular Value Decomposition.** If M < N and $\mathrm{rank}(\mathbf{A}) = M$, then the SVD of **A** is of the form $\mathbf{A} = \mathbf{V\Sigma U}^t$, where **Σ** is defined in Eq. A.3. Since **V** and **U** are square, nonsingular matrices, $\mathcal{N}_V = \mathcal{N}_U = \{\mathbf{0}\}$. Therefore,

$$\mathcal{N}_A = \left\{ \mathbf{x} \;\middle|\; \mathbf{n} = \mathbf{U}^t\mathbf{x} \text{ and } \mathbf{n} = (0, 0, \dots, 0, n_{M+1}, n_{M+2}, \dots, n_N)^t \right\},$$

since each of the first M columns of **Σ** contains a nonzero element and the last N − M columns of **Σ** are all zeros. If M > N and $\mathrm{rank}(\mathbf{A}) = N$, then **Σ** is defined in Eq. A.1. Since there is a nonzero element in each column of **Σ**, $\mathcal{N}_A = \{\mathbf{0}\}$.

### Condition Number

Consider the following linear system:

$$\mathbf{Ax} = \mathbf{b}: \quad \begin{bmatrix} 2 & 1 \\ a_{21} & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad a_{21} \approx 2. \tag{A.4}$$

Problems with this system are depicted in Fig. A.4.

*Figure A.4: A linear system associated with two lines (red and blue) that are almost parallel. The solution of the system is $x_1 = 0$ and $x_2 = 1$. Plot A depicts the lines. Plot B depicts the mean squared error (MSE) of the system when $x_1$ ranges from −0.1 to 0.2 and $x_2$ is held constant at $x_2 = 1$.*

The MSE remains small even when $x_1$ is as far from the true solution as 0.03, and for some applications an error of 0.03 can be very significant. We can make the MSE much smaller over a wider range of values of $x_1$ by letting $a_{21}$ get closer and closer to 2, e.g. 2.01, 2.001, etc.

Error can also be measured relatively, in addition to the MSE. For any square, nonsingular matrix **A** and linear system $\mathbf{Ax} = \mathbf{b}$, let $\mathbf{x}_t$ denote the true solution of the system and $\mathbf{x}_f$ an estimated solution, which will have some error. Thus $\mathbf{Ax}_t = \mathbf{b}$, but $\mathbf{Ax}_f = \mathbf{b}_f$ with $\mathbf{b} \neq \mathbf{b}_f$. A couple of definitions are required; these lead to the notion of the condition number of a matrix, which is a very useful quantity.

**Definition of Residual and Relative Residual for Linear Systems.** The Residual is $\mathbf{r} = \mathbf{b} - \mathbf{b}_f = \mathbf{b} - \mathbf{Ax}_f$, and the Relative Residual is $\|\mathbf{r}\| / \|\mathbf{b}\|$.

**Definition of Error and Relative Error for Linear Systems.** The Error is $\mathbf{e} = \mathbf{x}_t - \mathbf{x}_f$, and the Relative Error is $\|\mathbf{e}\| / \|\mathbf{x}_t\|$.

**Definition of Condition Number of a Matrix.** The Condition Number of a square matrix **A** is

$$\mathrm{cond}(\mathbf{A}) = \|\mathbf{A}\|\,\|\mathbf{A}^{-1}\|.$$

If **A** is not square, then $\mathbf{A}^{-1}$ is replaced by the pseudo-inverse $\mathbf{A}^+$. It is a fact that $\mathrm{cond}(\mathbf{A}) \geq 1$.
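These ideas can be exercised numerically: the determinant-based solution of a 2×2 system, the rank/null-space facts, and the nearly-parallel system above. The 2×2 system used for the determinant check is illustrative; $a_{21} = 2.001$ is one of the near-2 values mentioned in the text, and with $\mathbf{b} = (1, 1)^t$ the solution is $(0, 1)^t$ as in Figure A.4.

```python
import numpy as np

# 2x2 determinant and the closed-form solution derived above (illustrative A, b).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
assert np.isclose(det, np.linalg.det(A))
x1 = (A[1, 1] * b[0] - A[0, 1] * b[1]) / det
x2 = (A[0, 0] * b[1] - A[1, 0] * b[0]) / det
assert np.allclose(A @ np.array([x1, x2]), b)

# Rank and null space: a wide (M < N) full-rank matrix has nonzero null vectors.
Bmat = np.array([[1.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(Bmat) == 2
n = np.array([1.0, 1.0, -1.0])          # Bmat n = 0
assert np.allclose(Bmat @ n, 0.0)

# Condition number: nearly parallel rows make cond(A) large, and a tiny
# residual no longer guarantees a small error.
A_bad = np.array([[2.0, 1.0],
                  [2.001, 1.0]])
assert np.linalg.cond(np.eye(2)) >= 1.0        # cond(A) >= 1 always
assert np.linalg.cond(A_bad) > 1000.0          # nearly singular

b2 = np.array([1.0, 1.0])
x_true = np.linalg.solve(A_bad, b2)            # (0, 1) up to roundoff
x_wrong = x_true + np.array([0.05, -0.1])      # a perturbed "solution"
residual = np.linalg.norm(b2 - A_bad @ x_wrong)
error = np.linalg.norm(x_wrong - x_true)
assert residual < 0.01                         # small residual...
assert error > 0.1                             # ...but much larger error
```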


### EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

### Matrix & Linear Algebra

Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

### Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

### Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition

6 Vector Spaces with Inned Product Basis and Dimension Section Objective(s): Vector Spaces and Subspaces Linear (In)dependence Basis and Dimension Inner Product 6 Vector Spaces and Subspaces Definition

### Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

### Lecture 3: Review of Linear Algebra

ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,

### Lecture 3: Review of Linear Algebra

ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

### 1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

### Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

### Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C

Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,

### 1. The Polar Decomposition

A PERSONAL INTERVIEW WITH THE SINGULAR VALUE DECOMPOSITION MATAN GAVISH Part. Theory. The Polar Decomposition In what follows, F denotes either R or C. The vector space F n is an inner product space with

### ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

### Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

### A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].

Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes

### (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

### Chapter 3. Matrices. 3.1 Matrices

40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows

### Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 You have 1 hour and 20 minutes. No notes, books, or other references. You are permitted to use Maple during this exam, but you must start with a blank

### HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

### Computational Methods. Eigenvalues and Singular Values

Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

### Singular Value Decomposition

Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our

### MATH 369 Linear Algebra

Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

### Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

### Singular Value Decomposition (SVD)

School of Computing National University of Singapore CS CS524 Theoretical Foundations of Multimedia More Linear Algebra Singular Value Decomposition (SVD) The highpoint of linear algebra Gilbert Strang

### CS123 INTRODUCTION TO COMPUTER GRAPHICS. Linear Algebra 1/33

Linear Algebra 1/33 Vectors A vector is a magnitude and a direction Magnitude = v Direction Also known as norm, length Represented by unit vectors (vectors with a length of 1 that point along distinct

### MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

### Calculating determinants for larger matrices

Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

### 10: Representation of point group part-1 matrix algebra CHEMISTRY. PAPER No.13 Applications of group Theory

1 Subject Chemistry Paper No and Title Module No and Title Module Tag Paper No 13: Applications of Group Theory CHE_P13_M10 2 TABLE OF CONTENTS 1. Learning outcomes 2. Introduction 3. Definition of a matrix

### v = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 :

Length, Angle and the Inner Product The length (or norm) of a vector v R 2 (viewed as connecting the origin to a point (v 1,v 2 )) is easily determined by the Pythagorean Theorem and is denoted v : v =

### Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

### 4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

L. Vandenberghe ECE133A (Winter 2018) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows

### Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

### Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

### BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

### 1. Select the unique answer (choice) for each problem. Write only the answer.

MATH 5 Practice Problem Set Spring 7. Select the unique answer (choice) for each problem. Write only the answer. () Determine all the values of a for which the system has infinitely many solutions: x +

### 1 Matrices and Systems of Linear Equations

March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

### 18.06SC Final Exam Solutions

18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon

### Linear Algebra & Geometry why is linear algebra useful in computer vision?

Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia

### Dimension reduction, PCA & eigenanalysis Based in part on slides from textbook, slides of Susan Holmes. October 3, Statistics 202: Data Mining

Dimension reduction, PCA & eigenanalysis Based in part on slides from textbook, slides of Susan Holmes October 3, 2012 1 / 1 Combinations of features Given a data matrix X n p with p fairly large, it can

### Linear Algebra Practice Problems

Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

### ICS 6N Computational Linear Algebra Matrix Algebra

ICS 6N Computational Linear Algebra Matrix Algebra Xiaohui Xie University of California, Irvine xhx@uci.edu February 2, 2017 Xiaohui Xie (UCI) ICS 6N February 2, 2017 1 / 24 Matrix Consider an m n matrix

### Mathematical Foundations

Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for

### 2. Review of Linear Algebra

2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

### chapter 5 INTRODUCTION TO MATRIX ALGEBRA GOALS 5.1 Basic Definitions

chapter 5 INTRODUCTION TO MATRIX ALGEBRA GOALS The purpose of this chapter is to introduce you to matrix algebra, which has many applications. You are already familiar with several algebras: elementary

### Computational math: Assignment 1

Computational math: Assignment 1 Thanks Ting Gao for her Latex file 11 Let B be a 4 4 matrix to which we apply the following operations: 1double column 1, halve row 3, 3add row 3 to row 1, 4interchange

### Least Squares Optimization

Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. Broadly, these techniques can be used in data analysis and visualization

### NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

### No books, no notes, no calculators. You must show work, unless the question is a true/false, yes/no, or fill-in-the-blank question.

Math 304 Final Exam (May 8) Spring 206 No books, no notes, no calculators. You must show work, unless the question is a true/false, yes/no, or fill-in-the-blank question. Name: Section: Question Points

### Singular Value Decomposition

Singular Value Decomposition (Com S 477/577 Notes Yan-Bin Jia Sep, 7 Introduction Now comes a highlight of linear algebra. Any real m n matrix can be factored as A = UΣV T where U is an m m orthogonal

### Final Exam. Linear Algebra Summer 2011 Math S2010X (3) Corrin Clarkson. August 10th, Solutions

Final Exam Linear Algebra Summer Math SX (3) Corrin Clarkson August th, Name: Solutions Instructions: This is a closed book exam. You may not use the textbook, notes or a calculator. You will have 9 minutes

### Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal

### AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized