Cheat Sheet for MATH46

Here is the stuff you really need to remember for the exams.

1 Linear systems Ax = b

Problem: We consider a linear system of m equations for n unknowns x_1,…,x_n: For a given matrix A ∈ R^{m×n} and a vector b ∈ R^m we want to find all vectors x ∈ R^n such that Ax = b.

Algorithms:

Gaussian elimination: This gives the decomposition

    [ row p_1 of A ]
    [      ⋮       ]  =  L U
    [ row p_m of A ]

where
- the row echelon form U ∈ R^{m×n} has r nonzero rows and r nonzero pivots in columns q_1,…,q_r. The variables x_{q_1},…,x_{q_r} are called basic variables, the remaining variables are called free variables;
- the matrix L ∈ R^{m×m} is lower triangular: the diagonal elements are 1, below the diagonal are the multipliers used in the elimination;
- the permutation vector p contains the numbers 1,…,m in a scrambled order.

For given L, U, p:
- find basis for range A: Use columns q_1,…,q_r of A.
- find basis for null A: Set one of the free variables to 1 and the others to 0. Then use Ux = 0 and back substitution to find the basic variables x_{q_r},…,x_{q_1}. This gives n−r vectors v^{(1)},…,v^{(n−r)}.
- find basis for range A^T: Use rows 1,…,r of U.
- find basis for null A^T: For j = 1,…,m−r: Solve L^T u = e^{(r+j)} by back substitution (L^T is upper triangular). Then let w_{p_1} := u_1, …, w_{p_m} := u_m. This gives m−r basis vectors w^{(1)},…,w^{(m−r)}. Here e^{(k)} := [0,…,0,1,0,…,0]^T denotes the k-th unit vector in R^m, with the 1 in position k.

For given L, U, p and a right hand side vector b ∈ R^m, find the general solution of the linear system Ax = b:
- solve Ly = [b_{p_1},…,b_{p_m}]^T by forward substitution;
- find a particular solution x_part of Ux = y (this is solvable iff y_{r+1} = ⋯ = y_m = 0): set all free variables to 0, then use back substitution to find the basic variables x_{q_r},…,x_{q_1};
- find a basis v^{(1)},…,v^{(n−r)} for null A using U (see above);
- the general solution is x = x_part + c_1 v^{(1)} + ⋯ + c_{n−r} v^{(n−r)} where c_1,…,c_{n−r} ∈ R are arbitrary.
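A minimal numerical sketch of this recipe (assuming NumPy/SciPy; the 3×4 matrix and right hand side are made-up illustration data). Note that scipy.linalg.lu returns the permutation as a matrix P with A = P L U rather than as a vector p, and instead of redoing the substitution bookkeeping by hand, the sketch takes x_part and the null space basis from library routines:

```python
import numpy as np
from scipy.linalg import lu, null_space

# A rank-deficient example: A is 3x4 with rank r = 2 (row 3 = row 1 + row 2),
# so null A has dimension n - r = 2 and null A^T has dimension m - r = 1.
A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 3.],
              [3., 6., 1., 4.]])
b = np.array([1., 3., 4.])          # consistent: b = A @ [1, 0, 1, 0]

P, L, U = lu(A)                      # A = P @ L @ U, i.e. P.T @ A = L @ U
assert np.allclose(P @ L @ U, A)

r = np.linalg.matrix_rank(A)         # r = 2, the number of nonzero rows of U

# Particular solution: for a consistent system lstsq returns an exact solution.
x_part, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x_part, b)    # solvability check

# Orthonormal basis of null A; the general solution is x_part + N @ c.
N = null_space(A)                    # shape (4, n - r) = (4, 2)
c = np.array([2.0, -1.0])            # arbitrary coefficients c_1, c_2
x = x_part + N @ c
assert np.allclose(A @ x, b)         # any such x solves Ax = b
```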
Subspace V = span{v^{(1)},…,v^{(k)}} with given vectors v^{(j)} ∈ R^n:
- find basis for V: Let A be the matrix whose rows are the vectors v^{(1)},…,v^{(k)}; then V = range A^T. Find the row echelon form U (we don't need L, p) and use rows 1,…,r of U.
- find basis for the orthogonal complement V^⊥: since V^⊥ = null A, use the row echelon form U to find a basis for null A.

Important facts to remember:
- range A and range A^T both have dimension r;
- null A and range A^T are orthogonal complements in R^n, dim null A = n − r;
- null A^T and range A are orthogonal complements in R^m, dim null A^T = m − r.
Hence: Ax = b has a solution ⟺ b is orthogonal on the basis vectors of null A^T (we have m − r solvability conditions).

Case of a square matrix A ∈ R^{n×n}: If r = rank A < n we say that A is singular. If r = n we say A is nonsingular.

A is singular ⟺ det A = 0 ⟺ the rows of A are linearly dependent ⟺ the columns of A are linearly dependent.

If A is singular: the linear system Ax = b has either no solution or infinitely many solutions. If A is nonsingular: the linear system Ax = b has a unique solution x ∈ R^n.

Let u^{(j)} denote the solution vector of Au^{(j)} = e^{(j)}; then A^{−1} = [u^{(1)},…,u^{(n)}] ∈ R^{n×n} is called the inverse matrix. The solution of Ax = b is then given by x = A^{−1}b.

2 Least squares problem ‖Ac − b‖ = min

Problem: We have a matrix A ∈ R^{m×n} with m ≥ n and linearly independent columns. Since the linear system Ac = b does in general not have a solution, we want to find c ∈ R^n such that the norm of the residual r = Ac − b, i.e. ‖Ac − b‖, is as small as possible.

Application: Curve fitting: We have measured experimental data

    t:  t_1 … t_m
    y:  y_1 … y_m

and want to fit this with a curve y = c_1 g_1(t) + ⋯ + c_n g_n(t). Here g_1(t),…,g_n(t) are given functions and c_1,…,c_n are unknown parameters which we want to find. We want the residuals r_j = (c_1 g_1(t_j) + ⋯ + c_n g_n(t_j)) − y_j to be small. Least squares fit: find c_1,…,c_n such that r_1² + ⋯ + r_m² is minimal. With

    A = [ g_1(t_1) … g_n(t_1) ]
        [    ⋮           ⋮    ]
        [ g_1(t_m) … g_n(t_m) ]

we have r = Ac − y. Hence we want to find c ∈ R^n such that ‖Ac − y‖ is minimal.
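For instance, fitting made-up data with y = c_1 + c_2 t + c_3 t² (a sketch assuming NumPy; the built-in solver np.linalg.lstsq stands in for the two hand methods given below and returns the same c):

```python
import numpy as np

# Made-up measurements (t_j, y_j), to be fitted with
# y = c_1*g_1(t) + c_2*g_2(t) + c_3*g_3(t),  g_1(t) = 1, g_2(t) = t, g_3(t) = t^2.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.1, 1.9, 3.2, 5.0, 7.1])

# Design matrix with entries A[j, k] = g_k(t_j).
A = np.column_stack([np.ones_like(t), t, t**2])

# c minimizing ||A c - y||; the residual r = A c - y is as small as possible.
c, *_ = np.linalg.lstsq(A, y, rcond=None)
r = A @ c - y
print(c)                          # fitted parameters c_1, c_2, c_3
print(np.linalg.norm(r))          # size of the best-possible residual
```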
Algorithms:

Method 1: Normal equations: Solve the n×n linear system (A^T A)c = A^T b.

Method 2: Orthogonalization: Find an orthogonal basis p^{(1)},…,p^{(n)} for span{a^{(1)},…,a^{(n)}} using the Gram-Schmidt process:

    p^{(1)} := a^{(1)}
    p^{(2)} := a^{(2)} − s_{12} p^{(1)},             s_{12} := (a^{(2)}·p^{(1)})/(p^{(1)}·p^{(1)})
    p^{(3)} := a^{(3)} − s_{13} p^{(1)} − s_{23} p^{(2)},   s_{13} := (a^{(3)}·p^{(1)})/(p^{(1)}·p^{(1)}),  s_{23} := (a^{(3)}·p^{(2)})/(p^{(2)}·p^{(2)})
    …

yielding a decomposition A = PS where P = [p^{(1)},…,p^{(n)}] ∈ R^{m×n} has orthogonal columns and

    S = [ 1  s_{12}  …   s_{1n}    ]
        [     1      ⋱     ⋮       ]
        [            ⋱  s_{n−1,n}  ]
        [                  1       ]   ∈ R^{n×n}

is upper triangular. Let d_j := (p^{(j)}·b)/(p^{(j)}·p^{(j)}) for j = 1,…,n and solve Sc = d by back substitution.

3 The determinant det A

For a square matrix A ∈ R^{n×n}: det A = 0 ⟺ the matrix is singular (i.e., the columns are linearly dependent).

Expansion formula: Use row 1 or column 1 to write det A in terms of (n−1)×(n−1) determinants:

    det A = a_{11} det A^{[11]} − a_{12} det A^{[12]} + ⋯ + (−1)^{n+1} a_{1n} det A^{[1n]}
          = a_{11} det A^{[11]} − a_{21} det A^{[21]} + ⋯ + (−1)^{n+1} a_{n1} det A^{[n1]}

Here A^{[jk]} denotes the matrix where we remove row j and column k. This also works with row j or column j of the matrix, but with the extra factor (−1)^{j+1}.

    det [ a b ; c d ] = ad − bc

    det [ a b c ; d e f ; g h k ] = a det [ e f ; h k ] − b det [ d f ; g k ] + c det [ d e ; g h ]

4 Eigenvalue problem Av = λv

4.1 Finding eigenvalues and eigenvectors

Problem: A square matrix A ∈ R^{n×n} describes a mapping R^n → R^n. If we choose a new basis v^{(1)},…,v^{(n)} of R^n this mapping is represented by the matrix B = V^{−1}AV where V = [v^{(1)},…,v^{(n)}]. We want to find a new basis such that the matrix B becomes as simple as possible: if possible, we would like B to be a diagonal matrix. Hence we want to find vectors v ≠ 0 and numbers λ such that Av = λv.
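Numerically, eigenvalues and eigenvectors can be checked with np.linalg.eig (a sketch assuming NumPy, with an arbitrary 2×2 example; the hand algorithm below computes the same objects):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])           # arbitrary example; its eigenvalues are 3 and 1

lam, V = np.linalg.eig(A)          # lam[j] and column V[:, j] satisfy A v = lam v
for j in range(len(lam)):
    assert np.allclose(A @ V[:, j], lam[j] * V[:, j])

# With n linearly independent eigenvectors, V^{-1} A V is the diagonal matrix B.
B = np.linalg.inv(V) @ A @ V
assert np.allclose(B, np.diag(lam))
```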
Algorithm:

How to find eigenvalues:
- find the characteristic polynomial

      p(λ) = det(A − λI) = det [ a_{11}−λ    …     a_{1n}   ]
                               [    ⋮        ⋱       ⋮      ]
                               [  a_{n1}     …   a_{nn}−λ   ]
           = (−1)^n λ^n + c_{n−1} λ^{n−1} + ⋯ + c_1 λ + c_0

  by evaluating the determinant;
- solve p(λ) = 0 to find the eigenvalues λ_1,…,λ_n. For n = 2 you have to solve a quadratic equation. For n = 3 try to guess one eigenvalue λ_1; then you can write p(λ) = (λ_1 − λ)q(λ) with a quadratic polynomial q(λ). I will give you hints how to find the eigenvalues if n > 3.

How to find eigenvectors: For each eigenvalue λ ∈ {λ_1,…,λ_n} solve the linear system

    (A − λI)v = 0,

i.e., we have to find a basis for null(A − λI). Since det(A − λI) = 0 we have r := rank(A − λI) < n, and we have to find n − r linearly independent vectors. We call null(A − λI) the eigenspace for the eigenvalue λ. By taking the eigenvectors for all eigenvalues we obtain vectors v^{(1)},…,v^{(m)}. These vectors are linearly independent, hence either
- m = n: then the matrix is called diagonalizable, i.e., the vectors v^{(1)},…,v^{(n)} form a basis for all of R^n, or
- m < n: then the matrix is called not diagonalizable.

Note: eigenvalues and eigenvectors may be complex.

4.2 Case of diagonalizable matrices

If we can find n linearly independent eigenvectors v^{(1)},…,v^{(n)}, then V = [v^{(1)},…,v^{(n)}] is nonsingular. Then by changing to the new basis v^{(1)},…,v^{(n)} the matrix A becomes the diagonal matrix B (note that blank elements in a matrix are supposed to be zero):

    A [v^{(1)},…,v^{(n)}] = [v^{(1)},…,v^{(n)}] [ λ_1         ]
                                                [      ⋱      ]
                                                [         λ_n ]

i.e., AV = VB, A = VBV^{−1}, V^{−1}AV = B.

4.3 Case of non-diagonalizable matrices and the Jordan normal form

If all eigenvalues are different the matrix is always diagonalizable. If there are multiple eigenvalues it can happen that there are fewer eigenvectors than the multiplicity of the eigenvalue. Assume, e.g., that λ_1 is a triple eigenvalue, but dim null(A − λ_1 I) = 1, i.e., there is only one eigenvector v^{(1)}. Then we can find two generalized eigenvectors v^{(1,2)}, v^{(1,3)} satisfying

    (A − λ_1 I)v^{(1)} = 0,   (A − λ_1 I)v^{(1,2)} = v^{(1)},   (A − λ_1 I)v^{(1,3)} = v^{(1,2)}
We call v^{(1)}, v^{(1,2)}, v^{(1,3)} a Jordan chain of length 3. Note that we have

    A [v^{(1)}, v^{(1,2)}, v^{(1,3)}] = [v^{(1)}, v^{(1,2)}, v^{(1,3)}] [ λ_1   1        ]
                                                                        [      λ_1   1   ]
                                                                        [           λ_1  ]

where instead of a diagonal matrix we now have a so-called Jordan box.

Example: Assume that for A ∈ R^{6×6} we have λ_1 = λ_2 = λ_3 = 2, λ_4 = λ_5 = 1, λ_6 = 5. If there is only one eigenvector v^{(1)} for λ = 2 we can find a Jordan chain v^{(1)}, v^{(1,2)}, v^{(1,3)} of length 3. If there is only one eigenvector v^{(4)} for λ = 1 we can find a Jordan chain v^{(4)}, v^{(4,2)} of length 2. Then we have V = [v^{(1)}, v^{(1,2)}, v^{(1,3)}, v^{(4)}, v^{(4,2)}, v^{(6)}] and AV = VB with

    B = [ 2 1 0 0 0 0 ]
        [ 0 2 1 0 0 0 ]
        [ 0 0 2 0 0 0 ]
        [ 0 0 0 1 1 0 ]
        [ 0 0 0 0 1 0 ]
        [ 0 0 0 0 0 5 ]

Main result (Jordan normal form): For any matrix A ∈ R^{n×n} we can find a basis of Jordan chains for R^n. If V is the matrix which has these Jordan chains as columns, then we have AV = VB where the matrix B has Jordan boxes along the diagonal.

4.4 Symmetric matrices and quadratic forms

A quadratic form is a sum of terms with x_i x_j. We can write it as

    q(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij} x_i x_j = x^T A x

with a symmetric matrix A ∈ R^{n×n}. E.g., consider for n = 3 the quadratic form

    q(x) = x_1² + 2x_2² + 3x_3² + 4x_1x_2 + 6x_1x_3 + 8x_2x_3 = [x_1, x_2, x_3] [ 1 2 3 ] [ x_1 ]
                                                                                [ 2 2 4 ] [ x_2 ]
                                                                                [ 3 4 3 ] [ x_3 ]

(the coefficient of each cross term x_i x_j is split evenly between a_{ij} and a_{ji}).

Change of basis: For linearly independent vectors v^{(1)},…,v^{(n)} we can write x = y_1 v^{(1)} + ⋯ + y_n v^{(n)} = Vy with the matrix V = [v^{(1)},…,v^{(n)}]. Then the quadratic form becomes

    q(x) = x^T A x = y^T (V^T A V) y = y^T B y   with B = V^T A V.

Main result for a symmetric matrix A ∈ R^{n×n}: We can find an orthonormal basis v^{(1)},…,v^{(n)} of eigenvectors (i.e., V^T V = I and V^{−1} = V^T) such that

    V^T A V = B = [ λ_1         ]
                  [      ⋱      ]
                  [         λ_n ]

with real eigenvalues λ_1,…,λ_n; hence with the new variables y the quadratic form becomes

    q(x) = x^T A x = y^T B y = λ_1 y_1² + ⋯ + λ_n y_n².
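A sketch of this result in NumPy (np.linalg.eigh is the eigensolver for symmetric matrices; the 3×3 matrix is the one used in the worked example below):

```python
import numpy as np

A = np.array([[0., 1., 1.],        # the symmetric matrix from the example below
              [1., 0., 1.],
              [1., 1., 0.]])

lam, V = np.linalg.eigh(A)                       # real eigenvalues, orthonormal V
assert np.allclose(V.T @ V, np.eye(3))           # V^T V = I
assert np.allclose(V.T @ A @ V, np.diag(lam))    # V^T A V = B diagonal

# In the new variables y (x = V y, i.e. y = V^T x) the quadratic form is diagonal:
x = np.array([1., 2., 3.])
y = V.T @ x
assert np.isclose(x @ A @ x, np.sum(lam * y**2)) # q(x) = sum of lam_j * y_j^2
```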
Algorithm:
- find the characteristic polynomial p(λ) = det(A − λI);
- solve p(λ) = 0; this gives the eigenvalues λ_1,…,λ_n, which are all real;
- for each eigenvalue λ: find a basis for null(A − λI); this gives n linearly independent eigenvectors u^{(1)},…,u^{(n)} (i.e., the matrix A is diagonalizable);
- for multiple eigenvalues: find an orthogonal basis of the eigenspace by using the Gram-Schmidt process;
- normalize the basis vectors: v^{(j)} := u^{(j)} / ‖u^{(j)}‖.

Example: For

    A = [ 0 1 1 ]
        [ 1 0 1 ]
        [ 1 1 0 ]

we obtain the eigenvalues λ_1 = 2, λ_2 = −1, λ_3 = −1.

For λ = 2: A − 2I has rank 2, hence we get one eigenvector v^{(1)} = [1, 1, 1]^T.

For λ = −1: A + I has rank 1, hence we get two eigenvectors v^{(2)} = [−1, 1, 0]^T, v^{(3)} = [−1, 0, 1]^T. Note that v^{(2)} and v^{(3)} are not orthogonal on each other. Therefore we use Gram-Schmidt to replace v^{(3)} with

    w^{(3)} = v^{(3)} − (v^{(3)}·v^{(2)})/(v^{(2)}·v^{(2)}) v^{(2)} = [−1, 0, 1]^T − (1/2)[−1, 1, 0]^T = [−1/2, −1/2, 1]^T.

Now the three vectors [1,1,1]^T, [−1,1,0]^T, [−1/2,−1/2,1]^T are orthogonal on each other. We now divide each vector by its norm to obtain an orthonormal basis:

    V = [ 1/√3  −1/√2  −1/√6 ]        B = V^T A V = [ 2  0  0 ]
        [ 1/√3   1/√2  −1/√6 ]                      [ 0 −1  0 ]
        [ 1/√3   0      2/√6 ]                      [ 0  0 −1 ]

Hence we obtain, with x = Vy,

    q(x) = y^T B y = 2y_1² − y_2² − y_3².

4.5 Positive definite matrices, positive semidefinite matrices

We say a symmetric matrix A ∈ R^{n×n} is positive definite if q(x) = x^T A x > 0 for all x ≠ 0. We say a symmetric matrix A ∈ R^{n×n} is positive semidefinite if q(x) = x^T A x ≥ 0 for all x.

Result: A is positive definite ⟺ all eigenvalues of A are positive. A is positive semidefinite ⟺ all eigenvalues of A are ≥ 0.

Example: The matrix A from the example above is not positive definite since it has a negative eigenvalue λ = −1.
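A quick numerical check of this eigenvalue criterion (a sketch assuming NumPy; the Cholesky test at the end is an equivalent criterion for positive definiteness that is not part of the summary above):

```python
import numpy as np

def is_positive_definite(A: np.ndarray) -> bool:
    """Eigenvalue test: a symmetric A is positive definite iff all eigenvalues > 0."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
print(is_positive_definite(A))          # False: A has the eigenvalue -1
print(is_positive_definite(np.eye(3)))  # True: all eigenvalues equal 1

# Equivalent test: the Cholesky factorization exists iff A is positive definite.
try:
    np.linalg.cholesky(A)
except np.linalg.LinAlgError:
    print("not positive definite")      # this branch is taken for the A above
```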