Cheat Sheet for MATH461


Here is the stuff you really need to remember for the exams.

1 Linear systems Ax = b

Problem: We consider a linear system of m equations for n unknowns x_1, ..., x_n: for a given matrix A in R^(m x n) and a vector b in R^m we want to find all vectors x in R^n such that Ax = b.

Algorithms:

Gaussian elimination: This gives the decomposition

  [row p_1 of A; ...; row p_m of A] = L U

where
- the row echelon form U in R^(m x n) has r nonzero rows and r nonzero pivots in columns q_1, ..., q_r. The variables x_{q_1}, ..., x_{q_r} are called basic variables; the remaining variables are called free variables.
- the matrix L in R^(m x m) is lower triangular: the diagonal elements are 1, and below the diagonal are the multipliers used in the elimination.
- the permutation vector p contains the numbers 1, ..., m in a scrambled order.

For given L, U, p:
- find basis for range(A): use columns q_1, ..., q_r of A.
- find basis for null(A): set one of the free variables to 1 and the others to zero; then use Ux = 0 and back substitution to find the basic variables x_{q_r}, ..., x_{q_1}. This gives vectors v^(1), ..., v^(n-r).
- find basis for range(A^T): use rows 1, ..., r of U.
- find basis for null(A^T): for j = 1, ..., m-r solve L^T u = e^(r+j) (back substitution, since L^T is upper triangular), then let w_{p_1} := u_1, ..., w_{p_m} := u_m. This gives m-r basis vectors w^(1), ..., w^(m-r). Here e^(k) := [0, ..., 0, 1, 0, ..., 0]^T with the 1 in position k.

For given L, U, p and right-hand side vector b in R^m, find the general solution of the linear system Ax = b:
- solve Ly = [b_{p_1}; ...; b_{p_m}] by forward substitution.
- find a particular solution x_part of Ux = y: set all free variables to 0, then use back substitution to find the basic variables x_{q_r}, ..., x_{q_1}.
- find a basis v^(1), ..., v^(n-r) for null(A) using U (see above).
- the general solution is x = x_part + c_1 v^(1) + ... + c_{n-r} v^(n-r), where c_1, ..., c_{n-r} in R are arbitrary.
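The bookkeeping above (multipliers stored in L, permutation vector p) can be sketched in a few lines of numpy. This is an illustrative implementation I am supplying, not part of the course handout, and for simplicity it assumes a square nonsingular matrix:

```python
import numpy as np

def lu_partial_pivot(A):
    """Gaussian elimination with partial pivoting.

    Returns L (unit lower triangular, multipliers below the diagonal),
    U (upper triangular), and the permutation vector p with A[p] = L @ U.
    Sketch only: assumes A is square and nonsingular.
    """
    U = A.astype(float)          # astype makes a copy; A itself is untouched
    m = U.shape[0]
    p = np.arange(m)
    L = np.eye(m)
    for k in range(m - 1):
        # choose the pivot row: largest entry in column k on or below the diagonal
        i = k + np.argmax(np.abs(U[k:, k]))
        if np.isclose(U[i, k], 0.0):
            raise ValueError("zero pivot column: sketch assumes nonsingular A")
        # swap rows k and i in U, in p, and in the finished part of L
        U[[k, i]], p[[k, i]] = U[[i, k]], p[[i, k]]
        L[[k, i], :k] = L[[i, k], :k]
        # eliminate below the pivot, storing the multipliers in L
        for j in range(k + 1, m):
            L[j, k] = U[j, k] / U[k, k]
            U[j, k:] -= L[j, k] * U[k, k:]
    return L, U, p
```

Rows p_1, ..., p_m of A then equal LU, and the bases for range and null spaces follow from U as described above.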

Subspaces: V = span{v^(1), ..., v^(k)} with given vectors v^(j) in R^n.
- find basis for V: let A have the rows v^(1)T, ..., v^(k)T; then V = range(A^T). Find the row echelon form U (we don't need L, p) and use rows 1, ..., r of U.
- find basis for the orthogonal complement V^perp: since V^perp = null(A), use the row echelon form U to find a basis for null(A).

Important facts to remember:
- range(A) and range(A^T) both have dimension r.
- null(A) and range(A^T) are orthogonal complements in R^n, dim null(A) = n - r.
- null(A^T) and range(A) are orthogonal complements in R^m, dim null(A^T) = m - r.
- Hence: Ax = b has a solution  <=>  b is orthogonal to the basis vectors of null(A^T) (we have m - r solvability conditions).

Case of a square matrix A in R^(n x n): If r = rank(A) < n we say that A is singular; if r = n we say that A is nonsingular.

A is singular  <=>  det A = 0  <=>  the rows of A are linearly dependent  <=>  the columns of A are linearly dependent.

If A is singular: the linear system Ax = b has either no solution or infinitely many solutions. If A is nonsingular: the linear system Ax = b has a unique solution x in R^n. Let u^(j) denote the solution vector of Au^(j) = e^(j); then A^(-1) = [u^(1), ..., u^(n)] in R^(n x n) is called the inverse matrix, and the solution of Ax = b is given by x = A^(-1) b.

2 Least squares problem ||Ac - b|| = min

Problem: We have a matrix A in R^(m x n) with m >= n and linearly independent columns. Since the linear system Ac = b does in general not have a solution, we want to find c in R^n such that the residual r = Ac - b is as small as possible: ||Ac - b|| = min.

Application: curve fitting. We have measured experimental data t_1, ..., t_m with values y_1, ..., y_m and want to fit this with a curve y = c_1 g_1(t) + ... + c_n g_n(t). Here g_1(t), ..., g_n(t) are given functions and c_1, ..., c_n are unknown parameters which we want to find. We want the residuals r_j = c_1 g_1(t_j) + ... + c_n g_n(t_j) - y_j to be small. Least squares fit: find c_1, ..., c_n such that r_1^2 + ... + r_m^2 is minimal. With the matrix

  A = [ g_1(t_1) ... g_n(t_1) ; ... ; g_1(t_m) ... g_n(t_m) ]

we have r = Ac - y, hence we want to find c in R^n such that ||Ac - y|| is minimal.
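Such a fit can be checked numerically in numpy; the data points below are made up for illustration, and the fit is the line y = c_1 + c_2 t (basis functions g_1(t) = 1, g_2(t) = t):

```python
import numpy as np

# hypothetical measured data (t_j, y_j), j = 1, ..., m
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

# columns of A are g_1(t_j) = 1 and g_2(t_j) = t_j
A = np.column_stack([np.ones_like(t), t])

# least squares solution c minimizing ||A c - y||
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# the normal equations (A^T A) c = A^T y give the same c
c_normal = np.linalg.solve(A.T @ A, A.T @ y)
```

A useful sanity check: at the minimum the residual Ac - y is orthogonal to the columns of A, i.e. A^T (Ac - y) = 0.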

Algorithms:

Method 1: Normal equations: solve the n x n linear system (A^T A) c = A^T b.

Method 2: Orthogonalization: find an orthogonal basis p^(1), ..., p^(n) for span{a^(1), ..., a^(n)} using the Gram-Schmidt process:

  p^(1) := a^(1)
  p^(2) := a^(2) - s_{12} p^(1),                with s_{12} = (a^(2) . p^(1)) / (p^(1) . p^(1))
  p^(3) := a^(3) - s_{13} p^(1) - s_{23} p^(2), with s_{13} = (a^(3) . p^(1)) / (p^(1) . p^(1)), s_{23} = (a^(3) . p^(2)) / (p^(2) . p^(2))
  ...

This yields a decomposition A = PS where P = [p^(1), ..., p^(n)] in R^(m x n) has orthogonal columns and S in R^(n x n) is upper triangular with ones on the diagonal and the entries s_{jk} above it. Let d_j := (p^(j) . b) / (p^(j) . p^(j)) for j = 1, ..., n; then solve Sc = d by back substitution.

3 The determinant det A

For a square matrix A in R^(n x n): det A = 0  <=>  the matrix is singular (i.e., its columns are linearly dependent).

Expansion formula: use row 1 or column 1 to write det A in terms of (n-1) x (n-1) determinants:

  det A = a_11 det A^[11] - a_12 det A^[12] + ... + (-1)^(n+1) a_1n det A^[1n]
        = a_11 det A^[11] - a_21 det A^[21] + ... + (-1)^(n+1) a_n1 det A^[n1]

Here A^[jk] denotes the matrix where we remove row j and column k. This also works with row j or column j of the matrix, but with an extra sign factor (-1)^(j+1).

  det [a b; c d] = ad - bc
  det [a b c; d e f; g h k] = a det [e f; h k] - b det [d f; g k] + c det [d e; g h]

4 Eigenvalue problem Av = λv

4.1 Finding eigenvalues and eigenvectors

Problem: A square matrix A in R^(n x n) describes a mapping R^n -> R^n. If we choose a new basis v^(1), ..., v^(n) of R^n this mapping is represented by the matrix B = V^(-1) A V where V = [v^(1), ..., v^(n)]. We want to find a new basis such that the matrix B becomes as simple as possible: if possible, we would like B to be a diagonal matrix. Hence we want to find vectors v ≠ 0 and numbers λ such that Av = λv.
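Method 2 can be sketched in numpy as follows; this is my own illustrative code (not from the handout), assuming the columns of A are linearly independent:

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt on the columns a^(1), ..., a^(n) of A
    (assumed linearly independent): returns P with orthogonal columns
    and unit upper triangular S such that A = P @ S."""
    m, n = A.shape
    P = np.zeros((m, n))
    S = np.eye(n)
    for j in range(n):
        p = A[:, j].astype(float)
        for i in range(j):
            # s_{ij} = (a^(j) . p^(i)) / (p^(i) . p^(i))
            S[i, j] = (A[:, j] @ P[:, i]) / (P[:, i] @ P[:, i])
            p = p - S[i, j] * P[:, i]
        P[:, j] = p
    return P, S
```

The least squares solution then comes from d_j = (p^(j) . b) / (p^(j) . p^(j)) and back substitution on Sc = d.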

Algorithm:

How to find eigenvalues:
- find the characteristic polynomial p(λ) = det(A - λI), where A - λI is the matrix A with λ subtracted from each diagonal entry a_11, ..., a_nn. Evaluating this determinant gives a polynomial of degree n:

    p(λ) = (-1)^n λ^n + c_{n-1} λ^{n-1} + ... + c_1 λ + c_0.

- solve p(λ) = 0 to find the eigenvalues λ_1, ..., λ_n. For n = 2 you have to solve a quadratic equation. For n = 3 try to guess one eigenvalue λ_1; then you can write p(λ) = (λ_1 - λ) q(λ) with a quadratic polynomial q(λ). I will give you hints how to find the eigenvalues if n > 3.

How to find eigenvectors: for each eigenvalue λ in {λ_1, ..., λ_n} solve the linear system (A - λI)v = 0, i.e., we have to find a basis for null(A - λI). Since det(A - λI) = 0 we have r := rank(A - λI) < n, and we have to find n - r linearly independent vectors. We call null(A - λI) the eigenspace for the eigenvalue λ.

By taking the eigenvectors for all eigenvalues we obtain vectors v^(1), ..., v^(m). These vectors are linearly independent, hence either
- m = n: then the matrix is called diagonalizable, i.e., the vectors v^(1), ..., v^(n) form a basis for all of R^n, or
- m < n: then the matrix is called not diagonalizable.

Note: eigenvalues and eigenvectors may be complex.

4.2 Case of diagonalizable matrices

If we can find n linearly independent eigenvectors v^(1), ..., v^(n), then V = [v^(1), ..., v^(n)] is nonsingular. By changing to the new basis v^(1), ..., v^(n) the matrix A becomes the diagonal matrix B = diag(λ_1, ..., λ_n):

  A [v^(1), ..., v^(n)] = [v^(1), ..., v^(n)] diag(λ_1, ..., λ_n),

i.e., AV = VB, A = V B V^(-1), V^(-1) A V = B.

4.3 Case of non-diagonalizable matrices and the Jordan normal form

If all eigenvalues are different the matrix is always diagonalizable. If there are multiple eigenvalues it can happen that there are fewer eigenvectors than the multiplicity of the eigenvalue. Assume, e.g., that λ is a triple eigenvalue but dim null(A - λI) = 1, i.e., there is only one eigenvector v^(1). Then we can find two generalized eigenvectors v^(1,2), v^(1,3) satisfying

  (A - λI) v^(1) = 0,   (A - λI) v^(1,2) = v^(1),   (A - λI) v^(1,3) = v^(1,2).
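The diagonalizable case can be checked numerically; the 2 x 2 matrix here is an example I chose, not one from the course:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# p(l) = det(A - l*I) = (2 - l)^2 - 1, so the eigenvalues are l = 1 and l = 3
lam, V = np.linalg.eig(A)   # columns of V are eigenvectors

# diagonalization: A V = V B with B = diag(lam), i.e. A = V B V^{-1}
B = np.diag(lam)
```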

We call v^(1), v^(1,2), v^(1,3) a Jordan chain of length 3. Note that we then have

  A [v^(1), v^(1,2), v^(1,3)] = [v^(1), v^(1,2), v^(1,3)] [λ 1 0; 0 λ 1; 0 0 λ]

where instead of a diagonal matrix we now have a so-called Jordan box: λ on the diagonal and ones directly above it.

Example: assume that for A in R^(6 x 6) we have λ_1 = λ_2 = λ_3, λ_4 = λ_5, and λ_6 = 5. If there is only one eigenvector v^(1) for λ_1 we can find a Jordan chain v^(1), v^(1,2), v^(1,3) of length 3. If there is only one eigenvector v^(4) for λ_4 we can find a Jordan chain v^(4), v^(4,2) of length 2. Then we have V = [v^(1), v^(1,2), v^(1,3), v^(4), v^(4,2), v^(6)] and AV = VB where B is block diagonal: a 3 x 3 Jordan box for λ_1, a 2 x 2 Jordan box for λ_4, and the 1 x 1 block λ_6 = 5.

Main result (Jordan normal form): for any matrix A in R^(n x n) we can find a basis of Jordan chains for R^n. If V is the matrix which has these Jordan chains as columns, then AV = VB where the matrix B has Jordan boxes along the diagonal.

4.4 Symmetric matrices and quadratic forms

A quadratic form is a sum of terms involving products x_i x_j. We can write it as

  q(x) = sum_{i=1..n} sum_{j=1..n} a_ij x_i x_j = x^T A x

with a symmetric matrix A in R^(n x n) (the coefficient of a cross term x_i x_j with i ≠ j is split evenly between a_ij and a_ji).

Change of basis: for linearly independent vectors v^(1), ..., v^(n) we can write x = y_1 v^(1) + ... + y_n v^(n) = Vy with the matrix V = [v^(1), ..., v^(n)]. Then the quadratic form becomes

  q(x) = x^T A x = y^T (V^T A V) y = y^T B y   with B = V^T A V.

Main result for a symmetric matrix A in R^(n x n): we can find an orthonormal basis v^(1), ..., v^(n) of eigenvectors (i.e., V^T V = I and V^(-1) = V^T) such that V^T A V = B = diag(λ_1, ..., λ_n) with real eigenvalues λ_1, ..., λ_n. Hence in the new variables y the quadratic form becomes

  q(x) = x^T A x = y^T B y = λ_1 y_1^2 + ... + λ_n y_n^2.
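The Jordan-chain relations can be verified numerically. Here A is built from a 3 x 3 Jordan box J and an arbitrarily chosen invertible V; both are made up for illustration:

```python
import numpy as np

lam = 2.0
# 3 x 3 Jordan box for the eigenvalue lam: lam on the diagonal, ones above it
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

# columns of V form a Jordan chain v1, v2, v3 (V chosen arbitrarily, invertible)
V = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
A = V @ J @ np.linalg.inv(V)   # then A V = V J

v1, v2, v3 = V.T
N = A - lam * np.eye(3)
# Jordan chain relations: N v1 = 0, N v2 = v1, N v3 = v2
```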

Algorithm:
- find the characteristic polynomial p(λ) = det(A - λI).
- solve p(λ) = 0; this gives the eigenvalues λ_1, ..., λ_n, which are all real.
- for each eigenvalue λ: find a basis for null(A - λI). This gives n linearly independent eigenvectors u^(1), ..., u^(n) (i.e., the matrix A is diagonalizable).
- for multiple eigenvalues: find an orthogonal basis of each eigenspace by using the Gram-Schmidt process.
- normalize the basis vectors: v^(j) := u^(j) / ||u^(j)||.

Example: suppose a symmetric matrix A in R^(3 x 3) has a simple eigenvalue λ_1 and a double eigenvalue λ_2 = λ_3. For λ_1 the matrix A - λ_1 I has rank 2, hence we get one eigenvector v^(1). For λ_2 the matrix A - λ_2 I has rank 1, hence we get two eigenvectors v^(2), v^(3). If v^(2) and v^(3) are not orthogonal to each other, use Gram-Schmidt to replace v^(3) with

  w^(3) = v^(3) - ((v^(3) . v^(2)) / (v^(2) . v^(2))) v^(2).

Now the three vectors v^(1), v^(2), w^(3) are orthogonal to each other (eigenvectors for different eigenvalues of a symmetric matrix are automatically orthogonal). We divide each vector by its norm to obtain an orthonormal basis; this gives V with V^T V = I, B = V^T A V = diag(λ_1, λ_2, λ_2), and

  q(x) = y^T B y = λ_1 y_1^2 + λ_2 y_2^2 + λ_2 y_3^2.

Positive definite matrices, positive semidefinite matrices

We say a symmetric matrix A in R^(n x n) is positive definite if q(x) = x^T A x > 0 for all x ≠ 0. We say a symmetric matrix A in R^(n x n) is positive semidefinite if q(x) = x^T A x >= 0 for all x.

Result: A is positive definite  <=>  all eigenvalues of A are positive. A is positive semidefinite  <=>  all eigenvalues of A are >= 0.

Example: a symmetric matrix with a negative eigenvalue λ < 0 is not positive definite, since taking x as the corresponding eigenvector gives q(x) = x^T A x = λ ||x||^2 < 0.
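For symmetric matrices numpy provides eigh, which returns exactly the orthonormal eigenbasis described above; the 3 x 3 matrix here is my own example, not from the course:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # symmetric example matrix (made up)

# eigh is for symmetric matrices: real eigenvalues in ascending order
# and an orthonormal eigenbasis, so V^T V = I and V^T A V = diag(lam)
lam, V = np.linalg.eigh(A)

# positive definite  <=>  all eigenvalues are positive
is_pos_def = np.all(lam > 0)
```

For this A the eigenvalues are 1, 3, 3, all positive, so A is positive definite.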