
Solution of Linear Equations (Com S 477/577 Notes)

Yan-Bin Jia

Sep 7, 2017

We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations. A subclass of the latter comprises all the systems of linear equations, to which the area of linear algebra is devoted. In fact, many a problem in numerical analysis can be reduced to one of solving a system of linear equations. We already witnessed this in the use of Newton's method to solve a system of nonlinear equations. Other applications include the solution of ordinary or partial differential equations (ODEs or PDEs), the eigenvalue problems of mathematical analysis, least-squares fitting of data, and polynomial approximation.

1 Elements of Linear Algebra

Recall that a basis for a vector space is a sequence of vectors that are linearly independent and span the space. Given a linear function f : R^n → R^m and a pair of bases

    B1 = {e_1, e_2, ..., e_n}  for R^n,
    B2 = {d_1, d_2, ..., d_m}  for R^m,

we can represent f by an m × n matrix A such that f(x) = Ax for any x ∈ R^n. Note that the matrix A depends on the choice of the bases B1 and B2.

The rank r of an m × n matrix A is the number of independent rows. The column space of A, denoted col(A), consists of all linear combinations of its columns. The row space of A, denoted row(A), consists of all linear combinations of its rows. Since each column has m components, the column space of A is a subspace of R^m; it consists of all the points in R^m that are image vectors under the mapping A. Similarly, the row space is a subspace of R^n. The null space null(A) of the matrix is made up of all the solutions to Ax = 0, where x ∈ R^n.

Theorem 1 (Fundamental Theorem of Linear Algebra)  Let A be an m × n matrix of rank r. Both its row and column spaces have dimension r. Its null space has dimension n − r and is the orthogonal complement of its row space (in R^n). In other words,

    R^n = row(A) ⊕ null(A),
    n = dim(row(A)) + dim(null(A)).
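Theorem 1 is easy to check numerically. Below is a minimal sketch assuming NumPy is available; the 3 × 4 matrix of rank 2 is my own example, not one from the notes:

    import numpy as np

    # A 3x4 matrix of rank 2 (the third row is the sum of the first two).
    A = np.array([[1., 2., 0., 1.],
                  [0., 1., 1., 2.],
                  [1., 3., 1., 3.]])

    r = np.linalg.matrix_rank(A)      # dimension of row(A) and of col(A)
    m, n = A.shape

    # An orthonormal basis of null(A) from the SVD: the rows of Vt past the
    # rank span the orthogonal complement of row(A) in R^n.
    _, _, Vt = np.linalg.svd(A)
    null_basis = Vt[r:]               # n - r vectors

    print(r, n - r)                           # 2 2, and r + (n - r) = n
    print(np.allclose(A @ null_basis.T, 0))   # True: Ax = 0 on the null space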

Consider the system of equations

    Ax = b.

If b is not an element of col(A), then the system is inconsistent (or overdetermined). If b ∈ col(A) and null(A) is non-trivial, then we say that the system is underdetermined. In this case, every solution x can be split into a row space component x_r and a null space component x_n, so that

    Ax = A(x_r + x_n) = Ax_r + Ax_n = Ax_r.

The null space component goes to zero, Ax_n = 0, while the row space component goes to the column space, Ax_r = Ax.

If A is an n × n square matrix, we say that A is singular iff det(A) = 0 iff rank(A) < n iff the rows of A are not linearly independent iff the columns of A are not linearly independent iff the dimension of the null space of A is non-zero iff A is not invertible.

2 LU Decomposition

An m × n matrix A, where m ≤ n, can be written in the form

    PA = LDU,

where

    P is an m × m permutation matrix that specifies row interchanges,
    L is an m × m square lower-triangular matrix with 1s on the diagonal,
    U is an m × n upper-triangular matrix with 1s on the diagonal,
    D is an m × m square diagonal matrix.

1. The entries on the diagonal of D are called pivots (named after the Gaussian elimination procedure).
2. When A is a square matrix, the product of the pivots is equal to ±det(A), where the sign − is chosen if an odd number of row interchanges are performed and the sign + is chosen otherwise.
3. If A is symmetric and P = I, the identity matrix, then U = L^T.
4. If A is symmetric and positive definite, then U = L^T and the diagonal entries of D are strictly positive.
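These properties can be observed numerically. A minimal sketch, assuming NumPy and SciPy are available; note that scipy.linalg.lu returns factors with A = PLU, so its permutation corresponds to P^T in the notation above. The matrix is my own example:

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[2., 1., 1.],
                  [4., 3., 3.],
                  [8., 7., 9.]])

    P, L, U = lu(A)              # A = P @ L @ U, i.e. P^T A = L U
    D = np.diag(np.diag(U))      # the pivots sit on the diagonal of U
    U1 = np.linalg.solve(D, U)   # unit upper-triangular factor, U = D @ U1

    print(np.allclose(P.T @ A, L @ D @ U1))        # True: P^T A = L D U1
    print(np.prod(np.diag(D)), np.linalg.det(A))   # product of pivots = +-det(A)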

Example 1. [The matrices of the original example were lost in transcription; the factorization below is a representative substitute.] With P = I,

    ( 1  2 )   ( 1  0 ) ( 1   0 ) ( 1  2 )
    ( 3  4 ) = ( 3  1 ) ( 0  −2 ) ( 0  1 ),

where the pivots are 1 and −2, and their product equals det(A) = −2.

Like most other decompositions and factorizations, the LU (or LDU) decomposition is used to simplify the solution of the system Ax = b. Suppose A is square and non-singular. Solving the above system is equivalent to solving

    LDUx = Pb.

We first solve

    Ly = Pb

for the vector y, and then solve

    Ux = D^{-1} y

for the vector x. Each of these systems can be solved easily using forward or backward substitution.

2.1 Crout's Algorithm

The LU decomposition of an n × n square matrix A can be generated directly by Gaussian elimination. Nevertheless, a more efficient procedure is Crout's algorithm. In case no pivoting is needed, the algorithm yields two matrices L = (l_ij) and U = (u_ij) whose product is A = (a_ij), namely,

    ( 1     0     0     ···  0 ) ( u_11  u_12  u_13  ···  u_1n )   ( a_11  a_12  a_13  ···  a_1n )
    ( l_21  1     0     ···  0 ) ( 0     u_22  u_23  ···  u_2n )   ( a_21  a_22  a_23  ···  a_2n )
    ( l_31  l_32  1     ···  0 ) ( 0     0     u_33  ···  u_3n ) = ( a_31  a_32  a_33  ···  a_3n )
    ( ⋮     ⋮     ⋮     ⋱    ⋮ ) ( ⋮     ⋮     ⋮     ⋱    ⋮    )   ( ⋮     ⋮     ⋮     ⋱    ⋮    )
    ( l_n1  l_n2  l_n3  ···  1 ) ( 0     0     0     ···  u_nn )   ( a_n1  a_n2  a_n3  ···  a_nn )

The algorithm solves for L and U simultaneously, column by column. At the jth outer iteration step, it generates column j of U and then column j of L:

    1: for j = 1, 2, ..., n do
    2:     for i = 1, 2, ..., j do
    3:         u_ij ← a_ij − Σ_{k=1}^{i−1} l_ik u_kj
    4:     for i = j+1, j+2, ..., n do
    5:         l_ij ← (a_ij − Σ_{k=1}^{j−1} l_ik u_kj) / u_jj

You can also proceed row by row, except that row i of L has to be determined before row i of U.
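The pseudocode translates directly into NumPy. Below is my own rendering of steps 1-5 (no pivoting), together with the forward and backward substitutions used to solve Ly = b and then Ux = y; since this is the plain LU rather than LDU form, the D^{-1} step is folded into U:

    import numpy as np

    def crout_lu(A):
        """Steps 1-5 of the pseudocode: A = L @ U with unit lower-triangular L.
        No pivoting, so it fails if a zero pivot u_jj is encountered."""
        n = A.shape[0]
        L = np.eye(n)
        U = np.zeros((n, n))
        for j in range(n):                 # column j of U, then column j of L
            for i in range(j + 1):
                U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
            if U[j, j] == 0.0:
                raise ZeroDivisionError("zero pivot: no LU decomposition")
            for i in range(j + 1, n):
                L[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
        return L, U

    def solve_lu(L, U, b):
        """Forward substitution for Ly = b, then back substitution for Ux = y."""
        n = len(b)
        y = np.zeros(n)
        for i in range(n):
            y[i] = b[i] - L[i, :i] @ y[:i]
        x = np.zeros(n)
        for i in reversed(range(n)):
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    A = np.array([[4., 3.], [6., 3.]])
    b = np.array([10., 12.])
    L, U = crout_lu(A)
    print(np.allclose(L @ U, A), solve_lu(L, U, b))   # True [1. 2.]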

If you work through a few iterations of the above procedure, you will see that the l_ik and u_kj that occur on the right-hand sides of the two assignments are already determined by the time they are needed. Moreover, every a_ij is used only once and never again; the entries of A are consumed column by column as well. For compactness, we can therefore store each l_ij or u_ij in the location that a_ij used to occupy:

    ( u_11  u_12  u_13  ···  u_1n )
    ( l_21  u_22  u_23  ···  u_2n )
    ( l_31  l_32  u_33  ···  u_3n )
    ( ⋮     ⋮     ⋮     ⋱    ⋮    )
    ( l_n1  l_n2  l_n3  ···  u_nn )

Looking at step 5 of Crout's algorithm, we should be worried about the possibility of u_jj becoming zero. Here is an example from [2, p. 97] showing a matrix with no LU decomposition due to this degeneracy. Suppose

    ( 1  2  3 )   ( 1     0     0 ) ( u_11  u_12  u_13 )
    ( 2  4  7 ) = ( l_21  1     0 ) ( 0     u_22  u_23 )
    ( 3  5  3 )   ( l_31  l_32  1 ) ( 0     0     u_33 )

We must have u_11 = 1, l_21 = 2, l_31 = 3, u_12 = 2, and u_22 = 0. The (3,2) entry determined from the product matrix on the right-hand side is l_31 u_12 + l_32 u_22 = 6. It is not equal to the (3,2) entry (value 5) of the original matrix! The contradiction arose because u_22 = 0. In fact, we would not even be able to continue Crout's algorithm, which would calculate l_32 via a division by zero.

Theorem 2  An n × n matrix A has an LU factorization if det(A_i) ≠ 0, where A_i is the upper left i × i submatrix, for i = 1, ..., n − 1. If the LU factorization exists and det(A) ≠ 0, then it is unique.

For numerical stability, pivoting should be performed in Crout's algorithm. The key point is to notice that the equation in step 3 for u_ij (when i = j) is exactly the same as the equation in step 5 for l_ij, except for the division in the latter. This means that we can choose the largest (in absolute value) of the quantities

    a_ij − Σ_{k=1}^{j−1} l_ik u_kj,    i = j, ..., n,

as the diagonal element u_jj, and switch the corresponding rows of L and A.
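A sketch of the pivoted variant in NumPy (again my own rendering of the description above): at column j it evaluates the common expression for every candidate row i = j, ..., n, promotes the row of largest magnitude to the pivot position, and records the interchange in a permutation. The test matrix is a 3 × 3 matrix of mine that loosely echoes the surviving entries of Example 4 below:

    import numpy as np

    def crout_lu_pivot(A):
        """Crout's algorithm with partial pivoting: returns P, L, U with PA = LU."""
        n = A.shape[0]
        A = A.copy()
        L = np.eye(n)
        U = np.zeros((n, n))
        perm = np.arange(n)                  # accumulated row interchanges
        for j in range(n):
            for i in range(j):
                U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
            # Common expression of steps 3 and 5: candidate pivots, rows j..n-1.
            cand = A[j:, j] - L[j:, :j] @ U[:j, j]
            p = j + int(np.argmax(np.abs(cand)))
            # Promote row p: swap rows of A, of the computed part of L, and of perm.
            A[[j, p]] = A[[p, j]]
            L[[j, p], :j] = L[[p, j], :j]
            perm[[j, p]] = perm[[p, j]]
            cand[[0, p - j]] = cand[[p - j, 0]]
            U[j, j] = cand[0]
            L[j + 1:, j] = cand[1:] / U[j, j]
        return np.eye(n)[perm], L, U

    A = np.array([[1., 7., 6.], [4., 8., 0.], [9., -6., 4.]])
    P, L, U = crout_lu_pivot(A)
    print(np.allclose(P @ A, L @ U))   # True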

Example 4. To illustrate pivoting, let us carry out a few steps of Crout's algorithm on a 4 × 4 matrix. [Entries marked ∗ were lost in transcription.]

    A = ( ∗   7   6  5 )
        ( 4   8   0  ∗ )
        ( 9  −6   4  ∗ )
        ( 5   ∗   ∗  ∗ )

In the first step, we need to determine u_11. Which of rows 1, 2, 3, 4 would result in the largest (absolute) value of u_11?

    if row 1:  u_11 = ∗,
    if row 2:  u_11 = 4,
    if row 3:  u_11 = 9,
    if row 4:  u_11 = 5.

Thus we set u_11 = 9 and exchange rows 1 and 3 of A:

    ( 9  −6   4  ∗ )   ( 0  0  1  0 ) ( ∗   7   6  5 )
    ( 4   8   0  ∗ ) = ( 0  1  0  0 ) ( 4   8   0  ∗ )
    ( ∗   7   6  5 )   ( 1  0  0  0 ) ( 9  −6   4  ∗ )
    ( 5   ∗   ∗  ∗ )   ( 0  0  0  1 ) ( 5   ∗   ∗  ∗ )

where the first matrix on the right-hand side is a permutation matrix which exchanges rows 1 and 3 of the second (original) matrix via multiplication.

In the second step, we use the first column to determine that l_21 = 4/9, l_31 = ∗/9, and l_41 = 5/9. Next, we let u_12 = a_12 = −6, and the matrices L and U take the form (with · marking entries not yet computed)

    L = ( 1    0  0  0 )      U = ( 9  −6    4  ∗ )
        ( 4/9  1  0  0 )          ( 0  u_22  ·  · )
        ( ∗/9  ·  1  0 )          ( 0  0     ·  · )
        ( 5/9  ·  ·  1 )          ( 0  0     0  · )

To determine u_22, we find out which of rows 2, 3, 4 would result in the largest absolute value of u_22:

    if row 2:  (4/9)(−6) + u_22 = 8  ⇒  u_22 = 32/3,
    if row 3:  (∗/9)(−6) + u_22 = 7  ⇒  u_22 = 7 + 2∗/3,
    if row 4:  (5/9)(−6) + u_22 = ∗  ⇒  u_22 = ∗ + 10/3.

Since row 2 yields the largest absolute value of u_22, we set u_22 = 32/3, and no further row exchange is needed. Using u_22, we obtain the second column of L:

    l_32 = (7 + 2∗/3) / (32/3)    and    l_42 = (∗ + 10/3) / (32/3).

By this time we have computed the permutation P and the first two columns of L and U; the remaining columns are generated in the same manner.

3 Factorization Based on Eigenvalues

Suppose the n × n matrix A has n linearly independent eigenvectors. Then

    A = SΛS^{-1},

where Λ is a diagonal matrix whose entries are the eigenvalues of A, and S is an eigenvector matrix whose columns are the eigenvectors of A. When the eigenvalues of A are all different, it is automatic that the eigenvectors are independent; therefore A can be diagonalized.
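Numerically, assuming NumPy (the 2 × 2 matrix is my own example):

    import numpy as np

    A = np.array([[4., 1.],
                  [2., 3.]])

    lam, S = np.linalg.eig(A)    # columns of S are eigenvectors
    Lam = np.diag(lam)

    # The eigenvalues 2 and 5 are distinct, so S is invertible and
    # A diagonalizes as S Lam S^{-1}.
    print(sorted(lam))                                  # [2.0, 5.0]
    print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))   # True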

Every n × n matrix can be decomposed into the Jordan form

    A = MJM^{-1},

where

    J = ( J_1             )
        (      J_2        )
        (           ⋱     )
        (             J_s )

and each Jordan block has the form

    J_i = ( λ_i  1            )
          (      λ_i  ⋱       )
          (           ⋱    1  )
          (              λ_i  ),    i = 1, ..., s,

with λ_i an eigenvalue of A. Here s is the number of independent eigenvectors of A, and M consists of eigenvectors and generalized eigenvectors.

4 QR Factorization

Suppose A is an m × n matrix with independent columns (hence m ≥ n). We can factor A as

    A = QR.

Here Q, with dimensions m × n, has the same column space as A, but its columns are orthonormal vectors; in other words, Q^T Q = I. And R, with dimensions n × n, is invertible and upper triangular.

The first application is the QR algorithm, which repeatedly produces QR factorizations of matrices derived from A, yielding the eigenvalues of A in the process. The second application is in the solution of an overconstrained system Ax = b in the least-squares sense. The least-squares solution x̄ is given by

    x̄ = (A^T A)^{-1} A^T b,

assuming that the columns of A are independent. But Q^T Q = I, so

    x̄ = (R^T Q^T Q R)^{-1} R^T Q^T b
       = (R^T R)^{-1} R^T Q^T b
       = R^{-1} (R^T)^{-1} R^T Q^T b
       = R^{-1} Q^T b.

So we can obtain x̄ by computing Q^T b and then using back substitution to solve R x̄ = Q^T b. This is numerically more stable than solving the system A^T A x̄ = A^T b.

The QR factorization can be computed using the Gram-Schmidt process.

4.1 The Gram-Schmidt Procedure

Given n linearly independent vectors v_1, ..., v_n, the Gram-Schmidt procedure constructs n orthonormal vectors û_1, ..., û_n such that the two sets of vectors span the same space. First, it constructs n orthogonal vectors w_1, ..., w_n as follows:

    w_1 = v_1,                                                      (1)
    w_j = v_j − Σ_{i=1}^{j−1} ((v_j^T w_i) / (w_i^T w_i)) w_i,    j = 2, ..., n.    (2)

Essentially, we subtract from every new vector v_j its projections onto the directions w_1/‖w_1‖, ..., w_{j−1}/‖w_{j−1}‖ that are already set. Next, we simply perform a normalization by letting

    û_j = w_j / ‖w_j‖,    j = 1, ..., n.
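A direct rendering of (1) and (2) followed by normalization, assuming NumPy; the input vectors are my own:

    import numpy as np

    def gram_schmidt(V):
        """Orthogonalize the columns of V via (1) and (2), then normalize.
        Returns a matrix with orthonormal columns spanning col(V)."""
        m, n = V.shape
        W = np.zeros((m, n))
        for j in range(n):
            w = V[:, j].copy()
            for i in range(j):   # subtract the projection onto each earlier w_i
                w -= (V[:, j] @ W[:, i]) / (W[:, i] @ W[:, i]) * W[:, i]
            W[:, j] = w
        return W / np.linalg.norm(W, axis=0)   # u_hat_j = w_j / ||w_j||

    V = np.array([[1., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 1.]])
    Q = gram_schmidt(V)
    print(np.allclose(Q.T @ Q, np.eye(3)))   # True: orthonormal columns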

Example 5. Consider three linearly independent vectors v_1, v_2, v_3 in R^3 and carry out the Gram-Schmidt procedure: first w_1 = v_1, and then

    w_2 = v_2 − ((v_2^T w_1) / (w_1^T w_1)) w_1,
    w_3 = v_3 − ((v_3^T w_1) / (w_1^T w_1)) w_1 − ((v_3^T w_2) / (w_2^T w_2)) w_2.

Normalizing w_1, w_2, w_3 then yields the orthonormal basis û_1, û_2, û_3. [The specific vectors of this example were lost in transcription; from what survives, ‖w_2‖ = √6.]

4.2 Generating the QR Factorization

To obtain the QR factorization, we first use Gram-Schmidt to orthonormalize the columns of A. The resulting orthonormal vectors constitute the columns of Q, that is,

    Q = (û_1, û_2, ..., û_n).

The matrix R is formed by keeping track of the Gram-Schmidt operations; R expresses the columns of A as linear combinations of the columns of Q.

More specifically, we rewrite (1) and (2) as

    v_j = Σ_{i=1}^{j−1} s_ij w_i + w_j,    j = 1, ..., n,

where the coefficients s_ij = (v_j^T w_i) / (w_i^T w_i), i = 1, ..., j − 1, have already been calculated by the Gram-Schmidt procedure. The above equations are further rewritten as

    v_j = Σ_{i=1}^{j−1} (s_ij ‖w_i‖) û_i + ‖w_j‖ û_j = Σ_{i=1}^{n} r_ij û_i,

where

    r_ij = s_ij ‖w_i‖   if i < j,
           ‖w_j‖        if i = j,
           0            if i > j.

From A = QR we see that R = (r_ij).
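In practice one can also read off R as R = Q^T A (since Q^T Q = I), and the least-squares recipe R x̄ = Q^T b then needs only a back substitution. A minimal sketch assuming NumPy and SciPy, with data of my own:

    import numpy as np
    from scipy.linalg import solve_triangular

    # Overconstrained system: 4 equations, 2 unknowns.
    A = np.array([[1., 1.],
                  [1., 2.],
                  [1., 3.],
                  [1., 4.]])
    b = np.array([1., 2., 2., 4.])

    Q, R = np.linalg.qr(A)             # reduced QR: Q is 4x2, R is 2x2 upper
    x = solve_triangular(R, Q.T @ b)   # back substitution for R x = Q^T b

    # Agrees with the normal equations (A^T A) x = A^T b, but is more stable.
    print(x, np.linalg.lstsq(A, b, rcond=None)[0])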

Example 6. Continuing Example 5, we notice that

    v_1 = w_1 = ‖w_1‖ û_1,
    v_2 = s_12 w_1 + w_2 = s_12 ‖w_1‖ û_1 + √6 û_2,
    v_3 = s_13 w_1 − w_2 + w_3 = s_13 ‖w_1‖ û_1 − √6 û_2 + ‖w_3‖ û_3.

So the QR decomposition is given by

    ( v_1  v_2  v_3 ) = ( û_1  û_2  û_3 ) ( ‖w_1‖  s_12 ‖w_1‖  s_13 ‖w_1‖ )
                                          ( 0      √6          −√6        )
                                          ( 0      0           ‖w_3‖      )

[As in Example 5, the numerical entries were lost in transcription; the √6 entries are what survives of the original numbers.]

References

[1] M. Erdmann. Lecture Notes for 16-811 Mathematical Fundamentals for Robotics. The Robotics Institute, Carnegie Mellon University, 1998.

[2] G. Golub and C. F. Van Loan. Matrix Computations, 3rd edition. The Johns Hopkins University Press, Baltimore, 1996.

[3] G. Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, 1993.

[4] W. H. Press, et al. Numerical Recipes in C++: The Art of Scientific Computing, 2nd edition. Cambridge University Press, 2002.