Numerical Methods I: Solving Square Linear Systems: GEM and LU Factorization


Numerical Methods I: Solving Square Linear Systems: GEM and LU Factorization
Aleksandar Donev, Courant Institute, NYU (donev@courant.nyu.edu)
MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014
September 18th, 2014

Outline
1 Linear Algebra Background
2 Conditioning of linear systems
3 Gauss elimination and LU factorization (Pivoting; LU factorization; Cholesky Factorization; Pivoting and Stability)
4 Conclusions

Linear Algebra Background: Kernel Space

The dimension of the column space of a matrix is called the rank of the matrix A ∈ R^{m,n}: r = rank A ≤ min(m, n). If r = min(m, n) then the matrix is of full rank. The nullspace null(A) or kernel ker(A) of a matrix A is the subspace of vectors x for which Ax = 0. The dimension of the nullspace is called the nullity of the matrix. The orthogonal complement V^⊥ or orthogonal subspace of a subspace V is the set of all vectors that are orthogonal to every vector in V.
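As a quick illustration (mine, not from the slides), MATLAB's built-in rank and null functions compute these quantities numerically via the SVD; a rank-1 example:

>> A = [1 2; 2 4; 3 6];   % second column is twice the first, so r = 1
>> rank(A)                % returns 1
>> size(null(A), 2)       % nullity = n - r = 2 - 1 = 1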

Linear Algebra Background: Fundamental Theorem

One of the most important theorems in linear algebra: for A ∈ R^{m,n},

rank A + nullity A = n.

In addition to the range and kernel spaces of a matrix, two more important vector subspaces for a given matrix A are: the row space or coimage of a matrix, which is the column (image) space of its transpose, im A^T; its dimension is also equal to the rank. The left nullspace or cokernel of a matrix is the nullspace or kernel of its transpose, ker A^T. Second fundamental theorem in linear algebra:

im A^T = (ker A)^⊥ and ker A^T = (im A)^⊥

Linear Algebra Background: The Matrix Inverse

A square n × n matrix A is invertible or nonsingular if there exists a matrix inverse A^{-1} = B such that AB = BA = I, where I is the identity matrix (ones along the diagonal, zeros everywhere else). The following statements are equivalent for A ∈ R^{n,n}: A is invertible. A is full-rank, rank A = n. The columns and also the rows are linearly independent and form a basis for R^n. The determinant is nonzero, det A ≠ 0. Zero is not an eigenvalue of A.

Linear Algebra Background: Matrix Algebra

Matrix-matrix multiplication is not commutative, AB ≠ BA in general. Note that x^T y is a scalar (dot product), so it does commute. Some useful properties:

C(A + B) = CA + CB and ABC = (AB)C = A(BC)
(A^T)^T = A and (AB)^T = B^T A^T
(A^{-1})^{-1} = A and (AB)^{-1} = B^{-1} A^{-1} and (A^T)^{-1} = (A^{-1})^T

Instead of matrix division, think of multiplication by an inverse: if AB = C, then multiplying by A^{-1} on the left gives B = A^{-1}C, while multiplying by B^{-1} on the right gives A = CB^{-1}.
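In MATLAB this "multiply by an inverse" viewpoint maps onto the backslash and slash operators, which solve the corresponding systems without ever forming the inverse; a minimal illustration (matrices chosen by me so that B is nonsingular):

>> A = [2 0; 0 4]; C = [2 4; 8 4];
>> B = A\C    % left division: B = A^{-1}*C, i.e., solves A*B = C
>> C/B        % right division: C*B^{-1}; recovers A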

Linear Algebra Background: Vector norms

Norms are the abstraction for the notion of a length or magnitude. For a vector x ∈ R^n, the p-norm is

‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p}

and special cases of interest are:
1 The 1-norm (L1 norm or Manhattan distance), ‖x‖_1 = Σ_{i=1}^n |x_i|
2 The 2-norm (L2 norm or Euclidean distance), ‖x‖_2 = √(x^T x) = √(Σ_{i=1}^n |x_i|²)
3 The ∞-norm (L∞ or maximum norm), ‖x‖_∞ = max_{1≤i≤n} |x_i|

Note that all of these norms are inter-related (equivalent) in a finite-dimensional setting.
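These special cases map directly onto MATLAB's norm function; a quick sanity check (my example, easy to verify by hand):

>> x = [3; -4];
>> norm(x, 1)     % |3| + |-4| = 7
>> norm(x, 2)     % sqrt(9 + 16) = 5
>> norm(x, Inf)   % max(|3|, |-4|) = 4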

Linear Algebra Background: Matrix norms

The matrix norm induced by a given vector norm is

‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖, which implies ‖Ax‖ ≤ ‖A‖ ‖x‖.

The last bound holds for matrices as well, ‖AB‖ ≤ ‖A‖ ‖B‖. Special cases of interest are:
1 The 1-norm or column sum norm, ‖A‖_1 = max_j Σ_{i=1}^n |a_ij|
2 The ∞-norm or row sum norm, ‖A‖_∞ = max_i Σ_{j=1}^n |a_ij|
3 The 2-norm or spectral norm, ‖A‖_2 = σ_1 (the largest singular value)
4 The Euclidean or Frobenius norm, ‖A‖_F = √(Σ_{i,j} |a_ij|²) (note this is not an induced norm)
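All four are available through norm; a brief illustration (my example; comments indicate which definition each call uses):

>> A = [1 2; -3 4];
>> norm(A, 1)       % max column sum: max(1+3, 2+4) = 6
>> norm(A, Inf)     % max row sum: max(1+2, 3+4) = 7
>> norm(A, 2)       % largest singular value (computed via the SVD)
>> norm(A, 'fro')   % sqrt(1 + 4 + 9 + 16) = sqrt(30)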

Conditioning of linear systems: Matrices and linear systems

It is said that 70% or more of applied mathematics research involves solving systems of m linear equations for n unknowns:

Σ_{j=1}^n a_ij x_j = b_i, i = 1, …, m.

Linear systems arise directly from discrete models, e.g., traffic flow in a city. Or they may come from representing more abstract linear operators in some finite basis (representation). Common abstraction: Ax = b. Special case: square invertible matrices, m = n, det A ≠ 0:

x = A^{-1} b.

The goal: calculate the solution x given the data A, b in the most numerically stable and also efficient way.

Conditioning of linear systems: Stability analysis, rhs perturbations

Perturbations on the right hand side (rhs) only:

A(x + δx) = b + δb  ⇒  b + Aδx = b + δb  ⇒  δx = A^{-1} δb.

Using the bounds ‖δx‖ ≤ ‖A^{-1}‖ ‖δb‖ and ‖b‖ ≤ ‖A‖ ‖x‖ ⇒ ‖x‖ ≥ ‖b‖/‖A‖, the relative error in the solution can be bounded by

‖δx‖/‖x‖ ≤ ‖A^{-1}‖ ‖δb‖ / (‖b‖/‖A‖) = κ(A) ‖δb‖/‖b‖

where the condition number κ(A) depends on the matrix norm used:

κ(A) = ‖A‖ ‖A^{-1}‖ ≥ 1.
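This bound is easy to probe numerically; a hedged sketch of mine using the notoriously ill-conditioned Hilbert matrix, where cond computes κ in the 2-norm:

>> n = 10; A = hilb(n); x = ones(n, 1); b = A*x;
>> db = 1e-10 * randn(n, 1);           % small rhs perturbation
>> dx = A\(b + db) - x;
>> norm(dx)/norm(x)                    % observed relative error
>> cond(A) * norm(db)/norm(b)          % the bound (up to roundoff in the solve itself)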

Conditioning of linear systems: Stability analysis, matrix perturbations

Perturbations of the matrix only:

(A + δA)(x + δx) = b  ⇒  δx = -A^{-1} (δA)(x + δx)

‖δx‖/‖x + δx‖ ≤ ‖A^{-1}‖ ‖δA‖ = κ(A) ‖δA‖/‖A‖.

Conclusion: the conditioning of the linear system is determined by

κ(A) = ‖A‖ ‖A^{-1}‖ ≥ 1.

No numerical method can cure an ill-conditioned system, κ(A) ≫ 1. The condition number can only be estimated in practice, since A^{-1} is not available (see MATLAB's rcond function). Practice: what is κ(A) for diagonal matrices in the 1-norm, ∞-norm, and 2-norm?
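A quick comparison (my example) of the exact 1-norm condition number against the cheap estimate behind rcond:

>> A = hilb(8);
>> cond(A, 1)     % exact kappa_1; requires forming inv(A), expensive
>> 1/rcond(A)     % cheap LAPACK-style estimate of kappa_1 from the LU factors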

Conditioning of linear systems: Mixed perturbations

Now consider general perturbations of the data:

(A + δA)(x + δx) = b + δb.

The full derivation is in the book [next two slides]:

‖δx‖/‖x‖ ≤ κ(A)/(1 − κ(A) ‖δA‖/‖A‖) · (‖δb‖/‖b‖ + ‖δA‖/‖A‖)

Important practical estimate: roundoff error in the data, with rounding unit u (recall u ≈ 10^{-16} for double precision), produces a relative error

‖δx‖/‖x‖ ≲ 2uκ(A).

It certainly makes no sense to try to solve systems with κ(A) > 10^{16}.

Conditioning of linear systems: General perturbations (1) and (2)

[These two slides reproduce the full derivation of the mixed-perturbation bound from the book; the derivation is not transcribed here.]

Numerical Solution of Linear Systems

There are several numerical methods for solving a system of linear equations. The most appropriate method really depends on the properties of the matrix A:
- General dense matrices, where the entries in A are mostly non-zero and nothing special is known. We focus on the Gaussian Elimination Method (GEM).
- General sparse matrices, where only a small fraction of the entries a_ij ≠ 0.
- Symmetric and also positive-definite dense or sparse matrices.
- Special structured sparse matrices, arising from specific physical properties of the underlying system (more in Numerical Methods II).
It is also important to consider how many times a linear system with the same or related matrix or right hand side needs to be solved.

GEM: Eliminating x_1; Eliminating x_2; Backward substitution

[These three slides work through eliminating x_1 and x_2 and the final backward substitution on a concrete system; the worked example is not transcribed.]

GEM as an LU factorization tool

Observation, proven in the book (not very intuitively): A = LU, where L is unit lower triangular (l_ii = 1 on the diagonal) and U is upper triangular. GEM is thus essentially the same as the LU factorization method.

GEM in MATLAB

Sample MATLAB code (for learning purposes only, not real computing!):

function A = MyLU(A)
% LU factorization in-place (overwrite A)
[n, m] = size(A);
if (n ~= m); error('Matrix not square'); end
for k = 1:(n-1) % For variable x(k)
    % Calculate multipliers in column k:
    A((k+1):n, k) = A((k+1):n, k) / A(k, k);
    % Note: Pivot element A(k,k) assumed nonzero!
    for j = (k+1):n
        % Eliminate variable x(k):
        A((k+1):n, j) = A((k+1):n, j) - A((k+1):n, k) * A(k, j);
    end
end
end
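A short usage sketch (my own, not from the slides): since MyLU overwrites A with both factors, L and U can be recovered with tril and triu:

>> A = [4 3; 6 3];
>> F = MyLU(A);
>> L = tril(F, -1) + eye(2);   % unit lower triangular factor
>> U = triu(F);                % upper triangular factor
>> norm(L*U - A)               % zero: the factors reproduce A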

Gauss Elimination Method (GEM)

GEM is a general method for dense matrices and is commonly used. Implementing GEM efficiently is difficult and we will not discuss it here, since others have done it for you! The LAPACK public-domain library is the main repository for excellent implementations of dense linear solvers. MATLAB uses a highly-optimized variant of GEM by default, mostly based on LAPACK. MATLAB does have specialized solvers for special cases of matrices, so always look at the help pages!

Pivoting: Pivoting example

[This slide shows a worked pivoting example; not transcribed.]

Pivoting: GEM Matlab example (1)

>> L = [1 0 0; 3 1 0; 2 0 1]
L =
     1     0     0
     3     1     0
     2     0     1
>> U = [1 1 3; 0 3 -5; 0 0 -4]
U =
     1     1     3
     0     3    -5
     0     0    -4

Pivoting: GEM Matlab example (2)

>> AP = L*U   % Permuted A
AP =
     1     1     3
     3     6     4
     2     2     2
>> A = [1 1 3; 2 2 2; 3 6 4]
A =
     1     1     3
     2     2     2
     3     6     4

Pivoting: GEM Matlab example (3)

>> AP = MyLU(AP)   % Two last rows permuted
AP =
     1     1     3
     3     3    -5
     2     0    -4
>> MyLU(A)   % No pivoting
ans =
     1     1     3
     2     0    -4
     3   Inf   Inf

Pivoting: GEM Matlab example (4)

>> [Lm, Um, Pm] = lu(A)
Lm =
    1.0000         0         0
    0.6667    1.0000         0
    0.3333    0.5000    1.0000
Um =
    3.0000    6.0000    4.0000
         0   -2.0000   -0.6667
         0         0    2.0000
Pm =
     0     0     1
     0     1     0
     1     0     0

Pivoting: GEM Matlab example (5)

>> Lm*Um
ans =
     3     6     4
     2     2     2
     1     1     3
>> A
A =
     1     1     3
     2     2     2
     3     6     4
>> norm(Lm*Um - Pm*A)
ans =
     0

Pivoting: Pivoting during LU factorization

Partial (row) pivoting permutes the rows (equations) of A in order to ensure sufficiently large pivots and thus numerical stability: PA = LU. Here P is a permutation matrix, meaning a matrix obtained by permuting rows and/or columns of the identity matrix. Complete pivoting also permutes columns, PAQ = LU.

LU factorization: Solving linear systems

Once an LU factorization is available, solving a linear system is simple:

Ax = LUx = L(Ux) = Ly = b

so solve for y using forward substitution. This was implicitly done in the example above by overwriting b to become y during the factorization. Then solve for x using backward substitution:

Ux = y.

In MATLAB, the backslash operator (see help on mldivide), x = A\b (mathematically A^{-1}b), solves the linear system Ax = b using the LAPACK library. Never use the matrix inverse to do this, even if written as such on paper.
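For concreteness, a minimal sketch of the two triangular solves (the function name and loop structure are mine; in real code use backslash, which dispatches to optimized triangular solvers):

function x = TriangularSolves(L, U, b)
% Solve L*y = b by forward substitution, then U*x = y by backward substitution
n = length(b);
y = zeros(n, 1);
for i = 1:n
    y(i) = (b(i) - L(i, 1:i-1)*y(1:i-1)) / L(i, i);
end
x = zeros(n, 1);
for i = n:-1:1
    x(i) = (y(i) - U(i, i+1:n)*x(i+1:n)) / U(i, i);
end
end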

LU factorization: Permutation matrices

If row pivoting is necessary, the same applies if one also permutes the equations (rhs b):

PAx = LUx = Ly = Pb

or formally (meaning for theoretical purposes only)

x = (LU)^{-1} Pb = U^{-1} L^{-1} Pb.

Observing that permutation matrices are orthogonal matrices, P^{-1} = P^T,

A = P^{-1} LU = (P^T L) U = L̃U

where L̃ = P^T L is a row permutation of a unit lower triangular matrix.

LU factorization: In MATLAB

Doing x = A\b is equivalent to performing an LU factorization and doing two triangular solves (forward and backward substitution):

[L, U] = lu(A);
y = L\b;
x = U\y;

This is a carefully implemented, backward stable, pivoted LU factorization, meaning that the returned solution is as accurate as the condition number allows. The MATLAB call [L, U, P] = lu(A) returns the permutation matrix, but the call [L, U] = lu(A) permutes the lower triangular factor directly (it returns L̃ = P^T L).

LU factorization: GEM Matlab example (1)

>> A = [1 2 3; 4 5 6; 7 8 0];
>> b = [2 1 -1]';
>> x = A^(-1)*b; x   % Don't do this!
ans =
   -2.5556
    2.1111
    0.1111
>> x = A\b; x   % Do this instead
ans =
   -2.5556
    2.1111
    0.1111
>> linsolve(A, b)   % Even more control
ans =
   -2.5556
    2.1111
    0.1111

LU factorization: GEM Matlab example (2)

>> [L, U] = lu(A)   % Even better if re-solving
L =
    0.1429    1.0000         0
    0.5714    0.5000    1.0000
    1.0000         0         0
U =
    7.0000    8.0000         0
         0    0.8571    3.0000
         0         0    4.5000
>> norm(L*U - A, inf)
ans =
     0
>> y = L\b;
>> x = U\y; x
ans =
   -2.5556
    2.1111
    0.1111

LU factorization: Cost estimates for GEM

For forward or backward substitution, at step k there are (n − k) multiplications and subtractions, plus a few divisions. The total over all n steps is

Σ_{k=1}^n (n − k) = n(n − 1)/2 ≈ n²/2

subtractions and multiplications, giving a total of about n² floating-point operations (FLOPs). For GEM, at step k there are (n − k)² multiplications and subtractions, plus a few divisions. The total is

FLOPs = Σ_{k=1}^n 2(n − k)² ≈ 2n³/3,

and the O(n²) operations for the triangular solves are neglected. When many linear systems need to be solved with the same A, the factorization can be reused.
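This cost asymmetry is why factorization reuse pays off; a hedged timing sketch of mine (sizes are arbitrary; the diagonal shift just keeps the random matrix well-conditioned):

n = 2000; A = randn(n) + n*eye(n); B = randn(n, 100);
tic; [L, U, P] = lu(A);        % one O(n^3) factorization...
X1 = U\(L\(P*B)); toc          % ...then 100 cheap O(n^2) triangular solves
X2 = zeros(n, 100);
tic; for j = 1:100, X2(:, j) = A\B(:, j); end; toc   % refactors A every time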

Cholesky Factorization: Positive-Definite Matrices

A real symmetric matrix A is positive definite iff (if and only if):
1 All of its eigenvalues are real (follows from symmetry) and positive.
2 For all x ≠ 0, x^T A x > 0, i.e., the quadratic form defined by the matrix A is convex.
3 There exists a unique lower triangular L with l_ii > 0 such that A = LL^T, termed the Cholesky factorization of A (a symmetric LU factorization).
For Hermitian complex matrices just replace transposes with adjoints (conjugate transpose), e.g., A^T → A^* (or A^H in the book).

Cholesky Factorization

The MATLAB built-in function R = chol(A) gives the Cholesky factorization and is a good way to test for positive-definiteness. For Hermitian/symmetric matrices with positive diagonals, MATLAB tries a Cholesky factorization first, before resorting to LU factorization with pivoting. The cost of a Cholesky factorization is about half the cost of GEM, n³/3 FLOPs.
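Note that chol returns the upper triangular factor R with A = R^T R, and with a second output it becomes a non-throwing positive-definiteness test; a small illustration (my example):

>> A = [4 1; 1 3];       % symmetric positive definite
>> [R, p] = chol(A);     % p == 0 signals success, i.e., A is SPD
>> norm(R'*R - A)        % R is upper triangular with A = R'*R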

Pivoting and Stability: When pivoting is unnecessary

It can be shown that roundoff is not a problem for triangular systems Tx = b (forward or backward substitution). Specifically,

‖δx‖/‖x‖ ≲ nuκ(T),

so unless the number of unknowns n is very, very large, the roundoff errors are small for well-conditioned systems. Special classes of well-behaved matrices A:
1 Diagonally-dominant matrices, meaning |a_ii| ≥ Σ_{j≠i} |a_ij| or |a_ii| ≥ Σ_{j≠i} |a_ji|.
2 Symmetric positive-definite matrices, i.e., Cholesky factorization does not require pivoting:

‖δx‖_2/‖x‖_2 ≲ 8n²uκ(A).
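A tiny helper of my own naming that tests the first condition above, row diagonal dominance:

function tf = IsRowDiagDominant(A)
% True if |a_ii| >= sum_{j~=i} |a_ij| holds for every row i
d = abs(diag(A));
offdiag = sum(abs(A), 2) - d;   % row sums of off-diagonal magnitudes
tf = all(d >= offdiag);
end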

Pivoting and Stability: When pivoting is necessary

For a general matrix A, roundoff analysis leads to an estimate of the following type:

‖δx‖/‖x‖ ≲ nuκ(A) ‖L‖‖U‖/‖A‖,

which shows that small pivots, i.e., large multipliers l_ij, can lead to large roundoff errors. What we want is an estimate that only involves n and κ(A). Since the optimal pivoting cannot be predicted a priori, it is best to search for the largest pivot in the same column as the current pivot and exchange the two rows (partial pivoting).

Pivoting and Stability: Partial Pivoting

The cost of partial pivoting is searching among O(n) elements n times, so O(n²), which is small compared to the O(n³) total cost. Complete pivoting requires searching O(n²) elements n times, so the cost is O(n³), which is usually not justified. The recommended strategy is to use partial (row) pivoting even if not strictly necessary (MATLAB takes care of this).

Pivoting and Stability: What pivoting does

The problem with GEM without pivoting is large growth factors (not large numbers per se):

ρ = max_{i,j,k} |a_ij^{(k)}| / max_{i,j} |a_ij|.

Pivoting is not needed for positive-definite matrices because ρ ≤ 2: we have |a_ij|² ≤ a_ii a_jj (so the largest element is on the diagonal), and the GEM update

a_ij^{(k+1)} = a_ij^{(k)} − l_ik a_kj^{(k)} = a_ij^{(k)} − (a_ki^{(k)}/a_kk^{(k)}) a_kj^{(k)}

gives, for the diagonal entries,

a_ii^{(k+1)} = a_ii^{(k)} − (a_ki^{(k)})²/a_kk^{(k)} ≤ a_ii^{(k)},

so the diagonal, and hence the whole matrix, cannot grow by more than a factor of 2.

Conclusions: Matrix Rescaling

Pivoting is not always sufficient to ensure lack of roundoff problems. In particular, large variations among the entries in A should be avoided. This can usually be remedied by changing the physical units for x and b to the natural units x_0 and b_0. Rescaling the unknowns and the equations is generally a good idea even if not necessary:

x = D_x x̃ = Diag{x_0} x̃ and b = D_b b̃ = Diag{b_0} b̃

Ax = A D_x x̃ = D_b b̃  ⇒  (D_b^{-1} A D_x) x̃ = b̃

The rescaled matrix Ã = D_b^{-1} A D_x should have better conditioning, but this is hard to achieve in general. Also note that reordering the variables from most important to least important may also help.
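A hedged sketch of one simple rescaling heuristic (the scaling choices are mine, not a general recipe): scale each equation by its largest coefficient, then each unknown by the largest remaining column entry:

A = [1e6 2; 3 4e-6];                 % badly scaled entries
Db = diag(max(abs(A), [], 2));       % row scales, playing the role of Diag{b_0}
Dx = diag(max(abs(Db\A), [], 1));    % column scales (the slide's D_x is inv(Dx) here)
At = (Db\A)/Dx;                      % rescaled matrix, cf. D_b^{-1} A D_x
[cond(A), cond(At)]                  % conditioning improves dramatically in this example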

Conclusions: Special Matrices in MATLAB

MATLAB recognizes (i.e., tests for) some special matrices automatically: banded, permuted lower/upper triangular, symmetric, Hessenberg, but not sparse. In MATLAB one may specify a matrix B of right-hand sides instead of a single right-hand-side vector b. The MATLAB function X = linsolve(A, B, opts) allows one to specify certain properties that speed up the solution (triangular, upper Hessenberg, symmetric, positive definite, none), and also estimates the condition number along the way. Use linsolve instead of backslash if you know (for sure!) something about your matrix.
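A brief linsolve illustration (the opts fields are documented MATLAB options; the matrix here is an arbitrary example of mine):

opts.LT = true;                    % promise that A is lower triangular
L = tril(randn(5)) + 5*eye(5);     % a well-conditioned lower triangular matrix
b = randn(5, 1);
x = linsolve(L, b, opts);          % skips structure tests, uses a triangular solve
norm(L*x - b)                      % residual near machine precision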

Conclusions/Summary

The conditioning of a linear system Ax = b is determined by the condition number

κ(A) = ‖A‖ ‖A^{-1}‖ ≥ 1.

Gauss elimination can be used to solve general square linear systems and also produces a factorization A = LU. Partial pivoting is often necessary to ensure numerical stability during GEM and leads to PA = LU or A = L̃U. For symmetric positive definite matrices the Cholesky factorization A = LL^T is preferred and does not require pivoting. MATLAB has excellent linear solvers based on well-known public domain libraries like LAPACK. Use them!