LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12, 2000


6 Summary

Here we summarize the most important information about theoretical and numerical linear algebra.

MORALS OF THE STORY:

I. Theoretically perfect algorithms can be very unstable numerically. Errors pop up unavoidably, and they can get amplified enormously, especially in real-life applications, where big matrices are considered.

II. Orthogonal matrices are the best for numerical stability (their condition number is 1). They should be used whenever possible.

Linear algebra has two fundamental problems:

(i) Solving Ax = b
(ii) Diagonalizing a matrix A.

6.1 Methods for Ax = b

6.1.1 Theoretical methods for Ax = b

The most common method is the Gauss elimination, which is equivalent to the LU decomposition.

Advantages of LU:
(i) Applicable to any matrix
(ii) Finds all solutions
(iii) Easy to program
(iv) Fast
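As a concrete illustration (a minimal sketch in Python/NumPy; the function name and the unpivoted variant are my own choices, not from the notes), Gauss elimination realized as an LU solve with forward and back substitution:

```python
import numpy as np

def lu_solve(A, b):
    """Solve Ax = b by Gauss elimination, i.e. an LU decomposition
    without pivoting (illustration only; unpivoted LU can be unstable)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    U = A
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # row operation on U
    # Forward substitution: Ly = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    # Back substitution: Ux = y
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = lu_solve(A, b)   # x solves Ax = b
```

Partial pivoting (Section 6.1.2) would permute rows at each step to keep the multipliers L[i, k] small.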

Disadvantages of LU:
(i) Can easily be unstable
(ii) Does not find approximate (least squares) solutions

A more advanced method is the QR-decomposition. Suppose first that we already have an exact QR-decomposition. Then Ax = b is reduced to Rx = Q^t b, which is easy to backsolve.

Advantages of QR over LU:
(i) If you have a QR-decomposition, then solving Ax = b via Rx = Q^t b is as well-conditioned as the original problem.
(ii) Finds least squares solutions as well (when no exact solution exists). When exact solutions exist, it finds all of them.

Disadvantages of QR over LU:
(i) It usually takes a more complicated algorithm to find the QR-decomposition than the LU.
(ii) It is slower than LU (mainly because it has to compute lots of norms, square roots, etc.)

But in general the QR-decomposition is superior to the LU-decomposition. This raises the question: how to find the QR-decomposition? Theoretically, the QR-decomposition is obtained via the Gram-Schmidt procedure.

Advantages of finding QR with Gram-Schmidt:
(i) Applicable to any matrix
(ii) Easy to program
(iii) Fast

Disadvantages of finding QR with Gram-Schmidt:
(i) Can be unstable, especially for matrices with (almost) linearly dependent columns.
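The Gram-Schmidt route to QR, and the backsolve Rx = Q^t b, can be sketched as follows (Python/NumPy; function names are mine, and this is the classical, numerically fragile variant described above):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt on the columns of A.
    Returns Q with orthonormal columns and upper triangular R, A = QR.
    Unstable for (almost) linearly dependent columns."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # projection coefficient
            v -= R[i, j] * Q[:, i]        # subtract the projection
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def qr_solve(A, b):
    """Least squares solution of Ax = b via Rx = Q^t b (back substitution)."""
    Q, R = gram_schmidt_qr(A)
    y = Q.T @ b
    n = R.shape[0]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

# Overdetermined system: no exact solution, QR gives the least squares one
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
x = qr_solve(A, b)
```

Note that for an m x n matrix with m > n this produces the "thin" factorization (Q is m x n), which is all the backsolve needs.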

6.1.2 Numerical methods for Ax = b

The LU-decomposition (= Gauss elimination) can be improved by partial pivoting.

Advantages of partial pivoting:
(i) Can improve the stability of LU quite a bit
(ii) Easy to program; it takes almost no extra effort and running time

Disadvantages of partial pivoting:
(i) It only reduces the instability; in many cases the result is still far from optimal stability.

Given the advantages of solving Ax = b with QR, it is of primary importance to find the QR-decomposition in a numerically stable way. The way to do it is via orthogonal matrices, i.e. by Householder reflections or Givens rotations. The performance of the two methods is similar: Givens is easier to program, but runs slower.

Advantages of Householder or Givens over Gram-Schmidt:
(i) They are stable.
(ii) They can also be applied to any matrix, like Gram-Schmidt (though we learned them only for square matrices).

Disadvantages of Householder or Givens over Gram-Schmidt: NONE

Finally, certain equations Ax = b can be solved iteratively. We learned the Jacobi and Gauss-Seidel iterations (only for square matrices, but there are versions for general matrices). In general Gauss-Seidel is better than Jacobi in speed, memory requirement and ease of programming.

Advantages of Jacobi or Gauss-Seidel over the exact methods (LU and QR):
(i) Stable (QR is also stable, LU is not)

(ii) They are fast for special matrices.

Disadvantages of Jacobi or Gauss-Seidel over the exact methods (LU and QR):
(i) For many matrices they do not converge. There are many tricks to improve this feature, but in general these methods are applicable only to special matrices, e.g. those with a strong diagonal. But in the numerical solution of differential equations such matrices arise very often.
(ii) Even if they converge, they find only one solution. They usually perform badly for degenerate matrices (we learned them only for regular square matrices, but again, there are generalizations).

6.2 Methods for diagonalizing a square matrix A

6.2.1 Theoretical methods for diagonalizing A, finding eigenvalues and eigenvectors

Remember that in general there is no exact method, only iterative ones. The basic strategy is first finding the characteristic polynomial p(λ) (by computing a determinant), then finding the roots of p(λ) (usually by Newton's method; this is the iterative part of the game), and finally finding the eigenvectors by solving several linear systems (one for each eigenvalue). This last problem is the one discussed in the previous section.

Advantages of diagonalizing via the characteristic polynomial:
(i) Applicable to any diagonalizable matrix
(ii) Conceptually clear
(iii) Newton's method is stable, and there are methods in the previous section to solve (A - λ)x = 0 in a stable way.

Disadvantages of diagonalizing via the characteristic polynomial:
(i) Computing large determinants (especially with a variable symbol λ) can be unstable. We did not discuss this issue.

(ii) Newton's method works only if there are some a priori guesses for the roots. In addition, it requires modifications to deal with complex eigenvalues.
(iii) The program is quite complicated altogether; it has many steps and takes a long time (you have to solve n linear systems at the end).

6.2.2 Numerical methods for diagonalizing A, finding eigenvalues and eigenvectors

The power method:

Advantages of the power method:
(i) Fast and simple for individual eigenvalue-eigenvector pairs.
(ii) Can be modified (shift, inversion) to find all eigenvalues (including complex ones), but needs some guess on the location of the eigenvalues (as in Newton's method).
(iii) Stable.

Disadvantages of the power method:
(i) There are complications for multiple eigenvalues.
(ii) Needs some initial guess on the eigenvalues.
(iii) One has to find each eigenvalue-eigenvector pair separately, i.e. it is not very fast for full diagonalization.

The Jacobi iteration:

Advantages of the Jacobi iteration:
(i) Fully diagonalizes a symmetric matrix in one iteration
(ii) Easy to program
(iii) Stable.

Disadvantages of the Jacobi iteration:

(i) Works only for symmetric matrices (but this is what one needs for the SVD)
(ii) Slow

The QR-iteration:

This is the best and most commonly used method for eigenvalues (with its several variants and improvements).

Advantages of the QR-iteration:
(i) Gives the eigenvalues for general matrices, and also the eigenvectors for symmetric ones.
(ii) Gives everything in one iteration
(iii) Faster than Jacobi, if one has a good QR-decomposition algorithm
(iv) Stable

Disadvantages of the QR-iteration:
(i) Cannot deal with complex or multiple eigenvalues unless modified. Modifications exist, but they are not so simple.
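The basic unshifted QR-iteration is only a few lines (a Python/NumPy sketch under the simplest assumptions: real symmetric input with eigenvalues of distinct absolute value; practical codes add shifts and a preliminary Hessenberg reduction, neither of which is shown here):

```python
import numpy as np

def qr_iteration(A, steps=200):
    """Unshifted QR-iteration: factor A_k = Q_k R_k, set A_{k+1} = R_k Q_k.
    Each step is a similarity transform (R_k Q_k = Q_k^t A_k Q_k), so the
    eigenvalues are preserved; for a symmetric matrix the iterates tend to
    a diagonal matrix, and the accumulated Q's give the eigenvectors."""
    A = A.astype(float).copy()
    V = np.eye(A.shape[0])         # accumulates eigenvectors (symmetric case)
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q                  # similarity transform of A
        V = V @ Q
    return np.diag(A), V

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = qr_iteration(A)   # eigenvalues of A are 3 and 1
```

The off-diagonal entries shrink like |λ2/λ1|^k per step, which is why the shifted variants mentioned above matter in practice.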