MATH2071: LAB #5: Norms, Errors and Condition Numbers


1 Introduction

  Introduction                   Exercise 1
  Vector Norms                   Exercise 2
  Matrix Norms                   Exercise 3
  Compatible Matrix Norms        Exercise 4
  More on the Spectral Radius    Exercise 5
  Types of Errors                Exercise 6
  Condition Numbers              Exercise 7

The objects we work with in linear systems are vectors and matrices. In order to make statements about the size of these objects, and the errors we make in solutions, we want to be able to describe the sizes of vectors and matrices, which we do by using norms. We then need to consider whether we can bound the size of the product of a matrix and vector, given that we know the size of the two factors. In order for this to happen, we will need to use matrix and vector norms that are compatible. These kinds of bounds will become very important in error analysis.

We will then consider the notions of forward error and backward error in a linear algebra computation. From the definitions of norms and errors, we can define the condition number of a matrix, which will give us an objective way of measuring how bad a matrix is, and how many digits of accuracy we can expect when solving a particular linear system.

This lab will take two sessions. You may find it convenient to print the pdf version of this lab rather than the web page itself.

2 Vector Norms

A vector norm assigns a size to a vector, in such a way that scalar multiples do what we expect, and the triangle inequality is satisfied. There are three common vector norms in n dimensions:

  The L_1 vector norm:                \|x\|_1 = \sum_{i=1}^n |x_i|

  The L_2 (or Euclidean) vector norm: \|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}

  The L_\infty vector norm:           \|x\|_\infty = \max_{i=1,\dots,n} |x_i|

To compute the norm of a vector x in Matlab:

  \|x\|_1      = norm(x,1);
  \|x\|_2      = norm(x,2) = norm(x);
  \|x\|_\infty = norm(x,inf)

(Recall that inf is the Matlab symbol corresponding to \infty.)
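To see that these commands really do match the defining formulas, here is a minimal sketch; the vector x is an arbitrary illustrative choice, not part of any exercise:

  % compare Matlab's norm command against the three defining formulas
  x = [ 1; -2; 3 ];
  [ norm(x,1),   sum(abs(x))      ]   % L1: sum of absolute values
  [ norm(x,2),   sqrt(sum(x.^2))  ]   % L2: Euclidean length
  [ norm(x,inf), max(abs(x))      ]   % L-infinity: largest magnitude entry

Each pair of numbers printed should agree.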

Exercise 1: For each of the following vectors:

  x1 = [ 1; 2; 3 ]
  x2 = [ 1; 0; 0 ]
  x3 = [ 1; 1; 1 ]

compute the vector norms, using the appropriate Matlab commands. Be sure your answers are reasonable.

        L1      L2      L Infinity
  x1    ___     ___     ___
  x2    ___     ___     ___
  x3    ___     ___     ___

3 Matrix Norms

A matrix norm assigns a size to a matrix, again, in such a way that scalar multiples do what we expect, and the triangle inequality is satisfied. However, what's more important is that we want to be able to mix matrix and vector norms in various computations. So we are going to be very interested in whether a matrix norm is compatible with a particular vector norm, that is, when it is safe to say:

  \|Ax\| \le \|A\| \|x\|

There are five common matrix norms:

  The L_1 or "max column sum" matrix norm:

    \|A\|_1 = \max_{j=1,\dots,n} \sum_{i=1}^n |A_{i,j}|

  The L_2 matrix norm:

    \|A\|_2 = \max_{i=1,\dots,n} \sqrt{\lambda_i}

  where \lambda_i is a (necessarily real) eigenvalue of A^T A, or

    \|A\|_2 = \max_{i=1,\dots,n} \mu_i

  where \mu_i is a singular value of A;

  The L_\infty or "max row sum" matrix norm:

    \|A\|_\infty = \max_{i=1,\dots,n} \sum_{j=1}^n |A_{i,j}|

  The Frobenius matrix norm:

    \|A\|_F = \sqrt{\sum_{i,j=1,\dots,n} |A_{i,j}|^2}

  The spectral matrix norm:

    \|A\|_{spec} = \max_{i=1,\dots,n} |\lambda_i|

  (only defined for a square matrix), where \lambda_i is a (possibly complex) eigenvalue of A.
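The following sketch compares Matlab's built-in matrix norms with these definitions; the matrix A here is an arbitrary example, not one from the exercises:

  % compare built-in matrix norms against the defining formulas
  A = [ 4 1 1; 0 -2 2; 0 5 -4 ];
  [ norm(A,1),     max(sum(abs(A),1))      ]   % max column sum
  [ norm(A,inf),   max(sum(abs(A),2))      ]   % max row sum
  [ norm(A,'fro'), sqrt(sum(abs(A(:)).^2)) ]   % Frobenius
  [ norm(A,2),     max(svd(A))             ]   % largest singular value
  A_spec = max(abs(eig(A)))                    % spectral norm (spectral radius)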

To compute the norm of a matrix A in Matlab:

  \|A\|_1      = norm(A,1);
  \|A\|_2      = norm(A,2) = norm(A);
  \|A\|_\infty = norm(A,inf);
  \|A\|_F      = norm(A,'fro')
  \|A\|_{spec} = (see below)

4 Compatible Matrix Norms

One way to define a matrix norm is to do so in terms of a particular vector norm. We use the formula:

  \|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|}

(It would be more precise to use "sup" rather than "max" here, as Atkinson does, but the surface of a sphere is a compact set, so the supremum is attained, and the maximum is correct.) A matrix norm defined in this way is said to be vector-bound to the given vector norm.

The most interesting and useful property a matrix norm can have is that we can use it to bound certain expressions involving a matrix-vector product. We want to be able to say the following:

  \|Ax\| \le \|A\| \|x\|    (1)

but this expression is not true for an arbitrarily chosen pair of matrix and vector norms. If it is true, then the two are compatible.

If a matrix norm is vector-bound to a particular vector norm, then the two norms are guaranteed to be compatible. Thus, for any vector norm, there is always at least one matrix norm that we can use. But that vector-bound matrix norm is not always the only choice. In particular, the L_2 matrix norm is difficult (expensive) to compute, but there is a simple alternative. Note that:

  The L_1, L_2 and L_\infty matrix norms can be shown to be vector-bound to the corresponding vector norms and hence are guaranteed to be compatible with them;

  The Frobenius matrix norm is not vector-bound to the L_2 vector norm, but is compatible with it; the Frobenius norm is much easier to compute than the L_2 matrix norm.

  The spectral matrix norm is not vector-bound to any vector norm, but it "almost" is. This norm is useful because we often want to think about the behavior of a matrix as being determined by its largest eigenvalue, and it often is. But there is no vector norm for which it is always true that

    \|Ax\| \le \|A\|_{spec} \|x\|

Exercise 2: Consider each of the following column vectors:

  x1 = [ 1, 2, 3 ]'
  x2 = [ 1, 0, 0 ]'
  x3 = [ 1, 1, 1 ]'

For the same matrix A you used above:
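The vector-bound definition can be illustrated numerically: sampling many vectors x and recording the largest ratio \|Ax\|/\|x\| seen approaches norm(A) from below. A sketch, with the same arbitrary example matrix as before:

  % estimate the vector-bound L2 matrix norm by random sampling
  A = [ 4 1 1; 0 -2 2; 0 5 -4 ];
  r = 0;
  for trial = 1:10000
    x = randn(3,1);
    r = max(r, norm(A*x)/norm(x));   % best ratio seen so far
  end
  [ r, norm(A,2) ]   % r approaches, but never exceeds, norm(A,2)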

  A=[ ]

verify that the compatibility condition holds by comparing the values of \|A\| that you computed in the previous exercise with the ratios \|Ax\| / \|x\|. The final column refers to satisfaction of the compatibility relationship (1).

  Matrix   Vector   norm(A*x1)   norm(A*x2)   norm(A*x3)   norm(A)   OK?
  norm     norm     ----------   ----------   ----------
                    norm(x1)     norm(x2)     norm(x3)
  1        1
  2        2
  fro      2
  inf      inf

5 More on the Spectral Radius

In the text above there is a statement that the spectral radius is not vector-bound to any norm, but that it "almost" is. In this section we will see by example what this means. In the first place, let's try to see why it isn't vector-bound to the L_2 norm.

Consider the following false proof. Given a matrix A, for any vector x, break it into a sum of eigenvectors of A as

  x = \sum_i x_i e_i

where e_i are the eigenvectors of A, normalized to unit length. Then, for \lambda_i the eigenvalues of A,

  \|Ax\|_2^2 = \Big\| \sum_i \lambda_i x_i e_i \Big\|_2^2 \le \max_i |\lambda_i|^2 \sum_i |x_i|^2 = \|A\|_{spec}^2 \|x\|_2^2

and taking square roots completes the "proof."

Why is this proof false? The error is in the statement that all vectors can be expanded as sums of orthonormal eigenvectors. If you knew, for example, that A were positive definite and symmetric, then the eigenvectors of A would form an orthonormal basis and the proof would be correct. Hence the term "almost." On the other hand, this observation provides the counterexample.

Exercise 3: Consider the following matrix

  A=[ ]

You should recognize this as a Jordan block. Any matrix can be decomposed into several such blocks by a change of basis.
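Here is a sketch of the counterexample idea with a small Jordan block; the size (3) and the eigenvalue (0.5) are illustrative choices, not the matrix from the exercise:

  % a 3x3 Jordan block: single eigenvalue 0.5, ones on the superdiagonal
  A = [ 0.5 1 0; 0 0.5 1; 0 0 0.5 ];
  A_spec = max(abs(eig(A)));      % spectral norm = 0.5
  x = [ 1; 1; 1 ];
  [ norm(A*x), A_spec*norm(x) ]   % the first entry exceeds the second,
                                  % so equation (1) fails for the spectral norm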

(a) Use the Matlab routine [V,D]=eig(A) to get the eigenvalues (D) and eigenvectors (V) of A. How many linearly independent eigenvectors are there?

(b) You just computed the eigenvalues of A. What is the spectral norm of A?

(c) For the vector x=[1;1;1;1;1;1;1], what are \|x\|_2, \|Ax\|_2, and \|A\|_{spec} \|x\|_2?

(d) Is Equation (1) satisfied?

It is a well-known fact that if the spectral radius of a matrix A is smaller than 1.0 then \lim_{n \to \infty} A^n = 0.

Exercise 4:

(a) For x=[1;1;1;1;1;1;1] compute and plot \|A^k x\|_2 for k = 0, 1, ..., 40. Please include this plot with your summary.

(b) What is the smallest value of k > 2 for which \|A^k x\|_2 \le \|Ax\|_2?

(c) What is \max_{0 \le k \le 40} \|A^k x\|_2?

6 Types of Errors

A natural assumption to make is that the term "error" refers always to the difference between the computed and exact answers. We are going to have to discuss several kinds of error, so let's refer to this first error as solution error or forward error. Suppose we want to solve a linear system of the form Ax = b (with exact solution x) and we computed xapprox. We define the solution error as \|xapprox - x\|.

Sometimes the solution error is not possible to compute, and we would like a substitute whose behavior is acceptably close to that of the solution error. One example would be during an iterative solution process, where we would like to monitor the progress of the iteration but we do not yet have the solution and cannot compute the solution error. In this case, we are interested in the residual error or backward error, which is defined by

  \|A xapprox - b\| = \|bapprox - b\|

where, for convenience, we have defined the variable bapprox to equal A*xapprox. Another way of looking at the residual error is to see that it's telling us the difference between the right-hand side that would work for xapprox versus the right-hand side we have. If we think of the right-hand side as being a target, and our solution procedure as determining how we should aim an arrow so that we hit this target, then

  The solution error is telling us how badly we aimed our arrow;

  The residual error is telling us how much we would have to move the target in order for our badly aimed arrow to be a bullseye.

There are problems for which the solution error is huge and the residual error tiny, and all the other possible combinations also occur.

Exercise 5: For each of the following cases, compute the Euclidean norm (two norm) of the solution error and residual error and characterize each as large or small. (For the purpose of this exercise, take "large" to mean greater than 1 and "small" to mean smaller than 0.01.) In each case, you must find the true solution first (pay attention! in each case you can guess the true solution, xtrue), then compare it with the approximate solution xapprox.

(a) A=[1,1;1,(1-1.e-12)], b=[0;0], xapprox=[1;-1]
(b) A=[1,1;1,(1-1.e-12)], b=[1;1], xapprox=[ ;0]
(c) A=[1,1;1,(1-1.e-12)], b=[1;1], xapprox=[100;100]
(d) A=[1.e+12,-1.e+12;1,1], b=[0;2], xapprox=[1.001;1]
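A small sketch of how both errors are computed, using case (a) as the example; the value of xtrue is deliberately left for you to deduce:

  A = [1,1;1,(1-1.e-12)]; b = [0;0]; xapprox = [1;-1];
  % xtrue = ... ;                            % fill in your guessed true solution
  % solution_error = norm(xapprox - xtrue)   % forward error (needs xtrue)
  residual_error = norm(A*xapprox - b)       % backward error (no xtrue needed)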

  Case   Residual   large/small?   xtrue   Error   large/small?
  (a)
  (b)
  (c)
  (d)

The norms of the matrix and its inverse exert some limits on the relationship between the forward and backward errors. Assuming we have compatible norms:

  \|xapprox - x\| = \|A^{-1} A (xapprox - x)\| \le \|A^{-1}\| \|bapprox - b\|

and

  \|A xapprox - b\| = \|A xapprox - A x\| \le \|A\| \|xapprox - x\|

Put another way,

  (solution error) \le \|A^{-1}\| (residual error)
  (residual error) \le \|A\| (solution error)

Often, it's useful to consider the size of an error relative to the true quantity. This quantity is sometimes multiplied by 100 and expressed as a percentage. If the true solution is x and we computed xapprox = x + \Delta x, the relative solution error is defined as

  (relative solution error) = \|xapprox - x\| / \|x\| = \|\Delta x\| / \|x\|

Given the computed solution xapprox, we know that it satisfies the equation A xapprox = bapprox. If we write bapprox = b + \Delta b, then we can define the relative residual error as:

  (relative residual error) = \|bapprox - b\| / \|b\| = \|\Delta b\| / \|b\|

These quantities depend on the vector norm used; they cannot be defined in cases where the divisor is zero, and they are problematic when the divisor is small.

Actually, relative quantities are important beyond being easier to understand. Consider the boundary value problem

  -u'' = (\pi^2 / 4) \sin(\pi x / 2)
  u(0) = 0                                (2)
  u(1) = 1

This problem has the exact solution u = \sin(\pi x / 2). In the following exercise you will be computing the solution for various mesh sizes and using vector norms to compute the solution error. You will see why relative errors are most convenient for error computation.

Exercise 6:
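Before doing the exercise, here is a quick numerical check of the two bounds above, on an arbitrary well-conditioned system (all values illustrative):

  A = [2 1; 1 3]; b = [1; 2];
  x = A\b;                               % "exact" solution
  xapprox = x + 1e-6*[1; -1];            % a deliberately perturbed solution
  solution_error = norm(xapprox - x);
  residual_error = norm(A*xapprox - b);
  [ solution_error, norm(inv(A))*residual_error ]   % first <= second
  [ residual_error, norm(A)*solution_error ]        % first <= second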

(a) Make a copy of your rope_bvp.m file from last lab, or download a copy of mine. Change its name to bvp.m and modify it so that it corresponds to the boundary value problem (2) above. Do not forget to change the comments.

(b) As a test, solve the system for npts=10, plot the solution, and compare it with sin(pi*x/2). They should be quite close. You do not need to send me the plot.

(c) Use code such as the following to compute solution errors for various mesh sizes, and fill in the following table. Recall that the exact solution is \sin(\pi x / 2). (Note: you may want to put this code into a script m-file.)

  sizes=[ 10 20 40 80 160 320 640 ];
  for k=1:7
    [x,y]=bvp(sizes(k));
    error(k,1)=norm(y-sin(pi*x/2));
    relative_error(k,1)=error(k,1)/norm(sin(pi*x/2));
  end
  disp([error(1:6)./error(2:7),...
        relative_error(1:6)./relative_error(2:7)])

  Euclidean norm    error ratio    relative error ratio
   10/ 20
   20/ 40
   40/ 80
   80/160
  160/320
  320/640

(d) Repeat the above for the L_1 vector norm.

  L_1 vector norm   error ratio    relative error ratio
   10/ 20
   20/ 40
   40/ 80
   80/160
  160/320
  320/640

(e) Repeat the above for the L_\infty vector norm.

  Infinity vector norm   error ratio   relative error ratio
   10/ 20
   20/ 40
   40/ 80
   80/160
  160/320
  320/640

(f) The method used to solve this boundary value problem converges as O(h^2). Which of the above calculations yields this rate? (See the note after this exercise.)
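A note on part (f): for a method converging as O(h^2), doubling npts halves h, so the error should drop by a factor of about 4 between successive rows; equivalently, the observed order is the base-2 logarithm of the tabulated ratio. A one-line sketch, assuming the relative_error vector from the script above:

  % observed convergence order; this approaches 2 for a quantity
  % that truly converges as O(h^2)
  order = log2(relative_error(1:6)./relative_error(2:7))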

7 Condition Numbers

Given a square matrix A, the L_2 condition number k_2(A) is defined as:

  k_2(A) = \|A\|_2 \|A^{-1}\|_2    (3)

if the inverse of A exists. If the inverse does not exist, then we say that the condition number is infinite. Similar definitions apply for k_1(A) and k_\infty(A).

Matlab provides three functions for computing condition numbers: cond, condest, and rcond. cond computes the condition number according to Equation (3), and can use the one norm, the two norm, the infinity norm or the Frobenius norm. This calculation can be expensive, but it is accurate. condest computes an estimate of the condition number using the one norm. rcond uses a different method to estimate the reciprocal of the condition number, defined as

  rcond(A) = 1/k_1(A)   if A is nonsingular
           = 0          if A is singular

So as a matrix goes singular, rcond(A) goes to zero in a way similar to the determinant.

We won't worry about the fact that the condition number is somewhat expensive to compute, since it requires computing the inverse or (possibly) the singular value decomposition (a topic to be studied later). Instead, we'll concentrate on what it's good for. We already know that it's supposed to give us some idea of how nearly singular the matrix is. But its real role is in error estimation for the linear system problem.

We suppose that we are really interested in solving the linear system Ax = b, but that the right-hand side we give to the computer has a small error or perturbation in it. We might denote this perturbed right-hand side as b + \Delta b. We can then assume that our solution will be slightly perturbed, so that we are justified in writing the system as

  A(x + \Delta x) = b + \Delta b

The question is: if \Delta b is really small, can we expect that \Delta x is small? Can we actually guarantee such a limit? If we are content to look at the relative errors, and if the norm used to define k(A) is compatible with the vector norm used, it is fairly easy to show (Atkinson, Section 8.4) that:

  \|\Delta x\| / \|x\| \le k(A) \|\Delta b\| / \|b\|

You can see that we would like the condition number to be as small as possible. (It turns out that the condition number is bounded below. What is the smallest possible value of the condition number?) In particular, since we have about 14 digits of accuracy in Matlab, if a matrix has a condition number of 10^14, or rcond(A) of 10^-14, then an error in the last significant digit of any element of the right-hand side has the potential to make all of the digits of the solution wrong!

In the exercises below we will have occasion to use a special matrix called the Frank matrix. Row i of the n x n Frank matrix has the formula:

  A_{i,j} = 0        for j < i-1
  A_{i,j} = n+1-i    for j = i-1
  A_{i,j} = n+1-j    for j >= i

The Frank matrix for n = 5 looks like:

  5 4 3 2 1
  4 4 3 2 1
  0 3 3 2 1
  0 0 2 2 1
  0 0 0 1 1
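The lab refers to an m-file frank.m; in case you need one, here is a minimal sketch built directly from the formula above (my reconstruction, not necessarily identical to the distributed file):

  function A = frank(n)
  % FRANK  n x n Frank matrix, assembled from the row formula above.
  A = zeros(n,n);
  for i = 1:n
    for j = 1:n
      if j == i-1
        A(i,j) = n+1-i;     % first subdiagonal
      elseif j >= i
        A(i,j) = n+1-j;     % diagonal and above
      end                   % entries below the subdiagonal remain zero
    end
  end

You can check it against the n = 5 example above, or against gallery('frank',5).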

The determinant of the Frank matrix is 1, but it is difficult to compute numerically. This matrix has a special form called Hessenberg form, wherein all elements below the first subdiagonal are zero. Matlab provides the Frank matrix in its gallery of matrices as gallery('frank',n), but we will use an m-file frank.m. You can find more information about the Frank matrix from the Matrix Market, and the references therein.

Exercise 7: To see how the condition number can warn you about loss of accuracy, let's try solving the problem Ax = b, for x=ones(n,1), and with A being the Frank matrix. Compute

  b=A*x;
  xsolved=A\b;
  difference=xsolved-x;

Of course, xsolved would be the same as x if there were no arithmetic rounding errors, but there are rounding errors, so difference is not zero. This loss of accuracy is the point of the exercise.

For convenience, let's use Matlab's cond(A) as our estimate of the condition number, and assume that the relative error in b is eps, the machine precision. (Recall that Matlab understands the name eps to indicate the smallest number you can add to 1 and get a result different from 1.) We expect the relative solution error to be bounded by eps*cond(A). Since cond uses the Euclidean norm by default, use the Euclidean norm in constructing the table.

  Matrix Size    cond(A)    eps*cond(A)    norm(difference)/norm(x)

The eps*cond(A) and norm(difference)/norm(x) columns should be roughly comparable in size (within a factor of 1000 or so).
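A sketch of the whole experiment as a loop; the list of matrix sizes is a placeholder, so substitute whatever sizes your table calls for:

  % hypothetical driver for Exercise 7 (the sizes shown are illustrative)
  for n = [ 5 10 15 20 ]
    A = frank(n);
    x = ones(n,1);
    b = A*x;
    xsolved = A\b;
    difference = xsolved - x;
    fprintf('%4d  %10.2e  %10.2e  %10.2e\n', ...
            n, cond(A), eps*cond(A), norm(difference)/norm(x));
  end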
