Part IB - Easter Term 2003 Numerical Analysis I

1. Course description

Here is an approximate content of the course:

1. LU factorization. Introduction. Gaussian elimination. LU factorization. Pivoting.
2. Existence and uniqueness of LU factorization. Symmetric positive definite matrices. Cholesky factorization. Band matrices.
3. QR factorization. Orthogonal matrices. QR factorization. The Gram-Schmidt algorithm.
4. Givens rotations. Householder reflections.
5. Iterative methods. Matrix and vector norms. Basic iterative schemes: Jacobi, Gauss-Seidel, simple iteration.
6. Necessary and sufficient conditions for convergence.
7. Linear least squares. Normal equations. QR and least squares.
8. Orthogonal polynomials. Orthogonal systems. Three-term recurrence relation.
9. Polynomial interpolation. Lagrange formula. Divided differences. Newton formula.
10. Error bounds for polynomial interpolation. Error bound for Lagrange interpolation. Chebyshev polynomials.
11. Approximation of linear functionals. Numerical integration. Gaussian quadrature. Numerical differentiation.
12. The Peano kernel theorem and its application.

2. Lecture Notes on the Web

I will try to put every handout on the website of the Numerical Analysis Group a few days before the corresponding lecture.

2 Numerical Analysis Lecture 1 1 LU factorization 1.1 Introduction Problem 1.1 The basic problem of numerical analysis is: solve (1.1) The formula gives an opportunity to solve (1.1) explicitly by Cramer s rule with the small detail that it costs multiplications. Given a computer with flops/sec, this way of solving an system will take a), sec, b), min, c), years. Thus we have to look at methods from a constructive and practical point of view. Example 1.2 (Triangular matrices) An upper triangular matrix has if. Then. Also we can solve by so-called back substitution Similarly, a lower triangular matrix (s.t. if ) allows us to solve directly by the forward substitution. Hence, if we manage to factorize as then solution of the system can be split into two cases (1.2) both being solved sufficiently easily, as we have seen. Example 1.3 (Orthogonal matrices) An orthogonal matrix satisfies, i.e.,, so that a solution to is simply. If where is orthogonal and is upper triangular, then again one can solve the system in two simple steps 1.2 Gaussian elimination A familiar method for solving a system. 1

or, in matrix form, is Gaussian elimination. If, then we can eliminate the first unknown from the second and subsequent equations by subtracting equation from equation. With, this is equivalent to premultiplication of and by the lower triangular, resulting in the equivalent system. If, then we can eliminate the second unknown from the bottom equations, and so on. With, the procedure will end with an upper triangular. So, with, the resulting system can be easily solved by back substitution.

1.3 LU factorization

It is possible to visualize the elimination process as deriving a factorization of into two factors. Since, we obtain.

Lemma 1.4 We have.

Proof. Notice that if. This implies, so that, and then, by induction, say,.

Definition 1.5 By the LU factorization of a (nonsingular) matrix we understand a representation of as a product where is a lower triangular matrix with unit diagonal and is a nonsingular upper triangular matrix.

Remark 1.6 Nonsingularity of is a natural requirement in all practical applications. Theoretically it is not necessary; see Question 1 on Examples Sheet 1.

Algorithm 1.7 The previous lemma allows computing the LU factorization as follows.

for k=1 to n-1 do
  for i=k+1 to n do
    a(i,k) := a(i,k)/a(k,k);            #-- l(i,k) := a(i,k)/a(k,k);
    for m=k+1 to n do
      a(i,m) := a(i,m) - a(i,k)*a(k,m)  #-- a(i,m) := a(i,m) - l(i,k)*a(k,m)
    end
  end
end

Notice that we can use the matrix itself to store the elements of and, in the subdiagonal and in the upper triangular portions of, respectively.

Example 1.8

Application 1.9 One can use the LU factorization in the following problems. 1. Solution of linear systems (see (1.2)). 2. Calculation of the determinant:. 3. Calculation of the inverse of :.

The cost. It is only multiplications and divisions that matter. By the -th step of the LU algorithm we perform divisions and multiplications. Hence. In forward substitution we use multiplications/divisions to determine, thus.
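The in-place elimination of Algorithm 1.7 translates almost line for line into NumPy. The following is a minimal sketch of my own (not part of the handout); it assumes the matrix is strictly regular so that no pivoting is needed, and it overwrites a copy of A with the multipliers of L below the diagonal and with U on and above it.

import numpy as np

def lu_in_place(A):
    """LU factorization without pivoting; assumes all pivots are nonzero."""
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]                                # multipliers l(i,k)
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])    # update trailing block
    L = np.tril(A, -1) + np.eye(n)   # unit lower triangular factor
    U = np.triu(A)                   # upper triangular factor
    return L, U

For example, with A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]]), the call L, U = lu_in_place(A) satisfies np.allclose(L @ U, A).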

Remark 1.10 The only, yet important, difference between LU and Gaussian elimination is that we do not consider the right-hand side until the factorization is complete. This is useful, e.g., when there are many right-hand sides, in particular if not all the s are known at the outset: in Gaussian elimination the solution for each new would require computational operations, whereas with LU factorization operations are required for the initial factorization, but then the solution for each new only requires operations.

Method 1.11 Another version of the LU factorization is based on the direct approach. If, and and are the columns and the rows of the matrices and respectively, then. Since the leading elements of and are zero for, it follows that the first columns and rows of the matrix are zeros as well. Hence, which is the 1-st row of the matrix, coincides with the 1-st row of, while coincides with the 1-st column of. Subtracting and letting, we obtain similarly that and are the 2-nd row and the 2-nd column of respectively, and so on.

1.4 Pivoting

The Gaussian elimination algorithm fails if. A remedy is to exchange rows of by picking a suitable pivotal equation, i.e., the equation which is used to eliminate one unknown from certain of the other equations. This technique is called column pivoting, and it means that, having obtained, we exchange two rows of so that the element of largest magnitude in the -th column is in the pivotal position. In other words,. The exchange of rows and can be regarded as the pre-multiplication of the relevant matrix by a permutation matrix. The same exchange is required in the portion of that has been formed already (i.e., in the first columns). Also, we need to record the permutation of rows to solve for the right-hand side and/or to compute the determinant.

Example 1.12

An important advantage of column pivoting is that every element of has magnitude at most one. This avoids not just division by zero (which is not the case in the previous example) but also tends to reduce the chance of very large numbers occurring during the factorization, a phenomenon that might lead to ill conditioning and to accumulation of round-off error.

6 1.5 Examples Example 1.13 (LU factorization) i.e. Example 1.14 (Forward and back substitutions) Then the solution to the system proceeds in two steps 1) Forward substitution 2) Back substitution Example 1.15 (LU factorization with pivoting) i.e. 5
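Example 1.14 relies on the two triangular solves of (1.2). A short NumPy sketch of my own (assuming L is lower triangular with nonzero diagonal and U is nonsingular upper triangular) is:

import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower triangular L."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve U x = y for nonsingular upper triangular U."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

After computing L, U = lu_in_place(A), the system is solved by x = back_substitution(U, forward_substitution(L, b)).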

7 1.6 Exercises Exercises with a -mark are not subject to supervision! They are just for fun! 1.1. Calculate all LU factorizations of the matrix where all diagonal elements of are one. By using one of these factorizations, find all solutions of the equation where By using column pivoting if necessary to exchange rows of, an LU factorization of a real matrix is calculated, where has ones on its diagonal, and where the moduli of the offdiagonal elements of do not exceed one. Let be the largest of the moduli of the elements of. Prove by induction on that elements of satisfy the condition. Then construct and nonzero matrices that yield and respectively Let Find the LU factorization of and use it to solve the system Prove that every nonsingular matrix admits an LU factorization with pivoting, i.e. for every nonsingular matrix there exists a permutation matrix such that 6

8 Numerical Analysis Lecture 2 2 LU factorization (cont.) 2.1 Existence and uniqueness of the LU factorization Definition 2.1 A square matrix is called strictly regular if all its leading submatrices ( itself as well) are nonsingular. Theorem 2.2 (Existence) A matrix admits a factorization it is strictly regular. Proof. This part follows from the scheme Since, are nonsingular, so is. The LU factorization by Gaussian elimination runs according to the following scheme so that with, after steps we obtain The partition as above gives whence Here is a upper triangular matrix (which in fact coincides with ), and we can proceed further with the algorithm if, i.e., if. But is lower triangular with unit diagonal and is nonsingular by assumption. Hence is nonsingular and we can perform the next step. Theorem 2.3 (Uniqueness) The LU factorization is unique, i.e. implies Proof. The equality implies, say). The inverse of a lower triangular matrix and the product of such matrices are lower triangular matrices. The same is true for upper triangular matrices. Consequently, is simultaneously lower and upper triangular, hence it is diagonal. Since has unit diagonal, we obtain. Corollary 2.4 A strictly regular matrix has a unique factorization where both and have unit diagonal. Corollary 2.5 A strictly regular symmetric matrix admits the unique representation. Proof. By strict regularity,, and by symmetry, Since the LDU factorization is unique, 1

2.2 Symmetric positive definite and diagonally dominant matrices

Here we consider two important types of matrices which are strictly regular, hence possess an LU factorization.

Definition 2.6 A matrix is called (symmetric) positive definite (SPD-matrix) if for all and.

Theorem 2.7 Let be a real symmetric matrix. It is positive definite if and only if it has the factorization in which the diagonal elements of are all positive.

Proof. Suppose that with, and let. Since is nonsingular,. Then, hence is positive definite. Conversely, if is (symmetric) positive definite, then it is strictly regular. (For if for some and some, then for we obtain, a contradiction to positive definiteness.) Thus, by Corollary 2.5 it admits an LDL factorization. Take such that. Then.

Method 2.8 One can check whether a symmetric matrix is positive definite by trying to form its LDL factorization. (The only application of this method is during examination hours.)

Example 2.9 The matrix below is positive definite.

Corollary 2.10 (Cholesky factorization) A positive definite matrix admits the Cholesky factorization, where is a lower triangular matrix.

Proof. Since with positive diagonal, we can write.

Definition 2.11 A matrix is called strictly diagonally dominant (by rows) if for all, e.g..

Theorem 2.12 If is strictly diagonally dominant (by rows), then it is strictly regular, hence the LU factorization exists.

Proof. Take any and let be its largest absolute value component, i.e.,. Then the strict diagonal dominance gives for the -th component the value. Hence for any, i.e., is nonsingular. The leading submatrices of a strictly diagonally dominant matrix are (even more) strictly diagonally dominant, hence the strict regularity of.
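Corollary 2.10 can be turned directly into an algorithm. The sketch below is my own illustration, not from the handout; it computes the lower triangular Cholesky factor of a symmetric positive definite matrix, and it raises an error on a non-positive pivot, which is exactly the check suggested in Method 2.8.

import numpy as np

def cholesky(A):
    """Return lower triangular L with A = L L^T for symmetric positive definite A."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]   # pivot after subtracting earlier columns
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j, j] = np.sqrt(d)
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

For an SPD matrix A, np.allclose(cholesky(A) @ cholesky(A).T, A) holds.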

10 2.3 Sparse and band matrices Definition 2.13 A matrix is called a sparse matrix if nearly all elements of are zero. Most useful examples are banded matrices and block banded matrices Definition 2.14 A matrix is called a band matrix if there exists an integer such that for all In other words, all nonzero elements of reside in a band of width along the main diagonal. It is frequently required to solve very large systems with sparse matrices ( is considered small in this context!). The LU factorization of such makes sense only if and inherit much of the sparsity of (so that the cost of computing, say, is comparable with that of ). To this end the following theorem is useful. Theorem 2.15 Let be the LU factorization (without pivoting) of a sparse matrix. Then all leading zeros in the rows of to the left of diagonal are inherited by and all leading zeros in the columns of above the diagonal are inherited by. Proof. Let be the leading zero in the -th row. Then, since, we obtain and so on. Similarly for the leading zeros in the -th column. Since, it follows that and so on. Corollary 2.16 If is a band matrix and, then for all, i.e. and are the band matrices with the same band width as. The cost (Exercise 2.3). In the case of a banded, we need just 1) operations to factorize and 2) operations to solve a linear system. This must be compared with and operations respectively for an dense matrix. If this represents a very substantial saving! Method 2.17 Theorem 2.15 suggests that for a factorization of a sparse but not nicely structured matrix one might try to reorder its rows and columns by a preliminary calculation so that many of the zero elements become leading zero elements in rows and columns, thus reducing the fill-in in and. 3

11 Example 2.18 The LU factorization of has significant fill-in. However, exchanging the first and the last rows and columns yields 2.4 Exercises Exercises with a -mark are not subject to supervision! They are just for fun! 2.1. Let be a real matrix that has the factorization, where is lower triangular with ones on its diagonal and is upper triangular. Prove that, for every integer, the first rows of span the same space as the first rows of. Prove also that the first columns of are in the -dimensional subspace that is spanned by the first columns of. Hence deduce that no LU factorization of the given form exists if we have, where is the leading submatrix of and where is the matrix whose columns are the first columns of Calculate the Cholesky factorization of the matrix Deduce from the factorization the value of that makes the matrix singular. Also find this value of by seeking the vector in the null-space of the matrix whose first component is one Let be an nonsingular band matrix that satisfies the condition if, where is small, and let Gaussian elimination with column pivoting be used to solve. Identify all the coefficients of the intermediate equations that can become nonzero. Hence deduce that the total number of additions and multiplications of the complete calculation can be bounded by a constant multiple of Prove the following statement: Theorem [Gerschgorin] Let be an matrix (with real or complex coefficients). Then all of its eigenvalues are contained in the union, where are the circles in the complex plane [Hint. Let be an eigenvalue of, let be the corresponding eigenvector, and let be its largest absolute value component. Consider the -th equation of the relation, and apply arguments similar to those used in the proof of Theorem 2.12.] 4

12 Numerical Analysis Lecture 3 3 QR factorization 3.1 Inner product spaces Definition 3.1 The scalar product on a real vector space is a function which satisfies the following axioms with equality only if A vector space with a scalar product is called the inner product space. If are called orthogonal. A set of vectors is called orthonormal if, then the vectors For, the function is called the norm of (induced by the given scalar product), and we have the Cauchy Schwarz inequality Example 3.2 For, the following rule defines the so-called Euclidean scalar product Example 3.3 Another example, which will be used later in construction of orthogonal polynomials, is the scalar product on the space of continuous function on with respect to the fixed positive weight function. It is given by the rule 3.2 Orthogonal matrices Definition 3.4 An matrix is called orthogonal if, e.g.,. Thus, the columns of the orthogonal satisfy, i.e., they are orthonormal with respect to the Euclidean scalar product in. For a square we get therefore, is also an orthogonal matrix, so that the rows of are orthonormal as well. In particular, the column and the row elements of orthogonal satisfy. It follows also that a square orthogonal is nonsingular, moreover 1

13 3.3 QR factorization Definition 3.5 The QR factorization of an matrix is the representation where is an orthogonal matrix and is an upper triangular matrix,... Due to the bottom zero element of, the columns are inessential for the representation itself, hence we can safely write... (3.1) The latter formula is called the skinny QR factorization. Theorem 3.6 Every matrix has a QR factorization. If is non-singular, then a factorization where has a positive main diagonal is unique. Proof. Three algorithms of the QR factorization, the Gram Schmidt orthogonalization (for the skinny version), the Givens rotations and the Householder reflections, are given below. Let be non-singular. Then is positive definite, and there is a unique Cholesky factorization with having a positive main diagonal. Thus, is uniquely determined. Method 3.7 (Solution to a linear system) If is square nonsingular, we can solve by calculating the QR factorization of and then proceeding in two steps solving first (hence ) and then (a triangular system, back-substitution). 3.4 The Gram-Schmidt orthogonalization Let be sought. If we denote the columns of and by and respectively, then it follows from (3.1) that, given, we want to find an orthonormal set that satisfies The latter problem can be solved for a general inner product space. Theorem 3.8 Let be an inner product space, and let elements be linearly independent. Then there exist elements such that (3.2) (3.3) 2

Proof. From the first equation of (3.2), since,. Suppose that the elements which satisfy (3.2)-(3.3) are constructed for all. The next element should be part of the orthonormal set, and it should satisfy (3.4). Multiplying both sides of (3.4) by (in the scalar product sense) and using orthonormality, we find the first coefficients. Substituting them back into (3.4) we obtain. By construction, each is a linear combination of, hence is non-zero since are linearly independent, i.e., is well-defined.

Algorithm 3.9 We may put the previous construction in the following algorithm:

for k = 1 to n do
  b[k] := a[k];
  for j = 1 to k-1 do          % void if k=1
    r(j,k) := <q[j],a[k]>;
    b[k] := b[k] - r(j,k)*q[j]
  end
  r(k,k) := sqrt(<b[k],b[k]>);
  q[k] := b[k]/r(k,k)
end

Example 3.10 QR factorization by Gram-Schmidt of. So,
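For the Euclidean scalar product, Algorithm 3.9 can be sketched in NumPy as follows. This is my own illustration; it assumes the columns of A are linearly independent and implements the classical (rather than modified) variant, so it shares the instability discussed in the next lecture.

import numpy as np

def gram_schmidt_qr(A):
    """Skinny QR of an m x n matrix A with linearly independent columns."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        b = A[:, k].copy()
        for j in range(k):
            R[j, k] = Q[:, j] @ A[:, k]   # r(j,k) = <q_j, a_k>
            b -= R[j, k] * Q[:, j]        # remove the components along earlier q_j
        R[k, k] = np.linalg.norm(b)
        Q[:, k] = b / R[k, k]
    return Q, R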

Numerical Analysis Lecture 4

4 QR factorization (cont.)

4.1 Further properties of orthogonal matrices

The Gram-Schmidt algorithm is very unstable: round-off errors accumulate rapidly, and even for moderate values of the computed matrix is no longer orthogonal. Much better algorithms are based on constructing as a composition of certain elementary orthogonal mappings.

Lemma 4.1 A matrix is orthogonal (i.e., the mapping preserves the Euclidean norm).

Proof. Set. Then. If is orthogonal, then, i.e.,. Conversely, if is norm-preserving, then (and). Taking, we get, while the next choice will provide, whence, i.e.,.

A mapping which preserves the distance between any two points (as an orthogonal mapping does) is called an isometry. It is clear that the composition of two isometric mappings is itself an isometry. So, the following is true.

Lemma 4.2 If are orthogonal, so is.

The simplest isometries in are rotations and reflections. So, the idea of the next two algorithms is quite clear geometrically: given a sequence of vectors, we can always determine a sequence of elementary rotations (reflections) in that brings the coordinates of to triangular form.

4.2 Givens rotations

Consider the following problem: given, find such that, with some,. A solution always exists and is given by. Generally, for any vector, we can find a matrix such that. Such a matrix is called a Givens rotation. The mapping rotates the vectors in a two-dimensional plane spanned by and (clockwise by the angle), hence it is orthogonal; a small code sketch of this rotation follows.
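The 2x2 rotation just described is easy to write down explicitly. The following sketch is my own illustration: it computes the cosine-sine pair that annihilates the second component of a two-vector and applies the corresponding rotation to two rows of a matrix, which is the basic operation repeated column by column in the lemma and theorem that follow.

import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def apply_givens(A, p, q, c, s):
    """Replace rows p and q of A by their rotation; all other rows are untouched."""
    A = np.array(A, dtype=float)
    A[[p, q], :] = np.array([[c, s], [-s, c]]) @ A[[p, q], :]
    return A

Taking p = k, q = i and computing (c, s) = givens(A[k, k], A[i, k]) zeroes the (i, k) entry of the rotated matrix.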

Lemma 4.3 Let be an matrix and, for any, let. Then the -th and the -th rows of are linear combinations of the -th and -th rows of, while all other rows of remain the same as those of.

Proof. Follows immediately from the form of.

From this lemma it follows that we can successively annihilate the subdiagonal elements, column by column, and this implies the next statement.

Theorem 4.4 For any matrix, there exist Givens matrices such that is an upper triangular matrix.

Method 4.5 The -elements have changed through a single rotation while the -element remained the same.

The cost. There are fewer than rotations and each rotation replaces two rows by their linear combinations, hence the total cost of computing is. If the matrix is required in an explicit form, say, for solving the system with many right-hand sides, set and, for each successive rotation, replace by. The final is the product of all the rotations, in the correct order, and we let. The extra cost is. If only one vector is required, we multiply the vector by successive rotations, the cost being. For, each rotation requires twice as much multiplication as the corresponding Gaussian elimination, hence the total cost is, twice as expensive. However, the QR factorization is generally more stable than the LU one.

4.3 Householder transformations

Definition 4.6 Given nonzero, the matrix is called a Householder transformation or a Householder reflection. Since, this transformation reflects any vector in the -dimensional hyperplane spanned by the vectors orthogonal to. Each such matrix is symmetric and (since reflection is an isometry) orthogonal.

Lemma 4.7 For any two vectors of equal length, the Householder transformation with reflects onto. In particular, for any, the choice implies if.

Proof. Draw a picture.

Theorem 4.8 For any matrix, there exist Householder matrices such that is an upper triangular matrix.

Proof. We take with the vector, and being the first column of, or explicitly. Then has as its first column. Suppose that the first columns of have an upper triangular form, and let be the -th column of. So, we define the next taken as. Due to its zero components, such a is orthogonal to the previous columns of, hence they remain invariant under reflection (as well as the first rows of). The bottom component of will vanish due to Lemma 4.7.

Example 4.9 Calculation of and.

If the matrix is required in an explicit form, set initially and, for each successive reflection, replace by. As in the case of Givens rotations, by the end of the computation,. The same algorithm is used for calculating the vector.

Remark 4.10 For practical computations, notice the following. For a matrix and a vector, one should compute the products and as and, respectively.

Deciding between Givens and Householder transformations. If is dense, it is in general more convenient to use Householder reflections. Givens rotations come into their own, however, when has many leading zeros in its rows. In an extreme case, if an matrix consists of zeros underneath the first subdiagonal, they can be rotated away in just Givens rotations, at the cost of operations!
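As with Givens rotations, the reflections of Theorem 4.8 translate into a short program. The sketch below is my own illustration; it chooses the sign in the reflection vector so as to avoid cancellation, and it accumulates Q explicitly, which is only needed when the factor is required for many right-hand sides.

import numpy as np

def householder_qr(A):
    """QR factorization of an m x n matrix (m >= n) by Householder reflections."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    for k in range(n):
        x = A[k:, k]
        u = x.copy()
        sigma = np.linalg.norm(x)
        u[0] += sigma if x[0] >= 0 else -sigma   # avoid cancellation in u = x +/- ||x|| e_1
        norm_u = np.linalg.norm(u)
        if norm_u == 0.0:
            continue                              # column already has the required zeros
        u /= norm_u
        H = np.eye(m - k) - 2.0 * np.outer(u, u)  # reflection in the hyperplane orthogonal to u
        A[k:, :] = H @ A[k:, :]                   # zero out the k-th column below the diagonal
        Q[:, k:] = Q[:, k:] @ H                   # accumulate the product of reflections
    return Q, A                                    # A has been overwritten by R

For any input A, the returned factors satisfy np.allclose(Q @ R, A) with Q orthogonal.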

18 Example 4.11 (Givens rotations) For we have the following steps of the Givens rotation algorithm: Finally, so that 4

Example 4.12 (Householder reflection) For we have the following steps of the Householder algorithm: Finally, so that

Numerical Analysis Lecture 5

5 Iterative methods for linear systems

5.1 Norms of vectors and matrices

Definition 5.1 The norm on a vector space (say,) is a functional which satisfies the following axioms: 1), and; 2),; 3),. A vector space with a norm is called a normed space.

Example 5.2 The following three vector norms in are in common use: the 1-norm, the 2-norm (or the Euclidean norm), and the ∞-norm (or the max-norm). They are particular cases of the p-norm:,.

Definition 5.3 The matrix norm is a functional such that 1)-3) it is a norm on the space of matrices; 4).

Example 5.4 Given a vector norm, the functional (5.1) is a matrix norm. (Prove it.) It is called the induced matrix norm. By definition,. It is not too difficult to determine the matrix norms induced by the three basic vector norms: i.e., max absolute column sum; i.e., [the dominant eigenvalue of]; i.e., max absolute row sum.

Definition 5.5 Let. The set of the eigenvalues of is called the spectrum of, and the value is called the spectral radius of.

Lemma 5.6 For any induced matrix norm and for any, we have. If is symmetric, then.

Proof. If is an eigenvalue of and is the corresponding eigenvector, then. Thus for any, hence. If, then.

Definition 5.7 A sequence in a normed space is called convergent if there is an s.t.

5.2 Basic iterative schemes

The general iterative method for solving is a rule. We will consider the simplest ones: linear, one-step, stationary iterative schemes. Here one chooses and so that, the fixed point of the iteration, satisfies. Standard terminology: the iteration matrix, the error, the residual. For a given class of matrices, we are interested in the convergent methods, i.e., the methods such that for every starting value. Subtracting from (5.2) we obtain (5.2), i.e., a method is convergent if for any.

Scheme 5.8 (Regular splitting) This is the scheme, is non-singular. The simplest splitting is, where, and are portions of: subdiagonal (strictly lower triangular), diagonal and superdiagonal (strictly upper triangular), respectively. We assume that no diagonal element of is zero.

Method 5.9 (Jacobi)

Method 5.10 (Gauss-Seidel)

Remark 5.11 There is no need to invert any matrix. Writing the original system as, the Jacobi method reads, while the Gauss-Seidel method becomes.

22 Example 5.12 Jacobi and Gauss Seidel methods for an approximate solution of with the starting vector. 1) The Jacobi method: With, the first two iterations are 2) The Gauss Seidel method: With, the first two iterations are 3) The exact solution is. 4) Both methods converge (because is diagonally dominant), the Gauss Seidel method much faster. Scheme 5.13 (Iterative refinement) This is the scheme If, then, so that one should choose as an approximation to. The iteration matrix of this scheme is Remark 5.14 Any regular splitting can be viewed as an iterative refinement, e.g., Jacobi: Gauss-Seidel: Method 5.15 (Simple iteration) 3
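A compact NumPy sketch of the two splittings used in Example 5.12 (my own illustration; it assumes a nonzero diagonal and simply runs a fixed number of sweeps rather than testing a stopping criterion):

import numpy as np

def jacobi_sweep(A, b, x):
    """One Jacobi iteration: the new x uses only the old components of x."""
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel iteration: new components are used as soon as they are available."""
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# a diagonally dominant example, started from the zero vector
A = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
b = np.array([1., 2., 3.])
x_j = np.zeros(3)
x_gs = np.zeros(3)
for _ in range(20):
    x_j = jacobi_sweep(A, b, x_j)
    x_gs = gauss_seidel_sweep(A, b, x_gs)

Both sequences approach the exact solution; the Gauss-Seidel iterates do so noticeably faster, in line with the remark in the example.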

Numerical Analysis Lecture 6

6 Iterative methods for linear systems (cont.)

6.1 Sufficient condition for convergence

The next statement provides a condition which guarantees convergence of an iterative method.

Theorem 6.1 Let and let for some induced matrix norm and for some nonsingular (e.g.,). Then the method converges.

Proof. Set and let. Then.

As an application of this result, consider the following partially familiar class of matrices.

Definition 6.2 A matrix is called strictly diagonally dominant by rows or by columns if or, respectively.

Theorem 6.3 Let be strictly diagonally dominant by rows. Then both Jacobi and Gauss-Seidel methods converge.

Proof. We give a proof only for Gauss-Seidel. The Jacobi method is treated similarly (and is simpler). We set, so that, and notice that the strict diagonal dominance by rows implies. For the 1-st component of we have the inequality. Assuming that a similar estimate is valid for the first components of, i.e.,, we derive the same estimate for the -th component. Thus,, hence.

Remark 6.4 We did not use Theorem 6.1 explicitly; however, we proved that, for a strictly diagonally dominant, the Gauss-Seidel method implies, and this is true for any. Since, where is the iteration matrix, this means the following: for a strictly diagonally dominant,, i.e., the condition of Theorem 6.1 is fulfilled.

6.2 Necessary and sufficient conditions for convergence

Theorem 6.5 Let. Then for all if and only if.

Proof. 1) We commence with the case and wish to demonstrate that, for some real, the vector need not tend to. Let be an eigenvalue of, real or complex, such that, and let be a corresponding eigenvector, i.e.,. Then, and. If is real, we choose, hence, and this cannot tend to zero. If is complex, then with some real vectors. But then at least one of the sequences does not tend to zero. For if both do, then also, and this contradicts (6.1). This completes the proof of the only if part of the theorem.

2) Now we turn to the if case, i.e.,, where we assume for simplicity that possesses linearly independent eigenvectors such that. Linear independence means that every can be expressed as a linear combination of the eigenvectors, i.e., there exist such that. Thus, (6.1) and since, we have, as required.

Theorem 6.6 Suppose that, a solution of, satisfies, and we are given the iterative scheme (6.2). Then for any choice of if and only if.

Proof. Set. Since, subtracting this equation from (6.2) we obtain, hence. Now, by Theorem 6.5, for any iff.

Corollary 6.7 Let be positive definite. Then the method of simple iteration converges if and only if.

Proof. Let be the spectrum of. Then the spectrum of the iteration matrix is, whence the spectral radius of has the value. Since for a positive definite we have, it follows that.

Remark 6.8 The sufficient condition of Theorem 6.1 is at first sight stronger than that of Theorem 6.5 because, for any nonsingular and for any induced matrix norm, we have (the first equality holds for the entire spectra, the second one is due to Lemma 5.6). However, they are equivalent: for any matrix and for any there exists an induced matrix norm such that (however, one cannot take).

6.3 Extras

Theorem 6.9 Let be strictly diagonally dominant by columns. Then both Jacobi and Gauss-Seidel methods converge.

Proof. Here is a proof of the simpler Jacobi case. The iteration matrix is; with we obtain, so. Now, if we take the matrix -norm, the diagonal dominance of by columns implies, and application of Theorem 6.1 yields the result. For the Gauss-Seidel case, the proof again follows with and the -norm. Hint: transpose the iteration matrix and use the equality together with some consequences of Theorem 6.3.

Remark 6.10 The complete proof of the if case of Theorem 6.5 exploits the so-called Jordan normal form of the matrix, namely, where is a block diagonal matrix consisting of the Jordan blocks. To prove that if, one can split, notice that for, and evaluate the terms of the expansion.

Remark 6.11 In the method of simple iteration, if we know the bound not only for but also for, then the choice is optimal in the sense that it provides the best rate of convergence (of the method of simple iteration) on the class of SPD-matrices with, namely.

26 Numerical Analysis Lecture Linear least squares 7.1 Statement of the problem Consider the problem of finding a vector such that where a matrix and a vector are given and. When there are more equations than unknowns, the system is called overdetermined. In general, an overdetermined system has no solution, but we would like to have and close in a sense. Problems of this form occur frequently when we collect observations (which, typically, are prone to measurement error) and wish to exploit them to form an -variable linear model (e.g., trying to put some planet observations on an ellipse). Choosing the Euclidean distance as a measure of closeness, one obtains Problem 7.1 (Least squares in ) Given and, find i.e., find (the argument) which minimizes (the functional). Discussion 7.2 Let with. Then the vectors, where runs through, form an -dimensional plane in, say, spanned by. Problem 7.1 is the problem of finding This problem has a clear geometric solution: given, the minimal distance between and vectors in is attained when is the foot of the perpendicular from onto the plane, i.e. when The next theorem puts these arguments into precise analytic form. Since the proof uses nothing more than the properties of the scalar product in, it costs nothing to replace by a general inner product space. Problem 7.3 (Least squares in ) Given and, find The element is called the best least squares approximation to from. Theorem 7.4 Let be an inner product space, and let and a subspace be given. Then (7.1) Proof. Given, consider the quadratic polynomial in, Its least value occurs when, so that If for some, so is, and then, i.e., is not optimal. Conversely, if for all, then, for any, with, we find that i.e., is the best approximant.

Corollary 7.5 If is the best least squares approximation to from, then, the latter inequality being strict for.

7.2 Normal equations

In particular, if, then writing and running in (7.1) through the basis functions, we obtain a linear system of equations for determining the coefficients, so that we may reformulate Theorem 7.4 in the following way.

Theorem 7.6 Let and let. Then, where. The system is called the normal equations, is the Gram matrix, and is the normal solution.

Coming back to our particular problem of the least squares solution to, we obtain

Corollary 7.7 Let and. Then.

However, the method of normal equations for finding an optimal has several disadvantages. The main one is ill-conditioning.

Example 7.8 (Polynomial least squares) Given, find an algebraic polynomial of degree such that. If one seeks, then, and is very close to the so-called Hilbert matrix, the monster that explodes all the algorithms.

7.3 QR and least squares

An alternative is provided by the QR factorization. Suppose that, with an orthogonal and an upper triangular. Then, since an orthogonal mapping is length-preserving, i.e., for any, we have; therefore we may seek that minimises. Suppose for simplicity that. Then the bottom rows of are zero. Therefore we find by solving the (nonsingular) linear system given by the first equations of. A similar (although more complicated) algorithm applies when. Note, recalling our former remark, that we don't require explicitly and need to evaluate only.
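As a concrete illustration of Section 7.3 (a sketch of mine, not from the notes): for an overdetermined system with full column rank one can form the skinny QR factorization and solve the triangular system R x = Q^T b; the normal-equation route is shown alongside only for comparison, since it squares the conditioning of the problem.

import numpy as np

def least_squares_qr(A, b):
    """Minimise ||Ax - b||_2 via the skinny QR factorization (full column rank assumed)."""
    Q, R = np.linalg.qr(A, mode='reduced')
    return np.linalg.solve(R, Q.T @ b)    # a triangular back substitution in exact arithmetic

def least_squares_normal(A, b):
    """Same problem via the normal equations A^T A x = A^T b (usually worse conditioned)."""
    return np.linalg.solve(A.T @ A, A.T @ b)

# fit a straight line to noisy observations
t = np.linspace(0.0, 1.0, 50)
b = 2.0 * t + 1.0 + 0.01 * np.random.randn(50)
A = np.column_stack([np.ones_like(t), t])
x = least_squares_qr(A, b)   # approximately [1, 2]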

28 Numerical Analysis Lecture 8 8 Linear least squares (cont.) 8.1 Orthogonal systems Recall that a set in an inner product space is orthogonal if for, and it is orthonormal if The least squares approximation by orthogonal systems is especially simple and effective. Theorem 8.1 Let be an inner product space, be a subspace, and let be an orthogonal set that spans. Then the least squares approximant to any from is given by the formula and the value of the least squares approximation is Proof. By Theorem 7.4, the best is determined by the conditions, for all, hence, since are orthogonal, we obtain Further, by Corollary 7.5, we have, and, by orthogonality of,. Method 8.2 Suppose we are given an infinite orthogonal sequence and we want to approximate with a prescribed accuracy by a linear combination of the first terms, i.e., find and such that Firstly, it follows that, for any, the optimal equals and, secondly, that the required value can be calculated by summing the terms of until we reach the bound (8.1) Example 8.3 For, the space of continuous -periodic functions, equipped with the scalar product, the trigonometric functions form an orthogonal system, so that for its -th partial Fourier series given by is the best approximation to from, the space of all trigonometric polynomial of degree. If we are given with non-orthogonal, we can get an orthogonal basis by the Gram-Schmidt algorithm. This procedure is not stable numerically, but generally it is the only one available. However, in some cases there exist better algorithms. For, this was the QR factorization by means of Givens or Householder transfomations. Another case is construction of orthogonal algebraic polynomials to be considered next. 1

8.2 Orthogonal polynomials

Consider, the space of all continuous real-valued functions, and define a scalar product on by (8.2). Here, the so-called weight function, is a fixed positive function such that the integral exists for all. We denote by the space of all algebraic polynomials of degree (at most), i.e., if. If the leading coefficient equals, then is called a monic polynomial. Given a scalar product (8.2), we say that is the -th orthogonal polynomial if. (Note: different weights lead to different orthogonal polynomials!)

Lemma 8.4 For every there exists a unique monic orthogonal polynomial. Any is uniquely expressible as a linear combination (8.3).

Proof. We just repeat the Gram-Schmidt algorithm. Starting with, we set. Then is orthogonal to each previous, and, hence. Therefore, and any has an expansion (8.3), where each coefficient is uniquely determined by multiplying both sides with. If is the -th orthogonal polynomial, then in its expansion (8.3) all for, and if it is monic then, i.e.,.

Theorem 8.5 (Three-term recurrence relation) Monic orthogonal polynomials satisfy the relation (8.4), where,, and.

Proof. Based on (8.3), let us look at the coefficients of the expansion, since both and are monic polynomials of degree. By definition,. Because of monicity, we have the equality, where, so that, hence. Since, we obtain, thus. It follows that, and that is equivalent to (8.4).

Remark 8.6 If have leading coefficients, then the recurrence takes the form, and, with an appropriate choice of, may become very simple.
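The three-term recurrence of Theorem 8.5 gives a stable way to evaluate monic orthogonal polynomials once the coefficients are known. The sketch below is my own illustration for the weight w(x) = 1 on [-1, 1] (the monic Legendre polynomials), for which the recurrence coefficients have the known closed form alpha_k = 0 and beta_k = k^2 / (4k^2 - 1).

import numpy as np

def monic_legendre(n, x):
    """Evaluate the monic Legendre polynomials p_0, ..., p_n at x (scalar or array)."""
    x = np.asarray(x, dtype=float)
    p = [np.ones_like(x), x.copy()]          # p_0 = 1, p_1 = x (alpha_k = 0 by symmetry)
    for k in range(1, n):
        beta = k * k / (4.0 * k * k - 1.0)   # beta_k for w(x) = 1 on [-1, 1]
        p.append(x * p[k] - beta * p[k - 1]) # three-term recurrence
    return p[:n + 1]

A quick check of orthogonality, e.g. integrating the product of p_2 and p_3 over [-1, 1] with a fine trapezoidal rule, returns a value near zero.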

8.3 Examples

Example 8.7 Classical examples of orthogonal polynomials, with their weight functions, include:

Example 8.8 (Chebyshev polynomials) Chebyshev polynomials of degree are defined by, or, in a more instructive form,. They satisfy the relations, where the first two are straightforward and the recurrence follows from the equality. In particular, are indeed algebraic polynomials with leading coefficients. Further,, i.e., are orthogonal with respect to the scalar product.

8.4 Least squares polynomial fitting

The next theorem justifies Method 8.2 in the case of polynomial least squares approximation.

Theorem 8.9 (Parseval identity) If is finite, then.

Proof. Let, hence. According to the Weierstrass theorem, any function in can be approximated arbitrarily closely by a polynomial, hence, and we deduce that as.

Remark 8.10 An analogous identity is valid for the trigonometric system.

31 Numerical Analysis Lecture 9 9 Polynomial interpolation Let be a real-valued continuous function defined on some interval and let be distinct points in. We wish to construct a polynomial of degree which interpolates at these points, i.e., satisfies Theorem 9.1 (Existence and uniqueness) Given and distinct points, there is exactly one polynomial such that all. Proof. 1) There is at least one polynomial interpolant, the one in the Lagrange form, (9.1) the latter are the fundamental Lagrange polynomials. Each is the product of linear factors, hence. It also equals 1 at and vanishes at, i.e.,. Therefore and 2) There is at most one polynomial interpolant to on. For if there are two,, then the polynomial is of degree and vanishes at points, whence. Remark 9.2 Let us introduce the so-called nodal polynomial Then, in the expression (9.1) for, the numerator is simply while the denominator is equal to. With that we arrive to a compact Lagrange form (9.2) The Lagrange forms (9.1)-(9.2) for the interpolating polynomials are easy to manipulate with, but they are unsuitable for numerical evaluations. An alternative is the Newton form which has an adaptive nature. Method 9.3 (The Newton form) For, let be the polynomial interpolant to on. Then two subsequent and interpolate the same values for, hence their difference is a polynomial of degree that vanishes at points. Thus with some constant which is seen to be the leading coefficient of. It follows that can be built step by step as one constructs the sequence, with obtained from by addition the term from the right-hand side of (9.3): (9.3) 1

Definition 9.4 (Divided difference) Given and distinct points, the divided difference of order is, by definition, the leading coefficient of the polynomial which interpolates at these points.

With this definition we arrive at the Newton formula for the interpolating polynomial.

Theorem 9.5 (Newton formula) Given distinct points, let be the polynomial that interpolates at these points. Then it may be written in the Newton form or, more compactly, (9.4).

To make this formula of any use, we need an expression for. One such can be derived from the Lagrange formula (9.2) by identifying the leading coefficient of. This turns out to be. However, it has the same computational disadvantages as the Lagrange form itself. A useful way to calculate divided differences is again an adaptive (or recurrence) approach.

Theorem 9.6 (Recurrence relation) For distinct, we have (9.5).

Proof. Let be the polynomials such that interpolates on, interpolates on, and define the polynomial. One readily sees that for all, hence is the -th degree interpolating polynomial for. Moreover, the leading coefficient of is equal to the difference of those of and divided by, and that is exactly what the recurrence (9.5) says.

Method 9.7 The recursive formula (9.5) allows for fast evaluation of the divided difference table. This can be done in operations, and the outcome is the set of numbers at the heads of the columns, which can be used in the Newton form (9.4).

Method 9.8 (Horner scheme) Finally, evaluation of at a given point using the Newton formula (provided that the divided differences are known) requires just operations, as long as we do it by the Horner scheme.
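Methods 9.7 and 9.8 combine into a short program: build the divided difference table column by column, keep the entries at the head of each column, and evaluate the Newton form by a Horner-type scheme. The following NumPy sketch is my own illustration of that procedure.

import numpy as np

def divided_differences(x, f):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] using the recurrence (9.5)."""
    x = np.asarray(x, dtype=float)
    c = np.array(f, dtype=float)
    for k in range(1, len(x)):
        # after this step, c[i] (for i >= k) holds f[x_{i-k}, ..., x_i]
        c[k:] = (c[k:] - c[k-1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(x, c, t):
    """Horner-type evaluation of the Newton form with coefficients c at the point t."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - x[k]) + c[k]
    return p

For example, with nodes x = [0, 1, 2, 3] and values f = [0, 1, 8, 27], the coefficients c = divided_differences(x, f) reproduce the cubic exactly: newton_eval(x, c, 1.5) equals 1.5**3.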

33 Numerical Analysis Lecture Error bounds for polynomial interpolation Here we study the interpolation error restricted to the class of differentiable functions that possess, say, continuous derivatives on the interval ; we denote this class by. We start with an estimate provided by the basic relation (9.3) between two consecutive interpolating polynomials. Theorem 10.1 Let interpolate at distinct points. Then for any (10.1) Proof. Given, let be any other point. Then, by (9.3), the corresponding polynomials and are related by In particular, putting, and noticing that, we obtain the latter equality being the same as (10.1). This theorem shows the error to be like the next term in the Newton form. However, we cannot evaluate the right-hand side of (10.1) without knowing the number. But as we now show we can relate it to the -st derivative of. For this we need a version of the Rolle s theorem: Lemma 10.2 If is zero at distinct points, then has at least distinct zeros in. Proof. By Rolle s theorem, if is zero at two point, then is zero at an intermediate point. So, we deduce that vanishes at least at distinct points. Next, applying Rolle to, we conclude that vanishes at points, and so on. Theorem 10.3 Let be the smallest interval that contains and let. Then there exists such that (10.2) Proof. Let be the interpolating polynomial to on. The error function has at least zeros in so, by Rolle s theorem, must vanish at some, i.e., On the other hand, if (lower order terms), then (for any ) So, combining (10.1) with (10.2) where we put, we obtain Theorem 10.4 Let, and let interpolate at points. Then for every there exists such that (10.3) 1

34 The equality (10.3) with the value for some is of hardly any use. Usually one has a bound for in terms of some norm, e.g., the -norm (the max-norm) The estimate (10.3) takes then the form (10.4) If we want to find the maximal error over the interval, then maximizing first the right- and then the left-hand side over we get yet one more error bound for polynomial interpolation Here we put the lower index in in order to emphasize the dependence of on the sequence of interpolating points. The choice of makes a big difference! Definition 10.5 The Chebyshev polynomial of degree on is defined by (10.5) One sees at once that, on, 1) takes its maximal absolute value with alternating signs times: 2) has distinct zeros: Lemma 10.6 The Chebyshev polynomials satisfy the recurrence relation In particular, is really an algebraic polynomial of degree with the leading coefficient. Proof. Expressions (10.6) are straightforward, the recurrence follows from the equality (10.6) (10.7) via the substitution. Theorem 10.7 On the interval, among all polynomials of degree with leading coefficient equal to one, the Chebyshev polynomial deviates least from zero, i.e., Proof. Suppose there is a polynomial such that, and set 1) The leading coefficient of both and is one, thus is of degree at most. 2) On the other hand, at the points, the Chebyshev polynomial takes the values altenatively, while, hence alternates in sign at these points, therefore it has a zero in each of intervals, i.e. at least zeros in the interval, a contradiction to. Corollary 10.8 For, let. Then, for all, we have Theorem 10.9 For, the best choice of interpolating points is, 2
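Theorem 10.9 suggests interpolating at the Chebyshev points rather than at equally spaced ones. The short experiment below is my own illustration, not part of the notes; it uses the Lagrange formula (9.1) directly and the classical test function 1/(1 + 25x^2) (an assumption of mine) to compare the maximum interpolation error for the two choices of nodes on [-1, 1].

import numpy as np

def lagrange_eval(nodes, values, t):
    """Evaluate the Lagrange interpolating polynomial of (nodes, values) at the points t."""
    t = np.asarray(t, dtype=float)
    p = np.zeros_like(t)
    for i, xi in enumerate(nodes):
        li = np.ones_like(t)               # fundamental Lagrange polynomial l_i
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (t - xj) / (xi - xj)
        p += values[i] * li
    return p

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
t = np.linspace(-1.0, 1.0, 2001)
n = 20
equi = np.linspace(-1.0, 1.0, n + 1)                              # equally spaced nodes
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1))) # zeros of T_{n+1}
err_equi = np.max(np.abs(lagrange_eval(equi, f(equi), t) - f(t)))
err_cheb = np.max(np.abs(lagrange_eval(cheb, f(cheb), t) - f(t)))

The error at the Chebyshev nodes is smaller by several orders of magnitude, which is the practical content of the theorem.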

35 10.1 Additional issues Theorem 10.4 can be proved without use of the Newton formula. Theorem 10.4 Let, and let interpolate at points. Then for every there exists such that (10.3) Proof. If coincides with any from the interpolating set, then both sides of (10.3) vanish, hence formula trivially holds. So, we let be any other point, and consider the function with some constant. For any, at points, and we choose particular so that at as well, i.e., Then has distinct zeros, and, by Rolle s theorem, for some. So, whence and that is the same as (10.3). 3

36 Numerical Analysis Lecture Approximation of linear functionals Given a vector space (e.g.,, or ), the linear functional is a linear mapping such that, and. We will treat the space, and the functionals 1), ( being fixed), 2) Our goal is to find an approximation of the form (11.1) For the functionals 1) 2), this is called numerical differentiation and numerical integration, respectively. Method 11.1 A suggestive approximation method is to interpolate by and take. This is called the interpolating formula (of degree ), in this case. We have already seen an interpolating formula, namely that of Lagrange for the functional : By linearity of, we have, thus the interpolating formula has the form (11.2) Method 11.2 Another method is to require from the formula (11.1) to be exact on, that is to become equality for any. In this case the number of terms (almost) need not to be restricted, and if, then it is certainly not a polynomial of degree that substitutes the function. If, then the formula is of high accuracy. However, if, then both methods are the same. Lemma 11.3 The formula is interpolating it is exact on. Proof. The interpolating formula (11.2) is exact on by definition. Conversely, if the formula in the lemma is exact on, take to obtain, i.e., (11.2) Numerical integration A formula of numerical integration is called a quadrature formula with the nodes and the weights. By Lemma 11.3, for any fixed, the interpolating formula is exact on. Can one find a quadrature of higher accuracy which is exact on with some? Since we are free in choosing nodes, we may hope to increase the degree of accuracy by. More is impossible. Claim 11.4 No quadrature formula with nodes is exact for all if. Proof. Take. Then, and, but for any s. Hence the integral and the quadrature do not match. Our next goal is to show that can be attained. For this we need 1

Lemma 11.5 Let be orthogonal to all on. Then all the zeros of are real, distinct and lie in the interval.

Proof. Denote by the number of sign changes of in and assume that. If, set, and if, set, where the s are the points where a sign change of occurs. Then,, hence. On the other hand, by construction, does not change sign throughout and vanishes at a finite number of points, hence, a contradiction. Thus, (hence) and the statement follows.

Theorem 11.6 Let a quadrature with nodes be exact on (i.e. interpolating). Then it is exact on if and only if its nodes are the zeros of the -st orthogonal polynomial.

Proof. 1) Let be the zeros of. Given any, we can represent it uniquely as with some. Since is orthogonal to, we have. On the other hand, because,. But, while the quadrature is exact on, hence, i.e., the right-hand sides of the two previous equalities coincide, therefore the left-hand sides coincide too, i.e.,.

2) Conversely, if a quadrature with nodes is exact for all, then letting and taking any, we find, i.e., is orthogonal to all.

Definition 11.7 A quadrature with nodes exact on is called a Gaussian quadrature.

Example 11.8 For and, the underlying orthogonal polynomials are the Legendre polynomials. The corresponding weights and nodes of the Gaussian quadratures are as follows.

Example 11.9 For and, the orthogonal polynomials are the Chebyshev polynomials and the quadrature rule is particularly attractive:

Example 11.10 (Simplest quadrature formulae)
the rectangle rule: 1-point, non-Gaussian, exact on constants;
the midpoint rule: 1-point, Gaussian, exact on linear functions;
the trapezoidal rule: 2-point, non-Gaussian, exact on linear functions;
the Simpson rule: 3-point, non-Gaussian, but of higher accuracy (exact on cubics).
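The Gauss-Legendre rules of Example 11.8 are easy to try out numerically. The sketch below is my own illustration and relies on numpy.polynomial.legendre.leggauss, which returns exactly the nodes (zeros of the Legendre polynomial) and weights discussed above.

import numpy as np

def gauss_legendre(f, n):
    """Approximate the integral of f over [-1, 1] with the n-point Gaussian quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # zeros of P_n and their weights
    return weights @ f(nodes)

# an n-point Gaussian rule is exact on polynomials of degree 2n - 1
print(gauss_legendre(lambda x: x**6, 4))   # exact value 2/7
print(gauss_legendre(np.exp, 4))           # close to e - 1/e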

11.2 Numerical differentiation

Consider the interpolating formulae for numerical differentiation which are exact on the polynomials of degree. The simplest ones are with, i.e., where is the interpolating polynomial of degree. But is times the leading coefficient of, i.e.,, and we obtain the simplest rules.

Example 11.11
the forward difference: 2-point, exact on linear functions;
the central difference: 2-point, of higher accuracy, exact on quadratics;
the 2-nd central difference: 3-point, of higher accuracy, exact on cubics.

Example 11.12,. (Of course, one can transform any formula to any interval.) (11.3)
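The three rules in Example 11.11 are easy to check numerically. The sketch below is my own, with an arbitrarily chosen step h; it implements the forward difference, the central difference and the second central difference and compares them with the exact derivatives of sin at a point.

import numpy as np

def forward_diff(f, x, h):
    """Forward difference (f(x+h) - f(x)) / h, exact on linear functions."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Central difference (f(x+h) - f(x-h)) / (2h), exact on quadratics."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def second_central_diff(f, x, h):
    """Second central difference (f(x-h) - 2f(x) + f(x+h)) / h^2, exact on cubics."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

x, h = 1.0, 1e-4
print(forward_diff(np.sin, x, h) - np.cos(x))           # O(h) error
print(central_diff(np.sin, x, h) - np.cos(x))           # O(h^2) error
print(second_central_diff(np.sin, x, h) + np.sin(x))    # approximates sin'' = -sin, O(h^2) error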

39 Numerical Analysis Lecture Error of approximation 12.1 Introduction Given a linear functional and an approximation scheme, we are interested in the error (which is a linear functional as well) If acts on, then it is natural to seek an estimate in terms of, i.e., (12.1) Since on, such an estimate can exist only if on i.e., the approximation formula must be exact on (e.g., it can be interpolating of degree ). If holds with some and moreover, for any, there is an such that then the constant is called least or sharp. The next section gives a general approach to obtaining sharp estimates for the functionals that vanish on The Peano kernel theorem Our point of departure is the Taylor formula with an integral remainder term, (12.2) where the sum is the Taylor polynomial to (at the point ). One can can verify (12.2) by integration by parts. It is standard to write the remainder in the slightly different form, which makes the range of integration independent of : Let be a linear functional on. Then with Let us put formally function under the integration sign, so that, for any fixed value of, it will act on the a function, say, of which (for a given ) is defined as,. To each value there corresponds a value, thus we have a function called the Peano kernel of the functional. So, formally, we may write (12.3) 1

40 Definition 12.1 Denote by functionals of two types the set of linear functionals which are linear combinations of the Lemma 12.2 without proof If, then equality is valid. Theorem 12.3 (The Peano kernel theorem) Let and let be an approximation formula which is exact on. Then the error functional (and any other functional that vanishes on ) has the integral representation Proof. The formula follows from because and Error bounds Theorem 12.4 Under assumptions of the previous theorem, we have the sharp inequality (12.4) (12.5) Proof. Apply the inequality side of (12.4). to the right-hand Theorem 12.5 Let be an interpolating formula. If the Peano kernel does not change its sign, then (12.6) Proof. Put into (12.4). Since, we obtain, and since, we have. So, for any, If does not change, then, hence. Example 12.6 For example, we have an analogue of (10.4) for the -th derivative provided that does not change its sign. Finally, there is a simple (though not very accurate) error bound for numerical integration. Theorem 12.7 Let be a quadrature with nodes that is exact on. Then (12.7) Proof. We have 2


More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 208 LECTURE 3 need for pivoting we saw that under proper circumstances, we can write A LU where 0 0 0 u u 2 u n l 2 0 0 0 u 22 u 2n L l 3 l 32, U 0 0 0 l n l

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti

Mobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

= W z1 + W z2 and W z1 z 2

= W z1 + W z2 and W z1 z 2 Math 44 Fall 06 homework page Math 44 Fall 06 Darij Grinberg: homework set 8 due: Wed, 4 Dec 06 [Thanks to Hannah Brand for parts of the solutions] Exercise Recall that we defined the multiplication of

More information

Part IB Numerical Analysis

Part IB Numerical Analysis Part IB Numerical Analysis Definitions Based on lectures by G. Moore Notes taken by Dexter Chua Lent 206 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

Numerical Analysis: Solving Systems of Linear Equations

Numerical Analysis: Solving Systems of Linear Equations Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

lecture 2 and 3: algorithms for linear algebra

lecture 2 and 3: algorithms for linear algebra lecture 2 and 3: algorithms for linear algebra STAT 545: Introduction to computational statistics Vinayak Rao Department of Statistics, Purdue University August 27, 2018 Solving a system of linear equations

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Introduction to Applied Linear Algebra with MATLAB

Introduction to Applied Linear Algebra with MATLAB Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

Linear Least-Squares Data Fitting

Linear Least-Squares Data Fitting CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Math 4242 Fall 2016 (Darij Grinberg): homework set 8 due: Wed, 14 Dec b a. Here is the algorithm for diagonalizing a matrix we did in class:

Math 4242 Fall 2016 (Darij Grinberg): homework set 8 due: Wed, 14 Dec b a. Here is the algorithm for diagonalizing a matrix we did in class: Math 4242 Fall 206 homework page Math 4242 Fall 206 Darij Grinberg: homework set 8 due: Wed, 4 Dec 206 Exercise Recall that we defined the multiplication of complex numbers by the rule a, b a 2, b 2 =

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is, 65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example

More information

MTH 2032 Semester II

MTH 2032 Semester II MTH 232 Semester II 2-2 Linear Algebra Reference Notes Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2 ii Contents Table of Contents

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns 5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns (1) possesses the solution and provided that.. The numerators and denominators are recognized

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

We wish the reader success in future encounters with the concepts of linear algebra.

We wish the reader success in future encounters with the concepts of linear algebra. Afterword Our path through linear algebra has emphasized spaces of vectors in dimension 2, 3, and 4 as a means of introducing concepts which go forward to IRn for arbitrary n. But linear algebra does not

More information

Cheat Sheet for MATH461

Cheat Sheet for MATH461 Cheat Sheet for MATH46 Here is the stuff you really need to remember for the exams Linear systems Ax = b Problem: We consider a linear system of m equations for n unknowns x,,x n : For a given matrix A

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

MATH 167: APPLIED LINEAR ALGEBRA Least-Squares

MATH 167: APPLIED LINEAR ALGEBRA Least-Squares MATH 167: APPLIED LINEAR ALGEBRA Least-Squares October 30, 2014 Least Squares We do a series of experiments, collecting data. We wish to see patterns!! We expect the output b to be a linear function of

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

7. Dimension and Structure.

7. Dimension and Structure. 7. Dimension and Structure 7.1. Basis and Dimension Bases for Subspaces Example 2 The standard unit vectors e 1, e 2,, e n are linearly independent, for if we write (2) in component form, then we obtain

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information