MATH2071: LAB #7: Factorizations

Introduction                          Exercise 1
Orthogonal Matrices                   Exercise 2
A Set of Questions                    Exercise 3
The Gram Schmidt Method               Exercise 4
Gram-Schmidt Factorization            Exercise 5
Householder Matrices                  Exercise 6
Householder Factorization             Exercise 7
The QR Method for Linear Systems      Exercise 8
The singular value decomposition      Exercise 9
                                      Exercise 10

1 Introduction

We have seen one factorization, PLU, that can be used to solve a linear system provided that the system is square, nonsingular, and not too badly conditioned. However, if we want to handle problems with a bad condition number, or that are singular, or even rectangular, we need a different approach. In this lab we will look at two versions of the QR factorization,

A = QR

where Q is an orthogonal matrix and R is upper triangular, and also at the singular value decomposition (SVD), which factors a matrix into the form A = USV^T.

We will see that the QR factorization can be used to solve the same problems that the PLU factorization handles, but it can be extended to several other tasks for which the PLU factorization is useless. In situations where we are concerned about controlling roundoff error, the QR factorization can be more accurate than the PLU factorization, and it is even a little easier to use when solving linear systems, since the inverse of Q is simply Q^T. We will also see that the SVD can be used to compress data, to identify the range and nullspace of a linear operator, and even to solve non-square systems.

You may find it convenient to print the pdf version of this lab rather than the web page itself. This lab will take three sessions.

2 Orthogonal Matrices

Definition: An orthogonal matrix is a real matrix whose inverse is equal to its transpose. By convention, an orthogonal matrix is usually denoted by the symbol Q. The definition of an orthogonal matrix immediately implies that

QQ^T = Q^T Q = I.

From the definition of an orthogonal matrix, from the fact that the L2 vector norm of x can be defined by

||x||_2 = sqrt(x^T x),

and from the fact that (Ax)^T = x^T A^T, you should be able to deduce that, for orthogonal matrices,

||Qx||_2 = ||x||_2.
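The norm-preserving property is easy to confirm numerically. Here is a minimal sketch (the 2 x 2 rotation matrix and the angle are examples of my own choosing, not part of the lab) that checks both Q^T Q = I and ||Qx||_2 = ||x||_2 in Matlab:

theta = 0.7;                          % an arbitrary angle
Q = [ cos(theta), -sin(theta);
      sin(theta),  cos(theta) ];      % a rotation matrix, hence orthogonal
x = [ 3; 4 ];
norm(Q'*Q - eye(2))                   % should be roundoff-sized
abs(norm(Q*x) - norm(x))              % should also be roundoff-sized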

If we multiply x by Q and its L2 norm doesn't change, then Qx must lie on the circle whose radius is ||x||_2. In other words, the only thing that has happened is that Qx has been rotated around the origin by some angle, or reflected through the origin. That means that an orthogonal matrix represents a rotation or a reflection.

When matrices are complex, the term unitary is the analog of orthogonal. A matrix is unitary if UU^H = I, where the H superscript refers to the Hermitian or conjugate-transpose of the matrix. In Matlab, the prime operator (') constructs the Hermitian and the dot-prime operator (.') constructs the transpose. A real matrix that is unitary is actually orthogonal.

3 A Set of Questions

Suppose we have N vectors x_i, each of dimension M. Here are some common and related questions:

- Can we determine if the vectors are linearly independent?
- Can we determine the dimension of the space the vectors span? (This is the rank.)
- Can we construct an orthonormal basis for the space spanned by the vectors?
- Can we construct an orthonormal basis for the space of vectors perpendicular to all the x_i?
- Given a new vector x_{N+1}, can we determine whether this vector lies in the space spanned by the other vectors?
- If a new vector lies in the space, what specific linear combination of the old vectors is equal to the new one?
- If we create an M by N matrix using the vectors as columns, what is the rank of the matrix?
- How can we solve a rectangular system of linear equations?

Since we actually plan to do these calculations numerically, we also need to be concerned about how we deal with numerical error. For instance, in exact arithmetic, it's enough to say a matrix is not singular. In numerical calculations, we would like to be able to say that a particular matrix, while not singular, is nearly singular.

4 The Gram Schmidt Method

The Gram Schmidt method can be thought of as a process that analyzes a set of vectors X, producing an equivalent (and possibly smaller) set of vectors Q that span the same space, have unit L2 norm, and are pairwise orthogonal. Note that if we can produce such a set of vectors, then we can easily answer many of the questions in our set of problems. In particular, the size of the set Q tells us whether the set X was linearly independent, the dimension of the space spanned, and the rank of a matrix constructed from the vectors. And of course, the vectors in Q are the orthonormal basis we were seeking. So you should believe that being able to compute the vectors Q is a valuable ability. In fact, the Gram-Schmidt method can help us with all the problems on our list.

We will be using the Gram-Schmidt process to factor a matrix, but the process itself pops up repeatedly whenever sets of vectors must be made orthogonal. You may see it, for example, in Krylov space methods for iteratively solving systems of equations and for eigenvalue problems.

In words, the Gram-Schmidt process goes like this:

1. Start with no vectors in Q;
2. Consider x, the next (possibly the first) vector in X;

3. For every vector q_i already in Q, compute the projection of q_i on x (i.e., q_i . x) and subtract it from x to get a vector orthogonal to q_i;
4. Compute the norm of (what's left of) x. If the norm is zero (or too small), discard x; otherwise, divide x by its norm and move it from X to Q.
5. If there are more vectors in X, return to step 2.
6. When you get here, X is empty and you have an orthonormal set of vectors Q spanning the same space as the original set X.

Here is a sketch of the Gram Schmidt process as an algorithm. Assume that n_x is the number of x vectors:

n_q = 0                      % n_q will become the number of q vectors
for k = 1 to n_x
  for l = 1 to n_q           % if n_q = 0 the loop is skipped
    r_lk = q_l . x_k
    x_k = x_k - r_lk * q_l
  end
  r_kk = sqrt(x_k . x_k)
  if r_kk > 0
    n_q = n_q + 1
    q_nq = x_k / r_kk
  end
end

You should be able to match this algorithm to the previous verbal description. Can you see how the L2 norm of a vector is being computed?

Exercise 1:

(a) Implement the Gram-Schmidt process in an m-file called gram_schmidt.m. Your function should have the signature:

function Q = gram_schmidt ( X )
% Q = gram_schmidt ( X )
% more comments
% your name and the date

Assume the x vectors are stored as columns of the matrix X. Be sure you understand this data structure because it is a potential source of confusion. In the algorithm description above, the x vectors are members of a set X, but here these vectors are stored as the columns of a matrix X. Similarly, the vectors in the set Q are stored as the columns of a matrix Q. The first column of X corresponds to the vector x_1, so that the Matlab expression X(k,1) refers to the k-th component of the vector x_1, and the Matlab expression X(:,1) refers to the Matlab column vector corresponding to x_1. Similarly for the other columns. Include your code in your summary report.

(b) It is always best to test your code on simple examples for which you know the answers. Test your code using the following input:

X = [ ]

You should be able to see that the correct result is the identity matrix. If you do not get the identity matrix, find your bug before continuing.

(c) Another simple test you can do in your head is the following:

X = [ ]

The columns of Q should have L2 norm 1 and be pairwise orthogonal, and you should be able to confirm these facts by inspection.

(d) Test your Gram-Schmidt code to compute a matrix Q using the following input:

X = [ ]

If your code is working correctly, you should compute approximately:

[ ]

You should verify that the columns of Q have L2 norm 1 and are pairwise orthogonal. If you like programming problems, think of a way to do this check in a single line of Matlab code.

(e) Show that the matrix Q you just computed is not an orthogonal matrix, even though its columns form an orthonormal set. Are you surprised?

5 Gram-Schmidt Factorization

We need to take a closer look at the Gram-Schmidt process. Recall how the process of Gauss elimination could actually be regarded as a process of factorization. This insight enabled us to solve many other problems. In the same way, the Gram-Schmidt process is actually carrying out a different factorization that will give us the key to other problems. Just to keep our heads on straight, let me point out that we're about to stop thinking of X as a bunch of vectors, and instead regard it as a matrix. Since it is traditional for matrices to be called A, that's what we'll call our set of vectors from now on.

Now, in the Gram-Schmidt algorithm, the numbers that we called r_lk and r_kk, that we computed, used, and discarded, actually record important information. They can be regarded as the nonzero elements of an upper triangular matrix R. The Gram-Schmidt process actually produces a factorization of the matrix A of the form:

A = QR

Here, the matrix Q has the same M x N shape as A, so it's only square if A is. The matrix R will be square (N x N) and upper triangular. Now that we're trying to produce a factorization of a matrix, we need to modify the Gram-Schmidt algorithm of the previous section. Every time we consider a vector x_k at the beginning of a loop, we will now always produce a vector q_k at the end of the loop, instead of dropping some out. Hence, if r_kk, the norm of vector x_k, is zero, we will simply set q_k to the zero vector.

Exercise 2:

(a) Make a copy of the m-file gram_schmidt.m and call it gs_factor.m. Modify it to compute the Q and R factors of a (possibly) rectangular matrix A. It should have the signature

function [ Q, R ] = gs_factor ( A )
% [ Q, R ] = gs_factor ( A )
% more comments

% your name and the date

Include a copy of your code with your summary.

(b) Compute the QR factorization of the following matrix.

A = [ ]

You computed Q above, and you should get the same matrix Q again. Check that A = QR: if so, your code is correct.

(c) Compute the QR factorization of a matrix of random numbers (A=rand(100,100)). (Hint: use norms to check the equalities.)
- Is it true that Q^T Q = I? (Hint: does norm(Q'*Q-eye(100),'fro') equal 0?)
- Is it true that QQ^T = I?
- Is Q orthogonal?
- Is the matrix R upper triangular? (Hint: the Matlab functions tril or triu can help, so you don't have to look at all those numbers.)
- Is it true that A = QR?

(d) Compute the QR factorization of the following matrix.

A = [ ]

- Is it true that Q^T Q = I?
- Is it true that QQ^T = I?
- Is Q orthogonal?
- Is the matrix R upper triangular?
- Is it true that A = QR?

Exercise 3: One step in the Gram-Schmidt process avoids dividing by zero by checking that r_kk > 0. This is, in fact, very poor practice! The reason is that r_kk might be of roundoff size. Find the matrices Q and R for the matrix

A = [ *eps ]

Are the columns of your Q matrix orthonormal? (Recall that eps is the smallest number you can add to 1.0 and change its value.) The solution to this problem is more art than science, and is beyond the scope of this lab. One feasible way to fix the problem would be to replace zero with, say, eps*max(max(R)).
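A convenient way to carry out the orthogonality and triangularity checks in Exercise 2(c), and to diagnose the failure in Exercise 3, is to reduce each question to a single norm. The sketch below assumes the gs_factor function you wrote in Exercise 2(a) is on your Matlab path:

A = rand(100,100);
[Q, R] = gs_factor(A);             % your factorization from Exercise 2(a)
norm(Q'*Q - eye(100), 'fro')       % near zero if the columns of Q are orthonormal
norm(Q*Q' - eye(100), 'fro')       % near zero if Q is truly orthogonal
norm(tril(R,-1), 'fro')            % exactly zero if R is upper triangular
norm(A - Q*R, 'fro')               % near zero if the factorization reproduces A

For a full-rank random matrix, each of these numbers should be roundoff-sized.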

6 Householder Matrices

It turns out that it is possible to take a given vector d and find a matrix H so that the product Hd is proportional to the vector e_1 = (1, 0, ..., 0)^T. That is, it is possible to find a matrix H that zeros out all but the first entry of d. The matrix is given by

v = (d - ||d|| e_1) / ||d - ||d|| e_1||
H = I - 2vv^T                                            (1)

Note that v^T v is a scalar and vv^T is a matrix. To see why Equation (1) is true, denote d = (d_1, d_2, ...)^T and do the following calculation.

Hd = d - 2 (d - ||d|| e_1)(d - ||d|| e_1)^T d / [ (d - ||d|| e_1)^T (d - ||d|| e_1) ]
   = d - 2 (||d||^2 - ||d|| d_1) (d - ||d|| e_1) / ( 2||d||^2 - 2 d_1 ||d|| )
   = d - (d - ||d|| e_1)
   = ||d|| e_1

You can find further discussion in Atkinson, page 601, and in several places on the web, for example in a Planet Math (planetmath.org) encyclopedia article.

Atkinson, on page 611, describes a way to use this idea to take any column of a matrix and make those entries below the diagonal entry be zero. Repeating this sequentially for each column will result in an upper triangular matrix. Atkinson's discussion can be summarized in the following way. Start with a column vector of length n

b = [ c ]
    [ d ]

where c is a column vector of length k-1 and d is a column vector of length n-k+1. The objective is to construct a matrix H with the property that

Hb = [ c ]
     [ * ]
     [ 0 ]
     [ : ]
     [ 0 ]

where the k-th component of the product, (Hb)_k, is non-zero and all components below it are zero. The Householder matrix H will be constructed from a vector

w = [ 0 ]
    [ v ]

where the vector v is the same size as d. The algorithm for constructing v, and hence w, can be summarized as follows.

1. Set alpha = +-||d||, with signum(alpha) = -signum(d_1). (Atkinson's Equations (9.3.5) and (9.3.9).) (The sign is arbitrary and is chosen to minimize roundoff errors.)
2. Set v_1 = sqrt( (1/2)(1 - d_1/alpha) ). (Atkinson's Equation (9.3.8).)

3. Set p = -alpha * v_1. (Atkinson's Equation (9.3.7).)
4. Set v_j = d_j / (2p) for j = 2, 3, .... (Atkinson's Equation (9.3.10).)
5. Set w = [ 0 ]
           [ v ]
6. Set H = I - 2ww^T.

In the following exercise you will write a Matlab function to implement this algorithm.

Exercise 4:

(a) Start a function m-file named householder.m with signature and beginning lines

function H = householder(b, k)
% H = householder(b, k)
% more comments
% your name and the date

n = length(b);
d(:,1) = b(k:n);
if d(1) >= 0
  alpha = -norm(d);
else
  alpha = norm(d);
end

(b) Add comments at the beginning, being sure to explain the use of the variable k.

(c) In the case that alpha is exactly zero, the algorithm will fail because of a division by zero. For the case that alpha is zero, return H = eye(n); otherwise, complete the above algorithm.

(d) Test your function on the vector b=[10;9;8;7;6;5;4;3;2;1]; with k=1 and again with k=5. Check that H is orthogonal and that H*b has zeros in positions below k. (Hint: orthogonality of H is equivalent to the vector w being a unit vector. If H is not orthogonal, check w.)

This householder function can be used for the QR factorization of a matrix by proceeding through a series of partial factorizations A = Q_k R_k, where Q_0 is the identity matrix and R_0 is the matrix A. When we begin the k-th step of factorization, our factor R_{k-1} is only upper triangular in columns 1 to k-1. Our goal on the k-th step is to find a better factor R_k that is upper triangular through column k. If we can do this process n-1 times, we're done.

Suppose, then, that we've partially factored the matrix A, up to column k-1. In other words, we have a factorization

A = Q_{k-1} R_{k-1}

for which the matrix Q_{k-1} is orthogonal, but for which the matrix R_{k-1} is only upper triangular for the first k-1 columns. To proceed from our partial factorization, we're going to consider taking a Householder matrix H and inserting it into the factorization as follows:

A = Q_{k-1} R_{k-1} = Q_{k-1} H^T H R_{k-1}

Then, if we define

Q_k = Q_{k-1} H^T
R_k = H R_{k-1}

it will again be the case that

A = Q_k R_k

and we're guaranteed that Q_k is still orthogonal. Our construction of H guarantees that R_k is actually upper triangular all the way through column k.

Exercise 5: In this exercise you will see how the Householder matrix can be used in the steps of an algorithm that passes through a matrix, one column at a time, and turns the bottom of each column into zeros.

(a) Define the matrix A to be the transpose of the Frank matrix of order 5.

(b) Compute H1, the Householder matrix that knocks out the subdiagonal entries in column 1 of A, and then compute A1=H1*A. Are there any non-zero values below the diagonal in the first column?

(c) Now let's compute H2, the Householder matrix that knocks out the subdiagonal entries in column 2 of A1, and compute A2=H2*A1. This matrix should have subdiagonal zeros in column 2.

You should be convinced that you can zero out any subdiagonal column you want. Since zeroing out one subdiagonal column doesn't affect the previous columns, you can proceed sequentially to zero out all subdiagonal columns.

7 Householder Factorization

For a rectangular M by N matrix A, the Householder QR factorization has the form

A = QR

where the matrix Q is M x M (hence square and truly orthogonal) and the matrix R is M x N and upper triangular (or upper trapezoidal if you want to be more accurate).

If the matrix A is not square, then this definition is different from the Gram-Schmidt factorization we discussed before. The obvious difference is the shape of the factors. Here, it's the Q matrix that is square. The other difference, which you'll have to take on faith, is that the Householder factorization is generally more accurate (smaller arithmetic errors) and easier to define compactly.

Householder QR Factorization Algorithm:

Q = I;
R = A;
for k = 1:min(m,n)
  Construct the Householder matrix H for column k of the matrix R;
  Q = Q * H';
  R = H * R;
end

Exercise 6:

(a) Write an m-file h_factor.m. It should have the form

function [ Q, R ] = h_factor ( A )
% [ Q, R ] = h_factor ( A )
% more comments
% your name and the date

Use the routine householder.m in order to compute the H matrix that you need at each step.

(b) A simple test is the matrix

A = [ ]

You should see that simply interchanging the first and second columns of A turns it into an upper triangular matrix, so you can guess what the Householder matrices will be and what the solution is. Be sure that Q*R is A.

(c) Test your code by computing the QR factorization of the Hilbert matrix of order 4. Compare your results with those of the Matlab qr function.

8 The QR Method for Linear Systems

If we have computed the Householder QR factorization of a matrix without encountering any singularities, then it is easy to solve linear systems. We use the property of the Q factor that Q^T Q = I:

Ax = b
QRx = b
Q^T Q R x = Q^T b
Rx = Q^T b                                               (2)

so all we have to do is form the new right hand side Q^T b and then solve the upper triangular system. And that's easy because it's the same as the u_solve step of our old PLU solution algorithm from lab 6.

Exercise 7: Make a copy of your file u_solve.m from lab 6 (the upper-triangular solver), or get a copy of my version. Write a file called h_solve.m. It should have the signature

function x = h_solve ( Q, R, b )
% x = h_solve ( Q, R, b )
% more comments
% your name and the date

The matrix R is upper triangular, so you can use u_solve to solve (2) above. Assume that the QR factors come from the h_factor routine. Set up your code to compute Q^T b and then solve the upper triangular system Rx = Q^T b using u_solve. When you think your solver is working, test it out on a system as follows:

n = 5
A = frank ( n )
x = [ 1 : n ]'
b = A * x
[ Q, R ] = h_factor ( A )
x2 = h_solve ( Q, R, b )
norm(x - x2)    % should be close to zero
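As an independent cross-check on h_factor and h_solve, the same test system can be solved with Matlab's built-in qr and backslash. This is only a sketch of such a check; gallery('frank',n) is used here in case a separate frank.m is not on your path:

n = 5;
A = gallery('frank', n);           % built-in version of the Frank matrix
x = (1:n)';
b = A * x;
[Q, R] = qr(A);                    % Matlab's Householder-based QR factorization
x2 = R \ (Q' * b);                 % backslash does the back substitution for triangular R
norm(x - x2)                       % should again be close to zero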

9 The singular value decomposition

One of the most important matrix product decompositions is the singular value decomposition. According to Atkinson's Theorem 7.5 (page 478), if A is an n x m matrix, then there are matrices U, S and V, with

A = USV^T,                                               (3)

where U is n x n and unitary, and V is m x m and unitary. Recall that unitary matrices are like orthogonal matrices but they may have complex entries; unitary matrices with all real entries are orthogonal matrices. The matrix S is n x m and is of the form

S_kl = sigma_k  if k = l,  and  S_kl = 0  otherwise,

with

sigma_1 >= sigma_2 >= sigma_3 >= ... >= sigma_r > 0,  and  sigma_{r+1} = ... = 0.

The number r is the rank of the matrix A and is necessarily no larger than min(m, n). The singular values, sigma_k, are real and positive, and their squares are the eigenvalues of the Hermitian matrix A^H A.

The singular value decomposition provides much useful information about the matrix A. This information includes:

- Information about the range and nullspace of A regarded as a linear operator, including the dimensions of these spaces.
- A method of solving non-square matrix systems.
- A method of compressing the matrix A.

You will see these facts illustrated below.

Atkinson gives a proof of the SVD, but it does not provide a numerically useful algorithm. Further, the obvious algorithm involves finding the eigenvalues of A^H A, and this is not really practical because of roundoff difficulties. Matlab includes a function called svd with signature [U S V]=svd(A) to compute the singular value decomposition, and we will be using it in the following exercises. This function uses the Lapack subroutine dgesvd, so if you were to need it in a Fortran or C program, it would be available by linking against the Lapack library.

Supposing that r is the smallest integer so that sigma_r = 0, then the SVD implies that the rank of the matrix is r-1. Similarly, if the matrix A is n x n, then the dimension of the nullspace of A is n-r+1. The first r-1 columns of U are an orthonormal basis of the range of A, and the last n-r+1 columns of V are an orthonormal basis for the nullspace of A. Of course, in the presence of roundoff some judgement must be made as to what constitutes a zero singular value, but the SVD is the best (if expensive) way to discover the rank and nullity of a matrix. This is especially true when there is a large gap between the smallest of the large singular values and the largest of the small singular values.

The SVD can also be used to solve a matrix system. Assuming that the matrix is non-singular, all singular values are strictly positive, and the SVD can be used to solve a system:

b = Ax
b = USV^H x
U^H b = SV^H x
S^+ U^H b = V^H x
V S^+ U^H b = x                                          (4)

where S^+ is the diagonal matrix whose entries are 1/sigma_k. It turns out that Equation (4) is not a good way to solve nonsingular systems, but it is an excellent way to solve systems that are singular or almost singular!

To this end, the matrix S^+ is the matrix with the same shape as S^T whose elements are given as

S^+_kl = 1/sigma_k  for k = l and sigma_k > 0,  and  S^+_kl = 0  otherwise.

Further, if A is close to singular, a similar definition, but with diagonal entries 1/sigma_k only for sigma_k greater than some tolerance epsilon, can work very nicely. In these latter cases, the solution is the least-squares best-fit solution, and the matrix A^+ = V S^+ U^H is called the Moore-Penrose pseudo-inverse of A.

Exercise 8: In this exercise you will use the Matlab svd function to solve for the best-fit linear function of several variables through a set of points. This is an example of solving a rectangular system.

Imagine that you have been given many samples of related data involving several variables, and you wish to find a linear relationship among the variables that approximates the given data in the best-fit sense. This can be written as a rectangular system

a_1 d_11 + a_2 d_12 + a_3 d_13 + ... + a_n d_1n = 0
a_1 d_21 + a_2 d_22 + a_3 d_23 + ... + a_n d_2n = 0
a_1 d_31 + a_2 d_32 + a_3 d_33 + ... + a_n d_3n = 0
...

where d_ij represents the data and a_i the desired coefficients. The best-fit solution can be found using the SVD. As we have done several times before, we will first generate a data set with a known solution and then use svd to recover the known solution. Place Matlab code for the following steps into a script m-file called exer8.m.

(a) Generate a data set consisting of twenty samples of each of four variables using the following Matlab code.

N=20;
d1=rand(N,1);
d2=rand(N,1);
d3=rand(N,1);
d4=4*d1-3*d2+2*d3-1;

It should be obvious that these vectors satisfy the equation 4d_1 - 3d_2 + 2d_3 - d_4 = 1, but, if you were solving a real problem, you would not be aware of this equation, and it would be your objective to discover it, i.e., to find the coefficients in the equation.

(b) Introduce small errors into the data. (The rand function produces numbers between 0 and 1.)

EPSILON=1.e-5;
d1=d1.*(1+EPSILON*rand(N,1));
d2=d2.*(1+EPSILON*rand(N,1));
d3=d3.*(1+EPSILON*rand(N,1));
d4=d4.*(1+EPSILON*rand(N,1));

(c) Regard the four vectors d1,d2,d3,d4 as data given to you, and construct the matrix consisting of the four column vectors, A=[d1,d2,d3,d4].

(d) Use the Matlab svd function: [U S V]=svd(A);

Please include the four non-zero values of S in your summary, but not the matrices U and V.

(e) Just to confirm that you have everything right, compute norm(A-U*S*V','fro'). This number should be roundoff-sized.

(f) Construct the matrix S^+ (call it Splus) so that

Splus(k,l) = 1/S(k,k)  for k = l and S(k,k) > 0,  and  Splus(k,l) = 0  otherwise.

(g) We are seeking the coefficient vector x such that x_1 d_1 + x_2 d_2 + x_3 d_3 + x_4 d_4 = 1. Find this vector by setting b=ones(N,1) and then using Equation (4) above. Include your solution with your summary. It should be close to the known solution x=[4;-3;2;-1].

Sometimes the data you are given turns out to be deficient because the variables supposed to be independent are actually related. This dependency will cause the coefficient matrix to be singular or nearly singular. Dealing with dependencies of this sort is one of the greatest difficulties in using least-squares fitting methods. In the following exercise you will construct a deficient set of data and see that the singular value decomposition can find the solution nonetheless.

Exercise 9:

(a) Copy your m-file exer8.m to exer9.m. Replace the line d3=rand(N,1); with the line d3=d1+d2; This introduces nonuniqueness into the solution for the coefficients.

(b) After using svd to find U, S, and V, print S(1,1), S(2,2), S(3,3), S(4,4). You should see that S(4,4) is substantially smaller than the others. Treat it as if it were zero and find the coefficient vector x as before. The solution you found is probably not x=[4;-3;2;-1].

(c) Look at V(:,4). This is the vector associated with the singular value that you set to zero. This vector should be associated with the extra relationship d1+d2-d3=0 that was introduced in the first step. What multiple of V(:,4) can be added to your solution to yield [4;-3;2;-1]?

In the following exercise you will see how the singular value decomposition can be used to compress a graphical figure by representing the figure as a matrix and then using the singular value decomposition to find the closest matrix of lower rank to the original. This approach can form the basis of efficient compression methods.

Exercise 10:

(a) This photo from NASA is one of the pictures the Mars rovers sent back. Download it.

(b) Matlab provides various image processing utilities. In this case, read the image in and convert it to a matrix of real ("double") numbers using the following command.

mars=double(imread('mars.jpg'));

The variable mars will be a matrix of integers between 0 and 255.

(c) Matlab has the notion of a colormap that determines how the values in the matrix will be colored. Use the command

colormap('bone');

to set a reasonable colormap. Now you can show the picture with the command

image(mars)

Please do not send me this plot.

(d) Use the command [U S V]=svd(mars); to perform the singular value decomposition. Do not include these matrices with your summary!

(e) Plot the singular values: plot(diag(S)). You should see that the final 300 singular values are essentially zero, while the singular values larger than 50 are still noticeable. Please send me this plot with your summary.

(f) Construct the three new matrices

mars200=U(:,1:200)*S(1:200,1:200)*V(:,1:200)';
mars100=U(:,1:100)*S(1:100,1:100)*V(:,1:100)';
mars25=U(:,1:25)*S(1:25,1:25)*V(:,1:25)';

These matrices are of lower rank than the mars matrix, and can be stored in a more efficient manner. Do not include these matrices with your summary!

(g) Plot all three of these matrices as images, along with the original, using the following commands:

figure(1); colormap('bone'); image(mars);    title('Original mars');
figure(2); colormap('bone'); image(mars25);  title('mars with 25 singular values');
figure(3); colormap('bone'); image(mars100); title('mars with 100 singular values');
figure(4); colormap('bone'); image(mars200); title('mars with 200 singular values');

You should see essentially no difference between the original and the one with 200 singular values, while you should see degradation of the image in the case of 25 singular values. Please only include the 25 singular value case with your summary.
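The truncation idea in Exercise 10 can be tried on any matrix, not just an image. The following minimal sketch (a synthetic example of my own choosing, not part of the lab) truncates the SVD of a smooth test matrix and measures how well the low-rank version reproduces it:

A = peaks(100) + 0.01*randn(100);            % a smooth 100 x 100 test matrix plus a little noise
[U, S, V] = svd(A);
k = 10;                                      % keep only the 10 largest singular values
Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';      % best rank-k approximation of A
norm(A - Ak) / norm(A)                       % small, because the singular values decay quickly

Because the 2-norm error of the best rank-k approximation equals sigma_{k+1}, the quality of a compressed matrix can be read directly off the plot of singular values.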
