LAB 2: Orthogonal Projections, the Four Fundamental Subspaces, QR Factorization, and Inconsistent Linear Systems

Math 550A MATLAB Assignment #2 (Revised 8/14/10)

In this lab you will use MATLAB to study the following topics:
- Geometric aspects of vectors: norm, dot product, and orthogonal projection onto a line
- The four fundamental subspaces associated with a matrix
- The Gram-Schmidt orthogonalization algorithm and the QR matrix factorization
- Orthogonal projections onto subspaces
- The best approximate solution to an inconsistent linear system
- The method of least squares for fitting curves to data points

MATLAB Preliminaries

Tcodes: For this lab you will need the Teaching Codes grams.m, linefit.m, lsq.m, and partic.m. Before beginning work on the lab questions, copy these codes from the Math 550 course web page to your directory.

Lab Write-up: After opening your diary file, type the comment line

    % Math 550 MATLAB Lab Assignment #2

at the MATLAB prompt. Type format compact so that your diary file will not have unnecessary blank lines. Put labels to mark the beginning of your work on each part of each question, so that your edited lab write-up has the format

    % Question 1 (a)
    ...
    % Question 1 (b)
    ...

and so on. Be sure to answer all the questions in the lab assignment, and insert comments in your diary file as you work through it.

Random Seed: When you start your MATLAB session, initialize the random number generator by typing

    rand('seed', abcd)

where abcd is the last four digits of your Student ID number. This ensures that you generate your own particular random vectors and matrices. BE SURE TO INCLUDE THIS LINE IN YOUR LAB WRITE-UP.

Question 1. Norm, Dot Product, and Orthogonal Projection onto a Line

Generate random vectors u, v ∈ R³ by u = rvect(3), v = rvect(3). Use these vectors in the following.

(a) The norm ‖u‖ of a vector u is calculated by the MATLAB command norm(u). The triangle inequality asserts that for all vectors u and v,

    ‖u + v‖ ≤ ‖u‖ + ‖v‖.

Check this inequality in MATLAB for your particular vectors.
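For example, the check in part (a) might look like the following (a minimal sketch; it assumes the teaching code rvect.m is on your MATLAB path, and the displayed values will depend on your random seed):

    u = rvect(3); v = rvect(3);   % random column vectors in R^3
    lhs = norm(u + v)             % length of the sum
    rhs = norm(u) + norm(v)       % sum of the lengths
    lhs <= rhs                    % returns 1 (true) when the triangle inequality holds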

(b) The dot product u · v is calculated in MATLAB by u'*v when u and v are column vectors of the same size. The absolute value |t| of a number t is calculated by abs(t). The Cauchy-Schwarz inequality asserts that for all vectors u and v,

    |u · v| ≤ ‖u‖ ‖v‖.

Check this inequality in MATLAB for your particular vectors.

(c) The orthogonal projection of the vector v onto the line L (the one-dimensional subspace) spanned by the vector u is

    w = ((u · v)/‖u‖²) u

(see Figure 3.5 on page 152 of Strang's text). Use MATLAB to calculate w for your vectors. Two vectors are orthogonal if their dot product is zero. Verify in MATLAB that the vector z = v − w is orthogonal to u. (If the dot product is not exactly zero but is a very small number, of size 10⁻¹³ for example, then the vectors are considered orthogonal for numerical purposes.)

(d) The formula for w in (c) can also be written as a matrix-vector product. Use MATLAB to obtain the matrix

    P = u*inv(u'*u)*u'

(note carefully the punctuation and the order of the factors in this formula). Explain why P is a 3 × 3 matrix, although u is a vector. Verify in MATLAB that P*v is the vector w for your u and v. Write out (by hand) an algebraic justification for the equality w = Pv in general, using the properties of matrix multiplication.

Question 2. The Four Fundamental Subspaces

(a) Generate a random 3 × 2 matrix B = rmat(3,2). Then form the 3 × 4 matrix A = [B 2*B]. The MATLAB function orth will produce an orthonormal basis for the column space Col(A). Let C = orth(A). Verify that the columns of C are an orthonormal set of vectors by calculating C'*C. What is the rank of A?

(b) What is the dimension of the null space Null(A)? The MATLAB function null will produce an orthonormal basis for Null(A). Let N = null(A). Verify that the columns of N are an orthonormal set of vectors by calculating N'*N. Also verify that A*N = 0, so each column of N is in the null space of A.

(c) What is the dimension of the column space of Aᵀ? What is the dimension of the null space of Aᵀ? Let CT = orth(A') and NT = null(A') be the matrices for the column space and the null space of Aᵀ. Use the theory of the Four Fundamental Subspaces associated with A to determine which of the following matrices must be zero: C'*NT, C*N', N'*CT, N*CT'. Now check your answer in MATLAB.

Question 3. Gram-Schmidt Orthogonalization

Generate three random vectors in R³ by

    u1 = rvect(3), u2 = rvect(3), u3 = rvect(3)

Use these vectors in the following.

(a) Since the set of vectors {u1, u2, u3} is random, it should be linearly independent. Use MATLAB to check this by calculating the rank of a suitable matrix. For the same reason, these vectors are almost certainly not mutually orthogonal. Check this with MATLAB.
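The checks in part (a) might be done as follows (a minimal sketch; rvect.m is again assumed to be on your path, and your random vectors will give different numbers):

    U = [u1, u2, u3];           % put the random vectors into the columns of a 3 x 3 matrix
    rank(U)                     % equals 3 exactly when {u1, u2, u3} is linearly independent
    u1'*u2, u1'*u3, u2'*u3      % pairwise dot products: almost surely all nonzero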

(b) Now use these vectors to obtain an orthogonal basis for R³, following the Gram-Schmidt algorithm:

    v1 = u1, v2 = u2 - ((v1'*u2)/(v1'*v1))*v1

Note that the denominator in this formula has been written as the dot product v1'*v1 instead of ‖v1‖², since this avoids calculating a square root. The vector subtracted from u2 to obtain v2 is the projection of u2 onto the line spanned by v1, as in Question 1. Check that the vectors v1 and v2 are mutually orthogonal (within a negligible numerical error). Now set

    v3 = u3 - ((v1'*u3)/(v1'*v1))*v1 - ((v2'*u3)/(v2'*v2))*v2

(use the up-arrow key to edit the previous command). The vectors subtracted from u3 to obtain v3 are the projections of u3 onto the lines spanned by v1 and by v2, as in Question 1. Check that the vectors v1, v2, v3 are mutually orthogonal (within a negligible numerical error).

(c) Finally, rescale the vectors v1, v2, v3 to obtain an orthonormal basis for R³:

    w1 = v1/norm(v1), w2 = v2/norm(v2), w3 = v3/norm(v3)

Check (in MATLAB) that ‖wi‖ = 1 for i = 1, 2, 3.

(d) A = QR Factorization: Set

    A = [u1, u2, u3], Q = [w1, w2, w3], R = Q'*A

Which entries of R do you know must be zero for any random choice of vectors? Verify in MATLAB that A = Q*R. Calculate

    R(1,1)*w1
    R(1,2)*w1 + R(2,2)*w2
    R(1,3)*w1 + R(2,3)*w2 + R(3,3)*w3

How are these three vectors related to the original vectors u1, u2, u3?

Question 4. Orthogonal Projections

Generate three random vectors u1, u2, u3 ∈ R⁵ and the matrix A with these vectors as columns:

    u1 = rvect(5); u2 = rvect(5); u3 = rvect(5); A = [u1, u2, u3]

Use these vectors and this matrix in the following.

(a) Let W = Col(A) be the subspace of R⁵ spanned by {u1, u2, u3}. What is dim(W)? Justify your answer by an appropriate rank calculation using MATLAB. The teaching code grams.m carries out the steps of the Gram-Schmidt algorithm, just as you did step-by-step in Question 3. Calculate

    Q = grams(A); w1 = Q(:,1), w2 = Q(:,2), w3 = Q(:,3)

in MATLAB. Calculate Q'*Q. Why does the answer tell you that {w1, w2, w3} is an orthonormal set?

(b) The orthogonal projection P from R⁵ onto the subspace W is given by the 5 × 5 matrix

    P = w1*w1' + w2*w2' + w3*w3'

(note that w1 is a 5 × 1 column vector; hence w1' is a 1 × 5 row vector and w1*w1' is a 5 × 5 matrix). Verify in MATLAB that P − Pᵀ = 0 and P² − P = 0 (these are the properties that characterize an orthogonal projection matrix).

(c) Orthogonal Decomposition v = w + z: Generate another random vector v = rvect(5). Set

    w = P*v, z = v - w

Verify in MATLAB that w'*z = 0. This shows that w is in the subspace W and that z is the component of v perpendicular to W (see Figure 3.8 on page 162 of Strang's text).
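The verifications in parts (b) and (c) can be carried out along these lines (a minimal sketch, assuming the vectors w1, w2, w3 from part (a); quantities of size around 10⁻¹⁵ count as zero here):

    P = w1*w1' + w2*w2' + w3*w3';   % 5 x 5 projection matrix onto W
    norm(P - P')                    % symmetry: negligibly small
    norm(P*P - P)                   % idempotence: negligibly small
    v = rvect(5);
    w = P*v; z = v - w;             % orthogonal decomposition v = w + z
    w'*z                            % negligibly small: w is perpendicular to z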

(d) The projection matrix P onto the subspace W can be calculated directly from the matrix A, without first orthogonalizing the columns of A as in Question 3. Define

    PW = A*inv(A'*A)*A'

(see formula (3) on page 156 of Strang's text). Check in MATLAB that norm(PW - P) = 0 (up to negligible numerical error).

Question 5. Approximate Solution to an Inconsistent Linear System

For this question use the 5 × 3 matrix A and the vector v ∈ R⁵ from Question 4.

(a) Approximate Solution to Ax = v: Let v = w + z be the orthogonal decomposition from Question 4(c). Show that

(i) the equation Ax = v is inconsistent (has no solutions);
(ii) the equation Ax = w is consistent.

(Calculate the rank of the augmented matrix in each case.) Now solve equation (ii) by

    xls = inv(A'*A)*A'*v

(see equation (2) on page 162 of Strang's text). Check in MATLAB that A*xls = w (up to negligible numerical error). Denote the vector xls by x̂. It is called the least squares approximate solution to the (unsolvable) equation Ax = v.

(b) Closest Vector Property: Generate a random vector y = rvect(3) and verify in MATLAB that P*A*y = A*y. This shows that Ay ∈ W. Since Ax̂ = w, the vector Ax̂ is the vector in W that is closest to v. Thus ‖Ay − v‖ is minimized by choosing y = x̂. This is the closest vector property (see page 161 of Strang), which is also called the least squares property, since ‖Ay − v‖² is the sum of the squares of the components of Ay − v. To illustrate this property, use the MATLAB norm function to verify that ‖Ax̂ − v‖ < ‖Ay − v‖.
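The calculations in this question might look like the following (a minimal sketch, assuming the matrix A, the vectors v and w, and the projection matrix P from Question 4; your numbers will depend on your seed):

    rank([A v]), rank([A w])          % almost surely 4 and 3: Ax = v is inconsistent, Ax = w is consistent
    xls = inv(A'*A)*A'*v;             % the least squares solution x-hat
    norm(A*xls - w)                   % negligibly small: A*x-hat equals w
    y = rvect(3);
    norm(A*xls - v) < norm(A*y - v)   % returns 1: x-hat minimizes the error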

Question 6. Fitting Curves to Data Points

(a) Generating and Plotting Linear Data: Define a column vector of ten equally spaced t values:

    t = [1:10]'

Now generate a column vector of the corresponding values of the linear function y = 4 + t and plot it by

    y = 4 + t; linefit(t,y)

(Note the semicolon that suppresses the display of the y data.) MATLAB should open a new window in which this line is plotted. Notice that the line passes exactly through each data point, and the best-fit equation is the given function 4 + t. Print this graph by clicking on File in the figure toolbar and then clicking on Print. Include the printed copy in your lab write-up.

(b) Linear Data with Random Noise: Now add some random noise to each y data value in part (a):

    y = 2 + 4*rand(10,1) + t; linefit(t,y)

These random data points don't lie on the line from part (a), even though they show the same general trend. The line that is plotted is the best-fitting line; it minimizes the mean square error between the y coordinates on the line and the data values. Notice how some data points are above the line and others are below it. The equation of the best line fitting the random data points is displayed above the graph. Print this graph (following the same procedure as in part (a)) and include the printed copy in your lab write-up.

(c) Fitting a Parabola to Data: Consider the following data:

    t :   5    10    15    20    25    30
    y : 140   290   560   910  1400  2000

The goal is to fit this data by a parabola y = C + Dt + Et² with the coefficients C, D, E chosen to minimize the least-squares error. Use MATLAB to define vectors t, y ∈ R⁶ from this data. Then define a 6 × 3 matrix A whose first column is all ones, whose second column is t, and whose third column contains the squares of the entries of t. You can get the third column by the command t.^2 (notice the period before the exponent). Now calculate the vector x ∈ R³ whose components C, D, E are the coefficients of the best-fitting parabola (see Section 3.3, Problem #25 on page 172 of Strang for an example, and also Question 5(a)). Finally, plot the data points and the least-squares parabola fitting these points by the commands

    figure; plot(t,y,'*'); hold on
    s = [0:0.1:30]; u = ones(301,1); C = [u s' (s.^2)']; plot(s, C*x)

You should get a figure with the six data points indicated by * and a smooth parabola passing very close to each data point. Label and print out the figure. (A complete reference sketch of this part appears at the end of this handout.) Calculate the relative least-squares error

    norm(y - A*x)/norm(y)

This should be less than 0.01 (a one-percent error).

Final Editing of Lab Write-up: After you have worked through all the parts of the lab assignment, edit your diary file. Include the MATLAB calculations, but remove errors that you made in entering commands and remove other material that is not directly related to the questions.
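For reference, here is how all of Question 6(c) might be assembled in one pass (a minimal sketch under the variable names used above, not a substitute for your own step-by-step write-up):

    t = [5 10 15 20 25 30]'; y = [140 290 560 910 1400 2000]';
    A = [ones(6,1) t t.^2];        % columns: 1, t, t.^2
    x = inv(A'*A)*A'*y;            % least squares coefficients C, D, E
                                   % (x = A\y computes the same thing more stably)
    figure; plot(t,y,'*'); hold on
    s = [0:0.1:30]; u = ones(301,1);
    C = [u s' (s.^2)'];            % evaluate 1, s, s^2 on a fine grid
    plot(s, C*x)                   % smooth least-squares parabola
    norm(y - A*x)/norm(y)          % relative error; should be below 0.01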