Math 416, Spring 2010 (March 4, 2010)
GRAM-SCHMIDT, THE QR-FACTORIZATION, ORTHOGONAL MATRICES


1. Recap

Yesterday we talked about several new, important concepts. The central theme connecting the ideas was orthogonality; more specifically, we covered:

- orthogonal complements;
- a big theorem that lets us write a vector as a sum of two vectors, one from a given subspace and the other from its orthogonal complement: specifically, for a subspace $V$ and a vector $\vec{x}$ there is a unique representation $\vec{x} = \vec{x}^{\parallel} + \vec{x}^{\perp}$ with $\vec{x}^{\parallel} \in V$ and $\vec{x}^{\perp} \in V^{\perp}$;
- a formula for computing $\vec{x}^{\parallel}$ using dot products (once we have an orthonormal basis);
- formally defining the map $\operatorname{proj}_V$ by $\operatorname{proj}_V(\vec{x}) = \vec{x}^{\parallel}$; and
- writing down a matrix which gives the transformation $\operatorname{proj}_V$ (this means that $\operatorname{proj}_V$ is a linear transformation): specifically, we said that if $\vec{u}_1, \ldots, \vec{u}_s$ is an orthonormal basis for $V$, then
$$\operatorname{proj}_V(\vec{x}) = \begin{pmatrix} \vec{u}_1 & \cdots & \vec{u}_s \end{pmatrix} \begin{pmatrix} \vec{u}_1 & \cdots & \vec{u}_s \end{pmatrix}^T \vec{x}.$$

2. Making orthonormal bases: the Gram-Schmidt Algorithm

We saw yesterday that orthonormal bases are handy to use; for instance, they make writing the formula for $\operatorname{proj}_V$ simple. Our goal today is to produce orthonormal bases. More specifically, if someone gives us a basis $\vec{v}_1, \ldots, \vec{v}_m$ of a space $V$, we want to manufacture an orthonormal basis for $V$ from it.

Example. Given the linearly independent collection $\{\vec{v}_1\}$, construct an orthonormal basis for $\operatorname{Span}(\vec{v}_1)$.

Solution. First notice that $\operatorname{Span}(\vec{v}_1)$ is 1-dimensional since $\{\vec{v}_1\}$ is a basis. Since we are looking for an orthonormal basis of a 1-dimensional space, we only have to find one vector, and orthonormality just means it should be unit length. So let's just scale the vector we've been given by its magnitude. Our orthonormal basis for $\operatorname{Span}(\vec{v}_1)$ is the vector
$$\vec{u}_1 = \frac{\vec{v}_1}{\|\vec{v}_1\|}.$$

Example. Given the linearly independent collection $\{\vec{v}_1, \vec{v}_2\}$, construct an orthonormal basis for $\operatorname{Span}(\vec{v}_1, \vec{v}_2)$.

Solution. Since we are working in a 2-dimensional space, we'll have to do a little more work than last time. We'll begin in the same way, by defining the first vector in our orthonormal basis by scaling the first vector in the given basis:
$$\vec{u}_1 = \frac{\vec{v}_1}{\|\vec{v}_1\|}.$$

The second vector in our orthonormal basis must be orthogonal to $\vec{u}_1$ and have unit length. To find a vector orthogonal to $\vec{u}_1$, we will use the decomposition $\vec{v}_2 = \vec{v}_2^{\parallel} + \vec{v}_2^{\perp}$, where the decomposition is relative to $\operatorname{Span}(\vec{u}_1)$. The vector $\vec{v}_2^{\perp}$ is orthogonal to $\vec{u}_1$ like we want, but it isn't quite unit length. If we scale it appropriately, though, it will be:
$$\vec{u}_2 = \frac{\vec{v}_2^{\perp}}{\|\vec{v}_2^{\perp}\|} = \frac{\vec{v}_2 - \vec{v}_2^{\parallel}}{\|\vec{v}_2 - \vec{v}_2^{\parallel}\|} = \frac{\vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1}{\|\vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1\|}.$$
The second equality I wrote follows from how we constructed $\vec{v}_2^{\perp}$ as the difference $\vec{v}_2 - \vec{v}_2^{\parallel}$. The third equality comes from expressing $\vec{v}_2^{\parallel}$ as a dot product in terms of the orthonormal basis $\{\vec{u}_1\}$ of $\operatorname{Span}(\vec{u}_1) = \operatorname{Span}(\vec{v}_1)$.

Example. Given the linearly independent collection $\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$, construct an orthonormal basis for $\operatorname{Span}(\vec{v}_1, \vec{v}_2, \vec{v}_3)$.

Solution. We're now in a 3-dimensional space, so we get to do a little more work. We'll begin as before, by defining the first vector in our orthonormal basis by scaling the first vector in the given basis:
$$\vec{u}_1 = \frac{\vec{v}_1}{\|\vec{v}_1\|}.$$
Also as before, we'll construct the second vector by writing $\vec{v}_2 = \vec{v}_2^{\parallel} + \vec{v}_2^{\perp}$ (the decomposition relative to $\operatorname{Span}(\vec{u}_1)$). Then we define
$$\vec{u}_2 = \frac{\vec{v}_2^{\perp}}{\|\vec{v}_2^{\perp}\|} = \frac{\vec{v}_2 - \vec{v}_2^{\parallel}}{\|\vec{v}_2 - \vec{v}_2^{\parallel}\|} = \frac{\vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1}{\|\vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1\|}.$$
Finally, for the third vector we need to find a vector orthogonal to $\operatorname{Span}(\vec{u}_1, \vec{u}_2) = \operatorname{Span}(\vec{v}_1, \vec{v}_2)$. For this we will write $\vec{v}_3 = \vec{v}_3^{\parallel} + \vec{v}_3^{\perp}$, the decomposition relative to the space $\operatorname{Span}(\vec{u}_1, \vec{u}_2) = \operatorname{Span}(\vec{v}_1, \vec{v}_2)$. Then we'll define
$$\vec{u}_3 = \frac{\vec{v}_3^{\perp}}{\|\vec{v}_3^{\perp}\|} = \frac{\vec{v}_3 - \vec{v}_3^{\parallel}}{\|\vec{v}_3 - \vec{v}_3^{\parallel}\|} = \frac{\vec{v}_3 - (\vec{u}_1 \cdot \vec{v}_3)\,\vec{u}_1 - (\vec{u}_2 \cdot \vec{v}_3)\,\vec{u}_2}{\|\vec{v}_3 - (\vec{u}_1 \cdot \vec{v}_3)\,\vec{u}_1 - (\vec{u}_2 \cdot \vec{v}_3)\,\vec{u}_2\|}.$$

Example. Let's compute a specific example. We'll try to construct an orthonormal basis for the span of two given vectors $\vec{v}_1$ and $\vec{v}_2$ in $\mathbb{R}^2$. From above we write
$$\vec{u}_1 = \frac{\vec{v}_1}{\|\vec{v}_1\|}.$$

For the second vector, we have
$$\vec{u}_2 = \frac{\vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1}{\|\vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1\|} = \frac{\text{the above junk}}{\|\text{above junk}\|},$$
and evaluating this with the entries of the given vectors produces the second vector in our orthonormal basis.

The process we have outlined above can be followed for any number of initial vectors. In fact, by going through the cases we have, we should be able to see how the algorithm works in general. The process we have followed is called the Gram-Schmidt algorithm.

Theorem (Gram-Schmidt Algorithm). Suppose we are given a collection $\{\vec{v}_1, \ldots, \vec{v}_s\}$ which is linearly independent. Then if we write $\vec{v}_i = \vec{v}_i^{\parallel} + \vec{v}_i^{\perp}$ (the decomposition relative to the space $\operatorname{Span}(\vec{v}_1, \ldots, \vec{v}_{i-1})$), the collection of vectors $\{\vec{u}_1, \ldots, \vec{u}_s\}$ defined by
$$\vec{u}_i = \frac{\vec{v}_i^{\perp}}{\|\vec{v}_i^{\perp}\|}$$
is an orthonormal basis for $\operatorname{Span}(\vec{v}_1, \ldots, \vec{v}_s)$. We can calculate the $\vec{u}_i$ iteratively as
$$\vec{u}_1 = \frac{\vec{v}_1}{\|\vec{v}_1\|} \quad \text{and} \quad \vec{u}_i = \frac{\vec{v}_i - (\vec{u}_1 \cdot \vec{v}_i)\,\vec{u}_1 - \cdots - (\vec{u}_{i-1} \cdot \vec{v}_i)\,\vec{u}_{i-1}}{\|\vec{v}_i - (\vec{u}_1 \cdot \vec{v}_i)\,\vec{u}_1 - \cdots - (\vec{u}_{i-1} \cdot \vec{v}_i)\,\vec{u}_{i-1}\|}.$$

The algorithm actually provides a factorization of the matrix whose columns are the initial linearly independent vectors. This factorization is called the QR factorization of the matrix.

Theorem (QR factorization). Suppose that $A$ is a matrix whose columns $\vec{v}_1, \ldots, \vec{v}_s$ are linearly independent. Then $A = QR$, where $Q$ is the matrix
$$Q = \begin{pmatrix} \vec{u}_1 & \cdots & \vec{u}_s \end{pmatrix}$$
and $R$ is the matrix
$$R = \begin{pmatrix} \|\vec{v}_1^{\perp}\| & \vec{u}_1 \cdot \vec{v}_2 & \vec{u}_1 \cdot \vec{v}_3 & \cdots & \vec{u}_1 \cdot \vec{v}_s \\ 0 & \|\vec{v}_2^{\perp}\| & \vec{u}_2 \cdot \vec{v}_3 & \cdots & \vec{u}_2 \cdot \vec{v}_s \\ 0 & 0 & \|\vec{v}_3^{\perp}\| & \cdots & \vec{u}_3 \cdot \vec{v}_s \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \|\vec{v}_s^{\perp}\| \end{pmatrix} = \begin{pmatrix} \vec{u}_1 \cdot \vec{v}_1 & \vec{u}_1 \cdot \vec{v}_2 & \cdots & \vec{u}_1 \cdot \vec{v}_s \\ \vec{u}_2 \cdot \vec{v}_1 & \vec{u}_2 \cdot \vec{v}_2 & \cdots & \vec{u}_2 \cdot \vec{v}_s \\ \vdots & & & \vdots \\ \vec{u}_s \cdot \vec{v}_1 & \vec{u}_s \cdot \vec{v}_2 & \cdots & \vec{u}_s \cdot \vec{v}_s \end{pmatrix}.$$

Proof. The fact that we can write the matrix $R$ in two ways just comes from the fact that $\|\vec{v}_i^{\perp}\| = \vec{u}_i \cdot \vec{v}_i$ and that $\vec{u}_i \cdot \vec{v}_j = 0$ when $i > j$ (since $\vec{u}_i$ is orthogonal to $\operatorname{Span}(\vec{v}_1, \ldots, \vec{v}_{i-1})$ by construction).

Using the first expression for the matrix $R$, we'll check that the matrices on the left and right hand sides are the same column by column. For this, note that the $i$th column of the product (i.e., of the right hand side) is just
$$\begin{pmatrix} \vec{u}_1 & \cdots & \vec{u}_s \end{pmatrix} \begin{pmatrix} \vec{u}_1 \cdot \vec{v}_i \\ \vdots \\ \vec{u}_{i-1} \cdot \vec{v}_i \\ \|\vec{v}_i^{\perp}\| \\ 0 \\ \vdots \\ 0 \end{pmatrix} = (\vec{u}_1 \cdot \vec{v}_i)\,\vec{u}_1 + \cdots + (\vec{u}_{i-1} \cdot \vec{v}_i)\,\vec{u}_{i-1} + \|\vec{v}_i^{\perp}\|\,\vec{u}_i = \underbrace{(\vec{u}_1 \cdot \vec{v}_i)\,\vec{u}_1 + \cdots + (\vec{u}_{i-1} \cdot \vec{v}_i)\,\vec{u}_{i-1}}_{\vec{v}_i^{\parallel}} + \underbrace{\|\vec{v}_i^{\perp}\|\,\vec{u}_i}_{\vec{v}_i^{\perp}} = \vec{v}_i.$$
But this is just the $i$th column of the left hand side, so our matrices must be equal.

3. Orthogonal Transformations and Matrices

Definition 3.1. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ is called orthogonal if it preserves the length of vectors: $\|T(\vec{x})\| = \|\vec{x}\|$ for all $\vec{x} \in \mathbb{R}^n$. A matrix is called orthogonal if it corresponds to an orthogonal linear transformation.

Notice that we have used the adjective "orthogonal" before in the context of a pair of vectors. Saying that a matrix is orthogonal, though, means something different, so be careful to keep these two concepts separate in your mind!

Example. Any rotation in $\mathbb{R}^2$ is an orthogonal transformation, since
$$\left\| \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right\| = \sqrt{(\cos\theta\, x_1 - \sin\theta\, x_2)^2 + (\sin\theta\, x_1 + \cos\theta\, x_2)^2}$$
$$= \sqrt{\cos^2\theta\, x_1^2 - 2\sin\theta\cos\theta\, x_1 x_2 + \sin^2\theta\, x_2^2 + \sin^2\theta\, x_1^2 + 2\sin\theta\cos\theta\, x_1 x_2 + \cos^2\theta\, x_2^2} = \sqrt{x_1^2 + x_2^2} = \left\| \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right\|.$$
This isn't surprising given our intuition of what rotations do to vectors in $\mathbb{R}^2$.

Example. If $V$ is a subspace of $\mathbb{R}^n$, then we define $\operatorname{Ref}_V(\vec{x}) = \vec{x}^{\parallel} - \vec{x}^{\perp}$. This is orthogonal because
$$\|\operatorname{Ref}_V(\vec{x})\|^2 = \|\vec{x}^{\parallel} - \vec{x}^{\perp}\|^2 = \|\vec{x}^{\parallel}\|^2 + \|\vec{x}^{\perp}\|^2 = \|\vec{x}^{\parallel} + \vec{x}^{\perp}\|^2 = \|\vec{x}\|^2.$$

Example. If $V$ is a subspace of $\mathbb{R}^n$ that isn't all of $\mathbb{R}^n$, then $\operatorname{proj}_V$ is not an orthogonal transformation. To see why this is true, choose a nonzero vector $\vec{x} \in V^{\perp}$ (such a vector exists since $V \neq \mathbb{R}^n$). Then
$$\|\operatorname{proj}_V(\vec{x})\| = \|\vec{0}\| = 0 \neq \|\vec{x}\|.$$
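To make the length-preservation condition in Definition 3.1 concrete, here is a minimal numerical sketch (assuming NumPy; the angle, the line we project onto, and the test vector are arbitrary illustrative choices, not values from class). It checks that a rotation matrix preserves the norm of a vector while a projection matrix does not.

    import numpy as np

    theta = 0.7                                       # arbitrary rotation angle
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # rotation by theta

    u = np.array([1.0, 2.0])
    u = u / np.linalg.norm(u)                         # unit vector spanning V
    P = np.outer(u, u)                                # matrix of proj_V for V = Span(u)

    x = np.array([3.0, -1.0])                         # arbitrary test vector
    print(np.linalg.norm(A @ x), np.linalg.norm(x))   # equal: the rotation is orthogonal
    print(np.linalg.norm(P @ x), np.linalg.norm(x))   # smaller: the projection is not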

The projection example points out an important feature of orthogonal matrices: they must have trivial kernel. This gives the following result.

Theorem 3.1. An orthogonal matrix is invertible.

Proof. Since an orthogonal matrix is automatically square, we only have to check that $\ker(A) = \{\vec{0}\}$. So choose $\vec{b} \in \ker(A)$; we want to show $\vec{b} = \vec{0}$. Using the definition of the kernel we have $A\vec{b} = \vec{0}$, so that $\|A\vec{b}\| = \|\vec{0}\| = 0$. But $A$ is orthogonal, so that $\|A\vec{b}\| = \|\vec{b}\|$. Hence we have $\|\vec{b}\| = 0$, and so $\vec{b} = \vec{0}$. Hence $\ker(A) = \{\vec{0}\}$ as desired.

This proof only tells us that $A$ is invertible, but doesn't tell us what the inverse is. We'll compute the inverse of an orthogonal matrix by the end of the class period.

Theorem 3.2 (Operations Preserving Orthogonality in Matrices). The product of two orthogonal transformations is orthogonal. The inverse of an orthogonal matrix is orthogonal.

Proof. To prove the first statement, suppose that $A$ and $B$ are two orthogonal $n \times n$ matrices. Then for any $\vec{x} \in \mathbb{R}^n$ we have
$$\|AB\vec{x}\| = \|A(B\vec{x})\| \overset{(1)}{=} \|B\vec{x}\| \overset{(2)}{=} \|\vec{x}\|$$
(in the equalities labeled (1) and (2) I have used the orthogonality of $A$ and of $B$, respectively). This means that $AB$ is orthogonal.

For the second statement, let $A$ be an orthogonal $n \times n$ matrix and choose a vector $\vec{x} \in \mathbb{R}^n$; our goal is to show $\|A^{-1}\vec{x}\| = \|\vec{x}\|$. To do so, notice
$$\|A^{-1}\vec{x}\| \overset{(*)}{=} \|A A^{-1}\vec{x}\| = \|\vec{x}\|,$$
where the equality labeled $(*)$ uses the orthogonality of $A$.

In class there was a very reasonable question asked: why is an orthogonal matrix called orthogonal? One good answer to this question is the following.

Theorem 3.3. If $\vec{v}$ and $\vec{w}$ are orthogonal elements of $\mathbb{R}^n$, then for any orthogonal transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ the vectors $T(\vec{v})$ and $T(\vec{w})$ are orthogonal.

We'll prove this result at the beginning of class next time. To do this, we'll need the following result (which you'll prove for homework):

Lemma 3.4. For two vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$, we have $\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2$ if and only if $\vec{x} \cdot \vec{y} = 0$.
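To tie the Gram-Schmidt theorem and the QR factorization together computationally, here is a minimal sketch (assuming NumPy; the helper name gram_schmidt_qr and the sample matrix are illustrative choices of mine, not from the notes). It builds $Q$ column by column using the iterative formulas above, records the entries $\vec{u}_j \cdot \vec{v}_i$ and $\|\vec{v}_i^{\perp}\|$ in $R$, and then checks that $A = QR$ and that the columns of $Q$ are orthonormal.

    import numpy as np

    def gram_schmidt_qr(A):
        """Return Q with orthonormal columns and upper-triangular R with A = QR,
        assuming the columns of A are linearly independent."""
        n, s = A.shape
        Q = np.zeros((n, s))
        R = np.zeros((s, s))
        for i in range(s):
            v_perp = A[:, i].copy()                    # start from v_i
            for j in range(i):
                R[j, i] = Q[:, j] @ A[:, i]            # entry u_j . v_i of R
                v_perp -= R[j, i] * Q[:, j]            # subtract the projection onto Span(u_1, ..., u_{i-1})
            R[i, i] = np.linalg.norm(v_perp)           # ||v_i^perp||
            Q[:, i] = v_perp / R[i, i]                 # u_i = v_i^perp / ||v_i^perp||
        return Q, R

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 0.0]])                         # arbitrary full-column-rank example
    Q, R = gram_schmidt_qr(A)
    print(np.allclose(Q @ R, A))                       # True: A = QR
    print(np.allclose(Q.T @ Q, np.eye(2)))             # True: the columns of Q are orthonormal

This is the "classical" Gram-Schmidt ordering described in the theorem; numerically more stable variants exist, but this version mirrors the formulas in the notes directly.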