The Gram-Schmidt Process


Now we will present a procedure, based on orthogonal projection, that converts any linearly independent set of vectors into an orthogonal set. Let us begin with the simple case of two vectors, u_1 and u_2. We want to convert this into an orthogonal set by keeping u_1 intact, and modifying u_2. So let v_1 = u_1.

Then compute the projection p = proj_{v_1}(u_2) of u_2 onto v_1.

[Figure: the vectors u_2 and v_1 = u_1, with the projection p = proj_{v_1}(u_2).]

If this projection is zero, that means that u_2 is already orthogonal to v_1, so we set v_2 = u_2.

Otherwise, we correct u_2 by removing the part of it that is parallel to v_1, i.e., we set v_2 = u_2 - p.

[Figure: the vectors u_2 and v_1 = u_1, with p = proj_{v_1}(u_2) and v_2 = u_2 - p.]

Note that each of v_1 and v_2 is a linear combination of u_1 and u_2, and vice versa, so span(v_1, v_2) = span(u_1, u_2).

Next, suppose we have a third vector, u_3. If it is already orthogonal to v_1 and v_2, then we can set v_3 = u_3. Otherwise, we need to correct u_3 by removing its projections onto both v_1 and v_2. Let

v_3 = u_3 - proj_{v_1}(u_3) - proj_{v_2}(u_3).

Then it is clear that v_3 is orthogonal to v_1 and v_2. And again, it is clear that the two sets span the same subspace: span(v_1, v_2, v_3) = span(u_1, u_2, u_3). We can continue this for more independent vectors, u_4, u_5, ..., each time correcting the new vector u_j to make it orthogonal to the previously constructed v_1, v_2, ..., v_{j-1}. Summarizing, we have

Theorem (The Gram-Schmidt Process). Given a set { u_1, u_2, ..., u_k } of linearly independent vectors in an inner product space, perform the following computations:

Step 1. v_1 = u_1
Step 2. v_2 = u_2 - proj_{v_1}(u_2)
Step 3. v_3 = u_3 - proj_{v_1}(u_3) - proj_{v_2}(u_3)
Step 4. v_4 = u_4 - proj_{v_1}(u_4) - proj_{v_2}(u_4) - proj_{v_3}(u_4)
(Continue until Step k.)

Then (i) the set { v_1, v_2, ..., v_k } is orthogonal, and (ii) for each j = 1, 2, ..., k, span(v_1, v_2, ..., v_j) = span(u_1, u_2, ..., u_j).

If we express the projections in terms of the inner product, we have the following form for the process:

Step 1. v_1 = u_1
Step 2. v_2 = u_2 - (⟨v_1, u_2⟩/⟨v_1, v_1⟩) v_1
Step 3. v_3 = u_3 - (⟨v_1, u_3⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_3⟩/⟨v_2, v_2⟩) v_2
Step 4. v_4 = u_4 - (⟨v_1, u_4⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_4⟩/⟨v_2, v_2⟩) v_2 - (⟨v_3, u_4⟩/⟨v_3, v_3⟩) v_3
(Continue until Step k.)
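To make the recursion concrete, here is a minimal sketch of the process in Python, assuming the standard dot product on R^n (the names dot and gram_schmidt are illustrative choices, not from the notes):

```python
# Minimal sketch of the Gram-Schmidt process on R^n with the standard
# dot product. Assumes the input list is linearly independent.

def dot(x, y):
    """Standard Euclidean inner product."""
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(us):
    """Return an orthogonal list spanning the same subspace as us.

    Step j: v_j = u_j - sum over i < j of (<v_i, u_j>/<v_i, v_i>) v_i.
    """
    vs = []
    for u in us:
        v = list(u)
        for w in vs:
            coeff = dot(w, u) / dot(w, w)
            v = [a - coeff * b for a, b in zip(v, w)]
        vs.append(v)
    return vs

# The R^4 example worked below: u_1 = (1,1,1,1), u_2 = (1,0,1,1), u_3 = (0,1,1,0).
print(gram_schmidt([[1, 1, 1, 1], [1, 0, 1, 1], [0, 1, 1, 0]]))
# [[1, 1, 1, 1], [0.25, -0.75, 0.25, 0.25], [-0.33.., 0.0, 0.66.., -0.33..]]
```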

The above process applies to any linearly independent set. But if we start with a basis B = { u_1, u_2, ..., u_n } of an inner product space V and the result is B' = { v_1, v_2, ..., v_n }, then the theorem says that span(B') = span(B) = V. So B' is an orthogonal basis of V. The Gram-Schmidt process creates an orthogonal set. But as we have seen before, we can easily normalize the resulting vectors to produce an orthonormal set, if desired. So we have the following theorem.

Theorem. Every finite-dimensional inner product space has an orthonormal basis.

Example: Let V = R^4 with the standard Euclidean inner product (dot product). Let us apply the Gram-Schmidt process to the vectors

u_1 = (1, 1, 1, 1), u_2 = (1, 0, 1, 1), u_3 = (0, 1, 1, 0).

Step 1. v_1 = u_1 = (1, 1, 1, 1)

Step 2. v_2 = u_2 - proj_{v_1}(u_2) = u_2 - (⟨v_1, u_2⟩/⟨v_1, v_1⟩) v_1 = (1, 0, 1, 1) - (3/4)(1, 1, 1, 1) = (1/4, -3/4, 1/4, 1/4)

Step 3. v_3 = u_3 - proj_{v_1}(u_3) - proj_{v_2}(u_3) = u_3 - (⟨v_1, u_3⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_3⟩/⟨v_2, v_2⟩) v_2
= (0, 1, 1, 0) - (2/4)(1, 1, 1, 1) - ((-1/2)/(3/4))(1/4, -3/4, 1/4, 1/4)
= (0, 1, 1, 0) - (1/2, 1/2, 1/2, 1/2) + (2/3)(1/4, -3/4, 1/4, 1/4)
= (-1/3, 0, 2/3, -1/3)

Let W be the span of { u_1, u_2, u_3 }. Then

B' = { v_1, v_2, v_3 } = { (1, 1, 1, 1), (1/4, -3/4, 1/4, 1/4), (-1/3, 0, 2/3, -1/3) }

is an orthogonal basis of the subspace W.

Remember that if we rescale the vectors in a basis by nonzero scalar multiples, the result is still a basis. So we could simplify this a bit by eliminating fractions. Let

w_1 = v_1, w_2 = 4v_2 = (1, -3, 1, 1), w_3 = 3v_3 = (-1, 0, 2, -1).

Then B'' = { w_1, w_2, w_3 } is also an orthogonal basis of W. And now we can find an orthonormal basis by normalizing:

‖w_1‖ = √4 = 2, ‖w_2‖ = √12 = 2√3, ‖w_3‖ = √6,

so an orthonormal basis of W is

B''' = { w_1/‖w_1‖, w_2/‖w_2‖, w_3/‖w_3‖ }
= { (1/2, 1/2, 1/2, 1/2), (1/(2√3), -3/(2√3), 1/(2√3), 1/(2√3)), (-1/√6, 0, 2/√6, -1/√6) }.
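As a quick numerical sanity check (assuming NumPy is available; this check is not part of the notes): the rows of an orthonormal set stacked into a matrix B satisfy B Bᵀ = I.

```python
import numpy as np

# The orthonormal basis just computed, one vector per row.
s3, s6 = np.sqrt(3), np.sqrt(6)
B = np.array([
    [ 1/2,       1/2,       1/2,      1/2],
    [ 1/(2*s3), -3/(2*s3),  1/(2*s3), 1/(2*s3)],
    [-1/s6,      0,         2/s6,    -1/s6],
])

print(np.allclose(B @ B.T, np.eye(3)))  # True: rows are orthonormal
```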

Rescale-as-you-go. Notice that in constructing each v_j, it is only the direction that counts, not the norm. So the type of rescaling we did in going from B' to B'' could be done earlier, at each step. Here is the calculation again with that simplification. Step 1 and Step 2 are the same, but before proceeding to Step 3, we can clear fractions. We can say it this way:

Step 2. Let v_2' = u_2 - proj_{v_1}(u_2) = (1/4, -3/4, 1/4, 1/4). So define v_2 = 4v_2' = (1, -3, 1, 1).

Step 3. Let v_3' = u_3 - (⟨v_1, u_3⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_3⟩/⟨v_2, v_2⟩) v_2 = (0, 1, 1, 0) - (2/4)(1, 1, 1, 1) - (-2/12)(1, -3, 1, 1) = (-1/3, 0, 2/3, -1/3). So define v_3 = 3v_3' = (-1, 0, 2, -1).

Etc.
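Rescale-as-you-go is easy to mechanize with exact rational arithmetic. Here is a small sketch using Python's fractions module; the helper clear_fractions and its use of math.lcm are my own illustrative choices, not from the notes:

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def clear_fractions(v):
    """Rescale a rational vector by the lcm of its denominators."""
    m = lcm(*(c.denominator for c in v))
    return [int(c * m) for c in v]

u1 = [Fraction(c) for c in (1, 1, 1, 1)]
u2 = [Fraction(c) for c in (1, 0, 1, 1)]

v1 = u1
coeff = dot(v1, u2) / dot(v1, v1)             # <v_1,u_2>/<v_1,v_1> = 3/4
v2 = [a - coeff * b for a, b in zip(u2, v1)]  # (1/4, -3/4, 1/4, 1/4)
print(clear_fractions(v2))                    # [1, -3, 1, 1]
```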

There is a way to organize the calculations that may save time and avoid errors. Recall again the sequence of steps:

Step 1. v_1 = u_1
Step 2. v_2 = u_2 - (⟨v_1, u_2⟩/⟨v_1, v_1⟩) v_1
Step 3. v_3 = u_3 - (⟨v_1, u_3⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_3⟩/⟨v_2, v_2⟩) v_2
Step 4. v_4 = u_4 - (⟨v_1, u_4⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_4⟩/⟨v_2, v_2⟩) v_2 - (⟨v_3, u_4⟩/⟨v_3, v_3⟩) v_3

Notice that the inner product ⟨v_1, v_1⟩ is needed in every step after the first, the inner product ⟨v_2, v_2⟩ is needed in every step after the second, etc. Also, for each v_i, the inner product ⟨v_i, u_j⟩ will be needed in the j-th step, for each j > i.

It is useful to compute these inner products as soon as possible, and record them in a table for later use:

v_i    ‖v_i‖²       u_2          u_3          u_4
v_1    ⟨v_1,v_1⟩    ⟨v_1,u_2⟩    ⟨v_1,u_3⟩    ⟨v_1,u_4⟩
v_2    ⟨v_2,v_2⟩                 ⟨v_2,u_3⟩    ⟨v_2,u_4⟩
v_3    ⟨v_3,v_3⟩                              ⟨v_3,u_4⟩
v_4    ⟨v_4,v_4⟩

So as soon as v_i is calculated (and after any rescaling, if desired), we can fill out the i-th row of this table.

Example: Let us redo the previous example in this format, with u_1 = (1, 1, 1, 1), u_2 = (1, 0, 1, 1), u_3 = (0, 1, 1, 0).

v_i             ‖v_i‖²   u_2   u_3
(1, 1, 1, 1)    4        3     2

Then v_2' = u_2 - (⟨v_1, u_2⟩/⟨v_1, v_1⟩) v_1 = (1, 0, 1, 1) - (3/4)(1, 1, 1, 1) = (1/4)(1, -3, 1, 1), so set v_2 = (1, -3, 1, 1). Then

v_i             ‖v_i‖²   u_2   u_3
(1, 1, 1, 1)    4        3     2
(1, -3, 1, 1)   12             -2

Finally, v_3' = u_3 - (2/4) v_1 + (2/12) v_2 = (1/3)(-1, 0, 2, -1), etc.
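The bookkeeping can be coded directly: as soon as each v_i is known, record ⟨v_i, v_i⟩ and the inner products ⟨v_i, u_j⟩ for the remaining j, then look them up instead of recomputing. The (norm², row-dictionary) layout below is an illustrative choice, not from the notes:

```python
def gram_schmidt_table(us):
    """Gram-Schmidt with the tabular bookkeeping described above."""
    vs, table = [], []   # table[i] = (<v_i, v_i>, {j: <v_i, u_j> for j > i})
    for j, u in enumerate(us):
        v = list(u)
        for (vv, row), w in zip(table, vs):
            coeff = row[j] / vv                    # looked up, not recomputed
            v = [a - coeff * b for a, b in zip(v, w)]
        vv = sum(a * a for a in v)
        row = {k: sum(a * b for a, b in zip(v, us[k]))
               for k in range(j + 1, len(us))}     # fill row i of the table
        vs.append(v)
        table.append((vv, row))
    return vs
```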

Example: Let V = R^5 with the standard inner product. We will apply the Gram-Schmidt process to

u_1 = (1, 1, -1, 0, 0), u_2 = (0, 1, 0, 0, 1), u_3 = (1, -1, 0, 1, 0).

Set v_1 = u_1. Now we can fill in the first row of the table:

v_i                ‖v_i‖²   u_2   u_3
(1, 1, -1, 0, 0)   3        1     0

Then v_2' = u_2 - (⟨v_1, u_2⟩/⟨v_1, v_1⟩) v_1 = (0, 1, 0, 0, 1) - (1/3)(1, 1, -1, 0, 0) = (-1/3, 2/3, 1/3, 0, 1). So let v_2 = 3v_2' = (-1, 2, 1, 0, 3).

Now we can fill in the second row:

v_i                ‖v_i‖²   u_2   u_3
(1, 1, -1, 0, 0)   3        1     0
(-1, 2, 1, 0, 3)   15             -3

Then v_3' = u_3 - (⟨v_1, u_3⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_3⟩/⟨v_2, v_2⟩) v_2 = (1, -1, 0, 1, 0) - 0·v_1 + (3/15)(-1, 2, 1, 0, 3) = (4/5, -3/5, 1/5, 1, 3/5). So let v_3 = 5v_3' = (4, -3, 1, 5, 3). At this point, { v_1, v_2, v_3 } is an orthogonal basis for the span of { u_1, u_2, u_3 }.

But if we want an orthonormal basis we should complete the third row:

v_i                ‖v_i‖²   u_2   u_3
(1, 1, -1, 0, 0)   3        1     0
(-1, 2, 1, 0, 3)   15             -3
(4, -3, 1, 5, 3)   60

Now the diagonal entries of the table contain the squared norms of the orthogonal basis, so the resulting orthonormal basis is

w_1 = v_1/√3 = (1/√3, 1/√3, -1/√3, 0, 0)
w_2 = v_2/√15 = (-1/√15, 2/√15, 1/√15, 0, 3/√15)
w_3 = v_3/√60 = (4/√60, -3/√60, 1/√60, 5/√60, 3/√60)

Example: Let V = P_2 with the inner product given by

⟨p, q⟩ = ∫_{-1}^{1} p(x) q(x) dx.

Find an orthonormal basis for V.

Solution. First, it will be useful to record that

∫_{-1}^{1} x^k dx = [x^{k+1}/(k+1)]_{-1}^{1} = 2/(k+1) if k is even, and 0 if k is odd.

We start with the standard basis, u_1 = 1, u_2 = x, u_3 = x^2, and apply the Gram-Schmidt process. First, set v_1 = u_1 = 1. Then

⟨v_1, v_1⟩ = ⟨1, 1⟩ = ∫_{-1}^{1} x^0 dx = 2,
⟨v_1, u_2⟩ = ⟨1, x⟩ = ∫_{-1}^{1} x dx = 0,
⟨v_1, u_3⟩ = ⟨1, x^2⟩ = ∫_{-1}^{1} x^2 dx = 2/3.

So we have the first row of the table:

v_i   ‖v_i‖²   x   x^2
1     2        0   2/3

Then v_2 = u_2 - 0·v_1 = u_2 = x. (Note that u_2 is unchanged, because it was already orthogonal to u_1.) Then

⟨v_2, v_2⟩ = ⟨x, x⟩ = ∫_{-1}^{1} x^2 dx = 2/3, and ⟨v_2, u_3⟩ = ⟨x, x^2⟩ = ∫_{-1}^{1} x^3 dx = 0.

So we have

v_i   ‖v_i‖²   x   x^2
1     2        0   2/3
x     2/3          0

Finally,

v_3 = u_3 - (⟨v_1, u_3⟩/⟨v_1, v_1⟩) v_1 - (⟨v_2, u_3⟩/⟨v_2, v_2⟩) v_2 = x^2 - ((2/3)/2)·1 - (0/(2/3))·x = x^2 - 1/3.

And

⟨v_3, v_3⟩ = ∫_{-1}^{1} (x^2 - 1/3)^2 dx = ∫_{-1}^{1} (x^4 - (2/3)x^2 + 1/9) dx = 2/5 - 4/9 + 2/9 = 8/45.

Now we have

v_i         ‖v_i‖²   x   x^2
1           2        0   2/3
x           2/3          0
x^2 - 1/3   8/45

So ‖v_1‖ = √2, ‖v_2‖ = √(2/3), ‖v_3‖ = √(8/45). Then our orthonormal basis is

w_1 = 1/√2,
w_2 = (1/√(2/3)) x = √(3/2) x,
w_3 = (1/√(8/45)) (x^2 - 1/3) = √(45/8) (x^2 - 1/3).
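The same P_2 computation can be reproduced with exact symbolic integration, assuming SymPy is available (the variable names are illustrative, not from the notes):

```python
import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, -1, 1))  # <p, q> on P_2

u = [sp.Integer(1), x, x**2]
v = []
for uj in u:
    # v_j = u_j minus its projections onto the earlier v_i
    vj = uj - sum(ip(vi, uj) / ip(vi, vi) * vi for vi in v)
    v.append(sp.expand(vj))

print(v)                                  # [1, x, x**2 - 1/3]
print([sp.sqrt(ip(vi, vi)) for vi in v])  # [sqrt(2), sqrt(6)/3, 2*sqrt(10)/15]
```

Note that √(2/3) = √6/3 and √(8/45) = 2√10/15, so the printed norms agree with the table above.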

Extending to an orthogonal basis. Suppose we start with a linearly independent set u_1, ..., u_k with the property that an initial segment u_1, u_2, ..., u_j is already orthogonal. If we perform the Gram-Schmidt process, we find that v_1 = u_1, v_2 = u_2, ..., v_j = u_j. That is, the first j vectors in the list are unaffected. This happens because if u_j is orthogonal to each of u_1, ..., u_{j-1}, then it is orthogonal to v_i = u_i for each i < j. So the projection

proj_{v_i}(u_j) = (⟨v_i, u_j⟩/⟨v_i, v_i⟩) v_i

is equal to 0. This means that no corrections are made in computing v_j.

Consequently, we have the following theorem.

Theorem. Let S = { b_1, b_2, ..., b_k } be an orthogonal set of nonzero vectors in a finite-dimensional inner product space V. Then S can be extended to an orthogonal basis of V. Furthermore, if S is orthonormal, it can be extended to an orthonormal basis.

Proof. First, extend S to a basis B = { b_1, b_2, ..., b_k, b_{k+1}, ..., b_n } of V. Apply the Gram-Schmidt process to B to get an orthogonal basis B'. According to the remarks above, B' will contain S unchanged.

Example: Extend S = { (1, 1, 0, 1), (1, 0, 0, -1) } to an orthogonal basis of R^4.

Solution: S is clearly orthogonal. First, we need to extend S to a basis of R^4, without worrying about orthogonality. Recall that one way to do this is to add on the standard basis, and then use Gaussian elimination to select a basis from the extended set. So we form the matrix

[ 1  1  1  0  0  0 ]
[ 1  0  0  1  0  0 ]
[ 0  0  0  0  1  0 ]
[ 1 -1  0  0  0  1 ]

whose columns are the two vectors of S followed by e_1, e_2, e_3, e_4. Now we row-reduce this matrix. Since the first two columns are linearly independent, they will contain pivots, and so will be included in the selected basis.

We get

[ 1  1  1  0  0  0 ]      [ 1  1  1  0  0  0 ]
[ 1  0  0  1  0  0 ]  ~   [ 0 -1 -1  1  0  0 ]
[ 0  0  0  0  1  0 ]      [ 0  0  1 -2  0  1 ]
[ 1 -1  0  0  0  1 ]      [ 0  0  0  0  1  0 ]

Since the last matrix is in echelon form and has pivots in columns 1, 2, 3, and 5, the selected basis is

B = { u_1 = (1, 1, 0, 1), u_2 = (1, 0, 0, -1), u_3 = (1, 0, 0, 0), u_4 = (0, 0, 1, 0) }.

Now we proceed with the Gram-Schmidt process. Since u_1 and u_2 are already orthogonal, we know that v_1 = u_1 and v_2 = u_2, so our table is

v_i             ‖v_i‖²   u_3   u_4
(1, 1, 0, 1)    3        1     0
(1, 0, 0, -1)   2        1     0

So v_3' = u_3 - (1/3) v_1 - (1/2) v_2. The common denominator is 6, so we can rescale and set

v_3 = 6v_3' = 6u_3 - 2v_1 - 3v_2 = 6(1, 0, 0, 0) - 2(1, 1, 0, 1) - 3(1, 0, 0, -1) = (1, -2, 0, 1).

Now, we could continue with u_4, but observe that u_4 is already orthogonal to v_1, v_2, and v_3, so v_4 = u_4. So an orthogonal basis containing S is

B' = { (1, 1, 0, 1), (1, 0, 0, -1), (1, -2, 0, 1), (0, 0, 1, 0) }.
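The "append the standard basis and row-reduce" step is also easy to automate. Here is a sketch using SymPy's rref, which returns the pivot columns (SymPy assumed available; names are illustrative):

```python
import sympy as sp

S = [sp.Matrix([1, 1, 0, 1]), sp.Matrix([1, 0, 0, -1])]
E = [sp.eye(4).col(i) for i in range(4)]   # standard basis e_1, ..., e_4

M = sp.Matrix.hstack(*(S + E))             # columns: S first, then the e_i
_, pivots = M.rref()
basis = [M.col(j) for j in pivots]         # pivot columns form the basis
print(pivots)                              # (0, 1, 2, 4): S plus e_1 and e_3
```

Because the vectors of S come first, they are guaranteed to be pivot columns, so the selected basis always contains S.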

The Gram-Schmidt process presented here assumes that we start with a linearly independent set. As a final note, let us consider what would happen if the original set S were linearly dependent, and we proceeded with the Gram-Schmidt process anyway. What would go wrong? The following example illustrates the situation.

Suppose S = { (1, 0, 1, 0), (1, 1, 2, 0), (2, 1, 3, 0), (1, 0, 0, 1) }. Find an orthogonal basis of the subspace W spanned by S.

We begin as usual by setting v_1 = (1, 0, 1, 0) and computing the first row of the table:

v_i             ‖v_i‖²   u_2   u_3   u_4
(1, 0, 1, 0)    2        3     5     1

Then v_2' = (1, 1, 2, 0) - (3/2)(1, 0, 1, 0) = (-1/2, 1, 1/2, 0), which gives v_2 = (-1, 2, 1, 0). So

v_i             ‖v_i‖²   u_2   u_3   u_4
(1, 0, 1, 0)    2        3     5     1
(-1, 2, 1, 0)   6              3     -1

But then comes trouble:

v_3 = (2, 1, 3, 0) - (5/2)(1, 0, 1, 0) - (3/6)(-1, 2, 1, 0) = (0, 0, 0, 0).

Now we have a problem, because v_3 = 0, and the zero vector can never be part of a basis. Furthermore, ⟨v_3, v_3⟩ = 0, and this number occurs in the denominator of future steps! A closer inspection shows that the reason this occurred is that the original set S was linearly dependent. We should start over by reducing the spanning set S to a basis of W, and then use the Gram-Schmidt procedure. But all is not lost! We can actually proceed by ignoring v_3 and continuing.

So our table becomes

v_i             ‖v_i‖²   u_2   u_4
(1, 0, 1, 0)    2        3     1
(-1, 2, 1, 0)   6              -1

And

v_4' = (1, 0, 0, 1) - (1/2)(1, 0, 1, 0) + (1/6)(-1, 2, 1, 0) = (1/3, 1/3, -1/3, 1).

So set v_4 = (1, 1, -1, 3).

Conclusion: B = { (1, 0, 1, 0), (-1, 2, 1, 0), (1, 1, -1, 3) } is an orthogonal basis of

W = span{ (1, 0, 1, 0), (1, 1, 2, 0), (2, 1, 3, 0), (1, 0, 0, 1) }.

Exercise: Verify this statement directly.

We can summarize the idea here as follows.

Theorem (Gram-Schmidt process for linearly dependent sets). If the Gram-Schmidt process is applied to a set S = { u_1, ..., u_k }, then the set S is linearly independent if and only if each of the vectors v_1, ..., v_k produced is nonzero. If, during the process, any v_j that becomes zero is discarded before continuing, then the resulting set is still orthogonal, and spans the same subspace as S.
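Here is a sketch of this variant in Python; exact arithmetic with Fraction avoids having to decide when a floating-point v_j counts as zero (function and variable names are my own, not from the notes):

```python
from fractions import Fraction

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt_dropping_zeros(us):
    """Gram-Schmidt on a possibly dependent set, discarding zero vectors."""
    vs = []
    for u in us:
        v = [Fraction(c) for c in u]
        for w in vs:
            coeff = dot(w, v) / dot(w, w)
            v = [a - coeff * b for a, b in zip(v, w)]
        if any(v):                 # keep v_j only if it is nonzero
            vs.append(v)
    return vs

# The dependent set from the example above.
S = [(1, 0, 1, 0), (1, 1, 2, 0), (2, 1, 3, 0), (1, 0, 0, 1)]
for v in gram_schmidt_dropping_zeros(S):
    print(v)   # three nonzero orthogonal vectors spanning W
```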