MTH6140 Linear Algebra II, Notes 7, 16th December 2010

7 Inner product spaces

Ordinary Euclidean space is a 3-dimensional vector space over $\mathbb{R}$, but it is more than that: the extra geometric structure (lengths, angles, etc.) can all be derived from a special kind of bilinear form on the space known as an inner product. We examine inner product spaces and their linear maps in this chapter.

Throughout the chapter, all vector spaces will be real vector spaces, i.e. the underlying field is always the real numbers $\mathbb{R}$.

7.1 Inner products and orthonormal bases

Definition 7.1 An inner product on a real vector space $V$ is a function $b : V \times V \to \mathbb{R}$ satisfying

- $b$ is bilinear (that is, $b$ is linear in the first variable when the second is kept constant, and vice versa);
- $b$ is symmetric (that is, $b(v,w) = b(w,v)$ for all $v, w \in V$);
- $b$ is positive definite, that is, $b(v,v) \ge 0$ for all $v \in V$, and $b(v,v) = 0$ if and only if $v = 0$.

We usually write $b(v,w)$ as $v \cdot w$. An inner product is sometimes called a dot product (because of this notation).

Definition 7.2 When a real vector space is equipped with an inner product, we refer to it as an inner product space.

Geometrically, in a real vector space, we define $v \cdot w = |v|\,|w|\cos\theta$, where $|v|$ and $|w|$ are the lengths of $v$ and $w$, and $\theta$ is the angle between $v$ and $w$. Of course this definition doesn't work if either $v$ or $w$ is zero, but in that case $v \cdot w = 0$. It is much easier to reverse the process. Given an inner product on $V$, we define
$$|v| = \sqrt{v \cdot v}$$
for any vector $v \in V$; and, if $v, w \ne 0$, then we define the angle between them to be $\theta$, where
$$\cos\theta = \frac{v \cdot w}{|v|\,|w|}.$$
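To make this "reverse process" concrete, here is a minimal numerical sketch. It assumes Python with numpy, which the notes themselves do not use; it recovers lengths and an angle from the standard dot product on $\mathbb{R}^3$:

```python
import numpy as np

# Standard inner product on R^3: v . w = v1*w1 + v2*w2 + v3*w3.
v = np.array([1.0, 2.0, 2.0])
w = np.array([1.0, 1.0, 0.0])

dot = v @ w                 # symmetric: v @ w == w @ v
length_v = np.sqrt(v @ v)   # |v| = sqrt(v . v) = 3
length_w = np.sqrt(w @ w)   # |w| = sqrt(2)

# Angle recovered from the inner product: cos(theta) = v.w / (|v| |w|)
theta = np.arccos(dot / (length_v * length_w))
print(dot, length_v, round(np.degrees(theta), 1))   # 3.0 3.0 45.0
```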

Definition 7.3 A basis $(v_1, \dots, v_n)$ for an inner product space is called orthonormal if $v_i \cdot v_j = \delta_{ij}$ (the Kronecker delta) for $1 \le i, j \le n$. In other words, $v_i \cdot v_i = 1$ for all $1 \le i \le n$, and $v_i \cdot v_j = 0$ for $i \ne j$.

Remark: If vectors $v_1, \dots, v_n$ satisfy $v_i \cdot v_j = \delta_{ij}$, then they are necessarily linearly independent. For suppose that $c_1 v_1 + \dots + c_n v_n = 0$. Taking the inner product of this equation with $v_i$, we find that $c_i = 0$, for all $i$.

7.2 The Gram–Schmidt process

Given any basis for $V$, a constructive method for finding an orthonormal basis is known as the Gram–Schmidt process. Let $w_1, \dots, w_n$ be any basis for $V$. The Gram–Schmidt process works as follows.

Since $w_1 \ne 0$, we have $w_1 \cdot w_1 > 0$, that is, $|w_1| > 0$. Put $v_1 = w_1 / |w_1|$; then $|v_1| = 1$, that is, $v_1 \cdot v_1 = 1$.

For $i = 2, \dots, n$, let $w_i' = w_i - (v_1 \cdot w_i) v_1$. Then
$$v_1 \cdot w_i' = v_1 \cdot w_i - (v_1 \cdot w_i)(v_1 \cdot v_1) = 0$$
for $i = 2, \dots, n$.

Now apply the Gram–Schmidt process recursively to $(w_2', \dots, w_n')$. Since we replace these vectors by linear combinations of themselves, their inner products with $v_1$ remain zero throughout the process. So if we end up with vectors $v_2, \dots, v_n$, then $v_1 \cdot v_i = 0$ for $i = 2, \dots, n$. By induction, we can assume that $v_i \cdot v_j = \delta_{ij}$ for $i, j = 2, \dots, n$; by what we have said, this holds if $i$ or $j$ is $1$ as well.

Definition 7.4 The inner product on $\mathbb{R}^n$ for which the standard basis is orthonormal is called the standard inner product on $\mathbb{R}^n$.
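The recursive description of the Gram–Schmidt process above unrolls into a simple loop. The following is a minimal sketch, assuming Python with numpy (the helper name gram_schmidt is mine, not part of the notes): each $w_i$ is stripped of its components along the $v_j$ already found, then normalised.

```python
import numpy as np

def gram_schmidt(basis):
    """Turn a list of linearly independent vectors into an orthonormal basis.

    The recursion in the notes, unrolled: each w is stripped of its
    components along the v_j already found, then scaled to length 1.
    """
    orthonormal = []
    for w in basis:
        w = np.array(w, dtype=float)
        for v in orthonormal:
            w -= (v @ w) * v                       # w' = w - (v . w) v
        orthonormal.append(w / np.sqrt(w @ w))     # v = w' / |w'|
    return orthonormal
```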

Example 7.1 In $\mathbb{R}^3$ (with the standard inner product), apply the Gram–Schmidt process to the vectors
$$w_1 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \quad w_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad w_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.$$

We have $w_1 \cdot w_1 = 9$, so in the first step we put
$$v_1 = \tfrac{1}{3} w_1 = \begin{pmatrix} 1/3 \\ 2/3 \\ 2/3 \end{pmatrix}.$$

Now $v_1 \cdot w_2 = 1$ and $v_1 \cdot w_3 = \tfrac{1}{3}$, so in the second step we find
$$w_2' = w_2 - v_1 = \begin{pmatrix} 2/3 \\ 1/3 \\ -2/3 \end{pmatrix}, \quad w_3' = w_3 - \tfrac{1}{3} v_1 = \begin{pmatrix} 8/9 \\ -2/9 \\ -2/9 \end{pmatrix}.$$

Now we apply Gram–Schmidt recursively to $w_2'$ and $w_3'$. We have $w_2' \cdot w_2' = 1$, so
$$v_2 = w_2' = \begin{pmatrix} 2/3 \\ 1/3 \\ -2/3 \end{pmatrix}.$$
Then $v_2 \cdot w_3' = \tfrac{2}{3}$, so
$$w_3'' = w_3' - \tfrac{2}{3} v_2 = \begin{pmatrix} 4/9 \\ -4/9 \\ 2/9 \end{pmatrix}.$$
Finally, $w_3'' \cdot w_3'' = \tfrac{4}{9}$, so
$$v_3 = \tfrac{3}{2} w_3'' = \begin{pmatrix} 2/3 \\ -2/3 \\ 1/3 \end{pmatrix}.$$

Check that the three vectors $v_1, v_2, v_3$ we have found really do form an orthonormal basis.
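One way to carry out this check is numerically. A sketch reusing the gram_schmidt helper from the Section 7.2 code (again assuming numpy):

```python
import numpy as np

w1, w2, w3 = [1, 2, 2], [1, 1, 0], [1, 0, 0]
v1, v2, v3 = gram_schmidt([w1, w2, w3])   # helper from Section 7.2

print(v1)   # [ 0.333  0.667  0.667]  =  ( 1/3,  2/3,  2/3)
print(v2)   # [ 0.667  0.333 -0.667]  =  ( 2/3,  1/3, -2/3)
print(v3)   # [ 0.667 -0.667  0.333]  =  ( 2/3, -2/3,  1/3)

# v_i . v_j = delta_ij: the matrix V with columns v_i satisfies V^T V = I.
V = np.column_stack([v1, v2, v3])
print(np.allclose(V.T @ V, np.eye(3)))    # True
```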

7.3 Adjoints and orthogonal linear maps

Definition 7.5 Let $V$ be an inner product space, and $T : V \to V$ a linear map. Then the adjoint of $T$ is the linear map $T^* : V \to V$ defined by
$$v \cdot T^*(w) = T(v) \cdot w.$$

Proposition 7.1 If $T : V \to V$ is represented by the matrix $A$ relative to an orthonormal basis of $V$, then $T^*$ is represented by the transposed matrix $A^\top$.

Now we define two important classes of linear maps on $V$.

Definition 7.6 Let $T$ be a linear map on an inner product space $V$.
(a) $T$ is self-adjoint if $T^* = T$.
(b) $T$ is orthogonal if it is invertible and $T^* = T^{-1}$.

We see that, if $T$ is represented by a matrix $A$ (relative to an orthonormal basis), then

- $T$ is self-adjoint if and only if $A$ is symmetric;
- $T$ is orthogonal if and only if $A^\top A = I$, i.e. if and only if $A^{-1} = A^\top$.

We will examine self-adjoint maps (or symmetric matrices) further in the next chapter. Here we look at orthogonal maps.

Theorem 7.2 The following are equivalent for a linear map $T$ on an inner product space $V$:
(a) $T$ is orthogonal;
(b) $T$ preserves the inner product, that is, $T(v) \cdot T(w) = v \cdot w$ for all $v, w \in V$;
(c) $T$ maps an orthonormal basis of $V$ to an orthonormal basis.

Proof We have $T(v) \cdot T(w) = v \cdot T^*(T(w))$, by the definition of the adjoint (see Definition 7.5). If $T$ is orthogonal, then $T^* T$ is the identity and this equals $v \cdot w$; conversely, if the inner product is preserved, then $v \cdot (T^*(T(w)) - w) = 0$ for all $v$, forcing $T^*(T(w)) = w$. So (a) and (b) are equivalent.

Suppose that $(v_1, \dots, v_n)$ is an orthonormal basis, that is, $v_i \cdot v_j = \delta_{ij}$. If (b) holds, then $T(v_i) \cdot T(v_j) = \delta_{ij}$, so that $(T(v_1), \dots, T(v_n))$ is an orthonormal basis, and (c) holds. Conversely, suppose that (c) holds, and let $v = \sum x_i v_i$ and $w = \sum y_i v_i$ for some orthonormal basis $(v_1, \dots, v_n)$, so that $v \cdot w = \sum x_i y_i$. We have
$$T(v) \cdot T(w) = \Big( \sum x_i T(v_i) \Big) \cdot \Big( \sum y_i T(v_i) \Big) = \sum x_i y_i,$$
since $T(v_i) \cdot T(v_j) = \delta_{ij}$ by assumption; so (b) holds.
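A rotation of the plane is the standard example of an orthogonal map. The sketch below (numpy again; the angle 0.7 is an arbitrary choice) checks the defining property of the adjoint and conditions (a) and (b) of Theorem 7.2 for a rotation matrix:

```python
import numpy as np

t = 0.7                                   # arbitrary rotation angle
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation: an orthogonal map

v, w = np.array([3.0, 1.0]), np.array([-1.0, 2.0])

# Adjoint (Definition 7.5 / Proposition 7.1): v . (A^T w) = (A v) . w
print(np.isclose(v @ (A.T @ w), (A @ v) @ w))   # True

# (a) orthogonal: A^T A = I, i.e. A^{-1} = A^T
print(np.allclose(A.T @ A, np.eye(2)))          # True

# (b) preserves the inner product: (A v) . (A w) = v . w
print(np.isclose((A @ v) @ (A @ w), v @ w))     # True
```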

Corollary 7.3 $T$ is orthogonal if and only if the columns of the matrix representing $T$ relative to an orthonormal basis themselves form an orthonormal basis.

Proof The columns of the matrix representing $T$ are just the vectors $T(v_1), \dots, T(v_n)$, written in coordinates relative to $v_1, \dots, v_n$. So this follows from the equivalence of (a) and (c) in the theorem.

Alternatively, the condition on columns shows that $A^\top A = I$, where $A$ is the matrix representing $T$; so $T^* T = I$, and $T$ is orthogonal.

Example 7.2 Our earlier example of the Gram–Schmidt process produces the orthogonal matrix
$$\begin{pmatrix} 1/3 & 2/3 & 2/3 \\ 2/3 & 1/3 & -2/3 \\ 2/3 & -2/3 & 1/3 \end{pmatrix},$$
whose columns are precisely the orthonormal basis we constructed in the example.
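In the same spirit as the earlier checks, Corollary 7.3 can be confirmed numerically for this matrix (a sketch, assuming numpy):

```python
import numpy as np

# The matrix from Example 7.2; its columns are v1, v2, v3 from Example 7.1.
P = np.array([[1/3,  2/3,  2/3],
              [2/3,  1/3, -2/3],
              [2/3, -2/3,  1/3]])

# Columns form an orthonormal basis  <=>  P^T P = I  <=>  P is orthogonal.
print(np.allclose(P.T @ P, np.eye(3)))   # True
```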