Lecture 2j Inner Product Spaces (pages 348-350)


So far, we have taken the essential properties of R^n and used them to form general vector spaces, and we have taken the essential properties of the standard basis and created orthonormal bases. But the definition of an orthonormal basis depended on the properties of the dot product. So what was so special about the dot product? We now take the essential properties of the dot product and use them to define the more general concept of an inner product.

Definition: Let V be a vector space over R. An inner product on V is a function ⟨ , ⟩ : V × V → R such that

(1) ⟨v, v⟩ ≥ 0 for all v in V, and ⟨v, v⟩ = 0 if and only if v = 0 (positive definite)
(2) ⟨v, w⟩ = ⟨w, v⟩ for all v, w in V (symmetric)
(3) ⟨v, sw + tz⟩ = s⟨v, w⟩ + t⟨v, z⟩ for all s, t in R and v, w, z in V (bilinear)

Note that, if we combine properties (2) and (3), then we also have that ⟨sv + tw, z⟩ = s⟨v, z⟩ + t⟨w, z⟩. In its full generality, we have that

⟨s_1 x + s_2 v, t_1 w + t_2 z⟩ = s_1 ⟨x, t_1 w + t_2 z⟩ + s_2 ⟨v, t_1 w + t_2 z⟩
                               = s_1 t_1 ⟨x, w⟩ + s_1 t_2 ⟨x, z⟩ + s_2 t_1 ⟨v, w⟩ + s_2 t_2 ⟨v, z⟩

Definition: A vector space V with an inner product is called an inner product space.

Let's start our study of inner product spaces by looking in R^n.

Example: Show that the function ⟨ , ⟩ defined by ⟨x, y⟩ = x_1 y_1 + 4 x_2 y_2 + 9 x_3 y_3 is an inner product on R^3.

To show this, we verify that ⟨ , ⟩ satisfies the three properties of an inner product.

(1) For any x in R^3, ⟨x, x⟩ = x_1 x_1 + 4 x_2 x_2 + 9 x_3 x_3 = x_1^2 + 4 x_2^2 + 9 x_3^2. Since each of x_1^2, x_2^2, and x_3^2 is greater than or equal to zero, we see that ⟨x, x⟩ ≥ 0. What if ⟨x, x⟩ = 0? Then x_1^2 + 4 x_2^2 + 9 x_3^2 = 0. But since the terms in this sum are all greater than or equal to zero, the only way their sum can equal zero is if each of the terms equals zero. So we get x_1^2 = 0, 4 x_2^2 = 0, and 9 x_3^2 = 0. This means that x_1 = 0, x_2 = 0, and x_3 = 0, which means that x = 0. And so we see that ⟨ , ⟩ is positive definite.
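As a quick numerical sanity check (this helper is an illustration, not part of the notes), we can test the three properties of ⟨x, y⟩ = x_1 y_1 + 4 x_2 y_2 + 9 x_3 y_3 on sample vectors:

```python
# Sanity check (illustration): the weighted product
# <x, y> = x1*y1 + 4*x2*y2 + 9*x3*y3 on sample vectors in R^3.
def ip(x, y):
    return x[0]*y[0] + 4*x[1]*y[1] + 9*x[2]*y[2]

x, y, z = (1.0, 2.0, -1.0), (3.0, 0.0, 2.0), (-2.0, 1.0, 4.0)
s, t = 2.0, -3.0

# symmetry: <x, y> = <y, x>
assert ip(x, y) == ip(y, x)

# bilinearity in the second argument: <x, sy + tz> = s<x, y> + t<x, z>
sy_tz = tuple(s*yi + t*zi for yi, zi in zip(y, z))
assert abs(ip(x, sy_tz) - (s*ip(x, y) + t*ip(x, z))) < 1e-12

# positive definiteness on samples: <x, x> > 0 for x != 0, and <0, 0> = 0
assert ip(x, x) > 0
assert ip((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)) == 0.0

print(ip(x, y))  # 1*3 + 4*2*0 + 9*(-1)*2 = -15.0
```

Of course, checking sample vectors is no substitute for the proof above; it only catches mistakes.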

(2) For any x, y in R^3, ⟨x, y⟩ = x_1 y_1 + 4 x_2 y_2 + 9 x_3 y_3 = y_1 x_1 + 4 y_2 x_2 + 9 y_3 x_3 = ⟨y, x⟩, so ⟨ , ⟩ is symmetric.

(3) For any x, y, z in R^3 and s, t in R,

⟨x, sy + tz⟩ = x_1(s y_1 + t z_1) + 4 x_2(s y_2 + t z_2) + 9 x_3(s y_3 + t z_3)
             = s x_1 y_1 + t x_1 z_1 + 4 s x_2 y_2 + 4 t x_2 z_2 + 9 s x_3 y_3 + 9 t x_3 z_3
             = s(x_1 y_1 + 4 x_2 y_2 + 9 x_3 y_3) + t(x_1 z_1 + 4 x_2 z_2 + 9 x_3 z_3)
             = s⟨x, y⟩ + t⟨x, z⟩

So ⟨ , ⟩ is bilinear.

Example: The function ⟨ , ⟩ defined by ⟨x, y⟩ = 2 x_1 y_1 + 3 x_2 y_2 is not an inner product on R^3 because it is not positive definite. For example, taking x = (0, 0, 3) we get ⟨x, x⟩ = 2(0)(0) + 3(0)(0) = 0, even though x ≠ 0. Note that the textbook shows that this function does define an inner product on R^2, so this example is intended to show the importance of paying attention to which vector space you are working in.

Example: The function ⟨ , ⟩ defined by ⟨x, y⟩ = x_1 y_1 + x_1 y_2 + x_1 y_3 is not an inner product on R^3 because it is not symmetric. For example, ⟨(1, 2, 3), (2, 3, 4)⟩ = (1)(2) + (1)(3) + (1)(4) = 9, but ⟨(2, 3, 4), (1, 2, 3)⟩ = (2)(1) + (2)(2) + (2)(3) = 12, so ⟨(1, 2, 3), (2, 3, 4)⟩ ≠ ⟨(2, 3, 4), (1, 2, 3)⟩. (Our function is also not positive definite. It is bilinear.)

The dot product was only defined in R^n, but an inner product can be defined in any vector space. What could be an inner product on a matrix space? Since our result is supposed to be a number, not a vector, matrix multiplication is not a contender. What about the determinant?
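The failed symmetry check above is easy to reproduce numerically (the function `f` below is just an illustration of the example in the text):

```python
# Illustration: the function <x, y> = x1*y1 + x1*y2 + x1*y3 on R^3
# fails symmetry, as the counterexample in the text shows.
def f(x, y):
    return x[0]*y[0] + x[0]*y[1] + x[0]*y[2]

u, v = (1, 2, 3), (2, 3, 4)
print(f(u, v), f(v, u))  # 9 12 -- unequal, so f is not symmetric
assert f(u, v) != f(v, u)
```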

Example: The function ⟨A, B⟩ = det(AB) is not an inner product on M(2, 2). The easiest thing to see is that it is not positive definite: taking the nonzero matrix

A = [1 0]
    [0 0]

we have det(AA) = det(A) det(A) = 0, so ⟨A, A⟩ = 0 even though A is not the zero matrix. This counterexample easily extends to show that ⟨A, B⟩ = det(AB) is not an inner product on M(n, n) for any n. It is also worth noting that the determinant is not bilinear, since in general det(sA) = s^n det(A) ≠ s det(A). So, while the determinant is useful for many things, it is not useful for defining an inner product on matrix spaces.

There is another operation associated with matrices that results in a number, though, and that is the trace. Recall that the trace of a square matrix is the sum of its diagonal entries.

Example: The function ⟨A, B⟩ = tr(BᵀA) is an inner product on M(m, n). To see this, let's first look at this definition in M(2, 3):

⟨ [a_11 a_12 a_13]   [b_11 b_12 b_13] ⟩
  [a_21 a_22 a_23] , [b_21 b_22 b_23]

     ( [b_11 b_21]                    )
= tr ( [b_12 b_22] [a_11 a_12 a_13]  )
     ( [b_13 b_23] [a_21 a_22 a_23]  )

     [b_11 a_11 + b_21 a_21   ?                         ?                      ]
= tr [?                       b_12 a_12 + b_22 a_22     ?                      ]
     [?                       ?                         b_13 a_13 + b_23 a_23 ]

= b_11 a_11 + b_21 a_21 + b_12 a_12 + b_22 a_22 + b_13 a_13 + b_23 a_23

Note that I left some entries of the matrix BᵀA blank (marked ?) because it was not necessary to calculate them in order to compute the trace. In fact, the entries on the diagonal are (b_11, b_21) · (a_11, a_21), (b_12, b_22) · (a_12, a_22), and (b_13, b_23) · (a_13, a_23). That is, the i-th diagonal entry is the dot product of the i-th column of B with the i-th column of A. As you may now be guessing, this will be true in our general case. Consider that our definition of matrix multiplication was that the ij-th entry of BA is the dot product of the i-th row of B with the j-th column of A. But our first factor is Bᵀ, and the i-th row of Bᵀ is the same as the i-th column of B, so the ij-th entry of BᵀA is the dot product of the i-th column of B and the j-th column of A.
Since we are looking for the trace of BᵀA, we need to take the sum of the diagonal entries (the ii entries), each of which is the dot product of the i-th column of B with the i-th column of A. So, if A = [a_1 ⋯ a_n] and B = [b_1 ⋯ b_n] (written in terms of their columns), then

A, B = tr(b T A) = n bi a i With this formula in hand, let s show that, is an inner product. (1) A, A = n i=1 a i a i. Since a i a i 0 for all i (as the dot product is positive definite), we know that n i=1 a i a i 0. Moreover, if n i=1 a i a i = 0, then we need each a i a i = 0, and again using the fact that the dot product is positive definite, this means that a i = 0 for all i. And since the columns of A are all the zero vector, we get that A is the zero matrix. As such, we have shown that, is positive definite. (2) We have already shown that A, B = n i=1 b i a i, so now we want to look at B, A. By the same arguments we used for A, B, we know that the ij-th entry of A T B is the dot product of the i-th column of A with the j-th column of B. This means that tr(a T B) = n i=1 a i b i. Since dot products are symmetrical, we know that a i b i = b i a i. And thus we have the following: i=1 B, A = tr(a T B) = n i=1 a i b i = n i=1 b i a i = A, B (3) A, sb + tc = n i=1 (s b i + t c i ) a i = n i=1 s b i a i + t c i a i = s n i=1 b i a i + t n i=1 c i a i = s A, B + t A, C Before we leave this example, let s take a further look at the formula A, B = tr(b T A) = n i=1 b i a i. Because the dot product b i a i is the sum b i1 a i1 + + b im a im. But since we cycle through all possible i values when we take the inner product, what we end up with is that A, B = n n i=1 j=1 b ija ij. Don t let the double sum scare you! The important fact is what is inside: that we are taking the product of the corresponding entries of A and B, and adding them together. This formula is quite reminiscent of our formula for the dot product in R n. (And, in fact, under the standard isomorphism from M(m, n) to R mn, this is the same as the dot product.) So whenever we need to calculate tr(b T A), we know it is n [ 2 1 3 5 9 + 10 = 3. n i=1 j=1 b ija ij. So, for example, [ ] 1 0 = (2)(1) + (1)(0) + (3)( 3) + ( 5)( 2) = 2 + 0 3 2 ], Isn t that easier than multiplying the matrices? 
Note that the double sum would technically have us multiply down a column and then move to the next column,

and to reverse the order of the A and B terms. But thanks to the commutativity of addition and multiplication in the reals, we can multiply our pairs in any order we like, so long as we remember to do every pair. I choose to multiply across the rows simply because I frequently think of a matrix in terms of reading the entries along the rows first. (For example, in the standard basis for M(m, n), the solitary 1 travels across each row before going down to the next row.) You can use whatever pattern feels natural to you.

Now let's look at some examples of inner products on polynomial spaces. Again, we need to find an operation on polynomials that results in a number, not a polynomial. Most of the things we know to do with polynomials from outside linear algebra (multiplying, factoring, taking derivatives) still result in polynomials. But there is one thing we do with polynomials that results in a number: plug in a number!

Example: Show that ⟨p, q⟩ = 2p(0)q(0) + 3p(1)q(1) + 4p(2)q(2) defines an inner product on P_2.

(1) We have that ⟨p, p⟩ = 2(p(0))^2 + 3(p(1))^2 + 4(p(2))^2 is clearly greater than or equal to zero, since all the terms in the sum are nonnegative. Moreover, the only way to have 2(p(0))^2 + 3(p(1))^2 + 4(p(2))^2 = 0 is to have p(0) = 0, p(1) = 0, and p(2) = 0. If p(x) = p_0 + p_1 x + p_2 x^2, then p(0) = 0 means p_0 + p_1(0) + p_2(0) = 0, so p_0 = 0. And p(1) = 0 means p_0 + p_1(1) + p_2(1^2) = 0, which means that p_1 + p_2 = 0, or that p_1 = -p_2. Lastly, p(2) = 0 means that p_0 + p_1(2) + p_2(2^2) = 0, so 2(-p_2) + 4p_2 = 0. From this we see that p_2 = 0, which means that p_1 = 0 also. So we have shown that if ⟨p, p⟩ = 0, then p_0 = p_1 = p_2 = 0, so p is the zero polynomial. As such, we see that ⟨ , ⟩ is positive definite.

(2) ⟨p, q⟩ = 2p(0)q(0) + 3p(1)q(1) + 4p(2)q(2) = 2q(0)p(0) + 3q(1)p(1) + 4q(2)p(2) = ⟨q, p⟩, so ⟨ , ⟩ is symmetric.

(3) For any p, q, r in P_2 and s, t in R, we have the following:

⟨p, sq + tr⟩ = 2p(0)(sq + tr)(0) + 3p(1)(sq + tr)(1) + 4p(2)(sq + tr)(2)
             = 2p(0)(sq(0) + tr(0)) + 3p(1)(sq(1) + tr(1)) + 4p(2)(sq(2) + tr(2))
             = 2sp(0)q(0) + 2tp(0)r(0) + 3sp(1)q(1) + 3tp(1)r(1) + 4sp(2)q(2) + 4tp(2)r(2)
             = s(2p(0)q(0) + 3p(1)q(1) + 4p(2)q(2)) + t(2p(0)r(0) + 3p(1)r(1) + 4p(2)r(2))
             = s⟨p, q⟩ + t⟨p, r⟩

So we see that ⟨ , ⟩ is bilinear.

Example: The function ⟨p, q⟩ = 2p(0)q(0) + 3p(1)q(1) + 4p(2)q(2) is also an inner product on P_1, but it is not an inner product on P_3, or on P_n for any n > 2. The reason it isn't an inner product on the larger polynomial spaces is that it fails to be positive definite. Consider, for example, p(x) = 2x - 3x^2 + x^3. Then p(0) = 2(0) - 3(0) + 0 = 0, p(1) = 2(1) - 3(1) + 1 = 0, and p(2) = 2(2) - 3(4) + 8 = 0, so ⟨p, p⟩ = 0 even though p is not the zero polynomial.
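To see the counterexample concretely, we can evaluate ⟨p, p⟩ for p(x) = 2x - 3x^2 + x^3 (the coefficient-list representation below is an illustration, not from the notes):

```python
# Illustration: evaluate <p, q> = 2p(0)q(0) + 3p(1)q(1) + 4p(2)q(2) with
# polynomials stored as coefficient lists [p0, p1, p2, ...].
def ev(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def ip(p, q):
    return 2*ev(p, 0)*ev(q, 0) + 3*ev(p, 1)*ev(q, 1) + 4*ev(p, 2)*ev(q, 2)

p = [0, 2, -3, 1]   # p(x) = 2x - 3x^2 + x^3, a nonzero element of P3
print(ip(p, p))     # 0 -- so <,> is not positive definite on P3
assert ip(p, p) == 0
```

The same `ip` restricted to degree ≤ 2 coefficient lists is positive definite, matching the proof for P_2 above.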

How did I find p(x)? Well, I knew I needed p(0) = 0, p(1) = 0, and p(2) = 0. In P_3, writing p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3, that forms the following system of equations:

p_0 = 0
p_0 + p_1 + p_2 + p_3 = 0
p_0 + 2p_1 + 4p_2 + 8p_3 = 0

Going ahead and plugging in p_0 = 0, we focus our attention on finding a solution to the system

p_1 + p_2 + p_3 = 0
2p_1 + 4p_2 + 8p_3 = 0

We solve this system by row reducing its coefficient matrix as follows:

[1 1 1]  →  [1 1 1]  →  [1 1 1]  →  [1 0 -2]
[2 4 8]     [0 2 6]     [0 1 3]     [0 1  3]

So we see that this system has non-trivial solutions. The general solution is

[p_1]       [ 2]
[p_2] = t * [-3]
[p_3]       [ 1]

I set t = 1 to get the solution p_1 = 2, p_2 = -3, and p_3 = 1, which is the polynomial 2x - 3x^2 + x^3.
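As a final check (an illustration, not part of the notes), the solution read off from the row reduction really does satisfy both equations of the system:

```python
# Check: (p1, p2, p3) = (2, -3, 1), found by row reduction, solves
#   p1 + p2 + p3 = 0   and   2*p1 + 4*p2 + 8*p3 = 0.
p1, p2, p3 = 2, -3, 1
print(p1 + p2 + p3, 2*p1 + 4*p2 + 8*p3)  # 0 0
assert p1 + p2 + p3 == 0
assert 2*p1 + 4*p2 + 8*p3 == 0
```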