Linear Algebra Lecture Notes-II


Vikas Bist
Department of Mathematics, Panjab University, Chandigarh-160014
email: bistvikas@gmail.com
Last revised on March 5, 2018

This text is based on the lectures delivered for the B.Tech students of IIT Bhilai. These lecture notes cover the basics of Linear Algebra and can be treated as a first course in Linear Algebra. The students are suggested to go through the examples carefully and to attempt the exercises appearing in the text.

CONTENTS
1 Similarity of matrices
2 Eigenvalues and eigenvectors
3 Inner product and orthogonality
4 Adjoint of a linear transformation
5 Unitary similarity
6 Bilinear forms
7 Symmetric bilinear forms

1 SIMILARITY OF MATRICES

Let $V$ be a vector space over $K$ of dimension $n$ and let $T$ be a linear operator on $V$ (recall that a linear operator on $V$ is a linear transformation from $V$ to $V$). If $B$ is a fixed ordered basis, then we have seen that $[T]_B$ is an $n\times n$ matrix, and that $T$ is injective (hence bijective) if and only if $[T]_B$ is invertible. Here we see how the matrix of a linear operator changes when we change the basis. First we see how matrices behave when linear transformations are composed.

Let $V$, $W$ and $X$ be vector spaces over $K$, and let $T : V\to W$ and $S : W\to X$ be linear transformations, so that $S\circ T$ is defined. Let $B_1$, $B_2$ and $B_3$ be ordered bases of $V$, $W$ and $X$ respectively. Write $B_1=\{v_1,\dots,v_n\}$. Then the $j$-th column of the matrix ${}_{B_3}[S\circ T]_{B_1}$ is
$$[(S\circ T)v_j]_{B_3} = {}_{B_3}[S]_{B_2}\,[Tv_j]_{B_2} = {}_{B_3}[S]_{B_2}\,{}_{B_2}[T]_{B_1}\,[v_j]_{B_1}.$$
Also $[(S\circ T)v_j]_{B_3} = {}_{B_3}[S\circ T]_{B_1}\,[v_j]_{B_1}$. Hence, equating these last two expressions, we have:
$$(1)\qquad {}_{B_3}[S\circ T]_{B_1} = {}_{B_3}[S]_{B_2}\,{}_{B_2}[T]_{B_1}.$$

Now assume that $T$ is a linear operator on $V$, and let $B_1$ and $B_2$ be bases of $V$. We write $[T]_B$ for ${}_B[T]_B$. If $I$ is the identity operator on $V$, then using equation (1) twice:
$$[T]_{B_2} = [I\circ T\circ I]_{B_2} = {}_{B_2}[I]_{B_1}\,[T]_{B_1}\,{}_{B_1}[I]_{B_2}.$$
Also, again by (1), $I_n = [I]_{B_1} = {}_{B_1}[I]_{B_2}\,{}_{B_2}[I]_{B_1}$. Thus if $P = {}_{B_1}[I]_{B_2}$, then $P^{-1} = {}_{B_2}[I]_{B_1}$.

REMARK 1.1. Note that ${}_{B_2}[I]_{B_1}$ is the matrix whose $j$-th column consists of the scalars in $K$ obtained when the $j$-th element of $B_1$ is expressed as a linear combination of the elements of $B_2$.

PROPOSITION 1.2. Let $V$ be an $n$-dimensional vector space over a field $F$, let $T$ be a linear operator on $V$, and let $B_1$ and $B_2$ be ordered bases of $V$. Then there is an invertible matrix $P$ such that $[T]_{B_2} = P^{-1}[T]_{B_1}P$.

Square matrices $A$ and $B$ are similar if $B = P^{-1}AP$ for some invertible matrix $P$. The proposition above says that a change of basis acts as a similarity transform on the matrix of a linear operator.

PROPOSITION 1.3. Similar matrices have the same trace and the same determinant.
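Proposition 1.3 is easy to check numerically. A minimal numpy sketch (the test matrices here are arbitrary illustrations, not taken from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)    # arbitrary test matrix
P = rng.integers(-3, 4, size=(4, 4)).astype(float)
while abs(np.linalg.det(P)) < 1e-9:                   # ensure P is invertible
    P = rng.integers(-3, 4, size=(4, 4)).astype(float)

B = np.linalg.inv(P) @ A @ P                          # a matrix similar to A
print(np.isclose(np.trace(A), np.trace(B)))           # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B))) # True
```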

Thus we can define the determinant and the trace of a linear operator $T$ on $V$ by
$$\det T := \det[T]_B \qquad\text{and}\qquad \operatorname{tr} T := \operatorname{tr}[T]_B,$$
where $B$ is any ordered basis of $V$.

EXAMPLE 1.4. Let $T$ be a linear operator whose matrix with respect to a basis $B_1=\{u_1,u_2,u_3\}$ is
$$A = \begin{pmatrix} 2&1&1\\ 0&3&1\\ 0&1&3 \end{pmatrix}.$$
We find the matrix of $T$ with respect to the basis $B = \{v_1=u_1+u_2,\ v_2=u_2+u_3,\ v_3=u_3+u_1\}$. Thus
$$T(v_1) = T(u_1+u_2) = T(u_1)+T(u_2) = 3u_1+3u_2+u_3 = \tfrac52 v_1 + \tfrac12 v_2 + \tfrac12 v_3,$$
$$T(v_2) = T(u_2+u_3) = T(u_2)+T(u_3) = 2u_1+4u_2+4u_3 = v_1 + 3v_2 + v_3,$$
$$T(v_3) = T(u_3+u_1) = T(u_3)+T(u_1) = 3u_1+u_2+3u_3 = \tfrac12 v_1 + \tfrac12 v_2 + \tfrac52 v_3.$$
Hence
$$[T]_B = \begin{pmatrix} 5/2 & 1 & 1/2\\ 1/2 & 3 & 1/2\\ 1/2 & 1 & 5/2 \end{pmatrix}.$$
Now ${}_{B_1}[I]_{B} = \begin{pmatrix} 1&0&1\\ 1&1&0\\ 0&1&1 \end{pmatrix}$. If we call this matrix $P$, then $P^{-1}[T]_{B_1}P = [T]_B$. Verify that $[T]_B$ and $[T]_{B_1}$ have the same determinant and the same rank.
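A quick numerical check of Example 1.4, as a numpy sketch:

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [0., 3., 1.],
              [0., 1., 3.]])      # [T] in the basis {u1, u2, u3}
P = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])      # columns express v1, v2, v3 in the u-basis

T_B = np.linalg.inv(P) @ A @ P    # [T] in the basis {v1, v2, v3}
print(T_B)                        # [[2.5 1.  0.5] [0.5 3.  0.5] [0.5 1.  2.5]]
print(np.isclose(np.linalg.det(A), np.linalg.det(T_B)))  # True
```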

2 EIGENVALUES AND EIGENVECTORS

DEFINITION 2.1. Let $T$ be a linear operator on $V$. Then $\lambda\in K$ is an eigenvalue of $T$ if there is a non-zero $v\in V$ such that $Tv=\lambda v$. The vector $v$ is called an eigenvector corresponding to $\lambda$.

We have seen that a linear operator corresponds to a matrix and conversely, so to keep things simple we mostly deal with square matrices. Restating the above definition for matrices: let $A$ be an $n\times n$ matrix. A non-zero column vector $v$ is called an eigenvector of $A$ if there is a number $\lambda$ such that $Av=\lambda v$; the number $\lambda$ is called an eigenvalue of $A$. Thus an eigenvalue is a number $\lambda$ such that the homogeneous system $(A-\lambda I_n)x=0$ has a non-zero solution. This happens if and only if $\operatorname{rank}(A-\lambda I_n)<n$, that is, $A-\lambda I_n$ is not invertible, which is equivalent to $\det(A-\lambda I_n)=0$. This means that the eigenvalues of $A$ are precisely those values $\lambda$ for which the matrix $A-\lambda I_n$ has zero determinant. In other words, eigenvalues are the roots of the monic polynomial
$$c_A(x) := \det(xI_n - A),$$
called the characteristic polynomial of $A$. Note that eigenvectors corresponding to an eigenvalue are not unique: if $u$ is an eigenvector corresponding to $\lambda$, then $\alpha u$ is also an eigenvector corresponding to $\lambda$ for any non-zero $\alpha\in K$.

EXERCISE 2.2. Let $A\in K^{n\times n}$. Then $\lambda\in K$ is an eigenvalue if and only if $\ker(A-\lambda I_n)\neq\{0\}$. Any non-zero vector of $\ker(A-\lambda I_n)$ is an eigenvector corresponding to $\lambda$.

EXAMPLE 2.3. Find the eigenvalues and the corresponding eigenvectors of the matrix
$$A = \begin{pmatrix} 3&1&1\\ 1&3&1\\ 1&1&3 \end{pmatrix}.$$
The characteristic polynomial of $A$ is
$$\det(xI_3-A) = \begin{vmatrix} x-3&-1&-1\\ -1&x-3&-1\\ -1&-1&x-3 \end{vmatrix} = (x-5)(x-2)^2.$$
Thus the eigenvalues are $2$ and $5$. To find the eigenvectors corresponding to the eigenvalue $5$, we solve the system $(A-5I_3)x=0$. Now
$$A-5I_3 = \begin{pmatrix} -2&1&1\\ 1&-2&1\\ 1&1&-2 \end{pmatrix}.$$
To solve this, we reduce the matrix to row echelon form. Interchanging the first and the third rows and then adding the first and the second rows to the third row, we get
$$\begin{pmatrix} 1&1&-2\\ 1&-2&1\\ 0&0&0 \end{pmatrix}.$$
Now subtracting the first row from the second:
$$\begin{pmatrix} 1&1&-2\\ 0&-3&3\\ 0&0&0 \end{pmatrix}.$$
This is the row echelon form, so the rank of $A-5I_3$ is $2$ and its nullity is $1$. Thus there is one fundamental solution, given by a solution of
$$x_1+x_2-2x_3 = 0,\qquad -3x_2+3x_3 = 0.$$

Hence the solution is $x_1=x_2=x_3$, that is, $u\,(1,1,1)^t$ where $u\in K$. Thus we can take $(1,1,1)^t$ as an eigenvector corresponding to $5$.

An eigenvector corresponding to $2$ is a solution of the system $(A-2I_3)x=0$. Now
$$A-2I_3 = \begin{pmatrix} 1&1&1\\ 1&1&1\\ 1&1&1 \end{pmatrix},$$
and the row echelon form of this matrix has rank $1$ and so nullity $2$. This means that there are two fundamental solutions, that is, two linearly independent eigenvectors corresponding to the eigenvalue $2$. An eigenvector corresponding to $2$ satisfies $x_1+x_2+x_3=0$, or $x_3=-x_1-x_2$. Thus any solution of the system is
$$\begin{pmatrix} x_1\\ x_2\\ -x_1-x_2 \end{pmatrix} = x_1\begin{pmatrix}1\\0\\-1\end{pmatrix} + x_2\begin{pmatrix}0\\1\\-1\end{pmatrix}.$$
The two linearly independent eigenvectors can be taken as $(1,0,-1)^t$ and $(0,1,-1)^t$.

Note that the characteristic polynomial of an $n\times n$ matrix is of degree $n$, so an $n\times n$ matrix cannot have more than $n$ distinct eigenvalues. A matrix may have no eigenvalues at all. For example, the matrix $A=\begin{pmatrix}0&2\\1&0\end{pmatrix}$ has characteristic polynomial $x^2-2$, which has no roots in $\mathbb{Q}$; so $A$ has no eigenvalues when considered as a matrix over $\mathbb{Q}$, but it has the eigenvalues $\pm\sqrt2$ when considered as a matrix over $\mathbb{R}$. Note that if $A\in\mathbb{C}^{n\times n}$, then $A$ always has eigenvalues.

Procedure to find eigenvalues and eigenvectors:
(i) Find the characteristic polynomial of $A$.
(ii) Find all its roots; these are the eigenvalues.
(iii) For each eigenvalue $\lambda$, solve the system $(A-\lambda I_n)x=0$.
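This procedure is exactly what a numerical eigensolver automates. A numpy sketch checking Example 2.3:

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [1., 3., 1.],
              [1., 1., 3.]])

# Coefficients of the characteristic polynomial x^3 - 9x^2 + 24x - 20
print(np.poly(A))                       # [  1.  -9.  24. -20.]

vals, vecs = np.linalg.eig(A)
print(np.round(vals, 6))                # eigenvalues 5, 2, 2 (in some order)
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))  # each column solves Av = lam v
```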

EXAMPLE 2.4. Consider the matrix
$$A = \begin{pmatrix} 2&1&1\\ 1&2&1\\ 0&0&2 \end{pmatrix}.$$
The characteristic polynomial is $(x-1)(x-2)(x-3)$, so the eigenvalues are $1$, $2$, $3$.

Eigenvector corresponding to $1$: obtained by solving the system $(A-I_3)x=0$, that is,
$$\begin{pmatrix} 1&1&1\\ 1&1&1\\ 0&0&1 \end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=0.$$
Thus $x_1+x_2+x_3=0$ and $x_3=0$, so $x_2=-x_1$ and $x_3=0$. An eigenvector corresponding to $1$ is $(1,-1,0)^t$.

Eigenvector corresponding to $2$: obtained by solving the system $(A-2I_3)x=0$, that is,
$$\begin{pmatrix} 0&1&1\\ 1&0&1\\ 0&0&0 \end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=0.$$
Thus we have $x_1=x_2=-x_3$. Hence an eigenvector corresponding to $2$ is $(1,1,-1)^t$.

Eigenvector corresponding to $3$: obtained by solving the system $(A-3I_3)x=0$, that is,
$$\begin{pmatrix} -1&1&1\\ 1&-1&1\\ 0&0&-1 \end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=0.$$
Thus we have $x_1=x_2$ and $x_3=0$. Hence an eigenvector corresponding to $3$ is $(1,1,0)^t$.

PROPOSITION 2.5. Let $A$ be an $n\times n$ matrix. If $A$ has $k$ distinct eigenvalues, then eigenvectors corresponding to these eigenvalues are linearly independent.

Proof. Let $\lambda_1,\dots,\lambda_k$ be distinct eigenvalues of $A$ with corresponding eigenvectors $u_1,\dots,u_k$, so that $Au_i=\lambda_i u_i$ for $i=1,\dots,k$. Suppose $\alpha_1u_1+\cdots+\alpha_ku_k=0$ and, aiming at a contradiction, let $r$ be the largest index such that $\alpha_r\neq 0$. Multiplying both sides by $(A-\lambda_1I_n)$ we get
$$\alpha_2(\lambda_2-\lambda_1)u_2+\cdots+\alpha_r(\lambda_r-\lambda_1)u_r = 0.$$
Next multiplying by $(A-\lambda_2I_n)$, and continuing up to $(A-\lambda_{r-1}I_n)$, we get
$$(\lambda_r-\lambda_1)(\lambda_r-\lambda_2)\cdots(\lambda_r-\lambda_{r-1})\,\alpha_r u_r = 0.$$
This is a contradiction, since $(\lambda_r-\lambda_1)(\lambda_r-\lambda_2)\cdots(\lambda_r-\lambda_{r-1})\alpha_r\neq 0$ and $u_r$, being an eigenvector, is non-zero too. Hence $u_1,\dots,u_k$ are linearly independent.
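Proposition 2.5 can be checked numerically for the eigenvectors found in Example 2.4; a short numpy sketch:

```python
import numpy as np

# Columns are the eigenvectors of Example 2.4, one per eigenvalue 1, 2, 3
U = np.array([[1., 1., 1.],
              [-1., 1., 1.],
              [0., -1., 0.]])

# A non-zero determinant means the columns are linearly independent
print(np.linalg.det(U))    # 2.0
```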

Indeed, the reader can verify directly that the eigenvectors in Example 2.4 are linearly independent.

Let $A\in K^{n\times n}$ and let $\lambda$ be an eigenvalue of $A$. Assume that $u_1,\dots,u_k\in K^n$ are linearly independent eigenvectors corresponding to $\lambda$. Extend $\{u_1,\dots,u_k\}$ to a basis $\{u_1,\dots,u_k,u_{k+1},\dots,u_n\}$ of $K^n$, and let $P\in K^{n\times n}$ be the matrix with $j$-th column $u_j$. Then $P$ is invertible. Consider the matrix $P^{-1}AP$: for $j=1,\dots,k$, its $j$-th column is
$$(P^{-1}AP)e_j = (P^{-1}A)Pe_j = P^{-1}Au_j = P^{-1}(\lambda u_j) = \lambda P^{-1}Pe_j = \lambda e_j.$$
Thus
$$P^{-1}AP = \begin{pmatrix} \lambda I_k & X\\ 0 & Y \end{pmatrix}.$$
It follows that if $K^n$ has a basis consisting of eigenvectors of $A$, then $P^{-1}AP$ is a diagonal matrix. A matrix that is similar to a diagonal matrix is called a diagonalizable matrix. Thus $A\in K^{n\times n}$ is diagonalizable if and only if $K^n$ has a basis consisting only of eigenvectors of $A$. The following is immediate.

PROPOSITION 2.6. If $A\in K^{n\times n}$ has $n$ distinct eigenvalues, then $A$ is diagonalizable.

Recall that if $A$ is an $n\times n$ matrix, then $\operatorname{adj}(A)$, the adjugate (classical adjoint) of $A$, satisfies $A\,\operatorname{adj}(A) = \det(A)\,I_n$. Replacing the matrix $A$ by the matrix $xI_n-A$, we have
$$(xI_n-A)\,\operatorname{adj}(xI_n-A) = \det(xI_n-A)\,I_n = c_A(x)I_n,$$
where $c_A(x)$ is the characteristic polynomial of $A$. Now $\operatorname{adj}(xI_n-A)$ is a matrix whose entries are cofactors of $xI_n-A$, hence polynomials of degree at most $n-1$. Thus we can write
$$\operatorname{adj}(xI_n-A) = B_0 + B_1x + \cdots + B_{n-1}x^{n-1},$$
where each $B_j$ is an $n\times n$ matrix. Writing $c_A(x) = a_0 + a_1x + \cdots + a_{n-1}x^{n-1} + x^n$, we get
$$(xI_n-A)(B_0 + B_1x + \cdots + B_{n-1}x^{n-1}) = c_A(x)I_n.$$
Equating the coefficients of the powers of $x$:
$$-AB_0 = a_0I_n,\quad B_0-AB_1 = a_1I_n,\quad B_1-AB_2 = a_2I_n,\quad \dots,\quad B_{n-2}-AB_{n-1} = a_{n-1}I_n,\quad B_{n-1} = I_n.$$
Now pre-multiply the second equation by $A$, the third by $A^2$, and so on, the last by $A^n$, and add all the resulting equations. The left-hand side telescopes to zero, while the right-hand side becomes
$$a_0I_n + a_1A + \cdots + a_{n-1}A^{n-1} + A^n = 0.$$
Hence $c_A(A)=0$, where for a polynomial $p(x)=p_0+p_1x+\cdots+p_kx^k$ we write $p(A)=p_0I_n+p_1A+\cdots+p_kA^k$. This proves the following important result, called the Cayley-Hamilton Theorem.

PROPOSITION 2.7 (Cayley-Hamilton). If $A$ is an $n\times n$ matrix and $c_A(x)$ is its characteristic polynomial, then $c_A(A)=0$.

PROPOSITION 2.8. $A$ is invertible if and only if $c_A(0)\neq 0$.

Proof. Let $c_A(x) = c_0 + c_1x + \cdots + c_{n-1}x^{n-1} + x^n$. Then $c_A(0)=0$ means that $c_0=0$. Now $c_0=0$ if and only if one of the roots of the characteristic polynomial is $0$, that is, one of the eigenvalues of $A$ is $0$. This is the case exactly when there is a non-zero column vector $u$ such that $Au=0$, that is, $\operatorname{rank}(A)<n$, which is equivalent to $A$ not being invertible.

The Cayley-Hamilton Theorem can be used for finding the inverse of a matrix. The following example illustrates this fact.

EXAMPLE 2.9. Let $A = \begin{pmatrix} 3&1&1\\ 1&3&1\\ 0&0&1 \end{pmatrix}$. Then $c_A(x) = x^3-7x^2+14x-8$. Since the constant term of $c_A(x)$ is non-zero, $A$ is invertible. By the Cayley-Hamilton Theorem, $A^3-7A^2+14A-8I_3 = 0$. Now multiplying by $A^{-1}$, we have $A^2-7A+14I_3-8A^{-1} = 0$. Hence
$$A^{-1} = \tfrac18\left(A^2-7A+14I_3\right).$$
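A numerical sanity check of the Cayley-Hamilton Theorem and of Example 2.9 (a sketch; numpy's poly returns the coefficients of the characteristic polynomial):

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [1., 3., 1.],
              [0., 0., 1.]])                  # matrix of Example 2.9

coeffs = np.poly(A)                           # [ 1. -7. 14. -8.]
C = np.zeros_like(A)
for c in coeffs:                              # Horner's scheme: C becomes c_A(A)
    C = C @ A + c * np.eye(3)
print(np.allclose(C, 0))                      # True: Cayley-Hamilton

A_inv = (A @ A - 7 * A + 14 * np.eye(3)) / 8  # inverse via Cayley-Hamilton
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```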

3 INNER PRODUCT AND ORTHOGONALITY

From here onwards $F$ will denote either the field $\mathbb{R}$ or $\mathbb{C}$.

DEFINITION 3.1. An inner product on $V$ is a mapping from $V\times V$ to $F$ which maps an ordered pair $x,y$ to $(x,y)\in F$, such that the following properties hold:
(i) $(x,x)\ge 0$, and $(x,x)=0$ if and only if $x=0$;
(ii) $(\alpha x+\beta y, z) = \alpha(x,z)+\beta(y,z)$ for all $x,y,z\in V$ and $\alpha,\beta\in F$;
(iii) $(y,x) = \overline{(x,y)}$ for all $x,y\in V$, where $\bar z$ denotes the complex conjugate of $z$.
A vector space $V$ over $F$ equipped with an inner product is called an inner product space.

The following are consequences of the definition:
(1) $(x, \alpha y+\beta z) = \overline{(\alpha y+\beta z, x)} = \bar\alpha(x,y)+\bar\beta(x,z)$ for all $x,y,z\in V$ and $\alpha,\beta\in F$.
(2) $\alpha(x,y) = (\alpha x, y) = (x, \bar\alpha y)$ for all $x,y\in V$ and $\alpha\in F$.

EXERCISE 3.2. Show that in an inner product space $V$:
(i) $(x,0)=0$ for all $x\in V$.
(ii) If $x\in V$ is such that $(x,y)=0$ for all $y\in V$, then $x=0$.

EXERCISE 3.3. Show that for fixed $w\in V$, the mapping $f : V\to F$ given by $f(x)=(x,w)$ is a linear form.

EXAMPLE 3.4. 1. On $\mathbb{C}^n$ the inner product $(x,y) := y^*x$ (where $y^*$ denotes the conjugate transpose of $y$) is called the standard inner product. The corresponding standard inner product on $\mathbb{R}^n$ is clearly $(x,y) := y^tx$.
2. A generalization of the standard inner product: let $A$ be an invertible $n\times n$ matrix. Then the following is also an inner product: $(x,y)_A := (Ay)^*(Ax)$ for all $x,y\in V$.
3. On $F^{m\times n}$, the space of $m\times n$ matrices with entries from $F$, the following is an inner product: $(A,B) = \operatorname{tr}(B^*A)$. If $n=1$ this reduces to the standard inner product of vectors; it is for this reason that it is referred to as the standard inner product of matrices.
4. Let $V=\mathbb{R}_n[x]$, the space of polynomials with coefficients in $\mathbb{R}$ and of degree at most $n$. Then there is an inner product on $V$ given by
$$(p,q) = \int_0^1 p(t)q(t)\,dt.$$

REMARK 3.5. We normally deal here with the standard inner product space $\mathbb{R}^n$. Whenever we speak of the inner product space $F^n$ without mentioning the inner product, we mean the standard inner product space.
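A small numpy sketch of the inner products in Example 3.4 (the particular vectors and matrices are arbitrary illustrations):

```python
import numpy as np

x = np.array([1 + 2j, 3j])
y = np.array([2 - 1j, 1 + 0j])

print(np.vdot(y, x))                # standard inner product y* x on C^2

A = np.array([[2., 1.], [0., 1.]])  # any invertible A gives (x, y)_A = (Ay)*(Ax)
print(np.vdot(A @ y, A @ x))

M = np.array([[1., 2.], [3., 4.]])  # trace inner product on 2x2 matrices
N = np.array([[0., 1.], [1., 0.]])
print(np.trace(N.conj().T @ M))     # (M, N) = tr(N* M)
```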

If $V$ is an inner product space, for $x\in V$ define the norm of $x$ as $\|x\| := \sqrt{(x,x)}$. A vector $x$ such that $\|x\|=1$ is called a unit vector. The following is an important inequality (the Cauchy-Schwarz inequality): $|(x,y)|\le\|x\|\,\|y\|$ for all $x,y\in V$.

Vectors $x,y$ in an inner product space are called orthogonal if $(x,y)=0$.

EXERCISE 3.6. In $\mathbb{R}^3$ find all vectors orthogonal to $(1,1,1)^t$.

A subset $\{u_1,\dots,u_k\}$ is an orthogonal set if $(u_i,u_j)=0$ for $i\neq j$, and is called an orthonormal set if it is orthogonal and every vector in it is a unit vector, that is,
$$(u_i,u_j) = \delta_{ij} = \begin{cases} 1 & \text{if } i=j,\\ 0 & \text{if } i\neq j.\end{cases}$$

EXAMPLE 3.7. The standard basis of $F^n$ is an orthonormal basis.

PROPOSITION 3.8. Every orthonormal set in an inner product space is linearly independent.

Proof. Let $\{u_1,\dots,u_k\}$ be an orthonormal set and suppose that $\alpha_1u_1+\cdots+\alpha_ku_k=0$. Then $\alpha_l = \left(\sum_{i=1}^k \alpha_iu_i,\, u_l\right) = 0$ for each $l$.

Our next result shows that every finite dimensional inner product space has an orthonormal basis. Below, $\langle\,\cdot\,\rangle$ denotes linear span.

PROPOSITION 3.9 (Gram-Schmidt orthogonalization procedure). Let $\{x_1,\dots,x_k\}$ be an ordered linearly independent set. Then there is an orthonormal set $\{u_1,\dots,u_k\}$ such that $\langle x_1,\dots,x_l\rangle = \langle u_1,\dots,u_l\rangle$ for all $l=1,\dots,k$.

Proof. Let $u_1 = \frac{x_1}{\|x_1\|}$. Suppose that a mutually orthonormal set of vectors $\{u_1,\dots,u_l\}$ has been constructed such that $\langle x_1,\dots,x_l\rangle = \langle u_1,\dots,u_l\rangle$. Define
$$y_{l+1} = x_{l+1} - \sum_{i=1}^{l}(x_{l+1},u_i)u_i.$$
Then $(y_{l+1},u_i)=0$ for all $i=1,\dots,l$, and $y_{l+1}\neq 0$ since $x_{l+1}\notin\langle x_1,\dots,x_l\rangle$. Now let $u_{l+1} = \frac{y_{l+1}}{\|y_{l+1}\|}$; then clearly $\{u_1,\dots,u_{l+1}\}$ is an orthonormal set. From the equation above it follows that $x_{l+1}\in\langle y_{l+1},u_1,\dots,u_l\rangle = \langle u_1,\dots,u_{l+1}\rangle$, and so $\langle x_1,\dots,x_{l+1}\rangle\subseteq\langle u_1,\dots,u_{l+1}\rangle$. Also from the same equation,
$$\|y_{l+1}\|\,u_{l+1} = x_{l+1} - \sum_{i=1}^{l}(x_{l+1},u_i)u_i,$$
therefore $u_{l+1}\in\langle u_1,\dots,u_l,x_{l+1}\rangle = \langle x_1,\dots,x_{l+1}\rangle$. Hence $\langle u_1,\dots,u_{l+1}\rangle\subseteq\langle x_1,\dots,x_{l+1}\rangle$.

Procedure for generating an orthonormal set, given an ordered linearly independent set $\{x_1,\dots,x_k\}$:
Write $u_1 = \frac{x_1}{\|x_1\|}$ (normalize the vector). Once $u_1,\dots,u_i$ are done, define
$$y_{i+1} = x_{i+1} - \big((x_{i+1},u_1)u_1 + \cdots + (x_{i+1},u_i)u_i\big),\qquad u_{i+1} = \frac{y_{i+1}}{\|y_{i+1}\|}.$$
Then $\{u_1,\dots,u_k\}$ is an orthonormal set.

EXAMPLE 3.10. Find an orthonormal basis for $\mathbb{R}^3$ from the given basis $\{(1,1,1)^t,\ (1,1,0)^t,\ (1,0,0)^t\}$. Write $x_1=(1,1,1)^t$, $x_2=(1,1,0)^t$, $x_3=(1,0,0)^t$. Then
$$u_1 = \frac{1}{\sqrt3}\begin{pmatrix}1\\1\\1\end{pmatrix},\qquad
y_2 = x_2-(x_2,u_1)u_1 = \begin{pmatrix}1\\1\\0\end{pmatrix}-\frac23\begin{pmatrix}1\\1\\1\end{pmatrix} = \frac13\begin{pmatrix}1\\1\\-2\end{pmatrix},\qquad
u_2 = \frac{1}{\sqrt6}\begin{pmatrix}1\\1\\-2\end{pmatrix},$$
$$y_3 = x_3-(x_3,u_1)u_1-(x_3,u_2)u_2 = \begin{pmatrix}1\\0\\0\end{pmatrix}-\frac13\begin{pmatrix}1\\1\\1\end{pmatrix}-\frac16\begin{pmatrix}1\\1\\-2\end{pmatrix} = \frac12\begin{pmatrix}1\\-1\\0\end{pmatrix},\qquad
u_3 = \frac{1}{\sqrt2}\begin{pmatrix}1\\-1\\0\end{pmatrix}.$$
Hence an orthonormal basis is $\left\{\frac{1}{\sqrt3}(1,1,1)^t,\ \frac{1}{\sqrt6}(1,1,-2)^t,\ \frac{1}{\sqrt2}(1,-1,0)^t\right\}$.

EXERCISE 3.11. Show that if $\{u_1,\dots,u_n\}$ is an orthonormal basis for $V$ and $v\in V$, then $v = \sum_{i=1}^{n}(v,u_i)u_i$.
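The procedure translates directly into code; a numpy sketch reproducing Example 3.10:

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X (assumed linearly independent)."""
    U = []
    for x in X.T:
        y = x - sum(np.dot(x, u) * u for u in U)  # subtract projections
        U.append(y / np.linalg.norm(y))           # normalize
    return np.column_stack(U)

X = np.array([[1., 1., 1.],
              [1., 1., 0.],
              [1., 0., 0.]]).T        # columns x1, x2, x3 from Example 3.10
U = gram_schmidt(X)
print(np.round(U, 4))                 # columns u1, u2, u3
print(np.allclose(U.T @ U, np.eye(3)))  # True: the columns are orthonormal
```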

PROPOSITION 3.12. Let $V$ be an inner product space over $F$ and let $S\subseteq V$. Define
$$S^\perp = \{u\in V : (u,x)=0 \text{ for all } x\in S\}.$$
Then $S^\perp$ is a subspace of $V$.

Proof. If $u,w\in S^\perp$, then for $\alpha,\beta\in F$ we have $(\alpha u+\beta w, x) = \alpha(u,x)+\beta(w,x) = 0$ for all $x\in S$. Hence $\alpha u+\beta w\in S^\perp$, and so $S^\perp$ is a subspace of $V$.

EXERCISE 3.13. Let $V$ be an inner product space and let $X,Y\subseteq V$. Prove that:
(i) If $X\subseteq Y$, then $Y^\perp\subseteq X^\perp$.
(ii) $X^\perp = \langle X\rangle^\perp$.
(iii) If $X$ is a basis of a subspace $W$ of $V$, then $X^\perp = W^\perp$.
(iv) If $W$ is a subspace of $V$, then $(W^\perp)^\perp = W$.
Caution: (iv) is not true if $X$ is merely a subset of $V$, since $(X^\perp)^\perp$ is always a subspace.

PROPOSITION 3.14. Let $w_1,\dots,w_k$ be linearly independent vectors in an inner product space $V$. Then $\dim\{w_1,\dots,w_k\}^\perp = \dim(V)-k$.

Proof. Let $\dim V = n$, let $W = \langle w_1,\dots,w_k\rangle$, and let $\{u_1,\dots,u_k\}$ be an orthonormal basis of $W$. Then $\{w_1,\dots,w_k\}^\perp = W^\perp = \{u_1,\dots,u_k\}^\perp$. Extend $u_1,\dots,u_k$ to a basis $u_1,\dots,u_k,x_{k+1},\dots,x_n$ of $V$, and by Gram-Schmidt take the corresponding orthonormal basis $u_1,\dots,u_k,u_{k+1},\dots,u_n$. Now if $v = \sum_{i=1}^{n}\alpha_iu_i\in W^\perp$, then $\alpha_j = (v,u_j) = 0$ for $j=1,\dots,k$. Hence $v\in\langle u_{k+1},\dots,u_n\rangle$; conversely, each of $u_{k+1},\dots,u_n$ lies in $W^\perp$. Therefore $\dim W^\perp = n-k$.

We leave it to the student to verify the following. Let $W$ be a subspace of an inner product space $V$ and let $\{w_1,\dots,w_k\}$ be an orthonormal basis for $W$. Then for any $v\in V$:
$$v - \sum_{j=1}^{k}(v,w_j)w_j \in W^\perp.$$
The vector $\sum_{j=1}^{k}(v,w_j)w_j\in W$ is called the orthogonal projection of $v$ onto $W$ (and along $W^\perp$).

EXAMPLE 3.15. Let
$$W = \left\{(x_1,x_2,x_3,x_4)^t\in\mathbb{R}^4 : x_1+x_2+x_3+x_4 = 0,\ 2x_1-x_3-x_4 = 0\right\}$$
be a subspace of $\mathbb{R}^4$. Find $W^\perp$. Also find the orthogonal projection of $e_1$ onto $W^\perp$ (along $W$). Observe that
$$W^\perp = \left\langle\, y_1 = (1,1,1,1)^t,\ y_2 = (2,0,-1,-1)^t \,\right\rangle.$$
These two vectors are already orthogonal. Thus an orthonormal basis for $W^\perp$ is $\{\tfrac12 y_1,\ \tfrac{1}{\sqrt6}y_2\}$. Now the orthogonal projection of $e_1$ onto $W^\perp$ is
$$(e_1,\tfrac12 y_1)\tfrac12 y_1 + (e_1,\tfrac{1}{\sqrt6}y_2)\tfrac{1}{\sqrt6}y_2 = \tfrac14 y_1 + \tfrac13 y_2.$$

EXERCISE 3.16. Let $W$ be a subspace of $V$. Show that $W\cap W^\perp = \{0\}$.
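A numpy check of Example 3.15 (a sketch):

```python
import numpy as np

y1 = np.array([1., 1., 1., 1.])
y2 = np.array([2., 0., -1., -1.])
print(np.dot(y1, y2))                      # 0.0: already orthogonal

e1 = np.array([1., 0., 0., 0.])
u1, u2 = y1 / 2, y2 / np.sqrt(6)           # orthonormal basis of W-perp
proj = np.dot(e1, u1) * u1 + np.dot(e1, u2) * u2
print(np.allclose(proj, y1 / 4 + y2 / 3))  # True: proj = (1/4) y1 + (1/3) y2
```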

PROPOSITION 3.17. Let $W$ be a subspace of $V$. Then to each $v\in V$ there are unique vectors $v_1$ and $v_2$ such that $v = v_1+v_2$, $v_1\in W$ and $v_2\in W^\perp$.

Proof. Let $\{w_1,\dots,w_k\}$ be an orthonormal basis of $W$. Then $v_2 = v-\sum_{j=1}^{k}(v,w_j)w_j\in W^\perp$, so $v = v_1+v_2$ where $v_1 = \sum_{j=1}^{k}(v,w_j)w_j\in W$. If $v = v_1+v_2 = v_1'+v_2'$ with $v_1,v_1'\in W$ and $v_2,v_2'\in W^\perp$, then $v_1-v_1' = v_2'-v_2\in W\cap W^\perp = \{0\}$. Hence uniqueness.

Let $V$ be an inner product space and let $W$ be a subspace of $V$. The above proposition proves that for a given $v\in V$ there are unique vectors $v_1\in W$ and $v_2\in W^\perp$ such that $v = v_1+v_2$. Thus we have a mapping $\Pr_W : V\to V$ given by $\Pr_W(v) = v_1$, called the orthogonal projection of $V$ onto $W$. Thus if $v\in V$, then $\Pr_W(v)\in W$ and so $v-\Pr_W(v)\in W^\perp$. Also, if $\{w_1,\dots,w_k\}$ is an orthonormal basis of $W$, then by uniqueness it follows that
$$\Pr{}_W(v) = \sum_{i=1}^{k}(v,w_i)w_i.$$

EXERCISE 3.18. (i) Verify that $\Pr_W$ is a linear operator on $V$.
(ii) Observe that $\Pr_W(w) = w$ for all $w\in W$. Thus $\Pr_W(\Pr_W(v)) = \Pr_W(v)$ for all $v\in V$, that is, $\Pr_W^2 = \Pr_W$.
(iii) Let $X = \{w_1,\dots,w_n\}$ be an orthonormal basis of $V$ such that $\{w_1,\dots,w_k\}$ is an orthonormal basis of $W$. Show that
$$[\Pr{}_W]_X = \begin{pmatrix} I_k & 0\\ 0 & 0 \end{pmatrix}.$$
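Exercise 3.18 can be explored numerically. A sketch building the matrix of $\Pr_W$ for the plane spanned by the first two orthonormal vectors of Example 3.10 and checking idempotence:

```python
import numpy as np

w1 = np.array([1., 1., 1.]) / np.sqrt(3)
w2 = np.array([1., 1., -2.]) / np.sqrt(6)

P = np.outer(w1, w1) + np.outer(w2, w2)  # matrix of Pr_W in the standard basis
print(np.allclose(P @ P, P))             # True: Pr_W^2 = Pr_W

v = np.array([3., 1., 2.])
print(np.allclose(P @ (P @ v), P @ v))   # projecting twice changes nothing
```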

4 ADJOINT OF A LINEAR TRANSFORMATION

Exercise 3.3 shows that if $V$ is an inner product space, then for fixed $w\in V$ the map $v\mapsto(v,w)$ is a linear functional. The next result states that the converse also holds.

PROPOSITION 4.1 (Riesz representation theorem). Let $V$ be an $n$-dimensional inner product space and let $f$ be a linear functional on $V$. Then there is a unique $y\in V$ such that $f(x) = (x,y)$ for all $x\in V$.

Proof. If $f=0$, take $y=0$; so assume $f\neq 0$. Then $\ker f$ is $(n-1)$-dimensional, so $(\ker f)^\perp$ is one-dimensional (Proposition 3.14). Let $u$ be a unit vector in $(\ker f)^\perp$. We look for $y$ of the form $\alpha u$ for a suitable $\alpha\in F$: if $f(x) = (x,\alpha u)$ for all $x$, then $f(u) = (u,\alpha u) = \bar\alpha$, that is, $\alpha = \overline{f(u)}$. Now we verify that $y = \overline{f(u)}\,u$ is the required vector. Indeed, each $v\in V$ can be written as $v = v_0+\gamma u$ with $v_0\in\ker f$; then $f(v) = \gamma f(u)$, while $(v,y) = (v_0+\gamma u,\ \overline{f(u)}u) = f(u)\,\gamma\,(u,u) = \gamma f(u)$. Hence $f(v) = (v,y)$ for all $v\in V$. Finally, if $w\in V$ is another such vector, then $f(v) = (v,y) = (v,w)$ for all $v\in V$; but then $(v, y-w) = 0$ for all $v\in V$, and so $y=w$.

Let $V$ and $W$ be inner product spaces of dimensions $n$ and $m$ respectively, and let $T : V\to W$ be a linear transformation. Let $y\in W$ be fixed. Then $f_T : V\to F$ given by $f_T(v) = (Tv,y)$ is a linear form. By Proposition 4.1 there is a unique vector $x\in V$ such that $f_T(v) = (v,x)$ for all $v\in V$. Thus for each $y\in W$ there is a unique such vector $x\in V$. This defines a map $T^* : W\to V$ given by $T^*(y) = x$. In other words,
$$(T(v),y) = (v,T^*(y))\qquad\text{for all } v\in V.$$
The map $T^*$ is called the adjoint of $T$.

PROPOSITION 4.2. $T^* : W\to V$ is linear.

Proof. For $\alpha,\beta\in F$ and $y,z\in W$:
$$(v,T^*(\alpha y+\beta z)) = (T(v),\alpha y+\beta z) = \bar\alpha(T(v),y)+\bar\beta(T(v),z) = \bar\alpha(v,T^*(y))+\bar\beta(v,T^*(z)) = (v,\alpha T^*(y)+\beta T^*(z))$$
for all $v\in V$. Hence $T^*(\alpha y+\beta z) = \alpha T^*(y)+\beta T^*(z)$ for all $\alpha,\beta\in F$ and $y,z\in W$.

EXERCISE 4.3. Prove that $(T^*)^* = T$.

Let $V$ and $W$ be inner product spaces of dimensions $n$ and $m$ respectively, and let $\mathcal{V}=\{v_1,\dots,v_n\}$ and $\mathcal{W}=\{w_1,\dots,w_m\}$ be orthonormal bases for $V$ and $W$. Then $T(v_j) = \sum_{i=1}^{m}(T(v_j),w_i)w_i$. Therefore ${}_{\mathcal W}[T]_{\mathcal V}$ is an $m\times n$ matrix with $(i,j)$-th entry $(T(v_j),w_i)$. Likewise ${}_{\mathcal V}[T^*]_{\mathcal W}$ is an $n\times m$ matrix with $(i,j)$-th entry $(T^*(w_j),v_i)$. Now
$$(T^*(w_j),v_i) = \overline{(v_i,T^*(w_j))} = \overline{(T(v_i),w_j)}.$$
This proves the following: the $(i,j)$-th entry of ${}_{\mathcal V}[T^*]_{\mathcal W}$ is the conjugate of the $(j,i)$-th entry of ${}_{\mathcal W}[T]_{\mathcal V}$. Thus if $A$ is the matrix of $T$ with respect to some orthonormal bases, then the matrix of $T^*$ is the conjugate transpose of $A$.

EXERCISE 4.4. Prove the following properties of the adjoint:
(i) If $S,T\in L(V,W)$, then $(\alpha S+T)^* = \bar\alpha S^*+T^*$ for $\alpha\in F$.
(ii) $(ST)^* = T^*S^*$.
(iii) If $T\in L(V)$ and $T$ is invertible, then $(T^{-1})^* = (T^*)^{-1}$.

EXAMPLE 4.5. Let $T : \mathbb{R}^3\to\mathbb{R}^2$ be given by
$$T\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} = \begin{pmatrix} x_1+2x_2+3x_3\\ 4x_1+5x_2+6x_3 \end{pmatrix}.$$
Then the matrix of $T$ with respect to the standard bases is $\begin{pmatrix} 1&2&3\\ 4&5&6 \end{pmatrix}$. Therefore the matrix of $T^*$ with respect to the same bases is $\begin{pmatrix} 1&4\\ 2&5\\ 3&6 \end{pmatrix}$. Hence $T^* : \mathbb{R}^2\to\mathbb{R}^3$ is given by
$$T^*\begin{pmatrix}y_1\\y_2\end{pmatrix} = \begin{pmatrix} y_1+4y_2\\ 2y_1+5y_2\\ 3y_1+6y_2 \end{pmatrix}.$$

EXERCISE 4.6. Let $T : \mathbb{C}^3\to\mathbb{C}^2$ be given by
$$T\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix} x+iy\\ (1+i)y+3z \end{pmatrix}.$$
Find $T^*\begin{pmatrix}1+i\\ i\end{pmatrix}$.

DEFINITION 4.7. A linear operator $T$ on an inner product space $V$ is called self-adjoint if $T^* = T$. Thus $T$ is self-adjoint if and only if the matrix of $T$ with respect to an orthonormal basis is Hermitian, that is, equal to its own conjugate transpose.
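A numpy check of the defining identity $(Tv,y) = (v,T^*y)$ for Example 4.5 (a sketch; recall the standard inner product $(a,b) = b^*a$):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])     # matrix of T from Example 4.5
A_star = A.conj().T              # matrix of the adjoint T*

rng = np.random.default_rng(1)
v = rng.standard_normal(3)
y = rng.standard_normal(2)

# (Tv, y) = y*(Av) and (v, T*y) = (A* y)* v must agree
print(np.isclose(np.vdot(y, A @ v), np.vdot(A_star @ y, v)))  # True
```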

5 UNITARY SIMILARITY x EXERCISE 4.6. Let T : C 3 C given by T y = z [ ] + ι T. ι [ ] x + ιy. Find (i + ι)y + 3z DEFINITION 4.7. A linear operator on an inner product space V is called self adjoint if T = T. Thus T is self adjoint if and only if the matrix of the matrix of T with respect to orthonormal basis is Hermitian 4 5 UNITARY SIMILARITY Let us assume from here onwards that all matrices have entries in C and C n has the standard inner product. For an n n matrix, write A for the conjugate transpose of A. A matrix with orthonormal columns is called a unitary matrix. Thus columns of a unitary matrix form an orthonormal basis. Note that the (i, j)-th entry of the matrix A A is the inner (standard) product of the i-th and j-th columns, it follows that A is unitary if and only if A A = I n. REMARK 5.. An n n matrix such that A t A = I n is called an orthogonal [ matrix. ] An orthogonal matrix need not be unitary. For example ι 3. However, if A R ι n n, then A is orthogonal if and only if unitary. EXERCISE 5.. Show that the inverse of a unitary matrix is unitary, and the product of two unitary matrices is unitary. Show by example that the sum of two unitary matrices is not unitary. DEFINITION 5.3. Matrices A, B C n n are called unitary similar if there is a unitary matrix U such that U AU = B. A C n n is unitary diagonalizable if A is unitary similar to a diagonal matrix. PROPOSITION 5.4 (Schur). Let A C n n. Then A is unitary similar to an upper triangular matrix. Proof. By induction onn. If n = the statement is obvious. Assume that the statement is true for all matrices of order less than n. Let λ be an eigenvalue of A and u be corresponding eigenvector of norm.. Let U be a unitary matrix 4 A matrix A is Hermitian if it is equal to its conjugate transpose, that is A = A t.. 4

Note that the diagonal entries of an upper triangular matrix are its eigenvalues. Hence we have the following statement: if $A\in\mathbb{C}^{n\times n}$, then $\det(A)$ is the product of the eigenvalues of $A$, and the trace of $A$ is the sum of its eigenvalues.

An immediate consequence of Proposition 5.4 is the following.

PROPOSITION 5.5. A unitary matrix is unitarily diagonalizable.

EXERCISE 5.6. Let $T$ be an upper triangular $n\times n$ matrix such that $TT^* = T^*T$. Show that $T$ is a diagonal matrix.

DEFINITION 5.7. An $n\times n$ matrix $A$ is normal if $AA^* = A^*A$.

PROPOSITION 5.8 (Spectral decomposition). An $n\times n$ matrix $A$ is unitarily diagonalizable if and only if $A$ is normal.

Proof. Suppose $A$ is a normal matrix. Then by Proposition 5.4 there is a unitary matrix $U$ such that $U^*AU = T$, an upper triangular matrix. Now
$$TT^* = (U^*AU)(U^*A^*U) = U^*(AA^*)U = U^*(A^*A)U = (U^*A^*U)(U^*AU) = T^*T.$$
Hence, by Exercise 5.6, $T$ is a diagonal matrix. Conversely, if $U^*AU = D$, a diagonal matrix, then $A = UDU^*$. Since any two diagonal matrices commute, $DD^* = D^*D$. Thus
$$AA^* = (UDU^*)(UD^*U^*) = U(DD^*)U^* = U(D^*D)U^* = A^*A,$$
and $A$ is normal.

PROPOSITION 5.9. Let $A$ be a normal matrix. Then:
(i) $\|Au\| = \|A^*u\|$ for every $u\in\mathbb{C}^n$.
(ii) If $Au = \lambda u$, then $A^*u = \bar\lambda u$, where $\lambda\in\mathbb{C}$.
(iii) Eigenvectors corresponding to distinct eigenvalues are orthogonal.

Proof. (i) $\|Au\|^2 = (Au)^*(Au) = u^*(A^*A)u = u^*(AA^*)u = (A^*u)^*(A^*u) = \|A^*u\|^2$. Hence $\|Au\| = \|A^*u\|$.
(ii) $A-\lambda I_n$ is also normal. Thus using (i): $0 = \|(A-\lambda I_n)u\| = \|(A-\lambda I_n)^*u\| = \|(A^*-\bar\lambda I_n)u\|$. Hence $A^*u = \bar\lambda u$.
(iii) Let $Au = \lambda u$ and $Av = \mu v$ with $\lambda\neq\mu$. Then, using (ii),
$$\lambda\,v^*u = v^*(Au) = (A^*v)^*u = (\bar\mu v)^*u = \mu\,v^*u.$$
Hence $v^*u = 0$.
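A numpy sketch of Definition 5.7 and Proposition 5.9(i), using a permutation matrix as an arbitrary example of a normal matrix that is not Hermitian:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])     # a permutation matrix: normal, not Hermitian

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A is normal

rng = np.random.default_rng(3)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(np.linalg.norm(A @ u),
                 np.linalg.norm(A.conj().T @ u)))   # True: |Au| = |A*u|
```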

EXAMPLE 5.10. The matrix
$$A = \begin{pmatrix} 2&1&1\\ 1&2&1\\ 1&1&2 \end{pmatrix}$$
is Hermitian, and so unitarily diagonalizable. The eigenvalues of this matrix are $1$, $1$ and $4$. Eigenvectors corresponding to $1$ are $(1,-1,0)^t$ and $(1,0,-1)^t$, and an eigenvector corresponding to $4$ is $(1,1,1)^t$. By Proposition 5.9, eigenvectors for distinct eigenvalues are already orthogonal, so we only need to find orthonormal eigenvectors corresponding to $1$; these can be $\frac{1}{\sqrt2}(1,-1,0)^t$ and $\frac{1}{\sqrt6}(1,1,-2)^t$. Hence
$$U = \begin{pmatrix} 1/\sqrt2 & 1/\sqrt6 & 1/\sqrt3\\ -1/\sqrt2 & 1/\sqrt6 & 1/\sqrt3\\ 0 & -2/\sqrt6 & 1/\sqrt3 \end{pmatrix},\qquad U^*AU = \operatorname{diag}(1,1,4).$$

6 BILINEAR FORMS

Here $V$ is a vector space over a field $K$.

DEFINITION 6.1. A bilinear form on $V$ is a mapping $f : V\times V\to K$ such that
(i) $f(\alpha x+\beta y, z) = \alpha f(x,z)+\beta f(y,z)$ for all $\alpha,\beta\in K$ and $x,y,z\in V$;
(ii) $f(x, \alpha y+\beta z) = \alpha f(x,y)+\beta f(x,z)$ for all $\alpha,\beta\in K$ and $x,y,z\in V$.

REMARK 6.2. If $f : V\times V\to K$ is a bilinear form, then:
(i) $f(x,0) = 0$ for all $x\in V$, and $f(0,y) = 0$ for all $y\in V$;
(ii) $f(\alpha x, y) = \alpha f(x,y) = f(x,\alpha y)$ for $\alpha\in K$ and $x,y\in V$.

EXAMPLE 6.3. 1. Let $A\in K^{n\times n}$. Then $f : K^n\times K^n\to K$ given by $f(x,y) = y^tAx$ is a bilinear form.
2. An inner product on $\mathbb{R}^n$ is a bilinear form.

Consider a bilinear form $f : V\times V\to K$ and let $B = \{u_1,\dots,u_n\}$ be a basis of $V$. For $x,y\in V$ we have $x = \sum_{i=1}^{n}\alpha_iu_i$ and $y = \sum_{i=1}^{n}\gamma_iu_i$, and thus
$$f(x,y) = \sum_{i,j=1}^{n}\alpha_i\gamma_j\,f(u_i,u_j) = [x]_B^t\,A\,[y]_B,$$
where $A$ is the $n\times n$ matrix whose $(i,j)$-th entry is $f(u_i,u_j)$. The matrix $A$ is called the matrix of the bilinear form $f$ with respect to the ordered basis $B$, denoted by $[f]_B$.

EXERCISE 6.4. Show that if $f,g : V\times V\to K$ are bilinear forms and $B$ is a fixed basis of $V$, then $[f+g]_B = [f]_B+[g]_B$ and $[\alpha f]_B = \alpha[f]_B$ for all $\alpha\in K$.
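A sketch of the matrix representation of a bilinear form in the standard basis of $\mathbb{R}^3$ (the particular matrix is an arbitrary illustration):

```python
import numpy as np

M = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])     # [f]_B for the form f(x, y) = x^t M y

def f(x, y):
    return x @ M @ y             # the bilinear form itself

x = np.array([1., 2., 3.])
y = np.array([0., 1., 1.])
# Bilinearity in the first argument, say:
print(np.isclose(f(2*x + y, y), 2*f(x, y) + f(y, y)))  # True
```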

EXERCISE 6.5. Let $B = \{u_1,\dots,u_n\}$ be a basis of $V$ and let $\varphi : B\times B\to K$ be any map. For any $x,y\in V$ we have unique representations $x = \sum_{i=1}^{n}\alpha_iu_i$ and $y = \sum_{i=1}^{n}\beta_iu_i$. Verify that if
$$f(x,y) = \sum_{i,j=1}^{n}\alpha_i\beta_j\,\varphi(u_i,u_j),$$
then $f$ is a bilinear form on $V$.

EXERCISE 6.6. Let $T : V\to V^*$ be a linear transformation, where $V^*$ is the space of all linear forms on $V$. Define $f : V\times V\to K$ by $f(x,y) = T(y)(x)$. Show that $f$ is a bilinear form.

PROPOSITION 6.7. Let $B = \{u_1,\dots,u_n\}$ and $C = \{v_1,\dots,v_n\}$ be bases of $V$ and let $f : V\times V\to K$ be a bilinear form on $V$. Then there is an invertible matrix $P$ such that $[f]_B = P^t[f]_C P$.

Proof. Let $P = {}_C[I]_B$, so that $[x]_C = P[x]_B$ for all $x\in V$. Then for any $x,y\in V$:
$$f(x,y) = [x]_C^t\,[f]_C\,[y]_C = (P[x]_B)^t\,[f]_C\,(P[y]_B) = [x]_B^t\,\big(P^t[f]_C P\big)\,[y]_B.$$
Hence $[f]_B = P^t[f]_C P$. This is the change of basis formula for bilinear forms, valid for any two ordered bases $B$ and $C$.

DEFINITION 6.8. A bilinear form $f : V\times V\to K$ is called symmetric if $f(x,y) = f(y,x)$ for all $x,y\in V$, and skew-symmetric if $f(x,y) = -f(y,x)$ for all $x,y\in V$.

EXERCISE 6.9. Let $V$ be a vector space over $K$ and let $B$ be a fixed basis. Show that $f : V\times V\to K$ is a symmetric (skew-symmetric) bilinear form if and only if $[f]_B$ is a symmetric (skew-symmetric) matrix.

PROPOSITION 6.10. Let $K$ be a field of characteristic not equal to $2$. Then every bilinear form on $V$ over $K$ is the sum of a symmetric and a skew-symmetric bilinear form.

Proof. Define $g(x,y) = \frac12\big(f(x,y)+f(y,x)\big)$ and $h(x,y) = \frac12\big(f(x,y)-f(y,x)\big)$. Then verify that $g$ is a symmetric bilinear form on $V$ and $h$ is a skew-symmetric bilinear form on $V$ such that $f = g+h$.
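In matrix terms, Proposition 6.10 is the familiar splitting $M = \frac12(M+M^t) + \frac12(M-M^t)$; a numpy sketch with an arbitrary test matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.integers(-3, 4, size=(3, 3)).astype(float)  # matrix of some bilinear form

G = (M + M.T) / 2           # symmetric part
H = (M - M.T) / 2           # skew-symmetric part

print(np.allclose(G, G.T))  # True
print(np.allclose(H, -H.T)) # True
print(np.allclose(G + H, M))# True: f = g + h
```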

EXERCISE 6.11. Let $K$ be a field of characteristic not equal to $2$. Show that if $f : V\times V\to K$ is a skew-symmetric bilinear form, then $f(x,x) = 0$ for all $x\in V$.

7 SYMMETRIC BILINEAR FORMS

In this section we study bilinear forms over $\mathbb{R}$. Let $V$ be an $n$-dimensional vector space over $\mathbb{R}$ and let $f : V\times V\to\mathbb{R}$ be a symmetric bilinear form. Let $[f]$ be the matrix of $f$ with respect to the standard basis, so that $f(x,y) = [x]^t[f][y]$. Since $f$ is a symmetric bilinear form, $[f]$ is a symmetric matrix, and hence the eigenvalues of $[f]$ are all real. Let $p$ be the number of positive eigenvalues of $[f]$, let $q$ be the number of negative eigenvalues, and let $r$ be the rank of the matrix $[f]$, so that $r = p+q$. Write $s = n-r$. There is an orthogonal matrix $U$ such that
$$U^t[f]U = \operatorname{diag}(\lambda_1,\dots,\lambda_p,\ -\lambda_{p+1},\dots,-\lambda_{p+q},\ 0,\dots,0),\qquad \lambda_i > 0.$$
Let $Q = \operatorname{diag}\big(1/\sqrt{\lambda_1},\dots,1/\sqrt{\lambda_{p+q}},\ 1,\dots,1\big)$. Then with $P = UQ$ we have
$$P^t[f]P = \operatorname{diag}(I_p,\ -I_q,\ 0_s).$$
The triple $(p,q,s)$ is called the index of $f$, the number $p+q$ is the rank of $f$, and the number $p-q$ is called the signature of $f$.

Let $V$ be a finite dimensional vector space over $\mathbb{R}$.
(i) Every symmetric bilinear form $f$ on $V$ can be represented by a block diagonal matrix $\operatorname{diag}(I_p, -I_q, 0_s)$, where $p$, $q$, $s$ are uniquely determined by $f$.
(ii) Two symmetric bilinear forms are equivalent if and only if they have the same index.
(iii) Two symmetric bilinear forms are equivalent if and only if they have the same rank and the same signature.

Let $A$ be a symmetric real matrix. Then $q : \mathbb{R}^n\to\mathbb{R}$ given by $q(x) = x^tAx$ is called the quadratic form corresponding to $A$. A symmetric $A\in\mathbb{R}^{n\times n}$ is positive semi-definite if $x^tAx\ge 0$ for all $x\in\mathbb{R}^n$. The following conditions are equivalent for a symmetric matrix $A$:
(i) $A$ is positive semi-definite.
(ii) All eigenvalues of $A$ are non-negative.
(iii) All principal submatrices of $A$ have non-negative determinants.
(iv) There is a matrix $B$ such that $A = B^tB$.
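A numpy sketch computing the index $(p,q,s)$ and signature from the eigenvalues of a symmetric matrix, and checking the semi-definiteness criterion (ii) (the test matrix is an arbitrary illustration):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., -1.]])        # an arbitrary symmetric test matrix

vals = np.linalg.eigvalsh(A)
p = int(np.sum(vals > 1e-9))         # number of positive eigenvalues
q = int(np.sum(vals < -1e-9))        # number of negative eigenvalues
s = len(vals) - p - q
print((p, q, s), "signature:", p - q)  # (2, 1, 0) signature: 1

print(bool(np.all(vals >= -1e-9)))   # False: A is not positive semi-definite
```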

EXAMPLE 7.1. Let
$$A = \begin{pmatrix} 1&3&3\\ 3&10&8\\ 3&8&6 \end{pmatrix}.$$
We find a matrix $Q$ so that $Q^tAQ$ is diagonal. The method is to start with $A$ and $I_3$; whatever column operation we apply to $A$, we apply the corresponding row operation to $A$ as well, and we apply the same column operation to the identity matrix. When $A$ has finally been transformed to diagonal form, the identity matrix has been transformed to $Q$.

Write the columns of $A$ and $I_3$. Adding $-3$ times the first column to the second column, and doing the same on rows:
$$A \to \begin{pmatrix} 1&0&3\\ 0&1&-1\\ 3&-1&6 \end{pmatrix},\qquad I_3 \to \begin{pmatrix} 1&-3&0\\ 0&1&0\\ 0&0&1 \end{pmatrix}.$$
Adding $-3$ times the first column to the third column, and doing the same on rows:
$$A \to \begin{pmatrix} 1&0&0\\ 0&1&-1\\ 0&-1&-3 \end{pmatrix},\qquad I_3 \to \begin{pmatrix} 1&-3&-3\\ 0&1&0\\ 0&0&1 \end{pmatrix}.$$
Adding the second column to the third column, and doing the same for rows:
$$A \to \begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&-4 \end{pmatrix},\qquad I_3 \to \begin{pmatrix} 1&-3&-6\\ 0&1&1\\ 0&0&1 \end{pmatrix}.$$
Hence
$$Q = \begin{pmatrix} 1&-3&-6\\ 0&1&1\\ 0&0&1 \end{pmatrix}\qquad\text{and}\qquad Q^tAQ = \begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&-4 \end{pmatrix}.$$
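A one-line numpy verification of Example 7.1 (a sketch):

```python
import numpy as np

A = np.array([[1., 3., 3.],
              [3., 10., 8.],
              [3., 8., 6.]])
Q = np.array([[1., -3., -6.],
              [0., 1., 1.],
              [0., 0., 1.]])

print(Q.T @ A @ Q)    # diag(1, 1, -4)
```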