NOTES ON BILINEAR FORMS


PARAMESWARAN SANKARAN

These notes are intended as a supplement to the talk given by the author at the IMSc Outreach Programme Enriching Collegiate Education-2015.

Symmetric bilinear product

A symmetric bilinear product on a (finite dimensional) real vector space $V$ is a mapping $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$, $(u, v) \mapsto \langle u, v\rangle$, which is $\mathbb{R}$-linear in each of the variables $u, v$ and is symmetric, that is, $\langle u, v\rangle = \langle v, u\rangle$. We say that $\langle\cdot,\cdot\rangle$ is non-degenerate if $\langle u, v\rangle = 0$ for all $v \in V$ implies that $u = 0$. We say that $\langle\cdot,\cdot\rangle$ is positive definite if $\langle u, u\rangle > 0$ whenever $u \neq 0$. A positive definite symmetric bilinear product on $V$ is also known as an inner product. An inner product is evidently non-degenerate.

Fix an ordered basis $B = \{v_1, \ldots, v_n\}$ of $V$ and let $A \in M_n(\mathbb{R})$ be a symmetric matrix. Writing any $x \in V$ as $x = \sum_{1 \le j \le n} x_j v_j$, we may identify $x$ with the column vector $(x_1, \ldots, x_n)^t$. We obtain a symmetric bilinear product on $V$ defined as $\langle x, y\rangle_A := x^t A y$. Note that $\langle v_i, v_j\rangle_A = a_{ij}$ where $A = (a_{ij})$. Conversely, if $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$ is any symmetric bilinear product on $V$, then $\langle\cdot,\cdot\rangle = \langle\cdot,\cdot\rangle_A$ where $A = (\langle v_i, v_j\rangle)$. The matrix $A$ is called the matrix of $\langle\cdot,\cdot\rangle$ with respect to $B$.

If $A$ is non-singular, then $\langle\cdot,\cdot\rangle_A$ is non-degenerate. Indeed, for a non-zero element $x$ of $V$ there exists $z$ in $V$ such that $x^t z \neq 0$; in fact, we can choose $z$ to be a standard column vector. Since $A$ is non-singular, there exists $y$ such that $z = Ay$. Then $\langle x, y\rangle = x^t A y = x^t z \neq 0$. Conversely, if $\langle\cdot,\cdot\rangle$ is non-degenerate, then its matrix with respect to any basis of $V$ is non-singular.

If $v'_1, \ldots, v'_n$ is another ordered basis $B'$ of $V$ and if $P = (p_{ij})$ is the change of basis matrix from $B$ to $B'$, so that $v'_j = \sum_i p_{ij} v_i$, then $\sum_i x_i v_i = x = \sum_j x'_j v'_j = \sum_{i,j} x'_j p_{ij} v_i$, and so $(x_i) = P(x'_j)$. Let $A$ and $A'$ be the matrices of the same symmetric bilinear product $\langle\cdot,\cdot\rangle$ with respect to $B$ and $B'$ respectively, so that $x^t A y = \langle x, y\rangle = (x')^t A' y'$. Thus $(x')^t A' y' = x^t A y = (x')^t P^t A P y'$ for all column vectors $x', y' \in \mathbb{R}^n$. It follows that $A' = P^t A P$; equivalently, $A = (P^{-1})^t A' P^{-1}$.
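The correspondence between symmetric matrices and symmetric bilinear products, and the congruence rule $A' = P^t A P$, are easy to check numerically. Here is a minimal NumPy sketch (not part of the original notes; the matrices `A` and `P` are illustrative choices):

```python
# A minimal numerical sketch of the facts above; NumPy is assumed, and the
# particular matrices A and P are arbitrary illustrative choices.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # a symmetric matrix: <x, y>_A = x^t A y

def form(x, y, A=A):
    """The symmetric bilinear product <x, y>_A = x^t A y."""
    return x @ A @ y

# <v_i, v_j>_A recovers the entries a_ij of A (here v_i = e_i).
e = np.eye(2)
assert np.allclose([[form(e[i], e[j]) for j in range(2)] for i in range(2)], A)

# Change of basis: if P is invertible, the matrix of the same form with
# respect to the basis {P e_j} is A' = P^t A P.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # an invertible change-of-basis matrix
A_prime = P.T @ A @ P
x_prime, y_prime = np.array([1.0, 2.0]), np.array([3.0, -1.0])
# Coordinates transform as x = P x', so the value of the form is unchanged:
assert np.isclose(form(P @ x_prime, P @ y_prime), x_prime @ A_prime @ y_prime)
```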

Let $\langle\cdot,\cdot\rangle$ be a fixed symmetric bilinear product on $V$. For $u, v \in V$ we say that $u$ is perpendicular (or orthogonal) to $v$ (written $u \perp v$) if $\langle u, v\rangle = 0$. For $W$ a subset of $V$ we denote by $W^\perp$ the subset $\{v \in V \mid v \perp w \ \forall w \in W\}$ of $V$. Evidently $W^\perp$ is a vector subspace of $V$. When $W$ is a subspace, we have $\dim W^\perp \ge \dim V - \dim W$. The null space of $\langle\cdot,\cdot\rangle$ is the space $V^\perp$ of all vectors $v$ that are orthogonal to the whole of $V$. The bilinear product $\langle\cdot,\cdot\rangle$ defines a non-degenerate symmetric bilinear product on $V/N$, where $N$ is the null space of $\langle\cdot,\cdot\rangle$.

If $W \subseteq V$ is a vector subspace, then the restriction of $\langle\cdot,\cdot\rangle$ to $W \times W$ is a symmetric bilinear product on $W$. This bilinear product on $W$ is non-degenerate if and only if $W \cap W^\perp = 0$. In turn, $W \cap W^\perp = 0$ if and only if $V = W \oplus W^\perp$ (internal direct sum).

Let $V = \mathbb{R}^2$ (regarded as column vectors) and let $\langle x, y\rangle = x_1 y_2 + x_2 y_1$. Then $\langle e_1, e_1\rangle = 0 = \langle e_2, e_2\rangle$. Nevertheless, the bilinear product is non-degenerate. Indeed, the matrix of the bilinear product with respect to the basis $e_1, e_2$ is $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Since it is invertible, it follows that $\langle\cdot,\cdot\rangle$ is non-degenerate. Note that $\langle e_1 - e_2, e_1 - e_2\rangle < 0$.

A basis $B = \{u_1, \ldots, u_n\}$ of $V$ is called orthogonal if $u_i \perp u_j$ for $i \neq j$. Note that the matrix of $\langle\cdot,\cdot\rangle$ with respect to an orthogonal basis is diagonal. There always exists an orthogonal basis for any symmetric bilinear product. If the product is non-degenerate, one may replace each $v \in B$ by $v/\sqrt{|\langle v, v\rangle|}$ to obtain, possibly after a rearrangement of the basis elements, an ordered basis with respect to which the matrix of the bilinear product has the form $\begin{pmatrix} I_r & 0 \\ 0 & -I_s \end{pmatrix}$, where $I_r$ denotes the identity matrix of order $r$. The number $r$ is an invariant of the bilinear product. It equals the dimension of $V$ if and only if the bilinear product is positive definite.

If $W \subseteq V$ is a subspace such that the bilinear product is non-degenerate on $W$ (so that $V = W \oplus W^\perp$ as observed above), then we have the orthogonal projection $p : V \to W$ defined by $p|_{W^\perp} = 0$, $p|_W = \text{identity}$. If $\{w_1, \ldots, w_k\}$ is an orthogonal basis for $W$, then, for any $v \in V$,
$$p(v) = \sum_{1 \le j \le k} \frac{\langle v, w_j\rangle}{\langle w_j, w_j\rangle}\, w_j.$$
Indeed, $v \mapsto \sum_{1 \le j \le k} \frac{\langle v, w_j\rangle}{\langle w_j, w_j\rangle} w_j$ defines a linear map of $V$ to $W$ that vanishes on $W^\perp$ and is the identity on $W$.
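The projection formula is easy to exercise in code. The following NumPy sketch (again illustrative, not from the notes; the positive definite matrix `A` and the plane $W$ are arbitrary choices) builds an orthogonal basis of $W$ by one Gram-Schmidt step with respect to the form and checks that $v - p(v) \in W^\perp$:

```python
# A small sketch of the orthogonal projection with respect to a bilinear
# form; NumPy assumed, matrices and vectors below are illustrative.
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])     # matrix of a positive definite form

def form(x, y):
    return x @ A @ y

# An orthogonal basis {w1, w2} of a plane W, via one Gram-Schmidt step.
u1, u2 = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])
w1 = u1
w2 = u2 - (form(u2, w1) / form(w1, w1)) * w1
assert np.isclose(form(w1, w2), 0.0)

def proj(v):
    """Orthogonal projection onto W: p(v) = sum_j <v,w_j>/<w_j,w_j> w_j."""
    return sum((form(v, w) / form(w, w)) * w for w in (w1, w2))

v = np.array([1.0, 2.0, 3.0])
p = proj(v)
# v - p(v) lies in W^perp: it is orthogonal to both basis vectors of W.
assert np.isclose(form(v - p, w1), 0.0) and np.isclose(form(v - p, w2), 0.0)
```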

Hermitian product

Let $V$ be a (finite dimensional) complex vector space. A Hermitian product $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{C}$ is a sesquilinear map (conjugate linear in the first argument and complex linear in the second) such that $\langle u, v\rangle = \overline{\langle v, u\rangle}$. It is called non-degenerate if $\langle u, v\rangle = 0$ for all $v \in V$ implies $u = 0$. It is called positive definite if $\langle u, u\rangle > 0$ for $u \neq 0$ (note that $\langle u, u\rangle$ is real since $\langle u, u\rangle = \overline{\langle u, u\rangle}$). A positive definite Hermitian product is also called an inner product. A Hermitian inner product is evidently non-degenerate.

As in the case of symmetric bilinear products on real vector spaces, one has the notion of the matrix of a Hermitian product (with respect to an ordered $\mathbb{C}$-basis for $V$). The matrix $A$ of a Hermitian product is Hermitian, that is, $A^* = A$, where $A^* := \bar{A}^t$. If $A, A'$ are two matrices of the same Hermitian product with respect to two ordered bases $B, B'$ and if $P$ is the change of basis matrix from $B$ to $B'$, then $A' = P^* A P$.

Let $\langle\cdot,\cdot\rangle$ be a Hermitian inner product on a (finite dimensional) complex vector space $V$, and suppose that $T : V \to V$ is a $\mathbb{C}$-linear transformation. We obtain a linear transformation $T^* : V \to V$, where $T^*v$ for $v$ in $V$ is defined by $\langle T^*v, w\rangle = \langle v, Tw\rangle$ for all $v, w \in V$. Note that $(ST)^* = T^*S^*$ for any two linear transformations $S, T$ of $V$ and that $T \mapsto T^*$ is an involution, that is, $(T^*)^* = T$ for all $T$. We say that $T$ is normal if $T^*T = TT^*$, equivalently $\langle Tv, Tw\rangle = \langle T^*v, T^*w\rangle$ for all $v, w \in V$. We say that $T$ is Hermitian if $T^* = T$, equivalently $\langle Tv, w\rangle = \langle v, Tw\rangle$. We say that $T$ is unitary if $T^*T = TT^* = I$, the identity transformation, equivalently $\langle Tv, Tw\rangle = \langle v, w\rangle$ for all $v, w \in V$.

These notions carry over to elements of $M_n(\mathbb{C})$, where $A^*$ is defined as $\bar{A}^t$. Thus $A$ is Hermitian if $A^* = A$, unitary if $AA^* = A^*A = I$, and normal if $AA^* = A^*A$. Note that a real $n \times n$ matrix is Hermitian if and only if it is symmetric, and is unitary if and only if it is orthogonal (that is, $AA^t = A^tA = I$). The two versions, in terms of linear transformations and in terms of matrices, are related by considering the matrix of a transformation with respect to an ordered basis $B$ of $V$ with respect to which the matrix of the Hermitian inner product is the identity.

The following are some basic properties of Hermitian and unitary matrices.

The eigenvalues of a Hermitian (or a real symmetric) transformation are all real.

Proof. Suppose that $A$ is Hermitian and $v$ is an eigenvector corresponding to an eigenvalue $\lambda$ of $A$. Then $\bar\lambda \langle v, v\rangle = \langle \lambda v, v\rangle = \langle Av, v\rangle = \langle v, Av\rangle = \lambda \langle v, v\rangle$. But $\langle v, v\rangle \neq 0$ since $v \neq 0$, by positive definiteness of $\langle\cdot,\cdot\rangle$. Hence $\bar\lambda = \lambda$.

The eigenvalues of a unitary matrix are of absolute value 1.

Proof. Let $Av = \lambda v$ with $v \neq 0$ and $A$ unitary. Then $\langle v, v\rangle = \langle Av, Av\rangle = \langle \lambda v, \lambda v\rangle = \bar\lambda\lambda \langle v, v\rangle$. Cancelling $\langle v, v\rangle$, we obtain $|\lambda|^2 = \bar\lambda\lambda = 1$.

Theorem (Spectral theorem for normal matrices). Let $T$ be a normal matrix in $M_n(\mathbb{C})$. Then there exists an $n \times n$ unitary matrix $P$ such that $P^*TP$ is diagonal.

Reference

Chapter 8 of M. Artin, Algebra, 2nd ed., Pearson, New Delhi (2011).
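Before turning to the problems, here is a numerical illustration of the facts just proved (NumPy assumed; the random matrix is an arbitrary choice). `np.linalg.eigh` handles the Hermitian special case of the spectral theorem, returning real eigenvalues and a unitary matrix of eigenvectors; for a general normal matrix one could instead use a complex Schur decomposition, e.g. `scipy.linalg.schur(T, output='complex')`.

```python
# Hedged numerical check of the Hermitian special case of the spectral
# theorem; NumPy assumed, the matrix T below is an arbitrary example.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = B + B.conj().T                  # T is Hermitian, hence normal

eigenvalues, P = np.linalg.eigh(T)
# Eigenvalues of a Hermitian matrix are real (eigh returns them as floats).
assert eigenvalues.dtype == np.float64
# P is unitary: P* P = I.
assert np.allclose(P.conj().T @ P, np.eye(4))
# P* T P is the diagonal matrix of eigenvalues.
assert np.allclose(P.conj().T @ T @ P, np.diag(eigenvalues))

# Eigenvalues of a unitary matrix have absolute value 1: P itself is unitary.
assert np.allclose(np.abs(np.linalg.eigvals(P)), 1.0)
```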

Problems

(1) Find an orthogonal basis of $\mathbb{R}^2$ for the symmetric bilinear product given by the matrix (a) $\begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}$, (b) $\begin{pmatrix} 1 & 2 \\ 2 & 0 \end{pmatrix}$.

(2) Find the orthogonal projection of the vector $(2, 3, 4)^t \in \mathbb{R}^3$ onto the $xy$-plane, where the symmetric bilinear product is given by the matrix $\begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 3 \end{pmatrix}$.

(3) Prove that the maximum of the entries of a positive definite matrix $A$ is attained on the diagonal.

(4) Suppose that $A$ is a complex $n \times n$ matrix such that $x^*Ax$ is real for all $x \in \mathbb{C}^n$. Is $A$ Hermitian?

(5) Let $\langle\cdot,\cdot\rangle$ be a non-zero symmetric bilinear product on a real vector space, or a Hermitian product on a complex vector space, $V$. Show that there exists a vector $v$ such that $\langle v, v\rangle \neq 0$. Show that $V = U \oplus U^\perp$, where $U$ is the vector subspace of $V$ spanned by $v$.

(6) Let $\langle\cdot,\cdot\rangle$ be a positive definite Hermitian product on a complex vector space $V$. Define bilinear maps $(\cdot,\cdot), [\cdot,\cdot] : V \times V \to \mathbb{R}$ (where $V$ is regarded as a real vector space) as the real and imaginary parts of $\langle\cdot,\cdot\rangle$, so that $\langle u, v\rangle = (u, v) + \sqrt{-1}\,[u, v]$. Show that $(\cdot,\cdot)$ is a positive definite symmetric bilinear product on $V$ and that $[\cdot,\cdot]$ is skew-symmetric, i.e., $[u, v] = -[v, u]$.

(7) On the vector space $M_n(\mathbb{R})$ define $\langle A, B\rangle$ as $\operatorname{tr}(A^t B)$. Show that this is a positive definite symmetric bilinear product.

(8) Let $W_1, W_2 \subseteq V$ and let $\langle\cdot,\cdot\rangle$ be a symmetric bilinear product. Show that (a) $(W_1 + W_2)^\perp = W_1^\perp \cap W_2^\perp$, (b) $W \subseteq W^{\perp\perp}$. When does equality hold in (b)?

(9) Suppose that $\langle\cdot,\cdot\rangle$ is a non-degenerate symmetric bilinear product on $V$. Show that $\dim W \le \frac{1}{2} \dim V$ if $W \subseteq W^\perp$.

(10) If $A, B$ are symmetric $n \times n$ real matrices which commute, show that there exists a matrix $P$ such that both $P^tAP$ and $P^tBP$ are diagonal. Find two $2 \times 2$ symmetric matrices $A$ and $B$ such that $AB$ is not symmetric. (Comment: the product of two commuting symmetric matrices is symmetric.)

Hints/Solutions to Problems

(1) (a) Take, for example, $e_1 = (1, 0)^t$ to be the first basis vector (this wouldn't have been a good choice if $\langle e_1, e_1\rangle$ were $0$, but that is not the case). Let the second be $ae_1 + be_2$. We want $\langle ae_1 + be_2, e_1\rangle = 2a + b = 0$. Thus we can take the second vector to be $(1, -2)^t$.
(b) Proceeding as in (a), we get the orthogonal basis $(1, 0)^t$, $(-2, 1)^t$.

(2) We could directly apply the formula in the notes provided we have an orthogonal basis for the $xy$-plane. To find such a basis, we could take $w_1 = e_1 = (1, 0, 0)^t$ to be the first basis vector. Let $w_2 = ae_1 + be_2$ be the second, where $e_2 = (0, 1, 0)^t$. We want $\langle w_2, w_1\rangle = \langle ae_1 + be_2, w_1\rangle = a + b = 0$. Thus we can take $w_2 = e_1 - e_2$. Letting $v = (2, 3, 4)^t$, we have
$$p(v) = \frac{\langle v, w_1\rangle}{\langle w_1, w_1\rangle}\, w_1 + \frac{\langle v, w_2\rangle}{\langle w_2, w_2\rangle}\, w_2 = \frac{5}{1}\, w_1 - \frac{7}{1}\, w_2 = 5e_1 - 7(e_1 - e_2) = (-2, 7, 0)^t.$$

(3) In fact, for a symmetric positive definite real matrix $A = (a_{ij})$, the maximum cannot be attained at an off-diagonal entry. To see this, just observe that $\langle e_i - e_j, e_i - e_j\rangle = a_{ii} + a_{jj} - 2a_{ij}$ for $i \neq j$. Thus if the maximum were attained at $a_{ij}$ with $i \neq j$, we would get a contradiction: $\langle e_i - e_j, e_i - e_j\rangle \le 0$.

(4) Put $A = A_1 + iA_2$ with $A_1, A_2$ real, and write $x = x_1 + ix_2$. The imaginary part of $(x_1 - ix_2)^t (A_1 + iA_2)(x_1 + ix_2)$ is
$$-x_2^t A_1 x_1 + x_1^t A_1 x_2 + x_1^t A_2 x_1 + x_2^t A_2 x_2.$$
Putting $x_2 = 0$, we get $x_1^t A_2 x_1 = 0$ for all $x_1$, which means $A_2$ is skew-symmetric. But now we also have $x_1^t A_1 x_2 - x_2^t A_1 x_1 = 0$. But $x_1^t A_1 x_2 - x_2^t A_1 x_1 = x_1^t A_1 x_2 - x_1^t A_1^t x_2 = x_1^t (A_1 - A_1^t) x_2$. So $x_1^t (A_1 - A_1^t) x_2 = 0$ for all $x_1, x_2$. So $A_1 - A_1^t = 0$; in other words, $A_1$ is symmetric. Thus $A$ is Hermitian.

(5) Since $\langle\cdot,\cdot\rangle$ is non-zero, there exist $u_1, u_2 \in V$ such that $\langle u_1, u_2\rangle =: \lambda \neq 0$. In case $V$ is a complex vector space, replacing $u_2$ by $\bar\lambda u_2$ if necessary, we may (and do) assume that $\langle u_1, u_2\rangle$ is real. Thus $\langle u_2, u_1\rangle = \langle u_1, u_2\rangle$. If $\langle u_i, u_i\rangle \neq 0$ for some $i \le 2$, take $v = u_i$. If $\langle u_i, u_i\rangle = 0$ for $i = 1, 2$, set $v = u_1 + u_2$. Then $\langle v, v\rangle = 2\langle u_1, u_2\rangle \neq 0$. Since $U$ is the one-dimensional vector space spanned by $v$, if $U \cap U^\perp \neq 0$, then $v \perp v$ and so $\langle v, v\rangle = 0$, contrary to our choice of $v$. So we must have $U \cap U^\perp = 0$. This implies that $V = U \oplus U^\perp$.

(6) Fix notation as in the solution of (4) above. That $A_2$ is skew-symmetric and $A_1$ is symmetric follows from (4). The real part of $(x_1 - ix_2)^t (A_1 + iA_2)(x_1 + ix_2)$ is
$$x_1^t A_1 x_1 - x_1^t A_2 x_2 + x_2^t A_2 x_1 + x_2^t A_1 x_2.$$
Putting $x_2 = 0$, we get $x_1^t A_1 x_1 \ge 0$ for all $x_1$, with equality only if $x_1 = 0$, which means that $A_1$, and hence $(\cdot,\cdot)$, is positive definite.

(7) Writing $A = (a_1, \ldots, a_n)$, where $a_i$ is the $i$-th column of $A$, we see that $\operatorname{tr}(A^tA) = a_1^t a_1 + \cdots + a_n^t a_n \ge 0$, with equality holding only if each $a_i = 0$ (in other words, only if $A = 0$). This proves that the form $\langle A, B\rangle = \operatorname{tr}(A^t B)$ is positive definite. It is evidently symmetric.

(8) (a) and (b) follow readily from the definitions. To see when equality holds in (b), first observe the following: for any subspace $W$ of $V$, we have (1) $\dim W^\perp = \dim V - \dim W + \dim(W \cap V^\perp)$ and (2) $W^\perp \supseteq V^\perp$. Now, using these, we get $\dim W^{\perp\perp} = \dim W + \dim V^\perp - \dim(W \cap V^\perp)$. Thus for $W^{\perp\perp}$ to equal $W$, it is necessary and sufficient that $V^\perp = W \cap V^\perp$, or equivalently $V^\perp \subseteq W$. Two cases where this condition holds are: $V^\perp = 0$ (the form is non-degenerate); $W = V$.

(9) This follows from the equality $\dim W^\perp = \dim V - \dim W$ when the form is non-degenerate (see the solution to the previous item): if $W \subseteq W^\perp$, then $\dim W \le \dim W^\perp = \dim V - \dim W$.

(10) In fact, we can find an orthogonal matrix $P$ with the desired property. By using the spectral theorem for real symmetric matrices once, we may assume that $A$ is diagonal. Then $B$ is block diagonal symmetric: the block sizes are the multiplicities of the entries of $A$. Now we apply the spectral theorem to each block of $B$. Since each corresponding block of $A$ is a scalar matrix, $A$ will not be disturbed when we diagonalize $B$. For the second part: if $A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, then $AB = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}$, which is not symmetric.
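As a closing illustration, here is a NumPy sketch of problem (10) (not part of the original notes; the matrices are illustrative). It diagonalizes a pair of commuting symmetric matrices with a single orthogonal $P$ and reproduces the counterexample above; the blockwise refinement described in the solution is only needed when $A$ has repeated eigenvalues, which is not the case here.

```python
# Sketch of the argument in (10); NumPy assumed, example matrices only.
import numpy as np

# Commuting symmetric matrices: taking B to be a polynomial in A is one
# easy source of examples.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = A @ A                            # symmetric, and AB = BA
assert np.allclose(A @ B, B @ A)

# Diagonalize A by an orthogonal P; since A has distinct eigenvalues here,
# the same P diagonalizes B (in general one must further diagonalize B
# blockwise within each eigenspace of A, as the solution explains).
_, P = np.linalg.eigh(A)
assert np.allclose(P.T @ A @ P, np.diag(np.diag(P.T @ A @ P)))
assert np.allclose(P.T @ B @ P, np.diag(np.diag(P.T @ B @ P)))

# The counterexample from the solution: symmetric A, B with AB not symmetric.
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
B2 = np.array([[1.0, 1.0], [1.0, 1.0]])
print(A2 @ B2)                       # [[1. 1.], [2. 2.]] -- not symmetric
```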