Mathematical Methods wk 2: Linear Operators


John Magorrian, magog@thphys.ox.ac.uk

These are work-in-progress notes for the second-year course on mathematical methods. The most up-to-date version is available from http://www-thphys.physics.ox.ac.uk/people/johnmagorrian/mm.

3 Linear operators

Throughout this section I assume that V is an n-dimensional inner-product space and that |e_1⟩, ..., |e_n⟩ are an orthonormal basis for this space. Recall that linear operators are mappings from a vector space V to itself that satisfy the linearity conditions

    A(|a⟩ + |b⟩) = A|a⟩ + A|b⟩,    A(α|a⟩) = α A|a⟩.

Let A be a linear operator and suppose that

    |b⟩ = A|a⟩.    (3.1)

We saw in the week-1 notes how this is equivalent to the matrix equation b = Aa, where a and b are the column vectors that represent |a⟩ and |b⟩ respectively. Armed with our inner product we can now obtain an explicit expression for the elements of the matrix representing the operator A. Expanding |a⟩ = Σ_{k=1}^n a_k|e_k⟩ and |b⟩ = Σ_{i=1}^n b_i|e_i⟩, equation (3.1) becomes

    Σ_i b_i|e_i⟩ = Σ_k a_k A|e_k⟩.    (3.2)

Now choose any basis bra ⟨e_j| and apply it to both sides:

    Σ_{i=1}^n b_i ⟨e_j|e_i⟩ = Σ_{k=1}^n a_k ⟨e_j|A|e_k⟩.

Since ⟨e_j|e_i⟩ = δ_ji, equation (3.1) can be represented by the matrix equation

    b_j = Σ_k A_jk a_k,    (3.3)

where

    A_jk ≡ ⟨e_j|A|e_k⟩    (3.4)

are the matrix elements of the operator A in the |e_1⟩, ..., |e_n⟩ basis.

3.1 The identity operator

The identity operator I, defined through I|v⟩ = |v⟩ for all |v⟩ ∈ V, is clearly a linear operator. Less obviously, it can be written as

    I = Σ_{i=1}^n |e_i⟩⟨e_i|.    (3.5)

This is sometimes known as the resolution of the identity.

Proof: Any |v⟩ ∈ V can be expressed as |v⟩ = Σ_{i=1}^n α_i|e_i⟩. Using the expression (3.5) for I, we have

    I|v⟩ = Σ_i |e_i⟩⟨e_i| Σ_j α_j|e_j⟩ = Σ_{ij} α_j |e_i⟩⟨e_i|e_j⟩ = Σ_{ij} α_j |e_i⟩ δ_ij = Σ_i α_i|e_i⟩ = |v⟩.    (3.6)

The individual terms |e_i⟩⟨e_i| appearing in the sum (3.5) are known as projection operators: if we apply P_i = |e_i⟩⟨e_i| to a vector |a⟩ = Σ_j a_j|e_j⟩ the result is P_i|a⟩ = a_i|e_i⟩. Similarly, ⟨a|P_i = a_i*⟨e_i|. We have already seen a use of projection operators in the Gram–Schmidt procedure earlier.

3.2 Combining operators

The composition of two linear operators A and B is another linear operator. In case it is not obvious how to show this, let us write C = AB for the result of applying B first, then A. Now notice that C is a mapping from V to V and that the linearity conditions hold for any |a⟩, |b⟩ ∈ V and α ∈ F:

    C(|a⟩ + |b⟩) = A[B(|a⟩ + |b⟩)] = A(B|a⟩) + A(B|b⟩) = C|a⟩ + C|b⟩,
    C(α|a⟩) = A[B(α|a⟩)] = A(α B|a⟩) = α (AB)|a⟩ = α C|a⟩.    (3.7)

Exercise: Show that the matrix representing C is identical to the matrix obtained by multiplying the matrix representations of the operators A and B.

Exercise: Show that the sum of two linear operators is another linear operator.

In general AB ≠ BA: the order of composition matters. The difference

    AB − BA ≡ [A, B]    (3.8)

is another linear operator, known as the commutator of A and B.

Functions of operators

We can construct new linear operators by composition and addition. For example, given a linear operator A, let us define exp A in the obvious way as

    exp A ≡ lim_{N→∞} (I + A/N)^N = I + A + (1/2!)A² + (1/3!)A³ + ⋯ + (1/m!)A^m + ⋯,    (3.9)

in which I is the identity operator, A⁰ = I, A¹ = A, A² = AA, A³ = AAA and so on. It might not be obvious that this exp A is a linear operator, but notice that it is a mapping from V to itself and that for any |a⟩, |b⟩ ∈ V and α ∈ F we have that

    exp A (|a⟩ + |b⟩) = exp A |a⟩ + exp A |b⟩,    exp A (α|a⟩) = α exp A |a⟩.    (3.10)
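The statements above (the resolution of the identity, the matrix elements A_jk = ⟨e_j|A|e_k⟩, and the fact that composition of operators corresponds to matrix multiplication) are easy to check numerically. The following sketch is my own illustration, not part of the notes; it assumes numpy, and builds a random orthonormal basis from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random orthonormal basis |e_1>,...,|e_n> of C^n: the columns of the
# unitary factor of a QR decomposition of a random complex matrix.
E, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Resolution of the identity (3.5): sum_i |e_i><e_i| = I.
I_resolved = sum(np.outer(E[:, i], E[:, i].conj()) for i in range(n))
assert np.allclose(I_resolved, np.eye(n))

# Matrix elements (3.4): A_jk = <e_j|A|e_k> represent the operator in this basis.
A_op = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A_mat = E.conj().T @ A_op @ E

# |b> = A|a> is equivalent to b_j = sum_k A_jk a_k, eq. (3.3).
a_vec = rng.normal(size=n) + 1j * rng.normal(size=n)  # components in the e-basis
b_vec = A_mat @ a_vec
assert np.allclose(E @ b_vec, A_op @ (E @ a_vec))

# Composition: the matrix of C = AB is the product of the matrices of A and B.
B_op = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B_mat = E.conj().T @ B_op @ E
C_mat = E.conj().T @ (A_op @ B_op) @ E
assert np.allclose(C_mat, A_mat @ B_mat)
print("resolution of identity and matrix-element checks passed")
```

The seed and dimension are arbitrary; any orthonormal basis gives the same results.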

II 3 0 Example: what is expαg, where G? 0 First note that G I Therefore G m m I and G m+ m G We can use this to split the expansion of the exponential into a sum of even and odd terms: expαg k! αgk k0 m! αm G m + m +! αm+ G m+ m0 m0 m! m α m I + m +! m α m+ G m0 cos αi + sin αg cos α sin α sin α cos α m0 3 34 Dual of an operator Given an operator A, let b A a be the result of applying A to an arbitrary ket a How are the corresponding dual vectors a and b related? Expanding b A a in terms of components we have that b i e i A ij a j e i 34 Using 3 to obtain the dual of each side of this equation gives e i b i e i A ija j a j A ji e i 35 II 4 For this reason G is known as the generator of two-dimensional rotations: for ɛ the operator I + ɛg is a rotation by an angle ɛ; from the definition 39 the operator expαg is obtained by chaining together many such infinitesmal rotations 33 Special types of matrix The LHS is clearly just b The RHS is a A To see this, apply the RHS to an arbitrary basis ket e k The result is j a j A jk The result of operating on the same ket with a A is a A e k e j a j A e k a j e j A e k a j A jk 36 Let A be an n n matrix with elements A ij The transpose A T is obtained from A by swapping rows and columns It has elements A T ij A ji 3 Therefore, if b A a then the corresponding dual vectors are related via b a A : the dual to the operator A is its Hermitian conjugate A Exercise: Use property 3 of the inner product to show that a A b b A a The Hermitian conjugate A is obtained by swapping rows and columns and taking the complex conjugate It has elements A ij A ji 33 35 Change of basis Exercise: Show that AB T B T A T and AB B A The following table lists the conditions that are satisfied by members of some important families of matrices: Symmetric: A A T Hermitian: A A Orthogonal: AA T I Unitary: AA I [Normal: A A AA ] Hermitian matrices are sometimes also known as self-adjoint matrices Recall that orthogonal matrices represent reflections and 
rotations It turns out 35 below that the conditions for a matrix to be Hermitian or unitary are independent of orthonormal basis: if the matrix representing a particular linear operator is Hermitian unitary in one basis then it is Hermitian unitary in all bases and one can sensibly say that the underlying operator itself is Hermitian unitary For the special case of real vector spaces this implies that if a matrix is symmetric orthogonal in one basis then it is symmetric orthogonal in all bases Exercise: Show that the product of two unitary matrices is another unitary matrix Does the same result hold for Hermitian matrices? If not, find a condition under which it does hold Bases are not unique Sometimes it is convenient to do parts of a calculation in one basis and the rest in another Therefore it is important to know how to transform vector coordinates and matrix elements from one basis to another Suppose we have two different orthonormal basis sets, { e,, e n } and { e,, e n }, with e i U ji e j 37 That is, the i th column of the matrix U gives the representation of e i in the unprimed basis The corresponding relationship for the basis bras is e i e j Uji 38 The orthonormality of the bases places constraints on the the transformation matrix U: l δ ij e i e n n j e k Uki U lj e l UkiU lj e k e l UkiU kj U ik U kj, 39 l
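The worked example for exp(αG) can be verified numerically. The sketch below is my own illustration (not from the notes), assuming numpy; exp_series is a hypothetical helper that simply truncates the power series (3.9):

```python
import numpy as np

# Generator of two-dimensional rotations from the worked example.
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(G @ G, -np.eye(2))   # G^2 = -I

def exp_series(M, terms=30):
    """exp(M) via the truncated power series I + M + M^2/2! + ..."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k        # M^k / k!
        result = result + term
    return result

alpha = 0.7
rotation = np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

# exp(alpha G) = cos(alpha) I + sin(alpha) G: a rotation by angle alpha.
assert np.allclose(exp_series(alpha * G), rotation)

# Chaining N infinitesimal rotations (I + (alpha/N) G)^N approaches the same matrix.
N = 100000
small = np.eye(2) + (alpha / N) * G
assert np.allclose(np.linalg.matrix_power(small, N), rotation, atol=1e-4)
print("exp(alpha G) matches the rotation matrix")
```

The loose tolerance on the last check reflects the O(1/N) error of the finite product in the limit definition (3.9).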

or, in matrix form, I = U†U: the matrix U describing the coordinate transformation must be unitary. Unitary matrices are the generalization to complex vector spaces of real orthogonal transformations (i.e., rotations and reflections).

Transformation of vector components. Taking an arbitrary vector |a⟩ = Σ_{j=1}^n a_j|e_j⟩ and using ⟨e′_i| to project |a⟩ along the i-th primed basis vector gives

    a′_i = ⟨e′_i|a⟩ = Σ_k U*_ki ⟨e_k| Σ_j a_j|e_j⟩ = Σ_j U*_ji a_j = Σ_j (U†)_ij a_j.    (3.20)

In matrix form, the components in the primed basis are given by a′ = U†a.

Transformation of matrix elements. In the primed basis, the operator A has matrix elements

    A′_ij = ⟨e′_i|A|e′_j⟩ = Σ_{k,l} U*_ki ⟨e_k|A|e_l⟩ U_lj = Σ_{k,l} (U†)_ik A_kl U_lj,    (3.21)

so that A′ = U†AU. What does this mean? When applying the matrix A′ to a vector whose coordinates a′ are given with respect to the |e′_i⟩ basis, think of U as transforming from a′ to a. Then we apply the matrix A to the result before transforming back to the primed basis with U†.

Exercise: Derive the change-of-basis formulae (3.20) and (3.21) by resolving the identity (3.5). That is, write

    a′_i = ⟨e′_i|a⟩ = ⟨e′_i|I|a⟩,    A′_ij = ⟨e′_i|A|e′_j⟩ = ⟨e′_i|IAI|e′_j⟩,

and use the fact that I can be expressed as I = Σ_k |e_k⟩⟨e_k| = Σ_l |e_l⟩⟨e_l|. You should find that

    a′_i = Σ_j ⟨e′_i|e_j⟩ a_j,    (3.22)
    A′_ij = Σ_{k,l} ⟨e′_i|e_k⟩ ⟨e_k|A|e_l⟩ ⟨e_l|e′_j⟩,    (3.23)

where ⟨e′_i|e_j⟩ is the (i, j) element of a matrix whose j-th column expresses |e_j⟩ in the primed basis. How is this matrix related to the U introduced above?

Example: two-dimensional rotations. Suppose that the |e′_1⟩, |e′_2⟩ basis is related to the |e_1⟩, |e_2⟩ basis by a rotation through an angle α. The basis vectors are then related through

    |e′_1⟩ =  cos α |e_1⟩ + sin α |e_2⟩,
    |e′_2⟩ = −sin α |e_1⟩ + cos α |e_2⟩,    (3.24)

so that

    U = ( cos α  −sin α
          sin α   cos α ).    (3.25)

The i-th column of U expresses |e′_i⟩ in the |e_j⟩ basis: U_ji = ⟨e_j|e′_i⟩. Clearly the coordinates (a_1, a_2) of a vector |a⟩ in the two bases are related through a′ = U^T a.

Exercise: Justify the statement at the end of §3.3: show that if a matrix is Hermitian (unitary) in one orthonormal basis then it is Hermitian (unitary) in any other orthonormal basis.

3.6 Rank

The range of a linear operator is the set of all possible output vectors:

    range A = {A|v⟩ : |v⟩ ∈ V}.    (3.26)

The range is clearly the subspace of V spanned by the vectors A|e_1⟩, A|e_2⟩, ..., A|e_n⟩. The rank of an operator is the dimension of its range. Examples:

    rank ( 1 0 0 )        rank ( 1 0 0 )        rank ( 1 0 0 )
         ( 0 1 0 )  = 3,       ( 0 1 0 )  = 2,       ( 0 0 0 )  = 1.    (3.27)
         ( 0 0 1 )             ( 0 0 0 )             ( 0 0 0 )

The rank of a matrix is unchanged under any of the following elementary row operations:
(i) swap two rows;
(ii) multiply one row by a non-zero constant;
(iii) add a multiple of one row to another.
Each such operation produces a new matrix whose rows are linear combinations of the rows of the original matrix. A simple way of calculating the rank of a matrix is to use elementary row operations to reduce it to an upper triangular matrix and then count the number of linearly independent columns in the result.

Every elementary row operation can be expressed as multiplication by an elementary matrix. For example, starting with a 3 × 3 matrix A and adding α times row 3 to row 1, then swapping the first two rows, gives a new matrix E_2E_1A, where the elementary matrices E_1 and E_2 are

    E_1 = ( 1 0 α )        E_2 = ( 0 1 0 )
          ( 0 1 0 ),             ( 1 0 0 ).    (3.28)
          ( 0 0 1 )              ( 0 0 1 )

Aside: linear equations. To solve the linear equation Ax = b, apply elementary row operations E_1, E_2, ..., E_m to A to reduce it to upper triangular form. That is,

    Ax = b  →  E_m ⋯ E_2E_1 A x = E_m ⋯ E_2E_1 b,

which has the upper triangular form

    ( A′_11 A′_12 ⋯ A′_1n )  ( x_1 )     ( b′_1 )
    (   0   A′_22 ⋯ A′_2n )  ( x_2 )  =  ( b′_2 )    (3.29)
    (   ⋮          ⋱   ⋮   )  (  ⋮  )     (  ⋮   )
    (   0     0   ⋯ A′_nn )  ( x_n )     ( b′_n )

where A′_ij = (E_m ⋯ E_2E_1 A)_ij and b′_i = (E_m ⋯ E_2E_1 b)_i. Then the x_i can be found by back-substitution, starting from x_n = b′_n/A′_nn.

Aside: matrix inverse. To find the inverse (assuming that it exists) of a square matrix A, apply elementary row operations to reduce A to the identity,

    A → E_1A → E_2E_1A → ⋯ → E_m ⋯ E_2E_1A = I,

while simultaneously applying the same operations to the identity I:

    I → E_1I → E_2E_1I → ⋯ → E_m ⋯ E_2E_1I.

Then E_m ⋯ E_1 A = I, so that A⁻¹ = E_m ⋯ E_1 I.
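The elimination-plus-back-substitution procedure in the aside can be sketched in a few lines. This is my own illustration, not from the notes; it assumes numpy, adds the usual partial pivoting (a row swap, itself an elementary operation) for numerical safety, and uses a standard 3 × 3 test system whose solution is (2, 3, −1):

```python
import numpy as np

def solve_by_elimination(A, b):
    """Solve Ax = b: reduce to upper triangular form with elementary
    row operations, then back-substitute."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    for col in range(n):
        # Swap rows so the pivot has the largest magnitude (partial pivoting).
        pivot = col + np.argmax(np.abs(A[col:, col]))
        A[[col, pivot]] = A[[pivot, col]]
        b[[col, pivot]] = b[[pivot, col]]
        # Add multiples of the pivot row to zero the entries below it.
        for row in range(col + 1, n):
            factor = A[row, col] / A[col, col]
            A[row] -= factor * A[col]
            b[row] -= factor * b[col]
    # Back-substitution on the triangular system, starting from x_n.
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):
        x[row] = (b[row] - A[row, row + 1:] @ x[row + 1:]) / A[row, row]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
x = solve_by_elimination(A, b)
assert np.allclose(A @ x, b)
print(x)   # the solution (2, 3, -1)
```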

3.7 Determinant

Permutations

A permutation of the list 1, 2, ..., m is another list that contains each of the numbers 1, ..., m exactly once. In other words, it is a straightforward shuffling of the order of the elements. There are m! permutations of an m-element list. Given a permutation P we write P1 for the first element in the shuffled list, P2 for the second, etc. Then P can be written as (P1, P2, ..., Pm). An alternative notation is

    P = (  1   2  ⋯   m  )
        ( P1  P2  ⋯  Pm  ),    (3.30)

which emphasises that P is a mapping from the set 1, ..., m (top row) to itself (values given on the bottom row). From any two permutation mappings P and Q we can compose a new one PQ defined through (PQ)i = P(Qi). There is an identity mapping for which i → i, and every P has an inverse

    P⁻¹ = ( P1  P2  ⋯  Pm )
          (  1   2  ⋯   m ),    (3.31)

which is well defined because each number 1, 2, ..., m appears exactly once in the top row of (3.31).

Any permutation (P1, P2, ..., Pm) can be constructed from (1, 2, ..., m) by a sequence of pairwise element exchanges. Even (odd) permutations require an even (odd) number of exchanges. The sign of a permutation is defined as

    sgn P = { +1, if P is an even permutation of 1, 2, ..., m,
            { −1, if P is an odd permutation of 1, 2, ..., m.    (3.32)

Given two permutations P, Q, we have sgn(PQ) = sgn P sgn Q. The identity permutation is even. Therefore +1 = sgn(PP⁻¹) = sgn P sgn P⁻¹, showing that sgn P⁻¹ = sgn P.

The following table shows all 6 permutations of the 3-element list 1, 2, 3:

    P   (P1, P2, P3)   sgn P
    1   (1, 2, 3)       +1
    2   (1, 3, 2)       −1
    3   (2, 3, 1)       +1
    4   (2, 1, 3)       −1
    5   (3, 1, 2)       +1
    6   (3, 2, 1)       −1

Determinant

The determinant of an n × n matrix A is defined through

    det A = Σ_P sgn P · A_{1,P1} A_{2,P2} ⋯ A_{n,Pn},    (3.33)

where the sum is over all permutations P of 1, 2, 3, ..., n. For example, for the case n = 3,

    det A = A_11 A_22 A_33 − A_11 A_23 A_32 + A_12 A_23 A_31 − A_12 A_21 A_33 + A_13 A_21 A_32 − A_13 A_22 A_31,    (3.34)

where in the sum (3.33) I have taken the permutations in the order given in the table above.

Exercise: Confirm that this expression agrees with the result obtained by the more familiar minor-expansion method.

Properties of the determinant

In the following A and B are any two square matrices.
(i) If two rows of A are equal to one another then det A = 0.
(ii) If B can be obtained from A by multiplying one row by a factor α then det B = α det A.
(iii) If B can be obtained from A by adding a multiple of one row to another then det B = det A.
(iv) If B can be obtained from A by swapping two rows then det B = −det A.
(v) det A^T = det A.
(vi) det(AB) = det A det B.

Proof of (i): Suppose that the first two rows of A are equal, A_1j = A_2j, and consider the factor F_P below in each term of the sum

    det A = Σ_P sgn P · A_{1,P1} A_{2,P2} ⋯ A_{n,Pn}.    (3.35)

Since A_1j = A_2j we have that

    F_P ≡ sgn P · A_{1,P1} A_{2,P2} = sgn P · A_{2,P1} A_{1,P2}.

For each permutation P there is another, P′ = (P2, P1, P3, ..., Pn), that swaps only P1 and P2, leaving the other elements unchanged. This new permutation has

    F_{P′} = sgn P′ · A_{1,P2} A_{2,P1} = −sgn P · A_{2,P2} A_{1,P1} = −F_P.

So, the term involving P′ cancels out the contribution of the term involving P to the sum (3.35). Summing over all P, the result is det A = 0.

Proof of (ii): Without loss of generality, suppose that

    B = ( αA_11  αA_12  ⋯  αA_1n )
        (  A_21   A_22  ⋯   A_2n )
        (   ⋮                ⋮   )
        (  A_n1   A_n2  ⋯   A_nn ).    (3.36)

Then

    det B = Σ_P sgn P · B_{1,P1} B_{2,P2} ⋯ B_{n,Pn} = α Σ_P sgn P · A_{1,P1} A_{2,P2} ⋯ A_{n,Pn} = α det A.    (3.37)

Proof of (iii): Suppose, for example, that

    B = ( A_11 + αA_21   A_12 + αA_22  ⋯  A_1n + αA_2n )
        (     A_21           A_22      ⋯      A_2n     )
        (      ⋮                                ⋮      )
        (     A_n1           A_n2      ⋯      A_nn     ).    (3.38)

Then

    det B = Σ_P sgn P [A_{1,P1} + αA_{2,P1}] A_{2,P2} ⋯ A_{n,Pn}
          = Σ_P sgn P · A_{1,P1} A_{2,P2} ⋯ A_{n,Pn} + α Σ_P sgn P · A_{2,P1} A_{2,P2} ⋯ A_{n,Pn}    (3.39)
          = det A + 0.

The second term is zero by property (i) above.
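The definition (3.33) is expensive (n! terms) but easy to implement directly, which makes a useful sanity check of the permutation machinery. This sketch is my own illustration, not from the notes; it assumes numpy and computes sgn P by counting inversions:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation: +1 (-1) for an even (odd) number of inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    """Determinant via the definition (3.33): sum over all n! permutations."""
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
assert np.isclose(det_by_permutations(A), np.linalg.det(A))

# Property (i): a matrix with two equal rows has zero determinant.
B = A.copy()
B[1] = B[0]
assert np.isclose(det_by_permutations(B), 0.0)
print("permutation-sum determinant matches np.linalg.det")
```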

Proof of (iv): Suppose, for example, that B is obtained from A by swapping the first two rows, so that B_ij = A_{Qi,j}, where Q is the permutation that unswaps the first two rows and leaves the rest unchanged: Q1 = 2, Q2 = 1 and Qi = i for i > 2. Then

    det B = Σ_P sgn P · B_{1,P1} B_{2,P2} ⋯ B_{n,Pn} = Σ_P sgn P · A_{1,(PQ)1} A_{2,(PQ)2} ⋯ A_{n,(PQ)n}.    (3.40)

The sum over P can be replaced by another sum over P′ = PQ, with sgn P = sgn P′ sgn Q = −sgn P′. Therefore det B = −det A.

Proof of (v):

    det A^T = Σ_P sgn P · (A^T)_{1,P1} (A^T)_{2,P2} ⋯ (A^T)_{n,Pn}    (3.41)
            = Σ_P sgn P · A_{P1,1} A_{P2,2} ⋯ A_{Pn,n}    (3.42)
            = Σ_P sgn P Π_{i=1}^n A_{Pi,i}.

Let Q be the inverse permutation to P: if Pi = j, then Qj = i. Then A_{Pi,i} = A_{j,Qj} and we have that Π_{i=1}^n A_{Pi,i} = Π_{j=1}^n A_{j,Qj}. Noting also that sgn P = sgn Q and replacing the sum over P by an equivalent sum over Q, we obtain

    det A^T = Σ_Q sgn Q Π_{j=1}^n A_{j,Qj} = det A.    (3.43)

Proof of (vi): homework exercise!

Exercise: Use the properties above together with results derived in §3.5 to show that the determinant is independent of basis.

3.8 Trace

The trace tr A of an n × n matrix A is the sum of its diagonal elements:

    tr A = Σ_i A_ii.    (3.44)

The trace satisfies

    tr(AB) = tr(BA),    (3.45)

because

    tr(AB) = Σ_i (AB)_ii = Σ_{ij} A_ij B_ji = Σ_{ij} B_ji A_ij = Σ_j (BA)_jj = tr(BA).    (3.46)

Taking A → A_1 and B → A_2A_3 ⋯ A_m we see that tr(A_1A_2 ⋯ A_m) = tr(A_2 ⋯ A_m A_1): the trace is invariant under cyclic permutations.

Exercise: Show that the trace is independent of basis. Explain then how, given a 3 × 3 rotation matrix, one can find the rotation angle directly from the trace.

3.9 Eigenvectors and diagonalisation

Recall that |v⟩ ≠ 0 is an eigenvector of an operator A with eigenvalue λ if it satisfies the eigenvalue equation

    A|v⟩ = λ|v⟩.    (3.47)

To find |v⟩ we first find λ by rewriting the equation above as

    A|v⟩ = λ|v⟩  ⇔  (A − λI)|v⟩ = 0.    (3.48)

Clearly A − λI can't be invertible: if it were, then we could operate on 0 with (A − λI)⁻¹ to get |v⟩ = 0. So, we must have that

    det(A − λI) = 0,    (3.49)

which is known as the characteristic equation for A. The characteristic equation is an n-th order polynomial in λ which can always be written in the form (λ − λ_1)(λ − λ_2) ⋯ (λ − λ_n) = 0, where the n roots (i.e., eigenvalues) λ_1, ..., λ_n are in general complex and not necessarily distinct. If a particular value of λ appears k > 1 times then that eigenvalue is said to be k-fold degenerate. The integer k is also sometimes called the multiplicity of the eigenvalue.

Exercise: Let A be a linear operator with eigenvector |v⟩ and corresponding eigenvalue λ. Show that |v⟩ is also an eigenvector of the linear operator exp(αA), but with eigenvalue exp(αλ).

Example: find the eigenvalues and eigenvectors of

    A = (  1  0  −1 )
        (  0  2   0 )
        ( −1  0   1 ).    (3.50)

The characteristic equation is (1 − λ)²(2 − λ) − (2 − λ) = 0, which, when factorised, is λ(λ − 2)² = 0. Therefore the eigenvalues are λ = 0, 2, 2: the eigenvalue λ = 2 is doubly degenerate.

To find the eigenvector v_1 corresponding to the first eigenvalue λ = λ_1 = 0, take v_1 = (x_1, x_2, x_3)^T and substitute into the eigenvalue equation (3.47) to find x_1 = x_3 and x_2 = 0. Any vector v_1 that satisfies these constraints is an eigenvector of A with eigenvalue 0. For example, we could take v_1 = (1, 0, 1)^T or (π, 0, π)^T or even (i, 0, i)^T (assuming we have a complex vector space). It is usually most convenient, however, to choose eigenvectors that are normalized. Therefore we choose v_1 = (1/√2)(1, 0, 1)^T.

Taking λ = λ_2 = λ_3 = 2 and substituting v = (x_1, x_2, x_3)^T into the eigenvalue equation yields the constraint x_1 + x_3 = 0 (twice over). So, we must set x_1 = −x_3, but we are free to choose anything for x_2. For v_2 let us take x_1 = −x_3 = 1/√2 and x_2 = 0, while for v_3 we choose x_1 = x_3 = 0 and x_2 = 1. To summarise, we have found the following eigenvalues and eigenvectors for the matrix A:

    λ_1 = 0, v_1 = (1/√2)( 1, 0,  1)^T;   λ_2 = 2, v_2 = (1/√2)( 1, 0, −1)^T;   λ_3 = 2, v_3 = (0, 1, 0)^T.    (3.51)

Diagonalisation

Notice that for the 3 × 3 matrix A above we were able to find three eigenvectors v_1, v_2, v_3. These three eigenvectors turn out to be orthogonal. Therefore they can be used as a basis for the 3-dimensional vector space on which A operates. In this eigenbasis the matrix representing the operator A takes on a particularly simple form: it is diagonal, with matrix elements

    ⟨v_i|A|v_j⟩ = λ_i δ_ij.    (3.52)
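The worked example can be checked numerically. This sketch is my own illustration, not from the notes; it assumes numpy, whose eigh routine for symmetric (Hermitian) matrices returns eigenvalues in ascending order with orthonormal eigenvectors as columns:

```python
import numpy as np

# The example matrix (3.50).
A = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 2.0,  0.0],
              [-1.0, 0.0,  1.0]])

lam, V = np.linalg.eigh(A)
assert np.allclose(lam, [0.0, 2.0, 2.0])   # eigenvalues 0, 2, 2 as in the text

# Each column of V satisfies the eigenvalue equation (3.47) ...
for i in range(3):
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])

# ... and in the eigenbasis A is diagonal, eq. (3.52).
assert np.allclose(V.T @ A @ V, np.diag(lam))
print("eigenvalues:", lam)
```

Note that within the doubly degenerate λ = 2 subspace the routine is free to return any orthonormal pair of combinations of v_2 and v_3.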

3.10 The complex spectral theorem

A fundamental problem of linear algebra is to find the conditions under which a given matrix A can be diagonalised, i.e., whether there exists an invertible coordinate transformation for which A becomes diagonal. For our purposes, we need only accept that a sufficient condition is that A be Hermitian (see next section). If you don't like to accept such things, let A be an operator on a complex inner-product space V. It is relatively easy to show that V has an orthonormal basis consisting of eigenvectors of A if and only if A is normal. Recall that normal matrices satisfy AA† = A†A; Hermitian and unitary matrices are special cases of normal matrices.

Here is an outline sketch of the proof. Let |v_1⟩, ..., |v_n⟩ be the eigenvectors of A and λ_1, ..., λ_n the corresponding eigenvalues. First suppose that V has an orthonormal basis consisting of the eigenvectors |v_i⟩. In this basis the matrix representing A is diagonal, with matrix elements A_ij = ⟨v_i|A|v_j⟩. The matrix representing A† has elements (A†)_ij = ⟨v_i|A†|v_j⟩ = ⟨v_j|A|v_i⟩*, which is also diagonal. Diagonal matrices commute. Therefore AA† = A†A: A is normal.

Now the converse: we need to show that if A is normal then V has an orthonormal basis consisting of eigenvectors of A. Any matrix can be reduced to upper triangular form by applying an appropriate set of elementary row operations (§3.6). In particular, there is an orthonormal basis |e_1⟩, ..., |e_n⟩ in which the matrix representing A becomes

    ⟨e_i|A|e_j⟩ = ( a_11  a_12  ⋯  a_1n )
                  (   0   a_22  ⋯  a_2n )
                  (   ⋮          ⋱   ⋮  )
                  (   0     0   ⋯  a_nn ).    (3.53)

From (3.53) we see that

    ‖A|e_1⟩‖² = |a_11|²   and   ‖A†|e_1⟩‖² = |a_11|² + |a_12|² + ⋯ + |a_1n|².    (3.54)

But A is normal, so ‖A|e_1⟩‖ = ‖A†|e_1⟩‖. This means that all entries in the first row of the matrix on the RHS of (3.53) vanish, except possibly for the first. Similarly, from equation (3.53) we have that ‖A|e_2⟩‖² = |a_12|² + |a_22|² = |a_22|² and ‖A†|e_2⟩‖² = |a_22|² + |a_23|² + ⋯ + |a_2n|², using the result that a_12 = 0 from the previous paragraph. Because A is normal we have that ‖A|e_2⟩‖ = ‖A†|e_2⟩‖: the whole second row must equal zero, except possibly for the diagonal element a_22. Repeating this procedure for A|e_3⟩ to A|e_n⟩ shows that all of the off-diagonal elements in (3.53) must be zero. Therefore the orthonormal basis vectors |e_1⟩, ..., |e_n⟩ are eigenvectors of A.

The corresponding result for real vector spaces is harder to prove. It states that a real vector space V has an orthonormal basis consisting of eigenvectors of A if and only if A is symmetric (which is equivalent to Hermitian for the special case of a real vector space).

3.11 Hermitian operators

Hermitian operators are particularly important. Here are two central results. If A is Hermitian then:
(1) its eigenvalues are real;
(2) its eigenvectors are orthogonal and therefore form a basis of V.

Proof of (1): the eigenvalues of a Hermitian operator are real. Let |v⟩ be an eigenvector of A with eigenvalue λ. Then the eigenvalue equation (3.47) and its dual are

    A|v⟩ = λ|v⟩,    (3.55)
    ⟨v|A† = λ*⟨v|.    (3.56)

As A is Hermitian we have A† = A. Operate on the first of these with ⟨v| and use the second to operate on |v⟩. Subtracting, the result is

    0 = (λ − λ*)⟨v|v⟩.    (3.57)

But ⟨v|v⟩ > 0. Therefore λ = λ*: the eigenvalues are real.

Proof of (2): the eigenvectors of a Hermitian operator are orthogonal. Let |v_1⟩ and |v_2⟩ be two eigenvectors with corresponding eigenvalues λ_1, λ_2. The eigenvalue equations are

    A|v_1⟩ = λ_1|v_1⟩,    A|v_2⟩ = λ_2|v_2⟩.    (3.58)

For simplicity, let us first consider the case λ_1 ≠ λ_2. Operating on the first of (3.58) with ⟨v_2| and on the second with ⟨v_1| results in

    ⟨v_2|A|v_1⟩ = λ_1⟨v_2|v_1⟩,    (3.59)
    ⟨v_1|A|v_2⟩ = λ_2⟨v_1|v_2⟩.

Taking the complex conjugate of the second of these gives

    ⟨v_2|A|v_1⟩ = λ_2⟨v_2|v_1⟩,    (3.60)

since λ_2 = λ_2* and ⟨v_1|A|v_2⟩* = ⟨v_2|A†|v_1⟩ = ⟨v_2|A|v_1⟩. Now subtract (3.60) from the first of (3.59):

    0 = (λ_1 − λ_2)⟨v_2|v_1⟩.    (3.61)

Under our assumption that λ_1 ≠ λ_2 we must have ⟨v_2|v_1⟩ = 0: the eigenvectors are orthogonal. If all n eigenvalues are distinct, then it is clear that the eigenvectors span V and therefore form a basis.

If λ_1 = λ_2 then we can use the Gram–Schmidt procedure to construct an orthonormal pair from |v_1⟩ and |v_2⟩: the complex spectral theorem (§3.10) guarantees that a Hermitian operator on an n-dimensional space always has n orthogonal eigenvectors, and so there must exist appropriate linear combinations of |v_1⟩ and |v_2⟩ that are orthogonal.

Exercise: Show that (i) the eigenvalues of a unitary operator are complex numbers of unit modulus and (ii) the eigenvectors of a unitary operator are mutually orthogonal.

How to diagonalise a Hermitian operator

Let A be a Hermitian operator with normalised eigenvectors |v_1⟩, ..., |v_n⟩. When the matrix representing A is expressed in terms of its eigenbasis then the matrix elements are

    ⟨v_i|A|v_j⟩ = λ_i δ_ij,    (3.62)

where λ_i is the eigenvalue corresponding to |v_i⟩. In our standard |e_1⟩, ..., |e_n⟩ basis, we have that the matrix elements of A are given by

    ⟨e_i|A|e_j⟩ = ⟨e_i|IAI|e_j⟩ = Σ_{k,l} ⟨e_i|v_k⟩⟨v_k|A|v_l⟩⟨v_l|e_j⟩,    (3.63)

resolving the identity (3.5) through I = Σ_k |v_k⟩⟨v_k| = Σ_l |v_l⟩⟨v_l|. Written as a matrix equation, this is

    A = U diag(λ_1, ..., λ_n) U†,    (3.64)

where U_ji = ⟨e_j|v_i⟩, so that the i-th column of U expresses |v_i⟩ in terms of the |e_1⟩, ..., |e_n⟩ basis.
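Both central results, and the factorisation A = U diag(λ) U†, are straightforward to verify numerically for a random Hermitian matrix. This is my own illustration (not from the notes), assuming numpy; the seed and size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# A random Hermitian matrix: A = (M + M†)/2.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2

lam, U = np.linalg.eigh(A)   # columns of U are orthonormal eigenvectors

# (1) Eigenvalues are real: the general-purpose eig gives ~zero imaginary parts.
assert np.allclose(np.linalg.eigvals(A).imag, 0.0, atol=1e-10)

# (2) Eigenvectors are orthonormal: U†U = I.
assert np.allclose(U.conj().T @ U, np.eye(n))

# Diagonalisation: A = U diag(lambda) U†.
assert np.allclose(A, U @ np.diag(lam) @ U.conj().T)

# Trace and determinant equal the sum and product of the eigenvalues.
assert np.isclose(np.trace(A).real, lam.sum())
assert np.isclose(np.linalg.det(A).real, lam.prod())
print("Hermitian diagonalisation checks passed")
```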

Exercise: Using UU† = U†U = I show the following:

    U†AU = diag(λ_1, ..., λ_n),
    A^m = U diag(λ_1^m, ..., λ_n^m) U†,
    tr A = Σ_i λ_i,
    det A = Π_{i=1}^n λ_i.    (3.65)

Simultaneous diagonalisation of two Hermitian matrices

Let A and B be two Hermitian operators. There exists a basis in which A and B are both diagonal if and only if [A, B] = 0. Some comments before proving this:
(1) Because the eigenvectors of Hermitian operators are orthogonal, any basis in which such an operator is diagonal must be an eigenbasis.
(2) An equivalent statement is therefore that A and B both have the same eigenvectors if and only if [A, B] = 0.
(3) In this eigenbasis the only difference between A and B is the values of their diagonal elements (i.e., their eigenvalues).

Proof: We first show that if there is a basis in which A and B are both diagonal then [A, B] = 0. This is obvious: diagonal matrices commute. The converse is that if [A, B] = 0 then there is a basis in which both A and B are diagonal. To prove this, note that because A is Hermitian we can find a basis in which A = diag(a_1, ..., a_n), where the a_i are the eigenvalues of A. In this basis B will be represented by some matrix

    B = ( B_11  B_12  ⋯  B_1n )
        ( B_21  B_22  ⋯  B_2n )
        (  ⋮                ⋮  )
        ( B_n1  B_n2  ⋯  B_nn ).    (3.66)

The commutator is

    AB − BA = (       0        (a_1 − a_2)B_12  (a_1 − a_3)B_13  ⋯ )
              ( (a_2 − a_1)B_21        0        (a_2 − a_3)B_23  ⋯ )
              ( (a_3 − a_1)B_31  (a_3 − a_2)B_32       0         ⋯ )
              (       ⋮                ⋮               ⋮           ).    (3.67)

By assumption [A, B] = 0. From the matrix (3.67) we see that this means that B_ij = B_ji = 0 for all indices i, j for which a_i ≠ a_j: if all of A's eigenvalues are distinct then B must be diagonal.

If some of the {a_i} aren't distinct we have just a little more work to do. Take, for example, the case a_1 = a_2. Then we have that

    B = ( B_11  B_12   0    0   ⋯ )
        ( B_21  B_22   0    0   ⋯ )     ( B̄   0  )
        (  0     0   B_33   0   ⋯ )  =  ( 0   B̄_n ),    (3.68)
        (  0     0    0   B_44  ⋯ )
        (  ⋮                      )

using B̄ = (B_11 B_12; B_21 B_22) and B̄_n = diag(B_33, ..., B_nn) to denote block submatrices of B. Now introduce a unitary change-of-basis matrix

    U = ( Ū   0   )
        ( 0   Ī_n ),    (3.69)

where Ū is a 2 × 2 unitary matrix and Ī_n is the (n − 2)-dimensional identity. Then in this new basis B will be represented by the matrix

    U†BU = ( Ū†B̄Ū   0   )
           (   0    B̄_n ).    (3.70)

Notice that U changes only the upper-left corner of B. We can always choose Ū to make Ū†B̄Ū = diag(b_1, b_2), so that the matrix U†BU becomes diagonal. The basis change U has no effect on the matrix representing A: because Ā = diag(a_1, a_1) = a_1 diag(1, 1) is proportional to the identity, we have that

    U†AU = ( Ū†ĀŪ   0   )     ( Ā   0   )
           (   0    Ā_n )  =  ( 0   Ā_n )  = A.    (3.71)

3.12 Odds and ends

Quadratic forms

Expressions such as ax² + bxy + cy², or, more generally,

    Σ_i A_ii x_i² + 2 Σ_i Σ_{j<i} A_ij x_i x_j = Σ_{ij} A_ij x_i x_j,    (3.72)

are known as homogeneous quadratic forms. They can be written in matrix form as x^T A x, where A is a symmetric matrix with elements A_ij = A_ji. For example,

    4x² + 6xy + 7y² = ( x  y ) ( 4  3 ) ( x )
                               ( 3  7 ) ( y ).    (3.73)

Exercise: Explain why under a certain change of basis x → x′ the quadratic form (3.72) can be expressed as λ_1 x′_1² + ⋯ + λ_n x′_n², where the λ_i are the eigenvalues of A. What is the relationship between x and x′? Do you need to make any assumptions about the elements A_ij or x_i?

Lorentz transformations

Our definition of scalar product includes a condition that scalar products are positive unless one of the vectors involved is zero. This condition is relaxed in the special theory of relativity, where the scalar product of two four-vectors x = (ct, x, y, z) and x̄ = (ct̄, x̄, ȳ, z̄) is defined to be

    x · x̄ = ct·ct̄ − x x̄ − y ȳ − z z̄ = x^T η x̄,    (3.74)

where the metric η = diag(1, −1, −1, −1). The square of the length of a spacetime interval dx = (c dt, dx, dy, dz) is then

    ds² = c²dt² − dx² − dy² − dz²,    (3.75)

which can be positive, negative or zero, depending on whether the interval is time-like, space-like or light-like. A Lorentz transformation is a change of basis x → x′ = Λx that preserves the scalar product (3.74). An example familiar from the first-year course is x′ = Λx, where, acting on the (ct, x) components (with y and z unchanged),

    Λ = (  γ   −βγ )
        ( −βγ   γ  )

with β = v/c and γ = 1/√(1 − β²). It is easy to confirm that

    Λ^T η Λ = η.    (3.76)
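A concrete instance of the simultaneous-diagonalisation theorem is easy to construct. The sketch below is my own illustration, not from the notes; it assumes numpy, and uses two commuting real symmetric 2 × 2 matrices (both are combinations of the identity and the same swap matrix):

```python
import numpy as np

# Two real symmetric (hence Hermitian) matrices that commute:
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])    # 2 I + S, with S = [[0,1],[1,0]]
B = np.array([[3.0, -1.0],
              [-1.0, 3.0]])   # 3 I - S
assert np.allclose(A @ B, B @ A)   # [A, B] = 0

# A has distinct eigenvalues 1 and 3, so its eigenbasis is forced on B too.
lamA, V = np.linalg.eigh(A)
assert np.allclose(lamA, [1.0, 3.0])

# The same V diagonalises both A and B.
DA = V.T @ A @ V
DB = V.T @ B @ V
assert np.allclose(DA, np.diag(np.diag(DA)))   # off-diagonals vanish
assert np.allclose(DB, np.diag(np.diag(DB)))
print("a common eigenbasis diagonalises both commuting matrices")
```

Swapping B for a matrix that does not commute with A (e.g. [[0, 1], [0, 0]] symmetrised differently) makes the last assertion fail, as the theorem requires.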

Tensor product

Let V and W be n- and m-dimensional vector spaces respectively. The tensor product V ⊗ W is a vector space of dimension n·m. A basis for V ⊗ W is {|e_i^V⟩ ⊗ |e_j^W⟩}, where |e_1^V⟩, ..., |e_n^V⟩ is a basis of V and |e_1^W⟩, ..., |e_m^W⟩ is a basis of W. For any |v_i⟩ ∈ V, |w_i⟩ ∈ W and scalar α, the tensor product satisfies

    (|v_1⟩ + |v_2⟩) ⊗ |w⟩ = |v_1⟩ ⊗ |w⟩ + |v_2⟩ ⊗ |w⟩,
    |v⟩ ⊗ (|w_1⟩ + |w_2⟩) = |v⟩ ⊗ |w_1⟩ + |v⟩ ⊗ |w_2⟩,
    (α|v⟩) ⊗ |w⟩ = |v⟩ ⊗ (α|w⟩) = α (|v⟩ ⊗ |w⟩).    (3.77)

Given linear operators A : V → V and B : W → W we can define a new linear operator A ⊗ B : V ⊗ W → V ⊗ W through

    (A ⊗ B)(|v⟩ ⊗ |w⟩) = (A|v⟩) ⊗ (B|w⟩).    (3.78)

If V and W are inner-product spaces then there is a natural inner product between two vectors |v⟩ ⊗ |w⟩ and |v′⟩ ⊗ |w′⟩ ∈ V ⊗ W:

    (⟨v| ⊗ ⟨w|)(|v′⟩ ⊗ |w′⟩) = ⟨v|v′⟩⟨w|w′⟩.    (3.79)

Further reading: see RHB §8, DR §II.

Jordan normal form

Not all matrices can be diagonalised, but it turns out that for every matrix A there is an invertible transformation for which A takes on the block-diagonal form

    A = ( A_1              )
        (     A_2          )
        (         ⋱        )
        (             A_N  ),    (3.80)

where the blocks are

    A_i = ( λ_i   1            )
          (      λ_i   1       )
          (           ⋱    1   )
          (               λ_i  ),    (3.81)

having ones immediately above the main diagonal. Each λ_i here is an eigenvalue of A with multiplicity given by the number of diagonal elements in the corresponding A_i.

Singular value decomposition

A generalisation of the eigenvector decomposition we have been applying to square matrices holds for any m × n matrix A: any such A can be factorized as

    A = U D V†,    (3.82)

where U is an m × m unitary matrix, D is an m × n diagonal matrix consisting of the so-called singular values of A, and V is an n × n unitary matrix.
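The factorisation (3.82) can be checked directly for a rectangular complex matrix. This sketch is my own illustration, not from the notes; it assumes numpy, whose svd returns V† (called Vh) rather than V, and the singular values as a 1-d array that must be embedded in the m × n matrix D:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

# Full SVD: U is m x m unitary, Vh = V† is n x n unitary, s holds the singular values.
U, s, Vh = np.linalg.svd(A, full_matrices=True)
D = np.zeros((m, n))
D[:n, :n] = np.diag(s)

assert np.allclose(U.conj().T @ U, np.eye(m))      # U unitary
assert np.allclose(Vh @ Vh.conj().T, np.eye(n))    # V unitary
assert np.allclose(A, U @ D @ Vh)                  # A = U D V†, eq. (3.82)
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])  # non-negative, sorted
print("singular values:", np.round(s, 3))
```

For a Hermitian positive-definite matrix the singular values coincide with the eigenvalues, which ties the SVD back to the diagonalisation of §3.11.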