MAT265 Mathematical Quantum Mechanics Brief Review of the Representations of SU(2)

(Notes for MAT280 taken by Shannon Starr, October 2000)

There are many references for representation theory in general, and for the representations of SU(2) in particular. Three that I used are

A. Edmonds, Angular Momentum in Quantum Mechanics, 1957.
Wu-Ki Tung, Group Theory in Physics, 1985.
Fulton & Harris, Representation Theory, 1991.

1  su(2), so_3(R) and sl_2(C)

The group SU(2) is the group of 2x2 unitary matrices with determinant 1. Every such matrix can be written uniquely as

U(z, w) = \begin{pmatrix} z & w \\ -\bar{w} & \bar{z} \end{pmatrix}

for (z, w) ∈ C^2, with the condition that |z|^2 + |w|^2 = 1. In other words, SU(2) is topologically equivalent to the unit sphere in C^2, which is the same as the real 3-sphere. SU(2) is a real Lie group, meaning it is a group with a compatible structure of a real manifold.

A Lie algebra g is a vector space with a bilinear form [.,.] : g × g → g, called a Lie bracket, satisfying

1. [Y, X] = -[X, Y],
2. [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0.

The Lie algebra su(2) is defined as the tangent space to SU(2) at the identity. We obtain the tangent space by taking all limits

A(Z, W) = \lim_{ε → 0, ε ∈ R} \frac{U(1 + εZ, εW) - U(1, 0)}{ε} = \begin{pmatrix} Z & W \\ -\bar{W} & \bar{Z} \end{pmatrix}

for those Z and W which satisfy det U(1 + εZ, εW) = 1 + O(ε^2), i.e. Re(Z) = 0. These are exactly the matrices A in M_2 satisfying A^* = -A.
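As a quick numerical aside (not part of the original notes), the following Python sketch checks these two facts for a randomly chosen point; the names U and A are only used for this illustration.

```python
import numpy as np

# Check (illustration only): U(z, w) with |z|^2 + |w|^2 = 1 is unitary with
# determinant 1, and A(Z, W) with Re(Z) = 0 satisfies A^* = -A.

rng = np.random.default_rng(0)

z, w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
norm = np.sqrt(abs(z) ** 2 + abs(w) ** 2)
z, w = z / norm, w / norm                       # enforce |z|^2 + |w|^2 = 1
U = np.array([[z, w], [-np.conj(w), np.conj(z)]])
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(U), 1.0)        # determinant 1

Z = 1j * rng.standard_normal()                  # Re(Z) = 0
W = rng.standard_normal() + 1j * rng.standard_normal()
A = np.array([[Z, W], [-np.conj(W), np.conj(Z)]])
assert np.allclose(A.conj().T, -A)              # anti-hermitian: A^* = -A
print("U(z, w) lies in SU(2); A(Z, W) lies in su(2)")
```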

This is a three-dimensional real vector space with basis iS_1, iS_2, iS_3, where

S_1 = \frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
S_2 = \frac{1}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
S_3 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

are the spin-1/2 matrices from before. Recall the spin matrices satisfy the commutation relations

[S_1, S_2] = iS_3, \quad [S_2, S_3] = iS_1, \quad [S_3, S_1] = iS_2.

The group SU(2) is studied in connection with the quantization of angular momentum. One may wonder if SO_3(R), the group of rotations in real, three-dimensional space, is a better group to study. The rotations about the three coordinate axes are given by the matrices

R_1(θ) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos θ & -\sin θ \\ 0 & \sin θ & \cos θ \end{pmatrix}, \quad
R_2(θ) = \begin{pmatrix} \cos θ & 0 & \sin θ \\ 0 & 1 & 0 \\ -\sin θ & 0 & \cos θ \end{pmatrix}, \quad
R_3(θ) = \begin{pmatrix} \cos θ & -\sin θ & 0 \\ \sin θ & \cos θ & 0 \\ 0 & 0 & 1 \end{pmatrix}.

The group SO(3) is generated by these matrices for θ ∈ [0, 2π). It is easy to calculate the derivative of each of these matrices at zero:

r_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \quad
r_2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad
r_3 = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.

These are the basis elements for so_3(R). So so_3(R) consists of all skew-symmetric (A^T = -A), real 3x3 matrices. Moreover, the basis elements satisfy the commutation relations [r_α, r_β] = ε_{αβγ} r_γ. It is customary to define J_α = i r_α, whereupon we recover the commutation relations [J_α, J_β] = i ε_{αβγ} J_γ. So, in fact, so_3(R) is exactly the same as su(2).

Moreover, SO_3(R) is not a simply connected group, while SU(2) is. Indeed, SU(2) is a double cover of SO_3(R), which can be obtained by considering the representation of SU(2) on R^3 wherein the three-dimensional vectors are identified with traceless, hermitian matrices X = x_1 S_1 + x_2 S_2 + x_3 S_3, and SU(2) acts by conjugation X ↦ X' = UXU^* for U ∈ SU(2). Since X' is still hermitian and traceless, U actually defines a linear transformation on this three-dimensional, real space. Note that -4 det(X) = x_1^2 + x_2^2 + x_3^2 and det(X') = det(X), so that the linear transformation preserves the Euclidean norm, i.e. it is orthogonal. Finally, it is special (determinant +1) because SU(2) is connected, so the determinant of the image cannot take both values 1 and -1. Thus we have a map of SU(2) into SO(3). It is two-to-one because U and -U both induce the same map.
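A small numerical illustration of the last two points (an aside, not from the original notes): the spin-1/2 matrices and J_α = i r_α obey the same commutation relations, and conjugation X ↦ UXU^* induces a rotation, with U and -U giving the same one. The helper name cover and the chosen angle and axis are assumptions of this sketch only.

```python
import numpy as np

# The spin-1/2 matrices and the generators J_a = i*r_a of so_3(R) obey the
# same commutation relations, and conjugation gives the 2-to-1 cover SU(2) -> SO(3).

S = [0.5 * np.array([[0, 1], [1, 0]], dtype=complex),
     0.5 * np.array([[0, -1j], [1j, 0]]),
     0.5 * np.array([[1, 0], [0, -1]], dtype=complex)]
r = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)]
J = [1j * ra for ra in r]

comm = lambda A, B: A @ B - B @ A
for (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(comm(S[a], S[b]), 1j * S[c])   # [S_a, S_b] = i eps_abc S_c
    assert np.allclose(comm(r[a], r[b]), r[c])        # [r_a, r_b] = eps_abc r_c
    assert np.allclose(comm(J[a], J[b]), 1j * J[c])   # same relations as su(2)

# Covering map: write X = sum_b x_b S_b; then U X U^* = sum_a (R x)_a S_a with
# R_ab = 2 tr(S_a U S_b U^*), using tr(S_a S_b) = delta_ab / 2.
def cover(U):
    return np.array([[2 * np.trace(Sa @ U @ Sb @ U.conj().T).real
                      for Sb in S] for Sa in S])

theta, n = 0.7, np.array([1.0, 2.0, 2.0]) / 3.0        # example rotation angle and unit axis
U = np.cos(theta / 2) * np.eye(2) - 2j * np.sin(theta / 2) * sum(ni * Si for ni, Si in zip(n, S))
R = cover(U)
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)  # R is in SO(3)
assert np.allclose(cover(-U), R)                       # U and -U give the same rotation
print("so_3(R) and su(2) relations verified; the covering map is two-to-one into SO(3)")
```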

There is a one-to-one correspondence between the representations of a Lie algebra and the representations of the corresponding Lie group, when the Lie group is connected and simply connected. This means that the representations of su(2) and so_3(R) are both the same as the representations of SU(2), but not of SO(3). This explains why we study SU(2) instead of SO(3).

The Lie algebra su(2) is a real Lie algebra. It can be thought of as a real subspace of B(C^2). It is often useful to have a complex subspace instead. We can define the complexification of su(2), which is simply the complex vector space spanned by S_1, S_2, S_3. It is easy to see that this is sl_2(C), the set of all trace-zero complex matrices in M_2. (This is the Lie algebra of the Lie group SL_2(C): all 2x2 complex matrices with determinant 1.) We define S^± = S_1 ± iS_2 ∈ sl_2(C), which satisfy the commutation relations

[S_3, S^+] = S^+, \quad [S_3, S^-] = -S^-, \quad [S^+, S^-] = 2S_3.

The matrices (S_3, S^+, S^-) generate sl_2(C) just as well as (S_1, S_2, S_3). S^+ and S^- are the raising and lowering operators.
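For concreteness (an aside, not from the notes), these relations are easy to confirm numerically for the spin-1/2 matrices:

```python
import numpy as np

# Check the sl_2(C) relations for S^{+-} = S_1 +- i S_2 built from spin 1/2.

S1 = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
S2 = 0.5 * np.array([[0, -1j], [1j, 0]])
S3 = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sp, Sm = S1 + 1j * S2, S1 - 1j * S2      # raising and lowering operators

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(S3, Sp), Sp)         # [S_3, S^+] = S^+
assert np.allclose(comm(S3, Sm), -Sm)        # [S_3, S^-] = -S^-
assert np.allclose(comm(Sp, Sm), 2 * S3)     # [S^+, S^-] = 2 S_3
print("raising/lowering relations verified for spin 1/2")
```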

2  Representations

Any linear map ρ : su(2) → M_n such that

[ρ(iS_α), ρ(iS_β)] = -ε_{αβγ} ρ(iS_γ)

is called an n-dimensional representation of su(2). Such a representation is specified by the images ρ(iS_α), α = 1, 2, 3. A representation of sl_2(C) is a linear map such that

[ρ(S_3), ρ(S^±)] = ±ρ(S^±), \quad [ρ(S^+), ρ(S^-)] = 2ρ(S_3).

Any representation of su(2) can be extended to a representation of sl_2(C), and any representation of sl_2(C) can be restricted to a representation of su(2). If ρ satisfies ρ(A^*) = ρ(A)^*, then it is called a unitary representation. An important point is that if ρ is not unitary a priori, we can always redefine the inner product on C^n so as to make it unitary. This is because SU(2) is a compact group, and so has a unique Haar measure H. The Haar measure is characterized by the fact that H(E) = H(UE) for every measurable E ⊂ SU(2) and U ∈ SU(2), and H(SU(2)) = 1. Whatever inner product is on C^n initially, we average it over SU(2):

⟨ψ|φ⟩_new = ∫_{SU(2)} ⟨Uψ|Uφ⟩ dH(U).

With the new inner product on C^n, ρ is unitary. From now on all representations are unitary.

A representation is irreducible if there is no proper, nonzero invariant subspace V ⊂ C^n. An invariant subspace is one for which ρ(S_α)v ∈ V for every v ∈ V and α = 1, 2, 3. The entire list of finite-dimensional, irreducible representations was given in lecture: they are specified by the spin-S matrices. We will not prove this here; it is proved in each of the three references above.

Suppose that W is an invariant subspace of C^n. Then the orthogonal complement W^⊥ is also invariant, since ρ is unitary. This proves that every finite-dimensional representation of su(2) (and of so_3(R) and sl_2(C)) is completely reducible, i.e. it can be decomposed into a direct sum of irreducible representations.

Suppose that ρ_1 : su(2) → B(V_1) and ρ_2 : su(2) → B(V_2) are two representations of su(2) on two finite-dimensional complex vector spaces V_1 and V_2. Then there is a representation ρ : su(2) → B(V_1 ⊗ V_2) given by

ρ(S_α) = ρ_1(S_α) ⊗ 1I_2 + 1I_1 ⊗ ρ_2(S_α),

where 1I_j is the identity operator on V_j, j = 1, 2. It is trivial to check that this satisfies the commutation relations, since for A_1, B_1 ∈ B(V_1) and A_2, B_2 ∈ B(V_2):

[1I_1 ⊗ A_2, B_1 ⊗ 1I_2] = 0, \quad [1I_1 ⊗ A_2, 1I_1 ⊗ B_2] = 1I_1 ⊗ [A_2, B_2], \quad [A_1 ⊗ 1I_2, B_1 ⊗ 1I_2] = [A_1, B_1] ⊗ 1I_2.

It is also trivial to check that the tensor product of two unitary representations is again unitary. In general it is not true that if V_1 and V_2 are irreducible representations then the tensor product V_1 ⊗ V_2 is also irreducible. It is therefore a natural question to ask how V_1 ⊗ V_2 decomposes into irreducibles. This is the Clebsch-Gordan problem, which we will now discuss.
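As a sanity check (not part of the original notes), one can verify the tensor-product construction directly for two spin-1/2 factors:

```python
import numpy as np

# rho(S_a) = S_a (x) 1 + 1 (x) S_a on C^2 (x) C^2 satisfies the su(2)
# commutation relations, and each rho(S_a) is hermitian, so the
# representation is unitary in the sense defined above.

S = [0.5 * np.array([[0, 1], [1, 0]], dtype=complex),
     0.5 * np.array([[0, -1j], [1j, 0]]),
     0.5 * np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

rho = [np.kron(Sa, I2) + np.kron(I2, Sa) for Sa in S]

comm = lambda A, B: A @ B - B @ A
for (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(comm(rho[a], rho[b]), 1j * rho[c])
for R in rho:
    assert np.allclose(R, R.conj().T)   # hermitian images, so rho(A^*) = rho(A)^*
print("tensor-product representation satisfies the su(2) relations and is unitary")
```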

3  Matrix representations

We start with a brief review of the spin-S matrices, S = 1/2, 1, 3/2, 2, .... The dimension n of the Hilbert space for each spin is related to S by n = 2S + 1. The standard spin matrices are given in a basis where S^3 is diagonal; it is customary to label the basis by the eigenvalues of S^3, which are all simple. For m = -S, -S + 1, ..., S - 1, S:

S^3 |m⟩ = m |m⟩.

In the basis {|m⟩}_{-S ≤ m ≤ S}, S^3 is given by

S^3 = \begin{pmatrix} S & & & \\ & S-1 & & \\ & & \ddots & \\ & & & -S \end{pmatrix}. (3.1)

I.e., in the basis (3.1),

S^1 = \frac{S^+ + S^-}{2}, \quad S^2 = \frac{S^+ - S^-}{2i},

where

S^+ |m⟩ = c_{m+1} |m+1⟩, \quad S^- |m⟩ = c_m |m-1⟩, (3.2)

c_m = \sqrt{S(S+1) - m(m-1)} = \sqrt{(S+m)(S+1-m)}.

Note that c_{S+1} = c_{-S} = 0. In matrix form,

S^+ = \begin{pmatrix} 0 & c_S & & & \\ & 0 & c_{S-1} & & \\ & & \ddots & \ddots & \\ & & & 0 & c_{-S+1} \\ & & & & 0 \end{pmatrix}, (3.3)

S^- = (S^+)^*. (3.4)

Or equivalently: (S^+)_{m,n} = c_m δ_{m,n+1} and (S^-)_{m,n} = c_n δ_{n,m+1}. The famous su(2) commutation relations are [S^+, S^-] = 2S^3, and one can quickly derive

[S^3, S^+] |m⟩ = ((m+1) c_{m+1} - m c_{m+1}) |m+1⟩ = S^+ |m⟩, \quad i.e. \quad [S^3, S^+] = S^+,

and similarly, [S^3, S^-] = -S^-. Or, in yet another form:

[S^α, S^β] = i Σ_γ ε_{αβγ} S^γ, \quad α, β, γ = 1, 2, 3,

where ε_{αβγ} is the totally antisymmetric Levi-Civita symbol with ε_{123} = 1.
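The following sketch (not from the notes) turns the formulas above into a small constructor for the spin-S matrices. It checks the commutation relations and also the Casimir identity (S^1)^2 + (S^2)^2 + (S^3)^2 = S(S+1) 1I, which is used again in Section 4; the function name spin_matrices is just for this example.

```python
import numpy as np

# Build the spin-S matrices from c_m = sqrt(S(S+1) - m(m-1)) and verify
# the su(2) commutation relations and the Casimir eigenvalue S(S+1).

def spin_matrices(S):
    """Return (S1, S2, S3) in the basis |S>, |S-1>, ..., |-S>."""
    n = int(round(2 * S + 1))
    m = S - np.arange(n)                               # S^3 eigenvalues, descending
    S3 = np.diag(m).astype(complex)
    c = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] - 1))   # c_S, c_{S-1}, ..., c_{-S+1}
    Sp = np.diag(c, k=1).astype(complex)               # S^+ has the c's on the superdiagonal
    Sm = Sp.conj().T                                   # S^- = (S^+)^*
    return (Sp + Sm) / 2, (Sp - Sm) / (2j), S3

comm = lambda A, B: A @ B - B @ A
for S in (0.5, 1, 1.5, 2):
    S1, S2, S3 = spin_matrices(S)
    assert np.allclose(comm(S1, S2), 1j * S3)
    assert np.allclose(comm(S2, S3), 1j * S1)
    assert np.allclose(comm(S3, S1), 1j * S2)
    Casimir = S1 @ S1 + S2 @ S2 + S3 @ S3
    assert np.allclose(Casimir, S * (S + 1) * np.eye(len(S3)))
print("spin-S matrices verified for S = 1/2, 1, 3/2, 2")
```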

Example: S = 1, n = 3:

S^3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \quad
S^2 = \begin{pmatrix} 0 & -i/\sqrt{2} & 0 \\ i/\sqrt{2} & 0 & -i/\sqrt{2} \\ 0 & i/\sqrt{2} & 0 \end{pmatrix}, \quad
S^1 = \begin{pmatrix} 0 & 1/\sqrt{2} & 0 \\ 1/\sqrt{2} & 0 & 1/\sqrt{2} \\ 0 & 1/\sqrt{2} & 0 \end{pmatrix}.

There is more to say about the spin matrices, but let's first define the model. Let Λ ⊂ Z^d be a finite subset. Set n = 2S + 1; then

H_Λ = ⊗_{x ∈ Λ} H_x, \quad A_Λ = B(H_Λ) = ⊗_{x ∈ Λ} A_x, \quad H_x = C^n, \quad A_x = M_n.

For A ∈ M_n, A_x = A ⊗ 1I_{Λ\{x}}, which means that A acts on the x-th factor in H_Λ. Note that for Λ_0 ⊂ Λ, A_{Λ_0} is naturally embedded into A_Λ (anything which can be measured on the atoms/spins in Λ_0 can be measured on the atoms/spins in Λ, which contains Λ_0):

X ∈ A_{Λ_0}, \quad regarded as \quad X ≅ X ⊗ 1I_{Λ\Λ_0} ∈ A_Λ. (3.5)

This is similar to considering a function of x_1 alone as a function of (x_1, x_2, x_3) ∈ R^3 which happens not to depend on x_2 and x_3. We can now define the Heisenberg Hamiltonian(s):

H_Λ = -J Σ_{{x,y} ⊂ Λ, |x-y| = 1} S_x · S_y, (3.6)

where S_x · S_y is shorthand for S^1_x S^1_y + S^2_x S^2_y + S^3_x S^3_y. With this sign convention, J > 0 is the ferromagnet and J < 0 the antiferromagnet. The sign and magnitude of J are material dependent, and to some extent J also depends on external factors such as pressure. For us it will be a given constant of which only the sign matters. Usually we will assume J = +1 or J = -1.

The fundamental difference between the ferromagnet and the antiferromagnet can already be seen in the model on just two nearest-neighbor sites. The Hamiltonian is then -J S_x · S_y. This operator can be diagonalized most easily by using just a bit of representation theory of the group SU(2).
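As an aside (not in the original notes), here is the two-site spin-1/2 computation done by brute force; it anticipates the singlet/triplet answer that the representation theory below explains.

```python
import numpy as np

# Diagonalize H = -J S_x . S_y for two spin-1/2 sites.  Writing
# S_x . S_y = (S_tot^2 - S_x^2 - S_y^2)/2, its eigenvalues are 1/4 on the
# spin-1 (triplet) subspace and -3/4 on the spin-0 (singlet) subspace, so for
# J = +1 (ferromagnet) the triplet lies below the singlet, and vice versa.

S = [0.5 * np.array([[0, 1], [1, 0]], dtype=complex),
     0.5 * np.array([[0, -1j], [1j, 0]]),
     0.5 * np.array([[1, 0], [0, -1]], dtype=complex)]

SxSy = sum(np.kron(Sa, Sa) for Sa in S)      # S_x . S_y on C^2 (x) C^2

for J in (+1.0, -1.0):
    H = -J * SxSy
    print(f"J = {J:+.0f}: eigenvalues of -J S_x.S_y:", np.sort(np.linalg.eigvalsh(H)))
# J = +1: [-0.25, -0.25, -0.25, 0.75]   (triplet ground states)
# J = -1: [-0.75, 0.25, 0.25, 0.25]     (singlet ground state)
```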

4  Clebsch-Gordan coefficients

Instead of considering arbitrary representations V_1, V_2 we will consider irreducible representations. The reason is that we already know V_1 and V_2 can be decomposed into irreducibles

V_1 = V_{1,1} ⊕ ... ⊕ V_{1,r_1}, \quad V_2 = V_{2,1} ⊕ ... ⊕ V_{2,r_2}.

Since ⊗ is distributive with respect to ⊕, we see that in general

V_1 ⊗ V_2 = ⊕_{i_1=1}^{r_1} ⊕_{i_2=1}^{r_2} V_{1,i_1} ⊗ V_{2,i_2}.

If we can say what each V_{1,i_1} ⊗ V_{2,i_2} is in terms of irreducibles, then we can determine the direct sum decomposition of V_1 ⊗ V_2. Since V_{1,i_1} and V_{2,i_2} are each irreducible, it suffices to solve the Clebsch-Gordan problem for V_1 and V_2 both irreducible.

The entire list of irreps of su(2) was given in Section 3, as the spin-S matrices. Let us suppose we have S = j_1 and S = j_2, so that V_1 = C^{2j_1+1} and V_2 = C^{2j_2+1}. We will refer to these representations as D^{(j_1)} and D^{(j_2)}, following standard practice. The states of V_α will be labeled by ψ_α(j_α, m_α) where m_α = -j_α, -j_α + 1, ..., j_α (α = 1, 2). This is chosen so that

S^3_α ψ_α(j_α, m_α) = m_α ψ_α(j_α, m_α).

We define the Casimir operator

\mathbf{S}_α^2 = (S^1_α)^2 + (S^2_α)^2 + (S^3_α)^2.

Observe that by the derivation property of [.,.] (namely, [AB, C] = A[B, C] + [A, C]B), we have

[A, B^2] = {[A, B], B},

where {X, Y} = XY + YX is the anticommutator. Therefore

[S^3_α, (S^1_α)^2] = {[S^3_α, S^1_α], S^1_α} = i{S^2_α, S^1_α}.

Similarly,

[S^3_α, (S^2_α)^2] = {[S^3_α, S^2_α], S^2_α} = -i{S^1_α, S^2_α}.

Finally, [S^3_α, (S^3_α)^2] = 0 for obvious reasons. Since the anticommutator is symmetric, we see that [S^3_α, \mathbf{S}_α^2] = 0. By permutation symmetry, we also have [S^1_α, \mathbf{S}_α^2] = [S^2_α, \mathbf{S}_α^2] = 0. Therefore [S^±_α, \mathbf{S}_α^2] = 0, and we see that \mathbf{S}_α^2 acts on V_α as a constant times the identity matrix. To see just what constant, we calculate it on ψ_α(j_α, j_α). Note that

\mathbf{S}_α^2 = \frac{1}{2}(S^+_α S^-_α + S^-_α S^+_α) + (S^3_α)^2.

Therefore, since S^+_α ψ_α(j_α, j_α) = 0,

\mathbf{S}_α^2 ψ_α(j_α, j_α) = \frac{1}{2} S^+_α S^-_α ψ_α(j_α, j_α) + j_α^2 ψ_α(j_α, j_α) = j_α(j_α + 1) ψ_α(j_α, j_α).

An important fact is that this Casimir operator can distinguish vectors in different irreps, because its eigenvalue is an injective function of j.

The representation on V_1 ⊗ V_2 is generated by the operators S^3 = S^3_1 + S^3_2 and S^± = S^±_1 + S^±_2. In general this will be a direct sum of irreps given by spin-j matrices for j taking on some values. Note that the Casimir operator for the tensor product is

\mathbf{S}^2 = (\mathbf{S}_1 + \mathbf{S}_2)^2 = \mathbf{S}_1^2 + \mathbf{S}_2^2 + 2 \mathbf{S}_1 · \mathbf{S}_2, \quad where \quad \mathbf{S}_1 · \mathbf{S}_2 = S^3_1 S^3_2 + \frac{1}{2}(S^+_1 S^-_2 + S^-_1 S^+_2).

Since \mathbf{S}_1^2 and \mathbf{S}_2^2 are essentially constants times the identity matrix, (\mathbf{S}_1^2, \mathbf{S}_2^2, \mathbf{S}^2, S^3) is a commuting family of operators. We prefer to keep \mathbf{S}_1^2 and \mathbf{S}_2^2 in the list since they make clear what the dimensions of the two irreps are that we are tensoring. Suppose ψ(j_1, j_2, j, m) is a simultaneous eigenstate with eigenvalues (j_1(j_1+1), j_2(j_2+1), j(j+1), m). Then

ψ(j_1, j_2, j, m) = Σ_{m_1 + m_2 = m} M(j_1, m_1, j_2, m_2; j_1, j_2, j, m) ψ_1(j_1, m_1) ⊗ ψ_2(j_2, m_2).

It is an abuse of notation to label the eigenstates ψ(j_1, j_2, j, m) before proving that for each quadruple of eigenvalues there is at most one eigenstate. But we will systematically prove this fact regardless of the label for the eigenstates, so the abuse is not important.
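A numerical aside (not from the notes): for a pair of example spins one can confirm that (\mathbf{S}_1^2, \mathbf{S}_2^2, \mathbf{S}^2, S^3) indeed commute and that the single-factor Casimirs act as j_α(j_α + 1) times the identity. The spin values chosen below are arbitrary.

```python
import numpy as np

# On C^{2j1+1} (x) C^{2j2+1}, build the total spin operators and check that
# (S_1^2, S_2^2, S^2, S^3) is a commuting family and that each single-site
# Casimir equals j(j+1) times the identity.

def spin_matrices(S):
    n = int(round(2 * S + 1))
    m = S - np.arange(n)
    c = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] - 1))
    Sp = np.diag(c, k=1).astype(complex)
    Sm = Sp.conj().T
    return (Sp + Sm) / 2, (Sp - Sm) / (2j), np.diag(m).astype(complex)

j1, j2 = 1.0, 0.5                        # example spins; any values work
A, B = spin_matrices(j1), spin_matrices(j2)
IA, IB = np.eye(len(A[2])), np.eye(len(B[2]))

tot = [np.kron(a, IB) + np.kron(IA, b) for a, b in zip(A, B)]
Cas1 = sum(np.kron(a @ a, IB) for a in A)       # S_1^2 on the product space
Cas2 = sum(np.kron(IA, b @ b) for b in B)       # S_2^2 on the product space
Cas = sum(T @ T for T in tot)                   # total Casimir S^2
S3 = tot[2]

assert np.allclose(Cas1, j1 * (j1 + 1) * np.kron(IA, IB))
assert np.allclose(Cas2, j2 * (j2 + 1) * np.kron(IA, IB))
ops = [Cas1, Cas2, Cas, S3]
for X in ops:
    for Y in ops:
        assert np.allclose(X @ Y, Y @ X)        # commuting family
print("(S_1^2, S_2^2, S^2, S^3) commute; single-site Casimirs are j(j+1) times the identity")
```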

We consider ψ_1(j_1, j_1) ⊗ ψ_2(j_2, j_2). This is a simultaneous eigenvector of \mathbf{S}_1^2, \mathbf{S}_2^2 and S^3, with eigenvalues j_1(j_1+1), j_2(j_2+1) and j_1 + j_2. But also, since S^+_α ψ_1(j_1, j_1) ⊗ ψ_2(j_2, j_2) = 0 for α = 1, 2, we see that it is an eigenvector of \mathbf{S}_1 · \mathbf{S}_2 with eigenvalue j_1 j_2. This means it is an eigenvector of \mathbf{S}^2, with eigenvalue (j_1+j_2)(j_1+j_2+1). In other words,

ψ_1(j_1, j_1) ⊗ ψ_2(j_2, j_2) = ψ(j_1, j_2, j_1+j_2, j_1+j_2).

So there is at least one copy of the irrep D^{(j_1+j_2)}, generated by the spin-(j_1+j_2) matrices. This is the only state ψ(j_1, j_2, j, j_1+j_2) for any j. Thus every irrep in the direct sum decomposition of V_1 ⊗ V_2 has spin at most j_1+j_2, and in fact there is only one copy of that irrep. (If there were any other irrep with spin at least j_1+j_2, it would contain an eigenstate of S^3 with eigenvalue j_1+j_2, orthogonal to the one we just determined.)

Suppose now that we have proved there is a unique copy of the irrep D^{(j)} in D^{(j_1)} ⊗ D^{(j_2)} for j = j_1+j_2, j_1+j_2-1, ..., j', where j' > |j_1 - j_2|. The eigenspace of S^3 with eigenvalue j'-1 has dimension j_1+j_2-j'+2, while the eigenstates {ψ(j_1, j_2, j, j'-1) ∈ D^{(j)} : j = j_1+j_2, j_1+j_2-1, ..., j'} only account for a (j_1+j_2-j'+1)-dimensional subspace. Taking the (up to scalar) unique vector orthogonal to all of these yields a state ψ(j_1, j_2, j, j'-1) with j < j'. (Because the orthogonal complement of an invariant subspace is invariant.) But since the third component of spin for this state is j'-1, which forces j ≥ j'-1, it must be that j = j'-1. So there is at least one copy of D^{(j'-1)}. Since any other copy of D^{(j'-1)} would give an additional orthogonal state in the eigenspace of S^3 with eigenvalue j'-1, there is a unique copy of D^{(j'-1)}.

Thus we have proved that there is a unique copy of D^{(j)} in the tensor product D^{(j_1)} ⊗ D^{(j_2)} for each j = j_1+j_2, j_1+j_2-1, ..., |j_1-j_2|. To see that this is actually the entire list of irreps, note that the dimensions match. The dimension of D^{(j_1)} ⊗ D^{(j_2)} is (2j_1+1)(2j_2+1), and

dim(D^{(j_1+j_2)} ⊕ D^{(j_1+j_2-1)} ⊕ ... ⊕ D^{(|j_1-j_2|)}) = Σ_{j=|j_1-j_2|}^{j_1+j_2} (2j+1) = (2j_1+1)(2j_2+1).

We have thus solved the problem of stating which irreps appear in the direct sum decomposition of V_1 ⊗ V_2.
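As a check on this result (an aside, not part of the notes), one can count multiplicities of the eigenvalues of \mathbf{S}^2 numerically; the spins j_1 = 3/2, j_2 = 1 below are just an example.

```python
import numpy as np

# Verify D^(j1) (x) D^(j2) = D^(j1+j2) (+) ... (+) D^(|j1-j2|) by counting the
# multiplicity of each eigenvalue j(j+1) of the total Casimir S^2.

def spin_matrices(S):
    n = int(round(2 * S + 1))
    m = S - np.arange(n)
    c = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] - 1))
    Sp = np.diag(c, k=1).astype(complex)
    Sm = Sp.conj().T
    return (Sp + Sm) / 2, (Sp - Sm) / (2j), np.diag(m).astype(complex)

j1, j2 = 1.5, 1.0                                    # example spins
A, B = spin_matrices(j1), spin_matrices(j2)
IA, IB = np.eye(len(A[2])), np.eye(len(B[2]))
tot = [np.kron(a, IB) + np.kron(IA, b) for a, b in zip(A, B)]
Cas = sum(T @ T for T in tot)                        # total Casimir S^2

evals = np.linalg.eigvalsh(Cas)
js = np.arange(abs(j1 - j2), j1 + j2 + 0.5, 1.0)     # |j1-j2|, ..., j1+j2
for j in js:
    mult = np.sum(np.isclose(evals, j * (j + 1)))
    assert mult == int(round(2 * j + 1))             # each D^(j) appears exactly once
assert sum(int(round(2 * j + 1)) for j in js) == len(IA) * len(IB)
print("D^(3/2) (x) D^(1) = D^(5/2) (+) D^(3/2) (+) D^(1/2); dimensions 4*3 = 6+4+2")
```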

We have not said what the matrix M(j_1, m_1, j_2, m_2; j_1, j_2, j, m), which connects them, actually is. Foregoing the analysis, the result is zero unless m = m_1 + m_2, and

M(j_1, m_1, j_2, m_2; j_1, j_2, j, m)
  = \left[ \frac{(2j+1) (j_1+j_2-j)! (j_1-j_2+j)! (-j_1+j_2+j)!}{(j_1+j_2+j+1)!} \right]^{1/2}
    × \left[ (j_1+m_1)! (j_1-m_1)! (j_2+m_2)! (j_2-m_2)! (j+m)! (j-m)! \right]^{1/2}
    × Σ_z \frac{(-1)^z}{z! (j_1+j_2-j-z)! (j_1-m_1-z)! (j_2+m_2-z)! (j-j_2+m_1+z)! (j-j_1-m_2+z)!},

where the sum runs over all integers z for which every factorial argument is nonnegative. Details of this calculation, as well as more symmetric forms of the vector-coupling coefficients, can be found in Edmonds.
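To close (an aside, not in the original notes), here is a direct implementation of the formula above; the helper names fact and cg are only for this sketch, and the two asserted values are the familiar spin-1/2 × spin-1/2 coefficients, both equal to 1/√2.

```python
from math import factorial, sqrt

# Direct implementation of the vector-coupling formula.  All factorial
# arguments are nonnegative integers when the selection rules
# (m = m1 + m2, |j1 - j2| <= j <= j1 + j2) hold.

def fact(x):
    n = int(round(x))
    assert n >= 0 and abs(x - n) < 1e-9
    return factorial(n)

def cg(j1, m1, j2, m2, j, m):
    """M(j1, m1, j2, m2; j1, j2, j, m); zero unless m = m1 + m2."""
    if abs(m1 + m2 - m) > 1e-9:
        return 0.0
    pref = sqrt((2 * j + 1) * fact(j1 + j2 - j) * fact(j1 - j2 + j)
                * fact(-j1 + j2 + j) / fact(j1 + j2 + j + 1))
    pref *= sqrt(fact(j1 + m1) * fact(j1 - m1) * fact(j2 + m2) * fact(j2 - m2)
                 * fact(j + m) * fact(j - m))
    total = 0.0
    for z in range(int(round(j1 + j2 - j)) + 1):
        args = [z, j1 + j2 - j - z, j1 - m1 - z, j2 + m2 - z,
                j - j2 + m1 + z, j - j1 - m2 + z]
        if min(args) < -1e-9:                # z outside the allowed range
            continue
        denom = 1
        for a in args:
            denom *= fact(a)
        total += (-1) ** z / denom
    return pref * total

# Two familiar spin-1/2 x spin-1/2 values: both equal 1/sqrt(2).
assert abs(cg(0.5, 0.5, 0.5, -0.5, 1, 0) - 1 / sqrt(2)) < 1e-12
assert abs(cg(0.5, 0.5, 0.5, -0.5, 0, 0) - 1 / sqrt(2)) < 1e-12
print("Clebsch-Gordan formula reproduces the singlet/triplet coefficients")
```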