Angular Momentum in Quantum Mechanics.


R. C. Johnson

March 10, 2015

1 Brief review of the language and concepts of Quantum Mechanics.

We begin with a review of the basic concepts involved in the quantum mechanical description of physical systems and the notation we will use in the lectures. The notes in this Section are not intended to provide an introductory course on Quantum Mechanics. They assume the reader has had at least a first course at undergraduate physics level that covers some historical background and reviews the experimental evidence that leads to the necessity for the formalism developed here as the most useful description we have of nature at the subatomic level. Some familiarity with matrices, differential equations, complex numbers and vectors is assumed.

The basic concepts of Dirac's vector-space formulation of non-relativistic Quantum Mechanics are described in an informal, non-rigorous way. We believe that this formalism provides the best and most economical framework for understanding the quantum world, in a notation that is of great practicality for the description of many-body systems. We believe that these advantages outweigh any difficulties that may arise because of the abstractions involved. These difficulties are no worse than those involved in mastering the concept of vectors in physical space.

1.1 The structure of theories in physics.

We first show how the way dynamical systems are described in quantum mechanics fits into the same general scheme that we use in classical physics.

1.1.1 Newtonian particle mechanics.

1. Dynamical variables. Particle coordinates and momenta:

   r_1, p_1, r_2, p_2, ....   (1)

2. Definition of a state S at time t. A set of values of r_1(t), p_1(t), r_2(t), p_2(t), ....

3. Dynamical law describing the way the state changes with time. Newton's Law of Motion:

   dp_i(t)/dt = F_i(r_1(t), p_1(t), r_2(t), p_2(t), ...),
   dr_i(t)/dt = p_i(t)/m_i,   i = 1, 2, ....   (2)

1.1.2 The Electromagnetic Field.

1. Dynamical variables. Electric and magnetic fields at all points r:

   E(r, t), B(r, t).   (3)

2. Definition of a state S at time t. A set of values of E(r, t), B(r, t) for all r.

3. Dynamical law describing the way the state changes with time. Maxwell's equations:

   (1/c) ∂E(r, t)/∂t = ∇ × B(r, t) − (4π/c) J(r, t),   ∇·E(r, t) = 4πρ(r, t),
   (1/c) ∂B(r, t)/∂t = −∇ × E(r, t),   ∇·B(r, t) = 0.   (4)

1.1.3 Quantum Mechanics of a spinless point particle.

1. Dynamical variables. The operators corresponding to the particle position coordinate and momentum: r̂, p̂.

2. Definition of a state S at time t. The ket vector |S, t⟩ at time t.

3. Dynamical law describing the way the state changes with time. The Schrödinger equation:

   iħ ∂/∂t |S, t⟩ = Ĥ(r̂, p̂) |S, t⟩,   (5)

where Ĥ(r̂, p̂) is the Hamiltonian operator for the particle being considered. Ĥ(r̂, p̂) is constructed out of the operators r̂ and p̂.
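To make the causal, first-order character of eq.(5) concrete, here is a minimal numerical sketch, not from the notes: a two-level toy system with ħ = 1 and an arbitrary Hermitian matrix standing in for Ĥ, using NumPy.

```python
import numpy as np

# Toy stand-in for eq.(5): a two-level system, hbar = 1, H an arbitrary
# Hermitian matrix (illustrative numbers, not a specific physical system).
hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def evolve(ket0, t):
    """|S,t> = exp(-i H t / hbar) |S,0>, built from the eigenbasis of H."""
    evals, evecs = np.linalg.eigh(H)
    U_t = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T
    return U_t @ ket0

# Causality: the state at t = 0 determines the state at any later time.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket_t = evolve(ket0, 2.7)

# First-order, unitary evolution conserves the norm <S,t|S,t>.
print(abs(np.vdot(ket_t, ket_t).real - 1.0) < 1e-12)   # True
```

Because the equation is first order in time, specifying the ket once fixes it forever; the exponential propagator is just the formal solution of that first-order equation.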

1.1.4 Features of the 3 theories.

1. Note the occurrence of various key constants: mass, electric charge, Planck's constant.

2. In all cases the state S evolves in time according to differential equations that are first order in time derivatives. This means that they are all causal, i.e., given a state S at time t, the state at all other times is determined.

3. A key difference between Quantum Mechanics and the other theories is the very different concept of state and its connection with what we measure in the laboratory. In the world described by Newtonian Mechanics and Electromagnetic Theory there is a one-to-one correspondence between the dynamical variables used in the theory and quantities that are believed to be measurable in the laboratory. In Quantum Mechanics the relationship is many-to-one. One state can correspond to many observed values of the dynamical variables, and the interpretation of the ket vector in terms of probability amplitudes reflects this fact.

In order to be able to use and interpret the Schrödinger equation (5) we clearly must understand the concepts of ket vectors and operators.

1.2 The formalism of quantum mechanics.

1.2.1 Ket vectors.

We are used to describing an electric field in terms of a vector in 3-dimensional space, E. Note that the concept of a vector is a very abstract one. A vector is not a number, or even 3 numbers. In a particular coordinate system E is described by its numerical components E_1, E_2, E_3 along 3 orthogonal directions in space denoted by the vectors of unit length e_1, e_2, e_3. The vector E can then be written

   E = E_1 e_1 + E_2 e_2 + E_3 e_3.   (6)

But if we change these axes, the same vector E is described by 3 different numbers. The vector description of the electric field collects into one concept the infinite number of triplets of numbers that describe the same quantity in all possible axes. Of course, all the triplets are related to each other by a simple formula which defines what it means to be a vector.

The use of vectors implies that the relations between the electric and magnetic fields embodied in Maxwell's equations, eq.(4), are valid for any choice of coordinate axes in space. In a similar way, there are many ways of describing a system in quantum mechanics. All these descriptions are completely valid and related to each other by linear equations that are analogous to the way the components that describe E and B in different coordinate systems are related. Dirac invented a language for writing down quantum mechanics that encapsulates these relationships.

A state in quantum mechanics is described by a vector in an infinite-dimensional complex space. Dirac called these vectors ket vectors and invented a special notation for them: a vertical line, a label and an angle bracket, | ⟩. A

state S is described by a ket |S⟩, labelled so that it is distinguished from a ket describing any other state. Different physical systems correspond to different spaces. Different states of a particular physical system are described by different kets in the same space. Ket vectors in the same space can be added and multiplied by complex numbers to give new kets.

Just like vectors in physical 3-dimensional space, ket vectors are abstract quantities. To make a connection with numbers we need axes in ket space, called basis kets, |u_1⟩, |u_2⟩, .... The infinite dimensionality of Dirac's ket spaces shows up as the need to use bases with an infinite number of basis vectors |u_i⟩. The complexity of the space means that the components of |S⟩ are, in general, complex numbers S_1, S_2, .... The fact that the |u_i⟩ form a basis means that, analogously to eq.(6), we can write

   |S⟩ = S_1|u_1⟩ + S_2|u_2⟩ + ....   (7)

1.2.2 Physical interpretation of ket components.

In quantum mechanics the connection between the state of the system and the results of measurements on the system is contained in the interpretation of the numbers S_i as probability amplitudes. The physical meaning of these amplitudes is particularly clear when the basis is orthonormal. We discuss below what it means to be an orthonormal basis, but leaving the formal definition aside for the moment, |S_i|² is the probability that measurements will give results consistent with the system being in the state described by the ket |u_i⟩. (See Appendix A for revision notes on complex numbers.)

If we describe the same ket vector |S⟩ using a different orthonormal basis, the numbers S_i change to a new set of complex numbers that describe the probability amplitudes for a different set of observables corresponding to the new basis states. If we want to ask a question about the probable results of particular measurements on a physical system, we have to learn how the choice of measurements picks out a particular basis. This link is guaranteed by choosing the |u_i⟩ to be eigenfunctions of one or more Hermitian operators. When we have a particular measurement in mind, "All bases are equal, but some are more equal than others", to misquote Animal Farm. To understand these statements we need to understand the connection between quantities observable in an experiment and operators, and this in turn means we need to understand the concept of an Hermitian operator.

The power of Dirac's formulation in terms of ket vectors is revealed in the form of the Schrödinger equation given in eq.(5), in which there is no explicit reference to a particular basis. All bases have the same weight. To use this formulation in terms of standard mathematical forms, such as differential operators and matrices, we have to learn how to express kets and the Hamiltonian operator Ĥ in different bases.
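The basis independence described above can be illustrated numerically. The sketch below, with illustrative numbers not taken from the notes, checks that a unitary change of basis changes the individual amplitudes S_i but leaves the total probability Σ|S_i|² equal to 1.

```python
import numpy as np

# A ket's components in one orthonormal basis (illustrative numbers).
S = np.array([0.6, 0.8j, 0.0], dtype=complex)

# A second orthonormal basis, generated by an arbitrary unitary matrix T
# (QR decomposition of a random complex matrix yields a unitary Q).
rng = np.random.default_rng(0)
T, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

S_new = T @ S   # the SAME ket, described in the new basis

# The amplitudes change, but the total probability does not.
print(np.sum(np.abs(S)**2))       # 1.0
print(np.sum(np.abs(S_new)**2))   # 1.0 (up to rounding)
```

The individual entries of S_new bear no obvious resemblance to those of S, yet both sets are equally valid descriptions of the same state.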

1.2.3 Operators and observables.

In quantum mechanics every observable is represented by a Hermitian operator. The only possible result of a measurement of an observable is one of the eigenvalues of the corresponding operator. If the system happens to have been prepared in an eigenstate of an observable A corresponding to the operator Â with eigenvalue α_1, then a measurement of A will result in the value α_1 with certainty. We label the corresponding eigenket with its eigenvalue α_1. It satisfies the eigenvalue equation

   Â|α_1⟩ = α_1|α_1⟩.   (8)

1.2.4 Inner product of kets.

The concept of an Hermitian operator arises when we have an inner product defined between kets. This is analogous to the dot product of ordinary vectors. The inner product of two kets is a rule which assigns a complex number to any pair of kets. The inner product associated with the two kets |a⟩ and |b⟩ is written ⟨b|a⟩. To qualify as an inner product, the rule that gives the complex number ⟨b|a⟩ must have the following properties for all |a⟩ and |b⟩.

1.   ⟨b|a⟩ = (⟨a|b⟩)*.   (9)

2. If α is an arbitrary complex number and |d⟩ = α|a⟩, then

   ⟨b|d⟩ = α⟨b|a⟩.   (10)

3. If |a⟩ = |e⟩ + |f⟩ then

   ⟨b|a⟩ = ⟨b|e⟩ + ⟨b|f⟩.   (11)

Property (9) means that the symbol ⟨b|a⟩ is not symmetric in the labels a and b. Interchanging them is the same as complex conjugation. Properties (10) and (11) mean that if |g⟩ = α|e⟩ + β|f⟩ then

   ⟨b|g⟩ = α⟨b|e⟩ + β⟨b|f⟩.   (12)

We say that ⟨b|a⟩ is linear in |a⟩. But (9) then implies that

   ⟨g|a⟩ = α*⟨e|a⟩ + β*⟨f|a⟩.   (13)

We say that ⟨b|a⟩ is antilinear in the ket on the left of the vertical line, |b⟩.

4. Property (9) implies that ⟨a|a⟩ is a real number. We also require that it be positive or zero,

   ⟨a|a⟩ ≥ 0.   (14)

The equality is satisfied only if |a⟩ is the null ket, |0⟩. The latter satisfies |a⟩ = |a⟩ + |0⟩ for all |a⟩.

It also follows from the defining properties that

   ⟨0|a⟩ = ⟨a|0⟩ = ⟨0|0⟩ = 0   (15)

for all |a⟩.

1.2.5 Orthonormal sets of kets.

Two kets |a⟩ and |b⟩ are said to be orthogonal if their inner product vanishes, i.e.,

   ⟨a|b⟩ = 0.   (16)

A set of kets |u_1⟩, |u_2⟩, ... is said to be orthonormal if different kets in the set are orthogonal and each is normalised so that ⟨u_i|u_i⟩ = 1, i.e.,

   ⟨u_i|u_j⟩ = δ_{i,j},   (17)

where δ_{i,j} is the Kronecker delta symbol defined by

   δ_{i,j} = 0, i ≠ j,
           = 1, i = j.   (18)

1.2.6 Inner products in terms of ket components.

Having defined an inner product of kets and found an orthonormal basis, we are now in a position to actually evaluate the inner product of two arbitrary kets |S⟩ and |S′⟩ once we have an expression for them in terms of components and basis kets as in eq.(7):

   |S⟩ = S_1|u_1⟩ + S_2|u_2⟩ + ...,   |S′⟩ = S′_1|u_1⟩ + S′_2|u_2⟩ + ....   (19)

Using the linearity property (12) we have

   ⟨S′|S⟩ = S_1⟨S′|u_1⟩ + S_2⟨S′|u_2⟩ + ....   (20)

But using (13) and (17),

   ⟨S′|u_1⟩ = (S′_1)*⟨u_1|u_1⟩ + (S′_2)*⟨u_2|u_1⟩ + ... = (S′_1)*,   (21)

and

   ⟨S′|u_2⟩ = (S′_1)*⟨u_1|u_2⟩ + (S′_2)*⟨u_2|u_2⟩ + ... = (S′_2)*,   (22)

and in general

   ⟨S′|u_i⟩ = (S′_i)*.   (23)
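A quick numerical check of the component formulae (21)-(23), with toy three-component kets; NumPy's vdot conjugates its first argument, which matches the convention ⟨S′|S⟩ = Σ_i (S′_i)* S_i.

```python
import numpy as np

# Components of two kets |S> and |S'> in the same orthonormal basis
# (illustrative numbers).
S  = np.array([0.3, 1j, 1.0 - 1j])
Sp = np.array([1.0 + 1j, 0.5, -2j])

# Component form of the inner product: <S'|S> = sum_i (S'_i)* S_i
manual  = np.sum(np.conj(Sp) * S)
builtin = np.vdot(Sp, S)          # vdot conjugates its first argument

print(np.isclose(manual, builtin))        # True
# <S|S> is real and non-negative, as property (14) requires:
print(np.vdot(S, S).real >= 0)            # True
```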

Using this result in (20) we have finally

   ⟨S′|S⟩ = (S′_1)* S_1 + (S′_2)* S_2 + ... = Σ_{i=1}^{∞} (S′_i)* S_i.   (24)

Note in particular that if |S′⟩ and |S⟩ are the same state then

   ⟨S|S⟩ = |S_1|² + |S_2|² + ... = Σ_{i=1}^{∞} |S_i|².   (25)

Bearing in mind the interpretation of the components S_i as probability amplitudes, we would like the components S_i to satisfy Σ_{i=1}^{∞} |S_i|² = 1. From eq.(25) we see that this is the same condition as requiring the ket |S⟩ to be normalised so that ⟨S|S⟩ = 1.

In an orthonormal basis, eq.(23) gives us a useful formula for the components of an arbitrary ket in terms of an inner product:

   S_i = ⟨u_i|S⟩.   (26)

Eq.(19) can now be written

   |S⟩ = |u_1⟩⟨u_1|S⟩ + |u_2⟩⟨u_2|S⟩ + ... = Σ_i |u_i⟩⟨u_i|S⟩.   (27)

Note that as an aid to memory we have extended our notation slightly by putting the complex numbers ⟨u_i|S⟩ behind the kets |u_i⟩. By definition this means exactly the same as ⟨u_i|S⟩ |u_i⟩. We shall see below that using this convention gives a simple way to express relations between kets as relations between numbers.

1.2.7 Operators acting on kets.

In general we define an operator O as any unambiguous rule that, when applied to an arbitrary ket, gives another ket. Our notation for operators is to put a hat over a suitably chosen symbol, such as Ô. If according to the rule O its action on |S⟩ is to produce |S′⟩, we write

   |S′⟩ = Ô|S⟩.   (28)

In quantum mechanics we frequently deal with a sub-class of operators called linear operators. They satisfy

   Ô(|a⟩ + |b⟩) = Ô|a⟩ + Ô|b⟩,   Ô(α|a⟩) = α Ô|a⟩,   (29)

where α is an arbitrary complex number. Unless specified otherwise, all operators referred to in these notes will be linear operators.¹

An important example of a linear operator can be found in eq.(27). This equation can be rewritten in a way that makes it look like a trivial identity if we agree to regard a quantity like |a⟩⟨b| as a linear operator defined by the rule

   (|a⟩⟨b|) |c⟩ = |a⟩ (⟨b|c⟩).   (30)

Eq.(27) can now be written

   |S⟩ = Σ_i |u_i⟩⟨u_i|S⟩ = (Σ_i |u_i⟩⟨u_i|) |S⟩.   (31)

This reveals that the relation (27) can be viewed as identifying the operator in round brackets as the unit operator:

   Σ_i |u_i⟩⟨u_i| = 1.   (32)

This formula is often referred to as the completeness relation for basis kets.

1.2.8 Operators, kets and numbers.

If we take the inner product of a ket |b⟩ = Ô|a⟩, defined as in eq.(28), with the ket |c⟩, we write the resulting complex number ⟨c|b⟩ as

   ⟨c|b⟩ = ⟨c|Ô|a⟩.   (33)

We often refer to a number constructed in this way as a matrix element of the operator Ô. The reason for this terminology becomes clearer if we consider the collection of numbers ⟨u_i|Ô|u_j⟩, where i, j run over all the members of a set of basis kets. This matrix plays a crucial role when, for example, we express the relation eq.(28) in terms of numbers. Taking the inner product of both sides of eq.(28) with the basis vector |u_i⟩ we get

   ⟨u_i|S′⟩ = ⟨u_i|Ô|S⟩.   (34)

Using the formula (26) and the linearity of Ô, we find that (34) can be written

   S′_i = Σ_j ⟨u_i|Ô|u_j⟩ S_j.   (35)

¹ The concept of an anti-linear operator occurs when the time-reversal transformation is discussed. Instead of the second of properties (29), an anti-linear operator Ô satisfies Ô(α|a⟩) = α* Ô|a⟩.
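The completeness relation (32) is easy to verify numerically for a finite-dimensional stand-in: summing the outer products |u_i⟩⟨u_i| over any orthonormal basis reproduces the unit matrix. A sketch, using a random unitary matrix to generate an illustrative basis:

```python
import numpy as np

# A 4-dimensional stand-in for a basis {|u_i>}: the columns of any unitary
# matrix form an orthonormal set (random illustrative choice).
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
basis = [U[:, i] for i in range(4)]

# Completeness: sum_i |u_i><u_i| = 1 (the unit matrix, in this representation).
identity = sum(np.outer(u, u.conj()) for u in basis)
print(np.allclose(identity, np.eye(4)))   # True
```

Each np.outer(u, u.conj()) is the matrix of the operator |u_i⟩⟨u_i| defined by rule (30).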

This result has a useful interpretation in terms of standard operations on numbers. We think of the numbers S_1, S_2, ..., and S′_1, S′_2, ..., as two matrices, each with one column and an infinite number of rows. The formula (35) can now be understood as: matrix S′ is the result of multiplying matrix S, according to the rules of matrix multiplication, by the matrix O, where the element of O in the ith row and the jth column is ⟨u_i|Ô|u_j⟩, i.e.,

   S′ = O S.   (36)

In other words, the relation between kets and operators in eq.(28) is mapped precisely onto a relation between the matrices representing |S⟩, |S′⟩ and Ô in eq.(36).

1.2.9 Hermitian operators and Hermitian matrices.

By definition, the Hermitian conjugate of an operator Ô is the operator Ô† that satisfies

   ⟨a|Ô†|b⟩ = (⟨b|Ô|a⟩)*,   (37)

for ALL kets |a⟩ and |b⟩. If Ô† = Ô, i.e., if Ô†|a⟩ = Ô|a⟩ for all |a⟩, we say that Ô is an Hermitian operator. For an Hermitian operator, eq.(37) tells us that

   ⟨a|Ô|b⟩ = (⟨b|Ô|a⟩)*   (38)

for all kets |a⟩ and |b⟩.

PROBLEM. Use the definition (37) to show that

   (ÂB̂)† = B̂†Â†.   (39)

We saw in the last subsection that an arbitrary linear operator Ô is represented in an orthonormal basis by a matrix with components O_{i,j} given by

   O_{i,j} = ⟨u_i|Ô|u_j⟩.   (40)

If Ô is Hermitian, eq.(38) tells us that the matrix elements O_{i,j} must satisfy

   O_{i,j} = (O_{j,i})*.   (41)

In other words, the matrix representing an Hermitian operator is the same as the complex conjugate of the matrix obtained by interchanging its rows and columns. Such a matrix is known as an Hermitian matrix. It is proved in text books that:

(1) The eigenvalues of an Hermitian operator are all real numbers.

(2) Eigenkets corresponding to distinct eigenvalues of an Hermitian operator are orthogonal. We can choose them all to be normalised to unity. The eigenkets then form an orthonormal set.

(3) The set of eigenkets can be chosen so that they satisfy the completeness relation (32).

These properties of Hermitian operators map onto the analogous properties of finite Hermitian matrices, e.g., eq.(41). Note that the orthogonality property in (2) above refers only to distinct eigenvalues. It frequently happens in physical applications that eigenvalues of Hermitian operators are degenerate, i.e., several linearly independent eigenkets can be found all corresponding to the same eigenvalue. If there are, say, N_i of these, they can, by taking appropriate linear combinations, be chosen to form an orthonormal set. All N_i must be included in the summation in the completeness relation.

1.2.10 Eigenket notations.

Collecting our results, let Â be an Hermitian operator corresponding to some observable. Its eigenvalues are the real numbers α_i, in general infinite in number. The corresponding eigenkets are |α_i, n_i⟩ and satisfy

   Â|α_i, n_i⟩ = α_i|α_i, n_i⟩.   (42)

We have labelled the eigenkets corresponding to the ith eigenvalue with two quantities: the value of the eigenvalue itself and a second label, n_i, that distinguishes different members of an orthonormal set of degenerate eigenkets. The way this set is chosen is not unique. In practice, the labels n_i are frequently chosen to be the eigenvalues of one or more operators that can be constructed from the dynamical variables of the system and differ from Â. The number of operators involved depends on the number of degrees of freedom of the system. These operators must satisfy very special conditions before they can be used in this way: they must correspond to compatible observables. What this condition means is discussed below. So far we have used a notation that assumes the spectrum of eigenvalues is a discrete set.
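The textbook properties (1)-(3) can be checked directly for a finite Hermitian matrix. A sketch with a random illustrative matrix:

```python
import numpy as np

# Build a random Hermitian matrix A = (M + M^dagger)/2 (illustrative).
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

evals, evecs = np.linalg.eigh(A)   # eigh is for Hermitian matrices

# (1) eigenvalues are real (eigh returns a real array, and
#     A @ v = lambda v holds with those real lambdas):
print(np.allclose(A @ evecs, evecs * evals))            # True
# (2) the eigenvectors form an orthonormal set:
print(np.allclose(evecs.conj().T @ evecs, np.eye(4)))   # True
# (3) completeness: sum_i |a_i><a_i| = 1:
print(np.allclose(evecs @ evecs.conj().T, np.eye(4)))   # True
```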
In practice many operators of physical significance have a spectrum of eigenvalues which is completely or partially continuous. For example, the Hermitian operator corresponding to the particle position on the x axis, the operator x̂, has a spectrum that includes all real values of x from −∞ to +∞. We label the corresponding eigenkets |x⟩, −∞ < x < +∞. These satisfy

   x̂|x⟩ = x|x⟩.   (43)

The ket |x⟩ describes a state in which a measurement of the particle's position would yield the value x with certainty. (More accurately, the particle position probability density for this state is zero except at x.)

In the formulae of Sections 1.2.5 and 1.2.6 all the Kronecker delta symbols have to be replaced by Dirac delta functions, and the summations over the discrete variable i must be replaced by integrals over x:

   ⟨x′|x⟩ = δ(x′ − x),   (44)
   |S⟩ = ∫_{−∞}^{+∞} dx |x⟩ ψ_S(x),   (45)
   ψ_S(x) = ⟨x|S⟩,   (46)
   ∫_{−∞}^{+∞} dx |x⟩⟨x| = 1,   (47)
   ⟨S′|S⟩ = ∫_{−∞}^{+∞} dx ψ*_{S′}(x) ψ_S(x),   (48)
   ⟨S|S⟩ = ∫_{−∞}^{+∞} dx |ψ_S(x)|².   (49)

In eqs.(45) and (46) we have introduced a special notation, ψ_S(x), for the coefficients in the expansion of |S⟩ in the orthonormal set |x⟩, −∞ < x < +∞:

   ψ_S(x) = ⟨x|S⟩.   (50)

This is the standard ψ notation for what is called the wave function of the state S in introductory courses in quantum mechanics. We see that the wave function ψ_S(x) is revealed as the collection of complex numbers that describe the state |S⟩ in the basis of eigenvectors |x⟩ and is just one of the many ways of describing the ket |S⟩. The x basis is a convenient one for some purposes, but it has no more fundamental significance than any other basis. Note that the interpretation of the wave function as a position probability amplitude is consistent with the general picture given in Section 1.2.2 that interprets the coefficients in the expansion of |S⟩ in a basis as probability amplitudes.

The formulation of quantum mechanics in terms of kets in a complex vector space shows us that the function ψ_S(x) is just one of many ways of representing a physical state as a set of complex numbers. We get a different set every time we use a different basis, but all these sets are equally valid descriptions of |S⟩, and each set separately contains the maximum information we can have about a physical state according to quantum mechanics.

1.2.11 The momentum basis.

As a concrete example of another basis we consider the eigenkets of the Hermitian operator p̂_x corresponding to a particle's momentum in the x direction. We use the letter p, −∞ < p < +∞, to denote the real eigenvalues of p̂_x, so that an eigenket is |p⟩ and satisfies

   p̂_x|p⟩ = p|p⟩.   (51)
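Before developing the momentum basis further, here is a small numerical illustration of eqs.(46) and (49): a normalised Gaussian wave function sampled on a grid, with the integral ∫|ψ_S(x)|² dx approximated by a Riemann sum (the width is illustrative; ħ plays no role here).

```python
import numpy as np

# A normalised Gaussian wave function psi_S(x) = <x|S> on a grid.
sigma = 1.3                                   # illustrative width
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# eq.(49): <S|S> = integral dx |psi_S(x)|^2, here as a Riemann sum.
norm = np.sum(np.abs(psi)**2) * dx
print(abs(norm - 1.0) < 1e-6)                 # True
```

The array psi is literally the set of components of |S⟩ in a (discretised) position basis, in exact analogy with the discrete components S_i of eq.(7).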

A general state |S⟩ can be expanded as

   |S⟩ = ∫_{−∞}^{+∞} dp |p⟩ φ_S(p),   (52)

where

   φ_S(p) = ⟨p|S⟩.   (53)

φ_S(p) might be called the wavefunction of S in the momentum basis, or in momentum space. Both functions φ_S(p) and ψ_S(x) give a complete description of S. We can transform from one function to the other using

   φ_S(p) = ⟨p|S⟩
          = ⟨p| (∫_{−∞}^{+∞} dx |x⟩⟨x|) |S⟩
          = ∫_{−∞}^{+∞} dx ⟨p|x⟩⟨x|S⟩
          = ∫_{−∞}^{+∞} dx ⟨p|x⟩ ψ_S(x).   (54)

This formula gives the value of φ_S(p) for a particular p as an integral over all values of x of ψ_S(x) and the inner product ⟨p|x⟩. The latter is a function of x and p but is independent of S. The complex numbers ⟨p|x⟩ have three possible interpretations:

(1) The matrix ⟨p|x⟩ defines the transformation from the x to the p basis, as in eq.(54). The matrix ⟨x|p⟩ = (⟨p|x⟩)* defines the transformation from the p to the x basis through the inverse formula to eq.(54),

   ψ_S(x) = ∫_{−∞}^{+∞} dp ⟨x|p⟩ φ_S(p).   (55)

(2) As a function of p, ⟨p|x⟩ is the wavefunction in the p basis of a state in which the particle is definitely at the point x (an eigenstate of the operator x̂ with eigenvalue x). See eq.(53).

(3) As a function of x, ⟨x|p⟩ = (⟨p|x⟩)* is the wavefunction in the x basis of a state in which the particle has definite momentum p (an eigenstate of the operator p̂ with eigenvalue p). See eq.(50).

To obtain ⟨x|p⟩, or ⟨p|x⟩, we must input more information about the physical meaning of the momentum and position operators. For present purposes we will simply appeal to our knowledge from introductory quantum mechanics courses that, acting on a wave function ψ_S(x), the momentum operator is (ħ/i) d/dx. According to interpretation (3) above, ⟨x|p⟩ is the wave function at point x of a state corresponding to a definite momentum p. The wave function corresponding to the ket p̂_x|p⟩ should therefore be (ħ/i) d/dx ⟨x|p⟩. On the other hand, p̂_x|p⟩ = p|p⟩, and so

   (ħ/i) d/dx ⟨x|p⟩ = p ⟨x|p⟩.   (56)

This is a simple differential equation for the function of x, ⟨x|p⟩, for a fixed p. The solution is

   ⟨x|p⟩ = (1/√(2πħ)) exp(ipx/ħ),   (57)

where we have chosen the arbitrary constant in front of the exponential so that the normalisation is

   ∫_{−∞}^{+∞} dp ⟨x′|p⟩⟨p|x⟩ = δ(x′ − x).   (58)

When the explicit formula (57) for ⟨x|p⟩ is inserted into the relations (54) and (55), we see that wavefunctions in the x and p bases are Fourier transforms of each other. The Heisenberg Uncertainty relation between position and momentum is then seen to be just the well-known relation between the extents in the relevant coordinates (x and p in this case) of functions that are Fourier transforms of each other (ψ_S(x) and φ_S(p) in this case).

A much more elegant and powerful derivation of the results (56) and (57) can be based on the correspondence between the commutation relation [x̂, p̂_x] = iħ and the Poisson bracket of the same dynamical variables in classical mechanics. This is the derivation given in Dirac's book on Quantum Mechanics. We will see later that the momentum operator is the generator of spatial translations. Momentum plays a similar role in classical mechanics.

The formula (54) displays the general structure of the relation between ket components in different bases. In the case that the two bases are both described in terms of purely discrete labels, this relation is readily interpretable as a relationship between column matrices of the two sets of components and a matrix whose elements are composed of the inner products of the two sets of basis kets. Thus if the initial basis set is |u_i⟩ with components U_i, and the corresponding quantities in the second set are |v_i⟩ and V_i, then the analogue of eq.(54) is

   V_i = Σ_j T_{i,j} U_j,   (59)

where the matrix elements T_{i,j} are given by

   T_{i,j} = ⟨v_i|u_j⟩.   (60)

It is straightforward to show that the matrix T is unitary, T†T = 1. Thus basis changes are governed by a unitary transformation.
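The Fourier-transform relationship just described can be checked numerically. The sketch below (ħ = 1, illustrative Gaussian width) evaluates eq.(54) with the explicit ⟨p|x⟩ of eq.(57) by direct summation, and confirms the minimum-uncertainty product σ_x σ_p = ħ/2 that holds for a Gaussian:

```python
import numpy as np

hbar = 1.0
sigma = 0.7                                    # illustrative width
x = np.linspace(-20.0, 20.0, 2001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# eq.(54) with <p|x> = exp(-i p x / hbar) / sqrt(2 pi hbar), by direct sum.
p = np.linspace(-10.0, 10.0, 801)
dp = p[1] - p[0]
phi = np.array([np.sum(np.exp(-1j * pk * x / hbar) * psi) * dx for pk in p])
phi /= np.sqrt(2 * np.pi * hbar)

# Widths of |psi|^2 and |phi|^2 (both distributions have zero mean here).
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
sigma_p = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)
print(abs(sigma_x * sigma_p - hbar / 2) < 1e-3)   # True: minimum uncertainty
```

Narrowing the Gaussian in x (smaller sigma) broadens phi in p and vice versa, exactly the Fourier reciprocity the text describes.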
It is customary to use the same language when the bases involve both discrete and continuous labels.

1.3 Compatible observables.

It may be possible to create a complete set of kets that are all eigenkets of two Hermitian operators Â and B̂. If we denote a pair of eigenvalues of Â and B̂ by

the real numbers α and β respectively, then a simultaneous eigenket is written |α, β⟩ and satisfies

   Â|α, β⟩ = α|α, β⟩,   B̂|α, β⟩ = β|α, β⟩.   (61)

It is shown in text books that a necessary and sufficient condition for a complete set of simultaneous eigenstates to exist is that Â and B̂ commute, i.e.,

   ÂB̂ = B̂Â.   (62)

In general the commutator of two operators Ĉ and D̂ is an operator defined by

   [Ĉ, D̂] = ĈD̂ − D̂Ĉ.   (63)

Hence the condition (62) is equivalent to the statement that the commutator [Â, B̂] must be the zero operator,

   [Â, B̂] = 0.   (64)

This is a very special condition that is certainly not satisfied by all pairs of operators. Some pairs of operators have a commutator that is a non-zero complex number rather than a more complicated operator, e.g., [x̂, p̂_x] = iħ. In elementary quantum mechanics courses it is explained how the corresponding observables satisfy the Heisenberg Uncertainty Principle. More generally, if a pair of operators have a non-zero commutator, whether a complex number or another operator, we cannot find a complete set of simultaneous eigenkets of these operators. Bearing in mind the basic interpretation, discussed in Section 1.2.3, of the situation that occurs when a state is described by an eigenket, we see that in general we cannot prepare a set of states in which the observables corresponding to two non-commuting operators both have a definite value. This is exactly the situation that occurs with the different components of the angular momentum operator for a particle or a system of particles, Ĵ_x, Ĵ_y, Ĵ_z, no two of which commute.

1.4 Quantum Dynamics. Conservation Laws.

In quantum mechanics the way the state of a system evolves in time is determined by the Schrödinger equation

   iħ ∂/∂t |S, t⟩ = Ĥ|S, t⟩.   (65)

The detailed nature of the Hamiltonian operator Ĥ depends on the physical system being considered.
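Returning to the non-commuting angular momentum components mentioned at the end of Section 1.3, their algebra is easy to exhibit for spin 1/2, where J_k = σ_k/2 in units with ħ = 1 (the standard Pauli-matrix representation, used here ahead of its derivation later in these notes):

```python
import numpy as np

# Pauli matrices; spin-1/2 angular momentum J_k = sigma_k / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = sx / 2, sy / 2, sz / 2

def comm(A, B):
    return A @ B - B @ A

# No two components commute; instead [J_x, J_y] = i J_z (and cyclically).
print(np.allclose(comm(Jx, Jy), 1j * Jz))            # True
# But J^2 commutes with each component, so simultaneous eigenkets of
# J^2 and J_z exist -- the compatible-observable situation of eq.(61).
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.allclose(comm(J2, Jz), np.zeros((2, 2))))   # True
```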
Ĥ is often determined by taking the classical Hamiltonian that is believed to describe the analogous classical system and replacing the dynamical variables by linear operators with commutation relations determined

by the connection between classical Poisson brackets and quantum commutators. It also frequently happens that the system has no classical analogue, for example because experiment has determined that the system must have degrees of freedom with no classical counterpart, so that the structure of the Hamiltonian has to be constructed on the basis of intuition or general principles such as symmetry requirements.

We emphasised in Section 1.1 that the Schrödinger equation is a causal equation. This means that if the ket |S, t⟩ is known at time t, the Schrödinger equation completely determines the ket describing the system at time t + δt, and so on for all time. To see this, we have, for sufficiently small δt,

|S, t + δt⟩ = |S, t⟩ + δt ∂/∂t |S, t⟩ = |S, t⟩ + δt (1/iℏ) Ĥ|S, t⟩. (66)

We see that the Hamiltonian operator plays a fundamental role in determining the way the state ket changes with time.

The Hamiltonian operator has a second fundamental role. Its eigenvalues define the possible energy values the system can have. Its eigenkets |E_i⟩ are states in which the energy of the system has a definite value and satisfy

Ĥ|E_i⟩ = E_i|E_i⟩. (67)

1.4.1 Stationary states with definite energy.

We go through some of the algebra in this sub-section because the derivations use many of the basic properties of kets, linear operators, Hermitian operators and inner products discussed in sub-section 1.2. If the system is prepared in an eigenstate of Ĥ it evolves in time in a very special way. Suppose for example that the system is in state |E_0⟩ at time t = 0. The ket

|S_0, t⟩ = exp(−iE_0t/ℏ)|E_0⟩ (68)

reduces to |E_0⟩ at t = 0 and satisfies the Schrödinger equation exactly for all time. To see this we first evaluate the left-hand side of eq.(65) and obtain

iℏ ∂/∂t exp(−iE_0t/ℏ)|E_0⟩ = iℏ(−iE_0/ℏ) exp(−iE_0t/ℏ)|E_0⟩ = E_0 exp(−iE_0t/ℏ)|E_0⟩.
(69)

The right-hand side of eq.(65) can be evaluated using the fact that Ĥ is a linear operator and hence can be taken through the complex number exp(−iE_0t/ℏ) in

eq.(68) to give

Ĥ exp(−iE_0t/ℏ)|E_0⟩ = exp(−iE_0t/ℏ) Ĥ|E_0⟩ = exp(−iE_0t/ℏ) E_0|E_0⟩ = E_0 exp(−iE_0t/ℏ)|E_0⟩, (70)

which agrees with the evaluation of the left-hand side of (65) we gave in eq.(69). The equality given in eq.(70) also shows that a state prepared in an eigenstate of Ĥ at t = 0 remains an eigenstate of Ĥ with the same eigenvalue for all time. As time progresses the only effect is to multiply the initial state by a complex number of unit magnitude.

Another property of the state (68) is that the expectation value of any time-independent linear operator Ô does not change with time, because

⟨S_0, t|Ô|S_0, t⟩ = exp(+iE_0t/ℏ) exp(−iE_0t/ℏ)⟨E_0|Ô|E_0⟩ = ⟨E_0|Ô|E_0⟩, (71)

which is independent of time. A similar calculation shows that the probability amplitude for observing the state (68) to be in an arbitrary state |S⟩ at time t is ⟨S|S_0, t⟩ = exp(−iE_0t/ℏ)⟨S|E_0⟩, and so the probability is

|⟨S|S_0, t⟩|² = |⟨S|E_0⟩|², (72)

which is also independent of time. In general, solutions of the Schrödinger equation of the form exp(−iE_it/ℏ)|E_i⟩, where |E_i⟩ is an eigenstate of Ĥ with eigenvalue E_i, are known as stationary states.

In eq.(71) we gave the time dependence of the expectation value of an arbitrary linear operator in a stationary state. We now show that the Hamiltonian operator has the special property that its expectation value is independent of time for ANY solution of the Schrödinger equation, not just stationary states. Consider the expectation value ⟨S, t|Ĥ|S, t⟩ where |S, t⟩ is an arbitrary solution of eq.(5). We have

∂/∂t ⟨S, t|Ĥ|S, t⟩ = (∂/∂t ⟨S, t|)Ĥ|S, t⟩ + ⟨S, t|Ĥ(∂/∂t |S, t⟩). (73)

From the Schrödinger equation we deduce that the second term is

⟨S, t|Ĥ(∂/∂t |S, t⟩) = (1/iℏ)⟨S, t|Ĥ(Ĥ|S, t⟩) = (1/iℏ)⟨S, t|Ĥ²|S, t⟩. (74)

The first term on the right-hand side of eq.(73) can be expressed as

(∂/∂t ⟨S, t|)Ĥ|S, t⟩ = [⟨S, t|Ĥ(∂/∂t |S, t⟩)]* = [(1/iℏ)⟨S, t|Ĥ²|S, t⟩]* = −(1/iℏ)⟨S, t|Ĥ²|S, t⟩, (75)

where we have used the definition of an Hermitian operator and its properties. Comparing eqs.(74) and (75) we conclude that

∂/∂t ⟨S, t|Ĥ|S, t⟩ = 0, (76)

because the two terms on the right-hand side of eq.(73) cancel exactly. This is the form in which the conservation of energy shows up in Quantum Mechanics.

We can derive the same result by a different route that is very instructive. The eigenkets of the Hamiltonian form an orthonormal basis. Using them we can write down a general solution of the Schrödinger equation in the form

|S, t⟩ = Σ_i S_i(t)|E_i⟩. (77)

The expansion coefficients S_i(t) are functions of time. According to the general principles given in Section 1.2.1 they are the probability amplitudes for finding the system in a state |E_i⟩ with energy E_i at time t. Following steps very similar to those we used in the verification that the stationary states are special solutions of the Schrödinger equation, we find that the coefficients S_i(t) must depend on time according to

S_i(t) = exp(−iE_it/ℏ) S_i, (78)

where the constants S_i have the meaning of the values of the S_i(t) at t = 0. We immediately deduce that the probabilities |S_i(t)|² are independent of time and equal to their values at t = 0, namely |S_i|². Assuming the state at t = 0 is normalised to unity, so that Σ_i |S_i|² = 1, the state remains normalised for all time, and the expectation value of the energy calculated from the time-independent probability distribution |S_i(t)|² = |S_i|², i = 1, 2, ..., will be Σ_i E_i|S_i|², which is independent of time.
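Both routes to energy conservation can be exercised numerically on a small matrix Hamiltonian. Everything below (the random 4-level Ĥ, the initial ket) is a toy assumption used only to illustrate eqs.(77)-(78) and the constancy of ⟨Ĥ⟩.

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" on a 4-dimensional ket space (toy model).
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
E, Q = np.linalg.eigh(H)              # eigenvalues E_i; columns of Q are the |E_i>

# An arbitrary normalised initial state |S, 0>.
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

S = Q.conj().T @ psi0                 # expansion coefficients S_i at t = 0, eq.(77)
mean_E = np.sum(E * np.abs(S) ** 2)   # sum_i E_i |S_i|^2

def psi(t):
    """General solution: S_i(t) = exp(-i E_i t / hbar) S_i, eqs.(77)-(78)."""
    return Q @ (np.exp(-1j * E * t / hbar) * S)

for t in (0.0, 0.7, 3.2):
    p = psi(t)
    # <S,t|H|S,t> is the same at every t and equals sum_i E_i |S_i|^2.
    print(np.isclose(np.vdot(p, H @ p).real, mean_E))
```

The same loop also confirms that the norm ⟨S, t|S, t⟩ stays equal to 1, since each |S_i(t)|² is constant.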

We have calculated two expressions for the expectation value of the energy that must be equal, and hence

⟨S, t|Ĥ|S, t⟩ = Σ_i E_i|S_i|². (79)

This equality can be checked directly by explicitly inserting the expansion (77) for the ket |S, t⟩ on the left-hand side and using the linearity properties of the inner product and the linearity of Ĥ.

1.4.2 Time dependence of the expectation value of a general operator.

We calculate the time dependence of the expectation value of an arbitrary operator Ô, not necessarily the Hamiltonian. Using steps similar to those used in obtaining the results (74) and (75) we find

∂/∂t ⟨S, t|Ô|S, t⟩ = (1/iℏ)⟨S, t|[Ô, Ĥ]|S, t⟩. (80)

Thus, in general the time derivative of the expectation value of an operator Ô is proportional to the expectation value of the commutator of Ô with the Hamiltonian of the system.

1.4.3 Commutators, conservation laws and symmetries.

We have already referred to the importance of the commutator of two operators in Section 1.3. There it was stated that if the commutator of two operators is zero (we say they commute) we can construct a set of basis states in which both operators have a definite value, namely one of their eigenvalues. This implies that if the operator corresponding to an observable commutes with the Hamiltonian of the system, then states of definite energy exist in which the observable also has a definite value. The result (80) means that for any state the expectation value of the observable will be constant in time. Such observables are called constants of the motion. We will see later how constants of the motion are also linked with symmetries of the system through their commutator with the Hamiltonian.

2 Symmetry Transformations.

This course is primarily about geometrical symmetries of systems described by non-relativistic Quantum Mechanics.
A symmetry transformation of a system is defined to be a transformation that, when applied to any dynamically possible state of the system, leads to another dynamically possible state. An example, in the classical mechanics of point particles, is the interchange of the positions of two identical particles.

In Quantum Mechanics, dynamically possible states are ket vectors that satisfy the Schrödinger equation (5). We repeat it here for convenience:

iℏ ∂/∂t |S, t⟩ = Ĥ(r̂, p̂)|S, t⟩. (81)

We consider a linear operator Û that commutes with the Hamiltonian Ĥ, i.e.,

[Û, Ĥ] = 0, (82)

or equivalently

ÛĤ = ĤÛ. (83)

Operating on both sides of the equality (81) with Û and using its assumed linearity and the property (83) we find

iℏ ∂/∂t (Û|S, t⟩) = Ĥ(r̂, p̂)(Û|S, t⟩). (84)

This result implies that if |S, t⟩ is a dynamically possible state then so is the transformed state |S′, t⟩ = Û|S, t⟩, because it also satisfies the Schrödinger equation 2. We will require that |S, t⟩ and |S′, t⟩ have the same normalisation, so that

⟨S′, t|S′, t⟩ = ⟨S, t|S, t⟩. (85)

This is satisfied if Û is a unitary operator and therefore has the property

Û⁻¹ = Û†. (86)

We see that in Quantum Mechanics symmetry transformations are represented by unitary operators 3.

2.1 The transformation operator for translations.

We first consider a system of a single spinless particle confined to the x-axis. In a certain state S (we ignore the time label temporarily) the particle has a wavefunction in the x-basis, ψ_S(x). We translate the system by the distance a along the x-axis. The new state is S_a and we ask: what is the new wavefunction ψ_{S_a}(x) that describes S_a in the x-basis?

2 Note that if Û were anti-linear the left-hand side of (84) would be replaced by −iℏ ∂/∂t (Û|S, t⟩), which hints at why the time reversal transformation t → (−t) is represented by an anti-linear operator.
3 In the case of the time reversal transformation the corresponding Û is anti-unitary, which means it satisfies (86) but is anti-linear. For anti-linear operators the meaning of Hermitian has to be modified slightly. Instead of (38), anti-linear Hermitian operators satisfy ⟨a|(Ô|b⟩) = ⟨b|(Ô|a⟩).
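A minimal numerical sketch of eqs.(82)-(86): a unitary Û that commutes with a toy Hamiltonian maps solutions of the Schrödinger equation to solutions. The 3-level Ĥ (with a degenerate pair so that a non-trivial commuting Û exists), the mixing angle and the initial ket are all illustrative assumptions.

```python
import numpy as np

hbar = 1.0
H = np.diag([1.0, 1.0, 3.0]).astype(complex)   # toy Hamiltonian with a degenerate pair

# A unitary mixing only the degenerate levels; chosen so that eq.(82) holds.
th = 0.4
U = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]], dtype=complex)

print(np.allclose(U @ H - H @ U, 0))           # eq.(82): [U, H] = 0
print(np.allclose(U.conj().T @ U, np.eye(3)))  # eq.(86): U is unitary

E, Q = np.linalg.eigh(H)

def evolve(p, t):
    """|S, t> = exp(-i H t / hbar)|S, 0>, built from the eigenbasis of H."""
    return Q @ (np.exp(-1j * E * t / hbar) * (Q.conj().T @ p))

psi0 = np.array([0.6, 0.0, 0.8], dtype=complex)
t = 1.7
# Eq.(84): U applied to a solution is again a solution, so evolving U|psi>
# gives the same ket as applying U to the evolved |psi>.
print(np.allclose(evolve(U @ psi0, t), U @ evolve(psi0, t)))
```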

A moment's thought shows that the new wavefunction must satisfy

ψ_{S_a}(x) = ψ_S(x − a), (87)

i.e., the new wavefunction at x must have the same value as the old wavefunction at x − a. For example, if the most probable value of x is x_0 in the state S, then the most probable value of x in the state S_a will be (x_0 + a). The result (87) is what we need to derive the form of the unitary operator Û(a) that gives the relation between |S_a⟩ and |S⟩ in the formula

|S_a⟩ = Û(a)|S⟩. (88)

We identify Û(a) by recognising that the right-hand side of (87) can be expanded using Taylor's theorem,

f(x + a) = f(x) + a df(x)/dx + (a²/2!) d²f(x)/dx² + (a³/3!) d³f(x)/dx³ + ... + (aⁿ/n!) dⁿf(x)/dxⁿ + .... (89)

Applying this general result to eq.(87) gives

ψ_{S_a}(x) = ψ_S(x) + (−a) dψ_S(x)/dx + ... + ((−a)ⁿ/n!) dⁿψ_S(x)/dxⁿ + ...
= ψ_S(x) + (−a)(i/ℏ) p̂_x ψ_S(x) + ((−a)²/2!)(i/ℏ)² (p̂_x)² ψ_S(x) + ... + ((−a)ⁿ/n!)(i/ℏ)ⁿ (p̂_x)ⁿ ψ_S(x) + ..., (90)

where in the last line we have replaced the derivative operators d/dx, d²/dx², ... by powers of the momentum operator p̂_x using the relation

d/dx = (i/ℏ) p̂_x. (91)

Applying Taylor's theorem to the function exp(z) gives

exp(z) = 1 + z + z²/2! + z³/3! + ... + zⁿ/n! + .... (92)

This result can be used to rewrite the last line in eq.(90) as

ψ_{S_a}(x) = exp(−iap̂_x/ℏ) ψ_S(x). (93)

Comparing this formula, which is valid for all values of x, with eq.(88), we deduce that the kets describing S and S_a are related by this formula with

Û(a) = exp(−iap̂_x/ℏ). (94)

The nature of physical space is that translations in the x, y and z directions commute, so it is a simple step to write down the generalisation to the operator that produces a translation by a vector displacement a:

Û(a) = exp(−i a·p̂/ℏ). (95)
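Eqs.(87)-(94) can be checked numerically: on a periodic grid the momentum operator is diagonal in Fourier space, so exp(−iap̂_x/ℏ) becomes multiplication by exp(−iak) there. The grid, the Gaussian test state and all parameter values below are illustrative assumptions, not part of the notes.

```python
import numpy as np

hbar = 1.0
n, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # p_x acts as hbar * k on this grid

psi = np.exp(-(x + 5.0) ** 2)                   # Gaussian test wavefunction psi_S(x)

# U(a) = exp(-i a p_x / hbar), eq.(94), applied in Fourier space.
a = 3.0
psi_a = np.fft.ifft(np.exp(-1j * a * k) * np.fft.fft(psi))

# Eq.(87): the translated wavefunction is the old one evaluated at x - a.
expected = np.exp(-((x - a) + 5.0) ** 2)
print(np.allclose(psi_a.real, expected, atol=1e-8))
print(np.max(np.abs(psi_a.imag)) < 1e-8)
```

The sign convention matters: with exp(−iak) the probability peak moves from x = −5 to x = −2, i.e. forward by a, exactly as eq.(87) demands.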

From this explicit form it is straightforward to show that for an Hermitian momentum operator Û(a) is unitary. The exponential structure of this expression is typical of operators that correspond to symmetry operations. The role played by the momentum operator should be noted. We say that the momentum operator generates translations. It appears in the formula multiplied by the parameter a, which determines the amount of translation.

We showed earlier that if a transformation is a symmetry of a system the transformation operator must commute with the Hamiltonian. In the case of translations this means that Û(a) of eq.(95) must commute with Ĥ for all a. This can only be true if the generator p̂ itself commutes with Ĥ, [p̂, Ĥ] = 0. We see that this in turn implies that momentum is a conserved quantity.

To see what the vanishing of the commutator of an observable with the Hamiltonian implies in a non-trivial case, we consider the Hamiltonian for particles labelled 1 and 2 interacting through a force generated by a potential energy function V(r_1 − r_2). The Hamiltonian operator is

Ĥ_12 = p̂_1²/2m_1 + p̂_2²/2m_2 + V(r̂_1 − r̂_2). (96)

The momentum operators clearly commute with the kinetic energy terms, but they do not individually commute with the potential energy term V and hence they do not commute with Ĥ_12. For example,

[p̂_1, V] = (ℏ/i) ∇_{r_1} V. (97)

Hence, as long as V is not constant everywhere, the momenta of the individual particles are not constants of the motion. However, the sum p̂_1 + p̂_2 does commute with Ĥ if V is a function only of the displacement r̂_1 − r̂_2, as indicated in (96). Because of the latter assumption,

∇_{r_1} V(r̂_1 − r̂_2) = −∇_{r_2} V(r̂_1 − r̂_2), (98)

and hence the terms involving V in [p̂_1 + p̂_2, Ĥ] cancel, and the total momentum of the two particles is conserved. Of course, if V is a constant everywhere the individual particle momentum operators do commute with Ĥ and the individual momenta are conserved.
This reflects the complete absence of forces in the system in this case and corresponds classically to Newton's First Law. Note that our assumption about V means that it is unchanged by the transformation r̂_1 → r̂_1 + a, r̂_2 → r̂_2 + a, which is precisely the meaning of translational invariance in this case. In the classical case the condition (98) is just the condition for the validity of Newton's Third Law: action and reaction are equal and opposite.

In summary, we have seen that there is a strong connection between translational symmetry and the conservation of momentum. This is our first example

of the connection between a symmetry and a conservation law for the operator that generates the corresponding transformation.

3 Transformation operator for rotations.

We turn next to the operator corresponding to rotations. We will find that the generator in this case is an angular momentum operator, and the role played by a is played by an angle and an axis of rotation. But before going further we have to anticipate that rotations in 3 dimensions are more complicated than translations. This is associated with the property of physical space that, in general, rotations about different axes cannot be interchanged. In quantum mechanics this shows up as the non-commutation of the components of the angular momentum operator that generate those rotations.

In Appendix B we set out the details of the notations and conventions we use for describing rotations about an axis in 3 dimensions. When we rotate a vector about an axis its components along a fixed set of 3 orthogonal basis vectors change. This is the active way of looking at rotations. In the passive point of view we ask how the components of a fixed vector change when we rotate the orthogonal basis vectors instead. Both ways of considering a rotation are used, depending on the context. In both cases we are looking at the relation between 2 sets of 3 numbers. It is shown in Appendix B how in both cases these relationships are most simply expressed in terms of 3 × 3 real orthogonal matrices.

It is fundamental to the derivation of these results to appreciate what we mean by a rotation in the 3-dimensional space we use in the non-relativistic description of physical phenomena. When a pair of vectors are both subjected to the same rotation by the same angle about the same axis, their lengths are unchanged and the angle between the 2 vectors is also unchanged.

We consider a basis {a_1, a_2, a_3} of vectors of unit length forming an orthogonal coordinate system.
Each of these basis vectors is subjected to a rotation R(α, n), through an angle α about an axis defined by the unit vector n, resulting in 3 new vectors {A_1, A_2, A_3}, i.e.,

a_i → A_i under R(α, n), i = 1, 2, 3. (99)

It is a consequence of the concept of rotation described in the previous paragraph that the set {A_1, A_2, A_3} also forms an orthogonal basis of unit vectors. It is shown in Appendix B that the same matrix R_ij(α, n) relates the original and transformed vector components in both the active and passive points of view. This matrix can be expressed entirely in terms of the 9 inner products of the basis vectors {a_1, a_2, a_3} and {A_1, A_2, A_3} through the formula

R_ij(α, n) = a_i · A_j. (100)

In the following we will frequently omit the argument (α, n) in R_ij(α, n) if this can be done without ambiguity.

In detail:

(i) Active viewpoint: Vector W is obtained from vector V by a rotation R(α, n). The components of the two vectors in the a_i, i = 1, 2, 3, basis are, respectively, W_i and V_i. They are related by

W_i = Σ_j R_ij V_j. (101)

(ii) Passive viewpoint: The components of a vector V in the a_i, i = 1, 2, 3, basis are V_i. The components of the same vector V in the A_i, i = 1, 2, 3, basis, where a_i → A_i under R(α, n), are V′_i. The 2 sets of numbers V_i and V′_i are related by

V′_i = Σ_j R̃_ij V_j, (102)

where R̃ is the transpose of matrix R, with elements

R̃_ij = R_ji. (103)

It is also shown in Appendix B that R̃ = R⁻¹, so that R̃ describes the rotation inverse to R(α, n). In other words, the matrix involved in the passive viewpoint is the matrix corresponding to the inverse rotation in the active viewpoint.

3.1 Rotating a state.

A ket vector |S⟩ describes a state S of a physical system (we omit the time variable temporarily) that exists in 3-dimensional physical space. If we rotate the state in this space by a rotation R(α, n) we obtain a new state S′ described by a new ket |S′⟩. The two states will be related by an operator R̂(α, n) such that

|S′⟩ = R̂(α, n)|S⟩. (104)

We approach the problem of determining the operator R̂(α, n) by first considering the case of a single spinless particle and the description of S and S′ in terms of the wavefunctions ψ_S(x, y, z) and ψ_{S′}(x, y, z), where (x, y, z) are the coordinates of a general position vector r along the basis vectors a_i, i = 1, 2, 3, i.e.,

r = x a_1 + y a_2 + z a_3. (105)

In terms of the kets |S⟩ and |S′⟩ these are given by

ψ_S(x, y, z) = ⟨x, y, z; a|S⟩,  ψ_{S′}(x, y, z) = ⟨x, y, z; a|S′⟩. (106)

Note that in the 3-dimensional case we have to label the position eigenkets |x, y, z; a⟩ with an extra label a to indicate that the coordinates x, y, z refer to the axes a_i, i = 1, 2, 3.

We can imagine the complex number that the function ψ_S(x, y, z) assigns to the point (x, y, z) as written on a flag fixed to that point. What we mean by rotating the state S by R(α, n) is to move this flag, with the number written on it, to the point (Rx, Ry, Rz) reached from (x, y, z) by the rotation R(α, n) 4. The number now written on the flag at the point (Rx, Ry, Rz) is ψ_S(x, y, z). But this is the number that is to be assigned to (Rx, Ry, Rz) by the wavefunction ψ_{S′} that by definition describes the rotated state S′. Clearly, we must have

ψ_{S′}(Rx, Ry, Rz) = ψ_S(x, y, z), (107)

for all points (x, y, z). Equivalently, we can write this as

ψ_{S′}(x, y, z) = ψ_S(R⁻¹x, R⁻¹y, R⁻¹z), (108)

where (R⁻¹x, R⁻¹y, R⁻¹z) is the point obtained from the point (x, y, z) by the inverse rotation R⁻¹ = R(−α, n). Eq.(107) is the equivalent for a rotation of the relation given in eq.(87) for a translation.

We proceed in an analogous way to find the operator that generates a rotation. We start by considering a rotation about the z-axis (a_3) by an angle α 5. The matrix R_ij(α, a_3) that appears in eq.(107) is given by

( cos α  −sin α  0 )
( sin α   cos α  0 )
(   0       0    1 ). (109)

The inverse rotation R⁻¹ is obtained by replacing α by −α or, equivalently, taking the transpose of the matrix R_ij(α, a_3), and we find

R⁻¹x = cos α x + sin α y,  R⁻¹y = cos α y − sin α x,  R⁻¹z = z. (110)

Using these results the relation (108) can be written

ψ_{S′}(x, y, z) = ψ_S(cos α x + sin α y, cos α y − sin α x, z). (111)

Following the method we used for translations, we want to expand the right-hand side of eq.(111) in powers of α using Taylor's theorem.
4 We use an abbreviated notation in which, e.g., Rx means the x component of the vector obtained by rotating r = x a_1 + y a_2 + z a_3.
5 The angle α is positive if, looking along the positive axis of rotation away from the plane of rotation, the sense of rotation is clockwise.
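As a numerical aside, the matrix (109) and the rule (107)-(111) can be checked directly; the test wavefunction below is an arbitrary smooth function chosen purely for illustration.

```python
import numpy as np

alpha = 0.5
c, s = np.cos(alpha), np.sin(alpha)

# The matrix R(alpha, a3) of eq.(109) and its basic property R-tilde = R^{-1}.
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(np.allclose(R.T @ R, np.eye(3)))    # orthogonal, eq.(103) and below

def psi_S(x, y, z):
    # Arbitrary smooth test wavefunction (illustrative assumption).
    return np.exp(-(x - 1.0) ** 2 - 2.0 * y ** 2 - z ** 2)

def psi_Sprime(x, y, z):
    # Eqs.(108)/(111): the rotated state's wavefunction is the old one
    # evaluated at the inversely rotated point.
    return psi_S(c * x + s * y, c * y - s * x, z)

# Eq.(107): at the rotated point (Rx, Ry, Rz) the new wavefunction takes
# the value the old one had at (x, y, z).
x, y, z = 0.3, -0.7, 1.2
Rx, Ry, Rz = R @ np.array([x, y, z])
print(np.isclose(psi_Sprime(Rx, Ry, Rz), psi_S(x, y, z)))
```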

To obtain the term of first order in α we use

cos α = 1 − α²/2 + ...,  sin α = α − α³/6 + ..., (112)

and neglect all terms of order α² or higher. We obtain

ψ_{S′}(x, y, z) = ψ_S(x + αy, y − αx, z)
= ψ_S(x, y, z) + αy ∂ψ_S(x, y, z)/∂x − αx ∂ψ_S(x, y, z)/∂y + ...
= ψ_S(x, y, z) − (iαx/ℏ) p̂_y ψ_S(x, y, z) + (iαy/ℏ) p̂_x ψ_S(x, y, z) + ...
= ψ_S(x, y, z) − (iα/ℏ)(x p̂_y − y p̂_x) ψ_S(x, y, z) + ..., (113)

where the neglected terms are all of order α² or higher. In the last two lines of eq.(113) we have used the standard expressions for the components of the momentum operator in terms of partial derivatives:

p̂_x = (ℏ/i) ∂/∂x,  p̂_y = (ℏ/i) ∂/∂y,  p̂_z = (ℏ/i) ∂/∂z. (114)

We recognise that the combination (x p̂_y − y p̂_x) is the z-component of the vector r × p̂, which is the classical expression for the orbital angular momentum of the particle, l = r × p. The expression (113) can therefore be written

ψ_{S′}(x, y, z) = (1 − (iα/ℏ) l̂_z + ...) ψ_S(x, y, z), (115)

which can be compared with eq.(90) in the translation case. For an infinitesimal rotation angle δα, when we can ignore quadratic and higher powers, we deduce from (115) that the rotation operator for an infinitesimal rotation about the z-axis is

R̂(δα, a_3) = 1 − (iδα/ℏ) l̂_z. (116)

We can find the operator for a finite rotation by first noting that rotations about a fixed axis satisfy

R̂(α, a_3) R̂(β, a_3) = R̂(α + β, a_3), (117)

for arbitrary finite angles α and β. In particular

R̂(δα, a_3) R̂(α, a_3) = R̂(α + δα, a_3), (118)

and hence, using eq.(116),

R̂(α + δα, a_3) − R̂(α, a_3) = −(iδα/ℏ) l̂_z R̂(α, a_3). (119)

In the limit δα → 0 this tells us that

dR̂(α, a_3)/dα = −(i/ℏ) l̂_z R̂(α, a_3). (120)

The unique solution to this differential equation satisfying R̂(α = 0, a_3) = 1 is the operator

R̂(α, a_3) = exp(−iα l̂_z/ℏ). (121)

We can now immediately write down the rotation operator for a general rotation by an angle α about an arbitrary axis n:

R̂(α, n) = exp(−iα l̂·n/ℏ). (122)

The operator l̂·n means (l̂_x n_x + l̂_y n_y + l̂_z n_z), where n_x, n_y, n_z are the components of n. However, this is not a practical way forward because the angular momentum operators do not commute, and we have to follow a different path.

3.2 Angular momentum conservation.

We saw in Section 1.3 that if the operator that generates a transformation commutes with the Hamiltonian of the system, Ĥ, then the transformation corresponds to a symmetry transformation. Hence if the Hamiltonian commutes with the operator R̂(α, n) the system is invariant under a rotation about the axis n. If this is true for all α then eq.(122) tells us that this can only be the case if the Hamiltonian commutes with the component of the angular momentum operator along n, and there must exist a basis in which the system has a definite energy (an eigenvalue of Ĥ) and a definite angular momentum in the direction n (an eigenvalue of l̂·n). To proceed further we must learn about the eigenvalue spectrum of l̂.

4 Eigenvalue spectrum of the angular momentum operators.

4.1 Eigenfunctions of l̂_z.

Using the techniques of Appendix C we can express the orbital angular momentum operators for a single particle in terms of differential operators with respect to the polar and azimuthal angles θ and φ. The operator l̂_z is particularly simple, l̂_z = (ℏ/i) ∂/∂φ, and leads to a straightforward differential equation for the corresponding eigenvalue problem. The condition that m_l ℏ is an eigenvalue and ψ_{m_l}(θ, φ) an eigenfunction of l̂_z is

(ℏ/i) ∂ψ_{m_l}/∂φ = m_l ℏ ψ_{m_l}. (123)
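A quick numerical check that exp(i m_l φ) solves eq.(123); the sample angle, the finite-difference step and the choice m_l = 2 are arbitrary illustrative values.

```python
import numpy as np

hbar = 1.0
m_l = 2                                       # an example integer eigenvalue label

def psi_m(phi):
    """Candidate eigenfunction exp(i m_l phi) of l_z."""
    return np.exp(1j * m_l * phi)

# Apply l_z = (hbar / i) d/dphi using a central finite difference.
phi, h = 0.8, 1e-6
dpsi = (psi_m(phi + h) - psi_m(phi - h)) / (2 * h)
lhs = (hbar / 1j) * dpsi                      # left-hand side of eq.(123)
rhs = m_l * hbar * psi_m(phi)
print(abs(lhs - rhs) < 1e-6)                  # equal to finite-difference accuracy
```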