Canonical ensembles


So far, we have studied classical and quantum microcanonical ensembles, i.e. ensembles of large numbers of isolated thermodynamic systems that are all in the same (equilibrium) macrostate. We saw that if we're able to find the multiplicity Ω of a macrostate, we immediately find the entropy S = k_B ln Ω, and then we can just go ahead and calculate everything that we like. The problem is that calculating multiplicities can lead to quite difficult (or even impossible) mathematical problems, so we can only do it for a handful of very simple problems. We certainly can't do it for quantum ideal gases, which is what we'd like to study and understand in this course. So we need to find an alternative formulation of statistical mechanics, since this one, although quite beautiful and certainly complete, is too difficult to use because of the math involved.

So what can we change, in order to get a different formulation? Well, what we can change about our system is how it is separated from the outside world: let us see what happens if our model system is not isolated, but closed (it exchanges energy but not particles with the outside, see Fig. below). The outside is called a bath or reservoir, and we picture it as a very, very large system, also in equilibrium and at some temperature T_R that we can set to whatever value we like. By "very, very large" we really mean here that E_R ≫ E. The reason is this: suppose we prepare our system in some initial state, and then we set it in thermal contact with the reservoir. We know that the system will evolve towards equilibrium, and that in the process some energy (heat) will be transferred from the system to the reservoir or vice versa. If E ≪ E_R, this amount of exchanged heat will be negligibly small from the point of view of the reservoir, so the reservoir will remain at the same temperature T_R. That's precisely what we mean by a reservoir: its state remains the same no matter how the state of our system changes.
To fulfill this condition, it must be very big compared to the system.

[Figure: system (E, V, N, T, p, ...) in thermal contact with a reservoir (E_R, V_R, N_R, T_R, p_R, ...)]

Figure: Model of a closed thermodynamic system. The system has some energy E, volume V, etc., and is in thermal contact with a bath or reservoir with energy E_R, volume V_R, etc. The bath is always assumed to be much, much bigger than the system, so that E ≪ E_R, V ≪ V_R, N ≪ N_R. We know that in thermal equilibrium T = T_R.

As already discussed, the macrostate of such a system will now be characterized by its temperature T (which will be the same as that of the outside reservoir, T = T_R), its volume V and number of particles N. (Again, I am using a classical gas as a typical example. We know that for other systems we might change some of these variables; for instance, we might not need the volume if we deal with a crystal.) The key point is that we must use T instead of E to characterize the macrostate. We call an ensemble of very many copies of our closed system, all prepared in the same macrostate

T, V, N, a canonical ensemble. It is classical or quantum, depending on the problem at hand. Let us first study classical canonical ensembles, and then we will see the easy generalization to quantum canonical ensembles.

Classical canonical ensemble

For a classical system, we know that its microstate is characterized by all generalized coordinates (q,p). The second postulate of classical stat mech says that if we can find the density of probability ρ_c(q,p) to find a system in the canonical ensemble in microstate (q,p), then we can calculate any macroscopic property as an ensemble average. For example, the internal energy is just the average of the system's energy:

U = ⟨H⟩ = (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p) H(q,p)

Why? Well, because (G_N / h^{Nf}) ρ_c(q,p) dq dp is, by definition, the probability to have a microstate in between q, q+dq and p, p+dp, and these microstates all have energy H(q,p). If we sum over contributions from all of the microstates, we find the average energy in the system, which is what we mean by internal energy. As before, the Gibbs factor G_N is there to make sure we do not over-count microstates. The factor h^{Nf} is there for convenience, so that the measure G_N dq dp / h^{Nf} is dimensionless. As in the general case, the normalization condition is:

1 = (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p)

Another quantity that we can calculate is the entropy, since we know that

S = -k_B ⟨ln ρ⟩ = -k_B (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p) ln ρ_c(q,p)

Remember that S = -k_B ⟨ln ρ⟩ is always true, so it must also hold when we use the canonical density of probability ρ_c(q,p). Etc.

So we need to figure out ρ_c(q,p). For the microcanonical system, this was the point where we used the 3rd postulate, which said that for an isolated system any allowed microstate is equally likely; from there we concluded that the microcanonical density of probability is ρ_mc(q,p) = const = 1/Ω if E ≤ H(q,p) ≤ E + δE, and zero otherwise. In contrast, our closed system can have absolutely any energy, since in principle it can exchange any amount of heat with the reservoir.
We expect that microstates whose energy is consistent with the system's fixed temperature T are more likely than microstates with very, very different energy... so we know something here; we can't claim full ignorance. In any event, this is not an isolated system. So what can we do?

Well, as is traditional, we reduce the problem to one that we already know how to solve, by noticing that the total system = system + reservoir is isolated, and so we can use microcanonical statistics for it. The microstate of the total system is characterized by the generalized coordinates q, p; q_R, p_R, where the latter describe the microsystems making up the reservoir. Its Hamiltonian is

H_T(q,p; q_R,p_R) = H(q,p) + H_R(q_R,p_R)

This just says that the total energy is the sum of the energies of the components. (Note: you might wonder about adding an extra interaction term, since microsystems inside the system have to somehow interact with those from the outside, if they can exchange energy. In fact, this is not a problem, because one can argue that this interaction, whatever it is, must be proportional to the contact surface between the two systems. As such, for the large thermodynamic systems that we consider, it is much, much smaller than either H or H_R, which are proportional to the volumes.)

We know that the density of probability to find the total system in the microstate (q,p; q_R,p_R) is then:

ρ_mc(q,p; q_R,p_R) = 1/Ω_T(E_T, δE_T, V, V_R, N, N_R) if E_T ≤ H_T(q,p; q_R,p_R) ≤ E_T + δE_T, and 0 otherwise

where Ω_T(E_T, δE_T, V, V_R, N, N_R) is the multiplicity for the total system. This is the same as saying that the total probability to find the system in a microstate between q, q+dq and p, p+dp AND the reservoir in a microstate between q_R, q_R+dq_R and p_R, p_R+dp_R is:

(G_N G_{N_R} / h^{Nf + N_R f_R}) dq dp dq_R dp_R ρ_mc(q,p; q_R,p_R)

(Particles cannot go through the wall, so the Gibbs factor is a product of the two factors: particles inside and outside can never interchange places.) But we don't care what the reservoir is doing; all we want to know is the probability that the system itself is in a microstate between q, q+dq and p, p+dp. To find that, we must simply sum over all the possible reservoir microstates, while keeping the system in the desired microstate. Therefore

ρ_c(q,p) G_N / h^{Nf} = ∫_Res dq_R dp_R (G_N G_{N_R} / h^{Nf + N_R f_R}) ρ_mc(q,p; q_R,p_R)

where the integral is over all the reservoir's degrees of freedom. After simplifying, we have:

ρ_c(q,p) = ∫_Res dq_R dp_R (G_{N_R} / h^{N_R f_R}) ρ_mc(q,p; q_R,p_R) = (1/Ω_T) ∫_{E_T ≤ H(q,p)+H_R(q_R,p_R) ≤ E_T+δE_T} dq_R dp_R G_{N_R} / h^{N_R f_R}

since ρ_mc = 1/Ω_T when this energy condition is satisfied, and zero otherwise. However, we can rewrite the condition as E_T − H(q,p) ≤ H_R(q_R,p_R) ≤ E_T − H(q,p) + δE_T, so that:

ρ_c(q,p) = Ω_R(E_T − H(q,p), δE_T, V_R, N_R) / Ω_T(E_T, δE_T, V, V_R, N, N_R)

since the integral is by definition just the multiplicity for the reservoir to be in a macrostate of energy E_T − H(q,p), with volume V_R and N_R particles.
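This ratio of multiplicities can be checked on a toy solvable reservoir. Below is a small sketch (all numbers are made up for illustration, not from the notes): a reservoir of M two-level units, each with excitation energy ε = 1 and k_B = 1, has Ω_R(n) = C(M,n) ways to hold n quanta. The ratio Ω_R(n−1)/Ω_R(n), i.e. the relative weight of a system microstate that absorbed one quantum, already behaves like a Boltzmann factor e^{−βε} with βε = ln((M−n)/n) read off from S_R = k_B ln Ω_R, anticipating the result derived next.

```python
import math

# Toy reservoir (illustrative numbers): M two-level units, n of them excited,
# each excitation costs eps = 1, in units with k_B = 1. Omega_R(n) = C(M, n).

def ln_binom(M, n):
    # log of the binomial coefficient, via lgamma to avoid huge integers
    return math.lgamma(M + 1) - math.lgamma(n + 1) - math.lgamma(M - n + 1)

M, n = 10**6, 3 * 10**5
# Relative probability of a system microstate that absorbed one quantum:
ratio = math.exp(ln_binom(M, n - 1) - ln_binom(M, n))
# Reservoir temperature from S_R = ln Omega_R: beta*eps = dS_R/dn = ln((M-n)/n)
beta_eps = math.log((M - n) / n)
assert abs(ratio - math.exp(-beta_eps)) < 1e-5
```

The agreement gets better as M grows, which is exactly the E ≪ E_R condition at work.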
Using the link between the entropy of the reservoir and its multiplicity, S_R = k_B ln Ω_R (because the reservoir is so big and insensitive to what the system does, all microcanonical formulae valid for an isolated system apply to the reservoir), we then have:

Ω_R(E_T − H(q,p), δE_T, V_R, N_R) = e^{S_R(E_T − H(q,p), δE_T, V_R, N_R)/k_B} ≈ e^{[S_R(E_T, δE_T, V_R, N_R) − H(q,p) ∂S_R/∂E_R]/k_B}

where I used the fact that the energy of the system H(q,p) is very small compared to the total energy, and used a Taylor expansion. But we know that ∂S_R/∂E_R = 1/T_R = 1/T at equilibrium. We define β = 1/(k_B T), and find the major result:

ρ_c(q,p) = (1/Z) e^{−βH(q,p)}    (1)

where Z is a constant (what we obtain when we collect all terms that do not depend on (q,p)), called the canonical partition function. We can find its value from the normalization condition:

1 = (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p)  ⟹  Z(T,V,N) = (G_N / h^{Nf}) ∫ dq dp e^{−βH(q,p)}    (2)

Note that Z is a function of T (through the β in the exponent), of V (the integrals over positions are restricted to the volume V), and of N, which appears in the number of degrees of freedom and the number of integrals, and possibly in the Gibbs factor.

The result in Eq. (1) should make you very happy: this is what is known as the Boltzmann distribution or Boltzmann probability. I am sure you've been told before that the probability to find a system at temperature T to have energy E is proportional to e^{−βE}; now we've derived this formula from the basic postulates. Moreover, we know when we can apply it: it holds for closed systems only! (We already saw that for a microcanonical ensemble ρ_mc is very different, and you'll have to believe me, for the time being, that the ρ_gc that we'll find for grand-canonical ensembles (open systems) will also be different.)

Now that we know the canonical density of probability, we can calculate the internal energy (see discussion at the beginning):

U = ⟨H(q,p)⟩ = (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p) H(q,p) = (1/Z) (G_N / h^{Nf}) ∫ dq dp H(q,p) e^{−βH(q,p)}

However, this can be simplified with a trick we've already discussed a few times, by rewriting H e^{−βH} = −(∂/∂β) e^{−βH}, so that:

U = −(1/Z) (∂/∂β) (G_N / h^{Nf}) ∫ dq dp e^{−βH(q,p)} = −(1/Z) ∂Z/∂β

since the integral is exactly the partition function (see Eq. (2)). So as soon as we know the canonical partition function Z(T,V,N), we can immediately find the internal energy as:

U = −(1/Z) ∂Z/∂β = −∂ln Z/∂β    (3)

How about the entropy? Using the expression of ρ_c in the general Boltzmann formula we discussed in the beginning, we find that:

S = −k_B (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p) ln ρ_c(q,p) = −k_B (G_N / h^{Nf}) ∫ dq dp (1/Z) e^{−βH(q,p)} [−ln Z − βH(q,p)]

S = k_B ln Z + k_B β U

since the first integral is related to the normalization condition, while the second is just the average of H.
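The relation U = −∂ln Z/∂β is easy to test numerically. Here is a minimal sketch (the three-level spectrum is my own toy choice, not from the notes): the β-derivative of ln Z, taken by a central finite difference, matches the ensemble average ⟨H⟩ computed directly.

```python
import math

# Hypothetical three-level spectrum (arbitrary energy units, k_B = 1)
energies = [0.0, 1.0, 2.5]

def lnZ(beta):
    return math.log(sum(math.exp(-beta * e) for e in energies))

def avg_H(beta):
    # direct ensemble average <H> = sum_a E_a exp(-beta*E_a) / Z
    Z = sum(math.exp(-beta * e) for e in energies)
    return sum(e * math.exp(-beta * e) for e in energies) / Z

beta, d = 1.3, 1e-6
U = -(lnZ(beta + d) - lnZ(beta - d)) / (2 * d)  # -d(lnZ)/d(beta), central difference
assert abs(U - avg_H(beta)) < 1e-8
```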
From this we find that the free energy is:

F(T,V,N) = U − TS = −k_B T ln Z(T,V,N)    (4)

I hope you are now really impressed with how beautifully everything holds together. If you remember, when we reviewed thermodynamics, we decided that for a closed system, if we managed to calculate its free energy F(T,V,N), then we would use

dF = −S dT − p dV + µ dN    (5)

to find S, p and µ as its partial derivatives; anything else can be obtained by further derivatives.
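The consistency of F = −k_B T ln Z with the differential dF can also be checked numerically. A minimal sketch (toy spectrum of my own choosing, units with k_B = 1): differentiating F(T) = −T ln Z numerically reproduces S = k_B ln Z + U/T, the entropy we derived above.

```python
import math

# Hypothetical discrete spectrum, units with k_B = 1
energies = [0.0, 0.4, 1.1, 2.0]

def lnZ(T):
    return math.log(sum(math.exp(-e / T) for e in energies))

def U(T):
    Z = sum(math.exp(-e / T) for e in energies)
    return sum(e * math.exp(-e / T) for e in energies) / Z

def F(T):
    return -T * lnZ(T)   # F = -k_B T ln Z

T, d = 1.5, 1e-6
S_from_F = -(F(T + d) - F(T - d)) / (2 * d)   # S = -(dF/dT)_{V,N}
assert abs(S_from_F - (lnZ(T) + U(T) / T)) < 1e-6
```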

So let us summarize how we solve a classical canonical ensemble problem. As always, we begin by identifying the number of degrees of freedom f per microsystem, all needed generalized coordinates q, p that fully characterize a microstate, and the Hamiltonian of the system H(q,p). Then we calculate the partition function from Eq. (2). Once we have it, we know that F = −k_B T ln Z. Once we have F, we can find

S = −(∂F/∂T)_{V,N};  p = −(∂F/∂V)_{T,N};  µ = (∂F/∂N)_{T,V}

The internal energy comes from Eq. (3); that's the simplest way to get it, from the statistical average. Of course, we could also use the fact that U = F + TS, and that we already know F and S; the result will be the same. From U we can calculate C_V and whatever else we might like.

For example, we might also want to find the standard deviation of the energy (we know that the average is U, but how big are the fluctuations about this average?). For this, we use the definition of any statistical average to find:

⟨H²⟩ = (G_N / h^{Nf}) ∫ dq dp ρ_c(q,p) [H(q,p)]² = (1/Z) (G_N / h^{Nf}) ∫ dq dp [H(q,p)]² e^{−βH(q,p)} = (1/Z) ∂²Z/∂β²

where I used again the same trick. (If you learn this trick, it'll make your lives a lot easier. We can do the calculation by brute force as well, see example below, but it's better to avoid that if you can.) Then, since U = ⟨H⟩ = −(1/Z) ∂Z/∂β, we find the standard deviation of the energy to be:

σ_E² = ⟨H²⟩ − ⟨H⟩² = (1/Z) ∂²Z/∂β² − [(1/Z) ∂Z/∂β]² = (∂/∂β)[(1/Z) ∂Z/∂β] = −∂U/∂β = k_B T² ∂U/∂T = k_B T² C_V    (6)

Note that this must be true for any system whatsoever, since we made no special assumption about what the Hamiltonian looks like, or anything like that; this is always valid. In fact, this last equation is an example of the kind of results one obtains from the so-called fluctuation-dissipation theorem, a very powerful theorem. If you continue on to graduate school in physics, you'll learn to recognize and love this theorem, but we will drop this topic for now.
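Since the fluctuation result holds for any system, it can be checked on any toy model. A minimal sketch (hypothetical four-level spectrum, units with k_B = 1): the variance ⟨H²⟩ − ⟨H⟩² computed directly agrees with −∂U/∂β, i.e. with k_B T² C_V.

```python
import math

# Hypothetical discrete spectrum, units with k_B = 1
energies = [0.0, 0.7, 1.9, 3.2]

def moments(beta):
    # returns (<H>, <H^2>) in the canonical ensemble
    w = [math.exp(-beta * e) for e in energies]
    Z = sum(w)
    U = sum(e * wi for e, wi in zip(energies, w)) / Z
    H2 = sum(e * e * wi for e, wi in zip(energies, w)) / Z
    return U, H2

beta, d = 0.8, 1e-6
U, H2 = moments(beta)
variance = H2 - U * U
dU_dbeta = (moments(beta + d)[0] - moments(beta - d)[0]) / (2 * d)
assert abs(variance + dU_dbeta) < 1e-6   # sigma_E^2 = -dU/dbeta = k_B T^2 C_V
```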
Before looking at some examples, let us generalize this discussion to:

Quantum canonical ensembles

As we already know, the main difference between a classical and a quantum system is how we characterize the microstates. For a classical system, we use q, p, which are continuous variables. So if we want to sum over all microstates (for example, in order to calculate an ensemble average) we actually have to do many integrals over the whole phase space. In contrast, microstates of a quantum system are its eigenstates, and are characterized by the appropriate quantum numbers for the particular problem. Their energies are always discrete, so if we want to sum over all microstates (for example, in order to calculate an ensemble average) in this case we really have to do a sum over all the eigenstates. To be more specific, let us assume we have a quantum system described by a Hamiltonian Ĥ (an operator), and let E_α be its eigenenergies, where α is one or more quantum numbers (however many are required to fully identify the eigenstate for the problem at hand). What is the canonical probability to find the system in a given microstate α? Well, here we should repeat all the arguments we used for the classical system: the total system is isolated, so we can use microcanonical ensemble

formalism for it, after which we can sum over all the microstates of the reservoir, since we don't care what the reservoir is doing... to find, at the end of the day, that the probability to find the closed quantum system in a microstate α is simply:

p_α = (1/Z) e^{−βE_α}    (7)

This is the direct analog of the classical expression, which had e^{−βH(q,p)}. If you look again at how we derived that, you'll see that nowhere did we need to worry whether the energy of the system in that microstate (which is H(q,p) for a classical system, respectively E_α for a quantum system) is continuous or discrete. This is why the quantum and classical cases give similar-looking results. Again we call Z the canonical partition function, and again we calculate it from the normalization condition, which is now:

1 = Σ_α p_α  ⟹  Z(T,V,N) = Σ_α e^{−βE_α}    (8)

Note: here we need to be very careful. The sum is over all the microstates, not over the energy levels. If an eigenstate is degenerate (i.e., there are several different microstates all with the same energy, E_β let's say) then the total contribution from that energy is g_β e^{−βE_β}, where g_β is the degeneracy of that level: each microstate of energy E_β contributes an e^{−βE_β} to the sum.

Now that we have the probability, we can calculate ensemble averages. For example:

U = ⟨H⟩ = Σ_α p_α E_α

since when in microstate α, the system has energy E_α. But we can now do exactly the same trick again:

U = (1/Z) Σ_α E_α e^{−βE_α} = −(1/Z) ∂Z/∂β = −∂ln Z/∂β

Similarly,

S = −k_B ⟨ln p⟩ = −k_B Σ_α p_α ln p_α = ... = k_B ln Z + U/T

if you go through precisely the same kinds of steps we did for classical systems. The only difference is that the integrals over microstates for the classical case go here into sums over quantum numbers; the rest is the same. So from here we find again that F(T,V,N) = −k_B T ln Z(T,V,N), and then all the rest, with the partial derivatives to find S, p, µ and U and C_V, goes precisely the same. Even σ_E² = k_B T² C_V remains true as well (check!).
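The caution about summing over microstates rather than energy levels can be illustrated with a tiny sketch (the level scheme is made up): listing every microstate individually gives the same Z as weighting each level by its degeneracy g.

```python
import math

# Hypothetical (E, g) pairs: energy levels with degeneracies
levels = [(0.0, 1), (1.0, 3), (2.0, 5)]
beta = 0.9

# Sum over microstates: list each level's state g times
microstates = [E for E, g in levels for _ in range(g)]
Z_states = sum(math.exp(-beta * E) for E in microstates)
# Sum over levels, weighted by degeneracy
Z_levels = sum(g * math.exp(-beta * E) for E, g in levels)
assert abs(Z_states - Z_levels) < 1e-12
```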
So the only difference is at the step where we calculate the partition function Z: we do an integral over the whole phase space for a classical system, whereas for a quantum system we do a sum over all possible eigenstates. Let's see some examples.

.1 Classical ideal gas

As usual, assume N identical simple atoms, with f = 3 degrees of freedom each, inside a volume V and kept at a temperature T. The microstate is characterized by r_1, ..., r_N, p_1, ..., p_N. Because there are no interactions, the Hamiltonian inside the box is simply

H = Σ_{i=1}^N p_i²/(2m)

Then, by definition (see Eq. (2)):

Z = (1/(N! h^{3N})) ∫ d³r_1 ... d³r_N d³p_1 ... d³p_N e^{−β Σ_{i=1}^N p_i²/(2m)}

since the Gibbs factor is N!. Each ∫ d³r_i = V, since each particle can only be located inside the box. We still have to do 3N integrals over all 3N components of the N momenta. Since p_i² = p_{i,x}² + p_{i,y}² + p_{i,z}² and d³p_i = dp_{i,x} dp_{i,y} dp_{i,z}, note that the remaining integrals factorize into a product of simple gaussian integrals:

Z = (V^N/(N! h^{3N})) [∫ dp_{1,x} e^{−β p_{1,x}²/(2m)}] ... [∫ dp_{N,z} e^{−β p_{N,z}²/(2m)}]

So we have 3N perfectly identical integrals, each of which is equal to √(2πm/β) = √(2πm k_B T), and so

Z(T,V,N) = (V^N/N!) (2πm k_B T/h²)^{3N/2}

Then:

F = −k_B T ln Z = −N k_B T ln[V (2πm k_B T/h²)^{3/2}] + k_B T ln N!

After using Stirling's formula, ln N! ≈ N ln N − N, we can group terms together to find:

F(T,V,N) = −k_B T ln Z = −N k_B T ln[(V/N) (2πm k_B T/h²)^{3/2}] − N k_B T

At this point you should stop and verify that (i) this indeed has units of energy, and (ii) this is indeed an extensive quantity. We now take partial derivatives and find:

S = −(∂F/∂T)_{V,N} = N k_B ln[(V/N) (2πm k_B T/h²)^{3/2}] + (5/2) N k_B

p = −(∂F/∂V)_{T,N} = N k_B T/V

µ = (∂F/∂N)_{T,V} = −k_B T ln[(V/N) (2πm k_B T/h²)^{3/2}]

If we want U, we can either use U = F + TS, or better, we can use:

U = −(∂/∂β) ln Z = −(∂/∂β)[−(3N/2) ln β + ...] = (3N/2) N/β... wait, = (3N/2) k_B T

where "..." stands for terms that do not depend on T or β, so they do not contribute to the derivative. Of course, we could also calculate U as an ensemble average:

U = ⟨H⟩ = (1/(N! h^{3N})) ∫ d³r_1 ... d³r_N d³p_1 ... d³p_N (1/Z) e^{−β Σ_{i=1}^N p_i²/(2m)} [Σ_{i=1}^N p_i²/(2m)]

This can be done! There are now 3N different terms (there are 3N contributions to the total energy from the 3N momentum components) and each multiple integral is doable. In fact, they all give the same result, and I might force you to go through this calculation once in the next assignment, just so you see how much you have to work if you don't learn the nice tricks and instead do things by brute force. ⟨H²⟩ can also be calculated by brute force, but it has (3N)² multiple integrals... so the trick with the derivative is really useful, and we will use it in future calculations as well. Learn it!

Some comments now: (1) I hope you agree that calculating Z is a lot simpler than calculating Ω was. In fact, we'll soon learn another trick, called the factorization theorem, that makes things even simpler. In any case, we'll never have to deal with hyperspheres again, only simple gaussian integrals; (2) we obviously got the right results: you should compare this with what we found for the classical microcanonical ensemble, and convince yourself that these and those relations are perfectly equivalent. But this agreement should puzzle you, in fact. Is it obvious that we should get the same relationships between the macroscopic variables whether the system is isolated or closed? These are very different conditions! We'll see in a bit that the reason we get the same results is because these are thermodynamic (big) systems. If they weren't, the results might be very different indeed.

.2 Chain of classical 1D harmonic oscillators

In this case f = 2 and

H = Σ_{i=1}^N [p_{x,i}²/(2m) + mω²u_i²/2]

(see the discussion for the microcanonical ensemble for notation, if needed). Then, since G_N = 1 in this case, using the definition of the partition function we have:

Z = (1/h^N) ∫ du_1 ... du_N dp_{x,1} ... dp_{x,N} e^{−β Σ_{i=1}^N [p_{x,i}²/(2m) + mω²u_i²/2]}

The exponential again factorizes into simple gaussian integrals. Each spatial integral equals √(2π/(βmω²)) and each momentum integral equals √(2πm/β), so that:

Z = (1/h^N) [√(2π/(βmω²))]^N [√(2πm/β)]^N = (k_B T/(ħω))^N

Then,

F = −k_B T ln Z = −N k_B T ln(k_B T/(ħω))

and we find:

U = −(∂/∂β) ln Z = ... = N/β  ⟹  U = N k_B T

etc. (You should check that again we get all the results in agreement with the ones we had for microcanonical ensembles. Again, you should wonder why that is so.)

Before looking at some new systems (problems we could not treat with the microcanonical ensemble formalism, but which we will be able to solve easily with the canonical ensemble formalism), let us notice that in all the cases above we had to do sets of N identical integrals. This allows us to make the following simplification:

.3 Factorization theorem (works only for non-interacting systems!)

For non-interacting systems, the total Hamiltonian is the sum of the Hamiltonians of the individual microsystems (e.g., the atoms that make up the gas). The Hamiltonian of each microsystem depends only on the generalized coordinates and momenta of that particular microsystem, let's call them q_i, p_i, so:

H = Σ_{i=1}^N h_i(q_i, p_i)

Using this in the definition of the partition function, we find that for non-interacting systems we can rewrite:

Z(T,V,N) = [z(T,V)]^N / G_N,  where  z(T,V) = (1/h^f) ∫ dq dp e^{−βh(q,p)}

and the integrals are only over the coordinates/momenta associated with a single microsystem, not with all N of them. If you think about it, z(T,V) = Z(T,V,N = 1) is just the partition function for a system with a single microsystem inside. What these formulae show is that we only need to do one set of integrals over the coordinates and momenta of a single particle (f integrals). All particles are identical, so the contributions from the integrals of the other particles have to be equal to these. Once we have Z, we use F = −k_B T ln Z, etc.

For example, for the chain of classical harmonic oscillators, we have f = 2, so:

z(T,V) = (1/h) ∫ du dp e^{−β[p²/(2m) + mω²u²/2]} = (1/h) √(2π/(βmω²)) √(2πm/β) = k_B T/(ħω)

and since G_N = 1, we find the same Z as before. So we only need to do f integrals, not Nf. We'll see that there is an analogous factorization theorem for quantum canonical ensembles; we'll wait with that until we look at some quantum examples. But first, let us study a system we could not investigate within the microcanonical ensemble, because we could not calculate the multiplicity:

.4 N classical non-interacting spins (aka paramagnetic spins)

Remember that we have studied N non-interacting spins-1/2. This will be the classical version of that problem, where we assume that the spins are simple (classical) vectors that can point in any direction (so, no quantization that only allows certain spin projections).
More precisely, we assume that each atom has a magnetic moment m, which has a known magnitude m, but is free to point in any direction in space. If we assume the atoms fixed in a lattice, then the only degrees of freedom per atom are the two angles θ, φ that describe the orientation of its magnetic moment. Of course, these come with two angular momenta p_θ, p_φ that characterize how fast these angles change in time. So f = 2. Now we need the Hamiltonian. As we've already discussed in the assignment, the rotational kinetic energy for a rotating object with angular momentum l is:

l²/(2I) = (1/2I) [p_θ² + p_φ²/sin²θ]

where I is the moment of inertia and we use p_θ and p_φ as names for the projections of the angular momentum along the corresponding directions. The potential energy (coming from the interaction with an external magnetic field, which we assume to be oriented along the z-axis) is:

−m·B = −mB cosθ

(see figure). So the total Hamiltonian of one classical spin is:

h = (1/2I) [p_θ² + p_φ²/sin²θ] − mB cosθ

[Fig.: Spherical coordinates θ, φ characterizing the orientation of the magnetic moment m of an atom.]

Because the spins are non-interacting, the total Hamiltonian is just the sum of the individual Hamiltonians, and we can use the factorization theorem. Since the volume is fixed (atoms locked in a crystal), there is no dependence on V. Therefore:

z(T) = (1/h²) ∫₀^π dθ ∫₀^{2π} dφ ∫ dp_θ ∫ dp_φ e^{−β[(1/2I)(p_θ² + p_φ²/sin²θ) − mB cosθ]}

Of these 4 integrals, the one over φ is trivial and gives 2π. The integral over p_θ is just a gaussian, and gives √(2πI/β). The integral over p_φ is also a gaussian, and gives √(2πI sin²θ/β) = sinθ √(2πI/β). So we are left with:

z(T) = (4π²I/(βh²)) ∫₀^π dθ sinθ e^{βmB cosθ}

Using the new variable u = cosθ, du = −sinθ dθ, we have θ = 0 → u = 1 and θ = π → u = −1, so:

z(T) = (4π²I/(βh²)) ∫_{−1}^{1} du e^{βmBu} = (4π²I/(βh²)) · 2 sinh(βmB)/(βmB)

z(T) = (8π²I(k_B T)²/(mB h²)) sinh(mB/(k_B T))

You must admit that these integrals were rather trivial (especially compared with the microcanonical version, which you should now try and see how far you can take). Since the spins are locked in a lattice, G_N = 1, so Z(T,N) = [z(T)]^N, and:

F(T,N) = −N k_B T ln z(T) = −N k_B T ln[(8π²I(k_B T)²/(mB h²)) sinh(mB/(k_B T))]

We can calculate the internal energy:

U = −(∂/∂β) ln Z = −N (∂/∂β) ln z = 2N k_B T − NmB coth(mB/(k_B T))

and then the specific heat, entropy, etc. More interesting in this case is to calculate the magnetic properties of this system; in particular, we would like to know the average magnetization:

M = Σ_{i=1}^N ⟨m_i⟩ = (G_N/h^{Nf}) ∫ dq dp ρ_c(q,p) Σ_{i=1}^N m_i

where, of course, in this particular case:

(G_N/h^{Nf}) ∫ dq dp = (1/h^{2N}) ∫ dθ_1...dθ_N dφ_1...dφ_N dp_{θ,1}...dp_{θ,N} dp_{φ,1}...dp_{φ,N}

and, by definition,

ρ_c(q,p) = (1/Z) e^{−βH(q,p)} = (1/z^N) e^{−β Σ_{i=1}^N [(1/2I)(p_{θ,i}² + p_{φ,i}²/sin²θ_i) − mB cosθ_i]}

Of course, we could jump into doing the integrals, but let us think about this for a second. It should be apparent that ⟨m_1⟩ = ⟨m_2⟩ = ... = ⟨m_N⟩, since the spins are identical and placed in identical conditions (same magnetic field), so they should all have the same average magnetization. This tells us that it's enough to find the value of one of them; then M = N⟨m_1⟩, for example. Now, consider:

⟨m_1⟩ = (1/h^{2N}) ∫ dθ_1...dθ_N dφ_1...dφ_N dp_{θ,1}...dp_{θ,N} dp_{φ,1}...dp_{φ,N} (1/z^N) e^{−β Σ_{i=1}^N [(1/2I)(p_{θ,i}² + p_{φ,i}²/sin²θ_i) − mB cosθ_i]} m_1

Since m_1 = m(sinθ_1 cosφ_1, sinθ_1 sinφ_1, cosθ_1), the quantity we're averaging only depends on the angles of the first spin, θ_1, φ_1; the integrals over all coordinates and momenta with i > 1 are the same as when we calculated Z. In fact, the 4 integrals over the momenta and coordinates of each of the spins 2, 3, ..., N will each just give a factor z, so that in the end we are left with:

⟨m_1⟩ = (1/h²) ∫₀^π dθ_1 ∫₀^{2π} dφ_1 ∫ dp_{θ,1} ∫ dp_{φ,1} (1/z) e^{−β[(1/2I)(p_{θ,1}² + p_{φ,1}²/sin²θ_1) − mB cosθ_1]} m_1

If we stop to think about it, this formula is very reasonable. Since we're calculating an average, everything multiplying the averaged quantity (i.e., m_1) must be the corresponding density of probability to find spin 1 pointing in the direction θ_1, φ_1 and with angular momenta p_{θ,1}, p_{φ,1}. The formula above says that this density of probability is z⁻¹ e^{−βh_1}, where h_1 is the energy of the spin in short notation. We could infer this result directly, without doing the integrals over the other spins' angles and momenta: since the spins do not interact with one another, they behave independently, and the total probability ρ_c must be the product of the probabilities for each spin to do its own thing. Since ρ_c is a product of N terms z⁻¹ e^{−βh_i}, one for each spin i = 1, ..., N, each of these terms must be the probability for the corresponding spin to be in its corresponding microstate.
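This product structure is easy to verify on a toy model. A minimal sketch (discrete, made-up single-microsystem energies; distinguishable microsystems, so G_N = 1): enumerating all two-particle microstates reproduces Z_2 = z·z, which is exactly the statement that the joint probability factorizes into z⁻¹e^{−βh_1} · z⁻¹e^{−βh_2}.

```python
import math

# Hypothetical single-microsystem energies (non-interacting, distinguishable)
eps = [0.0, 0.5, 1.3]
beta = 1.1

z = sum(math.exp(-beta * e) for e in eps)           # single-microsystem z
# Two-particle microstates: every pair of single-particle states
Z2 = sum(math.exp(-beta * (e1 + e2)) for e1 in eps for e2 in eps)
assert abs(Z2 - z * z) < 1e-12
```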
Another way to think about this is that, in the absence of interactions, the spin of interest would behave just the same if it were the only spin in the system: for $N=1$, $\rho_c = \frac{1}{z}e^{-\beta h_1}$. The bottom line is that for non-interacting microsystems, the probability for one of them to be in a given microstate is $\frac{1}{z}e^{-\beta h}$, where $h$ is the energy of the microsystem in that microstate and $z$ is the corresponding normalization factor, which indeed equals the partition function when there is a single particle in the system. For interacting systems this is not true, however; there we must start from the full $\rho_c$ and integrate over all the possible states of all the other microsystems. Coming back to our expectation value, we can now do the angular-momenta integrals, since $\vec{m}_1 = m(\sin\theta_1\cos\phi_1,\sin\theta_1\sin\phi_1,\cos\theta_1)$ does not depend on them. As already discussed, those integrals are simple Gaussians and will cancel some of the factors in $z$. Using the expression for $z$ and simplifying those factors, we find:
$$ \langle\vec{m}_1\rangle = \frac{\beta mB}{4\pi\sinh(\beta mB)}\int_0^{2\pi}d\phi_1\int_0^{\pi}d\theta_1\,\sin\theta_1\,e^{\beta mB\cos\theta_1}\;\vec{m}_1 $$

Let's start calculating the averages of the individual components:
$$ \langle m_{1,x}\rangle = \frac{\beta mB}{4\pi\sinh(\beta mB)}\int_0^{2\pi}d\phi_1\int_0^{\pi}d\theta_1\,\sin\theta_1\,e^{\beta mB\cos\theta_1}\,m\sin\theta_1\cos\phi_1 = 0 $$
because $\int_0^{2\pi}d\phi_1\cos\phi_1 = 0$. Similarly, we find $\langle m_{1,y}\rangle = 0$. This is expected, since there is nothing favoring one direction in the $xy$ plane over the others. As we just said, the $xy$ projection of the spin can point with equal likelihood in any direction, so the average in the $xy$ plane must be 0. This is a consequence of the symmetry of the problem, and as far as solutions at the exam are concerned, I am perfectly satisfied if you answer a problem like this by saying $\langle M_x\rangle = \langle M_y\rangle = 0$ because of symmetry. However, $\langle m_{1,z}\rangle \neq 0$: the spin is more likely to point up than down, so the average should be some positive value. Indeed:
$$ \langle m_{1,z}\rangle = \frac{\beta mB}{4\pi\sinh(\beta mB)}\int_0^{2\pi}d\phi_1\int_0^{\pi}d\theta_1\,\sin\theta_1\,e^{\beta mB\cos\theta_1}\,m\cos\theta_1 = m\,\frac{\beta mB}{2\sinh(\beta mB)}\int_{-1}^{1}du\,u\,e^{\beta mBu} $$
where we did the $\phi_1$ integral and changed to the new variable $u = \cos\theta_1$. Here we can either integrate by parts, or observe that our favorite trick holds: $u\,e^{\beta mBu} = \frac{1}{\beta m}\frac{\partial}{\partial B}e^{\beta mBu}$, and we have already done the integral of the exponential alone. Either way, we obtain:
$$ \langle m_{1,z}\rangle = m\left[\coth(\beta mB) - \frac{1}{\beta mB}\right] = m\,L(\beta mB) $$
where the function $L(x) = \coth(x) - \frac{1}{x}$ is called the Langevin function, and looks as shown in the figure below. So the total magnetization of the system is:
$$ \langle\vec{M}\rangle = \hat{e}_z\,Nm\,L(\beta mB) $$
Now, before analyzing the meaning of this, let me show you the smart way of solving this problem. It is quite similar to the trick for calculating $U = \langle H\rangle$ by taking derivatives of $Z$, instead of doing the integrals. In the derivation above we did the integrals, which is fine, but takes time. Here is the short, trick-based solution. First, as I said, we notice that we must have $\langle M_x\rangle = \langle M_y\rangle = 0$ because of the symmetry of the problem: all $xy$ in-plane directions are perfectly equivalent to each other. Then, by definition:
$$ \langle M_z\rangle = \int \frac{G_N\,d\Gamma}{h^{Nf}}\,\frac{1}{Z}\,e^{-\beta H}\,M_z $$
Next, we notice that $H = K.E. - B\sum_{i=1}^{N}m_{i,z} = K.E. - BM_z$. So we can use the trick, since:
$$ M_z\,e^{-\beta(K.E. - BM_z)} = \frac{1}{\beta}\frac{\partial}{\partial B}\,e^{-\beta(K.E. - BM_z)} $$
(note that the kinetic energy does not depend on the magnetic field $B$, so the derivative with respect to $B$ gives the right answer). Therefore:
$$ \langle M_z\rangle = \frac{1}{\beta Z}\int \frac{G_N\,d\Gamma}{h^{Nf}}\,\frac{\partial}{\partial B}e^{-\beta H} = \frac{1}{\beta Z}\frac{\partial Z}{\partial B} = k_BT\frac{\partial}{\partial B}\ln Z = Nk_BT\frac{\partial}{\partial B}\ln z $$
Calculating this derivative, we find the answer $\langle M_z\rangle = NmL(\beta mB)$. I hope you agree that this was a lot easier, and that it is worth paying attention: if you are asked for the average of some quantity

which is somehow part of the Hamiltonian, the trick can be used. You should now be able to calculate $\langle M_z^2\rangle$ quite easily as well, and then the standard deviation of the magnetization.

The Langevin function $L(x) = \coth(x) - \frac{1}{x}$ is plotted in Fig. 3. It asymptotically goes to the value 1, since $1/x \to 0$ when $x\to\infty$, while $\coth(x)\to 1$. For small values of $x$, using Taylor expansions, you should be able to verify that $L(x) = \frac{x}{3} + \dots$

Fig 3. The Langevin function $L(x)$. At small $x$, $L(x)\approx x/3$, while for $x\to\infty$, $L(x)\to 1$.

For us, the argument is $x = \frac{mB}{k_BT}$. Large $x$ means $k_BT \ll mB$, i.e. large magnetic fields and low temperatures. In this limit, we find $L(x)\to 1$ and so:
$$ \langle M_z\rangle \to Nm $$
showing that at low temperature, $k_BT \ll mB$, the spins go to the lowest-energy state, which consists of all of them pointing in the positive $z$-direction. This makes sense. At high temperatures, $k_BT \gg mB$, we have $x = mB/(k_BT) \ll 1$, and therefore:
$$ \langle M_z\rangle \approx Nm\,\frac{x}{3} = \frac{Nm^2B}{3k_BT} \to 0 $$
This also makes sense. At high temperatures the spins have lots and lots of (kinetic) energy, so they rotate fast through all possible orientations, and the magnetization becomes smaller and smaller on average. Experimentalists can measure the magnetization and have confirmed this behavior. In fact, they prefer to work with the so-called magnetic susceptibility:
$$ \chi = \frac{\partial\langle M_z\rangle}{\partial B} $$
which measures how the average magnetization changes with the applied magnetic field (while keeping the temperature and number of spins constant).

Fig 4. Magnetic susceptibility of paramagnetic spins: $\chi\sim 1/T$ at high temperatures (Curie law).

Using the exact expression of $\langle M_z\rangle$ in terms of the Langevin function and taking the derivative with respect to $B$, we find:
$$ \chi = Nm^2\beta\left[\frac{1}{(\beta mB)^2} - \frac{1}{\sinh^2(\beta mB)}\right] $$
The shape of this is plotted in Fig. 4. At low temperatures, $k_BT \ll mB$, one finds that $\chi \to 0$. This is expected, since in this limit we had $\langle M_z\rangle = Nm = $ const. At high temperatures, $k_BT \gg mB$, we find:
$$ \chi = \frac{Nm^2}{3k_BT} $$
(here $\langle M_z\rangle = \frac{Nm^2B}{3k_BT}$, see above). This is well known as the Curie law.
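These limits are easy to confirm numerically. The sketch below (my own check, with arbitrary parameter values) evaluates the $u$-integral for $\langle m_{1,z}\rangle/m$ by the trapezoid rule and compares it with $L(x)$, then verifies the small-$x$ and large-$x$ behavior and the Curie limit of $\chi/(Nm^2\beta)$:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x."""
    return 1.0 / math.tanh(x) - 1.0 / x

def mz_numeric(x, steps=100_000):
    """<m_z>/m = x/(2 sinh x) * integral_{-1}^{1} u e^{xu} du, trapezoid rule."""
    du = 2.0 / steps
    total = 0.0
    for k in range(steps + 1):
        u = -1.0 + k * du
        w = 0.5 if k in (0, steps) else 1.0
        total += w * u * math.exp(x * u)
    return x / (2.0 * math.sinh(x)) * total * du

x = 1.7                                   # arbitrary value of beta*m*B
assert abs(mz_numeric(x) - langevin(x)) < 1e-6

# Limits quoted in the text:
assert abs(langevin(1e-4) - 1e-4 / 3.0) < 1e-9    # L(x) ~ x/3 at small x
assert langevin(100.0) > 0.98                     # L(x) -> 1 at large x

# Curie law: chi/(N m^2 beta) = 1/x^2 - 1/sinh^2(x) -> 1/3 for x << 1.
def chi_over_Nm2beta(x):
    return 1.0 / x**2 - 1.0 / math.sinh(x) ** 2

assert abs(chi_over_Nm2beta(1e-3) - 1.0 / 3.0) < 1e-4
```

Note that $L(x)$ approaches 1 only like $1 - 1/x$, which is why saturation of the magnetization requires quite large $mB/k_BT$.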
In fact, one of the first things one does when one has a new material to investigate is to measure its magnetic susceptibility. If it decreases like $1/T$ at high $T$, one knows for sure that there are some non-interacting (paramagnetic) magnetic impurities in that sample. We will see soon that the high-$T$ results agree with what the quantum theory predicts (as they should).

Let me show you one more neat thing. If you look at the expression for the internal energy, you can easily show that we can rewrite it as:
$$ U = Nk_BT - \langle M_z\rangle B $$
However, we know that $U = \langle H\rangle = \langle K.E.\rangle - B\langle M_z\rangle$, so it follows that for this particular kinetic energy, $\langle K.E.\rangle = Nk_BT$. Since the average must be the same for each spin (there is no reason why one would rotate slower or faster on average), it follows that the average kinetic energy per spin must be $k_BT$. You might have noticed that we obtained similar results in previous cases. For example, for a simple ideal classical gas, the average kinetic energy per atom was $\frac{3}{2}k_BT$. The difference is that the kinetic energy of a spin is the sum of two quadratic terms (one proportional to $p_\theta^2$, one to $p_\phi^2$), while the kinetic energy of a moving atom has three quadratic terms (one proportional to $p_x^2$, one to $p_y^2$ and one to $p_z^2$). So we may guess that the average of each quadratic term in the energy (per microsystem) is $k_BT/2$. This guess also agrees with what we found for 1D classical harmonic oscillators: there, there are two quadratic terms in the energy of each particle (one proportional to $p_x^2$, one to $u^2$), and indeed we found the average energy per particle to be $2\cdot k_BT/2 = k_BT$. One can demonstrate that for classical systems this is indeed true: the average expectation value of any term in the Hamiltonian that is quadratic in a generalized momentum or a generalized coordinate is always $k_BT/2$ (per microsystem). If the Hamiltonian of a microsystem is the sum of $g$ quadratic terms, and there are no interactions between microsystems, then we have $U = Ngk_BT/2$. This is called the equipartition theorem, and you will have the pleasant task of proving it yourselves in the next assignment.
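The $k_BT/2$-per-quadratic-term claim can be checked directly for a single term $a\,p^2$: its canonical average under the Boltzmann weight $e^{-\beta a p^2}$ should be $1/(2\beta)$, independent of the stiffness $a$. This is a sketch, not the general proof you are asked to do in the assignment; `a` and `beta` are arbitrary values.

```python
import math

def avg_quadratic(a, beta, pmax=20.0, steps=100_000):
    """Canonical average <a p^2> under weight exp(-beta a p^2), trapezoid rule."""
    dp = 2.0 * pmax / steps
    num = den = 0.0
    for k in range(steps + 1):
        p = -pmax + k * dp
        w = math.exp(-beta * a * p * p)
        num += a * p * p * w
        den += w
    return num / den

a, beta = 0.8, 1.6
# Equipartition: each quadratic term contributes k_B T / 2 = 1/(2 beta).
assert abs(avg_quadratic(a, beta) - 1.0 / (2.0 * beta)) < 1e-6
# The result does not depend on the coefficient of the quadratic term:
assert abs(avg_quadratic(5.0 * a, beta) - 1.0 / (2.0 * beta)) < 1e-6
```

The independence from `a` is the key point: only the quadratic form of the term matters, not its coefficient, which is why the spin's two kinetic terms each give $k_BT/2$ while its $\cos\theta$ potential term does not.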
What the example of the paramagnetic spins shows is that if we have terms which do not depend quadratically on some momentum or coordinate (like the potential energy here, which is proportional to $\cos\theta$, not $\theta^2$), then we can get very different expectation values for those terms (in this case, something proportional to the Langevin function). Before looking at some quantum examples, let us discuss why the microcanonical and the canonical predictions are identical. It is not obvious that this should be so, since one might expect the relationships between the macroscopic variables to depend on whether the system is isolated from the rest of the universe or not.

3 Fluctuations

The main difference between an isolated and a closed system has to do with their energies. For an isolated system, we know that the system is only allowed to have energies in a narrow interval $[E_{mc}, E_{mc}+\delta E]$. If we plot the density of probability to find the isolated system with some energy $E$, it therefore looks as shown in Fig. 5: it is zero everywhere except in the allowed interval, where it is a constant (any allowed microstate is equally likely). Note that I use $E_{mc}$ to denote the allowed value of the energy of the isolated system, since $E$ can be any energy.

Fig 5. Density of probability $p_{mc}(E)$ to find an isolated system with energy $E$: constant between $E_{mc}$ and $E_{mc}+\delta E$, zero elsewhere.

On the other hand, since a closed system can exchange energy with the outside, it follows that it

could have any energy whatsoever. Let's find the probability $p_c(E)\delta E$ to find a closed system with energy between some values $E$ and $E+\delta E$. We know the probability $\rho_c(q,p)\frac{G_N\,d\Gamma}{h^{Nf}}$ to find the closed system in a microstate in the vicinity of $(q,p)$. So the desired answer must be:
$$ p_c(E)\delta E = \int_{E\le H(q,p)\le E+\delta E}\frac{G_N\,d\Gamma}{h^{Nf}}\,\rho_c(q,p) $$
i.e. we keep contributions only from the microstates which have the desired energy, and sum over the probabilities to be in these microstates. But $\rho_c(q,p) = \frac{1}{Z}e^{-\beta H(q,p)}$, and for all the microstates contributing to the integral $H(q,p) = E$, so it follows that:
$$ p_c(E)\delta E = \frac{1}{Z}e^{-\beta E}\int_{E\le H(q,p)\le E+\delta E}\frac{G_N\,d\Gamma}{h^{Nf}} = \frac{1}{Z}e^{-\beta E}\,\Omega(E,\delta E,N,\dots) $$
since the phase-space integral is just the multiplicity of the macrostate $E,\delta E,N,\dots$ It follows that:
$$ p_c(E) = \frac{1}{Z}e^{-\beta E}g(E), \qquad g(E) = \frac{\Omega(E,\delta E,N,\dots)}{\delta E} $$
where $g(E)$ is called the density of states, because it is the number of microstates within an energy interval $\delta E$, divided by $\delta E$. We call such quantities densities (for example, the particle density is the number of particles in a certain volume, divided by the volume; it is the same here, except we count the number of states within a certain energy interval, divided by that energy interval). We would like to plot this probability $p_c(E)$ and see how different it looks from the microcanonical one. First we need to figure out how $\Omega(E,\delta E,N,\dots)$ depends on the energy $E$. If you look at all the examples we have investigated, every single time we found that $\Omega(E,\delta E,N,\dots)\sim E^{xN}\delta E$, where $x$ is some number, e.g. $x = 3/2$ for classical ideal gases, $x = 1$ for 1D classical harmonic oscillators, etc. (strictly speaking, we found $E^{xN-1}$, but we then always used $xN-1 \approx xN$). It follows that $p_c(E)\sim E^{xN}e^{-\beta E}$, i.e. it is the product of a function that increases fast with $E$ and one that decreases fast with $E$ (see Fig. 6). We then expect it to have a maximum at some finite value, let's call it $E_c$. So far, this looks quite different from $p_{mc}(E)$.
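A quick numerical look at the shape of $p_c(E)\sim E^{xN}e^{-\beta E}$ (a sketch with arbitrary $x$, $N$, $\beta$; not part of the notes): scanning $\ln p_c$ on a grid locates the peak at $E_c = xN\,k_BT$, and since this is a Gamma-type density its relative width shrinks like $1/\sqrt{N}$, anticipating the result derived next.

```python
import math

x, beta = 1.5, 2.0          # x = 3/2 as for the classical ideal gas

def log_pc(E, N):
    """ln p_c(E) up to an E-independent constant: xN ln E - beta E."""
    return x * N * math.log(E) - beta * E

def peak(N):
    """Locate the maximum of p_c(E) by a brute-force grid scan."""
    grid = [0.01 * k for k in range(1, 40001)]    # E in (0, 400]
    return max(grid, key=lambda E: log_pc(E, N))

# Peak location: E_c = xN / beta = xN k_B T  (here 1.5 * 100 / 2 = 75).
assert abs(peak(100) - x * 100 / beta) < 0.02

# E^{xN} e^{-beta E} is a Gamma density with shape xN+1, so its
# std/mean = 1/sqrt(xN+1): the peak sharpens as N grows.
assert 1.0 / math.sqrt(x * 10**6 + 1) < 1e-3
```

For thermodynamic $N\sim 10^{23}$ the relative width is of order $10^{-12}$, which is why the peak in Fig. 6 is drawn as essentially a spike.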
However, let us try to be more precise and locate where the maximum $E_c$ is, as well as what the width of this peak is. The maximum comes from asking that:
$$ \frac{dp_c(E)}{dE} = 0 = \frac{1}{Z\delta E}\left[-\beta\Omega + \frac{\partial\Omega}{\partial E}\right]e^{-\beta E} \;\Rightarrow\; \beta = \frac{1}{\Omega}\frac{\partial\Omega}{\partial E} \;\Rightarrow\; \frac{1}{T} = k_B\left.\frac{\partial\ln\Omega}{\partial E}\right|_{E_c} = \left.\frac{\partial S}{\partial E}\right|_{E_c} $$

Fig 6. Density of probability $p_c(E)\propto E^{xN}e^{-\beta E}$ to find a closed system with energy $E$; the sharp peak is at $E_c$.

We found that the maximum is at the value $E_c$ where $\frac{1}{T} = \frac{\partial S}{\partial E}\big|_{E_c}$. However, for the isolated system we know that $\frac{1}{T_{mc}} = \frac{\partial S}{\partial E}\big|_{E_{mc}}$; that is how we find the temperature of an isolated system. Comparing the two, it follows that if conditions are arranged such that $T = T_{mc}$, i.e. the closed system is kept at a temperature equal to that of the isolated system, then $E_c = E_{mc}$, i.e. the peak in $p_c(E)$ is at

the same value where $p_{mc}(E)$ is finite. So at least the maxima of these probabilities have the same location, if the temperatures are equal. How about the width? Well, we have already shown that the standard deviation of the energy is always:
$$ \delta E^* = \sqrt{\langle H^2\rangle - \langle H\rangle^2} = \sqrt{k_BT^2C_V}, \qquad U = \langle H\rangle $$
However, both $U$ and $C_V$ are extensive quantities, $U\sim N$, $C_V\sim N$ (e.g., for a classical ideal gas of simple atoms we had $U = 3Nk_BT/2$, $C_V = 3Nk_B/2$), so it follows that the relative width of the peak is always:
$$ \frac{\delta E^*}{U} \sim \frac{1}{\sqrt{N}} \to 0 $$
for thermodynamic systems with very large $N$, such as we consider. So in fact $p_c(E)$ also has a very narrow and sharp peak at $E_c = E_{mc}$, just like the microcanonical probability $p_{mc}(E)$. This explains why both ensembles give the same predictions. However, note that this only holds for large $N$. As we discussed when looking at the statistical meaning of the entropy, for such large systems the most probable state (of energy $E_c$, in this case) becomes so overwhelmingly more likely than any other state that the probability to find the system in any other state is virtually zero. So even though the closed system can in principle have any energy, in fact its energy will stay put at its average value $U$, and the fluctuations about this value are extremely small, $\delta E^*/U\sim 1/\sqrt{N}\to 0$. As a result, this looks very similar to (and will behave the same as) an isolated system, where the energy is fixed at a desired value $E_{mc}$. However, for small systems the fluctuations in the energy of a closed system can be substantial, and then it does make a difference whether the system is closed or isolated. The good news for us is that for large, thermodynamic systems, we can use whichever ensemble is most convenient (easiest calculation) and get the same results (we will see later that grand-canonical ensembles also give the same relations between the macroscopic variables, and for the same reasons: in principle, for those ensembles the number of particles also varies and can be anything.
But in reality, one can show that with overwhelming probability the actual value is fixed at the most probable value and the fluctuations around it are very small, so the actual conditions are very similar to those of closed or isolated systems). There is one more interesting property that holds for both quantum and classical canonical systems. Let me mention it here briefly, without demonstration (see the textbook for it).

4 Minimization principle

Remember that for an isolated system in equilibrium, the entropy (which is the important thermodynamic potential in this case, from which we can derive everything else) has a maximum. Well, interestingly enough, one can show that for a closed system in equilibrium, the free energy (which is the important thermodynamic potential in this case, from which we can derive everything else) has a minimum. As I said, I will not prove this, but the interesting thing is that equilibrium corresponds to an extremum of the appropriate thermodynamic potential. In fact, from this we can infer quite a lot. Remember that $F = U - TS$, and that $S$ and $T$ are always positive quantities. Now, in equilibrium this quantity is minimized, as I just said. At low $T$, the way to minimize it is to make $U$ as small as possible (if $T$ is small, we expect the term $TS$ to be less important than the term $U$). Since $U = \langle H\rangle$, minimizing it means going towards the ground state (the state of minimum possible energy). Indeed, in all the cases we studied we found that

at low-t, the equilibrium state looks more and more like the ground-state of the system. Usually this state is non-degenerate (for a quantum system), so it has multiplicity Ω = and so S = k B lnω = 0. We call such a state ordered, because we know precisely what each microsystem is doing. For example, for spins, each spin is in the state with maximum projection all the time, which is a very orderly state if you think about it. However, at high-t, the term TS becomes important and we can minimize F by maximizing S (in that case, subtracting TS from U will give the smallest possible value for F). But maximizing S means going to a macrostate with the largest possible multiplicity, i.e. the most disordered state possible, where by disorder we mean that many choices are available to each microsystem and they go through all of them, giving the large multiplicity. For example, for spins, at high-t we saw that they point with some probability in any direction, and the larger T is, the more likely all directions become (the average was going towards zero). So if we take a snapshot of the spins at high-t, at any moment they ll be pointing every which way and changing their orientations from snapshot to snapshot... which is a very disordered state. Based on these general ideas, we can now understand why at low T matter goes into a crystal (solid phase) and as we raise the temperature it has transitions to liquid and then gas phases: the solid is the most orderly of them, since each atom is pinned in some position (it can oscillate about it, but it will be at all times in the expected neighborhood). A liquid is more disorderly, since we don t know anymore where each atom is; however the average distance between neighboring atoms is not that different from what it was in a solid, so there is still some remnant of order. 
In a gas, however, any atom can be anywhere, and two neighboring atoms can be anywhere from in direct contact (when they collide) to extremely large distances apart, so this is an extremely disordered state. Another example is that of interacting ferromagnetic spins (we will not discuss this problem in this course, but I assume you may have heard of these ideas). In this case, at low $T$ the system is ferromagnetic, with all spins aligned with each other. Again, that is a very orderly state, but now it comes about because of the interactions between the spins (there is no externally applied field). At high temperatures, we expect a transition to a disordered state, i.e. one where each spin points every which way; this is called a paramagnetic state. We can use statistical mechanics very well to study this transition; the only complication is that now we have to deal with interactions, and in this course we focus only on non-interacting systems. If you take a grad-level course on statistical mechanics, it is guaranteed that this example will be one of the very first interacting systems you study. Many other general trends of evolution with $T$ can be understood based on this general idea of going from the most ordered to the most disordered possible state as $T$ increases. If you follow graduate studies in physics, you will hear a lot more about this. We are not quite done with classical canonical systems: a bit later on I will show you, for fun, how we deal with a weakly-interacting classical gas, and what the difference from the ideal classical gas is. That will give you a taste of what real calculations with interactions added in are like. However, let us first quickly consider some quantum non-interacting canonical ensembles, just so we see how these sorts of problems work as well.

5 Quantum canonical ensembles

Let me first say that we will only be able to deal with quantum problems where the microsystems are distinguishable by position, i.e.
they are locked in a crystal and cannot exchange positions (e.g., quantum harmonic oscillators, quantum spins, etc). To treat quantum gases, where microsystems

can interchange their locations, we will need to use grand-canonical ensembles. Note that we can still treat mixed problems, where some degrees of freedom are treated as quantum while the overall motion is treated as classical (for example, a gas of atoms with classical translation but with quantum spins; stay tuned for the assignments). Such approaches make sense if we are at temperatures such that the kinetic translational energy can be treated as classical, but the spin degree of freedom (for example) is still quantum. But we still won't be able to treat quantum translational motion in this approach. Let's get started. We know that here the microstates correspond to eigenstates of the total Hamiltonian. For non-interacting systems, these eigenstates are characterized by a set of quantum numbers $\alpha_1,\dots,\alpha_N$, where $\alpha_1$ are the quantum numbers characterizing the eigenenergy of the first microsystem, etc. For example, for a chain of quantum harmonic oscillators, the microstate is characterized by the non-negative integers $n_1,\dots,n_N$, with the total energy being:
$$ E_{n_1,\dots,n_N} = \sum_{i=1}^{N}\hbar\omega\left(n_i + \frac{1}{2}\right) $$
Similarly, for a chain of spins-$\frac{1}{2}$, the quantum state of each spin is characterized by the quantum number $s_z = \pm\frac{1}{2}$, so that the energy of a spin is $e_{s_z} = -g\mu_BBs_z$ (see the previous set of notes). As a result, the microstate of the entire system is characterized by $s_{z,1},\dots,s_{z,N}$, and $E_{s_{z,1},\dots,s_{z,N}} = -\sum_{i=1}^{N}g\mu_BBs_{z,i}$. Etc. Let me go back to calling the quantum numbers (one or more) per microsystem $\alpha$, so that the microstate is described by $\alpha_1,\dots,\alpha_N$ and has the total energy $E_{\alpha_1,\dots,\alpha_N}$.
Then, as discussed, the probability to be in the microstate $\alpha_1,\dots,\alpha_N$ is:
$$ p_{\alpha_1,\dots,\alpha_N} = \frac{1}{Z}e^{-\beta E_{\alpha_1,\dots,\alpha_N}} $$
where the canonical partition function is obtained from the normalization condition $\sum_{\alpha_1,\dots,\alpha_N}p_{\alpha_1,\dots,\alpha_N} = 1$:
$$ Z = \sum_{\alpha_1,\dots,\alpha_N}e^{-\beta E_{\alpha_1,\dots,\alpha_N}} $$
Note that here we also have a factorization theorem, which holds for non-interacting systems for which $E_{\alpha_1,\dots,\alpha_N} = \sum_{i=1}^{N}e_{\alpha_i}$, where $e_{\alpha_i}$ is the contribution of the $i$-th microsystem. In this case, we can rewrite:
$$ Z = \left(\sum_{\alpha_1}e^{-\beta e_{\alpha_1}}\right)\cdots\left(\sum_{\alpha_N}e^{-\beta e_{\alpha_N}}\right) = z^N $$
where $z = \sum_{\alpha}e^{-\beta e_{\alpha}}$ is the single-particle canonical partition function. The sum is over all possible values of the quantum number(s) $\alpha$ of a single particle. Once we have $z$ and $Z = z^N$, we use $F = -k_BT\ln Z = -Nk_BT\ln z$ and we are on our way. Let's first redo the examples we solved using microcanonical ensembles. First, for spins $\frac{1}{2}$ (or two-level systems) we have:
$$ z = \sum_{s_z=\pm\frac{1}{2}}e^{\beta g\mu_BBs_z} = e^{\beta\frac{g\mu_BB}{2}} + e^{-\beta\frac{g\mu_BB}{2}} = 2\cosh\left(\beta\frac{g\mu_BB}{2}\right) $$
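These sums are small enough to check by brute force. The sketch below (my own check; `eps0` stands for $g\mu_BB/2$ with an arbitrary value) verifies the spin-$\frac{1}{2}$ $z$, the factorization $Z = z^N$ by explicit enumeration for $N=3$, and the internal energy $U = -\partial\ln Z/\partial\beta$ via a numerical derivative:

```python
import math
from itertools import product

beta, eps0 = 0.9, 1.1     # eps0 plays the role of g*mu_B*B/2 (arbitrary value)

# Single-spin partition function: energies are -2*eps0*s_z for s_z = +/- 1/2.
z = sum(math.exp(2.0 * beta * eps0 * sz) for sz in (+0.5, -0.5))
assert abs(z - 2.0 * math.cosh(beta * eps0)) < 1e-12

# Factorization theorem checked by enumerating all 2^N microstates for N = 3.
N = 3
Z = sum(math.exp(2.0 * beta * eps0 * sum(cfg))
        for cfg in product((+0.5, -0.5), repeat=N))
assert abs(Z - z**N) < 1e-9

# U = -d(ln Z)/d(beta) via central difference; should equal
# -N * eps0 * tanh(beta * eps0) for this two-level system.
def lnZ(b):
    return N * math.log(2.0 * math.cosh(b * eps0))

h = 1e-6
U_num = -(lnZ(beta + h) - lnZ(beta - h)) / (2.0 * h)
assert abs(U_num - (-N * eps0 * math.tanh(beta * eps0))) < 1e-6
```

The brute-force enumeration scales as $2^N$, of course; the whole point of the factorization theorem is that we never need it for large $N$.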

You should now verify that the resulting $Z$ and $F$ give the same results as we obtained before, for example $U = -N\frac{g\mu_BB}{2}\tanh\left(\beta\frac{g\mu_BB}{2}\right)$ (remember that we called $\epsilon_0 = \frac{g\mu_BB}{2}$). You could also calculate the average magnetizations, etc., but we will do these for a more general problem very soon. Let us also look at quantum harmonic oscillators. In this case:
$$ z = \sum_{n=0}^{\infty}e^{-\beta\hbar\omega\left(n+\frac{1}{2}\right)} = e^{-\beta\frac{\hbar\omega}{2}}\sum_{n=0}^{\infty}\left[e^{-\beta\hbar\omega}\right]^n $$
This is a geometric series, and I hope you remember that:
$$ \sum_{n=0}^{N}x^n = \frac{1-x^{N+1}}{1-x} $$
and so, for any $|x|<1$, we have:
$$ \sum_{n=0}^{\infty}x^n = \frac{1}{1-x} $$
For us, $x = e^{-\beta\hbar\omega} < 1$ indeed, so we find:
$$ z = \frac{e^{-\beta\frac{\hbar\omega}{2}}}{1-e^{-\beta\hbar\omega}} = \frac{1}{2\sinh\left(\beta\frac{\hbar\omega}{2}\right)} $$
in nicer form. We should again check that using $Z = z^N$, $F = -k_BT\ln Z$, $U = -\frac{\partial}{\partial\beta}\ln Z$, etc., we recover all the relationships we found in the microcanonical ensemble. I hope you agree that these calculations are a lot easier: it took quite a bit of ingenuity to figure out the multiplicity of the macrostate (especially for the quantum harmonic oscillators), whereas these calculations are straightforward; we only need to do some simple sums! They are actually even simpler than the classical examples, where we had to do some integrals.

5.1 Quantum paramagnetic spins

Let us consider $N$ non-interacting quantum spins (locked in a lattice). The spins have magnitude $S$, where $S$ can be any integer or half-integer (so we will deal with all possible values at once), and are placed in an external uniform magnetic field $\vec{B} = B\hat{e}_z$. As discussed, the Hamiltonian for a single spin is (the Zeeman interaction):
$$ \hat{h} = -\frac{g\mu_B}{\hbar}\hat{\vec{S}}\cdot\vec{B} = -\frac{g\mu_B}{\hbar}\hat{S}_zB $$
where $\hat{S}_z$ is the $z$-component of the spin operator of that particular spin. This operator has the eigenvalues $-S\hbar, -(S-1)\hbar, \dots, (S-1)\hbar, S\hbar$, i.e. $2S+1$ different allowed projections. It follows that the single-spin eigenenergies are:
$$ e_m = -g\mu_BBm $$
where $m = -S, -S+1, \dots, S-1, S$ can take $2S+1$ values. Since the spins are non-interacting, we can use the factorization theorem $Z = z^N$, where:
$$ z(T) = \sum_{m=-S}^{S}e^{-\beta e_m} = \sum_{m=-S}^{S}e^{\beta g\mu_BBm} $$
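This last sum is itself a finite geometric series, and (though the notes do not state it at this point) it can be summed in closed form to $\sinh\!\big(b(S+\tfrac{1}{2})\big)/\sinh(b/2)$ with $b = \beta g\mu_BB$, which you can verify with the geometric-series formula above. A numerical sketch of this check, with an arbitrary value of `b`, for both integer and half-integer $S$:

```python
import math

b = 0.75   # b = beta * g * mu_B * B, arbitrary value for the check

def z_sum(S, b):
    """z as the direct sum over m = -S, -S+1, ..., S."""
    total, m = 0.0, -S
    while m <= S + 1e-9:
        total += math.exp(b * m)
        m += 1.0
    return total

def z_closed(S, b):
    """Closed form of the finite geometric series:
    sinh(b(S + 1/2)) / sinh(b/2)."""
    return math.sinh(b * (S + 0.5)) / math.sinh(b / 2.0)

# Works for half-integer and integer S alike (2S+1 terms each time).
for S in (0.5, 1.0, 1.5, 2.0, 7.5):
    assert abs(z_sum(S, b) - z_closed(S, b)) < 1e-9

# Sanity check: for S = 1/2 this reduces to 2 cosh(b/2), matching the
# two-level result found earlier.
assert abs(z_closed(0.5, b) - 2.0 * math.cosh(b / 2.0)) < 1e-12
```

The $S=\tfrac{1}{2}$ case collapsing to $2\cosh(b/2)$ is a useful consistency check against the two-level-system result derived just before.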