Grand-canonical ensembles


As we know, we are at the point where we can deal with almost any classical problem (see below), but for quantum systems we still cannot handle problems where the translational degrees of freedom are described quantum mechanically and particles can interchange their locations. In such cases we can write down the expression for the canonical partition function, but because of the restriction on the occupation numbers we simply cannot calculate it! (see end of previous write-up). Even for classical systems, we do not know how to deal with problems where the number of particles is not fixed (open systems). For example, suppose we have a surface on which certain types of atoms can be adsorbed (trapped). The surface is in contact with a gas containing these atoms, and depending on conditions some will stick to the surface while others become free and go into the gas. Suppose we are interested only in the properties of the surface (for instance, the average number of trapped atoms as a function of temperature). Since the number of atoms on the surface varies, this is an open system and we still do not know how to solve this problem. For these reasons we need to introduce grand-canonical ensembles. This will finally allow us to study quantum ideal gases (our main goal for this course). As we expect, the results we'll obtain at high temperatures will agree with the classical predictions we already have; however, as we will see, the low-temperature quantum behavior is really interesting and worth the effort! Like we did for canonical ensembles, I'll introduce the formalism for classical problems first, and then we'll generalize to quantum systems. So consider an open system in contact with a thermal and particle reservoir.

[Figure: an open system (variables E, V, N, T, p, µ) in contact with a reservoir (variables E_R, V_R, N_R, T_R, p_R, µ_R). Caption: Model of an open thermodynamic system.]
The system is in contact with a thermal and particle reservoir (i.e., both energy and particles are exchanged between the two, but E + E_R = E_T = const, N + N_R = N_T = const). The reservoir is assumed to be much bigger than the system: E ≪ E_R, N ≪ N_R. In equilibrium, T = T_R and µ = µ_R. As we know (see the thermo review), the macrostate of such a system is characterized by its temperature T (equal to that of the reservoir), its volume V, and its chemical potential µ (which will also be equal to that of the reservoir); again, I am using a classical gas as a typical example. We call an ensemble of very many copies of our open system, all prepared in the same equilibrium macrostate T, V, µ, a grand-canonical ensemble. As always, our goal is to find the relationships that hold at equilibrium between the macroscopic variables, i.e. in this case to find out how U, S, p, ⟨N⟩, ... depend on T, V, µ. Note that because the number of particles is no longer fixed, from now on we can only speak of the ensemble average ⟨N⟩ (the average number of particles in the container for given values of T, V, µ).

Classical grand-canonical ensemble

As was the case for the canonical ensemble, our goal is to find the density of probability ρ_gc(N,q,p) to find the system in a given microstate; once we know this, we can compute any ensemble average and answer any question about the properties of the system. Note that since the number of microsystems (atoms or whatever may be the case) inside the system varies, we will specify N explicitly from now on: a microstate is characterized by how many microsystems are in the system in that microstate, N, and for each of these microsystems we need f generalized coordinates and f generalized momenta to describe its behavior, so we have a total of 2Nf microscopic variables q, p. So we need to figure out ρ_gc(N,q,p). We will follow the reasoning we used for canonical ensembles; have a quick look over that and you'll see the parallels. In fact, I'm going to copy and paste text from there, making only the appropriate changes here and there. First, we reduce the problem to one that we already know how to solve, by noting that the total system = system + reservoir is isolated, and so we can use microcanonical statistics for it. The microstate of the total system is characterized by the generalized coordinates N,q,p; N_R,q_R,p_R, where the latter describe the microsystems inside the reservoir. Clearly, N + N_R = N_T is the total number of microsystems in the total system, which is a constant. So in fact the total system's microstate is characterized by: N,q,p; N_T − N,q_R,p_R. Its Hamiltonian is H_T(N,q,p; N_R,q_R,p_R) = H(q,p) + H_R(q_R,p_R); here I won't write the dependence of the Hamiltonian on N explicitly, but we know it's there: for example, in the kinetic energy we sum over the energies of the microsystems inside the system, and the number of terms in the sum is N, i.e. how many microsystems contribute to the energy in that particular microstate.
Based on the microcanonical ensemble results, we know that the density of probability to find the total system in the microstate N,q,p; N_T − N,q_R,p_R is:

ρ_mc(N,q,p; N_T − N,q_R,p_R) = 1/Ω_T(E_T,δE_T,V,V_R,N_T)  if E_T ≤ H_T(N,q,p; N_T − N,q_R,p_R) ≤ E_T + δE_T, and 0 otherwise,

where Ω_T(E_T,δE_T,V,V_R,N_T) is the multiplicity for the total system. This is the same as saying that the total probability to find the system in a microstate with N microsystems located between q, q+dq and p, p+dp AND the reservoir with the remaining N_T − N microsystems located between q_R, q_R+dq_R and p_R, p_R+dp_R is:

[dq dp dq_R dp_R / (G_N G_{N_T−N} h^{Nf + (N_T−N)f_R})] ρ_mc(N,q,p; N_T − N,q_R,p_R).

Note that even though the microsystems are the same, they may have different numbers of degrees of freedom in the system and in the reservoir; think about the previous example with atoms adsorbed on a surface. It is very likely that we need different degrees of freedom to describe the state of an atom when it is on the surface (inside the system) than when it is in the gas (the reservoir). However, the total number of atoms on the surface plus inside the gas is fixed, and that's all that matters. We do not care/need to know what the reservoir is doing; all we want to know is the probability that the system is in a microstate which has N microsystems between q, q+dq and p, p+dp. To

find that, we must simply sum over all the possible reservoir microstates while keeping the system in the desired microstate. Therefore

ρ_gc(N,q,p) dq dp/(G_N h^{Nf}) = ∫_Res [dq dp dq_R dp_R / (G_N G_{N_T−N} h^{Nf + (N_T−N)f_R})] ρ_mc(N,q,p; N_T − N,q_R,p_R),

where the integral is over all the reservoir's degrees of freedom. After simplifying, we have:

ρ_gc(N,q,p) = ∫_Res [dq_R dp_R / (G_{N_T−N} h^{(N_T−N)f_R})] ρ_mc(N,q,p; N_T − N,q_R,p_R)

= (1/Ω_T) ∫_{E_T ≤ H(q,p)+H_R(q_R,p_R) ≤ E_T+δE_T} dq_R dp_R / (G_{N_T−N} h^{(N_T−N)f_R}),

since ρ_mc = 1/Ω_T when this condition is satisfied, and zero otherwise. However, we can rewrite the condition as E_T − H(q,p) ≤ H_R(q_R,p_R) ≤ E_T − H(q,p) + δE_T, so that:

ρ_gc(N,q,p) = (1/Ω_T) ∫_{E_T−H(q,p) ≤ H_R(q_R,p_R) ≤ E_T−H(q,p)+δE_T} dq_R dp_R / (G_{N_T−N} h^{(N_T−N)f_R})

ρ_gc(N,q,p) = Ω_R(E_T − H(q,p), δE_T, V_R, N_T − N) / Ω_T(E_T, δE_T, V, V_R, N_T),

since the integral is by definition just the multiplicity for the reservoir to be in a macrostate of energy E_T − H(q,p), with volume V_R and N_T − N microsystems. If you have a quick look at the canonical ensemble derivation, you'll see that so far everything went very similarly, except that we carefully kept track of how many microsystems are where. Using the link between the entropy of the reservoir and its multiplicity, S_R = k_B ln Ω_R (because the reservoir is so big and insensitive to what the system does, all microcanonical formulae valid for an isolated system apply to the reservoir), we then have:

Ω_R(E_T − H(q,p), δE_T, V_R, N_T − N) = e^{S_R(E_T − H(q,p), δE_T, V_R, N_T − N)/k_B} ≈ e^{[S_R(E_T, δE_T, V_R, N_T) − H(q,p) ∂S_R/∂E_R − N ∂S_R/∂N_R]/k_B},

where I used the fact that the energy of the system H(q,p) ≪ E_T and N ≪ N_T and performed a Taylor expansion. The appearance of the second derivative (the one with respect to N_R) is where the difference between canonical and grand-canonical ensembles shows up. But we know that at equilibrium

∂S_R/∂E_R = 1/T_R = 1/T ;  ∂S_R/∂N_R = −µ_R/T_R = −µ/T,

so we find the major result:

ρ_gc(N,q,p) = (1/Z) e^{−β[H(q,p) − µN]}   (1)

where Z is a constant (what we obtain when we collect all terms that do not depend on (N,q,p)), called the grand-canonical partition function.
We can find its value from the normalization condition:

1 = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] ρ_gc(N,q,p)  ⇒  Z(T,V,µ) = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] e^{−β[H(q,p) − µN]}   (2)

Note that here "sum over all microstates" means to sum over all possible numbers N of microsystems in the system, and for each N to sum over all possible locations/momenta of the microsystems. Of course, for classical systems we know that the sum over locations/momenta is really an integral, because these are continuous variables. You might argue that we should stop the sum over N at N_T, but since N_T is by definition much, much bigger than the average number of microsystems in the system, it turns out that we can take the upper limit to be infinity; it makes calculations easier, and the error can be made arbitrarily small by making the reservoir bigger, i.e. N_T → ∞. Note also that Z is a function of T (through the β in the exponent), of V (the integrals over positions are restricted to the volume V), and of µ (again through the exponent). Now that we know the grand-canonical density of probability, we can calculate the internal energy:

U = ⟨H(q,p)⟩ = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] ρ_gc(N,q,p) H(q,p) = (1/Z) Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] H(q,p) e^{−β[H(q,p) − µN]}

Here we have to be a bit careful. We can't simply use the trick with the derivative with respect to β, since this brings down both H (which we want) and −µN (which we don't want):

∂/∂β e^{−β[H(q,p) − µN]} = −[H(q,p) − µN] e^{−β[H(q,p) − µN]} ≠ −H(q,p) e^{−β[H(q,p) − µN]}

So here's what we do. Let me define

α = βµ

and use this instead of µ as a variable, so that Z = Z(β,V,α) and

ρ_gc(N,q,p) = (1/Z) e^{−βH(q,p) + αN}

Now, we have to pretend that α and β are independent variables, i.e. we forget for a bit what the definition of α is and pretend that it's just some quantity totally unrelated to β. If this were true, then we could use:

∂/∂β e^{−βH(q,p) + αN} = −H(q,p) e^{−βH(q,p) + αN}

and we could then write:

U = −(1/Z) ∂/∂β [Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] e^{−βH(q,p) + αN}]  ⇒  U = −(1/Z(β,V,α)) ∂Z(β,V,α)/∂β = −∂ ln Z(β,V,α)/∂β

So the point is to treat α as a variable independent of β while we take the derivative, and only after we're done taking the derivative to remember that α = βµ.
This "forgetfulness" is very useful since it allows us to calculate another ensemble average very simply, namely:

⟨N⟩ = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] ρ_gc(N,q,p) N = (1/Z(β,V,α)) Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] N e^{−βH(q,p) + αN}

Again, treating α and β as independent variables while we take derivatives, we have:

∂/∂α e^{−βH(q,p) + αN} = N e^{−βH(q,p) + αN}

so that we avoid doing the integrals, and we find:

⟨N⟩ = (1/Z(β,V,α)) ∂Z(β,V,α)/∂α = ∂ ln Z(β,V,α)/∂α.

So we can also easily calculate the average number of microsystems in the system. We will look at some examples soon and you'll see that doing this is simple in practice; it just requires a bit of attention when taking the derivatives. This approach can be extended easily to find (check!) that:

⟨H²⟩ = (1/Z(β,V,α)) ∂²Z(β,V,α)/∂β²  and  ⟨N²⟩ = (1/Z(β,V,α)) ∂²Z(β,V,α)/∂α².

Of course, we need these quantities to calculate standard deviations. Now, this trick I described above, using α and β, is what people usually do and what is given in textbooks etc. All that is needed is that while you take the derivatives, you treat α and β as independent variables. For reasons which escape me, some students think that this is too fishy and refuse to use this trick. So here is an alternative trick, which is almost as good (it takes just a bit more work) and gives precisely the same answers at the end of the day. Let's look again at:

U = ⟨H(q,p)⟩ = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] ρ_gc(N,q,p) H(q,p) = (1/Z) Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] H(q,p) e^{−β[H(q,p) − µN]}

Clearly we'd like not to have to do the integrals explicitly, so we have to get rid of the H somehow. If you do not like the trick with introducing α, then we can do this. First, introduce an x in front of the Hamiltonian in the exponent, and demand that x be set to 1 at the end of the calculation, since clearly:

U = (1/Z) Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] H(q,p) e^{−β[xH(q,p) − µN]} |_{x=1}

Now we can take the derivative with respect to x, so that we have:

U = −(1/Z) (1/β) [∂Z(x)/∂x]_{x=1}  where  Z(x) = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] e^{−β[xH(q,p) − µN]}

can be quickly calculated (this is where the extra work comes in), just like you calculated Z = Z(x = 1) (it's just a matter of tracking where the extra x goes in some Gaussians). So, putting these together, and since we set x = 1 at the end, we have:

U = −(1/β) [∂ ln Z(x)/∂x]_{x=1}  and similarly  ⟨H²⟩ = (1/(Zβ²)) [∂²Z(x)/∂x²]_{x=1}

and here the only trick is to first take all the derivatives, and then set x = 1 as the very last step. To calculate ensemble averages of N we do not need to introduce any new variable, since we already have µ there, so we can also write:

⟨N⟩ = (1/Z) Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] N e^{−βH(q,p) + βµN} = (1/Z) (1/β) ∂Z/∂µ = (1/β) ∂ ln Z/∂µ

and similarly for higher moments of N. If you think about it, taking these derivatives is just the same as taking derivatives with respect to α in the previous notation. How about calculating other ensemble averages, for example the entropy? Using the expression for ρ_gc in the general Boltzmann formula, we find that:

S = −k_B ⟨ln ρ⟩ = −k_B Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] ρ_gc(N,q,p) ln ρ_gc(N,q,p)

= −k_B Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] (1/Z) e^{−β[H(q,p) − µN]} [−ln Z − βH(q,p) + βµN]

⇒ S = k_B ln Z + k_B β U − k_B β µ ⟨N⟩,

since the first integral is related to the normalization condition, while the second and third are just the ensemble averages of H and N. From this we find that the grand-canonical potential is:

φ(T,V,µ) = U − TS − µ⟨N⟩ = −k_B T ln Z(T,V,µ)   (3)

As you see, things again fit together very nicely. If you remember, when we reviewed thermodynamics, we decided that for an open system we need to be able to compute the grand-canonical potential φ(T,V,µ), since then we can use

dφ = −S dT − p dV − ⟨N⟩ dµ   (4)

to find S, p and ⟨N⟩ as its partial derivatives, which tells us how they depend on T, V, µ. This seems to give us an alternative way to find ⟨N⟩, but in reality we find the same formula as with the derivative of ln Z (not surprisingly; we should get the same result no matter how we go about it). There is also an alternative way to find U, using U = φ + TS + µ⟨N⟩, where all the terms on the right-hand side are known once Z is calculated. Of course, this gives the same result as the tricks with derivatives, but it involves a bit more work. So let us summarize how we solve a classical grand-canonical ensemble problem, once T, V, µ, ... (the macroscopic variables) are given to us.
As always, we begin by identifying the number of degrees of freedom f per microsystem and all the generalized coordinates q, p needed to fully characterize a microstate with N microsystems in the system, and then the Hamiltonian of the system H(q,p). Then we calculate the partition function from Eq. (2). Once we have it, we know φ = −k_B T ln Z. Once we have φ, we can find:

S = −(∂φ/∂T)_{V,µ} ;  p = −(∂φ/∂V)_{T,µ} ;  ⟨N⟩ = −(∂φ/∂µ)_{T,V}

We can also calculate the internal energy and ⟨N⟩ from:

U = −∂ ln Z(β,V,α)/∂β ;  ⟨N⟩ = ∂ ln Z(β,V,α)/∂α,

where α and β are treated as independent variables while we take the derivatives, after which we set α = βµ. Similarly, we can calculate averages of H², N², HN, ..., by using the proper number of derivatives with respect to α and β. If you do not like this, use the trick with introducing the x, and then setting it to 1 after all derivatives have been taken. Any other ensemble averages are calculated starting from the definition of an ensemble average and the known density of probability for the grand-canonical ensemble, by doing all the integrals over positions/momenta and the sum over N. One last thing. Remember that for the canonical partition function and non-interacting systems, we could use the factorization theorem to simplify the calculation. It turns out we can do a similar thing here. First, start with the general definition of the grand-canonical partition function:

Z(T,V,µ) = Σ_{N≥0} ∫ [dq dp / (G_N h^{Nf})] e^{−β[H(q,p) − µN]} = Σ_{N≥0} e^{βµN} ∫ [dq dp / (G_N h^{Nf})] e^{−βH(q,p)}

Now we recognize that the integrals are simply the canonical partition function for a system with N particles, so:

Z(T,V,µ) = Σ_{N≥0} e^{βµN} Z(T,V,N)

So in fact, if we know how to calculate Z(T,V,N) (which we do), there isn't much left of the calculation. Let's simplify even further. For non-interacting systems where particles can move and exchange positions (such as gases), we know from the factorization theorem that:

Z(T,V,N) = [z(T,V)]^N / N!

Using this in the above sum, we find:

Z(T,V,µ) = Σ_{N≥0} [z(T,V) e^{βµ}]^N / N! = exp(e^{βµ} z(T,V)),

since the sum is just the expansion of the exponential function. Those of you with good memory will be delighted to learn that e^{βµ} has its own special name, namely the fugacity.
For problems where the microsystems are distinguishable because they are located at different spatial locations (crystal-type problems), the Gibbs factor is G_N = 1 and:

Z(T,N) = [z(T)]^N

and therefore:

Z(T,µ) = Σ_{N≥0} [e^{βµ} z(T)]^N = 1/(1 − e^{βµ} z(T))  if e^{βµ} z(T) < 1

(otherwise the geometric series is not convergent). Of course, we'll find that this condition is generally satisfied. So the conclusion is that once we calculate z (just as we did for canonical ensembles), we immediately have Z, i.e. dealing with a grand-canonical classical system really does not involve any more work/math than dealing with a canonical system, at least as far as classical problems are concerned.
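The geometric-series result for the crystal-type case can also be sanity-checked numerically. In this sketch the values of β, µ and z are made-up illustrations chosen so that the convergence condition e^{βµ} z < 1 holds:

```python
import math

# Sketch for the distinguishable (crystal-type) case: the grand partition
# function is a geometric series, convergent only when e^{beta*mu} * z < 1.
# beta, mu, z are illustrative placeholders.
beta, mu = 1.0, -2.0
z = 1.5                       # single-site canonical partition function (made up)
x = math.exp(beta * mu) * z   # common ratio of the series

assert x < 1, "geometric series diverges otherwise"

Z_sum = sum(x**N for N in range(10_000))  # truncated series
Z_closed = 1.0 / (1.0 - x)

print(Z_sum, Z_closed)
```

If one pushes µ up so that e^{βµ} z approaches 1, the truncated sum grows without bound, which is the numerical signature of the divergence noted above.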

Classical ideal gas

Let's check how this works for a classical ideal gas, our favorite classical model. Assume a system of volume V in contact with a thermal and particle reservoir of simple atoms, with temperature T and chemical potential µ. Let's calculate the average number of particles in the container ⟨N⟩, their internal energy U and their pressure p. Because these are simple, non-interacting atoms, they have f = 3 degrees of freedom, we can use the factorization theorem, and G_N = N!. Of course, we know that:

z(T,V) = (1/h³) ∫ d³r ∫ d³p e^{−βp²/(2m)} = V (2πm k_B T / h²)^{3/2}  ⇒  Z(T,V,N) = z^N / N!

(three identical Gaussians plus one integral over the volume). Therefore (see the formula above):

Z = Σ_{N≥0} e^{βµN} Z(T,V,N) = Σ_{N≥0} [e^{βµ} z]^N / N! = exp[e^{βµ} V (2πm k_B T / h²)^{3/2}]

and so:

φ(T,V,µ) = −k_B T ln Z = −k_B T e^{βµ} V (2πm k_B T / h²)^{3/2}

This is indeed an extensive quantity. It is maybe not so obvious that it has the right (energy) units, but you should be able to convince yourselves that it does (remember that z is a number, and µ is an energy). To calculate ⟨N⟩ we have two alternatives: either as a partial derivative of φ with respect to µ (I'll let you do this), or using the trick with α and β. Let's do it by the second method. First, we replace βµ → α everywhere they appear together. We then find:

Z(β,V,α) = exp[e^α V (2πm/(βh²))^{3/2}]  ⇒  ln Z(β,V,α) = e^α V (2πm/(βh²))^{3/2}

Now, treating α and β as independent variables, we have:

⟨N⟩ = ∂ ln Z(β,V,α)/∂α = e^α V (2πm/(βh²))^{3/2}

This looks a bit strange, but let's not lose heart: it tells us how ⟨N⟩ depends on T, V, µ (the variables which characterize the macrostate for the open system), and it is not something we've looked at before. Notice that you can extract from this how µ depends on T, V, ⟨N⟩; you should do that and see that the result agrees with what we obtained for canonical ensembles, if we replace N → ⟨N⟩. The internal energy is:

U = −∂ ln Z(β,V,α)/∂β = (3/(2β)) e^α V (2πm/(βh²))^{3/2}  ⇒  U = (3/2) ⟨N⟩ k_B T

So yeeeii! It works!
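The α–β trick is easy to test numerically: differentiate ln Z(β, V, α) for the ideal gas with finite differences and check that U = (3/2)⟨N⟩k_B T drops out. A minimal sketch, with m, h, V all set to 1 as illustrative placeholders:

```python
import math

# Sketch: numerically differentiate lnZ(beta, V, alpha) for the classical
# ideal gas to check <N> = d(lnZ)/d(alpha) and U = -d(lnZ)/d(beta) = (3/2)<N>/beta.
# m, h, V are placeholders set to 1; beta, alpha are arbitrary test values.
m = h = V = 1.0

def lnZ(beta, alpha):
    return math.exp(alpha) * V * (2 * math.pi * m / (beta * h**2))**1.5

beta, alpha = 2.0, -1.0   # alpha = beta*mu, held fixed while differentiating
d = 1e-6

# central finite differences, treating alpha and beta as independent
N_avg = (lnZ(beta, alpha + d) - lnZ(beta, alpha - d)) / (2 * d)
U = -(lnZ(beta + d, alpha) - lnZ(beta - d, alpha)) / (2 * d)

print(U, 1.5 * N_avg / beta)   # these should agree: U = (3/2) <N> k_B T
```

Treating α and β as independent during the derivative is exactly what the finite-difference stencils do here: each one perturbs one variable while literally holding the other fixed.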

Finally, we find the pressure from the partial derivative of φ with respect to V:

p = −(∂φ/∂V)_{T,µ} = k_B T e^{βµ} (2πm k_B T / h²)^{3/2}  ⇒  pV = ⟨N⟩ k_B T

So, yeeeiii again! As you can see, we obtain the same results we got using the microcanonical and canonical ensemble formalisms for this problem, except that here we must use ⟨N⟩ instead of N. Why is this? Why do we get the same predictions even though now the number of particles is not fixed? Exactly as in the case of the equivalence between the microcanonical and canonical formulations, this is due to the fact that the systems are very large, N ∼ 10²³. Unlike in a canonical ensemble, in a grand-canonical ensemble the number of particles is not fixed, and it could in principle be anything. However, because the system is so big, one can show (just as we did for the energy in the canonical ensemble) that the probability of finding the system with anything but ⟨N⟩ particles is an extremely small number (basically zero), i.e. it is overwhelmingly likely that the system always has precisely ⟨N⟩ atoms inside. One can show that the relative standard deviation

σ_N/⟨N⟩ = sqrt(⟨N²⟩ − ⟨N⟩²)/⟨N⟩ → 0

(I'll probably give this to you as an assignment exercise). So, as I've said all along, for (large) thermodynamic systems we can use whichever formalism is most convenient to solve a problem. We will see more examples of classical problems to be solved with the grand-canonical formalism (the kinds of problems where the number of particles can really change) in the next assignment. We will also consider problems where we have more than one species of atoms in the container; in that case, it is possible that one species can pass through the wall (the system is open from its point of view) while another species cannot (the system is closed from its point of view). In this case, we need to use a mixed description: canonical formalism for the atoms whose number is conserved, and grand-canonical formalism for those whose number can vary.
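The vanishing relative fluctuation can be previewed numerically. For the ideal gas ln Z = e^α c (with c lumping together V(2πm/(βh²))^{3/2}), the variance of N follows from ∂² ln Z/∂α², so σ_N/⟨N⟩ = 1/√⟨N⟩. A sketch, with c an arbitrary illustrative constant:

```python
import math

# Sketch: for lnZ = e^alpha * c, finite differences give <N> and Var(N),
# confirming sigma_N/<N> = 1/sqrt(<N>). c is an illustrative placeholder
# standing in for V * (2*pi*m/(beta*h^2))**1.5.
c = 1.0e6
alpha = 0.0

def lnZ(a):
    return math.exp(a) * c

d = 1e-4
N_avg = (lnZ(alpha + d) - lnZ(alpha - d)) / (2 * d)          # first derivative
varN = (lnZ(alpha + d) - 2 * lnZ(alpha) + lnZ(alpha - d)) / d**2  # second derivative

rel = math.sqrt(varN) / N_avg
print(rel, 1 / math.sqrt(N_avg))   # tiny for large <N>: fluctuations are negligible
```

For ⟨N⟩ ∼ 10²³ the relative deviation is of order 10⁻¹², which is why the grand-canonical and canonical predictions coincide for macroscopic systems.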
This probably sounds more complicated than it is in practice, as you'll see. But now it is finally time to concentrate on quantum-gas-type problems, which we could not solve by any other formalism. We are finally able to study them.

Quantum grand-canonical ensembles

First, we need to figure out how to properly characterize the microstates of the quantum system, since the number of microsystems is not fixed. If you think about it, we can't just say that the microstates are eigenstates of the Hamiltonian (the way we did for quantum canonical systems), because even the Hamiltonian is not unique: microstates with different numbers of particles have different Hamiltonians! (The number of terms in the Hamiltonian, e.g. kinetic energy contributions, changes in proportion to how many microsystems are in the microstate.) So we have to be a bit careful about how we go about characterizing all the microstates. What will simplify things tremendously is the fact that we will only deal with non-interacting systems. For interacting systems, one really needs to use more formal objects like density matrices, but we won't go there. Let e_α be the energies of all the (discrete) energy levels if there were a single microsystem in the system. We'll always assume the energy to be zero if there is no microsystem in the system (empty

system). Here α stands for all the quantum numbers needed to describe these levels; degeneracies are very important! These energies (and the single-particle wavefunctions associated with them) are usually called single-particle energies or single-particle orbitals. To make things clearer, let's look at some examples as we go along. First, a simple crystal-like example. Assume we have a surface with a total number N_T of sites where simple atoms can be trapped. If a trap is empty, its energy is zero. If it catches an atom, its energy is lowered to ǫ_0 < 0. This surface is in contact with a gas of atoms (the reservoir), with known T and µ. The question could be, for instance: what is the average number ⟨N⟩ of atoms trapped on the surface? What are the single-particle orbitals in this case? Well, if we have a single atom in the system (= surface with traps), it must be trapped at some site or other, and the energy will be ǫ_0. We could use as quantum number an integer n ≤ N_T which tells us at which site the atom is trapped; so here we have an N_T-fold degenerate spectrum of single-particle states, all with the same energy ǫ_0. As a second example, let's consider a quantum gas problem. Assume simple atoms with quantum dynamics in a cubic box of volume V = L³. Of course, we'll generally want to know the properties of the quantum gas when the system is in contact with a thermal and particle reservoir with known T, µ. However, right now all we want to know is: what are the single-particle levels? For this, we must find the spectrum (the eigenstates) when there is just one atom in the system. This we know how to do. In this case the Hamiltonian is:

ĥ = −(ħ²/2m)(d²/dx² + d²/dy² + d²/dz²)

and the eigenstates are:

e_{n_x,n_y,n_z} = (h²/(8mL²))(n_x² + n_y² + n_z²)

where n_x, n_y, n_z = 1, 2, ... are strictly positive integers.
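The degeneracies of this spectrum, which the notes stress are very important, are easy to see by enumerating the lowest levels. A small sketch in units where h²/(8mL²) = 1:

```python
from itertools import product

# Sketch: lowest single-particle levels of one atom in a cubic box,
# e = (h^2/(8mL^2)) * (nx^2 + ny^2 + nz^2), in units where h^2/(8mL^2) = 1.
# The cutoff range(1, 5) is just enough to expose the first degeneracies.
levels = sorted(nx**2 + ny**2 + nz**2
                for nx, ny, nz in product(range(1, 5), repeat=3))

print(levels[:7])   # [3, 6, 6, 6, 9, 9, 9]
```

The ground state (1,1,1) is unique, but the next levels are 3-fold degenerate because permuting (2,1,1) or (2,2,1) among the three axes gives distinct orbitals with the same energy.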
So here three quantum numbers, α = (n_x, n_y, n_z), characterize the single-particle orbitals (if you don't remember where this formula comes from, it is a simple generalization of the 1d case we solved when we looked at multiplicities, when we discussed microcanonical ensembles). Strictly speaking, atoms also have some spin S, so we should actually include a 4th quantum number, α = (n_x, n_y, n_z, m), where m = −S, −S+1, ..., S−1, S is the spin projection. For any other problem we can figure out the single-particle orbitals (or states) similarly. Now let's go back to our general description, where e_α are the energies of all possible single-particle orbitals. What happens if there are more microsystems in the system? Well, let's start with two. Because these are non-interacting microsystems, they occupy the same set of single-particle orbitals. So to characterize the state, we now need two sets of quantum numbers, say α and β, to tell us which two states are occupied. The energy is simply e_α + e_β. For example, if there is a second atom on the surface, it must also be trapped at some site or other, just like the first one, so I could specify the state by saying atom 1 is trapped at site n₁ while atom 2 is trapped at site n₂. Of course, the energy is 2ǫ_0. Similarly, a second atom in the box will be in some eigenstate e_{n'_x,n'_y,n'_z}, and the energy will be the sum of the two. The wavefunction for the total system is now:

Ψ(r₁, r₂) = φ_α(r₁) φ_β(r₂)

where φ_α(r) is the single-particle wavefunction associated with the single-particle state e_α. Right? WRONG! What quantum mechanics tells us is that if the particles are identical, their wavefunction must be either symmetric (for so-called bosonic particles, i.e. those whose spin S is an integer) or antisymmetric (for so-called fermionic particles, i.e. those whose spin S is a half-integer) under exchange of the two, i.e.

Ψ_B(r₂, r₁) = +Ψ_B(r₁, r₂) ;  Ψ_F(r₂, r₁) = −Ψ_F(r₁, r₂)

This is what indistinguishability really means. If the particles are truly identical, then there is no measurement whatsoever that we can perform to tell us which of the two particles is at r_1 and which at r_2, so if we exchanged their positions we should see no difference (same probability to find them at those locations). So going back, it follows that for two fermions, the two-particle wavefunction must be:

    Ψ_F(r_1, r_2) = φ_α(r_1)φ_β(r_2) − φ_α(r_2)φ_β(r_1)

while for bosons, we must have:

    Ψ_B(r_1, r_2) = φ_α(r_1)φ_β(r_2) + φ_α(r_2)φ_β(r_1)

(there is actually an overall normalization factor, but that is just a constant that has no relevance for our discussion). This immediately tells us that we cannot have two fermions occupy the same single-particle orbital, i.e. α ≠ β always. If α = β we find Ψ_F(r_1, r_2) = 0, which is not allowed (wavefunctions must normalize to 1). This is known as Pauli's exclusion principle. There is no such restriction for bosons: there we can have any number of bosons occupying the same single-particle state.

If we now look at these two-particle wavefunctions, we see that it's just as likely that particle 1 is in state α and 2 in β, as it is to have 1 in state β and 2 in state α. Therefore, it makes much more sense to characterize this state by saying that there is a particle in state α and a particle in state β, and not attempt anymore to say which is which — they're identical, and either one could be in either state with equal probability. We can rephrase this by saying that of all single-particle orbitals, only α and β are occupied, while all other ones are empty. In fact, we can define an occupation number n_α, an integer associated to each single-particle level. For empty levels n_α = 0, while for occupied levels n_α counts how many particles are in that particular orbital. For bosons, n_α = 0, 1, 2, 3, ... can be any number between 0 and infinity. For fermions, n_α = 0 or 1!
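A tiny numerical illustration of the exclusion principle, using two toy orbitals of my own choosing (hypothetical Gaussian-type functions, purely for illustration): the antisymmetric combination vanishes identically whenever both particles are placed in the same orbital, and changes sign under exchange otherwise.

```python
import math

def psi_F(phi_a, phi_b, r1, r2):
    """Antisymmetrized two-fermion wavefunction (normalization dropped)."""
    return phi_a(r1) * phi_b(r2) - phi_a(r2) * phi_b(r1)

# two toy 1d orbitals (hypothetical, just for illustration)
phi_1 = lambda r: math.exp(-r * r)
phi_2 = lambda r: r * math.exp(-r * r)

same = psi_F(phi_1, phi_1, 0.3, 1.7)   # alpha = beta -> vanishes (Pauli)
diff = psi_F(phi_1, phi_2, 0.3, 1.7)   # alpha != beta -> generally nonzero
```

The `same == 0` result holds at every pair of positions, which is exactly the statement that the state with α = β does not exist.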
Because of the exclusion principle, we cannot have more than 1 fermion occupying a state. This is the difference between fermions and bosons. It might not look like much, but as we will see, it leads to extraordinarily different behavior of fermions vs. bosons at low temperatures, i.e. where quantum behavior comes into play.

We can now generalize. For any number of microsystems (particles) present in the system, we can specify a possible microstate by giving the occupation numbers of all the single-particle orbitals (if there is an infinite number of orbitals, such as for a particle in a box, lots and lots of occupation numbers will be zero; but we still have to list all of them). So the microstate is now specified through the values {n_α} of all occupation numbers in that microstate. Allowing all numbers {n_α} to take all their possible values will generate all possible microstates for all possible numbers of microsystems. Of course, the total number of microsystems (particles) in the microstate must be

    N_{n_α} = Σ_α n_α

i.e. we go through all levels and sum how many particles occupy each of them — clearly this sum gives the total number of particles in that microstate. The energy of this microstate is:

    E_{n_α} = Σ_α n_α e_α

again we go through all levels; for each one that is occupied (n_α ≠ 0) we add how many particles are in that level, n_α, times the energy of each one of them, e_α. Again, I think this should be quite

an obvious equation for non-interacting systems. For interacting systems, things are much more difficult, because the energy is no longer the sum of single-particle energies.

This was the hard part. Now that we know how we describe the microstates, and how many particles are in a microstate and what their energy is, we're practically done. Following the same reasoning as for classical ensembles, we find that the probability to be in a microstate is

    p_µstate = (1/Z) e^{−β(E_µstate − µN_µstate)}

As in the canonical case, the only difference is that classical degrees of freedom are continuous, so there we could only talk about a probability density; for quantum systems, microstates are discrete, so we can talk about the probability to be in a microstate. The reason the formula above holds is that nowhere in its derivation did we have to assume that the energy is continuous, so things proceed similarly if the energy is discrete. Of course, the grand-canonical partition function Z must be such that the normalization condition Σ_µstates p_µstate = 1 holds:

    Z = Σ_µstates e^{−β(E_µstate − µN_µstate)}

In terms of the occupation numbers which characterize the microstates, these formulae become:

    p_{n_α} = (1/Z) e^{−β(E_{n_α} − µN_{n_α})} = (1/Z) e^{−β(Σ_α n_α e_α − µ Σ_α n_α)} = (1/Z) e^{−β Σ_α n_α(e_α − µ)}

To find Z we must sum over all microstates. Since the allowed values of the occupation numbers are different for the two cases, let's do this separately. For a fermionic system, each n_α = 0 or 1. As a result:

    Z_F = (∏_α Σ_{n_α=0}^{1}) e^{−β Σ_α n_α(e_α − µ)}

Throughout the remainder of the course, I'll use this shorthand notation:

    ∏_α Σ_{n_α=0}^{1} ≡ Σ_{n_1=0}^{1} Σ_{n_2=0}^{1} ···

where there is a sum for each single-particle orbital (if there is an infinite number of them, there is an infinite number of sums). The shorthand simply says that for each possible α, there is a sum sign in the product. This is actually very simple to calculate, because the exponential factorizes into terms each of which depends on a single occupation number n_α.
So we can group each exponential with its sum, and find:

    Z_F = ∏_α Σ_{n_α=0}^{1} e^{−β n_α(e_α − µ)}

But now each sum is trivial, Σ_{n_α=0}^{1} e^{−β n_α(e_α−µ)} = 1 + e^{−β(e_α−µ)}, and so:

    Z_F = ∏_α [1 + e^{−β(e_α−µ)}]

Just to make sure you followed this, let us do this for the case of the atoms trapped on the surface (we'll study the quantum gases in detail starting next lecture). In this case, we decided that we have

a finite number of single-particle levels indexed by the integer α ≡ n, 1 ≤ n ≤ N_T, which tells us at which site the single atom is trapped. Let's assume that the atoms are fermionic, i.e. there can be at most 1 atom in any trap (you might want to ask me some questions here...). The microstate is now described by the occupation numbers n_1, ..., n_{N_T}, where n_i is 0 if trap i is empty and 1 if trap i is occupied by an atom. The number of atoms in the microstate is N = Σ_{i=1}^{N_T} n_i, the energy is E = Σ_{i=1}^{N_T} (−ǫ_0) n_i = −ǫ_0 N, and E − µN = (−ǫ_0 − µ) Σ_{i=1}^{N_T} n_i. In this case:

    Z_F = Σ_{n_1=0}^{1} Σ_{n_2=0}^{1} ··· Σ_{n_{N_T}=0}^{1} e^{−β(E − µN)}
        = Σ_{n_1=0}^{1} ··· Σ_{n_{N_T}=0}^{1} e^{−β(−ǫ_0 − µ)(n_1 + ··· + n_{N_T})}
        = [Σ_{n_1=0}^{1} e^{−β(−ǫ_0−µ)n_1}] ··· [Σ_{n_{N_T}=0}^{1} e^{−β(−ǫ_0−µ)n_{N_T}}]
        = [1 + e^{β(ǫ_0 + µ)}]^{N_T}

In this case, there is a finite number of single-particle orbitals, and each contributes the same factor, since they all have the same energy −ǫ_0. In the general case, each orbital contributes 1 + e^{−β(e_α−µ)}, and we must multiply over all the orbitals. That's precisely what the general formula for Z_F means. All the other formulae we derived for the classical grand-canonical system hold unchanged, in particular:

    φ_F = −k_B T ln Z_F = −k_B T Σ_α ln[1 + e^{−β(e_α−µ)}]

For different fermionic systems we'll have different energies e_α and numbers of levels, but this formula will always hold.

Let us do the same for a bosonic system. In that case:

    Z_B = (∏_α Σ_{n_α=0}^{∞}) e^{−β Σ_α n_α(e_α − µ)}

since occupation numbers can now be anything. Again we can factorize the product:

    Z_B = ∏_α Σ_{n_α=0}^{∞} e^{−β n_α(e_α − µ)}

Each sum is an infinite geometric series. Note that we must have e^{−β(e_α − µ)} < 1 in order for each of these series to be convergent. Since β > 0, it follows that for a bosonic system we must always have µ < e_α for all single-particle levels, and therefore:

    for bosons, we must have µ < e_GS

where e_GS is the energy of the single-particle ground-state. This restriction will turn out to have important consequences. For fermions, there are no restrictions on the chemical potential.
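The trap result above is easy to check by brute force: summing e^{−β(E−µN)} over all 2^{N_T} fermionic microstates should reproduce [1 + e^{β(ǫ_0+µ)}]^{N_T}. A minimal sketch, with small hypothetical values of N_T, β, ǫ_0 and µ:

```python
import itertools
import math

def ZF_bruteforce(N_T, beta, eps0, mu):
    """Sum e^{-beta(E - mu N)} over all occupations n_i = 0 or 1,
    with E = -eps0 * N for the trapped-atom model."""
    Z = 0.0
    for occ in itertools.product((0, 1), repeat=N_T):
        N = sum(occ)
        E = -eps0 * N
        Z += math.exp(-beta * (E - mu * N))
    return Z

def ZF_closed(N_T, beta, eps0, mu):
    """Factorized result: each trap contributes 1 + e^{beta(eps0 + mu)}."""
    return (1.0 + math.exp(beta * (eps0 + mu))) ** N_T

Zb = ZF_bruteforce(5, beta=1.3, eps0=0.7, mu=-0.2)
Zc = ZF_closed(5, beta=1.3, eps0=0.7, mu=-0.2)
```

The brute-force sum grows as 2^{N_T}, which is exactly why the factorization is so useful: the closed form costs only N_T multiplications.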
If the restriction µ < e_GS is satisfied, then each geometric series Σ_{n=0}^{∞} x^n = 1/(1−x) is convergent, and we find:

    Z_B = ∏_α [1 − e^{−β(e_α−µ)}]^{−1}

and φ B = k B T lnz B = +k B T α ln [ e β(eα µ)] In fact, because of the similarities of the formulae, we can group together the results for both fermions and bosons and write: Z = ( ) ±e β(e ± α µ) α and φ = k B T lnz = k B T α ln ( ±e β(eα µ)) where the upper sign is for fermions, and the lower sign is for boson systems. From partial derivatives of φ we can calculate S,p, N, as usual, since dφ = SdT pdv N dµ. We can also use the tricks with α and β to find U and N. Let s remember them, and check that they still hold. First, we replace βµ α everywhere this product appear. In terms of α and β, we have: where from normalization, By definition, p µstate = Z e βeµstate+αnµstate Z(α,β,...) = e βeµstate+αnµstate µstates U = µstatesp µstate E µstate = µstate e Z µstatese βeµstate+αnµstate = Z β Z(α,β,...) = β lnz(α,β,...) if, while taking the derivative, we pretend that α and β are independent variables. Similarly, N = µstatesp µstate N µstate = µstate e Z µstatesn βeµstate+αnµstate = Z α Z(α,β,...) = α lnz(α,β,...) So indeed, we have precisely the same formulae as before. This is because the only difference is what is meant by µstates. For classical systems that implies a sum over N and many integrals over all classical degrees of freedom; for quantum systems, this is a sum over all possible occupation numbers. But nothing in the derivation depended on such details. For our quantum system, after replacing βµ α, we have: lnz(α,β,...) = ± γ ln ( ±e βeγ+α) (again, upper sign for fermions, lower sign for bosons). I prefer to call the quantum numbers γ this time, since α = βµ is now taken. If we take the derivatives, we find that: U = β lnz(α,β,...) = (±) γ e γ e βeγ+α ±e βeγ+α = γ e γ e β(eγ µ) ± and N = α lnz(α,β,...) = (±) γ e βeγ+α ±e βeγ+α = γ e β(eγ µ) ± (we can go back to α βµ after we took the derivatives. Results are generally in terms of µ. 4

But, on the other hand, using the expressions for the number of particles and the energy of a microstate in terms of occupation numbers, we have:

    ⟨N⟩ = ⟨Σ_γ n_γ⟩ = Σ_γ ⟨n_γ⟩    and    U = ⟨Σ_γ e_γ n_γ⟩ = Σ_γ e_γ ⟨n_γ⟩

Comparing these with the equations above, we see that the average occupation number of level γ must be:

    ⟨n_γ⟩ = 1/(e^{β(e_γ−µ)} ± 1)

These are extremely important results, which will come up time and time again. So let's discuss them separately, in some detail. For fermions, the average occupation number of a level with energy e_γ is:

    ⟨n_γ⟩ = 1/(e^{β(e_γ−µ)} + 1)

This is called the Fermi-Dirac distribution. Let's analyze it a bit. First, since e^{β(e_γ−µ)} ≥ 0 no matter what values β, µ, e_γ have, it is clear that always 0 ≤ ⟨n_γ⟩ ≤ 1. This makes perfect sense: in any microstate, n_γ can only be 0 or 1, so its average must be a number between 0 and 1! At T = 0 (β → ∞), we see that if e_γ < µ then e^{β(e_γ−µ)} → 0 and so ⟨n_γ⟩ → 1. In other words, levels whose energy is below the chemical potential µ are certainly occupied at T = 0. If, however, e_γ > µ, then e^{β(e_γ−µ)} → ∞ and so ⟨n_γ⟩ → 0. Therefore, levels whose energy is above the chemical potential µ are certainly empty at T = 0. If the temperature is low but not zero, the occupation number will be somewhat changed for levels whose energy is within about k_B T of µ (see figure below). But levels with e_γ < µ − k_B T are still certainly occupied, and levels with e_γ > µ + k_B T are still certainly empty. It's just that within k_B T of µ, the transition from occupied to empty is no longer abrupt; instead, the average occupation numbers go continuously from 1 to 0.

[Figure 1 appears here]

Figure 1: Fermi-Dirac distribution, showing the average occupation number ⟨n(e)⟩ of a level of energy e, as a function of e, at T = 0 and at T > 0. There are no levels below e_GS. At T = 0, all levels between e_GS and µ are fully occupied, ⟨n⟩ = 1, while all levels above µ are completely empty, ⟨n⟩ = 0. At finite T, the average occupation numbers for energies roughly in the interval [µ − k_B T, µ + k_B T] are changed, and the decrease is now gradual.
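A minimal sketch of the Fermi-Dirac occupation (units with k_B = 1 and a hypothetical µ = 1 assumed), showing the sharp low-temperature step described above:

```python
import math

def fermi_dirac(e, mu, kT):
    """Average occupation <n> = 1/(e^{(e-mu)/kT} + 1) of a fermionic level."""
    return 1.0 / (math.exp((e - mu) / kT) + 1.0)

mu = 1.0
# cold system: levels well below mu are full, well above mu are empty
n_below = fermi_dirac(0.5, mu, kT=0.01)   # essentially 1
n_at_mu = fermi_dirac(1.0, mu, kT=0.01)   # exactly 1/2 at e = mu
n_above = fermi_dirac(1.5, mu, kT=0.01)   # essentially 0
```

Note that ⟨n⟩ = 1/2 at e = µ at any temperature; only the width of the crossover region (~k_B T) changes.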

For bosons, the average occupation number of a level with energy e_γ is:

    ⟨n_γ⟩ = 1/(e^{β(e_γ−µ)} − 1)

This is called the Bose-Einstein distribution. Let's analyze it a bit. First, remember that for bosons we must always have µ < e_GS, so e_γ − µ > 0. Now we see why this restriction is very necessary: it guarantees e^{β(e_γ−µ)} > 1, so the average occupation numbers are positive! They must be positive — the average of a number whose only allowed values are 0, 1, 2, ... cannot be negative. Unlike for fermions, however, we see that an average occupation number can now be anything between 0 and infinity.

In fact, let us consider the T → 0 behavior. Here we have two cases:

(i) µ < e_GS. In this case e^{β(e_γ−µ)} → ∞ as T → 0 (β → ∞), so all average occupation numbers become vanishingly small. Since ⟨N⟩ = Σ_γ ⟨n_γ⟩, if all ⟨n_γ⟩ → 0 then ⟨N⟩ → 0. This can happen if conditions are such that the particles would rather not stay in the system (they prefer to be in the bath at low temperatures). But this is a rather boring case.

(ii) The more interesting case is the limit µ → e_GS. In this case, the average occupation numbers still vanish for all higher-energy levels, but we see that the ground-state itself acquires an infinite occupation number! Clearly, that can't be quite right — indeed, we'll have to do this analysis more carefully when we study the so-called Bose-Einstein condensation. What this result tells us, though, is that for bosons, at T = 0, all particles that are in the system (however many they may be) occupy the ground-state orbital. This does make sense! We know that at T = 0 we expect the system to go into its ground-state. Of course, the lowest total energy is obtained when we place the particles in the orbitals with the lowest possible energies. For bosons, we are allowed to place all of them in the single-particle ground-state orbital, and that is indeed the lowest total energy possible.
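A minimal sketch of the Bose-Einstein occupation (units with k_B = 1 and toy values assumed), showing how the occupation of a level blows up as µ approaches its energy from below:

```python
import math

def bose_einstein(e, mu, kT):
    """Average occupation <n> = 1/(e^{(e-mu)/kT} - 1) of a bosonic level."""
    assert mu < e, "Bose-Einstein requires mu below the level energy"
    return 1.0 / (math.exp((e - mu) / kT) - 1.0)

e_GS = 1.0
n_far  = bose_einstein(e_GS, mu=0.0,    kT=0.1)   # mu well below e_GS: tiny
n_near = bose_einstein(e_GS, mu=0.9999, kT=0.1)   # mu -> e_GS: very large
```

Unlike the Fermi-Dirac case there is no upper bound: pushing µ toward e_GS makes ⟨n⟩ arbitrarily large, which is the seed of Bose-Einstein condensation.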
For fermions, we cannot put more than 1 particle in a state; therefore, to get the lowest total energy possible, we occupy all the lowest-energy states available with one particle each — which is precisely what the Fermi-Dirac distribution predicts at T = 0. So you see how the change of sign in the denominator leads to extremely different results!

You might now wonder how it is possible that a gas of bosonic atoms and a gas of fermionic atoms behave the same at high temperatures, given how differently they behave at low T. After all, we know that at high T we should have agreement with the classical predictions, e.g. find that pV = Nk_B T, etc. Since there is only one set of relationships for classical gases, it follows that both bosons and fermions should behave the same at high temperatures. For this to happen, clearly these average occupation numbers should also be equal at high temperatures, otherwise we would be able to tell the difference somehow. Interestingly enough, this is indeed what happens. What we will show a bit later on (you'll have to just believe me for now) is that at high temperature we must set µ to be an extremely large negative number, µ ≪ −k_B T, if we want to have a large number of particles in the box, N ~ 10^23. All one-particle levels e_γ = e_{n_x,n_y,n_z} in a box have positive energies, and therefore now β(e_γ − µ) ≫ 1 ⇒ e^{β(e_γ−µ)} ≫ 1 ⇒ e^{β(e_γ−µ)} ± 1 ≈ e^{β(e_γ−µ)}. So the sign doesn't make any difference anymore, and at high temperatures we find, both for fermions and bosons, that:

    ⟨n_γ⟩ ≈ e^{−β(e_γ−µ)}

As we will show soon, in this limit we indeed find agreement with the expected classical results.
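This classical limit is easy to check numerically (toy values assumed): with µ very negative compared to k_B T, the Fermi-Dirac and Bose-Einstein occupations both collapse onto the Boltzmann factor e^{−β(e−µ)}.

```python
import math

def fd(e, mu, kT):
    return 1.0 / (math.exp((e - mu) / kT) + 1.0)

def be(e, mu, kT):
    return 1.0 / (math.exp((e - mu) / kT) - 1.0)

def boltzmann(e, mu, kT):
    return math.exp(-(e - mu) / kT)

e, kT = 1.0, 1.0
mu = -20.0                      # mu << -k_B T: the classical regime
nf = fd(e, mu, kT)
nb = be(e, mu, kT)
nc = boltzmann(e, mu, kT)
# nf < nc < nb always, but here all three agree to many decimal places
```

The ±1 in the denominator is an O(1) correction to a factor of order e^{β(e−µ)} ≈ 10^9 here, which is why the statistics become indistinguishable.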