Theoretical Statistical Physics


Theoretical Statistical Physics

Prof. Dr. Christof Wetterich
Institute for Theoretical Physics, Heidelberg University

Last update: March 25, 2014

Script prepared by Robert Lilow, based on an earlier students' script by Birgir Ásgeirsson, Denitsa Dragneva and Emil Pavlov. For corrections and improvement suggestions, please send a mail to lilow@stud.uni-heidelberg.de.


Contents

Part I. Foundations of Statistical Physics

1. Introduction
   1.1. From Microphysics to Macrophysics
   1.2. Macroscopic laws and simplicity
   1.3. Fundamental laws are of probabilistic nature

2. Basic concepts of statistical physics
   2.1. Probability distribution and expectation values: states (microstates), probability distribution, observables, expectation value, dispersion, expectation values and probability distribution
   2.2. Simple examples: two-state system, four-state system
   2.3. Reduced system or subsystem
   2.4. Probabilistic observables
   2.5. Quantum statistics and density matrix: expectation value, dispersion, diagonal operators, properties of the density matrix, non-diagonal operators, pure quantum states, change of the basis by unitary transformation, pure and mixed states
   2.6. Microstates
   2.7. Partition function
   2.8. Continuous distribution and continuous variables
   2.9. Probability density of energy: equal probability for all values τ, Boltzmann distribution for microstates τ, number of one-particle states in a momentum interval
   2.10. Correlation functions: conditional probability p(λ_B|λ_A), product observable BA, correlation function, independent observables, classical correlation function, quantum correlations

3. Statistical description of many-particle systems
   Chain with N lattice sites: probability distribution, macroscopic probability, expectation value, dispersion, uncorrelated probability distribution; N spins on a chain; correlation functions; independent measurement; statistical ensembles; Gaussian distribution; thermodynamic limit

4. Equilibrium Ensembles
   Microcanonical ensemble: fundamental statistical postulate, microcanonical partition function, entropy S, partition function and entropy of the ideal classical gas, thermodynamic limit, derivatives of the entropy
   Canonical ensemble: system in a heat bath, canonical partition function and Boltzmann factor, relation between temperature and energy, energy distribution, equivalence of microcanonical and canonical ensemble
   Temperature: temperature and energy distribution in subsystems, thermal equilibrium
   Grand canonical ensemble
   Relations for entropy: entropy in the canonical and grand canonical ensemble, general expression for entropy, entropy and density matrix

5. Thermodynamics
   Thermodynamic potentials: state variables, equations of state, thermodynamic potentials and partition functions, differential relations, Legendre transform
   Properties of entropy: zero of absolute temperature, 3rd law of thermodynamics
   Reversible and irreversible processes: subsystems and constraints, increase of entropy by removing constraints, 2nd law of thermodynamics, reversible processes for isolated and open systems, thermodynamics and non-equilibrium states

Part II. Statistical Systems

6. Ideal quantum gas
   Occupation numbers and functional integral; partition function for the grand canonical ensemble
   Mean occupation numbers: bosons, fermions, mean energy, chemical potential, Boltzmann statistics
   Photon gas: black body radiation, spectrum of black body radiation, energy density, canonical partition function, free energy and equation of state
   6.5. Degenerate Fermi gas: quasiparticles, the limit T → 0, computation of the Fermi energy, thermodynamics, ground state energy, one-particle density of states Ω₁(ε) for free non-relativistic fermions, E(T, N) for arbitrary temperature, smoothing of the Fermi surface, computation of µ(T) from the normalisation condition, occupation numbers for particles and holes
   Non-relativistic bosons: chemical potential, number of bosons in the ground state, qualitative behaviour of n(E_ν), Gibbs potential
   Classical ideal gas: classical approximation, Gibbs potential, thermal de Broglie wavelength, thermodynamics, validity of the approximation for a dilute (classical) gas
   Bose-Einstein condensation: n(µ), critical temperature, ground state of the bosonic gas, Bose-Einstein condensation

7. Gas of interacting particles
   Interactions and complexity; real gases
   Mean field approximation: classical statistics in phase space, average potential (mean field), expansion of the canonical partition function, computation of Z_ww^(1), excluded volume, mean potential
   Free energy; equation of state; specific heat of real gases
   7.6. Equipartition theorem

8. Magnetic systems
   Magnetization: degrees of freedom, Hamiltonian, thermodynamic potential, effective potential, minimal principle
   Spontaneous symmetry breaking: symmetries, vanishing magnetic field, domains, general form of U
   Landau theory and phase transition: quartic expansion of U, phase transition for j = 0, discontinuity in magnetization for non-zero B, magnetic susceptibility

9. Phase transitions
   Order of phase transition: first order phase transition, second order phase transition, probabilistic interpretation
   Liquid-gas phase transition: effective potential, first order transition, compressibility, chemical potential
   Phase diagram: vapor pressure curve, p-T diagram, Clausius-Clapeyron relation, latent heat, superheating and supercooling

10. Critical Phenomena
   Critical exponents: specific heat, order parameter as function of temperature, susceptibility, order parameter at critical temperature as function of the source, mean field exponents, universal critical exponents
   Correlation functions: correlation length, correlation function and generating functional, correlation function and susceptibility, critical exponent of the correlation length and anomalous dimension, critical opalescence
   Universality: macroscopic laws for second order phase transitions, Ising model in one dimension, Ising model in d > 1 dimensions, renormalization flow, fixed points

Part I. Foundations of Statistical Physics


1. Introduction

1.1. From Microphysics to Macrophysics

Fundamental laws for the basic interactions do not explain everything, e. g. the description of macroscopic objects consisting of many particles (solids, fluids, gases). A typical number of particles is Avogadro's number,

    N_A ≈ 6.022 · 10²³,

the number of molecules per mol, e. g. the number of water molecules in 18 g of water. Typical questions are:

- connections between pressure (p), volume (V) and temperature (T) of gases (equation of state)
- specific heat capacity of a material
- conductivity of a solid, colour of a fluid
- connection between magnetization and magnetic field
- melting of ice
- How does a refrigerator work?
- How does a supernova explode?
- energy budget of the Earth's atmosphere
- diffusion through a membrane in a plant
- particle beam in a high energy collider

The typical area of application is systems with many degrees of freedom, i. e. macroscopic objects.

For example a water droplet: Consider an isolated system, a droplet of water with volume V and number of particles N. A classical mechanical description needs 6N initial conditions for the 6N degrees of freedom. The distance between two trajectories which are initially close to each other grows exponentially with time. Since the initial state can never be prepared (known) with arbitrary precision, there is a limit for predictability (weather forecast).

Try deterministic classical mechanics as a description of a water droplet for one second: one would need more information than can be stored in the entire universe, even if one stores one bit in every volume of Planck size (10⁻³⁴ m)³.

The goal of statistical physics is to develop laws for the macrostates by deriving them from the microphysics with the help of statistical methods.

1.2. Macroscopic laws and simplicity

- qualitatively new laws, new concepts, e. g. entropy
- statistical laws are very exact, relative fluctuations ∼ 1/√N
- predictive power only for systems with many degrees of freedom; uncertainty about the behaviour of single atoms (quantum physics), but precise predictions about the whole system
- thermodynamics, material properties, non-equilibrium processes

Scales for macroscopic systems:

- L: mean distance between particles in fluids and solids, ∼ 10⁻⁷ cm, with N ≫ 1 particles
- t: timescale of the microscopic motions (motion of an atom; motion of an electron in solids)

1.3. Fundamental laws are of probabilistic nature

Statistical physics plays a huge role in the edifice of physics: it is the foundation of modern physics! There are two major strategies in physics, as illustrated in Figure 1.1:

1. Fundamental theory: systems with many (infinitely many) degrees of freedom; QFT and string theory (for the scattering of two particles) are formulated as statistical theories.
2. The role of statistics in understanding complex systems is obvious.

Figure 1.1.: The two major strategies in physics: (1) abstraction towards the fundamental laws, (2) understanding of the complexity

To calculate the predictions which follow from the fundamental theory, one uses methods of statistical physics, just like in this lecture. Basic laws of physics are statistical in nature! They make statements about:

- the probability that, when measuring an observable A once, one gets the value A_i
- the conditional probability for two measurements: if one measures a value A_i for an observable A, what is the probability to measure the value B_j for a second observable B?

Remarks

1. From microphysics to microphysics: from the Planck scale (10¹⁹ GeV) to the energy scale of the LHC (1000 GeV) there are 16 orders of magnitude.
2. Universality: one does not need to consider all the details of the microscopic laws; this underlies the predictive power of QFT!
3. Conceptual revolution due to statistical physics. Modern physics: Planck, Boltzmann, Einstein.
4. Determinism is replaced by a probabilistic theory, which nevertheless has predictive power.


2. Basic concepts of statistical physics

2.1. Probability distribution and expectation values

States (microstates)

We will denote states by τ and the number of states by Ω. In the beginning, we will examine only a finite number of states, Ω < ∞.

Probability distribution

To every state τ we associate a probability p_τ, the probability that this state is realized:

    τ → p_τ . (2.1)

The probabilities p_τ have to obey two conditions:

    p_τ ≥ 0   (positivity), (2.2)
    Σ_τ p_τ = 1   (normalization). (2.3)

The range of p_τ is therefore

    0 ≤ p_τ ≤ 1. (2.4)

The probability distribution {p_τ} = {p₁, p₂, p₃, …} is an ordered sequence of Ω numbers.

Observables

Capital roman letters are used for observables, e. g. A. A classical observable A has a fixed value A_τ in every state τ:

    τ → A_τ ∈ Spec A, (2.5)

where the spectrum Spec A of an observable is the set of all possible measurement values {λ_A}, λ_A ∈ ℝ. The probability to measure a certain value λ_A is given by

    p_{λ_A} = Σ_{τ : A_τ = λ_A} p_τ , (2.6)

where one sums over all states for which A_τ = λ_A. A linear combination of two classical observables A and B is again a classical observable:

    C = f_A A + f_B B   with f_A, f_B ∈ ℝ, (2.7)
    C_τ = f_A A_τ + f_B B_τ . (2.8)

And a function of A is an observable, too:

    (f(A))_τ = f(A_τ). (2.9)

Expectation value

The expectation value of an observable is defined as

    ⟨A⟩ = Σ_τ p_τ A_τ . (2.10)

Dispersion

The dispersion is given by

    ΔA² = Σ_τ p_τ (A_τ − ⟨A⟩)² (2.11)
        = Σ_τ p_τ (A_τ² − 2 A_τ ⟨A⟩ + ⟨A⟩²) (2.12)
        = Σ_τ p_τ A_τ² − ⟨A⟩² (2.13)
        = ⟨A²⟩ − ⟨A⟩². (2.14)

Expectation values and probability distribution

The expectation values of a complete set of observables determine the probability distribution {p_τ} uniquely. Consider Ω observables A^(σ) with A^(σ)_τ = δ_στ, as illustrated in Table 2.1. Then,

    ⟨A^(σ)⟩ = Σ_τ p_τ A^(σ)_τ = p_σ . (2.15)

Table 2.1.: A system with A^(σ)_τ = δ_στ

    τ    A^(1)   A^(2)   A^(3)   …
    1    1       0       0
    2    0       1       0
    3    0       0       1
    ⋮

The observables A^(σ) are called projectors. They can be considered as a basis for observables: all classical observables can be expressed as linear combinations of projector observables. An association of the A^(σ) with possible events (i. e. that state σ is realized) leads to the Kolmogorov axioms for probabilities.
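As a quick numerical check (not part of the script), the defining relations (2.10), (2.14) and the projector property (2.15) can be verified for a small made-up distribution; the values of p and A below are arbitrary:

```python
# Hypothetical three-state distribution and classical observable.
p = [0.5, 0.3, 0.2]          # probability distribution {p_tau}
A = [1.0, -1.0, 2.0]         # observable values A_tau

mean_A = sum(p_t * A_t for p_t, A_t in zip(p, A))                # (2.10)
disp_A = sum(p_t * (A_t - mean_A) ** 2 for p_t, A_t in zip(p, A))  # (2.11)
# (2.14): the dispersion equals <A^2> - <A>^2
mean_A2 = sum(p_t * A_t ** 2 for p_t, A_t in zip(p, A))
assert abs(disp_A - (mean_A2 - mean_A ** 2)) < 1e-12

# Projectors A^(sigma)_tau = delta_{sigma tau}: their expectation
# values recover the probabilities, eq. (2.15).
for sigma in range(3):
    proj = [1.0 if tau == sigma else 0.0 for tau in range(3)]
    assert abs(sum(p_t * P_t for p_t, P_t in zip(p, proj)) - p[sigma]) < 1e-12
```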

2.2. Simple examples

Two-state system

Typical two-state systems would be a coin, a spin, occupied / unoccupied states or computer bits. States, probabilities and observables in these are usually denoted by:

    τ:   ↑, ↓   or 1, 2   or 1, 0
    p_τ: p↑, p↓   or p₁, p₂ ,   with p₁ + p₂ = 1
    A:   number up minus number down, spin, particle number

As an example, let us define the observable A with A₁ = 1 for the state ↑ and A₂ = −1 for the state ↓. Then we can express the spin in z-direction in terms of A:

    S_z = (ħ/2) A. (2.16)

The particle number is given by N = ½ (A + 1), which corresponds to N₁ = 1 and N₂ = 0. The expectation values of A and N are

    ⟨A⟩ = p₁ A₁ + p₂ A₂ (2.17)
        = p₁ − (1 − p₁) (2.18)
        = 2 p₁ − 1, (2.19)

    ⟨N⟩ = ½ (⟨A⟩ + 1) (2.20)
        = p₁ N₁ + p₂ N₂ (2.21)
        = p₁ . (2.22)

Their dispersions are easily calculated. For A we first need

    ⟨A²⟩ = p₁ A₁² + p₂ A₂² (2.23)
         = p₁ + (1 − p₁) (2.24)
         = 1, (2.25)

where we used A₁² = A₂² = 1. Then,

    ΔA² = ⟨A²⟩ − ⟨A⟩² (2.26)
        = 1 − (2p₁ − 1)² (2.27)
        = 4 p₁ (1 − p₁), (2.28)

and therefore

    ΔA² = 0   for p₁ = 0 or p₁ = 1. (2.29)

And for N accordingly:

    ⟨N²⟩ = 1² · p₁ + 0² · p₂ = p₁ , (2.30)
    ΔN² = p₁ (1 − p₁). (2.31)
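The closed forms (2.19), (2.28) and (2.31) can be checked by direct enumeration; a minimal sketch over a few arbitrary values of p₁:

```python
# Two-state system: A = (1, -1), N = (1, 0); verify the closed forms.
for p1 in [0.0, 0.25, 0.5, 1.0]:
    p2 = 1.0 - p1
    A1, A2 = 1.0, -1.0
    N1, N2 = 1.0, 0.0
    mean_A = p1 * A1 + p2 * A2
    assert abs(mean_A - (2 * p1 - 1)) < 1e-12            # (2.19)
    disp_A = p1 * A1 ** 2 + p2 * A2 ** 2 - mean_A ** 2
    assert abs(disp_A - 4 * p1 * (1 - p1)) < 1e-12       # (2.28)
    mean_N = p1 * N1 + p2 * N2
    disp_N = p1 * N1 ** 2 + p2 * N2 ** 2 - mean_N ** 2
    assert abs(disp_N - p1 * (1 - p1)) < 1e-12           # (2.31)
```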

Four-state system

An example of a four-state system is two particles, each with a spin in two possible directions. The different states are listed in Table 2.2. We can now introduce an observable called the total spin, A. Other possible observables are the spin of the first particle, B, and the spin of the second particle, C. The values of these observables are given in Table 2.3.

Table 2.2.: States τ in a four-state system

    τ:   1 = ↑↑,   2 = ↑↓,   3 = ↓↑,   4 = ↓↓

Table 2.3.: Observables in a four-state system

    τ    A_τ (total spin)    B_τ (spin 1)    C_τ (spin 2)
    1    2                   1               1
    2    0                   1               −1
    3    0                   −1              1
    4    −2                  −1              −1

For the particular case of equal probabilities for all states, p₁ = p₂ = p₃ = p₄ = ¼, we can compute the expectation values

    ⟨A⟩ = ⟨B⟩ = ⟨C⟩ = 0, (2.32)
    ⟨B²⟩ = ⟨C²⟩ = 1,   ⟨A²⟩ = 2, (2.33)

and the dispersions

    ΔB² = ΔC² = 1,   ΔA² = 2. (2.34)

2.3. Reduced system or subsystem

For most parts of this lecture the basic concepts of section 2.1, i. e. classical statistics, are sufficient. However, we want to describe quantum systems, but often use classical statistics. This needs justification! The following parts of chapter 2 motivate why this is possible for many purposes. The basic observation is that classical statistics and quantum statistics have much in common: many quantum features appear when one considers subsystems of classical statistical systems.

When only part of the information in {p_τ} is needed or available, we can integrate out the redundant information, which results in a reduced probability distribution.

Examples

1. Consider the previous example of a four-state system again. Suppose we only have information about spin 1. Then, we can combine the probabilities for states with the

same B_τ and arbitrary C_τ. As a consequence, the system turns into an effective two-state system. Here, τ̄ = 1 groups together the microscopic states τ = 1 and τ = 2, in which B_τ has the same value B_τ = 1. Similarly, τ̄ = 2 groups together τ = 3 and τ = 4 with B_τ = −1. The states of this reduced ensemble are illustrated in Table 2.4. The probabilities of the subsystem states are

    p̄₁ = p₁ + p₂ , (2.35)
    p̄₂ = p₃ + p₄ . (2.36)

Using B̄₁ = 1 and B̄₂ = −1, which are the classical observables in the subsystem, the expectation value of B can be computed from the subsystem:

    ⟨B⟩ = p̄₁ B̄₁ + p̄₂ B̄₂ (2.37)
        = p̄₁ − p̄₂ (2.38)
        = p₁ + p₂ − p₃ − p₄ . (2.39)

As it should be, this is the same as computed from the underlying microscopic system:

    ⟨B⟩ = Σ_{τ=1}^{4} B_τ p_τ . (2.40)

In this case, B is a system observable, which means that the information in the subsystem is enough to compute the distribution of its measurement values. In contrast, A and C are environment observables.

2. Consider the case where A is the system observable of the subsystem. Then the system becomes an effective three-state system with

    Ā₁ = 2,   Ā₂ = 0,   Ā₃ = −2. (2.41)

The probabilities for these states are

    p̄₁ = p₁ ,   p̄₂ = p₂ + p₃ ,   p̄₃ = p₄ . (2.42)

State 2 of the subsystem groups together the states 2 and 3 of the microsystem, which both have A_τ = 0. The states 1 and 3 of the subsystem are identical to the microstates 1 and 4 with A_τ = 2 and A_τ = −2, respectively. We will continue examining this example in the next section.
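A sketch of the reduction, using hypothetical microscopic probabilities p_τ; both the two-state reduction (2.35)-(2.40) and the three-state reduction (2.41)-(2.42) reproduce the microscopic expectation values:

```python
# Four-state microsystem with made-up probabilities p_tau.
p = [0.4, 0.1, 0.3, 0.2]
B = [1, 1, -1, -1]                       # spin 1 in each microstate

# Two-state subsystem, eqs. (2.35), (2.36)
pbar = [p[0] + p[1], p[2] + p[3]]
Bbar = [1, -1]
mean_B_sub = sum(pb * Bb for pb, Bb in zip(pbar, Bbar))     # (2.37)
mean_B_micro = sum(pt * Bt for pt, Bt in zip(p, B))         # (2.40)
assert abs(mean_B_sub - mean_B_micro) < 1e-12

# Three-state subsystem grouped by total spin A, eqs. (2.41), (2.42)
A = [2, 0, 0, -2]
pbar3 = [p[0], p[1] + p[2], p[3]]
Abar3 = [2, 0, -2]
assert abs(sum(pb * Ab for pb, Ab in zip(pbar3, Abar3))
           - sum(pt * At for pt, At in zip(p, A))) < 1e-12
```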

Table 2.4.: States of the reduced ensemble τ̄ in example 1

    τ̄ = 1: spin 1 up (microstates 1, 2),   τ̄ = 2: spin 1 down (microstates 3, 4)

2.4. Probabilistic observables

In every state τ a classical observable has a fixed value A_τ. Is this the most general case? No; more general is the concept of probabilistic observables. In every state τ a probabilistic observable takes its possible measurement values λ_B with certain probabilities w_τ(λ_B).

Examples

1. Water on earth: Divide all samples of H₂O molecules into three states (using some criterion):

    τ = 1: solid (ice)
    τ = 2: liquid (water)
    τ = 3: gas (vapor)

So, p_τ are the probabilities to find water in the form of solid, liquid or gas. One would like to determine the average density of water on earth, and its dispersion. Therefore, one needs the probability distribution of the density in each of the three states, w₁(n), w₂(n), w₃(n). (The distributions depend on the state, but in each state the density has no fixed value.) Then, the average density is given by

    ⟨n⟩ = p₁ ∫dn n w₁(n) + p₂ ∫dn n w₂(n) + p₃ ∫dn n w₃(n) (2.43)
        = p₁ n̄₁ + p₂ n̄₂ + p₃ n̄₃ , (2.44)

where

    n̄_τ = ∫dn n w_τ(n) (2.45)

is the mean value in state τ. We use normalized probability distributions, ∫dn w_τ(n) = 1.

The motivation for these concepts is to characterize microstates by both density and state of water. So, we have microstates (n, τ) with probability distribution P(n, τ), obeying

    Σ_τ ∫dn P(n, τ) = 1. (2.46)

In this setting, n is a classical observable with a fixed value in each state (n, τ). It is

    ⟨n⟩ = Σ_τ ∫dn P(n, τ) n, (2.47)
    ⟨n²⟩ = Σ_τ ∫dn P(n, τ) n². (2.48)

Define the probabilities for the states τ of the reduced system,

    p_τ = ∫dn P(n, τ), (2.49)

and the probabilities to find n in a state τ,

    w_τ(n) = P(n, τ) / p_τ . (2.50)

Their normalization is given by

    ∫dn w_τ(n) = 1. (2.51)

Then,

    P(n, τ) = p_τ w_τ(n), (2.52)

and we can rewrite (2.47) and (2.48) as

    ⟨n⟩ = Σ_τ p_τ ∫dn w_τ(n) n = Σ_τ p_τ n̄_τ , (2.53)
    ⟨n²⟩ = Σ_τ p_τ ∫dn w_τ(n) n² = Σ_τ p_τ (n̄²)_τ . (2.54)

One finds (n̄²)_τ ≥ (n̄_τ)² due to the fluctuations of n in state τ. Therefore,

    ⟨n²⟩ ≥ Σ_τ p_τ (n̄_τ)². (2.55)

For ⟨n⟩ the specification of n̄_τ for every state is sufficient (at this point there is no distinction from a classical observable), but ⟨n²⟩ needs additional information about the distribution of the values n in each state τ. Thus, n is a probabilistic observable on the level of the states τ.

2. Subsystem of states with total spin: Probabilistic observables typically occur for subsystems. For example, we can identify the two-spin system with the microstates (n, τ) of water. Then the three states τ̄ characterized by the total spin A in subsection 2.2.2,

    Ā₁ = 2,   Ā₂ = 0,   Ā₃ = −2, (2.56)

are analogous to the states ice, water, vapor in the first example.

For this subsystem the first spin B is a probabilistic observable. As B̄₁ = 1 and B̄₃ = −1, B behaves as a classical observable in the states τ̄ = 1, 3. But what about B in the state τ̄ = 2? In that case, there is no fixed value of B. Instead, we have the probabilities

    w₂(1) = p₂ / (p₂ + p₃)   to find B = 1, (2.57)
    w₂(−1) = p₃ / (p₂ + p₃)   to find B = −1. (2.58)

The mean value of B in the state τ̄ = 2 is

    B̄₂ = w₂(1) · 1 + w₂(−1) · (−1) = (p₂ − p₃)/(p₂ + p₃). (2.59)

Let us check this:

    ⟨B⟩ = p̄₁ B̄₁ + p̄₂ B̄₂ + p̄₃ B̄₃ (2.60)
        = p₁ + (p₂ + p₃) · (p₂ − p₃)/(p₂ + p₃) − p₄ (2.61)
        = p₁ + p₂ − p₃ − p₄ ,   q. e. d. (2.62)

However,

    (B̄²)₂ = 1, (2.63)

which differs from (B̄₂)²! Thus,

    ⟨B²⟩ ≠ Σ_{τ̄=1}^{3} p̄_τ̄ (B̄_τ̄)². (2.64)

This is a clear sign of a probabilistic observable. Probabilistic observables need additional information about the probability distribution of the measurement values in a given state, which we have sketched in Figure 2.1. So, we conclude:

    B environment observable  --(additional information)-->  B̄ system observable. (2.65)

We have listed all the observables that we have just computed in Table 2.5.

3. Quantum mechanics: Consider two-state quantum mechanics:

    τ = 1: ↑,   S̄_z = 1, (2.66)
    τ = 2: ↓,   S̄_z = −1, (2.67)

where S_k = (ħ/2) S̄_k.

Figure 2.1.: Probability distribution of the probabilistic variable B in a given state τ

Table 2.5.: Observables in the reduced system in example 2

    τ̄    p̄_τ̄        Ā_τ̄    B̄_τ̄
    1    p₁          2      1
    2    p₂ + p₃     0      (p₂ − p₃)/(p₂ + p₃)
    3    p₄          −2     −1

The additional observable S_x has no fixed value in the states τ, only probabilities to find S_x = ±1:

    w₁(1) = a,   w₁(−1) = 1 − a, (2.68)
    w₂(1) = b,   w₂(−1) = 1 − b, (2.69)
    (S̄_x)₁ = 2a − 1,   (S̄_x)₂ = 2b − 1. (2.70)

Its expectation value is given by

    ⟨S̄_x⟩ = p₁ (2a − 1) + p₂ (2b − 1) (2.71)
          = 2 p₁ a + 2 p₂ b − 1. (2.72)

So,

    −1 ≤ ⟨S̄_x⟩ ≤ 1, (2.73)

where ⟨S̄_x⟩ = 1 for a = b = 1 and ⟨S̄_x⟩ = −1 for a = b = 0.

2.5. Quantum statistics and density matrix

Expectation value

In quantum statistics the expectation value is computed as

    ⟨A⟩ = tr(ρ Â), (2.74)

where Â is the operator associated with A and ρ is the density matrix. Let us look at the particular case of a pure state. For pure states the density matrix can be expressed as

    ρ_στ = ψ_σ ψ*_τ , (2.75)

where ψ is a complex vector. Then we find

    ⟨ψ|Â|ψ⟩ = ψ*_τ Â_τσ ψ_σ (2.76)
            = ψ_σ ψ*_τ Â_τσ (2.77)
            = ρ_στ Â_τσ (2.78)
            = tr(ρ Â). (2.79)

Dispersion

Accordingly, the dispersion is given by

    ⟨A²⟩ = tr(ρ Â²), (2.80)
    ΔA² = ⟨A²⟩ − ⟨A⟩². (2.81)

Diagonal operators

When Â is diagonal,

    Â = diag(A₁, …, A_N), (2.82)
    Â_τσ = A_τ δ_τσ , (2.83)

then the expectation value is just

    ⟨A⟩ = tr(ρ Â) (2.84)
        = Σ_{τ,σ} ρ_στ Â_τσ (2.85)
        = Σ_τ ρ_ττ A_τ . (2.86)

So, one only needs the diagonal elements of the density matrix. They can be associated with probabilities p_τ = ρ_ττ, where ρ_ττ ≥ 0. This way we recover classical statistics:

    ⟨f(A)⟩ = Σ_τ p_τ f(A_τ). (2.87)

Example

A two-state system: Let

    Â = (A₁ 0; 0 A₂)   and   ρ = (ρ₁₁ ρ₁₂; ρ*₁₂ ρ₂₂).

Then:

    ρ Â = (ρ₁₁ ρ₁₂; ρ*₁₂ ρ₂₂)(A₁ 0; 0 A₂) = (A₁ ρ₁₁  A₂ ρ₁₂; A₁ ρ*₁₂  A₂ ρ₂₂), (2.88)
    tr(ρ Â) = A₁ ρ₁₁ + A₂ ρ₂₂ , (2.89)
    ⟨A⟩ = Σ_τ ρ_ττ A_τ = Σ_τ p_τ A_τ . (2.90)

Properties of the density matrix

The density matrix ρ is an N × N matrix with the following properties:

1. Hermiticity: ρ† = ρ, so all eigenvalues λ_n are real
2. Positivity: λ_n ≥ 0 for all n
3. Normalization: tr ρ = 1, so Σ_n λ_n = 1 and 0 ≤ λ_n ≤ 1 for all n

Diagonal elements of ρ define the probability distribution

From the properties above it follows that ρ_ττ ≥ 0 and Σ_τ ρ_ττ = 1. So, {ρ_ττ} has all the properties of a probability distribution.

Example

Quantum mechanical two-state system: In this case, the density matrix ρ is a complex 2 × 2 matrix of the form

    ρ = (ρ₁₁ ρ₁₂; ρ*₁₂ 1 − ρ₁₁), (2.91)

with ρ₁₁ ∈ ℝ, 0 ≤ ρ₁₁ ≤ 1. Consider the observable

    Â = (ħ/2) (1 0; 0 −1). (2.92)

We can find its expectation value:

    ⟨A⟩ = (ħ/2) tr[(1 0; 0 −1)(ρ₁₁ ρ₁₂; ρ*₁₂ 1 − ρ₁₁)] (2.93)
        = (ħ/2) tr(ρ₁₁ ρ₁₂; −ρ*₁₂ −(1 − ρ₁₁)) (2.94)
        = (ħ/2)(2ρ₁₁ − 1). (2.95)

From classical statistics we have ⟨A⟩ = p₁ A₁ + p₂ A₂. If we set p₁ = ρ₁₁, p₂ = 1 − ρ₁₁, A₁ = ħ/2, A₂ = −ħ/2, we get the same result (2.95).
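The equivalence (2.88)-(2.90) of tr(ρ Â) with the classical sum can be checked numerically; the matrix entries below are arbitrary example values (with ħ = 1):

```python
# Two-state density matrix (hermitian, trace 1) and a diagonal observable.
rho = [[0.7, 0.2 + 0.1j],
       [0.2 - 0.1j, 0.3]]
A1, A2 = 0.5, -0.5                    # e.g. A = (hbar/2) diag(1, -1), hbar = 1
Ahat = [[A1, 0.0], [0.0, A2]]

# Matrix product rho * Ahat, then the trace, eq. (2.89).
prod = [[sum(rho[i][k] * Ahat[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
trace = prod[0][0] + prod[1][1]
# Classical statistics with p_tau = rho_tata, eq. (2.90).
classical = rho[0][0] * A1 + rho[1][1] * A2
assert abs(trace - classical) < 1e-12
```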

Figure 2.2.: Positivity condition for the two-state density matrix (plot of ρ₁₁(1 − ρ₁₁) versus ρ₁₁)

Positivity constraints for the elements of ρ

To determine the eigenvalues λ_n of the density matrix (2.91), we have to solve

    0 = (ρ₁₁ − λ)(1 − ρ₁₁ − λ) − f (2.96)
      = λ² − λ + ρ₁₁(1 − ρ₁₁) − f, (2.97)

where

    f = |ρ₁₂|². (2.98)

We find

    λ₁,₂ = ½ (1 ± √(1 − 4[ρ₁₁(1 − ρ₁₁) − f])). (2.99)

Using the positivity condition for the eigenvalues, λ₁,₂ ≥ 0, we arrive at

    ρ₁₁(1 − ρ₁₁) − f ≥ 0, (2.100)

and therefore

    ρ₁₁(1 − ρ₁₁) ≥ 0. (2.101)

The full positivity condition for the two-state density matrix is

    f = |ρ₁₂|² ≤ ρ₁₁(1 − ρ₁₁). (2.102)

In Figure 2.2 we have plotted ρ₁₁(1 − ρ₁₁). From the positivity condition it follows that |ρ₁₂|² can only take values on or below this curve.

Non-diagonal operators

One may ask why we need the information in ρ₁₂ at all, as for the expectation value of a diagonal operator Â only the diagonal elements enter, ⟨A⟩ = Σ_τ ρ_ττ Â_ττ. The answer is that ρ₁₂ carries additional information for the probabilistic observables.
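A small numerical sketch (not part of the script) of the positivity analysis (2.96)-(2.102), scanning random trial values of ρ₁₁ and ρ₁₂:

```python
import math
import random

# Random trial matrices of the form (2.91): the eigenvalues (2.99) are
# non-negative exactly when the condition (2.102) holds.
random.seed(1)
for _ in range(1000):
    rho11 = random.random()
    re, im = random.uniform(-1, 1), random.uniform(-1, 1)
    f = re * re + im * im                        # f = |rho12|^2, (2.98)
    disc = 1 - 4 * (rho11 * (1 - rho11) - f)     # discriminant in (2.99)
    lam1 = 0.5 * (1 + math.sqrt(disc))
    lam2 = 0.5 * (1 - math.sqrt(disc))
    assert abs(lam1 + lam2 - 1.0) < 1e-12        # trace normalization
    g = rho11 * (1 - rho11) - f
    if g > 1e-9:            # (2.102) satisfied -> both eigenvalues >= 0
        assert lam2 >= -1e-12
    elif g < -1e-9:         # (2.102) violated -> a negative eigenvalue
        assert lam2 < 0
```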

Table 2.6.: Mean values of spin observables in the states |1⟩ and |2⟩

    state   (S̄_z)    (S̄_x)           (S̄_x²), (S̄_z²)
    |1⟩     1        ρ₁₂ + ρ*₁₂      1
    |2⟩     −1       ρ₁₂ + ρ*₁₂      1

Example

Spin in an arbitrary direction, S_i = (ħ/2) S̄_i: The spin components S_z and S_x are different types of observables.

S_z: classical observable,

    (S̄_z)₁ = 1,   (S̄_z²)₁ = 1, (2.103)
    (S̄_z)₂ = −1,   (S̄_z²)₂ = 1. (2.104)

S_x: probabilistic observable. There is a probability distribution for finding the values +1 or −1 in the states |1⟩ and |2⟩:

    (S̄_x)₁ = (S̄_x)₂ = ρ₁₂ + ρ*₁₂ . (2.105)

But in both states the square has a fixed value:

    (S̄_x²)₁ = (S̄_x²)₂ = 1. (2.106)

For a better overview, we have listed these observables in Table 2.6. The probabilities to find S_x = ±1 in the state |1⟩ are given by

    w₁(1) = ½ (1 + ρ₁₂ + ρ*₁₂), (2.107)
    w₁(−1) = ½ (1 − ρ₁₂ − ρ*₁₂), (2.108)

such that

    w₁(1) + w₁(−1) = 1. (2.109)

With these probabilities we arrive at

    (S̄_x)₁ = w₁(1) − w₁(−1) = ρ₁₂ + ρ*₁₂ . (2.110)

Both w₁(1) and w₁(−1) have to be positive, which can be inferred from the condition

    |ρ₁₂ + ρ*₁₂| ≤ 1, (2.111)

or equivalently

    |Re ρ₁₂| ≤ ½. (2.112)

This follows from (2.102):

    |Re ρ₁₂| = (|ρ₁₂|² − (Im ρ₁₂)²)^{1/2} (2.113)
            ≤ (ρ₁₁(1 − ρ₁₁) − (Im ρ₁₂)²)^{1/2} (2.114)
            ≤ (ρ₁₁(1 − ρ₁₁))^{1/2} (2.115)
            ≤ ½. (2.116)

By similar calculations we find the same results for w₂(1) and w₂(−1).

We shall check whether classical statistics (with probabilistic observables) and quantum statistics produce discrepancies.

Classical:

    ⟨S̄_x⟩ = p₁ (S̄_x)₁ + p₂ (S̄_x)₂ (2.117)
          = ρ₁₁ (ρ₁₂ + ρ*₁₂) + (1 − ρ₁₁)(ρ₁₂ + ρ*₁₂) (2.118)
          = ρ₁₂ + ρ*₁₂ (2.119)
          = 2 Re ρ₁₂ . (2.120)

Quantum:

    ⟨S̄_x⟩ = tr[(ρ₁₁ ρ₁₂; ρ*₁₂ ρ₂₂)(0 1; 1 0)] (2.121)
          = tr(ρ₁₂ ρ₁₁; ρ₂₂ ρ*₁₂) (2.122)
          = ρ₁₂ + ρ*₁₂ (2.123)
          = 2 Re ρ₁₂ . (2.124)

One can see that (2.120) and (2.124) give exactly the same result. Actually, this can be generalized to any hermitian quantum operator

    Â = Â† = (A₁₁ A₁₂; A*₁₂ A₂₂). (2.125)

The corresponding probabilistic observable has mean values in the states |1⟩ and |2⟩ given by

    Ā₁ = A₁₁ + ρ₁₂ A*₁₂ + ρ*₁₂ A₁₂ , (2.126)
    Ā₂ = A₂₂ + ρ₁₂ A*₁₂ + ρ*₁₂ A₁₂ . (2.127)

It turns out that quantum mechanics can be interpreted as classical statistics, but with additional information for the probabilistic observables. As long as we are dealing with diagonal observables, there is no difference between the two approaches; in this case the non-diagonal elements of ρ are of no importance.

Actually, the last statement is only true for a given point in time, i. e. for a static ρ. The non-diagonal elements of ρ are important for the time evolution of the probability distribution:

    {p_τ(t)} ⟵ {ρ_τσ(t)}. (2.128)
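The agreement of (2.120) and (2.124) can be confirmed for a concrete, arbitrarily chosen density matrix:

```python
# Classical (2.117)-(2.120) versus quantum (2.121)-(2.124) expectation
# value of S_x; rho11 and rho12 are hypothetical example values.
rho11 = 0.6
rho12 = 0.25 - 0.15j
rho = [[rho11, rho12], [rho12.conjugate(), 1 - rho11]]
Sx = [[0, 1], [1, 0]]

# Quantum: <S_x> = tr(rho Sx)
tr = sum(sum(rho[i][k] * Sx[k][i] for k in range(2)) for i in range(2))
# Classical: probabilities rho11, 1 - rho11 with the state means (2.105)
mean_state = rho12 + rho12.conjugate()        # same value in both states
classical = rho11 * mean_state + (1 - rho11) * mean_state
assert abs(tr - classical) < 1e-12
assert abs(tr - 2 * rho12.real) < 1e-12       # eq. (2.124)
```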

Pure quantum states

Consider a special form of the density matrix,

    ρ_στ = ψ_σ ψ*_τ ,

where ψ is a complex vector with N components and Σ_τ ψ*_τ ψ_τ = 1. In other words, ψ_τ is a wave function. Then,

    ⟨A⟩ = tr(ρ Â) (2.129)
        = ρ_στ Â_τσ (2.130)
        = ψ*_τ Â_τσ ψ_σ (2.131)
        = ⟨ψ|Â|ψ⟩, (2.132)

where, again, the sum over repeated indices is implied, and

    p_τ = ρ_ττ = |ψ_τ|² ≥ 0. (2.133)

As already mentioned, only ρ_ττ is needed for diagonal observables. So, for diagonal observables there is no difference between pure and mixed states. We can state the condition for pure states as

    ρ² = ρ, (2.134)

because

    ρ_στ ρ_τα = ψ_σ ψ*_τ ψ_τ ψ*_α (2.135)
             = ψ_σ ψ*_α (2.136)
             = ρ_σα . (2.137)

Change of the basis by unitary transformation

A unitary transformation is of the kind ρ → ρ' = U ρ U†, where U†U = 1. ρ' still has all the properties of a density matrix: (ρ')† = ρ', tr ρ' = 1, positivity. Operators are transformed unitarily, too: Â → Â' = U Â U†. Nevertheless, the expectation values are invariant under this transformation:

    ⟨A⟩ = tr(ρ' Â') (2.138)
        = tr(U ρ U† U Â U†) (2.139)
        = tr(U ρ Â U†) (2.140)
        = tr(U† U ρ Â) (2.141)
        = tr(ρ Â). (2.142)

Let us list several important properties:

- Even if Â is diagonal, U Â U† can be non-diagonal in the general case.
- If ρ can be diagonalized, then ρ' = diag(λ₁, λ₂, …, λ_N).

- Pure states remain pure:

    ρ² = ρ  ⟹  ρ'² = ρ'. (2.143)

One can see this via

    ρ'² = U ρ U† U ρ U† (2.144)
        = U ρ² U† (2.145)
        = U ρ U† (2.146)
        = ρ'. (2.147)

For a diagonalized pure state it follows that λ_n² = λ_n, leaving only the possible values λ_n = 0, 1. Together with Σ_n λ_n = 1 this means that ρ_στ = δ_σβ δ_τβ for some β, and therefore

    ψ_σ = δ_σβ e^{−iα}, (2.148)
    ψ*_τ = δ_τβ e^{iα}, (2.149)

which is a pure state.

Pure and mixed states

Consider a system with l = 1, where l could be associated with the angular momentum. Then l_z can be −1, 0 or 1. The system is described in Table 2.7. Of course, p₁ + p₂ + p₃ = 1 must be satisfied.

Table 2.7.: z-component of the angular momentum

    l_z    pure state        density matrix    probability
    1      ψ = (1, 0, 0)     diag(1, 0, 0)     p₁
    0      ψ = (0, 1, 0)     diag(0, 1, 0)     p₂
    −1     ψ = (0, 0, 1)     diag(0, 0, 1)     p₃

The density matrix of the system is

    ρ = diag(p₁, p₂, p₃). (2.150)

This is a mixed state as long as some p_i ≠ 0, 1. One can check that in this case

    ρ² = diag(p₁², p₂², p₃²) ≠ ρ. (2.151)

Example

Let us examine a system characterized by the density matrix

    ρ = (1/3) diag(1, 1, 1). (2.152)

Here, all the states l_z = −1, 0, 1 have equal probability, so ⟨l_z⟩ = 0. Let us compare this with the pure state

    ψ = (1/√3) (1, 1, 1)ᵀ, (2.153)

leading to the density matrix

    ρ' = (1/3) (1 1 1; 1 1 1; 1 1 1). (2.154)

We can see that both (2.152) and (2.154) have the same diagonal elements. Consequently, ⟨l_z⟩ = 0 again. They differ, though, in the mean values of the probabilistic observables, because of the different off-diagonal elements.

2.6. Microstates

Back to the quantum mechanical point of view: provided that the ψ_τ form a complete orthonormal basis, the ψ_τ are microstates.

Examples

1. Hydrogen atom: Here, τ = (n̂, l, m, s) is a multi-index. We assume that the microstates are countable, although this is often not the case. Often, they are also restricted, e. g. to microstates with E < −E₀/5. The energy of a state with principal quantum number n̂ is given by

    E = −E₀ / n̂². (2.155)

Let us consider the following states:

    ψ₁,₂: n̂ = 1, l = 0, m = 0, s = ±1;   E = −E₀ , (2.156)
    ψ₃,…,₁₀: n̂ = 2, l = 0, 1, m, s;   E = −E₀/4, (2.157)

where N = 2 + 8 = 10. The energy operator is given by Ê = −E₀ diag(1, 1, ¼, …, ¼). Let the distribution be uniform: ρ = diag(1/10, …, 1/10). Then the expectation value of the energy is

    ⟨E⟩ = −E₀ (2 · 1 + 8 · ¼)/10 = −0.4 E₀ . (2.158)
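The mean energy (2.158) follows from a few lines of arithmetic; E₀ = 1 is an arbitrary choice of units:

```python
# Mean energy of the 10 lowest hydrogen microstates with a uniform
# density matrix rho = diag(1/10, ..., 1/10), eq. (2.158).
E0 = 1.0                                   # energy unit (arbitrary)
E = [-E0] * 2 + [-E0 / 4] * 8              # n = 1 (2 states), n = 2 (8 states)
p = [1 / 10] * 10                          # diagonal elements of rho
mean_E = sum(pt * Et for pt, Et in zip(p, E))
assert abs(mean_E - (-0.4 * E0)) < 1e-12
```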

Figure 2.3.: Wave functions of a particle in a box: cos(πx/L) for m_x = 1/2, sin(2πx/L) for m_x = 1, cos(3πx/L) for m_x = 3/2

2. A particle in a box: The particle is described by the wave equation

    −(1/2M) Δψ = E ψ. (2.159)

Consider a cubic box with edge length L. Then the solution is found to be

    ψ = e^{i p_x x} e^{i p_y y} e^{i p_z z} ψ₀ (2.160)
      = e^{i p·x} ψ₀ , (2.161)

where

    E = p²/2M. (2.162)

The periodic boundary conditions give ψ(−L/2, y, z) = ψ(L/2, y, z) (and the same for y and z). Then p_x, p_y, p_z have discrete values:

    p = (2π/L) m, (2.163)

with

    m = (m_x, m_y, m_z),   m_{x,y,z} ∈ ℤ. (2.164)

Three examples of these periodic wave functions are illustrated in Figure 2.3.

2.7. Partition function

What is the number of states for a given energy E of a single particle (N = 1) in a one-dimensional (d = 1) box with edge length L? We will denote the number of states in [E, E + δE] by Ω̄(E, V). For our purposes, assume that E ≫ π²/(2ML²). It is

    p²/2M = π² n² / (2M L²), (2.165)

so,

    E < π² n² / (2M L²) < E + δE. (2.166)

If δE ≪ E, we can approximate

    (E + δE)^{1/2} ≈ E^{1/2} (1 + δE/(2E)) (2.167)
                  = E^{1/2} + δE/(2 E^{1/2}), (2.168)

leading to

    (2ME)^{1/2} L/π < n < (2ME)^{1/2} L/π + (1/2π) (2M/E)^{1/2} L δE. (2.169)

Therefore, the number of states Ω̄(E, V) is given by

    Ω̄(E, V) = (V/2π) (2M/E)^{1/2} δE, (2.170)

which means that Ω̄ ∝ V (with V = L for d = 1).

2.8. Continuous distribution and continuous variables

We will examine the continuous description of closely spaced discrete states. We need to define the probability density p(x) with

    ∫dx p(x) = 1,   ∫dx p(x) A(x) = ⟨A⟩. (2.171)

We will use the following definitions:

- I(x): interval of states τ which belong to the interval [x − dx/2, x + dx/2],
- p̄(x): mean probability in I(x),
- Ω̄(x): number of states belonging to I(x),

34 2. Basic concepts of statistical physics

A(x): Ā_x, i.e. the mean value of A in the interval I(x) (probabilistic variable).

We can express the probability of the reduced system with states x as

p(x) dx = Σ_{τ ∈ I(x)} p_τ = p̄(x) Ω̄(x), (2.172)

and A(x) is given by

A(x) = Ā_x = Σ_{τ ∈ I(x)} p_τ A_τ / Σ_{τ ∈ I(x)} p_τ. (2.173)

Let us check that the continuous case is equivalent to the discrete one:

∫dx p(x) A(x) = Σ_x Δx p(x) A(x) (2.174)
= Σ_x (Σ_{τ ∈ I(x)} p_τ) · (Σ_{τ ∈ I(x)} p_τ A_τ / Σ_{τ ∈ I(x)} p_τ) (2.175)
= Σ_x Σ_{τ ∈ I(x)} p_τ A_τ (2.176)
= Σ_τ p_τ A_τ. q. e. d. (2.177)

Note that ⟨A²⟩ = ∫dx p(x) A²(x) = Σ_τ p_τ A_τ² holds only if the dispersion of A within the intervals vanishes in the continuum limit Δx → 0.

We shall see that p(x) depends on the choice of the variables x. For that purpose, consider the explicit coordinate transformation

x = f(x′), (2.178)
dx = (df/dx′) dx′. (2.179)

The expectation value of A in the new coordinates becomes

⟨A⟩ = ∫dx p(x) A(x) = ∫dx′ (df/dx′) p[f(x′)] A[f(x′)], (2.180)

where p′(x′) = (df/dx′) p[f(x′)] and A′(x′) = A[f(x′)]. This can be generalized to many variables:

⟨A⟩ = ∫dx_1 ⋯ dx_n p(x_1,...,x_n) A(x_1,...,x_n), (2.181)

35 2.9. Probability density of energy

where p(x) is transformed by the Jacobi determinant:

p′(x′) = |df/dx′| p[f(x′)], (2.182)
x_i = f_i(x′_j), (2.183)
df/dx′ = det(∂f_i/∂x′_j), i, j = 1, ..., n. (2.184)

2.9. Probability density of energy

The probability density of energy can be inferred from

Ω(E) = (∂Ω/∂E) δE (2.185)

via

p(E) dE = p̄(E) Ω(E), (2.186)

leading us, in the continuum limit (change of symbols δE → dE), to

p(E) = p̄(E) ∂Ω/∂E. (2.187)

An example would be one particle in a linear box (d = 1, N = 1).

Equal probability for all values τ

We know that

Ω(E) = (V/2πℏ) (2M/E)^{1/2} δE, (2.188)

so we can conclude that

∂Ω/∂E = (V/2πℏ) (2M/E)^{1/2}. (2.189)

Then,

p(E) = { Z⁻¹ (V/2πℏ) (2M/E)^{1/2} for E < E_max, 0 else. } (2.190)

Here, Z is a constant factor, which can be determined by the normalization condition ∫dE p(E) = 1:

Z = (V/2πℏ) ∫_0^{E_max} dE (2M/E)^{1/2}. (2.191)

36 2. Basic concepts of statistical physics

[Figure 2.4: Comparison of (a) equipartition distribution and (b) Boltzmann distribution: p_τ as a function of E_τ.]

Boltzmann distribution for microstates τ

The Boltzmann distribution is given by the formula

p_τ = Z⁻¹ e^{−E_τ/k_B T} = Z⁻¹ e^{−E_τ/T}. (2.192)

We have set k_B = 1, which means that we measure the temperature in energy units. E_τ is the energy of state τ. In Figure 2.4 the Boltzmann distribution is compared to an equipartition one. We find

p_b(E) = (V/2πℏ) (2M/E)^{1/2} e^{−E/T} Z⁻¹, (2.193)
⟨E⟩ = ∫_0^{E_max} dE p(E) E, (2.194)
⟨E²⟩ = ∫_0^{E_max} dE p(E) E². (2.195)

It should be remarked that p(E) is the macroscopic probability distribution and p_τ is the microscopic one.

Number of one-particle states in a momentum interval

Number of one-particle states in a momentum interval Δp_x, Δp_y, Δp_z. First, the one-dimensional case:

Ω(p) = (L/2πℏ) Δp_x. (2.196)

Periodic boundary conditions would give

p_x < (2πℏ/L) n < p_x + Δp_x. (2.197)

For an arbitrary dimension d we have Ω(p) = V/(2πℏ)^d Δp_1 ⋯ Δp_d, which in the most usual case of d = 3 is

Ω = V/(2πℏ)³ Δp_1 Δp_2 Δp_3. (2.198)

37 2.9. Probability density of energy

Number of one-particle states with energy E for arbitrary d

Using the usual energy–momentum relation, we find

∂Ω/∂E = (∂^d Ω/∂p_1 ⋯ ∂p_d) ∫d^d p δ(p²/2M − E), (2.199)
∂Ω/∂E = V/(2πℏ)^d ∫d^d p δ(p²/2M − E). (2.200)

For d = 1 it is

∫dp δ(p²/2M − E) = √(2M) ∫_0^∞ dε ε^{−1/2} δ(ε − E) (2.201)
= (2M/E)^{1/2}, (2.202)

with ε = p²/2M. We have used that p = √(2Mε) (both signs of p contribute) and

dp/dε = √(2M) (1/2) ε^{−1/2}. (2.203, 2.204)

So, we arrive at

∂Ω/∂E = (L/2πℏ) (2M/E)^{1/2}. (2.205)

For d = 3 we have

∫d³p = 4π ∫_0^∞ dp p², (2.206)

with ε = p²/2M. So, we find

∫d³p δ(p²/2M − E) = 4π ∫_0^∞ dε δ(ε − E) √(M/2ε) · 2Mε (2.207)
= 4π √2 M^{3/2} E^{1/2}, (2.208)

where √(M/2ε) = dp/dε and 2Mε = p². The number of states per energy is then given by

∂Ω/∂E = V/(2πℏ)³ · 4π √2 M^{3/2} E^{1/2}. (2.209)

38 2. Basic concepts of statistical physics

Boltzmann probability distribution for one particle

The Boltzmann probability distribution for one particle is given by

p(E) = (∂Ω/∂E) e^{−E/T} Z⁻¹, (2.210)

where Z is the normalization factor. To find it, we use

∫dE p(E) = 1. (2.211)

So, we get

Z = ∫dE (∂Ω/∂E) e^{−E/T}, (2.212)

where ∂Ω/∂E = F̃ E^{1/2}. Therefore,

Z = F̃ ∫dE E^{1/2} e^{−E/T} = F̃ Z̃ (2.213)

and

p(E) = Z̃⁻¹ E^{1/2} e^{−E/T}. (2.214)

So, it is possible to find an expression for this probability in which the prefactor F̃ no longer appears.

2.10. Correlation functions

If there are two observables A and B, what is the probability that a measurement of A and B yields specific values λ_A and λ_B? Imagine that B is measured after A; then this probability is p(λ_B, λ_A).

Conditional probability p(λ_B | λ_A)

The probability to find λ_B, if λ_A is measured for A, is called conditional probability and is written as p(λ_B | λ_A). It is

p(λ_B, λ_A) = p(λ_B | λ_A) p(λ_A). (2.215)

Product observable B ∘ A

The product observable is defined via

⟨B ∘ A⟩ = Σ_{(λ_B, λ_A)} p(λ_B, λ_A) λ_B λ_A (2.216)
= Σ_{λ_A} Σ_{λ_B} p(λ_B | λ_A) p(λ_A) λ_B λ_A. (2.217)

Its spectrum is given by

Spec(B ∘ A) = {λ_B λ_A}. (2.218)

B ∘ A depends on conditional probabilities, not only on p(λ_B) and p(λ_A).
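Equations (2.213), (2.214) can be checked numerically: with ∂Ω/∂E ∝ E^{1/2} the normalization is Z̃ = Γ(3/2) T^{3/2} = (√π/2) T^{3/2}, and the mean energy comes out as ⟨E⟩ = (3/2)T, the familiar result for a free particle in d = 3. A sketch using a simple midpoint quadrature, with the hypothetical choice T = 2 in units k_B = 1:

```python
import math

T = 2.0
Emax = 60 * T                 # effective cutoff; e^{-60} is utterly negligible
n = 200_000
dE = Emax / n
Es = [(i + 0.5) * dE for i in range(n)]
w = [math.sqrt(E) * math.exp(-E / T) for E in Es]   # unnormalized p(E), eq. (2.214)

Z = sum(w) * dE                                      # numeric Z-tilde
Emean = sum(E * wi for E, wi in zip(Es, w)) * dE / Z # numeric <E>

print(Z, math.sqrt(math.pi) / 2 * T**1.5)  # numeric vs analytic normalization
print(Emean, 1.5 * T)                       # numeric vs analytic mean energy
```

The prefactor F̃ indeed drops out: only the shape E^{1/2} e^{−E/T} matters for the normalized distribution.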

39 2.10. Correlation functions

Correlation function

The (connected) correlation function is defined as

⟨B A⟩_c = ⟨B A⟩ − ⟨B⟩⟨A⟩. (2.219)

If ⟨B A⟩_c ≠ 0, then B and A are correlated.

Example

A falling marble: Imagine a marble that falls out of a pipe onto a prism, as illustrated in Figure 2.5a. It can roll down on either side of the prism with equal probability. However, there are two detectors A and B on the left side. The corresponding measurement values are λ_A = 1 and λ_B = 1 if the marble goes through the detectors and 0 otherwise. The probability to measure λ_A = 1 is 1/2. It is easy to see that p(λ_B = 1 | λ_A = 1) = 1. Therefore, p(λ_B = 1, λ_A = 1) = 1/2. With this we calculate

⟨B A⟩ = 1/2, (2.220)
⟨A⟩ = Σ_{λ_A} p_A λ_A = 1/2, (2.221)
⟨B⟩ = Σ_{λ_B} p_B λ_B = 1/2, (2.222)
⟨B⟩⟨A⟩ = 1/4, (2.223)
⟨B A⟩_c = 1/2 − 1/4 = 1/4. (2.224)

Is B A = A B true? Although it is true in this case, in general it is not. A simple example is shown in Figure 2.5b.

Independent observables

If the measurement of A does not influence B, then the conditional probability is independent of λ_A:

p(λ_B | λ_A) = p(λ_B). (2.225)

So, only for independent observables it is

p(λ_B, λ_A) = p(λ_B) p(λ_A). (2.226)
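The marble example can be written out as a two-state ensemble (left/right, probability 1/2 each), reproducing (2.220)–(2.224):

```python
# states: marble rolls left or right with probability 1/2 each;
# detectors A and B both sit on the left side, so A = B = 1 on "left", 0 on "right"
states = [("left", 0.5, 1, 1), ("right", 0.5, 0, 0)]   # (label, p, A, B)

A    = sum(p * a     for _, p, a, b in states)
B    = sum(p * b     for _, p, a, b in states)
BA   = sum(p * a * b for _, p, a, b in states)
BA_c = BA - B * A

print(A, B, BA, BA_c)   # 0.5 0.5 0.5 0.25
```

The nonzero connected correlator reflects that a marble seen by A is certainly seen by B.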

40 2. Basic concepts of statistical physics

[Figure 2.5: A marble falling down on a prism, with detectors A and B: (a) two possibilities, (b) three possibilities.]

But we must keep in mind that this is not true in general! In this case, the correlation function can be written as

⟨B A⟩ = Σ_{λ_B} Σ_{λ_A} p(λ_B) p(λ_A) λ_B λ_A (2.227)
= (Σ_{λ_B} p(λ_B) λ_B) (Σ_{λ_A} p(λ_A) λ_A) (2.228)
= ⟨B⟩⟨A⟩. (2.229)

Then,

⟨B A⟩_c = 0. (2.230)

It is important to note that the reverse statement is not true!

Classical correlation function

It is

(B A)_τ = B_τ A_τ. (2.231)

The classical correlation function is

⟨BA⟩_c = Σ_τ p_τ B_τ A_τ − (Σ_τ p_τ B_τ) (Σ_τ p_τ A_τ). (2.232)

In general, ⟨BA⟩_c ≠ 0. Setting B = A, we find

⟨AA⟩_c = ΔA². (2.233)

There is an easy recipe for calculating conditional probabilities p(λ_B | λ_A):

41 2.10. Correlation functions

1. We eliminate all τ for which A_τ ≠ λ_A.

2. Now we have a new system with states τ̄ and

p̄_τ̄ = p_τ̄ / Σ_τ̄ p_τ̄. (2.234)

It is A_τ̄ = λ_A and B_τ̄ = B_τ.

3. The result is:

p(λ_B, λ_A) = Σ_{τ: A_τ = λ_A, B_τ = λ_B} p_τ. (2.235)

One should note that after the measurement of A the relative probabilities p̄_τ̄/p̄_ρ̄ are the same as p_τ/p_ρ before the measurement of A:

p̄_τ̄/p̄_ρ̄ = p_τ/p_ρ. (2.236)

In this lecture we will be mostly concerned with classical products of two observables, i.e. BA = AB, and with classical correlation functions, which are special correlation functions. However, if the relative probabilities change after the measurement of A, then we have different conditional probabilities: B ∘ A ≠ BA. Note that every consistent choice of conditional probabilities p(λ_B | λ_A) leads to the definition of a product of two observables B ∘ A. Many different products and correlation functions are possible. In general, the correct choice depends on the way the measurements are performed. This choice is important for the statistical description of systems with few states like atoms (with bounded energy), where the measurement of a Fermi observable has a strong influence on the possible outcome of the measurement of a second observable.

Quantum correlations

We now have

⟨B A⟩ = tr(ρ (ÂB̂ + B̂Â)/2), (2.237)

with Â, B̂ being diagonal: Â = diag(A_τ). Therefore,

⟨B A⟩ = Σ_τ ρ_ττ A_τ B_τ. (2.238)

The quantum product coincides with the classical one in this case. However, if Â and B̂ do not commute, this is no longer correct, because in that case Â and B̂ cannot simultaneously be diagonal.
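The symmetrized product in (2.237) can be probed with 2×2 matrices. For a diagonal pair it reduces to the classical expression (2.238); for a non-commuting pair it does not. A sketch, assuming the Pauli matrices σ_z, σ_x as observables and an arbitrary diagonal density matrix:

```python
import numpy as np

rho = np.diag([0.7, 0.3])                 # diagonal ("classical-looking") density matrix
A = np.diag([1.0, -1.0])                  # sigma_z: diagonal observable
B = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_x: off-diagonal observable

def sym_product(rho, A, B):
    # eq. (2.237): <B A> = tr(rho (AB + BA)/2)
    return np.trace(rho @ (A @ B + B @ A) / 2).real

# diagonal pair: coincides with sum_tau rho_tt A_t B_t = 0.7*1*1 + 0.3*(-1)*(-1) = 1
print(sym_product(rho, A, A))
# non-commuting pair: sigma_z and sigma_x anticommute, so the product vanishes
print(sym_product(rho, A, B))
```

The second result is independent of ρ, since σ_z σ_x + σ_x σ_z = 0; no classical product of two ±1-valued observables could reproduce this for all distributions.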

42

43 3. Statistical description of many-particles systems

N is the particle number. Typically N ∼ N_Avogadro ≈ 6 · 10²³.

3.1. Chain with N lattice sites

3.1.1. Probability distribution

We consider fermions distributed over a chain with N lattice sites. If site j is occupied by a fermion, we denote this by s_j = +1. If it is not, we denote it by s_j = −1. The possible states of such a system are given by τ = {s_j}, where s_j = ±1 for every j. So Ω = 2^N. A possible configuration is shown in Figure 3.1. Later, in subsection 3.2.1, we will interpret this as a chain of spins. In the following, for notational convenience, we will denote any quantity f which depends on the complete microscopic state {s_j} as f{s_j} ≡ f({s_j}).

Let us define the number of fermions N̄. For every configuration {s_j}, the corresponding value of the fermion number is

N̄{s_j} = Σ_j (1/2)(s_j + 1). (3.1)

Lattice sites are independent, and we say that the probability of a site being occupied is q. So, 1 − q is the probability of it being unoccupied. Then

p{s_j} = Π_{j=1}^N [ q^{(1+s_j)/2} (1−q)^{(1−s_j)/2} ] (3.2)
= q^{N̄} (1−q)^{N−N̄}. (3.3)

3.1.2. Macroscopic probability

We want to compute the macroscopic probability to find N̄ fermions (particles), that is p(N̄):

p(N̄) = q^{N̄} (1−q)^{N−N̄} Ω(N̄), (3.4)

where

Ω(N̄) = N! / (N̄! (N−N̄)!) (3.5)

is the number of microscopic states with N̄ particles.

44 3. Statistical description of many-particles systems

[Figure 3.1: Fermions on a chain with sites 1, ..., j, ..., N.]

It is given by the number of possible ways to distribute N̄ indistinguishable particles on N different sites:

Particle 1: N sites, Ω(1) = N
Particle 2: (N−1) sites, Ω(2) = N(N−1)/2
Particle 3: (N−2) sites, Ω(3) = N(N−1)(N−2)/(2·3)
⋮

All in all, we find the probability to be given by the binomial distribution

p(N̄) = q^{N̄} (1−q)^{N−N̄} N! / (N̄! (N−N̄)!). (3.6)

Using the general relation

(q + r)^N = Σ_{N̄=0}^N N!/(N̄!(N−N̄)!) q^{N̄} r^{N−N̄}, (3.7)

we can prove that p(N̄) has the correct normalization

Σ_{N̄=0}^N p(N̄) = 1 (3.8)

by setting r = 1 − q. We conclude that there are two different levels at which one can characterize this system. One can consider the 2^N microstates τ_micro = {s_j} or the N + 1 macrostates τ_macro = N̄.
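The normalization (3.8), together with the moments ⟨N̄⟩ = Nq and ΔN̄² = Nq(1−q) derived on the following pages, can be verified directly for concrete (hypothetical) values of N and q:

```python
from math import comb

N, q = 50, 0.3
# binomial distribution (3.6)
p = [comb(N, k) * q**k * (1 - q)**(N - k) for k in range(N + 1)]

norm = sum(p)                                            # should be 1, eq. (3.8)
mean = sum(k * pk for k, pk in enumerate(p))             # should be N*q
var  = sum(k * k * pk for k, pk in enumerate(p)) - mean**2   # should be N*q*(1-q)

print(norm, mean, var)   # 1.0, 15.0, 10.5
```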

45 3.1. Chain with N lattice sites

Expectation value

The expectation value is given by

⟨N̄⟩ = Σ_{N̄=0}^N p(N̄) N̄ (3.9)
= Σ_{N̄=0}^N N!/(N̄!(N−N̄)!) N̄ q^{N̄} (1−q)^{N−N̄} (3.10)
= Σ_{N̄=0}^N N!/(N̄!(N−N̄)!) q (∂/∂q) [q^{N̄} r^{N−N̄}] |_{r=1−q} (3.11)
= q (∂/∂q) Σ_{N̄=0}^N N!/(N̄!(N−N̄)!) q^{N̄} r^{N−N̄} |_{r=1−q} (3.12)
= q (∂/∂q) (q + r)^N |_{r=1−q} (3.13)
= N q (q + r)^{N−1} |_{r=1−q} (3.14)
= Nq. (3.15)

To arrive at this result, we used the following trick: We expressed N̄ via a q-derivative acting on q^{N̄}. Thereto we considered r to be independent of q. Only in the end, after using (3.7) and performing the q-derivative, we substituted r = 1 − q again.

Dispersion

The dispersion can be computed using the same trick as above:

⟨N̄²⟩ = Σ_{N̄=0}^N p(N̄) N̄² (3.16)
= (q ∂/∂q)² (q + r)^N |_{r=1−q} (3.17)
= q (∂/∂q) [Nq (q + r)^{N−1}] |_{r=1−q} (3.18)
= qN [(q + r)^{N−1} + (N−1) q (q + r)^{N−2}] |_{r=1−q} (3.19)
= qN + q² N(N−1) (3.20)
= q² N² + N (q − q²). (3.21)

46 3. Statistical description of many-particles systems

From the expectation value and the dispersion we can calculate the variance and the relative standard deviation:

ΔN̄² = ⟨N̄²⟩ − ⟨N̄⟩² (3.22)
= Nq (1 − q), (3.23)

ΔN̄/⟨N̄⟩ = √(Nq(1−q)) / (Nq) (3.24)
= √((1−q)/(Nq)) (3.25)
= (1/√N) √((1−q)/q). (3.26)

Notice that the relative standard deviation is proportional to 1/√N! So, the distribution gets sharper with increasing N. For example, for N = 10²⁰ one finds

ΔN̄/⟨N̄⟩ ∼ 10⁻¹⁰. (3.27)

This is one of the main reasons why statistical physics has such predictive power. The statistical uncertainty is so small that one virtually can calculate deterministically by using mean values.

3.2. Uncorrelated probability distribution

Here, uncorrelated means that the probabilities q do not depend on other lattice sites. It does not mean that correlation functions between different sites have to vanish.

3.2.1. N spins on a chain

We consider a chain of N spins. The total spin S, which is a macroscopic observable, is given by

S = Σ_j s_j. (3.28)

In quantum mechanics the corresponding spin is (ℏ/2)S. The number of up-spins (compare to (3.1)) is

N̄ = Σ_{j=1}^N (1/2)(s_j + 1) (3.29)
= (1/2) S + (1/2) N (3.30)
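The 1/√N scaling of (3.26) can be seen by evaluating the exact binomial moments for increasing N (here with the illustrative choice q = 1/2, where √((1−q)/q) = 1):

```python
from math import comb, sqrt

def rel_std(N, q):
    # exact relative standard deviation of the binomial distribution (3.6)
    p = [comb(N, k) * q**k * (1 - q)**(N - k) for k in range(N + 1)]
    mean = sum(k * pk for k, pk in enumerate(p))
    var = sum(k * k * pk for k, pk in enumerate(p)) - mean**2
    return sqrt(var) / mean

q = 0.5
for N in (100, 400):
    print(N, rel_std(N, q), sqrt((1 - q) / (N * q)))   # exact vs formula (3.25)
```

Quadrupling N halves the relative width, exactly as 1/√N predicts.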

47 3.2. Uncorrelated probability distribution

and therefore

S = 2N̄ − N (3.31)
⟨S⟩ = N (2q − 1). (3.32)

Spin expectation value, dispersion, variance and relative standard deviation are then found to be

⟨S⟩ = 2⟨N̄⟩ − N (3.33)
= N (2q − 1), (3.34)
S² = 4N̄² − 4N̄N + N², (3.35)
ΔS² = 4 (⟨N̄²⟩ − ⟨N̄⟩²) (3.36)
= 4 ΔN̄² (3.37)
= 4 Nq (1 − q), (3.38)
ΔS/⟨S⟩ ∝ 1/√N, (3.39)

where the last equation only holds if ⟨S⟩ ≠ 0, i.e. q ≠ 1/2. Now, using equation (3.6), the macroscopic probability is given by

p(S) = p(N̄ = (N+S)/2) (3.40)
= q^{(N+S)/2} (1−q)^{(N−S)/2} N! / ( ((N+S)/2)! ((N−S)/2)! ) (3.41)

and the microscopic probability by

p{s_j} = Π_{j=1}^N [ q^{(1+s_j)/2} (1−q)^{(1−s_j)/2} ] (3.42)
= Π_{j=1}^N P(s_j). (3.43)

The latter is a product of independent probabilities for each spin s_j. That is because we assumed uncorrelated spins, i.e. the spin s_j does not know anything about its neighbours. In the following this property is used to compute correlation functions.

Correlation functions

Let us compute the correlation function for l ≠ k in a microstate τ = {s_j}:

⟨s_l s_k⟩ = Σ_{{s_j}} s_l s_k p{s_j} (3.44)

48 3. Statistical description of many-particles systems

Using (3.43) and the fact that the sum over all microstates can be written as

Σ_{{s_j}} = Π_j Σ_{s_j=±1} (3.45)
= Σ_{s_1=±1} Σ_{s_2=±1} ⋯ Σ_{s_N=±1}, (3.46)

we find

⟨s_l s_k⟩ = (Π_j Σ_{s_j=±1}) (Π_j P(s_j)) s_l s_k (3.47)
= Π_j (Σ_{s_j=±1} P(s_j)) s_l s_k (3.48)
= ( Π_{j≠l,k} Σ_{s_j=±1} P(s_j) ) Σ_{s_l=±1} Σ_{s_k=±1} P(s_l) P(s_k) s_l s_k. (3.49)

Moreover, we can use the normalization of p{s_j},

Σ_{s_j=±1} P(s_j) = 1, (3.50)

to compute

Π_{j≠l,k} Σ_{s_j=±1} P(s_j) = 1. (3.51)

With this, we finally arrive at

⟨s_l s_k⟩ = Σ_{s_l=±1} Σ_{s_k=±1} P(s_l) P(s_k) s_l s_k. (3.52)

Notice that this is the same expression as for a system consisting of only two spins! We conclude that in the case of uncorrelated spins the spins s_j, j ≠ l,k, do not matter for ⟨s_l s_k⟩. By a similar calculation we also get an expression for the expectation value:

⟨s_k⟩ = Σ_{s_k=±1} P(s_k) s_k. (3.53)

So, obviously

⟨s_l s_k⟩ = ⟨s_l⟩⟨s_k⟩, (3.54)

which leads to the observation that the connected correlation function, defined in (2.219) as

⟨s_l s_k⟩_c = ⟨s_l s_k⟩ − ⟨s_l⟩⟨s_k⟩, (3.55)

vanishes for the system we are looking at:

⟨s_l s_k⟩_c = 0. (3.56)
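The factorization (3.52)–(3.56) can be checked by summing over all 2^N microstates. As a contrast, an Ising-type nearest-neighbour weight e^{κ Σ_j s_j s_{j+1}} (cf. (3.57) in the next subsection) gives a non-vanishing connected correlation. A sketch with the hypothetical values N = 6, q = 0.3, κ = 0.5:

```python
from itertools import product
from math import exp

N, q, kappa = 6, 0.3, 0.5
l, k = 1, 4                       # two sites with |l - k| = 3

def connected(weight):
    # <s_l s_k>_c = <s_l s_k> - <s_l><s_k> for an arbitrary (unnormalized) weight
    Z = sl = sk = slk = 0.0
    for s in product((+1, -1), repeat=N):
        w = weight(s)
        Z += w
        sl += w * s[l]
        sk += w * s[k]
        slk += w * s[l] * s[k]
    return slk / Z - (sl / Z) * (sk / Z)

def p_uncorr(s):                  # product of single-site probabilities, cf. (3.43)
    w = 1.0
    for sj in s:
        w *= q if sj == +1 else 1 - q
    return w

def p_ising(s):                   # nearest-neighbour coupling, cf. (3.57)
    return exp(kappa * sum(s[j] * s[j + 1] for j in range(N - 1)))

print(connected(p_uncorr))   # 0: independent spins
print(connected(p_ising))    # > 0: the coupling correlates distant spins
```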

49 3.2. Uncorrelated probability distribution

This reflects the fact that we are dealing with uncorrelated spins here. In contrast to that, an example for a system with correlated spins is the Ising model. Its microscopic probability is given by

p{s_j} ∝ e^{κ Σ_j s_j s_{j+1}}. (3.57)

In this case there are nearest-neighbour interactions, so the spins know about each other. We will come back to this later.

3.2.2. Independent measurement

Consider a sequence of independent measurements of one spin. We denote the individual measurements by j = 1, ..., N. Then such a sequence is denoted as

{s_j} = {s_1, s_2, ..., s_N}. (3.58)

If the probability for one spin is P(s_k), the probability for a given sequence is

p{s_j} = Π_j P(s_j), (3.59)

just like throwing dice N times. The average of s for a sequence of N measurements is

s̄ = S/N, (3.60)
⟨s̄⟩ = 2q − 1. (3.61)

Of course, there always is some statistical error for a finite sequence, i.e. the value of the total spin S fluctuates between different sequences. The dispersion is computed as

ΔS² = ⟨S²⟩ − ⟨S⟩², (3.62)

where

⟨S²⟩ = Σ_{{s_j}} p{s_j} S², with S = Σ_m s_m and S² = (Σ_m s_m)². (3.63, 3.64)

From (3.38) we know that

ΔS = 2 √(Nq(1−q)), (3.65)

which can be related to Δs̄ via

Δs̄ = ΔS/N (3.66)
= 2 √(q(1−q)/N) (3.67)
∝ 1/√N. (3.68)

50 3. Statistical description of many-particles systems

Therefore, large sequences give accurate results. But beware of the fact that the statistical measurement error Δs̄ has nothing to do with the fluctuation Δs_j of the actual spin:

Δs_j² = ⟨s_j²⟩ − ⟨s_j⟩² (3.69)
= 1 − (2q−1)² (3.70)
= 4q(1−q). (3.71)

So,

Δs_j = 2√(q(1−q)) ≫ 2√(q(1−q)/N) = Δs̄. (3.72)

Δs̄ is an effect of measuring, whereas Δs_j is a quantum mechanical effect, as every single measurement either yields s_j = +1 or s_j = −1 (if q = 1/2, Δs_j = 1). Or in other words, Δs̄ is a macroscopic, whereas Δs_j is a microscopic variable.

Summary

1. Different physical situations are described by the same probability distribution p{s_j}:
a) Fermions on a lattice of N sites,
b) N independent spins,
c) Random walk (exercise),
d) Sequence of measurements.

2. One major theme in statistical physics is to compute macroscopic probability distributions from microscopic probability distributions.

3.3. Statistical Ensembles

A statistical ensemble is constituted by specifying the states τ and the probability distribution p_τ or the density matrix ρ_τ, respectively. In such an ensemble different operators / variables can be defined. We can imagine an ensemble as a collection of infinitely many similar, uncorrelated experimental set-ups in which we can measure the values of the state variables. From this their probability distributions can be inferred (sequences of measurements with N → ∞). For example, consider an ensemble of 1-particle systems with spin, i.e. an ensemble of 2-state systems (τ = {↑, ↓}). This is a sequence of measurements on a single spin and should not be confused with systems with many degrees of freedom (many spins). These would be described by ensembles with many lattice sites or particles. Of course, real sequences of measurements can only be finite.

51 3.4. Gaussian distribution

3.4. Gaussian distribution

Systems with many uncorrelated degrees of freedom are approximated by a Gaussian distribution. This approximation becomes exact in the limit of an infinite number of particles, N → ∞. The continuous description is given by

p(x) = c e^{−(x−x̄)²/(2σ²)}, (3.73)

where x̄ = ⟨x⟩ is the mean value of the macroscopic variable x, σ = Δx = √(⟨x²⟩ − ⟨x⟩²) is the standard deviation and c = c(σ) is a normalization constant which is determined by demanding

1 = ∫dx p(x) (3.74)
= c(σ) √(2πσ²). (3.75)

We find

c(σ) = 1/(√(2π) σ). (3.76)

The generalization to several variables x_i is given by

p = c e^{−(1/2) A_ij (x_i − x̄_i)(x_j − x̄_j)}, (3.77)

where A_ij are the components of a positive definite symmetric matrix A, i.e. a matrix with only positive eigenvalues. As an example we can think about fermions on a 1-dimensional chain. We will show that the Gaussian is the correct large-N limit of the binomial distribution

p(N̄) = N!/(N̄!(N−N̄)!) q^{N̄} r^{N−N̄}, r = 1 − q. (3.78)

Using Stirling's formula

ln(N!) = (N + 1/2) ln N − N + (1/2) ln(2π) + O(N⁻¹) for N → ∞, (3.79)

we get

α(N̄) = ln p(N̄) (3.80)
= (N + 1/2) ln N − N + (1/2) ln(2π)
− (N̄ + 1/2) ln N̄ + N̄ − (1/2) ln(2π)
− (N − N̄ + 1/2) ln(N − N̄) + N − N̄ − (1/2) ln(2π)
+ N̄ ln q + (N − N̄) ln(1−q) (3.81)
= (N + 1/2) ln N − (N̄ + 1/2) ln N̄ − (N − N̄ + 1/2) ln(N − N̄)
+ N̄ ln q + (N − N̄) ln(1−q) − (1/2) ln(2π). (3.82)

52 3. Statistical description of many-particles systems

It can be shown that

α(N̄) ≃ α(N̄_c) − (N̄ − N̄_c)²/(2ΔN̄²), (3.83)

where

N̄_c = Nq, (3.84)
ΔN̄² = Nq(1−q), (3.85)

for N̄ close to the maximum N̄_c. The fact that the maximum is given by N̄_c = Nq can be proven by computing the first derivative of α:

0 = ∂α/∂N̄ |_{N̄=N̄_c} (3.86)
= −ln N̄_c + ln(N − N̄_c) + ln q − ln(1−q) + O(N̄_c⁻¹). (3.87, 3.88)

So, up to N⁻¹ corrections we find

(N − N̄_c)/N̄_c = (1−q)/q (3.89)

and therefore

N̄_c = Nq. (3.90)

Moreover, from the second derivative we can infer the variance:

1/ΔN̄² = −∂²α/∂N̄² |_{N̄=N̄_c} (3.91)
= 1/N̄_c + 1/(N − N̄_c) + O(N̄_c⁻²) (3.92)
= N / (N̄_c (N − N̄_c)) + O(N̄_c⁻²). (3.93)

So, using N̄_c = Nq, up to N⁻² corrections it is

1/ΔN̄² = 1/(Nq(1−q)) (3.94)

and therefore

ΔN̄² = Nq(1−q). (3.95)

Furthermore, the third derivative yields

∂³α/∂N̄³ |_{N̄=N̄_c} = 1/N̄_c² − 1/(N − N̄_c)², (3.96)

which is suppressed by a further power of 1/N. All in all, this leads to the central limit theorem of statistics, which states that any uncorrelated ensemble can be described by a Gaussian probability distribution in the limit N → ∞.
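The content of (3.79)–(3.95), i.e. the binomial distribution approaching a Gaussian with mean Nq and variance Nq(1−q), is easy to check numerically (the values N = 1000, q = 0.3 are an arbitrary illustration):

```python
from math import comb, exp, pi, sqrt

N, q = 1000, 0.3
mu, var = N * q, N * q * (1 - q)

worst = 0.0
for k in range(N + 1):
    binom = comb(N, k) * q**k * (1 - q)**(N - k)            # exact, eq. (3.78)
    gauss = exp(-(k - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)  # eqs. (3.73)-(3.76)
    worst = max(worst, abs(binom - gauss))

print(worst)   # already small (of order 1e-4) for N = 1000
```

The maximal pointwise deviation shrinks further with growing N, consistent with the neglected O(N⁻¹) terms in the expansion of α(N̄).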

53 3.5. Thermodynamic limit

3.5. Thermodynamic limit

The thermodynamic limit is given by V → ∞ or N → ∞, respectively. In this limit the Gaussian distribution becomes extremely narrow. For q = 1/2 one has

⟨N̄⟩ = N/2, ΔN̄ = √N/2, (3.97)

so for macroscopic N the relative width ΔN̄/⟨N̄⟩ = 1/√N is tiny. If we then consider a deviation N̄ = ⟨N̄⟩ + δ with δ ≫ ΔN̄, (3.98) the relative probability

p(N̄)/p(⟨N̄⟩) = e^{−(N̄−⟨N̄⟩)²/(2ΔN̄²)} (3.99)

is utterly negligible, while for a deviation of the order of ΔN̄ itself it is of order one, e.g.

p(N̄)/p(⟨N̄⟩) = e^{−1/4} = O(1). (3.104)

Therefore, only values

⟨N̄⟩ − 5 ΔN̄ ≤ N̄ ≤ ⟨N̄⟩ + 5 ΔN̄ (3.105)

play a role. And in this regime the deviations of the actual probability distribution from a Gaussian distribution are extremely small!

54

55 4. Equilibrium Ensembles Equilibrium states are described by statistical ensembles with a static probability distribution. If the probability distribution {p τ } is time independent, then all expectation values of the observables are time independent, too. One example would be an isolated system after sufficiently long time or, respectively, just parts of the isolated system. Another one would be an open system being immersed in a heat bath. Basic postulate for equilibrium states An isolated system approaches an equilibrium state. After sufficiently long time an isolated system can be described by an equilibrium ensemble. This conjecture was made by Boltzmann, but was never proven. Actually, it most probably is not true, as there are two problems with it. First, the necessary time scale could be extremely large. For example think of systems like water, honey or glass. And second, the microscopic equations are invariant under time reversal, but the time direction always points towards equilibrium. But even though the basic postulate may not be strictly true, and despite the existence of important practical systems which do not equilibrate, the basic postulate is useful for many applications and it often provides very good approximations Microcanonical ensemble In the following we will be examining an isolated system in a volume V with fixed energy E and fixed number of particles N Fundamental statistical postulate Microstates These are the states τ of a statistical ensemble, given by all possible quantum states with specified V, E, N. Note that in quantum mechanics one has to choose a complete basis of eigenstates to E, N in a given volume V. Then one can identify ψ τ τ. (4.1) Or equivalently: An isolated system in equilibrium can be found with equal probability in each of its allowed states. 55

56 4. Equilibrium Ensembles

Microcanonical ensemble

The number of microstates with given E and N is denoted by Ω(E,N). Then the probability of the system being in one of these is given by

p_τ = 1/Ω(E,N). (4.2)

This raises several arguments in favour of the basic postulate for equilibrium states. To begin with, no state τ is preferred over another state τ′. In addition to that, this so-called equipartition of states (i.e. equal p_τ) is constant in time. But one has to be careful. The existence of an asymptotic state for t → ∞ is only valid for large enough N. Moreover, there is an obstruction due to conserved correlations, i.e. a system whose initial correlations differ from the ones of the microcanonical ensemble cannot reach the latter asymptotically. Note that for a microcanonical ensemble all expectation values and classical correlations for classical observables are uniquely fixed.

Microcanonical partition function

The microcanonical partition function is denoted by Z_mic, where

Z_mic = Ω(E), (4.3)
p_τ = Z_mic⁻¹. (4.4)

Z_mic(E,N,V) depends on the thermodynamical variables E, N, V. Recall that Ω(E) = (∂Ω/∂E) δE is the number of states with energy between E and E + δE.

Entropy S

For a microcanonical ensemble the entropy is given by

S = k_B ln Ω, (4.5)

where k_B is the Boltzmann constant. Setting it to 1 simply means measuring the temperature in energy units, e.g. eV. Then we have

S(E,N,V) = ln Ω(E,N,V). (4.6)

This equation is valid for isolated systems in equilibrium. The entropy S is an extensive thermodynamic potential, i.e. it is additive when uniting subsystems. One can conclude that, in the limit of large N, the entropy is proportional to N:

S ∝ N for N → ∞. (4.7)

57 4.1. Microcanonical ensemble

Examples

1. Consider N uncorrelated particles where each of them can be in F different states. Furthermore, say the energy is given by E = cN, independent of the particles' states. Then there are F^N possible microstates:

Ω = F^N, (4.8)
ln Ω = N ln F, (4.9)
S = k_B N ln F. (4.10)

2. Consider N̄ uncorrelated particles on N lattice sites with E = cN̄ and N = bV:

Ω = N! / (N̄! (N−N̄)!), (4.11)
S = ln N! − ln N̄! − ln(N−N̄)!. (4.12)

Applying Stirling's formula on ln N! for large N,

ln N! ≈ N ln N − N, (4.13)

one finds

S = [N ln N − N] − [N̄ ln N̄ − N̄] − [(N−N̄) ln(N−N̄) − (N−N̄)] (4.14)
= N ln N − N̄ ln N̄ − (N−N̄) ln(N−N̄) (4.15)
= −N̄ ln(N̄/N) − (N−N̄) ln(1 − N̄/N) (4.16)
= N̄ ln(bV/N̄) − (bV − N̄) ln(1 − N̄/(bV)). (4.17)

Now it is obvious that S is extensive, since one can write it as N̄ times a function of the particle density n = N̄/V:

S = N̄ f(n). (4.18)

For small n we can approximate

S = N̄ [ln(b/n) + 1]. (4.19)

4.1.4. Partition function for ideal classical gas

To calculate the partition function for an ideal classical gas, we have to count the number of quantum states of N particles in a box with volume V (e.g. with periodic boundary conditions) for a fixed total energy E. We will approach this step by step.

58 4. Equilibrium Ensembles

1) Quantum states for one particle (N = 1)

τ ↔ p = (p_x, p_y, p_z). (4.20)

Assigning an occupation number n(p) to each of these states, we find

1 = Σ_p n(p), (4.21)
E = Σ_p n(p) E(p), (4.22)

with

E(p) = p²/2M. (4.23)

The partition sum can then be expressed as

Ω = Σ_{p: p²/2M = E} 1 = ∫d³p (∂³Ω/∂p_1 ∂p_2 ∂p_3) δ(p²/2M − E) δE (4.24)
= V/(2πℏ)³ ∫d³p δ(p²/2M − E) δE (4.25)
= V/(2πℏ)³ 4π √2 M^{3/2} E^{1/2} δE. (4.26)

2) Quantum states for two particles (N = 2)

Using the same definitions as in the one-particle case, we now get

2 = Σ_p n(p), (4.27)
E = Σ_p n(p) E(p). (4.28)

We should discriminate between fermions, for which n(p) ∈ {0,1}, and bosons, for which n(p) can take arbitrary values. In any case, we can split up the partition sum into two parts:

Ω = Ω_1 + Ω_2. (4.29)

Here, Ω_1 is the number of states with n(p_1) = 1, n(p_2) = 1 for some p_1 ≠ p_2, and Ω_2 is the number of states with n(p) = 2 for some p. These two contributions are given by

Ω_1 = (1/2) Σ_{p_1 ≠ p_2, E = (p_1² + p_2²)/2M} 1, (4.30)
Ω_2 = { 0 for fermions; Ω_2^(b) = Σ_{p, E = p²/M} 1 for bosons. } (4.31)

59 4.1. Microcanonical ensemble

The factor 1/2 in (4.30) is due to the fact that we are dealing with indistinguishable quantum mechanical particles. So, swapping both particles does not count as a different state. The full partition sum then takes the form

Ω = (1/2) Σ_{p_1, p_2, E = (p_1² + p_2²)/2M} 1 ± (1/2) Ω_2^(b) (+ for bosons, − for fermions). (4.32)

Note that in (4.32) the p_2 sum runs over all possible momentum values, whereas in (4.30) it excludes p_2 = p_1. If we neglect the ±(1/2) Ω_2^(b) term in (4.32), we arrive at the approximate description of a classical gas. Then there is no distinction between fermions and bosons any more. Performing the two momentum sums analogously to the one-particle case leads to

Ω = (1/2) (V/(2πℏ)³)² ∫d³p_1 d³p_2 δ((p_1² + p_2²)/2M − E) δE. (4.33)

3) Quantum states for N particles

The extension to the general N-particle case is straightforward:

N = Σ_p n(p), (4.34)
E = Σ_p n(p) E(p), (4.35)
Ω = (1/N!) (V/(2πℏ)³)^N ∫(Π_{j=1}^N d³p_j) δ(Σ_j p_j²/2M − E) δE. (4.36)

As in the two-particle case, equation (4.36) neglects the particular behaviour when two or more momenta coincide. Again, this is the classical approximation where no distinction between fermions and bosons is made.

4) Partition function

We define

ω = Ω/δE = ∂Ω/∂E. (4.37)

To find ω, we have to evaluate a 3N-dimensional integral. For this purpose, we will perform a change of variables:

p_{j,k} = √(2ME) x_{j,k}, so that p_j²/2M = x_j² E, (4.38)

and the constraint Σ_j p_j²/2M = E becomes

Σ_j x_j² = 1. (4.39)

60 4. Equilibrium Ensembles

Then

d³p_1 ⋯ d³p_N = (2ME)^{3N/2} d³x_1 ⋯ d³x_N, (4.40)
Σ_j p_j²/2M = Σ_j x_j² E. (4.41)

For our purposes, we also introduce the 3N-component vector y = (x_{1,1}, x_{1,2}, x_{1,3}, x_{2,1}, ...). Now, finally, we can calculate ω:

ω = (1/N!) (V/(2πℏ)³)^N (2ME)^{3N/2} ∫dy_1 ⋯ dy_{3N} δ(Σ_i y_i² E − E) (4.42)
= (1/N!) V^N (ME/(2π²ℏ²))^{3N/2} (1/E) F. (4.43)

The factor 1/E comes from applying the relation δ(E X) = (1/E) δ(X), and

F = ∫dy_1 ⋯ dy_{3N} δ(1 − Σ_{i=1}^{3N} y_i²) (4.44)

is the volume of the (3N−1)-dimensional unit sphere, which is equal to the surface of a 3N-dimensional unit ball. It can be calculated from the general formula

F = 2 π^{3N/2} / Γ(3N/2) (4.45)
= 2 π^{3N/2} / (3N/2 − 1)!. (4.46)

As a consistency check we compute (4.46) in the case N = 1:

F = 2 π^{3/2} / (1/2)! (4.47)
= (4/√π) π^{3/2} (4.48)
= 4π. q. e. d. (4.49)

Here we have used (1/2)! = Γ(3/2) = √π/2 in the second line. Using the definition (4.37), the partition sum is given by

Ω = 2/(N! (3N/2 − 1)!) (M/(2πℏ²))^{3N/2} E^{3N/2} V^N (δE/E). (4.50)
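Formula (4.45)/(4.46) is easy to check with the Gamma function, including the N = 1 value 4π used in (4.47)–(4.49):

```python
from math import gamma, pi

def F(N):
    """Surface factor 2 * pi**(3N/2) / Gamma(3N/2) from eq. (4.45)."""
    return 2 * pi ** (1.5 * N) / gamma(1.5 * N)

print(F(1), 4 * pi)   # N = 1 reproduces 4*pi, as in the consistency check
print(F(2))           # N = 2: 2*pi**3 / Gamma(3) = pi**3
```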

61 4.1. Microcanonical ensemble

Finally, using δE/E = 1/(3N), we arrive at

Z_mic = Ω = (1/(N! (3N/2)!)) (M/(2πℏ²))^{3N/2} E^{3N/2} V^N. (4.51)

N is typically of the order of the Avogadro number. Therefore, Z_mic increases extremely rapidly with E and V!

4.1.5. Entropy of an ideal classical gas

With the help of our favourite approximated Stirling formula (4.13) we can find the entropy of an ideal classical gas S:

S/k_B = (3N/2) ln(M/(2πℏ²)) + (3N/2) ln E + N ln V − ln N! − ln(3N/2)! (4.52)
= 3N { (1/2) ln(M/(2πℏ²)) + (1/3) ln V + (1/2) ln E − (1/3) ln N + 1/3 − (1/2) ln(3N/2) + 1/2 }. (4.53)

So,

S = 3N k_B { (1/3) ln(V/N) + (1/2) ln(2E/(3N)) + (1/2) ln(M/(2πℏ²)) + 5/6 }. (4.54)

3N = f is the number of degrees of freedom. In a compact form we can write

S = c̄ k_B f (4.55)

and therefore

Ω(E) ∝ E^{c f}, (4.56)

where c depends on the system. So, the partition sum increases rapidly with increasing energy. This form of the entropy and the partition sum applies to many other systems, too.

It is important to note that the factor 1/N! in Ω(E) was necessary for the entropy to be an extensive quantity. This factor is absent in the classical description! It was first introduced by Max Planck in 1900 and was heavily discussed back then.

4.1.6. Thermodynamic limit

The thermodynamic limit is described by

V → ∞, N → ∞, E → ∞. (4.57)

In this limit the useful quantities to describe the system are

n = N/V (particle density), (4.58)
ϵ = E/V (energy density), (4.59)

62 4. Equilibrium Ensembles

Table 4.1: Examples for extensive and intensive quantities.
Extensive: V, N, E, S — Intensive: p (pressure), n, ϵ, s = S/V (entropy density)

as these stay constant in the limiting process. So, mathematically more rigorously, we define the limiting process via an auxiliary parameter a:

V → aV, N → aN, E → aE, (4.60)

and consider the limit a → ∞. (4.61) We make the distinction between extensive quantities A, where

lim_{V→∞} A/V = finite ≠ 0, (4.62)

and intensive quantities B, where

lim_{V→∞} B = finite. (4.63)

Some extensive and intensive quantities are listed in Table 4.1. Again note that the entropy is an extensive quantity:

S = 3 k_B n V c̄(n, ϵ), (4.64)

where

c̄(n, ϵ) = (1/2) ln(2ϵ/(3n)) − (1/3) ln n + (1/2) ln(M/(2πℏ²)) + 5/6. (4.65)

On the other hand, ratios of extensive quantities, such as n, ϵ and s, are intensive quantities. By setting k_B = 1 and ℏ = 1 we find the entropy density to be given by

s = S/V (4.66)
= 3n { −ln n^{1/3} + ln(ϵ/n)^{1/2} + ln M^{1/2} + const }. (4.67)

So

s = (3n/2) { ln(Mϵ/n^{5/3}) + const }. (4.68)

In general we have to be careful with units, as we can only take logarithms of dimensionless numbers. But a quick dimensional analysis confirms this here (in natural units):

M ∼ [mass]¹, (4.69)
ϵ ∼ [mass]⁴, (4.70)
n ∼ [mass]³, (4.71)

so Mϵ/n^{5/3} is indeed dimensionless.

63 4.1. Microcanonical ensemble

4.1.7. The limit E → E_0

We now denote by E_0 the quantum mechanical energy of the ground state and choose δE to be an energy difference smaller than the difference between E_0 and the lowest excited state E_1:

δE < E_1 − E_0. (4.72)

So, if the ground state is non-degenerate, then

lim_{E→E_0} Ω(E) = 1 (4.73)

and as a consequence

lim_{E→E_0} S(E) = 0. (4.74)

On the other hand, if we have a q-fold degeneracy, then

lim_{E→E_0} S = k_B ln q ≪ k_B f. (4.75)

So, often the assumption is made that S vanishes in this limit, as ln q is tiny compared to the usually huge f. Then

S/V → 0 (4.76)

in the thermodynamic limit.

4.1.8. Derivatives of the entropy

Temperature

First recall that up to now the partition sum and the entropy have been functions of the state variables E, N, and V:

Z_mic = Ω(E,N,V), (4.77)
S(E,N,V) = k_B ln Ω(E,N,V). (4.78)

Now we take the derivative with respect to E at fixed particle number and volume:

1/T = ∂S/∂E |_{N,V}. (4.79)

This defines the temperature T = T(E,N,V). It can be used as a new thermodynamic state variable. So, we can make a variable substitution in order to express the entropy not as a function of E but of T: S = S(T,N,V). Moreover, we see that the temperature is an intensive quantity, as S and E are both extensive, so the ratio of their variations must be intensive.

Pressure: Now we take the derivative with respect to V at fixed particle number and energy:

    p/T = (∂S/∂V)_{E,N} .    (4.80)

This defines the pressure p. And it can immediately be concluded that the pressure is an intensive quantity. Let us now compute this for our ideal gas of particles:

    (∂S/∂V)_{E,N} = ∂/∂V [ k_B N ln(V/N) + f(E,N) ]    (4.81)
                  = k_B N/V    (4.82)
                  = p/T .    (4.83)

From this we can infer the equation of state for an ideal classical gas:

    pV = k_B N T .    (4.84)

In thermodynamics, T and p are determined by the change of the number of available states when changing E and V.

Chemical potential: Now we take the derivative with respect to N at fixed volume and energy:

    µ = −T (∂S/∂N)_{E,V} .    (4.85)

This defines the chemical potential µ.

We see that we can compute all these variables if we just have the entropy at hand. So, basically the entropy already contains the whole thermodynamics! From here on, one can develop the complete formal machinery of thermodynamics, resting upon the thermodynamic potentials, such as S, E, F and G, and their derivatives (Legendre transformations). Note that up to now T is only a definition, but it will be motivated in the following.

We conclude this section by giving a little summary about entropy in the case of a microcanonical ensemble description of a system in equilibrium:

- S = k_B ln Ω
- S is extensive, i.e. S ∝ V, S ∝ f ∝ N
- S/V → 0 for E → E₀ (3rd law of thermodynamics)
- 1/T = (∂S(E,N,V)/∂E)_{N,V} (important part of the 2nd law of thermodynamics)
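The relation p/T = ∂S/∂V can be made concrete with a small numerical sketch (an illustration, not part of the script): for the toy entropy S = N ln(V/N) + f(E,N) with k_B = 1, a finite-difference derivative reproduces the ideal gas law p = NT/V. The chosen numbers N, E, V, T are arbitrary.

```python
import math

def entropy(V, N, E):
    # Toy ideal-gas entropy (k_B = 1): S = N ln(V/N) + f(E,N);
    # the E-dependent part f is irrelevant for the V-derivative.
    return N * math.log(V / N) + 1.5 * N * math.log(E / N)

N, E, V, T = 1000.0, 1500.0, 10.0, 1.0   # with E = (3/2) N T
h = 1e-6
dSdV = (entropy(V + h, N, E) - entropy(V - h, N, E)) / (2 * h)  # = p/T
p = T * dSdV
# compare with the ideal gas law p = N T / V
print(p, N * T / V)
```

The E-dependent term drops out of the V-derivative, so any f(E,N) gives the same pressure.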

4.2. Canonical ensemble

Figure 4.1.: System S connected to a reservoir R with energy exchange

System in a heat bath

A canonical ensemble describes a system in a heat bath, intuitively speaking. We consider a closed system with a volume V. So, energy can be exchanged with the environment, but the particle number N remains constant. Additionally, there is a heat bath characterized by the temperature T. In the following we will derive all properties of this compound system.

The system S is connected to the reservoir (heat bath) R, with which it can exchange energy, but no particles. This is illustrated in Figure 4.1. But it is important that the total system is isolated. Moreover, we demand that

    V_r ≫ V_s ,    (4.86)
    N_r ≫ N_s ≫ 1 ,    (4.87)

i.e. the reservoir R is much larger than the system S concerning volume as well as particle number. The total system can be described by a microcanonical ensemble with total energy

    E_g = E_s + E_r = const.    (4.88)

However, E_r and E_s are not fixed individually. More precisely, the total system is described by a Hamiltonian

    H = H_s + H_r + H_int ,    (4.89)

where H_int describes the interaction between S and R. We assume this coupling to be weak:

    H_int ≪ H_s, H_r .    (4.90)

Now we denote by ψ_n the quantum states of S and by φ_ν the quantum states of R. They form a complete basis, i.e. a quantum state (microstate) of the full system can always be expressed as

    χ_τ = ψ_n φ_ν ,    (4.91)

where τ = (n,ν). Now the energies are given by

    E_{s,n} = ⟨ψ_n| H_s |ψ_n⟩ ,    (4.92)
    E_{r,ν} = ⟨φ_ν| H_r |φ_ν⟩ ,    (4.93)
    E_{g,nν} = E_{s,n} + E_{r,ν} = E_g .    (4.94)

And from the statistical postulate it follows that

    p_{nν} = Ω_g^{-1}(E_g) .    (4.95)

We now consider operators which only depend on n, i.e. they are independent of the properties of the reservoir:

    ⟨A⟩ = Σ_{n,ν} p_{nν} A_n    (4.96)
        = Σ_n p_n A_n ,    (4.97)

where

    p_n = Σ_ν p_{nν} .    (4.98)

In other words, we look at the reduced system S. This restriction to the subsystem leads to the concept of a canonical ensemble. Our task is now to compute the reduced probability distribution p_n. These considerations will lead us to the famous Boltzmann factor e^{−E_n/(k_B T)}.

Canonical partition function and Boltzmann factor

From (4.94), (4.95) and (4.98) we conclude that

    p_n = Ω_g^{-1}(E_g) Σ_ν δ(E_{s,n} + E_{r,ν} − E_g) δE_g .    (4.99)

This means that the sum only runs over such ν which fulfil the constraint (4.94). So, the sum is just the number of states in R. Therefore p_n will depend on E_{s,n} and the properties of the reservoir (T_r), and we conclude that

    p_n(E_s) = Ω_r(E_r)/Ω_g(E_g) |_{E_r = E_g − E_s} .    (4.100)

As we demanded E_s ≪ E_r, the energy E_r is close to E_g and we can make a Taylor expansion of ln Ω_r(E_r) around E_g:

    ln Ω_r(E_r) ≈ ln Ω_r(E_g) + (E_r − E_g) ∂/∂E_g ln Ω_r(E_g) ,    (4.101)

where E_r − E_g = −E_s and the derivative defines β. For E_r ≫ E_s this gets exact (ln Ω_r ∼ f_r ln E_r).

We have defined

    β = ∂/∂E ln Ω_r(E) |_{E=E_r} ,    (4.102)

where the evaluation at E_r instead of E_g is fine, as this difference is a higher order effect in the Taylor expansion. Note that β ≥ 0 for monotonically increasing Ω_r(E). So, β just depends on the density of states of the reservoir:

    β = (1/k_B) ∂S_r/∂E |_{E=E_r}    (4.103)

and we infer that

    β = 1/(k_B T_r) ,    (4.104)

with T_r being the temperature of the reservoir. Ω_r is now given by

    Ω_r(E_r) = Ω_r(E_g) e^{β(E_r − E_g)} ,    (4.105)

and inserting this into (4.100) leads to

    p_n(E_s) = Z_can^{-1} e^{−βE_s} .    (4.106)

Here,

    Z_can = Ω_g(E_g)/Ω_r(E_g)    (4.107)

is the canonical partition function. It can be computed from the normalisation of p_n:

    Σ_n p_n = 1 .    (4.108)

So,

    Z_can = Σ_n e^{−βE_n} .    (4.109)

Or, summing over energy values instead:

    Z_can = Σ_E Ω(E) e^{−βE}    (4.110)
          = ∫ dE ω(E) e^{−βE} .    (4.111)

So all in all,

    Z_can = Z_can(β,N,V) ,    (4.112)

where N and V are the particle number and the volume of S. So, Z_can only depends on R via β.
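As a minimal illustration of (4.106) and (4.109) (not part of the script), consider a two-level system with gap Δ: the partition sum is a sum of two Boltzmann factors, and the occupation ratio is p₁/p₀ = e^{−βΔ}. The values of Δ and β are arbitrary choices.

```python
import math

def boltzmann_probs(energies, beta):
    """Canonical probabilities p_n = exp(-beta*E_n) / Z_can (k_B = 1)."""
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)                 # Z_can = sum_n exp(-beta E_n)
    return Z, [w / Z for w in weights]

# two-level system with gap Delta at inverse temperature beta
Delta, beta = 1.0, 2.0
Z, (p0, p1) = boltzmann_probs([0.0, Delta], beta)
print(Z, p0, p1)   # p1/p0 = exp(-beta*Delta)
```

The same routine works for any discrete spectrum; only the list of energies changes.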

Relation between temperature and energy

Let us next compute the mean energy of our statistical ensemble. First, we recall the probability to be given by

    p_n = Z_can^{-1} e^{−βE_n} ,    (4.113)

where

    Z_can = Σ_n e^{−βE_n}    (4.114)

and, using the convention k_B = 1,

    β = 1/T = 1/T_r .    (4.115)

Then we find

    Ē = ⟨E⟩ = Σ_n p_n E_n    (4.116)
      = Σ_n E_n e^{−βE_n} / Σ_n e^{−βE_n}    (4.117)
      = −∂/∂β ln Σ_n e^{−βE_n} .    (4.118)

So, the mean energy is given by

    Ē = −∂/∂β ln Z_can .    (4.119)

Energy distribution

The probability density to find an energy E is

    w(E) = ω(E) e^{−βE}    (4.120)
         ∝ E^{c_s f_s} e^{−βE} ,    (4.121)

where f_s, the number of degrees of freedom, is usually a huge number. Then this distribution describes an extremely sharp peak, as sketched in Figure 4.2. Say that w(E) is given by a Gaussian distribution:

    w(E) = const · e^{−(E−Ē)²/(2ΔE²)} .    (4.122)

Then we can determine Ē from the maximum of w or, respectively, ln w:

    ln w = −(E−Ē)²/(2ΔE²) + const    (4.123)
         = c_s f_s ln E − βE ,    (4.124)

where we inserted (4.121) in the second line.
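The identity Ē = −∂ ln Z_can/∂β can be checked numerically (an illustrative sketch, not from the script): compare the directly computed ensemble average Σ p_n E_n with a finite-difference derivative of ln Z. The toy spectrum and β are arbitrary.

```python
import math

def lnZ(energies, beta):
    return math.log(sum(math.exp(-beta * E) for E in energies))

def mean_energy(energies, beta):
    Z = sum(math.exp(-beta * E) for E in energies)
    return sum(E * math.exp(-beta * E) for E in energies) / Z

energies = [0.0, 0.3, 1.1, 2.5]   # arbitrary toy spectrum
beta, h = 1.7, 1e-6
E_direct = mean_energy(energies, beta)
E_deriv = -(lnZ(energies, beta + h) - lnZ(energies, beta - h)) / (2 * h)
print(E_direct, E_deriv)          # agree up to finite-difference error
```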

Figure 4.2.: Energy distribution w(E), sharply peaked around Ē

We get

    0 = ∂ ln w/∂E |_{E=Ē} = c_s f_s/Ē − β .    (4.125)

This means that

    Ē = c_s f_s/β = c_s f_s T ,    (4.126)

and therefore

    T = Ē/(c_s f_s) ,    (4.127)

so the temperature can be interpreted as something like the energy per degree of freedom. The energy dispersion can be inferred from the second derivative:

    1/ΔE² = −∂² ln w/∂E² |_{E=Ē} = c_s f_s/Ē² .    (4.128)

So,

    ΔE = Ē/√(c_s f_s) ,    (4.129)
    ΔE/Ē = 1/√(c_s f_s) .    (4.130)

In the thermodynamic limit the relative energy fluctuations are negligible. So, we can assume E = Ē and associate statistical quantities with actual numbers this way. This means that we can actually speak of the energy in a canonical ensemble. Then microcanonical and canonical ensemble are equivalent!
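The scaling ΔE/Ē = 1/√(c_s f_s) can be verified by direct integration of w(E) ∝ E^f e^{−βE} (an illustrative sketch, not from the script; the grid parameters are arbitrary). For this distribution one finds exactly Ē = (f+1)/β and ΔE/Ē = 1/√(f+1), which matches (4.130) for large f.

```python
import math

def moments(f, beta, n_grid=200000, Emax_factor=30.0):
    """Mean and dispersion of w(E) ~ E^f exp(-beta E) by a Riemann sum."""
    Emax = Emax_factor * (f + 1) / beta
    dE = Emax / n_grid
    norm = mean = sq = 0.0
    for i in range(1, n_grid + 1):
        E = i * dE
        w = math.exp(f * math.log(E) - beta * E)   # avoids overflow of E**f
        norm += w; mean += E * w; sq += E * E * w
    mean /= norm
    var = sq / norm - mean * mean
    return mean, math.sqrt(var)

f, beta = 50.0, 1.0
Ebar, dE = moments(f, beta)
print(Ebar, dE / Ebar, 1 / math.sqrt(f + 1))   # relative width ~ 1/sqrt(f)
```

Increasing f makes the peak sharper, which is the statement "fluctuations vanish in the thermodynamic limit" in miniature.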

Equivalence of microcanonical and canonical ensemble

Different quantities characterizing the microcanonical or canonical ensemble can be related, as shown in Table 4.2:

    Microcanonical | E_mic | Z_mic(E) = Ω(E)
    Canonical      | β     | Z_can(β)

Table 4.2.: Related quantities in microcanonical and canonical ensemble

Note that the energy E is fixed in the microcanonical case. The quantities are related via

    Ē = −∂/∂β ln Z_can(β) ,    (4.131)
    β_mic = ∂/∂E ln Z_mic(E) .    (4.132)

The question is: if we identify E_mic = Ē_can, can we also conclude β_mic = β_can, i.e. T_s = T_r?

Temperature

Microcanonical ensemble (T = T_s):

    β_mic = (∂ ln Ω/∂E)_{N,V} = 1/(k_B T) .    (4.133)

Here, T is an absolute temperature. So, its units are Kelvin, or eV if we set k_B = 1. For example 300 K ≈ 1/40 eV. Recall that T is an intensive quantity and that it can be interpreted as an energy per degree of freedom via

    T = β^{-1} = Ē_s/(c_s f_s) .    (4.134)

In the case of an ideal gas we can compute T from the entropy

    S = (3N/2) ln(2E/3N) + f(V,N) .    (4.135)

This means that

    1/T = (∂S/∂E)_{N,V} = 3N/(2E) ,    (4.136)

and therefore the average (kinetic) energy per degree of freedom is

    E/(3N) = E/f = T/2 ,    (4.137)

and per particle

    E/N = (3/2) T ,    (4.138)

as each particle has 3 degrees of freedom.

Canonical ensemble (T = T_r): T is the characteristic quantity for the canonical ensemble, i.e. it is a property of the total system. So to speak, we can forget the concept of the reservoir. T can be measured by measuring the mean energy Ē in the system. As a next step, we will show that T_r = T_s.

Temperature and energy distribution in subsystems

Consider a system that is divided into two arbitrarily sized subsystems s₁ and s₂ with energies E₁ and E₂, respectively. Using the notation E = E₁ and E_g = E₁ + E₂, the probability to find the system in a state with a certain energy E is

    w(E) = Ω₁(E) Ω₂(E_g − E) Ω_g^{-1}(E_g) .    (4.139)

As said before, w(E) is a sharply peaked distribution, dominated by E_max = Ē. It is

    0 = ∂/∂E ln w |_{E=Ē}    (4.140)
      = ∂/∂E ln Ω₁(E) |_{E=Ē} + ∂/∂E ln Ω₂(E_g − E) |_{E=Ē}    (4.141)
      = β₁ − β₂    (4.142)
      = 1/T₁ − 1/T₂ ,    (4.143)

where we used the definition of β in the third equation. The minus sign in front of β₂ arises from the derivative of the argument E_g − E of Ω₂ with respect to E. So, we find

    T₁ = T₂ .    (4.144)

We conclude that we can define the temperature of any ensemble with small fluctuations of E around Ē via

    1/T = ∂/∂E ln Ω(E) |_{E=Ē} .    (4.145)

But note that, in general, the temperature is determined by the energy dependence of the partition sum.

Thermal equilibrium

0th law of thermodynamics: If two systems are in thermal equilibrium with a third system, they are also in mutual thermal equilibrium. There exists a variable characterising the equilibrium state, the temperature, that has the same value for the two systems. Of course, this is also valid for several systems in thermal equilibrium (T₁ = T₂ = ... = T_n).

Figure 4.3.: System S connected to a reservoir R with energy and particle exchange

4.3. Grand canonical ensemble

The three different types of statistical ensembles are, next to the volume V, characterized by different quantities:

    Microcanonical: N, E
    Canonical: N, T
    Grand canonical: µ, T

The grand canonical ensemble describes an open system S in equilibrium that is able to exchange energy and particles with a reservoir R, as illustrated in Figure 4.3. For example, one can consider a part of a gas volume. We will treat it in the same manner as we treated the canonical ensemble. The probability for a certain state is now given by

    p_n(E_n,N_n) = Z_gc^{-1} e^{−(βE_n + αN_n)} ,    (4.146)

where

    β = ∂/∂E ln Ω_r(E_g,N_g) ,    (4.147)
    α = ∂/∂N ln Ω_r(E_g,N_g) .    (4.148)

We define the chemical potential µ as

    µ = −α/β .    (4.149)

Then the probability takes the form

    p_n(E_n,N_n) = Z_gc^{-1} e^{−β(E_n − µN_n)} ,    (4.150)

and the grand canonical partition function is given by

    Z_gc = Σ_{E,N} Ω(E,N,V) e^{−β(E − µN)}    (4.151)
         = ∫ dE ∫ dN ω(E,N,V) e^{−β(E − µN)} ,    (4.152)

where the relation between partition sum and density of states is

    Ω(E,N,V) = ω(E,N,V) δE δN .    (4.153)

The averaged quantities can again be extracted via derivatives of the logarithm of the partition function:

    Ē − µN̄ = −∂/∂β ln Z_gc(β,µ,V) ,    (4.154)
    N̄ = (1/β) ∂/∂µ ln Z_gc(β,µ,V) .    (4.155)

4.4. Relations for entropy

Entropy in canonical ensemble

We can approximate the probability density ω(E) in the partition function

    Z_can = ∫ dE ω(E) e^{−βE}    (4.156)

by means of a saddle point expansion:

    Z_can = ω(Ē) e^{−βĒ} ∫ dE e^{−(a/2)(E−Ē)²} ,    (4.157)

where the Gaussian integral gives √(2π/a) and

    a = c f/Ē² .    (4.158)

Then the partition function takes the form

    Z_can = Ω(Ē) e^{−βĒ} (Ē/δE) √(2π/(c f)) ,    (4.159)

where the last factors are O(1), and its logarithm is, using E = Ē,

    ln Z_can = ln Ω(E) − βE + ... ,    (4.160)

where both retained terms are of order f and the dots denote subleading corrections. Then, altogether, we find the entropy to be given by

    S = ln Z_can + βĒ = (1 − β ∂/∂β) ln Z_can .    (4.161)

Note that, as said before, the difference between the fixed energy E for the microcanonical and Ē = ⟨E⟩ for the canonical ensemble is negligible. It vanishes in the thermodynamic limit.

Entropy in grand canonical ensemble

By the same reasoning as before we get

    ln Z_gc = ln Ω(E) − β(E − µN) ,    (4.162)

and therefore the entropy is

    S = ln Z_gc + β(Ē − µN̄) = (1 − β ∂/∂β) ln Z_gc .    (4.163)

General expression for entropy

In general, the entropy can be expressed in the following form:

    S = −Σ_τ p_τ ln p_τ ,    (4.164)

where we had τ → n in the previous cases. The validity of this expression can be checked easily. In the microcanonical case it is

    p_τ = 1/Ω ,    (4.165)

so we have

    S = −Σ_τ (1/Ω) ln(1/Ω)    (4.166)
      = ln Ω (1/Ω) Σ_τ 1    (4.167)
      = ln Ω ,    (4.168)

since Σ_τ 1 = Ω. And in the canonical case it is

    p_τ = (1/Z_can) e^{−βE_τ} ,    (4.169)
    ln p_τ = −ln Z_can − βE_τ ,    (4.170)

so we have

    S = −Σ_τ p_τ ln p_τ    (4.171)
      = Σ_τ p_τ (ln Z_can + βE_τ)    (4.172)
      = ln Z_can + βĒ .    (4.173)

Similarly, it can be checked in the grand canonical case.
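The canonical check S = −Σ p_τ ln p_τ = ln Z_can + βĒ can also be carried out numerically (an illustrative sketch, not from the script; the spectrum and β are arbitrary):

```python
import math

def canonical_entropy_check(energies, beta):
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    p = [w / Z for w in weights]
    S_gibbs = -sum(q * math.log(q) for q in p)      # S = -sum_tau p ln p
    E_mean = sum(q * E for q, E in zip(p, energies))
    S_thermo = math.log(Z) + beta * E_mean          # S = ln Z + beta*E
    return S_gibbs, S_thermo

S1, S2 = canonical_entropy_check([0.0, 0.5, 1.3, 2.0, 4.1], beta=0.8)
print(S1, S2)   # identical up to rounding
```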

Entropy and density matrix

The entropy is related to the density matrix via

    S = −tr(ρ ln ρ) .    (4.174)

This can also be considered as a general definition of the entropy, as it coincides with S = ln Ω(E) for microcanonical, canonical and grand canonical ensembles. Most importantly, this expression is valid in any basis. Therefore it also holds when diagonalizing ρ, which proves (4.164) more generally:

    S = −Σ_τ ρ_ττ ln ρ_ττ    (4.175)
      = −Σ_τ p_τ ln p_τ .    (4.176)
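Basis independence of S = −tr(ρ ln ρ) can be illustrated with a 2×2 mixed state (a sketch, not from the script): rotating ρ = diag(0.7, 0.3) by an arbitrary angle leaves its eigenvalues, and hence its entropy, unchanged. The state and angle are arbitrary choices.

```python
import math

def eigvals_sym2(a, b, c):
    """Eigenvalues of the 2x2 symmetric matrix [[a, b], [b, c]]."""
    m = 0.5 * (a + c)
    d = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return m - d, m + d

def vn_entropy(a, b, c):
    """S = -tr(rho ln rho) via the eigenvalues p_tau of rho."""
    return -sum(p * math.log(p) for p in eigvals_sym2(a, b, c) if p > 1e-15)

# mixed state, diagonal in its own basis: rho = diag(0.7, 0.3)
a, b, c = 0.7, 0.0, 0.3
# the same state in a basis rotated by angle t: rho' = R rho R^T
t = 0.6
ct, st = math.cos(t), math.sin(t)
a2 = ct * ct * a + st * st * c
c2 = st * st * a + ct * ct * c
b2 = ct * st * (a - c)
S_diag, S_rot = vn_entropy(a, b, c), vn_entropy(a2, b2, c2)
print(S_diag, S_rot)   # identical: the entropy is basis independent
```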

76

5. Thermodynamics

5.1. Thermodynamic potentials

State variables, equations of state

Classical thermodynamics describes equilibrium states and transitions from one equilibrium state to another. Historically it was developed before statistical physics, but it can be derived as an effective macroscopic theory from the microscopic statistical theory. Without any doubt it has an immense importance regarding technical aspects. It is mostly axiomatic, based on the laws of thermodynamics and their consequences.

Classical thermodynamics only uses a few macroscopic variables (both extensive and intensive), such as E, T, S, N, µ, V, p. We distinguish state variables or independent thermodynamic variables, which characterise the ensemble, and dependent thermodynamic variables. For the different types of ensembles (systems) this is illustrated in Table 5.1. The thermodynamic limit provides a functional relationship between the state variables. Usually, no distinction between average values and fixed values is made, because E ≈ Ē, etc.

There are several practical questions which we will try to answer in this chapter:

How much energy should be deposited in a system so that its temperature rises by a certain amount, provided the volume stays constant? How much will the pressure increase during this process? How is heat transformed into mechanical energy? What is the efficiency of machines? When is a process reversible and when not?

The dependence of the dependent variables on the state variables is called equation of state.

Examples

1. Equation of state for an ideal gas:

    pV = n_M R T = k_B T N    (5.1)

(n_M: number of moles; R: gas constant, R = 8.31 J/(mol K)). This follows from

    (∂S/∂V)_{E,N} = p/T .    (5.2)

    Ensemble                                  | State variables  | Dependent variables
    microcanonical ensemble (isolated system) | ln Z_mic(E,N,V)  | S, T, µ, p
    canonical ensemble (closed system)        | ln Z_can(T,N,V)  | S, E, µ, p
    grand canonical ensemble (open system)    | ln Z_gc(T,µ,V)   | S, E, N, p

Table 5.1.: Different kinds of ensembles (systems) and their corresponding state and dependent variables

It can be concluded that (with k_B = 1)

    p = Tn .    (5.3)

2. Van der Waals equation of state for a real gas:

    p + a n² = nT/(1 − bn)    (5.4)

(a, b: constants specific to the gas).

The task of statistical physics is to explain the equations of state and to derive their functional form. On the other hand, classical thermodynamics examines the consequences of their existence. There are seven variables and only four equations, which automatically makes three of the variables independent (state variables), but there also are additional thermodynamic conditions. As already mentioned, the variables split into two types: extensive (E, N, V, S) and intensive (T, µ, p). However, the choice of the state variables cannot be arbitrary: at least one of them has to be extensive.

Example: Grand canonical ensemble with state variables T, µ, V:

    S = V f_S(T,µ)    (5.5)
    E = V f_E(T,µ)    (5.6)
    N = V f_N(T,µ)    (5.7)
    p = f_p(T,µ)    (5.8)

Therefore, (p,T,µ) cannot be used as three independent variables at the same time! If the particle number is not conserved (e.g. photons), there are only two state variables, as µ = 0. Then

    p = f_p(T) .    (5.9)

So, there is a relation between pressure and temperature. Defining ϵ = E/V, we find

    p = f_p(ϵ) ,    (5.10)

and in the special case of photons it is

    p = ϵ/3 .    (5.11)
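The van der Waals correction to the ideal gas law can be made tangible with a short sketch (illustrative only, not from the script; the values of n, T, a, b are arbitrary): at low density the two equations of state nearly coincide, as expected.

```python
def pressure_ideal(n, T):
    # ideal gas equation of state (k_B = 1): p = n T
    return n * T

def pressure_vdw(n, T, a, b):
    # van der Waals in density form: p = nT/(1 - bn) - a n^2
    return n * T / (1.0 - b * n) - a * n * n

n, T = 0.01, 1.0
p_id = pressure_ideal(n, T)
p_vdw = pressure_vdw(n, T, a=1.0, b=1.0)
print(p_id, p_vdw)   # corrections are small at low density
```

At higher densities the excluded-volume term 1/(1 − bn) and the attraction term −an² produce the familiar deviations from ideality.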

Thermodynamic potentials and partition functions

    S = S(E,N,V) = k_B ln Z_mic(E,N,V)    (entropy)    (5.12)
    F = F(T,N,V) = −k_B T ln Z_can(T,N,V)    (free energy)    (5.13)
    J = J(T,µ,V) = −k_B T ln Z_gc(T,µ,V)    (Gibbs potential)    (5.14)

    (∂S/∂E)_{N,V} = 1/T    (5.15)
    (∂S/∂N)_{E,V} = −µ/T    (5.16)
    (∂S/∂V)_{E,N} = p/T    (5.17)

At this point, these are just definitions. Their physical meaning will be explained later. Note that, traditionally, in thermodynamics one denotes the fixed quantities explicitly. These 3 relations together with S(E,N,V) lead to 4 relations for the 7 variables in total.

We now use

    S = ln Z_can + βĒ    (5.18)

and β = 1/T to find

    −F/T = ln Z_can = S − E/T ,    (5.19)

and therefore

    F = E − TS .    (5.20)

On the other hand, using

    S = ln Z_gc + β(E − µN) ,    (5.21)

we have

    −J/T = ln Z_gc = S − (E − µN)/T ,    (5.22)

and therefore

    J = E − µN − TS    (5.23)

or

    J = F − µN .    (5.24)

Differential relations

Recall the following definitions:

    1/T = (∂S/∂E)_{N,V} ,    (5.25)
    −µ/T = (∂S/∂N)_{E,V} ,    (5.26)
    p/T = (∂S/∂V)_{E,N} .    (5.27)

With these, one arrives at the following expression for the differential of S:

    dS = (1/T) dE − (µ/T) dN + (p/T) dV ,    (5.28)

and of E:

    dE = T dS + µ dN − p dV ,    (5.29)

where T dS = δQ is the heat, µ dN = δE_N accounts for added particles, and −p dV = δA is the work.

1st law of thermodynamics: conservation of energy (Helmholtz, Mayer). A thermodynamic system can be characterized by its energy. The energy can be changed by adding heat, work or matter. For an isolated system the energy is conserved.

It is very important to note that heat, work and added particles are not properties of an equilibrium system! Statements like "heat is added / lost" make no sense for a single equilibrium state! But these quantities may describe differences between two equilibrium systems.

Heat: For example, the heat is defined as

    δQ = E₂ − E₁ for fixed V, N .    (5.30)

Therefore it accounts for a change in temperature:

    δQ = E(T₂,V,N) − E(T₁,V,N) .    (5.31)

Again, note that this is no property of a system. Neither is it a thermodynamic variable. But it can be used as a thermodynamic definition of the entropy:

    δQ = T dS .    (5.32)

Figure 5.1.: Mechanical work: a piston of area a, displaced by ds against the pressure p

Mechanical work

A definition of mechanical work can be given via a slow change of volume at fixed S, N (compare Figure 5.1):

    δA = F⃗_p · ds⃗    (5.33)
       = F_p ds    (5.34)
       = p a ds    (5.35)
       = −p dV ,    (5.36)

where p is the pressure and a the area; pushing the piston in by ds decreases the volume, dV = −a ds. We can identify different signs of δA with different physical situations:

    δA < 0 : the system does work,    (5.37)
    δA > 0 : the system absorbs mechanical energy.    (5.38)

In general, δA describes the change in energy by work at fixed S, N. For example, it also describes the change of an external magnetic field instead of the change of volume. We note that the statistical definition of pressure coincides with the mechanical definition "force per area", but the statistical one is more general!

Legendre transform

The change from S to F at fixed V, N is one example of a Legendre transform. In this case, the conjugate variables are E and β = 1/T. To be formally more precise, it is actually a transform between the following quantities:

    S(E) → βF(β) .    (5.39)

Let us look at this in more detail. The new variable is β:

    ∂S/∂E = β .    (5.40)

Then, the Legendre transform of S is given as

    βF = βE − S ,    (5.41)

and therefore

    F = E − TS .    (5.42)

The derivative of βF with respect to β is

    ∂(βF)/∂β = E + β ∂E/∂β − (∂S/∂E)(∂E/∂β)    (5.43)
             = E + β ∂E/∂β − β ∂E/∂β    (5.44)
             = E ,    (5.45)

the original variable, as it should be. On the other hand, we find, using (5.20),

    (∂F/∂T)_{N,V} = −S ,    (5.46)

and therefore

    dF = −S dT + µ dN − p dV .    (5.47)

Similarly, it is

    dJ = −S dT − N dµ − p dV ,    (5.48)

which can be concluded using

    ∂/∂µ ln Z_gc = βN̄ ,    (5.49)
    βJ = −ln Z_gc ,    (5.50)

so

    ∂J/∂µ = −N̄ ,    (5.51)

as well as

    S = (1 − β ∂/∂β) ln Z_gc    (5.52)
      = (1 − β ∂/∂β)(−βJ)    (5.53)
      = −βJ + βJ + β² ∂J/∂β    (5.54)
      = −∂J/∂T ,    (5.55)

where in the last step we used dT/dβ = −1/β², i.e. β² ∂/∂β = −∂/∂T.

Now we have everything at hand to construct thermodynamics:
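The Legendre-transform relations F = E − TS and S = −∂F/∂T can be checked on a toy model (an illustrative sketch, not from the script): for an ideal gas, keeping only the T-dependence of ln Z_can = (3N/2) ln T + const (k_B = 1), a finite-difference derivative of F(T) = −T ln Z reproduces S = ln Z + E/T with E = (3/2)NT. The value of N is arbitrary.

```python
import math

N = 100.0

def lnZ(T):
    # toy canonical partition function of an ideal gas (k_B = 1),
    # keeping only the T-dependence: ln Z = (3N/2) ln T + const
    return 1.5 * N * math.log(T)

def F(T):
    return -T * lnZ(T)            # F = -T ln Z_can

def S_from_lnZ(T):
    # S = (1 - beta d/dbeta) ln Z = ln Z + E/T with E = (3/2) N T
    return lnZ(T) + 1.5 * N

T, h = 2.0, 1e-6
S_deriv = -(F(T + h) - F(T - h)) / (2 * h)   # S = -dF/dT
print(S_deriv, S_from_lnZ(T))
```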

1. As the basis: S = ln Ω, the central probabilistic quantity (equivalent to S = −Σ_τ p_τ ln p_τ = −tr(ρ ln ρ)).

2. Definitions of T, µ, p as derivatives of S:

    dS = (1/T) dE − (µ/T) dN + (p/T) dV .    (5.56)

3. Computation of the partition function for the given system, S(E,N,V), or equivalently F(T,N,V) or J(T,µ,V) (aim of statistical physics: "learn counting").

Note that no axioms other than these 3 are needed for thermodynamics! All fundamental laws of thermodynamics can be derived from them. We have already shown this for the 0th and the 1st law.

5.1.2. Properties of entropy

1) Additivity

Consider two systems with energies E₁ and E₂ (neglecting again H_int); then

    Ω_g = Ω₁ Ω₂ ,    (5.57)
    ln Ω_g = ln Ω₁ + ln Ω₂ ,    (5.58)
    S_g = S₁ + S₂ .    (5.59)

2) Maximum principle for entropy

We permit energy exchange between these two systems, with the constraint

    E₁ + E₂ = E_g .    (5.60)

We define E₁ = E and want to find the condition for the maximum of the probability distribution w(E), which describes the equilibrium state:

    w(E) = Ω₁(E₁) Ω₂(E₂) / Ω_g(E_g) |_{E₁+E₂=E_g} .    (5.61)

From the considerations in 1) we can conclude that a maximum of w(E) corresponds to a maximum of ln Ω₁ + ln Ω₂, which corresponds to a maximum of S₁ + S₂ and therefore to a maximum of S_g. This means that in equilibrium the system chooses E₁ such that the total entropy is maximal, with the constraint E₁ + E₂ = E_g. In equilibrium, entropy is maximal subject to constraints. Of course, the individual energies E₁, E₂ can be computed from the maximum of S₁ + S₂ with the condition E₁ + E₂ = E_g.

Another property of the entropy can be inferred from this. Let us start with two isolated systems in equilibrium, with arbitrary energies E₁⁽⁰⁾, E₂⁽⁰⁾ and E_g = E₁⁽⁰⁾ + E₂⁽⁰⁾. They have entropies S₁⁽⁰⁾, S₂⁽⁰⁾. Then we bring them into thermal contact (i.e. we allow energy exchange). This leads to an energy flow E₁⁽⁰⁾ → E₁, E₂⁽⁰⁾ → E₂ until S₁ + S₂ is maximal for a given E_g. This means that

    S₁ + S₂ ≥ S₁⁽⁰⁾ + S₂⁽⁰⁾ .    (5.62)

Entropy can only increase! This is a key ingredient of the 2nd law of thermodynamics!

Zero of absolute temperature

Approaching its ground state, the partition sum of a system will approach a value of the order of 1:

    Ω →_{E→E₀} Ω(E₀) = Ω₀ = O(1) ,    (5.63)

and therefore the entropy will approach 0:

    S →_{E→E₀} 0    (5.64)

(or S → S₀ with S₀/V → 0). Using

    ∂ ln Ω/∂E = (1/Ω) ∂Ω/∂E →_{E→E₀} (1/Ω₀) ∂Ω/∂E ,    (5.65)

we find, by the definition of the absolute temperature,

    1/T = ∂ ln Ω/∂E → (1/Ω₀) ∂Ω/∂E .    (5.66)

Choosing E₀ = 0, the number of states grows enormously fast with the energy, Ω ∼ (E/E_f)^f, so that

    1/T = ∂ ln Ω/∂E = f/E ,

and T = E/f → 0 for E → E₀. This means that in the thermodynamic limit the ground state temperature approaches 0, or, in other words,

    T(E = E₀) = 0 .    (5.71)

The properties (5.63) and (5.67) of Ω(E) are illustrated in Figure 5.2.

Figure 5.2.: Behaviour of the partition sum at zero temperature

Figure 5.3.: Entropy for T → 0⁺

3rd law of thermodynamics

3rd law of thermodynamics (empirically, by Planck (Nernst)): For T → 0⁺ the entropy obeys S → S₀, where S₀ is independent of the thermodynamic variables of the system. So, the entropy for T → 0⁺ will behave as sketched in Figure 5.3. An example for a small residual entropy S₀ would be the hyperfine splitting in atoms.

5.2. Reversible and irreversible processes

Subsystems and constraints

We consider an isolated system in equilibrium with fixed energy E (or interval [E, E + δE], respectively). Furthermore, we recall that Ω = Z_mic is the number of (micro-)states with energy E which the system can occupy, and that the entropy is defined as S = k_B ln Ω. In general, the system is determined by some constraints. These correspond to particular parameters:

    Ω = Ω(E, y₁, ..., y_k)    (5.72)

(or intervals [y₁, y₁ + δy₁], ..., [y_k, y_k + δy_k]).

Figure 5.4.: Examples of constraints: (a) Example 1, a gas confined to the volume V₁ by a wall, with vacuum beyond; (b) Example 2, gas 1 (E₁,V₁,N₁) separated from gas 2 by a thermally isolating wall

Examples

1. Fixed volume V₁ (Figure 5.4a): V₁ is the volume occupied by the gas.

2. Fixed energy of a subsystem E₁ (Figure 5.4b): E₁ is the energy of gas 1, V₁ its volume and N₁ its particle number. This example can be considered as two isolated subsystems:

    Ω_g(E,E₁,V,V₁,N,N₁) = Ω(E₁,V₁,N₁) Ω(E−E₁, V−V₁, N−N₁) .    (5.73)

Removing the constraint

After sufficient time the system will reach a new equilibrium state:

    Ω_i → Ω_f .    (5.74)

Thereby, one always has

    Ω_f ≥ Ω_i .    (5.75)

All the states compatible with the constraint are still possible. But additionally, new states with fixed energy E become possible, which previously have been excluded by the constraint. In general, removing the constraint in an isolated system leads to a final state whose entropy is larger than or equal to the entropy of the initial state:

    S_f ≥ S_i .    (5.76)

(The final and initial states are equilibrium states. It is not important what happens in between.)

Examples

1. Removing the wall:

    Ω_i ∝ V₁^N ,    (5.77)
    Ω_f ∝ V^N ,    (5.78)
    Ω_f ≫ Ω_i for V > V₁ .    (5.79)

2. The wall becomes thermally conductive (each subsystem is a canonical ensemble):

    Ω_i = Ω_{i1} Ω_{i2} ,    (5.80)
    Ω_{i1} ≈ (1/N₁!) (1/(3N₁/2)!) V₁^{N₁} (m/2π)^{3N₁/2} E₁^{3N₁/2 − 1} δE₁    (dilute gas at high energy)    (5.81)
    Ω_f ≈ Ω_i |_{E_i → Ē_i}    (Ω_i ∼ w(E₁), Ω_f ∼ w(Ē₁))    (5.82)
    Ω_f ≈ Ω_i for E₁ ≈ E₁,max = Ē₁    (recall Figure 4.2)    (5.83)
    Ω_f ≫ Ω_i for T₁ ≠ T₂ .    (5.84)

Increase of entropy by removing the constraints

The probability that the system without constraints, after sufficiently long time, still stays in its initial state is

    w_i = Ω_i/Ω_f .    (5.85)

That is because 1/Ω_f is the probability that the system is in one particular microstate (of the system without constraints) and Ω_i is the number of microstates satisfying the initial conditions. Of course,

    w_i ≪ 1 if Ω_i ≪ Ω_f .    (5.86)

Examples

1. Probability that the gas still occupies a volume V₁ after removing the wall, for V = 2V₁:

    w_i = (1/2)^N    (5.87)

(a factor 1/2 for each molecule).

2. Probability that the energy of gas 1 remains E₁ after the wall becomes thermally conductive:

    w_i = w(E₁) δE .    (5.88)

For the interval [E_max − δE/2, E_max + δE/2] it is

    w(E_max) δE = w(Ē) δE ≈ 1 .    (5.89)

For E₁ ≠ Ē we find w(E₁) to be exponentially suppressed.

Let us look at the effect of removing the constraint on a general variable y (e.g. y = E₁, the energy in the subvolume V₁). Beforehand, it has a fixed initial value y_i. Afterwards, it satisfies a probability distribution characterized by

    w(y) δy ∝ Ω(y) ,    (5.90)

where, in general, ȳ ≠ y_i, with ȳ being the position of the maximum of w(y), ∂w/∂y |_{y=ȳ} = 0. (The generalization to multiple variables y₁, ..., y_k is straightforward.) Now we consider the limit

    N → ∞ .    (5.91)

This will lead to a sharp value y_f = ȳ = y_max (again, generally y_f ≠ y_i). y_f is determined by the entropy becoming maximal (Ω(y_max)):

    S(y_f) maximal.    (5.92)

Without constraints the parameters will take such values that the entropy becomes maximal (at given boundary conditions like V, N, E of the system).

Example: y is the energy in a subvolume. Beforehand, S(E,N,V; y) is calculated with the given constraints. Afterwards, we have to satisfy (∂S/∂y)_{E,N,V} = 0 to find y_max.

The entropy of a system grows when approaching the equilibrium state. Note that this is a direct consequence of our two postulates! (It is not necessarily valid if one of the basic postulates is not obeyed!) Initial states whose probability distribution is not a uniform distribution over all possible microstates (equilibrium distribution) are very unlikely for large N:

    w_i = Ω_i/Ω_f ≪ 1 ,    (5.93)
    S_i < S_f .    (5.94)

So, from the equilibrium state point of view, approaching the equilibrium state corresponds to the transition from an unlikely to a likely state (content of the first postulate).

2nd law of thermodynamics

The entropy of an isolated system cannot decrease.

Historical formulations:

Clausius: Heat cannot flow from lower to higher temperature.

89 5.2. Reversible and irreversible processes Kelvin: It is impossible to produce work continuously by lowering the temperature of a system, without other changes in the environment. In other words, the latter formulation states the impossibility of perpetuum motion machines. Short summary of the laws of thermodynamics 0 th law: T, T 1 = T 2 1 st law: E, conservation of E 2 nd law: S, S cannot decrease 3 rd law: S min = Reversible processes for isolated systems A process (i) (f) in an isolated system is reversible, if the process (f) (i) is possible, too. Otherwise, the process is irreversible. In an isolated system an increase of entropy indicates an irreversible process. On the other hand, a reversible process is, concerning the isolated total system, adiabatic (the entropy of the total system stays constant) Reversible processes for open systems It must be possible, after performing some process, to perform its reversed process, so that the system gets back into its initial state, without changing the environment. The Carnot cycle is an example of such a reversible process (if it is quasi-static). The temperature difference between the reservoirs is lowered and work is performed. Its reversed process describes a fridge: work is performed in order to enlarge the temperature difference between the reservoirs. The p V diagram of the Carnot cycle is shown in Figure 5.5. Irreversible processes, on the other hand, are dissipative processes. Typically, in such a process the energy is being distributed over several degrees of freedom (conversion of mechanical energy into heat). Examples are mechanical friction, eddy currents, viscose friction and balancing the temperature of two heat reservoirs. 89

90 5. Thermodynamics p (d) (a) (b) (c) V Figure 5.5.: Carnot cycle Thermodynamics and non-equilibrium states The course of an irreversible process cannot be described by thermodynamic potentials. But its final and initial states can be described, if they are equilibrium states. 90

91 Part II. Statistical Systems 91

92

6. Ideal quantum gas

In an ideal gas the interactions between its particles can be neglected. This can be used as an approximation for several kinds of physical systems like:

- molecules
- photons (radiation)
- electrons
- quasi-particles in solids (phonons, magnons, ...)

It provides a good approximation especially at small densities, but also for many other problems, e.g. electrons in solids.

6.1. Occupation numbers and functional integral

We consider some one-particle states ν. In the case of momentum states, we have ν = p = (p₁,p₂,p₃). To each of these states we can assign an energy E_ν = E(p) and an occupation number n_ν = n(p). A given sequence of occupation numbers {n_ν} = {n(p)} corresponds to a given function n(p). Depending on the type of particles we consider, the occupation numbers can attain different value ranges:

    fermions: n_ν = 0, 1 ,    (6.1)
    bosons: n_ν = 0, 1, 2, ... .    (6.2)

A microstate τ of the full system is given by the sequence of occupation numbers:

    τ = {n_ν} .    (6.3)

This can be used to express the sum over all microstates via the one-particle states:

    Σ_τ = Σ_{n_ν}    (6.4)
        = Σ_{n₁} Σ_{n₂} ... Σ_{n_k} ...    (6.5)
        = Π_ν Σ_{n_ν} .    (6.6)

This way, every possible sequence appears exactly once! For the different types of particles this yields:

    fermions: Σ_{n_ν} = Π_ν ( Σ_{n_ν=0}^{1} ) ,    (6.7)
    bosons:   Σ_{n_ν} = Π_ν ( Σ_{n_ν=0}^{∞} ) .    (6.8)

As said before, we can relate a sequence of occupation numbers {n_ν} to a function n(p). This means that we also can identify the sum over all possible sequences with a sum over all possible functions n(p):

    Σ_{n_ν} = ∫ D n(p) .    (6.9)

This is a functional integral!

6.2. Partition function for the grand canonical ensemble

Using the results from the previous section, we can express the grand canonical partition function as

    Z_gc = Σ_τ e^{−β(E_τ − µN_τ)}    (6.10)
         = Σ_{n_ν} e^{−β(Σ_ν E_ν n_ν − µ Σ_ν n_ν)}    (6.11)
         = Σ_{n_ν} e^{−β Σ_ν n_ν (E_ν − µ)}    (6.12)
         = Σ_{n_ν} Π_ν e^{−β n_ν (E_ν − µ)}    (6.13)
         = ( Σ_{n₁} e^{−β n₁ (E₁ − µ)} ) ( Σ_{n₂} e^{−β n₂ (E₂ − µ)} ) ...    (6.14)
         = Π_ν ( Σ_{n_ν} e^{−β n_ν (E_ν − µ)} ) .    (6.15)

The last step may seem confusing, but by explicitly writing it out, it is easy to see that every summand in (6.14) appears exactly once and with the same weight in (6.15), too. The logarithm of the partition function is given by

    ln Z_gc = Σ_ν ln( Σ_{n_ν} e^{−β n_ν (E_ν − µ)} ) .    (6.16)

For fermions this yields

    ln Z_gc^(f) = Σ_ν ln( 1 + e^{−β(E_ν − µ)} ) ,    (6.17)
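The factorization step (6.13) → (6.15) can be verified by brute force for a handful of modes (an illustrative sketch, not from the script; the mode energies, β and µ are arbitrary): summing over all fermionic occupation sequences {n_ν} gives exactly the product formula (6.17).

```python
import math
from itertools import product

def lnZ_brute(energies, beta, mu, n_max):
    """Grand canonical ln Z by summing over all occupation sequences {n_nu}."""
    Z = 0.0
    for occ in product(range(n_max + 1), repeat=len(energies)):
        Z += math.exp(-beta * sum(n * (E - mu) for n, E in zip(occ, energies)))
    return math.log(Z)

def lnZ_fermi(energies, beta, mu):
    # product formula: ln Z = sum_nu ln(1 + exp(-beta (E_nu - mu)))
    return sum(math.log(1 + math.exp(-beta * (E - mu))) for E in energies)

E_modes, beta, mu = [0.2, 0.7, 1.5], 1.0, 0.4
print(lnZ_brute(E_modes, beta, mu, n_max=1), lnZ_fermi(E_modes, beta, mu))
```

For fermions (n_max = 1) the agreement is exact; for bosons one would let n_max grow until the geometric series has converged.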

whereas for bosons we find (with j = n_ν)

ln Z_gc^(b) = ∑_ν ln [ ∑_{j=0}^{∞} (e^{−β(E_ν − µ)})^j ].  (6.18)

Applying the formula for the geometric series,

1 + x + x² + x³ + ⋯ = 1/(1 − x),  (6.19)

we arrive at

ln Z_gc^(b) = ∑_ν ln 1/(1 − e^{−β(E_ν − µ)}).  (6.20)

Now we only have to carry out sums over one-particle states! This is an enormous simplification, as it is much easier to perform ∑_ν than ∑_τ = ∑_{{n_ν}}! But note that this only works for free particles without any interactions.

6.3. Mean occupation numbers

Bosons

In general it is

N̄ = ∑_τ N_τ p_τ  (6.21)
= ∑_τ N_τ e^{−β(E_τ − µN_τ)} / Z_gc  (6.22)
= (1/β) ∂/∂µ ln Z_gc.  (6.23)

So, using (6.20), for bosons we find

N̄ = ∑_ν (1/β) ∂/∂µ [ −ln(1 − e^{−β(E_ν − µ)}) ]  (6.24)
= ∑_ν e^{−β(E_ν − µ)} / (1 − e^{−β(E_ν − µ)}),  (6.25, 6.26)

and therefore

N̄ = ∑_ν n̄_ν^(b),  (6.27)
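The factorization of the microstate sum used above can be verified by brute force for a tiny system. A minimal numerical sketch (the level energies E, µ and β below are arbitrary toy values, not from the lecture), comparing the direct sum over all fermionic microstates {n_ν} with the factorized form (6.17) and with the sum of mean one-particle occupation numbers (6.27):

```python
import math
from itertools import product

# Toy fermionic system with three one-particle levels (assumed values).
E = [0.5, 1.0, 2.0]
mu, beta = 0.7, 1.3

def weight(ns):
    """Boltzmann weight e^{-beta sum_nu n_nu (E_nu - mu)} of a microstate {n_nu}."""
    return math.exp(-beta * sum(n * (e - mu) for n, e in zip(ns, E)))

# Sum over all microstates tau = {n_nu}, with n_nu in {0, 1} for fermions.
Z = sum(weight(ns) for ns in product((0, 1), repeat=len(E)))

# Factorized form (6.17): Z = prod_nu (1 + e^{-beta (E_nu - mu)}).
Z_factorized = math.prod(1.0 + math.exp(-beta * (e - mu)) for e in E)

# Mean particle number: brute force vs. sum of Fermi-Dirac occupations.
N_brute = sum(sum(ns) * weight(ns) for ns in product((0, 1), repeat=len(E))) / Z
N_fd = sum(1.0 / (math.exp(beta * (e - mu)) + 1.0) for e in E)
```

Every sequence (n_1, n_2, n_3) appears exactly once in the brute-force sum, so both expressions agree to machine precision.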

with n̄_ν^(b) being the mean occupation number of the state ν:

n̄_ν^(b) = 1/(e^{β(E_ν − µ)} − 1).  (Bose-Einstein statistics)  (6.28)

If we also allow for state dependent µ_ν, this generalizes to

Z(β; µ_1, …, µ_ν) = ∑_{n_1} ⋯ ∑_{n_ν} e^{−β ∑_ν (E_ν n_ν − µ_ν n_ν)}  (6.29)

and

n̄_ν = (1/β) ∂ln Z/∂µ_ν |_{µ_ν = µ}.  (6.30)

For bosons this yields

ln Z(β; µ_ν) = ∑_ν ln 1/(1 − e^{−β(E_ν − µ_ν)})  (6.31)

and n̄_ν^(b) as before.

Fermions

As for bosons, the mean particle number of fermions is given by the sum of the mean one-particle occupation numbers:

N̄ = ∑_ν n̄_ν^(f),  (6.32)

where in this case

n̄_ν^(f) = (1/β) ∂/∂µ ln(1 + e^{−β(E_ν − µ)})  (6.33)
= e^{−β(E_ν − µ)} / (1 + e^{−β(E_ν − µ)}),  (6.34)

and therefore

n̄_ν^(f) = 1/(e^{β(E_ν − µ)} + 1).  (Fermi-Dirac statistics)  (6.35)

From this it is easy to see that n̄_ν^(f) can only take values in the range

0 ≤ n̄_ν^(f) ≤ 1.  (6.36)

Moreover, we can look at some special cases:

n̄_ν^(f) → 0 for E_ν − µ ≫ T,  (6.37)
n̄_ν^(f) → 1 for E_ν − µ < 0 and |E_ν − µ| ≫ T.  (6.38)
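The two distributions (6.28) and (6.35) are simple enough to tabulate directly; a small sketch (energies in units where k_B = 1):

```python
import math

def n_bose(E, mu, T):
    """Mean occupation number, Bose-Einstein statistics (6.28)."""
    return 1.0 / (math.exp((E - mu) / T) - 1.0)

def n_fermi(E, mu, T):
    """Mean occupation number, Fermi-Dirac statistics (6.35)."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

# The fermionic occupation stays between 0 and 1, cf. (6.36), and for
# E - mu >> T both statistics approach the Boltzmann factor e^{-(E-mu)/T}.
```

At E_ν = µ the Fermi-Dirac occupation is exactly 1/2, which will become the midpoint of the smoothed Fermi edge discussed later in this chapter.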

Mean energy

In general, the mean energy is computed as

Ē = ( −∂/∂β + (µ/β) ∂/∂µ ) ln Z_gc  (6.39)
= −∂/∂β ln Z_gc + µ N̄.  (6.40)

For bosons it is

Ē = ∑_ν (E_ν − µ) e^{−β(E_ν − µ)} / (1 − e^{−β(E_ν − µ)}) + µ N̄  (6.41)
= ∑_ν (E_ν − µ) n̄_ν^(b) + µ ∑_ν n̄_ν^(b)  (6.42)
= ∑_ν E_ν n̄_ν^(b).  (6.43)

Performing the same calculation for fermions, we find that both are given by

Ē = ∑_ν E_ν n̄_ν.  (6.44)

Similar formulas can be found for other macroscopic quantities, too. Note that n̄_ν itself already is a macroscopic quantity!

Chemical potential

One question we have not discussed yet is: which value should we choose for µ?

1) Conserved number of particles

Examples for conserved numbers of particles are:

- N_e: electric charge and lepton number
- N_i: number of molecules of a certain type, if chemical transitions are negligible
- N_b: baryon number (in the early universe)

In general,

N̄ = (1/β) ∂/∂µ ln Z_gc(β, µ) = N̄(β, µ),  (6.45)

so we conclude

µ = µ(N̄, T).  (6.46)

In the thermodynamic limit the grand canonical and the canonical ensemble are equivalent, which leads to

µ = µ(N, T).  (6.47)

Note that, if there are several conserved quantum numbers N_i, there are also several independent µ_i.

2) No conserved number of particles

Examples for cases where the number of particles is not conserved are photons and phonons. In these cases the canonical ensemble does not depend on N, i.e.

µ = 0.  (6.48)

But there is no restriction on the summands in ∑_{{n_ν}} anyway, as N is no conserved quantum number. Here, the mean occupation numbers characterize the canonical ensemble.

3) Approximately conserved quantum numbers

If one considers time scales which are much shorter than the decay time of the particles, then one can consider the corresponding particle number as approximately conserved and assume

µ ≠ 0.  (6.49)

Boltzmann statistics

In the regime of large energies (compared to T),

E_ν − µ ≫ T,  (6.50)

the mean occupation number of a state ν is approximately

n̄_ν ≈ e^{−β(E_ν − µ)},  (Maxwell-Boltzmann statistics for "classical" particles)  (6.51)

which holds for bosons and fermions equally. This recovery of the classical description in the large energy limit is an example of the correspondence principle between quantum and classical mechanics. We can also write this as

n̄_ν = e^{−(E_ν − µ)/T}  (6.52)
= C e^{−E_ν/T}, with C = e^{µ/T},  (6.53)

to emphasize the famous Boltzmann factor e^{−E_ν/T}. Note that E_ν is the energy of the one-particle state ν; it is not to be mistaken for the energy E_τ of a microstate τ, as it appears for example in p_τ = Z_can^{−1} e^{−βE_τ}.

As a comparison, recall the computation of Ω(E, N) earlier in this lecture, with the approximation that Ω is dominated by microstates where each ν is occupied at most once, which corresponds to

n̄_ν ≪ 1.  (6.54)

Figure 6.1.: Hole in a box

6.4. Photon gas

Think of a box with a small hole in it, like in Figure 6.1. The electromagnetic radiation within the box will be in equilibrium, with the temperature T given by the temperature of the walls. The hole is needed for observation. If there is no radiation from the outside entering through the hole, this system is an example of a black body. The concept of a black body was crucial in the development of modern physics, as classical field theory yields wrong results (large deviations from the measured frequency distribution of the radiation). Historically, two of the most important milestones in understanding this as a photon gas in thermal equilibrium have been the quantisation of electromagnetic energy by Planck in 1900 and Einstein's idea of quantising the electromagnetic radiation itself as photons in 1905 (explanation of the photoelectric effect).

From a modern statistical physics point of view, a photon gas is a gas of non-interacting particles whose particle number is not conserved (µ = 0). So it is described by a canonical ensemble. Note that in the case of free photons n(p) = n_ν is conserved for every p (ν). Such an ensemble with µ_ν ≠ 0 can be very different from black body radiation (µ_ν = 0), e.g. radio waves.

Black body radiation

Consider a gas of photons in thermal equilibrium at a temperature T. As µ = 0, we can immediately conclude

n̄_ν = 1/(e^{βE_ν} − 1).  (6.55)

So, to proceed, the one-photon states ν and their energies E_ν are necessary. To get them, we start with the Maxwell equation (wave equation for the electric field):

(1/c²) ∂²E/∂t² = ∇²E.  (6.56)

Assuming

E = E(r) e^{−iωt},  (6.57)

it takes the form

∇²E(r) + (ω²/c²) E(r) = 0,  (6.58)

which is solved by

E(r) = A e^{ik·r}.  (6.59)

The exact form of the boundary conditions is not important, so we choose periodic ones:

k = (2π/L) n, with n_{x,y,z} ∈ ℤ.  (6.60)

From this we can conclude the dispersion law

k² = ω²/c², ω = c|k|.  (6.61)

Now, using the fact that the divergence of the electric field vanishes (no sources),

∇·E(r) = 0,  (6.62)

we can directly infer the transversality of electromagnetic waves:

k·A = 0.  (6.63)

This means that there are 2 degrees of freedom for every k (massless particle with spin 1 and helicity H = ±1).

Quantisation

For an electromagnetic wave, energy and momentum are given by

E = ℏω,  (6.64)
p = ℏk.  (6.65)

Therefore, the one-photon state energy is, using (6.60),

E_ν = ℏc|k| = (2πℏc/L) |n|,  (6.66)

where the states ν are given by

ν = (n_x, n_y, n_z, H), with n_{x,y,z} ∈ ℤ, H = ±1.  (6.67)

We can also split up the states as

ν = (ν_t, ν_i),  (6.68)

into

ν_t ≙ (n_x, n_y, n_z): translational degrees of freedom,  (6.69)
ν_i ≙ H: inner degrees of freedom.  (6.70)

Note that the translational degrees of freedom are the same for all kinds of particles, e.g. atoms, molecules.

Density of the translational states

For large volumes all translational states lie very densely, as ΔE_ν ∝ 1/L. Therefore, we can replace the sums over the states by integrals:

∑_{ν_t} → V ∫ d³p/(2πℏ)³,  (6.71)
∑_ν → 2V ∫ d³k/(2π)³,  (6.72)

where we defined the volume V = L³. To arrive at (6.72), we used p = ℏk and the fact that E_ν is independent of H. So, the factor of 2 arises due to the two possible values of the helicity H = ±1.

Spectrum of black body radiation

The mean number of photons per volume (photon density ñ) in the momentum interval [k, k + dk] is, using (6.55), (6.72) and E = ℏω,

ñ(k) d³k = 2/(e^{βℏω} − 1) · d³k/(2π)³.  (6.73)

With the dispersion law ω = c|k| we can deduce

d³k = 4π k² dk  (6.74)
= (4π/c³) ω² dω,  (6.75)

so the mean number of photons per volume in the frequency interval [ω, ω + dω] is

ñ(ω) dω = 8π/(2πc)³ · ω²/(e^{βℏω} − 1) dω.  (6.76)

Note that there is no divergence for ω → 0! Using E = ℏω, we can easily infer the mean photon energy per volume in the frequency interval [ω, ω + dω] as

ū(ω, T) dω = ℏ/(π²c³) · ω³/(e^{βℏω} − 1) dω.  (6.77)

This is the frequency spectrum of black body radiation. It is plotted in Figure 6.2. Defining η = βℏω = ℏω/(k_B T), we recover the original form

ū(η, T) dη = (k_B T)⁴/(π²(ℏc)³) · η³/(e^η − 1) dη,  (6.78)

postulated by Planck in 1900. Its maximum is roughly at ℏω ≈ 3 k_B T or η ≈ 3, respectively.
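The position of this maximum is easy to check numerically. A small sketch, using the dimensionless spectrum η³/(e^η − 1) of (6.78); the exact peak solves 3(1 − e^{−η}) = η (Wien's displacement law):

```python
import math

def planck(eta):
    """Dimensionless black body spectrum eta^3/(e^eta - 1), cf. (6.78)."""
    return eta**3 / (math.exp(eta) - 1.0)

# Locate the maximum by a simple grid search over (0, 20].
etas = [i * 1e-3 for i in range(1, 20001)]
eta_max = max(etas, key=planck)   # close to 2.82, i.e. hbar*omega ≈ 3 k_B T
```

This reproduces the statement above: the spectrum peaks slightly below η = 3.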

Figure 6.2.: Frequency spectrum of black body radiation

Of course, we can also deduce the wavelength spectrum, using ω = 2πc/λ:

ū(λ, T) dλ = 16π²ℏc · 1/(e^{2πℏc/(k_B T λ)} − 1) · dλ/λ⁵.  (6.79)

Let us look at an example: the spectrum of sunlight. Its maximum is at λ_max ≈ 500 nm, which is just what one would expect from a black body radiation spectrum at the sun's surface temperature T ≈ 5800 K. This maximum lies in the regime of green visible light which, as an interesting side note, is also the maximum of the sensitivity of the human eye!

Furthermore, we can consider the low frequency limit ℏω ≪ k_B T, where the frequency spectrum can be approximated by

ū(ω, T) dω = ℏω³/(π²c³) · 1/(βℏω) dω  (6.80)
= k_B T/(π²c³) · ω² dω,  (6.81)

which recovers classical electrodynamics (Einstein 1905). In this limit ℏ does not appear any more. The extrapolation to large frequencies ℏω ≫ k_B T, on the other hand, leads to a divergence:

∫₀^∞ dω ū(ω, T) → ∞.  (6.82)

This demonstrates the incompleteness of the theory! So, classical electrodynamics has to be modified. A solution is given by the quantum statistics of photons, which accounts for the particle-wave duality (for large occupation numbers we are in the regime of classical fields, whereas for small occupation numbers we are in the regime of classical particles).

Energy density

The energy density is given by

ρ = Ē/V = ∫₀^∞ dω ū(ω, T)  (6.83)
= ∫₀^∞ dη ū(η, T)  (6.84)
= (k_B T)⁴/(π²(ℏc)³) ∫₀^∞ dη η³/(e^η − 1), with ∫₀^∞ dη η³/(e^η − 1) = π⁴/15.  (6.85)

So, we conclude that

ρ = (π²/15) (k_B T)⁴/(ℏc)³.  (Stefan-Boltzmann law)  (6.86)

This law contains several fundamental constants:

- c: unification of space and time (special relativity)
- ℏ: quantum physics
- k_B: associates T to energy (microphysical understanding of thermodynamics)

Natural units

A frequently used system of units are the natural units. They are defined via

ℏ = c = k_B = 1.  (6.87)

This has some consequences:

k_B = 1: Temperature is measured in units of energy, e.g.

1 K ≈ 8.6 × 10⁻⁵ eV.  (6.88)

c = 1: Mass is measured in units of energy, e.g.

m_e = 511 keV,  (6.89)
1 eV ≈ 1.8 × 10⁻³³ g,  (6.90)

and distances are measured in units of time, e.g. light seconds.

ℏ = 1: Time and distances are measured in inverse units of energy, e.g.

1 fm = 10⁻¹⁵ m ≈ (197 MeV)⁻¹,  (6.91)
1 m ≈ 5.1 × 10⁶ (eV)⁻¹.  (6.92)

In Table 6.1 some physical quantities and constants and their respective mass (energy) dimensions in natural units are listed.

Quantity                          Mass dimension
E, T, µ, F                        M
p                                 M⁴
L, t                              M⁻¹
V                                 M⁻³
N, Z_mic,can,gc, S, e, Q, α       M⁰

Table 6.1.: Mass dimension of some physical quantities

Canonical partition function

The logarithm of the canonical partition function for a photon gas is computed as

ln Z_can(β, V) = −∑_ν ln(1 − e^{−βE_ν}).  (6.93)

Together with (6.72) and E_ν = η/β this yields

ln Z_can = −(V/π²) (k_B T/(ℏc))³ ∫₀^∞ dη η² ln(1 − e^{−η}), with ∫₀^∞ dη η² ln(1 − e^{−η}) = −π⁴/45,  (6.94)
= (π²/45) V/β³,  (6.95)

where we used natural units in the last step again. Let us check if this gives the correct result for the energy density:

Ē = −∂/∂β ln Z_can  (6.96)
= (π²/15) V/β⁴,  (6.97)

and therefore

ρ = Ē/V = (π²/15) T⁴,  (6.98)

which agrees with (6.86), using natural units.
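Both integrals used in (6.85) and (6.94) can be checked by expanding in a geometric series: with 1/(e^η − 1) = ∑_n e^{−nη} one finds ∫ η³/(e^η − 1) dη = 6 ∑_n 1/n⁴ = π⁴/15, and integrating ln(1 − e^{−η}) term by term gives −2 ∑_n 1/n⁴ = −π⁴/45. A quick numerical sketch:

```python
import math

# Truncated zeta(4) = sum 1/n^4; the tail beyond n = 10^4 is ~1e-12.
zeta4 = sum(1.0 / n**4 for n in range(1, 10001))

integral_planck = 6.0 * zeta4    # = pi^4/15, used in (6.85)
integral_lnZ = -2.0 * zeta4      # = -pi^4/45, used in (6.94)
```

This confirms the two constants entering the Stefan-Boltzmann law and the photon partition function.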

Remarks

1. Not only Ē itself is a macroscopic observable, but the energy distribution as a function of the frequency is one, too.
2. From Z_can other quantities, e.g. the pressure of the photon gas, can easily be obtained (→ thermodynamic potentials, classical thermodynamics).

Free energy and equation of state

Using (6.95), the free energy is found to take the form

F = −k_B T ln Z_can  (6.99)
= −(τ/3) V T⁴,  (6.100)

where we defined

τ = k_B⁴ π²/(15 c³ ℏ³)  (6.101)

or, using natural units,

τ = π²/15,  (6.102)

respectively. Then, the entropy, energy and pressure are given by

S = −(∂F/∂T)_V = (4/3) τ V T³,  (6.103)
E = F + TS = τ V T⁴,  (6.104)
p = −(∂F/∂V)_T = (τ/3) T⁴.  (6.105)

Combining the expressions for E and p and using ρ = E/V, we arrive at the equation of state for a photon gas:

p = ρ/3,  (6.106)

where, as a reminder, the energy density ρ is given by

ρ = (π²/15) T⁴.  (6.107)

6.5. Degenerate Fermi gas

Quasiparticles

At low temperature and high pressure quantum statistics become important! A typical example of this are electrons in metals. They can be considered as quasi-free and are, due to their spin of s = 1/2, fermions. Recall that the mean fermionic occupation number is given by

n̄_ν^(f) = 1/(e^{β(E_ν − µ)} + 1).  (6.108)

Figure 6.3.: Mean occupation number in the zero temperature limit

The limit T → 0

We consider the low temperature regime:

k_B T ≪ |E_ν − µ|.  (6.109)

Thereby, the zero temperature limit of the mean occupation number depends on the relative size of E_ν and µ:

n̄ → 0 for k_B T → 0, if E_ν > µ,  (6.110)
n̄ → 1 for k_B T → 0, if E_ν < µ.  (6.111)

This describes a step function, as plotted in Figure 6.3. We define the Fermi energy ε_f via

ε_f = lim_{T→0} µ(n, T),  (6.112)

and the Fermi momentum p_f or k_f, respectively, via the usual energy-momentum relation,

ε_f = p_f²/(2M) = ℏ²k_f²/(2M),  (6.113)

with M being the effective mass of the quasiparticle.

Computation of the Fermi energy

We consider electrons or atoms with 2 internal degrees of freedom (g_i = 2). Then, the number of particles is computed as

N = 2 V/(2π)³ ∫_{k² < k_f²} d³k  (6.114)
= (V/π²) ∫₀^{k_f} dk k²  (6.115)
= (V/3π²) k_f³.  (6.116)

So, using n = N/V, the Fermi momentum and energy are given by

k_f = (3π²n)^{1/3},  (6.117)
ε_f = (ℏ²/2M) (3π²n)^{2/3}.  (6.118)

This way, we also know µ(n) at T = 0. For free non-relativistic particles one can also define the Fermi temperature:

k_B T_f = ε_f.  (6.119)

For example, the Fermi temperature of electrons in copper is approximately 8 × 10⁴ K. So we see that in this case (and also in general for metals) k_B T ≪ ε_f is a very good approximation.

Thermodynamics

For our purposes the energy E(T, V, N) is the most suitable thermodynamic potential. To calculate it, we perform an expansion in T/T_f. In the lowest order approximation the energy is given by the ground state energy:

E = E₀.  (6.120)

Ground state energy

The ground state energy is computed as

E₀ = lim_{T→0} Ē  (6.121)
= 2 V/(2π)³ ∫_{k² < k_f²} d³k ℏ²k²/(2M)  (6.122)
= (V/π²) (ℏ²/2M) ∫₀^{k_f} dk k⁴  (6.123)
= V ℏ²/(10π²M) · k_f⁵  (6.124)
= (3/10) (ℏ²k_f²/M) N,  (6.125)

where we used (6.116) in the last step. Also using (6.113) leads us to

E₀ = (3/5) N ε_f.  (6.126)

So, in the low temperature regime we have

E/N = (3/5) µ + …  (6.127)
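To get a feeling for the orders of magnitude, (6.117)-(6.119) can be evaluated for conduction electrons in a metal. The density used below, n ≈ 8.5 × 10²⁸ m⁻³ for copper, is an assumed literature value (not from the lecture), and the bare electron mass is used for M:

```python
import math

hbar = 1.0546e-34   # J s
m_e = 9.109e-31     # kg
k_B = 1.381e-23     # J/K
eV = 1.602e-19      # J

n = 8.5e28                                      # m^-3, assumed for copper
k_f = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)     # Fermi momentum (6.117)
eps_f = hbar**2 * k_f**2 / (2.0 * m_e)          # Fermi energy (6.118)
T_f = eps_f / k_B                               # Fermi temperature (6.119)

# eps_f comes out at a few eV and T_f at several 1e4 K, so
# k_B T << eps_f holds comfortably at room temperature.
```

This is the quantitative basis for treating metals as strongly degenerate Fermi gases even at ordinary temperatures.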

One might wonder if E/N is constant. The answer is no! It depends on the density due to

µ = ε_f = (ℏ²/2M) (3π²n)^{2/3}.  (6.128)

An additional particle always has the energy ε_f (not the mean energy of the already present particles)!

So far, we have found the zero temperature equation of state for our Fermi gas:

E(T = 0, N, V) = (3ℏ²/10M) (3π²n)^{2/3} N.  (6.129)

One-particle density of states

We would like to allow for a more general type of dispersion relation,

E_ν → ε(k) = ℏω(k),  (6.130)

which also takes the interaction with the lattice into account. Note that ε denotes the one-particle energy here, not the energy density E/V. It is given by

ε = (ℏ²/2M) k²  (6.131)

for a single atomic non-relativistic gas and by

ε = ℏc|k|  (6.132)

for a relativistic Fermi gas. In principle, this does not change anything regarding the computation of k_f; now we just allow for a general function ε_f(k_f). But nevertheless,

N = (g_i V/6π²) k_f³  (6.133)

still holds, as it is a general result. So, the energy is now computed as

E = g_i V/(2π)³ ∫ d³k ε(k) n̄(k)  (6.134)
= g_i V/(2π)³ ∫ dε ∫ d³k δ[ε − ε(k)] n̄(ε) ε  (6.135)
= N ∫ dε Ω₁(ε) n̄(ε) ε,  (6.136)

where we have defined

Ω₁(ε) = (g_i V/N) ∫ d³k/(2π)³ δ[ε − ε(k)],  (6.137)

Figure 6.4.: Energy states

the number of states per particle for given ε or rather δε, as sketched in Figure 6.4. (It corresponds to the quantity ω(E) for the statistics of one particle in the first part of this lecture, as it has mass dimension M⁻¹.) It is an important microscopic property of a specific Fermi gas, as it influences the conductivity of metals, for example.

Ω₁(ε) for free non-relativistic fermions

Inserting the non-relativistic dispersion relation (6.131) into the expression (6.137) for Ω₁ leads to

Ω₁(ε) = g_i V/(2π²N) ∫₀^∞ dk k² δ(ε − ℏ²k²/(2M)).  (6.138)

The result is

Ω₁(ε) = (3/2) ε_f^{−3/2} ε^{1/2} Θ(ε),  (6.139)

which is only non-vanishing if ε > 0. Let us check if this reproduces the correct ground state energy:

E₀/N = ∫₀^{ε_f} dε ε Ω₁(ε)  (6.140)
= (3/2) ε_f^{−3/2} · (2/5) ε_f^{5/2}  (6.141)
= (3/5) ε_f. q.e.d.  (6.142)

Can we determine Ω₁(ε) from thermodynamics, too? Often it is difficult to calculate it from first principles! But while the whole function Ω₁(ε) may be difficult to compute, one can infer some important properties. For example, one would like to know how thermodynamic variables, such as the specific heat, depend on Ω₁(ε), i.e. how they depend on the microphysics. One uses thermodynamic observables for the measurement of microphysical properties.

The method used for this is to compute how observables depend on Ω₁(ε), by leaving Ω₁(ε) a free function.

E(T, N) for arbitrary temperature

We already know that

E(T, µ) = N ∫ dε Ω₁(ε) ε/(e^{β(ε − µ)} + 1),  (6.143)

which we obtain by inserting (6.108) into (6.136). It holds for arbitrary T. So, in principle, this yields a general equation of state for E/N, but only if we know the form of µ(T). To compute it, we start with the general expression for the number of particles:

N = ∑_ν n̄_ν  (6.144)
= ∫ dε n̄(ε) g_i V ∫ d³k/(2π)³ δ[ε − ε(k)]  (6.145)
= N ∫ dε Ω₁(ε) n̄(ε).  (6.146)

From this we can read off the normalisation condition

∫ dε Ω₁(ε) 1/(e^{β(ε − µ)} + 1) = 1,  (6.147)

which, for known Ω₁(ε), determines µ(T). As a cross check, we look at the T = 0 case again. In this case it is

n̄(ε) = Θ(ε_f − ε),  (6.148)

leading to

E₀ = N ∫₀^{ε_f} dε Ω₁(ε) ε,  (6.149)

which agrees with (6.140).

Smoothing of the Fermi surface

If we look at temperatures T > 0, the Fermi surface is not a sharp step function any more. But as long as we are still considering low temperatures, it just deviates slightly from this, as illustrated in Figure 6.5, i.e. it becomes a smooth step function, as only electrons close to the Fermi edge play a role at these small temperatures. This can be seen by looking at the deviation of the energy at T ≠ 0 from the ground state energy (T = 0) for fixed N, V:

E − E₀ = N ∫ dε ε Ω₁(ε) [1/(e^{β(ε − µ)} + 1) − Θ(ε_f − ε)].  (6.150)

For T ≪ T_f the main contribution to this deviation comes from ε values close to ε_f. Note that ε_f = ε_f(n), which follows from the calculation at T = 0, whereas µ = µ(T), with µ → ε_f for T → 0.

A remark towards nomenclature: if electrons have an energy larger than ε_f, we call them (excited) particles. If they have an energy less than ε_f, we can call them holes.

Figure 6.5.: Smooth Fermi surface

Computation of µ(T) from the normalisation condition

We know that

∫ dε Ω₁(ε) [1/(e^{β(ε − µ)} + 1) − Θ(ε_f − ε)] = 0.  (6.151)

To calculate it, we will perform the so-called Sommerfeld approximation: we perform a Taylor expansion of Ω₁(ε) around ε_f,

Ω₁(ε) = Ω₁(ε_f) + Ω₁′(ε_f) (ε − ε_f) + …,  (6.152)

and define

µ = ε_f + Δµ,  (6.153)

as well as

x = β(ε − ε_f)  (6.154)

as the new variable, such that

β(ε − µ) = β(ε − ε_f) − βΔµ  (6.155)
= x − βΔµ.  (6.156)

This leads to

0 = ∫ dx [Ω₁(ε_f) + Ω₁′(ε_f) x/β] [1/(e^{x − βΔµ} + 1) − Θ(−x)] + …  (6.157)

Expanding it for small βΔµ,

1/(e^{x − βΔµ} + 1) ≈ 1/(e^x (1 − βΔµ) + 1)  (6.158)
≈ 1/[(e^x + 1)(1 − βΔµ e^x/(e^x + 1))]  (6.159)
≈ 1/(e^x + 1) + βΔµ e^x/(e^x + 1)²,  (6.160)

Figure 6.6.: The difference between the occupation numbers at T ≠ 0 and T = 0

yields

0 = ∫ dx [Ω₁(ε_f) + Ω₁′(ε_f) x/β] [f(x) + βΔµ e^x/(e^x + 1)²],  (6.161)

where we defined

f(x) = 1/(e^x + 1) − Θ(−x).  (6.162)

Now, (6.161) is just a linear expression in Δµ, but the x-integrations still have to be performed.

Occupation numbers for particles and holes

Note that the function f(x), which characterizes the difference between the occupation numbers at T ≠ 0 and T = 0, is odd:

f(−x) = 1/(e^{−x} + 1) − Θ(x)  (6.163)
= e^x/(e^x + 1) − Θ(x)  (6.164)
= (e^x + 1 − 1)/(e^x + 1) − Θ(x)  (6.165)
= −1/(e^x + 1) + Θ(−x)  (6.166)
= −f(x).  (6.167)

This means that at T ≠ 0 the occupation number for the particles is enhanced to the same extent as it is lowered for the holes, when compared to the occupation number at T = 0. It is easy to see this by looking at the plot of f(x) in Figure 6.6. We can also immediately conclude that

∫ dx f(x) = 0.  (6.168)

Moreover, some straightforward calculations yield

∫ dx x f(x) = π²/6,  (6.169)
∫ dx e^x/(e^x + 1)² = 1,  (6.170)
∫ dx x e^x/(e^x + 1)² = 0,  (6.171)

where the last one follows directly from e^x/(e^x + 1)² being even.

Solving for µ(T)

Inserting (6.168) to (6.171) into (6.161) yields

βΔµ Ω₁(ε_f) = −Ω₁′(ε_f)/β · π²/6.  (6.172)

By reinserting µ = ε_f + Δµ, we arrive at an expression for µ:

µ = ε_f − (π²/6) Ω₁′(ε_f)/Ω₁(ε_f) (k_B T)².  (6.173)

Let us compute this explicitly for the example of free non-relativistic particles:

Ω₁ = (3/2) ε_f^{−3/2} ε^{1/2}  ⟹  Ω₁(ε_f) = (3/2) ε_f^{−1},  (6.174)
Ω₁′ = (3/4) ε_f^{−3/2} ε^{−1/2}  ⟹  Ω₁′(ε_f) = (3/4) ε_f^{−2}.  (6.175)

Therefore it is

Ω₁′(ε_f)/Ω₁(ε_f) = (1/2) ε_f^{−1},  (6.176)

and we end up with

µ = ε_f [1 − (π²/12) (k_B T/ε_f)²].  (6.177)

A similar calculation can be performed for the energy:

(E − E₀)/N = ∫ dε ε Ω₁(ε) [1/(e^{β(ε − ε_f − Δµ)} + 1) − Θ(ε_f − ε)]  (6.178)
≈ Δµ ε_f Ω₁(ε_f) + (π²/6) (k_B T)² [Ω₁(ε_f) + ε_f Ω₁′(ε_f)].  (6.179)

Using Δµ = µ − ε_f and inserting (6.173) leads to

E = E₀ + (π²/6) (k_B T)² Ω₁(ε_f) N.  (6.180)
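The four integrals (6.168)-(6.171) entering the Sommerfeld expansion are easy to verify numerically; a sketch with f(x) = 1/(eˣ + 1) − Θ(−x) from (6.162), using the midpoint rule on [−40, 40] (the integrands decay like e^{−|x|}, so the cutoff is harmless):

```python
import math

def f(x):
    """f(x) = 1/(e^x + 1) - Theta(-x), cf. (6.162)."""
    return 1.0 / (math.exp(x) + 1.0) - (1.0 if x < 0 else 0.0)

def w(x):
    """e^x / (e^x + 1)^2, the weight appearing in (6.170) and (6.171)."""
    ex = math.exp(x)
    return ex / (ex + 1.0) ** 2

h = 1e-3
xs = [-40.0 + (i + 0.5) * h for i in range(int(80.0 / h))]
I_f = h * sum(f(x) for x in xs)        # (6.168): vanishes, f is odd
I_xf = h * sum(x * f(x) for x in xs)   # (6.169): pi^2/6
I_w = h * sum(w(x) for x in xs)        # (6.170): 1
I_xw = h * sum(x * w(x) for x in xs)   # (6.171): vanishes, w is even
```

The symmetric grid avoids x = 0, so the step function in f is unambiguous, and the odd integrals cancel pairwise.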

System                      T_f [K]   T [K]
³He gas (single atomic)     …         …
³He liquid                  …         …
electrons in Na             …         …
white dwarfs (electrons)    …         …
neutron stars               …         …

Table 6.2.: Fermi temperature T_f vs. characteristic temperature T of different materials

By defining the Fermi temperature in a more general context via

k_B T_f = 3/(2 Ω₁(ε_f)),  (6.181)

which also reproduces the result k_B T_f = ε_f for free non-relativistic particles, we can bring this into the form

E = E₀ + (π²/4) (k_B T)²/(k_B T_f) N.  (6.182)

This way, one can also consider T_f as a measure for Ω₁(ε_f). With an expression for the energy at hand, we can now calculate the specific heat

c_V = (∂E/∂T)_{V,N}.  (6.183)

It is given by

c_V = (π²/2) k_B N T/T_f,  (6.184)

which is linear in T! Indeed, this is in good agreement with experimental results for metals at low temperatures (the lattice contribution is c_V ∝ T³). Moreover, the measurement of T_f allows us to determine the density of states at the Fermi edge, Ω₁(ε_f). This is an example of getting microphysical information out of the macroscopic thermodynamic behaviour of a system! In Table 6.2 we have listed the Fermi temperature in comparison to the characteristic temperature of some materials.

6.6. Non-relativistic bosons

Chemical potential

We are considering single atomic bosonic gases without any internal degrees of freedom excited (g_i = 1), e.g. hydrogen. So, the mean occupation number is

n̄_ν = 1/(e^{β(E_ν − µ)} − 1).  (6.185)

We demand n̄_ν ≥ 0. It immediately follows that µ < E₀, if E₀ is the lowest energy. Without loss of generality we set E₀ = 0, i.e. µ < 0 (otherwise rescale E_ν and µ by some additive shift, as only their difference E_ν − µ matters, which is shift invariant), for example E_ν = p²/(2M).

Number of bosons in the ground state

The mean occupation of the ground state is given by

n̄₀ = 1/(e^{−βµ} − 1).  (6.186)

For small |βµ| this becomes

n̄₀ ≈ 1/(1 − βµ − 1)  (6.187)
= −1/(βµ),  (6.188)

which can become very large for µ → 0. Macroscopically, n̄₀ ≠ 0 corresponds to Bose-Einstein condensation, i.e. n̄₀/N denotes the fraction of condensed bosons (for superfluids n̄₀/N is the superfluid density). In the thermodynamic limit V → ∞ we distinguish the following cases:

n̄₀/N → 0: no condensate,  (6.189)
n̄₀/N → n_s > 0: Bose-Einstein condensate.  (6.190)

Qualitative behaviour of n̄(E_ν) for T → 0

For smaller T at fixed N there are fewer particles at high E_ν due to stronger Boltzmann suppression. On the other hand, of course, this has to be compensated by more particles at small E_ν. We have plotted this in Figure 6.7. It follows that the low temperature limit of µ is

µ(T, N) → 0 for T → 0.  (6.191)

Gibbs potential

The Gibbs potential was defined as

J = −T ln Z_gc = −pV.  (6.192)

In this case, it is given by

J = T ∑_ν ln(1 − e^{−β(E_ν − µ)})  (6.193)
= T g_i V ∫ d³k/(2π)³ ln[1 − e^{−(ℏ²k²/(2M) − µ)/T}].  (6.194)

Figure 6.7.: Occupation number for different temperatures T₂ > T₁ at fixed N

6.7. Classical ideal gas

Classical approximation

The classical approximation is given by

E_ν − µ ≫ T,  (6.195)

or, inserting E_ν,

ℏ²k²/(2M) − µ ≫ T.  (6.196)

Regarding the occupation number, this leads to

n̄_ν = 1/(e^{β(E_ν − µ)} ∓ 1) ≈ e^{−β(E_ν − µ)}  (6.197)

for bosons and fermions equally. So, the classical approximation is only valid for n̄_ν ≪ 1.

Gibbs potential

Using this approximation, the Gibbs potential is given by

J = −T g_i V ∫ d³k/(2π)³ e^{−(ℏ²k²/(2M) − µ)/T}  (6.198)
= −T g_i V/(2π²) ∫₀^∞ dk k² e^{−(ℏ²k²/(2M) − µ)/T},  (6.199)

which, of course, is again the same for bosons and fermions. So,

J = −T g_i V/(2π²) e^{µ/T} I₃,  (6.200)

where

I₃ = ∫₀^∞ dx x² e^{−bx²}  (6.201)
= −∂/∂b ∫₀^∞ dx e^{−bx²}  (6.202)
= −∂/∂b [(1/2) √(π/b)]  (6.203)
= (1/4) π^{1/2} b^{−3/2},  (6.204)

with

b = ℏ²/(2Mk_B T).  (6.205)

Thermal de Broglie wavelength

We define the thermal de Broglie wavelength as

λ(T) = (2πℏ²/(Mk_B T))^{1/2},  (6.206)

or, using natural units, as

λ(T) = (2π/(MT))^{1/2}.  (6.207)

Due to the relation between temperature and kinetic energy,

p²/(2M) ∼ T,  (6.208)

we can also associate it to the momentum via

MT ∼ p².  (6.209)

Recalling the definition (6.205) of b, we find

λ² = 4πb,  (6.210)

and therefore

b^{3/2} = λ³/(4π)^{3/2}.  (6.211)

This way, we can express the factor I₃ in the Gibbs potential (6.200) in terms of the thermal de Broglie wavelength:

I₃ = 2π²/λ³.  (6.212)

Thermodynamics

The Gibbs potential is now given by

J = −T g_i (V/λ³) e^{µ/T},  (6.213)

as well as

J = −pV.  (6.214)

So, the number of particles is computed as

N = −(∂J/∂µ)_{T,V}  (6.215)
= −J/T  (6.216)
= pV/T,  (6.217)

leading to the equation of state:

pV = NT.  (6.218)

Validity of the approximation for a diluted gas (classical gas)

We want to check the validity of the classical approximation

p²/(2M) − µ ≫ T.  (6.219)

For µ < 0 it is sufficient to check if

−µ/T ≫ 1.  (6.220)

We know that

N = pV/T = −J/T  (6.221, 6.222)
= g_i (V/λ³) e^{µ/T},  (6.223)

leading to

e^{µ/T} = Nλ³/(g_i V),  (6.224)

and therefore

−µ/T = ln g_i + ln(V/(Nλ³)).  (6.225)

From this we can read off that the approximation is valid if

V/N ≫ λ³(T).  (6.226)
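Condition (6.226) is very comfortably satisfied for ordinary gases. A numerical sketch for molecular nitrogen at room temperature and atmospheric pressure (the gas, temperature and pressure are assumed example values, not from the lecture; SI units throughout):

```python
import math

hbar = 1.0546e-34   # J s
k_B = 1.381e-23     # J/K
u = 1.6605e-27      # kg, atomic mass unit

M = 28.0 * u        # N2 molecule (assumed example)
T = 300.0           # K
p = 1.013e5         # Pa

n = p / (k_B * T)                                         # density from pV = N k_B T
lam = math.sqrt(2.0 * math.pi * hbar**2 / (M * k_B * T))  # lambda(T), (6.206)
n_lambda3 = n * lam**3   # degeneracy parameter; classical regime if << 1
```

The wavelength comes out far below interatomic distances, so n λ³ ≪ 1 and the classical treatment of the gas is excellent.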

Therefore, recalling that λ ∝ T^{−1/2}, the classical approximation is valid for a diluted gas at sufficiently high temperature,

n ≪ λ^{−3}(T),  (6.227)

i.e. with far fewer than 1 particle per volume λ³(T).

6.8. Bose-Einstein condensation

We want to examine what happens if we keep n fixed, while lowering T.

n(µ)

The density is given by

n = N/V = ∫ d³k/(2π)³ 1/(e^{β(ℏ²k²/(2M) − µ)} − 1).  (6.228)

Using

∫ d³k/(2π)³ = 1/(2π²) ∫ dk k²  (6.229)
= 1/(4π²) ∫ d(k²) k  (6.230)

and defining

z = ℏ²k²/(2MT) = λ(T)² k²/(4π),  (6.231, 6.232)

this becomes

n = 1/(4π²) (4π/λ²)^{3/2} ∫₀^∞ dz √z 1/(e^{z − βµ} − 1),  (6.233)

and therefore

nλ³ = (2/√π) ∫₀^∞ dz √z/(e^{z − βµ} − 1).  (6.234)

As µ is restricted to µ ≤ 0, we are interested in the limit µ → 0⁻. One can read off that nλ³ increases in this limit.

Critical temperature

Let us compute nλ³ for µ = 0 (its maximal value):

(nλ³)_c = (2/√π) ∫₀^∞ dz √z/(e^z − 1) = (2/√π) Γ(3/2) ζ(3/2).  (6.235)
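The value of this integral can be checked numerically. Substituting z = t² removes the integrable singularity of the integrand at z = 0; a sketch:

```python
import math

# (n lambda^3)_c = (2/sqrt(pi)) * integral of sqrt(z)/(e^z - 1), cf. (6.235).
# With z = t^2 the integrand becomes (4/sqrt(pi)) t^2/(e^{t^2} - 1),
# which stays finite (-> 4/sqrt(pi)) at t = 0. Midpoint rule on (0, 8].
h = 1e-3
ts = [(i + 0.5) * h for i in range(int(8.0 / h))]
n_lambda3_c = (4.0 / math.sqrt(math.pi)) * h * sum(
    t * t / (math.exp(t * t) - 1.0) for t in ts
)
# result: approximately 2.612 = zeta(3/2)
```

This reproduces the value ζ(3/2) ≈ 2.612 used in the following step.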

Using Γ(3/2) = √π/2, we arrive at

(nλ³)_c = ζ(3/2).  (6.236)

So, the critical wavelength is given by

λ_c = ζ(3/2)^{1/3} n_c^{−1/3},  (6.237)

which, using

λ_c² = 2π/(MT_c),  (6.238)

leads us to an expression for the critical temperature:

T_c = (2π/M) ζ(3/2)^{−2/3} n_c^{2/3}.  (6.239)

Using ζ(3/2) ≈ 2.61 gives us the approximate result

T_c ≈ 3.31 n_c^{2/3}/M.  (6.240)

So, if we lower T until it reaches T_c for a given n, µ reaches 0. But what happens for T < T_c? Is there no solution?

Ground state of the bosonic gas

At T = 0 the gas is in its ground state and has minimal energy. Therefore, all bosons have to be in the state with the lowest energy E₀. In a box with periodic boundary conditions E₀ is given by

E₀ = ℏ²k₀²/(2M),  (6.241)

with k₀ being the lowest possible discrete momentum. And in a cavity it is given by

E₀ = ℏω₀,  (6.242)

with ω₀ being the lowest resonance frequency. But if we now demand

n̄_{ν=0}(T = 0) = N,  (6.243)

we have a problem, as this diverges in the thermodynamic limit. The macroscopic density of particles in the ground state is

n̄_{ν=0}/V = N/V = n₀.  (6.244)

So, the atoms in the ground state are not properly counted in the transition from the discrete sum ∑_ν to the integral ∫ d³k/(2π)³. The total density of atoms is given by

n = n₀(T) + ñ(T),  (6.245)

where ñ(T) is the density of non-condensed thermal atoms. If n₀ ≠ 0, we speak of Bose-Einstein condensation (BEC).

Figure 6.8.: Number of condensed atoms
Figure 6.9.: Bose-Einstein condensation

Bose-Einstein condensation

At T < T_c it is µ = 0 and the density of non-condensed thermal atoms is given by

ñ(T) = ζ(3/2)/λ(T)³ < n,  (6.246)

and therefore the remaining atoms have to form a Bose-Einstein condensate:

n₀(T) = n − ñ(T) > 0.  (6.247)

The qualitative behaviour of this function is plotted in Figure 6.8. At the critical temperature it is

n₀(T_c) = 0,  (6.248)

and a second order phase transition occurs! For n₀ ≠ 0 we also observe superfluidity! Experimental images of a BEC are shown in Figure 6.9. We can express the number of condensed atoms in terms of the critical temperature. To do so, we use

ñ(T)/ñ(T_c) = λ³(T_c)/λ³(T) = (T/T_c)^{3/2},  (6.249)


More information

Basic Concepts and Tools in Statistical Physics

Basic Concepts and Tools in Statistical Physics Chapter 1 Basic Concepts and Tools in Statistical Physics 1.1 Introduction Statistical mechanics provides general methods to study properties of systems composed of a large number of particles. It establishes

More information

Identical Particles. Bosons and Fermions

Identical Particles. Bosons and Fermions Identical Particles Bosons and Fermions In Quantum Mechanics there is no difference between particles and fields. The objects which we refer to as fields in classical physics (electromagnetic field, field

More information

Phase Transitions. µ a (P c (T ), T ) µ b (P c (T ), T ), (3) µ a (P, T c (P )) µ b (P, T c (P )). (4)

Phase Transitions. µ a (P c (T ), T ) µ b (P c (T ), T ), (3) µ a (P, T c (P )) µ b (P, T c (P )). (4) Phase Transitions A homogeneous equilibrium state of matter is the most natural one, given the fact that the interparticle interactions are translationally invariant. Nevertheless there is no contradiction

More information

Principles of Equilibrium Statistical Mechanics

Principles of Equilibrium Statistical Mechanics Debashish Chowdhury, Dietrich Stauffer Principles of Equilibrium Statistical Mechanics WILEY-VCH Weinheim New York Chichester Brisbane Singapore Toronto Table of Contents Part I: THERMOSTATICS 1 1 BASIC

More information

THERMODYNAMICS THERMOSTATISTICS AND AN INTRODUCTION TO SECOND EDITION. University of Pennsylvania

THERMODYNAMICS THERMOSTATISTICS AND AN INTRODUCTION TO SECOND EDITION. University of Pennsylvania THERMODYNAMICS AND AN INTRODUCTION TO THERMOSTATISTICS SECOND EDITION HERBERT B. University of Pennsylvania CALLEN JOHN WILEY & SONS New York Chichester Brisbane Toronto Singapore CONTENTS PART I GENERAL

More information

PHYS3113, 3d year Statistical Mechanics Tutorial problems. Tutorial 1, Microcanonical, Canonical and Grand Canonical Distributions

PHYS3113, 3d year Statistical Mechanics Tutorial problems. Tutorial 1, Microcanonical, Canonical and Grand Canonical Distributions 1 PHYS3113, 3d year Statistical Mechanics Tutorial problems Tutorial 1, Microcanonical, Canonical and Grand Canonical Distributions Problem 1 The macrostate probability in an ensemble of N spins 1/2 is

More information

Collective behavior, from particles to fields

Collective behavior, from particles to fields 978-0-51-87341-3 - Statistical Physics of Fields 1 Collective behavior, from particles to fields 1.1 Introduction One of the most successful aspects of physics in the twentieth century was revealing the

More information

We already came across a form of indistinguishably in the canonical partition function: V N Q =

We already came across a form of indistinguishably in the canonical partition function: V N Q = Bosons en fermions Indistinguishability We already came across a form of indistinguishably in the canonical partition function: for distinguishable particles Q = Λ 3N βe p r, r 2,..., r N ))dτ dτ 2...

More information

Outline for Fundamentals of Statistical Physics Leo P. Kadanoff

Outline for Fundamentals of Statistical Physics Leo P. Kadanoff Outline for Fundamentals of Statistical Physics Leo P. Kadanoff text: Statistical Physics, Statics, Dynamics, Renormalization Leo Kadanoff I also referred often to Wikipedia and found it accurate and helpful.

More information

P3317 HW from Lecture and Recitation 10

P3317 HW from Lecture and Recitation 10 P3317 HW from Lecture 18+19 and Recitation 10 Due Nov 6, 2018 Problem 1. Equipartition Note: This is a problem from classical statistical mechanics. We will need the answer for the next few problems, and

More information

Physics 127b: Statistical Mechanics. Landau Theory of Second Order Phase Transitions. Order Parameter

Physics 127b: Statistical Mechanics. Landau Theory of Second Order Phase Transitions. Order Parameter Physics 127b: Statistical Mechanics Landau Theory of Second Order Phase Transitions Order Parameter Second order phase transitions occur when a new state of reduced symmetry develops continuously from

More information

Chapter 2 Ensemble Theory in Statistical Physics: Free Energy Potential

Chapter 2 Ensemble Theory in Statistical Physics: Free Energy Potential Chapter Ensemble Theory in Statistical Physics: Free Energy Potential Abstract In this chapter, we discuss the basic formalism of statistical physics Also, we consider in detail the concept of the free

More information

UNDERSTANDING BOLTZMANN S ANALYSIS VIA. Contents SOLVABLE MODELS

UNDERSTANDING BOLTZMANN S ANALYSIS VIA. Contents SOLVABLE MODELS UNDERSTANDING BOLTZMANN S ANALYSIS VIA Contents SOLVABLE MODELS 1 Kac ring model 2 1.1 Microstates............................ 3 1.2 Macrostates............................ 6 1.3 Boltzmann s entropy.......................

More information

21 Lecture 21: Ideal quantum gases II

21 Lecture 21: Ideal quantum gases II 2. LECTURE 2: IDEAL QUANTUM GASES II 25 2 Lecture 2: Ideal quantum gases II Summary Elementary low temperature behaviors of non-interacting particle systems are discussed. We will guess low temperature

More information

List of Comprehensive Exams Topics

List of Comprehensive Exams Topics List of Comprehensive Exams Topics Mechanics 1. Basic Mechanics Newton s laws and conservation laws, the virial theorem 2. The Lagrangian and Hamiltonian Formalism The Lagrange formalism and the principle

More information

Introduction. Chapter The Purpose of Statistical Mechanics

Introduction. Chapter The Purpose of Statistical Mechanics Chapter 1 Introduction 1.1 The Purpose of Statistical Mechanics Statistical Mechanics is the mechanics developed to treat a collection of a large number of atoms or particles. Such a collection is, for

More information

Part II Statistical Physics

Part II Statistical Physics Part II Statistical Physics Theorems Based on lectures by H. S. Reall Notes taken by Dexter Chua Lent 2017 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

PRINCIPLES OF PHYSICS. \Hp. Ni Jun TSINGHUA. Physics. From Quantum Field Theory. to Classical Mechanics. World Scientific. Vol.2. Report and Review in

PRINCIPLES OF PHYSICS. \Hp. Ni Jun TSINGHUA. Physics. From Quantum Field Theory. to Classical Mechanics. World Scientific. Vol.2. Report and Review in LONDON BEIJING HONG TSINGHUA Report and Review in Physics Vol2 PRINCIPLES OF PHYSICS From Quantum Field Theory to Classical Mechanics Ni Jun Tsinghua University, China NEW JERSEY \Hp SINGAPORE World Scientific

More information

Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases

Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases Bahram M. Askerov Sophia R. Figarova Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases With im Figures Springer Contents 1 Basic Concepts of Thermodynamics and Statistical Physics...

More information

510 Subject Index. Hamiltonian 33, 86, 88, 89 Hamilton operator 34, 164, 166

510 Subject Index. Hamiltonian 33, 86, 88, 89 Hamilton operator 34, 164, 166 Subject Index Ab-initio calculation 24, 122, 161. 165 Acentric factor 279, 338 Activity absolute 258, 295 coefficient 7 definition 7 Atom 23 Atomic units 93 Avogadro number 5, 92 Axilrod-Teller-forces

More information

PH4211 Statistical Mechanics Brian Cowan

PH4211 Statistical Mechanics Brian Cowan PH4211 Statistical Mechanics Brian Cowan Contents 1 The Methodology of Statistical Mechanics 1.1 Terminology and Methodology 1.1.1 Approaches to the subject 1.1.2 Description of states 1.1.3 Extensivity

More information

Fundamentals. Statistical. and. thermal physics. McGRAW-HILL BOOK COMPANY. F. REIF Professor of Physics Universüy of California, Berkeley

Fundamentals. Statistical. and. thermal physics. McGRAW-HILL BOOK COMPANY. F. REIF Professor of Physics Universüy of California, Berkeley Fundamentals of and Statistical thermal physics F. REIF Professor of Physics Universüy of California, Berkeley McGRAW-HILL BOOK COMPANY Auckland Bogota Guatemala Hamburg Lisbon London Madrid Mexico New

More information

summary of statistical physics

summary of statistical physics summary of statistical physics Matthias Pospiech University of Hannover, Germany Contents 1 Probability moments definitions 3 2 bases of thermodynamics 4 2.1 I. law of thermodynamics..........................

More information

Thermal and Statistical Physics Department Exam Last updated November 4, L π

Thermal and Statistical Physics Department Exam Last updated November 4, L π Thermal and Statistical Physics Department Exam Last updated November 4, 013 1. a. Define the chemical potential µ. Show that two systems are in diffusive equilibrium if µ 1 =µ. You may start with F =

More information

9.1 System in contact with a heat reservoir

9.1 System in contact with a heat reservoir Chapter 9 Canonical ensemble 9. System in contact with a heat reservoir We consider a small system A characterized by E, V and N in thermal interaction with a heat reservoir A 2 characterized by E 2, V

More information

Second Quantization: Quantum Fields

Second Quantization: Quantum Fields Second Quantization: Quantum Fields Bosons and Fermions Let X j stand for the coordinate and spin subscript (if any) of the j-th particle, so that the vector of state Ψ of N particles has the form Ψ Ψ(X

More information

!!"#!$%&' coefficient, D, naturally vary with different gases, but their

!!#!$%&' coefficient, D, naturally vary with different gases, but their Qualifying Examination Statistical Mechanics Fall, 2017 1. (Irreversible processes and fluctuations, 20 points + 5 bonus points) Please explain the physics behind the following phenomena in Statistical

More information

6.730 Physics for Solid State Applications

6.730 Physics for Solid State Applications 6.730 Physics for Solid State Applications Lecture 25: Chemical Potential and Equilibrium Outline Microstates and Counting System and Reservoir Microstates Constants in Equilibrium Temperature & Chemical

More information

INTRODUCTION TO о JLXJLA Из А lv-/xvj_y JrJrl Y üv_>l3 Second Edition

INTRODUCTION TO о JLXJLA Из А lv-/xvj_y JrJrl Y üv_>l3 Second Edition INTRODUCTION TO о JLXJLA Из А lv-/xvj_y JrJrl Y üv_>l3 Second Edition Kerson Huang CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is an imprint of the Taylor & Francis Group an Informa

More information

1 Fluctuations of the number of particles in a Bose-Einstein condensate

1 Fluctuations of the number of particles in a Bose-Einstein condensate Exam of Quantum Fluids M1 ICFP 217-218 Alice Sinatra and Alexander Evrard The exam consists of two independant exercises. The duration is 3 hours. 1 Fluctuations of the number of particles in a Bose-Einstein

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Statistical Physics I Spring Term 2013 Notes on the Microcanonical Ensemble

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Statistical Physics I Spring Term 2013 Notes on the Microcanonical Ensemble MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department 8.044 Statistical Physics I Spring Term 2013 Notes on the Microcanonical Ensemble The object of this endeavor is to impose a simple probability

More information

1 Foundations of statistical physics

1 Foundations of statistical physics 1 Foundations of statistical physics 1.1 Density operators In quantum mechanics we assume that the state of a system is described by some vector Ψ belonging to a Hilbert space H. If we know the initial

More information

LECTURES ON STATISTICAL MECHANICS E. MANOUSAKIS

LECTURES ON STATISTICAL MECHANICS E. MANOUSAKIS LECTURES ON STATISTICAL MECHANICS E. MANOUSAKIS February 18, 2011 2 Contents 1 Need for Statistical Mechanical Description 9 2 Classical Statistical Mechanics 13 2.1 Phase Space..............................

More information

Grand Canonical Formalism

Grand Canonical Formalism Grand Canonical Formalism Grand Canonical Ensebmle For the gases of ideal Bosons and Fermions each single-particle mode behaves almost like an independent subsystem, with the only reservation that the

More information

Physics 212: Statistical mechanics II Lecture XI

Physics 212: Statistical mechanics II Lecture XI Physics 212: Statistical mechanics II Lecture XI The main result of the last lecture was a calculation of the averaged magnetization in mean-field theory in Fourier space when the spin at the origin is

More information

Math Questions for the 2011 PhD Qualifier Exam 1. Evaluate the following definite integral 3" 4 where! ( x) is the Dirac! - function. # " 4 [ ( )] dx x 2! cos x 2. Consider the differential equation dx

More information

The Methodology of Statistical Mechanics

The Methodology of Statistical Mechanics Chapter 4 The Methodology of Statistical Mechanics c 2006 by Harvey Gould and Jan Tobochnik 16 November 2006 We develop the basic methodology of statistical mechanics and provide a microscopic foundation

More information

Statistical Mechanics

Statistical Mechanics 42 My God, He Plays Dice! Statistical Mechanics Statistical Mechanics 43 Statistical Mechanics Statistical mechanics and thermodynamics are nineteenthcentury classical physics, but they contain the seeds

More information

Physics 607 Exam 2. ( ) = 1, Γ( z +1) = zγ( z) x n e x2 dx = 1. e x2

Physics 607 Exam 2. ( ) = 1, Γ( z +1) = zγ( z) x n e x2 dx = 1. e x2 Physics 607 Exam Please be well-organized, and show all significant steps clearly in all problems. You are graded on your work, so please do not just write down answers with no explanation! Do all your

More information

The non-interacting Bose gas

The non-interacting Bose gas Chapter The non-interacting Bose gas Learning goals What is a Bose-Einstein condensate and why does it form? What determines the critical temperature and the condensate fraction? What changes for trapped

More information

Collective Effects. Equilibrium and Nonequilibrium Physics

Collective Effects. Equilibrium and Nonequilibrium Physics 1 Collective Effects in Equilibrium and Nonequilibrium Physics: Lecture 2, 24 March 2006 1 Collective Effects in Equilibrium and Nonequilibrium Physics Website: http://cncs.bnu.edu.cn/mccross/course/ Caltech

More information

Statistical Mechanics Notes. Ryan D. Reece

Statistical Mechanics Notes. Ryan D. Reece Statistical Mechanics Notes Ryan D. Reece August 11, 2006 Contents 1 Thermodynamics 3 1.1 State Variables.......................... 3 1.2 Inexact Differentials....................... 5 1.3 Work and Heat..........................

More information

Lecture 4: Entropy. Chapter I. Basic Principles of Stat Mechanics. A.G. Petukhov, PHYS 743. September 7, 2017

Lecture 4: Entropy. Chapter I. Basic Principles of Stat Mechanics. A.G. Petukhov, PHYS 743. September 7, 2017 Lecture 4: Entropy Chapter I. Basic Principles of Stat Mechanics A.G. Petukhov, PHYS 743 September 7, 2017 Chapter I. Basic Principles of Stat Mechanics A.G. Petukhov, Lecture PHYS4: 743 Entropy September

More information

MP463 QUANTUM MECHANICS

MP463 QUANTUM MECHANICS MP463 QUANTUM MECHANICS Introduction Quantum theory of angular momentum Quantum theory of a particle in a central potential - Hydrogen atom - Three-dimensional isotropic harmonic oscillator (a model of

More information

Physics 607 Final Exam

Physics 607 Final Exam Physics 607 Final Exam Please be well-organized, and show all significant steps clearly in all problems You are graded on your work, so please do not ust write down answers with no explanation! o state

More information

IV. Classical Statistical Mechanics

IV. Classical Statistical Mechanics IV. Classical Statistical Mechanics IV.A General Definitions Statistical Mechanics is a probabilistic approach to equilibrium macroscopic properties of large numbers of degrees of freedom. As discussed

More information

Chapter 14. Ideal Bose gas Equation of state

Chapter 14. Ideal Bose gas Equation of state Chapter 14 Ideal Bose gas In this chapter, we shall study the thermodynamic properties of a gas of non-interacting bosons. We will show that the symmetrization of the wavefunction due to the indistinguishability

More information

Phase transitions and critical phenomena

Phase transitions and critical phenomena Phase transitions and critical phenomena Classification of phase transitions. Discontinous (st order) transitions Summary week -5 st derivatives of thermodynamic potentials jump discontinously, e.g. (

More information

I. BASICS OF STATISTICAL MECHANICS AND QUANTUM MECHANICS

I. BASICS OF STATISTICAL MECHANICS AND QUANTUM MECHANICS I. BASICS OF STATISTICAL MECHANICS AND QUANTUM MECHANICS Marus Holzmann LPMMC, Maison de Magistère, Grenoble, and LPTMC, Jussieu, Paris marus@lptl.jussieu.fr http://www.lptl.jussieu.fr/users/marus (Dated:

More information

Elementary Lectures in Statistical Mechanics

Elementary Lectures in Statistical Mechanics George DJ. Phillies Elementary Lectures in Statistical Mechanics With 51 Illustrations Springer Contents Preface References v vii I Fundamentals: Separable Classical Systems 1 Lecture 1. Introduction 3

More information

Lecture notes for QFT I (662)

Lecture notes for QFT I (662) Preprint typeset in JHEP style - PAPER VERSION Lecture notes for QFT I (66) Martin Kruczenski Department of Physics, Purdue University, 55 Northwestern Avenue, W. Lafayette, IN 47907-036. E-mail: markru@purdue.edu

More information

PAPER No. 6: PHYSICAL CHEMISTRY-II (Statistical

PAPER No. 6: PHYSICAL CHEMISTRY-II (Statistical Subject Paper No and Title Module No and Title Module Tag 6: PHYSICAL CHEMISTRY-II (Statistical 1: Introduction to Statistical CHE_P6_M1 TABLE OF CONTENTS 1. Learning Outcomes 2. Statistical Mechanics

More information

Magnets, 1D quantum system, and quantum Phase transitions

Magnets, 1D quantum system, and quantum Phase transitions 134 Phys620.nb 10 Magnets, 1D quantum system, and quantum Phase transitions In 1D, fermions can be mapped into bosons, and vice versa. 10.1. magnetization and frustrated magnets (in any dimensions) Consider

More information

Statistical Mechanics Victor Naden Robinson vlnr500 3 rd Year MPhys 17/2/12 Lectured by Rex Godby

Statistical Mechanics Victor Naden Robinson vlnr500 3 rd Year MPhys 17/2/12 Lectured by Rex Godby Statistical Mechanics Victor Naden Robinson vlnr500 3 rd Year MPhys 17/2/12 Lectured by Rex Godby Lecture 1: Probabilities Lecture 2: Microstates for system of N harmonic oscillators Lecture 3: More Thermodynamics,

More information

Statistical Mechanics

Statistical Mechanics Statistical Mechanics Entropy, Order Parameters, and Complexity James P. Sethna Laboratory of Atomic and Solid State Physics Cornell University, Ithaca, NY OXFORD UNIVERSITY PRESS Contents List of figures

More information

Students are required to pass a minimum of 15 AU of PAP courses including the following courses:

Students are required to pass a minimum of 15 AU of PAP courses including the following courses: School of Physical and Mathematical Sciences Division of Physics and Applied Physics Minor in Physics Curriculum - Minor in Physics Requirements for the Minor: Students are required to pass a minimum of

More information

I. Collective Behavior, From Particles to Fields

I. Collective Behavior, From Particles to Fields I. Collective Behavior, From Particles to Fields I.A Introduction The object of the first part of this course was to introduce the principles of statistical mechanics which provide a bridge between the

More information

Title of communication, titles not fitting in one line will break automatically

Title of communication, titles not fitting in one line will break automatically Title of communication titles not fitting in one line will break automatically First Author Second Author 2 Department University City Country 2 Other Institute City Country Abstract If you want to add

More information

An Outline of (Classical) Statistical Mechanics and Related Concepts in Machine Learning

An Outline of (Classical) Statistical Mechanics and Related Concepts in Machine Learning An Outline of (Classical) Statistical Mechanics and Related Concepts in Machine Learning Chang Liu Tsinghua University June 1, 2016 1 / 22 What is covered What is Statistical mechanics developed for? What

More information

Introduction Statistical Thermodynamics. Monday, January 6, 14

Introduction Statistical Thermodynamics. Monday, January 6, 14 Introduction Statistical Thermodynamics 1 Molecular Simulations Molecular dynamics: solve equations of motion Monte Carlo: importance sampling r 1 r 2 r n MD MC r 1 r 2 2 r n 2 3 3 4 4 Questions How can

More information

UNIVERSITY OF LONDON. BSc and MSci EXAMINATION 2005 DO NOT TURN OVER UNTIL TOLD TO BEGIN

UNIVERSITY OF LONDON. BSc and MSci EXAMINATION 2005 DO NOT TURN OVER UNTIL TOLD TO BEGIN UNIVERSITY OF LONDON BSc and MSci EXAMINATION 005 For Internal Students of Royal Holloway DO NOT UNTIL TOLD TO BEGIN PH610B: CLASSICAL AND STATISTICAL THERMODYNAMICS PH610B: CLASSICAL AND STATISTICAL THERMODYNAMICS

More information

Statistical Mechanics I

Statistical Mechanics I Statistical Mechanics I Peter S. Riseborough November 15, 11 Contents 1 Introduction 4 Thermodynamics 4.1 The Foundations of Thermodynamics............... 4. Thermodynamic Equilibrium....................

More information

...Thermodynamics. Lecture 15 November 9, / 26

...Thermodynamics. Lecture 15 November 9, / 26 ...Thermodynamics Conjugate variables Positive specific heats and compressibility Clausius Clapeyron Relation for Phase boundary Phase defined by discontinuities in state variables Lecture 15 November

More information

Decoherence and the Classical Limit

Decoherence and the Classical Limit Chapter 26 Decoherence and the Classical Limit 26.1 Introduction Classical mechanics deals with objects which have a precise location and move in a deterministic way as a function of time. By contrast,

More information

Phase Transitions in Condensed Matter Spontaneous Symmetry Breaking and Universality. Hans-Henning Klauss. Institut für Festkörperphysik TU Dresden

Phase Transitions in Condensed Matter Spontaneous Symmetry Breaking and Universality. Hans-Henning Klauss. Institut für Festkörperphysik TU Dresden Phase Transitions in Condensed Matter Spontaneous Symmetry Breaking and Universality Hans-Henning Klauss Institut für Festkörperphysik TU Dresden 1 References [1] Stephen Blundell, Magnetism in Condensed

More information

UNIVERSITY OF OSLO FACULTY OF MATHEMATICS AND NATURAL SCIENCES

UNIVERSITY OF OSLO FACULTY OF MATHEMATICS AND NATURAL SCIENCES UNIVERSITY OF OSLO FCULTY OF MTHEMTICS ND NTURL SCIENCES Exam in: FYS430, Statistical Mechanics Day of exam: Jun.6. 203 Problem :. The relative fluctuations in an extensive quantity, like the energy, depends

More information

Hydrodynamics. Stefan Flörchinger (Heidelberg) Heidelberg, 3 May 2010

Hydrodynamics. Stefan Flörchinger (Heidelberg) Heidelberg, 3 May 2010 Hydrodynamics Stefan Flörchinger (Heidelberg) Heidelberg, 3 May 2010 What is Hydrodynamics? Describes the evolution of physical systems (classical or quantum particles, fluids or fields) close to thermal

More information

Introduction to the Renormalization Group

Introduction to the Renormalization Group Introduction to the Renormalization Group Gregory Petropoulos University of Colorado Boulder March 4, 2015 1 / 17 Summary Flavor of Statistical Physics Universality / Critical Exponents Ising Model Renormalization

More information

Classical Statistical Mechanics: Part 1

Classical Statistical Mechanics: Part 1 Classical Statistical Mechanics: Part 1 January 16, 2013 Classical Mechanics 1-Dimensional system with 1 particle of mass m Newton s equations of motion for position x(t) and momentum p(t): ẋ(t) dx p =

More information

III. Kinetic Theory of Gases

III. Kinetic Theory of Gases III. Kinetic Theory of Gases III.A General Definitions Kinetic theory studies the macroscopic properties of large numbers of particles, starting from their (classical) equations of motion. Thermodynamics

More information

in-medium pair wave functions the Cooper pair wave function the superconducting order parameter anomalous averages of the field operators

in-medium pair wave functions the Cooper pair wave function the superconducting order parameter anomalous averages of the field operators (by A. A. Shanenko) in-medium wave functions in-medium pair-wave functions and spatial pair particle correlations momentum condensation and ODLRO (off-diagonal long range order) U(1) symmetry breaking

More information

Physics 408 Final Exam

Physics 408 Final Exam Physics 408 Final Exam Name You are graded on your work (with partial credit where it is deserved) so please do not just write down answers with no explanation (or skip important steps)! Please give clear,

More information

Lecture 6: Fluctuation-Dissipation theorem and introduction to systems of interest

Lecture 6: Fluctuation-Dissipation theorem and introduction to systems of interest Lecture 6: Fluctuation-Dissipation theorem and introduction to systems of interest In the last lecture, we have discussed how one can describe the response of a well-equilibriated macroscopic system to

More information

PHAS2228 : Statistical Thermodynamics

PHAS2228 : Statistical Thermodynamics PHAS8 : Statistical Thermodynamics Last L A TEXed on: April 1, 007 Contents I. Notes 1 1. Basic Thermodynamics : Zeroth and First Laws 1.1. Zeroth Law - Temperature and Equilibrium................ 1..

More information

Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics

Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics Time-Dependent Statistical Mechanics 5. The classical atomic fluid, classical mechanics, and classical equilibrium statistical mechanics c Hans C. Andersen October 1, 2009 While we know that in principle

More information

Supplement: Statistical Physics

Supplement: Statistical Physics Supplement: Statistical Physics Fitting in a Box. Counting momentum states with momentum q and de Broglie wavelength λ = h q = 2π h q In a discrete volume L 3 there is a discrete set of states that satisfy

More information

On the Heisenberg and Schrödinger Pictures

On the Heisenberg and Schrödinger Pictures Journal of Modern Physics, 04, 5, 7-76 Published Online March 04 in SciRes. http://www.scirp.org/ournal/mp http://dx.doi.org/0.436/mp.04.5507 On the Heisenberg and Schrödinger Pictures Shigei Fuita, James

More information

Anderson Localization Looking Forward

Anderson Localization Looking Forward Anderson Localization Looking Forward Boris Altshuler Physics Department, Columbia University Collaborations: Also Igor Aleiner Denis Basko, Gora Shlyapnikov, Vincent Michal, Vladimir Kravtsov, Lecture2

More information

Part II: Statistical Physics

Part II: Statistical Physics Chapter 7: Quantum Statistics SDSMT, Physics 2013 Fall 1 Introduction 2 The Gibbs Factor Gibbs Factor Several examples 3 Quantum Statistics From high T to low T From Particle States to Occupation Numbers

More information

Statistical Physics. The Second Law. Most macroscopic processes are irreversible in everyday life.

Statistical Physics. The Second Law. Most macroscopic processes are irreversible in everyday life. Statistical Physics he Second Law ime s Arrow Most macroscopic processes are irreversible in everyday life. Glass breaks but does not reform. Coffee cools to room temperature but does not spontaneously

More information

Chapter 6. Phase transitions. 6.1 Concept of phase

Chapter 6. Phase transitions. 6.1 Concept of phase Chapter 6 hase transitions 6.1 Concept of phase hases are states of matter characterized by distinct macroscopic properties. ypical phases we will discuss in this chapter are liquid, solid and gas. Other

More information

CH 240 Chemical Engineering Thermodynamics Spring 2007

CH 240 Chemical Engineering Thermodynamics Spring 2007 CH 240 Chemical Engineering Thermodynamics Spring 2007 Instructor: Nitash P. Balsara, nbalsara@berkeley.edu Graduate Assistant: Paul Albertus, albertus@berkeley.edu Course Description Covers classical

More information

Physics 541: Condensed Matter Physics

Physics 541: Condensed Matter Physics Physics 541: Condensed Matter Physics In-class Midterm Exam Wednesday, October 26, 2011 / 14:00 15:20 / CCIS 4-285 Student s Name: Instructions There are 23 questions. You should attempt all of them. Mark

More information

Quantum statistics: properties of the Fermi-Dirac distribution.

Quantum statistics: properties of the Fermi-Dirac distribution. Statistical Mechanics Phys54 Fall 26 Lecture #11 Anthony J. Leggett Department of Physics, UIUC Quantum statistics: properties of the Fermi-Dirac distribution. In the last lecture we discussed the properties

More information

Collective Effects. Equilibrium and Nonequilibrium Physics

Collective Effects. Equilibrium and Nonequilibrium Physics Collective Effects in Equilibrium and Nonequilibrium Physics: Lecture 3, 3 March 2006 Collective Effects in Equilibrium and Nonequilibrium Physics Website: http://cncs.bnu.edu.cn/mccross/course/ Caltech

More information

Physics 607 Final Exam

Physics 607 Final Exam Physics 607 Final Exam Please be well-organized, and show all significant steps clearly in all problems. You are graded on your work, so please do not just write down answers with no explanation! Do all

More information

01. Equilibrium Thermodynamics I: Introduction

01. Equilibrium Thermodynamics I: Introduction University of Rhode Island DigitalCommons@URI Equilibrium Statistical Physics Physics Course Materials 2015 01. Equilibrium Thermodynamics I: Introduction Gerhard Müller University of Rhode Island, gmuller@uri.edu

More information

International Physics Course Entrance Examination Questions

International Physics Course Entrance Examination Questions International Physics Course Entrance Examination Questions (May 2010) Please answer the four questions from Problem 1 to Problem 4. You can use as many answer sheets you need. Your name, question numbers

More information

Concepts for Specific Heat

Concepts for Specific Heat Concepts for Specific Heat Andreas Wacker 1 Mathematical Physics, Lund University August 17, 018 1 Introduction These notes shall briefly explain general results for the internal energy and the specific

More information