
STATISTICAL PHYSICS II
(Statistical Physics of Non-Equilibrium Systems)

Lecture notes 2009

V.S. Shumeiko

Part I

CLASSICAL KINETIC THEORY

Lecture 1. Outline of equilibrium statistical physics

1.1 Definitions

Let us consider a classical system in 3 dimensions with N particles. The microscopic state of the system is fully defined by the position and momentum of each of the N particles. If we denote the position of the i-th particle by the 3-dimensional vector q_i, and similarly denote the momentum of the particle by p_i, the microstate of the system is specified by the vectors {q_1, p_1, q_2, p_2, ..., q_N, p_N}. The 6N-dimensional space spanned by these vectors is known as the phase space \Gamma of the system. For simplicity, let us denote the microstate of the system by the 6N-dimensional vector X.

As the system evolves in time, it traces a trajectory X(t) in phase space. The simplest example is the time development of a set of non-interacting particles in the absence of external forces: from Newton's laws we know that the momenta of the particles do not change and the particles' positions change with constant velocities; consequently, the state of the system as a function of time is a straight line in phase space. If there are external or internal forces acting upon the system, the trajectory in phase space can be considerably more complicated and, under some circumstances, chaotic.

In statistical physics the system usually consists of a huge number of particles (~ 10^23), and it is not possible to determine the positions and momenta of all particles. Instead, we define the macrostate of the system by specifying some macroscopic quantities like pressure, volume and temperature. Usually, particular values of these macroscopic quantities can originate from any one of a large number of different microstates (the exception is if we specify T = 0, in which case the system will be in one of a (usually) small number of ground states). Therefore, we define a probability density f_N(X, t) such that f_N(X, t) \Delta X is the probability that at time t the microstate of the system is in the neighborhood \Delta X of the point X in phase space. Since f_N(X, t) is a probability density, it obeys the normalization condition

\int dX\, f_N(X, t) = 1, \qquad dX = \prod_i d^3q_i\, d^3p_i.   (1.1)

A probability density is also often called a distribution function. In many cases it is convenient to use normalizations of the distribution function different from Eq. (1.1), for example such that the integral equals the number of particles N, or the particle density N/V.

Any macroscopic variable A(X) is characterized by various statistical averages (moments), which are also called ensemble averages. For example, the first moment is

\langle A \rangle = \int dX\, A(X) f(X).   (1.2)

A simple numerical illustration of these definitions is sketched below.
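To make the objects above concrete, here is a minimal sketch (Python/NumPy; the choice of distribution - positions uniform in a unit box, momenta Gaussian with width sigma_p - is purely illustrative and not taken from the notes). A microstate X is represented by position and momentum arrays, and an ensemble average is estimated by sampling microstates from f:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000          # number of particles
m = 1.0           # particle mass
sigma_p = 1.0     # width of the (illustrative) Gaussian momentum distribution

def sample_microstate():
    """Draw one microstate X = {q_i, p_i} from an assumed product distribution:
    positions uniform in a unit box, momenta Gaussian."""
    q = rng.uniform(0.0, 1.0, size=(N, 3))
    p = rng.normal(0.0, sigma_p, size=(N, 3))
    return q, p

def kinetic_energy(q, p):
    """A macroscopic variable A(X): the total kinetic energy."""
    return (p**2).sum() / (2.0 * m)

# ensemble average <A> estimated as a sample mean over many microstates
samples = [kinetic_energy(*sample_microstate()) for _ in range(200)]
print("estimated <E_kin> =", np.mean(samples))
print("exact     <E_kin> =", 1.5 * N * sigma_p**2 / m)   # (3N/2) sigma_p^2 / m
```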

More generally, the n-th moment is

\langle A^n \rangle = \int dX\, A^n(X) f(X).   (1.3)

Fluctuations around the average values are characterized by cumulants, for example the second cumulant, or variance,

\mathrm{var}\, A = \langle (A - \langle A \rangle)^2 \rangle = \langle A^2 \rangle - \langle A \rangle^2.   (1.4)

The correlation function of two variables A(X) and B(X) is defined as

\langle AB \rangle = \int dX\, A(X) B(X) f(X).   (1.5)

Statistically independent variables satisfy the relation

\langle AB \rangle = \langle A \rangle \langle B \rangle.   (1.6)

In classical physics, statistical correlations indicate interaction: non-interacting particles can be considered statistically independent. In quantum physics this is not the case: for example, the Pauli exclusion principle imposes quantum correlations on fermionic particles which persist even in the absence of any interaction.

1.2 Self-averaging

The goal of equilibrium statistical physics is to establish a bridge between the macroscopic variables introduced in thermodynamics (temperature, pressure, entropy, internal energy, quantity of heat, etc.) and the ensemble averages of the microscopic description. The first issue to address is the apparent qualitative difference between these two descriptions: the random variables of the microscopic description always fluctuate, while the thermodynamic quantities do not seem to. This discrepancy is resolved by the self-averaging property of macroscopic variables.

Consider, e.g., the total energy E of an ideal gas, which consists of the sum of the energies of all molecules, E = \sum_{i=1}^{N} \epsilon_i, where i counts the molecules. The random quantity \epsilon_i has some average value, which we assume for simplicity to be independent of i, \langle \epsilon_i \rangle = \epsilon, and some variance, also i-independent, \mathrm{var}\, \epsilon_i = \delta. Since the molecules of an ideal gas do not interact, they may be considered statistically independent, hence \langle \epsilon_i \epsilon_j \rangle = \langle \epsilon_i \rangle \langle \epsilon_j \rangle for i \neq j. Macroscopically, our gas is characterized by an internal energy U = \langle E \rangle = N\epsilon. Let us evaluate the variance of the energy,

\mathrm{var}\, E = \langle E^2 \rangle - U^2.   (1.7)

Using the microscopic energies, we rewrite this as

\mathrm{var}\, E = \sum_{ij}^{N} \langle \epsilon_i \epsilon_j \rangle - \sum_{ij}^{N} \langle \epsilon_i \rangle \langle \epsilon_j \rangle = \sum_{i}^{N} \left[ \langle \epsilon_i^2 \rangle - \langle \epsilon_i \rangle^2 \right] + \sum_{i \neq j}^{N} \left[ \langle \epsilon_i \epsilon_j \rangle - \langle \epsilon_i \rangle \langle \epsilon_j \rangle \right].   (1.8)

The second sum is equal to zero by virtue of the statistical independence of the molecules, thus we get

\mathrm{var}\, E = N\delta.   (1.9)

The fluctuation is quantified by the ratio of the square root of the variance to the average,

\frac{\sqrt{\mathrm{var}\, E}}{U} = \frac{\sqrt{N\delta}}{N\epsilon} = \frac{1}{\sqrt{N}} \frac{\sqrt{\delta}}{\epsilon}.   (1.10)

Thus we come to the important conclusion that the relative fluctuation is proportional to 1/\sqrt{N}. For very large systems (the thermodynamic limit N \to \infty) the fluctuation is vanishingly small, 1/\sqrt{N} \to 0. This property is called self-averaging. The same argument applies to any macroscopic variable that is a sum of many non-interacting microscopic ingredients. Clearly, the self-averaging property is a particular case of the central limit theorem of probability theory: the relative fluctuation of an additive function of many random variables is vanishingly small.

In fact, the self-averaging property also holds for thermodynamic systems with arbitrarily strong microscopic interactions, provided the interaction is short ranged. To see this, we apply the following scaling argument: consider an infinitely large system (thermodynamic limit!) and divide it into a very large (asymptotically infinite) number of blocks, each of them itself being a large thermodynamic system. Due to the local microscopic interactions, the interactions between the blocks are confined to the block interfaces, while the bulk parts of the blocks do not interact. The interface areas are vanishingly small compared to the bulk volume in the thermodynamic limit, thus the blocks can be considered non-interacting and hence statistically independent. Then, repeating the same calculation for any additive thermodynamic characteristic of the whole system, as we did for the energy of an ideal gas, we find that this thermodynamic characteristic is self-averaging. A numerical illustration of the 1/\sqrt{N} scaling is sketched below.
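A minimal sketch (Python/NumPy; the exponential single-molecule energy distribution, with mean \epsilon = 1 and variance \delta = 1, is an arbitrary illustrative choice) that checks Eq. (1.9) and the 1/\sqrt{N} decay of the relative fluctuation, Eq. (1.10):

```python
import numpy as np

rng = np.random.default_rng(1)

eps = 1.0           # mean single-molecule energy; for an exponential law, variance = eps^2
n_samples = 20000   # number of sampled "gases" per system size

for N in [10, 100, 1000, 10000]:
    # each row is one realization of the gas: N independent molecular energies
    E = rng.exponential(eps, size=(n_samples, N)).sum(axis=1)
    rel_fluct = E.std() / E.mean()
    print(f"N={N:6d}  var E / N = {E.var()/N:6.3f}  "
          f"sqrt(var E)/U = {rel_fluct:8.5f}  1/sqrt(N) = {1/np.sqrt(N):8.5f}")
```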

1.3 Microcanonical ensemble

Thermodynamic equilibrium is a unique state of a system, characterized by the maximum value of the entropy. Which distribution function corresponds to this state? Finding the form of such a distribution function is a central task of equilibrium statistical physics. The result must be consistent with the phenomenological laws of thermodynamics, in particular with the maximum entropy principle - the second law of thermodynamics.

There are many ways to derive (better to say, postulate) the equilibrium distribution function. Here we will follow the derivation based on the microscopic equation for the entropy postulated by Gibbs, combined with the maximum entropy principle. The relation between the entropy and the distribution function suggested by Gibbs has the form

S = -k \langle \ln f \rangle = -k \int dX\, f(X) \ln f(X),   (1.11)

where k is the Boltzmann constant. Now we have to solve a variational problem: find the distribution function that maximizes the integral defining the entropy. Let us consider a closed system, whose exact energy is fixed and thus defines a (6N-1)-dimensional surface \Gamma_E in phase space. Let us denote the area of this surface by \Omega(E). The variational problem must be solved under the normalization constraint, which is taken into account by considering the extended variational problem

\delta \left[ S - \alpha \langle 1 \rangle \right] = 0,   (1.12)

where \alpha is a Lagrange multiplier, to be found from the normalization condition \langle 1 \rangle = 1. Performing the variation explicitly, we get

\delta \int_{\Gamma_E} dX \left[ -k f(X) \ln f(X) - \alpha f(X) \right] = \int_{\Gamma_E} dX\, \delta \left[ -k f(X) \ln f(X) - \alpha f(X) \right] = \int_{\Gamma_E} dX \left[ -k \ln f(X) - k - \alpha \right] \delta f(X) = 0.   (1.13)

The integral in the above equation is equal to zero for an arbitrary variation of f(X) if and only if the expression in the brackets is zero,

-k \ln f(X) - k - \alpha = 0.   (1.14)

The solution of this equation is a function that is constant over the energy surface, f(X) = const. From the normalization condition we find this constant,

f(X) = \frac{1}{\Omega(E)}.   (1.15)

This equilibrium distribution function for a closed system is known as the microcanonical ensemble. For this ensemble, the entropy is related to the available phase volume,

S(E) = k \ln \Omega(E), \qquad \text{or} \qquad \Omega(E) = e^{S(E)/k}.   (1.16)

An intuitive feeling for the nature of such a distribution can be gained by picturing the chaotic motion of the system over the surface of constant energy, visiting every region of the surface equally frequently. Such behavior is called ergodic, and the corresponding assumption about the evolution of the system is known as the ergodicity hypothesis. The thermodynamic description is relevant for ergodic systems. It is generally believed that very large interacting systems are ergodic; however, exact knowledge of which macroscopic Hamiltonians generate ergodic evolution is lacking. Mathematically, the ergodicity property is formulated in terms of a time average along the system trajectory X(t),

\bar{A}_{X_0} = \lim_{T \to \infty} \frac{1}{T} \int_0^T dt\, A(X(t)), \qquad X(0) = X_0.   (1.17)

For ergodic evolution the time average does not depend on the initial state, and it is equal to the ensemble average,

\bar{A} = \langle A \rangle.   (1.18)

An elementary numerical illustration of Eq. (1.18) is sketched below.
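As an illustration (not a proof of ergodicity), here is a minimal sketch in Python for a single harmonic oscillator, whose trajectory covers its entire constant-energy curve: the time average of A = x^2 along the trajectory is compared with the average over the energy surface, parametrized by a uniformly distributed phase. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
omega, amplitude = 1.3, 2.0                 # arbitrary oscillator parameters

# Time average of A(X) = x^2 along one trajectory x(t) = amplitude*cos(omega*t),
# sampled densely and uniformly in time, Eq. (1.17).
t = np.linspace(0.0, 5000.0, 5_000_000)
time_avg = np.mean((amplitude * np.cos(omega * t))**2)

# Microcanonical (surface) average: the constant-energy curve is an ellipse in
# the (x, p) plane; uniform coverage corresponds to a uniformly distributed phase.
phi = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
ensemble_avg = np.mean((amplitude * np.cos(phi))**2)

print(time_avg, ensemble_avg)   # both approach amplitude**2 / 2 = 2.0
```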

Lecture 2. Equilibrium ensembles

2.1 Canonical ensemble

A more practical equilibrium distribution is derived for a system connected to a thermostat in such a way that the system and the thermostat can exchange energy. The exact total energy of the system then fluctuates. Let us suppose that the internal energy of the system is fixed, U = \langle E(X) \rangle = const. Such an ensemble is called the canonical ensemble, or the Gibbs ensemble.

The Gibbs distribution is derived using the same definition of the entropy, Eq. (1.11), and requiring the maximum entropy value. However, now there is one more constraint to be fulfilled in addition to the normalization of the distribution: conservation of the internal energy. The corresponding variational problem contains two Lagrange multipliers, \alpha and \beta,

\delta \left[ S - \alpha \langle 1 \rangle - \beta \langle E \rangle \right] = 0.   (2.1)

Going through a calculation similar to that of the previous section, we derive the following equation for the Gibbs distribution,

-k \ln f(X) - k - \alpha - \beta E(X) = 0.   (2.2)

The solution has an exponential form,

f(X) = \exp\left( -\frac{k + \alpha}{k} - \frac{\beta E(X)}{k} \right).   (2.3)

The connection between the constants in this solution and the thermodynamic parameters of the system is recovered by substituting the distribution Eq. (2.3) into the equation for the entropy,

S = -k \langle \ln f \rangle = (k + \alpha) + \beta U.   (2.4)

This is to be compared with the thermodynamic relation F(V, T) = U(V, S) - ST for the (Helmholtz) free energy F, giving \beta = 1/T and k + \alpha = -F/T. Hence the Gibbs distribution takes the form

f(X) = \exp\frac{F - E(X)}{kT}.   (2.5)

Using the normalization condition, one connects the free energy to the partition function Z,

F(V, T) = -kT \ln Z, \qquad Z(V, T) = \int dX\, e^{-E(X)/kT}.   (2.6)

This equation establishes the basis for the calculation of all macroscopic thermodynamic quantities from the microscopic Hamiltonian of the system, H(X) = E(X).
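A minimal numerical sketch of Eq. (2.6) (Python; a two-level system with an arbitrary level splitting is assumed, and k = 1): compute Z and F, obtain U and the Gibbs entropy from the distribution, and verify the thermodynamic relation F = U - TS:

```python
import numpy as np

kT = 1.0                       # temperature in energy units (k = 1)
E = np.array([0.0, 0.7])       # two arbitrary energy levels

# canonical (Gibbs) weights, partition function and free energy, Eq. (2.6)
w = np.exp(-E / kT)
Z = w.sum()
f = w / Z                      # normalized Gibbs distribution, Eq. (2.5)
F = -kT * np.log(Z)

U = (f * E).sum()              # internal energy <E>
S = -(f * np.log(f)).sum()     # Gibbs entropy, Eq. (1.11), with k = 1

print(F, U - kT * S)           # F = U - TS; both are approximately -0.403
```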

Let us apply the Gibbs distribution to an ideal gas. In this case, the total energy is the sum of the energies of the individual molecules, E = \sum_{i=1}^{N} \epsilon_i. Thus the Gibbs distribution can be factorized,

f(\{\epsilon_i\}) = e^{F/kT} \prod_i e^{-\epsilon_i/kT}.   (2.7)

Each factor in this product has the meaning of a (non-normalized) distribution function of an individual molecule. Assuming that the molecular energy consists of the kinetic energy and the potential energy in an external field,

E = \sum_{i=1}^{N} \left( \frac{p_i^2}{2m} + U(q_i) \right),   (2.8)

we get

f(p_i, q_i) = C \exp\left( -\frac{p_i^2}{2mkT} - \frac{U(q_i)}{kT} \right),   (2.9)

where C is a normalization constant. This is the Boltzmann distribution; Maxwell derived this distribution for U = 0.

We close this section with the remark that, since for large thermodynamic systems the energy fluctuation is negligibly small, the results of calculations using the microcanonical and canonical ensembles coincide (in the thermodynamic limit).

2.2 Grand canonical ensemble

Let us now also allow the system to exchange particles with the thermostat, keeping the average number of particles constant, \langle N \rangle = \bar{N} = const. The distribution for this system is known as the grand canonical ensemble. It can be derived (do it!) using the same arguments as before, adding the particle conservation constraint. The result reads

f(E, N) = \exp\left( \frac{\Omega - E + \mu N}{kT} \right),   (2.10)

where \mu is the chemical potential, and \Omega(V, T, \mu) is a thermodynamic potential related to the free energy via the Legendre transformation

\Omega(V, T, \mu) = F(V, T, N) - \mu N.   (2.11)

The difference between the thermodynamic potentials F and \Omega for systems with a variable number of particles is that F depends on the average particle number, dF = \ldots + \mu\, dN, while \Omega depends on the chemical potential, d\Omega = \ldots - N\, d\mu. Thus the equation -\partial \Omega / \partial \mu = N in fact fixes the chemical potential for a given N. An explicit equation for \Omega through the partition function again follows from the normalization,

\Omega(V, T, \mu) = -kT \ln Z, \qquad Z(V, T, \mu) = \sum_N \int dX\, e^{(-E(X) + \mu N)/kT},   (2.12)

but the partition function now contains a sum over all particle numbers.

To explore the properties of the grand canonical ensemble, we apply it to the evaluation of the distribution function of a quantum ideal gas, which is given by the Fermi or Bose distribution rather than by the Boltzmann distribution, Eq. (2.9). The difference from the classical case is that quantum particles are indistinguishable, and the factorized distribution does not refer to individual particles. Another way to see this is to say that the individual particles are not statistically independent. Instead, the occupations of the individual quantum states are statistically independent, and it is possible to talk about the average populations of individual quantum states.

Let us introduce the population n_i of the i-th quantum state, and write the total energy and particle number of the gas in the form

E = \sum_i \epsilon_i n_i, \qquad N = \sum_i n_i,   (2.13)

where \epsilon_i is the energy of the i-th quantum state. The partition function has the form

Z = \sum_{\{n_i\}} \exp\left( -\sum_i \frac{(\epsilon_i - \mu) n_i}{kT} \right),   (2.14)

where the sum is taken over all possible particle configurations \{n_i\} occupying all the quantum states. One can rewrite this sum in the equivalent form

\sum_{\{n_i\}} = \prod_i \sum_{n_i}.   (2.15)

This allows us to factorize the partition function,

Z = \prod_i z_i, \qquad z_i = \sum_{n_i} e^{-(\epsilon_i - \mu) n_i / kT},   (2.16)

into a product of partial partition functions of the individual quantum states. Correspondingly, the thermodynamic potential of the gas consists of the contributions of the individual quantum states,

\Omega = \sum_i \Omega_i, \qquad \Omega_i = -kT \ln z_i.   (2.17)

This allows us to define the average occupations of the quantum states, \langle n_i \rangle,

N = -\frac{\partial \Omega}{\partial \mu} = -\sum_i \frac{\partial \Omega_i}{\partial \mu} = \sum_i \langle n_i \rangle, \qquad \langle n_i \rangle = kT \frac{\partial}{\partial \mu} \ln z_i.   (2.18)

Let us now evaluate the partial partition function in Eq. (2.16). In the case of Fermi particles, the occupation of a state can only be n_i = 0, 1, giving z_i = 1 + e^{-(\epsilon_i - \mu)/kT}. In the Bose case there is no constraint on the state occupation, and the geometric series sums to z_i = \left[ 1 - e^{-(\epsilon_i - \mu)/kT} \right]^{-1}.

Taking this into account, we get for both cases

\langle n_i \rangle = \pm kT \frac{\partial}{\partial \mu} \ln\left[ 1 \pm e^{-(\epsilon_i - \mu)/kT} \right] = \frac{1}{e^{(\epsilon_i - \mu)/kT} \pm 1}.   (2.19)

The upper (lower) sign refers to the Fermi (Bose) statistics.
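A quick numerical sanity check of Eqs. (2.18)-(2.19) (Python; the values of \epsilon_i, \mu and T are arbitrary): differentiate \ln z_i with respect to \mu numerically and compare with the closed-form Fermi and Bose occupations:

```python
import numpy as np

kT, eps, mu = 1.0, 0.8, 0.3     # arbitrary temperature, level energy, chemical potential
h = 1e-6                        # step for the numerical mu-derivative

def ln_z(mu, sign):
    """ln z_i for Fermi (sign = +1) or Bose (sign = -1) statistics."""
    return sign * np.log(1.0 + sign * np.exp(-(eps - mu) / kT))

for sign, name in [(+1, "Fermi"), (-1, "Bose ")]:
    n_numeric = kT * (ln_z(mu + h, sign) - ln_z(mu - h, sign)) / (2 * h)  # Eq. (2.18)
    n_exact = 1.0 / (np.exp((eps - mu) / kT) + sign)                      # Eq. (2.19)
    print(name, n_numeric, n_exact)
```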

Lecture 3. Exploring the vicinity of equilibrium

3.1 Fluctuations around equilibrium

Consider some macroscopic variable A(X) that is not exactly constant in equilibrium (for the microcanonical ensemble, for example, any variable other than the energy) but depends on the exact microscopic state. Many microscopic states contribute to the same value of the macroscopic variable; let us denote the corresponding area of the phase space by \Omega(A). Since the microcanonical distribution function is constant over \Gamma_E, the probability of a certain value of the variable is proportional to the corresponding area, P(A) \propto \Omega(A). Defining a non-equilibrium entropy S(A) in analogy with Eq. (1.16), we then have

P(A) \propto e^{S(A)/k}.   (3.1)

For a small deviation from the equilibrium value, A = A_0 + x, we can expand the entropy, remembering that it has a maximum at equilibrium,

S(A) = S_0 + \frac{1}{2} \frac{d^2 S}{dA^2} x^2, \qquad \frac{d^2 S}{dA^2} < 0.   (3.2)

Introducing g = -(1/k)(d^2 S / dA^2), we write the probability density (distribution function) of a macroscopic fluctuation in the form

f(x) = C e^{-(1/2) g x^2}.   (3.3)

Thus the fluctuation of a macroscopic variable around equilibrium has a Gaussian form. This result is due to Einstein. The constant C in Eq. (3.3) is defined by the normalization condition, C = \sqrt{g/2\pi}; the first moment is zero, while the second moment is \langle x^2 \rangle = 1/g.

The Gaussian distribution has the remarkable property that all higher-order moments are expressed through the second one, i.e. through g. To see this, we use a method that is very useful for the calculation of moments and correlation functions of several variables. Let us introduce the generating function

J(h) = \frac{\int dx\, \exp\left( -\frac{1}{2} g x^2 + hx \right)}{\int dx\, \exp\left( -\frac{1}{2} g x^2 \right)}.   (3.4)

Then any moment can be calculated using the relation

\langle x^n \rangle = \left. \frac{d^n J}{dh^n} \right|_{h=0}.   (3.5)

Consider the transformation (1/2) g x^2 - hx = (1/2) g (x - h/g)^2 - (1/2)(h^2/g). Changing the variable to y = x - h/g, we find that the integrals in the numerator and the denominator of Eq. (3.4) cancel, giving the result

J(h) = e^{h^2/2g}.   (3.6)
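A minimal check of Eqs. (3.5)-(3.6) by sampling (Python; the value of g is arbitrary): for the Gaussian distribution Eq. (3.3), the generating function gives \langle x^2 \rangle = 1/g and \langle x^4 \rangle = d^4 J / dh^4 |_{h=0} = 3/g^2:

```python
import numpy as np

rng = np.random.default_rng(3)
g = 2.5                                   # arbitrary stiffness of the entropy maximum

# sample fluctuations from f(x) ~ exp(-g x^2 / 2), i.e. standard deviation 1/sqrt(g)
x = rng.normal(0.0, 1.0 / np.sqrt(g), size=2_000_000)

print(np.mean(x**2), 1.0 / g)             # second moment: <x^2> = 1/g
print(np.mean(x**4), 3.0 / g**2)          # fourth moment from J(h) = e^{h^2/2g}: 3/g^2
```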

It is clear from Eq. (3.6) that indeed \langle x^2 \rangle = 1/g, and that all (even) higher-order moments depend entirely on g.

This derivation can be extended to the case of several variables,

P(A_1, \ldots, A_n) \propto e^{S(A_1, \ldots, A_n)/k}.   (3.7)

Expanding the entropy with respect to the small deviations from equilibrium x_1, \ldots, x_n, and introducing the matrix

g_{ik} = -\frac{1}{k} \frac{\partial^2 S}{\partial A_i \partial A_k}, \qquad g_{ik} = g_{ki},   (3.8)

we have S(A_1, \ldots, A_n) = S_0 - (k/2) g_{ik} x_i x_k. The probability density has the form

f(x_1, \ldots, x_n) = C \exp\left( -\frac{1}{2} g_{ik} x_i x_k \right).   (3.9)

Now the normalization constant reads C = \sqrt{\det g / (2\pi)^n}, and for the second-order correlation function we have

\langle x_i x_j \rangle = g^{-1}_{ij}.   (3.10)

This result is easily obtained by using the generating function method (check!). In the following we shall use the so-called thermodynamic forces, defined as

X_i = -\frac{1}{k} \frac{\partial S}{\partial x_i} = g_{ij} x_j.   (3.11)

Using the result of Eq. (3.10), we can find the correlation functions between the fluctuating variables and the thermodynamic forces,

\langle X_i x_k \rangle = g_{ij} g^{-1}_{jk} = \delta_{ik}.   (3.12)
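A minimal sketch (Python; a 2x2 stiffness matrix g is assumed for concreteness) that samples the distribution Eq. (3.9) and verifies \langle x_i x_j \rangle = (g^{-1})_{ij}:

```python
import numpy as np

rng = np.random.default_rng(4)

# an arbitrary symmetric, positive-definite stiffness matrix g_ik
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# sample x from f ~ exp(-x.g.x/2), i.e. a Gaussian with covariance g^{-1}
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=np.linalg.inv(g), size=1_000_000)

cov_sampled = x.T @ x / len(x)      # estimate of <x_i x_j>
print(cov_sampled)
print(np.linalg.inv(g))             # Eq. (3.10): the two should agree
```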

3.2 Time correlation of fluctuations

Let us consider the evolution of small deviations from equilibrium, x_i(t), in time. This evolution is characterized by the correlation function

\langle x_i(t) x_i(t') \rangle = \phi_i(t - t').   (3.13)

The correlation function depends only on the difference of the times, because in equilibrium the distribution is stationary. The time ordering in this equation does not matter, since x_i is a classical (commuting) variable; thus \phi_i(t - t') = \phi_i(t' - t). Similarly, we can introduce the time cross-correlation function,

\langle x_i(t) x_j(t') \rangle = \phi_{ij}(t - t').   (3.14)

This correlation function has the obvious property

\phi_{ij}(t - t') = \phi_{ji}(t' - t).   (3.15)

There is a more fundamental symmetry property of the cross-correlation function, related to the microscopic time-reversal symmetry:

\phi_{ij}(t - t') = \phi_{ij}(t' - t).   (3.16)

One would expect that a deviation from equilibrium tends to evolve towards equilibrium. But how can the deviation from equilibrium then spontaneously appear? As Paul Ehrenfest argued, a spontaneous deviation from equilibrium within the fluctuation region, x \lesssim \sqrt{\mathrm{var}\, x}, is due to microscopic processes, which are time reversible. Thus the evolution of a fluctuation forward and backward in time is symmetric. Combining the two symmetry relations, Eqs. (3.15) and (3.16), we get

\phi_{ij}(\tau) = \phi_{ji}(\tau).   (3.17)

This relation is valid for quantities that are themselves even under time reversal (e.g. positions, not velocities), and that evolve according to time-reversal-invariant Hamiltonians (e.g. not containing a magnetic field). If this is not the case, the symmetry relation must be appropriately generalized.

3.3 Evolution towards equilibrium

Let us now consider a large deviation from equilibrium, which exceeds the fluctuation region, x \gg \sqrt{\mathrm{var}\, x}. In this case the fluctuation is negligible (self-averaging) and we need not distinguish x and \langle x \rangle. The system will then tend to evolve towards equilibrium, so that at t \to \infty both x_i \to 0 and \dot{x}_i \to 0. The simplest imaginable equation describing such an evolution is a linear one, which for a single variable reads

\dot{x}(t) = -\lambda x(t), \qquad x(t) = x(0) e^{-\lambda t}.   (3.18)

This kinetic equation describes an exponential relaxation towards equilibrium, with the relaxation rate given by the constant \lambda. It is a phenomenological equation that can only be justified by comparison with experiment. Indeed, this type of evolution is common for chemical reactions, and also for many condensed matter physics problems, as we will see. It is relevant for weakly non-equilibrium states and for slow relaxation processes. In principle, there are more complex kinetic equations, e.g. non-linear ones, or integral equations containing time memory. The derivation of the macroscopic kinetic equations, including the values of the relaxation rates, must be done on more fundamental, microscopic grounds.

For several variables, the linear kinetic equation looks similar,

\dot{x}_i(t) = -\lambda_{ij} x_j(t),   (3.19)

and now its exponential solutions are characterized by a set of relaxation rates given by the eigenvalues of the matrix \lambda_{ij}. More commonly, this kinetic equation is written using the thermodynamic forces introduced in Eq. (3.11). Inverting that relation, we get

\dot{x}_i(t) = -\gamma_{ij} X_j(t), \qquad \gamma_{ij} = \lambda_{im} g^{-1}_{mj}.   (3.20)

This is the canonical form of the kinetic equation of non-equilibrium thermodynamics. It says that the thermodynamic forces generate thermodynamic flows (\dot{x}); the proportionality coefficients \gamma_{ij} are called the kinetic, or transport, coefficients. Typical examples of thermodynamic forces are gradients of temperature or chemical potential, while examples of flows are the thermal current and the mass current. The corresponding kinetic coefficients are the thermal conductivity and the diffusion coefficient. The kinetic equation can be written in an equivalent form resembling a Hamiltonian equation of motion,

\dot{x}_i(t) = \gamma_{ij} \frac{1}{k} \frac{\partial S}{\partial x_j}.   (3.21)

However, this similarity is misleading: the Hamiltonian evolution and the evolution given by the kinetic equation are qualitatively different; the principal difference concerns time reversibility.

The kinetic equation, Eq. (3.20), is a purely phenomenological one: neither the kinetic coefficients nor the region of applicability of the equation is known for any particular system. Theoretical verification of this equation can only be done on a microscopic level, and this is the central task of the Boltzmann transport theory. Amazingly, one general property of the kinetic coefficients can nevertheless be established without appealing to a microscopic theory: the famous symmetry relation formulated by Onsager,

\gamma_{ij} = \gamma_{ji}.   (3.22)

This symmetry is a consequence of the microscopic time reversibility expressed by Eq. (3.17). To prove this symmetry property, let us continue our kinetic equation into the fluctuation region. Here we have to remember that, rigorously speaking, our equation is valid for the averages \langle x \rangle and \langle X \rangle. However, because of the linear form of the equation and the stationarity of the equilibrium distribution, the same equation is valid for the exact fluctuating quantities. Then we write the time correlation relation, Eq. (3.17),

\langle x_i(\tau) x_j(0) \rangle = \langle x_j(\tau) x_i(0) \rangle,   (3.23)

and apply the time derivative to both sides of the equation,

\langle \dot{x}_i(\tau) x_j(0) \rangle = \langle \dot{x}_j(\tau) x_i(0) \rangle.   (3.24)

With the help of the kinetic equation, Eq. (3.20), we eliminate the time derivatives,

\gamma_{ik} \langle X_k(\tau) x_j(0) \rangle = \gamma_{jk} \langle X_k(\tau) x_i(0) \rangle.   (3.25)

Now we put \tau = 0 and make use of the relation Eq. (3.12), which is valid for coinciding times,

\gamma_{ik} \delta_{kj} = \gamma_{jk} \delta_{ki}.   (3.26)

This proves Onsager's symmetry relation, Eq. (3.22).
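The logic of this proof can be illustrated numerically. The sketch below (Python; all matrices are arbitrary choices, and the white-noise covariance 2\gamma is chosen so that the stationary covariance equals g^{-1}, consistent with Eq. (3.10)) simulates the linear kinetic equation with noise, \dot{x}_i = -\gamma_{ij} X_j + \xi_i, and checks the cross-correlation symmetry \langle x_1(\tau) x_2(0) \rangle = \langle x_2(\tau) x_1(0) \rangle for a symmetric \gamma:

```python
import numpy as np

rng = np.random.default_rng(5)

g = np.array([[2.0, 0.5], [0.5, 1.0]])      # entropy curvature, Eq. (3.8)
gamma = np.array([[1.0, 0.3], [0.3, 0.8]])  # symmetric kinetic coefficients
A = gamma @ g                                # relaxation matrix: dx/dt = -A x + noise

dt, n_steps = 1e-3, 2_000_000
noise_amp = np.linalg.cholesky(2.0 * gamma)  # <xi xi^T> = 2*gamma*delta(t-t')

x = np.zeros(2)
traj = np.empty((n_steps, 2))
for n in range(n_steps):                     # Euler-Maruyama integration
    x += -A @ x * dt + noise_amp @ rng.normal(size=2) * np.sqrt(dt)
    traj[n] = x

lag = 500                                    # tau = lag * dt
a, b = traj[lag:], traj[:-lag]
phi_12 = np.mean(a[:, 0] * b[:, 1])          # <x_1(tau) x_2(0)>
phi_21 = np.mean(a[:, 1] * b[:, 0])          # <x_2(tau) x_1(0)>
print(phi_12, phi_21)                        # approximately equal, Eq. (3.17)
```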

Lecture 4. Stochastic processes

4.1 Definitions

Now we change the scope of the discussion and approach the problem from the microscopic side. A good example of the kind of system we have in mind is a classical gas of hard particles scattered elastically by randomly distributed fixed impurities, see Fig. 1. We neglect the scattering between the particles, and thus it suffices to consider the trajectory of a single particle. If we knew the exact positions of the impurity scatterers, and the exact initial position and momentum of our particle, we could, in principle, recover the trajectory. However, lacking such precise knowledge, the trajectory will look to us completely chaotic, and we should think about the particle motion as a random process.

Let us focus on the momentum of the particle. In each collision event the absolute value of the momentum is conserved, but the momentum direction changes randomly. If we are interested in the calculation of single-time moments, \langle p^n(t) \rangle, we only need to know the probability P(p, t) for the particle to have momentum p at the time moment t. However, if we are interested in the two-time correlation function, \langle p(t_1) p(t_2) \rangle, we need to know more about the stochastic process, namely the joint probability P(p_1, t_1; p_2, t_2).

Figure 1: Elastic scattering by pins.

A full description of a stochastic process during a certain time period, t_1 < t_2 < \ldots < t_n, is given by the joint probability

P(p_1, t_1; p_2, t_2; \ldots; p_n, t_n).   (4.1)

This is equivalent to defining a probability distribution on the space of all possible trajectories, P(\{p(t)\}). Each trajectory is called a realization of the stochastic process. Then any averages are to be calculated using the rule

\langle \ldots \rangle = \sum_{\{p(t)\}} P(\{p(t)\}) (\ldots),   (4.2)

where the summation is done over all the realizations. In the case of continuous realizations, the sum over realizations is replaced by a functional integral,

\langle \ldots \rangle = \int Dx(t)\, P(\{x(t)\}) (\ldots).   (4.3)

To make our discussion more general, we drop the momentum notation and consider some continuous random variable x(t), characterized by its probability densities. We can define reduced probability densities, e.g. the (n-1)-point probability density,

f(x_1, t_1; \ldots; x_{n-1}, t_{n-1}) = \int dx_n\, f(x_1, t_1; \ldots; x_{n-1}, t_{n-1}; x_n, t_n).   (4.4)

Further, we introduce a conditional probability density, F(x_1, t_1; \ldots; x_{n-1}, t_{n-1} \,|\, x_n, t_n), which defines the probability density of the value x_n at the moment t_n, provided the values x_1, \ldots, x_{n-1} at the moments t_1, \ldots, t_{n-1} have been realized with certainty. The formal definition of the conditional probability is

f(x_1, t_1; \ldots; x_n, t_n) = F(x_1, t_1; \ldots; x_{n-1}, t_{n-1} \,|\, x_n, t_n)\, f(x_1, t_1; \ldots; x_{n-1}, t_{n-1}).   (4.5)

From this we deduce the following useful relation,

f(x_n, t_n) = \int \ldots \int dx_1 \ldots dx_{n-1}\, F(x_1, t_1; \ldots; x_{n-1}, t_{n-1} \,|\, x_n, t_n)\, f(x_1, t_1; \ldots; x_{n-1}, t_{n-1}).   (4.6)

The conditional probability density obviously satisfies the normalization condition

\int dx_n\, F(x_1, t_1; \ldots; x_{n-1}, t_{n-1} \,|\, x_n, t_n) = 1.   (4.7)

In physics we often deal with simpler stochastic processes, called Markov processes. A Markov process is defined by the relation

F(x_1, t_1; \ldots; x_{n-1}, t_{n-1} \,|\, x_n, t_n) = F(x_{n-1}, t_{n-1} \,|\, x_n, t_n),   (4.8)

i.e. the conditional probability depends only on the realized value at the previous moment of time, not on all earlier values. Thus a Markov process is often referred to as a process without memory. Knowledge of the two-time conditional probabilities completely defines a Markov process. Indeed, suppose we know f(x_1, t_1); then we know all the joint probabilities,

f(x_1, t_1; x_2, t_2) = F(x_1, t_1 \,|\, x_2, t_2)\, f(x_1, t_1),
f(x_1, t_1; x_2, t_2; x_3, t_3) = F(x_2, t_2 \,|\, x_3, t_3)\, f(x_1, t_1; x_2, t_2) = F(x_2, t_2 \,|\, x_3, t_3)\, F(x_1, t_1 \,|\, x_2, t_2)\, f(x_1, t_1),   (4.9)

and so forth.

4.2 Master equation

Continuous Markov processes can be described by a differential equation known as a master equation. Master equations play a central role in non-equilibrium statistical physics. Consider two close moments of time, t and t + \Delta t. For many physical systems, the conditional probability to make a transition from x_1 to x_2 \neq x_1 during a small time difference is small. Suppose it is proportional to \Delta t. Then we can introduce the transition rate W(x_1, x_2, t),

F(x_1, t \,|\, x_2, t + \Delta t) = W(x_1, x_2, t)\, \Delta t, \qquad x_1 \neq x_2.   (4.10)

A small probability to leave the point x_1 implies that the probability to stay at x_1 is large. To find it, we use Eq. (4.7) for n = 2,

F(x_1, t \,|\, x_1, t + \Delta t) + \int dx_2\, W(x_1, x_2, t)\, \Delta t = 1.   (4.11)

Let us write Eq. (4.6) for n = 2,

f(x_2, t + \Delta t) = \int dx_1\, F(x_1, t \,|\, x_2, t + \Delta t)\, f(x_1, t),   (4.12)

and substitute the transition rates according to Eq. (4.10). Then we get

f(x_2, t + \Delta t) = F(x_2, t \,|\, x_2, t + \Delta t)\, f(x_2, t) + \int dx_1\, W(x_1, x_2, t)\, \Delta t\, f(x_1, t).   (4.13)

We eliminate the first term by using Eq. (4.11) with the indices 1 and 2 interchanged; then we get

f(x_2, t + \Delta t) = \left[ 1 - \int dx_1\, W(x_2, x_1, t)\, \Delta t \right] f(x_2, t) + \int dx_1\, W(x_1, x_2, t)\, \Delta t\, f(x_1, t).   (4.14)

Moving the term f(x_2, t) to the left-hand side, dividing by \Delta t, and taking the limit \Delta t \to 0, we obtain a differential equation describing the evolution of a Markovian distribution function in time - the master equation,

\frac{\partial f(x, t)}{\partial t} = -\int dx_1 \left[ W(x, x_1, t)\, f(x, t) - W(x_1, x, t)\, f(x_1, t) \right].   (4.15)

This equation has the typical form of a conservation equation: the change of the probability density is determined by the balance between an outflux - transitions from the point of interest x to all other points (first term on the right-hand side) - and an influx - transitions to the point x from all other points. A discrete version is sketched below.
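A minimal sketch (Python; a discrete three-state analogue of Eq. (4.15) with an arbitrary rate matrix is assumed) that integrates the master equation and checks that the total probability is conserved while the distribution relaxes to a stationary one:

```python
import numpy as np

# arbitrary transition rates W[i, j]: rate of jumps i -> j (diagonal unused)
W = np.array([[0.0, 1.0, 0.2],
              [0.5, 0.0, 0.7],
              [0.3, 0.4, 0.0]])

f = np.array([1.0, 0.0, 0.0])      # start with all probability in state 0
dt, n_steps = 1e-3, 20000

for _ in range(n_steps):
    influx = W.T @ f               # sum_j W[j, i] f[j]: transitions into state i
    outflux = W.sum(axis=1) * f    # f[i] * sum_j W[i, j]: transitions out of state i
    f += (influx - outflux) * dt   # discrete form of Eq. (4.15)

print(f, f.sum())                  # stationary distribution; total probability stays 1
```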

4.3 H-theorem

In general, the master equation is a complex integro-differential equation. A stationary solution exists only if the transition rates are time independent,

W(x, x_1)\, f_0(x) - W(x_1, x)\, f_0(x_1) = 0.   (4.16)

The time evolution of an arbitrary initial distribution can be roughly guessed by formally ignoring the influx term; this model is called the relaxation time approximation. The equation then takes a differential form,

\frac{\partial\, \delta f(x, t)}{\partial t} = -\frac{1}{\tau(x)}\, \delta f(x, t), \qquad \frac{1}{\tau(x)} = \int dx_1\, W(x, x_1).   (4.17)

The solution of this equation has an exponential form, resembling the relaxation of macroscopic variables according to the kinetic equation discussed earlier,

\delta f(x, t) = \delta f(x, 0)\, e^{-t/\tau(x)}.   (4.18)

The relaxation time \tau is now expressed through a microscopic characteristic of the system - the transition rate. At this point we can formulate two questions. How can we be sure that keeping the influx term will not qualitatively change the evolution? And what connection does this mathematical exercise have to thermodynamics and to relaxation towards equilibrium? A positive answer to both questions is given by the so-called H-theorem, discovered by Boltzmann. This theorem states that evolution according to the master equation gives rise to an increase of the Gibbs entropy, which achieves its maximum value for the stationary distribution. Thus this equation has a physical meaning and indeed describes the evolution of a non-equilibrium system towards thermodynamic equilibrium.

We prove the H-theorem for our initial system - an ideal gas with impurity scattering. For this physical realization, an important symmetry property of the transition rates holds (here we suppress the vector notation):

W(p, p_1) = W(p_1, p).   (4.19)

This property is called detailed balance, and it is a consequence of the microscopic time reversibility of the scattering process: the transition rate from state p to state p_1 is the same as the reversed transition rate. Additionally, due to the conservation of the particle energy during the scattering event, the transition rate must contain an energy conservation factor, W(p, p_1) = w(p, p_1)\, \delta(\epsilon - \epsilon_1). Thus the master equation can be written in the form

\frac{\partial f(p, t)}{\partial t} = -\int dp_1\, w(p, p_1)\, \delta(\epsilon - \epsilon_1) \left[ f(p, t) - f(p_1, t) \right].   (4.20)

The rate of entropy production follows directly from the definition of the Gibbs entropy,

S(t) = -k \int dp\, f(p, t) \ln f(p, t), \qquad \dot{S}(t) = -k \int dp\, \dot{f}(p, t) \left[ \ln f(p, t) + 1 \right].   (4.21)

The time derivative here is eliminated by using Eq. (4.20) (we suppress the time argument),

\dot{S} = -k \int dp\, dp_1\, W(p, p_1) \left[ f(p_1) - f(p) \right] \left[ \ln f(p) + 1 \right].   (4.22)

By interchanging the momenta p \leftrightarrow p_1 and using the symmetry of W, we transform this equation to the form

\dot{S} = -\frac{1}{2} k \int dp\, dp_1\, W(p, p_1) \left[ f(p_1) - f(p) \right] \left[ \ln f(p) - \ln f(p_1) \right].   (4.23)

The sign of the integral depends on the factor

\left[ f(p_1) - f(p) \right] \left[ \ln f(p) - \ln f(p_1) \right] = f(p_1)(1 - z) \ln z, \qquad z = \frac{f(p)}{f(p_1)}.   (4.24)

It is easy to check that (1 - z) \ln z \le 0: the function is never positive, and its maximum value, zero, is achieved at z = 1.

Thus we conclude that during evolution governed by the master equation the entropy grows, \dot{S} \ge 0, and that the equilibrium distribution satisfies the equation f_0(p) = f_0(p_1). Because of the energy conservation, this can be written in the form

f_0(\epsilon, n) = f_0(\epsilon, n_1), \qquad p = p\, n,   (4.25)

where n is the unit vector along the momentum. In other words, the equilibrium distribution is isotropic. The physical interpretation of this result is quite natural: multiple scattering by impurities erases any anisotropy of an initial distribution, while the energy dependence is not affected, because of the microscopic energy conservation. From this example we see that the particular form of an equilibrium distribution depends on the kind of scattering. For this system - an ideal gas with impurities - the true equilibrium given by the Gibbs distribution is never reached, because the particles do not exchange their energies. In this case we can only talk about a partial (local) equilibrium.

The H-theorem ensures that the system relaxes to an equilibrium in the thermodynamic sense, but the evolution seldom has a simple exponential form with a specific relaxation time. Usually it has a more complex functional dependence, which can be roughly characterized by some typical relaxation time (relaxation time approximation). There is nevertheless a physical example of an exponentially evolving system. If the impurity scatterers are spherically symmetric, the scattering does not depend on the direction of the incoming particle, w(p, p_1) = w(\epsilon, n \cdot n_1). Furthermore, the scattering can often be approximated as isotropic, i.e. w(\epsilon, n \cdot n_1) = w(\epsilon). In this case our master equation, Eq. (4.20), takes the form

\frac{\partial f(\epsilon, n, t)}{\partial t} = -\int dO_1\, \nu(\epsilon) w(\epsilon) \left[ f(\epsilon, n, t) - f(\epsilon, n_1, t) \right],   (4.26)

where we have performed the integration over the energy \epsilon_1 and introduced the density of states \nu(\epsilon) = m\sqrt{2m\epsilon}; the integration now runs over the solid angle in momentum space, dO = d\theta\, d\varphi \sin\theta. Let us consider a purely anisotropic (not necessarily small!) deviation from equilibrium, f(\epsilon, n, t) = f_0(\epsilon) + \delta f(\epsilon, n, t), \int dO\, \delta f = 0; for this deviation the kinetic equation reads

\frac{\partial\, \delta f(\epsilon, n, t)}{\partial t} = -4\pi \nu(\epsilon) w(\epsilon)\, \delta f(\epsilon, n, t).   (4.27)

It describes an exponential relaxation with the energy-dependent relaxation time \tau(\epsilon) = 1/[4\pi \nu(\epsilon) w(\epsilon)]. The important point here is that a macroscopic kinetic characteristic of the system has been related to its microscopic characteristics. A numerical illustration of the entropy growth under a master equation of this type is sketched below.
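A minimal numerical illustration of the H-theorem (Python; a discrete set of "momentum directions" at fixed energy, with a symmetric, arbitrary rate matrix standing in for Eq. (4.20)): the Gibbs entropy grows monotonically, and the distribution tends to the isotropic one:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 8                                   # discrete "directions" at fixed energy
A = rng.uniform(0.1, 1.0, size=(n, n))
W = (A + A.T) / 2                       # symmetric rates: detailed balance, Eq. (4.19)
np.fill_diagonal(W, 0.0)

f = rng.uniform(0.0, 1.0, size=n)
f /= f.sum()                            # arbitrary normalized initial distribution

dt, entropy = 1e-3, []
for _ in range(5000):
    f += dt * (W.T @ f - W.sum(axis=1) * f)    # master equation, Eq. (4.15)
    entropy.append(-(f * np.log(f)).sum())     # Gibbs entropy (k = 1)

print(np.all(np.diff(entropy) >= -1e-12))      # entropy never decreases
print(f)                                       # tends to the uniform (isotropic) dist.
```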

Lecture 5. Boltzmann equation

5.1 Binary scattering

Let us consider another important scattering process: the binary scattering of molecules in an ideal gas, see Fig. 2. As in the previous lecture, we look at this process as a chain of random transitions between two-particle states, p_1, p_2 \to p_3, p_4. Such a process is characterized by the two-particle distribution function f^{(2)}(p_1, p_2, t). Assuming the Markov property of the process, we introduce transition rates W(p_1, p_2; p_3, p_4; t). For a stationary process the rates are time independent; moreover, they possess the obvious kinematic symmetry W(p_1, p_2; p_3, p_4) = W(p_2, p_1; p_4, p_3). The time reversal symmetry of the collision process defines another symmetry - the detailed balance principle, W(p_1, p_2; p_3, p_4) = W(p_3, p_4; p_1, p_2). Finally, momentum and energy are conserved during the collision event, which is reflected by the delta-function factors in the collision rates,

W(p_1, p_2; p_3, p_4) = w(p_1, p_2; p_3, p_4)\, \delta(p_1 + p_2 - p_3 - p_4)\, \delta(\epsilon_1 + \epsilon_2 - \epsilon_3 - \epsilon_4), \qquad \epsilon = p^2/2m.   (5.1)

Using these definitions, we can write down a master equation for the two-particle distribution function,

\frac{\partial f^{(2)}(p_1, p_2, t)}{\partial t} = \int dp_3\, dp_4\, W(p_1, p_2; p_3, p_4) \left[ f^{(2)}(p_3, p_4, t) - f^{(2)}(p_1, p_2, t) \right].   (5.2)

As in the case of impurity scattering, this scattering integral has outgoing and incoming parts.

Figure 2: Binary scattering.

The two-particle distribution function is relevant for the evaluation of two-particle correlation functions. In practice, however, we commonly calculate single-particle averages. It is therefore desirable to derive a master equation for a simpler object - the single-particle distribution function. Using the relation

f(p_1, t) = \int dp_2\, f^{(2)}(p_1, p_2, t),   (5.3)

we could try to simplify Eq. (5.2); however, it is generally not possible to reduce the distribution functions in the integrand on the right-hand side, because of the essential

momentum dependence of the transition rates. To overcome this difficulty, we make a crucial assumption about the factorization of the two-particle distribution function,

f^{(2)}(p_1, p_2, t) = f(p_1, t)\, f(p_2, t).   (5.4)

Then the master equation takes the form

\frac{\partial f(p_1, t)}{\partial t} = \int dp_2\, dp_3\, dp_4\, W(p_1, p_2; p_3, p_4) \left[ f(p_3, t) f(p_4, t) - f(p_1, t) f(p_2, t) \right].   (5.5)

The motivation for such a factorization comes from the assumption of statistical independence of the particles between the collisions, since they do not interact there. However, a rigorous justification of the factorization does not exist. There are two reasons to believe that this equation correctly describes physical systems: (i) its stationary solution corresponds to the Maxwell distribution, and (ii) as in the case of impurity scattering, the binary scattering integral obeys the H-theorem.

Let us look for the stationary solution of Eq. (5.5). It corresponds to the vanishing of the square bracket in the integrand, i.e.

f(p_3) f(p_4) = f(p_1) f(p_2).   (5.6)

It is easy to check that an exponential function, f(p) = C e^{-\epsilon/kT}, satisfies this functional equation, due to the energy conservation law in Eq. (5.1). Moreover, due to the momentum conservation, one can find a more general form of the stationary solution,

f(p) = C \exp\left( -\frac{\epsilon - V \cdot p}{kT} \right).   (5.7)

This solution has a simple physical meaning: it corresponds to the Maxwell distribution in a reference frame moving with velocity V. Indeed, since binary scattering does not change the total momentum of the gas, the physical equilibrium should not depend on the choice of the reference frame, p \to p - mV, i.e. it respects Galilean invariance. Note that this is different from the impurity scattering, where the impurities fix the laboratory reference frame.

The derivation of the H-theorem is completely analogous to the calculation done in the previous section: we start with the equation for the entropy production rate, eliminate the time derivative by means of the master equation, and then transform the integrand using the symmetry properties of the transition rates,

\left[ f(p_3) f(p_4) - f(p_1) f(p_2) \right] \ln f(p_1)
\;\to\; \frac{1}{2} \left[ f(p_3) f(p_4) - f(p_1) f(p_2) \right] \left[ \ln f(p_1) + \ln f(p_2) \right]
\;\to\; \frac{1}{4} \left[ f(p_3) f(p_4) - f(p_1) f(p_2) \right] \left[ \ln\{f(p_1) f(p_2)\} - \ln\{f(p_3) f(p_4)\} \right].

This factor is never positive, and it reaches its maximum value, zero, when the distribution is given by Eq. (5.7). A numerical illustration of the relaxation towards the Maxwell distribution is sketched below.
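A minimal sketch of this relaxation (Python; equal-mass elastic collisions are modeled by redirecting the relative velocity of a randomly chosen pair into a uniformly random direction, which conserves both momentum and energy; all parameters are illustrative): starting from a strongly non-Maxwellian distribution with all speeds equal, repeated binary collisions drive each velocity component towards a Gaussian, i.e. towards Eq. (5.7) with V = 0. The kurtosis \langle v_x^4 \rangle / \langle v_x^2 \rangle^2 relaxes from 1.8 to the Gaussian value 3:

```python
import numpy as np

rng = np.random.default_rng(7)

N = 100_000
# strongly non-Maxwellian start: all speeds equal, directions random
v = rng.normal(size=(N, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
v -= v.mean(axis=0)                       # zero total momentum

def random_directions(n):
    u = rng.normal(size=(n, 3))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

for sweep in range(20):
    # pair up all particles at random and collide each pair elastically
    idx = rng.permutation(N)
    a, b = idx[:N // 2], idx[N // 2:]
    v_cm = 0.5 * (v[a] + v[b])                               # conserved
    g = np.linalg.norm(v[a] - v[b], axis=1, keepdims=True)   # relative speed, conserved
    n_hat = random_directions(N // 2)     # new random direction of relative velocity
    v[a] = v_cm + 0.5 * g * n_hat
    v[b] = v_cm - 0.5 * g * n_hat

vx = v[:, 0]
print(np.mean(vx**4) / np.mean(vx**2)**2)   # approaches 3 (Gaussian component)
```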

It is important to stress that the master equation for the binary gas gives an independent, and even more general, argument in favor of the Maxwell distribution as the description of equilibrium: it tells us that an arbitrary non-equilibrium distribution will evolve towards the unique form that is the stationary point of this equation. If we did not know anything about equilibrium statistical ensembles, we could derive the Maxwell distribution from the master equation.

5.2 Boltzmann equation

So far we have described the evolution of physical systems in terms of completely stochastic processes. However, this is not entirely true. During the periods of time between the collisions, the particle motion is completely deterministic. Moreover, the important case of spatial inhomogeneity was ignored. We repair this drawback by using the following, still phenomenological, argument. Deterministic particle dynamics in the absence of random scattering is described by the Liouville equation,

\frac{df(r, p, t)}{dt} = \frac{\partial f(r, p, t)}{\partial t} + v \cdot \frac{\partial f(r, p, t)}{\partial r} - \frac{dU(r)}{dr} \cdot \frac{\partial f(r, p, t)}{\partial p} = 0,   (5.8)

i.e. the full time derivative of the distribution function is zero. Scattering violates this identity, bringing additional channels for the time evolution of the distribution,

\frac{df(r, p, t)}{dt} = \left( \frac{df(r, p, t)}{dt} \right)_{coll} \equiv I_{coll}(\{f\}).   (5.9)

The right-hand side of this equation is given by a master equation of the form of Eq. (4.20), or of Eq. (5.5), or both, depending on the kind of scattering. This combination of the Liouville equation, describing the deterministic evolution (often called the convection or drift term), and the master equation, describing the scattering, constitutes the Boltzmann equation,

\frac{\partial f(r, p, t)}{\partial t} + v \cdot \frac{\partial f(r, p, t)}{\partial r} - \frac{dU(r)}{dr} \cdot \frac{\partial f(r, p, t)}{\partial p} = \sum_a I^{(a)}_{coll}(\{f\}).   (5.10)

The sum includes all scattering mechanisms of importance.

At this stage we may ask the following question: what is the meaning of combining the deterministic dynamical evolution governed by the Liouville equation with the random evolution governed by the master equation? For which object is such a combined equation written? Indeed, the dynamical evolution of the exact distribution function conserves the phase volume occupied by the system, according to the Liouville theorem. The master equation evolution, on the contrary, leads to an increasing phase volume, in accord with the growing entropy, eventually producing the microcanonical distribution evenly covering the whole surface \Gamma_E, see Fig. 3. The answer to this important conceptual question was given by P. Ehrenfest in the 1920s, who argued that an initial phase region occupied by the system develops in time into a region with an extremely complicated, irregular form of constant volume, whose points come arbitrarily close to any point of the outside region.

Thus, slightly washing out the exact distribution (coarse graining) leads to a phase region with a smooth shape and a larger volume, eventually covering the whole available phase space. A nice illustration of this situation is given by a fine dye powder diluted in water: although the total volume of the dye particles is constant and small compared to the water volume, we do not distinguish the individual particles and see a homogeneously colored liquid.

Furthermore, Ehrenfest argued that the time-reversed evolution of the exact distribution function does not contradict the irreversible evolution of the coarse-grained distribution: if we managed to select the complex-shaped phase region identical to the one developed by the system during the forward time evolution, and reversed the time, we would recover the initial smooth phase region. However, the probability of this is negligibly small: any little error in our selection leads to an even more complex region with a larger coarse-grained volume. Another interesting interpretation of the entropy growth is related to information: given the accuracy of our measurement (the shape of the coarse-grained phase volume), we increasingly lose information about the ever more complex shape of the exact distribution. This information loss is reflected by the growing entropy.

Figure 3: Upper: unstable particle trajectories - the distance between them grows exponentially with time. Lower: exact dynamical evolution of an initial distribution vs the coarse-grained distribution; the initial phase region develops a complex shape due to the unstable particle trajectories.

5.3 Hydrodynamic equations

Here we consider a first application of the Boltzmann equation: the equations of hydrodynamics. These equations are an approximate form of the Boltzmann equation, applied to a certain class of non-equilibrium macroscopic processes. The Boltzmann equation is a very complicated equation that is very difficult to solve, even numerically. However, several classes of problems can be identified for which the Boltzmann equation can be consistently

approximated by simpler, macroscopic equations. The purpose of this discussion, besides demonstrating the astonishing fact that hydrodynamics is just a consequence of the Boltzmann equation, is to illustrate such approximation methods.

We start by noticing that the integration of the collision integral I_{coll}(f) in Eq. (5.5) over p_1 gives zero,

\int dp_1\, I_{coll}(f) = \int dp_1\, dp_2\, dp_3\, dp_4\, W(p_1, p_2; p_3, p_4) \left[ f(p_3, t) f(p_4, t) - f(p_1, t) f(p_2, t) \right] = 0.   (5.11)

This can be proven by re-labelling p_1, p_2 \leftrightarrow p_3, p_4 in, for instance, the outgoing (second) term and using the detailed balance relation. Furthermore, the conservation of microscopic momentum and energy in binary scattering, Eq. (5.1), gives rise to further constraints on the collision integral. Indeed, let us consider the quantity

\int dp_1\, p_1 I_{coll}(f) = \int dp_1\, dp_2\, dp_3\, dp_4\, p_1 W(p_1, p_2; p_3, p_4) \left[ f(p_3, t) f(p_4, t) - f(p_1, t) f(p_2, t) \right].   (5.12)

By re-labelling p_1, p_3 \leftrightarrow p_2, p_4, and then p_1, p_2 \leftrightarrow p_3, p_4, and using the symmetries of the scattering rate, we can replace p_1 \to (1/4)(p_1 + p_2 - p_3 - p_4), which equals zero by virtue of the momentum conservation. Thus we get

\int dp_1\, p_1 I_{coll}(f) = 0.   (5.13)

Similarly, we easily show that the energy conservation results in the relation

\int dp_1\, \epsilon_1 I_{coll}(f) = 0.   (5.14)

Now we consider the implications of these conservation laws for the Boltzmann equation. We multiply both sides of Eq. (5.10) successively by 1, p, and \epsilon, and integrate over the momentum. This cancels the right-hand-side terms, and for the first case we get

\frac{\partial}{\partial t} \int dp\, f(r, p, t) + \frac{\partial}{\partial r} \cdot \int dp\, v f(r, p, t) = 0.   (5.15)

The integral of the last term of the Boltzmann equation is zero, because the integrand is a full derivative with respect to the momentum. This equation can be rewritten in a more familiar form by introducing a new normalization of the distribution function,

\int dp\, f(r, p, t) = n(r, t),   (5.16)

where n(r, t) is the particle density. We also introduce the particle current density vector,

j(r, t) = \int dp\, v f(r, p, t),   (5.17)

and then Eq. (5.15) takes the form of the continuity equation for the particle density,

\frac{\partial n(r, t)}{\partial t} + \mathrm{div}\, j(r, t) = 0.   (5.18)

Multiplication of the Boltzmann equation by p and integration over the momentum gives

\frac{\partial j_\alpha(r, t)}{\partial t} + \frac{\partial P_{\alpha\beta}(r, t)}{\partial r_\beta} = \frac{1}{m} F_\alpha(r, t)\, n(r, t).   (5.19)

Here we introduced the index \alpha = x, y, z for the vector components, and defined

P_{\alpha\beta}(r, t) = \int dp\, v_\alpha v_\beta f(r, p, t).   (5.20)

The last term in the Boltzmann equation was integrated by parts, taking into account the relation \partial p_\alpha / \partial p_\beta = \delta_{\alpha\beta} (the components of the momentum vector are independent variables). Equation (5.19) can be interpreted as a continuity equation for the current density, with the right-hand-side term describing the driving effect of an external force. Similarly, we get the continuity equation for the energy density,

\frac{\partial u(r, t)}{\partial t} + \mathrm{div}\, j_E(r, t) = F(r, t) \cdot j(r, t).   (5.21)

Here u is the internal energy density, and j_E is the energy current density,

u(r, t) = \int dp\, \epsilon(p) f(r, p, t), \qquad j_E(r, t) = \int dp\, v \epsilon(p) f(r, p, t).   (5.22)

then Eq. (5.15) takes the form of the continuity equation for the particle density, n( r, t) t + div j( r, t) = 0. (5.18) Multiplication of the Boltzmann equation by p and integration over momentum gives, j α ( r, t) t + P αβ( r, t) r β = 1 m F α( r, t)n( r, t). (5.19) Here we introduced index α = x, y, z for the vector components, and defined P αβ ( r, t) = dp v α v β f( r, p, t). (5.20) The last term in the Boltzmann equation was integrated by parts, taking into account relation p α / p β = δ αβ (since the components of the momentum vector are independent variables). Equation (5.19) can be interpreted as a continuity equation for the current density, the rhs term describing a driving effect of an external force. Similarly, we get the continuity equation for the energy density, u( r, t) t + div j E ( r, t) = F ( r, t) j( r, t). (5.21) Here u is the internal energy density, and j E is the energy current density, u( r, t) = dp ɛ( p)f( r, p, t), j E ( r, t) = dp vɛ( p)f( r, p, t) (5.22) 5.4 Local equilibrium The conservation equations for macroscopic quantities derived in the previous section have a general character. Now we specify them for a certain choice of the distribution function describing a local equilibrium, f (0) = C( r, t) exp ɛ( p) p V ( r, t) kt ( r, t). (5.23) This function resembles the equilibrium Boltzmann distribution in Eq. (5.7), but with non-homogeneous in time and space coefficients. This function describes approximate solution of the Boltzmann equation for the situation when the inhomogeneities are small on the time scale of the relaxation time and the spatial scale of the mean free path. Namely, dt dt T/τ, T T/vτ, (5.24) 25

and similarly for C and V. If this is the case, then the left-hand side of the Boltzmann equation is small compared to the collision integral and can be neglected,

0 \approx I_{coll}(f),   (5.25)

and the function in Eq. (5.23) solves this equation.

To evaluate the conservation equations in the local equilibrium, we notice that Eq. (5.23) can be rewritten in the equivalent form

f^{(0)} = \tilde{C}(r, t) \exp\left( -\frac{m (v - V(r, t))^2}{2 k T(r, t)} \right)   (5.26)

(the factor e^{m V^2 / 2kT} has been absorbed into the normalization constant). This function is isotropic with respect to the variable v - V. Generally, for isotropic functions the averages of vector components obey

\int dq\, q_\alpha F(q^2) = 0, \qquad \int dq\, q_\alpha q_\beta F(q^2) = Q \delta_{\alpha\beta}.   (5.27)

This general property allows us to write the current density in the form

j = \int dp \left[ (v - V) + V \right] f^{(0)} = n V,   (5.28)

and also to simplify the tensor of Eq. (5.20), P_{\alpha\beta} = (P/m) \delta_{\alpha\beta} + n V_\alpha V_\beta, where P = nkT is the pressure. With these simplifications, Eqs. (5.15) and (5.19) take, after some algebra, the form

\frac{\partial n}{\partial t} + \mathrm{div}(n V) = 0, \qquad \frac{\partial V}{\partial t} + (V \cdot \nabla) V + \frac{1}{mn} \nabla P = \frac{1}{m} F.   (5.29)

These are the hydrodynamic equations for an ideal liquid, without any dissipation. Note that these equations contain only macroscopic quantities; the microscopic distribution function has been left behind the scenes. Equation (5.21) can also be transformed into a macroscopic equation for the temperature dynamics, by using the relation m \langle (v - V)^2 \rangle = 3kT and similar algebraic transformations.

Of course, the local equilibrium distribution function is not an exact solution of the Boltzmann equation. There are non-equilibrium corrections, proportional to the (small) external forces and to the temperature and velocity gradients, which generate dissipative contributions to the flows, leading to thermal resistance, viscosity, etc. We will consider some examples of such dissipative effects in the next lectures. A consistent mathematical procedure for constructing the dissipative terms in the hydrodynamic equations was devised by Enskog in 1917. This procedure leads, in particular, to the Navier-Stokes equation and its generalizations. A simple numerical check of the local-equilibrium moments is sketched below.
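A minimal numerical check of Eq. (5.28) and of the relation m\langle (v - V)^2 \rangle = 3kT (Python; the drift velocity and temperature are arbitrary choices): sample velocities from the local-equilibrium distribution, Eq. (5.26), and compute the moments:

```python
import numpy as np

rng = np.random.default_rng(8)

m, kT = 1.0, 0.5                    # arbitrary mass and local temperature
V = np.array([0.3, -0.1, 0.2])      # arbitrary local drift velocity
n_samples = 1_000_000

# local-equilibrium (drifting Maxwell) velocities, Eq. (5.26)
v = rng.normal(V, np.sqrt(kT / m), size=(n_samples, 3))

print(v.mean(axis=0), V)                              # <v> = V, hence j = nV, Eq. (5.28)
print(m * ((v - V)**2).sum(axis=1).mean(), 3 * kT)    # m<(v - V)^2> = 3kT
```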

Lecture 6. Transport theory and kinetic coefficients

The goal of transport theory is the calculation of the non-equilibrium macroscopic currents induced by external fields, spatial gradients, and other thermodynamic forces, and the calculation of the corresponding kinetic coefficients. Usually the currents are evaluated in a linear approximation with respect to the small deviation from equilibrium, and the theory is therefore often called linear response theory. In solid state physics, typical currents of interest include the electric current of charged particles (or the mass current of neutral particles) and the heat current. The corresponding kinetic coefficients are the conductivity, the thermal conductivity, and the thermoelectric coefficients.

6.1 Current definitions

The electric current density of electrons in a solid is defined as

j_e = e \langle v \rangle = e \int d^3p\, v f(r, p, t),   (6.1)

where the distribution function is normalized to the electron density,

n = \frac{N}{V} = \int d^3p\, f(r, p, t).   (6.2)

The heat current density is defined as the difference between the energy flow density,

j_E = \int d^3p\, v \epsilon f(r, p, t),   (6.3)

and the energy transferred by the particle flow, \mu \langle v \rangle,

j_q = \int d^3p\, (\epsilon - \mu)\, v f(r, p, t).   (6.4)

6.2 Linearized BE

Consider the Boltzmann equation, Eq. (5.10), with the collision integral corresponding to electron-electron scattering and the external force corresponding to a static electric field, described by the potential \phi,

\frac{\partial f(r, p, t)}{\partial t} + v \cdot \frac{\partial f(r, p, t)}{\partial r} + e \frac{\partial \phi}{\partial r} \cdot \frac{\partial f(r, p, t)}{\partial p} = I_{coll}(\{f\}).   (6.5)

The equilibrium solution of this equation corresponds to (i) a stationary state, (ii) the absence of spatial gradients, and (iii) the absence of external fields; it reads

f^{(0)}(p) = \exp\frac{-\epsilon(p) + \mu + p \cdot V}{kT}.   (6.6)

In the absence of macroscopic current flows, V must be zero, and the equilibrium distribution function is isotropic and depends only on the energy, f^{(0)}(\epsilon).