Liouville Equation

In this section we will build a bridge from Classical Mechanics to Statistical Physics. The bridge is the Liouville equation.

We start with the Hamiltonian formalism of Classical Mechanics, where the state of a system with m degrees of freedom is described by m pairs of conjugate variables called (generalized) coordinates and momenta, {q_s, p_s}, s = 1, 2, ..., m. The equations of motion are generated by the Hamiltonian function H({q_s, p_s}) according to the rule

\dot{q}_s = \frac{\partial H}{\partial p_s},   (1)

\dot{p}_s = -\frac{\partial H}{\partial q_s}.   (2)

For example, if we have N three-dimensional particles of mass M interacting with each other via a pair potential U, and also interacting with some external potential V, then the Hamiltonian for this system reads

H = \sum_{j=1}^{N} \frac{p_j^2}{2M} + \sum_{j=1}^{N} V(r_j) + \sum_{i<j} U(r_i - r_j),   (3)

where r_j and p_j are the radius vector and momentum of the j-th particle, respectively. In this example, m = 3N: each component of each radius vector represents a separate degree of freedom.

The following property of Eqs. (1)-(2) will be crucial for us. If we need to describe the time evolution of some function A({q_s, p_s}) due to the evolution of the coordinates and momenta, then the relation

\dot{A} = \{H, A\}   (4)

holds, where the symbol on the r.h.s. is a shorthand notation, called the Poisson bracket, for the expression

\{H, A\} = \sum_s \left( \frac{\partial H}{\partial p_s}\frac{\partial A}{\partial q_s} - \frac{\partial H}{\partial q_s}\frac{\partial A}{\partial p_s} \right).   (5)

[The proof is straightforward: apply the chain rule to dA({q_s(t), p_s(t)})/dt and then use Eqs. (1)-(2) for \dot{q}_s and \dot{p}_s.]
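As a quick sanity check on Eqs. (1), (2), (4) and (5), here is a minimal symbolic sketch (not part of the original notes; it uses sympy, and the one-dimensional harmonic oscillator is an assumed example Hamiltonian) verifying that the Poisson bracket of H with q and p reproduces the equations of motion, and that {H, H} = 0:

import sympy as sp

q, p, mass, k = sp.symbols('q p mass k', positive=True)

def poisson(H, A):
    # Poisson bracket {H, A} of Eq. (5), written for a single degree of freedom
    return sp.diff(H, p) * sp.diff(A, q) - sp.diff(H, q) * sp.diff(A, p)

# Example Hamiltonian (an assumption, for illustration only): 1D harmonic oscillator
H = p**2 / (2 * mass) + k * q**2 / 2

print(poisson(H, q))               # p/mass  = q_dot, i.e. Eq. (1)
print(poisson(H, p))               # -k*q    = p_dot, i.e. Eq. (2)
print(sp.simplify(poisson(H, H)))  # 0, so the energy is a constant of motion

Any other H(q, p) could be substituted; the bracket with q and p always returns the right-hand sides of the equations of motion.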

Hence, any quantity A({q_s, p_s}) is a constant of motion if, and only if, its Poisson bracket with the Hamiltonian is zero. In particular, the Hamiltonian itself is a constant of motion, since {H, H} = 0, and this is nothing else than the conservation of energy, because the physical meaning of the Hamiltonian function is the energy expressed in terms of coordinates and momenta.

Definition: The phase space is the 2m-dimensional space of points, or, equivalently, vectors of the form

X = (q_1, q_2, \ldots, q_m, p_1, p_2, \ldots, p_m).   (6)

Each point/vector in the phase space represents a state of the mechanical system. If we know X at some time moment, say t = 0, then the further evolution of X, the trajectory X(t) in the phase space, is unambiguously given by Eqs. (1)-(2), since these are first-order differential equations with respect to the vector function X(t). (For the same reason, different trajectories cannot intersect!)

The phase space is convenient for the statistical description of a mechanical system. Suppose that the initial state of a system is known only with a certain finite accuracy. This means that we actually know only the probability density W_0(X) of finding the point X somewhere in the phase space. If the initial condition is specified in terms of a probability density, then the subsequent evolution should also be described probabilistically; that is, we have to work with a distribution W(X, t), which should be somehow related to the initial condition W(X, 0) = W_0(X). Our goal is to establish this relation.

We introduce the notion of a statistical ensemble. Instead of dealing with the probability density, we will work with a quantity which is proportional to it and is much more transparent. Namely, we simultaneously take some large number N_ens of identical and independent systems distributed in accordance with W(X, t). We call this set of systems a statistical ensemble. The j-th member of the ensemble is represented by its point X_j in the phase space. The crucial observation is that the quantity N_ens W(X, t) gives the concentration of the points {X_j}. Hence, to find the evolution of W we just need to describe the evolution of the concentration of the points X_j, which is intuitively easier, since each X_j obeys the Hamiltonian equations of motion.

A toy model. To get used to the ensemble description, and also to obtain some important insights, consider the following dynamical model with just one degree of freedom:

H = \frac{1}{4}\left(p^2 + q^2\right)^2.   (7)

The equations of motion are

\dot{q} = (p^2 + q^2)\, p,   (8)

\dot{p} = -(p^2 + q^2)\, q.   (9)

The quantity

\omega = p^2 + q^2   (10)

is a constant of motion since, up to a numeric factor, it is the square root of the energy. We thus have a linear system of equations which is easily solved:

\dot{q} = \omega p,   (11)

\dot{p} = -\omega q,   (12)

q(t) = q_0 \cos\omega t + p_0 \sin\omega t,   (13)

p(t) = p_0 \cos\omega t - q_0 \sin\omega t,   (14)

where q_0 \equiv q(0), p_0 \equiv p(0), and \omega = p_0^2 + q_0^2. We see that our system is a non-linear harmonic oscillator: it performs harmonic oscillations, but, in contrast to a linear harmonic oscillator, the frequency of the oscillations is a function of the energy.

Now we take N_ens = 1000 replicas of our system and uniformly distribute them within the square 0.75 ≤ q ≤ 1.25, -0.25 ≤ p ≤ 0.25 of the two-dimensional phase space. Then we apply the equations of motion (13)-(14) to each point and trace the evolution. Some characteristic snapshots are presented in Fig. 1. In accordance with the equations of motion, each point rotates along the corresponding circle of radius \sqrt{p_0^2 + q_0^2}. Since our oscillators are non-linear, points with larger radii rotate faster, and this leads to the formation of a spiral structure. The number of spiral windings increases with time.
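The snapshots of Fig. 1 are easy to reproduce numerically. Below is a minimal Python sketch (the ensemble size, the initial square, and the snapshot times are assumed parameters matching the description above; the plotting details are arbitrary) that propagates each ensemble point with the exact solution (13)-(14):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N_ens = 1000
q0 = rng.uniform(0.75, 1.25, N_ens)            # initial square in phase space
p0 = rng.uniform(-0.25, 0.25, N_ens)
omega = q0**2 + p0**2                          # constant of motion, Eq. (10)

fig, axes = plt.subplots(1, 4, figsize=(14, 3.5))
for ax, t in zip(axes, [0, 10, 100, 1000]):    # snapshot times (illustrative)
    q = q0 * np.cos(omega * t) + p0 * np.sin(omega * t)   # Eq. (13)
    p = p0 * np.cos(omega * t) - q0 * np.sin(omega * t)   # Eq. (14)
    ax.plot(q, p, '.', markersize=1)
    ax.set(title=f't = {t}', xlabel='q', ylabel='p', aspect='equal')
plt.tight_layout()
plt.show()

Points with larger omega wind around faster, so the initially compact square is drawn into an ever tighter spiral, exactly as described above.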

With a fixed number of points in the ensemble, at some large enough time it becomes simply impossible to resolve the spiral structure. For all practical purposes, this means that instead of dealing with the actual distribution W(X, t), which is beyond our experimental resolution, we can work with an effective distribution W_eff(X, t) obtained by slightly smearing W(X, t). [Actually, one or another sort of smearing, either explicit or implicit, is an unavoidable ingredient of any Statistical-Mechanical description!]

In contrast to the genuine distribution W(X, t), which keeps increasing the number of spiral windings, the smeared distribution W_eff(X, t) saturates to a certain equilibrium (= time-independent) function, which perfectly describes our ensemble at large times (see the plot for t = 1000). With our equations of motion, we see that the generic structure of the equilibrium W_eff(X), no matter what the initial distribution is, is W_eff(X) = f(p^2 + q^2), with the particular form of the function f determined by the initial distribution. Indeed, with respect to an individual member of the ensemble, the evolution is a kind of roulette that randomizes the position of the corresponding phase-space point X_j along the circle of radius \sqrt{p^2 + q^2}. Below we will see how this property is generalized to any equilibrium ensemble of Hamiltonian systems.
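Before moving on, the "roulette" picture can be checked numerically with the same assumed ensemble as in the sketch above: omega = p^2 + q^2 is conserved point by point (so the radial profile, and hence f, never changes), while at large times the phase angles spread over the whole circle. A minimal sketch:

import numpy as np

rng = np.random.default_rng(0)
q0 = rng.uniform(0.75, 1.25, 1000)
p0 = rng.uniform(-0.25, 0.25, 1000)
omega = q0**2 + p0**2

t = 1000.0                                    # a "large" time (illustrative)
q = q0 * np.cos(omega * t) + p0 * np.sin(omega * t)
p = p0 * np.cos(omega * t) - q0 * np.sin(omega * t)

print(np.allclose(q**2 + p**2, omega))        # True: omega conserved point by point

theta = np.arctan2(p, q)                      # phase angle along each circle
hist, _ = np.histogram(theta, bins=8, range=(-np.pi, np.pi))
print(hist)                                   # roughly equal counts in every bin

Only the radial profile survives the coarse-graining, which is precisely the statement W_eff(X) = f(p^2 + q^2).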

Figure 1: Evolution of the ensemble of 1000 systems described by the Hamiltonian (7).

After playing with the toy model, we are ready to consider the general case. From now on we normalize the function W(X, t) to the number of ensemble members. Correspondingly, the number of points in a phase-space volume \Omega_0 at time t is given by the integral

N_{\Omega_0}(t) = \int_{\Omega_0} W(X, t)\, d\Omega,   (15)

where d\Omega = dq_1 \ldots dq_m\, dp_1 \ldots dp_m is the element of the phase-space volume and the integration is over the volume \Omega_0. To characterize the rate of variation of the number of points within the volume, we use the time derivative

\dot{N}_{\Omega_0} = \int_{\Omega_0} \frac{\partial W(X, t)}{\partial t}\, d\Omega.   (16)

By the definition of the function W(X, t), its variable X does not depend on time, so the time derivative acts only on the variable t.

There is an alternative way of calculating \dot{N}_{\Omega_0}. We may count the number of points that cross the surface of the volume \Omega_0 per unit time:

\dot{N}_{\Omega_0} = -\oint_{\text{surface of }\Omega_0} \mathbf{J} \cdot d\mathbf{s}.   (17)

Here J is the flux of the points [the number of points per unit surface (perpendicular to the velocity) per unit time]; d\mathbf{s} = \mathbf{n}\, ds, where n is the unit normal vector at a surface point and ds is the surface element. We assume that n is directed outwards and thus write the minus sign on the right-hand side of (17). In accordance with the well-known theorem of calculus (the divergence theorem), the surface integral (17) can be converted into the bulk integral

\oint_{\text{surface of }\Omega_0} \mathbf{J} \cdot d\mathbf{s} = \int_{\Omega_0} \nabla \cdot \mathbf{J}\, d\Omega,   (18)

where \nabla is the vector differential operator

\nabla = \left( \frac{\partial}{\partial q_1}, \ldots, \frac{\partial}{\partial q_m}, \frac{\partial}{\partial p_1}, \ldots, \frac{\partial}{\partial p_m} \right).   (19)

We arrive at the equality

\int_{\Omega_0} \frac{\partial W(X, t)}{\partial t}\, d\Omega = -\int_{\Omega_0} \nabla \cdot \mathbf{J}\, d\Omega.   (20)

Since Eq. (20) is true for an arbitrary \Omega_0, including an infinitesimally small one, we actually have

\frac{\partial W(X, t)}{\partial t} = -\nabla \cdot \mathbf{J}.   (21)

This is a quite general relation, known as the continuity equation. It arises in theories describing flows of conserved quantities (say, particles of fluids and gases). The dimensionality of the problem does not matter.

Now we are going to independently relate the flux J to W(X, t) and thus end up with a closed equation in terms of W(X, t). By the definition of J we have

\mathbf{J} = W(X, t)\, \dot{X},   (22)

because the flux of particles is always equal to their concentration times their velocity. In our case, the velocity \dot{X} is just a function of X following from the Hamiltonian (we utilize the equations of motion):

\dot{X} = (\dot{q}_1, \ldots, \dot{q}_m, \dot{p}_1, \ldots, \dot{p}_m) = \left( \frac{\partial H}{\partial p_1}, \ldots, \frac{\partial H}{\partial p_m},\; -\frac{\partial H}{\partial q_1}, \ldots, -\frac{\partial H}{\partial q_m} \right).   (23)

Plugging this into the continuity equation (21) and doing some algebra, we find

\nabla \cdot \mathbf{J} = \sum_s \left[ \frac{\partial}{\partial q_s}\left( W \frac{\partial H}{\partial p_s} \right) - \frac{\partial}{\partial p_s}\left( W \frac{\partial H}{\partial q_s} \right) \right] = \sum_s \left( \frac{\partial W}{\partial q_s}\frac{\partial H}{\partial p_s} - \frac{\partial W}{\partial p_s}\frac{\partial H}{\partial q_s} \right) = \{H, W\},

where the terms containing \partial^2 H / \partial q_s \partial p_s are cancelled by the terms containing -\partial^2 H / \partial p_s \partial q_s. We thus ultimately arrive at an elegant formula in terms of the previously introduced Poisson bracket:

\frac{\partial W(X, t)}{\partial t} = \{W, H\}.   (24)

This is the Liouville equation, the equation of motion for the distribution function W(X, t). Since it is a first-order differential equation with respect to time, it unambiguously defines the evolution of any given initial distribution.

While the form of the Liouville equation definitely has something in common with Eq. (4), the physical meaning of the two is radically different. On the l.h.s. of Eq. (4) we are dealing with the full derivative with respect to time, A \equiv A({q_s(t), p_s(t)}), while the variable X in Eq. (24) is essentially time-independent; it just labels a fixed point in the phase space. Note also the different sign: {W, H} = -{H, W}.

Nevertheless, the relation (4) becomes crucially important for understanding the structure of the equilibrium solutions of the Liouville equation. Indeed, for any equilibrium (= time-independent) solution W(X) we have {H, W} = 0. Thus, if we formally (the procedure has no direct physical meaning!) plug X = X(t) into W(X), where X(t) is any trajectory satisfying the equations of motion, then the result will be time-independent. That is, any equilibrium W is formally equal to some constant of motion, and vice versa! We have already seen an example of this when playing with our toy model. Now we see that this is a general theorem (known as Liouville's theorem).
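For the toy model (7) this statement can be checked symbolically: any distribution of the form W = f(p^2 + q^2), i.e. any function of the constant of motion omega, has a vanishing Poisson bracket with H and is therefore an equilibrium solution of the Liouville equation (24). A minimal sympy sketch (not part of the original notes):

import sympy as sp

q, p = sp.symbols('q p', real=True)
f = sp.Function('f')                       # arbitrary smooth function

H = sp.Rational(1, 4) * (p**2 + q**2)**2   # toy Hamiltonian, Eq. (7)
W = f(p**2 + q**2)                         # candidate equilibrium distribution

# {W, H} with the convention of Eq. (5): dW/dp * dH/dq - dW/dq * dH/dp
bracket = sp.diff(W, p) * sp.diff(H, q) - sp.diff(W, q) * sp.diff(H, p)
print(sp.simplify(bracket))                # 0, hence dW/dt = {W, H} = 0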