Onsager theory: overview


Pearu Peterson

December 18, 2006

1 Introduction

Our aim is to study matter that consists of a large number of molecules. A complete mechanical description of such a system is practically impossible, due to our inability (i) to solve the enormous system of equations that describes the motion of the molecules, and (ii) to determine the initial state of the system. In the following, a statistical description of matter is therefore introduced that resolves both the problem of system size and the irreversible character of macroscopic behaviour.

1.1 Mechanical ensemble theory

To overcome the difficulty of specifying initial conditions, Gibbs introduced the concept of an ensemble: a large collection of systems that are equivalent in a macroscopic sense. Each system is represented by a point in phase space (the space of positions and momenta of the molecules), so the ensemble as a whole is described by a distribution of points in phase space. Two systems are considered equivalent when their macroscopic properties agree within the same macroscopic specification of the ensemble. The distribution of phase points assigns probabilities to regions of phase space according to their frequency in the ensemble; the evolution of this probability cloud is governed by the Liouville equation. According to Gibbs, an estimate of any measurable property of a system is obtained by averaging that property over the ensemble. To describe a system using the Gibbs ensemble approach, two questions arise.

When is the specification of the ensemble good enough that the resulting averages give good estimates for any system in the ensemble and reliably predict its future behaviour? Answering this question requires empirical information.

For such a good specification, what is the corresponding distribution in phase space? There is no answer to this question within the confines of the mechanical theory, so the answer must be postulated.

The basic empirical guide to good ensemble specifications is given by equilibrium thermodynamics. In thermodynamics, the state of a large system (away from critical points) is determined by its entropy as a function of the extensive variables: the internal energy E, the volume V, and the number of molecules N. This is the specification of the micro-canonical ensemble. The canonical ensemble specification uses the temperature T instead of the internal energy. The limit in which N and V become infinite with N/V fixed is called the thermodynamic limit.

In mechanical ensemble theory we have to introduce postulates about the nature of the initial distribution functions, which cannot themselves be measured. For a non-equilibrium ensemble this is an especially serious problem.

1.2 Physical ensemble theory

Measurements on macroscopic systems are restricted to a small number of variables. If a given set of variables leads to a self-contained description of a large molecular system, it constitutes a contracted description of the macroscopic system. As in Gibbs ensemble theory, the question arises of which variables will provide a good contracted description of a macroscopic system, in the sense that the future behaviour of the system is well predicted from a knowledge of their initial values. The answer is given by empirical observation. For example:

- For an isothermal solution undergoing slow chemical reactions at constant pressure, the concentrations of the various reacting species give a good contracted description.

- For a fluid flow that is not too rapid, a hydrodynamic-level description involving the mass, energy, and momentum densities as functions of spatial position provides a good contracted description.
- For matter close to equilibrium, the thermodynamic-level description involving the extensive variables E, V, and N provides a good contracted description.

In the contracted description of matter at equilibrium, two systems which are at equilibrium and possess the same values of the extensive variables are identical for the purposes of most measurements. In their molecular states, i.e. their locations in phase space, however, they are not the same. To measure the differences

which appear between systems that share the same macroscopic description, molecular fluctuations are used. These fluctuations must be small. The need to invoke fluctuations demonstrates our limited ability to make measurements, and so contracted descriptions must be inherently statistical.

A physical ensemble consists of a collection of systems identically prepared with respect to the contracted description. Unlike the Gibbs ensemble, which requires a knowledge of the statistical distribution in phase space, a physical ensemble is characterised by the statistical properties of only those variables that have been chosen for the contracted description. Mathematically, physical ensembles are treated by the theory of stochastic processes.

2 Stochastic Processes

A vector-valued stochastic process is denoted by n(t) = (n_1(t), ..., n_k(t)), where each n_i(t) is a random variable for fixed t. The process is characterised by its joint distributions for all possible finite sets of times. If Prob[n(t_1) ≤ n_1, ..., n(t_m) ≤ n_m] denotes the joint probability that the variables are less than the indicated values n_1, ..., n_m, then the joint probability density of n(t_1), ..., n(t_m) is defined by

    W_m(n_1,t_1; ...; n_m,t_m) dn_1 ... dn_m = Prob[n_1 ≤ n(t_1) ≤ n_1+dn_1, ..., n_m ≤ n(t_m) ≤ n_m+dn_m].

The densities are consistent under marginalisation:

    W_{m-1}(n_1,t_1; ...; n_{m-1},t_{m-1}) = ∫ W_m(n_1,t_1; ...; n_m,t_m) dn_m.

A knowledge of all the joint probability densities provides a complete statistical description for measurements of n made at any finite number of different times. Two-time averages of scalar functions f and g are defined by

    <f(n(t)) g(n(t'))>_2 = ∫∫ f(n) g(n') W_2(n,t; n',t') dn dn'.

The fluctuation about the average is defined as δn(t) = n(t) − <n(t)>. The two-time conditional (transition) probability density is defined by

    Prob[n ≤ n(t) ≤ n+dn | n(t_0) = n_0] = P_2(n_0,t_0 | n,t) dn.
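The ensemble-based definitions above can be made concrete numerically. The sketch below is not part of the original notes: the AR(1) recursion and all parameter values are illustrative choices, used only because its joint densities are easy to sample. It generates many independent realizations of a scalar process and estimates a single-time average, a two-time average, and a conditional average by averaging over the ensemble of paths.

```python
import numpy as np

# Illustrative sketch: estimate ensemble statistics of a scalar stochastic
# process n(t) from many independently generated paths (an AR(1) recursion).
rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 50
paths = np.zeros((n_paths, n_steps + 1))
for t in range(n_steps):
    paths[:, t + 1] = 0.9 * paths[:, t] + rng.normal(size=n_paths)

# Single-time average <n(t)> and fluctuation delta n(t) = n(t) - <n(t)>
mean_t = paths.mean(axis=0)
fluct = paths - mean_t

# Two-time average <f(n(t)) g(n(t'))>_2 with f = g = identity, i.e. the
# correlation <n(t1) n(t2)>, estimated by averaging over the ensemble.
t1, t2 = 30, 40
two_time = np.mean(paths[:, t1] * paths[:, t2])

# Conditional (transition) statistics: restrict the ensemble to paths whose
# value at t1 falls in a narrow window, mimicking the condition n(t1) = n0.
window = np.abs(paths[:, t1] - 1.0) < 0.1
cond_mean = paths[window, t2].mean()  # estimates <n(t2)>_0 given n(t1) ~ 1
print(mean_t[t2], two_time, cond_mean)
```

Restricting the ensemble to paths near a prescribed value at t_1 is the finite-sample analogue of conditioning on n(t_1) = n_0, i.e. of sampling from P_2(n_0,t_1 | n,t_2).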

The joint densities factorise through the conditional ones:

    W_2(n_0,t_0; n,t) = W_1(n_0,t_0) P_2(n_0,t_0 | n,t),

    W_m(n_1,t_1; ...; n_m,t_m) = W_{m-1}(n_1,t_1; ...; n_{m-1},t_{m-1}) P_m(n_1,t_1; ...; n_{m-1},t_{m-1} | n_m,t_m).

The conditional average of a function f(n(t)) is defined by

    <f(n(t))>_0 = ∫ f(n) P_2(n_0,t_0 | n,t) dn.

2.1 Markov processes

A Markov process (a process without memory) has the property that, in a conditional ensemble, conditions prior to the most recent one do not affect the subsequent probability distribution:

    P_m(n_1,t_1; ...; n_{m-1},t_{m-1} | n_m,t_m) = P_2(n_{m-1},t_{m-1} | n_m,t_m).

A Markov process is therefore completely defined by W_1(n,t) and P_2(n_0,t_0 | n,t).

2.2 Stationary processes

A stationary stochastic process is one that is invariant under translation of all times by the same interval τ: n(t_1+τ), ..., n(t_m+τ) are statistically indistinguishable from the untranslated n(t_1), ..., n(t_m) for all m and τ. For a stationary process all single-time averages are constants, the single-time probability density W_1 is independent of time, and the joint probability densities depend only on pairwise time differences:

    W_2(n_1,t_1; n_2,t_2) = W_2(n_1,0; n_2,t_2−t_1),    P_2(n_0,t_0 | n,t) = P_2(n_0,0 | n,t−t_0).

Ensembles at thermodynamic equilibrium or at a non-equilibrium steady state, for example, are stationary.

2.3 Gaussian processes

A stochastic process is called Gaussian if all its joint and conditional probability densities have the Gaussian form

    G(n) = exp(−(n−n̄)^T σ^{-1} (n−n̄)/2) / sqrt((2π)^m det σ),

where σ is the covariance matrix (nonsingular, symmetric, positive definite) and n̄ is the mean. For a stationary Gaussian process, the lowest-order joint density is time independent, and the mean and covariance are constant. For conditional Gaussian ensembles, the mean and covariance of n(t) are time dependent.

2.4 Ornstein-Uhlenbeck processes

An Ornstein-Uhlenbeck process is generated by a linear stochastic differential equation that includes a white-noise term, i.e. a Langevin-like equation

    da/dt = Ha + f,

where H is a negative semi-definite matrix, <a> = 0, and f is white noise with strength matrix

    γ = −(Hσ + σH^T),    σ = lim_{t→∞} <δa(t) δa^T(t)>_0.

3 The Onsager picture

Lars Onsager developed a consistent statistical theory of irreversible processes that relates measurable quantities (transport coefficients, thermodynamic derivatives) to the results of experimental measurements. The theory is linear and its validity is limited to a neighbourhood of equilibrium. Nevertheless, the basic structure of the Onsager theory contains many features that are needed in more general situations; indeed, the Onsager theory is the near-equilibrium special case of the general theory described in this course.

The Onsager theory describes the properties of equilibrium ensembles. One must specify some subset of measurable extensive variables, collected in n, that characterise the system. The aim is to characterise the dynamics of n in the equilibrium ensemble, that is, how an initial value n_0 evolves to some possible value of n at a later time t. Since an equilibrium ensemble is stationary, these transitions depend only on the elapsed time t. According to the Onsager principle, the rates of change of the conditional averages <n(t)>_0 are linearly related to the average deviations of the conjugate intensive variables from their equilibrium values, where the coupling matrix is symmetric and non-negative definite.
This is the linear law for irreversible processes in an equilibrium ensemble. To describe the linear relaxation of the averages to their equilibrium values conveniently, the deviations of n from the equilibrium values n_e,

    a = n − n_e,

will be used. The conditional averages of a are denoted by ā = <a>_0. The variable thermodynamically conjugate to n_i is

    F_i = ∂S/∂n_i

and is called a thermodynamic force. The variable conjugate to a_i is then

    X_i = ∂S/∂n_i − (∂S/∂n_i)_e.

The Onsager principle now reads

    J_i ≡ dā_i/dt = Σ_j L_ij X_j,

where J_i is the average thermodynamic flux and the matrix L satisfies the following constraints.

The coupling matrix L must satisfy the Onsager-Casimir reciprocal symmetry relations

    L_ij(B) = ǫ_i ǫ_j L_ji(−B),

where ǫ_i = −1 when n_i is an odd variable and +1 otherwise. Recall that a variable n_i is called odd when its sign changes under time reversal; in general, time reversal changes the sign of time, of velocities, and of the magnetic field. The reciprocal relations follow from microscopic reversibility, which itself follows from two general symmetry properties of the two-time correlation function

    C(τ) ≡ <a_0 a^T(τ)>_1 = E^{-1} exp(H^T τ),

where W_1(a) is the Gaussian density with zero mean and covariance E^{-1}, and E = −S/k_B.

The coupling matrix L must be positive semi-definite. This property is closely related to the Second Law of thermodynamics, from which it follows that the entropy must be maximal at the equilibrium state. If the system is displaced from equilibrium by a small amount a, the entropy has the form

    S(a) = S(0) + (1/2) Σ_{i,j} S_ij a_i a_j,

where S_ij = (∂²S/∂n_i∂n_j)_e is a negative semi-definite matrix. On the other hand, for small a we have

    X_i = ∂S/∂n_i − (∂S/∂n_i)_e ≈ Σ_j S_ij a_j

and therefore

    S(a) = S(0) + (1/2) Σ_i X_i a_i.

Since the Second Law holds only on the average,

    dS(a)/dt = Σ_i X_i dā_i/dt = Σ_{i,j} L_ij X_i X_j ≡ Φ ≥ 0,

which implies that L is a positive semi-definite matrix. The quadratic function Φ is called the Rayleigh-Onsager dissipation function. Only the symmetric part of L, denoted L^s, causes dissipation.

The Onsager principle can be written in the form

    dā/dt = Hā,    H = LS,

whose solution reads

    ā(a_0,t) = exp(Ht) a_0,

where a_0 is the vector of initial deviations from equilibrium in the conditional ensemble. A given system in the ensemble will of course not exhibit this exponential relaxation exactly, because of the molecular fluctuations

    δa(t) = a(t) − ā(a_0,t).

According to Onsager, these small fluctuations are similar to an impressed macroscopic deviation, except that they appear spontaneously, so they should satisfy a dynamical equation similar to that of ā. This is Onsager's regression hypothesis for fluctuations:

    dδa/dt = Hδa + f,

where f is a random term. From this follows a Langevin-type equation for a(t):

    da/dt = Ha + f.

Hence a(t) must be a stationary, Gaussian, Markov process, and the regression hypothesis is equivalent to the assumption that fluctuations in the equilibrium ensemble are Gaussian and Markovian. It also means that the random term f is a multivariate white noise and that a is an Ornstein-Uhlenbeck process. For white noise we have

    <f(t)> = 0,    <f(t') f^T(t)> = γ δ(t−t'),

where γ satisfies the fluctuation-dissipation theorem

    −(Hσ + σH^T) = γ,

with σ the equilibrium covariance matrix, σ = <a a^T>_e = E^{-1} = −k_B S^{-1}. Since H = LS,

    γ = −(LS(−k_B S^{-1}) + (−k_B S^{-1}) S^T L^T) = k_B (L + L^T) = 2 k_B L^s.

So the strength of the random terms is proportional to the symmetric part of L.
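This chain of results can be checked by direct simulation. The sketch below is not from the notes: the matrices, step size, and ensemble size are arbitrary illustrative choices (with k_B = 1). It integrates the Langevin equation da/dt = Ha + f with H = LS and noise strength γ = 2k_B L^s by the Euler-Maruyama method, and compares the empirical stationary covariance with the prediction σ = −k_B S^{-1}.

```python
import numpy as np

# Illustrative sketch: integrate da/dt = H a + f with H = L S and white-noise
# strength gamma = 2 k_B L^s, then check that the empirical stationary
# covariance approaches sigma = -k_B S^{-1}, as the fluctuation-dissipation
# theorem requires. Matrix values are arbitrary admissible choices:
# L symmetric positive definite, S symmetric negative definite, k_B = 1.
rng = np.random.default_rng(1)
k_B = 1.0
L = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric, positive definite
S = np.array([[-1.0, 0.2], [0.2, -1.5]])  # symmetric, negative definite
H = L @ S
gamma = 2.0 * k_B * L                     # L is symmetric here, so L^s = L

dt, n_steps, n_paths = 1e-3, 20_000, 2_000
# Cholesky factor generating noise increments with covariance gamma * dt
noise_chol = np.linalg.cholesky(gamma * dt)
a = np.zeros((n_paths, 2))
for _ in range(n_steps):                  # Euler-Maruyama integration
    f = rng.normal(size=(n_paths, 2)) @ noise_chol.T
    a = a + (a @ H.T) * dt + f

sigma_emp = (a.T @ a) / n_paths           # empirical stationary covariance
sigma_fdt = -k_B * np.linalg.inv(S)       # prediction of the theory
print(sigma_emp)
print(sigma_fdt)
```

Each row of `a` is one system of the physical ensemble; the covariance is estimated across the ensemble at a single late time, which suffices because the process is stationary.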

In summary, the Onsager theory requires choosing proper extensive variables that describe the macroscopic properties of the ensemble, finding the dynamic coupling matrix L, and determining the thermodynamic coupling matrix S from the local equilibrium entropy; it then follows that the deviations of the extensive variables from their equilibrium values are stationary, Gaussian, Markov processes. The single-time probability density of the deviations is a Gaussian centred at zero with covariance σ = −k_B S^{-1}. As functions of time, the deviations satisfy a Langevin-type equation with a white-noise random force. The strength of the random force is set by the symmetric part of the dynamic coupling matrix L, which is also responsible for the dissipation function being non-negative.
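The summary's chain of definitions can be recapitulated as a short numerical check. This is a sketch with arbitrary admissible matrices (not part of the original notes; k_B = 1): given L and S, it forms H, σ, and γ, verifies the fluctuation-dissipation identity, and confirms that the dissipation function is non-negative.

```python
import numpy as np

# Sketch of the summary's logical chain for arbitrary admissible matrices
# (illustrative values, k_B = 1): from L and S, form H = L S, the stationary
# covariance sigma = -k_B S^{-1}, and the noise strength gamma = 2 k_B L^s,
# then verify the identities stated in the text.
k_B = 1.0
L = np.array([[1.5, 0.3], [0.3, 0.8]])    # dynamic coupling, symmetric PD
S = np.array([[-2.0, 0.4], [0.4, -1.0]])  # entropy curvature, negative definite
H = L @ S
L_s = 0.5 * (L + L.T)                     # symmetric (dissipative) part of L
sigma = -k_B * np.linalg.inv(S)
gamma = 2.0 * k_B * L_s

# Fluctuation-dissipation theorem: -(H sigma + sigma H^T) = gamma
fdt_lhs = -(H @ sigma + sigma @ H.T)
print(np.allclose(fdt_lhs, gamma))        # True

# Dissipation function Phi = sum_ij L_ij X_i X_j >= 0 for any forces X
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 2))
Phi = np.einsum('ij,ni,nj->n', L_s, X, X)
print(bool(np.all(Phi >= 0)))             # True
```

The first check holds identically for any symmetric L and nonsingular symmetric S, since Hσ = LS(−k_B S^{-1}) = −k_B L; the second holds because L_s is positive definite here.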