
Physics 7b: Statistical Mechanics

Brownian Motion

Brownian motion is the motion of a particle due to the buffeting by the molecules in a gas or liquid. The particle must be small enough that the effects of the discrete nature of matter are apparent, but large compared to the molecular scale (pollen in the early experiments, various plastic beads these days). It is a convenient example to display the residual effects of molecular noise on macroscopic degrees of freedom. I will use this example to investigate the type of physics encountered, and the tools used to treat the fluctuations.

Random Walk

The first observation of Brownian motion is that the particle under the microscope appears to perform a random walk, and it is first useful to study this aspect in its simplest form. Let's consider first a one dimensional random walk, consisting of $n$ jumps of $\pm l$ along the $x$ axis. We take $n$ to be even (the odd case is essentially the same, but differs in minor details). For the particle after $n$ jumps to be at $x = ml$ there must have been $\frac{1}{2}(n + m)$ forward jumps and $\frac{1}{2}(n - m)$ backward jumps (in any order), and $m$ must be even. The probability of arriving at $x = ml$ is therefore

$$p_n(m) = \frac{1}{2^n}\,\frac{n!}{[\frac{1}{2}(n - m)]!\,[\frac{1}{2}(n + m)]!}. \qquad (1)$$

For large $m, n$ Stirling's approximation $n! \simeq (2\pi n)^{1/2}(n/e)^n$ gives

$$p_n(m) = \sqrt{\frac{2}{\pi n}}\; e^{-m^2/2n}. \qquad (2)$$

This is a Gaussian probability centered around $m = 0$ (the most probable and mean position is the origin) and the mean square displacement $\langle m^2 \rangle = n$, or

$$\langle x^2 \rangle = n l^2. \qquad (3)$$

For large $n$ the discreteness of the displacements is unimportant compared to the root mean square distance of the walk. Transforming to a continuous variable $x$ and a probability density $p(x, t)$ using $p_n(m) = 2l\, p(x)$ (since the interval between the discrete results is $dx = 2l$) and introducing time by supposing there are $n$ jumps in time $t$,

$$p(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left(-\frac{x^2}{4 D t}\right) \qquad (4)$$

where we have written

$$\frac{n l^2}{2t} = D. \qquad (5)$$

We recognize that this is the expression for diffusion, with $p$ satisfying

$$\frac{\partial p}{\partial t} = D \frac{\partial^2 p}{\partial x^2}, \qquad p(x, t = 0) = \delta(x), \qquad (6)$$

with the diffusion constant $D$. In terms of $D$,

$$\langle x^2 \rangle = 2 D t. \qquad (7)$$
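A quick numerical check of Eqs. (3) and (7), as a minimal sketch that is not part of the original notes; the step count, step length, and ensemble size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, walkers = 1000, 1.0, 1_000_000

# The number of forward jumps is binomial(n, 1/2); with m = forward - backward,
# the endpoint is x = m*l, exactly the counting that led to Eq. (1).
forward = rng.binomial(n, 0.5, size=walkers)
x = l * (2 * forward - n)

print(x.mean())       # ~ 0: the mean position is the origin
print((x**2).mean())  # ~ n * l**2 = 2*D*t, Eqs. (3) and (7)
```

Doubling n should double the second number, which is the linear-in-time growth of Eq. (7).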

These results are readily extended to 3 dimensions, since we can consider a walk with steps $(\pm l, \pm l, \pm l)$ for example, so that the walk is the product of walks in each dimension. The mean square distance gone after $n$ steps is again $\langle r^2 \rangle = n L^2$ with $L = \sqrt{3}\, l$ the length of each step. The probability distribution $p(\vec{x}, t)$ satisfying the 3d diffusion equation is

$$p(\vec{x}, t) = \frac{1}{(4\pi D t)^{3/2}} \exp\left(-\frac{r^2}{4 D t}\right) \qquad (8)$$

with $r^2 = x^2 + y^2 + z^2$. This is simply the product of three 1d diffusion results with $D = n l^2 / 2t$ as before. The mean square distance is

$$\langle r^2 \rangle = 6 D t. \qquad (9)$$

(The results in 2d can similarly be constructed.)

The fact that the mean displacement is zero, and the mean square displacement grows linearly in time, can be derived by very simple arguments. Let's consider the two dimensional case of a random walk consisting of $n$ vectors of length $s$ but with arbitrary angles $\theta_i$ taken from a uniform probability distribution. The total displacement in the $x$ direction is

$$X = \sum_i s \cos\theta_i. \qquad (10)$$

Clearly $\langle X \rangle = 0$ since $\cos\theta_i$ is equally likely to be positive or negative. On the other hand

$$\langle X^2 \rangle = \Big\langle \Big( \sum_i s \cos\theta_i \Big)^2 \Big\rangle = s^2 \sum_i \sum_j \langle \cos\theta_i \cos\theta_j \rangle \qquad (11)$$

$$= s^2 \sum_i \langle \cos^2\theta_i \rangle = n s^2 / 2 \qquad (12)$$

where we have used the fact that $\sum_{i, j \neq i} \langle \cos\theta_i \cos\theta_j \rangle = 0$, since again each $\cos\theta_i$ is equally likely to be positive or negative. Thus the mean square distance is

$$\langle R^2 \rangle = \langle X^2 \rangle + \langle Y^2 \rangle = n s^2. \qquad (13)$$

This specific result is useful in adding complex numbers with random phases: the average amplitude is zero, and the mean square magnitude (the intensity) scales linearly with the number of vectors.
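Eq. (13) is just as easy to test numerically; a sketch with assumed parameters (n steps of length s per trial, averaged over many trials):

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, trials = 200, 1.0, 20_000

# Each trial: n steps of length s in uniformly random directions theta_i.
theta = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n))
X = (s * np.cos(theta)).sum(axis=1)
Y = (s * np.sin(theta)).sum(axis=1)

print(X.mean())              # ~ 0, Eq. (10) averages to zero
print((X**2 + Y**2).mean())  # ~ n * s**2, Eq. (13)
```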

Some general nomenclature

The position $x(t)$ in a one dimensional random walk forms a one dimensional random process: in general, a scalar function $y(t)$ for which the future data is not determined uniquely by the known initial data.

The random process is in general characterized by probability distributions $p_1, p_2, \ldots$ such that

$$p_n(y_1, t_1;\, y_2, t_2;\, \ldots;\, y_n, t_n)\, dy_1\, dy_2 \ldots dy_n \qquad (14)$$

is the probability that a single process drawn from the ensemble of processes will take on a value between $y_1$ and $y_1 + dy_1$ at $t_1$, etc. The different $p_n$ are related by

$$p_{n-1} = \int dy_j\, p_n. \qquad (15)$$

Ensemble averages are determined by integrating over the appropriate distribution, e.g. for the mean and the two point correlation function

$$\langle y(t_1) \rangle = \int y_1\, p_1(y_1, t_1)\, dy_1, \qquad (16)$$

$$\langle y(t_1) y(t_2) \rangle = \int y_1 y_2\, p_2(y_1, t_1;\, y_2, t_2)\, dy_1\, dy_2. \qquad (17)$$

Higher order correlation functions require the knowledge of higher order distribution functions. In the random walk we have just looked at $p_1$.

A stationary random process is one for which the $p_n$ depend only on time differences, or

$$p_n(y_1, t_1 + \tau;\, y_2, t_2 + \tau;\, \ldots;\, y_n, t_n + \tau) = p_n(y_1, t_1;\, y_2, t_2;\, \ldots;\, y_n, t_n). \qquad (18)$$

I have chosen to formulate the random walk as starting a particle from a particular position at time $t = 0$, so that $x(t)$ is not stationary. Alternatively we could have considered a stationary process (e.g. the field of vision of a microscope with many Brownian particles) and then calculated the conditional probability $P(x, t \mid x_1, t_1)$, which is the probability of the particle being at $x$ at time $t$ given that it was at $x_1$ at time $t_1$. Then $P(x, t \mid x_1, t_1)$ takes the diffusive form that we have calculated, and the $p_n$ all just depend on time differences ($p_1(x, t)$ is just constant, for example).

Means and correlation functions are defined with respect to the ensemble average. For a stationary random process we usually assume ergodicity, and replace the ensemble average by a time average, e.g.

$$\langle y \rangle = \overline{y(t)} = \lim_{T \to \infty} \frac{1}{T} \int_0^T y(t)\, dt. \qquad (19)$$

The probability distribution for the random walk is a Gaussian function. A Gaussian process in general is one in which all the probability distributions are Gaussian,

$$p_n(y_1, t_1;\, y_2, t_2;\, \ldots;\, y_n, t_n) = A \exp\Big[ -\sum_{j=1}^n \sum_{k=1}^n \alpha_{jk} (y_j - \bar{y})(y_k - \bar{y}) \Big] \qquad (20)$$

where $\bar{y}$ is the mean of $y$, $\alpha_{jk}$ is a positive definite matrix and $A$ is a normalization constant. For a stationary process $\bar{y}$ is time independent and $\alpha$ and $A$ depend only on time differences. Gaussian processes are important in physics because of the central limit theorem: if

$$Y = \frac{1}{N} \sum_{i=1}^N y_i \qquad (21)$$

with $y_i$ a random process or variable with arbitrary distribution but with finite mean $\langle y \rangle$ and variance $\sigma_y^2$, then for $N$ large $Y$ is a Gaussian process or variable

$$p(Y) = \frac{1}{\sqrt{2\pi}\, \sigma_Y} \exp\left[ -\frac{(Y - \bar{Y})^2}{2 \sigma_Y^2} \right] \qquad (22)$$

with $\bar{Y} = \langle y \rangle$ and $\sigma_Y = \sigma_y / \sqrt{N}$.
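A one-line experiment illustrating Eq. (22); the uniform distribution for the $y_i$ and the value of $N$ are assumptions chosen only so that the exact answer is known:

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 100, 100_000

# y_i uniform on [0, 1): mean 1/2, variance 1/12 -- far from Gaussian.
Y = rng.random(size=(trials, N)).mean(axis=1)

print(Y.mean())   # ~ 1/2 = <y>
print(Y.std(), np.sqrt(1.0 / 12.0) / np.sqrt(N))  # sigma_Y ~ sigma_y/sqrt(N)
```

A histogram of Y is already very close to the Gaussian (22) at N = 100, even though each y_i is uniform.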

The central limit theorem is why the Gaussian distribution of the random walk is independent of the details of the step (e.g. fixed length, or varying length), providing the mean is zero and the variance is finite.

Equation (6) for the time evolution of the probability distribution is actually the Fokker-Planck equation for this random process. We will return to this topic in more detail later.

Spectral description of a random process

For a conventional function $y(t)$ a convenient definition of the Fourier transform is

$$\tilde{y}(f) = \int_{-\infty}^{\infty} y(t)\, e^{-2\pi i f t}\, dt, \qquad (23a)$$

$$y(t) = \int_{-\infty}^{\infty} \tilde{y}(f)\, e^{2\pi i f t}\, df. \qquad (23b)$$

The correctness of the inverse is shown from the result

$$\int_{-\infty}^{\infty} e^{2\pi i x y}\, dx = \lim_{X \to \infty} \int_{-X}^{X} e^{2\pi i x y}\, dx = \lim_{X \to \infty} \frac{\sin 2\pi X y}{\pi y} = \delta(y). \qquad (24)$$

For a real function $y(t)$ we have $\tilde{y}^*(f) = \tilde{y}(-f)$. For a stationary random process the integral defining $\tilde{y}(f)$ diverges, so we instead define the auxiliary process

$$y_T(t) = \begin{cases} y(t) & -T/2 < t < T/2 \\ 0 & \text{otherwise} \end{cases} \qquad (25)$$

and then use the finite $\tilde{y}_T(f)$. Parseval's theorem tells us

$$\lim_{T \to \infty} \frac{1}{T} \int_{-\infty}^{\infty} [y_T(t)]^2\, dt = \lim_{T \to \infty} \frac{2}{T} \int_0^{\infty} |\tilde{y}_T(f)|^2\, df. \qquad (26)$$

Here and elsewhere we use $\tilde{y}^*(f) = \tilde{y}(-f)$ to restrict the frequency domain to positive values. With these preliminaries in mind, we define the spectral density of the random process $y(t)$ as

$$G_y(f) = \lim_{T \to \infty} \frac{2}{T} \left| \int_{-T/2}^{T/2} [y(t) - \bar{y}]\, e^{-2\pi i f t}\, dt \right|^2 \qquad (27)$$

where $\bar{y}$ is the time average over $T$.
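In practice Eq. (27) is estimated at finite $T$ with an FFT. A minimal sketch, with Gaussian white noise standing in for $y(t)$ and all parameters assumed, showing that $|\tilde{y}_T(f)|^2$ grows with the record length while the $2/T$ normalization makes $G_y(f)$ settle to a $T$-independent value:

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 1.0  # sampling interval

for T in (2**12, 2**14, 2**16):
    y = rng.standard_normal(T)           # zero-mean white noise
    Y = np.fft.rfft(y) * dt              # discrete stand-in for y_T(f)
    G = 2.0 * np.abs(Y)**2 / (T * dt)    # Eq. (27), one-sided in f
    # |Y|^2 grows ~ T; G stays near the flat value 2*dt*var(y) = 2.
    print(T, (np.abs(Y[1:])**2).mean(), G[1:].mean())
```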

Why do we use this expression? Let's suppose that the mean has been subtracted off of $y$, so $\bar{y} = 0$. The quantity inside the $|\cdots|$ is the Fourier transform of the process $y_T(t)$. How does this grow with $T$? We can estimate this by supposing the interval $T$ to be formed of $N$ subintervals of length $\tau$. The Fourier transform $\tilde{y}_T$ is then the sum of $N$ transforms $\tilde{y}_\tau$ of processes $y_\tau(t)$ defined over the intervals $\tau$. For a random process we would expect each of the $\tilde{y}_\tau$ to be of similar magnitude, but with arbitrary phase, since the latter depends sensitively on the phasing of the $e^{-2\pi i f t}$ with respect to the time start of the signal. Adding $N$ complex numbers with random phase gives a number of magnitude $\sqrt{N}$ and random phase. Thus the transform of $y(t) - \bar{y}$ grows as $\sqrt{T}$, and the phase varies over all values as $T$ changes. The spectral density $G_y(f)$ is constructed to be independent of $T$ and to contain all the useful information. Parseval's theorem now gives us

$$\int_0^{\infty} G_y(f)\, df = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} [y(t) - \bar{y}]^2\, dt = \sigma_y^2, \qquad (28)$$

so that the frequency integral of the spectral density is the variance of the signal.

The spectral density is directly related to the Fourier transform of the correlation function $C_y(\tau)$. Let's set the mean $\bar{y}$ to zero for simplicity. Then, using the assumption of ergodicity to replace the ensemble average by a time average, the correlation function is

$$C_y(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} dt\; y(t)\, y(t + \tau) \qquad (29)$$

$$= \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} dt\; y_T(t)\, y_T(t + \tau) \qquad (30)$$

where the small error in replacing $y(t + \tau)$ by $y_T(t + \tau)$ is unimportant in the limit. Now inserting the Fourier transforms and using $\tilde{y}^*(f) = \tilde{y}(-f)$,

$$C_y(\tau) = \lim_{T \to \infty} \frac{1}{T} \int dt \int df \int df'\; \tilde{y}_T(f)\, \tilde{y}_T(f')\, e^{2\pi i f' \tau}\, e^{2\pi i (f + f') t}. \qquad (31)$$

The $t$ integration gives $\delta(f + f')$, and using $\tilde{y}^*(f) = \tilde{y}(-f)$ gives

$$C_y(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-\infty}^{\infty} df\; |\tilde{y}_T(f)|^2\, e^{2\pi i f \tau} \qquad (32)$$

$$= \lim_{T \to \infty} \frac{2}{T} \int_0^{\infty} df\; |\tilde{y}_T(f)|^2 \cos 2\pi f \tau \qquad (33)$$

$$= \int_0^{\infty} G_y(f) \cos(2\pi f \tau)\, df. \qquad (34)$$

Thus we have the inverse pair

$$C_y(\tau) = \int_0^{\infty} G_y(f) \cos(2\pi f \tau)\, df, \qquad (35a)$$

$$G_y(f) = 4 \int_0^{\infty} C_y(\tau) \cos(2\pi f \tau)\, d\tau \qquad (35b)$$

(since $C_y$ and $G_y$ are both even functions, we have written the results as cosine transforms only involving the positive domain). These equations are known as the Wiener-Khintchine theorem.
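Both halves of the theorem can be checked numerically; a sketch in which an AR(1) sequence (an assumption, standing in for a generic stationary process) plays the role of $y(t)$:

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, r = 2**16, 1.0, 0.8

# AR(1): y_i = r*y_{i-1} + xi_i, a simple correlated stationary process.
xi = rng.standard_normal(N)
y = np.empty(N)
y[0] = xi[0]
for i in range(1, N):
    y[i] = r * y[i - 1] + xi[i]
y -= y.mean()

# Spectral density from the windowed transform, Eq. (27).
f = np.fft.rfftfreq(N, dt)
G = 2.0 * np.abs(np.fft.rfft(y) * dt)**2 / (N * dt)

# Eq. (28): the frequency integral of G_y returns the variance.
print(G.sum() * (f[1] - f[0]), y.var())

# Eq. (35a) via the FFT: transforming |y_T(f)|^2 back gives C_y(tau)
# (circular autocorrelation; for lags << N it matches the time average).
C_from_G = np.fft.irfft(np.abs(np.fft.rfft(y))**2 / N)[:4]
C_direct = [np.mean(y[: N - k] * y[k:]) for k in range(4)]
print(C_from_G, C_direct)
```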

A particularly simple spectral density is a flat one, independent of frequency. We describe such a random process as being white. The corresponding correlation function is a delta function, i.e. no correlations except for time differences tending to zero. One strength parameter $g$ is needed to specify the force

$$G_F(f) = g, \qquad (36a)$$

$$C_F(\tau) = \tfrac{1}{2} g\, \delta(\tau). \qquad (36b)$$

The Einstein Relation

Einstein showed how to relate the diffusion constant, describing the random fluctuations of the Brownian particle, to its mobility $\mu$, the systematic response to an externally applied force. Under an applied force $-dV/dx$ the drift velocity of the particle is (the definition of the mobility)

$$u_d = -\mu \frac{dV}{dx}. \qquad (37)$$

For a sphere of radius $a$ in a liquid of viscosity $\eta$ the mobility is given by the Stokes expression $\mu = (6\pi \eta a)^{-1}$, and so $\mu$ is related to the dissipation in the fluid. Consider now the thermodynamic equilibrium of a density $n(x)$ of independent Brownian particles in the potential $V(x)$. We can dynamically understand the equilibrium in terms of the cancelling of the particle currents due to diffusion and mobility,

$$D \frac{dn}{dx} + n \mu \frac{dV}{dx} = 0. \qquad (38)$$

Equilibrium thermodynamics on the other hand tells us $n(x) \propto \exp[-V(x)/kT]$. Substituting into Eq. (38) gives the Einstein identity

$$D = kT \mu. \qquad (39)$$

Note the use of equilibrium constraints to relate fluctuation quantities (the diffusion constant, which gives us $\langle x^2(t) \rangle$) and dissipation coefficients ($\mu$ or $\eta$). This is an example of a general approach known as fluctuation-dissipation theory, which we will take up again later. The fact that the fluctuations and dissipation of a Brownian particle are related should not be unexpected: both are a reflection of the molecular buffeting, the dissipation given by the net force due to the systematic component of the collisions coming from the drift of the particle relative to the equilibrium molecular velocity distribution, and the fluctuations coming from the random component.
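To attach numbers to Eq. (39): a sketch combining it with the Stokes mobility for assumed, but typical, values (a 1 µm radius bead in water at 300 K):

```python
import math

kT = 1.38e-23 * 300    # J, room temperature
eta = 1.0e-3           # Pa s, viscosity of water
a = 1.0e-6             # m, bead radius

mu = 1.0 / (6.0 * math.pi * eta * a)  # Stokes mobility
D = kT * mu                           # Einstein relation, Eq. (39)

print(D)                  # ~ 2.2e-13 m^2/s
print(math.sqrt(2 * D))   # rms 1d displacement after 1 s, ~ 0.66 um
```

A root mean square displacement of order a micron per second is what makes the jiggling visible under a microscope.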

Fluctuation-Dissipation Theory

The relationship between the dissipation coefficient and the fluctuations is made more explicit by directly evaluating $D$ in terms of the fluctuations producing the random walk,

$$D = \lim_{t \to \infty} \frac{1}{2t} \langle [x(t) - x(0)]^2 \rangle. \qquad (40)$$

Expressing the displacement as the integral of the stochastic velocity $u$,

$$x(t) - x(0) = \int_0^t u(t')\, dt' \qquad (41)$$

leads to

$$D = \lim_{t \to \infty} \frac{1}{2t} \int_0^t dt' \int_0^t dt''\, \langle u(t')\, u(t'') \rangle, \qquad (42)$$

which depends on the velocity correlation function. The integrand is symmetric in $t', t''$ and we can replace the integral over the square by twice the integral over the triangle $0 < t' < t$, $t' < t'' < t$, and then introduce the time difference $\tau = t'' - t'$:

$$D = \lim_{t \to \infty} \frac{1}{t} \int_0^t dt' \int_0^{t - t'} d\tau\, \langle u(t')\, u(t' + \tau) \rangle. \qquad (43)$$

Since the correlation function $\langle u(t')\, u(t' + \tau) \rangle$ decays to zero in some finite relaxation time $\tau_r$, as $t \to \infty$ the limit of the second integral can be replaced by infinity for almost all values of $t'$ in the first integration. Further, $\langle u(t')\, u(t' + \tau) \rangle = C_u(\tau)$ is independent of $t'$ ($u(t)$ is a stationary random process if external conditions are fixed). Hence

$$D = \int_0^{\infty} d\tau\, \langle u(0)\, u(\tau) \rangle \qquad (44)$$

and

$$\mu = \frac{1}{kT} \int_0^{\infty} d\tau\, \langle u(0)\, u(\tau) \rangle, \qquad (45)$$

directly relating a dissipation coefficient to a correlation function.
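Eq. (44) can be tested on a synthetic velocity record; a sketch using a discretized Ornstein-Uhlenbeck (Langevin) velocity, where the relaxation rate gamma and noise strength sigma are assumptions chosen so the exact answer $D = \sigma^2 / 2\gamma^2$ is known:

```python
import numpy as np

rng = np.random.default_rng(4)
N, dt, gamma, sigma = 200_000, 0.01, 1.0, 1.0

# Discretized Langevin velocity: u -> u - gamma*u*dt + sigma*sqrt(dt)*xi.
kicks = sigma * np.sqrt(dt) * rng.standard_normal(N)
u = np.empty(N)
u[0] = 0.0
for i in range(1, N):
    u[i] = u[i - 1] * (1.0 - gamma * dt) + kicks[i]

# C_u(tau) by time averaging (stationarity), then D as its integral, Eq. (44).
lags = int(10.0 / (gamma * dt))   # integrate out to ~10 relaxation times
C = np.array([np.mean(u[: N - k] * u[k:]) for k in range(lags)])
D_gk = C.sum() * dt

print(D_gk, sigma**2 / (2.0 * gamma**2))   # Green-Kubo estimate vs exact
```

The same correlation function divided by kT gives the mobility, Eq. (45).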