Random walks. Marc R. Roussel Department of Chemistry and Biochemistry University of Lethbridge. March 18, 2009


1 Why study random walks?

Random walks have a huge number of applications in statistical mechanics. You have already seen that they can be used as a model of diffusion. We can include absorbing boundaries to model adsorption or other similar processes. A self-avoiding random walk (defined in the next section) can be used as a model for a polymer. Some types of energy transfer processes can also be modeled as random walks. There are lots of other examples. There are also lots of applications in other areas, including ecology (movement of animals in an environment) and economics (modeling short-term market fluctuations).

2 Types of random walks

The random walk described in the textbook is an unbiased and unbounded random walk in one dimension. We can of course look at random walks in higher-dimensional spaces. We can also study biased random walks, in which the probabilities of moving to the right or left are different. Another variation is a bounded random walk, in which the space on which the random walk occurs is finite. There are at least two versions of these: We can have a random walk with a reflecting boundary, modeling for instance diffusion in a container of finite size. We can also have a random walk with an absorbing boundary, which describes a process that stops when some particular value of a variable is reached. An example of the latter is the famous gambler's ruin problem: A gambler starts with x dollars and stops when he runs out of money. What is the probability that the gambler will run out of money, and what is the expected time to ruin? In addition to looking at random walks in different numbers of spatial dimensions, we can also study random walks in spaces of different connectivity. For example, we can study random walks on square lattices or on hexagonal lattices.
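The gambler's ruin problem mentioned above is easy to explore numerically. The sketch below is my own illustration, not part of the notes; it adds a second absorbing boundary at a target fortune `goal` (the classical formulation) so that the ruin probability is nontrivial even for a fair game. The function name and parameters are chosen for illustration only.

```python
import random

def gamblers_ruin(x0, goal, p, trials=10000, seed=42):
    """Estimate the probability that a gambler starting with x0 dollars goes
    broke before reaching `goal`, winning each one-dollar bet with probability p.
    The walk has absorbing boundaries at 0 (ruin) and at `goal`."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = x0
        while 0 < x < goal:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)   # True counts as 1
    return ruined / trials

# For a fair game (p = 1/2) the exact ruin probability is 1 - x0/goal = 0.7 here.
est = gamblers_ruin(x0=3, goal=10, p=0.5)
print(est)  # should be close to 0.7
```

With a bias toward losing (p < 1/2) and the goal pushed to infinity, ruin becomes certain, which matches the single-boundary version described above.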
We can also study random walks on lattices where each site is only connected to some of its nearest neighbors. This might be a model of diffusion through a zeolite or other porous material.

Generally, in random walk models, time is a discrete variable: The time steps are numbered, and at each time step the walker takes a step. In some variations, the walker can stay in its location instead of taking a step, but time is still discrete. It is also possible to treat the case where the time between steps is a random variable. This kind of model is particularly useful for describing chemical reactions and related processes.

Another important type is the self-avoiding random walk, which is not allowed to visit the same site twice. Special techniques are required to generate self-avoiding random walks since there is a tendency, particularly in two dimensions, for simple methods to get stuck, i.e. to advance the walker to a position where it is completely surrounded by sites it has already visited. Since self-avoiding walks are often used to simulate polymer conformations, we need to be able to generate walks with a preset number of steps (monomers or segments), so getting stuck after some small number of steps is something we need to avoid.

3 Some questions associated with random walks

Depending on the application, we may be interested in asking different questions about a random walk process. However, there are some questions that come up over and over again:

Root-mean-squared distance: How does the root-mean-squared distance travelled depend on the number of steps taken?

Mean recurrence time: What is the average interval of time (number of steps) between visits to a particular site (e.g. the origin of the walk)? In some cases (e.g. an unbiased random walk in three dimensions), the mean recurrence time is infinite, which means that a random walker will eventually wander away and not return.

Mean time to capture: What is the average time required for a walker to be captured by (e.g.) an absorbing boundary?

Capture/escape probability: What is the probability that a walker will be captured (or not) by an absorbing boundary or site(s)?

Mean first-passage time: What is the average time required to reach a particular target?
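The first of these questions is easy to answer by simulation. The following sketch (my own code, not from the notes, with illustrative names) estimates the root-mean-squared displacement of an unbiased one-dimensional walk with unit steps; since the mean-squared displacement is n for such a walk, the RMS distance should grow as the square root of the number of steps.

```python
import random

def rms_distance(n_steps, n_walkers=5000, seed=0):
    """Root-mean-squared displacement of an unbiased 1D walk of n_steps unit steps,
    averaged over n_walkers independent walkers."""
    rng = random.Random(seed)
    total_sq = 0
    for _ in range(n_walkers):
        x = sum(1 if rng.random() < 0.5 else -1 for _ in range(n_steps))
        total_sq += x * x
    return (total_sq / n_walkers) ** 0.5

# <x^2> = n for unit steps, so the RMS distance is about sqrt(n):
for n in (25, 100, 400):
    print(n, rms_distance(n))   # roughly 5, 10, 20
```

Quadrupling the number of steps only doubles the RMS distance, which is the signature of diffusive motion.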
Theoretical expressions can be obtained for all of these quantities in some simple cases. In other cases, we need to use simulations.

4 Markov processes

Loosely, a Markov process is a random process for which the probabilities of future states depend only on the current state, and not on the prior history. An ordinary random walk is clearly a Markov process: If I know where the random walker is now, I can calculate the probability that it will reach some given site at a later time without knowing how it reached its current position.

Let $P_n(x)$ be the probability that a random walker has reached coordinate $x$ after $n$ steps from some specified initial condition (often $x = 0$). The transition probability $W_k(X|x)$ is the probability that, if the random walker was at coordinate $x$ at step $n$, then it will be at $X$ $k$ steps later. Markov processes obey the following relationship:

$$P_n(X) = \sum_x W_k(X|x)\,P_{n-k}(x), \qquad (1)$$

where the sum is taken over all values of $x$ at which the walker could have been $k$ steps earlier.

5 The biased random walk in one dimension

We are going to look at a biased random walk in one dimension. If an unbiased random walk is a model of diffusion, a biased walk might be a model of diffusion with an added drift velocity. We will in fact now derive a diffusion-advection equation for a random walk, i.e. an equation with both a diffusion term and a directed drift component.

We first need to define our random walk model. Let $p$ be the probability that the random walker takes a step to the right in the next time step, i.e. the probability that the coordinate $m$ will be incremented by 1. Similarly, let $q$ be the probability that the random walker takes a step to the left. We allow for the possibility that the walker stays where it is in a time step, i.e. that $p + q < 1$, such that the probability of staying in place is $1 - p - q$. The constants $p$, $q$ and $1 - p - q$ are the transition probabilities in this model. If we apply equation 1 with $k = 1$, we get

$$P_n(m) = p\,P_{n-1}(m-1) + q\,P_{n-1}(m+1) + (1 - p - q)\,P_{n-1}(m),$$

or

$$P_n(m) - P_{n-1}(m) = p\,P_{n-1}(m-1) + q\,P_{n-1}(m+1) - (p + q)\,P_{n-1}(m).$$

The quantity on the left of the equal sign looks a little like a time derivative. In fact, if one time step takes time $\Delta t$,

$$\frac{\partial P}{\partial t} \approx \frac{P_n(m) - P_{n-1}(m)}{\Delta t}.$$

The quantity on the right of the equal sign needs more work. It can be rewritten in the form

$$p\,P_{n-1}(m-1) + q\,P_{n-1}(m+1) - (p + q)\,P_{n-1}(m) = \frac{1}{2}(p + q)\left[P_{n-1}(m-1) - 2P_{n-1}(m) + P_{n-1}(m+1)\right] - \frac{1}{2}(p - q)\left[P_{n-1}(m+1) - P_{n-1}(m-1)\right]. \qquad (2)$$

If the steps of our random walk are of length $\Delta x$, then the last term in this equation is related to the spatial derivative of $P$:

$$\frac{\partial P}{\partial x}(x, t - \Delta t) \approx \frac{P_{n-1}(m+1) - P_{n-1}(m-1)}{2\Delta x}. \qquad (3)$$
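Before continuing the derivation, note that equation 1 with k = 1 can be iterated directly on a computer. The sketch below (my own code, assuming a unit lattice and illustrative names) propagates the probability distribution of the biased walk and checks two exact properties: total probability is conserved, and the mean position drifts as n(p - q).

```python
def step_distribution(P, p, q):
    """One step of the biased-walk recursion (equation 1 with k = 1):
    P_n(m) = p*P_{n-1}(m-1) + q*P_{n-1}(m+1) + (1-p-q)*P_{n-1}(m).
    P maps lattice site m -> probability of being there."""
    new = {}
    for m, prob in P.items():
        new[m + 1] = new.get(m + 1, 0.0) + p * prob        # step right
        new[m - 1] = new.get(m - 1, 0.0) + q * prob        # step left
        new[m] = new.get(m, 0.0) + (1 - p - q) * prob      # stay put
    return new

p, q = 0.4, 0.2          # biased to the right; stays put with probability 0.4
P = {0: 1.0}             # walker starts at the origin
for n in range(50):
    P = step_distribution(P, p, q)

mean = sum(m * prob for m, prob in P.items())
print(round(mean, 6))                 # drift: <m> = n(p - q) = 50*0.2 = 10.0
print(round(sum(P.values()), 6))      # probability is conserved: 1.0
```

This discrete iteration is exact; the derivation that follows shows what it converges to in the limit of small steps.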

(Note that $x = m\,\Delta x$.) This may be less obvious, but the quantity after the equal sign in equation 2 is in fact related to the second derivative with respect to position. The second derivative is defined by

$$\frac{\partial^2 P}{\partial x^2}(x, t - \Delta t) \approx \frac{1}{\Delta x}\left[\frac{P(x + \Delta x, t - \Delta t) - P(x, t - \Delta t)}{\Delta x} - \frac{P(x, t - \Delta t) - P(x - \Delta x, t - \Delta t)}{\Delta x}\right] = \frac{1}{(\Delta x)^2}\left[P_{n-1}(m+1) - 2P_{n-1}(m) + P_{n-1}(m-1)\right].$$

Putting it all together, we get

$$\frac{\partial P}{\partial t} = \frac{(\Delta x)^2}{2\Delta t}(p + q)\,\frac{\partial^2 P}{\partial x^2}(x, t - \Delta t) - \frac{\Delta x}{\Delta t}(p - q)\,\frac{\partial P}{\partial x}(x, t - \Delta t).$$

The above derivation assumes that $\Delta t$ is small. That being the case, and assuming that $P$ is a continuous function of $t$, $P(x, t - \Delta t) \approx P(x, t)$. We get

$$\frac{\partial P}{\partial t} = \frac{(\Delta x)^2}{2\Delta t}(p + q)\,\frac{\partial^2 P}{\partial x^2}(x, t) - \frac{\Delta x}{\Delta t}(p - q)\,\frac{\partial P}{\partial x}(x, t).$$

If we identify

$$D = \frac{(\Delta x)^2}{2\Delta t}(p + q) \qquad (4)$$

and

$$u = \frac{\Delta x}{\Delta t}(p - q), \qquad (5)$$

we get

$$\frac{\partial P}{\partial t} = D\,\frac{\partial^2 P}{\partial x^2} - u\,\frac{\partial P}{\partial x}. \qquad (6)$$

Note that if $p = q = \frac{1}{2}$, equation 4 is the Einstein-Smoluchowski equation for the diffusion coefficient, $D = (\Delta x)^2/(2\Delta t)$. This equation is therefore a generalization of the Einstein-Smoluchowski result to the biased random walk. Also note that $u$ defined by equation 5 is a velocity.

Equation 6 can be transformed to a form which is more clearly a diffusion-advection equation by noting that, for a system that contains $N_0$ particles undergoing the biased random walk process, the lineal concentration is $c(x, t) = N_0 P(x, t)/\Delta x$, so that $c$ satisfies the equation

$$\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2} - u\,\frac{\partial c}{\partial x}. \qquad (7)$$

The last term in this equation is an advection (drift) term. If $u > 0$ (i.e. $p > q$), then this term will cause the distribution $c(x, t)$ to drift to the right. To see this, consider figure 1 and imagine a case where diffusion is weak, so that the advective term dominates. In the tail of the distribution (small $x$), $\partial c/\partial x > 0$, so $\partial c/\partial t < 0$. The tail therefore dies away. At the leading edge (to the right of the maximum), $\partial c/\partial x < 0$, so $\partial c/\partial t > 0$. The leading edge therefore grows. Growth ahead of the maximum and decay behind it results in a net movement in the direction of $u$.

[Figure 1: Motion of a concentration distribution for equation 7 in the case of weak diffusion with $u > 0$.]

The diffusion-advection equation 7 can be solved. The solution for a delta-function initial condition at $x = 0$ is

$$c(x, t) = \frac{N_0}{\sqrt{4\pi D t}}\,\exp\left(-\frac{(x - ut)^2}{4Dt}\right).$$

The expression $x - ut$ is characteristic of travelling wave solutions. In this case, because of diffusion, the wave both travels and dissipates.
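As a closing numerical check (my own sketch, not part of the notes), one can simulate the biased walk directly and compare the sample mean and variance of the walker positions with the drift ut and spread 2Dt of the Gaussian solution. One caveat: the exact per-step variance of the walk is (p + q) - (p - q)^2 in units of (Delta x)^2, which matches 2D*Delta t only up to the (p - q)^2 correction that the continuum limit neglects, so the agreement improves as the bias gets weaker.

```python
import random

# Biased-walk parameters: step length dx, time step dt, step probabilities p, q
dx, dt, p, q = 1.0, 1.0, 0.55, 0.45   # p + q = 1: the walker never stays put
D = dx**2 * (p + q) / (2 * dt)        # equation 4
u = dx * (p - q) / dt                 # equation 5

rng = random.Random(1)
n_steps, n_walkers = 200, 5000
finals = []
for _ in range(n_walkers):
    x = 0.0
    for _ in range(n_steps):
        x += dx if rng.random() < p else -dx
    finals.append(x)

t = n_steps * dt
mean = sum(finals) / n_walkers
var = sum((xf - mean) ** 2 for xf in finals) / n_walkers
print(mean, u * t)      # sample drift vs u*t = 20
print(var, 2 * D * t)   # sample spread vs 2*D*t = 200
```

The histogram of final positions (not plotted here) would trace out the travelling, spreading Gaussian of the solution above.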