A Detailed Look at a Discrete Random Walk with Spatially Dependent Moments and Its Continuum Limit
A Detailed Look at a Discrete Random Walk with Spatially Dependent Moments and Its Continuum Limit

David Vener
Department of Mathematics, MIT
May 5, 2003

1 Introduction

In 18.366 we discussed the relationship between random walks and diffusion equations at length. To a large extent, we used the continuum diffusion limit to approximate the behavior of the discrete random walk, because the mathematics of calculus is often easier than the combinatorics required to consider the exact distribution of the random process. A problem from one of the problem sets exemplifies this attitude towards the calculation; there we considered a discrete random walk on a one-dimensional lattice with spatially dependent transition probabilities. In the continuum limit, which will be discussed in more detail below, we found that the probability distribution to leading order satisfies the Fokker-Planck equation, and we then solved this equation. In this project, I will develop this calculation further, discuss approaches to the discrete problem, and compare the continuum approximation to results derived from the discrete problem.

2 The Discrete Problem

Let M >> 1 be a large integer, and let X_n be the position of a random walker on the integers after n steps, with the following transition probabilities for all n:

    Prob{X_{n+1} = j | X_n = i} = (1/2)(1 - i/M),  j = i + 1,
                                  (1/2)(1 + i/M),  j = i - 1,
                                  0,               otherwise,    (1)

for -M ≤ i ≤ M. Since the transition probabilities only depend on the current position of the walker, this random walk is said to satisfy the Markov property. Furthermore, since the walker can only reach positions -M ≤ i ≤ M, the walk can be described by a finite Markov chain. Let P be the (2M+1) by (2M+1) matrix defined by

    P_ij = Prob{X_1 = j | X_0 = i} = (1/2)(1 - i/M) δ_{i,j-1} + (1/2)(1 + i/M) δ_{i,j+1}    (2)
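As a concrete check, the transition matrix P is easy to construct numerically. The following is a minimal sketch in Python (the function name and the choice M = 10 are illustrative, not from the original):

```python
import numpy as np

def transition_matrix(M):
    """Transition matrix for the walk on {-M, ..., M}.

    Row/column index k corresponds to lattice site i = k - M.
    From site i the walker steps to i+1 with probability (1 - i/M)/2
    and to i-1 with probability (1 + i/M)/2."""
    n = 2 * M + 1
    P = np.zeros((n, n))
    for k in range(n):
        i = k - M
        if i < M:                      # a right step is impossible from i = M
            P[k, k + 1] = 0.5 * (1 - i / M)
        if i > -M:                     # a left step is impossible from i = -M
            P[k, k - 1] = 0.5 * (1 + i / M)
    return P

P = transition_matrix(10)
assert np.allclose(P.sum(axis=1), 1.0)   # every row is a probability distribution
```

Note that at the endpoints i = ±M one of the two step probabilities vanishes, so the walker is pushed back toward the origin; this is what confines the chain to finitely many states.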
D. Vener, Random Walks and Diffusion, Spring 2003 Course Project

for -M ≤ i, j ≤ M. Now, since the Markov property holds, we can write

    Prob{X_2 = j | X_0 = i} = Σ_{k=-M}^{M} Prob{X_1 = k | X_0 = i} Prob{X_2 = j | X_1 = k}
                            = Σ_{k=-M}^{M} P_ik P_kj = (P²)_ij.    (3)

Similarly, we can use mathematical induction to prove that

    (P^n)_ij = Prob{X_n = j | X_0 = i}.    (4)

Therefore, given an initial position i for the random walker, we can calculate the probability that the walker is at position j after n steps simply by calculating the elements of the i-th row of P^n. One might consider the discrete problem finished now, since given any finite number of steps and any initial position we have an algorithm for computing the exact probability that the walker is at any position. However, for large M (which, in fact, is a necessary component of the continuum limit discussed next) the calculation may be very difficult by hand and may require too much memory on a computer. Therefore, we may also wish to ask whether or not P^n approaches a limit in some sense, and, if so, how quickly. We will return to this problem in Section 4.
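In practice, a single row of P^n is best obtained by n successive vector-matrix products, which avoids forming dense matrix powers. A minimal sketch (M = 3 chosen only for illustration):

```python
import numpy as np

def distribution_after(P, i, nsteps):
    """Row i of P^n, i.e. Prob{X_n = j | X_0 = i}, computed by repeated
    vector-matrix products rather than by powering the matrix."""
    v = np.zeros(P.shape[0])
    v[i] = 1.0
    for _ in range(nsteps):
        v = v @ P
    return v

# build the chain on {-3, ..., 3}
M = 3
n = 2 * M + 1
P = np.zeros((n, n))
for k in range(n):
    i = k - M
    if i < M:
        P[k, k + 1] = 0.5 * (1 - i / M)
    if i > -M:
        P[k, k - 1] = 0.5 * (1 + i / M)

v = distribution_after(P, M, 4)                       # start at the origin, 4 steps
assert np.allclose(v, np.linalg.matrix_power(P, 4)[M])  # agrees with a row of P^4
```

For n steps on 2M+1 states this costs O(n M²) operations and O(M) storage for the state vector, instead of the O(M²) storage and repeated dense multiplications needed to hold P^n itself.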
3 The Continuum Approximation of the Discrete Problem

3.1 The Continuum Limit

Now suppose that the discrete random walk described above occurs on a lattice with spacing a << 1 and that the spacing between time steps is δ << 1. Then for each -M ≤ i ≤ M we associate the position x = ai, and for each n a time t = nδ. Now define the random variable X(t; a, δ, M) such that X(nδ; a, δ, M) = X_n. Rewriting the transition probabilities from the discrete walk in terms of X(t) and x, we have

    Prob{X((n+1)δ) = x' | X(nδ) = x} = (1/2)(1 - x/(Ma)) δ(x' - x - a) + (1/2)(1 + x/(Ma)) δ(x' - x + a).    (5)

So by defining

    ΔX(t; a, δ, M) ≡ X(t + δ; a, δ, M) - X(t; a, δ, M),    (6)

we can now calculate the spatially dependent moments m_l(x; a, δ, M) by

    m_l(x; a, δ, M) ≡ < [ΔX(t; a, δ, M)]^l | X(t; a, δ, M) = x >
                    = (a^l/2)(1 - x/(Ma)) + ((-a)^l/2)(1 + x/(Ma))
                    = -(a^{l-1}/M) x,  l odd,
                      a^l,             l even.    (7)

Now let us define D_l(x; a, δ, M) ≡ m_l(x; a, δ, M)/(l! δ), and let us choose a, δ, and M so that D_1 and D_2 are both O(1) as δ → 0. For example, we could choose M = 1/δ and a = √(2δ) to get D_1 = -x and D_2 = 1. Given a sufficient choice, with π ≡ 1/(Mδ) and D ≡ a²/(2δ) held fixed, we have D_1 = -πx and D_2 = D, and more generally

    D_l(x; a, δ, M) = -(a^{l-1}/(l! M δ)) x,  l odd,
                      a^l/(l! δ),             l even,
                    = -(2^{(l-1)/2}/l!) D^{(l-1)/2} π x δ^{(l-1)/2},  l odd,
                      (2^{l/2}/l!) D^{l/2} δ^{l/2 - 1},               l even.    (8)

This means that if D and π are O(1) quantities, then the odd coefficients satisfy D_l = O(δ^{(l-1)/2}), which is different from the standard case, where D_l is assumed to be O(δ^{(l-2)/2}) for l ≥ 3. In the limit δ → 0, a → 0, and M → ∞, holding π and D fixed, X(t; a, δ, M) approaches a random variable X(t) which, for each t, can take values x from a continuous set (-∞, ∞). Now let χ(x, t | x_0) be the probability density function (PDF) of X(t) given X(0) = x_0. On the problem set we found a partial differential equation for the dynamics of a more general PDF; adapting that equation to the case of moments with no explicit time dependence, we can write

    ∂χ/∂t = ∂/∂x {[πx + (δ/2) π²x] χ} + ∂²/∂x² {[D - (δ/2) π²x² + δπD] χ}
            - (2δπD/3) ∂³/∂x³ [x χ] - (δD²/3) ∂⁴χ/∂x⁴ + O(δ²).    (9)

This equation, combined with the initial condition χ(x, 0 | x_0) = δ(x - x_0), allows us to solve for χ(x, t | x_0) correct through O(δ). This is the continuous analog of calculating the matrix (P^n)_ij, where n = t/δ, as discussed in Section 2.

3.2 A Regular Perturbation Series for χ

Let us non-dimensionalize the PDE (9) by supposing that the problem has a characteristic time T and a characteristic length L. Defining dimensionless variables t̃ = t/T and x̃ = x/L, Equation (9) becomes

    ∂χ/∂t̃ = ∂/∂x̃ {[πT x̃ + (δ/2T)(πT)² x̃] χ} + ∂²/∂x̃² {[DT/L² - (δ/2T)(πT)² x̃² + (δ/T)(πT)(DT/L²)] χ}
            - (2δ/3T)(πT)(DT/L²) ∂³/∂x̃³ [x̃ χ] - (δ/3T)(DT/L²)² ∂⁴χ/∂x̃⁴ + O((δ/T)²).    (10)

Choosing πT = DT/L² = 1, i.e. T = 1/π and L = √(D/π), gives Equation (9) in the following dimensionless form:

    ∂χ/∂t̃ = ∂/∂x̃ [x̃χ] + ∂²χ/∂x̃² + (δ/2T) {∂/∂x̃ [x̃χ] - ∂²/∂x̃² [(x̃² - 2)χ]}
            - (δ/T) {(2/3) ∂³/∂x̃³ [x̃χ] + (1/3) ∂⁴χ/∂x̃⁴} + O((δ/T)²).    (11, 12)
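The closed form for the moments m_l above can be verified in exact rational arithmetic directly against the two-point jump distribution. A minimal sketch (the values of M and a and the range of l are illustrative):

```python
from fractions import Fraction

def moment(l, i, M, a):
    """m_l at lattice site i (position x = a*i), computed directly from the
    two-point jump distribution: step +a w.p. (1 - i/M)/2, -a w.p. (1 + i/M)/2."""
    up = Fraction(1, 2) * (1 - Fraction(i, M))
    down = Fraction(1, 2) * (1 + Fraction(i, M))
    return up * a ** l + down * (-a) ** l

M, a = 8, Fraction(1, 3)
for i in range(-M, M + 1):
    x = a * i
    for l in range(1, 7):
        if l % 2:                      # odd moments: -(a**(l-1)/M) * x
            assert moment(l, i, M, a) == -(a ** (l - 1)) * x / M
        else:                          # even moments: a**l
            assert moment(l, i, M, a) == a ** l
```

Because `Fraction` arithmetic is exact, the agreement here is an identity, not a floating-point approximation.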
Recall that t̃/γ = t/δ = n is the number of steps taken by the random walker. Therefore, taking γ ≡ δ/T << 1 corresponds to looking at the position of the walker after many steps. In this limit, we can expand χ in a regular perturbation series by taking

    χ(x̃, t̃ | x̃_0) = χ_0(x̃, t̃ | x̃_0) + γ χ_1(x̃, t̃ | x̃_0) + O(γ²),    (13)

where χ_0 satisfies

    ∂χ_0/∂t̃ = ∂/∂x̃ [x̃χ_0] + ∂²χ_0/∂x̃²,    (14)

subject to the constraint

    χ_0(x̃, 0 | x̃_0) = δ(x̃ - x̃_0).    (15)

We then force χ_1 to satisfy

    ∂χ_1/∂t̃ = ∂/∂x̃ [x̃χ_1] + ∂²χ_1/∂x̃² + (1/2) {∂/∂x̃ [x̃χ_0] - ∂²/∂x̃² [(x̃² - 2)χ_0]}
              - {(2/3) ∂³/∂x̃³ [x̃χ_0] + (1/3) ∂⁴χ_0/∂x̃⁴},    (16)

subject to the constraint

    χ_1(x̃, 0 | x̃_0) = 0.    (17)

χ_0 represents the leading-order behavior of the PDF and will now be calculated. We will calculate χ_1, the leading-order correction, in Section 3.4.

3.3 The Leading Order Behavior

This problem was solved in the problem-set solutions; therefore we will not go into too much detail here. In order to solve Equation (14), we make the following change of variables:

    ρ = x̃ e^{t̃},    ξ = t̃.

With this change, Equation (14) becomes

    ∂χ_0/∂ξ = χ_0 + e^{2ξ} ∂²χ_0/∂ρ².    (18)

Taking the Fourier transform in the ρ variable gives

    ∂χ̂_0(k, ξ)/∂ξ = χ̂_0(k, ξ) - k² e^{2ξ} χ̂_0(k, ξ),    (19)

which has the solution

    χ̂_0(k, ξ) = C(k) exp(ξ - (k²/2) e^{2ξ}),    (20)
for an arbitrary C(k), which must be chosen to satisfy the initial condition

    χ_0(ρ, 0) = δ(ρ - x_0/L).    (21)

Taking the Fourier transform of Equation (21) allows us to write

    χ̂_0(k, 0) = e^{-ik x_0/L} = C(k) exp(-k²/2),    (22)

which, in turn, implies that, upon defining x̃_0 = x_0/L,

    χ̂_0(k, ξ) = e^{ξ} e^{-ik x̃_0} exp(-(k²/2)(e^{2ξ} - 1)).    (23)

Now that we have the Fourier transform of χ_0, we can invert it to calculate χ_0(ρ, ξ) itself. (In what follows, λ = 3.14159... denotes the numerical circle constant, since the symbol π is reserved for the drift coefficient.) We have

    χ_0(ρ, ξ) = (1/2λ) ∫ dk e^{ikρ} χ̂_0(k, ξ)
              = (e^{ξ}/2λ) ∫ dk exp(-(k²/2)(e^{2ξ} - 1) + ik(ρ - x̃_0))
              = (e^{ξ}/√(2λ(e^{2ξ} - 1))) exp(-(ρ - x̃_0)²/(2(e^{2ξ} - 1))).    (24)

Finally, if we wish to express χ_0 in terms of the original variables, we have, upon the necessary re-normalization,

    χ_0(x, t | x_0) = √(π/(2λD(1 - e^{-2πt}))) exp(-π(x - x_0 e^{-πt})²/(2D(1 - e^{-2πt}))).    (25)

As a brief aside, we note here that, for any x_0,

    lim_{t→∞} χ_0(x, t | x_0) = √(π/(2λD)) exp(-πx²/(2D)),    (26)

which shows that the leading-order term approaches a steady solution which is independent of the initial starting position. We will comment on this again in Section 4.

3.4 The First Correction to Leading Order

Now that we have solved Equation (14) for χ_0, we can plug this into Equation (16) to solve for χ_1. In terms of ρ and ξ as defined in Section 3.3, Equation (16) is

    ∂χ_1/∂ξ = χ_1 + e^{2ξ} ∂²χ_1/∂ρ² + (1/2) ∂/∂ρ [ρχ_0] - (1/2) ∂²/∂ρ² [ρ²χ_0]
              + e^{2ξ} ∂²χ_0/∂ρ² - (2/3) e^{2ξ} ∂³/∂ρ³ [ρχ_0] - (1/3) e^{4ξ} ∂⁴χ_0/∂ρ⁴.    (27)
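The leading-order density derived above is simply a Gaussian whose mean relaxes exponentially and whose variance saturates, which is easy to sanity-check numerically. A minimal sketch (the function name and parameter values are illustrative; `np.pi` below plays the role of the numerical constant written λ in the text, while `pi_rate` is the drift coefficient written π):

```python
import numpy as np

def chi0(x, t, x0, pi_rate=1.0, D=1.0):
    """Leading-order density: a Gaussian with mean x0*exp(-pi_rate*t)
    and variance (D/pi_rate)*(1 - exp(-2*pi_rate*t))."""
    var = (D / pi_rate) * (1.0 - np.exp(-2.0 * pi_rate * t))
    mean = x0 * np.exp(-pi_rate * t)
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

x = np.linspace(-12.0, 12.0, 6001)
dx = x[1] - x[0]
for t in (0.1, 1.0, 5.0):
    assert abs(np.sum(chi0(x, t, 2.0)) * dx - 1.0) < 1e-6   # stays normalized

# by t = 8 the density has essentially forgotten its starting point
assert np.allclose(chi0(x, 8.0, 2.0), chi0(x, 8.0, -2.0), atol=1e-3)
```

The last assertion illustrates numerically the steady-limit observation above: the dependence on x_0 decays like e^{-t} in the mean, so any two starting points give nearly identical densities at moderate times.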
Now, when taking the Fourier transform of this equation, recall that ∂/∂ρ → ik and ρ → i ∂/∂k. Therefore, the transform of Equation (27) gives

    ∂χ̂_1/∂ξ = (1 - k² e^{2ξ}) χ̂_1 - (k/2) ∂χ̂_0/∂k - (k²/2) ∂²χ̂_0/∂k²
               - k² e^{2ξ} χ̂_0 - (2/3) k³ e^{2ξ} ∂χ̂_0/∂k - (1/3) k⁴ e^{4ξ} χ̂_0.    (28)

To solve this equation, note that from Equation (23) we compute

    (1/χ̂_0) ∂χ̂_0/∂ξ = 1 - k² e^{2ξ},    (1/χ̂_0) ∂χ̂_0/∂k = -i x̃_0 - k(e^{2ξ} - 1).    (29)

Therefore, by dividing Equation (28) by χ̂_0, we recognize that it can be rewritten to read

    ∂/∂ξ (χ̂_1/χ̂_0) = [ (i/2) k x̃_0 - k² + i k³ x̃_0 + (1/2) k² x̃_0² - (1/2) k⁴ ]
                      + e^{2ξ} [ (1/3) k⁴ - (i/3) k³ x̃_0 ] - (1/6) k⁴ e^{4ξ}.    (30)

We may now solve for χ̂_1 by integrating in ξ, finding

    χ̂_1 = χ̂_0 { [ (i/2) k x̃_0 - k² + i k³ x̃_0 + (1/2) k² x̃_0² - (1/2) k⁴ ] ξ
                 + (1/6) k⁴ e^{2ξ} - (i/6) k³ x̃_0 e^{2ξ} - (1/24) k⁴ e^{4ξ} + B(k) },    (31)

where B(k) must be chosen to satisfy Equation (17), i.e. the initial condition for χ_1. Since χ_1(x̃, 0 | x̃_0) = 0, we require χ̂_1(k, 0) = 0,    (32)

which fixes

    B(k) = (i/6) k³ x̃_0 - (1/8) k⁴.    (33)

Recall that, when taking the inverse Fourier transform, a factor k^m acting on χ̂_0 corresponds to (-i)^m ∂^m χ_0/∂ρ^m, so that

    χ_1(ρ, ξ) = (x̃_0 ξ/2) ∂χ_0/∂ρ + ξ(1 - x̃_0²/2) ∂²χ_0/∂ρ²
                + x̃_0 [ (e^{2ξ} - 1)/6 - ξ ] ∂³χ_0/∂ρ³
                - [ ξ/2 + (e^{2ξ} - 1)(e^{2ξ} - 3)/24 ] ∂⁴χ_0/∂ρ⁴.    (34)

Note that all four coefficients vanish at ξ = 0, so the initial condition (17) is indeed satisfied.
However, this can be simplified by noting that

    ∂^m χ_0/∂ρ^m = (-1)^m [2(e^{2ξ} - 1)]^{-m/2} H_m( (ρ - x̃_0)/√(2(e^{2ξ} - 1)) ) χ_0,    (35)

where H_m(z) is the m-th order Hermite polynomial. In terms of the Hermite polynomials, writing s = e^{2ξ} - 1 and z = (ρ - x̃_0)/√(2s), we find that the first-order correction is

    χ_1(ρ, ξ) = -(x̃_0 ξ/2)(2s)^{-1/2} H_1(z) χ_0 + ξ(1 - x̃_0²/2)(2s)^{-1} H_2(z) χ_0
                + x̃_0 (ξ - s/6)(2s)^{-3/2} H_3(z) χ_0
                - [ ξ/2 + s(e^{2ξ} - 3)/24 ](2s)^{-2} H_4(z) χ_0.    (36)

4 The Many-step Limit of the Discrete Process

Recall that in Section 3.3 we found that

    lim_{t→∞} χ_0(x, t | x_0) = √(π/(2λD)) exp(-πx²/(2D)),    (37)

independent of x_0. Since π = 1/(Mδ) and D = a²/(2δ), we can rewrite Equation (37) in terms of the parameters of the original discrete problem. That is,

    lim_{t→∞} Prob{x ≤ X(t) < x + dx | X(0) = x_0} = (dx/(a√(λM))) exp(-x²/(Ma²)).    (38)

Recalling that x = aj, where j is the position on the lattice, we see even more explicitly that the leading-order continuum approximation predicts that, for the discrete process,

    lim_{n→∞} Prob{j ≤ X_n < j + 1 | X_0 = i} = (1/√(λM)) exp(-j²/M).    (39)

From this prediction we might expect the discrete process to have a stationary limit which is also independent of the initial starting position. In this section we will see to what extent that is the case. First we introduce a theorem which will be used later.

4.1 Doeblin's Theorem

Suppose Q = (Q_ij), 1 ≤ i, j ≤ K, is a probability transition matrix on a finite number of states, and further suppose that there exists an ε > 0 such that Q_ij ≥ ε for all i and j. Then Doeblin's theorem states that there exists a unique vector µ such that µQ = µ, µ_i ≥ 0 for all i, and Σ_{i=1}^{K} µ_i = 1. Furthermore, given any vector v with v_i ≥ 0 for all i and Σ_{i=1}^{K} v_i = 1,

    ||v Q^n - µ|| ≤ 2(1 - ε)^n    for all n,    (40)

where ||v|| = Σ_i |v_i|.
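Doeblin's bound can be exercised numerically on any strictly positive stochastic matrix. The sketch below (matrix, size, and seed all arbitrary, chosen only so that every entry exceeds some ε > 0) extracts the stationary vector as the left eigenvector for eigenvalue 1 and checks the geometric estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
Q = rng.uniform(0.1, 1.0, size=(K, K))
Q /= Q.sum(axis=1, keepdims=True)          # normalize rows; all entries stay > 0
eps = Q.min()                               # Doeblin constant: Q_ij >= eps

# stationary vector: left eigenvector of Q for eigenvalue 1
w, V = np.linalg.eig(Q.T)
mu = np.real(V[:, np.argmax(np.real(w))])
mu /= mu.sum()

v = np.zeros(K)
v[0] = 1.0                                  # start from a point mass
for n in range(1, 31):
    v = v @ Q
    # total-variation-style distance obeys the geometric Doeblin bound
    assert np.abs(v - mu).sum() <= 2 * (1 - eps) ** n + 1e-12
```

The bound is usually quite loose (the true contraction rate is set by the second-largest eigenvalue modulus), but it is what the theorem guarantees from the single constant ε alone.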
4.2 A Stationary State for the Discrete Process

Let ξ be a 1 by (2M+1) row vector with ξ_i ≥ 0 for all i and Σ_{i=-M}^{M} ξ_i = 1. We shall interpret ξ to be the state of the system after some number of steps since, choosing ξ_i = Prob{X_n = i} and allowing P as defined in Section 2 to act on ξ by right multiplication, we have

    (ξP)_j = Σ_{i=-M}^{M} ξ_i P_ij = Σ_{i=-M}^{M} Prob{X_{n+1} = j | X_n = i} Prob{X_n = i} = Prob{X_{n+1} = j}.    (41)

That is, if ξ contains the probability that the random walker is at each of the points on the lattice after n steps, then ξP is a row vector that contains the probability that the random walker is at each of the points on the lattice after n + 1 steps. If lim_{n→∞} Prob{X_n = j} exists, then there must exist a state vector Π satisfying all of the properties of ξ above together with Π = ΠP. To compute Π, we first note that it is a left eigenvector of P corresponding to the eigenvalue 1. Therefore,

    Π_j = Σ_i Π_i P_ij = Σ_i Π_i [ (1/2)(1 - i/M) δ_{i,j-1} + (1/2)(1 + i/M) δ_{i,j+1} ]
        = (1/2)(1 - (j-1)/M) Π_{j-1} + (1/2)(1 + (j+1)/M) Π_{j+1}.    (42)

This implies that, in addition to the conditions Σ_{i=-M}^{M} Π_i = 1 and Π_i ≥ 0 for all i, the elements of Π must satisfy the following recurrence relation (with Π_{-M-1} = Π_{M+1} = 0):

    2M Π_j = (M - j + 1) Π_{j-1} + (M + j + 1) Π_{j+1}.    (43)

This recursion can be solved with generating functions; however, we will just verify that

    Π_j = 2^{-2M} C(2M, M+j),

where C(n, k) = n!/(k!(n-k)!) denotes the binomial coefficient, is the solution. First note that

    Σ_{j=-M}^{M} 2^{-2M} C(2M, M+j) = 2^{-2M} Σ_{k=0}^{2M} C(2M, k) = 2^{-2M} · 2^{2M} = 1.    (44)
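The claimed stationary vector can be checked in exact integer arithmetic: scaling the binomial weights by 2^{2M} keeps everything in integers, so both the recurrence and the normalization become identities with no rounding. A minimal sketch (M = 12 is illustrative):

```python
from math import comb

M = 12
# claimed stationary weights scaled by 2^(2M): w[k] = C(2M, k), k = M + j
w = [comb(2 * M, k) for k in range(2 * M + 1)]

# recurrence: 2M * Pi_j = (M - j + 1) Pi_{j-1} + (M + j + 1) Pi_{j+1}
for k in range(2 * M + 1):
    j = k - M
    acc = 0
    if k > 0:                      # contribution from site j-1 stepping right
        acc += w[k - 1] * (M - (j - 1))
    if k < 2 * M:                  # contribution from site j+1 stepping left
        acc += w[k + 1] * (M + (j + 1))
    assert acc == 2 * M * w[k]     # exact stationarity at every site

assert sum(w) == 4 ** M            # normalization: sum of C(2M, k) = 2^(2M)
```

Since the assertions hold exactly for every site, including the endpoints j = ±M where one neighbor is absent, this verifies ΠP = Π without any floating-point tolerance.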
Also,

    (M - j + 1) Π_{j-1} + (M + j + 1) Π_{j+1}
        = 2^{-2M} [ (M - j + 1) C(2M, M+j-1) + (M + j + 1) C(2M, M+j+1) ]
        = 2^{-2M} [ (M + j) C(2M, M+j) + (M - j) C(2M, M+j) ]
        = 2M Π_j,    (45)

where we have used the identities C(2M, M+j-1) = C(2M, M+j) (M+j)/(M-j+1) and C(2M, M+j+1) = C(2M, M+j) (M-j)/(M+j+1). Thus the recurrence (43) is satisfied.

4.3 Interpretation of the Stationary State

In Section 4.2 we found that the discrete random walk introduced in Section 2 has a steady-state solution which is consistent with the analysis of the continuum equation. However, one may ask the question: does (P^n)_ij tend to Π_j for any initial position i? We shall see that the answer to this question is both no and yes. Strictly speaking, lim_{n→∞} (P^n)_ij does not exist. This is easily seen, since at each step the walker must take exactly one step, either to the left or to the right. Therefore, if the walker is on an even position after n steps, he is guaranteed to be on an odd position after n + 1 steps. Similarly, if the walker is at an odd position after n steps, he will be on an even position after n + 1 steps. Therefore, (P^n)_ij is zero either for all even n or for all odd n. But since Π_j is non-zero for all j, Π_j cannot be the desired limit. Thus we have lost some information about the system in the continuum limit which cannot be retrieved. However, we will now show that the walker does reach the stationary state in the following weaker sense. We will prove that, for all i and j,

    lim_{n→∞} (1/2) [ (P^n)_ij + (P^{n+1})_ij ] = Π_j,    (46)

i.e. that the average state is the stationary state. To do this, let us first consider the elements of P². From matrix multiplication we have

    (P²)_ij = (1/4)(1 - i/M)(1 - (i+1)/M) δ_{i,j-2}
              + [ (1/4)(1 - i/M)(1 + (i+1)/M) + (1/4)(1 + i/M)(1 - (i-1)/M) ] δ_{i,j}
              + (1/4)(1 + i/M)(1 + (i-1)/M) δ_{i,j+2}.    (47)

Therefore, if, instead of considering the random walker taking one step at a time, we only look at his position after an even number of steps, he will always be on a position with the same parity as his original position.
Thus we can separate the even and odd points of the lattice into two separate classes with the property that transitions generated from P² stay within the same class, so that the classes can be considered separately. To this end (taking M even, so that there are M + 1 even sites and M odd sites), let E be the (M+1) by (M+1) matrix representing the transition probabilities of P² between the even states, and let O be the M by M matrix representing
the transition probabilities of P² between the odd states. Even a random walker who starts at the far end of the lattice has a positive probability of reaching the other end after 2M total steps; therefore (E^M)_{ij} > 0 and (O^M)_{ij} > 0 for all i, j. Thus, by Doeblin's theorem for finite-state Markov chains, both lim_{n→∞} (E^n)_{ij} and lim_{n→∞} (O^n)_{ij} exist and are independent of i, for all i and j. In fact, a calculation similar to the one used to compute Π_j above verifies that, for even sites j,

    lim_{n→∞} (E^n)_{ij} = 2^{1-2M} C(2M, M+j) = 2Π_j,    (48)

and that, for odd sites j,

    lim_{n→∞} (O^n)_{ij} = 2^{1-2M} C(2M, M+j) = 2Π_j.    (49)

Now let us reconsider the original Markov chain on all 2M + 1 points of the lattice. If i and j have the same parity, we have already argued that (P^{2k+1})_ij = 0 for all k. From our consideration of E and O, and the fact that exactly one of n and n + 1 is even, we have shown that

    lim_{n→∞} (1/2) [ (P^n)_ij + (P^{n+1})_ij ] = (1/2) [ 0 + 2Π_j ] = Π_j.    (50)

If, however, i and j have different parity, we have already shown that (P^{2k})_ij = 0 for all k. Furthermore,

    (P^{2k+1})_ij = Σ_{i'} P_{ii'} (P^{2k})_{i'j}
                  = (1/2)(1 - i/M)(P^{2k})_{i+1,j} + (1/2)(1 + i/M)(P^{2k})_{i-1,j}.    (51)

Thus, since i ± 1 and j have the same parity, we have

    lim_{n→∞} (1/2) [ (P^n)_ij + (P^{n+1})_ij ]
        = (1/2) lim_{k→∞} [ (1/2)(1 - i/M)(P^{2k})_{i+1,j} + (1/2)(1 + i/M)(P^{2k})_{i-1,j} ]
        = (1/2) [ (1/2)(1 - i/M) 2Π_j + (1/2)(1 + i/M) 2Π_j ] = Π_j.    (52)

We have thus proved our claim that the average distribution of the random walker after n steps approaches a steady distribution that is independent of the starting position.

5 A Comparison of the Discrete and Continuous Solutions

For the purpose of comparison, let us choose M with δ = 1/M and a = √(2δ). In this case we have π = 1/(Mδ) = 1 and D = a²/(2δ) = 1, so that the non-dimensional equations hold.
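Both the parity obstruction and the time-averaged convergence proved above can be observed directly for moderate M. A sketch (M = 10 and n = 200 are illustrative choices):

```python
import numpy as np
from math import comb

M = 10
n_states = 2 * M + 1
P = np.zeros((n_states, n_states))
for k in range(n_states):
    i = k - M
    if i < M:
        P[k, k + 1] = 0.5 * (1 - i / M)
    if i > -M:
        P[k, k - 1] = 0.5 * (1 + i / M)

# stationary distribution: Pi_j = 2^(-2M) C(2M, M+j)
Pi = np.array([comb(2 * M, k) for k in range(n_states)], dtype=float) / 4 ** M

v = np.zeros(n_states)
v[M] = 1.0                             # start at the origin
for _ in range(200):                   # v becomes the origin row of P^200
    v = v @ P
w = v @ P                              # row of P^201

assert v[M + 1] == 0.0                 # parity: odd sites are empty after even n
assert np.abs(0.5 * (v + w) - Pi).max() < 1e-8   # averaged rows converge to Pi
```

The unaveraged rows v and w never converge individually (each keeps half the sites at exactly zero), while their average matches the binomial stationary state to within floating-point accuracy, exactly as (46) asserts.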
Figure 1: Error in the leading-order term (error versus lattice position x).

Let us consider the symmetric case where the random walker begins at x_0 = 0. Then, for t = 5 (i.e., after n = t/δ steps), the error at each x made by the continuous approximation of the time-averaged probabilities can be seen in Figure 1. This picture seems to match an even fourth-order polynomial times a Gaussian, which is the long-time behavior exhibited by the correction term derived in Section 3.4. In fact, the error made by the continuous approximation when we include the first two terms is shown in Figure 2. This error appears to be a higher-order polynomial times a Gaussian, as might be expected from the calculations above. Furthermore, for all x within a central region, the results are not qualitatively different for t ≥ 5.

6 Conclusions

In this project, we have calculated a better continuous approximation to a discrete process by finding the leading-order correction to the continuum limit. This correction appears to be very accurate within the central region. We have also demonstrated that the exact limiting distribution for the discrete process can be calculated given any initial position, and we then used that calculation to see that some information about the exact distribution is lost and cannot be recovered in the continuum limit. That is, even the most accurate continuous approximation cannot predict that the discrete process is periodic, and so it is not a good approximation at any one particular time.

References

[1] S. Karlin and H. M. Taylor, A First Course in Stochastic Processes, 2nd ed., Academic Press, New York, NY, 1997.
[2] H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications, 2nd ed., Springer-Verlag, Berlin, 1996.
Figure 2: Error with the first- and second-order terms included (error versus lattice position x).
More informationFIXED POINT ITERATIONS
FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in
More informationStatistics 150: Spring 2007
Statistics 150: Spring 2007 April 23, 2008 0-1 1 Limiting Probabilities If the discrete-time Markov chain with transition probabilities p ij is irreducible and positive recurrent; then the limiting probabilities
More informationarxiv: v7 [quant-ph] 22 Aug 2017
Quantum Mechanics with a non-zero quantum correlation time Jean-Philippe Bouchaud 1 1 Capital Fund Management, rue de l Université, 75007 Paris, France. (Dated: October 8, 018) arxiv:170.00771v7 [quant-ph]
More informationMATH 425, HOMEWORK 3 SOLUTIONS
MATH 425, HOMEWORK 3 SOLUTIONS Exercise. (The differentiation property of the heat equation In this exercise, we will use the fact that the derivative of a solution to the heat equation again solves the
More informationSolutions Preliminary Examination in Numerical Analysis January, 2017
Solutions Preliminary Examination in Numerical Analysis January, 07 Root Finding The roots are -,0, a) First consider x 0 > Let x n+ = + ε and x n = + δ with δ > 0 The iteration gives 0 < ε δ < 3, which
More informationCalculus for the Life Sciences II Assignment 6 solutions. f(x, y) = 3π 3 cos 2x + 2 sin 3y
Calculus for the Life Sciences II Assignment 6 solutions Find the tangent plane to the graph of the function at the point (0, π f(x, y = 3π 3 cos 2x + 2 sin 3y Solution: The tangent plane of f at a point
More informationRates of Convergence to Self-Similar Solutions of Burgers Equation
Rates of Convergence to Self-Similar Solutions of Burgers Equation by Joel Miller Andrew Bernoff, Advisor Advisor: Committee Member: May 2 Department of Mathematics Abstract Rates of Convergence to Self-Similar
More informationPopulation Genetics: a tutorial
: a tutorial Institute for Science and Technology Austria ThRaSh 2014 provides the basic mathematical foundation of evolutionary theory allows a better understanding of experiments allows the development
More informationLIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE
International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION
More informationOn asymptotic behavior of a finite Markov chain
1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong
More informationLab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018
Lab 8: Measuring Graph Centrality - PageRank Monday, November 5 CompSci 531, Fall 2018 Outline Measuring Graph Centrality: Motivation Random Walks, Markov Chains, and Stationarity Distributions Google
More informationMarkov Chains and Stochastic Sampling
Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,
More informationFinite-Horizon Statistics for Markov chains
Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update
More informationLECTURE 10: REVIEW OF POWER SERIES. 1. Motivation
LECTURE 10: REVIEW OF POWER SERIES By definition, a power series centered at x 0 is a series of the form where a 0, a 1,... and x 0 are constants. For convenience, we shall mostly be concerned with the
More informationVIII.B Equilibrium Dynamics of a Field
VIII.B Equilibrium Dynamics of a Field The next step is to generalize the Langevin formalism to a collection of degrees of freedom, most conveniently described by a continuous field. Let us consider the
More informationFinal Exam May 4, 2016
1 Math 425 / AMCS 525 Dr. DeTurck Final Exam May 4, 2016 You may use your book and notes on this exam. Show your work in the exam book. Work only the problems that correspond to the section that you prepared.
More informationLocal vs. Nonlocal Diffusions A Tale of Two Laplacians
Local vs. Nonlocal Diffusions A Tale of Two Laplacians Jinqiao Duan Dept of Applied Mathematics Illinois Institute of Technology Chicago duan@iit.edu Outline 1 Einstein & Wiener: The Local diffusion 2
More informationMath 5588 Final Exam Solutions
Math 5588 Final Exam Solutions Prof. Jeff Calder May 9, 2017 1. Find the function u : [0, 1] R that minimizes I(u) = subject to u(0) = 0 and u(1) = 1. 1 0 e u(x) u (x) + u (x) 2 dx, Solution. Since the
More informationNotes on Special Functions
Spring 25 1 Notes on Special Functions Francis J. Narcowich Department of Mathematics Texas A&M University College Station, TX 77843-3368 Introduction These notes are for our classes on special functions.
More informationSystems Driven by Alpha-Stable Noises
Engineering Mechanics:A Force for the 21 st Century Proceedings of the 12 th Engineering Mechanics Conference La Jolla, California, May 17-20, 1998 H. Murakami and J. E. Luco (Editors) @ASCE, Reston, VA,
More informationGaussian, Markov and stationary processes
Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November
More informationON THE MOMENTS OF ITERATED TAIL
ON THE MOMENTS OF ITERATED TAIL RADU PĂLTĂNEA and GHEORGHIŢĂ ZBĂGANU The classical distribution in ruins theory has the property that the sequence of the first moment of the iterated tails is convergent
More information5 Applying the Fokker-Planck equation
5 Applying the Fokker-Planck equation We begin with one-dimensional examples, keeping g = constant. Recall: the FPE for the Langevin equation with η(t 1 )η(t ) = κδ(t 1 t ) is = f(x) + g(x)η(t) t = x [f(x)p
More information6.842 Randomness and Computation March 3, Lecture 8
6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n
More informationHandbook of Stochastic Methods
C. W. Gardiner Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences Third Edition With 30 Figures Springer Contents 1. A Historical Introduction 1 1.1 Motivation I 1.2 Some Historical
More information1. Introductory Examples
1. Introductory Examples We introduce the concept of the deterministic and stochastic simulation methods. Two problems are provided to explain the methods: the percolation problem, providing an example
More informationDifferentiable Functions
Differentiable Functions Let S R n be open and let f : R n R. We recall that, for x o = (x o 1, x o,, x o n S the partial derivative of f at the point x o with respect to the component x j is defined as
More informationMidterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015
Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic
More informationAPERITIFS. Chapter Diffusion
Chapter 1 APERITIFS Broadly speaking, non-equilibrium statistical physics describes the time-dependent evolution of many-particle systems. The individual particles are elemental interacting entities which,
More informationINTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING
INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationSolutions to Problem Set 5
UC Berkeley, CS 74: Combinatorics and Discrete Probability (Fall 00 Solutions to Problem Set (MU 60 A family of subsets F of {,,, n} is called an antichain if there is no pair of sets A and B in F satisfying
More informationLecture 3: From Random Walks to Continuum Diffusion
Lecture 3: From Random Walks to Continuum Diffusion Martin Z. Bazant Department of Mathematics, MIT February 3, 6 Overview In the previous lecture (by Prof. Yip), we discussed how individual particles
More information12. Perturbed Matrices
MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,
More informationLattices and Hermite normal form
Integer Points in Polyhedra Lattices and Hermite normal form Gennady Shmonin February 17, 2009 1 Lattices Let B = { } b 1,b 2,...,b k be a set of linearly independent vectors in n-dimensional Euclidean
More informationA Simple Solution for the M/D/c Waiting Time Distribution
A Simple Solution for the M/D/c Waiting Time Distribution G.J.Franx, Universiteit van Amsterdam November 6, 998 Abstract A surprisingly simple and explicit expression for the waiting time distribution
More informationMIT Final Exam Solutions, Spring 2017
MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationQuantum Mechanics: Vibration and Rotation of Molecules
Quantum Mechanics: Vibration and Rotation of Molecules 8th April 2008 I. 1-Dimensional Classical Harmonic Oscillator The classical picture for motion under a harmonic potential (mass attached to spring
More informationSeparation of Variables in Linear PDE: One-Dimensional Problems
Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,
More informationReturn probability on a lattice
Return probability on a lattice Chris H. Rycroft October 24, 2006 To begin, we consider a basic example of a discrete first passage process. Consider an unbiased Bernoulli walk on the integers starting
More informationBulk scaling limits, open questions
Bulk scaling limits, open questions Based on: Continuum limits of random matrices and the Brownian carousel B. Valkó, B. Virág. Inventiones (2009). Eigenvalue statistics for CMV matrices: from Poisson
More informationMagnetic waves in a two-component model of galactic dynamo: metastability and stochastic generation
Center for Turbulence Research Annual Research Briefs 006 363 Magnetic waves in a two-component model of galactic dynamo: metastability and stochastic generation By S. Fedotov AND S. Abarzhi 1. Motivation
More information