Introduction to Markov Processes


Connexions module m44014
Zdzisław (Gustav) Meglicki, Jr
Office of the VP for Information Technology, Indiana University
RCS: Section-2.tex,v /12/21 18:03:08 gustav Exp
Copyright © 2012 by Zdzisław Meglicki
December 21, 2012
License: Creative Commons Attribution License (CC-BY 3.0), granted to Connexions by Zdzisław Meglicki, Jr

Abstract

We introduce the concept of stochastic and Markov processes and the Chapman-Kolmogorov equations that define the latter. We show how Markov processes can be described in terms of the Markov propagator density function and the related propagator moment functions. We introduce the Kramers-Moyal equations and use them to discuss the evolution of the moments. We introduce two simple example processes that illustrate how Markov processes can be defined and characterized in practical terms. Finally, we introduce homogeneous Markov processes and show how the apparatus developed so far simplifies for this class. This module does not discuss more specific Markov processes: continuous, jump, birth-death, etc.

Contents

1 Preliminaries
2 Stochastic and Markov Processes
3 The Chapman-Kolmogorov Equation
4 Moments of the Markov State Density Function
5 The Markov Propagator
6 The Kramers-Moyal Equations
7 Evolution of the Moments
7.1 Mean, Variance and Covariance
8 Homogeneous Markov Processes

1 Preliminaries

1. Probability Distributions, Connexions module m.

2 Stochastic and Markov Processes

Stochastic processes. A stochastic process is a process that evolves probabilistically through various states, attained at various times, from a well defined initial state $x_0$ at time $t_0$. The probability density function of the process depends on the states and the times at which they are reached; for example,

  $P_{n|1}\left((x_n,t_n),(x_{n-1},t_{n-1}),\dots,(x_1,t_1)\mid(x_0,t_0)\right)$   (1)

is the probability that the process will reach $x_1$ at time $t_1$, then progress to $x_2$ at $t_2$, then go through all the subsequent configurations listed, to reach $x_n$ at time $t_n$, given that it started from $x_0$ at time $t_0$. Whether the process evolves between the points continuously or in jumps, we don't enquire (or care) at this stage. But we will develop the means to specify this, and it will let us say a lot about such processes, which is quite surprising given how general they seem at first sight.

A stochastic process can be further characterized by other conditional probability densities, such as

  $P_{n-1|2}\left((x_n,t_n),(x_{n-1},t_{n-1}),\dots,(x_2,t_2)\mid(x_1,t_1),(x_0,t_0)\right)$,
  $P_{n-2|3}\left((x_n,t_n),(x_{n-1},t_{n-1}),\dots,(x_3,t_3)\mid(x_2,t_2),(x_1,t_1),(x_0,t_0)\right)$,   (2)

and so on. What is to the right of the bar are the spacetime points at which the system has been. What is to the left are the spacetime points which the system may visit, and it is the probability of the system doing so that the function describes.

Let us consider the function

  $P_{1|j}\left((x_j,t_j)\mid(x_{j-1},t_{j-1}),\dots,(x_1,t_1),(x_0,t_0)\right)$.   (3)

Markov processes. A Markov process does not remember its history. Equation (3) states that the probability of the system reaching $x_j$ at time $t_j$ is a function of where the system has been so far, that is, it depends on the system's entire history. We say that a stochastic process is Markovian if this is not the case, that is, if the probability of the system reaching $x_j$ at $t_j$ depends only on where it has been at $t_{j-1}$, but not on the previous states. A Markov process is a process that remembers only the last state reached. We express this symbolically as follows:

  $P_{1|j}\left((x_j,t_j)\mid(x_{j-1},t_{j-1}),\dots,(x_1,t_1),(x_0,t_0)\right) = P\left((x_j,t_j)\mid(x_{j-1},t_{j-1})\right)$.   (4)

This assumption simplifies the description of the corresponding stochastic processes to the point of making them tractable, which is why we are so interested in them.
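The defining property (4) translates directly into how one simulates such a process: to advance the state, only the last state reached is needed. The Python sketch below is a minimal illustration of this; the Gaussian one-step sampler is a hypothetical choice made purely for demonstration, not something prescribed by this module.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_path(x0, times, step_sampler):
    """Generate one realization of a Markov process on the given time grid.

    Because the process is Markovian, each new state is sampled from a
    density that depends only on the last state reached (and the times),
    never on the earlier history; this is equation (4) in action.
    """
    x = np.empty(len(times))
    x[0] = x0
    for i in range(1, len(times)):
        x[i] = step_sampler(x[i - 1], times[i - 1], times[i])
    return x

def gaussian_step(x_prev, t_prev, t_next):
    """A hypothetical one-step sampler: a Gaussian step whose width
    grows with the elapsed time. This choice is only for illustration."""
    return rng.normal(loc=x_prev, scale=np.sqrt(t_next - t_prev))

times = np.linspace(0.0, 1.0, 101)
path = markov_path(x0=0.0, times=times, step_sampler=gaussian_step)
print(path[:5])
```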

Let us consider $P_{2|1}\left((x_2,t_2),(x_1,t_1)\mid(x_0,t_0)\right)$. Clearly, this is equal to the probability of the system reaching $x_1$ at $t_1$ times the probability of reaching $x_2$ at $t_2$ given that it has been at $x_1$ at $t_1$ and at $x_0$ at $t_0$, that is,

  $P_{2|1}\left((x_2,t_2),(x_1,t_1)\mid(x_0,t_0)\right) = P\left((x_1,t_1)\mid(x_0,t_0)\right)\, P_{1|2}\left((x_2,t_2)\mid(x_1,t_1),(x_0,t_0)\right)$.   (5)

But if this is to be a Markov process, then

  $P_{1|2}\left((x_2,t_2)\mid(x_1,t_1),(x_0,t_0)\right) = P\left((x_2,t_2)\mid(x_1,t_1)\right)$.   (6)

Consequently,

  $P_{2|1}\left((x_2,t_2),(x_1,t_1)\mid(x_0,t_0)\right) = P\left((x_2,t_2)\mid(x_1,t_1)\right)\, P\left((x_1,t_1)\mid(x_0,t_0)\right)$.   (7)

This extends naturally to an arbitrary number of transitions, so that

  $P_{n|1}\left((x_n,t_n),\dots,(x_1,t_1)\mid(x_0,t_0)\right) = \prod_{i=1}^{n} P\left((x_i,t_i)\mid(x_{i-1},t_{i-1})\right)$.   (8)

The magic function, $P\left((x_i,t_i)\mid(x_{i-1},t_{i-1})\right)$, is called the Markov state density function.

Similarity to quantum mechanics. It is difficult not to notice here a similarity to quantum mechanical processes. If $x_i$ were a quantum particle position attained at time $t_i$, then the probability amplitude (a complex number in general) of the particle progressing from $x_0$ at $t_0$ through $x_1$ at $t_1$, $x_2$ at $t_2$, and so on, until reaching $x_n$ at $t_n$ along this specific path would be calculated similarly, as

  $\prod_{i=1}^{n} \left\langle (x_i,t_i)\mid(x_{i-1},t_{i-1})\right\rangle$.   (9)

The full probability amplitude of the particle starting from $x_0$ at $t_0$ and reaching $x_n$ at $t_n$ would then be a sum of such products evaluated for all possible paths that the particle could take:

  $\left\langle (x_n,t_n)\mid(x_0,t_0)\right\rangle = \sum_{\text{paths}} \left( \prod_{i=1}^{n} \left\langle (x_i,t_i)\mid(x_{i-1},t_{i-1})\right\rangle \right)$.   (10)

The probability itself would be evaluated by taking the square of the absolute value of $\left\langle (x_n,t_n)\mid(x_0,t_0)\right\rangle$. But for a single, specific path (8) would apply, because the square of the amplitude in this case would be a product of squares of the single-step amplitudes, as listed by (9). We can therefore think of the progression of a quantum particle along a certain specific path as a typical Markovian process: a single Feynman path is a Markov process. We will also find, eventually, that this is how Brownian motion is described, another classic example of a Markov process. There is an intriguing bridge between Brownian motion and quantum mechanics, pointed to in 1966 by Edward Nelson, at the time a professor of Mathematics at Princeton University; a topic we intend to explore in further modules.

3 The Chapman-Kolmogorov Equation

Required properties of the Markov state density function. The Markov state density function $P\left((x_2,t_2)\mid(x_1,t_1)\right)$ must satisfy certain obvious properties, namely

  $P\left((x_2,t_2)\mid(x_1,t_1)\right) \ge 0$   (11)

and

  $\int_{\Omega(x_2)} P\left((x_2,t_2)\mid(x_1,t_1)\right)\, \mathrm{d}x_2 = 1$,   (12)

where $\Omega(x_2)$ is the domain of $x_2$. These derive from $P$ being a probability density. The fact that $P$ relates to a Markov process is reflected in the following property:

  $P_{1|1}\left((x_3,t_3)\mid(x_1,t_1)\right) = \int_{\Omega(x_2)} P_{2|1}\left((x_3,t_3),(x_2,t_2)\mid(x_1,t_1)\right)\, \mathrm{d}x_2 = \int_{\Omega(x_2)} P\left((x_3,t_3)\mid(x_2,t_2)\right)\, P\left((x_2,t_2)\mid(x_1,t_1)\right)\, \mathrm{d}x_2$.   (13)

This is the celebrated Chapman-Kolmogorov equation. We are going to rewrite the equation in two ways by making the following substitutions.

forward:

  $x_1 \to x_0, \quad t_1 \to t_0, \quad x_2 \to x - x', \quad t_2 \to t, \quad x_3 \to x, \quad t_3 \to t + \mathrm{d}t$,   (14)

which yields the forward Kolmogorov equation:

  $P\left((x,t+\mathrm{d}t)\mid(x_0,t_0)\right) = \int P\left((x,t+\mathrm{d}t)\mid(x-x',t)\right)\, P\left((x-x',t)\mid(x_0,t_0)\right)\, \mathrm{d}x'$.   (15)
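The Chapman-Kolmogorov equation (13) is easy to verify numerically for a concrete kernel. The sketch below assumes a Gaussian state density function, a Brownian-motion-like choice made only for illustration, and checks that integrating over an intermediate state reproduces the one-step density.

```python
import numpy as np

def gauss(x, mean, var):
    """Gaussian probability density with the given mean and variance."""
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def P(x2, t2, x1, t1):
    """An assumed Markov state density function: N(x1, t2 - t1).

    This Brownian-motion-like kernel is used only as a concrete test case."""
    return gauss(x2, x1, t2 - t1)

t1, t2, t3 = 0.0, 0.4, 1.0
x1, x3 = 0.0, 0.7

# Right side of (13): integrate over the intermediate state x2.
x2 = np.linspace(-10.0, 10.0, 40001)
dx2 = x2[1] - x2[0]
rhs = np.sum(P(x3, t3, x2, t2) * P(x2, t2, x1, t1)) * dx2

# Left side of (13): the one-step density from (x1, t1) to (x3, t3).
lhs = P(x3, t3, x1, t1)

print(lhs, rhs)   # the two agree to quadrature accuracy
```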

backward:

  $x_1 \to x_0, \quad t_1 \to t_0, \quad x_2 \to x_0 + x', \quad t_2 \to t_0 + \mathrm{d}t_0, \quad x_3 \to x, \quad t_3 \to t$,   (16)

which yields the backward Kolmogorov equation:

  $P\left((x,t)\mid(x_0,t_0)\right) = \int P\left((x,t)\mid(x_0+x',\, t_0+\mathrm{d}t_0)\right)\, P\left((x_0+x',\, t_0+\mathrm{d}t_0)\mid(x_0,t_0)\right)\, \mathrm{d}x'$.   (17)

Of course, we always assume that $t_1 < t_2 < t_3$, and that the same holds for the substitutions. Nothing stops us from inserting more intermediate points into the progression of the observed system through the Kolmogorov steps, which leads to the compounded Chapman-Kolmogorov equation:

  $P\left((x_n,t_n)\mid(x_0,t_0)\right) = \int_{\Omega(x_{n-1})} \dots \int_{\Omega(x_1)} \prod_{i=1}^{n} P\left((x_i,t_i)\mid(x_{i-1},t_{i-1})\right)\, \mathrm{d}x_1 \dots \mathrm{d}x_{n-1}$.   (18)

4 Moments of the Markov State Density Function

The moments of the Markov state density function are computed as for any other probability density, namely

  $\langle x^n \rangle = \int_{\Omega(x)} x^n\, P\left((x,t)\mid(x_0,t_0)\right)\, \mathrm{d}x$,   (19)

and similarly we do with the variance and the standard deviation:

  $\mathrm{var}(x) = \sigma^2(x) = \left\langle \left(x - \langle x \rangle\right)^2 \right\rangle$.   (20)

We can also compute the first term in the covariance of the last two positions in the Markov chain:

  $\langle x_2 x_1 \rangle = \int_{\Omega(x_2)} \int_{\Omega(x_1)} x_1 x_2\, P\left((x_2,t_2)\mid(x_1,t_1)\right)\, P\left((x_1,t_1)\mid(x_0,t_0)\right)\, \mathrm{d}x_1\, \mathrm{d}x_2$.   (21)

Before we go any further, then, let us first brush up on some properties of the moments, variances, covariances and standard deviations:

  $\left\langle x - \langle x \rangle \right\rangle = 0$,
  $\mathrm{var}(x) = \sigma^2(x) = \left\langle \left(x - \langle x \rangle\right)^2 \right\rangle = \langle x^2 \rangle - \langle x \rangle^2$,
  $\langle x^2 \rangle \ge \langle x \rangle^2$,   (22)

where the equality holds for a sure variable only. Some elementary properties of covariances and correlations are:

  $\mathrm{cov}(x,y) = \left\langle \left(x - \langle x \rangle\right)\left(y - \langle y \rangle\right) \right\rangle = \langle xy \rangle - \langle x \rangle \langle y \rangle$,
  $\sigma(x)\,\sigma(y) \ge \left|\mathrm{cov}(x,y)\right|$,
  $\mathrm{corr}(x,y) = \dfrac{\mathrm{cov}(x,y)}{\sigma(x)\,\sigma(y)}$,
  $-1 \le \mathrm{corr}(x,y) \le 1$.   (23)

Also, we observe that statistically independent variables $x$ and $y$, that is, variables such that $P_{xy}(x,y) = P_x(x)\, P_y(y)$, are uncorrelated, that is, $\mathrm{cov}(x,y) = 0$. But uncorrelated variables do not have to be statistically independent.
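The last remark deserves a worked example. The sketch below, a standard illustration under the assumption that $x$ is uniform on $[-1,1]$ and $y = x^2$, exhibits variables that are perfectly dependent and yet uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)

# x is uniform on [-1, 1]; y = x**2 is a deterministic function of x,
# so x and y are certainly not statistically independent ...
x = rng.uniform(-1.0, 1.0, 1_000_000)
y = x ** 2

# ... and yet cov(x, y) = <xy> - <x><y> vanishes, because by symmetry
# <x y> = <x**3> = 0 and <x> = 0.
cov = np.mean(x * y) - np.mean(x) * np.mean(y)
print(cov)   # ~0, up to Monte Carlo noise
```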

5 The Markov Propagator

Markov propagator density function. The observed similarity between Markov processes and quantum mechanics should have prepared us for what's coming now. The Markov propagator density function is the Markov state density function that yields the probability density at $t + \mathrm{d}t$, where $\mathrm{d}t$ is an infinitesimal increment, given that the system has been at $x$ at time $t$, that is,

  $P\left((x + \Delta x,\, t + \mathrm{d}t)\mid(x,t)\right)$,   (24)

where $\Delta x$, unlike $\mathrm{d}t$, is not infinitesimal: for example, the state may have jumped in the $\mathrm{d}t$ time to somewhere quite far away from $x$. We consider it a function of $\Delta x$, parametrized by the initial state $(x,t)$ and labelled by $\mathrm{d}t$, and we employ the impressive-looking capital pi, $\Pi$, to denote it:

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) = P\left((x + \Delta x,\, t + \mathrm{d}t)\mid(x,t)\right)$.   (25)

We can think of it as a notational shortcut for (24). The notation here reflects that of quantum mechanics. We can read the construct as a device that implements the infinitesimal time advance $\mathrm{d}t$, applied to the initial state $(x,t)$. After the application of the device, we ask about the probability of the system drifting from $x$ by $\Delta x$.

Required properties of the Markov propagator density function. Being itself a probability density, $\Pi$ must satisfy

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) \ge 0, \qquad \int \Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\, \mathrm{d}(\Delta x) = 1$.   (26)

For $\mathrm{d}t = 0$ we must have

  $\Pi_{0}\left(\Delta x \mid (x,t)\right) = \delta(\Delta x)$.   (27)

In this case, the time advance device does not advance the time at all, so the system in question must remain at $x$. Being a Markov state density function, the propagator density function must also satisfy the Chapman-Kolmogorov equation, which in this case is usually written in the following form:

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) = \int_{\Omega(\Delta x')} \Pi_{(1-\alpha)\mathrm{d}t}\left(\Delta x - \Delta x' \mid (x + \Delta x',\, t + \alpha\,\mathrm{d}t)\right)\, \Pi_{\alpha\,\mathrm{d}t}\left(\Delta x' \mid (x,t)\right)\, \mathrm{d}(\Delta x')$,   (28)

where $\alpha \in\; ]0,1[$. This follows directly from the evaluation of $P\left((x + \Delta x,\, t + \mathrm{d}t)\mid(x,t)\right)$ through an intermediate point $x + \Delta x'$ at $t + \alpha\,\mathrm{d}t$.

The first step in (28) advances the state from its origin at $(x,t)$ by the infinitesimal time machine of $\alpha\,\mathrm{d}t$, and the state deflects by $\Delta x'$. The second step then commences with the state at $x + \Delta x'$ and the time advanced to $t + \alpha\,\mathrm{d}t$. We apply the time machine again; it now advances the state by the remainder of $\mathrm{d}t$, that is, by $(1-\alpha)\,\mathrm{d}t$, and the state ends up deflected by $\Delta x$ from the original $x$. But $x$ is not the starting point of this second propagator. The starting point is $x + \Delta x'$, so the end point of $x + \Delta x$ must be recomputed in reference to $x + \Delta x'$, which gives $x + \Delta x - (x + \Delta x') = \Delta x - \Delta x'$.

The Markov propagator. Since $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)$ is a probability density of $\Delta x$, the latter is its random variable. And it is this random variable, here denoted by the ordered pair that associates the probability density with it,

  $\left(\Delta x,\; \Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\right)$,

that we call the Markov propagator. (25) reminds us that we may think of it as $\Delta x = x(t+\mathrm{d}t) - x(t)$, which makes it a sort of a differential. But it is a random variable differential that is sensitive to the changes in the probability density across the $\mathrm{d}t$, not just to $x(t)$. So we should really write this more accurately as

  $\left(\Delta x,\; P_{\Delta x}(\Delta x)\right) = \left(x,\; P_x(x,\, t+\mathrm{d}t)\right) - \left(x,\; P_x(x,\, t)\right)$.   (29)

The Random Variable Transformation theorem. The Random Variable Transformation theorem provides us with a formula for the probability density of variables that result from some functional operation on other random variables. The formula is

  $P_y(y_1,\dots,y_m) = \int_{\Omega(x_1)} \dots \int_{\Omega(x_n)} P_x(x_1,\dots,x_n)\, \prod_{i=1}^{m} \delta\left(y_i - f_i(x_1,\dots,x_n)\right)\, \mathrm{d}x_1 \dots \mathrm{d}x_n$,   (30)

where the $f_i(x_1,\dots,x_n)$ are the functions that transform the $x_i$ into the $y_j$ (we do not insist on $m = n$), $P_x$ is the combined multivariate probability density of $x_1,\dots,x_n$, and $P_y$ is the probability density of the random variables $y_1,\dots,y_m$ produced by the operations $f_i$. $\delta$ is the Dirac delta function.

The Chapman-Kolmogorov equation for the propagator density function, (28), can be rewritten to reflect the Random Variable Transformation formula as follows:

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) = \int_{\Omega(\Delta x_1)} \int_{\Omega(\Delta x_2)} \Pi_{(1-\alpha)\mathrm{d}t}\left(\Delta x_2 \mid (x + \Delta x_1,\, t + \alpha\,\mathrm{d}t)\right)\, \Pi_{\alpha\,\mathrm{d}t}\left(\Delta x_1 \mid (x,t)\right)\, \delta\left(\Delta x - \Delta x_1 - \Delta x_2\right)\, \mathrm{d}(\Delta x_1)\, \mathrm{d}(\Delta x_2)$,   (31)

which demonstrates that the random variable operation being performed here is $\Delta x_1 + \Delta x_2$. It is also in this sense that we should understand $\Delta x = x(t+\mathrm{d}t) - x(t)$.
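For the operation $y = f(x_1, x_2) = x_1 + x_2$, integrating out the Dirac delta in (30) reduces the Random Variable Transformation formula to a convolution of the two densities. The sketch below checks this against a Monte Carlo histogram; the two exponential densities are an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# RVT theorem (30) for y = f(x1, x2) = x1 + x2: integrating out the
# Dirac delta leaves a convolution, P_y(y) = int P_1(u) P_2(y - u) du.
u = np.linspace(0.0, 30.0, 3001)
du = u[1] - u[0]
p1 = 1.0 * np.exp(-1.0 * u)        # density of x1 ~ Exp(rate 1)
p2 = 0.5 * np.exp(-0.5 * u)        # density of x2 ~ Exp(rate 1/2)

p_y = np.convolve(p1, p2)[: len(u)] * du   # P_y on the same grid

# Monte Carlo check: histogram of sums of independent samples.
samples = rng.exponential(1.0, 1_000_000) + rng.exponential(2.0, 1_000_000)
hist, _ = np.histogram(samples, bins=u, density=True)

i = 100   # compare the densities near y = u[i] = 1.0
print(p_y[i], hist[i])
```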

Propagator moment functions. $\Pi$, representing in some way a differential, would correspond to something like a derivative if we were to divide it by $\mathrm{d}t$:

  $\dfrac{\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)}{\mathrm{d}t}$.   (32)

This is no longer a probability density, because unlike (26) it does not integrate to 1 over the domain of $\Delta x$. But it is still everywhere positive, and we may associate moments with it. Assuming $x$ and $\Delta x$ to be scalars, we define

  $\Pi_n(x,t) = \lim_{\mathrm{d}t \to 0} \dfrac{1}{\mathrm{d}t} \int_{\Omega(\Delta x)} (\Delta x)^n\, \Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\, \mathrm{d}(\Delta x)$.   (33)

$\Pi_n(x,t)$ is called the n-th propagator moment function of the Markov process described by (24). The above definition does not imply that

  $\Pi_n(x,t)\, \mathrm{d}t = \int (\Delta x)^n\, \Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\, \mathrm{d}(\Delta x)$.   (34)

What it implies is that

  $\Pi_n(x,t)\, \mathrm{d}t = \int (\Delta x)^n\, \Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\, \mathrm{d}(\Delta x) + O\left((\mathrm{d}t)^2\right)$.   (35)

Upon division of both sides by $\mathrm{d}t$, and upon taking the limit $\mathrm{d}t \to 0$, the small term of the second and higher orders in $\mathrm{d}t$, $O\left((\mathrm{d}t)^2\right)$, disappears.

Time derivative of the Markov process. We've been careful to refer to $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)/\mathrm{d}t$ as something like a derivative. This is because a well defined time derivative for a real, genuinely stochastic Markov process does not exist. A trajectory of Brownian motion, for example, is clearly not differentiable. This is just one such example. We can formalize this as follows. We begin with

  $x(t+\mathrm{d}t) - x(t) = \Delta x \approx \langle \Delta x \rangle \pm \sigma(\Delta x)$.   (36)

Now we switch to 1-D and make use of the propagator moment functions. Clearly, from (35),

  $\langle \Delta x \rangle = \Pi_1(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right)$,   (37)

and so

  $\sigma^2(\Delta x) = \langle \Delta x^2 \rangle - \langle \Delta x \rangle^2 = \Pi_2(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right) - \left( \Pi_1(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right) \right)^2$.   (38)

Therefore

  $\sigma(\Delta x) = \sqrt{\Pi_2(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right) - \left( \Pi_1(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right) \right)^2} = \sqrt{\Pi_2(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right)}$.   (39)

From this we get

  $\dfrac{x(t+\mathrm{d}t) - x(t)}{\mathrm{d}t} \approx \Pi_1(x,t) \pm \sqrt{\dfrac{\Pi_2(x,t)}{\mathrm{d}t}} \pm O(\mathrm{d}t)$.   (40)
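Definition (33) also suggests a numerical recipe: average powers of the increment over many realizations, divide by $\mathrm{d}t$, and let $\mathrm{d}t$ shrink. The sketch below does this for a hypothetical test process whose increments are generated with $\Pi_1(x) = -\lambda x$ and $\Pi_2(x) = \gamma$ (the same form as the second example later in this module); the printed estimates approach the exact values as $\mathrm{d}t$ decreases.

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate the propagator moment functions (33) at a fixed state x by
# averaging powers of the increment over many realizations and dividing
# by dt. The test increments follow a hypothetical rule with
# Pi_1(x) = -lam * x and Pi_2(x) = gam, chosen only for illustration.
lam, gam = 0.8, 0.3
x, n_samples = 1.5, 2_000_000

for dt in (1e-1, 1e-2, 1e-3):
    dx = -lam * x * dt + np.sqrt(gam * dt) * rng.normal(size=n_samples)
    pi1 = np.mean(dx) / dt        # -> Pi_1(x) = -1.2 as dt -> 0
    pi2 = np.mean(dx ** 2) / dt   # -> Pi_2(x) = 0.3 plus an O(dt) term
    print(dt, pi1, pi2)
```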

We see now that the second term, the one that contains $\Pi_2$, explodes as $\mathrm{d}t \to 0$, unless $\Pi_2$ is zero as well. But $\Pi_2$, which is related to the variance, is zero only if $\Delta x$ is a sure variable, therefore not representing a genuinely stochastic Markov process.

The compounded Chapman-Kolmogorov equation. The compounded Chapman-Kolmogorov equation (18) can be rewritten in terms of propagator densities, assuming that the interval $[t_0, t]$ is subdivided into a large number $n$ of infinitesimal segments. Then (with $x_n = x$)

  $P\left((x,t)\mid(x_0,t_0)\right) = \int_{\Omega(x_1)} \dots \int_{\Omega(x_{n-1})} \prod_{i=1}^{n} \Pi_{\mathrm{d}t}\left(x_i - x_{i-1} \mid (x_{i-1}, t_{i-1})\right)\, \mathrm{d}x_1 \dots \mathrm{d}x_{n-1}$.   (41)

In principle, this equation lets us reconstruct the Markov process from the knowledge of the propagator density function. It can be thought of as a Markovian equivalent of the Schrödinger equation, with the propagator density playing a role similar to that of the Hamiltonian. Equations (26), (27) and (28), supplemented by a small number of additional requirements, lead to tractable expressions for the propagator density functions, again in similarity to known expressions for Hamiltonians. The procedure then makes the resulting Markov processes tractable analytically and numerically. It is most surprising how much can be inferred about them starting from simple assumptions.

6 The Kramers-Moyal Equations

The Kramers-Moyal equations are partial differential equations for the Markov state density function, expressed with the help of the propagator moment functions defined by (33). They follow directly from the forward (15) and backward (17) Kolmogorov equations.

forward. Our starting point is the 1-dimensional forward Kolmogorov equation:

  $P\left((x,t+\mathrm{d}t)\mid(x_0,t_0)\right) = \int P\left((x,t+\mathrm{d}t)\mid(x-x',t)\right)\, P\left((x-x',t)\mid(x_0,t_0)\right)\, \mathrm{d}x'$.   (42)

We introduce an auxiliary function

  $f(x) = P\left((x+x',\, t+\mathrm{d}t)\mid(x,t)\right)\, P\left((x,t)\mid(x_0,t_0)\right)$   (43)

and observe that the expression under the integral in the forward Kolmogorov equation (42) is $f(x-x')$, which can be expanded in a Taylor series around $x$, if $f$ is analytic:

  $f(x-x') = f(x) + \sum_{n=1}^{\infty} \dfrac{(-x')^n}{n!}\, \dfrac{\partial^n f(x)}{\partial x^n}$.   (44)

We substitute this into (42) with the following effect:

  $P\left((x,t+\mathrm{d}t)\mid(x_0,t_0)\right) = \int P\left((x+x',\, t+\mathrm{d}t)\mid(x,t)\right)\, P\left((x,t)\mid(x_0,t_0)\right)\, \mathrm{d}x' + \sum_{n=1}^{\infty} \dfrac{(-1)^n}{n!}\, \dfrac{\partial^n}{\partial x^n} \int (x')^n\, P\left((x+x',\, t+\mathrm{d}t)\mid(x,t)\right)\, P\left((x,t)\mid(x_0,t_0)\right)\, \mathrm{d}x'$.   (45)

Let's have a look at the first integral. $x'$ appears only in the first $P$ term. This, therefore, is a normalization integral, which evaluates to 1 times the second $P$ term. The integrals in the sum similarly depend on $x'$, which appears only in the first $P$ in the integrated function. The second $P$ is therefore a coefficient that can be put in front of the integral. We subtract $P\left((x,t)\mid(x_0,t_0)\right)$ from both sides and divide both sides by $\mathrm{d}t$, which yields

  $\dfrac{P\left((x,t+\mathrm{d}t)\mid(x_0,t_0)\right) - P\left((x,t)\mid(x_0,t_0)\right)}{\mathrm{d}t} = \sum_{n=1}^{\infty} \dfrac{(-1)^n}{n!}\, \dfrac{\partial^n}{\partial x^n} \left( P\left((x,t)\mid(x_0,t_0)\right)\, \dfrac{1}{\mathrm{d}t} \int (x')^n\, P\left((x+x',\, t+\mathrm{d}t)\mid(x,t)\right)\, \mathrm{d}x' \right)$.   (46)

Now we take the limit $\mathrm{d}t \to 0$. In this limit, since $P\left((x+x',\, t+\mathrm{d}t)\mid(x,t)\right) = \Pi_{\mathrm{d}t}\left(x' \mid (x,t)\right)$, the integral in the sum, upon its division by $\mathrm{d}t$, becomes

  $\lim_{\mathrm{d}t \to 0} \dfrac{1}{\mathrm{d}t} \int (x')^n\, \Pi_{\mathrm{d}t}\left(x' \mid (x,t)\right)\, \mathrm{d}x' = \Pi_n(x,t)$,   (47)

the definition we have already introduced in (33). This, finally, leads to the forward Kramers-Moyal equation:

  $\dfrac{\partial}{\partial t} P\left((x,t)\mid(x_0,t_0)\right) = \sum_{n=1}^{\infty} \dfrac{(-1)^n}{n!}\, \dfrac{\partial^n}{\partial x^n} \left( \Pi_n(x,t)\, P\left((x,t)\mid(x_0,t_0)\right) \right)$.   (48)

backward. Our starting point is the 1-dimensional backward Kolmogorov equation:

  $P\left((x,t)\mid(x_0,t_0)\right) = \int P\left((x,t)\mid(x_0+x',\, t_0+\mathrm{d}t_0)\right)\, P\left((x_0+x',\, t_0+\mathrm{d}t_0)\mid(x_0,t_0)\right)\, \mathrm{d}x'$.   (49)

We introduce an auxiliary function

  $f(x_0) = P\left((x,t)\mid(x_0,\, t_0+\mathrm{d}t_0)\right)$   (50)

and observe that the first $P$ under the integral in the backward Kolmogorov equation (49) is $f(x_0+x')$.

This can be expanded in a Taylor series around $x_0$, if $f$ is analytic:

  $f(x_0+x') = f(x_0) + \sum_{n=1}^{\infty} \dfrac{(x')^n}{n!}\, \dfrac{\partial^n f(x_0)}{\partial x_0^n}$.   (51)

We substitute this into (49) with the following effect:

  $P\left((x,t)\mid(x_0,t_0)\right) = P\left((x,t)\mid(x_0,\, t_0+\mathrm{d}t_0)\right) \int P\left((x_0+x',\, t_0+\mathrm{d}t_0)\mid(x_0,t_0)\right)\, \mathrm{d}x' + \sum_{n=1}^{\infty} \dfrac{1}{n!}\, \dfrac{\partial^n P\left((x,t)\mid(x_0,\, t_0+\mathrm{d}t_0)\right)}{\partial x_0^n} \int (x')^n\, P\left((x_0+x',\, t_0+\mathrm{d}t_0)\mid(x_0,t_0)\right)\, \mathrm{d}x'$.   (52)

We observe that the integral in the first addend is a normalization integral for $P$ and therefore equal to 1. This leaves $P\left((x,t)\mid(x_0,\, t_0+\mathrm{d}t_0)\right)$ alone, and we transfer it to the left side of the equation and divide both sides by $\mathrm{d}t_0$. In the limit of $\mathrm{d}t_0 \to 0$, this yields minus the derivative of $P$ with respect to $t_0$ on the left side. The right side of the equation is left with the sum only, and the integral turns into the moment integral of the Markov propagator density function, which, upon the division by $\mathrm{d}t_0$, we call $\Pi_n(x_0,t_0)$, as per (33). The $\mathrm{d}t_0$ used up in this division can no longer be used again to play with $t_0$ in the differentiated $P$. Instead,

  $P\left((x,t)\mid(x_0,\, t_0+\mathrm{d}t_0)\right) \xrightarrow{\;\mathrm{d}t_0 \to 0\;} P\left((x,t)\mid(x_0,t_0)\right)$.   (53)

In summary, we end up with the backward Kramers-Moyal equation:

  $-\dfrac{\partial}{\partial t_0} P\left((x,t)\mid(x_0,t_0)\right) = \sum_{n=1}^{\infty} \dfrac{1}{n!}\, \Pi_n(x_0,t_0)\, \dfrac{\partial^n}{\partial x_0^n} P\left((x,t)\mid(x_0,t_0)\right)$.   (54)

The thing to observe is that the $\Pi_n$ coefficients are differentiated together with $P$ in the forward Kramers-Moyal equation, but not in the backward one. There is also the $(-1)^n$ factor in the forward equation, but not in the backward one, and the sign in front of the time derivative is negative in the backward equation.

Finally, looking at both Kramers-Moyal equations, it is easier to understand why one is called forward and the other backward. This is not so clear when looking at the original Kolmogorov equations. In the forward Kramers-Moyal equation, the $(x_0,t_0)$ pair is a fixed parameter, and the differentiation is over $t$ and $x$ and proceeds forward in time. The initial condition for the equation is

  $P\left((x,\, t=t_0)\mid(x_0,t_0)\right) = \delta(x-x_0)$.   (55)

In the backward Kramers-Moyal equation, the $(x,t)$ pair is a fixed parameter, and the differentiation is over $t_0$ and $x_0$ and proceeds backward in time. The initial condition for the equation is

  $P\left((x,t)\mid(x_0,\, t_0=t)\right) = \delta(x-x_0)$.   (56)
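To see the forward equation (48) in action, suppose that only $\Pi_1 = v$ and $\Pi_2 = \gamma$ are nonzero, both constant (the first example process considered later in this module). The equation then truncates to $\partial P/\partial t = -v\, \partial P/\partial x + (\gamma/2)\, \partial^2 P/\partial x^2$. The sketch below integrates this truncated equation with a naive explicit finite-difference scheme from a narrow Gaussian standing in for the delta initial condition (55), and compares the result with the Gaussian that the moment equations of the next section predict. The discretization choices are illustrative assumptions, not recommendations.

```python
import numpy as np

# Forward Kramers-Moyal equation (48) truncated to Pi_1 = v, Pi_2 = gam
# (both constant):  dP/dt = -v dP/dx + (gam/2) d2P/dx2.
# Explicit finite differences; dt is kept well below dx**2 / gam for
# stability. The solution is ~0 at the grid edges, so periodic wrapping
# via np.roll is harmless here.
v, gam = 1.0, 0.5
x = np.linspace(-10.0, 10.0, 801)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2 / gam

# A narrow Gaussian stands in for the delta initial condition (55).
s0 = 0.01
P = np.exp(-x ** 2 / (2 * s0)) / np.sqrt(2 * np.pi * s0)

t, t_end = 0.0, 2.0
while t < t_end:
    dPdx = (np.roll(P, -1) - np.roll(P, 1)) / (2 * dx)
    d2Pdx2 = (np.roll(P, -1) - 2 * P + np.roll(P, 1)) / dx ** 2
    P = P + dt * (-v * dPdx + 0.5 * gam * d2Pdx2)
    t += dt

# Compare with the Gaussian of mean v*t and variance gam*t + s0,
# anticipating the solution (80) of the first example process.
exact = np.exp(-(x - v * t) ** 2 / (2 * (gam * t + s0))) \
        / np.sqrt(2 * np.pi * (gam * t + s0))
print(np.max(np.abs(P - exact)))   # small discretization error
```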

7 Evolution of the Moments

We're going to do it all in 1-D. Our starting point is

  $x(t+\mathrm{d}t) = x(t) + \Delta x$.   (57)

Hence

  $x^n(t+\mathrm{d}t) = \left(x(t) + \Delta x\right)^n = x^n(t) + \sum_{k=1}^{n} \binom{n}{k}\, x^{n-k}(t)\, \Delta x^k$.   (58)

Now we average both sides of the equation:

  $\left\langle x^n(t+\mathrm{d}t) \right\rangle = \left\langle x^n(t) \right\rangle + \sum_{k=1}^{n} \binom{n}{k} \left\langle x^{n-k}(t)\, \Delta x^k \right\rangle$.   (59)

The probability density of $x^n(t)$ is $P\left((x,t)\mid(x_0,t_0)\right)$ and the probability density of $\Delta x$ is $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)$. Therefore the combined probability density of $x^{n-k}(t)\, \Delta x^k$ is

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\, P\left((x,t)\mid(x_0,t_0)\right)$,   (60)

and the integration to produce $\left\langle x^j(t)\, \Delta x^k \right\rangle$ for some $j$ and $k$ must run over $x$ and $\Delta x$:

  $\left\langle x^j(t)\, \Delta x^k \right\rangle = \int_{\Omega(x)} \int_{\Omega(\Delta x)} x^j\, \Delta x^k\, \Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right)\, P\left((x,t)\mid(x_0,t_0)\right)\, \mathrm{d}(\Delta x)\, \mathrm{d}x = \int_{\Omega(x)} x^j \left( \Pi_k(x,t)\, \mathrm{d}t + O\left((\mathrm{d}t)^2\right) \right) P\left((x,t)\mid(x_0,t_0)\right)\, \mathrm{d}x = \left\langle x^j(t)\, \Pi_k(x(t),t) \right\rangle \mathrm{d}t + O\left((\mathrm{d}t)^2\right)$.   (61)

Time evolution of the moments. Let us go back to (59). We subtract $\left\langle x^n(t) \right\rangle$ from both sides, substitute (61) in place of $\left\langle x^{n-k}(t)\, \Delta x^k \right\rangle$, and divide both sides by $\mathrm{d}t$, which yields

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left\langle x^n(t) \right\rangle = \sum_{k=1}^{n} \binom{n}{k} \left\langle x^{n-k}(t)\, \Pi_k(x(t),t) \right\rangle$.   (62)

And this is our equation for the evolution of the moments of the Markov process $x(t)$ that does not use the $P$s or the $\Pi$s explicitly. But, of course, the propagator density is hidden inside the propagator moment functions $\Pi_n$. The initial condition for the equation is

  $\left\langle x^n(t_0) \right\rangle = x_0^n$.   (63)

7.1 Mean, Variance and Covariance

Time evolution of the mean. The equation for the evolution of the mean of the Markov process is trivial,

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left\langle x(t) \right\rangle = \left\langle \Pi_1(x(t),t) \right\rangle$,   (64)

and follows directly from (62).
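Equation (62) is easy to spot-check by Monte Carlo. The sketch below estimates $\mathrm{d}\langle x^2 \rangle/\mathrm{d}t$ at the initial instant for a hypothetical test process with $\Pi_1(x) = -\lambda x$ and $\Pi_2(x) = \gamma$, and compares it with the right side of (62) for $n = 2$, namely $2\langle x\, \Pi_1(x) \rangle + \langle \Pi_2(x) \rangle$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo spot-check of (62) for n = 2 at the initial instant:
#   d<x^2>/dt = 2 <x Pi_1(x)> + <Pi_2(x)>.
# Hypothetical test process: Pi_1(x) = -lam * x, Pi_2(x) = gam.
lam, gam = 0.7, 0.4
dt = 1e-3
x = np.full(1_000_000, 1.0)   # every path starts at x0 = 1

dx = -lam * x * dt + np.sqrt(gam * dt) * rng.normal(size=x.size)
lhs = (np.mean((x + dx) ** 2) - np.mean(x ** 2)) / dt
rhs = 2 * np.mean(x * (-lam * x)) + gam
print(lhs, rhs)   # agree up to O(dt) bias and sampling noise
```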

The initial condition for this equation is

  $\left\langle x(t=t_0) \right\rangle = x_0$.   (65)

For $\left\langle x^2 \right\rangle$, (62) yields

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left\langle x^2(t) \right\rangle = 2 \left\langle x(t)\, \Pi_1(x(t),t) \right\rangle + \left\langle \Pi_2(x(t),t) \right\rangle$.   (66)

Time evolution of the variance. From this and from (64) we obtain

  $\dfrac{\mathrm{d}}{\mathrm{d}t}\, \mathrm{var}\left(x(t)\right) = \dfrac{\mathrm{d}}{\mathrm{d}t} \left( \left\langle x^2(t) \right\rangle - \left\langle x(t) \right\rangle^2 \right) = 2 \left\langle x(t)\, \Pi_1(x(t),t) \right\rangle + \left\langle \Pi_2(x(t),t) \right\rangle - 2 \left\langle x(t) \right\rangle \left\langle \Pi_1(x(t),t) \right\rangle$.   (67)

The initial condition for this equation is

  $\mathrm{var}\left(x(t=t_0)\right) = 0$.   (68)

Covariance is a function of two variables, for example, $\mathrm{cov}\left(x(t_1), x(t_2)\right)$. Here we are going to evaluate its derivative with respect to $t_2$:

  $\dfrac{\partial}{\partial t_2}\, \mathrm{cov}\left(x(t_1), x(t_2)\right) = \dfrac{\partial}{\partial t_2} \left( \left\langle x(t_1)\, x(t_2) \right\rangle - \left\langle x(t_1) \right\rangle \left\langle x(t_2) \right\rangle \right) = \dfrac{\partial}{\partial t_2} \left\langle x(t_1)\, x(t_2) \right\rangle - \left\langle x(t_1) \right\rangle \dfrac{\partial}{\partial t_2} \left\langle x(t_2) \right\rangle = \dfrac{\partial}{\partial t_2} \left\langle x(t_1)\, x(t_2) \right\rangle - \left\langle x(t_1) \right\rangle \left\langle \Pi_1(x(t_2),t_2) \right\rangle$.   (69)

The first component of the sum on the right side of the equation still requires some work. We proceed as follows:

  $x(t_1)\, x(t_2+\mathrm{d}t_2) = x(t_1) \left( x(t_2) + \Delta x(t_2) \right) = x(t_1)\, x(t_2) + x(t_1)\, \Delta x(t_2)$,   (70)

the average of which is

  $\left\langle x(t_1)\, x(t_2) \right\rangle + \left\langle x(t_1)\, \Delta x(t_2) \right\rangle$,   (71)

where

  $\left\langle x(t_1)\, x(t_2) \right\rangle = \int_{\Omega(x_1)} \int_{\Omega(x_2)} x_1\, x_2\, P\left((x_2,t_2)\mid(x_1,t_1)\right)\, P\left((x_1,t_1)\mid(x_0,t_0)\right)\, \mathrm{d}x_1\, \mathrm{d}x_2$.   (72)

Therefore

  $\left\langle x(t_1)\, x(t_2+\mathrm{d}t_2) \right\rangle - \left\langle x(t_1)\, x(t_2) \right\rangle = \left\langle x(t_1)\, \Delta x(t_2) \right\rangle$.   (73)

The right side of this equation is somewhat tricky, because here we have to average $\Delta x(t_2)$. We do this as follows:

  $\left\langle x(t_1)\, \Delta x(t_2) \right\rangle = \int_{\Omega(x_1)} \int_{\Omega(x_2)} \int x_1\, \Delta x\, \Pi_{\mathrm{d}t_2}\left(\Delta x \mid (x_2,t_2)\right)\, P\left((x_2,t_2)\mid(x_1,t_1)\right)\, P\left((x_1,t_1)\mid(x_0,t_0)\right)\, \mathrm{d}(\Delta x)\, \mathrm{d}x_2\, \mathrm{d}x_1 = \int_{\Omega(x_1)} \int_{\Omega(x_2)} x_1 \left( \Pi_1(x_2,t_2)\, \mathrm{d}t_2 + O\left((\mathrm{d}t_2)^2\right) \right) P\left((x_2,t_2)\mid(x_1,t_1)\right)\, P\left((x_1,t_1)\mid(x_0,t_0)\right)\, \mathrm{d}x_2\, \mathrm{d}x_1 = \left\langle x(t_1)\, \Pi_1(x(t_2),t_2) \right\rangle \mathrm{d}t_2 + O\left((\mathrm{d}t_2)^2\right)$.   (74)

Substituting this result into (73) and dividing both sides by $\mathrm{d}t_2$ yields

  $\dfrac{\partial}{\partial t_2} \left\langle x(t_1)\, x(t_2) \right\rangle = \left\langle x(t_1)\, \Pi_1(x(t_2),t_2) \right\rangle$.   (75)

Time evolution of the covariance. Now we plug this result into (69), which yields

  $\dfrac{\partial}{\partial t_2}\, \mathrm{cov}\left(x(t_1), x(t_2)\right) = \left\langle x(t_1)\, \Pi_1(x(t_2),t_2) \right\rangle - \left\langle x(t_1) \right\rangle \left\langle \Pi_1(x(t_2),t_2) \right\rangle$.   (76)

The initial condition for this equation is

  $\mathrm{cov}\left(x(t_1), x(t_2=t_1)\right) = \mathrm{var}\left(x(t_1)\right)$.   (77)

It is useful to note that when $\Pi_1(x,t)$ does not depend on $x$, then it falls out of the brackets on the right side of (76), which makes the right side zero. Thus $\mathrm{cov}\left(x(t_1), x(t_2)\right)$ ends up independent of $t_2$, and so it must remain set to the initial condition, that is, $\mathrm{var}\left(x(t_1)\right)$.

Two examples of Markov processes. Looking at (64), (67) and (76) and the related initial conditions (65), (68) and (77), we find that these equations seem not only eminently tractable, but even relatively simple, depending, that is, on the $\Pi_n(x,t)$ functions. Thus, by persistent chipping at the problem, we have progressed from the initial view of stochastic processes that was intimidating, to say the least, to quite tractable equations that describe the evolution of the mean, variance and covariance of the Markov process $x(t)$. To illustrate this point, we are going to look at two simple examples that happen to be applicable to some Markov processes of interest. The examples also illustrate how we would use the propagator moment functions of the Markov process to specify it.

$\Pi_1(x,t) = v$, $\Pi_2(x,t) = \gamma$, where $v$ and $\gamma \ge 0$ are constants. In this case $\left\langle \Pi_1 \right\rangle = v$ and $\left\langle \Pi_2 \right\rangle = \gamma$, and the equations that describe the evolution of the mean, the variance and the covariance, plus their initial conditions, are

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left\langle x(t) \right\rangle = v$, $\qquad \left\langle x(t=t_0) \right\rangle = x_0$,
  $\dfrac{\mathrm{d}}{\mathrm{d}t}\, \mathrm{var}\left(x(t)\right) = \gamma$, $\qquad \mathrm{var}\left(x(t=t_0)\right) = 0$,
  $\dfrac{\partial}{\partial t_2}\, \mathrm{cov}\left(x(t_1), x(t_2)\right) = 0$, $\qquad \mathrm{cov}\left(x(t_1), x(t_2=t_1)\right) = \mathrm{var}\left(x(t_1)\right)$.   (78)

The derivative of the covariance is zero because $\Pi_1$ is a constant, which means that

  $\mathrm{cov}\left(x(t_1), x(t_2)\right) = \mathrm{var}\left(x(t_1)\right)$.   (79)

The solution to this problem is therefore

  $\left\langle x(t) \right\rangle = x_0 + v\,(t - t_0)$,
  $\mathrm{var}\left(x(t)\right) = \gamma\,(t - t_0)$,
  $\mathrm{cov}\left(x(t_1), x(t_2)\right) = \gamma\,(t_1 - t_0)$.   (80)

For $\gamma = 0$ the variance and the covariance remain zero and the process becomes deterministic.
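A quick simulation confirms (80). The Euler update used below, $x \to x + v\,\mathrm{d}t + \sqrt{\gamma\,\mathrm{d}t}\, N(0,1)$, is a standard sketch consistent with the stated propagator moment functions, not a scheme derived in this module.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate the first example process (Pi_1 = v, Pi_2 = gam) with an
# Euler scheme and check the mean and variance predictions of (80).
v, gam, x0, t0 = 0.8, 0.5, 1.0, 0.0
dt, n_steps, n_paths = 1e-3, 2000, 200_000

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += v * dt + np.sqrt(gam * dt) * rng.normal(size=n_paths)

t = t0 + n_steps * dt   # t = 2.0
print(np.mean(x), x0 + v * (t - t0))   # mean vs (80)
print(np.var(x), gam * (t - t0))       # variance vs (80)
```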

$\Pi_1(x,t) = -\lambda x$, $\Pi_2(x,t) = \gamma$, where $\lambda > 0$ and $\gamma \ge 0$ are constants. This example is a little more complicated. The equations that govern it are as follows:

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left\langle x(t) \right\rangle = -\lambda \left\langle x(t) \right\rangle$, $\qquad \left\langle x(t=t_0) \right\rangle = x_0$,
  $\dfrac{\mathrm{d}}{\mathrm{d}t}\, \mathrm{var}\left(x(t)\right) = -2\lambda\, \mathrm{var}\left(x(t)\right) + \gamma$, $\qquad \mathrm{var}\left(x(t=t_0)\right) = 0$,
  $\dfrac{\partial}{\partial t_2}\, \mathrm{cov}\left(x(t_1), x(t_2)\right) = -\lambda\, \mathrm{cov}\left(x(t_1), x(t_2)\right)$, $\qquad \mathrm{cov}\left(x(t_1), x(t_2=t_1)\right) = \mathrm{var}\left(x(t_1)\right)$.   (81)

Before we go any further, we're going to explain how these equations come about. The first one is obvious. The equation for the variance is obtained by substituting our specific expressions for $\Pi_1$ and $\Pi_2$ in (67), which yields

  $\dfrac{\mathrm{d}}{\mathrm{d}t}\, \mathrm{var}\left(x(t)\right) = 2 \left\langle x(t) \left(-\lambda x(t)\right) \right\rangle + \gamma - 2 \left\langle x(t) \right\rangle \left\langle -\lambda x(t) \right\rangle = -2\lambda \left( \left\langle x^2(t) \right\rangle - \left\langle x(t) \right\rangle^2 \right) + \gamma = -2\lambda\, \mathrm{var}\left(x(t)\right) + \gamma$.   (82)

The equation for the covariance is obtained by substituting $\Pi_1$ and $\Pi_2$ as defined above in (76), which yields

  $\dfrac{\partial}{\partial t_2}\, \mathrm{cov}\left(x(t_1), x(t_2)\right) = \left\langle x(t_1) \left(-\lambda x(t_2)\right) \right\rangle - \left\langle x(t_1) \right\rangle \left\langle -\lambda x(t_2) \right\rangle = -\lambda \left( \left\langle x(t_1)\, x(t_2) \right\rangle - \left\langle x(t_1) \right\rangle \left\langle x(t_2) \right\rangle \right) = -\lambda\, \mathrm{cov}\left(x(t_1), x(t_2)\right)$.   (83)

The solution of the first equation in (81) is of the form $e^{-\lambda t}$. The initial condition forces the following choice of constants:

  $\left\langle x(t) \right\rangle = x_0\, e^{-\lambda (t - t_0)}$.   (84)

The variance equation would be like the mean equation were it not for the non-homogeneous term $\gamma$. We deal with this by postulating a solution of the form

  $\mathrm{var}\left(x(t)\right) = A\, e^{-2\lambda (t - t_0)}\, f(t)$,   (85)

where $A$ is a constant and

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left( A\, e^{-2\lambda (t - t_0)}\, f(t) \right) = A\, e^{-2\lambda (t - t_0)} \left( \dfrac{\mathrm{d}f(t)}{\mathrm{d}t} - 2\lambda f(t) \right)$.   (86)

Substituting this solution into the equation for the variance yields

  $A\, e^{-2\lambda (t - t_0)} \left( \dfrac{\mathrm{d}f(t)}{\mathrm{d}t} - 2\lambda f(t) \right) = -2\lambda A\, e^{-2\lambda (t - t_0)}\, f(t) + \gamma$.   (87)

We add $2\lambda A\, e^{-2\lambda (t - t_0)}\, f(t)$ to both sides of the equation, which kills this term, and we are left with

  $\dfrac{\mathrm{d}f(t)}{\mathrm{d}t} = \dfrac{\gamma}{A}\, e^{2\lambda (t - t_0)}$,   (88)

which solves to

  $f(t) = \dfrac{\gamma}{2\lambda A}\, e^{2\lambda (t - t_0)} + B$,   (89)

where $B$ is another constant. We substitute this into (85) and obtain

  $\mathrm{var}\left(x(t)\right) = A\, e^{-2\lambda (t - t_0)} \left( \dfrac{\gamma}{2\lambda A}\, e^{2\lambda (t - t_0)} + B \right) = \dfrac{\gamma}{2\lambda} + AB\, e^{-2\lambda (t - t_0)}$,   (90)

leaving us with just one constant, $AB$, as should be expected. For $t = t_0$ the exp function is 1 and we obtain

  $\mathrm{var}\left(x(t=t_0)\right) = \dfrac{\gamma}{2\lambda} + AB = 0$,   (91)

which implies that $AB = -\gamma/(2\lambda)$. Therefore

  $\mathrm{var}\left(x(t)\right) = \dfrac{\gamma}{2\lambda} \left( 1 - e^{-2\lambda (t - t_0)} \right)$.   (92)

The covariance equation is just like the mean equation. But here the initial condition at $t_2 = t_1$ sets the covariance to the variance of $x(t_1)$. Consequently, the solution is

  $\mathrm{cov}\left(x(t_1), x(t_2)\right) = \mathrm{var}\left(x(t_1)\right)\, e^{-\lambda (t_2 - t_1)}$.   (93)

But we have the expression for the variance in the form of (92). Plugging it into the above solution yields

  $\mathrm{cov}\left(x(t_1), x(t_2)\right) = \dfrac{\gamma}{2\lambda} \left( 1 - e^{-2\lambda (t_1 - t_0)} \right) e^{-\lambda (t_2 - t_1)}$.   (94)

In summary,

  $\left\langle x(t) \right\rangle = x_0\, e^{-\lambda (t - t_0)}$,
  $\mathrm{var}\left(x(t)\right) = \dfrac{\gamma}{2\lambda} \left( 1 - e^{-2\lambda (t - t_0)} \right)$,
  $\mathrm{cov}\left(x(t_1), x(t_2)\right) = \dfrac{\gamma}{2\lambda} \left( 1 - e^{-2\lambda (t_1 - t_0)} \right) e^{-\lambda (t_2 - t_1)}$.   (95)
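The same kind of simulation confirms the mean and the variance in (95); as before, the Euler update is an illustrative sketch consistent with the stated propagator moments, not a scheme derived here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate the second example process (Pi_1 = -lam*x, Pi_2 = gam) with
# an Euler scheme and check the summary (95).
lam, gam, x0, t0 = 1.2, 0.6, 2.0, 0.0
dt, n_steps, n_paths = 1e-3, 3000, 200_000

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += -lam * x * dt + np.sqrt(gam * dt) * rng.normal(size=n_paths)

t = t0 + n_steps * dt   # t = 3.0
print(np.mean(x), x0 * np.exp(-lam * (t - t0)))   # mean vs (95)
print(np.var(x),
      gam / (2 * lam) * (1 - np.exp(-2 * lam * (t - t0))))   # var vs (95)
```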

8 Homogeneous Markov Processes

Markov processes are said to be homogeneous if the corresponding Markov propagator density function does not depend on

time, that is,

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) = \Pi_{\mathrm{d}t}\left(\Delta x \mid x\right)$,   (96)

in which case they are called temporally homogeneous;

space, that is,

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) = \Pi_{\mathrm{d}t}\left(\Delta x \mid t\right)$,   (97)

in which case they are called spatially homogeneous;

time and space, that is,

  $\Pi_{\mathrm{d}t}\left(\Delta x \mid (x,t)\right) = \Pi_{\mathrm{d}t}\left(\Delta x\right)$,   (98)

in which case they are called completely homogeneous. Brownian motion is an example of a completely homogeneous process.

A great deal can be said about such processes, because the equations that describe them are simpler, and certain of their properties can be seen right away.

Temporally homogeneous Markov processes. One important property of temporally homogeneous Markov processes, and by extension also of completely homogeneous ones, is that the probability density $P\left((x,t)\mid(x_0,t_0)\right)$ depends on time through $t - t_0$ only; in other words,

  $P\left((x,t)\mid(x_0,t_0)\right) = P\left((x,\, t - t_0)\mid(x_0,\, 0)\right)$.   (99)

From this it follows immediately that

  $\dfrac{\partial}{\partial t} P\left((x,t)\mid(x_0,t_0)\right) = -\dfrac{\partial}{\partial t_0} P\left((x,t)\mid(x_0,t_0)\right)$.   (100)

Because the propagator density does not depend on time, the propagator moment functions $\Pi_n$ do not depend on time either:

  $\Pi_n(x) = \lim_{\mathrm{d}t \to 0} \dfrac{1}{\mathrm{d}t} \int (\Delta x)^n\, \Pi_{\mathrm{d}t}\left(\Delta x \mid x\right)\, \mathrm{d}(\Delta x)$.   (101)

The Kramers-Moyal equations. Consequently, the Kramers-Moyal equations simplify too, with the left sides of the equations fully interchangeable on account of (100):

  $\dfrac{\partial}{\partial t} P\left((x,t)\mid(x_0,t_0)\right) = \sum_{n=1}^{\infty} \dfrac{(-1)^n}{n!}\, \dfrac{\partial^n}{\partial x^n} \left( \Pi_n(x)\, P\left((x,t)\mid(x_0,t_0)\right) \right)$,
  $-\dfrac{\partial}{\partial t_0} P\left((x,t)\mid(x_0,t_0)\right) = \sum_{n=1}^{\infty} \dfrac{1}{n!}\, \Pi_n(x_0)\, \dfrac{\partial^n}{\partial x_0^n} P\left((x,t)\mid(x_0,t_0)\right)$.   (102)

Completely homogeneous Markov processes. If the process is completely homogeneous, the above equations apply, but we can simplify even more. Let us divide $[t_0, t]$ into $n$ infinitesimal intervals, each of length $\mathrm{d}t$; then we can write the compounded Chapman-Kolmogorov equation as follows:

  $P\left((x,t)\mid(x_0,t_0)\right) = \int_{\Omega(\Delta x_1)} \dots \int_{\Omega(\Delta x_{n-1})} \prod_{j=1}^{n} \Pi_{\mathrm{d}t}\left(\Delta x_j\right)\, \mathrm{d}(\Delta x_1) \dots \mathrm{d}(\Delta x_{n-1})$,   (103)

where

  $\Delta x_n = x - x_0 - \sum_{i=1}^{n-1} \Delta x_i, \qquad x - x_0 = \sum_{i=1}^{n} \Delta x_i, \qquad \mathrm{d}t = \dfrac{t - t_0}{n}$.   (104)

The integrals run over $\Delta x_1, \dots, \Delta x_{n-1}$, but not over $\Delta x_n$. We can add the latter by making use of the above expression for $x - x_0$ and inserting the corresponding Dirac delta in (103); then we make use of the fact that the $\Pi_{\mathrm{d}t}\left(\Delta x_i\right)$ depend on their own $\Delta x_i$ only and obtain

  $P\left((x,t)\mid(x_0,t_0)\right) = \left( \prod_{j=1}^{n} \int_{\Omega(\Delta x_j)} \mathrm{d}(\Delta x_j) \right) \left( \prod_{i=1}^{n} \Pi_{\mathrm{d}t}\left(\Delta x_i\right) \right) \delta\!\left( x - x_0 - \sum_{i=1}^{n} \Delta x_i \right)$.   (105)

The next step in gaining further insights is to express the Dirac delta in the form of a Fourier transform:

  $\delta(x) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} e^{ikx}\, \mathrm{d}k$,   (106)

which in this case becomes

  $\delta\!\left( x - x_0 - \sum_{i=1}^{n} \Delta x_i \right) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x - x_0)} \prod_{j=1}^{n} e^{-ik\,\Delta x_j}\, \mathrm{d}k$.   (107)

We substitute this into (105), also assuming that for each $j$, $\Omega(\Delta x_j) = [-\infty, \infty]$, which yields

  $P\left((x,t)\mid(x_0,t_0)\right) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x - x_0)} \left( \int_{-\infty}^{\infty} \Pi_{\mathrm{d}t}\left(\Delta x\right) e^{-ik\,\Delta x}\, \mathrm{d}(\Delta x) \right)^n \mathrm{d}k = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x - x_0)} \left( \hat{\Pi}_{\mathrm{d}t}(k) \right)^n \mathrm{d}k$,   (108)

where

  $\hat{\Pi}_{\mathrm{d}t}(k) = \int_{-\infty}^{\infty} \Pi_{\mathrm{d}t}\left(\Delta x\right) e^{-ik\,\Delta x}\, \mathrm{d}(\Delta x)$   (109)

is the Fourier transform of $\Pi_{\mathrm{d}t}$. Looking at (108), we see that not only does $P\left((x,t)\mid(x_0,t_0)\right)$ depend on $t - t_0$, as per (104); it depends on $x$ through $x - x_0$ too. Hence

  $P\left((x,t)\mid(x_0,t_0)\right) = P\left((x - x_0,\, t - t_0)\mid(0,\, 0)\right)$,   (110)

wherefrom

  $\dfrac{\partial P}{\partial t} = -\dfrac{\partial P}{\partial t_0}, \qquad \dfrac{\partial P}{\partial x} = -\dfrac{\partial P}{\partial x_0}$.   (111)

For completely homogeneous Markov processes the propagator moment functions $\Pi_n(x,t)$ no longer depend on $x$ or $t$. They are therefore constants, $\Pi_n$. In combination with (111), this reduces the two Kramers-Moyal equations to just one:

  $\dfrac{\partial}{\partial t} P\left((x,t)\mid(x_0,t_0)\right) = \sum_{n=1}^{\infty} \dfrac{(-1)^n}{n!}\, \Pi_n\, \dfrac{\partial^n}{\partial x^n} P\left((x,t)\mid(x_0,t_0)\right)$.   (112)

The equation for the evolution of the moments, (62), similarly simplifies to

  $\dfrac{\mathrm{d}}{\mathrm{d}t} \left\langle x^n(t) \right\rangle = \sum_{k=1}^{n} \binom{n}{k}\, \Pi_k \left\langle x^{n-k}(t) \right\rangle$.   (113)
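Equation (108) also doubles as a practical recipe: raise the Fourier transform of the one-step propagator density to the n-th power and invert. The sketch below assumes a Gaussian one-step propagator with mean $v\,\mathrm{d}t$ and variance $\gamma\,\mathrm{d}t$, corresponding to the first example process above, for which the exact answer is known.

```python
import numpy as np

# For a completely homogeneous process, (108) builds P((x,t)|(x0,t0))
# from the n-th power of the Fourier transform (109) of the one-step
# propagator density. Assumed one-step propagator: Gaussian with mean
# v*dt and variance gam*dt, so by (109)
#   Pi_hat(k) = exp(-i*k*v*dt - 0.5*gam*dt*k**2).
v, gam = 0.5, 0.4
t_minus_t0, n = 2.0, 1000
dt = t_minus_t0 / n   # as in (104)

k = np.linspace(-60.0, 60.0, 12001)
dk = k[1] - k[0]
pi_hat = np.exp(-1j * k * v * dt - 0.5 * gam * dt * k ** 2)

# Invert (108) at a chosen displacement x - x0 by direct quadrature.
dx = 1.3
P = np.sum(np.exp(1j * k * dx) * pi_hat ** n).real * dk / (2 * np.pi)

# Exact answer: Gaussian with mean v*(t-t0) and variance gam*(t-t0).
exact = np.exp(-(dx - v * t_minus_t0) ** 2 / (2 * gam * t_minus_t0)) \
        / np.sqrt(2 * np.pi * gam * t_minus_t0)
print(P, exact)   # should agree closely
```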


More information

Integration by Parts

Integration by Parts Integration by Parts 6-3-207 If u an v are functions of, the Prouct Rule says that (uv) = uv +vu Integrate both sies: (uv) = uv = uv + u v + uv = uv vu, vu v u, I ve written u an v as shorthan for u an

More information

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments Problem F U L W D g m 3 2 s 2 0 0 0 0 2 kg 0 0 0 0 0 0 Table : Dimension matrix TMA 495 Matematisk moellering Exam Tuesay December 6, 2008 09:00 3:00 Problems an solution with aitional comments The necessary

More information

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7.

1 dx. where is a large constant, i.e., 1, (7.6) and Px is of the order of unity. Indeed, if px is given by (7.5), the inequality (7. Lectures Nine an Ten The WKB Approximation The WKB metho is a powerful tool to obtain solutions for many physical problems It is generally applicable to problems of wave propagation in which the frequency

More information

Computing Derivatives

Computing Derivatives Chapter 2 Computing Derivatives 2.1 Elementary erivative rules Motivating Questions In this section, we strive to unerstan the ieas generate by the following important questions: What are alternate notations

More information

Lecture 6: Calculus. In Song Kim. September 7, 2011

Lecture 6: Calculus. In Song Kim. September 7, 2011 Lecture 6: Calculus In Song Kim September 7, 20 Introuction to Differential Calculus In our previous lecture we came up with several ways to analyze functions. We saw previously that the slope of a linear

More information

Math 210 Midterm #1 Review

Math 210 Midterm #1 Review Math 20 Miterm # Review This ocument is intene to be a rough outline of what you are expecte to have learne an retaine from this course to be prepare for the first miterm. : Functions Definition: A function

More information

arxiv:physics/ v2 [physics.ed-ph] 23 Sep 2003

arxiv:physics/ v2 [physics.ed-ph] 23 Sep 2003 Mass reistribution in variable mass systems Célia A. e Sousa an Vítor H. Rorigues Departamento e Física a Universiae e Coimbra, P-3004-516 Coimbra, Portugal arxiv:physics/0211075v2 [physics.e-ph] 23 Sep

More information

Lecture 16: The chain rule

Lecture 16: The chain rule Lecture 6: The chain rule Nathan Pflueger 6 October 03 Introuction Toay we will a one more rule to our toolbo. This rule concerns functions that are epresse as compositions of functions. The iea of a composition

More information

Solutions to Math 41 Second Exam November 4, 2010

Solutions to Math 41 Second Exam November 4, 2010 Solutions to Math 41 Secon Exam November 4, 2010 1. (13 points) Differentiate, using the metho of your choice. (a) p(t) = ln(sec t + tan t) + log 2 (2 + t) (4 points) Using the rule for the erivative of

More information

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz A note on asymptotic formulae for one-imensional network flow problems Carlos F. Daganzo an Karen R. Smilowitz (to appear in Annals of Operations Research) Abstract This note evelops asymptotic formulae

More information

and from it produce the action integral whose variation we set to zero:

and from it produce the action integral whose variation we set to zero: Lagrange Multipliers Monay, 6 September 01 Sometimes it is convenient to use reunant coorinates, an to effect the variation of the action consistent with the constraints via the metho of Lagrange unetermine

More information