BIOCHEMICAL OSCILLATIONS VIA STOCHASTIC PROCESSES


EDUARDO S. ZERON

1. Historical Introduction

Since the first observation of current oscillations during the electrodissolution of an iron wire in nitric acid by Fechner [2] in 1828, the experimental evidence of electrochemical instabilities resulting in nonlinear behaviour such as oscillations, multistability, and chaos has been constantly accumulating. A simple oscillating or chaotic electrochemical cell can be built by placing an aluminium anode in a solution of sodium chloride in tap water. The oscillating current is produced in this case by the competing forces of corrosion by the chloride ions and protection by the carbonate ions from the tap water.

It is quite easy to understand why the first oscillating chemical reactions were discovered around 1830 by experimenting with an electrochemical cell, for the galvanometer was invented in 1820 and for some 90 years it was one of the few instruments sensitive enough for scientific measurements. Consider that Edwin H. Armstrong [1] did not publish an explanation of the Audion's operation until 1914. However, Fechner's discovery was met with skepticism, because the idea of an oscillating chemical reaction seems to contradict the laws of thermodynamics, and it is difficult to understand why two competing forces (dissolution and passivation of iron in nitric acid) should produce an oscillating chemical reaction instead of simply converging to a state of equilibrium.

Date: May

This theoretical atmosphere changed in 1910, when Alfred J. Lotka published his famous paper entitled "Contribution to the Theory of Periodic Reactions" [4]. This work contains a detailed analysis of the following hypothetical autocatalytic model, which produces damped chemical oscillations:

(1)  E --a--> X,  X + Y --b--> 2Y,  Y --c--> ∅,
     dx/dt = a[E] - bxy  and  dy/dt = bxy - cy.

Lotka explained in detail why the non-linear (autocatalytic) term bxy is absolutely necessary for producing (damped) oscillations. Even so, it is explicitly stated in [4] that: "No reaction is known which follows the above law, and as a matter of fact the case here considered was suggested by the consideration of matters lying outside the field of physical chemistry."

A simple modification of system (1) is able to produce sustained oscillations. One only needs to replace the constant term a[E] in (1) with ax in order to obtain sustained oscillations instead of damped ones. The new model is better known as the Lotka-Volterra equations, and it gives a simple description of the dynamical behaviour of biological systems in which two species interact, one as a predator and the other one as its prey:

(2)  dx/dt = ax - bxy  and  dy/dt = bxy - cy.

Even though no chemical clock (a homogeneous oscillating chemical reaction) was known at the time Lotka analysed system (1), several chemical clocks were discovered quite soon. The first one is the Bray-Liebhafsky reaction, discovered by W.C. Bray in 1921 when he investigated the role of the iodate ions IO3⁻ in the catalytic degradation of hydrogen peroxide to oxygen and water. The oscillations are produced by the following complementary reactions:

5 H2O2 + I2 --> 2 IO3⁻ + 2 H⁺ + 4 H2O,
5 H2O2 + 2 IO3⁻ + 2 H⁺ --> I2 + 5 O2 + 6 H2O.

More popular chemical clocks are the Belousov-Zhabotinsky and the Briggs-Rauscher reactions.

Our interest in the Lotka system (1) comes from the fact that these differential equations are involved in the analysis of the circadian rhythm, but the oscillations are produced by stochastic resonances; i.e. the stochastic simulations of system (1) oscillate when the respective deterministic simulations do not. Thus we begin by analysing when the deterministic systems (1) and (2) oscillate, and then we introduce the stochastic analysis.

2. The deterministic Lotka-Volterra system

We begin by analysing the classical Lotka-Volterra system

(2)  dx/dt = ax - bxy  and  dy/dt = bxy - cy,

where the constants a, b, and c are all strictly positive. There are two stationary states: the trivial one (0, 0), which is a saddle point (it is associated to one positive and one negative eigenvalue); and the second steady state (c/b, a/b), which is associated to a pair of purely imaginary eigenvalues. It is easy to check that any initial condition with strictly positive coordinates (different from the steady state) yields an oscillatory solution. Divide the second equation in (2) by the first one to obtain

dy/dx = (bxy - cy) / (ax - bxy),  and so  (a/y - b) dy/dx = b - c/x.

Integrating with respect to x yields

(3)  a ln y - by + c ln x - bx = r.

Homework 1. Let r be a given real constant. Prove that the locus of all points (x, y) that satisfy (3) is either the empty set or a closed curve in the plane. Hint: prove first that a ln y - by is a concave function that attains its maximum at a/b and that converges to -∞ when y > 0 goes to 0+ or to ∞.
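The closed orbits can be checked numerically. The following Python sketch (not part of the original notes; the unit rate constants a = b = c = 1 and the initial condition are illustrative choices) integrates system (2) and verifies that the quantity a ln y - by + c ln x - bx of equation (3) stays essentially constant along the computed trajectory.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c = 1.0, 1.0, 1.0          # illustrative rate constants

    def lotka_volterra(t, z):
        x, y = z
        return [a*x - b*x*y, b*x*y - c*y]

    def invariant(x, y):
        # Conserved quantity of equation (3): a ln y - b y + c ln x - b x
        return a*np.log(y) - b*y + c*np.log(x) - b*x

    sol = solve_ivp(lotka_volterra, (0.0, 30.0), [0.5, 0.5],
                    t_eval=np.linspace(0.0, 30.0, 3001), rtol=1e-9, atol=1e-12)
    x, y = sol.y
    r = invariant(x, y)
    print("x(t) oscillates between", x.min(), "and", x.max())
    print("maximal drift of the invariant r:", np.abs(r - r[0]).max())

The small drift of r reflects only the integration error; the exact solution moves on a level curve of (3).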

2.1. Simple oscillating systems. It is quite easy to build systems that have a stable limit cycle, and so are able to produce sustained oscillations. Consider the following system:

(4)  dr/dt = a - r  and  dθ/dt = b.

We obviously have that θ = c0 + bt and that r converges to the steady state a. If we use the change of variables

r = x² + y²,  θ = arctan(y/x);  and  x = √r cos(θ),  y = √r sin(θ);

we can rewrite (4) as follows:

2xẋ + 2yẏ = a - x² - y²  and  (xẏ - yẋ)/(x² + y²) = b.

Solving for ẋ and ẏ yields a system that obviously has only one stable limit circle x² + y² = a:

2 dx/dt = ax/(x² + y²) - 2by - x,
2 dy/dt = ay/(x² + y²) + 2bx - y.

Homework 2. Use the change of variables below, with d > 0, in order to build another system that has only one stable limit circle:

r = (x² + y²)^(1/(2d)),  θ = arctan(y/x);  and  x = r^d cos(θ),  y = r^d sin(θ).
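As a quick numerical check of this construction (again only a sketch, with the arbitrary values a = 4 and b = 1), one can integrate the Cartesian system above from several initial conditions and verify that x² + y² approaches a, i.e. that every trajectory ends up on the limit circle.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b = 4.0, 1.0                  # illustrative constants

    def field(t, z):
        x, y = z
        r2 = x*x + y*y
        # Cartesian form of dr/dt = a - r, dtheta/dt = b, with r = x^2 + y^2
        return [0.5*(a*x/r2 - x) - b*y,
                0.5*(a*y/r2 - y) + b*x]

    for x0, y0 in [(0.1, 0.0), (3.0, 3.0), (-2.0, 0.5)]:
        sol = solve_ivp(field, (0.0, 20.0), [x0, y0], rtol=1e-8, atol=1e-10)
        x, y = sol.y[:, -1]
        print(f"start ({x0}, {y0}) -> x^2 + y^2 = {x*x + y*y:.6f} (should tend to a = {a})")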

2.2. The original Lotka system. We analyse now the original Lotka system

(1)  dx/dt = ac - bxy  and  dy/dt = bxy - cy,

where [E] = c and the constants a, b, and c are all strictly positive. There is only one steady state (c/b, a), associated to the eigenvalues

( -ab ± √(a²b² - 4abc) ) / 2.

Both eigenvalues have strictly negative real part, so that the steady state (c/b, a) is locally stable. Moreover, the solutions to (1) are damped oscillations when 4c > ab. Nevertheless, the stochastic simulations of the Lotka system (1) produce sustained oscillations. We first have to explain what oscillations mean (and how to identify them) in a stochastic process.
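A hedged numerical companion to this claim (the parameters a = b = c = 1 are illustrative and satisfy 4c > ab): the sketch below computes the eigenvalues of the Jacobian at (c/b, a) and integrates (1) to exhibit the damped oscillations.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c = 1.0, 1.0, 1.0          # illustrative constants with 4c > ab

    # Jacobian of (ac - bxy, bxy - cy) evaluated at the steady state (c/b, a)
    J = np.array([[-a*b, -c],
                  [ a*b, 0.0]])
    print("eigenvalues:", np.linalg.eigvals(J))   # compare with (-ab ± sqrt(a^2 b^2 - 4abc))/2

    def lotka(t, z):
        x, y = z
        return [a*c - b*x*y, b*x*y - c*y]

    sol = solve_ivp(lotka, (0.0, 60.0), [2.0, 0.3], t_eval=np.linspace(0.0, 60.0, 601))
    x, y = sol.y
    print("late-time amplitude of x around c/b:", np.abs(x[-100:] - c/b).max())

The printed amplitude shrinks as the final time grows, which is exactly the damped behaviour that the deterministic model predicts.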

3. Probability Space

A probability space (X, Ω, p) is composed of a fixed set X called the base space, a collection Ω of measurable subsets of X called the space of events, and a probability measure p ≥ 0 that quantifies the size of every element or event B ∈ Ω. Notice that each B is at the same time an element of Ω and a subset of X. The collection Ω of events (subsets of X) must be a σ-algebra; i.e., it must satisfy the following properties:

- The empty set ∅ and the total set X are in Ω.
- If B ∈ Ω, the complement X \ B also lies in Ω.
- Given a countable collection {B_k}_{k=1}^∞ of elements of Ω, their union ∪_k B_k and intersection ∩_k B_k are both in Ω.

The probability measure p is a function p: Ω → R that satisfies the following properties:

- p(∅) = 0 and p(B) ≥ 0 for every B ∈ Ω.
- p(X) = 1 and p(A) ≤ p(B) for all A ⊆ B in Ω.
- Given a countable collection {B_k}_{k=1}^∞ of pairwise disjoint elements of Ω, the measure of the union p(∪_k B_k) is equal to the sum Σ_k p(B_k).

Example 3. Maybe one of the simplest examples is obtained when X is a countable set, Ω = 2^X is the collection of all subsets of X, and each element x ∈ X is endowed with a fixed real value µ(x) = c_x ≥ 0 in such a way that Σ_{x∈X} µ(x) is equal to one. It is easy to verify that Ω is a σ-algebra, and that µ is a probability measure when every subset B ⊆ X is quantified by the sum

µ(B) := Σ_{x∈B} µ(x).

The probability space (X, 2^X, µ) is known as a discrete probability space. One can take, for example, the natural numbers X = N and the Poisson distribution

µ(x) = λ^x e^{-λ} / x!

for a real parameter λ > 0.

It is not possible to use the previous structure when X is not countable, but even so, we can construct several structures as follows.

Example 4. Let X be an open or closed subset of the real space R^n. Define Ω as the Borel σ-algebra; i.e., the σ-algebra generated by all the relative open and closed subsets of X under countable unions, intersections, unions of intersections, etcetera. Given an integrable function f: X → R such that f(x) ≥ 0 for every x ∈ X and ∫_X f(x) dx is equal to one, we can define the probability measure

p(B) := ∫_B f(x) dx  for all B ∈ Ω,

in such a way that (X, Ω, p) is a well defined probability space. The function f is known as a probability density function. Notice that the probability p({x}) = 0 of a single point is zero for every x ∈ X. We can take, for example, the normal (Gaussian) probability density function f(x) = e^{-x²/2} / √(2π) defined on the real line X = R.

4. Random Variables

Let (X, Ω, p) be a probability space (either discrete or not). A random variable is any (measurable) function g: X → R; i.e., for every open set U ⊆ R in the real line, the inverse image g^{-1}(U) ⊆ X is an element of the σ-algebra Ω, so that the probability p(g^{-1}(U)) is well defined.

In general, one can think of (X, Ω, p) as the set of all possible experiments that can be performed on a physical system, while x ↦ g(x) is a specific physical measurement that we make in the experiment x ∈ X. For example, X can be the space of all human beings that live or have lived on Earth, while g(x) is the weight of a given human being x ∈ X in particular.

Any random variable g: X → R automatically induces a probability-space structure on its image g(X). For example, if (X, Ω, µ) is a discrete probability space, then X and its image g(X) are both countable. Hence we can consider the σ-algebra 2^{g(X)} composed of all subsets of g(X), and the new probability measure ρ_g defined on g(X) by the formula

ρ_g(E) := µ(g^{-1}(E)) = Σ_{x ∈ g^{-1}(E)} µ(x) = Σ_{y ∈ E} µ(g^{-1}(y))

for every subset E ⊆ g(X).

Likewise, given an arbitrary probability space (X, Ω, p), consider a random variable h: X → R. One can construct a second probability space (h(X), S, ρ_h), where S is the Borel σ-algebra over h(X) and ρ_h is the probability measure defined by

ρ_h(U) := p(h^{-1}(U))  for every U ∈ S.

Moreover, if we suppose that ρ_h(U) = 0 for every set U ∈ S whose Lebesgue measure λ(U) = 0, then there exists a probability density function f_h: h(X) → R such that f_h(y) ≥ 0 and

ρ_h(U) := p(h^{-1}(U)) = ∫_{h^{-1}(U)} dp(x) = ∫_U f_h(y) dy

for every U ∈ S.

A quite important point about random variables is that they can always be added, multiplied, etcetera, because their image lies in the real numbers.

In particular, we can define the mean and variance of the random variables introduced above:

E(g) = Σ_{x∈X} g(x) µ(x) = Σ_{y ∈ g(X)} y µ(g^{-1}(y)) = Σ_{y ∈ g(X)} y ρ_g(y),
E(h) = ∫_X h(x) dp(x) = ∫_{h(X)} y f_h(y) dy,
V(g) = E(g²) - E(g)².

Now then, what is the probability density function associated to the sum of two random variables? Consider a pair of random variables g and h defined on a discrete probability space (X, Ω, µ) and with image in the integer numbers Z. What is the probability of the following event, for z ∈ Z?

D_z = {(w, x) ∈ X² : g(w) + h(x) = z}.

The first problem that we have to solve is to define the probability on the product X². A simple solution is to define the product measure µ²(A × B) = µ(A) µ(B) for all subsets A and B of X. Whence, for all points y, z ∈ Z:

µ²(g^{-1}(y) × h^{-1}(z)) = µ(g^{-1}(y)) µ(h^{-1}(z)) = ρ_g(y) ρ_h(z).

In general, given any probability measure p defined on the product space X², we say that the random variables g and h are independent with respect to the probability p if and only if

p(g^{-1}(y) × h^{-1}(z)) = p(g^{-1}(y) × X) · p(X × h^{-1}(z)).

We can now calculate the probability of the event D_z as a convolution of the associated measures ρ_g and ρ_h:

µ²(D_z) = Σ_{w,x ∈ X, g(w)+h(x)=z} µ²(w, x) = Σ_{y∈Z} µ²(g^{-1}(y) × h^{-1}(z-y))
        = Σ_{y∈Z} µ(g^{-1}(y)) µ(h^{-1}(z-y)) = Σ_{y∈Z} ρ_g(y) ρ_h(z-y).
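A small sketch of the convolution formula, using two independent fair dice as the random variables g and h (a hypothetical example, not taken from the notes): the distribution of the sum obtained from Σ_y ρ_g(y) ρ_h(z-y) agrees with a brute-force enumeration of the product space X².

    import numpy as np

    # rho_g and rho_h: distributions of two fair six-sided dice, values 1..6
    rho_g = {y: 1/6 for y in range(1, 7)}
    rho_h = {y: 1/6 for y in range(1, 7)}

    # Convolution: P(g + h = z) = sum_y rho_g(y) * rho_h(z - y)
    conv = {z: sum(rho_g[y] * rho_h.get(z - y, 0.0) for y in rho_g) for z in range(2, 13)}

    # Brute-force enumeration of the product space X^2 with the product measure
    brute = {z: 0.0 for z in range(2, 13)}
    for w in range(1, 7):
        for x in range(1, 7):
            brute[w + x] += (1/6) * (1/6)

    for z in range(2, 13):
        assert abs(conv[z] - brute[z]) < 1e-12
    print("convolution matches enumeration; P(g + h = 7) =", conv[7])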

A similar result can be calculated when (X, Ω, p) is a non-discrete probability space.

Consider now a countable number of random variables {g_k}_{k=1}^∞ defined over a discrete probability space (X, Ω, µ). Endow every product X^n with the standard product measure µ^n, so that the random variables g_1, g_2, ..., g_n are all pairwise independent with respect to the probability measure µ^n. Finally, assume that the means and variances are all equal and finite, i.e.,

E(g_k) = c < ∞  and  V(g_k) = σ² < ∞  for k = 1, 2, 3, ...

The Central Limit Theorem yields that the following sequence of random variables converges in distribution to a Gaussian distribution:

(√n / σ) [ (g_1 + g_2 + ⋯ + g_n)/n - c ];

i.e., for every real number t ∈ R, we have that

lim_{n→∞} µ^n[ (√n / σ) ( (g_1 + g_2 + ⋯ + g_n)/n - c ) < t ] = ∫_{-∞}^{t} e^{-s²/2} / √(2π) ds.

A similar result can be obtained when (X, Ω, p) is a non-discrete probability space.
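A minimal Monte Carlo sketch of this statement (the choice of Poisson-distributed g_k with parameter λ = 2, so that c = λ and σ² = λ, is purely illustrative): the empirical distribution of the standardised sample means is compared with the Gaussian integral on the right-hand side.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)
    lam, n, trials = 2.0, 400, 20000           # illustrative choices
    c, sigma = lam, np.sqrt(lam)                # mean and standard deviation of each g_k

    g = rng.poisson(lam, size=(trials, n))      # independent copies of (g_1, ..., g_n)
    z = np.sqrt(n) / sigma * (g.mean(axis=1) - c)

    for t in (-1.0, 0.0, 1.0):
        empirical = np.mean(z < t)
        gaussian = 0.5 * (1.0 + erf(t / sqrt(2.0)))   # Gaussian CDF via the error function
        print(f"t = {t:+.1f}:  empirical {empirical:.3f}  vs  Gaussian {gaussian:.3f}")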

5. Markov Chains

Markov Chains allow us to analyse the dynamics of those systems for which there exists a fixed sequence of finite times t_0 < t_1 < t_2 < ⋯ < t_k < ⋯ with lim_{k→∞} t_k = ∞, such that the dynamics of the system are static inside each temporal interval [t_k, t_{k+1}). The state of the system can then be represented by a sequence of probability measures p_k, one for every interval [t_k, t_{k+1}), where the index k ≥ 0 runs over the natural numbers. In particular, p_k(x) is the probability that the system is at the state x in the time interval [t_k, t_{k+1}). The state of the system and the probability measures p_k can only change as a jump at the times t_k, for every k ≥ 1.

In a formal sense, a Markov Chain is a probability space (X, Ω, p_k) with a countable number of probability measures p_k: Ω → R indexed by the integer k ≥ 0 and that can only change in discrete steps. Moreover, the exact values of the measure p_{k+1} are exclusively given by the values of the previous measure p_k, and the relation is linear. Hence, if we assume that the probability space (X, Ω, p_k) is discrete, there exists a collection of real numbers q_k(x, y) such that

p_{k+1}(y) = Σ_{x∈X} p_k(x) q_k(x, y)  for y ∈ X, k ∈ N.

The coefficients q_k(x, y) cannot be arbitrary: every q_k(x, y) must lie in the real interval [0, 1], and for all fixed elements x ∈ X and k ∈ N,

Σ_{y∈X} q_k(x, y) = 1.

Both conditions are necessary to guarantee that each p_{k+1} is indeed a probability measure. Suppose for example that p_k is a probability measure of mass one at the fixed element w ∈ X; i.e.,

p_k(x) = 1 if x = w,  and  p_k(x) = 0 otherwise.

Then p_{k+1}(y) = q_k(w, y) for every y ∈ X, so that q_k(w, y) must lie in the closed interval [0, 1]. The second condition is necessary to guarantee that p_{k+1} is indeed a probability measure with total sum equal to one; i.e.,

Σ_{y∈X} p_{k+1}(y) = Σ_{x∈X} p_k(x) [ Σ_{y∈X} q_k(x, y) ] = Σ_{x∈X} p_k(x) = 1.

Notice in particular that the probability measure p_0 is indeed the initial state of the system, and that we can calculate the other probability measures p_k in an inductive form. Moreover, if we suppose that the elements of the countable set X are indexed by the positive integers, we can talk about the first, second, third, etcetera, element of X. This indexation allows us to see the probability measures p_k(x) (resp. the coefficients q_k(x, y)) as possibly infinite vectors (resp. matrices),

P_k := ( p_k(x) )_{x∈X}  and  Q_k := [ q_k(x, y) ]_{(x,y) ∈ X²}.

The previous notation allows us to write

P_{k+1} = P_k Q_k = P_0 Q_0 Q_1 ⋯ Q_k = P_0 Π_{j=0}^{k} Q_j,  for every k = 0, 1, 2, ...

The matrices Q_k are known as the transfer matrices of the system. Notice that the number one is always an eigenvalue of every Q_k; indeed the column vector (1, 1, 1, ...)^T is an associated eigenvector, i.e.,

Q_k (1, 1, 1, ...)^T = (1, 1, 1, ...)^T.

Those matrices whose entries lie in the real interval [0, 1] and that satisfy the above identity are known as stochastic matrices.

Suppose for example that we have a random walk on a circle with (let us say) six fixed positions. Enumerate the positions clockwise from one to six. Assuming that we are at any given position, we move to a new position after throwing a die: we move to the adjacent position on the left side when we get a one or a two, otherwise we move to the adjacent position on the right. Given any initial position, it is trivial to simulate millions of random walks around the circle, and so to estimate the probability of being at any position at a given time. However, it is really easy to calculate the probability of being at any position on the circle without having to do any simulation.

Suppose that time is indexed by the integer numbers k ≥ 0. The probability that we are at any of the six positions on the circle can be expressed as a horizontal vector P_k = (a_1, a_2, ..., a_6) ∈ R^6, where each 0 ≤ a_j ≤ 1 is indeed the probability that we are standing on the j-th position at the time k ≥ 0. Now then, if we assume that the die is fair, we can accept that the probability of moving to the adjacent position on the left side (resp. right side) is equal to 1/3 (resp. 2/3), so that we can construct the following stochastic transfer matrix:

        | 0    2/3  0    0    0    1/3 |
        | 1/3  0    2/3  0    0    0   |
Q_0 =   | 0    1/3  0    2/3  0    0   |
        | 0    0    1/3  0    2/3  0   |
        | 0    0    0    1/3  0    2/3 |
        | 2/3  0    0    0    1/3  0   |

If we never change the die, the transfer matrices are all equal; i.e., Q_k = Q_0 for k = 1, 2, 3, ... Hence, assuming that we stand on position one as the initial condition, the initial probability P_0 has mass one at the first position, and we can calculate the evolution of the probabilities P_k following an inductive process:

P_0 = (1, 0, 0, 0, 0, 0),
P_1 = P_0 Q_0   = (0, 2/3, 0, 0, 0, 1/3),
P_2 = P_0 Q_0^2 = (4/9, 0, 4/9, 0, 1/9, 0),
P_3 = P_0 Q_0^3 = (0, 4/9, 0, 1/3, 0, 2/9),
P_4 = P_0 Q_0^4 = (8/27, 0, 11/27, 0, 8/27, 0),
P_5 = P_0 Q_0^5 = (0, 1/3, 0, 10/27, 0, 8/27),
...
lim_{k→∞} P_{2k}   = (1/3, 0, 1/3, 0, 1/3, 0),
lim_{k→∞} P_{2k+1} = (0, 1/3, 0, 1/3, 0, 1/3).
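These hand computations are easy to reproduce; the sketch below (numpy is used only for the matrix products, and the position labels 1 to 6 become the indices 0 to 5) prints P_1, ..., P_5 and two large powers that approach the alternating limits.

    import numpy as np

    # Transfer matrix of the six-position walk: left with probability 1/3, right with 2/3
    Q = np.zeros((6, 6))
    for j in range(6):
        Q[j, (j + 1) % 6] = 2/3      # clockwise (right) neighbour
        Q[j, (j - 1) % 6] = 1/3      # counter-clockwise (left) neighbour

    P = np.zeros(6)
    P[0] = 1.0                        # mass one at the first position
    for k in range(1, 6):
        P = P @ Q
        print(f"P_{k} =", np.round(P, 4))

    # Large even and odd powers approach the two alternating limits
    print("P_1000 =", np.round(np.linalg.matrix_power(Q, 1000)[0], 4))
    print("P_1001 =", np.round(np.linalg.matrix_power(Q, 1001)[0], 4))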

Homework 5. Verify the previous calculations and prove that the identities below hold for the initial probability P_0 = (0, 1, 0, 0, 0, 0) with mass one at the second position:

lim_{k→∞} P_{2k}   = (0, 1/3, 0, 1/3, 0, 1/3),
lim_{k→∞} P_{2k+1} = (1/3, 0, 1/3, 0, 1/3, 0).

Moreover, given the initial probability P_0 = (1/2, 1/2, 0, 0, 0, 0), prove that

lim_{k→∞} P_k = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6).

5.1. Simulations. Consider again the general case of a (discrete) Markov Chain (X, Ω, p_k), defined on a discrete probability space X and whose dynamics in time are given by

p_{k+1}(y) = Σ_{x∈X} p_k(x) q_k(x, y)  for y ∈ X, k ∈ N.

The example analysed in the previous paragraphs indicates how to simulate any trajectory of a Markov Chain. We obviously use an inductive process. Assume that x(k) ∈ X is the state of the system in the temporal interval [t_k, t_{k+1}), so that x(0) ∈ X is the initial state of the system. We only need to choose the new state of the system x(k+1) = y ∈ X as a random variable with the following distribution function:

y ↦ q_k(x(k), y)  for y ∈ X.

Recall that each q_k(x(k), y) lies in [0, 1] and that the sum Σ_y q_k(x(k), y) is equal to one. It is very important to remember that Markov Chains do not have memory, in the sense that the new state x(k+1) is only determined by the exact value of the present state x(k). It does not matter what the values of the previous states x(k-1), x(k-2), x(k-3), etcetera, were: when we calculate the new state x(k+1), we only consider the value of the present state x(k).
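The inductive simulation procedure just described amounts to a few lines of code; the sketch below reuses the six-position random walk as the example chain, so the rows of the transfer matrix play the role of the distribution y ↦ q_k(x(k), y).

    import numpy as np

    rng = np.random.default_rng(1)

    # Time-invariant transfer matrix of the six-position random walk
    Q = np.zeros((6, 6))
    for j in range(6):
        Q[j, (j + 1) % 6] = 2/3
        Q[j, (j - 1) % 6] = 1/3

    def simulate_chain(x0, steps):
        """Draw x(k+1) from the distribution y -> q(x(k), y), one row at a time."""
        x = x0
        path = [x]
        for _ in range(steps):
            x = rng.choice(6, p=Q[x])   # the new state depends only on the present state
            path.append(x)
        return path

    print("one trajectory:", simulate_chain(0, 20))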

5.2. Transfer matrices that are invariant in time. A very special case happens when the coefficients q_k(x, y) and the transfer matrices Q_k are all invariant with respect to the time t_k, so that

Q_k = Q_0  and  P_k = P_0 Q_0^k,  for every k = 0, 1, 2, ...

The importance of this case lies in the facts that the probability distribution (measure) P_k = P_0 Q_0^k can be calculated in a simple way, and that we can prove the existence of a stationary distribution P_∞ satisfying

P_∞ = P_∞ Q_0,  so that  P_k = P_∞ for every k ≥ 0 when P_0 = P_∞.

The distribution P_∞ can be calculated as follows,

P_∞ = lim_{k→∞} P_0 Q_0^k,

when the limit exists for some initial probability distribution P_0. If the above limit does not exist, we can always use the following formula, which always converges to a stationary distribution:

P_∞ = lim_{k→∞} ( P_0 + P_0 Q_0 + P_0 Q_0^2 + ⋯ + P_0 Q_0^k ) / (k+1).
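For the six-position random walk of the previous example the limit of P_0 Q_0^k does not exist (even and odd powers alternate between two distributions), but the averaged formula above does converge. The short sketch below recovers the uniform stationary distribution and checks that it satisfies P_∞ Q_0 = P_∞.

    import numpy as np

    Q = np.zeros((6, 6))
    for j in range(6):
        Q[j, (j + 1) % 6] = 2/3
        Q[j, (j - 1) % 6] = 1/3

    P0 = np.array([1.0, 0, 0, 0, 0, 0])

    # Cesaro-type average (P0 + P0 Q + ... + P0 Q^k) / (k + 1)
    P, acc = P0.copy(), P0.copy()
    for k in range(1, 2000):
        P = P @ Q
        acc += P
    print("averaged distribution:", np.round(acc / 2000, 4))

    # The uniform vector is indeed stationary: P_inf Q = P_inf
    P_inf = np.full(6, 1/6)
    print("P_inf Q - P_inf:", np.round(P_inf @ Q - P_inf, 12))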

Example: a system of chemical reactions. Markov Chains can be used to model a chemical reactor (a system of chemical reactions) under the assumption that all the chemical reactions happen (one at a time) in a fixed sequence of finite times t_0 < t_1 < t_2 < ⋯ < t_k < ⋯ with lim_{k→∞} t_k = ∞.

Consider a chemical system in which n different chemical species are involved in m different chemical reactions. The state of this system can be represented by a vector of natural numbers x ∈ N^n, such that the instantaneous molecule count of the l-th chemical species is represented by the l-th entry x_l of the vector x. Each one of the chemical reactions taking place in this reactor can be represented as usual:

(5)  x --a(j,x)--> x + v[j],  j = 1, 2, ..., m,

where a(j, x) ≥ 0 (resp. v[j] ∈ Z^n) denotes the propensity (resp. stoichiometric vector) of the j-th reaction. Therefore each entry of the vector v[j] ∈ Z^n determines the change produced in the molecule count of the corresponding species by the occurrence of the j-th chemical reaction. For example, the following two representations of a chemical reaction are completely equivalent:

A + B --1--> 3A,   (x_A, x_B, ...) --x_A·x_B--> (x_A + 2, x_B - 1, ...).

Here and hereafter we assume that no propensity a(j, x) depends on time explicitly. In accordance with the fact that negative molecule counts are impossible, a(j, x) vanishes whenever any entry of x or x + v[j] is negative.

We also introduce a countable number of (discrete) probability measures p_k on the base space N^n, so that p_k(x) is the probability that the system is at the state x ∈ N^n in the time interval [t_k, t_{k+1}), for k ≥ 0. Now then, suppose for a moment that the system is at the state x(k) ∈ N^n in the temporal interval [t_k, t_{k+1}); how do we calculate which one of the m possible chemical reactions happens next? The most natural option is to choose the next chemical reaction at random, but taking into consideration that the probability of choosing the j-th reaction should be proportional to the value of the respective propensity a(j, x(k)) ≥ 0. Whence the next chemical reaction is chosen as a random variable with the following distribution function:

j ↦ a(j, x(k)) / Σ_{l=1}^{m} a(l, x(k))  for j = 1, 2, ..., m.

The previous considerations yield a natural way to calculate the respective transfer matrices associated to the chemical system, and so to determine the temporal evolution of the probability measures.

Thus, for every y ∈ N^n and k ∈ N,

p_{k+1}(y) = Σ_{j=1}^{m} p_k(y - v[j]) a(j, y - v[j]) / Σ_{l=1}^{m} a(l, y - v[j]) = Σ_{x ∈ N^n} p_k(x) q_k(x, y),

where

q_k(x, y) = a(j, x) / Σ_{l=1}^{m} a(l, x)  if y = x + v[j],  and  q_k(x, y) = 0 otherwise.

Notice that the sums above are calculated over all possible j-th chemical reactions that begin at the state x = y - v[j] and finish at y. We can simulate this model (a Markov Chain) according to the procedure described in subsection 5.1. Moreover, this model describes very well the random behaviour of a chemical reactor, but it requires the assumption that all the chemical reactions happen (one at a time) in a fixed sequence of finite times t_0 < t_1 < t_2 < ⋯ < t_k < ⋯ with lim_{k→∞} t_k = ∞. This assumption does not hold in real life, because the chemical reactions happen at random times as well, so that the values of t_k must be chosen in a random form.

Non-discrete probability spaces. One can easily analyse Markov Chains defined on a non-discrete probability space, but we must proceed with care, because it is not possible to use sums when the number of elements is not countable. The natural solution is to use integrals instead of sums. In any case, we only need to keep in mind that a Markov Chain is a probability space (X, Ω, p_k) with a countable number of probability measures p_k: Ω → R indexed by the natural numbers k ≥ 0. What is really important is that the probability measure p_{k+1} can be calculated as a linear combination of the values of the previous measure p_k(x). In other words, Markov Chains do not have memory, in the sense that every state x_{k+1} only depends on the value of the previous state x_k.

6. Stochastic processes with continuous time

The previous section was devoted to analysing discrete probability spaces (X, Ω, p_k) that are endowed with a countable number of probability measures p_k: Ω → R indexed by an integer number k ≥ 0, and such that the measure p_{k+1} is calculated as a linear combination of the values of the previous measure p_k(x). Following the same line, we analyse now those stochastic processes that can be expressed as a discrete probability space (X, Ω, p_t) with a non-countable number of probability measures p_t: Ω → R indexed by a real number t ∈ [t_0, ∞) that is called time. Moreover, the dynamics of p_t are governed by a differential equation, where dp_t/dt is a linear combination of the values of the measure p_t(x). Hence there exist real numbers q_t(x, y) such that

dp_t(y)/dt = Σ_{x∈X} p_t(x) q_t(x, y)  for y ∈ X, t ≥ t_0.

A necessary condition is that the sum

Σ_{y∈X} q_t(x, y) = 0  for all x ∈ X and t ≥ t_0.

This condition is necessary to guarantee that Σ_y p_t(y) is constant with respect to the time t; i.e.,

(d/dt) Σ_{y∈X} p_t(y) = Σ_{x∈X} p_t(x) [ Σ_{y∈X} q_t(x, y) ] = 0.

Thus, if p_{t_0} is the initial probability measure with Σ_y p_{t_0}(y) equal to one, then p_t is also a probability measure with Σ_y p_t(y) equal to one for every t ≥ t_0. Notice in particular that, unlike the analysis done in the previous Section 5, the new function y ↦ q_t(x, y) is not a distribution function, because at least one of the coefficients q_t(x, y) must be negative in order to guarantee that the sum of all of them with respect to y ∈ X is equal to zero.

Since there are too many different kinds of discrete stochastic processes with continuous time, we shall devote ourselves to analysing a particular kind called birth and death models. These models are really similar to Markov Chains, in the sense that the state of the system changes as a jump, but it is now supposed that these jumps happen at random times as well. Systems of chemical reactions are the ideal examples of birth and death models, because the chemical reactions can be seen as processes where molecules are destroyed, transformed, and generated at random times.

6.1. Modelling and realistic simulation of a system of chemical reactions. Consider a chemical system in which n different chemical species are involved in m different chemical reactions. The state of this system can be represented by a vector of natural numbers x ∈ N^n, such that the instantaneous molecule count of the l-th chemical species is represented by the l-th entry x_l of the vector x. Each one of the chemical reactions taking place in this reactor can be represented as usual:

(6)  x --a(j,x)--> x + v[j],  j = 1, 2, ..., m,

where a(j, x) ≥ 0 (resp. v[j] ∈ Z^n) denotes the propensity (resp. stoichiometric vector) of the j-th reaction. Therefore each entry of the vector v[j] ∈ Z^n determines the change produced in the molecule count of the corresponding species by the occurrence of the j-th chemical reaction. In accordance with the fact that negative molecule counts are impossible, a(j, x) vanishes whenever any entry of x or x + v[j] is negative. Nevertheless, we now introduce a non-countable quantity of probability measures p_t defined on the base space N^n and indexed by a real time t ≥ t_0, in such a way that p_{t_0} is the initial probability measure.

Simulations are calculated with the Gillespie algorithm. Suppose that x(t) ∈ N^n is the state of the system at a given time t ≥ t_0, so that x(t_0) is the initial state of the system. We first calculate when the next chemical reaction happens, and to do so we draw a temporal increment τ in the real interval [0, ∞) as a random variable with exponential distribution function

τ ↦ λ exp(-λτ),  where  λ = Σ_{l=1}^{m} a(l, x(t)).

Hence the next chemical reaction happens at the time t + τ. We now decide which one of the m possible chemical reactions is next, and we proceed here as in the case of modelling a chemical reactor as a Markov Chain: the next reaction is chosen as a random variable with the following distribution function

j ↦ a(j, x(t)) / Σ_{l=1}^{m} a(l, x(t))  for j = 1, 2, ..., m.

Therefore, if the j-th chemical reaction is chosen, the new state of the system is given by x(t+τ) = x(t) + v[j]. Moreover, in the temporal interval [t, t+τ) the state of the system is considered constant and equal to x(t). It is important to indicate that calculating the time increment τ ≥ 0 as a random variable with an exponential distribution is a necessary condition for asserting that the continuous-time process has no memory (as in the case of a Markov Chain).
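The algorithm just described translates almost literally into code. The sketch below is a generic, illustrative implementation (not taken from the notes): the user-supplied function propensities(x) is assumed to return the vector (a(1,x), ..., a(m,x)), and V is assumed to be the matrix whose j-th row is the stoichiometric vector v[j].

    import numpy as np

    def gillespie(x0, propensities, V, t0, t_final, rng=np.random.default_rng()):
        """Simulate one trajectory of the jump process x -> x + v[j] with rates a(j, x)."""
        t, x = t0, np.array(x0, dtype=int)
        times, states = [t], [x.copy()]
        while t < t_final:
            a = np.asarray(propensities(x), dtype=float)
            lam = a.sum()
            if lam <= 0.0:                      # no reaction can fire any more
                break
            t += rng.exponential(1.0 / lam)     # waiting time with rate lambda
            j = rng.choice(len(a), p=a / lam)   # reaction chosen proportionally to a(j, x)
            x = x + V[j]
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)

For instance, the single reaction A + B --> 3A of the earlier example would be encoded with V = np.array([[2, -1]]) and propensities = lambda x: [x[0] * x[1]].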

6.2. Chemical master equation. The differential equation that governs the temporal dynamics of the probability measures p_t can be deduced from the following considerations. Supposing that x(t) ∈ N^n is the state of the system at the time t ≥ t_0, the value of each propensity a(j, x(t)) ≥ 0 can be interpreted as follows: for every positive and small enough temporal increment δt ≥ 0,

a(j, x(t)) δt is the probability that the j-th chemical reaction happens in the time interval [t, t+δt).

This consideration dictates how the probability measure p_t evolves:

p_{t+δt}(y) = [ 1 - Σ_{j=1}^{m} a(j, y) δt ] p_t(y) + Σ_{j=1}^{m} p_t(y - v[j]) a(j, y - v[j]) δt

for every y ∈ N^n. The term between the square brackets in the above equation is the probability that no chemical reaction happens in the time interval [t, t+δt), while the sum on the right is the probability that some j-th reaction happens (beginning at the state y - v[j] and finishing at y). We now move some terms around, divide by δt, and calculate the limit when δt → 0; at the end we obtain the differential equation that governs the temporal dynamics of p_t:

dp_t(y)/dt = lim_{δt→0} [ p_{t+δt}(y) - p_t(y) ] / δt = Σ_{j=1}^{m} [ a(j, y - v[j]) p_t(y - v[j]) - a(j, y) p_t(y) ]

for every y ∈ N^n. The previous result is known as the chemical master equation, and it yields a way to calculate the exact value of the probability measure p_t(y) at every given time t ≥ t_0. We only need to solve the chemical master equation with an initial probability measure p_{t_0}. On the other hand, notice that the identities below hold for every index j = 1, 2, ..., m,

Σ_{y ∈ N^n} a(j, y - v[j]) p_t(y - v[j]) = Σ_{y ∈ N^n} a(j, y) p_t(y),

and so (d/dt) Σ_{y ∈ N^n} p_t(y) = 0.
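For small systems the chemical master equation is just a linear system of ordinary differential equations and can be integrated directly. The sketch below (an illustration, not part of the notes) does this for the single birth-and-death pair ∅ --a--> X, X --b--> ∅, truncating the state space at N molecules; it checks that the total probability is conserved and that p_t approaches the Poisson distribution with parameter a/b, the well-known stationary measure of this model.

    import numpy as np
    from scipy.integrate import solve_ivp
    from math import factorial

    a, b, N = 5.0, 1.0, 60            # illustrative rates; molecule count truncated at N

    def master_rhs(t, p):
        dp = np.zeros_like(p)
        for y in range(N + 1):
            # gain terms: reactions ending at y; loss terms: reactions leaving y
            if y >= 1:
                dp[y] += a * p[y - 1]              # birth into y from y - 1
            if y < N:
                dp[y] += b * (y + 1) * p[y + 1]    # death into y from y + 1
            dp[y] -= (a * (y < N) + b * y) * p[y]  # leave y (no birth out of the cut-off N)
        return dp

    p0 = np.zeros(N + 1); p0[0] = 1.0             # start with zero molecules
    sol = solve_ivp(master_rhs, (0.0, 20.0), p0, rtol=1e-8, atol=1e-10)
    p = sol.y[:, -1]
    poisson = np.array([np.exp(-a/b) * (a/b)**y / factorial(y) for y in range(N + 1)])
    print("total probability:", p.sum())
    print("maximal deviation from Poisson(a/b):", np.abs(p - poisson).max())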

6.3. Langevin equation. Every term a(j, y - v[j]) p_t(y - v[j]) can be approximated using a truncated Taylor series:

a(j, y - v[j]) p_t(y - v[j]) ≈ a(j, y) p_t(y) - Σ_{k=1}^{n} v_k[j] ∂[a(j, y) p_t(y)]/∂y_k + (1/2) Σ_{k=1}^{n} Σ_{l=1}^{n} v_k[j] v_l[j] ∂²[a(j, y) p_t(y)]/∂y_k ∂y_l.

These approximations are used to deduce the Fokker-Planck equation, a partial differential equation that also governs the temporal dynamics of the probability measure p_t:

∂p_t(y)/∂t = - Σ_{k=1}^{n} ∂/∂y_k [ p_t(y) Σ_{j=1}^{m} v_k[j] a(j, y) ] + (1/2) Σ_{k=1}^{n} Σ_{l=1}^{n} ∂²/∂y_k ∂y_l [ p_t(y) Σ_{j=1}^{m} v_k[j] v_l[j] a(j, y) ].

The probability measure p_t(y) can be calculated by solving or simulating the chemical master equation or the Fokker-Planck equation for a given time t ≥ t_0. Moreover, the Fokker-Planck equation can be translated into the associated Langevin equation; this is a classical procedure:

dx(t)/dt = Σ_{j=1}^{m} v[j] a(j, x(t)) + Σ_{j=1}^{m} v[j] √(a(j, x(t))) dW_j(t)/dt.

The derivatives dW_j(t)/dt in the equation above are non-correlated white-noise terms (derivatives of Brownian motions W_j). Notice that there is one white-noise term for each of the m chemical reactions. Moreover, the Langevin equation reduces to the deterministic chemical equation when the white-noise terms dW_j(t)/dt are removed. Observe that the differential equation

dx(t)/dt = Σ_{j=1}^{m} v[j] a(j, x(t))

represents the deterministic dynamical behaviour of the following system of chemical reactions:

x --a(j,x)--> x + v[j],  for j = 1, 2, ..., m.
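The Langevin equation can be simulated with the Euler-Maruyama scheme: over a small step δt each Brownian increment dW_j is drawn as a Gaussian of variance δt. The sketch below is illustrative (propensities and V follow the same convention as in the Gillespie sketch of subsection 6.1); dropping the dW term recovers the deterministic equation above.

    import numpy as np

    def euler_maruyama(x0, propensities, V, t_final, dt, rng=np.random.default_rng()):
        """Integrate dx = sum_j v[j] a_j(x) dt + sum_j v[j] sqrt(a_j(x)) dW_j."""
        x = np.array(x0, dtype=float)
        steps = int(t_final / dt)
        path = np.empty((steps + 1, len(x)))
        path[0] = x
        for k in range(steps):
            a = np.clip(np.asarray(propensities(x), dtype=float), 0.0, None)
            dW = rng.normal(0.0, np.sqrt(dt), size=len(a))   # one Brownian increment per reaction
            x = x + V.T @ (a * dt) + V.T @ (np.sqrt(a) * dW)
            path[k + 1] = x
        return path

The clipping of the propensities to non-negative values is a practical safeguard, since the Langevin approximation can momentarily drive molecule numbers slightly negative.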

7. Stochastic Lotka system

We come back to the original Lotka chemical model [4], but we introduce a new term to denote the direct degradation of the chemical species X:

∅ --a--> X,  X --b--> ∅,  X + Y --c--> 2Y,  Y --e--> ∅.

The associated chemical master equation is given by

dP(x, y; t)/dt = a P(x-1, y; t) - a P(x, y; t)
              + b(x+1) P(x+1, y; t) - bx P(x, y; t)
              + c(x+1)(y-1) P(x+1, y-1; t) - cxy P(x, y; t)
              + e(y+1) P(x, y+1; t) - ey P(x, y; t).

Hence the Langevin equation is

(7)  dx/dt = a - bx - cxy + Ẇ_1 √a - Ẇ_2 √(bx) - Ẇ_3 √(cxy),
(8)  dy/dt = cxy - ey + Ẇ_3 √(cxy) - Ẇ_4 √(ey),

where the Ẇ_k are four non-correlated white-noise terms. It is important to recall that the variables x and y denote the numbers of molecules of the respective chemical species X and Y, so that the dynamics of the concentrations [X] and [Y] can be calculated by dividing equations (7) and (8) by the volume of the chemical reactor.

The quite interesting property is that system (7)-(8) cannot produce sustained oscillations when the noise terms Ẇ_k are removed, but sustained oscillations are indeed induced for some particular sets of parameters (a, b, c, e) when the noise terms Ẇ_k are included. The real problem is to characterise for which sets of parameters (a, b, c, e) sustained oscillations are produced.
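A hedged simulation sketch of this phenomenon, following the Gillespie procedure of subsection 6.1 (the parameter values and the initial condition below are illustrative guesses, not taken from the notes; whether the oscillations are sustained for these particular values is precisely the kind of question raised above):

    import numpy as np

    rng = np.random.default_rng(7)
    a, b, c, e = 100.0, 0.05, 0.001, 0.5       # illustrative parameters only

    # Reactions of this section:  0 -> X,  X -> 0,  X + Y -> 2Y,  Y -> 0
    V = np.array([[+1, 0], [-1, 0], [-1, +1], [0, -1]])
    def propensities(z):
        x, y = z
        return np.array([a, b * x, c * x * y, e * y], dtype=float)

    t, z = 0.0, np.array([500, 150])           # start near the deterministic steady state
    times, ys = [t], [z[1]]
    while t < 100.0:
        props = propensities(z)
        lam = props.sum()
        if lam == 0.0:
            break
        t += rng.exponential(1.0 / lam)        # exponential waiting time
        j = rng.choice(4, p=props / lam)       # next reaction, proportional to its propensity
        z = z + V[j]
        times.append(t); ys.append(z[1])

    print("number of reaction events:", len(times) - 1)
    print("Y stays between", min(ys), "and", max(ys),
          "while the deterministic solution damps towards y =", a/e - b/c)

Plotting ys against times shows the irregular but persistent excursions of Y around its deterministic steady state, in contrast with the damped deterministic trajectory.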

References

[1] E.H. Armstrong. Operating Features of the Audion. Annals of the New York Academy of Sciences 27, issue 1.
[2] G.T. Fechner. Schweigg. J. Chem. Phys. 53 (1828).
[3] J.L. Hudson and T.T. Tsotsis. Electrochemical reaction dynamics: a review. Chem. Eng. Sci. 49 (1994), issue 10.
[4] A.J. Lotka. Contribution to the Theory of Periodic Reactions. J. Phys. Chem. 14 (1910), issue 3.


More information

Notes for Math 450 Stochastic Petri nets and reactions

Notes for Math 450 Stochastic Petri nets and reactions Notes for Math 450 Stochastic Petri nets and reactions Renato Feres Petri nets Petri nets are a special class of networks, introduced in 96 by Carl Adam Petri, that provide a convenient language and graphical

More information

TEST CODE: MIII (Objective type) 2010 SYLLABUS

TEST CODE: MIII (Objective type) 2010 SYLLABUS TEST CODE: MIII (Objective type) 200 SYLLABUS Algebra Permutations and combinations. Binomial theorem. Theory of equations. Inequalities. Complex numbers and De Moivre s theorem. Elementary set theory.

More information

TWO DIMENSIONAL FLOWS. Lecture 5: Limit Cycles and Bifurcations

TWO DIMENSIONAL FLOWS. Lecture 5: Limit Cycles and Bifurcations TWO DIMENSIONAL FLOWS Lecture 5: Limit Cycles and Bifurcations 5. Limit cycles A limit cycle is an isolated closed trajectory [ isolated means that neighbouring trajectories are not closed] Fig. 5.1.1

More information

1 Independent increments

1 Independent increments Tel Aviv University, 2008 Brownian motion 1 1 Independent increments 1a Three convolution semigroups........... 1 1b Independent increments.............. 2 1c Continuous time................... 3 1d Bad

More information

Simulation methods for stochastic models in chemistry

Simulation methods for stochastic models in chemistry Simulation methods for stochastic models in chemistry David F. Anderson anderson@math.wisc.edu Department of Mathematics University of Wisconsin - Madison SIAM: Barcelona June 4th, 21 Overview 1. Notation

More information

Lecture 20/Lab 21: Systems of Nonlinear ODEs

Lecture 20/Lab 21: Systems of Nonlinear ODEs Lecture 20/Lab 21: Systems of Nonlinear ODEs MAR514 Geoffrey Cowles Department of Fisheries Oceanography School for Marine Science and Technology University of Massachusetts-Dartmouth Coupled ODEs: Species

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

Stochastic model of mrna production

Stochastic model of mrna production Stochastic model of mrna production We assume that the number of mrna (m) of a gene can change either due to the production of a mrna by transcription of DNA (which occurs at a rate α) or due to degradation

More information

Lebesgue Measure. Dung Le 1

Lebesgue Measure. Dung Le 1 Lebesgue Measure Dung Le 1 1 Introduction How do we measure the size of a set in IR? Let s start with the simplest ones: intervals. Obviously, the natural candidate for a measure of an interval is its

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and

More information

Handbook of Stochastic Methods

Handbook of Stochastic Methods C. W. Gardiner Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences Third Edition With 30 Figures Springer Contents 1. A Historical Introduction 1 1.1 Motivation I 1.2 Some Historical

More information

Notes on Measure, Probability and Stochastic Processes. João Lopes Dias

Notes on Measure, Probability and Stochastic Processes. João Lopes Dias Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt

More information

Math 564 Homework 1. Solutions.

Math 564 Homework 1. Solutions. Math 564 Homework 1. Solutions. Problem 1. Prove Proposition 0.2.2. A guide to this problem: start with the open set S = (a, b), for example. First assume that a >, and show that the number a has the properties

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

Linear Algebra 1 Exam 2 Solutions 7/14/3

Linear Algebra 1 Exam 2 Solutions 7/14/3 Linear Algebra 1 Exam Solutions 7/14/3 Question 1 The line L has the symmetric equation: x 1 = y + 3 The line M has the parametric equation: = z 4. [x, y, z] = [ 4, 10, 5] + s[10, 7, ]. The line N is perpendicular

More information

The Banach-Tarski paradox

The Banach-Tarski paradox The Banach-Tarski paradox 1 Non-measurable sets In these notes I want to present a proof of the Banach-Tarski paradox, a consequence of the axiom of choice that shows us that a naive understanding of the

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

Mathematics 1EM/1ES/1FM/1FS Notes, weeks 18-23

Mathematics 1EM/1ES/1FM/1FS Notes, weeks 18-23 2 MATRICES Mathematics EM/ES/FM/FS Notes, weeks 8-2 Carl Dettmann, version May 2, 22 2 Matrices 2 Basic concepts See: AJ Sadler, DWS Thorning, Understanding Pure Mathematics, pp 59ff In mathematics, a

More information

Introduction to First Order Equations Sections

Introduction to First Order Equations Sections A B I L E N E C H R I S T I A N U N I V E R S I T Y Department of Mathematics Introduction to First Order Equations Sections 2.1-2.3 Dr. John Ehrke Department of Mathematics Fall 2012 Course Goals The

More information

Efficient Leaping Methods for Stochastic Chemical Systems

Efficient Leaping Methods for Stochastic Chemical Systems Efficient Leaping Methods for Stochastic Chemical Systems Ioana Cipcigan Muruhan Rathinam November 18, 28 Abstract. Well stirred chemical reaction systems which involve small numbers of molecules for some

More information

Measures and Measure Spaces

Measures and Measure Spaces Chapter 2 Measures and Measure Spaces In summarizing the flaws of the Riemann integral we can focus on two main points: 1) Many nice functions are not Riemann integrable. 2) The Riemann integral does not

More information